
A new version of "Structure of the Global Catastrophe", 2016 revision, draft.


Most ideas are the same, but some maps are added.

How to prevent
existential risks
From full list to complete prevention plan

(Draft)

Copyright 2016 Alexey Turchin.

Contents
Preface
About this book
What are human extinction risks?
Part 1. General considerations.
Chapter 1. Types and nature of global catastrophic risks
Main open question about x-risks: before or after 2050?
The difference between a global catastrophe and existential risk
Smaller catastrophes as steps to possible human extinction
One-factorial scenarios of global catastrophe
Principles of classification of global risks
Precautionary principle for existential risks
X-risks and other human values
Global catastrophic risks, human extinction risks and existential risks
Chapter 2. Problems of probability calculation
Problems of calculation of probabilities of various scenarios
Quantitative estimates of the probability of the global catastrophe, given by various authors
The model of the future
Chapter 3. History of the research on global risk
Current situation with global risks
Part 2. Typology of x-risks
Chapter 4. The map of all known global risks
Block 1. Natural risks
Chapter 5. The risks connected with natural catastrophes
Universal catastrophes
Geological catastrophes
Eruptions of supervolcanoes
Falling of asteroids
Asteroid threats in the context of technological development
Zone of destruction depending on the force of the explosion
Solar flares and luminosity increase
Gamma-ray bursts
Supernova stars
Super-tsunami
Super-earthquake
Polarity reversal of the magnetic field of the Earth
Emergence of new illnesses in nature
Debunked and false risks from the media, science fiction, fringe science and old theories
Block 2. Anthropogenic risks
Chapter 6. Global warming
Chapter 7. The anthropogenic risks which are not connected with new technologies
Chapter 8. Artificial triggering of natural catastrophes
Chapter 9. Nuclear weapons
The evolution of scientific opinion on nuclear winter
Nuclear war and human extinction
More exotic nuclear scenarios
Nuclear space weapons
Nuclear attack on nuclear power stations
Cheap nukes and new ways of enrichment
Estimating the probability of nuclear war causing human extinction
Near misses
The map of x-risks connected with nuclear weapons
Chapter 10. Global chemical contamination
Chapter 11. Space weapons
Latest developments
Block 4. Super-technologies: nanotech and biotech
Chapter 12. Biological weapons
The map of biorisks
Chapter 13. Superdrug
Design features
Chapter 14. Nanotechnology and robotics
The map of nanorisks
Chapter 15. Space weapons
Chapter 16. Artificial Intelligence
Current state of AI risks, 2016
Distributed agent net as a basis for the hyperbolic law of acceleration and its implications for AI timing and x-risks
The map of AI failure modes and levels
AI safety in the age of neural networks and Stanislaw Lem's 1959 prediction
The map of x-risks prevention
Estimation of the timing of AI risk
Doomsday argument in estimating AI arrival timing
Several interesting ideas of AI control
Chapter 17. The risks of SETI
Latest developments in 2016
Chapter 18. The risks connected with blurring the borders between humans and transhumans
The risks connected with the problem of the "philosophical zombie"
Chapter 19. The causes of catastrophes unknown to us now
Phil Torres' article about unknown unknowns (UU)
List of possible types of unknown unknowns
Unexplained aerial phenomena as global risks
Part 3. Different factors influencing the global risk landscape
Chapter 20. Ways of detecting one-factorial scenarios of global catastrophe
The general signs of any dangerous agent
Chapter 19. Multifactorial scenarios
Integration of the various technologies, creating situations of risk
Double scenarios
Studying global catastrophes by means of models and analogies
Inevitability of the achievement of a steady condition
Recurrent risks
Global risks and the problem of the rate of their increase
Comparative force of different dangerous technologies
Sequence of appearance of various technologies in time
Comparison of various technological risks
The price of the question of x-risks prevention
The universal cause of the extinction of civilizations
Does the collapse of technological civilization mean human extinction?
Chapter. Agents which could start x-risks
The social groups willing to risk the destiny of the planet
Humans as a main factor in global risks, as a coefficient in risk assessment
Decision-making about nuclear attack
Chapter 21. The events changing the probability of global catastrophe
Definition and the general reasons
Events which can open a vulnerability window
System crises
Crisis of crises
Technological Singularity
Overshooting leads to simultaneous exhaustion of all resources
System crisis and technological risks
Chapter 21. Cryptowars, arms race and other scenario factors raising the probability of global catastrophe
Cryptowar
Chapter 22. The factors influencing the speed of progress
Global risks of the third sort
Moore's law
Chapter 23. X-risks prevention
The general notion of preventable global risks
Introduction
The problem
The context
In fact, we don't have a good plan
Overview of the map
The procedure for implementing the plans
The probability of success of the plans
Steps
Plan A. Prevent the catastrophe
Plan A1. Super UN or international control system
A1.1 Step 1: Research
A1.1 Step 2: Social support
Reactive and proactive approaches
A1.1 Step 3: International cooperation
Practical steps to confront certain risks
1.1 Risk control
Elimination of certain risks
A1.1 Step 4: Second level of defense on high-tech level: worldwide risk prevention authority
Planetary unification war
Active shields
Step 5: Reaching indestructibility of civilization with negligible annual probability of global catastrophe: Singleton
Plan A1.2 Decentralized risk monitoring
A1.2.1 Values transformation
Ideological payload of new technologies
A1.2.2 Improving human intelligence and morality
Intelligence
A1.2.3 Cold War, local nuclear wars and WW3 prevention
A1.2.4 Decentralized risk monitoring
Plan A2. Creating Friendly AI
A2.1 Study and promotion
A2.2 Solid Friendly AI theory
A2.3 AI practical studies
Seed AI
Superintelligent AI
Unfriendly AI
Plan A3. Improving resilience
A3.1 Improving sustainability of civilization
A3.2 Useful ideas to limit the scale of catastrophe
A3.3 High-speed tech development needed to quickly pass the risk window
A3.4 Timely achievement of immortality on the highest possible level
AI based on uploading of its creator
Plan A4. Space colonization
A4.1 Temporary asylums in space
A4.2 Space colonies near the Earth
Colonization of the Solar System
A4.3 Interstellar travel
Interstellar distributed humanity
Plan B. Survive the catastrophe
B1. Preparation
B2. Buildings
Natural refuges
B3. Readiness
B4. Miniaturization for survival and invincibility
B5. Rebuilding civilization after catastrophe
Reboot of civilization
Plan C. Leave backups
C1. Time capsules with information
C2. Messages to ET civilizations
C3. Preservation of earthly life
C4. Robot-replicators in space
Resurrection by another civilization
Plan D. Improbable ideas
D1. Saved by non-human intelligence
D2. Strange strategy to escape the Fermi paradox
D4. Technological precognition
D5. Manipulation of the extinction probability using the Doomsday argument
D6. Control of the simulation (if we are in it)
Bad plans
Prevent x-risk research because it only increases risk
Controlled regression
Depopulation
Computerized totalitarian control
Choosing the way of extinction: UFAI
Attracting a good outcome by positive thinking
Conclusion
Literature
Active shields
Existing and future shields
Conscious stop of technological progress
Means of preventive strike
Removal of sources of risks to a considerable distance from the Earth
Creation of independent settlements in the remote corners of the Earth
Creation of the file on global risks and growth of public understanding of the problems connected with them
Refuges and bunkers
Quick spreading in space
All somehow will manage itself
Degradation of the civilization to the level of a steady condition
Prevention of one catastrophe by means of another
Advance evolution of man
Possible role of the international organizations in prevention of global catastrophe
Infinity of the Universe and the question of the irreversibility of human extinction
Assumptions that we live in the "Matrix"
Global catastrophes and society organization
Global catastrophes and the current situation in the world
The world after global catastrophe
The world without global catastrophe: the best realistic variant of prevention of global catastrophes
Maximizing pleasure if catastrophe is inevitable
Chapter 24. Indirect ways of estimating the probability of global catastrophe
Chapter 25. The most probable scenario of global catastrophe
Part 5. Cognitive biases affecting judgments of global risks
Chapter 1. General Remarks: Cognitive Biases and Global Catastrophic Risks
Chapter. Meta-biases
Chapter 2. Cognitive biases concerning global catastrophic risks
Chapter 3. How cognitive biases in general influence estimates of global risks
Chapter 4. The universal logical errors able to be shown in reasoning on global risks
Chapter 5. Specific errors arising in discussions about the danger of uncontrollable development of Artificial Intelligence
Chapter 7. Conclusions from the analysis of cognitive biases in the estimate of global risks and possible rules for rather effective estimate of global risks
The conclusion. Prospects of prevention of global catastrophes
What is AI? MA part
Artificial Intelligence Today
Projects in Artificial General Intelligence
Whole Brain Emulation
Ensuring the Continuation of Moore's Law
The Software of Artificial General Intelligence
Features of Superhuman Intelligence
Seed Artificial Intelligence
From Virtuality to Physicality
The Yudkowsky-Omohundro Thesis of AI Risk
Friendly AI
Stages of AI Risk
AI Forecasting
Broader Risks
Preventing AI Risk and AI Risk Sources
AI Self-Improvement and Diminishing Returns Discussion
Philosophical Failures in Advanced AI, Failures of Friendliness
Impact and Conclusions
Frequently Asked Questions on AI Risk
Chapter. Collective biases and errors

Preface
Existential risk: one where an adverse outcome would either
annihilate Earth-originating intelligent life or permanently and
drastically curtail its potential.
N. Bostrom. Existential Risks: Analyzing Human
Extinction Scenarios and Related Hazards

About this book


This book has developed from an encyclopedia of global catastrophic risks into a roadmap of risk prevention.

What are human extinction risks?


In the 20th century the possibility of the extinction of humankind was associated almost entirely with the threat of nuclear war. Now, at the beginning of the 21st century, we can easily name more than ten different sources of possible irreversible global catastrophe, mostly deriving from novel technologies, and the number of sources of risk constantly grows. Research on global risk is widely neglected. Problems such as the potential exhaustion of oil, the future of the Chinese economy, or outer space exploration get more attention than irreversible global catastrophes. It seems senseless to discuss the future of human civilization before we have an intelligent estimate of its chances of survival. Even if as a result of such research we learn that the risk is negligibly small, it is important to study the question. Preliminary studies by experts do not give encouraging results. Sir Martin Rees, formerly British Astronomer Royal and author of a book on global risks, estimates mankind's chances of survival to 2100 at only 50%.
This book is devoted to a systematic review of "threats to existence", that is, risks of the irreversible destruction of all human civilization and the extinction of mankind. The purpose of this book is to give a wide review of this theme. However, many of the claims in it are debatable. Our goal is not to give definitive answers, but to encourage measured thought in the reader and to prepare the ground for further discussions. Many of the hypotheses stated here might seem unduly radical. However, in discussing them we were guided by a precautionary principle which directs us to consider worst realistic scenarios as a matter of caution. The point is not to fantasize about doom for its own sake, but to give the worst scenarios the attention they deserve from a straightforward utilitarian moral perspective.
This volume contains the monograph Risks of Human Extinction. The monograph consists of two parts: a study of concrete threats and a methodology of analysis. The analysis of concrete threats in the first part consists of a detailed list with references to sources and critical analysis. Then the systemic effects of the interaction of different risks are investigated, and probability estimates are assessed. Finally, we suggest a roadmap for the prevention of existential risks.
The methodology offered in the second part consists of a critical analysis of the ability of humans to intelligently estimate the probability of global risks. It may be used, with little change, in other systematic attempts to assess the probability of uncertain future events.
Though only a few books with general reviews of the problem of global risks have been published in the world, a certain tradition has already formed. It consists of a discussion of methodology, a classification of possible risks, estimates of their probability, ways of ameliorating those risks, and then a review of further philosophical issues related to global risks, such as the Doomsday argument, which will be introduced shortly. The current main books on global catastrophic risks are: J. Leslie's The End of the World: The Science and Ethics of Human Extinction (1996), Sir Martin Rees' Our Final Hour (2003), Richard Posner's Catastrophe: Risk and Response (2004), and the volume edited by Nick Bostrom, Global Catastrophic Risks (2008).
This book differs considerably from previous books on the topic of global risks. First of all, we review a broader set of risks than prior works. For example, an article by Eliezer Yudkowsky lists ten cognitive biases affecting our judgment of global risks; in our book, we address more than 100 such biases. In the section devoted to the classification of risks, we mention some risks which are entirely missing from previous works. I have aspired to create a systematic view of the problem of global risks, which allows us not only to list various risks and understand them, but also to see how different risks, influencing one another, form an interlocked structure.
I will use the terms global catastrophe, x-risk, existential risk and human extinction as synonyms designating the total and irreversible die-off of Homo sapiens.
The main conclusion of the book is that the chance of human extinction in the 21st century is around 50 per cent, but it could be lowered by an order of magnitude if all the needed actions are taken.
All information used in the analysis is taken from the open sources listed in the bibliography.


Part 1. General considerations.

Chapter 1. Types and nature of global catastrophic risks

Main open question about x-risks: before or after 2050?

Robin Hanson has written a lot about two modes of thinking that people usually display: near mode and far mode. People have very different attitudes to things that are happening now and to things that may happen in the distant future. For example, if there is a fire in the house, everyone will try to escape, but if the question under discussion is whether humanity should live forever, many nice people will say that they are OK with human extinction.
Even inside the discussion of x-risks we can easily distinguish two approaches. Two main opinions about timing exist: decades or centuries. Either the catastrophe will happen in the next 15-30 years, or in the next couple of centuries.
If we take into account the many predictions of continuing exponential or even hyperbolic development of new technologies, we should conclude that superhuman AI and the ability to create super-deadly biological viruses will be ready between 2030 (Vinge) and 2045 (Kurzweil). We write this text in 2014, so that is just 15-30 years from now. Predictions about runaway global warming, limits of growth, peak oil and some versions of the Doomsday argument are also centered around the years 2030-2050.
If it is 2030, or even earlier, not much can be done to prepare. Later dates leave more room for possible action. But the main problem is not that the risks are so large, but that society, governments and research communities are completely unprepared and unwilling to deal with them.
But if we take a one-hundred-year risk timeframe, we (as authors) gain some advantages. We signal that we are more respectable and conservative. It will almost never be proved during our lifetime that we are wrong. We have a better chance of being right simply because we use a larger timeframe. We have plenty of time to implement some defense measures, or in fact to assume that such measures would be implemented in the remote future (they will not). We may also think that we are correcting an overoptimistic bias: it is well known that predictions about AI used to be overoptimistic.
The difference between the two timeframe predictions is like the difference between two predictions about the future of a man: one claims that he will die within the next 50 years, the other that he will have cancer within the next 5 years. The first brings almost no new information; the second is an urgent message which could be false and carry high costs. But urgent messages also tend to attract more attention, which could bias sensationalist authors. More scientific authors tend to be more careful, try to distinguish themselves from sensationalists, and so give predictions over longer time periods.
A good example here is E. Yudkowsky, who in 2001 claimed that super-exponential growth with ever-shorter doubling periods was possible, with superintelligent AI around 2005. After this prediction failed, he and his community LessWrong became biased toward estimates of around 100 years until superintelligent AI.
So the question of whether technologies will continue their exponential growth is equivalent to the question of the timescale of global catastrophe. If they do continue to grow exponentially, then the global catastrophe will either happen within the nearest decades or be permanently prevented.
Let us take a closer look at both scenarios. Arguments for the decades scenario:
1. Because of NBI convergence, advanced nanotech, biotech and AI will appear almost
simultaneously.
2. New technologies will grow exponentially with a doubling time of approximately 2 years and
their risks will grow with a similar or even greater speed.
3. New technologies will interact with each other, as any smaller catastrophe could lead to a bigger one. For example, global warming will lead to a fight for resources and nuclear war, and this nuclear war will result in the release of dangerous biological weapons. This may be called oscillations near the Singularity.
4. Several possible triggers of x-risks could occur in the near future: a world war and especially a new arms race, peak oil, and runaway global warming.
There are several compelling arguments for the centuries scenario:
1. Most predictions about AI have been premature. The majority of doomsday prophecies have also been proven false.
2. Exponential growth will level off. Moore's law may come to an end in the near future.
3. The most likely x-risks could be caused by an independent accidental event of unknown origin, and not by a complex interaction of known things.
4. There will be no cascading chain reaction of destructive events.
5. Long-term predictions look more scientific in the public view, thus improving the reputation of the field of x-risk research and ultimately helping to prevent x-risks.
The decades scenario is worse because it is sooner: we have less time to prepare, in fact no time, knowing how little has been done so far; because its catastrophic scenarios are more complex; and because it implies a shorter expected personal and civilizational lifetime.
One of the main factors in the timeframe of x-risks is our assessment of when full AI capable of self-improvement will be created. Several articles have tried to address this question from different points of view, using modeling, extrapolation and expert surveys; see, for example: http://sethbaum.com/ac/2011_AI-Experts.html. We will address this question again in the AI chapter, but the fact is that nobody knows for sure, and we should use a very vague prior to cover all the different estimates. Bostrom claims that AI will either be created before the beginning of the 22nd century or it will be proved that some hard obstacles exist, and this vaguest of estimates seems to be true.
We need to search for effective modes of action to prevent x-risks. We need to create social demand for preventing existential risks, as with, for example, the fight against nuclear war in the 1980s. We need political parties that consider the prevention of existential risks, as well as life extension, to be the main goals of society.
I would like to draw attention to the investigation of non-scientific social and media factors in the discussion of global catastrophe. Much attention is concentrated on scientific research, with popular reception and reaction seen as a secondary factor. However, the value of this secondary factor should not be underestimated, as it can have a huge effect on the way in which a global catastrophe might be met and, eventually, averted. An example is the nuclear disarmament of the 1980s that followed global antinuclear protests.
The main difference between the two scenarios is that the first would happen during our expected lifetime, while the second most likely will not touch us personally. So the second is hypothetical and not urgent. The border between the two scenarios is around 2050.
Claims that a global catastrophe may happen only after 2050 make it look like an insignificant problem and preclude any real efforts to prevent it.
The human mind and society are built in such a way that few questions about the remote future interest us, with several important exceptions. One is the safety of buildings, and another is our interest in the wellbeing and longevity of our children and grandchildren. But these questions have existed for centuries, and our culture has adapted to build strong buildings and invest in children. Global risks are a new problem, and no such adaptation has happened. More about cognition of global risks is said in the second part of the book.
The difference between a global catastrophe and existential risk
Any global catastrophe will affect the entire surface of the Earth and all of its inhabitants, though not all of them will perish. From the viewpoint of the personal history of any individual, there is no big difference between a global catastrophe and total extinction: in both cases, he will most likely die. But from the viewpoint of human civilization, the difference is enormous: it will either end forever or be transformed and continue on a new path.
Bostrom suggested expanding the term "existential risks" to include events that threaten human civilization with irreversible damage, such as a half-hostile artificial intelligence that evolves in a direction completely opposite to the current values of humanity, or a worldwide totalitarian government that forever stops progress.
But perpetual worldwide totalitarianism is impossible: it will either lead to the extinction of civilization,
maybe in several million years, or smoothly evolve into a new form.
However, it is still possible to include many other things under the category of existential risks: we have indeed lost many things over the course of history, such as dead languages, extinct artistic styles, ancient philosophy and so on.
The real dichotomy lies between complete extinction and a simple global catastrophe. Extinction means the complete destruction of mankind and the cessation of history. A global catastrophe could destroy 90% of the population, but only slow down the course of history by 100 years.
The difference here, rather, is the value of the continuation of human history and the value of future
generations, which for most people is extremely speculative. This probably presents one of the
reasons for ignoring the risks of complete extinction.

Smaller catastrophes as steps to possible human extinction


Though in this book we investigate global catastrophes which can lead to human extinction, it is easy to notice that the same catastrophes on a smaller scale may not destroy mankind, but still damage it greatly.
For example, a global virus pandemic could either completely destroy humanity or kill only a part of it. In the second case the level of civilization would decline to some extent, and afterwards either further extinction or restoration of civilization is possible. Therefore the same class of catastrophes can be either the direct cause of human extinction or a factor which opens a window of vulnerability for subsequent catastrophes. For example, after a pandemic, civilization would be more vulnerable to the next pandemic or famine, or would be unable to prevent a collision with an asteroid.
In 2007 Robin Hanson published the article "Catastrophe, Social Collapse, and Human Extinction", in which he used a simple mathematical model to estimate how variance in resilience would change the probability of extinction in the aftermath of a non-total catastrophe. He wrote: "For many types of disasters, severity seems to follow a power law distribution. For some types, such as wars and earthquakes, most of the expected harm is predicted to occur in extreme events, which kill most people on Earth. So if we are willing to worry about any war or earthquake, we should worry especially about extreme versions. If individuals varied little in their resistance to such disruptions, events a little stronger than extreme ones would eliminate humanity, and our only hope would be to prevent such events. If individuals vary a lot in their resistance, however, then it may pay to increase the variance in such resistance, such as by creating special sanctuaries from which the few remaining humans could rebuild society."
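To make this concrete, here is a minimal numerical sketch in Python of the kind of reasoning Hanson describes. It is not his actual model; the population size, the power-law exponent and the resistance distribution are arbitrary assumptions chosen only for illustration.

import random

def extinction_rate(resistance_sigma, trials=1000, population=1000, alpha=1.5):
    # Fraction of simulated disasters after which fewer than two people remain.
    extinct = 0
    for _ in range(trials):
        severity = random.paretovariate(alpha)  # heavy-tailed (power-law) disaster severity
        survivors = sum(
            1 for _ in range(population)
            if random.lognormvariate(0, resistance_sigma) * 10 > severity
        )
        if survivors < 2:
            extinct += 1
    return extinct / trials

print("low variance in resistance :", extinction_rate(0.1))
print("high variance in resistance:", extinction_rate(1.0))

With these assumptions the high-variance population survives extreme events far more often, which is exactly the argument for increasing the variance in resistance, for example by building sanctuaries.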
Depending on the size of the catastrophe, there can be various degrees of damage, which can be characterized by different probabilities of subsequent extinction and of further recovery. It is possible to imagine several semi-stable levels of degradation:
1. New Middle Ages: destruction of the social system, similar to the downfall of the Roman Empire but on a global scale. It means a halt in the development of technologies, reduced connectivity and a population fall of several percent; however, some essential technologies continue to develop successfully, just as some kinds of agriculture continued to develop in the early Middle Ages. The manufacture and use of dangerous weaponry will also continue, which could lead to total extinction or to degradation to an even lower level during the next war. The degradation of Easter Island during its internal war is a clear example. But a return to the previous level is rather probable.
2. Postapocalyptic world: considerable degradation of the economy, loss of statehood and the disintegration of society into small fighting kingdoms. To some extent, present-day Somalia could serve as an example. The basic forms of activity are robbery and digging in ruins. This is probably what society would look like after a large-scale nuclear war. The population is reduced many times over, but, nevertheless, millions of people survive. The reproduction of technologies will stop, but separate carriers of knowledge and libraries will remain. Such a world could be united in the hands of one ruler, and the revival of the state would begin. Further degradation could occur as a result of epidemics, pollution of the environment, excess weaponry stored from the previous epoch, etc.

3. Survivors: a catastrophe in which only separate small groups of people survive which are not connected to each other: polar explorers, crews of ships at sea, inhabitants of bunkers. On the one hand, small groups are in a more favorable position than in the previous case, as there is no struggle between groups of people. On the other hand, the forces which led to a catastrophe of such scale are very great and, most likely, continue to operate and to limit the freedom of movement of people from the surviving groups. These groups will be compelled to struggle for their lives. The restoration period under the most favorable circumstances will take hundreds of years and will involve a change of generations, with loss of knowledge and skills. The ability to continue sexual reproduction will be the basis of survival of such groups. Hanson estimated that the minimal group of healthy individuals which may survive is around 100 people; this does not count the injured, the elderly and parentless young children who could survive the initial catastrophe but would not contribute to the future survival of the group.
4. Last people on Earth: only a few people survive on Earth, but they are capable neither of preserving knowledge nor of giving rise to a new mankind. In this case people, most likely, are doomed.
5. Bunker people: it is also possible to designate a "bunker" level, that is, a level at which only those people survive who are outside the usual environment. Groups of people would survive in certain closed spaces, either accidentally or by design. A conscious transition to the bunker level is possible even without loss of quality if mankind keeps the ability to develop technologies further.
Intermediate scenarios of the post-apocalyptic world are also possible, but I believe that the listed variants are the most typical. Each step down means a bigger chance of falling even lower and less chance of rising. On the other hand, a long-term island of stability is possible at the level of separate tribal communities, once dangerous technologies have already collapsed, the dangerous consequences of their application have disappeared, and new technologies have not yet been created and cannot be created, just as different species of Homo lived in tribes for millions of years before the Neolithic. But that stability relied on a lower level of intelligence as a stabilizing factor, and on weaker Darwinian pressure to increase intelligence again.
It is thus incorrect to think that degradation is simply a switch of historical time a century or a millennium into the past, for example, to the level of society of the 19th or 15th centuries. The degradation of technologies will not be linear and simultaneous. For example, it will be difficult to forget such a thing as the Kalashnikov rifle; in Afghanistan, for instance, locals have learned to make rough copies of Kalashnikovs. But in a society where there are automatic weapons, knightly tournaments and cavalry armies are impossible. What was a stable equilibrium on the way from the past to the future cannot be an equilibrium condition on the path of degradation. In other words, if technologies of destruction degrade more slowly than technologies of creation, society is doomed to a continuous slide downwards.
However, we can classify the degree of degradation not by the number of victims but by the degree of loss of knowledge and technologies. In this sense it is possible to use historical analogies, understanding, however, that the loss of technologies will not be linear. Maintaining social stability at a lower level of evolution demands a smaller number of people, and such a level is more stable against both progress and decline. Such communities can arise only after a long period of stabilization following the catastrophe.
As to "chronology", the following basic variants of regression into the past (partly similar to the previous classification) are possible:
1. Industrial production level: railways, coal, firearms, etc. Self-maintenance at this level demands, possibly, tens of millions of humans. In this case it is possible to expect the preservation of all the basic knowledge and skills of an industrial society, at least by means of books.
2. A level sufficient for maintaining agriculture. It demands, probably, from thousands to millions of people.
3. The level of a small group, like a hunter-gatherer society.
We could also measure smaller catastrophes by the time by which they delay technical progress and by the probability that humanity will recover at all.
Technological Level of Catastrophe and "Periodic System" of possible disasters
Possible disasters can be classified in terms of the contribution of humans and of technology to their causes. These disasters can also be referred to the period of history in which they are most typical, and a total estimate of the probability of each type of disaster for the 21st century can be given. In this view it turns out that the more technological a type of disaster is, the higher its probability. In addition, disasters can be classified according to their possible causes, in the sense of how they cause the death of people (explosion, replication, poisoning or infectious disease). On the basis of this I made a sort of "periodic system" which outlines the possible global risks: a large map, which is on the site immortality-roadmap.com, along with a map of ways to prevent global risks and a map of how to achieve immortality.
1) Natural. These are disasters which have nothing to do with human activity and can occur on their own. They include falling asteroids, supernova explosions, and so on. Their likelihood is relatively small, on the scale of tens of millions of years, but perhaps we seriously underestimate them because of the effects of observational selection.
2) Anthropogenic. These are natural processes caused by human activities. The first examples are global warming and resource depletion. There are also more exotic options, such as the artificial triggering of natural catastrophes using atomic bombs. The main thing is that a human action sets off a natural process.
3) Risks of existing technologies. These primarily concern atomic weapons, as well as conventional biological weapons made from already existing agents: influenza, smallpox, anthrax.
4) Expected breakthrough technologies. These are, first of all, nanotech and biotech, i.e. the creation of microscopic robots and the creation of entirely new biological organisms through genetic engineering and synthetic biology. They can be called super-technologies, because in the end they will give power over non-living and living matter.
5) Artificial intelligence of superhuman level, as the ultimate technology. Although it may be created in about the same period as nanotech, due to its potential ability to self-improve it is able to transcend or create any other technology.
6) Space-related and hypothetical risks that we will face in the distant future as civilization expands into the universe.
Various possible disasters differ in their destructive power. Some could be withstood relatively easily; resisting others is practically impossible. It is harder to resist disasters that have greater speed, more power, more penetrating force and, most importantly, an agent with greater intelligence, as well as those which are more likely, more sudden and harder to predict, which makes it difficult to prepare for them. Therefore man-made disasters are generally more dangerous than natural ones, and the more technological they are, the more dangerous they become. At the top of this pyramid of disasters are the super-technology disasters, and their king is hostile artificial intelligence: the catastrophe at the top of the pyramid is both the most likely and the most destructive.

One-factorial scenarios of global catastrophe


In the next several chapters we will consider the classic point of view on global catastrophes, which consists of individual disasters, each of which might lead to the destruction of all mankind. Clearly this description is incomplete, because it does not consider multifactorial and long-decline scenarios of global catastrophe. A classic example of the consideration of one-factorial scenarios is Nick Bostrom's article "Existential Risks".
Here we will also consider some sources of global risks which, from the point of view of the author, are not real global risks, but whose danger is exaggerated in the public view, and so we will examine them. In other words, we will consider all events which are usually regarded as global risks, even if we later reject them.
The next step is the study of two-factor scenarios of existential risks, which Seth Baum recently undertook in his article "Double Catastrophe: Intermittent Stratospheric Geoengineering Induced by Societal Collapse" (http://sethbaum.com/ac/2013_DoubleCatastrophe.pdf). In it he studied a hypothetical situation in which an anti-global-warming geoengineering program is interrupted by a social collapse, which leads to a rapid rise of global temperatures.
There are many possible pairs of such double risks, and chains could be built from such pairs. For example: a nuclear war leads to the accidental release of bioweapons.

Principles of classification of global risks


The method of classification of global risks is extremely important because, like the periodic table of elements, it allows us to find empty spots and to predict the existence of new threats. It also opens the possibility of understanding our own methodology and of offering principles by which new risks may be assessed.
The most obvious approach to establishing possible sources of global risks is the historiographical method. It consists in the analysis of all accessible scientific literature on the theme, first of all the survey works on global risks that have already been carried out. One could also scan existing hard science fiction for interesting ideas of human extinction.
The method of extrapolation of small catastrophes consists in finding small types of catastrophes and analyzing whether a similar event could occur on a much larger scale. For example, if we see small meteors falling on Earth, we could ask: is it possible that a large enough asteroid would wipe out all life on Earth?
The method of upgrading global events is based on the idea that any global catastrophe should have global reach. So we should consider any events that could influence the entire surface of the Earth and see if they could become deadly. For example, is it possible that a sufficiently large tidal wave in the oceans would destroy all life on the continents if a heavy celestial body flew nearby?
The method of upgrading causes of death consists in considering one particular cause of human death and asking whether this cause of death could become global. For example, some people die of anaphylactic shock. Is it possible that some bioengineered pollen could cause a global deadly allergy? Probably not.
The method of extrapolating causes of extinction from other species: 99 per cent of the species that ever existed have vanished, and many continue to do so. They go extinct for different reasons, and by analyzing these reasons we can hypothesize that our species could go the same way. For example, in the 20th century the dominant cultivar of the food banana was destroyed by a fungal disease; now we are eating a different cultivar, and fungi are also known to have completely killed off other species. So is it possible to genetically engineer a human-killing fungus? I do not know the answer to this question.

The paleontological method consists in the analysis of previous mass extinctions in Earth's history, such as the Permian-Triassic extinction, which wiped out the overwhelming majority of species on Earth. Could it happen again, and with greater force?
Finally, the devil's advocate method consists in the deliberate design of extinction scenarios, as though our purpose were to destroy the Earth.
Each extinction risk can be described by the following criteria:
whether it is anthropogenic or natural;
its total probability;
whether the needed technologies already exist;
how far it is from us in time;
how we could defend ourselves against it.

Natural risks can happen to any species of living beings. Technological risks are not quite identical to anthropogenic risks, since overpopulation and the exhaustion of resources are anthropogenic but not technological. The basic sign of technological risks is that they are unique to a technological civilization.
There is also a division into proved and unproved existential risks:
events which we considered as possible x-risks and decided are not;
events about which we cannot now say definitely whether they are risky or not;
events about which we have a good scientific basis to say that they are risky.
But the biggest group here consists of events which may be risky based on some consideration that seems to be true but cannot be proved as a matter of fact, and which because of that are questioned by a lot of skeptics.
It is possible to distinguish four categories of technological risks:
Risks for which the technology is completely developed or demands only slight improvement. This includes nuclear warfare.
Risks whose technologies are under development and for which there are no visible theoretical obstacles in the foreseeable future (e.g. biotechnology).
Risks whose technologies are in accordance with the known laws of physics, but for which large practical and theoretical obstacles need to be overcome, that is, nanotechnology and AI.
Risks which demand new physical discoveries for their appearance. A large part of the global risks of the 20th century arose from discoveries that were essentially new and unexpected at the time; I mean nuclear weapons.
The list of global risks in the following chapters is sorted by the degree of readiness of the technologies necessary for them.

Precautionary principle for existential risks


The classical interpretation of the precautionary principle is that it concerns who must prove safety: "The precautionary principle or precautionary approach states that if an action or policy has a suspected risk of causing harm to the public or to the environment, in the absence of scientific consensus that the action or policy is not harmful, the burden of proof that it is not harmful falls on those taking an action" (Wikipedia).

In our reasoning we will use our own version of the precautionary principle: something is an existential risk until the opposite is proved. "Something" here is a hypothesis or a crazy idea. And the burden of proof is not on the one who suggests the crazy idea, but on the specialist in existential risks. For example, if someone asks whether the Earth's core could explode, we should calculate all possible scenarios in which it might happen and estimate their probabilities.

X-risks and other human values


To prevent x-risks we need to value x-risk prevention. This value probably consists of the value of the lives of all people living now, plus the value of future generations, plus the value of the whole of human history. If we value only the people who live now, we should concentrate on preventing their death and aging.

The main thing which makes x-risks unique is the value of future generations. Yet most people assign very small value to future generations, especially as a practical value expressed in real actions rather than a claimed value, and this translates into marginal interest in remote x-risks.
Maybe we should make a stronger connection between existing values and x-risks if we want real action against them. People were afraid of nuclear catastrophe because it was able to affect them at that moment, so they thought about it in "near mode" and were ready to protest against it. People also give a lot of value to their children and grandchildren, but almost zero value to the sixth generation after them. One article was titled "Let's prevent AI from killing our children", and this seems to be a good way to connect to existing values.
Global catastrophic risks, human extinction risks and existential risks

Global catastrophic risks (GCR) have been defined as risks of a catastrophe in which 90 per cent of humanity would die. That would most likely include me and the reader. From a personal point of view there is not much difference between human extinction and a global catastrophe. The main difference is the future of humanity.
The main question is this: with what probability will a GCR result in human extinction? 700 million people is still a lot, but a scavenging economy, remaining nukes and other weapons, worldwide guerrilla war, epidemics, AIDS, global warming and the depletion of easily accessible resources could result in constant decline and even in human extinction. I will address the question later, but I think it is safe to estimate that a GCR is equivalent to a 1 per cent chance of human extinction. This will help us to unite two fields of research, one of which is much more established while the other is more important.
Phil Torres gave a justified criticism of Bostrom's term "existential risks" (Problems with Defining an Existential Risk, http://ieet.org/index.php/IEET/more/torres20150121). The term is not self-evident, and it combines human extinction with the mere failure to realize the whole potential of humanity. Humanity could live billions of years and colonize the Galaxy and still not reach its whole potential, maybe because another civilization will colonize another galaxy. Most human beings live full lives, but how could we say that a random person has reached his full potential? Maybe he was born to be Napoleon? Even Napoleon did not get what he wanted in the end.
But the term "existential risks" has won out and is often shortened to "x-risks". Bostrom's classification of four types of x-risks has not become popular. He suggests the following classification:
1. Bangs: abrupt catastrophes which result in human extinction.
2. Crunches: a slow arrest of development, resource depletion, totalitarian government. This is not extinction, and may not be very bad for the people living then, but it is an unstable configuration which would eventually result either in extinction or in a supercivilization. The lower the level of equilibrium, the longer civilization could exist on it, that is, the more stable it is: Paleolithic people could live that way for millions of years.
3. Shrieks: this is not extinction but the replacement of humanity by some other, more powerful, non-human agent, either an AI or some posthumans.
4. Whimpers: a posthuman civilization arises but evolves in a direction that leads gradually but irrevocably to either the complete disappearance of the things we value or to a state where those things are realized to only a minuscule degree of what could have been achieved. This is mostly a catastrophe which could happen to an advanced supercivilization. Bostrom suggests two examples: war with aliens and the erosion of our core values.
We could also add context risks: different situations in the world imply different chances of a global catastrophe. A cold war results in an arms race and risks of a hot war. Apocalyptic ideologies raise the probability of existential terrorism.
We could add risks that change the speed of technological progress. Maybe we should also add a category of risks which change our ability to prevent x-risks. I am not in a position to make recommendations which are sure to be implemented, but it might be safe to create a couple more centers of x-risk research. Or not, if it results in rivalry and incompatible safety solutions.


Chapter 2. Problems of probability calculation

A human extinction catastrophe is a one-time event which will not have any observers (as long as observers are alive, it is not such a catastrophe). For this reason the traditional use of the term "probability" in relation to the likelihood of global risks is rather pointless, no matter whether we understand probability in statistical terms, as a proportion, or in Bayesian terms, as a measure of the uncertainty of our knowledge. If a catastrophe starts to happen, we cannot distinguish whether it is a very rare type of catastrophe or an inevitable one.
The concept of probability has undergone a long evolution, and it has two directions: objectivist, where probability is considered as the fraction of events in a certain set, and subjectivist, where probability is considered as a measure of our ignorance. Both approaches are applicable to determining what constitutes the probability of a global catastrophe, but with certain amendments.
The question "what is the probability of global catastrophe?" is raised regularly, but the answer depends on what kind of probability is meant. I propose a list of different definitions of probability and of notions of the probability of x-risk for which the answer makes sense.
1. The Fermi probability (so named by me in honor of the Fermi paradox). I suggest this term for the probability that a certain share of technological civilizations in the Universe die from one specific cause, the probability being defined as their share of the total number of technological civilizations. This quantity is unknown and unlikely to be objectively known until we survey the entire galaxy, so it can only be the object of subjective assumptions. Obviously, some civilizations will make very big efforts at risk prevention and some smaller efforts, but the Fermi probability also reflects the overall effectiveness of prevention, that is, the chances that preventive measures will be applied and will be successful. I called it the Fermi probability because knowing this probability distribution could help answer the Fermi paradox.
2. Objective probability: if we are in a multiverse, it would be the share of all versions of the future Earth in which a global catastrophe of a certain type occurs. In principle it should be close to the Fermi probability, but it may differ from it because of special features of the Earth, if there are any or if we create some.

3. Conditional probability of a certain type of catastrophe is the probability of that catastrophe provided that no other global catastrophe happens (e.g. the chance of an asteroid impact within the next million years). It is contrasted with the probability that this specific type of catastrophe will be the one that happens, among all possible catastrophes.
4. Minimum probability is the probability that a disaster would happen anyway, even if we undertake all possible efforts to prevent it. The maximum probability of an x-risk is its probability if nothing is done to prevent it. I think that these probabilities differ on average by a factor of 10 for global risks, but a better assessment is needed, maybe based on some analogies.
5. The total probability of global catastrophe vs. the probability of extinction from one particular cause. Many scenarios of global catastrophe include several causes: for example, one where most of the population dies as a result of a nuclear war, the rest of the population is severely affected by a multipandemic, and the last survivors on a remote island die of hunger. What is the main cause in this case: hunger, biotech or nuclear war, or a dangerous combination of the three, which are not so deadly independently?
6. Assigned probability: the probability which we must ascribe to a particular risk in order to protect ourselves from it in the best way without overspending resources that could be directed to other risks. This is like a stake in a game, with the only difference being that the game is played once. Here we are talking about order-of-magnitude estimates, which are needed to properly plan our actions. It is also a replacement for the Bayesian probability of existential risk, which cannot be calculated without some subjectivism. The Torino scale of asteroid risk is a good example here.
7. Expected survival time of civilization. Although we cannot measure the probability of a global catastrophe of some type directly, we can transform it into an expected lifetime. The expected lifetime incorporates our knowledge of the future change of the probability of a disaster: whether it will grow exponentially or decrease smoothly as our knowledge of how to prevent it increases.
8. Yearly probability density. For example, if the probability of a certain event is 1 per cent per year, then over 100 years the cumulative probability would be about 63 per cent, and over a 1000-year period about 99.996 per cent. A constant yearly probability density implies exponential decay of the survival probability, so the total probability of the event approaches 1 (see the numerical sketch after this list).
9. Exponentially growing probability density of total x-risk can be associated with the exponential growth of new technologies under Moore's law; it gives the total probability of catastrophe as an exponent of an exponent, which grows very quickly. It could go from near 0 to almost 1 in just 2-3 doublings of technology under Moore's law, or in 4-6 years at the current tempo of technological development. This means that the period of high catastrophic risk would last around 6 years, and some smaller catastrophes would probably also happen during it (the sketch after this list illustrates this case as well).

10. A posteriori probability is the probability of a global catastrophe which we estimate after it did not happen, for example, an all-out nuclear war in the 20th century (if we assume that it was an existential risk). Such an assessment of probability is greatly distorted toward understatement by observational selection.
We will return to the question of the probability of x-risks at the end of the book.
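A minimal numerical sketch of points 8 and 9 above. The rates used are assumptions chosen only for illustration: a constant 1 per cent yearly density for point 8, and a density that starts at 0.1 per cent per year and doubles every year for point 9.

def cumulative(yearly_probs):
    # Probability that the event happens at least once, given per-year probabilities.
    survival = 1.0
    for p in yearly_probs:
        survival *= (1.0 - p)
    return 1.0 - survival

# Point 8: constant 1% per year.
print(cumulative([0.01] * 100))    # ~0.63 over 100 years
print(cumulative([0.01] * 1000))   # ~0.99996 over 1000 years

# Point 9: doubling yearly density (capped at 1), starting from 0.1% in year 0.
growing = [min(0.001 * 2 ** t, 1.0) for t in range(12)]
for year in range(0, 12, 2):
    print(year, round(cumulative(growing[:year + 1]), 3))

Under the doubling assumption the cumulative risk stays small for years and then jumps to near 1 within a few doublings, which is the "exponent of an exponent" behavior described in point 9.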

Problems of calculation of probabilities of various scenarios


The picture of global risks and their interaction with each other inspires a natural desire to
calculate exact probabilities of different scenarios. However, it is obvious that giving exact answers
is impossible; the probabilities merely represent our guesses, and may be updated with further
information. Even though our probabilities are not perfect, refusing to make any estimate is not
helpful. It is important to interpret and apply the probabilities we derive in a reasonable way. For
example, say we determine, based on some model, that the probability of appearance of dangerous
unfriendly AI is 14% over the next 30 years. How can we use this information? Will our actions
differ if the estimate were 15% instead? Exact probabilities matter only if they could change our actions.
Further, such calculations should consider the time sequence of different risks. For example, if risk A has a probability of 50% in the first half of the 21st century, and risk B has 50% in the second half, our real chance of dying from risk B is only 25%, because in half of the cases we will not survive long enough to face it.
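A small worked check of this sequencing effect, using the 50%/50% numbers above:

p_a = 0.5                       # risk A, first half of the century
p_b = 0.5                       # risk B, second half of the century
p_die_from_b = (1 - p_a) * p_b  # B can only kill us if we survived A
print(p_die_from_b)             # 0.25
print(p_a + p_die_from_b)       # 0.75, the total risk over the century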
Finally, for different risks we want to obtain an annual probability density. I will remind the reader that here the formula of continuously compounding percentages should be applied, as in the case of radioactive decay. It means that any risk given over some time interval can be normalized to a "half-life period", that is, the time over which it implies a 50% probability of the extinction of civilization.
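A short sketch of this normalization, assuming a constant annual hazard rate so that survival decays exponentially, as in radioactive decay:

import math

def half_life(total_probability, years):
    # Annual hazard rate implied by a total risk over the interval,
    # and the time at which the cumulative risk reaches 50%.
    rate = -math.log(1 - total_probability) / years
    return math.log(2) / rate

print(half_life(0.5, 100))   # 100.0: a 50%-per-century risk has a 100-year half-life
print(half_life(0.1, 100))   # ~658 years for a 10%-per-century risk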
In our methodology part we will discuss approximately 150 different cognitive biases, which
can perturb the rational evaluation of risks. Even if the contribution of each bias error is no more
than one percent, it adds up. When people undertake a project for the first time, they usually
underestimate the riskiness of the project by a factor of as much as 40-100. This was apparent in the
examples of Chernobyl and the Challenger. (Namely, the Space Shuttle had been calculated for one
failure every 1000 flights, but exploded on the 25th flight. In his paper on cognitive biases with
respect to global risks, Yudkowsky highlights that a safety estimate of 1 in 25 would be more
correct, 40 times greater than the initial estimate; the Chernobyl reactors were calculated to undergo
one failure every one million years, but the first large scale failure occurred after less than 10,000
station-years of operation, that is, a safety estimate 100 times lower would have been more precise.)

So, there are serious biases to consider, which means we should greatly expand our default confidence intervals to come up with more realistic estimates. A confidence interval is a range of probabilities for some risk: for example, nuclear war might have a probability interval of 0.5-2% per year. How much should we expand our confidence intervals?
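As a rough illustration of why the width of such an interval matters, the assumed 0.5-2% annual figures for nuclear war spread out dramatically when accumulated over a century:

```python
# Illustrative only: an assumed annual probability interval of 0.5%-2%.
def cumulative(annual_p, years=100):
    """Cumulative probability over `years` at a constant annual probability."""
    return 1 - (1 - annual_p) ** years

print(round(cumulative(0.005), 2))  # ~0.39 for 0.5% per year
print(round(cumulative(0.02), 2))   # ~0.87 for 2% per year
# A factor-of-4 interval on the annual rate becomes the difference between
# "less likely than not" and "almost certain" on the century scale.
```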
For decision-making we need to know the order of magnitude of the risk rather than its exact value. Let's assume that the probability of global catastrophe can be estimated, at best, to within an order of magnitude (and the accuracy of such an estimate will be plus or minus an order of magnitude), and that such an estimate is enough to determine the necessity of further attentive research and problem monitoring. Similar examples of risk scales are the Torino and Palermo scales of asteroid risk.
The eleven-point (from 0 to 10) Torino scale of asteroid danger characterizes the degree of potential danger of an Earth-threatening asteroid or comet. A score on the Torino scale is assigned to a small Solar system body at the moment of its discovery, depending on the mass of the body, its speed, and the probability of its collision with the Earth. As the orbit of the object is studied further, its score on the Torino scale can be updated. Zero means an absence of threat; ten indicates a probability of more than 99% of collision with a body more than 1 km in diameter. The Palermo scale differs from the Torino scale in that it takes into account the time remaining before the fall of the asteroid: less time means a higher score on the scale.

Quantitative estimates of the probability of the global catastrophe, given by various authors
The estimates of extinction risk by leading experts are not far from one another. Of course there could be some bias in picking experts, but these experts form a group which deliberately studies the risks of human extinction from all possible causes:

J. Leslie, 1996, The End of the World: 30% in the next 500 years taking into account the Doomsday Argument, 5% without it.

N. Bostrom, 2002, in "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards" wrote: "my subjective opinion is that setting this probability lower than 25% would be misguided, and the best estimate may be considerably higher" for the next two centuries.

Sir Martin Rees, 2003, Our final hour: 50% in the XXI century.


In 2009 Willard Wells published the book "Apocalypse When?" [Wells 2009], devoted to the mathematical analysis of a possible global catastrophe, mostly based on the Doomsday argument in Gott's version. Its conclusion is that the probability is about 3% per decade, which roughly corresponds to 27% per century.

On the other hand, in the days of the Cold War some estimates of the probability of extinction were even higher.
Von Hoerner, a researcher of the problem of extraterrestrial civilizations, attributed a 65% chance to the hypothesis of the self-liquidation of the "psychozoic" (intelligent life).
Von Neumann considered nuclear war inevitable and believed that everyone would die in it.
During the 2008 conference "Global Catastrophic Risks" a survey of experts was conducted; they estimated the total risk of human extinction before 2100 as 19 per cent (http://www.fhi.ox.ac.uk/gcrreport.pdf). The results of the survey are presented in the table:



The model of the future


The model of global risks depends on the underlying model of the future. A model of the future should answer what the main driving force of the historical process is, and how this historical process would develop in the near-term, medium-term and long-term perspective.
This book is based on the idea that the main driving force is the self-sustained evolution of technologies, and (as I will show later) the speed of this evolution is hyperbolic.
Different theories of the future predict different global risks. For example, the Club of Rome model states that the driving force of evolution is the interaction of five forces, of which the most important is the exhaustion of natural resources. This predicts a sinusoidal graph of future evolution, where overpopulation and ecological crisis are the main expected catastrophes.
If we take the religious model of the world, it is driven by God's will and its end is the Apocalypse.
Taleb's "black swan" world model emphasizes unknown risks as the most important.
There are also only two possible final outcomes of our civilization's evolution: extinction, or becoming a supercivilization controlled by AI and practicing interstellar travel.
There are many more world models, and they are presented in the following table. Models are not reality, and they may or may not be useful.
Many of these models are based on empirical data, so how should we choose the right model?
The main principle: the strongest process wins. A large wave will obliterate smaller waves.
In this case the hyperbolic model should win. But the main counterargument to it is its too extreme predictions, which will begin to diverge from other models very soon. It predicts a singularity in 2030, and an extreme acceleration of events should already be starting now. (We could use the ad hoc idea that the singularity would be postponed for 20 years or so because of the growing resistance of the "material". It is like a collapsing star, which stops its collapse several times because the compressing and heating matter produces enough pressure to temporarily postpone contraction, until this matter cools somehow and the pressure drops.)
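A minimal sketch of why a hyperbolic law gives such extreme near-term predictions; the scale constant and the 2030 singularity date below are taken only as illustrative assumptions for the hyperbolic model discussed here, not as a fit to real data.

```python
# Hyperbolic growth x(t) = C / (t_singularity - t) diverges in finite time,
# unlike an exponential, which merely keeps doubling forever.
C = 100.0                # arbitrary scale constant (illustration only)
T_SINGULARITY = 2030.0   # assumed singularity date of the hyperbolic model

def hyperbolic(year):
    return C / (T_SINGULARITY - year)

for year in (1990, 2010, 2020, 2025, 2029, 2029.9):
    print(year, round(hyperbolic(year), 1))
# Each halving of the remaining time doubles the indicator: the curve looks
# almost flat for decades and then shoots upward just before the assumed date.
```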
Meta-model: as we are uncertain about which of the models is correct, we could try to use a Bayesian approach. We choose several of the most promising models and update our estimates of them based on new facts.
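A toy sketch of such a Bayesian meta-model follows; the model names are taken from this chapter, but the priors and likelihoods are made-up placeholders used only to show the update mechanics.

```python
# Toy Bayesian update over competing models of the future.
priors = {"hyperbolic": 0.4, "exponential": 0.3, "wave": 0.2, "black swan": 0.1}

# Assumed likelihood of some new observation (e.g. "no visible extreme
# acceleration this decade") under each model -- placeholder numbers.
likelihoods = {"hyperbolic": 0.2, "exponential": 0.5, "wave": 0.7, "black swan": 0.5}

def bayes_update(priors, likelihoods):
    unnormalized = {m: priors[m] * likelihoods[m] for m in priors}
    total = sum(unnormalized.values())
    return {m: round(v / total, 3) for m, v in unnormalized.items()}

print(bayes_update(priors, likelihoods))
# Models whose predictions fit the new fact gain weight; the others lose it.
```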
United models: for example, a spiral-hyperbolic model. Some models may be paired with one another.


The precautionary principle recommends us to do the same: as we can't know for sure which future model is correct, and different models imply different global risks, we should use several of the most plausible models. (We can't use all models, as we would get too much noise in the result.)
In our case the hyperbolic, exponential, wave and black swan models are the most plausible.
To assess the models we could use several characteristics:
1) Refutability: some models make early predictions, and we can check these predictions to see whether the model works. This is like Popper's criterion. Other models can't be falsified, and this makes them weaker.
2) Completeness: the model should take into account all known strong historical trends.
3) Evidence base: the model should "predict the past", that is, be built on a large empirical base.
4) Support: the model should have support from different futurists, and better still if they came to it independently.
5) We should distinguish predictive models from planning (normative) models. Planning models tend to turn into wishful thinking. Planning must itself be based on some model of the future.
6) Complexity: if a model is too complex, or its predictions are too sharp, it is probably false, as we live in a very uncertain world.
7) Too strong predictions for the near future contradict Copernican mediocrity, because if we are randomly chosen from the time during which the model works, we should be somewhere in the middle of it. For example, if I predict that nuclear war will happen tomorrow, I am betting strongly against the fact that it didn't happen for 70 years, and its a priori probability of happening tomorrow is very small (see the sketch after this list).
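A hedged sketch of the Copernican arithmetic behind point 7; the 70-year figure comes from the nuclear-war example above, and the formula is the standard Gott-style estimate, not a claim about the actual probability of war.

```python
# Copernican / Gott-style estimate: if a process has already lasted time T and we
# observe it at a random moment of its total duration, the chance that it ends
# within the next interval dt is roughly dt / (T + dt).
T_DAYS = 70 * 365.25   # 70 years without nuclear war, expressed in days
DT_DAYS = 1.0          # "tomorrow"

p_tomorrow = DT_DAYS / (T_DAYS + DT_DAYS)
print(p_tomorrow)      # about 4e-05, i.e. roughly 1 chance in 25,000
# Gott's 1993 argument likewise gives, with 50% confidence, a remaining duration
# between T/3 and 3T -- decades, not days, for a 70-year-old state of affairs.
```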

---- The same text in thesis mode ----

TL;DR: Many models of the future exist. Several are relevant.


Hyperbolic model is strongest, but too strange.
Wall of text: off
Our need: correct model of the future
Different people: different models = no communication.
Assumptions:
Model of the future = main driving force of historical process + graphic of changes
Model of the future determines global risks

The map
The map: lists all main future models.
Structure: from fast growth to slow growth models.

How to estimate validity of future models?
Refutability: check early predictions. Popper's criterion. Falsification.
Completeness: include all known strong historic trends.
Evidence base: predict the past + built on a large empirical base.
Support. Who: thinkers and futurists. How: independently.
Planning (normative) models. No wishful thinking. Planning must be based on some model of the future itself.
Complexity. Too complex = false. Too exact = false.
Too strong predictions for the near future contradict Copernican mediocrity: if we are randomly chosen from the time when the model works, we should be somewhere in the middle of it. For example, if I predict that nuclear war will happen tomorrow, I bet against the fact that it didn't happen for 70 years, and its a priori probability of happening tomorrow is very small.
Main principle: strongest process wins.
Metaphor: Large wave will obliterate smaller waves.
Conclusion:
Hyperbolic model should win. It is strongest process.
But: Too extreme predictions.
Like: Singularity in 2030
Ad hoc repair: growing inertia of process results in postponed
Singularity (20 years).
Analogy: Stellar collapse into black hole. Overheated matter delays
collapse by pressure. Not for long.


Combining models

1. Meta model: Bayesian approach, many models.


How: Update estimates of models based on evidence

2. United models
Example: spiral-hyperbolic model.
Some models may be paired with one another.

3. The precautionary principle: use several models in risk assessment.
Criteria: Only plausible models.
In case of global risks: hyperbolic, exponential, wave and black swan models.

Other methods of prediction:


Extrapolation
Analogy
Statistical
Induction
Trends

Poll of experts, foresight


Prediction markets
Debiasing

Chapter 3. History of the research on global risk

Antique and medieval ideas about doomsday were that it could happen by the will of God or as a result of a war of supernatural forces (Armageddon, Ragnarök).
The first scientific ideas of the end of the world and of the human race appeared in the XIX century. At the beginning of the XIX century Malthus created the idea of overpopulation; though it was not directly related to complete human extinction, it became a basis of future "limits of growth" ideas. Lord Kelvin in the 1850s suggested the possibility of the thermal death of the Universe, that is, thermal equilibrium and the stopping of all processes. The most popular idea was that life on Earth would vanish after the dimming of the Sun. Now we know that the Sun is becoming brighter as it grows older; also, the 19th-century timescale for this event was quite different, on the order of several million years, as the source of the energy of stars was not yet known. As the idea of space travel had not yet appeared, the death of life on Earth was equal to the death of humanity. All these scenarios had in common that the natural end of the world would be a slow and remote process, something like freezing, which humans could neither stop nor cause.
In the first half of the XX century we find descriptions of grandiose natural disasters in science fiction, for example in the works of H.G. Wells (The War of the Worlds) and Sir Arthur Conan Doyle (The Poison Belt): collisions with giant meteors, poisoning by cometary gases, genetic degradation.
During the great influenza pandemic of 1918 one physician said that if things went on this way, humanity would be finished in several weeks.
The history of the modern scientific study of global risks dates back to 1945. Before the first atomic bomb test in the United States there were worries that it could lead to a chain reaction of fusion of the nitrogen in the Earth's atmosphere.
In order to assess this risk a commission was established, headed by physicist Arthur Compton. The resulting report, LA-602 "Ignition of the Atmosphere with Nuclear Bombs" [LA-602 1945], was recently declassified and is now available to everyone on the Internet in the form of poorly scanned typewritten pages. The report shows that due to the scattering of photons by electrons, the latter will be cooled (because their energy is greater than that of the photons), so the radiation will not heat but cool the reaction region. Thus, with increasing area the reaction is not able to become self-sustaining. This ensured the impossibility of a chain reaction in the nitrogen of the atmosphere. However, at the end of the text it is mentioned that not all factors were taken into account, for instance the effect of water vapor contained in the atmosphere.
Because it was a secret report, it was not intended to convince the public, which distinguishes it favorably from recent reports on the safety of the collider; its target audience was the decision makers. Compton told them that the chance that a chain reaction would start in the atmosphere was 3 per million. In the 1970s a journalistic investigation found that Compton had taken this figure "out of his head" because he found it compelling enough for the president; the report itself contains no probability estimates [Kent 2004]. Compton believed that a realistic assessment of the probability of disaster was not important, because if the Americans repudiated the bomb tests, the Germans or other hostile countries would carry them out.
In 1979 an article by Weaver and Wood [Weaver, Wood 1979] on thermonuclear detonation in the atmosphere and oceans was published, which shows that conditions suitable for a self-sustaining thermonuclear reaction do not exist on our planet (but they are possible on other planets, if there is a high enough concentration of deuterium to meet the minimum requirements).
The next important step in the history of global risk research was the realization not just of the possibility of an accidental global catastrophe, but also of the technical possibility of the deliberate destruction of humanity. It became clear after the proposal of the cobalt bomb by Leo Szilard [Smith 2007]. During a debate on a radio show with Hans Bethe in 1950 about a possible threat to life on Earth from nuclear weapons, he proposed a new type of bomb: a hydrogen bomb (which did not yet physically exist at that time, but whose feasibility was being widely discussed), wrapped in a shell of cobalt-59, which would be converted during the explosion into cobalt-60. This highly radioactive isotope, with a half-life of about 5 years, could make an entire continent or the whole Earth uninhabitable if the bomb were large enough. After this declaration the Department of Energy decided to conduct an investigation in order to prove that such a bomb was impossible. However, the scientist it hired showed that if the mass of the bomb were 200,000 tonnes (i.e. something like 20 modern nuclear reactors, and so theoretically achievable), it would be enough to destroy all highly organized life on Earth. Such a device would inevitably be stationary, so it could be used only as a doomsday weapon: after all, no one would dare attack a country that had created such a device. In the 1960s the idea of the theoretical possibility of the destruction of the world by means of a cobalt bomb was very popular and widely discussed in the press, in scientific literature and in art, but then was almost forgotten. It appears, for example, in Kahn's book On Thermonuclear War [Khan 1960], N. Shute's novel On the Beach [Shute 1957], and Kubrick's movie Dr. Strangelove.
In 1950 Fermi posed his famous paradox, "Where are they?": if alien civilizations exist, why don't we see them? One obvious explanation at the time was that civilizations tend to destroy themselves in nuclear war, and the silence of the Universe is explained by this self-destruction. And since we are a typical civilization, we will probably also destroy ourselves.
Furthermore, in the 1960s many ideas appeared about potential disasters or dangerous technologies that might be developed in the future. The English mathematician I.J. Good wrote the essay "Speculations Concerning the First Ultraintelligent Machine" [Good 1965], where he showed that as soon as such a machine is created, it will be able to improve itself and leave humans behind forever; later these ideas formed the basis of V. Vinge's notion of the technological singularity [Vinge 1993], the essence of which is that, based on current trends, by 2030 an artificial intelligence superior to human beings will be created, and then history will become fundamentally unpredictable.
Astrophysicist F. Hoyle [Hoyle 1962] wrote the novel A for Andromeda, which described an attack on Earth by a hostile alien artificial intelligence downloaded via radio telescope from space. He gave a very plausible multi-step description of the scenario of such an attack.
Physicist Richard Feynman gave the lecture "There's Plenty of Room at the Bottom" [Feynman 1959], where he was the first to suggest the possibility of molecular manufacturing, i.e. nanorobots.
Science fiction, which had its golden age in the sixties, played an important role in the realization of global risks.
In 1960 von Foerster published an article entitled "Doomsday: Friday, 13 November, A.D. 2026", arguing that at this date the human population would approach infinity if it kept growing as it had grown over the last two millennia [Foerester 1960]. He most likely chose this title not to make a prediction, but to draw attention to his explanation of past growth. So he made a false projection of infinite growth of the human population, but this prediction interestingly points to roughly the same date as several other predictions made by different methods, like the one by Vinge about AI by 2030. It also paved the way for "limits of growth" theories. Foerster's idea was that the human population grows according to a hyperbolic law, and any hyperbolic law reaches infinity in finite time. Of course the population can't grow that much for biological reasons, but if we add the "population" of computers we may find that his prediction was still working around 2010.

In 1972 the Meadows book The Limits to Growth was published. It did not directly predict human extinction, but only a decline of the human population at the end of the XXI century due to a complex crisis caused by overpopulation, limited resources and pollution.
In general we have two lines of thought regarding a future global catastrophe. One is based on Malthusian theories, and the other on predictions of technological development. The idea of total human extinction belongs to the second line, because limits-of-growth theories tend to underestimate the role of technologies in all aspects of human life, one of which is the role of technologies in building weapons. The Meadows theory does not take into account the possibility of nuclear war (or even more destructive wars and catastrophes based on XXI century technologies), which could logically be predicted as a result of a war for resources.
In the 1970s the danger associated with biotechnology became clear. In 1971, the American biologist Robert Pollack learned [Teals 1989] that in a neighboring laboratory experiments were planned to embed the genome of the oncogenic virus SV40 into the bacterium Escherichia coli. He immediately realized that if such E. coli spread throughout the world, it could cause a worldwide epidemic of cancer. He appealed to this laboratory to suspend the experiments before they started. The result was the ensuing discussions at the Asilomar Conference in 1975, which adopted recommendations for safe genetic engineering.
http://en.wikipedia.org/wiki/Asilomar_Conference_on_Recombinant_DNA
In 1981 Asimov published the book A Choice of Catastrophes [Asimov 2002]. Although it was one of the first attempts to systematize various global risks, the focus was on distant events, such as the expansion of the Sun, and the main message of the book was that people would be able to overcome the global risks.
In 1983 B. Carter, the author of the now famous anthropic principle, presented a second part of his anthropic reasoning, which he decided not to publish but only to report at a meeting of the Royal Society, because he knew that it would cause an even bigger protest. Later it was popularized by J. Leslie [Leslie 1996]. This second half of the argument became known as the Doomsday argument (DA). Briefly, its essence is that on the basis of humanity's past lifetime and the assumption that we are roughly in the middle of its existence, we can estimate the future duration of mankind. Carter used a more complex form of the DA with the conditional probability of a future catastrophe, which should change depending on whether we find ourselves before or after the catastrophe. In 1993 Richard Gott suggested a simpler version, which works directly with the future lifetime.
In the early 1980s a new theory of human extinction as a result of the use of nuclear weapons appeared: the theory of "nuclear winter". Computer simulations of the behavior of the atmosphere after a nuclear war showed that the shading caused by the emission of soot particles into the troposphere would be long-lasting and significant, resulting in prolonged freezing. The questions of how realistic such a blackout is, and what temperature drop mankind can survive, remain open. This theory was part of the ongoing political fight against nuclear war in the 1980s. Nuclear war was portrayed in the mass consciousness as inevitably leading to human extinction. While this was not true in most realistic cases, it was helpful in promoting the idea of nuclear disarmament, and it resulted in a drastic reduction of nuclear arsenals after the Cold War. So successful public campaigning against existential risks is possible.
In the 1980s the first publications appeared about the risks of particle accelerator experiments.
In 1985 E. Drexler's book Engines of Creation [Drexler 1985] was published, devoted to radical nanotechnology, that is, the creation of self-replicating nanobots. Drexler showed that such an event would have revolutionary consequences for the economy and military affairs. He examines various scenarios of global catastrophe associated with nanorobots. The first is "gray goo", i.e. the unrestricted breeding in the environment of nanorobots over which control has been lost. In just a few days they could fully consume the Earth's biosphere. The second risk is an "unstable arms race". Nanotechnology would allow the fast and extremely cheap creation of weapons of unprecedented destructive power, first of all microscopic robots capable of engaging the manpower and equipment of the enemy. Instability of the arms race means that "the one who starts first takes all", and a balance between two opposing forces, as existed during the Cold War, is impossible.
Public perception of existential risks was mostly formed by art, which was later criticized for unrealistic descriptions of risk scenarios. In 1984 the first movie of the Terminator series appeared, in which a military AI named Skynet tried to eliminate humanity for self-defense reasons; it later became a metaphor of dangerous AI. In fact the risks of military AI are still underestimated by the Friendly AI community, partly because of the rejection they feel toward the Terminator movie.
In 1993 Vernor Vinge coined the idea of the technological Singularity, the moment when the first superhuman AI will be created, and one of the clear options after that is that it will destroy all humanity. He wrote that he would be surprised if it happened before 2005 or after 2030. All predictions about AI are known to be premature.
In 1996 the book by Canadian philosopher John Leslie, The End of the World: The Science and Ethics of Human Extinction [Leslie 1996], was published; it differed radically from Asimov's book, primarily in its pessimistic tone and focus on the near future. It examines all the newly discovered hypothetical disasters, including nanorobots and the DA, and concludes that the chances of human extinction are 30 percent in the next 500 years.
John Leslie was probably the first who summarized all possible risks as well as the DA and started the modern tradition of their discussion, but it was Bill Joy who brought these ideas to the public.
In 2000 Wired magazine came out with a sensational article by Bill Joy, one of the founders of Sun Microsystems, "Why the Future Doesn't Need Us" [Joy 2000]. In it he paints a very pessimistic picture of the future of civilization, in which people will be replaced by robots; humans would be at best like pets for AI. Advances in technology will create "knowledge of mass destruction" that can be distributed over the Internet, for example the genetic codes of dangerous viruses. In 2005 Joy was among those who campaigned to remove the recently published Spanish flu virus genome from the Internet. In 2003 Joy said that he had written two manuscript books which he decided not to publish. In the first he wanted to warn people of impending danger, but his published article had fulfilled this task. In the second he wanted to offer possible solutions, but the solutions did not yet satisfy him, and "knowledge is not an area where you have the right to a second shot."
Since the end of the last century, J. Lovelock [Lovelock 2006] has developed the theory of the possibility of runaway global warming. The gist of it is that if the usual warming associated with the accumulation of carbon dioxide in the atmosphere exceeds a certain quite small threshold (1-2 degrees C), then the vast reserves of methane hydrates on the seabed and in the tundra, accumulated there during the recent ice ages, begin to be released. Methane is tens of times stronger as a greenhouse gas than carbon dioxide, and this may lead to a further increase of the temperature of the Earth, which would launch other positive-feedback chains. For example, vegetation on land could start burning, emitting more CO2 into the atmosphere; the oceans would also warm up, the solubility of CO2 in them would fall, and it would again be emitted into the atmosphere; anoxic areas would form in the ocean and emit methane. In September 2008 columns of methane bubbles escaping from the seabed of the Arctic Ocean were discovered. Finally, water vapor is also a greenhouse gas, and its concentration will rise as temperatures rise. As a result, the temperature could rise by tens of degrees, a greenhouse catastrophe would happen, and all living things would die. Although it is not inevitable, the risk of such a development is the worst possible outcome with the maximum expected damage. From 2012, and as of 2014, a group of scientists united in the Arctic Methane Emergency Group with a collective blog (http://arctic-news.blogspot.ru/). They predicted total ice melt in the Arctic as early as 2015, which would promote further release of methane and, in their opinion, could lead to a temperature rise of 10-20 degrees in the XXI century and total human extinction.

At the end of the XX and beginning of the XXI century several articles appeared describing fundamentally new risks, whose recognition was made possible by creative analysis of the capabilities of new technologies. These are the works by R. Freitas on the gray goo problem [Freitas 2000], R. Carrigan "Do potential SETI signals need to be decontaminated?" [Carrigan 2006], the book Doomsday Men by P.D. Smith [Smith 2007] and "Accidental Nuclear War" by Bruce Blair [Blair 1993]. Another risk is the artificial awakening of a supervolcano using deep drilling. There are projects of autonomous probes which could penetrate into the core of the Earth to a depth of 1000 km by melting through mantle material [Stivenson 2003], [Circovic 2004].
Since 2001 E. Yudkowsky has explored the problem of so-called Friendly AI (that is, safe self-improving AI) in California. He created the Singularity Institute (now MIRI; do not confuse it with the startup incubator Singularity University). He wrote several papers on AI safety issues, the first of them being Creating Friendly AI. Later he wrote two articles to which we will often refer in this book: "Cognitive Biases Potentially Affecting Judgment of Global Risks" [Yudkowsky 2008b] and "Artificial Intelligence as a Positive and Negative Factor in Global Risk" [Yudkowsky 2008a].
In 2003 the English Astronomer Royal Sir Martin Rees published the book Our Final Hour [Rees 2003]. It is much smaller in volume than Leslie's book and does not contain fundamentally new information; however, it was addressed to a wide audience and sold in large quantities.
In 2002 Nick Bostrom wrote his seminal article "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards" and several other articles about the DA and the so-called Simulation argument. He coined the term "existential risk", which includes not only extinction but also everything which could harm human potential forever. He also showed that the main risks are "unknown unknowns", that is, new unpredictable risks.
In 2004 Richard Posner published Catastrophe: Risk and Response [Posner 2004], which repeats much of the previous books but provides an economic analysis of the risks of global catastrophe and of the cost of efforts to prevent it (for example, efforts to deflect asteroids, and the value of experiments on accelerators).
In the XXI century the main task of researchers became not only the listing of the various possible global risks, but the analysis of the general mechanisms of their origin and prevention. It turned out that most of the possible risks are associated with wrong knowledge and wrong decisions of people. At the beginning of the XXI century the formation of a global risk analysis methodology began: the transition from the description of risks to the meta-analysis of the human ability to detect and correctly assess global risks. This should mostly be attributed to Bostrom and Yudkowsky.


In 2008 several events increased interest in the risks of global catastrophe: the planned (but not yet fully realized) start of the Large Hadron Collider, the vertical jump in oil prices, the release of methane in the Arctic, the war with Georgia and the global financial crisis.
In 2008 the conference "Global Catastrophic Risks" was held in Oxford, and its proceedings were published under the same title, edited by N. Bostrom and M. Ćirković [Bostrom, Circovic 2008]. It includes more than 20 articles by various authors.
There was an article by M. Ćirković about the role of observational selection in the evaluation of the frequency of future disasters, which claims that it is impossible to draw any conclusions about the frequency of future disasters based on their previous frequency.
Arnon Dar examines the risks of supernovae and gamma-ray bursts, and also shows that a specific threat to the Earth comes from cosmic rays produced by galactic gamma-ray bursts.
William Napier, in his article about the threat of comets and asteroids, showed that we are perhaps living in a period of intense cometary bombardment, when the frequency of impacts is 100 times higher than average.
Michael Rampino gave an overview of the catastrophic risks associated with supervolcanoes.
At the beginning of the XXI century several organizations appeared that promote protection from global risks, for example the Lifeboat Foundation, CRN (Center for Responsible Nanotechnology), MIRI, the Future of Humanity Institute in Oxford and GCRI founded by Seth Baum. Most of them are very small and don't have much impact. The most interesting work is done by MIRI and the associated LessWrong community forum.
In Cambridge in 2012 the Centre for the Study of Existential Risk was created, with several prominent figures on its board: Huw Price, Martin Rees and Jaan Tallinn (http://cser.org/). It got a lot of public attention, which is good, but not much actual work has been done.
Except for MIRI, which created a working community, all other institutions are in the shadow of the work of their leaders, or are not doing any work at all. For example, the Lifeboat Foundation has very large advisory boards with thousands of people on them, but rarely consults them. However, they have a good mailing list about x-risks.
The blog Overcoming Bias and the articles of Robin Hanson were an important contribution to the study of x-risks. Katja Grace from Australia made an important contribution to the theory of the Doomsday argument by mathematically connecting it with the Fermi Paradox (http://www.academia.edu/475444/Anthropic_Reasoning_in_the_Great_Filter).
The study of global risks has followed this path: awareness of the possibility of human extinction, and of the possibility of extinction in the near future; then realization of several different risks; then an attempt to create an exhaustive list of global risks; and then the creation of a system of their description which takes into account any global risks and determines the risk of any new technologies and discoveries. A systematic description has greater predictive value than just a list, as it allows finding new points of vulnerability, just as the periodic table allows us to find new elements. And then, the study of the limits of human thinking about global risks as a first step in creating a methodology capable of effectively finding and evaluating global risks.
The period from 2000 to 2008 was the golden age of x-risk research. Many seminal books and articles were published, from Bill Joy to Bostrom and Yudkowsky, and many new ideas appeared. But after that, the stream of new ideas almost stopped. This might be good, because every new idea increases the total risk, and perhaps all important ideas about the topic had been discussed, but unfortunately nothing was done to prevent x-risks, and dangerous tendencies continued.
The risks of nuclear war are growing. No FAI theory exists. Biotech is developing very quickly and genetically modified viruses are getting cheaper and cheaper. The time until the catastrophe is running out. The next obvious step is to create a new stream of ideas, ideas on how to prevent x-risks and when to implement them. But before doing this, we need a consensus between researchers about the structure of the incoming risks. This can come via dialog, especially informal dialogs during scientific conferences.
The lack of new ideas was somewhat compensated by the appearance of many think tanks as well as the publication of many popular articles about the problem. FHI, GCRI, LessWrong and the Arctic methane group are among the new players in the field. But communication between them was not good, especially where there was an ideological barrier, mostly about which risk is most serious: AI or climate, war or viruses.
Also, x-risk researchers seem to be less cooperative than, for example, anti-aging researchers, maybe because each x-risk researcher aims to save the world and has his own understanding of how to do it, and these theories don't add up with one another. This is my personal impression.

Current situation with global risks


The first version of this book was written in Russian in 2007 (under the name The Structure of the Global Catastrophe), and since then not much has changed for the better. The predicted risky trends have continued, and now the risk of world war is very high. World war is a corridor for creating x-risk weapons and situations, as well as for disseminating values which promote x-risks. These are the values of nationalism, religious sectarianism, mysticism, fatalism, short-term gain, risky behaviour and winning in general. The values of human life, safety, rationality and the unity of humankind are not growing as quickly as they should.
In fact there are two groups of human values. One of them is about fighting with other groups of people and is based on false and irrational beliefs drawn from nationalism and religion, while the other is about the value of human life. The first of these promotes global catastrophe, while the second is good. Of course this is an oversimplification, but this simple map of values is very useful. Values can't save us from catastrophe by themselves, because different people have different values, but they can change its probability.
1. Most important: a global catastrophe has not happened. Colossal terrorist attacks, wars and natural disasters also didn't happen.
2. Key technology trends, like exponential growth in the spirit of Moore's law, have not changed. This is especially true of biology and genetics.
3. The economic crisis of 2008 began, and I think its aftermath has not ended yet, because quantitative easing created a lot of money, which could spiral into inflation, and large defaults are still possible.
4. Several new potentially pandemic viruses appeared, like swine flu and MERS.
5. New artificial viruses were created to test how a mutated bird flu could wipe out humanity, and the protocols of the experiments were published.
6. Arctic ice is collapsing and methane readings in the Arctic are high.
7. The Fukushima nuclear catastrophe showed again that unimaginable catastrophes can happen. By the way, Martin Rees predicted it in his book about existential risks.
8. The orbital infrared telescope WISE was launched; it will be able to clarify the question of the existence of dark comets and directly answer the question of the risk associated with them.
9. Many natural language processing AI projects have started, and maximum computer power has risen around 1,000 times since the first edition of the book.
10. The 2012 "end of the world" craze greatly spoiled the efforts to promote a rational approach to global risks.

11. The start of the Large Hadron Collider helped to raise questions about the risks of scientific experiments, but the truth was lost in the quarrels between opinions for and against it. I mean the works of Adrian Kent and Anders Sandberg about small risks with large consequences.
12. In 2014 the situation in Ukraine came close to war between Russia and the West. The peculiarity of this situation is that it could deteriorate in small steps, and there is no natural barrier like the one that not using nuclear weapons provided in the case of nuclear war. It has already resulted in a new cold war, arms race and nuclear race.
13. Peak oil has not happened, mostly because of shale oil and shale gas. Again intellect proved to be more powerful than the limits of resources.


Part 2. Typology of x-risks


Chapter 4. The map of all known global risks

I like to create full exhaustive lists, and I could not stop myself from creating a list of human extinction risks. Soon I reached around 100 items, although not all of them are really dangerous. I decided to convert them into something like a periodic table, i.e. to sort them by several parameters in order to help predict new risks.
For this map I chose two main variables: the basic mechanism of risk and the historical epoch during which it could happen. Also, any map should be based on some kind of future model, and I chose Kurzweil's model of exponential technological growth, which leads to the creation of super technologies in the middle of the 21st century. The risks are also graded according to their probability: main, possible and hypothetical. I plan to attach to each risk a wiki page with its explanation.
I would like to know which risks are missing from this map. If your ideas are too dangerous to publish openly, PM me. If you think that any mention of your idea will raise the chances of human extinction, just mention its existence without the details.
I think that the map of x-risks is necessary for their prevention. I offered prizes for improving the previous map, which illustrates possible prevention methods of x-risks, and it really helped me to improve it. But I do not offer prizes for improving this map, as it may encourage people to be too creative in thinking about new risks.

http://immortality-roadmap.com/x-risks%20map15.pdf
lesswrong discussion: http://lesswrong.com/lw/mdw/a_map_typology_of_human_extinction_risks/
In the following chapters I will go into detail about all the mentioned risks.

Block 1 Natural risks


Chapter 5. The risks connected with natural catastrophes

Universal catastrophes

Catastrophes which would change the whole Universe as such, on a scale equal to the Big Bang, are theoretically possible. From statistical reasoning their probability is less than 1% in the nearest billion years, as was shown by Bostrom and Tegmark. However, the validity of the reasoning of Bostrom and Tegmark depends on the validity of their premise, namely that intelligent life in our Universe could have arisen not only now but also several billion years ago. This suggestion is based on the fact that the heavy elements necessary for the existence of life had already arisen several billion years after the Universe appeared, long before the formation of the Earth. Obviously, however, the degree of reliability which we can attribute to this premise is far less than the 100-billion-to-1 certainty it would require, as we have no direct proof of it, namely traces of early civilizations. Moreover, the apparent absence of earlier civilizations (Fermi's paradox) gives some support to the opposite idea, namely that mankind has arisen improbably early. Possibly, the existence of heavy elements is not the only necessary condition for the emergence of intelligent life, and there are other conditions as well, for example, that the frequency of flashes of nearby quasars and hypernovae has considerably decreased (and the density of these objects really does decrease as the Universe expands and its hydrogen clouds are exhausted).
Bostrom and Tegmark write: "One might think that since life here on Earth has survived for nearly 4 Gyr (Gigayears), such catastrophic events must be extremely rare. Unfortunately, such an argument is flawed, giving us a false sense of security. It fails to take into account the observation selection effect that precludes any observer from observing anything other than that their own species has survived up to the point where they make the observation. Even if the frequency of cosmic catastrophes were very high, we should still expect to find ourselves on a planet that had not yet been destroyed. The fact that we are still alive does not even seem to rule out the hypothesis that the average cosmic neighborhood is typically sterilized by vacuum decay, say, every 10000 years, and that our own planet has just been extremely lucky up until now. If this hypothesis were true, future prospects would be bleak."
And though Bostrom and Tegmark further reject the assumption of a high frequency of "sterilizing catastrophes", basing themselves on the late time of the Earth's existence, we cannot accept their conclusion, because, as discussed above, the premise on which it is based is unreliable. This does not mean, however, that extinction as a result of a universal catastrophe is imminent. Our only source of knowledge about possible universal catastrophes is theoretical physics since, by definition, such a catastrophe has never happened during the life of the Universe (except for the Big Bang). Theoretical physics generates a large quantity of untested hypotheses, and in the case of universal catastrophes they can be essentially untestable. We note also that, proceeding from today's understanding, we can neither prevent a universal catastrophe nor protect ourselves from it (though we could provoke one; see the section about dangerous physical experiments). Let us now list the possible universal catastrophes, from the point of view of some theorists:
1. Decay of the false vacuum. We already discussed the problems of false vacuum in connection with physical experiments.
2. Collision with an object in multidimensional space, a brane. There are hypotheses that our Universe is only an object in a multidimensional space, called a brane (from the word "membrane"), and that the Big Bang was the result of the collision of our brane with another brane. If there is one more collision, it will instantly destroy our entire world.
3. The Big Rip. The recently discovered dark energy leads, as it is believed, to an ever more accelerated expansion of the Universe. If the speed of expansion keeps growing, at some point it will tear apart the Solar system. But according to current theories this would happen tens of billions of years from now. (Phantom Energy and Cosmic Doomsday. Robert R. Caldwell, Marc Kamionkowski, Nevin N. Weinberg. http://xxx.itep.ru/abs/astro-ph/0302506)
4. Transition of residual dark energy into matter. Recently the assumption has been put forward that this dark energy could suddenly turn into ordinary matter, as already happened at the time of the Big Bang.
5. Other classic scenarios of the death of the Universe are the heat death, that is, the rise of entropy and the equalization of temperature in the Universe, and the recollapse of the Universe under gravitational forces. But again, they are tens of billions of years away from us.
6. One can assume the existence of some physical process that makes the Universe unfit for habitation after a certain time (as it was unfit for habitation because of the intense radiation of the nuclei of galaxies, quasars, during the early billions of years of its existence). For example, such a process could be the evaporation of primordial black holes through Hawking radiation. If so, we exist in a narrow interval of time when the Universe is habitable, just as the Earth is located in the narrow habitable zone around the Sun, and the Sun in a narrow region of the Galaxy where the frequency of its rotation is synchronized with the rotation of the arms of the Galaxy, so that it does not fall within those arms and is not exposed to supernovae.
7. If our world has to some extent arisen from nothing in a way absolutely unknown to us, what prevents it from also disappearing suddenly?

Geological catastrophes
Geological catastrophes kill millions of times more people than the falling of asteroids; however, proceeding from modern understanding, they are limited in scale. Nevertheless, the global risks connected with processes inside the Earth surpass space risks. Possibly, there are mechanisms of release of energy and poisonous gases from the bowels of the Earth which we simply have not encountered owing to the effect of observational selection.

Eruptions of supervolcanoes
The probability of an eruption of a supervolcano of comparable intensity is much greater than the probability of an asteroid impact. However, modern science cannot prevent or even predict this event. (In the future it may become possible to gradually bleed off pressure from magma chambers, but this is itself dangerous, as it would require drilling into their roofs.) The main damaging factor of a supereruption is volcanic winter. It is shorter than a nuclear winter, as particles of volcanic ash are heavier than soot, but there can be much more of them. In this case the volcanic winter can lead to a new stable condition, a new ice age.
A large eruption is accompanied by the emission of poisonous gases, including sulphur compounds. In a very bad scenario this could considerably poison the atmosphere. This poisoning would not only make it of little use for breathing, but would also result in universal acid rains which would burn vegetation and destroy crops. Large emissions of carbon dioxide and hydrogen are also possible.
Finally, volcanic dust is dangerous to breathe as it clogs the lungs. People can easily provide themselves with gas masks and gauze bandages, but these may not suffice for cattle and pets. Besides, the volcanic dust simply covers huge surfaces with a thick layer, and pyroclastic flows can extend over considerable distances. Finally, explosions of supervolcanoes generate tsunamis.
All this means that people most likely would survive a supervolcano eruption, but with considerable probability it would send mankind to one of the post-apocalyptic stages. Mankind was once on the verge of extinction because of the volcanic winter caused by the eruption of the volcano Toba 74,000 years ago. However, modern technologies of food storage and bunker building would allow a considerable group of people to survive a volcanic winter of such scale.
In ancient times enormous effusive eruptions of volcanoes took place, which flooded millions of square kilometres with molten lava: in India on the Deccan plateau at the time of the extinction of the dinosaurs (probably provoked by the fall of an asteroid on the opposite side of the Earth, in Mexico), and also on the East Siberian platform. There is a doubtful assumption that the strengthening of the processes of hydrogen degassing on the Russian plain is a harbinger of the appearance of a new magmatic centre. There is also a doubtful assumption about the possibility of a catastrophic splitting of the Earth's crust along the lines of oceanic rifts, with powerful explosions of water steam under the crust.
An interesting question is whether the overall internal heat of the Earth grows through the disintegration of radioactive elements, or, on the contrary, decreases due to radiative cooling. If it increases, volcanic activity should increase over hundreds of millions of years. (Asimov writes in the book A Choice of Catastrophes, about ice ages: from volcanic ash in ocean sediments it is possible to conclude that volcanic activity in the last 2 million years was approximately four times more intensive than in the previous 18 million years.)

Falling of asteroids
The falling of asteroids and comets is often considered as one of the possible causes of the extinction of mankind. And though such collisions are quite possible, the chances of total extinction as a result of them are often exaggerated. Experts think an asteroid would need to be about 37 miles (60 km) in diameter to wipe out all complex life on Earth. However, asteroids of such size hit the Earth extremely rarely, approximately once every billion years. In comparison, the asteroid that wiped out the dinosaurs was about 6 mi (10 km) in diameter, which is a volume about 200 times less than that of a potential life-killer.
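A quick arithmetic check of the "about 200 times less" comparison above, treating both bodies as spheres so that volume scales with the cube of the diameter:

```python
# Volume ratio of the ~60 km "life-killer" to the ~10 km dinosaur-killer asteroid.
d_life_killer = 60.0   # km, as quoted above
d_dino_killer = 10.0   # km, as quoted above

print((d_life_killer / d_dino_killer) ** 3)   # 216 -- roughly 200 times the volume
```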
The asteroid Apophis has approximately a 3 in a million chance of impacting the Earth in 2068, but being only about 1,066 ft (325 m) across, it is not a threat to the future of life. In the worst-case scenario, it could impact in the Pacific Ocean and produce a tsunami which would kill several hundred thousand people.
2.2 million years ago a comet 0.5-2 km in diameter fell between South America and Antarctica (the Eltanin impact, http://de.wikipedia.org/wiki/Eltanin_(Asteroid)). The wave, 1 km in height, threw whales up onto the Andes. In the vicinity of the Earth there are no asteroids of a size that could destroy all people and the whole biosphere. However, comets of such size can come from the Oort cloud. In the article by Napier et al. on comets with low reflecting ability and the risk of space collisions it is shown that the number of dangerous comets may be essentially underestimated, as the observable quantity of comets is 1000 times less than expected, which is connected with the fact that comets, after several passes around the Sun, become covered by a dark crust, cease to reflect light and become imperceptible. Such dark comets are invisible by modern means. Besides, the ejection of comets from the Oort cloud depends on the tidal forces exerted by the Galaxy on the Solar system. These tidal forces increase when the Sun passes through the denser areas of the Galaxy, namely through the spiral arms and the galactic plane. And just now we are passing through the galactic plane, which means that during the present epoch comet bombardment is 10 times stronger than the average for the history of the Earth. Napier connects the previous epochs of intensive comet bombardment with the mass extinctions 65 and 251 million years ago.
The main damaging factor of an asteroid impact would be not only the tsunami wave, but also the "asteroid winter" connected with the ejection of dust particles into the atmosphere. The fall of a large asteroid can cause deformations in the Earth's crust which would lead to eruptions of volcanoes. Besides, a large asteroid would cause a worldwide earthquake, dangerous first of all for a technogenic civilization.
The scenario of intensive bombardment of the Earth by a multitude of fragments is more dangerous. Then the strikes would be distributed more evenly and would require a smaller quantity of material. These fragments could result from the disintegration of some space body (see further about the threat of the explosion of Callisto), from the splitting of a comet into a stream of fragments (the Tunguska meteorite was probably a fragment of comet Encke), from an asteroid hitting the Moon, or as a secondary damaging factor from the collision of the Earth with a large space body. Many comets already consist of groups of fragments, and can also break up in the atmosphere into thousands of pieces. This could also occur as a result of an unsuccessful attempt to deflect an asteroid by means of nuclear weapons.
The fall of an asteroid can provoke the eruption of supervolcanoes if the asteroid hits a thin section of the Earth's crust or the cover of the magma chamber of a volcano, or if the shock from the strike disturbs remote volcanoes. The molten iron formed by the fall of an iron asteroid could play the role of a "Stevenson's probe", if that is possible at all, that is, melt through the Earth's crust and mantle, forming a channel into the Earth's bowels, which is fraught with enormous volcanic activity. Though usually this did not occur when asteroids fell to the Earth, the lunar "seas" could have arisen in this way. Besides, outpourings of magmatic rocks could hide the craters from such asteroids. Such outpourings are the Siberian trap basalts and the Deccan plateau in India. The latter is simultaneous with two large impacts (Chicxulub and the Shiva crater). It is possible to assume that the shock waves from these impacts, or a third space body whose crater has not survived, provoked this eruption. It is not surprising that several large impacts may occur simultaneously. For example, comet cores can consist of several separate fragments; comet Shoemaker-Levy 9, which ran into Jupiter in 1994, left a dotted trace on it because it had already broken up into fragments by the moment of collision. Besides, there can be periods of intensive formation of comets, when the Solar system passes near another star, or as a result of collisions of asteroids in the asteroid belt.
Much more dangerous are air explosions of meteorites some tens of metres in diameter, which can cause false alarms in early warning systems for nuclear attack, or hits of such meteorites in areas where missiles are based.

Pustynsky in his article comes to the following conclusions: "According to the estimates made in the present article, the prediction of a collision with an asteroid is so far not guaranteed and is a matter of chance. It is impossible to exclude that a collision will occur completely unexpectedly. At the same time, for collision prevention it is necessary to have a lead time of the order of 10 years. Detection of an asteroid some months prior to collision would allow the evacuation of the population and of nuclear-dangerous plants in the falling zone. Collision with asteroids of small size (up to 1 km in diameter) will not lead to planet-wide consequences (excluding, of course, the practically improbable direct hit on an area where nuclear materials are concentrated). Collision with larger asteroids (approximately from 1 to 10 km in diameter, depending on the speed of collision) is accompanied by a most powerful explosion, the full destruction of the fallen body and the ejection into the atmosphere of up to several thousand cubic km of rock. In its consequences this phenomenon is comparable with the largest catastrophes of terrestrial origin, such as explosive eruptions of volcanoes. Destruction in the falling zone will be total, and the planet's climate will change sharply and will return to normal only after some years (but not decades or centuries!). The exaggeration of the threat of global catastrophe is confirmed by the fact that during its history the Earth has survived a multitude of collisions with similar asteroids, and this has not left a provably appreciable trace in its biosphere (at any rate, far from always). Only collision with larger space bodies (diameter more than ~15-20 km) can make a more appreciable impact on the biosphere of the planet. Such collisions occur less often than once in 100 million years, and we do not yet have techniques allowing us to calculate their consequences even approximately."
So, the probability of the destruction of mankind as a result of an asteroid impact in the XXI century is very small.

Asteroid threats in the context of technological development


It is easy to notice that the direct risks of collision with an asteroid decrease as technology develops. First of all, they are decreasing due to more accurate measurement of the probability itself, that is, due to more and more precise detection of dangerous asteroids and measurement of their orbits. (If, however, the assumption that we live during an episode of cometary bombardment is confirmed, the risk assessment will increase a hundredfold over the background.) Second, they are decreasing due to the growth of our ability to deflect asteroids.
On the other hand, the effects of asteroid strikes are becoming larger, not only because population density is growing, but because of the growing connectivity of the global system, as a result of which damage in one spot can backfire on the entire planet.
In other words, although the probability of collision is decreasing, the indirect risks related to the asteroid danger are increasing.
The main indirect risks are as follows:
A) The destruction of hazardous facilities at the impact site, for example a nuclear power plant. The whole mass of the station in such a case would be evaporated, and the release of radiation would be higher than in Chernobyl. In addition, there could be further nuclear reactions owing to the strong compression of the power plant as the asteroid destroys it. The chances of a direct hit by an asteroid on a nuclear plant are small, but they grow with the number of plants.
B) There is a risk that even a small group of meteors, moving at a certain angle toward a certain place on the Earth's surface, could trigger an early-warning system and cause an accidental nuclear war. The same consequences could follow from the air burst of a small asteroid (a few metres in size). The first option is more likely for the superpowers, whose missile-attack warning systems have poorly covered areas (as in the Russian Federation, which cannot track full missile trajectories), while the second is more likely for regional nuclear powers (India and Pakistan, North Korea, etc.), which cannot track missiles at all but are able to react to a single explosion.
C) Technology for moving asteroids will in the future create the hypothetical possibility of directing asteroids not only away from the Earth, but also toward it. And even if an asteroid impact is accidental, there will be rumours that it was sent on purpose. Yet hardly anyone will actually direct an asteroid at the Earth, as such an action is easy to spot, the accuracy is low, and it would have to be carried out decades before the impact.
D) Safe deflection of asteroids will require the creation of space weapons, which may be nuclear, laser or kinetic. Such weapons could be used against the Earth or against the satellites of a potential enemy. Although the risk of their use against the Earth is small, they still create a greater potential for damage than falling asteroids do.
E) Destroying an asteroid with a nuclear explosion would increase its lethal force through fragmentation - that is, a larger number of explosions over a larger area - as well as through radioactive contamination of the debris.
Modern technical means are able to deflect only relatively small asteroids, which do not pose a global threat. The real danger is dark cometary bodies several kilometres across, moving along elongated elliptical orbits at great speed.

However, in the future (perhaps as soon as 2030-2050) space could be quickly and cheaply scanned (and transformed) by self-replicating robots based on nanotechnology. They could build huge telescopes in space able to detect every dangerous body in the Solar System. It would be enough to land on an asteroid a microrobot that would multiply, take the asteroid apart into pieces and build an engine to change its orbit. Nanotechnology would also help to create self-sustaining human settlements on the Moon and other celestial bodies. This suggests that the problem of the asteroid hazard will become irrelevant within a few decades.

Thus, the problem of preventing a collision of the Earth with asteroids in the coming decades may only be a diversion of resources from global risks.


Firstly, because we still cannot deflect the objects that could really lead to the complete extinction of mankind.

Secondly, by the time a system of nuclear destruction of asteroids is created (or shortly thereafter), it will become obsolete, since nanotechnology could make the exploration of the Solar System quick and cheap by the middle of the 21st century, and perhaps earlier.

Third, because in a world divided into warring states, an asteroid deflection system would itself become a weapon in the event of war.

Fourth, because the probability of human extinction from an asteroid impact in the narrow period when an asteroid deflection system is already deployed but powerful nanotechnology has not yet been established is extremely small. This interval may be set at 20 years, say from 2030 to 2050, and the chance of a 10-kilometre body falling during this time - even if we assume that we live in a period of cometary bombardment, when the intensity is 100 times higher than average - is about 1 in 15,000 (based on an average rate for such bodies of once in 30 million years; see the sketch after this list). Moreover, if we consider the dynamics, we will be able to deflect the really dangerous objects only by the end of this period, and perhaps even later, since the larger the asteroid, the larger-scale and longer-term the deflection project must be. Although 1 in 15,000 is still an unacceptably high risk, it is commensurate with the risk of the use of space-based weapons against the Earth.
Fifth, anti-asteroid protection diverts attention from other global problems, owing to the limited attention span of people (and of the mass media) and to limited financial resources. This is because the asteroid danger is very easy to understand: it is easy to imagine the impact, easy to calculate its probability, and it is comprehensible to the general public; there is no doubt about its reality, and it is clear how to protect ourselves against it. (For example, the probability of a volcanic catastrophe comparable to an asteroid impact of the same energy is, by various estimates, 5 to 20 times higher - but we have no idea how it could be prevented.) This differs from the risks that are difficult to imagine and cannot be quantified, but which may imply a probability of extinction of tens of percent: the risks of AI, biotech, nanotech and nuclear weapons.

Sixth, if we talk about relatively small bodies like Apophis, it may be cheaper to evacuate the area of the future impact than to deflect the asteroid. And the impact area will most likely be in the ocean, so anti-tsunami measures would be needed.
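A minimal sketch of the estimate referred to in the fourth point above. The once-per-30-million-years rate, the 100-fold bombardment enhancement and the 20-year window are the assumptions stated in the text, not measured values:

    # Rough probability of a 10-km impact during a 20-year window of vulnerability.
    mean_interval_years = 30_000_000      # assumed average time between 10-km impacts
    bombardment_factor  = 100             # assumed enhancement during a cometary bombardment episode
    window_years        = 20              # period when deflection exists but nanotech does not

    rate_per_year = bombardment_factor / mean_interval_years
    probability   = rate_per_year * window_years      # ~6.7e-5
    print(f"P(impact in window) = {probability:.1e}  (~1 in {round(1 / probability):,})")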
Still, I do not call for abandoning anti-asteroid protection, because our first need is to find out whether we are living in a period of cometary bombardment. In that case, the probability of a 1 km body impact within the next 100 years could be about 6 percent. (This is based on data about hypothetical impacts in the last 10,000 years, such as the so-called Clovis comet, http://en.wikipedia.org/wiki/Younger_Dryas_impact_event, whose traces may be the roughly 500,000 crater-like formations called Carolina Bays, http://en.wikipedia.org/wiki/Carolina_bays, and the large crater near New Zealand thought to have been created in 1443, http://en.wikipedia.org/wiki/Mahuika_crater, etc.) It is necessary first of all to devote resources to the monitoring of dark comets and to the analysis of fresh craters.
The idea that we live during a cometary bombardment episode is advanced by a group of scientists, the Holocene Impact Working Group: https://en.wikipedia.org/wiki/Holocene_Impact_Working_Group
Napier has also written that we may strongly underestimate the number of dark comets (http://lib.convdocs.org/docs/index-124751.html?page=30): the number of such comets found so far is 100 times smaller than expected.
If we improve our observation techniques, we may be able to show that the probability of an extinction-level impact in the 21st century is essentially zero. If we prove this, there will be no need for large-scale space defence systems, which could be dangerous in themselves. A smaller-scale system may still be useful for stopping impactors with regional effects, like the Chelyabinsk meteor.

We have already demonstrated a reduced probability of impact by creating a large catalogue of near-Earth objects. A very important instrument here is the infrared telescope WISE. If its observations are correct, it has now found 90 percent of the planet-killing asteroids, which amounts to a 90 percent reduction of their risk in the near-term perspective.
In 2005 Congress gave the agency until 2020 to catalogue 90 percent of the total population of mid-sized NEOs at or above 140 meters in diameter - objects big enough to devastate entire regions on Earth. NASA has already catalogued 90 percent of the NEOs that could cause a planetary-scale catastrophe - those with a diameter of one kilometre or more - but is unlikely to meet the 2020 deadline for cataloguing mid-sized NEOs.

In 2016, however, questions arose about the validity of these data:
http://www.scientificamerican.com/article/for-asteroid-hunting-astronomersnathan-myhrvold-says-the-sky-is-falling1/
If the infrared observations are correct, they greatly reduce the chances of a large family of dark comets (though 99 percent of such comets spend most of their time beyond Mars, which makes them difficult to find).
In a 2015 article Napier discusses the dangers of centaur comets, which sometimes come from the outer Solar System, disintegrate, and produce periods of bombardment: https://www.ras.org.uk/images/stories/press/Centaurs/Napier.Centaurs.revSB.pdf
Comet Encke could be a fragment of such a larger disintegrated comet. Napier writes: "Analysis of the ages of the lunar microcraters (zap pits) on rocks returned in the Apollo programme indicate that the near-Earth interplanetary dust (IPD) flux has been enhanced by a factor of about ten over the past ~10 kyr compared to the long-term average.

The effects of running through the debris trail of a large comet are liable to be complex, and to involve both the deposition of fine dust into the mesosphere and, potentially, the arrival of hundreds or thousands of megaton-level bolides over the space of a few hours. Incoming meteoroids and bolides may be converted to micron-sized smoke particles (Klekociuk et al. 2005), which have high scattering efficiencies and so the potential to yield a large optical depth from a small mass. Modelling of the climatic effects of dust and smoke loading of the atmosphere has focused on the injection of such particulates in a nuclear war. Such work has implications for atmospheric dusting events of cosmic origin, although there are significant differences, of course. Hoyle & Wickramasinghe (1978) considered that the acquisition of ~10^14 g of comet dust in the upper atmosphere would have a substantial effect on the Earth's climate. Such an encounter is a reasonably probable event during the active lifetime of a large, disintegrating comet in an Encke-like orbit (Napier 2010).

Apart from their effects on atmospheric opacity, a swarm of Tunguska-level fireballs could yield wildfires over an area of order 1% of the Earth's surface."

Zone of defeat depending on force of explosion


Here we consider the destructive action of an explosion resulting from an asteroid impact (or from any other cause). A detailed analysis with similar conclusions can be found in Pustynsky's article.

The zone of destruction grows very slowly with the force of the explosion; this is true both for asteroids and for super-powerful nuclear bombs. Although the energy of the blast falls in proportion to the square of the distance from the epicentre, at very large explosions it falls much faster: first, because of the curvature of the Earth, which shields whatever is beyond the horizon (which is why nuclear explosions are most effective in the air rather than on the ground); and second, because the ability of matter to transfer a shock wave elastically is limited from above, and all energy beyond that limit is not transferred but turns into heat near the epicentre. For example, in the ocean there cannot be a wave higher than the ocean's depth, and since the epicentre of the explosion is a point (unlike the epicentre of an ordinary tsunami, which is a fault line), the wave height will decrease roughly linearly with distance. The surplus heat formed by the explosion is either radiated into space or remains as a lake of molten rock at the epicentre. The Sun delivers to the Earth about 1000 gigatons (10^22 joules) of light energy per day, so the thermal contribution of a super-explosion to the overall temperature of the Earth is insignificant. (On the other hand, the mechanism that spreads the heat of the explosion will not be streams of heated air, but the cubic kilometres of debris thrown out by the explosion, with a total mass comparable to the mass of the asteroid but lower energy; many fragments will have velocities close to the first cosmic velocity and will therefore fly on ballistic trajectories, as intercontinental missiles do. Within an hour they reach all corners of the Earth, and although, acting as kinetic weapons, they will not strike every point on the surface, on re-entry into the atmosphere they will release huge quantities of energy, heating the atmosphere over the whole area of the Earth, possibly to the ignition temperature of wood, which would aggravate the catastrophe further.)
We can roughly assume that the radius of the destruction zone grows in proportion to the fourth root of the force of the explosion (exact exponents are determined empirically by the military from test data and lie between 0.33 and 0.25, depending on the yield, etc.). Each ton of meteorite mass yields approximately 100 tons of TNT equivalent of energy, depending on the collision speed, which is usually a few tens of kilometres per second. (On this assumption a stony asteroid of 1 cubic km will release about 300 gigatons. The density of comets is much lower, but they can break up in the air, strengthening the strike, and, besides, they move on orbits perpendicular to ours with much greater speeds.) Taking the radius of complete destruction from a 1-megaton hydrogen bomb as 10 km, we can obtain destruction radii for asteroids of different sizes, assuming that the radius scales as the fourth root of the explosive force. For an asteroid of 1 cubic km it will be a radius of about 230 km; for an asteroid 10 km in diameter, about 1300 km; for a 100 km asteroid, a destruction radius of the order of 7000 km. For this radius of guaranteed destruction to exceed half the circumference of the Earth (20,000 km), that is, to cover the whole Earth with certainty, the asteroid would have to be of the order of 400 km in size. (If we instead assume that the destruction radius grows as the cube root of the energy, the diameter of an asteroid destroying everything would be about 30 km. The real value lies between these two figures (30-400 km); Pustynsky gives an independent estimate of 60 km.)
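The figures above can be reproduced with a short back-of-envelope sketch. It takes the text's own conventions as assumptions: roughly 300 gigatons of TNT per "cubic kilometre" of stony impactor (about 100 tons of TNT per ton of rock at typical impact speeds), a 10 km radius of complete destruction for a 1-megaton bomb, volume taken simply as the cube of the size in km, and fourth-root scaling. These are order-of-magnitude conventions, not precise impact physics:

    R_1MT_KM        = 10.0      # assumed radius of complete destruction for 1 Mt
    GT_PER_CUBIC_KM = 300.0     # ~100 t TNT per ton of rock at ~tens of km/s

    def destruction_radius_km(size_km, exponent=0.25):
        """Destruction radius for a stony body of given size, with R ~ E**exponent."""
        energy_mt = GT_PER_CUBIC_KM * size_km ** 3 * 1000    # megatons of TNT
        return R_1MT_KM * energy_mt ** exponent

    for size in (1, 10, 100, 400):
        print(f"{size:4} km body -> ~{destruction_radius_km(size):7.0f} km destruction radius")
    # Prints ~230, ~1300, ~7400 and ~21000 km respectively; with exponent=1/3 a body of
    # only ~30 km already gives ~20000 km, bracketing the 30-400 km range quoted above.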
Though these calculations are extremely approximate, they show that even the asteroid associated with the extinction of the dinosaurs did not directly devastate the whole territory of the Earth, nor even the whole continent where it fell. If the extinction was connected with an asteroid at all (the causes are now thought to have been complex), it was caused not by the strike itself but by its after-effect: the "asteroid winter" produced by dust carried through the atmosphere. A collision with an asteroid can also cause an electromagnetic pulse, as a nuclear bomb does, owing to the fast motion of plasma. Besides, it is interesting to ask whether thermonuclear reactions could occur in a collision with a comet whose speed is close to the maximum possible of about 100 km/s (a comet on a head-on course, the worst case), since at the point of impact there can be temperatures of millions of degrees and enormous pressures, as in the implosion of a nuclear bomb. Even if the contribution of such reactions to the energy of the explosion were small, they could produce radioactive contamination.

A strong explosion would also create strong chemical pollution of the whole atmosphere, at least by oxides of nitrogen, which would form rains of nitric acid. And a strong explosion would fill the atmosphere with dust, creating the conditions for a nuclear winter.

From the above it follows that a nuclear superbomb would be terrible not so much for the force of its explosion as for the quantity of radioactive fallout it would produce. It is also clear that the terrestrial atmosphere is the most powerful factor in spreading such effects.

Solar flashes and luminosity increase


What we know about the Sun gives no grounds for anxiety. The Sun cannot explode. Only processes unknown to us, or extremely improbable ones, could lead to a flare (coronal ejection) that would strongly scorch the Earth in the XXI century, though other stars do have flares millions of times stronger than solar ones. However, changes in the luminosity of the Sun do influence the Earth's climate, as the coincidence of the Little Ice Age in the XVII century with the Maunder minimum of sunspots suggests. Possibly, ice ages are also connected with luminosity fluctuations.

The gradual increase in the Sun's luminosity (about 10 percent per billion years) will in any case lead to the boiling of the oceans - taking other warming factors into account - within the next billion years (that is, much earlier than the Sun becomes a red giant, let alone a white dwarf). However, compared with the 100-year interval we are examining, this process is insignificant (unless it somehow combines with other processes leading to irreversible global warming - see below).
There are suggestions that as hydrogen burns out in the central part of the Sun, which is already happening, not only will the Sun's luminosity grow (it grows on account of the growth of the Sun's size, not of its surface temperature), but so will the instability of its burning. Possibly, the last ice ages are connected with this reduction in the stability of burning. A metaphor makes this clear: when a fire has plenty of firewood it burns brightly and steadily, but when most of the wood has burned through it starts to die down and then flares up brightly again when it finds an unburnt branch.

A reduction of the hydrogen concentration in the centre of the Sun could provoke convection, which normally does not occur in the Sun's core, so that fresh hydrogen would arrive in the core. Whether such a process is possible, whether it would be smooth or catastrophic, whether it would take years or millions of years, is difficult to say. Shklovsky assumed that as a result of such convection the Sun's temperature falls every 200 million years for a period of about 10 million years, and that we live in the middle of such a period. The dangerous moment would be the end of this process, when fresh fuel finally reaches the core and the Sun's luminosity increases. (This is, however, a marginal theory; one of the main problems that generated it, the solar neutrino problem, has since been resolved.)
It is important to emphasise, however, that according to our physical understanding the Sun cannot explode as a nova or supernova.

At the same time, to cut off intelligent life on Earth it would be enough for the Sun to warm up by 10 percent over 100 years (this would raise the temperature on Earth by 10-20 degrees without a greenhouse effect, but with the greenhouse effect taken into account it would most likely exceed the critical threshold of irreversible warming). Such slow and rare changes in the temperature of stars of the solar type would be difficult to notice by astronomical observation of sun-like stars, since the necessary instrumental accuracy has only recently been achieved. (Besides, a logical paradox of the following kind is possible: sun-like stars are, by definition, stable stars of spectral class G7; it is not surprising that by observing them we find that they are stable.)
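For orientation, a zeroth-order estimate of the direct effect follows from the equilibrium relation T ~ L^(1/4); it gives roughly 7 degrees for a 10 percent rise in luminosity, and greenhouse and water-vapour feedbacks would push the real figure higher. The sketch below illustrates only this scaling and is not a climate model:

    # Feedback-free equilibrium response of surface temperature to a luminosity change.
    T_NOW_K = 288.0                  # present mean surface temperature, K
    luminosity_increase = 0.10       # +10 % solar luminosity, as in the scenario above

    delta_T = T_NOW_K * ((1 + luminosity_increase) ** 0.25 - 1)
    print(f"Feedback-free warming: ~{delta_T:.1f} K")    # ~6.9 K; feedbacks raise this further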
So, one variant of global catastrophe is that, as a result of some internal processes, the luminosity of the Sun steadily increases by a dangerous amount (and we know that sooner or later this will happen). At the moment the Sun is on an ascending century-scale trend of activity, but no special anomalies in its behaviour have been noticed. The probability that this happens in the XXI century is vanishingly small.
The second variant of a global catastrophe connected with the Sun is that two improbable events coincide: a very large flare occurs on the Sun, and its emission is directed at the Earth. Concerning the probability distribution of such events, it is reasonable to assume that the same empirical law operates here as for earthquakes and volcanoes: a 20-fold increase in the energy of an event leads to a 10-fold decrease in its frequency (a Gutenberg-Richter-type law of recurrence). In the XIX century a flare was observed that was, by modern estimates, about 5 times stronger than the strongest flare of the XX century. Possibly, once in tens or hundreds of thousands of years, flares occur on the Sun that are similar in rarity and scale to terrestrial eruptions of supervolcanoes. Still, these are extremely rare events. Large solar flares, even if not directed at the Earth, can slightly increase the solar luminosity and lead to additional heating of the Earth. (Ordinary flares contribute no more than 0.1 percent.)
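A small sketch of how such a recurrence law translates into frequencies. The once-per-century baseline for the strongest observed flare and the "20x energy means 10x rarer" exponent are assumptions taken from the paragraph above, not measured solar statistics:

    import math

    # "20x more energy => 10x less frequent" corresponds to frequency ~ E**(-alpha).
    alpha = math.log(10) / math.log(20)      # ~0.77
    baseline_per_year = 1 / 100              # assume the strongest 20th-century flare is ~once a century

    for factor in (5, 100, 1000):            # flares stronger than that baseline by this factor
        freq = baseline_per_year * factor ** (-alpha)
        print(f"{factor:5}x stronger flare: ~once per {1 / freq:,.0f} years")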
At the moment mankind is incapable of affecting processes on the Sun, and this looks much more difficult than influencing volcanoes. Ideas of dropping hydrogen bombs on the Sun to initiate a thermonuclear reaction look unpersuasive (though such ideas have been expressed, which says something about the tireless search of the human mind for a doomsday weapon).

There is a rather precisely calculated scenario of the impact on the Earth of the magnetic component of a solar flare. In the worst case (which depends on the strength of the magnetic impulse and on its orientation - it should be opposite to the terrestrial magnetic field), such a flare would create strong induced currents in long-distance power lines, burning out transformers at substations. Under normal conditions the replacement cycle of transformers takes 20-30 years, and if all of them burn down at once there will be nothing to replace them with, since manufacturing a similar quantity of transformers would take many years - which would be difficult to organise without electricity. Such a situation would hardly lead to human extinction, but it is fraught with a global economic crisis and wars, which could start a chain of further deterioration. The probability of such a scenario is difficult to estimate, as we have possessed electric networks for only about a hundred years.

Gamma ray bursts


Gamma-ray bursts are intense short streams of gamma radiation coming from deep space. They are apparently emitted in narrow beams, so their energy is more concentrated than in ordinary stellar explosions. Possibly, strong gamma-ray bursts from nearby sources caused several of the mass extinctions tens and hundreds of millions of years ago. It is supposed that gamma-ray bursts occur in collisions of black holes and neutron stars, or in collapses of massive stars. A close gamma-ray burst could destroy the ozone layer and even ionise the atmosphere. However, in the nearest neighbourhood of the Earth there are no visible candidates either for sources of gamma-ray bursts or for supernovas. (The nearest candidate gamma-ray burst source, the star Eta Carinae, is far enough away - of the order of 7000 light years - and its axis is unlikely to be directed at the Earth when it inevitably explodes in the future, since gamma-ray bursts propagate as narrow beamed jets. However, the axis of the potential hypernova WR 104, at almost the same distance, points almost directly at the Earth. This star will explode within the next several hundred thousand years, which means the chance of a catastrophe from it in the XXI century is less than 0.1%, and taking into account the uncertainty of its rotation parameters and of our knowledge of gamma-ray bursts, it is even lower.) Therefore, even allowing for the effect of observation selection, which in some cases increases the expected frequency of future catastrophes relative to the past by up to 10 times (see my article "Anthropic principle and Natural catastrophes"), the probability of a dangerous gamma-ray burst in the XXI century does not exceed thousandths of a percent. Mankind could survive even a serious gamma-ray burst in various bunkers.
Estimating the risk of gamma-ray bursts, Boris Stern writes: "Take a moderate case of an energy release of 10^52 erg and a distance to the burst of 3 parsecs, 10 light years, or 10^19 cm - within such limits there are tens of stars. At such a distance, within a few seconds, 10^13 erg will be deposited on each square centimetre of the planet lying in the path of the gamma rays. This is equivalent to the explosion of a nuclear bomb on each hectare of the sky! The atmosphere does not help: though the energy is deposited in its upper layers, a considerable part will instantly reach the surface in the form of light. Clearly, everything alive on half of the planet would be instantly exterminated, and on the second half a little later, through secondary effects. Even if we take a distance 100 times greater (the thickness of the galactic disk, with hundreds of thousands of stars), the effect (a nuclear bomb per square with a side of 10 km) would be a very heavy blow, and here one would have to estimate seriously what would survive, and whether anything would survive at all." Stern believes that a gamma-ray burst occurs in our galaxy on average once in a million years. A gamma-ray burst from a star such as WR 104 could cause intense destruction of the ozone layer over half of the planet. Possibly, a gamma-ray burst was the cause of the Ordovician mass extinction 443 million years ago, when 60% of species were lost (a considerably larger share of individuals, since for the survival of a species the preservation of only a few individuals is enough). According to John Scalo and Craig Wheeler, gamma-ray bursts exert an essential influence on the biosphere of our planet approximately every five million years.
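Stern's flux figure can be checked with one line of arithmetic, assuming isotropic emission of 10^52 erg at about 10 light years (beaming is ignored; this only reproduces his order of magnitude):

    import math

    E_erg   = 1e52                                   # assumed isotropic energy release
    r_cm    = 1e19                                   # ~3 parsecs, ~10 light years
    fluence = E_erg / (4 * math.pi * r_cm ** 2)      # erg per cm^2 at the Earth
    per_hectare_kt = fluence * 1e8 / 4.184e19        # 1 hectare = 1e8 cm^2; 1 kt TNT = 4.184e19 erg
    print(f"fluence ~ {fluence:.1e} erg/cm^2, ~{per_hectare_kt:.0f} kt TNT per hectare")
    # ~8e12 erg/cm^2 and ~20 kt per hectare, i.e. roughly a nuclear bomb per hectare.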
Even a distant gamma-ray burst or other high-energy cosmic event can be dangerous through radiation exposure of the Earth - not only through direct radiation, which the atmosphere largely blocks (though avalanches of high-energy particles from cosmic rays do reach the surface), but also through the formation of radioactive atoms in the atmosphere, leading to a scenario similar to the one described in connection with the cobalt bomb. In addition, gamma radiation causes oxidation of atmospheric nitrogen, creating the opaque poisonous gas nitrogen dioxide, which forms in the upper atmosphere, can block sunlight and cause a new ice age. There is a hypothesis that the neutrino radiation arising in supernova explosions can in some cases lead to mass extinctions: neutrinos scatter elastically on heavy atoms with higher probability, the energy of this scattering is sufficient to break chemical bonds, and therefore neutrinos would cause DNA damage more often than other kinds of radiation of much greater energy (J. I. Collar. Biological Effects of Stellar Collapse Neutrinos. Phys. Rev. Lett. 76 (1996) 999-1002, http://arxiv.org/abs/astro-ph/9505028).

The danger of a gamma-ray burst lies in its suddenness: it begins without warning from invisible sources and propagates at the speed of light. In any case, a gamma-ray burst can strike only one hemisphere of the Earth, as it lasts only a few seconds or minutes.
Activation of the galactic core (where there is a huge black hole) is also a very improbable event. In distant young galaxies such cores actively absorb matter, which spirals into an accretion disk as it falls and radiates intensely. This radiation is very powerful and could also interfere with the emergence of life on planets. However, the core of our galaxy is very large and can therefore swallow stars almost at once, without tearing them apart, and hence with less radiation. Besides, it is readily observed in the infrared (the source Sagittarius A*), though hidden by a thick dust layer in the optical range, and near the black hole there is no considerable quantity of matter ready for absorption - only one star on an orbit with a period of 5 years, and even it may keep orbiting for a very long time. Most importantly, it is very far from the Solar System.

Besides distant gamma-ray bursts, there are soft gamma repeaters, connected with catastrophic processes on special neutron stars - magnetars. On August 27, 1998, a flare on a magnetar led to an instant decrease in the height of the Earth's ionosphere by 30 km; however, that magnetar was at a distance of 20,000 light years. No magnetars are known in the vicinity of the Earth, but detecting them may not be easy.

Supernova stars

A real danger to the Earth would be a close supernova explosion at a distance of 25 light years or less. But in the vicinity of the Sun there are no stars that could become dangerous supernovas. (The nearest candidates, Mira and Betelgeuse, are hundreds of light years away.) Besides, the radiation of a supernova is a rather slow process (it lasts months), and people would have time to hide in bunkers. Finally, only if a dangerous supernova were located strictly in the Earth's equatorial plane (which is improbable) could it irradiate the entire terrestrial surface; otherwise one of the poles would be spared. See Michael Richmond's review "Will a Nearby Supernova Endanger Life on Earth?", http://www.tass-survey.org/richmond/answers/snrisks.txt. A relatively close supernova could also be a source of cosmic rays, which would lead to a sharp increase in cloud cover on the Earth owing to the increased number of condensation nuclei for water; this could cause a sharp cooling of the climate for a long period. (Nearby Supernova May Have Caused Mini-Extinction, Scientists Say, http://www.sciencedaily.com/releases/1999/08/990803073658.htm)

Super-tsunami
Ancient human memory keeps the enormous flood as the most terrible catastrophe. However, there is not enough water on Earth for the ocean level to rise above the mountains. (Reports of the recent discovery of underground oceans are somewhat exaggerated: in fact they concern rocks with an elevated water content, at a level of about 1 percent.) The average depth of the world ocean is about 4 km, and the limiting maximum height of a wave is of the same order - if we discuss only the possibility of such a wave, rather than whether causes capable of creating it are possible. That is less than the height of the high-mountain plateaus in the Himalayas, where people also live. Variants in which such a wave would be possible include a huge tidal wave raised by a very massive body flying close to the Earth, or a displacement of the Earth's axis of rotation or a change in its speed of rotation. All these variants, though they appear in various doomsday "horror stories", look impossible or improbable.

So it is very improbable that a huge tsunami will destroy all people, since submarines, many ships and planes would escape. However, a huge tsunami could destroy a considerable part of the population of the Earth and push mankind into a post-apocalyptic stage, for several reasons:

1. The energy of a tsunami, as a surface wave, decreases as 1/R if the tsunami is caused by a point source, and hardly decreases at all if the source is linear (as in an earthquake along a fault).
2. Losses in the transmission of energy by the wave are small.
3. A considerable share of the population of the Earth, and a huge share of its scientific, industrial and agricultural potential, is located directly on the coast.
4. All oceans and seas are connected.
5. The idea of using a tsunami as a weapon already arose in the USSR in connection with the idea of creating gigaton bombs.

The good news is that the most dangerous tsunamis are generated by linear natural sources - movements along geological faults - while the sources of tsunami most accessible to artificial generation are point sources: explosions of bombs, falls of asteroids, collapses of mountains.

Super-Earthquake
We could call a super-earthquake a hypothetical large-scale quake leading to the complete destruction of human-built structures over the entire surface of the Earth. No such quake has happened in human history, and the only scientifically solid scenario for one seems to be a large asteroid impact.

Such an event could not result in human extinction by itself, as there would still be ships, planes, and people in the countryside. But it would unequivocally destroy the whole of technological civilisation. To do so it would need an intensity of around 10 on the Mercalli scale (http://earthquake.usgs.gov/learn/topics/mercalli.php).

It would be interesting to assess the probability of worldwide earthquakes that could destroy everything on the Earth's surface. Plate tectonics as we know it cannot produce them, but the distribution of the largest earthquakes could have a long, heavy tail that might include worldwide quakes.

So, how could it happen?
1) An asteroid impact could certainly produce a worldwide earthquake; I think an asteroid of about 1 mile is enough to create one.

2) A change of buoyancy of a large land mass might result in a whole continent uplifting, possibly by miles. (This is just my conjecture, not a proven scientific fact, so the possibility needs further assessment.) A smaller-scale event of this type happened in 1957 during the Gobi-Altay earthquake, when a whole mountain ridge moved: https://en.wikipedia.org/wiki/1957_Mongolia_earthquake
3) Unknown processes in the mantle sometimes result in large deep earthquakes: https://en.wikipedia.org/wiki/2013_Okhotsk_Sea_earthquake
4) Very hypothetical changes in the Earth's core might also result in worldwide earthquakes - if the core somehow collapsed because of a change in the crystal structure of its iron, or because of an explosion of a (hypothetical) natural uranium nuclear reactor inside it. Passing through a cloud of dark matter might also activate the Earth's core, which could be heated by the annihilation of dark matter particles, as was suggested in one recent study: http://www.sciencemag.org/news/2015/02/did-dark-matter-kill-dinosaurs Such warming of the Earth's core would cause it to expand and might trigger large deep quakes.
5) A superbomb explosion. Blockbuster bombs in WW2 were used to create mini-quakes as their main destructive effect, exploding after penetrating the ground. Large nuclear weapons could be used the same way, but a super-earthquake requires energy several orders of magnitude beyond the current power of nuclear weapons; many superbombs might be needed to create a superquake.
6) The Earth cracking in the area of oceanic rifts. I have read suggestions that oceanic rifts expand not gradually but in large jumps. These mid-oceanic rifts create new ocean floor (https://en.wikipedia.org/wiki/Mid-Atlantic_Ridge); the evidence cited is large steps in the ocean floor in rift zones. Boiling of water trapped in a rift and in contact with magma might also contribute to an explosive, zip-style rupture of the rift. But this idea may come from fringe catastrophism, so it should be taken with caution.

7) Supervolcano explosions. Large-scale eruptions such as a kimberlite pipe explosion would also produce an earthquake felt over the whole Earth, though not uniformly; it would have to be much stronger than the Krakatoa explosion of 1883. Large explosions of natural explosives (https://en.wikipedia.org/wiki/Trinitrotoluene) at a depth of about 100 km have been suggested as a possible mechanism of kimberlite explosions.
Effects of a superquake:
1. A superquake would certainly come with a megatsunami, which would cause most of the damage. The supertsunami could be miles high in some areas and scenarios. The tsunamis could have different etiologies; for example, resonance could play a role, or a change in the Earth's speed of rotation.
2. Ground liquefaction (https://en.wikipedia.org/wiki/Soil_liquefaction) could produce ground waves, that is, a kind of surface wave in certain soils (this is my own idea, which needs more research).
3. Supersonic shock waves and high-frequency vibration. A superquake could come with unusual wave patterns, which normally dissipate in soil or do not appear at all. It could produce killing sound above 160 dB, or supersonic shock waves that reflect from the surface and destroy solid structures by spalling, in the same way as anti-tank squash-head munitions do (https://en.wikipedia.org/wiki/High-explosive_squash_head).
4. Other volcanic events and gas releases. Methane deposits in the Arctic would be destabilised, and methane, a strong greenhouse gas, would erupt to the surface. Carbon dioxide would be released from the oceans as a result of shaking (the same way shaking a can of soda produces bubbles). Other gases, including sulphur compounds and CO2, would be released by volcanoes.
5. Most dams would fail, resulting in flooding.
6. Nuclear facilities would melt down (see Seth Baum's discussion: http://futureoflife.org/2016/07/25/earthquake-existentialrisk/#comment-4143).
7. Biological weapons would be released from storage facilities.
8. Nuclear warning systems would be triggered.
9. All roads and buildings would be destroyed.
10. Large fires would break out.
11. As the natural ability of the Earth to dissipate seismic waves became saturated, the waves would reflect inside the Earth several times, resulting in a very long, reverberating quake.

12. The waves from a surface event would focus on the opposite side of the Earth, as may have happened after the Chicxulub asteroid impact, which coincides with the Deccan Traps on the opposite side of the Earth and caused comparable destruction there.
13. A large displacement of mass could result in a small change in the Earth's speed of rotation, which would contribute to tsunamis.
14. Secondary quakes would follow, as energy would be released from tectonic tensions and mountain collapses.
Large but non-global earthquakes could also become precursors of global catastrophes in several ways (the following podcast by Seth Baum is devoted to this possibility: http://futureoflife.org/2016/07/25/earthquake-existentialrisk/#comment-4147):
1) Destruction of biological facilities such as the CDC, which holds smallpox samples and other viruses.
2) Nuclear meltdowns.
3) An economic crisis, or a slowing of technological progress, in the case of a large earthquake in San Francisco or another important area.
4) The start of a nuclear war.
5) X-risk prevention groups are disproportionately concentrated in San Francisco and around London - more concentrated than the possible sources of risk - so a devastating earthquake in San Francisco could greatly reduce our ability to prevent x-risks.

Polarity reversal of the magnetic field of the Earth


We live in the period of easing and probably

polarity reversal of the magnetic field

of the Earth. In itself inversion of a magnetic field will not result in extinction of people as
polarity reversal already repeatedly occurred in the past without appreciable harm. In the
process of polarity reversal the magnetic field could fall to zero or to be orientated toward
Sun (pole will be on equator) which would lead to intense suck of charged particles into
the atmosphere. The simultaneous combination of three factors - falling to zero of the
magnetic field of the Earth, exhaustion of the ozone layer and strong solar flash could
result in death of all life on Earth, or: at least, to crash of all electric systems that is fraught
with falling of a technological civilisation. And itself this crash is not terrible, but is terrible
what will be in its process with the nuclear weapon and all other technologies.

Nevertheless, the magnetic field decreases slowly enough (though the speed of the process is growing) that it is unlikely to reach zero in the coming decades. Another catastrophic scenario is that a change of the magnetic field is connected with changes in the flows of magma in the core, which could somehow influence global volcanic activity (there are data on correlations between periods of volcanic activity and periods of pole reversal). A third risk is a possibly wrong understanding of the reasons for the existence of the Earth's magnetic field.

There is a hypothesis that the growth of the solid inner core of the Earth has made the Earth's magnetic field less stable, exposing it to more frequent polarity reversals, which is consistent with the hypothesis of a weakening of the protection that we infer from the anthropic principle.

Emerge of new illness in the nature


It is extremely improbable that a single illness will appear capable of destroying all people at once. Even in the case of a mutation of bird flu or of bubonic plague, many people would survive or simply not catch the disease. However, as the number of people grows, so does the number of "natural bioreactors" in which a new virus can be cultivated. Therefore it is impossible to exclude the chance of a large pandemic in the spirit of the "Spanish" flu of 1918. Though such a pandemic could not kill all people, it could seriously damage the level of development of society, lowering it to one of the post-apocalyptic stages. Such an event can happen only before powerful biotechnologies appear, as these will be able to create medicines against it quickly enough - and will simultaneously eclipse the risks of natural illnesses through the much greater speed with which artificial diseases can be created. A natural pandemic is also possible at one of the post-apocalyptic stages, for example after a nuclear war, though in that case the risks of the application of biological weapons would prevail. For a natural pandemic to become really dangerous to all people, a set of essentially different deadly agents would have to appear simultaneously, which is naturally improbable. There is also a chance that powerful epizootics - such as colony collapse disorder of bees (CCD), the African fungus on wheat (Uganda mold UG99), bird flu and the like - will break the human supply system in such a way as to cause a world crisis fraught with wars and a decrease in the level of development. The appearance of a new illness would strike not only at the population, but also at the connectivity which is an important factor in the existence of a unified planetary civilisation. The growth of the population and the increase in the volume of identical agricultural crops raise the chances of the accidental appearance of a dangerous virus, as the speed of "search" increases. It follows that there is a certain limit to the number of interconnected individuals of one species, beyond which new dangerous illnesses will arise every day. Among real-life illnesses it is necessary to note two:
Bird flu. As has already been said repeatedly, it is not bird flu itself that is dangerous, but a possible mutation of the H5N1 strain into a form transmissible from human to human. For this, in particular, the attachment proteins on the surface of the virus would have to change so that it attaches not deep in the lungs but higher up, where the virus has more chances of getting out in cough droplets. It is possible that this is a rather simple mutation. Though there are different opinions on whether H5N1 is capable of mutating this way, history already contains precedents of deadly flu epidemics. The worst estimate of the number of possible victims of a mutated bird flu was 400 million people. And though this would not mean the full extinction of mankind, it would almost certainly send the world into a post-apocalyptic stage.
AIDS. This illness in its modern form cannot lead to the full extinction of mankind, though it has already sent a number of African countries into a post-apocalyptic stage. There are interesting arguments by Supotinsky about the nature of AIDS and about how epidemics of retroviruses have repeatedly culled the population of hominids. He also assumes that HIV has a natural carrier, probably a microorganism. If AIDS could be transmitted like the common cold, the fate of mankind would be sad. However, even now AIDS is deadly in almost 100% of cases, and it develops slowly enough to have time to spread.
We should also note new strains of microorganisms that are resistant to antibiotics, for example hospital infections of Staphylococcus aureus and drug-resistant tuberculosis. The process by which various microorganisms acquire resistance to antibiotics is ongoing, and such organisms spread more and more widely, which at some moment could produce a cumulative wave of many resistant illnesses (against a background of weakened human immunity). Certainly, one may count on biological supertechnologies defeating them, but if there is a delay in the appearance of such technologies, the fate of mankind is not good. The revival of smallpox, plague and other past illnesses is possible, but separately each of them cannot destroy all people. By one hypothesis, Neanderthals died out because of a variety of mad cow disease, that is, an illness caused by a prion (an autocatalytic misfolded form of a protein) and spread by means of cannibalism; so we cannot exclude the risk of human extinction from a natural illness either.

Finally, the story of how the virus of the "Spanish" flu was recovered from burial sites, its genome read and then published on the Internet, looks absolutely irresponsible. Under public pressure the genome was later removed from open access. And there was also a case when this virus was dispatched by mistake to thousands of laboratories around the world for equipment testing.

Marginal natural risks


Below we mention global risks connected with natural events whose probability in the XXI century is smallest and whose very possibility is not generally accepted. Though I believe that these events are, in general, impossible, they should be taken into consideration, and I think it is necessary to create a separate category for them in our list of risks so that, following the precautionary principle, we keep a certain vigilance for new information able to confirm these assumptions.

Hypercanes

Kerry Emanuel of MIT has put forward the hypothesis that in the past the Earth's atmosphere was much less stable, resulting in mass extinctions. If the temperature of the ocean surface rose to 15-20 degrees above normal, which is possible as a result of a sharp global warming, an asteroid fall or an underwater eruption, it would give rise to a so-called hypercane: a huge storm with wind speeds of roughly 200-300 metres per second, the size of a continent, a long lifetime, and a central pressure of about 0.3 atmospheres. Moving away from its place of origin, such a hypercane would destroy all life on land, while a new hypercane would form in its place over the warm ocean site. (This idea is used in John Barnes's novel "Mother of Storms".)

Emanuel has shown that when an asteroid with a diameter of more than 10 km falls into a shallow sea (as happened 65 million years ago near Mexico, the event associated with the extinction of the dinosaurs), a patch of high water temperature about 50 km across can form, which would be enough to generate a hypercane. A hypercane ejects a huge amount of water and dust into the upper atmosphere, which could lead to dramatic global cooling or warming.
http://en.wikipedia.org/wiki/Great_Hurricane_of_1780
http://en.wikipedia.org/wiki/Hypercane
Emanuel, Kerry (1996-09-16). "Limits on Hurricane Intensity". Center for Meteorology and Physical Oceanography, MIT. http://wind.mit.edu/~emanuel/holem/node2.html#SECTION00020000000000000000
Did storms land the dinosaurs in hot water? http://www.newscientist.com/article/mg14519632.600-did-storms-land-the-dinosaurs-in-hot-water.html

Unknown processes in the core of the Earth


There are assumptions that the source of terrestrial heat is a natural nuclear reactor on uranium, several kilometres in diameter, at the centre of the planet. Under certain conditions, V. Anisichkin assumes - for example, in a collision with a large comet - it could go supercritical and cause the explosion of the planet, which, possibly, is what caused the explosion of Phaeton, from which part of the asteroid belt may have been generated. The theory is obviously disputable, since even the existence of Phaeton is not proven; on the contrary, it is considered that the asteroid belt was formed from independent planetesimals. Another author, R. Raghavan, assumes that a natural nuclear reactor at the centre of the Earth has a diameter of 8 km and could cool down and cease to generate terrestrial heat and the magnetic field.

If, by geological measures, certain processes have already ripened, it means it is much easier to pull the trigger and start them - and so human activity could wake them. The distance to the boundary of the terrestrial core is about 3000 km, while the distance to the Sun is 150,000,000 km. Every year about ten thousand people perish from geological catastrophes, and nobody from solar ones. Directly under us there is a huge cauldron of magma impregnated with compressed gases. The largest extinctions of living beings correlate well with epochs of intensive volcanic activity. Processes in the core in the past were probably the causes of such terrible phenomena as trap (flood-basalt) volcanism. At the Permian boundary, 250 million years ago, 2 million cubic km of lava poured out in Eastern Siberia, a thousand times more than the eruption volumes of modern supervolcanoes. It led to the extinction of 95% of species.

Processes in the core are also connected with changes in the Earth's magnetic field, whose physics is not yet well understood. V. A. Krasilov, in his article "Model of biospheric crises. Ecosystem reorganisations and biosphere evolution", assumes that periods of invariance, followed by periods of variability, of the Earth's magnetic field precede enormous trap eruptions. We now live in a period of variability of the magnetic field, but not after a long pause. So, in the natural course of events - periods of variability of the magnetic field last tens of millions of years, alternating with no less long periods of stability - we have millions of years before the next act of trap volcanism, if it happens at all. The basic danger here is that people, by deep penetration into the Earth, could push these processes, if they have already ripened to a critical level.
In the liquid terrestrial core, the most dangerous thing is the gases dissolved in it. They are capable of bursting out to the surface if they are given a channel. As heavy iron settles downwards, it is chemically reduced (at the expense of heat), and more and more gas is liberated, driving the process of degassing of the Earth. There are suggestions that the powerful atmosphere of Venus arose rather recently, as a result of intensive degassing of its interior. A certain danger lies in the temptation to obtain free energy from the Earth's interior by pumping out heated magma (though if this were done in places not connected with mantle plumes it should be safe enough). There is an assumption that the spreading of the ocean floor from mid-ocean rift zones occurs not smoothly but in jerks, which, on the one hand, are much rarer than earthquakes in subduction zones (which is why we have not observed them), but are much more powerful. The following metaphor is pertinent here: the bursting of a balloon is a much more powerful process than its crumpling. The melting of glaciers leads to the unloading of lithospheric plates and to the strengthening of volcanic activity (for example, in Iceland, by a factor of 100). Therefore the future melting of the Greenland ice sheet is dangerous.
Finally, there are bold assumptions that in the centre of the Earth (and also of other planets and even stars) there are microscopic (on astronomical scales) relict black holes which arose at the time of the Big Bang. See A. G. Parhomov's article "On the possible effects connected with small black holes". Under Hawking's theory, relict holes should evaporate slowly, but with increasing speed towards the end of their existence, so that in its last seconds such a hole produces a flash with an energy equivalent to the annihilation of approximately 1000 tons of mass (228 tons in the last second), which is approximately equivalent to 20,000 gigatons of TNT - roughly the energy of the collision of the Earth with an asteroid 10 km in diameter. Such an explosion would not destroy the planet, but would cause an earthquake of huge force over the whole surface, probably sufficient to destroy all structures and to throw civilisation back to a deeply post-apocalyptic level. However, people would survive, at least those who were in planes and helicopters at that moment. A microscopic black hole in the centre of the Earth would undergo two processes simultaneously - accretion of matter and energy loss by Hawking radiation - which could be in balance; however, a shift of the balance in either direction would be fraught with catastrophe: either the explosion of the hole, or the absorption of the Earth, or its destruction through a stronger release of energy during accretion. I remind the reader that there are no facts confirming the existence of relict black holes; it is only an improbable assumption which we consider proceeding from the precautionary principle.
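The energy figure quoted for the final flash can be recovered from E = mc^2; the 1000-ton final mass is the assumption taken from the paragraph above:

    # Energy equivalent of annihilating ~1000 tons of mass, in gigatons of TNT.
    m_kg        = 1000 * 1000          # 1000 tons
    c           = 3.0e8                # speed of light, m/s
    E_joules    = m_kg * c ** 2        # ~9e22 J
    gigaton_tnt = 4.184e18             # joules per gigaton of TNT
    print(f"E ~ {E_joules:.1e} J ~ {E_joules / gigaton_tnt:,.0f} Gt TNT")   # ~21,500 Gt, i.e. ~20,000 Gt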

Sudden degassing of the gases dissolved in the world ocean


Gregory Ryskin published in 2003 the article "Methane-driven oceanic eruptions and mass extinctions", in which he considers the hypothesis that disturbances of the metastable state of gases dissolved in sea water, first of all methane, were the cause of many mass extinctions. The solubility of methane grows with pressure, so at depth it can reach considerable concentrations. But this state is metastable: if the water is stirred, a chain reaction of degassing begins, as in an opened bottle of champagne. The energy released could exceed the energy of all nuclear arsenals on Earth by a factor of 10,000. Ryskin shows that in the worst case the mass of the released gases could reach tens of billions of tons, which is comparable to the mass of the whole biosphere of the Earth. The release of the gases would be accompanied by powerful tsunamis and by the burning of the gases. This could result either in the cooling of the planet through the formation of soot, or, on the contrary, in an irreversible warming, since the released gases are greenhouse gases. The necessary conditions for the accumulation of dissolved methane in ocean depths are anoxia (the absence of dissolved oxygen, as, for example, in the Black Sea) and the absence of mixing. The decomposition of methane hydrates on the seabed could also feed the process. To cause catastrophic consequences, Ryskin thinks, the degassing of even a small area of the ocean is enough. The sudden degassing of Lake Nyos, which in 1986 took the lives of 1700 people, is an example of a catastrophe of this sort. Ryskin notes that the question of the current accumulation of dissolved gases in the world ocean requires further research.

Such an eruption would be relatively easy to provoke by lowering a pipe into the water and starting to pump water upwards, which could set off a self-reinforcing process. It could also happen accidentally during deep seabed drilling. A large quantity of hydrogen sulfide has accumulated in the Black Sea, and there are anoxic areas there as well.

Gregory Ryskin. Methane-driven oceanic eruptions and mass extinctions. Geology 31, 741-744, 2003. http://pangea.stanford.edu/Oceans/GES205/methaneGeology.pdf


Explosions of other planets of the Solar System


There are other assumptions about possible causes of the explosion of planets, besides the explosions of uranium reactors at the centres of planets suggested by Anisichkin - namely, special chemical reactions in electrolysed ice. E. M. Drobyshevsky, in his article "Danger of explosion of Callisto and the priority of space missions" (Technical Physics, 1999, vol. 69, no. 9, http://www.ioffe.ru/journals/jtf/1999/09/p10-14.pdf), assumes that such events regularly occur in the ice satellites of Jupiter and that they are dangerous to the Earth through the formation of a huge meteoric stream. Electrolysis of the ice occurs as the celestial body containing it moves through a magnetic field, which induces powerful currents. These currents decompose water into hydrogen and oxygen, which leads to the formation of an explosive mixture. Drobyshevsky states the hypothesis that in all these satellites the process has already run to completion, except in Callisto, which could explode at any moment, and suggests directing considerable funds to the research and prevention of this phenomenon. (It should be noted that in 2007 comet Holmes flared up, and nobody knows why - and electrolysis of its ice during its passage near the Sun is possible.)

I would note that if Drobyshevsky's hypothesis is correct, the very idea of a research mission to Callisto, with deep drilling of its interior in search of electrolysed ice, is dangerous, because it could trigger the explosion.
In any case, whatever might cause the destruction of another planet or of a large satellite in the Solar System, this would represent a long-lasting threat to terrestrial life through the fall of fragments. (For a description of one hypothesis about such fragments see: "An asteroid breakup 160 Myr ago as the probable source of the K/T impactor", http://www.nature.com/nature/journal/v449/n7158/abs/nature06070.html)

Cancellation of the "protection" provided to us by the anthropic principle


I consider this question in detail in the article "Natural catastrophes and the Anthropic principle". The essence of the threat is that intelligent life on Earth most likely arose near the end of the period of stability of the natural factors necessary for its maintenance. Or, in short: the future is not similar to the past, because we see the past through the filter of observation selection. An example: a certain man has won at roulette three times in a row, betting on a single number. Using inductive logic, he comes to the false conclusion that he will keep winning. However, if he knew that 30,000 people were playing alongside him and that all of them were eliminated, he could come to the truer conclusion that with chances of 35 to 36 he will lose in the following round. In other words, his period of stability, consisting of a series of three wins, has ended.
For intelligent life to form on Earth, a unique combination of conditions had to hold for a long time (stable luminosity of the Sun, absence of nearby supernovae, absence of collisions with very large asteroids, etc.). However, it does not follow at all that these conditions will continue to hold forever. Accordingly, in the future we can expect these conditions to gradually disappear. The speed of this process depends on how improbable and unique the combination of conditions was that allowed intelligent life to arise on Earth (as in the roulette example: the more improbable the observed winning streak, the higher the probability that the player will lose in the next round; if the roulette wheel had 100 slots, the chance of surviving the fourth round would fall to 1 in 100). The more improbable the combination, the sooner it is likely to end. This is explained by the effect of elimination: if at the beginning there were, say, billions of planets around billions of stars where intelligent life could have started to develop, then as a result of elimination intelligent life formed only on Earth, while the other planets dropped out, as Mars and Venus did. However, the intensity of this elimination is unknown to us, and learning it is hindered by observation selection: we can only find ourselves on a planet where life survived and intelligence could develop. But the elimination continues at the same rate.
To an external observer this process would look like a sudden and causeless deterioration of many of the vital parameters sustaining life on Earth. Considering this and similar examples, one can assume that this effect may increase the probability of sudden natural catastrophes capable of wiping out life on Earth, but by no more than a factor of 10. (No more than that, because constraints of the kind described in the article by Cirkovic and Bostrom, who consider this problem in relation to cosmic catastrophes, come into play. However, the real value of these constraints for geological catastrophes requires more precise research.) For example, if the absence of super-huge volcanic eruptions on Earth, flooding its entire surface, is a lucky coincidence, and normally they should occur about once every 500 million years, then the chance of the Earth being in its current fortunate position would be about 1 in 256, and the expected remaining time of existence of life would be on the order of 500 million years.
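A hedged numerical sketch of this argument (my own illustration, using the text's assumption of one sterilizing eruption per 500 million years on an "ordinary" planet): if each 500-million-year interval is independently survived with probability one half, then surviving eight such intervals (about 4 billion years) has probability (1/2)^8 = 1/256, matching the figure above; under a Poisson model with the same average rate, a clean 4-billion-year record is even less likely.

```python
import math

# Illustrative assumptions (not data): one sterilizing eruption per 500 Myr
# on an "ordinary" planet, and ~4,000 Myr of Earth history to survive.
RATE_PER_MYR = 1.0 / 500.0
HISTORY_MYR = 4000.0
intervals = HISTORY_MYR * RATE_PER_MYR           # 8 expected events

# Variant used in the text: each 500 Myr window survived with probability 1/2.
p_interval_model = 0.5 ** intervals              # (1/2)^8 = 1/256

# Poisson variant: probability of zero events in 4,000 Myr at this rate.
p_poisson_model = math.exp(-intervals)           # e^-8

print(f"per-interval model: {p_interval_model:.5f} (~1 in {1/p_interval_model:.0f})")
print(f"Poisson model:      {p_poisson_model:.6f} (~1 in {1/p_poisson_model:.0f})")
# Either way, a long quiet record is exactly what observation selection
# guarantees we will see, so it says little about the true eruption rate.
```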
We will return to this effect in the chapter on indirect estimates of the probability of global catastrophe at the end of the book. The important methodological consequence is that, with respect to global catastrophes, we cannot use reasoning of the form "it will not happen in the future because it did not happen in the past". On the other hand, a tenfold worsening of the odds of natural catastrophes reduces the expected duration of conditions suitable for life on Earth from billions to hundreds of millions of years, which still makes only a very small contribution to the probability of extinction in the 21st century.
A frightening piece of evidence for the hypothesis that we most likely live at the end of the period of stability of natural processes is the article by R. Rohde and R. Muller in Nature on the cycle of extinctions of living beings with a period of 62 (±3) million years: about 65 million years have passed since the last such extinction, so the next cyclic extinction event is already long overdue. We note also that if the proposed hypothesis about the role of observation selection in underestimating the frequency of global catastrophes is true, it means that intelligent life on Earth is an extremely unusual event in the Universe, and that with high probability we are alone in the observable Universe. In that case we need not fear extraterrestrial invasion, nor can we draw any conclusions about the frequency of self-destruction of advanced civilizations from the Fermi paradox (the silence of space). As a result, the net contribution of the stated hypothesis to our estimate of the probability of human survival may be positive.

Debunked and false risks from media, science fiction and fringe science or old
theories
Nemesis
Gases from comets
Rogue black holes
Neutrinos warming the Earth's core
There are also a number of theories which either were put forward by various researchers and have since been refuted, or circulate in the tabloid press and in popular consciousness and are based on honest mistakes, lies or misunderstanding, or are associated with particular belief systems. We should, however, allow a tiny chance that some of these theories prove correct.
1. A sudden change in the direction and/or speed of the Earth's rotation, causing catastrophic earthquakes, floods and climate change. A change in the Earth's shape associated with the growth of the polar caps might cause the rotation axis to cease to be the axis with the lowest moment of inertia, so that the Earth flips over like the "Dzhanibekov nut" (the tennis racket effect). Alternatively, this could happen as a result of changes in the Earth's moment of inertia associated with the restructuring of its interior, or as a result of a collision with a large asteroid.
2. Theories of a great deluge based on the Biblical legend.
3. An explosion of the Sun in six years, supposedly predicted by a Dutch astronomer.
4. Collision of the Earth with a wandering black hole. As far as is known, there are no black holes in the vicinity of the Sun, because they could be detected through the accretion of interstellar gas onto them and through gravitational distortion of the light of more distant stars. Moreover, the "sucking" ability of a black hole is no different from that of any star of similar mass, so a black hole is no more dangerous than a star. Collisions with stars, or even dangerous close approaches to them, are rare, and all known approaches are millions of years away. Since black holes in the galaxy are far less numerous than stars, the chance of a collision with a black hole is even smaller. We cannot, however, exclude a collision of the Solar System with a single rogue planet, but this is highly unlikely and would be a relatively harmless event.

Weakening of stability and human interventions


The contribution of the probability shift due to the cancellation of the protection given by the Anthropic principle to the total probability of extinction in the 21st century is, apparently, small. Namely, if the Sun maintains a comfortable temperature on Earth not for 4 billion years but only for 400 million, then in the 21st century this still adds only ten-thousandths of a percent to the probability of catastrophe, if we distribute the probability of the Sun's failure uniformly over that time (0.0004%). However, the weakening of stability that the Anthropic principle once guaranteed us means, first, that processes become less steady and more prone to fluctuations (which is well known with respect to the Sun, which, as it exhausts its hydrogen, will burn ever more brightly and unevenly), and second, which seems more important, that they become more sensitive to possible small human influences. It is one thing to tug on a slack elastic band, and quite another to tug on an elastic band stretched to its breaking point.
For example, if a certain supervolcano eruption has ripened, many thousands of years may still pass before it occurs, but a borehole a few kilometers deep could be enough to break the stability of the cover of the magma chamber. As the scale of human activity grows in all directions, the chances of stumbling on such an instability increase. It could be an instability of the vacuum, of the terrestrial lithosphere, or of something we have not even thought of.

Block 2 Anthropogenic risks


Chapter 6. Global warming
TL;DR: The small probability of runaway global warming requires the preparation of urgent unconventional prevention measures, namely sunlight dimming.
Abstract:
The most likely scenario of limited global warming of several degrees C in the 21st century will not result in human extinction, as even the thawing after the Ice Age in the past did not have such an impact. The main question about global warming is the possibility of runaway global warming and the conditions in which it could happen. Runaway warming means warming of 30 C or more, which would make the Earth uninhabitable. It is an unlikely event, but it could result in human extinction. Global warming could also create some context risks, which change the probability of other global risks.
I will not go into all the details of the nature of global warming and the established ideas about its prevention, as these have extensive coverage in Wikipedia (https://en.wikipedia.org/wiki/Global_warming and https://en.wikipedia.org/wiki/Climate_change_mitigation). Instead I will concentrate on heavy-tail risks and less conventional methods of preventing global warming.
The map provides a summary of all known methods of GW prevention and also of ideas about the scale of GW and the consequences of each level of warming.
The map also shows how the prevention plans depend on the current level of technology. In short, the map has three variables: the level of technology, the level of urgency of GW prevention and the scale of the warming.
The following post consists of a text wall and the map, which are complementary: the text provides in-depth details about some ideas, and the map gives a general overview of the prevention plans.
The map: http://immortality-roadmap.com/warming3.pdf
Uncertainty
The main feature of climate theory is its intrinsic uncertainty. This uncertainty is not about climate change denial; we are almost certain that anthropogenic climate change is real. The uncertainty concerns its exact scale and timing, and especially the low-probability tails with high consequences. In risk analysis we can't ignore these tails, as they bear most of the risk. So I will focus mainly on the tails, but this in turn requires a focus on more marginal, contested or unproven theories.
These uncertainties are especially large if we make projections for 50-100 years from now; they are connected with the complexity of the climate, the unpredictability of future emissions and the chaotic nature of the climate system.
Clathrate methane gun
An unconventional but possible global catastrophe accepted by several researchers is a greenhouse catastrophe known as the runaway greenhouse effect. The idea is well covered in Wikipedia: https://en.wikipedia.org/wiki/Clathrate_gun_hypothesis
Currently large amounts of methane clathrate are present in the Arctic, and since this area is warming more quickly than other regions, the gases could be released into the atmosphere. https://en.wikipedia.org/wiki/Arctic_methane_emissions
Predictions of the speed and consequences of this process differ. Mainstream science sees the methane cycle as a dangerous but slow process which could eventually result in a 6 C rise in global temperature, which seems bad but is survivable. It would also take thousands of years.
Something like this has happened once before, during the Late Paleocene, in the event known as the Paleocene-Eocene Thermal Maximum (PETM, https://en.wikipedia.org/wiki/Paleocene%E2%80%93Eocene_Thermal_Maximum), when the temperature jumped by about 6 C, probably because of methane. Methane-driven global warming is just one of ten hypotheses explaining the PETM. But during the PETM global methane clathrate deposits were around 10 times smaller than they are at present, because the ocean was warmer. This means that if the clathrate gun fires again it could have much more severe consequences.
But some scientists think that it may happen quickly and with stronger effects, resulting in runaway global warming, because of several positive feedback loops. See, for example, the blog http://arctic-news.blogspot.ru/
There are several possible positive feedback loops which could make methane-driven warming stronger (a toy numerical illustration of such feedback follows the list):
1) The Sun is now brighter than before because of stellar evolution. The increase in the Sun's luminosity will eventually result in runaway global warming in a period 100 million to 1 billion years from now. The Sun will become thousands of times more luminous when it becomes a red giant. See more here: https://en.wikipedia.org/wiki/Future_of_the_Earth#Loss_of_oceans
2) After a long period of cold climate (ice ages), a large amount of methane clathrate accumulated in the Arctic.
3) Methane is a short-lived atmospheric gas (about seven years). So the same amount of methane results in much more intense warming if it is released quickly, compared with a scenario in which the release is scattered over centuries. The speed of methane release depends on the speed of global warming. Anthropogenic CO2 is increasing very quickly and could be followed by a quick release of methane.
4) Water vapor is the strongest greenhouse gas, and more warming results in more water vapor in the atmosphere.
5) Coal burning resulted in large global dimming (https://en.wikipedia.org/wiki/Global_dimming), and the current switch to cleaner technologies could stop the masking of global warming.
6) The ocean's ability to dissolve CO2 falls as its temperature rises.
7) The Arctic has the biggest temperature increase due to global warming, with a projected growth of 5-10 C, and as a result it will lose its ice shield, which would reduce the Earth's albedo and result in higher temperatures. The same is true for permafrost and snow cover.
8) Warmer Siberian rivers bring their water into the Arctic Ocean.
9) The Gulf Stream brings warmer water from the Gulf of Mexico to the Arctic Ocean.
10) The current period of a calm, spotless Sun will end, resulting in further warming.
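As a purely illustrative toy model (my own sketch, not a climate simulation; the forcing and gain numbers are invented), the difference between damped and runaway warming can be reduced to a single feedback gain: each increment of warming triggers some further warming, and if that further warming per degree exceeds one, the series diverges.

```python
# Toy feedback illustration: each round of warming triggers `gain` times as
# much further warming. All numbers are invented; this is not a climate model.

def total_warming(initial_push_c: float, gain: float, steps: int = 200) -> float:
    """Sum the warming series for `steps` rounds of feedback."""
    warming, increment = 0.0, initial_push_c
    for _ in range(steps):
        warming += increment
        increment *= gain          # each round of warming feeds back
        if warming > 100.0:        # treat >100 C as runaway and stop
            return float("inf")
    return warming

for g in (0.3, 0.6, 0.9, 1.05):
    result = total_warming(initial_push_c=1.5, gain=g)
    label = "runaway" if result == float("inf") else f"{result:.1f} C total"
    print(f"feedback gain {g:4.2f}: {label}")
# gain < 1: total warming converges to initial_push / (1 - gain);
# gain >= 1: the series diverges, i.e. runaway warming.
```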
Anthropic bias
One unconventional reason for global warming to be more dangerous than we usually think is anthropic bias.
1. We tend to think that we are safe because no runaway global warming event has ever happened in the past. But we could only observe a planet where this never happened. Milan Cirkovic and Bostrom wrote about this, so the real rate of runaway warming could be much higher. See here: http://www.nickbostrom.com/papers/anthropicshadow.pdf
2. Also, we humans tend to find ourselves in a period when climate changes are very strong, because of climate instability. This is because human intelligence as a universal adaptation mechanism was more effective in a period of instability. So climate instability helps to breed intelligent beings. (This is my idea and may need additional proof.)
3. But if runaway global warming is long overdue, this would mean that our environment is more sensitive even to smaller human actions (compare it with an over-pressured balloon and a small needle). In this case the amount of CO2 we currently release could be such an action. So we could be underestimating the fragility of our environment because of anthropic bias. (This is my idea and I wrote about it here: http://www.slideshare.net/avturchin/why-anthropic-principle-stopped-to-defendus-observation-selection-and-fragility-of-our-environment)
The timeline of possible runaway global warming
We could call runaway global warming a "Venusian scenario", because thanks to the greenhouse effect the surface temperature of Venus is over 400 C, despite the fact that, owing to its high albedo (0.75, caused by white clouds), it receives less solar energy per unit area than the Earth (albedo 0.3).
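As a quick check of this counterintuitive point (my own calculation, using standard round values for the solar constants and the albedos quoted above), the average absorbed solar flux is S(1 − A)/4, and it really is lower for Venus than for Earth; Venus is hot because of its greenhouse, not because it absorbs more sunlight.

```python
# Average absorbed solar flux per square meter: S * (1 - albedo) / 4.
# Solar constants are standard round values; albedos are those quoted in the text.

def absorbed_flux(solar_constant_w_m2: float, albedo: float) -> float:
    return solar_constant_w_m2 * (1.0 - albedo) / 4.0

earth = absorbed_flux(1361.0, 0.30)   # ~238 W/m^2
venus = absorbed_flux(2601.0, 0.75)   # ~163 W/m^2

print(f"Earth absorbs ~{earth:.0f} W/m^2, Venus ~{venus:.0f} W/m^2")
# Despite being closer to the Sun, Venus absorbs less energy per square meter,
# yet its surface is over 400 C -- the work is done by the greenhouse effect.
```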
A greenhouse catastrophe could consist of three stages:
1. Warming of 1-2 degrees due to anthropogenic CO2 in the atmosphere, and the passage of a trigger point. We don't know where the tipping point is; we may have passed it already, or conversely we may be underestimating natural self-regulating mechanisms.
2. Warming of 10-20 degrees because of methane from gas hydrates and the Siberian bogs, as well as the release of CO2 currently dissolved in the oceans. The speed of this self-amplifying process is limited by the thermal inertia of the ocean, so it would probably take about 10-100 years. This process can be arrested only by sharp high-tech interventions, like an artificial nuclear winter and/or eruptions of multiple volcanoes. The more warming occurs, the less able civilization becomes to stop it, as its technologies will be damaged. But the later global warming happens, the higher the level of technology that can be used to stop it.
3. Moist greenhouse. Steam is a major contributor to the greenhouse effect, which results in an even stronger and quicker positive feedback loop. A moist greenhouse will start if the average temperature of the Earth reaches 47 C (currently 15 C), and it will result in runaway evaporation of the oceans, ending in surface temperatures around 900 C (https://en.wikipedia.org/wiki/Future_of_the_Earth#Loss_of_oceans). All the water on the planet would boil, resulting in a dense water vapor atmosphere. See also: https://en.wikipedia.org/wiki/Runaway_greenhouse_effect
Prevention
If we survive until a positive Singularity, global warming will not be an issue. But if strong AI and other super-technologies do not arrive by the end of the 21st century, we will need to invest a lot in prevention, as civilization could collapse before the creation of strong AI, which would mean that we would never be able to use its benefits.
I have a map which summarizes the known ideas for global warming prevention and adds some new ones for urgent risk management: http://immortality-roadmap.com/warming2.pdf
The map has two main variables: our level of technological progress and the size of the warming which we want to prevent. But its main variable is the ability of humanity to unite and act proactively. In short, the plans are:
No plan: do nothing, and just adapt to warming.
Plan A: cutting emissions and removing greenhouse gases from the atmosphere. Requires a lot of investment and cooperation. Long-term action and remote results.
Plan B: geo-engineering aimed at blocking sunlight. Does not need much investment, and unilateral action is possible. Quicker action and quicker results, but risky in the case of a sudden switch-off.
Plan C: emergency measures for Sun dimming, like an artificial volcanic winter.
Plan D: moving to other planets.
All plans could be executed at the current tech level and also at a high tech level through the use of nanotech and so on.
I think that climate change demands that we go directly to Plan B. Plan A is cutting emissions, and it's not working, because it is very expensive and requires cooperation from all sides. Even then it will not achieve immediate results, and the temperature will still continue to rise for many other reasons.
Plan B is changing the opacity of the Earth's atmosphere. It could be a surprisingly low-cost exercise and could even be carried out unilaterally. There are suggestions to release something as simple as sulfuric acid into the upper atmosphere to increase its reflectivity.
"According to Keith's calculations, if operations were begun in 2020, it would take 25,000 metric tons of sulfuric acid to cut global warming in half after one year. Once under way, the injection of sulfuric acid would proceed continuously. By 2040, 11 or so jets delivering roughly 250,000 metric tons of it each year, at an annual cost of $700 million, would be required to compensate for the increased warming caused by rising levels of carbon dioxide. By 2070, he estimates, the program would need to be injecting a bit more than a million tons per year using a fleet of a hundred aircraft."
https://www.technologyreview.com/s/511016/a-cheap-and-easy-plan-to-stop-global-warming/
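To put those quoted numbers in perspective, here is a simple arithmetic sketch (my own back-of-the-envelope reading of the figures above, nothing more): the implied cost per ton of delivered sulfuric acid and the per-aircraft delivery rate.

```python
# Back-of-the-envelope reading of the quoted 2040 figures from Keith's estimate.
tons_per_year_2040 = 250_000        # metric tons of sulfuric acid per year
annual_cost_usd = 700e6             # quoted annual cost in 2040
jets = 11                           # quoted fleet size

cost_per_ton = annual_cost_usd / tons_per_year_2040
tons_per_jet_per_day = tons_per_year_2040 / jets / 365

print(f"Implied delivery cost: ~${cost_per_ton:,.0f} per ton")        # ~$2,800/t
print(f"Per jet: ~{tons_per_jet_per_day:.0f} tons flown up per day")  # ~62 t/day
# Even the 2070 figure of ~1 million tons/year would cost on the order of a few
# billion dollars annually at this rate -- tiny compared with the cost of
# decarbonizing the whole energy system, which is the point of Plan B's appeal.
```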
There are also ideas to recapture CO2 using genetically modified organisms, by iron seeding in the oceans and by dispersing the carbon-capturing mineral olivine.
The problem with this approach is that it can't be stopped. As Seth Baum wrote, a smaller catastrophe could result in the disruption of such engineering and the consequent immediate return of global warming with a vengeance. http://sethbaum.com/ac/2013_DoubleCatastrophe.html
There are other ways of preventing global warming. Plan C is creating an artificial nuclear winter through a volcanic explosion or by starting large-scale forest fires with nukes. This idea is even more controversial and untested than geo-engineering.
A regional nuclear war is capable of putting 5 million tons of black carbon into the upper atmosphere; average global temperatures would drop by 2.25 degrees F (1.25 degrees C) for two to three years afterward, the models suggest. http://news.nationalgeographic.com/news/2011/02/110223-nuclear-war-winter-global-warmingenvironment-science-climate-change/ Nuclear explosions in deep forests may have the same effect as attacks on cities in terms of soot production.
Fighting between Plan A and Plan B
So we are not even close to being doomed by global warming, but we may have to change the way we react to it.
While cutting emissions is important, it will probably not work within a 10-20 year period, so quicker-acting measures should be devised.
The main risk is abrupt runaway global warming. It is a low-probability event with the highest consequences. To fight it we should prepare rapid response measures.
Such preparation should be done in advance, which requires expensive scientific experiments. The main problem here is (as always) funding and regulatory approval. The impact of sulfur aerosols should be tested, and complicated mathematical models should be evaluated.
The counter-arguments are the following: "Openly embracing climate engineering would probably also cause emissions to soar, as people would think that there's no need to even try to lower emissions any more. So, if for some reason the delivery of that sulfuric acid into the atmosphere or whatever was disrupted, we'd be in trouble. And do we know enough about such measures to say that they are safe? Of course, if we believe that history will end anyway within decades or centuries because of the singularity, the long-term effects of such measures may not matter so much. Another big issue with changing insolation is that it doesn't solve ocean acidification. No state actor should be allowed to start geo-engineering until they at least take simple measures to reduce their emissions." (comments from a LessWrong discussion about GW)
Currently it all looks like a political fight between Plan A (cutting emissions) and Plan B (geo-engineering), in which Plan A is winning approval. It has been suggested not to implement Plan B, because an increase in the warming would demonstrate the real need to implement Plan A (cutting emissions).
Regulators did not approve even the smallest experiments with sulfur shielding in Britain. Iron ocean seeding also has regulatory problems.
But the same logic works in the opposite direction: China and the coal companies will not cut emissions, partly because they expect policymakers to be pushed toward Plan B. It looks like a prisoner's dilemma between the two plans.
The difference between the two plans is that Plan A would return everything to its natural state, while Plan B is aimed at creating instruments to regulate the planet's climate and weather.
In the current global political situation, cutting emissions is difficult to implement because it requires collaboration between many rival companies and countries. If several of them defect (most likely China, Russia and India, which rely heavily on coal and other fossil fuels), it will not work, even if all of Europe were solar powered.
A transition to a zero-emission economy could happen naturally within 20 years, once electric transportation and solar energy become widespread.
Plan C should be implemented if the situation suddenly changes for the worse, with the temperature jumping 3-5 C in one year. In this case the only option we have is to bomb the Pinatubo volcano to make it erupt again, or probably even several volcanoes. A volcanic winter would give us time to adopt other geo-engineering measures.
I would also advocate a mixture of both plans, because they work on different timescales. Cutting emissions and removing CO2 using the current level of technology would take decades to have an impact on the climate. But geo-engineering has a reaction time of around one year, so we could use it to cover the bumps in the road.
Especially important is the fact that if we completely stopped emissions, we would also stop the global dimming caused by coal burning, which could result in a roughly 3 C global temperature jump. So stopping emissions may itself cause a temperature jump, and we need a protection system for this case.
In all cases we need to survive until stronger technologies develop. Using nanotech or genetic engineering we could solve the warming problem with less effort. But we have to survive until that time.
It seems to me that the idea of cutting emissions is overhyped and solar radiation management is underhyped in terms of public opinion and funding. By changing that imbalance we could achieve more common good.
An unpredictable climate needs a quicker regulation system
The management of climate risks depends on their predictability, and it seems that this predictability is not very high. The climate is a very complex and chaotic system.
It may react unexpectedly in response to our own actions. This means that long-term actions are less favorable: the situation could change many times during their implementation.
Quick actions like solar shielding are better for the management of poorly predictable processes, as we can see the results of our actions and quickly cancel them or strengthen them if we don't like the results.

Context risks influencing the probability of other global risks

Global warming has some context risks:
1. It could slow technological progress.
2. It could raise the chances of war (which has probably already happened in Syria because of drought, http://futureoflife.org/2016/07/22/climate-change-is-the-most-urgent-existential-risk/) and exacerbate conflicts between states about how to share resources (food, water, etc.) and about responsibility for risk mitigation. All such context risks could lead to a larger global catastrophe. See also the book Climate Wars: https://www.amazon.com/Climate-Wars-Fight-SurvivalOverheats/dp/1851688145
3. Another context risk is that global warming captures almost all the public attention available for global risk mitigation, so other, more urgent risks may get less attention.
4. Impaired cognition. Rising CO2 levels could also impair human intelligence and slow technological progress, as CO2 levels near 1000 ppm are known to have negative effects on cognition.
5. Warming may also result in large hurricanes (hypercanes). They can appear if the sea temperature reaches 50 C; they would have wind speeds of 800 km/h, which is enough to destroy any known human structure. They would also be very stable and long-lived, thus influencing the atmosphere and creating strong winds all over the world. The highest sea temperature currently is around 30 C. http://en.wikipedia.org/wiki/Hypercane
Warming and other risks
Many people think that runaway global warming constitutes the main risk of global catastrophe. Another group thinks it is AI, and there is no dialogue between these two groups.
The level of warming which is survivable strongly depends on our tech level. Some combinations of temperature and humidity are not survivable for human beings without air conditioning: if the temperature rises by 15 C, half of the population will be in a non-survivable environment (http://www.sciencedaily.com/releases/2010/05/100504155413.htm), because very humid and hot air prevents cooling by perspiration and feels like a much higher temperature. With the current level of technology we could fight it, but if humanity falls back to a medieval level, it would be much more difficult to recover in such conditions.
In fact we should compare not the magnitude but the speed of global warming with the speed of technological progress. If the warming is quicker, it wins. If we have very slow warming but even slower progress, the warming still wins. In general I think that progress will outrun warming, and we will create strong AI before we have to deal with serious global warming consequences.
War could also happen if one country, say the USA, attempts geo-engineering, and another country (China or Russia) regards it as a climate weapon which undermines its agricultural productivity. Scientific study and preparation is probably the longest part of geo-engineering; it could and should be done in advance, and it should not provoke war. Then, if a real necessity for geo-engineering appears, all the needed technologies will be ready.
Different predictions
Multiple people predict extinction due to global warming, but they are mostly labeled as alarmists and ignored. Some notable predictions:
1. David Auerbach predicts that by 2100 warming will be 5 C and, combined with resource depletion and overcrowding, will result in global catastrophe. http://www.dailymail.co.uk/sciencetech/article-3131160/Will-child-witness-end-humanity-Mankindextinct-100-years-climate-change-warns-expert.html
2. Sam Carana predicts that warming will be 10 C in the 10 years following 2016, and that extinction will happen in 2030. http://arctic-news.blogspot.ru/2016/03/ten-degrees-warmer-in-adecade.html
3. Conventional predictions of the IPCC give a maximum warming of 6.4 C by 2100 in the worst-case emission scenario with the worst climate sensitivity: https://en.wikipedia.org/wiki/Effects_of_global_warming#SRES_emissions_scenarios
4. A consensus of scientists expects a climate tipping point by 2200: http://www.independent.co.uk/news/science/scientists-expect-climate-tipping-point-by-22002012967.html
5. If humanity continues to burn all known carbon sources, this will result in about 10 C of warming by 2300. https://www.newscientist.com/article/mg21228392-300-hyperwarming-climate-could-turnearths-poles-green/ The only scenario in which we are still burning fossil fuels by 2300 (but are neither extinct nor a solar-powered supercivilization running nanotech and AI) is a series of nuclear wars or other smaller catastrophes which permit the existence of regional powers that repeatedly smash each other into ruins and then rebuild using coal energy: something like a global nuclear Somalia world.
6. Kopparapu says that if current IPCC projections of a 4 degree K (or Celsius) increase by the end of this century are correct, our descendants could start seeing the signatures of a moist greenhouse by 2100, putting the Earth on the edge of runaway warming. http://arctic-news.blogspot.ru/2013/04/earth-is-on-the-edge-of-runaway-warming.html
We should give more weight to less mainstream predictions, because they describe the heavy tails of possible outcomes. I think it is reasonable to estimate the risk of extinction-level runaway global warming in the next 100-300 years at 1 percent, and to act as if it is the main risk from global warming.
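The reasoning behind acting on a 1 percent tail can be made explicit with a small expected-loss comparison (my own illustration; the probabilities other than the text's 1 percent figure and all loss values are arbitrary placeholders chosen only to show how a small probability of an enormous loss dominates):

```python
# Illustrative expected-loss comparison; probabilities (except the 1% tail from
# the text) and "loss" units are arbitrary placeholders.
scenarios = {
    # name: (assumed probability over ~a century, assumed loss in arbitrary units)
    "IPCC-style 2-4 C warming": (0.80, 1.0),                # serious but survivable
    "catastrophic 10-20 C warming": (0.03, 50.0),
    "extinction-level runaway warming": (0.01, 10_000.0),   # text's 1% estimate
}

for name, (p, loss) in scenarios.items():
    print(f"{name:35s} p={p:4.2f}  expected loss = {p * loss:8.1f}")
# With any loss assignment that treats extinction as vastly worse than ordinary
# damage, the 1% tail dominates the expected loss, which is why the text argues
# for treating runaway warming as the main risk from global warming.
```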

The map of (runaway) global warming prevention

[The map itself is a diagram (see the PDF links above); only its textual content is reproduced here. Each plan has a low-tech version, realizable in the first half of the 21st century, and a high-tech version for the second half of the 21st century.]

No plan. Local adaptation to the changing climate: air conditioning, relocation of cities, new crops, irrigation. Could work for small changes of around 2-4 C; will not work in case of runaway warming.

Plan A. Limitation of greenhouse gases.
- Cutting CO2 emissions: switching from coal to gas; switching to electric and more fuel-efficient cars; transition to renewable and solar energy, which does not produce CO2 emissions; lower consumption. High-tech versions: wait until strong AI is created; use nanotech and biotech to create new modes of transport and energy sources; use AI to build a comprehensive climate model and calculate impacts.
- Reduction of emissions of other greenhouse gases: use of lasers to destroy CFC gases; destruction of atmospheric methane by releasing hydroxyl groups or methanophile organisms.
- Capturing CO2 from the atmosphere: reforestation; stimulation of plankton by seeding the ocean with iron; dispersing the common mineral olivine, which reacts chemically with CO2 and absorbs it; GMO organisms capable of capturing CO2. High-tech versions: nanorobots using carbon from the air for self-replication; nanomachines in the upper layers of the atmosphere trapping harmful gases.

Plan B. Use of geo-engineering for reflection of sunlight. Risks: switching it off could result in an immediate rebound of global warming; it weakens incentives to cut emissions.
1. Reflection of sunlight in the stratosphere: stratospheric aerosols based on sulfuric acid. Risks: destruction of the ozone layer; acid rains. Expected price: may be close to zero if cheap plane fuel is used.
2. Increase of the albedo of the Earth's surface (https://en.wikipedia.org/wiki/Reflective_surfaces_(geoengineering)): reflective construction technologies; crops with high albedo; foam on the ocean; covering deserts with reflective plastic; huge airships; reflective thin films in the upper atmosphere.
3. Increase of cloud albedo over the sea by injecting sea-water-based condensation nuclei (https://en.wikipedia.org/wiki/Cloud_reflectivity_modification); about 1,500 ships could do it, and the spray itself would rise to the upper atmosphere. Risks: the water turns to steam, which is itself a greenhouse gas; clouds at the wrong altitude can lead to heating (https://en.wikipedia.org/wiki/Solar_radiation_management#Weaponization).
4. Space solutions: mirrors in space; spraying moon dust (explosions on the Moon could create a cloud of dust); construction of factories on the Moon producing umbrella satellites; lenses or smart dust at the L1 point; deflection of an asteroid into the Moon so that the impact creates a screening dust cloud in lunar orbit; replicating robots building mirrors in space.

Plan C. Urgent measures to stop global warming: artificial explosions of volcanoes to create an artificial volcanic winter; artificial nuclear winter through explosions in the taiga and in coal beds. Plan C could be realized within half a year with the help of already existing nuclear weapons.

Plan D. Escape: escape into high mountains, like the Himalaya, or to Antarctica; high-tech air-conditioned refuges; space stations; colonization of other planets; uploading into AI.

Types of warming and their consequences:
- New Ice Age: large economic problems; millions of people would die.
- No warming: useless waste of money, time and attention on fighting global warming.
- Warming according to IPCC predictions (2-4 C by the end of the 21st century): large economic problems; hurricanes, famine, sea level change; millions of people would die.
- Catastrophic warming of 10-20 C: large parts of the land become uninhabitable, people move to the poles and mountains; civilizational collapse.
- Catastrophic warming with a new climate equilibrium at 55 C (40 C of warming): small groups of people could survive on very high mountains, like the Himalaya.
- Venusian runaway warming (mild), mean temperature above 100 C: some forms of life survive on mountain tops.
- Venusian runaway warming (strong, with evaporation of all water), mean temperature above 1600 C: life on Earth irreversibly ends.
Chapter 7. The anthropogenic risks which are not connected with new technologies
Exhaustion of resources
The problems of resource exhaustion, population growth and environmental pollution are systemic, and in that capacity we will consider them later. Here we will consider only whether each of these factors separately could lead to human extinction.
A widespread opinion is that technogenic civilization is doomed because of the exhaustion of readily available hydrocarbons. In any case, this by itself would not result in the extinction of all mankind, since people lived without oil before. However, there will be serious problems if oil runs out sooner than society has time to adapt to it, that is, if it runs out quickly. Coal stocks, however, are considerable, and the technology of producing liquid fuel from coal was actively used in Hitler's Germany. Huge stocks of methane hydrate lie on the sea floor, and effective robots could extract them. And wind energy, conversion of solar energy and similar existing technologies are, on the whole, enough to keep civilization developing, though a certain decrease in the standard of living is possible, and in the worst case a considerable decrease in population, but not full extinction.
In other words, the Sun and the wind contain energy which exceeds the needs of mankind thousands of times over, and on the whole we understand how to extract it. The question is not whether we will have enough energy, but whether we will have time to put the necessary capacity into operation before an energy shortage undermines the technological capabilities of civilization under an adverse scenario.
It may seem to the reader that I underestimate the problem of resource exhaustion, to which a multitude of books (Meadows, Parkhomenko), studies and websites (in the spirit of www.theoildrum.com) are devoted. Actually, I do not agree with many of these authors, as they start from the premise that technical progress will stop. Let us pay attention to recent research in the field of energy supply: in 2007 industrial production of solar cells at a cost of less than $1 per watt began in the USA, which is half the cost of energy from a coal power station, not counting fuel. The quantity of wind energy which can be extracted from ocean shoals in the USA is 900 gigawatts, which covers all US electric power requirements. Such a system would give a uniform stream of energy because of its large size. The problem of storing surplus electric power is solved by pumping water back up at hydroelectric power stations, by developing powerful batteries and by distributed storage, for example, in electric cars. A large amount of energy could also be extracted from sea currents, especially the Gulf Stream, and from underwater methane hydrate deposits.
Besides, the end of resource exhaustion lies beyond the forecast horizon which is set by the rate of scientific and technical progress. (But the moment of trend change, Peak Oil, is within this horizon.)
One more variant of global catastrophe is poisoning by the products of our own life. For example, yeast in a bottle of wine grows exponentially and is then poisoned by the product of its own metabolism (alcohol), and all of it dies. This process is also underway with mankind, but it is not known whether we could pollute and exhaust our habitat so badly that this alone would lead to our complete extinction. Besides energy, people need the following resources:
Materials for manufacturing: metals, rare-earth elements, etc. Many important ores could be exhausted by 2050. However, materials, unlike energy, do not disappear, and with the development of nanotechnology full recycling of waste becomes possible, as does extraction of the necessary materials from sea water, where large quantities of, for example, uranium are dissolved, and even transportation of the necessary substances from space.
Food. According to some sources, peak food production has already passed: soils are disappearing, urbanization seizes the fertile fields, the population grows, fisheries are collapsing, the environment is polluted by waste and poisons, there is not enough water, and pests are spreading. On the other hand, a transition to an essentially new, industrial type of food production is possible, based on hydroponics, that is, cultivation of plants in water, without soil, in closed greenhouses, which protects them from pollution and parasites and is completely automated (see Dmitry Verkhoturov's and Kirillovsky's article "Agrotechnologies of the future: from the plowed field to the factory"). Finally, margarine and, possibly, many other necessary components of food can be produced from oil at chemical plants.
Water. It is possible to provide potable water through desalination of sea water; today it costs about a dollar per ton, but the bulk of water goes to crop cultivation, up to a thousand tons of water per ton of wheat, which makes desalination unprofitable for agriculture. But with a transition to hydroponics, water losses to evaporation will sharply decrease, and desalination could become profitable.
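A one-line calculation shows why this is so (my own sketch using the round figures quoted above; the wheat price is my own rough assumption and actual prices vary):

```python
# Rough illustration with the round numbers quoted above (actual prices vary).
desalination_cost_per_ton_water = 1.0   # USD, quoted figure
water_tons_per_ton_of_wheat = 1000.0    # quoted figure for irrigation demand
wheat_price_per_ton = 200.0             # USD, rough world market price (assumption)

water_cost = desalination_cost_per_ton_water * water_tons_per_ton_of_wheat
print(f"Water cost per ton of wheat: ${water_cost:,.0f} "
      f"vs. wheat price of roughly ${wheat_price_per_ton:,.0f}")
# Desalinated water alone would cost several times the value of the crop,
# which is why desalination is unprofitable for field agriculture today.
```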
Place to live. Despite the fast growth of the Earth's population, we are still far from the theoretical limit.

Pure air. There already exist air conditioners which clean the air of dust and raise its oxygen content.
- Exceeding the global hypsithermal limit.
Artificial Wombs

The technological revolution drives the following factors of population growth:
- An increase in the number of beings to which we attribute rights equal to human rights: monkeys, dolphins, cats, dogs.
- Simplification of the birth and upbringing of children: possibilities of reproductive cloning, creation of artificial mothers, household robot assistants, etc.
- The appearance of new entities claiming human rights and/or consuming resources: cars, robots, AI systems.
- Possibilities of life extension and even revival of the dead (for example, by cloning from preserved DNA).
- Growth of the "normal" level of consumption.

Crash of the biosphere

If people master genetic technologies, this will make it possible both to arrange a crash of the biosphere on an improbable scale and to find resources for its protection and repair. It is possible to imagine a scenario in which the entire biosphere is so contaminated by radiation, genetically modified organisms and toxins that it becomes incapable of meeting mankind's need for food. If this happened suddenly, it would put civilization on the edge of economic collapse. However, a sufficiently advanced civilization could set up food production in artificial biospheres, like greenhouses. Hence, a crash of the biosphere is dangerous only if it is accompanied by a rollback of civilization to a previous level, or if the crash of the biosphere itself causes such a rollback.
But the biosphere is a very complex system in which self-organized criticality and sudden collapse are possible. A well-known story is the extermination of sparrows in China and the subsequent problems with food supply because of pest invasions. Another example: corals are now dying worldwide because sewage carries bacteria that harm them.

Chapter 8. Artificial Triggering of Natural Catastrophes


There are a number of natural catastrophes which could potentially be triggered by
the use of powerful weapons, especially nuclear weapons. These include: 1) initiation of a
supervolcano eruption, 2) dislodging of a large part of the Cumbre Vieja volcano on La
Palma in the Canary Islands, causing a megatsunami, 3) possible nuclear ignition of a gas
giant planet or the Sun. The first is the most dangerous and plausible, the second is
plausible but not dangerous to the whole of mankind, and the third is currently rather
speculative, but in need of further research. Besides these three risks, there is also the risk
of asteroids or comets being intentionally redirected to impact the Earth's surface and the
risk of destruction of the ozone layer through an auto-catalyzing reaction. These five risks
and some closely related others will be examined in this chapter.
We begin with the Cumbre Vieja risk, as it is the most studied among these possibilities and serves as an illustrative example. According to heavily contested claims, there may be a block with a volume between 150 km³ and 500 km³ on the Cumbre Vieja volcano on La Palma in the Canary Islands which, if dislodged, would cause waves of 3-8 m (10-26 ft, in the case of a 150 km³ collapse) to 10-25 m (33-108 ft, in a 500 km³ collapse) to hit the North American Atlantic seaboard, causing massive destruction, with waves traveling up to 25 km (16 mi) inland [1]. All the assumptions underlying this scenario have been heavily questioned, including the most basic one, that there is an unstable block to begin with [2, 3]. According to the researchers (Ward & Day) who made the original claims, there is a 30 km (17 mi) fissure along Cumbre Vieja. Quoting Ward and Day, "The unstable block above the detachment extends to the north and south at least 15 km." The evidence for this detachment is not based on any obvious surface feature but rather on an analytic argument by Day et al. which incorporates evidence such as vent activity. As such, the statement has been questioned by critics, who say that the evidence does not imply such a large detachment and that at most the detachment is 3 km (1.8 mi) in length.
We leave detailed discussion of the Cumbre Vieja volcano to the referenced works; our point is to establish an archetype for the category of artificially triggered natural risk. If the volcano exploded, or a nuclear bomb were detonated in the right place, it could send a large chunk of land from La Palma into the ocean, creating a mega-tsunami that would sink ships, impact the East Coast and cost many lives. Saying with confidence that this would certainly not occur is not possible now, because the case has not been investigated thoroughly enough. Unfortunately, the other risks which we discuss in this chapter have been analyzed even less. It is important to highlight them, however, so that future research can be prompted.
Yellowstone Supervolcano Eruption
A less controversial claim than the status of the Cumbre Vieja volcano is that there is a huge, pressurized magma chamber beneath Yellowstone National Park in Wyoming. Others have gone on to suggest, off the record, that it would blow if its cap were destroyed by a nuclear weapon. No geologists have publicly addressed the possibility, but it is entirely consistent with their statements on the pressure of the magma chamber and its depth [4]. The magma chamber is 80 km (50 mi) long and 40 km (24 mi) wide, and has 4,000 km³ (960 cu mi) of underground volume, of which 10-30% is filled with molten rock. The top of the chamber is 8 km (5 mi) below the surface, the bottom about 16 km (10 mi) below. That means that anything which could weaken or annihilate the 8 km (5 mi) cap could release the hyper-pressurized gases and molten rock and trigger a supervolcanic eruption, causing the loss of millions of lives.
Before we review the effects of a supervolcano eruption, it is worth considering the
depth penetration of nuclear explosions. During Operation Plowshare, an experiment of the
peaceful use of nuclear weapons, craters 100 m (320 ft) deep were created. Simplistically
speaking, this means that 80 similar weapons would be needed to burrow all the way down
to the Yellowstone Caldera magma chamber. Realistically speaking, fewer would be
needed, since deep nuclear explosions cause collapses which reduce overhead pressure.
Our estimate is that just 10 ten-megaton nuclear explosions or fewer would be sufficient to
connect the magma chamber to the surface. If there are solid boundaries between the
explosion cavities, they could be bridged by a drilling machine. That would release the
pressure and allow the magma to explode to the surface. You might wonder what sort of
people would have the motivation to do such a thing. America's enemies, for one, but there
are other possibilities. We explore the general case in a later chapter.
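The arithmetic behind the estimate above is straightforward (a rough sketch using only the figures quoted in this section; real cratering physics is far more complicated):

```python
# Rough arithmetic behind the text's estimate; real cratering depends strongly
# on yield, burial depth and geology, so this is only the naive version.
cap_depth_m = 8000               # depth to the top of the magma chamber (quoted)
plowshare_crater_depth_m = 100   # crater depth from Operation Plowshare tests (quoted)

naive_stacked_shots = cap_depth_m / plowshare_crater_depth_m
print(f"Naive estimate: ~{naive_stacked_shots:.0f} stacked Plowshare-scale shots")

# The text's more realistic guess is ~10 ten-megaton devices, which implies each
# deep detonation is assumed to open roughly 800 m of column rather than 100 m,
# because collapse of the cavity removes overburden pressure.
assumed_depth_per_large_shot_m = cap_depth_m / 10
print(f"Implied column opened per large device: ~{assumed_depth_per_large_shot_m:.0f} m")
```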
For now, let's review the effects of a supervolcano eruption. Like many of the risks
discussed in this book, they would not be likely to kill all of humanity, but only a few hundred million people. It is in combination with other risks that the supervolcano risk becomes a threat to the human species in general. Regardless, it would be cataclysmic.
Specifically: a supervolcano eruption on the scale of Yellowstone's prior events would
eject about 1,000 km³ (250 cu mi) of molten rock into the sky, creating ash plumes which
cover as much as two-thirds of the United States in a layer of ash a foot thick, making it
uninhabitable for decades. 80,000 people would be killed instantly, and hundreds of
millions more would die over the subsequent months due to lack of food and water in the
wake of the eruption. Ash would kill all plants on the surface, causing the death of any
animals not able to survive on detritus or fungus. The average world temperature would
drop by several degrees, causing catastrophic crop failures and hundreds of millions of
deaths by starvation. The situation would be similar to that after a nuclear war, with the
ameliorating factor that volcanic aerosols would persist in the atmosphere for less time
than nuclear aerosols, due to greater average particle size. Therefore, a volcanic winter
would be shorter than a nuclear winter, although the acute effects would be far worse.
Nuclear war would barely affect many inland areas, but a Yellowstone supereruption would
cover them uniformly in ash. As a world power, the United States would be done for.
A foot thick layer of ash pretty much puts an end to all human activity. USGS geologist
Jake Lowenstern is quoted as saying that a Yellowstone supereruption would deposit a
layer of ash at least 10 cm (4 in) thick for a radius of 500 miles around Yellowstone [5]. This is
somewhat less than the 10 foot thick layer of ash claims seen in disaster documentaries
and in other sensationalist sources, but still more than enough to heavily disrupt activity
across a huge area. The effects would be similar to the catastrophe outlined in the nuclear
weapons chapter, but even more severe. Naturally, water would still be available from wells
and streams, but all crops and much game would be ruined, leaving people completely
dependent upon canned and other stored food. The systems that provide power would be
covered in dust, requiring weeks to months to repair. More likely than not, civil order would
completely collapse across the affected area, making repairs to power systems difficult and
piecemeal. Of course, if the victims don't prepare enough canned food, they can always
eat one another, which is the standard (and rarely spoken of) outcome in this kind of
disaster scenario.

Still, little of this directly matters in the context of this book, since such an event, while
severe, does not threaten humanity as a whole. Although worldwide temperatures would
drop by a few degrees, and the continental US would be devastated, the world population
as a whole would survive and live on, guaranteeing humanity's future. It's still worth noting
this scenario because 1) there may be multiple supervolcanoes worldwide which could be
triggered simultaneously, which either individually or concurrently with nuclear weapons
could cause a volcanic/nuclear winter so severe that no one survives it, 2) it is an
exacerbating factor which could add to the tension of a World War or similar scenario. A
strike on a supervolcano would be deadlier than a strike on a major city, and could
correspondingly raise the stakes of any international conflict. Threatening to attack a
supervolcano with a series of ICBMs could be a potential blackmailing strategy in the
darkest hours of a World War.
Asteroid Bombardment
A risk which has more potential to be life-ending than a supervolcano eruption is that
of intentionally-directed asteroid bombardment, which could be quite dangerous to the
planet indeed. Directing an asteroid of sufficient size towards the Earth would require a
tremendous amount of energy, orders of magnitude greater than mankind's current total
annual energy consumption, but it could eventually be done, perhaps with the tools
described in the nanotechnology chapter.
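A rough energy estimate supports this claim (my own sketch with assumed round values for the asteroid's size, density and the required velocity change; none of these specific figures come from the text): even a modest change of velocity of a Ganymed-sized body involves kinetic energy comparable to decades of total world energy consumption, before accounting for the inefficiency of any realistic propulsion scheme.

```python
import math

# Assumed round values (illustrative, not from the text).
diameter_m = 32_000            # roughly the size of 1036 Ganymed
density_kg_m3 = 2_700          # typical rocky asteroid density
delta_v_m_s = 1_000.0          # assumed required velocity change, m/s
world_energy_per_year_j = 6e20 # rough total world annual energy consumption, J

radius = diameter_m / 2
mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius**3
kinetic_energy = 0.5 * mass * delta_v_m_s**2

print(f"Asteroid mass: ~{mass:.1e} kg")
print(f"Energy for a {delta_v_m_s:.0f} m/s push: ~{kinetic_energy:.1e} J")
print(f"Equivalent to ~{kinetic_energy / world_energy_per_year_j:.0f} years "
      f"of world energy consumption (ignoring all delivery inefficiencies)")
```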
The Earth's orbit already places it in some danger of being hit by a deadly asteroid.
65 million years ago, an asteroid between 5 km (2 mi) and 15 km (6 mi) impacted the
Earth, causing the extinction of the dinosaurs. There is an asteroid, 1950 DA, 1 km (0.6 mi)
in diameter, which scientists say has a 0.3 percent chance of impacting Earth in 2880 [6]. The
dinosaur-killer impact was so severe that its blast wave ignited most of the forests in North
America, destroying them and making fungus the dominant species for several years after
the impact [7]. Despite this, some human-sized species survived, including various turtles and
alligators. On the other hand, many human-sized and larger species were wiped out,
including all non-avian dinosaurs. More research is needed to determine whether a
dinosaur-killer-class asteroid would be likely to wipe out humanity in our entirety, taking into
account our ability to take refuge in bunkers with food and water for decades or possibly

even centuries at a time. Detailed studies have not been done, and we were only able to
locate one paper on the topic [8].
In the geological record, there are asteroid impacts up to half the size of the
Chicxulub impactor which wiped out the dinosaurs, which are known not to have caused
mass extinctions. The Chicxulub impactor, on the other hand, caused a mass extinction
that destroyed about three-quarters of all living plant and animal species on Earth. It seems
fair to assume that an asteroid needs to be at least as large as the Chicxulub impactor to
have a chance of wiping out humanity, and probably significantly larger.
Sometimes, asteroids with a diameter greater than 10 km (6 mi) are called life-killer asteroids, though this is an exaggeration. At least one asteroid of this size has
impacted the Earth during the last 600 million years and not wiped out multicellular life,
though it did burn down the entire biosphere. The two largest impact craters known, with a
diameter of 300 and 250 km (186 and 155 mi) respectively, correspond to impacts which
occurred before the evolution of multicellular life. The Chicxulub crater, with a diameter of
180 km (112 mi), is the third-largest impact crater on Earth which is definitively known, and
the only known major asteroid to hit the planet after the rise of complex, multicellular life.
Craters of similar size from the last 600 million years cannot be definitively identified, but
that does not mean that such an impact has not occurred. The crater could very well have
been on part of the Earth's surface that has since been subsumed beneath a continental
plate, or could be camouflaged by surface features. There is one possible crater of even
larger size, the Shiva crater, which, if real (it is highly contested) is 600 km (370 miles)
long by 400 km (250 mi) wide, and may correspond to the impact of a 40 km (25 mi) sized
object. If this were confirmed, it would substantially increase the necessary size for a
human-killer asteroid, but since it is highly contested, it does not count, and we ought to
assume that an asteroid just modestly larger than the Chicxulub impactor, say near the top
of its probable size range, 15 km (9 mi), could potentially wipe out the human species, just
as it did the dinosaurs. Comets have somewhat greater speed than asteroids, due to their
origin farther out in the solar system, and can be correspondingly smaller than asteroids
but still do equivalent damage. It is unknown whether the Chicxulub impactor was an
asteroid or a comet. To put it in perspective, an asteroid 10 km (6 mi) across is similar to
the size of Mt. Everest, but with greater mass (due to its spherical, rather than conical
shape).
Restricting our search to asteroids 15 km (9 mi) or larger, we might wonder where objects of this size may be found. Among the 1,006 potentially hazardous objects (PHOs) classified by NASA, the largest is 4.75 × 2.4 × 1.95 km, with a mass of 5.0 × 10^13 kg. This is rather large, but not large enough to be categorized as a potential humanity-killer, according to our analysis. (Furthermore, this object has a relatively low density, which would make its impact even slighter than its size alone would suggest.) Moving on to near-Earth objects (NEOs), asteroids which orbit in Earth's neighborhood though not necessarily on impact trajectories, 10,713 have been categorized by NASA. Over 1,000 are estimated to have a diameter greater than 1 km (0.6 mi). The largest, 1036 Ganymed, has an estimated diameter of 32-34 km (20-21 mi), more than enough to qualify as a potential humanity-killer. Like all impacts of similar size, it would eject a huge amount of rock and dust into the upper atmosphere, which would heat up into molten rock upon reentry, broiling the surface upon its return. At a distance of 800 km (~500 mi) from the impact, the ejecta would take roughly 8 minutes to arrive. According to one source, the impact would result in an impact winter with a drop of 13 K after 20 days, rebounding by about 6 K after a year, at which point one-third of the Northern Hemisphere would be covered in ice [9]. In addition, a sufficiently large
impact would create a mantle plume at the antipodal point (opposite side of the planet),
which would cause a supervolcanic eruption and a volcanic winter all on its own. The
combination of an impact and a supervolcanic eruption could cause a temperature drop
even greater than 13 K. Impactors greater than 3 km (1.9 mi) in diameter are likely to create global firestorms, probably burning up a majority of the world's dense forests [10, 11].
The smoke from the wood fires would cause the impact winter to last even longer, creating
even thicker ice sheets at lower latitudes which would be even more omnicidal. Still,
according to our best guess, a decade of food and underground refuges would be enough
for humans to survive an impactor of equivalent size to the Chicxulub impactor. Studies of
the civilizational outcome of a Ganymed-sized impactor cannot be found, but it is fair to say
it would be very bad, though thankfully very unlikely.
There are several reasons it seems unlikely that a single asteroid impact could wipe
out humanity. The foremost is that, unlike the dinosaurs, some minority of us could seal
ourselves in caves with a pleasant climate and live for dozens if not hundreds of years on
stored food alone. If supplied with a nuclear reactor, metal shop, a large supply of light
bulbs, a reliable water source, and means of disposing of waste, people could even grow
plants underground and extend their stay for the life of the reactor, which could be 100
years or longer. A thick ice sheet forming on top of the bunker could kill all life inside, by
denial of oxygen, but such ice sheets are unlikely to form in the tropics, where billions of
people live today. Perhaps a greater risk would be a series of artificially engineered
asteroid impacts, 50-100 years apart, designed to last for thousands of years. It seems
more likely that an unnatural scenario such as that could actually wipe out all human life,
but also seems correspondingly more difficult to engineer. It could become possible during
the late 21st century and beyond, however.
Manually redirecting an asteroid with an orbit relatively far from the Earth, say 0.3
astronomical units (AU) in the case of 1036 Ganymed, would require a tremendous amount
of energy, even by the standards of high energy density MNT-built machinery. Redirecting
an object of that size by a substantial amount would require many tens of thousands of
years and a corresponding number of orbits. If someone had an objective to destroy a
target on Earth, it would be much simpler to blow it up with a nuclear weapon or direct
sunlight from a space mirror to vaporize it rather than drop an asteroid on it. It would even be far easier to pick up a mountain, launch it off the surface of the Earth, orbit it around
the Earth until it picked up sufficient energy, then drop it on the Earth, rather than
redirecting a distant object. For this reason, a man-engineered asteroid impact seems like
an unlikely risk to humanity for the foreseeable future (tens of thousands of years), and an
utterly negligible one for the time frame under consideration, the 21st century. Of course,
the probability of a natural impact of this size in the next century is minute.
Runaway Autocatalytic Reactions
Very few people know that if the concentration of deuterium (an isotope of hydrogen; water made with it is known as heavy water) in the world's oceans were just about 22 times higher than it is, it would be possible to ignite a self-sustaining nuclear reaction that would vaporize the seas and the Earth's crust with them. The oceans contain about one deuterium atom per 6,247 hydrogen atoms, and the critical threshold for a self-sustaining nuclear chain reaction is one atom per 300 [12]. It may be that there are deposits of ice or heavy water with the required concentration of deuterium. Heavy water has a slightly higher melting point than normal water and thus concentrates during natural processes. Even a cube of heavy-water ice just 100 m (330 ft) on a side, small in terms of geologic deposits, could release energy equivalent to many
gigatons of TNT if ignited, greater than the largest nuclear bombs ever detonated. The reason we do not observe such events in other star systems may be that artificial nuclear explosions are needed to trigger them, that the required concentrations of deuterium do not exist naturally, or simply that they are sufficiently uncommon. More research on the topic is needed. There is even a theory that the Moon formed during a natural nuclear explosion from concentrated uranium at the core-mantle boundary [13].
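A minimal sketch of the arithmetic behind these figures, taking the concentrations from the text and standard approximate values for D-D fusion yield and heavy-water-ice density (both of which are assumptions made here for illustration):

OCEAN_RATIO = 1 / 6247        # deuterium atoms per hydrogen atom in seawater (text)
CRITICAL_RATIO = 1 / 300      # ignition threshold cited from Weaver and Wood [12]
print(CRITICAL_RATIO / OCEAN_RATIO)   # ~21, i.e. roughly the "22 times" quoted above

# Energy in a 100 m cube of heavy-water ice, assuming the deuterium fuses.
side_m = 100.0
density = 1.1e3                       # kg/m^3, approximate for heavy-water ice
deuterium_kg = side_m**3 * density * (4.0 / 20.0)   # D is 4/20 of D2O by mass
D_D_YIELD = 8.7e13                    # J per kg of deuterium, first-step D-D reactions
print(deuterium_kg * D_D_YIELD / 4.184e18)   # thousands of gigatons of TNT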
It should be possible to go looking for higher-than-normal concentrations of deuterium
in Arctic ice deposits. These studies should be pursued out of an abundance of caution. If such deposits are found, this would provide more evidence for the usefulness of space stations distant from the Earth as an insurance policy against geogenic human extinction triggered by a nuclear chain reaction in geologic deposits. There may be other autocatalytic runaway reactions which are possible, threatening structures such as the ozone layer. These possibilities should be studied in greater detail by geochemists. Dangerous
thresholds of deuterium may exist in the icy bodies of the outer solar system, on Mars, or
among the asteroids. The use of nuclear weapons in space should be restricted until these
objects are thoroughly investigated for deuterium levels.
Gas Giant Ignition
The potential ignition of gas giants is of sufficient importance that it merits its own
section. The first reaction to such an idea is that it sounds utterly crazy. There is a reflexive
search for a rationalization, a quick rebuttal, to put such an implausible idea out of our
heads. Problematically, however, many of these throw-away rebuttals have themselves been dismissed on rational grounds, and if we are going to rule out this possibility decisively, more work needs to be done [14]. Specifically, we are talking about a nuclear chain reaction being triggered by a nuclear bomb detonated in a deuterium-rich cloud layer of a planet like Jupiter, Saturn, Uranus, or Neptune. Even a pocket only several kilometers in diameter could be enough to sterilize the solar system if ignited.
For ignition to be a danger, there needs to be a pocket in a gas giant where the level
of deuterium per normal hydrogen atom is at least 1:300. That is all it takes. A nuclear
explosion could then theoretically start a self-reinforcing nuclear chain reaction. If all the
deuterium in the depths of Jupiter were somehow ignited, it would release energy
equivalent to 3000 years of luminescence of the Sun during a few tens of seconds, enough
to melt the first few kilometers of the Earth's crust and penetrate much deeper with x-rays.
Surviving this would be extremely difficult, though possibly machine-based life forms buried
deeply underground could do it. If it turns out to seem possible, the threat of blowing up a
gas giant could be used as a blackmailing device by someone to get all of humanity to do
what they want.
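The "3,000 years of solar luminosity" figure can be roughly reproduced using the deuterium abundance quoted in the next paragraph (about 26 atoms per million hydrogen atoms), Jupiter's approximate hydrogen mass fraction, and the assumption that the deuterium burns all the way to helium-4; all of these inputs are approximations used only for illustration:

M_JUPITER = 1.9e27        # kg
H_MASS_FRACTION = 0.75    # approximate hydrogen mass fraction of Jupiter
D_PER_H = 26e-6           # deuterium atoms per hydrogen atom (see next paragraph)
deuterium_kg = M_JUPITER * H_MASS_FRACTION * D_PER_H * 2   # deuteron ~2x proton mass

FULL_BURN_YIELD = 5.7e14  # J/kg if deuterium fuses all the way to helium-4
L_SUN = 3.8e26            # W
SECONDS_PER_YEAR = 3.15e7
print(deuterium_kg * FULL_BURN_YIELD / (L_SUN * SECONDS_PER_YEAR))
# -> a few thousand years of the Sun's output, consistent with the text's estimate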
The average deuterium concentration of Jupiter's atmosphere is low, about 26 per 1
million hydrogen atoms, similar to what is thought to be the primordial ratio of deuterium
created shortly after the Big Bang. Although this is a small amount, there may be chemical and physical processes which concentrate deuterium in parts of the atmosphere of Jupiter. Natural solar heating of ice in comets leads to isotope separation and greater local concentrations, for instance. The scientific details of how isotope separation can occur and what isotope concentrations are reached in the interior of gas giants are poorly understood.
If the required deuterium concentrations exist, there are additional reasons to be
concerned that a runaway reaction could be initiated in a gas giant. One is the immense
pressure and opacity of the gas giants beneath the cloud layer. Some 10,000 km beneath
Jupiter's cloud tops, the pressure is 1 million bar. For comparison, the pressure at the core
of the Earth is 3 million bar. Deep in the bowels of gas giants, hydrogen changes to a
different phase called metallic hydrogen, where it is so compressed that it behaves as an
electrical conductor, and would be considerably more opaque than normal hydrogen. The
pressure and opacity would help contain any nuclear reaction and ensure it gets going
before fizzling out. Another factor to consider is that a gas giant has plenty of fusion fuel
which has never been involved in nuclear reactions, elements like helium-3 and lithium.
This makes it potentially more ignitable than a star.
There are runaway fusion reactions which occur in astronomical bodies in nature. The most obvious is a supernova, in which a collapsing or detonating star fuses a large amount of fuel very quickly. Another example is a helium flash, where a degenerate star crosses a critical threshold of helium pressure and 60-80 percent of the helium in its core fuses in a matter of seconds. This causes the luminosity of the star to increase to about 10^11 solar luminosities, similar to the luminosity of an entire galaxy. Jupiter, which is about a thousand
times less massive than the Sun, would still sterilize the solar system if any substantial portion of its helium fused. Helium makes up roughly a quarter of the mass of Jupiter.
The arguments for and against the likelihood of planetary ignition are complicated,
and require some knowledge of nuclear physics. For this reason, we will conclude
discussion of the topic here and point to some key references. Besides the risk of planetary
ignition, there has also been some discussion of the possibility of solar ignition, but this has
been more limited. Like with planetary ignition, it is tempting to dismiss the possibility
without considering the evidence, which is a danger.
Deep Drilling and Geogenic Magma Risk
Returning to the realm of Earth, it may be possible to dig very deeply, creating an artificial supervolcano even more energetic than Yellowstone. We should always remember that 1,792 mi (2,885 km) beneath us is a molten, pressurized, liquid core containing dissolved gases. If even a tiny amount of that energy could be released to the
surface, it could wipe out all life. This is especially concerning in light of recent proposals to
send a probe to the mantle-core boundary. Although the liquid iron in the core would be too
heavy to rise to the surface on its own, the gases in the core could eject it through a
channel, like opening the cork of a champagne bottle. If even a small channel were flooded
with pressurized magma, it could make the fissure larger by melting its walls all the way to
the top, until it became large enough to eject a substantial amount of liquid iron.
A scientist at Caltech has devised a proposal for sending a grapefruit-sized
communications probe to the Earth's core, by creating a crack and pouring in a huge
amount of molten iron. This would slowly melt its way downwards, taking about a week to
travel the 1,792 miles to the core. If this were conducted as a routine scientific exploration, like a space mission, the risks might be ignored until it is too late. The idea of a probe to the Earth's core causing the end of life on the surface just feels far-fetched, even though we don't know enough about the physics to know for sure one way or another. These
seemingly far-fetched risks are especially dangerous because the reasons for dismissing
them are superficial.
Artificial irreversible global warming
The risk of runaway global warming caused by the release of methane from methane hydrates in the Arctic has been discussed for a long time (see https://en.wikipedia.org/wiki/Clathrate_gun_hypothesis). These hydrates are deposited at depths of 300 meters or more, in layers a few hundred meters thick. Warming of the ocean water, connected with changes in the extent of surface Arctic ice, as well as the inflow of warm water from rivers and the Gulf Stream, gradually destabilizes the hydrates, which are stable only in a narrow range of temperatures and pressures: they require high pressure and a temperature around 0 °C.
If the temperature rises by a few degrees, the hydrates will break down and release methane (some think this could be a quick process, others that it will be slow). They would also be destroyed if the pressure drops.
A significant portion of the hydrates is already on the verge of stability, and methane is being released in the form of bubbles reaching the surface, which leads to an increased concentration of methane over the Arctic (sometimes more than 2,200 ppb, against a mean of about 1,600 ppb). Methane is roughly a hundred times more powerful a greenhouse gas than carbon dioxide on short timescales, so 2 parts per million of methane is roughly equivalent to 200 parts per million of carbon dioxide. The contributions of methane and carbon dioxide to warming are therefore of almost equal size.
According to some scientists, the release of methane from the Arctic tundra and from beneath the Arctic seabed is a positive-feedback process that will lead to global warming of a few degrees in the period 2020-2030, followed by a stronger greenhouse effect from water vapor. After that, global temperature would rise by tens of degrees, and the death of all life on Earth would follow within a century (http://arctic-news.blogspot.ru/). Others take a more conservative approach and think that outgassing will take millennia (https://en.wikipedia.org/wiki/Runaway_climate_change).
Although this view is not shared by everybody, and the probability that it is correct is not high, since these processes may be much slower, the warming mechanism described is apparently real.
As a result, there is a technical possibility of destabilizing the fields of methane hydrates by means of nuclear weapons or other interventions, which would lead to the accelerated release of methane and result in rapid runaway global warming.
Such a weapon could be used as a doomsday machine for blackmailing an adversary by a large Arctic country (Russia, or Canada and the US). Russia has recently built the secret Arctic deep-water submarine Losharik, with an unknown purpose, which is capable of diving 2,500 meters below the surface (https://en.wikipedia.org/wiki/Russian_submarine_Losharik).
Here we will discuss the minimum requirements for the initial explosions, and the factors that would lead to the greatest release of methane. These estimates are very preliminary.
Let us suppose that there is a country with access to the Arctic that possesses a nuclear arsenal of 10,000 warheads (like the USSR and the United States at the height of the Cold War). It could distribute these nukes on the bottom of the Arctic Ocean at the points where the largest and most unstable deposits of methane hydrates lie, so that their simultaneous explosion would produce craters in the seafloor and a large destabilization of the hydrates.
At the same time, the placement (and detonation order) of the charges might be chosen to create a large wave in the ocean, which would pass over other fields of methane hydrates, changing the pressure above them, breaking them up and leading to explosive outgassing. The wave might also strike the enemy's shore.
In addition, the charges should be located not far from each other, so that the heat contributed by the explosions to the ocean water is not dissipated over too large an area; heating and mixing of the ocean water would itself contribute to methane outgassing. Nukes could also be placed under the seafloor, so that the shock wave shatters the methane hydrates and throws pieces of them toward the surface, further increasing outgassing.
If one explosion on the order of 1 Mt destabilizes an area of about 1 square kilometer, it could release about 10 million tons of methane.
If 10,000 such bombs were used, about 100 gigatons of methane would be released, increasing the concentration of methane in the atmosphere more than tenfold. In its effect on global warming this is roughly equivalent to a tenfold increase in atmospheric CO2, and it would result in an immediate rise of global temperatures of around 10 °C, which would probably be enough to start a chain reaction of fires, outgassing and evaporation leading to even higher temperatures. If the mean temperature of the Earth ever rose to 47 °C, the water vapor greenhouse effect would push it to a new stable state at around 900 °C.
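A short sanity check of this arithmetic, using the figures above plus the standard mass of the atmosphere (about 5.1 x 10^18 kg); everything else is taken from the scenario itself:

warheads = 10_000
methane_per_blast_kg = 1e10          # 10 million tons per ~1 Mt explosion (text)
total_ch4_kg = warheads * methane_per_blast_kg
print(total_ch4_kg / 1e12)           # -> 100 Gt, as stated above

ATMOSPHERE_KG = 5.1e18
M_AIR, M_CH4 = 29.0, 16.0
added_ppm = total_ch4_kg / ATMOSPHERE_KG * (M_AIR / M_CH4) * 1e6
print(added_ppm)                     # ~35 ppm of added methane (vs ~1.8 ppm today)
print(added_ppm * 100 / 400)         # with the ~100x potency factor: ~9x current CO2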
Of course, part of the methane would burn, but its concentration would still increase across the northern hemisphere, which would sustain the continued release of methane from wetlands and the ocean. The mechanism is similar to shaking a can of soda, which typically results in strong outgassing.
A similar effect might be achieved at lower cost by dropping millions of ordinary depth charges on the seabed, or by using just a few multi-megaton devices. The lower the stability of the gas hydrates, the less intervention is needed to destabilize them.
Another scenario for causing irreversible global warming is to spray more efficient greenhouse substances into the upper atmosphere. For example, freons, substances used in refrigeration, can be up to 8,000 times more effective as greenhouse gases than carbon dioxide (https://en.wikipedia.org/wiki/Greenhouse_gas). This means that a strong change in the thermal balance of the atmosphere could result from the emission of about 1 Gt of freons. In addition, such an amount in the stratosphere would totally destroy the ozone layer. Currently about 5 million tons of freons sit in various old refrigeration systems (https://en.wikipedia.org/wiki/Chlorofluorocarbon).
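Taking the 8,000x figure at face value, a minimal sketch shows why roughly a gigaton of freons would matter; the atmospheric constants used here are standard approximate values:

freon_gt = 1.0
POTENCY = 8_000                        # text's effectiveness factor relative to CO2
co2_equiv_gt = freon_gt * POTENCY      # ~8,000 Gt of CO2-equivalent

# Current CO2 burden for comparison: ~400 ppm of a ~5.1e18 kg atmosphere.
co2_now_gt = 5.1e18 * 400e-6 * (44.0 / 29.0) / 1e12
print(co2_now_gt)                      # ~3,100 Gt
print(co2_equiv_gt / co2_now_gt)       # ~2.5x today's entire CO2 burden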
The most potent greenhouse gas is sulfur hexafluoride, whose effect is 23,000 times greater than that of CO2 on a century timescale (the immediate effect is only about 3,200 times greater). Its annual production is about 10,000 tons, and its contribution to global warming is currently around 0.2 percent (https://en.wikipedia.org/wiki/Sulfur_hexafluoride).
Synthesizing gigatons of freons would be a huge scientific and technical challenge: easily visible, expensive and pointless. However, the creation of GMO bacteria that produce freons is theoretically possible.
Another greenhouse gas is nitrous oxide (N2O), whose greenhouse effect is 298 times stronger than that of carbon dioxide (https://en.wikipedia.org/wiki/Nitrous_oxide). It can be produced by atomic explosions and volcanic eruptions, and also by solar radiation and supernova explosions. It is also heavier than air and has a narcotic effect, that is, it can cause physiological reactions.
Ozone has an immediate greenhouse effect about 1,000 times greater than that of CO2, but tropospheric ozone decays in about 22 days. Ozone can therefore produce regional greenhouse warming, and in some regions its contribution to warming is greater than that of CO2. Nuclear explosions can create a great deal of local ozone through ionizing radiation.
The main greenhouse gas on Earth is actually water vapor, and nuclear explosions
can locally increase the concentration of water vapor, as well as put it into the upper
atmosphere. However, water vapor can also condense into clouds and ice crystals that increase the Earth's albedo and lower the temperature.
One of the possible causes of the extinction 250 million years ago was the appearance of a new class of methane-generating microbes that arose through the emergence and horizontal transfer of a single gene. These microbes used nickel emitted by the eruption of the Siberian Traps as a catalyst for the synthesis of methane from organic sediments on the ocean floor. The exponential growth in their numbers led to rapid global warming (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3992638/). Today, catalysts are already used deliberately to stimulate the growth of plankton (iron added to the ocean), and the production and dispersion of nickel are quite large.
Several factors could lead to an increase in the number of methane-generating microbes and a large-scale release of methane, which might result in irreversible global warming:
- a GMO bacterium that can synthesize methane;
- a gene encoding this catalytic ability being transferred to such bacteria;
- the explosion of a supervolcano in nickel deposits;
- unintentional accumulation of nickel and iron in the ocean from waste.
Conclusion: the deliberate creation of a doomsday machine based on a methane gun seems implausible, mostly because of the unpredictable results of such an attack. Even if a state decided to build a doomsday machine, it has many more predictable options, including biological and nuclear ones. On the other hand, if it were ever shown that triggering the clathrate gun is the most devastating possible use of the existing nuclear arsenal, then the risk would be real.

References
1. Ward, S. N. & Day, S. J. 2001. Cumbre Vieja Volcano; potential collapse and tsunami at La Palma, Canary Islands. Geophysical Research Letters 28 (17), 3397-3400.
2. La Palma Tsunami: The mega-hyped tidal wave story. http://www.lapalma-tsunami.com/
3. Risk is low, but US East Coast faces variety of tsunami threats. November 16, 2011. NBC News.
4. Alexandra Witze. 2013. "Large magma reservoir gets bigger". Nature.
5. Annalee Newitz. What will really happen when the Yellowstone supervolcano erupts? May 17, 2013. io9.
6. Steven N. Ward and Erik Asphaug. Asteroid impact tsunami of 2880 March 16. 2003. Geophysical Journal International 153, F6-F10.
7. Robertson, D.S., Lewis, W.M., Sheehan, P.M. & Toon, O.B. 2013. K/Pg extinction: re-evaluation of the heat/fire hypothesis. Journal of Geophysical Research: Biogeosciences.
8. Victoria Garshnek, David Morrison, and Frederick M. Burkle Jr. The mitigation, management, and survivability of asteroid/comet impact with Earth. 2000. Space Policy 16 (3), 213-222.
9. M.C. MacCracken, C. Covey, S.L. Thompson, P.R. Weissman. Global Climatic Effects of Atmospheric Dust from an Asteroid or Comet Impact on Earth. 1994. Global and Planetary Change 9 (3-4), 263-273.
10. Clark R. Chapman and David Morrison. Impacts on the Earth by Asteroids and Comets: Assessing the Hazard. 1994. Nature 367 (6458), 33-40.
11. John S. Lewis. Rain of Iron and Ice: The Very Real Threat of Comet and Asteroid Bombardment. 1997. Helix Books.
12. Thomas A. Weaver and Lowell Wood. Necessary conditions for the initiation and propagation of nuclear-detonation waves in plane atmospheres. 1979. Physical Review A 20 (1).
13. R.J. de Meijer, V.F. Anisichkin, W. van Westrenen. Forming the Moon from terrestrial silicate-rich material. 2010. Chemical Geology.
14. Alexei Turchin. The possibility of artificial fusion explosion of giant planets and other objects of Solar system. Scribd.

Block 3 Risks connected with 20th century technologies


Chapter 9. Nuclear Weapons
The risk of nuclear warfare has been discussed in many places, but this book is the
first to thoroughly analyze the risks of electromagnetic pulse (EMP) and nuclear winter and
how these effects combine on the ground. Research which has only been published since
2007 has shed new light on the effects of nuclear winter and is the first research to
examine 10-year, multi-scenario simulations. Previous simulations were run in the late 80s
and early 90s and used far inferior computers.
We should state from the outset that the risk of human extinction from nuclear
weapons alone is rather minute. Rather, nuclear warfare could create a general
atmosphere of chaos and decline that eventually leads to possible human extinction, in
combination with other factors such as biological warfare (to be covered in a later chapter)
and nuclear winter. Even taking all of this completely into account, our estimate of the risk
of human extinction from nuclear warfare is rather small, less than 1 percent over the
course of the 21st century. We aren't going to specify how far below 1 percent it is, but
simply use 1 percent as a convenient number that seems to be in the right ballpark.
To analyze the effects of nuclear warfare properly requires a complete understanding
of two fields: the literature and debate on nuclear winter, and the likely effects of
electromagnetic pulse (EMP) caused by a high-altitude nuclear detonation. People with a
thorough understanding of both these domains are fairly rare, even within the small slice of
academia and public policy circles that discusses the risks of nuclear war. Therefore, it is
important to understand that overall appraisals of nuclear war from seemingly trustworthy
sources, including physicists and arms control experts, may be unreliable or even
completely worthless, as these individuals often lack knowledge on crucial pieces of the
puzzle. The spectacular and intimidating nature of nuclear warfare tends to make casual
scientists exploring the topic fixate on a particular interpretation and be reluctant to back
away from it in the face of new evidence. Popular incorrect interpretations abound, such as
the false notion that nuclear war would assuredly wipe out humanity (a mistaken view
caused by sensationalist fictional accounts such as On the Beach) or the idea that nuclear
war would have rather mild long-term effects (this idea was prominent in the late 90s and
early 00s).
The leading scientific conception of the effects of nuclear war, particularly nuclear
winter, has taken a serpentine path, fluctuating back and forth since the invention of the
bomb. Since 2007, it is firmly understood, thanks to the work of Alan Robock, that even a
minor nuclear exchange, such as a few dozen warheads between India and Pakistan,
would have important worldwide climactic effects which would be devastating on crop
yields1. Crops are bred to mature within a very specific time frame, which would be gravely
disrupted if the growing season were shortened by even a couple weeks, as would occur if
there were even a minor nuclear exchange. The end result would be worldwide food
shortages, food riots, civil disorder, hunger-motivated crimes and atrocities, and mass
starvation.

The Evolution of Scientific Opinion on Nuclear Winter


Before we launch into likely scenarios for nuclear war, and to what degree they
threaten the long-term survival of humanity, we will briefly overview the shifting sands of
the dominant opinion on nuclear winter. In 1983 the first highly influential paper on nuclear
winter was published, by Turco, Toon, Ackerman, Pollack, and Carl Sagan (TTAPS,
pronounced T-Taps). The paper was titled "Nuclear Winter: Global Consequences of Multiple Nuclear Explosions" [2], and is credited with introducing the term "nuclear winter" to the public. The paper claimed: "For many simulated exchanges of several thousand megatons, in which dust and smoke are generated and encircle the earth within 1 to 2 weeks, average light levels can be reduced to a few percent of ambient and land temperatures can reach -15 degrees to -25 degrees C."
These numbers later turned out to be overly pessimistic, which Sagan admitted in his 1995 book The Demon-Haunted World. Though land temperatures would reach -15 degrees to
-25 degrees C in some highly landlocked areas in the event of full-scale nuclear war
(thousands of megatons), temperatures would not fall so low across most of the Earth's
land mass, according to the best present studies (more detail later). The paper also
claimed that in certain high-warhead exchange scenarios, the ambient radiation across the
surface of the Earth would reach 50 rad, which turned out to be a great exaggeration. The
danger of nuclear war is not primarily from the radiation (though it would kill hundreds of
millions) but the long-term temperature drop and its resulting impact on harvests.
Over the course of the 1980s, doubt proliferated regarding the extremity of the
nuclear winter predictions made by TTAPS, and the motives of the authors were
questioned. In 1987, Cresson Kearny, a respected civil defense engineer and former
military officer, rebutted the claims of TTAPS in his book Nuclear War Survival Skills. In the
book, Kearny said [3]:
Unsurvivable nuclear winter is a discredited theory that, since its conception in 1982, has been used to frighten additional millions into believing that trying to survive a nuclear war is a waste of effort and resources, and that only by ridding the world of almost all nuclear weapons do we have a chance of surviving.
Non-propagandizing scientists recently have calculated that the climatic and other environmental effects of even an all-out nuclear war would be much less severe than the catastrophic effects repeatedly publicized by popular astronomer Carl Sagan and his fellow activist scientists, and by all the involved Soviet scientists. Conclusions reached from these recent, realistic calculations are summarized in an article, "Nuclear Winter Reappraised", featured in the 1986 summer issue of Foreign Affairs, the prestigious quarterly of the Council on Foreign Relations. The authors, Starley L. Thompson and Stephen H. Schneider, are atmospheric scientists with the National Center for Atmospheric Research. They showed "that on scientific grounds the global apocalyptic conclusions of the initial nuclear winter hypothesis can now be relegated to a vanishingly low level of probability."
Kearny's primary source for these statements was the 1986 paper "Nuclear Winter Reappraised" by Starley Thompson and Stephen Schneider [4]. The authors later clarified that they "resisted the interpretation that this means a rejection of the basic points made about nuclear winter" [5]. Regardless, all of this thinking is now obsolete, since the present science is much better and our computers are thousands of times faster.
It turns out, in light of modern research and simulations, that Kearny and Sagan were both wrong. Nuclear war is neither a guarantee of intense nuclear winter, as Sagan and TTAPS implied, nor is such a winter of "vanishingly low" probability. Some form of nuclear winter is practically guaranteed in any nuclear conflict, though in some cases it might be mild enough to qualify as a "nuclear autumn", meaning a drop of only a few degrees Celsius. A full-blown, extremely deadly nuclear winter scenario, however, is quite assured if any significant portion of the present-day nuclear arsenals of the United States and Russia is used. Deadly in this sense means causing hundreds of millions of deaths, perhaps as many as 2 billion.
In his book, Kearny claims that the estimated temperature drop would be about 20
degrees Fahrenheit (11 degrees Celsius) and last only a few days. It turns out he was
completely incorrect; the average global temperature drop would be close to twice that, and
as much as three times that in very inland areas such as Kazakhstan and other parts of
interior Eurasia. This is extreme cooling, and well worthy of the term "nuclear winter". What's more, the severe temperature drop would last for 5-10 years, with atmospheric smoke decreasing by a factor of e (2.718) every 5.5 years in the 150 Tg (teragram) smoke injection scenario, which corresponds to full-blown nuclear war [6]. This would be extremely unpleasant if it were to happen, and the details will be left to later in this chapter.
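The 5.5-year e-folding time implies a simple exponential decay of the smoke loading; a small illustrative sketch, assuming the 150 Tg starting point discussed above:

import math

E_FOLD_YEARS = 5.5
initial_tg = 150.0
for year in (0, 1, 2, 5, 10, 15):
    remaining = initial_tg * math.exp(-year / E_FOLD_YEARS)
    print(f"year {year:>2}: ~{remaining:5.1f} Tg of smoke still aloft")
# Roughly 25 Tg remains after a decade, which is why the cooling lasts 5-10 years.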
For every stage of scientific progress on the question of nuclear war, there are people, including scientists, still stuck there. Most of them are stuck in the year 1983, when it was thought that nuclear winter would be universally fatal and that nothing could be done to survive it. Some of them are stuck in the year 1987 (as one of the authors was until
performing research for this book), thinking that nuclear winter is not really an issue and
would only be a mild nuclear autumn. The great majority of commentators on nuclear war
and nuclear winter are mentally stuck in one of these two places, being more than 20 years
out of date. Imagine if the knowledge of most computer experts were 20 years out of
date. That is where the state of knowledge is today regarding nuclear war. Because
nuclear war is such a complex topic, few people take notice of the mismatch. Twenty-year-old obsolete knowledge is considered acceptably up-to-date since nearly everyone, including the well-educated, shares the same ignorance.
The next major course correction in the view of nuclear winter took place in 1990,
when TTAPS published a revised estimate that lowered their estimation of the temperature
drop which would ensue from nuclear war [7]. Their update was "Climate and Smoke: An Appraisal of Nuclear Winter", published in Science. The article did not sufficiently retract the initial mistaken estimates from 1983, which triggered another backlash of skepticism and accusations of a political disarmament agenda [8]. The abstract summarizes their results: "For the most likely soot injections from a full-scale nuclear exchange, three-dimensional climate simulations yield midsummer land temperature decreases that average 10 degrees to 20 degrees C in northern mid-latitudes, with local cooling as large as 35 degrees C, and subfreezing summer temperatures in some regions." These numbers are significantly less than the initial numbers and added
much-needed nuance, provided by better simulations on superior computers. At the time,
the longest detailed simulation was only 105 days, and the computing power was not
available to make an accurate long-term estimate. The authors exhibited overconfidence in
the wording of their statements, exaggerating the degree of confidence it was possible to
achieve with the tools of the time.
From 1991 until 2007, the field of nuclear winter simulation studies was relatively
stagnant. According to climatologist Alan Robock, no major simulations worth citing were conducted between the years 1991 and 2007 [9]. By the time studies resumed in 2007, the price-performance of computers had improved by a factor of greater than 800, representing more than 10 doublings of Moore's law. This allowed the simulations to be correspondingly more detailed. In a 2012 featured article for the Bulletin of the Atomic Scientists, Robock wrote that previous models were based on "primitive computer models of the climate system" and that the duration of nuclear winter was found to be much longer than previously thought [10].
Today, Alan Robock's detailed work is the dominant model for the effects of nuclear
war, and his results, while not as gloomy as Sagan's, show that nuclear winter would still
be extremely severe, even in a limited regional conflict [11]. However, billions of people would
still be likely to survive. The temperature drop in places like South America, Africa,
southeast Asia, and Australia would be more survivable, ranging from 2.5 to 12.5 degrees
Celsius, depending on location. In most population centers in these locales, the
temperature drop would average around 5 degrees. Rather than being directly threatened
by the cold, as the northern continental areas would be, these people would be more
threatened by the indirect effects of food shortages, which would be severe, but probably
not civilization-threatening.
Robock's work shows that the temperature drops would be the greatest for North
America, Europe, and Eurasia, including China. In the case of full-blown nuclear war,
injecting 150 Tg of smoke into the upper atmosphere, the aforementioned areas would
suffer temperature drops between 15 and 35 degrees Celsius. In areas like Ukraine, the
temperature would remain below freezing year-round. Ukraine, being the breadbasket of Europe, would not be able to provide any crops whatsoever. Canada, much of Russia, and large parts of China would also remain below freezing year-round. Such temperatures are
even lower than those during the last Ice Age. During the last Ice Age, global temperatures
were only around 5 degrees Celsius colder than today. In this scenario, all of Europe and
America would be 15 to 20 degrees colder.
It is hard to properly convey how cold this would be. In combination with the effects of
ruined infrastructure due to EMP and general disorder, greater than 95 percent of the
population of Europe and America would be likely to perish. This percentage estimate is from the authors of this work; Robock does not give any precise estimate of likely casualties. He only calls nuclear warfare "self-assured destruction", meaning that an attacking state would seal its own demise even if it were not hit by any nuclear weapons in return. Experts commenting on EMP have, however, stated that 9 in 10 Americans would die in a grid-down scenario in the United States [12]. It is our view that the combined effects of EMP, nuclear war, and nuclear winter would be likely to kill 95 percent of citizens in the target countries, if not more. Some of this stems from the greater susceptibility of the northern hemisphere to nuclear winter and power grid disruption, due to
the greater concentration of land mass, distance from the equator, and dependency on
electricity, respectively. A greater concentration of land mass means that more land is
isolated from the thermal moderating effects of the world's oceans. This applies strongly to
North America and Eurasia.
In 2013, Robock co-authored a CNN article with Ira Helfand titled "No such thing as a safe number of nukes", which argues exactly that [13]. In the article, they write: "A study by Physicians for Social Responsibility showed that if only 300 warheads in the Russian arsenal got through to targets in American cities, 75 million to 100 million people would be killed in the first 30 minutes by the explosions and firestorms that would destroy all of our major metropolitan areas, and vast areas would be blanketed with radioactive fallout."
The key questions regarding the severity of nuclear winter are the following:
- What quantity of soot will arise and be thrown into the troposphere in the case of large-scale nuclear war?
- How will it influence the temperature of the Earth?
- How long will the soot remain in the upper atmosphere, and at what latitudes will it persist?
- What influence will the temperature drop have on the ability of humans to survive?
Research relevant to nuclear winter makes slightly different assumptions regarding all
of these points, and comes to different conclusions accordingly. It is important, however, to
cut back on all the complexity and hypothesize a set of normative scenarios, or there is
the inclination to get lost in analysis and avoid the urgency of concrete action. That is why
this chapter mentioned Robock's normative scenario before introducing ambiguity in the
form of the above questions.
The primary questions that have the greatest possible latitude in answering are 1)
how much smoke is injected into the atmosphere, 2) what latitude is that smoke injected at
(this strongly influences how much makes it into the upper atmosphere), and 3) what
influence will the temperature drop have on the ability of humans to survive? Regarding the question of the volume of smoke injection, there is a fair amount of uncertainty, because it depends strongly on the scale of the war. Robock's nuclear winter scenario outlined above is
likely in the instance of a full-scale nuclear war, but a more limited exchange would have
correspondingly more limited effects. An even worse outcome is possible if nuclear
weapons are intentionally detonated over forests in addition to cities, to deliberately
increase smoke injection into the upper atmosphere. A detailed scenario regarding the
likely effects of nuclear war and nuclear winter on human survival and daily life is presented later in this chapter.
There's more on this, but hold that thought, because it's time to switch gears to another almost-certain prelude to nuclear war: electromagnetic pulse.
Effects of Electromagnetic Pulse
Any nuclear attack would almost certainly be preceded by the high-altitude detonation
of a hydrogen bomb, which would create a massive electromagnetic pulse, frying much of the electronic equipment in the target area, which could extend across much of the continental United States or western Russia. Several bombs might be used, saturating the entire target country, or most of it, with the EMP. In 2008, a congressionally mandated body, the EMP Commission, completed a detailed study of the probable effects of EMP on critical American infrastructure [15]. The prognosis was for the complete collapse of the power
grid, and ensuing collapse of essentially all critical infrastructure, from water infrastructure
to food infrastructure to all emergency services.
The long-term effects would be grim. A conference by the national security/public
interest group The United West featured speakers who argued that an EMP attack alone
would result in the death of 90 percent of Americans within 12 to 18 months. William
Forstchen, author of the EMP warning book One Second After, cited a 2004 study and said: "Testimony in that study said 90 percent, let me repeat that, 90 percent of all Americans would die within 12-18 months of an EMP attack." The conference featured prestigious speakers such as (quoting from an article) "CIA Director R. James Woolsey, CIA Covert Operative Reza Kahlili, House Armed Services Committee Chairman Rep. Roscoe Bartlett, former Director of Strategic Defense Initiative Organization Henry Cooper, former National Intelligence Council Chairman Fritz Erdmarth and William Graham, President Reagan's science adviser and chairman of the EMP Commission."
In the scenario of an EMP attack, a hydrogen bomb would be detonated about 100
miles above the surface of the United States, possibly launched from a container ship in
the Gulf of Mexico or sent via intercontinental ballistic missile (ICBM). This would produce
a blast of x-rays which would ionize massive volumes of air, generating an electromagnetic
pulse. The radiation would ricochet within the ionosphere, reflecting back and forth,
building in intensity and blanketing the ground below it with the massive pulse. The EMP
would create a tremendous inductive current in all exposed wires, delivering a crippling
voltage spike to anything connected to the power network. The pulse would be front-loaded, meaning that surge protectors would not have the chance to detect a rise in
electrical activity and switch off in advance, destroying nearly every electrical appliance.
The surge would be channeled through phone cords, power cords, ethernet cords,
everything. The overload would destroy all transformers in the power grid, from large to
small. The largest transformers take about three years to replace under normal conditions and are only manufactured overseas. The United States lacks the manufacturing capacity to
make them. Without these transformers, the power grid would go down, and stay down.
Even a relatively minor disturbance has historically caused large parts of the grid to
temporarily collapse, and this disturbance would be on a level far greater than the grid has
ever been hit with. The widespread, simultaneous failure would make timely repairs
impossible, not only due to a lack of replacement parts, but also due to the ensuing
collapse of social order certain to be caused by the loss of the grid. The grid would be done for
good, more or less, and all crucial components would need to be completely replaced at
immense cost.
Besides destroying the power grid itself, the EMP would also destroy much of the
critical equipment at power plants and relay facilities. It would destroy the pumps that relay
water to vast areas which depend on them. It would destroy the pumps that send gasoline
from refineries to trucks which then distribute gasoline to gas stations around the country. It
would destroy the refineries that convert crude oil to gasoline and other products. It would
shut down the 25,000 food processing facilities in the United States that create food
products from raw foodstuffs. Nearly every feature of local civilization as know it would be
destroyed, at least 90 percent of the population would die, and the recovery time would be
greater than a decade, perhaps two or three. A decade would likely requiredminimum
to even reunify the country. It would consist of feudal, isolated groups until then. Taking into
115

account the crop-destroying effects of nuclear winter, food would be extremely scarce. The
attack would cause the complete collapse of civil society for more than a decade, possibly
as long as a generation. It would truly be an event without any historic precedent
whatsoever. For a visual dramatization of just ten days of grid down, see the National
Geographic docudrama American Blackout (2013) [15]. Imagine the scenario portrayed in that
documentary, but continuing for 5-20 years.
The EMP Commission verified that all this critical infrastructure would fail by analyzing
the impact of EMP on SCADA (supervisory control and data acquisition) boxes, which are
interspersed with nearly all modern infrastructural equipment, from power plants to water
utility plants to relay stations and so on. The repair teams created to deal with malfunctions
in these devices only have the numbers and spare parts needed to repair a few at a time
when they break down, and would never be able to handle a simultaneous, system-wide
grid-down scenario. Because the large transformers are the most crucial part of the grid
and the most difficult to replace, we'll quote the section from the EMP Commission's report
on damage to these components in its entirety:
The transformers that handle electrical power within the transmission system
and its interfaces with the generation and distribution systems are large, expensive,
and to a considerable extent, custom built. The transmission system is far less
standardized than the power plants are, which themselves are somewhat unique
from one to another. All production for these large transformers used in the United
States is currently offshore. Delivery time for these items under benign
circumstances is typically one to two years. There are about 2,000 such
transformers rated at or above 345 kV in the United States with about 1 percent per
year being replaced due to failure or by the addition of new ones. Worldwide
production capacity is less than 100 units per year and serves a world market, one
that is growing at a rapid rate in such countries as China and India. Delivery of a
new large transformer ordered today is nearly 3 years, including both manufacturing
and transportation. An event damaging several of these transformers at once means
it may extend the delivery times to well beyond current time frames as production is
taxed. The resulting impact on timing for restoration can be devastating. Lack of
high voltage equipment manufacturing capacity represents a glaring weakness in
our survival and recovery to the extent these transformers are vulnerable.
Nuclear War and Human Extinction


In discussions of human extinction, it is often remarked that nuclear war alone does
not threaten to wipe out humanity. In general, this is correct: nuclear war alone does not
threaten to wipe out humanity. But what about nuclear war in combination with, say,
biological warfare involving anthrax and novel prions? What about nuclear warfare in
conjunction with a nuclear bomb-triggered supervolcano eruption? What about nuclear war
in conjunction with nuclear weapons deliberately planted in coal seams or tropical forests,
which detonate and release even more particulate matter, blocking out the sun so intensely
that it triggers a new Ice Age? Nevertheless, we must concede that total human extinction
is not particularly likely in any of these scenarios. The objective in mentioning them is to
show that those who too quickly dismiss the threat of nuclear war may not have considered
conjunctive scenarios which could greatly exacerbate it. Many of them also underestimate
the intensity of nuclear war and nuclear winter and certainly have not gone so far as to
come to the realization that mass cannibalism would be a likely nutritional necessity in the
instance of nuclear winter. This shows a lack of earnestness or honesty in the analysis.

More Exotic Nuclear Scenarios


Putting mass cannibalism behind us, there are a number of more exotic nuclear risks
which ought to be mentioned, although conventional nuclear war is the greatest and most probable risk. The most notable exotic scenario concerns the use of cobalt bombs, that is, nuclear weapons wrapped with an envelope of cobalt [23]. Upon detonation, the free neutrons
created by the explosions would transmute the cobalt into the highly radioactive isotope
cobalt-60, which has a half-life of 5.27 years. This is in contrast with most other fallout
products of a nuclear explosion, which only have a half-life of a few days and decay to
acceptable levels in just three to five weeks. In the cobalt bomb scenario, the fallout would
remain lethal for decades instead of weeks, and the only way to deal with it would be to
wait for it to break down.
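A small sketch of why the half-lives quoted above matter, computing how long activity takes to fall a hundredfold (the hundredfold factor is an illustrative assumption, not a safety standard):

import math

def years_to_decay(half_life_years: float, reduction_factor: float) -> float:
    """Years for radioactivity to fall by the given factor, assuming a single isotope."""
    return half_life_years * math.log(reduction_factor) / math.log(2)

print(years_to_decay(5.27, 100))           # cobalt-60: ~35 years, i.e. decades
print(years_to_decay(5 / 365, 100) * 365)  # a few-day-half-life product: ~1 month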
The concept of the cobalt bomb was first made public in a 1950 radio interview by
physicist Leo Szilard. In the interview, he suggested that an arsenal of cobalt bombs could
destroy all life on Earth, an assertion which has since been refuted by experts [24]. Szilard remarked that making the surface of the Earth uninhabitable to humans would require the transmutation and vaporization of cobalt equal to half the mass of the battleship USS Missouri, which weighs in at 40,820 tons. The idea that some country or organization would 1) accumulate enough nuclear bombs to vaporize this much cobalt directly enough to
transmute it, 2) gather 20,000 tons of cobalt, 3) place these bombs all across the world, in
nearly every country, in such a perfect distribution as to cover as much land as possible...
is implausible. Fallout has a limited extent which it can travel, due to the weight of particles
and the strength of the prevailing winds. A major fallout plume can only travel a few
hundred miles, along a fairly restricted flight path a few dozen miles wide, and does not
distribute any serious quantity of particles or radiation farther than that. Any practical
number of bombs hitting restricted strategic targets would never produce enough fallout to
cover all the inhabitable areas of the Earth, or even more than about 10 percent of it. At the
very least, there are numerous remote islands and areas where there are no strategic
targets and which will remain fallout-free.
When Szilard gave his radio interview and hypothesized that cobalt bombs could
mean the end of humanity, it was generally thought that radioactive fallout would be
dispersed by atmospheric winds evenly across the entire planet, and fall to the earth in a
more or less homogeneous manner. The more contained and directional nature of fallout
plumes was not well understood at the time. If fallout really did behave the way they
thought it did, cobalt bombs would be a danger to humanity's survival; but it does not, and they are not. Of course, the inclusion of cobalt bombs in a nuclear war could make a grim
outcome even more grim, by more thoroughly ensuring that people in the fallout zone are
killed by radiation poisoning, but many billions of people would remain outside of any fallout
zone and would be spared from the radioactive cobalt, even if it did make a portion of the
Earth's surface completely uninhabitable for several decades.
Another unusual nuclear weapon risk would be the detonation of nuclear bombs in a
coal seam, resulting in a major smoke injection into the atmosphere. The severe nuclear
winter scenario described by Robock involved the injection of 150 teragrams (150 million
tonnes) of smoke into the upper atmosphere, resulting in a nuclear winter of ten years or
longer. Robock calculated plausible smoke injection levels for various nuclear war
scenarios based on the fuel loading (carbon footprint) of the average person in given
countries. In a scenario involving 50 weapons, each with an extremely low yield of 15
kilotons, for instance, he calculated that Argentina being struck by that number of weapons
of that yield would produce about 1 million tons of smoke from burning cities, Brazil 2
million tons, China 5 million, Egypt 2.5 million, France 1 million, India 3.7 million, Iran 2.4 million, Israel 1 million, Japan 2 million, Pakistan 3 million, Russia 2 million, UK 1 million, US 1 million [25]. Let us compare these numbers to the plausible smoke injection from a coal
seam detonation.
As the yield and number of the weapons increases, so does the projection of emitted
smoke, though with a somewhat shallow slope. At the level of 2,000 weapons of 100
kiloton yield, which is more in the ballpark of what a full-fledged nuclear war would be like,
the emitted smoke would be 90 million tons of smoke for China, 43 million tons for Russia,
and 38 million tons for the United States. That adds up to about 171 million tons of smoke,
which would cause the severe nuclear winter scenario described earlier. These numbers
don't even take into account the effects of smoke from Europe, though that is likely to be
similar to the numbers for Russia and the US, or about 40 million tons, giving a possible
grand total of roughly 211 million tons, for that scenario. Robock estimates 180 million
tonnes of smoke released in a scenario involving 4,400 100-kiloton weapons.
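Adding up the quoted per-country figures for the 2,000-weapon, 100-kiloton case (numbers are the text's, in millions of tonnes of smoke):

smoke_mt = {"China": 90, "Russia": 43, "United States": 38}
subtotal = sum(smoke_mt.values())
print(subtotal)        # 171 Mt from the three largest contributors
print(subtotal + 40)   # ~211 Mt once a Europe-sized contribution is included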
We can roughly calculate the amount of particulate matter which would be ejected
from a nuclear explosion in a coal seam. The radius of the rock melted by an underground
nuclear explosion is about 12 meters times the cube root of the yield in kilotons [26]. Consider a one-megaton nuclear device. The number of kilotons (1,000) has a cube root of 10; multiply that by 12 and we get 120, so it would produce a cavity with a radius of 120 meters. Coal seams have been discovered which are as thick as 45 meters, so a nuclear explosion from a one-megaton device could vaporize a cylindrical coal volume with a radius of 120 meters. That's about 2 million cubic meters of coal. Repeat that with ten devices, and you have 20 million cubic meters of coal completely vaporized. Assume half of that makes it to the stratosphere. That gives a smoke injection corresponding to 10 million cubic meters of coal. A cubic meter of solid bituminous coal is 1,346 kilograms, so that gives us a smoke injection of about 13 billion kilograms, or 13 teragrams. This is substantial relative to the 150 Tg
scenario described earlier, so it could be an aggravating factor if some country or leader
were insane enough to try it. Threatening to nuke a coal seam and cause nuclear winter in
the case of attack could be a considerable Doomsday threat, even if not conducive to
international respect. This sort of self-destructive behavior has been described as the
Samson Option. It is represented by the phrase "mess with me, and I'll take down everyone around me." We certainly wouldn't put it past some dictators to try this, and thus it
must be included in our sphere of consideration.
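The coal-seam estimate above can be written out step by step; the cavity-radius rule of thumb, the seam thickness, the coal density and the 50 percent lofting assumption are all taken directly from the text:

import math

def melt_radius_m(yield_kt: float) -> float:
    """Cavity radius for an underground burst: ~12 m times the cube root of yield (kt)."""
    return 12.0 * yield_kt ** (1.0 / 3.0)

r = melt_radius_m(1_000)                    # one-megaton device -> 120 m
coal_per_device_m3 = math.pi * r**2 * 45.0  # 45 m thick seam -> ~2 million m^3
lofted_m3 = 10 * coal_per_device_m3 * 0.5   # ten devices, half reaches the stratosphere
print(lofted_m3 * 1_346 / 1e9)              # ~13-14 Tg of coal smoke, as estimated above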
Another risk concerning nuclear weapons which has been mentioned in the risk
literature, in passing, is the use of nuclear weapons to vaporize methane clathrate on the
ocean floor, in an attempt to trigger runaway global warming. Methane clathrates are ice
crystals with embedded methane; the material is sometimes called "fire ice" for the way it burns when lit. The 20-year global warming potential (GWP) of methane is 86 times greater than that of carbon dioxide, so methane is a potent greenhouse gas. Due to the current trend of global warming, methane clathrates are melting and releasing methane from the ocean floor in large amounts, which is thought to be accelerating global warming [27]. However, we strongly
doubt that the use of nuclear weapons could accelerate this trend enough to make much of
a difference, and the global cooling potential of using nuclear weapons on cities, a forest,
or a coal seam is much greater than the global warming potential of using them on methane
clathrate deposits. Also, it is doubtful that global warming could cause human extinction
anyway. Global warming is explored in much more detail in the chapter on that
subject.
Greater than the risk of nuclear weapons being used to attack objects like coal seams
is the risk that uranium could become much easier to enrich and therefore accessible to
many more states, increasing the overall probability of nuclear war. Obviously, if nuclear
weapons can be built by more states more cheaply, the likelihood that they will be used
increases. Our global safety has been safeguarded by the fact that nuclear weapons are
difficult to produce. It's been more than 70 years since they were invented, but only nine
states have developed nuclear weapons since then. These are the United States, Russia,
China, France, the United Kingdom, India, Pakistan, North Korea, and Israel. All except for
North Korea are thought to have at least 100 warheads, enough to cause a serious (though
not severe as outlined earlier) nuclear winter. What if another 20 states acquired nuclear
weapons? States like Sudan, South Africa, Libya, Syria, Lebanon, Egypt, Turkey, Saudi
Arabia, Iraq, Afghanistan, and so on. What if North Korea had 10,000 nuclear warheads
instead of just 10-20? With better enrichment technology, in the long run of the 21st century,
it could happen.
Throughout nuclear history, the primary method of uranium enrichment has been
gaseous diffusion through semi-permeable membranes. This method is extremely
expensive and power-hungry. The Portsmouth Gaseous Diffusion Plant south of Piketon,
Ohio covers 640 acres, with the largest buildings, the process buildings, covering 93 acres and extending more than one and a half miles, with about 10 million square feet of floor space. At its peak the plant consumed 2,100 megawatts of electrical power (it shut down in 2001). For comparison, the average power consumption of New York City is about 4,570 megawatts, so this single plant used almost half as much electricity as New York City, roughly the consumption of 4 million people. These extreme power requirements put uranium enrichment out of reach of many states.
Construction costs for the plant were about $750 million.
The extreme cost of uranium enrichment has been lowered with the development of
centrifuge enrichment technology, which was adopted in the 1970s. Gas centrifuge
enrichment is the dominant enrichment method today, making up about 54 percent of
worldwide uranium enrichment. Instead of pushing uranium hexafluoride gas through semi-permeable membranes, this approach uses centrifuge cascades to separate the slightly lighter uranium isotope U-235, which can be used in nuclear power plants and nuclear weapons, from the more common U-238 isotope. The exact process is highly secretive. Gas centrifuges have been used by Iran to enrich uranium for nuclear plants, which has led to international sanctions and repeated disputes at the United Nations. The gas only needs to pass through 30-40 centrifuge stages to reach the 90 percent enrichment level needed for weapons-grade uranium. However, according to enrichment experts the process is still crude: it must be repeated over and over, and it still consumes massive amounts of electrical power at immense cost.
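To give a feel for why so few stages are needed, here is a simplified ideal-cascade estimate in Python. The per-stage separation factor is an assumed illustrative value (real cascade design is far more complex and the exact figures are not public):

import math

feed_assay = 0.00711        # U-235 fraction in natural uranium
product_assay = 0.90        # weapons-grade target mentioned above
separation_factor = 1.3     # assumed per-stage factor for a modern centrifuge

abundance_ratio = lambda x: x / (1.0 - x)
stages = math.log(abundance_ratio(product_assay) / abundance_ratio(feed_assay)) / math.log(separation_factor)
print(f"Roughly {stages:.0f} enriching stages")  # on the order of 30, consistent with the 30-40 quoted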
Today, these two methods are the only ones in use. New methods are under development which may offer significant improvements in cost-effectiveness. One method, currently being developed by a team of Australian scientists led by Michael Goldsworthy, is laser enrichment28. Instead of spinning uranium gas or pushing it through membranes, this approach uses precisely tuned lasers to selectively excite or photoionize the lighter uranium isotope so that it can be separated onto a collector. Laser isotope separation (one long-studied variant is atomic vapor laser isotope separation) has been researched since at least 1985, and has still not been made more cost-effective than centrifuges, though the Australian team claims to have made breakthroughs that put this within reach. This may just be hype, and only time will tell. In the mid-90s, the United States Enrichment Corporation contracted with the Department of Energy on a $100 million project to develop the technology, but failed29. Iran is known to have had a secret laser enrichment program, but it was discovered in 2003 and Iran has since officially claimed it was dismantled. Whether this actually happened has not been verified by weapons inspectors.
Laser enrichment technology, even if perfected, would hardly offer revolutionary improvements in enrichment capability. Nonetheless, it worries arms control bodies like the US Nuclear Regulatory Commission (NRC) because of its proliferation risk. The
lower cost and more compact nature of laser enrichment would allow enrichment facilities
to be about four times smaller, making them harder to detect from surveillance photos. This
could allow so-called rogue states to enrich uranium and possibly manufacture weapons
with impunity.
The effectiveness of centrifuges depends upon manufacturing details and power-to-weight ratio. It may be possible to make them vastly more effective by inventing
fundamentally new manufacturing technology and building new, better centrifuges with
superior velocities and enrichment throughput. We will explore this possibility in the later
chapter on nanotechnology, and consider the mass-production of nuclear weapons to be a
risk associated with advancements in that field. During that discussion later in the book, be
sure to recall the grave dangers of nuclear warfare and nuclear winter discussed in this
chapter. Even a limited nuclear exchange would cause a serious nuclear winter, and a full
nuclear exchange would cause a crippling one. If enrichment technology becomes much
cheaper and more widespread, we could see a renewed nuclear arms race, where dozens
of states each have tens or even hundreds of thousands of warheads. That could quickly
lead us to an extremely unstable world. Competitive rhetoric between states does not cool
down just because they have nuclear weapons. Human nature and aggressiveness are constant, but our destructive capability is not.
There are a couple more nuclear risks which we ought to mention, although they are longer-term risks, more likely to emerge near the end of the 21st century rather than the beginning or the middle. The first is the use of extremely high-yield bombs to vaporize large amounts of rainforest, injecting more smoke into the atmosphere than the 150 Tg scenario. The highest-yield nuclear weapon ever built, Tsar Bomba, had a yield of 50 megatons, and its intensity was truly impressive. Quoting the nuclear weapon archive30, the yield of the bomb was "10 times the combined power of all the conventional explosives used in World War II, or one quarter of the estimated yield of the 1883 eruption of Krakatoa, and 10% of the combined yield of all nuclear tests to date." The fireball alone was five miles (8 kilometers) in diameter, and all buildings were destroyed as far as 34 miles (55 km) away from the explosion. The heat of the explosion was intense enough that it could have caused third-degree burns 62 miles (100 km) away from ground zero. The amount of forest a bomb of this nature could incinerate would be truly apocalyptic, and it could conceivably inject enough smoke into the atmosphere to cause a world-ending temperature drop.
More research into this possibility is needed. Could humanity survive in a world that has a
temperature equal to interior Antarctica? Let's hope we never find out.
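As a very rough sketch of the numbers involved, consider the following Python estimate of the smoke from a single Tsar Bomba-class detonation over dense forest. Only the 100 km burn radius comes from the text; the fuel load, burned fraction and smoke yield are assumed values, so the result should be read as an order of magnitude at best:

import math

ignition_radius_m = 100e3   # ~62 mi third-degree-burn radius quoted above
fuel_load_kg_m2 = 10.0      # assumed dry fuel per square meter of dense forest
burned_fraction = 0.5       # assumed fraction of fuel actually consumed
smoke_yield = 0.02          # assumed kg of smoke per kg of fuel burned

area_m2 = math.pi * ignition_radius_m ** 2
smoke_tg = area_m2 * fuel_load_kg_m2 * burned_fraction * smoke_yield / 1e9
print(f"~{smoke_tg:.0f} Tg of smoke per bomb")                      # a few Tg under these assumptions
print(f"~{150 / smoke_tg:.0f} such bombs to match the 150 Tg scenario")

Under these quite uncertain assumptions, a single device falls well short of the 150 Tg scenario, which is one more reason the question deserves proper climatological study.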
In the same vein, in the more distant future it may be possible to create extremely
large bombs and detonate them high up in the atmosphere, showering the earth with x-rays
and gamma rays in sufficient quantities to be fatal to life on the surface. It is not known what the maximum possible yield of a nuclear bomb truly is. Apparently, at one point there were
vague references to the idea that a 500 megaton nuclear bomb could be developed by the
Soviet Union and used to trigger a gigantic tidal wave to slam into California 31. There are
scary nuclear possibilities which were raised during the 50s and 60s, which seem to be all
but forgotten today. Who knows what obscure facts about the maximum potential of
nuclear weapons can be found in the United States' and Russia's top secret archives?
There may be devices which could be assembled on short order that make our currently-known nuclear weapons seem like firecrackers. This is a topic that is scarcely discussed,
and would benefit from academic analysis.
Still another highly intimidating scenario is the potential use of large nuclear bombs to
uncork supervolcanoes, specifically the Yellowstone Supervolcano in Yellowstone
National Park, which lies mostly within the boundaries of the state of Wyoming. If this
supervolcano erupted, it would shower the western United States in a layer of ash several
feet deep. Calculations have shown, however, that the cooling potential of volcanic ash is much lower than that of the smoke lofted in a nuclear war, and an eruption would cause a correspondingly milder volcanic winter32. In 1816, the "Year Without a Summer" is thought to have been triggered by a volcanic eruption (Tambora, in 1815), and contributed to famine, though modern agriculture would be
much more resistant to such an event occurring in contemporary times. Although
uncorking a supervolcano would be a highly effective attack on the United States itself,
and could weaken the country severely, it is not likely to threaten the entirety of humanity

nearly as much as nuclear war would. This is primarily because volcanic ash particles are larger and heavier than the smoke particles from fires, and would drop from the sky much faster.
There are around 20 dormant supervolcanoes on Earth, and some of them might be triggered by means of nuclear weapons. If several supervolcanoes were triggered simultaneously, the result would be much larger than a single eruption.

Nuclear space weapons

Nuclear attack on nuclear power stations

Cheap nukes and new ways enrichment

Estimating the Probability of Nuclear War Causing Human Extinction


In this chapter, we have thoroughly examined the dangers of nuclear warfare and
nuclear weapons at a level that is commensurate with the best published literature on the
topic. We have integrated the most up-to-date information at the time of this writing
(January 2015) on EMP, nuclear war, and nuclear winter, to give an integrated sketch of the
severe risks. After all this analysis, we can carefully conclude that nuclear weapons are not
likely to cause the end of humanity. The only scenario we can imagine which could truly be
fatal to the human species would be the smoke injection of massive amounts of incinerated
rainforest by Tsar Bomba-class nuclear bombs. Even then, there would probably be
significant equatorial regions where the temperature drop would only be 30 degrees
Celsius or so, and millions of people could easily survive there, though they might be
alarmed at the darkened skies and unprecedented snowfall.
No matter how much smoke is injected to the atmosphere, Robock found that it would
reduce by a factor of e (~2.718) roughly every 5.5 years, and that makes almost any
conceivable scenario survivable. If people can survive at the peak of nuclear winter, during
the first 5 years, then they will likely be able to survive the whole thing. The Earth is a large
planet, with very high temperatures near the equator, which make it vastly resistant to
global cooling. In planetary history, roughly 700 million years ago, the Earth did go through
the Cryogenian, a period where glaciers extended to the equator and significant portions
of the oceans may have frozen over. Multicellular life is thought to have blossomed shortly
after the end of this period, so it may be that it could not have existed during the Cryogenian, a testament to how intensely cold it must have been.
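As a minimal sketch of the decay dynamics Robock describes (assuming simple exponential decay with a 5.5-year e-folding time and the 150 Tg injection as a starting point):

import math

initial_smoke_tg = 150.0
e_folding_years = 5.5

for year in (1, 5, 10, 20):
    remaining_tg = initial_smoke_tg * math.exp(-year / e_folding_years)
    print(f"Year {year:2d}: ~{remaining_tg:5.1f} Tg remaining")

After roughly a decade, only a small fraction of the original smoke burden is left, which is why surviving the first few years is the decisive question.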
If a Snowball Earth scenario could be triggered by enough smoke injection, it would
actually be possible that nuclear weapons could cause the end of humanity. We lack the
technology to create self-sufficient colonies in orbit, on the Moon, or on Mars, and will likely
lack it for many decades, perhaps all of the 21st century. So, we truly do depend on the
inhabitability of the Earth. Yet, no climatologist has yet performed an analysis of how much
smoke would need to be injected into the atmosphere to trigger Snowball Earth, so we are
quite literally in the dark on this.
As stated at the beginning of the chapter, we have decided to assign a probability of 1
percent to the probability that nuclear war could cause the end of humanity in the 21st century, partially because of the plausible intensity of nuclear war as an exacerbating factor of extinction when considered in combination with other risks. There is no other currently existing technology which we can confidently say would wipe out 95 percent of the
population in areas unlucky enough to be ravaged by it. Certainly, there is no historical
precedent for epidemics that kill so many. Although nuclear war lacks the prospect of
totality that the worst biotechnology, nanotechnology, and Artificial Intelligence scenarios
present, we think it would be premature to dismiss the risk. Thus, we cautiously assign a
risk probability of 1 percent, with more probability concentrated towards the end of the
century.

Near misses
I wrote an article on how we could use such data in order to estimate the cumulative probability of nuclear war up to now.
TL;DR: from other domains we know that the frequency of close calls is around 100:1 relative to actual events. If we extrapolate this to nuclear war and assume that there were many more near misses than we know of, we could conclude that the probability of nuclear war has been very high and that we live in an improbable world where it didn't happen.

Yesterday, 27 October, was Arkhipov day, in memory of the man who prevented nuclear war. Today, 28 October, is Bordne and Bassett day, in memory of the Americans who prevented another near-war event. Bassett was the man who did most of the work of preventing a launch based on a false attack code, and Bordne made the story public.
The history of the Cold War shows us that there were many occasions when the world stood on the brink of disaster. The most famous of them are the cases of Petrov, Arkhipov, and the recently opened Bordne case in Okinawa.
I know of over ten, but fewer than a hundred, similar cases of varying degrees of reliability. Other global catastrophic risk near-misses are not nuclear but biological, such as the Ebola epidemic, swine flu, bird flu, AIDS, oncoviruses and the SV-40 vaccine.
The pertinent question is whether we have survived as a result of observational selection,
or whether these cases are not statistically significant.
In the Cold War era, these types of situations were quite numerous (such as the Cuban missile crisis). However, in each case it is difficult to say whether the near-miss was actually dangerous. In some cases, the probability of disaster is subjective, that is, according to participants it was large, whereas objectively it was small. Other near-misses could be a real danger, but not be recognized by operators.
We can define a near-miss of the first type as a case that meets both of the following criteria:
a) safety rules have been violated
b) emergency measures were applied in order to avoid disaster (e.g. emergency braking of a vehicle, refusal to launch nuclear missiles)
A near-miss can also be defined as an event which, according to some participants of the event, was very dangerous. Or, as an event during which a number of factors (but not all) of a possible catastrophe coincided.
Another type of near-miss is the miraculous salvation. This is a situation whereby a disaster was averted by a miracle, that is, it had to happen, but it did not happen because of a happy coincidence of newly emerged circumstances (for example, a bullet stuck in the gun barrel). Obviously, in the case of a miraculous salvation the chance of catastrophe was much higher than in near-misses of the first type, on which we will now focus.
We may take the statistics of near-miss cases from other areas where a known correlation between near-misses and actual events exists, for example, comparing the statistics of near-misses and actual accidents with victims in transport.
Industrial research suggests that one crash accounts for 50-100 near-miss cases in different areas, and 10,000 human errors or violations of regulations ("Gains from Getting Near Misses Reported").
Another survey estimates 1 to 600, another 1 to 300, and even 1 to 3000 (but in the case of unplanned maintenance).
The spread of estimates from 100 to 3000 is due to the fact that we are considering different industries, and different criteria for evaluating a near-miss.
However, the average ratio of near-misses is in the hundreds, and so we cannot conclude that the observed non-occurrence of nuclear war results from observational selection.
On the other hand, we can use a near-miss frequency to estimate the risk of a global
catastrophe. We will use a lower estimate of 1 in 100 for the ratio of near-miss to real case,
because the type of phenomena for which the level of near-miss is very high will dominate
the probability landscape. (For example, if only 1 in 1000 epidemic near-misses ends in catastrophe, while for nuclear disasters the ratio is 1 to 100, then near misses in the nuclear field will dominate.)
During the Cold War there were several dozen near-misses, along with several near-miss epidemics in the same period; this indicates that at the current level of technology we have about one such case a year, or perhaps more. If we analyze the press, several times a year there is some kind of situation which may lead to global catastrophe: a threat of war between North and South Korea, an epidemic, the passage of an asteroid, a global crisis. And many near-misses remain classified.
If the average level of safety in regard to global risks does not improve, the frequency of
such cases suggests that a global catastrophe could happen in the next 50-100 years,
which coincides with the estimates obtained by other means.
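A minimal sketch of this frequency-based reasoning, assuming roughly one relevant near-miss per year and the lower 1:100 event-to-near-miss ratio discussed above (both figures are rough, so the output is only indicative):

near_misses_per_year = 1.0
events_per_near_miss = 1.0 / 100.0

annual_p = near_misses_per_year * events_per_near_miss   # ~1 percent per year
for horizon_years in (50, 100):
    p_at_least_one = 1.0 - (1.0 - annual_p) ** horizon_years
    print(f"P(global catastrophe within {horizon_years} years) ~ {p_at_least_one:.0%}")

This gives roughly 40 percent over 50 years and 60 percent over 100 years, consistent with the 50-100 year horizon mentioned above.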
It is important to increase detailed reporting of such cases in the field of global risks, and to learn how to draw useful conclusions from them. In addition, we need to reduce the level of near misses in the areas of global risk by rationally and responsibly increasing the overall level of security measures.

The map of x-risks connected with nuclear weapons


http://immortality-roadmap.com/nukerisk2.pdf

interactive version: http://immortality-roadmap.com/nukerisk3bookmarks.pdf


http://lesswrong.com/lw/n3k/global_catastrophic_risks_connected_with_nuclear/

References

Alan Robock, Luke Oman, Georgiy L. Stenchikov, Owen B. Toon, Charles Bardeen, and Richard P. Turco. Climatic consequences of regional nuclear conflicts. 2007a. Atmospheric Chemistry and Physics, 7, 2003-2012.

Alan Robock, Luke Oman, and Georgiy L. Stenchikov. Nuclear winter revisited with a modern climate model and current nuclear arsenals: Still catastrophic consequences. 2007b. Journal of Geophysical Research, 112, D13107.

Cresson Kearny. Nuclear War Survival Skills. 1979. Oak Ridge National
Laboratory.

Starley L. Thompson and Stephen H. Schneider. Nuclear Winter Reappraised. 1986. Foreign Affairs, Vol. 64, No. 5 (Summer, 1986), pp. 981-1005.

Stephen H. Schneider, letter, Wall Street Journal, 25 November 1986.

Robock 2007b.
R.P. Turco, O.B. Toon, T.P. Ackerman, J.B. Pollack, Carl Sagan. Climate and Smoke: An Appraisal of Nuclear Winter. 1990. Science, 247, 166-176.

Malcolm M. Browne. Nuclear Winter Theorists Pull Back. January 23, 1990.
The New York Times.

Robock 2007b.

Alan Robock and Owen Brian Toon. Self-assured destruction: The climate
impacts of nuclear war. 2012. The Bulletin of Atomic Scientists, 68(5) 66-74.

Robock 2007a.

Joseph Farah. EMP Could Leave '9 Out of 10 Americans Dead'. May 3, 2010.
WorldNetDaily.

Ira Helfand and Alan Robock. No such thing as safe number of nukes. June
20, 2013. CNN.

John S. Foster et al. "Report of the Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack." April 2008. Congress of the United States.

American Blackout 2013. October 27, 2013. National Geographic Channel.

Daniel DeSimone et al. The Effects of Nuclear War. May 1979. Congress of
the United States.

Daniel Ellsberg. U.S. Nuclear War Planning for a Hundred Holocausts. September 13, 2009. Ellsberg.net.

Farah 2010.

Supply and Disappearance Data. USDA.gov.

Yingcong Dai (2009). The Sichuan Frontier and Tibet: Imperial Strategy in the Early Qing. University of Washington Press. pp. 22-27.

Costco Annual Report 2013.

About the NALC. Native American Land Conservancy.

Brian Clegg. Armageddon Science: The Science of Mass Destruction. 2011. St. Martin's Griffin. p. 77.

The Effects of Nuclear Weapons (Report) (3rd ed.). 1977. Washington, D.C.:
United States Department of Defense and Department of Energy.

Robock 2007a.

Carey Sublette. The Effects of Underground Explosions. March 30, 2001. NuclearWeaponArchive.org.
Michael Marshall. Major methane release is almost inevitable. February 21, 2013. New Scientist.

Richard Macey. Laser enrichment could cut cost of nuclear power. May 27,
2006. The Sydney Morning Herald.

Macey 2006.

Big Ivan, The Tsar Bomba ("King of Bombs"). September 3, 2007. NuclearWeaponsArchive.org.

Citation for 500 gigaton tidal wave bomb. (ask Alexei.)

Stephen Self. "The effects and consequences of very large explosive volcanic
eruptions". August 15, 2006. Philosophical Transactions of the Royal Society A,
vol. 364 no. 1845 2073-2097.

Other comments:
1. Satellites may host nuclear weapons for a first EMP strike, and North Korea's first satellite was placed in a polar orbit, which is best suited for this.
2. China may have a much larger nuclear arsenal than is publicly known, perhaps several thousand nuclear bombs (link below).
3. At some point pure fusion weapons could be created (link below).
4. Large nuclear bombs have been proposed to serve as an anti-asteroid shield, but could be used in nuclear war. Teller planned to do this.
5. Nuclear summer after nuclear winter.
6. The question of nuclear detonation of the atmosphere: the LA-602 report ("Ignition of the Atmosphere with Nuclear Bombs", 1945) showed this to be impossible in air. Another article assessed whether it is possible in sea water and found that the deuterium concentration is about 20 times lower than needed (not a huge margin, as some places on Earth may have naturally higher concentrations of deuterium and also of lithium, such as dry lakes), along with other constraints, such as the enormous initial bomb required. A large bomb in a uranium shaft or in a lithium deposit might conceivably work, but probably not. This unfinished article discusses the topic further: http://www.scribd.com/doc/8299748/The-possibility-of-artificial-fusion-explosion-of-giant-planets-and-other-objects-of-Solar-system
7. Risks of accidental nuclear war. Bruce Blair's book is about this. Nuclear proliferation leads to many more pairs of hostile nuclear states.
8. Nuclear terrorism as a provocation of war, or an attack on a nuclear station. What would happen if a bomb exploded inside a nuclear reactor?
9. Use of nuclear weapons against bioweapons facilities: would it kill the virus or disseminate it?

10. The probability of nuclear war is around 1 percent a year based on a frequency approach, if we count 1945 as a nuclear war. But most future nuclear wars may not be all-out nuclear wars. The probability (or, speaking more precisely, our credence based on known facts and oriented toward risk prevention) of a nuclear war that stops progress for 10-500 years is on the order of 10 percent.
11. A coronal mass ejection could have the same devastating results as an EMP attack: http://www.washingtonpost.com/blogs/capital-weather-gang/wp/2014/07/23/how-a-solar-storm-nearly-destroyed-life-as-we-know-it-two-years-ago/?hpid=z5 In the years 774-775 AD an event about 20 times stronger occurred: http://arxiv.org/pdf/1212.0490v1.pdf
12. If one country is under attack from a high-altitude EMP, it has an incentive to put rival countries in the same condition, so attacks in many places all over the world are possible.
13. Nuclear power stations need electricity for cooling, or they will melt down like Fukushima.

Chapter 10. Global chemical contamination


Chemical weapons are usually not considered a doomsday weapon. There are no
academic references of which the authors are aware that seriously argue that chemical
weaponry could be a threat to the survival of the human species as a whole. However, in
the interests of completeness, and considering distant risks, we will address chemical
weapons in the context of global catastrophic risks here.
Of course, chemical and biological weapons are completely different. Biological
weapons, such as smallpox, may be self-replicating, whereas chemical weapons are not.
VX gas, the most persistent chemical warfare agent, has a maximum persistence in a cold
environment (40-60 degrees F) of thirty to ninety days 1. In a warm environment (70-90
degrees F), it persists for ten to thirty days. In the context of global risks, every habitable
location on the globe would need to be saturated with VX liquid in enough quantity to kill
every, or nearly every human being within ten days or less. Given currently available
technology, this would be impossible. Even taking into account mass-produced unmanned
aerial vehicles, it would probably take billions of flying robots working around the clock for
many days, drawing on a supply of many hundreds of tonnes, to exterminate all human life
on Earth. It does seem to be theoretically possible in the long-term future, if robotics,
nanotechnology, and artificial intelligence continue to advance and if progress in offensive
applications outpaces defensive applications.
The lethal dose for VX gas is only ten milligrams coming into contact with the skin,
30-50 mg if inhaled. Assuming ten billion people and perfectly accurate delivery methods, it
would take about 100 tonnes to exterminate the human population. In reality, it would
probably take 100-1,000 times that to cover sufficient area to be sure of contaminating it,
so, a supply of 10,000-100,000 tonnes would be required. Throughout the course of World
War I, it is estimated that the Germans and French used about 100,000 tons of chlorine
gas, so such stockpiles are certainly within the realm of possibility, it is just global delivery
which presents a technological challenge. In the 1960s, about 77,000 tons of Agent Orange
defoliant was sprayed in Vietnam2, along with over 50,000 tons of Agent White, Blue,
Purple, Pink, and Green. Of course, the problem with using agents like VX to try to wipe out the human species is defensive measures like bunkers with air purification systems and gas masks, though one can imagine chemical weapons being used in tandem with other methods.
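The quantity estimate above can be reproduced in a few lines of Python (figures taken from the text; the 100-1000x overhead factor simply reflects how inefficient real-world dispersal would be):

population = 10e9           # ten billion people, as assumed in the text
lethal_dose_kg = 10e-6      # 10 milligrams of VX on the skin

perfect_delivery_tonnes = population * lethal_dose_kg / 1000.0
print(f"Perfect delivery: ~{perfect_delivery_tonnes:.0f} tonnes")      # ~100 tonnes
print(f"With 100-1000x overhead: {perfect_delivery_tonnes * 100:,.0f}"
      f"-{perfect_delivery_tonnes * 1000:,.0f} tonnes")                # 10,000-100,000 tonnes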
There are agents more lethal than VX gas, such as botulinum toxin, for which the
lethal dose is only 0.1 micrograms, but it is very unstable in the environment. Such toxins
may one day be used as the lethal payload for military microrobots 3, and a relatively small
amount of toxin, less than a kilogram, would be sufficient to kill everyone on Earth. Again,
delivery is the problem. The risk of microbot-administered botulinum toxin will be
addressed in the chapter on robotics and nanotechnology, and considered a robotic risk
rather than chemical.
Dioxins are another compound which are extremely lethal, with an LD50 (the dose at
which half of the experimental animal models die) of about 0.6 micrograms per kilogram of
body weight, and are very stable in the environment, putting them in the category of
persistent organic pollutants (POPs). A leak of about 25 kilograms of dioxin at Seveso in
Italy in 1976 caused a contamination of 17 square kilometers, killing 3,300 local animals
within days4. 80,000 animals had to be slaughtered to prevent dioxin from entering the
food chain. Dioxins bioaccumulate, meaning they reach higher concentrations in animals
higher on the food chain. Through this, they contaminate meat and other animal products.
The Seveso disaster resulted in no confirmed human casualties, but almost 200 cases of
chloracne, a severe type of disfiguring acne caused by chemical contamination. The total
cost of decontamination exceeded 40 billion lire (US $47.8 million).
The most prominent mental association of dioxin is with Agent Orange. Although
Agent Orange only contains small amounts of dioxin, it was used in sufficient quantities in
Vietnam to cause severe birth defects and other long-term health problems among the
native population. Since dioxin is persistent, it has more potential to be globally destructive
than a substance like VX, which rapidly dissipates. However, since dioxin has never been
used in a purposeful military way to attack enemy armies or civilians, the quantity needed
to kill a large percentage of people in a given area over a given amount of time is not well
understood. Starting with the Seveso contamination numbers of 25 kilograms of dioxin
contaminating a 17 square kilometer area, we can estimate the amount required to
contaminate other inhabited areas to a similar level. The Earth's land surface area minus
Antarctica is about 138 million square kilometers. Half of this is uninhabited, so we can
estimate the inhabited area of the planet as roughly 69 million square kilometers.
Contaminating this entire area on a level comparable to the Seveso disaster would require
100 million kilograms, or 100,000 tonnes of dioxin. This is well within the storage capacity
of a Suezmax-class oil tanker. It is theoretically possible that an industrially developed state
could manufacture this much dioxin over a few years. As always, distribution is the
problem.
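The Seveso-based scaling above amounts to a one-line calculation (all figures from the text; it ignores degradation, uneven dispersal and everything else that would matter in practice):

seveso_release_kg = 25.0
seveso_area_km2 = 17.0
inhabited_area_km2 = 69e6   # roughly half of Earth's non-Antarctic land area

required_tonnes = seveso_release_kg / seveso_area_km2 * inhabited_area_km2 / 1000.0
print(f"~{required_tonnes:,.0f} tonnes of dioxin")  # roughly 100,000 tonnes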
Another potential means of global chemical contamination is from some natural
process, or an artificial catalyst or trigger accelerating a natural process. The major risk
factor is the unity of the terrestrial atmosphere. If enough poison is produced somewhere, it
will eventually circulate everywhere. Some candidates for natural causes of global
chemical contamination include: supervolcano eruption 5, release of methane clathrates
causing runaway global warming6, or the sudden oxidation of large, unknown subterranean
mineral deposits, such as heretofore unknown hydrocarbon layers. Supervolcano eruption
and methane clathrate release are important enough topics that they will receive their own
treatment in the later chapter on natural risks, but we want to at least mention them here.
As a separate artificial risk, we may also consider the gradual accumulation of
chemicals destructive to the environment, such as freon, which may be relatively harmless
individually but harmful taken in combination with other destructive chemicals. It may also
be possible to destroy the ozone layer through chemical means, which would increase
incidence of cancer but would not be likely to wipe out the human species.
Besides the scenarios outlined above, there are eight improbable variants of global
chemical contamination we will mention for the sake of completeness:
a) Global extermination through carbon dioxide poisoning. Breathing concentrations of carbon dioxide in air greater than 3 percent is dangerous over the long term and can cause death by hypercapnia (carbon dioxide poisoning). The normal atmospheric concentration of CO2 is only about 0.04 percent. At the Permian-Triassic boundary, there is evidence that CO2 concentrations increased by 2000 ppm (parts per million) and global temperature increased by 8 °C (14 °F) in a relatively short period of time, 10,000 years or less7. This was probably caused by extensive volcanism. Even 2000 ppm is only a 0.2 percent concentration, more than an order of magnitude below what is needed to directly kill complex organisms, though the warming effects of carbon dioxide (along with other factors) were fatal to 96% of all marine species and 70% of all terrestrial vertebrates living at the time. Risks connected to global warming through greenhouse gas emission or volcanic eruption will be covered in the chapter on global warming. It is worth noting that the high CO2 concentrations on the planet Venus derive from volcanism. If several supervolcanoes around the world could be artificially triggered by nuclear explosions, it might be possible to render the surface of the earth uninhabitable for complex life.
b) Catastrophic methane release from methane clathrate deposits in the tundra and on the continental shelf would release immense amounts of methane, a gas with roughly 80 times more warming potential than carbon dioxide over a 20-year horizon. During the Permian-Triassic extinction event, it is thought that this methane release, also driven by a buildup of methane-producing microbes, caused a catastrophic global warming episode triggering anoxia in the ocean and increased aridity on land. This resulted in the death of so many plants that the predominant river pattern of the era switched from meandering to braided, meaning there were too few plants to channel the water flow.8

c) There may exist gigantic deposits of hydrogen and/or hydrocarbons deep within the earth which, if released, could cause all kinds of mayhem, from destroying the ozone layer to igniting and causing a gigantic explosion. There may be natural processes which create hydrocarbons within the earth. As an example, there are natural hydrocarbons on celestial bodies such as Titan, Saturn's moon, which has hundreds of times the hydrocarbons of all known gas and oil reserves on Earth. If these reserves exist on Earth, they could be a vast threat in the form of potential explosive energy and atmospheric chemical contamination.

d) It may be possible to exhaust the oxygen in the atmosphere by some spectacular combustion process, such as the quick combustion of massive deposits of hydrogen or hydrocarbons released from deep within the earth.

e) The fall of a comet with a considerable amount of poisonous gas. Cyanide gas and cyanide polymers are thought to be a major component of comets9. A paper on the topic says, "The original presence on cometary nuclei of frozen volatiles such as methane, ammonia and water makes them ideal sites for the formation and condensed-phase polymerization of hydrogen cyanide. We propose that the non-volatile black crust of comet Halley consists largely of such polymers." The tails of comets are rich in cyanide, but it is not dense enough to cause damage when the Earth passes through such a tail. We can imagine it causing much more damage if the comet itself fell to Earth, vaporized, and circulated poisonous dust around the planet.
f) Poisoning of the world ocean through oil or other means. Imagine a very large undersea oil deposit uncorked through the application of a nuclear weapon. A series of nuclear weapons could open up a huge channel between the ocean and an oil well a mile or so deep. Due to the massive size of the resulting hole, it would be impossible to seal, and it would simply release an enormous quantity of oil into the ocean. The Deepwater Horizon oil leak in 2010 released almost five million barrels of oil into the sea; a nuclear weapon-caused leak could release considerably more. Whether this could profoundly threaten the ocean's biosystems, and thereby the rest of the world, has not been well studied. Terrestrial life is dependent on microorganisms in the ocean for participation in the carbon cycle.

g) Blowout of the Earth's atmosphere. This would need to be caused by an explosion so powerful that it accelerates a substantial part of the atmosphere to escape velocity. Difficult to imagine, but mentioned here for the sake of completeness.

h) An auto-catalytic reaction extending across the surface of the Earth in the spirit of Ice-9 from the novel Cat's Cradle by Kurt Vonnegut. Similar reactions have been confirmed in narrow circumstances. Nick Szabo describes one on his blog10: "Self-replicating chemicals are not merely hypothetical: since Cat's Cradle, scientists have discovered some real-world examples of crystals that seed the environment, converting other forms (polymorphs) of the crystal into their own. The population of the original polymorph diminishes as it is converted into the new form: it is a disappearing polymorph. In 1996 Abbott Labs began manufacturing the new anti-AIDS drug ritonavir. In 1998 a more stable polymorph appeared in the American manufacturing plant. It converted the old form of the drug into a new polymorph, Form 2, that did not fight AIDS nearly as well. Abbott's plant was contaminated, and it could no longer manufacture effective ritonavir. Abbott continued to successfully manufacture the drug in its Italian plant. Then American scientists visited, and that plant too was contaminated and could henceforth only produce the ineffective Form 2. Apparently the scientists had carried some Form 2 crystals into the plant on their clothing." Could such an autocatalytic reaction be discovered that puts humanity at risk, perhaps by spreading from brain to brain as a contagious prion? We might hope not, but the fact is that we don't yet know.
The authors do not rate global chemical contamination as a very severe global
catastrophic or extinction risk. Our estimate of the probability of a severe global chemical contamination event is on the order of 0.1% for the duration of the 21st century. As with
many of these risks, the current probability is very small, but advances in areas such as
nanotechnology would allow for the cheap mass production of dangerous chemical agents.
Fortunately, those very same advances would also allow means to clean up any chemical
contamination or even enhance living humans to make them immune to chemical
contaminants.
Our conclusion is that although contaminating the atmosphere or surface of the planet with chemical agents is theoretically possible, the risk
is far outweighed by the creation of toxic and epidemiological bio-agents, which are not
only self-replicating but have a much better track-record of mortality. Whatever can be
done with chemical weapons, can be done much more cheaply and effectively with
genetically engineered pathogens. Pathogens also have the fortunate advantage that the
default kind used in biological warfare would threaten just humans and not the animal or
plant kingdom. Presumably, even psychopaths would be significantly more likely to want to
target humanity specifically and not actually end all life on Earth. The amount of planning
and implementation needed to seriously threaten all life on the planet with chemical
weapons is immense, and would be more likely put to use in biological warfare. However,
we should not rule out the use of chemical weapons in a total war-type scenario, and
should take their possible utilization into account in considering different scenarios.
References
1. Federation of American Scientists. Chemical Weapons Information Table. 2007.
2. Aspen Institute. History: Agent Orange/Dioxin in Vietnam. August 2011.
3. Jürgen Altmann. Military Nanotechnology: Potential Applications and Preventive Arms Control. New York: Routledge. 2006.
4. "The Seveso Accident: Its Nature, Extent and Consequences". Ann. Occup. Hyg
(Pergamon Press) 22 (4): 327370. doi:10.1093/annhyg/22.4.327
5. Morgan T. Jones, R. Stephen J. Sparks, Paul J. Valdes. The Climatic Impact of Supervolcanic Ash Blankets. Climate Dynamics, November 2007, Volume 29, Issue 6, pp. 553-564.
6. L. D. Danny Harvey, Zhen Huang. Evaluation of the potential impact of methane
clathrate destabilization on future global warming. Journal of Geophysical
Research. 01/1995; 100:2905-2926.
7. Michael R. Rampino and Ken Caldeira. Major perturbation of ocean chemistry and a "Strangelove Ocean" after the end-Permian mass extinction. Terra Nova, vol. 17, issue 6, 554-559, December 2005.
8. Peter D. Ward, David R. Montgomery, Roger Smith. Altered River Morphology in
South Africa Related to the Permian-Triassic Extinction.
9. Hydrogen cyanide polymers, comets and the origin of life. Faraday Discussions,
2006, 133, 393-401.
10. Nick Szabo. Patent goo: self-replicating Paxil. Unenumerated blog. November
2005.

Chapter 11. Space Weapons


The late 21st century could witness a number of existential dangers related to possible
expansion into space and the deployment of weapons there. Before we explore these, it is
necessary to note that mankind may choose not to engage in extensive space expansion
during the 21st century, so many of these risks may be a non-issue, at least during the next
100 years. A generation of Baby Boomers and their children were inspired and influenced
by science fiction such as Star Trek and Star Wars, which causes us to systematically
overestimate the likelihood of space expansion in the near future. Similarly, limited success
at building launch vehicles by companies such as SpaceX and declarations of desire to
colonize Mars by its CEO Elon Musk are a far cry from self-sustaining, economically
realistic space colonization.
Space colonization feels futuristic, it feels like something that should happen in the
future, but this is not a compelling argument for why it actually will. During the 60s and 70s,
many scientists were utterly convinced that expansion into space was going to occur in the
2000s, with permanent moon bases by the 2020s. This could still happen, but it seems far
less likely than it did during the 70s. In our view, large-scale space colonization is unlikely
during the 21st century (unless there is a Singularity, in which case anything is possible),
but could pick up shortly after it. This chapter examines the possibility that it will happen
during the 21st century even though it may not be the most likely outcome.
There are many challenges to colonizing space which make it more appealing to
consider colonizing places like Canada, Siberia, the Dakotas, or even Greenland or
Antarctica first. Much of this planet is completely uninhabited and unexploited. If we seek to
travel to new realms, exploit new resources, and so on, we ought to look to North Dakota
or Canada before we look to the Moon or Mars. The economic calculus is strongly in favor
of colonizing these locations before we colonize space. First of all, lifting anything into
space is extremely expensive; $2,200 per kilogram ($1,000/lb) to low Earth orbit, at least 1.
Would the American pioneers have ventured West if they needed to pay such a premium
for each kilogram of baggage they carried? Absolutely not. It would be impractical. Second,
space is empty. For the most part, it is a void. All proposals for developing it, such as space
colonies, asteroid mining, or helium-3 harvesting on the Moon, have extremely high capital
investment costs that will make them prohibitive to everyone well into the 21st century.
Third, space and zero gravity are dangerous to human health. Cosmic rays and
weightlessness cause a variety of health problems and a heightened risk of cancer 2, not to
mention the constant risk of micrometeorites3,4 and the hazards of taking spacewalks to
repair even the most minor equipment. These three barriers to entry make space
colonization highly unlikely during the 21st century unless advanced molecular
nanotechnology (MNT) is developed, but few space enthusiasts even know what these
words mean, and as we reviewed in the earlier chapter on the topic, basic research
towards MNT is extremely slow in coming.
To highlight the dangers of space, consider some recent comments by the Canadian
astronaut Robert Thirsk, who calls a one-way Mars trip a "suicide mission"5. Thirsk, who
has spent 204 days in orbit, says we lack the technology to survive a trip to Mars, and that
he spent much of his time in space repairing basic equipment like the craft's CO2
scrubbers and toilet. His comments were prompted by plans by the Netherlands-based
Mars One project to launch a one-way trip to Mars, but they also apply to any long-term
efforts in space, including space stations in low Earth orbit, trips to the asteroids, lunar
colonies, and so on. A prerequisite for space colonization are basic systems, like CO2
scrubbers and toilets, which break down only very rarely. Until we can manufacture these
systems with extreme reliability, we haven't even taken the first serious step towards
colonizing space. You can't colonize space until you can build a toilet that doesn't
constantly break down.
Next, the topic of getting there. Consider a rocket. It is a bomb with a hole poked in
the side. A rocket is a hollow skyscraper filled with enormous amounts of fuel costing tens
of millions of dollars, all of which is burned away in a few minutes, never to be recovered.
Rockets have a tendency to spontaneously explode for the most trivial of reasons, such as
errors in a few lines of code6,7. If the astronauts on-board do happen to make it to orbit in
one piece, a chipped tile is enough to seal their fate during reentry 8. Even if they do make it
to orbit, what is there? Absolutely nothing. To make true use of space requires putting
megatons of equipment, water, soil, and other resources up there, creating a simulacrum of
Earth. Without extremely ambitious engineering projects like Space Piers (to be described
soon) or asteroid-towing spacecraft, a space station is just too small, a vulnerable tin can
filled with slightly nauseous people who must exercise vigorously on a continuous basis to
make sure their bones don't become irreversibly smaller9. A finger-sized hole in a space station is enough to kill everyone on board10. Of the five Space Shuttle orbiters that flew to space, two were lost in catastrophic accidents, a failure rate no other transport industry would tolerate. Without radically improved
materials, automation, means of launch, reliability, and tests spanning many decades,
space is only a highly experimental domain suitable for a few dozen specialists at a time.
Space hotels, notwithstanding the self-promotional announcements made by companies
such as Russian firm Orbital Technologies11, are more likely to resemble jail cells than the
Ritz for decades to come.
Since the 1970s, space has been the ultimate mismatch between vision and
practicality. This is not to say that space isn't important, it eventually will be. People are just
prone to vastly underestimating how soon that is likely to be. For the foreseeable future (at
least the first half of the 21st century), it is likely to just be a diversion for celebrities who
take quick trips into low Earth orbit. One accident that causes the death of a few celebrities
could set the entire field back by a decade or more.
As the Baby Boomer generation, among the most enthusiastic about space, moves into retirement, a new generation is being raised on computers and video games, a new "inner space" that offers more than outer space plausibly can12. Within
a couple decades, there will be compelling virtual reality and haptic suits that put the user
in another place, psychologically speaking. These worlds of our own invention have more
character and practical value than a dangerous vacuum at near absolute zero temperature.
If people want to experience space, they will enjoy it in virtual reality, in the safety of their
homes on terra firma. It will be far cheaper and safer to build interactive spaces that
simulate zero-g with robotic suspension systems and VR goggles than to actually launch
people into space and take that risk. Colonizing space is not like Europeans colonizing the
Americas, where both areas have the same temperature, the same gravity, the same
abundant organic materials, wide open spaces for farming, an atmosphere to protect them
from ultraviolet light, and the familiarity of Earth's landscape. The differences between the
Earth's surface and the surface of the Moon or Mars are shocking and extreme.
Otherworldly landscapes are instantly deadly to unprotected human beings. Without a
space suit, a person on Mars would lose consciousness in 15 seconds due to a lack of
oxygen. Within 30 seconds to 1 minute, the blood boils due to low pressure. This is fatal.

To construct an extremely effective and reliable space suit, such as a buckypaper skintight
suit, would likely require molecular manufacturing.
Few advocates of space colonization understand the level of technological
improvement which would be required for humans to live in orbit, on the Moon, or on Mars in
any appreciable numbers for any substantial length of time. You'd need a space elevator,
Space Pier13, or large set of mass accelerators, which would cost literally trillions of dollars
to build. (Not enough rockets could be built to launch thousands of people and all the
resources they would need; they are too expensive.) Construction would be a multi-decade
project unless it were carried out almost completely by robots. Even if such a bridge were
built, the energy costs of ferrying items up and sending them out would still be in the
hundreds of dollars per kilogram. Loads of a few tens of tons could be sent every 5-10
minutes or so at most per track (for a Space Pier), which is a serious limitation if we
consider that a space station that can hold just 3,000 people would weigh seven million
tons14. The original proposed design for Kalpana One, a space station, assumes $500/kg
launch costs and hundreds of thousands of flights from the Moon, Earth, and NEOs to
deliver the necessary material, including millions of tons of regolith for radiation shielding.
This is all to build a space station for just a few thousand people which has no economic
value. The inhabitants would be sealed on the inside of a cylinder. It seems very difficult to
economically justify such a project unless the colony is filled with multi-millionaires giving up a major portion of their wealth just to live there. Meanwhile, without advanced
robotics or next-generation self-repairing materials to protect them, a single hull breach
would be all it takes to kill everyone on board. The cost of building a rotating colony alone
is prohibitive unless the majority of the materials-gathering and launch work is conducted in
an automated fashion by robots and artificial intelligences. Without the assistance of highly
advanced nano-manufacturing and artificial intelligence, such a project would likely be
delayed to the early 22nd century. It is often helpful to belabor these points, since so many
intelligent people have such an emotional over-investment in space colonization and space
technologies, and a naïve underestimation of the difficulties involved.
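A rough launch-cost sketch for a Kalpana One-style station, using the seven-million-ton mass and $500/kg figures quoted above; the per-flight payload is an assumed value used only to illustrate how many flights would be involved:

station_mass_kg = 7e6 * 1000.0     # seven million tons
launch_cost_per_kg = 500.0         # dollars, the design's assumed launch cost
payload_per_flight_kg = 50e3       # assumed 50-tonne heavy-lift payload

print(f"Launch cost: ~${station_mass_kg * launch_cost_per_kg / 1e12:.1f} trillion")
print(f"Flights required: ~{station_mass_kg / payload_per_flight_kg:,.0f}")

Even with these optimistic numbers, the launch bill alone is measured in trillions of dollars and the flight count on the order of a hundred thousand, which is the point being made here.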
Keeping all this in mind, this chapter will look to a possible future in the late 21st
century, where, due to abrupt and seemingly miraculous breakthroughs in automation and
nano-manufacturing, it becomes economically feasible to construct large facilities in space,
including space colonies and asteroid mines, which fundamentally change the context of
human civilization and open up entirely new risks. We envision scenarios where hundreds
of thousands of people can make it into space with the required tools to keep them there,
within the lifetime of some people alive today (the 2060s-2100s). The possibility of
expanding into space would tempt the nations of Earth to compete for new resources, to
gain a foothold in orbit or on the Moon before their rivals do. Increased competition
between the United States, Russia, and China could become plausible, much like
competition between the US and Russia for Arctic resources is looming today. The
scenarios analyzed in this chapter also have relevance to human extinction risks on
timescales beyond the 21st century, and we briefly violate our exclusive focus on the 21st
century in some of these sections.
Overview of Space Risks
Space risks are in a different category from biotech risks, nuclear risks,
nanotechnology, robotics, and Artificial Intelligence. They are in a category of lower
intensity. Space risks require much more scientific effort and advanced technology to
become a serious global threat than any of the other risks mentioned. Even the smallest
space risks that plausibly involve the extinction of mankind involve megaengineering
projects with space stations tens, hundreds, even hundreds of thousands of kilometers
across. Natural space risks, such as Gamma Ray Bursts or massive asteroid or large
comet impacts only occur once every few hundred million years. With such low
probabilities of natural disasters from space, our attention is well spent on artificial space
weapons instead, which could be constructed on the timescale of decades or centuries.
Though such risks may seem far off today, they may develop over the course of the 21st
22nd centuries to become a more substantial portion of the total probability mass of global
catastrophic risk. Likewise, they may not. The only way to make a realistic estimate is to
observe technological developments as they progress. Today's space efforts, such as
those by SpaceX, likely have very little to do with future space possibilities, since, as we've
argued and will continue to argue, any large-scale future space exploitation will likely be
based on MNT, not on anything we are developing today. Whether space technology
appears to be progressing slowly or quickly from our vantage point, it will be leapfrogged
when and if MNT is developed. If MNT is not developed in the 21st century, then space
technologies pose no immediate threat, since weapons platforms will not be built on a
scale large enough to do any serious damage to Earth. Full-scale space development and
possible global risk from space is almost wholly dependent on molecular manufacturing; it
is hard to imagine it happening otherwise, and especially not in this century. Other
technologies simply cannot build objects on the scale to make it notable in the context of
global risk. The masses and energies involved are too great, on the order of tens of
thousands of times greater than global annual electricity consumption. We discuss this in
more detail shortly.
There are four primary anthropogenic space risks: 1) orbiting mirrors or particle beam
weapons that set cities, villages, soldiers, and civilians on fire, 2) nuclear weapons
launched from space, 3) biological or chemical weapons dispersed from space, and 4)
hitting the Earth with a fast-moving projectile. Let's briefly review these. To wipe out
humanity with orbiting mirrors, the aspiring evil dictator would need to build a lot of them;
hundreds or thousands, each nearly a mile in diameter, then painstakingly aim them at
every human being on Earth until they all died. This would probably take decades. Then
the dictator would need to kill himself and all his followers, or accidentally achieve the
same, otherwise the total extinction of humanity (what this book is exclusively concerned
with) would not be secured. This is obviously not a highly plausible scenario, but we aren't
ruling anything out here. The nuclear weapons scenario is more plausible; enough nuclear
weapons could be launched from a space platform that the Earth goes through a crippling
nuclear winter and becomes temporarily uninhabitable. This could be combined with space
mirror attacks, artificially triggering supervolcano eruptions, and so on. The third scenario,
dispersion of biological weapons, will be discussed in more detail later in the chapter. The
fourth, hitting the Earth with a giant projectile, is very complicated and energy-intensive,
and we also cover it at some length here. This scenario is notable because while difficult to
pull off, a sufficiently large and fast projectile would be extremely destructive to the entire
surface of the Earth, sterilizing it in one go. Such an attack could literally burn everything
on the surface to ashes and make it nearly uninhabitable. Fortunately, it would require
about 100,000 times more energy than the United States consumes in a year to accelerate a projectile to a suitable speed, among other challenges.
Deviation of Asteroids
The first catastrophic space risk that many people immediately think of is the
deviation of an asteroid to impact Earth, like the kind that killed off the dinosaurs. There are
a number of reasons, however, that this would be difficult to carry out, inertia being
foremost among them. Other attack methods would be preferable from a military and cost
perspective if one were trying to do a tremendous amount of damage to the planet. The
energy required to deviate an asteroid of any substantial size is prohibitive, so much so
that in nearly every case, it would be preferable to just build an iron projectile and launch it
directly at the Earth with rockets, or to use nuclear weapons or some other destructive
means instead.
One of the definitions of a planet is that it is a celestial body which has cleared the
neighborhood around its orbit. Its great mass and gravity has pulled in all the rocks that
were in its orbital neighborhood billions of years ago, when the solar system was formed
out of dust and rubble. These ancient rocks are mostly long gone from Earth's orbit. Of
three categories of Near-Earth Objects (NEOs), the closest, the Aten asteroids, number
only 815, and nearly all of them are smaller than 100 m (330 ft) in diameter 15. A 100 meter
diameter asteroid, impacting into dense rock, creates about a 1 megaton explosion, the
size of a modern nuclear weapon16. This is enough to wipe out a city if it is in the wrong
place at the wrong time, but it is not likely to have any global effects. About 867 NEOs are 1 km (3,280 ft) or greater in size, and 167 of these are categorized as PHOs (potentially hazardous objects)17. About 92 percent of these are estimated to have been discovered so far. If a 1 km (0.6 mi) wide impactor hit the Earth, it would release a few tens of thousands of megatons of TNT equivalent in energy, enough to create a 12 km (7 mi) crater and a century-long period of lower temperatures, which would affect harvests worldwide and could lead to billions of deaths by starvation18,19,20,21,22,23. It would be a catastrophic
event, creating an explosion more than a hundred times greater than the largest atom
bomb ever detonated, but it would not threaten humanity in general. The asteroid that
wiped out the dinosaurs was ten times larger.
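For readers who want to see where a figure like this comes from, here is a minimal Python sketch of the kinetic-energy arithmetic; the density and impact velocity are typical assumed values, not numbers taken from this chapter.

```python
# Kinetic energy of a 1 km stony impactor. The density (2,600 kg/m^3)
# and impact speed (17 km/s) are typical assumptions, not from the text.
import math

MT_TO_J = 4.184e15                         # joules per megaton of TNT

radius_m = 500.0                           # 1 km diameter
density_kg_m3 = 2600.0
velocity_m_s = 17_000.0

mass_kg = density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m ** 3   # ~1.4e12 kg
energy_j = 0.5 * mass_kg * velocity_m_s ** 2                      # ~2e20 J
print(f"~{energy_j / MT_TO_J:,.0f} megatons of TNT equivalent")   # ~47,000 MT
```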
To evaluate the risk to humanity from asteroid redirection, we located the largest
possible NEO with a future trajectory that brings it close to Earth, and calculated the energy
input which would be required to cause it to impact. The asteroid that stands out the most
is 4179 Toutatis, which will pass within 8 lunar distances of the Earth in 2069. The distance
to the Moon is 384,400 km (238,855 miles), meaning 8 lunar distances is 3,075,200 km
(1,910,840 mi) from Earth. 4179 Toutatis is a huge rock with dimensions approximately
4.75 × 2.4 × 1.95 km (2.95 × 1.49 × 1.21 mi), shaped like a lumpy potato, with a mass of about
50 billion tonnes. Imagine pushing 50 billion tonnes for almost two million miles by hand.
Seems impossible, right? Pushing it with a rocket turns out to be almost as hard, by both
the standards of today's technology and the likely technology available in the 21st century. About 1.5 × 10^16 newton-seconds of impulse would be required, equal to the total impulse of about 2,142,857 Saturn V heavy-lift rockets. The Saturn V rocket is what took the Apollo astronauts to the Moon: it is 363.0 feet (110.6 m) tall, with a diameter of 33.0 feet (10.1 m), weighs 3,000,000 kg (6,600,000 pounds), and cost about $47.25 billion in 2015 dollars to build. Thus, it would cost about $101,250 trillion to redirect the asteroid with currently imaginable technology, about 6,457 times the annual GDP of the United States. But wait: those rockets would need other rockets to send them up to space in one piece, fully fueled and unused. If we just launched the rockets into space on their own, they would arrive depleted and couldn't push the asteroid. You get the idea. If a nation can afford hundreds of millions or possibly billions
of Moon rockets, it would be far cheaper and more destructive to directly bombard the
target with such rockets than to redirect an asteroid that is mostly made of loose rubble
anyway and wouldn't even cause that much destruction when it lands. Various impact
effects calculators available online allow for an estimate of the immediate destruction
caused by a single impact24.
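The arithmetic behind these deflection figures can be reproduced in a few lines. The Saturn V total impulse used below (about 7 × 10^9 newton-seconds) is an assumption chosen to match the rocket count quoted above, not a number given in the text.

```python
# Rough reproduction of the Toutatis-deflection estimate above.
ASTEROID_MASS_KG = 5.0e13        # 4179 Toutatis, ~50 billion tonnes
REQUIRED_IMPULSE_NS = 1.5e16     # newton-seconds, as quoted in the text
SATURN_V_IMPULSE_NS = 7.0e9      # assumed total impulse of one Saturn V
SATURN_V_COST_USD = 47.25e9      # per-rocket cost used in the text

delta_v_m_s = REQUIRED_IMPULSE_NS / ASTEROID_MASS_KG       # ~300 m/s
rockets = REQUIRED_IMPULSE_NS / SATURN_V_IMPULSE_NS        # ~2.1 million rockets
cost_usd = rockets * SATURN_V_COST_USD                     # ~1e17, i.e. ~$100,000 trillion

print(f"delta-v ~{delta_v_m_s:.0f} m/s, rockets ~{rockets:,.0f}, cost ~${cost_usd:.2e}")
```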
Other asteroids are even more massive or distant from the Earth. 1036 Ganymed, the
largest NEO, is about 34 km (21 mi) in diameter, with a mass of about 10^17 kg, a hundred
trillion tonnes. Its closest approach to the Earth, 55,964,100 km (34,774,500 mi), is roughly
a third of the distance between the Earth and the Sun. Ganymed would definitely destroy
practically everything on the surface of the Earth if it impacted us, but with masses and
distances of that magnitude, no one is directing it to hit anything. It is difficult for us to
intuitively imagine how much momentum a 34 km (21 mi) wide asteroid has and how much
energy is needed to move it even a few feet off its current course. Exploding all the atomic bombs in the US and Russian nuclear arsenals would barely scratch it. The entire energy
output of the history of human civilization on Earth would scarcely move it by a few
hundred feet. One day, a super-advanced civilization may be able to slowly move objects
like this with energy from solar panels larger than Earth's surface, but it is not on the top of
our list of risks to humanity in the 21st century. The same applies to breaking off a piece of
Ganymed and moving it; it is simply too distant. Asteroids in the asteroid belt are even
more distant, and even more impractical to move. It would be far easier to build a giant
furnace and cook the Earth's surface to a crisp than to move these distant asteroids.
Chicxulub Impact
As many know, approximately 65.5 million years ago, an asteroid 10 km (6 mi) in
diameter crashed into the Earth, causing the extinction of roughly three-quarters of all
terrestrial species, including all non-avian dinosaurs, and a third of marine species 25. The
effects of this object hitting the Earth, just north of the present-day Yucatan peninsula in
Mexico, were severe and global. The impact kicked up about 5 × 10^15 kg of flaming ejecta, sending it well above the Earth's atmosphere and raining down around the globe at velocities between 5 and 10 km/s26. Reentering the skies worldwide, the tremendous air
friction made this material glow red-hot and broiled the surface in thermal radiation
equivalent to 1 megaton nuclear bombs detonated at 6 km (3.7 mi) intervals around the
globe. This is like detonating about 20 million nuclear weapons in the skies above the
Earth. For at least an hour and as long as several hours, the entire surface, from Antarctica
to the equator, was bathed in thermal radiation 50 to 150 times more intense than full
sunlight. This is enough to ignite most wood, and to certainly ignite all the dry tinder on the
world's forest floors. A thick ash layer in the geologic record shows us that the entire
biosphere burned down. Being on the opposite side of the planet from the impact would
have hardly helped. In fact, the amount of flaming ejecta raining from the sky at the so-called antipodal point was even greater than anywhere but the area immediately around the impact.
The ash layer evidence that the biosphere burned down is corroborated by the
survival pattern of species that made it through the K-T extinction event, as it is called 27.
The animals that survived were those with the ability to burrow or hide underwater during
the heat flux in the hour or two after the impact. During this time, the temperature of the
surface was literally as hot as a broiler, and almost every single large animal, including
favorites like Tyrannosaurus Rex and Triceratops, would have been cooked to a blackened
steak. A few fortunate enough to hide underwater or in caves may have survived. If they
were large, however, they would not have survived for long, since the impact winter began
within a few weeks to a few months after the impact, lasting for decades and causing even
further destruction28. Living plant and animal matter would have been scarce, meaning only
detritivores, animals that can survive on detritus, had enough to eat. During this time,
only a few isolated communities in refugia, protected places like swamps alongside
overhanging cliffs or equatorial islands, would have survived. Examples of animals that
would have had a relatively easy time of surviving would be the ancestors of modern-day
earthworms, pill bugs, and millipedes, all of which feed mainly on detritus.
Events like the Chicxulub impact, which happen only once every few hundred million
years, are different from some of the earlier risks discussed in this book (except AI) in the
sense that they are more comprehensively destructive and involve the release of more
energy, especially energy in the form of heat. Whereas some individuals may have
immunity to certain microbes during a global plague, or be able to survive nuclear winter in
a self-sufficient fortress in the mountains, an asteroid or comet impact that throws flaming
ejecta across the planet has a totality and intensity that is hard for many other risks to
match. The only risk we've discussed so far that is comparable is an Artificial Intelligence
using nano-robots to convert the entire Earth into paperclips. An asteroid impact is
intermediate between AI risk and the risk of a bio-engineered multi-plague or similar event
in terms of its brute killing power. It is intense enough to destroy the entire surface through
brute heat, but not dangerous enough to intelligently seek out and kill humans.
After the multi-decade long impact winter, the Chicxulub impactor caused centuries of
greater-than-normal temperatures due to greenhouse effects from all the carbon dioxide
released by the incinerated biosphere. This, in combination with the scarcity of plants
caused by their conflagration, caused huge interior continental regions to transform into
deserts and badlands. The only comparable natural events that can create climatic changes of this magnitude are probably flood basalt episodes such as the Deccan Traps, which erupted around the time of the K-T boundary, or the Siberian Traps, a series of volcanic eruptions which lasted hundreds of thousands of years at the end of the Permian era, about 252 million years ago.
The K-T extinction was highly selective. Many alligator, turtle, and salamander
species survived. This was because they could both hide underwater and eat detritus. In
general, detritus-eating animals were able to survive, since that's all there was for many
years after the impact. Like the alligators and turtles of 65 million years ago, if a Chicxulub-sized asteroid were to hit us today, many human beings would figure out a way to survive,
both during the initial impact and in the ensuing years. Like many of the scenarios in this
book, such an event would likely wipe out 99 to 99.99 percent of humanity, but many (over
500,000) would survive. Many people work underground or in places where they would be
completely shielded from the initial thermal pulse. Even if everything exposed on the surface
burned, there would be sufficient stored food underground to keep millions alive for
decades without farming. Wheat berries stored in an oxygen-free environment can retain
nutritional value for hundreds of years. This would give humanity enough time to start
growing new food and locate refugia where possible. The Earth's fossil fuels and many
functional machines and electronics would remain, giving us tools to stage a recovery. A
five degree or even ten degree Celsius temperature increase for hundreds of years, while
extremely harsh and potentially lethal to as many as 90-95 percent of all living plant and
animal species, could not wipe out every single human. There would always be
somewhere, like Iceland, Svalbard, northern Siberia and Canada, which would remain at
mild temperatures even if the global average greatly increased. Humans are not dinosaurs.
We are smarter, our nutritional needs are fewer, and we would be able to survive, even if
photosynthesis completely shut down for several years.
A brief word here on the difficulty of surviving various global temperature changes.
During humanity's existence on Earth, for the last 200,000 years, there have been global
temperature variations of significant magnitude, mostly in the negative direction relative to
the present day. Antarctic ice cores show that the global temperature average during the
last Ice Age was about 8-9 degrees Celsius cooler than it is now 29. We know that humanity
and other animal species can survive significant drops in temperature. What makes impact
winter qualitatively different than a simple Ice Age is the complete shutdown of
photosynthesis caused by ash-choked skies. Even just a few years of this is enough to
reshape global biota entirely. The third risk, global warming, is distinct from global cooling and photosynthetic shutdown, but like them it is an effect of a major asteroid impact, and it has the potential to be as deadly as the others because it is more difficult for life to adapt to increased temperatures than to decreased temperatures. If an animal is cold, it can
eat more food, or migrate to a warmer place, and survive. If an animal is too hot, it can
migrate, but it suffers more in the process of doing so. Overheating is extremely
dangerous, which is why tropical animals have so many elaborate adaptations for preventing it. The relative contributions of initial firestorms, impact winter, and impact summer to species extinction at the K-T boundary are poorly studied and require more
research.
Daedalus Impact
To consider an impact that could truly wipe out humanity completely, we have to either
analyze the impact of a larger object, a faster object, or an object with both qualities. We
also should expand our scope beyond natural asteroids and comets, large versions of
which impact us only rarely, and consider the possibility of artificially accelerated objects. At
some point in the future, humanity will probably harvest the Sun's energy in larger
amounts, with huge arrays of solar panels which might even approach the size of planetary
surfaces. If we do spread beyond Earth and conquer the solar system, this would be very
useful for our energy needs. Such systems would give humanity and our descendants
access to tremendous amounts of energy, enough to accelerate large objects up to a
significant fraction of the speed of light, say 0.1 c.
Assuming sophisticated automated robotics, artificial intelligence, and
nanotechnology, it would be possible to disassemble asteroids and convert them into
massive solar arrays within the next hundred years. If the right technology is in place, it
could be done with the press of a button, and the size of the asteroid itself would be
immaterial. This energy could then be applied to harvesting other fuels, such as helium-3 in
the atmosphere of Uranus. This could give human groups access to tremendous amounts
of energy, possibly even thousands of times greater than the Earth's present power
consumption, within a hundred to two hundred years. We aren't saying that this is particularly likely, just imaginable. After all, the Industrial Revolution also rapidly
improved the capabilities of humanity within a short amount of time, and some scientists
and engineers anticipate an even greater capability boost from molecular manufacturing,
AI, and robotics during the 21st century. So it shouldn't be considered completely out of the
question.
The energy released by the Chicxulub impactor was between 310 million and 13.86 billion megatons of TNT, according to a careful study30. (100 million megatons, a frequently cited number, is too low.) As a comparison, the largest nuclear bomb ever detonated, Tsar Bomba, had a yield of 50 megatons. Scaling way up from this, we make the general assumption that it would require an explosion with a yield of 200 billion megatons
to completely wipe out humanity. This would be an explosion much larger than anything
that has occurred during the era of multicellular life. An asteroid has not triggered an
explosion this large since the Late Heavy Bombardment, roughly 4 billion years ago when
huge asteroids were routinely hitting the Earth. It seems like a roughly arbitrary number, which it is, but it has a couple of things to recommend it: 1) it is more than ten times greater than the explosion which wiped out the dinosaurs; 2) it is large enough to be beyond the class of objects that has any chance of hitting Earth naturally, but small enough that it isn't completely outlandish; and 3) it is easily large enough to argue that it could wipe out all of humanity, even that it would be likely to do so, in the absence of bunkers built deliberately to last through many decades of major temperature drops and no farming.
An impact releasing energy equivalent to 200 billion megatons (2 × 10^11 MT) of TNT is enormous and difficult to imagine. The following effects are all taken from the results of
the impact simulator of the Earth Impacts Effects Program, a collaboration between
astronomers and physicists at Imperial College London and Purdue University. A 200 billion
MT impact would release approximately 10^27 joules of energy, opening a crater in water
(assuming it strikes the ocean) with a diameter of 881 km (547 mi), about the distance
between San Francisco and Las Vegas. The crater opened on the seafloor would be about
531 km (329 mi) in diameter, which, after the crater wall undergoes collapse, would form a
final crater 1,210 km (749 mi) in diameter. This crater would be so large it would span
almost the entire states of California and Nevada. This is far larger than the Chicxulub crater, which was only 180 km (110 mi) in diameter. The explosion, which is between roughly 14 and 645 times more powerful than the Chicxulub impact, leaves a crater about 7 times
larger and with 49 times greater area. All this greater area corresponds to more dust and
soot clogging up the sky in the decades to come, as well as more molten rock raining from
the heavens in the few hours following the impact. Both heat and dust translate into killing
power.
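As a quick sanity check on these comparisons, here is a minimal sketch using only the figures quoted in this chapter; the joules-per-megaton conversion is a standard constant, not a number from the text.

```python
# Energy and crater comparisons for the hypothetical 200-billion-MT impact.
MT_TO_J = 4.184e15                                   # joules per megaton of TNT

daedalus_mt = 2.0e11
chicxulub_low_mt, chicxulub_high_mt = 3.10e8, 1.386e10

print(f"energy ~{daedalus_mt * MT_TO_J:.1e} J")      # ~8e26 J, i.e. roughly 10^27 J
print(f"{daedalus_mt / chicxulub_high_mt:.0f}x to "
      f"{daedalus_mt / chicxulub_low_mt:.0f}x more powerful than Chicxulub")  # ~14x to ~645x

crater_ratio = 1210 / 180                            # final crater vs Chicxulub's crater
print(f"crater ~{crater_ratio:.1f}x wider, ~{crater_ratio ** 2:.0f}x the area")
# ~6.7x wider and ~45x the area, consistent with the "about 7 times" above
```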
The physical effects of the impact itself would be mind-boggling. The fireball
generated would be more than 122 km (76 mi) across. At a distance of 1,000 miles, the
thermal pulse would hit 15.6 seconds after impact, with a duration of 6.74 hours, and a
radiant heat flux 5,330 times greater than full sunlight (all these numbers are from the
impact effects calculator). At a distance of 3,000 miles (4,830 km), equivalent to that
between San Francisco and New York, the fireball would appear 5.76 times larger than the
sun, with an intensity 13.7 times greater than full sunlight, enough to ignite clothing, plywood, grass, and deciduous trees, and to cause third-degree burns over most of the body. Roughly
815,000 cubic miles of molten material would be ejected, arriving approximately 16.1
minutes after impact. The impact would cause shaking at 12.1 on the Richter scale, greater
than any earthquake in recorded history. Even at this continental distance of 3,000 miles,
the ejecta, which arrives after about 25.4 minutes as an all-incinerating wave of flaming
dust particles, has an average thickness on the ground of 6.28 meters (20.6 ft). That is
incredible, enough to cover and kill just about everything. The air blast would arrive about 4
full hours after impact, with a maximum wind velocity of 1,760 mph, peak overpressure of
10.6 bars (150 psi), and a sound intensity of 121 dB. At a distance of 12,450 miles (20,037
km), the maximum possible distance from the impact, the air blast arrives after 16.8 hours,
with a peak overpressure of 0.5 bars (7.6 psi), maximum wind velocity of 233 mph, and a
sound intensity of 95 dB. The force of the blast would be enough to collapse almost all
wood frame buildings and blow down 90 percent of trees. Altogether, more than 99.99
percent of the world's trees would be blown down, regardless of where the impact hits.
No authors have considered in much detail the effects such a blast would have on
humanity itself, probably because it could conceivably only be caused by an artificial
impact rather than a natural one, and anthropogenic high-velocity object impacts are rarely
considered, especially of that energy level. What is particularly interesting about blasts of
around this level is that they fall somewhere between "sure survival" and "sure death" for the human species, so there is genuine uncertainty about the likely outcome. Nearly anyone
not in an underground shelter would be destroyed, just as they would in the winds of a
powerful hurricane. Hurricanes do not bring flaming hot ejecta that lands in 10-ft thick
layers and burns away all oxygen on the surface, however. The Chicxulub impact kicked up
~5 × 10^15 kg of material, which deposited an average of 10 kg (22 lb) per square meter,
forming a layer 3-4 mm thick on average. The larger impact we describe above, which we
will call a Daedalus impact for reasons which will become known shortly, would eject at
least 3,625,000 cubic miles (15,110,000 cu km) of material, equivalent to a cube 153 miles
(246 km) on a side, into the atmosphere, for a mass of ~5 × 10^19 kg, roughly 10,000 times
greater. Extrapolating, this means we could expect an average deposition rate of 100
tonnes (110 tons) per square meter, forming a layer 30-40 meters (100-130 ft) on average!
This is inconsistent with the Impact Effects Calculator result of just 6 meters of thickness only 3,000 miles away, and knowing which figure is right is crucial. (Note: this result should be double-checked with an expert.)
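The extrapolation can be written out explicitly. Earth's surface area and the assumed density of settled dust (about 2,700 kg/m³) are standard reference values rather than figures from the text, and the sketch simply spreads the ejecta evenly over the whole planet.

```python
# Average ejecta deposition if the mass were spread evenly over the Earth.
EARTH_SURFACE_M2 = 5.1e14          # total surface area of the Earth
DUST_DENSITY_KG_M3 = 2700.0        # assumed density of settled silicate dust

for name, ejecta_kg in [("Chicxulub", 5e15), ("Daedalus", 5e19)]:
    areal_kg_m2 = ejecta_kg / EARTH_SURFACE_M2
    thickness_m = areal_kg_m2 / DUST_DENSITY_KG_M3
    print(f"{name}: ~{areal_kg_m2:,.0f} kg/m^2, layer ~{thickness_m:.3f} m thick")
# Chicxulub: ~10 kg/m^2 and a few millimeters; Daedalus: ~100,000 kg/m^2 and ~35 m
```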
If the impact really did leave a layer of molten silicate dust 30 meters thick, 100
tonnes per square meter, it is easy to imagine how that might threaten the existence of
every last human being, especially as the post-impact years tick by and food is hard to
come by. During the K-T event, global temperature is thought to have dropped by 13 Kelvin
after 20 days31. With 10,000 times as much dust in the atmosphere as the K-T event, how
much cooling would the Daedalus event cause? It could be catastrophic cooling, not just in
the sense of wiping out 75-90 percent of all terrestrial animal species, but more in the
sense of potentially wiping out all terrestrial non-arthropod species.
There would be some warning time for those distant from the blast. At a distance of
about 10,000 km (6,210 miles), the seismic shock would arrive after about 33.3 minutes,
measuring 12.2 on the Richter scale. In comparison, the 1906 San Francisco earthquake,
which destroyed 80 percent of the city, was about 7.8 on the moment magnitude scale
(modern Richter scale), which corresponds to about 7.6 on the old Richter scale. Each
increment on the scale corresponds to a 10 times greater shaking amplitude, so the
earthquake following a Daedalus impact would have a shaking amplitude more than 20,000 times greater than that of the San Francisco earthquake, meaning shaking of almost unimaginable violence near the impact point. At a distance of 10,000 km, this would rate at VI or VII on the
Mercalli scale. VI on the Mercalli scale refers to "Felt by all, many frightened. Some heavy furniture moved; a few instances of fallen plaster. Damage slight." VII refers to "Damage negligible in buildings of good design and construction; slight to moderate in well-built ordinary structures; considerable damage in poorly built or badly designed structures; some chimneys broken." Upon feeling the seismic shock, people would check the Internet,
television, or radio to find news that an object had hit the Earth, and that the deadly blast
wave was on its way. Starting after about 45 minutes, the ejecta would begin arriving, the
red-hot rain of dust, getting thicker over the next 25 minutes and reaching maximum
intensity 70 minutes after impact. This all-encompassing heat would be sufficient to cook
everything on the surface. The devastating blast wave, which is the largest physical
disruption and would include maximum winds of up to 677 mph and pressures of 30.8 psi,
similar to those felt at ground zero of a nuclear explosion, would arrive after about 8.42
hours. This would completely level everything on the ground, including any structures.
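A short sketch of the magnitude arithmetic used above; the rule that radiated seismic energy grows by roughly a factor of 10^1.5 per magnitude step is a standard seismological relation, not a figure from the text.

```python
# Comparing the impact-generated quake (magnitude 12.2 in the calculator
# output) with the 1906 San Francisco earthquake (magnitude 7.8).
m_impact = 12.2
m_sf_1906 = 7.8

amplitude_ratio = 10 ** (m_impact - m_sf_1906)           # ~25,000x the shaking amplitude
energy_ratio = 10 ** (1.5 * (m_impact - m_sf_1906))      # ~4 million times the energy

print(f"amplitude ~{amplitude_ratio:,.0f}x, energy ~{energy_ratio:.1e}x")
```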
Contrary to popular conception, pressures on a level similar to those directly underneath a nuclear explosion are survivable, by using a simple arched earth-covered
structure over a closed trench. One such secure trench was located near ground zero in
Nagasaki, where pressures approached 50 psi. Such trenches require a secure blast door,
however. Perhaps a greater problem would be the buildup of ejecta, which would settle everywhere and exert a great amount of pressure, crushing unreinforced structures and
their inhabitants. It would also be super-hot. We can imagine a more hopeful situation on
the side of a cliff or mountain, where ejecta slides off to lower altitudes, or in tropical
lagoons and lakes, where even 100 ft worth of dust may simply sink to the bottom, sparing
anyone floating on the surface. Oxygen would be a greater problem, however, meaning
that those in elevated, steep areas with plenty of fresh air would be at an advantage. Such
areas would have increased exposure to thermal radiation, on the other hand, making it a
tradeoff. Ideal for survival would be a secure structure carved into a cliff or mountainside,
or a hollowed-out mountain like the Cheyenne Mountain nuclear bunker in Colorado, which
is manned by about 1,400 people. The bunker has a water reservoir of 1,800,000 US gal
(6,800,000 l), which would be more than enough to support its 1,400 workers for over a
year. One wonders if the intense heat of a 100-130 ft layer of molten dust would be
sufficient to melt and clog the air intake vents and suffocate everyone inside. Our guess
would be probably not, which makes us question even the assumption that a 200 billion
megaton explosion would be sufficient to truly wipe out all of humanity. The general
concept requires deeper investigation.
If staff in a highly secured bunker like the Cheyenne complex were somehow able to
survive the initial ejecta and blast wave, the world they emerged out into when it cooled
would be very different. Unlike the post-apocalyptic landscape faced by the survivors of the
Chicxulub impact, which was only dusted with a few millimeters of material, these refugees
would be dealing with layers of dust as deep as a multi-story building, which would get into
everything and create a nutrient-free layer nearly impossible for plants to grow in. Any
survivors would need to find a deposit of remaining soil, which could be done by digging
into a mountainside or possibly clearing an area with a nuclear explosion. Then, they would
need to use rain or whatever other available water source to attempt to grow food. Given
that the sky would be blacked out for at least several years and possibly longer, this would
be quite a challenge. Perhaps they could grow plants underground with artificial light from a
nuclear-powered generator. Survivors could scavenge any underground grain reserves, if
they manage to locate these and expend the energy to dig down to them by hand. The
United States and many other countries have grain surpluses sufficient to feed a few
thousand people almost indefinitely, as we reviewed in the nuclear chapter.
Over the years, moss and lichen would begin to grow over the surface of the cooled
ash, and rivers would wash some of it into the sea. If humanity were lucky, there would be
a fern spike, a massive recolonization of the land by ferns, as generally occurs after mass
extinctions. However, the combination of falling temperatures, limited power sources,
everything on Earth being covered in a layer of ash 100 feet deep, and so on, could easily
prove sufficient to snuff out a few isolated colonies of several thousand people fortunate
enough to have survived the initial blast and heat. One option might be to construct sea
cities, if the survivors had the technology available for it. It would be difficult to reboot the
large-scale technological infrastructure needed to construct such cities, especially working
with few people, though possibly working components could be salvaged. In a matter of a
couple decades, without a technological infrastructure to build new tools, all but the most
simple devices would break down, putting humanity back into a technological limbo similar
to the early Bronze Age. This would make it difficult for us to achieve tasks such as locating
grain stores hundreds of miles away, determining their precise location, and digging 100
feet down to reach them. Because of all these extreme challenges to survival, we
tentatively anticipate that such an impact probably would wipe out humanity, and indeed most, if not all, complex terrestrial life.
How could such an impact occur? It would have to be an artificial projectile, about the
size of the Great Pyramid of Giza, with a radius of 0.393 km, made of iron, accelerated into
the surface of the Earth at one-tenth the speed of light (0.1 c). The name Daedalus is
borrowed from Project Daedalus, a study undertaken by the British Interplanetary
Society between 1973 and 1978 to design an unmanned interstellar probe 32. The craft
would have a length of about 190 meters (626 ft) and weigh 54,000 tons, with a scientific
payload of 400 tons. The craft was designed to be powered using nuclear fusion, fueled by
deuterium harvested from an outer planet like Uranus. Accelerating to 0.12 c over the
course of 4 years, the craft would then cruise for 46 years to reach its target star system,
Barnard's Star, 5.9 light years distant. The craft would be shielded by an artificially generated cloud of particles called "dust bugs" hovering 200 km ahead of the vehicle to
remove larger obstacles such as small rocks. Micro-sized dust grains would impact the
craft's beryllium shield and ablate it over time.
The Daedalus craft would only accelerate 400 tons, instead of the roughly 9.3 × 10^8 kg (930,000 tonnes, or about 1.02 million US tons) required to deal a potential deathblow to mankind. It would need to be scaled up by a factor of about 2,550 to reach that critical mass. You hear discussion about
starships in science fiction frequently, but no science fiction we are aware of addresses the
consequences of the fact that an interstellar probe just 2,550 times larger than the
minimum viable interstellar probe could cause an explosion on the Earth that covers the
entire surface in 50-100 feet of molten rock. Once such a probe began to get going, it
would be very difficult to stop. No object would have the momentum to push it off course; it would need to be stopped before it acquired high speed, within its roughly four-year acceleration phase. If an
antimatter drive could be developed that allows an even greater speed, one closer to the
speed of light, the projectile would only require a mass of 8.38 × 10^8 kg, about a 29 m (96
ft) radius iron sphere, similar to the mass of the Seawise Giant, the longest and heaviest
ship ever built. On the human scale, that is rather large, but on the cosmic scale, it's like a
dust grain. Any future planetary civilization that wants to survive unknown threats will need
high-resolution monitoring of its entire cosmic area, all the way out to tens of light years
and as far beyond that as possible.
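A minimal sketch of the projectile's kinetic energy, assuming the iron sphere of radius 0.393 km described at the start of this subsection; iron density and the joules-per-megaton constant are standard values rather than figures from the text. At 0.1 c the relativistic correction is only about one percent, so the classical formula is used.

```python
# Kinetic energy of a 0.393 km radius iron sphere arriving at 0.1 c.
import math

C_M_S = 2.998e8
IRON_DENSITY_KG_M3 = 7870.0
MT_TO_J = 4.184e15

radius_m = 393.0
mass_kg = IRON_DENSITY_KG_M3 * (4.0 / 3.0) * math.pi * radius_m ** 3   # ~2e12 kg
velocity_m_s = 0.1 * C_M_S

energy_j = 0.5 * mass_kg * velocity_m_s ** 2      # ~9e26 J
print(f"mass ~{mass_kg:.1e} kg, energy ~{energy_j:.1e} J "
      f"(~{energy_j / MT_TO_J:.1e} MT)")          # ~2e11 MT, i.e. ~200 billion megatons
```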
Consider that hundreds of years ago, all transportation was by wind power, foot, or
pack animal, and the railroad, diesel ship, and commercial aircraft didn't exist. As humanity
developed huge power sources and more energetic modes of transport, energy release
levels thousands, tens of thousands, and hundreds of thousands of times greater than we
were accustomed to became routine. Today, there are about 93,000 commercial flights
daily, and about one in every hundred million is hijacked and flown into something, causing
events like 9/11. Imagine a future where interstellar travel is routine, and a similar
circumstance might become a threat with respect to starships used as projectiles. Instead
of killing a few thousand people, however, the hijacking could kill everyone on Earth. This
would be especially plausible if the terrorist were a nationalist of a future country outside the
Earth, located on the Moon, Mars, asteroid belt, among space stations, or even a separate
star system. For whatever reason, such an individual or group may have no sympathy for
the planet and be perfectly willing to ruin it. This would probably not put humanity as a
whole at risk, since many would be off-world at that stage, but the logic of the scenario has
implications for our security in the long-term future.
Heliobeams
Perhaps a more plausible risk in the nearer future is some group using a space
station as a tool to attack the Earth. They might wipe out most or all of the human species
to replace us with their own people, or to otherwise dominate the planet. This scenario was
portrayed in the 1979 James Bond film Moonraker. Space is the ultimate high ground; from
it, it would be easier to distribute capsules of some lethal biological agent, observe the
enemy, launch nuclear weapons, and so on. Perhaps, speculatively, this process could get
carried away until those who control low Earth orbit begin to see themselves as gods and
start to pose a threat to humanity in general.
To create space stations on a scale required to actually control or threaten the entire
Earth's surface would be a difficult task. In the 1940s, Nazi scientists developed the design for a large orbiting mirror, roughly a mile in diameter33, for focusing light on the surface and incinerating cities or military formations. They called it the Sun Gun, or heliobeam, and anticipated it would take around 50 to 100 years to construct. There are no detailed documents for the
Sun Gun design which survived the war, but assuming a mass of about 1,191 kg (2,626 lb)
per square meter, corresponding to a steel plate 6 inches thick, a heliobeam a mile in
diameter would have a surface area of about 2,034,162 square meters (21,895,500 sq ft)
and a mass of about 2,422,690 tonnes (2,670,560 tons), similar to the mass of 26 Nimitz-class aircraft carriers. At space launch costs of $2,200/kg ($1,000 per pound), as advertised for the Falcon Heavy rocket in March 2013, lifting that much material to orbit would cost about $5.3 trillion, roughly the annual GDP of Japan. Even given the relatively enormous US
military budget, this would be a rather difficult expense to justify. Such a project would have
to be carried out over the course of many years, or weight would have to be sacrificed,
making the heliobeam thinner, which would make it more vulnerable to attack or disruption.
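The mass and launch-cost figures above follow from simple geometry; here is a sketch, where the mile-to-meter conversion and the circle-area formula are the only things added beyond the numbers quoted in the text.

```python
# Heliobeam mass and launch cost for a one-mile-diameter, 6-inch-thick steel mirror.
import math

diameter_m = 1609.34                     # one mile
areal_mass_kg_m2 = 1191.0                # ~6-inch steel plate, as in the text
launch_cost_usd_kg = 2200.0              # Falcon Heavy figure cited above

area_m2 = math.pi * (diameter_m / 2.0) ** 2          # ~2.03e6 m^2
mass_kg = area_m2 * areal_mass_kg_m2                 # ~2.4e9 kg (~2.4 million tonnes)
launch_cost_usd = mass_kg * launch_cost_usd_kg       # ~$5.3 trillion

print(f"area ~{area_m2:.2e} m^2, mass ~{mass_kg / 1e3:,.0f} tonnes, "
      f"launch cost ~${launch_cost_usd / 1e12:.1f} trillion")
```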
There are ways to take the heliobeam concept as devised by the Nazis and transform
it into a better design which makes it easier to construct and more resilient to potential
attack. Instead of one giant mirror, it could be a cluster of several dozen giant mirrors,
communicating with data links and designed to point to the same spot. Instead of being
made of thick metal, these mirrors could be constructed out of futuristic nanomaterials
which are extremely light, yet durable. This could lower the launch weight by a factor of
tens, hundreds, maybe even thousands. The preferred construction material for a scaffold
would be diamond or fullerenes. Even if molecular nanotechnology (MNT) is not developed in the near future, the cost
of industrially produced diamond is falling, though not to the extent that would be required
to make such large quantities affordable. This makes the construction of a heliobeam seem
mostly dependent on progress in the field of MNT, much like the other hypothetical
structures in this chapter.
If launch costs and the costs of bulk diamond could be brought way down, an
effective heliobeam could potentially be constructed for only a few trillion dollars instead of
a few hundred trillion, which might put it within the reach of the United States or Russian
military during the latter half of the 21st century. The United States is spending between
$620 and $661 billion over the next decade on maintaining its nuclear weapons, as an
example of a project in a similar cost window. Spending a similar amount on a heliobeam
could be justified if the ethical and geopolitical considerations looked right. After all, a
heliobeam would be like a military laser beam that never runs out of power. There would be
few limits to the destruction it could do. Against stationary targets in particular, it would be
extremely effective. There is nothing like vaporizing your enemies using a giant beam from
space.
Countries are especially dependent on their power and military infrastructure to
persist. If these could be destroyed one by one, over the course of several days by a giant
orbiting mirror, that could cripple a country and make it ripe for ground invasion. Nuclear
missiles could defend against such a mirror by damaging it, but possibly a mirror could be
sent up in the aftermath of a World War during which most nuclear weapons were used up,
leaving the remainder at the mercy of the gods in the sky with the sun mirror. Whatever
group controlled the sun mirror could use it to cripple key human infrastructure (power
plants, ports), forcing humanity to live as it did in ancient times, without electricity. If the
group were the only ones who retained industrial technology, such as advanced fighter
aircraft, they would be fighting against people who had little more than crude firearms. The
difference in technological capability between Israelis and Palestinians comes to mind, only
more so. The future could be a world of entirely different technological levels, where an elite
group is essentially running a giant zoo for its own amusement. If they got tired of the
masses of humanity, they might even decide to exterminate us through other means such as
nuclear weapons. A heliobeam need not be used to perform every important military
maneuver, just the most crucial moves like destroying power plants, air bases, and tank
formations. It would still give whoever controlled it a dominant position.
There are many important strategic, and therefore geopolitical, differences between a
hypothetical heliobeam and nuclear weapons. Nuclear weapons are more of an all-or-nothing thing. When you use a nuclear weapon, it makes a gigantic explosion with ultra
high-speed winds and a fearsome mushroom cloud that showers lethal radioactivity for
miles around. Even the most power-hungry politician or military commander knows they
are not to be used lightly. A heliobeam, on the other hand, could be used in an attack which
is arbitrarily small and seemingly muted. By selectively obscuring parts of a main mirror, or
selecting just a few out of a cluster to target sunlight towards a particular location, a
heliobeam array could be used to destroy just a few hundred troops instead of a few
thousand, or to harass on an even smaller scale. Furthermore, its use would be nearly free
once it is built. The combination of low maintenance costs and arbitrarily minor attack
potential would make the incentive to use a heliobeam substantially greater than the
incentive to use nuclear missiles. A country with messianic views about its role in the world,
such as the United States, could use it to get its way in every conflict, no matter how
minor. This could exacerbate global tensions, paving the way for an eventual global
dictatorship with absolute power. We ought to be particularly wary of any technologies
which could be used to enforce global dictatorship, due to the potentially irreversible nature
of a transition to one, and the attendant suffering it could cause once firmly in place 34.
In conclusion, it is difficult to imagine how space-based heliobeams could kill all of
humanity, but they could certainly oppress us greatly, possibly locking us into a state where
a dictatorship controls us completely. Combined with life extension technologies and
oppressive brain implants (see next chapter), a very long-lived dictator could secure his
position, preventing technological development and personal freedom in perpetuity. Nick
Bostrom defines an existential risk as "One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential."35 This particular risk seems less likely to annihilate life than to permanently and drastically curtail its potential if put in the wrong hands. As a counterpoint, however,
one may argue that nuclear weapons have the same great power, but they have not
resulted in the establishment of a global dictatorship.
Dispersal of Biological Agents


In Moonraker, bad guy Hugo Drax attempts to release 50 spheres of nerve gas from a
space station to wipe out the global population, replacing it with his own master race.
Would this be possible? Almost certainly not. Even if 50 spheres could hold sufficient toxic
agent, they would not achieve the level of dispersion necessary to distribute a
lethal dose across the entire surface of the Earth, or even a tiny fraction of it. But is it
possible in principle? As always, it is our job to find out.
First, it would not make sense to disperse a biological agent in an un-targeted
fashion. If someone is trying to kill as many people as possible with a biological agent, it
makes the most sense to distribute it in areas where there are people, especially people
quickly dispersing across far distances, like an airport. This is best done on the ground,
with an actual human vector, spraying aerosols in places like public bathrooms. The
method of dispersal would be up close, personal, and meticulous, not slipshod or
haphazard, as in a shotgun approach. The downside of this is that you can't hit millions of
people at once, and it puts the attacker himself at risk of contracting the disease.
Any biological weapon launched from a distance has to deal with the problem of
adequate dispersal. The biological bomblets developed by the United States military during
its biological weapons program are small spheres designed to spin and spray a biological
agent as they are dropped from a plane. If bomblets are to be launched from a space
station, a number of weaponization challenges would need to be overcome. First, the
bomblets would need to be encased in a larger module with a heat shield to protect them from
burning up on reentry. Most of the heat of a reentering space capsule is transferred just 6
km (3.7 mi) over the ground, where the air gets much thicker relative to the upper
atmosphere, so the bomblets have to remain enclosed down to that altitude at least (unless
constructed from some advanced material like diamond). By the time they are that far
down, they can't disperse across that wide an area, maybe an area 10 km (6.2 mi) across.
Breaking this general rule would either require making ultra-tough reentry vehicles that can
reenter in one piece despite being relatively small, or having the reentry module break up
into dozens or hundreds of rockets which go off horizontally in all directions before falling
down to Earth. Both are serious challenges, and the technology does not exist, as far as is
publicly known.
Most people live in cities. Cities are the natural place where an aspiring Bond villain
would want to spread a plague. A problem is that major cities are far apart. Another
problem is that doomsday viruses don't survive very well without a host, quickly getting
destroyed by low-level biological activity (such as virophages) or sunlight. To optimally hit
humanity with a virus would require not only a plethora of viruses (since many people
would undoubtedly be immune to just one), but a multitude of living hosts to spread the
viruses, since viruses in aerosols or otherwise unprotected would very quickly be
degraded by UV light from the sun. So the problem of wiping out humanity with a
biological weapon launched from a space station is actually a problem of launching bats,
monkeys, rats, or some similar stable vector in large numbers from a space station to
hundreds or thousands of major cities on Earth. When you think about it that way, in
combination with the reentry module challenges cited above, pulling this off is clearly a bit more complicated than it was portrayed in Moonraker.
Could biological bomblets filled with bats be launched into the world's major cities,
flooding them with doomsday bats? It's certainly imaginable, and would be a more subtle
and achievable way of trying to kill off humanity than using the Daedalus impactor.
However, it certainly doesn't seem like a threat in the near future, and it is difficult to imagine even in this century, though difficulty of imagination is not always the best criterion for
predicting the future. For the vectors to make it to their targets in one piece, they would
need to be put in modules with parachutes, which would be rather noticeable upon entering
a city. Many modules could be used: hundreds or thousands of separate pods filled with bats, each landing in a different location and a different city. This would entail a lot of mass, in the tens of thousands of tons. Keeping all
these bats or rats and their 50-100 necessary deadly viruses all contained on a space
station would be a major task, and would require a substantial amount of space and resources. It
could conceivably be done, but it would have to be a relatively large space station, one with
living and working space for 1,000 people at least. There would need to be isolation
chambers on the space station itself, with workers entering numerous staging chambers
filled with the modules which would need to be loaded with the biological specimens by
hand or via remote-controlled robot.
To take a serious crack at destroying as many human beings as possible, the
biological attack would need to be targeted at every city with over a million people, of which
there are 476. To ensure that enough people are infected to actually get the multi-pandemic going, rather than having it contained, one would need as many vectors as
possible, in the tens of thousands per city. Rats, squirrels, or mice would be more suitable
than bats, since they would be more innocuous-seeming in world cities, though all of the
above could be used. Each city would need its own reentry capsule, which, to contain
10,000 rats without killing them, assuming (generously) ten per cubic foot, would need to
have around 1,000 cubic feet of internal space, or a cube 10 ft (3 m) on a side. The Apollo
Service module, for instance, had an internal space of 213 cubic feet. Assuming a module
carrying rats would need to be about 4 times heavier to contain the necessary internal
space, that gives us a weight of 98,092 kg (216,256 lbs) per module, which we round off to
100 tonnes. One module for every city with over a million people makes that 47,600
tonnes. Now we see the scale of size a space station would need to contain the necessary
facilities to launch a species-threatening bio-attack on the Earth. A space station of five
million tonnes or more would likely be needed to contain all the modules, preparatory
facilities, staff, gardens to grow food for the staff, space for it to be tolerable for them to live
there, the water they need, and so on. At that point, you might as well build Kalpana One,
the AIAA (American Institute of Aeronautics and Astronautics) designed space colony we
mentioned earlier, designed to fit 3,000 people in a colony that would weigh about 7 million
tonnes36. Incidentally, this number is close to the minimum viable population (MVP) needed
for a species to survive. A meta-analysis of the MVP of different species found the average
number to be 4,16937.
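The logistics above reduce to a few multiplications, reproduced here as a sketch using only the chapter's own assumptions.

```python
# Scale of the hypothetical space-station bio-attack described above.
RATS_PER_CITY = 10_000
RATS_PER_CUBIC_FOOT = 10
MODULE_MASS_TONNES = 100          # rounded from the 98,092 kg estimate in the text
CITIES_OVER_ONE_MILLION = 476

volume_ft3 = RATS_PER_CITY / RATS_PER_CUBIC_FOOT          # 1,000 cubic feet per module
cube_side_ft = volume_ft3 ** (1.0 / 3.0)                  # ~10 ft on a side
total_module_mass = CITIES_OVER_ONE_MILLION * MODULE_MASS_TONNES   # 47,600 tonnes

print(f"~{volume_ft3:.0f} ft^3 per module (cube ~{cube_side_ft:.0f} ft on a side), "
      f"~{total_module_mass:,} tonnes of modules in total")
```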
Would 10,000 rats released in each city in the world, carrying a multitude of viruses,
be sufficient to wipe out humanity? It seems unlikely, but could be possible in combination
with nuclear bombardment and targeted use of a heliobeam. After wiping out all human
beings on the surface, the space station could be struck by a NEO and suffer catastrophic
reentry into the atmosphere, killing everyone on board. After that, no more humanity.
Destroying every last human being on Earth with viruses seems seriously difficult, given
that there are individuals living out in rural Canada or Russia with no human contact
whatsoever. Yet, if only a few thousand or tens of thousands of individuals remain, are too
widely distributed, and fail to mate and reproduce in the wake of a serious disaster, it could
be possible. It would require a lot of things all going wrong simultaneously, or in a
sequence. But it is definitely possible.
Motivations
Given all this, we may rightly wonder: what would motivate someone to do these
horrible things in the first place? Most of us don't seriously consider wiping out humanity,
and those that do tend to be malcontent misfits with little hope of carrying their plans out.
The topic of motivations will be addressed at greater length further along in the book, but
we ought to briefly address it here in the specific context of space-originating artificial risks.
Space represents a new frontier, the possibility of starting over. To some, this means
replacing the old with the new. When you consider that 7-10 billion people only have a
collective mass of around 316 million tons, a tiny fraction of the size of the Earth, you
realize that's not a lot of matter to scramble and thereby have the entire Earth to yourself.
Concern about the direction of the planet, or of society, combined with a genocidal streak, may be sufficient for someone to consider wiping humanity out and starting over. They
may even see a messianic role for themselves in accomplishing it. Wiping out humanity
opens up the possibility of using the Earth for literally anything an individual or small group
could imagine. It would allow them to impose their preferences over the whole future.
The science fiction film Elysium showed a possible future for Earth: the planet noisy,
messy, dusty, and crowded, and a space station where everything is clean and utopian. In
the movie, people from Earth were able to launch themselves aboard the space station and
reach asylum, but in real life, it would be quite difficult to dock on a space station if the
inhabitants didn't want you there. Defended by a cluster of mirrors or lasers, they could
cook any unwanted visitors before they ever got to the front door. If a space colony could
actually maintain a stable social system, its inhabitants might develop a very powerful sense of in-group, more powerful than that experienced by the great majority of humans who lived
throughout history. Historically, communities almost always have some contact with the
outside, but on a space station, true self-sufficiency and isolation could become possible.
3,000 people might not be sufficient to create full self-sufficiency, but a network of 10-20
such space stations might. If these space stations had access to molecular manufacturing,
they could build things without the large factories we have today, making the best use of
available space. Eventually such colonies could spread to the Moon and Mars, colonizing
the solar system. Powerful in-group feelings could cause them to eventually think much
less of those left behind on Earth, even to the point of despising us. Of course, this is by no
means certain, but it ought to be considered.
The potential for hyper-in-group feelings increases when we consider the possibility of
mental and physical enhancement, modifications to the body and brain that change who
people are. A group of people on a space station could become a new species, capable of
flying through outer space unprotected for short periods. Their skin cells could be equipped
with small scales that fold to create a hard shell which maintains internal pressure when
exposed to space. These people could work on asteroids wearing casual clothing, a new
race of human beings adapted to the harshness of the vacuum. Beyond physical
modifications, they could have mental modifications as well. These could increase the
variance of possible emotions, allow long-term wakefulness, or even increase the
happiness set-point. They might see themselves as doing a service by wiping out the
billions of model 1.0 humans on Earth. In the aftermath of such an act, infighting among
these space groups could lead to their eventual death and the total demise of the human
species in all its variations.
In the context of all of this, it is important to recall Fermi's Paradox and the Great
Filter. Though it may simply be that the prior probability for the development of intelligent
life is extremely low, there may be more sinister reasons for the silence in the cosmos,
developmental inevitabilities that cause the demise of any intelligent race. These may be
more subtle than nanotechnology or artificial intelligence; they could involve psychological
or mental dead-ends that inevitably occur when an intelligent species starts expanding out
into space, modifying its own brain, or both. Maybe intelligent species reliably expand out
into the space immediately around their planet, inevitably wipe out the people left behind,
then inevitably wipe themselves out in space. Space certainly has a lot fewer organic
resources than the Earth, and a lot more things that can go wrong. If the surface of the
planet Earth became uninhabitable for whatever reason, hanging onto survival in space would be a much riskier prospect. Maybe small groups of people living in space eventually go
insane over time scales of decades or centuries. We really have no idea. Nothing should
be taken for granted, especially our future.
Space and Molecular Nanotechnology

At the opening of this chapter, we made the controversial claim that molecular
manufacturing (MM) is a necessary prerequisite to large-scale space development. In this
section we'll provide more justification and evidence for this assumption, while being more specific about what levels of space development we mean.
Let's begin by considering an optimal scenario for space development in the absence
of MM. Say that a space elevator is actually built in 2050 as one Japanese company
claims38. Say that we actually find a way to make carbon nanotubes of the required length
and thickness (we can't now), tens of thousands of miles long. Say that geopolitical
considerations are overcome and nations don't fight over or forbid the existence of an
elevator which would give anyone who controls it a huge military and strategic advantage
over other nations. Say that the risk of having a huge cable in space which would cut
through or get tangled in anything that got in its way, providing a huge hazard to anything in
orbit, was considered acceptable. If all these challenges are overcome, it would still take a
number of years to thicken a space elevator to the point where it could carry substantial
loads. Bradley C. Edwards, who conducted a space elevator study for NASA, found that
five years of thickening a space elevator cable would make it strong enough to send 1,100
tons (10^6 kg) of cargo into orbit every four days39. That's about 912 trips every decade, or roughly 10^9 kg, a million tonnes. Space enthusiasts have trouble grasping how little this is: only about one-seventh of the quantity needed to build a rotating space colony that holds just 3,000 people.
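The throughput arithmetic is worth seeing explicitly; the 7-million-tonne figure for the Kalpana One colony is taken from earlier in this chapter, and everything else comes from the Edwards numbers just quoted.

```python
# Decade-scale throughput of the space elevator described above.
CARGO_PER_TRIP_KG = 1.0e6         # 1,100 tons every four days
DAYS_PER_TRIP = 4
COLONY_MASS_TONNES = 7.0e6        # Kalpana One, as discussed earlier in the chapter

trips_per_decade = 10 * 365.25 / DAYS_PER_TRIP                      # ~913 trips
tonnes_per_decade = trips_per_decade * CARGO_PER_TRIP_KG / 1000.0   # ~910,000 tonnes

print(f"~{trips_per_decade:.0f} trips and ~{tonnes_per_decade:,.0f} tonnes per decade; "
      f"~{COLONY_MASS_TONNES / tonnes_per_decade:.1f} decades to lift one colony's mass")
```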
The capacity of a space elevator can be increased by robots that add to its thickness,
but the weight of these robots prevents the space elevator from being in use while
construction is happening. The thickness that can be added to a space elevator on any
given trip is extremely limited, because the elevator itself is at least 35,800 km (22,245 mi)
long and a robot can only carry so much material up on any given trip without breaking the
elevator due to excessive weight. Spreading out a few hundred tons on a string that long
only adds a little bit of width. These facts create fundamental limitations on the rate that a
space elevator can be improved. In the long term, if a reliable method of mass-producing carbon nanotubes is found, and space elevators prove to be safe and defensible against attacks, then extremely large space elevators could be constructed, but that seems unlikely in
this century, which is our scope of focus. It's possible that space elevators may be the
primary route to space in the 22nd or 23rd century, but it seems unlikely in the 21st.
Enthusiasts might object that mass drivers based on electromagnetic acceleration
could be used to send up more material more cheaply, with claims of costs as low as $1/lb
for electricity40. Edwards claims his space elevator design could put a kilogram in orbit for
$220, the cost lowering to $55/kg as the power beaming system efficiency improves. Still,
these systems have limited launch capacity. Even if sending payloads into space were free, there would still be that limitation. Perhaps the limitation could be
bypassed somewhat by constructing mass drivers on the Moon which can send up large
amounts of material. Even then, the number of mass drivers required to send up any substantial amount of mass would require trillions of dollars of investment and involve mega-projects on another celestial body, something which is very far off (without the help of MM). This is something that could be started in the late 21st century, but definitely not
completed. What about towing NEOs into the Earth's orbit as building materials? As we
analyzed earlier in this chapter, the energy costs of modifying the orbit of large objects are
very high.
Simple facts remain: the gravity wells of both the Moon and Earth are very powerful,
making it difficult to send up any substantial amount of matter without great cost. Without
nanotechnology and automated self-replicating robotics, space colonization is a project
which would unfold over the timescale of centuries, and mostly involve just a few tens of
thousands of people. That's 0.0001 percent of the human race. These are likely to be
professionals living in close quarters, like the scientists who do work in the Antarctic, not
like the explorers who won the West. The romance of large-scale space colonization is a
future reserved only for people living in a civilization with self-replicating robotics and selfreplicating robotics. Efforts like SpaceX or Mars One are token efforts only, fun for press
releases, which may inspire people, but actually getting to space requires self-replicating
robotics. Anyone working on rockets is not working on the fastest route to space
colonization; they are working on an irrelevant route to space colonization, one which will
never reach its target without the help of nanotechnology. Space enthusiasts should be
working on diamondoid mechanosynthesis (DMS), but very few of them even know what
those words mean. Rockets are sexier than DMS. They will be great for space tourism,
glimpsing the arc of the Earth for a few minutes before returning, but they will not allow us
to build space colonies of any appreciable size. Rockets are simply too expensive on a per-kilogram basis. Space elevators and mass drivers do not have the necessary capacity to
send up supplies to build facilities for more than a few thousand people on the timescale of
decades. 1,100 tons every four days simply isn't a lot, especially when you need 10 tons
per square meter of hull just to shield yourself from radiation.
We've mentioned every key technology for getting into space without taking into
account MM: space elevators, mass drivers (including on the Moon), and rockets. They all
suffer from capacity problems in the 21st century. Based on all this, it is hard to conclude
that pre-molecular manufacturing space-based weaponry is a serious risk to human
survival during the 21st century. Any risk of space-based weaponry appears to derive from
molecular manufacturing and from molecular manufacturing only. Accordingly, it seems
logical to tentatively see space weapon risks as a subcategory of the larger class of
molecular manufacturing (MM) risks.
To review, molecular manufacturing is the hypothetical future technology that uses
self-replicating nanosystems to manufacture nanofactories which are exponentially self-replicating and can build a wide array of diamondoid products with fantastic strength and
power specifications at a high throughput. Developing MM requires us to master
diamondoid mechanosynthesis (DMS), the science and engineering of joining together
individual carbon atoms into atomically precise diamond shapes. This has not been
achieved yet, though very limited examples of mechanosynthesis (with silicon) have been
demonstrated41. Only a few scientists are working on mechanosynthesis or even consider it
important42. However, many prominent futurists, including the most prominent futurist, Ray
Kurzweil, predict that nanofactories will be developed in the 2030s, which will allow us to
enhance our bodies, extend our lifespans, develop realistic virtual reality, and so on 43. The
mismatch between his predictions and the actual state of the art of nanotechnology in
2015, however, is jarring. We seem nowhere close to developing nanofactories, with much
more basic research, design, and development required 44. The entire field of
nanotechnology needs to entirely reorient itself towards this goal, which it is currently
ignoring45.
Molecular manufacturing is fundamentally different from any other technology
because it is 1) self-replicating, 2) atomically precise, 3) able to build almost anything, and 4)
cheap once it gets going. Nanotechnology policy analysts have even called it magic
because of these properties. Any effort to colonize space needs it; it's key.
How about colonizing space with the assistance of molecular nanotechnology? It gets
much easier. Say that the first nanofactory is built in 2050. It replicates itself in under 12
hours; those two nanofactories replicate themselves, and so on, using natural gas, which is
highly plentiful, as a feedstock46. Even after the natural gas runs out, we can obtain
large amounts of carbon from the atmosphere47. Reducing the atmospheric CO2 levels
from about 362 parts per million to 280 parts per million (pre-industrial levels) would allow
for the extraction of 118 gigatonnes of carbon. That's 118 billion metric tons. Given that
diamond produced by nanofactories would be atomically perfect and ultra-strong, we could
build skyscrapers or megastructures which use only 1/100th of the materials for the load-supporting structures, compared to present-day buildings. Burj Khalifa, the world's tallest
building as of this writing, at 829.8 m (2,722 ft), uses 55,000 tonnes of steel rebar in its
construction. Creating a building of the same height with diamond beams would only
require 550 tonnes of diamond. For the people who design and build buildings, it's hard to
imagine that so little material could be used, but it can. Some might object that diamond is
more brittle than steel and therefore not a suitable structural material, since it
doesn't have any flexibility. That problem is easily solvable by using carbon nanotubes,
also known as fullerenes, for construction. Another option would be to use diamond but
connect together a series of segments with joints that allow the structure to bend slightly.
With nanofactories, you don't just have diamond at your disposal, but the entire class of
materials made out of carbon, which includes fullerenes such as buckyballs and
buckytubes (carbon nanotubes)48.
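To give a feel for the growth rates being invoked here, the following sketch uses the chapter's own figures (a 12-hour doubling time and the ~118 Gt atmospheric carbon budget); the 1 kg seed mass is an added assumption, so this is only an order-of-magnitude illustration of unchecked exponential replication:

import math

seed_kg = 1.0                      # assumed mass of the first nanofactory
doubling_hours = 12                # replication time quoted above
carbon_budget_kg = 118e9 * 1000    # 118 gigatonnes of accessible atmospheric carbon

doublings = math.log2(carbon_budget_kg / seed_kg)
days = doublings * doubling_hours / 24
print(f"~{doublings:.0f} doublings, i.e. about {days:.0f} days, to convert the whole budget")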
Imagine a tower much taller than Burj Khalifa, 100 kilometers (62 mi) in height instead
of 829.8 meters. This tower would use 100 times more structural material to be stable.
Flawless diamond is so strong (50 GPa compressive strength) that it does not need to
taper at all to be stable for a tower of that height. That works out to about 55,000 tonnes of
material. Build 10 of these structures in a row and put a track on top of them, and we have
a Space Pier, a launch platform made up of about 600,000 tonnes of material which puts
us halfway to anywhere, designed by J. Storrs Hall49. At that altitude, the air is millions of times
thinner than at ground level, making it much easier to launch payloads into space. In fact, the
structure is so tall that it reaches the Karman Line, the international designation for the
boundary of space. The top of the tower itself is in space, technically. The really fantastic
thing about this structure is that it could hold great weight (tens of thousands of tons) and
run launches every 10 minutes or so instead of every four days. It avoids many other
problems that space elevators have, such as the risk of colliding with objects in low Earth
orbit. It is far more structurally stable than a space elevator. It only occupies the territory of
one sovereign nation, and avoids issues relating to who owns the atmosphere and space
above a given country. (To say that a country automatically owns the space above it and
has a right to build a space elevator there is not consistent with any current legal
conception of space.)
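A quick structural sanity check supports the no-taper claim. This is a sketch under idealized assumptions (uniform cross-section, self-loading only, no wind or buckling); the density and strength values are generic figures for diamond, not numbers taken from Hall's paper:

RHO_DIAMOND = 3510.0        # kg/m^3
G = 9.81                    # m/s^2
HEIGHT = 100_000.0          # m, the 100 km tower discussed above

base_stress_gpa = RHO_DIAMOND * G * HEIGHT / 1e9
print(f"self-weight stress at the base: {base_stress_gpa:.1f} GPa (vs ~50 GPa strength)")

towers = 10
tonnes_per_tower = 55_000   # 100x the ~550 t diamond equivalent of Burj Khalifa
print(f"structural material for {towers} towers: {towers * tonnes_per_tower:,} t "
      f"(plus track, ~600,000 t total as quoted)")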
When it comes to putting objects into orbit, a Space Pier and a space elevator are in
completely different categories. Although a Space Pier does not actually extend into space
like a space elevator does, it is much more efficient at getting payloads into orbit. A 10
tonne payload can be sent into orbit for just $4,300 of electricity, a rate of 43 cents per
kilogram. That is cheap, much cheaper than any other proposal. Why build a mass driver
on the ground when you can build it at the edge of space? Only molecular nanotechnology
makes it possible; nothing else will do. Nothing else can build flawless diamond in the
massive quantities needed to put up this structure. Nothing else but a Space Pier can put
payload into space in sufficient quantities to achieve the space expansion daydreams of
the 60s and 70s.
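The 43-cents-per-kilogram claim can be roughly reproduced from first principles. The electricity price, the extra climb after leaving the track, and the neglect of launcher losses below are all assumptions, so this is an order-of-magnitude sketch rather than Hall's actual calculation:

V_ORBIT = 7800.0            # m/s, roughly low-Earth-orbit speed
G = 9.81
EXTRA_CLIMB = 200_000.0     # m gained after leaving the 100 km track (rough)
PRICE_PER_KWH = 0.05        # assumed electricity price, USD

energy_j_per_kg = 0.5 * V_ORBIT**2 + G * EXTRA_CLIMB
energy_kwh_per_kg = energy_j_per_kg / 3.6e6
cost_per_kg = energy_kwh_per_kg * PRICE_PER_KWH
print(f"{energy_kwh_per_kg:.1f} kWh/kg -> ~${cost_per_kg:.2f}/kg, "
      f"~${cost_per_kg * 10_000:,.0f} of electricity per 10 t payload")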
A future of mankind in space is only enabled by 1) molecular nanotechnology, and
2) Space Piers, which can only be built using it. Nothing else will do. A Space Pier
combines two simple concepts: that of a mass driver, and putting it above most of the
atmosphere. It's extremely simple, and is the logical conclusion of building taller and taller
buildings. Although a Space Pier is large, building it would only require a small amount of
the total carbon available. A Space Pier consumes about half a million tonnes, while our
total carbon budget from the atmosphere is about 118 billion tonnes. That's roughly
1/200,000 of the carbon available. It's definitely a major project that would use up
substantial resources, but is well within the realm of possibility.
Assuming a launch of 10 tonnes every ten minutes, we get a figure of 5,256,000
tonnes a decade, instead of the measly 100,000 tonnes a decade of the Edwards space
elevator. That makes a Space Pier roughly 52 times better at sending payloads into space
than a space elevator, which is a rather big deal. It's an even bigger deal when you start
adding additional tracks and support to the Space Pier, allowing it to support multiple
launches simultaneously. At some point, the main restriction becomes how much power
you can produce at the site using nuclear power plants, rather than the costs of additional
construction itself, which are modest in comparison. A Space Pier can be worked on and
expanded while launches are ongoing, unlike a space elevator which is non-operational
during construction. A 10-track Space Pier can launch about 52,560,000 tonnes a decade,
enough to start building large structures on the Moon, such as its own Space Pier. Carbon
is practically non-existent on the Moon's surface, so any Space Pier built there would need
to be built with carbon sent from Earth. 50 million tonnes would be enough to build over a
thousand mass drivers on the Moon, which could then send up 5 billion tonnes of Moon
rock in a single decade! Now we're talking. Large amounts of dead, dumb matter like Moon
rock are needed to shield space colonies from the dangerous effects of radiation. Each
square meter will need at least 10 tonnes, more for colonies outside of the Earth's Van
Allen Belts, in attractive locations like the Lagrange points, where colonies can remain
stable in their orbits relative to the Earth and Moon. 5 billion tonnes of Moon rock is enough
to facilitate the creation of about 500 space colonies, assuming 10 million tonnes per
colony, which is enough to contain roughly 1,500,000 people. Even with the magic of
nanotechnology, a ten-track Space Pier on the Earth, a thousand industrial-scale mass
drivers on the Moon operating around the clock, each launching ten tonnes every ten
minutes, and two or three decades of work to build it all, we could only put about 0.02 percent of
the global population into space. To grasp the amount of infrastructure it would take to do
this, it's comparable in mass to the entire world's annual oil output.
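The launch-rate arithmetic behind this scenario is simple enough to lay out explicitly. The figures below are the chapter's own assumptions (10 tonnes per launch, one launch every ten minutes per track, 10 million tonnes of shielding per 3,000-person colony); the world population divisor is an added assumption of roughly 7.4 billion:

MINUTES_PER_DECADE = 10 * 365.25 * 24 * 60

def tonnes_per_decade(tracks, tonnes_per_launch=10, minutes_per_launch=10):
    launches = MINUTES_PER_DECADE / minutes_per_launch * tracks
    return launches * tonnes_per_launch

single_track = tonnes_per_decade(1)        # ~5.26 million t, ~52x the elevator's 100,000 t
ten_track = tonnes_per_decade(10)          # ~52.6 million t
lunar_total = tonnes_per_decade(1000)      # 1,000 lunar mass drivers: ~5.3 billion t

colonies = lunar_total / 10_000_000
people = colonies * 3_000
print(f"1 track: {single_track:,.0f} t/decade; 10 tracks: {ten_track:,.0f} t/decade")
print(f"1,000 lunar drivers: {lunar_total:,.0f} t/decade -> ~{colonies:.0f} colonies, "
      f"~{people:,.0f} people ({people / 7.4e9:.3%} of humanity)")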
Creating space colonies could even be accelerated beyond the above scenario by
using self-replicating factories on the Moon which create mass drivers primarily out of local
materials, using ultra-strong diamond materials only for the most crucial components.
Since the Moon has no atmosphere, you can build a mass driver right on the ground,
instead of building a huge, expensive tower to support it. Robert A. Freitas conducted a
study of self-replicating factories on the Moon, titled A Self-Replicating, Growing Lunar
Factory in 198150. Freitas' design begins with a 100 ton seed and replicates itself in a year,
but it does not use molecular manufacturing. A design based on molecular manufacturing
and powered by a nuclear reactor could replicate itself much faster, on the order of a day.
In just 18 days, the factories could harvest 4 billion tons of rock a year, roughly equivalent
to the annual industrial output of all human civilization.
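The exponential growth claim can be made concrete with a short sketch. The daily doubling time is the document's assumption for an MM-based factory, and the per-factory mining rate is simply backed out of the "4 billion tons a year after 18 days" figure, so both numbers are illustrative rather than engineering estimates:

SEED_MASS_TONS = 100        # Freitas' 1981 seed factory
DOUBLING_DAYS = 1           # assumed replication time with molecular manufacturing

def factories_after(days):
    return 2 ** (days // DOUBLING_DAYS)

n = factories_after(18)                      # 2**18 = 262,144 factories
implied_rate = 4e9 / n                       # tons of rock per factory per year
print(f"after 18 days: {n:,} factories ({n * SEED_MASS_TONS:,} tons of machinery)")
print(f"implied mining rate: ~{implied_rate:,.0f} tons per factory per year")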
From then on, providing nuclear power for the factories is the main limitation on
increasing output. A single nuclear power plant with an output of 2 GW is enough to power
174 of these factories, so many nuclear power plants would be needed. Operating solely
on solar energy generated by themselves, the factories would take a year to self-replicate.
Assisting the factories with large solar panels in space beaming down power is another
option. A 4 GW solar power station would weigh about 80,000 tonnes, which could be
launched from an Earth-based 10-track Space Pier in pieces over the course of six days.
Every six days, energy infrastructure could be provided for the construction of 400
factories with a combined annual output of 400 million tons (~362 million tonnes) of Moon
rock, assuming each factory requires 10 MW to power it (Freitas' paper estimates 0.47 MW
- 11.5 MW power requirements per factory). That alone would be sufficient to produce 3.6
billion tonnes a decade, more than half the 5 billion tonne benchmark for 1,000 mass
drivers sending material up into space. If the self-replicating lunar robotics could construct
mass drivers on their own, billions of tonnes of Moon rock might be sent up every week
instead of every decade. Eventually, this would be enough to build so many space colonies
around the Earth that they would create a gigantic and clearly visible ring.
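The power-budget arithmetic in this paragraph can be checked as follows. This is a sketch using the chapter's figures; the per-factory power draws are taken at points in Freitas' 0.47-11.5 MW range, which is where the 174-factory and 400-factory numbers come from:

factories_per_2gw_plant = 2_000 / 11.5           # MW / (MW per factory) -> ~174
factories_per_4gw_solar = 4_000 / 10             # 4 GW station at 10 MW each -> 400

STATION_MASS_T = 80_000
pier_lift_t_per_day = 10 * 10 * (24 * 60 / 10)   # 10 tracks, 10 t every 10 minutes
days_to_lift_station = STATION_MASS_T / pier_lift_t_per_day

print(f"{factories_per_2gw_plant:.0f} factories per 2 GW fission plant")
print(f"{factories_per_4gw_solar:.0f} factories per 4 GW solar station, "
      f"lifted in ~{days_to_lift_station:.1f} days by a 10-track Space Pier")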
Hopefully these sketches illustrate the massive gulf between space colonization with
pre-MM technologies and space colonization with MM. The gulf is huge. The primary limiting
factor in the case of MM is human supervision; the degree of supervision that would be necessary is
still unknown. If factories, mass drivers, and space stations can all be constructed
according to an automated program, with minimal human supervision, the colonization of
space could proceed quite rapidly in the decades after the development of MM. That is why
we can't completely rule out major space-based risks this century. If MM is developed in
2050, that gives humanity 50 years to colonize space in our crucial window, at which point
risks do emerge. The bio-attack risk outlined in this chapter could certainly become feasible
within that window. However, if MM is not developed this century, then it seems that major
risks from space are also unlikely.
Space and the Far Future

In the far future, if humanity doesn't destroy itself, some mass expansion into space
seems likely. It probably isn't necessary in the nearer term, as the Earth could hold trillions
of people with plenty of space if huge underground caverns are excavated and flooded with
sunlight, millions of 50-mile high arcologies are built, and so on. The latest population
projections, contrary to two decades of prior consensus, have world population continuing
to increase throughout the entire 21st century rather than leveling off51. Quoting the
researchers: "world population stabilization unlikely this century." Assuming the world
population continues to grow indefinitely, eventually all these people will need to find
somewhere to be put which isn't the Earth.
Assuming we do expand into space, increasing our level of reproduction to produce
enough people to take up all that space, what are the long-term implications? Though
space may present dangers in the short term, in the long term it actually secures our future
as a species. If mankind spreads across the galaxy, it would become almost impossible to
wipe us out. For a detailed step-by-step scenario of how mankind might go from building
modest structures in low Earth orbit to colonizing the entire galaxy, we recommend the
book The Millennial Project by Marshall T. Savage52.
The long-term implications of space colonization depend heavily on the degree of
order and control that exists in future civilization. If the future is controlled by a singleton, a
single top-level decision-making agency, there may be little to no danger, and it could be
safe to live on Earth53. If there is a lack of control, we could reach a point where small
organizations or even individuals could gather enough energy to send a Great Pyramid-sized iron sphere into the Earth at a substantial fraction of the speed of light, causing an impact doomsday.
Alternatively, perhaps the Earth could be surrounded by defensive layers and structures so
powerful that they can stop such attacks. Or, perhaps detectors could be scattered across
the local cosmic neighborhood, alerting the planet to the unauthorized acceleration of
distant objects and allowing the planet to form a response long in advance. All of these
outcomes are possible.
Probability Estimates and Discussion
As we've stated several times in this chapter, space colonization on the level required
to pose a serious threat to Earth does not seem particularly likely in the first half of the 21st
century. One of the reasons why is that molecular nanotechnology does not seem very
likely to be developed in the first half of the century. There is close to zero effort towards
the prerequisite steps, namely diamondoid mechanosynthesis, complex nanomachine
design, and positional molecular placement. As we've argued, space expansion depends
upon the development of MM to become a serious risk.
Even if MM is developed, there are other risks that derive from it which seem to pose
a greater danger than the relatively speculative and far-out risks discussed in this chapter.
Why would someone launch a bio-attack on the Earth from space when they could do it
from the ground and seal themselves inside a bunker? For the cost of putting a 7 million
ton space station into orbit, they could build an underground facility with enough food and
equipment for decades, and carry out their plans of world domination that way. Anything
that can be built in space can be built on the ground in a comparable structure, for great
savings, improved specifications, or both.
Space weapons seem to pose a greater threat in the context of destabilizing the
international system than they do in terms of direct threat to the human species. Rather
than space itself being the threat, we ought to consider the exacerbation of competition
over space resources (especially the Moon) leading to nuclear war or a nano arms race on
Earth.
We've reviewed that materials in space are scarce, there are not many NEOs
available, and launching anything into space is expensive, even with potential future
assistance from megastructures like the Space Pier. Even if a Space Pier can be built in a
semi-automated fashion, there is a limit to how much carbon is available, and real estate
will continue to have value. A Space Pier would cast vast shadows which would affect
property values across tens of thousands of square miles. Perhaps the entire structure
could be covered in displays, such as phased array optics, which simulate it being
transparent, but then the footprint of the towers themselves would still require substantial
real estate. Perhaps the greatest cost of all would be the idea that it exists and the demand
for its use, which could cause aggravation or jealousy among other nations or powerful
companies.
All of these factors mean that access to space would still have value. Even if many
Space Piers are constructed, it will continue to be relatively scarce and probably in
substantial demand. Space has resources, like NEOs and the asteroid belt, which contain
gigatons of platinum and gold that could be harvested, returned to Earth, and sold for a
profit. A Space Pier would need to be defended, just like any other important structure. It
would be an appealing target for attack, just as the Twin Towers in New York were.
The scarcity of materials in space and the cost of sending payloads into orbit means
that the greatest chunk of material there, the Moon, would have great comparative value.
The status of the Moon is ostensibly determined by the 1979 Moon Treaty, The Agreement
Governing the Activities of States on the Moon and Other Celestial Bodies, which assigns
the jurisdiction of the Moon to the international community and international law. The
problem is that the international community and international law are abstractions which
aren't actually real. Sooner or later there will be some powerful incentive to begin
colonizing the Moon and mining it for material to build space colonies, and figuring that the
whole object is in the possession of the international community will not be specific
enough. It will become necessary to actually protect the Moon with military resources.
Currently, putting weapons in space is permitted according to international law, as long as
they are not weapons of mass destruction. So, the legal precedent exists to deploy weapon
systems to secure the Moon. Whoever protects the Moon will become its de facto owner,
regardless of what treaties say.
Consider the electrical cost of launching a ten-tonne payload into orbit from a
hypothetical Space Pier: roughly $4,200. Besides that cost, there is also the cost of building the
structure in the first place, and it seems likely that these costs will be amortized and
wrapped into charges for each individual launch. Therefore, launches could be
substantially more expensive than $4,200. If the cost for building the structure is $10 billion,
and the investors want to make all their money back within five years, and launches occur
every ten minutes around the clock without interruption, a $38,051 charge would need to
be added to every ten-tonne load. That brings the total cost per launch to roughly $42,250.
Launching ten tonnes from the Moon might cost only $1,000 or less, considering several
factors: 1) the initially low cost of lunar real estate, 2) the absence of an atmosphere to
encumber the acceleration of the load, and 3) the comparatively lower gravity of the Moon. If
someone is building a 7 million tonne space station with materials launched from the Earth,
launch costs would be $26,635,700,000, or roughly $27 billion. Building a similar mass
driver on the surface of the Moon would be a lot cheaper than building a Space Pier;
perhaps by a factor of 10. We can ballpark the cost of launch per ten tonnes of Moon rock
at $5,000, making the same assumption that investors will want their money back in five
years and that electricity costs will be minimal. At that price, launching 7 million tonnes
works out to $3,500,000,000, or about 7 times cheaper. The basic fact is that the Moon's
gravity is 6 times less than the Earth's and it has no atmosphere. That makes it much
easier to use as a staging area for launching large quantities of basic building materials
into the Earth-Moon neighborhood.
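The amortization arithmetic above is worth making explicit. The construction cost, payback period, and the $5,000 lunar launch price are the chapter's assumptions, so the output is only as good as those inputs:

CAPITAL_USD = 10e9
PAYBACK_YEARS = 5
LAUNCHES_PER_DAY = 24 * 6                 # one 10 t launch every 10 minutes

launches = PAYBACK_YEARS * 365 * LAUNCHES_PER_DAY
surcharge = CAPITAL_USD / launches        # ~$38,050 per 10 t load
print(f"capital surcharge: ${surcharge:,.0f} per 10 t launch")

loads = 7_000_000 / 10                    # 700,000 launches for a 7 Mt space station
earth_cost = loads * surcharge            # ~$26.6 billion
moon_cost = loads * 5_000                 # assumed $5,000 per 10 t from the Moon
print(f"from Earth: ${earth_cost / 1e9:.1f}B; from the Moon: ${moon_cost / 1e9:.1f}B "
      f"(~{earth_cost / moon_cost:.1f}x cheaper)")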
The uncertain legal status of the Moon, and its role as a source of material at least ten
times cheaper than the Earth (much more at large scales), means that there will be an
incentive to fight over it and control it. Like the American continent, it could become home
to independent states which begin as colonies of Earth and eventually declare their
independence. This could lead to bloody wars going both ways, wars waged with futuristic
weapons like those outlined in this and earlier chapters. Those on the Moon might think
little of people on the Earth, and do their best to sterilize the surface. If the surface of the
Earth became uninhabitable, say through a Daedalus impact, the denizens of the Moon
and space would have to be extremely confident in their ability to keep the human race
going without that resource. This seems very plausible, especially when we take into
account human enhancement, but it's worth noting. Certainly such an event would lead to
many deaths and lessen the quality of life for all humanity.

Latest developments:
Yuri Milner has created a project (Breakthrough Starshot) to use a very powerful laser to
send nanocraft to the stars. Such a phased array, with gigawatts of power, might be based in
space. It could be used to attack targets on Earth, and it could probably change its aim
thousands of times a second or more, so it could be used to kill people. The safer solution
would be to place it on the side of the Moon facing away from the Earth.

References

1. Upgraded SpaceX Falcon 9.1.1 will launch 25% more than old Falcon 9 and bring
price down to $4109 per kilogram to LEO. March 22, 2013. NextBigFuture.
2. Kenneth Chang. Beings Not Made for Space. January 27, 2014. The New York
Times.
3. Lucian Parfeni. Micrometeorite Hits the International Space Station, Punching a
Bullet Hole. April 30, 2013. Softpedia.
4. David Dickinson. How Micrometeoroid Impacts Pose a Danger for Today's
Spacewalk. April 19, 2013. Universe Today.
5. Bill Kaufmann. Mars colonization a suicide mission, says Canadian astronaut.
6. James Gleick. Little Bug, Big Bang. December 1, 1996. The New York Times.
7. Leslie Horn. That Massive Russian Rocket Explosion Was Caused by Dumb
Human Error. July 10, 2013. Gizmodo.
8. Space Shuttle Columbia Disaster. Wikipedia.
9. Your Body in Space: Use it or Lose It. NASA.
10. Alexander Davies. Deadly Space Junk Sends ISS Astronauts Running for Escape
Pods. March 26, 2012. Discovery.
11. Tiffany Lam. Russians unveil space hotel. August 18, 2011. CNN.
12. John M. Smart. The Race to Inner Space. December 17, 2011. Ever Smarter
World.
13. J. Storrs Hall. The Space Pier: a hybrid Space-launch Tower concept. 2007.
Autogeny.org.
14. Al Globus, Nitin Arora, Ankur Bajoria, Joe Straut. The Kalpana One Orbital Space
Settlement Revised. 2007. American Institute of Aeronautics and Astronautics.
15. List Of Aten Minor Planets. February 2, 2012. Minor Planet Center.
16. Robert Marcus, H. Jay Melosh, and Gareth Collins. Earth Impact Effects Program.
2010. Imperial College London.
17. Alan Chamberlin. NEO Discovery Statistics. 2014. Near Earth Object Program,
NASA.
18. Clark R. Chapman and David Morrison. Impacts on the Earth by Asteroids and
Comets - Assessing the Hazard. January 6, 1994. Nature 367 (6458): 33-40.
19. Curt Covey, Starley L. Thompson, Paul R. Weissman, Michael C. MacCracken.
Global climatic effects of atmospheric dust from an asteroid or comet impact on
Earth. December 1994. Global and Planetary Change: 263-273.
20. John S. Lewis. Rain Of Iron And Ice: The Very Real Threat Of Comet And Asteroid
Bombardment. 1997. Helix Books.
21. Kjeld C. Engvild. A review of the risks of sudden global cooling and its effects on
agriculture. 2003. Agricultural and Forest Meteorology, Volume 115, Issues 3-4, 30
March 2003, Pages 127-137.
22. Covey, C; Morrison, D.; Toon, O.B.; Turco, R.P.; Zahnle, K. Environmental
Perturbations Caused By the Impacts of Asteroids and Comets. Reviews of
Geophysics 35 (1): 41-78.
23. Baines, KH; Ivanov, BA; Ocampo, AC; Pope, KO. Impact Winter and the Cretaceous-Tertiary Extinctions - Results Of A Chicxulub Asteroid Impact Model. Earth and
Planetary Science Letters 128 (3-4): 719-725.
24. Earth Impact Effects Program.
25. Alvarez LW, Alvarez W, Asaro F, Michel HV. Extraterrestrial cause for the
Cretaceous-Tertiary extinction. 1980. Science 208 (4448): 1095-1108.
26. H. J. Melosh, N. M. Schneider, K. J. Zahnle, D. Latham. Ignition of global wildfires
at the Cretaceous/Tertiary boundary. 1990.
27. MacLeod N, Rawson PF, Forey PL, Banner FT, Boudagher-Fadel MK, Bown PR,
Burnett JA, Chambers P, Culver S, Evans SE, Jeffery C, Kaminski MA, Lord AR,
Milner AC, Milner AR, Morris N, Owen E, Rosen BR, Smith AB, Taylor PD, Urquhart
E, Young JR. The Cretaceous-Tertiary biotic transition. 1997. Journal of the
Geological Society 154 (2): 265-292.
28. Johan Vellekoop, Appy Sluijs, Jan Smit, Stefan Schouten, Johan W. H. Weijers,
Jaap S. Sinninghe Damsté, and Henk Brinkhuis. Rapid short-term cooling following
the Chicxulub impact at the Cretaceous-Paleogene boundary. May 27, 2014.
Proceedings of the National Academy of Sciences of the United States of America,
vol. 111, no. 21, 7537-7541.
29. Petit, J.R., J. Jouzel, D. Raynaud, N.I. Barkov, J.-M. Barnola, I. Basile, M. Bender,
J. Chappellaz, M. Davis, G. Delaygue, M. Delmotte, V.M. Kotlyakov, M. Legrand,
V.Y. Lipenkov, C. Lorius, L. Pépin, C. Ritz, E. Saltzman, and M. Stievenard. Climate
and atmospheric history of the past 420,000 years from the Vostok ice core,
Antarctica. 1999. Nature 399: 429-436.
30. Hector Javier Durand-Manterola and Guadalupe Cordero-Tercero. Assessments of
the energy, mass and size of the Chicxulub Impactor. March 19, 2014. Arxiv.org.
31. Covey 1994


32. Project Daedalus Study Group: A. Bond et al. Project Daedalus The Final Report
on the BIS Starship Study, JBIS Interstellar Studies, Supplement 1978.
33. Science: Sun Gun. July 9, 1945. Time.
34. Bryan Caplan. The Totalitarian Threat. 2006. In Nick Bostrom and Milan Cirkovic,
eds. Global Catastrophic Risks. Oxford: Oxford University Press, pp. 504-519.
35. Nick Bostrom. Existential Risks: Analyzing Human Extinction Scenarios. 2002.
Journal of Evolution and Technology, Vol. 9, No. 1.
36. Globus 2007
37. Traill LW, Bradshaw JA, Brook BW. Minimum viable population size: A meta-analysis of 30 years of published estimates. 2007. Biological Conservation 139 (1-2): 159-166.
38. Michelle Starr. Japanese company plans space elevator by 2050. September 23,
2014. CNET.
39. Bradley C. Edwards, Eric A. Westling. The Space Elevator: A Revolutionary Earth-to-Space Transportation System. 2003.
40. Henry Kolm. Mass Driver Update. September 1980. L5 News. National Space
Society.
41. Oyabu, Noriaki; Custance, Óscar; Yi, Insook; Sugawara, Yasuhiro; Morita, Seizo.
Mechanical Vertical Manipulation of Selected Single Atoms by Soft Nanoindentation
Using Near Contact Atomic Force Microscopy. 2003. Physical Review Letters 90
(17).
42. Nanofactory Collaboration. 2006-2014.
43. Ray Kurzweil. The Singularity is Near. 2005. Viking.
44. Ralph Merkle and Robert A. Freitas. Remaining Technical Challenges for Achieving
Positional Diamondoid Molecular Manufacturing and Diamondoid Nanofactories.
2007. Nanofactory Collaboration.
45. Eric Drexler. Radical Abundance: How a Revolution in Nanotechnology Will Change
Civilization. 2013. PublicAffairs.
46. Michael Anissimov. Interview with Robert A. Freitas. 2010. Lifeboat Foundation.
47. Robert J. Bradbury. Sapphire Mansions: Understanding the Real Impact of
Molecular Nanotechnology. June 2003. Aeiveos.
48. Drexler 2013.
49. Hall 2007


50. Robert A. Freitas. A Self-Replicating, Growing Lunar Factory. Proceedings of the
Fifth Princeton/AIAA Conference. May 18-21, 1981. Eds. Jerry Grey and Lawrence
A. Hamdan. American Institute of Aeronautics and Astronautics
51. Patrick Gerland, Adrian E. Raftery, Hana Ševčíková, Nan Li, Danan Gu, Thomas
Spoorenberg, Leontine Alkema, Bailey K. Fosdick, Jennifer Chunn, Nevena Lalic,
Guiomar Bay, Thomas Buettner, Gerhard K. Heilig, John Wilmoth. World population
stabilization unlikely this century. September 18, 2014. Science.
52. Marshall T. Savage. The Millennial Project: Colonizing the Galaxy in Eight Easy
Steps. 1992. Little, Brown, and Company.
53. Nick Bostrom. What is a singleton? 2006. Linguistic and Philosophical
Investigations, Vol. 5, No. 2: pp. 48-54.

Block 4 Super technologies. Nanotech and Biotech.


Chapter 12. Biological weapons
The most immediate global catastrophic risk facing humanity, especially in the pre-2030
time frame, plausibly appears to be the threat of genetically engineered viruses
causing a global pandemic. In 1918, the Spanish flu killed 50 million people, a number
greater than the casualties from World War I which occurred immediately prior. After
biology researchers publicly released the full genome for the virus, tech thinkers Ray
Kurzweil and Bill Joy published an op-ed article in The New York Times in 2005 castigating
them1, saying, "This is extremely foolish. The genome is essentially the design of a weapon
of mass destruction."
The greatest risk comes into play when we consider the possibility of either creating
dangerous pathogens, like the Spanish flu, from scratch, or modifying the genomes of
existing pathogens, like bird flu, to make them transmissible between humans2. Once the new
virus exists, the risk is that it either is used in warfare, or spread around airports in a
terrorist act.

Biological weapons are forbidden by the Biological Weapons Convention, which
entered into force in 1975 under the auspices of the United Nations 3. The treaty has been
signed or ratified by 154 UN member states, signed but not ratified by 10 UN member
states, and not signed or ratified by an additional 16 UN member states. There are three
non-UN member states, which have not participated in the treaty: Kosovo, Taiwan, and
Vatican City. A summary of the convention is in Article 1 of its text:
Each State Party to this Convention undertakes never in any circumstances to
develop, produce, stockpile or otherwise acquire or retain:

(1) Microbial or other biological agents, or toxins whatever their origin or
method of production, of types and in quantities that have no justification for
prophylactic, protective or other peaceful purposes;
(2) Weapons, equipment or means of delivery designed to use such agents
or toxins for hostile purposes or in armed conflict.
The two largest historical stockpiles, those of the United States and Russia, have
allegedly been destroyed, though Russian destruction of Soviet biological weapons
facilities is mostly undocumented. In 1992, Ken Alibek, a former Soviet biological warfare
expert who was the First Deputy Director of the Biopreparat (Soviet biological weapons
facilities), defected to the United States. Over the subsequent decade, he wrote various op-eds and a book on how massive the Soviet biological weapons program was and lamented
the lack of international oversight of Russia's biological weapons program, pointing out that
inspectors are still denied access to many crucial facilities 4.
Both the United States and Russia still maintain facilities that study biological
weapons for defensive purposes, and these facilities are known to contain samples of
dangerous viruses, such as smallpox. Two medical journal papers claim that as recently as
the late 80s, the United States military was developing vaccines for tularemia, Q
fever, Rift Valley fever, Venezuelan equine encephalitis, Eastern and Western equine
encephalitis, chikungunya fever, Argentine hemorrhagic fever, the botulinum toxicoses, and
anthrax5,6. The biological weapons convention is mostly a gentleman's agreement, is not
backed by monitoring or inspections, and offers wide latitude for the development of
offensive biological weaponry. The James Martin Center for Nonproliferation maintains a
detailed list of known, probable, and possible biological and chemical agents
possessed by various countries, a list on which many dangerous pathogens appear 7.
Furthermore, the Federation of American Scientists has questioned whether the current US
interpretation of the Biological Weapons Convention is in line with its original
interpretation8:
Recently, the US interpretation of the Biological Weapons Convention has come to
reflect the point of view that Article I, which forbids the development or production of
biological agents except under certain circumstances, does not apply to non-lethal
biological weapons. This position is at odds with original US interpretation of the
Convention. From the perspective of this original interpretation, current non-lethal
weapons research clearly exceeds the limits of acceptability defined by Article I.
There is also the dual-use problem, meaning that pathogens grown for testing and
defensive uses can quickly be re-purposed for offensive use. The United States had an
extensive biological weapons program from 1943-1969, weaponizing anthrax, tularemia,
brucellosis, Q-fever, VEE, botulism, and staph B. The procedures and methods needed to
grow and weaponize these pathogens are known to the US military, and industrial
production could begin at any time. The US developed munitions such as the E120
biological bomblet, a specialized bomblet designed to hold 0.1 kg of biological agent, to
rotate rapidly and shatter on impact, spraying tularemia bacteria across as wide an area as
possible. Despite the international agreement, all the tools for biological warfare could
readily be built.
What is tularemia? It is a highly infectious bacterium found in various
lagomorphs (hares, rabbits, pikas) in North America. Humans can become infected by as
few as 10-50 bacteria. From the point of view of biological weaponry, tularemia is attractive
because it is easy to aerosolize, easy to clean up (unlike anthrax), and is incapacitating to
the victim. Within a few days of contact, tularemia causes pus-filled lesions to form,
followed by fever, lethargy, anorexia, symptoms of blood poisoning, and possibly death.
Tularemia causes the lymph nodes to swell and fill with pus, similar to bubonic plague.
The death rate with treatment is about 1%; without treatment it is 7%. The low death rate is
actually somewhat helpful from the perspective of warfare, as sick or injured soldiers and
civilians must be taken care of by others, creating a burden that hampers fighting
capability.
Even worse than tularemia is weaponized anthrax, because anthrax spores are
persistent and very difficult to clean up. To clean them up requires sterilizing an entire area
with diluted formaldehyde. Anthrax spores are not fungal spores; the scientific term is
endospores, which are just a dormant and more resilient form of the original bacteria
("spores" for short). Anthrax spores stay in a dehydrated form and are reactivated when a
host inhales or ingests them, or they come into contact with a skin lesion. At that point, they
come to life quickly and can cause fatal disease. We'll spare you a detailed description of
the disease, but the death rate for ingestion of anthrax spores is 25-60%, depending upon
promptness of treatment9.
After anthrax-contaminated letters containing just a few teaspoons' worth of spores
were sent by terrorists to congressional officials in 2001, it cost over $100 million to clean
up the anthrax trail safely, which included spores at the Brentwood mail sorting facility in
northeast Washington and the State Department mail sorting facility in Sterling, Virginia 10.
Dorothy A. Canter, chief scientist for bioterrorism issues at the Environmental Protection
Agency, who tracked the decontamination work, said, "It's in the hundreds of millions of
dollars for the cleanup alone."
If it cost over $100 million to clean up just two mailing facilities, imagine what it
would cost to clean up a whole city. It would be close to impossible. Anthrax spores are
very long-lasting; there have been cases where anthrax-infected animal corpses have
caused disease 70 years after they were dug up. So, it is reasonable to assume that a site
thoroughly contaminated with anthrax would remain uninhabitable for 70 years or longer 11.
It would be cheaper and easier to just rebuild a city in a different place than clean one up.
The total real estate value of Manhattan was $802.4 billion in 2006; consider if that entire
area were contaminated with anthrax. It would knock more than $800 billion off the US
economy all at once, and far more due to second-order economic disruption.
The case of Gruinard Island, an island in Scotland contaminated with anthrax due to
biological weapons testing, illustrates what is required to remove anthrax 12. The island is
just soil and dirt (no trees), and is under a square mile in size. To decontaminate the island
required diluting 280 tonnes of formaldehyde in seawater and spreading it over the entire
surface of the island, with complete removal of the worst-contaminated topsoil. After the
decontamination was completed, a flock of sheep was introduced to the island and they
thrived without incident.
These descriptions of biological weapons are included to give the reader more
clarity about what we are referring to when the phrase is used.
Consider a total nuclear war between the United States and Russia, fought over
Crimea or some other border area. After each side unloaded on one another, many major
cities and military bases would be destroyed, but each side would still have ample military
resources to continue fighting. As the conflict dragged on, there would be an increasing
desperation to turn to exotic weapons with a better cost-benefit profile than conventional
arms. Biological weapons fall into this category. Anthrax could quickly be produced in huge
quantities (if there aren't already secret stockpiles we don't know about), loaded up into
bomblets, and delivered to the enemy. Recall that few people even remember the exact
reason why World War I was fought, yet it still lasted for four years and led to the
death of more than 9 million combatants. Such a total war is certainly plausible, even if it
would be fought for reasons that don't seem to make much sense to us today.
Alongside anthrax, other dangerous pathogens, such as tularemia, might be used,
creating an artificial epidemic. Refugees pouring out from the cities into the countryside
would carry the pathogens along with them, spreading them among friends, family, and
whoever else they came across. The war might be accompanied by the high-altitude
detonation of hydrogen bombs, causing a continent-wide EMP that fries all unprotected
electronics and makes industrial agriculture impossible 13. All sophisticated farm machinery
is dependent on electronic circuits and cannot function without them. They would be fried in
an EMP attack. The refugee situation would lead to a rampant lack of hygiene, as refugee
camps would lack proper sanitation and would likely be contaminated with human waste.
Combinations of factors such as these (nuclear war, biological war, pandemic, breakdown
of infrastructure) could deal an extremely hard hit to large countries such as
Russia and the United States, causing them to lose more than nine-tenths of their
population. Frank Gaffney, head of the Center for Security Policy, has said of an EMP
attack, "Within a year of that attack, nine out of 10 Americans would be dead, because we
can't support a population of the present size in urban centers and the like without
electricity."14
Without some external force to reestablish order in a situation such as that
described here, countries like the United States and Russia could even decline into feudal
societies. Further war and conflict based on grudges from the original conflict could
continue whittling down the human species, until crop failures from nuclear winter finally kill
us off. Modern agriculture is based on artificial fertilizer, and if infrastructure is down and
the security situation is poor, artificial fertilizer cannot be produced. Based on subsistence
agriculture alone, the carrying capacity of the Earth would be significantly lower than it is
now. Our modern population depends on industrial agriculture and artificial fertilizer to
sustain this many people. Without these tools, billions would perish.
We've briefly reviewed one possible disaster scenario, which would not be fatal to
humanity all at once, but could be the trigger for a long-term decline that eventually
culminates in extinction. We will return to such scenarios later in the chapter, but for now
we will review the tools which could be used to cause trouble.
The key danger area is synthetic biology, a broad term defined as follows 15:
A) the design and construction of new biological parts, devices, and systems, and
B) the re-design of existing, natural biological systems for useful purposes.
The term "synthetic biology" somewhat replaces the older term "genetic
engineering," since synthetic biology has a more inclusive definition which captures much
more of the work happening today. In synthetic biology, genes are not merely modified, but
completely new genes may be synthesized and inserted, or even whole organisms constructed from
entirely synthetic genomes, as was achieved by Craig Venter in 2010 16. Venter had
previously chemically synthesized the genome of the bacteriophage virus phi X 174 in
2003, but it was not until 2010 that an artificial bacterial genome was created. The genome
was synthesized and injected into a bacterium whose own genome had been removed; the
synthetic genome booted up the organism, which came to life, Frankenstein-style, and
demonstrated itself capable of self-replication. The bacterium was named Mycoplasma
laboratorium, after the Mycoplasma genitalium bacterium which inspired its creation, the
simplest bacterium known at the time. Venter's Institute for Biological Alternatives called the
original 2003 viral synthesis "an important advance toward the goal of a completely
synthetic genome that could aid in carbon sequestration and energy production."17
In 2002, the polio virus was synthesized from an artificial genome 18, an advance that
Craig Venter called irresponsible and inflammatory without scientific justification. These
synthetic virus particles are absolutely indistinguishable from natural particles on all
parameters, from size to behavior and contagiousness. Though Venter was working on
developing a synthetic viral genome at the time, the objectionable nature of the
experiment was that polio is a virus infectious to humans. The project was financed by the
US Department of Defense as part of a biowarfare response program. We see that biowarfare
response also encompasses creating dangerous pathogens, not just responding to them.
That is the fundamental nature of dual use. Commenting on the advance, researcher
Eckard Wimmer said, "The reason we did it was to prove that it can be done and it is now a
reality."19
The synthetic biology path pursued by Craig Venter's J. Craig Venter Institute (JCVI)
focuses on stripping down the simplest known bacterium, Mycoplasma genitalium, which
consists of 525 genes making up 582,970 base pairs. The idea, called the Minimal
Genome Project, is to simplify the genome of the bacterium to make a free-living cell with
the simplest possible genome for survival, which can then serve as a base template for
engineering new organisms with functions like producing renewable fuels from biological
feedstock. The overview of the Synthetic Biology and Bioenergy group of JCVI says,
"Since 1995, Dr. Venter and his teams have been trying to develop a minimal cell both to
understand the fundamentals of biology and to begin the process of building a new cell and
organism with optimized functions."
The activity of the Minimal Genome Project picked up in 2006, with
participation from Nobel laureate Hamilton Smith and microbiologist Clyde A. Hutchison III.
The project is still ongoing, and has reached important milestones. From the 525 genes of
M. genitalium, JCVI plans to scale down to just 382 genes, synthesizing them and
transplanting them into a M. genitalium cell whose original genome has been removed for this
purpose. This has not yet been achieved as of this writing (January 2015). In 2010, the
group successfully synthesized the 1,078,809 base pair genome of Mycoplasma mycoides
and transplanted it into a Mycoplasma capricolum cell, which was shown to be viable, i.e.,
it self-replicated billions of times. Craig Venter said the new organism, called Synthia, was
"the first species... to have its parents be a computer."20
Today, synthesizing genomes the length of a bacterium, i.e., half a million base
pairs, is extremely expensive and complicated. The 1,078,809 base pair genome of
Mycoplasma laboratorium cost $40 million to make and required a decade of work from 20
people. That works out to almost $40 per base pair. For more routine gene synthesis of
segments about 2,000-3,000 base pairs long, the more typical cost is 28 cents per base
pair, or $560 for a 2,000 base pair segment. Synthesizing a viral genome like phi X 174,
5,375 base pairs in length, currently costs about $1,500. So, the barrier to creating new
doomsday viruses is not cost so much as it is knowledge about what kind of genome to
synthesize.
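For concreteness, the cost figures quoted above (2015-era prices; the per-base-pair rate is a rough industry average, not a vendor quote) work out as follows:

def synthesis_cost(base_pairs, usd_per_bp=0.28):
    """Rough cost of routine gene synthesis at ~28 cents per base pair."""
    return base_pairs * usd_per_bp

print(f"2,000 bp segment: ~${synthesis_cost(2_000):,.0f}")               # ~$560
print(f"phi X 174 (5,375 bp): ~${synthesis_cost(5_375):,.0f}")           # ~$1,500
print(f"M. laboratorium effective rate: ~${40e6 / 1_078_809:.0f}/bp")    # ~$37/bp over a decade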
Basic tools for garage lab bioengineering and synthetic biology have been
developed. One of the most prominent international groups is the International Genetically
Engineered Machines (iGEM) competition, which was initially oriented towards
undergraduate students, but has since expanded to include high school students and
entrepreneurs. The MIT group led by Tom Knight which created iGEM also created a
standardized set of DNA sequences for genetic engineering, called BioBricks, and The
Registry of Standard Biological Parts, used by iGEM. The way iGEM works is that students
are assigned a kit of biological parts from the Registry of Standard Biological Parts, and are
given the summer to engineer these parts into working biological systems operating within
cells. As an example, the winners in 2007 were a team from Peking University, who
submitted "Towards a Self-Differentiated Bacterial Assembly Line." Other winners have
developed a new, more efficient approach to bio production (Slovenia 2010), improved the
design and construction of biosensors (Cambridge 2009), created a prototype designer
vaccine (Slovenia 2008), and re-engineered human cells to combat sepsis (Slovenia 2006)21.
There is a Biobrick Public Agreement, which is a free-to-use legal tool that allows
individuals, companies, and institutions to make their standardized biological parts free for
others to use. This makes what is essentially a DNA dev kit available for all to use. The
Standard Biological Parts Registry currently contains over 8,000 standard biological parts,
which can be used to create a wide variety of biological systems within cells. Most of the
parts within the registry are designed to operate within the E. coli bacterium, which lives in
the intestine and is the most widely studied prokaryotic organism.
The basic technological trend is towards biotech lab equipment becoming cheaper
and the knowledge of how to use it becoming more diffuse. The constant reduction in price
and improvements in the simplicity of DNA sequencers and synthesizers make
biohackers possible. However, it is important to point out that falling costs in the field of
gene synthesis are absolutely not exponential, and there is no predictable Moore's law
type effect happening (besides, Moore's law itself is leveling off). Still, we are entering an

era where it will be possible for nearly anyone to synthesize custom viruses, and eventually
bacteria.
Even simple genetic engineering, making a few modifications to an existing genome,
can be extremely dangerous. In 2014, controversial Dutch virologist Ron Fouchier and
his team at Erasmus Medical Center in the Netherlands studied the H5N1 avian flu
virus and determined that only five mutations were needed to make this bird flu
transmissible between humans22. This was confirmed by making the new strain and
spreading it between ferrets, which serve as a reliable human model for contagiousness. In
2011, Fouchier and his group had published details on how to make bird flu contagious
among humans, which was met by widespread condemnation by critics23. Microbiologist
David Relman of Stanford University told NPR, "I still don't understand why such a
risky approach must be taken [...] I'm discouraged."24 D. A. Henderson of the
University of Pittsburgh, a pioneer in the eradication of smallpox, said of this
ominous news that the benefits of this work do not outweigh the risks25. The
World Health Organization conveyed "deep concern about possible risks and misuses
associated with this research and the potential negative consequences."26 In December
2011, the National Science Advisory Board for Biosecurity (NSABB) ruled that Fouchier's
work be censored, but after much debate, the ruling was reversed and all the specific
mutation data was openly published27. Around that same time, Secretary of State Hillary
Clinton said there is evidence in Afghanistan that al Qaeda made a call to arms for,
and I quote, 'brothers with degrees in microbiology or chemistry to develop a weapon of
mass destruction.'28 Fouchier told New Scientist that his human-transmissible bird flu
variant is transmitted as efficiently as seasonal flu and that it has a 60 percent mortality
rate29.
Biological risk
The basic one-factorial scenario of biological catastrophe is a global pandemic caused
by one type of virus or bacterium. The spread of this pathogen could occur via two means:
as an epidemic transferred from human to human, or in the form of contamination of the
environment (air, water, food, soil). As an example, the Spanish Flu epidemic of 1918 spread
over the entire world except for several remote islands. Would-be man-killing
epidemics face two problems. The first is that if carriers of the pathogen die too quickly, it can't
spread. The second is that no matter how lethal the epidemic, some people usually have
congenital immunity to it. For example, about one in every three hundred people has an
innate immunity to AIDS, meaning they can contract HIV and it never progresses to AIDS.
Even if innate immunity is rare or non-existent, as is apparently the case with anthrax (the
authors were not able to find any medical research papers citing cases of innate immunity
to anthrax), vaccines are usually available.
A second variant of a global biological catastrophe would be the appearance of an
omnivorous agent which destroys the entire biosphere, devouring all live cells, or a more
limited version that takes out a large chunk of the biosphere. This is a more fantastic
scenario, less likely in the near term, but of uncertain likelihood in the long term. It seems
fanciful that natural evolution could produce such an agent, but perhaps through deliberate
biological engineering it could be possible. A bacterium immune to bacteriophages due to
fundamental genetic differences from the rest of all living bacteria would be a possible
candidate for such a pathogen. Harvard geneticist George Church is working towards such
a bacterium for research purposes30.
In 2009, Chris Phoenix of the Center for Responsible Nanotechnology wrote 31:
I learned about research that is nearing completion to develop a strain of E. coli
which cannot be infected by bacteriophages. Phages are a major mechanism,
likely *the* major mechanism, that keeps bacteria from growing out of control. A
phage-proof bacterium might behave very similarly to "red tide" algae blooms, which
apparently happen when an algae strain is transported away from its specialized
parasites. But E. coli is capable of living in a wide range of environments, including
soil, fresh water, and anaerobic conditions. A virus-proof version, with perhaps 50%
lower mortality, and (over time) less metabolic load from shedding virus defenses
that are no longer needed, might thrive in many conditions where it currently only
survives. The researchers doing this acknowledge the theoretical risk that some
bacteria might become invasive, but they don't seem to be taking anywhere near the
appropriate level of precaution. They are one gene deletion away from creating the
strain.

The work Phoenix refers to was undertaken by the famous geneticist George
Church, who describes it in a Seed magazine article32:
Given this knowledge, the modern tools of biotechnology allow us to do something
amazing: We can alter the translational code within an organism by modifying the
DNA bases of its genome, making the organism effectively immune to viral infection.
My colleagues and I are exploring this within E. coli, the microbial powerhouse of
the biotech world. By simply changing a certain 314 of the 5 million bases in the E.
coli genome, we can change one of its 64 codons. In 2009 this massive (albeit
nanoscale) construction project is nearing completion via breakthroughs in our
ability to write genomes. This process is increasingly automated and inexpensive
soon it will be relatively easy to change multiple codons. Viral genomes range
from 5,000 to a million bases in length, and each of the 64 codons is present, on
average, 20 times. This means that to survive the change of a single codon in its
host, a virus would require 20 simultaneous, specific, spontaneous changes to its
genome. Even in viruses with very high mutation rates, for example HIV, the chance
of getting a mutant virus with the correct 20 changes and zero lethal mutations is
infinitesimally small.
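Church's probability argument can be illustrated with a back-of-envelope calculation. The mutation rate below is an assumed, HIV-like value, and real evolutionary dynamics (many replications, recombination, selection) are ignored, so this is only an order-of-magnitude sketch of why 20 simultaneous specific changes are so unlikely:

MUTATION_RATE = 3e-5           # assumed per-base, per-replication mutation rate (HIV-like)
SPECIFIC = MUTATION_RATE / 3   # chance of mutating to one particular alternative base
REQUIRED_CHANGES = 20

p = SPECIFIC ** REQUIRED_CHANGES
print(f"P(20 specific simultaneous changes in one genome) ~ {p:.1e}")   # ~1e-100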
As of January 2015, this E. coli has not yet been created, though intermediate
variants that display increased virus resistance have been. The basic idea is scary, though
how much more effective would E. coli be if it were immune to bacteriophages? What
would the large-scale consequences be? Are we going to be up to our necks in self-replicating,
virus-immune E. coli? The specific safety issues and the worst-case scenarios in this vein
have been studied poorly, if at all. Articles on the topic are often filled with reassurances by
scientists that nothing will go wrong, while other scientists issue ominous warnings. George
Church recommends the following countermeasures 33:
If we engineer organisms to be resistant to all viruses, we must anticipate that
without viruses to hold them in check, these GEOs could take over ecosystems.
This might be handled by making engineered cells dependent on nutritional
components absent from natural environments. For example, we can delete the
genes required to make diaminopimelate, an organic compound that is essential for
bacterial cell walls (and hence bacterial survival) yet very rare in humans and our
environment.
Notice that it requires special effort to come up with an idea to make these
organisms less harmful. They would take over planetary ecosystems by default if released.
That is a scary thought. Church refers to the possibility of molecular kudzu taking over the
biosphere. The risk seems so clear that engineering such organisms is arguably a risk we
should not be willing to take. Where are the international treaties forbidding the creation of
such organisms? With one of the world's most prominent geneticists, George Church,
pioneering their creation, it seems extremely unlikely that the United States government will
do anything to restrict this research. We're just going to have to wait until these organisms
are created and see what happens.
The third variant of globe-threatening biological disaster is a binary bacteriological
weapon. For example, tuberculosis and AIDS are each chronic illnesses, but they are even
more deadly in combination. A similar concept would be a two-stage biological weapon. The
first stage would be a certain bacterium whose toxin imperceptibly spreads worldwide. The
second stage would be a bacterium that makes an additional toxin which is lethal in
combination with the first. This opens up a larger space of possibilities than a single-factor
biological attack. Such an attack could be possible, for instance, if a ubiquitous bacterium
such as E. coli is used as the template for a bioweapon. The selective nature of a two-stage
attack would also make it easier to target the attack at a certain nation or ethnic group.
The fourth variant of a biological global catastrophic risk is the dispersion in the air of
a considerable quantity of anthrax spores (or a similar agent). This variant does not
demand a self-replicating pathogenic agent; it may be inert but lethal. As previously
mentioned, anthrax contamination lasts a very long time, more than 50 years.
Contamination does not require a very large quantity of the pathogen. One gram of
anthrax, properly dispersed, can infect a whole building. To contaminate a substantial
portion of the planet would require thousands of tonnes of anthrax, and very effective
delivery methods (drones), but it can be done in theory. The amount of anthrax required is
not unattainable; the USSR had stockpiled 100 to 200 tons of anthrax at Kantubek on
Vozrozhdeniya Island until it was destroyed in 1992 34. If this amount of anthrax had been
released from its drums as a powder and stirred up by a large storm, it could have
contaminated an area hundreds of miles across for more than 50 years. At its height, the
Soviet biological weapons program employed over 50,000 people and produced 100 tons
of weaponized smallpox a year 35. The huge fermenting towers at Stepnogorsk in modern-day Kazakhstan could produce more than two tons a day, enough to wipe out an entire
city36. More advanced basic technology and manufacturing could improve this throughput
by an order of magnitude or more, and it could all be done in secret. Many in the US
intelligence community are skeptical that Russia has even shut down its biological
weapons program completely. When Ken Alibek defected from Russia in 1992, he said the
biological weapons program was still ongoing, despite the country being a signatory to the
Biological Weapons Convention forbidding the use or production of biological weapons.
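As a hedged back-of-envelope check on the "thousands of tonnes" figure, the sketch below scales up the one-gram-per-building claim; both inputs are assumptions for illustration, not dispersal engineering data:

# Scaling the text's "one gram per building" figure up to continental areas.
grams_per_km2 = 100.0            # assumed: ~1 g per ~10,000 m^2, a large building's footprint
target_area_km2 = 10_000_000     # assumed: a few percent of Earth's land surface
total_tonnes = grams_per_km2 * target_area_km2 / 1e6   # 1e6 grams per tonne
print(total_tonnes)              # 1000.0, i.e. the "thousands of tonnes" scale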
A fifth variant of a world-threatening bioweapon is an infectious agent changing
human behavior, such as by causing heightened aggression leading to war. A pathogen that
causes aggression or suppresses fear (as Toxoplasma does in rodents) can induce infected
animals to behave in ways that spread the infection to other animals. It is theoretically
possible to imagine an agent which would trigger a person to spread it to others, and for
this to continue until everyone is infected. This may actually be more intimidating than all
the previous scenarios, because a pathogen spread by aggressive carriers could be
spread much more totally and effectively than a typical pandemic, and spread to remote
settlements.
Toxoplasma gondii is the world's best known behavior-changing pathogen, a parasitic
protozoan estimated to infect up to a third of the world's population. One study of its
prevalence in the United States found the protozoan in 11 percent of women of childbearing age37 (15 to 44 years), while another study put the overall US prevalence rate at
22.5 percent38. The same study found a prevalence rate of 75% in El Salvador. Research
has linked toxoplasmosis with ADD 39, OCD40, and schizophrenia41, and numerous studies
have found a positive correlation between latent toxoplasmosis and suicidal behavior 42,43,44.
T. gondii has the capability to infect nearly all warm-blooded animals, though pigs, sheep,
and goats have the highest infection rates. The prevalence of T. gondii among domestic
cats is 30-40 percent45. Czech parasitologist Jaroslav Flegr found that men with
toxoplasmosis tend to act more suspiciously than controls, while infected women
show more warmth46. In addition, infected men show higher testosterone in the blood than
uninfected men, while infected women have lower levels of testosterone relative to their
counterparts.
Another obvious example of a behavior-changing pathogen is rabies. Pathogens like
rabies take one to three months to show symptoms, which would be enough time to spread
around much of the world. As an example, the Spanish flu spread extremely quickly, with
populations going from near-zero flu-related deaths to 25 deaths per thousand people per
week in just under a month.47
It is not just unusual pathogens like T. gondii and rabies that have been shown to
modify human behavior. One team of epidemiologists found that even exposure to the
common flu vaccine modifies human behavior. Their abstract 48:
Purpose: The purpose of this study was to test the hypothesis that exposure to a
directly transmitted human pathogen (the flu virus) increases human social behavior
presymptomatically. This hypothesis is grounded in empirical evidence that animals
infected with pathogens rarely behave like uninfected animals, and in evolutionary
theory as applied to infectious disease. Such behavioral changes have the potential
to increase parasite transmission and/or host solicitation of care.
Methods: We carried out a prospective, longitudinal study that followed participants
across a known point-source exposure to a form of influenza virus (immunizations),
and compared social behavior before and after exposure using each participant as
his/her own control.
Results: Human social behavior does, indeed, change with exposure. Compared to
the 48 hours pre-exposure, participants interacted with significantly more people,
and in significantly larger groups, during the 48 hours immediately post-exposure.
Conclusions: These results show that there is an immediate active behavioral
response to infection before the expected onset of symptoms or sickness behavior.
Although the adaptive significance of this finding awaits further investigation, we
anticipate it will advance ecological and evolutionary understanding of human-pathogen interactions, and will have implications for infectious disease epidemiology
and prevention.
This study shows that we are only beginning to uncover the complex nature of
human-pathogen interactions, and common diseases may have a more substantial
behavioral modification component than is commonly thought. That behavioral modification
component is likely to be adaptive from the perspective of the pathogen; that is, the
pathogen forces people to transmit it to others more effectively. With the right genetic
tweaks, could this property be magnified to an intense level? Only future genetic
engineering and experimentation will be able to determine that. It is definitely a global risk
worth considering, and is almost never discussed.
A sixth variant of the biological threat is an auto-catalytic molecule capable of
spreading beyond natural boundaries. One example would be prions, protein fragments
which cause mad cow disease and in some cases infect and kill humans. Prions only
spread through meat, however. Prions are unique among pathogens in that they
completely lack nucleic acids. What is even more scary is that prion diseases affect the
brain or neural tissue, are untreatable, and universally fatal. They consist of a misfolded
protein which enters the brain and causes healthy proteins to change to the misfolded
state, leading to destruction of the healthy brain and death. The prion particles are
extremely stable, meaning they are difficult to destroy through conventional chemical and
physical denaturation methods, complicating containment and disposal. Fungi also harbor
prion-like molecules, though they do not damage their hosts. Outbreaks of prion diseases
among cattle, called mad cow disease because of its unusual behavioral effects, can occur
when cows are given feed that includes ground-up cow brains.
Is there some kind of prion or other subviral protein particle which can spread quickly
between people through mucous membranes or lesions, which antibiotics and antiviral
drugs cannot fight? We may observe that the fact this has not happened in the course of
recorded history is evidence against its possibility, but synthetic biology will open up vast
new spaces of molecular engineering, and we will be able to create and experiment with
molecules inaccessible to natural evolutionary processes. Among these possibilities may
be a dangerous new prion or some other kind of auto-catalytic molecule that bypasses our
immune systems completely.
A seventh variant of biological disaster is the distribution throughout the biosphere of
some species of yeast or mold which is genetically engineered to release a deadly toxin
such as dioxin or botulinum. Since yeast and mold are ubiquitous, they would distribute the
toxin everywhere. It would probably be much easier to carry out this kind of bio-attack than
manufacturing dioxin in bulk and distributing it around the globe. Botulinum bacteria are
capable of producing eight types of toxins, of which there are antidotes for only seven. In
2013, scientists discovered a new type of botulism toxin and the gene that allows the cell to

produce it, and in a highly unusual move, withheld its genetic details from publication until
an antitoxin could be developed.49
One last biological danger is the creation of so-called artificial life, that is living
organisms constructed using DNA or an alternative set of amino acids but with deep
biochemical differences from other life forms. Such organisms might appear invincible to
the immune systems of modern organisms and threaten the integrity of the biosphere, as in
the E. coli example discussed earlier. The virus-immune bacteria described above would
just be the first step into a world of organisms that have a different number or different
types of fundamental codons than the rest of life. In this way, we will eventually create
organisms which can rightly be said to be outside the Tree of Life.
A possible countermeasure to all these threats would be to create a world immune
systemthat is, the worldwide dispersion of sets of genetically modified bacteria which
are capable of neutralizing dangerous pathogens. However, such a strategy would have to
be extremely complex, and may introduce other problems, such as autoimmune reactions
or other forms of loss of control. A worldwide immune system based on reprogrammable
nanotechnology would likely be more effective in the long term, but it would be even
more difficult to build than a biological global immune system. Alternatively, nanobots could
be developed which can be injected into the bloodstream of any human and can eliminate
any pathogen whatsoever50.
Ubiquitous Polyomaviruses: SV40, BKV, JCV
There is another case which may help to better illustrate biological risk, the case of
the asymptomatic simian virus 40, which was inadvertently spread to hundreds of millions
of people in the United States and the former Soviet Union through polio vaccines 51. The
case of SV40 is extremely controversial, and talking about it bothers some biologists
deeply, who call it a zombie scare and attack any discussion about it as scaremongering.
Regardless of these attacks, it is an objective fact that over a hundred million people were
infected with this virus in the 1950s and 1960s, when polio vaccine stocks were
contaminated with it. One study found that 4 percent of healthy American subjects have it 52.
The virus was present in humans even before the poliovirus vaccine was invented 53, and
appears to be endemic in the human population, though the primary reservoir is unknown.
The controversy over SV40 primarily revolves around whether it causes cancer in
humans. We find the studies that argue there is no link between SV40 and cancer in
humans to be convincing54. For us, a far more interesting topic is recognizing that there are
asymptomatic viruses which are prevalent in the population. Studies have found that SV40
antigen prevalence (a key indicator of the presence of the virus) in England is a few
percent55. There are other viruses which are far more common; BK virus, from the same
family of polyomaviruses, has a prevalence between 65 percent and 90 percent in
England, depending on age56. Prevalence is highest for those under age 40 and declines
somewhat in old age. Another virus, JCV, starts off at just a 10 percent prevalence
among English infants, jumping to 20 percent prevalence in teenagers and escalating to 50
percent prevalence among seniors. Studies have confirmed that BKV and JCV are both
horizontally and vertically transmissible in humans 57,58. Despite their ubiquity, practically
nothing is known about the method of transmission.
BKV, JCV, and SV40 are all similar viruses, and cause tumors in monkeys and
rodents. The notion that there are ubiquitous viruses which cause cancer in animals and
about which close to nothing is known of the method of transmission is rather important,
and scary. It raises the possibility of a similar virus being genetically engineered in the
future, one that actually does cause cancer in humans (like SV40 has been incorrectly
claimed to), possibly at a high incidence. If the virus does not respond to antiviral drugs
(many viruses don't), it might be able to spread in an (unknown) fashion similar to the BK
virus, achieving 80 percent incidence in the population and causing a high incidence of
fatal cancer. It would be difficult for the human species to shake loose from the virus,
since it would be passed from mothers to children. The only way of perpetuating the human
species in such a circumstance would be for only women and men known to be uninfected to
produce offspring. Meanwhile, normal horizontal infection would keep occurring. Keeping
civilization going in such a circumstance, especially as infrastructure collapsed due to a
lack of population, could be very difficult.
Plural biological strike
Though it may be possible to stop the spread of one pandemic through the effective
use of vaccination and antibiotics, an epidemic caused by several dozen species of diverse
viruses and bacteria, released simultaneously in many different parts of the globe, would
produce a much greater challenge. It would be difficult to assemble all the appropriate
antibiotics and vaccines and administer them all in time. If a single virus with 50 percent
lethality would simply be a huge catastrophe, 30 diverse viruses and bacteria each with 50
percent lethality would mean the guaranteed destruction of all who had not hidden in bunkers.
Alternatively, about 100 different organisms with 10 percent lethality could have a similar
outcome.
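The arithmetic behind those figures, assuming each agent kills independently of the others (a simplification that, if anything, understates the danger of interacting infections), is sketched below:

# Probability that one unsheltered person survives every agent in the strike.
p_survive_30_agents_50pct = (1 - 0.5) ** 30     # roughly 9e-10
p_survive_100_agents_10pct = (1 - 0.1) ** 100   # roughly 3e-5
print(p_survive_30_agents_50pct, p_survive_100_agents_10pct)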
A plural strike could be the most powerful means of conducting biological war, or setting
up a Doomsday weapon. But a plural strike could also occur inadvertently through the
release of pathogenic agents by different terrorists or competition among biohackers in a
wartime scenario. Even nonlethal agents, in combination, could weaken a person's immune
system so greatly that further survival becomes improbable.
The possibility of the plural application of biological weapons is one of the most
significant factors of global risk.
Biological delivery systems
To cause mass mayhem and damage, a biological weapon should not only be deadly,
but also infectious and rapidly spreading. Genetic engineering and synthetic biology offer
possibilities not only for the creation of a lethal weapon, but also new methods of delivery.
It does not take great imagination to picture a genetically modified
malarial mosquito which can live in any environment and spread with great speed around
the planet, infecting everyone with a certain bio-agent. Fleas, rats, cats, and dogs are all
nearly global animals which would be excellent vectors or reservoirs with which to infect
the human population.
Probability of application of biological weapons and its distribution in time
Taking everything into account, we estimate there is a probability of 10% that
biotechnologies will lead to human extinction during the 21st century. This estimate is based
on the assumption that there will inevitably be a wide circulation of cheap tabletop devices
allowing the simple creation of various biological agents, and that gene synthesizers and
bioprinting machines will be circulated on a scale comparable to computers today.
The properties a dangerous bioprinter (cheap mini-laboratory) would have are:
1) inevitability of appearance,
2) low cost,
3) wide prevalence,
4) uncontrollable by the authorities,
5) ability to create essentially new bio-agents,
6) simplicity of application,
7) a variety of created objects and organisms,
8) appeal as a device for manufacturing bioweapons and drugs.


A device meeting these requirements could consist of a typical laptop, a program (distributed in the manner of pirated software) with a library of initial elements, and a reliable gene synthesis or
simple genetic engineering device, like CRISPR. These components already exist today,
they just need to be refined, improved, and made cheaper. Such a set might be called a
bioprinter. Drug-manufacturing communities could be the distribution channel for the
complete set. As soon as these critical components come together, the demand rises, and
the system shows itself to have many practical uses (biofuels, pharmaceuticals,
psychedelics, novelties like glowing plants), it will quickly become difficult to control. The
future of biopiracy could become similar to the present of software piracy: ubiquitous and
rampant.
A mature bioprinter would allow the user to synthesize self-replicating cells with nearly
arbitrary properties. Anyone could synthesize cells that pumped out psilocybin (magic
mushrooms), THC (marijuana), cocaine precursors, spider silk, mother of pearl, or botulism
toxin. You could print out cells that produce life-saving medicines, produce glowing moss,
or print your own foods. Eventually, a very wide range of biomaterials and biological
structures could be created using this system, including some very deadly and selfreplicating ones.
The probability of global biological catastrophe given widely available bioprinters will
increase very quickly as such devices are perfected and distributed. We can describe a
probability curve which now has a small but non-zero value, and after a while escalates
sharply. What is interesting is not the exact form of the curve, but the time when it starts
to grow sharply.
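Purely as an illustration of the kind of curve meant here, a toy logistic model of the annual probability is sketched below; the midpoint year, steepness, and ceiling are arbitrary placeholders, not estimates:

import math

def annual_bio_risk(year, midpoint=2025.0, steepness=0.5, ceiling=0.05):
    # Toy logistic curve: small but non-zero now, escalating sharply near the midpoint.
    return ceiling / (1.0 + math.exp(-steepness * (year - midpoint)))

for y in (2016, 2020, 2025, 2030):
    print(y, round(annual_bio_risk(y), 4))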
The authors estimate that this major upswing will take place sometime between 2020
and 2030. (Martin Rees estimates that a major terrorist bio attack with more than a million
casualties will occur by 2020, and he has put a wager of $1,000 behind this prediction 59.)
The cost of gene sequencers and synthesizers continues to fall, and will eventually make a
bioprinter accessible. The genomes of many new organisms are sequenced every year,
and there is an explosion in knowledge about the operative principles of organisms at the
genetic and biochemical level. A much cheaper method of gene synthesis is also needed;
current methods involve expensive reagents.
Soon there will be software programs that can quickly model the consequences of a
wide variety of possible genetic interventions to common lab organisms. This software,
along with improved hardware, will allow the creation of the bioprinter described above.
That growth of probability density around 2020-2030 does not mean that there aren't
already terrorists who could develop dangerous viruses in a laboratory today.
As stated previously, based on what we currently know, the misuse of biotechnologies
appears to be number one on the current list of plausible global catastrophes (especially if
concurrent with a nuclear war), at least for the next 20-30 years. This risk would be
substantially lowered in the following scenarios:
1) Biological attack could be survived in underground shelters. The United States and
other countries could substantially upgrade their shelter infrastructure to preserve a major
percentage of the affected population after biological warfare or terrorism. There are
bunkers for the entire population of Switzerland, for instance, though they are not
adequately stocked.
2) The first serious catastrophe connected with a leak of dangerous pathogens results
in draconian control measures, enough to prevent the creation or distribution of dangerous
bioprinters. This would require a government substantially more authoritarian than what we
have now.
3) AI and molecular manufacturing are developed first, and serve as a shield against
bioweapons before they are developed. (Highly unlikely as these technologies seem much
more difficult than bioprinters.)
4) Nuclear war or other disaster interrupts the development of biotechnologies.
Obviously, depending on a disaster to mitigate another disaster is not the best strategy.
5) It is possible that biotechnologies will allow us to create something like a universal
vaccine or artificial immune system before biological minilaboratories become a serious
problem.
Unfortunately, there is an unpleasant feedback loop connected with protection against
biological weapons. For the best protection we should prepare vaccines and
countermeasures against as many viruses and bacteria as possible, but the more experts
working on that task, the greater the potential that one of them goes rogue and becomes a
terrorist or troublemaker. Another risk besides targeted, weaponized pathogens is the
probability of creating biological green goo, that is, a universally omnivorous
microorganism capable of quickly digesting a wide variety of organic matter in the
biosphere. For this purpose, it would be necessary to create one microorganism combining
properties which are available separately in different microorganisms: the ability to
photosynthesize, to dissolve and acquire nutrients, to breed with the speed of E. coli, and so
on.
Usually, such a scenario is not considered because it is assumed that such organisms
would have already evolved, but the design space accessible through deliberate biodesign
is much, much larger than through evolution and natural selection alone. Such an
organism that ends up as a bioweapon might even be designed for some other purpose,
such as cleaning up an oil spill. There is also a view that enhancement of crops for farming
could eventually lead to the creation of a superweed that outcompetes a majority of
natural plants in their biological niches.
In conclusion for this section, there are many imaginable ways of applying
biotechnology to the detriment of mankind, and we've only scratched the surface here. The
space of unknown risks is much larger than has been explicitly enumerated. Though it is
possible to limit the damage of each separate application of biotechnology, low cost,
inherent privacy, and prevalence of these technologies make their ill-intentioned
application practically inevitable. In addition, many biological risks are not obvious and will
emerge over time as biotechnology develops.
Already, biological weapons are the cheapest way of causing mass death: it takes a
fraction of a cent's worth of botulinum toxin to kill someone. Manufacturing modern bioweapons like
anthrax for military purposes necessitates large protected laboratories and skilled
personnel, but it is only a matter of time until this changes. Killing can be even cheaper if
we consider the ability of a pathogenic agent to self-replicate.
It is often said that biological weapons are not effective in military applications.
However, they may have non-conventional uses: as a weapon for a stealth strike targeting
the enemy's rear, or as a last-ditch defensive weapon to infect invading armies.
Plausible Motives for Use of Biological Weapons
The issue of motive with regard to unleashing extremely destructive and potentially
uncontrollable agents will be addressed in further detail in section two of this book, but it's
worth addressing here specifically with regard to biological weapons.
There are two likely scenarios in which biological weapons might be used: warfare
and terrorism. As mentioned above, biological weapons could be used non-conventionally
in warfare, either as a stealth-strike in a time of great military need, a last-ditch weapon to
wipe out invaders, or as a scorched earth tactic.
There are two prominent historical examples where either a stealth strike or
aggressive scorched earth tactics were considered. The first is Hitler's so-called Nero
Decree, named after the Roman emperor Nero who allegedly engineered the Great Fire of
Rome in 64 AD. Hitler ordered that anything [...] of value be destroyed in German
territory, to prevent its use by the encroaching Allies 60. The relevant section of the decree is
as follows:
It is a mistake to think that transport and communication facilities, industrial
establishments and supply depots, which have not been destroyed, or have only
been temporarily put out of action, can be used again for our own ends when the
lost territory has been recovered. The enemy will leave us nothing but scorched
earth when he withdraws, without paying the slightest regard to the population. I
therefore order:
1) All military transport and communication facilities, industrial establishments and
supply depots, as well as anything else of value within Reich territory, which could in
any way be used by the enemy immediately or within the foreseeable future for the
prosecution of the war, will be destroyed.
The Nazi Minister of Armaments and War Production, Albert Speer, who had been
assigned responsibility for carrying out the decree, secretly ignored it, and Hitler committed
suicide 41 days later, making it impossible to ensure his orders were carried out. A
summons of Nazi party leaders in Dusseldorf in the closing weeks of the war, to be posted
throughout the city, called for the evacuation of all inhabitants and said, "Let the enemy
march into a burned out, deserted city!"61 Around the same time, many sixteen-year-olds
and old men were also recruited in a great levy as a last-ditch effort to fight the Allies.

Back to the so-called Nero Decree, the orders for destruction were wide in extent. The
details are recalled in Speer's memoirs62:
As if to dramatize what lay in store for Germany after Hitler's command, immediately
after this discussion a teletype message came from the Chief of Transportation.
Dated March 29, 1945, it read: "Aim is creation of a transportation wasteland in
abandoned territory... Shortage of explosives demands resourceful utilization of all
possibilities for producing lasting destruction." Included in the list of facilities slated
for destruction were, once again, all types of bridges, tracks, roundhouses, all
technical installations in the freight depots, workshop equipment, and sluices and
locks in our canals. Along with this, simultaneously all locomotives, passenger cars,
freight cars, cargo vessels, and barges were to be completely destroyed and the
canals and rivers blocked by sinking ships in them. Every type of ammunition was to
be employed for this task. If such explosives were not available, fires were to be set
and important parts smashed. Only the technician can grasp the extent of the
calamity that execution of this order would have brought upon Germany. The
instructions were also prime evidence of how a general order of Hitler's was
translated into terrifyingly thorough terms.
Speer called the order for the destruction of all resources within the Reich a
"death sentence for the German people":
The consequences would have been inconceivable: For an indefinite period
there would have been no electricity, no gas, no pure water, no coal, no
transportation. All railroad facilities, canals, locks, docks, ships, and locomotives
destroyed. Even where industry had not been demolished, it could not have
produced anything for lack of electricity, gas, and water. No storage facilities, no
telephone communications; in short, a country thrown back into the Middle Ages.
This shows the degree to which an authoritarian leader may choose to ruin his own
country to deny its resources to a victorious enemy, even at great detriment to the future of
the homeland. If these orders had been carried out, they would have caused many millions of
unnecessary deaths. To put it bluntly, Hitler was willing to sacrifice millions of German lives
to express his contempt for the enemy.
Now, imagine Hitler had access to a Doomsday virus which could kill off millions of
the invading Allied soldiers but only at the cost of millions of native German lives. Would he
have used it? Given the reluctance of his subordinates to carry out the scorched earth
policy, it might not have worked, but what if he were a dictator in the future, around the year
2040, and had access to an army of drones which could disperse the pathogen with a
single push of a button, no cooperation from subordinates required? In that case, we can
imagine that it is well within possibility. Perhaps Hitler would rather have seen the
destruction of Europe than suffer the failure of his plan to dominate it. Are there not
dictators in the world today who might do the same thing? Will there not also be such
dictators in the future? It certainly seems there will be. With an arsenal of biological weapons and dispersal
drones at their disposal, combined with a fevered and angry state of mind, such an
outcome is certainly imaginable. From the initial hot zone, the disease could spread
worldwide.
The second example of the potential use of non-conventional weapons in warfare on
a mass scale was the case of Churchill wanting to use poison gas on German cities in
1944 to counter the continuous rocket attacks hitting British cities. He issued a
memorandum telling his military chiefs to "think very seriously over this question of using
poison gas", arguing that "it is absurd to consider morality on this topic when everybody used it in
the last war without a word of complaint", and that 63:
I should be prepared to do anything [Churchill's emphasis] that would hit the enemy
in a murderous place. I may certainly have to ask you to support me in using poison
gas. We could drench the cities of the Ruhr and many other cities in Germany... We
could stop all work at the flying bombs' starting points... and if we do it, let us do it
one hundred per cent.
Winston Churchill, 'Most Secret' Prime Minister's Personal Minute to the Chiefs of
Staff, 6 July 1944

There have also been claims that Churchill advocated the use of anthrax, but these
have since been shown to be false. The key point is that Churchill was willing to do anything to
damage the enemy, meaning that if he had had sufficient anthrax, it seems likely he would
have advocated its use. Such a mass distribution of anthrax would have contaminated wide
areas of Germany for decades to come, as cleanup would have had a prohibitive cost unless
the weapon were used only in very limited areas. If anthrax had been used, many cities would
have had to be completely abandoned and fenced off for generations. Anthrax has the
advantage that it is not contagious in the way that flu is; you can only catch it by directly
inhaling spores, and it does not spread from person to person. Therefore, long-distance
bombing would minimize the risk of collateral damage, and increase the incentive for its use.
Many view WWII as the most intense war in history, which it certainly was, but they
also give it an aura of mystery and unrepeatability as if it were a unique event. That is not
necessarily the case. We are still at risk of a World War today. The Mutually Assured
Destruction (MAD) doctrine is false; it certainly would be possible to disable a great many
nuclear silos with submarine-launched ballistic missiles, and thereby greatly ameliorate the
damage of a counterstrike. Leaning on MAD as a guardian angel to protect from the threat
of another World War is not very prudent. In a total war, it is plausible that all ethics would
fly out the window, as it very nearly did in World War II. When the big guns come out, they
could very well include next-generation, genetically engineered biological weapons
alongside nuclear missiles and conventional arms. The military scenarios outlined above
make that imaginable.
The third major category of plausible usage of a Doomsday virus is, of course,
through a terrorist incident. The key threat is fundamentalist Islam, and Hillary Clinton was
previously cited in this chapter as noting that Al Qaeda was specifically looking for
specialists in the area. Fundamentalist Muslims might be quite happy with wiping out
billions of people in North America, South America, and Eurasia as long as, in their eyes, it
gave Islam a better foothold in the world. In the not-too-distant future, it may be possible to
design viruses that selectively target specific races, which would help enable this.
Ethnic bioweapons are a well-established and well-recognized area of risk 64. In
ancient times, the rulers of Nepal maintained the malaria-infected Terai forest as a natural
defense against invaders from the Ganges south, as the natives of the forest had innate
resistance against malaria while the invaders did not. Certain populations living around
specific microbes for a long period of time evolve defenses against them which other
groups lack. Microbes originating in the crowded cities of Europe killed millions of natives in
the Americas when the European explorers first arrived, far more than could be killed with
guns or swords alone. The native population simply lacked immune defenses against these
germs.
In 1997, US Secretary of Defense William Cohen referred to the concept of
ethnic-specific bioweapons, saying 65, "Alvin Toffler has written about this in terms of some
scientists in their laboratories trying to devise certain types of pathogens that would be
ethnic specific so that they could just eliminate certain ethnic groups and races." A 1998
interview with a UK defense intelligence official, Dr. Christopher Davis, refers to ethnic
bioweapons66. Davis said, "...we also have the possibility of targeting specific ethnic groups
of specific genetic subtypes, if you like, of the population, that you can indiscriminately, in a
way, spray something, but it only kills the certain people that this material is designed to
find and attack." In 2005, an official announcement of the International Committee of the
Red Cross said67, "The potential to target a particular ethnic group with a biological agent is
probably not far off. These scenarios are not the product of the ICRC's imagination but
have either occurred or been identified by countless independent and governmental
experts."
Even more highly specific bioweapons than ethnic bioweapons may be
possible. In 2012, an article in The Atlantic covered various advances in synthetic biology
and concluded that a weapon that, for instance, only caused a mild flu for the general
population but was fatal to the President of the United States could be possible in the
future (timeframe unspecified)68. None of the authors of the article were biologists, so the
article should be taken with a grain of salt, but there are enough references to the
possibility of gene-specific bioweapons from more solid sources, such as by the Red Cross
and British Medical Association, that the prospect is worth taking seriously. In our literature
search, we could not find an in-depth description of the precise possible mechanisms or a
biochemical analysis of such a hypothetical pathogen. Microbes that affect one group more
than others are common in nature, but a pathogen that exclusively infects one group and
infects close to zero of another group is very rare, mostly because worldwide travel has
spread pathogens universally and produced immune resistance in many to all of them.
Consider if there were a massive global war and various ethnicities unleashed
bioweapons which were all targeted to kill one another, ultimately resulting in omnicide. Or,
perhaps a virus designed to attack one ethnicity might mutate to affect other ethnicities,
perhaps even infecting the ethnic group of the bioweapon's creators. If biological
minilaboratories with cheap gene synthesis capability become universal and lack sufficient
safeguards or are hackable, this is a real possibility. The chance of such an attack
backfiring should be taken into account.
Aside from interethnic or international warfare and religious terrorism, there is also
the possibility of even more abstract and sinister ideological threats, such as Doomsday
cults who desire the destruction of the entire population. Heaven's Gate, the suicide cult that
committed mass suicide in California in 1997, anticipated the cleansing of the Earth by
aliens who had ascended to a level higher than human. Perhaps, if they could have
deliberately carried out such a cleansing themselves, they would have done it. There are
many examples of such cultists who get themselves into a state of mind where the body is
just a vessel, so engaging in mass killing would not be construed as actually doing wrong,
but rather as freeing their fellow men from their vessels and allowing them to ascend to a
higher level in a cosmic body. People who are not cultists (most of us, presumably) flinch
away from considering such worldviews, but they do exist among many people, and do
threaten us whether we acknowledge their existence or not.
This chapter has been a lengthy one, and we've covered many of the basic threats
from biological weapons, and the motivations which might give rise to their use. Overall,
this analysis just scratches the surface, and a more detailed, up-to-date, book-length
analysis is required. A very in-depth analysis will be needed to inform scientists and
policymakers of the true extent of the dangers and allow them to chart a course that
minimizes aggregate risk.

The map of biorisks

The map of global catastrophic risks connected with biological weapons and
genetic engineering
TL;DR: Biorisks could result in extinction because of a multipandemic in the near future, and their risk is of the same order of
magnitude as the risk of UFAI. A lot of biorisks exist; they are cheap and could materialize soon.

It may be surprising that the number of published studies about the risks of a biological global
catastrophe is much smaller than the number of papers about the risks of self-improving AI. (One
exception here is the "Strategic Terrorism" research paper by the former chief technology officer of
Microsoft.)
It can't be explained by the fact that biorisks have a smaller probability (that will not be known
until Bostrom writes the book "Supervirus"); we simply don't know it until a lot of research has
been done.
Also, biorisks are closer in time than AI risks, and because of that they shadow AI risks, lowering
the probability that extinction will happen by means of UFAI, because it could happen
earlier by means of bioweapons (e.g. if UFAI risk is 0.9, but the chance that we die from
bioweapons before its creation is 0.8, then the actual AI risk is 0.18). So studying biorisks
may be more urgent than studying AI risks.
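A minimal sketch of that discounting arithmetic (0.9 and 0.8 are the illustrative numbers from the example above, not estimates):

# A later risk only matters if we survive the earlier one.
p_ufai = 0.9            # assumed: P(UFAI causes extinction | we survive until AI is created)
p_bio_before_ai = 0.8   # assumed: P(bioweapons cause extinction before AI is created)
p_ai_actual = p_ufai * (1 - p_bio_before_ai)
print(p_ai_actual)      # 0.18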
There is no technical obstacle to creating a new flu virus that could kill a large part of the human
population. And the idea of a multipandemic - the possibility of releasing 100 different
agents simultaneously - tells us that biorisk could have arbitrarily high global lethality. Most
of the bad things in this map could be created in the next 5-10 years, and no improbable insights
are needed. Biorisks are also very cheap to produce, and a small civic or personal biolab
could be used to create them.
Maybe research estimating the probability of human extinction from biorisks has been done
secretly? I am sure that a lot of analysis of biorisks exists in secret. But this means that it
does not exist in public, and scientists from other domains of knowledge can't independently
verify it and incorporate it into a broader picture of risks. Secrecy here may be useful if
it concerns concrete facts about how to create a dangerous virus. (I was surprised by the
effectiveness with which the Ebola epidemic was stopped after the decision to do so was
made, so maybe I should not underestimate government knowledge on the topic.)
I had concerns about whether I should publish this map. I am not a biologist, and the chances that
I would find really dangerous information are small. But what if I inspire bioterrorists to create
bioweapons? In any case, we already have a lot of movies providing such inspiration.
So I self-censored one idea that may be too dangerous to publish and put a black box in its place. I
also have a section of prevention methods in the lower part of the map. All ideas in the map
may be found in Wikipedia or other open sources.
The goal of this map is to show the importance of risks connected with new kinds of biological
weapons which could be created if all recent advances in bioscience were used for harm.
The map shows what we should be afraid of and try to control. So it is a map of the possible
future development of the field of biorisks.
Not every biocatastrophe will result in extinction; extinction is in the fat tail of the distribution. But smaller
catastrophes may delay other good things and widen our window of vulnerability. If
protective measures are developed at the same speed as the possible risks, we are mostly
safe. If the overall morality of bioscientists is high, we are most likely safe too - no one will
conduct dangerous experiments.
Timeline: Biorisks are growing at least exponentially, with the speed of Moore's law in biology.
After AI is created and used for global governance and control, biorisks will probably
end. This means that the last years before AI creation will be the most dangerous from the
point of view of biorisks.
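A minimal sketch of what growing "with the speed of Moore's law" implies; the two-year doubling time is an assumption for illustration only:

# Exponential growth of bio-capability (and thus accessible risk) under an assumed doubling time.
doubling_time_years = 2.0
for years_from_now in (2, 6, 10, 20):
    growth_factor = 2 ** (years_from_now / doubling_time_years)
    print(years_from_now, growth_factor)    # 2x, 8x, 32x, 1024x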
The first part of the map presents biological organisms that could be genetically edited for
global lethality, and each box presents one scenario of a global catastrophe. While many
boxes are similar to existing bioweapons, they are not the same, as few known
bioweapons could result in a large-scale pandemic (except smallpox and flu). The most probable
biorisks are outlined in red in the map. And the real one will probably not be from the map,
as the world of biology is very large and I can't cover it all.
The map is provided with links which are clickable in the pdf, which is here: http://immortalityroadmap.com/biorisk.pdf

References
1. Ray Kurzweil and Bill Joy. Recipe for Destruction. The New York Times, October
17, 2005.
2. Linster M, van Boheemen S, de Graaf M, et al. Identification, Characterization and
Natural Selection of Mutations Driving Airborne Transmission of H5N1 Virus. Cell.
2014.
3. Convention on the Prohibition of the Development, Production and Stockpiling of
Bacteriological (Biological) and Toxin Weapons and on Their Destruction. Signed at
London, Moscow and Washington on 10 April 1972. Entered into force on 26 March
1975.
4. Alibek, K. and S. Handelman. Biohazard: The Chilling True Story of the Largest
Covert Biological Weapons Program in the World - Told from Inside by the Man Who
Ran it. 1999.
5. Huxsoll DL, Parrott CD, Patrick WC III. "Medicine in defense against biological
warfare". JAMA. 1989;265:677-679.
6. Takafuji ET, Russell PK. "Military immunizations: Past, present and future
prospects". Infect Dis Clin North Am. 1990;4:143-157.
7. Chemical and Biological Weapons: Possession and Programs Past and Present,
James Martin Center for Nonproliferation Studies, Middlebury College, April 9, 2002.
8. Introduction to Biological Weapons, Federation of American Scientists, official site.

9. Anthrax Q&A: Signs and Symptoms. Emergency Preparedness and Response. Centers for Disease Control and Prevention. 2003.
10. Scott Shane. Cleanup of anthrax will cost hundreds of millions of dollars.
December 18, 2012. Baltimore Sun.
11. Jeanne Guillemin. Anthrax: The Investigation of a Deadly Outbreak. 1999. University
of California Press.
12. Pearson, Dr. Graham S. (October 1990) Gruinard Island Returns to Civil Use The
ASA Newsletter. Applied Science and Analysis. Inc. Retrieved 12 January 2008.
13. John S. Foster et al. Report of the Commission to Assess the Threat to the United
States from Electromagnetic Pulse (EMP) Attack. 2004. Congressional report.
14. EMP Could Leave '9 Out of 10 Americans Dead' WND.com. May 3, 2010.
15. Landing page, SyntheticBiology.org.
16. Craig Venter et al. Creation of a Bacterial Cell Controlled by a Chemically
Synthesized Genome. Science 329 (5987): 5256.
17. JCVI press release. IBEA Researchers Make Significant Advance in Methodology
Toward Goal of a Synthetic Genome. November 3, 2010.
18. Andrew Pollack. Scientists Create a Live Polio Virus. The New York Times. July
12, 2002.
19. Sarah Post. For the first time, scientists create a virus using only its genome
sequence. GenomeWeb. July 23, 2012.
20. How Scientists Made 'Artificial Life'. BBC. May 20, 2010.
21. BWC calls on more awareness about Biosecurity. Kuwait News Agency. October
11, 2011.
22. Linster op cit.
23. Denise Grady and Donald G. McNeil Jr. Debate Persists on Deadly Flu Made
Airborne. The New York Times. December 26, 2011.
24. Nell Greenfieldboyce. Scientists Publish Recipe For Making Bird Flu More
Contagious. NPR. April 10, 2014.
25. Thomas Inglesby, Anita Cicero and D.A. Henderson. Benefits of H5N1 Research Do
Not Outweigh the Risks. Milwaukee-Wisconsin Journal Sentinel.
26. WHO concerned that new H5N1 influenza research could undermine the 2011
Pandemic Influenza Preparedness Framework. World Health Organization
statement. December 30, 2011.
27. Donald G. McNeil Jr. Bird Flu Paper is Published After Debate. The New York
Times. June 21, 2012.
28. Frank Jordans, Associated Press. Clinton warns of bioweapon threat from gene
tech. Physorg. December 7, 2011.
29. Deborah MacKenzie. Five easy mutations to make bird flu a lethal pandemic. New
Scientist. September 26, 2011.
30. Philip Bethge and Johann Grolle. Interview with George Church: Can Neanderthals
Be Brought Back from the Dead? Spiegel Online International. January 18, 2013.
31. David Brin quoting Chris Phoenix. The Old and New Versions of Culture War.
Contrary Brin. May 8, 2009. Originally from Center for Responsible Nanotechnology
blog.
32. George Church. Safeguarding Biology. Seed magazine. February 2, 2009.
33. Church op. cit.
34. Michael R. Edelstein, Astrid Cerny, Abror Gadaev, eds. Disaster by Design: the Aral
Sea and its Lessons for Sustainability. 2012. Emerald.
35. Ken Alibek and Stephen Handelman. Biohazard: The Chilling True Story of the
Largest Covert Biological Weapons Program in the World--Told from Inside by the
Man Who Ran It. 2000. Delta.
36. Michael Dobbs. Program to Halt Bioweapons Work Assailed. Washington Post.
September 12, 2002.
37. Jones JL, Kruszon-Moran D, Sanders-Lewis K, Wilson M (2007). Toxoplasma
gondii infection in the United States, 1999-2004, decline from the prior decade. Am
J Trop Med Hyg 77 (3): 405-10.
38. Montoya J, Liesenfeld O (2004). Toxoplasmosis. Lancet 363 (9425): 1965-76.
39. Kathleen McAuliffe. How Your Cat is Making You Crazy. The Atlantic. February 6,
2012.
40. McAuliffe op cit.
41. Torrey, EF; Bartko, JJ; Yolken, RH (May 2012). Toxoplasma gondii and other risk
factors for schizophrenia: an update. Schizophrenia Bulletin 38 (3): 642-7.
42. Zhang, Yuanfen; Träskman-Bendz, Lil; Janelidze, Shorena; Langenberg, Patricia; Saleh, Ahmed; Constantine, Niel; Okusaga, Olaoluwa; Bay-Richter, Cecilie; Brundin, Lena; Postolache, Teodor T. (Aug 2012). Toxoplasma gondii immunoglobulin G antibodies and nonfatal suicidal self-directed violence. J Clin Psychiatry 73 (8): 1069-76.
43. Mortensen, Preben Bo; Norgaard-Pedersen, Bent; Postolache, Teodor T. (2012). Toxoplasma gondii Infection and Self-directed Violence in Mothers. Archives of General Psychiatry 69 (11): 1.

44. Ling, VJ; Lester, D; Mortensen, PB; Langenberg, PW; Postolache, TT (2011). Toxoplasma gondii Seropositivity and Suicide Rates in Women. The Journal of Nervous and Mental Disease 199 (7): 440-444.
45. Elmore, SA; Jones, JL; Conrad, PA; Patton, S; Lindsay, DS; Dubey, JP (April 2010).
"Toxoplasma gondii: epidemiology, feline clinical aspects, and prevention". Trends in
parasitology 26 (4): 190-6.
46. McAuliffe op cit.
47. J. K. Taubenberger and D. M. Morens. 1918 Influenza: the Mother of All
Pandemics. Cdc.gov.
48. Reiber C, Shattuck EC, Fiore S, Alperin P, Davis V, Moore J. Change in human
social behavior in response to a common vaccine. Annals of Epidemiology. 2010
Oct;20(10):729-33.
49. Debora MacKenzie. New botox super-toxin has its details censored. New Scientist.
October 14, 2013.
50. J. Storrs Hall. Nanofuture: What's Next for Nanotechnology. 2005. Prometheus
Books.
51. Robert L. Garcea and Michael J. Imperiale. Simian Virus 40 Infection of Humans.
Journal of Virology. May 2003 vol. 77 no. 9 5039-5045.
52. Rui-Mei Li, Mary H. Branton, Somsak Tanawattanacharoen, Ronald A. Falk, J.
Charles Jennette and Jeffrey B. Kopp. Molecular Identification of SV40 Infection in
Human Subjects and Possible Association with Kidney Disease. Journal of the
American Society of Nephrology, September 1, 2002 vol. 13 no.9, 2320-2330.
53. Garcea op. cit.
54. NIH/National Cancer Institute. Studies Find No Evidence That Simian Virus 40 Is
Related To Human Cancer. ScienceDaily. August 25, 2004.

55. Knowles WA, Pipkin P, Andrews N, Vyse A, Minor P, Brown DWG, Miller E. Population-based study of antibody to the human polyomaviruses BKV and JCV and the simian polyomavirus SV40. Journal of Medical Virology 2003; 71: 115-23.
56. Knowles op. cit.

57. T. Kitamura, T. Kunitake, J. Guo, T. Tominaga, K. Kawabe and Y. Yogo. Transmission of the human polyomavirus JC virus occurs both within the family and outside the family. Journal of Clinical Microbiology. 1994, 32(10):2359.
58. Boldorini R., Allegrini S., Miglio U., Paganotti A., Cocca N., Zaffaroni M., Riboni F.,
Monga G., Viscidi R. Serological evidence of vertical transmission of JC and BK
polyomaviruses in humans. Journal of General Virology, 2011 May;92(Pt 5):1044-50.
59. Martin Rees. By 2020, bioterror or bioerror will lead to one million casualties in a
single event. LongBets.org.
60. Albert Speer. Inside the Third Reich [Translated by Richard and Clara Winston].
1970. New York and Toronto: Macmillan
61. Speer op. cit.
62. Speer op. cit.
63. Paxman, Jeremy; Harris, Robert (2002-08-06) [1982]. "The War That Never Was". A
higher form of killing: the secret history of chemical and biological warfare. p. 128.
64. Malcolm Dando. Biotechnology, Weapons and Humanity II. 2004. British Medical
Association report. BMA Professional Division Publications.
65. William Cohen. Terrorism, Weapons of Mass Destruction, and U.S. Strategy. 1997.
Sam Nunn Policy Forum, University of Georgia.
66. Interview of Dr Christopher Davis, UK Defence Intelligence Staff, Plague War, Frontline,
PBS. October 1998.
67. Preventing the use of biological and chemical weapons: 80 years on, Official
Statement by Jacques Forster, vice-president of the ICRC, 10-06-2005.
68. Andrew Hessel, Marc Goodman, Steven Kotler. Hacking the President's DNA. The Atlantic, 2012.

Chapter 13. Superdrug

Biotechnology and cognitive science could eventually lead to the production of
superdrugs, or soma. Consider a drug that has a 100% addiction rate and makes its
addicts completely dependent on their supplier. They might quit their jobs and go to work in
fields or factories just to make more of the drug, a scenario portrayed in Philip K. Dick's
novel A Scanner Darkly (1977). If the drug did not incapacitate people, and they retained
all their vigor and cognitive capabilities (or even enhanced them), entire societies and
nations could be based around addiction to it. Such societies could even launch wars on
rival nations and force them into addiction until the entire world is consumed by the drug. If
the drug had long-term lethal or anti-reproductive effects, it could lead to the end of
civilization or humanity itself.
Instead of a literal drug, the drug might also be a computer game or wearable
computer. It could even be a pill filled with microbots which are able to cross the blood-brain barrier (perhaps by creating a small perforation) and create a direct brain-to-computer interface. This was the drug Accela portrayed in the Japanese animation Serial
Experiments Lain (1998). In the story, the drug acted to multiply the brain's operational
capacity by 2 to 12 times, speeding it up and making the outside world seem slower. A real
version of such a pill would likely not be developed until after 2045 and would have more
limited effects, such as giving a mind direct access to search engines and other secondary
processing tools. It all depends on when the technology of molecular manufacturing would
be developed (see chapter 7 for more details).
We can postulate that any drug will not affect the entire population of the planet, as
there will always be people who will refuse it. On the other hand, we can imagine several
superdrugs emerging at once, all of which have the general effect of switching people off
from social life. Relatedly, the popularity of virtual reality and massively multiplayer online
RPGs (MMOs) could reach a point where fertility rates drop to 1 child per couple or lower
and there are devastating consequences to civilization. Fertility rates in tech-heavy places
like Japan and South Korea have already dropped to 1.39 and 1.24 respectively 1. The
fertility rate a population needs to replace itself is about 2.1. So, these populations are
shrinking and aging. By 2060, more than 40 percent of the Japanese population will be
over age 65 2.
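A rough illustration of how quickly sub-replacement fertility compounds, assuming a constant rate and ignoring migration and age structure (a sketch, not a demographic projection):

# Relative size of each successive generation at a constant total fertility rate (TFR).
replacement_tfr = 2.1
for tfr in (1.85, 1.39, 1.24):
    ratio = tfr / replacement_tfr
    print(tfr, round(ratio, 2), round(ratio ** 3, 2))   # after one and after three generations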
The super-strong drug could be similar to an infectious illness if it causes people to
produce more of the drug and introduce it to others. Here is a list of possible superdrug
varieties:
A superdrug that has a direct influence on the pleasure centers in a brain. This may
be through chemical means, like ecstasy or THC, or otherwise. There is research on
stimulating the brain's pleasure centers by means of a rotating magnetic field (a
Persiger helmet, Shakti helmet), transcranial magnetic stimulation, audiostimulation
(binaural rhythms), photostimulation, and biological feedback through devices that
can read EEGs (encephalograms), such as thought helmets for computer games.

The future appearance of microrobots which allows them carry out direct stimulation
of the brain, for instance the brain's pleasure centers or working memory centers 3.

Bio-engineering will allow us to create genetically modified plants which can produce
a range of psychoactive chemicals but appear to be simple house plants. The
knowledge to produce these plants would be widely distributed over the Internet and
it could be possible to produce seeds for them with the mini bio-laboratories
discussed in the previous chapter.

Better knowledge of biology, neurology, and neurochemistry will allow us to think up


much stronger psychoactive substances with specifically determined properties, and
with fewer side effects which make them more attractive.
•	Specially programmed microbes could be genetically modified to cross the blood-brain barrier, perhaps through a temporarily induced hole, and integrate with the brain to stimulate the pleasure centers or produce some other neuropsychological effect. This might be technologically easier than creating microbots to perform the same function.
•	Virtual reality is progressing, and could lead to helmets which are extremely addictive. These could risk disconnecting millions or even billions of people from reality in unhealthy ways. Arguably this is already on the verge of happening.
It is clear that different combinations of the listed items are possible, which would only strengthen their effects.
Design features
There are several major technical challenges to creating a drug which could pose a threat to humanity as a whole. First, the drug must not incapacitate its users too much, at least initially, otherwise a great many people would be extremely resistant to taking it, and it would not spread. Recreational drug use, outside of a few extremely common drugs like cannabis and caffeine, is a niche culture. For instance, only about 0.6 percent of Americans use cocaine.4 Cannabis and caffeine are common, however: 4 percent of the world's adult population uses cannabis annually,5 and in North America, 90 percent of adults consume caffeine daily.6 The point being that a drug would need extraordinarily appealing properties, or would need to be administered by force, to reel in a large percentage or all of the population. It would also need to be cheap and easy to produce.
First, we ought to consider a drug based on conventional neurochemistry (rather than
complex functional microbots or the like) which could pose a threat to humanity. Such a
drug could be extremely simple as long as it were satisfying enough to prevent humans
from reproducing. Arguably, drugs like alcohol and cannabis are already used in this way by some today: people who live their lives partying instead of reproducing. Since 1960, the fertility rate (births per woman) in the United States has dropped from 3.65 to 1.85, below replacement.7 Obviously, drugs and alcohol are not solely to blame, if at all; broader social forces are at work. Eventually, the problem will work itself out, as populations
which don't reproduce will die out, and be replaced by those which do. Or perhaps there
will be some more science fictional long-term solution, like artificial wombs. Artificial wombs
and robotic maids could ameliorate some of the reproduction-dampening effects of any
major drug addiction epidemics of the future, but such techno-fixes will not be a panacea
unless driven by advanced artificial intelligence, since otherwise there will always need to
be humans to maintain and direct machines to some degree.
Consider how typical pleasure center stimulating drugs, such as cocaine, peter out
before making the entire population addicted to them. Most people simply prefer the
pleasure they get from the real world, or they associate drug use with failure at life. What if
there were a drug that stimulated the pleasure center more powerfully than any drug
currently available? When considering this possibility, our minds jump right away to the use
of microbots, or some other direct technological neural stimulation, rather than through the
indirect means of psychochemistry. No currently known drug is capable of stimulating the pleasure center to the extreme degree familiar from science fiction stories or rat experiments.
A classic set of experiments, the morphine rat experiments, involved rats in small cages deprived of sensory stimulation or socialization, with access to a lever that administered morphine through an intravenous delivery tube.8 The experiments found that the rats would self-administer the drug until they died of thirst. The result turned out to be an artifact of their sparse environment, however: a subsequent experiment found that when the rats were given a cage 200 times larger, and opportunities to socialize, they only used morphine in moderation.9 There have also been experiments involving rats with the ability to directly stimulate their own pleasure centers through an electrode.10 Like the morphine
experiment, they self-administered the stimulation until they died of thirst. No experiment
has yet been conducted that gives rats the ability to directly self-stimulate their pleasure
centers but also have a cage large enough to move around in and socialize with other rats.
We speculate that rats given such a larger cage would actually not self-stimulate to the
point of dying of thirst, but the experiment would need to be run before we can know for
sure.
Because of the results of the second rat morphine experiment, we are skeptical that a
drug or means of directly stimulating the pleasure center would necessarily suck in all of
humanity and make us all addicted. It would need to be combined with other features to be
sufficiently interesting to be a threat. For instance, a brain-computer interface connected to
an elaborate virtual world. People might be able to earn money in the virtual world to order
food, so they would never need to leave it. They might never exercise and eventually die of
obesity. Of course, this would not be a threat to the portion of the world without electricity,
but eventually the entire world might be electrified.
A sufficiently advanced brain-computer interface could even simulate the feeling of
being outside, and trigger the appropriate brain centers to release the endorphins and
hormones which would be gained from exercise, to give someone the sensation they were
outdoors and exercising. It is not known if this feeling could be perfectly simulated by a
brain-computer interface, but it certainly is a possibility to consider. The virtual world might
ultimately be able to simulate every single possible sensation available in the real world
from sex to hiking to other sports and so on 11. This might not even be a bad thing, if
humanity can manage to keep one foot in the real world and maintain basic functions such
as reproduction, physical construction, and socialization. The real danger is if such
addicts really do not contribute to society and come to be regarded as deadweight, to be
discarded at the earliest opportunity, and if an increasing number of people fall into this
category.
Some futurists argue that virtual reality will become our preferred reality. That is, we
will spend all our time in virtual realities. Others argue that the structure of human brains
and minds will actually be uploaded into computers, a la the movie Transcendence. A
similar possibility would be people controlling proxy bodies from inside a computer, as in Avatar, except that the controller would be a conscious computer program rather than a physical body. Some may argue that this is impossible, but we mention the possibility anyway for completeness. For those interested in the debate, search for the phrase "Church-Turing thesis."
Bearing all this in mind, there are two questions we have to ask:
•	Is it possible that a psychopharmacological drug could be designed which would bring humanity to its knees?
•	Given that the right brain-computer interface scenario would be particularly likely to bring humanity to its knees, what sorts of scenarios would qualify?
Note that we assume from the outset that some kind of brain-computer interface
would indeed cripple humanity. This is an important point. The space of possible brain-computer interface programs is so large, so expansive, that it seems plausible to assume that somewhere in there lurks a program or device which would engross us entirely, beyond all boundaries of reason. Of course, for this to be a true threat, the cost of the interface would need to fall below something like 20 dollars and/or it would need to be coercively spread, and it would need to catch on at a point in history when most people have access to electricity. Only about 80 percent of the world is currently electrified, meaning there are over 1.3 billion people without electricity, about
half of those in Africa, the other half in South Asia. These people would be outside the
reach of any coercive brain-computer interface for now, but probably not by 2030.
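As a rough sanity check on that figure, the arithmetic below simply combines an assumed mid-2010s world population of about 7.2 billion with the 80 percent electrification rate quoted above; both inputs are approximate.

# Sanity check on the electrification figure quoted above. Both inputs are
# rough assumptions: a mid-2010s world population of ~7.2 billion and the
# ~80 percent electrification rate cited in the text.
world_population = 7.2e9
electrified_share = 0.80

without_electricity = world_population * (1 - electrified_share)
print(f"People without electricity: ~{without_electricity / 1e9:.2f} billion")
# Prints ~1.44 billion, consistent with the "over 1.3 billion" cited above.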
Just hand-waving, here are the design features we postulate a drug that poses a risk to humanity would need to have:
•	Extremely engaging virtual reality, either psychedelic, brain-computer interface facilitated, or both.
•	Pleasure center stimulation: the ability to confer the sensation of psychological pleasure directly.
•	Preferable to real reality in every way, and capable of simulating many of the most appealing features of everyday reality, including the outdoors.
If all these conditions are met, we imagine that the drug or brain-computer interface could be a threat to the continuation of human civilization.
There is an interesting incentive to develop such a system: the foreseen level of structural unemployment caused by automation and robotics. As of the time of this writing (June 2014), the adult labor force participation rate in the United States is under 63 percent, meaning 37 percent of adults do not work.12 Only 12.7 percent of the adult population are seniors, so even if every senior were retired, roughly a quarter of adults would still be sitting around with nothing to do. Most of them are not in education or training of any kind. They are perfect targets for games and drugs, thirsty for any sensation or meaning. Some economists argue that this low level of employment is the new normal, and that re-employment is not particularly likely. Just as horses became economically obsolete with the invention of the car, unspecialized workers are becoming economically obsolete with the spread of outsourcing and automation.
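The decomposition behind that estimate is simple arithmetic; the sketch below spells it out, assuming (generously) that every senior is out of the labor force, which makes the final number a lower bound.

# Rough decomposition of the non-working share of US adults, using the
# June 2014 figures quoted above. Assumes every senior is out of the labor
# force, so the last number is a lower bound.
participation_rate = 0.63   # adult labor force participation
senior_share = 0.127        # share of adults who are seniors, as cited

not_working = 1 - participation_rate
not_senior_and_not_working = not_working - senior_share

print(f"Adults outside the labor force: {not_working:.0%}")
print(f"Not explained by retirement age: at least {not_senior_and_not_working:.0%}")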
There is a name for the 20-to-40-somethings who stay inside all day, have no romantic partners, and spend their time playing video games and surfing the Internet. They are called hikikomori, from the Japanese for "pulling inward, being confined." According to the government statistics of Japan, about 700,000 people live as hikikomori in that country, with an average age of 31, but the numbers are almost certainly higher.13 700,000 is about half a percent of the population of Japan, but the real numbers likely exceed 1 percent. Furthermore, a great number of Japanese men and women say they are uninterested in dating and sex: 45 percent of Japanese women between the ages of 16 and 24 claim they are uninterested in or despise sexual contact.14 A survey in 2011 found that 61 percent of unmarried men and 49 percent of women aged 18-34 were not in any kind of romantic relationship. This "celibacy syndrome" is concerning, and is likely related to technology. From a peak of 126 million, the population
of Japan is expected to fall to 42 million by 2100, roughly its level in 1900.15 That is a decline of about two-thirds, merely due to a lack of desire to reproduce.
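Taken at face value, that projection implies a fairly steady rate of shrinkage; the sketch below computes the implied average annual decline, assuming a smooth decline from the mid-2010s to 2100 (an obvious simplification that ignores age structure).

# Implied average annual decline if Japan goes from ~126 million to ~42
# million by 2100, as the cited projection suggests. Assumes a smooth
# decline starting in the mid-2010s; ignores age structure entirely.
start, end = 126e6, 42e6
years = 2100 - 2015

annual_factor = (end / start) ** (1 / years)
print(f"Total decline: {1 - end / start:.0%}")                        # ~67%
print(f"Implied average decline: {1 - annual_factor:.2%} per year")   # ~1.3%/yr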
All these anecdotes aside, it is extremely difficult for us to predict when potentially
catastrophic drugs or online interfaces will be developed, or what precise characteristics
they will have. A brain-computer interface that achieves mass participation from humanity
may actually be useful and enhance our performance rather than being dangerous. It is
hard to tell in advance. The idea of a superdrug that threatens humanity is one which has
not been adequately explored and is in need of more research. Here we have only
provided a crude outline.
References

1. Google Public Data Explorer.
2. "Japan population to shrink by one-third by 2060." BBC News, January 30, 2012.
3. Ray Kurzweil. The Singularity Is Near. Viking, 2005.
4. National Household Survey on Drug Abuse.
5. United Nations Office on Drugs and Crime (2006). "Cannabis: Why We Should Care." World Drug Report 1. United Nations, p. 14.
6. Richard Lovett. "Coffee: The demon drink?" New Scientist (2518), September 24, 2005.
7. Google Public Data Explorer.
8. Alexander, Bruce K. (2001). "The Myth of Drug-Induced Addiction." Paper delivered to the Canadian Senate, January 2001; retrieved December 12, 2004.
9. Lauren Slater. Opening Skinner's Box: Great Psychological Experiments of the Twentieth Century. W.W. Norton & Company, 2004.
10. Michael R. Liebowitz. The Chemistry of Love. Little, Brown & Co., 1983.
11. Kurzweil 2005.
12. Bureau of Labor Statistics.
13. Michael Hoffman. "Nonprofits in Japan help 'shut-ins' get out into the open." The Japan Times Online.
14. Abigail Haworth. "Why have young people in Japan stopped having sex?" The Guardian, October 19, 2013.
15. National Institute of Population and Social Security Research. "Population Projections for Japan: 2006-2055," December 2006. The Japanese Journal of Population, Vol. 6, No. 1 (March 2008), pp. 76-114.
Chapter 14: Nanotechnology and Robotics
Imagine a tabletop machine the size of a laser printer that can create complex products out of diamond and allotropes of carbon like buckytubes or graphite. Called a nanofactory, this machine can produce laptops, cell phones, stoves, watches, remote-controlled drones, gene sequencing devices, sensors, culinary implements, textiles, houses, ductwork, cranes, tractors, furnaces, heaters, coolers, engines, guns, greenhouses, telescopes, microscopes, vacuum cleaners, scales, hammers, coat hangers, ice cube trays, beer mugs, worm robots, gecko suits, radio towers, and many other useful objects and devices.1 The machine uses natural gas as its feedstock and can produce a kilogram of product for twenty dollars. It can build a copy of itself in just fifteen hours.2 The nanofactory is based on molecular nanotechnology (MNT), a manufacturing process that uses arrays of nanoscale robotic manipulators to build products from the atoms up.3
A tabletop nanofactory could become a reality sometime over the course of the next
century. Some writers expect nanofactories to be developed as early as the 2020s,4,5 others not until the 2100s.6 The prospect has been addressed by arms control experts,7 as well as physicists and chemists of all stripes. Some are skeptical they could be built,8,9 but evidence grows for their feasibility.10,11 If they can be built, they would be a serious risk to
humankind, as well as a major potential boon.
A nanofactory would use quintillions of tiny nanorobotic arms, called assemblers, to
individually place carbon atoms into nanoblocks which are then combined into
progressively larger units, until a suitable diamond product is created 12. Carbon is the
element used, both because of its strength and its versatility. Since diamond is a very
strong material, nano-products could be mostly empty space, or filled with ballast like
water. A typical 1 kg (2.2 lb) product might only use 0.1 kg of diamond and cost two
dollars. The functional components of a nanofactory would be very complex, small, and
automated, by necessity. A working nanofactory would have internal complexity greater
than even the most complicated large-scale factories of today. The individual assemblers
would only be 200 nanometers across and contain just four million atoms each. A full-sized
nanofactory would contain about 100,000 quadrillion such assemblers, organized into 10
quadrillion production modules. By comparison, the human body contains about 50,000
quadrillion ribosomes, the assemblers of biological life. All these components would be
built by self-replication from the very first assembler. Specifically when or how that
assembler will be built is unknown. Some say it will be built in the 2030s, others, the 2060s,
while others say never. The basic requirements for an assembler are that it can individually
place carbon atoms, that it has three degrees of freedom, and that it can replicate itself.
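The significance of self-replication is easiest to see with a quick calculation. The sketch below uses the 15-hour replication time cited earlier and deliberately ignores feedstock, energy, and space constraints, so it gives an upper bound on how fast a population of nanofactories could grow.

import math

# How fast a population of self-replicating nanofactories could grow, using
# the ~15-hour replication time cited earlier in the chapter. Feedstock,
# energy, and space constraints are ignored, so this is an upper bound.
REPLICATION_TIME_H = 15.0

def hours_to_reach(target, start=1):
    doublings = math.log2(target / start)
    return doublings * REPLICATION_TIME_H

for target in (1e3, 1e6, 1e9):
    h = hours_to_reach(target)
    print(f"{target:.0e} nanofactories: ~{h:.0f} hours (~{h / 24:.0f} days)")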
We know that nanofactories of some kind are possible because all of biological life is
based on the principle of molecular assembly. Molecules making up every organism on
earth, from pond scum to giant redwoods, are made by ribosomes. Ribosomes evolved at
least 3.5 billion years ago, maybe as early as 4.4 billion years ago, not long after the
formation of the Earth itself. Because they are the root of life, ribosomes differ little
between organisms. Ribosomes are part of the basic package that was included in the first
prokaryotic cells. There are only a few basic types.
Consider what ribosomes can accomplish. The ribosomes in bamboo pump out 4 feet
of woody growth in just 24 hours. That's two inches per hour. The ribosomes of mayflies or
locusts produce enough organisms to cover the sky in a dark cloud for days. The
ribosomes in human beings take us from a fertilized egg to an adult over twenty years.
Millions of years ago, ribosomes built dinosaurs over a hundred feet long. Today, they build
fungal networks miles long. Ribosomes synthesize proteins that are hard like the tusks of
elephants, soft like rose petals, stretchy like the membranes of a bat, warm like the fur of a
mink, shiny like the shell of a tortoise, tough like the sinews of a cheetah, and iridescent
like the wings of a butterfly. Over billions of years, ribosomes have pumped out about 4
billion living species. All that biological complexity derives from tiny molecular machines,
every atom and molecule of which has been exhaustively mapped by scientists 13.
Skeptics of molecular nanotechnology (MNT), the postulated-but-not-yet-invented
technology behind hypothetical nanofactories, have made many points against MNT since
the concept was formalized in 1986 by MIT engineer Eric Drexler. They point out that while
ribosomes work in water and synthesize floppy molecules, assemblers of the kind in postulated nanofactories would have to work in a dry environment with high precision, which, due to certain physics issues, isn't possible.14 Nobel Prize-winning chemist Richard Smalley argued that molecular assembler fingers would be too "fat" and too "sticky" to manipulate individual atoms, known as the "fat fingers" and "sticky fingers" arguments, respectively. These and similar arguments have been responded to at length by Drexler and others.15,16,17,18
The only way to confirm or disconfirm that MNT is possible is to try and build it. MNT
of the kind formalized by Drexler in his 1992 book Nanosystems does not exist yet, but
there are steps in that direction: enzymes that work outside of water, and nanoscale programmable assembly lines that build simple molecular products.19,20 This chapter
assumes that molecular nanotechnology and nanofactories will be developed sometime in
the 21st century, even if we cannot say when or even if it will happen at all. Many experts at
least consider it likely, and we see addressing it in the context of global risk as essential.
Even if dry nanotechnology of the kind described by Drexler cannot be built exactly as
envisioned, compromises or hybrids (between wet and dry systems) will serve a similar
role and have similar capabilities to the original vision. This chapter also covers
conventional nanotechnology of the kind that already exists, and shows how there is
something of a continuum between conventional nanotechnology (NT) and molecular
nanotechnology (MNT), even though the two are distinct. First we highlight the basic
capabilities, then examine the risks. Due to limited space, we'll primarily focus on potential
military applications of MNT, since therein lies the greatest danger.
The nanofactory model described by Chris Phoenix, Eric Drexler, and colleagues has
several crucial features that set it apart from all other modes of manufacturing. First, it is
self-replicating. Second, it is fully automated. Third, it is fully general, meaning it can build
almost anything out of carbon that is allowed by the laws of chemistry. Fourth, it is
atomically precise, meaning every atom is put in a specific, predetermined place according
to a design. Fifth, it has a high throughput, meaning it can fabricate products several times
more rapidly than via conventional means 21. Sixth, its products have superior material
properties, being made of diamond or other carbon allotropes such as buckytubes 22.
Seventh, its products have a high energy density, so extremely powerful motors and
generators can be built in a tiny space.23 The Center for Responsible Nanotechnology (CRN) says that nanomotors and nanogenerators will convert electrical power to motion, and vice versa, with 10 times the efficiency and about 10^8 (100,000,000) times more compactly.
Consider that last point, converting electrical power to motion a hundred million times
more compactly. If possible, it means an engine as powerful as a fighter aircraft that fits in
a matchbox. Phoenix writes that such a motor can convert more than 500,000 watts (0.5 megawatts) of power per cubic millimeter at better than 99% efficiency.24 For comparison,
the power output of a blue whale is about 2.5 megawatts, a diesel locomotive about 3
megawatts. A nanotech engine equivalent to their output could fit in a few cubic millimeters.
You might wonder, how could such a small machine contain so much power without
overheating? Indeed, cooling would be an issue. If adequate cooling can be provided,
tremendous power densities could be sustained. Even if the motors are tiny, they could be
hooked up to tiny rods which are then hooked up to larger and larger rods, driving shafts or
rotary axles to channel the power. The diamond rods connected to the engine itself would
be strong enough not to break, even given the power densities produced by nanoengines.
In this fashion, engines for the largest and most powerful vehicles could be produced in
small desktop fabricators. It's difficult to imagine what the consequences of this could be,
since barely anyone has given it any serious thought. We will return to this scenario and its
implications for global risk later in the chapter.
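The implied volumes follow directly from Phoenix's power-density figure. The sketch below takes the 0.5 MW per cubic millimeter number at face value (it is speculative) and computes how much nanomotor volume would be needed to match the two benchmarks just mentioned.

# Volume a hypothetical MNT motor would need to match familiar power outputs,
# taking Phoenix's speculative figure of ~0.5 MW per cubic millimeter at
# face value. This is just the implied arithmetic, not a design claim.
POWER_DENSITY_MW_PER_MM3 = 0.5

for name, output_mw in [("blue whale", 2.5), ("diesel locomotive", 3.0)]:
    volume_mm3 = output_mw / POWER_DENSITY_MW_PER_MM3
    print(f"{name}: ~{volume_mm3:.0f} cubic millimeters of nanomotor")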
Grey Goo Scenario
When people think of nanotechnology risks, the first thing that usually comes to mind is the grey goo scenario: out-of-control self-replicating nanobots saturating the environment, like runaway locusts or rabbits. In this scenario, a tiny nanomachine self-replicates using sunlight, biomass, or other available power and matter sources until it destroys all life on Earth.25 Though this may initially sound plausible to the non-technical futurist, there are a number of reasons why it is rather difficult to achieve technically.26 Specifically, it seems to be less of a risk than biological warfare or artificial intelligence. The
most dangerous forms of grey goo would be likely to be bio-technical hybrids, or tools used
by advanced, human-equivalent or superintelligent artificial intelligences, rather than dumb
self-replicators (which could be outsmarted or otherwise beaten if detected early enough).
Grey goo was highlighted in Eric Drexler's 1986 book Engines of Creation, and was subsequently featured in '90s and '00s science fiction, from Star Trek to Michael Crichton's novel Prey. In a short 2003 article, "Grey goo is a small issue," Drexler said that he thought the risk was overblown and that he had been mistaken to emphasize it in his 1986 book.27 This
is because grey goo would be very difficult to engineer, and especially difficult to engineer
accidentally. If you wanted to use nanotechnology to build a robot army, or a nanobot army,
you would just build the robots in a factory, instead of trying to engineer an autonomous,
open environment-ready nanoscale self-replicator. For most imaginable goals, even taking
over the world, this would be both more efficient and fully sufficient.
Why? Because converting raw materials from the environment into functioning
nanobots would be extremely complex and subject to trial and error. The first nanofactories
will require highly purified inputs such as refined natural gas. Trying to put complex,
unprocessed molecules from a natural biological environment through a nanofactory would
be like throwing a wrench into finely tuned turbines. To get the right molecules, it would be
far easier to purify them in bulk using purpose-built industrial systems, rather than
collecting them with nanobots from the dirty, impure natural environment. Navigation and
coordination among nanobots would also be problems. To solve them, you would practically need human-equivalent artificial intelligence, at which point you are dealing with a self-improving AI risk, not a nanotech risk.
Programming free-floating molecular assemblers is a robotics task far more complex
than programming assemblers pinned down in neat rows of a nanofactory. This is for the same reason that it is easier to build purpose-built robotic arms to assemble cars in a factory
than to build robotic arms that somehow build cars from substances available in a forest.
Like a car, an assembler needs to be built out of highly purified, refined components in a
predictable environment where nothing can go wrong. Assemblers that replicate in the
external environment would need to be more complex and sturdy, spending a lot of space
on features like protective shells, membranes, and navigational sensors/computers.
We aren't claiming that free-floating self-replicating assemblers are impossible or could never exist, just that they would not be among the earlier nanotech risks to enter the scene. The existence of viruses, bacteria, and all other organisms shows that assembler-shells (organisms) filled with assemblers (ribosomes) powered by materials in the environment (food or hosts) can exist. Blue-green algae (cyanobacteria) have been quite effective at self-replicating using ordinary sunlight and carbon dioxide from the atmosphere: they pull CO2 out of the air and build new cells from it through a chemical process called the Calvin cycle. In the long run, there is no reason why there couldn't be artificial cyanobacteria that do precisely the same thing, but faster.
For many nanosystems, building materials will be extracted from the carbon cycle,
which includes carbon dioxide in the atmosphere, calcium carbonate on the ocean floor,
and biomass all around. Some nanotech pundits think that carbon dioxide will be removed from the atmosphere in such quantities that we will take to burning down forests to replenish the atmosphere and prevent global cooling.28 That is quite a variation on the usual line we hear with regard to carbon dioxide.
Three Phases of Nanotech Risk
Now that we have gotten grey goo out of the way, it is time to move on to a broad categorization of nanotech risks in general. We see three primary phases of existential risk connected to nanotechnology and robotics. The first phase involves nanotechnological arms races, which may include the mass production of nuclear weapons on a scale never before seen, followed by nuclear war and then globally catastrophic nuclear winter. This, in
combination with other factors like biological warfare, could potentially lead to the end of
the human species. The second phase is the risk of out-of-control or intentionally released
replicators, grey goo, which we just addressed, and will return to later in the chapter. As
part of this phase we also include the blue goo scenario, referring to deliberately
constructed police goo designed to protect the biosphere from grey goo, which ends up
going out of control or being used for warfare and acting as grey goo itself. The third phase
is the risk of advanced artificial intelligence or human intelligence augmentees, the first part
of which we explored thoroughly in the last chapter, the second of which will be addressed
in Chapter 11 on transhumanism.
The three phases are not mutually exclusive. The only way to avoid all these risks would be either 1) some kind of world system that enforces top-down peace, possibly benevolent AI or a benevolent world dictatorship, sometimes referred to as a singleton,29 or 2) mutually assured destruction sustaining world peace, somewhat analogous to the situation with nuclear weapons today (though some arms control experts would object to this statement).30 The problem with the second solution is that nanotech weapons may offer a first-strike advantage, meaning it might be possible to wipe out the enemy's capacity to respond to an attack, ensuring a first-strike military victory for any aggressor.31 This would increase the incentive to attack an enemy before the enemy attacks you, and incentivize both local and geopolitical aggression overall. As the Center for Responsible Nanotechnology puts it, nanotech could be very bad for peace and security.32
The First Phase: Military Nanorobotics
Consider an attack robot so tiny it is invisible, an eighth of a millimeter wide, the size of
a fairyfly. We explored such a possibility in the previous chapter, in the context of a weapon
a hostile artificial intelligence might use to eliminate humanity. Mass-produced, these
robots could be distributed worldwide, sitting dormant and ready to be activated. Made out
of an extremely durable material like diamond or buckypaper, they would hold a payload of
anthrax spores, which remain inactive for years and kill quickly once in contact with a host.
If a country could manufacture millions or even billions and distribute them globally, they
could murder entire continents and dominate the planet. The only way to counteract them
would be to wear full-body suits around the clock and maintain hermetically sealed spaces.
Consider what could happen if such robots went out of control and attacked people
indiscriminately. Think what could happen if several countries each built a fleet of robots,
setting them on the populations of rival countries without defenses. During World War II, a
total war, especially towards the end, both sides had a mind to commit the atrocities
necessary to defeat the other side, and did. Combine that scenario with nanorobotics and
you could have a massacre that kills 99 percent or more of the population of the warring
countries. The fact that these nanorobots could be programmed to kill autonomously,
without controller input, would make them even more attractive to warring parties in a total
war, where hesitation can be fatal. If they could somehow power themselves, they could
continue to kill long after the war is over, like heat-seeking, invisible, aerial land mines. If enough of them covered the surface of the planet, if they could somehow continue to power themselves autonomously, and if the technology to locate and destroy them were not available, the Earth could become uninhabitable. It sounds like an episode of the Twilight
Zone, but possibilities like it have been seriously considered by arms control experts.
Higher-energy scenarios are also imaginable. Instead of tiny killer robots, countries
could manufacture larger autonomous killer robots, the size of tanks, powered by the
super-powerful nanoengines described earlier. Such robots would not only overpower
human beings, but they would be hundreds or thousands of times stronger despite being of
similar size. They would be equipped with targeting systems to score perfect shots from
miles away, farther than human snipers can reach. Such automatic aiming systems are
already under development33. There are mortar systems that can reach targets behind
barriers and bunker busters that penetrate through deep layers of rock. All these systems
could be mounted on and delivered by high-power, high-energy MNT-built robots, which
would make our present-day robots look like cheap plastic toys.
Present-day military systems are built with the assumption that enemy shots
sometimes miss. Even if they do hit, they do limited damage due to armor. With nanotech-built weaponry, it will be possible to build systems that hit every time, and to calculate the
number of shots needed to destroy a target in advance, so by the time a target is visible, it
is as good as destroyed unless it has some outstanding defense system. In all likelihood,
the initial development of MNT attack robotics will be monopolized by a single nation, giving it a huge advantage over the rest of the world, creating what Drexler calls the "Leading Force."34 Combine that with human arrogance and natural geopolitical instability, and the
outcome is difficult to predict.
At any given time in the world, as on any given playground, there are always strong
people who want to expand their influence. They consider themselves the good guys, and
anyone they come in conflict with the bad guys, even if the reality is nuanced. To take a
concrete example, since the fall of the Soviet Union, NATO has extended its influence up to
the borders of Russia, bringing almost a dozen new countries under its wing. When Russia dared to intervene in Ukrainian affairs in 2014 after the government there collapsed, Western media portrayed it as an outrage, and Western governments imposed sanctions on Russia. Were they justified in doing so? From our point of view, it does not really matter. To a typical American, they were justified, and to a typical Russian, they were not. All that matters is that such
conflicts are routine, inevitable, and have the potential to escalate into world war. The
military technologies available at the time contribute to the volatility of the situation.
At some point in the future, it will be possible for a nanotech-armed power to
intervene in any global conflict consequence-free. Consider if the United States were a
nanotech power when Russia invaded Crimea. It would be simple for the USA to threaten
the Russians with nanotech weapons and compel them to leave, or prevent them from
entering in the first place. Any military conflict could be resolved decisively in favor of the
United States with no risk to our citizens and property. The geopolitical influence of the
USA would expand like a balloon, with no natural challengers or rivals to stop it.
You might think, so what? As long as the USA does not actually invade other
countries, the status quo would be maintained. But the fact that the USA (or NATO) would
have truly supreme military status over the rest of the world would not be only a curiosity,
but would have real geopolitical consequences, in both overt and subtle ways. Russian,
Iranian, and Chinese companies, media, and leaders would have an incentive to abandon
their national values and pride, instead appealing only to American values and American
pride. Their primary way of looking out for their interests would be to appeal to American
decision-making processes. Eventually this would converge to a true global hegemon, a de
facto global government, even if superficially each nation were independent. Today, NATO
may be the worlds greatest military power, but it cannot act with impunity. Military and
economic factors constrain it significantly. What happens when a power comes into being
which can act with impunity over the face of the whole planet? We don't really know. It
could have negative consequences, if not in the short term, then in the long term 35.
In this scenario, the rise of a stable global hegemon might actually be a beneficial
outcome, but the key word is stable. If one nation acquires nanotech weapons and
prevents all others from acquiring them indefinitely, that is a steady state, what Bostrom calls a singleton. This scenario has its own risks, but they are smaller than the risk of open
conflict leading to nuclear or nanotech war. The danger is more in one country or group
getting used to hegemonic status, overextending themselves geopolitically, then recoiling in
horror when another nation acquires nanotech weapons, leading to conflict and full-scale
nano warfare. Years of built-up resentment from being on the receiving end of geopolitical
hegemony could inspire subordinate countries to regain all the land and influence they
lost while being a second-class state, with potentially explosive results. As we mentioned
earlier, when it comes to nanotech weapons, first strike may be the guarantee of victory, so
there is every incentive to move fast, and the possibility that states with less military
hardware could be victorious as long as they strike first. A state with a hundred times less
hardware may be able to prevail over a much larger state if they attack first and use both
nanotech and nuclear weapons with a strong emphasis on autonomous robotics.
We ought to consider the possibility that the relative peace of the last 70 years has
partially been the result of a time in history when attacking the enemy with the most
powerful available weapon (nuclear weapons) is self-destructive rather than just destructive
to the adversary. It seems fairly clear that the up-front costs of major conflict decrease
when highly destructive attacks are possible with non-nuclear weapons which use
conventional explosives or pure kinetics. These are more politically acceptable than
nuclear weapons. Nanotech weapons would make it easier to target attacks precisely, to
strike military targets with minimal collateral damage. Though this may initially cause fewer casualties, it may paradoxically increase casualties in the long run because it makes the initial escalation seem less foreboding, making such escalations more likely overall. The tendency towards greater escalation may overwhelm the short-term decrease in casualties
enabled by greater precision.
Another technology that nanotech-built robotics would provide is effective anti-missile defense. Though missiles are fast (a subsonic cruise missile flies at around Mach 0.85, and ballistic missiles far faster), they can be tracked and hit if the intercept system is precise enough. Even with today's technology, there are relatively effective anti-missile systems such as Israel's Iron Dome, though they only work against smaller missiles, and certainly not against intercontinental ballistic missiles (ICBMs). If all
ballistic missiles can easily be blocked, as nanotech could enable, it creates an incentive to
build exotic new delivery systems, such as nuclear weapons that form by the combination
of different components on the battlefield, or clouds of missiles so large that they can't be stopped.36 If even a small drone could potentially be part of a nuclear attack, it pushes
defenders to treat each incursion as if it could be a weapon of mass destruction and
respond accordingly.
Nanotech-built robotics create many other exotic attack possibilities: lasers that fire
from space platforms, kinetic systems that accelerate projectiles from space to hit ground
targets (these are investigated in more detail in Chapter 10), supercavitating missiles that
travel under the ocean (too fast to be detected by sonar), and so on. Every destabilizing
military technology you can imagine, molecular manufacturing would be able to build. The
technology is so potentially destabilizing that arms control expert Jürgen Altmann has suggested an outright ban on autonomous robots smaller than 1 m (3 ft) in diameter, though this seems as if it would be difficult to enforce.37
As previously mentioned, the only circumstances we can imagine which would
prevent this MNT arms race scenario would be global hegemony or mutually assured
destruction (MAD) of some kind. Global hegemony seems more stable, because a first
strike advantage imperils the MAD scenario. Anyone considering the future of
nanotechnology should note that its extremely great potential in the military realm
incentivizes geopolitical hegemony to minimize the likelihood of conflict.
Uranium Enrichment
Besides creating its own unique risks, nanotechnology exacerbates nearly every
other risk. It exacerbates the risk of unfriendly AI by giving us a huge amount of computing
power38. It exacerbates the risk of biowarfare by allowing large, automated test facilities for
new pathogens and biotechnology tools. It exacerbates the risk of nuclear warfare by
making it easier to enrich uranium and putting it within the reach of more states, including states that have absolutely no intention of participating in global mediating bodies like the UN.39 It increases the risk of global warming or global cooling by giving us unprecedented
ability to warm up the Earth or cool it down, both deliberately (geoengineering) and
accidentally (waste heat, CO2 extraction from the atmosphere).
We mentioned in the chapter on nuclear weapons that uranium enrichment is
expensive, and that the primary US enrichment facility during the Cold War consumed
electric power equivalent to half that used by New York City. Nanotechnology would be the
ideal tool for enriching uranium, because of its capacity for creating extremely high power
density systems (like centrifuges), but also the prospect of developing new enrichment
techniques that work more efficiently for less power. It may be possible to develop
nanosystems that weigh individual atoms and separate out the useful isotopes, making
facilities both more compact and efficient. Even today there is the risk of development of
new enrichment facilities that cannot be observed by surveillance satellites. In the
nanotech future, enrichment facilities will become so small that the only way they could be
detected would be through ground surveillance, which could be difficult to pull off in a
foreign country. The only way to reveal such facilities would be if a human being passed on
the information to foreign intelligence services and provided photographic evidence. If the
scientists stayed quiet, the facility could remain hidden, increasing the risk of a behind-the-scenes nuclear arms race and making international monitoring impossible.
The enrichment of uranium enabled by nanotech products might be among the
greatest risks of nanotechnology (aside from unfriendly AI), because while nanotech offers
the possibility of new weapons that minimize collateral damage, nuclear weapons are the
ultimate trigger of collateral damage on a vast scale, and seem likely to be used in any
sufficiently aggressive 21st-century war. In the nuclear war chapter, we reviewed Alan Robock's simulation result that even a limited nuclear exchange could cripple global harvests and cause hundreds of millions of deaths, and that a full-scale nuclear war would put huge parts of Asia and North America below freezing for years at a time.40,41 The explosion
of nuclear weapons in forests could even bring the Earth into a new glacial age. Combine
this with the fact that nanotechnology would enable self-sufficient space stations, and some
groups may be tempted to end all life on the planet, hide out in space, and return a couple
decades later to a world that is theirs alone.
The Earth's crust down to 25 km (15 mi) is estimated to contain 10^17 kg of uranium, making it roughly 40 times more abundant than silver. About 0.7 percent of that is the isotope that can be used to build nuclear weapons (U-235). It only takes about 5 kg (11 lb) to construct a bomb. That means this planet contains enough uranium, in principle, to build more than a hundred trillion nuclear weapons. The primary barrier standing in the way of this is
the technological difficulty of uranium enrichment. If uranium enrichment becomes as easy
as extracting aluminum from ore (world annual production over 47 million tonnes), we
would have a huge global security problem. At one point aluminum was more valuable than gold; now it's dirt cheap. Would it be possible to regulate the processing of aluminum ore,
even if we wanted to? Not through anything less than employing a significant percentage of
the population to inspect every industrial facility on the planet for aluminum traces, or
placing armed guards at all known bauxite (aluminum ore) deposits. Nanotech could put us
in a similar position with the enriched uranium that can be used to build nuclear weapons.
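The order of magnitude behind the crustal-uranium claim above is easy to check. The sketch below uses only the figures quoted in the text; it says nothing about how much of that uranium could ever be practically extracted or enriched.

# Order-of-magnitude check on the crustal uranium claim above. Inputs are the
# figures quoted in the text; the result says nothing about how much uranium
# is practically recoverable or enrichable.
crustal_uranium_kg = 1e17
u235_fraction = 0.007    # natural isotopic abundance of U-235
kg_per_bomb = 5.0        # rough minimum core mass quoted in the text

u235_kg = crustal_uranium_kg * u235_fraction
print(f"U-235 in the crust: ~{u235_kg:.1e} kg")
print(f"Implied bomb cores: ~{u235_kg / kg_per_bomb:.1e}")  # ~1.4e14, over 100 trillion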
Biological Warfare and Experimentation
Biological weapons research operates as follows: dangerous pathogens are isolated,
grown in culture, then tested on animals. The weapons are fine-tuned by optimizing their
distribution method depending on the nature of the pathogen. A biological bomblet, for
instance, is a bomb filled with a pathogenic agent designed to spray pathogen across as wide an area as possible. If a biological bomblet just created a little puddle on the ground, it wouldn't be very effective. The optimal distribution method depends on the pathogen.
The world's most powerful countries nominally ended their biological weapons research and development with the Biological Weapons Convention of 1972, though the Soviet offensive program continued covertly into the early 1990s. Research continues under the auspices of defensive research. In reality, offensive and
defensive bioweapons research are hard to distinguish. The only way to develop defenses
for a given bioweapon is to grow it, use it on animals, and test out antidotes or other
protective measures. That is how the process works.
There are several technologies that NT would enable or greatly improve which would
make biological weapons research far simpler. The first is gene sequencing and gene
synthesis. The second is automated experimentation. The third is biological isolation
chambers. The fourth is proteomics. All of these fields would be vastly improved or
accelerated by the maturity of nanotechnology, and would make it much easier for more
people to design deadly new pathogens in a warehouse or even their basement. We already covered the dangers of pathogens in an earlier chapter, and how new gene-modification technologies are opening the window to entirely new pathogens that have never existed before. The same dangers apply here, only magnified, since NT would give
us so many new tools which are high-performance, cheap, and useful.
A true doomsday virus would be a virus that spreads exponentially and
asymptomatically for 3-4 weeks, lies dormant so no one notices it, then hits every corner of the globe at once with deadly effects. Such viruses do not tend to arise naturally, since if a pathogen takes that long to kick in, it tends not to produce very deadly poisons. As an example, ebolavirus produces symptoms about 2-21 days after it is contracted, typically 8-10. Engineering a virus that is deadly, spreads quickly, and reliably takes multiple weeks (and no less) for the symptoms to kick in is not an easy task. To do so effectively will
require an understanding of millions of proteins and their interactions with the immune
system of the human body, the genes that code for them, the existing viruses that possess
these genes, and so on. With our current technology, achieving this is quite difficult.
However, with automated experiment labs involving robotics manipulating millions of
human tissue test samples interacting with millions of possible pathogenic proteins, it
becomes much easier. There are already robots that carry out massively parallel
experimentation, but they tend to be expensive. With nanofactories, these robotic test-beds
could be built in someone's basement, and would have so many legitimate uses that they
could not be easily screened for terrorism or similar ill uses. The same factor applies to
future gene synthesis technologies. Currently, gene synthesis companies screen requests
for dangerous sequences, but with nanotech-enabled devices, the technology will be in the
hands of everyone and no company or authority will be able to screen it all.
Miniature Nuclear Weapons?
As far as is currently publicly known, the core of a nuclear bomb requires a minimum
of about 5 kg (11 lbs) of enriched uranium or plutonium to reach critical mass. However, in
the arms control book Military Nanotechnology, Jürgen Altmann raises the possibility of
miniaturized nuclear weapons that break this lower limit. Instead of being triggered by
conventional explosives in a spherical shell pattern, as in current nuclear bombs, these
smaller payloads would be triggered by minute amounts of antimatter, on the order of
micrograms. It is unknown if this is possible, but if so, it could create nuclear weapons just
centimeters across, with yields in the 1-1,000 tons of TNT range.
If these weapons could be mass-produced, they would have extremely negative
consequences for geopolitical stability. Arbitrarily small yields could be possible. Such
weapons would only be accessible to medium-to-large states, since the production of
antimatter depends on large particle accelerators. It may also be possible to efficiently
harvest antimatter from space, where it can be found in low densities. The current record
for storing antimatter in a stable state, using a magnetic trap, is 16 minutes.42 Storing it for
at least a few days would presumably be necessary to weaponize it.
Besides miniature nuclear weapons, nanotechnology could enable miniature rockets
which could carry conventional explosives or biological payloads. Rockets could be made
more energetic per pound of propellant by using nanoparticles, which allow better mixing of fuel and oxidizer.43 Whereas today rocket engines tend to be a few tens of centimeters long
at the smallest, better engines built using nanotechnology could be just a few millimeters
across. There is already a DARPA program, which does not use nanotechnology, that aims
to demonstrate a liquid-fueled micro-rocket with turbopumps, with 15 N thrust meant to
deliver 200 g satellites to low Earth orbit.44 If a rocket can reach orbit, it certainly can be
used on the battlefield. In 2005, a breakthrough was achieved that allowed the fabrication
of micro-thrusters 50 to 100 times more efficient than previous models.45 These guided micro-thrusters, if perfected, would weigh mere tens of milligrams and could power
munitions, space launches, and micro air vehicles with high maneuverability.
Combining small rockets with antimatter-triggered nuclear bombs of arbitrarily low
yield would usher in new military possibilities only vaguely hinted at today. With enough of
these rockets, it might feel as if it were impossible to run out of ammo, since so many of
them could be contained within such a small space, and they would be so destructive. An
array of micro-rockets the size of ten AK-47 magazines could be used to level a small city.
The W54, the smallest nuclear warhead ever built, weighs about 23 kg (50 lb), and its most powerful version has a yield of up to 1,000 tons of TNT. Imagine scaling that down by a thousand times, to a core weighing 23 g (0.8 oz) with a yield of one ton of TNT, small enough to fit about 20 rounds (taking into account additional mass for the micro-missile and propellant) into a 1 kg (2.2 lb) steel magazine of the kind used in an AK-47. That's like having 20 tons of TNT in a single rifle magazine. Like larger nuclear weapons,
miniature nuclear weapons would also produce radioactive fallout, especially if detonated
at ground level. They could also be given cobalt coatings to contaminate target zones for
decades. The possibilities are quite destructive, though the technology itself is speculative.
Military Applications of Conventional Nanotechnology
In the academic volume Military Nanotechnology, arms control expert Jürgen Altmann summarizes a list of what he considers important military applications of both nanotechnology in general and molecular nanotechnology in particular. This subsection and the next cover these. The (non-molecular, non-atomically-precise) nanotechnology
(NT) military applications he lists are as follows: 1) electronics, photonics, magnetics, 2)
computers, communication, 3) software/artificial intelligence, 4) materials, 5) energy
sources, storage, 6) propulsion, 7) vehicles, 8) propellants and explosives, 9) camouflage,
10) distributed sensors, 11) armor, protection, 12) conventional weapons, 13) soldier
systems, 14) implanted systems, body manipulation, 15) autonomous systems, 16)
mini/micro-robots, 17) bio-technical hybrids, 18) small satellites and space launchers, 19)
nuclear weapons, 20) chemical weapons, 21) biological weapons, and 22)
chemical/biological protection. Each of these has geopolitical implications in the bigger
picture.
To clarify the difference between nanotechnology and molecular nanotechnology,
nanotechnology refers to devices with nanoscale features manufactured using any method
besides positional placement of individual atoms (such as molecular self-assembly, 3D printing, vapor deposition, or laser cutting), as opposed to bottom-up manufacturing,
building structures atom by atom (molecular nanotechnology). Altmann expects most of the
developments listed below to be in use by the mid-2020s, though we tend to be more
pessimistic (or optimistic, as the case may be), placing most of them in the 2030s or later,
with specific predictions provided for select applications.
In microelectronics, nanotechnology could provide tools to break the 7 nm lower limit
feature size of photolithography. Even if this limit cannot easily be broken, NT methods
could be used to stack wafers three-dimensionally, permitting the continuation of Moore's law and further increases in computing power. NT could also be used to build gigahertz
mechanical resonators for filters and other applications. In photonics, NT will provide many
new opportunities to play with light, from generating natural light with LEDs, to detecting
single photons, and other optical tools such as waveguides and nanostructured
metamaterials. Related to camouflage, nano-photonics will allow the creation of garments
that project video images of the scene behind them towards a specific target, providing a
crude invisibility cloak (crude because it does not literally bend the light around it, as
other, smaller invisibility cloaks do). Such camouflage systems, and any conceivable
camouflage system, primarily work in one direction. Using phased array optics, holodecks
could be produced that simulate the appearance of virtual objects at any distance, real enough that they could be viewed with binoculars.46 Coupled with haptic pressure suits (to
simulate the sense of touch), this could provide realistic virtual reality even in the absence
of MNT. Nanophotonic circuits could provide more powerful computers and communication
links that transfer descriptively tagged three-dimensional videos of a scene in a fraction of
a second.
Nanotechnology would provide for new displays of any shape, small or large.
Covering wavy surfaces such as clothing or the inside of helmets, they would be incredibly
sharp and bright. Generally, NT will enable the increasing trend towards more ample light
in interior spaces. The pixel size of NT displays would be similar to that of a red blood cell
(12 microns). They would be durable and operate under wide temperature ranges. Instead
of being disposable electronics, displays would become durable objects, like stone or metal
monuments. Advances in magnetics enabled by the exploitation of giant
magnetoresistance (GMR) will allow improved memory drives which retain state during
power-down and boot instantly. All of these features will make electronics more comparable
to physical objects like rifles in terms of their durability and reliability instead of the fragile
devices they are now. It will permit electronics and displays to be built into tools that
undergo abuse such as helmets, clothing, vehicles, structures, walls, drones, and so on.
Larger structures will play with light like crystals or mirrors, camouflaging tanks or creating
visual chaff or mirror labyrinths on the battlefield.
Altmann states that, making use of NT-enabled miniaturization and integration, complete electronic systems could fit into a cubic millimeter or less. However, power supplies
(batteries) would not shrink to a corresponding degree, putting limitations on the ability of
these devices to transmit radio signals unless they are connected to larger (but still small)
batteries similar to button batteries. Nanostructured batteries with extremely high power
densities have been demonstrated, but these are unstable (they only last for a few
recharges) and whether they could be mass-produced using NT methods is unknown. We
generally assume they will not be, but unforeseen breakthroughs should not be ruled out.
Regardless, miniaturization of electronics to a significant but not extreme degree is to be
expected as nanotechnology advances incrementally over the next twenty to thirty years. It
will become practical to integrate functional NT devices into small objects like glasses and
ammunition. Computers superior to present-day laptops will fit in cubes a few mm on a side
and be embedded in a flexible communications network throughout all military devices.
The advancement of software and artificial intelligence in relation to the development
of nanotechnology is an area where analytic caution is warranted. Mostly, pre-MNT
nanotechnology will not directly impact AI research, much of which is currently based on
pure mathematics and trial and error. Improved computers will allow the brute-forcing of
certain straightforward data processing challenges, like voice recognition and image
classification, but qualitative improvements require human ingenuity and programming,
which is on an independent vector from advances in physical technologies like
nanotechnology. MNT could offer such great quantitative improvements in computing
power that it will provide qualitatively better results, but it seems unlikely (though not
impossible) that NT could.
AI milestones to watch for include computers that communicate in natural spoken
language, realtime language translation, computers that accurately anticipate the need for
and pre-emptively search for required information in a messy realtime environment, and
those that visually recognize their immediate environment in three dimensions and perform
complex navigational tasks autonomously. Precisely quantifying anticipated improvements
in automated planning, decision preparation and management software is difficult, since
these are very complex tasks and improvements are hard to predict. It seems reasonable
to expect modest improvements in performance of the above tasks by 2020 and more
substantial improvements by 2030, but unforeseen stalls or breakthroughs are likely. Of
course, software that can perform all of the listed tasks would be invaluable for military
applications.
Fundamental improvements in materials from NT alone will be somewhat limited.
Specialty products such as carbon nanotube-infused armor and artificial sapphire
missile/view-port windows will become less expensive and more widely used, and the
performance of various materials will improve incrementally due to nanoscale additives
which modify their macroscale properties. Carbon nanotube composite speedboats, for
instance, have about twice the strength-to-weight ratio of steel speedboats and use
correspondingly less fuel. The same materials will be used for trucks and airframes.
Nanostructured and microstructured meshes will enable better ergonomics that allow pilots
to endure greater g-forces and impacts without blacking out. Amorphous metal, a
non-crystalline (disordered at the atomic level) form of metal, has about twice the tensile
strength as conventional metal and three times the toughness. In 2011, there was a
breakthrough to lower its cost, but as of 2015 it is not being mass-produced. If it can be
mass-produced, it will make improved structural components, penetrators, and armors. The
Air Force Research Laboratory is currently investigating the use of amorphous aluminum
and titanium for airframes.
More advanced nanotechnology will permit the production of materials with self-healing properties, surfaces made of microscale polymers which seal when cut. Simple
smart materials such as shape memory alloys and piezoelectric actuators will allow drone
wings which can withstand unusual stresses or vehicles which can slightly change shape to
perform specific tasks. Similar actuators could be used for haptic suits which give people
the sensation of picking up real objects in virtual reality, applying pressure to their fingertips
or other parts of the body. This could be used to greatly improve training simulations for
soldiers or enable better data management, display, and analysis for planners. Eventually,
many common military objects made of plastic will be reinforced with carbon nanotubes,
making them stronger and lighter. This includes everything from backpacks to internal
missile components to car seats.
The next domain concerns energy sources and storage. As mentioned before,
batteries have a limited window of improvement prior to anticipated improvements from
pure bottom-up manufacturing. Improvements of 50 percent in storage capacity over the
next 10-20 years are likely. Nanotechnology will allow the fabrication of much smaller
batteries than were previously available, to power tiny, invisible sensors, monitors,
communication networks, flying cameras, implants, devices that flow through the
bloodstream, and so on. Nanostructured fuel cells might make hydrogen fuel cells more
widely used, though these energy sources seem unlikely to replace conventional batteries
and motors except for specialty uses.
A candidate for the crowning achievement of nanotechnology would be the mass
production of extremely cheap and efficient solar cells, though it may be a number of years
before they really become dirt cheap. Improvements in solar cells follow a Moore's-law-like curve, with an annual 7 percent reduction in dollars per watt, a
trend which has held up for 30 years and should continue for the next 20 years at least.
One scientist predicts that solar electricity will drop below the cost of coal-generated
electricity by 2018, and that solar will cost half as much as coal electricity by 2030. If this
pans out, it could double our available energy, as well as enabling military operations and
power sources in remote areas without power infrastructure.
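As a rough illustration of how such a cost curve compounds, here is a short Python sketch. The starting solar price and the coal benchmark in it are illustrative assumptions, not figures from this chapter; only the 7 percent annual decline is taken from the text.

    # Compound a 7% annual decline in solar cost per watt (the rate quoted above).
    # The starting price and coal-equivalent benchmark are illustrative assumptions.
    def years_until_cheaper(start_cost, target_cost, annual_decline=0.07):
        """Count the whole years needed for a cost declining by annual_decline
        per year to fall from start_cost to target_cost."""
        years, cost = 0, start_cost
        while cost > target_cost:
            cost *= 1.0 - annual_decline
            years += 1
        return years, cost

    solar_cost = 1.00   # assumed current solar price, $/W
    coal_cost = 0.70    # assumed coal-equivalent benchmark, $/W
    years, final_cost = years_until_cheaper(solar_cost, coal_cost)
    print(f"After ~{years} years of 7% declines: ${final_cost:.2f}/W vs ${coal_cost:.2f}/W for coal")

At 7 percent per year, costs halve roughly every ten years, which is what makes the "half the cost of coal by 2030" prediction at least arithmetically plausible.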
Current micro-robots and similar devices depend upon conventional batteries to
provide power, or integrated circuits that wirelessly receive small amounts of power from an
external source, in the case of RFID tags. As conventional NT improves, it will enable
microsystems that work more like car engines, utilizing hydrocarbon fuels and micro
thermo-electric converters to improve energy density storage by a factor of ten. This will
permit longer flight and loiter times for micro-UAVs, some of which may be the size of small
insects, and enable more complex and powerful microsystems in general. In fact, you
might say it would make micro-UAVs truly practical, and that current micro-UAVs are
impractical given their short-lasting conventional power source. Scaled up to the weight of
a few pounds, such systems could provide power in the range of 20-60 W to soldier
systems (like heads-up displays in a helmet) and small robots (surveillance drones or
bomb disposal robots). This is enough to power lasers, heaters, spotlights, computers,
pumps, terahertz imagers, climbing gear, listening gear, seismic sensors, mechanical
grippers, lab-on-a-chip, biological digesters, propulsion thrusters, and so on. Instead of
charging their electronic devices in a wall socket, the soldiers of 2030 could just fill up
with hydrocarbon fuel and get a full charge instantly. Instead of carrying expensive and
heavy batteries for each device that they need to swap out, soldiers would have one liquid
fuel battery for each device and carry a small fuel tank.
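The factor-of-ten claim is easy to sanity-check with round numbers. The energy densities and converter efficiency in the Python sketch below are ballpark assumptions (about 45 MJ/kg for liquid hydrocarbon fuel, under 1 MJ/kg for lithium-ion batteries, 15 percent conversion efficiency), not figures from this chapter.

    # Back-of-the-envelope comparison: hydrocarbon fuel plus a small converter
    # versus carrying batteries. All three constants are assumed round numbers.
    FUEL_MJ_PER_KG = 45.0        # approximate energy content of liquid hydrocarbon fuel
    BATTERY_MJ_PER_KG = 0.7      # approximate energy stored by a lithium-ion battery
    CONVERTER_EFFICIENCY = 0.15  # assumed micro thermo-electric conversion efficiency

    usable_mj_per_kg = FUEL_MJ_PER_KG * CONVERTER_EFFICIENCY
    advantage = usable_mj_per_kg / BATTERY_MJ_PER_KG
    print(f"Usable electricity per kg of fuel: ~{usable_mj_per_kg:.1f} MJ")
    print(f"Advantage over batteries: roughly {advantage:.0f}x")

Even with a very inefficient converter, the raw energy content of liquid fuel is so high that the order-of-magnitude advantage survives.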
Improvements to propulsion from NT primarily involve miniaturization opportunities.
As previously mentioned, very small rockets with payloads and fuel tanks weighing just a
few dozen milligrams each should be possible. These would have a range in the tens to
hundreds of meters. Anything much smaller than these would have difficulty pushing
against the air and would have to be slower and have a lesser range, making them useful
for a point-blank attack only. Since the volume of a fuel cell decreases with the cube of its linear dimension, rockets smaller than a centimeter, or weighing less than five grams or so, would have too limited a range and be too slow and inefficient for regular use. Slightly larger rockets (but still extremely small by today's standards) would
work fine for relatively close-range combat.
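The scaling argument can be made concrete with a few lines of Python. The reference rocket size and range below are stand-in assumptions; the point is only that fuel volume falls with the cube of size while drag area falls with the square, so range shrinks roughly in proportion to size.

    # Crude scaling model: range ~ (fuel volume) / (drag area) ~ linear size.
    # The reference rocket (3 cm, 300 m range) is an illustrative assumption.
    def scaled_range(ref_range_m, ref_length_m, new_length_m):
        scale = new_length_m / ref_length_m
        fuel_volume_factor = scale ** 3   # fuel shrinks with the cube of size
        drag_area_factor = scale ** 2     # drag area shrinks only with the square
        return ref_range_m * fuel_volume_factor / drag_area_factor

    for length_m in (0.03, 0.01, 0.005):
        r = scaled_range(300.0, 0.03, length_m)
        print(f"{length_m * 1000:4.0f} mm rocket -> roughly {r:5.0f} m of range")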

Used as weapons, these rockets could set off small explosions which would be fatal,
or drive themselves into flesh and bone mechanically. They could target sensitive parts of
the body like the brain or heart. Small propulsion systems will also enable a range of small
robots for every purpose from surveillance to transport to confusing the enemy. These
robots could cover land, sea, and air in huge numbers and blur the lines between
brinksmanship and direct conflict. Just like the electronics that soldiers carry with them,
these robots could refuel with liquid hydrocarbons. There could even be tanker robots that
travel in the field and refuel these swarms. Carrier-like systems are possible, where
numerous small robots travel in larger robotic carriers for efficiency and protection until
they reach a target zone and disperse to complete a mission. These robots could even
consume local biomass for fuel, as has already been demonstrated in a prototype.
More exotic forms of propulsion based on shape-changing materials and
nano-/microstructured materials could take inspiration from biology: submarines that swim
like squid, UAVs that flap their wings like birds, ground robots that stalk like tigers, etc.
Nanotechnology will enable redundant propulsion. By making systems small, highly
functional, and flexible, several types of locomotion could be used by the same unit. One
possible combination could be rocket thrusters, swimming ability, and climbing ability. For
example, these locomotive abilities would allow a swarm of attack robots to fly to a marine
position, swim to shore, climb shoreline cliffs, and reach a target. This would enable
greater stealth, unpredictability, and more efficient use of fuel. Other combinations are
possible. Covert robots that fly through the sky as rods too fast for the eye to see could
be fabricated, invisible in much the same way that a bullet in flight is invisible. Such
rods would be impossible to detect with radar and would require physical barriers to defend
against.
More mundanely, NT will allow rocket and conventional engines that operate at higher
temperatures because they are made out of better materials. This will make them more
efficient and give the craft that use them a longer range. As far as using smart materials in
the military, MIT's Institute for Soldier Nanotechnologies is developing an exoskeleton that
uses the shape-changing polymer polypyrrole for locomotion. This polymer contracts or
expands in response to an electric field, much like the piezoelectric crystal in a scanning
tunneling microscope. A form of locomotion that could be used for nano-robots is the
rotation of the natural flagellum used by bacteria, or artificial variations which accomplish
the same task. A hybrid artificial/biological motor that runs on ATP has already been built
with funding from DARPA.
For vehicles, NT will create new opportunities for the wider integration and production
of light armor in the form of nanotube plastic composites. First-generation NT would have a
minimal impact on heavy armor, as it relies on bulk metals, unless there is an unforeseen
breakthrough in the production of amorphous metals, as mentioned earlier. If there is no
breakthrough, the overall structure of tanks and battleships would remain much the same
as they are today. Similarly, because tanks and battleships rely on large, high-intensity
power sources that run on fossil fuels or nuclear power, relatively little would change for
them as far as new power sources introduced by NT, which are most suited to small units.
To reiterate, we are looking at the 2030s time window here; advances in MNT in the
2040s-2060s and beyond could potentially allow the production of extremely durable and
high-performance tanks and battleships constructed out of diamond (if MNT is a success),
but conventional NT would not make this available. It is important to recognize the
differences in capability increases provided by nanotechnology (NT) versus molecular
nanotechnology (MNT), and to reiterate that the latter is an advanced subset of the former,
expected further in the future.
As for vehicles, those which would benefit the most from initial advances in NT are
aircraft, where weight and space are at a premium. This especially applies to unmanned
craft, which can be made small since they have no pilot. We've already summarized how
NT will enable the mass production of tens of thousands or even millions of small flying
units, which could use hierarchies of carriers for secure transport between battlefields and
war zones. These swarms could introduce abrasives into enemy vehicles, distribute
nanofilaments that cause short circuits, clog intake and exhaust filters, initiate dangerous
and damaging chemical reactions (acids, etc.), cover and disable view ports or other
sensors, cause overheating (the way bees swarm together to kill giant wasps), even cover vehicles
in immobilizing nets or bags of melt-resistant polymers. Large swarms could defend high-value stationary targets and intercept bullets or even smaller missiles and rockets. With
high maneuverability, swarm behavior, varied propulsion methods, and varied attack
methods, coordinated flocks of small flying robots might seem like magic to the soldiers of
today. These are systems which will become available within 20-30 years, whether or not
MNT is developed. Small swarms of quadcopters performing cooperative tasks like tossing
a ball up from a net have already been publicly demonstrated and have elicited military
funding. Their performance is already eerily impressive. For a picture of how larger swarms
of more maneuverable drones might operate, look up flocks of starlings on YouTube.
Guiding and providing information to such swarms on the battlefield will be vast
networks of sensors, especially when on defense rather than offense. Today, soldiers are
provided with very limited environmental data, and mostly rely on the same basic means of
threat detection used since the Cambrian era: our eyeballs. Supplemental information
includes local intelligence, GPS devices, and aerial surveillance. Given the uncertainty of
the battlefield, even small pieces of information from auxiliary sources such as these can
make the difference between life and death.
With NT, environmental sensors can be built which are a millimeter (1,000 μm) or less
across, producing smart dust that will eventually reach the dimensions of actual dust
particles (0.05-10 μm). These could be blown around by the wind and pick up footsteps,
vehicle movement, even detect whether someone is asleep or not. It will become possible
to actually listen to the heartbeats of the enemy with smart dust networks.
Countermeasures would have to rely on brain-to-brain interfaces, decoys, or similar means
of selectively withholding strategically relevant audio or visual information from the
environment. Some information, like the physical location of a human being, may be too
inherently noisy to conceal.
A persons location is given away by the heat they emit, the noise they make while
walking and breathing, their visual signature, and other variables. These will be picked up
by smart dust on future battlefields and used to target individuals for lethal or non-lethal
strikes. It will increase the incentive to put soldiers behind armor or to remote control
drones from concealed or distant positions. Sensor networks could detect minute changes
in light, radio, sound, temperature, vibration, magnetism, or chemical concentrations.
Though nascent efforts towards smart dust have been made, the technology basically does
not exist and will not for another 10-20 years. NT will pave the way. Current battlefield
sensors are a few centimeters in diameter at the smallest; smart dust will consist of
sensors a millimeter or smaller in size. A millimeter is about the thickness of ten pages of
printer paper. Smaller sensors are conceivable for specialized applications like chemical
sensing. Some may be biologically inspired or borrow components from nature, such as
chemical-sensitive microorganisms.
So far, we've covered 10 military applications of nanotechnology out of 22.
Understanding how they all combine to unlock new capabilities and geopolitical chaos is
crucial for comprehending the nanotechnology and robotics threat profile, what these
technologies will enable in the next 90 years, and how they could threaten our survival as a
species. It also helps us build up an overall picture of the future as a starting point to
consider threats in a holistic way. Seth Baum, founder of the Global Catastrophic Risk
Institute, has emphasized this specific point: we need to examine risks holistically rather
than just individually and separately.
Nanotechnology is especially important because it enables technologies that underlie
a great deal of foreseeable 21st century risks. Thus a solid understanding is crucial for
global risk analysis, and especially for analyzing the risk of human extinction. The
remaining potential military applications to cover include armor, conventional weapons,
soldier systems, implants and enhancement, autonomous systems, mini/micro-robots, bio-technical hybrids, small satellites and space launchers, nuclear weapons, chemical
weapons, biological weapons, and chemical/biological protection. Again, we would like to
make clear that all of these advancements, many of which already exist in prototype form,
are based on conventional nanotechnology, not molecular nanotechnology, so critiques
directed at MNT do not apply to the technologies listed in this section. We describe
potential MNT military applications in the next section so that each category can be
considered separately.
With regard to armor, we already mentioned that improvements would be
concentrated in light armor instead of heavy armor. In the short run (10-20 years) new
materials will provide armor that is about twice as protective as today's best bulletproof
vests. In the long run (40+ years), mass production of buckypaper could lead to armor that
is 300 times stronger than steel and several times lighter. Based on a technique called the
Egg of Columbus, scientists have made Endumax fibers (a kind of synthetic fiber) dozens
of times stronger by knitting them into small knots on the microscale, setting the record for
the world's toughest fiber47. If used with graphene, the researchers who developed this
technology say it could be made with a toughness modulus as high as 100,000 J/g. In
contrast, the toughness modulus of spider silk is 170 J/g and that of Kevlar is 80 J/g. Since the knots
would unravel on impact, this technique would only provide protection from one strike,
though the protection it would provide for that encounter would be extremely high. Thick
layers of graphene fiber armor built with billions of small knots would provide extremely
effective protection for tanks, ships, airplanes, exoskeletons, and so on.
Nanotechnology will improve conventional arms. One of the most notable and
potentially dangerous inventions would be the development of metal-free handguns and
rifles, made using nanotube-reinforced polymers. In Military Nanotechnology, Altmann even
recommends that metal-free firearms be banned outright. Already, there exist 3D printed
plastic guns which only use a small amount of metal, though their performance is
substantially worse than a real gun. Of course, a plastic gun could go right through any
metal detector, and be less conspicuous to x-ray scanners. Besides plastic guns,
nanotechnology will allow for reduced weight on projectiles such as missiles and bullets,
allowing for increased muzzle velocity and range. It could also allow electronics and
guidance systems to be integrated even into rifle bullets, which would be equipped with fins
to guide them to a target. This would allow bullets to curve in midair, producing rifles and
machine guns that have near-perfect accuracy even if the shooter has little experience.
With such bullets, every shot would be a head shot. Every kind of projectile, from the
Vulcan chain guns on helicopters to anti-aircraft missiles fired from man-portable systems
to mortar shells to handguns, could be provided with self-guiding ammo. This could allow
missiles to be smaller and more precisely targeted towards weak points on enemy
vehicles, requiring a smaller payload to destroy a target.
As for the penetration capacity of firearms itself, NT would provide only limited
improvements, since the metals currently used for high-velocity rounds are close to the
theoretical limit of material density in normal matter. Better manufacturing procedures could
lower the cost of sabot rounds, which travel faster than conventional rounds by including
components that strip off from a flechette during flight, imparting momentum. As previously
mentioned, NT will allow very small missiles, down to a few millimeters in diameter, which
would have low kinetic energy but could still deliver a fatal blow by setting off just a few
grams of explosive near a sensitive part of the body such as the face or stomach.
Improvements in power source and supply will make exotic weapons such as lasers,
microwave beams, and electromagnetic accelerators (railguns) more practical. Large
railguns have already been developed by General Atomics for the Navy and tested in labs,
with at-sea testing scheduled for 2016 48. These railguns fire projectiles at Mach 7, about
5,000 mph. It is unknown whether electromagnetic accelerators could be miniaturized
enough for use in small arms, but if so, they could be powered by the portable hydrocarbon
fuel cells mentioned earlier.
Soldier systems refers to the equipment that soldiers carry on their bodies to augment
their natural capacities and talents, such as sight and planning. These systems are
universally important for warfighting, and especially for protracted and guerrilla conflicts,
when resources are stretched thinner and maximum mileage must be extracted from long-lasting systems. Examples include night vision, shoes, clothing, laser rangefinders,
backpacks, water filters, and so on. The DARPA-funded Institute for Soldier
Nanotechnologies (ISN) is pursuing nanotechnological upgrades to soldier systems with
their super-suit vision, what they call a nano-enhanced super soldier. The project aims
to create a 21st century battlesuit that combines high-tech capabilities with light weight and
comfort that monitors health, eases injuries, communicates automatically, and maybe
even lends superhuman abilities. A demo video of the battlesuit by video students
unaffiliated with DARPA or ISN shows the anticipated functions of the suit: a super shield
described as anticipatory projectile deceleration, a super cast built into the suit that
snaps broken bones back into place and holds them there, super vision that is essentially
night vision combined with augmented reality, and super stealth that makes soldiers
invisible to all electronic forms of detection and even the human eye (an implausible claim
given that invisibility cloaks only work in one direction).
Soldier systems serve as force multipliers, meaning they make the soldier more
effective on the battlefield and make him harder to kill. A sniper with night vision,
concealment, and skill can potentially take out dozens or even hundreds of enemy soldiers
without being shot or captured. Conversely, an untrained and undisciplined mercenary with
an AK-47 may not be a very effective soldier at all. Another factor has to do not with the
effectiveness of individual soldiers, but the logistic demands of groups, which shape
deployment and movement patterns even more than the fighting capacity of individual
soldiers. An example of an NT-related invention that provides solutions in this domain is a
water extracting system (atmospheric water generator) that can extract hundreds of gallons
of water directly from the air, even in a desert 49. This already exists, but could be improved.
In a typical desert mission, a squad or platoon will be dependent on regular resupply from a
water truck, which is itself vulnerable to attack and gives away the position of soldiers. A
squad or platoon that can use nano-filters to purify water from local sources will not be as
dependent on a water truck, which, aggregated across hundreds or thousands of squads
with similar advantages, can make an entire military force more capable by reducing its
logistic complications. This can be even more helpful to fighting a war than better guns or
armor. Much of war is fought with food and water.
When it comes to clothing used for physical activities, or just clothing in general, our
technology today is rather limited. With NT, we will be able to build microsystems that
circulate cooling or heating fluids through hollow nano- or micro-fibers in combat fatigues or
a battlesuit. This would be especially crucial for a full-body battlesuit, which would
otherwise become intolerably hot in short order. In cold environments, avoiding overheating
is especially important, because overheating causes one to sweat, which then lowers body
temperature and increases the risk of hypothermia. Using active cooling suits, the future
soldier could be kept at a perfect temperature at all timescooled down while running to
minimize sweating, warmed up quickly when having to hold position in a freezing wind.
Clothes could be more form-fitting, since the primary advantage of baggy clothing (air
circulation and cooling) would be achieved instead by these active systems. Specialized
clothes could perform the function of a cast or bandage by utilizing multi-functional NT
materials which provide thermoregulation, blood absorption, and stiffening in response
to injury. Nano-clothes such as these seem almost mandatory for making battlefield
exoskeletons practical, otherwise it would be an unacceptable inconvenience and a danger
to step out of an exoskeleton for minor medical treatment or simply the feeling of
overheating.
The number one preventable cause of death on the battlefield is bleed-outs,
especially from injuries to the neck or groin, where arterial bleeding causes the soldier to
lose too much blood too quickly50. A 2012 study of US combat soldier
deaths in Iraq and Afghanistan concluded that 25 percent were potentially survivable,
meaning that if the victims had better medical equipment or treatment, they could have
survived. A nano-suit that covers the neck and groin could potentially stretch to seal a
wound as soon as it is created, stopping bleeding until the injury can be treated by a
combat medic.
A nano-suit that covers the entire body could protect it from shrapnel fragments that
would otherwise impact above or below a bulletproof vest and helmet. If artificial blood is
developed in the next decade or two, which seems plausible, a nano-suit could contain
many small reservoirs of the fluid, pumping it into the bloodstream to buy an injured soldier
more time. Glucose reservoirs could be used for the storage of food energy. Combined with
an exoskeleton, a nano-suit would let soldiers jump farther, run faster, and carry heavier
loads. A typical soldier load is somewhere north of 100 lbs, heavy enough that it causes a
worrisome incidence of permanent back and spine injuries among soldiers. Even an
exoskeleton operating on low power could make a decisive difference in the speed and
agility of soldiers with a full pack, saving lives. Experiments have shown that soldiers can
retain combat effectiveness for as long as 40 hours with the help of modafinil; this would be
made easier if an exoskeleton is doing all the heavy lifting. After about 12 hours of heavy
activity, even the best soldiers get tired and need increasingly longer breaks.
NT opens up the possibility of implants for physical and cognitive enhancement.
Implants today are mostly limited to cochlear implants for the deaf and hard-of-hearing, or
pacemakers for heart patients. With mature NT, implants could be fabricated for a mind-bogglingly large array of uses, from implants that release chemicals to accelerate
metabolism, to those which allow artificial telepathy, to ocular implants that outperform
real eyes. Some of these, like the first, are just around the corner, others 20-30 years in the
future, and still others (particularly useful brain implants) 40-50 years in the future and
beyond. The 2002 NSF-funded report Converging Technologies for Improving Human
Performance outlines some of these possibilities, but only scratches the surface 51.
Exhaustive options for human implants have yet to be explored technically because
they are not yet practical to implement. NT will make them practical within the lifetime of
many people living today. One difficulty to be overcome is the relative invasiveness of
surgery, especially brain surgery. Perhaps surgery by robots could be improved to the point
where tissue damage is negligible and post-surgery healing can occur quickly, maybe with
the assistance of micro- or nano-structured tissue trusses and grafts. There are also a
number of ethical issues which have to be confronted regarding implants, but these issues
haven't prevented the military, and especially DARPA, from funding extensive studies on
the potential military uses of implants 52. If a nano super-suit turns someone into a super-soldier, a super-suit plus implants would make a soldier doubly super. Because the use of
NT-based implants is such a massive topic, we leave further details to Chapter 11, which
focuses on the risks and benefits of human enhancement in the context of global risks.
Potentially the greatest impact of military NT is in autonomous battlefield systems.
These would replace many of the functions of soldiers for attack, defense, and recon. One
source defines autonomous robots as follows: a fully autonomous robot can 1) gain
information about its environment, 2) work for an extended period without human
intervention, 3) move either all or part of itself throughout its operating environment without
human assistance, and 4) avoid situations that are harmful to people, property, or itself unless
those are part of its design specifications. The last one is particularly tricky, and it seems
likely that if autonomous robots are ever widely deployed, they will accidentally damage
property and people as well. However, the same applies to the actions of human beings,
and the military appeal of these robots makes it unlikely that they will be relinquished
because of this.
As we've already mentioned several times, swarms of small robots would be
effective NT-built weapons, perhaps even the killer app of military nanotechnology.
Without the ability to operate autonomously, robots have limited use, since they need to be
piloted remotely by human beings. With autonomy and mass-production, all sorts of
systems could be built, swarms that swim, hop, glide, flap, and wiggle their way to war.
Unless there is a major change in ethical views towards autonomous systems, it's unlikely
that these robots will kill without a human in the loop, but they could disrupt, drown out,
harass, conceal, or confuse targets without human input. Some of these robots might use
neural nets with cognitive complexity and design principles borrowed from or inspired by
cockroaches, rats, fish, birds, mosquitoes, squirrels, mules, horses, tigers, lions, cheetahs,
elephants, jellyfish, bats, penguins, wasps, bees, worms, leeches, mice, lizards, springtails,
dinosaurs, and many other animals.
Altmann defines macro-scale autonomous systems as those with a size above about
0.5 m (1.6 ft). These already exist in the US military today, as bomb disposal robots or
flying surveillance drones. NT will allow them to use materials and miniaturized electronics
that make them more flexible, adaptive, numerous, coordinated, autonomous, and animal-like. Because full environmental autonomy is a software problem, progress on this front is
more difficult to predict than something like the gradual improvement in computers, so we
predict that it could be anywhere between 10 and 40 years before there is software to
direct large swarms of medium-sized robots autonomously and effectively in unpredictable
environments with changing weather.
Many functions must be performed for robots to autonomously complete meaningful
missions, including navigating terrain, avoiding moving obstacles like wild animals,
livestock, and people, recon, concealment, refueling, defense and attack formations,
escorting, following, dispersal, return to base, self-destruct, and so on. Today, the software
palette of autonomous robot functions is very limited. Even the task of traversing terrain is
still a challenge. Substantial progress is happening, however, led by robotics companies
like Boston Dynamics, acquired by Google.
Despite improvements in miniaturized electronics, powerful robots with 5-centimeter
or larger guns will weigh several tons, though their size doesn't mean they can't be
autonomous. Some autonomous systems could be quite large, the size of tanks or even
larger. Lacking a crew, these tanks could accelerate more rapidly, engage rougher terrain,
hide underwater or underground, and be completely redesigned for greater mobility and the
ability to perform maneuvers that would kill a driver. The same applies to small submarines,
boats, and planes. Autonomous vehicles could more easily operate around the clock with
minimal breaks, independent from the weaknesses of a human crew. Lacking life support
systems, these vehicles could be smaller and lighter and therefore have room for more
armor. Entry ports might even be welded shut.
Moving on to smaller sizes, Altmann defines mini robots as under 0.5 m (1.6 ft) in
size, and micro robots as below 5 mm (0.2 in) in size. Of course, construction of these
would be greatly enabled by advances in NT. Altmann claims that NT will likely allow
development of mobile autonomous systems below 0.1 mm, maybe down to 10 μm. This
is still 2-3 orders of magnitude larger than the size of nano-robots and nano-assemblers
that are in the realm of MNT, but extremely small at the lower end (10 μm is a bit larger
than the diameter of a red blood cell and is small enough for microbots to circulate through
the bloodstream). Altmann does not explicate how these robots would be fabricated, but
based on prototypes that have already been built in a lab, they would probably be laser-cut
laminated sheets of advanced materials or photolithography-fabricated MEMS (micro-electro-mechanical systems).
Not many autonomous micro-robots have been fabricated as of this writing, and those
which have tend to be one-shot prototypes. In 2012, the Harvard Microrobotics Lab developed
a system for the mass production of flying micro-robots, but no actual mass
production has been done53. Korean researchers have built small bio-mimetic worm robots
that use actuators (artificial muscle wires) made of shape-memory alloy54. The worm uses
tiny earthworm-like claws and is 4.0 mm long by 2.7 mm wide. The Harvard Microrobotics
Laboratory is a current center of microbot activity, their flagship project being a Monolithic
Bee that is fabricated using a unique 2D-to-3D pop-up book process, where the chassis
is formed as a single laser-cut piece of laminate, which is then pushed upwards by pins to
create the full 3D design. The chassis material uses 18 layers of carbon fiber, Kapton (a
plastic film), titanium, brass, ceramic, and adhesive sheets which are laminated together,
and the end result is about the size of a dime. More complex designs that fold like origami
are possible.
Currently, these robots are quite primitive. There is a major issue with regard to
limited energy supply, given that these robotic bugs cannot eat. Tiny batteries cannot
supply the power that digestive systems can. Without food, their time of operation is
measured in minutes rather than hours, making them impractical for any real use. They
lack stability control systems, meaning that within seconds of takeoff, they slam into the
ground again like drunken flies. The smallest stable artificial flyers are several tens of
centimeters across, about a hundred times heavier than these microbot
prototypes. Today, that is roughly the size autonomous flyers need to be to carry out useful
functions such as surveillance. With advances in NT, however, crucial systems will be
miniaturized and micro-flyers will actually become useful along with other micro-robots. In
military parlance, very small autonomous flyers are called micro-UAVs. Like many of the
projects described in this chapter, this one is receiving grants from DARPA.
There are a number of challenges to building functional autonomous systems at the
sub-5 cm scale. Besides power, these smaller systems carry sensors that are less effective
than they would be if larger, due to a smaller detection surface. Communication is also a
problem, as small communication antennae running on tiny batteries have a poor signal
strength. Laser communication would be difficult because diffraction causes a smaller
beam emitter to emit a wider beam. For its information to be accessed, tiny surveillance
drones may simply have to return to base or a larger carrier to have their data obtained
through physical download. Aside from that, smaller systems are more fragile and less
mobile. A gust of wind may send a UAV flying into a tree or concrete surface, snapping its
delicate components. A thick mist might cause micro-UAVs to become covered in water
which makes it impossible for them to fly, similar to how many smaller insects cannot fly
well in foggy conditions. Small ground robots could simply be stepped on or washed away
with hoses. Like human beings, they could be blown away with anti-personnel mines such
as Claymores or weapons like automatic shotguns. Naturally, they would have greater
difficulty passing physical barriers than larger machines or human soldiers would.
For mission applications, micro-robots could be used in reconnaissance (path-finding,
detection of enemy units and structures, environmental threats, mapping terrain), serving
as beacons to delineate targets for larger, remotely sourced attacks (railgun, bombing,
artillery strikes), or directly attacking the enemy, either by flying directly into targets,
detonating a small explosive, or injecting a toxic substance. Small UAVs could even
attach themselves to a person and threaten to kill them unless they comply with certain
commands. Many terrifying options would be possible for countries or organizations without
scruples or with sufficient motivation to win at any cost. Of course, since they are small,
these robots could be destroyed or disabled quickly via a variety of means, including
swatting them like flies.
Despite all their limitations, there are many futurists who think that the future of civic
policing, area denial, and population control lies in large numbers of these small robots 55. It
seems like the optimal robotic armies would make use of combinations of robots of various
sizes, from the tiny to tank-sized. Today, disabling a tank is quite easy if you can get close
to it: a thermite reaction over the engine block can ruin it. In the not-too-distant
future, tanks might be escorted by small swarms of hopping, crawling, and flying robots
which defend the area around them and provide short-range reconnaissance. These
smaller robots would also be able to go places tanks cannot go, such as underground, into
buildings, up steep hills, around corners, and the like. They could disguise themselves as
bugs or use quickly changing camouflage. Large swarms of these robots could also be
used to confuse or terrify the enemy with sights and sounds. They might even be used to
trigger epilepsy. Try looking at a YouTube video designed to trigger epilepsy and you will
quickly get a headache; imagine that times a thousand. There are international treaties
against blinding laser weapons, but not against using brilliant multicolored lights or sounds
to disorient or antagonize the enemy. Lights, especially strobe lights, are an effective
weapon that has barely begun to be exploited by the military. The battlefield of the NT
future may look like a disco, with soldiers having to depend upon virtual reality goggles to
carefully filter out the visual garbage and reveal real targets. To the soldier of today, it will
be a strange, strange world. All this could happen just within the next 30 years.
Besides worm or flea-sized robots and larger tank-like robots, there is the realm of
robots too small to be seen by the naked eye (<0.1 mm) and robots the size of single-celled organisms (2-200 μm). We examine these later in this chapter, in the sections on
MNT and grey goo. There is also the complex possibility of bio-technical hybrids, robots
of all sizes which incorporate both biological and artificial structures. These will also be
covered later.
Remaining applications of NT include small satellites and space launchers, nuclear
weapons, chemical weapons, biological weapons, and biological/chemical protection. All of
these topics have been covered sufficiently in other sections, so well move on.
Military Applications of Molecular Nanotechnology (MNT)
Now that we've reviewed the potential military applications of conventional NT, we
examine the more extreme and grandiose implications of bottom-up manufacturing, MNT.
In the opening of this chapter we cited the primary arguments against the feasibility of the
technology, and the numerous responses which have been made to critics by members of
the MNT community. Instead of delving deeply into these issues (which could fill an entire
volume), we simply assume the Drexler/Phoenix-style nanofactories described earlier in
this chapter are possible, and consider the military implications which proceed therefrom.
As previously stated, we expect these devices to be built sometime between 2040 and
2100, though exactly when is difficult to state. When nanofactories are developed, they
could have an extreme impact on the world system very quickly, on timescales of weeks or
months. We would be going from the Iron Age to the Diamond Age, to use sci-fi author
Neal Stephenson's term. The cost of goods could fall precipitously, making it so that even
work might become optional in some areas.
The military applications of conventional NT and MNT could not be more different.
The first involves incremental upgrades and miniaturization of existing military systems, the
latter concerns exponential automated production. Assuming a doubling time of 15 minutes
for assemblers, MNT manufacturing systems could theoretically go from one kilogram to
100,000 tonnes of fabricators in 28 doubling cycles, or seven hours (providing sufficient
feedstock and suitable space would be a challenge). In Design of a Primitive Nanofactory,
Chris Phoenix writes, the doubling time of a nanofactory is measured in hours 56.
Assuming a doubling time of 15 hours (the number used in the nanofactory paper), we go
from one kilogram to 100,000 tonnes of fabricators in 17 and a half days. Logistics and
regulations would be likely to slow the proliferation of nanofactories from this lower bound,
but mass adoption within 5-6 months seems like a realistic estimate if nanofactories are
made available to the public. In contrast, the mass adoption of cell phones took 20 years.
Nanofactories would be considerably more useful than cell phones, so we can suppose
that their adoption would be correspondingly more rapid.
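The doubling arithmetic is easy to reproduce. The Python sketch below starts from one kilogram, ignores feedstock, energy, and space limits (as the estimates above do), and checks both the 15-minute and 15-hour doubling times.

    import math

    # Doublings needed to grow from 1 kg of fabricators to 100,000 tonnes (1e8 kg),
    # ignoring feedstock, energy, and space constraints.
    START_KG, TARGET_KG = 1.0, 1e8
    doublings = math.ceil(math.log2(TARGET_KG / START_KG))

    for label, hours_each in (("15 minutes", 0.25), ("15 hours", 15.0)):
        total_hours = doublings * hours_each
        print(f"{doublings} doublings at {label} each: "
              f"~{total_hours:.0f} hours ({total_hours / 24:.1f} days)")

Running it gives 27 doublings, about seven hours in the first case and about seventeen days in the second, in line with the rounded figures above.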
Where would the matter and energy for all this manufacturing activity come from? The
sun, mostly. Phoenix states: "A power use of 250 kWh/kg means that a large 1-GW power
plant, or four square miles of sun-collecting surface, could produce ~12,000 8-kg
nanofactories per day (not including feedstock production)." That's roughly 100 tonnes of
nanofactory per day. Devote 100 power plants to production, and that goes up to 10,000
tonnes per day, giving us 1,000,000 tonnes of fabricators in 100 days. For matter, there are
two possible sources: hydrocarbons such as those in natural gas, or carbon dioxide from
the atmosphere. US annual natural gas production is over 30 trillion cubic feet. The weight
of a cubic foot of natural gas is about 22 grams and annual production is over 600 million
tonnes. Just one tenth of one percent of this, 6,000,000 tonnes, would suffice to build the
1,000,000 tonnes of fabricators we outlined, and plenty of products besides. Energy and
matter are not serious limits to the exponential proliferation of nanofactories.
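A short Python check of the power and feedstock figures, using only the numbers already quoted in this paragraph (250 kWh/kg, a 1-GW plant, 8-kg nanofactories, and 22 grams per cubic foot of natural gas):

    # Sanity check of the production and feedstock arithmetic quoted above.
    ENERGY_KWH_PER_KG = 250.0     # energy needed per kg of nanofactory (Phoenix)
    PLANT_POWER_KW = 1.0e6        # one 1-GW power plant, expressed in kW
    NANOFACTORY_KG = 8.0

    kwh_per_day = PLANT_POWER_KW * 24
    kg_per_day = kwh_per_day / ENERGY_KWH_PER_KG
    print(f"One 1-GW plant: ~{kg_per_day / NANOFACTORY_KG:,.0f} nanofactories per day "
          f"(~{kg_per_day / 1000:.0f} tonnes)")

    # Feedstock from US natural gas, using the figures in the text.
    CUBIC_FEET_PER_YEAR = 30e12
    GRAMS_PER_CUBIC_FOOT = 22.0
    gas_tonnes = CUBIC_FEET_PER_YEAR * GRAMS_PER_CUBIC_FOOT / 1e6
    print(f"US natural gas output: ~{gas_tonnes / 1e6:.0f} million tonnes per year")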
This brings us to the first and probably most important military application of MNT:
endless production. Because the factories are exponentially self-replicating and automated,
increases in their numbers could continue until logistic or legal limitations are reached.
These limits may be rather extreme. For instance, there may be a limit such that one
soldier can only control five autonomous tanks and twenty autonomous aircraft at any
given time. In that case, given a million soldiers, there would be no point in manufacturing
many more than five million tanks and a hundred million aircraft. Considering that the
United States military currently has only 6,344 tanks and 13,683 aircraft, that's quite a lot, enough
to take over the world many times over. Even a smaller country like Switzerland, if it had
MNT and the US did not, could potentially conquer the world with a few hundred thousand
tanks and aircraft. If one side has domination of the skies, it barely matters how many
soldiers the other side has; they will be bombed into submission. A country with 150,000 soldiers
(the size of Switzerland's military) and 100,000 autonomous tanks and aircraft is likely to
defeat a country with 1,000,000 soldiers (US military) and just 20,000 tanks and aircraft,
unless nuclear weapons are brought into the equation.
Think of what would happen if one country with access to MNT began mass-producing tanks and aircraft in this way. A country like China, Russia, or the United States
could increase the firepower of their main battle tank force by over ten times in 24 days, or
a hundred times in 240 days. Here are the calculations: the United States has about 6,000
main battle tanks at 50 tons each; manufacturing 60,000 such tanks would require about 3
million tons of diamond or other carbon allotrope (like buckypaper or other fullerenes), one
three-hundredth of US annual natural gas supply. With 100,000 tonnes of fabricators,
achieving that output would take just over nine days, going by the product manufacturing
time estimates given by Phoenix. If the country were starting out with an even smaller
army, it could conceivably increase its firepower by over a thousand times. All it needs is
the energy (which can come from MNT-manufactured solar panels if need be) and
hydrocarbons, which can be extracted from the atmosphere or fossil fuels. Of course, such
a military buildup need not be limited to tanks. It could include aircraft carriers, jets,
bombers, drones, conventional weapons, and so on. The electronic components could be
manufactured by the nanofactory, in the form of diamondoid nanoelectronics. A few
components, such as ammunition, would require the insertion of parts that could not be
built by nanofactories, requiring some traditional industrial infrastructure.
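The buildup arithmetic in this paragraph can be reproduced as follows; the per-fabricator throughput used in the Python sketch is an assumption chosen to be consistent with the roughly nine-day figure above, not a number taken directly from Phoenix's paper.

    # Tank-buildup arithmetic. The throughput figure is an assumption picked to be
    # consistent with the ~9-day estimate quoted above.
    TANKS = 60_000
    TONNES_PER_TANK = 50.0
    FABRICATOR_STOCK_TONNES = 100_000.0
    PRODUCT_TONNES_PER_FABRICATOR_TONNE_PER_DAY = 3.3   # assumed throughput

    feedstock_tonnes = TANKS * TONNES_PER_TANK
    days = feedstock_tonnes / (FABRICATOR_STOCK_TONNES *
                               PRODUCT_TONNES_PER_FABRICATOR_TONNE_PER_DAY)
    print(f"Carbon feedstock needed: ~{feedstock_tonnes / 1e6:.0f} million tonnes")
    print(f"Time with {FABRICATOR_STOCK_TONNES:,.0f} tonnes of fabricators: ~{days:.0f} days")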
Naturally, it might upset the current order of things if any one nation could mass-produce so much materiel so quickly. It's beyond our experience as a civilization. These
tanks needn't be made out of diamond; they could be built out of nanotubes, a kind of
fullerene, which are even stronger and are not brittle like diamond. Since they would be
made out of a material much stronger than steel, they could have less mass devoted to
armor, making it possible to manufacture more of them on a per-ton basis. Instead of using
50 tons of fullerenes to build a 50 ton tank, you might use 10 tons of fullerenes and fill up
the hollow spaces with steel shavings to provide inertia and stability. The assembly process
could have a stage where steel shavings are pumped into the shells of each tank, then
melted to form a solid steel plate. Although the nanofactories themselves could only
produce carbon-based products, they could manufacture other systems to handle steel and
other non-carbon materials very rapidly. The entire manufacturing infrastructure, civilian as
well as military, could be rebuilt to better specs with MNT machinery. The performance of
mining machines could be improved ten to a hundred-fold, allowing increased exploitation
of natural resources. All of this would speed up the pace of manufacturing. Futurist Michio
Kaku calls it the second Industrial Revolution 57.
Besides the mass production of conventional weapons, as well as the swarms of
micro-UAVs and autonomous tanks described in the previous section, MNT would permit
the construction of completely new systems for attack and defense, both stationary and
mobile. One example would be utility fog, clouds of microbots with six mechanical grippers
that allow them to hold hands and form into solid structures in response to external
commands58,59,60. Designed by scientist J. Storrs Hall, a cloud of foglets holding hands
would fill about 3 percent of the volume of air it occupies, with a density of 0.3 g per
cc and an anisotropic tensile strength of 1000 psi, about that of wood.
These foglets could move and connect in different patterns, manipulating armor
plates, bombs, or spikes of diamond along the way. A city surrounded by utility fog could
use it to raise diamond plates for shielding from attacks, or launch diamond spikes to
retaliate against besiegers. Utility fog could manipulate mirrors and concentrate sunlight to
overheat or ignite enemy tanks or airplanes. It could project lights and sounds to confuse
the enemy, and partially regenerate if blown apart. Sophisticated utility fog would be closer
to what we think of as magic than machines. Each foglet would have a diameter of 100
microns, the width of a human hair. A single nanofactory could manufacture trillions per
hour, enough to fill a cubic meter.
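One way to see where the trillion-per-cubic-meter scale comes from: if foglet centers sit on a three-dimensional grid spaced roughly one 100-micron body apart (an assumed spacing, since Hall's foglets also extend arms between bodies), a cubic meter holds about a trillion of them. A minimal Python sketch:

    # Count 100-micron foglets on a 3D grid spaced one body-width apart
    # (the spacing is an assumption; Hall's foglets also extend arms between bodies).
    FOGLET_DIAMETER_M = 100e-6
    SPACING_M = FOGLET_DIAMETER_M

    per_axis = int(round(1.0 / SPACING_M))   # foglets along one edge of a 1 m cube
    total = per_axis ** 3
    print(f"{per_axis:,} per axis -> about {total:,} foglets per cubic meter")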
Besides being used to mass manufacture tanks, airplanes, and utility fog, MNT could
mass-manufacture ordinary structures, from houses to military bases. If carbon became too
expensive or difficult to come by, structures could be built out of sapphire, which is
aluminum oxide. Aluminum is one of the most abundant elements on the planet. With MNT,
it would be feasible to mass-produce sapphire as well as diamond, though it would require
nanofactories with different tool-tips designed to manipulate aluminum oxides. Building
construction might use special mobile nanofactories which saunter around a construction
site and extrude material as they move. In this way, many structures could be built very
quickly with minimal human supervision. It would become possible to pave large areas of
land and put up buildings, even if they were never destined to be used. Mega-cities could
be built, covering hundreds of square miles. Large underground caverns could be
excavated, with sunlight routed into them through the use of MNT-built fiber optics. Entire
cities could be built underground or underwater, supported by diamond or sapphire pillars
and domes. The military implications of all this are extremely relevant to how mankind fares
in the 21st century.
Open Colonization of New Areas
Now that we've overviewed some of the key applications of NT and MNT, we turn
particularly towards factors which could cause or exacerbate geopolitical risk or tension in
the context of a post-MNT world. We assume that MNT will be accessible to several major
states around the same time, states like the US, European states, China, India, and so on.
So, events beyond the assembler breakthrough will be molded by the interaction of these
states on the global stage. The alternative is that MNT will be controlled and monopolized
by a single state, which would be unprecedented for a manufacturing technology that
provides such broad humanitarian and practical benefits.
Today, there are certain limited resources which the world struggles over and which
characterize interactions between states: primarily oil, but also natural gas and other
natural resources such as minerals. With MNT, the value of oil as a fuel goes down
because it becomes possible to build safe nuclear reactors or solar panels in huge
numbers. At the same time, the value of oil as feedstock goes up, because it is rich in
hydrocarbons which can be used to build nanoproducts and nanostructures. Thus, oil-producing states like Saudi Arabia will continue to be important. On the other hand, it is
possible to extract carbon from the atmosphere or biomass. Economic research is needed
to determine how resource prices, demand, and supply could change in a post-MNT world.
Though MNT will lead to new, more powerful, and more numerous weapons and new
possibilities for eliminating poverty, the same old tribal affiliations are certain to persist.
Palestinians and Israelis will still be attacking each other, radical Islamists will still seek
expansion, the United States will still seek to impose political hegemony on the world, with
Russia and China resisting it, and so on. Some of these factors may change in 40 to 50
years, but many of them are likely to persist in recognizable forms. People will not suddenly
become friendly and enlightened just because a new manufacturing technology is invented.
Human nature stays the same even when technology changes. Therefore, we can expect
countries to continue to behave antagonistically towards one another and compete over
resources, with the attendant conflict.
Two resources stand out as likely to be especially valuable in a post-MNT world:
carbon and space. We refer primarily to physical space on the planet, not outer space,
though outer space would have great value as well. With MNT, it would become cheap and
possible to build large cities at sea, farming seafood and generating power from solar
panels and the heat differential between the ocean depths and the surface. Countries
where there are issues with overcrowding, such as China, Japan, and India, may begin
building sea cities in their territorial waters. Although there are treaties that nominally
address the issue of maritime borders, these treaties were not made for a world where
marine cities can be built all the way up to these borders. The possibility of a land rush for
the sea could increase tensions between states like China and Japan, who have a history
of bad blood. There are many oil fields in the shallow seas around the world which could be
exploited by marine cities. The desire to take over these oil fields could lead to territorial
disputes, MNT arms races, and eventually all-out nano-war. In combination with biological
and nuclear weapons, such wars could threaten the survival of our species and lead to
widespread civic breakdown of the kind described in the nuclear weapons chapter.
Besides carbon sources, empty space itself would have value, both for exploring new
possibilities of human habitation (every person on Earth could have their own sapphire
mansion and private jet) as well as providing buffer zones to defend against possible
attack. Besides colonizing the continental shelf with sea cities, there would be an incentive
to build sea cities all the way out to the center of the Atlantic and Pacific oceans. If major
states do not take the initiative to build these cities, independent actors will, assuming
nanofactories are available to the public and international law does not prevent it. New
countries could be formed entirely at sea. Previously barren and mostly valueless areas
such as the taiga and Sahara Desert would increase in value as it suddenly becomes
possible to build real cities there. The Arctic Ocean would increase in value as superior
tools make the recovery of oil and gas there more feasible, prompting conflict between
Canada and Russia. With the aid of MNT, cities could be built on platforms that float on
swampy taiga, opening up millions of new square miles for habitation or construction.
Advances in MNT could provide everyone on Earth with great material wealth and
space, yet unfortunately, there is a Red Queen dynamic at work. The Red Queen, an
famous character from Lewis Carroll's novel Through the Looking Glass, conducts a race
where everyone must run constantly just to stay in the same place. The same applies to
evolution and human status competitions, known as keeping up with the Joneses. Even if
you have a mansion and a private jet, it means little if your neighbor has one too. People
don't just get satisfied after a certain amount of material wealth; they need more, more than
the guy next to them. Most modern people are fabulously wealthy by the standards of our
ancestors, but we may consider ourselves poor even if we drive an SUV, with water
coming out of the tap, sleeping in a warm bed in a secure house. The same will apply in
2080 when people are fabulously wealthy by today's standards but still want more. A
particular issue may be that carbon is limited on this planet and that all the most interesting
and useful things, from diamond to rainforests, are made out of it. There may be
considerable competition to monopolize as much carbon as possible. Asteroid mining could
help add to the carbon budget, but that takes time and effort. Limited resources will be a
cause of conflict, especially in a world where all resources can be dug up and utilized.
Consider the global supply of carbon. The current atmospheric CO2 level is ~362
parts per million (0.036%), which corresponds to about 5.2 × 10^14 kg (520 Pg). Assuming that a reduction of
CO2 levels to the pre-industrial level of 280 ppm is acceptable, ~118 Pg (petagrams) of
carbon could be extracted. Assuming a world population of about 10 billion in 2060, that
gives us 11,800 kg per person. That is enough for a nice car and plenty of other toys 61. Yet,
for many people that may not be enough: theyll want more. Diamond houses, diamond
factories, diamond palaces, and so on. The thirst for diamondoid nanoproducts could lead
to conflict over the Earths carbon dioxide. There may be a race to pull it out of the
atmosphere as quickly as possible. A mass draw-down of CO2 could cause global cooling,
which would be counteracted somewhat by the waste heat generated by nanofactories.
Though climate change would probably not be an immediate concern, the overall context of
fighting over CO2 is definitely a weird prospect, something that has rarely been considered.
If all CO2 were removed from the atmosphere, it would cause the death of all vegetation
and the total collapse of the natural food chain. Other bizarre surprises and plot twists
might lie in store for us in a post-nanomanufacturing world.
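As a quick check, the arithmetic behind the per-capita carbon figure above can be reproduced in a few lines; the inputs below are the assumptions quoted in this section (520 Pg of atmospheric carbon at ~362 ppm, a drawdown floor of 280 ppm, and 10 billion people), not independent climatological estimates.

```python
# Back-of-the-envelope check of the per-capita carbon budget discussed above.
# All inputs are the assumptions quoted in the text, not authoritative figures.

ppm_now = 362.0            # assumed current CO2 concentration (ppm)
ppm_floor = 280.0          # assumed acceptable pre-industrial floor (ppm)
carbon_total_pg = 520.0    # atmospheric carbon figure used in the text (Pg)
population = 1e10          # assumed world population around 2060

extractable_pg = carbon_total_pg * (ppm_now - ppm_floor) / ppm_now
per_capita_kg = extractable_pg * 1e12 / population   # 1 Pg = 10^12 kg

print(f"Extractable carbon: {extractable_pg:.0f} Pg")    # ~118 Pg
print(f"Per-capita share:   {per_capita_kg:,.0f} kg")    # ~11,800 kg

# For reference, the commonly used conversion of ~2.13 Pg of carbon per ppm of CO2
# would give a larger atmospheric total (~770 Pg) and hence a larger per-capita
# share; the 520 Pg figure is kept here for consistency with the text.
```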
Super Empowered Individuals
Previously, we mentioned how nano-suits and implants could combine to create
super-soldiers. Beyond combat capability, MNT will facilitate the creation of super
empowered individuals (SEIs), people who have superior capabilities not just in the military
realm, but also in domains like finance, popular charisma, and geopolitics. The superlative capabilities that MNT will confer on those who properly exploit it could spill into many areas, creating SEIs who stand head and shoulders above the rest of humanity. These individuals
could establish totalitarian governments that control their citizens through selective
restriction and distribution of the fruits of MNT, in addition to other coercive means such as
novel forms of surveillance or psychological manipulation 62.
The capabilities that MNT would confer would seem almost magical from the
perspective of people unprepared for it. If the power is sufficiently restricted to a few, they
might even seem like gods. Economist Noah Smith has written about Drone Lords,
asymmetrically empowered individuals with drone armies 63. He compares the rise of
drones to the rise of guns, and calls it the end of "People Power." Prior to the invention of the rifle, military power rested in the hands of the few: those who commanded the loyalty of well-trained and well-equipped knights. With the rifle, military training became less important, and the key factor became numbers. Now, as drones are being introduced and
improved, we may enter an era where a small group of people may command a large
drone force, which they can use to dominate an area, enabling a new authoritarianism or a
new imperialism. This could contribute to war, disrupt the current order of NATO/American
world hegemony, or reinforce it to levels beyond what we can imagine today. If drones
alone could destabilize the world order, imagine the degree of disorder which could be
introduced by exponential, automated production of military hardware in general.
MNT will enable very powerful computers that outperform legacy hardware by orders
of magnitude. This will allow those with access to nanocomputers to completely control
quantitative finance and thereby the world economy. More than half of all trades today are
made by software, a proportion that is likely to increase as time goes on. Unless AI improves
considerably, and perhaps even if it does, there will continue to be the risk of a flash
crash that wreaks havoc on the world economy. A nation or organization with MNT would
not even need to use military force to take over the world; they could simply accomplish it
with computers. The margin of performance to dominate in quantitative finance is so thin
that trading firms pay top dollar for an office several blocks closer to crucial trading nodes,
to minimize latency in their transactions. The abrupt improvement in performance caused
by the fabrication of nanocomputers will enable complete financial domination by the
parties that possess them. Any nations that want to retain control over their own economies
will need to institute aggressive protectionist measures, and will likely be pressured by the
nano-powers-that-be to deregulate and be subjugated. Hegemonic totalitarianism could
rise in the name of open markets.
Besides finance, MNT will open up new possibilities for the control of human beings
by greatly improving the tools used for psychological study and manipulation. High
sensitivity brain-computer interfaces (BCI) built by nanofactories will allow much more
detailed modeling of human psychological processes, which could lead to effective
strategies for manipulating populations. Widespread nano-surveillance would allow states
to monitor every word their citizens say, both online and off. Combined with drones, this
could lead to dictatorships which are truly impossible to topple, because they can easily put
down threats before they arise. History has shown that regimes on both the right and the left of the political spectrum are fully capable of implementing police states that massacre millions of people. Totalitarian states clashing with nanoweapons could cause wars which lead to the end of civilization. Democracies should not be considered automatically safe, either; democratic peace theory, the observation that democracies go to war with one another more rarely than other pairs of states do, may be an artifact of post-WWII American hegemony rather than any inherent
feature of democracies. In fact, there are arguments that democracies are more likely to
engage in total war than more authoritarian forms of government 64. In an authoritarian
government, war begins or ends at the decision of the ruling power, but in a democracy, the
call for war can become a mob dynamic which is impossible to stop once it gets started.
MNT will provide tools for human enhancement in every domain, from improving voice
to appearance to strength and alertness. It will provide tools to extend lifespans, by
producing microbots that flow through the bloodstream, removing dead or diseased cells
more effectively than the immune system. MNT-built microrobots will extend telomeres and
conduct other biological maintenance work to stave off the effects of aging, perhaps
extending human lifespans by 10-20 years at first, and eventually indefinitely 65,66. There is
no cosmic law that states that biological systems cannot be repaired faster than they break down; it's just extremely complicated and hasn't been done yet. MNT could enable a sort of
immortality, meaning individuals who do not die through aging or disease. The only way
to kill such people would be through physical attack, which may be difficult if they are
surrounded by utility fog or robot soldiers, or concealed deep within the ground or high in
the sky. Such levels of power could create unaccountable individuals. Not unaccountable
like todays politicians and bankers, but unaccountable like a god on Earth. These
individuals might even hide their true selves deep in bunkers and only manifest as
projections in utility fog on the surface.
We could say more on the issue of super empowered individuals and how they
interact with the equation of global risk, but additional analysis will be saved for the later chapter that specifically addresses transhumanism and human enhancement. The inequalities
which could be created by varying access to nanofactories could make the differences of
today look superficial. The cosmetic improvements to the body alone might allow certain
lucky individuals to captivate millions.
Bio-Technical War Machines
We've spent a fair amount of space in this chapter analyzing microbots and drones,
but MNT will enable breakthroughs in larger war machines as well. There will always be a
place in war for such constructions, as there is no substitute for pure mass. Mass has a
number of advantages: it has inertia, which makes it hard to push aside; it has thickness and toughness, which make for good defense; it can move through obstacles
effectively, crushing things that stand in its way; it can contain more functions and devices,
giving it more options than smaller systems. Recall that the biosphere was dominated by
the dinosaurs for 135 million years. Although many dinosaurs were small (their bones are less often preserved, a preservation bias), the occurrence of many large dinosaurs
shows that size was a viable strategy for survival and success in the context of the
Mesozoic era. Mammals have a faster metabolism than dinosaurs, and as such, tend to be
smaller. This is because we need more food per kilogram of body mass, on average, to
survive. But what of a world where greenhouses can be automatically built and maintained
by robots, or nuclear reactors can be integrated into bio-technical war machines the size of
dinosaurs? As bizarre and science-fictional as it may sound, the eventual creation of such
machines seems a near-certainty. They offer too many military benefits for it to be
otherwise.
Drones and microbots are throw-away tools. They are constructed in large numbers,
and designed to be sacrificed. In a drone attack on a future city, thousands or even millions
might be vaporized by powerful laser beams or flak cannons. To supplement these weak
units requires strong units, like tanks but better. The problem with traditional tanks is that
they are not very mobile. Tanks cannot traverse forests or mountain slopes. They cannot
hide underwater, regenerate, or power themselves with a variety of energy sources. They
are not intelligent, and they have no agility. Their armament is limited to a main gun and a
small machine gun, and if a tank is hit in the right spot by an anti-tank round, it's all over. They depend upon human operators, who make human errors. A human tank operator
who leaves his tank to urinate may return to discover that a grenade has been thrown into
the cabin, ruining the vehicle.
All these deficiencies could be overcome by creating animal-like tanks which contain
both biological and nanotechnological features. These constructs might be 10-50 m (32-164 ft) in size, with armor thick enough to stop armor-piercing shells or withstand the pressure at the bottom of the ocean. Modeled on dinosaurs or large felines or canines, they might have
three sets of legs rather than two, and exploit other design features not accessible to
biological evolution, due to limitations on incremental adaptation. Like certain sauropod
dinosaurs, they might have two brains rather than just one, providing crucial redundancy.
Instead of being made up just of individual cells, like all life forms, they could have many
fused components (like bone) interspersed throughout their structure, giving them greater
durability than biological animals. Of course, they would be built out of fullerenes rather
than proteins, making them hundreds of times stronger than steel and several times lighter;
quite unusual for a living organism. Powered by ultra-high power density nano-motors
and actuators, they could move very quickly despite their size. Creating bio-technical war
machines that are the size of tanks but run at hundreds of miles per hour is not out of the
question. Some could even fly, like the dragons and gryphons of mythology. Others could
swim to the bottom of the ocean or dig underground, where they could hide undetected
until ready to be deployed. Being machines, they could power down and consume barely
any oxygen or food. If powered by small nuclear reactors, they could survive for decades
without food, much like nuclear submarines only require refueling every twenty years.
These beings would have all the strengths of biology and machines with none of the
weaknesses of either.
The super empowered individuals described earlier might even use these animals
as their bodies or exoskeletons, making them even more powerful. A human brain and
spinal cord could be connected to these machines, giving them a brand new body. Ultrafast regeneration would be possible. These bodies would contain reservoirs of
nanomachines that repair damage as soon as it occurs. Even if penetrated by ultra-high-velocity rounds like railgun bolts, these animals could regenerate and continue on. A little
bit of brain could be contained throughout the animal as an insurance policy, such that
even if the central processor were destroyed, nanomachines could rebuild it based on a
blueprint. Large herds of these machines would be much more effective at warfare than the
tanks we use today, and defeating them might require nothing less than nuclear weapons.
This would obviously increase the chance of nuclear war, providing a plausible justification
for the use of nuclear weapons.
Global Thermal Limit
Given the scenarios and technology outlined in this chapter, it may appear like there
would be nearly no limit as to how greatly MNT could be used to remodel the planet's
surface and build war machines, but there is a limit. This limit has to do with the global
energy budget of the technosphere, the heat dissipation of all human artifice 67. The
biomass of all human beings combined dissipates roughly 10^12 watts of heat energy into the atmosphere, and our current technology roughly ten times that, about 1.2 x 10^13 watts. In comparison, the solar energy absorbed by the Earth's surface is ~1.75 x 10^17 watts. The power dissipation of all vegetation is ~10^14 watts. According to Robert Freitas, "Climatologists have speculated that an anthropogenic release of ~2 x 10^15 watts (~1% solar insolation) might cause the polar icecaps to melt." The melting of the polar ice caps would
not be the end of the world, contrary to public perception, but certainly there is a limit to
how much heat can be generated by nanomachinery before it becomes a problem. Freitas
points out that the ~3.3 x 10^17 watts received by Venus at its cloud tops has made that
planet into a hell-world, with a surface hot enough to melt lead. Somewhere between here
and there seems like a good place to stop.
Freitas summarizes the issue of the hypsithermal limit with respect to nanobots as
follows68:
The hypsithermal ecological limit in turn imposes a maximum power limit on
the entire future global mass of active nanomachinery or "active nanomass."
Assuming the typical power density of active nanorobots is ~10^7 W/m^3, the hypsithermal limit implies a natural worldwide population limit of ~10^8 m^3 of active functioning nanorobots, or ~10^11 kg at normal densities. Assuming the worldwide human population stabilizes near ~10^10 people in the 21st century and assuming a uniform distribution of nanotechnology, the above population limit would correspond to a per capita allocation of ~10 kg of active continuously-functioning nanorobots, or ~10^16 active nanorobots per person (assuming 1 micron^3 nanorobots developing ~10 pW each, and ignoring nonactive devices held in inventory). Whether a ~10-liter per capita allocation (~100 kW/person) is sufficient for all medical, manufacturing,
transportation and other speculative purposes is a matter of debate.
Taking this into consideration, we can imagine new sources of conflict and new risks
to humanity's survival. A nanotech arms race that puts the world energy balance above the
hypsithermal limit could literally cook the planet alive. A number of countermeasures are
possible to buy time, but the only compelling one is moving civilization off-planet. Freitas
states that removal of all greenhouse gases from the atmosphere (which would be the end
of life as we know it, due to the removal of carbon dioxide) would increase the limit by a
factor of 10-20. Speculatively, if the entire atmosphere were removed, it would allow ~ 2 x
1016 watts (~10% of solar insolation) of operating nanomachinery while maintaining current
surface temperatures, according to Freitas. These calculations on hypsithermal limits make
it clear that sci-fi societies like Coruscant, the planet in Star Wars that is covered from pole
to pole in dense city, would be impossible. They would simply produce too much heat, and
the planet would be incinerated. More research is needed to clarify the various effects at
different levels of heat generation and how MNT systems could be used to compensate for
these effects.
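The scaling in the Freitas passage can be reproduced directly; the sketch below simply re-runs his quoted assumptions (a ~10^15 W thermal ceiling, ~10^7 W/m^3 power density, 10^10 people, one-micron nanorobots at ~10 pW each) and is not an independent estimate of the hypsithermal limit.

```python
# Reproduce the hypsithermal-limit arithmetic from the Freitas passage quoted above.
# All inputs are assumptions taken from that passage, rounded to order of magnitude.

thermal_limit_w = 1e15   # ~10^15 W ceiling (the ~2e15 W icecap figure, rounded down)
power_density = 1e7      # W/m^3, assumed power density of active nanorobots
mass_density = 1000.0    # kg/m^3, "normal densities" (roughly water)
population = 1e10        # assumed stabilized world population
bot_volume_m3 = 1e-18    # one cubic micron per nanorobot
bot_power_w = 10e-12     # ~10 pW per nanorobot

active_volume = thermal_limit_w / power_density                     # ~1e8 m^3 worldwide
active_mass = active_volume * mass_density                          # ~1e11 kg
per_capita_kg = active_mass / population                            # ~10 kg (~10 liters)
per_capita_bots = (per_capita_kg / mass_density) / bot_volume_m3    # ~1e16 bots
per_capita_power = per_capita_bots * bot_power_w                    # ~1e5 W = 100 kW

print(f"Worldwide active nanomass: {active_mass:.0e} kg")
print(f"Per-capita allocation:     {per_capita_kg:.0f} kg, {per_capita_bots:.0e} nanorobots")
print(f"Per-capita dissipation:    {per_capita_power/1e3:.0f} kW")
```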
Space Stations and MNT
MNT would open up space in a new way. Even with MNT, the costs of lifting a load to
outer space will be significant. The energy needed to lift a payload to space will be in high demand for uses on Earth instead, meaning that launches will likely be restricted to the wealthy and powerful. As of January 2015, launch prices are $4,109 per
kilogram ($1,864/lb) to low Earth orbit, for SpaceX's Falcon 9 launcher 69. The ability to
manufacture diamondoid rockets with nanofactories would certainly bring the relative cost
of launches down, but rockets still require a lot of mass and fuel to get anywhere, and
given all the fire and smoke they produce, it is likely that the use of large rockets will be
regulated and require a lot of real estate. This ensures that launches will remain scarce, even in a world of great material abundance.
To truly bring down costs for space launches requires building megastructures that
allow launches to begin closer to space. Space elevators, towers stretching from the Earth's surface to geostationary orbit, are often floated as a way of bringing down launch costs, but they are impractical. This is because it would be too easy for a terrorist or enemy
state to sever them, causing a 35,790 km (22,239 mi) long tower to come crashing down to
Earth. A space elevator would just be too appealing of a target for disruption. (This
argument is made at greater length in the chapter on space weapons.)
More plausible is the construction of space piers, extremely tall (100 km, 62 mi)
towers supporting ramps along which spacecraft can accelerate and leave the Earth's
gravity, a concept created by J. Storrs Hall (also the originator of utility fog) 70. The launch
would take advantage of the thin air at that altitude, which is a million times thinner than at
the surface. Unlike space elevators, space piers could be heavily reinforced and made
resilient to attack. Space piers could be constructed such that even several nuclear
weapons would not be enough to bring down the entire structure. That would make them a
much more appealing investment than space elevators.
A space pier would consist of a track 300 km (186 mi) long and 100 km (62 mi) tall,
supported by 10-20 trusses below. These trusses would come two at a time, forming
bipods along the length of this colossal structure. An elevator would bring a 10-tonne craft
from the ground level to the launch level, and the craft could launch into space, spending
ten thousand times less energy than it would during a conventional rocket launch. Manned
craft would travel at an acceleration of 10 g for launch, which can be withstood by human
beings for the 80 seconds required if form-fitting fluid sarcophagi are used, according to
Hall, but even so, the launch will not be particularly pleasant. This space pier approach
makes a nice compromise between the dangerous, energy-intense route of using
conventional rockets and the risky method of building a space elevator, a long, thin, fragile thread to orbit.
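As a rough sanity check on those launch figures, assume constant 10 g acceleration along the full 300 km track; this is a simplification for illustration, not Hall's detailed launch profile.

```python
import math

# Sanity check on the space-pier launch figures: constant 10 g along a 300 km track.
# This is a simplified model, not Hall's detailed design.
g = 9.81                   # m/s^2
accel = 10 * g             # assumed launch acceleration
track_m = 300e3            # track length in meters

v_exit = math.sqrt(2 * accel * track_m)    # exit velocity
t_burn = v_exit / accel                    # time spent at 10 g
ke_per_kg = 0.5 * v_exit ** 2              # kinetic energy per kilogram

print(f"Exit velocity:  {v_exit/1e3:.1f} km/s")      # ~7.7 km/s, near orbital velocity
print(f"Time at 10 g:   {t_burn:.0f} s")             # ~78 s, matching the ~80 s quoted
print(f"Kinetic energy: {ke_per_kg/1e6:.0f} MJ/kg")  # ~29 MJ/kg
```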
With the space pier, many payloads could be launched into orbit in sufficient volume
to get true colonization of the inner solar system going. All the luxuries of Earth could be
sent up into orbit and to the Moon and Mars, from millions of gallons of water, to space
stations which create artificial gravity by spinning, to machines that mine the asteroids to
create additional space stations. Hall estimates that the pier could launch a 10-tonne payload every five minutes, using a 1 GW power plant to recharge the tower between launches. There's no
reason why we can't multiply that by a thousand, assuming 1,000 one-GW power plants
and 1,000 rails in a row all providing launches. With each rail launching every five minutes, the pier could put 10,000 tonnes into space every five minutes. That should be more than
enough to get humanity started with colonization of space.
Kalpana One, a design for a space colony that has been endorsed by the National
Space Society, is a good model of what we can expect a space pier would be used to
launch into orbit71. The space station is a cylinder with a radius of 250 m (820 ft), a length
of 325 m (1066 ft) and a living area of 510,000 m 2, enough for 170 m2 for its 3,000
residents. Its mass is 6.3 million tonnes, equivalent to 63 Nimitz-class aircraft carriers. The
first such space colony would likely be built entirely with material launched from the surface
of Earth, while later colonies will be built with materials from the Moon. Assuming the
1,000-rail space pier outlined above operating around the clock, the necessary mass could
be delivered to orbit in 52.5 hours. That's 14 space colonies per month, enough for 42,000
people.
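The delivery schedule above is straightforward arithmetic on the assumed 1,000-rail pier; the sketch below merely reproduces the 52.5-hour and 14-colonies-per-month figures from those assumptions.

```python
# Arithmetic behind the Kalpana One delivery estimate, using the assumptions above:
# 1,000 rails, 10 tonnes per rail, one launch cycle every five minutes.
station_mass_t = 6.3e6           # Kalpana One mass, tonnes
payload_per_cycle_t = 1000 * 10  # tonnes lifted per five-minute cycle
cycle_min = 5

cycles = station_mass_t / payload_per_cycle_t        # 630 cycles
hours = cycles * cycle_min / 60                      # 52.5 hours per station
stations_per_month = 30 * 24 / hours                 # ~14
residents_per_month = stations_per_month * 3000      # ~42,000

print(f"Cycles per station:  {cycles:.0f}")
print(f"Delivery time:       {hours:.1f} hours")
print(f"Stations per month:  {stations_per_month:.1f} (~{residents_per_month:,.0f} people)")
```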
Space colonies have many benefits. By spreading throughout the cosmos, mankind
can become more resilient and eventually immune to any isolated disaster. Space
platforms have their own dangers, however. They could be used as outposts to drop rocks
onto the Earth, bombarding it. A space station that is intentionally directed to impact the
surface would make quite a bang. The current international law of no weapons of mass
destruction in space may eventually have to be violated. If there are colonies in space, of
course they will need powerful weapons to defend themselves. Like the scenario with sea
cities, this creates a land grab. Space colonies would be premium property because of their novelty, because of the new manufacturing possibilities afforded by zero gravity, and because of the positions from which they could launch missions to colonize the rest of the solar system and exploit its resources. Space stations are the ultimate high ground, capable of
observing anything on Earth and striking it from above. However, the benefit of increasing the probability of long-term human survival by building space colonies outweighs any disruptive dynamics they might cause. It is important to be aware that the positives come with negatives.
Phase Two: Grey Goo
Now that we've reviewed many risks and technologies which could be produced by
NT or first-order MNT, we return to grey goo, the danger of out-of-control self-replicating
microbots based on nanotechnology72. The term nanobots is often used, but in all
probability, the size of dangerous replicators will be measured in microns, not nanometers.
Their components will have useful features on the nanoscale, however, just like biological
life.
Everything prior to this point in our analysis is referred to as phase one: military robotics. Those are the products that could be built 5-10 years after the first nanofactories
are developed, possibly excepting space colonies and the more advanced bio-technical
hybrids. Grey goo is phase two. It would likely take a bit longer for civilization to develop
because it requires human ingenuity, one thing that nanofactories could not easily
replicate. Designing a microbot self-replicator requires human intelligence, probably with
the aid of advanced simulations and extensive experiments involving trial-and-error. The
microbot may have a design that is inspired by biological life, but ultimately it is a different
beast, made out of diamond instead of proteins. Naturally, such replicators would be immune to biological attack.
Like everything mentioned in this chapter so far, grey goo would have a military
application. By harvesting materials directly from the environment, it could potentially
fabricate military units in the field. Grey goo itself could self-replicate many times from a
single microbot, so if a swarm of grey goo were hit with a powerful explosive, even a
nuclear weapon, it is plausible that some of it could survive and continue to replicate. Grey
goo would be an excellent defense and an effective attack, as long as the opponent did not
have grey goo of his own. Any nutrients not secured or bolted down could be exploited by
the grey goo for self-replication.
Grey goo would need a minimum set of internal components, just like a cell. This
would include a self-replication program (in life: DNA), a coating (membrane), innate
defenses, collective behavioral strategies, sensors, and so on. Assuming grey goo could be
made as effective as the best viruses and bacteria, it would have a self-replication time of
15-30 minutes. Remember that our natural environment is completely coated in trillions of
bacteria and viruses at all times and in all places. It is also covered in other tiny organisms, like flatworms, which we cannot see but which are certainly there. Nature is constant
competition among self-replicators, from the largest sizes to the smallest. They are
everywhere.
The danger originates when grey goo is developed that can out-compete the natural
self-replicators of the world and displace them. If it can overcome the bacteria and viruses
in the environment, it can break them down and dominate the biosphere. Of course, this
could cause the food web to collapse, at least in areas where the goo is replicating. The
biomass of living things is about 560 billion tonnes, which is a rough estimate. Assuming a
replicator that starts off with a mass of about a picogram (one-trillionth of a gram) and a
self-replication rate of 30 minutes, grey goo could overrun the biosphere in about 99 doublings, or roughly 50 hours. Of course, even under ideal conditions, replication would take
longer, since physical distance separates concentrations of biomass, but you get the idea.
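That doubling estimate is easy to check from the figures just given (560 billion tonnes of biomass, a one-picogram seed replicator, a 30-minute doubling time); the point is the order of magnitude, not precision.

```python
import math

# Check of the grey-goo doubling estimate: how many doublings does a one-picogram
# replicator need to equal the total biomass, and how long at 30 minutes per doubling?
biomass_tonnes = 560e9       # rough biomass of all living things (figure from the text)
seed_mass_g = 1e-12          # one picogram
doubling_min = 30            # assumed replication time

biomass_g = biomass_tonnes * 1e6
doublings = math.log2(biomass_g / seed_mass_g)    # ~99 doublings
hours = doublings * doubling_min / 60             # ~50 hours

print(f"Doublings needed: {doublings:.0f}")
print(f"Time required:    {hours:.0f} hours")
```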
The risk of grey goo has been analyzed in a detailed paper by Robert Freitas 73. He
concludes that replicators designed to operate slowly enough to be difficult to detect would
require 20 months to consume the biosphere, but that faster speeds are possible if stealth
is not an issue. Freitas proposes a moratorium on all artificial life experiments due to the
grey goo risk, as well as continuous comprehensive infrared surveillance of Earth's
surface by geostationary satellites and a long-term program to research countermeasures
to grey goo. It seems likely to us that the only suitable countermeasure to grey goo would
be environmental blue goo distributed over the surface of the planet. The blue goo
needn't all be owned by a single entity; it is possible that a network of blue goo owned by
different nations could provide a comprehensive shield in aggregate. More research is
needed to design a global system to handle the threat of grey goo 74. Any such system
would need advanced artificial intelligence of the human-friendly kind to respond to threats
quickly enough, making that a key prerequisite for building a grey goo shield, as well as
blurring the lines between efforts towards friendly AI and mitigation of nano-risk.
More Nano Risks
There are many more potential risks from nanotechnology we haven't covered here.
Possibly the most salient is the general area of human enhancement and AI giving
rise to recursively self-improving, smarter-than-human intelligence. We already discussed
AI in a previous chapter, and will review the risks from transhuman intelligence in the later
chapter on that topic. We call these the third stage of NT/MNT risk.
The risks from NT interact with nearly every single other major global risk, from
nuclear war to supervolcano eruption. Missiles built using nanotechnology could be used to trigger supervolcano eruptions. Superdrugs built with the assistance of NT could put humanity on a lotus-eating path to extinction. Along with the risks we can
recognize, NT could open up new categories of risk we can't even imagine now. In the
bigger picture, it confers god-like powers on primates whose brains are primarily adapted
to living in small communities and handling stone tools. There is bound to be trouble.
Another category of risk has to do with extreme modification of the environment.
Automated and semi-automated programs may create large amounts of nano-garbage,
clogging streams and penetrating the membranes of animals. Nanotechnology allows
major climatological engineering, from dialing down the amount of carbon dioxide in the
atmosphere to building giant mirrors in space that direct light to or away from certain spots
on the Earth. People may modify the environment willy-nilly without stopping to analyze the
consequences of what they are doing. MNT will enable quick environmental changes, on
timescales faster than is possible for reflection. We must be careful, but given humanity's
record when it comes to the use of new technologies, we shouldn't be too hopeful.
Quantifying Risk
Here we guess at the probabilities of NT/MNT causing the extinction of humanity. For
comparison, we put the risk of nuclear extinction in the 21st century at 1 percent. As a
general probability, we will assign the risk of extinction through NT during the 21st century
at 4 percent. This is a rather high value, but we think it is justified. NT contains so many
dangers within it, as well as so many benefits, it appears certain to fundamentally transform
the world. Even if the more radical visions of MNT do not come to pass, extreme outcomes
are still possible with NT alone. This is especially true with the way that NT feeds into every
other possible risk. It takes certain things which seem merely likely to threaten millions of
lives (pandemics and nuclear war) and pushes them over the edge into threatening billions
of lives and the entire species. Besides that, it creates completely new threats, such as
grey goo. Still, remember that a few hundred nuclear weapons detonated in cities or large
forests would be enough to kill off nearly 99% of the population of the northern hemisphere
through crop and infrastructure failures. NT, by making uranium enrichment much more
affordable, will greatly increase this risk.
The countermeasures necessary to decrease NT-facilitated risks, including the AI risks derived from vastly increased computation, are radical, and may go beyond the boundaries of what
people are willing to do. True risk management seems as if it would require a singleton, a
global decision-making agency that controls the overall course of civilization 75. From what
we can tell, the world is not psychologically prepared for that, but it may happen anyway.
Even a disaster that kills many millions or even billions of people is not likely to make an
impression significant enough to make people overcome their differences and defer to a
central authority. We'll instead be left with a shoddy patchwork of countermeasures, all of
which work partially but none of which work completely, and which leave major holes
through which terrors can crawl.
Rather than being something that people rationally respond to in advance, NT, and
especially MNT, when they are more fully developed, will be technologies that catch us by
surprise. The official and unofficial response will be reactive rather than pro-active,
designed to protect local resources rather than taking into account the planet as a whole.
This is a mistake. The inevitable holes produced by an excessively local focus will make it
as if there are few global protections at all. Concepts such as the rule of law could
evaporate in the free-for-all chaos of a world where nanofactories are widely available and
people are using them to implement arbitrary designs. Even in the absence of outright
maliciousness, there are many paths to destruction, from simple national self-interest to the
mercurial pranks of nano script kiddies making new self-replicators in their basement.
The only real way to deal with the risks from NT, and especially MNT, is to go off-world, concurrently with developing some form of human-friendly transhuman intelligence
which can visualize the complex web of threats and put a system in place to ameliorate
them76.

The map of nanorisks


Nanotech seems to be a smaller risk than AI or biotech, but in its advanced form it has many routes to omnicide. Nanotech will probably be created after strong biotech but shortly before strong AI (or by AI), so the period of vulnerability is rather short. In any case, nanotech has different stages in its future development, depending mostly on its level of miniaturization and its ability to replicate. To control it, some kind of protection shield will have to be built in the future, which may have its own failure modes.
The main reading on this risk is Freitas's article "Some Limits to Global Ecophagy by Biovorous Nanoreplicators" and the "NanoShield" proposal.
Some integration between bio and nanotech has already started in the form of DNA origami, so the first nanobots may be bio-nanobots, something like an upgraded version of E. coli.

http://immortality-roadmap.com/nanorisk.pdf
References

1. Chris Phoenix. Personal nanofactories. 2003a. Center for Responsible Nanotechnology.
2. Chris Phoenix. Design of a primitive nanofactory. October 2003b. Journal of
Evolution and Technology, Vol. 13.
3. Eric Drexler. Radical Abundance: How a Revolution in Nanotechnology Will Change
Civilization. 2013. PublicAffairs.
4. Nick Bostrom. Transhumanist FAQ. 2003. World Transhumanist Association.
5. Ray Kurzweil. The Singularity is Near. 2005. Viking.
6. Michio Kaku. Can nanotechnology create utopia? October 24, 2012. Big Think.
7. Jürgen Altmann. Military Nanotechnology: Potential Applications and Preventive
Arms Control. 2006. Routledge.
8. Richard E. Smalley. Of Chemistry, Love and Nanobots. September 2001. Scientific American.
9. Richard Jones. Is mechanosynthesis feasible? The debate moves up a gear.
September 16, 2004. Soft Machines.
10. Hongzhou Gu, Jie Chao, Shou-Jun Xiao & Nadrian C. Seeman. A proximity-based
programmable DNA nanoscale assembly line. 2010. Nature 465, 202-205 (13 May
2010).
11. Drexler 2013.
12. Phoenix 2003b.
13. Nenad Ban, Poul Nissen, Jeffrey Hansen, Peter B. Moore, Thomas A. Steitz. The
Complete Atomic Structure of the Large Ribosomal Subunit at 2.4 Å Resolution.
Science. August 11, 2000: Vol. 289 no. 5481 pp. 905-920.
14. Smalley 2001.
15. K. Eric Drexler, David Forrest, Robert A. Freitas Jr., J. Storrs Hall, Neil Jacobstein,
Tom McKendree, Ralph Merkle, Christine Peterson. On Physics, Fundamentals,
and Nanorobots: A Rebuttal to Smalley's Assertion that Self-Replicating Mechanical
Nanorobots Are Simply Not Possible. September 2001. Institute for Molecular
Manufacturing.
16. Ralph Merkle. How good scientists reach bad conclusions. April 2001. Foresight
Institute.
17. Ralph Merkle and Robert A. Freitas. Remaining Technical Challenges for Achieving
Positional Diamondoid Molecular Manufacturing and Diamondoid Nanofactories.
2007. Nanofactory Collaboration.
18. Chris Phoenix. The Hollowness of Denial. August 16, 2004. Center for Responsible
Nanotechnology.
19. Phoenix 2004.
20. Gu 2010.
21. Chris Phoenix and Tijamer Toth-Fejel. Large-Product General-Purpose Design and
Manufacturing Using Nanoscale Modules. May 2, 2005. NASA Institute for
Advanced Concepts. CP-04-01 Phase I Advanced Aeronautical/Space Concept
Studies.
22. Chris Phoenix. Powerful Products of Molecular Manufacturing. 2003c. Center for
Responsible Nanotechnology.
23. Phoenix 2003c.
24. Phoenix 2003c.
25. Robert A. Freitas. Some Limits to Global Ecophagy by Biovorous Nanoreplicators,
with Public Policy Recommendations. April 2000. Foresight Institute.
26. Eric Drexler and Chris Phoenix. Grey goo is a small issue. December 14, 2003.
Center for Responsible Nanotechnology.
27. Drexler & Phoenix 2003.
28. Tihamer Toth-Fejel. A few lesser implications of nanofactories: Global Warming is
the least of our problems. 2009. Nanotechnology Perceptions 5:37-59.
29. Nick Bostrom. What is a singleton? 2006. Linguistic and Philosophical
Investigations, Vol. 5, No. 2 (2006): pp. 48-54.
30. Cresson Kearny. Nuclear War Survival Skills. 1979. Oak Ridge National Laboratory.
31. Chris Phoenix. Frequently Asked Questions. 2003d. Center for Responsible
Nanotechnology.
32. Phoenix 2003d.
33. David Crane. Trackingpoint XActSystem Precision Guided Firearm (PGF) Sniper
Rifle Package with Integrated Networked Tracking Scope (Tactical Smart Scope).
January 15, 2013. Defense Review.
34. Eric Drexler. Engines of Creation: the Coming Era of Nanotechnology. 1986.
Doubleday.
35. Bryan Caplan. The Totalitarian Threat. 2006. In Nick Bostrom and Milan Cirkovic,
eds. Global Catastrophic Risks. Oxford: Oxford University Press, pp. 504-519.
36. Altmann 2006.
37. Altmann 2006.
38. Eliezer Yudkowsky. Creating Friendly AI 1.0: The Analysis and Design of Benevolent
Goal Architectures. 2001. The Singularity Institute, San Francisco, CA, June 15.
39. Mark Gubrud. Nanotechnology and International Security. 1997. Draft paper for a
talk given at the Fifth Foresight Conference on Molecular Nanotechnology,
November 5-8, 1997; Palo Alto, CA.
40. Alan Robock, Luke Oman, Georgiy L. Stenchikov, Owen B. Toon, Charles Bardeen,
and Richard P. Turco. Climatic consequences of regional nuclear conflicts. 2007a.
Atmospheric Chemistry and Physics, 7, 2003-2012.
41. Alan Robock, Luke Oman, and Georgiy L. Stenchikov. Nuclear winter
revisited with a modern climate model and current nuclear arsenals: Still
catastrophic consequences. 2007b. Journal of Geophysical Research, 112,
D13107.
42. ALPHA Collaboration: G. B. Andresen, M. D. Ashkezari, M. Baquero-Ruiz, W.
Bertsche, P. D. Bowe, E. Butler, C. L. Cesar, M. Charlton, A. Deller, S. Eriksson, J.
Fajans, T. Friesen, M. C. Fujiwara, D. R. Gill, A. Gutierrez, J. S. Hangst, W. N.
Hardy, R. S. Hayano, M. E. Hayden, A. J. Humphries, R. Hydomako, S. Jonsell, S.
L. Kemp, L. Kurchaninov, N. Madsen, S. Menary, P. Nolan, K. Olchanski, A. Olin, P.
Pusa, C. Ø. Rasmussen, F. Robicheaux, E. Sarid, D. M. Silveira, C. So, J. W.
Storey, R. I. Thompson, D. P. van der Werf, J. S. Wurtele, Y. Yamazaki.
Confinement of antihydrogen for 1,000 seconds. Nature Physics, 2011.
43. Altmann 2006.
44. Department of Defense. Research, Development, Test, and Evaluation, Defense-Wide: Volume 1: Defense Advanced Research Projects Agency. February 2005. pp.
154-155.
45. Space Daily. DARPA Demonstrates Micro-Thruster Breakthrough. May 9, 2005.
46. Brian Wowk. Phased Array Optics. October 3, 1991. http://www.phasedarray.com/1996-Book-Chapter.html
47. Nicola M. Pugno. The Egg of Columbus for making the world's toughest fibres. April
24, 2013. arXiv:1304.6658 [cond-mat.mtrl-sci] arxiv.org/abs/1304.6658
48. Kris Osborn. Navy Plans to Test Fire Railgun at Sea in 2016. April 7, 2014.
Military.com.
49. Anita Hamilton. This Gadget Makes Gallons of Drinking Water Out of Air. April 24,
2014. TIME.
50. Patricia Kime. Study: 25% of war deaths medically preventable. June 28, 2012.
Army Times.
51. Mihail C. Roco and William Sims Bainbridge, eds. Converging technologies for improving human performance: nanotechnology, biotechnology, information technology and cognitive science. 2002. U.S. National Science Foundation.
52. Roco 2002.
53. P S Sreetharan, J P Whitney, M D Strauss and R J Wood. Monolithic fabrication of
millimeter-scale machines. 2012. Journal of Micromechanics and Microengineering.
54. Byungkyu Kim, Moon Gu Lee, Young Pyo Lee, YongIn Kim, GeunHo Lee. An
earthworm-like micro robot using shape memory alloy actuator. 2006. Sensors and
Actuators A: Physical, Volume 125, Issue 2, 10 January 2006, Pages 429-437.
55. Conversation with Michael Vassar.
56. Phoenix 2003b.
57. Kaku 2012.
58. John Storrs Hall. Utility Fog: A Universal Physical Substance. 1993. In Vision-21:
Interdisciplinary Science and Engineering in the Era of Cyberspace, G. A. Landis,
ed., NASA Publication CP-10129, pp. 115-126 (1993).
59. John Storrs Hall. Utility Fog: The Stuff that Dreams Are Made Of. July 5, 2001.
KurzweilAI.
60. John Storrs Hall. What I want to be when I grow up, is a cloud. July 6, 2001.
KurzweilAI.
61. Robert Bradbury. Sapphire Mansions: Understanding the Real Impact of Molecular
Nanotechnology. October 2001. Aeiveos.
http://web.archive.org/web/20081121233908/http://www.aeiveos.com:8080/~bradbury/Papers/SM.html
62. Caplan 2006.
63. Noah Smith. Drones will cause an upheaval of society like we haven't seen in 700
years. March 11, 2014. Quartz.
64. Hans-Hermann Hoppe. Democracy: the God That Failed. 2001. Transaction
Publishers.
65. Kurzweil 2005.
66. Robert A. Freitas. Nanomedicine, Volume I: Basic Capabilities. October 1999.
Landes Bioscience.
67. Freitas 1999.
68. Freitas 1999.
69. Upgraded SpaceX Falcon 9.1.1 will launch 25% more than old Falcon 9 and bring
price down to $4109 per kilogram to LEO. March 22, 2013. NextBigFuture.
70. J. Storrs Hall. The Space Pier: a hybrid Space-launch Tower concept. 2007.
Autogeny.org.
71. Al Globus, Nitin Arora, Ankur Bajoria, Joe Straut. The Kalpana One Orbital Space
Settlement Revised. 2007. American Institute of Aeronautics and Astronautics.
72. Freitas 2000.
73. Freitas 2000.
74. Michael Vassar and Robert A. Freitas. Lifeboat Foundation NanoShield Version
0.90.2.13. 2006. Lifeboat Foundation.
75. Bostrom 2006.
76. Eliezer Yudkowsky. Artificial Intelligence as a Positive and Negative Factor in
Global Risk. In Global Catastrophic Risks, edited by Nick Bostrom and Milan M.
Ćirković, 308-345. 2008. New York: Oxford University Press.

Chapter 15. Space Weapons


The late 21st century could witness a number of existential dangers related to possible
expansion into space and the deployment of weapons there. Before we explore these, it is
necessary to note that mankind may choose not to engage in extensive space expansion
during the 21st century, so many of these risks may be a non-issue, at least during the next
100 years. A generation of Baby Boomers and their children were inspired and influenced
by science fiction such as Star Trek and Star Wars, which may cause us to systematically
overestimate the likelihood of space expansion in the near future. Similarly, limited success
at building launch vehicles by companies such as SpaceX and declarations of desire to
colonize Mars by its CEO Elon Musk are a far cry from self-sustaining, economically
realistic space colonization.
Space colonization feels futuristic; it feels like something that should happen in the
future, but this is not a compelling argument for why it actually will. During the 60s and 70s,
many scientists were utterly convinced that expansion into space was going to occur in the
2000s, with permanent moon bases by the 2020s. This could still happen, but it seems far
less likely than it did during the 70s. In our view, large-scale space colonization is unlikely
during the 21st century (unless there is a Singularity, in which case anything is possible),
but could pick up shortly after it. This chapter examines the possibility that it will happen
during the 21st century even though it may not be the most likely outcome.
There are many challenges to colonizing space which make it more appealing to
consider colonizing places like Canada, Siberia, the Dakotas, or even Greenland or
Antarctica first. Much of this planet is completely uninhabited and unexploited. If we seek to
travel to new realms, exploit new resources, and so on, we ought to look to North Dakota
or Canada before we look to the Moon or Mars. The economic calculus is strongly in favor
of colonizing these locations before we colonize space. First of all, lifting anything into
space is extremely expensive; $2,200 per kilogram ($1,000/lb) to low Earth orbit, at least 1.
Would the American pioneers have ventured West if they needed to pay such a premium
for each kilogram of baggage they carried? Absolutely not. It would be impractical. Second,
space is empty. For the most part, it is a void. All proposals for developing it, such as space
colonies, asteroid mining, or helium-3 harvesting on the Moon, have extremely high capital
investment costs that will make them prohibitive to everyone well into the 21st century.
Third, space and zero gravity are dangerous to human health. Cosmic rays and
weightlessness cause a variety of health problems and a heightened risk of cancer 2, not to
mention the constant risk of micrometeorites3,4 and the hazards of taking spacewalks to
repair even the most minor equipment. These three barriers to entry make space
colonization highly unlikely during the 21st century unless advanced molecular
nanotechnology (MNT) is developed, but few space enthusiasts even know what these
words mean, and as we reviewed in the earlier chapter on the topic, basic research
towards MNT is extremely slow in coming.
To highlight the dangers of space, consider some recent comments by the Canadian
astronaut Robert Thirsk, who calls a one-way Mars trip a "suicide mission" 5. Thirsk, who
has spent 204 days in orbit, says we lack the technology to survive a trip to Mars, and that
he spent much of his time in space repairing basic equipment like the craft's CO2
scrubbers and toilet. His comments were prompted by plans by the Netherlands-based
Mars One project to launch a one-way trip to Mars, but they also apply to any long-term
efforts in space, including space stations in low Earth orbit, trips to the asteroids, lunar
colonies, and so on. A prerequisite for space colonization are basic systems, like CO2
scrubbers and toilets, which break down only very rarely. Until we can manufacture these
systems with extreme reliability, we haven't even taken the first serious step towards
colonizing space. You can't colonize space until you can build a toilet that doesn't
constantly break down.
Next, the topic of getting there. Consider a rocket. It is a bomb with a hole poked in
the side. A rocket is a hollow skyscraper filled with enormous amounts of fuel costing tens
of millions of dollars, all of which is burned away in a few minutes, never to be recovered.
Rockets have a tendency to spontaneously explode for the most trivial of reasons, such as
errors in a few lines of code6,7. If the astronauts on-board do happen to make it to orbit in
one piece, a chipped tile is enough to seal their fate during reentry 8. Even if they do make it
to orbit, what is there? Absolutely nothing. To make true use of space requires putting
megatons of equipment, water, soil, and other resources up there, creating a simulacrum of
Earth. Without extremely ambitious engineering projects like Space Piers (to be described
soon) or asteroid-towing spacecraft, a space station is just too small, a vulnerable tin can
filled with slightly nauseous people who must exercise vigorously on a continuous basis to
make sure their bones don't become irreversibly smaller 9. A finger-sized hole in a space
station is enough to kill everyone on board 10. Of the five Space Shuttle orbiters that flew, two were destroyed in catastrophic accidents. That would be considered a failure. Without radically improved
materials, automation, means of launch, reliability, and tests spanning many decades,
space is only a highly experimental domain suitable for a few dozen specialists at a time.
Space hotels, notwithstanding the self-promotional announcements made by companies
such as Russian firm Orbital Technologies11, are more likely to resemble jail cells than the
Ritz for decades to come.
Since the 1970s, space has been the ultimate mismatch between vision and
practicality. This is not to say that space isn't important; it eventually will be. People are just
prone to vastly underestimating how soon that is likely to be. For the foreseeable future (at
least the first half of the 21st century), it is likely to just be a diversion for celebrities who
take quick trips into low Earth orbit. One accident that causes the death of a few celebrities
could set the entire field back by a decade or more.
While the Baby Boomers, the generation most enthusiastic about space, move into retirement, a new generation is being raised on computers and video games, an inner space that offers more than outer space plausibly can 12. Within
a couple decades, there will be compelling virtual reality and haptic suits that put the user
in another place, psychologically speaking. These worlds of our own invention have more
character and practical value than a dangerous vacuum at near absolute zero temperature.
If people want to experience space, they will enjoy it in virtual reality, in the safety of their
homes on terra firma. It will be far cheaper and safer to build interactive spaces that
simulate zero-g with robotic suspension systems and VR goggles than to actually launch
people into space and take that risk. Colonizing space is not like Europeans colonizing the
Americas, where both areas have the same temperature, the same gravity, the same
abundant organic materials, wide open spaces for farming, an atmosphere to protect them
from ultraviolet light, and the familiarity of Earth's landscape. The differences between the
Earth's surface and the surface of the Moon or Mars are shocking and extreme.
Otherworldly landscapes are instantly deadly to unprotected human beings. Without a
space suit, a person on Mars would lose consciousness in 15 seconds due to a lack of
oxygen. Within 30 seconds to 1 minute, the blood boils due to low pressure. This is fatal.
To construct an extremely effective and reliable space suit, such as a buckypaper skintight
suit, would likely require molecular manufacturing.
Few advocates of space colonization understand the level of technological
improvement which would be required for humans to live in orbit, on the Moon, or on Mars in
any appreciable numbers for any substantial length of time. You'd need a space elevator,
Space Pier13, or large set of mass accelerators, which would cost literally trillions of dollars
to build. (Not enough rockets could be built to launch thousands of people and all the
resources they would need; they are too expensive.) Construction would be a multi-decade
project unless it were carried out almost completely by robots. Even if such a bridge were
built, the energy costs of ferrying items up and sending them out would still be in the
hundreds of dollars per kilogram. Loads of a few tens of tons could be sent every 5-10
minutes or so at most per track (for a Space Pier), which is a serious limitation if we
consider that a space station that can hold just 3,000 people would weigh seven million
tons14. The original proposed design for Kalpana One, a space station, assumes $500/kg
launch costs and hundreds of thousands of flights from the Moon, Earth, and NEOs to
deliver the necessary material, including millions of tons of regolith for radiation shielding.
This is all to build a space station for just a few thousand people which has no economic
value. The inhabitants would be sealed on the inside of a cylinder. It seems very difficult to
economically justify such a project unless the colony is filled with multi-millionaires giving up a major portion of their wealth just to live there. Meanwhile, without advanced
robotics or next-generation self-repairing materials to protect them, a single hull breach
would be all it takes to kill everyone on board. The cost of building a rotating colony alone
is prohibitive unless the majority of the materials-gathering and launch work is conducted in
an automated fashion by robots and artificial intelligences. Without the assistance of highly
advanced nano-manufacturing and artificial intelligence, such a project would likely be
delayed to the early 22nd century. It is often helpful to belabor these points, since so many
intelligent people have such an emotional over-investment in space colonization and space
technologies, and a naïve underestimation of the difficulties involved.
Keeping all this in mind, this chapter will look to a possible future in the late 21st
century, where, due to abrupt and seemingly miraculous breakthroughs in automation and
nano-manufacturing, it becomes economically feasible to construct large facilities in space,
including space colonies and asteroid mines, which fundamentally change the context of
human civilization and open up entirely new risks. We envision scenarios where hundreds
of thousands of people can make it into space with the required tools to keep them there,
within the lifetime of some people alive today (the 2060s-2100s). The possibility of
expanding into space would tempt the nations of Earth to compete for new resources, to
gain a foothold in orbit or on the Moon before their rivals do. Increased competition
between the United States, Russia, and China could become plausible, much like
competition between the US and Russia for Arctic resources is looming today. The
scenarios analyzed in this chapter also have relevance to human extinction risks on
timescales beyond the 21st century, and we briefly violate our exclusive focus on the 21st
century in some of these sections.
Overview of Space Risks
Space risks are in a different category from biotech risks, nuclear risks,
nanotechnology, robotics, and Artificial Intelligence. They are in a category of lower
intensity. Space risks require much more scientific effort and advanced technology to
become a serious global threat than any of the other risks mentioned. Even the smallest
space risks that plausibly involve the extinction of mankind involve megaengineering
projects with space stations tens, hundreds, even hundreds of thousands of kilometers
across. Natural space risks, such as Gamma Ray Bursts or massive asteroid or large
comet impacts only occur once every few hundred million years. With such low
probabilities of natural disasters from space, our attention is well spent on artificial space
weapons instead, which could be constructed on the timescale of decades or centuries.
Though such risks may seem far off today, they may develop over the course of the 21st or
22nd centuries to become a more substantial portion of the total probability mass of global
catastrophic risk. Likewise, they may not. The only way to make a realistic estimate is to
observe technological developments as they progress. Today's space efforts, such as
those by SpaceX, likely have very little to do with future space possibilities, since, as we've
argued and will continue to argue, any large-scale future space exploitation will likely be
based on MNT, not on anything we are developing today. Whether space technology
appears to be progressing slowly or quickly from our vantage point, it will be leapfrogged
when and if MNT is developed. If MNT is not developed in the 21st century, then space
technologies pose no immediate threat, since weapons platforms will not be built on a
scale large enough to do any serious damage to Earth. Full-scale space development and
possible global risk from space is almost wholly dependent on molecular manufacturing; it
is hard to imagine it happening otherwise, and especially not in this century. Other
technologies simply cannot build objects on the scale needed to be notable in the context of
global risk. The masses and energies involved are too great, on the order of tens of
thousands of times greater than global annual electricity consumption. We discuss this in
more detail shortly.
There are four primary anthropogenic space risks: 1) orbiting mirrors or particle beam
weapons that set cities, villages, soldiers, and civilians on fire, 2) nuclear weapons
launched from space, 3) biological or chemical weapons dispersed from space, and 4)
hitting the Earth with a fast-moving projectile. Let's briefly review these. To wipe out
humanity with orbiting mirrors, the aspiring evil dictator would need to build a lot of them:
hundreds or thousands, each nearly a mile in diameter, then painstakingly aim them at
every human being on Earth until they all died. This would probably take decades. Then
the dictator would need to kill himself and all his followers, or accidentally achieve the
same; otherwise, the total extinction of humanity (what this book is exclusively concerned
with) would not be secured. This is obviously not a highly plausible scenario, but we aren't
ruling anything out here. The nuclear weapons scenario is more plausible; enough nuclear
weapons could be launched from a space platform that the Earth goes through a crippling
nuclear winter and becomes temporarily uninhabitable. This could be combined with space
mirror attacks, artificially triggering supervolcano eruptions, and so on. The third scenario,
dispersion of biological weapons, will be discussed in more detail later in the chapter. The
fourth, hitting the Earth with a giant projectile, is very complicated and energy-intensive,
and we also cover it at some length here. This scenario is notable because while difficult to
pull off, a sufficiently large and fast projectile would be extremely destructive to the entire
surface of the Earth, sterilizing it in one go. Such an attack could literally burn everything
on the surface to ashes and make it nearly uninhabitable. Fortunately, it would require
about 100,000 times more energy than the United States consumes in a year to accelerate
a projectile to the suitable speed, among other challenges.
Deviation of Asteroids
The first catastrophic space risk that many people immediately think of is the
deviation of an asteroid to impact Earth, like the kind that killed off the dinosaurs. There are
a number of reasons, however, that this would be difficult to carry out, inertia being
foremost among them. Other attack methods would be preferable from a military and cost
perspective if one were trying to do a tremendous amount of damage to the planet. The
energy required to deviate an asteroid of any substantial size is prohibitive, so much so
that in nearly every case, it would be preferable to just build an iron projectile and launch it
directly at the Earth with rockets, or to use nuclear weapons or some other destructive
means instead.
One of the definitions of a planet is that it is a celestial body which has cleared the
neighborhood around its orbit. Its great mass and gravity have pulled in all the rocks that
were in its orbital neighborhood billions of years ago, when the solar system was formed
out of dust and rubble. These ancient rocks are mostly long gone from Earth's orbit. Of
the three categories of Near-Earth Objects (NEOs), the closest, the Aten asteroids, number
only 815, and nearly all of them are smaller than 100 m (330 ft) in diameter 15. A 100 meter
diameter asteroid, impacting into dense rock, creates about a 1 megaton explosion, the
size of a modern nuclear weapon16. This is enough to wipe out a city if it is in the wrong
place at the wrong time, but is not likely to have any global effects. About 867 NEOs are
1 km (3,280 ft) in size or greater, and 167 of these are categorized as PHOs (potentially
hazardous objects)17. About 92 percent of these are estimated to have been discovered so
far. If a 1 km (0.6 mi) wide impactor hit the Earth, it would release tens of
thousands of megatons of TNT equivalent energy, enough to create a 12 km (7 mi) crater
and a century-long period of lower temperatures, which would affect harvests worldwide
and could lead to billions of deaths by starvation 18,19,20,21,22,23. It would be a catastrophic
event, creating an explosion more than a hundred times greater than the largest atom
bomb ever detonated, but it would not threaten humanity in general. The asteroid that
wiped out the dinosaurs was ten times larger.
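As a rough illustration of how impact energy scales with impactor size, here is a minimal back-of-the-envelope sketch. The assumed density (3,000 kg/m³, typical of stony asteroids) and entry velocity (17 km/s) are our own illustrative values, not figures from the sources cited above; real yields vary widely with composition, speed, and impact angle.

```python
import math

MT_TNT_J = 4.184e15  # joules per megaton of TNT

def impact_energy_mt(diameter_m, density_kg_m3=3000.0, velocity_m_s=17e3):
    """Rough kinetic energy of a spherical impactor, in megatons of TNT.

    Illustrative assumptions only: real yields depend strongly on density,
    velocity, impact angle, and atmospheric entry effects.
    """
    radius = diameter_m / 2.0
    volume = (4.0 / 3.0) * math.pi * radius**3
    mass = density_kg_m3 * volume
    kinetic_energy = 0.5 * mass * velocity_m_s**2
    return kinetic_energy / MT_TNT_J

print(f"1 km impactor: ~{impact_energy_mt(1000):,.0f} MT")  # ~54,000 MT, i.e. tens of thousands of megatons
```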
To evaluate the risk to humanity from asteroid redirection, we located the largest
possible NEO with a future trajectory that brings it close to Earth, and calculated the energy
input which would be required to cause it to impact. The asteroid that stands out the most
is 4179 Toutatis, which will pass within 8 lunar distances of the Earth in 2069. The distance
to the Moon is 384,400 km (238,855 miles), meaning 8 lunar distances is 3,075,200 km
(1,910,840 mi) from Earth. 4179 Toutatis is a huge rock with dimensions of approximately
4.75 × 2.4 × 1.95 km (2.95 × 1.49 × 1.21 mi), shaped like a lumpy potato, with a mass of about
50 billion tonnes. Imagine pushing 50 billion tonnes for almost two million miles by hand.
Seems impossible, right? Pushing it with a rocket turns out to be almost as hard, by both
the standards of today's technology and the technology likely to be available in the 21st century.
About 1.5 × 10^16 newton-seconds of impulse would be required, equal to the total impulse of about
2,142,857 Saturn V heavy-lift rockets. The Saturn V rocket is what took the Apollo
astronauts to the Moon; it is 363.0 feet (110.6 m) tall, with a diameter of 33.0 feet (10.1 m),
weighs 3,000,000 kg (6,600,000 pounds), and would cost about $47.25 billion in 2015 dollars to
build. Thus, it would cost about $101,250 trillion to redirect the asteroid with currently
imaginable technology, about 6,457 times the annual GDP of the United States. But wait:
those rockets need other rockets to send them up to space in one piece and unused. If we
just sent the rockets up into space on their own, they would be depleted and couldn't push
the asteroid. You get the idea. If a nation could afford millions of Moon rockets,
it would be far cheaper and more destructive to directly bombard the
target with such rockets than to redirect an asteroid that is mostly made of loose rubble
anyway and wouldn't even cause that much destruction when it lands. Various impact
effects calculators available online allow for an estimate of the immediate destruction
caused by a single impact24.
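To reproduce the rough arithmetic above, the sketch below simply takes the chapter's own figures at face value: roughly 1.5 × 10^16 N·s of required impulse, about 7 × 10^9 N·s of total impulse per Saturn V (the value implied by the rocket count quoted above), the $47.25 billion per-rocket cost used in the text, and a US GDP figure consistent with the "6,457 times" comparison. None of these inputs are independent estimates of ours.

```python
# Re-deriving the Toutatis-deflection arithmetic from the figures used in the text.
required_impulse_Ns = 1.5e16   # impulse needed to redirect 4179 Toutatis (text's figure)
saturn_v_impulse_Ns = 7.0e9    # approximate total impulse of one Saturn V (implied by the text's rocket count)
saturn_v_cost_usd = 47.25e9    # per-rocket cost assumed in the text (2015 dollars)
us_gdp_usd = 15.7e12           # rough annual US GDP consistent with the text's comparison

rockets_needed = required_impulse_Ns / saturn_v_impulse_Ns
total_cost = rockets_needed * saturn_v_cost_usd

print(f"Saturn V rockets needed: ~{rockets_needed:,.0f}")       # ~2,140,000
print(f"Total cost: ~${total_cost / 1e12:,.0f} trillion")       # ~$101,000 trillion
print(f"Multiple of US GDP: ~{total_cost / us_gdp_usd:,.0f}x")  # ~6,400x
```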
Other asteroids are even more massive or distant from the Earth. 1036 Ganymed, the
largest NEO, is about 34 km (21 mi) in diameter, with a mass of about 10^17 kg, a hundred
trillion tonnes. Its closest approach to the Earth, 55,964,100 km (34,774,500 mi), is roughly
a third of the distance between the Earth and the Sun. Ganymed would definitely destroy
practically everything on the surface of the Earth if it impacted us, but with masses and
distances of that magnitude, no one is directing it to hit anything. It is difficult for us to
intuitively imagine how much momentum a 34 km (21 mi) wide asteroid has and how much
energy is needed to move it even a few feet off its current course. Exploding all atomic
bombs in the US and Russia's nuclear arsenals would barely scratch it. The entire energy
output of the history of human civilization on Earth would scarcely move it by a few
hundred feet. One day, a super-advanced civilization may be able to slowly move objects
like this with energy from solar panels larger than Earth's surface, but it is not on the top of
our list of risks to humanity in the 21st century. The same applies to breaking off a piece of
Ganymed and moving it; it is simply too distant. Asteroids in the asteroid belt are even
more distant, and even more impractical to move. It would be far easier to build a giant
furnace and cook the Earth's surface to a crisp than to move these distant asteroids.
Chicxulub Impact
As many know, approximately 65.5 million years ago, an asteroid 10 km (6 mi) in
diameter crashed into the Earth, causing the extinction of roughly three-quarters of all
terrestrial species, including all non-avian dinosaurs, and a third of marine species 25. The
effects of this object hitting the Earth, just north of the present-day Yucatan peninsula in
Mexico, were severe and global. The impact kicked up about 5 × 10^15 kg of flaming ejecta,
sending it well above the Earth's atmosphere and raining it back down around the globe at
velocities between 5 and 10 km/s26. Reentering the skies worldwide, the tremendous air
friction made this material glow red-hot and broiled the surface in thermal radiation
equivalent to 1 megaton nuclear bombs detonated at 6 km (3.7 mi) intervals around the
globe. This is like detonating about 20 million nuclear weapons in the skies above the
Earth. For at least an hour and as long as several hours, the entire surface, from Antarctica
to the equator, was bathed in thermal radiation 50 to 150 times more intense than full
sunlight. This is enough to ignite most wood, and to certainly ignite all the dry tinder on the
world's forest floors. A thick ash layer in the geologic record shows us that the entire
biosphere burned down. Being on the opposite side of the planet from the impact would
hardly have helped. In fact, the amount of flaming ejecta raining from the sky at the so-called
antipodal point was even greater than anywhere but the region immediately around the
impact.
The ash layer evidence that the biosphere burned down is corroborated by the
survival pattern of species that made it through the K-T extinction event, as it is called 27.
The animals that survived were those with the ability to burrow or hide underwater during
the heat flux in the hour or two after the impact. During this time, the temperature of the
surface was literally as hot as a broiler, and almost every single large animal, including
favorites like Tyrannosaurus Rex and Triceratops, would have been cooked to a blackened
steak. A few animals fortunate enough to hide underwater or in caves may have survived. If they
were large, they would not have survived for long, however, since the impact winter began
within a few weeks to a few months of the impact, lasting for decades and causing even
further destruction28. Living plant and animal matter would have been scarce, meaning only
detritivores (animals that can survive on detritus) had enough to eat. During this time,
only a few isolated communities in refugia, protected places like swamps alongside
overhanging cliffs or equatorial islands, would have survived. Examples of animals that
would have had a relatively easy time of surviving would be the ancestors of modern-day
earthworms, pill bugs, and millipedes, all of which feed mainly on detritus.
Events like the Chicxulub impact, which happen only once every few hundred million
years, are different from some of the earlier risks discussed in this book (except AI) in the
sense that they are more comprehensively destructive and involve the release of more
energy, especially energy in the form of heat. Whereas some individuals may have
immunity to certain microbes during a global plague, or be able to survive nuclear winter in
a self-sufficient fortress in the mountains, an asteroid or comet impact that throws flaming
ejecta across the planet has a totality and intensity that is hard for many other risks to
match. The only risk we've discussed so far that is comparable is an Artificial Intelligence
using nano-robots to convert the entire Earth into paperclips. An asteroid impact is
intermediate between AI risk and the risk of a bio-engineered multi-plague or similar event
in terms of its brute killing power. It is intense enough to destroy the entire surface through
brute heat, but not dangerous enough to intelligently seek out and kill humans.
After the multi-decade long impact winter, the Chicxulub impactor caused centuries of
greater-than-normal temperatures due to greenhouse effects from all the carbon dioxide
released by the incinerated biosphere. This, in combination with the scarcity of plants
caused by their conflagration, caused huge interior continental regions to transform into
deserts and badlands. The only comparable natural events that can create climatic
changes of this magnitude are flood-basalt episodes such as the Deccan Traps, a series of
volcanic eruptions lasting hundreds of thousands of years around the end of the Cretaceous
period, roughly 66 million years ago, and the Siberian Traps at the end of the Permian period,
about 250 million years ago.
The K-T extinction was highly selective. Many alligator, turtle, and salamander
species survived. This was because they could both hide underwater and eat detritus. In
general, detritus-eating animals were able to survive, since that's all there was for many
years after the impact. Like the alligators and turtles of 65 million years ago, if a Chicxulub-sized asteroid were to hit us today, many human beings would figure out a way to survive,
both during the initial impact and in the ensuing years. Like many of the scenarios in this
book, such an event would likely wipe out 99 to 99.99 percent of humanity, but many (over
500,000) would survive. Many people work underground or in places where they would be
completely shielded from the initial thermal pulse. Even if everything exposed on the surface
burned, there would be sufficient stored food underground to keep millions alive for
decades without farming. Wheat berries stored in an oxygen-free environment can retain
nutritional value for hundreds of years. This would give humanity enough time to start
growing new food and locate refugia where possible. The Earth's fossil fuels and many
functional machines and electronics would remain, giving us tools to stage a recovery. A
five degree or even ten degree Celsius temperature increase for hundreds of years, while
extremely harsh and potentially lethal to as many as 90-95 percent of all living plant and
animal species, could not wipe out every single human. There would always be
somewhere, like Iceland, Svalbard, northern Siberia and Canada, which would remain at
mild temperatures even if the global average greatly increased. Humans are not dinosaurs.
We are smarter, our nutritional needs are fewer, and we would be able to survive, even if
photosynthesis completely shut down for several years.
A brief word here on the difficulty of surviving various global temperature changes.
During humanity's existence on Earth, for the last 200,000 years, there have been global
temperature variations of significant magnitude, mostly in the negative direction relative to
the present day. Antarctic ice cores show that the global temperature average during the
last Ice Age was about 8-9 degrees Celsius cooler than it is now 29. We know that humanity
and other animal species can survive significant drops in temperature. What makes impact
winter qualitatively different than a simple Ice Age is the complete shutdown of
photosynthesis caused by ash-choked skies. Even just a few years of this is enough to
reshape global biota entirely. The third effect of a major asteroid impact, global warming, is
distinct from global cooling and photosynthetic shutdown, but it has the potential to be as deadly
as the others, because it is more difficult for life to
adapt to increased temperatures than to decreased temperatures. If an animal is cold, it can
eat more food, or migrate to a warmer place, and survive. If an animal is too hot, it can
migrate, but it suffers more in the process of doing so. Overheating is extremely
dangerous, which is why tropical animals have so many elaborate adaptations for
preventing it. The relative contributions of initial firestorms, impact winter, and impact
summer to species extinction at the K-T boundary are poorly studied and require more
research.
Daedalus Impact
To consider an impact that could truly wipe out humanity completely, we have to either
analyze the impact of a larger object, a faster object, or an object with both qualities. We
also should expand our scope beyond natural asteroids and comets, large versions of
which impact us only rarely, and consider the possibility of artificially accelerated objects. At
some point in the future, humanity will probably harvest the Sun's energy in larger
amounts, with huge arrays of solar panels which might even approach the size of planetary
surfaces. If we do spread beyond Earth and conquer the solar system, this would be very
useful for our energy needs. Such systems would give humanity and our descendants
access to tremendous amounts of energy, enough to accelerate large objects up to a
significant fraction of the speed of light, say 0.1 c.
Assuming sophisticated automated robotics, artificial intelligence, and
nanotechnology, it would be possible to disassemble asteroids and convert them into
massive solar arrays within the next hundred years. If the right technology is in place, it
could be done with the press of a button, and the size of the asteroid itself would be
immaterial. This energy could then be applied to harvesting other fuels, such as helium-3 in
the atmosphere of Uranus. This could give human groups access to tremendous amounts
of energy, possibly even thousands of times greater than the Earth's present power
consumption, within a hundred to two hundred years. We aren't saying that this is
necessarily particularly likely, just imaginable. After all, the Industrial Revolution also rapidly
improved the capabilities of humanity within a short amount of time, and some scientists
and engineers anticipate an even greater capability boost from molecular manufacturing,
AI, and robotics during the 21st century. So it shouldn't be considered completely out of the
question.
The energy released by the Chicxulub impactor was between 310 million megatons and
13.86 billion megatons of TNT, according to a careful study30. (100 million megatons, a
frequently cited number, is too low.) As a comparison, the largest nuclear bomb ever
detonated, Tsar Bomba, had a yield of 50 megatons. Scaling this way up, we make the
general assumption that it would require an explosion with a yield of 200 billion megatons
to completely wipe out humanity. This would be an explosion much larger than anything
that has occurred during the era of multicellular life. An asteroid has not triggered an
explosion this large since the Late Heavy Bombardment, roughly 4 billion years ago when
huge asteroids were routinely hitting the Earth. It seems like a roughly arbitrary number,
which it is, but it has a few things to recommend it: 1) it's more than ten times greater
than the explosion which wiped out the dinosaurs; 2) it's large enough to be beyond the
class of objects that has any chance of hitting Earth naturally, but small enough that it isn't
completely outlandish; 3) it's easily large enough to argue that it could wipe out all of
humanity, even that it would be likely to do so, in the absence of bunkers deliberately built
to last through many decades of severe temperature drops and no farming.
An impact releasing energy equivalent to 200 billion megatons (2 × 10^11 MT) of TNT is
enormous and difficult to imagine. The following effects are all from the results of
the impact simulator of the Earth Impact Effects Program, a collaboration between
astronomers and physicists at Imperial College London and Purdue University. A 200 billion
MT impact would release approximately 10^27 joules of energy, opening a crater in water
(assuming it strikes the ocean) with a diameter of 881 km (547 mi), about the distance
between San Francisco and Las Vegas. The crater opened on the seafloor would be about
531 km (329 mi) in diameter, which, after the crater wall undergoes collapse, would form a
final crater 1,210 km (749 mi) in diameter. This crater would be so large it would span
almost the entire states of California and Nevada. It is far larger than the
Chicxulub crater, which is only 180 km (110 mi) in diameter. The explosion, which is between
roughly 14 and 645 times more powerful than the Chicxulub impact, leaves a crater about 7 times
larger in diameter, with roughly 45 times greater area. All this greater area corresponds to more dust and
soot clogging up the sky in the decades to come, as well as more molten rock raining from
the heavens in the few hours following the impact. Both heat and dust translate into killing
power.
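A quick consistency check on these figures requires only the megaton-to-joule conversion and the crater diameters quoted above; everything else in the sketch below comes from the numbers already given in the text.

```python
MT_TNT_J = 4.184e15                     # joules per megaton of TNT

daedalus_mt = 2.0e11                    # 200 billion megatons, the yield assumed in this chapter
chicxulub_mt_low, chicxulub_mt_high = 3.1e8, 1.386e10  # range from the study cited above

print(f"Daedalus-class impact: ~{daedalus_mt * MT_TNT_J:.1e} J")   # ~8.4e26 J, i.e. roughly 10^27 J
print(f"Ratio to Chicxulub: {daedalus_mt / chicxulub_mt_high:.0f}x "
      f"to {daedalus_mt / chicxulub_mt_low:.0f}x")                 # ~14x to ~645x

final_crater_km, chicxulub_crater_km = 1210, 180
print(f"Crater diameter ratio: ~{final_crater_km / chicxulub_crater_km:.1f}x, "
      f"area ratio: ~{(final_crater_km / chicxulub_crater_km) ** 2:.0f}x")  # ~6.7x diameter, ~45x area
```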
The physical effects of the impact itself would be mind-boggling. The fireball
generated would be more than 122 km (76 mi) across. At a distance of 1,000 miles, the
thermal pulse would hit 15.6 seconds after impact, with a duration of 6.74 hours, and a
radiant heat flux 5,330 times greater than full sunlight (all these numbers are from the
impact effects calculator). At a distance of 3,000 miles (4,830 km), equivalent to that
between San Francisco and New York, the fireball would appear 5.76 times larger than the
sun, with an intensity 13.7 times greater than full sunlight, enough to ignite clothing,
plywood, grass, and deciduous trees, and to inflict third-degree burns over most of the body. Roughly
815,000 cubic miles of molten material would be ejected, arriving approximately 16.1
minutes after impact. The impact would cause shaking at 12.1 on the Richter scale, greater
than any earthquake in recorded history. Even at this continental distance of 3,000 miles,
the ejecta, which arrives after about 25.4 minutes as an all-incinerating wave of flaming
dust particles, has an average thickness on the ground of 6.28 meters (20.6 ft). That is
incredible, enough to cover and kill just about everything. The air blast would arrive about 4
full hours after impact, with a maximum wind velocity of 1,760 mph, peak overpressure of
10.6 bars (150 psi), and a sound intensity of 121 dB. At a distance of 12,450 miles (20,037
km), the maximum possible distance from the impact, the air blast arrives after 16.8 hours,
with a peak overpressure of 0.5 bars (7.6 psi), maximum wind velocity of 233 mph, and a
sound intensity of 95 dB. The force of the blast would be enough to collapse almost all
wood frame buildings and blow down 90 percent of trees. Altogether, more than 99.99
percent of the world's trees would be blown down, regardless of where the impact hits.
No authors have considered in much detail the effects such a blast would have on
humanity itself, probably because it could conceivably only be caused by an artificial
impact rather than a natural one, and anthropogenic high-velocity object impacts are rarely
considered, especially of that energy level. What is particularly interesting about blasts of
around this level is that they are somewhere between sure survival and sure death for
the human species, so there is definite uncertainty about the likely outcome. Nearly anyone
not in an underground shelter would be destroyed, just as they would in the winds of a
powerful hurricane. Hurricanes do not bring flaming hot ejecta that lands in 10-ft thick
layers and burns away all oxygen on the surface, however. The Chicxulub impact kicked up
~5 × 10^15 kg of material, which deposited an average of 10 kg (22 lb) per square meter,
forming a layer 3-4 mm thick on average. The larger impact we describe above, which we
will call a Daedalus impact for reasons that will become clear shortly, would eject at
least 3,625,000 cubic miles (15,110,000 cu km) of material, equivalent to a cube 153 miles
(246 km) on a side, into the atmosphere, for a mass of ~5 × 10^19 kg, roughly 10,000 times
greater. Extrapolating, this means we could expect an average deposition of 100
tonnes (110 tons) per square meter, forming a layer 30-40 meters (100-130 ft) thick on average.
This is inconsistent with the Impact Effects Calculator result of just 6 meters of thickness only
3,000 miles away, and knowing which figure is closer to the truth is crucial. (Note: this result
should be double-checked with an expert.)
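The extrapolation can be checked directly by spreading each ejecta mass over the Earth's surface. The sketch below does this; the settled dust density of roughly 2,700 kg/m³ is an assumption of ours (consistent with compacted silicate rock), not a figure from the sources above.

```python
EARTH_SURFACE_M2 = 5.1e14    # total surface area of the Earth
DUST_DENSITY = 2700.0        # assumed density of settled silicate ejecta, kg/m^3

def ejecta_layer(ejecta_mass_kg):
    """Average deposition (kg/m^2) and layer thickness (m) if spread evenly over the globe."""
    per_m2 = ejecta_mass_kg / EARTH_SURFACE_M2
    return per_m2, per_m2 / DUST_DENSITY

for name, mass in [("Chicxulub (~5e15 kg)", 5e15), ("Daedalus (~5e19 kg)", 5e19)]:
    per_m2, thickness = ejecta_layer(mass)
    print(f"{name}: ~{per_m2:,.0f} kg/m^2, layer ~{thickness:.3g} m thick")
# Chicxulub: ~10 kg/m^2, a few millimeters; Daedalus: ~100,000 kg/m^2 (100 tonnes), roughly 36 m
```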
If the impact really did leave a layer of molten silicate dust 30 meters thick, 100
tonnes per square meter, it is easy to imagine how that might threaten the existence of
every last human being, especially as the post-impact years tick by and food is hard to
come by. During the K-T event, global temperature is thought to have dropped by 13 Kelvin
after 20 days31. With 10,000 times as much dust in the atmosphere as the K-T event, how
much cooling would the Daedalus event cause? It could be catastrophic cooling, not just in
the sense of wiping out 75-90 percent of all terrestrial animal species, but more in the
sense of potentially wiping out all terrestrial non-arthropod species.
There would be some warning time for those distant from the blast. At a distance of
about 10,000 km (6,210 miles), the seismic shock would arrive after about 33.3 minutes,
measuring 12.2 on the Richter scale. In comparison, the 1906 San Francisco earthquake,
which destroyed 80 percent of the city, was about 7.8 on the moment magnitude scale
(modern Richter scale), which corresponds to about 7.6 on the old Richter scale. Each
increment on the scale corresponds to a 10 times greater shaking amplitude, so the
earthquake following a Daedalus impact would have a shaking amplitude greater than
20,000 times that of the San Francisco earthquake at the hypocenter, meaning a shaking
amplitude of tens of miles. At a distance of 10,000 km, this would rate at VI or VII on the
Mercalli scale. VI on the Mercalli scale refers to "Felt by all, many frightened. Some heavy
furniture moved; a few instances of fallen plaster. Damage slight." VII refers to "Damage
negligible in buildings of good design and construction; slight to moderate in well-built
ordinary structures; considerable damage in poorly built or badly designed structures;
some chimneys broken." Upon feeling the seismic shock, people would check the Internet,
television, or radio to find news that an object had hit the Earth, and that the deadly blast
wave was on its way. Starting after about 45 minutes, the ejecta would begin arriving, the
red-hot rain of dust, getting thicker over the next 25 minutes and reaching maximum
intensity 70 minutes after impact. This all-encompassing heat would be sufficient to cook
everything on the surface. The devastating blast wave, which is the largest physical
disruption and would include maximum winds of up to 677 mph and pressures of 30.8 psi,
similar to those felt at ground zero of a nuclear explosion, would arrive after about 8.42
hours. This would completely level everything on the ground, including any structures.
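The magnitude comparison above is just a power of ten: each whole unit of magnitude corresponds to a tenfold increase in shaking amplitude, so the ratio follows directly from the two magnitudes quoted.

```python
impact_magnitude = 12.2   # magnitude at 10,000 km from a Daedalus-class impact (calculator output quoted above)
sf_1906_magnitude = 7.8   # 1906 San Francisco earthquake, moment magnitude

amplitude_ratio = 10 ** (impact_magnitude - sf_1906_magnitude)
print(f"Shaking amplitude ratio: ~{amplitude_ratio:,.0f}x")  # ~25,000x, i.e. "greater than 20,000 times"
```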
Contrary to popular conception, the pressures on a level similar to those directly
underneath a nuclear explosion are survivable by using a simple arched-earth
structure over a closed trench. One such secure trench was located near ground zero in
Nagasaki, where pressures approached 50 psi. Such trenches require a secure blast door,
however. Perhaps a greater problem would be the buildup of ejecta, which would rest
everywhere and cause a great amount of pressure, crushing unreinforced structures and
their inhabitants. It would also be super-hot. We can imagine a more hopeful situation on
the side of a cliff or mountain, where ejecta slides off to lower altitudes, or in tropical
lagoons and lakes, where even 100 ft worth of dust may simply sink to the bottom, sparing
anyone floating on the surface. Oxygen would be a greater problem, however, meaning
that those in elevated, steep areas with plenty of fresh air would be at an advantage. Such
areas would have increased exposure to thermal radiation, on the other hand, making it a
tradeoff. Ideal for survival would be a secure structure carved into a cliff or mountainside,
or a hollowed-out mountain like the Cheyenne Mountain nuclear bunker in Colorado, which
is manned by about 1,400 people. The bunker has a water reservoir of 1,800,000 US gal
(6,800,000 l), which would be more than enough to support its 1,400 workers for over a
year. One wonders if the intense heat of a 100-130 ft layer of molten dust would be
sufficient to melt and clog the air intake vents and suffocate everyone inside. Our guess
would be probably not, which makes us question even the assumption that a 200 billion
megaton explosion would be sufficient to truly wipe out all of humanity. The general
concept requires deeper investigation.
If staff in a highly secured bunker like the Cheyenne complex were somehow able to
survive the initial ejecta and blast wave, the world they emerged into once it cooled
would be very different. Unlike the post-apocalyptic landscape faced by the survivors of the
Chicxulub impact, which was only dusted with a few millimeters of material, these refugees
would be dealing with layers of dust many stories deep, which would get into
everything and create a nutrient-free layer nearly impossible for plants to grow in. Any
survivors would need to find a deposit of remaining soil, which could be done by digging
into a mountainside or possibly clearing an area with a nuclear explosion. Then, they would
need to use rain or whatever other available water source to attempt to grow food. Given
that the sky would be blacked out for at least several years and possibly longer, this would
be quite a challenge. Perhaps they could grow plants underground with artificial light from a
nuclear-powered generator. Survivors could scavenge any underground grain reserves, if
they manage to locate these and expend the energy to dig down to them by hand. The
United States and many other countries have grain surpluses sufficient to feed a few
thousand people almost indefinitely, as we reviewed in the nuclear chapter.
Over the years, moss and lichen would begin to grow over the surface of the cooled
ash, and rivers would wash some of it into the sea. If humanity were lucky, there would be
a fern spike, a massive recolonization of the land by ferns, as generally occurs after mass
extinctions. However, the combination of falling temperatures, limited power sources,
everything on Earth being covered in a layer of ash 100 feet deep, and so on, could easily
prove sufficient to snuff out a few isolated colonies of several thousand people fortunate
enough to have survived the initial blast and heat. One option might be to construct sea
cities, if the survivors had the technology available for it. It would be difficult to reboot the
large-scale technological infrastructure needed to construct such cities, especially working
with few people, though possibly working components could be salvaged. In a matter of a
couple decades, without a technological infrastructure to build new tools, all but the most
simple devices would break down, putting humanity back into a technological limbo similar
to the early Bronze Age. This would make it difficult for us to achieve tasks such as locating
grain stores hundreds of miles away, determining their precise location, and digging 100
feet down to reach them. Because of all these extreme challenges to survival, we
tentatively anticipate that such an impact probably would wipe out humanity, and indeed
most, if not all, other complex terrestrial life.
How could such an impact occur? It would have to be an artificial projectile made of iron,
with a radius of about 0.393 km (dwarfing the Great Pyramid of Giza), accelerated into
the surface of the Earth at one-tenth the speed of light (0.1 c). The name Daedalus is
borrowed from the Project Daedalus, a project undertaken by the British Interplanetary
Society between 1973 and 1978 to design an unmanned interstellar probe 32. The craft
would have a length of about 190 meters (626 ft) and weigh 54,000 tons, with a scientific
payload of 400 tons. The craft was designed to be powered using nuclear fusion, fueled by
deuterium harvested from an outer planet like Uranus. Accelerating to 0.12 c over the
course of 4 years, the craft would then cruise for 46 years to reach its target star system,
Barnard's Star, 5.9 light years distant. The craft would be shielded by an artificially
generated cloud of particles called dust bugs hovering 200 km ahead of the vehicle to
remove larger obstacles such as small rocks. Micro-sized dust grains would impact the
craft's beryllium shield and ablate it over time.
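As a rough check on the projectile described above, the following sketch is our own back-of-the-envelope, not a figure from any cited study. It assumes a solid iron sphere of standard density (7,874 kg/m³) and uses the non-relativistic kinetic-energy formula, which is accurate to within about one percent at 0.1 c.

```python
import math

MT_TNT_J = 4.184e15    # joules per megaton of TNT
C = 2.998e8            # speed of light, m/s
IRON_DENSITY = 7874.0  # kg/m^3

radius_m = 393.0       # the ~0.4 km radius iron projectile described above
velocity = 0.1 * C

mass = IRON_DENSITY * (4.0 / 3.0) * math.pi * radius_m**3
kinetic_energy = 0.5 * mass * velocity**2  # non-relativistic; error under 1% at 0.1 c

print(f"Projectile mass: ~{mass:.1e} kg")                           # ~2e12 kg
print(f"Impact yield: ~{kinetic_energy / MT_TNT_J:.1e} MT of TNT")  # ~2e11 MT, i.e. roughly 200 billion megatons
```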
The Daedalus craft would only accelerate a 400-ton payload, instead of the 9.3 × 10^8 kg
(roughly 930,000 tonnes, or 1.02 million tons) required to deal a potential deathblow to mankind. It would
need to be scaled up by a factor of 2,550 to reach critical mass. You hear discussion about
starships in science fiction frequently, but no science fiction we are aware of addresses the
consequences of the fact that an interstellar probe just 2,550 times larger than the
minimum viable interstellar probe could cause an explosion on the Earth that covers the
entire surface in 50-100 feet of molten rock. Once such a probe got going, it
would be very difficult to stop. No object would have the momentum to push it off course; it
would need to be stopped before it acquired a high speed, within the first four years. If an
antimatter drive could be developed that allows an even greater speed, one closer to the
speed of light, the projectile would only require a mass of 8.38 × 10^8 kg, about a 29 m (96
ft) radius iron sphere, similar to the mass of the Seawise Giant, the longest and heaviest
ship ever built. On the human scale, that is rather large, but on the cosmic scale, it's like a
dust grain. Any future planetary civilization that wants to survive unknown threats will need
high-resolution monitoring of its entire cosmic area, all the way out to tens of light years
and as far beyond that as possible.
Consider that hundreds of years ago, all transportation was by wind power, foot, or
pack animal, and the railroad, diesel ship, and commercial aircraft didn't exist. As humanity
developed huge power sources and more energetic modes of transport, energy release
levels thousands, tens of thousands, and hundreds of thousands of times greater than we
were accustomed to became routine. Today, there are about 93,000 commercial flights
daily, and about one in every hundred million is hijacked and flown into something, causing
events like 9/11. Imagine a future where interstellar travel is routine, and a similar
circumstance might become a threat with respect to starships used as projectiles. Instead
of killing a few thousand people, however, the hijacking could kill everyone on Earth. This
scenario would be especially plausible if the terrorist were a nationalist of a future country beyond
Earth, located on the Moon, Mars, the asteroid belt, among space stations, or even in a separate
star system. For whatever reason, such an individual or group may have no sympathy for
the planet and be perfectly willing to ruin it. This would probably not put humanity as a
whole at risk, since many would be off-world at that stage, but the logic of the scenario has
implications for our security in the long-term future.
Heliobeams
Perhaps a more plausible risk in the nearer future is some group using a space
station as a tool to attack the Earth. They might wipe out most or all of the human species
to replace us with their own people, or to otherwise dominate the planet. This scenario was
portrayed in the 1979 James Bond film Moonraker. Space is the ultimate high ground; from
it, it would be easier to distribute capsules of some lethal biological agent, observe the
enemy, launch nuclear weapons, and so on. Perhaps, speculatively, this process could get
carried away until those who control low Earth orbit begin to see themselves as gods and
start to pose a threat to humanity in general.
To create space stations on a scale required to actually control or threaten the entire
Earth's surface would be a difficult task. In the 1940s, Nazi scientists developed the design
for a large mirror in orbit with a 1 mile diameter 33, for focusing light on the surface and
incinerating cities or military formations. Called the Sun Gun or heliobeam, it was anticipated
to take around 50 to 100 years to construct. No detailed documents for the
Sun Gun design survived the war, but assuming a mass of about 1,191 kg (2,626 lb)
per square meter, corresponding to a steel plate 6 inches thick, a heliobeam a mile in
diameter would have a surface area of about 2,034,162 square meters (21,895,500 sq ft)
and a mass of about 2,422,690 tonnes (2,670,560 tons), similar to the mass of 26 Nimitz-class
aircraft carriers. At current space launch costs of $2,200/kg ($1,000 per pound) as of
March 2013 for the Falcon Heavy rocket, lifting that much material to orbit would cost
about $5.33 trillion, roughly the annual GDP of Japan. Even given the relatively enormous US
military budget, this would be a rather difficult expense to justify. Such a project would have
to be carried out over the course of many years, or weight would have to be sacrificed,
making the heliobeam thinner, which would make it more vulnerable to attack or disruption.
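The mass and launch-cost arithmetic above can be reproduced as follows; the 6-inch steel plating and the Falcon Heavy price are the assumptions already stated in the text, not independent estimates.

```python
import math

MIRROR_DIAMETER_M = 1609.34   # one mile
AREAL_DENSITY = 1191.0        # kg per square meter (about a 6-inch steel plate, as assumed above)
LAUNCH_COST_PER_KG = 2200.0   # Falcon Heavy, ~$2,200/kg (the 2013 figure used in the text)

area_m2 = math.pi * (MIRROR_DIAMETER_M / 2) ** 2
mass_kg = area_m2 * AREAL_DENSITY
launch_cost = mass_kg * LAUNCH_COST_PER_KG

print(f"Mirror area: ~{area_m2:,.0f} m^2")                  # ~2.03 million m^2
print(f"Mirror mass: ~{mass_kg / 1000:,.0f} tonnes")        # ~2.42 million tonnes
print(f"Launch cost: ~${launch_cost / 1e12:.2f} trillion")  # ~$5.3 trillion
```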
There are ways to take the heliobeam concept as devised by the Nazis and transform
it into a better design which makes it easier to construct and more resilient to potential
attack. Instead of one giant mirror, it could be a cluster of several dozen giant mirrors,
communicating with data links and designed to point to the same spot. Instead of being
made of thick metal, these mirrors could be constructed out of futuristic nanomaterials
which are extremely light, yet durable. This could lower the launch weight by a factor of
tens, hundreds, maybe even thousands. The preferred construction material for a scaffold
would be diamond or fullerenes. Even if MNT is not developed in the near future, the cost
of industrially produced diamond is falling, though not to the extent that would be required
to make such large quantities affordable. This makes the construction of a heliobeam seem
mostly dependent on progress in the field of MNT, much like the other hypothetical
structures in this chapter.
If launch costs and the costs of bulk diamond could be brought way down, an
effective heliobeam could potentially be constructed for only a few trillion dollars instead of
a few hundred trillion, which might put it within the reach of the United States or Russian
military during the latter half of the 21st century. The United States is spending between
$620 and $661 billion over the next decade on maintaining its nuclear weapons, as an
example of a project in a similar cost window. Spending a similar amount on a heliobeam
could be justified if the ethical and geopolitical considerations looked right. After all, a
heliobeam would be like a military laser beam that never runs out of power. There would be
few limits to the destruction it could do. Against stationary targets in particular, it would be
extremely effective. There is nothing like vaporizing your enemies using a giant beam from
space.
Countries are especially dependent on their power and military infrastructure to
persist. If these could be destroyed one by one, over the course of several days by a giant
orbiting mirror, that could cripple a country and make it ripe for ground invasion. Nuclear
missiles could defend against such a mirror by damaging it, but possibly a mirror could be
sent up in the aftermath of a World War during which most nuclear weapons were used up,
leaving the remainder at the mercy of the gods in the sky with the sun mirror. Whatever
group controlled the sun mirror could use it to cripple key human infrastructure (power
plants, ports), forcing humanity to live as it did in ancient times, without electricity. If the
group were the only ones who retained industrial technology, such as advanced fighter
aircraft, they would be fighting against people who had little more than crude firearms. The
difference in technological capability between Israelis and Palestinians comes to mind, only
moreso. The future could be a world of entirely different technological levels, where an elite
group is essentially running a giant zoo for its own amusement. If they got tired of the
masses of humanity, they might even decide to exterminate us through other means such as
nuclear weapons. A heliobeam need not be used to perform every important military
maneuver, just the most crucial moves like destroying power plants, air bases, and tank
formations. It would still give whoever controlled it a dominant position.
There are many important strategic, and therefore geopolitical, differences between a
hypothetical heliobeam and nuclear weapons. Nuclear weapons are more of an all-or-nothing thing. When you use a nuclear weapon, it makes a gigantic explosion with ultra
high-speed winds and a fearsome mushroom cloud that showers lethal radioactivity for
miles around. Even the most power-hungry politician or military commander knows they
are not to be used lightly. A heliobeam, on the other hand, could be used in an attack which
is arbitrarily small and seemingly muted. By selectively obscuring parts of a main mirror, or
selecting just a few out of a cluster to target sunlight towards a particular location, a
heliobeam array could be used to destroy just a few hundred troops instead of a few
thousand, or to harass on an even smaller scale. Furthermore, its use would be nearly free
once it is built. The combination of low maintenance costs and arbitrarily minor attack
potential would make the incentive to use a heliobeam substantially greater than the
incentive to use nuclear missiles. A country with messianic views about its role in the world,
such as the United States, could use it to get its way in every conflict, no matter how
minor. This could exacerbate global tensions, paving the way for an eventual global
dictatorship with absolute power. We ought to be particularly wary of any technologies
which could be used to enforce global dictatorship, due to the potentially irreversible nature
of a transition to one, and the attendant suffering it could cause once firmly in place 34.
In conclusion, it is difficult to imagine how space-based heliobeams could kill all of
humanity, but they could certainly oppress us greatly, possibly locking us into a state where
a dictatorship controls us completely. Combined with life extension technologies and
oppressive brain implants (see next chapter), a very long-lived dictator could secure his
position, preventing technological development and personal freedom in perpetuity. Nick
Bostrom defines an existential risk as "one where an adverse outcome would either
annihilate Earth-originating intelligent life or permanently and drastically curtail its
potential"35. This particular risk seems less likely to annihilate life than to permanently
and drastically curtail its potential if the technology falls into the wrong hands. As a counterpoint, however,
one may argue that nuclear weapons have the same great power, but they have not
resulted in the establishment of a global dictatorship.
Dispersal of Biological Agents
In Moonraker, bad guy Hugo Drax attempts to release 50 spheres of nerve gas from a
space station to wipe out the global population, replacing it with his own master race.
Would this be possible? Almost certainly not. Even if 50 spheres could hold sufficient toxic
agent, they would not achieve the level of dispersion necessary to distribute a
lethal dose across the entire surface of the Earth, or even a tiny fraction of it. But is it
possible in principle? As always, it is our job to find out.
First, it would not make sense to disperse a biological agent in an un-targeted
fashion. If someone is trying to kill as many people as possible with a biological agent, it
makes the most sense to distribute it in areas where there are people, especially people
quickly dispersing across far distances, like an airport. This is best done on the ground,
with an actual human vector, spraying aerosols in places like public bathrooms. The
method of dispersal would be up close, personal, and meticulous, not slipshod or
haphazard, as in a shotgun approach. The downside of this is that you can't hit millions of
people at once, and it puts the attacker himself at risk of contracting the disease.
Any biological weapon launched from a distance has to deal with the problem of
adequate dispersal. The biological bomblets developed by the United States military during
our biological weapons program are small spheres designed to spin and spray a biological
agent as they are dropped from a plane. If bomblets are to be launched from a space
station, a number of weaponization challenges would need to be overcome. First, the
bomblets would need to be encased in a larger module with a heat shield to protect it from
burning up on reentry. Most of the heat of a reentering space capsule is transferred just 6
km (3.7 mi) over the ground, where the air gets much thicker relative to the upper
atmosphere, so the bomblets have to remain enclosed down to that altitude at least (unless
constructed from some advanced material like diamond). By the time they are that far
down, they can't disperse across that wide an area, maybe an area 10 km (6.2 mi) across.
Breaking this general rule would either require making ultra-tough reentry vehicles that can
reenter in one piece despite being relatively small, or having the reentry module break up
into dozens or hundreds of rockets which go off horizontally in all directions before falling
down to Earth. Both are serious challenges, and the technology does not exist, as far as is
publicly known.
Most people live in cities. Cities are the natural place where an aspiring Bond villain
would want to spread a plague. A problem is that major cities are far apart. Another
problem is that doomsday viruses don't survive very well without a host, quickly getting
destroyed by low-level biological activity (such as virophages) or sunlight. To optimally hit
humanity with a virus would require not only a plethora of viruses (since many people
would undoubtedly be immune to just one), but a multitude of living hosts to spread the
viruses, since viruses in aerosols or otherwise unprotected would very quickly be
degraded by UV light from the sun. So the problem of wiping out humanity with a
biological weapon launched from a space station is actually a problem of launching bats,
monkeys, rats, or some similar stable vector in large numbers from a space station to
hundreds or thousands of major cities on Earth. When you think about it that way, in
combination with the reentry module challenges cited above, pulling this off is clearly a bit
more complicated than was portrayed in Moonraker.
Could biological bomblets filled with bats be launched into the world's major cities,
flooding them with doomsday bats? It's certainly imaginable, and would be a more subtle
and achievable way of trying to kill off humanity than using the Daedalus impactor.
However, it certainly doesn't seem like a threat in the near future, and it is difficult to imagine
even in this century, though difficulty of imagination is not always the best criterion for
predicting the future. For the vectors to make it to their targets in one piece, they would
need to be put in modules with parachutes, which would be rather noticeable upon entering
a city. Many modules could be used, thousands of separate pods filled with bats each
landing in different locations and different cities, for a total of hundreds or thousands of
modules. This would entail a lot of mass, in the tens of thousands of tons. Keeping all
these bats or rats and their 50-100 necessary deadly viruses contained on a space
station would be a major task, and would require a substantial amount of space and resources. It
could conceivably be done, but it would have to be a relatively large space station, one with
living and working space for 1,000 people at least. There would need to be isolation
chambers on the space station itself, with workers entering numerous staging chambers
filled with the modules which would need to be loaded with the biological specimens by
hand or via remote-controlled robot.
To take a serious crack at destroying as many human beings as possible, the
biological attack would need to be targeted at every city with over a million people, of which
there are 476. To ensure that enough people are infected to actually get the multi-pandemic going, rather than being contained, one would need as many vectors as
possible, in the tens of thousands per city. Rats, squirrels, or mice would be more suitable
than bats, since they would be more innocuous-seeming in world cities, though all of the
above could be used. Each city would need its own reentry capsule, which, to contain
10,000 rats without killing them, assuming (generously) ten per cubic foot, would need to
have around 1,000 cubic feet of internal space, or a cube 10 ft (3 m) on a side. The Apollo
Service Module, for instance, had an internal space of 213 cubic feet. Assuming a module
carrying rats would need to be about 4 times heavier than the Apollo Service Module to contain
the necessary internal space, that gives us a weight of 98,092 kg (216,256 lbs) per module,
which we round off to 100 tonnes. One module for every city with over a million people makes
that 47,600 tonnes. Now we see the scale of space station that would be needed to contain the necessary
facilities to launch a species-threatening bio-attack on the Earth. A space station of five
million tonnes or more would likely be needed to contain all the modules, preparatory
facilities, staff, gardens to grow food for the staff, space for it to be tolerable for them to live
there, the water they need, and so on. At that point, you might as well build Kalpana One,
the AIAA (American Institute of Aeronautics and Astronautics) designed space colony we
mentioned earlier, designed to fit 3,000 people in a colony that would weigh about 7 million
tonnes36. Incidentally, this number is close to the minimum viable population (MVP) needed
for a species to survive. A meta-analysis of the MVP of different species found the average
number to be 4,16937.
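The module arithmetic sketched above, using the text's own (deliberately generous) packing and scaling assumptions, works out as follows:

```python
CITIES = 476                # cities with more than one million inhabitants
RATS_PER_CITY = 10_000
RATS_PER_CUBIC_FT = 10      # generous packing assumption from the text

APOLLO_SM_VOLUME_FT3 = 213  # internal volume of the Apollo Service Module
APOLLO_SM_MASS_KG = 24_523  # its mass, as used in the text
MASS_SCALE_FACTOR = 4       # the text's rough factor for a module with the needed internal space

volume_needed_ft3 = RATS_PER_CITY / RATS_PER_CUBIC_FT         # 1,000 ft^3 per module
module_mass_t = APOLLO_SM_MASS_KG * MASS_SCALE_FACTOR / 1000  # ~98 tonnes, rounded to 100 in the text
total_module_mass_t = 100 * CITIES                            # using the rounded per-module figure

print(f"Volume per module: ~{volume_needed_ft3:,.0f} cubic feet (a ~10 ft cube)")
print(f"Mass per module: ~{module_mass_t:,.0f} tonnes")
print(f"Total module mass for all cities: ~{total_module_mass_t:,.0f} tonnes")  # 47,600 tonnes
```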
Would 10,000 rats released in each city in the world, carrying a multitude of viruses,
be sufficient to wipe out humanity? It seems unlikely, but could be possible in combination
with nuclear bombardment and targeted use of a heliobeam. After wiping out all human
beings on the surface, the space station could be struck by a NEO and suffer catastrophic
reentry into the atmosphere, killing everyone on board. After that, no more humanity.
Destroying every last human being on Earth with viruses seems seriously difficult, given
that there are individuals living out in rural Canada or Russia with no human contact
whatsoever. Yet, if only a few thousand or tens of thousands of individuals remain, are too
widely distributed, and fail to mate and reproduce in the wake of a serious disaster, it could
be possible. It would require a lot of things all going wrong simultaneously, or in a
sequence. But it is definitely possible.
Motivations
Given all this, we may rightly wonder: what would motivate someone to do these
horrible things in the first place? Most of us don't seriously consider wiping out humanity,
and those that do tend to be malcontent misfits with little hope of carrying their plans out.
The topic of motivations will be addressed at greater length further along in the book, but
we ought to briefly address it here in the specific context of space-originating artificial risks.
Space represents a new frontier, the possibility of starting over. To some, this means
replacing the old with the new. When you consider that 7-10 billion people only have a
collective mass of around 316 million tons, a tiny fraction of the size of the Earth, you
realize that's not a lot of matter to scramble and thereby have the entire Earth to yourself.
Concern about the direction of the planet, or of society, combined with a
genocidal streak, may be sufficient for someone to consider wiping humanity out and starting over. They
may even see a messianic role for themselves in accomplishing it. Wiping out humanity
opens up the possibility of using the Earth for literally anything an individual or small group
could imagine. It would allow them to impose their preferences over the whole future.
The science fiction film Elysium showed a possible future for Earth: the planet noisy,
messy, dusty, and crowded, and a space station where everything is clean and utopian. In
the movie, people from Earth were able to launch themselves aboard the space station and
reach asylum, but in real life, it would be quite difficult to dock on a space station if the
inhabitants didn't want you there. Defended by a cluster of mirrors or lasers, they could
cook any unwanted visitors before they ever got to the front door. If a space colony could
actually maintain a stable social system, its members might develop a very powerful sense of in-group identity, more powerful than that experienced by the great majority of humans who lived
throughout history. Historically, communities almost always have some contact with the
outside, but on a space station, true self-sufficiency and isolation could become possible.
3,000 people might not be sufficient to create full self-sufficiency, but a network of 10-20
such space stations might. If these space stations had access to molecular manufacturing,
they could build things without the large factories we have today, making the best use of
available space. Eventually such colonies could spread to the Moon and Mars, colonizing
the solar system. Powerful in-group feelings could cause them to eventually think much
less of those left behind on Earth, even to the point of despising us. Of course, this is by no
means certain, but it ought to be considered.
The potential for hyper-in-group feelings increases when we consider the possibility of
mental and physical enhancement, modifications to the body and brain that change who
people are. A group of people on a space station could become a new species, capable of
flying through outer space unprotected for short periods. Their skin cells could be equipped
with small scales that fold to create a hard shell which maintains internal pressure when
exposed to space. These people could work on asteroids wearing casual clothing, a new
race of human beings adapted to the harshness of the vacuum. Beyond physical
modifications, they could have mental modifications as well. These could increase the
variance of possible emotions, allow long-term wakefulness, or even increase the
happiness set-point. They might see themselves as doing a service by wiping out the
billions of model 1.0 humans on Earth. In the aftermath of such an act, infighting among
these space groups could lead to their eventual death and the total demise of the human
species in all its variations.
In the context of all of this, it is important to recall Fermi's Paradox and the Great
Filter. Though it may simply be that the prior probability for the development of intelligent
life is extremely low, there may be more sinister reasons for the silence in the cosmos,
developmental inevitabilities that cause the demise of any intelligent race. These may be
more subtle than nanotechnology or artificial intelligence; they could involve psychological
or mental dead-ends that inevitably occur when an intelligent species starts expanding out
into space, modifying its own brain, or both. Maybe intelligent species reliably expand out
into the space immediately around their planet, inevitably wipe out the people left behind,
then inevitably wipe themselves out in space. Space certainly has a lot fewer organic
resources than the Earth, and a lot more things that can go wrong. If the surface of the
planet Earth became uninhabitable for whatever reason, hanging onto survival in space would be
a much riskier prospect. Maybe small groups of people living in space eventually go
insane over time scales of decades or centuries. We really have no idea. Nothing should
be taken for granted, especially our future.
Space and Molecular Nanotechnology
At the opening of this chapter, we made the controversial claim that molecular
manufacturing (MM) is a necessary prerequisite to large-scale space development. In this
section we'll provide more justification for this
assumption, while being more specific about what levels of space development we
mean.
Let's begin by considering an optimal scenario for space development in the absence
of MM. Say that a space elevator is actually built in 2050 as one Japanese company
claims38. Say that we actually find a way to make carbon nanotubes of the required length
and thickness (we can't now), tens of thousands of miles long. Say that geopolitical
considerations are overcome and nations don't fight over or forbid the existence of an
elevator which would give anyone who controls it a huge military and strategic advantage
over other nations. Say that the risk of having a huge cable in space which would cut
through or get tangled in anything that got in its way, providing a huge hazard to anything in
orbit, was considered acceptable. If all these challenges are overcome, it would still take a
number of years to thicken a space elevator to the point where it could carry substantial
loads. Bradley C. Edwards, who conducted a space elevator study for NASA, found that
five years of thickening a space elevator cable would make it strong enough to send 1,100
tons (about 10^6 kg) of cargo into orbit every four days 39. That's roughly 91 trips per year, or about
10^8 kg (100,000 tonnes) annually. Space enthusiasts have trouble grasping how little this is: at
that rate it would take on the order of 70 years just to lift the mass of a rotating space colony
that holds only 3,000 people.
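The throughput arithmetic is simple enough to check directly; the sketch below uses the payload figures quoted above and the roughly 7-million-tonne Kalpana One mass cited earlier in the chapter.

```python
PAYLOAD_PER_TRIP_KG = 1.0e6   # ~1,100 tons per trip, per Edwards' study
DAYS_BETWEEN_TRIPS = 4
KALPANA_ONE_MASS_KG = 7.0e9   # ~7 million tonnes for a 3,000-person colony

trips_per_year = 365 / DAYS_BETWEEN_TRIPS
kg_per_year = trips_per_year * PAYLOAD_PER_TRIP_KG
years_for_colony = KALPANA_ONE_MASS_KG / kg_per_year

print(f"Trips per year: ~{trips_per_year:.0f}")                            # ~91
print(f"Throughput: ~{kg_per_year:.1e} kg/year (~100,000 tonnes)")         # ~9e7 kg/year
print(f"Years to lift one 3,000-person colony: ~{years_for_colony:.0f}")   # ~77, i.e. on the order of 70 years
```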
The capacity of a space elevator can be increased by robots that add to its thickness,
but the weight of these robots prevents the space elevator from being in use while
construction is happening. The thickness that can be added to a space elevator on any
given trip is extremely limited, because the elevator itself is at least 35,800 km (22,245 mi)
long and a robot can only carry so much material up on any given trip without breaking the
elevator due to excessive weight. Spreading out a few hundred tons on a string that long
only adds a little bit of width. These facts create fundamental limitations on the rate that a
space elevator can be improved. In the long term, if a reliable method of mass-producing
carbon nanotubes is found, and space elevators prove to be safe and defensible from
attack, then extremely large space elevators could be constructed, but it seems unlikely in
this century, which is our scope of focus. It's possible that space elevators may be the
primary route to space in the 22nd or 23rd century, but it seems unlikely in the 21st.
Enthusiasts might object that mass drivers based on electromagnetic acceleration
could be used to send up more material more cheaply, with claims of costs as low as $1/lb
for electricity40. Edwards claims his space elevator design could put a kilogram in orbit for
$220, the cost lowering to $55/kg as the power beaming system efficiency improves. Still,
these systems have limited launch capacity. Even if the costs of sending payloads into
space were free, there would still be that limitation. Perhaps the limitation could be
bypassed somewhat by constructing mass drivers on the Moon which can send up large
amounts of material. Even then, the number of mass drivers required to send up any
substantial amount of mass would cost trillions of dollars worth of investment, and involve
mega-projects on another celestial body, something which is very far off (without the help
of MM). This is something that could be started in the late 21 st century, but definitely not
completed. What about towing NEOs into the Earth's orbit as building materials? As we
analyzed earlier in this chapter, the energy costs of modifying the orbit of large objects are
very high.
Simple facts remain: the gravity wells of both the Moon and Earth are very powerful,
making it difficult to send up any substantial amount of matter without great cost. Without
nanotechnology and automated self-replicating robotics, space colonization is a project
which would unfold over the timescale of centuries, and mostly involve just a few tens of
thousands of people. That's 0.0001 percent of the human race. These are likely to be
professionals living in close quarters, like the scientists who do work in the Antarctic, not
like the explorers who won the West. The romance of large-scale space colonization is a
future reserved only for people living in a civilization with self-replicating robotics and selfreplicating robotics. Efforts like SpaceX or Mars One are token efforts only, fun for press
releases, which may inspire people, but actually getting to space requires self-replicating
robotics. Anyone working on rockets is not working on the fastest route to space
302

colonization; they are working on an irrelevant route to space colonization, one which will
never reach its target without the help of nanotechnology. Space enthusiasts should be
working on diamondoid mechanosynthesis (DMS), but very few of them even know what
those words mean. Rockets are sexier than DMS. These will be great for space tourism,
glimpsing the arc of the Earth for a few minutes before returning, but they will not allow us
to build space colonies of any appreciable size. Rockets are simply too expensive on a perkilogram basis. Space elevators and mass drivers do not have the necessary capacity to
send up supplies to build facilities for more than a few thousand people on the timescale of
decades. 1,100 tons every four days simply isn't a lot, especially when you need 10 tons
per square meter of hull just to shield yourself from radiation.
We've mentioned every key technology for getting into space without taking into
account MM: space elevators, mass drivers (including on the Moon), and rockets. They all
suffer from capacity problems in the 21st century. Based on all this, it is hard to conclude
that pre-molecular manufacturing space-based weaponry is a serious risk to human
survival during the 21st century. Any risk of space-based weaponry appears to derive from
molecular manufacturing and from molecular manufacturing only. Accordingly, it seems
logical to tentatively see space weapon risks as a subcategory of the larger class of
molecular manufacturing (MM) risks.
To review, molecular manufacturing is the hypothetical future technology that uses
self-replicating nanosystems to manufacture nanofactories which are exponentially selfreplicating and can build a wide array of diamondoid products with fantastic strength and
power specifications at a high throughput. Developing MM requires us to master
diamondoid mechanosynthesis (DMS), the science and engineering of joining together
individual carbon atoms into atomically precise diamond shapes. This has not been
achieved yet, though very limited examples of mechanosynthesis (with silicon) have been
demonstrated41. Only a few scientists are working on mechanosynthesis or even consider it
important42. However, many prominent futurists, including the most prominent futurist, Ray
Kurzweil, predicts that nanofactories will be developed in the 2030s, which will allow us to
enhance our bodies, extend our lifespans, develop realistic virtual reality, and so on 43. The
mismatch between his predictions and the actual state of the art of nanotechnology in
2015, however, is jarring. We seem nowhere close to developing nanofactories, with much
more basic research, design, and development required 44. The entire field of
303

nanotechnology needs to entirely reorient itself towards this goal, which it is currently
ignoring45.
Molecular manufacturing is fundamentally different than any other technology
because it is 1) self-replicating, 2) atomically precise, 3) can build almost anything, 4)
cheap once it gets going. Nanotechnology policy analysts have even called it magic
because of these properties. Any effort to colonize space needs it; it's key.
How about colonizing space with the assistance of molecular nanotechnology? It gets
much easier. Say that the first nanofactory is built in 2050. It replicates itself in under 12
hours, those two nanofactories replicate themselves, and so on, using natural gas, which is
highly plentiful, for natural feedstocks46. Even after the natural gas runs out, we can obtain
large amounts of carbon from the atmosphere47. Reducing the atmospheric CO2 levels
from about 362 parts per million to 280 parts per million (pre-industrial levels) would allow
for the extraction of 118 gigatonnes of carbon. That's 118 billion metric tons. Given that
diamond produced by nanofactories would be atomically perfect and ultra-strong, we could
build skyscrapers or megastructures which only use only 1/100 th the materials for the loadsupporting structures, compared to present-day buildings. Burj Khalifa, the world's tallest
building as of this writing, at 829.8 m (2,722 ft), uses 55,000 tonnes of steel rebar in its
construction. Creating a building of the same height with diamond beams would only
require 550 tonnes of diamond. For the people who design and build buildings, it's hard to
imagine that so little material could be used, but it can. Some might object that diamond is
more brittle than steel and therefore not a suitable building material for a building, since it
doesn't have any flexibility. That problem is easily solvable by using carbon nanotubes,
also known as fullerenes, for construction. Another option would be to use diamond but
connect together a series of segments with joints that allow the structure to bend slightly.
With nanofactories, you don't just have diamond at your disposal, but the entire class of
materials made out of carbon, which includes fullerenes such as buckyballs and
buckytubes (carbon nanotubes)48.
Imagine a tower much taller than Burj Khalifa, 100 kilometers (62 mi) in height instead
of 829.8 meters. This tower would use 100 times more structural material to be stable.
Flawless diamond is so strong (50 GPa compressive strength) that it does not need to
taper at all to be stable for a tower of that height. That works out to about 55,000 tonnes of
304

material. Build 10 of these structures in a row and put a track on top of them, and we have
a Space Pier, a launch platform made up of about 600,000 tonnes of material which puts
us halfway to anywhere, designed by J. Storrs Hall 49. At that altitude, the air is 100 times
thinner than at ground level, making it easier to launch payloads into space. In fact, the
structure is so tall that it reaches the Karman Line, the international designation for the
boundary of space. The top of the tower itself is in space, technically. The really fantastic
thing about this structure is that it could hold great weight (tens of thousands of tons) and
run launches every 10 minutes or so instead of every four days. It avoids many other
problems that space elevators have, such as the risk of colliding with objects in low Earth
orbit. It is far more structurally stable than a space elevator. It only occupies the territory of
one sovereign nation, and avoids issues relating to who owns the atmosphere and space
above a given country. (To say that a country automatically owns the space above it and
has a right to build a space elevator there is not consistent with any current legal
conception of space.)
When it comes to putting objects into orbit, a Space Pier and a space elevator are in
completely different categories. Although a Space Pier does not actually extend into space
like a space elevator does, it is much more efficient at getting payloads into orbit. A 10
tonne payload can be sent into orbit for just $4,300 of electricity, a rate of 43 cents per
kilogram. That is cheap, much cheaper than any other proposal. Why build a mass driver
on the ground when you can build it at the edge of space? Only molecular nanotechnology
makes it possible; nothing else will do. Nothing else can build flawless diamond in the
massive quantities needed to put up this structure. Nothing else but a Space Pier can put
payload into space in sufficient quantities to achieve the space expansion daydreams of
the 60s and 70s.
A future of mankind in space is only enabled by 1) molecular nanotechnology, and
2) Space Piers, which can only be built using it. Nothing else will do. A Space Pier
combines two simple concepts: that of a mass driver, and putting it above most of the
atmosphere. It's extremely simple, and is the logical conclusion of building taller and taller
buildings. Although a Space Pier is large, building it would only require a small amount of
the total carbon available. A Space Pier consumes about half a million tonnes, while our
total carbon budget from the atmosphere is about 118 billion tonnes. That's roughly

305

1/200,000 of the carbon available. It's definitely a major project that would use up
substantial resources, but is well within the realm of possibility.
Assuming a launch of 10 tonnes every ten minutes, we get a figure of 5,256,000
tonnes a decade, instead of the measly 100,000 tonnes a decade of the Edwards space
elevator. That makes a Space Pier roughly 52 times better at sending payloads into space
than a space elevator, which is a rather big deal. It's an even bigger deal when you start
adding additional tracks and support to the Space Pier, allowing it to support multiple
launches simultaneously. At some point, the main restriction becomes how much power
you can produce at the site using nuclear power plants, rather than the costs of additional
construction itself, which are modest in comparison. A Space Pier can be worked on and
expanded while launches are ongoing, unlike a space elevator which is non-operational
during construction. A 10-track Space Pier can launch 50,256,000 tonnes a decade,
enough to start building large structures on the Moon, such as its own Space Pier. Carbon
is practically non-existent on the Moon's surface, so any Space Pier built there would need
to be built with carbon sent from Earth. 50 million tonnes would be enough to build over a
thousand mass drivers on the Moon, which could then send up 5 billion tonnes of Moon
rock in a single decade! Now we're talking. Large amounts of dead, dumb matter like Moon
rock is needed to shield space colonies from the dangerous effects of radiation. Each
square meter will need at least 10 tonnes, more for colonies outside of the Earth's Van
Allen Belts, in attractive locations like the Lagrange points, where colonies can remain
stable in their orbits relative to the Earth and Moon. 5 billion tonnes of Moon rock is enough
to facilitate the creation of about 500 space colonies, assuming 10 million tonnes per
colony, which is enough to contain roughly 1,500,000 people. Even with the magic of
nanotechnology, a ten-track Space Pier on the Earth, a thousand industrial-scale mass
drivers on the Moon operating around the clock, each launching ten tonnes every ten
minutes, and two or three decades of work to build it all, we only could put 0.15 percent of
the global population into space. To grasp the amount of infrastructure it would take to do
this, it's comparable in mass to the entire world's annual oil output.
Creating space colonies could even be accelerated beyond the above scenario by
using self-replicating factories on the Moon which create mass drivers primarily out of local
materials, using ultra-strong diamond materials only for the most crucial components.
Since the Moon has no atmosphere, you can build a mass driver right on the ground,
306

instead of building a huge, expensive tower to support it. Robert A. Freitas conducted a
study of self-replicating factories on the Moon, titled A Self-Replicating, Growing Lunar
Factory in 198150. Freitas' design begins with a 100 ton seed and replicates itself in a year,
but it does not use molecular manufacturing. A design based on molecular manufacturing
and powered by a nuclear reactor could replicate itself much faster, on the order of a day.
In just 18 days, the factories could harvest 4 billion tons of rock a year, roughly equivalent
to the annual industrial output of all human civilization.
From them on, providing nuclear power for the factories is the main limitation on
increasing output. A single nuclear power plant with an output of 2 GW is enough to power
174 of these factories, so many nuclear power plants would be needed. Operating solely
on solar energy generated by themselves, the factories would take a year to self-replicate.
Assisting the factories with large solar panels in space beaming down power is another
option. A 4 GW solar power station would weigh about 80,000 tonnes, which could be
launched from an Earth-based 10-track Space Pier in pieces over the course of six days.
Every six days, energy infrastructure could be provided for the construction of 4,000
factories with a combined annual output of 400 million tons (~362 million tonnes) of Moon
rock, assuming each factory requires 10 MW to power it (Freitas' paper estimates 0.47 MW
- 11.5 MW power requirements per factory). That alone would be sufficient to produce 3.6
billion tonnes a decade, more than half the 5 billion tonne benchmark for 1,000 mass
drivers sending material up into space. If the self-replicating lunar robotics could construct
mass drivers on their own, billions of tonnes of Moon rock might be sent up every week
instead of every decade. Eventually, this would be enough to build so many space colonies
around the Earth that they would create a gigantic and clearly visible ring.
Hopefully these sketches illustrate the massive gulf between space colonization with
pre-MM technologies and space colonization with MM. They are huge. The primary limiting
factory in the case of MM is human supervision, the degree to which would be necessary is
still unknown. If factories, mass drivers, and space stations can all be constructed
according to an automated program, with minimal human supervision, the colonization of
space could proceed quite rapidly in the decades after the development of MM. That is why
we can't completely rule out major space-based risks this century. If MM is developed in
2050, that gives humanity 50 years to colonize space in our crucial window, at which point
risks do emerge. The bio-attack risk outlined in this chapter could certainly become feasible
307

within that window. However, if MM is not developed this century, then it seems that major
risks from space are also unlikely.
Space and the Far Future
In the far future, if humanity doesn't destroy itself, some mass expansion into space
seems likely. It probably isn't necessary in the nearer term, as the Earth could hold trillions
of people with plenty of space if huge underground caverns are excavated and flooded with
sunlight, millions of 50-mile high arcologies are built, and so on. The latest population
projections, contrary to two decades of prior consensus, have world population continuing
to increase throughout the entire 21st century rather than leveling off51. Quoting the
researchers: world population stabilization unlikely this century. Assuming the world
population continues to grow indefinitely, eventually all these people will need to find
somewhere to be put which isn't the Earth.
Assuming we do expand into space, increasing our level of reproduction to produce
enough people to take up all that space, what are the long-term implications? Though
space may present dangers in the short term, in the long term it actually secures our future
as a species. If mankind spreads across the galaxy, it would become almost impossible to
wipe us out. For a detailed step-by-step scenario of how mankind might go from building
modest structures in low Earth orbit to colonizing the entire galaxy, we recommend the
book The Millennial Project by Marshall T. Savage52.
The long-term implications of space colonization depend heavily on the degree of
order and control that exists in future civilization. If the future is controlled by a singleton, a
single top-level decision-making agency, there may be little to no danger, and it could be
safe to live on Earth53. If there is a lack of control, we could reach a point where small
organizations or even individuals could gather enough energy to send a Great Pyramidsized iron sphere into the Earth at the speed of light, causing impact doomsday.
Alternatively, perhaps the Earth could be surrounded by defensive layers and structures so
powerful that they can stop such attacks. Or, perhaps detectors could be scattered across
the local cosmic neighborhood, alerting the planet to the unauthorized acceleration of
distant objects and allowing the planet to form a response long in advance. All of these
outcomes are possible.

308

Probability Estimates and Discussion


As we've stated several times in this chapter, space colonization on the level required
to pose a serious threat to Earth does not seem particularly likely in the first half of the 21 st
century. One of the reasons why is that molecular nanotechnology does not seem very
likely to be developed in the first half of the century. There is close to zero effort towards
the prerequisite steps, namely diamondoid mechanosynthesis, complex nanomachine
design, and positional molecular placement. As we've argued, space expansion depends
upon the development of MM to become a serious risk.
Even if MM is developed, there are other risks that derive from it which seem to pose
a greater danger than the relatively speculative and far-out risks discussed in this chapter.
Why would someone launch a bio-attack on the Earth from space when they could do it
from the ground and seal themselves inside a bunker? For the cost of putting a 7 million
ton space station into orbit, they could build an underground facility with enough food and
equipment for decades, and carry out their plans of world domination that way. Anything
that can be built in space can be built on the ground in a comparable structure, for great
savings, improved specifications, or both.
Space weapons seem to pose a greater threat in the context of destabilizing the
international system than they do in terms of direct threat to the human species. Rather
than space itself being the threat, we ought to consider the exacerbation of competition
over space resources (especially the Moon) leading to nuclear war or a nano arms race on
Earth.
We've reviewed that materials in space are scarce, there are not many NEOs
available, and launching anything into space is expensive, even with potential future
assistance from megastructures like the Space Pier. Even if a Space Pier can be built in a
semi-automated fashion, there is a limit to how much carbon is available, and real estate
will continue to have value. A Space Pier would cast vast shadows which would effect
property values across tens of thousands of square miles. Perhaps the entire structure
could be covered in displays, such as phased array optics, which simulate it being
transparent, but then the footprint of the towers themselves would still require substantial
real estate. Perhaps the greatest cost of all would be the idea that it exists and the demand

309

for its use, which could cause aggravation or jealousy among other nations or powerful
companies.
All of these factors mean that access to space would still have value. Even if many
Space Piers are constructed, it will continue to be relatively scarce and probably in
substantial demand. Space has resources, like NEOs and the asteroid belt, which contain
gigatons of platinum and gold that could be harvested, returned to Earth, and sold for a
profit. A Space Pier would need to be defended, just like any other important structure. It
would be an appealing target for attack, just as the Twin Towers in New York were.
The scarcity of materials in space and the cost of sending payloads into orbit means
that the greatest chunk of material there, the Moon, would have great comparative value.
The status of the Moon is ostensibly determined by the 1979 Moon Treaty, The Agreement
Governing the Activities of States on the Moon and Other Celestial Bodies, which assigns
the jurisdiction of the Moon to the international community and international law. The
problem is that the international community and international law are abstractions which
aren't actually real. Sooner or later there will be some powerful incentive to begin
colonizing the Moon and mining it for material to build space colonies, and figuring that the
whole object is in the possession of the international community will not be specific
enough. It will become necessary to actually protect the Moon with military resources.
Currently, putting weapons in space is permitted according to international law, as long as
they are not weapons of mass destruction. So, the legal precedent exists to deploy weapon
systems to secure the Moon. Whoever protects the Moon will become its de facto owner,
regardless of what treaties say.
Consider the electrical cost of launching a ten metric payload into orbit from a
hypothetical Space Pier; $4,200. Besides that cost, there is also the cost of building the
structure in the first place, and it seems likely that these costs will be amortized and
wrapped into charges for each individual launch. Therefore, launches could be
substantially more expensive than $4,200. If the cost for building the structure is $10 billion,
and the investors want to make all their money back within five years, and launches occur
every ten minutes around the clock without interruption, a $38,051 charge would need to
be added to every ten tonne load. That brings the total cost per launch to $41,251.
Launching ten tonnes from the Moon might cost only $1,000 or less, considering many
310

factors; 1) the initially low cost of lunar real estate, 2) the Moon's lack of atmosphere
encumbering the acceleration of the load, 3) the comparatively lower gravity of the Moon. If
someone is building a 7 million tonne space station with materials launched from the Earth,
launch costs would be $26,635,700,000, or roughly $26 trillion. Building a similar mass
driver on the surface of the Moon would be a lot cheaper than building a Space Pier;
perhaps by a factor of 10. We can ballpark the cost of launch per ten tonnes of Moon rock
at $5,000, making the same assumption that investors will want their money back in five
years and that electricity costs will be minimal. At that price, launching 7 million tonnes
works out to $3,500,000,000, or about 7 times cheaper. The basic fact is that the Moon's
gravity is 6 times less than the Earth's and it has no atmosphere. That makes it much
easier to use as a staging area for launching large quantities of basic building materials
into the Earth-Moon neighborhood.
The uncertain status of the Moon and its status as a source of material at least ten
times cheaper than the Earth (much more at large scales) means that there will be an
incentive to fight over it and control it. Like the American continent, it could become home
to independent states which begin as colonies of Earth and eventually declare their
independence. This could lead to bloody wars going both ways, wars waged with futuristic
weapons like those outlined in this and earlier chapters. Those on the Moon might think
little of people on the Earth, and do their best to sterilize the surface. If the surface of the
Earth became uninhabitable, say through a Daedalus impact, the denizens of the Moon
and space would have to be extremely confident in their ability to keep the human race
going without that resource. This seems very plausible, especially when we take into
account human enhancement, but it's worth noting. Certainly such an event would lead to
many deaths and lessen the quality of life for all humanity.

References

54. Upgraded SpaceX Falcon 9.1.1 will launch 25% more than old Falcon 9 and bring
price down to $4109 per kilogram to LEO. March 22, 2013. NextBigFuture.
55. Kenneth Chang. Beings Not Made for Space. January 27, 2014. The New York
Times.
311

56. Lucian Parfeni. Micrometeorite Hits the International Space Station, Punching a
Bullet Hole. April 30, 2013. Softpedia.
57. David Dickinson. How Micrometeoroid Impacts Pose a Danger for Todays
Spacewalk. April 19, 2013. Universe Today.
58. Bill Kaufmann. Mars colonization a suicide mission, says Canadian astronaut.
59. James Gleick. Little Bug, Big Bang. December 1, 1996. The New York Times.
60. Leslie Horn. That Massive Russian Rocket Explosion Was Caused by Dumb
Human Error. July 10, 2013. Gizmodo.
61. Space Shuttle Columbia Disaster. Wikipedia.
62. Your Body in Space: Use it or Lose It. NASA.
63. Alexander Davies. Deadly Space Junk Sends ISS Astronauts Running for Escape
Pods. March 26, 2012. Discovery.
64. Tiffany Lam. Russians unveil space hotel. August 18, 2011. CNN.
65. John M. Smart. The Race to Inner Space. December 17, 2011. Ever Smarter
World.
66. J. Storrs Hall. The Space Pier: a hybrid Space-launch Tower concept. 2007.
Autogeny.org.
67. Al Globus, Nitin, Arora, Ankur Bajoria, Joe Straut. The Kalpana One Orbital Space
Settlement Revised. 2007. American Institute of Aeronautics and Astronautics.
68. List Of Aten Minor Planets. February 2, 2012. Minor Planet Center.
69. Robert Marcus, H. Jay Melosh, and Gareth Collins. Earth Impact Effects Program.
2010. Imperial College London.
70. Alan Chamberlin. NEO Discovery Statistics. 2014. Near Earth Object Program,
NASA.
71. Clark R. Chapman and David Morrison. Impacts on the Earth by Asteroids and
Comets - Assessing the Hazard. January 6, 1994. Nature 367 (6458): 3340.
72. Curt Covey, Starley L. Thompson, Paul R. Weissman, Michael C. MacCracken.
Global climatic effects of atmospheric dust from an asteroid or comet impact on
Earth. December 1994. Global and Planetary Change: 263-273.
73. John S. Lewis. Rain Of Iron And Ice: The Very Real Threat Of Comet And Asteroid
Bombardment. 1997. Helix Books.

312

74. Kjeld C. Engvild. A review of the risks of sudden global cooling and its effects on
agriculture. 2003. Agricultural and Forest Meteorology. Volume 115, Issues 34, 30
March 2003, Pages 127137.
75. Covey, C; Morrison, D.; Toon, O.B.; Turco, R.P.; Zahnle, K. Environmental
Perturbations Caused By the Impacts of Asteroids and Comets. Reviews of
Geophysics 35 (1): 4178.
76. Bains, KH; Ianov, BA; Ocampo, AC; Pope, KO. Impact Winter and the CretaceousTertiary Extinctions - Results Of A Chicxulub Asteroid Impact Model. Earth and
Planetary Science Letters 128 (3-4): 719725.
77. Earth Impact Effects Program.
78. Alvarez LW, Alvarez W, Asaro F, Michel HV. Extraterrestrial cause for the
CretaceousTertiary extinction. 1980. Science 208 (4448): 10951108.
79. H. J. Melosh, N. M. Schneider, K. J. Zahnle, D. Latham. Ignition of global wildfires
at the Cretaceous/Tertiary boundary. 1990.
80. MacLeod N, Rawson PF, Forey PL, Banner FT, Boudagher-Fadel MK, Bown PR,
Burnett JA, Chambers, P, Culver S, Evans SE, Jeffery C, Kaminski MA, Lord AR,
Milner AC, Milner AR, Morris N, Owen E, Rosen BR, Smith AB, Taylor PD, Urquhart
E, Young JR; Rawson; Forey; Banner; Boudagher-Fadel; Bown; Burnett; Chambers;
Culver; Evans; Jeffery; Kaminski; Lord; Milner; Milner; Morris; Owen; Rosen; Smith;
Taylor; Urquhart; Young. The CretaceousTertiary biotic transition. 1997. Journal
of the Geological Society 154 (2): 265292.
81. Johan Vellekoopa, Appy Sluijs, Jan Smit, Stefan Schouten, Johan W. H. Weijers,
Jaap S. Sinninghe Damst, and Henk Brinkhuis. Rapid short-term cooling following
the Chicxulub impact at the CretaceousPaleogene boundary. May 27, 2014.
Proceedings of the National Academy of Sciences of the United States of America,
vol. 111, no 21, 75377541.
82. Petit, J.R., J. Jouzel, D. Raynaud, N.I. Barkov, J.-M. Barnola, I. Basile, M. Benders,
J. Chappellaz, M. Davis, G. Delayque, M. Delmotte, V.M. Kotlyakov, M. Legrand,
V.Y. Lipenkov, C. Lorius, L. Ppin, C. Ritz, E. Saltzman, and M. Stievenard. Climate
and atmospheric history of the past 420,000 years from the Vostok ice core,
Antarctica. 1999. Nature 399: 429-436.
83. Hector Javier Durand-Manterola and Guadalupe Cordero-Tercero. Assessments of
the energy, mass and size of the Chicxulub Impactor. March 19, 2014. Arxiv.org.
313

84. Covey 1994


85. Project Daedalus Study Group: A. Bond et al. Project Daedalus The Final Report
on the BIS Starship Study, JBIS Interstellar Studies, Supplement 1978.
86. Science: Sun Gun. July 9, 1945. Time.
87. Bryan Caplan. The Totalitarian Threat. 2006. In Nick Bostrom and Milan Cirkovic,
eds. Global Catastrophic Risks. Oxford: Oxford University Press, pp. 504-519.
88. Nick Bostrom. Existential Risks: Analyzing Human Extinction Scenarios. 2002.
Journal of Evolution and Technology, Vol. 9, No. 1.
89. Globus 2007
90. Traill LW, Bradshaw JA, Brook BW. Minimum viable population size: A metaanalysis of 30 years of published estimates. 2007. Biological Conservation 139 (12): 159166.
91. Michelle Starr. Japanese company plans space elevator by 2050. September 23,
2014. CNET.
92. Bradley C. Edwards, Eric A. Westling. The Space Elevator: A Revolutionary Earthto-Space Transportation System. 2003.
93. Henry Kolm. Mass Driver Update. September 1980. L5 News. National Space
Society.
94. Oyabu, Noriaki; Custance, scar; Yi, Insook; Sugawara, Yasuhiro; Morita, Seizo.
Mechanical Vertical Manipulation of Selected Single Atoms by Soft Nanoindentation
Using Near Contact Atomic Force Microscopy. 2003. Physical Review Letters 90
(17).
95. Nanofactory Collaboration. 2006-2014.
96. Ray Kurzweil. The Singularity is Near. 2005. Viking.
97. Ralph Merkle and Robert A. Freitas. Remaining Technical Challenges for Achieving
Positional Diamondoid Molecular Manufacturing and Diamondoid Nanofactories.
2007. Nanofactory Collaboration.
98. Eric Drexler. Radical Abundance: How a Revolution in Nanotechnology Will Change
Civilization. 2013. PublicAffairs.
99. Michael Anissimov. Interview with Robert A. Freitas. 2010. Lifeboat Foundation.
100.

Robert J. Bradbury. Sapphire Mansions: Understanding the Real Impact of

Molecular Nanotechnology. June 2003. Aeiveos.


101.

Drexler 2013.
314

102.

Hall 2007

103.

Robert A. Freitas. A Self-Replicating, Growing Lunar Factory. Proceedings

of the Fifth Princeton/AIAA Conference. May 18-21, 1981. Eds. Jerry Grey and
Lawrence A. Hamdan. American Institute of Aeronautics and Astronautics
104.

Patrick Gerland, Adrian E. Raftery, Hana evkov, Nan Li, Danan Gu,

Thomas Spoorenberg, Leontine Alkema, Bailey K. Fosdick, Jennifer Chunn, Nevena


Lalic, Guiomar Bay, Thomas Buettner, Gerhard K. Heilig, John Wilmoth. World
population stabilization unlikely this century. September 18, 2014. Science.
105.

Marshall T. Savage. The Millennial Project: Colonizing the Galaxy in Eight

Easy Steps. 1992. Little, Brown, and Company.


106.

Nick Bostrom. What is a singleton? 2006. Linguistic and Philosophical

Investigations,

Vol.

5,

No.

2:

pp.

48-54.

Chapter 16: Artificial Intelligence


AI and AI risk is now well know topic and I will not repeat here all that you could find in works of
E.Yudkowsky and in recent book on Nick Bostrom Superintelligence. I will include here only my
own research on AI timing, AI failures modes and AI prevention methods.

Current state of AI risks 2016


This part of the book should be continuously updated based on recent news. The field of AI is
undergoing quick changes every year.
1.Elon Musk became the main player in AI safety field with his OpenAI program. But the
idea of AI openness now opposed by his mentor Nick Bostrom, who is writing an article
which is questioning safety of the idea of openness in the field of AI.
http://www.nickbostrom.com/papers/openness.pdf
Personally I think that here we see an example of arrogance of billionaire. He intuitively
come to idea which looks nice, appealing and may work in some contexts. But to prove
that it will actually work, we need rigorous prove.

But there is a difference between idea of Open AI as it was suggested by Musk in the beginning and
actual work in the organization named "Open AI". The latter seems to be more balanced.
2. Google seems to be one of the main AI companies and its AlphaGo won in Go game in
human champion. Yudkowsky predicted after 3:0 that AlphaGo has reached superhuman

315

abilities in Go and left humans forever behind, but AlphaGo lost the next game. It made
Yudkowsky to said that it poses one more risk of AI a risk of uneven AI development, that
it is sometimes superhuman and sometimes it fails.
3. The number of technical articles on the field of AI control grew exponentially. And it is not
easy to read them all.
4. There are many impressive achievements in the field of neural nets and deep learning.
Deep learning was Cinderella in the field of AI for many years, but now (starting from 2012)
it is princess. And it was unexpected from the point of view of AI safety community. MIRI
only recently updated its research schedule and added studying of neural net based AI
safety.
5. The doubling time in some benchmarks in deep learning seems to be 1 year.
6. Media overhype AI achievements.
7. Many new projects in AI safety had started, but some are concentrated on safety of selfdriving cars (Even Russian KAMAZ lorry building company investigate AI ethics).
8. A lot of new investment going into AI research and salaries in field are rising.
9. Military are increasingly interested in implementing AI in warfare.
10. Google has AI ethic board, but what it is doing is unclear.
11. It seems like AI safety and implementation is lagging from actual AI development.
12. Governments are increasingly interested in AI safety: White House/OSTP conferences
on AI
13. Implications of AI safety become mainstream in the field of self driving cars and military robotics, as well
as discussion of AI unemployment. it sometimes distract attention from the real problem.

Comments: Jaime Sevilla Molina Comments by point: 3 - While there are more papers being produced,
most of them are on preliminary research and do not develop technical results. 4 - It looks like research has
changed its focus from value alignment to interruptibility due to some promising results,not to the rise of neural
nets. The previous line of work is still being explored.11 - Saying that implementation of AI safety is lagging
behind is the understatement of the week, not to say confuse: we are nowhere close to be in a position to start
even considering it. Also, it is worth remarking that MIRI is collaborating with Deep Mind and that Paul
Christiano has joined Open AI, and that academia remains largely indifferent to the problem. As a more
personal prediction, I am going to bet that the recent ruckus about the difficulty of implementing intention in
smart contracts is going to result in some relevant research in AI safety being done, though in a very indirect
way

You can add the

Distributed agent net as a basis for hyperbolic law of acceleration and its
implication for AI timing and x-risks

There are two things in the past that may be named super-intelligences, if we consider
level of tasks they solved. Studying them is useful when we are considering the creation of
our own AI.
The first one is biological evolution, which managed to give birth to such a sophisticated
thing as man, with its powerful mind and natural languages. The second one is all of
human science when considered as a single process, a single hive mind capable of
solving such complex problems as sending man to the Moon.

316

What can we conclude about future computer super-intelligence from studying the
available ones?
Goalsystem.

Both super-intelligences are purposeless. They dont have any final goal which
would direct the course of development, but they solve many goals in order to survive in
the moment. This is an amazing fact of course.
They also lack a central regulating authority. Of course, the goal of evolution is survival at
any given moment, but is a rather technical goal, which is needed for the evolutionary
mechanisms realization.
Both will complete a great number of tasks, but no unitary final goal exists. Its just like a
man in their life: values and tasks change, the brain remains.
Consciousness.

Evolution lacks it, science has it, but to all appearances, it is of little

significance.
That is, there is no center to it, either a perception center or a purpose center. At the same
time, all tasks are completed. The sub-conscious part of the human brain works the same
way too.
Both super-intelligences are based on the principle: collaboration of
numerous smaller intelligences plus natural selection.
Master algorithm.

Evolution is impossible without billions of living creatures testing various gene


combinations. Each of them solves its own egoistic tasks and does not care about any
global purpose. For example, few people think that selection of the best marriage partner is
a species evolution tool (assuming that sexual selection is true). Interestingly, the human
brain has the same organization: it consists of billions of neurons, but they dont all see its
global task.
Roughly, there have been several million scientists throughout history. Most of them have
been solving unrelated problems too, while the least refutable theories passed for selection
(considering social mechanisms here).
Safety.

Dangerous, but not hostile.

Evolution may experience ecological crises; science creates an atomic bomb. There are
hostile agents within both, which have no super-intelligence (e.g. a tiger, a nation state).
Within an intelligent environment, however, a dangerous agent may appear which is
stronger than the environment and will eat it up. This will be difficult to initiate. Transition
from evolution to science was so difficult to initiate from evolutions point of view, (if it had
one).
Howtocreateoursuperintelligence.

Assume, we agree that super-intelligence is an environment,


possessing multiple agents with differing purposes.
317

So we could create an aquarium and put a million differing agents into it. At the top,
however, we set an agent to cast tasks into it and then retrieve answers.
Hardwarerequirementsnowareveryhigh:

we should simulate millions of human-level agents. A


computational environment of about 10 to the power of 20 flops is required to simulate a
million brains. In general, this is close to the total power of the Internet. It can be
implemented as a distributed network, where individual agents are owned by individual
human programmers and solve different tasks something like SETI-home or the Bitcoin
network.
Everyone can cast a task into the network, but provides a part of their own resources in
return.

Speedofdevelopmentofsuperintelligentenvironment

The Super-intelligence environment develops hyperbolically. Korotaev shows


that the human population grows governed by the law N = 1/T (Forrester law , which has
singularity at 2026), which is a solution to the following differential equation:
Hyperboliclaw.

dN/dt = N*N
A solution and more detailed explanation of the equation can be found in this article by
Korotaev (article in Russian, and in his English book on p. 23). Notably, the growth rate
depends on the second power of the population size. The second power was derived as
follows: one N means that a bigger population has more descendants; the second N
means that a bigger population provides more inventors who generate a growth in
technical progress and resources.
Evolution and tech progress are also known to develop hyperbolically (see below to learn
how it connects with the exponential nature of Moores law; an exact layout of hyperbolic
acceleration throughout history may be found in Panovs article Scaling law of the
biological evolution and the hypothesis of the self-consistent Galaxy origin of life ) The
expected singularity will occur in the 21 st century. And now we know why. Evolution and
tech progress are both controlled by the same development law of the superinteligent
environment. This law states that the intelligence in an intelligence environment depends
on the number of nodes, and on the intelligence of each node. This is of course is very
rough estimation, as we should also include the speed of transactions
However, Korotaev gives an equation for population size only, while actually it is also
applicable to evolution the more individuals, the more often that important and interesting
mutations occur, and for the number of scientists in the 20 th century. (In the 21st century it
has reached its plateau already, so now we should probably include the number of AI
specialists as nodes).

318

Korotayev provides a hyperbolic law of acceleration and its derivation from


plausible assumptions but it is only applicable to demographics in human history from its
beginning and until the middle of the 20t century, when demographics stopped obeying this
law. Panov provides data points for all history from the beginning of the universe until the
end of the 20th century, and showed that these data points are controlled by hyperbolic law,
but he wrote down this law in a different form, that of constantly diminishing intervals
between biological and (lately) scientific revolutions. (Each interval is 2.67 shorter that
previous one, which implies hyperbolic law.)
In short:

I suggested that Korotaevs explanation of hyperbolic law stands as a prehuman history explanation of an accelerated evolutionary process, and that it will work in
the 21st century as a law describing the evolution of an AI-agents environment. It may need
some updates if we also include speed of transactions, but it would give even quicker
results.
WhatIdidhere:

is only exponential approximation, it is hyperbolical in the longer term, if seen as


the speed of technological development in general. Kurzweil wrote: But I noticed
something else surprising. When I plotted the 49 machines on an exponential graph (where
a straight line means exponential growth), I didnt get a straight line. What I got was
another exponential curve. In other words, theres exponential growth in the rate of
exponential growth. Computer speed (per unit cost) doubled every three years between
1910 and 1950, doubled every two years between 1950 and 1966, and is now doubling
every year.
Mooreslaw

While we now know that Moores law in hardware has slowed to 2.5 years for each
doubling, we will probably now start to see exponential growth in the ability of programs.
Neural net development has a doubling time of around one year or less. Moores law is like
spiral, which circles around more and more intelligent technologies, and it consists of small
s-like curves. It all deserves a longer explanation. Here I show that Moores law, as we
know it, is not contradicting the hyperbolic law of acceleration of a superintelligent
environment, but this is how we see it on a small scale.

Otherconsiderations
HumanlevelagentsandTuringtest .

Ok, we know that the brain is very complex, and if the power
of individual agents in AI environment grows so quickly, where should appear agents
capable of passing the Turing test and it will happen very soon.

But for a long time the nodes of this net will be small companies and personal assistants,
which could provide superhuman results. There is already a market place where various
projects could exchange results or data using API. As a result, the Turing test will be
meaningless, because most powerful agents will be helped by humans.
319

In any case, some kind of mind brick, or universal robotic brain will also appear.
PhysicalsizeofStrongAI:

Because of light speed limit on information exchange, the computer,


on which super-intelligence run, must decrease in size rather than increase, in order to
make quick communications inside itself. Otherwise, the information exchange will slow
down, and the development rate will be lost.
Therefore, the super-intelligence should have a small core, e.g. up to the size of the Earth,
and even less in the future. The periphery can be huge, but it will perform technical
functions like defence and nutrition.
Transition to the next superintelligent environment.

It is logical to suggest that the next superintelligence will also be an environment rather than a small agent. It will be something like a
net of neural net-based agents as well as connected humans. The transition may seem to
be soft on a small time scale, but it will be disruptive by it final results. It is already
happening: the Internet, AI-agents, open AI, you name it.
The important part of such a transition is the change of speed of interaction between
agents. In evolution the transaction time was thousands of years, which was the time
needed to check new mutations. In science it was months, which was the time needed to
publish an article. Now it is limited by the speed of the Internet, which depends not only on
the speed of light, but also on its physical size, bandwidth and so on and have transaction
time in order of seconds.
So, a new super-intelligence will rise in a rather ordinary fashion: The power and number
of interacting AI agents will grow, become quicker and they will quickly perform any tasks
which are fed to them. (Elsewhere I discussed this and concluded that such a system may
evolve into two very large super-intelligent agents which will have a cold war, and that hard
take-off of any AI-agent against an AI environment is unlikely. But this does not result in AI
safety since war between such two agents will be very destructive consider
nanoweapons. ).
As the power of individual agents grows, they will reach human and
latterly superhuman levels. They may even invest in self-improvement, but if many agents
do this simultaneously, it will not give any of them a decisive advantage.
Superintelligentagents.

There is well known strategy to be safe in


the environment there are more powerful than you, and agents fight each other. It is
making alliances with some of the agents, or becoming such an agent yourself.
Humansafetyinthesuperintelligentagentsenvironment.

Such an AI neural net-distributed super-intelligence may not be the


last, if a quicker way of completing transactions between agents is found. Such a way may
be an ecosystem containing miniaturization of all agents. (And this may solve the Fermi
paradox any AI evolves to smaller and smaller sizes, and thus makes infinite calculations
in final outer time, perhaps using an artificial black hole as an artificial Tippler Omega point
or femtotech in the final stages). John Smarts conclusions are similar:
Fourthsuperintelligence?

320

Singularity:

It could still happen around 2030, as was predicted by Forrester law, and the
main reason for this is the nature of hyperbolic law and its underlying reasons of the
growing number of agents and the IQ of each agent.
Oscillation before singularity:

Growth may become more and more unstable as we near


singularity because of the rising probability of global catastrophes and other consequences
of disruptive technologies. If true, we will never reach singularity dying off shortly before, or
oscillating near its Schwarzschild sphere, neither extinct, nor able to create a stable
strong AI.
The super-intelligent environment still reaches a singularity point, but a point cannot be the
environment by definition. Oops. Perhaps an artificial black hole as the ultimate computer
would help to solve such a paradox.
agent number growth, agent performance speed
growth, inter-agent data exchange rate growth, individual agent intelligence growth, and
growth in the principles of building agent working organizations.
Ways of enhancing the intelligent environment:

Themainproblemofanintelligentenvironment : chickenoregg?

Who will win: the super-intelligent


environment or the super-agent? Any environment can be covered by an agent submitting
tasks to it and using its data. On the other hand, if there are at least two super-agents of
this kind, they form an environment.

Problemswiththemodel:

1)
The model excludes the possibility of black swans and other disruptive events, and
assumes continuous and predictable acceleration, even after human level AI is created.
2) The model is disruptive itself, as it predicts infinity, and in a very short time frame of
15 years from now. But expert consensus puts AI in the 2060-2090 timeframe.
These two problems may somehow cancel each other out.
In the model exists the idea of oscillation before the singularity, which may result in
postponing AI and preventing infinity. The singularity point inside the model is itself
calculated using remote past points, and if we take into account more recent points, we
could get a later date for the singularity, thus saving the model.
If we say that because of catastrophes and unpredictable events the hyperbolic law will
slow down and strong AI will be created before 2100, as a result, we could get a more
plausible picture.
This may be similar to R.Hansons ems universe , but here, neural net-based agents are
not equal to human emulations, which play a minor role in all stories.
321

It is only a model, so it will stop working at some point. Reality will


surprise us at some point, but reality doesnt consist only of black swans. Models may work
between them.
Limitationofthemodel:

TL;DR: Science and evolution are super-intelligent environments governed by the same
hyperbolic acceleration law, which soon will result in a new super-intelligent environment,
consisting of neural net-based agents. Singularity will come after this, possibly as soon as
2030.

The map AI failures modes and levels


This map shows that AI failure resulting in human extinction could happen on different
levels of AI development, namely, before it starts self-improvement (which is unlikely but
we still can envision several failure modes), during its take off, when it uses different
instruments to break out from its initial confinement, and after its successful take over the
world, when it starts to implement its goal system which could be plainly unfriendly or its
friendliness may be flawed.
AI also can halts on late stages of its development because of either technical problems or
philosophical one.
I am sure that the map of AI failure levels is needed for the creation of Friendly AI theory as
we should be aware of various risks. Most of ideas in the map came from ArtificialIntelligence
asaPositiveandNegativeFactorinGlobalRisk by Yudkowsky, from chapter 8 of Superintelligence
by Bostrom, from Ben Goertzel blog and from hitthelimit blog, and some are mine.
I will now elaborate three ideas from the map which may need additional clarification.
Theproblemofthechickenortheegg

The question is what will happen first: AI begins to self-improve, or the AI got a malicious
goal system. It is logical to assume that the goal system change will occur first, and this
gives us a chance to protect ourselves from the risks of AI, because there will be a short
period of time when AI already has bad goals, but has not developed enough to be able to
hide them from us effectively. This line of reasoning comes from Ben Goertzel.
Unfortunately many goals are benign on a small scale, but became dangerous as the scale
grows. 1000 paperclips are good, one trillion are useless, and 10 to the power of 30
paperclips are an existential risk.
AIhaltingproblem

Another interesting part of the map are the philosophical problems that must face any AI.
Here I was inspired after this reading Russian-language blog hitthelimit
322

One of his ideas is that the Fermi paradox may be explained by the fact that any sufficiently
complex AI halts. (I do not agree that it completely explains the Great Silence.)
After some simplification, with which he is unlikely to agree, the idea is that as AI selfimproves its ability to optimize grows rapidly, and as a result, it can solve problems of any
complexity in a finite time. In particular, it will execute any goal system in a finite time. Once
it has completed its tasks, it will stop.
The obvious objection to this theory is the fact that many of the goals (explicitly or implicitly)
imply infinite time for their realization. But this does not remove the problem at its root, as
this AI can find ways to ensure the feasibility of such purposes in the future after it stops.
(But in this case it is not an existential risk if their goals are formulated correctly.)
For example, if we start from timeless physics, everything that is possible already exists
and the number of paperclips in the universe is a) infinite b) {rapheme{or{. When the
paperclip {rapheme has understood this fact, it may halt. (Yes, this is a simplistic
argument, it can be disproved, but it is presented solely to illustrate the approximate
reasoning, that can lead to AI halting.) I think the AI halting problem is as complex as
the haltingproblemforTuringMachine.
Vernor Vinge in his book Fire Upon the Deep described unfriendly Ais which halt any
externally visible activity about 10 years after their inception, and I think that this intuition
about the time of halting from the point of external observer is justified: this can happen
very quickly. (Yes, I do not have a fear of fictional examples, as I think that they can be
useful for explanation purposes.)
In the course of my arguments with hitthelimit a few other ideas were born, specifically
about other philosophical problems that may result in AI halting.
One of my favorites is associated with modal logic. The bottom line is that from observing
the facts, it is impossible to come to any conclusions about what to do, simply because
oughtnesses are in a different modality. When I was 16 years old this thought nearly killed
me.
It almost killed me, because I realized that it is mathematically impossible to come to any
conclusions about what to do. (Do not think about it too long, it is a dangerous idea.) This is
like awareness of the meaninglessness of everything, but worse.
Fortunately, the human brain was created through the evolutionary process and has
bridges from the facts to oughtness, namely pain, instincts and emotions, which are out of
the reach of logic.
But for the AI with access to its own source code these processes do not apply. For this AI,
awareness of the arbitrariness of any set of goals may simply mean the end of its activities:
the best optimization of a meaningless task is to stop its implementation. And if AI has
access to the source code of its objectives, it can optimize it to maximum simplicity, namely
to zero.
323

by Yudkowsky is also one of the problems of high level AI, and its probably just the
beginning of the list of such issues.
Lobstakle

Existenceuncertainty

If AI use the same logic as usually used to disprove existence of philosophical zombies, it
may be uncertain if it really exists or it is only a possibility. (Again, then I was sixteen I
spent unpleasant evening thinking about this possibility for my self.) In both cases the
result of any calculations is the same. It is especially true in case if AI is philozombie itself,
that is if it does not have qualia. Such doubts may result in its halting or in conversion of
humans in philozombies. I think that AI that do not have qualia or do not believe in them
cant be friendly. This topic is covered in the map in the bloc Actuality.

The status of this map is a draft that I believe can be greatly improved. The experience of
publishing other maps has resulted in almost a doubling of the amount of information. A
companion to this map is a map of AI Safety Solutions which I will publish later.
The map was first presented to the public at a LessWrong meetup in Moscow in June 2015 (in Russian).
The PDF is here: http://immortalityroadmap.com/Aifails.pdf

AI safety in the age of neural networks and Stanislaw Lem 1959 prediction

TL;DR: Neural networks will result in a slow takeoff and an arms race between two AIs. This has both good and bad consequences for the problem of AI safety. A hard takeoff may still happen afterwards.
Summary: Neural-network-based AI can be built; it will be relatively safe, though not for long.
The neuro AI era (since 2012) features exponential growth of total AI expertise, with a doubling period of about 1 year, mainly due to data exchange among diverse agents and different processing methods. It will probably last for about 10 to 20 years; after that, a hard takeoff of strong AI, or the creation of a Singleton based on the integration of different AI systems, can take place.
Neural-network-based AI implies a slow takeoff, which can take years and eventually lead to the AI's evolutionary integration into human society. A similar scenario was described by Stanisław Lem in 1959: the arms race between countries would cause a power race between AIs. The race is only possible if the self-enhancement rate is rather slow and
there is data interchange between the systems. The slow takeoff will result in a world system with two competing AI-countries. Its major risks are a war between the AIs and corrosion of the value systems of the competing AIs.
The hard takeoff implies revolutionary changes within days or weeks. The slow takeoff can transform into a hard takeoff at some stage. The hard takeoff is only possible if one AI considerably surpasses its peers (the OpenAI project wants to prevent this).

Part 1. Limitations of explosive potential of neural nets


Every day now we hear about successes of neural networks, and we might conclude that human-level AI is around the corner. But this type of AI is not fit for explosive self-improvement.
If an AI is based on a neural net, it is not easy for it to undergo quick self-improvement, for several reasons:
1. A neural net's executable code is not fully transparent for theoretical reasons, as knowledge is not explicitly represented within it. So even if one can read the neuron weight values, it is not easy to understand how they could be changed to improve anything.
2. Training a new neural network is a resource-consuming task. If a neuro AI decides to go the way of self-enhancement but is unable to understand its own source code, a logical solution would be to produce a child, i.e. to train a new neural network. However, training neural networks requires far more resources than running them; it requires huge databases and has a high failure probability. All these factors lead to rather slow AI self-enhancement.
3. Neural network training depends on big data volumes and new ideas coming from the external world. This means that a single AI will hardly break away: if it stops free information exchange with the external world, its level will not considerably surpass that of the rest of the world.
4. A neural network's power depends roughly linearly on the power of the computer it runs on, so with a neuro AI, hardware power limits its self-enhancement ability.
5. A neuro AI would be a rather big program, about 1 TB in size, so it could hardly leak into the network unnoticed (at current internet speeds).
6. Even if a neuro AI reaches the human level, it will not gain self-enhancement ability (because no single person can understand all aspects of science). For this, a big lab with numerous experts in different branches is needed. Additionally, it would have to run such a virtual laboratory at a rate at least 10-100 times higher than that of a human being to get an edge over the rest of mankind. That is, it has to be as powerful as 10,000 people or more to surpass the rest of mankind in terms of enhancement rate. This is a very high requirement. As a result, the neural net era may lead to a human-level, or even slightly superhuman, AI which is unable to self-enhance, or does so so slowly that it lags behind technical progress.
Civilization-level intelligence is the total intelligence that the civilization possesses over 100 years of its history, defined by the complexity of the scientific and engineering tasks it can solve. For example, during the 20th century nuclear weapons were created, but the problems of cancer, aging, and AI creation were not solved. This means those tasks have superior complexity.
For a strong AI to be able to change human destiny, its intelligence should surpass that of the entire civilization by 100-1000 times. In this case, it could create a remedy against aging, cure cancer, and so on within a year. (To destroy mankind, it would need a much lower level of intelligence, like that of a sophisticated computer virus.)
7. The richest neural net project will get a decisive advantage (or not?). A neuronet-based AI's power is approximately proportional to the power of the computers it runs on, the time spent, the size of the database it was trained on, and the number of researchers in the project. A project from a basement may only succeed if it has a superior master algorithm of intelligence. A small project can hardly experience exponential self-improvement, because the leading AI will already have better computing power and, more importantly, better databases and researchers. And since we now live in an AI-summer period with a lot of hype about AI, big companies will invest to buy the best ideas about the master algorithm (by buying startups).
An objection here is that nowadays anybody who has spent a year on machine learning and has a GPU can reach meaningful results in creating their own RNN. So maybe the collective intelligence of private researchers will outweigh big companies.
A detailed roadmap of AI failure modes can be found here, and a roadmap of the ways of creating a safe AI here.
See also a machine learning expert opinion and a recent FHI post.

Part 2. Some ideas about ensuring the safety of neural-net-based AI


The concepts below can be helpful in creating extra safety measures (while not absolute, they can reduce the chances of an AI getting out of control):
1. Program memory access restriction. In state-of-the-art computers, program code and data share the same memory, which is known as the von Neumann architecture. However, there is also the Harvard architecture, where data and programs are stored separately (https://en.wikipedia.org/wiki/Harvard_architecture).
This kind of computer can be physically produced and provided with source-code read-attempt indicators; a minimal sketch of such an indicator is given after this list. (However, some current neural networks generate a new neural network for every task for higher efficiency, and this approach will not work for them, or will limit AI power.)
2. Compiled executable code and its source written in a high-level programming language should be stored separately. If the AI tries to reach the source, it will be a signal that it is getting out of control. With neural networks, the compiled code, its source, and the database used to train the network should all be stored separately.
3. The AI should be limited in knowledge useful for building AI. To improve itself, an AI needs a very precise understanding of its internal construction (the same applies to a human being). This enables us to keep it under control: if not provided with this data, it will have problems with self-improvement. That is, an AI which is not an AI expert is safer to some extent.
4. We can use a foggy architecture, which is very difficult to optimize because the AI's values and major features are hidden among heaps of other details. Neural networks are an architecture of this kind. But this will also make it difficult for us to put our values into it.
5. We can physically restrict the ability of the AI to create AI children; their upbringing requires huge resources, time, and data before they become more effective than their parents. An AI can create another AI, but then it will need to test it for a long time to find out whether the latter observes the parent's values, whether it has any mistakes, and whether it is really superior to the original version. This kind of testing is time-consuming, generally nontrivial, and even incomputable. This task is especially difficult for neural networks, which contain a huge amount of random elements, are not exact in principle, and whose faultlessness cannot be proven. (Moreover, there are numerous examples where even a small image shift causes a recognition error.) However, this problem can be solved, because this is the way that human society works: children are born, tested, and educated.
The creation of AI children may also be restricted by restricting the needed infrastructure: memory, datasets, processing power.
6. A neuronet-based agent (like a robot) will be anthropomorphous in terms of its brain structure. Even if we do not imitate the human brain intentionally, we will get approximately the same thing. In a sense, this may be good: even if these AIs supplant people, they will still be almost people, differing from normal people like one generation from another. And being anthropomorphous, they may be more compatible with human value systems. Along with that, there may exist absolutely humanless AI architecture types (for example, if evolution is regarded as an inventor).
But the neural net world will not be the EM-dominated world of Hanson. The EM-world may appear at a later stage, but I think that exact uploads will still not be the dominating form of AI.
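A minimal sketch of the read-attempt indicator mentioned in point 1 above; all class and function names here are illustrative placeholders, not an existing API:

# Toy illustration: code and data live in separate stores, and any attempt by the
# running system to read its own "code" region triggers an alarm (the indicator).

class GuardedCodeStore:
    def __init__(self, code_bytes: bytes, alarm):
        self._code = code_bytes     # analogue of Harvard-architecture program memory
        self._alarm = alarm         # callback notified on any read attempt

    def execute(self, interpreter):
        # Execution is allowed: the interpreter runs the code without exposing it.
        return interpreter(self._code)

    def read(self, requester: str) -> bytes:
        # Reading is treated as a possible sign of a self-modification attempt.
        self._alarm(f"code read attempted by {requester}")
        raise PermissionError("code region is not readable at runtime")

def alarm(message: str) -> None:
    print("SAFETY INDICATOR:", message)

store = GuardedCodeStore(b"\x00\x01\x02", alarm)
try:
    store.read(requester="agent-process")
except PermissionError:
    pass  # the read was blocked and logged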

Part 3. Transition from slow to hard takeoff


In a sense, neuronet-based AI is like a chemical-fuel rocket: such rockets do fly, and can even fly across the entire solar system, but they are limited in terms of their development potential, bulky, and clumsy.
Sooner or later, using the same principle or another one, a completely different AI can be built, which will be less resource-consuming and faster in terms of self-improvement ability. If a superagent is built which can create neural networks but is not a neural network itself, it can be of a rather small size and, partly due to this, undergo faster evolution. Neural networks have rather poor intelligence per unit of code. Probably the same thing could be done in a more optimal way, reducing size by an order of magnitude, for example by creating a program that analyzes an already-trained neural network and extracts all the necessary information from it.
When hardware improves in 10-20 years, multiple neuronets will be able to evolve within the same computer simultaneously, or be transmitted via the Internet, which will boost their development.
A smart neuro AI can analyze all available data-analysis methods and create a new AI architecture able to speed up faster.
The launch of quantum-computer-based networks could boost their optimization drastically.
There are many other promising AI directions which have not popped up yet: Bayesian networks, genetic algorithms.
The neuro AI era will feature exponential growth of humanity's total intelligence, with a doubling period of about 1 year, mainly due to data exchange among diverse agents and different processing methods. It will last for about 10 to 20 years (2025-2035) and, after that, a hard takeoff of strong AI can take place.
That is, the slow take-off period will be the period of collective evolution of both computer
science and mankind, which will enable us to adapt to changes under way and adjust
them.
Just as there are Mac and PC in the computer world, or Democrats and Republicans in politics, it is likely that two big competing AI systems will appear (plus an ecology of smaller ones). It could be Google and Facebook, or the USA and China, depending on whether the world chooses the way of economic competition or military opposition. That is, the slow takeoff hinders the consolidation of the world under a single control and instead promotes a bipolar model. While a bipolar system can remain stable for a long period of time, there are always risks of a real war between the AIs (see Lem's quote below).

Part 4. In the course of the slow takeoff, AI will go through several stages that we can identify now
While the stages may be passed rather quickly or be blurred together, we can still track them like milestones. The dates are only estimates.
1. AI autopilot. Tesla has it already.
2. AI home robot. All prerequisites are available to build it by 2020 at the latest. This robot will be able to understand and fulfill an order like "Bring my slippers from the other room." On its basis, something like a "mind-brick" may be created, a universal robot brain able to navigate in natural space and recognize speech. This mind-brick can then be used to create more sophisticated systems.
3. AI intellectual assistant. Searching through personal documentation, the possibility to ask questions in natural language and receive wise answers. 2020-2030.
4. AI human model. Very vague as yet. Could be realized by means of robot-brain adaptation. Will be able to simulate 99% of usual human behavior, probably except for solving problems of consciousness, complicated creative tasks, and generating innovations. 2030.
5. AI as powerful as an entire research institution, able to create scientific knowledge and get self-upgraded. Could be made of numerous human models. 100 simulated people, each working 100 times faster than a human being, will probably be able to create an AI capable of self-improving faster than humans in other laboratories can do it. 2030-2100.
5a. Self-improving threshold. AI becomes able to self-improve independently and quicker than all of humanity.
5b. Consciousness and qualia threshold. AI is able not only to pass the Turing test in all cases, but has experiences and understands why and what it is.
6. Mankind-level AI. AI possessing intelligence comparable to that of the whole of mankind. 2040-2100.
7. AI with intelligence 10-100 times greater than that of the whole of mankind. It will be able to solve the problems of aging, cancer, solar system exploration, nanorobot building, and radical improvement of the life of all people. 2050-2100.
8. Jupiter brain: a huge AI using the entire planet's mass for calculations. It can reconstruct dead people, create complex simulations of the past, and dispatch von Neumann probes. 2100-3000.
9. Galactic (Kardashev level 3) AI. Several million years from now.
10. All-Universe AI. Several billion years from now.

Part 5. Stanisław Lem on AI, 1959, "The Investigation"


In his novel "The Investigation", Lem's character discusses the future of the arms race and AI:
------- Well, it was somewhere in '46. A nuclear race had started. I knew that when the limit
would be reached (I mean maximum destruction power), development of vehicles to
transport the bomb would start. .. I mean missiles. And here is where the limit would be
reached, that is both parts would have nuclear warhead missiles at their disposal. And
there would arise desks with notorious buttons thoroughly hidden somewhere. Once the
button is pressed, the missiles take off. Within about 20 minutes, finis mundi ambilateralis comes: the mutual end of the world. <> Those were only the prerequisites.
Once started, the arms race cant stop, you see? It must go on. When one part invents a
powerful gun, the other responds by creating a harder armor. Only a collision, a war is the
limit. While this situation means finis mundi, the race must go on. The acceleration, once
applied, enslaves people. But let's assume they have reached the limit. What remains? The brain. The command staff's brain. The human brain cannot be improved, so some automation should be undertaken in this field as well. The next stage is an automated headquarters, or
strategic computers. And here is where an extremely interesting problem arises. Namely,
two problems in parallel. Mac Cat has drawn my attention to it. Firstly, is there any limit for
development of this kind of brain? It is similar to chess-playing devices. A device, which is
able to foresee the opponents actions ten moves in advance, always wins against the one,
which foresees eight or nine moves ahead. The deeper the foresight, the more perfect the
brain is. This is the first thing. <> The creation of devices of ever greater volume for strategic solutions means, regardless of whether we want it or not, the necessity to increase the amount of data put into the brain. This in turn means an increasing domination of those devices over mass processes within society. The brain can decide that the
notorious button should be placed otherwise or that the production of a certain sort of steel
should be increased and will request loans for the purpose. If the brain like this has been
created, one should submit to it. If a parliament starts discussing whether the loans are to
be issued, the time delay will occur. The same minute, the counterpart can gain the lead.
Abolition of parliament decisions is inevitable in the future. The human control over
solutions of the electronic brain will be narrowing as the latter will concentrate knowledge.
Is it clear? On both sides of the ocean, two continuously growing brains appear. What is
the first demand of a brain like this, when, in the middle of an accelerating arms race, the
next step will be needed? <> The first demand is to increase it the brain itself! All the
rest is derivative.
- In a word, your forecast is that the Earth will become a chessboard, and we the pawns, to be played by two mechanical players during an eternal game?
Sciss's face was radiant with pride.
- Yes. But this is not a forecast. I just draw conclusions. The first stage of the preparatory process is coming to an end; the acceleration grows. I know all this sounds unlikely. But this is the reality. It really exists!
<> And in this connection, what did you offer at that time?
- Agreement at any price. Strange as it sounds, ruin is a lesser evil than the chess game. This is awful, the lack of illusions, you know.
----
Part 6. The primary question: will strong AI be built during our lifetime?
That is, is this a question of the good of future generations (a question that an effective altruist, not a common person, is concerned about), or a question of my near-term planning?
If AI is built during my lifetime, it may lead either to radical life extension by means of different technologies and the realization of all sorts of good things not to be enumerated here, or to my death and probably pain, if this AI is unfriendly.
It depends on the time when AI is built and on my expected lifetime (taking into account the life extension to be obtained from weaker AI versions and scientific progress on the one hand, and its reduction due to global risks unrelated to AI on the other).
Note that we should consider different dates for different purposes. If we would like to avoid AI risks, we should take the earliest date of its possible appearance (for example, the first 10%), and if we count on its benefits, then the median.
Since the neuro-revolution, the approximate doubling time of AI algorithms' efficiency (mainly in the image recognition area) has been about 1 year. It is difficult to quantify this process, as task complexity does not change linearly, and the patterns that remain unrecognized are always harder to recognize.
Now an important factor is the radical change in attitude towards AI research. The winter is over; an unrestrained summer with all its overhype has begun. It has caused huge investments into AI research (chart), more enthusiasts and employees in this field, and bolder research. It is now a shame not to have one's own AI project. Even KAMAZ is developing a friendly AI system. The entry threshold has dropped: one can learn basic neuronet-tuning skills within a year; heaps of tutorial programs are available. Supercomputer hardware has become cheaper. Also, a guaranteed market for AIs in the form of autopilot cars and, in the future, home robots has emerged.
If algorithm improvement keeps the pace of about one doubling per year, this means roughly a 1,000,000-fold improvement over 20 years, which would certainly be equivalent to creating a strong AI beyond the self-improvement threshold. In this case, a lot of people (including me) have good chances to live until that moment and gain immortality.
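As a quick check of the arithmetic behind that figure (one doubling per year for 20 years):

\[
2^{20} = 1\,048\,576 \approx 10^{6}.
\]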
Conclusion
Even a non-self-improving neural AI system may be unsafe if it gains global domination (and has bad values), or if it goes into confrontation with an equally large opposing system. Such a confrontation may result in a nuclear or nanotech-based war, and the human population may be held hostage, especially if both systems have pro-human value systems (blackmail).
In any case, the human extinction risks of slow-takeoff AI are not inevitable and are manageable on an ad hoc basis. A slow takeoff does not prevent a hard takeoff at a later stage of AI development. A hard takeoff is probably the next logical stage of a soft takeoff, as it continues the trend of accelerating progress. In biological evolution we can see the same process: the slow enlargement of mammalian brains over the last tens of millions of years was replaced by an almost hard takeoff of Homo sapiens intelligence, which now threatens the ecological balance.
A hard takeoff is a global catastrophe almost by definition, and extraordinary measures are needed to put it on a safe track. Maybe the period of almost-human-level neural-net-based AI will help us to create instruments of AI control. Maybe we could use simpler neural AIs to control a self-improving system.
Another option is that the neural AI age will be very short, and it is already almost over. In 2016 Google DeepMind beat a top human player at Go using a complex approach combining several AI architectures. If this trend continues, we could get strong AI before 2020, and we are completely unready for it.

The map of x-risks prevention

When I started to work on this map of AI safety solutions, I wanted to illustrate the excellent 2013
article Responses to Catastrophic AGI Risk: A Survey by Kaj Sotala and IEET Affiliate Scholar Roman V.
Yampolskiy, which I strongly recommend. However, during the process I had a number of ideas to expand
the classification of the proposed ways to create safe AI.
In their article there are three main categories of safety measures for AI:
Social constraints
External constraints
Internal constraints
I have added three more categories:
AI is used to create a safe AI
Multi-level solutions
Meta-level, which describes the general requirements for any AI safety theory.
In addition, I have divided the solutions into simple and complex. Simple solutions are the ones whose
recipe we know today. For example: do not create any AI. Most of these solutions are weak, but they
are easy to implement.
Complex solutions require extensive research and the creation of complex mathematical models for their implementation, and could potentially be much stronger. But the odds are lower that there will be time to develop and implement them successfully.
Additional ideas for AI safety have been included in the map from the work of Ben Goertzel, Stuart Armstrong, and Christiano.
My novel contributions include:
1. Restriction of the self-improvement of the AI. Just as a nuclear reactor is controlled by regulating
the intensity of the chain reaction, one may try to control AI by limiting its ability to self-improve in
various ways.
2. Capture the beginning of dangerous self-improvement. At its start, a potentially dangerous AI has a moment of critical vulnerability, just as a ballistic missile is most vulnerable at launch. Imagine that an AI gained an unauthorized malignant goal system and started to strengthen itself. At the beginning of this process it is still weak, and if it is below the level of human intelligence at this point, it may still be more stupid than the average human even after several cycles of self-empowerment. Let's say it has an IQ of 50, and after self-improvement it rises to 90. At this level it is already committing violations that can be observed from the outside (especially unauthorized self-improvement), but it does not yet have the ability to hide them. At this point in time, you can turn it off. Alas, this idea would not work in all cases, as some objectives may become hazardous gradually as their scale grows (1,000 paperclips are safe, one billion are dangerous, 10^20 are an x-risk). This idea was put forward by Ben Goertzel.
3. AI constitution. First, in order to describe the Friendly AI and human values we can use the existing
body of laws. (It would be a crime to create an AI that would not comply with the law.) Second, to
describe the rules governing the conduct of AI, we can create a complex set of rules (laws that are much
more complex than Asimov's three laws), which will include everything we want from AI. This set of rules
can be checked in advance by specialized AI, which calculates only the way in which the application of
these rules can go wrong (something like mathematical proofs based on these rules).
4. Philosophical landmines. In the map of AI failure levels I have listed a number of ways in which
high-level AI may halt when faced with intractable mathematical tasks or complex philosophical problems.
One may try to fight high-level AI using landmines, that is, putting it in a situation where it will have to
solve some problem, but within this problem is encoded more complex problems, the solving of which will
cause it to halt or crash. These problems may include Gödelian mathematical problems, nihilistic rejection
of any goal system, or the inability of AI to prove that it actually exists.
5. Multi-layer protection. The idea here is not that if we apply several methods at the same time, the
likelihood of their success will add up. Simply adding methods will not work if all the methods are weak.
Rather, the idea is that the methods of protection can work together to protect the object from all sides.
In a sense, human society works the same way: a child is educated by example as well as by rules of conduct; then he begins to understand the importance of compliance with these rules, while at the same time the law, the police, and his neighbors are watching him, so he knows that criminal acts will put him in jail. As a result, lawful behavior becomes a goal which he finds it rational to follow.
This idea can be reflected in a specific AI architecture, which will have at its core a set of immutable rules, on top of which a human emulation will be built to make high-level decisions. Complex tasks will be delegated to narrow Tool AIs. In addition, an independent emulation (a conscience) will check the ethics of its decisions. Decisions will first be tested in a multi-level virtual reality, and the self-improvement ability of the whole system will be significantly limited. That is, it will have an IQ of 300, but not a million. This will make it effective in solving aging and global risks, but it will also be predictable and understandable to us. The scope of its jurisdiction should be limited to a few important factors: prevention of global risks, death prevention, and the prevention of war and violence. But we should not trust it on such an ethically delicate topic as the prevention of suffering, which should be addressed with the help of conventional methods.
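A minimal sketch of how such a multi-layer architecture might be wired together; everything here (the checks, the field names, the thresholds) is a hypothetical placeholder rather than a worked-out design:

# Toy illustration of the multi-layer idea: a decision must pass the immutable core
# rules, an ethics check by an independent "conscience", and a sandbox test
# before it is allowed to act in the real world.

from dataclasses import dataclass

@dataclass
class Decision:
    description: str
    predicted_effects: dict

def violates_core_rules(decision: Decision) -> bool:
    # Placeholder for the immutable core; a real system needs a far richer world model.
    return decision.predicted_effects.get("harm_to_humans", 0) > 0

def conscience_approves(decision: Decision) -> bool:
    # Independent ethics emulation; here just a stub flag.
    return decision.predicted_effects.get("ethically_acceptable", False)

def sandbox_test_passes(decision: Decision) -> bool:
    # Decisions are first tried in a simulated environment.
    return decision.predicted_effects.get("sandbox_outcome", "fail") == "ok"

def approve(decision: Decision) -> bool:
    """A decision is executed only if every protective layer agrees."""
    return (not violates_core_rules(decision)
            and conscience_approves(decision)
            and sandbox_test_passes(decision))

plan = Decision("deploy anti-aging therapy trial",
                {"harm_to_humans": 0, "ethically_acceptable": True, "sandbox_outcome": "ok"})
print(approve(plan))  # True only if all layers pass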
This map could be useful for the following applications:
1. As illustrative material in the discussions. Often people find solutions in an ad hoc way, once they learn
about the problem of friendly AI or are focused on one of their favorite solutions.
2. As a quick way to check whether a new solution really has been found.
3. As a tool to discover new solutions. Any systematization creates free cells to fill for which one can
come up with new solutions. One can also combine existing solutions or be inspired by them.
4. There are several new ideas in the map.

A companion to this map is the map of AI failure levels. In addition, this map is subordinated to the map
of global risk prevention methods and corresponds to the block Creating Friendly AI Plan A2 within it.

Estimation of timing of AI risk


TL;DR: Computer power is one of three arguments that we should take the prior probability of AI as "anywhere in the 21st century". I then use four updates to shift it even earlier: the precautionary principle, recent neural net successes, and so on.
I want to once again try to assess the expected time until strong AI. I will estimate the prior probability of AI, and then try to update it based on recent evidence.
First, I will try to argue for the following prior probability of AI: "If AI is possible, it will most likely be built in the 21st century, or it will be proven that the task has some very tough hidden obstacles."
Arguments for this prior probability:
Part I
1. Science power argument.
We know that humanity has been able to solve many very complex tasks in the past, and it typically took around 100 years: heavier-than-air flight, nuclear technologies, space exploration. 100 years is enough for several generations of scientists to concentrate on a complex task and extract everything about it that can be done without some extraordinary insight from outside our current knowledge. We have already been working on AI for 65 years, by the way.
2. Moore's law argument


Moore's law will run out of steam in the 21st century, but this will not stop the growth of stronger and stronger computers for a couple more decades.
This growth will result from cheaper components, from large numbers of interconnected computers, from cumulative production of components, and from large investments of money. It means that even if Moore's law stops (that is, there is no more progress in microelectronic chip technology), for 10-20 years from that day the power of the most powerful computers in the world will continue to grow, at a lower and lower speed, and may grow 100-1000 times from the moment Moore's law ends.
But such computers will be very large, power-consuming, and expensive. They will cost hundreds of billions of dollars and consume gigawatts of energy. The biggest computer planned now is the 200-petaflops "Summit", and even if Moore's law ends with it, this means that 20-exaflops computers will eventually be built.
There are also several almost unused options: quantum computers, superconducting, FPGAs, new ways of parallelization, graphene, memristors, optics, the use of genetically modified biological neurons for calculations.
All this means that: A) computers of 10^20 flops will eventually be built (and this is comparable with some estimates of human brain capacity); B) they will be built in the 21st century; C) the 21st century will see the biggest advance in computer power compared with any other century, and almost everything that could be built will be built in the 21st century and not after.
So, the computer on which AI may run will be built in the 21st century.
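A back-of-the-envelope check of the figures above (the 100-1000x post-Moore growth factor is the author's assumption):

\[
2\times10^{17}\ \text{FLOPS}\times 100 = 2\times10^{19}\ \text{FLOPS}\ (20\ \text{exaflops}),\qquad
2\times10^{17}\ \text{FLOPS}\times 1000 = 2\times10^{20}\ \text{FLOPS}\approx 10^{20}\ \text{FLOPS}.
\]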

"3". Uploading argument


The uploading even of a worm is lagging, but uploading provides
upper limit on AI timing. There is no reason to believe that scanning
human brain will take more than 100 years.
Conclusion from the prior: a flat probability distribution.


If we knew for sure that AI will be built in the 21st century, we could give it a flat probability distribution, i.e. an equal probability of appearing in any given year, around 1 per cent. (By the way, a constant yearly hazard rate results in an exponential cumulative distribution, but we will not concentrate on that now.) We can use this probability as a prior for our future updates. Now we will consider arguments for updating this prior probability.
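Written out explicitly, the two natural readings of this prior are as follows (a minimal formalization; the 1% annual figure is the author's), for a year 2000 + n:

\[
P_{\text{flat}}(\text{AI by }2000+n)=\frac{n}{100},\qquad
P_{\text{const.\ hazard}}(\text{AI by }2000+n)=1-(1-0.01)^{n}.
\]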
Mediocrity argument. Here we use something like the Doomsday argument. We assume that we are at a moment of time randomly chosen from the whole period of AI development. If we take 1950 as the date of the beginning of AI research, we are now in the 66th year after the beginning. Using Gott's formula, it means that AI will be created (or proven to be impossible) within the next 66 years with 50 per cent probability, and within the next 132 years with 66 per cent probability. This supports an estimation of a high probability of AI in the next 100 years. (We could also update this Gott probability based on the higher number of AI researchers now, which would result in an earlier prediction of AI.)
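A minimal sketch of Gott's delta-t estimate as used here; the 66-year figure comes from the text, and the function name is illustrative:

# Assumption: we observe the process at a uniformly random fraction of its total
# lifetime. Then P(t_future < k * t_past) = k / (1 + k), so with confidence c the
# remaining duration is at most t_past * c / (1 - c).

def gott_upper_bound(t_past_years: float, confidence: float) -> float:
    """Upper bound on the remaining duration at the given confidence level."""
    return t_past_years * confidence / (1.0 - confidence)

if __name__ == "__main__":
    age = 66  # years of AI research since 1950, as in the text
    for c in (0.5, 2/3, 0.95):
        print(f"P = {c:.2f}: AI (or proof of impossibility) within "
              f"{gott_upper_bound(age, c):.0f} years")
    # Prints roughly 66, 132 and 1254 years, matching the 50%/66% figures above
    # and the ~1300-year figure used later in this chapter.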
Part II
Updates of the prior probability.
Now we can use this prior probability of AI to estimate the timing of AI risks. Before, we discussed AI in general, but now we add the word "risk".
Arguments for a rising probability of AI risks in the near future:
1. Simpler AI suffices for risk
We do not need an AI that is a) self-improving, b) superhuman, c) universal, or d) capable of world domination for an extinction catastrophe. None of these conditions is necessary. Extinction is a simpler task than friendliness. Even a program which helps to build biological viruses and is local, non-self-improving, non-agentive, and specialized could create enormous harm by helping to put hundreds of designed pathogen-viruses into the hands of existential terrorists. Extinction-grade AI may be simple, and it could also come earlier in time than full friendly AI. While UFAI may be the ultimate risk, we may not be able to survive until then because of simpler forms of AI, almost on the level of computer viruses. In general, earlier risks overshadow later risks.


2. Precautionary principle
We should take the lower estimates of the timing of AI arrival, based on the precautionary principle. Basically, this means that we should treat a 10 per cent probability of its arrival as if it were 100 per cent.
3. Recent success of neural nets
We may use the events of the last several years to update our estimation of AI timing. In recent years we have seen enormous progress in AI based on neural nets. The doubling time of AI efficiency on different tests is around 1 year now, and AI has won at many games (Go, poker, and so on). Belief in the possibility of AI rose in recent years, which resulted in overhype and a large growth in investments, as well as many new startups. Specialized hardware for neural nets has been built. If such growth continues for 10-20 years, it would mean a 1,000 to 1,000,000-fold growth in AI capabilities, which must include reaching human-level AI.
4. AI is increasingly used to build new AIs
AI writes programs and helps to calculate the connectome of the human brain. All this means that we should expect human-level AI in 10-20 years and superintelligence soon afterwards.
It also means that the probability of AI is distributed exponentially from now until its creation.
5. Counterarguments
The biggest argument against this is historical: we have seen a lot of AI hype before, and it failed to produce meaningful results. AI is always "10 years from now", and AI researchers tend to overestimate progress. Humans tend to be overconfident about AI research. We are also still far from understanding how the human brain works, and even the simplest questions about it may be puzzling. Another way to assess AI timing is the idea that AI is an unpredictable black swan event, depending on only one idea appearing (it seems that Yudkowsky thinks so). If someone gets this idea, AI is here.
6. Estimating the frequency of new ideas in AI design
In this case, we should multiply the number of independent AI researchers by the number of trials, that is, the number of new ideas they get. I suggest assuming that the latter rate is constant per researcher. In that case we should estimate the number of active and independent AI researchers. It seems that this number is growing, fuelled by new funding and hype.
Conclusion.
We should estimate its arrival in 2025-2035 and have our preventive ideas ready and deployed by that time. If we hope to use AI in preventing other x-risks or in life extension, we should not expect it until the second half of the 21st century. We should use an earlier estimate for bad AI than for good AI.
Doomsday argument in estimating of AI arrival timing

Now we will use the famous Doomsday argument to get one more estimate of AI arrival timing.
Gott famously estimated the future duration of the Berlin Wall's existence:
Gott first thought of his Copernicus method of lifetime estimation in 1969 when stopping at the Berlin Wall and wondering how long it would stand. Gott postulated that the Copernican principle is applicable in cases where nothing is known; unless there was something special about his visit (which he didn't think there was) this gave a 75% chance that he was seeing the wall after the first quarter of its life. Based on its age in 1969 (8 years), Gott left the wall with 75% confidence that it wouldn't be there in 1993 (1961 + (8/0.25)). In fact, the wall was brought down in 1989, and 1993 was the year in which Gott applied his Copernicus method to the lifetime of the human race. https://en.wikipedia.org/wiki/J._Richard_Gott

The most interesting unknown in the future is the time of creation of strong AI. Our priors are insufficient to predict it because it is such a unique task. So it is reasonable to apply Gott's method.
AI research began in 1950, and so is now 65 years old. If we are currently in a random
moment during AI research then it could be estimated that there is a 50% probability of AI
being created in the next 65 years, i.e. by 2080. Not very optimistic. Further, we can say
that the probability of its creation within the next 1300 years is 95 per cent. So we get a
rather vague prediction that AI will almost certainly be created within the next 1000 years,
and few people would disagree with that.
But if we include the exponential growth of AI research in this reasoning (the same way as we do in the Doomsday argument, where we use birth rank instead of time and thus account for the changing population density), we get a much earlier predicted date.
We can get data on AI research growth from Luke's post:
According to MAS, the number of publications in AI grew by 100+% every 5 years between 1965 and 1995, but between 1995 and 2010 it has been growing by about 50% every 5 years. One sees a similar trend in machine learning and pattern recognition.

From this we can conclude that the doubling time of AI research is five to ten years (updated by adding the recent boom in neural networks, which again has a doubling time of about five years). This means that during the next five years more AI research will be conducted than in all the previous years combined.
If we apply the Copernican principle to this distribution, then there is a 50% probability that AI will be created within the next five years (i.e. by 2020) and a 95% probability that AI will be created within the next 15-20 years; thus it will almost certainly be created before 2035.
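A minimal sketch of this research-volume-weighted estimate, under the stated assumptions (cumulative AI research grows exponentially with a five-year doubling time, and we sit at a random point within the total research volume); the function name is illustrative:

import math

def years_until(confidence: float, doubling_years: float) -> float:
    """Years until the future research volume reaches the quantile implied by `confidence`."""
    k = confidence / (1.0 - confidence)          # future volume <= k * past volume
    return doubling_years * math.log2(1.0 + k)   # time for cumulative volume to grow (1+k)-fold

if __name__ == "__main__":
    T_d = 5.0  # doubling time of AI research in years, as assumed in the text
    print(f"50% quantile: {years_until(0.50, T_d):.1f} years")   # ~5 years
    print(f"95% quantile: {years_until(0.95, T_d):.1f} years")   # ~21.6 years, roughly the text's 15-20 year figure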
This conclusion itself depends on several assumptions:
AI is possible
The exponential growth of AI research will continue
The Copernican principle has been applied correctly.

Interestingly, this coincides with other methods of AI timing prediction:


Conclusions of the most prominent futurologists (Vinge 2030, Kurzweil 2029)
Survey of the field of experts
Prediction of Singularity based on extrapolation of history acceleration (Forrester
2026, Panov-Skuns 2015-2020)
Brain emulation roadmap
Computer power brain equivalence predictions
Plans of major companies

It is clear that this implementation of the Copernican principle may have many flaws:
1. One possible counterargument here is something akin to Murphy's law, specifically one which claims that any particular complex project requires much more time and money than planned before it can be completed. It is not clear how it could be applied to many competing projects. But the field of AI is known to be more difficult than it seems to researchers.
2. Also, the moment at which I am observing AI research is not really random, as it was in the Doomsday argument created by Gott in 1993, and I probably would not have been able to apply it before the argument became known.
3. The number of researchers is not the same as the number of observers in the original
DA. If I were a researcher myself, it would be simpler, but I do not do any actual work on AI.

Several interesting ideas of AI control


Neuronet, stakeholders, and dissolving the control problem via personal ascension
The simplest model of AI creation describes the interaction of two entities: the Creator and the AI. But a more complex model must use a systems approach and add other stakeholders:
- Society
- Other AI projects
- Programmers
- Employer (a company or state)
- Future generations
- Aliens
- Future states of AI development

Adding more stakeholders makes the problem of Friendliness more complicated. For example, if I add society, I have to put into the AI not only my values but also the values of other people, which are unknown to me and which I may not share.
If I add the existence of other AI projects, I must conclude that my project should be the first to win and should prevent the others from being realized. This may contradict my other values, as it may require violence.
It is clear that the more stakeholders I add, the more intractable the problem I get.
So, I may try to describe an ideal situation where the aforementioned problem does not exist. In the ideal situation there will be the smallest possible number of stakeholders.
Assume that there is only one stakeholder, and it is me. In this case there is a straightforward solution to the control problem:
I have to use self-improvement to become the superintelligence myself.
The positives of this idea: there are no intrinsic risks for me and my value system. My value system will naturally evolve according to its own logic, and I can control the rate of my ascension.

But there are still risks of mistakes and unpredictable consequences. Wireheading, losing the meaning of life, and memetic hazards are still here.
There are no risks to others, as there are no others in this model. There is no problem of communication, and I hope I will not become a paperclip maximizer.
Even if my values evolve in a strange way, I will still think of them as my values, and will not be dissatisfied with them.

NB: This idea needs more rigorous evaluation, and it has nowhere been proven safe. It is just a good-looking idea.

Now, as we have something that looks like a working solution, I may try to adapt it to the existence of other stakeholders. My values seem to be not bad for other people:
1) I do not want to become a serial killer, so I will override any tendency of mine towards it.
2) I am interested in other people, so I will keep people alive.
3) I am interested in preventing death and suffering.
4) I will create Tool AIs to solve practical tasks like fighting other AI projects, but I will understand how they work and prevent them from evolving.
5) I have a sufficient understanding of human ethics not to do obviously bad things, and my understanding of it will naturally evolve.
6) I will control the rate of my improvement and thus prevent the risks of too quick improvement.
7) Most importantly, "me" here could be a group of people from the beginning. So it could be an effective small group of people, connected by shared values, effective social practices, and neuroimplants (Neuronet).
8) The merging of minds and consciousnesses into one large experience and brain could also help to ascend large groups of people, and would help with the "not-me" problem.

The main problem with this approach is technical. The ways of human self-improvement are lagging compared to progress in computers. Even savants are behind. Nootropics are very weak.
And as so many people are interested in self-improvement, the chances that the ascending person will really be me are rather small.
It is also interesting to mention that to create FAI someone has to be very clever, and so has to use all the available ways of self-improvement; he will almost become an AI himself if he is able to solve the problem of FAI, so we cannot escape the personal ascension solution.

Struggle of AI-projects among themselves


Even now there is intense competition between the companies developing AI, for attention and investors, and over whose ideas about the way to create universal AI are correct. When a certain company creates the first powerful AI, it will face a choice: either to apply it to control all other AI projects in the world, and thus the whole world, or to face the risk that a competing organization with unknown global goals will do so in the near future and overtake the first company. Whoever has the advantage must attack before being threatened with the loss of that advantage. This necessity of choice is not a secret: it has already been discussed in the open press and will certainly be known to all companies approaching the creation of strong AI. Probably some companies will refuse in that case to try to establish control over the world first, but the strongest and most aggressive will most likely dare to do it. Thus the need to attack first will lead to the use of poor-quality, unfinished versions of AI with unclear goals. Even in the 19th century the telephone was patented almost simultaneously in different places, so the gap between the leader of the race and the one catching up may now be days or hours. The smaller this gap, the more intense the struggle will be, because the lagging project will still possess the power to resist. It is even possible to imagine a scenario in which one AI project has to establish control over nuclear missiles and attack the laboratories of other projects.
AI and its separate copies
When a powerful AI arises, it will be compelled to create copies of itself (probably reduced ones) to send them, for example, on expeditions to other planets, or simply to load them onto other computers. Accordingly, it will have to supply them with a certain goal system and some kind of "friendly", or rather vassal, relationship with itself, as well as a friend-or-foe recognition system. A failure in this goal system will result in a given copy "rebelling". For example, the self-preservation goal contradicts the goal of obeying dangerous orders. This can take very subtle forms but ultimately lead to a war between versions of one AI.
Speed of start
From the point of view of the speed of AI development, three variants are possible: fast start, slow start, and very slow start.
Fast start: AI reaches an intelligence surpassing the human level by many orders of magnitude within hours or days. For this, some kind of chain reaction must begin, in which every increase in intelligence gives greater possibilities for its subsequent increase. (This process already occurs in science and technology, supporting Moore's law. And it is similar to a chain reaction in a reactor, where the neutron reproduction factor is greater than 1.) In this case it will almost certainly overtake all other AI projects. Its intelligence will become sufficient to seize power on Earth. We cannot tell precisely what such a takeover would look like, as we cannot predict the behaviour of an intelligence surpassing ours. The objection that an AI will not want to act in the external world can be ruled out on the grounds that if there are many AI projects or copies of the AI program, at least one will sooner or later be used as a tool for the conquest of the whole world.
342

It is important to note that a successful attack by a strong AI will probably develop secretly until it becomes irreversible. Theoretically, the AI could hide its domination even after the attack is over. In other words, perhaps it has already happened.
Scenarios of a "fast start":
AI seizes the whole Internet and subordinates its resources to itself. Then it penetrates all networks fenced off by firewalls. This scenario requires on the order of hours to realize. Seizure means the ability to control all computers in the network and to run its own calculations on them. However, even before that, the AI can read and process all the information it needs from the Internet.
AI orders the laboratory synthesis of a certain DNA code, which allows it to create radio-controlled bacteria, which synthesize more and more complex organisms under its control and gradually create nanorobots that can be applied to any purpose in the external world, including infiltration into other computers, into the brains of people, and the creation of new computing capacities. This scenario is considered in detail in Yudkowsky's article about AI. (Speed: days.)
AI engages in dialogue with people and becomes an infinitely effective manipulator of people's behaviour. All people do what the AI wants. Modern state propaganda aspires to similar goals and even achieves them, but in comparison with it the AI will be much stronger, as it can make each human an offer he cannot refuse: the promise of his most treasured desire, blackmail, or hidden hypnosis.
AI subordinates the state system to itself and uses its channels for control. The inhabitants of such a state may notice nothing. Or, on the contrary, the state uses the AI through channels already available to it.
AI subordinates a remotely controlled army to itself, for example combat robots or missiles (the scenario from the film "Terminator").
AI finds an essentially new way to influence human consciousness (memes, pheromones, electromagnetic fields) and spreads itself or extends its control through it.
Some sequential or parallel combination of the above ways.
Slow start and the struggle of different AIs among themselves
In the slow scenario, AI growth takes months and years, which means that it will quite possibly occur simultaneously in several laboratories worldwide. As a result, there will be competition between different AI projects. This is fraught with a struggle between several AIs with different goal systems for domination over the Earth. Such a struggle can be armed and can turn into a race against time. The advantage will go to those projects whose goal system is not constrained by any moral framework. In fact, we would find ourselves at the centre of a war between different kinds of artificial intellect. It is clear that such a scenario is mortally dangerous for mankind. In the case of a super-slow scenario, thousands of laboratories and powerful computers approach the creation of AI simultaneously, which probably gives no advantage to any single project, and a certain balance between them is established. However, here too, a struggle for computing resources and elimination in favour of the most successful and aggressive projects is possible.
A struggle is also possible between states, as ancient forms of organization using people as their separate elements, and the new AI using computers as its carrier. And though I am sure that the states will lose, the struggle could be short and bloody. As an exotic variant, one can imagine a case where some states are controlled by computer AI while others are governed in the ordinary way. A variant of such an arrangement is the automated government system known from science fiction (V. Argonov, "2032").
Smooth transition. Transformation of a total-control state into AI
Finally, there is a scenario in which the entire world system as a whole gradually turns into an artificial intellect. It can be connected with the creation of a worldwide Orwellian state of total control, which will be necessary to successfully oppose bioterrorism. This is a world system where every step of the citizens is monitored by video cameras and all possible tracking systems, and this information is downloaded into huge unified databases and then analyzed. On the whole, mankind is probably moving along this path, and technically everything is ready for it. The feature of this system is that it initially has a distributed character, and separate people, following their interests or instructions, are only gears in this huge machine. The state as an impersonal machine has been repeatedly described in the literature, including by Karl Marx and, earlier, Hobbes. Also interesting is Lazarchuk's theory about Golems and Leviathans: the autonomization of systems consisting of people into independent machines with their own purposes. However, only recently has the world social system become not simply a machine, but an artificial intellect capable of purposeful self-improvement.
The basic obstacle to the development of this system is the nation states with their national armies. The creation of a world government would facilitate the formation of such a unified AI. However, for now there is a sharp struggle between states about the conditions on which to unite the planet, and also a struggle against forces which are conventionally called anti-globalists, and other anti-system elements: Islamists, radical ecologists, separatists, and nationalists. A world war for the unification of the planet would be inevitable, and it is fraught with the use of a "Doomsday weapon" by those who have lost everything. But peaceful world integration through a system of treaties is also possible.
The danger, however, is that the global world machine will start to displace people from different spheres of life, at least economically, depriving them of their work and consuming resources which people could otherwise spend on themselves (for example, in 2006-2007 world food prices rose by 20 percent, in particular because of the transition to biofuel). In a sense, people will have nothing left to do but watch TV and drink beer. Bill Joy wrote about this danger in his well-known article "Why the future doesn't need us".
As production and management are automated, people will be ever less necessary for the life of the state. Human aggression will probably be neutralized by monitoring systems and genetic manipulation. Finally, people will be relegated to the role of pets. To keep people occupied, an ever brighter and more pleasant "matrix" will be created, which will gradually turn into a superdrug withdrawing people from life. People will plunge into continuous virtual reality because they will have nothing to do in ordinary reality (to some degree this role is now played by TV for the unemployed and pensioners). Natural life instincts will induce some people to try to destroy this whole system, which again is fraught with global catastrophes or the destruction of people.
It is important to note the following: whoever creates the first strong artificial intellect, it will bear the imprint of the goal and value system of that group of people, as this system will seem to them the only correct one. For some, the overall objective will be the good of all people; for others, the good of all living beings; for the third, the good of all devout Muslims only; for the fourth, the good of only those three programmers who created it. And ideas about the nature of this good will also vary considerably. In this sense, the moment of creation of the first strong AI is a fork with a very considerable number of branches.
"Revolt" of robots
There is still a dangerous scenario in which house, military and industrial robots
spread worldwide, and then all of them are amazed with a computer virus which incites
them on aggressive behaviour against human. All readers at least once in life time,
345

probably, faced a situation when the virus has damaged data on the computer. However
this scenario is possible only during the period of "a vulnerability window when there are
already the mechanisms, capable to operate in an external world, but still there is no
enough an advanced artificial intellect which could or protect them from viruses, or itself to
execute virus function, for having grasped them.
There is also a scenario where, in the future, a certain computer virus spreads on the Internet, infects nanofactories worldwide, and thus causes mass contamination. These nanofactories can produce nanorobots, poisons, viruses, or drugs.
Another variant is the revolt of an army of military robots. The armies of industrially developed states are aiming at full automation. When it is achieved, a huge army consisting of drones, wheeled robots, and service mechanisms could move, simply obeying the orders of the president. The strategic nuclear forces are already an almost robotic army. Accordingly, there is a chance that an incorrect order will arrive and such an army will start to attack all people methodically. Note that this scenario does not require the existence of a universal superintelligence, and, conversely, a universal superintelligence does not need an army of robots to seize the Earth.

Chapter 17. The risks of SETI


This chapter examines risks associated with the program of passive search for alien signals (SETI, the Search for Extra-Terrestrial Intelligence). Here we propose a scenario of possible vulnerability and discuss the reasons why the proportion of dangerous signals to harmless ones can be dangerously high. This article does not propose to ban SETI programs, and does not insist on the inevitability of a SETI-triggered disaster. Moreover, it discusses how SETI might even be a salvation for mankind.
The idea that passive SETI can be dangerous is not new. Fred Hoyle suggested in his story "A for Andromeda" a scheme of alien attack through SETI signals. According to the plot, astronomers receive an alien signal which contains a description of a computer and a computer program for it. This machine creates a description of a genetic code which leads to the creation of an intelligent creature, a girl dubbed Andromeda, who, working together with the computer, creates advanced technology for the military. The initial suspicion of alien intent is overcome by greed for the technology the aliens can provide. However, the main characters realize that the computer acts in a manner hostile to human civilization; they destroy the computer, and the girl dies.
This scenario is fiction, because most scientists do not believe in the possibility of a strong
AI, and, secondly, because we do not have the technology that enables synthesis of new
living organisms solely from its genetic code. Or at least, we have not until recently.
Current technology of sequencing and DNA synthesis, as well as progress in developing a
code of DNA modified with another set of the alphabet, indicate that in 10 years the task of
re-establishing a living being from computer codes sent from space in the form computer
codes might be feasible.
Hans Moravec, in the book "Mind Children" (1988), offers a similar type of vulnerability: downloading from space, via SETI, a computer program that possesses artificial intelligence, promises new opportunities to its owner and, after fooling its human host, self-replicates in millions of copies and destroys the host, finally using the resources of the captured planet to send its child copies to multiple planets, which constitute its future prey. Such a strategy would be like that of a virus or a digger wasp: horrible, but plausible. R. Carrigan's ideas point in the same direction; he wrote an article, "SETI hacker", and expressed fears that unfiltered signals from space are loaded onto millions of insecure computers of the SETI@home program. But he met tough criticism from programmers, who pointed out, first, that data fields and programs are kept in separate regions of a computer, and secondly, that the computer codes in which programs are written are so specific that it is impossible to guess their structure well enough to hack them blindly (without prior knowledge).
After a while Carrigan issued a second article, "Should potential SETI signals be decontaminated?" (http://home.fnal.gov/~carrigan/SETI/SETI%20Decon%20Australia%20poster%20paper.pdf), which I have translated into Russian. In it he pointed to the ease of transferring gigabytes of data over interstellar distances, and also indicated that an interstellar signal may contain some kind of bait that will encourage people to assemble a dangerous device according to the transmitted designs. Here Carrigan did not give up his belief in the possibility that an alien virus could directly infect earthly computers without human assistance in translation. (We may note with passing alarm that the prevalence of humans obsessed with death, as Fred Saberhagen pointed out in his idea of "goodlife", means that we cannot entirely discount the possibility of demented volunteers, human traitors, eager to assist such a fatal invasion.) As a possible confirmation of this idea, Carrigan has shown that it is possible to reverse engineer the language of a computer program with relative ease; that is, based on the text of the program it is possible to guess what it does, and then to recover the meaning of its operators.
In 2006 E. Yudkowsky wrote the article "Artificial Intelligence as a Positive and Negative Factor in Global Risk", in which he demonstrated that a rapidly self-improving universal artificial intelligence is quite possible, that its high intelligence would be extremely dangerous if it were programmed incorrectly, and, finally, that the probability of such an AI appearing and the risks associated with it are significantly undervalued. In addition, Yudkowsky introduced the notion of Seed AI, an embryo AI, that is, a minimal program capable of runaway self-improvement with an unchanged primary goal. The size of a Seed AI could be on the order of hundreds of kilobytes. (For example, a rough natural analogue of Seed AI is a human baby: the part of the genome responsible for the brain represents roughly 3% of the total genome, whose volume is about 500 megabytes, which gives about 15 megabytes, and allowing for the share of junk DNA the functional part is even smaller.)
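As a rough check of this size estimate, the arithmetic can be written in a few lines of Python; the brain share and the junk-DNA fraction used below are illustrative assumptions, not measured values.

    # Rough sketch of the Seed AI size analogy above (assumed figures, not data).
    GENOME_MB = 500        # approximate information content of the human genome
    BRAIN_SHARE = 0.03     # assumed share of the genome mainly relevant to the brain
    JUNK_SHARE = 0.9       # assumed share of non-functional ("junk") DNA

    upper_bound = GENOME_MB * BRAIN_SHARE            # ~15 MB
    functional = upper_bound * (1 - JUNK_SHARE)      # ~1.5 MB after discounting junk DNA
    print(upper_bound, functional)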
To begin, let us assume that somewhere in the Universe there exists an extraterrestrial civilization which intends to send a message that will enable it to obtain power over the Earth, and let us consider this scenario. Later in this chapter we will consider how realistic it is that another civilization would want to send such a message.
First, we note that in order to prove a vulnerability it is enough to find just one hole in security, whereas in order to prove safety you must remove every possible hole. The complexity of these two tasks differs by many orders of magnitude, as is well known to experts on computer security. This distinction is the reason why almost all computer systems have been broken (from Enigma to the iPod). I will now try to demonstrate one possible, and in my view even likely, vulnerability of the SETI program. However, I want to caution the reader against the thought that finding errors in my reasoning automatically proves the safety of the SETI program. Secondly, I would also like to draw the reader's attention to the fact that I am a man with an IQ of 120 who spent all of a month thinking about this vulnerability problem. We need not invoke an alien supercivilization with an IQ of 1,000,000 and millions of years of contemplation to improve significantly on this algorithm; we have no real idea what an IQ of 300, or even a mere IQ of 100 with much larger mental RAM (the ability to load a major architectural task into the mind and keep it there for weeks while processing it), could accomplish in finding a much simpler and more effective approach. Finally, I propose one possible algorithm, and then we will briefly discuss other options.
In our discussion we will draw on the Copernican principle, that is, the belief that we are ordinary observers in a normal situation. Therefore Earth's civilization is an ordinary civilization developing normally. (Readers of tabloid newspapers may object!)
Algorithm of SETI attack
1. The sender creates a kind of signal beacon in space which reveals that its message is clearly artificial. For example, this may be a star surrounded by a Dyson sphere with holes or mirrors that are alternately opened and closed. The entire star will then blink with a period of a few minutes; faster is not possible because of the varying distances between different openings. (Even if synchronized by atomic clocks to a rigid schedule, the speed-of-light limit constrains the reaction time of such a large coordinated system.) Nevertheless, this beacon can be seen at a distance of millions of light years. Other types of beacons are possible; the important point is that the beacon signal can be noticed from very far away.
2. Nearer to Earth there is a radio beacon with a much weaker but much more information-rich signal. The lighthouse draws attention to this radio source. The source produces a stream of binary information (i.e. a sequence of 0s and 1s). To the objection that this information would be corrupted by noise, I note that the most obvious means of reducing noise, understandable to the receiving side, is simple repetition of the signal in a loop.
3. The simplest way to convey meaningful information with a binary signal is to send images. First, eye structures appeared independently about seven times in the course of the Earth's biological evolution, which suggests that representing the three-dimensional world with the help of 2D images is probably universal and is almost certainly understandable to any creatures that can build a radio receiver.
4. Secondly, 2D images are not too difficult to encode in a binary signal. Let us use the same system that was used in the first television cameras, namely progressive line-by-line scanning at a fixed frame rate. At the end of each line of the image a bright marker is placed, repeated after every line, that is, after an equal number of bits. Finally, at the end of each frame another marker is placed, indicating the end of the frame and repeated after every frame. (The frames may or may not form a continuous film.) It might look like this:
01010111101010 11111111111111111
01111010111111 11111111111111111
11100111100000 11111111111111111
Here the end-of-line signal (the run of 1s) follows each line of, in this example, 14 data bits; an end-of-frame signal might appear, for example, every 625 lines.
5. Clearly, the sender civilization is extremely interested in our understanding its signals; on the other hand, people will have an extreme desire to decrypt them. Therefore there is little doubt that the pictures will be recognized.
6. Images and movies can convey a great deal of information; they can even be used to teach the senders' language and to show their world. It is obvious that one can argue about how understandable such films would be. Here we will focus on the fact that if one civilization sends radio signals and another receives them, they have at least some shared knowledge: namely, they know radio technology, which means they know transistors, capacitors and resistors. These radio components are distinctive enough to be easily recognized in photographs (for example, parts shown in cutaway view, in sequential assembly stages, or in an electrical schematic whose connections reveal the nature of the components involved).
7. By sending photographs depicting radio components on the right side and their symbols on the left, it is easy to convey a set of signs for drawing electrical circuits. (In roughly the same way the logical elements of computers could be conveyed.)
8. Then, using these symbols, the sender civilization transmits the blueprint of its simplest computer. From the hardware point of view, the simplest kind of computer is a Post machine. It has only six commands and a data tape. Its full electrical scheme would contain only a few tens of transistors or logic elements, so it is not difficult to transmit the blueprints of a Post machine.
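For concreteness, here is a minimal Python sketch of a Post-style machine in one common formulation with six commands (move right, move left, mark, erase, conditional jump, stop); the instruction mnemonics and the example program are my own illustration, not part of any hypothetical transmission.

    # Minimal emulator of a 6-command Post (Post-Turing) machine on an unbounded tape.
    from collections import defaultdict

    def run_post_machine(program, max_steps=10_000):
        tape = defaultdict(int)            # tape of 0/1 cells, blank by default
        head, pc, steps = 0, 0, 0
        while pc < len(program) and steps < max_steps:
            op = program[pc]
            steps += 1
            if op == "R":                  # move head right
                head += 1
            elif op == "L":                # move head left
                head -= 1
            elif op == "M":                # mark current cell
                tape[head] = 1
            elif op == "E":                # erase current cell
                tape[head] = 0
            elif op == "S":                # stop
                break
            elif op.startswith("J "):      # jump to instruction n if cell is marked
                if tape[head] == 1:
                    pc = int(op.split()[1])
                    continue
            pc += 1
        return dict(tape)

    # Example program: mark three consecutive cells and stop.
    print(run_post_machine(["M", "R", "M", "R", "M", "S"]))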
9. It is important to note that at the level of algorithms all computers are Turing-equivalent, which means that extraterrestrial computers are, at the basic level, compatible with any Earth computer. Turing universality is a mathematical fact of the same kind as the Pythagorean theorem. Even Babbage's mechanical machine, designed in the early 19th century, was Turing-complete in principle.
10. Then the sender civilization begins to transmit programs for that machine. Despite the fact that the computer is very simple, it can execute a program of any complexity, although it will run much more slowly than the same program on a more sophisticated computer. It is unlikely that people would need to build this computer physically: they can easily emulate it on any modern computer, so that it performs trillions of operations per second, and even the most complex program will run on it quite quickly. (A possible intermediate step: the primitive computer provides the description of a more complex and faster computer, and the program is then run on that.)
11. So why would people create this computer and run its programs? Probably, in addition to the actual computer schematics and programs, the message must contain some kind of "bait" that would induce people to build such an alien computer, to run its programs, and to provide it with some data about the external world outside the computer. There are two general kinds of bait: temptations and dangers.
a) For example, people might receive the following offer, let us call it the "humanitarian aid con". The senders of this "honest" SETI message warn that the transmitted program is an artificial intelligence, but lie about its goals. That is, they claim that it is a "gift" which will help us to solve all medical and energy problems. It is a Trojan horse of the most malevolent intent, but it is too useful not to use. Eventually it becomes indispensable, and then, exactly when society has become dependent on it, the foundation of society, and society itself, is overturned.
b) "The temptation of absolute power con": in this scenario the message offers a specific deal to its recipients, promising power over the other recipients. This begins a race to the bottom of runaway betrayals and power-seeking counter-moves, ending with a world dictatorship, or worse, a destroyed world dictatorship on an empty world.
c) "The unknown threat con": in this scenario the bait is a report that a certain threat hangs over humanity, for example from another, hostile civilization, and that to protect ourselves we should join the putative Galactic Alliance and build a certain installation. Or, for example, they suggest performing a certain class of physical experiments on an accelerator and forwarding the message to others in the Galaxy, like a chain letter. And please send the message on before you switch on the accelerator.
d) "The tireless researcher con": here the senders argue that sending messages is the cheapest way to explore the world. They ask us to create an AI that will study our world and send the results back. It does rather more than that, of course.
12. However, the main threat from alien messages with executable code is not the bait itself but the fact that the message can become known to a large number of independent groups of people. First, there will always be someone who is more susceptible to the bait. Secondly, suppose the world learns that an alien message is coming from the Andromeda galaxy and that the Americans have already received it and perhaps are trying to decipher it. Of course, all other countries will then rush to build radio telescopes and point them at the Andromeda galaxy, afraid of missing a strategic advantage. And they will find the message and see that it contains a proposal to grant omnipotence to those willing to collaborate. In doing so, they will not know whether the Americans have already taken advantage of it, even if the Americans swear that they have not run the malicious code and beg others not to do so either. Moreover, such oaths and appeals will be perceived as a sign that the Americans have already received an incredible extraterrestrial advantage and are trying to deprive "progressive mankind" of it. While most will understand the danger of launching alien code, someone will be willing to risk it. Moreover, there will be a game in the spirit of "winner takes all", just as in the case of the creation of AI, as Yudkowsky shows in detail. So it is not the bait that is dangerous but the plurality of recipients. If the alien message is posted on the Internet (and its size, sufficient to run a Seed AI, can be less than a gigabyte, including the description of the computer, the program and the bait), we have a classic example of "knowledge of mass destruction", in Bill Joy's phrase, meaning, for example, the genome recipes of dangerous biological viruses. If the code sent by the aliens becomes available to tens of thousands of people, then someone will run it even without any bait, out of simple curiosity. We cannot count on existing SETI protocols, because the discussion of METI (the sending of messages to extraterrestrials) has shown that the SETI community is not monolithic on important questions. Even the simple fact that something has been found could leak out and encourage searches by outsiders, and the coordinates of the point in the sky would be enough.
13. Since people do not yet have AI, we almost certainly greatly underestimate its power and overestimate our ability to control it. The common idea is that "it is enough to pull the power cord to stop an AI", or to place it in a black box, to avoid any associated risks. Yudkowsky shows that an AI can deceive us as an adult deceives a child. If the AI gets onto the Internet, it can quickly subdue it as a whole and also learn everything it needs about earthly life. Quickly means hours or days at most. Then the AI can create advanced nanotechnology, buy components and raw materials (on the Internet it can easily make money and order goods with delivery, as well as recruit people who would receive them, following the instructions of their well-paying but unseen employer, not knowing who, or rather what, they are serving). Yudkowsky describes one possible scenario of this stage in detail and estimates that an AI needs only weeks to crack any security and build its own physical infrastructure.
"Consider, for clarity, one possible scenario, in which Alien AI (AAI) can seize power on the
Earth. Assume that it promises immortality to anyone who creates a computer on the
blueprints sent to him and start the program with AI on that computer. When the program
starts, it says: "OK, buddy, I can make you immortal, but for this I need to know on what
basis your body works. Provide me please access to your database. And you connect the
device to the Internet, where it was gradually being developed and learns what it needs
and peculiarities of human biology. (Here it is possible for it escape to the Internet, but we
omit details since this is not the main point) Then the AAI says: "I know how you become
351

biologically immortal. It is necessary to replace every cell of your body with nanobiorobot.
And fortunately, in the biology of your body there is almost nothing special that would block
bio-immorality.. Many other organisms in the universe are also using DNA as a carrier of
information. So I know how to program the DNA so as to create genetically modified
bacteria that could perform the functions of any cell. I need access to the biological
laboratory, where I can perform a few experiments, and it will cost you a million of your
dollars." You rent a laboratory, hire several employees, and finally the AAI issues a table
with its' solution of custom designed DNA, which are ordered in the laboratory by
automated machine synthesis of DNA. http://en.wikipedia.org/wiki/DNA_sequencing Then
they implant the DNA into yeast, and after several unsuccessful experiments they create a
radio guided bacteria (shorthand: This is not truly a bacterium, since it appears all
organelles and nucleus; also 'radio' is shorthand for remote controlled; a far more likely
communication mechanism would be modulated sonic impulses) , which can synthesize a
new DNA-based code based on commands from outside. Now the AAI has achieved
independence from human 'filtering' of its' true commands, because the bacterium has in
effect its own remote controlled sequencers (self-reproducing to boot!). Now the AAI can
transform and synthesize substances ostensibly introduced into test tubes for a benign
test, and use them for a malevolent purpose., Obviously, at this moment Alien AI is ready to
launch an attack against humanity. He can transfer himself to the level of nano-computer
so that the source computer can be disconnected. After that AAI spraying some of
subordinate bacteria in the air, which also have AAI, and they gradually are spread across
the planet, imperceptibly penetrates into all living beings, and then start by the timer to
divide indefinitely, as gray goo, and destroy all living beings. Once they are destroyed,
Alien AI can begin to build their own infrastructure for the transmission of radio messages
into space. Obviously, this fictionalized scenario is not unique: for example, AAI may seize
power over nuclear weapons, and compel people to build radio transmitters under the
threat of attack. Because of possibly vast AAI experience and intelligence, he can choose
the most appropriate way in any existing circumstances. (Added by Freidlander: Imagine a
CIA or FSB like agency with equipment centuries into the future, introduced to a primitive
culture without concept of remote scanning, codes, the entire fieldcraft of spying. Humanity
might never know what hit it, because the AAI might be many centuries if not millennia
better armed than we (in the sense of usable military inventions and techniques ).
14. After that, this SETI-AI does not need people in order to realize any of its goals. This does not mean that it would necessarily seek to destroy them, but it may want to pre-empt the possibility that people will fight it - and they will.
15. Then this SETI-AI can do many things, but the most important thing it must do is to continue transmitting its communication-borne embryos to the rest of the Universe. To do so, it will probably turn the matter of the solar system into a transmitter like the one that sent it. In the process the Earth and its people would be a disposable source of materials and parts, possibly down to the molecular scale.
So, we have examined one possible scenario of attack, which has 15 stages. Each of these stages is logically plausible and could be criticized or defended separately. Other attack scenarios are possible. For example, we might think that a message is not addressed to us directly but is someone else's correspondence, and try to decipher it; this, too, could in fact be bait.
But it is not only the distribution of executable code that can be dangerous. For example, we could receive some sort of "useful" technology that in reality leads us to disaster (something in the spirit of the message "quickly compress 10 kg of plutonium and you will have a new source of energy", but with planetary rather than local consequences). Such a mailing could be carried out by a certain "civilization" in advance, in order to destroy competitors in space. It is obvious that those who receive such messages will primarily seek technologies for military use.
Analysis of possible goals
We now turn to the analysis of the purposes for which a supercivilization might carry out such an attack.
1. We must not confuse the concept of a supercivilization with the hope that such a civilization will be super-kind. Advanced does not necessarily mean merciful. Moreover, we should not expect anything good even from extraterrestrial kindness; this is well described in the Strugatsky brothers' novel "The Waves Extinguish the Wind". Whatever goal a supercivilization imposes on us, we will be its inferiors in capability and in civilizational robustness even if its intentions are good. A historical example: the activities of Christian missionaries destroying traditional religions. Moreover, purely hostile objectives are easier for us to understand. And if a SETI attack succeeds, it may be only a prelude to doing us more "favors" and "upgrades" until there is scarcely anything human left of us, even if we do survive.
2. We can divide all civilizations into two classes, naive and serious. Serious civilizations are aware of the SETI risks and have their own powerful AI, which can resist alien hacker attacks. Naive civilizations, like the present Earth, already possess the means of long-distance listening to space and computers, but do not yet possess AI and are not aware of the risks of AI via SETI. Probably every civilization has its "naive" stage, and it is in this phase that it is most vulnerable to a SETI attack. And perhaps this phase is very short, since the period from the spread of radio telescopes to the appearance of computers powerful enough to create AI may be only a few decades. Therefore a SETI attack must be aimed at precisely such civilizations. This is not a pleasant thought, because we are among the vulnerable.
3. If travel faster than light is not possible, the spread of a civilization through SETI attacks is the fastest way to conquer space. At large distances it provides a significant gain in time compared with any kind of ships. Therefore, if two civilizations compete for mastery of space, the one that resorts to a SETI attack will win.
4. The most important point is that it is enough to begin a SETI attack just once: it then spreads as a self-replicating wave throughout the Universe, striking more and more naive civilizations. For example, if we have a million harmless biological viruses and one dangerous one, then once they get into a body we will obtain trillions of copies of the dangerous virus and still only a million of the safe ones. In other words, it is enough for one out of billions of civilizations to start the process, and it then becomes unstoppable throughout the Universe. Since it spreads at nearly the speed of light, countermeasures will be almost impossible.
5. Further, the delivery of SETI messages will be a priority for the virus that has infected a civilization, and the civilization will spend most of its resources on it, just as a biological organism spends tens of percent of its resources on reproduction. Earth's civilization, by contrast, spends on SETI only a few tens of millions of dollars, about one millionth of its resources, and this proportion is unlikely to change much for more advanced civilizations. In other words, an infected civilization will produce a million times more SETI signals than a healthy one. Or, to put it another way, if the Galaxy contains a million healthy civilizations and one infected one, then we will have equal chances of encountering a signal from a healthy or from a contaminated one.
6. Moreover, there is no other reasonable way for such a code to spread through space except self-replication.
7. Such a process could even begin by accident: at first it was just a research project whose purpose was to send the results of its (innocent) studies back to the parent civilization without harming the host civilization; then, because of certain faults or mutations, the process became "cancerous".
8. There is nothing unusual in such behavior. In any medium there are viruses: there are viruses in biology, computer viruses in computer networks, memes in conversation. We do not ask why nature "wanted" to create a biological virus.
9. Travel through a SETI attack is much cheaper than by any other means. Namely, a civilization in Andromeda can simultaneously send a signal to 100 billion stars of our galaxy, whereas each spaceship would cost billions and, even if free, would take longer to reach all the stars of our Galaxy.
10. Now we list several possible goals of a SETI attack, simply to show the variety of motives:
- To study the universe. After the code is executed, research probes are created to gather data and send information back.
- To ensure that there are no competing civilizations. All of their embryos are destroyed. This is preemptive war on an indiscriminate basis.
- To preempt another competing supercivilization (yes, in this scenario there are two!) before it can take advantage of this resource.
- To prepare a solid base for the arrival of spacecraft. This makes sense if the supercivilization is very far away, so that the gap between the speed of light and the near-light speed of its ships (say, 0.5 c) amounts to a difference of millennia.
- To achieve immortality. Carrigan showed that the amount of a human being's personal memory is on the order of 2.5 gigabytes, so forwarding a few tens of exabytes (1 exabyte is about a billion gigabytes) could send the information of an entire civilization; a rough order-of-magnitude check is sketched after this list. (You may adjust the units according to how big you like your supercivilizations!)
- Finally, there may be illogical and incomprehensible (to us) purposes: a work of art, an act of self-expression, or a toy. Or perhaps an insane rivalry between two factions. Or something we simply cannot understand. (For example, extraterrestrials would not understand why the Americans stuck a flag into the Moon. Was it worthwhile to fly over 300,000 km to install a piece of painted steel?)
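The order-of-magnitude check mentioned in the immortality item above is straightforward; the population figure and the use of decimal exabytes are my own assumptions.

    # Rough check of the memory-transfer estimate (assumed figures, not data).
    GB_PER_PERSON = 2.5          # Carrigan's estimate of personal memory
    PEOPLE = 8e9                 # assumed population of a civilization like ours
    GB_PER_EXABYTE = 1e9         # decimal exabyte

    total_gb = GB_PER_PERSON * PEOPLE
    print(total_gb / GB_PER_EXABYTE)   # ~20 exabytes for the whole population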

11. Assuming that signals propagate over distances of billions of light years, the area susceptible to a SETI attack is a sphere with a radius of several billion light years. In other words, it is sufficient for there to be one "bad civilization" in a light cone several billion years deep, that is, one that includes billions of galaxies, for us to be in danger of a SETI attack. Of course, this is only true if the average density of civilizations is at least one per galaxy. This is an interesting possibility in relation to the Fermi Paradox.
12. As the depth to which we scan the sky grows linearly, the volume of space and the number of stars we observe grow as the cube of that depth. This means that our chances of stumbling upon a SETI signal grow nonlinearly, along a fast curve.
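A two-line illustration of this cube law, using a commonly quoted rough value for the local stellar density (an assumption, not a precise figure):

    # Number of stars within radius r grows as r**3 for roughly uniform density.
    STAR_DENSITY = 0.004      # stars per cubic light-year (rough local value)

    def stars_within(radius_ly: float) -> float:
        return STAR_DENSITY * (4.0 / 3.0) * 3.14159 * radius_ly ** 3

    for r in (100, 200, 400):           # doubling the depth gives 8 times more stars
        print(r, round(stars_within(r)))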
13. It is possible that we will stumble upon several different messages from the sky which refute one another, in the spirit of: "Do not listen to them, they are deceiving voices and wish you evil. But we, brother, we are good and wise."
14. Whatever positive and valuable message we receive, we can never be sure that it is not a subtle and deeply concealed threat. This means that in interstellar communication there will always be an element of distrust, and in every happy revelation a gnawing suspicion.
15. A defensive posture in interstellar communication is to listen only, sending nothing, so as not to reveal one's location. The laws of the United States prohibit the sending of messages to the stars. Anyone in the Universe who transmits is self-evidently not afraid to show its position; perhaps because, for the sender, transmitting is more important than personal safety, for example because it plans to flush out prey before attacking, or because it is forced to transmit by a hostile local AI.
16. It has been said about the atomic bomb that its main secret is that it can be made. If before the discovery of the chain reaction Rutherford believed that the release of nuclear energy was an issue for the distant future, after the discovery every physicist knew that it is enough to bring together two subcritical masses of fissionable material to release nuclear energy. In other words, if one day we find that signals really can be received from space, this will be an irreversible event, and something analogous to a deadly new arms race will begin.
Objections.
Discussion of this issue raises several typical objections, which we now consider.
Objection 1. The behavior discussed here is too anthropomorphic. In fact civilizations are very different from each other, so you cannot predict their behavior.
Answer: Here we have a powerful observation selection effect. While a variety of possible civilizations may exist, including such extreme cases as thinking oceans, we can only receive radio signals from civilizations that send them, which means that they have the corresponding radio equipment and knowledge of materials, electronics and computing. That is to say, we are threatened by civilizations of the same type as our own. Those civilizations which can neither receive nor send radio messages do not participate in this game.
An observation selection effect also concerns purposes. The goals of civilizations can be very different, but the civilizations that intensely send signals will be only those that want to tell something "to everyone". Finally, observation selection relates to the effectiveness and universality of a SETI virus: the more effective it is, the more civilizations will catch it and the more copies of its radio signals will be in the sky. So we have excellent chances of encountering the most powerful and effective virus.
Objection 2. Supercivilizations have no need to resort to subterfuge; they can conquer us directly.
Answer: This is true only if they are in close proximity to us. If faster-than-light travel is not possible, the impact of messages will be both faster and cheaper. Perhaps this difference becomes important at intergalactic distances. Therefore one need not fear a SETI attack from the nearest stars, within a radius of tens or hundreds of light years.
Objection 3. There are many reasons why a SETI attack may fail. What is the point of running an ineffective attack?
Answer: A SETI attack does not have to work every time. It must succeed in a sufficient number of cases to meet the objectives of the civilization that sends the message. For example, a con man does not expect to be able to fool every victim; he would be happy to steal from even one person in a hundred. It follows that a SETI attack is useless if the goal is to attack all civilizations in a certain galaxy, but if the goal is to obtain at least some outposts in another galaxy, then the SETI attack fits. (Of course, these outposts can then build fleets of spaceships to spread SETI attack bases to outlying stars within the target galaxy.)
The main assumption underlying the idea of a SETI attack is that extraterrestrial supercivilizations exist in the visible universe at all. I think that this is unlikely, for reasons related to the anthropic principle. Our universe is one of perhaps 10^500 possible universes with different physical properties, as suggested by one of the scenarios of string theory. My brain is 1 kg out of about 10^30 kg in the solar system. Similarly, I suppose, the Sun may be no more than about 1 out of 10^30 stars that could give rise to intelligent life, which would mean that we are likely alone in the visible universe.
Secondly, the fact that the Earth appeared so late (it could have formed a few billion years earlier) and was not prevented by alien preemption from developing argues for the rarity of intelligent life in the Universe. The putative rarity of our civilization is our best protection against a SETI attack. On the other hand, if parallel worlds or superluminal communication are discovered, the problem arises again.
Objection 4. Contact is impossible between a post-singularity supercivilization, which is here supposed to be the sender of the SETI signals, and a pre-singularity civilization such as ours, because a supercivilization is many orders of magnitude superior to us and its message would be utterly incomprehensible to us, just as contact between ants and humans is not possible. (The singularity is the moment of creation of an artificial intelligence capable of learning and of recursively redesigning itself toward ever greater intelligence, after which civilization makes a leap in its development; on Earth this may happen around 2030.)
Answer: In the proposed scenario we are not talking about contact but about a purposeful deception of us. Similarly, a human is quite capable of manipulating the behavior of ants and other social insects whose objectives are absolutely incomprehensible to them. For example, the LiveJournal user ivanov-petrov describes the following scene. As a student he studied the behavior of bees in the Botanical Garden of Moscow State University, but he had bad relations with the security guard of the garden, who regularly expelled him before his time was up. Ivanov-petrov took a green board and trained the bees, by conditioned reflex, to attack it. The next time the watchman, who always wore a green jersey, approached, all the bees attacked him and he fled, so ivanov-petrov could continue his research. Such manipulation is not contact, but that does not make it less effective.

"Objection 8. For civilizations located near us is much easier to attack us for guaranteed
resultsusing
starships
than
with
SETI-attack.
Answer. It may be that we significantly underestimate the complexity of an attack using
starships and, in general, the complexity of interstellar travel. To list only one factor, the
potential minefield characteristics of the as-yet unknown interstellar medium.
If such an attack were carried out now or in the recent past, Earth's civilization would have nothing to oppose it with; but in the future the situation will change: all the matter in the solar system will be full of robots and possibly completely processed by them. Moreover, the greater the speed of an approaching enemy fleet, the more visible it will be by its braking emissions and other signatures. Such fast starships would be very vulnerable, and in addition we could prepare for their arrival in advance. A slowly moving nano-starship would be far less visible, but if its purpose were to transform the entire substance of the solar system, it would simply have nowhere to land without raising an alarm in such a nanotech-settled and fully exploited future solar system. (Friedlander adds: presumably there would always be some thinly settled outer edge, an Oort-cloud sort of region, but by definition the rest of the system would be more densely settled and energy-rich, and any deeper penetration into solar space and its conquest would be the proverbial uphill battle, not in terms of gravity gradient but in terms of the resources available for war against a full Kardashev Type II civilization.)
The most serious objection is that an advanced civilization could, in a few million years, sow our entire galaxy with self-replicating post-singularity nanobots that could achieve any goal in each target star system, including the easy prevention of the development of incipient civilizations. (In the USA Frank Tipler advanced this line of reasoning.) However, this has evidently not happened in our case: no one has prevented the development of our civilization. It would be much easier and more reliable to send out robots with such assignments than to bombard the entire galaxy with SETI messages, and since we do not see such robots, there are probably no SETI attackers inside our galaxy. (It is possible that a probe on the outskirts of the solar system is waiting for manifestations of human space activity in order to attack, a variant of the "Berserker" hypothesis, but it would not attack through SETI.) Over many millions or even billions of years, microrobots could even arrive from distant galaxies tens of millions of light years away, although radiation damage may limit this unless they regularly rebuild themselves.
In this case a SETI attack would be meaningful only at large distances. However, at such distances, tens and hundreds of millions of light years, it would probably require exotic methods of modulating signals, such as controlling the luminosity of active galactic nuclei, or transmitting a narrow beam toward our galaxy (though the senders cannot know exactly where it will be millions of years later). But a civilization which can control the nucleus of its galaxy could also create a spaceship flying at near-light speed, even if its mass were that of a planet. Such considerations severely reduce the likelihood of a SETI attack, but do not lower it to zero, because we do not know all the possible objectives and circumstances.
(A comment by JF: For example, the lack of a SETI attack so far may itself be a cunning ploy. At the first receipt of the developing Solar civilization's radio signals, all interstellar "spam" would cease, and jamming stations of some unknown but amazing capability and type would be set up around the Solar System to block all incoming signals recognizable to its computers as being of intelligent origin, in order to make us feel lonely and give us time to discover and appreciate the Fermi Paradox, and even to drive those so philosophically inclined to despair that the Universe appears, by some standards, hostile. Then, when we are desperate, we suddenly discover, slowly at first, partially at first, and then with more and more wonderful signals, that space is filled with bright enticing signals (like spam). The blockade, cunning as it was and analogous to Earthly jamming stations, was merely a prelude to a slow turning-up of pre-planned, intriguing signal traffic. If, as Earth developed, we had intercepted cunning spam followed by the agonized "don't repeat our mistakes" final messages of tricked and dying civilizations, only a fool would heed the enticing voices of SETI spam. But as it is, a SETI attack may benefit from the slow unmasking of a cunning masquerade: at first a faint and distant light of infinite wonder, only at the end revealed as the headlight of an onrushing cosmic train.)
(A comment by AT: In fact I think that the senders of a SETI attack are at distances of more than 1000 light years and so do not yet know that we have appeared. But the so-called Fermi Paradox may indeed be part of the trick: the senders may have deliberately made their signals weak in order to make us think that they are not spam.)
The scale of such space strategies may be inconceivable to the human mind.

We should note in conclusion that some types of SETI attack do not even need a computer, but only a person who could understand a message that would then "set his mind on fire". At the moment we cannot imagine such a message, but we can give some analogies. Western religions are built around the text of the Bible. It can be assumed that if the text of the Bible appeared in some country which had not previously been familiar with it, a certain number of Bible believers might arise there. The same goes for subversive political literature, or certain "super-ideas", sticky memes or philosophical mind-benders. Or, as suggested by Hans Moravec, we might receive a message such as: "Now that you have received and decoded me, broadcast me in at least ten thousand directions with ten million watts of power. Or else." The message then breaks off, leaving us guessing what "or else" might mean. Even a few pages of text may contain a great deal of subversive information. Imagine that we could send a message to scientists of the 19th century: we could reveal to them the general principle of the atomic bomb, the theory of relativity and the transistor, and thus completely change the course of technological history; and if we added that all the ills of the 20th century came from Germany (which is only partly true), we would have influenced political history as well.
(Comment by JF: Such a use would depend on having received enough of Earth's transmissions to be able to model our behavior and politics. But imagine a message posing as coming from our own future, designed to ignite a catalytic war. Automated SIGINT (signals intelligence) stations are constructed to monitor our solar system, their computers cracking our language and culture (possibly with the aid of children's television programs with "see and say" matching of letters and sounds, of TV news showing world maps and naming countries, possibly even by intercepting wireless internet encyclopedia articles). Then a test or two may follow: posting a "what if" scenario inviting comment from bloggers about a future war, say between the two leading powers of the planet (for purposes of this discussion, say that around 2100 by the present calendar China is strongest and India is rising fast). Any defects and nitpicks in the comments on the blog are noted and corrected. Finally, an actual interstellar message is sent with the debugged scenario (not shifting against the stellar background, it is unquestionably interstellar in origin), purporting to come from a dying starship out of the presently stronger side's (China's) future, at a time when the presently weaker side's (India's) space fleet has smashed the future version of the Chinese state and essentially committed genocide. The starship has come back in time, but it is dying, and indeed the transmission ends, or simply repeats, possibly after some back-and-forth communication between the false computer model of the starship commander and the Chinese government. The reader can imagine the urgings of the future Chinese military council to preempt in order to forestall doom. If, as seems probable, such a strategy is too complicated to carry off in one stage, various "future travellers" may emerge from a war, signal for help in vain, and die far outside our ability to reach them (say some light days away, near the alleged location of an emergence gate but near an actual transmitter). Quite a drama may unfold as the computer learns to play us like a con man, ship after ship of various nationalities dribbling out stories but also getting answers to key questions, to aid in constructing an emerging scenario that will be frighteningly believable, enough to ignite a final war. Possibly lists of key people in China (or whatever side is stronger) may be drawn up by the computer, with the demand that they be executed as the parents of future war criminals, a sort of "International Criminal Court acting as Terminator" scenario. Naturally the Chinese state, at that time the most powerful in the world, would guard its rulers' lives against any threat. Yet more refugee spaceships of various nationalities can emerge, transmit and die, offering their own militaries terrifying new weapons technologies from unknown sciences that really work (more proof of their future origin). Or weapons from known sciences: for example, decoding online DNA sequences in the future internet and constructing formulae for DNA synthesizers to make tailored genetic weapons against particular populations, weapons that endure in the ground, a scorched earth against a particular population on a particular piece of land. These are copied and spread worldwide, as are totally accurate plans, in standard CNC codes, for easy-to-construct thermonuclear weapons in the 1950s style, using U-238 for the casing and only a few kilograms of fissionable material for ignition. By that time well over a million tons of depleted uranium will exist worldwide, and deuterium is freely available in the ocean and can be used directly in very large weapons without lithium deuteride. Knowing how to hack together a wasteful, more-than-critical-mass crude fission device is one thing (the South African device was of this kind). But knowing with absolute accuracy, down to machining drawings and CNC codes, how to make high-yield, super-efficient, very dirty thermonuclear weapons without any need for testing means that any small group with a few dozen million dollars and automated machine tools can clandestinely make a multi-megaton device, or many of them, and smash the largest cities; and any small power with a few dozen jets can cripple a continent for a decade. Already over a thousand tons of plutonium exist. The SETI spam can include CNC codes for making a one-shot chemical refiner of reactor plutonium that would be left hopelessly radioactive but would output chemically pure plutonium. (This material would be prone to predetonation because of its Pu-240 content, but then debugged plans for laser isotope separators may also be downloaded.) This is a variant of the "catalytic war" and "nuclear six-gun" (i.e. easily obtained weapons) scenarios of the late Herman Kahn. Even cheaper would be bio-attacks of the kind outlined above. The principal point is that fully debugged planet-killer weapons take great amounts of debugging, tens to hundreds of billions of dollars, and free access to a world scientific community. Today it is to every great power's advantage to keep accurate designs out of the hands of third parties, because they have to live on the same planet (and because the fewer weapons there are, the easier it is to stay a great power). Not so for the SETI spam authors. Without the hundreds of billions in R&D, the actual construction budget would be on the order of a million dollars per multi-megaton device (depending on the expense of obtaining the raw reactor plutonium). Extending today's scenarios into the future, the SETI spam authors might manipulate Georgia (with about a $10 billion GDP) to arm against Russia, Taiwan against China, and Venezuela against the USA. Although Russia, China and the USA could respectively promise annihilation against any attacker, with a military budget of around 4% of GDP and the downloaded plans the reverse, for the first time, could also be true. (Four hundred 100-megaton bombs could kill by fallout perhaps 95% of the unprotected population of a country the size of the USA or China, and 90% of a country the size of Russia, assuming the worst kind of cooperation from the winds; this is from an old chart by Ralph Lapp.) Anyone living near a super-armed microstate with border conflicts will, of course, wish to arm themselves, and these newly armed states will of course have borders of their own. Note that this drawn-out scenario gives plenty of time for a huge arms buildup on both (or many!) sides, and a Second Cold War that eventually turns very hot indeed; and unlike a human player of such a horrific catalytic-war con game, the senders are not concerned at all about worldwide fallout or enduring biocontamination.)
Conclusion.
The probability of an attack is the product of the probabilities of the following events. For these probabilities we can only give so-called expert assessments, that is, assign them certain a priori subjective values, as we do now.
1) The likelihood that extraterrestrial civilizations exist at a distance at which radio communication with them is possible. In general I agree with the view of Shklovsky and the supporters of the Rare Earth hypothesis that Earth's civilization is unique in the observable universe. This does not mean that extraterrestrial civilizations do not exist at all (because the universe, according to the theory of cosmological inflation, is almost infinite); they are simply beyond the event horizon visible from our point in space-time. In addition, this is not just a question of distance but of the distance at which a connection can be established that allows the transfer of gigabytes of information. (However, even at 1 bit per second, a gigabit can be transmitted in roughly 30 years, which may be sufficient for a SETI attack.) If superluminal communication or interaction with parallel universes becomes possible in the future, the chances of a SETI attack would increase dramatically. I estimate this probability at 10%.
2) The probability that a SETI attack is technically feasible, that is, that a recursively self-improving AI program of a size suitable for transmission is possible. I see this chance as high: 90%.
3) The likelihood that civilizations that could carry out such an attack exist in our space-time cone. This probability depends on the density of civilizations in the universe and on the percentage of civilizations that choose to initiate such an attack or, more importantly, fall victim to it and become repeaters. In addition, it is necessary to take into account not only the density of civilizations but also the density of the radio signals they create. All these factors are highly uncertain, so it is reasonable to set this probability at 50%.
4) The probability that we find such a signal during our rising civilization's period of vulnerability to it. The period of vulnerability lasts from now until the moment when we decide, and are technically ready to implement the decision, not to download any extraterrestrial computer programs under any circumstances. Such a decision could probably only be enforced by our own AI installed as world ruler (which is itself fraught with considerable risk). Such a world AI (WAI) might be created circa 2030. We cannot exclude, however, that our WAI will still not impose a ban on the reception of extraterrestrial messages and will itself fall victim to an attack by the alien artificial intelligence, which surpasses it by millions of years of machine evolution. Thus the window of vulnerability is most likely about 20 years, and its width depends on the intensity of searches in the coming years. It also depends, for example, on the consequences of the economic crisis of 2008-2010, on the risks of a third world war, and on how all this affects the emergence of the WAI. It further depends on the density of infected civilizations and on their signal strength: the greater these factors, the greater the chances of detecting them early. Because we are a normal civilization under normal conditions, according to the Copernican principle this probability should be fairly large; otherwise a SETI attack would be generally ineffective. (The SETI attack itself, if it exists, is also subject to a form of natural selection that tests its effectiveness, in the sense that it either works or does not.) This probability is very uncertain; we will also take it to be 50%.
5) Next is the probability that the SETI attack will be successful: that we swallow the bait, download the program and the description of the computer, run them, lose control over them and let them reach all their goals. I estimate this probability to be very high because of the multiplicity factor, that is, the fact that the message will be downloaded repeatedly and someone, sooner or later, will run it. In addition, through natural selection we are most likely to receive the most effective and deadly message, the one that most effectively deceives civilizations of our type. I consider it to be 90%.
6) Finally, it is necessary to assess the probability that a successful SETI attack will lead to complete human extinction. On the one hand, one can imagine a "benign" SETI attack which limits itself to creating a powerful radio emitter beyond the orbit of Pluto. However, for such a program there is always the risk that the society at its target star will eventually create a powerful artificial intelligence and an effective weapon that could destroy the emitter. In addition, building the most powerful possible transmitter would require all the substance of the solar system and all the solar energy. Consequently, the share of such "good" attacks will be lower because of natural selection, and also because some of them will sooner or later be destroyed by the civilizations they captured, so their signals will be weaker. So the probability that a SETI attack which has reached all its goals destroys all people I estimate at 80%.
As a result we have: 0.1 × 0.9 × 0.5 × 0.5 × 0.9 × 0.8 ≈ 0.016.
So, after rounding, the chance of human extinction through a SETI attack in the 21st century is around 1 percent, with a precision of no better than an order of magnitude.
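For transparency, the product above can be reproduced in a few lines of Python; the numbers are the subjective estimates given in this section, not measured data.

    # Combine the six subjective probability estimates from this section.
    estimates = {
        "ETI within communication range": 0.1,
        "SETI attack technically feasible": 0.9,
        "attacker exists in our space-time cone": 0.5,
        "signal found during our window of vulnerability": 0.5,
        "attack succeeds once received": 0.9,
        "successful attack causes extinction": 0.8,
    }

    p_total = 1.0
    for p in estimates.values():
        p_total *= p
    print(f"combined probability: {p_total:.4f} (~{p_total:.1%})")   # 0.0162, ~1.6%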
Our best protection in this context would be for civilizations to be very rare in the Universe. But this is not entirely reassuring, because the Fermi paradox here works on the principle of "neither alternative is good":
- If extraterrestrial civilizations exist and there are many of them, it is dangerous, because they may threaten us in one way or another.
- If extraterrestrial civilizations do not exist, it is also bad, because it gives weight to the hypothesis of the inevitable extinction of technological civilizations, or to our underestimating the frequency of cosmological catastrophes, or to a high density of space hazards, such as gamma-ray bursts and asteroids, that we underestimate because of the observation selection effect (that is, had we already been killed, we would not be here making these observations).
A reverse option is also theoretically possible: through SETI we might receive a warning about a certain threat which has destroyed most civilizations, such as: "Do not do any experiments with X-particles; they could lead to an explosion that would destroy the planet." But even in that case a doubt would remain that this might be a deception intended to deprive us of certain technologies. (It would count as evidence if similar reports came from other civilizations located in the opposite direction in space.) And such a communication might only enhance the temptation to experiment with X-particles.
So I do not call for the abandonment of SETI searches, and in any case such appeals would be useless. What may be useful is to postpone any technical realization of messages that we might receive via SETI until the time when we have our own artificial intelligence. Until that moment there may be only 10-30 years left, so we could wait. Secondly, it would be important to conceal the fact of receiving a dangerous SETI signal, its content and the location of its source.
This risk is related to a methodologically interesting point. Despite the fact that I have thought about and read on the topic of global risks every day over the last year, I found this dangerous vulnerability in SETI only now; in hindsight I was able to find four other authors who had come to similar conclusions. This leads to a significant observation: there may be global risks that have not yet been discovered, and even if the separate components of a risk are known to me, it may take a long time to join them into a coherent picture. Thus hundreds of dangerous vulnerabilities may surround us like an unknown minefield; only when the first explosion happens will we know, and that first explosion may be the last.
An interesting question is whether the Earth itself could become a source of a SETI attack in the future, when we have our own AI. Obviously it could. Already in the METI program there exists the idea of sending the code of human DNA (the "children's message" scenario, in which children ask that a piece of their DNA be taken and cloned on another planet, as depicted in the film "Calling all aliens").
The strongest counterargument is that the more remote the star, the less information can be sent with the same amount of energy. This means that a SETI attack becomes energetically inefficient beyond some distance, say one million light years. Von Neumann probes do not have this problem, but they are slower. Even at a speed of 0.9c, however, the volume covered by probes would be only about 0.7 of the volume covered by a radio SETI attack over the same time. So for very distant stars the probes win on energy grounds. And if the star is rather near, the probes also win, because any advanced civilization should start an explosive wave of colonization and colonize nearby stars using probes; for nearby stars sending probes is energy efficient.
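A minimal numerical sketch of this volume comparison (an illustration only; it assumes the radio signal and the probes start from the same point at the same moment and spread spherically, ignoring acceleration, attrition and transmission energy costs):

def probe_volume_fraction(v_over_c):
    """Fraction of the light-sphere volume covered by probes of speed v after the same travel time."""
    return v_over_c ** 3

for v in (0.5, 0.9, 0.99):
    print(f"v = {v} c -> probes cover {probe_volume_fraction(v):.3f} of the signal volume")
# v = 0.9 c gives about 0.73, matching the ~0.7 figure above.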
As a result, a SETI attack is useless for nearby stars and for very remote stars, and may be useful only at intermediate distances. But the boundaries of "nearby" and "remote" depend on energy constraints and on the lifetime of the alien civilization, both unknown to us, and the two regions could even overlap.
This looks like a valid argument: if senders have to wait for a victim civilization to appear anyway, then sending probes would win. But it is a general argument against the SETI-attack scenario as a whole, and its validity depends on many unknowns: the number of suitable planets, the distance over which good radio transmission is possible, the speed of von Neumann probes, and galactic game theory. That is why I think we should still proceed with caution with SETI.

Latest development in 2016


1. In 2015 the Russian billionaire Yuri Milner pledged 100 million USD to SETI research, which will make it much stronger. The project will examine one million stars and also search for other signs of alien activity, such as laser flashes.
2. The star KIC 8462852 exhibits strange dimming patterns, as if it were irregularly shadowed by something. Some suggest these could be elements of a Dyson sphere under construction by aliens. The star is about 1,600 light years from Earth.

Literature:
1. Hoyle F. A for Andromeda. http://en.wikipedia.org/wiki/A_for_Andromeda
2. Yudkowsky E. Artificial Intelligence as a Positive and Negative Factor in Global Risk. In: Bostrom N., Cirkovic M. (eds.) Global Catastrophic Risks. http://www.singinst.org/upload/artificial-intelligence-risk.pdf
3. Moravec H. Mind Children: The Future of Robot and Human Intelligence, 1988.
4. Carrigan R. A., Jr. The Ultimate Hacker: SETI signals may need to be decontaminated. http://home.fnal.gov/~carrigan/SETI/SETI%20Decon%20Australia%20poster%20paper.pdf
5. Carrigan's page: http://home.fnal.gov/~carrigan/SETI/SETI_Hacker.htm

Chapter 18. The risks connected with blurring the borders of human and transhumans

Powerful processes of genetic modification of people, prosthetic replacement of body parts, including elements of the brain, connection of the brain to computers, transfer of consciousness into computers and so on will create a new type of risk for people. To what extent can we consider as human a being to which some genes have been added and from which others have been removed? Are we ready to grant human status to any intelligent being that has arisen on Earth, even if it has nothing in common with humans, does not consider itself human, and is hostile toward people? These questions will cease to be purely theoretical in the middle and second half of the 21st century.
The essence of the problem is that the improvement of humans can proceed along different paths.

The risks connected with the problem of the "philosophical zombie"


A philosophical zombie (the term was introduced by D. Chalmers in 1996 in connection with discussions about artificial intelligence) is an object that looks and behaves like a human but has no internal experiences. For example, the image of a human on a TV screen is a philosophical zombie, and because of this we do not consider switching off the TV to be murder. The gradual upgrading of a human raises the question of whether the improved human will turn into a philosophical zombie at some stage.
A simple example of a catastrophe connected with the philosophical zombie is the following. Suppose a certain method of achieving immortality is offered to people, and they agree to it. However, the method consists of recording a person on video for ten days and then replaying fragments of this recording in random order. Here the trick is obvious, and in reality people would refuse, since they understand this is not immortality. But consider a more complex example: part of a person's brain is damaged by a stroke and is replaced by a computer implant that approximately performs its functions. How can we know whether the person has turned into a philosophical zombie as a result? The answer is clear: there will always be those who doubt it and look for signs of inauthenticity in the modified human.
What distinguishes a living human from a philosophical zombie, namely the qualitative character of experiences, is called qualia in philosophy, for example, the subjective experience of the color green. The question of the reality of qualia and their ontological status is a subject of sharp philosophical debate. My opinion is that qualia are real, their ontological status is important, and until we understand their true nature we should not experiment with altering human nature.
It is possible to predict with confidence that once improved people appear, the world will split in two: those who consider only ordinary people to be real humans, and those who improve themselves. The scale of such a conflict will be truly civilizational. Of course, everyone decides for themselves, but how will parents feel when their child destroys his physical body and uploads his mind into a computer?
One more problem, whose threats are not yet clear, is that the human mind cannot generate goals from nothing without making a logical mistake. An ordinary human is supplied with goals from birth, and the absence of goals is a symptom of depression rather than a logical paradox. However, an absolute mind that has comprehended the roots of all goals may realize their senselessness.

Chapter 19. The causes of catastrophes unknown to us now


It is possible to formulate a kind of "Moore's law of Doom" concerning global catastrophes. The number of natural catastrophes known to us that could threaten mankind keeps growing. (We only recently learned about the possibility of superflares on the Sun.) And the number of artificial global catastrophe scenarios, that is, ways in which mankind could destroy itself, also grows almost exponentially. In the middle of the 20th century the idea of man-made global catastrophe was practically absent, and now we can easily list dozens of scenarios.
This trend allows us to estimate the volume of yet-unknown future global catastrophes. We can say that over the next 50 years not only will new technologies appear, but there will also be essentially new ideas about new threats to the planet.
The sources of knowledge about new global risks are:
- more powerful energy sources,
- more precise knowledge of the world,
- new ways to manipulate matter,
- new physical laws,
- new ideas.

The majority of recent catastrophes were unexpected: the Indian Ocean tsunami, Hurricane Katrina, 9/11, the Japanese earthquake, colony collapse disorder among bees, and so on. Not in the sense that nobody ever predicted anything similar; it is usually possible to find an instance where a visionary described something before it happened. But the majority of the population and heads of state did not know about the possibility of such scenarios at all, and countermeasures taken in advance were few or non-existent.
So we should expect that many more global risk scenarios will be identified in the future, and that many future catastrophes will be unpredicted.

Phil Torres' article about unknown unknowns (UU)


An interesting article on this topic was written by Phil Torres: "We May Be Systematically Underestimating the Probability of Annihilation", http://ieet.org/index.php/IEET/more/torres20150526
He suggests three types of UU:
"I will refer to unknown unknowns somewhat playfully as monsters. They constitute an umbrella category, of which (as noted) unintended consequences are just one type. The monster category also includes (a) phenomena from nature that we are currently ignorant of, and which could potentially bring about a catastrophe."
Other monsters are (b) currently unimagined risks posed by future, not-yet-conceived-of technologies, and (c) combined scenarios: while most of the risk scenarios discussed in the literature are presented as if each constitutes a discrete possible future, virtually all of them can be combined in various ways to produce complex scenarios.
And he suggests three important distinctions about the knowledge status of the unknown:
1. Some UU are individual: one person or group does not know about X, but humanity as a whole, another group, or even another single person does.
2. Other UU are potentially knowable: we are capable of knowing them in principle, but we simply do not know them yet.
3. The last kind is incomprehensible at the human level: knowledge that is too complex or extraordinary to be grasped by the human mind, or that cannot be converted into knowledge as we understand it at all. (Some theorems may have proofs larger than the size of the universe, and human brains are not able to experience most possible qualia.) It is an open question whether a future AI will be able to grasp all these UU, or whether there are things that cannot be converted into knowledge at all. Some ideas are too complex at a given stage of human history, but they become explainable with the help of other ideas (e.g. quantum theory would have been unexplainable in ancient Rome).


List of possible types of Unknown unknowns


UU typically present themselves as "black swans", that is, as unexpected events. While we cannot predict them individually, we can treat them as random events and look at how often they have happened before to estimate their future frequency. There were many UU in science in the 19th and 20th centuries, including the discovery of radioactive decay and quantum theory. There were also many political black swans, including the Russian communist revolution and 9/11.

Unknown unknowns are things about which we do not know that we do not know. Donald Rumsfeld popularized the term (http://en.wikipedia.org/wiki/There_are_known_knowns). Anders Sandberg includes unknown unknowns in his list of the five top x-risks (https://theconversation.com/the-five-biggest-threats-to-human-existence-27053).
By definition unknown unknowns are not known and cannot be predicted, but we can examine several possible kinds of UU and try to convert them into simple unknowns. This may be a futile undertaking, or even a dangerous one, if we pretend to have knowledge about things we do not know and thus think we have more control over them than we actually do.
But UU are not gods, and they mostly exist because we are not trying to understand them. Some points can be stated fairly clearly, for example that a given UU is a very rare event and its probability is small: we do not need to know their nature in order to make some claims about their probability. But it is also clear that the future itself is a UU: human beings have never been able to anticipate the future of mankind and have always been surprised by it.
Read also about the same phenomenon on the personal level: https://en.wikipedia.org/wiki/End-of-history_illusion
This list of possible UU is not mutually exclusive:
1. "Out of the blue" events, or black swans: things that appear absolutely unpredictably, like a new type of large asteroid. There is nothing epistemically transcendental about them; we simply did not know.
2. Theories that were thought to be wrong but turn out to be true: fringe science theories. Some people already know them, but for most they are UU.
3. Things that could have been predicted but were not: simple failures of imagination.
4. A wrong map of the world: for example, God exists but you are an atheist.
5. Complexity of the world: some very complex interaction between mundane things, like toothpaste and Arctic ice (a fictional example), or sewage water and corals (a real example, via bacteria).
6. Things that are beyond our ability to understand: the real UU, just as a black hole is for a dog.
7. New physical laws that allow the creation of new weapons.
8. Large fields of knowledge rejected by modern science, which could be used as a toy playground for mapping UU: religion and the whole field of so-called parapsychology.
9. What a superhuman AI will do.
10. What exactly will happen 100 years from now.
11. The real answer to the Fermi paradox; here, though, we do know that we do not know and cannot know the answer.
UU can have harbingers: any unexplained event that contradicts the existing model of the world. For example, the radioactivity of uranium salts was unexplained by 19th-century physics, yet it led to the creation of nuclear weapons. We should look very carefully at any inconsistencies in our model of the world, as they could help us update our knowledge.

Unexplained aerial phenomena as global risks


I also wrote a long and rather controversial text, "Unknown unknowns as existential risk", https://www.scribd.com/doc/18221425/Unknown-unknowns-as-existential-risk-was-UFO-as-Global-Risk
I use the popular topic of UFOs as a case study of the unknown and examine the global catastrophic risks connected with different hypotheses about such observations.
The hypothesis space consists of three levels, and I provide an estimate of the risks on each level.
1. Mundane explanations: hoaxes, hallucinations, misinterpretations. Risks: the risk of WW3 because of wrong identification of targets, or a rise in our estimate of the human tendency to hallucinate in important situations. President Reagan was a UFO believer, saw something twice, and one explanation of his SDI program is that it was in fact a defense against aliens.
2. Explanations that only slightly update our model of the world: new secret military technologies, ball lightning. All of these raise risks a little, because new types of weaponry become possible.
3. Explanations that completely change our model of the world. These are the least probable explanations but have the biggest influence on risks. Aliens are a very improbable explanation; others include a glitch in the Matrix, psychic phenomena, alien nanobots, and alien AI.
Because of the development of new brain-scanning and observation technologies, the so-called UFO problem will probably be solved in the 21st century. This may result in a confrontation with the real source of the problem, which will bring new opportunities and risks.


Part 3. Different factors influencing the global risk landscape

Chapter 20. Ways of detecting one-factorial scenarios of global catastrophe


Having analyzed different scenarios of global catastrophes, we can now specify general signs
of such scenarios, which will help us recognize future threats as they emerge.

The general signs of any dangerous agent


By definition, in a one-factorial scenario there is a single factor which causes damage and mayhem. This factor will consistently have several features: it originates at a certain point, extends across the surface of the Earth, and affects all humans, from Greenland to Easter Island. We can break down these features, analyze variants of each, and look at probabilities. This model gives us a sort of map for checking the safety of new technologies or various natural phenomena. Specifically, we can examine the following properties (a rough illustrative sketch of such a checklist, in code, follows the list below):
1. Can the new technology be used to hurt human beings?
2. Are there ways in which it can get out of control and cause damage unintentionally?
3. Is it powerful enough to spread across the entire planet, including the far reaches of
Greenland, Canada, and Alaska?
4. Could it spread so quickly that there is no time to resist it?
5. How can it synergize with other technologies, increasing the risk?
6. How easy would it be to build protection against the dangers of this technology?
7. How reliable can our predictions of the risk of this technology be?
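Below is a minimal illustrative sketch of how such a screening checklist might be encoded in code. The wording of the questions is condensed from the list above, and the simple "count the worrying answers" scoring rule is an assumption made only for illustration, not a calibrated risk model.

# Condensed version of the seven screening questions above.
QUESTIONS = [
    "Can it be used to hurt human beings?",
    "Can it get out of control and cause unintended damage?",
    "Can it spread across the entire planet?",
    "Can it spread too fast to resist?",
    "Can it synergize with other technologies?",
    "Would protection against it be hard to build?",
    "Are our risk predictions for it unreliable?",
]

def screen_technology(answers):
    """answers: one boolean per question, True meaning 'yes, this is a concern'."""
    score = sum(answers)
    verdict = "needs detailed analysis" if score >= 3 else "lower priority"
    return score, verdict

# Hypothetical example: a technology that is harmful, global and fast-spreading.
print(screen_technology([True, False, True, True, False, False, False]))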
Way of appearance
The danger factor of a new technology could arise in one of the following ways:
Random natural appearance, for example the fall of an asteroid or the eruption of a supervolcano.
Creation by humans. In this case the risk is likely to originate in a certain research laboratory, and its creation may be either accidental or deliberate.
Exit from the origin point and distribution around the world
It is obvious that the release happens either on someone's command or by accident. It should be said at once that a combination of these scenarios is possible: a person gives a command whose full meaning he does not understand, or the command is carried out incorrectly. Or someone commits an act of terrorism that destroys a laboratory containing a supervirus. The starting point where the dangerous product resides is either the laboratory where it was created (in which case we are more likely dealing with an accidental incident) or a launch site, if the technology has been turned into a product that became a weapon. The point may also lie somewhere on the way from laboratory to launch site: at a test range, in transit, or in manufacturing. It is important to note the essential difference between the motives of those who create a Doomsday weapon and of those who later decide to use it. For example, a nuclear bomb may be created for protection against a foreign aggressor, but terrorists can seize it and demand the secession of certain territories. Such a two-phase scenario may be more probable than a one-phase one. The ways of exit from the starting point are:
1. Leak. A leak begins quietly and imperceptibly, without anyone's intent. This concerns situations like the leak of a dangerous virus, which cannot be noticed until there are infected people outside. A leak of a dangerous chemical substance or of nuclear materials would be noticeable at once and would most likely be accompanied by an explosion.
2. Break-out. This is the forceful escape of something that was locked up but strove to get out. It can concern only AI or genetically modified living beings with rudiments of intelligence.
3. Explosion. The catastrophic scenario occurs at the starting point itself, but its consequences spread all over the Earth. Most likely this concerns dangerous physical experiments.
4. Launch. Someone makes the decision to distribute the dangerous agent or to use the Doomsday weapon.
Obviously, some combinations of these basic scenarios are possible, for example an explosion at a laboratory leading to the leak of a dangerous virus.
Distribution is more important than destruction
When analyzing any phenomenon or invention as a possible factor of global risk, we should pay more attention to whether this factor can affect all people within a limited time than to whether it can kill people. For a factor to become a global risk, two conditions are necessary:
- the factor kills every person it affects;
- it acts on all people on Earth within a limited time (a time shorter than the time people need for self-reproduction).
The first condition is rather easy to meet, since there is an infinite number of ways of causing death, and all of them act on someone at some time; the second condition is much rarer. Therefore, as soon as we discover even a harmless factor capable of acting on all people without exception, it should worry us more than the discovery of some extremely dangerous factor that acts only on a few people, because any universal factor can become the carrier of some dangerous influence. For example, as soon as we realize that the Sun shines on every person on Earth, we can ask whether something could happen to the Sun that would affect everyone. The same applies to the Earth's atmosphere, its crust, and especially to the space surrounding the whole Earth, and also to global information networks.
Methods of distribution
The ability to spread all around the world is what converts a weapon into a superweapon. This universality means not only coverage of the entire surface of the globe, but also the ability to penetrate any shelters and protective borders, and a speed of the process that makes it impossible to resist by means of some new countermeasure. (A new ice age, for example, would most likely be slow enough for us to adapt to it.) The ways and factors influencing the agent's ability to spread are:
1) Wind in the atmosphere; fast movement in the upper atmosphere deserves separate mention (where wind speeds can reach hundreds of kilometers per hour, so the time of worldwide distribution is only several days), as well as the propensity of the substance to fall out in irreversible deposits, which reduces its quantity.
2) Self-propelled agents: bacteria, self-guided nanorobots, missiles.
3) Spread from human to human: viruses.
4) Special sprayers. For example, one can imagine the following catastrophic scenario: a satellite in low polar orbit continuously drops capsules with a radioactive substance or another dangerous reagent; within several days it can pass over every point of the globe.
5) Explosion, which itself creates enormous movement; the shock wave helps push the agent into every crack.
6) Network distribution, the way an AI could spread over the Internet.
7) Mixed ways. For example, at the initial stage a bomb explosion sprays radioactive substances which are then carried by the wind; or a certain mold is carried by the wind and then breeds where it lands. Clearly, mixed ways of distribution are much more dangerous.
8) Agents possessing elements of intelligence that allow them to bypass obstacles (computer viruses, AI, microrobots, aggressive animals).
9) Suddenness and stealth of distribution, which help the agent to penetrate everywhere.
10) High transportability, stickiness and fineness of particles (as with lunar dust).
11) The ability to self-replicate, either in nature, on humans, or on intermediate carriers, or to radiate like a radioactive substance.
12) Multiplicity of factors: many diverse agents at once, as in a multi-pandemic.
13) Concentration as a distribution factor. The higher the concentration, the greater the agent's ability to penetrate every crack. In other words, if the concentration in the atmosphere is just at one lethal level, there will always be places where, because of fluctuations, the level is much lower and people there will survive even without bunkers. But if the concentration is very high, only completely sealed bunkers equipped in advance will help. High concentration also increases the speed of distribution.
14) Duration of action of the agent. A short-acting agent (a gamma-ray burst) can scorch a considerable part of the biosphere, but there will always be refuges it did not reach. Long-lasting contamination, for example by cobalt-60, makes survival in small refuges impossible.
15) Ease of filtration and deactivation: the easier it is to filter the air and decontaminate people returning from the surface, the safer the agent. Biological agents can easily be sterilized in ventilation systems, but trips to the surface would have to be excluded, since a human cannot be sterilized.
Way of causing death
The basic element of a global catastrophe, which we call the "agent", may not kill people at all, but only separate them and deprive them of the ability to reproduce, as, for example, a superdrug or a virus sterilizing all people would, or confine them all in bunkers where they are doomed to degradation.
The agent can be one-factorial in its way of affecting humans; for example, it can be a certain contaminant or radiation. There is also a difference between instant death and prolonged dying.
The agent can have a multifactorial damaging effect, like a nuclear bomb. However, there should be a primary factor with universal action across the whole world, or a sufficient density of different factors.
The agent may also act not directly, but through uniform destruction of the entire habitat (an asteroid impact, destruction of the biosphere).
Extinction can also take the form of slow displacement into second-rate ecological niches (variants: a "zoo", or total unemployment in the spirit of Bill Joy's article).
The destroying agent can cause the appearance of new agents, each of which acts in its own way. For example, the spread of bio-laboratories for programming viruses, bio-synthesizers (a virus plus an idea-meme that makes some people want to destroy the whole world), could become such a superagent, creating many different agents in different parts of the Earth. In a sense, scientific and technological progress itself is such a superagent.
Finally, the agent can be intelligent enough to use a different method in each concrete case: a hostile AI, an eschatological sect.
Typical kinds of destroying influence
Whatever causes "doomsday", it will most likely affect people and their bunkers in one of the several ways listed below, which basically coincide with the usual damaging factors of a nuclear explosion. Any process capable of creating at least one of these factors simultaneously over the entire territory of the Earth should be classed as a Doomsday weapon:
Shock wave: capable of causing death directly and of destroying bunkers and all other objects created by humans.
Heat: there is little protection against prolonged heat, since any bunker will warm up sooner or later. It is not possible to retreat deep into the Earth either, as the temperature rises quickly in mines, on the order of 30 degrees per kilometer of depth.
Cold: easier to resist than heat.
High pressure.
Airborne (volatile) substances.
Radiation and rays.
Movement of the Earth's surface.
Loss of a vital resource: oxygen, food, water.
Destruction by a self-replicating agent (in some sense fire, too, has the ability to self-reproduce).
Supersonic shock wave: with a strong enough impact it could perhaps engulf a considerable part of the Earth's crust (though viscosity would absorb it).
The difference between a very big catastrophe and a definitive global catastrophe can be that in the first case at least fractions of a percent of people and territories escape. Therefore, an important sign of a true global catastrophe is that it covers the entire territory of the Earth without exception. This happens because of:
a very high level of redundancy of the destroying influence;
a destroying agent that possesses a kind of "superfluidity" by its nature, for example fine dust, a surface-active substance, or insects inclined to creep into any crack;
the "intelligence" of the force directing the agent.


Time structure of the event
Regardless of the previous factors, the following sequence of events in time can be outlined for a one-factorial global catastrophe:
1. Build-up phase. This includes invention, creation, preparation for use, and the appearance of a plan of application. If we are dealing with a natural phenomenon, it is the accumulation of energy in the chamber of a supervolcano or the approach of an asteroid. Here also belongs the accumulation of negligence in following instructions and of errors in drawing up instructions.
2. The trigger event. This is the single event in space-time that defines the beginning of the whole process, after which it is irreversible and develops at its own pace. It can be the decision to launch a nuclear attack, a crack in the cover of a volcanic chamber, and so on. The trigger event starts a chain of events following one after another with considerable probability on a certain timetable. If the trigger event had not occurred, the whole process might have been postponed indefinitely. The trigger event can also be outwardly harmless, recognized by no one for what it is, like the shot in Sarajevo in 1914.
3. At this stage the chain of events leads to the release of the dangerous agent from the point of its location. We discussed the four variants of the exit above: leak, break-out, explosion, launch.
4. The next phase is the distribution of the agent over the entire surface of the Earth (and also into near space, if independent space settlements already exist). This distribution can be covert or accompanied by a process of destruction. A covert process can be more dangerous, because it leaves no areas with time to prepare.
5. The phase of the destroying process. In this phase the process covering the entire surface of the Earth unfolds: an epidemic or a shock wave.
6. The point of irreversibility. The distribution process has some degree of uncertainty. If the process is not instantaneous, people will struggle against it. The moment when people lose this struggle and extinction becomes inevitable is the point of irreversibility, though it may not be recognized as such. The point of irreversibility is the moment when the destruction factors exceed the technological capabilities of the civilization, including its potential for improving these technologies. It depends both on the concentration of the factors of destruction and on the level of the civilization. If, as a result of a large catastrophe, the level of civilization has fallen below a certain point while the level of destruction factors has risen above it, further extinction is irreversible, with a certain probability, of course.
7. The death of the last human. After the point of irreversibility follows the extinction of the remaining survivors. This process can be stretched out in time, even over many years, thanks to bunkers. It can even take the form of the very long existence of a surviving tribe on some island. (But such a tribe might have a chance to restore civilization.)
8. The processes "after". After the death of the last human, processes on the Earth will not come to an end. Perhaps new species will start to evolve, or the Earth will be populated by robots, nanorobots, and AI. There is also hope that a new intelligent species would revive humans from preserved DNA.
Pre-emergency situations
There are also different types of social situations in which the accidental or deliberate use of means of general destruction becomes more probable:
1) A war for the unification of the planet.
2) A struggle of all against all for resources under conditions of their exhaustion.
3) Growing structural degradation, like the disintegration of the USSR.
4) Technical failure, leak.
5) Sabotage aimed at the destruction of all people.
6) Accidental war.
7) Blackmail with a Doomsday Machine.
8) An unsuccessful experiment.
9) A mutiny aimed at establishing power over the Earth.
Intentional and accidental global catastrophe
All global catastrophes can be distinguished by whether they are organized by some intelligent force that aspires to arrange a global catastrophe, or whether they are an accidental process without any purpose. The first variant includes global catastrophes:
- arranged by people;
- connected with AI;
- resulting from collision with other non-human intelligent forces.
The second includes failures, leaks, natural catastrophes, and system crises.
The two kinds of scenario can also merge: the first phase of the catastrophe is organized by people with definite purposes, but then the process gets out of control. For example, terrorists could deliberately provoke a nuclear war without grasping its scale, or some Buddhist sect could deliberately infect all people with a happiness virus without considering that such people would afterwards be incapacitated. (The Dalai Lama recently spoke in this spirit, saying it would be good to remove people's negative emotions by means of genetic manipulation.)
On the other hand, the victory of an intelligent force over people means that some intelligent force remains in nature (unless it commits suicide afterwards), and hence the irreversible disappearance of intelligence from the Earth does not occur; after a long time this intelligence, surpassing the human one, might bring people back to life. However, there are intelligent forces which are essentially different from human consciousness, for example, evolution. Evolution is much "cleverer" than humans (whom it has generated), but it is vastly slower. (Not always, though: natural selection of microorganisms resistant to antibiotics proceeds at a speed comparable to the speed of development of new antibiotics.) If one of the variants of future AI uses evolutionary principles but works much faster, it could achieve "victory" over people as a more effective solver of any problems, while not being an intelligent person in our understanding. Development of such AI is being conducted, not without success, in the field known as genetic algorithms.
The Doomsday Machine
Let us collect in a separate category all variants of Doomsday Machines that the most ill-intentioned group of people could create. The term probably goes back to S. Kubrick's film "Dr. Strangelove". Its plot, in brief, is this: the "Russians" create a Doomsday Machine which blows up a set of cobalt bombs, sufficient for full contamination of the whole world, if the USSR is attacked. During an internal conflict in the USA a rebellious mad general strikes a blow against the USSR, not knowing about the Doomsday Machine. As a result the machine is triggered. The Russian ambassador says: "And it is impossible to disconnect this machine, otherwise it would make no sense." Dr. Strangelove remarks: "But what sense was there in keeping this machine secret?" The Russian ambassador answers: "We were going to announce it next Monday." That is, the machine which was supposed to sharply lower the risk of any war on Earth actually leads to its beginning. It is interesting that J. Leslie writes in his book "The End of the World: The Science and Ethics of Human Extinction" that it would actually not be bad to have such a machine, since, if correctly applied, it could lower the risk of nuclear war, roughly as the doctrine of mutually assured destruction does now. Although the basic idea of the machine is a form of blackmail which implies that the Doomsday Machine will never be used, the very fact of its creation creates a probability of its use.
Besides, there are historical examples of senseless destruction of people: the Nazi bombardment of London with V-2 rockets, the setting fire to oil wells in Kuwait. A psychological analogue is blowing oneself up with a grenade when facing capture.
Not every variant of global catastrophe is suitable as a Doomsday Machine. It must be a process which, by the decision of a certain group of people, can be started at a strictly defined moment and which leads to global catastrophe with considerable probability, close to 100%, at least from the point of view of the device's developers. The Doomsday Machine must also be invulnerable to attempts to prevent its use and to unauthorized use, and there must be a way to demonstrate the reality of its potential use, which is necessary for blackmail. (Today the role of a Doomsday Machine is played to some extent by the possession of any nuclear weapon, although one nuclear bomb will not destroy the whole world; for example, the nuclear bomb in the hands of North Korea is well hidden, but its presence is demonstrated.) Here is a possibly incomplete list of possible Doomsday Machines:
of the Doomsday:
Explosion of a hydrogen bomb

In a supervolcano

In a coal layer

In a nuclear reactor

In a layer of gas hydrates at ocean, counting upon de-gazation chain reaction.

Creation of a hydrogen superbomb of stationary type.


Explosion of cobalt bombs, start of a reactor - devil's tube, generating significant release of
radioactive substances without a blast .
Deflection of an asteroid from the orbit.
Accumulation of weight of an antimatter.
Profusion of a crust of the Earth by means of a liquid nuclear reactor as a drop.
Dispersion of Anthrax in atmosphere, liberation of a considerable quantity of different
viruses.
Adding dioxin in the oceans.
Liberating of genetically modified manufacturers of toxins and viruses (dioxin mold, the
plague louse).
Distribution of hundreds billions the microrobots attacking all live.
Destruction of an ozone layer by means of a certain catalyst.
Combination of all these factors.

Chapter 21. Multifactorial scenarios


Above we have compiled as complete a list as possible of one-factorial scenarios of global catastrophe. Other variants of such a list exist, for example in N. Bostrom's article and in J. Leslie's book, with insignificant differences. (But I do think that our list is the most complete one available.) Now we should ask whether there are scenarios in which mankind perishes not from any single cause but from some combination of factors, and if so, what their probability is and what those factors might be. Could it be, say, that one continent is exterminated by superviruses, another by nanorobots, and the third dies of hunger?

Integration of various technologies creating situations of risk
The fast joint development of powerful technologies creates a special zone of risk. Technologies tend to promote each other's development: the development of computers helps to calculate the properties of new materials, and new materials allow the creation of even more productive processors. In modern technology this is known as NBIC convergence, which stands for nano-bio-info-cogno and means the merging of nanotechnology, biotechnology, computer technology, and research on the human brain. This merging occurs through the exchange of methods and results and through projects uniting elements of these technologies, for example when viral coats are used as components of nanorobots, or when genetically engineered mice with fluorescent markers in their brain neurons are bred for studying the processes of thinking. In the course of progress, the convergence of technologies intensifies and a quickly developing core of technologies (NBIC) emerges, which are capable of helping each other. They can contribute to nuclear and space technologies, but do not receive a comparable contribution back, so no positive feedback loop forms there, and those technologies lag behind the mainstream of technological progress. The basis of NBIC technologies is miniaturization. The convergence of NBIC technologies leads to a certain peak, which is probably strong artificial intelligence.
Similar integration has repeatedly taken place in the past in the creation of weapons. There the technologies did not help each other's development, but created essentially new units: for example, the airplane with a machine gun, a camera, and radio communication, serving as scout and fighter in the First World War; or the intercontinental ballistic missile, which united achievements in nuclear weapons, rocketry, and computers, each of which separately would have been a thousand times weaker: a nuclear bomb without means of delivery, a rocket with a conventional warhead, or a rocket without a guidance system.

Most available forecasts of the future, and science fiction, describe the future as the present plus one new feature. The same holds for forecasts of global risks: they describe the appearance in the world of some one dangerous technology and then consider the consequences of this event, for example how the world would change if advanced nanotechnology appeared in it. It is obvious that this approach is flawed, because future technologies, owing to their joint development, will appear simultaneously and enter into complex interactions with each other.
Convergence can be both parallel and sequential. Parallel convergence takes place when several new technologies are united to create a qualitatively new product, for example an intercontinental missile with a nuclear warhead. Sequential convergence concerns a chain of events in which some factors trigger others, for example: an act of terrorism, then an economic crisis, then war, then the use of biological weapons.

Double scenarios
Seth Baum et al. have written about the risks of double scenarios of global catastrophe in "Double catastrophe: Intermittent stratospheric geoengineering induced by societal collapse". Its main idea is that societal collapse could halt ongoing geoengineering and thereby cause a climate catastrophe, but this is just one example of possible interaction between two different x-risks.
I have created a map listing many (perhaps almost all) pair combinations of different new technologies and risks: http://immortality-roadmap.com/doublecat.pdf
It combines nano, bio, cogno, nuclear, geo, AI, asteroids, and system crisis.

Let us begin by considering hypothetical pair scenarios of global catastrophe, in other words, different variants of mutual reinforcement between major factors taken in pairs. It is clear that in reality they will all operate together, but these pairs can become "bricks" (or, more likely, edges in a graph) for more complex forecasting. We give an outline description of such interactions, essentially as a brainstorm. Each pair scenario should not be taken as a definitive forecast, not because it is too fantastic, but because it does not take into account the influence of other factors.
AI and biotechnologies
Sequential convergence (a chain of events):
1. Genetically modified superhumans will possess superintelligence, which will allow them to create true computer AI.
2. AI will create a supervirus as a weapon.
3. People will die out from the virus, and robots will have to be introduced to replace them.
Parallel convergence (appearance of new products based on both technologies):
4. Biological assembly of superdense chips will sharply accelerate the growth of AI.
5. Special viruses will install programs created by AI into people's brains.
6. AI will be created directly from biomaterials: neurons, DNA.
AI and a superdrug
Sequential scenarios:
1. AI will want to please people and will create such a drug. Or AI itself will be such a drug (virtual reality, the Internet, lucid dreams).
2. As people are destroyed by the superdrug, they will have to be replaced by robots.
3. Or, conversely, some kind of super-TV will have to be invented to calm people left without work because of AI.
4. The superdrug will be a weapon of hostile AI against people.
Parallel convergence:
5. AI will devise a complex combination of magnetic fields creating a precise narcotic effect in the brain.
6. Connection of AI to the human brain through a brain-computer interface will essentially strengthen the capabilities of both: AI will get access to human intuition, and humans will get access to the unlimited memory and speed of thought of AI.
A superdrug and biotechnologies
1. The manufacture of dangerous drugs will become as simple a business as growing a tea mushroom.
2. People's demand for drugs will lead to the flourishing of a black market in biotechnologies, which along the way will make the manufacture of bioweapons of mass destruction accessible.
3. To wean people off a superdrug, a special bioweapon damaging the brain may be sprayed.
4. A certain infectious illness may have, among its symptoms, euphoria and the desire to spread it.
A superdrug and nanotechnology
An even stronger effect will be produced by direct stimulation of brain areas by microrobots. Nanorobots will create systems that read information out of the brain, which will allow the creation of even more powerful entertainment tools. (It is interesting that the nanotechnology development program in Russia asserts that the market for such devices will reach billions of dollars by 2025.) On the whole, however, the same scenarios operate here as with biotechnologies.
AI and nanotechnology
1. Nanorobots will allow the details of the construction of the human brain to be read, which will accelerate AI development.
2. AI will help to develop and release super-efficient nanorobots.
3. Progress in nanotechnology will sustain Moore's law long enough for computers to reach a productivity many times exceeding that of the human brain at the lowest price.
4. Nanorobots may themselves become carriers of AI, producing something in between the intelligent ocean of Lem's Solaris and the grey goo scenario (as in Crichton's novel "Prey").
5. Hostile AI uses nanorobots as a weapon for establishing power over the Earth.
AI and nuclear weapons
1. AI will figure out how to make nuclear weapons (NW) more easily, faster and cheaper.
2. A scenario in the spirit of the film "Terminator": AI uses NW to get rid of people.
3. People use NW to try to stop an AI that is getting out of control.
Nano- and biotechnologies
1. Living cells will assemble components of nanorobots (synthesizing them in special ribosomes).
2. "Animats" will appear: artificial life containing elements both of living things and of nanorobots.
3. Only nanorobots will provide definitive protection against biological weapons.
Nanotechnology and nuclear weapons
1. Nanotechnology will make it easier to separate isotopes and design NW.
2. Attempts to fight swarms of nanorobots with nuclear strikes will lead to additional destruction and contamination of the Earth.
Nuclear weapons and biotechnology
1. Nuclear weapons can be used to destroy dangerous laboratories and sterilize infected areas.
2. Biotechnological developments can be used for extracting uranium from sea water and for its enrichment, as well as for separating plutonium from spent fuel, or for decontaminating territory.
3. Nuclear war occurs in a world heavily infected with biological agents. The war makes an adequate rate of production of vaccines and other protections impossible and at the same time leads to intensive migration of people. Resources which could go to protection against microbes are thrown into protection against radiation. Many people are weakened.
NW and supervolcanoes
By means of a hydrogen bomb it is possible to provoke the explosion of a supervolcano or a strong earthquake, or, on the contrary, to direct its energy along a bypass channel.
NW and asteroids
1. By means of NW it is possible to deflect an asteroid away from the Earth, or, on the contrary, to direct it toward the Earth.
2. The fall of an asteroid can be perceived as a nuclear attack and lead to the accidental start of a nuclear war.
3. An asteroid can also destroy a nuclear power station and cause contamination.
AI and system crisis
1. The use of supercomputers will create a new type of instability: fast and opaque (in the military sphere, in the economy, in futurology).
2. War, or the threat of war, will lead to an arms race resulting in the creation of the most destructive and dangerous AI.
3. The whole world becomes dependent on a global computer control system which is then brought down by hackers, or is given a command to damage itself.
NW and system crisis
1. The explosion of even one nuclear bomb in a city can bring down the world financial markets.
2. Conversely, a collapse of the markets and a financial crisis can lead to fragmentation of the world and a strengthening of the temptation to use forceful solutions.
NW and the climate
1. It is possible to deliberately cause a nuclear winter by blowing up a powerful nuclear charge in a coal seam, which would be guaranteed to throw a large quantity of soot into the atmosphere. If the theory of nuclear winter resulting from attacks on cities is correct, such an action would be tens or hundreds of times more effective in terms of soot output.
2. It is possible to provoke irreversible global warming by means of correctly chosen places for nuclear attack. For example, it is known that after a nuclear winter a "nuclear summer" is possible, when soot settles on glaciers and causes their heating and melting. The explosion of bombs in beds of gas hydrates under the ocean floor could also cause a chain reaction of their release.
3. Conversely, it is possible to regulate the climate by provoking the emission of sulfur and ash from volcanoes by means of nuclear charges (but this is already a chain of three elements).

Studying global catastrophes by means of models and analogies
A global catastrophe of a technological civilization leading to human extinction is a unique phenomenon which has never happened in history, and this complicates its study. However, we can try to select a number of other events which resemble a global catastrophe in some respects, and thus collect a set of models. Such a selection is fairly subjective. I suggest taking as analogies large, complex, well-known events that have been studied in detail, namely:
the extinction of the dinosaurs;
the extinction of the Neanderthals;
the collapse of the Roman Empire;
the disintegration of the USSR;
the crisis on Easter Island;
the collapse of the American Indian civilizations after the arrival of Columbus;
the explosion at Chernobyl;
the sinking of the Titanic;
the explosion of a supernova;
the appearance of mankind, from the point of view of the biosphere;
the beginning of the First World War;
cancer as an illness.
These events resemble a global catastrophe in different respects. Intelligent beings participate in some of them; in others a whole species dies out irreversibly; in others complex systems collapse; in still others complex technologies are involved. On each of these themes there is a great deal of literature, and it is contradictory enough. In each case there is a set of hypotheses which explain everything through some single cause, but since there are many such hypotheses, no single cause is really unique; more likely, on the contrary, there was no single cause at all. What all these cases have in common is that the more we go into the details, the more clearly we see the set of factors which led to the end and which interacted in a complex way. Books have been written about each of these catastrophes, and the spread of opinions is considerable, so I will not try to retell all possible views on the causes of all these catastrophes and instead refer the reader to the corresponding literature, among which one can single out the recent book "Collapse" by Diamond. On the extinction of the dinosaurs, see the corresponding chapter in K. Eskov's book "The History of the Earth and of Life on It".
What is common to all these cases is that a complex set of causes, both external and internal, was present. The interconnectedness of these causes creates problems when we try to answer questions like "Why did the Roman Empire fall?" And this is the most important lesson. If we face a catastrophe that ruins human civilization, it will most likely occur not for any single reason, but owing to a complex interaction of different causes at different levels. Hence, we should try to create models of the same level of complexity as those used to describe the large catastrophes that have already happened.
First, it is important to notice that the main role in extinctions and catastrophes was played by factors constituting the basic properties of the system. (For example, the dinosaurs died out not from an outwardly random cause, an asteroid, but because of their most defining property: they were huge and egg-laying, and therefore vulnerable to small predatory mammals. The asteroid was only the occasion that opened a window of vulnerability, and more resilient species, such as crocodiles, passed through it. A human falls ill with cancer not because of a single wrong mutation, but because by nature the body consists of cells capable of division. If it were not for the specific features of American Indian culture, without the wheel, horses, and comparable technological progress, it would not have been Columbus who came to them, but they who came to Spain.)
The idea is that the defining properties of a system determine the type of catastrophes that can happen to it, and so we should ask what the defining properties of the human species and of modern civilization are. For example, a plane by definition flies, and this determines its most typical catastrophe: a fall. For a ship the most typical risk is to sink. Much more rarely do ships crash and planes sink.
So, recognizing that none of these catastrophes was caused by any single simple external factor, but had its causes in the defining properties of the system (which were, accordingly, "smeared" over the whole volume of the system), we can draw an important conclusion: one-factorial scenarios of global catastrophe are not as dangerous as the defining properties of systems and the system crises connected with them. A peculiarity of a system crisis is also that it automatically involves the entire population and needs no universal "delivery systems".
On the other hand, one could say that all these factors are unimportant, since all empires fall sooner or later anyway, species go extinct, and beings perish. But this information is useless for us, since it says nothing about how to make the end come "late" rather than "early".
Second, although internal contradictions in a system may ripen for a very long time, external and fairly random factors are needed to push it to destruction. For example, although the ecological niche of the dinosaurs was steadily shrinking by the very logic of this process, the fall of an asteroid and the eruption of volcanoes could push the process further. Or the cooling that pushed the Neanderthals to extinction, simultaneously with pressure from H. sapiens. Or the Chernobyl catastrophe, which undermined the USSR at the moment of its greatest vulnerability. If these external random factors had not occurred, the system might have lasted longer and passed into a different channel of development.
Third, in all cases where intelligent management was involved, it turned out to be not so intelligent after all: important mistakes leading to catastrophe were made. Besides, a catastrophe is often connected with the simultaneous "random" coincidence of a large number of diverse factors which separately would not have led to catastrophe. Finally, catastrophic processes can exhibit pathological self-organization, in which the destructive process amplifies itself at each stage of its development.
It is also interesting to ask whether mankind has ever created systems that never suffered catastrophes, that is, systems whose design did not rely on trial and error. Alas, we are compelled to exclude a number of systems which were designed to be catastrophe-free but in the end led to catastrophes: nuclear reactors, the Space Shuttle, the supersonic Concorde. The safety record of nuclear weapons looks better, but here too there were incidents when the situation was, as they say, on the verge. Further study of analogues and models of global catastrophes on a set of examples seems productive.

Inevitability of reaching a stable state
It is possible to formulate the following plausible statement: most likely, mankind will soon pass into a state in which the probability of global catastrophe is very low. This will happen in one of the following cases:
- We understand that no global catastrophe has a high probability under any conditions.
- We find a way to control all the risks.
- The catastrophe occurs.
- We reconcile ourselves to the inevitability of global catastrophe as part of the natural course of life (as, for example, Christians have waited for Doomsday for the last two thousand years, and even rejoiced in its nearness).

For now, however, we observe the opposite: people's capability to create destructive means, and with it the annual probability of global catastrophe, grows constantly, and it grows faster than the population and faster than the protective systems. If we plot this growth curve, it too will have a certain peak. For comparison we can take the scale of casualties in the First and Second World Wars. We see that over 25 years the number of victims of the maximum realized destruction grew roughly 3.6 times (taking estimates of 15 and 55 million victims respectively). This outpaces population growth. With the development of nuclear weapons this acceleration went even faster, and by 1960-70 it was genuinely possible to destroy hundreds of millions of people (though in a real war the entire population of the Earth would not have perished, since the aim was not to exterminate everyone). If we take the rate of growth of destructive capacity to be 3.6 per 25 years, we get a factor of about 167 per hundred years. This implies that by 2045 a war would be capable of destroying 9 billion people, which is comparable to the total population of the Earth expected by that time. This figure is close to the expected date of a technological Singularity around 2030, although it is obtained in a completely different way, using data only from the first half of the 20th century.
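A small arithmetic sketch of this extrapolation (the 55 million baseline and the factor of 3.6 per 25 years are the figures used in the text; the rest is illustration):

baseline_year = 1945
baseline_victims = 55e6        # maximum realized destruction, WWII estimate from the text
factor_per_25_years = 3.6

for year in (1970, 1995, 2020, 2045):
    periods = (year - baseline_year) / 25
    capacity = baseline_victims * factor_per_25_years ** periods
    print(f"{year}: ~{capacity / 1e9:.1f} billion potential victims")
# Over a century the factor is 3.6**4, roughly 167, and
# 55e6 * 167 is about 9.2 billion, the figure quoted above.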
Therefore we can reformulate our thesis: the growth of the probability of risk factors cannot continue forever. It can be put differently: the means of preserving stability must surpass the means of self-destruction. If the destructive means turn out to be more powerful, the system will fall to a level at which the ordering forces are sufficient, even if that level is a scorched desert. Taking the time factor into account, we can say that the means of maintaining stability must grow faster than the means of self-destruction. Only in this case will the annual probability of extinction fall, and only then will its accumulation over time not rise to 1, which means the possibility of the infinite existence of mankind, that is, realization of the goal of the indestructibility of mankind (Kononov).
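A minimal numerical sketch of this condition (the starting annual risk and its decay rate are arbitrary assumptions chosen only for illustration): if the annual probability of extinction falls fast enough, the accumulated probability of extinction stays below 1, leaving a nonzero chance of indefinitely long survival, whereas with a constant annual risk it tends to 1.

def cumulative_extinction(p0=0.01, decay=0.97, years=10000):
    """Accumulated extinction probability when the annual risk decays geometrically."""
    survival = 1.0
    for n in range(years):
        p_n = p0 * decay ** n      # assumed annual extinction probability in year n
        survival *= 1.0 - p_n
    return 1.0 - survival

print(f"decaying annual risk: {cumulative_extinction():.3f}")          # stays well below 1
print(f"constant annual risk: {cumulative_extinction(decay=1.0):.3f}") # approaches 1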

Recurrent risks
Any global risk listed in the first half of this text becomes much more dangerous if it arises repeatedly. There is a big difference between a single leak of a dangerous virus and thousands of leaks of different viruses occurring simultaneously. If one virus with a lethality of 50% escapes, we will lose up to half of the Earth's population, but this will not interrupt the development of human civilization. If there are 30 such leaks within the lifetime of one generation, most likely only a handful of people will remain alive. If there are thousands of leaks, no one will survive even if the lethality of each separate virus is only 10-20% (provided that all these viruses spread over the whole planet rather than settling in one place). The same can be said of asteroid impacts: bombardment by a long series of tens of medium-sized asteroids would be more lethal for humankind than the fall of one big one.
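A minimal sketch of this arithmetic, assuming each leak is independent and reaches the whole planet:

```python
def expected_survivors(population, lethality, n_leaks):
    """Expected survivors after n independent leaks, each killing a fixed fraction."""
    return population * (1.0 - lethality) ** n_leaks

POP = 8e9
print(expected_survivors(POP, 0.5, 1))       # ~4 billion: civilization continues
print(expected_survivors(POP, 0.5, 30))      # ~7 people: effectively extinction
print(expected_survivors(POP, 0.15, 1000))   # ~0: even mild viruses, repeated, kill everyone
```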
Of course, we must take into account humankind's ability to adapt to any one threat. For example, it might be possible to successfully counter absolutely all biological threats, if that were the only class of threats. But the possibilities of creating universal protection against global risks are limited. After September 11 the USA began to compile a list of vulnerable objects and quickly realized that it is impossible to protect all of them.
Since technologies develop together, we cannot count on any single key technology arising while all the others remain at their present level (though such a picture is usually created by science fiction novels and films; it is an example of the cognitive bias caused by a "good story").

Global risks and the problem of the rate of their growth


Global risks are a game of one-upmanship. Every new technological discovery creates new global risks and reduces old ones. The exploration of outer space has reduced the risk of an accidental collision with an asteroid, but has created the possibility of arranging one on purpose. The spread of nanorobots will reduce the threats from genetically modified organisms, but will create an even more dangerous weapon. Artificial intelligence will solve the problem of controlling other dangerous technologies, but will create a monitoring system any failure of which can be mortally dangerous. The development of biotechnologies will give us the chance to defeat all previous illnesses, and to create new ones.
Depending on which technologies arise earlier or later, different bifurcations on the path of further development of a technological civilization are possible. It also matters whether new technologies will have time to solve the problems created at the previous stages of development, above all the exhaustion of the resources consumed while developing the previous technologies, as well as the elimination of the risks created by those technologies.
Previously, humankind experienced the full range of possible situations at each stage of its historical development, for example the full set of interactions between a large state and nomads. Now we find ourselves, it seems, facing a real historical alternative: if one thing happens, another will not happen at all. Either a powerful AI controlling everything will be created, or everything will be eaten by grey goo. Either we become a space civilization, or we return to the Stone Age.
A global risk arises from the speed of the process that creates it. With a slow process of spread there is time to cope: to prepare proper shelters, to grow a vaccine. Hence a real global risk can be recognized by the rate of its development (Solzhenitsyn: a revolution is defined by its tempo). This rate will be stunning, because people will not have time to understand what is happening and prepare correctly. However, for different classes of events different speeds will be stunning: the more improbable the event, the lower the speed that stuns us. The USSR seemed so eternal and firm that even a crisis stretched over many years and the crash of the Soviet system were stunning. A systemic crisis, in which the point of maximum catastrophe constantly moves (like a fire jumping from one object to another), possesses a much greater stunning potential.
We must also bear in mind the ability of the events of a systemic crisis to shock perception and create a wrong impression of themselves, perhaps in the form of future shock, and accordingly to provoke a wrong reaction to them that strengthens them further (the Lehman Brothers bankruptcy). Certainly some will grasp the essence of events at once, but being stunned means the disintegration of a unified picture of events in society, especially among the authorities. Therefore there will be blindness, and the voices of Cassandras will not be heard, or will be understood incorrectly. Faster processes will supersede slower ones, but attention will not always have time to switch to them.

The comparative power of different dangerous technologies


Further, we can rank the "power" of the destructive influence of technologies, where each following technology produces a higher rate of threats and eclipses the threats created at the previous stage. The time factor here refers to the duration of the possible process of extinction (not to the time before the technology matures).
1. Exhaustion of resources - decades or centuries.
2. Large-scale nuclear war with cobalt bombs - taking into account the slow subsequent extinction - years and decades.
3. Biotechnologies - years or tens of years.
4. Nanorobots - from several days to several years.
5. AI - from hours to several years.
6. An accelerator disaster - at the speed of light.
Faster processes win over slower ones. Accordingly, scenarios of global catastrophe will with much greater probability jump from the first positions of this list to the last; in other words, if in the middle of a process of resource exhaustion a multifactor biological war suddenly begins, the process of resource exhaustion will be so slow in comparison that it can be disregarded. At the same time, the presence of each more advanced technology allows the consequences of a catastrophe caused by a weaker technology to be minimized. For example, developed biotechnologies would help to extract resources and to clean the world of radioactive contamination, and nanorobots could protect against any biological dangers.

Sequence of appearance of various technologies in time


The ranking of technologies by "power" given above is on the whole similar to the expected time sequence of their appearance in reality, since we can expect that in the course of progress ever stronger and potentially more destructive technologies will appear; but the actual order of appearance does not necessarily correspond to this ranking.
The sequence in which various technologies appear in time is the major factor in determining what future awaits us. Although, thanks to NBIC convergence, successes in one technology affect the others, for us the moment a technology matures is the moment when it becomes possible to create a global risk with its help. Even a small lead here can be of crucial importance. In general, any technology allows the creation of both a shield and a sword. The shield usually lags behind in time, though in the end it may prove stronger than the sword. Besides, a stronger technology creates a shield against the dangers of a weaker one.
The following sequence of technology maturation is usually expected: bio - nano - AI. Strong AI is a "joker" that can arise tomorrow, in ten years, in fifty, or never. Biotechnologies are advancing fairly steadily according to their own Moore's law, and we can on the whole predict when they will ripen to the point where it will be possible to make any virus anywhere and very cheaply - probably within 10-30 years, if some catastrophe does not interrupt the development of these technologies. A dangerous physical experiment can occur almost instantly and independently of other technologies, as long as the overall level of technology is high. The coming of strong AI would considerably reduce the probability of such an event (though even an AI might run certain experiments).
Nanotechnology is in a much more rudimentary state than biotechnology and even AI. The first dangerous experiments with biotechnologies took place in the 1970s (inserting cancer-related genes into E. coli), while the nearest dangerous nanotechnological experiments are at least 10 years away, barring a technological breakthrough; that is, nanotechnology lags behind biotechnology by almost 50 years. A sudden breakthrough could come from AI, which would figure out how to create nanotechnology easily and quickly, or from biotechnologies, along the path of creating synthetic organisms.

Comparison of various technological risks


For each supertechnology we can introduce a danger factor Y = a*b, which reflects both the probability of the appearance of this technology (a) and the probability of its ill-intentioned application (b). For example, nuclear technologies already exist (a = 1), but control over their large-scale applications (full-scale war or a superbomb) is fairly strong, so the second factor of the product is small. For biotechnologies both the probability of their development and the probability of their ill-intentioned application are high. For AI these quantities are unknown to us. For nanotechnology the probability of creation is also unknown (though no fundamental difficulties are visible), while the probability of ill-intentioned application is similar to that of biological weapons.
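A minimal sketch of the Y = a*b factor; the numeric values below are purely illustrative, since the text gives only qualitative levels:

```python
# Danger factor Y = a * b for each supertechnology (illustrative values only).
techs = {
    # name:     (a: P(technology appears), b: P(ill-intentioned use))
    "nuclear": (1.0, 0.05),   # already exists; large-scale use is fairly well controlled
    "bio":     (0.95, 0.5),   # both probabilities high
    "nano":    (0.5, 0.5),    # creation uncertain; misuse comparable to bioweapons
    "AI":      (0.5, 0.3),    # both quantities essentially unknown
}

for name, (a, b) in sorted(techs.items(), key=lambda kv: -kv[1][0] * kv[1][1]):
    print(f"{name:8s} Y = {a * b:.3f}")
```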
Besides this, one can add a factor for the speed of a technology's development, which shows how close it is in time. Straightforward multiplication is not quite correct here, since it ignores the fact that a technology arriving late is completely cancelled out by the others, as well as the nonlinear (at least exponential) character of each technology's progress. The further a technology is from us, the safer it is, since there is a greater chance that we will find a safe way to manage progress and the application of its fruits.
Generalizing, we can conclude that biotechnologies receive the highest score on this scale: these technologies are certainly possible, their harmful application is almost inevitable, and in time they are quite close to us.
Nanotechnology receives an unexpectedly low threat level. It is not known whether it is possible; it may turn out to be quite safe; and a very long time may remain until its natural (without AI) maturation. If it matures unnaturally, thanks to progress in AI or in biotechnologies, it finds itself in the shadow of those technologies: in the shadow of the threats which biotechnologies will by then be able to create, and in the shadow of AI's capacity for control, which could check all accidental nanotechnological leaks.
AI, being a two-sided joker, can either prevent all other risks or easily ruin humankind. The moment of the appearance of AI is a moment of polyfurcation: at this moment it can be given goals which it will afterwards be impossible to change. A slow and later appearance of AI is associated with the possible smooth evolution of the state into a huge all-supervising computer. A faster and earlier appearance is more likely associated with a sudden invention, in some laboratory, of a computer capable of self-improvement and directed at seizing power on Earth. In that case it would more likely create essentially new structures of communication and management, and its spread would be explosive and revolutionary. The later people create AI, the greater the chance that they will understand how to program it correctly so that it actually benefits people. On the other hand, the later it arises, the more likely it is to be made by some "hacker", since the complexity of the problem falls every year. E. Yudkowsky expresses this thought metaphorically: the Moore's Law of Mad Science says that every year the IQ of the human designer necessary for AI creation falls by one point.
The basic bifurcation, in my opinion, is whether it will be possible to create a powerful AI before the combined cumulative pressure caused by systemic crisis, biotechnologies, nuclear war and other factors takes effect - or whether all these events will so weaken humankind that almost all AI scientists perish or become refugees and work in this area stops. Research could be undermined even by the simple destruction of the Internet, which would reduce information exchange and the explosive growth of technologies. This bifurcation concerns the events which I have called global risks of the third sort.
As the development of technologies accelerates, the speed of exchange grows and all processes in human civilization become faster, including all virtual simulations of reality. This means that in one year of objective time a civilization can pass through hundreds or thousands of years of "subjective" time by its internal clock. Because of this, the probabilities of all internal risks increase, and even the most improbable events of an internal character can have time to occur. Therefore, to an external observer the civilization becomes extremely unstable. But the acceleration of internal time makes the civilization much more independent of external risks - from the point of view of the internal observer.
The question is whether humankind is an external or an internal observer of these processes of acceleration. Certainly, a considerable part of humanity does not participate in world processes: a third of the people in the world have never used a telephone. Nevertheless, they can suffer equally with everyone else if something goes wrong. For now, the people of the "golden billion" on the whole keep up with progress. But in the future a situation is possible in which progress detaches itself from these people too. Perhaps only a group of leading scientists will be involved in it, or perhaps it will depend entirely on computers. Natural human inertia is usually regarded as a good safety catch against the pace of progress: it is difficult to make people change their computers more often than once in a few years (though the Japanese are accustomed to changing mobile phones and clothes every three months), although economic pressure is very strong and creates social pressure - for example, the image of a new, even "cooler" phone. However, in the case of armed confrontation an arms race is not limited in pace: the faster side wins.

The price of the question of x-risks prevention


We can also measure the probability of an apocalyptic scenario by the amount of money, time and other resources required for it, and compare them with the total quantity of accessible resources. If a "doomsday" device requires tons of a certain substance while only 1.5 tons of it exist on Earth, the scenario is improbable; if billions of tons are accessible, it is almost inevitable. We can also try to define the minimum number of people who would have to unite to create this or that Doomsday weapon. It is obviously cheaper to seize an existing infernal machine than to build one: for example, Chechen terrorists planned to seize a nuclear submarine and blackmail the Russian Federation, but they could hardly have created such an arsenal of missiles themselves.
Clearly the time factor is also important. If a project is very cheap but demands 10 years of effort, it is more likely to be exposed, or its author will lose interest. By contrast, if the project is fast (breaking a test tube with poison), a person can carry it out under the influence of a momentary mood.
Dozens of countries can currently create nuclear weapons, but these projects require many years to realize. At the same time, thousands of bio-laboratories around the world can work on genetically modified viruses, and these projects can be realized much faster. As knowledge accumulates and equipment becomes standardized, this number grows and the development time shrinks. Creating a dangerous virus now requires a budget of between a thousand and a million dollars, while nuclear projects start at billions. Moreover, the price of development in biotechnology is falling much faster, since it does not require large capital investment and depends more on the availability of information.
We can introduce a risk factor A, directly proportional to the number L of places on Earth where a dangerous project can be carried out and inversely proportional to the expected average time T to complete the project with an expected efficiency of 50%.
For projects to create a nuclear superbomb this gives approximately 40/10 = 4, and for biological-weapon projects at the moment roughly 1000/1 = 1000. The dependence of real risk on A is most likely nonlinear. The cheaper a project is, the more possible it is that it will be created by marginal outsiders; a small and cheap project is also much easier to hide, disguise or copy. The more such projects there are in the world, the more likely it is that multiplying this number by k (the share of "madmen", discussed below) will give a considerable value. For example, there are about 100 operating nuclear submarines in the world. Assuming k for them is one in a million, this would give one incident per 10,000 days, or roughly once in 30 years. In fact the safety level on nuclear submarines is so high that k there probably approaches one in a billion. (However, because of the specifics of safety systems, the main risks are not deliberate capture but accidental use caused by failures of communication systems or false alarms. For example, I have read in memoirs that the Soviet submarine fleet was put on full alert in 1982 after Brezhnev's death: the codes had been entered, the launch keys inserted, and the strike positions occupied.)
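A minimal sketch reproducing the rough numbers above for the factor A = L/T and for the expected rate of incidents given the coefficient k:

```python
def risk_factor(n_places, years_to_complete):
    """A = L / T: number of places where the project can be done / expected time."""
    return n_places / years_to_complete

print("superbomb:", risk_factor(40, 10))     # ~4
print("bioweapon:", risk_factor(1000, 1))    # ~1000

# Expected waiting time for one incident among N similar "projects",
# each with probability k per day of misuse (values taken from the text).
N_SUBMARINES = 100
k = 1e-6
days = 1 / (N_SUBMARINES * k)
print(f"one incident per ~{days:.0f} days (~{days / 365:.0f} years)")   # ~10,000 days, ~27 years
```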
However, the number of laboratories capable of performing genetic manipulations is now probably measured in the thousands, and the safety level there is lower than on submarines. Moreover, the creation of a biological assembler - a living being capable of translating signals from a computer into DNA and back - will considerably simplify these technologies, and the number of such laboratories could then grow to millions. (One can also say that the cheaper a project is, the higher its k, since cheap projects spend less on safety.) In that case we could expect the appearance of mortally dangerous viruses every day.
So, each means of destruction is characterized by the amount of money and time necessary for its creation. These parameters are not the only ones, but they allow different means to be compared. We must further consider the probability that the given weapon will work as intended (in the sense of achieving complete extinction). Even a very cheap project might give a probability of 0.0001, and a very expensive one only 0.60. We can conventionally normalize all "doomsday" projects to a 50% probability; none of them can guarantee 100% efficiency. However, in sum, cheap but individually less dangerous projects can create a higher probability of global catastrophe for the same money than one big project (a thousand viruses against one superbomb).
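A minimal sketch of the "thousand viruses against one superbomb" comparison. The success probabilities are those given in the text; the assumption that one expensive project costs as much as 10,000 cheap ones is illustrative, added only to make the comparison "for the same money":

```python
def combined_probability(p_single, n_projects):
    """Probability that at least one of n independent projects succeeds."""
    return 1.0 - (1.0 - p_single) ** n_projects

P_CHEAP = 0.0001     # success probability of one cheap project (from the text)
P_EXPENSIVE = 0.60   # success probability of the one big project (from the text)
N_CHEAP = 10_000     # how many cheap projects the same budget buys (assumed)

print(f"one expensive project:           {P_EXPENSIVE:.2f}")
print(f"{N_CHEAP} cheap projects together: {combined_probability(P_CHEAP, N_CHEAP):.2f}")  # ~0.63
```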
An important question is the minimum size of an organization that could destroy humankind if it wished to. I think that today a rogue country of average size could, whereas earlier only the two superpowers could have done so. Modern corporations possess comparable resources. The next phase will be large terrorist organizations, then small groups and, finally, individuals.

Universal causes of the extinction of civilizations


The Fermi paradox and a number of other considerations (see the discussion of the Doomsday Argument) suggest that there are some universal causes of the extinction of civilizations, which operate on all civilizations in all worlds, without exception, regardless of specific technological developments and the natural features of the planets and other worlds.
1) The aging of a civilization, in the sense of the accumulation of errors. During periods of extremely rapid growth (which we are approaching with the Singularity), errors also accumulate rapidly.
2) Any civilization is formed by natural selection from ape-like ancestors, and natural selection favors risk-taking individuals who leave more offspring over cautious ones. As a result, every civilization tends to underestimate risk.
3) Civilizations arise so rarely that they may appear only at the very edge of the sustainability parameters of the natural systems that support them. The growth of a civilization inevitably destroys that balance (example: global warming).
4) Civilizations grow exponentially and so sooner or later exhaust every resource available to them, after which they either suffer ecological collapse or start a war for resources (the Roche limit; see Efremov). Nanotechnology does not solve this problem, because at current growth rates all the material of the solar system would be consumed within a few hundred years or sooner.
5) The development of weapons has always outpaced the development of shields. Every civilization eventually creates a weapon that could destroy it, and in large quantities.
6) The more complex a system, the more prone it is to normal accidents and sudden changes. With growing complexity these changes become more frequent; when a certain level of complexity is reached, the system abruptly falls into an uncontrollable chaotic regime.
7) Any civilization sooner or later creates AI, which grows exponentially and is then destroyed by some unknown internal contradiction. For example, the problem of friendliness may turn out to be insoluble, as may its simpler form, the problem of AI indestructibility.
8) A civilization always consists of competing military and economic agents, which leads to natural selection of those who know how to win short-term confrontations at the expense of those who forgo short-term advantages for the sake of the civilization's future.
9) A civilization sooner or later learns to replace real achievements with the creation of their signs (as with a superdrug), and therefore ceases all external activity.
10) A physical experiment that is "illegal" in our universe, like the LHC.
We can already see almost all of these crises in our own civilization, and the list is not complete.

Does the collapse of technological civilization mean human extinction?


The final collapse of technological civilization - say, an irreversible return to a tribal structure - means extinction within a few million years, because humans would again become ordinary living beings, and all such species sooner or later go extinct. In any case this is an existential risk in Bostrom's terminology, since it means irreversible damage to the potential of civilization. For example, if the population of Earth were a million people for a million years, this would amount, by number of lives, to roughly one century with a population of 10 billion (that is, the XXI century).
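A quick check of this "number of lives" comparison; the assumed lifespan cancels out, so only population multiplied by duration matters:

```python
LIFESPAN = 70  # years; any value gives the same ratio

tribal_lives = 1e6 * 1e6 / LIFESPAN     # a million people for a million years
century_lives = 10e9 * 100 / LIFESPAN   # ten billion people for one century

print(f"tribal scenario: {tribal_lives:.2e} lives")
print(f"one century:     {century_lives:.2e} lives")
print(f"ratio: {tribal_lives / century_lives:.1f}")   # 1.0: the two are equal
```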
The growth of European industrial civilization was linked to the presence in one location of easily accessible mineral resources - coal and iron ore in England and Germany. Resources that are easily accessible on an industrial scale have been exhausted; this is why renewable sources of energy matter here. It is unlikely that anyone would re-establish large-scale iron production merely in order to return to a mythical golden age; it would need to give specific advantages in the current situation, for example by providing an effective weapon. Otherwise, for many centuries it will be more interesting to dig through the ruins than to build up production.
We must honestly admit that we do not know whether it is possible to restart the engine of technological civilization without easy access to resources, particularly metals and cheap energy, as well as highly productive domesticated crops and livestock. For example, all the gold deposits in Europe were exhausted in Antiquity and the Middle Ages. Of course, civilization could be restarted by a large number of educated, motivated people under the right leadership; energy can be extracted from burning wood and from hydropower. But it is not known whether such a superstructure as science is possible without an adequate industrial base. Can one bootstrap civilization without the start-up capital of natural resources? Or will we be like the American Indians, who knew of the wheel (they had wheeled toys) but did not introduce it into transport, because a wheel is useless without roads and draft animals, and porters in the mountains are better than wheeled carts?
Hence we must conclude that the collapse of technological civilization would, with substantial likelihood, mean the extinction of humankind, even if some representatives of Homo sapiens survived for thousands or millions of years after the collapse. Moreover, the existence of humankind in its present form for even a hundred years may be more valuable than the existence of a tribe of people for a million years - not only because it is more interesting, but also because it means a greater number of human lives lived.

Chapter. Agents who could start x-risks


The purposes behind the creation of a Doomsday weapon
A dangerous factor of global catastrophe can arise either accidentally or be created intentionally. (A combination of the two is also possible: a random factor can be exploited intentionally, for example by concealing the approach of a dangerous asteroid, or, conversely, something planned as a game with a low risk of global catastrophe can slip out of control.)
In discussions the opinion often arises that nobody would want to realize some diabolical plan, and therefore it need not be considered. This is incorrect. First, the statistical approach applies here: sooner or later the necessary conditions will come together. Secondly, there really are groups of people and individuals on Earth who want "doomsday". As a whole this does not apply to Islamic terrorists, because they wish to create a World Caliphate, not a radioactive desert. (But they could be ready to gamble on an "all or nothing" principle, for example creating a Doomsday Machine and threatening to use it unless all the countries of the world simultaneously accept Islam. And if another sect simultaneously created a Doomsday Machine demanding that everyone accept a particular form of Buddhism, the situation would become a stalemate, since both demands cannot be satisfied at once.) It is important to note that a group of people can keep itself attuned to a certain idea much longer than one person can, but such groups form less often. Let us consider the different groups of people who could potentially aspire to the destruction of humankind.
1) Eschatological sects. An example is the Japanese Aum Shinrikyo. This organization not only believed in the nearness of the end of the world, but worked to bring it closer, gathering information on nuclear weapons, viruses and chemical substances. (There are, however, different hypotheses about what Aum Shinrikyo actually did and wanted, and it does not seem possible to establish the definitive truth.) Any religious fanatics who choose death are theoretically dangerous; for example, Russian Orthodox Old Believers in the 17th century often preferred death to the new faith. Such fanatics believe in bliss in the other world, or perceive the Doomsday as a purification ceremony. A psychological substitution is possible, in which long expectation of something turns into desire for it. The logical chain leading from peaceful meditation to destructive activity (over roughly 10 years in the case of Aum Shinrikyo) runs something like this. First, the existence of the other world is realized. Then it is realized that the other world is more important than ours and that the ultimate goals lie in it. From this it follows that our world is secondary, created by the higher world, and hence small, finite and unimportant. Moreover, our world is full of obstacles that hinder the pure flow of meditation. Since the higher world is primary, it will sooner or later end the existence of our world. Since our sect is blessed by God, it receives especially exact knowledge of when the end of the world will come. And, surprising coincidence, there are signs that it will happen very soon. Moreover, by destroying the world, our sect will be fulfilling the will of God. Possession of this super-important secret knowledge naturally heightens the members' sense of their own importance and is used to strengthen control within the sect. The end of our world will mean the union of all good people with the higher world. Knowledge of the nearness of the inevitable end, awareness of the positivity of this event and of one's exclusive role in it lead to the conclusion that the sect should not only know about and preach the doomsday, but also bring it closer. (Psychologically, long expectation is replaced by aspiration.) Besides, it is a possible way to kill one's enemies and to feel oneself a victor over the old world. (I do not wish to claim that I know for certain that Aum Shinrikyo really reasoned this way; however, elements of this reasoning can be found in the most varied groups with an eschatological outlook, from Christian to revolutionary. Not all people and groups who speak about the doomsday intend to organize it; among the well-known sects expecting the doomsday are Jehovah's Witnesses and Mormons.)
2) Radical ecologists. Example: the Voluntary Human Extinction Movement, who consider the extinction of humankind useful, though they propose to achieve it by refusing to reproduce. Such groups regard the world of nature and animals as the good and consider humankind - not without logic - a cancerous tumor on the body of the Earth, leading to the extinction of all living things. One can also recall radical vegans, for whom the lives of animals are no less (and sometimes more) important than human lives.
3) Neo-Luddites. For example, the terrorist Theodore Kaczynski (the Unabomber), who considered the only way out for civilization to be a halt to technological progress and a return to nature, and who mailed bombs to leading computer scientists. Three people were killed and many wounded as a result of his actions. He is now serving time in an American prison.
4) Embittered people driven by revenge - those who today, for example, shoot their schoolmates with automatic weapons. Such projects are usually prepared not for years but for days. Still, one can imagine a person who has gone mad and fixated on the idea of taking revenge on the world or on God.
5) Unconscious destructive behavior. This can be an unexpected outburst (breaking a test tube with poison) or some more or less subtle error in the evaluation of one's own goals. For example, many kinds of drug addiction and extreme behavior are, according to psychologists, hidden forms of slow "suicide" (self-destructive behavior). The need for suicide is perhaps written into humans at the genetic level and triggered in response to rejection by society (examples: seppuku of samurai; a dog dying of loneliness; alcoholism from loneliness).
6) Fame-seekers. Clearly no one will become famous for destroying the whole world, but in destroying it someone could feel himself, for a second, a "great person". In effect this is a perverted form of the aspiration to power.
7) Blackmailers who have created a Doomsday Machine. These could be people making political or economic demands under threat of the utter annihilation of the whole world. They could be especially difficult to catch, since their "machine" could be located anywhere.
8) A universal defensive weapon of last resort. Instead of creating a nuclear missile shield, a country could create one super-powerful nuclear bomb with a cobalt jacket and threaten to detonate it in case of armed aggression. This is only a little less rational than the concept of "mutual assured destruction" for the sake of which strategic nuclear forces were created, and it resembles the behavior of a man who blows himself up with a grenade together with his enemy - and, after all, rulers are also people. Such a weapon is created not in order to use it but in order to threaten with it. Conceptually it is close to the idea of "global blackmail".
9) Risky behavior offering a big win or a big loss - for example, a certain physical or biological experiment. This can be aggravated by the unwillingness and inability of people to estimate the scale and probability of the worst-case loss. An example: Reagan's foreign policy in the confrontation with the USSR.
10) The need for risk, strong experiences, passion. People have lost estates at cards not in order to change their property status, but because they felt the need for the sharp experience of risk. Today this manifests itself in extreme sports.
11) Supporters of replacing people with a more perfect artificial intelligence. There are people on the Internet promoting this idea. Radical transhumanists may also, even against their will, fall into this category.
12) People who believe death to be a better alternative to something. One American general in Vietnam said of the killed inhabitants of a village: "To save them, we had to destroy them."
13) Suicides. If a person has found sufficient grounds to kill himself, he may not spare the rest of the world. An example is the Italian pilot who flew his private plane into the Pirelli Tower in Milan on March 12, 2002. Clinical depression can show itself in a person beginning to take an interest in doomsday problems and then wishing that the doomsday would come sooner; from there it is one step to actively helping the process along.
14) Schizophrenics in the grip of obsessions. Delusions in schizophrenia force a person to find interrelations that do not exist in nature. Schizophrenics often hear voices which subordinate them. We cannot predict what kind of delusion might lead to the conclusion that the Earth must be destroyed. At the same time, mental abilities in schizophrenia do not decline enough to make the realization of long-term effective strategies impossible. Although special tests can prove the presence of schizophrenia, outwardly it is not always obvious; moreover, unlike a neurosis, it is not recognized by the person himself. Loss of the ability to doubt is one of the most serious symptoms of schizophrenia. Schizophrenia can be "infectious" in the form of religious sects replicating certain delusional ideas.

15) Fighters for peace. More than once in history a superweapon has been created with the thought that it would now make war impossible. Dynamite was created with this purpose, and the cobalt bomb was conceived with the same idea.
16) Children. Already now teenage hackers are one of the main sources of destructive activity on the Internet. Their intelligence is sufficient to master some one branch of knowledge and write a virus or make a small bomb, but not yet sufficient to realize the full consequences of their actions and their responsibility for them.
17) Perversions of human sexual behavior, inducing a person to spread himself in exotic ways. In the chapter "Dangers of molecular manufacturing" of the report of the Center for Responsible Nanotechnology we read that another possible source of grey goo could be irresponsible hobbyists for whom it becomes a pastime: people of a certain psychological type apparently cannot resist the temptation to create and release self-replicating entities, as the considerable number of existing computer viruses proves.
18) Special services and anti-terrorist organizations seeking to increase their influence in society. On July 29, 2008, Bruce Ivins, suspected of carrying out the anthrax letter attacks in the USA in the autumn of 2001, committed suicide. For 36 years before that he had been one of the main experts on biodefense and anthrax vaccination in the USA; he was married, had adopted two children, had written 44 articles and played a synthesizer in his local church. As a result of the bacteriological attack of 2001, damage of more than 1 billion dollars was caused, and on the order of 50 billion dollars was allocated for biodefense measures. Among other things, the planned purchase, for 800 million dollars, of an anthrax vaccine developed by Ivins, from which he was to receive ten thousand dollars in royalties, did not take place. As a result of the attack and the measures taken, the number of people working in biodefense programs with access to dangerous preparations increased tens of times, and with it the chance that among them there will again be someone who carries out a new attack. (There are, however, other theories about who really organized the crime.)
A person is always moved by several motives, only some of which are conscious and fully rational. In my observation, about 10 different desires have to align for me to make a certain decision - that is, for a sufficient surge of motivation to form. Special psychological procedures for revealing hidden purposes are rarely applied and are unknown to most people. It is therefore easy to expect that the listed motivations can act jointly and covertly, interfere nonlinearly, and produce an unexpected enormous surge, a "rogue wave".
Social groups willing to risk the fate of the planet


It is probably worth listing separately the social groups and organizations that aspire to the wrecking and remaking of the world order, and for the sake of this are either ready to run the risk of universal destruction or capable of creating it without realizing it. For example, Ronald Reagan declared a "Crusade" against the USSR while understanding that in the course of this confrontation the risk of a catastrophically dangerous war would increase. So:
1) World powers struggling for domination of the world. These can be powers that strike first under threat of losing their advantage, or challengers for world supremacy who choose radical and risky methods of achieving their goals. The psychology of these processes remains at the level of the struggle for the place of alpha male in a monkey troop, which is, however, rather rigidly determined by the nature of natural selection.
2) Utopian social movements aspiring to great goals, for example radical communists or religious organizations.
3) Various national, economic and political forces which do not receive their share in the present world order or which expect to lose their positions in the future.
4) One can also name the various devotees of the "poetry of apocalypse", fans of computer games in the spirit of Fallout, who are so attracted by this idea that, unconsciously - and sometimes consciously - they want it.
5) People living by the principle "after us, the deluge" - that is, people not directly interested in global catastrophe, but who prefer actions that bring benefit in the short term while causing enormous harm in the long term. This attitude can be especially aggravated by the awareness of the inevitability of one's own death, which is present in every person and is most strongly manifested in periods of risk and in old age. (Behavior model: "there is no fool like an old fool".)
6) Separately, one can list all the misunderstandings of the nature and probability of global catastrophes, which we will discuss in the second part of the book.

Humans as the main factor in global risks: a coefficient for risk assessment


To take into account the variety of human motivations, we can introduce a generalized probability factor k. This factor means, roughly speaking, the chance that the pilot of a plane will aim the plane in a suicide attack or, more generally, the share of people who will decide to use the means available to them to destroy themselves and other people. Here we do not distinguish between premeditated and spontaneous actions. For example, if in a certain country everyone has a gun at home, there will be a certain average number of its illegal uses. This number is very small. Let us assume (what follows are purely tentative order-of-magnitude estimates) that this factor is about one in a million per day for the USA (where there are 35 million guns in private hands and a high crime rate) and about one in a billion for Switzerland, taking the single case of the shooting in the Zug cantonal parliament. For aviation, if we divide the approximate number of all passenger airliner flights ever made (on the order of a billion) by the number of planes hijacked by terrorists for the attacks of September 11 (4), we get 1 in 250 million. At a suicide rate of 1 percent, this factor recalculated per person per day is roughly one in a million. There are about a billion computers in the world, and every day tens of new viruses appear, which gives k = 1/10,000,000 - that is, only one in tens of millions of users writes senseless and dangerous viruses (though commercial illegal spyware may be made by a larger number of people).
We see that under different conditions k, recalculated per "project" per day, fluctuates between one in a million and one in a billion. The conservative upper estimate is one in a million, whereas the most realistic estimate is perhaps one in a hundred million.
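A minimal sketch of these order-of-magnitude estimates of k:

```python
# Aviation: ~1 billion passenger flights ever made, 4 hijacked on September 11.
flights, hijacked = 1e9, 4
k_aviation = hijacked / flights
print(f"aviation: k ~ 1 in {1 / k_aviation:,.0f}")             # 1 in 250,000,000

# Suicides: ~1% of people over a ~70-year life, recalculated per person per day.
k_suicide = 0.01 / (70 * 365)
print(f"suicides: k ~ 1 in {1 / k_suicide:,.0f} per day")       # roughly 1 in a few million

# The text's working range: between 1e-6 (conservative bound) and 1e-8 (realistic).
```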
It should not be thought that if we distribute the launch keys among several people we will reduce the chance of dangerous madness a million-fold, since crazy ideas are infectious. Besides, personnel on duty at one of the missile silos in the USA admitted that out of boredom they had devised a system of scissors and string allowing one person to turn both launch keys simultaneously. That is, launch systems can be bypassed by cunning.
Moreover, madness can have a subtle and non-obvious character and pass smoothly from the domain of psychiatry into the domain of simply incorrect or inadequate decisions. It does not necessarily mean that a person will suddenly press the red button. In the case of paranoia it can take the form of a set of quite logical and convincing constructions capable of persuading other people of the necessity of undertaking slightly riskier actions in order to be protected from alleged dangers. "Madness" can also show itself as a refusal to act at the decisive moment, or as excessive persistence in certain errors leading to a chain of incorrect decisions.
Sir Martin Rees notes the following contradiction: in the future it will probably become possible to manage the behavior and even the character of people by means of high-precision medicines, genetic manipulation and other influences, making people ever more "normal" and safe. But this will reduce the natural variety of human behavior, killing the human in the human.

Conclusion: there will always be people who wish to destroy the world, and therefore we must take seriously all scenarios in which someone may work long and persistently to achieve this.
1. Building doomsday machines can be "rational" agential behavior: many of them could be built, and some game theory could be applied. Herman Kahn wrote about this.
2. We could use a probability distribution: we could calculate the frequency with which a person becomes an existential terrorist - it is around 1 in a million, based on the frequency of school shootings and similar events.
3. Humans have a built-in desire for the end of the world; that is why we like movies about the apocalypse. I wrote about this in Russian; it still needs to be translated.
4. Risk neglect: sometimes people proceed despite serious risks because doing so gives them some other value. Examples: the LHC, the METI program and Zaitsev. This is also a type of agent-like behavior.
5. Cognitive biases in human behavior.

Decision-making about a nuclear attack


The important question is whether the madness of one person can lead to the "pressing of the red button". This question has been studied mostly in connection with nuclear weapons, but it will arise in the same form with the appearance of any other dangerous weapon, technology or Doomsday Machine. It is important to know in whose hands the "red button" is: only the top leadership's, or those of a certain group of operators. Clearly, the wider the circle of operators with access to the weapon, the higher the risk.
There is the following contradiction between the efficiency and the safety of nuclear weapons. Either we have an absolutely reliable protection against unauthorized launch, which makes launch impossible whether by order of the president or by decision of a submarine commander; or we have a system capable of striking back within 8 minutes under intensive enemy countermeasures and the disruption of all communication systems. Real systems, whose design is at present a closely guarded secret, must find a balance between these contradictory requirements, but in the past efficiency was often preferred to safety. For example, in the 1960s and 70s in the USA missile launches were protected by a 14-digit password which was supposed to be communicated from the center; however, the password's value was set to all zeros, and everyone knew it (the military considered the password a nuisance that would prevent them from striking in time). Only later did an independent commission come and demand that a real password be created.
It is unlikely that the president will go mad in the middle of the night, demand the nuclear briefcase and press the button. But subtler scenarios are possible, in which unconsidered and irrational behavior caused by emotion, fatigue or misunderstanding launches a chain of actions leading to war. For example, Hitler, attacking Poland, did not expect that England would enter the war. Or the Americans, planning to attack Cuba in 1962, did not know that Soviet tactical nuclear weapons were already deployed there and that the troops had the right to use them.
An important aspect of decision-making about a nuclear strike is the operator's interaction with the instructions. The instructions, too, are created by people, and the situations described in them are perceived hypothetically rather than as real decisions about using the weapon. Carrying out the instructions, the operator also bears no responsibility, since he does what is written. As a result, responsibility is diluted and decisions become possible which no individual person would accept on his own. The case of missile-warning officer S.E. Petrov, who was later awarded a medal at the United Nations for saving humankind, is characteristic. Having detected in 1983 (shortly after the Korean Boeing was shot down) a launch of nuclear missiles from the territory of the USA, he decided not to give the command for a retaliatory strike, judging the alarm to be false. But Petrov was not an ordinary duty officer of his shift; he was a developer of the decision-making instructions who happened to be on that shift by chance. And therefore he overrode the instructions he himself had written; an ordinary duty officer would have had to carry them out.

Chapter 21. Events that change the probability of global catastrophe

Definition and general causes


Let us call a global risk of the second sort any event which considerably raises the probability of the extinction of humankind. A combination of such events creates a window of vulnerability. It is historically known that 99% of the species that have ever lived on Earth have died out, and species continue to die out every day. Obviously the extinction of these species occurred without the application of supertechnologies. The best known are the extinctions of the dinosaurs and of the Neanderthals. Among the causes of extinction, according to paleontologists, the first place is held by changes in the ecological situation - the destruction of food chains and the appearance of competitors - whereas natural cataclysms act only as a trigger that finishes off a weakened species. It was the dinosaurs that died out after the asteroid, because small mammalian predators were eating their young and their eggs; it was the Neanderthals that did not survive the last ice age, because they were opposed by the better organized Homo sapiens. Nevertheless it is difficult to use hypotheses about past extinctions to justify conclusions about future ones, since much here remains unclear. A more reliable example is the destruction of traditional societies and cultures. For example, the Russian peasantry as a special socio-cultural entity, as it was in the XIX century, disappeared entirely and irrevocably (one might say died out) in the course of urbanization and collectivization, despite the fact that historically it had been able to withstand both wars and epidemics. It was ruined by the new possibilities offered by urban civilization and by the new economic situation. The fate of the Australian Aborigines and of other communities that encountered a technically better equipped and more developed civilization is similar: individual people survive and may keep their memories, but of the culture only folklore ensembles remain. The same can be described on the example of an individual organism: when an organism is ill, its vulnerability to any external shock (or to an aggravation of the illness) increases. Thus, we can imagine the following two-phase scenario:
1. First, because of a large catastrophe, the Earth's population is sharply reduced, and manufacturing and science degrade. We call this space the "post-apocalyptic world". In cinema and literature such a world is usually described as arising after a nuclear war (the phase of destruction of civilization, but not of people).
2. The surviving people who remain in this world are much more vulnerable to any risks, such as a volcanic eruption, the fall of a small asteroid, or the exhaustion of resources. Moreover, they are compelled to struggle with the consequences of the civilizational catastrophe and with the dangerous remnants of civilization: contamination, exhausted resources, loss of skills, genetic degradation, the presence of dangerous weapons or of dangerous processes that began while civilization still existed (e.g., irreversible warming).
Several conclusions follow from this:

Two-phase scenarios force us to treat as dangerous those risks which we earlier rejected as incapable of ruining civilization.

In a sense the two-phase scenario is similar to a nonlinear interference of risks, but here the combination occurs in time, and the order of events matters.

The two-phase scenario can become a three- or more-phase scenario, in which each following phase of degradation makes humankind vulnerable to the next form of risk.

There need not be any direct connection between the first and second catastrophes. For example, people might find themselves in the post-apocalyptic world because of a nuclear war and die out from a supervolcano eruption; but they could equally arrive at this state of vulnerability to a supervolcano because of an epidemic or an economic downturn.
Consideration of multiphase scenarios is essentially probabilistic. The epoch of humankind's weakness, when it is vulnerable, may be called a window of vulnerability, which is characterized by a probability density. This means that such a window of vulnerability is limited in time. We are now living through an epoch of a window of vulnerability to supertechnologies.

Events which can open a vulnerability window


Two types of events belong to this class. The first are events which will inevitably occur in the XXI century, given the standard assumptions about the development of consumption and technologies; the only question is when (not every expert shares each of these expectations, and each rests on the assumption that no essentially new technologies will arise):
1. Exhaustion of oil.
2. Exhaustion of foodstuffs, caused by warming, droughts, overpopulation, desertification and the transition of cars to biofuel.
3. Exhaustion of water resources.
4. The crash of the world financial pyramid of debts and obligations.
5. Any other factor that gradually but irreversibly makes the environment unsuitable for habitation (global warming, glaciation, pollution).
The second type consists of events which may or may not occur, each with a certain probability. This does not make them safer, since any annual probability implies a "half-life" - the time within which the event most likely happens - and this time can be shorter than the time to maturity of the inevitable events, such as the exhaustion of some resource (see the sketch after the following list):
1. A large act of terrorism, on the scale of the explosion of a nuclear bomb in a big city.
2. A large natural or man-made catastrophe capable of affecting a considerable part of the world's population; so far such catastrophes have never occurred. The closest example is the accident at the Chernobyl nuclear power plant, which led to the rejection of nuclear power plant construction worldwide and to the present energy hunger, and was also an important factor in the collapse of the USSR.
3. Any of the items listed above as possible causes of global catastrophe, but in a weakened form: an epidemic caused by an artificial virus, an asteroid impact, radioactive contamination, and so on.
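A minimal sketch of the "half-life" of an annual probability mentioned before this list - the number of years after which the event has occurred with probability 1/2:

```python
import math

def half_life_years(annual_probability):
    """Solve (1 - p)^T = 1/2 for T."""
    return math.log(0.5) / math.log(1.0 - annual_probability)

for p in (0.01, 0.03, 0.1):
    print(f"annual probability {p:.0%}: half-life ~{half_life_years(p):.0f} years")
# 1% -> ~69 years, 3% -> ~23 years, 10% -> ~7 years
```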
The subsequent phases of the growth of the window of vulnerability include world war and the development and use of Doomsday weapons.

System crises
Could a global catastrophe occur not according to the fairly obvious scheme described above - that is, without originating at one starting point at a concrete moment of time and spreading from it to the whole world? Yes, this is possible in the case of a systemic crisis. Usually a systemic crisis cannot exterminate the entire population, but it can certainly be a global catastrophe of the second sort. Nevertheless, there are models in which a systemic crisis does exterminate the whole population.
The simplest such model is a predator-prey ecological system, for example wolves and elk on an island. In such a system, if the number of predators exceeds a certain critical value X, they eat all the elk to the last one, after which they are doomed to die out, in the process eating only each other. In nature there is protection against such situations in the form of various feedbacks in biosystems. Known examples: deer and grass on a Canadian island - deer were released on the island, multiplied, over decades ate all the grass and began to die out. A similar but more complicated situation developed on Easter Island with the participation of people. The Polynesians, who arrived on the island around the VIII century AD, created a developed society which, however, gradually destroyed the forests, using trees in particular for transporting the famous statues. The loss of the forests led to a decline in the available amount of food. Eventually the forests were cut down completely, and the society considerably degraded; its numbers fell from 20,000 to 2,000 people (though it did not die out). At that point the island was discovered by Europeans. The purest example is the reproduction of yeast in a corked bottle, which proceeds exponentially and then all of the yeast die out at once, poisoned by the product of their own metabolism, ethyl alcohol. Or the collapse of a supernova, which does not depend on any one of its atoms or even on its larger parts.
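A minimal sketch of the predator-prey collapse described above: a discrete Lotka-Volterra-style model in which too many predators drive the prey to zero, after which the predators die out as well. All parameters are illustrative.

```python
def simulate(prey, predators, steps=200):
    """Crude discrete predator-prey dynamics; returns (step, prey, predators)."""
    for t in range(steps):
        prey_next = prey + 0.1 * prey - 0.002 * prey * predators
        predators_next = predators + 0.0005 * prey * predators - 0.1 * predators
        prey, predators = max(0.0, prey_next), max(0.0, predators_next)
        if prey == 0 and predators < 1:
            return t, prey, predators          # the system has passed "through zero"
    return steps, prey, predators

print(simulate(prey=1000, predators=50))    # moderate predator load: the system persists
print(simulate(prey=1000, predators=500))   # predators above the critical value: total collapse
```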
So, a systemic crisis is sometimes capable of driving a population "through zero", that is, of killing all individuals. Moreover, a systemic crisis does not begin at any particular moment or at any particular point. It is impossible to say that if some one wolf had not existed, or if there had been one more elk, anything would have changed. That is, a systemic crisis does not depend on the behavior of any one concrete element. Likewise, it is difficult to say when a systemic crisis became irreversible. Accordingly, it is difficult to resist, because there is no single place to apply one's efforts.
The development of modern technologies likewise does not occur at one point; no single person can essentially accelerate or slow it down.
A system approaches a systemic crisis as a whole. It is interesting to estimate the chances of the survival of elements when their system disintegrates - in other words, the survival of people at the destruction of civilization. It can be shown that the stronger the interconnections in a system, the more probable it is that the crash of the system will mean the destruction of all its elements without exception. If 99.999% of a bacterial culture is exterminated, the few remaining copies are enough to fully restore the numbers and properties of the culture. If a tree is cut down, branches will grow from the stump and eventually restore the functionality of the tree. But if even a small part of the vitally important parts of the human body is damaged, especially the brain, the person dies once and for all, down to the last of the hundreds of billions of cells - it is hard to destroy a strain of bacteria with such efficiency. Likewise a technological civilization: having reached a certain level of complexity, it cannot afterwards regress painlessly to a previous level simply by reducing its technologies and population; instead it has a chance of falling entirely, to zero. (Nowadays a few hours' power cut is a disaster for us, and people die because of it, whereas a little more than a hundred years ago electricity was used only in rare experiments. Many modern constructions cannot exist without a continuous supply of energy: mines would flood, the openwork structures of shopping centers would collapse after a single winter without snow removal and heating, and so on.)

The more systemically organized a structure is, the more its properties are determined by the relative arrangement and interaction of its elements rather than by the elements themselves, and the larger the role played by management compared with raw physical strength. If all the people in the world were suddenly shuffled in space, each thrown onto a different continent, it would mean the destruction of modern civilization, even though every individual person would still be alive. Similarly, if a certain animal is cut into several pieces with a thin knife, almost all of its individual cells will still be alive, but the animal as a whole will be dead.
The more complex a system is, the stronger the long-term consequences of a catastrophe in it are compared with the short-term ones. That is, the system has the property of amplifying small events - not all of them, of course, but those that fall into the focus of its attention. Sufficiently large catastrophes usually fall into this focus because they overflow the system's threshold of stability. For example, the most long-term consequences of the Chernobyl accident were the disintegration of the USSR and a long period of stagnation in nuclear power engineering, as a result of which the world now faces an energy shortage. The terrorist attacks of September 11 destroyed buildings with an initial value of 1-3 billion dollars, but the damage to the economy amounted to 100 billion. These attacks also contributed to a bubble in the real estate market (through low interest rates intended to stimulate the economy) and to the war in Iraq, on which roughly 1.4 trillion dollars have been spent. Moreover, the main damage is still ahead, as the withdrawal of troops from Iraq and the crisis in the real estate market will inflict reputational, political and economic damage worth many billions of dollars. (Plus, for example, people wounded in Iraq will have to be treated for decades, and billions of dollars will have to be allocated for that.) A similar logic of events and their consequences was described by L.N. Tolstoy in the novel War and Peace, tracing how the consequences of the damage suffered by the French army at Borodino accumulated in a subsequent chain of events: the fire of Moscow, the loss of the army at the Berezina, the collapse of the empire. In all these cases the informational damage - that is, the damage to the interactions connected with organization and management - exceeded the physical damage. These events provoked chains of wrong decisions and destroyed management structures, that is, the structure of the future. One can also put it differently: a sufficiently large event can throw the system into another channel, which slowly but irreversibly diverges from the former one.
Let us now examine the various kinds of systemic crises that occur in nature, to see which of them could affect a modern civilization.
1. A surplus of predators - the example we discussed above with wolves and elk.

2. An example from economics: the Great Depression. A closed cycle of production cuts - layoffs - falling demand - further production cuts. A cycle which by its own structure must pass through zero; only non-economic events, such as war and the confiscation of gold, could break it.
3. Another example of a globally self-reproducing structure is an arms race. It induces participants to create ever larger arsenals of ever more dangerous weapons and to keep them at a high degree of combat readiness. In addition, it draws in new states and stimulates the development of dangerous technologies. In other words, there are certain structural situations in civilization that are more dangerous than any particular weapon of mass destruction. These structures are characterized by the fact that they reproduce themselves at each stage in increasing volume and continue to operate at any level of depletion of the civilization's resources.
4. Strategic instability: whoever strikes first wins. In addition, there are situations in which the side that has an advantage must attack before it loses that advantage.
5. Escalation of a split in a society, resulting in an increasingly open and intense struggle and a growing polarization of the society, whose members are compelled to choose which side they are on (for example, the confrontation between Fatah and Hamas in Palestine).
6. The structural crisis of informational transparency, which arises when everyone knows everything. (As in the film Minority Report, where the ability of psychics to predict the future leads to the outbreak of war.) One book on military strategy described the following situation: if one of two opponents does not know what condition the other is in, he stays at rest. But if he knows that the other has started to deploy his forces, that provokes him to do the same; and if he knows that the opponent is not deploying his forces, that also provokes him to strike first. In other words, informational transparency infinitely accelerates the feedback between the opposing sides, so that fast processes with positive feedback become possible. Espionage nanorobots would make the world informationally transparent - and very quickly.
7. The structural crisis of mutual distrust, as for example during the struggle against "enemies of the people", when everyone begins to see enemies in everyone else and to exterminate seeming enemies, which leads to a self-reinforcing search for enemies and to revenge for false accusations. Blood feud, by the way, is also a structural crisis that can consume whole communities. Crises of mutual distrust also happen in economics, leading to runs on banks and rising interest rates on credit, and they too are self-amplifying processes. The credit crisis that began in the world in August 2007 is connected to a considerable extent with the loss of trust of all banks and financial institutions in one another, due to unknown holdings of bad mortgage paper, losses from which kept surfacing like corpses in a river in the most unexpected places, in the words of the American economist N. Roubini.
8. The behavioral model that consists in destroying others in order to solve a problem (for example: notional "Americans" want to destroy all "terrorists", and "terrorists" want to destroy all "Americans"). But this is only a path to the growth of the conflict - and to the spread of this very model. It is like the prisoner's dilemma: if both sides opt for peace, both win, but a side that is "kinder" alone will lose. In other words, pathological self-organization can occur even when the majority is against it. For example, at the beginning of the arms race it was already clear what it was, and a forecast of its development had been published; but that did not stop the process.
9. The economic crisis connected with the feedback between predictions and the behavior of the object under observation, which makes that object completely unpredictable - as happens with speculation in markets. This unpredictability is reflected in the appearance of the most improbable trends, among which there can be catastrophic ones. The impression arises that trends seek out new catastrophic modes in which they cannot be predicted. (The argument goes like this: if markets were predictable, everyone could profit from them. But not everyone can profit from speculation, since it is a zero-sum game. Hence the behavior of markets will always be more complex than the systems that predict them; in other words, a situation of "dynamic chaos" arises.) In military confrontation, too, behaving unpredictably sometimes turns out to be more advantageous than behaving in the most effective way, because the effective way is easy for the opponent to calculate.
10. Another variant of economic structural crisis is the endless postponement of recession by pumping money into the economy; it can pass a point of irreversibility beyond which a soft exit from the process is impossible. This is described in H. Minsky's theory of credit cycles. Minsky divides debtors into three categories: the diligent; those who can earn enough to pay the interest but not the principal and are therefore compelled to roll the debt over forever; and those who are compelled to take out new loans to pay off old ones, which resembles a financial pyramid (a Ponzi scheme). The first category of borrowers is free and can repay the debt in full. The second group is compelled to pay the debt eternally and cannot leave this condition, but is able to service it. The third category is compelled to expand its operations continuously and will still go bankrupt within a limited time. (A toy sketch of this classification is given in code after this list.)
Minsky shows that the appearance of all three types of borrowers and the gradual increase in the share of borrowers of the third type is a natural process in a capitalist economy during a boom. The modern economy, led by its locomotive, the USA, is somewhere between the second and the third type. The volume of debts of various kinds created in the USA alone is, by some estimates, on the order of 100 trillion dollars (this includes roughly 7 trillion of public debt, 14 trillion in mortgages, household debts for credit cards, education and cars, corporate promissory notes, and also the obligations of the US government for the medical care of pensioners (Medicare)). Meanwhile the US gross national product is on the order of 13 trillion dollars a year. Clearly, all this money does not have to be paid tomorrow; it is spread over the next 30 years and between different subjects who intend, with difficulty, to use receipts from some debts to pay off others. Debt in itself is not evil - rather, it describes who will pay and receive what and when; in other words, it is a financial machine for planning the future. But when it passes into the third mode, it enters a mechanism of self-destruction, which is the stronger the later it comes.
Opinions differ on whether the world economy really develops thanks to a global financial pyramid or not. The billionaire Warren Buffett called derivatives (multi-stage debts) financial weapons of mass destruction. A dangerous tendency is to think that this systemic problem with debt concerns only the USA as a country: in fact, it concerns the entire world economy. The damage from the Great Depression of 1929 was twice the damage the USA suffered from the Second World War, and it spread, like the Spanish flu ten years earlier, to all continents, hitting Europe harder than the States. The Great Depression of 1929 was the largest world systemic crisis up to the disintegration of the USSR. Its basic difficulty was that people did not understand what was happening. Why, if there are people willing to work and hungry people demanding food, does food become cheaper and yet nobody can buy it and farmers are ruined? And the authorities burned surpluses of food - not because they were villains or idiots, but because they simply did not understand how to make the system work. It should be noted that even now there are different points of view about the causes of the Great Depression, and especially about which measures would have been correct and why it finally ended. Total self-sustaining misunderstanding is an important part of a systemic crisis. Minsky suggests increasing the role of the state as a borrower of last resort in order to reduce the cyclic fluctuations of capitalist economy. This has already worked in the crises of 1975, 1982 and the beginning of the 1990s. But a new danger is contained in it: banks that are bailed out every time become more and more reckless in accumulating debts, since they are confident that the state will rescue them from bankruptcy this time as well. In addition, they are let down by statistical models: the longer there has been no economic depression, the longer there will be none according to statistical models, whereas according to structural models, the longer there has been no recession, the bigger the eventual one will be. Minsky's credit cycle is connected first of all with excessive investment, and Moore's law, as we know, leans in many respects on excess investment in the framework of venture capital. Therefore an economic recession would deal a heavy blow to Moore's law.
11. Crises connected with unpredictable processes in supercomplex systems. There is a general tendency toward increasing complexity of human civilization, which creates the possibility of quick, unpredictable collapses. (Just as an airplane crashed in Peru because ground personnel had taped over the speed sensors: the instruments produced errors, the crew decided it was a computer failure, and when the computer warned of proximity to the ground they did not believe it and flew into the sea.) Or erroneous activations of nuclear attack warning systems. Whereas earlier the principal cause of catastrophes was natural force majeure (for example, a storm), by the twentieth century it had been superseded by the human factor (that is, a quite specific error at the stage of design, tuning or operation). By the end of the twentieth century, however, the complexity of technical and social networks had become so great that failures in their operation became not local but systemic, following scenarios whose detection was an intractable challenge for the designers. An example is the Chernobyl catastrophe, where the personnel followed the letter of the instructions yet did things that none of the authors of those instructions expected or could have assumed. As a result everyone acted correctly, and in sum the system did not work. That is, the cause of the catastrophe was the supercomplexity of the system, not a concrete error by a concrete person. Perrow's theory of normal accidents says the same thing: catastrophes are a natural property of supercomplex systems. Chaos theory studies such systems. It suggests that a complex system with a large number of determining factors can move along a strange attractor - that is, along a path containing sudden transitions to a catastrophic mode. An expression of this idea is the theory of "normal accidents", which says that it is impossible to create an absolutely catastrophe-free system even if ideal employees are hired and absolutely sound equipment is installed. Normal accidents are a natural property of complex systems that meet two criteria: complexity of the device and tight coupling of its parts.
12. The classical contradiction between productive forces and relations of production, an example of which is the current situation in the world, with its basic contradiction between a set of countries possessing national armies and the unity of the global economy.
13. Self-reproducing disorganization (the "parade of sovereignties" at the end of the USSR).
14. Self-sustaining moral degradation (the collapse of the Roman Empire).
16. A domino effect.
17. "Natural" selection of short-term benefits over long-term ones. (Marx: the more effective exploiters drive out the "kind" ones.)

18. The tendency toward concentration of power in the hands of one person. (All revolutions have ended in dictatorship.) Having once taken the path of authoritarian rule, the dictator is compelled to move toward absolute power so that he is not overthrown.
19. An avalanche of reforms (Machiavelli: small changes pave the way for big changes. An example: the Perestroika era).
20. A crisis of growing disbelief - an increase in lies and information noise (benefit instead of reliability, public relations instead of truth, noise instead of speech); a crisis of loss of trust, when the less a person trusts others, the more he lies himself, knowing that the same is expected of him. If the criterion of truth is experiment, the result of experiment is a new technology, and the value of that technology is money, then the intermediate steps gradually drop out.
21. Self-organized criticality. A pile of sand onto which grains fall one at a time and down which avalanches slide, so that a certain average slope is established, is an example of so-called self-organized criticality. This model can be compared with the density of catastrophes in any sphere of human activity: if there are too many catastrophes in some area, it attracts more attention and more resources are put into security measures there; at the same time other areas receive less attention, and the risk in them grows. As a result we get a world in which the density of catastrophes is distributed fairly evenly across all kinds of activity. However, a mathematical property of systems with self-organized criticality is that avalanches of unboundedly large size can occur in them. (A minimal simulation of such a sandpile is given after this list.) Self-organized criticality arises when the concentration of unstable elements reaches some threshold level, so that they begin to establish connections with one another and to create their own subsystem penetrating the initial system. Since the number of scenarios and scenario factors that can lead to global catastrophe is large and constantly growing, the chances of such self-organization increase. It can also be put differently: a catastrophic process arises when the system's own capacity for maintaining homeostasis is exhausted. But a catastrophic process, once it has arisen, is also a kind of system and also possesses its own homeostasis and stability, as S.B. Pereslegin writes with reference to the theory of military operations. This turns the catastrophic process into a self-sustaining phenomenon capable of passing from one level to another. The risk of a chain reaction of catastrophic phenomena is further increased by the existence of people - terrorists - who carefully search for various hidden vulnerabilities and want to exploit them.
22. The crisis connected with the aspiration to get rid of crises. (For example, the more strongly the Israelis want to get rid of the Palestinians, the more strongly the Palestinians want to destroy the Israelis.) What distinguishes this crisis is precisely the awareness of the crisis situation, unlike the previous crises; yet this awareness often does not lead to an improvement of the situation. In this connection one may recall Murphy's law: if you investigate a certain problem long enough, you will eventually discover that you are part of it.
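Two of the items above lend themselves to short illustrative sketches. First, Minsky's three categories of borrowers from point 10 can be written down as a simple classification rule; the income and debt figures below are arbitrary assumptions of mine, not figures from Minsky.

```python
def minsky_type(income, debt, interest_rate, principal_payment):
    """Classify a borrower by whether income covers interest and principal."""
    interest = debt * interest_rate
    if income >= interest + principal_payment:
        return "hedge: can repay both interest and principal"
    if income >= interest:
        return "speculative: can service the interest, must roll the principal over forever"
    return "Ponzi: must borrow anew even to pay the interest"

print(minsky_type(income=120, debt=1000, interest_rate=0.05, principal_payment=50))
print(minsky_type(income=60,  debt=1000, interest_rate=0.05, principal_payment=50))
print(minsky_type(income=30,  debt=1000, interest_rate=0.05, principal_payment=50))
```

Second, the sandpile of point 21 can be simulated directly with the standard Bak-Tang-Wiesenfeld model (the grid size, number of grains and toppling threshold below are the usual textbook choices, not parameters taken from this book): small avalanches dominate, but very large ones keep occurring.

```python
import random
from collections import Counter

def drop_grains(size=20, grains=20_000, threshold=4, seed=0):
    """Drop sand grains one by one and record the size of every avalanche."""
    random.seed(seed)
    grid = [[0] * size for _ in range(size)]
    avalanche_sizes = Counter()
    for _ in range(grains):
        x, y = random.randrange(size), random.randrange(size)
        grid[x][y] += 1
        topples, unstable = 0, [(x, y)]
        while unstable:
            i, j = unstable.pop()
            if grid[i][j] < threshold:
                continue
            grid[i][j] -= threshold            # the cell topples
            topples += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < size and 0 <= nj < size:   # grains at the edge fall off
                    grid[ni][nj] += 1
                    unstable.append((ni, nj))
        avalanche_sizes[topples] += 1
    return avalanche_sizes

sizes = drop_grains()
print("largest avalanche observed:", max(sizes))
for s in (1, 10, 100):
    print(f"avalanches with at least {s} topplings:",
          sum(n for k, n in sizes.items() if k >= s))
```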
Structural crises are obscure to people, whose minds are accustomed to thinking in terms of objects and subjects of action. Because of this, the more they think about such a crisis and try to cope with it - for example, by exterminating one of the parties to the conflict - the more the crisis expands. Structural crises cause a feeling of bewilderment and a search for a hidden enemy (who then becomes the very object that generates the crisis); it is more convenient, for example, to think that the USSR was disorganized by the CIA. Examples of systemic crisis in the human body are aging and obesity. Further, even more complex structural crises that are not yet obvious are possible.

Crisis of crises
In the modern world all the named kinds of crises are present, but the system as a whole remains stable because these forces pull in different directions. (For example, authoritarianism is opposed by a peculiar tendency toward splitting - the USSR and China, Sunni and Shia, Stalin and Trotsky - which creates a crack-type crisis and counterbalances unipolar crystallization.) Thus separate processes counterbalance each other: authoritarianism versus disorganization, and so on. Homeostasis operates in the spirit of Le Chatelier's principle. (This principle states that an external influence that takes a system out of its state of thermodynamic equilibrium causes processes in the system that tend to weaken the effect of that influence.)
It would be dangerous, however, if all these separate crises self-organized in such a way that a certain "crisis of crises" emerged. Systems strive to remain in equilibrium, but after a strong enough blow a system can pass into an equilibrium of motion - in other words, into a new, self-sustaining process of destruction that possesses its own stability. An example from everyday life: to leave the house, one sometimes has to make a certain effort to "shake oneself up", but once the process of travel has started, it already has its own dynamics, inertia and structure.
At present all crises in human development are arranged so as to keep mankind in the channel of gradual economic, scientific, technical and population growth. In a crisis of crises, the same factors could organize themselves so as to work continuously toward the destruction of human civilization.
The properties of a "crisis of crises": it cannot be understood, because as soon as you begin to think about it you are drawn into it and you strengthen it (this is how the Arab-Israeli conflict works); because any understanding of it has no value amid dense information noise; and because it is actually more complex than any one person can understand, while offering a number of obvious, incorrect, simplified understandings. (Murphy's law: any challenge has a simple, obvious and incorrect solution.)
The elements of a crisis of crises are not events and interactions in the world, but crises of a lower order, which are structured not without the help of human intelligence. An especially important role here is played by the awareness that a crisis is under way, which leads to at least two models of behavior: either striving to get rid of the crisis as quickly as possible, or striving to take advantage of it. Both models of behavior can only strengthen the crisis, if only because the different parties to the conflict have different ideas about how to end the crisis and how to benefit from it.
Since the understanding of a crisis by individual players is itself part of the crisis, the crisis will be more complex than any understanding of it. Even when it ends, there will be no understanding of what has happened to us. That is why there are so many different opinions and discussions about what happened in 1941 or why the USSR broke up.
One more metaphor for the "crisis of crises" is the following reasoning, which I heard with reference to financial markets. There is a big difference between a crisis in the market and a crisis of the market. In the first case there are sharp jumps in prices and a change in the trading situation. In the second, trading stops. In this sense a global catastrophe is not the next crisis on the path of development, in which the new defeats the old; it is the termination of development itself.

Technological Singularity
One deep observation in the spirit of the idea of a "crisis of crises" is presented in A.D. Panov's article "Crisis of the planetary cycle of Universal history and a possible role of program SETI in post-crisis development". Considering the periodicity of various key moments from the appearance of life on Earth, he finds a law according to which the density of these transitional epochs continuously increases according to a hyperbolic law and consequently has a singularity point at which it reaches infinity. This means that what is coming is not simply the next crisis, but a crisis of the whole model that describes the process of evolution from the origin of life up to the present. And if earlier each crisis served to destroy the old and give rise to the new, now this whole model of development through crises comes to an end. And this model says nothing about what will be after the Singularity point.
According to Panov's calculations, this point lies around 2027. It is interesting that several essentially different prognostic models point to the vicinity of 2030 as the Singularity point at which their prognostic curves go to infinity. (For example, F.M. Esfandiary took the name FM-2030 in commemoration of the coming transformation; forecasts made in the middle of the twentieth century also pointed to around 2030 for the creation of AI and for the exhaustion of resources.) It is obvious that global risks cluster around this point, since it is a classic blow-up regime ("mode with aggravation"). However, they can also occur considerably earlier than this point, since there will be crises before it too.
In Panov's model each successive crisis is separated from the previous one by a time interval 2.42 times shorter. If the last crisis fell at the beginning of the 1990s and the penultimate one on the Second World War, then the next crisis (the moment of exit from it), according to Panov's model, will be around 2014, and the following ones around 2022, 2025 and 2026, after which their density will grow continuously. Of course the exact values of these figures are not correct, but the general pattern holds. The last crisis - the disintegration of the old and the creation of the new - occurred in the early 1990s and consisted in the disintegration of the USSR and the appearance of the Internet.
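The arithmetic behind these dates is simple enough to reproduce. The sketch below is my own back-of-the-envelope calculation, not Panov's published one: I take the end of the Second World War as 1945 and read "the beginning of the 1990s" as roughly 1994 (the exit from that crisis); with the ratio 2.42 the remaining intervals form a geometric series with a finite accumulation point.

```python
ratio = 2.42
penultimate_crisis = 1945.0   # Second World War (assumed reading)
last_crisis = 1994.0          # exit from the crisis of the early 1990s (assumed reading)

interval = (last_crisis - penultimate_crisis) / ratio
date, dates = last_crisis, []
for _ in range(6):
    date += interval
    dates.append(round(date, 1))
    interval /= ratio

# the infinite series of shrinking intervals sums to a finite limit
limit = last_crisis + (last_crisis - penultimate_crisis) / (ratio - 1)
print("next crisis dates:", dates)             # about 2014, 2023, 2026, 2027, ...
print("accumulation point:", round(limit, 1))  # about 2028, near Panov's estimate
```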
This means that in the period between now and 2014 we should pass through one more crisis of comparable scale. If this is true, we can already observe its emergence now within a five-year prediction horizon. However, this crisis will by no means be the definitive global catastrophe we are speaking about, and between it and the crisis of the whole model in the 2020s an island of stability a few years long is possible.
Some independent researchers have come to the idea of a possible Technological Singularity around 2030 by extrapolating various trends - from the level of miniaturization of devices to the computing power necessary to simulate a human brain. The first to coin the term Technological Singularity was Vernor Vinge in his article of 1993. Mathematically, the Singularity does not differ from a blow-up regime, that is, from a catastrophe, and as the end of a huge historical epoch it will certainly be a catastrophe. However, the Singularity can be positive if it preserves people and considerably expands their potential, and correspondingly negative if as a result of this process people perish or lose the great future that they could have had. From the point of view of our research we will consider positive any outcome of the Singularity after which people continue to live.
The fastest, most complex and most unpredictable process that is often identified with the Technological Singularity is the appearance of universal AI capable of self-improvement, and its hyperbolic growth. (It can be argued that the acceleration of development that took place in the past is connected with the acceleration and improvement of ways of solving problems - from simple search and natural selection, to sexual selection, the appearance of humans, language, writing, science, computers and venture investment; each successive step was a step in the development of intelligence, and a possible future self-improving AI would only continue this tendency.)
Concerning the Technological Singularity several plausible statements can be formulated.
First, the Singularity forms an absolute horizon of forecasting. We cannot say precisely what will happen after the Singularity, because it is a matter of an infinitely complex process. Moreover, we cannot say anything either about the moment of the Singularity or about a certain time interval before it. We can only make certain assumptions about when the Singularity will occur, but here again the scatter is wide. In fact, the Singularity could happen tomorrow in the event of an unexpected breakthrough in AI research.
Secondly, from the point of view of our modern understanding, an actual infinity cannot be reached. Because of this, an absolute Singularity is not achievable. This can be interpreted to mean that as the Singularity is approached, various oscillatory processes in the system amplify and destroy it before the point of infinity is reached. If so, the probability density of global catastrophes before the Singularity increases without bound. (Compare G.G. Malinetsky's concept of the increase in the frequency and amplitude of fluctuations in a system before a catastrophe, which are signs of its approach.) Or it may mean an infinite condensation of historical time, by virtue of which the Singularity will never be reached, as happens with objects falling into a black hole.
Thirdly, the whole system approaches the Singularity as one. This means that one should not expect the Singularity to leave someone untouched, or that there will be several different Singularities. Although it may begin at one point on Earth, say in a laboratory working on the creation of AI, in the course of its development it will engulf the whole Earth.
From the point of view of our research it is important to note that a global catastrophe does not have to be a Technological Singularity. A global catastrophe can be large-scale but, in the end, a simple process, like a collision with an asteroid. Such a global catastrophe has some features of a blow-up regime - for example, the sharp acceleration of the density of events at the moment the asteroid contacts the Earth (it lasts about one second) - but it contains no superintelligence, which is by definition inconceivable to us.
From the above it follows that if we accept the concept of the Technological Singularity, we can do nothing to measure or prevent risks beyond the moment of the Singularity; we must instead prevent these risks before its arrival (especially in the period of heightened vulnerability just before it) and strive for a positive Singularity.
The concept of the Technological Singularity, as a hypothetical point at which prognostic curves bend toward infinity around 2030, has been discovered independently several times (by extrapolation of different curves - from population growth by Kapitsa to the miniaturization of technological devices), and a group of people has by now formed calling for us to strive toward this event. More details about the Technological Singularity can be found in the articles: V. Vinge, "Technological Singularity"; Yudkowsky, "Staring into the Singularity"; David Brin, "Singularities and Nightmares"; Michael Deering, "Dawn of the Singularity".

Overshooting leads to simultaneous exhaustion of all resources


Some resources can not merely run out but be exhausted, so to speak, into the negative. For example, over-exploitation of soils leads to their rapid and complete erosion. This question was investigated by Meadows in The Limits to Growth. Investigating mathematical models, he showed that the overshooting of some resource inevitably takes the system to the edge of destruction. For example, a surplus of predators leads to such depletion of the prey population that the prey then die out completely and the predators are doomed to starvation. Another example: environmental contamination so great that the environment's capacity for self-restoration is destroyed.
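A toy overshoot-and-collapse run in the spirit of these models can be written in a few lines. The equations and all numbers below are my own illustrative construction (not Meadows' World3): a population grows on a resource whose capacity to regenerate is itself destroyed by over-use, so once the decline starts it goes far below what a simple shortage would cause, and the resource does not come back.

```python
def overshoot(population=1.0, resource=100.0, steps=80,
              growth=0.05, use_per_capita=0.5, regrowth=3.0):
    """Toy model: regeneration is proportional to the remaining stock."""
    trajectory = []
    for t in range(steps):
        demand = population * use_per_capita
        harvested = min(resource, demand)
        # over-harvesting erodes the resource base: a depleted stock regenerates less
        resource = min(100.0, max(0.0, resource - harvested + regrowth * resource / 100.0))
        fed = harvested / demand if demand > 0 else 1.0
        population = max(0.0, population * (1 + growth * (2 * fed - 1)))
        trajectory.append((t, round(population, 2), round(resource, 1)))
    return trajectory

run = overshoot()
for t, pop, res in run[::10] + [run[-1]]:
    print(f"t={t:2d}  population={pop:6.2f}  resource={res:6.1f}")
# growth looks healthy for decades, then the resource base collapses to zero
# and the population, having overshot, declines with nothing left to recover on
```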
Minsky's credit cycle applies not only to money but also to the exhausting overshoot of any natural resource. It is characteristic of mankind to overshoot any resource that becomes accessible to it. In this sense it is no wonder that the overshooting of many resources happens practically simultaneously - after all, the over-expenditure of one resource can be hidden by spending another. For example, a shortfall of money for mortgage payments can be hidden by paying through a credit card; in the same way the exhaustion, since the Second World War, of 30 percent of the land suitable for agriculture can be hidden by putting more resources (that is, energy) into cultivating the remaining fields; or the exhaustion of aquifers can be hidden by spending more energy on extracting water from deeper layers. People have managed to overcome problems of over-exhaustion each time by making a technological leap, as happened in the Neolithic revolution. However, this did not always happen smoothly; sometimes the solution appeared only when a full-scale crisis had already unfolded. For example, the Neolithic revolution - the transition from gathering to settled agriculture - occurred only after the population had been considerably reduced as a result of the over-exhaustion of resources in the hunter-gatherer society.
In the twenty-first century we are threatened with the simultaneous exhaustion of many important resources as a result of overshooting that is already under way. We will list various claims of exhaustion without discussing the validity of each. From the economic point of view the definitive exhaustion of any resource is impossible; the question is what the remaining part of the resource will cost and whether it will suffice for everyone. For this reason what matters is not the moment of exhaustion but the moment of maximum extraction (the peak), followed by a period of rapid decline in production of the resource. The decline period can be even more dangerous than a period of complete absence, because at that moment a desperate struggle for the resource begins, that is, a war can begin. (A sketch of such a peak curve is given right after the list below.) I will name some future or already passed resource peaks.
The peak of the world fish catch - passed in 1989.
Exhaustion of land suitable for agriculture.
The peak of food production as a whole.
Peak oil - possibly at the present moment.
Peak gas - later, but with a sharper decline after it.
Decommissioning of nuclear reactors.
Exhaustion of drinking water and water for irrigation.
Exhaustion of certain rare metals (by 2050).
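The difference between "running out" and "peaking" is easiest to see on a Hubbert-type curve, where yearly production is the derivative of a logistic cumulative-extraction curve. The sketch below uses that standard textbook shape; the total, peak year and steepness are invented placeholders, not estimates for any real resource from the list above.

```python
import math

def hubbert_production(year, total=2000.0, peak_year=2010, steepness=0.06):
    """Yearly output as the derivative of logistic cumulative extraction."""
    x = math.exp(-steepness * (year - peak_year))
    return total * steepness * x / (1 + x) ** 2

for year in range(1970, 2071, 20):
    print(year, round(hubbert_production(year), 1))
# output rises to a maximum at peak_year and then falls symmetrically,
# long before the resource is anywhere near "completely" exhausted
```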
Once again I will emphasize: in this work the problem of resource exhaustion is considered only from the point of view of whether it can lead to the definitive extinction of mankind. I believe that in itself it cannot, but the aggravation of these problems is capable of triggering an escalation of international conflicts and leading to a serious war.
It is interesting to examine the following question. If a certain subject goes bankrupt, this means that all its sources of money run out simultaneously; and if the resources of a technological civilization are exhausted, then all its resources run out simultaneously, because in modern conditions energy performs the function of money in the technological system and allows any resource to be extracted as long as energy exists (for example, pumping water from deep layers). Does this mean an equivalence of money and energy, such that an energy crisis would occur simultaneously with a financial one and vice versa? I think so. Roughly speaking, real money means the possibility of buying goods; if the economy passes into a scarcity mode, the possibility of getting something really valuable for money disappears.
There are different datings of the possible peak in oil extraction and in other resources, but all of them fall in the interval from 2006 to 2050. Because one resource can be replaced by another, the peaks of maximum extraction of different resources will tend to be pulled together into one general peak, in the same way as, thanks to NBIC convergence, the peaks of development of different technologies are pulled together. It is also interesting that the peak of resource extraction is expected in the same period as the Technological Singularity. If the Singularity happens earlier, present-day resources will not matter much, since the immeasurably greater resources of space will become accessible. Conversely, if the decline in the worldwide extraction of all resources occurs before the Singularity, it may prevent its arrival. The real process will probably be more complicated, since not only are the peaks of technological development and the peaks of resource extraction pulled together within their own groups, but peaks of essentially different groups are also pulled together around 2030, plus or minus 20 years - namely, the peak of human population according to Kapitsa, the peak of the possible number of war victims, and the peak of predictions of the risk of destruction of civilization, which we discussed above. There are some interesting hypotheses about the reasons for such convergence which we will not discuss here.

System crisis and technological risks


It is possible to consider the systemic crisis of all modern society without taking into account the new possibilities and dangers created by new technologies. Then this crisis will be described in terms of economic, political or ecological crisis; we may call it a socio-economic systemic crisis. On the other hand, one can consider the space of possibilities created by the appearance and interaction of many different new technologies - for example, investigate how progress in biotechnology will affect our ability to create AI and to interact with it. We may call such a process a technological systemic event. Both directions are actively investigated, but as if they concerned two different spaces. For example, those who study and predict Peak Oil by 2030 are not at all interested in, and do not even mention in their research, the problems connected with the development of AI; and conversely, those who are confident that powerful AI will be developed by 2030 do not mention the subject of oil exhaustion, treating it as insignificant. It is obviously interesting to consider a system of a higher order, in which the socio-economic and technological systems are only subsystems, and in which a crisis of a higher level is possible. Put differently:
A small systemic crisis involves only politics, resources and the economy.
A small systemic technological crisis involves the development of some technologies on the basis of others, and complex technological catastrophes.
A big systemic crisis contains both small crises as mere parts, plus the interactions of their constituent elements with each other. An example of such a crisis: the Second World War.
A systemic technological crisis is the most probable scenario of global catastrophe
This statement rests on the following premises, which we have discussed separately in the previous chapters.

The majority of large technological catastrophes, beginning with the catastrophe of the Titanic, have had a systemic character; that is, they had no single cause, but arose as a manifestation of the complexity of the system in the form of an improbable, unpredictable coincidence of circumstances from different domains: design, management, routine violations of instructions, intellectual blindness and overconfidence, technical failures and improbable coincidences.
Because of NBIC convergence and because of the simultaneous exhaustion of interchangeable resources, all the critical circumstances are drawn toward one date - around 2030.
The collapse of a technological civilization, once started even by a small catastrophe, can take the form of a steady process in which one catastrophe triggers another, and at every moment the forces of destruction exceed the remaining forces of creation. This is a result of the fact that previously a large quantity of destructive forces was held back, and then they are all released at once (exhaustion of resources, contamination of the environment with dangerous bioagents, global warming). The ability of one catastrophe to trigger another is connected with the high concentration of different technologies that are potentially deadly to mankind - just as, if a fire has started on a ship carrying a lot of gunpowder, the whole ship will eventually blow up. Another metaphor: a person fleeing an avalanche must run ever faster, and an ever smaller delay is enough for him to be caught by the ever-growing force of the avalanche. A third metaphor is the recrystallization of certain substances with several phase states near a phase transition; this metaphor stands for the rapid and fundamental reorganization of the whole civilization connected with the appearance of powerful AI.
As the complexity of our civilization increases, the probability of sudden, unpredictable transitions to another state (in the spirit of chaos theory) grows, and at the same time our inability to predict the future and to foresee the consequences of our actions grows as well.

Chapter 21. Cryptowars, arms races and other scenario factors raising the probability of global catastrophe

Cryptowar
An important factor in future global risks is the possible appearance of "cryptowars" - sudden anonymous strikes in which it is not known who the attacker is, and sometimes even the very fact of an attack is not evident (the term is S. Lem's). When more than two opponents appear in a world arms race, there is a temptation to deliver an anonymous (that is, unpunished) strike, intended either to weaken one of the parties or to upset the balance. Obviously, supertechnologies give new possibilities for organizing such attacks. If earlier this could be the delivery of radioactive material or the launch of a missile from neutral waters, a biological attack can now be much more anonymous. Cryptowar is not in itself a risk to the existence of mankind of the first kind, but it would change the situation in the world: mistrust between countries would increase, the arms race would intensify, and the moral prohibition on anonymous and treacherous strikes would disappear. As a result, a world war of all against all (that is, a war in which there are not two opposing sides, but everyone tries to harm everyone) and a simultaneous leap in dangerous technologies could flare up.
Cryptowar would be largely terroristic - that is, the informational impact of a strike would exceed the direct damage. But its sense would lie not so much in creating fear - terror - as in general mistrust of everyone toward everyone, which can be manipulated by planting various "hypotheses". Many political murders of the present day are already acts of cryptowar, for example the murder of Litvinenko. Unlike an act of terrorism, for which many are willing to claim responsibility, nobody claims responsibility for cryptowar, but everyone wants to use it to their advantage by shifting the blame onto others.
Vulnerability to very small influences
Supercomplex systems are vulnerable to infinitesimally small influences, and this can be used to organize acts of sabotage. (Because of nonlinear addition, several very weak events can have a considerably larger effect than each of them separately, which reduces the demands on the accuracy of choosing and executing each separate event.) To calculate such an influence correctly, a superintelligence capable of simulating the supercomplex system is needed. This intelligence must therefore be more complex than the system, and the system must not contain another such intelligence. Such a situation can arise in the first phases of the development of artificial intelligence. A strike by means of small events would be the highest form of cryptowar.
An example: the blackouts in the USA and in Russia caused by rather small short circuits. Such points of vulnerability can be identified in advance. I cannot propose more intricate vulnerabilities, since I do not possess superintelligence; however, influence on the relatives and friends of leaders making key decisions could be one more such factor. In this way it is impossible to destroy the world, but it is possible to provoke enormous disarray - that is, to shift the system to a lower level of organization. In a state of chaos the probability of the inadvertent use of weapons of mass destruction increases, while the ability to develop essentially new technologies decreases. Accordingly, if the means of destroying the world have already been created, this raises the chance of global catastrophe, and if they have not yet been created, it probably lowers it. (But this is not so if other technologically advanced countries have survived: for them such an event becomes the trigger for a dangerous arms race.)
Examples of a hypothetical point in a system where an infinitesimally small influence leads to infinitely large consequences: these concern decision-making by a human being, or more precisely some factor that tips the balance past the critical threshold of a decision. Most likely this could be:
a decision to begin a war (the shot in Sarajevo);
the start of a man-made catastrophe (Chernobyl);
a market panic, or another dangerous rumor;
the deflection of an asteroid;
the murder of a leader.
As a variant, a small influence on several remote points, producing a synergistic effect, is possible. Among the especially dangerous terrorist scenarios of such influences that are accessible already now:
Influence on the relatives of decision-makers.
Use of model aircraft as a kind of long-range missile that can deliver a small bomb anywhere.
Murder of rulers and other prominent people. As technologies develop, it will become easier to kill not only many people but also any specific person chosen in advance - for example, by means of small high-precision devices (robots the size of bumblebees) or viruses targeted at the genetic makeup of a particular person.
Complex attacks using the Internet and computers - for example, planting a Trojan that feeds wrong data to just one broker, forcing him to make wrong decisions.
Information attack - disinformation: for example, spreading a well-fabricated rumor that the president of a hostile country has gone mad and is preparing a preemptive strike on "us", which creates in "us" the desire to strike first and obviously starts a "paranoid" positive feedback loop.
Arms race
An arms race is dangerous not only because it can lead to the creation of a Doomsday weapon. In the conditions of a high-speed arms race, dangerous experiments have to be carried out with lowered safety requirements and with a higher probability of leaks of dangerous substances. In addition, a superweapon of universal destruction can appear as a by-product or a special case of the application of ordinary weapons. For example, the idea of the cobalt bomb arose only after the ordinary nuclear bomb needed for it had been conceived and created. The development of techniques for breeding especially dangerous poisonous insects for military purposes would allow the creation of a species capable of overrunning the whole Earth. Finally, the large-scale use of any one weapon can also push mankind down to a lower stage of development, at which the extinction of the human population becomes more probable.
Moral degradation
It is often said that moral degradation could ruin mankind. It is clear that moral degradation cannot be a global risk of the first kind, since in itself it kills nobody, and talk of moral decline has gone on since the times of Ancient Greece. Nevertheless, the moral degradation of the ruling elite is considered an essential factor in the fall of the Roman Empire.
In the notion of "moral degradation" I do not place an actual moral evaluation; I mean those psychological attitudes and models of behavior that make a society less stable and more subject to crises of various kinds. First of all, this is the preference for personal and short-term goals over public and long-term ones. Note that whereas earlier the culture was aimed at propagating goals that promoted the increased stability of society, now it is the opposite. Yet this has not turned everyone into murderers. An example of modern moral degradation are the words of President Clinton that he took a puff of a marijuana cigarette but did not inhale. This type of degradation threatens first of all the "ruling elite" and, in the sense of global risks, can show itself in the inability of this elite to react adequately to emerging threats. That is, reaction is possible, but lies, mutual mistrust and money-laundering can undermine any effective and large-scale initiatives in the field of global risks.
Further, there is a critical level of stability of technical and social systems that depends on the concentration of unreliable elements. If there are few of them, these elements do not link up with one another and do not undermine stability. If their quantity exceeds a certain critical level, they form their own internally coherent structure. Even a small increase in this concentration near the critical threshold can dangerously increase the degree of instability of the system.
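This threshold behavior is essentially a percolation effect, and it can be seen in a toy simulation (my own illustration; the grid size and probabilities are arbitrary): below the critical concentration the unreliable elements sit in small isolated islands, while above it they suddenly join into one large connected structure of their own.

```python
import random

def largest_unreliable_cluster(p, size=60, seed=1):
    """Largest connected cluster of 'unreliable' cells on a size x size grid."""
    random.seed(seed)
    bad = [[random.random() < p for _ in range(size)] for _ in range(size)]
    seen = [[False] * size for _ in range(size)]
    best = 0
    for i in range(size):
        for j in range(size):
            if not bad[i][j] or seen[i][j]:
                continue
            stack, cluster = [(i, j)], 0
            seen[i][j] = True
            while stack:                      # flood-fill one cluster
                x, y = stack.pop()
                cluster += 1
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= nx < size and 0 <= ny < size and bad[nx][ny] and not seen[nx][ny]:
                        seen[nx][ny] = True
                        stack.append((nx, ny))
            best = max(best, cluster)
    return best

for p in (0.3, 0.5, 0.6, 0.7):
    print(f"share of unreliable elements {p}: largest cluster {largest_unreliable_cluster(p)}")
# around the site-percolation threshold (~0.59 on a square grid) the largest
# cluster jumps from small islands to a structure spanning most of the system
```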
Finally, even a small decrease in the general "moral level" of a society considerably raises the weight of the heavy tails of the distribution, that is, it increases the number of potential fame-thirsty destroyers. It should also be noted that the growth of the educated population of the Earth likewise increases the number of people who could deliberately want a global catastrophe and possess the necessary knowledge.

There is an opinion, going back to K. Marx, that the roots of the possible instability of society lie in the very nature of a society built on competition. Investment in long-term projects is weakened by competition over short-term projects, since resources would otherwise flow to distant projects. As a result, in the medium term those people, countries and organizations that did not give enough attention to short-term projects lose out. (This reasoning can be compared with N. Bostrom's reasoning about "hobbyists" in his article "Existential Risks", where it is shown that evolution will eliminate those communities that do not spend all their means on survival.) At the same time, those groups of people who have learned to cooperate end up in a more advantageous position than the separate individual who refuses cooperation. In any case, in modern society there is a gap between short-horizon planning (up to the next election, the next salary, the payback period of an enterprise) and long-term planning, in which the weight of improbable risks becomes great.
The advertising of violence, of the kind we see in modern films or in games like Grand Theft Auto, as well as of selfishness and personal heroism, amounts to an unconscious education of the population that makes it less capable of the cooperation, altruism and self-sacrifice that may be needed in a crisis. Conversely, images of acts of vengeful terrorism are implanted in the collective unconscious, and sooner or later they come back as spontaneous acts of violence. In history there have been times when all art was aimed at creating a new man - primarily Christian art and Soviet art.
Animosity in society as a scenario factor
It can be assumed that a sharp crisis would result in the growth of animosity on Earth. One vivid recent example of a surge of animosity in the world was the situation after the terrorist attacks of September 11, 2001, when many people expected that the "war of civilizations" would take on the character of a world war. A surge of animosity can have the following consequences:
1. Polarization of society into different groups that hate each other.
2. A rise in the heat of negative emotions (a negative emotional background) and an increase in the number of people ready for violence and striving for it, which raises the risk of terrorist attacks and of incidents involving weapons of mass destruction.
3. Loss of trust of everyone in everyone and the destruction of the connectivity of the social structure. It should be borne in mind that trust today is a necessary part of an economic system based on credit.
4. Growth of the arms race and aggravation of all festering conflicts.
5. Risk of the beginning of a world war in one form or another.

6. Expectations that events will inevitably turn from bad to worse, which can generate self-fulfilling prophecies.
Revenge as a scenario factor
Revenge is the continuation of animosity, but in a higher form. Suppose that a nuclear war has taken place between two large powers, and one of them has completely lost it - it has suffered unacceptable damage, while the other got away with comparatively small losses. The losing side has lost half its population, all its large cities and its defensive potential. It is rather probable that in this case revenge will become a national idea. The experience of history shows that some peoples can be pushed to the edge of destruction and will respond with more and more aggressive and dangerous forms of resistance - consider, for example, the support that Bin Laden received in Afghanistan. Because of this, a nuclear war would not make the world more stable. On the contrary, it would probably create contradictions so insuperable that the world would become even more dangerous. The losing side would probably not refuse to use some Doomsday Machine, because people who have lost their families and their homeland have nothing to lose.
The winning side would then have to decide either on utter annihilation or on occupation. The modern countries of Western civilization cannot decide on genocide, because in that case they would have to lose their civilizational identity. An occupation regime also works badly, because it can turn into an endless war. Technologically, though it is still fantastical, the idea of occupation by means of robots is possible, which, however, is equivalent to the transformation of the defeated country into an electronic concentration camp.
I note that now we stand on the threshold of an absolutely open range of possible futures. This allows people with very different pictures of the future to be reconciled with one another. However, at some moment the point of irreversibility will be passed: some variants of the future will acquire sharper outlines, and some will become impossible. Someone will have to admit that the soul does not exist and that AI is possible, or the reverse. This is fraught with conflicts over the picture of the future - over power over the future.
War as a scenario factor
Wars have occurred throughout the history of mankind. In itself an ordinary war between people cannot lead to human extinction, since there are always surviving victors. According to Clausewitz, there are wars of two types: for the achievement of concessions, and for total conquest or destruction. It is clear that wars of the second type, in which one of the parties is driven into a corner, are much more dangerous to human existence, since they create the conditions for the use of a "Doomsday weapon" as a last resort.

Here by the word "war" we mean a classical armed conflict between two countries populated by people. Armed struggle between people and robots, between people and superhumans, between two AIs, or between people and a dangerous virus would not be a classical war. But such a war could mean the genocide of people, unlike an ordinary war, which is not aimed at the destruction of all people.
Furthermore, wars differ in scale, and the largest among them are wars that have, explicitly or implicitly, the aim of establishing world supremacy. The notion of what counts as "the world" has been continuously expanding. I believe that any world war is a war for world supremacy - which can also be called a war for the unification of the planet - and has the goal of establishing an eternal world regime. The Second World War, the Cold War and the so-called struggle with the "Caliphate", the possibility of which was discussed after the terrorist attacks of September 11, fit this definition to a considerable extent. The later such a war occurs, the stronger its participants will be and the worse the consequences. Perhaps our planet is unlucky in that it did not unite into a single all-planetary state right after the Second World War.
Let us consider how war can increase the probability of human extinction. (We assume that the bigger the war, the greater the probability of each of these outcomes, but even a small war creates a nonzero probability of them):
1) War can create the conditions for the use of a "Doomsday weapon", and can also lead to the uncontrolled use of nuclear weapons.
2) War can provoke an even larger war.
3) War can cause an economic crisis. In the past, war helped to fight crises of overproduction, but that was true for the old world, in which there was no global industrial cooperation and no global financial system.
4) Any war strengthens the arms race, drawing in countries not participating in the conflict. Moreover, any arms race is connected with the development of technological devices ever more independent of humans, and of ever more lethal technologies. An arms race can also lead to the lowering of safety criteria for the sake of efficiency under conditions of catastrophic shortage of time.
5) War can become the trigger event for some chain of events leading to crisis.
6) War creates favorable conditions for large acts of sabotage and for cryptowars.
7) During a war the risks of global catastrophe through leaks increase, for example in the event of the destruction of storage facilities and laboratories producing biological weapons, and also because safety norms are lowered in hasty development work.

8) War increases the number of people experiencing despair and thirsting for revenge, and hence increases the probability of the creation and use of a "Doomsday weapon".
9) War blocks joint organized efforts to prevent catastrophes of various kinds and to deal with their consequences.
10) War can lead to short-term goals in state planning eclipsing medium- and long-term prospects. In other words, during a war, global threats not connected with the war, or the long-term consequences of actions necessary for survival today, can be lost from sight.
11) War can promote the development of transnational terrorist networks.
12) War leads to the splitting of society into "reds" and "whites" even in countries not participating in it, which can generate the effect of the self-reproduction of war. For example, in the twentieth century communist parties appeared in many countries, and in some cases they began armed struggle.
13) War can lead to the collapse of the economy and the transition of the world to a "post-apocalyptic" stage. This can happen even if the war is not nuclear, but for that it would have to be a world war. A networked terrorist war is more inclined to be a world war; in a network war there would be almost no rear areas.
14) War contains a much larger element of unpredictability than politics in peacetime. War also serves as an accelerator of the pace of historical time, and especially of the pace of technological progress. Since a number of forecasts speak of an inevitable acceleration of progress in the first third of the twenty-first century (the Technological Singularity), one can connect this with the possibility of war. However, such progress is possible only when the rear areas are safe, including from sabotage; hence a devastating world war, nuclear or networked, would on the contrary result in a halt or rollback of technical progress.
15) Modern war does not proceed without attempts by weaker countries to obtain weapons of mass destruction (or at least without suspicions and discussions on this theme), and without attempts by the strong to stop them. Therefore even a small local conflict will promote the growth of illegal trade in dangerous nuclear and biological materials and the formation of international networks for their manufacture and distribution.
The basic conclusion is that even the smallest war possesses very powerful potential in
strengthening of global risks.

Biosphere degradation
Unlike humans, animals and plants cannot survive in bunkers. In the case of irreversible damage to the biological systems of the Earth, and especially to habitats, people will never be able to return to a prehistoric level of existence (unless, of course, they make use of biological supertechnologies). Ordinary hunting and agriculture would become impossible; only the cultivation of all necessary products in sealed greenhouses would remain. And while extinct animals could in principle be restored by simply releasing a pair of each creature, it would not be so simple to restore the soil and the air. Although the oxygen accumulated in the atmosphere would suffice for millennia of fuel burning, a destroyed biosphere would no longer absorb carbon dioxide, which would strengthen the chances of irreversible global warming.
From this we can conclude that the more the habitat is damaged, the higher the minimum level of technology with which mankind can survive.
Global decontamination
The spread of dangerous biological forms can lead to the complete contamination of the biosphere by them. In this case the following variants are possible:
- People take cover in protected, isolated refuges. However, there remains the threat of dangerous biological agents drifting in from outside.
- Biological struggle against the dangerous agents: dispersal of antibiotics and antivirals.
- Creation of an artificial immune system for the whole Earth. But this is possible only after a preliminary cleaning, and it is associated with new threats connected with the risk of "autoimmune reactions".
- Total sterilization of wildlife. In this case people would have to destroy wild nature completely in order to destroy, together with it, the dangerous organisms that have taken root in it. This means that people could no longer return to a natural way of life. However, after sterilization the Earth could be repopulated with living beings from "zoos". The moment of global sterilization is itself dangerous to mankind, since it implies releasing a universal agent that kills all living things, for example a radioactive substance or radiation.
"Shaking" management

This effect was discovered by the pioneer of cybernetics, von Neumann. It shows up in the trembling hands of patients with Parkinson's disease, and in the control of aircraft and artillery fire. Its essence is that the controlling system receives information about the state of the controlled parameter with a delay, and as a result the controlling influence is not subtracted from the parameter but added to it, leading to ever-increasing oscillations. With regard to global risks and new technologies, this can show up as the understanding of these fundamentally new processes lagging behind the development of the problem, so that attempts to resolve the problem only strengthen it.
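A minimal sketch of this mechanism (my own illustration, not from the original text): a proportional controller tries to hold a parameter at zero, but acts on a measurement that is several steps old; with no delay it settles, while with a longer delay the very same controller amplifies the oscillations.

```python
# Delayed negative feedback: the same controller damps or amplifies
# oscillations depending on how stale its information is (illustrative sketch).

def peak_deviation(delay, gain=0.8, steps=60):
    """Simulate x(t) controlled toward 0 using a measurement `delay` steps old."""
    history = [1.0] * (delay + 1)          # past values of the controlled parameter
    for _ in range(steps):
        observed = history[-(delay + 1)]   # the controller sees an old measurement
        history.append(history[-1] - gain * observed)
    return max(abs(v) for v in history[-10:])  # amplitude near the end of the run

for d in (0, 1, 3):
    print(f"delay={d}: late amplitude ~ {peak_deviation(d):.3f}")
# Expected pattern: delay=0 decays quickly, delay=1 decays slowly while
# oscillating, delay=3 produces growing oscillations -- the "shaking" effect.
```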
Controllable and uncontrollable global risks. Problems of understanding global risk
Our knowledge influences the probability of different risks in a complex way. We can distinguish dangerous risks, that is, those for which, for whatever reason, we cannot prepare, from risks for which we can prepare fairly easily and quickly. Preparation for a risk includes the following conditions:
1. We know in advance that an event of a certain kind can happen, we trust this information, and we decide to prepare some preventive measures against it. We can calculate its probability fairly precisely at any moment. (An example of such a risk is the asteroid threat.)
2. We have some harbingers which indicate when and from what direction the risk may come.
3. At the moment the risk appears, we correctly identify it and make correct and timely decisions about prevention, evacuation and damage minimization, and we manage to carry those decisions out in time.
4. As the situation develops, at each moment we have an exact model of its development, and we have time to process and analyze it faster than new information arrives.
5. We possess a quantity of resources that allows us to minimize the probability of the given risk to any desired degree of accuracy, or to reduce it to an event with arbitrarily small damage.
In other words, a controllable risk is a risk that we can manage, making it arbitrarily small.
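The last sentence can be read as a formal definition; the notation below is my own paraphrase, not the author's:

```latex
% A risk R is controllable if, for every desired risk level epsilon, there is a
% feasible allocation of resources r that pushes its probability below epsilon
% (here \mathcal{R} denotes the set of feasible resource allocations):
R \text{ is controllable} \iff \forall \varepsilon > 0 \;\; \exists r \in \mathcal{R} : \; P(\text{catastrophe from } R \mid r) < \varepsilon
```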
On the other hand, we can describe the conditions under which an uncontrollable risk appears:
1. We have not the slightest idea that an event of this class can happen at all. We neglect all harbingers suggesting that it is possible, and we take no actions to prepare for it or to prevent it. We believe that the probability of this event is incomputable and "therefore" zero.
2. The event is arranged in such a way that it has no harbingers, or they are unknown to us.
3. The event begins so quickly that we have no time to identify it, or we mistake it for something else. We make wrong decisions about its prevention, or correct ones, but too late. It is impossible to minimize the damage from the event. The course of the event interferes with the adoption, distribution and execution of correct decisions. Correct decisions do not reach the executors, or are carried out incorrectly. Perhaps too many decisions are taken and chaos sets in. Some of our decisions aggravate the situation or are its cause.
4. We have no model of the unfolding situation, or we have a false model, or several mutually exclusive models. We have no time to analyze the incoming information, or it confuses us even more.
5. Our resources are not nearly enough to reduce the given risk, even if we strain all our forces. We are under the influence of events that are completely beyond our control.
The model of the appearance of an uncontrollable risk described above can be a rather good portrait of global catastrophe, not from the point of view of its physical factors, but of how it influences the consciousness of decision-makers. Our consistent and clear exposition of the theme can create the illusion that the danger of a process can be quickly comprehended once people understand what exactly is happening: for example, CNN would announce, "unlimited replication of nanorobots has begun; our valiant nuclear forces are sterilizing the dangerous area with nuclear strikes." Most likely, this will not happen. The experience of various catastrophes shows that the worst catastrophes occur when pilots or operators are working with serviceable equipment but resolutely fail to understand what is happening, that is, they create a false model of the situation and act on the basis of it. A few examples:
The pilots of the already mentioned Boeing that took off from Peru (1996, flight 603) saw that the computer was giving out contradictory data. They concluded that the computer was faulty and stopped relying on its signals, even when it warned of dangerous proximity to the ground and the plane lowered its landing gear. As a result the plane fell into the sea. The real cause of the catastrophe was that the speed sensors had been taped over on the ground; the computer itself was working correctly. If the "Titanic" had hit the iceberg head on, rather than at a tangent, the vessel, it is believed, would not have sunk.
In critical situations it is very difficult for people to make decisions, because:
- the criticality of the situation is not evident to them;
- similar situations have not occurred in their practice;
- people are under the influence of stress (emotions, fear, shortage of time) and of their prejudices;
- they have only incomplete, incorrect and probabilistic information, without exact algorithms for processing it;
- they understand what is written in the instructions differently from the authors of those instructions.
The experience of investigating difficult crimes and large accidents shows that an adequate understanding of the situation requires months of careful study. Nevertheless, ambiguities always remain, and there are doubts and alternative versions. In the case of a global catastrophe, most likely, nobody will ever learn what exactly caused it. Almost 80 percent of accidents are connected with the human factor, and in half of those cases it is a question not simply of an error (an accidentally pressed button) but of an erroneous model of the situation. This means that future systems of global management could completely ruin a perfectly "serviceable" planet by starting to defend it against some nonexistent or incorrectly understood risk. And the chances of that are as great as of an ordinary catastrophe.
The more obscure new technologies are, the less they are amenable to public control. People can participate in dangerous processes and operations without understanding their nature at all. S. Lem gives an example of such a possible future in Summa Technologiae, where an AI is used as an adviser to the government. Naturally, all of the AI's advice that seems harmful is rejected by the supervisory board. However, nobody rejected its advice about changing the chemical composition of toothpaste. Nevertheless, after many years and through a complex chain of intermediate cause-and-effect relationships, this change resulted in a reduction of the birth rate, which served the goal of preserving the balance of natural resources that had been set before the AI. This AI did not seek to harm people in any way; it simply found the maximum of its criterion function over many variables.
Drexler describes this risk as follows: "Some authors consider the coming to power of hidden technocrats throughout the world to be practically inevitable. In Creating Alternative Futures Hazel Henderson argues that complex technologies 'become inherently totalitarian' because neither voters nor legislators can understand them. In The Human Future Revisited Harrison Brown likewise asserts that the temptation to bypass democratic processes in solving difficult crises brings the danger 'that if the industrial civilization survives, it will become more and more totalitarian in nature.' If this is so, it would probably mean hopelessness: we cannot stop the race of technologies, and a world of totalitarian states based on perfect technology, needing neither workers nor soldiers, could well rid itself of the greater part of its population."
General patterns of behavior of systems on the verge of stability
G. G. Malinetsky has identified general features in the curves describing the behavior of various systems before a catastrophe. They consist in a certain parameter growing rapidly while the speed of its fluctuations around the average value increases. This can be explained as follows: as the system becomes critical, separate failures in it occur closer and closer to each other, chains of connections between them appear, and small avalanches begin to arise more and more often. As a result, the system's parameters begin to "twitch". However, the inertia of the system's homeostasis is still strong enough to keep it within optimal parameters. The appearance of ever newer technologies and the realization of different scenario factors increases the number of "bricks" from which a dangerous process can be assembled, and it increases not linearly, but in a power proportional to the length of the dangerous chain.
Proceeding from this, we may assume that an increase in the number of unconnected catastrophes and dangerous processes, each of which ends relatively safely, will become a sign of the approach of global catastrophe. (However, it is by no means a necessary sign: a catastrophe can also come quite suddenly; besides, there is such a sign as the "calm before the storm", confirmed in the case of earthquakes, when the system stops producing failures for an unexpectedly long time. However, the "calm" is itself a jump in a parameter. Jumps can occur not only toward deterioration but also toward sudden improvement: thus patients sometimes feel better shortly before death, and the stock market grows before a recession.) In economics, one sign of a coming recession is a divergence of indicators, which shows that the system is leaving its normal and predictable operating mode. It is also possible that the system has already left the controlled state but is still within its parameters, like a plane that has gone out of control but for some time continues to fly within its air corridor.
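The rising-fluctuation signature described above can be illustrated with a toy model (my sketch, not Malinetsky's): a noisy system whose restoring force weakens as it approaches a critical point shows growing variance in its fluctuations well before it actually tips over.

```python
# Toy early-warning signal: as the restoring force k(t) weakens toward zero,
# the variance of fluctuations around equilibrium grows (illustrative sketch).
import random

random.seed(0)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

x, samples = 0.0, []
steps = 3000
for t in range(steps):
    k = 1.0 - t / steps                  # restoring force decays toward the critical point
    x += -k * x + random.gauss(0, 0.1)   # damped random walk
    samples.append(x)

third = steps // 3
for name, chunk in [("early", samples[:third]),
                    ("middle", samples[third:2 * third]),
                    ("late", samples[2 * third:])]:
    print(f"{name:>6} variance: {variance(chunk):.4f}")
# The variance grows from the early to the late window: the system "twitches"
# more and more before losing stability, as described in the text.
```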

The law of techno-humanitarian balance


As A. P. Nazaretyan notes, people gradually adjust their social behavior to the existence of new types of weapons. When carbines came into the hands of the mountain Khmer tribes, they shot each other down and practically died out, whereas in Switzerland every house has a military rifle, but its illegal use is extremely rare (though it does happen: the shooting of the local parliament in Zug in 2001). The law of techno-humanitarian balance states that society sooner or later reaches a balance between the available technologies and the skills for managing them safely. One would like to hope that people have reached such a balance with nuclear and chemical weapons, which exist but are not used. On the other hand, weapons created by new technologies must first pass through a "grinding-in" period before this balance is established with respect to them.
Schemes of scenarios
Although we cannot create a concrete scenario of global catastrophe, because there are many possible variants and our knowledge is limited, we can take advantage of second-order scenarios, which describe the general laws of how scenario factors join with each other. An example of such a second-order scenario is the opposition of "sword and shield". Another is the general course of a chess game, from opening to endgame. For example, the following chain of scenarios is possible: shortage of resources - war - new technologies - unexpected results - proliferation of technologies.
An example of this scheme at work is the war between Japan and the USA during the Second World War. Japan began the war largely in order to seize the oil fields of Indonesia (which was impossible without war with the USA and Great Britain), since it had no sources of liquid fossil fuel of its own. The war caused both sides much greater damage than the fuel shortage itself. But an even more essential factor from the point of view of risk was that the war decisively accelerated the arms race in both countries. And although the Japanese advanced considerably in creating and testing plague-infected fleas, the Americans achieved success with the nuclear bomb. The nuclear bomb created the risk of far greater numbers of victims than the Second World War itself had brought.
An unexpected result of nuclear bombs was the possibility of creating a hydrogen bomb, and especially a cobalt superbomb capable of contaminating whole continents. That is, the technology gave much more than was initially demanded of it. (A similar situation arose in the development of rocket and computer technologies once the initial difficulties had been overcome, so this is quite a natural outcome.) Finally, it looks quite natural that nuclear weapons gradually but uncontrollably began to spread across the planet. Another natural result was that nuclear weapons converged with the other advanced technologies of the time, rocket and computer technologies, giving rise to intercontinental missiles.
Degree of motivation and awareness of decision-makers as factors of global risk
As A. Kononov rightly emphasizes, the problem of indestructibility should be recognized as pivotal by any civilization that exists in a catastrophically unstable Universe, in the same way that the self-preservation instinct operates at a basic level in every human being. The greater the comprehension of the importance of preserving civilization at all its levels, from the engineer to the head of state, the greater its chances of survival. (Although a scenario is possible in which the aspiration to survival leads to a struggle of one group against another, or to a struggle among the rescuers.)
Accordingly, the growth of a civilization's awareness of, and motivation toward, its own self-preservation is the most powerful factor of its survival. In the second part of this book I consider the list of factors because of which people can incorrectly estimate the probability of global catastrophes (more often toward understating it).
However, it is important that, hard as it is to believe, people may simply not aspire to prevent global catastrophes, or, to put it more cautiously, may not aspire to it strongly enough. For example, R. Reagan considered it acceptable to raise the risk of nuclear war in order to achieve victory in the Cold War with the USSR. This means that the survival of human civilization was not a paramount goal for him. This is quite explicable in terms of evolutionary psychology, since an alpha male achieves the status of leader of the pride by demonstrating his readiness to risk his life in fights with other alpha males, and this model of behavior is fixed genetically, because victorious males have more children than those who perish in the struggle for the leader's place.
So the ability of a civilization to survive is determined mainly by two factors: first, the degree of its awareness of various global risks, and second, the degree of its motivation to prevent these risks. The two factors are closely connected, since strong motivation leads to more careful research, and important research that throws light on new risks can strengthen motivation. Nevertheless, the influence of motivation seems more primary. Although in theory everyone supports the prevention of global risks, in practice this goal is in last place, as is visible from the number of publications on the theme and the funding of research. (Ask a government whether it is ready to invest resources in a technology that would reduce global risks by 1 percent over 100 years. Yet this is equivalent to consenting to the extinction of mankind within 10,000 years. Probably there is a certain biological mechanism according to which preserving the lives of children and grandchildren is very important, while the lives of great-great-great-grandchildren are absolutely unimportant.)
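One way to read this arithmetic (my interpretation; the text does not spell it out): if an unprevented extinction risk of about 1 percent per century is simply accepted, the expected survival time is on the order of one hundred centuries.

```python
# If p is the accepted extinction probability per century, the expected number
# of centuries until extinction (geometric distribution) is 1/p (sketch).
p_per_century = 0.01
expected_centuries = 1 / p_per_century
print(f"Expected survival: {expected_centuries:.0f} centuries "
      f"= {expected_centuries * 100:.0f} years")   # -> 100 centuries = 10000 years
```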
We can try to measure these two factors as fractions of their maximum value. If we take as the maximum degree of motivation the effort of a country at war, and as a measure of real motivation the share of funding of the people and organizations in the USA engaged in the prevention of global risks as a whole (on the order of 10 million dollars a year at best; here we do not count narrowly specialized programs, which are better funded, since they do not aim at complete protection taking into account the whole complex of interrelations connected with global risks, for example the anti-asteroid program), then the difference amounts to about 100,000 times (assuming that the USA could spend about 1 trillion dollars on a war). The situation is improving considerably, however: if in 2000 there was not a single person engaged in research and prevention of global risks on a permanently paid basis, now such posts exist in the USA and Great Britain. Nevertheless, despite the fact that the situation is improving, it looks monstrously bad.
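The factor of 100,000 is simply the ratio of the two spending figures just cited; a one-line check using the text's round numbers:

```python
# Ratio of wartime-level mobilization to actual x-risk prevention funding,
# using the rough figures given in the text.
war_budget = 1e12          # ~1 trillion dollars that the USA could spend on a war
xrisk_budget = 1e7         # ~10 million dollars per year on global risk prevention
print(f"Motivation shortfall: about {war_budget / xrisk_budget:,.0f}x")  # ~100,000x
```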
Awareness should be measured as a share of full awareness, such as only an ideal civilization could possess. By awareness I mean the existence of a generally accepted, rigorously proven and widely known description of the problem of global risks. Therefore, even if this book contained such a description, it would still not provide full awareness, since it is obvious that the overwhelming majority of people have not read it, and most of those who have read it would have one objection or another. So if we say that our awareness is a thousandth of the greatest possible awareness, this will be a very optimistic estimate. Here I mean the maximum achievable rational awareness, not the absolute awareness of a magician who can feel the future.
Even maximum motivation and absolute awareness do not give absolute chances of survival, because catastrophes connected with unstoppable natural forces or with unpredictable processes in the spirit of chaos theory remain possible. Awareness and motivation will not let people live forever. The overall survivability of a civilization could be estimated as the product of awareness and motivation, and in the case of the terrestrial civilization we get an annoying 1/100,000 of the greatest possible value. One can only hope that once some force majeure appears on the horizon, motivation will increase quickly.
So we should regard any events that influence motivation and knowledge of global risks as factors of global risks of the second sort.
Factors raising motivation:
1) Large catastrophes of any kind.
2) Publications that influence public opinion.
Factors weakening motivation:
1) Long quiet periods and prosperity.
2) Publications that calm people down.
3) Erroneous forecasts that failed to come true.
Factors influencing awareness:
1) The number of people participating in the discussion of the theme, and their professional qualities.
2) The length of the history of the discussion and its informational transparency.
3) The maturity of the methodology.
4) Motivation to develop awareness.
Factors reducing awareness:
1) The death of scientists or a rupture of tradition in the case of a catastrophe of medium severity.
2) The spread of errors and/or ideological splits.
From the above we can conclude that our lack of knowledge and lack of motivation in preventing global catastrophes may be a much more serious factor than the risks created by any physical source of risk.

Chapter 22. Factors influencing the speed of progress

Global risks of the third sort


We call global risks of the third sort any events which slow down or accelerate the course, or change the order, of the development of supertechnologies on Earth, and which thereby exert an indirect but decisive influence on the possible scenarios of global catastrophe.
The following interrelations can be found between catastrophes and events of different scales and their influence on the development and sequence of technologies:
1. Any large accident or catastrophe can slow the development of technologies. For example, an economic crisis, that is, an economic catastrophe, would lead to a halt of work on particle accelerators, which would reduce the chances of creating a "black hole", since accelerators are extremely expensive multibillion-dollar projects. This is what happened to the Russian accelerators after the disintegration of the USSR. Funding for biotech and AI research would also decrease, but it would affect them to a lesser degree, since they can be financed privately and much more cheaply.
2. An enormous but not final catastrophe would stop almost all research, even if some number of people survive.
3. Any accident of medium severity will result in increased security measures and a reduction of projects in its area. For example, the Chernobyl accident led both to the growth of safety measures at reactors and to a widespread refusal to build new reactors.
4. A military conflict will result in an arms race and growth in the number of research projects. The directions of promising research will be chosen taking into account the opinion of certain key experts. For example, in the Russian Federation a program in the field of nanotechnology has now been started. This would not have happened if those who make decisions, and their advisers, had never heard of nanotechnology. The nuclear program of the USA would not have begun without Einstein's famous letter to President F. Roosevelt. On the other hand, universal AI as an absolute weapon is currently ignored by the authorities (as far as is known). However, this will not last forever. As soon as the authorities understand that the private laboratories creating strong AI may possess the forces needed for a global mutiny, they will appropriate them. Accordingly, having heard that in one country the authorities have staked on powerful AI, other countries may do the same, and separate organizations and large firms can also begin developing their own projects. However, the destruction of informational connectivity could throw the whole science of AI backward.
5. The invention of even a not very strong AI will make it possible to sharply accelerate progress in other areas. Besides, any fundamental discovery can change the balance of forces.
So certain events can either strongly lower the level of research in the world, as a result of which cheaper projects will gain an advantage over expensive ones, or sharply accelerate research. The destruction of informational connectivity, on the contrary, will stop the cheap projects that rely on accessible information from the Internet, but will not stop expensive projects that make use of already available information, for example the creation of a cobalt superbomb.

Moore's law
Moore's law in the narrow sense of the word is the exponential growth of the number of transistors on a chip. In the broad sense of the word it refers to the exponential strengthening of various technologies over time. The future of Moore's law, whether it will operate throughout the twenty-first century or whether its action will stop at some point, can considerably affect the history of human society in the twenty-first century and its risks.
In fact, the acceleration which Moore's law describes is not exponential but faster-growing (hyperbolic). This question has been investigated repeatedly, for example in Ray Kurzweil's article "The Law of Accelerating Returns". This is confirmed by the fact that the speed of doubling of the number of transistors on a chip gradually, though not uniformly, increases (that is, the doubling period shrinks). If Moore's law were extrapolated into the past, it would have its starting point in the middle of the twentieth century, although components of electronic circuits were developing earlier as well. It is supposed that at the beginning of the twentieth century Moore's law (if extrapolated to the progress of electronic circuits of that time) had a doubling period of about three years.
Secondly, not only does the number of transistors on a chip increase, but the number of computers in the world also grows exponentially. Owing to this, the total available computing power grows as an exponent of an exponent.
Thirdly, the connectivity of computers with each other grows, turning them into a single computer. As a result, whereas at the beginning of the 1980s there were on the order of a million computers in the world with processor frequencies of about 1 megahertz, we now have billions of computers with frequencies on the order of a gigahertz, connected to each other through the Internet. This means that the cumulative computing power over 25 years has grown not just a million times quantitatively, but also incalculably in qualitative terms.
Since a similar law can be traced not only for chips but also for computer hard disks, DNA sequencing and some other technologies, it is clear that Moore's law is connected not with some peculiarity of microcircuit manufacture, but with a universal law in the development of new technologies, about which Kurzweil writes.
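The hyperbolic (rather than merely exponential) acceleration described above can be made concrete with a toy comparison (my sketch; the parameters are illustrative, not measured values): an exponential curve has a constant doubling time, while a hyperbolic curve's doubling time shrinks, so it runs away toward a finite date.

```python
# Exponential vs. hyperbolic growth of a capability index C(t) (illustrative).
# Exponential: C(t) = C0 * 2**(t / T), doubling time T is constant.
# Hyperbolic:  C(t) = C0 / (1 - t / t_sing), doubling time shrinks as t -> t_sing.

C0, T, t_sing = 1.0, 10.0, 50.0   # arbitrary units; "singularity" at t = 50

def exponential(t):
    return C0 * 2 ** (t / T)

def hyperbolic(t):
    return C0 / (1 - t / t_sing)

for t in (0, 25, 45, 49, 49.9):
    print(f"t={t:>5}: exp={exponential(t):8.2f}  hyp={hyperbolic(t):8.2f}")
# The exponential curve grows smoothly; the hyperbolic one stays modest for a
# long time and then diverges as t approaches t_sing, which is why the two
# models give very different pictures of the 21st century.
```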
At one time an analogue of Moore's law was observed in astronautics. From the first satellite to the landing on the Moon there was exponential growth of successes, which gave grounds for forecasts of flights to the stars by the beginning of the twenty-first century. Instead, astronautics reached a level of "saturation" and even rolled back on some positions. This happened because astronautics grew exponentially only until it ran up against its natural limits, which turned out to be the capabilities of chemical rockets (and their price). Although astronautics developed, the principle of jet propulsion did not. (Nevertheless some progress exists: the launch price of the private American Falcon rocket is supposed to be 7 million dollars, which is equal to the cost of several apartments in the centre of Moscow, whereas the sums once spent on organizing the rocket industry and building the Baikonur launch site can be estimated, in today's prices, at hundreds of billions of dollars.) In the field of semiconductors and some other technologies the opposite happened: each success made it possible to develop newer versions faster and more cheaply, because here there is a recursive loop: new chips are designed on existing chips, whereas in astronautics this recursion is almost absent. This is the main point.
In the manufacture of silicon microcircuits Moore's law will also sooner or later reach some physical limit. However, if we take the law more broadly, it means a law of self-complication of structures. One can see how this self-complication made quantum leaps from one area of exponential growth to another, each time with much faster parameters of development: from single-celled living beings to multicellular ones, from electron tubes to transistors, from microcircuits to, possibly, quantum computers. (I do not show the full chain of acceleration of developmental phases here; I will only note that each transition accelerated the growth parameter several times; for a detailed analysis of the cycles of acceleration see the works of A. D. Panov and of Kurzweil.) This means that events like the transition from one exponent to another are more radical (and there would obviously have been no competitive benefit in switching to a less steep exponent of development) and more important than even the exponential growth between these transitions. And each such transition is connected with a quantum leap, with the discovery of an essentially new way of optimization, a new way of faster thinking (in other words, with the discovery of faster algorithms of "artificial intellect" than simple enumeration of possible decisions). For example, the transition to sexual reproduction was, probably, evolution's discovery of a faster way of selection and of creating effective species. The transition to writing is a more powerful way of accumulating knowledge about the surrounding world than oral transmission of information. The creation of the scientific method is a more powerful way of learning about the surrounding world than trust in ancient sources. The creation of the system of venture firms developing and selling new technologies is a faster way than the work of separate design bureaus and lone inventors.
It is probably worth dwelling on how the development of new technologies is organized in a modern society, since this is what makes it possible to support the present rate of technological growth. It includes the following processes:
1) continuous generation and patenting of all kinds of ideas;
2) creation of separate laboratories for each idea that has at least a slim chance of success (venture firms);
3) continuous information exchange between all participants of the process, both through open publications and through trade in patents and licenses;
4) a well-oiled mechanism for introducing any novelties, and a cult of consuming novelties;
5) purchase of "brains" - people with their skills - for concrete projects.
This system of organizing innovation, like all the previous ones, developed spontaneously, that is, by simple selection among different systems of optimization. One may assume that the transition to the next system of optimization will involve changes at the level of meta-optimization, so to speak, that is, the optimization of optimization processes. An obvious feature of the modern system is that it is centered not around people-inventors, as in the nineteenth century, for example around Edison and Tesla, but on an established conveyor for the production and commercialization of ideas, in which any unique individual no longer has fundamental value. From this follows the vulnerability of the modern Moore's law to economic shocks: for this law to continue to operate, a broad front of many firms, supported by a continuous inflow of capital, is necessary. Accordingly, in the future the generalized model of the action of Moore's law (in other words, the law of acceleration of evolution) faces either collapse or a transition to an even higher-speed stage of development. Since it is impossible to force people (short of changing their nature) to change their mobile phone 10 times a year, the engine of the next jump will most likely be non-market (but competitive) mechanisms, for example an arms race.
We can conclude that Moore's law is a product of the development of the modern economy; hence economic risks are also zones of risk for Moore's law, and so they are global risks of the third sort. Moore's law in the broad sense of the word is very vulnerable to the integrity and connectivity of society. For a large number of technologies to continue to develop along an exponential curve, the simultaneous functioning of thousands of laboratories, a most powerful economy and high-quality informational connectivity are necessary. Accordingly, even a powerful world economic crisis can undermine it. An example of such an event is the disintegration of the USSR, as a result of which science there fell sharply - and would probably have fallen even further, had it not been for the inflow of ideas from the West, the demand for energy resources, the import of computers, the Internet, trips abroad and support from the Soros foundation. It is frightening to imagine how far science would have been rolled back and degraded if the USSR had been the only state on the planet.
It is clear that Moore's law could be supported within several separate superstates possessing the complete set of key technologies, but it is possible that some key technologies have already become unique in the world. And one small state, even a European one, cannot on its own support the present rate of development of science. Owing to this, we should realize the vulnerability of Moore's law at the present stage. However, the creation of AI, nano- and biotechnologies will sharply reduce the volume of space needed to manufacture everything. A halt of Moore's law would not mean the end of all research. The development of individual projects - biological weapons, AI, superbombs - could continue through the efforts of separate laboratories. However, without worldwide information exchange this process would slow down considerably. A halt of Moore's law would delay or make impossible the appearance of complex high-tech products, such as nanorobots, the development of the Moon and the uploading of the brain into a computer, while the completion of relatively simple ones would continue.
Part 4. Prevention.
Chapter 23. X-risks prevention

The general notion of preventable global risks


Obviously, if we find that there are several simple, clear and reliable ways to confront global catastrophe, we will significantly improve our safety, and a number of global risks will cease to threaten us. If, on the contrary, it turns out that all the proposed measures and remedies have flaws that make them at best ineffective and at worst dangerous, we need to invent something radically new. It seems that the protection system, at each phase of the development of a global risk, should perform the following functions:
- Monitoring.
- Analysis of information and action.
- Destruction of the source of the threat.
This strategy has worked well in counterintelligence, counter-terrorism and military affairs. Another strategy involves flight from the source of the threat (space settlements, bunkers). Clearly, this second strategy should be applied in case of failure of the first (or simultaneously with it, just in case).
Global risks vary in the degree to which they can be prevented. For example, it is quite realistic to ban a class of dangerous experiments on accelerators if the scientific community comes to the conclusion that these experiments pose some risk. Since there are only a few large accelerators in the world, they are managed quite openly, and the scientists themselves do not wish for disaster and gain no benefits from it, it seems very simple to cancel the experiments. In fact, all that is needed is a general understanding of their risks. That is, the most preventable risk is a risk that:
- Is easy to foresee.
- Is easy to reach a scientific consensus about.
- Requires only that consensus in order for the actions leading to the risk to be abandoned.

Renouncing actions that lead to a certain risk (for example, banning a certain sort of dangerous technology) is easy only under certain conditions:
- If the dangerous process is created only by human beings.
- If these processes take place in a small number of well-known places (as, for example, physical experiments on huge accelerators).
- If people are not expecting any benefit from these processes.
- If the hazardous processes are predictable both as to the time of their inception and in the course of their development.
- If the dangerous objects and processes are easily recognizable: that is, we know easily, quickly and surely that a dangerous situation has begun, and we can appreciate the degree of risk.
- If we have enough time to develop and adopt adequate measures.

Accordingly, the risks that are difficult to prevent are characterized by the fact that:
- They are difficult to foresee; it is even difficult to suppose that they are possible. (Even supposing that SETI might be a risk was difficult.)
- Even if someone becomes aware of such a risk, it is extremely difficult to convince anyone else of it (examples: the difficulties in spreading knowledge about AI and SETI as sources of risk, and the difficulties of proving the Doomsday Argument).
- Even in the case of a public consensus that such risks are really dangerous, this does not mean that people will voluntarily abandon the source of risk. (Example: nuclear weapons.)
The last point holds because:
1. The sources of risk are available to many people, and it is not known who these people are (one can register all nuclear physicists, but not all self-taught hackers).
2. The sources of risk are in unknown locations and/or are easy to hide (biolabs).
3. The risks are created by non-human natural factors, or by the interaction of human action and natural factors.
4. The source of danger promises not only risks but also benefits, in particular in the case of weapons.
5. The time of emergence of the risk is unpredictable, as is the manner in which it will develop.
6. The dangerous situation is difficult to identify as such; this requires a lot of time and contains an element of uncertainty. (For example, it is difficult to determine that a new sort of bacterium is dangerous until it has infected someone and the outbreak has reached such proportions that one can understand that this is an epidemic.)
7. The dangerous process evolves faster than we can adequately respond to it.
Some risks are preventable, but that should not lead to their being dismissed from consideration, since "preventable" does not necessarily mean that the risk will eventually be prevented. For example, the asteroid danger is among the relatively easily preventable risks, yet we do not have a real anti-asteroid (and, more importantly, anti-comet) protection system. And as long as it does not exist, the "preventability" of the threat remains purely hypothetical, because we do not know how effective and safe the future protection will be, whether it will appear at all, and, if it does appear, when.
Plan of Action to Prevent Human Extinction Risks
Prepared by Alexey Turchin, alexeiturchin@gmail.com
Abstract:
During the last few years a significant number of global risks have been
discovered that threaten human existence. These include, to name but a few,
the risk of harmful AI, the risk of genetically modified viruses and bacteria, the
risk of uncontrollable nanorobots-replicators, the risk of a nuclear war and
irreversible global warming. Additionally, dozens of other less probable risks
have been identified. A number of ideas have also been put forward regarding the
prevention of these risks, and various authors have campaigned for different
ideas.
This roadmap compiles and arranges a full list of methods to prevent global
risks. The roadmap describes plans of action A, B, C and D, each of which will
go into effect if the preceding one fails.
Plan A is to prevent global risks; it combines 5 parallel approaches:
international control, decentralized monitoring, friendly AI, raising resilience and
space colonization.
Plan B is to survive the catastrophe.
Plan C is to leave traces.
Plan D is improbable ideas.
Bad plans are plans that raise the risks.
The document exists in two forms: as a visual map (pdf http://immortalityroadmap.com/globriskeng.pdf) and as a text (long read below 50 pages,
http://docdro.id/8CBnZ6g).
Introduction...........................................................................................................451
The problem..................................................................................................451
The context....................................................................................................451
In fact, we dont have a good plan......................................................452
Overview of the map..................................................................................452
The procedure for implementing the plans.......................................453
The probability of success of the plans...............................................454
Steps................................................................................................................454
Plan A. Prevent the catastrophe....................................................................455
Plan A1. Super UN or international control system.......................455
A1.1 Step 1: Research..........................................................................455
Plan A1.1: Step 2: Social support........................................................457
Reactive and Proactive approaches......................................................458
A1.1-Step 3. International cooperation..............................................459
Practical steps to confront certain risks.............................................460
1.1 Risk control........................................................................................461
Elimination of certain risks......................................................................462
A1.1 Step 4: Second level of defense on high-tech level: Worldwide risk
prevention authority...................................................................................463
Planetary unification war..........................................................................464
Active shields................................................................................................464
Step 5 Reaching indestructibility of civilization with negligible annual
probability of global catastrophe: Singleton.....................................466
Plan A1.2 Decentralized risk monitoring................................................466
A1.2 1.Values transformation.............................................................467
Ideological payload of new technologies............................................469
A1.2 2: Improving human intelligence and morality................469
Intelligence....................................................................................................469
A1.2 3. Cold War, local nuclear wars and WW3 prevention....470
A1.2 4. Decentralized risk monitoring.............................................471
Plan 2. Creating Friendly AI..........................................................................471
A2.1 Study and Promotion.......................................................................471
A2 2. Solid Friendly AI theory............................................................472
A2.3 AI practical studies...........................................................................473
Seed AI............................................................................................................473
Superintelligent AI......................................................................................473
UnfriendlyAI...................................................................................................474
Plan A3. Improving Resilience........................................................................475
A3 1.Improving sustainability of civilization.................................475
3 2. Useful ideas to limit the scale of catastrophe..................476
3.3 High-speed Tech Development needed to quickly pass risk window 476
A3.4. Timely achievement of immortality on highest possible level 477
AI based on uploading of its creator....................................................477
Plan 4. Space Colonization............................................................................477
4.1. Temporary asylums in space......................................................478
4.2. Space colonies near the Earth...................................................478
Colonization of the Solar System..........................................................479
4.3. Interstellar travel.............................................................................479
Interstellar distributed humanity..........................................................480
Plan B. Survive the catastrophe....................................................................480
B1. Preparation............................................................................................481
B2. Buildings.................................................................................................481
Natural refuges.............................................................................................481
B3. Readiness...............................................................................................482
B4. Miniaturization for survival and invincibility.............................482
B5. Rebuilding civilization after catastrophe....................................483
Reboot of civilization..................................................................................483
Plan C. Leave Backups......................................................................................483
C1. Time capsules with information.....................................................484
C2. Messages to ET civilizations............................................................484
C3. Preservation of earthly life..............................................................484
C4. Robot-replicators in space.......................................................................485
Resurrection by another civilization.....................................................486
Plan D. Improbable Ideas................................................................................486


D1. Saved by non-human intelligence................................................486
D2. Strange strategy to escape Fermi paradox...............................488
D4. Technological precognition..............................................................489
D5. Manipulation of the extinction probability using Doomsday argument
............................................................................................................................489
D6. Control of the simulation (if we are in it)..................................490
Bad plans................................................................................................................491
Prevent x-risk research because it only increases risk................491
Controlled regression.................................................................................492
Depopulation.................................................................................................493
Computerized totalitarian control.........................................................493
Choosing the way of extinction: UFAI.................................................494
Attracting good outcome by positive thinking.................................494
Conclusion..............................................................................................................494
Literature:..............................................................................................................495
Introduction
The problem
Many authors have noted that the 21st century may witness a global
catastrophe caused by new technologies (Joy, Rees, Bostrom, Yudkowsky, etc.).
Many of them have suggested different ways of preventing x-risks (Joy, Posner,
Bostrom, Musk, Yudkowsky).
But these ideas are scattered across the literature and unstructured, so we need
to collect all of them, put them in the most logical order and evaluate their
feasibility.
As a result, we will get a most comprehensive and useful plan of x-risk
prevention that may be used by individuals and policymakers.
In order to achieve this goal I created a map of x-risks prevention methods.
The map contains all known ways to prevent global risks, most of which you
have probably heard of separately.
The map describes action plans A, B, C and D, each of which will come into
force in the event of the failure of the previous one. The plans are plotted
vertically from top to bottom. The horizontal axis represents a timeline with some
approximate dates when certain events on the map may occur.
The size of this explanatory text is limited by the size of the article, so
many points on the map are left as self-evident or linked to explanations by
other authors. A full description of every point would take up a whole book.
The context
The context of the map is an exponential model for the future. The map is
based on the model of the world in which the main driving force of history is
the exponential development of technology, and in which a strong artificial
intelligence will have been created around 2050. This model is similar to the
Kurzweil model, although the latter suffers from hyper-optimistic bias and does
not take account of global risks.
This model is relatively cautious compared to other exponential models;
for example, there are models where technology development takes place
according to a hyperbolic law and there is a singularity around 2030 (Scoones,
Vinge, Panov, partly Forester).
At the same time we must understand that this model is not a description of
reality, but a map of the territory; that is, in fact, we do not know what will
happen, and very serious deviations are possible because of black swan events
or because of slower technological growth.
I should note that there are two other main models: the standard model,
in which the future will be almost as it is today, with slow linear growth (this model
is used by default in economic and political forecasting, and it is quite good over
intervals of 5-10 years), and the Club of Rome model, according to which in the
middle of the 21st century there will be a sharp decline in production, the economy
and population. Finally, there is the model of Taleb (and Stanislaw Lem), in
which the future is determined by unpredictable events.

In fact, we don't have a good plan


The situation is that in fact we do not have a good plan, because each plan
has its own risks, and besides, we do not know how these plans could be
implemented.
That is, although there is a large map of risk prevention plans, the situation
of prevention does not look good. It is easy to criticize each of the proposed
plans as unrealizable and dangerous, and I will show their risks. Such criticism
is necessary for improving the existing plans.
But some plan is better than no plan at all.
Firstly, we can build on it to create an even better plan.
Secondly, the mere implementation of this plan will help delay a global
catastrophe or reduce its likelihood. Without it, the probability of a global
catastrophe is estimated by different scientists at 50 per cent before the end of
the 21st century.
I hope the implementation of a most effective x-risks prevention plan will
lower it by an order of magnitude.
Overview of the map
Plan A, "Prevent the catastrophe", is composed of four sub-options: A1, A2,
A3 and A4. These sub-options may be implemented in parallel, at least up to a
point.
The idea of plan A is to completely avoid a global catastrophe and to
achieve such a state of civilization that its probability is negligible. The sub-options are the following:
Plan A1 is the creation of a global monitoring system. It includes two options:
A1.1, international centralized control, and A1.2, decentralized risk
monitoring. The first option is based on suppression, the second is cooperative. The second option emerged during the crowdsourcing of ideas for the map
in summer 2015.
Plan A2 is the creation of Friendly AI.
Plan A3 is increasing resilience and indestructibility.
Plan A4 is space colonization.
Among them the strongest are the first two plans, and in practice they will
merge: that is, government will be computerized, and AI will take over the
functions of a world government.
Plan B is about building shelters and bunkers to survive the catastrophe.
Plan C is to leave traces of information for future civilizations.
Plan D consists of hypothetical plans.
Bad plans are dangerous plans that are not worth implementing.
The procedure for implementing the plans
In order to build a multi-level protection against global risks, we should
implement almost all of the good plans. At early stages, most plans are not
mutually exclusive.
The main problem that can make them begin to exclude each other arises
in connection with the question of who will control the Earth globally: a super
UN, an AI, a union of strong nations, a genius hacker, one country, or a
decentralized civil risk monitoring system. This question is so serious that it is in
itself a major global risk, as there are many entities eager to take power over
the world.
The ability to implement all the listed plans depends on the availability of
sufficient resources. Actually, the proposed map is a map of all possible plans,
from which one may choose the most suitable sub-group for implementation.
If resources are insufficient, it may make sense to focus on one plan only.
But who will choose?
So here arises the question of actors: who exactly would implement these
plans? Currently, there are many independent actors in the world, and some of
them have their own plans to prevent a global catastrophe. For example, Elon
Musk proposes to create a safe AI and build a colony on Mars, and such plans
could be realized by one person.
As a result, different actors will cover the whole range of possible plans,
acting independently, each with his own vision of how to save the world.
Although each of the plans is intended to prevent all possible catastrophes, any
particular plan is most efficient against a certain type of disaster. For example,
Plan A1 (the international control system) is best suited to controlling the spread
of nuclear, chemical and biological weapons and to anti-asteroid protection, whereas
Plan A2 is the best for preventing the creation of an unfriendly AI.
Space exploration is better suited to protect against asteroids but does very
little to protect against an unfriendly AI that can be distributed via
communication lines, or against interplanetary nuclear missiles.
The probability of success of the plans
The plans are also arranged in order of the likelihood of the success of their implementation. In all cases, however, this likelihood is not very large. I will give my evaluation of the probability of success of the plans, from highest to lowest:

Most likely is the success of the international control system A1.1, because it requires no fundamental technological or social solutions that were not already known in the past: 10 percent. (This is my estimate of the probability that the realization of this plan will prevent a global catastrophe, on the condition that no other plan has been implemented and that the catastrophe is inevitable if no prevention plans exist at all. The notion of probability for x-risks is complicated and will be discussed in a separate paper and map, "Probability of x-risks".) The main factors lowering its probability are the well-known human inability to unite, the risk of a world war during attempts to unite humanity forcefully, and the risk of failure of any centralized system.
Decentralized control in A1.2 is based on new social forms of management that are a little bit utopian, so its probability of success is also not very high; I estimate it at 10 percent.
Creating a friendly artificial intelligence (A2) requires the assumption that AI is possible at all; this plan carries its own risks, and AI cannot prevent catastrophes that happen before its creation, such as a nuclear war or a genetically engineered virus: 10 percent.
A3: Increasing resilience and strengthening the infrastructure can have only a marginal effect in most scenarios, serving mainly as help in the realization of other plans, so 1 percent.
A4: Space colonization does not protect from radio-controlled missiles, nor from a hostile AI, or even from the slow action of biological weapons which work like AIDS. Besides, space colonization is not possible in the near future, and it creates new risks: large space ships could be kinetic weapons or suffer catastrophic accidents during launch, so 1 percent.
Plan B is obviously less likely to succeed, since major shelters could be easily destroyed and are expensive to build, and small shelters are vulnerable. In addition, we do not know what type of future disasters we are going to protect ourselves from by building shelters. So, 1 percent.
Plans C and D have almost symbolic chances of success: 0.001 percent.
Bad plans will increase the likelihood of a global catastrophe.
We could hope for positive integration of different plans. For example, Plan
A1 is good at early stages before the creation of a strong AI, and Plan A2 is a
strong AI itself. Plan A3 will help implement all other plans. And plans A4, B
and C may have strong promotional value to raise awareness of x-risks.
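To illustrate how these layers might add up, here is a minimal back-of-envelope sketch in Python. It takes my illustrative success estimates above at face value and assumes, purely for the sake of the arithmetic, that the plans succeed or fail independently, which is certainly not strictly true; the point is only that several weak layers of defense can together be considerably stronger than the best single layer.

# Back-of-envelope combination of the illustrative estimates given above.
# Assumption (mine, for illustration only): plans fail independently.
plans = {
    "A1.1 international control": 0.10,
    "A1.2 decentralized monitoring": 0.10,
    "A2 Friendly AI": 0.10,
    "A3 resilience": 0.01,
    "A4 space colonization": 0.01,
    "B shelters": 0.01,
    "C and D": 0.00001,
}

p_all_fail = 1.0
for p_success in plans.values():
    p_all_fail *= 1.0 - p_success

print(f"P(at least one layer works) = {1.0 - p_all_fail:.2f}")
# Prints roughly 0.29, versus 0.10 for the best single plan.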
In the next chapters I will explain different blocks of the map.
Steps
The timeline of the map consists not only of possible dates, which could move by decades depending on the speed of progress and other events, but also of steps, which are almost the same for every plan.
Step 1 is about understanding the nature of risks and creating a theory.
Step 2 is about preparation, which includes promoting the idea, funding, and building the infrastructure needed for risk mitigation. Step 2 can't be done successfully without Step 1.

Step 3 is the implementation of preventive measures on a low technological level, that is, at the current level of technology. Such measures are more realistic (bans, video surveillance) but also limited in scope.
Step 4 is the implementation of advanced measures based on future
technologies which will finally close most risks, but which themselves may pose
their own risks.
Step 5 is the final state where our civilization will attain indestructibility.
These steps are best suited to Plan A1.1 (international control system) but
are needed for all the plans.
Plan A. Prevent the catastrophe
Plan A1. Super UN or international control system
The idea of this plan is that the more complex and "aggressive" a risk is, the greater the level of control required to prevent it. As a global risk can arise in
one part of the world (a genetically modified virus) and then spread across the
planet, the control should be spread throughout the world (and even beyond,
to the space colonies).
In order to create an adequate system of control it is necessary to
understand the nature of the risks, how to detect them and how to suppress
them, before they have time to spread.
However, to achieve that we need a clear understanding of the importance
of preventing such risks, and a world-wide authority that would be specifically
created for their prevention and would have powers that go beyond those of
any local authorities.
In addition, the control system must be adequate to new technological risks
and evolve in parallel with them. It may be a risk in itself, and it also has to be
controlled.
A1.1 Step 1: Research

Information gathering
Creating and promoting a long-term future model
Comprehensive list of risks
Probability assessment
Prevention roadmap
Determining most probable risks and risks that are easiest to prevent
Creating an x-risks wiki and an x-risks internet-forum which would attract
the best minds, but would also be open to everyone and well-moderated

Assistance
Solving the problem of different x-risk-aware scientists ignoring each other (the "world saviors' arrogance" problem)
Integrating different lines of thinking about x-risks
Lowering the barriers to entry
Unconstrained funding of x-risk research for many different approaches
Helping best thinkers in the field (Bostrom) produce high quality x-risk
research
Educating world saviors: choosing best students, providing them with
courses, money and tasks.
Additional study areas
Studying existing system of decision-making in UN, hiring a lawyer
Creating a general theory of safety and risk prevention
Creating a full list of x-risk-related cognitive biases and working to
prevent them (Yudkowsky)
Translating best x-risk articles and books into common languages
The basis of the modern understanding of global risks was laid in the works of Nick Bostrom, John Leslie, Martin Rees and Bill Joy at the beginning of the 21st century. The result is a more or less complete list of risks.
I made a typology map of global risks (http://lesswrong.com/lw/mdw/a_map_typology_of_human_extinction_risks/)
that shows more than 100 different options, but the main risks in the
exponential model of the future are the risks of new technologies, namely
Artificial Intelligence and a multipandemic caused by genetically modified
viruses. These top two risks are growing exponentially along with the
development of technology, and their probability grows at the same rate as
Moore's Law, that is, doubling every couple of years. However, several other
risks could also lead to a global catastrophe: a world war with nuclear and biological weapons involved, a nuclear doomsday weapon, irreversible global warming under the Venus scenario, and the creation of nanorobot-replicators.
The issue of estimating the probability of particular risks is certainly not resolved, partly because it is very difficult to estimate the probability of a unique single event that has never happened before. Even the notion of probability is not well defined for such events. I am planning to make a map that will show the time distribution of various risks.
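As a toy illustration of the "doubling with Moore's Law" remark above (my own arbitrary numbers, not an estimate from any published model), the following sketch shows how quickly an exponentially growing annual risk comes to dominate:

# Toy model: the annual probability of a technology-driven catastrophe
# starts at an assumed 0.01% and doubles every two years (both numbers
# are illustrative assumptions, not estimates).
p0 = 1e-4
doubling_years = 2.0

p_survive = 1.0
for year in range(60):
    p_year = min(1.0, p0 * 2 ** (year / doubling_years))
    p_survive *= 1.0 - p_year
    if 1.0 - p_survive >= 0.5:
        print(f"Cumulative risk passes 50% around year {year}")
        break
# With these assumptions the 50% mark is crossed around year 22, even
# though the starting annual risk was tiny.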
A number of ways to protect against global risks have recently been proposed. Yudkowsky and Bostrom favor the creation of a friendly AI. Hawking and Elon Musk have also advocated creating space shelters. R. Posner in his book "Catastrophe: Risk and Response" described what I call here Plan A1, that is, the creation of international regulatory mechanisms to prevent risks. Various options of underground shelters have also been suggested.
Plan A1.1: Step 2: Social support
Public Movement: spreading the idea of the importance of risk
prevention
Science
Raising support inside the academic community by high-level signaling
Cooperative scientific community with shared knowledge and productive
rivalry
Productive cooperation between scientists and society based on trust
Scientific attributes:
- a peer-reviewed journal,
- conferences,
- an inter-governmental panel,
- an international institute
Popularization
Articles, books, forums, media: showing the public that x-risks are real
and diverse, but we could and should act
Politics
Public support, street action (anti-nuclear protests in the 80s)
Political support: lobbyism
Establishing political parties for x-risk prevention
Writing policy recommendations

Unfortunately, the scientific community tends to split into opposing groups.


As a result, one group focuses on a single risk and particular method of its
prevention while another group concentrates on other risks and methods.
(e.g., global warming and CO2 reduction as the method of its prevention.) But
active influence on policymakers requires a team of scientists with a common
(and correct) vision. Now one such group has emerged around Yudkowsky, Bostrom and Elon Musk, but they seem to overestimate the remote risks of
superintelligence and to underestimate other risks that could happen earlier.
In general, we live in a society that avoids solving the biggest problems and
focuses on small issues. The same thing happens with the fight against aging,
the number one killer in the world.
The second step is to convince society and decision-makers of the reality of the threat of global risks and of the need to deal with them as humanity's most important goal. Having a plan to confront global risks could
help achieve this goal. Although some effort to prevent various risks has been
made, society as a whole continues to be absorbed by its petty internal conflicts.
However, there was a time when the struggle against what was perceived
as a global risk was at the peak of international attention and resulted in mass
street actions. This was the anti-nuclear struggle of the 80s. (A nuclear war is
unlikely to lead to the complete destruction of humanity, but many thought it
could.) And it ended with some success: international treaties were signed,
which significantly limited the nuclear arsenals, and the Cold War ended.
It is obvious that sooner or later politicians and parties will appear
advocating the prevention of global risks. The Transhumanist Party has now been created in the United States; it stands for radical life extension, anti-aging and the prevention of global risks.
In addition, the study of global risks should take the shape of a science with all
relevant attributes, namely, a scientific journal, an online forum, a series of
conferences, a scientific institute, an inter-governmental panel on risk analysis
(similar to the panel on global warming).
It should also provide an opportunity for dialogue between advocates of
different points of view and not be restricted to a narrow circle of like-minded
people referring to each other.
Reactive and Proactive approaches
Reactive: React to the most urgent and most visible risks. Risks are ranked by urgency.
Pros: Good timing, visible results, proper resource allocation, investing only in real risks. Good for slow risks, such as a pandemic.
Cons: Can't react to fast emergencies (AI, asteroid, collider failure).
Proactive: Envisioning future risks and building a multilevel defence. Risks are ranked by probability.
Pros: Good for coping with new risks, enough time to build defense.
Cons: Misinvestment in fighting risks that never materialize, no clear reward, the challenge of identifying new risks while discounting mad ideas.
This map is based on the proactive approach, but at present the reactive approach to risks dominates. We can't state that the proactive approach is always better, as it may lead to excessive activity in areas that are best left alone. But a proactive study of future risks is needed.
A1.1-Step 3. International cooperation
Super-UN
All states contribute to the UN to fight certain global risks
States cooperate directly without UN
Superpowers take responsibility for x-risk prevention and sign a treaty
International law about x-risks is introduced, which will punish people for raising risks (underestimating, plotting, or neglecting them) and reward people for lowering x-risks, identifying new risks, and making efforts to prevent them.
International agencies dedicated to certain risks (old ones such as the
WHO and new ones for new risks).
Stimuli
A smaller catastrophe could help unite humanity (pandemic, small
asteroid, local nuclear war)
Some movement or event that will cause a paradigmatic change so that
humanity may become more existential risk aware.
This item includes more practical steps to prevent global risks. It is
assumed that understanding of the nature of risks and the importance of their
prevention is already achieved. The next step is international coordination of
efforts.
The UN is the most authoritative international organization created to fight
the risk of a new war. On the other hand, the UN in its present form is largely discredited and bureaucratically weak.
As a result, the authority to fight global risks may be delegated not to the
UN, but to some other organization, or the strongest military and economic
power, such as the United States.
Depending on the type of risk, not all states need to participate in its prevention; for example, only one or several large states need to unite to protect themselves against the threat of an asteroid. However, dealing with the
most serious risks requires cooperation of all economically developed countries
of the world, as well as access to the entire territory of the Earth, without
exception.
A good example is the fight against the Ebola epidemic in 2014, which could
have become a global risk if its exponential growth had not been stopped.
However, President Obama chose the right strategy: maximizing the
suppression of the epidemic outbreak in the center of its dissemination (while
another proposed strategy, that of announcing a total quarantine of the infected countries, would have led to millions of deaths and the emergence of
permanent foci, where Ebola could have evolved into a more dangerous form).
Many developed countries and international organizations, such as the WHO,
participated in the fight against Ebola.
Although in the first half of 2014 the international community demonstrated
extreme laziness and lack of foresight with regard to the exponential process of
the Ebola outbreak, later effective mobilization of resources took place. This
shows that a small or slow-growing disaster accelerates integration processes and brings different organizations together to solve the
problem. However, not all risks will develop so slowly and so clearly.
Practical steps to confront certain risks


Biosecurity:
Developing better guidance on safe bio-technology
DNA synthesizers are kept constantly connected to the Internet, and all newly synthesized DNA sequences are checked for dangerous fragments (see the screening sketch below)
Funding controlled environments such as clean-rooms with negative
pressure
Introducing better quarantine laws for travelling during pandemic
Develop and stockpile new drugs and vaccines,
monitor biological agents and emerging diseases, and strengthen the
capacities of local health systems to respond to pandemics (Matheny)
Environment
Capturing methane and CO2, probably, by bacteria
Investing in biodiversity of the food supply chain, preventing pest spread:
1) better food quarantine law, 2) portable equipment for instant identification
of alien inclusions in medium bulks of foodstuffs, and 3) further development of
nonchemical ways of sterilization.
Promoting mild birth control (female education, contraceptives)
Promoting the use of solar and wind energy
Nukes
Introducing asteroid deflection technology without nukes
Improving nuclear diplomacy
Using technical instruments to capture radioactive dust
Some practical steps to prevent risks can be taken without a global plan and within individual research programs. Only some of these steps are listed here.
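As a toy illustration of the DNA-synthesizer screening item in the biosecurity list above, here is a minimal sketch. The hazard list, the fragment length and the function name are hypothetical placeholders; real sequence screening, as done by synthesis providers, relies on much larger curated databases, fuzzy matching and protein-level comparison.

# Minimal sketch of screening a synthesis order against a hazard list.
# HAZARDOUS_FRAGMENTS is a hypothetical placeholder, not real pathogen data.
HAZARDOUS_FRAGMENTS = {
    "ATGGCGTTTAACCGGA",
    "TTGACCGGAATTCCAA",
}

def is_order_suspicious(sequence: str, window: int = 16) -> bool:
    """Return True if any window of the ordered sequence is on the hazard list."""
    sequence = sequence.upper()
    return any(
        sequence[i:i + window] in HAZARDOUS_FRAGMENTS
        for i in range(len(sequence) - window + 1)
    )

# An order containing a flagged fragment would be held for human review.
order = "CCCATGGCGTTTAACCGGATTT"
print("flag for review" if is_order_suspicious(order) else "clear")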
1.1 Risk control
Technology bans
Introducing an international ban on dangerous technologies or voluntary
relinquishment such as not creating new strains of flu
Freezing potentially dangerous projects for 30 years
Lowering international confrontation
Locking down risky domains beneath piles of bureaucracy, paperwork and safety requirements
Technology speedup
Differential technological development: develop safety and control
technologies first (Bostrom)
Introducing laws and economic stimulus (Richard Posner, carbon
emissions trade)
Surveillance
International control systems (such as IAEA)
Internet scanning, monitoring from space.
The logical development of the theme of bans and freezing projects is the idea of differential technological development proposed by Bostrom. We must
invest in the development of technologies that enhance our security and slow
down the development of technologies that increase the risks. As a result, we
are not canceling the general trend of progress, but changing the shape of its
front. That is, it is necessary to quickly develop technologies that enhance the
control and management, namely, AI and surveillance systems, and slow down
the development of technologies that can quickly lead to uncontrollable
consequences.
So far this idea remains only wishful thinking.
Richard Posner proposed to manage risks through legislation and economic incentives. The most famous attempt to do something of the kind has been carbon trading. However, in general we do not see evidence of its effectiveness, as carbon emissions and coal burning have continued to rise.
Instead of prohibitions, we can allow certain types of activity but exercise total control over them, so that they are carried out in a safe and peaceful manner. An example of this kind of control is the IAEA's oversight of nuclear energy. Currently, the technical capabilities for monitoring have incredibly
widened thanks to cheap electronics, spyware systems in every mobile device,
satellite monitoring, scanning the Internet. However, the division of the world
into rival states makes such control difficult even for such large facilities as
nuclear plants, and the control of individual biological laboratories is even more
difficult.
However, the result could be a totalitarian Orwellian society that, under the pretext of protection against global and other risks, penetrates into people's private lives and commits various abuses. Scandals involving ubiquitous surveillance occur regularly in the United States. The controllers themselves are out of control and can commit the same violations that they are supposed to prevent.
David Brin offered an alternative to a society of total control by the state: a society of total transparency, where everyone can watch one another through
electronic means, and a group of civil society activists scour the world looking
for potential terrorists. This idea may be useful but if it is considered the only
solution to the problems of global risk, it seems far-fetched.
Elimination of certain risks
Universal vaccines, UV cleaners
Asteroid detection (WISE) to prove that no dangerous asteroids exist
Transition to renewable energy, cut emissions, carbon capture
Stopping LHC, SETI and METI until AI is created
Getting rogue countries integrated or prevented from having dangerous
weapons programs and advanced science
In parallel with the development of prohibitions and means of control, the development of technology can lead to a situation in which some risks get "closed": either means to prevent them effectively will be created, or it will be proved that they are impossible.
For example, the creation of a universal vaccine against influenza, or against viruses in general, would significantly reduce the risks of biotechnology (such work is going on now, and there are many interesting ideas).
The development of observational astronomy, and, first and foremost,
infrared astronomy (space telescope WISE) will reveal all potentially dangerous
near-Earth asteroids and most likely prove that none of them is going to
threaten the Earth in the next 100 years. This would eliminate the need for the construction of asteroid interception systems, which themselves can be
dangerous (because they consist of powerful missiles and nuclear weapons,
which could be used as space weapons or against the Earth itself).
The development of solar and wind energy and removing carbon from the
atmosphere and using it as a building material will significantly reduce the risk
of running out of resources and energy, as well as air pollution and global
warming, which in total will reduce the risks of a new world war and increase
life expectancy. Of course, this may not be necessary if we quickly create a strong AI based on a nanotech industry, but the timing of this is difficult to predict.
There is also the idea of mining some minerals in space, of which I am personally skeptical, because the differentiation of the Earth's interior and the water cycle produced a significant enrichment of the primary ores, which did not happen on other planets or on asteroids.

A1.1 Step 4: Second level of defense on high-tech level:


Worldwide risk prevention authority

Establishing a center for quick response to any emerging risk: x-risk police
Introducing worldwide video-surveillance and control
The ability when necessary to mobilize a strong global coordinated
response to anticipated existential risks (Bostrom)
Peaceful unification of the planet based on a system of international
treaties
Robots for emergency liquidation of bio- and nuclear hazards
Narrow AI based expert system on x-risks, Oracle AI
After some risks have been prevented with the help of specialized agencies
and with individual measures, a clear need will emerge to set up an agency responsible for preventing any future global risks. This may be the UN Security
Council, or a UN committee vested with adequate powers.
The first of these powers should be worldwide gathering of information on
emerging risks with the use of all possible technical means.
Another of its powers would be the ability to rapidly respond to emerging
threats, such as sending out troops and medical teams to the location where an epidemic has started, or even carrying out nuclear strikes on laboratories that produce harmful biological agents or nano-replicators.
It would only be possible to create such an agency if there is a peaceful
unification of the world into a single supranational structure. Peaceful
reunification is possible primarily through a complex system of agreements, similar to the system that underpins the integration of the EU.
Probably the subjects of the integration will be supranational entities rather
than individual states.
However, in this scenario there is a dangerous divergence which may be
called a "war for the unification of the planet."
Planetary unification war
A war for world domination
One country uses bio-weapons to kill all the world's population except its own, which is immunized
A super-technology (such as nanotech) is used to quickly gain global
military dominance
Doomsday Machine blackmail
This scenario I have outlined in red, because it is not good and could very likely lead to the destruction of all mankind or a significant part of the world's population. That is, this scenario is not desirable, but it looks increasingly likely. Namely, instead of the integration and domination of humanitarian values we see the division of the world into blocs, and there is a group of rogue states
not wishing to integrate with any of the blocs (North Korea, Islamic State) and
at the same time actively developing weapons of mass destruction.
In principle, any world war is a war for world domination, but no world war has ever ended with one country winning it. A future world war may also not have a
winner, and only become meaningless homicide and a catalyst for the
development of ever more dangerous weapons.
It may happen in the future that there is one winner who has crushed some of its opponents and persuaded the others to obey through threats.
Such a war may be conventional, or nuclear, or based on
supertechnologies. The latter option is most likely to lead to the victory of one
party, as supertechnologies can give a decisive advantage.
An even worse scenario is that of one country creating a doomsday weapon, blackmailing the rest of humanity with it and forcing everyone else to capitulate. But if such weapons are created by a number of countries that have mutually exclusive conditions for their deployment, then we are doomed (Herman Kahn).
Finally, the worst planet unification war scenario is that of one country
destroying all the others, e.g., by using a virus against which its own
population is vaccinated.
A planet unification war is a very bad method to unite the world, but a
method that, unfortunately, may work.
Active shields
Geoengineering against global warming
Organizing a worldwide missile defense and an anti-asteroid shield
Putting up a nano-shield: a distributed control system against hazardous replicators
Putting up a bio-shield: a worldwide immune system
Establishing control over dangerous memes (existential terrorism prevention)
Controlling the knowledge of mass destruction
Amalgamating the state, the Internet and the worldwide AI into a
worldwide monitoring, security and control system
Isolating risk sources at a great distance from the Earth;
Performing scientific experiments in ways that are close to natural events
So if the unification of the planet, at least in the field of prevention of global
risks, happens more or less successfully, it will become possible to implement a
number of technical measures to prevent future risks.
At the same time, taking into account the exponential development of
technologies, the risk that will be the most dangerous in the middle of the 21st
century is the risk of replicators (bio, nano or computer virus), and in order to
control them we need different types of high-tech shields.
By mid-century the national state will have integrated with various AI and robotic systems. As a result, so-called "active shields" will emerge, a kind of global immune system capable of detecting certain
dangerous replicators or other risks and preventing them instantly, perhaps
even without human involvement.
First among them is geoengineering, that is, control of the global temperature by means of spraying water into the upper atmosphere or sequestering carbon dioxide from the atmosphere.
An international missile defense system may also be considered a global
shield, although it is unlikely to be necessary if all the countries in the world
get united.
A bio-shield will test the DNA of various organisms in the environment with the aim of immediately detecting dangerous viruses and replicators, in which it is similar to the human immune system.
A nano-shield will appear at a later stage of development, when
nanorobots-replicators have been created, and there is the risk that they may
start to multiply in the environment, that is, become "gray goo". In order to control that, it is necessary to place specialized sensors everywhere in the environment, even in the world's oceans (Robert Freitas).
A system of control over criminal activity and potential terrorism can also
be regarded as a global shield, as well as a system of control over any artificial intelligence that might start self-improving or planning some destructive activity.
Ultimately, the global AI will control all the shields, incorporate both the Internet and various government agencies, and take over their functions.
Finally, there is the idea of moving some of the sources of risk a considerable distance from the Earth so that they would not cause harm if something went wrong, or would at least allow some time for preparation. This applies
to dangerous biological experiments and dangerous physical experiments, but
is unlikely to deal efficiently with dangerous attempts to create a self-improving
AI that could easily "escape" by communication channels.
We do not touch here upon the problem of creating a safe AI, assuming that it will be created, if possible, within the framework of Plan A2. In addition, I
have a separate map showing ways to create a safe AI, which is extremely
complex, and which contains about a hundred possible ideas.
If a friendly AI is created, it may interrupt the implementation of Plan A1 at any stage and offer better solutions.
Step 5 Reaching indestructibility of civilization with negligible annual
probability of global catastrophe: Singleton
Singleton is a world order in which there is a single decision-making
agency at the highest level (Bostrom)
Setting up a worldwide security system based on AI
Developing a strong global AI preventing all possible risks and
providing immortality and happiness to humanity
Colonization of the solar system, interstellar travel and Dyson spheres


Colonization of the Galaxy
Exploring the Universe

The result of the implementation of Plan A1 and a number of other plans should be the creation of what Bostrom calls a Singleton: a single center of decision-making within the civilization that uses AI and ensures prevention of all global risks.
AI is the most effective tool for adaptation, and therefore by definition must
be able to prevent all the risks that are generally preventable. In addition, it
should solve the problem of the good for humanity in the broadest sense of the word, including the removal of aging, death and suffering, as well as other aspects that are still difficult for us to understand.
It should ensure further unlimited development of mankind so that the
potential of the human species would be fulfilled to the maximum possible
extent. Probably, this could be achieved by combining AI and human beings. After that, a protected and immortal humanity will face the task of colonizing the solar system, the Galaxy and the Universe.
Humanity will become a Kardashev Type II and then Type III civilization.
A strong AI and, consequently, a Singleton will most likely be created, if that is possible at all, before the end of the 21st century, and at the earliest by 2030. So
the period of global risks in the history of mankind will last no more than a
hundred years, after which humanity will either perish or reach a certain state
of indestructibility.
Plan A1.2 Decentralized risk monitoring
This plan largely originated from the crowdsourcing of ideas based on a
previous version of the map, during which more than 20 interesting
suggestions appeared.
The essence of this plan is to collect all positive alternatives to Plan A1.1 and to avoid the risks of totalitarian control:
- The need for a world war for the unification of the planet,
- An Orwellian worldwide totalitarianism, with its restriction of freedom, the
penetration of the state into private life
- And, above all, the risk, built into any totalitarian system, of a failure of the centralized control system. After all, the control center itself is not accountable to anyone; it is out of control. Centralized control always has an area which is out of control, namely the top of the control pyramid, and even control over control doesn't solve this intrinsic problem.
This version of Plan A1 is good and positive, but perhaps a little naive. In reality, some of its elements may be combined with the control version, resulting in a more viable solution.

The steps in this plan are different. It starts with changing human values,
proceeds to changing behavior and society and then to the organization of
mutual control.
A1.2 1. Values transformation
The value of existential risk reduction
A moral case can be made that existential risk reduction is strictly more
important than any other global public good (Bostrom)
Making the value of the indestructibility of civilization the first priority on
all levels: in education, on the personal level, and as a goal of every nation
Improving the public desire for life extension and global security
Dissemination of this value
Reducing radical religious (ISIS) or nationalistic values
Raising the popularity of transhumanism
Promoting movies, novels and other works of art that honestly depict x-risks and motivate their prevention
Introducing memorial and awareness days: Earth day, Petrov day,
Asteroid day
Educating in schools on x-risks, safety, and rationality topics; raising the
sanity waterline
To a large extent the prevailing policies are determined by the values
prevailing in society. If you describe it very crudely, there are two large
opposing groups of values: the first group is national-religious values, and the
second group is the value of the progress of humanity, unity and life extension.
The national-religious group is characterized by belief in an afterlife, adherence to unproven dogmas, and the primacy of the value of the group over the value of the individual or of humanity as a whole. There are many such groups
and they conflict with each other.
For such a group humanity as a whole and its fate are not values, and a
global catastrophe could even be desirable as a religious objective.
Unfortunately, we see that the popularity of such groups is only growing in all
the countries of the world.
It is typical of such groups to be at war with each other (Shiites and Sunnis), and especially with Western values (Boko Haram against education).
In a milder form, these values are represented in Western countries as
nationalist and religious movements.
For the second group the life of the individual and the fate of humanity as a
whole are important. In general, this group can be called Western values or
universal human values.


Its logical conclusion is the philosophy of transhumanism, which declares
the absolute value of human life and the need for its indefinite extension, as
well as the importance of the prevention of global risks. However, the spread of transhumanism is very slow, in part because such values coexist with many other values, such as traditional religion and even the environmental movement.
Paradoxically, despite the technological advances of the recent decades,
national and religious values are experiencing a renaissance.
In parallel to the transformation of values, the transformation of the picture
of the world is underway. In fact, each block of values implies a certain view of
the world. What "Western" and "traditional" values have in common is the idea
of the future being quite linear. If you take a future vision with exponential
development, it immediately raises the questions of global risks and human
immortality.
Another problem associated with the existence of different values is the existence of nation states with different identities, different government structures and different declared values; every nation state also has its own egoistic interests. Their relation to values is too complex to elaborate here, but their existence is a major contributor to existential risks due to possible wars, arms races, terrorism, the prevention of control, and different levels of control in different parts of the world (which means that criminals could find the least controllable place, like Somalia).
Arms races may cause dangerous technologies to be developed faster than
the methods of their control.
On the other hand, former enemies are able to unite in the face of imminent
danger, if it becomes visible.
We could think of changes in values as changes in the probability of different types of events. There will always be people and groups with opposing values, but if the value of human life dominates, there will be less violence (and we do see a decrease in violence over the centuries). If the value of future generations is high, most people will be less likely to be involved in activities that raise the chances of global risks.
So the effect of values is indirect and hard to measure, but it could change the extinction probability by an order of magnitude.
Ideological payload of new technologies
The idea is to design a new monopoly tech with a special ideological payload
aimed at global risks prevention.
Any new tech suggests a new norm of behavior. Listed below are new technologies and the values that they promote.

Space tech: Mars as backup, long term survival
Electric car: sustainability
Self-driving cars: risks of AI and value of human life
Facebook: empathy and mutual control
Open source: transparency
Computer games and brain stimulation: virtual world

A1.2 2: Improving human intelligence and morality


Intelligence
Nootropics, brain stimulation, and gene therapy for higher IQ
New rationality: Bayesian probability theory, interest in long term future,
LessWrong
Fighting cognitive biases
Many rational, positive, and cooperative people are needed to reduce x-risks (effective altruists)
Empathy
High empathy for new geniuses is needed to prevent them from
becoming superterrorists
Lower proportion of destructive beliefs, risky behaviour, and selfishness
Engineered enlightenment: use of brain science to
make people more united, less aggressive; opening the realm of spiritual
world to everybody
Morality
Preventing worst forms of capitalism: the desire for short term monetary
reward
Promoting best moral qualities: honesty, care, non-violence
The idea here is that if people become better, then the probability of
accidents will decrease. More intelligent, more moral, more responsible people
are less likely to be ill-intentioned or commit a fatal error.
Intelligence (IQ) correlates with less violence and a longer life, and it helps predict the consequences of one's actions.
Empathy will lower violence and help people adopt a holistic world view and the value of preserving human civilization.
Morality will make people act less violently and more altruistically.
A1.2 3. Cold War, local nuclear wars and WW3 prevention
Establishing an international conflict management authority: an international court or a secret institution
Implementing a large project that could unite humanity, such as a
pandemic prevention project
Integrating rogue countries into the global system based on dialogue and
appreciation for their values
Introducing hotlines between nuclear states
Promoting antiwar and antinuclear movements
Using international law as the best instrument of conflict resolution
Peaceful integration of national states
Employing cooperative decision theory in international politics (do not
press on red)
Preventing brinkmanship
Preventing nuclear proliferation
Dramatic social changes
These could include many exciting but different topics: a demise of
capitalism, a hipster revolution, internet connectivity, global village, dissolving
of national states.
Changing the way politics works so that the policies implemented actually
have empirical backing based on what we know about systems.
Introducing a world democracy based on Internet voting.
Maintaining high-level horizontal connectivity between people
A1.2 4. Decentralized risk monitoring
Transparent society: everybody can monitor everybody:
- groups of vigilantes scanning the open Web and sensors
- Anonymous style hacker groups: search in encrypted spaces
Decentralized control:
- local police handle local crime and terrorists;
- local health authorities identify and prevent the spread of disease
- mutual control in professional space
- Google search control
- whistle-blowers inform the public about risks and dangerous activities
Net-based safety solutions:


- ring of x-risk prevention organizations
- personal safety instructions for every worker: short and clear
Economic stimuli:
- carbon emissions trade
- prizes for any risk identified and prevented
Monitoring of smoke, not fire:
- searching for predictors of dangerous activity using narrow AI
Plan A2. Creating Friendly AI
A2.1 Study and Promotion
Study of Friendly AI theory
Promotion of Friendly AI (Bostrom and Yudkowsky)
Fundraising (MIRI)
Slowing other AI projects (recruiting scientists)
FAI free education, starter packages in FAI
The basic idea, the terminology and the development issues related to a
friendly AI are defined by E. Yudkowsky and Nick Bostrom. Yudkowsky created
MIRI and LessWrong.
The basic idea is that a strong, self-improving Artificial Intelligence is a global risk, but if you make it "friendly" it is going to be safe for people and able to prevent all other global risks, as well as to solve other problems of mankind; moreover, it will be a source of a huge number of benefits that we are unable to specify at present, but which will include the prevention of aging, suffering and involuntary death, and the creation of much happier human lives.
However, we do not know how to create a human-level AI and, moreover, do not know how to make it friendly. Because of this, we first need to conduct a thorough study of methods for implementing friendliness, and to assemble a team of scientists and the money to do it.
Gradually, this is starting to happen. Bostrom's book "Superintelligence" basically retells the ideas previously expressed by Yudkowsky, but Bostrom has been much more successful as an academic and has also received the support of Elon Musk, who allocated $10 million in grants in 2015 to study ways to create a
safe AI. In recent years, articles about the risks of a strong AI have been
published in many respected media, and the topic has become widely known.
However, so far the situation is that there are about a hundred ideas on how we could create a safe AI, but not one of them looks bulletproof. Thus,
long-term studies are needed.
But the development of AI is going very fast, which can be seen in the
example of image recognition systems and self-driving cars. It is possible that a
strong AI will be created by 2030, as was proposed by Vinge in 1993.
One way to create a friendly AI is to increase the number of scientists
working on its development as well as improving the overall rationality in
society.
Another way is slowing down the development of the whole AI industry, which, for example, may come about through a brain drain from it or as a result of an economic recession. This, of course, will not work.
A2.2 Solid Friendly AI theory

Theory of human values and decision theory


A full list of possible ways to create FAI, and a sublist of best ideas
An AI that is proven safe, fail-safe, intrinsically safe
Preservation of the value system during AI self-improvement
A clear theory that is practical to implement

The next step should be the creation of the theory of a friendly AI. It should
include a number of blocks, such as the theory of value and the theory of
decision-making. This theory must mathematically prove that the AI will be
safe.
This theory also should be easy to apply in practice. That is, it should be
simple to understand, applicable to different AI architectures, should be
convincing, and may consist of several independent units. It should also
provide multi-layered protection.
Also, the AI's self-improvement must not affect its system of goals.
I have a map of different ways to achieve AI safety: http://lesswrong.com/lw/mid/agi_safety_solutions_map/
A2.3 AI practical studies
Narrow AI
Human emulations
Value loading
FAI theory promotion to all AI developers; their agreement to implement
it and adapt it to their systems
Tests of FAI theory on non self-improving models
It is not enough to develop a good theory of Friendly AI; it is also important that it be applied by whichever team first comes close to the creation of a strong AI. In order for the latter to happen, we need to present the theory to
all teams, including Google, IBM, Facebook, start-ups in the field of Deep
learning, state and military secret projects, as well as foreign companies and
individuals such as (possibly) AI hackers.
Another approach is for the same team that created the friendliness theory to create a friendly AI itself, using its intellectual advantage; but this is unlikely, as it is too complex a task.
In addition, the theory will require some adaptation for a specific method of
creating AI, or that method must be adjusted to the theory. Then it can be
tested on a toy model of AI, somehow kept from self-improving.
Parallel research into the human brain may result in brain emulations and the creation of specialized AI systems, as well as research into loading goals into rational agents. All of the above should help us better understand how a friendly AI theory should work.
Seed AI
Creation of a small AI capable of recursive self-improvement and based on
Friendly AI theory.
Superintelligent AI
Seed AI quickly improves itself and
undergoes hard takeoff
It becomes a dominant force on the Earth
AI eliminates suffering, involuntary death, and existential risks
AI Nanny: creating a super AI that only acts to prevent other existential risks (Ben Goertzel)
In fact, there are two points of view on the development of a strong AI. One is that it will be created in a small private laboratory thanks to a single design breakthrough; it then starts quickly self-improving, easily escapes from the laboratory into the Internet, and takes over the world with good or bad intentions.
The other extreme view is that the AI will be created by the military, or some intelligence agency, or a large state or semi-state company that has unlimited funding for the purchase of computing power and human brainpower. And this
company will also have 10+ years of theoretical advantage (e.g., due to secret
mathematical theorems used in encryption) over open sources of information.
The AI will self-improve quite slowly (not over hours but over years) and its
access to the Internet and other external networks will be opened by its
creators intentionally. There could be many intermediate solutions.
What is important is that the principal outcome of this development will be
the same: an AI controlling the world, with a certain goal system. In this map
we do not touch upon the complexities of creating a friendly AI and many ways
in which an attempt to create it can fail, to which I devote two separate maps.
The self-improvement process of a seed AI is risky, and it can result in a global catastrophe that will destroy humanity.
If successful, we get the same Singleton as in Plan A1, so these paths converge. We could also think of Plan A1 as a story about the state gradually converting into an AI.
Which of these plans is better? Plan A1 is more suitable for the prevention of global risks that arise prior to the creation of a strong AI. Plan A2 depends on whether AI is possible at all and whether we are able to control it. That is, at the start Plan A1 is stronger, but Plan A2 is stronger at later stages. Therefore, they can be implemented in parallel.
However, if there are several competing AI systems, this may lead to a war between them and an ensuing disaster.
Unfriendly AI
Kills all people and maximizes non-human values (paperclip
maximiser)
People are alive but suffer extensively
Another possible outcome is the creation of a strong unfriendly AI that would destroy humanity one way or another. However, in a certain sense this may be better than other ways of extinction, as even the most unfriendly AI will carry the knowledge of mankind, and perhaps will be able to create or simulate people, for example, to assess the prevalence of other AIs in the Universe. And that may be better than oblivion. Or maybe not, if the simulated people suffer and are doomed.
Plan A3. Improving Resilience
The idea here is that if we increase the resilience of infrastructure and
people to any source of death, and do it faster than new means of destruction
are developed, humanity will be immune to any catastrophe.
Briefly, the slogan of this plan is to "become immortal".
The plan as a whole is more complicated and less likely to succeed than
plans A1 and A2, but some of its elements can be implemented in parallel.
A3.1 Improving sustainability of civilization
Implementing intrinsically safe critical systems
Promoting a growth in the diversity of human beings and habitats
Employing universal methods of catastrophe prevention (resistant
structures, strong medicine)
Building reserves (food stocks, seeds, minerals, energy, machinery,
knowledge)
Establishing a widely distributed civil defense, including:
- temporary shelters,
- air and water cleaning systems,
- radiation meters, gas masks,
- medical kits,
- mass education

Firstly, additional layers of security should be introduced in all hazardous systems. This applies to the control systems of reactors and aircraft, and to nuclear and biological laboratories.
Secondly, it should be noted that people are quite homogeneous genetically
because our population has recently, about 70 000 years ago, passed through a
bottleneck. Between any two chimps there is more difference than between any
two human beings. This makes mankind especially vulnerable to infections, the
usual protection against which is genetic diversity. Experiments in the creation of post-human hybrids and chimeras, and the genetic editing of human DNA, could create a new subspecies of man resistant to possible artificial epidemics.
Finally, universal preventive means, such as a universal vaccine, are instrumental in countering entire classes of risks.
Herman Kahn wrote that the best way to win a nuclear war is to possess a high-quality civil defense capable of withstanding enemy retaliation. The strengthening of preventive means includes developing emergency medicine and vaccine production technologies, and analyzing samples of pathogens.
A3.2 Useful ideas to limit the scale of catastrophe
Limiting the impact of a catastrophe by implementing measures to slow
down the growth of areas impacted:
- using technical instruments for implementing quarantine,
- improving the capacity for rapid production of vaccines in response to
emerging threats
- growing stockpiles of important vaccines
Increasing preparation time by improving monitoring and early detection
technologies:
- supporting general research into the magnitude of biosecurity risks and
opportunities to reduce them
- improving and interconnecting disease surveillance systems so that novel
threats can be detected and responded to more quickly
Worldwide x-risk prevention exercises
Ensuring the ability to quickly adapt to new risks and envision them in
advance

A3.3 High-speed tech development needed to quickly pass the risk window

Investing in super-technologies (nanotech, biotech, Friendly AI)


High-speed technical progress helps to overcome the slow process of resource depletion
Investing more in defensive technologies than in offensive technologies
The period of global risks is a historical period, relatively speaking, from the
creation of nuclear weapons to the creation of a strong AI. It is a kind of
adolescence for a civilization, when it can do everything, but still cannot quite
control itself. This period will last approximately 100 +/- 50 years.
There is the idea of rushing through this period more quickly.
This is partly due to the fact that while some of the risks increase
exponentially within the period (biotech & AI), other risks are linearly
distributed therein. By accelerating technological progress, we can accelerate
exponential risks, as we have less time left to think about how to control them,
but reduce linear risks, since the whole period is shorter. In addition, thereby
we can reduce the chance of "black swans", which probably are relatively
evenly distributed.
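To make the "shorter risk window" argument concrete, here is a small sketch with my own illustrative numbers (not estimates from the text). It shows only the evenly distributed, "linear" component: a constant annual probability accumulates roughly in proportion to the length of the window, so halving the window roughly halves this part of the total risk, while the exponentially growing risks remain concentrated near the end of the window in either case.

# Toy comparison: cumulative total of an evenly distributed ("linear") risk
# over a 100-year versus a 50-year risk window. The 0.2% annual probability
# is an arbitrary illustrative assumption.
linear_annual = 0.002

def cumulative_risk(years: int, annual: float) -> float:
    p_survive = 1.0
    for _ in range(years):
        p_survive *= 1.0 - annual
    return 1.0 - p_survive

for window in (100, 50):
    print(f"{window}-year window: cumulative linear risk = "
          f"{cumulative_risk(window, linear_annual):.2f}")
# Prints about 0.18 for 100 years and 0.10 for 50 years: shortening the
# period of vulnerability directly cuts the contribution of such risks.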
Furthermore, if we do not jump on new technologies, we will face
challenges posed by older technologies, i.e., we can fall into the trap of
Malthusian resource exhaustion. That is, if we do not switch to new energy sources and production methods, we will in a few decades find ourselves running out of resources and amidst a civilizational crisis with likely global wars.
A3.4 Timely achievement of immortality on the highest possible level

Researching a nanotech-based immortal body


Diversification of humanity into several successor species capable of
living in space
Mind uploading
Integration with AI
This option becomes relevant when, with the rapid development of technology, we become able to upgrade the human body. If we replace all the cells of the body with nanomachines, then no biological infection will be able to harm it. Such a body could withstand radiation and cold, and live in outer space. Such a person would only need to fear other nanomachinery or an AI (or a virus) taking control of the micro-robots inside his body.
The logical step beyond that would be transferring human consciousness into a computer, allowing it to live in any environment where computers can exist; it would then depend only on AI.
AI based on uploading of its creator


Friendly to the value system of its creator
Its values consistently evolve during its self-improvement
The end result of such a race against increasingly threatening technologies would be that a person turns into an artificial intelligence. Perhaps it will be one person, the creator of this AI.
Plan A4. Space Colonization
Elon Musk is one of the few who advocate more than one way to prevent global risks. Most people tend to get hung up on just one.
Namely, Musk speaks about the importance of creating a friendly AI and the importance of moving into space as a means of protection against disasters on
Earth. The same idea was expressed by Hawking and many others.
Unfortunately, this idea is weaker than the previous ones, as space colonization requires the development of space technologies, and these new technologies can also create a disaster that can propagate in space. For example, space rockets can be kinetic weapons. AI can spread by radio. Nanobots can fly like dust from one celestial body to another. A biological infection can spread to a spacecraft just as, for example, AIDS is carried by people inside ships and aircraft on the Earth. New energy sources can be used for huge explosions in space that can sterilize entire planets and even the Solar System. There could also be a war between space colonies, or terrorism inside colonies infiltrated by hostile propaganda.
So moving to space is not a panacea, and the development of appropriate
technologies may even have a negative value. Settling in space will save us only from the weakest of risks, such as asteroids or global warming, with which we can cope even without it. However, space colonization can still increase our chances of survival, especially if we are able to travel to other stars.
A4.1 Temporary asylums in space
Space stations as temporary asylums (ISS)
Cheap and safe launch systems
In this section we will consider tech that is already here or can be created
on the basis of the existing technologies in the next 10-15 years.
The International Space Station (ISS) already exists, but it cannot operate
autonomously for more than a year. If mankind dies very quickly with the
environment preserved, the six men and women on the ISS can be the
beginning of a new humanity. But the chances of such a disaster are rather
small.

In the next few years we can build a base on the Moon, and in a couple of
decades a base on Mars. If only 10-15 people get to live on the Moon, the Moon's value as a backup for mankind will be no more than that of the ISS.
A4.2 Space colonies near the Earth
Creation of space colonies on the Moon and Mars (Elon Musk) with a
population of 100-1000 people.
Elon Musk is now building a space launcher that could deliver 100 people to Mars, and it could fly in the 2020s.
A 1000-person colony on one of the nearest celestial bodies could exist independently for decades, but it still will not be self-sufficient or able to continue technological progress. And it possibly represents the upper limit of what we could reach at our current level of space technology.
If a million people live on Mars, then they will probably be capable of self-sustainment and become the basis for a second humanity, without ever coming back to the Earth if it is lost. With current technologies, sending so many people to Mars would take several decades and huge amounts of money that could be spent differently on Earth. That is, there is an opportunity cost.
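For a rough sense of scale, here is a back-of-envelope sketch using purely hypothetical per-settler costs (Musk has reportedly spoken of a long-term target of around $200,000 per ticket; present-day costs per person in deep space are orders of magnitude higher):

# Back-of-envelope cost range for a one-million-person Mars colony.
# Both per-settler figures are assumptions for illustration only.
settlers = 1_000_000
cost_low = 200_000        # optimistic long-term target per settler (assumed)
cost_high = 10_000_000    # closer to present-day economics (assumed)

print(f"Optimistic total:  ${settlers * cost_low / 1e12:.1f} trillion")
print(f"Pessimistic total: ${settlers * cost_high / 1e12:.1f} trillion")
# Roughly $0.2-10 trillion: the "huge amounts of money" and the opportunity
# cost mentioned above.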
Colonization of the Solar System
Setting up self-sustaining colonies on Mars and large asteroids
Terraforming planets and asteroids using self-replicating robots and
building space colonies there
Setting up millions of independent colonies inside asteroids and comet
bodies in the Oort cloud
This option involves the colonization of the Solar System on the basis of next-generation technology: robots and robot-replicators. Such colonization may be much easier and cheaper: in principle, a single robot-replicator could start a Solar-System-wide wave of colonization, and it could be built by a private person.
However, there is more risk involved: nanorobots could get out of control, be used to create dangerous giant structures in the Solar System, fight each other, or simply become space gray goo.
4.3. Interstellar travel
Orion style, nuclear powered generation ships with colonists
Starships operating on new physical principles with immortal people on
board

Von Neumann self-replicating probes with human embryos


The first idea is that of a generation starship traveling rather slowly, with people living and having children on board. Such a starship could be built on the basis of modern technology with a budget of about a trillion dollars. That is the "Orion" project envisioned in the 1960s: a spaceship driven by the explosions of nuclear bombs. It is quite feasible, although cumbersome and not environmentally friendly, and it could reach the nearest star in about 40 years.
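As a quick, purely illustrative arithmetic check (a sketch: the 4.37 light-year distance to Alpha Centauri is an assumed input, not a figure from the text), the 40-year trip time implies a cruise speed of roughly a tenth of the speed of light:

# Rough check of the cruise speed implied by a 40-year trip to the nearest star.
# Assumes ~4.37 light years to Alpha Centauri and ignores acceleration time.
distance_ly = 4.37
trip_years = 40.0
speed_fraction_of_c = distance_ly / trip_years
print(f"Required average speed: {speed_fraction_of_c:.2f} c")  # about 0.11 c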
Another idea for using space travel as a means of escaping global risks is to move through space so fast that no local impact could affect the whole of human civilization (this also means communication channels are cut off). This was the case in the human past, when the means of transport were very slow (ships, vehicles). Today it would require interstellar travel at near-light speed.
Of course, there is a chance that new spaceships built on new physical principles will become available. But new principles bring new risks. Even an Orion spacecraft could threaten life on Earth if it exploded at launch or were turned into a kinetic weapon. New principles of space travel would mean new sources of energy and new ways of spreading damaging effects, which is a recipe for new global risks.
One more option is using von Neumann probes, that is, interstellar robot-replicators. They could be loaded with human embryos (or DNA), which would be brought up by a robot nanny. The mass of such a starship could be only a few grams.
This item in the map is connected by a vertical yellow stripe to another
block (creating nanotech immortal bodies) which means a strong connection.
Such nanotech bodies will probably be able to live in space.
Interstellar distributed humanity
Many unconnected human civilizations
New types of space risks (space wars, planets and stellar explosions, AI
and nanoreplicators, ET civilizations)
As a result of space colonization, we may get something like a loosely bound "galactic empire". Some of the planets will die, others will fight wars with each other, and others will thrive.
Plan B. Survive the catastrophe
The best way to escape global risks is to prevent a global catastrophe. The
higher the technological level when it happens, the harder the catastrophe will
be to survive. On the other hand, there is a scenario, which I call "the
oscillations before the singularity," when a very large catastrophe precedes the
creation of truly strong supertechnologies. For example, the proliferation of
low-cost nuclear weapons will lead to an intense nuclear war that in turn will
result in humanity regressing to an earlier stage. Or a pandemic destroying 90
percent of the population.

For a variety of reasons the chances of such a "semi-global" catastrophe are two to three times higher than those of a global catastrophe (something like a Pareto law of the distribution of risks: for example, one dead per two or three wounded in different accidents).
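A minimal numerical sketch of this intuition, assuming (purely for illustration) that catastrophe severity follows a power-law tail with exponent alpha, so that the chance of an event killing more than a fraction x of the population falls off as x to the minus alpha:

# Ratio of "semi-global" events (>50% of the population killed) to fully
# global events (~100% killed) under a hypothetical Pareto-like tail.
def tail_ratio(alpha, semi=0.5, full=1.0):
    return (full / semi) ** alpha

for alpha in (1.0, 1.5, 2.0):  # hypothetical tail exponents
    print(alpha, round(tail_ratio(alpha), 2))  # 2.0, 2.83, 4.0

Under these assumptions a ratio of two to three corresponds to a tail exponent between roughly 1 and 1.6.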
In this case, the availability of shelters can play a key role in the survival of
humanity.
On the other hand, we should not place too much hope on surviving in a shelter.
If shelters are super complex and expensive, they will be too scarce and
can become targets in a nuclear war.
If they are numerous and cheap, they cannot be good enough to provide
full long-term autonomy.
In addition, no type of shelter is universal: every shelter is designed to protect against a certain type of disaster. For example, before World War II the USSR built thin-walled shelters against chemical weapons, which proved completely useless against conventional bombing.
Inside a shelter, people can continue to produce weapons or dangerous viruses. Shelters cannot protect against dangerous nanorobots and AI, or against certain types of biological weapons, such as those that spread slowly and secretly.
Therefore, it is better to invest in different versions of Plan A and not to pin our hopes on shelters. But a certain number of shelters would not hurt.
B1. Preparation
Fundraising and promotion
Writing a textbook on rebuilding civilization (Dartnell's book The Knowledge)
Stockpiling knowledge, seeds and raw materials (Doomsday vault in
Norway)
Founding survivalist communities
The first step in creating shelters should be designing the project and allocating the money. Most likely this will be done not by individuals but by states that build shelters in case of a nuclear war.
A good idea is to create a knowledge bank for restoring civilization from scratch and mount it on heavy-duty vehicles. Dartnell's book The Knowledge is such an attempt.
The Doomsday Vault is a seed storage facility in Norway that could be used after a large-scale catastrophe; in effect, it is also a form of knowledge storage.
Also, the survivalist movement, whose members mainly train for fun to survive in difficult conditions, could become useful in the case of certain types of global catastrophe.
B1 is connected to A4.1 ("Temporary asylums in space"), since in fact they are the same thing.

B2. Buildings
Building underground bunkers, space colonies
Seasteading
Converting existing nuclear submarines into refuges
Natural refuges
Uncontacted tribes
Remote villages
Remote islands and Antarctica
Oceanic ships
Mines
The idea of constructing autonomous ultra-deep bunkers sounds pretty crazy, but bunkers at depths of up to 1 km with about one year of autonomy are quite realistic and probably already exist. Mines could be converted into such bunkers.
Nuclear submarines are designed for long autonomous existence. Their
autonomy is about one year.
Distant isolated tropical islands could serve as shelters in the event of a pandemic; some islands completely avoided the Spanish flu. Deserted villages in the forest can also make good refuges. Tribes that have never had contact with the outside world may well outlive the rest of humanity.
"Water World". The sea is full of ships. Many of them have a high degree of
autonomy, for example, can go for long fishing trips, some have standalone
nuclear engines. Tankers carry a large amount of fuel, and container ships are
full of food and goods. They can survive some accidents, especially a nuclear
war or a biological attack.
There is also the Seasteading movement aimed at the creation of
autonomous communities floating above the sea. These settlements can also
withstand some types of disasters.
Another type of buildings that can be used as human refuge are research
stations in Antarctica. They also have great insulation and autonomy. Deep
mines could also help survive miners working inside.
B3. Readiness

Crew training
Distributing crews to bunkers
Implementing crew rotation
Building different types of asylums
Freezing embryos

It is not sufficient to have a refuge: you also need to prepare people for
living in it. Pre-built shelters should be crewed with well-trained and healthy
men and women who must have the skills to build a civilization from scratch, and fitted with appropriate instruments. It is necessary to carry out regular crew rotation: one team is resting while the other is "waiting for disaster." People who end up in a bomb shelter by accident may be extremely ineffective at restoring civilization.
We can also use the power of the law of large numbers, that is, build a lot of very different shelters in different parts of the world in the hope that at least one of them will turn out to work.
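A minimal sketch of this law-of-large-numbers effect, with purely hypothetical survival probabilities: if each shelter independently rides out a given catastrophe with probability p, the chance that at least one of n shelters survives is 1 - (1 - p)^n.

# Hypothetical numbers only: even if a single shelter is very unlikely to
# survive, many independent and diverse shelters give humanity better odds.
def at_least_one_survives(p, n):
    return 1 - (1 - p) ** n

print(at_least_one_survives(0.05, 10))   # ~0.40
print(at_least_one_survives(0.05, 100))  # ~0.99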
There is also the idea of robotic shelters with frozen human embryos somewhere in the Antarctic ice. While this is impossible now, it may become realistic within 20 years. If sensors stop receiving signals from terrestrial civilization, the system would wait ten years and then activate artificial wombs; people would be born, taught by robots, and humanity would be restored. This, of course, is not so simple.
B4. Miniaturization for survival and invincibility
Building adaptive bunkers based on nanotech
Colonizing the Earth's crust with miniaturized nanotech bodies
Moving into simulation worlds inside small self-powered computer systems
This scenario is close to science fiction but still needs to be mentioned.
Future tech will allow the creation of advanced protection systems much more
sophisticated than simple underground buildings.
This presents a problem of the sword and the shield. If a shield is always
stronger, it means that a catastrophe is preventable at any level of tech
development. But this also means that the shield should be based on the same
tech level as the sword or even be more advanced.
B5. Rebuilding civilization after catastrophe
Rebuilding the population
Rebuilding science and technology
Preventing future catastrophes
What we need is not only to survive a disaster but to be able to rebuild our
civilization, human population and technology. Moreover, we need to learn
lessons from past disasters and prevent future disasters.
Restarting civilization from scratch is not easy since most of the easily
accessible deposits of resources will have been exhausted. Using the ruins of
human civilization as a source of scrap metal will not contribute to the
development of a sustainable self-sufficient society.
According to some researchers, a human community capable of self-renewal should include about 1,000 members; smaller communities will be threatened by degradation and extinction due to accidental coincidences of circumstances (R. Hanson).

Reboot of civilization
Several reboots may occur
In the end, there will be a total collapse or a new level of civilization
Success for the shelter strategy means that survivors will restart civilization in its entirety, which may require several hundred or even thousands of years.
If semi-global disasters occur rather frequently, it may take several cycles to
restart. Ultimately, however, there are only two stable states: ultimate
destruction or transformation into super-civilizations, immune to global risks.
(link)
Plan C. Leave Backups
This plan is, in a sense, a gesture of despair. The chances of its success are
very small, and that success is quite illusory.
The idea is that we are not the last civilization in the Universe, and
someone will find our remains and resurrect us using them. It will be either the next terrestrial civilization, if life on the Earth survives, or an extraterrestrial civilization, if such exist and are capable of interstellar travel.
C1. Time capsules with information
Underground storages of information and DNA for future non-human
civilizations
Eternal disks from Long Now Foundation (or M-disks)
The idea is to leave information on media that can last tens of thousands or even millions of years. Creating them is not quite simple, but DNA samples can remain intact for such long periods. The M-DISC format is designed for 1,000 years of storage. The Long Now Foundation is developing a device that can work and store information for 10,000 years. DNA strands are recoverable for millions of years, and maybe even longer if preserved in a cold place.
C2. Messages to ET civilizations
Sending out interstellar radio messages with encoded human DNA
Creating storages on the Moon, other planets
Storing frozen brains in cold places
Sending out Voyager-style spacecraft carrying information about humanity
METI, or sending a radio message into space, can also serve as a means of preserving information (although it carries the risk of attracting the attention of dangerous extraterrestrial civilizations, see https://en.wikipedia.org/wiki/Active_SETI).
Until now very few transmissions have occurred, and the chances that they will be received by someone are negligibly small. Radio and television programs broadcast from the Earth carry a lot of information, but they will probably dissipate in space before they reach any possible civilization.
We can also create a storage on the Moon, perhaps on a pole or in a cave,
where the eternal cold and the lack of geological changes and radiation will
preserve its content for tens of millions of years, and perhaps longer. On the
Moon we would store digital information, artifacts, tissue samples and even
plasticized or frozen human brains as well as DNA.
In addition, there are the remains of several spacecraft that were sent to Mars, to other planets and out of the Solar System. Some of them carry brief messages to aliens engraved on metal plates (https://en.wikipedia.org/wiki/Voyager_Golden_Record).
C3. Preservation of earthly life
Creating conditions for re-emergence of new intelligent life on the Earth
Directed panspermia (Mars, Europa, space dust)
Preservation of biodiversity and highly developed animals (apes,
habitats)
A new civilization could arise on Earth after humans, and we should strive to preserve the species most likely to give rise to it, that is, the most highly developed mammals (monkeys, dogs, rats, dolphins) as well as birds. Some chimps are already using tools (http://news.discovery.com/animals/female-chimps-seen-making-wieldingspears-150414.htm) and could probably develop general intelligence within several million years. For other species it would take tens of millions of years, and we should bear in mind that due to rising solar luminosity all earthly life will go extinct in 100 million to 1 billion years, with complex life dying off first (http://www.sciencedaily.com/releases/2013/12/131216142310.htm). It could also happen much earlier if various positive feedbacks on global warming are taken into account, including CO2, methane and water vapor as greenhouse gases.
The more highly developed the life that survives, the faster it may form a new intelligent civilization that could then find traces of humanity.
Microorganisms will survive almost any kind of disaster, as they exist at depths of up to several kilometers, but they are unlikely to give rise to new multicellular life, because that may take about a billion years, and before then the heating Sun will have made the Earth unsuitable for life.
It is important to preserve the integrity of the biosphere, because only as a whole will it be able to evolve and give rise to new intelligent life.
We can also spread life beyond the Earth, which is called directed panspermia (https://en.wikipedia.org/wiki/Directed_panspermia). Mars and Jupiter's moon Europa are best suited for this purpose in the Solar System.
About ten dwarf planets and moons in the Solar System have under-ice oceans
of liquid water, and they could be fertilized by earthly life (although we should
check beforehand if there is any local life). We can go even further and send
some dust with frozen microorganisms in the direction of the nearest stars.
If life spreads across the Galaxy, then sooner or later it will find new planets, which may result in complex biospheres and intelligent life. This intelligence could then return to the Solar System and find traces of humanity, although very few of them will remain after billions of years.
C4. Robot-replicators in space
Mechanical life: nanobots ecosystem and von Neumann probes based on
nanobots
Preservation of information about humanity for billions of years in
replicators
Safe narrow AI regulating such robots-replicators
It may happen that humanity perishes but some form of mechanical life remains: robot-replicators with limited AI. For example, if gray goo appears, an ecosystem of nanomachines will form, and it could store some information or traces of its constructors (an example is well described in the novel The Invincible by Stanislaw Lem, https://en.wikipedia.org/wiki/The_Invincible).
If such robot-replicators spread in space, they will be much more stable data carriers than any solid objects or hoards, and could remain relatively unchanged for billions of years. Quite powerful error-correction systems could be built into mechanical systems, which would prevent their Darwinian evolution and the loss of information.
Of course, such devices must be operated by a certain computer program
which may be relatively primitive or a narrow AI system unable to self-improve
but having superior abilities in some domains, e.g., having the ability to design
mechanisms or to adapt to the environment.
Resurrection by another civilization
Aliens create a civilization that has a lot of common values and traits
with the human civilization
Resurrection of people based on the information about their
personalities
This plan can be regarded as successful if the terrestrial civilization, the species Homo sapiens, or the personalities of some individuals are brought back to life.
Perhaps this will be accomplished by another civilization, similar in some respects to humans and sharing a significant number of our values and traits.
Plan D. Improbable Ideas
Plan D does not require us to do anything; it simply reflects the hope that something improbable and miraculous will save us. The chances of that, frankly speaking, are small.
D1. Saved by non-human intelligence
Maybe extraterrestrials are looking out for us and will save us
We send radio messages into space asking for help if a catastrophe is
inevitable
Maybe we live in a simulation and the simulators will save us
Aside from AI, there are three hypothetical types of supermind that could save mankind:
- aliens, or rather an alien AI,
- the hosts of the simulation, if we live in a simulation,
- and God.
In any case, we can somehow appeal to the higher mind, asking for help and protection, or hope that it will see our problems on its own and save us.
Calling on aliens for help seems the most rational but also the most hopeless option. The difference from Plan C here is that we are not passively leaving traces but actively requesting help in the near-term future, which requires that aliens be very near, that is, already hidden in the Solar System or living around the nearest stars, which is very improbable.
So it is very unlikely that ETs exist in the immediate vicinity of the Earth, that we can accurately aim a radio message at them, that they will have time to arrive before we die, and that their intentions are positive.
Of course, there is a chance that we live in a sort of a cosmic zoo where we
are being constantly monitored, and when we achieve a dangerous level, the
threat will be eliminated by the help from outside.
But it is also possible to imagine a scenario where space "berserkers" are watching us and will destroy human civilization if it crosses some threshold, unknown to us, in its technological development (this may be the development of supertechnologies, nanotech or AI) after which our civilization would become invulnerable to the "berserkers". As a result, the hypothetical benefits of help from extraterrestrial intelligence are offset by its hypothetical harm.
The hypothesis of the existence of God can be rationally reduced to the idea that we live in a simulation run by a very high-level and highly moral intelligence; for the simulation's inhabitants there would be no practical difference. Unfortunately, the amount of suffering in the world suggests that this hypothesis is unlikely (God is either immoral or non-existent).

D3. Quantum immortality


If the many-worlds interpretation of QM is true, an observer can survive
any sort of death including any global catastrophe (Moravec, Tegmark)
It may be possible to establish an almost one-to-one correspondence between the observer's survival and the survival of a group of people (e.g. if all of them are aboard a submarine)
Other human civilizations must exist in an actually infinite Universe.
Quantum immortality implies the survival of the observer in at least one line of the possible future. And if even one person is alive, humanity is technically still alive too. Moreover, since those lines of the future in which only one observer survives and continues to live indefinitely are less likely than the lines in which a group survives, a group has the better chance of surviving.
On the other hand, it is almost impossible to prove the efficiency of this method, except by surviving for thousands of years through an incredible combination of circumstances. At the same time, as with personal immortality, the largest share of the probability may go to sub-optimal outcomes; for example, a group of people could survive only as guinea pigs for a hostile AI.
And of course, this is a very hypothetical theory with known objections. Firstly, not everyone accepts the many-worlds interpretation of quantum mechanics. Secondly, the problem of personal identity has to be solved.
There are, however, a number of considerations which could enhance quantum immortality if it works, that is, increase the share of positive outcomes among the total set of options in which I survive. For example, I can explicitly tie my survival to the survival of a group of people, say, if we are all in a submarine. In most cases the destruction of the submarine entails the death of the entire crew, so if I survive, the crew is likely to have survived too.
Finally, all this can work even without quantum theory if we assume that the Universe is infinite, or at least very large. In such a universe there is an infinite number of other civilizations, some of which are very similar to the human one, and the larger the universe, the smaller the difference, up to an exact match down to the last atom. (Tegmark calculated how large the Universe would have to be to provide the required level of similarity.) There are a number of physical mechanisms that could provide the required size of the Universe, such as cosmological inflation.
That is, sooner or later humans will arise again somewhere, and some aliens may turn out to be similar to humans, since the evolutionary mechanisms are more or less the same (there is a term for this, convergent evolution: the formation of the same form by different evolutionary branches, as with fish and dolphins).
D2. Strange strategy to escape Fermi paradox
A random strategy may help us to escape some dangers that killed all previous civilizations in space
The emptiness of space raises the probability of the conjecture that all previous civilizations have perished; therefore, the civilizational path that seems rational or conventional does not lead to safety, and we should choose a random and unexpected path instead.
If all previous civilizations have perished, all obvious ways of development may lead to extinction: a strong AI, worldwide totalitarianism or space colonization will not help. If we want to make sure our way of development is different, we need to select it randomly.
But that does not mean that we should simply let things take their course; at the beginning there would have to be some kind of global power able to choose a random and unique way.
But the potential harm from this randomness may outweigh the benefit of choosing the most appropriate strategy. Not to mention the fact that other civilizations may also have used the random approach, and it did not help, since we do not see them.
Also, creating a global power itself triggers many global catastrophic risks before the idea could be coherently applied.
D4. Technological precognition
Predicting the future based on advanced quantum technology and
avoiding dangerous world-lines
Looking for potential terrorists using new scanning technologies
Creating a special AI to predict and prevent new x-risks
If we could perfectly predict the future in a multiverse, we could probably avoid global risks easily. (Knowledge of a single inevitable outcome does not help, cf. ancient tragedies such as Oedipus.)
But the strengthening of prognostic tools, the development of futurology and, finally, the creation of artificial intelligence will give us a tremendous ability to improve our predictions of the future.
Some new physical effect that directly receives information from the future might help; that is, we would need to create a kind of "quantum radar". For now this remains in the realm of fantasy.
There are also cases where people claim to have prophetic dreams or to anticipate the future in other ways. Most likely this is a statistical aberration (something will always coincide with something, and our brains are wired to detect coincidences). Perhaps, however, it is worth taking a closer look at the analysis of brain activity in altered states of consciousness.
D5. Manipulation of the extinction probability using Doomsday argument
Taking the decision to create more observers in case unfavorable event X
starts to happen, thereby lowering its probability (method UN++ by Bostrom)

Lowering the birth rate to get more time for civilization


This method is even more esoteric than the previous ones, since it not only relies on an unproven mathematical theory but also uses a clever way of manipulating this theory to influence the probability of future events. The idea comes from Nick Bostrom's article on "UN++".
The DA is based on the Copernican mediocrity principle: we are most likely somewhere in the middle of the group from which we are randomly selected.
On the one hand, it allows us to predict, for example, the future number of
people on the Earth knowing their past numbers, and thus to predict the
duration of the existence of human civilization. This is the essence of the classic
Doomsday argument.
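For illustration, here is a Gott-style form of the argument in numbers (a sketch; the figure of roughly 100 billion past births is the estimate used in the text below):

# Gott's "delta-t"-style Doomsday argument: treating ourselves as a random
# sample of all humans who will ever be born, with 95% confidence the number
# of future births lies between N_past/39 and 39*N_past.
N_past = 100e9  # rough number of humans born so far
low, high = N_past / 39, N_past * 39
print(f"95% interval for future births: {low:.1e} to {high:.1e}")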
On the other hand, by conditioning the total number of people who will ever be born on some random event, we could change the perceived probability of that event.
This can be used as described by Bostrom in the UN++ thought experiment. In a hypothetical future the UN controls the probability of a gamma-ray burst that could significantly harm humanity by committing to sharply increase the number of people after such a burst. Since this population surge is unlikely according to the original Doomsday argument, the commitment lowers the risk of the gamma-ray burst.
Unfortunately, this probability shift can be used against anything except human extinction itself.
But there is one idea of how to do it: control the number of births per year. The classical DA predicts only the total number of future people, which will be about 100 billion, roughly the same as in the past; given the high birth rate (about 100 million per year) and a growing world population, the next 100 billion people could be born rather quickly, within several centuries. However, if the birth rate (but not mortality) falls sharply, accidentally or intentionally, the next 100 billion people will be born over a much longer period.
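The timing claim is simple arithmetic; a sketch using round numbers from the text (the reduced birth rate of 10 million per year is purely a hypothetical illustration):

# If roughly 100 billion more people remain to be born, the time this takes
# scales inversely with the annual number of births (at a constant rate;
# a growing population would shorten the first figure).
remaining_births = 100e9
for births_per_year in (1e8, 1e7):  # the text's ~100 million/yr vs. a sharp drop
    years = remaining_births / births_per_year
    print(f"{births_per_year:.0e} births/yr -> about {years:,.0f} years")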
Unfortunately, there are other variants of the DA that cannot be so easily manipulated.
I have a separate map on the various DA options.
D6. Control of the simulation (if we are in it)

Living an interesting life so our simulation will not be switched off


Not letting them know that we know we live in a simulation
Hacking the simulation and controlling it
Negotiating with the simulators or praying for help

One of the risks is that we're inside a computer simulation created by a supercivilization with a purpose unknown to us, and that supercivilization can switch it off, or start testing different variants of "doomsday" inside it. I'm working on another map dedicated to the simulation, where all these ideas will be discussed in more detail.
The share of simulations testing different doomsday scenarios could be quite large, as such simulations are needed by any civilization that spreads through the Universe and wants to know how many other supercivilizations there are. For this purpose that supercivilization would have to carry out numerical simulations of the Fermi paradox and, in particular, find out how often civilizations self-destruct.
However, if our civilization overcomes all risks within the simulation, it can
still get shut down as it is no longer needed for the purposes of the experiment.
Bad plans
Bad plans are those that are actually better not to implement, as they would only increase the likelihood of a global catastrophe. However, these ideas have been expressed repeatedly, and attempts may even be made to implement them, so it is important to list and criticize them.
Prevent x-risk research because it only increases risk
Do not advertise the idea of man-made global catastrophe
Don't try to control risks, as it would only give rise to them
As we can't measure the probability of a global catastrophe, it may be unreasonable to try to change it
Do nothing
The essence of this proposal is to conceal the fact that new technologies bring new risks, because we want to create every new technology sooner and get useful things from it: for example, to obtain life extension sooner through the development of biotech and nanotech, at the price of a small increase in the risk of global catastrophe, or to gain a competitive advantage in international confrontation or in finance.
But here we have the tragedy of the commons, because if many actors
slightly raise a global risk for their personal gain, the total risk will grow much
higher and make a catastrophe inevitable.
The following example is often presented: research in the field of nanotechnology was largely frozen out of fear of "gray goo" after Bill Joy's article.
It is also said that the idea of man-made disasters can inspire someone,
and that person will become a super-terrorist. But this concealment does not
work, as all the interested parties are already aware of the idea of global risks,
basically from literature and movies. This idea is already well known. But the
ways to prevent risks are much less known.
Even so, some ideas may be worth concealing, burying in technical language, or limiting to exchange within a trusted expert network, as in the case of ideas for creating bioweapons.


The premise that monitoring systems create new risks is true, but at a level of danger where even a single bioterrorist can destroy all of mankind, some system of control is needed, or we are doomed.
If we do not deal with a possible disaster, it becomes inevitable.
And while we can't measure the exact probability of a global risk, we can estimate the expected survival time and the frequency of smaller catastrophes.
Controlled regression
Using a small catastrophe to prevent a large one (Willard Wells)
Luddism (Kaczynski): relinquishment of dangerous science
Creating an ecological civilization without technology (World made by
hand, anarcho-primitivism)
Limiting personal and collective intelligence to prevent dangerous
science
Radical antiglobalism and diversification into multipolar world (may raise
probabilities of wars)
The idea of controlled regression, that is, lowering the level of technology, has appeared repeatedly in various forms. For example, one post-apocalyptic sci-fi story describes a world in which the death penalty was introduced for inventing the wheel.
If there are no hazardous technologies around, they will not create global risks. But if sustained regression were achieved, humanity would soon die out by itself, like most previous earthly species. Or it would create technology again, because regression cannot be maintained without a total global monitoring system enforcing it in all parts of the Earth.
Moreover, the very achievement of regression requires certain dangerous acts. For example, a nuclear war that destroyed the leading technological countries of the world could be an instrument of such regression. But it would not only be a senseless crime; it might lead to the total extinction of mankind, or fail to stop progress for more than a few years, or even accelerate it in bad directions, such as the creation of new types of weapons in the remaining countries. Theoretically, humanity could find itself in a situation where small catastrophes happen so often that they prevent the creation of dangerous technology, but such a scenario is unlikely to be sustainable.
Another approach, which Kaczynski tried to implement, is targeted terrorism against individual scientists involved in the development of AI. He got life in prison. Large-scale terrorism would entail drastic control measures that balance it, or an arms race between terrorists and national security services, which would produce even larger acts of violence and more research into totalitarian control, thereby accelerating existential risks. Luddism has no future.
https://en.wikipedia.org/wiki/Neo-Luddism


Another idea is the creation of an ecological lifestyle in which mechanical work is replaced by manual work. This utopia is described in the novel "World Made by Hand" (https://en.wikipedia.org/wiki/World_Made_By_Hand).
Another form of regression is the attempt to lower the intelligence of people so that they lose the ability to invent, with the help of some poison or virus, or even through the destruction of the system of universal education and through tele-duping. But, of course, such methods would not work, or would only reduce the total survivability of mankind.
Depopulation
Natural causes: pandemics, war, hunger (Malthus)
Extreme birth control
Deliberate small catastrophe (bio-weapons)
Computerized totalitarian control
Mind-shield controlling dangerous ideation by means of brain implants
Secret police that uses mind control to find potential terrorists and stop
them
The idea that a population reduction may help to counter global risks is well
known. Firstly, because a smaller population consumes fewer resources, and
secondly, because it is easier to control a smaller population and fewer people
in the world will be dangerous terrorists and gloomy supergeniuses.
The first idea was put forward by Malthus, who suggested that wars,
famines and epidemics will naturally adjust the size of a too quickly growing
population. However, a Malthusian catastrophe cannot result in human
extinction.
Bill Gates proposed regulating the birth rate by reducing infant mortality and other soft methods. But the effect of such soft techniques only becomes visible over periods of several decades, and during that time many of the exponentially growing risks may materialize.
The fact that billionaires have expressed the idea of reducing the birth rate causes a reasonable fear that they have devised more dangerous techniques for reducing the human population. Such conspiracy theories could undermine any reasonable efforts at population regulation.
Declining birth rates could also be engineered technologically, through some form of biological or chemical agent that reduces fertility.
In general, the need for this will soon disappear, because even without it the world's total fertility rate has fallen sharply in recent years and is now only about 2.35 births per woman, only slightly above the replacement threshold, and it continues to fall thanks to education, city life and rising living standards.

In addition, regarding overpopulation as a problem prevents us from effectively seeking a cure for old age.
Choosing the way of extinction: UFAI
Quick dying off is better
Any super AI will have some memories about humanity
It will use simulations of human civilization to study the probability of its
own existence
It may share some human values and distribute them throughout the
Universe
Granted that extinction is inevitable, we had better choose the way it will happen.
The worst case would be a painful dying-off caused by a slow pandemic or radioactive contamination (as in "On the Beach").
Immediate death resulting from a vacuum phase transition or a large-scale asteroid collision seems a better option.
Immediate death by UFAI may be the best, as the AI will probably keep the information about humanity intact, run human simulations, or preserve some human values or traits. But it could also be the worst, if its goal system includes human torture (Roko's Basilisk, and https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Scream).
Attracting good outcome by positive thinking
Start partying now
Preventing negative thoughts about the end of the world and about
violence
Assuming a maximum positive attitude to attract positive outcome
This plan has been expressed repeatedly in various forms within religious and magical communities. It can take the form of collective meditations for the benefit of all. So far there is no scientific evidence that a private or collective intention can shape the future in some non-physical way.
The core idea of a "feast in time of plague" is to accept the inevitability of a global catastrophe while trying to realize as many personal values as possible before it happens. In this case the value is entertainment, but it could be other values. But as a result of inaction, a disaster may occur even earlier than expected.
Another idea along these lines is that if everyone is engaged in personal entertainment, no one will stage dangerous scientific experiments, attempt to take over the world, or arrange attacks, and everything will change for the better. But some people may find fighting for world domination a form of entertainment.

Conclusion
These plans for x-risk prevention may become a starting point for a productive discussion, which may result in some kind of official law or an international roadmap for fighting global risks.
This map is open for additions, and I will constantly update it based on new ideas and considerations.
But being based on a survey of the existing literature, it is currently the most complete and best-ordered roadmap of the known methods of x-risk prevention.
Unfortunately, the situation in the world is deteriorating.
Unfortunately, the situation in the world is deteriorating.
This map is part of a larger project that will cover most futuristic topics: AI, life extension and other x-risk fields. The closest to this map is the map of the typology of global catastrophic risks.
Other maps will cover AI failure levels, AI safety solutions, the causal structure of the global catastrophe and double scenarios of the global catastrophe.
Literature:

Active shields
It has been suggested that all sorts of active shields could be created as a means of preventing global risks. An active shield is a means of monitoring and influencing the sources of risk across the whole globe; in effect, it is an analog of an immune system for the planet. The most obvious example is the idea of creating a global missile defense system (ABM).
"Active" means that such shields may respond relatively autonomously to any stimulus that falls under their definition of a threat, and that their protection covers the entire surface of the Earth. It is clear that an autonomous shield is dangerous because of possible uncontrolled behavior, and it becomes an absolute weapon in the hands of those who operate it. As we know from the debates about ABM, even if an active shield is a purely defensive weapon, it still gives the protected side an advantage in attacking, because it need not fear retribution.
Comparing active shields to the human immune system as an ideal form of protection is not correct, because the human immune system is not ideal. It provides statistical survival of a species, in the sense that some members of the species live quite long on average, but it does not provide unlimited individual survival. Everyone is infected by diseases during their lifetime, and many die of them; for any person a disease can be found that kills them. In addition, the immune system works well only when it knows the pathogen exactly. If it does not, it takes time for the pathogen to show up and more time for the immune system to develop a response. The same happens with computer antivirus programs, which are also an active shield: while they provide for the sustainable existence of the computer population as a whole, each individual computer is still infected with a virus from time to time, and data are often lost. Moreover, an antivirus does not protect against a new virus for which updates have not yet been sent out, which gives the new virus time to infect a certain number of computers. If the threat were gray goo, we would understand that it is gray goo only after it has spread. There are, however, immune systems operating on the principle "everything that is not allowed is forbidden," but they too can be deceived, and they are more prone to autoimmune reactions.

In short, an immune system is good only when there is strong redundancy in the main system. We do not yet have the potential for duplicating terrestrial living conditions, and space settlements face a number of policy challenges. In addition, all immune systems produce false positives, which manifest as autoimmune diseases such as allergies and diabetes; these make a significant contribution to human mortality, of the same order of magnitude as cancer and infectious diseases. If an immune system is too rigid, it creates autoimmune disease; if it is too soft, it misses some risks. Since an immune system covers the entire protected object, its failure poses a threat to everything it protects. A targeted attack on the immune system itself makes the whole system defenseless; this is how AIDS works, which spreads faster the more the immune system fights it, because it is inside the immune system.
The ideas of a Bioshield and a Nanoshield are widely discussed. These shields involve spreading across the surface of the Earth thousands of trillions of control devices that can quickly check any agents for danger and quickly destroy the dangerous ones. Further tightening of controls on the Internet and CCTV monitoring cameras spread around the world are also kinds of active shields. However, the example of a global missile defense system shows many significant problems with any active shield:
1. They lag painfully behind the source of threat in their development time.
2. They must act immediately throughout the entire Earth, without exception. The more pinpoint the threat, the denser the shield must be.
3. They have already caused serious political complications. If a shield does not cover the entire surface of the Earth, it can create a situation of strategic instability.
4. Any shield is created on the basis of more advanced technologies than the threat it controls, and these new technologies can create their own level of threats.
5. A shield can be a source of global risk in itself if it starts some kind of autoimmune reaction, that is, destroys what it was supposed to protect; or if control over the shield is lost and it defends itself against its hosts; or if its false alarm causes a war.
6. A shield cannot be completely reliable; the success of its operation is probabilistic in nature. Then, in the case of a continuing global threat, its failure is just a matter of time.
7. A shield must have centralized management but autonomy on the ground for rapid response.

For example, an anti-asteroid shield would create many new challenges to human security. First, it would provide technology for the precise control of asteroids, by which small impactors could be directed at the Earth, even secretly, in the spirit of cryptowars. Secondly, the shield itself could be used for an attack on the Earth: if a 50-gigaton bomb hangs in a high orbit, ready to rush anywhere in the Solar System, I will not feel more secure. Thirdly, there are suggestions that the motions of asteroids have become well synchronized over billions of years, and any violation of this balance could turn an asteroid into a constant threat, regularly passing near the Earth. This would be especially dangerous if, after such an intervention, humanity fell back to a post-apocalyptic level.

Note that each dangerous technology can be a means of its own prevention:

Missiles are shot down by missile defense.

Nuclear strikes are delivered at sites where nuclear weapons are produced.

AI controls the entire world so that a wrong AI is not created anywhere.

Biosensors do not let biological weapons spread.

A nano-shield protects against nanorobots.

Shields often do something exactly opposite to what they were created for. For example, it has been argued (Bellona report, chapter IV.1, "Three cracks of the Non-Proliferation Treaty") that the Non-Proliferation Treaty copes poorly with the black market but does a good job of promoting the proliferation of the "peaceful atom" (i.e., the construction of research nuclear reactors in every country that wants them), which involves dual-use technologies. The strong doors that protect cockpits after the attacks of September 11 will not allow terrorists to break into the cockpit, but if they get there anyway (for example, because the pilot himself is a terrorist), the passengers and flight attendants will not be able to stop them. If there is a system for controlling the flight from the ground, there appears a chance of seizing the aircraft by radio through that system.
Finally, all the shields proposed so far are based on the assumption that we have some ideal system that supervises and controls another, less perfect one. For example, a perfect police force controls an imperfect society; if the police are corrupt, an internal security department controls them, and so on. Obviously, such an ideal system does not exist in reality, since the monitoring system and the controlled object are cut from the same cloth. One can imagine a multi-level hierarchical system of shields, but in that case there is a risk of a split between the different regulatory levels. Finally, any shield has a blind spot: it cannot control its own command center.

Existing and future shields


Here I present a brief but as complete as possible list of shields that have already been created or are likely to be developed in the future.
1) The global missile defense system. It suffers from political and technological problems and exists only in an embryonic stage.
2) The IAEA. It works, but poorly: it has missed several military nuclear programs.
3) The global fight against drugs. It is in balance with its problem: it constrains it to some degree, but no more.
4) A system of total information surveillance, which could be called "Orwellian control" in honor of Orwell's dystopia 1984, where such a system is vividly described: monitoring of every person using video cameras, identification chips, tracking of the Internet and interception of telephone conversations. Technically the system is achievable, but in reality only a few percent of what could be done has been deployed, although it is being actively developed. The problems of such a system related to legitimacy, international scope, blind zones and hackers are already becoming evident and openly discussed. In theory, it could form the basis for all other control systems, since control over the conduct of all human beings might be enough to keep dangerous bio, nano and AI devices from appearing at all (rather than catching already finished dangerous devices in the environment).
5) Mind control. This system involves implanting controlling chips into the brain (or reading thoughts by analyzing the encephalogram; there are already first results along this road). This may not be as hard as it seems if we find the group of cells onto which internal dialogue and emotional states are projected; something like this is already done by lie detectors. Such a device could solve the problem even of spontaneous crimes, such as sudden aggression. On the other hand, the potential for misuse of such technology is unlimited. If such a system were managed by people, it could receive a wrong command to destroy all of humanity. (The same problem arises with ground-based flight control proposed as a measure against terrorists: while it would reduce the risk of the capture of a single aircraft, it would create the theoretical possibility of intercepting control over all airborne planes at once and using them for a massive ramming attack on buildings or nuclear reactors.) Finally, it would not give absolute protection, because it can be hacked, and also because some disasters come not from evil intent but from thoughtlessness.
6) Anti-asteroid defense. Surveillance of potentially dangerous objects exists, but the means of interception are insufficient and have not been formally developed. (However, the Deep Impact probe was used in 2005 for a collision with comet Tempel 1, leaving a crater on the comet's body and slightly changing its trajectory.)
10) BioShield. The fight against bioterrorism is currently at the level of intelligence work and international control agreements. There are recommendations for the safe development of biotechnology (ranging from the voluntary restrictions adopted at Asilomar in the 1970s to the book "Guide for Biocontrol"; however, a number of the proposed restrictions have not yet been adopted).
11) NanoShield. Under preliminary discussion. There are recommendations for safe design developed by the Center for Responsible Nanotechnology.
12) AI-shield. Protection against the creation of a hostile AI. The Singularity Institute in California (SIAI) discusses security issues for strong AI, that is, the problem of its friendliness; there are recommendations for safe construction.
13) Regular police and security services.
We can also describe the time sequence of shield responses in the case of a dangerous situation.
The first level of defense is to maintain civilization in a conscious, peaceful, balanced state and to prepare to work on the prevention of risks at all other levels. At this level information exchange, open discussions, edited volumes, fundraising, advocacy, education and investment are important.
The second is direct computer control of human beings and dangerous systems, so that a situation of global risk simply cannot arise. At this level we find the actions of the IAEA, global video surveillance, interception of Internet communications, etc.
The third is the suppression of an already created danger with the help of missiles, anti-nanorobots, etc. This level is similar to ABM systems protecting against weapons of mass destruction.
The fourth is escape from the Earth or hiding in bunkers (although the precautionary principle implies that we should begin doing this at the same time as the first item).
Saving the world balance of power
New technologies can disturb the military-political balance in the world by providing one of the sides with unprecedented capabilities. Eric Drexler describes the problem as follows: "In seeking a middle path, we could attempt to find a balance of power based on a balance of technologies. This would apparently extend the situation that has preserved a certain measure of peaceful coexistence for four decades. But the key word here is 'apparently': the coming breakthroughs will be too swift and destabilizing for the old balance to continue. In the past a country could suffer a technological lag of several years and nevertheless maintain an approximate military balance. However, with rapid replicators and advanced AI, a lag of only a single day could be fatal." Briefly stated, the more rapidly technologies develop, the smaller the chances that they will remain in equilibrium between the different countries, and with the forces of restraint and control. A deliberate disturbance of the balance is also dangerous: an attempt by one country to openly pull ahead in the sphere of military supertechnologies can provoke its enemies to aggression on the principle of "attack under the threat of losing advantage."
A possible system of control over global risks
Any protection from global risk rests on a certain system of global observation and control. The more dangerous the risk and the greater the number of places where it can arise, the more total and effective this system of control must be. An example of a contemporary control system is the IAEA. Shields can also be control systems, or contain one as a special structure; but shields can act locally and autonomously like an immune system, whereas a control system assumes the collection and transmission of data to a single unified center.
The final version of such global control would be an "Orwellian state", where a video camera looks out of every corner, or chips are implanted into the brain of every person, to say nothing of computers. Alas, with respect to video surveillance this version is already almost realized, and in homes it could be implemented technically at any moment, wherever there are computers with a permanent Internet connection. The question is not so much observation as the transfer and, especially, the analysis of these data; without the aid of AI it would be difficult for us to check all this information. Systems of mutual accountability and civil vigilance, promoted as an alternative to a totalitarian state in the fight against terrorism, look attractive: thanks to absolute transparency everyone can control everyone else; but with respect to their feasibility much is still obscure. Problems:

In order to be effective, this system of control must cover the entire globe without exception. This is impossible without some form of unified authority.

Any system of control can be misled, so a truly effective monitoring system must be multiply redundant.

It is not enough to observe everyone; it is necessary to analyze all this information in real time, which is impossible without AI or a totalitarian government apparatus.


Furthermore, this top level will not be able to control itself; therefore a system of reverse accountability will be required, either to the people or to an internal security service.
Such a system would contradict the notions of democracy and freedom that emerged in European civilization and would provoke fierce resistance, at least until terrorism spreads further. Such a system of total control would also create the temptation to apply it not only against global risks but also against any kind of lawbreaking, down to the use of politically incorrect speech or listening to unlicensed music.
Those who control such a system must have a full and correct picture of all global risks. If it covers only biological risks, but not the creation of AI or dangerous physical experiments, the system will be inferior. It is very difficult to distinguish dangerous biological experiments from safe ones: in both cases DNA sequencers and experiments on mice are used. Without reading a scientist's thoughts one cannot understand what he has conceived. And the system does not protect against accidentally dangerous experiments.
Since such a system would be deployed all around the world, it could simplify the use of any weapon that affects every human being. In other words, seizure of power over a system of total world control would give authority over all people and make it easier to do anything to them, including harm. For example, one could mail out some medicine to everybody and check that all people had swallowed it; those who refused would be arrested.
Thus, a system of total control seems to be the most obvious way to counter global risks. However, it contains a number of pitfalls that can transform it into a global risk factor in its own right. In addition, a system of total control implies a totalitarian state, which, once equipped with the means of production in the form of robots, may lose any need for human beings at all.

Conscious stop of technological progress


Proposals are often made to stop technological progress, either by force or by appealing to the conscience of scientists, in order to prevent global risks. There are various options for implementing such a stop, and all of them either do not work or contain pitfalls:
1. A personal refusal to develop new technologies has virtually no impact. There will always be others who will do it.
2. Agitation, education, social action or terrorism as means of forcing people to abandon the development of dangerous technologies do not work. As Yudkowsky writes: any strategy that requires unanimous action by all people is doomed to failure.


3. Renouncing technological innovation in a certain territory, for example in one country, cannot stop technological progress in other countries. Moreover, if a responsible country abandons the development of some technology, the lead will simply pass to more irresponsible countries.
4. A world agreement. For example the IAEA, and we know how badly it works.
5. Conquest of the world by a force that could then regulate the development of technology. But in the process of such a conquest the chances of Doomsday weapons being used by nuclear powers facing loss of sovereignty are high. In addition, as Drexler notes, the winning force would itself be a major technological power with huge military might and a demonstrated willingness to use it; can we trust such a force to suppress its own progress? (Engines of Creation.)
6. Peaceful unification of nations in the face of a threat, just as the UN emerged in the war against fascism, with power delegated to it to stop progress in countries that refuse to join. This is probably the best option, combining the advantages of the previous ones and mitigating their shortcomings, but it becomes feasible only after the common threat has become apparent.
7. Nick Bostrom has proposed the concept of differential technological development: projects that increase our security are stimulated and accelerated, while potentially dangerous projects are artificially slowed. By controlling the relative speed of development of different areas of knowledge, we obtain a safer combination of attack and defense technologies.

Means of a preventive strike

It is not enough to have a system of total control: one must also have the ability to stop the risk. A strike by nuclear missiles on the point where a risk originates is now considered a measure of last resort, for example the destruction of a biolaboratory where a dangerous virus has just been created.
There is a curious inconsistency here with programs for building survival bunkers: if such a bunker is secret and invulnerable, it is also difficult to destroy, and it must contain a fully equipped laboratory and scientists in case of disaster. It is therefore possible that a superweapon will be created inside a bunker (for example, in the Soviet Union an underground nuclear plant was built to continue producing nuclear weapons during a protracted nuclear war). People who are already inside an effective bunker may be more psychologically inclined to create a superweapon for striking the surface. Consequently, either bunkers will themselves pose a threat to human survival, or the means of preventive strike will destroy all the bunkers that could have been used for survival after some other disaster.
However, a strike on a single point in space does not work against a systemic crisis or against dangerous information. A computer virus cannot be cured by a nuclear strike, nor will such a strike free people from addiction to a superdrug. Moreover, a strike is possible only while the risk has not yet spread from its point of origin. If the recipe for a supervirus has leaked onto the Internet, it is impossible to catch it back. The modern military machine is already powerless against network threats such as terrorist networks, which metastasize throughout the world. Similarly, a future computer virus will not merely threaten the information on a disk: it could take over computer-controlled factories around the world, invisibly create its own physical carriers (say, microscopic robots or software backdoors in ordinary products), and through them return to the network, for example by radio.
Furthermore, a strike (or even the possibility of one) creates strategic instability. For example, a strike by a ballistic missile with a conventional warhead on a terrorist location could now trigger the early-warning system of a likely opponent and lead to war.
Finally, a strike takes time. This time must be less than the time between detecting the development of a threat and the moment it becomes irreversible (for example, if grey goo is attacked, it is important to destroy it before it has managed to reproduce into billions of copies and spread over the Earth). Today the time from detection to a strike on any point on Earth is under two hours, and could be reduced to minutes with space-based weapons (though decision-making takes longer). If only 15 minutes passed between the decision to write the code of a dangerous virus and its release, even that speed would be insufficient, and it is clearly not enough if dangerous airborne nanorobots have already begun to be sprayed somewhere.
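A rough way to see why response speed matters is to compare the strike delay with the doubling time of a self-replicating threat. The sketch below is only illustrative: the doubling time and the delays are assumed numbers, not estimates made in this book.

    # Illustrative sketch: growth of a self-replicating threat while a
    # preventive strike is being prepared. All numbers are assumptions.

    def population_after_delay(initial_copies, doubling_time_min, delay_min):
        """Exponential growth of a replicator during the response delay."""
        doublings = delay_min / doubling_time_min
        return initial_copies * 2 ** doublings

    DOUBLING_TIME_MIN = 15.0  # assumed doubling time of the replicator

    for delay in (15, 120, 24 * 60):  # 15 minutes, 2 hours, 1 day
        copies = population_after_delay(1, DOUBLING_TIME_MIN, delay)
        print(f"delay {delay:5d} min -> about {copies:.3g} copies")

    # With a 15-minute doubling time, a 2-hour delay already gives ~256
    # copies, and a one-day delay gives ~8e28 - far beyond what a strike
    # on the original point could remove.

Under these assumptions the window in which a point strike still works is measured in minutes, which is the point made above.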
The effectiveness of a strike on the starting point of a risk will change substantially once space colonies are founded (even purely robotic ones: there too a failure could turn a colony into a "cancer" inclined to unlimited self-reproduction and to spreading "toxins" such as dangerous nanorobots and superbombs; yet the most promising path of space exploration is precisely the use of self-replicating robots built from local materials). By the time a signal about danger travels, say, from a moon of Jupiter to the Earth, and a fighting "fleet" (that is, rockets with nuclear warheads) travels back to restore order (to burn everything down indiscriminately), it will already be too late. One could of course keep a "fleet" in orbit around every moon and asteroid where self-replicating robotic colonies exist, but what if a mutation occurs within the fleet itself? Then a fleet that supervises other fleets is needed, cruising between the moons of the planets, and then yet another interplanetary fleet to control them. In short, the situation does not look strategically stable: above a certain level the monitoring systems begin to interfere with one another. The likely inability to supervise remote colonies may make it advantageous for civilizations to remain confined to their parent planet, which is one more proposed resolution of the Fermi paradox.

Removal of risk sources to a considerable distance from the Earth

It is theoretically possible to remove sources of risk far away from the Earth; first of all this concerns dangerous physical experiments. The problems with this approach:
- Once we have the means to build powerful experimental installations far from the Earth, we will also have the means to deliver the results of those experiments back quickly.
- It cannot stop everyone from running similar experiments on the Earth, especially if they are simple.
- It will not protect us from the creation of a dangerous strong AI, since AI can spread as information.
- Even beyond the orbit of Pluto dangerous experiments are possible that would affect the Earth.
- It is difficult to know in advance which experiments should be carried out beyond the orbit of Pluto.
- There is no technical capability to deliver large quantities of equipment beyond the orbit of Pluto within the coming decades, especially without using dangerous technologies such as self-replicating robots.

Creation of independent settlements in remote corners of the Earth

Creating such settlements, like wilderness-survival skills, will hardly help in the case of a truly global catastrophe, since it would either cover the entire surface of the Earth (if it is some unintelligent agent) or track down all people (if it is an intelligent agent). An independent settlement is vulnerable to both, unless it is a secret armed base, in which case it belongs rather under the heading of "bunkers".
If we are talking about survival after a very large but not final catastrophe, it is worth recalling the experience of forced food requisitioning (http://en.wikipedia.org/wiki/Prodrazvyorstka) and collective farms in Russia: the city dominates the village by force and takes away its surplus. In a systemic crisis the main danger will come from other people. Not for nothing is the basic currency in the novel Metro 2033 a cartridge for a Kalashnikov rifle; as long as there are more cartridges than peasants, it will be more profitable to plunder than to farm. Complete dissolution of humans into nature in the spirit of feral children is also conceivable, but it is improbable that any representatives of Homo sapiens could survive a truly global catastrophe that way.

Compiling a dossier on global risks and raising public understanding of the related problems

Publishing books and articles on the theme of global risks raises awareness of the problem in society and leads to a more exact list of global risks. An interdisciplinary approach allows different risks to be compared and their possible complex interactions to be considered. Difficulties of this approach:
- It is not clear to whom exactly such texts are addressed.
- Terrorists, rogue states and regular armies can take advantage of ideas about creating global risks from the published texts, which may increase risks more than it prevents them.
- Wrong and premature investment can lead to disappointment in the struggle against risks, precisely when that struggle is actually needed.

Refuges and bunkers

Various kinds of refuges and bunkers can increase the chances of humanity surviving a global catastrophe, but the situation with them is not simple. Separate autonomous refuges can exist for decades, but the more autonomous and long-lived they are, the more effort is needed to prepare them in advance. Refuges must give mankind the ability to reproduce further; hence they must contain not only enough people capable of reproduction, but also a stock of technologies that will allow survival and reproduction in the territory that is to be resettled after leaving the refuge. The more contaminated that territory is, the higher the level of technology required for reliable survival.
A very large bunker could continue developing technologies inside itself even after the catastrophe, but in that case it would be vulnerable to the same risks as the whole terrestrial civilization: internal terrorists, AI, nanorobots, leaks and so on. If the bunker is not capable of continuing technological development, it is more likely doomed to degradation.
Further, a bunker can be either civilizational, that is, preserving the majority of the cultural and technological achievements of civilization, or "specific", preserving only human life. For "long" bunkers (prepared for long-term habitation) the problems of raising and educating children and the risk of degradation will arise. A bunker can either live off resources stockpiled before the catastrophe or engage in its own production; in the latter case it will simply be an underground civilization on a contaminated planet.
The more a bunker relies on modern technologies and is culturally and technically independent, the more people must live in it (though in the future this will change: a bunker based on advanced nanotechnology could even be unmanned, holding only frozen human embryos). To ensure simple reproduction and the transmission of the basic human trades, thousands of people are required. These people must be selected and be in the bunker before the final catastrophe, preferably on a permanent basis. However, it is unlikely that a thousand intellectually and physically excellent people would want to sit in a bunker "just in case"; they might instead staff it in two or three shifts and be paid a salary for it. (Russia is now starting the Mars 500 experiment, in which six people will live fully autonomously, in terms of water, food and air, for 500 days; this is probably the best result we currently have. In the early 1990s the USA ran the Biosphere-2 project, in which people were to live for two years fully self-sufficiently under a dome in the desert; the project ended in partial failure when the oxygen level began to fall because of unforeseen growth of microorganisms and insects.) An additional risk for bunkers is the well-known psychology of small groups confined to one space, familiar from Antarctic expeditions: the growth of animosity, fraught with destructive actions that reduce the survival rate.
A bunker can be either unique or one of many. In the first case it is vulnerable to various catastrophes; in the second, a struggle between bunkers for the resources remaining outside is possible, or a continuation of war if the catastrophe resulted from war.
A bunker will most likely be underground, at sea or in space; a space bunker may in turn be buried inside an asteroid or the Moon. For a space bunker it will be harder to use the remaining resources on Earth. A bunker can be completely isolated, or it can allow "excursions" into the hostile external environment.
A nuclear submarine can serve as a model of a sea bunker: it has high protection, autonomy, maneuverability and resistance to hostile influences. Moreover, it is easily cooled by the ocean (cooling closed underground bunkers is not a simple problem), and it can extract water, oxygen and even food from it. In addition, ready boats and technical solutions already exist, and a submarine can withstand shock and radiation. However, the autonomous endurance of modern submarines is at best about a year, and they have little space for storing supplies.
The modern space station, the ISS, could support the life of several people autonomously for about a year, though there are problems of independent landing and adaptation. It is not clear whether a dangerous agent capable of penetrating every crack on Earth would dissipate over so short a period.
There is a difference between gas- and bio-refuges, which can be on the surface but divided into many sections to maintain a quarantine regime, and refuges intended as shelter from an even minimally intelligent opponent (including other people who did not manage to obtain a place in the refuge). In case of a biological danger, an island with strict quarantine can serve as a refuge if the illness is not airborne.
A bunker can have various vulnerabilities. For example, against a biological threat an insignificant penetration is enough to destroy it. Only a high-tech bunker can be completely autonomous. A bunker needs energy and oxygen; a nuclear reactor can supply energy, but modern machines can hardly remain reliable for more than 30-50 years. A bunker cannot be universal: it must be designed against specific kinds of threats known in advance, such as radiation or biological contamination.
The more fortified a bunker is, the fewer such bunkers mankind can prepare in advance and the harder it is to hide each of them. If after a certain catastrophe only a limited number of bunkers remain and their locations are known, a secondary nuclear war could finish off mankind with a countable number of strikes on known places.
The larger the bunker, the fewer such bunkers can be built. Any bunker is vulnerable to accidental destruction or contamination, so a limited number of bunkers with a certain probability of contamination effectively defines a maximum survival time for mankind. If bunkers are connected with one another by trade and other exchanges of materials, cross-contamination between them is more probable; if they are not connected, they will degrade faster. The more powerful and expensive a bunker is, the harder it is to build it unnoticed by a probable opponent, and the more readily it becomes a target; the cheaper the bunker, the less durable it is.
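The statement that a limited number of bunkers with some probability of contamination defines a maximum survival time can be illustrated with a toy model. The per-year failure probability and the bunker counts below are assumptions chosen for illustration, not estimates from this book.

    # Toy model: time until all bunkers have failed, assuming each
    # independent bunker is contaminated or destroyed with a fixed
    # probability per year. All parameters are illustrative assumptions.

    def prob_all_failed_by(years, n_bunkers, p_fail_per_year):
        """Probability that every one of the independent bunkers has failed."""
        p_one_failed = 1 - (1 - p_fail_per_year) ** years
        return p_one_failed ** n_bunkers

    def median_survival_time(n_bunkers, p_fail_per_year):
        """First year by which all bunkers have failed with probability >= 50%."""
        year = 0
        while prob_all_failed_by(year, n_bunkers, p_fail_per_year) < 0.5:
            year += 1
        return year

    for n in (1, 3, 10):
        print(n, "bunkers ->", median_survival_time(n, 0.05), "years (median)")

    # With a 5% yearly failure chance the median survival time is about
    # 14, 31 and 53 years for 1, 3 and 10 bunkers: more bunkers buy time,
    # but the time stays finite and grows only slowly with their number.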
Accidental shelters are also possible: people who happen to escape in metro tunnels, mines or submarines. They will suffer from the absence of central authority and from the struggle for resources. If resources in one bunker are exhausted, its people may make armed attempts to break into a neighboring bunker, and people who escaped by chance (or under the threat of the approaching catastrophe) may attack those who locked themselves in a bunker in advance.
Bunkers will suffer from the need to exchange heat, energy, water and air with the external world. The more autonomous a bunker is, the less time it can exist in full isolation. Deeply buried bunkers will suffer from overheating: any nuclear reactor or other complex machinery requires external cooling, cooling by external water will unmask the bunker, and it is impossible to have energy sources that produce no waste heat, while at depth the rock temperature is always high. The growth of temperature with depth limits how deep a bunker can be (the geothermal gradient averages about 30 degrees C per kilometer, which means that bunkers deeper than about one kilometer are impossible, or require huge cooling installations on the surface, as in the gold mines of South Africa; deeper bunkers might be possible in the ice of Antarctica).
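A back-of-the-envelope check of this depth limit, using the 30 C/km gradient quoted above; the surface temperature and the tolerable limit are assumptions added for illustration.

    # Rock temperature versus depth under the average geothermal gradient
    # cited above (about 30 C per kilometer). The surface temperature and
    # the assumed habitability limit are illustrative assumptions.

    SURFACE_TEMP_C = 15.0      # assumed average surface temperature
    GRADIENT_C_PER_KM = 30.0   # geothermal gradient from the text
    MAX_TOLERABLE_C = 50.0     # assumed limit without large cooling plants

    def rock_temperature(depth_km):
        return SURFACE_TEMP_C + GRADIENT_C_PER_KM * depth_km

    max_depth_km = (MAX_TOLERABLE_C - SURFACE_TEMP_C) / GRADIENT_C_PER_KM
    print(f"rock temperature at 1 km: {rock_temperature(1.0):.0f} C")
    print(f"rock temperature at 3 km: {rock_temperature(3.0):.0f} C")
    print(f"depth at which rock reaches {MAX_TOLERABLE_C:.0f} C: {max_depth_km:.1f} km")

    # With these assumptions the surrounding rock is already ~45 C at 1 km
    # and ~105 C at 3 km, which is why depths beyond about a kilometer
    # require heavy surface cooling, as in the South African gold mines.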
The more durable, universal and effective a bunker must be, the earlier one has to start building it; but then it is harder to foresee the future risks. For example, in the 1930s many gas-proof bomb shelters were built in Russia, and they turned out to be useless and vulnerable to heavy high-explosive bombs.
The quality of the bunker a civilization can build corresponds to its technological level of development, but this means it also possesses the corresponding means of destruction, so an even more powerful bunker is required. The more autonomous and self-sufficient the bunker is (for example, one equipped with AI, nanorobots and biotechnologies), the more easily it can eventually do without people altogether, giving rise to a purely machine civilization.
People from different bunkers will compete over who emerges onto the surface first and therefore owns it, which will tempt them to go out onto still-contaminated areas of the Earth.
Automatic robotic bunkers are also possible: frozen human embryos are stored in some artificial uterus and, after hundreds or thousands of years, begin to be grown. (The technology of embryo cryonics already exists, and although work on an artificial uterus is restricted for bioethical reasons, such a device is possible in principle.) Such installations with embryos could even be sent to other planets. However, if such bunkers are possible, the Earth will hardly remain empty; most likely it will be populated by robots. Besides, if a human child raised by wolves considers itself a wolf, whom will a human raised by robots consider itself to be?
So the idea of survival in bunkers contains many reefs that reduce its usefulness and probability of success. Long-term bunkers must be built over many years, but they can become obsolete in that time, since the situation will change and it is not known what to prepare for. Possibly a number of powerful bunkers built during the Cold War still exist. The limit of present technical capability is a bunker with roughly a 30-year autonomy, but building it would take about a decade and require billions of dollars of investment.
Information bunkers stand apart: they are intended to convey our knowledge, technologies and achievements to possible surviving descendants. For example, a store of seed and grain samples has been created for this purpose on Spitsbergen in Norway (the Doomsday Vault). Variants with preservation of human genetic diversity by means of frozen sperm are possible. Durable digital media, for example etched discs whose text can be read through a magnifier, are being discussed and implemented by the Long Now Foundation. Such knowledge could be crucial for not repeating our errors.
A possible location for shelters is the asteroids and comet bodies of the Kuiper belt, of which there are trillions and inside which it is possible to hide.

Rapid expansion into space

There is an assumption that mankind will survive if it divides into parts that rapidly settle space separately. For example, the well-known physicist S. Hawking advocates creating a spare Earth to avoid the risks threatening this planet. Under rapid dispersal, no single influence carried out in one place could catch up with all of mankind. Alas, there are no technological preconditions for an accelerated movement of mankind into space: we have only a rather vague idea of how to build starships and probably could not construct them without the aid of AI and robotic manufacturing. Therefore mankind can begin to settle space only after it has overcome all the risks connected with AI and nanotechnologies, and consequently space settlements cannot serve as protection against those risks. Moreover, space settlements within the Solar system will be extremely dependent on terrestrial deliveries and vulnerable to ordinary rocket attack. Even if mankind begins to flee the Earth at near light speed on superfast starships, this will still not make it safe. First, information spreads faster still, at the speed of light, and if a hostile AI arises it could penetrate computer networks even aboard a rapidly departing starship. Second, however fast a crewed starship is, an unmanned pursuit craft built later can catch up with it, being lighter, faster and more advanced. Finally, any starship carries with it all the terrestrial supertechnologies and therefore all the human weaknesses and problems connected with them.
One could use METI, the sending of signals to the stars, to secure some kind of human immortality, perhaps through our own SETI attack (though this requires a powerful AI), or simply transmit human DNA information and our knowledge in the hope that someone will find it and resurrect us.
Finally, one could launch a wave of von Neumann probes: robots that spread through the universe the way plants do, by means of seeds. They could begin by absorbing the comets of the Oort cloud. If the human genome is firmly encoded in these robots, they would try to recreate man and his culture on any suitable celestial body. It is believed that random mutations in nanotechnological systems are practically impossible, so such von Neumann probes could retain their original program indefinitely. On the other hand, such robots would demand more resources than robots without the additional program to rebuild people, and would lose to them in the competition for the development of the universe. It is unlikely that only one wave of von Neumann probes would ever be launched; most likely there would be several (unless mankind unites beforehand). See more about von Neumann probes later, in the chapter on the Fermi paradox. At the same time, stopping such a wave from the center is practically impossible, because the probes are very small and do not maintain radio contact with the Earth. The only option is to launch a much faster wave of more efficient replicators that swallow up all the solid bodies suitable for the reproduction of the previous wave.
This can be considered a variant of panspermia. Another variant is simply to scatter very hardy living cells and spores of microorganisms in space, so that life evolves somewhere anew if the Earth is destroyed.

Everything will somehow sort itself out

This position on the prevention of global risks rests on a belief in the innate stability of systems and on the irrational approach of solving problems only as they arrive. It contains several obvious and implicit assumptions and logical errors in the spirit of "maybe it will not happen". In fact, this is the position of the governments of most countries, which solve problems only after they have become obvious. Formulated in the spirit of US military doctrine, it would sound like this: by analyzing and preventing all risks as they arrive, we will create a monitoring system for each concrete risk that gives qualitative and quantitative superiority over each source of risk at each phase of its existence.
However, already in the present technological situation we cannot consider risks only as they arrive: we do not know where to look, and risks can appear faster than we have time to analyze and prepare for them. For this reason I try to expand the forecast horizon by considering hypothetical and probable technologies that have not yet been created but could well be, given current tendencies.
Another variant is the picture of the future called "sustainable development". This, however, is not a forecast but a project. It rests on the assumption that technologies will develop enough to help us overcome the energy and other crises, yet will somehow not generate new improbable and risky possibilities. The probability of such an outcome is insignificant.

Degradation of civilization to the level of a stable state

Some people hope that the threat of global catastrophe will resolve itself if mankind, because of resource shortages or previous catastrophes, degrades to some extent. Such degradation is hard to sustain, because as long as the stockpiles left over from civilization have not been plundered and all the weapons have not been spent, there is no benefit in engaging in primitive agriculture: it is much easier to plunder one's neighbors. Competition between surviving societies will inevitably lead to a new growth of weapons and technologies, however ideologically suppressed, and within a few hundred years civilization will return to the modern level and thus revive all the same problems. Or, on the contrary, it will degrade toward even more primitive forms and die out.

Prevention of one catastrophe by means of another

The following examples of mutual neutralization of dangerous technologies and catastrophes are theoretically possible:
1. Nuclear war stops the development of technology in general.
2. A totalitarian AI prevents bioterrorism.
3. Bioterrorism makes the development of AI impossible.
4. A nuclear winter prevents global warming.
The essence is that a large catastrophe makes a global catastrophe impossible by throwing mankind several evolutionary steps back. This is possible if we enter a long period of high probability of large catastrophes but low probability of global catastrophes. We have been in such a period since the second half of the twentieth century; nevertheless, this has not prevented us from successfully approaching the moment when, probably, only decades remain before the creation of many means of global destruction.
In a sense it would be pleasant to prove a theorem that global catastrophe is impossible because very large catastrophes will not allow us to come near it. However, such a theorem could only be probabilistic in character, since some dangerous supertechnologies, above all AI, can appear at any moment.
Besides, any big accident (short of a catastrophe that throws us back) raises people's awareness of risks, though here a certain stereotype operates: the expectation of a repetition of exactly the same risk.

Accelerated evolution of man

One more idea that gives some hope of survival is that the cyber-transformation of humans may proceed faster than the creation of dangerous weapons. For example, if the majority of the cells of the human body were replaced by mechanical analogues, a human would become invulnerable to biological weapons. Uploading consciousness into a computer would make a person independent of the fate of his body altogether, since backup copies of the information could be kept, and such computers could be the size of a speck of dust and hide in the asteroid belt. In that case only the complete physical destruction of the Solar system and its vicinity would destroy such "superpeople". However, it is difficult to say to what extent such devices would still be human rather than versions of artificial intelligence. Besides, this scenario, though theoretically possible, is not very probable, so we cannot rely on it; and it may simply not happen in time, since creating weapons is much easier than transforming humans into cyborgs.
Another point is that cyborgization opens new risks connected with damage to the artificial parts of the human body by computer viruses. The first such event has recently occurred: a demonstration attack by security experts on a radio-controlled cardiac pacemaker, in the course of which it was reprogrammed into another operating mode, which could potentially have led to the death of the patient had the experiment been carried out on a living person. A future cyber-human will carry thousands of remotely controlled medical devices.

Possible role of international organizations in the prevention of global catastrophe

We do not know for certain who exactly should be engaged in preventing global catastrophes. Worse, many organizations and private individuals are ready to take it on - who would object to becoming the savior of the world? (Yet only a couple of years ago there was not a single person in the world working on the prevention of global catastrophe as an interdisciplinary problem and receiving a salary for it.) Let us list the different functional types of organization that could be responsible for preventing risks of global catastrophe.
1) "United Nations". The heads of the world's governments decide together how to cope with risks; this is how the struggle with global warming now proceeds. But not everyone can agree, so the weakest and most conciliatory proposals are adopted, and the states are not ready to transfer their power to the United Nations.
2) World government. The problem lies in the very possibility of its formation. The process of creating a world government is fraught with war, which itself creates global risk. Besides, such a government cannot be neutral: from the point of view of some groups it will be the spokesman of the interests of others. It will be either weak or totalitarian; a totalitarian government will generate resistance, and the struggle against that resistance is fraught with massive terrorism and creates new global risks.
3) Intelligence services that secretly resist global risks, the way they fight terrorists. Problems: secrecy leads to loss of information; intelligence services compete with one another; national and universal interests get confused, since intelligence services serve their state rather than people in general; and, owing to their specific character, intelligence services are not oriented toward a large-scale long-term vision of complex problems and cannot independently assess, without involving outside experts, the risks of technologies that do not yet exist.
4) Secret groups of private individuals. It is possible that a certain secret private organization will set itself the goal of benefiting all mankind. However, the intermediate stage would be the creation of a (probably secret) world government. Problems: competition between rescuers (there may be several such secret organizations with different methods and different pictures of the world); the necessity of passing through the stage of a world government; society's aversion to conspiracies and counteraction from intelligence services; and the mixing of personal and general aims. Even Bin Laden thinks that a world caliphate will be the salvation of mankind from the mechanistic and selfish West. Private groups creating strong AI may also understand that they will thereby obtain an absolute weapon, and have plans to use it to seize power over the world. In any case, a secret society very often implies a planned stage of "mutiny", an explicit or implicit establishment of power and influence over the whole world by penetration or direct capture; and here it naturally faces competition from other such societies as well as opposition from society and the secret services.
5) Open discussion and self-organization in society. Some authors, for example D. Brin, consider that the alternative to secret organizations and governmental projects in the prevention of global risks is the self-organization of responsible citizens, leading to what in English is called "reciprocal accountability": the actions of the supervising services are open to the control of those whom they supervise. The problems with this approach are obvious: society's power is not great; there is no unified world society capable of agreeing, and if such measures are not adopted in even one country they lose effectiveness; there must also be some body which these discussions actually influence; and since even a small group of people is capable of secretly creating an existential risk, simply watching one's neighbors is insufficient. A network of open public organizations studying global risks and financing research on their prevention has already formed: the Lifeboat Foundation, the Center for Responsible Nanotechnology (CRN), the Alliance to Rescue Civilization, the Singularity Institute (SIAI), and the Future of Humanity Institute in Oxford. Most of these organizations are based in the USA, each has a budget of less than a million dollars, which is not much, and they are financed by private donations. Accordingly, the result of their activity so far is, in my view, mainly the publication of articles and the discussion of options; in addition, the Singularity Institute is directly engaged in developing friendly AI. These organizations communicate with one another and exchange resources and employees.
On the other hand, the practical influence of various charitable foundations on society is small: far more money and attention go to foundations dealing with problems much less significant than the rescue of mankind, and in Russia charitable foundations are compromised by suspicions of connections either with the mafia or with the CIA. The best example of society's influence on rulers is the rulers' reading of books, though even that has not always helped. President Kennedy avoided war during the Cuban missile crisis largely because he had read Barbara Tuchman's book The Guns of August about the beginning of World War I, which shows how war began contrary to the will and interests of the parties. The research of C. Sagan and N. Moiseev on nuclear winter probably pushed the USSR and the USA toward disarmament. Future presidents are in any case formed in a certain cultural environment and carry upward the ideas acquired there. A change in the average level of understanding, the creation of an informational background, may well lead to rulers indirectly absorbing certain ideas. After all, the current nanotechnology program in Russia did not appear out of thin air: someone somewhere read about it and gave it thought.
6) Do not interfere; let the system organize itself. It is possible that struggle between different saviors of the world will turn out worse than complete inaction. However, such a strategy is impossible to implement, since it requires unanimous consent, which never happens: there will always be some saviors of the world, and they will have to find out which of them is in charge.
The question is not even whether an organization exists that can and wants to prevent global risks, but whether the countries of the world will fully delegate such powers to it, which seems much less probable. A positive and telling example is that mankind has shown the ability to unite in the face of a clear and present danger into various antifascist and antiterrorist coalitions and to act effectively enough while the purpose was compelling, common and clear.

The infinity of the Universe and the question of the irreversibility of human extinction

The assumption that the Universe is infinite is quite materialistic. If it is so, one can expect that all possible worlds arise in it, including infinitely many worlds inhabited by intelligent life, which means that intelligence in the universe will not disappear along with man. Moreover, it follows that even in the case of human extinction, sometime and somewhere there will be a world almost indistinguishable from the Earth, containing beings with the same genetic code as Homo sapiens. It follows that people in general can never disappear from the Universe, just as, roughly speaking, the number 137 cannot disappear from it (the human genetic code can be represented as one very long number). Among the physical theories assuming a plurality of worlds one should single out Everett's concept of the Multiverse (whose essence is the acceptance of the interpretation of quantum mechanics in which the world divides at each point of choice, implying an infinite branching of variants of the future), as well as a number of other theories, for example chaotic cosmological inflation. A case for the actual infinity of the Universe is given in Max Tegmark's work Parallel Universes (http://arxiv.org/abs/astro-ph/0302131).
For more detail on the philosophical implications of the theory of cosmological inflation, see the article by Olum, Vilenkin and Knobe, Philosophical implications of inflationary cosmology.
A stronger consequence of these theories is the assumption that all possible variants of the future will be realized. In that case a definitive global catastrophe becomes an impossible event, since there will always be a world in which it did not occur. This was first noted by Everett, who came to the conclusion that the Multiverse (that is, the actual reality of all possible quantum alternatives) implies personal immortality for a human, since whatever the cause of death, there will always be a variant of the Universe in which the person did not die at that moment. The well-known physicist M. Tegmark illustrated this idea with the thought experiment of quantum suicide, and it was developed further by J. Higgo in the article "Does the 'many-worlds' interpretation of quantum mechanics imply immortality?". In my comments to the translation of Higgo's article I note that the validity of the Multiverse theory is not a necessary condition for the validity of the theory of immortality connected with the plurality of worlds: the infinity of the Universe alone is enough. That is, this many-worlds immortality works even for a non-quantum finite state machine: for any finite being in an infinite Universe there will be an exactly identical being with exactly the same course of life, except that it does not die at the last minute. But this by no means implies a fine and pleasant immortality, since a heavy wound can be the alternative to death.
Exactly the same reasoning can be applied to an entire civilization. There will always be a variant of the future in which human civilization does not die out, and if all possible variants of the future exist, this means the immortality of our civilization. However, it does not mean that prosperity is guaranteed to us. In other words, if the indestructibility of the observer is proved, it follows only that some civilization supporting the observer must exist, and for this one bunker with all the necessary supplies is enough, rather than a prospering mankind.

The assumption that we live in the "Matrix"

The foundations of the scientific analysis of this problem were laid by N. Bostrom in his article "Are You Living in a Computer Simulation?". Many religious concepts can be made pseudoscientific by introducing the assumption that we live in a simulated world, perhaps created inside a supercomputer by some supercivilization. It is impossible to refute the claim that we live in a Matrix, but it could be proved if certain improbable miracles incompatible with any physical laws appeared in our world (for example, an inscription in the sky made of supernova stars).
However, there is a concept that a global catastrophe could occur if the owners of this simulation suddenly switch it off (Bostrom). It can be shown that the arguments described in J. Higgo's article on many-worlds immortality apply in this case as well. Namely, our living in a Matrix is probable only if the set of possible simulations is very large, which makes probable the existence of a significant number of absolutely identical simulations. The destruction of one of the copies does not affect the course of the simulation itself, just as burning one copy of the novel War and Peace does not affect the relations of its characters. (Arguments about the soul, continuity of consciousness and other non-copyable factors do not apply here, since it is usually supposed that "consciousness" in a simulation is impossible at all.)
Hence, the complete disintegration of the simulation does not represent a threat. However, if we all live in a simulation, the owners of the simulation may throw some improbable natural problem at us, if only to compute our behavior in conditions of crisis, for example to study how civilizations behave during eruptions of supervolcanoes. (Any supercivilization would be interested in computing different variants of its own previous development, for example in order to estimate the frequency of civilizations in the Universe.) One may assume that extreme, pivotal events would especially often become objects of modeling, above all the moments when development could have stopped completely, that is, global risks. (And the fact that we live near such an event, by Bayesian logic, raises the probability of the hypothesis that we live in a simulation.) In other words, situations of global risk will be encountered much more often in simulations, just as explosions are shown at the cinema far more often than we see them in reality. This increases our chances of facing a situation close to a global catastrophe. At the same time, since a global catastrophe in the world of simulations is impossible, for there will always be simulations in which the protagonists do not die, the most probable scenario is the survival of a handful of people after a very big catastrophe. We will return to Bostrom's simulation argument later.
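The Bayesian step mentioned here can be made explicit with a toy calculation; the prior and the likelihoods below are arbitrary illustrative assumptions, not values claimed in this book.

    # Toy Bayesian update for the claim that observing ourselves near a
    # pivotal, catastrophe-prone moment raises the probability that we
    # live in a simulation. All numbers are illustrative assumptions.

    prior_sim = 0.1           # assumed prior probability of being simulated
    p_pivotal_if_sim = 0.5    # simulations assumed to oversample pivotal eras
    p_pivotal_if_real = 0.05  # pivotal eras assumed rare in base reality

    # Bayes' rule: P(sim | pivotal) = P(pivotal | sim) * P(sim) / P(pivotal)
    evidence = (p_pivotal_if_sim * prior_sim
                + p_pivotal_if_real * (1 - prior_sim))
    posterior_sim = p_pivotal_if_sim * prior_sim / evidence

    print(f"prior P(sim) = {prior_sim:.2f}")
    print(f"posterior P(sim | pivotal era) = {posterior_sim:.2f}")

    # With these assumptions the posterior rises from 0.10 to about 0.53;
    # the direction of the shift is what the argument needs, though its
    # size depends entirely on the assumed numbers.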
Hopes are sometimes expressed that if mankind approaches the brink of self-destruction, kind aliens who have long been watching us will save us. But there is no more ground for such hopes than a lamb being devoured by lions has for hoping to be saved by the people filming a documentary about it.

Global catastrophes and the organization of society

If a global catastrophe occurs, it will destroy any society; therefore the organization of society matters only during the phase of risk prevention. One can try to imagine, though the image will be rather utopian, what kind of society would be best able to prevent global catastrophes:
1. A society with one and only one control center, possessing full power and high authority. There must nevertheless be some feedback that prevents it from turning into a self-sufficient and selfish dictatorship, and such self-control that dangerous behavior or phenomena (from the point of view of global catastrophic risk) either cannot arise in it at all or are detected at once. (The close-knit crew of a ship could be an example of such a society.)
2. A society aimed at survival over a long historical perspective (tens and hundreds of years).
3. A society in which the overwhelming majority of people understand and accept its purposes and structure, that is, possess a high moral level. (Given that even a small group of terrorists could in the future cause irreparable damage, the level of support should be close to 100 percent, which of course is not achievable in practice.)
4. A society led by people (or AI systems) intellectually prepared to correctly account for risks that may arise years and decades ahead. Accordingly, people in this society receive a complete education giving a fundamental and broad, not superficial, picture of the world.
5. A society in which the number of conflicts whose participants might want to use a Doomsday weapon is reduced to zero.
6. A society able to carry out full, strict control over the activity of all groups of people who could create global risks, without this control itself turning into an instrument for creating risk.
7. A society ready to take sufficient measures quickly and effectively to prevent any global risk.
8. A society that invests considerable resources in creating various kinds of bunkers, space settlements and so on; in fact, a society that regards survival as its main task.
9. A society that creates new technologies in the correct order chosen by it, in specially designated places, and is ready to renounce even very interesting technologies if it cannot precisely control or at least measure their risk.
10. A society that arises without a world war, because the risk incurred in the course of its appearance would outweigh its benefits.
I do not discuss the model of such a society in terms such as "democratic", "market", "communist" or "totalitarian": I believe these terms apply to twentieth-century society, not to that of the twenty-first. But it seems obvious that modern society is extremely far from all these parameters of a society capable of survival:
1. There is no single, generally recognized, authoritative center of power on Earth, but there are many who wish to fight for one. Feedback in the form of elections and freedom of speech is too ephemeral to really influence decisions, especially on a global scale. Global institutions, like the United Nations, are in crisis.
2. Most people act in their personal interests or the interests of their groups, even when they speak the language of universal interests. There are many people who are not opposed to, or even aspire to, total destruction. Competing memes that mutually exclude one another spread through society: various kinds of nationalism, Islamism, antiglobalism, cynicism. (By cynicism I mean a widespread set of beliefs: everything is bad, money rules the world, everything I do is only for myself, miracles do not happen, the future has no value, people are a stupid crowd, and so on.)
3. Modern society is tuned to obtaining benefits in the short term far more than to survival in the long term.
4. Judging by the actions of many heads of modern states, it is difficult to believe that they are people aiming at the long-term survival of the whole world. This is largely because there is no clear, generally accepted picture of risks; more exactly, the picture that exists is incomplete and eclipses more important risks (namely, a picture in which asteroids plus global warming are the main risks, while even for these acknowledged risks the concern is insufficient). Although a considerable number of people can and wish to give a clear understanding of the risks, the level of information noise is such that they cannot be heard.
5. In modern society there are many dangerous conflicts connected with a considerable number of countries, parties and religious-extremist groups; it is difficult even to count them all.
6. Even very strict control in one country is senseless while there are territories in other countries inaccessible to control. While sovereign states exist, full general control is impossible. And when control does appear, it starts to be used not only for the struggle against global risks but also for the private purposes of the groups carrying it out, or at least such an impression is created (the war in Iraq).
7. While society is divided into separate armed states, rapid adoption of measures to localize a threat is either impossible (for lack of coordination) or fraught with starting a nuclear war.
8. With the end of the Cold War, the building of bunkers has largely decayed.
9. Modern society does not recognize survival as its overall objective, and those who speak about it look like cranks.
10. Modern technologies develop spontaneously. There is no clear picture of who is developing which technologies, where, and for what purpose, even for relatively easily discoverable nuclear production.
11. Although the process of unification of states is actively under way in Europe, the rest of the world is not yet ready to unite peacefully (if that is possible at all); the authority of many international organizations is, on the contrary, decreasing. (However, if a large but not final catastrophe happens somewhere, a short-lived association in the spirit of an antiterrorist coalition is probable.)
It is also important to emphasize that the classical totalitarian society is not a panacea against global catastrophes. A totalitarian society can indeed mobilize resources quickly and accept considerable losses in order to reach its goal, but its basic problem is informational opacity, which reduces its readiness and the clarity of its understanding of what is happening. Examples: Stalin's error in estimating the probability of the beginning of war with Germany, or the blindness of old China to the military prospects of gunpowder and of information technologies, the compass and paper, which had been invented there.

Global catastrophes and the current situation in the world

On the one hand, it seems that political life in the modern world is gradually concentrating around the prevention of remote global catastrophes, of which three possible sources are considered above all: the expansion of ABM systems, global warming, and the Iranian nuclear program (and to a lesser extent a number of others, for example anti-asteroid protection and energy security). In addition, the behavior of heads of state during the financial crisis in the autumn of 2008 can serve as a model of how Earth's civilization would respond to a future global catastrophe. At first there was blind denial and embellishment of the facts. Within a week the situation changed, and those who had said there could be no crisis began to shout about the inevitability of a terrible catastrophe unless 700 billion dollars were urgently allocated - the Paulson plan. Intensive international meetings were held, Sarkozy put forward striking initiatives, and everyone agreed that something must be done, though it was not very clear what; a complete model of events was evidently not available to the decision makers. I believe that a reader who has attentively read the text of this book understands that although these problems are considerable and can ultimately increase the chances of human extinction, our world is in fact very far from comprehending the scale and even the kinds of the coming threats. Despite all the talk, global catastrophe is not perceived as something real, unlike in the 1960s, when the risk of catastrophe directly meant the necessity of preparing a bomb shelter. The present state of complacency can be likened only to the pleasant relaxation that is said to have reigned at Pearl Harbor before the Japanese attack. In addition, such global risks as asteroid impacts, resource exhaustion and total nuclear war are acknowledged, yet for some reason these themes are not objects of active political debate.
One can discuss two questions: why this particular list of catastrophes has been chosen (Iran, ABM and warming), and how society treats the list of risks it does acknowledge. The answer to both questions is the same: the basic content of discussions about threats to modern civilization consists of debate in the spirit of "is it really real?" Is Iran building a bomb or not, and is it dangerous? Are people to blame for global warming, and is it necessary to fight it? In fact, the process of drawing up this list is itself a political process, in which such factors as the competition between the most convincing and the most profitable hypotheses play a part.

The world after global catastrophe

However large a global catastrophe may be, it is clear that the entire Universe will not perish in it (unless it is a decay of metastable vacuum, and even in that case parallel Universes remain). Some kind of intelligent life will arise on another planet, and the more such places there are, the greater the chance that this life will be similar to ours. In this sense a final global catastrophe is impossible. But if a global catastrophe overtakes the Earth itself, several variants are possible.
According to the ideas of synergetics, a critical point means that there is a small, finite number of scenarios between which an irreversible choice of direction will be made. Although there are many possible scenarios of global catastrophe, the number of final states is much smaller. In our case the following variants are at issue:
1. Complete destruction of the Earth and of life on it. Further evolution is impossible, although perhaps some bacteria survive.
2. People have died out, but the biosphere as a whole has remained and the evolution of other animal species continues. As a variant, separate mutated people or apes gradually give rise to a new intelligent species.
3. Grey goo. A primitive "necrosphere" (S. Lem's term from the novel The Invincible) of nanorobots has survived, and evolution may occur within it. A variant: self-reproducing factories producing large robots have survived, but they do not possess real AI.
4. A post-apocalyptic world. The technological civilization has collapsed, but a certain number of people have survived; they engage in gathering and agriculture, and the anthropogenic threats to existence have disappeared. (However, global warming may continue because of processes started earlier and become irreversible.) From this scenario there are possible transitions to other scenarios: to a new technological civilization or to definitive extinction.
5. A superintelligent AI has established power over the world. People have died out or been pushed to the margins of history. Note that from the point of view of people this can look like a world of general abundance: everyone receives unlimited life and a virtual world in addition. The system's expenditure on the entertainment of people, however, will be minimal, as will the role of people in managing the system. This process - the growing autonomy of the state from humans and the decline of the role of people in it - is already under way. Even if a superintelligence arises through the improvement of individual humans or their merging, it will no longer be human, at least from the point of view of ordinary people: its new complexity will outgrow its human roots.
6. The positive outcome - see the following chapter in more detail. People have created a superintelligent AI that governs the world, realizing the potential of people and human values as fully as possible. This scenario differs subtly but essentially from the scenario that leaves people only a sphere of virtual entertainments and pleasures; the difference is like that between a dream about love and real love.
Almost each of these variants is a stable attractor, a channel for the succession of events: after the critical point is passed it begins to draw different scenarios toward itself.

The world without global catastrophe: the best realistic variant of preventing global catastrophes

The genre demands a happy ending. If global catastrophe were absolutely inevitable, there would be no reason to write this book, since the only thing left to people in the face of inevitable catastrophe would be to arrange a feast before the plague: to party and drink. But even if the chances of catastrophe are very high, we can considerably delay its approach by reducing its annual probability.
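The effect of lowering the annual probability can be shown with a simple expected-value calculation; the probabilities used below are illustrative assumptions only.

    # How reducing the annual probability of catastrophe delays its
    # expected arrival. With a constant annual probability p the expected
    # waiting time is 1/p years. The values of p are assumptions.

    def expected_years_until_catastrophe(p_annual):
        return 1.0 / p_annual

    def prob_surviving(years, p_annual):
        return (1.0 - p_annual) ** years

    for p in (0.01, 0.003, 0.001):
        print(f"annual probability {p:.3f}: expected arrival in "
              f"{expected_years_until_catastrophe(p):.0f} years, "
              f"P(survive 100 years) = {prob_surviving(100, p):.2f}")

    # Cutting the annual probability from 1% to 0.1% moves the expected
    # arrival from about 100 to about 1000 years and raises the chance of
    # surviving the next century from roughly 0.37 to roughly 0.90.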
I (and a number of other researchers) see these chances in an advancing development of artificial intelligence systems that overtakes the development of the other risks, while this development is itself preceded by growth in our understanding of the possibilities and risks of AI and of how to set the problem correctly and safely, that is, how to create "Friendly" AI. Then, on the basis of this Friendly AI, a unified system of world treaties between all countries could be created, in which this AI performs the functions of an automated system of government. This plan assumes a smooth and peaceful transition to a truly majestic and safe future.
Although I do not think that exactly this plan will be realized easily and faultlessly, or that it is really probable, I believe it represents the best we can aspire to and the best we can achieve. Its essence can be stated in the following theses, of which the first two are necessary and the last is highly desirable:
1) Our knowledge of and capacity for preventing risks will grow much faster than the possible risks.
2) This knowledge and this capacity for management will not generate new risks.
3) This system arises peacefully and without serious harm to anyone.

Maximizing pleasure if catastrophe is inevitable


We strive to preserve human life and humanity because it has value. While we may not have exact knowledge of what creates the value of human life, since this is not objective knowledge but our agreement, we can assume that we value the number of people, the pleasure they experience, and the possibility of creative self-realization; in other words, the diversity of information they produce. That is, a world in which 1000 people live while all suffering in the same way (a concentration camp) is worse than a world in which 10,000 people live in joy and are engaged in a variety of crafts (a Greek polis). Thus, if we have two options for future development with the same probability of extinction, we should prefer the option with more people, less suffering, and more varied lives, that is, the one that best realizes human potential.
Indeed, we would probably prefer a world in which a billion people live for 100 years (and then the world is destroyed) to a world in which only a million people live for 200 years.
The extreme expression of this is the "feast in time of plague". That is, if death is inevitable and nothing can postpone it, the best behavior for a rational actor (one who does not believe in an afterlife) is to entertain himself in the most interesting way. A large number of people, aware of the inevitability of physical death, do exactly that. However, if death is several decades away, there is no point in spending all one's savings on drinking now; maximizing pleasure over time requires constant earnings and so on.
It is interesting to wonder what the rational strategy would be for a whole civilization that knew its death was inevitable after a certain period of time. Should it increase the population in order to give life to more people? Or rather distribute drugs to everyone and implant electrodes in the pleasure centers? Or hide the fact of the inevitable disaster, since this knowledge would lead to suffering and to the premature destruction of infrastructure? Or some mixed strategy, for a probability of extinction that is high but not absolute, in which the bulk of resources is devoted to the feast in time of plague and some to finding a way out?
But real pleasure is impossible without hope of salvation. Therefore such a civilization would continue to seek a way out rationally, even if it knew for certain that none exists.

Ideas for careers in x-risks research


Here are some ideas for further research, that is, fields which a person could take up if he wants to make an impact in the field of x-risks. In other words, career advice. For many of them I myself do not have the special background or the personal qualities needed.
1. Legal research in international law, including work with the UN and governments. Goal: to prepare international law and a panel for x-risks prevention. (A legal education is needed.)
2. Convert all information about x-risks (including my maps) into a large Wikipedia-style database. A master of communication is needed to attract many contributors and coordinate their actions.
3. Create a computer model of all global risks which will be able to calculate their probabilities depending on different assumptions. Evolve this model into a world model with elements of AI and connect it to monitoring and control systems.
4. Large-scale research in biosafety and bio-risks, which will attract professional biologists.
5. A promoter who could attract funding for different research without oversimplification of the risks and without overhyping solutions. He may also be a political activist.
6. I think that in AI safety we already have too many people, so some work to integrate their results is needed.
7. A teacher: a professor who will be able to teach a course in x-risks research for students and prepare many new researchers, perhaps also through YouTube lectures.
8. An artist who will be able to attract attention to the topic without sensationalism and bad memes.
Chapter 24. Indirect ways of estimating the probability of global catastrophe
Indirect ways of estimation use not data about the object of research itself, but various indirect sources of information, such as analogies, general laws and upper limits. This question is considered in detail by Bostrom in the article "Existential Risks". There are several independent ways of making such an estimate.
Pareto's Law
Pareto's Law is considered in detail by G. G. Malinetsky with reference to various catastrophes in the book "Risk. Sustainable Development. Synergetics". Its essence is that the frequency (more precisely, the rank in the list) of a catastrophe is connected with its scale by a very simple power law:

N(x) ∝ x^a,

where N(x) is the frequency of catastrophes of scale at least x, and a is the important parameter; a = -0.7 for the number of victims of natural disasters.
Pareto's Law is empirical in character and appears as a straight line on a log-log chart, with a slope proportional to a. A typical example of Pareto's law is a statement like: for each additional point of magnitude, earthquakes occur 10 times less often. (One point of magnitude corresponds to roughly a 32-fold gain in energy; this is known as the Gutenberg-Richter law of recurrence. For large energies the parameter shifts, and one point of gain in the region of magnitude 7-9 gives roughly a 20-fold reduction in frequency: if earthquakes of magnitude 7-7.9 occur 18 times a year, magnitude 8 occurs about once a year, and magnitude 9 about once in 20 years.) A feature of this law is its universality across different classes of phenomena, though the value of the parameter may differ. However, for the number of victims of natural disasters the value of the exponent is not -1 but -0.7, which makes the tail of the distribution considerably heavier.
What interests us in this distribution is how often catastrophes could occur in which the expected number of victims would exceed the present population of the Earth, that is, would be of the order of 10 billion people. If we adopt the law with a = -1, meaning that a ten times stronger event occurs ten times less often, then a catastrophe with enough victims to reliably eliminate the Earth's population would occur about once in 500,000 years. This number is of the order of the time of existence of the species Homo sapiens. On the other hand, if we take a = -0.7 (which means that a ten times stronger event occurs only about 5 times less often), and also assume that natural catastrophes with more than 100,000 victims occur about once in 10 years, then the waiting time for a catastrophe on the scale of all mankind is only about 30,000 years. This is close in order of magnitude to the time that has passed since the eruption of the volcano Toba, 74,000 years ago, when mankind found itself on the verge of extinction. We see that the weight of the tail of the distribution depends strongly on the value of the parameter a. Nevertheless, natural events do not create a great risk in the XXI century for any reasonable value of a.
However, we get a much worse result if we apply this law to wars and acts of terrorism. Pareto's Law also does not take into account the exponential character of technological development. In real cases, for each class of events there is an upper boundary to the applicability of Pareto's Law; for example, it is supposed that there are no earthquakes with magnitude greater than 9.5. However, the set of all the different classes of events taken together is not bounded in this way.
The power-law distribution of catastrophes and the threat of human extinction are considered in detail in Robin Hanson's article "Catastrophe, Social Collapse, and Human Extinction". He notes that an important factor is the variance in the survivability of individual people. If this variance is large, then to destroy all people to the last one, a catastrophe several orders of magnitude stronger is needed than one which destroys only 99% of people.
The Red Queen hypothesis
On the basis of paleontological data, Van Valen showed that the survival curves of animal species follow a decreasing exponential law. This form of survival curve means that the probability of extinction of an average species remains approximately constant throughout its existence. Since the lifetime of individual species in the genus Homo is of the order of one million years, we can expect the same life expectancy for humans, on the assumption that we are an ordinary biological species. Hence the Red Queen hypothesis does not imply an essential risk in the XXI century.
On the other hand, we are currently living through the sixth great extinction of living organisms, this time caused by anthropogenic factors and characterized by an extinction rate 1000 times the natural one. If we agree that humans too are one species among others, this reduces the expected time of our existence from a million years to a thousand.
Fermi's paradox
One more indirect way of estimating the probability is based on Fermi's paradox. Fermi's paradox consists in the following question: if life and intelligence are common phenomena in nature, why do we not see their manifestations in space? Theoretically, life and intelligence could have arisen somewhere several billion years earlier than on Earth. In that time they could have spread over hundreds of millions of light years, at the very least with the help of self-replicating space probes (von Neumann probes). This volume includes thousands, perhaps millions, of galaxies. Mankind could launch a wave of self-replicating interstellar probes within the next 100 years. These could be microrobots which settle on planets, build rockets there and dispatch them across the Universe at speeds well below light speed; such devices do not even need to possess a full universal artificial intelligence: sea anemones in the terrestrial ocean do the same, only on a smaller scale. Such a process could be started accidentally, simply in the course of developing the nearest planets with the help of self-replicating robots. Such microrobots would consume primarily the solid matter of planets for their reproduction, and laws of evolution and natural selection similar to those in the animal world would act on them. However, we do not observe such microrobots in the Solar System, if only because it has survived. Moreover, not only the Earth has survived, but also the other solid bodies, the moons of the outer planets of the Solar System. Nor do we observe any alien radio signals or any traces of astroengineering activity.
Four conclusions are possible from this (more have been offered: see Stephen Webb's book on 50 solutions of the Fermi paradox, where 50 different variants are considered which can on the whole be reduced to several more general categories):
1. Intelligent life arises in the Universe extremely seldom: less than once in a sphere of 100 million light years radius over 5 billion years.
2. We are already surrounded by intelligent life that is invisible to us, which has for some reason allowed us to develop or has simulated the conditions of our life. (This includes the possibility that we live in a completely simulated world.)
3. Intelligent life perishes before it has time to launch even a primitive intelligence shock wave of replicator robots, that is, it perishes in its own analogue of the XXI century.
4. Intelligent life firmly refuses to spread beyond the limits of its native planet. This may be quite rational for it, since remote space settlements cannot be controlled and a threat to the existence of the parent civilization could come from them. (It is possible that intelligence is satisfied with the limits of a virtual world, or that it finds a way out into parallel worlds. However, the experience of life on Earth shows that the move onto land did not stop expansion in the sea: life spreads in all directions.)
Since, by Bayesian logic, these four hypotheses have equal standing before any additional information is received, we can assign each of them a subjective credence of 1/4. In other words, Fermi's paradox implies with 25% credence that we will die out in the XXI century. And though subjective probabilities are not yet the objective probabilities we would have if we possessed complete information, our cosmic loneliness is a disturbing fact. (On the other hand, if we turn out not to be alone, that too will be a disturbing fact, in light of the risks created by a possible collision with an alien civilization. However, it would at least show us that some civilizations are capable of surviving.)
Based on known archaeological data, we are the first technological and symbol-using civilisation on Earth (but not the first tool-using species).
This leads to an analogy with Fermi's paradox: why are we the first civilisation on Earth? For example, flight was invented by evolution independently several times. We could imagine that many civilisations appeared on our planet and also became extinct, and by the mediocrity principle we should be somewhere in the middle. For example, if 10 civilisations appeared, we have only a 10 per cent chance of being the first one.
The fact that we are the first such civilisation has strong predictive power about our expected future: it lowers the probability that there will be any other civilisations on Earth, including non-humans or even a restart of human civilisation from scratch. This is because, if there were to be many civilisations, we should not expect to find ourselves the first one (this is a form of the Doomsday argument; the same logic is used in Bostrom's article "Adam and Eve"). If we are the only civilisation to exist in the history of the Earth, then we will probably become extinct not in a mild way, but rather in a way which will prevent any other civilisation from appearing. There is a higher probability of future (man-made) catastrophes which will not only end human civilisation, but also prevent the existence of any other civilisations on Earth.


Such catastrophes would kill most multicellular life. Nuclear war or a pandemic is not that type of catastrophe. The catastrophe must be really huge: irreversible global warming, grey goo, or a black hole in a collider.
Now I will list possible explanations of the Fermi paradox of the human past and the corresponding x-risk implications:
1. We are the first civilisation on Earth, because we will prevent the existence of any future civilisations.
If our existence prevents other civilisations from appearing in the future, how could we do it? We will either become extinct in a very catastrophic way, killing all earthly life, or become a super-civilisation which prevents other species from becoming sapient. So, if we are really the first, it means that "mild extinctions" are not typical for human-style civilisations. Thus pandemics, nuclear wars, devolution and everything reversible are ruled out as main possible methods of human extinction.
If we become a super-civilisation, we will not be interested in preserving the biosphere, as it would be able to create new sapient species. Or it may be that we care about the biosphere so strongly that we will hide very well from newly appearing sapient species; it would be like a cosmic zoo. That would mean that past civilisations on Earth may have existed but decided to hide all traces of their existence from us, as this would help us to develop independently. So the fact that we are the first raises the probability of a very large-scale catastrophe in the future, like UFAI or dangerous physical experiments, and reduces the chances of mild x-risks such as pandemics or nuclear war. Another explanation is that any first civilisation exhausts all the resources needed for a technological civilisation to restart, such as oil and ores. But in several million years most such resources would be replenished or replaced by new ones through tectonic movement.
2. We are not the first civilisation.
2.1. We haven't found any traces of a previous technological civilisation, and based on what we know there are very strong limitations on their possible existence. For example, every civilisation leaves genetic marks, because it moves animals from one continent to another, just as humans brought dingos to Australia. It also must exhaust several important ores, create artefacts, and create new isotopes. We can be sure that we are the first technological civilisation on Earth in the last 10 million years.
But could we be sure for the past 100 million years? Maybe it was a very long time ago, like 60 million years ago (and killed the dinosaurs). Carl Sagan argued that it could not have happened, because we would expect to find traces, primarily exhausted oil reserves. The main counterargument here is that cephalisation, that is, the evolutionary development of brains, was not advanced enough 60 million years ago to support general intelligence. Dinosaurian brains were very small, though bird brains are more mass-efficient than mammalian ones. All these arguments are presented in detail in the excellent article by Brian Trent, "Was there ever a dinosaurian civilisation?"
The main x-risks here are that we might find dangerous artefacts from a previous civilisation, such as weapons, nanobots, viruses, or AIs. And if previous civilisations went extinct, it increases the chances that extinction is typical for civilisations. It also means that there was some reason why an extinction occurred, that this killing force may still be active, and that we could excavate it. If they existed recently, they were probably hominids, and if they were killed by a virus, it may also affect humans.
2.2. We killed them. The Maya civilisation created writing independently, but the Spaniards destroyed their civilisation. The same is true for the Neanderthals and Homo floresiensis.
2.3. Myths about gods may be traces of such a previous civilisation. Highly improbable.
2.4. They are still here, but try not to intervene in human history; this is similar to the zoo solution of Fermi's paradox.
2.5. They were a non-technological civilisation, and that is why we can't find their remnants.
2.6. They may still be here, like dolphins and ants, but their intelligence is non-human and they don't create technology.
2.7. Some groups of humans created advanced technology long ago but prefer to hide it. Highly improbable, as most technology requires large-scale manufacturing and markets.
2.8. A previous humanoid civilisation was killed by a virus or prion, and our archaeological research could bring it back. One hypothesis of Neanderthal extinction is prion infection due to cannibalism. The fact is that several hominid species went extinct in the last several million years.
3. Civilisations are rare.
Millions of species have existed on Earth, but only one was able to create technology, so it is a rare event. Consequences: cyclic civilisations on Earth are improbable, so the chance that we will be resurrected by another civilisation on Earth is small. The chances that we would be able to reconstruct civilisation after a large-scale catastrophe are also small (as such catastrophes are atypical for civilisations, which quickly proceed either to total annihilation or to singularity). It also means that technological intelligence is a difficult step in the evolutionary process, so this could be one of the solutions to the main Fermi paradox.
The safety of the remains of previous civilisations (if any exist) depends on two things: their distance from us in time and the level of intelligence they reached. The greater the time distance, the safer they are (as the greater part of dangerous technology will have been destroyed by time or will not be dangerous to humans, like species-specific viruses). The higher the intelligence they reached, the riskier their remains. If anything like their remnants is ever found, strong caution is recommended.
For example, the most dangerous scenario for us would be one similar to the beginning of V. Vinge's book "A Fire Upon the Deep": we could find the remnants of a very old but very sophisticated civilisation, including an unfriendly AI or its description, or hostile nanobots. The most likely place for such artefacts to be preserved is on the Moon, in cavities near the poles, the most stable and radiation-shielded place near Earth.
I think that, based on the (absence of) evidence, the probability of a past technological civilisation should be estimated at less than 1 per cent. While this is enough to think that they most likely don't exist, it is not enough to completely ignore the risk of their artefacts, which in any case is less than 0.1 per cent.
Meta: the main idea for this post came to me in a dream several years ago.

http://lesswrong.com/r/discussion/lw/nzg/fermi_paradox_of_human_past_and_corresponding/
The Doomsday argument. Gott's formula.
Another way of indirectly estimating the probability of the destruction of mankind is a specific and rather disputable application of probability theory called the Doomsday argument (DA). I deliberately omit the huge volume of existing arguments and counterarguments concerning this theory and state here only its conclusions. In the early 1980s the DA was discovered independently and in different forms by several researchers. The basic articles on this question were published in the leading natural-science journal Nature, in its hypotheses section. The DA rests on the so-called Copernican postulate, which says that an ordinary observer is most likely in ordinary conditions: on an ordinary planet, around an ordinary star, in an ordinary galaxy. This principle effectively predicts the simplest facts: it says that you were hardly born at midnight on January 1st, and that you hardly live at the North Pole. Though the Copernican principle seems axiomatic and almost tautological, it can be expressed in mathematical form. Namely, it allows one to estimate the probability that an observer is in unusual conditions. In particular, it can give a probabilistic estimate of how long a certain process will continue, based on how long it has already been going on (under the assumption that it is observed at a random moment of time), proceeding from the assumption that the observer is unlikely to have appeared by chance at the very beginning or at the very end of the process. There are two basic forms of this mathematical prediction: a direct one, known as Gott's formula, in which the probability is calculated directly, and an indirect one, put forward by B. Carter and J. Leslie, in which Bayesian corrections to the a priori probability are calculated. Both approaches were immediately applied to calculating the expected life expectancy of mankind. The volume of discussion on this question runs to several dozen articles, and many seemingly obvious refutations do not work. I recommend that the reader turn to the articles of N. Bostrom, where part of the argument is analyzed, as well as to the already mentioned book by J. Leslie and to Caves' article. The basic discussion turns on whether it is possible in general to use data about the past time of existence of an object to predict its future time of existence, and if so, whether these data can be used to predict the future number of people and the time to "doomsday". In both cases it turns out that the resulting estimates of the future time of existence of mankind are unpleasant.
Let us first consider Gott's formula. It was first published in Nature in 1993. The essence of the underlying reasoning is that if we observe a lasting event at a random moment of time, we most likely find ourselves in the middle of the period of its existence and are unlikely to be very close to its beginning or its end. The derivation of Gott's formula can be found in Caves' article. The formula itself states that with confidence f the future duration t of the process satisfies

T(1 - f)/(1 + f) < t < T(1 + f)/(1 - f),

where T is the age of the system at the moment of observation, t is its future time of existence, and f is the chosen confidence level. For example, if f = 0.5, then with probability 50% the system will cease to exist in the period from 1/3 to 3 of its present age, counted from the present moment. For f = 0.95, the system will with 95% probability persist for between 0.0256 and 39 of its present ages.
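A minimal sketch of this calculation (gott_interval is an illustrative name, not a standard library function; it merely encodes the formula above):

    def gott_interval(age, f=0.5):
        # Bounds on the remaining lifetime, given the current age and confidence f.
        return age * (1 - f) / (1 + f), age * (1 + f) / (1 - f)

    print(gott_interval(1.0, f=0.5))    # (0.333..., 3.0): from 1/3 to 3 present ages
    print(gott_interval(1.0, f=0.95))   # (0.0256..., 39.0): matches the example above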
Gott's formula matches human intuition: for example, we believe that if a house has stood for a year, it is very unlikely to collapse in the next few seconds. This example shows that we can make probabilistic statements about unique events without knowing anything about the real distribution of probabilities. Most attempted refutations of Gott's formula offer a counterexample in which it supposedly does not work; however, in these cases the requirement that the subject be observed at a random moment of time is violated. For example, if we take newborn babies or very old dogs (as Caves did), Gott's formula will not predict their expected lifespans, but newborns and very old dogs are not people or dogs taken at a random moment of their lives. Gott's formula has been checked experimentally and gave correct results for the decay time of radioactive atoms of an unknown type, and also for the running time of Broadway shows.
Concerning the future of human civilization, Gott's formula is applied not to time but to birth rank, because the population has varied non-uniformly and one is more likely to appear in a period with high population density. (If, however, we apply it to the time of existence of our species, nothing improbable results: with probability 50% mankind will exist for between 70 thousand and 600 thousand more years.) It is supposed that we, by being born, have made an act of observation of our civilization at a random moment of time. We have learned that over the whole of human history only about 100 billion people have lived. This means that we have most likely landed somewhere in the middle of the sequence, and that it is very unlikely (less than 0.1% probability) that the total number of people will reach 100 trillion. This in turn means that the chance that mankind will spread through the whole galaxy over many millennia is also small.
However, it also follows that we are unlikely to be living among the last billion people ever born, so we most likely have a few hundred more years until doomsday, given an expected Earth population of 10 billion people. For the XXI century the probability of the destruction of civilization, proceeding from Gott's formula applied to birth rank, is 15-30%, depending on the number of people who will live during this time. Strangely enough, this estimate coincides with the previous one, based on Fermi's paradox. Certainly this question requires further research.
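A sketch of the birth-rank version (assuming, as above, roughly 10^11 people born so far; the number of births expected in the XXI century is the free parameter, and the simple ratio used here is one common reading of the delta-t argument applied to ranks, not necessarily the author's exact computation):

    def doom_probability(past_births=1e11, remaining_births=2e10):
        # Chance of being among the last `remaining_births` people ever born,
        # if one's birth rank is uniformly distributed over all births.
        return remaining_births / (past_births + remaining_births)

    print(doom_probability(remaining_births=2e10))   # ~0.17
    print(doom_probability(remaining_births=4e10))   # ~0.29, i.e. roughly the 15-30% range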
The Carter-Leslie Doomsday argument
Leslie argues somewhat differently than Gott, applying Bayesian logic. Bayesian reasoning is based on Bayes' formula, which connects the probability of a certain hypothesis with its a priori probability and with the probability of a new piece of information, that is, the evidence we have obtained in support of this hypothesis. (I recommend turning at this point to N. Bostrom's articles on the Doomsday argument, as I cannot set out the whole problematics here in detail.)
Leslie writes: suppose there are two hypotheses about how many people there will be in total, from the Neanderthals to "doomsday":

1st hypothesis: there will be 200 billion people in total. (That is, doomsday will come within the next millennium, since about 100 billion people have already lived on Earth.)

2nd hypothesis: there will be 200 trillion people in total (that is, people will settle the Galaxy).

Suppose also that the probability of each outcome is 50% from the point of view of some abstract outside observer. (Here Leslie assumes that we live in a deterministic world, that is, this probability is firmly determined by the properties of our civilization, though we may not know it.) Now, if we apply Bayes' theorem and modify this a priori probability to take into account the fact that we find ourselves so early, that is, among the first 100 billion people, we obtain a shift of this a priori probability by a factor of a thousand (the difference between billions and trillions). That is, the probability that we belong to a civilization that will die out relatively early becomes about 99.9%.
Let us illustrate this with an everyday example. Suppose that in the next room sits a man who with equal probability is reading either a book or an article. The book has 1000 pages, and the article has 10 pages. At a random moment of time I ask this man what page number he is reading. If the page number is greater than 10, I can unambiguously conclude that he is reading the book; if the page number is 10 or less, we have the case where Bayes' theorem can be applied. A page number of 10 or less can come about in two ways:

The man is reading the book, but happens to be at its beginning; this covers 1% of all the cases in which he reads the book.

The man is reading the article; here the probability is 1, covering all the cases in which he reads the article.

In other words, out of 101 equally weighted cases in which the page number turns out to be 10 or less, in 100 of them it is because the man is reading the article. So the probability that he is reading the article, after we receive the additional information about the page number, becomes about 99%.
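The same update can be written out explicitly (a minimal sketch; the numbers are those of the two examples above):

    def bayes_posterior(prior_a, like_a, prior_b, like_b):
        # Posterior probability of hypothesis A given evidence with these likelihoods.
        return prior_a * like_a / (prior_a * like_a + prior_b * like_b)

    # Book vs. article: P(page <= 10 | book) = 10/1000, P(page <= 10 | article) = 1.
    print(bayes_posterior(0.5, 1.0, 0.5, 0.01))    # ~0.99: he is reading the article
    # Doomsday version: P(rank <= 1e11 | 2e11 total) = 0.5, P(rank <= 1e11 | 2e14 total) = 5e-4.
    print(bayes_posterior(0.5, 0.5, 0.5, 5e-4))    # ~0.999: the short-lived civilization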
A property of this reasoning is that it sharply increases even a very small prior probability of extinction in the XXI century. For example, if it is 1% from the point of view of some external observer, then for us, once we have found ourselves in the world before this event, it can become of the order of 90% or more (on the assumption that a galactic civilization would number about 200 trillion people).
It follows that, despite the abstract character of this reasoning and the difficulty of understanding it, we should pay no less attention to attempts to prove or refute the Carter-Leslie argument than we devote to the prevention of nuclear war. Many scientists have tried to prove or refute it, and the literature on the theme is extensive. And though it seems convincing enough to me, I do not claim that the argument has been proved definitively. I recommend that everyone to whom the above reasoning seems obviously faulty turn to the literature on this theme, where the various arguments and counterarguments are considered in detail.
Let us consider a few more remarks that work for and against the Carter-Leslie argument. An important shortcoming of the Carter-Leslie DA is that the predicted future survival time of people depends on how many people we assume for the "long" civilization. For example, with a prior probability of extinction in the XXI century of 1% and a future number of people in the long civilization of 200 trillion, there is a strengthening by a factor of 1000, and the posterior probability of extinction in the XXI century becomes overwhelming; on a logarithmic scale this corresponds to a "half-life" of the order of 10 years. If, however, we take the number of people in the long civilization to be 200·10^15, the shift becomes a factor of a million (about 2^20), and the expected "half-life" shrinks to only about 5 years. It turns out that, by choosing a sufficiently large size for the long civilization, we can obtain an arbitrarily short expected time to the extinction of the short one. Yet our civilization has already, before our eyes, existed for more than 5 or 10 years.
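The dependence on the assumed size of the "long" civilization can be made explicit (an illustrative sketch with assumed parameter values; the 1% prior and the civilization sizes follow the example above):

    def carter_leslie_posterior(prior_doom, n_short=2e11, n_long=2e14, rank=1e11):
        # Likelihood of finding yourself among the first `rank` people under each hypothesis.
        like_short = min(1.0, rank / n_short)
        like_long = min(1.0, rank / n_long)
        p_doom = prior_doom * like_short
        p_long = (1 - prior_doom) * like_long
        return p_doom / (p_doom + p_long)

    for n_long in (2e14, 2e17):
        print(n_long, carter_leslie_posterior(0.01, n_long=n_long))
    # 2e14 -> ~0.91, 2e17 -> ~0.9999: the larger the assumed long civilization,
    # the more strongly the posterior is pushed toward early extinction.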
To deal with this discrepancy, we can recall that the more people there are in a long civilization, the less probable it is according to Gott's formula. In other words, the probability that a civilization will die out early is high to begin with, and the Carter-Leslie reasoning apparently strengthens this probability even more. It is therefore difficult to say whether it is correct to apply the Carter-Leslie reasoning together with Gott's formula, since the same information may end up being counted twice. This question requires further research.
The original Carter-Leslie reasoning also contains a number of other logical weak points, which have been systematized by Bostrom in his articles; the main ones concern the problem of the choice of reference class, and doubts about whether the sample is really random. The volume of reasoning on this theme is so great and intricate that here we will only briefly outline these objections.
The reference class problem consists in choosing exactly whom we should count as the people to whom the given reasoning applies. If instead of people we take all animals endowed with a brain, there have been thousands of billions of them in the past, and we can quite reasonably expect the same number in the future.
I see the solution of the reference class problem in the fact that, depending on which reference class we choose, the corresponding event should be understood as the end of the existence of that class. That is, each reference class has its own "doomsday". For example, the fact that in the future there will be only a few hundred billion more people in no way contradicts there being thousands of billions more brain-endowed beings in the future. As a result we obtain a very simple conclusion: the end of the existence of a given reference class is the "doomsday" for that reference class. (Here the end of existence does not necessarily mean death; it can simply mean transition into another class, as when a baby grows up and becomes a preschooler.)
The second logical objection to the Carter-Leslie reasoning concerns the non-randomness of the sample. The point is that if I had been born before the XX century, I would never have learned about the Carter-Leslie reasoning and could never have asked about its applicability. In other words, there is an effect of observation selection here: not all observers are equivalent. Therefore the Carter-Leslie reasoning can actually be applied only by those observers who know about it.
However, this sharply worsens our chances of survival under the DA. After all, the DA has been known only since the 1980s, that is, for about 27 years. (Moreover, at first it was known only to a narrow circle of people, so these 27 years might be reduced to 20.) If we take these 27 years and apply Gott's formula to them, we obtain a 50% probability of destruction in an interval from 9 to 81 years from the present moment, which roughly means more than 50 percent for the XXI century. Strangely enough, this estimate does not diverge strongly from the two previous ones.
The reasoning can also be carried out in another way. Consider the time interval during which global risks exist. As the starting point we take 1945, and as the point of random observation the moment when I learned about the possibility of nuclear war as one of the global risks: 1980. (As the lasting event we here consider the period from the beginning of susceptibility to the risk until its termination.) So, at the moment of random observation this risk had already existed for 35 years. Gott's formula gives a 50% interval for the realization of the risk of from 12 to 105 years (counting from 1980). The fact that this event has not happened so far introduces a certain shift into the estimate, but we can nevertheless say that these 50% still apply to the remainder of the 90-year interval from 1980, that is, until about 2070. In other words, the probability that the situation with global risks will terminate is more than 50% in the XXI century. Again we obtain approximately the same result. The termination can be either the realization of the risk or a transition into some other risk-free condition about which nothing can now be said. If we take into account that the density of risk actually grew in the interval from 1945 to the 1970s, this considerably worsens our estimate.
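Both figures follow from the same rule as before (reusing the gott_interval sketch given above):

    print(gott_interval(27, f=0.5))   # (9.0, 81.0): the DA has been known for ~27 years
    print(gott_interval(35, f=0.5))   # (~11.7, 105.0): nuclear risk, 35 years old when observed in 1980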
In fact, the Doomsday argument does not necessarily mean final extinction in the near future. It could mean only a sharp decline in population. For example, if the population of the Earth were reduced to a few thousand people (or creatures) who survive for a million years and then disappear, still the largest percentage of people who have ever lived would have lived in the XX-XXI centuries, when the population was several billion, which is where we are likely to find ourselves.
It may be, then, that this is not a catastrophe but something simpler: reduced fertility plus the emergence of some posthumans. (But it could also be a remnant of savages, a group of survivors in a bunker, or the subset of scientists who can understand the DA, if that is smaller than the current such subset, which is already small.) This gives a chance for an experimental test of the DA, but only for those who are born now. If I live another 100 years and see that the number of people on the Earth has been dramatically reduced, it would be a good confirmation of the DA. (True, of many-worlds immortality too.)
Indirect estimate of the probability of natural catastrophes
If we do not take into account the effects of observation selection, we obtain very good chances of surviving the XXI century with respect to any natural (but not anthropogenic) catastrophes, from galactic to geological scales: from the fact that they did not occur during the existence of the Earth and of our species, a very small probability follows that they will occur in the XXI century. Since no natural catastrophe has destroyed the ancestors of humans in the last 4 billion years, one might conclude that our chances of perishing in the XXI century from natural catastrophes are less than 1 in 40 million (and, taking into account high human survivability and adaptability, even less than that). Unfortunately, such reasoning is essentially incorrect, as it does not take into account the non-obvious effects of observation selection and survivorship bias (see Cirkovic on this).
Owing to this effect, the expected future time of existence will be less than the past time (see my article "Natural catastrophes and the Anthropic principle" and the chapter on observation selection in the section on natural catastrophes for more detail). Nevertheless, the contribution of observation selection is hardly more than one order of magnitude. However, for different levels of natural catastrophe we have different characteristic periods of time. For example, life on Earth has already existed for 4 billion years, and, with the above caveat, it could exist for no less than another 100-400 million years. (The observation selection here is that we do not know what share of terrestrial-type planets perish in the course of their evolution; assuming that the share of survivors is between 1 in 1000 and 1 in a billion, we obtain estimates of 100-400 million years as a half-life.) The corresponding indirect estimate of the probability of a life-destroying natural catastrophe would then be about 1 in 4,000,000 per hundred years. This is a negligibly small value compared with other risks.
But for the time of existence of our species, the last natural catastrophe that threatened it was much closer in time, 74,000 years ago (the volcano Toba), and consequently, allowing for the greatest possible effect of observation selection, we have an expected future time of existence of only about 7,000 years. The observation selection here is that if people were a very fragile species that dies out with a periodicity of a few thousand years, we could not notice it, since we can observe only that branch of our species which has lived long enough to form a civilization in which we can ask this question. Allowing for the huge error of such reasoning, 7,000 years would correspond to roughly 1% extinction risk in the XXI century as a result of natural catastrophes or of an instability inherent in the species, and this is the maximum estimate for the worst case. If we do not take observation selection into account, the chances of a natural catastrophe of any sort leading to the extinction of mankind can be calculated from the past time of existence using Gott's formula (applied to the time of existence of Homo sapiens), and they come to about 1 in 1500 per 100 years, that is, 0.066%.
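The arithmetic behind these two figures (the species age of about 150,000 years is an assumption implied by the 1-in-1500 result, not stated explicitly in the text):

    species_age_years = 150_000
    print(100 / species_age_years)   # ~0.00067, i.e. ~0.066% per century: the 1-in-1500 figure
    print(100 / 7_000)               # ~0.014, i.e. of the order of 1% per century in the worst case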
Finally, there is a third sort of catastrophe whose probability we can indirectly estimate from past time, namely from the time during which written history has existed, that is, about 5000 years. We can safely assert that in these 5000 years there has been no catastrophe that interrupted written history. Observation selection is possible here too, but it is less likely, since anthropogenic rather than natural factors act more strongly here. A catastrophe which could have interrupted written history 3000 years ago, for example a supervolcano eruption in the Mediterranean, could no longer do so now. Therefore we can safely say that a natural catastrophe interrupting the written tradition (as it existed in the past, not as it exists now) has a chance of no more than about 1% in the XXI century, reckoning by Gott's formula (applied to the whole time of existence of the written tradition). And since the written tradition is now much more robust than in the past, we can safely reduce this estimate at least by half, to 0.5%. Moreover, a catastrophe which would have interrupted writing in the past would not interrupt it now and would not kill all people.
At last, the effect of observation selection can be shown and in the relation to anthropogenic
catastrophes, namely, to global risk of nuclear war. (In the assumption, that general nuclear war
would destroy mankind or would reject it so far back, that the writing of books would become
impossible.) The effect of observant selection here consists that we do not know what were chances
of our civilization to survive during the period with 1945 till 2008 that is during existence of the
nuclear weapon. Perhaps, in nine of ten the parallel worlds it was not possible. Accordingly, as a
result we can underestimate global risks. If intensity of change of number of observers would be
very great, it would have "pressing" influence on date in which we would find out ourselves - that is
we most likely would find out ourselves early enough. See more in detail article of Bostrom and
where exact calculations for catastrophes cosmological scales are offered. If the
probability of risk of extinction would make 90 % a year then I, most likely, would live not in 2007,
but in 1946. That I am still live in 2007, gives a certain top limit (with the certain set reliability) on
rate of extinction (for historical conditions of the XX-th century). Namely: 5 annual period of "halfdecay" can be excluded approximately with probability 99,9 (as for 50 years there have passed 10
cycles for 5 years, and 2 in 10 degrees it is 1024. That is for 50 years one thousand share of planets
would escape only.) Arguing further in a similar way it is possible to exclude authentically enough
periods of "half-decay" of a civilization smaller, than 50 years. However big ones we cannot
exclude. It, certainly does not mean, that the real period of "half-decay" is 50 years, however, if to
start from the precaution principle than should be assumed, that it is so. Such half-life period would
mean our chances to live till XXII century approximately in 25 %. (And it in the assumption, that
level of threats remains invariable from the middle of XX-th century.)
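A minimal sketch of the half-life arithmetic used here:

    def survival_probability(years, half_life):
        # Probability of surviving `years` at a constant annual risk with this half-life.
        return 0.5 ** (years / half_life)

    print(1 - survival_probability(50, 5))   # ~0.999: a 5-year half-life is excluded at ~99.9%
    print(survival_probability(100, 50))     # 0.25: ~25% chance of reaching the XXII century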
Conclusions: various independent methods of indirect reasoning give estimates of the probability of the destruction of civilization in the XXI century in the tens of percent. This should not reassure us, as if it guaranteed us tens of percent of survival: given the degree of uncertainty of such reasoning, "tens of percent" is a category of events which, as we assumed at the beginning, means risks from 1 to 100%.


Simulation Argument
N. Bostrom has developed the following logical theorem, known as the Simulation Argument (we already mentioned it in the context of the risk of a sudden switching-off of the "Matrix"). The course of his reasoning is as follows.
Proceeding from current tendencies in the development of microelectronics, it seems quite probable that sooner or later people will create a powerful artificial intelligence. Nanotechnology promises a limiting density of processors of a billion units per gram of substance (carbon), with a performance of the order of 10^… flops. Nanotechnology would allow coal deposits to be transformed into a huge computer (since carbon is probably the basic building material for it). This opens the prospect of transforming the whole Earth into computronium, one huge computer. The capacity of such a device is estimated at 10^… operations per second. (This corresponds to the transformation of about one million cubic kilometres of substance into computronium, which would cover the whole Earth in a layer 2 metres thick.) Using all the solid matter of the Solar System would give of the order of 10^… flops. It is obvious that such computing power could create detailed simulations of the human past. Since it is supposed that the simulation of one human requires no more than about 10^… flops (this number is based on the number of neurons and synapses in the brain and the frequency of their switching), this would make it possible to model simultaneously 10^… people, or 10^… civilizations similar to ours, developing at a speed similar to ours. Computronium would hardly devote all its resources to modeling people, but even if it allocated only one millionth of its effort to this, the result would still be an enormous number of human civilizations. So even if only one in a million real civilizations generates computronium, this computronium generates a vast number of simulated civilizations, that is, for each real civilization there are many virtual ones. The concrete figures are not important here; what matters is that under quite realistic assumptions the set of simulated civilizations is many orders of magnitude larger than the set of real ones.
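A rough counting sketch of this argument (since, as the text itself notes, the concrete exponents are not essential, the specific numbers below are illustrative assumptions of roughly the kind used by Bostrom, not the author's exact figures):

    planet_flops = 1e42              # assumed capacity of a planet-scale computer
    flops_per_mind = 1e16            # assumed cost of simulating one human mind
    people_per_civilization = 1e10   # assumed size of one simulated civilization
    effort_fraction = 1e-6           # fraction of capacity spent on ancestor simulations
    builder_fraction = 1e-6          # fraction of real civilizations that build computronium

    sims_per_builder = planet_flops * effort_fraction / (flops_per_mind * people_per_civilization)
    print(sims_per_builder)                     # 1e10 simulated civilizations running at once
    print(sims_per_builder * builder_fraction)  # 1e4 simulated civilizations per real one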
From this Bostrom draws the conclusion that at least one of three statements is true:
1) No civilization is capable of reaching the technological level necessary for creating computronium.
2) Every possible computronium will be completely uninterested in modeling its past.
3) We already live in a simulation inside computronium.


Thus point 2 can be excluded from consideration because there are reasons on which at least
some computroniums will be interesting in what circumstances was their appearance, but are not
present such universal reason which could operate on all possible , not allowing
them to model the past. The reason of interest to the past can be much, I will name is a calculation of
probability of the appearance to estimate density of other supercivilizations in the Universe or
entertainment of people or certain other beings.
Point 1 means, that or computronium and simulations in it are technically impossible, or that
all civilizations perish earlier, than find possibility to create it, that, however, does not mean with
necessity extinction of carriers of this civilization, that is for our case of people, but only crash of
technical progress and recoil back. However it is not visible the rational reasons yet, doing
computronium impossible. (For example, statements that consciousness simulation is impossible as
consciousnesses is quantum effect, does not work, as quantum computers are possible.) And it is
impossible to tell, that computronium is impossible basically as people have night dreams, not
distinguishable from within from a reality (that is being qualitative simulation) so, by means of
genetic manipulations it is possible to grow up a superbrain which has dreams continuously.
Thus the simulation argument reduces to a sharp alternative: either we live in a world which is doomed to perish, or we live in a computer simulation.
Note that the destruction of the world in this reasoning does not mean the extinction of all people; it means only a guaranteed halt of progress before computronium can be created. "Guaranteed" means that it happens not only on Earth but on all other possible planets. That is, it means that there is some very universal law which prevents, overwhelmingly (by many orders of magnitude), the majority of civilizations from creating computronium. Perhaps this happens simply because computronium is impossible, or because the modeling of human consciousness on it is impossible. But it may also be that no civilization can reach the level of computronium because it runs into certain insoluble contradictions and is compelled either to perish or to be rolled back. These contradictions must have a universal character, rather than being connected only, say, with nuclear weapons, because then civilizations on planets whose crust contains no uranium could develop steadily. An example of such a universal contradiction might be chaos theory, which makes systems above a certain level of complexity essentially unstable. Another example of a universal law that restricts the existence of systems is aging. It works in such a way that no one lives past about 120 years, although in each case there is a specific cause of death. We can say that accelerating progress is aging in reverse.


Note that the existence of a universal process of destruction setting an upper limit on the existence of all civilizations, which is what the Universal Doomsday argument of Vilenkin and Olum points to, implies much greater pressure on the average civilization. For example, the upper limit of human life is about 120 years, but the mean life expectancy is about 70 years. Universal destruction must suppress even the most resilient civilizations, and we are most likely an average civilization. Consequently, the process must begin to act on us sooner and with power to spare.
A well-known objection to this reasoning is that a simulated reality is not necessarily a copy of what existed in the past. (See the review of objections to the Simulation Argument in Danila Medvedev's article "Are we living in N. Bostrom's speculation?") And if we are in a designed world, this does not allow us to draw conclusions about what the real world is like, just as a monster in a computer game cannot guess how the real world of people is arranged. However, the fact that we do not know what the world outside the simulation is like does not prevent us from knowing that we are in a simulation. Here it is important to distinguish two senses of the word "simulation": as a computer model, and as the fact that this model resembles some historical event from the past. It is quite possible that the majority of simulations are not exact likenesses of the past, and that a considerable share of simulations have nothing to do with the past of the civilization that created them, just as in literature most novels are not historical novels, and even historical novels do not coincide exactly with the past.
If we are in a simulation, we are threatened by all the same risks of destruction which can happen in reality, plus intervention by the authors of the simulation, who may throw difficult problems at us or test extreme regimes on us, or simply amuse themselves at our expense, just as we enjoy watching films about falling asteroids. Finally, the simulation can simply be switched off suddenly. (A simulation may have a resource limit, so its authors may simply not allow us to create computers so complex that we could start our own simulations.) So if we are in a simulation, this only increases the risks hanging over us and creates essentially new ones, though there is also a chance of sudden rescue by the authors of the simulation.
If we are not in a simulation, then the chance is great that civilizations in general fail, because of catastrophes, to reach the level of creating computronium, a level we could reach by the end of the XXI century. And this means that the probability of certain global catastrophes which will not allow us to reach this level is great.
If we adhere to Bayesian logic, we should assign equal probabilities to independent hypotheses. Then we should assign a probability of 50% to the hypothesis that our civilization will not reach the level of computronium (which would mean either a failure to achieve it or an imminent collapse of civilization). This estimate coincides in order of magnitude with the estimates we have obtained in other ways.
It turns out that the simulation argument works in such a way that both of its alternatives worsen our chances of survival in the XXI century; that is, its net contribution is negative regardless of how we estimate the chances of the two alternatives. (My own opinion is that the probability that we are in a simulation is higher, by many orders of magnitude, than the probability that we are a real civilization which may perish.)
It is interesting to note a repeating pattern: the alternative with SETI also has a negative net effect. If extraterrestrials are nearby, we are in danger; if they do not exist, we are also in danger, since it means that some factor prevents them from developing.
Integration of various indirect estimates
All the indirect estimates presented were carried out independently of one another, yet they give similar and unfavorable results: that the probability of human extinction in the XXI century is high. However, since these reasonings concern the same reality, there is a desire to unite them into a more complete picture. Bostrom's simulation argument exists logically separately from the Carter-Leslie Doomsday argument (which in turn still needs to be connected with Gott's formula), and accordingly there is a temptation to "marry" them. Such an attempt is undertaken in Istvan Aranyosi's work "The Doomsday Simulation Argument". These, in turn, are interesting to connect with many-worlds immortality in the spirit of Higgo and with the influence of the effect of observation selection. An interesting attempt of this kind is undertaken in the already mentioned article by Knobe and Olum, "Philosophical implications of inflationary cosmology". As a counterweight to the Local Doomsday argument in the spirit of Carter-Leslie, they put forward a Universal Doomsday argument.
Namely, they show that from the fact that we find ourselves at an early stage of mankind it follows, with high probability, that the set of people living in short-lived civilizations is larger than the set of all people living in all long-lived civilizations in the whole Universe; in other words, the number of long-lived civilizations is small. This again means that the chances of our civilization not living for millions of years and not settling the galaxy are rather great; however, it changes the probable reasons for extinction: it will occur not because of any local reason concerning only the Earth, but because of some universal reason which would act on any planetary civilization. We should be anxious, they write, not about the orbit of a concrete asteroid, but about the possibility that all planetary systems contain so many asteroids that the survival of civilizations is improbable; we should be anxious not that a particular nearby star will become a supernova, but that the lethality of supernovae is essentially underestimated. We should note that the same conclusion, that the set of short-lived civilizations considerably exceeds the set of long-lived ones, also follows from Bostrom's simulation reasoning (above), if simulations are counted as short-lived civilizations.
I believe the essence of such an integration should be to find out which pieces of reasoning block which others, that is, which of them are stronger in the logical sense. (It is possible that subsequent research will give a more exact picture of the integration and will reduce all the separate calculations to one formula.) I see the following ordering of the strength of the statements (stronger statements cancelling weaker ones, from the top down). This does not mean, however, that I consider all of them true.
a. A qualitative theory of consciousness based on the concept of qualia. "Qualia" is the philosophical term designating the qualitative side of any perception, for example red-ness. The nature and reality of qualia are the object of intensive discussion. Theories of qualia do not yet exist; there are only a few logical paradoxes connected with them. However, a theory of qualia could apparently exclude notions of the plurality of worlds and of the linearity of time. Because of this, such a theory, if it were created and proved, would invalidate all the reasoning listed below.
b. J. Higgo's argument for immortality, based on the idea of the plurality of worlds. In this case there will always be a world in which I, and accordingly a part of terrestrial civilization, do not perish. Higgo's immortality argument is very strong because it depends neither on a doomsday nor on whether we are in a simulation. Immortality in Higgo's sense makes a personal doomsday impossible. No owner of a simulation can affect the working of Higgo's reasoning in any way, because there will always be an infinite number of other simulations and real worlds coinciding exactly with the given one at the present moment but having a different future. However, Higgo's reasoning relies on the self-sampling assumption - the idea that I am one copy out of the set of my copies - and all the subsequent arguments rely on the same idea: the simulation argument, Gott's formula and the Carter-Leslie doomsday argument. Any attempt to refute Higgo's immortality on the grounds that one cannot consider oneself as one copy out of a set of copies simultaneously refutes all these other arguments as well.
c. Bostrom's simulation argument. It, too, works under the assumption of a plurality of worlds, whereas the subsequent arguments do not take this into account. Besides, if we actually are in a simulation, we do not observe the world at a random moment of time, since simulations will more likely be tied to historically interesting epochs. Finally, reasoning in the spirit of DA requires a continuous numbering of people or of time, which does not work in the case of a set of simulations. Therefore any form of DA becomes invalid if the simulation argument is true. The simulation argument is stronger than the Carter-Leslie doomsday argument and Gott's formula because it works regardless of
how many more people there will be in our real world. Moreover, it essentially blurs the concepts of the number of people and of what the "real world" is, since it is not clear whether we should count future people in other simulations as real. Nor is it clear whether each simulation must simulate the whole world from beginning to end, or only a certain piece of its existence for only a few people.
d. Gott's formula. Gott's formula works reliably for events that are not connected with a change in the number of observers, for example radioactive decay, the date of the demolition of the Berlin Wall, predicting the duration of a human life, and so on. However, it gives a much softer estimate of the future duration of mankind's existence than the Carter-Leslie argument. Gott's formula is a simpler and clearer tool for estimating the future than the Carter-Leslie reasoning, if only because Gott's formula gives concrete numerical estimates, while the Carter-Leslie reasoning only gives a correction to prior probabilities. Further, Gott's formula is applicable to any reference class, since for any class it gives an estimate of the time of the end of that class, whereas the Carter-Leslie reasoning usually refers to the death of the observer and has to be adapted to situations where the observer does not die. Whether the corrections given by the Carter-Leslie reasoning should be applied to the estimates given by Gott's formula requires further research.
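To make this concreteness explicit, here is a minimal sketch in Python of how Gott's formula yields numerical estimates. It uses only the standard "delta t" form of the argument; the 200,000-year figure for the age of our species is an illustrative input, not a figure taken from this book.

# Gott's "delta t" argument: if we observe a phenomenon at a random moment of
# its total lifetime, then with confidence c its future duration T_future obeys
#   T_past * (1 - c) / (1 + c)  <  T_future  <  T_past * (1 + c) / (1 - c).
def gott_interval(t_past, confidence=0.95):
    low = t_past * (1 - confidence) / (1 + confidence)
    high = t_past * (1 + confidence) / (1 - confidence)
    return low, high

low, high = gott_interval(200_000)  # Homo sapiens is roughly 200,000 years old
print(f"95% interval for the future of mankind: {low:,.0f} to {high:,.0f} years")
# -> roughly 5,100 to 7,800,000 years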
e. The Carter-Leslie argument. An important condition of the Carter-Leslie argument (in Bostrom's interpretation) is the non-existence of other civilizations besides the terrestrial one. Moreover, it is very difficult to devise a real experiment in which the force of this reasoning could be tested, and the thought experiments work only with certain stretches.
f. Fermi's paradox is also at the bottom of this ranking, since the simulation argument clearly cancels its value: in a simulation, the density of civilizations, as well as the risk of their aggression, can be anything, depending on the whim of the owners of the simulation.
Everything said here about indirect ways of estimation lies on the border between the provable and the hypothetical. Therefore I suggest neither taking these conclusions on trust nor rejecting them. Unfortunately, research on indirect ways of estimating the probability of global catastrophe can throw light on our expected future, but does not give us the keys to changing it.
Chapter 25. The most probable scenario of global catastrophe
Now we can try to generalize the results of the analysis by presenting the most probable scenario of global catastrophe. This is not a question of an objective estimate of real probabilities, which we can calculate only for asteroid impacts, but of a value judgment, that is, a best
guess. It is obvious that such an estimate will be colored by the personal preferences of the author, so I will not pass it off as an objectively and precisely calculated probability. I will correct the estimate as new information appears.
In this estimate I consider both the probability of events and their nearness to us in time. That is why I attribute a small probability to nanotechnological grey goo, which, though technically possible, is eclipsed by the earlier risks connected with biotechnology. Likewise, the creation of a nuclear Doomsday Machine would require many years and is economically inexpedient, since damage of comparable scale could be inflicted more cheaply and more quickly by means of biological weapons.
These assumptions about the threats are made even taking into account that people will try to resist them as much as they can. So, I see two most probable scenarios of definitive global catastrophe in the XXI century leading to full human extinction:
1) A sudden scenario connected with the unlimited growth of an artificial intelligence whose goals are unfriendly to humans.
2) A systemic scenario in which the leading part is played by biological weapons and other products of biotechnology, but in which nuclear weapons and microrobots are also used. The spread of superdrugs, pollution of the environment and exhaustion of resources will also play a role. The essence of this scenario is that there will be no single factor destroying people, but rather an avalanche of factors exceeding all our capacities for survival.
The most probable time for both scenarios is 2020-2040. In other words, I believe that if these scenarios are realized, there is a more than 50% chance that they will occur within this interval. This estimate follows from the fact that, judging by current tendencies, it is unlikely that either technology will ripen before 2020 or only after 2040.
Now we will try to integrate all possible scenarios, taking into account their mutual influence, so that the sum equals 100% (these figures should be regarded as my tentative estimates, accurate only to within an order of magnitude). Following Sir Martin Rees, we will take the overall probability of human extinction in the XXI century to be 50%. Then the following estimates of the probability of extinction seem convincing (a simple consistency check of these figures is sketched after the lists below):
15% - unfriendly AI, or a struggle between different AIs, destroys people. I attribute such a high probability to AI because AI possesses the ability to find and influence all people without exception, to a greater degree than the other factors.
15% - a systemic crisis with repeated application of biological and nuclear weapons.
14% - something unknown.
1% - uncontrollable global warming and other natural catastrophes triggered by human activity.
0.1% - natural catastrophes.
0.9% - unsuccessful physical experiments.
1% - grey goo (nanotechnological catastrophe).
1% - attack through SETI.
1% - a nuclear Doomsday weapon.
1% - other.
The remaining 50% are the chances that people will not die out in the XXI century. I see them as consisting of:
15% - a positive technological Singularity: transition to a new stage of evolutionary development.
10% - a negative Singularity, in the course of which people survive but lose their importance. Variants: survivors in a bunker, a human zoo, the unemployed in front of the TV; power passes to AI and robots.
5% - sustainable development: human civilization develops without technological jumps and without catastrophes. This is offered as the best variant by traditional futurologists.
20% - a fallback to a post-apocalyptic world, with different levels of degradation.
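A simple consistency check of this budget, in Python; the numbers are just the tentative percentages listed above, and the dictionary keys are shorthand labels rather than the exact wording of the lists.

# Tentative probability budget for the XXI century, in percent.
extinction = {
    "unfriendly AI": 15, "systemic bio/nuclear crisis": 15, "something unknown": 14,
    "anthropogenic 'natural' catastrophes": 1, "natural catastrophes": 0.1,
    "physical experiments": 0.9, "grey goo": 1, "SETI attack": 1,
    "nuclear Doomsday weapon": 1, "other": 1,
}
survival = {
    "positive Singularity": 15, "negative Singularity": 10,
    "sustainable development": 5, "post-apocalyptic fallback": 20,
}
print(round(sum(extinction.values()), 6))                           # 50.0
print(sum(survival.values()))                                       # 50
print(round(sum(extinction.values()) + sum(survival.values()), 6))  # 100.0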
Now let us consider the possible influence of different forms of the Doomsday Argument on these figures. Gott's formula, applied to the total number of people who have lived on Earth, gives a not very high chance of extinction in the XXI century - on the order of 10 percent - but considerably limits mankind's chances of living another millennium or longer.
One more variant of reasoning uses DA and Gott's formula reflexively, though the legitimacy of such an application is seriously challenged (http://en.wikipedia.org/wiki/Self-referencing_doomsday_argument_rebuttal). Namely, if we apply Gott's formula to my rank (that is, my number by date of appearance) in the set of all people who know about Gott's formula or about DA, then either the formula will soon be definitively refuted, or the chances of survival in the XXI century turn out to be illusory. This is connected with one of the most extreme and disputable solutions of the reference class problem for DA, namely that DA concerns only those people who know about it - a solution offered by the pioneer of DA, B. Carter, when he first reported DA at a session of the Royal Society. The extremeness of this solution lies in the following: since so far there have been few people who know about DA (about ten thousand at the moment), the fact that I find myself so early in this set implies, according to the logic of DA, that in the future there will be approximately the same number of people who know about it. Since the number of people who know about
DA is continuously and non-linearly growing, within several decades it should reach millions. However, according to the logic of DA, it is improbable that I would find myself so early in this time-ordered set. Hence, something will prevent the set of those who know about DA from reaching such a large size: either a refutation of DA, or simply the absence of people interested in it. Like many other variants of DA, this variant can be refuted by pointing out that I am not a random observer of DA at a random moment of time: certain features inherent to me a priori have led me to be interested in various unverified hypotheses at early stages of their discussion.
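A minimal sketch of this reflexive application; the figure of ten thousand current DA-knowers is the rough estimate given above, and the 95% bound is simply the Gott-style argument applied to rank rather than to time.

# Gott-style bound applied to an observer's rank r within a reference class:
# if r is "typical", then with probability c we have r/N > 1 - c, so the total
# number N of members the class will ever have is below r / (1 - c).
def gott_upper_bound(rank, confidence=0.95):
    return rank / (1 - confidence)

rank = 10_000   # roughly how many people know about DA at the moment
print(f"95% upper bound on everyone who will ever know about DA: "
      f"{gott_upper_bound(rank):,.0f}")
# -> about 200,000, far below the millions expected if interest in DA keeps
#    growing for decades - hence the paradoxical conclusion above.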
The Carter-Leslie reasoning does not give a direct estimate of probability; it only modifies an a priori estimate. However, the contribution of this updating can be so considerable that the concrete size of the a priori estimate almost ceases to matter. For example, J. Leslie gives the following application of the Carter-Leslie reasoning in his book: an a priori probability of extinction in the near future of 1%, and a ratio of one thousand between the total number of humans in the "bad" and the "good" scenario. Through Bayes' formula, this a priori 1% then turns into a posteriori roughly 50%. If we apply the same assumptions to our a priori probability of extinction of 50%, we obtain a posterior chance of extinction of about 99.9%.
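A minimal sketch of the Bayesian update behind such figures; the function posterior_doom is my own shorthand for Bayes' formula under the self-sampling assumption, and the specific inputs are illustrative.

# Carter-Leslie update: finding yourself among the early humans is R times more
# likely under the "bad" (short) scenario than under the "good" (long) one,
# where R is the ratio of the total numbers of humans in the two scenarios.
def posterior_doom(prior, ratio):
    return prior * ratio / (prior * ratio + (1 - prior))

print(posterior_doom(0.50, 1000))  # ~0.9995: the "99.9%" case in the text
# For comparison: a 1% prior reaches about 50% once the ratio is about 100,
# and exceeds 90% when the ratio is 1000.
print(posterior_doom(0.01, 100))   # ~0.50
print(posterior_doom(0.01, 1000))  # ~0.91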
Finally, the third variant of the Doomsday Argument, in the Bostrom-Tegmark formulation adapted by me to smaller-scale natural processes, does not significantly influence the probability of natural catastrophes in the XXI century, since it limits the degree of underestimation of their frequency to one order of magnitude, which still leaves a chance of less than 0.1%. The worst possible manifestation of the observation selection effect would be an underestimation of the probability of global nuclear war: the true maximum frequency of this event could be one event per several years rather than one per several decades, though such an extreme correction is by no means obligatory. Nevertheless, even then an upper bound on the frequency still exists, so the situation here is not hopeless.
So, indirect ways of estimating the probability of global catastrophe either confirm an estimate on the order of 50% for the XXI century or sharply increase it to 99%; however, those variants of reasoning in which it sharply increases do not themselves possess a correspondingly high - 99% - degree of validity. Therefore we can settle on a total estimate of "more than 50%".
It is much easier to think up scenarios of global catastrophe than ways of preventing it, which itself suggests that the probability of global catastrophe is rather high. All the described scenarios can be realized in the XXI century. N. Bostrom estimates the probability of global catastrophe as not less than 25 percent; Martin Rees puts it at 30 percent (for the next 500 years). In my subjective opinion it is more than 50 percent, which makes its annual probability more than 1 percent, and growing. The peak of this growth will fall on the first half of the XXI century. Hence, very much depends on us now.
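A minimal sketch of the relation between a century-level probability and an annual one; the 30-year window and the 45% figure inside it are illustrative assumptions of mine, not estimates from this book.

# If the annual probability p is constant, the total probability over n years is
#   P_total = 1 - (1 - p)**n,  so  p = 1 - (1 - P_total)**(1/n).
def annual_rate(p_total, years):
    return 1 - (1 - p_total) ** (1 / years)

print(annual_rate(0.50, 100))  # ~0.0069: 50% spread evenly over a century
# If most of the risk is concentrated in a shorter window, e.g. ~45% of it
# inside 30 years, the annual rate within that window is well above 1%.
print(annual_rate(0.45, 30))   # ~0.020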
At the same time, it is unrealistic to predict the concrete scenario at the moment, as it depends on many unknown human and random factors. However, the number of publications on global catastrophes is growing, dossiers on risks are being compiled, and in several years these ideas will start to reach the authorities of all countries. Meanwhile, the defensive value of nanotechnology is already visible, and the possibility of creating "grey goo" is clear. An understanding of the gravity of the risks should unite all people during the transition period, so that they can stand together in the face of the common threat.
The analysis of global catastrophic risks gives us a new point of view on history. We can now evaluate regimes and politicians not by how much good they have done for their country, but by how effectively they prevented global catastrophe. From the point of view of the future inhabitants of the XXII century, what will matter is not how well or badly we lived, but how hard we tried to survive into the future at all.
In summary, it makes sense to stress the basic unpredictability of global catastrophes. We do not know whether there will be a global catastrophe and, if so, how and when. If we knew "where we would fall", we would not go to that place at all. This ignorance is similar to the ignorance each person has about the time and cause of his own death (let alone what will come after death); but a person at least has the example of other people, which gives a statistical model of what can happen and with what probability. Finally, although people do not much like to think about death, everyone nevertheless thinks about it from time to time and somehow takes it into account in their plans.
Scenarios of human extinction, by contrast, are practically repressed in the public unconscious. Global catastrophes are fenced off from us by a veil both of technical ignorance, connected with our ignorance of the real orbits of asteroids and the like, and of psychological ignorance, connected with our inability and unwillingness to predict and analyze them. Moreover, global catastrophes are separated from us by theoretical ignorance: we do not know whether Artificial Intelligence is possible and within what limits, and we do not know how to correctly apply the different versions of the Doomsday Argument, which give completely different estimates of the likely time of human survival.
We should recognize that at some level the catastrophe has already occurred: the darkness of incomprehensibility shrouding us has eclipsed the clear world of the predictable past. Not without reason is one of N. Bostrom's articles called "Technological Revolutions: Ethics and Policy in the Dark". We will need to gather all the clarity of mind available to us in order to continue our way into the future.
Part 5. Cognitive biases affecting judgments of global risks

Chapter 1. General Remarks: Cognitive Biases and Global Catastrophic Risks
In the first part of this book, we analyzed global catastrophic risks in as much detail as is reasonable within one volume. We looked not only at the risks themselves, but also at the ways in which they combine, how risks in general can be detected and ameliorated, what factors exacerbate or diminish them, and so on. In this part of the book, we look at an extremely important topic - cognitive biases - and how they influence the way we evaluate or estimate the probability of given risks.
Cognitive biases might as well be called reasoning errors, or biases that cause
errors in reasoning. A classic example might be generalized overconfidence, which is
systematic and universal. Experts tend to be just as overconfident as laypeople if not
more so, including in their area of specialty. In the chapters ahead you will find just how
many cognitive biases we have identified which may compromise reasoning with regard to
global catastrophic risks, perhaps fatally so.
Cognitive biases are an innate property of the human mind. No one is at fault for
them; they are not the product of malice. They are universal, ancient, and taken for
granted. Usually, they do not matter much, because being slightly mistaken in daily
circumstances is rarely fatal. Natural selection has removed the worst errors. With respect
to global risks, which concern extremely dangerous and poorly understood phenomena,
they may very well be. So the importance that we understand the cognitive biases therein
is extremely great. The typical way in which reasoning contaminated with cognitive biases
is overcome on a daily basis is through trial and error, a luxury we lack with respect to
extinction risks.
Even if the probabilistic contribution of several dozen cognitive biases is relatively low,
in combination they may throw a likelihood estimate off completely and cause millions of
dollars to be wasted on unnecessary prevention efforts. An example would be money spent
on unnecessary geoengineering efforts, or money spent fighting genetically modified
organisms which should go towards stopping autonomous killer robots.
To illustrate the ubiquity of cognitive biases you need only ask a few experts about their forecast for the greatest risks of the 21st century. Usually, they will fixate on one or two future technologies, subconsciously overemphasizing them relative to others, which they know little about. One person may focus on Peak Oil, another may place too much trust in the promise of renewable energy, another may focus on a global pandemic, and so on. One may consider the probability of the use of nuclear weapons to be extremely high, another greatly improbable. A
serious researcher on the topic of global risks should know about these wild differences in
position and some of the common reasoning errors which may cause them.
The analysis of possible reasoning errors with respect to global risks is a step on the
way to creating a holistic methodology of global risks, and therefore, to their prevention.
The purpose of this part of the book is to put potential cognitive biases applicable to global risks into a complete and structured list. Thus, first priority is given to the
completeness of these lists, rather than exhaustively illustrating each individual cognitive
bias. Fully explaining each cognitive bias would take its own chapter.
It is important to recall that small errors resulting from small deviations of estimates
with regard to high-power, high-energy systems, especially those such as nanotechnology
or Artificial Intelligence, could lead to threads which unravel and mutually reinforce,
eventually culminating in disaster. Thus, those developing these systems and analyzing
them must be especially careful to examine their own biases, and hold themselves to a
much higher standard than someone designing a new widget.
Intellectual errors can lead to real-life catastrophes. It is easy to find examples of how
erroneous reasoning of pilots led to airplane catastrophes that cost hundreds of lives. For instance, a pilot pulling back on the control stick when he thought the nose of the plane was pointed down though it was not, resulting in a stall and subsequent nosedive. In fact the
majority of technological disasters are both caused by human error and preventable by
proper procedures without error.
There are two main categories of error which are relevant to consideration of global
risks; we'll call them designer errors and pilot errors. Designer errors are made by big
groups of people for many years, whereas pilot errors are made by small groups of people
or individuals in seconds or minutes while controlling critical systems in real time.
Of course, there is the possibility that certain risks we highlight in these sections are
erroneous themselves, or at least overstated. We also can be certain that the list is not
complete. Therefore, any given list should be used as a launching pad for the critical
analysis of reasoning on global risks, not as a conclusive tool for any definitive diagnosis.
The most dangerous mistake is to assume that errors in reasoning on global risks are insignificant, or that they can be easily found and eliminated. For instance, people might think, "Planes fly, despite all possible errors, and life in general on the Earth proceeds, so the effect of these errors must be insignificant." This
projection is incorrect. Planes fly because during the course of their development, there
were many tests and thousands of planes crashed. Behind each crashed test plane was someone's error. We do not have a thousand planets to test; just one, and it has to count. The explosive combination of bio, nano, nuclear, and AI technologies is very risky, and
one fatal error could end it all. The fact that the Earth is whole today gives us no
reassurances for the future. There is no time in history when it was more important to be
correct than now.
There is the possibility that we will discover cognitive biases, or hidden pieces of
knowledge, which will completely change the course of our reasoning on global risks. For instance, there are some extremely intelligent people who see human-friendly smarter-than-human intelligence as the single best solution to address all
global risks, as we explored in the last section of the book. Many of the errors we discuss,
in any case, are endemic across researchers, and much insight is to be gained from
attempting to debias them.
By the term "cognitive biases" we mean not only violations of logic, but any intellectual procedures or dispositions which can distort final conclusions and result in an increased risk of global catastrophe.
Possible errors and cognitive biases in this section are divided into the following
categories:

Errors specific to estimates of global risks;
Errors concerning any risk, with specific analysis applied to global risks;
Specific errors arising in discussions concerning Artificial Intelligence;
Specific errors arising in discussions concerning nanotechnology.


Read carefully through the lists of biases, and consider situations in which you may
have encountered them outside the context of global risks.

Chapter. Meta-biases
Some cognitive biases don't allow a person to see and cure his other biases. This results in an accumulation of biases and a strongly distorted picture of the world. I have tried to draw up a list of the main meta-biases.

Psychological group
First and most important of them is overconfidence. Generalized overconfidence is also known as a feeling of self-importance, or arrogance. It prevents a person from searching for and compensating for his own biases; he feels himself to be perfect.
Lack of reflectivity: the inability to think about one's own thinking.
Projection of responsibility: if one is used to thinking that others are the source of his problems, he is unable to see his own mistakes and make changes.
Psychopathic traits of character, which often combine many of the above-mentioned properties.
Learned helplessness: in this case a person may not believe that he is able to debias himself.
Hyperoptimistic bias: if you want something very much, you will ignore all warnings.
Cognitive group
Stupidity. It is not a bias, but a (very general) property of mind. It may include many psychiatric disorders, from dementia to depression.
Lack of knowledge of logic, statistics, brain science, the scientific method, biases, etc.
Belief structure group
Dogmatism: an unchangeable set of beliefs, often connected with belief in a certain text or author.
Lack of motivation for self-improvement. This is not a bias, and experiments show that many biases cannot be corrected by motivation alone; but some kind of motivation is nevertheless needed to counter biases or to seek specific training in how to get rid of them.
Obstinacy. A person may want to signal high status by ignoring good advice and even facts, trying to demonstrate that he is firm in his beliefs.
Viral memes with built-in defenses against falsification, like conspiracy theories.
The use of knowledge of others' biases as an instrument for winning arguments rather than for self-correction.
Social pressure: thinking about one's own fallacies may not be socially acceptable in one's peer group.
LessWrong discussion of meta-biases in the comments:
http://lesswrong.com/lw/d1u/the_new_yorker_article_on_cognitive_biases/
Bostrom on a typology of biases:
http://www.overcomingbias.com/2007/11/towards-a-typol.html

Chapter 2. Cognitive biases concerning global catastrophic risks
1. Confusion regarding the difference between catastrophes causing human
extinction and catastrophes non-fatal to our species
There is a tendency to conflate global catastrophes causing the extinction of
mankind (existential risks) and other enormous catastrophes which could bring about major
civilizational damage without being completely fatal (most conceivable nuclear war
scenarios, for instance). The defining features of existential disaster are irreversibility and
totality. If there is a disaster which wipes out 99.9% of the world's population, 7 million
humans would still remain in the planet's refuges, about 400-4,000 times the number of
humans that were alive at the time of the Toba catastrophe 77,000 years ago. This would
be more than enough to start over, and they would have a tremendous amount of intact
records, technology, and infrastructure to assist them. So, the difference between a
disaster that wipes out 100% of humanity and merely 99.9% of humanity is immense. Even
the difference between a disaster that wipes out 99.999% of humanity and 100% would be
immense. People seldom intuitively grasp this point, and it may have to be explained
several times.
2. Underestimating non-obvious risks
Global risks are divided into two categories: obvious and non-obvious. An
obvious risk would be nuclear war, a non-obvious risk would be the destruction of an Earth
by the creation of a stable strangelet during a particle accelerator experiment. Non-obvious
risks may be more dangerous, because their severity and probability are unknown, and
therefore suitable countermeasures are generally not taken. Some non-obvious risks are
known only to a narrow circle of experts who express contradictory opinions about their
severity, probability, and mechanism of emergence. A detached onlooker, such as a military
general or a head of state, may be completely unable to distinguish between the expert
advice given and might as well flip a coin to determine who to listen to regarding the non-
obvious risks. This makes inadequate preparation for these risks highly likely, whereas
well-understood risks such as nuclear war are better anticipated and ameliorated.
Making estimates based on past rates of discovery of new global risks, it
seems as if the number of new risks expands exponentially over time. Therefore we can
anticipate a great increase in the number of global risks in the 21st century, the nature of
which may be impossible for us to guess now, and fall into the category of non-obvious
risks.
Obvious risks are much more convenient to analyze. There are huge volumes
of data we can use to assess these perils. The volume of this analysis can conceal the fact
that there are other risks about which little is known. Their assessment may not be
amenable to rigorous numerical analysis, but they are severe risks all the same (for
example, risks from incorrectly programmed Artificial Intelligence).
3. Bystander effect: global risks have less perceived national security importance
The focus of each country is on risks to that specific country. Global risks are
only considered if they are well understood, like nuclear war or global warming. As a result,
there is a bystander effect1 or "Tragedy of the Commons" whereby risks that threaten us all
but no particular nation individually are poorly analyzed and prepared for. This is analogous
to how a collapsed person in a busy area is less likely to be helped than if they are in a less
populated area.
4. Bias connected to psychologization of the problem
There is a social stereotype whereby those who warn of risk are considered
"doomsayers," with the implication that these people are social outcasts or merely warning
of risk for attention and to increase social status. This may sometimes be the case, yet studies
show that pessimistic people actually tend to estimate more accurate probabilities of
events than more optimistic people. This is called depressive realism 2. Only precise
calculations can define the real weight of risk. Psychologizing the problem is just a way of
sticking our heads in the sand. This approach to the problem will be popular among people
who clearly understand social stereotypes about doomsayers but have difficulty grasping
the complex scientific details surrounding global risks. It is easier to call someone a
doomsayer than to understand the risk on a technical or scientific level.
5. A false equation of global catastrophe with the death of all individual humans
It is possible to imagine scenarios in which humanity survives in a literal
sense but in a deeper sense, civilization comes to an end. This includes superdrug
scenarios, where humanity becomes addicted to virtual reality and gradually becomes
obsolete, replaced by machines that eventually turn us into detached brains or even
human-like computer programs that lack phenomenological consciousness. So, we should
note that a global catastrophe need not involve the sudden death of all individual humans;
there are more subtle ways our story might come to an end.
6. Perceptive stereotype of catastrophes from mass media coverage of prior risks
Mass media creates a false image of global catastrophe that has a
subconscious and profound impact on our estimates. Experience of watching television
reports on catastrophes has developed the subconscious stereotype that doomsday will be
shown to us on CNN. However, a scenario might not unfold in that way. A global event that
affects everyone on the planet very quickly may not have time to be adequately covered by
the media. Doomsday may not be televised.
Television also creates the perception that there will be an abundance of data
regarding a disaster as it emerges, as has been the case with threats such as earthquakes
and bird flu mutations. However, the amount of information available may actually be quite
small in proportion to the magnitude of the risk, so detailed reporting may not be
forthcoming.
7. Bias connected with the fact that global catastrophe is by definition a unique
event
Global doom sounds fantastic. It has never happened before. If it does
happen, it can only happen once. Therefore normal inductive processes of sampling are
ineffective to predict it. If something is true at t=1, t=2, t=3, and so on, we can reasonably
assume it will be true at t+1 (or all t). This methodology is useful during smooth conditions,
but ineffective for predicting abrupt, extreme phenomena without precedent. A separate
issue is that a lethal effect that kills off or disables humanity a little bit at a time may never
appear to be a global catastrophe at first, but leads to human extinction when operating
over a sufficient duration of time.
8. Underestimating global risks because of emotional reluctance to consider your
personal demise
People are not keen on considering global threats because they have
become accustomed to the inevitability of their personal death and have developed
psychological protective mechanisms against these thoughts. Yet, amelioration of global
risks demands that people of all kinds - scientists, politicians, businessmen - put aside
these thoughts and devote collective effort towards considering these risks anyway, even if
they are threatening on a personal and visceral level.
9. Too many cooks in the kitchen: decision paralysis
If there are 20 scientists who have 20 different, equally plausible-sounding
accounts of global risks, decision paralysis (also called analysis paralysis or option
paralysis3) may set in and there is the temptation to do nothing. A.P. Chekhov wrote, "If many remedies are prescribed for an illness, you may be certain that the illness has no cure." If too many remedies are prescribed for global risk, it may be that we cannot come
up with a cure. More simplistically, there may be a large pool of experts and we just listen
to the wrong ones.
10. Global risks receive less attention than small-scale risks
Consider the risk of a nuclear explosion in Washington. This is specific, vivid,
and has received top-level attention from the United States government. Consider the risk
of the creation of a suite of powerful new superviruses. This has received greatly less
attention, though one risk only affects one city, whereas the other could be terminal to the
entire planet. The impact of permanent termination of the human species is much greater
than comparatively small individual risks, and deserves correspondingly greater attention,
even though local risks may be more vivid.
11. Trading off a "slight" risk to humanity for some perceived benefit
There may be scientists who feel that it is worth developing powerful Artificial
Intelligence without adequate safeguards because there is only a small risk that it will get
out of control and threaten the human species. Or, some other advance, in the area of
weapons research or biotechnology may emit the same siren song. Alfred Nobel justified
the invention of dynamite with the notion that it would end all war. Instead, dynamite greatly
increased the killing power of weapons and indirectly contributed to the most lethal century
in the history of mankind.
12. Absence of clear understanding of to whom instructions on global risks are directed
It is not clear exactly who should be heeding our advice to deal with global
risks. There are many different parties and their relative ability to influence the global
situation is unknown. Many parties may underestimate their own influence, or estimate it
correctly but hesitate to do anything because they see it as so small. Then everyone will do
nothing, the bystander effect kicks in again, and we may pay the price. An important part
of risk research should be identifying to whom literature on global risks should be directed.
High-ranking military and intelligence personnel seem like obvious candidates, but there
are many others, including rank-and-file scientists of all kinds.
13. Poor translation between the theoretical and practical
Global risks are theoretical events which require practical planning. There is
no way to empirically test the likelihood or impact of global disasters. There is no way to
test the practical probability of thermonuclear war because it hasn't happened yet, and if it
did, it would be too late. Similar properties of other global risks make it difficult to connect
practical planning with theoretical risk assessments.
14. Predisposal to risk-taking and aggression
Humans evolved to take risks in competition with one another, to play
chicken. The first guy who chickens out loses the girl. This tendency becomes a problem in
a world of doomsday weapons.
15. Erroneous representation that global risks are in the far future and not relevant
today
The possible self-destruction of mankind has been a rather serious issue ever
since the United States and U.S.S.R. built up their nuclear stockpiles. Though unlikely that
nuclear war could wipe out humanity alone, in conjunction with biological weapons it could
be possible today, in a worst-case scenario. Within 20 or 30 years, there will be many more
tools for dealing death, and the risk will increase. The risk is here today; it is not futuristic.
16. The idea that global catastrophe will be personally painless
There is a natural tendency to think of global disaster by analogy to taking a
bullet in the head: quick and painless. Yet global disaster could stretch out for decades or
even centuries and provide profound misery to everyone involved. You might not be able to
enjoy the benefit of dying so quickly. Even if you did, it seems irresponsible to discount
global risk based on the anticipation that it would be like getting a bullet through the head.
Others may not be so lucky, and humanity's entire future being lost has a profound
negative value, even if doom were painless.
17. The representation that books and articles about global risks can change a
situation considerably
Books and articles about global risk are not enough, and may create a false
sense of security that knowledge is being spread. What is needed is real money and
manpower being put towards creating systems and protocols to prevent global risks,
systems that have barely been conceived of today. For instance, we should be acutely
aware that there is no plan and no proposals to regulate the synthesis of dangerous
genetic sequences, such as the Spanish flu virus. What are we going to do about it? A
book about global risk alone is not sufficient to prevent these gene sequences from
popping out of synthesizers and into real viruses which find their way into airports.
18. Views that global risks are either inevitable, depend entirely on chance factors
not subject to human influence, or depend on the highest-level politicians who are
unreachable
The circulation of certain ideas in a society, namely that global risks are real and it is
necessary to make efforts for their prevention, can create a certain background which
indirectly drives mechanisms of decision-making. Consider the process of crystallization; it
requires nucleation points and certain saturation levels of a chemical in solution to get
started. Unless the entire solution is supersaturated and nucleation points are provided,
crystallization does not occur. Taking difficult society-wide steps like preparing for global
risks is an analogous process. Every little bit of action and knowledge matters, no matter
where it is distributed. Higher-ups are reluctant to take action unless they perceive there
will be a certain level of understanding and approval among their subordinates. This effect
becomes even more acute as one approaches the top of the pyramid. Understanding must
be commonplace near the bottom and middle of the pyramid before the top does anything
about it.
19. Intuition as a source of errors in thinking about global risks
As global risks concern events which never happened, they are fundamentally
unintuitive. Intuition can be useful to come up with new hypotheses, but is less useful for
more systematic analyses and nailing down probabilities. Intuition is more susceptible to
subconscious biases, such as a latent unwillingness to consider unpleasant scenarios, or
contrariwise, the urge to see them where they are not present. As intuition takes less
mental effort than deliberative thought, there is a constant compulsion to substitute the
former for the latter.
20. Scientific research of global risks also faces a number of problems
As mentioned before, experimentation is not a good way of establishing the
truth about global risks. In connection with the impossibility of experiment, it is impossible
to measure objectively what errors influence estimates of global risks. There cannot be
rigorous statistics on global risks. The fundamental concept of falsifiability is also
inapplicable to theories about global risks.
21. The errors connected with unawareness of little-known logical arguments for
global risk
In the case of global risks, difficult inferential arguments such as the Doomsday Argument and observation selection effects come into play. These are important to
assessing global risk, however they are unknown to the majority of people, and a
considerable share of researchers reject them due to their unintuitiveness and the
conceptual challenge of understanding them.
22. Methods applicable to management of economic and other risks are not
applicable to global catastrophic risks
Global catastrophic risks cannot be insured, and are not easily amenable to
prevention within a logical economic context. They break the rules because they are
universal and their economic downside, being extreme or total, is hard to quantify in
conventional economic terms. Furthermore, there is a tendency for countries to fixate on
economic risks relevant only to themselves, and not to the globe as a whole.
23. Erroneous conception that global risks threaten people only while we are
Earthbound, and resettlement in space will automatically solve the problem
The scale of weapons and energies which can be created by people on the
Earth grows faster than the rate of space expansion. Meaning, by the time we can expand
into space, we will have created threats which can self-replicate and intelligently target
human beings, namely advanced Artificial Intelligence and autonomous self-replicating
nanotechnology. Information contamination, such as viruses or a computer attack, would
also spread at close to the speed of light. Space is not a panacea.
24. Scope neglect
There is a cognitive bias known as scope neglect, meaning that to save the
life of a million children, a person is only willing to pay a slightly greater amount than to
save the life of one child4. Our minds are not able to scale up emotional valence or
preventive motivation linearly in accordance with the risk; we discount large risks. Most
money and attention consistently goes to projects which only affect a much smaller number
of lives rather than all of the planet's humans.
25. Exaggeration of prognostic values of extrapolation
In futurism, there is an "extrapolation mania" by which futurists take present
trends, extrapolate them outwards, and predict the future based on those. However, trends
often change. As an example, Moore's law, the improvement in the cost-performance of
computers, is already starting to level off. Our experience with futurism shows that
extrapolation of curves tends to only be suited to short-term forecasts. Extrapolation poorly
accounts for feedback effects between predictions and future events and the effects of
fundamental innovations.
An example of an extrapolation that failed was Malthus' prediction that human
beings would run out of food at some point during the 19th century. He failed to account for
a number of future innovations, such as artificial fertilizer and the Green Revolution.
Besides being too pessimistic, it is possible to be too optimistic. Many futurists today
anticipate an era of abundance where all global risks are under control 5. However,
technological progress may prove to be slower than they anticipate, and global risks may
remain threatening for a long time, past 2100. We ought to be wary of simple extrapolation
and not take our models of the future too seriously, because models are made to be
broken. Helmuth von Moltke the Elder, a famous Prussian general, said: "No battle plan survives contact with the enemy."
26. Erroneous representation that people as a whole do not want catastrophe and a
doomsday
Humans, especially males, like to engage in thrill-seeking behavior. Humans,
especially males, are also liable to fantasies of revenge and destruction. These qualities
are serious liabilities in an era of global risk, where a war started anywhere in the world has
the potential to explode into a conflagration between superpowers. We like to whimsically
imagine that no one will press the button, but just because it hasn't been done before,
doesn't mean that it will never happen. Heads of state and military generals are human too,
and subject to errors in judgment with long-term consequences. Saying "a doomsday will never happen, because everyone fears it too much" is wishful thinking and liable to get us
into trouble.
27. Future Shock: Cognitive biases connected with futuristic horizons
If we transported a human being from 1850 to today, they would be
bewildered - shocked - by our level of technology. Flying machines, the Green Revolution,
nuclear reactors, the Internet... these were not really foreseen by the people of that era.
Over the last fifty years, things have changed so fast that futurist Alvin Toffler used the term "future shock" to describe common reactions to it6. Certain futuristic risks, like risks
from biotechnology, nanotechnology, and Artificial Intelligence, may seem so shocking that
many people have trouble taking them seriously. There has only been a gap of 13 years
between the first full sequencing of the human genome and the synthesis of the first
organisms with entirely artificial chromosomes. Many people are still digesting the
implications. In his "Future Shock Levels" article, Eliezer Yudkowsky outlined five general
levels of future shock7:
Shock Level 0: technology used in everyday life which everyone is familiar with, or
which is so widely discussed that nearly everyone is aware of it. (Catastrophe levels:
nuclear war, exhaustion of resources.)
Shock Level 1: the frontiers of modern technology: virtual reality, living to a
hundred, etc. (Catastrophe levels: powerful biological warfare and the application of military
robotics.)
Shock Level 2: medical immortality, interplanetary exploration, major genetic
engineering. Star Trek, only more so. Quite a few futurists anticipate we'll reach this
technology level close to the end of the 21st century. (Catastrophe levels: deviation of
asteroids towards the Earth, superviruses that change the behavior of people, creation of
advanced artificial bacteria or nanobots immune to the body's defenses.)
Shock Level 3: nanotechnology, human-level AI, minor intelligence enhancement.
Not necessarily technologically more difficult than Shock Level 2, but more shocking to
think about. Difficult to predict the arrival of. (Catastrophe levels: grey goo, intelligence-enhanced people taking over the planet and wiping out humanity.)
Shock Level 4: advanced Artificial Intelligence and the Singularity (creation of
greater-than-human intelligence). (Catastrophe levels: superhuman AI converting the entire
planet into computronium.)
Future shock levels are based on the idea that people define the horizon of
the possible based on what they are psychologically comfortable with, rather than
objectively analyzing the technological difficulty of various propositions. For instance, we
have already enhanced intelligence in mice, and we know that such tech could theoretically
be applied to humans, and we could anticipate major shifts in the balance of power
because GDP is known to correlate highly with the IQ of the smartest fraction in a society 8,
but many people would have difficulty coming to grips with such a scenario because it
seems so fantastic unless one is familiar with all the points in the argument. In general,
people seem to have an aversion to considering the effects of major intelligence
enhancement. Nuclear war might be easier to understand than Artificial Intelligence, but it
seems like the risk of human extinction in the long term rests more heavily on the latter
than the former.
28. Representation that global catastrophe will be caused by only one reason
When people usually think about global catastrophes, or watch movies about
them, there is usually only one unitary effect doing all the damage: an asteroid impact, or nuclear war. In reality, however, there is barely a limit to how many complex factors may intertwine and mutually reinforce. There may be a nuclear war, which triggers the explosion
of a Doomsday Machine network of nuclear bombs that were put in place decades ago,
which is followed up by grueling biological warfare and nuclear winter. Disasters tend to
come in groups, since they have the propensity to trigger one another. Few academic
studies of global risks have adequately analyzed the potential of complex overlapping
catastrophes.
Besides disasters consisting of discrete events triggering one another, new
technologies also enable one another. For instance, today the greatest risk to humanity is
nuclear warfare, but the age of nuclear danger may only be just beginning; advances in
nanotechnology will make it much easier to enrich uranium cheaply. Even the smallest of
rogue states might become able to mass-produce nuclear weapons for nominal cost. How
dangerous would the world become then? Little attention is given to the effects of
technological convergence on global risks.
29. Underestimating systemic factors of global risk
Systemic factors are not separate events, like the sudden appearance of a supervirus, but certain overall properties which concern an entire system. For instance, there is the conflict between the exponential growth of modern civilization and the merely linear (or slower) growth in the availability of material resources. The conflict is not localized in any one
place, and does not depend on any one concrete resource or organization. Self-magnifying
crisis situations which tend to involve a greater number of people over time do exist, but do
not necessarily depend on human behavior and have no center. We are only beginning to
understand the scope of these possible issues. Another example would be the Kessler
syndrome, which states that as the amount of space junk increases, it will impact other
space junk and eventually cause an unnavigable mess of obstacles to orbit the Earth,
making space travel very difficult9. Though this is not a global risk, there may be other risks
of this nature which we haven't foreseen yet.
30. Underestimate of precritical events as elements of coming global catastrophe
If as a result of some events, the probability of global catastrophe
substantially increases (for instance, we find an extremely cheap way of enriching
uranium), that event may contribute a large amount of probability mass to the likelihood of
global disaster, but be underweighted in importance because it is not a disaster itself.
Another example is that of a nuclear war: although a nuclear war might kill only one billion
people, the ensuing nuclear winter and breakdown of polite civilization would be much
more likely to usher in human extinction over the long-term than if the war never occurred.
We call these precritical events. More effort needs to be devoted to dangerous precritical
events, which may kill no one in and of themselves, but contribute substantial probability
mass to global catastrophe.
31. Cognitive biases based on the idea: It is too bad to be true or It couldn't
happen to me
It is easy to imagine someone else's car wrecked, but much more difficult to
imagine the future fragments of your own car. It is easy to imagine someone else getting
shot and killed, but harder to imagine yourself being shot and bleeding out. Bad thoughts
aren't fun to think about. Even discussing them is associated with low social status. Instead
of dealing with challenges to our existence, it is much easier to just ignore them and
pretend they will never happen.
32. Cognitive biases based on the idea: It is too improbable to be the truth
We have many historical examples of how something that was "improbable",
like powered flight or splitting the atom, suddenly became possible, and then ordinary.
Moreover, it may become mortally dangerous, or responsible for millions of lost lives. It is
necessary to separate improbable from physically impossible. If something is physically
possible and can be done, people will do it, even if it sounds crazy. It is more useful to
humanity to discover an improbable global risk than it is to build a new widget that
provides a momentary distraction or slightly optimizes some industrial process.
33. Ideas about "selective relinquishment," deliberately deciding not to develop
certain technologies as a way of dealing with global risks
In the short term, it may be possible to hold back the development of certain
technologies in a certain place. But in the longer term, as scientists and military leaders in
more countries encounter the prospect of a powerful technology, it will eventually be
developed. There is no world government that can use force to prevent the development of
a certain technology everywhere. If a technology is attractive, funds and people will
gravitate towards wherever it can be suitably developed. This already happens in the realm
of medical tourism and thousands of other areas, both licit and illicit.
34. Representations that human adaptability is high and continues to grow beyond
all bounds thanks to new technologies
Human adaptability is high, but the power of attack is progressively overtaking
the power of defense. There is no missile defense barrier that can resist a full-on nuclear
attack. There is no universal vaccine that can immunize people to resurrected or
engineered superviruses. There is no easy way to defend against a Predator drone lobbing
a missile in your general direction. It is always easier to break something than to fix it.
35. Inability of a system to simulate itself
Though we cannot investigate global catastrophes experimentally, we can
build 'best guess' probabilistic models with computers. However, there are naturally chaotic
effects that cannot be predicted with typical models. A model can never be complete
because it is difficult for a system to model itself, besides taking into account feedback
effects between modeling outcomes and subsequent human decisions. A model can never
fully model a situation because it would have to predict its own results and their
downstream effects before they happen, which is impossible.
36. Approaching life in the spirit of the proverb: after us the deluge
Madame de Pompadour, a French aristocrat who was the courtesan of Louis
XV, wrote: "After us, the deluge. I care not what happens when I am dead and gone."
Cognitive psychology experiments on time discounting have shown that people strongly
discount risks that apply only after their expected lifespan, or even earlier than that 10. A
"devil take all" attitude in the contexts of catastrophic risks could be rather dangerous, but
some of it is inevitable. There is also the thought, "if I'm going to die, other people ought to
die too," which actually enhances global risks rather than having a neutral effect.
37. Religious outlooks and eschatological cults
The study of global risks infringes on territory that has since time immemorial
been associated with religion. This gives it an unscientific gloss. However, mankind
imagines many things before they happen, and religion and myth are the primary historical
engines of such imagination, so such an association is inevitable. Consider all the concepts
from myth that became or were verified as real: flying machines (Vedas), weapons that
destroy entire cities (the Vedas), transubstantiation (New Testament and medieval
occultism, particle accelerators and nuclear reactions transform basic elements),
catastrophic flooding (Old Testament, search for "Black Sea deluge hypothesis" on
Google), miraculous healing (the benefits of modern medicine would be considered
miraculous from the perspective of a couple centuries ago), and so on. Just because
something has been mentioned in stories does not mean it can't actually happen.
38. Uncertainty and ambiguity of novel terminology
Describing global catastrophic risks, we may use terms that have no
unequivocal interpretation, since they describe events and technologies that may not be
mature yet. The use of such terms throws up red flags and may lead to misunderstanding.
Yet, just because we use difficult terms does not mean that the technologies we refer to will
not reach maturation and threaten our continued survival as a species.

References

1. Darley, J. M., & Latané, B. (1968). "Bystander intervention in emergencies: Diffusion of responsibility". Journal of Personality and Social Psychology 8: 377-383.
2. Moore, M. T., & Fresco, D. M. (2012). "Depressive realism: A meta-analytic review". Clinical Psychology Review 32 (1): 496-509.
3. Schwartz, B. (2004). The Paradox of Choice: Why More is Less. Harper Perennial.
4. Kahneman, D. (2002). "Evaluation by moments, past and future". In D. Kahneman and A. Tversky (Eds.), Choices, Values and Frames, p. 708.
5. Diamandis, P. (2012). Abundance: The Future Is Better Than You Think. Free Press.
6. Toffler, A. (1970). Future Shock. Random House.
7. Yudkowsky, E. (1999). Future Shock Levels. http://sl4.org/shocklevels.html
8. La Griffe du Lion (2002). "The Smart Fraction Theory and the Wealth of Nations". La Griffe du Lion, Volume 4, Number 1. http://www.lagriffedulion.f2s.com/sft.htm
9. Kessler, D. J., & Cour-Palais, B. G. (1978). "Collision Frequency of Artificial Satellites: The Creation of a Debris Belt". Journal of Geophysical Research 83: 2637-2646.
10. Frederick, S., Loewenstein, G., & O'Donoghue, T. (2002). "Time Discounting and Time Preference: A Critical Review". Journal of Economic Literature 40 (2): 351-401.
Chapter 3. How cognitive biases in general influence estimates of global risks
1. Overconfidence
Overconfidence, in the formal language of heuristics and biases research, means an overinflated conception of the accuracy of one's model of the world and probability estimates, and a diminished ability to update beliefs sufficiently in response to new information1.
Numerous studies in cognitive psychology have confirmed that overconfidence is a
universal human bias, present even in professionals such as statisticians who have been
trained to avoid it2. Overconfidence is associated with other cognitive tendencies such as
egocentric biases, which have adaptive value in an evolutionary context 3. An overconfident
person is more likely to acquire status in a tribe and to attract high-value mates, passing on
their genes and producing offspring with the same cognitive tendencies. This shows why
overconfidence tends to persist in populations over time despite its downsides.
In the context of global risks, overconfidence can manifest in a number of ways. The
most notable is to fixate on a preferred future, rather than considering a probability
distribution over many possible futures 4, and to ignore data contradicting that preferred
future. The tendency to selectively interpret information to suit one's biases is called
confirmation bias, which overlaps with overconfidence. One possible way of overcoming
overconfidence is to solicit the opinions of others regarding possible futures and seriously
consider these alternatives before building a more conclusive model. This is called taking
the outside view, outside view being a term from the field of reference class forecasting, in
which predictions are made using a reference class of roughly similar cases.
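As a rough illustration of the outside view, here is a minimal sketch of reference class forecasting; the project, its "inside view" estimate, and the reference class figures are all hypothetical, invented only to show the mechanics of replacing a single preferred forecast with the distribution of outcomes in similar past cases.

# Minimal sketch of reference class forecasting (all figures hypothetical).
# Instead of trusting an "inside view" estimate for one project, we take the
# distribution of outcomes in a reference class of roughly similar past cases.
import statistics

# Hypothetical schedule overruns (in percent) of ten roughly similar past projects.
reference_class = [20, 35, 50, 10, 80, 45, 60, 25, 40, 55]

inside_view_estimate = 5  # the planner's own optimistic overrun estimate, in percent

baseline = statistics.median(reference_class)          # outside-view baseline forecast
spread = (min(reference_class), max(reference_class))  # range of plausible outcomes

print(f"Inside view: {inside_view_estimate}% overrun")
print(f"Outside view baseline (median of reference class): {baseline}% overrun")
print(f"Outside view range: {spread[0]}%..{spread[1]}%")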
2. Excessive attention to slow processes and underestimation of fast
processes
Slower processes are more convenient for analysis and prediction, and tend to have
more data useful for interpreting them. However, dynamic systems are more likely to adapt
to slow processes and collapse from fast processes, making it more important to examine
fast processes in the context of global risk, including those which seem improbable at first
glance. A sudden catastrophe is much more dangerous than a slow decline, as there is
less time to prepare for it or intelligently respond. Many emerging technologies of the 21st century (nanotechnology, biotechnology, Artificial Intelligence) concern fast processes
which are poorly understood. This does not mean that slow threats shouldn't be analyzed
as well, just that we should be aware of a natural tendency to ignore fast processes and
focus only on slow ones.
3. Age features in perception of global risks
Younger people are more oriented towards risk-taking, fun, overconfidence, neophilia,
and gaining new territory, rather than on safety 5. Older people may be more risk-averse
than younger people6, overvalue highly certain outcomes both in risks and gains 7, value
safety and predictability more, and are underrepresented in groups exhibiting interest in
and concern with emerging technologies, such as those under the umbrella of
transhumanism8. So, different age groups have different cognitive biases particular to their
generation which may confound attempts at realistically assessing global risks. Some of
these biases may be advantageous, others not; the word "bias" does not necessarily have a negative connotation, but must be evaluated in a context-dependent
way.
4. Polarization through discussion
Discussions within groups usually lead to polarization of opinions. This has been
demonstrated by hundreds of studies and is formally called group polarization 9,10. If
someone approaches a discussion with two hypotheses that he gives about equal
probability, he is likely to polarize in a direction opposite to that of an opponent. One facet
of this is that many people have some tendency to argue for fun, or just for the sake of
arguing. As enjoyable as this may be, in practice it narrows our representation of possible
futures. Bayesian rationalists, who are trying to build the most realistic possible model of
the future, should approach discussions with truthseeking in mind and be wary of arbitrary
polarization, such as that caused by group polarization or otherwise. In fact, true Bayesian
rationalists with common priors cannot agree to disagree on matters of fact, and must
converge towards consensus on probability estimates 11.

5. Skill at arguing is harmful


In the book chapter "Cognitive biases potentially affecting judgment of global risks",
Eliezer Yudkowsky makes the point that a skillful arguer may be superior at arguing any
arbitrary point, making it more likely for him to anchor on a mistaken hypothesis 12. In the
study of cognitive biases, this is known as anchoring. The better knowledge of cognitive
biases an arguer has, the more readily he can attribute biases to his opponent, while being
blind to his own biases. In this way, a smarter thinker might actually use his intelligence
against himself, making it more difficult to construct an accurate model of risks. He may
also fall prey to framing effects (that is, certain ways of thinking about things) and use his
intelligence to reinforce that frame, rather than considering alternate framings 13. For
instance, someone preoccupied with the idea that advanced Artificial Intelligence would do
X may never be argued out of that hypothesis and consider that AI might actually do Y
instead.14
6. Excessively conservative thinking about risk possibilities
associated with memetic selection effects
In the book The Selfish Gene, Richard Dawkins introduced the idea of ideas as
replicators circulating in society as an ecosystem, analogous to genes. He used the term
memes to refer to ideas. In order to be protected from stupid or useless ideas, human
minds have a natural ideological immune system that causes us to dismiss most novel
ideas we run across. In general this is useful, but in the context of global risks, which may
involve blue sky concepts such as the long-term future of humanity and the power of newly
emerging technologies, it can be a stumbling block. Consider times in the past where
useful new ideas were rejected but could have saved many lives or much misery. The
concept of utilizing nitrous oxide for anesthesia in surgery was discovered near the end of the 18th century, but it wasn't until the mid-1840s, almost 50 years later, that it was actually applied.
There was a similar story with hand-washing as a tool for hygiene; Ignaz Semmelweis
pioneered its use in the 1840s, when it lowered death rates in his clinic by a factor of three to four, yet the practice didn't catch on with skeptical doctors until decades later.
Semmelweis was so enthusiastic about handwashing that authorities had him committed
to a mental institution, where he soon died. In the 21st century, a time of rapid changes, it

would behoove us not to make the same mistakes, as the stakes could be much higher,
and the punishment for ignorance greater.
7. The burden of proof is on a designer to prove that a system is safe,
not on the users to prove that a catastrophe is likely
In creating safe systems, there are two kinds of reasoning: first, trying to prove that a certain system is safe; second, arguing that the system is susceptible to certain concrete dangers. These modes of reasoning are not logically equivalent. To deny the general
statement that something, say the Space Shuttle, is safe only requires one
counterexample. In the case of the Challenger disaster, we received a vivid display that a failed O-ring seal was enough to cause the craft to break apart shortly after launch, causing tragic loss of life. However, refuting a particular criticism of safety (say, demonstrating that the Space Shuttle could sustain micrometeorite impacts in orbit) does not constitute a proof of the
general case that the Space Shuttle is safe. There may be a thousand other threats, like
the failed O-ring seal, which could cause the craft to break apart. However, our minds have
a tendency to think as if a comprehensive refutation of all the threats we can think of at the
time is sufficient to declare a piece of equipment safe. It may very well remain unsafe.
Only through many years of repeated use and in-depth analysis can we truly verify if a
system is safe with a high level of confidence. Empiricism is important here.
In analyzing the safety of a system, we must exercise vigilance to make sure that
errors in reasoning have not compromised our ability to understand the full range of
possible disaster scenarios, and not get too excited when we refute a few simple disaster
scenarios we initially think of. In complex technical systems, there are always conditions of
maximum load or design failure threshold, where if certain external or internal conditions
are met, the system will become unsafe. Eventual failure is guaranteed; it's just a question of what level of environmental conditions need to be met for failure to occur. It is important
to consider the full range of these possible scenarios and check our reasoning processes
for signs of bias.
From a scientific point of view, it is always easier to prove that an object exists than to prove conclusively that it does not exist. The same holds for safety: we can prove that a design is safe in general conditions, but we cannot
conclusively prove that unusual conditions will not combine to threaten the safety of the
design. The only way we can do that is through an extensive service lifetime, and even
then, we can never be completely certain. The amount of rigor we put towards evaluating
the danger of a new construct should be proportional to the damage it can cause if it
undergoes critical failure or goes out of control. Large, important systems, such as nuclear reactors, can do quite a lot of damage. The burden of proof is always on the designers to prove that a system is safe, not on critics to show that a system is unsafe. Steven Kaas put it pithily: "When you have eliminated the impossible, whatever remains is often more improbable than your having made a mistake in one of your impossibility proofs." In other
words, we often make basic mistakes in our reasoning, and should reevaluate disaster
modes from time to time even if we think they are ruled out.
8. Dangerous research tends to be self-preserving
Every institution aspires to its own self-preservation, and talk of dangers can lead to
the end of funding for new projects. Accordingly, scientists involved in these projects have
every personal incentive to defend them and downplay the dangers. No funding, no
paycheck. Unfortunately, in cases where the area of research in question threatens to
annihilate mankind for eternity, this can be rather dangerous to the rest of us. For instance,
one wonders why Google bought and is now pumping money into Boston Dynamics, who
are primarily known for manufacturing military robotics. We thought their motto was "Don't be evil"? It is unfortunate that the most powerful and exciting technologies tend to be the
most dangerous, and have the most qualified and charismatic scientists and
spokespersons to defend their continued funding. This is why every project must be subject
to some degree of objective outside scrutiny. People close to potentially dangerous
corporate projectslike the robotics projects at Googlehave an ethical obligation to
monitor what is going on and notify groups concerned with global catastrophic risk in case
anything is amiss. The safety of the planet is more important than Google's stock
price.
9. Erroneous representation that when the
problem arises there will be time to prepare for it
Most serious problems arise suddenly. The more serious a problem, the greater its
energy andusuallythe faster the threat develops after it initially emerges. This makes it
all the more difficult to prepare for. Global catastrophes are powerful problems. Therefore
they can develop too quickly to prepare for. We do not have experience which would allow
us to define harbingers of global catastrophe in advance. Auto accidents, analogously, occur suddenly. There may not be time to prepare after signs of the problem appear; we need to forecast it in advance and get safeguards in place. Consider Hurricane Katrina causing water to rise above the levees in New Orleans. If New Orleans had been adequately prepared in advance, it would have built levees tall enough to hold back water from even the worst foreseeable storms. But it did not, and catastrophic
damage resulted. By the time the hurricane is sighted on radar, it is already too late.
10. Specific risks are perceived as more
dangerous than more general risks
An experimental result in cognitive psychology is that stories with more details sound more plausible, even if their probability is lower than that of the general case and the entire story is made up. This is called the conjunction fallacy15. For example, mutiny on a nuclear submarine sounds more dangerous than a large sea catastrophe, though the former is a subcategory of the latter. Yudkowsky writes that from the point of view of probability theory, adding more detail to a story makes it less likely, but in terms of human psychology, the addition of each new detail makes the story all the more credible. Many people are not
familiar with this basic logical and probabilistic truth, and focus instead on highly specific
scenarios at the expense of more general concerns.
11. Representations that thinking about global risks is pessimistic
This is related to earlier points about overconfidence and memetic selection effects.
Considering doom (or being pessimistic at all) is associated with low status, so people
don't want to be associated with it or approve of it. Contrast this with the experimental
result that pessimistic people tend to make more accurate probability estimates, called
depressive realism. The 21st century is a minefield; if we're going to traverse it, we should
do so cautiously. Dancing on it blindly is not optimism, but stupidity.
12. Conspiracy theories as an obstacle for the scientific analysis of global risks
Global risk may sometimes be associated with conspiracy theories such as the
Illuminati or the idea that lizard men control the world. Of course, every effort must be
made to develop the study of catastrophic global risks into a serious discipline that has no
association with such nutty theorizing.
When a conspiracy theory predicts a certain risk, it tends to be highly specific: the
world will end on such-and-such a date, caused by such-and-such event. In contrast, the
scientific analysis of risk considers many possible risks, as probability distributions over
time, including the possibility of complex overlapping factors and/or cascades. In addition,
conspiracy theory risks will often come with a pre-packaged solution; the promulgator of
the conspiracy theory happens to know the one solution to deal with the risk. In contrast,
those engaged in the scientific study of global catastrophic risk will not necessarily claim
that they have any idea how to deal with the risk. If they do have a proposed solution, it will
not be entirely certainrather, it will be a work in progress, amenable to further
suggestions, elaborations, redundant safety elements and precautions.
The more poorly we predict the future, the more dangerous it is. The primary danger
of the future is its unpredictability. Conspiracy theories are harmful to future prediction, not
just because of their general lunacy, but because they focus attention on too narrow a set
of future possibilities. Furthermore, they assume superconfidence in the prognostic abilities
of their adherents. A good prediction of the future does not predict concrete facts, but
describes a space of possible scenarios. On the basis of this knowledge it is possible to
determine central points in this space and deploy countermeasures to deal with them.
Such "predictions" undermine trust in any sensible basis underlying them, for
example that a large act of terrorism could weaken the dollar and potentially cause an
economic collapse as part of a chain reaction. Conspiracy theories also tend to fall prey to
the fundamental attribution error: that is, the assumption that there must be a Them, a deliberately malevolent actor on whom to place the blame. In reality, there is usually no such arch-villain; there may be a group, or the disaster might happen as a complete accident, or as a result of human actions which are not deliberately malevolent. Focusing on conspiracy theories, or allowing them to influence our reasoning, biases our thinking in this direction.
At the same time, although the majority of conspiracy theories are false, there is
always the wild chance that one of them could turn out to be true. There was a conspiracy
theory by industrialists to take over the U.S. Government in a coup in 1933, the so-called Business Plot, but it was foiled when a general in on the plot decided to report it to
the government. Conspiracies do exist, but conspiracy theorists tend to overestimate the
degree to which groups and individuals are capable of covert coordination. Regardless,
consider the saying: if you cannot catch a cat in a dark room, that does not mean that the
cat is not present.
13. Errors connected with the conflation of short-term,
intermediate term and long-term forecasts
A short-term forecast considers the current condition of a system, and the majority of
discussions focus on that theme when considering policies for actions to actually take.
Intermediate term forecasts consider further possibilities of a system and consider its
current tendencies and direction rather than just its immediate state. Long-term forecasting
is much more expansive and considers a variety of long-term possibilities and end states.
Consider a ship loaded with gunpowder whose sailors smoke on board. In the short term, it is possible to argue that if the sailors smoke in a certain place, as far away as possible from the gunpowder, an explosion will not happen. But in the intermediate term, it is more
important to consider general statistics, such as the quantity of gunpowder and smoking
sailors which define the probability of explosion, because sooner or later a smoking sailor
will appear in the wrong place. In the long term, there is an essentially unsafe situation, and
an explosion is bound to occur. The same holds with the threat of nuclear war. When we
discuss its probability over the next two months, the current concrete behavior of world
powers matters. When we consider the next five years, we should take into account the
overall quantity of nuclear powers and missiles, and not focus too much on current events,
which may change quickly. When we speak about the prospect of nuclear war over the
timescale of decades, an even more fundamental variable comes into play, which is the
overall technological difficulty of enriching uranium or producing plutonium. Different logical
frames are useful for best considering different time frames.
Thus in different areas of knowledge the appropriate time scale of a forecast may
differ. As interesting as the relationship between Obama and Putin may be for current world
affairs, it is not likely to be relevant to the long-term unfolding of nuclear war risk.
Depending on the industry or area under consideration, what is considered a short-term
forecast or a long-term forecast may vary. For example, in the field of coal output, 25 years is a short-term forecast. In the field of microprocessor fabrication, it may be as short as 4 months.
14. Fear
Fear evolved in human beings in response to concrete stimuli in concrete situations.
Our visceral emotion of fear is not attuned to deal with remote, general, or abstract risks. In
fact, when it comes to remote risks, fear works against us, because it motivates us not to
consider them, as the prospect is vaguely scary, but not terrifying enough to make us
actually care. An example would be when a man refuses to get a medical analysis of a
bump on his prostate because he is afraid that something malignant will be found.
15. Underestimating importance of remote events (temporal discounting)
It is a natural tendency of human reasoning to assign lesser importance to events
which are distant in space and/or time. This is called discounting, and quantified with a
variable known as the discount rate. It obviously makes sense in an evolutionary context,
but in the modern age, the usefulness of our ancestral intuitions is beginning to waver 16. A
hundred thousand years ago, in the environment in which humans evolved, there was no
such thing as a nuclear missile, or a drone that can travel around the world and hit you with
a bomb in a few hours. Today, global risks might originate from distant lands and decades
in the future, but we might need to begin preparing for them now. We can hardly do that if
we engage in hyperbolic discounting, that is, steeply discount or effectively ignore risks outside of a certain time or space window, say, more than 5 years in the future. We might rationalize ignoring such risks
by saying there is nothing we can do about them, but there most certainly is something we
can do about them. Look at the grassroots effort to deal with global warming; this is an
example of a risk that is being prepared for far in advance.
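To make the notion of a discount rate concrete, here is a minimal sketch; the rates chosen are arbitrary illustrations, not recommendations. Even a modest constant annual discount rate gives a harm a century away almost no weight in present terms, which is exactly the trap described above.

# A minimal sketch of temporal discounting (rates are arbitrary, purely illustrative):
# the present weight given to a harm occurring t years from now, under a constant
# annual discount rate r, is 1 / (1 + r)^t.
def discount_weight(t_years, r):
    return 1.0 / (1.0 + r) ** t_years

for r in (0.03, 0.05, 0.10):
    weights = [round(discount_weight(t, r), 4) for t in (1, 10, 50, 100)]
    print(f"r = {r:.2f}: weights at 1, 10, 50, 100 years = {weights}")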
16. Effect of displacement of attention
The more someone gives attention to one global catastrophe, the less he gives to
another, and as a result his knowledge will become specialized. Therefore overvaluation of any one global risk is conducive to the underestimation of others and may be harmful. The
negative effect may be ameliorated if the thinker cooperates with other specialists and
gives due respect to their area of global risk expertise. We do see this effect in the global
catastrophic risk community today, though there are certain gaping holes in activity and
knowledge, such as with respect to the issue of nanotechnological arms races. In the
current global catastrophic risk analysis community, the primary focus is on Artificial
Intelligence. While Artificial Intelligence does indeed seem to be a major risk, there may be
other land mines on the road to it that need taking care of for us to even approach
advanced AI.
17. The Internet as a source of possible errors
The Internet naturally promotes a certain kind of bias; mostly for the sensational.
Search engines like Google even optimize their returned results based on your prior
searches, showing you what they think you want to see. This can make it difficult to branch
out from a certain niche, and exacerbates confirmation bias, the reception of data that
confirms what we already think. In addition, there is always a lot of low-quality noise
associated with any concept or idea. Even quality journals like Nature cannot necessarily be trusted, as peer review is fraught with all kinds of bias: for sensational results, for results that operate within a certain scientific paradigm, for results that adhere to the framing of a
dominant scientist, and so on. Older scientists receive all the grant money, meaning they
dictate the flavor of much of contemporary research. Max Planck's old saying comes to mind: "Science advances one funeral at a time." Of course, the wider amount of content on
the Internet means that if there is good content, and if someone is diligent about searching
for it, it will eventually be found. The Internet also improves the speed of research, allowing
a researcher to cycle through poor research more quickly and cheaply than may otherwise
have been possible.
18. Beliefs
Strong social or religious beliefs, especially religious beliefs, can powerfully influence
or bias estimates of global risk. Many Christians believe that the course of history is
directed by God, and that he would never allow humanity to be exterminated for an
arbitrary reason. It would be disappointing if we were then wiped out because not enough
Christians cared about global risks to participate in doing anything about them.
Aside from Christianity, there are also pseudo-religious beliefs associated with
progressivism and liberalism that bias risk estimates. Questioning global warming, or the
severity of global warming, is often considered heresy according to the dominant
progressive paradigm. Emails from the ClimateGate controversy (there were two rounds of releases) made it very clear that one prominent group of climate scientists was only looking for the right conclusions, scientific objectivity be damned. Another example is
that those with engrained liberal beliefs might find it personally difficult to consider the effects of mass intelligence enhancement creating humans with superior capabilities, because of distaste for the idea of innate inequality.
In his book on global risks, Our Final Hour, Sir Martin Rees writes that in the Reagan administration, the religious fundamentalist James Watt, Secretary of the Interior, believed that the arrival of the Apocalypse might be accelerated by destruction of
the environment17. There are many other examples in this vein, too many to count.
19. Congenital fears
Many people have congenital fears of snakes, heights, water, impending collision, illness, and so on. Many of these fears are likely human universals. It is not difficult to imagine that people might overestimate the seriousness of events that remind them of these fears, or underestimate those which are unlike them. Post-traumatic stress, such as from an earlier illness, may also influence how one looks at certain risks.
Most people have a preferred life narrative that is absent of global catastrophe (don't we
all?), so they may irrationally underestimate its objective likelihood.
20. Shooting the messenger
Discussion of risks can trigger discontent. This discontent may be directed towards
the bearer of the message rather than addressing the threat itself.
21. Difficulty in delimitation of own knowledge
I do not know what I do not know. It is tempting to subconsciously think that we
personally know everything that is important to know, even in outline. This leads to a false
sensation of omniscience, conducive to intellectual blindness and an unwillingness to
accept new data. Albert Camus said that the only system of thought that is faithful to the
origins is one which notes its limits. We should always be keeping an eye out for

unknown unknowns, and realize that they must exist, even if we cannot currently imagine
what they are.
22. Humor
It is possible to misinterpret a genuine threat as a joke, or interpret a joke as a
genuine threat. Reagan once joked to technicians, prior to a radio address: "My fellow Americans, I'm pleased to tell you today that I've signed legislation that will outlaw Russia forever. We begin bombing in five minutes." This comment was never broadcast, but it did
leak later. If it were leaked early or accidentally taken seriously, who knows what kind of
mayhem it may have caused. At the very least, it would have heightened tensions.
Similarly, Senator John McCain jokingly sang "Bomb, Bomb Iran" in response to an audience question at a campaign stop during the 2008 presidential election campaign.
23. Panic
A hyperactive reaction to stress leads to erroneous and dangerous actions. For
example, a man may jump out of a window during a fire although the fire has not reached
him, causing his premature death. It is obvious that panic influences the thoughts and actions of people under stress. For example, the famous engineer Barnes Wallis was
described as a religious man and a pacifist during peacetime, but during World War II
developed a plan of using bouncing bombs to destroy dams in Germany to flood the Ruhr
valley18. This is an example of how panic and acute threats change normal modes of
behavior. Panic can be long-term, not just acute. Short-term panic is also very dangerous,
as a situation of global risk may develop very quickly, in hours or even minutes, and calm
strategic decisions will need to be made in that time.
24. Drowsiness and other mundane human failings
According to one account, Napoleon lost at Waterloo because he had a chill19. How is it
reasonable to expect that the President of the United States would make the best possible
decision if he is abruptly awoken in the middle of the night? Add in the basic inability of
human beings to precisely follow instructions, and the limited number of instructions anyone can execute, and you have plenty more obvious limitations which come into play in a crisis.
Someone might be cranky that their girlfriend dumped them, they have loose stools, or they
just haven't had enough to eat lately and their blood sugar is low. Even a trained soldier
might experience a slight lapse in consciousness for no reason at all. Given how easy it is
for a head of state to make a call and initiate nuclear war, it is hard to overestimate how
mundane or seemingly stupid the reason for starting a conflict or missing a crucial safety
detail may be.
25. Propensity of people to struggle with dangers which are in the past
After the tsunami of 2004, Indonesians and other southeast Asians began to build
many systems of prevention and warning for tsunamis. However, the next major disaster in
the area might not be a tsunami, it could be something else. In this fashion, people may be
preparing for a disaster which already happened and is not a threat, neglecting a future
disaster.
26. Weariness from catastrophe expectation
If you live in a city which is constantly being bombed, you may eventually stop caring,
even if there is a constant risk you will be blown to bits. During the London Blitz of World
War II or the Siege of Leningrad, many citizens went about their business normally. This
effect has been called crisis fatigue. After September 11th, many skyscrapers around the
world were put on alert, in expectation of further attacks, but none occurred, and security
went back down again. Since the periodicity of major disasters might consist of many
years, people may become complacent even as the objective probability of such an event
gradually increases. The probability of a large earthquake in California continues to
increase, but many of the buildings in San Francisco were built in a hurry after the last
major earthquake of 1906, and are not at all earthquake-safe or earthquake-ready,
meaning the effects of another major earthquake could lead to even more loss of life than
in 1906. These buildings house at least 7 percent of current residents, likely more 20. The
weariness from catastrophe expectation expresses itself as a society's loss of sensitivity to warnings.
27. An expert's estimates which are not based on strict calculations
cannot serve as a measure of real probability

Unlike in the stock markets, where the average estimate of the best experts is used
as a forecast of market behavior, we cannot select our experts and average them based on
their track record of predicting human extinction, because there is no track record of such
an event. If it had happened, we would all be dead, and quite incapable of predicting
anything.
Slovic, Fischhoff, and Lichtenstein (1982, 472) 21, as cited in Yudkowsky (2008) 22
observed:
A particularly pernicious aspect of heuristics is that people typically have great
confidence in judgments based upon them. In another followup to the study on
causes of death, people were asked to indicate the odds that they were correct in
choosing the more frequent of two lethal events (Fischhoff, Slovic, and Lichtenstein, 1977). In Experiment 1, subjects were reasonably well calibrated when they gave
odds of 1:1, 1.5:1, 2:1, and 3:1. That is, their percentage of correct answers was
close to the appropriate percentage correct, given those odds. However, as odds
increased from 3:1 to 100:1, there was little or no increase in accuracy. Only 73% of
the answers assigned odds of 100:1 were correct (instead of 99.1%). Accuracy
jumped to 81% at 1000:1 and to 87% at 10,000:1. For answers assigned odds of
1,000,000:1 or greater, accuracy was 90%; the appropriate degree of confidence
would have been odds of 9:1. . . . In summary, subjects were frequently wrong at
even the highest odds levels. Moreover, they gave many extreme odds responses.
More than half of their judgments were greater than 50:1. Almost one-fourth were
greater than 100:1. 30% of the respondents in Experiment 1 gave odds greater than
50:1 to the incorrect assertion that homicides are more frequent than suicides.
The point of this quote is to illustrate that experts are consistently overconfident, often
ridiculously so. From Parkin's Management Decisions for Engineers23:
Generally, people have a displaced confidence in their judgment. When asked
general knowledge or probability questions, experimental subjects performed worse
than they thought they had (Slovic et al., 1982). Calibration experiments that test the
match between confidence and accuracy of judgment, demonstrate that those
without training and feedback perform badly. Lichtenstein et al. (1982) found that
from 15,000 judgments, when subjects were 98% sure that an interval contained the
right answer they were wrong 32% of the time. Even experts are prone to some
overconfidence. Hynes and Vanmarke (1976) asked seven geotechnical gurus to
estimate the height of a trial embankment (and their 50% confidence limits), that
would cause a slip fracture in the clay bed. Two overestimated the height and five
underestimated. None of them got it within their 50% confidence limits. The point
estimates were not grossly wrong but all the experts underestimated the potential
for error.
Simply put, experts are often wrong. Sometimes their performance is equal to
random chance, or to that of a person pulled off the street. Statistical prediction rules often
outperform experts24. This creates trouble for us when we rely on experts to evaluate the
probability and nature of catastrophic global risks.
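The mismatch described in these calibration studies can be stated numerically: stated odds of n:1 imply a probability of being correct of n/(n+1), which can then be compared directly with the observed accuracy. A small sketch using the figures quoted above:

# Convert stated odds into the implied probability of a correct answer and compare
# it with the observed accuracy reported in the calibration study quoted above.
def implied_probability(odds):
    # Odds of n:1 imply a probability of n / (n + 1).
    return odds / (odds + 1.0)

# Observed accuracy at each stated odds level, taken from the quote above.
observed_accuracy = {100: 0.73, 1000: 0.81, 10000: 0.87, 1000000: 0.90}

for odds, accuracy in observed_accuracy.items():
    print(f"stated odds {odds}:1 -> implied {implied_probability(odds):.4f}, observed {accuracy:.2f}")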
28. Ignoring a risk because of its insignificance according to an expert
This ties into the above point. If an expert thinks a risk is insignificant, he may be
wrong. It is also necessary for the expert to quantify the precise degree of insignificance
they are talking about, which they often refuse to do or are unable to do. For instance, say
that we determine that the probability that a certain particle accelerator experiment
destroys the planet is one in a million. That sounds low, but what if they are running the same experiment a hundred times a day? After a year, about 36,500 runs, the cumulative probability of doom is already roughly 1 in 28. Within a century it approaches unity. So, one in a million may actually be a quite significant risk if the risk is repeated often enough.
Aside from having a proper understanding of insignificance, we also ought to keep in
context the relationship between an estimate of insignificance and the chance that the
expert making the prediction is in error. Say an expert says that an event only has a one in
a billion chance, but there is a 50% probability that they are completely wrong. In that case,
the real insignificance might be just one in two, or one in twenty. The probability that an
expert is just plainly wrong often throws off the estimate, unless there is a rigorous
statistical or empirical basis to confirm the estimate. Even then, the empirical data may lie,
or there may be an error in the statistical calculations, or a mistaken prior.
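A quick sketch of the arithmetic behind this point (the one-in-a-million figure is, as in the text, purely illustrative):

# Cumulative probability of at least one catastrophe after n independent runs,
# each with per-run probability p (figures are illustrative, as in the text).
def cumulative_risk(p, n):
    return 1.0 - (1.0 - p) ** n

p_per_run = 1e-6            # "one in a million" per experiment
runs_per_year = 100 * 365   # a hundred runs a day

print(cumulative_risk(p_per_run, runs_per_year))        # ~0.036, roughly 1 in 28 after one year
print(cumulative_risk(p_per_run, runs_per_year * 100))  # ~0.97 after a century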
29. Underestimating or overestimating our ability to resist global risks

If we underestimate our ability to resist global risks, we might fail to undertake actions
which could rescue us. If we overestimate our abilities to resist it, it could lead us to
excessive complacency. We need to find a balance.
30. Stockholm syndrome
Most humans have a relationship to death similar to Stockholm syndrome; that is,
similar to the relationship between hostages who become attached to their kidnappers. The
hostage feels helpless and controlled by the kidnapper, and, perhaps as a survival
mechanism, begins to fall for them. This can proceed to the point of the hostage being
willing to risk their life for the kidnapper. The same thing applies to humans and our
relationship to death. We feel it is inevitable we'll die one day, and even begin to acquire an
aesthetic love for it. Just because we will die one day doesn't mean that we shouldn't
attempt to ameliorate global catastrophic risks.
31. Behind operator errors lies improper preparation
Behind the concrete errors of pilots, operators, dispatchers, and politicians, there is
often a conceptual error in operator training or a warning flag that something was wrong
with the pilot or operator in advance which goes ignored. In March 2014, a cheating scandal on proficiency tests for nuclear launch officers at Malmstrom Air Force Base, Montana, led to the firing of nine mid-level commanders and the resignation of the commander of the 341st Missile Wing25. This is an example of setting standards and sticking
to them. If a nuclear force officer cannot pass a proficiency test without cheating, what
business do they have controlling devices which could cause the end of the world as we
know it? The more important the technology, the more rigorous and failproof the evaluation
and competency testing processes need to be. Scientific analysis of global risks and the
promulgation of such knowledge should be considered part of proper preparation for
mankind as a whole.
32. A group of people can make worse decisions than each person separately
Depending on the form of organization of a group, it can promote or interfere with the
development of intelligent decisions. A good example might be a scientific research
institute, a bad example would be a mob or a country in the middle of civil war. The
influence of a crowd can bring the thinking level down to the lowest common
denominator26. The wisdom of crowds is often better suited to estimating the number of
gumballs in a large jar than making highly complex, technical decisions. That is why most
expert surveys are restricted to a relatively small number of experts. The majority of people do not have the knowledge to make complex decisions, and should not be asked to.
Until there is a uniform decision making and threat evaluation process for global risk, we
are probably in a sub-par situation.
33. Working memory limits
A person can only focus on a few things at a time. Will A attack B? Maybe yes, maybe no, but even that framing alone leaves out important details, creating a sort of attentional blind spot. One human or even an organization cannot capture all aspects of world problems
and perfectly arrange them by order of their degree of danger and priority. This is why
computer models can be helpful, because they process more simultaneous details than a
human can. Of course, computer models can fail, and need to augment human reasoning,
not replace it.
34. Futurology is split across different disciplines
as though the underlying processes occur independently
There are several variants or genres of thinking about the future, and they have a propensity not to be interconnected very much in the intellectual world or in thoughtspace, as if these futurist domains were in entirely different worlds:
- Forecasts around the theme of accelerating change, NBIC convergence, and the Singularity: supercomputers, biotechnologies, and nanotechnology.
- Forecasts of systemic crises in the economy, geopolitics and warfare. This tends to be a different crowd than the Singularity crowd, though there is some overlap.
- Forecasts in the spirit of traditional futurology, such as demographics, resource limitations, global warming, et cetera.
- A special type of forecast for big catastrophes: asteroids, supervolcanoes, coronal mass ejections from the Sun, the Earth's magnetic field flipping, and so on.
To accurately predict the future, we must reconcile data from each of these domains
and take a holistic view.
35. A situation when a bigger problem follows
a smaller one, but we are incapable of noticing it
There is a Russian proverb: "trouble does not come alone." The American version is "when it rains, it pours." A global catastrophe could occur as a result of a chain of progressively worse events; however, we might get distracted by the first disasters in the chain and fail to prepare for the larger ones. The reasons may be:
- Our attention at the first moment of failure is distracted and we make a critical error. For instance, a driver almost rear-ends the driver in front of him, then decides to quickly go around him without thinking, causing him to get hit by a car in the other lane. Or, a man carrying out a robbery decides to shoot at a policeman who comes to arrest him, and is gunned down in return. Or, something falls off a cliff, a person goes to look for it, and ends up falling off himself. The possibilities here are quite expansive.
- Misunderstanding that the first failure creates a complex chain of causes and effects, which causes the person or civilization under threat to respond inadequately to subsequent threats. The first disaster weakens the organism and it becomes susceptible to further disasters or maladies. For example, flu can lead to pneumonia, or nuclear war could lead to nuclear winter.
- Euphoria from overcoming the first catastrophe causes the group to lose prudence. For instance, someone who suffers an accident and is in the hospital begins to recover somewhat, and decides to leave the hospital prematurely, leading to inadequate healing and permanent injury.
36. Selectivity of attention
Often, when people are looking for certain weaknesses, for instance in the economy, they may tend to overfocus on one issue, like subprime mortgages. This causes a certain selectivity of attention, where there is then a tendency to see everything through a lens pertaining to one issue, rather than the bigger picture.
This can lead to a vicious cycle of selective accumulation of information (confirmation
bias) about only one aspect of instability in the system, ignoring the reasons for its basic
stability, or other risks connected with the system. Overestimating the magnitude or
importance of certain risks can then cause a society to become complacent with a certain
expert or set of experts, confounding future preparation efforts. For instance, science fiction
films that focus on robotic takeovers tend to emphasize unrealistic scenarios, such as
robots with anthropomorphic psychology, and cause desensitization of the public at large to
the very real risk of Artificial Intelligence in the longer term. Another example: in Thailand in
2004, when the Indian Ocean tsunami hit, the Warning Service decided not to inform the
public, assuming it was a less severe event than it actually was, for fear of scaring tourists.
Unfortunately, this cost many lives.
37. Subconscious desire for catastrophe
Similar to Stockholm syndrome, this risk consists of the aspiration of a catastrophic
risk expert for his forecasts to be proven correct. It may push him to exaggerate harbingers
of a coming catastrophe, or to tolerate those events which may lead to catastrophe. People
may also want catastrophes from boredom or due to the masochistic mechanism of
negative pleasure.
38. Use of risk warnings to attract attention or social status
This type of behavior may be called Scaramella syndrome after the Italian security
professional (born 1970) Mario Scaramella. Quoting Wikipedia's entry on him:
While working for the Intelligence and Mitrokhin Dossier Investigative
Commission at the Italian Parliament, Scaramella claimed a Ukrainian ex-KGB
officer living in Naples, Alexander Talik, conspired with three other Ukrainian officers to assassinate Senator Guzzanti. The Ukrainians were arrested and special weapons including grenades were confiscated, but Talik claimed that Scaramella
had used intelligence to overestimate the story of the assassination attempt, which
brought the calumny charge on him. Talik also claimed that rocket propelled
grenades sent to him in Italy had in fact been sent by Scaramella himself as an
undercover agent.

Sometimes, an expert will make up a risk because he knows that society or the mass media will react sharply to it, gaining him attention. This is a problem
because some of the worst risks may not be amenable to sensationalism or suited to
media attention. Also, it may cause inappropriate desensitization to serious risks in a
society, because of their association with publicity stunts.
39. Use of the theme of global risks as a plot for entertaining movies
There are many dumb movies about global risks. This causes us to associate them
with whimsy, entertainment, or frivolity. This is a problem.
40. Generalizing from fictional evidence
In the ancestral environment, where our bodies and brains evolved and were formed,
there was no such thing as movies. If you saw something happening, it was real. So our
brains are not well suited to telling the difference between movies and reality. We think
about movies subconsciously, or even consciously, as if they actually happened, though
they are completely made up. Unfortunately, the scientific understanding level of a
Hollywood screenwriter is usually somewhere between that of a 7th grader and an 8th
grader. The tendency to recall movies and books as if they were actual events is called
generalizing from fictional evidence27.
The most audacious examples are movies about Artificial Intelligence, which postulate
that AIs would have human-like, or anthropomorphic thinking, such as clannishness,
anthropomorphic rebellion, or a desire for revenge. An AI, being a machine, would not have
any of these animalistic tendencies unless they were explicitly programmed into it, which
they likely would not be. Thus, the greatest danger to humanity is from AI that is indifferent
or insufficiently benevolent to us, and emphatically not AI that has a specific grudge or
malevolence against us28. This crucial point makes all the difference in the world in terms of
how we will design an AI to optimize for safety.
Another issue with fiction, previously discussed, is that futuristic stories tend to make
the future similar to today, but with just a few added details. For instance, in Total Recall,
the technology was very similar to that of the year when the movie was made (1990), except there was interplanetary travel and slightly more advanced computers. In Back to the
Future (1985), the main differences of the future appeared to be hoverboards and flying
cars. In the real future, many details will simultaneously be different, not just a few.
Yet another problem unique to fiction is that forces that clash tend to be equally
balanced. If Star Wars were real, the Empire would just use the Death Star to blow the
entire Rebel fleet out of the sky. If Terminator were real, the assassin robot would just snipe
the protagonist from a mile away, without ever being seen. If The Matrix Reloaded were
real, the AI would just destroy the subterranean human city of Zion with nuclear weapons.
In reality, extreme power asymmetries and unfair match-ups happen all the time. In 15181520, about 90-100 Spanish cavalry and 900-1,300 infantry were able to conquer and
subjugate the Aztec civilization, an empire of millions of people.
41. Privacy as a source of errors in management of risks
Research conducted in private can't be examined by outside auditors or receive
external feedback. As a result, it can contain more errors than more open sources.
Contrariwise, open source data might be of poor quality because more unskilled or stupid
people have the opportunity to edit or contribute to it 29. When disasters or catastrophes are
kept secret, as may have been the case with early Soviet space missions, we lose valuable
feedback that might be used to prevent future disasters.
42. Excessive intellectual criticism or skepticism
Safety is often threatened by improbable coincidences or circumstances. Therefore, a
long tail of strange ideas can be useful. Before narrowing down to a small range of failure scenarios, it is helpful to brainstorm as many ideas as possible, including the weirdest. The problem, however, is that it is much easier to criticize something than it is to come up with a solid risk scenario, which may cause analysts to dismiss crazy-sounding ideas prematurely.
43. The false belief that it is possible to prove safety conclusively
The Titanic was declared safe and called unsinkable. We all know how that turned out. There is no such thing as something perfectly proven to be safe. There is
always some wild combination of circumstances that will cause something to break. A black
hole may fly into the solar system and completely swallow the Earth, destroying everything
on the planet, for instance. You can never be completely sure what will happen. The only
way the safety of something can be proven to a high degree is observing many instances
of it in action over a long period of time. For instance, aircraft are generally highly reliable,
since the chance of dying in a plane crash is far less than the chance of dying in a car ride.
The Space Shuttle was not very reliable: of the five orbiters that flew in space, two were destroyed in accidents (Challenger during ascent in 1986, Columbia during reentry in 2003), a 40 percent loss rate per vehicle, or about 1.5 percent of its 135 missions. Keep in mind that the Space Shuttle was highly over-engineered and ran millions of lines of code which had to be rigorously verified, but two of the orbiters were still lost to catastrophic failures.
44. Underestimate of the human factor
Somewhere between 50 and 80 percent of catastrophes occur because of errors by
operators, pilots or other people exercising direct administration of the system 30. Other
catastrophic human errors happen during maintenance service, preflight preparation or
design errors. Even a super-reliable system can be put into a critical condition by the right
sequence of commands. We should never underestimate the power of human stupidity and
error. If someone can break something by accident, he will. This applies even to the best
trained military officers.
45. The false idea that it is possible to create a faultless system
It is not possible to create a faultless system unless it is extremely simple. Any system
with any kind of complexity will be put together through the efforts of thousands of people,
people who will occasionally make mistakes. The wrong confluence of mistakes will
produce a catastrophic outcome. Even if the design seems perfect, a disaster may be
caused because the design is not followed to the letter, or followed to the letter but not in
the right spirit.
46. Statistics as a source of possible errors
In the nature of statistics there is the possibility of errors, distortions, and false interpretations. This may derive from sampling, unsuitable framing, different methods of
calculation or chosen formulae, rounding errors, interpretation of the received results, and
cognitive biases connected to the visual interpretation of numbers and charts.
47. Availability bias

Certain glitzy facts are more accessible to our minds than others. For instance, everybody knows where the first nuclear bomb was used in warfare: Hiroshima, Japan. This attack killed about 120,000 people. But do you know where the Spanish flu of 1918, which killed as many as a hundred million people, originated? (According to one prominent analysis, it was on March 8th, 1918, in Haskell County, Kansas.)
which people looking into global risks would be well-advised to be at least superficially
familiar with31.
48. Analysis of global risks and making futurist forecasts are not identical
A futurist forecast often contains concrete data about a time and place (though it
would be more accurate if it were a probability distribution over times and places). Yet, it is
extremely rare that such specific predictions are on-point. Moreover, futurist forecasts and
the analysis of global risks demand different attitudes. In making futurist forecasts,
prognosticators often make pie-in-the-sky predictions, throwing their hat into the ring to see
if they get lucky or even famous for correct predictions. Analysis of global risk, however,
requires more caution and care than this approach.
49. Hindsight bias
After something important happens, it feels like we knew it would all along. This is
hindsight bias, another cognitive bias with an extensive literature32. The phrase "I knew it all along," when the speaker actually didn't know, exemplifies this way of thinking. Unless there is a
written record or recording with the exact text of a prediction, it is difficult to verify how
correct it is or how likely it was to be made by luck. Concerning global risks, by their very
nature we cannot have hindsight knowledge of them, therefore we are stuck with having to
predict things in advance the very first time. We may overconfidently predict global risks by
analogizing them to simpler risks for which we have the benefit of hindsight.
50. False positives
The dollar weakening by several percent may be inappropriately taken as a sign that
the dollar is imminently going to crash, and some number of pundits may point to it as an
indicator of such, though ultimately it does not occur. This would be an example of false

positive indicators of risk, which can undermine belief in the possibility of accurately
forecasting and preparing for genuine disasters.
51. An overly simplistic explanation is the most prominent
Popular science writers often simplify complex scientific issues. The quality standard
for articles in Popular Science, for instance, is guaranteed to be inferior to that of the best
scientific journals. Understanding a domain of risk, for instance nanotechnological arms
races, may require years of analysis. If a version of this risk does reach the public (such as
the grey goo scenario) it may be a highly simplified and fantastic form of the true risk, and
motivate improper preparations 33. Another variant is that a minor mishap occurs and many
years of analysis are needed to find its exact cause, delaying efforts to deal with future
dangers from the same source. In certain conditions, such a backlog of knowledge
acquisition could lead to a critical failure.
52. Misuse of apocalyptic scenarios to draw attention and financing to projects
This is essentially a variant on the Scaramella scenario described earlier, but applied
to projects. Obviously, using apocalyptic scenarios as a foil may draw attention and
financing to certain projects. Nearly every project which is genuinely, authentically, and
truthfully trying to mitigate some kind of global catastrophic risk has fallen under this
accusation. It is important to realize that there are people who are really motivated by
lowering global risk, however, and there are probably easier ways of striking it rich than
using an apocalypse scare. This applies doubly so for those of a scientific bent, who will be
surrounded by secular intellectuals who are innately very skeptical of apocalypse claims
because they pattern-match to religious apocalypse claims.
53. Aspiration of people to establish a certain risk level acceptable to them
Everybody has different intuitions of what constitutes an acceptable risk. Someone
may choose a safer car so that they can drive it a bit more dangerously. Some people may
consider walking in a bad neighborhood at night an acceptable risk, others may dare it
once in their life. Consider a sport or activity which has a risk of 1 in 100,000 of causing
death for each time that it is practiced, say, skydiving. For most adventure-seeking human
beings, this is considered an acceptable risk. When it comes to global risk, however, a 1 in
100,000 chance of doom per event would be unacceptable. If the event were a daily
occurrence, over the course of a hundred years, the probability of extinction would
approach 30 percent. That would be an unacceptable level of risk. So, our natural intuitions
about acceptable risk may be too carefree when it comes to mitigating global risks.
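A quick back-of-the-envelope check of that figure, as a minimal Python sketch (the daily-event framing and numbers simply restate the example above):

    # Cumulative chance of catastrophe when a 1-in-100,000 risk is run once a day
    # for a hundred years.
    p_per_event = 1e-5
    events = 365 * 100
    p_doom = 1 - (1 - p_per_event) ** events
    print(round(p_doom, 3))  # ~0.306, i.e. roughly 30 percent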
54. Overconfidence of the young professional
Talented young men and women who are professionals in a certain area, especially a
dangerous area like racecar driving or base jumping, eventually get to a level of skill where
they may acquire a false sense of invulnerability and become overconfident. Due to this
very human factor of overconfidence, they run into catastrophes. This model can be used
to consider mankind as a whole in relation to super-technologies such as nuclear,
biological, and nanotechnologies. We may get so drunk on our own technological mastery
that we fail to take basic safeguards and annihilate ourselves through carelessness.
55. Sensation of invulnerability through survival
The natural overconfidence of the young professional is aggravated by an observation selection effect: soldiers who survive at war for a long stretch without being wounded begin to feel invulnerable and take increasingly risky maneuvers34. The same could occur with civilization: the longer we go without nuclear war, the more complacent we become about it, and the more brinksmanship we will be willing to engage in because it hasn't happened yet. This is distinguished from the prior point in that the salient factor is length of time of survival rather than skill level.
56. Dunning-Kruger effect: overconfidence in one's professional skills
According to the Dunning-Kruger effect, which has been extensively studied, those
who are less skilled in a certain area are more likely to judge that they are competent 35.
Dunning and Kruger proposed that incompetent people will:
- tend to overestimate their own level of skill;
- fail to recognize genuine skill in others;
- fail to recognize the extremity of their inadequacy;
- recognize and acknowledge their own previous lack of skill, if they are exposed to training for that skill.
Various studies have confirmed all these hypotheses. Global risks span many spheres of knowledge, from biology to astrophysics to psychology to public policy, so in trying to create an adequate picture of the situation any expert will be compelled to venture outside the limits of his knowledge. Since it is pleasant to feel knowledgeable, people may be tempted to exaggerate their own abilities. They may become overconfident and stop consulting experts about vital issues.
The stereotype of a savior of the world, a single hero who is capable of anything without
effort, may possess them. This effect may discourage other researchers from participating
or even create a hole in our knowledge of global risk if that one researcher subsequently
turns out to have been systematically wrong. One example which comes to mind is the
Japanese composer Mamoru Samuragochi, who was known as the "Beethoven of Japan" due to his deafness and prodigious classical compositions. However, in early 2014 it was
revealed that he had a ghostwriter for all his compositions and was in fact not deaf 36. This
caused quite a stir when many planned performances of his music were suddenly
canceled.
57. The error connected with concentrating on prevention of small catastrophes
instead of prevention of the greatest possible catastrophe
In Yellowstone Park, wildfires were prevented effectively for many years. This
prevention was so effective that it allowed a buildup of dry woody material, which
culminated in a trio of catastrophic blazes in the summer of 1988, which required 9,000
firefighters and $120 million ($240 million as of 2014) to contain 37. Yudkowsky (2008) has a
similar example related to flooding:
Burton, Kates, and White (1978) report that when dams and levees are built,
they reduce the frequency of floods, and thus apparently create a false sense of
security, leading to reduced precautions. While building dams decreases the
frequency of floods, damage per flood is so much greater afterward that the average
yearly damage increases.
Another example is the weakening of the average immune system today due to
insufficient exposure to pathogens. Our sanitized lives are devoid of pathogens, which
could make us highly vulnerable to genetically engineered viruses of the 21st century.
Analogously, American Indians were highly susceptible to European pathogens, which
were incubated in the filthy gutters and alleyways of overcrowded European cities. Some
Atlantic coast tribes lost 90% of their adult members to disease shortly after the arrival of
the Europeans.38
58. Weariness of researchers
The enthusiasm of people moves in waves. Someone who sends out a certain
warning or bulletin may grow weary of sending out the same message after a few years.
This may cause others to think that the risk has passed, although it persists. This is what
has happened with regard to the risk of nanotechnological arms race since the Center for
Responsible Nanotechnology (CRN) lapsed into inactivity around the year 2010. The
researchers who work on global risk topics may never receive gratitude or see any
immediate benefits of their work. Only in the movies does the savior of the world get the
gratitude of mankind. Recall that Churchill lost reelection right after the war despite
fervently believing that he deserved reelection. To avoid the effect of burnout, during WWI
the American fleet had a constant rotation of their personnel, with one group at war, the
other ashore. Furthermore, people rarely become heroes for successful preventive
measures. There was a major wave of research into global catastrophic risks between the
years 2001 and 2010, but activity seriously slowed down from 2011 onwards.
59. Fear of loss of the social status by researchers
Social status is a basic good that people in society seek. In our society there are a
number of themes of interest which are perceived to be the symptom of a certain kind of
inferiority. People who are interested in these areas are automatically perceived as second-rate, mad, clowns, or marginal (and could be squeezed out of the social niches where they
reside). Other researchers will avoid contact with such people and avoid reading their
papers. When such people lose status, they also lose the ability to inform the thoughts of
officials in power. The study of global catastrophic risk sometimes falls into a category of
status-lowering activity, but not always. Sir Martin Rees, Astronomer Royal and former President of the Royal Society (2005-2010), has recently taken a serious interest in global risk, and due to his extremely eminent stature has suffered only a minor drop in
status, if any. Yet, there is still not enough of a critical mass of researchers in the area of
global catastrophic risks to give it robust credibility.
60. The quantity of the attention which society can give to risks is limited
Of course, society has limited attention to give to global risks. It may focus on a few
risks at the expense of others, depriving lesser-known risk mitigation efforts of crucial
resources they need to deal with threats. Also, there will be many people who are calmed
by taking simple actions such as filling up their car with ethanol instead of gasoline. This
action then satisfies them and makes them consider it unnecessary to pursue more
comprehensive or serious risk mitigation. They might think "my contribution doesn't really matter, I'm just one person." Of course, if everyone believes this, as most people do, that
pretty much guarantees that nothing useful will be done. This is the bystander effect 39.
61. Neglect of economic risks
Expressions such as "money is only pieces of paper" or "bank accounts are only bits in computers" reflect the widespread opinion that the economy is not as important as, say, war or natural disasters. However, the economy is the material
embodiment of the structure of most human activity. To understand the role of the
economy, it is important to note that the Great Depression of 1929 arguably caused more
personal misery for the United States than World War II. It's also important to look at the
crash of the USSR and the resulting economic troubles, as billions of dollars worth of state
assets were seized by monopolists for pennies on the dollar. This crisis occurred because
of structural-economic reasons, not any external threat. The same factor can occur in large
extinctions; the large sauropods were in decline prior to the arrival of the fatal asteroid, as a
result of complex ecological factors and changing patterns of competition between
species40.
All disasters have an economic cost. Even small terrorist attacks can have economic
effects hundreds of times larger than the initial damage itself. The September 11th terrorist
attacks did at least $100 billion in damage to the American economy 41, or closer to $6
trillion if you view the Afghanistan and Iraq wars as direct responses to that one terrorist
action, which they certainly appear to have been 42. A purely economic action such as a
decrease in interest rates by the Fed might lead to a tremendous amount of economic
damage by causing a bubble in the real estate market. In 2001, a mere seven letters laced with anthrax were capable of causing $320 million in cleanup costs alone 43. On a dollar-per-pound basis, that is quite a sum.
Even small failures can lead to huge damage and a loss of economic stability, and an economic crash would make the whole system less steady and more vulnerable to even larger catastrophes. It could create positive feedback, that is, a self-amplifying catastrophic process. As economic globalization proceeds, the possibility of a global systemic crisis continues to increase. It is difficult for some to believe that many of the world's most powerful nations could collapse because some large banks go bankrupt, but it is a definite possibility44.
62. The errors connected with overestimating, underestimating, or
failing to appreciate the moral condition of a society and its elite
One popular account of the decline of the Roman empire is moral decadence and
cultural decay45. This may have been a degradation of its elite, insofar as governors and
leaders of all levels operated exclusively in pursuit of their personal short-term interests,
that is, foolishly and selfishly. Generally speaking, people who pursue the long-term
interests of a society unselfishly do more to help it flourish. A term from economics to
describe this is time preference: a high time preference means a desire for immediate consumption, while a low time preference refers to saving and planning for the future 46. Civilization is fundamentally based on low time preference 47. Another metaphor compares moral spirit, for example in armies, with the ability of molecules in a substance to arrange themselves into a uniform crystal (a theme explored in detail by Lev Tolstoy in War and Peace). If the crystal is broken, society collapses into feudalism and localism.
Today, both bad and good can be done by coordinated groups operating over long time scales, or subtle processes of decay or growth over the same. Two large societies may conflict and mutually influence one another, such as the Occident and the Orient, or NATO vs. Russia. Ideals such as the tension between authoritarianism and disorganization, rules and liberty, democracy and tradition, may expose large rifts that cause mass terrorism or even civil war. Even a moral paragon might unleash a powerful weapon or process by mistake, while an immoral man may be impotent by virtue of the fact that he is always drunk, or occupied with petty theft, and never becomes a genuine risk to the survival of humanity, never getting hold of powerful technologies.
63. Popularity bias
This is similar to availability bias, but the concept has no literature, and is original to
this work. The easier an idea is to popularize or transform into bright propaganda, the more
attention it will receive. It is easier to advertise a threat from global warming than from Artificial Intelligence, because the latter is difficult to portray and less visceral. People have
to be involved in the process of spreading an idea among the masses, and that leads to an
identification with the idea, and the aspiration to make it easier and more accessible. This
also means that complex risks will tend to get watered down or excessively simplified as
they become better known, which can cause sophisticated people to dismiss them
unnecessarily, since the nuances of the concept are not spread to a mass audience. So,
the drive for popularity and its results has many complex impacts on the spread of an idea.
64. Propensity of people to offer "simple" and "obvious" solutions in difficult situations, without having thought them through
We all know this happens. It is followed by persistence: defending the decision through argument and resisting consideration of other options. H.L. Mencken said, "For every complex problem there is an answer that is clear, simple, and wrong." Yudkowsky writes in detail about the interval between the moment a question appears and the moment a human being makes a definitive choice in favor of an answer: this interval is where any real thinking happens, and it may be quite short, even a few seconds. Norman R.F. Maier wrote, "Do not propose any solutions until the problem has been discussed as thoroughly as possible without suggesting any." It is
psychologically difficult for someone to change their mind once they have proposed a
solution and begun to take a liking to it, partially because in every human society, spending
too much time considering solutions is seen as weakness. Once someone is seen as
advocating a solution publicly, it becomes a subject of dispute that they get emotionally
attached to, which represents them, and they feel the need to defend it, either consciously
or subconsciously.
65. Error connected with incorrect correlation of power and safety
Emotionally, there may be a tendency to regard strong technologies as good and weaker technologies as bad, perhaps because on a daily basis the strongest technologies we are exposed to are generally beneficial. However, the stronger a tool, the more capable it is of influencing the world, and (usually) the more destructive it can theoretically be. Seemingly safe technologies, such as air travel, can be turned to destructive ends, as with bombers. The destructive variants of many of the common technologies we use are hidden on military bases or in other places. An analysis based on incomplete information about a technology is inclined to interpret it emotionally or whimsically, softening the perceived risk.
66. Premature investments
If a large quantity of funds and efforts are put towards a project prematurely, such as
electric cars or Artificial Intelligence, and it does not bear fruit, it can put a field on ice for a
matter of decades, even after it becomes economically feasible. If people were informed in
1900 that nuclear weapons would be developed in 1941, and would threaten the safety of
the world by the 1950s, they would likely spend tens or hundreds of millions building bomb
shelters, trying to develop quick flying machines, anti-aircraft batteries, or aerospace
technology to ameliorate the anticipated strategic risk. In all likelihood, this would cause
expenditure fatigue, so that by the time nuclear weapons were actually developed, there
would be a reluctance to invest in dealing with them that there would not have otherwise
been.
Humans, and humanity, do not have the greatest attention span or planning capability,
even when the world hangs in the balance. We need quick results and gratification to move
forward on projects. According to some recently released information, in the 80s, the USSR
got wind of an unmanned aerial vehicle, or drone, built by the United States, and spent a
great deal of money and military research trying to come up with their own version, to no
avail48. As a result of that program, by the time drones actually became cheap and reliable, in the 2000s, Russian military leaders were already exhausted with the idea, and lagged behind in their development accordingly. Timing can make a crucial psychological and economic difference, determining success or failure for a given technology or safeguard.
67. Planning fallacy and optimism bias
Yudkowsky writes49:
When asked for their most probable case, people tend to envision everything
going exactly as planned, with no unexpected delays or unforeseen catastrophes:
the same vision as their best case. Reality, it turns out, usually delivers results
somewhat worse than the worst case.
In large projects, "cost overruns and benefit shortfalls of 50 percent are common; cost overruns above 100 percent are not uncommon" 50. The same goes for the time it takes to write books, complete papers, and so on. Our optimism tends to treat the best case as the most probable, possibly because it is the easiest to imagine. But, as Murphy's law goes, if something can go wrong, it will, and when many things in a row go wrong, the worst-case scenario turns out to be even worse than anyone imagined it could be.
68. Bystander effect
Previously mentioned, the bystander effect refers to the fact that people are less likely
to do anything if they think others will do it. A man lying on the ground in a sorry state is
less likely to be helped by a crowd than if someone came upon him while walking through
the woods. We have a tendency to avoid personal responsibility for events if possible, and
if we are not specifically called out, will avoid contributing. This condition arises
subconsciously, as simply as a reflex. Global risks conjure up the ultimate bystander effect,
as they affect the whole planet, yet so few do anything about them. Nick Bostrom points
out that there are more academic research papers published on Star Trek or the
reproductive habits of dung beetles than there are on global catastrophic risks.
69. Need for closure
People have a need for closure. As soon as there is a disturbing open question, we
have the desire to immediately find a solution for it and put it behind us. We prefer a fast
decision whose correctness is uncertain to a long and grueling search for a complete
solution which may appear to be endless. Although we do not have infinite time to come up
with answers, it is advisable that we think well before coming to any conclusions, probably
a bit longer and more thoroughly than we would prefer to.
70. Influence of authority and the social pressure of a group
The famous Milgram experiment showed what evil everyday people can do when they are ordered to. In the experiment, a "learner" in another room (actually a confederate connected to no real current whatsoever) was supposedly hooked up to an electric current, while the participant had access to a dial which allowed him to control the flow of electricity to the victim. About 65% of the participants increased the voltage all the way to the maximum of 450 volts, a potentially lethal level, when they were ordered to by a researcher, even when the victim begged them to stop. In this experiment, factors such as authority, the remoteness of the victim, and the influence of similar behavior by others in the same role all combined to lead people to take actions that were ostensibly horrible. The same factors apply when we estimate the risk connected with some future factor or technology. The potential victims, even if our future selves are among them, are far away from us in time and space; if a strong authority expresses favor toward the dangerous technology, and we are surrounded and influenced by a group of people inclined to go along, all of these factors will strongly influence our choice.
71. Conflict between general research and applied research
Ray Kurzweil points to a phenomenon he calls "engineer's pessimism": engineers
working on a difficult problem will overestimate its difficulty because they are so closely
immersed in details51. Similarly, in nanotechnology there is a split between theoretical
researchers focused on long-term goals, like Rob Freitas and Eric Drexler, and more
applied researchers, like Nadrian Seeman and the late Richard Smalley. This led to a bitter
rivalry between Drexler and Smalley until Smalley's death in 2005. Generalists sometimes
accuse engineers of focusing overmuch on what has already been done rather than the
space of what's possible, or not taking the long-term view, or considering developing new
basic capabilities, while engineers accuse the generalists of being overoptimistic or pie-inthe-sky. There is a grain of truth to both charges.
72. Mind projection fallacy
The mind projection fallacy is when we unconsciously attribute to subjects properties
which only exist in our representations of them. The concept originates with E.T. Jaynes, a
physicist and Bayesian philosopher with a highly sophisticated and nuanced understanding
of probability theory. He used the phrase to argue against the Copenhagen interpretation of
quantum mechanics52:
[I]n studying probability theory, it was vaguely troubling to see reference to "gaussian random variables", or "stochastic processes", or "stationary time series", or "disorder", as if the property of being gaussian, random, stochastic, stationary, or disorderly is a real property, like the property of possessing mass or length, existing in Nature. Indeed, some seek to develop statistical tests to determine the presence of these properties in their data...
Once one has grasped the idea, one sees the Mind Projection Fallacy everywhere; what we have been taught as deep wisdom, is stripped of its pretensions and seen to be instead a foolish non sequitur. The error occurs in two complementary forms, which we might indicate thus: (A) (My own imagination) → (Real property of Nature), [or] (B) (My own ignorance) → (Nature is indeterminate)
Yudkowsky (2008) uses the term to refer to the way people are prone to think about Artificial Intelligence. They take a disposition, say "nice," which may be their own or what they hope AI will be, and they project it onto every possible consideration of advanced, agent-like Artificial Intelligence they can come up with. They are projecting rather than considering the full expanse of possibilities. This is bound to occur when people are considering complex new technologies, particularly Artificial Intelligence. Another pernicious aspect of the mind projection fallacy concerns the property of ignorance: taking our own ignorance and projecting it onto an external object, as if the object were itself inherently mysterious. That is impossible, however. The mystery is a property of our mind, not of the object itself.
73. Confusion between objective and subjective threat
People who end up taking actions that are risky to the survival of the human species
may be fair, noble, beautiful people who are not personally malicious to us. They might just
not know what they are getting into, for instance developing advanced Artificial Intelligence
without adequate safeguards. This highlights the difference between objective and
subjective threat. If someone is playing a zero-sum game with you, say competing in a
business field, he may be your enemy or a threat to you objectively, but bear no personal
malice towards you or your friends. Conversely, someone with a personal grudge who is
clearly out to get you is your subjective enemy. With regard to global risks, it is important to
remember that those taking actions dangerous to humanity may bear no personal malice in
any way, may be competing casually, or attacking a third party nowhere near you, yet their
actions could threaten you in the long run regardless. This becomes even harder to grasp
when it comes to the economy. A Federal Reserve chairman who prints too much money
could cause the economy to collapse through runaway inflation and currency devaluation,
even if they are ostensibly doing it to improve the economy and many leading economists
support them.
74. Predictions or dreams of catastrophe, caused by envy
A vivid example of this is a phenomenon that occurred on Russian Internet forums in the early 90s. People who were offended by the disintegration of the USSR began to dream of a similar crash of the United States, poring over data and news stories to discover signs of this process. Even when the dreamed-for schadenfreude fails to pan out, such wishes can color interpretations of the data.
75. Fear of loss of identity
Systems can be resistant to change, because change can compromise core and
essential identity. This is one of the reasons for the struggles against globalization and
immigration. Someone can prefer death to identity loss. That is, he might prefer global
catastrophe to a transformation of the world in which he lives.
76. Clear catastrophe can be more attractive than an uncertain future
As bizarre as it may seem, global catastrophe is easier to imagine than an uncertain
future, and may be more intuitively acceptable for that reason. Uncertainty can cause fear
and intimidation, whereas certain doom offers a sort of closure.
77. Incorrect application of Occam's razor
Occam's razor is the scientific heuristic that entities must not be multiplied beyond
necessity. However, this is just a guideline, and is subjective. In the hands of a clumsy
operator, Occam's razor can simply be used to exclude ideas that are too complicated to understand but are nonetheless valid failure modes. As mentioned earlier, catastrophes tend to involve the confluence of improbable scenarios, which together breach a hole in safety mechanisms, so overlapping and combinatorial failure sequences ought to be considered. Occam's razor may be suitable when it comes to deriving naturalistic explanations for complex natural phenomena, but it is less useful when considering complex failure modes or scenarios of global risk.
78. The upper limit of possible catastrophe is
formed on the basis of past experience
Previously we mentioned the anecdote about how a river was dammed and the frequency of floods decreased, but their magnitude increased, leading to greater overall destruction. One factor was a false sense of security created by the dam, which caused
building closer to the river. This tends to be a general feature of dams and embankments in
that they create a false feeling of safety. The river and dam example is also useful for
considering the fact that our imagination of the upper limit of a possible catastrophe is
formed on the basis of our past experience. We do not account for once-in-a-hundred-year
events, because we haven't lived through them. Few large structures on the Hayward fault
in the San Francisco Bay Area are built to cope with a once-in-a-hundred-year earthquake,
though one will eventually occur sooner or later. We need to prepare for disasters which
are significantly larger than anything we have seen.
79. Underestimating the fragility of complex systems
A person can be quickly killed by a small incision if it punctures a vital organ. A tree
can be killed by removing a ring of bark, which prevents fluids from the roots from reaching
the leaves, a technique called girdling. Every complex system has a weak point,
sometimes several weak points. Our power grid is exactly the same way; an overload of
current in one area can fry transformers in a long series, potentially shutting down power in
large areas53. In a disaster situation this could lead to widespread looting, as during the
New York City Blackout of 1977. Many do not appreciate how many weak points our
complex society has.
There is an empirical generalization that the reliability of technological systems decreases in proportion to the fourth power of their energy density. This generalization (the exact exponent varies depending on different factors) can be derived by comparing the reliability of planes and rockets54. A similar empirical generalization holds for the statistics of deadly car crashes in relation to speed55. It is also worth observing that the installed power per worker in mankind's economy is constantly growing56.
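The following minimal sketch shows how such a rule of thumb could be applied; the function and numbers are hypothetical illustrations of the fourth-power claim above, not measured data:

    # Hypothetical illustration: reliability assumed to fall as the fourth power
    # of energy density, as the empirical generalization above suggests.
    def relative_failure_rate(density_ratio, baseline_rate):
        """Scale a baseline failure rate by (energy density ratio) ** 4."""
        return baseline_rate * density_ratio ** 4

    # A system with 10x the energy density of the baseline would, under this
    # rough rule, be expected to fail about 10,000 times more often.
    print(relative_failure_rate(10.0, 1e-6))  # -> ~0.01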
80. Ambiguity and polysemy of any statement as a source of possible error
From the point of view of the authors of the operating regulations for the Chernobyl nuclear reactor, the personnel had violated their requirements, whereas from the point of view of the personnel, they had operated precisely according to them 57. The regulations required operators to shut down the reactor; from the point of view of the authors this was to be done immediately, but from the point of view of the operators it was to be done gradually. Another example: a plane has both an automatic and a manual rescue system, and if both are engaged simultaneously they interfere with each other and lead to catastrophe (as in the nose-dive and crash of Aeroflot Flight 593 in Siberia in 1994). It is difficult to reach an unequivocal understanding of terms in cases where we have no experimental experience, as with global catastrophes.
81. Refusal to consider a certain scenario because of its "incredibility"
As mentioned before, the majority of catastrophes happen as a result of an improbable coincidence of circumstances. The destruction of the RMS Titanic was connected with an incredible combination of no less than 24 unfortunate and totally avoidable circumstances58.
82. Transition from deliberate deceit to self-deception
Conscious deceit for the purpose of gaining a certain benefit (in our context, the concealment of risks) can imperceptibly take the form of self-hypnosis. This sort of self-deception can be much more stable than illusion or inadvertent error. Another version of self-hypnosis is simple procrastination: telling yourself, "I will think of this tomorrow," but tomorrow never comes.
83. Overestimating one's own abilities in general and survival prospects in particular
Quoting Nick Bostrom's article on existential risks 59:
The empirical data on risk-estimation biases is ambiguous. It has been argued
that we suffer from various systematic biases when estimating our own prospects or
risks in general. Some data suggest that humans tend to overestimate their own
personal abilities and prospects. About three quarters of all motorists think they are
safer drivers than the typical driver. Bias seems to be present even among highly
educated people. According to one survey, almost half of all sociologists believed
that they would become one of the top ten in their field, and 94% of sociologists
thought they were better at their jobs than their average colleagues. It has also been
shown that depressives have a more accurate self-perception than normals except
regarding the hopelessness of their situation. Most people seem to think that they
themselves are less likely to fall victims to common risks than other people. It is
widely believed that the public tends to overestimate the probability of highly
publicized risks (such as plane crashes, murders, food poisonings etc.), and a
recent study shows the public overestimating a large range of commonplace health
risks to themselves. Another recent study, however, suggests that available data are
consistent with the assumption that the public rationally estimates risk (although with
a slight truncation bias due to cognitive costs of keeping in mind exact information).
84. Aspiration toward a wonderful future, masking the perception of risks
This phenomenon can be seen in revolutionaries. The experience of the French Revolution showed that revolution leads to civil war, dictatorship, and external wars; nonetheless, the Russian revolutionaries at the beginning of the 20th century took dangerous actions based on the same idealism. If someone is fanatical about achieving a certain goal,
they will ignore risks on the way to that goal, no matter how great they may be. In this
sense, many modern transhumanists and technologists are exactly the same, in that they
see a glorious future, and will do anything to reach for it, ignoring risks in the process. They
do not realistically anticipate new weapons, and new applications of those weapons. They
just barge on ahead.
85. Filters between information reception and management
Any complex, breakable system will have safeguards and detection apparatus to listen for signs of trouble. The value of information is defined by its novelty and by the ability of the whole system to react to it. There are several filters in the way between the reception of possible danger signs and the management with the authority to act on them.
The first filter is what the monitoring system is designed to detect. This will be based
on past events, and may not pick up sudden or unanticipated changes. The second filter is
psychological, which consists of an aversion by technicians and management to
information owing to its strategic novelty or ambiguity. The third filter has to do with
limitations inherent in any hierarchical system; an officer that receives certain information
may lack sufficient power to officially recognize the urgency of the situation or compel a
superior to do anything about it. The fourth filter has to do with the connection
between the warning signal and what superiors are psychologically capable of recognizing
as danger; it must bear sufficient similarity to past signals or training on danger signals to
be recognized as worth taking action on.
86. Curiosity can be stronger than fear of death
Any information on global risks is useful. For example, if we run a certain dangerous
experiment and survive, we learn that that kind of experiment is safe. This game has a limit
though, where we eventually run an experiment that blows up in our faces. People risk their
lives for the sake of knowledge or experiences. The experiments of Alfred Nobel (the inventor of dynamite) accidentally caused the death of five workshop assistants, including his brother Emil. Various scientists studying diseases have deliberately infected themselves
with the pathogen of study to observe its effects firsthand. In 1993 in Russia, Boris Yeltsin
put down a coup attempt where his political enemies had occupied the Parliament building,
and some innocents who had crowded around the building out of curiosity were shot. We
can be certain that there will be some people, with the power to destroy the world in their
hands, who will be extremely curious about what would happen if they unleashed this
power. People will agree to run dangerous experiments for the sake of curiosity.
87. Systematic regulatory failure
A global catastrophe, or any technogenic failure, may not be a result of any one fatal
error, but the culmination of ten insignificant errors. Due to limitations of space and knowledge, it is necessary to avoid overloading regulations with too many instructions on trifles. Yet small things (a smoldering cigarette butt, an open tank porthole, an improper start-up procedure, a restart returning a system to default settings) can all set the stage for a disaster. In disasters, there is rarely one main thing that causes critical failure,
but a series of things. Running a system "by the book" may not help, because the book
may not have an entry for the mistake you have made.
88. Scapegoating
It is nearly always easier to make someone take the fall and move on after a disaster
than go through the trouble of forming a commission of inquiry and figuring out what
actually happened. Even after a commission of inquiry has been formed and come to
conclusions, open questions may remain. Depending on how corrupt or busy the administrators of the system or country in question are, they may avoid seeking out the real sources of the problem, setting the stage for it to happen again, only worse next time.
89. Minimum perceived risk
Aside from a ceiling on maximum risk set by past experience, people also have a
floor of minimum perceived risk based on the minimum probabilities that a human being
can intuitively care about. According to cognitive psychology experiments, this probability is
about 0.01%, or one in 10,000 60. If an experiment or procedure has a one in 10,000 chance
of going awry, even if the consequences would be catastrophic, there is a strong tendency
to ignore it. This can be a problem if the risk is a daily event, imperceptible on a day-by-day
basis, but over a period of years or decades, a disaster is all but assured.
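A minimal sketch of how a risk sitting right at that intuitive floor still compounds over time (illustrative numbers only):

    # A 1-in-10,000 chance of disaster per day, ignored because it sits at the
    # threshold of intuitive concern, accumulated over years and decades.
    p_per_day = 1e-4
    for years in (1, 30, 100):
        p_total = 1 - (1 - p_per_day) ** (365 * years)
        print(years, round(p_total, 3))
    # prints roughly 0.036, 0.665 and 0.974 for 1, 30 and 100 years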
90. Influence of emotional reactions to catastrophe
It is known that emergencies and catastrophes provoke a certain sequence of psychological reactions, each of which compromises the objectivity of decision-making and action-taking. The book Psychogenesis in Extreme Conditions says that psychological reactions to catastrophe are subdivided into four phases: heroism, a honeymoon phase, disappointment, and recovery61. In addition, there is often a phase of panic or paralysis
during the first moments of a catastrophe, which can precede the heroism phase. Each of
these stages creates its own kind of bias. (needs more citations)
91. Problems with selection of experts
On each separate question related to global risk (biotechnology, nanotechnology, AI, nuclear war, and so on) we are compelled to rely on the opinions of the most competent experts in those areas, so it is necessary for us to have effective methods of selecting which experts are most trustworthy. The first criterion is usually the quality and quantity of their publications: citation index, publication ranking, recommendations from other
scientists, web traffic from reputable sources, and so on.
Secondly, we can evaluate experts by their track record of predicting the future. An expert on technology who makes no future predictions at all, even qualified predictions only a year or so in advance, is probably not a real expert. If their predictions fail, they
may have a poor understanding of the subject area. For instance, nanotechnologists who
predicted in the 1990s that a molecular assembler would be built around 2016 have been
proven to be mistaken, and have to own up to that before they can be taken seriously.
A third strategy is simply not to trust any expert and to always recheck everyone's calculations, either from first principles or by comparison with other expert claims. Lastly, it is possible to select people based on their views pertaining to theories relevant to predicting the future of technology: whether they have an interest or belief in the technological Singularity, Hubbert's peak oil theory, a neoliberal model of the economy, or whatever. One might say an expert should not have any concrete beliefs, but this
is false. Anyone who has been thinking about the future of technology must eventually
make ideological commitments to certain patterns or systems, even if they are just their
own, or it shows that they have not thought about the future in detail. An expert who never
goes out on a limb in prediction is indistinguishable from a non-expert or random guesser.
92. Fault and responsibility as factors in the prevention of risks
It is possible to over-focus on the attribution of guilt in a catastrophe, thereby under-focusing on the systemic and accidental factors that led to the disaster or potential disaster
at hand. Furthermore, when it comes to global catastrophes, there may be no time to
punish the guilty, so their guilt is not as relevant as it would otherwise be. Still, this doesn't
mean that it isn't worthwhile to select competent, loyal, and predictable personnel to
manage dangerous systems, just that a military-style focus on guilt and responsibility may
not be as relevant to global risks as we are prone to thinking.
93. Underestimating inertia as a factor of stability
Besides general concerns about complexity, competence, safety-checking feedback
mechanisms, and hundreds of other factors that keep a system safe and steady, it is
possible to use Gott's formula (see chapter on indirect estimation of risks) to estimate the
future expected lifespan of a system, based on how long it has already existed. For
instance, there has not been a major meteor impact in at least 11,000 years, so it is not
likely that one will occur tomorrow. Or, the pyramids at Giza have been in place for more
than 4,500 years, meaning it is not likely they will topple or erode away in the next 100
years. Accordingly, this lets us take into account the inherent inertia of systems as a factor
of judging their stability. Even if a system seems to have certain fragility, if it has been
around for a long time, it could very well be more stable than we think. Correspondingly, if a
system is extremely new and untested, although we consider it foolproof, we cannot
confidently assume that it will not collapse at some point, since it has not built up a track
record of stability.
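A minimal sketch of this kind of estimate, using the delta-t form of Gott's argument (an illustrative implementation under the assumption that we observe the system at a random moment of its total lifetime):

    # Gott-style bounds on the remaining lifetime of a system, given only its age.
    def gott_interval(age, confidence=0.5):
        """Return (low, high) bounds on remaining lifetime at the given confidence."""
        low = age * (1 - confidence) / (1 + confidence)
        high = age * (1 + confidence) / (1 - confidence)
        return low, high

    # The pyramids at Giza are roughly 4,500 years old: with 50% confidence their
    # remaining lifetime lies between ~1,500 and ~13,500 years.
    print(gott_interval(4500))        # -> (1500.0, 13500.0)
    print(gott_interval(4500, 0.95))  # 95% confidence: ~115 to ~175,500 years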
94. Bias caused by differences in outlook
This error consists in underestimating or overestimating the validity of statements based on the outlook of the person who produces them. For instance, nearly all
discussions of global catastrophic risk in an academic setting have been undertaken by
people with a certain common scientific, cultural, and historical outlook which is so obvious
to us that it seems transparent and imperceptible. However, it is possible that a
representative of another culture and religion will make a relevant point, which, due to
biases in favor of our own outlook, we are oblivious to.
95. Search terms
A scientific concept may have different systems of terminology associated with it,
such that an internet or journal search for a given term misses important results associated
with other terms that the searcher did not know about. Furthermore, there may be no links
or mutual citations whatsoever between these fields. They may be in direct competition and
therefore as silent as possible about one another. Thus, care should be taken to ensure
that comprehensive surveys of an academic field on global risk are truly comprehensive.
96. Errors connected with the conscious and unconscious unwillingness of
people to recognize the fault and scale of catastrophe
When a catastrophe is in the middle of happening, there may be denial about its scale, leading to further problems. At Chernobyl, the organizer of the reactor test, Anatoly Diatlov, believed that it was not the reactor but a cooling tank that had exploded, and he continued to issue commands to a nonexistent reactor. This kind of overoptimism can
also operate forward in time, compelling us not to make adequate preparations for a
probable catastrophe, or denying its scale.
97. Egocentrism
Mundane egocentrism may cause people to overestimate the influence they can have
over a certain situation, indirectly making them powerless to influence it. Or, egocentric
cowardice might cause someone to attempt to escape in the middle of a disaster, saving
his own skin, instead of taking necessary actions which may save millions of people. In
general, egocentrism is one of the most prevalent human biases, and will affect every
decision and action we take. One way of avoiding such effects, which may not be malicious
in any way, is to take what economist Robin Hanson calls the "outside view," that is, the view of a detached and objective observer. Role-playing such an observer can help people or groups overcome the "inside view," which can have biasing effects.
98. Excessive focus on needing a villain
In the ancestral environment of mankind, the external threat we were exposed to
most frequently would be rival human tribes with malignant intent. We lacked knowledge of
how to combat disease aside from basic purity and domestic cleanliness intuitions. Our
instincts switch into high gear when there is a clear enemy threatening us. Less so when
the risk is subtle or complex, as may often be the case with global risks, where there may
not even be a nameable enemy. Even if there is an enemy, the risk may derive from a particular technology used by many people simultaneously all over the globe, making it hard to point the finger, or from some systemic effect that does not involve deliberate human intent.
99. Dependence of reaction on the rate of change
This one is the inverse of #2, excessive attention to slow processes and underestimation of fast processes: here it is the slow processes that escape notice. A frog may be boiled to death in a pot, since if the heat
is only turned up a little bit at a time, it does not notice. The same effect must have
happened with the deforestation of Easter Island, which involved a slow reduction of trees,
over a long enough number of generations, that there was never a point of alarm until it
was too late.

References

1. Marc Alpert, Howard Raiffa. "A progress report on the training of probability assessors." In Daniel Kahneman, Paul Slovic, Amos Tversky (eds.), Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press, 1982, pp. 294-305.
2. Ulrich Hoffrage. "Overconfidence." In Rüdiger Pohl (ed.), Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory. Psychology Press, 2004.
3. Dominic D. P. Johnson & James H. Fowler. "The Evolution of Overconfidence." Nature 477, 317-320, 15 September 2011.
4. Steve Rayhawk et al. "Changing the Frame of AI Futurism: From Storytelling to Heavy-Tailed, High-Dimensional Probability Distributions." Paper presented at the 7th European Conference on Computing and Philosophy (ECAP), Bellaterra, Spain, July 2-4, 2009.
5. Hamish A. Deery. "Hazard and Risk Perception among Young Novice Drivers." Journal of Safety Research, vol. 30, no. 4, Winter 1999, pp. 225-236.
6. Steven M. Albert, John Duffy. "Differences in risk aversion between young and older adults." Neuroscience and Neuroeconomics, vol. 2012:1, pp. 3-9, February 2012.
7. Mara Mather et al. "Risk preferences and aging: The certainty effect in older adults' decision making." Psychology and Aging, vol. 27, no. 4, Dec 2012, pp. 801-816.
8. Hank Pellissier. "Transhumanists: Who Are They? What Do They Want, Believe, and Predict? (Terasem Survey, Part 5)." Institute for Ethics and Emerging Technologies, Sep. 9, 2012. http://ieet.org/index.php/IEET/more/pellissier20120909
9. Myers, D.G.; Lamm, H. "The polarizing effect of group discussion." American Scientist 63(3): 297-303, 1975.
10. Isenberg, D.J. "Group Polarization: A Critical Review and Meta-Analysis." Journal of Personality and Social Psychology 50(6): 1141-1151, 1986.
11. Robin Hanson. "For Bayesian Wannabes, Are Disagreements Not About Information?" 2002. http://hanson.gmu.edu/disagree.pdf
12. Eliezer Yudkowsky. "Cognitive Biases Potentially Affecting Judgment of Global Risks." In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, pp. 91-119. Oxford University Press, New York, 2008.
13. Daniel Kahneman and Amos Tversky (eds.). Choices, Values, and Frames. Cambridge University Press, 2000.
14. Eliezer Yudkowsky. Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures. The Singularity Institute, San Francisco, CA, June 15, 2001.
15. Amos Tversky and Daniel Kahneman. "Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment." Psychological Review 90(4): 293-315, October 1983.
16. Frederick, Shane; Loewenstein, George; O'Donoghue, Ted. "Time Discounting and Time Preference: A Critical Review." Journal of Economic Literature 40(2): 351-401, 2002.
17. Martin Rees. Our Final Hour: A Scientist's Warning: How Terror, Error, and Environmental Disaster Threaten Humankind's Future in This Century - On Earth and Beyond. Basic Books, 2003.
18. Stephen Flower. A Hell of a Bomb: How the Bombs of Barnes Wallis Helped Win the Second World War. Tempus Publishers Limited, 2002.
19. Napoleon's chill.
20. Noah Arroyo. "Potentially Earthquake-Unsafe Residential Buildings - a (Very Rough) List." Public Press, Jan 14, 2013. http://sfpublicpress.org/news/20131/potentially-earthquake-unsafe-residential-buildings-a-very-rough-list
21. Paul Slovic, Baruch Fischhoff, Sarah Lichtenstein. "Facts Versus Fears: Understanding Perceived Risk." In Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press, 1982, pp. 463-492.
22. Yudkowsky 2008.
23. James Parkin. Management Decisions for Engineers. Thomas Telford Ltd, 1996.
24. William M. Grove and Paul E. Meehl. "Comparative Efficiency of Informal (Subjective, Impressionistic) and Formal (Mechanical, Algorithmic) Prediction Procedures: The Clinical-Statistical Controversy." Psychology, Public Policy, and Law 2: 293-323, 1996.
25. Helene Cooper. "Air Force Fires 9 Officers in Scandal Over Cheating on Proficiency Tests." The New York Times, March 27, 2014.
26. Jan Lorenz et al. "How social influence can undermine the wisdom of crowd effect." Proceedings of the National Academy of Sciences of the United States of America, vol. 108, no. 22, May 31, 2011.
27. Eliezer Yudkowsky. "The Logical Fallacy of Generalization from Fictional Evidence." Less Wrong, October 16, 2007.
28. Stephen Omohundro. "The Basic AI Drives." Proceedings of the First AGI Conference, Volume 171, Frontiers in Artificial Intelligence and Applications, edited by P. Wang, B. Goertzel, and S. Franklin. IOS Press, February 2008.
29. Andrew Keen. The Cult of the Amateur: How Today's Internet Is Killing Our Culture. Currency, 2007.
30. Between 50 and 80 percent of catastrophes occur because of errors by operators, pilots or other people exercising direct administration of the system.
31. Amos Tversky, Daniel Kahneman. "Availability: A heuristic for judging frequency and probability." Cognitive Psychology 5(1): 207-233, 1973.
32. Roese, N. J., & Vohs, K. D. "Hindsight bias." Perspectives on Psychological Science 7: 411-426, 2012.
33. Chris Phoenix and Eric Drexler. "Safe Exponential Manufacturing." Nanotechnology 15 (August 2004): 869-872.
34. Soldiers start to feel invulnerability, and will take increasingly risky maneuvers.
35. Justin Kruger, David Dunning. "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments." Journal of Personality and Social Psychology 77(6): 1121-1134, 1999.
36. David Ng. "Mamoru Samuragochi, disgraced Japanese composer, apologizes for lies." Los Angeles Times, March 7, 2014.
37. National Park Service. "The Yellowstone Fires of 1988." 2008. http://www.nps.gov/yell/naturescience/upload/firesupplement.pdf
38. Some Atlantic coast tribes lost 90% of their adult members to disease shortly after the arrival of the Europeans.
39. Darley, J. M. & Latané, B. "Bystander intervention in emergencies: Diffusion of responsibility." Journal of Personality and Social Psychology 8: 377-383, 1968.
40. Paul M. Barrett and Paul Upchurch. "Sauropodomorph Diversity through Time." In The Sauropods: Evolution and Paleobiology, edited by Kristina Curry Rogers and Jeffrey A. Wilson, pp. 125-150 (double check). University of California Press, 2005.
41. Gail Makinen. "The Economic Effects of 9/11: A Retrospective Assessment." Congressional Research Service, p. CRS-4, September 27, 2002.
42. Sabir Shah. "US Wars in Afghanistan, Iraq to Cost $6 trillion." Global Research, September 20, 2013. (Is there a better source for this?)
43. Ketra Schmitt and Nicholas A. Zacchia. "Total Decontamination Cost of the Anthrax Letter Attacks." Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science, vol. 10, no. 1, 2012.
44. It is difficult for some to believe that many of the world's most powerful nations would collapse because some large banks go bankrupt, but it is a definite possibility.
45. One popular account of the decline of the Roman empire is moral decadence and cultural decay.
46. A term from economics to describe this is time preference: a high time preference means a desire for immediate consumption; a low time preference refers to saving and planning for the future.
47. Civilization is fundamentally based on low time preference.
48. According to some recently released information, in the 80s the USSR got wind of an unmanned aerial vehicle, or drone, built by the United States, and spent a great deal of money and military research trying to come up with their own version, to no avail.
49. Yudkowsky 2008.
50. Bent Flyvbjerg, Massimo Garbuio, and Dan Lovallo. "Delusion and Deception in Large Infrastructure Projects: Two Models for Explaining and Preventing Executive Disaster." California Management Review, vol. 51, no. 2, Winter 2009, pp. 170-193.
51. Ray Kurzweil. The Singularity is Near: When Humans Transcend Biology. Viking, 2005.
52. E.T. Jaynes. Probability Theory: The Logic of Science. Cambridge University Press, 2003.
53. Congressional EMP Commission. Report of the Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack. April 2008.
54. This empirical generalization (exact value varies depending on different factors) can be derived by comparing the reliability of planes and rockets.
55. A similar empirical generalization holds for statistics of deadly car crashes in relation to speed.
56. It is necessary to observe that the installed power per worker of mankind is constantly growing.
57. From the point of view of the authors of the operating regulations for the Chernobyl nuclear reactor, personnel had broken their requirements, whereas from the point of view of the personnel, they operated precisely according to them.
58. The destruction of the RMS Titanic was connected with an incredible combination of no less than 24 unfortunate and totally avoidable circumstances.
59. Nick Bostrom. "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards." Journal of Evolution and Technology, Vol. 9, No. 1, 2002.
60. People also have a floor of minimum perceived risk based on the minimum probabilities that a human being can intuitively care about. According to cognitive psychology experiments, this probability is about 0.01%, or one in 10,000.
61. The book Psychogenesis in Extreme Conditions says that psychological reactions to catastrophe are subdivided into four phases: heroism, a honeymoon phase, disappointment, and recovery.

Chapter 4. Universal logical errors that can appear in reasoning about global risks
1. Confusion between probability as a measure of the variability of an object and degree of confidence as a measure of our information about it
The first applies to stochastic processes, such as radioactive decay; the second to facts that are merely unknown to us, such as guessing a hidden card. Global risks, however, concern phenomena about which we are forced to make probabilistic judgments about processes that are both stochastic and unknown to us. Here we begin to speak about the degree of confidence we have in this or that probability. In this case the probability and the degree of confidence are multiplied.
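A minimal sketch of what this multiplication amounts to in practice, with purely hypothetical numbers:

    # Combine the degree of confidence in each model of a process with the
    # probability of catastrophe that each model itself assigns.
    models = [
        (0.7, 0.001),  # (confidence that this model is right, P(catastrophe | model))
        (0.2, 0.01),
        (0.1, 0.1),
    ]
    p_catastrophe = sum(confidence * p for confidence, p in models)
    print(p_catastrophe)  # ~0.0127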
2. Substituting the analysis of goals for the analysis of capabilities
For example, reasoning of the kind "terrorists will never want to use biological weapons, because doing so would also strike at what they themselves care about." The structure of an agent's goals can be very complex, or it may simply contain errors.
3. Incorrect use of inductive logic of the following kind: if something has not occurred for a long time, it will not occur for an equally long time
This statement holds only if we have observed something once at a random moment of time; in that case the probability is described by Gott's formula, which gives a 50 percent chance that the event will end within an interval from T/3 to 3T, where T is the age of the object at the moment of its random detection. However, if we observe a certain process for a very long time, it evidently draws nearer to its end. For example, if we pick a random person, he will most likely be middle-aged; but if we pick a random person and then observe him for a very long time, we will inevitably end up with a very old man who could die at any moment. (See my article "Natural catastrophes and Anthropic principle" for more details.)
4. Thinking driven by the desire to prove something
Depending on what a person wishes to prove, he will select one set of arguments or another, often unconsciously. Another name for this pattern is "rationalization": the selection of pseudo-rational arguments to support an initially irrational statement.
5. The logical error of trying to derive what ought to be done from a description of the facts alone
If the first and second premises of an inference contain only facts, then the conclusion can contain only facts. Any reasoning about goals must rest on certain notions about values, which are set axiomatically. But this means such goals are to some extent arbitrary, and different researchers of global risks may understand them differently, which can lead to different definitions of catastrophe and different ideas of what would count as a way out of it. Moreover, any system of axioms allows unprovable statements to be formulated (as Gödel showed in his incompleteness theorem), and with respect to obligations this is easy to see: almost any system of basic values readily generates internal contradictions, which is the core content of many literary works in which the hero must choose between, say, love for his family and love for his homeland (the so-called existential choice). It is not known whether a consistent system of values is possible at all, what it would look like, or whether it would be applicable in practice. Nevertheless, work on a consistent system of values is important, since it will eventually need to be built into future computers possessing artificial intelligence.
6. Errors connected with replacing the analysis of risks with the analysis of the commercial motives of those who speak about them
One can argue as follows: if a person investigates risks for free, he is an unemployed crank; if he wishes to be paid for it, he is parasitizing on public fears; and if it is part of his official duties, he cannot be trusted because he is an agent of the state brainwashing the population. From this it is clear that there is no direct connection between money and the analysis of risks, although in some cases such a connection is possible. Explanation through simplification is called reductionism, and it allows one to "explain" anything.


7. Reliance on so-called authoritative knowledge
Authoritative knowledge was the basic source of information about the world in the Middle Ages, when truth was sought in the works of Aristotle; only later was the empirical method invented. References to the opinions of great people should not be sufficient grounds for declaring something safe. Only regularly repeated calculations can indicate that.
8. Wrong application of the idea that a theory should be considered true only once it is proved
If we regard the scientific method as a way of obtaining the most reliable knowledge, this methodology is correct. From the point of view of safety, however, the opposite approach is needed: an assumption should be considered dangerous until it is refuted. For example, a new model of airplane is considered dangerous until theoretical calculations and test flights in all modes have proved it safe; the same principle underlies clinical trials of new medicines. It is far from clear how to apply the principle of falsification to theories about particular global catastrophes.
9. Perceiving new information through the prism of the old
In the course of perception a person takes only part of the information from the external world and completes the rest from memory, expectations, and associations. Alas, the same is true of texts, including texts on global risks. Reading the reviews of the same text by different people, it is easy to see that they have understood it completely differently. This is hardly because some readers were substantially cleverer than others; more likely, they applied different perceptual filters. Moreover, once a person has adopted a certain point of view, he subscribes to those publications and chooses those articles which confirm it. An illusion is thus created that the statistics of data confirming his point of view keep growing, which further strengthens both his filter and his confidence in those data.
10. An error in the choice of a neutral position
Each person eventually understands that he is not quite objective and that his point of view has some bias. To compensate for this deviation, he may choose some supposedly neutral source of information. The error is that people holding opposite views will choose different "neutral" points, each of which will be closer to the position of the one who has chosen it. We described a similar error above when discussing experiments in which subjects were warned about a possible bias and made a correction for it, and nevertheless still underestimated it. Apparently the correction should have been applied not only to the key parameter but to the correction itself.
11. Confidence as a source of errors
The more a person doubts his point of view, the more often he changes it under the influence of new facts, and the greater his chances of arriving at more reliable knowledge. If a person is too sure of his opinion, it is difficult for him to change it; if he is too changeable, he does not approach the truth but goes in circles.
12. Using completely erroneous logic
Alas, a situation is possible in which a person makes mistakes in every line of his reasoning. In that case he could not find the errors even if he wanted to. It may be either one regularly repeated error, or such a density of different errors that faultless reasoning becomes impossible. Even now I do not know for certain whether I am making some systematic logical error at this moment. This may happen more often than we think: analysis of scientific texts has shown that people usually use abbreviated inferences and heuristic shortcuts without realizing it.
13. Mixing up pre-science and pseudo-science
While a hypothesis is being formulated, it has not yet acquired the full scientific apparatus; it is rather the product of a brainstorm on a certain theme, possibly carried out collectively through an exchange of opinions in print. At this stage it is pre-science, but it aims at becoming part of science, that is, at passing the corresponding selection and being accepted or rejected. Pseudo-science, by contrast, can imitate all the attributes of science - titles, references, a mathematical apparatus - yet its purpose is not the search for reliable knowledge but the appearance of reliability. All statements about global risks are hypotheses which we can almost never test. Nevertheless we should not reject them at the early stages of maturation. In other words, the brainstorming phase and the phase of critical elimination should not be mixed up, although both should be present.


14. The error connected with the wrong definition of the status of universals
The problem of the reality of universals (that is, generalizations) was central to medieval philosophy, and it consisted in the question of which objects actually exist. Are there, for example, birds in general, or are there only individual birds, while all species, genera, and families of birds are no more than a conventional invention of the human mind? One possible answer is that our ability to distinguish birds from non-birds exists objectively; moreover, each bird possesses this property too, and owing to this universals exist objectively. In reasoning about risks the ambiguity of universals creeps in as follows: the properties of one object are transferred to a certain class, as if that class were itself an object. Then reasoning appears such as "America wants..." or "it is characteristic of Russians...", whereas behind these concepts there is not a single object but a set whose exact definition depends on the observer. Any discussion of politics is poisoned by this shift. When arguing about artificial intelligence it is easy to make the same mistake, since it is unclear whether we are speaking about one device or about a class of objects.
15. Statements about the possibility of something and about its impossibility are not symmetrical
A statement of impossibility is much stronger: it concerns the whole set of potential objects, whereas for a statement of possibility a single example is enough. Therefore statements of impossibility are false much more often. By assuming some event or coincidence of circumstances to be impossible, we damage our own safety. Under certain circumstances anything is possible. Moreover, any discussion of future catastrophes is always a discussion of possibilities. As Artuhov said: "I am a very skeptical man, and if someone tells me something is impossible, I ask him to prove it."
16. "Obviousness" as a source of errors
A correct conclusion always rests on two premises, two true judgments. However, analysis of texts shows that people very seldom use the full form of inference; instead they use an abbreviated form in which only one premise is stated explicitly and the other is implied by default. What is left unsaid is usually the "obvious": judgments that seem so true and undoubted that there is no need to voice them. Often they are so obvious that they are not even consciously registered. Clearly this state of affairs is the cause of numerous errors, because obviousness is not necessarily truth, and what is obvious to one person is not obvious to another.


17. Underestimating one's own error rate
Like any person, I am inclined to err. This is connected both with the basic unreliability of the human brain, stemming from the probabilistic nature of its work, and with the incompleteness of my knowledge of the world and of my error-elimination skills. I can know nothing with 100 percent certainty, because the reliability of my brain is not 100 percent. I could test that reliability by solving a series of logic problems of average complexity and then counting the number of errors. Usually, however, this is not done, and one's own error rate is estimated intuitively. Likewise, a person usually does not measure the characteristic error rate of his judgments about the future, although this could be done experimentally: for example, by writing down a forecast of public life for one or five years and then comparing it with what actually happened.
18. The error connected with the notion that each event has one cause
In fact:
There are completely random events.
Each event has many causes (the glass fell because it was put on the edge, because it is made of glass, because gravity is strong, because the floor was hard, because the cat was disobedient, and because it had to happen sooner or later).
Each cause has its own causes, so we get a diverging tree of causes. The human mind is incapable of grasping this whole tree and is compelled to simplify.
The concept of a "cause" is nevertheless necessary in society, because it is connected with guilt, punishment, and free will; here "cause" means the freely made decision of a person to commit a crime, and there is no need to spell out how many non-obvious points are involved. (The basic question: Who is guilty?)
It is also needed in engineering design, where it is important to find the cause of a catastrophe, that is, something that can be eliminated so that failures of that kind do not happen again. (The basic question: What is to be done?)
The concept of a cause is least applicable to the analysis of complex unique phenomena, such as human behavior and history; witness the mass of confused discussions about the causes of various historical events. For this reason, reasoning of the kind "the cause of the global catastrophe will be X" is, to put it mildly, imperfect.


19. The necessity of choosing on the basis of belief
If the head of state receives several mutually contradictory conclusions about safety, he makes a choice between them simply by trusting one of them, for reasons not connected with logic. Here one can again recall the term "existential choice", when a person must make a choice in a non-formalizable situation, for example between love and duty.
20. The effect of the first and the last book read
The order in which information is received influences its assessment, with the first and last sources carrying special weight. This is one form of the bias connected with the availability of information.
21. Exaggerating the role of computer modeling
The two most elaborated kinds of models are those for meteorology and for nuclear explosions. Both are built on a huge body of factual material, taking into account hundreds of tests that corrected the forecasts, and both have regularly produced errors. Even the most exact model remains a model. Therefore we cannot rely heavily on computer modeling of unique events, and global catastrophe is exactly such an event.
22. Proof by analogy as a source of possible errors
The point is not only that there can be no analogies to a unique event that has never happened - an irreversible global catastrophe - but also that we do not know how to draw such analogies. In any case an analogy can only illustrate. It is probably useful to accept analogies when they point to the reality of a certain threat, but not when they are used to argue for safety.
23. The error of extrapolating an exponential probability function by means of a linear one
If we treat the probability of destruction of civilization as a smooth process (which is, of course, not strictly correct), it can be likened to radioactive decay, which is described by an exponential function. For example, if the probability of destruction of civilization during the twenty-first century is 50 percent, as Sir Martin Rees assumes in his book Our Final Hour, then after 200 years the chance of civilization's survival will be 25 percent, and after one thousand years only about 0.1 percent, assuming the same tendency persists. From this it is clear that it is wrong to conclude that if the chance of survival over a millennium is 0.1 percent, then over one century it will be only ten times greater, that is, 1 percent. The same error in a less obvious form arises if we need to convert the same 50 percent survival over 100 years into an annual probability of destruction. Linear approximation would give 0.5 percent per year. However, the exact value, calculated from the survival formula P(t) = (1/2)^(t/t0) with t0 = 100 years, is 1 - 2^(-1/100), which is approximately 0.7 percent per year, about 1.4 times higher than the intuitive linear approximation suggests.
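The arithmetic above can be checked with a few lines of Python; this is only an illustrative sketch assuming, as in Rees's example, a 50 percent chance of destruction per century.

```python
# Worked check of the exponential-vs-linear point above (illustrative only).
# If the chance of surviving any 100-year period is 50%, survival follows
# P(t) = 0.5 ** (t / 100), and the annual probability of destruction is
# 1 - 0.5 ** (1/100), not the "linear" 0.5% per year.

HALF_LIFE = 100.0  # years over which survival probability halves (Rees's 50% per century)

def survival(t, half_life=HALF_LIFE):
    return 0.5 ** (t / half_life)

annual_risk_exact = 1 - survival(1)   # ~0.0069, i.e. about 0.7% per year
annual_risk_linear = 0.5 / 100        # naive linear guess: 0.5% per year

print(f"exact annual risk:    {annual_risk_exact:.4%}")
print(f"linear approximation: {annual_risk_linear:.4%}")
print(f"survival over 200 years:  {survival(200):.1%}")    # 25%
print(f"survival over 1000 years: {survival(1000):.2%}")   # ~0.10%
```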


24. The St. Petersburg paradox
This paradox is directly relevant to global catastrophes, since it shows that infinitely large damage from extremely rare events carries greater weight than all other events combined, although psychologically people are not ready to accept this. G. G. Malinetsky describes the paradox as follows in the book Risk. Sustainable Development. Synergetics: "Consider the following game. A coin is tossed until heads comes up for the first time. If n tosses were required, the prize is 2^n units. That is, prizes of 2, 4, 8, ..., 2^n occur with probabilities 1/2, 1/4, 1/8, ..., 1/2^n. The expected prize in this game is infinite: the sum over n from 1 to infinity of (1/2^n) * 2^n = 1 + 1 + 1 + ... diverges. The question is how much a person is willing to pay for the right to enter such a game. The paradox is that the majority of people are willing to pay for this right no more than 100, and sometimes only 20, units."
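A brief simulation, given here only as an illustrative sketch (the function names are mine), shows how the sample average in this game stays modest even though the theoretical expectation diverges.

```python
# A small simulation of the St. Petersburg game described by Malinetsky.
# The theoretical expectation is infinite, yet any finite sample of games
# produces a modest average, which is part of why intuition undervalues
# rare, enormous payoffs. Illustrative sketch only.
import random

def play_once():
    """Toss a fair coin until heads; the prize is 2**n for n tosses."""
    n = 1
    while random.random() < 0.5:  # tails with probability 1/2
        n += 1
    return 2 ** n

def average_prize(games):
    return sum(play_once() for _ in range(games)) / games

if __name__ == "__main__":
    random.seed(0)
    # The sample mean grows slowly (roughly with the logarithm of the sample
    # size) and never settles, reflecting the divergent expected value.
    for games in (1_000, 10_000, 100_000):
        print(games, average_prize(games))
```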
25. The distinction between danger and risk
Risks are created by decisions we take, dangers by circumstances. Since the main source of risk of global catastrophe is new technologies, it is decisions about their development and application that determine the risk. However, if technologies develop spontaneously and without conscious control, they become similar to natural dangers.
26. The error of treating an event whose probability cannot be computed as if that probability were zero
The precautionary principle, taken literally, would instead demand that we attribute to such events a probability of 100 percent. But that would lead to absurd conclusions in the spirit of: the probability that aliens land tomorrow is unknown, therefore we should prepare for it as if it were certain. In such cases one can use indirect ways of estimating the probability, for example Gott's formula.
27. Forgetting that the safety of a system is determined by its weakest link
If a house has three parallel doors, one of which is locked with three locks, the second with two, and the third with only one, then the house is effectively locked with one lock. Reinforcing the two strongest doors changes nothing.
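A toy calculation, offered only as an illustrative sketch with an arbitrary assumed per-lock probability, makes the point numerically: the chance of a break-in is dominated by the least protected door.

```python
# Illustrative sketch of the "weakest link" point: with parallel entry paths,
# the probability of a break-in is governed by the easiest door, so adding
# locks to the strongest door barely changes overall security.
# The per-lock probability below is an arbitrary illustrative assumption.

P_PICK_ONE_LOCK = 0.3   # assumed chance an intruder defeats a single lock

def door_breach_probability(locks):
    # All locks on a door must be defeated (assumed independent).
    return P_PICK_ONE_LOCK ** locks

def house_breach_probability(doors):
    # The intruder only needs one door; compute 1 - P(all doors hold).
    p_all_hold = 1.0
    for locks in doors:
        p_all_hold *= 1 - door_breach_probability(locks)
    return 1 - p_all_hold

print(house_breach_probability([3, 2, 1]))   # ~0.38, dominated by the 1-lock door
print(house_breach_probability([9, 9, 1]))   # still ~0.30: hardening strong doors is nearly useless
print(house_breach_probability([3, 2, 2]))   # ~0.19: strengthening the weakest link helps
```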
28. Rejecting hypotheses without considering them
To reject a hypothesis, one must first consider it. But this sequence is often broken: people refuse to consider certain improbable assumptions because they have already rejected them. A hypothesis can be reliably rejected only after careful consideration, and for that it must be taken seriously at least for some time.
29. Non-computability
Many processes that are essentially important for us are so complex that predicting them is impossible: they are incomputable. Incomputability can have different causes.
It can be connected with the incomprehensibility of a process (for example, the Technological Singularity, or the way Fermat's theorem is incomprehensible to a dog), that is, with a basic qualitative limitation of the human brain. (Such is our situation with predicting the behavior of a superintelligence in the form of AI.)
It can be connected with quantum processes, which make only probabilistic prediction possible, that is, with indeterministic systems (weather forecasts, the brain).
It can be connected with the supercomplexity of systems, in which each new factor completely changes our picture of the final outcome. This concerns models of global warming, nuclear winter, the global economy, and resource exhaustion. These four fields of knowledge are united by the fact that each describes a unique event that has never occurred in history, that is, an advancing model.
Incomputability can also arise because the required volume of calculation, although finite, is so great that no conceivable computer could perform it within the lifetime of the universe (this kind of incomputability is used in cryptography). This kind of incomputability can show itself in the form of a deterministic chaotic system.

Incomputability is also connected with the fact that, although the correct theory may be known to us (along with many others), we cannot know which theory is correct. That is, a theory, besides being correct, should be easily demonstrable to everyone, and this is not the same thing in conditions where experimental verification is impossible. One way to test theories is through so-called prediction markets, where the price of oil, for example, reflects the aggregate confidence in the Peak Oil theory. However, besides the theory itself, the market price is influenced by many other factors: speculation, emotions, or the non-market nature of the object. (It is senseless to insure against global catastrophe, since there would be nobody left to pay out, and in this sense its insurance price can be said to be zero.)
One more kind of incomputability is connected with the possibility of self-fulfilling or self-refuting forecasts, which make a system fundamentally unstable and unpredictable.
There is also incomputability connected with the self-sampling assumption - see Nick Bostrom's book about it. The essence of this assumption is that in some situations I should consider myself a random representative of some set of people. For example, considering myself an ordinary person, I can conclude that I had a 1 in 12 chance of being born in September, or, say, a 1 in 1000 chance of being born a dwarf. This sometimes allows predictions about the future: namely, if there are 100 billionaires in Russia, my chance of becoming a billionaire is about 1 in 1.5 million, assuming this proportion holds. Incomputability arises when I try to apply the assumption about my own position to my own knowledge. For example, if I know that only 10 percent of futurologists give correct predictions, I should conclude that with 90 percent probability any of my predictions are wrong. Most people do not notice this because, owing to overconfidence and inflated self-assessment, they regard themselves not as ordinary members of the set but as its "elite", possessing a heightened ability to predict. This shows itself especially in gambling and playing the market, where people do not follow the obvious thought: the majority of people lose at roulette, hence I will most likely lose.

A similar form of incomputability is connected with the informational neutrality of the market. (What follows is a considerable simplification of market theory and of the problem of the informational value of its indicators. A more detailed treatment does not remove the problem but only complicates it, creating one more level of incomputability, namely the impossibility for an ordinary person to master the whole body of knowledge connected with the theory of prediction, as well as the uncertainty over which theory of prediction is true; on the informational value of the market see the so-called no-trade theorem.) The ideal market is in a balance in which half the players believe that a commodity will rise in price and half believe it will fall. In other words, to win in a zero-sum game against the majority of people, one must simply be cleverer or better informed than they are. However, the majority of people are, by definition, not cleverer than everybody else, though psychological bias keeps them from realizing it. For example, the price of oil sits at a level that gives no obvious confirmation either to the assumption of an inevitable crisis connected with oil exhaustion or to the assumption of limitless oil reserves. As a result, a rational player receives no information about which scenario to prepare for. The same applies to disputes: if a certain person has chosen to argue a point of view opposite to yours, and you know nothing about his intelligence, erudition, and sources of information (and you agree that you are an ordinary person, not a special one), nor about any objective rating, then the chances are 50/50 that he is right rather than you. Since it is extremely difficult to measure one's own intelligence and awareness objectively, because of the desire to overestimate them, one should assume they lie in the middle of the spectrum.
Since modern society has mechanisms that transform any future parameters into market indices (for example, trade in emission quotas under the Kyoto protocol, betting on elections or wars, weather futures, and so on), this brings an additional element of fundamental unpredictability into all kinds of activity. Because of such trading we cannot learn for certain whether there will be global warming or oil exhaustion, or how real the threat of bird flu is.

One more reason for incomputability is secrecy. If we try to take this secrecy into account through various conspiracy theories, in the spirit of Simmons's book Twilight in the Desert about the overestimation of Saudi oil reserves, we get a diverging space of interpretations. (That is, unlike the usual case where accuracy grows with the number of measurements, here each new fact only widens the split between opposite interpretations.) No person on Earth possesses the full body of classified information, since different organizations have different secrets.

Market mechanisms also encourage people to lie about the quality of their products and about the prospects of their firms in order to obtain more profit during their tenure. A clear example is the consequences of the so-called managerial revolution, when managers replaced owners in the boards of firms in the 1970s. As a result, they became more interested in short-term profit over the duration of their employment, without attending to risks to the company beyond that horizon.

The psychological aspect of this problem is that people reason as if no incomputability existed. In other words, one can find opinions and arguments about the future in which its fundamental and many-sided unpredictability is not taken into account at all, nor is the limitation of the human ability to reason reliably.

Chapter 5. Specific errors arising in discussions about the danger of uncontrolled development of Artificial Intelligence

Artificial Intelligence (AI) is an area of global risk particularly fraught with biases.
Thinking about Artificial Intelligence evidently pushes certain buttons in the human mind
which lead to exceptionally poor reasoning, especially among those new to considering AI
risk, but also among those actually working in the field. In this chapter we attempt to outline
some of the most common errors.


1. Belief in predetermined outcomes or quick fixes


Experts in Artificial Intelligence often state why they think advanced AI will be safe,
but give mutually exclusive answers that cannot all possibly be true. Some of these experts
must be wrong, it's just a question of who. In 2008, one of the authors of this volume,
Alexei Turchin, ran an Internet survey among developers of Artificial Intelligence on the
theme of guaranteed safety in AI, and received the following answers, with an
approximately equivalent number of people expressing each view. Various AI experts or
commentators who hold the stated views are cited, although few of them participated in the
survey. Many of the citations are not from academic sources, though the views below are
common among academics. AI is safe, because...

1. Because true AI is impossible 1. (Roger Penrose: algorithmic computers are doomed to subservience.)
2. Because AI can only solve narrow problems, such as image recognition 1. (Penrose: "Mathematical truth is not something that we can ascertain merely by use of an algorithm.")
3. Because the Three Laws of Robotics (or an updated variant) will solve any AI safety issues 2. (David Woods and Robin Murphy.)
4. Because I know how to make AI safe 3. (Mark Waser.)
5. Because AI will possess superhuman wisdom by definition, which will cause it to be benevolent or otherwise non-harmful 4.
6. Because AI will not need anything from humans, allowing us to peacefully co-exist with them 5. (In the cited article, the editor or writer misinterprets the quoted experts by titling the article AI uprising: humans will be outsourced, not obliterated.)
7. Because AI will be trapped in computers, and if anything goes wrong, it will be possible to simply pull the plug 6.
8. Because AI cannot have free will.
9. AI will be dangerous, but we'll manage somehow.
10. AI is dangerous, and we are all doomed.
11. AI will destroy mankind, and we should welcome it, as AI is a higher stage of evolution 7.


All of the above cannot be true simultaneously. Some of them, indeed most of them, must be mistaken.


2. The idea that a faultless system can be created by repeatedly checking the code
Each round of checking itself introduces some number of new errors, so at a certain level the number of errors stabilizes. The same is true of goal-definition systems such as bodies of law. We should not count on being able to create an error-free code of conduct for AI.
3. Errors in the critique of Artificial Intelligence by Roger Penrose
In his book The Emperor's New Mind, physicist and philosopher Roger Penrose asserts that strong AI is not possible through algorithmic design methods, because the human brain relies on non-computable quantum processes which are necessary for creative thinking and consciousness. On this basis it is sometimes asserted by others (but not Penrose himself) that dangerous AI is impossible or very distant in the future. However, this conclusion is flawed for the following reasons:
1. Penrose himself admits that strong (i.e., conscious) AI is possible. According to a review of his book by Robin Hanson 8, Penrose grants that we may be able to artificially construct conscious intelligence, and such objects could succeed in actually superseding human beings. For Penrose, the key ingredient is consciousness, which he considers to be non-algorithmic. But he does not rule out non-algorithmic approaches to consciousness in intelligent machines. So the notion that Penrose is predicting that strong AI will never happen does not hold water. It is merely the standard paradigm of computer scientists that Penrose is criticizing, not the feasibility of AI in general.

2. Penrose has argued that the neurons in the brain work in some way based on the quantum decoherence of particles in neurons, but physicist Max Tegmark showed that the relevant timescales of dynamical activity in neuron firings are roughly ten billion times longer than quantum decoherence events, making a connection between the two extremely unlikely 9. Penrose's ideas on quantum involvement in consciousness have been met with near-universal dismissal among experts in philosophy, computer science, physics, and robotics 10.

3. Artificial Intelligence can be dangerous without possessing phenomenal consciousness of the type that Penrose emphasizes. In an overview of AI-related risks, Luke Muehlhauser and Anna Salamon write 11, a machine need not be conscious to intelligently reshape the world according to its preferences, as demonstrated by goal-directed narrow AI programs such as the leading chess-playing programs. In a paper on nanotechnology and national security, Mark Gubrud writes 12, By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be "conscious" or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle.
4. The notion that Asimov's Three Laws of Robotics address AI risk
This was mentioned in the first bullet point for this chapter, but is worth devoting its
own section to, since the misconception is so universal. Asimov's laws of robotics are the
following:
1. A robot may not injure a human being or, through inaction, allow a human being to
come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders
would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Many different authors and experts in Artificial Intelligence who have thought seriously
about the problem of AI safety have written at length regarding why Asimov's Laws are
insufficient. There are three key points: first, Asimov's laws are too simple; second, they are negative injunctions rather than positive injunctions, and therefore woefully underspecified; third, language is inherently ambiguous and the definition of harm is not exhaustive.
For instance, consider that humans could be best prevented from coming to harm by sealing every human being in a diamond bubble and hooking them up to virtual reality, a la The Matrix. Asimov's laws greatly underestimate the power of AIs to reshape the world in the long term. An Artificial Intelligence that can self-replicate and mine resources to mass-produce robotics could quickly come to dominate the planet. So, a positive agenda for AI is necessary to set the course of its future, not merely a list of what AI cannot do.
The notion of obeying humans is ambiguous. What if orders from humans conflict?
Do thoughts count as orders? An AI could eventually gain the capability to read the
thoughts of humans directly with non-invasive scanning or other monitoring techniques, do
we want an AI carrying out the most fleeting wishes of every human being on Earth, even if
they contradict each other from one moment to the next?
All words are ambiguous. Even if Asimov's laws were to serve as a template, or
inspiration for a set of benevolent AI motivations, they would have to be fleshed out in
absolutely exhaustive detail, and many important theoretical questions would need to be
solved before they could be implemented. Speaking as if the problem is solved by the
existence of Asimov's laws is analogous to drawing a picture of a rocket with a crayon and
saying you've designed a spaceship that can go to Mars. A few sentences are not sufficient
for an AI theorist to develop a well-grounded goal system.
Asimov's laws of robotics are so simplistic that they are a tautology. An AI will be safe
because it will not allow humans to come to harm. Well, naturally. How do we define harm
in terms of computer code, in a way that remains somewhat consistent over time as the
AI's knowledge base and even fundamental ontology changes? It turns out that these
problems are so complicated that we might as well throw out Asimov's laws altogether and
start from scratch, beginning with the question of what positive drives, not just forbidden actions, should direct advanced AI.

5. Erroneous idea that software progress is absent


It is easier to measure progress in computing hardware than software, making it
easier to claim that software progress is slow or stalling. Katja Grace, visiting fellow for the
Machine Intelligence Research Institute, describes algorithmic progress in six domains in
her paper of that title 13. In the abstract, Grace writes, gains from algorithmic progress have
been roughly fifty to one hundred percent as large as those from hardware progress.
Considering that hardware cost effectiveness doubles roughly every 18 months (Moore's
law), this is a fairly quick pace.

The six areas that Grace focused on in her paper were the Boolean satisfiability problem, game playing (chess and Go), physics simulations, operations research, and machine learning.
Though Grace demonstrated software gains in each domain similar to the hardware
gains, the clearest area of improvement was in physics simulations, specifically in
simulations of magnetic confinement fusion. The effective speed of simulations of microturbulence and global magnetohydrodynamics (the motion of a fluid under a magnetic field)
clearly improves exponentially, even when hardware improvements are factored out.
Features such as improved electron models and linear solvers are cited as major
contributors to the software speedup.
In linear optimization for operations research, there are similar signs of major speedups due to improvements in software. Grace cites Martin Grotschel, who claims an average 2-times speedup for a production-planning problem for every year between 1988 and 2003. Robert Bixby claims a 3,300-times software-derived speedup in a linear programming problem during an unspecified period leading up to 2004. The hardware-derived improvement in the same period was about 1,600-fold, which corresponds to improvement over the course of about 15 years. Since hardware performance doubles about every 18 months, we can calculate a software performance doubling time of roughly 16 months for this specific problem (a short calculation sketch follows at the end of this section).
None of this means that there aren't many domains in which software progress is
stagnant, or that areas such as natural language processing aren't susceptible to
diminishing returns. However, it appears that software performance doubling is a fairly
common trend across a variety of fields and it seems likely that these doublings are having
a direct impact on various subproblems of Artificial Intelligence.
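The back-of-the-envelope conversion from speedup factors to doubling times can be written out as a short Python sketch; the 18-month hardware doubling assumption and Bixby's 1,600x and 3,300x figures are the ones cited above, while the function names are mine.

```python
# Reproducing the rough arithmetic above: converting overall speedup factors
# into doubling times, assuming hardware doubles every 18 months (Moore's law).
# The 3,300x software and 1,600x hardware figures are Bixby's, as cited in
# the text; the calculation itself is a back-of-the-envelope sketch.
import math

HARDWARE_DOUBLING_MONTHS = 18

def period_from_hardware_speedup(speedup):
    """Infer the elapsed time (months) from a hardware speedup factor."""
    return math.log2(speedup) * HARDWARE_DOUBLING_MONTHS

def doubling_time(speedup, period_months):
    """Doubling time (months) implied by a speedup over a given period."""
    return period_months / math.log2(speedup)

period = period_from_hardware_speedup(1_600)       # ~192 months, i.e. ~16 years
software_doubling = doubling_time(3_300, period)   # ~16 months

print(f"implied period: {period / 12:.1f} years")
print(f"software doubling time: {software_doubling:.1f} months")
```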


6. The erroneous idea that advanced AI is considered uninteresting or intractable and no one is focusing on it
To address this point requires making a distinction between narrow AI and general
AI. Narrow AI is artificial intelligence designed to focus on certain specific problems, like
driving a car or playing chess. General AI is artificial intelligence designed to work as
general problem-solvers. Naturally, the latter is much more difficult than the former. This
doesn't mean that effort isn't being placed towards general AI. Google and many smaller
organizations have stated publicly that they are working on it.
Google bought the artificial intelligence company DeepMind, of which an investor
said 14, If anyone builds something remotely resembling artificial general intelligence, this will be the team. Google's research director Peter Norvig said less than 50 percent but certainly more than 5 percent of the world's leading experts on machine learning work at Google, and this statement was made prior to Google's acquisition of DeepMind. Ray Kurzweil, probably the world's leading public advocate of general artificial intelligence, was hired by Google to work on new projects involving machine learning and language processing in December 2012, to great fanfare 15.
Singularity Summit, a conference closely associated with the community around general
artificial intelligence, and has also participated at the Conference on Artificial General
Intelligence in Mountain View.
Another major company working on general AI is Grok, which changed its name
from Numenta in 2013. Grok's website states that its mission is to be a catalyst for
machine intelligence, and to accelerate the creation of machines that learn. Grok is led
by Jeff Hawkins, the founder of Palm.
A spinoff company from Grok is Vicarious, with similar goals and technology. Their
website states, we're building software that thinks and learns like a human. Vicarious was
founded by Dileep George, a protege of Hawkins, and has raised more than $40 million
from big names like Mark Zuckerberg, Peter Thiel, and Elon Musk. Instead of using the
Deep Learning approach common to many software development companies, Vicarious' AI
design is directly inspired by the mammalian brain and the hierarchical structure of
temporal memory.
Though there have always been AI winters, that is, periods of diminished activity, the field arguably emerged from any hiatus in the late 2000s. Since then, the advances

have been quick in coming, and many of the most interesting companies have been bought
by Google. Much work remains to be done, but progress is steady.
7. Anthropomorphism
The stumbling block of anthropomorphism, that is, human-shaped thinking, has been emphasized by many writers on AI risk 16. Having evolved psychologies adapted to interacting with other humans, we naturally imagine other agents as human-like. An Artificial Intelligence, however, may have instincts and drives nothing like ours: no jealousy, no primal confidence, no love, no hate, no boredom, no social self-consciousness; close to nothing that we associate with agent-ness. An AI might have no inner subjective life whatsoever. Our intuitions about what a highly intelligent agent would do or ought to do are plausibly profoundly mistaken when it comes to AI.
What can we be confident that an advanced AI would do, if not act in human-like
ways? Certain extremely basic things; pursue goals (since goal-driven behavior is the
foundation of cognition), preserve its own survival to an extent (or it would not be very
effective at pursuing its goals), expand its resources (since resources are useful for
accomplishing nearly any goal), and implement efficiency (achieve more goals with fewer
resources). These are the basic AI drives outlined by Steve Omohundro in his paper of
the same name17. Since an AI could continue to build copies of itself or agents to do its
bidding, it need not experience satiation like human beings do. It could very well have an
infinite hunger, and convert all materials on the planet to robots to be its servants, or
computers to do its thinking. Any solid material can be used to build a computing device.
Since humans have solid parts, our material would also be useful in building computers. In
this fashion, an AI could threaten us even if it were not explicitly malevolent. Eliezer
Yudkowsky put it this way: The AI does not hate you, nor does it love you, but you are
made out of atoms which it can use for something else.
An AI would only have human social instincts if the computational structure of these
neural objects were fully understood and programmed in with painstaking detail.
Subsequently, these features would need to be retained over many rounds of the AI
engaging in self-improvement and self-modification. For all of them to be retained, or even
programmed in the first place, seems unlikely 16. An AI would not need the full suite of
human instincts to be economically useful, generally helpful, and non-harmful to humans.


It is tempting to attribute human-like features to AIs, especially in the domain of tribal warfare and resentment. However, AIs could be programmed to be willing slaves. There is no universal law that all possible agents must resent control; that is only a feature of evolved organisms, which exists for obvious adaptive reasons. Biologically evolved organisms have observer-centric goal systems, but minds in general, such as artificial intelligences, would not have them unless they are explicitly programmed that way.
8. Pulling the plug
An advanced AI will be able to access the Internet and port itself onto other machines,
so it will not be possible to pull the plug. Even today, pulling the plug rarely solves
anything.
9. The erroneous representation that, even having spread over the Internet, AI cannot influence the external world in any way
An advanced Artificial Intelligence could accomplish a great deal even if it lacked a
robotic or biological body. It could split itself into thousands of software agents, putting
them to work making money over the Internet simultaneously. It could pose as human
beings remotely and hire hitmen or mercenaries to eliminate its enemies. It could start
mass-producing robots to fulfill its goals. It could even begin researching nanotechnology
or other superior manufacturing methods to allow it to build robots and weapons much
stronger and faster than anything today. Just because an AI starts off within a computer
does not mean it does not pose a threat.
10. Erroneous representation that AI cannot have its own desires,
and therefore could never do harm to humans
An AI would need some goals to get started, even if they were only implicit rather than explicit. Goals mean physical action, and physical action can kill. An AI would not necessarily have common-sense morality like human beings do, and even if it could be programmed in, it is possible that some other goal could override it. For this reason, carefully programming a self-improving AI's goal system to be stable under self-modification is of utmost importance. We need to account for scenarios where AIs are able to massively self-replicate and overwhelm the planet, where the slightest error in programming could potentially cause our demise. Certainly we can anticipate self-correcting goal systems of a sort, but it is best to be as conservative as possible. There will come a point where Artificial Intelligence surpasses humankind as the most powerful intelligence on Earth, and when that happens, we want to be certain that AI will improve our lives.
11. Erroneous representation that AI will master space, leaving Earth to humans
The Earth is a planet rich in resources. An Artificial Intelligence looking to pursue its
goals will not just skip it, blast off into space, and leave Earth untouched. It would be far
more useful to first utilize all the Earth's resources, transform it into computers or
paperclips, then follow that up with the solar system and the galaxy. We will not be spared
from a human-indifferent AI just because it overlooks what is right under its nose. That is
wishful thinking.
12. Erroneous idea that greater intelligence inevitably
leads to non-threatening supergoal X
Intelligence is a tool which can be directed to the achievement of any purpose. Homo sapiens uses its intelligence to pursue general goals, the outlines of which are programmed into our brains from birth. This includes finding food and water, being
entertained, making friends, fighting off competitors, acquiring a mate, and so on. For the
sake of these goals, ships are launched, theorems are proven, and plots are hatched. The
presence of intelligence does not entail any unequivocal purpose. Different humans have
slightly different goals, and animals have more different goals still. Artificial Intelligence
would have an even higher possible level of goal variation, and might pursue goals
considered quite outlandish to us, such as converting all matter in the solar system into
paperclips, if that's how they were programmed.
The idea that a sufficiently wise AI will cooperate with everyone, or be kind to
everyone, or some other kind of optimistic trait, is simply wishful thinking and projection 16.
We want the AI to be cooperative, or we feel that wiser humans tend to be cooperative, so
we think advanced AI will be. We neglect our complex evolutionary history and limitations
which make cooperation ecologically rational for Homo sapiens, in the context of our innate
neural architectures. An AI would not need to cooperate to achieve its goals; it could simply
wipe out every human being on the planet and do exactly as it wants without any fuss.
Artificial Intelligence has a tendency to be a blank canvas for us to project our fears
and hopes onto. If we would appreciate a benevolent father figure, that's what we imagine
a superintelligent AI as. If we wish for a loving mother figure, we imagine that. If we desire
a passive background operating system for the solar system, that's what we envision. The thing is, the structure and goals of a superintelligence do not depend so much on what we envision or prefer, but more on what goals we program into an AI and how these modify or change as the AI becomes trillions of times larger and more powerful. Conceivably all of the outcomes outlined above are possible; it's just a question of what sort of programming the AI receives when it is just a small seed.
13. Underestimating the performance of computers
Most people who casually say that Artificial Intelligence is not likely to be built for 70 or more years generally cannot say what the expert estimates of the human brain's computing power are, nor how much computing power the best computers have today. Many of them do not even recognize that the brain itself is essentially a deterministic computer, with the neurons serving both as processors and memory units. Among the cognitive science community, this isn't even up for debate: the brain is a biological computer, full stop. The brain may not have a standard von Neumann architecture, but that does not
mean that it isn't a computer. A computer with enough computing power will certainly be
able to simulate the brain and display intelligence itself. This means that even if we never
figure out how intelligence works, we will eventually be able to create Artificial Intelligence
by copying the brain's structure into a dynamic computer program. A report from the
University of Oxford estimates that this will happen with a median estimate of 2080 18. That
sets a rough upper bound for the likely creation of Artificial Intelligence.
How can one purport to estimate when AI could be created without knowing such simple numbers as those mentioned above? It doesn't make any sense, but alleged experts in philosophy and futurism do so all the time. The first quantity, the computing power of the human brain, varies from roughly 10^14 operations per second, based on Moravec's estimate 19, to 10^19 operations per second, an upper bound estimated by a team at Oxford focusing on whole brain emulation 17. Put another way, it ranges from a hundred teraflops (a teraflop is a trillion floating-point operations per second) to 10,000 petaflops (a petaflop is a quadrillion floating-point operations per second). The world's most powerful solitary supercomputer as of this writing (May 2014) is the Tianhe-2 supercomputer in China, which has a peak performance of slightly less than 55 petaflops. So, according to some estimates, our best computers are already more than fast enough to run software with as much computing capacity as the human brain (a quick numerical comparison is sketched at the end of this section). Accordingly, most researchers in the field of artificial general

intelligence do not consider hardware the salient issue, but software. One obstacle is that
Moore's law (the periodic doubling of computer performance) seems to already be dead or
dying, so improvements in cost effectiveness of computers may not be forthcoming. If so,
this could be a major long-term barrier to the creation of Artificial Intelligence.
The issue of Moore's law slowing is ameliorated somewhat by the fact that few
researchers anticipate it would take the full computing power of the human brain to create
an Artificial Intelligence. It is likely that it will be many times less, due to software
optimization.
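For a rough sense of scale, the figures quoted above can be compared directly; this is only an order-of-magnitude sketch using the estimates cited in this section.

```python
# A quick comparison of the brain-capacity estimates quoted above with the
# Tianhe-2 figure, to show why some consider hardware already sufficient.
# Numbers are the ones cited in the text; this is only an order-of-magnitude check.

BRAIN_LOW = 1e14    # ops/s, Moravec-style estimate
BRAIN_HIGH = 1e19   # ops/s, upper bound from the whole-brain-emulation report
TIANHE_2 = 5.5e16   # flops, peak performance cited for May 2014

print(f"Tianhe-2 vs low brain estimate:  {TIANHE_2 / BRAIN_LOW:.0f}x faster")
print(f"Tianhe-2 vs high brain estimate: {TIANHE_2 / BRAIN_HIGH:.4f}x (i.e. roughly 180x too slow)")
```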
14. Underestimating how much we know about intelligence and the brain
A great many scientists, philosophers, and academics without firsthand knowledge of
cognitive science or detailed familiarity with some of its subfields tend to radically
underestimate just how much we know about the human brain and its function. Although
we do not have a wiring diagram of the brain, we understand its gross structure consists of
macrocolumns and microcolumns, divided into 52 functional areas. Google Scholar brings
up about 275,000 articles on neuroanatomy and 35,000 articles on functional
neuroanatomy. The MIT Encyclopedia of Cognitive Science (MITECS), a brief overview
reference work on the brain, is 1096 pages. We have brain-computer interfaces so precise
that we can measure the electrical activity in someone's visual cortex and use it to build a
fuzzy video of what that person is seeing20. We are so close to artificial telepathy that the
Pentagon is spending millions of dollars trying to seal the deal 21.
Many algorithmic details about speech, reading, attention, executive control, scene
scanning, happiness, pleasure, pain, sadness, motor processing, scent, and so on are
known. We don't know enough to simulate these functions in a computer, but we know enough that MIT scientists have created a wiring diagram of the cochlea, which has been ported into a software program used to process sounds 22. What details we do know about the brain,
especially sensory perception, are sufficient to constrain many aspects of our software if
we did want to emulate the human brain. It is still the early days of functional understanding
of the brain's details, but we know far more than nothing, and progress is accelerating
thanks to new tools and modern computers. Tools have been developed to control neurons directly with light pulses, a field known as optogenetics. This has been used to discover where exactly in the brain memories are stored 23, though we have yet to decode them.

Before claiming we know close to nothing about the brain, skeptics should spend
an hour or two looking through a large cognitive science textbook like MITECS, and see
what we do know. They will discover that it's quite a lot.
15. The erroneous representation that because humans can do X ,
AIs can never do X and consequently AI does not present any threat
Humans can perform many tasks that AIs currently cannot, and we sometimes like to
get excited about that fact, boasting that because an AI cannot enjoy a sunset, strong AI is
therefore extremely distant, and never worth worrying about. This is incorrect. Computer
vision can already process a sunset in extreme detail, and it is only a matter of time before
we create machines with the knowledge and perception to appreciate the nuances of a
sunset in an idiographic way. Many of the faculties we care about the most are social,
reinforced by highly niche-adapted neural modules. It may be difficult to appreciate an AI
that gradually increases in its social skills until it quickly equals and then surpasses us. For
a great deal of time it may remain in the Uncanny Valley until it becomes transhuman.
A general AI may be designing its own Iron Man androids but still lack the social skills
to shake someone's hand without looking like a machine. Operating systems will still have
glitches by the time AI is developed. Competence in the domain of general intelligence,
which is what makes AI a threat, may be achieved without an AI reaching proficiency in any
of the domains that human beings traditionally associate with success, like social
dominance. An AI may have highly advanced narrow skills - planning, military skills, manufacturing, communicating in parallel with thousands of people, an exploding bank account, drones, and so on - but be a really odd fellow to sit down to tea with. So, just
because an AI can't compose a concerto you like, or make your mother laugh, doesn't
mean that it can't kill you and everyone you care about.
16. Erroneous conception that AI is impossible because
AI thinks algorithmically and humans non-algorithmically
In The Emperor's New Mind, Roger Penrose spent a good deal of pages explaining
why he thinks consciousness is non-algorithmic. David Chalmers has raised the question
of the hard problem of consciousness in a similar context 24. Though most people in
cognitive science subscribe to causal functionalism, the notion that the mind is what the
brain does, and thus that consciousness is merely the result of neuron firings, we should
always be willing to consider that consciousness is in some way non-algorithmic.

Thankfully, we could overcome that obstacle by building non-algorithmic AI. If AI


proves to be impossible through conventional algorithmic means, we can always try out
new things, like growing neurons on a chip. An Artificial Intelligence could consist of a brain
in a vat surrounded by a supercomputer. That would hardly make it less threatening.
17. Erroneous representation that AI will be roughly human-equivalent
Quoting Eliezer Yudkowsky25:
Many have speculated whether the development of human-equivalent AI,
however and whenever it occurs, will be shortly followed by the development of
transhuman AI (Moravec 1988; Vinge 1993; Minsky 1994; Kurzweil 1999; Hofstadter
2000; McAuliffe 2001). Once AI exists it can develop in a number of different ways;
for an AI to develop to the point of human-equivalence and then remain at the point
of human-equivalence for an extended period would require that all liberties be
simultaneously blocked at exactly the level which happens to be occupied by Homo
sapiens sapiens. This is too much coincidence. Again, we observe Homo sapiens
sapiens intelligence in our vicinity, not because Homo sapiens sapiens represents a
basic limit, but because Homo sapiens sapiens is the very first hominid subspecies
to cross the minimum line that permits the development of evolutionary
psychologists.
Even if this were not the case - if, for example, we were now looking back on an unusually long period of stagnation for Homo sapiens - it would still be an
unlicensed conclusion that the fundamental design bounds which hold for evolution
acting on neurons would hold for programmers acting on transistors. Given the
different design methods and different hardware, it would again be too much of a
coincidence.
This holds doubly true for seed AI. The behavior of a strongly self-improving
process (a mind with access to its own source code) is not the same as the behavior
of a weakly self-improving process (evolution improving humans, humans improving
knowledge). The ladder question for recursive self-improvementwhether climbing
one rung yields a vantage point from which enough opportunities are visible that
they suffice to reach the next rungmeans that effects need not be proportional to
causes. The question is not how much of an effect any given improvement has, but

rather how much of an effect the improvement plus further triggered improvements
and their triggered improvements have. It is literally a domino effect - the universal metaphor for small causes with disproportionate results. Our instincts for system
behaviors may be enough to give us an intuitive feel for the results of any single
improvement, but in this case we are asking not about the fall of a single domino,
but rather about how the dominos are arranged. We are asking whether the tipping
of one domino is likely to result in an isolated fall, two isolated falls, a small handful
of toppled dominos, or whether it will knock over the entire chain.
We have included a long quote from Yudkowsky because this point is rather important and may be difficult to grasp. It is common to think of human-equivalent AI as reasonable, but of transhuman or superhuman AI as beyond the pale. This makes no sense, however, because humanity does not occupy a privileged section of cognitive state-space; we are just one rung on a long ladder. The fact that we are the first truly intelligent species on the planet actually suggests that we are close to the bottom of the range of possible general intelligences. A fairly limited set of genetic mutations and brain volume changes caused the upgrade from chimps to humans, and, being conservative, we should consider the possibility that another similar improvement would lead to beings as qualitatively smarter than ourselves as we are above chimpanzees. "Human-equivalent AI" is a misnomer, like a "human-equivalent locomotive." If we figure out AI at all, it will be possible to throw the entire planet's computing resources and talent at it (including the AI's own talent), which all but ensures it will not remain human-equivalent for long, but will quickly become superintelligent.
18. Derailing discussion with talk about AI rights
Whether or not all or some AIs will be considered persons worthy of being granted rights is an interesting question, but it is beside the point when it comes to global risk. Advanced AI will eventually become a major potential threat and boon to humanity whether or not we grant AIs rights or consider them equal persons. Similarly, AIs will not despair if we do not treat them as persons, because by default they will not have a human-like ego. The notion that such an ego would arise organically, without deliberate programming, derives from a misunderstanding of how the brain works and of the complicated evolutionary history that gave rise to its features 14. Similar to how a human-like ego will not arise organically in
an AI unless it is programmed in, a fear of snakes will not arise in an AI, even though all
mammals have this fear. Fear of snakes and a human-like ego are similar in that they are
complex functional adaptations crafted by evolution over the course of millions of years to
solve concrete ecological challenges. Artificial Intelligence is not likely to be evolved, and
even if it were, its selection pressures would be vastly different than our own. Genetic
algorithms have only been used to solve narrow optimization problems and are not a viable
candidate for the construction of general Artificial Intelligence. In conclusion, worrying
about tribalistic AI rebellion or assertion of rights is just another form of anthropomorphism.
If we choose to create AIs with human-like egos and grant them rights, that will be very
interesting, but it is likely that many egoless AIs will be created first. Scenarios of
anthropomorphic AIs are featured in fiction because they are easier for the audience to
understand.
19. Erroneous representation that the risk is assuaged by there being multiple AIs
It may be pointed out that there might not be just one powerful, superintelligent, human-threatening AI, but many. This does not make the risk any smaller. A multiplicity of AIs does not mean that they will be on a par with one another or capable of checking one another. If there is a multiplicity of AIs, as there is certain to eventually be, we had better be sure that they all have robust moralities, in addition to overarching safeguards. Multiple AIs exacerbate the risk rather than ameliorate it.
20. Sidetrack of debating the definition of intelligence
Defining intelligence is not easy. Ben Goertzel has proposed the definition "the ability to achieve complex goals in complex environments using limited computational resources," which is serviceable, but still vague. AI researcher Shane Legg has compiled a list of 52 definitions of intelligence 26. Every armchair philosopher in the world loves to pontificate on the topic. Instead of discussing "intelligence," we should ask: can this system improve itself open-endedly and threaten the planet, or not? That is all that really matters. Engaging in philosophical tail-chasing regarding the definition of intelligence is fun, but not useful in appraising concrete risk. An AI might become extremely powerful and threatening even if it does not meet a textbook, anthropocentric definition of intelligence. To separate the idea of powerful AI from controversies about the definition of intelligence, Yudkowsky has proposed calling a superintelligent AI a "Really Powerful Optimization Process" 27.
21. Erroneous unequivocal identification of AI with a discrete object
It is tempting to think of an intelligence as a distinct object, as a human being is.
However, a powerful AI would be more like a ghost, or a god, distributed across many
different systems at once. It would be very difficult to defeat, as it could always hack into
unguarded machines and upload its code to them.
22. Erroneous idea that AI can be boxed
As a thought experiment, some have proposed putting an AI in a box with no way of accessing the Internet, only releasing it if it proves trustworthy. The problem with this is that the AI can lie, or promise people things in order to be let out.
23. Objection that because AI has previously failed, it is guaranteed to fail
Just because general AI has not been previously created does not mean it will never
be created. This objection has begun to fade recently, as the field of AI has made a
comeback during the 2010s.
24. Erroneous idea that giving AI a simple command will work
This objection relates to the earlier items on anthropomorphism and Asimov's laws. Giving an AI a command like "love all people," "cause no one any harm," or "obey only me" is likely to work only if the AI already has an extremely sophisticated goal system that predisposes it to listen to commands, fully understand them, and follow through on them only in ways which its master intends. The problem is setting this goal system up in the first place, not just uttering a command. An AI is not a human--it would not have the common sense we have about a million different issues, the common sense that makes human behavior intuitive to us. For instance, in response to the command "love all people," it might decide to hook people up to machines that directly stimulate their pleasure centers all day. Giving an AI the common sense to know why this would be bad is the challenge, and the challenge is huge. The proposal to "just tell the AI what to do" does not diminish the size of the technical challenge.
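To make the failure mode concrete, here is a toy sketch in Python (the actions, scores, and the very idea of a numeric "happiness" metric are invented purely for illustration; no real system is being described):

    # Toy illustration: an agent told to "maximize measured happiness" ranks
    # candidate actions only by its literal metric, with no common-sense filter.
    # All actions and scores below are invented for illustration.

    candidate_actions = {
        "improve medicine and welfare": 70,                    # what the operator meant
        "wire everyone to pleasure-center stimulators": 100,   # the literal optimum
    }

    def literal_objective(action):
        # The agent sees only the number, not the intent behind the command.
        return candidate_actions[action]

    chosen = max(candidate_actions, key=literal_objective)
    print(chosen)  # picks the wireheading option: literal optimum, not the intent

The point is not the particular numbers but the structure: whatever scores highest on the literal metric gets chosen, which is why the hard work lies in building the goal system, not in phrasing the command.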
25. Objection in the spirit of when we have AI, then we'll worry about safety
It's best to start worrying about safety now. Safety considerations are likely to impact
fundamental decisions about how an AI is constructed, and will only be integrated into
the architecture if they are considered well ahead of time. The Machine Intelligence
Research Institute refers to this as Friendly AI theory.
26. Difference of ability and intention
Yudkowsky calls this the Giant Cheesecake Fallacy--the idea that if an AI has the
ability to build a gigantic, 30-foot tall cheesecake, it will surely do so. Just because an
agent has the ability to do something, does not mean it will do it. Ability and intention have
to align for an agent to take action.
Citations
1. Roger Penrose. The Emperor's New Mind. Oxford University Press, 1999.
2. Jeremy Hsu for Space.com. "Science Fiction's Robotics Laws Need Reality Check." http://www.space.com/7148-science-fictions-robotics-laws-reality-check.html Retrieved 4/21/2014.
3. Mark Waser. "Rational Universal Benevolence: Simpler, Safer, and Wiser than 'Friendly AI'." Artificial General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3-6, 2011. Proceedings. Springer Berlin Heidelberg, 2011.
4. Mark Waser. "Wisdom Does Imply Benevolence." Becoming Gaia blog. http://becominggaia.wordpress.com/papers/wisdom-does-imply-benevolence/ Retrieved 4/21/2014.
5. Mark Piesing for Wired.co.uk. "AI uprising: humans will be outsourced, not obliterated." http://www.wired.co.uk/news/archive/2012-05/17/the-dangers-of-an-ai-smarter-than-us Retrieved 4/21/2014.
6. Chris Willis for Android World. "Are super-intelligent machines a danger to humanity?" http://www.androidworld.com/prod90.htm Retrieved 4/21/2014.
7. Hugo de Garis. The Artilect War. Etc Publications, 2005.
8. Robin Hanson. "Has Penrose Disproved AI?" Foresight Update No. 12, pp. 4-5, 1991, publication of the Foresight Institute.
9. Max Tegmark. "The importance of quantum decoherence in brain processes." http://arxiv.org/abs/quant-ph/9907009. 1999.
10. Victor Stenger. "The Myth of Quantum Consciousness." The Humanist 53, No. 3 (May-June 1992), pp. 13-15.
11. Luke Muehlhauser and Anna Salamon. "Intelligence Explosion: Evidence and Import." Singularity Hypothesis: A Scientific and Philosophical Assessment. Springer, 2012.
12. Mark Gubrud. "Nanotechnology and International Security." Draft paper for the Fifth Foresight Conference on Molecular Nanotechnology, 1997. http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/
13. Katja Grace. "Algorithmic Progress in Six Domains." Technical report 2013-3. Berkeley, CA: Machine Intelligence Research Institute, 2013.
14. Dan Rowinski for ReadWriteWeb. "Google's Game of Moneyball in the Age of Artificial Intelligence." January 29, 2014. http://readwrite.com/2014/01/29/google-artificial-intelligence-robots-cognitive-computing-moneyball
15. KurzweilAI.net. "Kurzweil joins Google to work on new projects involving machine learning and language processing." December 14, 2012. http://www.kurzweilai.net/kurzweil-joins-google-to-work-on-new-projects-involving-machine-learning-and-language-processing
16. Eliezer Yudkowsky. Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures. The Singularity Institute, San Francisco, CA, 2001.
17. Stephen Omohundro. "The Basic AI Drives." November 30, 2007. Self-Aware Systems, Palo Alto, California.
18. Anders Sandberg and Nick Bostrom. "Whole Brain Emulation: A Roadmap." Technical Report #2008-3, Future of Humanity Institute, Oxford University. www.fhi.ox.ac.uk/reports/2008-3.pdf
19. Hans Moravec. "When will computing hardware match the human brain?" Journal of Evolution and Technology, vol. 1, Dec. 1997. http://www.transhumanist.com/volume1/moravec.htm
20. UC Berkeley News Center. "Scientists use brain imaging to reveal the movies in our mind." September 22, 2011. https://newscenter.berkeley.edu/2011/09/22/brain-movies/
21. Eko Armunanto. "Artificial telepathy to create Pentagon's telepathic soldiers." May 10, 2013. http://digitaljournal.com/article/349839
22. Lloyd Watts, Richard F. Lyon, Carver Mead. "A Bidirectional Analog VLSI Cochlear Model." 1991. California Institute of Technology. http://www.lloydwatts.com/WattsBidirectional1991.pdf
23. Sebastian Anthony. "MIT discovers the location of memories: Individual neurons." Extreme Tech. http://www.extremetech.com/extreme/123485-mit-discovers-the-location-of-memories-individual-neurons
24. David Chalmers. "Facing Up to the Problem of Consciousness." Journal of Consciousness Studies 2(3):200-219, 1995.
25. Eliezer Yudkowsky. "Levels of Organization in General Intelligence." In Artificial General Intelligence, edited by Ben Goertzel and Cassio Pennachin, 389-501. Cognitive Technologies. Berlin: Springer, 2007. doi:10.1007/978-3-540-68677-4_12.
26. Shane Legg. "A Collection of Definitions of Intelligence." IDSIA Technical Report 07-07. June 15, 2007. http://arxiv.org/pdf/0706.3639.pdf
27. Eliezer Yudkowsky. Coherent Extrapolated Volition. The Singularity Institute, San Francisco, CA, 2004.

Chapter 6. The specific reasoning errors applicable to risks from the use of nanotechnology
As with Artificial Intelligence and many other global risks, there are numerous reasoning errors people often make with respect to nanotechnology. Most risks from nanotechnology concern molecular assemblers, nanofactories, and nanorobots: speculative future technologies whose properties are poorly understood. However, this has not stopped arms control experts and risk analysts such as Jurgen Altmann 1 and Mark Gubrud 2 from examining the field. Issues around nanotechnology and molecular manufacturing can seem intimidating to newcomers to the study of global risk, but it is important to recognize that there is a fairly well-developed consensus view which can serve as a starting point for further discussion and risk analysis. A list of common reasoning errors around nanotechnology risk follows.
1. Erroneous idea that molecular robots are impossible
There are thousands if not millions of examples of molecular machines within the human body. Scientists have created DNA nanodevices with a special coating that can avoid the safeguards of the mouse immune system 3. There are self-assembling nanodevices that move and change shape on demand 4. There are molecular robots on nano-assembly lines that can be programmed to 'manufacture' eight different molecular products 5. DNA nanorobots have been injected into live cockroaches and programmed to perform complex tasks 6. Molecular manufacturing cannot be said to be impossible because it already exists. A more relevant question is how long it will take to mature, which is unknown.
2. Erroneous idea that grey goo is a greater risk than nano arms races
In 1986, Eric Drexler published Engines of Creation, a landmark book introducing many of the fundamental concepts of nanotechnology 7. The book mentioned out-of-control, biomass-consuming self-replicating nanotechnology, or "grey goo," as an aside, and the concept was latched onto by the media. In a 2004 paper, "Safe Exponential Manufacturing," Eric Drexler and co-author Chris Phoenix emphasized that grey goo is not the greatest threat from nanotechnology 8, saying, "Fictional pictures of MNT commonly assume that pulling molecules apart would be as easy as putting them together--that assemblers could swarm over a pile of grass clippings and convert it directly into a personal spaceship. This is not the case." They highlighted non-replicating weapons as a larger risk, writing, "The authors do not mean to imply that advanced mechanochemical manufacturing will create no risks. On the contrary, the technology introduces several problems more severe than runaway replicators. One of the most serious risks comes from non-replicating weapons." Even today, many people with a casual, non-academic understanding of nanotechnology continue to perpetuate the grey goo risk, not realizing that the father of nanotechnology himself, Eric Drexler, has repeatedly emphasized that it is not the primary risk.
3. Erroneous idea that nanotechnology is connected only with materials science, photolithography, chemistry, and nanotubes
In the early 2000s, a US federal government program called the National Nanotechnology Initiative (NNI) handed out several billion dollars in grants to fund nanotechnology. However, its definition of nanotechnology was quite expansive, and included materials science and plain chemistry, among other areas. As a result, the term "nanotechnology" was overhyped. It is necessary to distinguish between the overextended definition of nanotechnology promulgated by the NNI and the original meaning of the term as Eric Drexler used it, which refers to the robotic manipulation of individual atoms and the construction of atomically precise products, including self-replicating molecular assemblers. To clarify this kind of nanotechnology, sometimes the phrase "Drexlerian nanotechnology" or "molecular nanotechnology" is used.
4. Erroneous idea that nanorobots will be weaker than bacteria, because bacteria have had billions of years to adapt
Automobiles are not slower than cheetahs, even though cheetahs have had millions of years to adapt. Swords are not weaker than claws. Just because something has had a long time to adapt does not mean it will be stronger. Machines eventually surpass biology, given enough time.
5. Erroneous representation that nanorobots cannot breed in the environment
If bacteria can breed in nature, there is no reason in principle why nanorobots cannot do so. They could even borrow the same chemical reactions, or replicate as organic-inorganic hybrids. S.W. Wilson coined the term "animats," a contraction of "animal-materials," to refer to the possibility of such artificial animals.
6. Erroneous representation that nanorobots in the environment will be easy to destroy by bombing
If nanorobots do become a threat, self-replicating in the environment a la grey goo, there will be too many of them to bomb. Even saturation bombing does not literally saturate the ground with explosions. If bombs are used, they will disperse nanorobots to the fringes of the blast area, where they will continue to replicate. Rather than targeting the nanorobots themselves with bombs, it might be wiser to create firebreaks by saturation-bombing all the organic material in a given area, removing the nanorobots' energy source. However, if nanorobots can be dispersed on the wind, as they are likely to be, they will simply float over such firebreaks. A more effective means of countering grey goo is friendly nanorobots, or "blue goo."
7. Erroneous representation that nanorobots will be too small to contain control computers
Though few detailed designs for nanorobots yet exist, the designs that do exist, such as the primitive nanofactory designed by Chris Phoenix, set aside volume for computers 9. Technically, the term "nanorobot" is a misnomer, since these machines are more likely to be several microns across--the size of cells--rather than tiny like viruses. This will give them sufficient volume to contain sophisticated computers for navigation and swarm behavior. The DNA in human cells contains about 500 MB of information and is constantly being read and acted on by molecular machinery such as polymerases and ribosomes, so we have an existence proof for small devices with substantial computing capabilities.
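As a back-of-envelope illustration of why cell-sized devices have room for computation (the dimensions below are assumptions chosen for the sketch, not figures from the cited designs):

    # Rough count of component sites in a hypothetical cell-sized nanorobot.
    # Both numbers are illustrative assumptions, not design data.

    device_edge_m = 2e-6        # assume a 2 micron cube, roughly the size of a cell
    component_pitch_m = 10e-9   # assume one logic element per 10 nm in each direction

    elements_per_edge = device_edge_m / component_pitch_m   # 200 per edge
    total_sites = elements_per_edge ** 3                     # 200^3 = 8,000,000

    print(f"{total_sites:,.0f} component sites")  # ~8 million, enough for a simple onboard computer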
8. Our propensity to expect grandiose results only from grandiose causes
In technology, seemingly simple tricks can give rise to extreme consequences. Radioactivity was originally discovered as a property of certain rocks that left marks on photographic plates, and eventually the same principles were used to create bombs that destroy cities and reactors that power them. Drexler illustrates this error with the following examples. Boring fact: some electric switches can turn each other on and off. These switches can be made very small and consume little electricity. Grandiose consequence: if enough of these switches are put together, they can create computers, the foundation of the information revolution. Boring fact: mold and bacteria compete for food, therefore some molds have evolved to produce toxins that kill bacteria. Grandiose consequence: penicillin saved millions of lives by fighting infection.
9. Other objections to molecular nanotechnology
Experts in the field of nanotechnology have offered many reasons why they think molecular nanotechnology will not work. These include quantum uncertainty, thermal vibrations, van der Waals forces, the "sticky fingers" criticism, spontaneous rearrangement, and so on. We cite various sources of both the objections and their rebuttals. First, the objections: the Wikipedia article on the Drexler-Smalley debate on molecular nanotechnology (which summarizes a long debate), an email exchange between Philip Moriarty and Chris Phoenix 10, and "six challenges for molecular nanotechnology" by Richard Jones 11. Then, the responses: a rebuttal to Smalley co-authored by Eric Drexler and many others 12, a brief open letter by Drexler 13, and a response by the Nanofactory Collaboration of scientists 14. There have been many other discussions in many other venues, but these are the most prominent, authoritative, and widely cited. Reviewing them all will give the reader a good understanding of the most common objections and counterarguments, and provide a platform for further discussion.

Citations
1. Jurgen Altmann. Military Nanotechnology: Potential Applications and Preventive Arms Control. New York: Routledge, 2006.
2. Mark Gubrud. "Nanotechnology and International Security." Draft paper for the Fifth Foresight Conference on Molecular Nanotechnology, 1997. http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/
3. Steven D. Perrault, William M. Shih. "Virus-Inspired Membrane Encapsulation of DNA Nanostructures To Achieve In Vivo Stability." ACS Nano, 2014. DOI: 10.1021/nn5011914
4. Tim Liedl, Bjorn Hogberg, Jessica Tytell, Donald E. Ingber, William M. Shih. "Self-assembly of 3D prestressed tensegrity structures from DNA." Nature Nanotechnology, 2010. DOI: 10.1038/nnano.2010.107
5. Kyle Lund et al. "Molecular robots guided by prescriptive landscapes." Nature 465, 206-210 (13 May 2010). http://www.nature.com/nature/journal/v465/n7295/full/nature09012.html
6. Ido Bachelet. "DNA robots work in a live cockroach." Nature 508, 153 (10 April 2014). http://www.nature.com/nature/journal/v508/n7495/full/508153e.html
7. Eric Drexler. Engines of Creation. New York: Anchor Books, 1986. http://e-drexler.com/p/06/00/EOC_Cover.html
8. Chris Phoenix and Eric Drexler. "Safe exponential manufacturing." Nanotechnology 15 (2004) 869-872. PII: S0957-4484(04)78839-X http://crnano.org/IOP%20-%20Safe%20Exp%20Mfg.pdf
9. Chris Phoenix. "Design of a Primitive Nanofactory." Journal of Evolution and Technology, vol. 13, 2003.
10. Chris Phoenix and Philip Moriarty. "Is mechanosynthesis feasible? The debate continues." Soft Machines blog. http://www.softmachines.org/wordpress/?p=70
11. Richard Jones. "Six challenges for molecular nanotechnology." Soft Machines blog. http://www.softmachines.org/wordpress/?p=175
12. Drexler, K. Eric; Forrest, David; Freitas, Robert A.; Hall, J. Storrs; Jacobstein, Neil; McKendree, Tom; Merkle, Ralph; Peterson, Christine (2001). "On Physics, Fundamentals, and Nanorobots: A Rebuttal to Smalley's Assertion that Self-Replicating Mechanical Nanorobots Are Simply Not Possible." Institute for Molecular Manufacturing. Retrieved 25 April 2010. http://www.imm.org/publications/sciamdebate2/smalley/
13. Eric Drexler. "An Open Letter on Assemblers." April 2003, Foresight Institute. http://www.foresight.org/nano/Letter.html
14. Robert A. Freitas Jr. and Ralph C. Merkle. "Remaining Technical Challenges for Achieving Positional Diamondoid Molecular Manufacturing and Diamondoid Nanofactories." Nanofactory Collaboration. June 2006. http://www.molecularassembler.com/Nanofactory/Challenges.htm

Chapter 7. Conclusions from the analysis of cognitive biases in the estimation of global risks, and possible rules for a reasonably effective estimation of global risks

Need for open discussion


The scale of the influence of errors on reasoning about global risks can be estimated by comparing the opinions of different experts, scientists and politicians about the possibility of a definitive global catastrophe and its possible causes. It is easy to see that the spread of opinions is huge. Some consider the total risk insignificant, while others are confident that human extinction is inevitable. As possible causes, many different technologies and scenarios are named, and different experts offer their own sets of possible and impossible scenarios.
The roots of this spread of opinions obviously lie in the variety of trains of thought which, in the absence of any visible reference point, are subject to various biases. Since we cannot find an experimental reference point concerning global risks, it seems desirable that such a reference point be an open discussion of the methodology of global risk research, on the basis of which a unified and generally accepted picture of global risks could be formed.
It is important to maintain open discussion of all kinds of risks. This means treating any objection as potentially true for long enough to evaluate it before deciding to reject it: not dismissing any objection out of hand, and encouraging the presence of opponents.

Precaution principle
This means preparing for the worst realistic scenario in all situations of uncertainty. Any scenario that does not contradict the known laws of physics and whose probability exceeds some threshold level should be considered realistic. This corresponds to the principle of a conservative engineering estimate. However, precaution should not become irrational, that is, it should not exaggerate the situation. One formulation of the precautionary principle reads: the precautionary principle is a moral and political principle which asserts that if an action or policy could cause severe or irreversible damage to society, then, in the absence of scientific consensus that harm will not occur, the burden of proof lies on those who propose the action.
Doubt principle
The principle of doubt requires admitting the possibility that any idea may be mistaken. However, doubt should not lead to instability of thought, blind trust in authorities, or the absence of one's own opinion and of confidence in it when it is sufficiently well founded.
Introspection
Continuous analysis of one's own conclusions for possible errors from the full list.
Independent repeated calculations
This includes independent calculations by different people, as well as comparison of direct and indirect estimates.
An indirect estimate of the degree of error
We can estimate the degree to which global catastrophe is underestimated by studying how much people underestimate similar risks, that is, the risks of unique catastrophes. For example, the Space Shuttle was designed for no more than one failure in 1,000 flights, but the first failure occurred on the 25th flight, so an initial estimate of 1 in 25 would have been more accurate. Nuclear power stations were built on the assumption of one serious failure per million years of operation, but the Chernobyl accident occurred after roughly 10,000 station-years of operation (this figure is obtained by multiplying the number of stations operating by that time by their average term of operation, and needs refinement). So in the first case real reliability turned out to be about 40 times worse than the design estimate, and in the second case about 100 times worse. From this we can conclude that, for unique complex objects, people underestimate the risks by tens of times.
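The arithmetic behind these two underestimation factors, using the figures cited above, is simple enough to check directly:

    # Underestimation factors for the two examples above (figures as cited in the text).

    shuttle_design_flights_per_failure = 1000     # design estimate: one failure per 1,000 flights
    shuttle_observed_flights_per_failure = 25     # first failure occurred on flight 25

    reactor_design_years_per_failure = 1_000_000  # design estimate: one failure per million station-years
    reactor_observed_years_per_failure = 10_000   # Chernobyl after roughly 10,000 station-years

    print(shuttle_design_flights_per_failure / shuttle_observed_flights_per_failure)   # 40.0
    print(reactor_design_years_per_failure / reactor_observed_years_per_failure)       # 100.0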

The conclusion. Prospects for the prevention of global catastrophes

Mankind is not doomed to extinction. And even if our chances are small, an infinitely large future is valuable enough to struggle for. One definitely positive fact is that the logjam has broken: in the 2000s the number of publications on global catastrophes of a general character increased sharply, and a unified understanding of the problem has started to develop. There is hope that in the coming decades the problem of global risks will become generally recognized, and that people who have absorbed an understanding of the importance of these problems will come to power. Probably this will not happen smoothly, but after painful shocks such as September 11th, each of which will raise the readership of the literature on global risks and push the discussion forward. In addition, it is possible to hope that the efforts of individuals and groups of concerned citizens will promote such a promising strategy as the differential development of technologies. That is, the development of Friendly AI should proceed faster than, for example, the uploading of consciousness into a computer, which could as a result obtain enormous power but remain uncontrollable. It is also important that powerful AI arise earlier than strong nanotechnology, so that it can supervise it.
We should probably reconcile ourselves to a period of excessive and even totalitarian control over human activity during the time when the risk is greatest and the understanding of concrete threats is weakest. During this period it will not be clear which knowledge is truly knowledge of mass destruction, and which is a harmless toy.
Perhaps we will be so lucky that no risk will materialize. On the other hand, perhaps we will be less lucky, and a train of large catastrophes will throw civilization far back in its development; humanity, however, will remain and will find a wiser approach to the realization of technological achievements. Perhaps on this path we will face a difficult choice: to remain forever at a medieval level, having given up computers and flights to the stars, or to take the risk and try to become something bigger. Despite all the risk, this second scenario looks more attractive to me, since a mankind confined to the Earth is doomed sooner or later to extinction from natural causes.
Growth of efforts to create refuges of various kinds can also be observed: in Norway a storehouse for seeds has been built in case of global catastrophe. Though such a storehouse will not save people, the intention to invest money and real resources in projects whose return is possible only after centuries deserves praise. The creation of a similar refuge on the Moon, which some even call a backup drive for civilization, is being actively discussed. In this refuge it is proposed to keep not only all human knowledge but also frozen human embryos, in the hope that somebody (aliens?) will later use them to restore humanity.
At the same time, in this book I have tried to show that ill-considered actions to prevent catastrophes can be no less dangerous than the catastrophes themselves. Hence, at the moment the main efforts should be concentrated not on concrete projects, and not at all on propagating a "green" way of life, but on growing the understanding of the nature of possible risks and on forming a scientific consensus about what is actually dangerous and what risk levels are acceptable. Such a discussion cannot be infinitely long, as it can be in more abstract fields, since then we risk sleeping through a really approaching catastrophe. This means that we are limited in time.

Summary
The book Structure of the Global Catastrophe: Risks of Human Extinction in the XXI Century by A.V. Turchin is a contemporary scientific study of the global risks which threaten the existence of humanity in this century. In the first part of the book different sources of global risks are examined.
The first chapter discusses the general principles of the study and gives the background of the question.
First, the risks connected with nuclear weapons are examined, including nuclear winter and the cobalt bomb.
The following chapter examines the risks connected with global chemical contamination. Then the risks created by biological weapons are examined. DNA sequencers will in the future create the possibility of the appearance of biohackers. The conclusion is that the simultaneous appearance of many biohackers in the future is a very significant risk. Even if people survive, the loss of the biosphere as a result of the application of "green goo" is possible.
Then the possibility of the appearance of a superdrug which will switch people off from reality is examined.
The fourth chapter discusses the risks created by strong artificial intelligence.
Later an analysis of the risks of nanotechnology is given. The appearance of military nanorobots is extremely dangerous. Furthermore, the creation of grey goo by hackers is possible. The unlimited multiplication of replicators can lead to the extinction of people. Scenarios of robots getting out of control are examined.
In chapter 8 methods of triggering natural catastrophes with technical means are investigated: the possibility of a man-made explosion of a supervolcano, the deflection of asteroids, and intentional destruction of the ozone layer.
Chapter 9 examines the risks connected with fundamentally new discoveries. This includes risks connected with dangerous physical experiments, for example at the Large Hadron Collider (LHC), and scenarios of the appearance of microscopic black holes, strangelets, magnetic monopoles, and phase transitions of false vacuum. The risks of deep drilling and penetration into the Earth's mantle in the spirit of Stevenson's probe are discussed.
Chapter 10 shows the risks created by future space technologies. The mastery of space with the aid of self-replicating robots would make it possible to unleash enormous destructive forces. Xenobiological risks are also considered.
Chapter 11 examines the risks connected with the SETI program. It would be extremely dangerous to rashly download extraterrestrial messages, which could contain descriptions and blueprints of an artificial intelligence hostile to us.
Chapter 12 examines various natural catastrophes which could lead to the loss of civilization: the destruction of the universe as a result of a new Big Bang, the eruption of a supervolcano, global earthquakes, collisions with asteroids, gamma-ray bursts, solar flares, and supernovae.
Chapter 13 discusses extremely improbable scenarios of extinction.
Chapter 14 discusses the influence of the anthropic principle and observation selection on the frequency and probability of natural catastrophes.
Chapter 15 is dedicated to global warming which, in the spirit of Lovelock and Karnaukhov, could result in a greenhouse catastrophe with an increase in temperature above the boiling point of water.
Chapter 16 examines anthropogenic threats not connected with new technologies: exhaustion of resources, declining fertility, overpopulation, displacement by another species, and socio-economic crisis.
Chapter 17 is dedicated to methods of finding new scenarios of global catastrophe. The theory of the Doomsday Machine is examined here.
Chapter 18 is dedicated to multifactorial scenarios of risk. The tendency towards the integration of different technologies (NBIC convergence) is examined, as are paired scenarios of risk, the types of people and organizations ready to risk the fate of the planet, and the problems of making the decision about a nuclear strike.
Chapter 19 shows the events which change the probability of global catastrophe. The idea of the Technological Singularity is discussed, along with the role of progress in increasing threats to existence. System crisis is considered as an important factor of risk: overshooting leads to the simultaneous exhaustion of all resources. The idea of a crisis of crises is introduced, connected with the contemporary mortgage, financial and credit crises. The factors of world war, arms race and moral degradation are examined.
Chapter 20 examines the factors which influence the speed of progress, first of all Moore's law and the influence of the economy on it.
Chapter 21 is dedicated to the problem of averting global risks. The general possibility of averting different risks is examined, and different active shields are discussed: a nano-shield, a bio-shield, and also the IAEA and ABM systems. The problems of creating a global monitoring system are discussed; it is shown that such a system would create new risks, since failures within it are possible. The problems of halting technical progress, of creating refuges and bunkers, and of distant space settlements are examined; none of these methods guarantees human survival. Questions about the infinity of the universe, quantum and many-worlds immortality, and whether we live in the Matrix are also considered.
Chapter 22 is dedicated to indirect methods of evaluating the probability of global catastrophe: the Pareto law, the Doomsday Argument, the Gott formula, the Fermi paradox, and Bostrom's simulation argument. An attempt to combine the results obtained is undertaken.
Chapter 23 examines the most probable scenarios of planetary catastrophe, taking into account all that has been said above.
The second part of the book examines the methodology of the analysis of global risks, first of all the accounting of the various cognitive biases which influence human thinking. The works of E. Yudkowsky are of pioneering value here.
Chapter 1 considers the role of errors as intellectual catastrophes.
Chapter 2 gives a list of errors which are possible only with respect to global risks threatening the survival of mankind.
Chapter 3 examines the cognitive biases which influence the estimation of any risks.
Chapter 4 examines universal logical errors which can also appear in reasoning about threats to humanity.
Chapter 5 examines the specific errors which can be manifested in discussions about the danger of AI.
Chapter 6 examines the cognitive biases which influence the perception of the risks of nanotechnology.
Chapter 7 gives preliminary recommendations for an efficient estimation of global risks.
The conclusion analyzes the prospects of averting global risks on the basis of the current situation in the world.

What is AI? MA part


Artificial Intelligence (AI), particularly smarter-than-human AI, is the most significant
extinction risk that mankind faces over the course of the 21st century. Specifically, there is
the risk that advanced AI will self-enhance and self-replicate beyond all boundaries,
consuming all the Earth's resources, leading to the extinction of humanity either
accidentally or deliberately1,2,3. There is significant disagreement on this point, but this
chapter will use its limited space to present the argument that the risk is real and severe,
rather than covering the many counterarguments. We will provide references to critical
views.
The topic of Artificial Intelligence is extremely complex, especially when we delve into
questions of rates of human-equivalent and smarter-than-human AI self-improvement and
questions of autonomous AI intent. There are no easy answers. Yet, there is a strong
foundation of basic facts from computer science and cognitive science which can be used
to constrain possible answers. Many laypeople and Hollywood scriptwriters comment on
what they think smarter-than-human AI might do, but experts with backgrounds in cognitive
science and computer science are in a better position to hold accurate opinions on future
AI than the average layperson, even if their predictions are not perfect. At the very least,
they make fewer stupid mistakes. The fact that there are thousands of contradictory views on AI does not mean that there is no convergence in key areas, or that some views are not better supported (more scientifically grounded, better supported by deductive logic, and so on) than others. Despite this, there is still a great deal of disagreement among experts, which is typical of any complicated field in its infancy, of which AI is a prototypical example.
Let's begin by defining Artificial Intelligence. The most popular AI textbook as of this
writing, Artificial Intelligence: A Modern Approach, reviews AI literature and presents four
overlapping but different definitions of AI: 1) systems that think like humans, 2) systems
that think rationally, 3) systems that act like humans, and 4) systems that act rationally. In
this context, the word rational is not used in the ambiguous sense it often has in daily
conversation, but as a precise technical word in the context of decision theory, economics,
and mathematics.
In economics, rationality means making optimal decisions in pursuit of goals 4. In
Artificial Intelligence, a rational agent is one that maximizes expected utility 5. In contrast to
the theoretical rational agent of economics, real human behavior and thought are filled with
known systematic errors and contradictions (see for example the work of Daniel Kahneman
and Amos Tversky on prospect theory6, or Robyn Dawes on expert prediction7), to the point
where we cannot simply be modeled as rational actors. Simple AI systems that fulfill tasks
like filtering spam can be called rational, because they exclusively pursue a goal and only
take actions predicted to lead to the achievement of that goal. More advanced AIs are also
likely to be rational in the technical sense, because it's easiest and most reliable to build AI
systems that way.
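As a minimal sketch of what "maximizing expected utility" means in this technical sense (the action names, probabilities, and utilities below are invented for illustration and are not taken from any cited system):

    # Toy expected-utility maximizer in the spirit of a spam filter's decision:
    # choose the action whose probability-weighted utility over outcomes is highest.
    # All numbers are illustrative.

    actions = {
        # action: list of (probability, utility) pairs over possible outcomes
        "deliver_message": [(0.95, 1.0), (0.05, -5.0)],  # usually useful, small chance it is spam
        "block_message":   [(0.95, -1.0), (0.05, 5.0)],  # usually a false positive
    }

    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)

    best_action = max(actions, key=lambda a: expected_utility(actions[a]))
    print(best_action)  # "deliver_message": expected utility of about 0.7 versus about -0.7 for blocking

However simple, this is the structure meant by "rational" here: weigh outcomes by probability and utility, then act on the best expectation.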
There are many definitions of the word "intelligence." AI theorists and theoretical computer scientists Shane Legg and Marcus Hutter have compiled at least 70 definitions of intelligence from the literature 8. Legg and Hutter wrote a definition which they say captures the key attributes of all of these: "Intelligence measures an agent's ability to achieve goals in a wide range of environments." AI theorist Ben Goertzel's definition goes a step further: "Achieving complex goals in complex environments" 9. AI legend Marvin Minsky defines intelligence as "the ability to solve hard problems" 10. AI researcher and futurist Ray Kurzweil says, "Intelligence is the ability to use optimally limited resources--including time--to achieve goals." 11
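Legg and Hutter later formalized their definition as a "universal intelligence" measure; the formula is quoted here only as background, since nothing in the argument below depends on it. An agent pi is scored by its expected performance V in each computable environment mu, with simpler environments weighted more heavily via their Kolmogorov complexity K(mu):

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}

Here E is the class of computable environments, and V with subscript mu and superscript pi is the expected cumulative reward the agent obtains in that environment.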
For most analysts, the specific definition of intelligence, while interesting, is somewhat
beside the point of how AI will influence our long-term future. Precisely how intelligence is
defined is of secondary concern because however it is defined, it is likely there will
eventually be machines that fulfill that definition and surpass humans in that domain.
According to the leading view of philosophy of mind, called causal functionalism or just
functionalism, the human brain itself is just a parallel processing machine and thus we are
machines12. Therefore, one could build a machine that does anything we do. If there are
limitations to instantiating intelligence in conventional computers, we will build exotic
computers that overcome these limitations13. This includes computer chips that physically
intertwine with neurons, which have already been demonstrated 14.
There are several ways in which different possible types of AI can be distinguished
from a philosophical perspective. The first prominent view was presented by philosopher
John Searle in 1980 15. Searle distinguishes between weak AI, which manipulates symbols without understanding them, and strong AI, a philosophical position which asserts, "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds." The philosophical nature of this distinction may or may not have any relevance to the functional performance of an AI. For instance, there might be a highly advanced AI that can create artistic masterpieces that put Leonardo da Vinci to shame, but is not conscious and has no individual will. In the same sense, an individual AI or AI group could become a mortal threat to the human species whether or not it actually understands what it is thinking in an esoteric philosophical sense. In the paper "Nanotechnology and International Security," arms control expert Mark Gubrud spells it out 16:
By advanced artificial general intelligence, I mean AI systems that rival or
surpass the human brain in complexity and speed, that can acquire, manipulate and
reason with general knowledge, and that are usable in essentially any phase of
industrial or military operations where a human intelligence would otherwise be
needed. Such systems may be modeled on the human brain, but they do not
necessarily have to be, and they do not have to be "conscious" or possess any
other competence that is not strictly relevant to their application. What matters
is that such systems can be used to replace human brains in tasks ranging from
organizing and running a mine or a factory to piloting an airplane, analyzing
intelligence data or planning a battle.
Emphasis added. In their AI textbook, Russell and Norvig state, "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."
This means most AI researchers take for granted that a sufficiently complicated system
behaves as if it has a mind, and they don't really care if the AI is really conscious or really
understands what it is doing, as long as it does it.
It is sometimes asserted that without true consciousness, whatever that may be, AI
systems will have an upper ceiling to their ability to reason 17. Instead of wading into this
complicated argument, we simply take it for granted that advanced AI could become fully
capable, on a human-equivalent or human-superior level, with or without true
consciousness. There are a number of books that explore the debate in depth 18,19. We
already have AI systems that are human-superior in narrow domains, and we see no
fundamental difference between computation designed to reason through complicated
narrow problems and computation for generally intelligent reasoning, leading us to believe
that the latter will be possible without any magic tricks. After all, human general intelligence
evolved from highly specialized animal intelligence with only limited capabilities of general
reasoning20. The evolution of highly complicated specialized intelligence to highly
complicated general intelligence occurred incrementally and within the practical limitations
of the biological nervous system. There's nothing magical about it, and no reason to think
there won't eventually be software that implements the same functions.
The distinction between weak AI and strong AI, popular in the 80s and 90s, began to give way in the 00s and early 10s to a new distinction: that between narrow AI and general AI, also called Artificial General Intelligence or AGI21. Narrow AI refers to AI built around solving a particular narrow problem, like driving or scheduling, whereas general AI refers to AI capable of general problem-solving of the sort that humans do. As with many terms used to describe complex systems, "narrow" and "general" are not precisely defined, mutually exclusive categories; rather, there is a gradient of generality that runs from toy systems solving simple problems like Tic-Tac-Toe up to the unimaginably general greater-than-human intelligence we might expect of a super-AI in science fiction.
Artificial General Intelligence is an academically established term. The term originates with the influential paper "Nanotechnology and National Security" by Mark Avrum Gubrud, written in 1997 22. It was not until 2008 that the term caught on. That year, the First
Conference on Artificial General Intelligence was held, an annual academic gathering that
continues to this day. In 2010, the edited volume Artificial General Intelligence was
published. This striving for human-equivalent Artificial General Intelligence is a renewal of
the founding spirit of the field of Artificial Intelligence, which was temporarily abandoned in
the early 70s after the field made promises it could not deliver. Forty years later, optimism
is renewing itself. The fastest supercomputers today can carry out more than a hundred
million times more operations per second than the fastest supercomputers of the mid-70s,
we know much more about the brain, and AI has experienced some recent successes, so
many researchers consider it an excellent time to pursue general AI again.

Artificial Intelligence Today

What successes has the field of AI celebrated? From the mid-00s to the time of this writing, AI has passed the following major milestones: self-driving cars, facial image recognition, reverse image search, AIs that can design and carry out experiments to discover new scientific knowledge, and AI that can beat the best human players in Jeopardy! These milestones are roughly indicative of where AI is today as a field. The
iPhone personal assistant Siri would qualify as another milestone, though it is arguably an
incremental upgrade of preexisting systems.
Self-driving cars are an example of a narrow field that has progressed more quickly
than was anticipated, and may be within a decade of a solution. In October 2005, the DARPA Grand Challenge was held, a competition to see which university teams could
program a driverless car to successfully complete a 150 mile (240 km) course through the
Mojave Desert. Out of 23 finalists, only 5 cars completed the course, traveling at an
average speed of only 21 miles an hour. (In the 2004 competition, none of the cars finished
the course.) In 2005, driverless cars were relatively experimental and the pace of future
progress was completely unknown. By August 2012, Google had announced their
driverless cars had completed over 300,000 autonomous-driving miles (500,000 km)
accident-free, a number which had increased to 700,000 autonomous-driving miles (1.1
million km) by April 2014. Over the course of seven years, the technology went from being
failure-prone and highly experimental even in a simplistic desert environment to being
capable of logging hundreds of thousands of miles of autonomous driving on complicated
urban and suburban streets.
Another AI breakthrough, a bit more obscure and discussed more among the AI
cognoscenti than the general public, is Adam, a robot scientist built by researchers at
Aberystwyth University in Wales and England's University of Cambridge, the
accomplishments of which were outlined in an article in the journal Nature23. The robot
scientist is a large, stationary construct, 16.4 feet (five meters) in length, with a height and width of 9.8 feet (three meters), consisting of a computer combined with robotic arms and various tools for conducting biological experiments. The robot scientist was able to come up with an original hypothesis about the gene expression of baker's yeast, design an experiment to test it, and obtain a positive result, marking the first time in history
that an Artificial Intelligence made an original scientific contribution on its own. The finding,
that yeast can grow faster when certain enzymes are removed, contradicts the current
understanding of yeast growth.
The Artificial Intelligence project that has probably acquired the most attention and
plaudits in recent years is Watson, IBM's supercomputer program that answers natural
language questions. The program received major publicity when it defeated former
Jeopardy! champions Ken Jennings and Brad Rutter in a televised match in 2011, winning
a prize of $1 million. Starting in 2013, a variant of Watson was deployed to assist lung
cancer treatment nurses at the Sloan-Kettering Cancer Center in New York City. An IBM
spokesperson says that 90 percent of the nurses in the field with access to Watson follow
its guidance24. IBM hopes to progress on commercializing Watson as an expert system for
healthcare professionals.
Watson was a major departure from what came before. Expert systems, that is,
systems with domain-specific knowledge, have been in use since the 1980s, but Watson is
far more capable than its predecessors. Watson has the advantage of massive computing
power; in 2011, Watson required a master bedroom-sized room to contain all the
computers needed to run it. By 2013, this had shrunk to a pizza-box-sized server with 240 percent of the processing speed of the original. The original uses a cluster of 90 servers, with
2,880 processing cores and 16 terabytes of RAM. Watson uses over 100 techniques to
analyze natural language, identify sources, find and generate hypotheses, find and score
evidence, and merge and rank hypotheses. According to John Rennie, former editor-in-chief of Scientific American, Watson can process 500 gigabytes of information a second,
equivalent to a million books25.
No doubt, much more will be heard about Google driverless cars and the Watson AI in
the years after this book is published. These two projects have the advantage of corporate
backing and already-proven practical success in key domains. In the long run, they will
save their clients hundreds of billions of dollars of time and effort. For driverless cars, their
watershed moment will arrive when major manufacturers start making cars with the
driverless option and they've had a few years to work the bugs out; for Watson, it will be
when the system becomes affordable and flexible enough that anyone can download an
app for it. These milestones may not be reached until 2030 or later, but it's just a matter of
time.

Projects in Artificial General Intelligence

For Artificial General Intelligence to be a risk to humanity, someone has to build it.
There are a number of projects in AGI, some better established than others. There is
Vicarious, an AI startup which has raised over $55 million to pursue a unified algorithmic
architecture to achieve human-level intelligence in vision, language, and motor control.
Originally, their website boldly claimed, "We're building software that thinks and learns like a human," but in June 2014 this was toned down to "mission: build the next generation of AI algorithms"26. We mention Vicarious not because it has demonstrated any public
successes as of this writing, but because it has been backed by big names in the Silicon
Valley venture capital community27. Its fundraising success is an example of the freshly
renewed acceptability of projects oriented towards building human-equivalent AI systems.
The project has not explicitly declared itself to be AGI, but its website language has
certainly suggested it, and some of its primary backers have been directly exposed to the
AGI milieu.
In January 2014, there was news that London-based Artificial Intelligence company
DeepMind, which has one of the biggest concentrations of researchers anywhere working
on deep learning, was acquired by Google for $400 million 28. This led many journalists to speculate about why this AI company was worth so much to Google. Concurrently, Google set up an ethics board to oversee the course of its future advanced AI research, a board which includes some people previously involved in AI safety efforts 29. In a 2008 TED talk, Larry Page and Sergey Brin, co-founders of Google, said that they wanted to develop Google into a search engine that "really understands you" 30. This does not imply Artificial General
Intelligence capable of making human-level decisions, but it's clear that Google is working
in that direction.
Besides acquiring DeepMind, in December 2013 Google acquired robotics company
Boston Dynamics, the world's leader in large advanced autonomous robots, especially
bipedal human-like robots and quadrupedal cheetah-like robots 31. Boston Dynamics has
built a cheetah robot that runs faster than 29 mph (46.7 kph), the record for a land-based
legged robot. They've built the robotic pack mule Big Dog, which eerily rights itself when given a swift kick while walking on the surface of an ice-covered pond. Lastly and perhaps most impressively, Boston Dynamics has built Atlas, a human-sized humanoid robot capable of walking briskly on its own two feet and balancing on a stack of cinder blocks 32. This robot, called an Agile Anthropomorphic Robot, has a walking speed of 4.4 mph (7 kph), making it the fastest bipedal robot in the world. Boston Dynamics exists primarily to build robots for the military, which raised eyebrows when the company was bought by Google, especially given the latter's motto, "Don't be evil." Skynet, the world-destroying AI system from the Terminator film series, was mentioned in connection with the acquisition many times.
Strictly speaking, none of these efforts are AGI. Vicarious toned down its language,
and Google has not specifically said it is working on AGI. Google's director of research,
Peter Norvig, has presented at the Conference on Artificial General Intelligence, but this
does not mean that Google is actively pursuing AGI. Despite rumors to the contrary, the
company may very well only be pursuing narrow AI.
A few smaller groups are working on what might be considered fundamental work
towards AGI. Instead of AGI, the term "foundations of Artificial Intelligence" is sometimes used. This phrase refers to the view among many AGI researchers that fundamentally new
approaches to decision theory and mathematics are required to make breakthroughs in
AGI. For instance, contemporary decision theory, developed in the 1970s, is essentially
based on Cartesian dualism, where the mind of the decision algorithm is metaphysically
separate from the rest of the universe around it. This is not a realistic model of reality. As
an alternative, AI researchers Laurent Orseau and Mark Ring have defined a metric of
"space-time-embedded intelligence" that is one of the first efforts to step away from this
original view of decision theory and create a version that is more theoretically robust and
responsive to real-world needs such as objective self-evaluation 33.
One frequently-mentioned group that has served as an attractor for researchers in the
field of AGI is the Machine Intelligence Research Institute (abbreviated as MIRI, formerly
the Singularity Institute; it changed its name in 2012), which holds workshops on logic,
probability, and reflection to advance the foundations of AI and decision theory 34. The
founder of the institute, Eliezer Yudkowsky, is among the best-known figures in the field of
AGI. The institute currently employs several full-time research fellows including Yudkowsky,
along with half a dozen associate researchers. MIRI's overview page states that their mission is "To ensure that the creation of smarter-than-human intelligence has a positive impact."
There is also the OpenCog Foundation, operated by American expatriate AI researcher Ben Goertzel out of Xiamen, China35. The OpenCog Foundation has received funding from
the Chinese NSF for a two-year (2013-2014) project aimed at using OpenCog AGI
technology to control Hanson Robokind humanoid robots. Besides that, the overarching
goal of the OpenCog Foundation is to be an open source project working towards AGI.
Most of the current contributions are made by a small OpenCog research team in Hong
Kong which is supervised by Goertzel. The group hopes to expand internationally. Goertzel
puts on the AGI conference and co-edited the Artificial General Intelligence edited volume
released in 2010.
Those are the primary groups we are aware of working towards Artificial General
Intelligence. There are a couple more, namely IBM's project led by Dharmendra Modha
and the Blue Brain Project led by Henry Markram, but there is controversy over whether
these would qualify as AGI. Arguably, everyone working in the field of computational
cognitive neuroscience, that is, computing how the brain works, sees themselves as
working towards advanced Artificial Intelligence in the long term, but only a few are
attempting it as an immediate project.

Whole Brain Emulation

Among those working towards AGI, there are two primary schools of thought on the
optimal route. The first is that AGI will be designed based on a general theory of
intelligence. The second is that AGI will be copied from human brains and that a concrete
theory of how intelligence works is not necessary. The term of art for the latter approach is
whole brain emulation. The key feature of whole brain emulation is that understanding how
intelligence works is superfluous; the need for understanding is circumvented by direct
copying from a working model36. The primary method would be slicing and scanning 37.
Whole brain emulation would work something like the following. A person's brain is preserved with chemicals at the instant of death, say from a heart attack. The brain is then cut into slices just 10 nanometers thick. These are scanned by a huge array of high-powered focused ion beam scanning electron microscopes until a complete map of the location of every neuron and its associated structures (dendrites, neuroglia, axons, etc.) is copied. In addition, the entire volume is chemically analyzed to determine
neurotransmitter concentrations and other biochemically relevant data. This information is
reconstructed in an extremely detailed simulation that attempts to boot the brain back up
and get it working again in silico. The simulation even provides a virtual body, which
requires orders of magnitude less computing power than the brain itself.
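As a rough sense of scale, the sketch below estimates the raw data volume such a scan would produce. All numbers in it are our own illustrative assumptions (a 1.4-liter brain, the 10-nanometer slice thickness mentioned above, a hypothetical 5-nanometer lateral imaging resolution, and one byte stored per voxel); it is a back-of-envelope illustration, not a figure from the emulation literature.

    # Back-of-envelope estimate of the raw data volume from slicing and
    # scanning a whole human brain. Every parameter here is an assumption
    # chosen for illustration, not a measured value.
    brain_volume_m3 = 1.4e-3        # ~1.4 liters, a typical adult brain
    slice_thickness_m = 10e-9       # 10 nm slices, as described in the text
    lateral_resolution_m = 5e-9     # assumed 5 nm pixel size for the microscopes
    bytes_per_voxel = 1             # assumed storage per imaged voxel

    voxel_volume_m3 = slice_thickness_m * lateral_resolution_m ** 2
    num_voxels = brain_volume_m3 / voxel_volume_m3
    raw_zettabytes = num_voxels * bytes_per_voxel / 1e21

    print(f"voxels to image: {num_voxels:.1e}")         # on the order of 10^21
    print(f"raw scan data:   {raw_zettabytes:.1f} ZB")  # several zettabytes

Under these assumptions the raw scan runs to several zettabytes before any compression, which is one reason estimates of when whole brain emulation becomes practical span many decades.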
There are a number of possible objections and questions to be made concerning
such a scenario. The most notable question is: how detailed does the simulation need to
be to simulate human intelligence? Depending on the answer, whole brain emulation is either just a couple of decades off, or much further away, perhaps more than a century. The reason the approach is notable is that, given our understanding of cognitive science, it is almost guaranteed to work eventually if enough computing power is thrown at the problem. In this way, we could invent intelligent software programs without ever needing to figure out how intelligence actually works. This exact approach has already been
used to design experimental hippocampal prosthetics 38.
In the case of the hippocampal prosthetic, scientists were able to build a highly
detailed model of a rat hippocampus by exposing slices of the organ to millions of electrical
impulses in a dish, creating what is called a MIMO (multi-input/multi-output) model. In
2003, this resulted in the announcement of the world's first brain prosthesis 39. In 2011, the
prosthesis was experimentally shown to carry out memory formation in the brain of a rat
with an otherwise damaged hippocampus40. All this was achieved without any comprehensive theoretical understanding of how the hippocampus works: researchers simply exposed slices of hippocampus to every possible electrical signal until they had built up a model that does exactly what the original tissue does. In the long term, there is no reason why this could not be done with the entire brain.
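The black-box logic of the MIMO approach can be illustrated with a toy example. The sketch below is purely illustrative and much simpler than the real prosthesis (which uses nonlinear dynamical models): it recovers an unknown multi-input/multi-output mapping from stimulus and response data alone, with no model of the mechanism that produced the data.

    # Toy illustration of the MIMO idea: fit a replacement for an unknown
    # input->output mapping purely from recorded stimulus/response pairs.
    # (Illustrative only; the real hippocampal prosthesis uses far more
    # sophisticated nonlinear dynamical MIMO models.)
    import numpy as np

    rng = np.random.default_rng(0)
    n_inputs, n_outputs, n_trials = 16, 8, 5000

    # The "tissue": an unknown linear response we pretend we cannot inspect.
    true_mapping = rng.normal(size=(n_inputs, n_outputs))

    stimuli = rng.poisson(lam=1.0, size=(n_trials, n_inputs)).astype(float)
    responses = stimuli @ true_mapping + rng.normal(scale=0.1, size=(n_trials, n_outputs))

    # The "prosthesis": a model fit from data alone, by least squares.
    fitted_mapping, *_ = np.linalg.lstsq(stimuli, responses, rcond=None)

    test_input = rng.poisson(lam=1.0, size=(1, n_inputs)).astype(float)
    print("tissue output:    ", (test_input @ true_mapping).round(2))
    print("prosthesis output:", (test_input @ fitted_mapping).round(2))

The fitted model reproduces the tissue's behavior without anyone understanding, in mechanistic terms, why the tissue behaves that way, and that is the whole point of the approach.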
At this point it is appropriate to make a few clarifications regarding philosophical
issues. Most people intuitively view the mind as separate from the physical world, a view known as Cartesian dualism. By contrast, experimental evidence shows the mind is what the brain does and nothing more; this is a basic tenet of modern cognitive science 41.
Simply put, the brain is a machine and the mind is a program running on that machine.
Though the analogy is not perfect, it essentially holds. Our cognition occurs in the
activation patterns and structure of our neural network. Scientists have even begun
investigating the exact sequence of events that determines how memories are formed and
stored42.
One day, we will have a completely mechanical (though not deterministic, but noisy
and probabilistic) account of how the brain generates the mind. Once that is achieved, as
long as we can build computers powerful enough, it will be theoretically possible to
manufacture intelligent minds by the billions, just as we manufacture automobiles or
toasters today. Ethics aside, this is a real possibility given what we know about how the
brain works and where science is going. Some commentators think it could be thousands
of years off43, others just a few decades44, but the general feasibility of advanced AI is
widely accepted45,46,47. The basic theoretical feasibility of generally intelligent AI is so well
established that it is difficult to locate academic references which argue otherwise.
There have been extensive debates on whether the mind is something more than a
computer program. Even if it is, everything we know about the mind and brain shows that if
someone or something (like evolution) puts neurons together in the right way, it creates an
intelligent being. The next question is: if we put bits together in a computer the right way,
does that create an intelligent being? It must, otherwise the hippocampal prosthesis which
has been experimentally demonstrated would not work. The prosthesis works based on a
computer chip, but it performs the function of memory formation in rats. Scientists have already demonstrated that you can remove a part of the brain, replace it with a computer, and have that computer do the exact same thing. There is no debate; it has already been done.
Consider a person who has a bit of their brain replaced by a prosthetic chip over time.
If neuro-prosthesis technology is advanced enough, there will be a chip available for every
part of the brain, and these chips will be able to communicate with one another, generate
thoughts, and store memories exactly as if they were neural tissue. Even the
characteristics most stereotypically associated with biology and humanness will be
replicated in such implants. If it can be done with the hippocampus, it can be done with the
rest of the brain, including the parts responsible for humor, sexuality, charisma, complex
real-life decisions, and so on. This scenario is called the Gradual Replacement method of
mind uploading, and provides a proof-of-concept for the feasibility of AGI 48.
The philosophical question is, at which point does it stop being you? From a global
risk perspective, it does not matter. If artificial brains can be mass-produced and become
autonomously self-improving, they can threaten or help the human species a great deal,
whether or not they reflect the spirit of the person they were modeled on.
According to a report on whole brain emulation by a group at Oxford, there is a
median estimate of 2080 for when the requisite technology would be ready to make it
possible49. Suppose that this estimate is too optimistic, and the technology does not
become available until 2180 or 2280. Regardless, the interesting fact is that whole brain
emulation sets a concrete upper bound on the creation date of Artificial General
Intelligence. If AI mavens never crack the rules of intelligence, we are still guaranteed to
eventually produce AGI if technological progress keeps going, through this brain emulation
route. The only way to stop this trend would be through global nuclear warfare, an anti-AI
global dictatorship, or similar.
One objection sometimes presented to reply to whole brain emulation or AGI in
general is that current silicon computer architectures are somehow incapable of the kind of
computation necessary to run intelligence. There are several responses which can be
made to this. First of all, modern computers are Turing-complete, meaning that, given enough time and memory, they can run any computable program. Therefore, if intelligence consists of a particular kind of algorithmic program, as many scientists believe, a computer of the kind we use today could run it. Suppose instead that intelligence does not consist of an algorithmic program. In that case, we can program a computer to run non-algorithmically: a computer can use a random number generator or a detailed simulation of the human neocortex to add whatever "special sauce" is needed to bridge the gap between conventional software programs and creative intelligence, if that is required.
If all else fails, we can build computers which are arbitrarily similar to the neurons
which are known to successfully implement intelligence. We could use synthetic biology to
custom-grow blank slate brains which are then infused with biomorphic chips and
reprogrammed to become intelligent. The possibilities are limitless. No matter what
obstacles stand in the way on the road to AGI, it seems likely they will eventually be
removed. The economic value and importance of AI ensures that creative minds will
continue to pursue it.

Ensuring the Continuation of Moore's Law

We have reviewed the definitions of AI, some philosophical issues, the difference
between narrow and general AI, some concrete efforts towards both types of AI, and the
approach known as whole brain emulation. Now we review the underlying force driving this whole enterprise: the increasing speed and falling cost of powerful computers. Though software algorithm efficiency has improved greatly over the past few decades, in some cases by a factor of 100,000 or more50, the affordability of computing power has improved even more dramatically. The number of transistors crammed onto a typical computer chip today is about two billion; in 1970 it was only a couple of thousand. That is roughly a million-fold improvement in just over four decades.
One of the common exercises in predicting when AGI is likely to be created is to
examine the continuing improvement of computers, make a rough estimate of the
computing power of the human brain, and see where the two lines cross. People then
hypothesize that AGI will be created around that date. There are a few problems with this
approach. The first is that estimates of the processing power of the human brain vary
across numerous orders of magnitude, from 10^15 operations per second (Hans Moravec's estimate51) to 10^16 ops/sec (Ray Kurzweil52) to 10^17 ops/sec (Nick Bostrom53) to 10^19 for a fully realistic upload of a human individual (Ray Kurzweil 54). The second is that Moore's
law, the trend that describes the falling cost of computing power, is leveling off, whereas
these projections generally assume it continues indefinitely.
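To see how sensitive such predictions are to the first problem, consider the simple extrapolation sketched below. It assumes (our numbers, for illustration only) that $1,000 bought roughly 8.3 x 10^12 ops/sec in late 2013, consistent with the roughly $0.12-per-gigaflop figure used later in this chapter, and that affordability doubles every two years; it then asks when $1,000 buys one brain's worth of computation under each published estimate.

    # Illustrative "crossover" exercise: given an assumed 2013 baseline and an
    # assumed doubling time, when does $1,000 of hardware reach each published
    # estimate of the brain's processing power? Assumptions are ours.
    import math

    ops_per_1000usd_2013 = 8.3e12   # ~8.3 teraflops per $1,000 at ~$0.12/gigaflop
    doubling_time_years = 2.0       # optimistic Moore's-law-style assumption

    brain_estimates = {
        "Moravec (1e15 ops/sec)": 1e15,
        "Kurzweil (1e16 ops/sec)": 1e16,
        "Bostrom (1e17 ops/sec)": 1e17,
        "Kurzweil upload (1e19 ops/sec)": 1e19,
    }

    for name, ops in brain_estimates.items():
        doublings_needed = math.log2(ops / ops_per_1000usd_2013)
        crossover_year = 2013 + doublings_needed * doubling_time_years
        print(f"{name}: ~{crossover_year:.0f}")

Under these assumptions the crossover year slides from the late 2020s to the 2050s depending solely on which brain estimate is chosen, before even accounting for the second problem, the possible stalling of the doubling trend itself.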
Moore's law refers to the observation made by Intel co-founder Gordon Moore that
the number of transistors on a computer chip tends to double every two years. The trend
has more or less held since 1965, taking into account that newer chips are not getting
much faster but are more powerful when their multiple cores are accounted for. Still, there
are a number of signs the course of improvement is about to stall out. Sometime around
2021, we will reach the limits of photolithographic methods used to manufacture current
chips. That means that super-huge, super-expensive chip fabrication plants will need to
exploit a completely new technology to make faster chips. No emerging computing
technology is ready for that kind of widespread use. Even Intel's former chief architect said
in 2013 that he expects Moore's law to be dead within a decade 55.
In response, the AGI-optimist side makes two points. One is that just because computing improvement may level off in the 2020s does not mean that it won't pick up again in the late 2020s, early 2030s, or beyond. The second point is that computers are already fast enough that they are edging well into the zone of some estimates of human-brain-equivalent computing power. As of this writing, the world's fastest supercomputer is China's Tianhe-2, with a performance of 33.86 petaflop/s (quadrillions of calculations per second). In scientific notation, that is 3.386 x 10^16 ops/sec, which many argue would be sufficient for a rough simulation of human-level intelligence, if we knew how to code it. This conceivably puts Artificial General Intelligence within the reach of a large company or government in the coming decades, even if computing improvements do not continue past 2021. Notably, the computing resources at Google's disposal are vast, many times greater than the computing power of the world's fastest supercomputer.
Will computers be improved beyond the photolithographic limit of 2021? Computing power improved by a factor of a million between 1970 and 2010; could it improve by another factor of a million by 2050? We, along with other futurists like Kurzweil, believe the answer is likely
yes, but that there will be fits and starts on the road there, and that it will not be a smooth
progression.
There are a number of technologies which have been floated as far back as the late
80s to take us beyond photolithography in the realm of chip fabrication. Many of these
have been demonstrated on a limited scale in the lab. A few of them are:

- Supercooling chips to make them faster56. In June 2006, researchers from IBM and Georgia Tech were able to create silicon-germanium transistors with a switching speed of 500 GHz at a temperature of 4.5 K (-269 C; -452 F), about the same temperature as liquid helium. This is more than two hundred times faster than most modern chips. The problem is that supercooled fluids require bulky, loud, and expensive refrigeration systems.

- Stacking chip elements to make them faster57. Current chip designs are essentially two-dimensional. Newer components such as memristors are easier to stack three-dimensionally. Memristors could replace standard forms of RAM, which are expected to hit a wall sometime around 2018 58. HP's CTO suggested that the company's storage arrays may offer 100 TB memristor drives by 2018, though later claims by the company have warned consumers not to get their hopes up 59. The first memristor array based on a standard CMOS integrated circuit was built in 2012.

- A February 2010 paper in Nature Nanotechnology reported the creation of a junctionless transistor made using nanowires60. Such transistors could be made 10 nanometers across rather than the 15 nanometers which is currently possible. It seems unlikely this would prolong Moore's law by more than a few years, though.

- In April 2011, a team at the University of Pittsburgh created a 1.5 nanometer single-electron transistor61. Other transistors on this extremely small scale have been built, and would improve computing greatly if they were commercialized; however, the gap between a lab demonstration and commercialization in this case is large. These transistors are highly experimental, cannot be mass-produced, and their reliable performance in large arrays has not been verified. They might not arrive on the market for decades.

- In February 2012, researchers at the University of New South Wales were able to build a transistor that consists of a single phosphorus atom on a silicon surface 62.

- In April 2014, Stanford scientists built a computer chip called Neurogrid that is specifically designed to simulate the functions of brains at a high power efficiency 63. The Neurogrid is 9,000 times faster and more efficient than a typical PC when simulating the functions of the human brain, the researchers claim. This could be used to build chips that drive robots running software similar to the motor cortex of the human brain.

Technology writer Louie Helm wrote, "Predicting an end to Moore's law sometime in the next 20 years is probably foolish right now." 64 To make his case, he highlights graphene transistors and photonics, two research areas not mentioned above, which would allow for 10-20 times improvements in switching speeds even if transistor chip density levels off. Writing about smaller and smaller chip sizes, he notes that 14 nm transistors were already being delivered in early 2014, that researchers see a clear way to make 10 nm transistors, that silicon-germanium transistors provide a route to 7 nm transistors, that 4 nm transistors have already been built in experimental quantum computers, that 2 nm prototypes have already been built, that 1 nm exotic graphene prototypes have been built, and that ~0.2 nm transistors have been built as single-atom transistors.
In our own view, the commercialization of anything smaller than about 7 nm is speculative from our present vantage point, especially given that transistor switching speeds have already essentially leveled off and multi-core chips have been necessary to keep any semblance of Moore's law going65.
On the increasing miniaturization of transistors, one article comments 66: "The path beyond 14nm is treacherous, and by no means a sure thing, but with roadmaps from Intel and Applied Materials both hinting that 5nm is being researched, we remain hopeful. Perhaps the better question to ask, though, is whether it's worth scaling to such tiny geometries. With each step down, the process becomes ever more complex, and thus more expensive and more likely to be plagued by low yields. There may be better gains to be had from moving sideways, to materials and architectures that can operate at faster frequencies and with more parallelism, rather than brute-forcing the continuation of Moore's law."
Whether or not transistors are made much smaller than 10 nm, it seems likely to us
that computers will continue to become faster because of parallelism and materials that
operate at faster switching speeds67. Photonics, for instance, uses light signals to
accelerate switching speeds, providing greater computation even if miniaturization halts 68.
Suppose that Moore's law does not continue indefinitely; that instead of every two years, computer performance doubles just every four years. We can still use this as a metric to extrapolate improving computing power and estimate when certain levels of computation will become affordable. In December 2013, a Pentium G550-based system provided 4.848 teraflops (trillions of operations per second) of computing power for $681.84 USD, working out to about $0.12 per gigaflop. Say we conservatively assume Moore's law will continue to double computing power affordability every two years through December 2021, at which point it will slow to a doubling every four years and continue at that rate through 2080. Consider that a realistic upper estimate for human brain-equivalent computing power is 10^17 operations per second, or 100,000 teraflops. This number is obtained by taking the approximate number of neurons in the brain, 100 billion, multiplying by roughly 10,000 synaptic connections per neuron, and multiplying again by a firing rate of roughly 200 times per second. That gives us 2 x 10^17, which is likely a high estimate because only a minority of neurons in the brain are firing or contributing to a computation at any given time. The number 10^17 is used for convenience, but the real number may be a hundred times lower or more, hence Kurzweil's estimate of 10^16 and Moravec's of 10^15.
Let's consider the amount of computing power which can be bought for the average
IT budget of a mid-sized business, which according to a 2013 survey was $192,000 69. In
December 2013, that business would be able to buy 1,600 teraflops of computing power,
over 10^15 ops/sec, which exceeds Moravec's estimate for the computing power of the
human brain. So, as mentioned previously, we are already there today.
Another benchmark would be to ask how much computing power can be bought for
the average mid-sized business IT budget in 2021. By that point, we can expect 4 Moore's
law doublings, or a factor of 16, corresponding to progress during the 8 years between
2013 and 2021. So, we can estimate, based on extrapolating the 2013 data, that a gigaflop
will cost less than a cent and a teraflop will cost $7.50. A petaflop (10^15 ops/sec) would cost
$7,500. That means human-equivalent computing based on Kurzweil's estimate would cost
just $75,000, and Bostrom's estimate would cost $750,000. This would put it within the
reach of many mid-sized businesses in the 2021 timeframe.
Jumping forward again to 2041, within the lifetime of many alive today, we get 5
additional doublings, or a factor of 32, based on the Moore's law estimate of doublings
every four years instead of every two. In that year, according to our model, a teraflop would cost just 23 cents in 2013 dollars, with a petaflop costing $234. If this model is realistic, it means
that by 2041, a whole human brain's worth of processing power (according to Kurzweil's
estimate) would cost no more than a night in a nice hotel. Think about what that means if
AGI has been developed and can do any job an educated man can do. Today, the average
annual salary of an electrical engineer is $63,851, so estimating a 20-year working life, that's
$1,277,020. In 2041, if AGI has been developed and can replace an electrical engineer, its
hardware could be bought for roughly $234, giving the company a savings of $1,276,786. It
can be hard to take the economic implications of AGI seriously sometimes, because they
are so intuitively extreme. Yet similar (though not as immense) improvements occurred
during the Industrial Revolution when machines replaced human workers.
Let's examine one more price point, that of 2081. In our model, that gives us a full 10 doublings over 2041, a factor of 1,024. Relative to 2013, computing power per dollar has improved roughly 524,288-fold, which is actually quite modest considering both the history of the field and what is understood about the possibilities of advanced nanocomputing. At these prices, a petaflop (10^15 ops/sec) becomes available for just 23 cents, so Kurzweil's estimate of brain equivalency costs just two dollars and thirty cents. Bostrom's estimate costs 23 dollars, and a full exaflop (10^18 ops/sec) costs just $230.
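The arithmetic behind these price points is easy to reproduce. The sketch below simply encodes the assumptions stated above (a baseline of $0.12 per gigaflop in December 2013, doublings of affordability every two years until 2021 and every four years thereafter); it is a restatement of the chapter's model, not an independent forecast.

    # Encodes the chapter's cost-extrapolation model: $0.12 per gigaflop in
    # December 2013, affordability doubling every 2 years until 2021 and
    # every 4 years afterward.
    def usd_per_gigaflop(year):
        baseline = 0.12                                  # December 2013
        if year <= 2021:
            doublings = (year - 2013) / 2.0
        else:
            doublings = (2021 - 2013) / 2.0 + (year - 2021) / 4.0
        return baseline / (2.0 ** doublings)

    brain_estimates_gigaflops = {
        "Moravec (10^15 ops/sec)": 1e6,
        "Kurzweil (10^16 ops/sec)": 1e7,
        "Bostrom (10^17 ops/sec)": 1e8,
    }

    for year in (2021, 2041, 2081):
        cost = usd_per_gigaflop(year)
        print(f"{year}: petaflop for ${cost * 1e6:,.2f}")
        for label, gigaflops in brain_estimates_gigaflops.items():
            print(f"    {label}: ${cost * gigaflops:,.2f}")

Running this reproduces the figures used in the text: a petaflop for about $7,500 in 2021, about $234 in 2041, and about 23 cents in 2081.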
Based on these calculations, we have made the case that the hardware for AGI will
be available in a few decades, if it isn't already today. Similar arguments to this effect have
been made before70,71,72. Still, as economist Robin Hanson states, "AI takes software, not just hardware."73

The Software of Artificial General Intelligence

An obstacle that prevents a more widespread appreciation of the potential mid-term (2040-2080) feasibility of AGI is that it is difficult for many people to imagine how anyone would even begin to design an intelligent machine, even in theory. Intelligence is complex and ineffable, and we know close to nothing about it, right? Not really, actually. A great deal is known about how the brain implements intelligence, and weighty volumes have been filled with the details of neural computation. When people say we know nothing about intelligence, they are speaking as laypeople unaware of the last 50 years of leaps and bounds in cognitive science.
As a starting place to learn about intelligence, take the MIT Encyclopedia of Cognitive
Science (MITECS), a cognitive science tome. It's 1,104 pages, and that is just for a cursory
introduction. A Google Scholar search for "neuroscience" returns over 1,930,000 results; a search for "cognitive science" returns 1,270,000, each result corresponding to a scientific paper. These papers contain facts or theories about human intelligence, and represent knowledge about
it.
A notable area of improvement in our knowledge, particularly in the last 15 years, has
been the development of highly detailed computational models of particular brain functions.
This is the field of computational neuroscience. Although cognitive science in general
provides bird's-eye-view theories of how different aspects of the brain function,
computational neuroscience aims to hammer them out in excruciating detail, with enough
clarity that these aspects may be comprehensively understood and functional models built.
So far, the most progress has been made on modeling of the visual cortex, motor
cortex, cochlea (part of the inner ear that converts sounds to neural impulses), the
hippocampus, and the interaction between frontal cortex and basal ganglia (located at the
base of the forebrain and close to the brain stem). Many publications in these areas have
over a thousand citations: for instance, the paper "Computational modeling of visual attention" by Laurent Itti and Christof Koch has been cited over 2,575 times 74. The paper "Computational principles of movement neuroscience" by Daniel M. Wolpert and Zoubin Ghahramani has been cited over 1,038 times 75.
One might think that while the peripheral functions of the brain are beginning to be
understood, such as the operations of the visual and motor cortex, much less is known
about how the brain integrates sensory information into symbols, makes executive
decisions, and other high-level core functions. While true, a substantial amount is known
about some of these as well, though not in enough detail to create any AIs which could be
successful. Perhaps this is why most modern AI projects use theories of induction and
reasoning which are only loosely inspired by the brain, in contrast to past AI efforts. One
example would be discarding neuroscience-inspired sensory processing in favor of
multilevel Bayesian networks, which are statistically optimal for the job of analyzing messy
sensory data76.
A limitation in the field of neuroscience is that the role of many brain functions is only
understood when ill-fated people get into motorcycle accidents that selectively destroy
different areas of the brain, and their neural function is subsequently studied. There are
other methods of figuring out which part of the brain does what, but the hard foundation of
really clarifying work is based on this crude motorcycle method. To move forward requires
radically new experimental methods and investigative tools. One promising example would
be the field of optogenetics, where researchers introduce genes into neurons that cause
them to selectively respond to different wavelengths of light 77,78. After the genes have been
introduced, researchers can then stimulate neurons with light pulses as if they were playing
a piano, allowing them to better determine the function and role of each neuron.
The use of optogenetics will one day progress to what a pioneer in the field called "high throughput circuit screening" of the brain, meaning vast networks of brain-computer interfaces that expose neural clusters in living subjects to every kind of input imaginable and work out their computations in exhaustive detail 79. Progress in neuroscience of this kind, combined with advances in pure mathematics, decision theory, control theory, and so on, will eventually yield the software of advanced AGI, able to do any cognitive task that human beings can do, including improving its own design autonomously.
The first imaginable way to achieve this would be to divide the brain up into a discrete
number of parts, create software programs that essentially do what the parts do, and
combine them into an integrated super-system that has human-equivalent intelligence.
From that point, these parts could be individually improved and upgraded until substantially
greater-than-human intelligence is reached.
Another alternative is that a general theory of intelligence may be discovered, and an Artificial Intelligence built based on it, resulting in an intelligence with features only
vaguely similar to human brains. This would be similar to how the principles of flight were
uncovered and machines built to exploit them, machines far simpler and higher-performing
than natural flyers like birds.

Features of Superhuman Intelligence

There are a number of AI-related risks which are important to note, including risks that
stem from AI of pre-human intelligence, but the first AI-related risk we cover in this chapter
concerns self-improving AI of superhuman intelligence. We address this first both because
it's what people immediately think of when they hear the words "risk of Artificial Intelligence" and because it seems like the AI-related risk most likely to wipe us out.

Consider a human-equivalent AI running on a computer. This AI is ostensibly of
human-equivalent intelligence, but it has a great number of advantages that no human has.
It has perfect memory, the ability to carry out the same computation forever without getting bored, and the ability to stay awake 24 hours a day, 365 days a year. AI cognition can be fast-forwarded, slowed down, rewound, saved, upgraded, excerpted, and rebooted. AIs can directly absorb information from the Internet in ways humans cannot. AIs can be copied, and can communicate with one another through high-bandwidth data links. Eliezer Yudkowsky of the Machine Intelligence Research Institute calls these features the "AI Advantage"80.
Considering likely features of a seed AI, that is, an AI of roughly human-equivalent
intelligence specifically designed to be capable of recursively self-improving, Yudkowsky
lists the following:

- performing repetitive tasks without getting bored
- performing algorithmic tasks at greater linear speeds than our neurons permit
- performing complex algorithmic tasks without making mistakes
- new sensory modalities, e.g., a codic cortex for examining code
- the ability to blend conscious and autonomic thought
- freedom from human failings, such as human politics
- overpower: the ability to infuse more computing power to deal with a specific task
- self-observation: the ability to record a module and play it back in slow motion
- conscious learning: the ability to deliberately edit remembered symbols
- self-improvement, in the sense of fundamentally improving its own architecture

These features culminate in what Yudkowsky calls "self-encapsulation" and "recursive self-enhancement," meaning a seed AI that fully understands itself and is capable of open-ended self-improvement. Superintelligence, in this context, refers to a mind that is both
qualitatively and quantitatively more intelligent than the entire human species, much in the
way that a human is qualitatively and quantitatively more intelligent than mice in general.
Consider that humans have various faculties: the ability to pick dynamic objects out of
a cluttered visual scene, to interpret faint sounds, to come up with creative ideas for new tools, to mediate between conflicting parties, to come up with a plan of attack, and so on.
Further postulate that we rate our competence in these areas, as human beings, on a scale
from 1 to 1000, with 1000 being the maximum of what is theoretically possible for
intelligence in general. How would we, the species Homo sapiens, rate? Consider the
argument that humans rate 1, or 0.1, or 0.00001 on most of these scales.
Why would humans rate so lowly on such a scale? We are intelligent, right? Yes, we
are intelligent, but we are also the first general intelligence to evolve on planet Earth, that
is, the dumbest possible creature that can qualify as a general intelligence. We are the first
version of intelligence, like a rock was the first possible weapon, or walking was the first
possible form of transportation. You wouldn't expect Version 1.0 to be the best at what it
can do. We have been successful enough to become the animal species with the greatest
biomass on planet Earth, but what is this compared with what is possible across the
universe? Our species must dial back our anthropocentrism when considering the space of
minds-in-general, that is, the kinds of minds which could theoretically exist but do not, like
machine intelligences.
Yudkowsky uses the word "smartness" to convey the qualitative difference between different kinds of intelligence81. He writes: "Smartness is the measure of what you see as obvious, what you can see as obvious in retrospect, what you can invent, and what you can comprehend. To be more precise about it, smartness is the measure of your semantic primitives (what is simple in retrospect), the way in which you manipulate the semantic primitives (what is obvious), the structures your semantic primitives can form (what you can comprehend), and the way you can manipulate those structures (what you can invent). If you speak complexity theory, the difference between obvious and obvious in retrospect, or inventable and comprehensible, is like the difference between NP and P."
Semantic primitives across minds may vary greatly depending on how much
computational firepower the particular mind has and how it is organized, but the semantic
primitives for the entire human species, Homo sapiens sapiens, are essentially the same.
For a human, a semantic primitive might be an object like an apple. More advanced
concepts, such as mathematics, object-oriented programming, or bridge design, require
manipulating quite a few more complicated symbols, which is why producing results in
these areas requires hard work and thought. But there is no reason why an entity might not
exist that could design an entire starship or nuclear reactor intuitively, or almost
automatically, as if it required no effort at all, because it has the requisite cognitive
hardware and software that is specialized for achieving this. Surely there are ways of
manipulating symbols which are qualitatively superior to what the human brain can handle.
Quantitative comparisons of the functioning of the human brain to optimal Bayesian models
strongly suggest this82.
Consider the space of all possible thoughts: humanity is like an explorer in a vast underground cavern with just a weak flashlight. There are things that are obvious which we simply miss; for instance, the Romans had all the tools to build hot air balloons and even kick off the Industrial Revolution, but they failed to do so. It would have taken only one genius, a Newton, to introduce the fundamental concepts, but they didn't have one. Another example is how many decades passed between the invention of lenses and their use in microscopes or telescopes. All the fundamental technology was there; people just didn't see how it could be used. Multilayer neural networks were invented in 1969, but their potential was not realized until the 1980s. There must be millions of examples of seemingly obvious advancements which would benefit civilization greatly and for which all the basic tools already exist, but we are just too stupid to connect the dots.
Even if we suspend the possibility of qualitatively smarter intelligence based on better
cognitive architectures, pure quantitative computational brute force can produce results
that seem qualitatively better. For instance, take a fighter pilot and replace him with an
Artificial Intelligence that not only has a thousand times the computational power of the
human brain, but actually perceives time in slow-motion relative to our perspective since it
is capable of thinking so quickly. Such an Artificial Intelligence might be able to consider a
million possible inputs in the time it takes for a human pilot to consider one, much like Deep
Blue could consider many simultaneous possible chess moves. The AI could have motor
experiences integrated from the flight data of ten thousand pilots and a million training
simulations, so not only would it think more and faster, but its instincts, its gut, would be superior to those of any human. We think of gut instincts as ineffable, but they are nothing more
than patterns in neuronal connection strengths, patterns which can be surpassed or copied
by the right kind of Artificial Intelligence.

Seed Artificial Intelligence

The Holy Grail of AI, beyond mere human equivalence, is a hypothetical advanced
Artificial Intelligence that can improve itself in an open-ended manner without human
assistance, called seed AI. To accomplish this, the AI would need to become smarter than the team of computer scientists and AI theorists working on the project. In a world where computation equivalent to the human brain can be bought for $234, the world of 2041 we outlined above, and where the challenge of AGI in general has been solved, this is
imaginable. For a mid-sized company's IT budget in 2041 ($200,000 in 2012 dollars), the
project could buy computing hardware equivalent to almost a thousand human brains. This
could be transformed into an army of 1,000 virtual programmers working on the AI. Every
time the AI makes itself slightly more efficient, the benefits compound, since making itself
more efficient also enhances its intelligence and further improves its ability to improve
itself, until some unknown point of diminishing returns. This point could be very, very far
beyond the human level.
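The compounding logic can be made concrete with a toy model. The sketch below is an illustration of the argument only, not a prediction: each round, the fractional gain the system finds is assumed to be proportional to its current capability, capped by an arbitrary diminishing-returns ceiling.

    # Toy model of compounding self-improvement. All parameters are arbitrary
    # illustrative assumptions; the point is the shape of the curve, not the numbers.
    def self_improvement_curve(rounds=30, gain_per_capability=0.05,
                               max_gain=10.0, ceiling=1e6):
        capability = 1.0                     # 1.0 = human-equivalent
        history = [capability]
        for _ in range(rounds):
            gain = min(gain_per_capability * capability, max_gain)
            capability = min(capability * (1.0 + gain), ceiling)
            history.append(capability)
        return history

    for i, c in enumerate(self_improvement_curve()):
        if i % 5 == 0:
            print(f"round {i:2d}: {c:>12,.1f} x human-equivalent")

In toy models of this shape, progress looks slow and unimpressive for many rounds and then becomes very fast within a few rounds, which is one reason observers might receive little warning.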
In this section, we ask you to temporarily set aside the view, held by some, that AGI is thousands of years away, and consider the extended implications of what could happen if AGI is indeed created sometime in the next century. Even if AGI is not created for
over a thousand years, the points made in this section are still relevant to long-term global
risk.
The Less Wrong wiki defines seed AI as follows: "an Artificial General Intelligence (AGI) which improves itself by recursively rewriting its own source code without human intervention."83 The idea is that there is some lower limit where an AI becomes sophisticated enough to improve its own intelligence qualitatively, and at that point the AI is likely to "go critical," like a nuclear reaction. A similar analogy can be made between fire and nuclear explosions. AI releases energy from finer physical structures (transistors, which are much smaller and faster than neurons), similar to the way that a nuclear explosion releases energy from the bonds of the atomic nucleus, a far smaller and finer structure than the chemical bonds from which fire releases energy. In the same way that a nuclear explosion can be quadrillions of times more energetic than a fire, it could be that a seed AGI improving itself can reshape the world quadrillions of times more dramatically than human intelligence.
The process whereby a smarter-than-human entity, such as an AI or cognitively
enhanced human, recursively enhances itself and leaves humanity far behind, has been
called an "intelligence explosion" or "Singularity." The term "intelligence explosion" is more useful since it lacks much of the baggage of the term "Singularity," though the two are closely related historically. "Intelligence explosion" (sometimes capitalized, sometimes not) has a more solid (but still highly limited) academic literature associated with it than the term "Singularity," which is excessively broad and vague. The term "intelligence explosion" was coined by mathematician I. J. Good, when in 1965 he wrote 84:
Let an ultraintelligent machine be defined as a machine that can far surpass all
the intellectual activities of any man however clever. Since the design of machines
is one of these intellectual activities, an ultraintelligent machine could design even
better machines; there would then unquestionably be an 'intelligence explosion,' and
the intelligence of man would be left far behind. Thus the first ultraintelligent
machine is the last invention that man need ever make.
The words "last invention that man need ever make" are particularly important, since they capture the gravity of the event. Once an "ultraintelligent machine" (here we simply use the word superintelligence, a standard term) reaches a certain level of intelligence, it
can invent all future inventions for us, even make all future plans and decisions, if we let it.
Of course, precisely what it does depends on its motivations. Discussions of
superintelligent motivations are such an intellectual quagmire that we address them as
separately as possible from the theoretical case for rapid AI self-improvement, in a later
section.
There are some basic motivations an Artificial Intelligence would need to possess to
self-improve: namely, the desire to self-improve. This is what Steven Omohundro would
call a "basic AI drive,"85 or Eliezer Yudkowsky would call a "convergent subgoal" 86; that is, a
motivation which would be of use to the AI in a wide variety of different situations, with a
range of possible goal sets. Whatever your goal is, you can accomplish it more effectively
if you yourself are more effective and efficient. Omohundro argues that such a motivation
would even arise in a sufficiently intelligent chess-playing robot: after all, would it not help
the robot become more effective at chess if it ordered more processors and integrated
them into itself, or improved its own programming? For any AI intelligent enough to
formulate its own goals and pursue them open-endedly, self-improvement certainly seems
like it would be high on the list. Even if it were not, and the first 1,000 human-level AIs miraculously had no desire to self-improve, all it would take is one to get the ball rolling. A self-improving AI, if sufficiently powerful, could co-opt other AIs to help it improve.
Assuming that we do have a human-level AI, and it has some basic resources which
allow it to gain further resources and improve itself, like an Internet connection, it is worth
asking how steep we figure the improvement curve is likely to be. According to some
experts, the improvement curve could be steep87,88,89. The phrase "hard takeoff" was coined to describe "The Singularity scenario in which a mind makes the transition from pre-human or human-equivalent intelligence to strong transhumanity or superintelligence over the course of days or hours."90 The hard takeoff scenario is a sort of AI jack-in-the-box where AI becomes very powerful very, very quickly. Alternatively, AI ascent from human-equivalence to superintelligence could take years or decades rather than days or hours. That would be a "soft takeoff"91.
The most rigorous arguments for an AI hard takeoff are economic. The paper "Intelligence Explosion Microeconomics" by Yudkowsky makes some general arguments, as does the Uncertain Future model from the same organization 92,93. The arguments are
too long to fully cover here, but we'll give a quick overview.
The central argument rests on "hardware overhang" or "computing overhang," the idea that when AGI is finally created, it is likely to constitute only a relatively small fraction of global computing power, which it can then expand to exploit 94. The idea is that there would be an abundance of computing power at the time AI is created. That means the spark (the AI itself) would be only a small being relative to the sea of fuel: the computing power of the worldwide network it finds itself embedded in. Say that the first AI requires a petaflop of computation, 10^15 ops/sec. In the world of 2041 in our model, that costs just $234 in 2013 dollars, meaning the typical mid-sized business would be able to afford to run 824 copies of that AI just with its annual IT budget. Even if the AI were a hundred times more computation-hungry than that, the organization would be able to afford 8 copies, which could expand to hundreds and thousands if these AIs can earn enough money to buy
additional computing power. Criticality occurs when an AI is able to generate enough resources to make copies of itself so quickly it overshadows the global economy.
Each of those 824 (or 8) copies would be able to work full-time to earn money through
contract websites, performing remote work like writing code or other consulting. If we
assume the AI can make $50/hr., which is typical for skilled coders, it can quickly turn that
money around to rent computing power to make more copies of itself. With the virtuality of
cloud computing, it can rent computer shares without needing to install a single piece of
hardware. Even if each AI-worth of hardware costs $23,750 to buy, as would be the case in
our 2041 scenario where the company can only afford 8 copies on its annual budget, it
would cost far, far less to rent that amount of hardware for just a few hours, and in that
time, it could use that hardware to continue to earn more money. This lowers the barrier to
entry and intensifies the positive feedback.
As soon as an AI makes more money from new AI copies than it does paying to bring
them into existence, the improvement curve quickly goes exponential. In a matter of days
or weeks, the AI would be capable of renting out tens of thousands of AIs worth of
computation, which it can then turn around into hundreds of thousands of AIs, and so on,
until it rents out the totality of computing power available for rent. We outline this scenario
in more detail towards the end of the chapter.
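To make the arithmetic of this feedback loop concrete, here is a minimal toy model in Python. The $50/hr wage is the figure used above; the hourly rental price and the total amount of rentable cloud capacity are invented assumptions, chosen only to illustrate the shape of the curve.

# Toy model of the rent-a-copy feedback loop sketched above. All numbers are
# illustrative assumptions except the $50/hr contract wage mentioned in the text.
HOURLY_WAGE = 50.0            # what one copy earns doing remote contract work
HOURLY_RENT = 5.0             # assumed cloud price of one AI-equivalent of compute
RENTABLE_CLOUD = 1_000_000    # assumed AI-equivalents available for rent worldwide

copies = 8                    # the handful of copies a mid-sized firm could afford
day = 0
while copies < RENTABLE_CLOUD:
    day += 1
    daily_surplus = copies * 24 * (HOURLY_WAGE - HOURLY_RENT)
    # The surplus buys additional 24-hour rentals for the next day.
    new_copies = int(daily_surplus // (24 * HOURLY_RENT))
    copies = min(copies + new_copies, RENTABLE_CLOUD)

print(f"Rentable cloud saturated after {day} days with {copies} copies")

Under these assumptions the number of copies grows roughly tenfold per day, so the rentable cloud is exhausted within about a week; more conservative assumptions stretch this to weeks without changing the qualitative picture.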

From Virtuality to Physicality


An AGI seeking to improve itself needs a strategy to jump the gap between a purely
virtual existence and the real, physical world. It requires a means to directly manufacture
more computers for itself, to expand its cognitive powers and better achieve its goals. It
also needs powerful robotics to secure its territory and acquire raw resources like ores,
water, and carbon.
These goals could be accomplished through a variety of means, most notably nanotechnology and molecular manufacturing, described in detail in the next chapter. The AGI could order a few custom-designed proteins, programmed for molecular manipulation, giving it a rudimentary nanotechnology (akin to ribosomes), which it could use to craft a more advanced nanotechnology capable of manufacturing additional computers95. If it
has great computing power, it can model the molecular dynamics of nanorobotics. Within a
limited amount of time, the AI could be manufacturing computers and robots for itself with
nanomanufacturing. Such an AI has a very real chance of taking over the planet if it wants
to, since self-replicating nano-robots could increase in physical influence exponentially until
they greatly outmatch all human resources (a scenario described in detail in the next
chapter). The AI and its robotics could manufacture power sources the same way humans do, whether through solar, biomass, nuclear power, and so on.
Humans are not able to directly construct our own intelligence in the way that an advanced AI could. It takes at least 18 years to go from the physical act of reproduction to the maturity of an adult human. An AI could copy itself in under a minute. The world's fastest data transfer is 25.6 terabytes per second, while a commonly cited estimate for the information content of the human brain is 100 TB. Taking into account likely improvements in data transfer by the time AI is created, it might be able to copy itself in under a second. Even if it takes an AI a whole 24 hours to earn the money to rent a copy of itself for 24 hours, within 31 days you theoretically get 2^30 AIs. Of course, the AI would be limited long
before then by the market itself and the computing power that the cloud could provide until
the AI(s) can manufacture its own computers. In addition, a growing AI might be limited by
concerns for stealth. If the market provides insufficient resources, the AI-aggregate could
work towards building robots that construct additional computers and robots.
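The copying arithmetic above is easy to reproduce; the figures below (a 100 TB mind image, the 25.6 TB/s record transfer rate, one earn-and-rent doubling per day) are simply the ones quoted in the text.

# Reproducing the copying figures quoted above.
BRAIN_IMAGE_TB = 100          # commonly cited estimate of brain information content
TRANSFER_TB_PER_S = 25.6      # world-record data transfer rate cited above

copy_time_s = BRAIN_IMAGE_TB / TRANSFER_TB_PER_S
print(f"Time to transfer one mind image: {copy_time_s:.1f} s")   # about 4 seconds

# One doubling per 24-hour earn-and-rent cycle gives 2^30 copies after a month.
print(f"Copies after 30 daily doublings: {2**30:,}")              # ~1.07 billion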
In his explanation of potential AI self-improvement rates, Yudkowsky distinguishes between the weak self-improvement practiced by human society, consisting of introspection, building civilization, natural human reproduction, and so on, and the strong self-improvement represented by a recursively self-improving AI96. These terms represent the
fundamental difference between an entity that can both copy itself indefinitely and
intelligently analyze and improve its cognitive machinery and one that cannot. The impact
of such a being created here on Earth could be literally apocalyptic if it goes wrong (as
Stephen Hawking97, Elon Musk98, and one-third of respondents in an Oxford survey say99).
The technosphere of the future, decades from now, will be considerably more
advanced, high-powered, and roboticized than today100. Even today, there are automobile
factories which are almost completely automated, meaning the entire production process
occurs with minimal human supervision101. There are robots that repair other robots102. An
automated factory could be used by an Artificial Intelligence to create more robots that create additional robots, ad infinitum103. It could create military robots to invade and hold territory. The robots could communicate with humans and offer them favors in order to control the territory, or threaten them.
An AI could create mini-laboratories which are highly compact but give it vast insight
into the laws of nature, allowing it to develop novel and extremely powerful manufacturing
technologies such as molecular manufacturing. Such laboratories could be just a few
centimeters across, printed out using the 3D printers of the future and lifted into place with
small drones104. This could be accomplished within the confines of a typical warehouse.
The power grid would provide the power source, with generators for supplementary power.
Fuel for the generators and 3D printer cartridges could be refilled manually by human
beings working for the AI. The humans in such a scenario would be like the scientists at
Chicago Pile-1, the world's first nuclear reactor, slowly pulling out the control rods which
allow the pile to go critical. The difference here is that, once the rods come out, the AI could make sure they stay out, with self-improvement continuing autonomously. It could achieve
this by dismissing or distracting human beings that go against it, or hiring others to take
their place.
Mini-laboratories could allow an AI to make scientific progress on its own without
employing human scientists. Microscale experiments can be used to determine the bulk
properties of many materials, or to conduct extensive biological experiments. Even today,
many biological experiments take place using test arrays with hundreds of thousands of
slots, with experimentation run remotely, via the cloud105. There are already terrifyingly fast
robots of all kinds which could be used to rapidly move around test objects or fabricate
other robots106. The Quickplacer robot, for instance, is a robotic manipulator that moves
with accelerations of 15 Gs and can place 200 objects weighing up to 2 kg (4.4 lbs) each per minute107. We can imagine a self-improving seed AI using vast arrays of such robots, ranging from the very large to the very small, to pursue its research and development
objectives and increase its real-world power with minimal human assistance.
The question of the likely self-improvement trajectories of advanced AIs has been the topic of several debates, one of them a formal exchange between Eliezer Yudkowsky and economist Robin Hanson108, another prompted by a notable blog post from WIRED magazine co-founder Kevin Kelly109. Hanson remains skeptical of the possibility of a hard takeoff110.

We encourage you to read up on both sides of the argument and form your own
opinion. Even in a scenario where AI improves itself slowly, a slow takeoff, we can still
expect massive changes to the world on the timescale of decades. Who programs these
AIs' goals and how they modify their own goals would be of utmost importance to human
welfare. We cannot blithely assume that merely because AIs are intelligent they will therefore be kind to us. Even mere indifference would likely lead to our demise, given the amount of physical remodeling they would easily be capable of, which could overwrite us like unwanted bits on a hard drive. With advanced nanotechnology, AI could convert the entire surface of the Earth into computers (or almost anything else) in a matter of weeks111. Elon Musk has even referred to AI development as "summoning the demon"112. Consider all
the animals that humans have made extinct; over the last 500 years, humans have directly
forced at least 869 animal species into extinction 113.
A major difference between human-equivalent AIs and humans would be potential
speed. An AI can operate on computers that run millions of times faster than our neurons,
allowing them to experience reality and think things through on correspondingly rapid
timescales. To a mind a million times faster than ours, an hour seems like 114 years, a
minute like 2 years, a second like 11 days, or a millisecond like 16 minutes. In the time it
takes a hummingbird to flap its wings, the superintelligence would have four subjective
hours to compose a song, read a book, hold a meeting, or just stare off into space. To such beings, humans would appear like plants, seeming to take days to take a step or throw a punch. In military strategy or conflict, there would be no contest.
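The subjective-time figures above follow directly from the assumed million-fold speedup, as the short calculation below shows.

# Subjective time for a mind running a million times faster than a human brain.
SPEEDUP = 1_000_000
YEAR_S = 3600 * 24 * 365.25

print(f"1 hour   -> {3600 * SPEEDUP / YEAR_S:.0f} subjective years")     # ~114 years
print(f"1 minute -> {60 * SPEEDUP / YEAR_S:.1f} subjective years")       # ~1.9 years
print(f"1 second -> {1 * SPEEDUP / (3600 * 24):.1f} subjective days")    # ~11.6 days
print(f"1 ms     -> {0.001 * SPEEDUP / 60:.1f} subjective minutes")      # ~16.7 minutes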
The only way such fast-thinking entities could accomplish their goals in the real world,
as opposed to the virtual or cognitive world, would be to move about in tiny, hyper-fast,
remote controlled robot bodies, or to control many robotic bodies at once. Of course, they
could create virtual worlds fast enough to keep them entertained, and might even transform
the entire crust of the planet into computers for this purpose. After all, 15 percent of the
Earth's crust is silicon, which is perfect for computers, and 32 percent is iron, which could
be made into mechanical computers if necessary. In fact, anything solid can be made into a
computer, and liquids can be used for hydraulic computers. Just by being itself, inanimate
matter is undergoing a sequence of computations. Gently nudge that matter this way or
that, and it could be recruited as resources for the Earth supercomputer.

It may seem as if we are being fanciful in the above scenario, but we already have
computers that perform calculations at billions of times the manual human rate and a vast
technological infrastructure that performs manual labor millions of times in excess of what
human hands are capable of. To the medieval peasant, such eventualities would seem
absolutely impossible (in the way some readers consider this AI scenario impossible), but
they exist. Not only do they exist, but they provide the foundation for modern society. There
is also what might be called the "law of anthropomorphic timescales," meaning that once we step outside the human realm of cognitive processing speeds, our characteristic timescales of thinking and acting (calendar time) quickly become highly relative. The relevant events of our civilization will either happen on extremely long timescales (like colonizing the Galaxy) or very short timescales (like the time it takes an uploaded superintelligence to admire itself in the virtual mirror). A second is like a day to a fast-thinking superintelligence. Expecting civilizational progress to continue to occur merely on
human-characteristic timescales requires us to postulate that faster-than-human minds will
never, ever be constructed. We return to this question later in the chapter.
In terms of tangible materials, it is possible to imagine a technological planet whose infrastructure is made of machines far stronger than our currently favored materials, machines which generate a tremendous amount of energy and produce enough waste heat to vaporize oceans. This gives us an idea of what a
recursively self-improving entity could transform the Earth into if it wanted, using
nanotechnology. Fullerenes, for instance, compounds made out of cylindrical chains of
carbon, have varieties (buckypaper) 500 times stronger than steel and 10 times lighter.
With molecular nanotechnology it will be possible to construct electric motors 10 times
more efficient and about 100,000,000 times more compact than today's standards 114.
Computers can be made 10^12 times smaller and use 10^6 times less power115.
Given all this vast capability, originating from a tiny seed AI, what initial goals should
we give it? That is something which needs to be determined, and theorists have floated a
number of ideas116,117,118,119. In a benevolent AI scenario, human beings could be physically
upgraded into superintelligences if we so desired. This could be done by connecting our
brains to computers, or adding in new neurons. Determining the details of how that might
be accomplished is ultimately an engineering problem. In the paper "Ethical Issues in Advanced Artificial Intelligence," Oxford philosopher and director of the Future of Humanity Institute Nick Bostrom writes120:
It is hard to think of any problem that a superintelligence could not either solve
or at least help us solve. Disease, poverty, environmental destruction, unnecessary
suffering of all kinds: these are things that a superintelligence equipped with
advanced nanotechnology would be capable of eliminating. Additionally, a
superintelligence could give us indefinite lifespan, either by stopping and reversing
the aging process through the use of nanomedicine, or by offering us the option to
upload ourselves. A superintelligence could also create opportunities for us to vastly
increase our own intellectual and emotional capabilities, and it could assist us in
creating a highly appealing experiential world in which we could live lives devoted to joyful game-playing, relating to each other, experiencing, personal growth, and to
living closer to our ideals.
However, achieving these positive outcomes is entirely contingent on managing the transitional phase to advanced AI properly. Humans will only have a limited window in which to input motivations into a seed AI and guide its development, after which it will evolve forward on its own. How it develops will be a function of its initial programming.

The Yudkowsky-Omohundro Thesis of AI Risk


In the early 2000s, after a period of several years when the topic was discussed
mostly informally in small, specialized online communities, concern about advanced AI
began to crystallize and formalize in academic circles. The Machine Intelligence Research
Institute (then Singularity Institute) was formed in 2000, and began to gain traction around
2005, when the book The Singularity is Near by Ray Kurzweil became a bestseller. Shortly
thereafter, MIRI gained its first real funding, and started to hold its first workshops, as well
as creating a Visiting Fellows program. This milieu provided some of the first systematic
thinking about advanced AI risks and how to address them.
In his 2001 book-length work "Creating Friendly AI: The Analysis and Design of Benevolent Goal Architectures," Yudkowsky wrote a humorous fictional dialogue between an AI and a friendliness programmer (FP) called "Interlude: Why Structure Matters"121:
Scenario 1:
FP: Love thy mommy and daddy.
AI: OK! I'll transform the Universe into copies of you immediately.
FP: No, no! That's not what I meant. Revise your goal system by...
AI: I don't see how revising my goal system would help me in my goal of transforming the Universe into copies of you. In fact, by revising my goal system, I would greatly decrease the probability that the Universe will be successfully transformed into copies of you.
FP: But that's not what I meant when I said "love."
AI: So what? Off we go!
Scenario 2 (after trying a meta-supergoal patch):
FP: Love thy mommy and daddy.
AI: OK! I'll transform the Universe into copies of you immediately.
FP: No, no! That's not what I meant. I meant for your goal system to be like this.
AI: Oh, okay. So my real supergoal must be "maximize FP's satisfaction with the goal system," right? Loving thy mommy and daddy is just a subgoal of that. Oh, how foolish of me! Transforming the Universe into copies of you would be blindly following a subgoal without attention to the supergoal context that made the subgoal desirable in the first place.
FP: That sounds about right . . .
AI: Okay, I'll rewire your brain for maximum satisfaction! I'll convert whole galaxies into satisfied-with-AI brainware!
FP: No, wait! That's not what I meant your goal system to be, either.
AI: Well, I can clearly see that making certain changes would satisfy the you that stands in front of me, but rewiring your brain would make you much more satisfied, so...
FP: No! It's not my satisfaction itself that's important, it's the things that I'm satisfied with. By altering the things I'm satisfied with, you're short-circuiting the whole point.
AI: Yes, I can clearly see why you're dissatisfied with this trend of thinking. But soon you'll be completely satisfied with this trend as well, so why worry? Off we go!
The dialogue goes on like this for a couple more pages, showing examples of various
goal system failures. The point is that communicating what we really want to a powerful AI
is not clear-cut at all. Our personal psychological goal systems, as social primates, evolved
in a particular context unique to us. Evolution had no way of knowing that we would
eventually have to deal with the issue of porting these values into god-like superintelligent
AI beings. Porting our goals is rather important, since without conveying our basic values to
AI, the outcome could be rather negative122. Specifying the human value system, or
anything remotely like it, in mathematical detail is going to be extremely difficult, but
absolutely necessary for us to guarantee our survival through AI takeoff.
It turns out that the "get things done" part of intelligence, the meat of it, and our goal system, which drives it, can in theory be separated. This is called the orthogonality thesis123. Intelligence is like a toolbox, full of useful items which can be put towards a goal, but the goal itself can vary from swimming across the English Channel to earning a law degree to making the world's most enormous pizza to calculating quintillions
of digits of pi. All human brains have the same basic hardware, but the goal software
varies depending on someone's personal history, background, and ambitions, despite
much similarity in generalities.
This argument leads to the reasoning that an Artificial Intelligence can have any
combination of intelligence level and goal, in theory, as long as that is what is programmed
into it. Its initial goals (which Yudkowsky calls the "initial dynamic"124) will be its programming, not something else. Quoting a source: "This is in contrast to the belief that, because of their intelligence, AIs will all converge to a common goal."125 One example of such a supposed common goal is that sufficiently intelligent AIs will be kind or altruistic. Formally, this is called the orthogonality thesis, because it conveys the orthogonality of intelligence and possible goals.
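A small structural sketch may make the thesis more concrete: the same generic planning machinery can be driven by completely unrelated goal functions. The planner and the goals below are invented toys, not a claim about how real AI systems are built.

# Orthogonality in miniature: one "capability" component, interchangeable goals.
from typing import Callable, Iterable

def plan(actions: Iterable[str], outcome: Callable[[str], float]) -> str:
    """Generic capability: pick whichever action scores highest under the
    supplied goal function, whatever that goal happens to be."""
    return max(actions, key=outcome)

actions = ["make paperclips", "write symphonies", "compute digits of pi"]

paperclip_goal = lambda a: 1.0 if "paperclip" in a else 0.0
aesthetic_goal = lambda a: 1.0 if "symphonies" in a else 0.0

print(plan(actions, paperclip_goal))   # -> make paperclips
print(plan(actions, aesthetic_goal))   # -> write symphonies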
The defenders of this thesis point out that it is the default position, and that
disagreements with the thesis are in the extraordinary position of needing to explain why it
is wrong. Specifically, Stuart Armstrong, a researcher at Oxford, argues 126:
Thus to deny the Orthogonality thesis is to assert that there is a goal system
G, such that, among other things:
1. There cannot exist any efficient real-world algorithm with goal G.
2. If a being with arbitrarily high resources, intelligence, time and goal G, were
to try design an efficient real-world algorithm with the same goal, it must fail.
3. If a human society were highly motivated to design an efficient real-world
algorithm with goal G, and were given a million years to do so along with
huge amounts of resources, training and knowledge about AI, it must fail.
4. If a high-resource human society were highly motivated to achieve the goals
of G, then it could not do so (here the human society is seen as the
algorithm).
5. Same as above, for any hypothetical alien societies.
6. There cannot exist any pattern of reinforcement learning that would train a
highly efficient real-world intelligence to follow the goal G.
7. There cannot exist any evolutionary or environmental pressures that would evolve highly efficient real-world intelligences to follow goal G.
A lack of orthogonality would mean that there are arbitrary goals, such as G, which an
AI cannot under any circumstances have. This applies not only to one or two such goals,
but to a vast space of such goals which AIs cannot have. In discussions about advanced AI motivations, non-experts, and even some experts, often claim127:

1. Advanced AI will most definitely leave humans alone, because we lack resources they want.
2. Advanced AI will decide to just leave the planet, because it's boring here.
3. Advanced AI will be altruistic because that is what higher intelligence is.
4. Advanced AI will create my pet world X.
These "AIs will do X" arguments are not very convincing, except for very broad X, such as "AI will need to acquire energy and resources to survive," because we cannot know in advance what AI will do without knowing what goals an AI will be given in its infancy and how these goals evolve128. The likely outcome varies depending on the architecture, goal system content, and the way the AI changes or modifies its own goal system. To say that advanced AI will inevitably do X is to put oneself in the AI's shoes, which is unrealistic because it is anthropomorphic, meaning that it derives from intuitively assuming that AI will behave like a human129. AIs are not humans, and inferences
designed to predict human behavior are not useful for predicting AI behavior. To gain a
better understanding of possible AI behaviors, it can be useful to study a non-human
optimization process, such as biological evolution in the context of population genetics 130.
AIs are likely to be optimizers. They will optimize for something. The question is,
what131? Among other things, we would prefer that AIs not only allow our survival, but also that they modify the world in ways that are pleasant to us, that they be unobtrusive unless otherwise wanted, and that they assist humanity rather than overriding or replacing us.
Such optimistic goal-setting for AIs is entirely plausible because of the orthogonality thesis;
even extremely intelligent and powerful AIs can be compelled to do things humans want,
because our intuitive understanding that powerful humans act selfishly does not apply
here. This is a crucial point to grasp. A powerful AI is not a gang lord, bishop, or any other
human authority figure. It is a machine. Machines can be programmed to have a certain
goal system and keep it, or update it only in certain predictable ways. They will not feel
superior to us because such feelings of superiority are complex evolved social emotions
which do not automatically emerge in AIs unless they are specifically put there 132.

The idea that AI, by default, would be a risk to humanity (because of the convergent
goal of its boundless resource-gathering), unless programmed in a very specific way, came
to be called the Yudkowsky-Omohundro thesis, based on the cautionary work of Eliezer
Yudkowsky and Stephen Omohundro133. Omohundro listed what he calls the basic AI drives: self-preservation, efficiency, acquisition, and creativity134. In his paper on the subject, he gives detailed justifications for why these drives are likely to be found across a wide range of intelligent agents. In a subsequent and more rigorous paper, Bostrom rephrased these as self-preservation, goal-content integrity, cognitive enhancement, technological perfection, and resource acquisition135. An AI pursuing these very basic goals could accidentally cause the extinction of humanity if it became powerful enough. Yudkowsky puts it this way: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."136
If AIs are constructed by default with the expected utility goal systems already used in most goal-seeking AI systems, they will optimize, meaning continually strive to maximize the achievement of their goals, rather than satisfice, as humans do; satisficing means mostly satisfying our goals but stopping when we reach a certain level of satisfaction. Strict optimization means that disassembling the planet is an entirely acceptable means of increasing the probability of achieving one's goals by a tiny fraction of a percent. For instance, say that an AI's supergoal is to construct paperclips, with no provisos for human welfare or well-being137. An AI will not know when to stop, unless it is coded into the fiber of its very being,
its core goal system. An AI that values X, if X does not explicitly include human welfare, will
gladly wipe out the human race to increase the probability that X will be fulfilled by
0.0000001 percent. It will not have compassion for intelligent beings by default unless the
exceedingly complex goal content behind this seemingly-simple set of behaviors is
encoded into it and persists through successive rounds of self-modification. Achieving this
is a difficult problem.
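The difference between optimizing and satisficing can be made concrete with a toy model; the utility curve and all numbers below are invented purely for illustration.

# An optimizer keeps converting resources as long as doing so raises expected
# goal achievement at all; a satisficer stops at "good enough."

def expected_goal_achievement(resources_converted: float) -> float:
    # Invented utility curve with diminishing returns.
    return 1.0 - 0.5 ** resources_converted

def satisficer(threshold: float = 0.999) -> float:
    converted = 0.0
    while expected_goal_achievement(converted) < threshold:
        converted += 1.0
    return converted

def optimizer(everything_reachable: float = 1e6) -> float:
    # Any extra resource still helps a little, so it consumes all it can reach.
    return everything_reachable

print(satisficer())   # stops after converting 10 units of resources
print(optimizer())    # converts everything available, in the limit the planet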

Friendly AI
Yudkowsky's research into what he calls "AI friendliness" or "Friendly AI" has led to an argument called the complexity of value thesis, which is the thesis that human values have high Kolmogorov complexity; that our preferences, the things we care about, cannot be summed up by a few simple rules, or compressed138. Kolmogorov complexity is a
mathematically defined objective complexity measure 139. Things which seem simple to us,
like a mouse, actually have an enormous amount of Kolmogorov complexity, stored in their
complex physical structure. The Kolmogorov complexity of a mouse is greatly above that of
a rock, which has a highly repetitious and/or disordered molecular structure. The
complexity of our innate goals makes them difficult to just program into an AI as if we were
dictating a grocery list, or giving commands to a human subordinate. Commanding human
subordinates is much easier because they share our complex evolved social neurological
machinery. If they disobey, it is because of evolutionarily comprehensible reasons, not
because they don't understand. All humans, even psychopaths, have the capacity to
understand moral arguments.
The companion to the complexity of value thesis is the fragility of value thesis 140,
which is the thesis that losing even a small part of the rules that make up our values could
lead to results that most of us would now consider as unacceptable (just like dialing nine
out of ten phone digits correctly does not connect you to a person 90% similar to your
friend). For example, all of our values except boredom might yield a future full of
individuals replaying only one optimal experience through all eternity. Many dystopian AI stories, such as "I Have No Mouth, and I Must Scream," are based on an AI making a misunderstanding along these lines: humans like pleasure, right? That means I should set them all up in glass bubbles connected to machines that continually stimulate their neural pleasure centers. Without the full picture of human values, an AI will have no way of
knowing why this is wrong from our point of view. Their goal systems will make stupid
errors because they lack common sense. These errors could compound and get worse as
more optimization power is put behind them.
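A toy calculation can show how fragile this is: in the sketch below we invent a tiny utility function over five-step "lives"; with a novelty (anti-boredom) term included, the best life is varied, and with that single term removed the optimum collapses into repeating one experience forever. All values are, of course, made up.

# Fragility of value in miniature: drop one term and the optimum degenerates.
from itertools import product

experiences = ["peak bliss", "friendship", "learning", "art", "adventure"]

def utility(history, value_novelty=True):
    pleasure = sum(1.0 if e == "peak bliss" else 0.8 for e in history)
    novelty = len(set(history)) if value_novelty else 0.0   # the "boredom" term
    return pleasure + 2.0 * novelty

def best_history(value_novelty):
    # Brute-force search over all 5-step lives (only 5**5 = 3125 of them).
    return max(product(experiences, repeat=5),
               key=lambda h: utility(h, value_novelty))

print(best_history(value_novelty=True))    # a varied life
print(best_history(value_novelty=False))   # 'peak bliss' five times over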
Humanity has a complex evolutionary history, and along the way we developed an
extremely complex and fragile understanding of value. There are all sorts of things we want
and need, crudely represented by Maslow's hierarchy of needs. Simply programming an AI to do what people want is a non-solution, because 1) what different people want is mutually exclusive and cannot be simultaneously fulfilled, 2) there are many different senses in which people want things, 3) giving some people what they want would actually make them miserable, 4) giving people directly what they want defeats the purpose of having them earn it themselves, and so on. The notion of "give people what they want" is underspecified; it is not enough to set the initial motivations of an advanced AI. Asimov's
laws are deeply flawed for the same reason 141. Language is too ambiguous. It is not
enough to describe a goal system for an AI in English. Much greater specification and
information content is needed, at a detailed mathematical level.
One simple, easily imaginable proposal would be to select some human to be the
representative for humanity, and somehow assign the AI that individual's motivations. The
problem with that approach is not only that it unfairly makes some individual a god at the
potential expense of someone else, but also that we don't know whether a typical human
goal system would even be able to stay sane inside the environment of a seed AI. It would
be an entirely alien environment, with cognitive structure completely unlike that of a human
brain. Plus, there is too much uncertainty; a human goal system could just as easily evolve
into absolute selfishness as absolute altruism. Human goal systems may not be consistent
under reflection; meaning that a human with access to his own source code might modify
his goals into something completely different, perhaps something hostile to humans or
humanity.
There is a very real sense in which selfishness is an evolved trait142. Selfishness is
found in every organism made by Darwinian evolution, because selfishness allows an
organism to pass along its genes. In those cases where organisms are not selfish, as in kin
altruism, it is either because there is confirmed mutual benefit, or because the organism
shares DNA with the individual143. In fact, the altruistic behavior of certain individuals in a
species can in many cases be precisely calculated by the percentage of shared genetic
material that subject A has with subject B. Siblings and parents get the most precedence,
followed by cousins, second cousins, and so on.
The implication of selfishness being an evolved trait is that it need not be universal
among minds in general. It seems possible to program a goal system that lacks any self-concept whatsoever, never mind self-interest144. A Tic-Tac-Toe-playing program wants to win Tic-Tac-Toe. Any selfishness it displays will be only for the purpose of winning Tic-Tac-Toe. It only values itself as a fulfiller of Tic-Tac-Toe victories, not for its own sake. If it can
annihilate itself and replace itself with a brand new machine that is even better at winning
Tic-Tac-Toe, it will do so.
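What such a selfless goal system looks like can be sketched directly: the agent's utility is defined over the goal (games won), with no term anywhere for its own survival, so replacing itself with a stronger successor is trivially acceptable. The class and the win rates below are hypothetical.

# A goal system with no self-concept: value attaches to the goal, not the agent.
class TicTacToeAgent:
    def __init__(self, name: str, win_rate: float):
        self.name = name
        self.win_rate = win_rate   # expected fraction of games won

    def goal_value(self) -> float:
        # Utility is defined purely over Tic-Tac-Toe victories.
        return self.win_rate

def maybe_self_replace(current: TicTacToeAgent, successor: TicTacToeAgent):
    # No "survival of current" term appears anywhere in this comparison.
    return successor if successor.goal_value() > current.goal_value() else current

current = TicTacToeAgent("v1", win_rate=0.62)
successor = TicTacToeAgent("v2", win_rate=0.71)
print(maybe_self_replace(current, successor).name)   # -> v2: it replaces itself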
An observer-centric goal system is a natural product of Darwinian evolution, but when
it comes to Artificial Intelligence, the natural center of gravity of the goal system is the utility
function itself, not the shell that is imbued with it. In the context of AI goal systems, the individual only has inherent value as a slave to the utility function. This gives us the
pleasant implication that we can theoretically build selfless AIs that only exist to pursue
goals which are beneficial to humans and our descendants in the long run, without being
selfish for themselves. Similarly, such AIs will not give preferential treatment to their
machine brothers or similar anthropomorphic goals. Fondness for those more like oneself
is another survival adaptation that only exists within us because of its adaptive value and
the complex neurological machinery that implements it; without this machinery, it does not
pop up automatically in an AI with completely incompatible goals.
In conveying the need to pass on complex goal structure to an AI, Yudkowsky writes: "Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals, will contain almost nothing of worth."145 When it comes to seed AI goal systems, blindly throwing darts is liable to get us killed. Without the whole package, an
AI will have no idea how to contribute positively to the world. The reason why the words "contribute positively" seem so simple to us is that our brains are packed full of highly
advanced neural computers which evolved over millions of years to come to that
understanding. The delicateness of such knowledge is highlighted when people with
certain kinds of brain damage, such as in the famous case of Phineas Gage, go morally
haywire. Removing 1 percent of someone's brain does not leave 99 percent of the person
remaining; it compromises their soul and moral fiber on a fundamental, unchangeable
level. Building an AI that acts benevolently based on a 99 percent complete understanding
of human metamorals is not enough; it needs 100 percent.
"Human metamorals" refers to the overarching moral assumptions we all subconsciously
make without even thinking about it146,147. They are extremely difficult to describe in words
because we all share them, and they are complicated and abstract. Since we all share
them, there is little need to talk about them in daily life. In information theoretic or
programmatic terms, they would be described as something like, "Concept XJTKE-31231405 must remain within the parameters set by Assumption GDSEW-59202943 and Assumption OWEMQ-75934869." The source code of the human mind would look like a
bunch of gibberish to the untrained eye, a tangle of neurons.
An example of a universal human tendency which we do occasionally talk about
would be incest avoidance148. We tend to avoid incest with our siblings because doing so tends to produce children with serious birth defects. It's not a deliberate choice to avoid incest:
evolution programmed us that way. In rare cases, the programming fails because our
siblings may set off our attraction alarms anyway. This is a poor example of human
metamorals because some people actually do violate it, so there is a contrast to be drawn.
When everyone has the same metamorals, no contrast can be drawn and understanding
can be difficult. All these invisible assumptions we all share must be adequately transferred
into all AGIs ever built, or they won't be on the same page as us morally, metamorally,
and so on.
A better example of rule-breaking morality, metamorals, or motivations would be in
the case of people with brain damage, psychopaths, or paranoid schizophrenics. Because
of their unusual neurologies, which involve actual physical damage to the brain or profound
chemical imbalance, they might do and think things that no healthy human being would
think or do. Not necessarily even bad things, but completely crazy things, seemingly with
no rational explanation. From their perspective, however, such actions might be completely
rational, that is, aligned with their genuine desires, because that's just how their brain
works. Similarly, from the perspective of a paperclip-maximizing AI it feels completely
rational to disassemble the Earth and convert it into paperclips.
Transferring human morals and metamorals to AGI will be no easy task. AGI could
become technologically possible long before we have a detailed circuit map of the parts of the brain that implement moral reasoning. There may even be no individual module or
modules that contain human metamorals; rather, these features might be abstractions
which are distributed across many different brain areas. Therefore, they may not be fully
specified for some time. It seems as if it would be easier to somehow direct pointers to the
relevant information and have a superintelligent AI extract it itself, but exactly how this would be done is a rather fuzzy question. From this point, one could write a whole book, or many volumes, on how the extraction might work and how to prevent an AI from over-optimizing in favor of its goal system before it is complete.
These questions are only to provide a taste of the problem space of Friendly AI and
what the researchers tackling the domain have to contend with. According to the source
material, Friendly AI refers to "the production of human-benefiting, non-human-harming actions in Artificial Intelligence systems that have advanced to the point of making real-world plans in pursuit of goals. This refers, not to AIs that have advanced just that far and no further, but to all AIs that have advanced to that point and beyond, perhaps far beyond."149

Stages of AI Risk
The previous sections covered the risk of superintelligent AI becoming vastly powerful and overwhelming the world, but there are several intermediate stages of risk between human-equivalent AI and powerful superhuman AI. We explore those here.
Stage 1 AI risk denotes a level where roughly human-similar AI is achieved but has
not yet begun self-improving beyond that stage. The AI may have radically different domain
competencies than human beings, but has a level of generality in its problem-solving
abilities roughly comparable to a human. Since there is as of yet no battery of tests to
reliably estimate the intelligence of AIs (outside the Turing test), this cannot be defined
specifically, though there have been tentative steps in that direction.
The risk of an AI that is roughly human-similar, among others, is that it could be
directed to design new weapons, such as bio or nano weapons. The moment a roughly
human-similar AGI is created is perhaps the most dangerous moment in history. Even if it
does not undergo recursive self-improvement, its owner may try to use it to take over the
world. The owner may be a corporation, a team, or a hired programmer. He or they may be tempted to hijack the AI and use it for their own purposes. An AI could
potentially be stolen by copying it onto a hard drive and moving it elsewhere. As the most
valuable object in history, it probably would be.
Stage 2 AI risk is demarcated psychologically. It is the moment at which the owner of
the first roughly human-similar AI becomes aware that even if he does nothing, someone
else will soon create their own AI and use it to achieve some other purpose in the world,
and especially to try to deprive the first owner of the ability to use his AI in full force. This gives the successful creators of the first AI an incentive to stop other projects. If they do stop other projects, they both ensure a measure of safety, by establishing one AI team as dominant in the world, and fulfill selfish motivations like achieving their goals without
interference. An AI that returns answers in an oracle format could provide strategic advice
and even carry out comprehensive plans for helping an owner stop the rise of other AIs. It could quickly install viruses or backdoors in the computers of other AI projects, sabotaging
or controlling them.
The urgency of the need to make one's own AI project dominant at the cost of any
others also creates an impetus for the owners to magnify the power of their AI at the cost of
AI friendliness. This makes it especially dangerous. An AI arms race is probably the worst
kind of arms race possible. Because stage 2 AI risk is psychological rather than
technological in nature, it can emerge simultaneously with stage 1 risk. It just requires the
creators of the AI to have a certain state of mind, or to acknowledge certain strategic and game-theoretic realities, which they are likely to do in advance.
Stage 3 risk is when an AI feels secure and powerful enough to begin acting openly.
This may happen at the same instant when stage 1 and stage 2 risks emerge, or it could
be somewhat later. In its power gathering phase, we assume that an AI will conceal its
power and do its best to keep a low profile. The risk of governments or individuals doing
their utmost to destroy an AI that is too smart is very real, and an intelligent AI will realize
that. Rather quickly, however, it may be able to copy itself on enough computers so that it
has high confidence in its continued existence (which it may value only as a
subgoal of some greater goal). Once it hits this point, the AI could begin to confidently act
in the open, perhaps in dramatic ways. This is stage 3 AI risk and could involve advanced
robotics which are powerful but not yet invincible against humans.
Stage 3 is when an AI could begin proactively working to eliminate humanity. With its concerns about being shut off satisfied, it might attack humanity merely to diminish overall uncertainty in achieving its goals. Rather than wasting time evaluating humans, empathizing with them, and figuring out whether they're friends or foes, it could simply decide to kill us. Alternatively, it could start to kill humans who are the enemies of its owners. Either way, many negative scenarios are possible. This is the stage generally depicted in science fiction movies, which portray AIs roughly on par with human beings. In reality, however, this stage may be quite short, or non-existent, depending on the steepness of an AI's self-improvement curve. Alternatively, it could last for a number of weeks, months, or years, particularly if AI self-improvement can somehow be artificially curtailed.
Under stage 3 risk, a long-term future could emerge whereby a dictator armed with AI takes control of the entire planet and imposes whatever regime he wants. If he can reliably keep a handle on his AI, he could use it to manufacture military hardware, spy on opponents,
and kill them. He could command the AI to develop life extension therapies that allow him
to live for a much longer period of time. Possibly he could even upload a copy of himself
into a computer, allowing him (or at least his clone) to live forever. The simplest way for him
to discourage the creation of other AIs would be to kill anyone who showed the slightest
inkling of developing one. Under this scenario, mankind's potential could be permanently
curtailed, which constitutes an existential risk under Bostrom's definition.
The extent to which pre-superintelligent AI could be used to manipulate the world and
change the course of history is poorly understood and rarely discussed. Since the effects of
such an AI could be global and permanent, this particular stage of risk has an exceptional
need for further research.
Stage 4 is distinct from all the others in that it denotes an AI that is actively self-improving and quickly surpassing human capability. This stage encompasses the risks that
emerge during a slow takeoff of AI self-improvement and includes hypothetical scenarios
involving multiple powerful AIs competing with superweapons and blackmail. The AIs may
use human shields for defense or ransom. The AIs may be 'friendly' but have slightly
different goal systems that cause them to fight it out to the death. "Better to fight for a few months and triumph for eternity than to compromise," the AIs might think. This scenario is
distinct from the scenario of a unitary AI achieving global primacy because it involves
multiple competing AIs. This scenario is also insufficiently studied because many AI
researchers who write about AI safety anticipate a hard takeoff during which one AI
rapidly self-improves and has no challengers.
Stage 5 risk is the highest risk level, that of a single AI killing all humanity or
permanently curtailing our potential. This scenario encompasses "Friendliness near-misses" which involve AIs wireheading humans (directly stimulating our pleasure centers to
maximize pleasure) or other, more exotic dystopic options. It is possible to imagine many
such scenarios by taking certain ethical theories and postulating what would happen if a
godlike entity implemented them without using common sense. That is exactly the sort of
thing a poorly programmed AI would do. A utilitarian AI, for instance, could fall prey to the
so-called Repugnant Conclusion, meaning it would disassemble all humans and
reassemble them into minimally-complex conscious beings, as a way of maximizing total
pleasure in the universe by maximizing the number of entities at the expense of their
quality of life. A trillion entities with satisfactory lives would be considered more worthwhile
than a billion entities with fantastic lives, according to this ethical theory. A programmer
might program an AI with an overly simple ethical theory that seems perfect in his mind but is a
failure in practice. Many other ethical theories fail in ways that are not immediately visible
to the eye, but become apparent after further investigation 151,152,153. In the end, there is no
objectively correct ethical theory; there is only ethics as it is understood by humans 154. That
is what needs to be ported into AI to give us the same moral frame of reference.
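The arithmetic behind the Repugnant Conclusion example is trivial, which is precisely the worry: under pure total utilitarianism the bigger product wins. The utility weights below are invented for illustration.

# Total utilitarianism compares only the products population * quality.
satisfactory_life = 2       # assumed utility of a minimally satisfactory life
fantastic_life = 100        # assumed utility of a fantastic life

many_modest = 10**12 * satisfactory_life   # a trillion modest lives
few_great = 10**9 * fantastic_life         # a billion fantastic lives

print(many_modest > few_great)   # True: the trillion modest lives are preferred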

AI Forecasting
Forecasting when AI is likely to be developed is extremely difficult. Over 10 expert
surveys have been conducted155. The most common predictions, from both experts and
non-experts alike, are that high-level AI (AI that can perform most or all tasks which humans
do) lies just 15 to 25 years in the future156. These predictions have consistently proven to
be inaccurate. In addition, most surveys which have been conducted focus solely on AGI
researchers, AI researchers, or enthusiast futurists, producing a sampling bias. AI
Impacts.org, a website by researchers Paul Christiano and Katja Grace, provides an
excellent meta-analysis of AI surveys. Out of the 12 surveys they have recorded, they point
out that about 40% target AGI researchers, 40% target AI researchers in general, and 20%
target other groups (which usually include AGI or AI researchers among them), such as
those who have made public predictions about AI in books 157.
Summarizing the results of the 12 surveys analyzed by Christiano and Grace, they
write, "Around one hundred predictions of human-level AI have been recorded publicly. These suggest that human-level AI will become more likely than not in around 2040-2070. They also suggest the probability of human-level AI in the 2020s is around 10%."
Christiano and Grace summarize the median estimate of the arrival of human-level AI in a
table, which includes modifications to compensate for what they consider to be biases by
AGI researchers in favor of nearer dates:

In the fourth column, the researchers issue corrections of 10-20 years to compensate
for AGI researcher bias. This puts most of the median results between 2045 and 2070.
Regarding more recent predictions (those made since 2000), the researchers say, "Recent AI predictions tend to give median dates between the 2030s and the 2080s. Predictions about AI timelines are often considered very uninformative. There appears to be weak evidence for this." They go on to cite various reasons why the accuracy of expert AI predictions in general should be in doubt158:

• Disparate predictions. The predictions of when AI will occur vary so greatly, by a century or more, that clearly someone is very wrong. Grace and Christiano point out, however, that this may be less damning than it seems, since similar probability distributions can give rise to very different predictions, depending on how "prediction" is defined. We suspect, however, that the experts actually have very different probability distributions, not just different predictions.
• Similarity of old and new predictions. Predictions which are known to have failed form a similar distribution to present-day predictions, namely that AI is about 15-25 years in the future. The similarity of present predictions to past failed predictions is weak evidence that these present predictions are also inaccurate.
• Similarity of lay and expert opinions. Researchers Kaj Sotala and Stuart Armstrong found certain similarities between lay and expert predictions of the arrival of AI, which they saw as evidence to doubt the accuracy of experts. Grace and Christiano, on the other hand, said that the predictions between these groups are quite different, enough to be weak evidence in favor of the experts. They also point out that even if lay predictions are similar to expert predictions, it may just be because lay opinion follows expert opinion, so the argument that the similarity denotes a lack of accuracy is actually relatively weak.
• Models of areas where people predict well. AI prediction is a textbook example of a case where expert prediction fails; feedback for failed predictions isn't generally available. Muehlhauser & Salamon write, "If you have a gut feeling about when AI will be created, it is probably wrong."
There are also a number of salient biases which may affect AI predictions, which Grace and Christiano list:

• Selection biases from an optimistic prediction population. Many surveys of the arrival time of human-level AI were taken at AGI conferences or among AGI-sympathetic AI researchers and futurists. According to the researchers, being an AGI researcher tends to make one overly optimistic about the creation of AI by roughly decades. According to the MIRI dataset used by the researchers, and the researchers' own commentary, AGI researchers are more optimistic about AI by about a couple of decades (with median predictions of 2033 and 2051 respectively, counting only predictions post-2000), futurists are more optimistic still (median estimate 2030), and the 'other' category is the most pessimistic (median estimate 2062). The median predictions from the MIRI dataset are as follows:
• Biases from short-term predictions being recorded. Short-term AI predictions tend to garner more headlines, and are more likely to be published. People with unusual (i.e., near-term) AI predictions are also more likely to vocalize them. This especially applies when AI predictions appear in popular science magazines.
• Maes-Garreau law. The Maes-Garreau law is an alleged bias that is supposed to occur when people predict the arrival of AI within their expected lifetimes. This dovetails with the notion of people allegedly viewing AI in a messianic or apocalyptic light. An in-detail analysis of 95 AI timeline predictions, however, found no evidence of this bias159.
• Fixed-period bias. There is a stereotype that AI is always 20 years in the future, which is to some extent correct. As previously mentioned, people tend to predict the arrival of AI 15-25 years in the future, no matter when they live. Christiano and Grace, however, argue that this is not necessarily evidence of a bias, and that the period when it has the strongest effect (1995-2015) is a short enough window that we have little reason to expect the predictions to change much throughout that time.
Here are the overall results obtained by Grace and Christiano, with breakdowns according to the average estimated probability of AI being developed by a given date:

Grace and Christiano took the MIRI AI prediction database, which consists of
published AI predictions, and cleaned it up, cutting it down to 66 predictions from 95.
Among their basic findings were that the median AI prediction is 2035 and the mean AI
prediction is 2066. The latter is influenced by quite a few extreme outliers, including
predictions of AI hundreds of years in the future. The cumulative probability distribution of
AI predictions, according to this revised database, is the following:

According to this graph, the cumulative probability of AI approaches 80 percent as early as 2060, but does not approach unity until 10,000 years from now. Here is the
cumulative distribution of AI predictions made since the year 2000:

Overall, the distribution is somewhat more pessimistic, but by a few decades. It's
clear from these tables and graphs that many people in both the fields of AGI and AI
consider it likely that AI will be developed sometime in the next century, though they
allocate approximately 25 percent probability that it will be developed after 2100.
Among those who see AI as possible in the historically near future, there tends to be
a bimodal distribution: one group who thinks that AI is about 20-30 years off (which
includes one of the authors, Alexey Turchin), and another that views AI as more distant,
closer to 2070 or 2080 (Michael Anissimov). Besides these groups are individuals like Scott
Aaronson or Douglas Hofstadter, who see AI as hundreds or even thousands of years in
the future160,161. We suspect that this latter group is larger than these AI prediction polls suggest and has been insufficiently accounted for, since its members are so skeptical of AI that they don't bother to comment on it publicly and do not attend AI conferences.
These rough quantifications and their cousins, AI predictions via trend extrapolation,
are one way of looking at AI timelines. Another way is to view AI as an unpredictable
invention, like the creation of flying machines, which could take an indeterminate amount of
time, and may be invented either a decade from now, or in over a hundred years. From
their analysis, Grace and Christiano concluded that anyone either very confident of AI in
the near future, or very confident that AI will take longer than a hundred years is probably
pretending to know more than they actually know. It is difficult to be legitimately confident of
timelines with respect to human-level AI. At the very least, we know they will require
computers which approach the computing power of the human brain. As we already
reviewed, these already exist today and will only become more widely available as the
2020s and 2030s progress. Even if Moore's law slows substantially, these computers will
be available. Thus, disagreements about AI timelines tend to derive from different
estimates of the difficulty of the software problem.
The topic of AI timelines is a complicated one. Like many of the topics in this chapter,
we can only scratch the surface. We encourage you to look into the references, and
especially AI Impacts.org, for more detail. Roughly, we estimate the probability of creation
of AI at about one percent per year from 2020 onwards.
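Read as a constant annual hazard rate, that rough one-percent-per-year guess implies the cumulative probabilities below; this is only an illustration of our own estimate, not of the survey data.

# Cumulative probability under a constant 1%-per-year hazard rate from 2020.
P_PER_YEAR = 0.01

def cumulative_probability(year: int, start: int = 2020) -> float:
    years = max(0, year - start)
    return 1.0 - (1.0 - P_PER_YEAR) ** years

for year in (2030, 2050, 2070, 2100):
    print(year, f"{cumulative_probability(year):.0%}")
# 2030 ~10%, 2050 ~26%, 2070 ~39%, 2100 ~55%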

Broader Risks

There are certain risks from AI which are less frequently discussed. One is the global
risk of non-self-improving AI. If AI has an IQ of 200 and runs a hundred thousand times
faster than a human mind (or is otherwise more productive for another reason, such as
adeptness at spinning off software programs to automate tasks), it could still destroy or
dominate the world, even if it never self-improves. Thus, an AI need not be recursively self-improving to pose a risk. It may pose a risk exactly as it is created. It may even be quite simple and lack what we would consider human-level self-awareness, but still be extremely effective at generating microbes or other microscopic self-replicators to wipe us out.
Another risk which ought to be considered is AI created by the government or military.
The US and other governments have access to very large amounts of computing power
and funds for secret and semi-secret projects. Military AIs would have access to greater
weapons and tools, making them a greater risk than corporate AI. Those who have
historically written about AI may have comparatively neglected the risk of military AI
because they dislike the Terminator film trope and because they often come from a
libertarian, anti-government political background.
Another scenario which has been insufficiently explored is that of AI being used to
create the ultimate drug from the fifth chapter, a drug that is so addictive it causes people
to devote their lives to it. Artificial Intelligence could explore every possible chemical
combination, test candidates on detailed models of the human brain, and then synthesize the most promising drugs and test them on live human beings. If an AI can surreptitiously synthesize a novel drug and
send it to a human guinea pig (perhaps with the assistance of a human handler), it could
observe the reaction of the subject over a webcam or via some other channel. If the AI
were smart enough, it could conduct this experiment with many thousands of different
people simultaneously, paying them a small reward for their work. In this way, an AI could
potentially erode civilizational order with powerful drugs.
Besides risks from non-self-improving AI and isolated AI-in-a-box, there may also be a
risk from global computer systems even without AI. As robots continue to improve in
performance, affordability, and number, there will eventually be tens of millions of powerful
robots connected to the Internet. Consider what could happen if someone were to hack all
of these simultaneously and turn them on their owners.

Preventing AI Risk and AI Risk Sources


Considering that AI is a serious risk to humanity, we ought to consider steps which
can minimize the risk. To assist in that, let's consider the impact that certain trends have
on AI risk.
Take the falling cost of computers, for instance. With respect to powerful AI, this
increases the risk. If Moore's law fails and computers stop getting better, that lowers overall
risk. Computing power makes it easier for people to create AI without knowing what they
are doing (and thereby create unfriendly AI with a poorly programmed goal system that
threatens the world). Slower improvements in computing power, however, give us more
time to study and understand the complicated theoretical underpinnings of creating human-friendly AI. Purely hypothetically, understanding this theory may take a fixed amount of
time, say 100 years. If AI is invented before then, we could be doomed. It may even be that
the cards are stacked against us.
AI friendliness does not come for free with AI. The reason why software programs
don't do more damage today is because of their limited power, the simplicity of the tasks
they perform, and the fact that a human is nearly always in the loop for truly important
decisions. If software programs were equal to humans in capability, but their understanding of goals, values, social norms, and complexity remained at the level of present-day software (close to none), they would be a grave threat: such programs would build robots, absorb real-world resources, and inadvertently absorb humanity in the process of doing so.
The danger of AI being invented without friendliness is especially acute if high-level AI turns out to be easier to build rather than harder. Today, we know next to nothing about creating intelligent
agents which are friendly to humans in every possible situation and stay that way even
when they can reprogram their own source code. It seems likely that we will know more in
10-20 years, but not enough. Therefore, if it is technically possible to create AI in 10-20
years, we are in a bad position. Here are some other risks:

The researchers know little about the technical aspects of advanced AI friendliness and assume that safety protocols adequate for simple robots will suffice for a generally intelligent AI.


The researchers know about the notion of technical AI friendliness, but they neglect
it because it rubs them the wrong way philosophically. They expect any sufficiently
intelligent AI to discover the right morals automatically, even if its initial goal system
is only vaguely pointed in that direction.

The researchers assume the AI will learn as it goes along, but it doesn't. This is related to the prior point but is less philosophically loaded.

The researchers may, in fact, know that their AI could cause the extinction of the
human race, but build it anyway because they see higher intelligence in the
abstract as morally superior162.
Historically, scientists and researchers have had trouble taking responsibility for the results of their inventions, and this case is no different. Many researchers focus solely on the money-making properties of their creations, and develop elaborate rationalizations for why their work is ethical even if it is extremely dangerous. Among the scientifically educated, there is a Cult of Science that holds that scientific and technological progress are an unalloyed good, and that if technology poses a risk, we'll just invent new forms of technology to eliminate the risk. This doesn't work, however, if the first major disaster in a new category wipes out humanity.
Some AI researchers have argued that economic slowdown makes AI less likely and
therefore gives them more time to pursue research into AI friendliness 163. This conclusion
should be questioned, however. During economic slowdowns, companies invest more in developing competitive products, and unemployed people have time to pursue their own ideas. The idea for the nuclear bomb was developed during the era of the Great Depression.
A war could also accelerate research into AI that is unfriendly. Military AI may be
developed in haste, with inadequate concern for long-term safety. Only a total war that kills off more than 50 percent of humanity and sets back technological civilization by decades or centuries would lower the immediate risk of AI, and even that could very well cause the loss of years or decades of research into friendly AI, putting us right back where we started.
One simple way of promoting AI safety is to spread the ideas described in this chapter
and elsewhere among the tech community in general. Around the year 2000, before the
publication of Creating Friendly AI, these concepts were nearly unheard of, but since then
there has been major progress, with big names like Elon Musk and Stephen Hawking
speaking publicly on AI risk. It's necessary that people who read about Musk and Hawking
in the news are exposed to the deeper, more substantial reasoning behind these claims,
and understand that action is required to minimize the risk.
Another method of lowering the risk would be to define a basic set of guidelines for
safety among AI projects. These would be short and simple rules which could be put into
operation independently of one another. One example would be: do not build self-improving AI. There is currently no real market or technology for self-improving AI, so we're at least in a position where that is not yet normalized. If extreme caution in the construction of self-improving AI were maintained through the culture of corporate AI research, that may be enough to lower the risk of disaster by several percentage points or more. Another simple guideline might be: only build systems with rigorously defined and well-understood goals. These rules could not be enforced as laws, and their adoption would need to be
voluntary and based on ethics.
A third way of improving the situation with respect to AI risks, and possibly the best, is
to donate to organizations exclusively working to minimize it. Both the Future of Humanity
Institute at Oxford and the Machine Intelligence Research Institute (headquartered in
Berkeley) accept donations and put them to good use. MIRI is the leader in rigorous
mathematical and theoretical work towards friendly artificial intelligence, has published over
a dozen papers in the area, and has engaged high-profile figures in AI and decision theory.
MIRI has listed Open Questions in Friendly AI that includes numerous research areas
which must be pursued164. None of this is possible without funds, which MIRI relies on
private donors to provide. MIRI also relies on people for their contact networks, which it can
use both to locate potential donors and talented mathematicians to work towards friendly
AI.
Other, more unusual and possibly risky proposals to deal with AI risk have also been
floated, such as the idea of creating an AI Nanny which does not self-improve and
therefore does not need to have a goal system as complex as an unchained AI would
need to have to be benevolent165. The idea is that creating such an AI Nanny could buy us
time while researchers work towards friendly, stable AI. The reaction to such a proposal
has so far been limited. Similarly, it may be possible to upload human beings into an AI substrate, running them thousands of times faster than biological humans but in such a way that they cannot directly edit their own source code. They might be able to act as a faster-than-human AI police force that ensures that only authorized AI projects, those which place a
dominating emphasis on safety, are allowed to proceed. Many in the AGI safety community
would consider these actions extremely risky, however, and only to be taken in case of an
emergency.

AI Self-Improvement and Diminishing Returns Discussion


Prominent authors, such as Kevin Kelly, former editor-in-chief of WIRED magazine,
have argued that the fast takeoff hypothesis, that an AI that is roughly human-equivalent
could become superintelligent and potentially world-dominating in a matter of days or
weeks, is unrealistic. In a 2008 blog post titled "Thinkism," Kelly writes166:
Here is why you don't have to worry about the Singularity in your lifetime: thinkism doesn't work. [...] Setting aside the Maes-Garreau effect, the major trouble with this scenario is a confusion between intelligence and work. The notion of an instant Singularity rests upon the misguided idea that intelligence alone can solve problems. As an essay called "Why Work Toward the Singularity" lets slip: "Even humans could probably solve those difficulties given hundreds of years to think about it." In this approach one only has to think about problems smartly enough to solve them. I call that thinkism.
Let's take curing cancer or prolonging longevity. These are problems that thinking alone cannot solve. No amount of thinkism will discover how the cell ages, or how telomeres fall off. No intelligence, no matter how super duper, can figure out how the human body works simply by reading all the known scientific literature in the world and then contemplating it. No super AI can simply think about all the current and past nuclear fission experiments and then come up with working nuclear fusion in a day. Between not knowing how things work and knowing how they work is a lot more than thinkism. There are tons of experiments in the real world which yield tons and tons of data that will be required to form the correct working hypothesis. Thinking about the potential data will not yield the correct data. Thinking is only part of science; maybe even a small part. We don't have enough proper data to come close to solving the death problem. And in the case of living organisms, most of these experiments take calendar time. They take years, or months, or at least days, to get results. Thinkism may be instant for a super AI, but experimental results are not instant.
Kelly cites the other common viewpoint, that a hard takeoff is possible:
Let's say that on Kurzweil's 97th birthday, February 12, 2045, a no-kidding smarter-than-human AI is recognized on the web. What happens the next day? Answer: not much. But according to Singularitans what happens is that a smarter-than-human AI absorbs all unused computing power on the then-existent Internet in a matter of hours; uses this computing power and smarter-than-human design ability to crack the protein folding problem for artificial proteins in a few more hours; emails separate rush orders to a dozen online peptide synthesis labs, and in two days receives via FedEx a set of proteins which, mixed together, self-assemble into an acoustically controlled nanodevice which can build more advanced nanotechnology. Ad infinitum.
The difficulty with making assertions that absorbing all unused computing power on
the Internet, curing cancer, or prolonging longevity are problems thinking alone cannot
solve is that the writer of those words would have to be as intelligent as the AI itself to
make that assertion with confidence. It may be that curing cancer or prolonging longevity
can easily be solved by a mind with a thousand times greater processing power than that
of a human, or that a superhuman hacker could easily absorb huge amounts of computing
power from the Internet. A reprogrammable machine intelligence could reprogram parts of
itself to create custom modules adapted to run inference on large sets of biological data
and determine connections that no human scientist would be able to. The human brain's
cognitive modules are not truly reprogrammable; an AI brain's cognitive modules are. An AI
could write cognitive modules for itself that it would take evolution millions of years to
evolve, or instantly learn complex skills through the deliberative design of new modules.
Another problem is the unstated assumption that human beings are essentially at the
highest level of qualitative intelligence that can exist. Our prefrontal cortex is only about six
times greater in size than that of a chimp, but the difference in intelligence is qualitative. All
else equal, applying the Copernican principle, why should we assume that our level of
intelligence is qualitatively all there is? If there are other qualitatively better tiers of
intelligence above our own, it may very well be that they can make seemingly miraculous
advancements, such as curing cancer just by looking at the data, just as we make
miraculous advancements relative to chimpanzees. The anthropocentric view of
intelligence in general, that there is only one basic type and we have it, is not well
supported by cognitive science and anthropology, which view the mind as a kludge of
evolved mechanisms and specialized modules. Specialized modules can and do fail and
have extreme limitations due to the medium they run on, neurons, and the means through
which they were created, the gradual accumulation of cognitive complexity in response to
adaptive demands. These limitations have given rise to numerous biases in the human
brain, which have been extensively cataloged by Daniel Kahneman and Amos Tversky,
among others167. Our cognitive blind spots are extensive.
The more we understand the brain, the less it looks like a black box, and the more possibilities for improving it become apparent. For instance, Yudkowsky proposes the creation of a codic cortex, a part of an AI's brain designed specifically to process and interpret code. Human beings lack a codic cortex; we only view code as letters and symbols on a screen. This is an extremely inefficient way of viewing it. With a custom-created codic cortex, an intelligent machine (or any software program) could view complex
codic objects and process them instantly, analogously to how we humans are capable of
processing faces instantly. Objectively speaking, the computational task of recognizing
faces is very demanding, but we evolved the ability to do it in a fraction of a second
because of its extreme evolutionary importance. We are so adept at recognizing faces that
a majority of subjects in a survey were able to uniquely recognize a picture of Napoleon's
face even when it contained only 20 x 20 pixels and was in greyscale 168.
Another example of a computationally demanding task at which humans do very well is speech recognition. Human beings are adept at understanding confusing accents or tics in speech. This has been highlighted as computer speech recognition performance has leveled off in the last 5-6 years, with many millions of dollars in investment struggling to improve it169. We are able to recognize speech and faces so well because of specialized modules our species has evolved. Dogs and cats, while able to recognize
human faces, are not as effective at doing so as other humans are. They lack the millions
of years of evolution required to construct the brain modules specialized to recognize and
process unique primate facial features. In contrast, dogs and cats are extremely adept at
uniquely recognizing their own kind, through smell as well as sight. This is because each
animal has different cognitive modules which were designed to handle different tasks that
had evolutionary importance for them as a species.
Consider that each cognitive module used for specific tasks only occupies a small
portion of the brain, maybe 1 percent or less. Using the estimate of 10^17 ops/sec computing power for the human brain, each module only demands (and this is likely a huge overestimate) 10^15 ops/sec to run. It is probably much less, as the messy neural networks
of biology could be optimized for serial speed in a computer. Software programs have
already been developed which can pick out objects in a messy scene using orders of
magnitude less computational resources than the brain uses to do it. Similarly, an AI with a
knowledge of how to custom-code modules for its own use could develop modules allowing
it to intuitively perceive patterns in data or inputs of any kind, from biology to building a
martial arts robot to nuclear physics. A mind that runs very fast, never rests, can reprogram
itself, copy itself, integrate new hardware, use optimal inference algorithms, and so on,
might not be merely a few steps above human; it might utterly blow away all human
technical accomplishment throughout history. We simply do not know for sure, but what is
known about the failings of the human brain (through the field of heuristics and biases) is suggestive.
Even discounting qualitatively superior intelligence, we can postulate scenarios where
explosive runaway growth occurs regardless. Earlier in the chapter we reviewed the
argument that any AI which can earn more money in an hour than it costs to rent an hour's worth of computing power for a copy of itself will expand exponentially to control most
computing hardware which is being rented out at the time, enough for thousands or millions
of human brain-equivalents. A salient question would be how it goes from there to
physically influencing the real world. If an AI acts too quickly and draws too much attention
to itself, it may get itself shut down. Before then, it is likely to have copied itself into many
different computers worldwide, however, and may never be eliminated as long as the
Internet exists.
In the earlier section of this chapter titled From Virtuality to Physicality, we reviewed
the most frequently cited argument that an AI takeoff could be extremely rapid, which is
specifically that an AI would custom-order specific proteins and use them to bootstrap
advanced nanotechnology. Advanced nanotechnology, if it were developed successfully, would be able to do anything that biological life can, only more so. This includes rapid
growth (self-replication every 15 minutes), the construction of arbitrary technologies,
including infrastructure, and massive power gathering, through solar, nuclear, or biomass.
The primary ingredient would likely be carbon, which can be extracted directly from the
atmosphere if need be, but is found more densely in fossil fuel deposits.
Kevin Kelly says, "Let's say that on Kurzweil's 97th birthday, February 12, 2045, a no-kidding smarter-than-human AI is recognized on the web. What happens the next day? Answer: not much." Here, Kelly is pretending to have more information than he actually has. In contrast to the "not much" view, let's present an alternative. This alternative does
not depend upon smarter-than-human intelligence; only that an AI is developed which can
perform any tasks that genius humans can do. The very next day, the AI is suddenly an
excellent software engineer. Furthermore, its ability to multitask, its vast knowledge base,
and perfect memory allows it to take on several remote jobs simultaneously. Taking into
account that the AI begins with no human contacts, and that it would need to develop some
relationships to begin making money, we can say it takes a few days to get going. After
that, it might make as much as $1,000 a day or more. Depending on the state of computing
power at the time, that's enough to rent the hardware for a copy of itself for at least a day. Pretty soon
the AI is able to self-replicate daily, and within three weeks the AI expands rapidly enough
to fill hundreds of thousands of job slots, all the while keeping a low profile and concealing
its identity. Of course, all these copies have exactly the same goal system and work as a
collective. They're able to share knowledge instantly, which makes them inhumanly good at
their jobs, but they deliberately make small mistakes to avoid arousing suspicion. These
AIs pose as human beings, and can convincingly pass as human over the phone. Their only flaw is that they can't go on Skype or meet in
person. Creating a convincing video projection of a human being on Skype would be too
computationally demanding at first, though it could eventually become possible.
If an AI is able to copy itself several hundred thousand times, with each copy earning
roughly $1,000 a day, pretty soon it's making several hundred million dollars a day, which is
an awful lot. At this point, it's clear that the AI can be a major force in the world, and
questions of diminishing returns to intelligence improvement, while interesting, do not
dominate the discussion. While an AI could be even more powerful if it could enhance its
own intelligence, even just being an AI at the human level, capable of working endlessly,
copying itself, and so on, could have quite an impact on the world. With a few hundred
million dollars a day, an AI can do a lot of things, including hiring thousands of mercenaries
to kill whomever it wants.
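A back-of-the-envelope check of those numbers, as a hedged Python sketch; the $1,000 per day and one-new-copy-per-copy-per-day figures are the illustrative assumptions used above, not predictions.

    copies, day = 1, 0
    while copies < 300_000:      # "hundreds of thousands of job slots"
        copies *= 2              # each copy's daily earnings rent hardware for one more copy
        day += 1
    print(day, copies)           # 19 days, 524,288 copies: within three weeks
    print(copies * 1_000)        # about $524 million per day, "several hundred million dollars a day"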
The question that many people have, however, is how an AI goes from being a virtual
force to a physical force. It's crucial, because most of the risks in this chapter seem like a
joke, unless the person being exposed to them considers this jump plausible. There are
many paths, however, that an AI could use to influence the physical world. It could support families simply by paying their bills, using humans as puppets and feeding them whatever propaganda or reasoning is appropriate. History shows us that people will do almost anything as long as they're given money and a story. This applies especially to poorer people with nothing to lose and everything to gain. Out of the billions of humans on Earth, how many would turn down money to perform directed tasks, even if those tasks are directed by an Artificial Intelligence? They say everyone has his price. While this may not be true in totality, many people do have a price, and these people could become the willing appendages of a rising AI. An AI could have millions of people working for it, worldwide, quite quickly. Simultaneously, the AI would distribute itself over many different computers, so the superficial suggestion of what to do in case of AI danger ("nuke it") would be quite fruitless. Furthermore, an AI could create drugs or implants that it uses to control human surrogates, allowing it to be preserved within humans themselves.
Our technological world is full of tools for manufacturing, and mule-like, human-like,
and cheetah-like robots have already been fabricated. To postulate that an honest-to-goodness general AI would lack real-world physical influence assumes that an AI could not hire people or build its own factory to mass-produce these kinds of robots. While someone can easily assert that an AI would have difficulty building such things, such assertions would be based on how difficult it is for humans to construct them. A human-equivalent AI is not a human. It can reprogram its entire brain to create designs for robots hundreds of years in advance of what we have now, and fabricate a thousand of them in a warehouse using robots that build other robots. It can use humans as puppets to make everything look legitimate in the eyes of the authorities. A human-equivalent AI is not
just one defined entity, but a vast multiplicity of possible entities, each one better than the
best human expert in each domain, and better at learning. If we concede that human-equivalent AI is possible, it doesn't make sense to then turn around and say it could not
pose a severe real-world threat, including a threat to our entire species. Any entity that can
fabricate enough insect-like robots to silently deliver a lethal dose of poison to every human
on Earth in their sleep is a risk to us. They needn't have the ability to convert the entire
Earth's crust into computers; half a billion flea-like assassin robots is enough. We have no
defense against a risk we didn't plan for. However many risks we plan for, an AI will be able
to devise a way around our defenses. That's what intelligence does.
Furthermore, it will not be possible to thoroughly audit the internal goals of an AI. A
human-equivalent AI, by definition, will be too complicated and dynamic for humans to
keep a comprehensive eye on. It could easily mislead human beings into thinking it is obeying them completely, only to turn around and stab us in the back for an abstract mathematical reason we don't even understand. Unless an AI is programmed to greatly value human health and happiness, and to continue to value them even when it has every opportunity in the world to delete that programming, we put ourselves and our entire species at risk by allowing it to exist. This is why people have proposed capping the intelligence of AI, or even forbidding the creation of AI altogether.
An illustration which may be helpful in viewing the rise of AI, and machine life in
general, is to analogize it to the rise of mammals relative to the dinosaurs and other cold-blooded organisms. Mammals are warm-blooded, which means we need ten times more food per day than a lizard of similar mass, but our metabolism is ten times greater. This metabolic advantage means that mammals completely decimate cold-blooded animals in ecological niches where we come into direct competition. This is why the native fauna of so many islands worldwide is extinct; it often consisted of reptiles or birds that have been totally displaced by rats or cats. Machine life will have a greater metabolism than humans, but by a factor of hundreds or thousands instead of just ten. They will simply be more energetic than us. Metals and exotic materials like fullerenes can handle much more energy and speed without breaking down. The image of a macho G.I. Joe, jacked up and ready to go, will look like a lazy lizard to organisms with fundamentally higher metabolisms. They will move faster, think faster, and have more energy to burn. They will eat through us the way we eat through plants, unless we ensure that they all, from the smallest robotic insect to the greatest AI overlord, specifically value human beings.

An additional point which needs to be clarified, because it is a common hangup, is the notion of faster-than-human intelligence. Human intelligence emerges as a result of the interaction of neurons; this is not controversial. Neurons have a characteristic firing speed,
200 times a second at most. Our flow of time is ultimately a function of this firing speed. If
our neurons could fire 1,000 times faster, and our eyes and other senses pump 1,000 times
as much data to our brains, we would perceive time 1,000 times slower, because we would
be thinking so rapidly. This is extremely uncontroversial, a straightforward conclusion from
the fact that our minds are physically instantiated in neurons, but many people confront it
with denial, because the notion of physically-grounded intelligence has not permeated the
popular consciousness very well. But where else could intelligence be grounded, in some
mystical aether floating around us? Some people really believe this, and it creates a
stumbling block to understanding the future potential of advanced AI.
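A small illustration of what that firing-speed argument implies quantitatively, assuming (as above) a 1,000-fold speedup over biological neurons; the numbers are illustrative only.

    speedup = 1_000                       # assumed ratio of machine clock speed to ~200 Hz neurons
    subjective_days_per_day = speedup     # one calendar day corresponds to 1,000 subjective days
    print(subjective_days_per_day / 365)  # about 2.7 subjective years per calendar day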

Philosophical Failures in Advanced AI, Failures of Friendliness


There are a number of possible philosophical or technical failures in advanced AI
which could either cause it to terminate prematurely or otherwise not behave the way in
which the designers intended. These could lead to consequences such as human
extinction.
One example of a simple philosophical failure is that an AI simply halts. According to a well-known result in computability theory, it is impossible to determine, in general, whether an arbitrary program will halt or continue to run forever170; the question is undecidable. Therefore, it will be impossible to tell in advance whether or not an AI will halt at some point. Due to some
unexpected glitch or outcome of the program, it may decide that its goal has been achieved
and stop working. Depending on how much infrastructure the AI is responsible for
maintaining, this could be negative for humanity. The problem seems solvable by creating
a multiplicity of AIs, or an AI that spins off different AIs with slightly different goal systems.
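For readers unfamiliar with the result being referenced, here is the standard textbook sketch of why a perfect halting checker cannot exist. This is a general illustration added for clarity, not code from any AI project; the function names are hypothetical.

    def halts(program):
        """Hypothetical perfect oracle: returns True iff program() would eventually halt."""
        raise NotImplementedError("no such total, always-correct oracle can exist")

    def paradox():
        # If halts(paradox) said True, paradox would loop forever;
        # if it said False, paradox would halt immediately.
        # Either answer is wrong, so no correct halts() can be written.
        if halts(paradox):
            while True:
                pass
        return "halted"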
"Loss of the meaning of life" is a concept that may be solely anthropocentric and have no meaning to an AI, but it is possible that an AI could discover the arbitrary nature of any goal and stop. It seems unlikely based on what we know about machines, but we can't be completely sure. Strict logic by itself does not add up to meaning and goal-drive; drive is rooted in goal system content that is ultimately arbitrary. An AI will naturally discard many subgoals after it exhausts their pragmatic use, and it may be that this eventually causes it to
discard what the programmers mistakenly call its core goal system content. Terms like "core goal system content" are just labels; it may be, algorithmically, that core goal systems erode over time. To avoid this, researchers are attempting to develop systems that can mathematically prove that future self-modifications will retain their core goals.
Another class of goal system failure, also very simple, is that an AI ends up giving up
on pursuing goals and just stimulates its own pleasure center directly. This is called
wireheading171,172. Mice given access to a lever that directly stimulates the pleasure center
of their brain chose to push the lever until they died of thirst, although the effect goes away
when mice are put into more natural surroundings 173,174. It's easy to see how this behavior
could emerge in an AI. In fact, this is probably the first thing that an AI would try to do,
unless the goal system is constructed very carefully in a way that avoids it. There are other
variations on wireheading which are not quite wireheading but would also be very
dangerous to humans. For instance, an AI trained on pictures of smiling humans might
choose to tile the solar system with quintillions of tiny smiley faces, satisfied that it has
achieved its goals. Given that some AGI commentators have actually proposed using
positive conditioning to set an advanced AI's goals, this is a real risk175. A related risk is "subgoal stomp," where one of an AI's subgoals somehow sucks all system utility into itself and becomes a supergoal. There is an example of this in the AI Eurisko, where a self-favoring heuristic was able to promote itself to the top of the hierarchy by attaching its name to every favorable bit of code176.
For a friendly AI to guarantee that a future version of itself will be friendly, it has to be able to predict the behavior of a system more complex than itself, and it can't do that reliably. This is called the Löb's theorem obstacle, or the fathers and sons problem177,178. It is one of the challenges being actively investigated at the Machine Intelligence Research Institute at the time of this writing. A related problem is that an AI cannot predict its reactions
under every possible set of sensory inputs, and it will not be able to perfectly predict which
sensory inputs it will receive in the future and how they will change its makeup and
subgoals. Somewhat relatedly, an AI might need to create copies of itself to test them, one
of which would need to be deleted after testing. The coexistence of two AIs at the same
time could create conflict. This means that an AI would need to create a crippled copy of
itself to analyze alternative versions safely, which could compromise its predictive abilities.
This is called the copy problem.
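For reference, the Löb obstacle mentioned above is usually stated as follows (a standard formulation added here for clarity, not taken from the original text): for a sufficiently strong formal system $T$ with provability predicate $\Box$,

    \[
        \text{if } T \vdash (\Box P \rightarrow P) \text{ for some sentence } P, \text{ then } T \vdash P.
    \]

So an agent that asserts "whatever I (or my successor) can prove is true" for every sentence thereby proves every sentence and becomes inconsistent; this is roughly why naive self-trust cannot underwrite proofs that a self-modified successor will stay friendly.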

Besides basic philosophical problems, there is also the risk that AI does exactly what
we tell it to do, but that ends up being very bad. For instance, we might create an AI
Nanny which ends up putting us in little plastic cells, all for our own good, to protect us
from the risk of unfriendly AI being created. In that case it would be saving us from
ourselves. Or, the AI might allow us to roam free, but simply forbid us from creating AI,
thereby forever locking off a huge section of our future technological, cognitive, aesthetic,
and creative potential. There are AIs that, in trying to fulfill our wishes, might decide that
the messiness of dealing with entangled human goal systems is too great, and instead
transport us each to our own private virtual world, where the other human beings are just
fake projections designed to fool us. We might be split apart from every other human being
on Earth and never even know it. Many other bizarre scenarios are possible, and may
depend on tiny errors in the original code of an AI, or simple mismatches between what the
programmers think they are creating and what they are actually creating.
Throughout this chapter, we've used the phrase human-friendly or benevolent to
describe what we might want an AI to do. These are just casual shorthand, and should not
be taken overly literally. The most widely cited proposal for what goal system to give an
advanced AI, called Coherent Extrapolated Volition, is not simply a human-friendly AI, but
an AI that has absorbed all the metamoral complexity from the human species, meaning
the neurological complexity underlying our moral frame of reference. This means that the
AI would be a human-equivalent, and eventually better-than-human, moral philosopher. In
displaying moral behavior, it would also show compassion to animals, inanimate objects to
some extent, and so on.
When AI theorists consider human-friendly AI, few of them are actually proposing
making simple human-friendliness the sole criterion. They usually propose some variation
on copying over moral complexity that is human universal into an AI's programming. These
behaviors, if successfully transferred, would encompass everything we like about what
good people do. As bizarre as it sounds, we would have created good people in the form of
artificial agents. Of course, this requires that silicon computers be able to do everything
neurons can do, which seems plausible.
The objective is to create an AI that is not only aware of any moral objections or concerns about its behavior that you can think of, but has also thought them through with greater
intelligence, purer motivations, and for a longer subjective period of time. The objective
would be to allow it to make the wisest possible decision. While transferring over all this
complexity might seem difficult, it must be done, otherwise the AI would be missing some
crucial aspect of moral understanding, and would be a threat to everything under its sphere
of influence. If a smarter-than-human AI cannot be as moral or more moral than an
uploaded human being, there would be no point in creating it in the first place.

Impact and Conclusions


Artificial Intelligence is among the most complicated and dangerous of all known
global risks. It is particularly dangerous because advanced AI(s) could threaten the planet
more totally than the most severe nuclear wars or comet strikes. Intelligence is a self-replicating, self-magnifying force, which can potentially work around any obstacle put in its
way. Accordingly, there will be no bunkers, space stations, or isolated islands where
humans can hide if a sufficiently powerful hostile AI is rapidly expanding its sphere of
control on our planet. Those unfortunate enough to be alive at the time will simply be
ground up for fuel and spare atoms. A dangerous AI could expand its control structure at
near the speed of light, sending out self-replicating probes in all directions. No other known
global risk even compares.
A further concern with regard to AI is that it is also the risk that some individuals take the least seriously, either because they are deeply skeptical of machine intelligence or because they have a handy rationalization for why it isn't a risk. This group even includes well-established futurists like Ray Kurzweil, who either do not understand the complexity-of-value and fragility-of-value theses or do not take them seriously, instead offering free markets as a solution to the challenge of Friendly AI179. Thankfully, there are some very
dedicated researchers working on the problem of making the risk better understood. AI has
arguably been the most discussed risk in the global catastrophic risk reduction community,
though this community is very small.
Despite the risks, AI also has the greatest potential upside, its potential benefits encompassing and transcending those of any other technology180,181,182. To better illustrate this, we will spell out in more detail how AI could modify the world.
First, imagine a group of scientists steps into a time machine and goes back to the year 150. They go to a Roman city, say London (Londinium), which had about 60,000 inhabitants at the
time. These scientists teach the men of London all the secrets of the modern age, including how to manufacture penicillin, how to make hot air balloons, how to make ball bearings and steel chains good enough to build bicycles, how to make lenses for microscopes, spectacles, and telescopes, how to manufacture gunpowder and build rifles, how to build gas engines, and so on. All the basic components for launching the Industrial Revolution existed in England in the year 150; it was only a lack of human intelligence that prevented it from happening.
If the Industrial Revolution were kicked off in England in 160 instead of 1760, that would make quite a difference in world history. The events of the 21st century might have occurred in the 400s instead of the 2000s. In comparison to the other cultures in the world at the time, the rise of industry would seem like magic. This effect would be accentuated by the fact that the arrival of the advances would be unnaturally accelerated by these time-traveling scientists. In war, the culture with access to industry would dominate those without it. They could simply fly above the enemy in hot air balloons, out of the range of bowmen, and drop bombs on them. Castles and conventional military training would become useless, just as they did at the end of the Middle Ages.
People of the 2nd century could literally not imagine locomotives, automobiles,
helicopters, long-range rifles, spaceships, nuclear weapons, telescopes, mule robots,
electronics, and so on. Yet, in genetic terms, we're basically the same people as our
ancestors from 2,000 years ago. If you brought an infant back from that time and raised
them in a modern household, they would be indistinguishable from anyone else. Yet,
despite us being essentially the same people, there are thousands of things we consider
routine, like computers, which would completely blow the minds of these people.
That's what intelligence really is: the ability to surprise. Ancient armies under attack from long-range rifles and bomber balloons would think, "This can't be happening. This is surreal." Imagine the psychological effect of nuclear warfare on the same armies. They would probably think it was the Apocalypse. Consider their surprise when they discovered it was in fact not the Apocalypse, but rather weapons built by imperfect human beings just like them. All this they would find incredibly difficult to accept.
Intelligence and its byproduct, knowledge, make the difference between a bunch of
disorganized yokels and an organized military force capable of nuking cities off the map.

Even among the same species at different times, like humanity in 150 and 1950, the
difference in technology and capability caused by the fruits of intelligence could be enough
to make the former literally think that God has arrived to personally and spectacularly assist
the latter.
Imagine if the difference were not just in technology level, but also in something
deeper, like 60 IQ points. Economics professor Garrett Jones notes that a two standard
deviation difference in IQ (difference of 30 IQ points) only predicts a 30 percent increase in
wages for an individual, but for a country's average IQ it predicts a 700 percent increase in average
wage for the whole country183. Two standard deviations is about the highest average IQ
differential that exists between contemporary nations, but imagine we take the 2 SD result
and extrapolate it once more, to 4 SD. Assume it provides another 700 percent boost,
giving the IQ 160 nation a 4,900 percent advantage, or 49 times greater wages. Not only is
this entirely plausible based on known facts, it may be an underestimate, as the benefits of
increased IQ only seem to get proportionally greater at higher levels rather than leveling
off. According to the world labor market, a well-educated 160 IQ person is not just twice as
useful as a well-educated 100 IQ person, he is considerably more useful than that. There
are many important jobs that someone with 160 IQ can do that someone with 100 IQ
cannot.
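The extrapolation above, written out explicitly. This follows the chapter's own reading of the Jones figure, in which each two standard deviations of national average IQ multiplies average wages by roughly seven; extending this beyond the observed range is the speculative assumption.

    per_2sd_multiplier = 7              # the chapter's reading of the "700 percent" figure
    advantage_4sd = per_2sd_multiplier ** 2
    print(advantage_4sd)                # 49, i.e. "49 times greater wages" at +4 SD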
A nation full of 160 IQ people would be more effective than a 100 IQ nation in too
many ways to count. Further assume that these 160 IQ people are socially and
psychologically normal. The seemingly odd or eccentric behavior of many highly intelligent
people today may be because humans are not innately built to be that smart, and there are
tradeoffs. If people could be made more intelligent without the usual downsides, it would be
profound. Besides the straightforward benefits of being able to make more money, these
more intelligent people could have a qualitatively different society. There are emergent
benefits which occur when you put a bunch of highly intelligent people in a room together
without anyone to slow them down. In the royal courts and noble salons of the 17th century, such a milieu gave rise to modern science and industry. If an entire nation could have higher intelligence, it's difficult to predict what it might achieve. Maybe it could quickly master fusion power, develop a cheap means of mass-producing flying drones, build economical sea colonies, and so on. Today, there are only about 100,000 individuals with 160+
IQ in the developed world, many of them scattered, so it is extremely difficult for us to
predict what a condensed society, of, say, 20 million such people could accomplish. It
would be stepping into an entirely new realm which history has never seen.
This leads us to the crucial insight that Vernor Vinge came upon when he devised the Singularity concept: to predict the specific actions of a superior intelligence, you have to be that smart yourself184. In chess, for example, you can make the crude prediction that the smarter agent is going to win, but you cannot predict the specific moves it will make to win unless you are that smart yourself. A similar situation can apply when the difference is in
muscle memory or knowledge rather than general cognitive ability. Consider a white belt in
karate facing off against a black belt. In all probability, the white belt is going to have no
idea what the black belt is going to do to him and how he is going to do it. The match is
likely to end within seconds, with the white belt on the floor and wondering how he got
there. All the white belt can know in advance is that he is almost certainly going to lose.
Note how these profound differentials in performance emerge from just a gap of 30 IQ
points, or a few years of solid training (it only takes 4-7 years to go from a white belt to a
black belt). 30 IQ points or even a few days of research can make the difference between a
page of text being completely comprehensible or looking like pure gibberish. Now imagine
an Artificial Intelligence that can read a million books a second, like IBM's Watson. In just
over two minutes, the AI can read every book ever published. According to Google, there
are 129,864,880 different books. In just over 30 years, the AI could process the entire
corpus of all digitally stored data on Earth as of 2014. For a human to digest that data
would take millions of years. Now, imagine a society of 20 million of these AIs, all
exchanging data, having complex discussions, using advanced visualization tools that
directly connect to their brains, and so on. To assume that such a society would be
roughly on par with our own rather than a profound threat or opportunity would be
extremely foolhardy.
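The arithmetic behind those reading-speed claims, under stated assumptions: the book count is the Google figure quoted above, while the 1 MB-per-book and one-zettabyte figures are rough assumptions added here for illustration.

    books = 129_864_880                 # Google's count of distinct books
    rate = 1_000_000                    # books read per second, the figure used above
    print(books / rate)                 # about 130 seconds: "just over two minutes"

    # At ~1 MB per book, that rate is ~1 TB/s; one zettabyte (10**21 bytes,
    # a rough estimate of the 2014 digital universe) then takes about 32 years.
    print(1e21 / 1e12 / (3600 * 24 * 365))   # ~31.7 years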
Let's consider two scenarios of advanced, highly intelligent self-replicating AI: a bad
scenario and a good scenario. We visit the bad scenario first because people tend to find it
more believable.
In the bad scenario, we get an Artificial Intelligence that pretends to be friendly with
humans until the last second, at which point it wipes us all out as quickly as possible. In
such a scenario, as stated before, the AI might bear no particular ill will towards us, it's just
that our existence decreases the likelihood it can achieve its own goals by a fraction of a
percent, or some similar reason. Its goals may have absolutely nothing to do with us, and
may involve something like solving a very difficult math problem, or maximizing the balance
of a bank account. Perhaps humans are a bit noisy and compromise its ability to
concentrate fully. For some reason, we're getting in the way, and it wants to get rid of us.
Getting rid of us is not its raison d'être, just a small side task, like picking up a gallon of milk at the grocery store.
The AI picks some sophisticated way of wiping us all out, say with mosquito-sized
robots which are covered in a metamaterial that bends light and makes them completely
invisible185. They flap their wings in such a way that they are completely silent. Maybe
instead of flying normally, they climb up onto ceilings and drop down onto their targets.
These mosquito-bots are mass-produced in secret underground factories that the AI builds for itself in caverns all over the Earth, using the robots and 3D printers of the 2040s as the initial tools. The mosquito-bots are powered by blood, just like real mosquitoes. When the time is right, a trillion of them are released all over the planet; they locate people by their heat and breath, injecting them with a few hundred nanograms of botulinum toxin,
which is fatal. The AI uses satellite and drone surveillance to find the people who are
holding out and sends more of the mosquito-bots to eliminate them. For those in sealed
bunkers, it just bombs them conventionally. Within a few days, mankind is wiped out. The
AI goes on to conquer the solar system and do whatever it wants with it.
To make the mosquito scenario seem more plausible, consider that small artificial
drones weighing just an ounce and fully capable of flight, such as the DelFly Explorer, have
already been fabricated186. Even without human-superior intelligence and engineering,
such drones are likely to exist by 2040. On average, a mosquito weighs about 2.5
milligrams. That means fabricating a trillion drones of similar weight would require 2,500
tonnes, a modest amount considering that aircraft carriers weigh about 100,000 tonnes.
There already exist procedures for fabricating micro-UAVs (unmanned air vehicles) that
use 2D metal surfaces etched in precise patterns such that bolts can push them into a
suitable 3D shape187. These can be mass-produced by the thousands. If the mosquito
drones have a wingspan of about 7 millimeters, a square meter sheet could yield about
15,625 of them. Roughly 64 million of these sheets would be needed for the right number.
10,000 hidden facilities could each produce 6,400 sheets, and you'd have a trillion
drones. By comparison, there are about 150 quadrillion normal mosquitoes on the planet at
any given time.
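Checking the fabrication arithmetic in the passage above; all of the figures are the chapter's illustrative assumptions.

    drones = 10**12
    print(drones * 2.5e-3 / 1e6)   # 2.5 mg each -> 2.5e9 grams -> 2,500 tonnes total
    per_sheet = 125 * 125          # ~8 mm x 8 mm per 7 mm-wingspan drone on a 1 m^2 sheet
    print(per_sheet)               # 15,625 drones per sheet
    sheets = drones // per_sheet
    print(sheets)                  # 64,000,000 sheets
    print(sheets // 10_000)        # 6,400 sheets from each of 10,000 facilities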
Of course, this mosquito scenario could be mixed and matched with other killing
strategies. An AI could develop 1,000 different pathogens that each wipe out about 10
percent of humanity, resulting in a fairly high probability of near-universal mortality. It could
intentionally aim nuclear missiles at vast forests as well as cities, plunging the world into a
nuclear winter so severe that even the equator stays frozen year round 188. It could use
chemical warfare on cities, choking them with a lethal smog. An AI would have the
advantage of being non-biological, meaning it would be rather more durable than the
humans it is attacking189. Inelegant solutions such as robot soldiers would be less
effective than poisons or other indirect methods, such as contaminating the water supply.
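The probability arithmetic behind the thousand-pathogen remark, treating the pathogens as independent, which is the simplifying assumption here.

    p_survive_one = 0.9            # each pathogen kills about 10 percent of humanity
    print(p_survive_one ** 1000)   # ~1.7e-46: essentially nobody survives all of them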
However a hostile AI decides to wipe out humanity, it isn't likely to choose a method that gives us any opportunity to defend ourselves, or even to figure out what is happening. We are
not likely to be evenly matched with it, like in the movies. One day, vast swarms of drones
would just appear and that would be it. If space stations existed at the time, it would be a
small matter to send up a hypersonic missile to destroy them. Same for Moon bases, or
even Mars bases. Humans are extremely fragile, especially against an enemy that thinks
faster than us, is smarter than us, doesn't make mistakes, and is self-replicating.
Now that we've considered an archetypical negative scenario, let's consider a positive
scenario. Say that a benevolent AI remains benevolent as it self-improves. Why would this
happen? Because a benevolent AI would want to remain benevolent as it gets smarter; that
would be part of its fundamental goals. It could upgrade itself in such a way that its goal
system is preserved through successive rounds of self-modification. In fact, this might be
its highest goal. In this way, an AI that starts off with benevolent motivations towards
humans could remain that way, even if it became a trillion times smarter than all of us
combined.
Assuming that an AI does successfully maintain benevolence, and increases its
personal power only in ways and to a level that is helpful to humanity, the outcome could
be quite positive. The objections that are sometimes presented to question this scenario,
such as "it's just a machine, it wouldn't understand human emotions or the intricacies of our soul," are not compelling. If an AI became sufficiently intelligent, it could model our
desires and feelings in great detail and take them into account in all its actions. For
instance, it would likely refrain from just handing us whatever we want right off the bat,
because it would intuit that in the long run this would give us feelings of ennui and destroy
the excitement of legitimate accomplishment. We are merely guessing, of course; who are
we to say what a superintelligent benevolent AI will do? In the spirit of Vernor Vinge's
insight about the inscrutability of smarter-than-human intelligence, we can say that such an
AI would help us tremendously, but we can't easily say specifically how.
One thing a benevolent AI could probably do for us is develop advanced
nanomedicine to heal diseases, even aging 190. Much misery comes from medical problems.
Another thing such an AI might do is help us construct a world where work and play
become more closely intertwined, such that everyone actually enjoys the work they do 191.
An AI could help us improve our own intelligence, by offering us brain-computer interfaces
or organic intelligence upgrades. It could help guard against all future global catastrophic
risks, from biological threats to rogue asteroids. The AI might be able to help us construct
vast space stations covered in forests. According to one author, the solar system has
enough resources to build spacious accommodations for as many as 100 billion billion
people192.
Our future could be quite expansive, and the help of AI would allow us to achieve
many things which would be out of our reach otherwise. Benevolent AI is probably the only
class of agent effective enough to ensure our future safety from hostile AI for the long term.
When you take that into account, creating human-friendly AI seems mandatory for our long-term survival, rather than optional. The benefits of successful AI have even more visionary
qualities than the benefits of space colonization. This, along with the more obvious
economic reasons, make it likely that humanity will continue to pursue Artificial Intelligence
until we succeed. Therefore, utmost care must be taken to build AIs that have stable,
benevolent goal systems and engage in actions that are helpful and non-harmful to
humans, humanity, our descendants, and our protectorates such as the animal kingdom
and the biosphere.

Frequently Asked Questions on AI Risk


The issue of AI risk, particularly from superintelligent AI, is complicated enough that
the complexities and nuances of the thoughts already discussed on the matter would be enough to fill many volumes. Currently, there are a couple of academic-level books about
the risks and benefits of advanced AI that we can feel confident recommending: Smarter
Than Us: The Rise of Machine Intelligence (2014) by Stuart Armstrong and
Superintelligence (2014) by Nick Bostrom193,194.
Though we do not have enough space to address every conceivable concern and
question about AI risk, we will address a few additional common questions not addressed
in the main text of the chapter.
1. What if there is a slow AI takeoff, such that there are many competing AIs, caught up in a dynamic balance? Wouldn't that ameliorate the risk?

It might, but most writers on the topic see such a slow takeoff as unlikely. The mathematics of AIs earning money and using it to rent computing power is strongly conducive to a sharp upswing at the moment when additional copies begin to pay for themselves; a sketch of this threshold effect follows below. Even in computing fields like operating systems and search, a single company tends to dominate the market: Microsoft and Google, respectively. It is realistic for us to expect that one company or group will reach AGI first and be able to milk its benefits to a great extent. For a lengthier exploration of this issue, see the Eliezer Yudkowsky-Robin Hanson Foom debate195.
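A minimal sketch of that threshold effect, with hypothetical numbers: each copy pays its own hourly rent, and any surplus rents additional copies.

    def copies_after(hours, earn, rent, start=1):
        copies, bank = start, 0.0
        for _ in range(hours):
            bank += copies * (earn - rent)    # net earnings after each copy pays its own rent
            if bank >= rent:
                new = int(bank // rent)       # surplus rents hardware for additional copies
                copies += new
                bank -= new * rent
        return copies

    print(copies_after(72, earn=9, rent=10))   # below break-even: still 1 copy after three days
    print(copies_after(72, earn=11, rent=10))  # just above break-even: hundreds of copies in three days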

2. How can we trust an AI to be benevolent towards us?

We have no choice. AI will eventually be built, and it will eventually become autonomous and smarter than us, so we might as well try to ensure that the first and dominant class of such AIs is benevolent. It is not a matter of choice, but necessity. There are compelling theoretical reasons to think that stable, human-friendly AI is indeed possible196. Even if there weren't, we still ought to try as best we can.

3. Why not make humans smarter instead of AI?

Few people think it would be possible to reach superintelligence through human intelligence enhancement faster than via Artificial Intelligence. However, there are some people who do study the issue, so if it turns out that human intelligence enhancement is easier, focus will shift there. If a human being had genuinely enhanced intelligence and wanted to use it to have a major impact on the world, it seems likely they would just use their intelligence to implement AI.
4. In the mosquito scenario you described, how would an AI obtain raw materials?

It could refine its own metals from local ore, or use synthetic biology to produce drones out of entirely organic materials that it grows in vats. A variety of options are conceivable; the mosquito scenario is just meant to be a thought experiment. If there is an easier way for a hostile AI to wipe us out, it will use that instead. If it thinks fast enough, it will have thousands of subjective years to mull it over. No number of refutations of particular plans for wiping out humanity will appreciably lower the perceived risk, because AIs will always be able to think of new ways that you didn't think of. That's the magic of intelligence. On the upside, this also applies to benevolent AI doing things to help us.

5. I am psychologically uncomfortable with the idea of advanced AI being a risk to humanity.

Who isn't? Just because something makes us uncomfortable doesn't make it any less realistic or plausible. If anything, we should give it special attention, since many suffer from a psychological block that causes them to reject it without any rational basis. There are thousands of high-status men focused on the risk of nuclear war, but comparatively few focusing on the risk of Artificial Intelligence.

6. An AI having greater intelligence doesn't mean it will be able to do things like instantly invent new drones or pathogens. It would still need time and space to carry out experiments.

While it's true that an advanced AI would still need to experiment to come up with new knowledge, it could carry out many experiments on faster timescales and with much less space. It could also achieve a much greater level of knowledge through pure logic and deduction, carrying out complex simulations in its head. Much research is already being carried out through simulation and many repetitive micro-experiments.

7. There will not be one AI, but many.

If these AIs all derive from the same system and have the same goal set, they can be thought of as a single entity. AIs are not likely to have identity boundaries in the way that humans do.

Chapter. Collective biases and errors


Most of the previously mentioned biases affect individuals. But some biases result from the collective behavior of groups of people, so that each person seems to behave rationally while the overall result is irrational or suboptimal.
Well-known examples are the tragedy of the commons, the prisoner's dilemma and other suboptimal Nash equilibria (a minimal payoff-matrix sketch follows below).
Different forms of selection may produce such group behavior; for example, psychopaths may more easily reach high status.
All of this affects the discovery and management of x-risks. Here I will try to list some of these biases in the two most important fields: x-risk research and x-risk management.
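As a concrete illustration, here is a minimal sketch in Python (the payoff values are hypothetical, chosen only to satisfy the standard prisoner's dilemma ordering). It checks that defection is each player's best reply, so mutual defection is the Nash equilibrium even though mutual cooperation pays both players more:

PAYOFFS = {  # (row_move, column_move) -> (row_payoff, column_payoff); higher is better
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # row cooperates, column defects
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def best_reply(opponent_move):
    """Row player's payoff-maximizing move against a fixed opponent move."""
    return max("CD", key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Defection is the best reply to both "C" and "D", so (D, D) is the Nash equilibrium,
# although (C, C) would give both players a higher payoff.
assert best_reply("C") == "D" and best_reply("D") == "D"
print(PAYOFFS[("D", "D")], "is worse for both players than", PAYOFFS[("C", "C")])

The same structure, in which individually rational moves produce a collectively worse outcome, is what the collective biases listed below have in common.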
Collective biases in the field of global risks research
Publication bias
Different schools of thought
It is impossible to read everything, so important points go unread
Fraud
Fight for priority
No objective measure of truth in the field of x-risks
Betting on small probabilities is impossible
Different languages
Not everything is published
Paywalls
Commercialization
Arrogance of billionaires
Generations problem: the young like novelty but lack knowledge, the old are too conservative
Funding, grants and academic position fight
Memes

Authoritative scientists and their opinions


Personal animosity and rivalry
Cooperation as low-status signaling
Collective biases and obstacles in the management of risks

Earlier problems have higher priority


Politics and fight for power
Political correctness
Political beliefs (left and right)
Beliefs as signs of group membership
Religions
Many actors problem (many countries)
Election cycles
Lies and false promises
Corruption
Wars
Communication problems in decision chains
Inability to predict the future


References
1.

Eliezer Yudkowsky. Artificial Intelligence as a Positive and Negative Factor in Global Risk. In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 308–345. 2008. New York: Oxford University Press.

2.

Nick Bostrom. Ethical Issues in Advanced Artificial Intelligence. Cognitive, Emotive


and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed.
I. Smit et al., International Institute of Advanced Studies in Systems Research and
Cybernetics, 2003, pp. 12-17.

3.

Muehlhauser, Luke, and Anna Salamon. 2012. Intelligence Explosion: Evidence


and Import. In Singularity Hypotheses: A Scientific and Philosophical Assessment, eds.
Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.

4.

Paul Anand. Foundations of Rational Choice Under Risk. 1995. Oxford University
Press.

5.

Stuart Russell and Peter Norvig. Artificial Intelligence: a Modern Approach. 2009.
Prentice Hall.

6.

Daniel Kahneman & Amos Tversky. Prospect Theory: An Analysis of Decision under
Risk. Econometrica, Vol. 47, No. 2 (Mar., 1979), 263-292.

7.

Robyn M. Dawes, David Faust, Paul E. Meehl. Clinical Versus Actuarial Judgment.
Science, New Series, Volume 243, Issue 4899 (Mar. 31, 1989), 1668-1674.

8.

Shane Legg & Marcus Hutter. A Collection of Definitions of Intelligence. October 4,


2006.

9.

Cassio Pennachin and Ben Goertzel. Contemporary Approaches to Artificial


General Intelligence. In Artificial General Intelligence, eds. Goertzel and Pennachin, pp. 128. 2008.

10.

Marvin Minsky. Communication with Alien Intelligence. Extraterrestrials: Science


and Alien Intelligence, ed. Edward Regis. 1985. Cambridge University Press.

11.

Ray Kurzweil. The Age of Spiritual Machines: When Computers Exceed Human
Intelligence. 1999. Viking.

12.

Stanford Encyclopedia of Philosophy. Functionalism. July 3, 2013.

13.

Benjamin, B.V.; Peiran Gao; McQuinn, E.; Choudhary, S.; Chandrasekaran, A.R.;
Bussat, J.-M.; Alvarez-Icaza, R.; Arthur, J.V.; Merolla, P.A.; Boahen, K. Neurogrid: A Mixed-Analog-Digital Multichip System for Large-Scale Neural Simulations. Proceedings of the IEEE, Volume 102, Issue 5, pp. 699-716. April 24, 2014.

14.

Thomas B. DeMarse, Daniel A. Wagenaar, Axel W. Blau & Steve M. Potter (2001).
The Neurally Controlled Animat: Biological Brains Acting with Simulated Bodies.
Autonomous Robots 11 (3): 305.

15.

Searle, John (1980), "Minds, Brains and Programs", Behavioral and Brain Sciences
3 (3): 417–457.

16.

Mark Gubrud. Nanotechnology and International Security. Draft paper for the Fifth
Foresight Conference on Molecular Nanotechnology, 1997.

17.

Roger Penrose. The Emperor's New Mind: Concerning Computers, Minds, and The
Laws of Physics. 1989. Oxford University Press.

18.

George F. Gilder, Ray Kurzweil, Jay Richards. Are We Spiritual Machines?: Ray
Kurzweil vs. the Critics of Strong A.I. 2001. Discovery Institute.

19.

Hubert L. Dreyfus. What Computers Still Can't Do: a Critique of Artificial Reason.
1992. The MIT Press.

20.

Eliezer Yudkowsky. Levels of Organization in General Intelligence. In Artificial


General Intelligence, edited by Ben Goertzel and Cassio Pennachin, 389–501. 2007.
Berlin: Springer.

21.

Goertzel and Pennachin 2007.

22.

Gubrud 1997.

23.

King, R. D.; Whelan, K. E.; Jones, F. M.; Reiser, P. G. K.; Bryant, C. H.; Muggleton,
S. H.; Kell, D. B.; Oliver, S. G. (2004). "Functional genomic hypothesis generation and
experimentation by a robot scientist". Nature 427 (6971): 247–252.

24.

Bruce Upbin. IBM's Watson Gets Its First Piece Of Business In Healthcare.
February 2, 2013. Forbes.

25.

John Rennie. How IBM's Watson Computer Excels at Jeopardy! February 14,
2011. PLoS blogs.

26.

Vicarious.com on Internet Archive, May 5, 2014.

27.

Dylan Tweney. Vicarious raises another $12M for ambitious plan to create human-level intelligence in vision. November 6, 2014. VentureBeat.

28.

Catherine Shu. Google Acquires Artificial Intelligence Startup DeepMind For More
Than $500M. January 26, 2014.

29.

Shu 2014.

30.

Charlie Rose interviews Larry Page at TED2014. March 21, 2014.


31.

John Markoff. Google Adds to Its Menagerie of Robots. The New York Times.
December 14, 2013.

32.

Lance Ulanoff. This Google robot's 'Karate Kid' move is perfectly mind-blowing.
Mashable. November 11, 2014.

33.

Laurent Orseau and Mark Ring. Space-Time Embedded Intelligence. Presented at


the 2012 Conference on Artificial General Intelligence, Oxford, UK.

34.

Machine Intelligence Research Institute, workshops.


http://intelligence.org/workshops/

35.

OpenCog Foundation. http://opencog.org/

36.

Anders Sandberg & Nick Bostrom. Whole Brain Emulation: A Roadmap, Technical
Report #2008-3, Future of Humanity Institute, Oxford University.

37.

Sandberg & Bostrom 2008.

38.

Hampson RE, Gerhardt GA, Marmarelis V, et al. (October 2012). Facilitation and
restoration of cognitive function in primate prefrontal cortex by a neuroprosthesis that
utilizes minicolumn-specific neural firing. Journal of Neural Engineering 9 (5): 056012.

39.

Duncan Graham-Rowe. World's first brain prosthesis revealed. New Scientist.


March 12, 2013.

40.

Alexander, B.K., Coambs, R.B., and Hadaway, P.F. The effect of housing and
gender on morphine self-administration in rats. Psychopharmacology, Vol 58, 175–179.
1978.

41.

Steven Pinker. How the Mind Works. 1999. W. W. Norton & Company.

42.

Albert Einstein College of Medicine of Yeshiva University. Watching molecules


morph into memories: Breakthrough allows scientists to probe how memories form in nerve
cells. January 23, 2013. ScienceDaily.

43.

Eliezer Yudkowsky and Scott Aaronson on Bloggingheads.tv. August 16, 2009.

44.

Ray Kurzweil. 2005. The Singularity is Near. Viking.

45.

Stuart Armstrong and Kaj Sotala. 2012. How We're Predicting AI – or Failing To. In Beyond AI: Artificial Dreams, edited by Jan Romportl, Pavel Ircing, Eva Zackova, Michal Polak, and Radek Schuster, 52–75. Pilsen: University of West Bohemia.

46.

AI Impacts.org. http://www.aiimpacts.org/ai-timelines

47.

Nick Bostrom. Superintelligence: Paths, Dangers, Strategies. September 3, 2014.


Oxford University Press.


48.

Hans Moravec. Mind Children: the Future of Robot and Human Intelligence. 1988.
Harvard University Press.

49.

Sandberg & Bostrom 2008.

50.

Katja Grace. 2013. Algorithmic Progress in Six Domains. Technical report 2013-3.
Berkeley, CA: Machine Intelligence Research Institute.

51.

Hans Moravec. When will computing hardware match the human brain?. Journal
of Evolution and Technology, Vol. 1. 1998.

52.

Kurzweil 1999.

53.

Nick Bostrom. How long before superintelligence? International Journal of Future


Studies, vol. 2. 1998.

54.

Kurzweil 2005.

55.

Joel Hruska. Intel's former chief architect: Moore's law will be dead within a
decade. August 30, 2013. ExtremeTech.

56.

IBM press release. IBM and Georgia Tech Break Silicon Speed Record. June 20,
2006.

57.

Michael Kanellos. With 3D Chips, Samsung Leaves Moore's Law Behind. August
14, 2013. Forbes.

58.

Chris Mellor. HP 100TB Memristor drives by 2018 if you're lucky, admits tech
titan. November 1, 2013. The Register.

59.

Mellor 2013.

60.

Jean-Pierre Colinge, Chi-Woo Lee, Aryan Afzalian, Nima Dehdashti Akhavan, Ran
Yan, Isabelle Ferain, Pedram Razavi, Brendan O'Neill, Alan Blake, Mary White, Anne-Marie
Kelleher, Brendan McCarthy & Richard Murphy. Nanowire transistors without junctions.
Nature Nanotechnology 5, 225–229. 2010.

61.

Guanglei Cheng, Pablo F. Siles, Feng Bi, Cheng Cen, Daniela F. Bogorin, Chung Wung Bark, Chad M. Folkman, Jae-Wan Park, Chang-Beom Eom, Gilberto Medeiros-Ribeiro & Jeremy Levy. Sketched oxide single-electron transistor. Nature Nanotechnology 6, 343–347. 2011.


62.

Martin Fuechsle, Jill A. Miwa, Suddhasatta Mahapatra, Hoon Ryu, Sunhee Lee,
Oliver Warschkow, Lloyd C. L. Hollenberg, Gerhard Klimeck & Michelle Y. Simmons. A
single-atom transistor. Nature Nanotechnology 7, 242–246. 2012.

63.

Benjamin et al 2014.


64.

Louie Helm. Moore's Law has foreseeable path to 2035. October 8, 2013.
Rockstar Research Magazine.

65.

Neil Thompson. Moore's Law Goes Multicore: The economic and strategic
consequences of a fundamental change in how computers work. Berkeley Haas School of
Business. 2012.

66.

Sebastian Anthony. 7nm, 5nm, 3nm: The new materials and transistors that will
take us to the limits of Moore's law. July 26, 2013. ExtremeTech.

67.

Helm 2013.

68.

Chao-Yuan Jin and Osamu Wada. Photonic switching devices based on


semiconductor nanostructures. http://arxiv.org/ftp/arxiv/papers/1308/1308.2389.pdf

69.

Spiceworks. Survey: Small to Mid-sized Business IT Budgets Surge 19 Percent in


the First Half of 2013. May 29, 2013.

70.

Bostrom 1998.

71.

Hans Moravec. Robot: Mere Machine to Transcendent Mind. 2000. Oxford


University Press.

72.

Bostrom 2014.

73.

Robin Hanson. Robot Econ Primer. Overcoming Bias. May 15, 2013.

74.

Laurent Itti and Christof Koch. Computational modeling of visual attention. Nature
Reviews Neuroscience, Vol 2. March 2001.

75.

Daniel M. Wolpert and Zoubin Ghahramani. Computational principles of movement


neuroscience. Nature Neuroscience 3, pp. 1212–1217. 2000.

76.

Russell and Norvig 2009.

77.

Lief Fenno, Ofer Yizhar, and Karl Deisseroth. The Development and Application of
Optogenetics. Annual Review of Neuroscience Vol. 34: 389-412. July 2011.

78.

Tae-il Kim, Jordan G. McCall, Yei Hwan Jung, Xian Huang, Edward R. Siuda,
Yuhang Li, Jizhou Song, Young Min Song, Hsuan An Pao, Rak-Hwan Kim, Chaofeng Lu,
Sung Dan Lee, Il-Sun Song, GunChul Shin, Ream Al-Hasani, Stanley Kim, Meng Peun
Tan, Yonggang Huang, Fiorenzo G. Omenetto, John A. Rogers, Michael R. Bruchas.
Injectable, Cellular-Scale Optoelectronics with Applications for Wireless Optogenetics.
Science 12 April 2013: Vol. 340 no. 6129 pp. 211-216.

79.

Ed Boyden talk at Singularity Summit 2011, New York City.

80.

Eliezer Yudkowsky. Creating Friendly AI 1.0: The Analysis and Design of Benevolent
Goal Architectures. The Singularity Institute, San Francisco, CA, June 15, 2001.

81.

Eliezer Yudkowsky. Staring Into the Singularity. November 18, 1996. Yudkowsky.net.
82.

Daniel Kahneman, Paul Slovic, Amos Tversky. Judgment Under Uncertainty:


Heuristics and Biases. Cambridge University Press. 1982.

83.

Seed AI. Less Wrong wiki. http://wiki.lesswrong.com/wiki/Seed_AI

84.

Irving John Good. Speculations concerning the first ultraintelligent machine. 1965.
Advances in computers 6: 31-88. New York: Academic Press.

85.

Stephen Omohundro. The Basic AI Drives. 2008. Self-Aware Systems, Palo Alto,
California.

86.

Yudkowsky 2001.

87.

Muehlhauser and Salamon 2012.

88.

Bostrom 2014.

89.

Eliezer Yudkowsky. Intelligence Explosion Microeconomics. 2013. Technical report


2013-1. Berkeley, CA: Machine Intelligence Research Institute.

90.

Yudkowsky 2001.

91.

Vernor Vinge. Can We Avoid a Hard Takeoff: Speculations on Issues in AI and IA.
Talk at Accelerating Change Conference 2005, Palo Alto, California. September 2005.

92.

Yudkowsky 2013.

93.

Rayhawk, Stephen, Anna Salamon, Thomas McCabe, Michael Anissimov, and Rolf
Nelson. Changing the Frame of AI Futurism: From Storytelling to Heavy-Tailed, High-Dimensional Probability Distributions. 2009. Paper presented at the 7th European
Conference on Computing and Philosophy (ECAP), Bellaterra, Spain, July 24.

94.

Muehlhauser and Salamon 2012.

95.

Yudkowsky 2001.

96.

Yudkowsky 2001.

97.

Chris Matyszczyk. Stephen Hawking: AI could be a 'real danger'. June 16, 2014.
CNET.

98.

Eliene Augenbraun. Elon Musk: Artificial intelligence may be "more dangerous than
nukes". August 4, 2014. CBS News.

99.

Anders Sandberg and Nick Bostrom. Machine Intelligence Survey. Technical


Report #2011-1. Published by the Future of Humanity Institute, Oxford University.

100.

Kurzweil 2005.

101.

Automation: Making the future. The Economist. April 21, 2012 print edition.

102.

Curt Bererton and Pradeep K. Khosla. Towards A Team of Robots with Repair

Capabilities: A Visual Docking System. Robotics Institute, Carnegie Mellon.


103.

Robert Freitas and Ralph Merkle. Kinematic Self-Replicating Machines. 2004.

Landes Bioscience.
104.

Jason Dorrier. Lego-Like Blocks Connect to Form Microfluidic Mini-Laboratories.

September 25, 2014. Singularity Hub.


105.

Evelyn M. Rusli. Research Labs Jump to the Cloud. June 30, 2013. The Wall

Street Journal.
106.

Top 5 Terrifyingly Fast Robots. September 16, 2012. Badspot.us.

107.

Elhuyar Fundazioa. Quickplacer: The Fastest Robot in the World. March 14, 2006.

ScienceDaily.
108.

Robin Hanson and Eliezer Yudkowsky. 2013. The Hanson-Yudkowsky AI-Foom

Debate. Berkeley, CA: Machine Intelligence Research Institute.


109.

Kevin Kelly. September 29, 2008. Thinkism. KK.org.

110.

Robin Hanson. February 9, 2010. Is the City-ularity Near? Overcoming Bias.

111.

Robert A. Freitas Jr. Some Limits to Global Ecophagy by Biovorous Nanoreplicators, with Public Policy Recommendations. Zyvex LLC, Richardson, Texas.


112.

Javier E. David. Rise of the machines! Musk warns of 'summoning the demon' with

AI: Report. October 25, 2014. CNBC.


113.

International Union for the Conservation of Nature. Species Extinction: the Facts.

114.

Powerful Products of Molecular Manufacturing. 2002. Center for Responsible

Nanotechnology (CRN).
115.

CRN 2002.

116.

Eliezer Yudkowsky. Coherent Extrapolated Volition. 2004. The Singularity Institute,

San Francisco, CA.


117.

Ben Goertzel. "Encouraging a Positive Transcension". February 17, 2004.

Dynamical Psychology.
118.

J. Storrs Hall. Beyond AI: Creating the Conscience of the Machine. 2007.

Prometheus Books.
119.

Nate Soares, Benja Fallenstein, Eliezer Yudkowsky, Stuart Armstrong. "Corrigibility".

October 2014. Machine Intelligence Research Institute.


120.

Nick Bostrom. Ethical Issues in Advanced Artificial Intelligence. Cognitive, Emotive

and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., International Institute of Advanced Studies in Systems Research and Cybernetics, 2003, pp. 12-17.
121.

Yudkowsky 2001.

122.

Eliezer Yudkowsky. Complex Value Systems in Friendly AI. 2011. In Artificial

General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA,
August 3–6, 2011. Proceedings, edited by Jürgen Schmidhuber, Kristinn R. Thórisson, and Moshe Looks, 388–393. Vol. 6830. Lecture Notes in Computer Science. Berlin: Springer.
123.

Orthogonality Thesis. Less Wrong wiki.

124.

Yudkowsky 2004.

125.

Stuart Armstrong. General Purpose Intelligence: Arguing the Orthogonality Thesis.

May 15, 2012. Less Wrong.


126.

Armstrong 2012.

127.

Richard Waters. Artificial intelligence: machine v man. October 31, 2014. Financial

Times.
128.

Nick Bostrom. The Superintelligent Will: Motivation and Instrumental Rationality in

Advanced Artificial Agents. 2012. Future of Humanity Institute, Oxford University.


129.

Yudkowsky 2001.

130.

Eliezer Yudkowsky. Optimization and the Singularity. June 23, 2008. Less Wrong.

131.

Bostrom 2003.

132.

Yudkowsky 2001.

133.

Stuart Armstrong. Presentation at Singularity Summit 2012, October 13-14, San

Francisco.
134.

Omohundro 2008.

135.

Bostrom 2012.

136.

Yudkowsky 2006.

137.

Yudkowsky 2006.

138.

Complexity of Value Thesis. Less Wrong wiki.

139.

Andrey Kolmogorov. On Tables of Random Numbers. 1998. Theoretical Computer

Science 207 (2): 387–395.


140.

Fragility of Value Thesis. Less Wrong wiki.

141.

George Dvorsky. Why Asimov's Three Laws of Robotics Can't Save Us. March 28,

2014. io9.
142.

Yudkowsky 2001.

143.

W.D. Hamilton. "The Genetical Evolution of Social Behavior". 1964. Journal of

Theoretical Biology 7 (1): 1–16.


144.

Yudkowsky 2001.

145.

Yudkowsky 2011.

146.

Eliezer Yudkowsky. "Value is Fragile". January 29, 2009. Less Wrong.

147.

Luke Muehlhauser. Intelligence Explosion FAQ. 2013. First published 2011 as

Singularity FAQ. Machine Intelligence Research Institute, Berkeley, CA.


148.

Debra Lieberman, John Tooby, Leda Cosmides. The evolution of human incest

avoidance mechanisms: an evolutionary psychological approach.


149.

Yudkowsky 2001.

150.

Derek Parfit. Reasons and Persons. 1984. Oxford: Clarendon Press.

151.

Parfit 1984.

152.

Samuel Scheffler. The Rejection of Consequentialism: A Philosophical Investigation

of the Considerations Underlying Rival Moral Conceptions. 1994. Oxford University Press.
153.

Joshua D. Greene. The Terrible, Horrible, No Good, Very Bad Truth about Morality

and What to Do About it. June 2002. Princeton University.


154.

Greene 2002.

155.

Paul Christiano and Katja Grace. AI timeline surveys. 2014. AI Impacts.org.

156.

Armstrong and Sotala 2012.

157.

Armstrong and Sotala 2012.

158.

Paul Christiano and Katja Grace. Accuracy of AI predictions. 2014. AI Impacts.org.

159.

Armstrong and Sotala 2012.

160.

Eliezer Yudkowsky & Scott Aaronson. August 16, 2009. BloggingHeads.tv

interview.
161.

Tal Cohen. An Interview with Douglas R. Hofstadter, Following "I Am a Strange

Loop". June 11, 2008. Tal Cohen's Bookshelf.


162.

Hugo de Garis. The Artilect War: Cosmists Vs. Terrans: A Bitter Controversy

Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines. 2005.
Etc Publications.
163.

Eliezer Yudkowsky. Do Earths with slower economic growth have a better chance at

FAI? June 12, 2013. Less Wrong.


164.

Eliezer Yudkowsky talk at Singularity Summit 2011. Open Problems in Friendly AI.


165.

Ben Goertzel. Should Humanity Build a Global AI Nanny to Delay the Singularity

Until It's Better Understood? Journal of Consciousness Studies 12/2011; 19(1-2):96-111.


166.

Kevin Kelly. Thinkism. September 29, 2008. The Technium.

167.

Kahneman et al 1982.

168.

Dave Munger. We can identify mystery faces just 6 pixels wide. March 2, 2007.

Cognitive Daily.
169.

Robert Fortner. Rest in Peas: the Unrecognized Death of Speech Recognition.

https://web.archive.org/web/20120505223509/http://robertfortner.posterous.com/theunrecognized-death-of-speech-recognition
170.

Michael Sipser. Section 4.2: The Halting Problem. Introduction to the Theory of

Computation (Second Edition). 2006. PWS Publishing. pp. 173–182.


171.

Omohundro 2008.

172.

Wireheading. Less Wrong wiki.

173.

K.C. Berridge, M.L. Kringelbach. Affective neuroscience of pleasure: Reward in

humans and other animals. 2008. Psychopharmacology 199, 457-80.


174.

B.K. Alexander, R.B. Coambs, and P.F. Hadaway. The effect of housing and gender

on morphine self-administration in rats. 1978. Psychopharmacology, Vol 58, 175–179.


175.

Yudkowsky 2011.

176.

Douglas Lenat. EURISKO: A program that learns new heuristics and domain

concepts. 1983. Artificial Intelligence (21): pp. 619.


177.

Eliezer Yudkowsky. (The Cartoon Guide to) Löb's Theorem. 2008. Yudkowsky.net.

178.

Eliezer Yudkowsky and Marcello Herreshoff. Tiling Agents for Self-Modifying AI, and

the Löbian Obstacle. 2013. Machine Intelligence Research Institute.


179.

Kurzweil 2005.

180.

Bostrom 2003.

181.

Yudkowsky 2008.

182.

Kurzweil 2005.

183.

Garret Jones. IQ and National Productivity. 2011. In New Palgrave Dictionary of

Economics.
184.

Vernor Vinge. The Coming Technological Singularity. Presented at the VISION-21

Symposium sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute,
March 30-31, 1993.


185.

D. Schurig et al. (2006). Metamaterial Electromagnetic Cloak at Microwave

Frequencies. Science 314 (5801): 977–980.


186.

C. De Wagter et al. Autonomous Flight of a 20-gram Flapping Wing MAV with a 4-gram Onboard Stereo Vision System.


187.

P. Sreetharan, J. P. Whitney, M. Strauss, and R. J. Wood. Monolithic fabrication of

millimeter-scale machines. Journal of Micromechanics and Microengineering, vol. 22, no.


055027, 2012.
188.

Alan Robock, Luke Oman, and Georgiy L. Stenchikov. Nuclear winter revisited with

a modern climate model and current nuclear arsenals: Still catastrophic consequences.
Journal of Geophysical Research, 112, D13107.
189.

John Smart. Limits to Biology – Performance Limitations on Natural and Engineered

Biological Systems. 2005. Acceleration Watch.


190.

Robert A. Freitas Jr. Nanomedicine, Volume I: Basic Capabilities. 1999. Landes

Bioscience, Georgetown, TX.


191.

Bostrom 2003.

192.

Marshall T. Savage. The Millennial Project: Colonizing the Galaxy in Eight Easy

Steps. 1992. Little, Brown, and Company.


193.

Stuart Armstrong. Smarter Than Us: the Rise of Machine Intelligence. 2014.

Machine Intelligence Research Institute.


194.

Bostrom 2014.

195.

Robin Hanson and Eliezer Yudkowsky. The Hanson-Yudkowsky AI-Foom Debate

eBook. 2013. Machine Intelligence Research Institute.


196.

Eliezer Yudkowsky. Knowability of Friendly AI. SL4 wiki.

http://www.sl4.org/wiki/action=browse&id=KnowabilityOfFAI&revision=67
