
Fermi Paradox and global catastrophes

The main ways of solving the Fermi Paradox are:

1) They are already here (at least in the form of their signals).

2) They do not spread through the universe, leave no traces, and send no signals. That is, they do not start a shock wave of intelligence.

3) Civilizations are extremely rare.

An additional possibility is 4): we are a unique civilization because of observation selection.

All of them have grim implications for global risk:

In the first case, we are under threat of conflict with superior aliens.

1a) If they are already here, we could do something that encourages them to destroy or restrict us. For example, they might turn off the simulation, or start a program of berserker probes. These probes could be nanobots. In fact, it could be something like a "space gray goo" with low intelligence but a very wide spread. It could even be in my room. Its only goal might be to destroy other nanobots (as our own Nanoshield would do), and so we would not notice it until we create our own nanobots.

1b) If they reach our star system right now and, moreover, are focused on the total colonization of all systems, we will have to fight them and are likely to lose. This is not probable.

1c) A large portion of civilizations may be infected with a SETI-virus and distribute signals specially designed to infect naive civilizations - that is, to encourage them to create a computer with an AI aimed at further replicating itself through SETI channels. This is what I write about in the article "Is SETI dangerous?" http://www.proza.ru/texts/2008/04/12/55.html

1d) By means of METI signals we attract the attention of a dangerous civilization, which then sends a beam of death toward the solar system (probably something like a directed gamma-ray burst). This scenario seems unlikely, since in the time it takes them to receive the signal and react, we would have time to spread beyond the solar system - if they are far away. And if they are close, it is not clear why they are not already here. However, this risk has been intensely discussed, for example by D. Brin.
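To make the timing argument explicit, here is a minimal sketch, assuming the hostile civilization is at a distance of $d$ light-years and that both our signal and their response travel no faster than light:

$$ t_{\text{response}} \;\ge\; \underbrace{d}_{\text{our signal reaches them}} \;+\; \underbrace{d}_{\text{their strike reaches us}} \;=\; 2d \ \text{years}. $$

For example, at $d = 1000$ light-years any response arrives at least 2000 years after we transmit, which would presumably leave time to spread beyond the solar system; only nearby senders avoid this delay, and they should already be here.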

2) They do not spread through space. This means that either:

2a) Civilizations are very likely to destroy themselves at a very early stage, before they can start a wave of replicating robots, and we are no exception. This is reinforced by the Doomsday Argument - namely, the fact that I find myself in a young civilization suggests that young civilizations are much more common than old ones. However, based on the expected pace of development of nanotechnology and artificial intelligence, we could start a wave of replicators within 10-20 years, and even if we die after that, this wave would continue to spread throughout the universe. Given the uneven development of civilizations, it is difficult to believe that none of them manage to launch a wave of replicators before their death. This is possible only if: a) we do not see an inevitable and universal threat looming directly over us in the near future; b) we significantly underestimate the difficulty of creating artificial intelligence and nanoreplicators; or c) the energy of the inevitable destruction is so great that it destroys all the replicators launched by a civilization - that is, it is on the order of a supernova explosion.
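As a rough illustration of the Doomsday-style reasoning above, here is a minimal Carter-Leslie-type sketch with illustrative notation (the symbols are mine, not from the source): let $S$ be the hypothesis that a typical civilization contains $N_S$ observers before destroying itself, $L$ the hypothesis that it contains $N_L \gg N_S$ observers, and suppose I reason as if I were a random observer with birth rank $r$. Finding myself at an early rank $r \le N_S$ then shifts the odds toward $S$:

$$ \frac{P(S \mid r)}{P(L \mid r)} \;=\; \frac{P(r \mid S)}{P(r \mid L)}\cdot\frac{P(S)}{P(L)} \;=\; \frac{1/N_S}{1/N_L}\cdot\frac{P(S)}{P(L)} \;=\; \frac{N_L}{N_S}\cdot\frac{P(S)}{P(L)}, $$

so the posterior odds of the short-lived hypothesis are inflated by the factor $N_L/N_S$.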

2b) Every civilization sharply limits itself - and this limitation must be very strict and long-lasting, since it is simple enough to launch at least one replicator probe. This restriction could be based either on powerful totalitarianism or on extreme depletion of resources. Again, in this case our prospects are quite unpleasant. But this solution is not very plausible.

3) If civilizations are rare, it means that the universe is a much less friendly place to live, and that we are on an island of stability which is likely an exception to the rule. This may mean that we underestimate how long the processes important to us (the solar luminosity, the Earth's crust) will remain stable, and most importantly, the stability of these processes against small influences - that is, their fragility. I mean that we could inadvertently push them past their levels of resistance by carrying out geo-engineering activities, complex physics experiments, and the colonization of space. I say more about this in the article "Why antropic principle stopped to defend us. Observation selection and fragility of our environment": http://www.scribd.com/doc/8729933/Why-antropic-principle-stops-to-defend-us-Observation-selection-and-fragility-of-our-environment- See also the works of M. Cirkovic on the same subject.

However, this fragility is not inevitable and depends on which factors were critical in the Great Filter. In addition, we would not necessarily put pressure on these fragile points, even if they exist.

4) Observation selection makes us a unique civilization.

4a) We are the first civilization, because any civilization that is first captures the whole galaxy. Likewise, earthly life is the first life on Earth, because it took over all the pools of nutrient broth in which other life could have appeared. In any case, sooner or later we will face another "first" civilization.

4b) The vast majority of civilizations are destroyed in the process of colonizing the galaxy, and so we can only find ourselves in a civilization that has, by chance, not been destroyed. Here the obvious risk is that those who made this error would try to correct it.

4c) We wonder about the absence of contact precisely because we are not in contact. That is, we are in a unique position which does not allow any conclusions about the nature of the universe. This clearly contradicts the Copernican principle.

The worst variant for us here is 2a - imminent self-destruction - which has independent confirmation from the Doomsday Argument, but is undermined by the fact that we do not see alien von Neumann probes. I still believe that the most likely scenario is Rare Earth.
