The concern about the possibility of autonomous rogue AIs emerging in the coming years or
decades is hotly debated; consider, for example, two contrasting views recently published
in The Economist. Even though there is no current consensus on the most extreme
risks, such as the possibility of human extinction, the absence of clear evidence
against such risks (including those short of extinction) suggests that caution and
further study are warranted.
The second consideration behind the push for researching countermeasures is that the
stakes are so high. Even if we manage to significantly reduce the probability of a
rogue AI emerging, even a tiny remaining probability of a major catastrophe, such as a
nuclear war, the release of highly potent bioweapons, or human extinction, is still
unacceptable. Beyond regulation, humanity needs a Plan B.
Democracy and human rights. In the context of the possible emergence of rogue AIs,
there are three reasons to focus specifically on the preservation, and ideally the
enhancement, of democracy and human rights. First, while democracy and human
rights are intrinsically important, they are also fragile, as history has shown
repeatedly, including cases of democratic states sliding into authoritarianism.
It is crucial to remember the essence of democracy, that everyone has a voice,
and that this involves the decentralization of power and a system of checks and
balances to ensure that decisions reflect and balance the views of diverse citizens
and communities. Second, powerful tools, especially AI, could easily be leveraged
by governments to strengthen their hold on power, for instance through multifaceted
surveillance methods such as cameras and the monitoring of online discourse, as well
as control mechanisms such as AI-driven policing and military weapons. Naturally, a
decline in democratic principles correlates with a deterioration of human rights.
Third, a superhuman AI could give unprecedented power to those who control it,
whether individuals, corporations, or governments, threatening democracy and
geopolitical stability.
Highly centralized authoritarian regimes are unlikely to make wise and safe decisions
due to the absence of the checks and balances inherent in democracies. While
dictators might act more swiftly, their firm conviction in their own interpretations and
beliefs could lead them to make bad decisions with an unwarranted level of
confidence. This behavior is similar to that of machine-learning systems trained by
maximum likelihood: They consider only one interpretation of reality when there
could be multiple possibilities. Democratic decisionmaking, in contrast, resembles a
rational Bayesian decisionmaking process, where all plausible interpretations are
considered, weighed, and combined to reach a decision, and is thus similar to
machine-learning systems trained using Bayes's theorem.
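The analogy above can be made concrete with a minimal sketch (illustrative only; the coin-flip setup and all variable names are my own, not from the text): maximum likelihood commits to the single hypothesis that best fits the data, while Bayesian inference keeps every plausible hypothesis and combines them, weighted by posterior probability, into one prediction.

```python
# Candidate "interpretations of reality": possible biases of a coin.
hypotheses = [0.1, 0.3, 0.5, 0.7, 0.9]

# Observed data: 3 heads out of 4 flips.
heads, flips = 3, 4

def likelihood(p):
    """Probability of the observed flip sequence under bias p."""
    return (p ** heads) * ((1 - p) ** (flips - heads))

# Maximum likelihood: commit to the single best-scoring hypothesis,
# discarding all the others.
mle = max(hypotheses, key=likelihood)

# Bayesian: start from a uniform prior, compute a posterior weight for
# every hypothesis, and combine them all into one prediction.
prior = 1 / len(hypotheses)
evidence = sum(likelihood(p) * prior for p in hypotheses)
posterior = {p: likelihood(p) * prior / evidence for p in hypotheses}
bayes_prediction = sum(p * w for p, w in posterior.items())

print(f"MLE commits to a single bias: {mle}")
print(f"Bayesian prediction averages all hypotheses: {bayes_prediction:.3f}")
```

The design point mirrors the argument: the maximum-likelihood estimate here is 0.7 and carries no record of its own uncertainty, whereas the Bayesian prediction is pulled toward the other still-plausible hypotheses, just as decisions that weigh many interpretations tend to be less overconfident.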
It is worth noting that, with only a few corporations developing frontier AI systems,
some proposals for regulating AI could be detrimental to democracy by allowing an
increasing concentration of power, for example through licensing requirements and
restrictions on the open-source distribution of very powerful AI systems. If, to
minimize catastrophic outcomes by limiting access, only a few labs are allowed to
tinker with the most dangerous AI systems, the individuals or entities that control
those labs may wield dangerously excessive power. That could pose a threat to
democracy, the efficiency of markets, and geopolitical stability. The mission and
governance of such labs are thus crucial elements of the proposal presented here, to
ensure that they serve the common good and the preservation and enhancement of
democracy.