
Protecting Humanity from Rogue AIs

The concern about the possibility of autonomous rogue AIs in coming years or
decades is hotly debated—take, for example, two contrasting views recently published
in the Economist. Even though there is no current consensus on the most extreme
risks, such as the possibility of human extinction, the absence of clear evidence
against such risks (including those short of extinction) warrants caution and
further study.

Specifically, we should be researching countermeasures against rogue AIs. The
motivation for such research arises from two considerations: First, while
comprehensive and well-enforced regulation could considerably reduce the risks
associated with the emergence of rogue AIs, it is unlikely to offer foolproof
protection. Certain jurisdictions may lack the necessary regulation or have weaker
rules in place. Individuals or organizations, such as for-profit corporations or military
bodies, may skimp on safety due to competing objectives, such as commercial or
military competition, which may or may not align well with the public interest.
Malicious actors, such as terrorist organizations or organized crime groups, may
intentionally disregard safety guidelines, while others may unintentionally fail to
follow them due to negligence. Finally, regulation itself is bound to be imperfect or
implemented too late, as we still lack clarity on the best methods to evaluate and
mitigate the catastrophic risks posed by AI.

The second consideration behind the push for researching countermeasures is that the
stakes are so high. Even if we manage to significantly reduce the probability of a
rogue AI emerging, a tiny residual probability of a major catastrophe—such as a nuclear
war, the launch of highly potent bioweapons, or human extinction—is still
unacceptable. Beyond regulation, humanity needs a Plan B.

Democracy and human rights. In the context of the possible emergence of rogue AIs,
there are three reasons why we should focus specifically on the preservation, and
ideally enhancement, of democracy and human rights. First, while democracy and human
rights are intrinsically important, they are also fragile, as evidenced repeatedly
throughout history, including cases of democratic states transitioning into
authoritarian ones. It is crucial that we remember the essence of democracy—that
everyone has a voice—and that this involves the decentralization of power and a
system of checks and balances to ensure that decisions reflect and balance the views
of diverse citizens and communities. Second, powerful tools, especially AI, could
easily be leveraged by governments to strengthen their hold on power, for instance
through multifaceted surveillance methods such as cameras and online discourse
monitoring, as well as control mechanisms such as AI-driven policing and military
weapons. Naturally, a decline in democratic principles correlates with a
deterioration of human rights. Third, a superhuman AI could give unprecedented power
to those who control it, whether individuals, corporations, or governments,
threatening democracy and geopolitical stability.

Highly centralized authoritarian regimes are unlikely to make wise and safe decisions
due to the absence of the checks and balances inherent in democracies. While
dictators might act more swiftly, their firm conviction in their own interpretations and
beliefs could lead them to make bad decisions with an unwarranted level of
confidence. This behavior is similar to that of machine-learning systems trained by
maximum likelihood: They consider only one interpretation of reality when there
could be multiple possibilities. Democratic decisionmaking, in contrast, resembles a
rational Bayesian decisionmaking process, where all plausible interpretations are
considered, weighed, and combined to reach a decision, and is thus similar to
machine-learning systems trained using Bayes’s theorem. Furthermore, an
authoritarian regime is likely to focus primarily on preserving or enhancing its own
power instead of thoughtfully anticipating potential harms and risks to its population
and humanity at large. These two factors—unreliable decisionmaking and a
misalignment with humanity’s well-being—render authoritarian regimes more likely
to make unsafe decisions regarding powerful AI systems, thereby increasing the
likelihood of catastrophic outcomes when using these systems.
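The contrast drawn above between maximum-likelihood and Bayesian decisionmaking can be made concrete with a toy calculation. The sketch below uses invented numbers purely for illustration (they are not drawn from this article): committing to the single most likely "interpretation of reality" can drastically understate expected risk compared with averaging over all plausible interpretations.

```python
# Toy illustration (invented numbers): deciding whether a deployment is risky
# under two rival "interpretations of reality" (hypotheses).
# Hypothesis A: the system is safe (probability of catastrophe 0.001).
# Hypothesis B: the system is dangerous (probability of catastrophe 0.2).
# Suppose the evidence makes A slightly more probable than B.
hypotheses = {
    "A_safe": {"weight": 0.55, "p_catastrophe": 0.001},
    "B_dangerous": {"weight": 0.45, "p_catastrophe": 0.2},
}

# Maximum-likelihood style: commit to the single most probable hypothesis
# and ignore the rival view entirely.
ml_choice = max(hypotheses.values(), key=lambda h: h["weight"])
ml_risk = ml_choice["p_catastrophe"]

# Bayesian style: weigh every plausible hypothesis by its probability
# and average the risk across them.
total = sum(h["weight"] for h in hypotheses.values())
bayes_risk = sum(
    (h["weight"] / total) * h["p_catastrophe"] for h in hypotheses.values()
)

print(f"Risk under the single most likely hypothesis: {ml_risk:.4f}")
print(f"Risk averaged over all plausible hypotheses:  {bayes_risk:.4f}")
```

The point of the analogy: the confident "dictator" strategy reports a near-zero risk because it discards the less likely but far more dangerous interpretation, while the deliberative strategy surfaces an expected risk almost two orders of magnitude higher.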

It is worth noting that, with only a few corporations developing frontier AI systems,
some proposals for regulating AI could be detrimental to democracy by enabling an
increasing concentration of power, for example through licensing requirements and
restrictions on the open-source distribution of very powerful AI systems. If, to
minimize catastrophic outcomes by limiting access, only a few labs are allowed to
tinker with the most dangerous AI systems, the individuals or entities that control
those labs may wield dangerously excessive power. That could pose a threat to
democracy, the efficiency of markets, and geopolitical stability. The mission and
governance of such labs are thus crucial elements of the proposal presented here, to
make sure that they work for the common good and the preservation and enhancement
of democracy.
