
that could relegate safety to a lower priority than necessary.

The rules governing these labs should forbid them from using AI technology to achieve military or economic dominance, which would stimulate an AI arms race. Keeping the labs independent
will help to mitigate concerns about power concentration that could negatively affect
international geopolitical security and the global economy. A clear mission and
governance mechanisms with multilateral checks and balances are needed to focus
these labs on humanity’s well-being and to counter the possibility of power
concentrating in the hands of a few.

If, as a byproduct of their work on countermeasures, these proposed labs discovered AI advances that had safe and beneficial applications—for example, in medicine or
combating climate change—the capabilities to develop and deploy those applications
should be shared with academia or industry labs so that humanity as a whole would
reap the benefits.

And, while these labs should be independent from any government, they should be
mostly publicly funded and would of course collaborate closely with the national-
security sectors and AI-dedicated agencies of the coalition member countries to
deploy the safety methodologies that are developed. In light of all this, an appropriate
governance structure for these labs must be put in place to avoid capture by
commercial or national interests and to maintain focus on protecting democracy and
humanity.

Why Nonprofit and Nongovernmental?

Although a for-profit organization could probably raise funding faster and from
multiple competing sources, those advantages would come at the cost of a conflict of
interest between commercial objectives and the mission of safely defending humanity
against rogue AIs: Investors expect rising revenues and would push to expand their
market share—that is, to achieve economic dominance. Inherent in this scenario is competition with other labs developing frontier AI systems, and thus a reluctance to share important discoveries for advancing AI capabilities. This would slow the collective
scientific progress on countermeasures and could create a single point of failure if one
of the labs makes an important discovery that it does not share with others. The
survival of not just democracy but humanity itself depends on avoiding the
concentration of AI power and any single point of failure, a requirement that is at odds with
commercial objectives. The misalignment between defending against rogue AIs and
achieving commercial success could make a for-profit organization sacrifice safety to
some extent. For example, an AI that cannot act autonomously is safer but has far fewer commercial applications (such as chatbots and robots), yet market dominance requires pursuing as many revenue sources as efficiently as possible.

On the other hand, government funding may also come with strings attached that
contradict the main mission: Governments could seek to exploit advances in AI to
achieve military, economic (against other countries), or political (against internal
political opponents) dominance. This, too, would contradict the objective of
minimizing power concentration and the possibility of a single point of failure.
Government funding thus needs to be negotiated in a multilateral context among
democratic nations so that the participating labs can, by common accord, rule out
power-concentration objectives, and the governance mechanisms in place can enforce
those decisions.

Consequently, a multilateral umbrella organization is essential for coordinating these labs across member countries, potentially pooling computing resources and a portion
of the funding, and for setting the governance and evolving AI-safety standards across
all the labs in the network. The umbrella organization should also coordinate with a
globally representative body that sets standards and safety protocols for all countries,
including nondemocratic ones and those not participating in these countermeasure-
research efforts. Indeed, it is quite likely that some of the safety methodologies developed by the participating labs should be shared with all countries and deployed
across the world.

Linking this umbrella organization with a global institution such as the United
Nations will also help to keep power from concentrating in the hands of a few rich
countries at the expense of the global South.14 Reminiscent of the collective fear of nuclear Armageddon after World War II, which provided the impetus for nuclear-
arms-control negotiations, the shared concern about the possible risks posed by rogue
AIs should encourage all countries to work together to protect our collective future.

As I explained in my testimony to the U.S. Senate in July 2023, many AI researchers, including all three winners of the 2018 Turing Award, now believe that the emergence
of AI with superhuman capabilities will come far sooner than previously
thought.15 Instead of taking decades or even centuries, we now expect to see superhuman AI within the span of a few years to a couple of decades. But the world is
not prepared for this to happen within the next few years—in terms of either
regulatory readiness or scientific comprehension of AI safety. Thus we must
immediately invest in and commit to implementing all three of my recommendations:
regulation, research on safety, and research on countermeasures. And we must do
these carefully, especially the countermeasures research, so as to preserve and protect
democracy and human rights while defending humanity against catastrophic
outcomes.
