And, while these labs should be independent of any government, they should be
mostly publicly funded and would of course collaborate closely with the national-
security sectors and AI-dedicated agencies of the coalition member countries to
deploy the safety methodologies that are developed. In light of all this, an appropriate
governance structure for these labs must be put in place to avoid capture by
commercial or national interests and to maintain focus on protecting democracy and
humanity.
Although a for-profit organization could probably raise funding faster and from
multiple competing sources, those advantages would come at the cost of a conflict of
interest between commercial objectives and the mission of safely defending humanity
against rogue AIs: Investors expect rising revenues and would push to expand their
market share, that is, to achieve economic dominance. Inherent in this scenario is competition with other labs developing frontier AI systems, and thus an incentive to withhold important discoveries that advance AI capabilities. This would slow the collective
scientific progress on countermeasures and could create a single point of failure if one
of the labs makes an important discovery that it does not share with others. The
survival of not just democracy but humanity itself depends on avoiding the
concentration of AI power and any single point of failure, an imperative that is at odds with
commercial objectives. The misalignment between defending against rogue AIs and
achieving commercial success could make a for-profit organization sacrifice safety to
some extent. For example, an AI that cannot act autonomously is safer but would have far fewer commercial applications (such as chatbots and robots), and market dominance requires pursuing as many revenue sources as efficiently as possible.
On the other hand, government funding may also come with strings attached that
contradict the main mission: Governments could seek to exploit advances in AI to
achieve military, economic (against other countries), or political (against internal
political opponents) dominance. This, too, would contradict the objective of
minimizing power concentration and the possibility of a single point of failure.
Government funding thus needs to be negotiated in a multilateral context among
democratic nations so that the participating labs can, by common accord, rule out
power-concentration objectives, and the governance mechanisms in place can enforce
those decisions.
Linking this umbrella organization with a global institution such as the United
Nations will also help to keep power from concentrating in the hands of a few rich
countries at the expense of the global South. Reminiscent of the collective fear of
nuclear Armageddon after World War II, which provided the impetus for nuclear-
arms-control negotiations, the shared concern about the possible risks posed by rogue
AIs should encourage all countries to work together to protect our collective future.
Many AI researchers believe that we could see the advent of superhuman AI within the span of a few years to a couple of decades. But the world is
not prepared for this to happen within the next few years—in terms of either
regulatory readiness or scientific comprehension of AI safety. Thus we must
immediately invest in and commit to implementing all three of my recommendations:
regulation, research on safety, and research on countermeasures. And we must do
these carefully, especially the countermeasures research, so as to preserve and protect
democracy and human rights while defending humanity against catastrophic
outcomes.