
Containment for AI

How to Adapt a Cold War Strategy to a New Threat

By Mustafa Suleyman

January 23, 2024

The last two years have seen startling advances in artificial intelligence. The next few years
promise far more, with larger and more efficient models, capable of real creativity and
complicated planning, likely to emerge. The potential positives are astonishing, including
heightened business productivity, cheaper and more effective health care, scientific
discoveries, and education programs tailored to every child’s needs. But the risks are also
colossal. These include the proliferation of disinformation, job losses, and the
likelihood that bad actors will seek to use the new technology to sow havoc.

This technology will proliferate rapidly. That means that over the next ten years, grappling
with AI’s inbuilt tendency toward uncontrolled spread will become a generational challenge.
It will, accordingly, require a generational response akin to what the West mobilized in the
early days of the Cold War. At that time, the American diplomat George F. Kennan talked
about containing the Soviet Union by using hard power and economic and cultural pressure
to ensure that the Soviets were kept behind their borders and the democratic world was not
overwhelmed. Today’s challenge requires a similarly broad and ambitious program, in this
case to keep AI in check and societies in control. It will be, like Kennan’s, an effort based on
laws and treaties. It will also necessitate, however, a massive global movement and changes
to the culture of technology companies. This modern form of containment will be needed
not only to manage AI and prevent it from creating catastrophe but also to ensure that it
becomes one of the most extraordinarily beneficial inventions in human history.

THE TIDE ALWAYS COMES IN

Across the sweep of human history, there is a single, seemingly immutable law: every
foundational technology ever invented—from pickaxes to plows, pottery to photography,
phones to planes—will become cheaper and easier to use. It will spread far and wide. The
ecosystem of invention defaults to expansion. And people, who always drive this process,
are Homo technologicus, the innately technological species.

Consider the printing press. In the 1440s, after Johannes Gutenberg invented it, there was
only a single example in Europe: his original in Mainz, Germany. But just 50 years later,
there were around 1,000 presses spread across the continent. The results were
extraordinary. In the Middle Ages, major countries including France and Italy each produced
a few hundred thousand manuscripts per century. A hundred years later, they were
producing around 400,000 books each per year, and the pace was increasing. In the
seventeenth century alone, European countries printed 500 million books.

The same trend was seen with the internal combustion engine. This was a tricky invention that took over 100 years to perfect. Even by the 1870s, there were only a few working
examples in German workshops. The technology was still nascent, limited in number, and
utterly marginal. Eight years after he invented the first practical automobile in 1885, the
German engineer Carl Benz had sold just 69 cars. But a little over 100 years later, there were
over two billion internal combustion engines of every conceivable shape and size, powering
everything from lawnmowers to container ships.

Some technologies, particularly nuclear weapons, may appear to buck this trend. After all,
nearly 80 years on from their creation, they have been used only twice, by the United States in
1945, and arsenals are well down from their 1980s highs. Although there is some truth to
this counterargument, it ignores the thousands of warheads still deployed around
the world, the constant pressure of new states looking to build them, and the hair-raising
litany of accidents and close calls that, from the beginning, have been a regular and, for
obvious reasons, underreported feature of these weapons. From the drama of the Cuban
missile crisis in 1962 to the disappearance of nuclear materials from a U.S. government
employee’s car in 2017, nuclear weapons have never been truly contained despite the
avoidance of outright catastrophe. If such technologies as nuclear weaponry are an
exception to the rule of technological spread, they are at best a very partial and uneasy
exception.

THE IMPENDING DELUGE

It is inevitable that AI will follow the trajectory of the hand axe, the printing press, the
internal combustion engine, and the Internet. It, too, will be everywhere, and it will
constantly improve. It is happening already. In just a few years, cutting-edge models have
gone from using millions of parameters, or variables adjusted in training, to trillions,
indicating the ever-increasing complexity of these systems. Over the last decade, the
amount of computation used to train large AI models has increased by nine orders of
magnitude. Moore’s law, which holds that computing power doubles every two years,
predicted exponential increases in what computers can do. But progress has been even
faster in AI, with the trends of lower costs and improving capability ascending on a curve
beyond anything seen with a technology before. The results are visible in well-known AI
products but are also proving transformative under the surface of the digital world,
powering software, organizing warehouses, operating medical equipment, driving vehicles,
and managing power grids.

As the next phase of AI develops, a powerful generation of autonomous AI agents capable of achieving real-world goals will emerge. Although this is often called artificial general
intelligence, I prefer the term “artificial capable intelligence,” or ACI, which is a stage before
full AGI, in which AI can nonetheless achieve a range of tasks autonomously. This technology
can accomplish complex activities on humans’ behalf, from organizing a birthday party to
completing the weekly shop, or something as consequential as setting up and
running an entire business line. This will be a seismic step for the technology, with
transformative implications for the nature of power and the world economy. It can be
expected to proliferate rapidly and irreversibly.

An ACI in everyone's pocket will result in colossal increases in economic growth, as the most
significant productivity enhancer seen in generations becomes as ubiquitous as electricity.
ACI will revolutionize fields including health care, education, and energy generation. Above
all, it will give people the chance to achieve what they want in life. There is a fair amount of
doomsaying about AI at the moment, but amid well-justified concerns, it is important to
keep in mind its many upsides. This is particularly the case for ACI, which has the
potential to give everyone access to the world’s best assistant, chief of staff, lawyer, doctor,
and all-around A-team.

Yet the downsides cannot be ignored. For a start, AI will unleash a series of new dangers.
Perhaps the most serious of these will be new forms of misinformation and disinformation.
Just a few simple language commands can now produce images—and, increasingly, videos—
of staggering fidelity. When hostile governments, fringe political parties, and lone actors can
create and broadcast material that is indistinguishable from reality, they will be able to sow
chaos, and the verification tools designed to stop them may well be outpaced by the
generative systems. Deepfakes roiled the stock market last year when a concocted image of the Pentagon on fire caused a momentary but noticeable dip in indexes, and they are likely to feature heavily in the current U.S. election race.

Many other problems can be expected to result from the global advance of AI. Automation
threatens to disrupt the labor market, and the potential for immense cyberattacks is
growing. Once powerful new forms of AI spread, all the good and all the bad will be
available at every level of society: in the hands of CEOs, street vendors, and terrorists alike.

STOPPING THE SPREAD

Most people’s attention has correctly focused on the social and ethical implications of this
change. But this discussion often neglects technology's tendency to penetrate every layer of civilization, and it is this that requires drastic action. Because the technology spreads fast, far, and wide, AI must be contained, both in its proliferation and in its negative impacts when they do occur. Containment is a
daunting task, given the history and the trajectory of innovation, but it is the only answer—
however difficult—to how humanity should manage the fastest rollout of the most powerful
new technology in history.

Containment in this sense encompasses regulation, better technical safety, new governance
and ownership models, and new modes of accountability and transparency. All are
necessary—but not sufficient—to assure safer technology. Containment must combine
cutting-edge engineering with ethical values that will inform government regulation. The
goal should be to create a set of interlinked and mutually reinforcing technical, cultural,
legal, and political mechanisms for maintaining societal control of AI. Governments must
contain what would have once been centuries or millennia of technological change but is
now unfolding in a matter of years or even months. Containment is, in theory, an answer to
the inescapability of proliferation, capable both of checking it and addressing its
consequences.

This is not containment in the geopolitical sense, harking back to Kennan's doctrines. Nor is
it a matter of putting AI into a sealed box, although some technologies—rogue AI malware
and an engineered pathogen, in particular—need just that. Nor is containment of AI
competitive, in the sense of seeking to fight some Soviet Red Menace. It does resemble
Kennan’s approach in that the policy framework must operate across all dimensions. But
containing technology is a much more fundamental program than what Kennan envisioned,
seeking a balance of power not between competing actors but between humans and their
tools. What it seeks is not to stop the technology but to keep it safe and controlled.

Most people rightly argue that regulation is necessary, and there is a tendency to believe
that it is enough. It is not. Containment in practice must work on every level at which the
technology operates. It therefore needs not only proactive and well-informed lawmakers
and bureaucrats but also technologists and business executives. It needs diplomats and
leaders to cooperate internationally to build bridges and address gaps. It needs consumers
and citizens everywhere to demand better from technology and to ensure that it remains
focused on their interests. It needs them to agitate for and expect responsible technology,
just as growing demand for green energy and environmentally friendly products has spurred
corporations and governments into action.

STEERING WITHOUT A MAP

Containment will require hard technical questions to be answered by international treaties and mass global movements alike. It must encompass work on AI safety, as well as the audit
mechanisms needed to monitor and enforce compliance. The companies behind AI will be
critical to this effort and will need to think carefully about how to align their own incentives
with government regulation. Yet containing AI will not be the sole responsibility of those
building its next generation. Nor will it rest entirely on national leaders. Rather, all of those
who will be affected by it (that is, everyone) will be critical to creating momentum behind
this effort. Containment offers a policy blend capable of working from the fine-grained
details of an AI model out to huge public programs that could mitigate vast job destruction.

Collectively, this project may prove equal to this moment and capable of counteracting the
many risks that AI poses. The cumulative effect of these measures—which must include
licensing regimes, the staffing of a generation of companies with critics, and the creation of
inbuilt mechanisms to guarantee access to advanced systems—is to keep humanity in the
driver’s seat of this epochal series of changes, and capable, at the limit, of saying no. None
of these steps will be easy. After all, uncontrolled proliferation has been the default
throughout human history. Containment should therefore be seen not as the final answer to all technology's problems but rather as the first critical step.
