
Henry Kissinger’s Last Crusade: Stopping Dangerous AI

WASHINGTON, DC - NOVEMBER 05: Former U.S. Secretary of State Henry Kissinger speaks during a National Security Commission on Artificial Intelligence (NSCAI) conference November 5, 2019 in Washington, DC. Alex Wong/Getty Images

BY BELINDA LUSCOMBE
NOVEMBER 5, 2021 10:44 AM EDT

At the age of 98, former Secretary of State Henry Kissinger has a whole new
area of interest: artificial intelligence. He became intrigued after being
persuaded by Eric Schmidt, who was then the executive chairman of Google, to
attend a lecture on the topic while at the Bilderberg conference in 2016. The
two have teamed up with the dean of the MIT Schwarzman College of
Computing, Daniel Huttenlocher, to write a bracing new book, The Age of AI,
about the implications of the rapid rise and deployment of artificial
intelligence, which they say “augurs a revolution in human affairs.” The book
argues that artificial intelligence processes have become so powerful, so
seamlessly enmeshed in human affairs, and so unpredictable, that without
some forethought and management, the kind of “epoch-making
transformations” they will deliver may send human history in a dangerous
direction.

Kissinger and Schmidt sat down with TIME to talk about the future they
envision.

(This interview has been condensed and edited for clarity.)

Dr. Kissinger, you’re an elder statesman. Why did you think AI was an important enough subject for you?

Kissinger: When I was an undergraduate, I wrote my undergraduate thesis of 300 pages—which was banned after that ever to be permitted—called “The
Meaning of History.” The subject of the meaning of history and where we go
has occupied my life. The technological miracle doesn’t fascinate me so much;
what fascinates me is that we are moving into a new period of human
consciousness which we don’t yet fully understand. When we say a new period
of human consciousness, we mean that the perception of the world will be
different, at least as different as between the age of enlightenment and the
medieval period, when the Western world moved from a religious perception of
the world to a perception of the world on the basis of reason, slowly. This will
be faster.

There’s one important difference. In the Enlightenment, there was a conceptual world based on faith. And so Galileo and the late pioneers of the
Enlightenment had a prevailing philosophy against which they had to test their
thinking. You can trace the evolution of that thinking. We live in a world which,
in effect, has no philosophy; there is no dominant philosophical view. So the
technologists can run wild. They can develop world-changing things, but
there’s nobody there to say, ‘We’ve got to integrate this into something.’
When you met Eric [Schmidt] and he invited you to speak at Google, you said that you considered it a threat to civilization. Why did you feel that way?

Kissinger: I did not want one organization to have a monopoly on supplying information. I thought it was extremely dangerous for one company to be able
to supply information and be able to adjust what it supplied to its study of
what the public wanted or found plausible. So the truth became relative. That
was all I knew at the time. And the reason he invited me to meet his
algorithmic group was to have me understand that this was not arbitrary, but
the choice of what was presented had some thought and analysis behind it. It
didn’t obviate my fear of one private organization having that power. But that’s
how I got into it.

Schmidt: The visit to Google got him thinking. And when we started talking about this, Dr. Kissinger said that he is very worried about the impact that this collection of technologies will have on humans and their existence, and that the technologists are operating without the benefit of understanding their impact or history. And that, I think, is absolutely correct.

Given that many people feel the way that you do or did about
technology companies—that they are not really to be
trusted, that many of the manipulations that they have used
to improve their business have not been necessarily great
for society—what role do you see technology leaders playing
in this new system?

Kissinger: I think the technology companies have led the way into a new
period of human consciousness, like the Enlightenment generations did when
they moved from religion to reason, and the technologists are showing us how
to relate reason to artificial intelligence. It’s a different kind of knowledge in
some respects, because with reason—the world in which I grew up—each
evidence supports the other. With artificial intelligence, the astounding thing
is, you come up with a conclusion which is correct. But you don’t know why.
That’s a totally new challenge. And so in some ways, what they have invented
is dangerous. But it advances our culture. Would we be better off if it had never
beenPrint
Months: invented? I don’t know that. But now that it exists, we
& Digital have
Only to(EXTRA
$18.99 understand
50% saving on
Access the normal price)
it. And it cannot be eliminated. Too much of our life is already consumed by it.

What do you think is the primary geopolitical implication of the growth of artificial intelligence?

Kissinger: I don’t think we have examined this thoughtfully yet. If you imagine
a war between China and the United States, you have artificial-intelligence
weapons. Like every artificial intelligence, they are more effective at what you
plan. But they might also be effective in what they think their objective is. And
so if you say, ‘Target A is what I want,’ they might decide that something else
meets these criteria even better. So you’re in a world of slight uncertainty.
Secondly, since nobody has really tested these things on a broad-scale
operation, you can’t tell exactly what will happen when AI fighter planes on
both sides interact. So you are then in a world of potentially total
destructiveness and substantial uncertainty as to what you’re doing.

World War I was almost like that in the sense that everybody had planned very
complicated scenarios of mobilization, and they were so finely geared that once
this thing got going, they couldn’t stop it, because they would put themselves
at a bad disadvantage.

So your concern is that the AIs are too effective? And we don’t exactly know why they’re doing what they’re doing?

Kissinger: I have studied what I’m talking about most of my life; this I’ve only
studied for four years. The Deep Think computer was taught to play chess by
playing against itself for four hours. And it played a game of chess no human
being had ever seen before. Our best computers only beat it occasionally. If this
happens in other fields, as it must and it is, that is something, and our world is
not at all prepared for it.

The book argues that because AI processes are so fast and satisfying, there’s some concern about whether humans will
lose the capacity for thought, conceptualizing and
reflection. How?
Schmidt: So, again, using Dr. Kissinger as our example, let’s think about how
much time he had to do his work 50 years ago, in terms of conceptual time, the
ability to think, to communicate and so forth. In 50 years, what is the big
narrative? The compression of time. We’ve gone from the ability to read books to having books described to us, to neither having the time to read them nor conceive of them nor to discuss them, because there’s another thing coming. So this acceleration of time and information, I think, really exceeds human capacities.
It’s overwhelming, and people complain about this; they’re addicted, they can’t
think, they can’t have dinner by themselves. I don’t think humans were built
for this. It sets off cortisone levels, and things like that. So in the extreme, the
overload of information is likely to exceed our ability to process everything
going on.

What I have said—and is in the book—is that you’re going to need an assistant.
So in your case, you’re a reporter, you’ve got a zillion things going on, you’re
going to need an assistant in the form of a computer that says, ‘These are the
important things going on. These are the things to think about, search the
records, that would make you even more effective.’ A physicist is the same, a
chemist is the same, a writer is the same, a musician is the same. So the
problem is now you’ve become very dependent upon this AI system. And in the
book, we say, well, who controls what the AI system does? What about its
prejudices? What regulates what happens? And especially with young people,
this is a great concern.

One of the things you write about in the book is how AI has a
kind of good and bad side. What do you mean?

Kissinger: Well, I inherently meant what I said at Google. Up to now humanity assumed that its technological progress was beneficial or manageable. We are
saying that it can be hugely beneficial. It may be manageable, but there are
aspects to the managing part of it that we haven’t studied at all or sufficiently.
I remain worried. I’m opposed to saying we therefore have to eliminate it. It’s
there now. One of the major points is that we think some philosophy should be created to guide the research.

Who would you suggest would make that philosophy? What’s the next step?
Kissinger: We need a number of little groups that ask questions. When I was a
graduate student, nuclear weapons were new. And at that time, a number of
concerned professors at Harvard, MIT and Caltech met most Saturday
afternoons to ask, What is the answer? How do we deal with it? And they came
up with the arms-control idea.

Schmidt: We need a similar process. It won’t be one place, it will be a set of such initiatives. One of my hopes is to help organize those post-book, if we get
a good reception to the book.

I think that the first thing is that this stuff is too powerful to be done by tech
alone. It’s also unlikely that it will just get regulated correctly. So you have to
build a philosophy. I can’t say it as well as Dr. Kissinger, but you need a
philosophical framework, a set of understandings of where the limits of this
technology should go. In my experience in science, the only way that happens
is when you get the scientists and the policy people together in some form.
This is true in biology, it’s true in recombinant DNA and so forth.

These groups need to be international in scale? Under the aegis of the U.N., or whom?

Schmidt: The way these things typically work is there are relatively small,
relatively elite groups that have been thinking about this, and they need to get
stitched together. So for example, there is an Oxford AI and Ethics Strategy
Group, which is quite good. There are little pockets around the world. There’s
also a number that I’m aware of in China. But they’re not stitched together; it’s
the beginning. So if you believe what we believe—which is that in a decade, this
stuff will be enormously powerful—we’d better start now to think about the
implications.

I’ll give you my favorite example, which is in military doctrine. Everything’s getting faster. The thing we don’t want is weapons that are automatically
launched, based on their own analysis of the situation.

Kissinger: Because the attacker may be faster than the human brain can
analyze, so it’s a vicious circle. You have an incentive to make it automatic, but
you don’t want to make it so automatic that it can act on a judgment you might not make.

Schmidt: So there is no discussion today on this point between the different
major countries. And yet, it’s the obvious problem. We have lots of discussions
about things which are human speed. But what about when everything happens
too fast for humans? We need to agree to some limits, mutual limits, on how
fast these systems run, because otherwise we could get into a very unstable
situation.

You can understand how people might find that hard to swallow coming from you. Because the whole success of
Google was based on how much information could be
delivered, how quickly. A lot of people would say, Well, this is
actually a problem that you helped bring in.

Schmidt: I did, I am guilty. Along with many other people, we have built
platforms that are very, very fast. And sometimes they’re faster than what
humans can understand. That’s a problem.

Have we ever gotten ahead of technology? Haven’t we always responded after it arrives? It’s true that we don’t
understand what’s going on. But people initially didn’t
understand why the light came on when they turned the
switch. In the same way, a lot of people are not concerned
about AI.

Schmidt: I am very concerned about the misuse of all of these technologies. I did not expect the Internet to be used by governments to interfere in elections.
It just never occurred to me. I was wrong. I did not expect that the Internet
would be used to power the antivax movement in such a terrible way. I was
wrong. I missed that. We’re not going to miss the next one. We’re going to call
it ahead of time.

Kissinger: If you had known, what would you have done?


Schmidt: I don’t know. I could have done something different. Had I known it
10 years ago, I could have built different products. I could have lobbied in a
different way. I could have given speeches in a different way. I could have given
people the alarm before it happened.

I don’t agree with the line of your argument that it’s fatalistic. We do roughly
know what technology is going to deliver. We can typically predict technology
pretty accurately within a 10-year horizon, certainly a five-year horizon. So we
tried in our book to write down what is going to happen. And we want people to
deal with it. I have my own pet answers to how we would solve these problems.
We have a minor reference in the book to how you would solve misinformation,
which is going to get much worse. And the way you solve that is by essentially
knowing where the information came from cryptographically and then ranking
so the best information is at the top.
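
Schmidt’s idea here—cryptographically establishing where a piece of content came from, then ranking verifiable sources higher—can be sketched in a few lines. The sketch below is purely illustrative and not from the book: the publisher names, keys, and helper functions are hypothetical, and it uses a shared-key HMAC where a real provenance system (for example, public-key content credentials) would use asymmetric signatures.

```python
import hashlib
import hmac

# Hypothetical registry of publishers and their signing keys (illustration only).
PUBLISHER_KEYS = {
    "time.com": b"key-held-by-time",
    "example-blog.net": b"key-held-by-blog",
}

def sign(publisher: str, text: str) -> str:
    """Publisher attaches a provenance tag: an HMAC over the content."""
    return hmac.new(PUBLISHER_KEYS[publisher], text.encode(), hashlib.sha256).hexdigest()

def verify(publisher: str, text: str, tag: str) -> bool:
    """Reader checks that the content really came from the claimed source."""
    key = PUBLISHER_KEYS.get(publisher)
    if key is None:
        return False  # unknown source: provenance cannot be established
    expected = hmac.new(key, text.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

def rank(items):
    """Put items with verifiable provenance above unverifiable ones."""
    return sorted(items, key=lambda it: verify(it["source"], it["text"], it["tag"]), reverse=True)

if __name__ == "__main__":
    article = {"source": "time.com", "text": "Interview with Kissinger and Schmidt."}
    article["tag"] = sign("time.com", article["text"])
    rumor = {"source": "unknown.example", "text": "Unsourced claim.", "tag": "deadbeef"}
    for item in rank([rumor, article]):
        print(verify(item["source"], item["text"], item["tag"]), item["source"])
```

In this toy version, ranking is a simple verified-first sort; an actual search or feed system would combine provenance with many other relevance signals.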

Kissinger: I don’t know whether anyone could have foreseen how politics are
changing as a result of it. It may be the nature of the human destiny and
human tragedy that they have been given the gift to invent things. But the
punishment may be that they have to find the solutions themselves. I had no
incentive to get into any technological discussions. In my 90s, I started to work
with Eric. He set up little seminars of four or five people every three or four
weeks, which he joined. We were discussing these issues, and we were raising
many of the questions you raised here to see what we could do. At that time, it
was just argumentative; then, at the end of the period, we invited Dan
Huttenlocher, because he’s technically so competent, to see how we would
write it down. Then the three of us met for a year, every Sunday afternoon. So
this is not just popping off. It’s a serious set of concerns.

Schmidt: So what we hope we have done is we’ve laid out the problems for the
groups to figure out how to solve them. And there’s a number of them: the
impact on children, the impact on war, the impact on science, the impact on
politics, the impact on humanity. But we want to say right now that those initiatives need to start now.

Finally, I want to ask each of you a question; they sort of relate to each other. Dr. Kissinger, when, in 50 years, somebody Googles your name, what would you like the first fact about you to be?

Kissinger: That I made some contribution to the conception of peace. I’d like
to be remembered for some things I actually did also. But if you ask me to sum
it up in one sentence, I think if you look at what I’ve written, it all works back
together toward that same theme.

And Mr. Schmidt, what would you like people to think of as your contribution to the conception of peace?

Schmidt: Well, the odds of Google being in existence in 50 years, given the history of American corporations, are not so high. I grew up in the tech industry, which is a
simplified version of humanity. We’ve gotten rid of all the pesky hard problems,
right? I hope I’ve bridged technology and humanity in a way that is more
profound than any other person in my generation.

CONTACT US AT LETTERS@TIME.COM.
