Henry Kissinger and Eric Schmidt On AI's Dangerous Future - Time
WASHINGTON, DC - NOVEMBER 05: Former U.S. Secretary of State Henry Kissinger speaks during a National Security Commission on Artificial Intelligence (NSCAI) conference November 5, 2019 in Washington, DC. Alex Wong/Getty Images
BY BELINDA LUSCOMBE
NOVEMBER 5, 2021 10:44 AM EDT
(To receive weekly emails of conversations with the world’s top CEOs and
business decisionmakers, click here.)
At the age of 98, former Secretary of State Henry Kissinger has a whole new
area of interest: artificial intelligence. He became intrigued after being
persuaded by Eric Schmidt, who was then the executive chairman of Google, to
attend a lecture on the topic while at the Bilderberg conference in 2016. The
two have teamed up with the dean of the MIT Schwarzman College of
Computing, Daniel Huttenlocher, to write a bracing new book, The Age of AI,
about the implications of the rapid rise and deployment of artificial
intelligence, which they say “augurs a revolution in human affairs.” The book
argues that artificial intelligence processes have become so powerful, so
seamlessly enmeshed in human affairs, and so unpredictable, that without
some forethought and management, the kind of “epoch-making
transformations” they will deliver may send human history in a dangerous
direction.
Kissinger and Schmidt sat down with TIME to talk about the future they
envision.
Schmidt: The visit to Google got him thinking. And when we started talking
about this, Dr. Kissinger said that he is very worried that the impact that this
collection of technologies will have on humans and their existence, and that
the technologists are operating without the benefit of understanding their
impact or history. And that, I think, is absolutely correct.
Given that many people feel the way that you do or did about
technology companies—that they are not really to be
trusted, that many of the manipulations that they have used
to improve their business have not been necessarily great
for society—what role do you see technology leaders playing
in this new system?
Kissinger: I think the technology companies have led the way into a new
period of human consciousness, like the Enlightenment generations did when
they moved from religion to reason, and the technologists are showing us how
to relate reason to artificial intelligence. It’s a different kind of knowledge in
some respects, because with reason—the world in which I grew up—each
evidence supports the other. With artificial intelligence, the astounding thing
is, you come up with a conclusion which is correct. But you don’t know why.
That’s a totally new challenge. And so in some ways, what they have invented
is dangerous. But it advances our culture. Would we be better off if it had never
been invented? I don’t know that. But now that it exists, we have to understand
it. And it cannot be eliminated. Too much of our life is already consumed by it.
Kissinger: I don’t think we have examined this thoughtfully yet. If you imagine
a war between China and the United States, you have artificial-intelligence
weapons. Like every artificial intelligence, they are more effective at what you
plan. But they might also be effective at what they think their objective is. And
so if you say, ‘Target A is what I want,’ they might decide that something else
meets these criteria even better. So you’re in a world of slight uncertainty.
Secondly, since nobody has really tested these things on a broad-scale
operation, you can’t tell exactly what will happen when AI fighter planes on
both sides interact. So you are then in a world of potentially total
destructiveness and substantial uncertainty as to what you’re doing.
World War I was almost like that in the sense that everybody had planned very
complicated scenarios of mobilization, and they were so finely geared that once
this thing got going, they couldn’t stop it, because they would put themselves
at a disadvantage.
Kissinger: I have studied what I’m talking about most of my life; this I’ve only
studied for four years. The Deep Think computer was taught to play chess by
playing against itself for four hours. And it played a game of chess no human
being had ever seen before. Our best computers only beat it occasionally. If this
happens in other fields, as it must and it is, that is something, and our world is
not at all prepared for it.
What I have said—and is in the book—is that you’re going to need an assistant.
So in your case, you’re a reporter, you’ve got a zillion things going on, you’re
going to need an assistant in the form of a computer that says, ‘These are the
important things going on. These are the things to think about, search the
records, that would make you even more effective.’ A physicist is the same, a
chemist is the same, a writer is the same, a musician is the same. So the
problem is now you’ve become very dependent upon this AI system. And in the
book, we say, well, who controls what the AI system does? What about its
prejudices? What regulates what happens? And especially with young people,
this is a great concern.
One of the things you write about in the book is how AI has a
kind of good and bad side. What do you mean?
I think that the first thing is that this stuff is too powerful to be done by tech
alone. It’s also unlikely that it will just get regulated correctly. So you have to
build a philosophy. I can’t say it as well as Dr. Kissinger, but you need a
philosophical framework, a set of understandings of where the limits of this
technology should go. In my experience in science, the only way that happens
is when you get the scientists and the policy people together in some form.
This is true in biology, is true in recombinant DNA and so forth.
Schmidt: The way these things typically work is there are relatively small,
relatively elite groups that have been thinking about this, and they need to get
stitched together. So for example, there is an Oxford AI and Ethics Strategy
Group, which is quite good. There are little pockets around the world. There’s
also a number that I’m aware of in China. But they’re not stitched together; it’s
the beginning. So if you believe what we believe—which is that in a decade, this
stuff will be enormously powerful—we’d better start now to think about the
implications.
Kissinger: Because the attacker may be faster than the human brain can
analyze, so it’s a vicious circle. You have an incentive to make it automatic, but
you don’t want to make it so automatic that it can act on a judgment you might
not make.
Schmidt: So there is no discussion today on this point between the different
major countries. And yet, it’s the obvious problem. We have lots of discussions
about things which are human speed. But what about when everything happens
too fast for humans? We need to agree to some limits, mutual limits, on how
fast these systems run, because otherwise we could get into a very unstable
situation.
Schmidt: I did, I am guilty. Along with many other people, we have built
platforms that are very, very fast. And sometimes they’re faster than what
humans can understand. That’s a problem.
I don’t agree with the line of your argument that it’s fatalistic. We do roughly
know what technology is going to deliver. We can typically predict technology
pretty accurately within a 10-year horizon, certainly a five-year horizon. So we
tried in our book to write down what is going to happen. And we want people to
deal with it. I have my own pet answers to how we would solve these problems.
We have a minor reference in the book to how you would solve misinformation,
which is going to get much worse. And the way you solve that is by essentially
knowing where the information came from cryptographically and then ranking
so the best information is at the top.
Kissinger: I don’t know whether anyone could have foreseen how politics are
changing as a result of it. It may be the nature of the human destiny and
human tragedy that they have been given the gift to invent things. But the
punishment may be that they have to find the solutions themselves. I had no
incentive to get into any technological discussions. In my 90s, I started to work
with Eric. He set up little seminars of four or five people every three or four
weeks, which he joined. We were discussing these issues, and we were raising
many of the questions you raised here to see what we could do. At that time, it
was just argumentative; then, at the end of the period, we invited Dan
Huttenlocher, because he’s technically so competent, to see how we would
write it down. Then the three of us met for a year, every Sunday afternoon. So
this is not just popping off. It’s a serious set of concerns.
Schmidt: So what we hope we have done is we’ve laid out the problems for the
groups to figure out how to solve them. And there’s a number of them: the
impact on children, the impact on war, the impact on science, the impact on
politics, the impact on humanity. But we want to say right now that those
initiatives need to start now.
Kissinger: That I made some contribution to the conception of peace. I’d like
to be remembered for some things I actually did also. But if you ask me to sum
it up in one sentence, I think if you look at what I’ve written, it all works back
together toward that same theme.
Well, the odds of Google being in existence in 50 years, given the history of
American corporations, is not so high. I grew up in the tech industry, which is a
simplified version of humanity. We’ve gotten rid of all the pesky hard problems,
right? I hope I’ve bridged technology and humanity in a way that is more
profound than any other person in my generation.
CONTACT US AT LETTERS@TIME.COM.