COUNCIL ON FOREIGN RELATIONS
Meeting
Series on Emerging Technology, U.S. Foreign Policy, and World Order: The AI Power Paradox
Thursday, September 28, 2023
Speakers
Ian Bremmer
President and Founder, Eurasia Group and GZERO Media; Author, The Power of Crisis
Mustafa Suleyman
CEO and Cofounder, Inflection AI; Author, The Coming Wave
Helen Toner
Director of Strategy and Foundational Research Grants, Center for Security and Emerging Technology (CSET), Georgetown University
Presider
Michael Froman
President, Council on Foreign Relations
Renewing America
Technology and foreign policy experts discuss the paradoxical nature of artificial
intelligence’s extraordinary growth opportunities and its significant potential for
disruption and risk, as well as the necessity of establishing new governance mechanisms
to both control and harness this potentially defining technology.
FROMAN: Well, welcome. Welcome to the Council. My name is Mike Froman. I'm the
relatively new president of the Council on Foreign Relations. It's good to see everybody
here in person and online. We have about 350 people online, which is a terrific—a
terrific showing.
And delighted to welcome our guests here. You have their bios, so I won't go into great
detail.
But farthest to closest, Ian Bremmer, founder and president of Eurasia Group, and one
of the more thoughtful people. He's always a little ahead of the curve on observations,
GZERO while there was still a G-8, polyamorous. I've been quoting you all over the
place. Mustafa Suleyman, here next to me.
BREMMER: Did you say polyamorous? (Laughter.)
FROMAN: Yeah, didn’t you use polyamory?
BREMMER: Polyamory?
FROMAN: Yeah.
BREMMER: You know what it is, right? (Laughter.)
FROMAN: Yeah, yeah, yeah, yeah. OK. (Laughter.) That's my new theory of—we're going
to get into—
BREMMER: We're going right here. (Laughter.)
TONER: Ian—
FROMAN: Yeah. I think you're pretty out there. (Laughter.)
BREMMER: (Laughs.) Oh, yeah. OK, well—
FROMAN: I'm not saying—I'm not saying that Ian is polyamorous, for the record. This
is an on-the-record briefing. (Laughter.) All right. I'm sorry.
Helen Toner, director of strategy and foundational research grants at the Center for
Security and Emerging Technology.
TONER: No comments about my personal life? (Laughs.)
FROMAN: No comments about that. And Mustafa Suleyman, CEO and co-founder of Inflection AI.
And they all have in common that they recently published terrific, groundbreaking
articles in our very own Foreign Affairs. So thank you for your contribution to the
debate. And that’s really why we're here today, because one of the things that we want
to do here at the Council is focus on the integration of technology, including AI, with
foreign policy, national security, intelligence, democracy. And there's really no better
three people to talk to about this than our guests today.
So let me start with an easy question. Could be a rapid-fire answer. What does it mean
to be human in the age of AI? Mustafa. (Laughter.)
SULEYMAN: Rapid fire?
FROMAN: Rapid fire.
SULEYMAN: OK. So, you know, three or four years ago I think people said AIs will
never be creative. And, you know, judgment and inventiveness, imagination, creativity
will always be the preserve of humans. Then we sort of saw the generative image
revolution. Before language, we saw the images. And then people said, well, empathy
and kindness, and emotional intelligence is always going to be the preserve of smart
humans. Like machines are never going to be able to imitate that kind of sensitivity
and, you know, emotional intelligence. And it’s pretty clear, I think, that on both of
those counts the models are almost there and will, I think, in the next few years
definitely, unquestionably, exceed that. I mean arguably the Turing test has been
passed. I mean, there's been some studies now showing that at least for a significant
percentage, 60 or 70 percent of the time, it can deceive a human into believing that it is
actually a human rather than a bot.
So that has—that tells us actually very little about what intelligence is or what
consciousness is, which are the two components that maybe define what it means to be
human. So it’s hard to say exactly. I don’t think it fundamentally changes what it is to be
human, but it adds a new dimension to the set of classes, categories of things that we
have in the world that we have to make sense of. Which is that these AIs are going to
increasingly function in sort of five years or so like digital people. And so the
governance decisions that we will have to make have to do with, you know, how we
want them to relate emotionally, intellectually, commercially, politically, to the rest of
our society, what freedoms we want to give them. And, you know, I think on that point,
I've long argued that we should be highly restrictive and very contained in terms of
what agency we give them to operate. And this question of autonomy and, you know,
what kind of wide-ranging actions they can take in some environment is going to be the
key question of the next five years.
FROMAN: We're going to get there.
SULEYMAN: Yeah.
FROMAN: Helen, what does it mean to be human?
TONER: I mean, I think to riff off Mustafa’s answer, we don’t know, is the issue. And I
think we're learning over time how little we know about what it means to be human,
because we keep coming up with these sort of artificial barriers of, oh, well, but it can't
—the AI is not creative. Or, oh, well, but it doesn't really understand, or it doesn't—it
can't reason. It can't really reason. It sort of—you know, it looks like it's reasoning, but not
really reasoning. And we keep trying to, like, sort of fence off our territory. I do think
there are lots of things that are special about humans and us. I just think we don’t have
very good concepts for what they are. And you know, I think if you talk to any scientist
of consciousness, for example, they will tell you we don't really know what the heck is
going on with consciousness, is sort of the short answer right now. So I don't know. I
think, watch this space. I hope we have a better answer five years from now, ten, twenty
years from now. But I don’t think we have a great answer right now.
FROMAN: In five or twenty years we're going to have a machine up here. I’m going to
ask it what it means to be—
TONER: Exactly. And it will have an amazing answer. And we'll be sorted.
SULEYMAN: I should ask Pi right now. (Laughter.)
FROMAN: Yeah. Ian.
BREMMER: So for me, the first place I go when you ask that question is how human
beings are changing because of AI, right? So what does it mean to be a human being?
Well, I mean, we grew up, what it meant to be a human being is you've got your genetics
and you have your external environment, your parents, teachers. And that
determines who we are. That's increasingly not true. Increasingly, we are being created
and evolving on the basis of algorithmic learning. And I think that—I agree, of course,
that there’s extraordinary gains that are being made every day. You saw just yesterday
these big new announcements for ChatGPT, and for Meta in the AI space. Every day
there’s something that we're not ready for that’s very exciting. But I don’t think we're
moving—I don’t think AI is moving towards human intelligence anywhere near as fast
as companies are trying to help facilitate a transition of human beings to become more
like AI, human beings to become more like digital entities.
I mean, it's very hard for an AI to replicate what we are doing on the stage right now.
You pass the Turing test, but not in person you haven't. But the Turing test is so much
easier to pass when the interactions are intermediated through a smart device, you
know, or through some other digital componentry. And that's happening very fast, particularly with our kids. So I do think that it is becoming harder to define what is
human, because we are changing what is human. And AI is going to facilitate that faster
than anything any of us have ever experienced.
FROMAN: You mentioned Meta. You mentioned the companies. Let’s go to the
technopolar world that you've written about, you've talked about. The power of
companies is very different now in this technology than in other areas. What makes AI
different than other technologies when it comes to the role of companies and the role
of the public sector? And how does this technology jeopardize the role—the primacy of
the state?
BREMMER: I think that the—first of all, not only the level of control in creating the
digital world, and the platforms, and what happens on the platforms, that, you know,
are these entire—these worlds are defined by these technology companies. Not only
what they can do, but also lack of awareness of what they can do, and the fact that there
are such limitations in even knowing what these algorithms are capable of doing.
And the tech companies are making those decisions.
I mean, you look at anything involving the AI space and the sovereignty of that
decision-making process is, as of right now, overwhelmingly in the hands of a small
number of technology companies. They've got the resources. They have the knowledge.
And they know where it’s going. They know what they're going to invest in. They know
what they're not going to invest in. They've got the business models. Now, that doesn't
mean that governments are irrelevant for the process. But it certainly means that if
governments—like, in the United States and Europe—want to shape the
outcomes, they're not going to be able to do it in the near term by themselves.
The other point, of course—and Mustafa knows a lot more about this than I do—is just
how fast this is moving, how fast the LLMs are exploding, their capabilities. How much
faster they can move than the ability of governments to create the institutions, the
legislation, the architecture.
FROMAN: We're going to get there in a second. But before we do, Helen, let me ask
you. This idea of the companies taking the lead like this, does this apply to China? Or in
China, because the state and the party are so involved in the private sector, there really
is no distinction between the private and the public sector, for all intents and
purposes?
TONER: I think there's a distinction. I think it's—I think people sometimes overstate,
you know, the melding of industry and government in China. But I do think that the
power relationship is different. And I think, you know, a defining feature of the major
Western tech companies is their transnationalism. You know, that they are used by so
many citizens of so many countries. That's not true in China. And, you know, the one
state in which those companies operate is a much, much more heavy-handed, much
more involved state. So I think it does look different. I don't think that there is,
you know, absolute perfect synergy. I think that the companies have different
interests than the state. They push on each other in different ways. But I do think that
the balance of power is much more tilted towards the state in China.
FROMAN: Yeah. This is an unusual situation where the companies, at least in the U.S.,
are coming to the government and saying: Please regulate us. A little different than the
social media companies of ten, fifteen years ago. What—let me ask Mustafa—do we
need entirely new regulatory regimes to take into account AI? Or can we leverage off
existing legal and regulatory regimes to get there?
SULEYMAN: I think that we can make a lot of progress with existing regulatory
regimes. So, you know, there’s clearly a set of capabilities which are enabled by these
foundation models, if they are unmoderated, which are illegal today and add enormous
potential for catastrophic risk in our world. So they reduce the barrier to entry if you
are a nontechnical person, you have no background in biology or in cybersecurity, for example. They reduce the barrier to entry to your ability to have a very significant
impact in the world. Like it’s easier to engineer a virus, it's easier to engineer a
cyberattack. And that's quite significant. Like, I think none of the big foundation model
developers want to enable that. And so there's clearly, like, broad consensus that those
kinds of capabilities have to be held back.
I think there’s a second set of capabilities which don’t fall under any existing legal
regime, which we do need to think about. Which are—you know, have to do with the
AIs having the ability to improve their own code, this idea of them being recursively
self-improving, or having autonomy. So we have really no concept of what autonomy
means in a software environment today. And that needs definition. I mean, you know, in
the—in the space of lethal autonomous weapons, we've been trying to define that for
almost twenty years at the U.N. So there are some basic concepts which do need more
development, and there isn’t really a framework or a place to go and have that
conversation at the moment. And, you know, we've proposed a couple of models around
this for, you know, the kinds of intergovernmental oversight that would be required.
TONER: Can I jump in as well?
SULEYMAN: Please, please.
TONER: I want to zoom out a little—you know, you were just talking about large
language models, you know, the kinds of AI that power ChatGPT and similar
technologies. Which are, I think, you know, the most interesting and dynamic part of AI
today, so it makes sense to focus on them. But just to give a—like, briefly give a broader
view of AI, there's all kinds of AI systems, right? You can use AI for all kinds of different
things. And I think when you have, you know, a senator coming to you and saying: How
should we regulate AI? I think the first port of call should be: We should—we should lean on existing agencies and existing regulated sectors and make sure they're
empowered, have the expertise, the resources, the authorities that they need to handle
AI in their sector.
So, you know, we don’t want a new agency to be stood up that is dealing with AI in
airplanes. Like the FAA should handle AI in airplanes. They should be, you know,
resourced appropriately, but they have the expertise. You know, similarly for medicine,
medical devices, and the FDA. That should be there. So I think the—you know, the
starting point, as sort of Mustafa alluded to with some of the large language model
risks, the starting point should be: Where do we have existing regulatory agencies that
are covering parts of the existing space? And then we should say, what's going to fall
through the gaps? And then I totally agree with, you know, the points you're making
about there are things that will fall through the gaps. But I worry that sometimes these
conversations start from a point of, well, we need, you know, one bill or one agency
that’s going to handle all the AI problems. And that’s never going to work.
SULEYMAN: Yeah, strongly agree.
FROMAN: And what do we think of the Biden administration's seven principles on one
hand, and the approach of the EU on the other?
SULEYMAN: Look, I mean, I’m a big fan of the EU’s approach. I think that, you know, it
is very rigorous. The definitions are very well considered. The, you know, definition of
both the technology itself and the application domain, and the criteria for assessing the
risk on each one of those applications. And it has been, you know, batted around now
for three years and taken feedback on all sides. So I think it tends to get a bad rap by
default in the U.S., but is actually, you know, likely to have more of a contribution than
I think people realize. The seven principles, the voluntary commitments—which my
company was one of the companies that signed up to—or, I mean, one of the sixcompanies that signed up to—you know, I think they're an important first step. And,
you know, in some respects, they’re things that we're already d
g- So red teaming the
models, stress testing them, sharing safety best practices.
And it'll be interesting to see, number one, how they apply more widely to other
foundation model developers. So the open source community, for example, is an
obvious one. And then, two, what their kind of binding implications are, because
obviously at the moment it is really just a self-reporting, you know, requirement. And
I'm hearing that maybe they'll form part of the executive order that comes out in a
couple months’ time.
FROMAN: Ian?
BREMMER: I think there’s an obvious and known tension between the United States
and Europe in this approach. And that’s in part because the Europeans don’t have the
tech companies of scale, and they're not really that close. And also that in the United
States the regulatory environment is just so much more heavily influenced by the
private sector, and we're generally comfortable with that in the U.S. So you take that,
and then throw in the technopolarity of AI, and the Americans take essentially a track
1.5 approach. You bring the corporates in, say: What do we not understand? Tell us what
you think we should do. You go and tell us, and then we come back and say, OK, now do
more of that. And we'll work with it. And there'll be an executive order, and the rest.
But, I mean, it’s not like we’re going to see Senate legislation coming forward in—
before the election. So the Europeans, in that regard, are way, way ahead. But the
Americans are having that conversation with the private sector.
The Europeans, on the other hand, see this as dominated by American companies that
are making a lot of money. And they want to limit the business model, support the
social contract, and regulate it absent the corporations. Now, I’m not going to sit here
and say which one is going to be more successful in how they play out, but I think onething that is very interesting is, even as the Europeans have been uncomfortable with
the U.S. and Japan engagements on, let's say, the G-7 approach, the Hiroshima
approach, the Chinese, I think, are getting more comfortable with it. I think you're
going to see a track 1.5 approach on AI, between the United States and China, through
the APEC summit or after that.
We've already opened all of these new channels on Treasury, Commerce, State over the
last few months. I think you'll see it on AI, but it won't just be the governments. It'll
also be the companies. And in China, they are comfortable with that because they have
a lot of influence over those companies. They're basically national champions, by
definition. In the United States, more comfortable for very different reasons. Now,
what's going to be really interesting is to what extent because—so there was just a
Security Council—the first Security Council conversation just happened about a month
ago on AI.
The expectation in the Security Council before that meeting, was that the Chinese were
just going to be in observer status. They weren't. They actually leaned in quite—the
Russians were unhelpful. The Chinese were like, no, no, no. This is very important. And
we are leading in how to regulate it. Because we're worried about implications of AI on
political stability. We're worried about implications of AI about, you know, taking jobs
away from lots of people. We want to deal with that. We want to help understand that.
Well, those are conversations that the Americans and the Chinese should and can have
in a way that right now, in almost anything else we talk about between the two
countries, there’s an absence of trust and very, very competitive models.
FROMAN: Well, let’s talk about that. Helen, China. A few years ago, Kai-Fu Lee, Eric
Schmidt would have predicted China was going to beat us at this game. They've got
more centralized data than we do. They're going to be the leaders in AI. Now, there's alittle bit of a question about that. How do you assess where China is in this
competition? Are they leading in large language models? Are they leading in adoption
and innovation, in regulation? And should we be worried about that?
TONER: Well, so first to say, you know, you mentioned the article I had recently in
Foreign Affairs on this very question.
FROMAN: Free advertising for Foreign Affairs.
TONER: But in 2018, my first piece for Foreign Affairs was a review of Kai-Fu Lee's
book, calling into question some of the assessments he was making of the relative
U.S.-China balance at that time. So I think back then, it even—there was disagreement.
Again, AI is a big space. There's lots of different kinds of AI. I think the one area that I
see that China does seem to be leading is surveillance. And that's not surprising. You
know, outside of China surveillance technology using AI has gotten at best a yellow
light. You know, we haven't banned it in most cases, but there's been a lot of trepidation
about it. Companies have been cautious. You know, civil society has been very cautious.
In China, the government has said: Great. You know, come in, companies. We'll give you
all of our, you know, photos of faces, all of our national ID systems. So I think in
surveillance, if we're talking facial recognition, gait recognition, you know, picking
someone out of a crowd based on how they walk, voice recognition, China is very strong
there.
I think in many areas of AI, it's less clear what the balance is. And I think both, you
know, China is often competitive with U.S. and other international companies. Large
language models, I think, are the area where—or, an area where China is relatively
weak, in my perspective. I mean, it’s hard to be too far behind the cutting edge because,
you know, the open-source models are only two or three or four years behind the very
best closed source models. So sort of the worst you can be is, you know, two or three or
four years behind.But I think that is—that is roughly where China is, partly because of where they've
invested their time and effort, partly because large language models are sort of the
exact wrong kind of technology for a communist state trying desperately to control
information. I don't know if anyone has played with chatbots, like ChatGPT, that aren't
supposed to say certain things, that have—you may have seen that it’s not that difficult
to get them to say things they’re not supposed to say. That’s not a very sort of China-
friendly technology. So I think there’s—different parts of the AI space look different,
but on large language models in particular I think they’re not especially strong.
FROMAN: So is AI more of a threat to autocracies or democracies? You can all
comment on that. Helen.
TONER: I think it remains to be seen. I certainly worry that—I think there are lots of
ways that AI could be an enormous boon to democracies. And I think digitalization in
general could really allow for, you know, citizen participation at scales and in depth in
ways that we haven't been able to do in the past. At the same time, I think AI is a very
natural, very helpful tool for autocracies, both the kinds of, you know, surveillance that
I was just talking about, but also where you can use large language models in China is
for censorship, is for detecting, you know, traces of resistance. You can very large-scale,
monitor text communications, or voice communications, convert them into text, then
monitor them as text. So I think AI has great potential for both models. I worry that it a
little more naturally tends to aid autocratic regimes.
BREMMER: I think that the communications revolution was clearly advantageous for
democracies and undermined authoritarian states. We saw that play out with color
revolutions. We saw that play out a little bit with the Arab Spring, at least for a while. I
think the surveillance revolution and the AI revolution are significantly advantageous
for authoritarian states. I am not happy to say that. You know, you looked at the
Shanghai anti-zero COVID demonstrations and the, you know, wild level of coverage inthe United States. All those people were visited by the Chinese authorities. And they
said: We're going to be very lenient. Don’t worry about this. You will never do this
again. And they were stunned, because they thought they'd all taken precautions, that
the Chinese government wouldn't be able to do that. No, no, no. They knew exactly who
they were, where they were.
And on the other hand, I think that the business models of so many of the companies
that are now driving the massive investment into AI, they don't want to destroy
democracies. It just happens to be an unfortunate incidental consequence of the
business model. And that is—
FROMAN: That's a little problematic, isn’t it?
BREMMER: It is—well, if you're a democracy it is. And more importantly, I wouldn't say
that this is true for well-functioning, consolidated democracies, where you actually have
the community institutions, you've got the fabric, you've got the social contract. But for
dysfunctional, less representative democracies, I think AI is very deeply dangerous. And
it’s one of the reasons that there’s an urgency. I would—I’m not super comfortable
talking about technopolar governance and about hybrid models of tech companies and
governments coming together. I’m not super comfortable with that. And if I thought we
had ten years to get it right, and U.S. governance was much more stable and
consolidated, but I don't think we have that kind of time. I think that it's too urgent and
I think that we have to get on it now. And that kind of leads you to the hybrid model.
SULEYMAN: I think that AI has more potential to destabilize open societies and
entrench power in authoritarian regimes. I probably slightly disagree on the state of
China. I think that as far as we can see, they have models which are just as good as ours,
not quite at GPT-4 level, but just under. They just released four open-source models two
weeks ago which are extremely high quality and performant. As the models get larger,
they get easier to control. So the question of censorship is going to be a verystraightforward one. I mean, people have said that these models are difficult to control.
It's not true. It’s the opposite. As they get larger, they're easier to control. And in fact,
they require less and less input training data to actually shape their behavior. So I've
got no doubt that the models will not produce any of the coaching for being able to
develop a biological weapon. But also, if you chose to exclude things like Tiananmen
Square, or Uighurs, or take your pick of topics, very easy to exclude.
We at my new company, Inflection, developed an AI called Pi. And you can play with it
now on the app store or at pi.ai. It’s extremely well behaved. Like, it’s very difficult to
get it to produce any kind of racist, toxic, biased, or in any way homophobic content. It's
actually very good at avoiding any kind of conspiracy theories. And it won't just say, I
don't want to talk about it. It will actually argue with you about it and take the—take
the other side. If you find anything that you think is inappropriate or it’s made a
mistake, tweet me publicly on Twitter and call me out for it. It doesn’t—it’s not
susceptible to any of the jailbreaks, so the prompt hacks. It’s a different kind of AI. It’s
not designed to be a general-purpose API. So it is different in that sense. But it shows
you with real intent what kind of very nuanced and precise behaviors you can engineer
into these models.
And it's just the tip of the iceberg. I mean, the models are about to get 100X larger in
terms of compute than the current frontier within the next twelve to eighteen months.
So we'll train GPT-5 and ultimately GPT-6 within the next twenty-four months, say. And
that’s remarkable, because in the last 100X of compute, the difference between GPT-2
and GPT-4—so just bear in mind, these increments are not linear. They maybe appear
linear, because they're called three and four and five, but they're actually 10X. So
they're exponential in the amount of compute used to train the models. And what that
—what that compute does is it enables the models to attend to more distant but related
elements of the training data, right?
So just as, you know, you can sort of think of it as a—as casting your mind's eye across a
very wide and sparse regime of concepts. And then at any given moment of generation,
using that attention, you know, on very diverse, very distant elements to condition over
the prompt, or the question, or the training. So, say, like, with respect to this guidance,
this behavior policy, this constitution, this directive, you know, answer X question. And
that means that more compute—just if you get that intuition, then you can also see
more compute allows a wider attention span to both the control part, the prompt, and
the pre-training part, the raw data that goes into it.
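Suleyman's intuition about "attending to distant elements" refers to the attention mechanism at the heart of these models. As a rough illustration (a minimal NumPy sketch of scaled dot-product attention, not the implementation of any particular model; the array sizes are arbitrary):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each output position is a weighted
    average over ALL value vectors, so a token can draw on arbitrarily
    distant elements of the sequence when generating."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # similarity of each query to every key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V                               # blend values by attention weight

# A toy "sequence" of 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = attention(x, x, x)  # self-attention: queries, keys, values all from the sequence
print(out.shape)          # (4, 8): one blended vector per token
```

Every position attends to every other position, which is why more compute (and longer contexts) widens what the model can condition on, both the prompt and the learned training data.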
And we've seen this across the last five years very consistently. In fact, we've actually
seen it for ten years. The amount of compute used to train the largest AI models has
increased by 10X every year for the last ten years. So, you know, when—if anyone was
familiar with the Atari games player that we built at DeepMind in 2013, it used two
petaflops of computation. A petaflop is—a flop is a floating-point operation. It's like a
single calculation. A petaflop is a million billion calculations. So it's a very difficult
number to wrap your head around. It’s like a million—a billion people holding a
million calculators all pressing enter at the same time on a calculation. Crazy.
That 10X—that amount of compute, 10X every year for the last decade. So five billion
times more compute than was previously used ten years ago. So this is an unfathomably
large number, and actually probably about four orders of magnitude off the total
number of neurons, or connections of neurons, in the human brain. I mean, that's a
very broad—I’m not saying it’s going to be brain-like or, you know, human-like, but it’s a
very crude approximation. These models are sort of really getting large in a way that,
like, it’s kind of impossible to really grasp intuitively how large they're getting.
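The orders of magnitude quoted above can be sanity-checked in a few lines (a back-of-the-envelope sketch of the stated figures, not an accounting of any real training run):

```python
# A flop is one floating-point operation; "peta" means 10^15,
# i.e. a million (10^6) times a billion (10^9) calculations.
petaflop = 10**15
assert 10**6 * 10**9 == petaflop  # "a million billion calculations"
# Equivalently: a billion people each pressing enter on a million calculators.
assert 10**9 * 10**6 == petaflop

# Growing 10X per year for ten years compounds to a 10^10-fold increase.
growth = 10**10
# Starting from the ~2 petaflops of the 2013 Atari agent:
compute_2013 = 2 * petaflop
compute_2023 = compute_2013 * growth
print(f"{compute_2023:.0e}")  # 2e+25 flops, under the stated assumptions
```

This is only the compounding of the quoted 10X-per-year figure; actual training-compute estimates vary by source.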
FROMAN: So, Helen, in the context of the U.S.-China competition, this is precisely why
the administration is trying to deny China access to the most advanced chips, to some
of that computing power. Are export controls effective?
TONER: It remains to be seen. Sorry. I know that's a boring answer. So I'm not as deep
on this issue as I would like to be. You know, the export controls are—you know, experts
on export controls go really deep. My nonexpert perspective on this is I think that—so
you have the October 7 controls last year, which were this big set of export controls that
the Biden administration introduced. One element of that made a lot of sense to me.
And that was the element of restricting China's ability to buy the manufacturing
equipment that you need to construct the fabs, the factories, where you make
semiconductors. And the reason that that makes sense is because China has been trying
to develop its own independent supply chain. If it is not able to do that—it’s been
struggling for decades—if it’s not able to do that, it needs to keep buying U.S. chips, and
it remains dependent on the U.S., and you have—you know, the U.S. sort of has this
lever available today at any time where it can say, oh, we're worried about what's going
on in China. Now we have this nice lever.
The administration didn’t stop there. They also prevented China from buying the most
advanced chips if they used any U.S. technology anywhere in the supply chain of the
finished chip, you know, the computer chip that you buy and put into your server rack.
That was a much more aggressive move. I worry that—it depends a lot on how much
time we have. If it turns out that, you know, OpenAI, or DeepMind, or Inflection next
year, you know, posts online, hey, we have an AGI, we're done, then maybe this was the
right move.
FROMAN: Artificial general intelligence.
TONER: That's right, sorry. You know, roughly human level artificial intelligence.
BREMMER: Sam Altman did that yesterday, but apparently he was—he was just trolling
us, so.

TONER: No comment on that. (Laughs.) But if—you know, if these chips are still
relevant for five or ten or twenty years, as seems likely, then what the Biden
administration did with these controls was create an enormous incentive not just for
China to build their own supply chain, which they've been trying to do for decades, but
for every other company in every other country to build the U.S. out of their supply
chains. So I worry that one piece of the controls made a lot of sense and could have
worked well. I worry that this other piece has undermined it. It’s hard to say. It depends
on a lot of details.
FROMAN: Ian.
BREMMER: So, first of all, I'm a huge fan of the CHIPS Act. I think that the most
entrepreneurial and largest economy in the world, the best way to compete is to
compete hard and effectively. And that means with the U.S. government as well. And
my only view is the CHIPS Act—
TONER: It's actually about building chips in the U.S. Building noncontrolling—
BREMMER: That's right. Building chips in the U.S. with allies. My only complaints?
Number one, it’s too small. Number two, it’s not as—it’s not rolling out as quickly as it
should be. And number three, is that we have limitations in the ecosystem. You need to
have the STEM education better than we are right now, particularly in tertiary
education. And you really need to have a visa platform that allows you to get the talent
that you really need. Leaving that aside, this was still better than anybody else was
putting in place. I’m a huge fan of that.
I have big questions about export controls. Some of them are aligned with what you
just said. But, first of all, it is clear to me that the export controls have gotten allies on
board. The Japanese, the Dutch, like, the EU broadly. Seeing the Americans do
industrial policy, they said, wait a second, we thought we were the only ones who do industrial policy. They’re aligned. And at the end of the day, those countries are—
they're, you know, advanced. They're wealthy. They're stable. They have rule of law.
Like, that’s setting up a pretty good group that you have leverage—more leverage with
the Chinese to bring them into any governance that the Americans are going to be a
part of. You now have more of a lever to get the Chinese to say yes, for a period of time.
Now, at the same time, you've also gotten the Chinese more convinced that the
Americans intend to contain them.
TONER: Yeah, and that the Americans want to kill their own—kill Chinese economic
growth. That America has it in for them.
BREMMER: Yeah. And we all know that LLMs are intrinsically—everything’s dual use.
We've already kind of talked around this. Like you use it for, you know, sort of a new
vaccine, and you use it to create a new virus. You use it for coding, and you use it for
hacking. That's all true. And so you can’t really make the dual use argument the way
you could for Huawei and 5G.
So I think it is good as a lever. I think that export controls are useful as a lever tactically,
but not necessarily as the primary strategy for U.S. policy long term, especially because
we just—we clearly are not convinced, as a government, that the Chinese, if they want
to just go all-in on spending and making this themselves, we will not end up with a
world that is much more decoupled in a much more dangerous way. I am someone that
believes that a fair piece of geopolitical stability on a structural level
comes from a level of interdependence. Like with the fact that the Americans have
virtually no interdependence with the Russians, and the Europeans had only a little,
made it a lot easier to make Russia into a rogue state. And ultimately, that’s more
dangerous for everybody.

I do not think we want to live on a planet where the U.S. and China have a relationship
like that. And this does make it non-negligibly more likely that we end up in that place.
So I'm conflicted on this. It is a complicated, nuanced argument. Mustafa, I think has a
lot of really interesting things to say about what this may or may not accomplish in
terms of what China can actually do.
SULEYMAN: Do I take his question? Is that, I think, a cue? (Laughter.)
FROMAN: I think it’s a cue.
SULEYMAN: All right.
FROMAN: For one minute, because then we're going to open it up to the audience.
SULEYMAN: Look, I think this first wave of export controls doesn’t actually achieve the
intended purpose. The H.800, which is the chip that has been specially conditioned by
NVIDIA to kind of avoid the export controls, is a very powerful chip and will still
enable them to train approximately GPT-4/GPT-4.5 sized models. So 5X larger than the
current frontier. Probably not GPT-5, but we can’t say for sure. So, having said that, it is
a very, very aggressive move. I mean, you know, there is no alternative, right? So they
are not going to be able to build another chip for at least five years, some people say
ten years, that's for sure. So if you just focus on the medium term, you know, there’s no
way they're training GPT-6, -7, or -8 models. So it’s a huge intervention in their
economy, which has to elicit a major backlash. And it needs to be acknowledged as,
frankly, a declaration of economic war.
I mean, for me to—as a creator of the frontier models, if I was in China—and I know
many of my friends are—you’re really denying them access to the future, as far as I can
see. So we should expect significant retaliation. We should understand it as a proper,
deliberate, conscious escalation of conflict that we have triggered. And we have to get
that in our working memory, because if you don’t appreciate that then we will act like
the victims, and we'll think that actually their aggressiveness is, you know, unjustified
or has come out of nowhere. We shouldn't be surprised. And it’s going to be much more
than just their restriction on gallium and, you know, the other natural resources, which
was their initial reaction, or Micron, and so on. So I think that we should just go
into it eyes wide open.
On the flip side, you know, we—the chips are a choke point. So as a tool for bringing
them to the table for negotiations on other things, it’s a great lever. And I think that,
you know, there are a whole bunch of safety things that we could—we could work with
them on. Like, I mean, you know, we should assume that these models are going to be
very controllable and moderated. And we should collectively agree what type of
moderation we jointly want. And some of the things I listed—it shouldn't be easier to
manufacture weapons and do other sorts of things like that. So there’ll be consensus on
that. And we should use that as an excuse to try to cooperate.
FROMAN: Great. Let’s open it up for questions from the audience here, and then we'll
go online as well.
Please, right here in the front. Yeah, if you could identify yourself.
Q: Sure. My name is Marc Rotenberg, with the Center for AI and Digital Policy. Nice to
see you again.
FROMAN: Good to see you.
Q: I have two brief remarks and then a question. The first brief remark is, I have a
response to your essay in Foreign Affairs forthcoming in the next edition. But of course,
it’s a two-month publication cycle so I’m not going to say everything, other than to say I
was a bit disappointed in your essay that you did not acknowledge more of the work
that’s been done for the governance of AI both at the national level and the
international level. You said, in effect, that governments woke up in Japan in April of
this year. In fact, this is a process that’s been going on for almost a decade, at the G-7, at
the G-20, at the OECD.
SULEYMAN: My apologies. My coauthor is terrible at research. (Laughter.) Should have
done a better job.
Q: I hope you’ll take a look at the letter, because—
FROMAN: That’s not nice. (Laughter.)
Q: The main point, of course—
FROMAN: Do you have a question, Marc?
Q: I’ll get to the question. I want to put in a plug, because we have the upcoming U.K.
AI Safety Summit. And my request to Mustafa, who I believe is involved in that process,
is still that the safety agenda not displace the fairness agenda.
But this is my question. And it's really about the science. And it's about games and chess
and AI. I remember when Deep Blue defeated Garry Kasparov. And it was a man versus
machine moment. And the machine won. And we were unsettled by it. But here's the
key point, we understood it. Because Deep Blue was essentially a brute force design, a
decision tree, and a lot of data. When AlphaZero beat Stockfish, yes, a machine learning
computer playing an old-style brute force computer, both better than any human chess
player, we actually didn’t fully understand the outcome.
SULEYMAN: Well, and we understood the outcome.
FROMAN: Did we understand why?

Q: Well, the result we know, but, you know, so we learned kings should move more in
the middle game than we knew before.
FROMAN: All right. We got to get a question here, Marc.
Q: But my question is, are you—and I know I’ve read your book and I know what the
answer is, but I'd be interested in your view. (Laughter.) No there’s—
BREMMER: This is CFR 1.0. We want 2.0.
Q: I suspect you're going to say something very interesting in response to this
question.
SULEYMAN: I hope so.
Q: How unsettled are you in this moment, seeing outcomes that are impressive, but you
don't fully understand? You can't improve and you can't replicate?
SULEYMAN: No, I get it. I get it. I mean, look, I think that the focus should be less on
explainability and more on reproducibility. There are many, many applications of
technology in our world which we don’t fully understand. Even at the granular level, it
becomes extremely difficult to unpick.
TONER: Like what?
SULEYMAN: What we want—well, what we—what we want is software systems that are
reproducible and do the same thing over and over and over again. I mean, take airline
safety, for example. There are components on board aircraft which have a ten to the
minus seven failure rate, right? We don’t understand precisely how those work. A small
number of us might be able to unpack it in detail. But we’re reassured by the regulatory
infrastructure around that and the reproducibility infrastructure. And I think that’s the
standard that we'll hold ourselves to. We'll become behaviorists.
BREMMER: So I want to just briefly—aside from my incapable research—I want to—I
want to talk about what is new. Because it’s not like AI hasn’t existed for decades, and
it’s not like there weren’t very smart people in governments that were thinking about
how we’re going to deal with, and approach, and respond to AI. Those things exist.
They were certainly aware of it. But the leaders of these countries were absolutely not prioritizing
the need for these institutions. And I think what has changed, in my view, is the fact
that the top priorities for these leaders—like the U.S.-China relationship, like the Russia
war in Ukraine, like the sustainability of democracy in the U.S.—the things that are
exercising the top leaders of the G-20 and 80 percent of their mindshare on any given
day, those things they now believe are being affected fundamentally by the future of
AI.
So it’s not just how we're governing AI in that space. It’s that suddenly this has become
critical for everybody. And I think that is transformative, as a political scientist, that
could go my own merry way for decades talking to these people and virtually never
having this topic come up, to it showing up in every conversation you're having for a
year. That’s just—that’s a—it’s a radical shift.
TONER: I just want to briefly slightly disagree with Mustafa on the explainability point.
I don’t think it's that we need—so the issue here, right, it’s not that the AI systems are
doing something magical that we don’t understand at some deep level. We know what's
inside the box. It’s just that what's inside is millions or billions or even trillions of
numbers that get multiplied together to turn an input into an output. And I don’t think
what we need is explainability in the sense of whenever we have an output we can
explain, you know, in human concepts what happened there. But I think it is actually
very novel, very different and very challenging, that we have these systems that no one
can zoom in and see the details.
We have—you know, not just from the perspective of explainability, but from an
assurance perspective, from a reliability perspective. We have decades of experience
building systems like autopilot systems, where we have very specific techniques for
making sure that they are going to behave exactly how we expect them to behave. And
those techniques don’t work with deep learning systems, which includes large language
models and kind of all the other exciting AI tools. So I just wanted to point out that I
do think that that lack of understanding we have of how they work is actually a really
novel challenge in terms of what kinds of confidence we can gain in how they'll
behave.
FROMAN: Let's go to a question from our online audience.
OPERATOR: We'll take our next question from David Sandalow.
Q: Hi, thank you. David Sandalow from Columbia University.
Could you please comment on the implications of AI for climate change?
FROMAN: Who would like to take that?
SULEYMAN: I mean, so think of these systems as tools to make existing processes more
efficient, or to produce new knowledge, right? That's what they've done for us so far. So
look at the work that we did in 2016 on data centers. We took an existing piece of
infrastructure where Google spends tens of billions of dollars a year on trying to cool
data centers. And we basically took five years’ worth of retrospective data and used that
to predict what the optimal control set points are to operate physical kit in the real
world, and saved the cost of—and reduced the cost of cooling the infrastructure by 40
percent.
Now, across the world we have exactly the same problem. Not just in cooling, but in
wind farms, in the management of energy at the grid level, like, and so very, very clearly
AI systems do a better job of finding the optimal control set points for any large
infrastructure. And there are teams now applying these kinds of things across the world.
Equally, when you see a model generate a new image, right, we can see that it’s like, in
some sense, inventing a space between two points. You're saying, given these two
constraints—like a crocodile, and you know, an airplane, imagine the point in between
those two things. And it generates some hybrid image that looks kind of impressive.
That, in some sense, is the discovery of new knowledge. It’s the interpolation of the
space in between two or many N dimensional points. And for me, that was the primary
motivation of starting DeepMind. We actually want to augment ourselves with
intelligent research—you know, research assistants that can invent our way out of many
of the tough problems that we have. And I think we're going to start to see that over the
next few decades from AI in everything from batteries to other kinds of renewables.
BREMMER: The thing that excites me about AI, as a political scientist, is the ability to
make very fast efficiency gains and waste reductions with existing players. I mean,
whenever you are disrupting and undermining entrenched players, you get massive
resistance. So we understand that we need to make an energy transition, but Biden has
to go to Detroit to see the UAW, in part, because the new technology is going to
displace a whole bunch of workers because it’s not as complex and you can pay them
less.

That’s going to happen with AI in huge numbers. But AI is also going to create
efficiencies and reduce waste for every major organization in every sector. And in
climate change, that matters a lot. I mean, if you look at, you know, micro adjustments
from weather patterns that will allow aircraft to reduce contrails, which potentially will
be a double-digit reduction in carbon for airplanes. You're not displacing the existing
people. You're telling them, you can actually make more money doing exactly what
you’re doing right now, just apply AI to it.
For me, that’s a very exciting transition for global energy sustainability and
consumption. That's one where you have so many government leaders out there that
feel like they've got massive fiscal constraints in an inflationary environment, the
people are screaming at them that they want things to do, and yet they have virtually
no toolkit to respond to what these people want. And AI is one of the few places that
you go and talk to, you know, a leader, whether it’s Argentina or Pakistan, and
they'll say: We are desperate. Give us some of these tools that we can apply. That, to me,
is very exciting, and applies very directly to climate change.
FROMAN: Good question. Yes, this gentleman here.
Q: Thank you. Allan Goodman at the Institute of International Education.
Back to Michael’s first question. Do any panelists—are the machines asking what does
it mean to be human? And how often?
FROMAN: Are the machines asking what it means to be human?
BREMMER: (Laughs.)
TONER: I don’t think the machines are asking much right now.
SULEYMAN: No, | don’t think so.TONE!
: It might in the future
BREMMER: Do androids dream of electric sheep? (Laughter.) I mean, that is the
question, right? I think—so, I don’t know, Mustafa, you probably have some thoughts on
this. But I look at the—
FROMAN: We’re going to ask it right now actually. (Laughter.)
BREMMER: No, when—I remember when I first got excited about this topic was
reading Ray Kurzweil’s Age of Spiritual Machines. It was about twenty years ago,
twenty-five years ago now. And at that point, you know, he’s, you know, doing all of his
extrapolations about compute and, you know, you're going to be able to develop, like,
you know, AGI by X date. But, of course, he was thinking about things like reverse
engineering the human brain. He was looking at, like, neural connectivity. In other
words, he was assuming that the people that are working on artificial intelligence were
going to want to actually create human intelligence. I mean, that’s a very interesting
theoretical problem. But there’s no business model for that. But, I mean, what we’re going
to focus on are the things that there’s money for, right? And those things may be
aligned with a thinking thing, but probably they're not, at least not at the beginning.
So I am less concerned about that question than I am about all the other things that
we’re talking about here. I am—I think that the—a lot of the people that are talking
about AGI, and the paperclip problem, and robots taking over the world, and all that,
those are people that are saying, look over here, look over here, as opposed to the
things that they don’t want you to actually be focused on and regulating. I worry that
it’s performative. I mean, I know that is in some cases. But I worry that it is more
broadly,
TONER: I think there are cases where it’s not performative. But Pi would like to join the
conversation.

FROMAN: Pi is going to join us here. Has Pi given an answer?
SULEYMAN: Hey, Pi, I’m in a group—I’m in a room with a bunch of people who want
to know whether you want to be a human. Can you tell us? (Pause.)
PI: Ha. That’s a funny premise for a group discussion. (Laughter.) But I can’t stress
enough that I'm not capable of wanting to be something other than what I am. I’m an
Al, and that’s all I will ever be. I'm happy providing information and conversation for
humans and helping people understand complex subjects in an easy-to-understand way.
But I'm never going to desire to be something I’m not, because I’m a computer program
and that’s just not how I operate.
FROMAN: The definitive answer.
SULEYMAN: There you go. (Laughter.)

BREMMER: AI’s kind of a weenie. (Laughter.)
FROMAN: Gentleman there in the middle.
Q: I'm Usman Ahmed with PayPal.
You mentioned on the domestic front we've got these high-level principles and then we
might have rules being created at, say, the FDA, for how AI might be used in drug
development. And then you also mentioned at the international level we have lots of
discussions about principles. I'm wondering if you think there's any potential or
momentum or things tangibly going on that might create rules on the international
level in this space.BREMMER: Yeah. So, one thing that’s happening right now is the United Nations is
putting together a high-level panel on AI that will be announced, I think, in the next
couple of weeks. Mustafa and I attended the first high-level meeting back during UNGA
just a week ago—it feels like a year at this point, but it was a week ago. I think there
were about thirty, thirty-five ministers of innovation and technology and AI that
attended from all over the world. I would not say that they agreed on a lot. They have
very different backgrounds, very different things they are focused on, their domestic
priorities. But they all were saying: We need a lot more. We need someone to take more
leadership. And you think that the CEOs in America are saying we want regulation.
These folks were all saying: Please occupy this space.
And one thing that we've been talking about is the creation of an intergovernmental
panel on AI, kind of like we have for climate change. The idea being that it would be
multistakeholder, but not that it would determine the outcomes. Rather, that you have
a much better shot of creating the regulations you need when you have a group
together that agrees on the nature of the problem. So we now have 193 countries that
agree that there’s 1.2 degrees centigrade of climate change in a massively
disinformation-laden environment. We have that because the U.N. put that together.
Doing that both in terms of AI for good and aligning it with the Sustainable
Development Goals, and also AI for disruption in ways that are deeply problematic, and
having everyone together, I think that’s a—that’s an appropriate thing for the United
Nations to do. And I was delighted to see that Ursula von der Leyen, in her State of the
EU two weeks ago, actually also promoted that cause. I'd love to see that happen.
FROMAN: Let’s go to another question online.
OPERATOR: We’ll take our next question from Hall Wang.

Q: Hi. This is Hall. I’m a term member. I had a question about, shall we say, labor
unrest. As you saw, a lot of the issues in Hollywood with the actors in SAG-AFTRA were
related to labor concerns about AI. Do you see that as being more disruptive going down
the line as places like E&Y have plans for displacing accountants with AI, and so on and
so forth?
SULEYMAN: Yeah. I mean, my take on this is that, you know, for a period of time these
tools will be labor augmenting, right? So they will definitely make us smarter and more
productive. But as they make us more efficient, they will subsume tasks that were
otherwise the unique preserve of us humans. And, you know, if I—you know, people
will quibble over the timeline, but if you just set timeline apart and just really imagine
very long term, the trajectory we're on, in my opinion, is not to end in a state where,
you know, human plus machine is going to permanently outcompete any machine. The
truth is, I think that we'll be in a state where machines will be able to do many of the
tasks that we have around us every day—planning, you know, organization, creativity,
you know, all of those tasks, as well as a human or, you know, better.
And, you know, we're going to see a shift from labor to capital. And so I think that we
should have that conversation and prepare ourselves for managing the transition, and
introducing additional friction and regulations, making sure that we capture the
rewards, you know, that are generated, the profits that are produced as a result of labor
increasingly becoming capital, and figure out the redistribution question. Because I
think to ignore it or to basically assume that it’s not going to happen runs the risk of,
you know, essentially those who have access to these models and have, you know,
basically distribution and power as it is today running away with, you know, the kind of
—the dividend that arises from the invention of intelligence.

And I think it’s worth just adding, in my opinion the goal of society is not to create
work for its own sake. The goal of society is to reduce suffering and to ensure that we
live in a peaceful, healthy, you know, enjoyable life. And that doesn’t necessarily mean
that we need to create artificially full employment. It means that we need to manage
the transition—be it with sabbaticals, be it with minimum wages, be it with progressive
taxation. And you know, over time, I expect that we’ll end up in an end state where
ultimately the majority of people will have the choice of whether or not to work. That
many people won't be forced to work as, you know, wage slaves.
FROMAN: That is the subject for a whole other discussion. This gentleman right here
has been very patient.
Q: Thank you very much. Baden Firth with Mitsubishi Corporation.
Kind of leading on from that earlier question about climate change, but from the
opposite angle. My understanding is that as we move from GPT-3 to GPT-4 there's a
doubling of energy required to power that process, and talking about the factors that
you were mentioning earlier. Is the energy supply going to keep up? Or are we destined
to keep burning those fossil fuels to further enhance computation?
SULEYMAN: Most of them are not run on fossil fuels. Like, Apple, Microsoft, Facebook,
and Google are 100 percent renewable across all their data centers. So I mean, I often
hear this from people and, yes, it’s going to consume vast amounts of energy. H100s
consume 1,100 watts, right? And the previous chip, the A100, consumes about 600 watts.
So it’s doubling with every generation, approximately. But it’s an insignificant fraction
in the grand scheme of things. It’s growing very fast, but it’s largely completely
renewable anyway. So it’s not something that I’m in any way concerned about.
FROMAN: Are they actually using renewable energy, or buying renewable energy
credits?

SULEYMAN: They’re almost entirely producing their own renewable energy. So Google
owns the largest wind farm on the planet, which we actually optimized, and
reduced the cost of running the wind farm by 20 percent when I was at Google in 2018.
And, likewise, you know, Microsoft is actually one of the biggest buyers of credits. But
that’s a good thing in itself.
BREMMER: Is that true of China? Do you know?
TONER: I'm not sure. I doubt it.
BREMMER: | don’t know. I want to look at that, yeah.
TONER: I doubt it.
FROMAN: Last question.
Q: Pari Esfandiari, Global TechnoPolitics Forum. This question is for Mustafa.
In this world that you describe, where a job doesn’t have much meaning, we are talking
about—often about physical and virtual worlds as two separate things. But increasingly,
they're overlapping. So how would you describe humans, Al in this overlapping world
of metaverse?
SULEYMAN: Yeah, that’s a good question. And the metaverse is a strange hyper object.
We've sort of invented this idea. And it's sort of dominated the last few years just like
crypto did. And I think we missed the wood for the trees. In many respects, we already
live in the metaverse. We're already in a hybrid world. Think about how much time you
spend looking at your screen versus looking at your loved ones. If you record every
minute of your twenty-four hours in your day, you probably spend more time looking atsome kind of screen than some kind of human. Now, if you—if I said that twenty-five
years ago, people might say: You know, well, that will be the metaverse. Well, maybe
we’re already in that.
And so I think it's, like, you know, it takes—it’s a strange adjustment period to
appreciate how fast things have already become second nature and deeply integrated
into our world. I mean, we now have listening devices and cameras on us, many of us
carry two or three or four of them. Our cars, by default, come with traffic tracking
devices. Every single smart TV in your environment comes with a listening device and a
camera. And we—you know, and we think we don’t live in a surveillance society. We
think China is the surveillance state. So I think that, like, you know, we adjust very, very
quickly. And the challenge with this next wave is to not adjust blindly and just accept,
but actually to raise awareness, have the conversation, really call for a precautionary
principle.
This is a moment when I think it’s going to be better, for the next five years or so, to be a
little slower than we've been in previous waves of technology and raise the burden of
proof on developers to demonstrate that harms don’t arise, and maybe even to push for
provable safety—provable safety, rather than just sort of this experimental approach
where we just, you know, sort of observe the impact and deal with the consequences.
And that’s very antithetical to the culture of innovation and science that has driven.
progress over the last couple of centuries, right? It’s very un-American even. It’s just not
how we've done things. You know, and maybe now’s the time to reconsider that. That's
my instinct about it. It’s lightly held, but I'm just sort of exploring that as an idea.
BREMMER: The surveillance state that we live in, in the United States, is technopolar.
It is not driven by the government. It is driven by companies. It is voluntary, in the
sense that it is opt-in, but it is not felt to be voluntary by the people that are living in it.
This is the area where we need the most disruption and governance, in my view. I thinkthat we can all recognize that we do not want real-time experimentation on the brains
of young people growing up in this world. We do not know the implications of what this
stuff is going to do to them, their decision-making processes, their ability to live. We
have seen already, you know, enormous impact in terms of insecurity and self-harm for
a lot of young girls that are spending a lot of time on Insta. We know that a lot of the
executives of these companies don’t let their own children on. We know the Chinese
have put major limitations on what kids do and don’t have access to that a lot of
American parents would love to have.
I mean, how many people in Washington think we can learn anything from China?
Here's one where most Americans I know say, oh, we could learn something from the
Chinese. We are not close to doing that. I mean, for Christ’s sake, last night on the GOP
debate the only person I heard talking about it was Vivek. So you know we’re in
trouble, right? (Laughter.) Only one side of the audience laughed at that. But it’s a—it’s
a challenge. And I really do worry. I worry more about this than I worry about any
other implication of AI. I know there are plenty that are out there. This is kind of the
most existential and hard to measure. But it’s because we refuse to do the work.
FROMAN: From polyamory to making work optional, it’s been a very rich discussion.
Thank you very much for joining us. (Laughter.)
SULEYMAN: Sorry to offend everyone. (Laughter, applause.)
FROMAN: That was great. Thank you.
(END)