
THE BIG STORY

ALL EYES ON AI
Regulators struggle to keep up with artificial intelligence's rapid development

YIFAN YU, Nikkei staff writer

PALO ALTO, U.S. -- Imagine a world where artificial intelligence surpasses human expertise in nearly every domain and can code in one day what the biggest tech companies in the world produce in years. Now, imagine that world becoming our reality within the next decade.

In a series of speeches delivered in 20 countries over the past month, this is what Sam Altman, CEO of the startup OpenAI, has been asking audiences as part of his effort to proselytize a global regulatory framework for artificial intelligence. His company helped start the AI craze in November with the launch of chatbot ChatGPT, which has demonstrated the huge potential of artificial intelligence.

While most tech moguls resist efforts at government regulation, Altman has taken pains to tell governments that they need to start thinking about regulations now, before it is too late. "The potential upside here is enormous. The AI revolution will create shared wealth and make it possible to dramatically improve the standard of living for everyone," he told an AI conference in Beijing on June 10.

"But we must manage the risk together in order to get there," Altman added in his first public speech in China, where ChatGPT is banned. "If we're not careful, a misaligned AI system designed to improve public health outcomes could disrupt an entire health care system by providing ungrounded advice.

"Similarly, an AI system designed to optimize agricultural practices might inadvertently deplete natural resources or disrupt ecosystems due to a lack of consideration for long-term sustainability affecting food production and environmental balance."

As AI becomes more powerful and prevalent, tech industry leaders like Altman, researchers and regulators alike are discussing the guardrails to put around the powerful technology, as issues such as disinformation, bias and job displacement rise in prominence along with the potential benefits of AI.

The stakes are high: Some U.S. policymakers fear too much regulation will slow the development of U.S. AI technology and give China an advantage; regulators such as the Federal Trade Commission say too little regulation in the previous era of social media has eroded transparency and given rise to black box algorithms, abuses of privacy and toxic disinformation.

Largely spurred by the success of ChatGPT, governments around the world this year made their first attempts at coming to grips with AI. China in April announced a draft of a generative AI law that is set to be the world's first legislation on the technology to take effect. The European Union's AI Act, a comprehensive bill on artificial intelligence, passed a parliamentary vote in May, taking a step closer to becoming law. At the Group of Seven summit in May in the Japanese city of Hiroshima, international regulations for AI were a key topic.

In the U.S., the White House and Senate called in tech leaders to discuss the risks AI poses. In a series of hearings and behind-the-scenes negotiations in Washington, lawmakers and regulators have been trying to determine how best to impose limits on the technology, if at all.

[Photo: People visit an AI robot booth at Security China, an exhibition on public safety and security, in Beijing on June 7. (Reuters)]

Nikkei Asia - Special excerpt from June 19-25, 2023 Print edition. Nikkei Inc.
No reproduction without permission.

[Photo: OpenAI CEO Sam Altman attends an open dialogue with students at Keio University in Tokyo on June 12. (Satoko Kawasaki)]

These regulatory decisions add an unpredictable element to the development and applications of AI. Regulators could profoundly affect the development of AI with rules governing the type of data generative AI can train its models on, for example, or the uses that AI can be put to, such as policing.

"How these technologies develop is not an inevitability," as Lina Khan, head of the U.S. Federal Trade Commission, which enforces rules on competition, put it in an interview with U.S. network CNBC in May. "We as policymakers can face a choice that will set all of these technologies on a healthier and more beneficial trajectory, rather than a harmful one."

Most immediately, regulatory decisions in China and the U.S. -- the world's two AI superpowers -- are now set to shape the new AI world order. Which country's regulatory formula will prove superior could be decisive in determining who wins the AI race. This will pit China's top-down regulatory approach, characterized by state control and censorship, against the U.S.'s bottom-up approach, where Big Tech dominates the conversations of how AI policy should be shaped while regulators take a back seat.

Some argue that national regulations will be meaningless, as large language AI models cannot stay bound by national borders. They propose an international watchdog agency that could oversee the global development of AI.

"As much as we want to talk about China and the U.S., AI is truly a borderless technology and I don't think that we can regulate it at the state level," said Vilas Dhar, president of the Patrick J. McGovern Foundation, a $1.5 billion U.S.-based philanthropy focused on artificial intelligence and other tech solutions for good.

"At the end of the day, any existential risks are going to be beyond borders," Dhar said. He argues China and the U.S. need to find common ground in AI governance and that cross-border collaboration is a must to regulate the powerful technology.

CHINA: TOP-DOWN APPROACH

However, given the level of tension between Beijing and Washington, it is hard to imagine how the superpowers that are leading AI research could agree on regulating the phenomenon.

China has a track record of emphasizing political control over the tech sector even if that means sacrificing the interests of its most innovative companies. Its approach to AI has been no different: Beijing has already warned companies to slow down the development of ChatGPT-like services.

In April, five months after OpenAI's chatbot was released to the public, and when most governments were still pondering the question of where to start regulating AI, China unveiled a draft law requiring all companies developing generative AI products to register with the government and pass security tests before rolling out to the public.

The draft calls for no discrimination or misinformation, and for respect for intellectual property, in generative AI model training. It also emphasizes that content generated through the use of generative AI must reflect "socialist core values" -- the governing credo of the Chinese Communist Party. It may not contain anything related to "subversion of state power; the overturning of the socialist system; incitement of separatism; harm to national unity," the draft states. Content also cannot upset the economic or social orders.

The draft's language is similar to that of several new regulations Beijing has issued during a sweeping crackdown on the tech sector over the past two years. From antitrust to data protection, the crackdown has addressed a range of issues, but all measures emphasize the requirement for tech companies to reflect these same "socialist core values."

China's State Council laid out its ambitions in artificial intelligence back in 2017, with a road map envisioning an industry worth 1 trillion yuan (over $140 billion) by 2030; many state and local-level policies that funnel capital and other support into AI development have been unveiled since then.

But the level of state support is commensurate with the increase in political control.

"When it comes to AI, China is quite different from, say, the United States, which has not really done a whole lot of new stuff legally to regulate AI. China has, in fact, been working on this for years," said Helen Toner, a director at Georgetown University's Center for Security and Emerging Technology.

The country introduced its first AI-related regulation in March 2022, requiring companies that use AI-powered recommendation algorithms to ensure the algorithms do not "endanger national security or the social public interest." Providers of such technology need to register with the Cyberspace Administration of China (CAC), China's internet watchdog, to be compliant, the regulation says.

Industry participants and analysts are expecting more AI-related regulations to be introduced in China. At a national security meeting in Beijing, President Xi Jinping in late May called for more and improved governance of artificial intelligence, according to a readout from state media Xinhua News Agency.

[Photo: Chinese President Xi Jinping has called for stronger governance of artificial intelligence in the country. (Reuters)]

AI and government regulation: The story so far
- October 2022: U.S. White House publishes Blueprint for an AI Bill of Rights, outlining a nonbinding road map for responsible use of artificial intelligence
- November 2022: OpenAI releases AI chatbot ChatGPT to the public, powered by the GPT-3 model
- February 2023: ChatGPT records 1 billion page visits
- March 14, 2023: OpenAI releases new model, GPT-4
- March 21, 2023: Google unveils Bard, its ChatGPT rival chatbot
- April 11, 2023: Cyberspace Administration of China releases draft of Administrative Measures for Generative Artificial Intelligence Services, the country's first legislation targeting generative AI
- May 4, 2023: U.S. President Biden meets CEOs of Microsoft, Google and OpenAI to discuss AI risks
- May 11, 2023: European Parliament committees vote to approve the AI Act; lawmakers are now finalizing its details with the European Commission and member states
(Source: Nikkei Asia research)

[Chart: China beats U.S. in AI-related research output -- number of papers ranked in the top 10% of citations in the global AI field, 2012-2021, for China, the U.S., India, the U.K. and Australia. Source: Elsevier]

THE US: FILLING A REGULATORY VACUUM

The U.S., on the other hand, seems to be in no rush to roll out national legislation on AI, leaving the question of how to govern the technology to the tech industry.


[Chart: Venture capital investment in AI by country -- number of deals and average deal size (in millions of dollars) for Japan, South Korea, Greater China, the U.S. and the EU (27 member states), 2012-2022. Source: Preqin]

[Photo: Alphabet CEO Sundar Pichai speaks about the new PaLM 2 large language model at a Google event in California on May 10. (AP)]

"By and large, federal agencies have still not developed the required AI regulatory plans," the Brookings Institution, a Washington-based think tank, concluded in a report examining AI regulation in the U.S.

The report says that while U.S. regulations on AI are clearly needed, the processes required "increasingly stand in contrast to the current zeitgeist -- where AI systems are becoming increasingly powerful and having impact much faster than government can react."

The CEOs of Microsoft, Google and OpenAI met with U.S. President Joe Biden in May. According to a White House readout, the administration called on the leading AI companies to "model responsible behavior" and to "take action to ensure responsible innovation and appropriate safeguards."

"That's just symbolism," said John B. Quinn, founder and managing partner of Los Angeles-headquartered law firm Quinn Emanuel Urquhart & Sullivan, adding that while he does not expect the Biden administration to sign any federal-level legislation on AI very soon, there have been a lot of regulatory actions at the state level.

California, for example, introduced a bill in April that aims to regulate how artificial intelligence is used in "automated decision tools," such as algorithms that filter out job applicants.

On a federal level, the U.S. Senate held a hearing about artificial intelligence in May where OpenAI's Altman testified and urged more AI regulation and the formation of a U.S. or global agency to oversee companies' compliance with safety standards.

"We are not alone in developing this technology," Altman added. "It will be important for policymakers to consider how to implement licensing regulations on a global scale and ensure international cooperation on AI safety, including examining potential intergovernmental oversight mechanisms and standard-setting."

Sen. Richard Blumenthal, a Democrat from Connecticut and chairman of the Senate panel, said the goal of the hearing was to learn more about the benefits and risks of AI and to avoid repeating the mistakes the country's lawmakers made with previous technologies. "Our goal is to demystify and hold accountable those new technologies to avoid some of the mistakes of the past," Blumenthal said. "Congress failed to meet the moment on social media."

Meanwhile, the Biden administration introduced an initiative called Blueprint for an AI Bill of Rights in October, and Senate Majority Leader Chuck Schumer is working on a framework to regulate AI at the national level. Both express high-level principles without getting into the details of how they will be implemented in practice.

In the absence of federal regulations, Microsoft and Google, the leaders in the field, have set up internal AI governance teams and published their own AI principles for how the companies will develop and deploy related technologies responsibly. Both mention "safety, inclusivity and accountability" as the key principles in developing and deploying their AI products.

At Google, this has resulted in the creation of the Responsible AI team, which takes part in designing models. "We started working with the Responsible AI team within Google from the very beginning," said Andrew Dai, principal software engineer at Google DeepMind, who led the development of the company's large language models, including the latest, PaLM 2.

Responsible AI features include measuring how much the model memorizes the training data and controlling how toxic the output from the model is, in addition to quality filtering and data deduplication, Dai told Nikkei Asia at a briefing.

After training, the large language model, such as PaLM 2, was thoroughly vetted by the Responsible AI team before being rolled out, according to Dai.

However, some are questioning whether the industry-led, bottom-up efforts in the U.S. will lead to effective AI governance, as regulating AI could mean slowing down progress amid intense competition among the leading players. Experts say it is hard to motivate companies to self-regulate when competitors are moving forward at full speed, nor is it straightforward to convince competing companies to agree on how self-regulation should be designed.

"It's literally putting the gas pedal and the brakes down in the car at the same time," said Paul Kedrosky, managing partner at SK Ventures. "If you're the brake pedal person [in the company], people will likely tolerate you for a while. But then if the company's market capitalization starts to fall, you're gonna get fired."

For example, OpenAI's Altman, who called for U.S. and international AI regulation at the May 16 Senate hearing, said at a May 24 conference in London that the maker of ChatGPT will exit the EU market if the region's AI laws become too overbearing, according to media reports. His comment triggered the anger of Thierry Breton, European commissioner for the internal market, who called it "blackmail."

The CEO later walked back his comment on Twitter and said the company has "no plans to leave."

[Photo: U.S. Federal Trade Commission head Lina Khan told media in May that policymakers need to make a choice to send new technologies on a "healthier and more beneficial trajectory." (Reuters)]

Bart Selman, a professor at Cornell University, and hundreds of other renowned AI researchers in March signed an open letter that called for a six-month pause on training AI. He said the AI industry in the U.S. had little incentive to bear the costs of regulating itself in any meaningful way.


[Photo: OpenAI's Altman, right, speaks to an AI conference held by the Beijing Academy of Artificial Intelligence on June 10. (Screenshot from Beijing Academy of Artificial Intelligence's website)]

"Compared to Europe, where they have reasonable data and privacy laws in place now," Selman said, "Big Tech has sort of slowed [data and privacy regulations] down in the U.S."

"Big Tech might say, 'OK, let's work on regulation for AI together. Just give us another five years.' By that time, it will be too late," said Selman.

"When you work for a company, you can't really go against the interests of the company. So you have to have people that are a little removed from that and don't have such a conflict of interest [in shaping AI policies]," he added.

Several open letters similar to the one Selman signed, endorsed by many tech industry experts and senior researchers in the field, have been released since March. All warn of the severe risks of AI, but none has led to legislative or other action so far.

"Part of [calling for a six-month pause in] the letter was to get attention to the issue. We sort of knew it would be almost impossible to really enforce a pause," Selman said. "The cat is out of the bag."

'THE STAKES HAVE NEVER BEEN HIGHER'

Just as AI could improve work productivity but may also cause job displacement that leads to worldwide socioeconomic crises, both China's top-down and the U.S.'s bottom-up approaches to governing the technology have their pros and cons.

"The top-down approach China uses actually has huge merits, in my opinion," Kedrosky said. "When the bottom-up approach is too slow, it seems irresponsible and even immature and childish to wait for things to happen. You see some rumblings in the EU that are also top-down."

The European Union is considering a bill called the EU AI Act. The bill, which has been under discussion for two years but is yet to be signed into law, proposes to assess AI tools by risk level. It could also require generative AI companies to reveal which copyrighted material had been used in model training, which some experts say is nearly impossible given the sheer amount of data used in large language models.

"The U.S. is at a much less advanced stage in the development of regulations. And it will be less strict [than Europe]," said Remi Bourgeot, an economist at The Conference Board and an associate fellow at the French Institute for International and Strategic Affairs. "Europe is worried about the risks and really wants to keep control over the evolution of models."

Bourgeot said that while "everybody's worried" about AI, Europe is preoccupied with consumers, whereas the U.S. is concerned about the tech giants on its territory.

At a conference in Los Angeles in May, Kai-Fu Lee, chairman and CEO of Sinovation Ventures and one of the most renowned AI experts in China, warned that overregulation could stifle innovation and tech development.

"For example, EU [regulations] have various parts that require a human in the loop, which just makes AI nonworkable. So you need to avoid overregulation ... that [would make] technology useless," Lee said at the panel.

While the top-down approach gave China a head start in AI regulation, exactly how effective the law will be remains a question.

"China, they can regulate and they will regulate," said Yan Luo, a partner at law firm Covington & Burling who specializes in Chinese cybersecurity and data privacy rules. "So this is probably going to be the first rule around the world that is actually taking into force about generative AI."

According to Luo, the draft proposes filing requirements for generative AI products and punishments such as fines for companies that violate the rules, but does not specify how the requirements will be enforced or how the AI models will be reviewed.

"Nobody knows how effective the law will be," Luo said.

While the draft says the state supports "international cooperation in foundational technologies such as AI algorithms and frameworks," Luo does not expect China to sign onto a global AI legal framework anytime soon.

"At least for China," Luo said, "I can see they want something very specific. But if you go international, and go high level, that means a lot of principles everyone needs to agree on, which is different from what Chinese regulators want."

The West could be equally uninterested in including China in an international AI regulation bloc.

"I think there are pieces of the Chinese model that are not applicable in the U.S.," Toner at Georgetown University said. "You can't mandate what kinds of information the systems are going to provide or what kinds of political views they're going to represent."

Eric Schmidt, former CEO and chairman of Google, discussed the matter at the Milken Institute Global Conference in May. "China's AI regulation is about not putting out anything that damages the state," he said. "I don't think it will work in a democracy. Thank God."

Despite the differences in values and an ongoing AI race between the U.S. and China, some believe the countries can and even must agree on AI governance to avoid catastrophe.

"This is a global technology with global impacts," Altman said at the June AI conference in Beijing. "The cost of accidents from reckless development and deployments -- that affects us all."

"I appreciate that from time to time, great powers may have their share of differences. This is true today, as it has been before," Altman said. "But even during the most difficult times, great powers have found ways to cooperate on what matters most.

"With the emergence of increasingly powerful AI systems, the stakes for global cooperation have never been higher."

Comparing an international AI regulation to the Treaty on the Non-Proliferation of Nuclear Weapons signed after World War II, Selman said people need to see what the technology can do and how destructive it can be without guardrails, and leaders of nations need to realize the danger AI poses to their own countries.

Even if the U.S. or China can win the AI race, he wondered, "Is anybody actually going to benefit from this?"

ChatGPT's advice on AI regulation (asked on June 12, 2023)

Question: How should governments regulate artificial intelligence?

Response: Regulating artificial intelligence (AI) is a complex and multifaceted issue that requires careful consideration. While there is no one-size-fits-all approach, here are some key considerations and approaches that governments can take to regulate AI:
1. Ethical frameworks
2. Sector-specific regulations
3. Data privacy and security
4. Transparency and explainability
5. Algorithmic bias and fairness
6. Safety and reliability
7. International cooperation
8. Continuous adaptation
9. Public-private collaboration
It's important to note that the specific regulations will vary based on each country's legal and cultural context. Striking the right balance between fostering innovation and addressing potential risks is crucial when regulating AI to ensure its responsible and beneficial use.
(Full text response has been abbreviated.)

Additional reporting by Mailys Pene-Lassus in Paris.
