Ask the algorithm

Human wealth advisers are going out of fashion

AS PROBLEMS GO, the suspicion that you are being overcharged by a private wealth
manager is one of the better ones to have in life. But even millionaires who are regularly
invited out to lunch by their banker tire of the 1-3% annual fee they have to cough up for his
investment advice. Many mere submillionaires may well be paying similar rates for an asset-
management professional to administer their pension pot, often without being aware of it.
Could a computer not do an equally good job dishing out standardised guidance on how much
they should invest respectively in shares, bonds and other assets?

A raft of “automated wealth managers” is now available, on the premise that algorithms can
offer sound financial advice for a small fraction of the price of a real-life adviser (see table).
With names that suggest a mix of blue-blooded discretion and startup ebullience—
Wealthfront, Betterment, Personal Capital, FutureAdvisor—they are growing at a rapid clip.
Most are grudgingly starting to accept the tag of “robo-adviser”.

The platforms work by asking customers a few questions about who they are and what they
are saving for. Applying textbook techniques for building up a balanced portfolio—more
stable bonds for someone about to retire, more volatile equities for a younger investor, and so
on—the algorithm suggests a mix of assets to invest in. Nearly all plump for around a dozen
index funds which cheaply track major bond or stock indices such as the S&P 500. They keep
clear of mutual funds, let alone individual company shares. Testing the various algorithms,
your risk-averse, youngish correspondent was steered towards an apparently sensible blend of
low-fee funds to help his meagre retirement pot grow.
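
To make the mechanics concrete, here is a minimal sketch of the questionnaire-to-portfolio step, assuming a simple age-and-risk rule of thumb; the rule, fund categories and numbers are illustrative, not any robo-adviser's actual model.

```python
# Toy sketch of a robo-adviser's questionnaire-to-portfolio step.
# The rule of thumb (equity share = 110 minus age, scaled by a 0-1 risk
# score from the onboarding questionnaire) is an illustrative assumption.

def suggest_portfolio(age: int, risk_tolerance: float) -> dict:
    """Return target weights for a simple three-asset index-fund mix."""
    equity = max(0.0, min(1.0, (110 - age) / 100 * risk_tolerance))
    bonds = (1.0 - equity) * 0.8          # most of the remainder in bond index funds
    cash = 1.0 - equity - bonds           # small cash buffer
    return {"stock_index_funds": round(equity, 2),
            "bond_index_funds": round(bonds, 2),
            "cash": round(cash, 2)}

# A risk-averse, youngish saver, like the correspondent in the article:
print(suggest_portfolio(age=30, risk_tolerance=0.5))
# {'stock_index_funds': 0.4, 'bond_index_funds': 0.48, 'cash': 0.12}
```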

This sort of insight used to be guarded jealously by financial advisers, but now you can get it
from the robo-advisers without so much as providing an e-mail address. The hope is that all but
the most penny-pinching savers will then go on to purchase the mix of funds through the service,
at an annual cost starting at around 0.25% of the assets invested. (Investors also pay the fees of
the funds they buy, which adds another 0.15-0.30%.) Automated services offering more human
involvement typically charge closer to 1% a year. Most have much lower minimum investment
limits than their traditional rivals.

A major selling point for robo-advisers is that they promise they will not make any money
from their customers other than through the annual fee. That is refreshing in an industry rife
with potential conflicts of interest. Banks, for instance, often recommend that their clients
invest in funds run by their asset-management subsidiaries. Most of the newcomers offer
automatic rebalancing of portfolios, so an investor’s exposure to stocks or bonds stays much
the same even as prices fluctuate. Many tout their “tax-loss-harvesting” capabilities.
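
A hedged sketch of what that automatic rebalancing might look like, assuming a simple drift threshold; the threshold, amounts and fund names are invented for illustration.

```python
# Minimal sketch of automatic rebalancing: when market moves push holdings
# away from their target weights, compute the trades that restore the mix.

def rebalance(holdings: dict, targets: dict, drift_threshold: float = 0.05) -> dict:
    """Return the dollar trade (positive = buy) per asset needed to restore targets,
    triggered only if some asset has drifted more than drift_threshold."""
    total = sum(holdings.values())
    weights = {k: v / total for k, v in holdings.items()}
    drifted = any(abs(weights[k] - targets[k]) > drift_threshold for k in targets)
    if not drifted:
        return {k: 0.0 for k in targets}
    return {k: targets[k] * total - holdings[k] for k in targets}

holdings = {"stock_index_funds": 70_000, "bond_index_funds": 30_000}
targets = {"stock_index_funds": 0.60, "bond_index_funds": 0.40}
print(rebalance(holdings, targets))
# {'stock_index_funds': -10000.0, 'bond_index_funds': 10000.0}
```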

Small fortunes

The transparent fee structure appeals to sceptical younger investors, says Adam Nash,
Wealthfront’s boss. Around 60% of its clients are under 35, many of them with starter
fortunes from Silicon Valley, where the company is based. The average account size is a touch
under $100,000, an amount that would be uneconomic for a Merrill Lynch or Morgan Stanley
broker to handle.

Mr Nash, a veteran of Apple and LinkedIn rather than Wall Street, compares the current
growth in robo-advisers to the rise of Vanguard, which in the mid-1970s pioneered low-cost
index funds as competition to pricey mutual funds. Charles Schwab sprang up at the same
time to undercut large banks’ high-margin brokerages. What those newcomers were to the
baby-boomer generation when it first started thinking about saving for retirement, Wealthfront
is to the tech-savvy millennials at the same stage in their lives, he says.

Regulation has, if anything, helped the robo-advisers get off the ground. They emphasise that
client assets are held by third-party depositary banks, still perceived as safe by the public. If
one of them were to go out of business, investors would not lose any money. All are overseen
by the same watchdogs as the incumbent banks they are taking on.

The robo-advisers are doubling their assets under management every few months, but their
combined assets still run to less than $20 billion, against $17 trillion for traditional managers.
Several banks manage over $1 trillion each. The robo-newcomers are nowhere near big
enough for sustained profitability, says Sean Park of Anthemis, an investment firm that has
backed Betterment. “To be successful [a firm] needs to manage tens of billions; to be really
successful they need to manage hundreds of billions.” In the meantime, they are living off the
largesse of venture capitalists, who poured nearly $300m into various robo-advisers last year.

If they are to be successful in the longer term, they will have to persuade today’s 20-
somethings to remain loyal to automated services when they become wealthier 40-somethings.
Traditional investment advisers think they can win over older customers by offering them
services such as inheritance planning. But just in case, the incumbents are working with the
robo-insurgents.

Schroders, a large European asset manager, has backed Nutmeg, Britain’s largest newcomer.
Vanguard, the group that puts together the low-fee funds that most robo-advisers recommend,
is launching its own low-cost advisory service. JPMorgan Chase and Goldman Sachs have
backed Motif, a startup that builds baskets of stocks based on investment themes. Charles
Schwab, now a wealth-management giant with $2.5 trillion under management, in March
rolled out its own automated wealth service, targeting people with as little as $5,000 in
savings. It charges no fees upfront but guides clients towards some of its own investment
products—a breach of the unwritten robo-advisory code.

Schwab’s arrival was discreetly celebrated as a validation of the automated advisory model. A
truce of sorts seems to be in the offing. Betterment now offers a “white-label” version of its
platform, so that human wealth advisers can pass off the computers’ diligence as their own.
Fidelity, a giant financial-services firm, is among those trialling the service. Human-based
advisory services point out they have lots of clever computer wizards working for them.
Robo-advisers, for their part, boast about the pioneering investment thinkers they employ,
programming the computers to recommend the right products.

https://www.economist.com/special-report/2015/05/07/ask-the-algorithm

The bigger-is-better approach to AI is running out of road


If AI is to keep getting better, it will have to do more with less

When it comes to “large language models” (llms) such as gpt—which powers Chatgpt, a
popular chatbot made by Openai, an American research lab—the clue is in the name.
Modern ai systems are powered by vast artificial neural networks, bits of software modelled,
very loosely, on biological brains. gpt-3, an llm released in 2020, was a behemoth. It had
175bn “parameters”, as the simulated connections between those neurons are called. It was
trained by having thousands of gpus (specialised chips that excel at ai work) crunch through
hundreds of billions of words of text over the course of several weeks. All that is thought to
have cost at least $4.6m.

But the most consistent result from modern ai research is that, while big is good,
bigger is better. Models have therefore been
growing at a blistering pace. gpt-4, released in
March, is thought to have around 1trn
parameters—nearly six times as many as its
predecessor. Sam Altman, the firm’s boss, put
its development costs at more than $100m.
Similar trends exist across the industry.
Epoch ai, a research firm, estimated in 2022
that the computing power necessary to train a
cutting-edge model was doubling every six to
ten months (see chart).

This gigantism is becoming a problem. If
Epoch ai’s ten-monthly doubling figure is
right, then training costs could exceed a billion
dollars by 2026—assuming, that is, models do not run out of data first. An analysis published
in October 2022 forecast that the stock of high-quality text for training may well be exhausted
around the same time. And even once the training is complete, actually using the resulting
model can be expensive as well. The bigger the model, the more it costs to run. Earlier this
year Morgan Stanley, a bank, guessed that, were half of Google’s searches to be handled by a
current gpt-style program, it could cost the firm an additional $6bn a year. As the models get
bigger, that number will probably rise.
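
As a rough check on that extrapolation, the arithmetic below starts from the roughly $100m cited above for gpt-4 in 2023 and assumes the ten-monthly doubling holds over a three-year horizon; both figures come from the text, the rest is a simplification.

```python
# Back-of-the-envelope version of the extrapolation in the text: start from
# the ~$100m gpt-4 figure for 2023 and double every ten months until 2026.
cost_2023_usd = 100e6
months_elapsed = 36                      # 2023 -> 2026
doublings = months_elapsed / 10
cost_2026_usd = cost_2023_usd * 2 ** doublings
print(f"${cost_2026_usd / 1e9:.1f}bn")   # about $1.2bn, past the billion-dollar mark
```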
Many in the field therefore think the “bigger is better” approach is running out of road.
If ai models are to carry on improving—never mind fulfilling the ai-related dreams currently
sweeping the tech industry—their creators will need to work out how to get more performance
out of fewer resources. As Mr Altman put it in April, reflecting on the history of giant-
sized ai: “I think we’re at the end of an era.” 

Quantitative tightening

Instead, researchers are beginning to turn their attention to making their models more
efficient, rather than simply bigger. One approach is to make trade-offs, cutting the number of
parameters but training models with more data. In 2022 researchers at DeepMind, a division
of Google, trained Chinchilla, an llm with 70bn parameters, on a corpus of 1.4trn words. The
model outperforms gpt-3, which has 175bn parameters trained on 300bn words. Feeding a
smaller llm more data means it takes longer to train. But the result is a smaller model that is
faster and cheaper to use. 
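
The trade-off can be made concrete with the widely used rule of thumb that training takes roughly six FLOPs per parameter per token and inference roughly two FLOPs per parameter per generated token; the sketch below applies it to the figures quoted above. It is an approximation, not DeepMind's own accounting.

```python
# Rough FLOP arithmetic behind the Chinchilla trade-off described above.
def train_flops(params, tokens):
    return 6 * params * tokens           # common rule of thumb for training cost

def infer_flops_per_token(params):
    return 2 * params                    # per-token cost of using the model

models = {"gpt-3": {"params": 175e9, "tokens": 300e9},
          "Chinchilla": {"params": 70e9, "tokens": 1.4e12}}

for name, m in models.items():
    print(name,
          f"train ~{train_flops(m['params'], m['tokens']):.2e} FLOPs,",
          f"inference ~{infer_flops_per_token(m['params']):.1e} FLOPs/token")
# Chinchilla spends more compute on training (longer to train) but, with 2.5x
# fewer parameters, is markedly cheaper every time it is used.
```
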
Another option is to make the maths fuzzier. Tracking fewer decimal places for each number
in the model—rounding them off, in other words—can cut hardware requirements drastically.
In March researchers at the Institute of Science and Technology in Austria showed that
rounding could squash the amount of memory consumed by a model similar to gpt-3, allowing
the model to run on one high-end gpu instead of five, and with only “negligible accuracy
degradation”.
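
A minimal illustration of the rounding idea, quantising weights to 8-bit integers; real systems, including the one described, use more elaborate schemes, so treat this as a toy.

```python
# Store each weight with fewer bits (symmetric 8-bit integers) and
# dequantise on the fly; memory per value drops from 4 bytes to 1.
import numpy as np

def quantize_int8(weights: np.ndarray):
    scale = np.abs(weights).max() / 127.0          # one scale for the whole tensor
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(1000).astype(np.float32)
q, s = quantize_int8(w)
print("memory per value:", w.itemsize, "->", q.itemsize, "bytes")   # 4 -> 1
print("max rounding error:", np.abs(w - dequantize(q, s)).max())    # small
```
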
Some users fine-tune general-purpose llms to focus on a specific task such as generating legal
documents or detecting fake news. That is not as cumbersome as training an llm in the first
place, but can still be costly and slow. Fine-tuning llama, an open-source model with 65bn
parameters that was built by Meta, Facebook’s corporate parent, takes multiple gpus anywhere
from several hours to a few days.

Researchers at the University of Washington have invented a more efficient method that
allowed them to create a new model, Guanaco, from llama on a single gpu in a day without
sacrificing much, if any, performance. Part of the trick was to use a similar rounding
technique to the Austrians. But they also used a technique called “low-rank adaptation”, which
involves freezing a model’s existing parameters, then adding a new, smaller set of parameters
in between. The fine-tuning is done by altering only those new variables. This simplifies
things enough that even relatively feeble computers such as smartphones might be up to the
task. Allowing llms to live on a user’s device, rather than in the giant data centres they
currently inhabit, could allow for both greater personalisation and more privacy. 
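
A sketch of the low-rank-adaptation idea in generic PyTorch: the original weights are frozen and only a small pair of low-rank matrices is trained. It is illustrative, not the Washington team's code, and the layer sizes are arbitrary.

```python
# Low-rank adaptation: freeze the existing weight matrix and learn only a
# small low-rank correction added alongside it.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)       # freeze existing parameters
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        # frozen path plus the small trainable low-rank correction
        return self.base(x) + x @ self.A.T @ self.B.T

layer = LoRALinear(nn.Linear(512, 512))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable parameters: {trainable:,} of {total:,}")   # roughly 3%
```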

A team at Google, meanwhile, has come up with a different option for those who can get by
with smaller models. This approach focuses on extracting the specific knowledge required
from a big, general-purpose model into a smaller, specialised one. The big model acts as a
teacher, and the smaller as a student. The researchers ask the teacher to answer questions and
show how it comes to its conclusions. Both the answers and the teacher’s reasoning are used
to train the student model. The team was able to train a student model with just 770m
parameters, which outperformed its 540bn-parameter teacher on a specialised reasoning task.
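
The sketch below shows the classic form of this teacher-student training, in which the student matches the teacher's output distribution as well as the correct answers; the Google team's method additionally trains on the teacher's written-out reasoning, which is omitted here for brevity. Sizes and weightings are illustrative.

```python
# Toy distillation loss: blend a "soft" term (match the teacher's smoothed
# probabilities) with a "hard" term (still learn the correct answer).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 2.0, alpha: float = 0.5):
    soft = F.kl_div(F.log_softmax(student_logits / temperature, dim=-1),
                    F.softmax(teacher_logits / temperature, dim=-1),
                    reduction="batchmean") * temperature ** 2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student_logits = torch.randn(4, 10, requires_grad=True)   # small student
teacher_logits = torch.randn(4, 10)                       # frozen teacher outputs
labels = torch.randint(0, 10, (4,))
print(distillation_loss(student_logits, teacher_logits, labels))
```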

Rather than focus on what the models are doing, another approach is to change how they are
made. A great deal of ai programming is done in a language called Python. It is designed to be
easy to use, freeing coders from the need to think about exactly how their programs will
behave on the chips that run them. The price of abstracting such details away is slow code.
Paying more attention to these implementation details can bring big benefits. This is “a huge
part of the game at the moment”, says Thomas Wolf, chief science officer of Hugging Face,
an open-source ai company. 

Learn to code

In 2022, for instance, researchers at Stanford University published a modified version of the
“attention algorithm”, which allows llms to learn connections between words and ideas. The
idea was to modify the code to take account of what is happening on the chip that is running
it, and especially to keep track of when a given piece of information needs to be looked up or
stored. Their algorithm was able to speed up the training of gpt-2, an older large language
model, threefold. It also gave it the ability to respond to longer queries. 
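
Memory-aware attention of this kind is now available off the shelf: recent versions of PyTorch expose a fused scaled-dot-product attention that can dispatch to a FlashAttention-style kernel. The comparison below, with arbitrary shapes, shows it computes the same result as the naive version without materialising the full score matrix; it is an illustration, not the Stanford researchers' code.

```python
import torch
import torch.nn.functional as F

q = torch.randn(1, 8, 1024, 64)   # (batch, heads, sequence, head_dim)
k = torch.randn(1, 8, 1024, 64)
v = torch.randn(1, 8, 1024, 64)

# Naive attention materialises the full 1024x1024 score matrix in memory...
scores = (q @ k.transpose(-2, -1)) / (64 ** 0.5)
naive = torch.softmax(scores, dim=-1) @ v

# ...whereas the fused kernel computes the same result block by block,
# keeping track of what needs to be loaded or stored on-chip.
fused = F.scaled_dot_product_attention(q, k, v)

print(torch.allclose(naive, fused, atol=1e-4))   # True: same maths, less memory traffic
```
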
Sleeker code can also come from better tools. Earlier this year, Meta released an updated
version of PyTorch, an ai-programming framework. By allowing coders to think more about
how computations are arranged on the actual chip, it can double a model’s training speed by
adding just one line of code. Modular, a startup founded by former engineers at Apple and
Google, last month released a new ai-focused programming language called Mojo, which is
based on Python. It too gives coders control over all sorts of fine details that were previously
hidden. In some cases, code written in Mojo can run thousands of times faster than the same
code in Python.
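
In PyTorch 2.0 that single added line is the compiler entry point, torch.compile, which captures the model's computation graph and generates fused kernels for the target chip; the toy model below is purely illustrative.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10))

model = torch.compile(model)        # the single added line

x = torch.randn(64, 1024)
print(model(x).shape)               # torch.Size([64, 10]); first call triggers compilation
```
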
A final option is to improve the chips on which that code runs. gpus are only accidentally
good at running ai software—they were originally designed to process the fancy graphics in
modern video games. In particular, says a hardware researcher at Meta, gpus are imperfectly
designed for “inference” work (ie, actually running a model once it has been trained). Some
firms are therefore designing their own, more specialised hardware. Google already runs most
of its ai projects on its in-house “tpu” chips. Meta, with its mtias, and Amazon, with its
Inferentia chips, are pursuing a similar path.

That such big performance increases can be extracted from relatively simple changes like
rounding numbers or switching programming languages might seem surprising. But it reflects
the breakneck speed with which llms have been developed. For many years they were research
projects, and simply getting them to work well was more important than making them elegant.
Only recently have they graduated to commercial, mass-market products. Most experts think
there remains plenty of room for improvement. As Chris Manning, a computer scientist at
Stanford University, put it: “There’s absolutely no reason to believe…that this is the ultimate
neural architecture, and we will never find anything better.”

https://www.economist.com/science-and-technology/2023/06/21/the-bigger-is-better-approach-to-ai-is-running-out-of-road

Our early-adopters index examines how corporate America is deploying AI
Companies of all stripes are using the technology

Technology stocks are having a bumper year. Despite a recent wobble, the share price of the
Big Five—Alphabet, Amazon, Apple, Meta and Microsoft—has jumped by 60% since
January, when measured in an equally weighted basket. The price of shares in one big
chipmaker, Nvidia, has tripled and in another, amd, almost doubled. Their price-to-earnings
ratios (which measure how much the markets think a company is worth relative to its profits)
are ten times that of the median firm in the s&p 500.

The main reason for the surge is the promise of artificial intelligence (ai). Since the launch in
November of Chatgpt, an ai-powered chatbot, investors have grown ever more excited about a
new wave of technology that can create human-like content, from poems and video footage to
lines of code. This “generative ai” relies on large language models which are trained on big
chunks of the internet. Many think the technology could reshape whole industries, and have as
much impact on business and society as smartphones or cloud computing. Firms that can
make the best use of the technology, the thinking goes, will be able to expand profit margins
and gain market share.

Corporate bosses are at pains to demonstrate how they are adopting ai. On
April 4th Jamie Dimon, JPMorgan Chase’s
boss, said his bank had 600 machine-
learning engineers and had put ai to work
on more than 300 different internal
applications. David Ricks, the boss of Eli
Lilly, has said that the pharmaceutical
giant has more than 100 projects on the go
using ai.

Company case studies reveal only part of
the picture. To get a broader sense of
which companies and industries are
adopting ai, The Economist examined data
on all the firms in the s&p 500. We looked
at five measures: the share of issued
patents that mention ai; venture-capital
(vc) activity targeting ai firms; acquisitions of ai firms; job listings citing ai; and mentions of
the technology on earnings calls. Because other types of ai could bring benefits for business,
our analysis captures activity for all ai, not just the generative wave. The results show that
even beyond tech firms the interest in ai is growing fast. Moreover, clear leaders and laggards
are already emerging.

AI expertise already seems to be spreading (see chart).
About two-thirds of the firms in our universe have
placed a job ad mentioning ai skills in the past three
years, says PredictLeads, a research firm. Of those that
did, today 5.3% of their listed vacancies mention ai, up
from a three-year average of 2.5%. In some industries
the rise is more dramatic. In retail firms that share has
jumped from 3% to 11%, while among chipmakers that
proportion grew from 9% to 19%.

The number of ai-related patents being registered trended upwards between 2020 and 2022, according to
data provided by Amit Seru of Stanford University.
PitchBook, another research firm, concludes that in
2023 some 25% of venture deals by s&p 500 firms
involved ai startups, up from 19% in 2021. Global
Data, also a research firm, finds that about half the
firms scrutinised have talked about ai in earnings calls
since 2021, and that in the first quarter of this year the
number of times ai was mentioned in the earnings calls
of America Inc more than doubled compared with the
previous quarter. Roughly half have been granted a patent relating to the technology between
2020 and 2022.

The use of generative ai may eventually become even more common than other sorts of ai.
That is because it is good at lots of tasks essential to running a firm. A report by McKinsey, a
consultancy, argues that three-quarters of the expected value created by generative ai will
come in four business functions—research and development, software engineering, marketing
and customer service. To some extent, all these operations are at the core of most big
businesses. Moreover, any large company with internal databases used to guide employees
could find a use for an ai-powered chatbot. Morgan Stanley, a bank, is building an ai assistant
that will help its wealth managers find and summarise answers from a huge internal
database. slb, an oil-services company, has built a similar assistant to help service engineers.

While the adoption of ai is happening in many firms, some are more enthusiastic than others.
Ranking all the companies using each metric and then taking an average produces a simple
scoring system. Those at the top seem to be winning over investors. Since the start of the year,
the median share price of the top 100 has risen by 11%; for the lowest-scoring quintile, it has
not moved at all.
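
A sketch of that scoring method, using made-up numbers rather than The Economist's data: rank every firm on each metric, then average the ranks.

```python
# Average-rank scoring across several AI-adoption metrics (placeholder data).
import pandas as pd

firms = pd.DataFrame({
    "ai_patent_share": [0.12, 0.02, 0.00],
    "ai_job_listing_share": [0.33, 0.05, 0.01],
    "earnings_call_mentions": [190, 12, 0],
}, index=["ChipCo", "BankCo", "UtilityCo"])

# Higher values mean more AI activity, so rank descending and average across metrics.
scores = firms.rank(ascending=False).mean(axis=1).sort_values()
print(scores)    # lowest average rank = strongest AI adopter
```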

The top spots are unsurprisingly dominated by Silicon Valley. On a broad definition,
the s&p 500 contains 82 tech firms. Almost 50 of them make the top 100. Nvidia is the
highest-scoring firm. According to data from PredictLeads, over the past three years a third of
its job listings have mentioned ai. In the past year the firm has mentioned ai in its earnings
calls almost 200 times, more than any other company. Other high-ranking tech firms include
the cloud-computing giants—Alphabet (3rd), Microsoft (12th) and Amazon (34th). They sell
access to a range of ai tools, from services that help train sophisticated models to software that
allows the use of ai without having to write reams of code. 

Beyond tech, two types of firms seem to be adopting ai the quickest. One is data-intensive
industries, such as insurers, financial-services firms and pharmaceutical companies. They
account for about a quarter of our top 100. These firms tend to have lots of structured datasets,
such as loan books or patient files, which makes it easier to use ai, notes Ali Ghodsi of
Databricks, a database firm. Around a tenth of JPMorgan Chase’s current job listings
mention ai. The firm recently filed a patent for Indexgpt, an ai-infused chatbot that gives
investment advice. Health-care firms like Gilead Sciences and Moderna use ai to discover new
drugs. Others, such as Abbott and Align Technology, are building ai-powered medical
devices. America’s Food and Drug Administration approved 97 such machines last year, up
from 26 in 2017. 

A second group is industries that are already being disrupted by technology, including
carmaking, telecoms, media and retail. Thirteen firms from these industries make the high-
scoring 100, including Ford, General Motors and Tesla. The rise of electric vehicles and the
prospect of self-driving cars has encouraged vehicle manufacturers to invest in technology. In
March Ford established Latitude ai, a self-driving car subsidiary that might one day rival gm’s
Cruise. In April Elon Musk told analysts that Tesla was buying specialised ai chips and was
“very focused” on improving its ai capabilities as part of the firm’s push towards self-driving.

Retailers are using ai to bolster their core business. Nike, a sportswear giant, filed an
application for a patent in 2021 for a system that can generate three-dimensional computer
models of trainers. Christian Kleinerman of Snowflake, a database provider, notes that
retailers are also taking advantage of the growth of e-commerce by collecting more data on
customers. That allows more accurate targeting of marketing campaigns. Some may take
personalisation a step further. In 2021 Procter & Gamble, a consumer-goods giant, applied for
a patent for an ai-based system which analyses users’ skin and hair conditions based on
photos, and recommends products to treat them.

One source of variation in ai use across industries may be a result of the type of work
undertaken. A working paper led by Andrea Eisfeldt of the University of California looked at
how exposed firms are to ai. The researchers assessed which tasks took place in a firm and
how well Chatgpt could perform them. The most exposed were tech firms, largely
because ai chatbots are good at coding. Those industries least exposed, such as agriculture and
construction, tended to rely on manual labour.

Clear leaders and laggards are emerging within industries, too. About 70 firms in the s&p 500
show no sign on any of our metrics of focusing on ai. That includes firms in ai-heavy
industries, such as insurers. The mass of smaller firms not included in the s&p 500 may be
even less keen. One distinguishing factor within industries may be investment. For the top 100
firms in our ranking, the median r&d expenditure as a share of revenue was 11%. For those in
the lowest 100 it was zero.

Vlad Lukic of bcg, a consultancy, notes that there is even a lot of variation within companies.
He recalls visiting two divisions of the same medium-sized multinational. One had no
experience working with ai. The other was advanced; it had been using a pilot version of the
technology from Openai, the startup behind Chatgpt, for two years.

Among early adopters, many non-tech companies’ ai use is growing more sophisticated. Mr
Seru’s data reveal that about 80 non-tech firms have had ai-related patents issued which were
cited by another patent, suggesting that they have some technological value. Some 45 non-
tech companies in the s&p 500 have recently placed ads which mention model training,
including Boeing, United Health and State Street. That suggests they may be building their
own models rather than using off-the-shelf technology from the likes of Openai. The
advantage of this approach is that it can produce more-accurate ai, giving a greater edge over
rivals.

However, a shift to in-house training hints at one of the risks: security. In May Samsung
discovered that staff had uploaded sensitive code to Chatgpt. The concern is that this
information may be stored on external servers of the firms which run the models, such as
Microsoft and Alphabet. Now Samsung is said to be training its own models. The firm also
joined the growing list of companies that have banned or limited the use of Chatgpt, which
includes Apple and JPMorgan Chase.

Other risks abound. Model-makers, including Openai, are being sued for violating copyright
laws over their use of internet data to train their models. Some large corporations think that
they could be left liable if they use Openai’s technology. Moreover, models are prone to make
up information. In one incident a New York lawyer used Chatgpt to write a motion. The
chatbot included fictional case-law and the lawyer was fined by the court.

But all this must be weighed against the potential benefits, which could be vast. Waves of
technology frequently turn industries on their head. As generative ai diffuses into the
economy, it is not hard to imagine it doing the same thing. Mr Lukic says that the biggest risk
for companies may be falling behind. Judged by the scramble in America Inc for all things ai,
many bosses and investors would agree.

https://www.economist.com/business/2023/06/25/our-early-adopters-index-examines-how-corporate-america-is-deploying-ai

Henry Kissinger explains how to avoid world war three


America and China must learn to live together. They have less than ten years

In Beijing they have concluded that America will do anything to keep China down. In
Washington they are adamant that China is scheming to supplant the United States as the
world’s leading power. For a sobering analysis of this growing antagonism—and a plan to
prevent it causing a superpower war—visit the 33rd floor of an Art Deco building in midtown
Manhattan, the office of Henry Kissinger.

On May 27th Mr Kissinger will turn 100. Nobody alive has more experience of international
affairs, first as a scholar of 19th-century diplomacy, later as America’s national security
adviser and secretary of state, and for the past 46 years as a consultant and emissary to
monarchs, presidents and prime ministers. Mr Kissinger is worried. “Both sides have
convinced themselves that the other represents a strategic danger,” he says. “We are on the
path to great-power confrontation.”

At the end of April The Economist spoke to Mr Kissinger for over eight hours about how to
prevent the contest between China and America from descending into war. These days he is
stooped and walks with difficulty, but his mind is needle-sharp. As he contemplates his next
two books, on artificial intelligence (ai) and the nature of alliances, he remains more interested
in looking forward than raking over the past.

Mr Kissinger is alarmed by China’s and America’s intensifying competition for technological
and economic pre-eminence. Even as Russia tumbles into China’s orbit and war overshadows
Europe’s eastern flank, he fears that ai is about to supercharge the Sino-American rivalry.
Around the world, the balance of power and the technological basis of warfare are shifting so
fast and in so many ways that countries lack any settled principle on which they can establish
order. If they cannot find one, they may resort to force. “We’re in the classic pre-world war
one situation,” he says, “where neither side has much margin of political concession and in
which any disturbance of the equilibrium can lead to catastrophic consequences.”

Study war some more

Mr Kissinger is reviled by many as a warmonger for his part in the Vietnam war, but he
considers the avoidance of conflict between great powers as the focus of his life’s work. After
witnessing the carnage caused by Nazi Germany and suffering the murder of 13 close relatives
in the Holocaust, he became convinced that the only way to prevent ruinous conflict is hard-
headed diplomacy, ideally fortified by shared values. “This is the problem that has to be
solved,” he says. “And I believe I’ve spent my life trying to deal with it.” In his view, the fate
of humanity depends on whether America and China can get along. He believes the rapid
progress of ai, in particular, leaves them only five to ten years to find a way.

Mr Kissinger has some opening advice to aspiring leaders: “Identify where you are.
Pitilessly.” In that spirit, the starting-point for avoiding war is to analyse China’s growing
restlessness. Despite a reputation for being conciliatory towards the government in Beijing, he
acknowledges that many Chinese thinkers believe America is on a downward slope and that,
“therefore, as a result of an historic evolution, they will eventually supplant us.”

He believes that China’s leadership resents Western policymakers’ talk of a global rules-based
order, when what they really mean is America’s rules and America’s order. China’s rulers are
insulted by what they see as the condescending bargain offered by the West, of granting China
privileges if it behaves (they surely think the privileges should be theirs by right, as a rising
power). Indeed, some in China suspect that America will never treat it as an equal and that it’s
foolish to imagine it might.

However, Mr Kissinger also warns against misinterpreting China’s ambitions. In Washington,
“They say China wants world domination…The answer is that they [in China] want to be
powerful,” he says. “They’re not heading for world domination in a Hitlerian sense,” he says.
“That is not how they think or have ever thought of world order.”

In Nazi Germany war was inevitable because Adolf Hitler needed it, Mr Kissinger says, but
China is different. He has met many Chinese leaders, starting with Mao Zedong. He did not
doubt their ideological commitment, but this has always been welded onto a keen sense of
their country’s interests and capabilities.

Mr Kissinger sees the Chinese system as more Confucian than Marxist. That teaches Chinese
leaders to attain the maximum strength of which their country is capable and to seek to be
respected for their accomplishments. Chinese leaders want to be recognised as the
international system’s final judges of their own interests. “If they achieved superiority that can
genuinely be used, would they drive it to the point of imposing Chinese culture?” he asks. “I
don’t know. My instinct is No…[But] I believe it is in our capacity to prevent that situation
from arising by a combination of diplomacy and force.”

One natural American response to the challenge of China’s ambition is to probe it, as a way to
identify how to sustain the equilibrium between the two powers. Another is to establish a
permanent dialogue between China and America. China “is trying to play a global role. We
have to assess at each point if the conceptions of a strategic role are compatible.” If they are
not, then the question of force will arise. “Is it possible for China and the United States to
coexist without the threat of all-out war with each other? I thought and still think that it [is].”
But he acknowledges success is not guaranteed. “It may fail,” he says. “And therefore, we
have to be militarily strong enough to sustain the failure.”

The urgent test is how China and America behave over Taiwan. Mr Kissinger recalls how, on
Richard Nixon’s first visit to China in 1972, only Mao had the authority to negotiate over the
island. “Whenever Nixon raised a concrete subject, Mao said, ‘I’m a philosopher. I don’t deal
with these subjects. Let Zhou [Enlai] and Kissinger discuss this.’…But when it came to
Taiwan, he was very explicit. He said, ‘They are a bunch of counter-revolutionaries. We don’t
need them now. We can wait 100 years. Someday we will ask for them. But it’s a long distance
away.’”

Mr Kissinger believes that the understanding forged between Nixon and Mao was overturned
after only 50 of those 100 years by Donald Trump. He wanted to inflate his tough image by
wringing concessions out of China over trade. In policy the Biden administration has followed
Mr Trump’s lead, but with liberal rhetoric.

Mr Kissinger would not have chosen this path with respect to Taiwan, because a Ukrainian-
style war there would destroy the island and devastate the world economy. War could also set
back China domestically, and its leaders’ greatest fear remains upheaval at home.

“It is not a simple matter for the United States to abandon Taiwan without undermining
its position elsewhere”

The fear of war creates grounds for hope. The trouble is that neither side has much room to
make concessions. Every Chinese leader has asserted his country’s connection to Taiwan. At
the same time, however, “the way things have evolved now, it is not a simple matter for the
United States to abandon Taiwan without undermining its position elsewhere.”

Mr Kissinger’s way out of this impasse draws on his experience in office. He would start by
lowering the temperature, and then gradually build confidence and a working relationship.
Rather than listing all their grievances, the American president would say to his Chinese
counterpart, “Mr President, the two greatest dangers to peace right now are us two. In the
sense that we have the capacity to destroy humanity.” China and America, without formally
announcing anything, would aim to practise restraint.

Never a fan of policymaking bureaucracies, Mr Kissinger would like to see a small group of
advisers, with easy access to each other, working together tacitly. Neither side would
fundamentally change its position on Taiwan, but America would take care over how it
deploys its forces and try not to feed the suspicion that it supports the island’s independence.

Mr Kissinger’s second piece of advice to aspiring leaders is: “Define objectives that can enlist
people. Find means, describable means, of achieving these objectives.” Taiwan would be just
the first of several areas where the superpowers could find common ground and so foster
global stability.

In a recent speech Janet Yellen, America’s treasury secretary, suggested that these should
include climate change and the economy. Mr Kissinger is sceptical about both. Although he is
“all for” action on the climate, he doubts it can do much to create confidence or help establish
a balance between the two superpowers. On the economy, the danger is that the trade agenda
is hijacked by hawks who are unwilling to give China any room to develop at all.

That all-or-nothing attitude is a threat to the broader search for detente. If America wants to
find a way to live with China, it should not be aiming for regime change. Mr Kissinger draws
on a theme present in his thought from the very beginning. “In any diplomacy of stability,
there has to be some element of the 19th-century world,” he says. “And the 19th-century
world was based on the proposition that the existence of the states contesting it was not at
issue.”

Some Americans believe that a defeated China would become democratic and peaceful. Yet,
however much Mr Kissinger would prefer China to be a democracy, he sees no precedent for
that outcome. More likely, a collapse of the communist regime would lead to a civil war that
hardened into ideological conflict and only added to global instability. “It’s not in our interest
to drive China to dissolution,” he says.

Rather than digging in, America will have to acknowledge China has interests. A good
example is Ukraine.

China’s president, Xi Jinping, only recently contacted Volodymyr Zelensky, his Ukrainian
counterpart, for the first time since Russia invaded Ukraine in February last year. Many
observers have dismissed Mr Xi’s call as an empty gesture designed to placate Europeans,
who complain that China is too close to Russia. By contrast, Mr Kissinger sees it as a
declaration of serious intent that will complicate the diplomacy surrounding the war, but
which may also create precisely the sort of opportunity to build the superpowers’ mutual trust.

Mr Kissinger begins his analysis by condemning Russia’s president, Vladimir Putin. “It was
certainly a catastrophic mistake of judgment by Putin at the end,” he says. But the West is not
without blame. “I thought that the decision to…leave open the membership of Ukraine
in nato was very wrong.” That was destabilising, because dangling the promise
of nato protection without a plan to bring it about left Ukraine poorly defended even as it was
guaranteed to enrage not only Mr Putin, but also many of his compatriots.

The task now is to bring the war to an end, without setting the stage for the next round of
conflict. Mr Kissinger says that he wants Russia to give up as much as possible of the territory
that it conquered in 2014, but the reality is that in any ceasefire Russia is likely to keep
Sevastopol (the biggest city in Crimea and Russia’s main naval base on the Black Sea), at the
very least. Such a settlement, in which Russia loses some gains but retains others, could leave
both a dissatisfied Russia and a dissatisfied Ukraine.

In his view, that is a recipe for future confrontation. “What the Europeans are now saying is,
in my view, madly dangerous,” he says. “Because the Europeans are saying: ‘We don’t want
them in nato, because they’re too risky. And therefore, we’ll arm the hell out of them and give
them the most advanced weapons.’” His conclusion is stark: “We have now armed Ukraine to
a point where it will be the best-armed country and with the least strategically experienced
leadership in Europe.”

To establish a lasting peace in Europe requires the West to take two leaps of imagination. The
first is for Ukraine to join nato, as a means of restraining it, as well as protecting it. The
second is for Europe to engineer a rapprochement with Russia, as a way to create a stable
eastern border.

Plenty of Western countries would understandably balk at one or other of those aims. With
China involved, as an ally of Russia’s and an opponent of nato, the task will become even
harder. China has an overriding interest to see Russia emerge intact from the war in Ukraine.
Not only does Mr Xi have a “no-limits” partnership with Mr Putin to honour, but a collapse in
Moscow would trouble China by creating a power vacuum in Central Asia that risks being
filled by a “Syrian-type civil war”.

Following Mr Xi’s call to Mr Zelensky, Mr Kissinger believes that China may be positioning
itself to mediate between Russia and Ukraine. As one of the architects of the policy that pitted
America and China against the Soviet Union, he doubts that China and Russia can work
together well. True, they share a suspicion of the United States, but he also believes that they
have an instinctive distrust of one another. “I have never met a Russian leader who said
anything good about China,” he says. “And I’ve never met a Chinese leader who said anything
good about Russia.” They are not natural allies.

The Chinese have entered diplomacy over Ukraine as an expression of their national interest,
Mr Kissinger says. Although they refuse to countenance the destruction of Russia, they do
recognise that Ukraine should remain an independent country and they have cautioned against
the use of nuclear weapons. They may even accept Ukraine’s desire to join nato. “China does
this, in part, because they do not want to clash with the United States,” he says. “They are
creating their own world order, in so far as they can.”

The second area where China and America need to talk is ai. “We are at the very beginning of
a capability where machines could impose global pestilence or other pandemics,” he says,
“not just nuclear but any field of human destruction.”

He acknowledges that even experts in ai do not know what its powers will be (going by the
evidence of our discussions, transcribing a thick, gravelly German accent is still beyond its
capabilities). But Mr Kissinger believes that ai will become a key factor in security within five
years. He compares its disruptive potential to the invention of printing, which spread ideas
that played a part in causing the devastating wars of the 16th and 17th centuries.

“There are no limitations. Every adversary is 100% vulnerable…[We live] in a world of
unprecedented destructiveness”

“[We live] in a world of unprecedented destructiveness,” Mr Kissinger warns. Despite the
doctrine that a human should be in the loop, automatic and unstoppable weapons may be
created. “If you look at military history, you can say, it has never been possible to destroy all
your opponents, because of limitations of geography and of accuracy. [Now] there are no
limitations. Every adversary is 100% vulnerable.”

ai cannot be abolished. China and America will therefore need to harness its power militarily
to a degree, as a deterrent. But they can also limit the threat it poses, in the way that arms-
control talks limited the threat of nuclear weapons. “I think we have to begin exchanges on the
impact of technology on each other,” he says. “We have to take baby steps towards arms
control, in which each side presents the other with controllable material about capabilities.”
Indeed, he believes that the negotiations themselves could help build mutual trust and the
confidence that enables the superpowers to practise restraint. The secret is leaders strong and
wise enough to understand that ai must not be pushed to its limits. “And if you then rely
entirely on what you can achieve through power, you’re likely to destroy the world.”

Mr Kissinger’s third piece of advice for aspiring leaders is to “link all of these to your
domestic objectives, whatever they are.” For America, that involves learning how to be more
pragmatic, focusing on the qualities of leadership and, most of all, renewing the country’s
political culture.

Mr Kissinger’s model for pragmatic thinking is India. He recalls a function at which a former
senior Indian administrator explained that foreign policy should be based on non-permanent
alliances geared to the issues, rather than tying up a country in big multilateral structures.
Such a transactional approach will not come naturally to America. The theme running through
Mr Kissinger’s epic history of international relations, “Diplomacy”, is that the United States
insists on depicting all its main foreign interventions as expressions of its manifest destiny to
remake the world in its own image as a free, democratic, capitalist society.

The problem for Mr Kissinger is the corollary, which is that moral principles too often
override interests—even when they will not produce desirable change. He acknowledges that
human rights matter, but disagrees with putting them at the heart of your policy. The
difference is between imposing them, or saying that it will affect relations, but the decision is
theirs.

“We tried [imposing them] in Sudan,” he notes. “Look at Sudan now.” Indeed, the knee-jerk
insistence on doing the right thing can become an excuse for failing to think through the
consequences of policy, he says. The people who want to use power to change the world, Mr
Kissinger argues, are often idealists, even though realists are more typically seen as willing to
use force.

India is an essential counterweight to China’s growing power. Yet it also has a worsening
record of religious intolerance, judicial bias and a muzzled press. One implication—though
Mr Kissinger did not directly comment—is that India will therefore be a test of whether
America can be pragmatic. Japan will be another. Relations will be fraught if, as Mr Kissinger
predicts, Japan moves to secure nuclear weapons within five years. With one eye on the
diplomatic manoeuvres that more or less kept the peace in the 19th century, he looks to Britain
and France to help the United States think strategically about the balance of power in Asia.

Big-shoe-fillers wanted

Leadership will matter, too. Mr Kissinger has long been a believer in the power of individuals.
Franklin D. Roosevelt was far-sighted enough to prepare an isolationist America for what he
saw as an inevitable war against the Axis powers. Charles de Gaulle gave France a belief in
the future. John F. Kennedy inspired a generation. Otto von Bismarck engineered German
unification, and governed with dexterity and restraint—only for his country to succumb to
war-fever after he was ousted.

Mr Kissinger acknowledges that 24-hour news and social media make his style of diplomacy
harder. “I don’t think a president today could send an envoy with the powers that I had,” he
says. But he argues that to agonise about whether a way ahead is even possible would be a
mistake. “If you look at the leaders whom I’ve respected, they didn’t ask that question. They
asked, ‘Is it necessary?’”

He recalls the example of Winston Lord, a member of his staff in the Nixon administration.
“When we intervened in Cambodia, he wanted to quit. And I told him, ‘You can quit and
march around this place carrying a placard. Or you can help us solve the Vietnam war.’ And
he decided to stay… What we need [is] people who make that decision—that they’re living in
this time, and they want to do something about it, other than feel sorry for themselves.”

Leadership reflects a country’s political culture. Mr Kissinger, like many Republicans, worries
that American education dwells on America’s darkest moments. “In order to get a strategic
view you need faith in your country,” he says. The shared perception of America’s worth has
been lost.

He also complains that the media lack a sense of proportion and judgment. When he was in
office the press were hostile, but he still had a dialogue with them. “They drove me nuts,” he
says. “But that was part of the game…they weren’t unfair.” Today, in contrast, he says that
the media have no incentive to be reflective. “My theme is the need for balance and
moderation. Institutionalise that. That’s the aim.”

Worst of all, though, is politics itself. When Mr Kissinger came to Washington, politicians
from the two parties would routinely dine together. He was on friendly terms with George
McGovern, a Democratic presidential candidate. For a national security adviser from the other
side that would be unlikely today, he believes. Gerald Ford, who took over after Nixon
resigned, was the sort of person whose opponents could rely on him to act decently. Today,
any means are considered acceptable.

“I think Trump and now Biden have driven [animosity] over the top,” Mr Kissinger says. He
fears that a situation like Watergate could lead to violence and that America lacks leadership.
“I don’t think Biden can supply the inspiration and…I’m hoping that Republicans can come
up with somebody better,” he says. “It’s not a great moment in history,” he laments, “but the
alternative is total abdication.”

America desperately needs long-term strategic thinking, he believes. “That’s our big challenge
which we must solve. If we don’t, the predictions of failure will be proved true.”

If time is short and leadership lacking, where does that leave the prospects for China and the
United States finding a way to live together in peace?

“We all have to admit we’re in a new world,” Mr Kissinger says, “for whatever we do can go
wrong. And there is no guaranteed course.” Even so he professes to feel hope. “Look, my life
has been difficult, but it gives ground for optimism. And difficulty—it’s also a challenge. It
shouldn’t always be an obstacle.”

He stresses that humanity has taken enormous strides. True, that progress has often occurred
in the aftermath of terrible conflict—after the Thirty Years War, the Napoleonic wars and the
second world war, for example, but the rivalry between China and America could be different.
History suggests that, when two powers of this type encounter each other, the normal outcome
is military conflict. “But this is not a normal circumstance,” Mr Kissinger argues, “because of
mutually assured destruction and artificial intelligence.”

“I think it’s possible that you can create a world order on the basis of rules that Europe, China
and India could join, and that’s already a good slice of humanity. So if you look at the
practicality of it, it can end well—or at least it can end without catastrophe and we can make
progress.”

That is the task for the leaders of today’s superpowers. “Immanuel Kant said peace would
either occur through human understanding or some disaster,” Mr Kissinger explains. “He
thought that it would occur through reason, but he could not guarantee it. That is more or less
what I think.”

World leaders therefore bear a heavy responsibility. They require the realism to face up to the
dangers ahead, the vision to see that a solution lies in achieving a balance between their
countries’ forces, and the restraint to refrain from using their offensive powers to the
maximum. “It is an unprecedented challenge and great opportunity,” Mr Kissinger says.

The future of humanity depends on getting it right. Well into the fourth hour of the day’s
conversation, and just weeks before his birthday celebrations, Mr Kissinger adds with a
characteristic twinkle, “I won’t be around to see it either way.” 

https://www.economist.com/briefing/2023/05/17/henry-kissinger-explains-how-to-avoid-world-war-three

The bigger-is-better approach to AI is running out of road


If AI is to keep getting better, it will have to do more with less
When it comes to “large language models” (llms) such as gpt—which powers Chatgpt, a
popular chatbot made by Openai, an American research lab—the clue is in the name.
Modern ai systems are powered by vast artificial neural networks, bits of software modelled,
very loosely, on biological brains. gpt-3, an llm released in 2020, was a behemoth. It had
175bn “parameters”, as the simulated connections between those neurons are called. It was
trained by having thousands of gpus (specialised chips that excel at ai work) crunch through
hundreds of billions of words of text over the course of several weeks. All that is thought to
have cost at least $4.6m.

But the most consistent result from


modern ai research is that, while big is good,
bigger is better. Models have therefore been
growing at a blistering pace. gpt-4, released in
March, is thought to have around 1trn
parameters—nearly six times as many as its
predecessor. Sam Altman, the firm’s boss, put
its development costs at more than $100m.
Similar trends exist across the industry.
Epoch ai, a research firm, estimated in 2022
that the computing power necessary to train a
cutting-edge model was doubling every six to
ten months (see chart). 
This gigantism is becoming a problem. If
Epoch ai’s ten-monthly doubling figure is
right, then training costs could exceed a billion
dollars by 2026—assuming, that is, models do not run out of data first. An analysis published
in October 2022 forecast that the stock of high-quality text for training may well be exhausted
around the same time. And even once the training is complete, actually using the resulting
model can be expensive as well. The bigger the model, the more it costs to run. Earlier this
year Morgan Stanley, a bank, guessed that, were half of Google’s searches to be handled by a
current gpt-style program, it could cost the firm an additional $6bn a year. As the models get
bigger, that number will probably rise.
Many in the field therefore think the “bigger is better” approach is running out of road.
If ai models are to carry on improving—never mind fulfilling the ai-related dreams currently
sweeping the tech industry—their creators will need to work out how to get more performance
out of fewer resources. As Mr Altman put it in April, reflecting on the history of giant-
sized ai: “I think we’re at the end of an era.” 
Quantitative tightening
Instead, researchers are beginning to turn their attention to making their models more
efficient, rather than simply bigger. One approach is to make trade-offs, cutting the number of
parameters but training models with more data. In 2022 researchers at DeepMind, a division
of Google, trained Chinchilla, an llm with 70bn parameters, on a corpus of 1.4trn words. The
model outperforms gpt-3, which has 175bn parameters trained on 300bn words. Feeding a
smaller llm more data means it takes longer to train. But the result is a smaller model that is
faster and cheaper to use. 
Another option is to make the maths fuzzier. Tracking fewer decimal places for each number
in the model—rounding them off, in other words—can cut hardware requirements drastically.
In March researchers at the Institute of Science and Technology in Austria showed that
rounding could squash the amount of memory consumed by a model similar to gpt-3, allowing
the model to run on one high-end gpu instead of five, and with only “negligible accuracy
degradation”.
Some users fine-tune general-purpose llms to focus on a specific task such as generating legal
documents or detecting fake news. That is not as cumbersome as training an llm in the first
place, but can still be costly and slow. Fine-tuning llama, an open-source model with 65bn
parameters that was built by Meta, Facebook’s corporate parent, takes multiple gpus anywhere
from several hours to a few days. 
Researchers at the University of Washington have invented a more efficient method that
allowed them to create a new model, Guanaco, from llama on a single gpu in a day without
sacrificing much, if any, performance. Part of the trick was to use a similar rounding
technique to the Austrians. But they also used a technique called “low-rank adaptation”, which
involves freezing a model’s existing parameters, then adding a new, smaller set of parameters
in between. The fine-tuning is done by altering only those new variables. This simplifies
things enough that even relatively feeble computers such as smartphones might be up to the
task. Allowing llms to live on a user’s device, rather than in the giant data centres they
currently inhabit, could allow for both greater personalisation and more privacy. 

A team at Google, meanwhile, has come up with a different option for those who can get by
with smaller models. This approach focuses on extracting the specific knowledge required
from a big, general-purpose model into a smaller, specialised one. The big model acts as a
teacher, and the smaller as a student. The researchers ask the teacher to answer questions and
show how it comes to its conclusions. Both the answers and the teacher’s reasoning are used
to train the student model. The team was able to train a student model with just 770m
parameters, which outperformed its 540bn-parameter teacher on a specialised reasoning task.

Rather than focus on what the models are doing, another approach is to change how they are
made. A great deal of ai programming is done in a language called Python. It is designed to be
easy to use, freeing coders from the need to think about exactly how their programs will
behave on the chips that run them. The price of abstracting such details away is slow code.
Paying more attention to these implementation details can bring big benefits. This is “a huge
part of the game at the moment”, says Thomas Wolf, chief science officer of Hugging Face,
an open-source ai company. 
Learn to code
In 2022, for instance, researchers at Stanford University published a modified version of the
“attention algorithm”, which allows llms to learn connections between words and ideas. The
idea was to modify the code to take account of what is happening on the chip that is running
it, and especially to keep track of when a given piece of information needs to be looked up or
stored. Their algorithm was able to speed up the training of gpt-2, an older large language
model, threefold. It also gave it the ability to respond to longer queries. 
Sleeker code can also come from better tools. Earlier this year, Meta released an updated
version of PyTorch, an ai-programming framework. By allowing coders to think more about
how computations are arranged on the actual chip, it can double a model’s training speed by
adding just one line of code. Modular, a startup founded by former engineers at Apple and
Google, last month released a new ai-focused programming language called Mojo, which is
based on Python. It too gives coders control over all sorts of fine details that were previously
hidden. In some cases, code written in Mojo can run thousands of times faster than the same
code in Python.
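
The one-line change described above most likely refers to torch.compile, the headline feature of the PyTorch 2.0 release. A minimal sketch of how it is used, with a toy model standing in for a real one:

    import torch

    model = torch.nn.Sequential(
        torch.nn.Linear(1024, 4096),
        torch.nn.GELU(),
        torch.nn.Linear(4096, 1024),
    )

    # the added line: PyTorch traces the model and generates fused kernels
    # arranged to suit the chip the code will actually run on
    model = torch.compile(model)

    x = torch.randn(8, 1024)
    y = model(x)   # the first call triggers compilation; later calls reuse the fast code

How large the speed-up is depends on the model and the hardware; the appeal is that nothing else in the training script has to change.
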
A final option is to improve the chips on which that code runs. gpus are only accidentally
good at running ai software—they were originally designed to process the fancy graphics in
modern video games. In particular, says a hardware researcher at Meta, gpus are imperfectly
designed for “inference” work (ie, actually running a model once it has been trained). Some
firms are therefore designing their own, more specialised hardware. Google already runs most
of its ai projects on its in-house “tpu” chips. Meta, with its mtias, and Amazon, with its
Inferentia chips, are pursuing a similar path. 
That such big performance increases can be extracted from relatively simple changes like
rounding numbers or switching programming languages might seem surprising. But it reflects
the breakneck speed with which llms have been developed. For many years they were research
projects, and simply getting them to work well was more important than making them elegant.
Only recently have they graduated to commercial, mass-market products. Most experts think
there remains plenty of room for improvement. As Chris Manning, a computer scientist at
Stanford University, put it: “There’s absolutely no reason to believe…that this is the ultimate
neural architecture, and we will never find anything better.”

https://www.economist.com/science-and-technology/2023/06/21/the-bigger-is-better-
approach-to-ai-is-running-out-of-road
Nine British Banks Sign Up to New AI Tool for Tackling Scams
TSB, Lloyds, Halifax, Natwest and Bank of Scotland use tool
TSB estimates system could save UK banks £100 million per year

Mastercard Inc. is selling a new artificial intelligence-powered tool that helps banks more
effectively spot if their customers are trying to send money to fraudsters. 
Nine of the UK’s biggest banks, including Lloyds Banking Group Plc, Natwest Group
Plc and Bank of Scotland Plc, have signed up to use the Consumer Fraud Risk system,
Mastercard told Bloomberg News. 

Trained on years of transaction data, the tool helps to predict whether someone is trying to
transfer funds to an account affiliated with “authorized push payment scams.” This type of
fraud involves tricking a victim into moving money to an account controlled by a fraudster posing as a legitimate payee, such as a family member, friend or business. 

The tool comes as UK banks prepare for new rules from the Payment Systems Regulator that
will require them to compensate customers affected by APP scams from 2024. Historically
banks haven’t been liable for this type of fraud, although some signed a voluntary agreement
to pay back victims.

Ajay Bhalla, president of cyber and intelligence at Mastercard, described APP scams as a
“huge problem” that banks have historically struggled to detect because victims’ accounts
aren’t compromised. Clients voluntarily make the transfer and so pass many of the security
checks used to identify other types of fraud, such as unauthorized payments, he said. 

In the UK, victims of APP scams lost £484.2 million ($616 million) in 2022, according
to research by banking industry association UK Finance. Losses to APP fraud across the UK,
US and India are expected to hit $5.25 billion by 2026, according to a report by ACI Worldwide
and GlobalData.

TSB Banking Group Plc, the first bank to implement the system four months ago, has seen a
20% increase in detection of this type of fraud, which the bank’s Head of Fraud Paul Davis
said was one of the biggest improvements of any individual fraud prevention project he’s
worked on. TSB estimates the tool could save UK banks about £100 million per year if it were rolled out across the industry.

“It’s a good example of the power of sharing data,” Davis said in an interview. “It’s the first
time we’ve been able to see both sides of the payment — sending and receiving.”

Consumer Fraud Risk works by assigning a risk score from 0 to 999 to any attempted bank
transfer within half a second — similar to a system it already uses to identify fraudulent credit
card payments. The bank can combine this risk score with its own analytics to create an
assessment of the transaction and, if necessary, block it before the money leaves the victim’s
account.
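
How a sending bank might fold such a score into its own checks can be sketched as follows (purely illustrative: the threshold, field names and rules are assumptions, not Mastercard’s or any bank’s actual logic):

    def assess_transfer(network_risk_score, payment, own_signals, block_threshold=900):
        """Combine the 0-999 network risk score with a bank's in-house analytics.

        `payment` and `own_signals` are illustrative dictionaries; a real system
        would draw on far richer data.
        """
        score = network_risk_score
        # examples of bank-side signals that might nudge the score upwards
        if own_signals.get("payee_account_age_days", 9999) < 30:
            score += 50                  # brand-new beneficiary account
        if payment["amount"] > own_signals.get("customer_typical_max", 0) * 5:
            score += 50                  # far larger than the customer's usual payments
        if score >= block_threshold:
            return "block"               # hold the payment before it leaves the account
        if score >= block_threshold - 200:
            return "warn"                # interrupt with a scam warning to the customer
        return "release"

    print(assess_transfer(870, {"amount": 4500},
                          {"payee_account_age_days": 12, "customer_typical_max": 400}))   # block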

TSB’s Davis said the system has been particularly good at detecting purchase scams, where
fraudsters posing as merchants trick people into paying for goods or services that are never
received, which represent about half of all APP fraud. It’s also helped reduce false positives –
genuine transactions flagged as potential fraud.
Mastercard is able to provide the risk score because it runs the infrastructure for real-time
electronic transfers of funds through its subsidiary Vocalink. For the past five years,
Mastercard has worked with UK banks to follow how scammers move their proceeds through
a sequence of “mule” accounts – often at several different banks – to obscure their activity.
This has allowed Mastercard to identify patterns of behavior and accounts associated with
scams that it’s used to train the Consumer Fraud Risk system.
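
The article does not spell out how those patterns are found, but the general idea of following money through a chain of accounts can be illustrated with a toy example (invented data and a crude “forwards most of what it receives” rule, not Vocalink’s method):

    from collections import defaultdict

    def trace_mule_chain(transfers, scam_payment, hops=4, forward_ratio=0.8):
        """Follow a flagged payment through accounts that quickly pass the money on.

        `transfers` is a list of (sender, receiver, amount) tuples; an account is
        treated as a possible mule if it forwards at least `forward_ratio` of the
        flagged amount onwards.
        """
        outgoing = defaultdict(list)
        for sender, receiver, amount in transfers:
            outgoing[sender].append((receiver, amount))

        chain, account, amount = [], scam_payment["receiver"], scam_payment["amount"]
        for _ in range(hops):
            chain.append(account)
            onward = [(r, a) for r, a in outgoing[account] if a >= forward_ratio * amount]
            if not onward:
                break
            account, amount = max(onward, key=lambda x: x[1])   # follow the largest onward transfer
        return chain

    transfers = [("mule1", "mule2", 4800), ("mule2", "mule3", 4700), ("mule2", "shop", 40)]
    print(trace_mule_chain(transfers, {"receiver": "mule1", "amount": 5000}))
    # ['mule1', 'mule2', 'mule3']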

The company, which charges banks a fee for the product based on transaction volume, plans to
roll out the tool globally. It’s in discussions with potential clients in markets with mature, real-
time payments systems and significant APP fraud, including the US, India and Australia.
https://www.bloomberg.com/news/articles/2023-07-05/mastercard-s-ai-tool-helps-nine-
british-banks-tackle-scams

AI is not yet killing jobs


White-collar workers are ever more numerous

After astonishing breakthroughs in artificial intelligence, many people worry that they will end
up on the economic scrapheap. Global Google searches for “is my job safe?” have doubled in
recent months, as people fear that they will be replaced with large language models (llms).
Some evidence suggests that widespread disruption is coming. In a recent paper Tyna
Eloundou of Openai and colleagues say that “around 80% of the us workforce could have at
least 10% of their work tasks affected by the introduction of llms”. Another paper suggests
that legal services, accountancy and travel agencies will face unprecedented upheaval.

Economists, however, tend to enjoy making predictions about automation more than they
enjoy testing them. In the early 2010s many of them loudly predicted that robots would kill
jobs by the millions, only to fall silent when employment rates across the rich world rose to
all-time highs. Few of the doom-mongers have a good explanation for why countries with the
highest rates of tech usage around the globe, such as Japan, Singapore and South Korea,
consistently have among the lowest rates of unemployment.

Here we introduce our first attempt at tracking ai’s impact on jobs. Using American data on
employment by occupation, we single out white-collar workers. These include people working
in everything from back-office support and financial operations to copy-writers. White-collar
roles are thought to be especially vulnerable to generative ai, which is becoming ever better at
logical reasoning and creativity.
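
The tracker itself is simple arithmetic; below is a sketch of the calculation on a toy occupation table (the figures are invented, whereas the real exercise uses official American employment statistics at a much finer grain):

    # employment by occupation, in thousands (illustrative figures only)
    employment = {
        "back-office support":     20_000,
        "financial operations":     9_000,
        "copy-writers":               200,
        "leisure and hospitality": 16_000,
        "construction":             8_000,
    }
    white_collar = {"back-office support", "financial operations", "copy-writers"}

    total = sum(employment.values())
    share = sum(v for k, v in employment.items() if k in white_collar) / total
    print(f"white-collar share of employment: {share:.1%}")   # the series tracked over time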

However, there is as yet little evidence of an ai hit to employment. In the spring of 2020 white-collar jobs rose as a share of the total, as many people in service occupations lost their jobs at the start of the covid-19 pandemic (see chart). The white-collar share is lower today, as leisure and hospitality have recovered. Yet in the past year the share of employment in professions supposedly at risk from generative ai has risen by half a percentage point.
It is, of course, early days. Few firms yet use generative-ai tools at scale, so the impact on jobs
could merely be delayed. Another possibility, however, is that these new technologies will end
up destroying only a small number of roles. While ai may be efficient at some tasks, it may be
less good at others, such as management and working out what others need. 
ai could even have a positive effect on jobs. If workers using it become more efficient, profits
at their company could rise, which would then allow bosses to ramp up hiring. A recent survey
by Experis, an it-recruitment firm, points to this possibility. More than half of Britain’s
employers expect ai technologies to have a positive impact on their headcount over the next
two years, it finds.

To see how it all shakes out, we will publish updates to this analysis every few months. But
for now, a jobs apocalypse seems a way off.

https://www.economist.com/finance-and-economics/2023/06/15/ai-is-not-yet-killing-jobs

AI Founders Vie for Big Wealth in Unicorn Frenzy


Entrepreneurs and veteran CEOs alike are getting rich as investors make big bets on
artificial intelligence.
When Mustafa Suleyman partnered with billionaire LinkedIn founder Reid
Hoffman last year on a startup called Inflection AI, he saw the potential in their
prized project: a chatbot meant to be an “emotional companion that is kind,
encouraging and rational.”

Now, so do deep-pocketed investors.

A $225 million funding round in May 2022, long before markets were whipped into a
frenzy over artificial intelligence, vaulted the firm past the $1 billion "unicorn"
threshold. Inflection AI declined to disclose its current valuation. But Suleyman says
it’s now worth billions — giving Suleyman a sizeable fortune in the hundreds of
millions.

Investor exuberance over AI’s potential, from infrastructure inspection to language translation and image recognition, has sent the wealth of up-and-comers like Suleyman skyrocketing, while also bolstering the fortunes of established billionaires in the space. The year’s biggest surge came from AI chip maker Nvidia Corp., whose founder Jensen Huang nearly tripled his net worth to $38.5 billion, according to the Bloomberg Billionaires Index. Oracle Corp.’s Larry Ellison became the world’s fourth-richest person while Google’s Sergey Brin and Larry Page added billions after announcing plans for an AI-powered chatbot as part of a revamped search engine.

May, with a record $12.8 billion of that coming from companies working on
generative AI, or algorithms that can be used to create new content from text to
videos. That’s nearly quintuple the amount from the same period last year.

“This is capitalism at its best. You want capital to chase opportunity and that drives
creativity and invention,” Suleyman said. But it’s also a world laden with risk, with
investors potentially pouring money into overhyped startups. “Of course, some people
are going to lose their shirts,” he said.

The current funding frenzy took off in January, when OpenAI, the company
founded by Sam Altman that created ChatGPT, set the record for AI fundraising with
a $10-billion raise from Microsoft Corp. at a $29-billion valuation.

One beneficiary: Anthropic, an AI safety and research firm co-founded by siblings and
former OpenAI executives Daniela and Dario Amodei. It raised $450 million at a $5
billion valuation, the biggest in AI since OpenAI, with backing from Google. The firm
has said it will use the funds to make a safer chatbot experience.

Based on the Amodei siblings’ minority stakes, the injection has likely boosted their
fortunes by hundreds of millions.
Boom and Bust
But while internet booms past created massive fortunes, they also have a history of
ending in major busts — what makes AI any different?

In 2017, Gregg Johnson, a former Salesforce executive, joined Invoca, a company that uses AI for conversation intelligence, allowing firms to better track metrics on sales and marketing. The company has come a long way from its humble beginnings
in 2008: It now has about $100 million in recurring revenue and 400 employees, says
Johnson, who is CEO. Last year, it raised $83 million for a $1.1 billion valuation.

While Invoca has a proven track record stretching back more than a decade, these
days “a lot of companies are getting insane amounts with just $3 to $5 million
revenue,” he said. Johnson and other industry leaders fear the banking community is
back to “throwing money at AI startups in a willy-nilly way” as they did in 2021,
potentially seeding the ground for a big selloff if the cash finds its way to overhyped
players, he said.

James Penny, the chief investment officer of TAM Asset Management and a veteran
investor who foresaw the headwinds now threatening the ESG movement, echoed
Johnson’s concerns. He said the current landscape reminds him of the early days of
the tech bubble that burst in 2000, wiping more than 70% off the Nasdaq.

While Johnson and Penny have history on their side, the reality is that building an AI startup is a years-long, capital-intensive venture. Founders need the money if they want
to have a real chance. 

In 2014, Abhinai Srivastava co-founded Mashgin, a company looking to swap out conventional checkout kiosks at stores with ones that would use artificial intelligence and computer vision to check prices — in essence, replacing barcodes.
A large American bank soon gave Srivastava his big break, agreeing to install the AI
checkout kiosks in its cafeterias. The bank eventually installed thousands of them in
its New York offices.

Mashgin has now expanded into stadiums, including Madison Square Garden and
Detroit’s Ford Field. Convenience stores such as Circle K and other markets also use
its AI kiosks. Last year, the group raised $62.5 million at a $1.5 billion valuation,
bringing Srivastava’s personal stake value to more than $200 million.

“We thought it would take three to six months, but it took five years for us to get our
first product going,” he said. Proving a product’s viability in the lab is one thing, but
“the real world is the place that is hard.”

Regulatory Risk
In the near term, regulation could slow down investors. 

Since founding OpenAI, Sam Altman has taken a lead role in the debate surrounding
AI regulation. OpenAI is working alongside a select group of companies, including
Anthropic and Google, to conduct an evaluation of AI systems for the White House.

Of course, having AI tycoons help write the rules of the road for their own industry
comes with drawbacks. OpenAI lobbied for significant elements of Europe’s AI Act to
be watered down, according to recent reporting by Time magazine.

In late May, Altman joined hundreds of other AI enthusiasts in signing a one-line statement released by the Center for AI Safety, a non-profit research group: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Hoffman did not sign the statement. But Suleyman did. 


Suleyman said he recognizes the potential pitfalls of AI, and believes the risks of the
technology necessitate tighter regulation — both from governments and from the
companies themselves.

“Some people are going to create AIs that act like humans and try to convince people
they’re human,” he said. “It will turbocharge the spread of manipulative persuasive
storytelling. We’ll have to take a much more aggressive approach to online
moderation and platform responsibility.”

https://www.bloomberg.com/news/articles/2023-06-23/ai-billionaires-founders-get-rich-as-
startups-reach-unicorn-status?srnd=premium-asia

Our early-adopters index examines how corporate America is deploying AI


Companies of all stripes are using the technology

Technology stocks are having a bumper year. Despite a recent wobble, the share price of the
Big Five—Alphabet, Amazon, Apple, Meta and Microsoft—has jumped by 60% since
January, when measured in an equally weighted basket. The price of shares in one big
chipmaker, Nvidia, has tripled and in another, amd, almost doubled. Their price-to-earnings
ratios (which measure how much the markets think a company is worth relative to its profits)
are ten times that of the median firm in the s&p 500.
The main reason for the surge is the promise of artificial intelligence (ai). Since the launch in
November of Chatgpt, an ai-powered chatbot, investors have grown ever more excited about a
new wave of technology that can create human-like content, from poems and video footage to
lines of code. This “generative ai” relies on large language models which are trained on big
chunks of the internet. Many think the technology could reshape whole industries, and have as
much impact on business and society as smartphones or cloud computing. Firms that can
make the best use of the technology, the
thinking goes, will be able to expand profit
margins and gain market share.
Corporate bosses are at pains to demonstrate
how they are adopting ai. On April 4th Jamie
Dimon, JPMorgan Chase’s boss, said his bank
had 600 machine-learning engineers and had
put ai to work on more than 300 different
internal applications. David Ricks, the boss of Eli Lilly, has said that the pharmaceutical giant
has more than 100 projects on the go using ai. 
Company case studies reveal only part of the picture. To get a broader sense of which
companies and industries are adopting ai, The Economist examined data on all the firms in
the s&p 500. We looked at five measures: the share of issued patents that mention ai; venture-
capital (vc) activity targeting ai firms; acquisitions of ai firms; job listings citing ai; and
mentions of the technology on earnings calls. Because other types of ai could bring benefits
for business, our analysis captures activity for all ai, not just the generative wave. The results
show that even beyond tech firms the interest in ai is growing fast. Moreover, clear leaders
and laggards are already emerging.
ai expertise already seems to be spreading (see chart). About two-thirds of the firms in our
universe have placed a job ad mentioning ai skills in the past three years, says PredictLeads, a
research firm. Of those that did, today 5.3% of their listed vacancies mention ai, up from a
three-year average of 2.5%. In some industries the rise is more dramatic. In retail firms, that share has jumped from 3% to 11%, while among chipmakers that proportion grew from 9% to
19%.
The number of ai-related patents being registered trended upwards between 2020 and 2022,
according to data provided by Amit Seru of Stanford University. PitchBook, another research
firm, concludes that in 2023 some 25% of venture deals by s&p 500 firms involved ai startups,
up from 19% in 2021. GlobalData, also a research firm, finds that about half the firms
scrutinised have talked about ai in earnings calls since 2021, and that in the first quarter of this
year the number of times ai was mentioned in the earnings calls of America Inc more than
doubled compared with the previous quarter. Roughly half have been granted a patent relating
to the technology between 2020 and 2022.
The use of generative ai may eventually become even more common than other sorts of ai.
That is because it is good at lots of tasks essential to running a firm. A report by McKinsey, a
consultancy, argues that three-quarters of the expected value created by generative ai will
come in four business functions—research and development, software engineering, marketing
and customer service. To some extent, all these operations are at the core of most big
businesses. Moreover, any large company with internal databases used to guide employees
could find a use for an ai-powered chatbot. Morgan Stanley, a bank, is building an ai assistant
that will help its wealth managers find and summarise answers from a huge internal
database. slb, an oil-services company, has built a similar assistant to help service engineers.
While the adoption of ai is happening in many firms, some are more enthusiastic than others.
Ranking all the companies using each metric and then taking an average produces a simple
scoring system. Those at the top seem to be winning over investors. Since the start of the year,
the median share price of the top 100 has risen by 11%; for the lowest-scoring quintile it has
not moved at all.
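
That scoring system is easy to sketch in pandas (the firms and figures below are illustrative; the real ranking uses the five measures described above for every firm in the s&p 500):

    import pandas as pd

    # each column is one measure, such as the share of job ads mentioning ai,
    # earnings-call mentions or the number of ai-related patents
    df = pd.DataFrame(
        {"ai_job_ads": [0.33, 0.05, 0.00],
         "earnings_call_mentions": [200, 12, 0],
         "ai_patents": [150, 4, 0]},
        index=["Nvidia", "MidCorp", "LaggardCo"],
    )

    # rank every firm on each measure (1 = most ai activity), then average the ranks
    ranks = df.rank(ascending=False, method="min")
    df["score"] = ranks.mean(axis=1)
    print(df.sort_values("score"))       # the lowest average rank marks the keenest adopter
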
The top spots are unsurprisingly dominated by Silicon
Valley. On a broad definition, the s&p 500 contains 82
tech firms. Almost 50 of them make the top 100.
Nvidia is the highest-scoring firm. According to data
from PredictLeads, over the past three years a third of
its job listings have mentioned ai. In the past year the
firm has mentioned ai in its earnings calls almost 200
times, more than any other company. Other high-
ranking tech firms include the cloud-computing giants
—Alphabet (3rd), Microsoft (12th) and Amazon
(34th). They sell access to a range of ai tools, from
services that help train sophisticated models to
software that allows the use of ai without having to
write reams of code. 
Beyond tech, two types of firms seem to be
adopting ai the quickest. One is data-intensive
industries, such as insurers, financial-services firms and pharmaceutical companies. They
account for about a quarter of our top 100. These firms tend to have lots of structured datasets,
such as loan books or patient files, which makes it easier to use ai, notes Ali Ghodsi of
Databricks, a database firm. Around a tenth of JPMorgan Chase’s current job listings
mention ai. The firm recently filed a patent for Indexgpt, an ai-infused chatbot that gives
investment advice. Health-care firms like Gilead Sciences and Moderna use ai to discover new
drugs. Others, such as Abbott and Align Technology, are building ai-powered medical
devices. America’s Food and Drug Administration approved 97 such machines last year, up
from 26 in 2017. 
A second group is industries that are already being disrupted by technology, including
carmaking, telecoms, media and retail. Thirteen firms from these industries make the high-
scoring 100, including Ford, General Motors and Tesla. The rise of electric vehicles and the prospect of self-driving cars have encouraged vehicle manufacturers to invest in technology. In
March Ford established Latitude ai, a self-driving car subsidiary that might one day rival gm’s
Cruise. In April Elon Musk told analysts that Tesla was buying specialised ai chips and was
“very focused” on improving its ai capabilities in an effort to advance the firm’s self-driving efforts.
Retailers are using ai to bolster their core business. Nike, a sportswear giant, filed an
application for a patent in 2021 for a system that can generate three-dimensional computer
models of trainers. Christian Kleinerman of Snowflake, a database provider, notes that
retailers are also taking advantage of the growth of e-commerce by collecting more data on
customers. That allows more accurate targeting of marketing campaigns. Some may take
personalisation a step further. In 2021 Procter & Gamble, a consumer-goods giant, applied for
a patent for an ai-based system which analyses users’ skin and hair conditions based on
photos, and recommends products to treat them.
One source of variation in ai use across industries may be the type of work
undertaken. A working paper led by Andrea Eisfeldt of the University of California looked at
how exposed firms are to ai. The researchers assessed which tasks took place in a firm and
how well Chatgpt could perform them. The most exposed were tech firms, largely
because ai chatbots are good at coding. Those industries least exposed, such as agriculture and
construction, tended to rely on manual labour.
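
The exposure measure can be sketched in the same spirit (a toy version with invented task lists and weights; the paper’s actual task data and scoring are far more detailed):

    # for each firm: the tasks its workers perform, each task's share of work time,
    # and a 0-1 judgment of how well a chatbot could do it (all figures invented)
    firms = {
        "software firm": [("write code", 0.6, 0.8), ("meet clients", 0.4, 0.2)],
        "builder":       [("lay bricks", 0.8, 0.0), ("schedule jobs", 0.2, 0.5)],
    }

    for name, tasks in firms.items():
        exposure = sum(share * capability for _, share, capability in tasks)
        print(f"{name}: exposure {exposure:.2f}")
    # software firm: exposure 0.56   builder: exposure 0.10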

Clear leaders and laggards are emerging within industries, too. About 70 firms in the s&p 500
show no sign on any of our metrics of focusing on ai. That includes firms in ai-heavy
industries, such as insurers. The mass of smaller firms not included in the s&p 500 may be
even less keen. One distinguishing factor within industries may be investment. For the top 100
firms in our ranking, the median r&d expenditure as a share of revenue was 11%. For those in
the lowest 100 it was zero.

Vlad Lukic of bcg, a consultancy, notes that there is even a lot of variation within companies.
He recalls visiting two divisions of the same medium-sized multinational. One had no
experience working with ai. The other was advanced; it had been using a pilot version of the
technology from Openai, the startup behind Chatgpt, for two years.

Among early adopters, many non-tech companies’ ai use is growing more sophisticated. Mr Seru’s data reveal that about 80 non-tech firms have had ai-related patents issued which were
cited by another patent, suggesting that they have some technological value. Some 45 non-
tech companies in the s&p 500 have recently placed ads which mention model training,
including Boeing, United Health and State Street. That suggests they may be building their
own models rather than using off-the-shelf technology from the likes of Openai. The
advantage of this approach is that it can produce more-accurate ai, giving a greater edge over
rivals.

However, a shift to in-house training hints at one of the risks: security. In May Samsung
discovered that staff had uploaded sensitive code to Chatgpt. The concern is that this
information may be stored on external servers of the firms which run the models, such as
Microsoft and Alphabet. Now Samsung is said to be training its own models. The firm also
joined the growing list of companies that have banned or limited the use of Chatgpt, which
includes Apple and JPMorgan Chase. 

Other risks abound. Model-makers, including Openai, are being sued for violating copyright
laws over their use of internet data to train their models. Some large corporations think that
they could be left liable if they use Openai’s technology. Moreover, models are prone to make
up information. In one incident a New York lawyer used Chatgpt to write a motion. The
chatbot included fictional case-law and the lawyer was fined by the court. 

But all this must be weighed against the potential benefits, which could be vast. Waves of
technology frequently turn industries on their head. As generative ai diffuses into the
economy, it is not hard to imagine it doing the same thing. Mr Lukic says that the biggest risk
for companies may be falling behind. Judged by the scramble in America Inc for all things ai,
many bosses and investors would agree. ■

https://www.economist.com/business/2023/06/25/our-early-adopters-index-examines-how-
corporate-america-is-deploying-ai
