
Views

The columnist: Penny Sarchet muses on why orchids are so diverse  p28
Aperture: Peruse stunning examples of world photography  p30
Letters: Maybe our big bang was just the best of the bunch  p32
Culture: Will neural tech lead to the end of privacy?  p34
Culture columnist: Bethan Ackerley sees double in new Dead Ringers TV show  p36

Comment | AI special report

Facing AI extinction
Why do many of today’s artificial intelligence researchers
dismiss the potential risks to humanity, asks David Krueger

In a recent White House press conference, press secretary Karine Jean-Pierre couldn't suppress her laughter at the question: Is it "crazy" to worry that "literally everyone on Earth will die" due to artificial intelligence? Unfortunately, the answer is no.

While AI pioneers such as Alan Turing cautioned that we should expect "machines to take control", many contemporary researchers downplay this concern. In an era of unprecedented growth in AI abilities, why aren't more experts weighing in?

Before the deep-learning revolution in 2012, I didn't think human-level AI would emerge in my lifetime. I was familiar with arguments that AI systems would insatiably seek power and resist shutdown – an obvious threat to humanity if it were to occur. But I also figured researchers must have good reasons not to be worried about human extinction risk (x-risk) from AI.

Yet after 10 years in the field, I believe the main reasons are actually cultural and historical. By 2012, after several hype cycles that didn't pan out, most AI researchers had stopped asking "what if we succeed at replicating human intelligence?", narrowing their ambitions to specific tasks like autonomous driving.

When concerns resurfaced outside their community, researchers were too quick to dismiss outsiders as ignorant and their worries as science fiction. But in my experience, AI researchers are themselves often ignorant of arguments for AI x-risk.

One basic argument is by analogy: humans' cognitive abilities allowed us to outcompete other species for resources, leading to many extinctions. AI systems could likewise deprive us of the resources we need for our survival. Less abstractly, AI could displace humans economically and, through its powers of manipulation, politically.

But wouldn't it be humans wielding AIs as tools who end up in control? Not necessarily. Many people might choose to deploy a system with a 99 per cent chance of making them phenomenally rich and powerful, even if it had a 1 per cent chance of escaping their control and killing everyone.

Because no safe experiment can definitively tell us whether an AI system will actually kill everyone, such concerns are often dismissed as unscientific. But this isn't an excuse for ignoring the risk. It just means society needs to reason about it in the same way as other complex social issues. Researchers also emphasise the difficulty of predicting when AI might surpass human intelligence, but this is an argument for caution, not complacency.

Attitudes are changing, but not quickly enough. AI x-risk is admittedly more speculative than important social issues with present-day AI, like bias and misinformation, but the basic solution is the same: regulation. A robust public discussion is long overdue. By refusing to engage, some AI researchers are neglecting ethical responsibilities and betraying public trust.

Big tech sponsors AI ethics research when it doesn't hurt the bottom line. But it is also lobbying to exclude general-purpose AI from EU regulation. Concerned researchers recently called for a pause on developing bigger AI models to allow society to catch up. Critics say this isn't politically realistic, but problems like AI x-risk won't go away just because they are politically inconvenient.

This brings us to the ugliest reason researchers may dismiss AI x-risk: funding. Essentially every AI researcher (myself included) has received funding from big tech. At some point, society may stop believing reassurances from people with such strong conflicts of interest and conclude, as I have, that their dismissal betrays wishful thinking rather than good counterarguments.  ❚

Illustration: Simone Rotella

For more on AI, see pages 12 and 46

David Krueger is an assistant professor in machine learning at the University of Cambridge

