Don't Panic about AI


John Browne

According to legend, the medieval philosopher and Franciscan friar
Roger Bacon created an all-knowing artificial brain, which he
encased in a bronze, human-like head. Bacon, so the story goes,
wanted to use the insights gleaned from this “brazen head” to make
sure Britain could never be conquered.

Following Bacon, a long-standing challenge for engineers and
computer scientists has been to build a silicon-based replica of the
brain that could match, and then exceed, human intelligence. This
ambition pushes us to imagine what we might do if we succeed in
creating the next generation of computer systems that can think,
dream and reason for us and with us.

Today there is little talk of brazen heads, but artificial intelligence
seems to be everywhere. Magazines and newspaper articles
promote it endlessly, raising expectation and fear in roughly equal
measure.

Certain forms of AI are indeed becoming ubiquitous. For example,
algorithms execute huge volumes of trading on our financial
markets, self-driving automobiles are beginning to navigate city
streets, and our smartphones are translating from one language to
another. These systems are sometimes faster and more perceptive
than we humans are. But so far that is only true for the specific
tasks for which the systems have been designed. That is something
that some AI developers are now eager to change.

Some of today’s AI pioneers want to move on from today’s world of
“weak” or “narrow” AI, to create “strong” or “full” AI, or what is often
called artificial general intelligence (AGI). In some respects, today’s
powerful computing machines already make our brains look
puny. They can store vast amounts of data, process it with
exceptional speed and communicate instantaneously with other
computers all around the planet. If these devices could be provided
with AGI algorithms that work in more flexible ways, the opportunity
would be huge.

AGI could, its proponents say, work for us diligently, around the clock, and, drawing on all available data, suggest solutions for many problems that have so far proved intractable. It could perhaps help provide effective preemptive health care, avoid stock
market crashes or prevent geopolitical conflict. Google’s DeepMind,
a company focused on the development of AGI, has an immodest
ambition to “solve intelligence.” “If we’re successful,” their mission
statement reads, “we believe this will be one of the most important
and widely beneficial scientific advances ever made.”

Since the early days of AI, imagination has outpaced what is
possible or even probable. In 1965 an imaginative mathematician
called Irving Good, who had been a colleague of Alan Turing in the
World War II code-breaking team at Bletchley Park, predicted the
eventual creation of an “ultra-intelligent machine that can far
surpass all the intellectual activities of any man, however clever.”
He predicted that such a machine would be able to turn its vast
intellect to improving itself—each tweak would increase its ability to
enhance its own powers, leading to a rapidly accelerating positive
feedback loop. “There would then unquestionably be an
‘intelligence explosion,’” Good wrote, “and the intelligence of man
would be left far behind.”

Good went on to suggest that “the first ultra-intelligent machine”
could be “the last invention that man need ever make.” This led to
the idea of the so-called “technological singularity” proposed by
Ray Kurzweil, who argues that the arrival of ultraintelligent
computers will be a critical turning point in our history, beyond
which there will be an eruption of technological and intellectual
prowess that will alter every facet of existence. Good added an
important qualification to his “last invention” prediction: the idea that
we would be able to harvest its benefits “provided that the machine
is docile enough to tell us how to keep it under control.”

Fears about the advent of malign, powerful, man-made intelligent
machines have been reinforced by many works of fiction—Mary
Shelley’s Frankenstein and the Terminator film series, for example.
But if AI does eventually prove to be our downfall, it is unlikely to be
at the hands of human-shaped forms like these, with recognizably
human motivations such as aggression or retribution.

Instead, I agree with Oxford University philosopher Nick Bostrom,
who believes that the gravest risks from AGI do not come from a
decision to turn against humankind but rather from a dogged
pursuit of set objectives at the expense of everything else. Berkeley
AI researcher Stuart Russell summarizes what he sees as the core
of this problem: “If you, for example, say, ‘I want everything I touch
to turn to gold,’ then that’s exactly what you’re going to get and then
you’ll regret it.” If computers do become extremely intelligent, there
is no reason to expect them to share any capability that people
would recognize as justice or compassion.

The promise and peril of true AGI are immense. But all of today’s excited discussion about these possibilities presupposes that we will be able to build these systems. And, having spoken to many of the world’s foremost AI researchers, I see good reason to doubt that we will see AGI any time soon, if ever.

According to Russell, “We are several algorithmic breakthroughs
away from having anything that you would recognize as general-
purpose intelligence.” Tong Zhang, who was, until earlier this year,
head of AI research at the Chinese technology firm Tencent,
agrees: “If you want general AI, there certainly are a lot of obstacles
you need to overcome.” “I just don’t see any practical drivers in the
near future for a cross-sectional general superintelligence,” says
MIT roboticist Cynthia Breazeal. Mark James of Beyond Limits also
doubts that anyone is really on track to develop true AGI, saying that “for the AI field to truly progress to the point of having a really
human-like thinking machine, we need to rethink the problem from
square one.”

I think James is right—after all, how can we engineer something
that we cannot even define? We’ve never really managed to work
out what natural human intelligence is, so it is not clear what
engineers are trying to imitate in machines. Rather than intelligence
being a single, physical parameter, there are many types of
intelligence, including emotional, musical, sporting and
mathematical intelligences. Zoubin Ghahramani, Cambridge
professor and chief scientist at Uber, agrees: “I actually don’t think
there is such a thing as general intelligence,” he told me. And if
there is no such thing as a general intelligence, there is no hope of
building one, from either synthetic or biological parts. Ghahramani
goes further still, arguing that our view of intelligence is “pre-Copernican.” Just as the Earth is not at the center of our solar
system, the human brain does not represent the pinnacle of
intelligence.

What all this means is that, even if we could emulate the intelligence of the human brain, it might not be the best route towards powerful forms of AGI. As leading AI researcher
Michael Jordan, from the University of California, Berkeley, has
pointed out, civil engineering did not develop by attempting to
create artificial bricklayers or carpenters, and chemical engineering
did not stem from the creation of an artificial chemist, so why
should anyone believe that most progress in the engineering of
information should come from attempting to build an artificial brain?

Instead, I think engineers should direct their imaginations towards
building computer systems that think in ways that we cannot: that
grapple with uncertainty, calculate risk by considering thousands or
millions of different variables and integrate vast quantities of poorly
structured data from many different sources.

None of this is to take away from the power of increasingly
adaptable AI algorithms, or to ignore the risks that they could one
day pose through unanticipated side effects or malign applications.
But if we have reason to believe that a machine with generalized
human-like intelligence is impossible, many concerns about AI
evaporate; there is no need to write any rigidly defined moral code
or value system into the workings of AI systems. Instead, our aim
should be to make them controllable and highly responsive to our
needs. Many first-rate researchers and thinkers are devoting a
great deal of time and energy to preempting problems associated
with AI before they arise.

Russell thinks the key to making AI systems both safer and more
powerful is in making their aims inherently unclear or, in computer
science terminology, introducing uncertainty into their objectives. As
he says, “I think we actually have to rebuild AI from its foundation
upwards. The foundation that has been established is that of the
rational [human-like] agent in optimization of objectives. That’s just
a special case.” Russell and his team are developing algorithms
that will actively seek to learn from people about what they want to
achieve and which values are important to them. He describes how
such a system can provide some protection, “because you can
show that a machine that is uncertain about its objective is willing,
for example, to be switched off.”
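
To make Russell’s point concrete, consider a toy expected-utility sketch, loosely in the spirit of the “off-switch game” that he and his colleagues have analyzed. The numbers, the probability distribution and the Python framing below are illustrative assumptions, not a description of any real system.

```python
# A toy sketch of why a machine that is uncertain about its objective
# is "willing to be switched off," loosely following the off-switch game
# studied by Russell and colleagues. The distribution and payoffs below
# are illustrative assumptions, not anyone's real system.

import random

random.seed(0)

# The machine's belief about how much the human actually values its
# proposed action: positive samples mean the action helps, negative
# samples mean the human would rather switch the machine off.
beliefs = [random.gauss(0.1, 1.0) for _ in range(100_000)]

# Option 1: act unilaterally. Expected value is just the mean belief.
act_unilaterally = sum(beliefs) / len(beliefs)

# Option 2: defer to the human, who permits the action only when it is
# genuinely valuable (u > 0) and otherwise switches the machine off (0).
defer_to_human = sum(max(u, 0.0) for u in beliefs) / len(beliefs)

print(f"expected value, acting unilaterally: {act_unilaterally:.3f}")
print(f"expected value, deferring to human : {defer_to_human:.3f}")
# Deferring scores higher whenever the machine assigns any probability
# to its action being harmful, so accepting the off switch is rational.
```

The only point of the sketch is that uncertainty changes the arithmetic: a machine certain of its objective gains nothing by deferring, whereas one that admits it might be wrong does better by letting the human switch it off when its action would do harm.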

Work like this is important, particularly because Russell and his
collaborators are not simply flagging up ill-defined risks, but also
proposing concrete solutions and safeguards. This is what Stanford
AI professor and former head of Google Cloud Fei-Fei Li meant
when she said to me, “It’s not healthy to just preach a kind of
dystopia [about AI]. It’s much more responsible to preach a
thoughtful message.”

If the only message presented about the great leaps in physics
during the early 20th century had been dire warnings of imminent
nuclear Armageddon, we would not now have all the amazing
discoveries that have stemmed from our understanding of atomic
structures and quantum mechanics. The risks associated with AI
must be kept in perspective and responded to with constructive
action and regulation, rather than hand-wringing and alarmism.

Between the extraordinarily optimistic and the terrifyingly
pessimistic lies a more realistic future for AI. Long before they
achieve anything even remotely resembling ultraintelligence,
computers will continue to change how we live and think in ways
that are both far-reaching and hard to predict. As our computers
become smarter, people will also get smarter and more capable.
We will need the processing power and increasingly intelligent
insights generated by machines to take on our most pressing global
challenges—from tackling climate change to curing cancer—and to
seek answers to the deepest questions about ourselves and our
place in the wider universe.

Attempts by medieval alchemist Roger Bacon notwithstanding,
engineers have so far failed in their attempts to emulate the human
brain in machine form. It is quite possible that they will never
succeed in that ambition. But that failure is irrelevant. Regardless of
the fact that today’s advanced AI systems think in distinctly non-
human-like ways, they are among the most powerful tools we have
built. If we wield them wisely and responsibly, they can help us build
a better future for all humanity.

The views expressed are those of the author(s) and are not
necessarily those of Scientific American.

ABOUT THE AUTHOR(S)

John Browne

Lord John Browne, trained as a professional engineer, was group
chief executive of BP from 1995 to 2007, where he built a
reputation as a visionary leader, transforming BP into one of the
world's most successful companies. He is now the executive
chairman of L1 Energy. He is the author of Make, Think, Imagine:
Engineering the Future of Civilization, published in August 2019.
