Don't Panic about AI
In some domains, AI systems can already perform tasks better than we humans can. But so far that is true only for the specific tasks for which the systems have been designed, and that is something some AI developers are now eager to change.
AGI could, its proponents say, work for us diligently around the clock and, drawing on all available data, suggest solutions to many problems that have so far proved intractable. It could perhaps help provide effective preemptive health care, avert stock market crashes or prevent geopolitical conflict. Google’s DeepMind, a company focused on the development of AGI, has the immodest ambition to “solve intelligence.” “If we’re successful,” its mission statement reads, “we believe this will be one of the most important and widely beneficial scientific advances ever made.”
The promise and peril of true AGI are immense. But all of today’s excited discussion of these possibilities presupposes that we will be able to build such systems in the first place. And having spoken to many of the world’s foremost AI researchers, I see good reason to doubt that we will see AGI any time soon, if ever.
What all this means is that even if we could emulate the intelligence of the human brain, that might not necessarily be the best route toward powerful forms of AGI. As leading AI researcher Michael Jordan of the University of California, Berkeley, has pointed out, civil engineering did not develop by attempting to create artificial bricklayers or carpenters, and chemical engineering did not stem from the creation of an artificial chemist. So why should anyone believe that most progress in the engineering of information will come from attempting to build an artificial brain?
Russell thinks the key to making AI systems both safer and more powerful lies in making their aims inherently unclear or, in computer science terminology, in introducing uncertainty into their objectives. As he says, “I think we actually have to rebuild AI from its foundation upwards. The foundation that has been established is that of the rational [human-like] agent in optimization of objectives. That’s just a special case.” Russell and his team are developing algorithms that actively seek to learn from people what they want to achieve and which values are important to them. He describes how such a system can provide some protection, “because you can show that a machine that is uncertain about its objective is willing, for example, to be switched off.”
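Russell’s point can be made concrete with a toy version of the “off-switch” problem that he and his collaborators have studied. The Python sketch below is a minimal illustration under simplified assumptions, not Russell’s actual algorithms, and all function names are hypothetical: an agent is unsure whether its proposed action is beneficial, and we compare acting unilaterally with deferring to a human who knows the true payoff and will switch the agent off when the action would do harm.

```python
import random

# Toy "off-switch" illustration (a hypothetical sketch, not Russell's code).
# The agent is uncertain about its objective: it believes the true utility
# of its proposed action is uniformly distributed on [-1, 1].

def act_directly(true_utility):
    # The agent executes the action no matter what, receiving its true utility.
    return true_utility

def defer_to_human(true_utility):
    # The agent lets a human decide. The human knows the true utility and
    # switches the agent off (payoff 0) whenever the action would do harm.
    return true_utility if true_utility > 0 else 0.0

random.seed(0)
beliefs = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

ev_direct = sum(map(act_directly, beliefs)) / len(beliefs)
ev_defer = sum(map(defer_to_human, beliefs)) / len(beliefs)

print(f"expected utility, acting directly: {ev_direct:+.3f}")  # roughly  0.00
print(f"expected utility, deferring:       {ev_defer:+.3f}")   # roughly +0.25
```

Because the human vetoes only the genuinely harmful cases, the uncertain agent expects to do better by deferring, so it has no incentive to resist being switched off. An agent that was certain of its objective would see no such benefit, which is the intuition behind Russell’s claim.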
The views expressed are those of the author(s) and are not
necessarily those of Scientific American.
John Browne