1. QUANTA MAGAZINE: You think the goal of your field should be developing artificial intelligence
that is “provably aligned” with human values. What does that mean?

STUART RUSSELL: It’s a deliberately provocative statement, because it’s putting together two things —
“provably” and “human values” — that seem incompatible. It might be that human values will forever
remain somewhat mysterious. But to the extent that our values are revealed in our behavior, you would
hope to be able to prove that the machine will be able to “get” most of it. There might be some bits and
pieces left in the corners that the machine doesn’t understand or that we disagree on among ourselves.
But as long as the machine has got the basics right, you should be able to show that it cannot be very
harmful.

Russell, 53, a professor of computer science and founder of the Center for Intelligent Systems at the
University of California, Berkeley, has long been contemplating the power and perils of thinking
machines.

Recently, he says, artificial intelligence has made major strides, partly on the strength of neuro-inspired
learning algorithms. These are used in Facebook’s face-recognition software, smartphone personal
assistants and Google’s self-driving cars. In a bombshell result reported recently in Nature, a simulated
network of artificial neurons learned to play Atari video games better than humans in a matter of hours
given only data representing the screen and the goal of increasing the score at the top — but no
preprogrammed knowledge of aliens, bullets, left, right, up or down. “If your newborn baby did that you
would think it was possessed,” Russell said.
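The Atari result describes reinforcement learning: an agent that starts with no knowledge of the game and improves purely from a reward signal (the score). The deep network in the Nature paper is far beyond a short sketch, but the underlying idea can be shown with tabular Q-learning on a toy environment. Everything here — the five-cell "corridor" world, the parameter values, the variable names — is an illustrative assumption, not the Atari setup.

```python
import random

# Tabular Q-learning sketch: the agent sees only state indices and a
# reward, with no preprogrammed knowledge of what "left" or "right" mean.
N_STATES = 5          # corridor cells 0..4; reward lives at cell 4
ACTIONS = [0, 1]      # 0 and 1 happen to mean left and right

def step(state, action):
    """Move one cell; reward 1.0 only on reaching the last cell."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit, occasionally explore
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            nxt, reward, done = step(state, action)
            # update toward reward + discounted best future value
            q[state][action] += alpha * (
                reward + gamma * max(q[nxt]) - q[state][action])
            state = nxt
    return q

q = train()
# The learned policy should prefer "right" in every non-terminal cell,
# derived purely from the score signal.
policy = ["right" if q[s][1] > q[s][0] else "left" for s in range(N_STATES - 1)]
```

The point mirrors the excerpt: nothing in the code tells the agent that moving right is good; that knowledge emerges from optimizing the score.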

2. We must also be clear on what we mean by “intelligence” and by “AI.” Concerning
“intelligence,” Legg and Hutter (2007) found that definitions of intelligence used
throughout the cognitive sciences converge toward the idea that “Intelligence measures
an agent’s ability to achieve goals in a wide range of environments.” We might call this
the “optimization power” concept of intelligence, for it measures an agent’s power to
optimize the world according to its preferences across many domains. But consider two
agents that have equal ability to optimize the world according to their preferences,
one of which requires much more computational time and resources to do so. Intuitively,
the more resource-efficient agent seems more intelligent, which suggests that raw
optimization power alone may not exhaust the concept.
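Legg and Hutter (2007) make the "optimization power" idea precise as a universal intelligence measure: the agent's expected value summed over all computable environments, weighted by simplicity. The formula below is from their cited paper, not from this excerpt, and is given as a sketch of the formalism:

```latex
% Universal intelligence of a policy \pi (Legg and Hutter, 2007):
% V_\mu^\pi is the expected value \pi achieves in environment \mu,
% E is the set of computable environments, and K is Kolmogorov
% complexity, so simpler environments are weighted more heavily.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

Note that this measure, as written, says nothing about the computational cost of the policy itself — the gap the excerpt's final comparison highlights.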

3. So, when will we create AI? Any predictions on the matter must have wide error bars.
Given the history of confident false predictions about AI (Crevier 1993), and AI’s potential speed
bumps, it seems misguided to be 90% confident that AI will arrive in the
coming century. But 90% confidence that AI will not arrive before the end of the century
also seems wrong, given that: (a) many difficult AI breakthroughs have now been made,
(b) several factors, such as automated science and first-mover incentives, may well accelerate
progress toward AI, and (c) whole brain emulation seems possible and likely to follow a more
predictable development path than de novo AI. Thus, we think there is a significant
probability that AI will be created this century. This claim is not scientific—the field of
technological forecasting is not yet advanced enough for that—but we believe our claim
is reasonable.

4. Artificial intelligence (AI) makes predictions about new scenarios by learning from large
volumes of historical data. A holistic view of digital transformation in the education sector
must also take in augmented reality, virtual reality, video, and blockchain. AI and these other
digital technologies should be used to help teachers impart education (Sarkar, 2018).
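The claim that AI "makes predictions about new scenarios by learning from historical data" can be made concrete with the simplest possible learner: a least-squares line fit to past observations, then evaluated on an unseen input. The data (study hours vs. test scores) and the linear model are illustrative assumptions, not anything from Sarkar (2018).

```python
# Minimal "learn from history, predict a new scenario" sketch:
# ordinary least squares for y = a*x + b, in plain Python.

def fit_line(xs, ys):
    """Fit slope a and intercept b by least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# "Historical data": hours of study vs. test score (made-up numbers).
hours  = [1.0, 2.0, 3.0, 4.0, 5.0]
scores = [52.0, 54.0, 56.0, 58.0, 60.0]

a, b = fit_line(hours, scores)
predicted = a * 6.0 + b   # predict an unseen scenario: 6 hours of study
# predicted → 62.0 (the data lie exactly on y = 2x + 50)
```

Real educational-analytics systems use far richer models, but the pattern — fit parameters on past data, apply them to a new input — is the same.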

5. A recommended approach is first to educate people about AI, preparing them to work and
live with it, and then to use AI itself in planning educational and training systems
(Luckin and Issroff, 2018).