
The Ascents and Collapses of Artificial Intelligence

Melanie Mitchell
Portland State University
and Santa Fe Institute
“A year from now, we’ll have over a million cars with full self-driving, software, everything.” — Elon Musk, 2019
“Perhaps expectations are too high, and... this will eventually result in disaster…. [S]uppose that five years from now [funding] collapses miserably as autonomous vehicles fail to roll. Every startup company fails. And there's a big backlash so that you can't get money for anything connected with AI. Everybody hurriedly changes the names of their research projects to something else.

This condition [is] called the ‘AI Winter.’”

— Drew McDermott, 1984


[Figure: “Outlook for AI” over time, 1960–2020, with two “AI Winter” dips and a “?” for the present]


Frank Rosenblatt
The Mark I Perceptron, 1957

“The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself, and be conscious of its existence.”

— New York Times, July 1958
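Rosenblatt’s perceptron learning rule can be stated in a few lines: when the unit misclassifies an example, nudge the weights toward that example. The following is a modern illustrative sketch only — the Mark I was analog hardware, and the toy data, learning rate, and function names here are invented:

```python
# Sketch of Rosenblatt's perceptron learning rule: when the unit
# misclassifies an example, nudge the weights toward that example.
# Toy data, learning rate, and names are invented for illustration.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, label) pairs with label in {-1, +1}."""
    n_inputs = len(samples[0][0])
    w = [0.0] * n_inputs
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            prediction = 1 if activation >= 0 else -1
            if prediction != y:          # misclassified: apply the update rule
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Learn logical OR -- linearly separable, so the rule is guaranteed to converge
data = [([0, 0], -1), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(data)
```

On linearly separable data like this, the rule provably converges — the limitation Minsky and Papert later made famous is that a single layer cannot represent non-separable functions such as XOR.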
I confidently expect that within a matter of 10 or 15 years, something will emerge from the laboratory which is not too far from the robot of science fiction fame.

— Claude Shannon, 1961
Machines will be capable, within twenty years, of doing any work that a man can do.

— Herbert Simon, 1965
Within a generation... the problem of creating ‘artificial intelligence’ will be substantially solved.

— Marvin Minsky, 1967
[The results are] wholly discouraging about general-purpose programs seeking to mimic the problem-solving aspects of human [brain] activity over a rather wide field. Such a general-purpose program, the coveted long-term goal of AI activity, seems as remote as ever.

— James Lighthill, 1973



“AI research is unlikely to produce military applications in the foreseeable future.”

— American Study Group, 1973




The Rise of Expert Systems

From XCON (1982):


If:
• the most current active context is distributing massbus devices, and
• there is a single-port disk drive that has not been assigned to a massbus, and
• there are no unassigned dual-port disk drives, and
• the number of devices that each massbus should support is known, and
• there is a massbus that has been assigned at least one disk drive that should support additional disk drives, and
• the type of cable needed to connect the disk drive to the previous device on the massbus is known
Then: assign the disk drive to the massbus.
Adapted from http://www.cs.utexas.edu/users/ear/ugs302/Lectures/L10-ExpertSystems.ppt
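In modern terms, an XCON-style rule is a condition/action pair matched against a working memory of configuration facts. Below is a minimal illustrative sketch of that idea — XCON itself was written in OPS5, and every name here (`working_memory`, `rule_assign_drive`, the fact keys) is invented for the example:

```python
# Minimal sketch of an XCON-style configuration rule: test conditions
# against a working memory of facts, fire the action if they all hold.
# All names and facts are invented; XCON was actually written in OPS5.

working_memory = {
    "context": "distributing massbus devices",
    "unassigned_single_port_drives": ["RP07-1"],
    "unassigned_dual_port_drives": [],
    "massbus_capacity_known": True,
    "massbus_with_room": "massbus-0",
    "cable_type_known": True,
}

def rule_assign_drive(wm):
    """If every condition holds, assign a single-port drive to the massbus."""
    if (wm["context"] == "distributing massbus devices"
            and wm["unassigned_single_port_drives"]
            and not wm["unassigned_dual_port_drives"]
            and wm["massbus_capacity_known"]
            and wm["massbus_with_room"]
            and wm["cable_type_known"]):
        drive = wm["unassigned_single_port_drives"].pop(0)
        return f"assign {drive} to {wm['massbus_with_room']}"
    return None

print(rule_assign_drive(working_memory))  # assign RP07-1 to massbus-0
```

A real production system loops over thousands of such rules, firing whichever matches the current working memory — which is also why such systems were brittle: every condition must be anticipated by hand.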
The Rise of Neural Networks and Machine Learning

Published 1986




, 1987

I believe that this [neural network] technology which we are about to embark upon is more important than the atom bomb.

— Jasper Lupo, director of DARPA’s Tactical Technology Office, 1988



Expert systems are brittle and unreliable.

Neural networks are hard to train, and don’t seem to scale to complex problems.






1990: I get out of graduate school.

I’m advised not to use the term “artificial intelligence” on my job applications.




The Deep Learning Revolution

All knowledge is learned from examples/experience, and is encoded as weights.

https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/
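The claim that knowledge is learned from examples and encoded as weights boils down to gradient descent on a loss. Here is a deliberately tiny sketch with a single weight — the data, learning rate, and target function are invented for illustration, not taken from any real network:

```python
# Minimal illustration of "knowledge encoded as weights": fit y = w*x to
# examples by gradient descent on squared error. Everything here is a toy.

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # generated by y = 2x

w = 0.0                       # the single "weight" holding all learned knowledge
lr = 0.02                     # learning rate
for _ in range(200):          # epochs
    for x, y in examples:
        error = w * x - y
        w -= lr * 2 * error * x   # d/dw of (w*x - y)^2 is 2*(w*x - y)*x

print(round(w, 2))  # converges near 2.0
```

A deep network is the same loop scaled up to millions of weights, with backpropagation computing each weight’s gradient — which is why the slide’s claim holds: nothing is stored except the weights.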


[Figure: Speech Recognition Error Rate, falling over time]


[Figure: Performance in “Reading Comprehension” (Stanford Question-Answering Dataset), 2018–2019, crossing “human level”]




Google’s new service translates languages almost as well as humans can.



DeepMind’s Deep Q-Learning: Atari video games

[Figure: game-by-game scores relative to human level]

Mnih, V. et al. "Human-level control through deep reinforcement learning." Nature 518, no. 7540 (2015): 529.

Deep Q-Learning discovers “tunneling”
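The mechanism behind the Atari results is the Q-learning temporal-difference update, which DQN approximates with a deep network. Below is a minimal tabular sketch on an invented five-state “walk right to the goal” chain world — all names and the toy environment are illustrative assumptions, not DeepMind’s code:

```python
import random

# Tabular Q-learning sketch: the same temporal-difference update that DQN
# (Mnih et al. 2015) approximates with a deep network. The 5-state
# "walk right to the goal" chain world is invented for illustration.

N_STATES = 5
ACTIONS = [0, 1]                        # 0 = move left, 1 = move right
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

def step(s, a):
    """Move along the chain; reaching the rightmost state yields reward 1."""
    s2 = min(max(s + (1 if a == 1 else -1), 0), N_STATES - 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N_STATES)]
for _ in range(500):                    # episodes
    s = 0
    for _ in range(100):                # cap episode length
        if random.random() < epsilon:
            a = random.choice(ACTIONS)  # explore
        else:
            best = max(Q[s])            # exploit, breaking ties randomly
            a = random.choice([act for act in ACTIONS if Q[s][act] == best])
        s2, r = step(s, a)
        # TD update: move Q(s,a) toward r + gamma * max_a' Q(s', a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        if s == N_STATES - 1:
            break

# After training, the greedy policy is "move right" in every non-goal state.
```

Note that the learned table encodes only state-action values, not *why* moving right works — which bears on the “overattribution” point later in this talk: the agent that “discovers tunneling” has no concept of a tunnel.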
Human-level AI will be passed in the mid-2020s.

— Shane Legg, 2016
(Co-Founder and Chief Scientist, Google DeepMind)

I set the date for the Singularity . . . as 2045. The nonbiological intelligence created in that year will be one billion times more powerful than all human intelligence today.

— Ray Kurzweil


[In the quest to create human-level AI] there has been almost no progress.

— Gary Marcus, 2016
(Professor, NYU and Founder, Geometric Intelligence)


Some cracks in the foundation
of deep learning

• Brittleness and vulnerability to attacks

• Lack of generalization, abstraction, “transfer learning”

• Lack of “common sense”


Deep neural networks make unhuman-like errors if inputs differ from training data

Alcorn, Michael A., et al. "Strike (with) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects." arXiv preprint arXiv:1811.11553 (2018).
Automated captioning systems are unreliable

“A group of people sitting at a bus stop”

— Microsoft captionbot
Translation systems can ignore important context
Attacks on Image Classification Systems

From http://cs231n.stanford.edu/
Attacks on Autonomous Driving Systems
Target: “Speed Limit 80”

Evtimov et al., “Robust Physical-World Attacks on Deep Learning Models”, 2017
Attacks on Speech Recognition Systems

2016

“Okay Google, browse to evil.com”


Attacks on Question-Answering Systems

Jia & Liang, “Adversarial Examples for Evaluating Reading Comprehension Systems”, 2017
Deep reinforcement learning sometimes is not able to transfer to slightly different scenarios

[Panels: Standard Breakout vs. Breakout with paddle shifted up]

Kansky, K. et al., 2017. Schema networks: Zero-shot transfer with a generative causal model of intuitive physics. arXiv preprint arXiv:1706.04317.
Deep neural networks have trouble with abstraction

Bongard Problems #18 and #19 (1967)


Deep neural networks have trouble with abstraction

Bongard Problems #84 and #91 (1967)


Deep neural networks have trouble with abstraction

For each class, the network is trained on 20,000 examples, then tested on 10,000 new examples.

Method      Accuracy
LeNet       0.57
GoogLeNet   0.50
Humans      0.98

Stabinger, S., Rodríguez-Sánchez, A., & Piater, J. 25 years of CNNs: Can we compare to human abstraction capabilities?.
In International Conference on Artificial Neural Networks (pp. 380-387). Springer, 2016.
https://blog.piekniewski.info/2018/05/28/ai-winter-is-well-on-its-way/
Four AI Fallacies

Narrow AI is on a continuum with general AI

— Related to Hubert Dreyfus’s “first step fallacies”


Four AI Fallacies

Easy things are easy and hard things are hard

— Herbert Simon: “Everything of interest in cognition happens above the 100-millisecond level.”

— Andrew Ng: “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.”

— Demis Hassabis et al.: Go is one of “the most challenging of domains”
Four AI Fallacies

Names confer abilities

— Benchmark datasets called “object recognition”, “reading comprehension”, “question answering”, “common sense understanding”

— Methods called “deep learning”, “neural networks”

— “Overattributions” in explanations of what networks learned (e.g., “tunneling”)
Four AI Fallacies

Intelligence is all in the brain

— Allen Newell and Herbert Simon: “A physical symbol system has the necessary and sufficient means for general intelligent action.”

— Geoffrey Hinton: “I think you can capture a thought by a vector.”
Some steps to more robust and human-like AI

• Generative, human-like concepts

• Active, dynamic perception

• Predictive coding and causal models

• Abstraction, analogy, metaphor

• Developmental foundations of common sense


Learning generative, human-like concepts
Lake et al. (2015). Human-level concept learning through probabilistic program induction. Science, 350(6266), 1332-1338.
Active Perception and Analogies for
Situation Recognition

Quinn et al. (2018). Semantic image retrieval via active grounding of visual situations. In 2018 IEEE 12th International Conference on Semantic Computing (ICSC) (pp. 172-179). IEEE.
Foundations of Common Sense

Winograd Schema “Common Sense” Challenge


Levesque, Hector, Ernest Davis, and Leora Morgenstern. "The Winograd Schema
challenge." Thirteenth International Conference on the Principles of Knowledge
Representation and Reasoning. 2012.

I poured water from the bottle into the cup until it was full. What was full?
I poured water from the bottle into the cup until it was empty. What was empty?

The steel ball hit the glass table and it shattered. What shattered?
The glass ball hit the steel table and it shattered. What shattered?


State-of-the-art AI: ~60% (vs. 50% with random guessing)

Humans: 100% (if paying attention)



“When AI can’t determine what ‘it’ refers to in a sentence, it’s hard to believe that it will take over the world.”

— Oren Etzioni, Allen Institute for AI


https://www.seattletimes.com/business/technology/paul-allen-invests-125-million-to-teach-computers-common-sense/

https://allenai.org/alexandria/
DARPA “Foundations of Common Sense” Challenge

https://www.fbo.gov/index.php?s=opportunity&mode=form&id=f98476244ba0c06de9e0b38bfe75f54d&tab=core&_cview=0
Summary

• AI has a history of ascents and collapses: optimistic predictions followed by disappointments.

• Will our current “AI Spring”, full of optimistic predictions, be followed by another “AI Winter”?

• Deep learning has had huge success in narrow domains, but fails at some fundamental aspects of general intelligence.

• Will these struggles be solved via more layers and more data, or is something fundamentally different needed?
Coming this Fall!
