
Lecture 3

History of Artificial Intelligence


The Case of Intelligent Machines
What Early AI Scientists Say
Pre-history of AI
• the quest for understanding & automating intelligence has deep roots
• 4th cent. B.C.: Aristotle studied mind & thought, defined formal logic
• 14th–16th cent.: Renaissance thought built on the idea that all natural or artificial
processes could be mathematically analyzed and understood
• 17th cent.: Descartes emphasized the distinction between mind & brain (famous for
"Cogito ergo sum")
Pre-history of AI
• the quest for understanding & automating intelligence has deep roots
• 19th cent.: advances in science & understanding nature made the idea of creating
artificial life seem plausible
• Shelley's Frankenstein raised moral and ethical questions
• Babbage's Analytical Engine proposed a general-purpose, programmable computing
machine -- metaphor for the brain
• 19th–20th cent.: many advances in logic formalisms, including Boole's algebra,
Frege's predicate calculus, Tarski's theory of reference
• 20th cent.: advent of digital computers in the late 1940's made AI a viable field
• Turing wrote seminal paper on thinking machines (1950)
Early Era of AI
• Birth of AI occurred when Marvin Minsky & John McCarthy organized the Dartmouth
Conference in 1956 and brought together researchers interested in "intelligent machines".
For the next 20 years, virtually all advances in AI were made by attendees
• John McCarthy
• LISP, application of logic to reasoning
• Marvin Minsky
• Popularized neural networks, slots and frames, The Society of Mind
• Claude Shannon
• Computer chess, Information theory, Open-loop 5-ball juggling
• Allen Newell and Herb Simon
• General Problem Solver
General Problem Solver
• Here is a trace of GPS solving the logic problem of transforming L1 = R*(-P => Q)
into L0 = (Q \/ P)*R (Newell & Simon, 1972, p. 420):

Goal 1: Transform L1 into L0
 Goal 2: Reduce difference between L1 and L0
  Goal 3: Apply R1 to L1
   Goal 4: Transform L1 into condition(R1)
   Produce L2: (-P => Q)*R
 Goal 5: Transform L2 into L0
  Goal 6: Reduce difference between left(L2) and left(L0)
   Goal 7: Apply R5 to left(L2)
    Goal 8: Transform left(L2) into condition(R5)
     Goal 9: Reduce difference between left(L2) and condition(R5)
     Rejected: No easier than Goal 6
   Goal 10: Apply R6 to left(L2)
    Goal 11: Transform left(L2) into condition(R6)
    Produce L3: (P \/ Q)*R
  Goal 12: Transform L3 into L0
   Goal 13: Reduce difference between left(L3) and left(L0)
    Goal 14: Apply R1 to left(L3)
     Goal 15: Transform left(L3) into condition(R1)
     Produce L4: (Q \/ P)*R
  Goal 16: Transform L4 into L0
  Identical, QED
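
The trace above is an instance of means-ends analysis: GPS selects an operator whose effect reduces the
difference between the current expression and the goal, and recursively sets up that operator's condition as a
subgoal. The following is a minimal Python sketch of that control loop, not Newell & Simon's original system;
the achieve function, the operator encoding, and the toy symbols are illustrative assumptions.

    # Means-ends analysis sketch: each operator lists its preconditions,
    # the facts it adds, and the facts it deletes.
    def achieve(state, goal, operators, depth=0):
        """Reach a state containing `goal` by recursively reducing differences."""
        if goal in state:
            return state
        for op in operators:
            if goal in op["adds"]:                      # op would reduce the difference
                new_state = state
                for pre in op["preconds"]:              # subgoal: satisfy op's condition
                    new_state = achieve(new_state, pre, operators, depth + 1)
                    if new_state is None:
                        break
                else:                                   # all preconditions achieved
                    print("  " * depth + "Apply " + op["name"])
                    return (new_state - op["deletes"]) | op["adds"]
        return None

    # Toy operators standing in for the rewrite rules of the logic task above.
    ops = [
        {"name": "R1", "preconds": {"L1"}, "adds": {"L2"}, "deletes": {"L1"}},
        {"name": "R6", "preconds": {"L2"}, "adds": {"L0"}, "deletes": {"L2"}},
    ]
    print(achieve({"L1"}, "L0", ops))                   # prints the applied rules, then {'L0'}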
Minsky’s First Neural Net Simulator
Early Era of AI
• 50s/60s: Early successes! AI can draw logical conclusions, prove some
theorems, create simple plans… Some initial work on neural networks…
• Led to overhyping: researchers promised funding agencies spectacular progress,
but started running into difficulties:
• Ambiguity: highly funded translation programs (Russian to English) were good at syntactic
manipulation but bad at disambiguation
• “The spirit is willing but the flesh is weak” becomes “The vodka is good but the meat is rotten”
• Scalability/complexity: early examples were very small, programs could not scale to bigger instances
• Limitations of representations used
Herbert Simon, 1957
• “It is not my aim to surprise or shock you--- but … there are now in the world
machines that think, that learn and that create. Moreover, their ability to do these
things is going to increase rapidly until---in a visible future---the range of problems
they can handle will be coextensive with the range to which the human mind has been
applied. More precisely: within 10 years a computer would be chess
champion, and an important new mathematical theorem would be
proved by a computer.”

• Simon’s prediction came true --- but 40 years later instead of 10


History of AI
• 1960's – failed to meet claims of 50's, problems turned out to be hard!
• so, backed up and focused on "micro-worlds"
• within limited domains, success in: reasoning, perception, understanding, …
• ANALOGY (Evans & Minsky): could solve IQ-test analogy puzzles
• STUDENT (Bobrow & Minsky): could solve algebraic word problems
• SHRDLU (Winograd): could understand commands and manipulate blocks with a simulated robot arm
• STRIPS (Nilsson & Fikes): problem-solver planner, controlled robot
"Shakey"
• Minsky & Papert (Perceptrons, 1969) demonstrated the limitations of single-layer neural nets
History of AI
• 1970's – results from micro-worlds did not easily scale up
• so, backed up and focused on theoretical foundations, learning/understanding
• conceptual dependency theory (Schank)
• frames (Minsky)
• machine learning: ID3 (Quinlan), AM (Lenat)
• practical success: expert systems
• DENDRAL (Feigenbaum): identified molecular structure
• MYCIN (Shortliffe & Buchanan): diagnosed infectious blood diseases
History of AI
• 1980's – BOOM TOWN!
• cheaper computing made AI software feasible
• success with expert systems, neural nets revisited, 5th Generation Project
• XCON (McDermott): saved DEC ~ $40M per year
• neural computing: back-propagation (Werbos), associative memory (Hopfield)
• logic programming, specialized AI technology seen as future
Post Early Era of AI
• 70s, 80s: Creation of expert systems (systems specialized for one
particular task based on experts’ knowledge), wide industry adoption
• Again, overpromising…
• … led to AI winter(s)
• Funding cutbacks, bad reputation
History of AI
• 1990's – again, failed to meet high expectations
• so, backed up and focused on: embedded intelligent systems, agents, …
• hybrid approaches: logic + neural nets + genetic algorithms + fuzzy + …
• CYC (Lenat): far-reaching project to capture common-sense reasoning
• Society of Mind (Minsky): intelligence is product of complex interactions of simple
agents
• Deep Blue (formerly Deep Thought): defeated world champion Kasparov in 1997
Modern AI
• More rigorous, scientific, formal/mathematical
• Fewer grandiose promises
• Divided into many subareas interested in particular aspects
• More directly connected to “neighboring” disciplines
• Theoretical computer science, statistics, economics, operations research, biology,
psychology/neuroscience, …
• Often leads to the question "Is this really AI?"
• Some senior AI researchers are calling for a re-integration of all these topics and a return to the
more grandiose goals of AI
IS AI POSSIBLE?
The Case of Thinking Machines
Can Machines Ever Think?
• In 1980, the philosopher Searle claimed to be able to prove that no computer program,
regardless of its complexity, could possibly think or understand, based on the following points:
• every operation that a computer is able to carry out can equally well be performed by a human
being working with paper and pencil in a purely mechanical, unintelligent way
• though such programs can show good performance, they cannot think or understand, any more than
a human operator who answers Chinese questions by following a rulebook without understanding
any Chinese
• If Strong AI is true, then there is a program for Chinese such that if any computing system runs that
program, that system thereby comes to understand Chinese.
• I could run a program for Chinese without thereby coming to understand Chinese.
• Therefore Strong AI is false.
Can Machines Ever Think?
• The Systems Reply
• The man in the room does not understand Chinese. The man is but a part, a central
processing unit (CPU), in a larger system. The larger system includes the huge
database, the memory (scratchpads) containing intermediate states, and the
instructions – the complete system that is required for answering the Chinese
questions. So the Systems Reply is that while the man running the program does not
understand Chinese, the system as a whole does.
Can Machines Ever Think?
• The Virtual Mind Reply
• the operator of the Chinese Room does not understand Chinese merely by running the paper machine. However
the Virtual Mind reply holds that what is important is whether understanding is created, not whether the Room
operator is the agent that understands.
• Unlike the Systems Reply, the Virtual Mind reply (VMR) holds that a running system may create new, virtual,
entities that are distinct from both the system as a whole, as well as from the sub-systems such as the CPU or
operator. In particular, a running system might create a distinct agent that understands Chinese.
• This virtual agent would be distinct from both the room operator and the entire system. The psychological traits,
including linguistic abilities, of any mind created by artificial intelligence will depend entirely upon the program
and the Chinese database, and will not be identical with the psychological traits and abilities of a CPU or the
operator of a paper machine, such as Searle in the Chinese Room scenario. According to the VMR, the mistake in
the Chinese Room Argument is to take the claim of strong AI to be "the computer understands Chinese" or "the
system understands Chinese". The claim at issue for AI should simply be whether "the running computer creates
understanding of Chinese".
Can Machines Ever Think?
• The Robot Reply
• The Robot Reply concedes Searle is right about the Chinese Room scenario: it shows that a computer trapped in
a computer room cannot understand language, or know what words mean.
• The Robot reply is responsive to the problem of knowing the meaning of the Chinese word for hamburger –
Searle’s example of something the room operator would not know. It seems reasonable to hold that most of us
know what a hamburger is because we have seen one, and perhaps even made one, or tasted one, or at least
heard people talk about hamburgers and understood what they are by relating them to things we do know by
seeing, making, and tasting.
• Given this is how one might come to know what hamburgers are, the Robot Reply suggests that we put a digital
computer in a robot body, with sensors, such as video cameras and microphones, and add effectors, such as
wheels to move around with, and arms with which to manipulate things in the world. Such a robot – a computer
with a body – might do what a child does, learn by seeing and doing. The Robot Reply holds that such a digital
computer in a robot body, freed from the room, could attach meanings to symbols and actually understand
natural language.
Can Machines Ever Think?
• The problem with Searle's reasoning is that it applies equally to the biological counterpart. At the neural level, the
human brain is also driven by electrochemical reactions, and each neuron automatically responds to its inputs
according to fixed laws. No single neuron is conscious, yet each contributes to thinking without understanding what is
going on in our mind. This does not prevent us from experiencing happiness, love, and irrational
behavior. With the emergence of artificial neural networks, the problem of artificial intelligence becomes even
more intriguing, because neural networks replicate the basic electrical behavior of the brain and provide the
proper support for realizing a processing mechanism similar to the one adopted by the brain.
• If we remove the structural diversity between biological and artificial brains, the question of artificial intelligence
can only become a religious one: if we believe that human intelligence is determined by divine
intervention, then clearly no artificial system can ever possess thinking ability. If instead we believe that human
intelligence is a natural property developed by complex brains, then the possibility of realizing an artificially
intelligent being remains open.
Ontogenetic and Phylogenetic Control for
Conscious Robots (Ali Raza, W. M. Qazi, 2011)
Artificial Emotions
The Case of Emotional Machines
Artificial Emotions
• Imagine a cold winter Monday morning. Someone hasn't slept enough and really
doesn't want to do any work. At college or at the office, he logs into his computer by telling it
"Good morning!". From the speaker's voice the computer can tell that for the speaker it is
not a good morning at all, and it reacts by showing the speaker something funny on its
screen or telling some jokes. Wouldn't that be nice?
• The key to making futuristic scenarios like this one a reality is the research field of
artificial emotions.
• In order to understand that the speaker is in a bad mood, the computer needs some concept
of emotions.
• If the research field of artificial emotions is able to make the computer understand what
emotion is, the computer may also react in an emotional way when it accesses data related
to human emotion, such as sounds, touch, images, or scenes.
Emotions
• In order to model emotions, we first need to define what an emotion is – and what it is not. The
definitions of emotion and mood used here are explained below, but there may be different conventions –
there are no universal definitions for these terms.
• Emotion: An emotion is defined as an internal mental and affective state.
• By this definition, e.g. pain is not an emotion, because it is a physical state, not a mental state. Similarly,
aggression is not an emotion because it is a behavioral state.
• There is no such thing as a neutral emotion; emotions are always positive or negative.
• Mood: The main difference between mood and emotion is that mood is an emotional state that lasts over a
comparatively long time, while emotions might be short-lived. Also, moods can have less specific causes
and are generally less extreme than emotions.
• As with emotions, moods can have a positive or a negative valence, but they can also be neutral.
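
As a rough illustration of these definitions, the sketch below models an affective state with a valence and a
duration, treating long-lasting states as moods and allowing only moods to be neutral. The class, the field
names, and the one-hour threshold are illustrative assumptions, not a standard model.

    from dataclasses import dataclass

    @dataclass
    class AffectiveState:
        label: str           # e.g. "joy", "anger", "boredom"
        valence: float       # negative (< 0), positive (> 0), or neutral (== 0)
        duration_s: float    # how long the state persists, in seconds

    def is_mood(state, threshold_s=3600.0):
        """Long-lasting affective states count as moods; short-lived ones as emotions."""
        return state.duration_s >= threshold_s

    def is_well_formed(state):
        """Per the definitions above, only a mood may be neutral; an emotion never is."""
        return state.valence != 0.0 or is_mood(state)

    joy = AffectiveState("joy", valence=0.8, duration_s=120.0)            # an emotion
    boredom = AffectiveState("boredom", valence=0.0, duration_s=7200.0)   # a neutral mood
    print(is_mood(joy), is_well_formed(joy))            # False True
    print(is_mood(boredom), is_well_formed(boredom))    # True True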
Will Machines Take Over the World?
