Introduction to AI & Intelligent Agents

What is Intelligence?
 Intelligence:
   “the capacity to learn and solve problems” (Webster’s Dictionary)
   in particular,
     the ability to solve novel problems
     the ability to act rationally
     the ability to act like humans

 Artificial Intelligence
   build and understand intelligent entities or agents
   2 main approaches: “engineering” versus “cognitive modeling”

What is Artificial Intelligence?
 John McCarthy, who coined the term Artificial Intelligence in 1956,
defines it as "the science and engineering of making intelligent
machines", especially intelligent computer programs.
 Artificial Intelligence (AI) is the intelligence of machines and
the branch of computer science that aims to create it.
 Intelligence is the computational part of the ability to achieve
goals in the world. Varying kinds and degrees of intelligence occur
in people, many animals and some machines.
 AI is the study of the mental faculties through the use of
computational models.
 AI is the study of how to make computers do things which,
at the moment, people do better.
 AI is the study and design of intelligent agents, where an
intelligent agent is a system that perceives its environment and
takes actions that maximize its chances of success.

What’s involved in Intelligence?
 Ability to interact with the real world
   to perceive, understand, and act
   e.g., speech recognition and understanding and synthesis
   e.g., image understanding
   e.g., ability to take actions, have an effect
 Reasoning and Planning
   modeling the external world, given input
   solving new problems, planning, and making decisions
   ability to deal with unexpected problems, uncertainties
 Learning and Adaptation
   we are continuously learning and adapting
   our internal models are always being “updated”
   e.g., a baby learning to categorize and recognize animals


AI Problems
 Mundane Tasks
 Formal Tasks
 Expert Tasks


Academic Disciplines relevant to AI
 Philosophy: logic, methods of reasoning, mind as physical system, foundations of learning, language, rationality
 Mathematics: formal representation and proof, algorithms, computation, (un)decidability, (in)tractability
 Probability/Statistics: modeling uncertainty, learning from data
 Economics: utility, decision theory, rational economic agents
 Neuroscience: neurons as information processing units
 Psychology/Cognitive Science: how do people behave, perceive, process cognitive information, represent knowledge
 Computer engineering: building fast computers
 Control theory: design systems that maximize an objective function over time
 Linguistics: knowledge representation, grammars

Can we build hardware as complex as the brain?
 How complicated is our brain?
   a neuron, or nerve cell, is the basic information processing unit
   estimated to be on the order of 10^12 neurons in a human brain
   many more synapses (10^14) connecting these neurons
   cycle time: 10^-3 seconds (1 millisecond)
 How complex can we make computers?
   10^8 or more transistors per CPU
   supercomputer: hundreds of CPUs, 10^12 bits of RAM
   cycle times: order of 10^-9 seconds
 Conclusion
   YES: in the near future we can have computers with as many basic processing elements as our brain, but with
     far fewer interconnections (wires or synapses) than the brain
     much faster updates than the brain
   but building hardware is very different from making a computer behave like a brain!

Can Computers beat Humans at Chess?
 Chess playing is a classic AI problem
   well-defined problem
   very complex: difficult for humans to play well
 [Chart: chess ratings, 1966–1997 — Deep Thought, Deep Blue, Deep Fritz (German), Deep Junior (Israeli) programs vs. the human world champion]
 Conclusion:
   YES: today’s computers can beat even the best human

Can Computers Talk?
 This is known as “speech synthesis”
   translate text to phonetic form
     e.g., “fictitious” -> fik-tish-es
   use pronunciation rules to map phonemes to actual sound
     e.g., “tish” -> sequence of basic audio sounds
 Difficulties
   sounds made by this “lookup” approach sound unnatural
   sounds are not independent
     e.g., “act” and “action”
   modern systems (e.g., at AT&T) can handle this pretty well
   a harder problem is emphasis, emotion, etc.
     humans understand what they are saying
     machines don’t: so they sound unnatural
 Conclusion:
   NO, for complete sentences
   YES, for individual words

Can Computers Recognize Speech?
 Speech Recognition:
   mapping sounds from a microphone into a list of words
   classic problem in AI, very difficult
     “Lets talk about how to wreck a nice beach”
     (I really said “________________________”)
 Recognizing single words from a small vocabulary
   systems can do this with high accuracy (order of 99%)
   e.g., directory inquiries
     limited vocabulary (area codes, city names)
     computer tries to recognize you first, if unsuccessful hands you over to a human operator
     saves millions of dollars a year for the phone companies

Dialogues from the Big Hero 6 movie:
 Baymax: [appears behind Hiro] Hiro?
 Hiro: [screams, then sees who it is] You gave me a heart attack!
 Baymax: [rubs his hands together] My hands are equipped with defibrillators. Clear.
   [He moves his hands toward Hiro]
 Hiro: [alarmed] STOP, STOP, STOP! It's just an expression!

Recognizing human speech (ctd.)
 Recognizing normal speech is much more difficult
   speech is continuous: where are the boundaries between words?
     e.g., “John’s car has a flat tire”
   large vocabularies
     can be many thousands of possible words
 We can use context to help figure out what someone said
   e.g., hypothesize and test
   try telling a waiter in a restaurant: “I would like some dream and sugar in my coffee”
 Background noise, other speakers, accents, colds, etc.
   on normal speech, modern systems are only about 60-70% accurate
 Conclusion:
   NO, normal speech is too complex to accurately recognize
   YES, for restricted problems (small vocabulary, single speaker)

Can Computers Understand speech?
 Understanding is different to recognition:
   “Time flies like an arrow”
   assume the computer can recognize all the words
   how many different interpretations are there?
     1. time passes quickly like an arrow?
     2. command: time the flies the way an arrow times the flies
     3. command: only time those flies which are like an arrow
     4. “time-flies” are fond of arrows
   only 1. makes any sense
   but how could a computer figure this out?
     clearly humans use a lot of implicit commonsense knowledge in communication
 Conclusion: NO, much of what we say is beyond the capabilities of a computer to understand at present

Can Computers Learn and Adapt?
 Learning and Adaptation
   consider a computer learning to drive on the freeway
     we could teach it lots of rules about what to do
     or we could let it drive and steer it back on course when it heads for the embankment
     systems like this are under development (e.g., Daimler Benz)
     e.g., RALPH at CMU
       in the mid 90’s it drove 98% of the way from Pittsburgh to San Diego without any human assistance
   machine learning allows computers to learn to do things without explicit programming
     many successful applications
     requires some “set-up”: does not mean your PC can learn to forecast the stock market or become a brain surgeon
 Conclusion: YES, computers can learn and adapt, when presented with information in the appropriate way

Can Computers “see”?
 Recognition v. Understanding (like Speech)
   Recognition and Understanding of Objects in a scene
     look around this room
     you can effortlessly recognize objects
     human brain can map 2d visual image to 3d “map”
 Why is visual recognition a hard problem?
 Conclusion:
   mostly NO: computers can only “see” certain types of objects under limited circumstances
   YES for certain constrained problems (e.g., face recognition)

Can computers plan and make optimal decisions?
 Intelligence
   involves solving problems and making decisions and plans
   e.g., you want to take a holiday in Brazil
     you need to decide on dates, flights
     you need to get to the airport, etc.
   involves a sequence of decisions, plans, and actions
 What makes planning hard?
   the world is not predictable:
     your flight is canceled or there’s a backup on the 405
   there are a potentially huge number of details
     do you consider all flights? all dates?
     no: commonsense constrains your solutions
   AI systems are only successful in constrained planning problems
 Conclusion: NO, real-world planning and decision-making is still beyond the capabilities of modern computers
   exception: very well-defined, constrained problems

Summary of State of AI Systems in Practice
 Speech synthesis, recognition and understanding
   very useful for limited vocabulary applications
   unconstrained speech understanding is still too hard
 Computer vision
   works for constrained problems (hand-written zip-codes)
   understanding real-world, natural scenes is still too hard
 Learning
   adaptive systems are used in many applications: have their limits
 Planning and Reasoning
   only works for constrained problems: e.g., chess
   real-world is too complex for general systems
 Overall:
   many components of intelligent systems are “doable”
   there are many interesting research problems remaining

Intelligent Systems in Your Everyday Life
 Post Office
   automatic address recognition and sorting of mail
 Banks
   automatic check readers, signature verification systems
   automated loan application classification
 Customer Service
   automatic voice recognition
 The Web
   identifying your age, gender, location, from your Web surfing
   automated fraud detection
 Digital Cameras
   automated face detection and focusing
 Computer Games
   intelligent characters/agents

Hard or Strong AI
 Generally, artificial intelligence research aims to create AI that can replicate human intelligence completely.
 Strong AI refers to a machine that approaches or supersedes human intelligence:
   ◊ if it can do typically human tasks,
   ◊ if it can apply a wide range of background knowledge, and
   ◊ if it has some degree of self-consciousness.
 Strong AI aims to build machines whose overall intellectual ability is indistinguishable from that of a human being.

Soft or Weak AI
 Weak AI refers to the use of software to study or accomplish specific
problem solving or reasoning tasks that do not encompass the full range of
human cognitive abilities.

 Example: a chess program such as Deep Blue.

 Weak AI does not achieve self-awareness and does not demonstrate the full
range of human-level cognitive abilities; it is merely an intelligent, specific
problem-solver.

What is Artificial Intelligence?
 Thought processes vs. behavior
 Human-like vs. rational-like
 How to simulate human intellect and behavior by a
machine.
 Mathematical problems (puzzles, games, theorems)
 Common-sense reasoning
 Expert knowledge: lawyers, medicine, diagnosis
 Social behavior
 Web and online intelligence
 Planning for assembly and logistics operations
 Things we call “intelligent” if done by a human.

What is AI?
Views of AI fall into four categories:
 Thinking humanly: the cognitive modeling approach
 Thinking rationally: the “laws of thought” approach
 Acting humanly: the Turing Test approach
 Acting rationally: the rational agent approach

The textbook advocates “acting rationally”

What is Artificial Intelligence? (John McCarthy, Basic Questions)
 What is artificial intelligence?
   It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.
 Yes, but what is intelligence?
   Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines.
 Isn't there a solid definition of intelligence that doesn't depend on relating it to human intelligence?
   Not yet. The problem is that we cannot yet characterize in general what kinds of computational procedures we want to call intelligent. We understand some of the mechanisms of intelligence and not others.
 More in: http://www-formal.stanford.edu/jmc/whatisai/node1.html

What is Artificial Intelligence?
 Thought processes
   “The exciting new effort to make computers think ... machines with minds, in the full and literal sense” (Haugeland, 1985)
 Behavior
   “The study of how to make computers do things at which, at the moment, people are better.” (Rich and Knight, 1991)
 Activities
   The automation of activities that we associate with human thinking, activities such as decision-making, problem solving, learning … (Bellman)

AI as “Raisin Bread”
 Esther Dyson [predicted] AI would [be] embedded in main-stream, strategically important systems, like raisins in a loaf of raisin bread.
 Emphasis shifts away from replacing expensive human experts with stand-alone expert systems toward main-stream computing systems that create strategic advantage.
 Time has proven Dyson's prediction correct.
 Many of today's AI systems are connected to large data bases, they talk to networks, they deal with legacy data, they handle noise and data corruption with style and grace, they are implemented in popular languages, and they run on standard operating systems.
 Humans usually are important contributors to the total solution.
 Adapted from Patrick Winston, Former Director, MIT AI Laboratory

The Turing Test (Can Machines Think? A. M. Turing, 1950)
 Requires:
   Natural language
   Knowledge representation
   Automated reasoning
   Machine learning
   (vision, robotics) for full test

Acting humanly: Turing test
 Turing (1950) “Computing Machinery and Intelligence”
   “Can machines think?”
   “Can machines behave intelligently?”
 Operational test for intelligent behavior: the Imitation Game
 Suggests major components required for AI:
   knowledge representation
   reasoning
   language/image understanding
   learning
 Question: is it important that an intelligent system act like a human?

◊ 3 rooms contain: a person, a computer, and an interrogator.
◊ The interrogator can communicate with the other 2 by teletype (to avoid the machine imitating the appearance or voice of the person).
◊ The interrogator tries to determine which is the person and which is the machine.
◊ The machine tries to fool the interrogator into believing that it is the human, and the person also tries to convince the interrogator that he or she is the human.
◊ If the machine succeeds in fooling the interrogator, then conclude that the machine is intelligent.
Goal is to develop systems that are human-like.

Acting/Thinking Humanly/Rationally
 Acting Humanly: Turing test (1950)
 Methods for Thinking Humanly:
   Introspection, the general problem solver (Newell and Simon 1961)
   Cognitive sciences
     through psychological experiments
     through brain imaging
 Thinking rationally:
   Logic
   Problems: how to represent and reason in a domain
 Acting rationally:
   Agents
     operate autonomously
     perceive their environment
     persist over prolonged time period
     adapt to change
     create and pursue goals

Thinking humanly: Cognitive modeling
 Cognitive Science approach
   Try to get “inside” our minds
   E.g., conduct experiments with people to try to “reverse-engineer” how we reason, learn, remember, predict
 Problems
   Humans don’t behave rationally
   The reverse engineering is very hard to do
   The brain’s hardware is very different to a computer program

Thinking rationally: “laws of thought”
 Represent facts about the world via logic
 Use logical inference as a basis for reasoning about these facts
 Can be a very useful approach to AI
   E.g., theorem-provers
 Limitations
   Does not account for an agent’s uncertainty about the world
     E.g., difficult to couple to vision or speech systems
   Has no way to represent goals, costs, etc. (important aspects of real-world environments)

Acting rationally: rational agent
 Decision theory / Economics
   Set of future states of the world
   Set of possible actions an agent can take
   Utility = gain to an agent for each action/state pair
 The right thing: that which is expected to maximize goal achievement, given the available information
 An agent acts rationally if it selects the action that maximizes its “utility”
   or expected utility if there is uncertainty
 Emphasis is on autonomous agents that behave rationally (make the best predictions, take the best actions)
   on average over time
   within computational limitations (“bounded rationality”)
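As a rough illustration of this view (a sketch only; the actions, outcome probabilities, and utilities below are invented for the example, not taken from the slides), acting rationally can be pictured in Python as picking the action with the highest expected utility:

    # Illustrative only: expected-utility maximization over a made-up outcome model.
    # outcomes[action] = list of (probability, utility) pairs.
    def expected_utility(action, outcomes):
        return sum(p * u for p, u in outcomes[action])

    def rational_action(outcomes):
        # "do the right thing": choose the action with maximum expected utility
        return max(outcomes, key=lambda a: expected_utility(a, outcomes))

    outcomes = {
        "take_freeway": [(0.8, 10), (0.2, -5)],    # usually fast, sometimes jammed
        "take_surface_streets": [(1.0, 4)],        # slower but predictable
    }
    print(rational_action(outcomes))   # -> "take_freeway" (EU 7.0 vs 4.0)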

Agents and environments
 Compare: standard embedded system structure
   sensors -> ADC -> microcontroller / ASIC / FPGA -> DAC -> actuators
 ASIC: application-specific integrated circuit
 FPGA: field programmable gate array

Agents
 An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators
 Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators
 Robotic agent: cameras and infrared range finders for sensors; various motors for actuators

Agents and environments
 Percept: agent’s perceptual inputs at an instant
 The agent function maps from percept sequences to actions: f: P* -> A
 The agent program runs on the physical architecture to produce f
 agent = architecture + program
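A minimal sketch of “agent = architecture + program” (illustrative Python, not the textbook’s code): the architecture feeds percepts to the agent program, which implements the agent function f: P* -> A over the percept sequence seen so far.

    # Sketch: the agent program maps the percept sequence seen so far to an action.
    class Agent:
        def __init__(self, program):
            self.percepts = []        # percept sequence (P*)
            self.program = program    # the agent program

        def step(self, percept):      # called by the architecture each time step
            self.percepts.append(percept)
            return self.program(self.percepts)   # action = f(percept sequence)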

Vacuum-cleaner world
 Percepts: location and state of the environment, e.g., [A, Dirty], [B, Clean]
 Actions: Left, Right, Suck, NoOp
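One possible encoding of this world’s percepts and actions (the names are just an illustrative convention):

    # Vacuum-cleaner world: a percept is a (location, status) pair.
    LOCATIONS = ("A", "B")
    STATUSES = ("Clean", "Dirty")
    ACTIONS = ("Left", "Right", "Suck", "NoOp")

    percept = ("A", "Dirty")   # the agent is in square A, and A is dirty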

Rational agents
 Rational agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, based on the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
 Performance measure: an objective criterion for success of an agent's behavior
   E.g., the performance measure of a vacuum-cleaner agent could be amount of dirt cleaned up, amount of time taken, amount of electricity consumed, amount of noise generated, etc.
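For instance, a vacuum-world performance measure might score the agent's history as follows (a sketch; the reward and cost weights are invented for illustration):

    # Hypothetical performance measure: reward dirt cleaned, charge a small
    # cost per time step for time/electricity. Weights are made up.
    def performance(history):
        score = 0
        for location, status, action in history:    # one record per time step
            if action == "Suck" and status == "Dirty":
                score += 10                          # dirt cleaned up
            score -= 1                               # time / electricity cost
        return score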

Rational agents
 Rationality is distinct from omniscience (all-knowing with infinite knowledge)
 Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration)
 An agent is autonomous if its behavior is determined by its own percepts & experience (with ability to learn and adapt) without depending solely on built-in knowledge

Task Environment
 Before we design an intelligent agent, we must specify its “task environment”:
 PEAS:
   Performance measure
   Environment
   Actuators
   Sensors

PEAS
 Example: Agent = taxi driver
   Performance measure: safe, fast, legal, comfortable trip, maximize profits
   Environment: roads, other traffic, pedestrians, customers
   Actuators: steering wheel, accelerator, brake, signal, horn
   Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
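A PEAS description is just a four-part record; as a sketch, the taxi-driver example above could be written down like this (the field names are illustrative):

    # PEAS = Performance measure, Environment, Actuators, Sensors.
    from dataclasses import dataclass

    @dataclass
    class PEAS:
        performance: list
        environment: list
        actuators: list
        sensors: list

    taxi_driver = PEAS(
        performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
        environment=["roads", "other traffic", "pedestrians", "customers"],
        actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
        sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
                 "engine sensors", "keyboard"],
    )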

PEAS
 Example: Agent = medical diagnosis system
   Performance measure: healthy patient, minimize costs, lawsuits
   Environment: patient, hospital, staff
   Actuators: screen display (questions, tests, diagnoses, treatments, referrals)
   Sensors: keyboard (entry of symptoms, findings, patient's answers)

PEAS
 Example: Agent = part-picking robot
   Performance measure: percentage of parts in correct bins
   Environment: conveyor belt with parts, bins
   Actuators: jointed arm and hand
   Sensors: camera, joint angle sensors

Environment types
 Fully observable (vs. partially observable): An agent's sensors give it access to the complete state of the environment at each point in time.
 Deterministic (vs. stochastic): The next state of the environment is completely determined by the current state and the action executed by the agent. (If the environment is deterministic except for the actions of other agents, then the environment is strategic.)
 Episodic (vs. sequential): An agent’s action is divided into atomic episodes. Decisions do not depend on previous decisions/actions.

Environment types
 Static (vs. dynamic): The environment is unchanged while an agent is deliberating. (The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does.)
 Discrete (vs. continuous): A limited number of distinct, clearly defined percepts and actions. How do we represent or abstract or model the world?
 Single agent (vs. multi-agent): An agent operating by itself in an environment. Does the other agent interfere with my performance measure?
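These six properties can be recorded per task environment; a small sketch (field names are illustrative) using the same vocabulary as the table that follows:

    # Environment properties as a simple record.
    from dataclasses import dataclass

    @dataclass
    class EnvironmentType:
        observable: str   # "fully" or "partial"
        dynamics: str     # "determ.", "strategic", or "stochastic"
        episodes: str     # "episodic" or "sequential"
        change: str       # "static", "semi", or "dynamic"
        states: str       # "discrete" or "continuous"
        agents: str       # "single" or "multi"

    taxi_driving = EnvironmentType("partial", "stochastic", "sequential",
                                   "dynamic", "continuous", "multi")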

Examples of task environments and their characteristics:

Task                    Observable  Determ./     Episodic/   Static/   Discrete/    Agents
                                    stochastic   sequential  dynamic   continuous
Crossword puzzle        fully       determ.      sequential  static    discrete     single
Chess with a clock      fully       strategic    sequential  semi      discrete     multi
Poker                   partial     stochastic   sequential  static    discrete     multi
Backgammon              fully       stochastic   sequential  static    discrete     multi
Taxi driving            partial     stochastic   sequential  dynamic   continuous   multi
Medical diagnosis       partial     stochastic   sequential  dynamic   continuous   single
Image analysis          fully       determ.      episodic    semi      continuous   single
Part-picking robot      partial     stochastic   episodic    dynamic   continuous   single
Refinery controller     partial     stochastic   sequential  dynamic   continuous   single
Interactive Eng. tutor  partial     stochastic   sequential  dynamic   discrete     multi

Agent types
 Table-driven agents
 Simple reflex agents
 Model-based reflex agents
 Goal-based agents
 Utility-based agents
 Learning agents

Table-Driven Agent
 table lookup for entire percept history
 the table encodes the current state of the decision process

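In code, the table-driven idea looks roughly like this (a sketch for the vacuum world; a real table would need one entry for every possible percept sequence, which is what makes the approach impractical):

    # Sketch of a table-driven agent: keep the entire percept history and
    # look the sequence up in a (potentially enormous) table.
    def make_table_driven_agent(table):
        percepts = []
        def program(percept):
            percepts.append(percept)
            return table.get(tuple(percepts), "NoOp")   # default if sequence unknown
        return program

    table = {
        (("A", "Dirty"),): "Suck",
        (("A", "Clean"),): "Right",
        (("A", "Clean"), ("B", "Dirty")): "Suck",
        # ...an entry for every possible percept sequence
    }
    agent = make_table_driven_agent(table)
    print(agent(("A", "Dirty")))   # -> "Suck"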

Simple reflex agents
 NO MEMORY
 Fails if environment is partially observable
 Example: vacuum cleaner world

Simple reflex agents
 Example 1: (figure)
 Example 2: if car-in-front-is-braking then initiate-braking

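For the vacuum world, a simple reflex agent is a handful of condition-action rules over the current percept only (a minimal sketch, no memory):

    # Simple reflex vacuum agent: the action depends only on the current
    # percept (location, status); there is no internal state.
    def reflex_vacuum_agent(percept):
        location, status = percept
        if status == "Dirty":
            return "Suck"
        elif location == "A":
            return "Right"
        else:                      # location == "B"
            return "Left"

    print(reflex_vacuum_agent(("A", "Dirty")))   # -> "Suck"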

Model-based reflex agents
 Model the state of the world by:
   modeling how the world changes
   describing how its actions change the world
   -> current world state
 This can work even with partial information
 It is unclear what to do without a clear goal

Model-based reflex agents
 Example:
   “how the world evolves independently of the agent”
     an overtaking car generally will be closer behind than it was a moment ago
     after driving for five minutes northbound on the freeway, one is usually about five miles north of where one was five minutes ago
   “how the agent’s own actions affect the world”
     when the agent turns the steering wheel clockwise, the car turns to the right
 The details of how models and states are represented vary widely depending on the type of environment and the particular technology used in the agent design.

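A rough sketch of the model-based structure (the update_state and rules functions are placeholders standing in for the world model and the condition-action rules; their details depend on the environment):

    # Sketch of a model-based reflex agent: keep internal state, updated
    # from the last action and the new percept via a model of the world.
    def make_model_based_agent(update_state, rules, initial_state):
        memory = {"state": initial_state, "last_action": None}
        def program(percept):
            # revise the internal picture of the world using the model
            memory["state"] = update_state(memory["state"],
                                           memory["last_action"], percept)
            action = rules(memory["state"])      # match state against rules
            memory["last_action"] = action
            return action
        return program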

Goal-based agents
 Goals provide a reason to prefer one action over another.
 We need to predict the future: we need to plan & search.
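One way to picture a goal-based agent is as search over action sequences until a goal state is reached; a tiny breadth-first sketch, assuming we are given a transition model result(state, action) and hashable states:

    # Goal-based agent as planning/search: find a sequence of actions whose
    # predicted outcome satisfies the goal.
    from collections import deque

    def plan(start, goal_test, actions, result):
        frontier = deque([(start, [])])
        visited = {start}
        while frontier:
            state, path = frontier.popleft()
            if goal_test(state):
                return path                      # action sequence reaching the goal
            for a in actions:
                nxt = result(state, a)           # predicted next state
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, path + [a]))
        return None                              # no plan found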

Utility-based agents
 Some solutions to goal states are better than others.
 Which one is best is given by a utility function.
 Which combination of goals is preferred?
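Sketched in code (names assumed, not a fixed API): a utility-based agent uses its model to predict the state each action leads to and prefers the action whose outcome has the highest utility.

    # Utility-based choice among actions: rank predicted outcomes by utility.
    def utility_based_action(state, actions, result, utility):
        return max(actions, key=lambda a: utility(result(state, a)))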

Learning agents
 How does an agent improve over time?
   By monitoring its performance and suggesting better modeling, new action rules, etc.
 Critic: evaluates current world state
 Learning element: changes action rules
 Performance element (“old agent”): model world and decide on actions to be taken
 Problem generator: suggests explorations
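A very rough, purely schematic sketch of the learning loop: the performance element picks actions, a critic scores the outcome, the learning element nudges the agent's preferences, and occasional random choices play the role of the problem generator (all names and numbers here are illustrative):

    # Schematic learning agent: adjust per-action preferences from feedback.
    import random

    def make_learning_agent(actions, critic, explore=0.1, rate=0.1):
        value = {a: 0.0 for a in actions}            # learned preferences
        def program(percept):
            if random.random() < explore:            # problem generator: explore
                action = random.choice(actions)
            else:                                    # performance element: exploit
                action = max(value, key=value.get)
            feedback = critic(percept, action)       # critic evaluates the outcome
            value[action] += rate * (feedback - value[action])   # learning element
            return action
        return program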

Summary
 Conceptions of AI span two major axes:
   Thinking vs. Acting
   Human-like vs. Rational
 Textbook (and this course) adopt Acting Rationally
 Esther Dyson: AI as raisin bread
   small sweet nuggets of intelligent control points
   embedded in the matrix of an engineered system
 “Rational Agent” as the organizing theme
   acts to maximize expected performance measure
   Task Environment: PEAS
   Environment types: yield design constraints