
CS305PC Introduction to Artificial Intelligence

Representation and Problem Solving


UNIT I
What is AI?
Foundation of AI
History of AI
Intelligent Agent
Problem formulation
Review of Tree and Graph Structures
State Space Representation
Search Graph and Search Tree
Textbook

Artificial Intelligence: A Modern Approach (AIMA)


(Second Edition) by Stuart Russell and Peter Norvig
What is Artificial Intelligence?
 Artificial Intelligence:
⚫ build and understand intelligent entities
 Intelligence:
⚫ “the capacity to learn and solve problems”
⚫ the ability to act rationally
Two main dimensions:
 Thought processes vs behavior
 Human-like vs rational-like
Views of AI fall into four categories/approaches:

                    Human-like          Rational-like
Thought processes:  Thinking humanly    Thinking rationally
Behavior:           Acting humanly      Acting rationally
Acting Humanly: Turing Test
(Can machines think? A. M. Turing, 1950)

An AI system passes the test if the human interrogator cannot tell which one is the machine.
Acting humanly: Turing Test

The Turing test → identified key research areas in AI.

To pass the test, the computer needs to possess:

 Natural Language Processing – to communicate successfully in a human language;
 Knowledge Representation – to store and manipulate information;
 Automated Reasoning – to use the stored information to answer questions and draw new conclusions;
 Machine Learning – to adapt to new circumstances and to detect and extrapolate patterns.
Total Turing Test:
 To pass the Total Turing Test, the computer additionally needs:
 Computer Vision – to perceive objects;
 Robotics – to manipulate objects and move about.


Thinking humanly: cognitive modeling
Requires scientific theories of the internal activities
of the brain. How do we validate them?
1) Cognitive Science (top-down) →
Predicting and testing behavior of human
subjects
– computer models + experimental
techniques from psychology
2) Cognitive Neuroscience (bottom-up) →
Direct identification from neurological data
Thinking rationally: "laws of thought"
Proposed by Aristotle;
Given the correct premises, it yields the correct
conclusion
Socrates is a man
All men are mortal
--------------------------
Therefore Socrates is mortal
Logic → Making the right inferences!
Acting rationally: rational agent
An agent is anything that can be viewed
as perceiving its environment through
sensors and acting upon that
environment through actuators.
Rational behavior: doing the right thing;
that which is expected to maximize goal
achievement, given the available
information;
Foundations of AI
 Philosophy – logic, methods of reasoning, mind vs. matter, foundations of learning and knowledge

 Mathematics – logic, probability, computation

 Economics – utility, decision theory

 Neuroscience – biological basis of intelligence (how does the brain process information?)

 Psychology – computational models of human intelligence (how do humans and animals think and act?)

 Computer engineering – how to build efficient computers

 Linguistics – rules of language, language acquisition (how does language relate to thought?)

 Control theory – design of dynamical systems that use a controller to achieve desired behavior
History of AI
 1943 McCulloch & Pitts “Boolean circuit model of brain”
 1950 Turing’s “Computing Machinery and Intelligence”
 1951 Minsky and Edmonds
• Built a neural net computer SNARC
• Used 3000 vacuum tubes and 40 neurons
The Birthplace of
"Artificial Intelligence", 1956

 1956 Dartmouth meeting: the name "Artificial Intelligence" adopted
 1956 Newell and Simon's Logic Theorist (LT) – proves theorems
Early enthusiasm, great expectations (1952-1969)

⚫ GPS – Newell and Simon – thinks like humans (1952)
⚫ Samuel's checkers program that learns (1952)
⚫ McCarthy – Lisp (1958)
⚫ Geometry theorem prover – Gelernter (1959)
⚫ Robinson's resolution (1963)
⚫ Slagle's SAINT solves calculus problems (1963)
⚫ Daniel Bobrow's STUDENT program solved algebra story problems (1964)
⚫ 1968 – Tom Evans's ANALOGY program solved geometric analogy problems that appear in IQ tests
 1966-1974 A dose of reality
⚫ Problems with computation
⚫ 1969: Minsky and Papert published the book Perceptrons, demonstrating the limitations of neural networks.
 1969-1979 Knowledge-based systems
⚫ 1969: DENDRAL – inferring molecular structures
Mycin – diagnosing blood infections
Logic programming languages such as Prolog and PLANNER became popular
Minsky developed frames as a representation and reasoning language.
 1980-present: AI becomes an industry
⚫ The Japanese government announced the Fifth Generation project to build intelligent computers
⚫ AI Winter – companies failed to deliver on extravagant promises
 1986-present: Return of neural networks
Much research on neural networks was done by psychologists
 1987-present: AI becomes a science
⚫ HMMs, planning, belief networks

Emergence of intelligent agents (1995-present)
o The agent architecture SOAR was developed
o The agents' environment is the Internet
o Web-based applications, search engines, recommender systems, websites
Intelligent Agents

 Agents and environments


 Rationality

 Nature of Environments

 Structure of Agents
Agents
 An agent is anything that can be viewed as
perceiving its environment through sensors
and acting upon that environment through
actuators
 Human agent:
sensors- eyes, ears, and other organs
actuators- hands, legs, mouth, and
other body parts
 Robotic agent:
Sensors - cameras and infrared range
finders
Actuators - motors
 an agent perceives its environment through
sensors
⚫ the complete set of inputs at a given time is
called a percept
⚫ the current percept, or a sequence of percepts, may influence the actions of an agent – the percept sequence
 The agent function maps from percept histories to actions: [f: P* → A]. The agent function is an abstract mathematical description.
 The agent function will be implemented by an agent program. The agent program is a concrete implementation running on the agent architecture.
Vacuum-cleaner world
 Percepts:
Location and status,
e.g., [A,Dirty]
 Actions:
Left, Right, Suck, NoOp

Example vacuum agent program:

function Vacuum-Agent([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
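
A minimal runnable sketch of the same agent in Python (the location and status strings mirror the pseudocode above and are only illustrative):

def vacuum_agent(percept):
    # percept is a (location, status) pair, e.g. ('A', 'Dirty')
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:
        return 'Left'

# vacuum_agent(('A', 'Dirty')) -> 'Suck'; vacuum_agent(('B', 'Clean')) -> 'Left'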
Rationality

 A rational agent is one that does the right


thing. Every entry in the table for the agent
function is filled out correctly.
 It is based on
⚫ performance measure
⚫ percept sequence
⚫ background knowledge
⚫ feasible actions
Omniscience, Learning and
Autonomy
 an omniscient agent deals with the actual
outcome of its actions
 a rational agent deals with the expected
outcome of actions
 a rational agent not only gathers information but also learns as much as possible from the percepts it receives.
 a rational agent should be autonomous – it should learn what it can to compensate for partial or incorrect prior knowledge.
Nature of Environments
Specifying the task environment

 Before we design an intelligent agent, we must specify its “task


environment”:
 Problem specification: Performance measure, Environment,
Actuators, Sensors (PEAS)
Example of Agent Types and their PEAS description:
 Example: automated taxi driver
⚫ Performance measure
• Safe, fast, legal, comfortable trip, maximize profits
⚫ Environment
• Roads, other traffic, pedestrians, customers
⚫ Actuators
• Steering wheel, accelerator, brake, signal, horn
⚫ Sensors
• Cameras, sonar, speedometer, GPS, odometer, engine
sensors, keyboard
 Example: Agent = Medical diagnosis system
Performance measure: Healthy patient, minimize costs, lawsuits
Environment: Patient, hospital, staff
Actuators: Screen display (questions, tests, diagnoses, treatments, referrals)
Sensors: Keyboard (entry of symptoms, findings, patient's answers)
 Example: Agent = Part-picking robot
Performance measure: Percentage of parts in correct bins
Environment: Conveyor belt with parts, bins
Actuators: Jointed arm and hand
Sensors: Camera, joint angle sensors
 Example: Agent = Interactive English tutor
Performance measure: Maximize student's score on test
Environment: Set of students
Actuators: Screen display (exercises, suggestions, corrections)
Sensors: Keyboard
 Example: Agent = Satellite image system
Performance measure: Correct image categorization
Environment: Downlink from satellite
Actuators: Display categorization of scene
Sensors: Color pixel array
Properties of Task Environment
 Fully observable (vs. partially observable): The agent's sensors
give it access to the complete state of the environment at each point
in time
e.g., an automated taxi does not have sensors to see what other drivers are thinking, so taxi driving is only partially observable.
 Deterministic (vs. stochastic): The next state of the environment is
completely determined by the current state and the agent’s action
⚫ Strategic: the environment is deterministic except for the actions
of other agents
e.g., the Vacuum world is deterministic while Taxi Driving is stochastic – one cannot exactly predict the behaviour of traffic.
 Episodic (vs. sequential): The agent's experience is divided into
atomic “episodes,” and the choice of action in each episode depends
only on the episode itself
 E.g., an agent sorting defective parts on an assembly line is episodic, while a taxi-driving agent or a chess-playing agent is sequential.
 Static (vs. dynamic): The environment is unchanged while an agent is
deliberating
⚫ Semidynamic: the environment does not change with the passage
of time, but the agent's performance score does
e.g., Taxi Driving is dynamic, a Crossword Puzzle solver is static, and chess played with a clock is semidynamic.
 Discrete (vs. continuous): The environment provides a fixed number of
distinct percepts, actions, and environment states
e.g. chess game has finite number of states
• Taxi Driving is continuous-state and continuous-time problem …
 Single agent (vs. multi-agent): An agent operating by itself in an
environment
e.g. An agent solving a crossword puzzle is in a single agent
environment
• Agent in chess playing is in two-agent environment
Task environment          Observable  Determ./     Episodic/    Static/   Discrete/    Agents
                                      stochastic   sequential   dynamic   continuous
Crossword puzzle          fully       determ.      sequential   static    discrete     single
Chess with a clock        fully       strategic    sequential   semi      discrete     multi
Poker                     partial     stochastic   sequential   static    discrete     multi
Backgammon                fully       stochastic   sequential   static    discrete     multi
Taxi driving              partial     stochastic   sequential   dynamic   continuous   multi
Medical diagnosis         partial     stochastic   sequential   dynamic   continuous   single
Image analysis            fully       determ.      episodic     semi      continuous   single
Part-picking robot        partial     stochastic   episodic     dynamic   continuous   single
Refinery controller       partial     stochastic   sequential   dynamic   continuous   single
Interactive English tutor partial     stochastic   sequential   dynamic   discrete     multi
Structure of Agents
 An agent is completely specified by the agent function
mapping percept sequences to actions.
 The agent program implements the agent function, mapping percept sequences to actions.
Agent = architecture + program.
Architecture = some computing device with physical sensors and actuators.
 The aim of AI is to design the agent program.
Table-Driven agent

Function Table-Driven-Agent(percept) returns an action
  static: percepts, a sequence, initially empty
          table, a table of actions, indexed by percept sequences, initially fully specified

  append percept to the end of percepts
  action <- Lookup(percepts, table)
  return action

The table-driven agent program is invoked for each new percept and returns an action each time. It keeps track of the percept sequence using its own private data structure.
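
A minimal Python sketch of the same idea, assuming the table is a dictionary keyed by complete percept sequences (the sample entries below are hypothetical):

def make_table_driven_agent(table):
    percepts = []                          # private percept history
    def agent(percept):
        percepts.append(percept)           # append percept to the end of percepts
        return table.get(tuple(percepts))  # Lookup(percepts, table)
    return agent

# Hypothetical table fragment for the two-square vacuum world
table = {(('A', 'Dirty'),): 'Suck',
         (('A', 'Clean'),): 'Right'}
agent = make_table_driven_agent(table)
# agent(('A', 'Dirty')) -> 'Suck'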
Table-lookup agent

 Drawbacks:
⚫ Huge table
⚫ Take a long time to build the table
⚫ No autonomy
⚫ Even with learning, need a long time to learn
the table entries.
 Example: let P be the set of possible percepts and T be the lifetime of the agent (the total number of percepts it will receive); the lookup table will then contain on the order of |P|^T entries.
 The table for the vacuum agent (VA) will contain more than 4^T entries (VA has 4 possible percepts).
 Four basic kinds of agent program are
⚫ Simple reflex agents
⚫ Model-based reflex agents
⚫ Goal-based agents
⚫ Utility-based agents

All of these can be turned into learning agents


Simple reflex agents
 Single current percept: the agent selects an action on the basis of the current percept, ignoring the rest of the percept history.
 Example: the vacuum agent (VA) is a simple reflex agent, because its decision is based only on the current location and on whether that location contains dirt.
 Rules relate
⚫ “State” based on percept
⚫ “action” for agent to perform
⚫ “Condition-action” rule:
If a then b: e.g.
vacuum agent (VA) : if in(A) and dirty(A), then vacuum
taxi driving agent (TA): if car-in-front-is-braking then initiate-
braking.
Agent program for a simple reflex agent

The vacuum agent program is very small compared to the corresponding table: it cuts down the number of possibilities from 4^T to 4. This reduction comes from ignoring the percept history.
Simple reflex agent program
Function Simple-Reflex-Agent(percept) returns an action
  static: rules, a set of condition-action rules

  state <- Interpret-Input(percept)
  rule <- Rule-Match(state, rules)
  action <- Rule-Action[rule]
  return action

A simple reflex agent. It acts according to the rule whose condition matches the current state, as defined by the percept.
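
A minimal Python sketch of this program, assuming rules are (condition, action) pairs where the condition is a predicate over the interpreted state (the vacuum rules below are illustrative):

def make_simple_reflex_agent(rules, interpret_input):
    def agent(percept):
        state = interpret_input(percept)     # Interpret-Input(percept)
        for condition, action in rules:      # Rule-Match(state, rules)
            if condition(state):
                return action                # Rule-Action[rule]
        return 'NoOp'
    return agent

rules = [(lambda s: s['status'] == 'Dirty', 'Suck'),
         (lambda s: s['location'] == 'A', 'Right'),
         (lambda s: s['location'] == 'B', 'Left')]
agent = make_simple_reflex_agent(rules,
                                 lambda p: {'location': p[0], 'status': p[1]})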
Schematic diagram of a Simple reflex agent

Limited intelligence: it fails if the environment is only partially observable (example: the vacuum-cleaner world).
Simple reflex agents
 Simple but very limited intelligence.
 The action does not depend on the percept history, only on the current percept.
 Therefore no memory requirements.
 Infinite loops
⚫ Suppose the vacuum cleaner cannot observe its location. What should it do given the percept Clean? Moving Left in A or Right in B leads to an infinite loop.
⚫ Possible solution: randomize the action.



Model-based reflex agents
 Solution to partial observability problems
⚫ Maintain state
• Keep track of the parts of the world it can't see now
• Maintain internal state that depends on the percept history
⚫ Update the previous state based on
• Knowledge of how the world changes, e.g. TA: an overtaking car will generally be closer behind than it was a moment ago.
• Knowledge of the effects of its own actions, e.g. TA: when the agent turns the steering wheel clockwise, the car turns to the right.
• => A model, called the "model of the world", implements the knowledge about how the world works.
Schematic diagram of a Model-based reflex agent

The agent maintains a description of the current world state by modeling how the world changes and how its own actions change the world.
Limitation: sometimes it is unclear what to do without a clear goal.
Model-based reflex agents
Function Model-Based-Reflex-Agent(percept) returns an action
  static: state, a description of the current world state
          rules, a set of condition-action rules
          action, the most recent action, initially none

  state <- Update-State(state, action, percept)
  rule <- Rule-Match(state, rules)
  action <- Rule-Action[rule]
  return action

A model-based reflex agent. It keeps track of the current state of the world using an internal model. It then chooses an action in the same way as the reflex agent.
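
A minimal Python sketch of the same program; the update_state function stands in for the agent's model of the world and is an assumption supplied by the caller:

def make_model_based_reflex_agent(rules, update_state):
    state, last_action = None, None          # internal state and most recent action
    def agent(percept):
        nonlocal state, last_action
        state = update_state(state, last_action, percept)   # Update-State(...)
        for condition, action in rules:                     # Rule-Match(state, rules)
            if condition(state):
                last_action = action
                return action
        last_action = 'NoOp'
        return last_action
    return agent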
Goal-based agents
• knowing state and environment? Enough?
– Taxi can go left, right, straight
• Have a goal
⚫ A destination to get to
 Uses knowledge about a goal to guide its
actions
⚫ E.g., Search, planning
 Goal-based Agents are much more flexible in
responding to a changing environment;
accepting different goals.
Goal-based agents
Goals provide reason to prefer one action over the other.
We need to predict the future: we need to plan & search
• A reflex agent brakes when it sees brake lights. A goal-based agent reasons:
– brake light -> the car in front is stopping -> I should stop -> I should apply the brake



Utility-based agents
 Goals are not always enough
⚫ Many action sequences get taxi to destination
⚫ Consider other things. How fast, how safe…..

 A utility function maps a state onto a real


number which describes the associated degree
of “happiness”, “goodness”, “success”.
 Where does the utility measure come from?
⚫ Economics: money.
⚫ Biology: number of offspring.
⚫ Your life?
Utility-based agents
Some solutions to goal states are better than others.
Which one is best is given by a utility function.
Which combination of goals is preferred?
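
A small sketch of how a utility function can rank alternative plans; the taxi weights below are made-up numbers, only meant to illustrate trading off time, risk and fuel:

def choose_plan(candidate_plans, utility):
    # pick the plan whose predicted outcome has the highest utility
    return max(candidate_plans, key=utility)

def taxi_utility(plan):
    # hypothetical weights: prefer fast, safe, cheap trips
    return -2.0 * plan['time'] - 10.0 * plan['risk'] - 1.0 * plan['fuel']

best = choose_plan([{'time': 20, 'risk': 0.1, 'fuel': 3.0},
                    {'time': 15, 'risk': 0.4, 'fuel': 4.0}], taxi_utility)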
Learning agents
How does an agent improve over time?
By monitoring its performance and suggesting better modeling, new action rules, etc.

(Diagram: the critic evaluates the current world state; the learning element changes the action rules; the performance element – the "old agent" – models the world and decides on the actions to be taken; the problem generator suggests explorations.)
Learning Agents can be divided into 4 conceptual
components:
1. Learning elements are responsible for
improvements
2. Performance elements are responsible for
selecting external actions (previous knowledge)
3. Critic tells the learning elements how well the
agent is doing with respect to a fixed performance
standard.
4. Problem generator is responsible for suggesting
actions that will lead to new and informative
experience.
Example :Automated Taxi driving
•The performance element consists of whatever collection of knowledge and
procedures the TA has for selecting its driving actions.

•The critic observes the world and passes information along to the learning
element. For example after the taxi makes a quick left turn across three lanes
the critic observes the shocking language used by other drivers. From this
experience the learning element is able to formulate a rule saying this was a
bad action, and the performance element is modified by installing this new rule.

•The problem generator may identify certain areas of behavior in need of


improvement and suggest experiments : such as testing the brakes on different
road surfaces under different conditions.

•The learning element can make changes to any of the knowledge components of the previous agent types: observation of how the world evolves between two states, observation of the results of actions (what my actions do) – e.g., learning from what happens when a strong brake is applied on a wet road.
Problem Formulation
Problem Solving agents
Example problems
Searching for solutions

Problem Solving agents:

1. Goal formulation: a set of one or more (desirable) world states.

2. Problem formulation: what actions and states to consider, given a goal and an initial state.

3. Search for a solution: given the problem, search for a solution --- a sequence of actions to achieve the goal starting from the initial state.

4. Execution of the solution.
Example: Path Finding problem

 Formulate goal:
⚫ be in Bucharest (Romania)
 Formulate problem:
⚫ state: be in a city (20 world states)
⚫ action: drive between a pair of connected cities (direct road)
 Find solution:
⚫ a sequence of cities leading from the start to the goal state, e.g., Arad, Sibiu, Fagaras, Bucharest
 Execution:
⚫ drive from Arad to Bucharest according to the solution

Initial state: Arad; goal state: Bucharest.
Environment: fully observable (map), deterministic, and the agent knows the effects of each action.
Well defined Problems and
solutions
A problem can be defined by 4 components:
1.Initial state: starting point from which the agent sets
out
2.Operator: description of an action
State space: all states reachable from the initial
state by any sequence of actions
Path: sequence of actions leading from one state
to another
3.Goal test: determines if a given state is the goal
state
4.Path cost function: assign a numeric cost to
each path.
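
The four components translate directly into a small data structure. A hedged Python sketch, with a fragment of the route-finding problem as an illustrative instance (only a few roads are listed):

class Problem:
    def __init__(self, initial_state, successors, goal_test, step_cost):
        self.initial_state = initial_state
        self.successors = successors   # operator: state -> list of (action, next_state)
        self.goal_test = goal_test     # is a given state the goal state?
        self.step_cost = step_cost     # cost of one action; a path cost is the sum of step costs

# Illustrative fragment of the Romania road map
roads = {'Arad': {'Sibiu': 140, 'Timisoara': 118, 'Zerind': 75},
         'Sibiu': {'Fagaras': 99, 'Rimnicu Vilcea': 80}}
route_problem = Problem(
    initial_state='Arad',
    successors=lambda s: [(city, city) for city in roads.get(s, {})],
    goal_test=lambda s: s == 'Bucharest',
    step_cost=lambda s, a, s2: roads[s][s2])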
Example Problems
 Toy problems
⚫ Illustrate/test various problem-solving methods
⚫ Concise, exact description
⚫ Can be used to compare performance
⚫ Examples: 8-puzzle, 8-queens problem, Cryptarithmetic,
Vacuum world, Missionaries and cannibals.
 Real-world problem
⚫ More difficult
⚫ No single, agreed-upon specification (state, successor function, edge cost)
⚫ Examples: Route finding, VLSI layout, Robot navigation,
Assembly sequencing

Toy problems:
Simple Vacuum World
 states
⚫ two locations
⚫ dirty, clean
 initial state
⚫ any legitimate state
 successor function (operators)
⚫ left, right, suck
 goal test
⚫ all squares clean
 path cost
⚫ one unit per action

Properties: discrete locations, discrete dirt (binary), deterministic


The 8-puzzle

[Note: optimal solution of n-Puzzle family is NP-hard]


8-Puzzle
 states
⚫ location of tiles (including blank tile)
 initial state
⚫ any legitimate configuration
 successor function (operators)
⚫ move tile
⚫ alternatively: move blank
 goal test
⚫ state matches the given goal configuration of tiles
 path cost
⚫ one unit per move

Properties: abstraction leads to discrete configurations, discrete moves, deterministic
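
A possible successor function for the 8-puzzle, assuming a state is a tuple of 9 tiles in row-major order with 0 standing for the blank (a sketch using the "move blank" formulation):

def successors_8puzzle(state):
    result = []
    i = state.index(0)                       # position of the blank
    row, col = divmod(i, 3)
    for action, (dr, dc) in [('Up', (-1, 0)), ('Down', (1, 0)),
                             ('Left', (0, -1)), ('Right', (0, 1))]:
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:        # the blank must stay on the board
            j = 3 * r + c
            tiles = list(state)
            tiles[i], tiles[j] = tiles[j], tiles[i]   # slide the neighbouring tile
            result.append((action, tuple(tiles)))
    return result

# successors_8puzzle((1, 2, 3, 4, 0, 5, 6, 7, 8)) yields four (action, state) pairs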


8-Queens

Incremental formulation:
⚫ states: arrangement of up to 8 queens on the board
⚫ initial state: empty board
⚫ successor function (operators): add a queen to any square
⚫ goal test: all 8 queens on the board, no queen attacked
⚫ path cost: irrelevant (all solutions equally valid)

Complete-state formulation:
◆ states: arrangement of 8 queens on the board
◆ initial state: all 8 queens on the board
◆ successor function (operators): move a queen to a different square
◆ goal test: no queen attacked
◆ path cost: irrelevant (all solutions equally valid)
Real-world problems
 Route finding
⚫ Defined in terms of locations and transitions along links between
them
⚫ Applications: routing in computer networks, automated travel
advisory systems, airline travel planning systems
 states
⚫ locations
 initial state
⚫ starting point
 successor function (operators)
⚫ move from one location to another
 goal test
⚫ arrive at a certain location
 path cost
⚫ may be quite complex
• money, time, travel comfort, scenery, ...
 Touring and traveling salesperson problems
⚫ “Visit every city on the map at least once”
⚫ Needs information about the visited cities
⚫ Goal: Find the shortest tour that visits all cities
⚫ NP-hard, but a lot of effort has been spent on improving the
capabilities of TSP algorithms
⚫ Applications: planning movements of automatic circuit board drills
 VLSI layout
⚫ positioning millions of components and connections on a chip to
minimize area, circuit delays, etc.
⚫ Place cells on a chip so they don’t overlap and there is room for
connecting wires to be placed between the cells
 Robot navigation
⚫ Generalization of the route finding problem
• No discrete set of routes
• Robot can move in a continuous space
• Infinite set of possible actions and states
 Assembly sequencing
⚫ Automatic assembly of complex objects
⚫ The problem is to find an order in which to assemble
the parts of some object
 Protein design
⚫ Find a sequence of amino acids that will fold into a three-dimensional protein with the right properties to cure some disease.
Searching for Solutions

Search through the state space.
We will consider search techniques that use an explicit search tree that is generated by the initial state and the successor function.
Search tree example

At each step a node is selected for expansion and its successor nodes are added to the tree.

Note: Arad is added (again) to the tree, since it is reachable from Sibiu. This is not necessarily a problem, but in Graph-Search we will avoid it by maintaining an "explored" list.
An informal description of the general Tree search algorithm

initialize (initial node)
loop
  choose a node for expansion according to the strategy
  goal node? → done
  expand the node with the successor function
states vs. nodes
 A state is a (representation of) a physical configuration
 A node is a data structure with 5 components: state, parent node, action, path cost, depth
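
A minimal sketch of such a node structure in Python (the field names are only illustrative):

class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0.0):
        self.state = state            # the state this node corresponds to
        self.parent = parent          # the node that generated this node
        self.action = action          # the action applied to the parent to reach this state
        self.path_cost = path_cost    # cost of the path from the initial state to this node
        self.depth = 0 if parent is None else parent.depth + 1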

General Tree Search Algorithm
function TREE-SEARCH(problem, fringe) returns solution
fringe := INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
loop do
if EMPTY?(fringe) then return failure
node := REMOVE-FIRST(fringe)
if GOAL-TEST[problem] applied to STATE[node] succeeds
then return SOLUTION(node)
fringe := INSERT-ALL(EXPAND(node, problem), fringe)

◆ generate the node from the initial state of the problem
◆ repeat
◆ return failure if there are no more nodes in the fringe
◆ examine the current node; if it's a goal, return the solution
◆ expand the current node, and add the new nodes to the fringe
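
A hedged Python sketch of TREE-SEARCH, reusing the Node and Problem sketches above (both are assumptions introduced earlier); here the fringe is a plain list and popping from the front gives a first-in-first-out (breadth-first) strategy:

def tree_search(problem, fringe):
    fringe.append(Node(problem.initial_state))               # INSERT(MAKE-NODE(...), fringe)
    while fringe:                                             # loop until the fringe is empty
        node = fringe.pop(0)                                  # REMOVE-FIRST(fringe)
        if problem.goal_test(node.state):
            return node                                       # SOLUTION(node)
        for action, next_state in problem.successors(node.state):    # EXPAND(node, problem)
            cost = node.path_cost + problem.step_cost(node.state, action, next_state)
            fringe.append(Node(next_state, node, action, cost))
    return None                                               # failure

# With a complete road map, tree_search(route_problem, []) would return a node whose state is 'Bucharest'.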
Measuring problem-solving performance

An algorithm's performance can be evaluated in 4 ways:

1. Completeness: does it always find a solution if one exists?
2. Time complexity: how long does it take to find a solution?
3. Space complexity: how much memory does it need to perform the search?
4. Optimality: does the strategy find the optimal solution?

 Time and space complexity are measured in terms of
⚫ b: the branching factor (maximum number of successors of any node) of the search tree
⚫ d: the depth of the shallowest goal node
⚫ m: the maximum length of any path in the state space (may be ∞)
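
For example, a strategy that generates every node down to the depth d of the shallowest goal touches roughly 1 + b + b^2 + ... + b^d nodes; with b = 10 and d = 5 that is already 111,111 nodes, which is why both b and d strongly affect time and space complexity.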
