
Chapter 2

Intelligent Agent

Berhanu F.
Intelligent Agents

● I want to develop an agent that will:
  – Clean my house, filter information, cook when I don't want to, wash my clothes, take notes in a meeting, handle my emails, fix my car (or take it to be fixed), etc.
  – i.e., do the things that I don't feel like doing

● AI is the science of building agents (machines) that act rationally with respect to a goal.
● In the Acting Rationally approach, AI is viewed as the study and construction of rational agents.
Agent
● An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators
● An agent is something that perceives its env't through sensors and acts upon that env't through effectors
● The agent is assumed to exist in an environment in which it perceives and acts
● An agent is rational if it does the right thing to achieve the specified goal
Agent

             Human beings          Agents
Sensors      Eyes, Ears, Nose      Cameras, Scanners, Mic, infrared range finders
Effectors    Hands, Legs, Mouth    Various motors (artificial hand, artificial leg), Speakers, Radio
Agent
●Percepts: the agent's perceptual input at any
given instant
●Percept sequence: the complete history of
everything the agent has ever perceived
●An agent's choice of action at any given instant can
depend on the entire percept sequence observed to
date, but not on anything it hasn't perceived
●Mathematically, an agent's behavior is described
by the agent function that maps any given percept
sequence to an action
Agent

● The agent function maps any given percept sequence to an action: [f: P* → A]
  – The function is an abstract mathematical description
● The agent program runs on the physical architecture to produce f
  – The agent program is a concrete implementation, running within some physical system
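As a minimal illustration (not from the slides; the names and types are assumed), the agent function can be viewed in Python as any mapping from percept sequences to actions:

from typing import Sequence, Tuple

Percept = Tuple[str, str]  # e.g. (location, status) in the vacuum world
Action = str

def agent_function(percept_sequence: Sequence[Percept]) -> Action:
    """Abstract agent function f: P* -> A.

    Maps the complete percept history to an action; a concrete agent
    program is a finite implementation of (part of) this mapping.
    """
    raise NotImplementedError  # filled in per agent, e.g. by a table or rules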
Agent: vacuum cleaner

● The vacuum cleaner agent:


–Perceives location and contents (clean or dirty)
–Actions: move left, move right, suck up the dirt,
do nothing
Agent: vacuum cleaner

● Agent function
Percept sequence Action
[A, Clean] Right
[A, Dirty] Suck
[B, Clean] Left
[B, Dirty] Suck
[A, Clean], [A, Clean] Right
[A, Clean], [A, Dirty] Suck
... …
[A, Clean], [A, Clean], [A, Clean] Right
[A, Clean], [A, Clean], [A, Dirty] Suck
… …
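A minimal Python sketch of this tabulated agent function (a partial table; the function and variable names are assumptions, not from the slides):

# Partial lookup table: percept sequences (tuples) mapped to actions.
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}

def lookup(percept_sequence):
    """Return the tabulated action for the percept sequence seen so far."""
    return table.get(tuple(percept_sequence))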
Agent: vacuum cleaner

● Various ways of filling in the action column => various vacuum cleaner agents

● Agent Program
function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
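A direct, runnable Python rendering of the same program (a sketch; the pseudocode above is the slides' version):

def reflex_vacuum_agent(percept):
    """Reflex agent for the two-square vacuum world.

    percept is a (location, status) pair, e.g. ("A", "Dirty").
    """
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"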
Examples of agents

Agent type           Percepts                Actions                     Goals                   Environment
Medical diagnosis    Symptoms, patient's     Questions, tests,           Healthy patients,       Patient, hospital
system               answers                 treatments                  minimize costs
Interactive          Typed words,            Write exercises,            Maximize student's      Set of students,
English tutor        questions               suggestions, corrections    score on exams          materials
Part-picking         Pixels of varying       Pick up parts and           Place parts in          Conveyor belts
robot                intensity               sort into bins              correct bins            with parts
Rationality
● An agent should strive to “do the right thing”, based on what it can perceive
● A rational agent is one that does the right thing
  – Every entry in the table for the agent function is filled out correctly
● What is the right thing?
  – Can be answered by considering the consequences of the agent's behavior
Rationality - cont.
● An agent generates a sequence of actions => causes the environment to go through a sequence of states
  – If the sequence is desirable, then the agent has performed well
● The notion of desirability is captured by a performance measure that evaluates any given sequence of environment states
  – Performance measure – an objective criterion for the success of an agent's behavior
  – A rational agent should strive to maximize the performance measure
  – Performance measures for the VC agent?
Rationality - cont.
● The performance measure of a vacuum cleaner agent could be the amount of:
  – Dirt cleaned up
  – Time taken
  – Electricity consumed
  – Noise generated, etc.
● No one fixed performance measure suits all tasks and agents
  – A designer will devise one appropriate to the circumstances
Rationality - cont.
–Performance measures have to be defined according
to what one actually wants in the environment,
rather than according to how one thinks the agent
should behave
● Rationality depends on:
–The performance measure that defines the criterion
of success
–The agent's prior knowledge of the environment
–The actions that the agent can perform
–The agent's percept sequence to date
Rationality - cont.

● Rational Agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has
Rationality and Learning
●A rational agent has to also learn from what
it perceives
–The agent's initial configuration could reflect some
prior knowledge of the environment, but as the
agent gains experience this may be modified and
augmented
–Sometimes the environment is completely known a
priori
●In this case the agent need not perceive or
learn; it simply acts correctly
–Such agents are fragile
Rationality and Autonomy
● A rational agent should be autonomous
  – It should learn what it can to compensate for partial or incorrect prior knowledge
  – If an agent relies on the prior knowledge of its designer rather than its own percepts, it lacks autonomy
● Should we provide prior knowledge?
  – If we do not provide any prior knowledge, the agent will act randomly
  => it would be reasonable to provide an AI agent with some initial knowledge as well as an ability to learn
PEAS

● PEAS – Performance measure, Environment, Actuators, Sensors
  – Grouped together as the task environment
● In order to design an appropriate AI agent, we need to specify the task environment
● Consider the task of designing an automated taxi
PEAS - cont.

● Performance measure?
  – Destination – getting to the correct destination
  – Fast – minimizing the trip time and cost
  – Profit – minimizing fuel consumption, wear and tear
  – Legality – minimizing violations of traffic laws
  – Safety – maximizing passengers' safety and comfort
  – etc.
PEAS - cont.

● Environment?
  – Streets – such as alleys, highways, etc.
  – Traffic
  – Pedestrians
  – Police cars
  – Passengers
  – etc.
PEAS - cont.

● Actuators?
  – Steering wheel
  – Accelerator
  – Brake
  – Horn
  – Display or speech synthesizer
PEAS - cont.
● Sensors?
  – Video camera
  – Accelerometer
  – Gauges – such as speedometer
  – Engine sensors, fuel sensor
  – Keyboard or microphone
  – GPS – so as not to get lost, etc.
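One hypothetical way (not from the slides) to record a PEAS specification in code is a simple dataclass; all names here are illustrative:

from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    """Task environment: Performance measure, Environment, Actuators, Sensors."""
    performance: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

automated_taxi = PEAS(
    performance=["correct destination", "fast", "legal", "safe", "profitable"],
    environment=["streets", "traffic", "pedestrians", "police cars", "passengers"],
    actuators=["steering wheel", "accelerator", "brake", "horn", "display"],
    sensors=["video camera", "accelerometer", "gauges", "GPS", "microphone"],
)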
Environment Types
● Task environment
● Environment type determines the agent design
● Fully observable, partially observable, and unobservable environments
  – Fully observable env't – the agent's sensors give it access to the complete state of the environment
    ● The sensors detect all aspects that are relevant to the choice of action
    ● Relevance depends on the performance measure
Environment Types - cont.

● Partially observable env't
  – Noisy or inaccurate sensors
  – Parts of the state are missing from the sensor data
    ● e.g. a VC agent with only a local dirt sensor
● Unobservable env't
  – An agent with no sensors at all
Environment Types - cont.
● Deterministic vs Stochastic
  – Deterministic env't – the next state of the env't is completely determined by the current state and the action executed by the agent
    ● e.g. the VC agent env't
  – Stochastic / non-deterministic / uncertain env't
    ● partially observable / not fully observable env'ts
    ● e.g. the taxi driving agent env't
  – Stochastic vs non-deterministic?
    ● Stochastic – uncertainty about outcomes is quantified in terms of probabilities
Environment Types - cont.
● Episodic vs Sequential
  – Episodic env't – the agent's experience is divided into atomic episodes
    ● In each episode the agent receives a percept and then performs a single action
    ● The next episode does not depend on the actions taken in previous episodes
    ● e.g. an agent spotting defective parts on an assembly line – each decision depends only on the current part
  – Sequential env't – the current decision could affect future decisions; short-term actions can have long-term consequences
    ● e.g. chess playing, taxi driving
Environment Types - cont.
● Static vs Dynamic
  – Dynamic env't – changes while the agent is deliberating
    ● e.g. taxi driving
    ● The env't is in effect continuously asking the agent what it wants to do; if it has not decided yet, that counts as deciding to do nothing
  – Static env't
    ● Easy to deal with
      – The agent need not keep looking at the world while it is deciding on an action
      – The agent need not worry about the passage of time
    ● e.g. crossword puzzles
Environment Types - cont.
● Semi-dynamic – the environment itself does not change through time, but the agent's performance score does
  – e.g. chess when played with a clock
Environment Types - cont.
● Discrete vs Continuous
  – Applies to:
    ● the state of the env't,
    ● the way time is handled, and
    ● the percepts and actions of the agent
  – A discrete env't is characterized by a limited number of distinct states and clearly defined percepts and actions
  – Continuous env't
    ● Chess playing agent?
    ● Taxi driving agent?
Environment Types - cont.

● Known vs Unknown
  – This distinction refers not to the env't itself but to the agent's (or designer's) state of knowledge about the “laws of physics” of the environment
  – Known env't – the outcomes for all actions are given
  – Unknown env't – the agent will have to learn how it works in order to make good decisions
Environment Types - cont.
● Single-agent vs Multi-agent environments
  – Single-agent env't – an agent operating by itself in the environment
    ● e.g. the VC agent
  – Multi-agent env't
    ● e.g. the environment of an agent playing chess
      – Maximizing one's own performance measure means minimizing the opponent's performance measure; thus a competitive multi-agent env't
    ● The taxi driving environment
      – Avoiding collisions maximizes the performance measures of all agents, hence a partially cooperative multi-agent env't
      – Using a parking lot, by contrast, is partially competitive
The Structure of Agents
● The goal of AI is to design an agent program
  – The agent program implements the agent function
  – The agent program will run on some sort of computing device with sensors and actuators, called the architecture
  – Agent = architecture + program
    ● The program has to be appropriate for the architecture
    ● If the program recommends the action Walk, the architecture should have legs
The Structure of Agents – cont.
● The tasks of the architecture:
  – It makes the percepts from the sensors available to the program,
  – It runs the program, and
  – It feeds the program's action choices to the actuators
● The architecture can be a PC, or a robotic car with several onboard computers, cameras and other sensors
Agent Programs

● Skeleton of the agent programs:
  – They take the current percept as input from the sensors and return an action to the actuators
  – If the agent's actions need to depend on the entire percept sequence, the agent will have to remember the percepts
Agent program types

● Based on the methods used for selecting actions, agent programs can be categorized as:
  – Table-driven agents
  – Simple reflex agents
  – Model-based reflex agents
  – Goal-based agents
  – Utility-based agents
Table driven agent
● An agent program that keeps track of the percept sequence and then uses it to index into a table of actions to decide what to do
  – The table represents the agent function that the agent program embodies
● To build a rational agent this way, we must construct a table that contains the appropriate action for every possible percept sequence, as sketched below
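A minimal Python sketch of such a program (the structure follows the standard table-driven-agent pseudocode; the names are assumptions):

percepts = []  # the percept sequence observed so far (persistent state)

def table_driven_agent(percept, table):
    """Append the new percept and look the full sequence up in the table.

    table maps tuples of percepts to actions; it must contain an entry
    for every possible percept sequence, which is what makes this
    approach infeasible in practice.
    """
    percepts.append(percept)
    return table.get(tuple(percepts))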
Table driven agent – cont.
● Drawbacks:
  – Huge table
  – Takes a long time to build the table
  – No autonomy
  – Even if the environment is simple enough to yield a feasible table size, the designer still has no guidance about how to fill in the table entries
Simple reflex agents
● Select actions on the basis of the current percept, ignoring the rest of the percept history
● Example:
function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
Simple reflex agents – cont.
●A more general reflex agents work by finding a
rule whose condition matches the current situation (as
defined by the percept) and then doing the action
associated with that rule
–e.g.If the car in front brakes and its brake lights come on, one should
notice and initiate braking

●Some processing is done on the visual input to


establish the condition “the car in front is
braking”. This condition triggers the action
“initiate braking”. Such a connection is called a
condition-action rule
–If car-in-front-is-braking then initiate-braking
–Humans also have many such conditions – learned/innate
e.g blinking
Simple reflex agent – cont.

function SIMPLE-REFLEX-AGENT(percept) returns an action
  static: rules, a set of condition-action rules
  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action
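A runnable Python sketch of the same program (the rule representation and INTERPRET-INPUT are assumptions for illustration):

def simple_reflex_agent(percept, rules, interpret_input):
    """Generic simple reflex agent.

    rules: list of (condition, action) pairs, where condition is a
    predicate over the interpreted state.
    interpret_input: builds a state description from the percept.
    """
    state = interpret_input(percept)
    for condition, action in rules:
        if condition(state):  # RULE-MATCH: first rule whose condition holds
            return action
    return None  # no rule matched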
Simple reflex agents – cont.

● Advantage: simple
● Drawback: limited intelligence
  – Correct decisions can be made only if the env't is fully observable
  – Infinite loops can arise if the env't is partially observable
    ● To escape from infinite loops, the agent can randomize its actions
Model based reflex agent
● A modification of the reflex agent to work in partially observable env'ts
  – By keeping track of the part of the world it can't see now
● That means the agent maintains some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state
● A reflex agent with internal state
Model based reflex agent - cont.
● Updating the internal state information as time goes by requires two kinds of knowledge:
  – how the world evolves independently of the agent
  – how the agent's own actions affect the world
● Together this knowledge is called a model of the world; a model-based agent is one that uses such a model, as sketched below
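A compact Python sketch of this loop (UPDATE-STATE and the rule representation are assumptions, following the standard pseudocode structure):

def make_model_based_reflex_agent(update_state, rules, initial_state):
    """Build an agent program that maintains internal state.

    update_state(state, last_action, percept) applies the model of the
    world: how the world evolves on its own, and how the agent's own
    actions affect it.
    rules: list of (condition, action) pairs over the internal state.
    """
    state = initial_state
    last_action = None

    def agent(percept):
        nonlocal state, last_action
        state = update_state(state, last_action, percept)
        for condition, action in rules:
            if condition(state):
                last_action = action
                return action
        last_action = None
        return None

    return agent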
Goal based agent
● Knowledge about the current state of the environment is not always enough to decide what to do
  – For example, at a road junction, the taxi can turn left, turn right, or go straight on. The correct decision depends on where the taxi is trying to get to
● Besides a current state description, the agent needs some sort of goal information that describes desirable situations, e.g. being at the passenger's destination
● It combines the goal information with the model (the same information as was used in the model-based reflex agent) to choose actions that achieve the goal
● This involves consideration of the future, as the sketch below illustrates
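A minimal one-step-lookahead sketch of the idea (predict and is_goal are assumed names; real goal-based agents typically use full search or planning):

def goal_based_agent(state, actions, predict, is_goal):
    """Choose an action whose predicted outcome satisfies the goal.

    predict(state, action) uses the model to forecast the next state;
    is_goal(state) tests whether a state is a desirable (goal) state.
    """
    for action in actions:
        if is_goal(predict(state, action)):
            return action
    return None  # no single action reaches the goal; search would be needed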
Utility based agent
● Goals alone are not enough to generate high-quality behavior in most environments
  – For example, many action sequences will get the taxi to its destination (thereby achieving the goal), but some are quicker, safer, more reliable, or cheaper than others
  – Goals just provide a crude distinction between happy and unhappy states
● Utility based agents allow a comparison of different world states according to exactly how happy they would make the agent
Utility based agent - cont.
● It uses a model of the world, along with a utility function that measures its preferences among states of the world. It then chooses the action that leads to the best expected utility, as sketched below
  – The utility function is essentially an internalization of the performance measure
  – If the internal utility function and the external performance measure are in agreement, then an agent that chooses actions to maximize its utility will be rational according to the external performance measure
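A sketch of expected-utility action selection (predict_outcomes and utility are assumed names, not from the slides):

def utility_based_agent(state, actions, predict_outcomes, utility):
    """Pick the action with the highest expected utility.

    predict_outcomes(state, action) yields (probability, next_state)
    pairs from the model; utility(state) scores how desirable a state is.
    """
    def expected_utility(action):
        return sum(p * utility(s)
                   for p, s in predict_outcomes(state, action))

    return max(actions, key=expected_utility)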
THANK YOU!
