
16CS314 - Artificial Intelligence
PEAS
 E.g., the task of designing an automated taxi:
 Performance measure?? safety, destination, legality, comfort
 Environment?? US streets/freeways, pedestrians, weather
 Actuators?? steering, accelerator, brake, horn, speaker/display
 Sensors?? video, accelerometers, gauges, engine sensors, keyboard, GPS
PEAS

Agent type: Taxi driver
 Performance measure: safe, fast, legal, comfortable trip, maximize profits
 Environment: roads, other traffic, pedestrians, customers
 Actuators: steering, accelerator, brake, signal, horn, display
 Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard, accelerometer
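A minimal sketch of how such a PEAS description could be written down in Python; the PEAS class and its field names are illustrative, not from any established library:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A PEAS description of a task environment (illustrative)."""
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

# The automated-taxi example from the slide above.
taxi_driver = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn", "display"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer",
             "engine sensors", "keyboard", "accelerometer"],
)
```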
PROPERTIES

• Fully observable vs Partially Observable


• Deterministic vs Stochastic
• Episodic vs Sequential
• Static vs Dynamic
• Discrete vs Continuous
• Single agent vs Multi agent
Fully observable vs Partially observable

 Fully observable
 The agent's sensors give it complete access to the state of the environment.

 Partially observable
 Noise or inaccurate sensors may cause the agent to miss some state information.
Fully observable

 Examples
 Puzzle game
 Image analysis
Partially observable

 Example
 Card games - the agent can't see the other players' cards.
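A minimal sketch of the difference, with a made-up card-game state: a fully observable percept exposes the whole state, while a partially observable one hides the opponent's cards:

```python
# Illustrative world state for a two-player card game.
STATE = {"my_cards": ["AS", "KD"], "opponent_cards": ["QH", "2C"]}

def fully_observable_percept(state: dict) -> dict:
    # The agent's sensors report the complete state.
    return dict(state)

def partially_observable_percept(state: dict) -> dict:
    # The opponent's cards are hidden from the agent's sensors.
    percept = dict(state)
    percept["opponent_cards"] = None
    return percept
```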
Deterministic vs Stochastic

 Deterministic
 The next state follows completely from the current state and the agent's action, so the agent can take its next action (e.g., process the remaining part of an image) based on current knowledge alone.

 Stochastic
 The environment changes with random probability; the agent must look at the goal and use all current and previous percepts to decide its action.
Deterministic - Example
 Video analysis

Stochastic - Example
 Car driving
 Boat driving - the next action cannot be based on the current state alone.
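A minimal sketch of this distinction, using made-up integer states and actions: a deterministic transition function returns one fixed next state, while a stochastic one samples from a distribution over next states:

```python
import random

def deterministic_step(state: int, action: int) -> int:
    # The next state is a single, fixed function of (state, action).
    return state + action

def stochastic_step(state: int, action: int) -> int:
    # (state, action) yields a distribution over next states: here the
    # intended move succeeds with probability 0.8, otherwise the agent slips.
    outcomes = [state + action, state]
    return random.choices(outcomes, weights=[0.8, 0.2])[0]
```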
Episodic vs Sequential

 Episodic
 The agent's experience is divided into atomic episodes; in each episode the agent perceives and then performs a single action.
 Previous episodes do not affect the current action.

 Sequential
 The current decision could affect all future decisions.
Episodic - Examples
 An agent finding a defective part of an assembled computer: it inspects the current part and takes an action that does not depend on previous decisions.
 Blood testing for a patient
 Card games
Sequential - Examples
 Chess - the agent takes actions based on previous decisions.
 Chess with a clock
 Refinery controller
Static vs Dynamic

 Static
 The environment does not change while the agent is taking actions, so the agent need not worry about changes around it; such environments are easier to tackle.

 Example
 8-queens puzzle
Static vs Dynamic

 Dynamic
 The environment keeps changing continuously, which forces the agent to be more attentive when deciding how to act.

 Example
 Driving a boat (a big wave can come, or it can get more windy)
Discrete vs Continuous

 Discrete
 Has a fixed, finite set of discrete states over time, and each state has associated percepts and actions.

 Example
 Tic-tac-toe - every state is stable, and its associated percept is the outcome of some action.
Discrete vs Continuous

 Continuous
 The environment is not stable at any given point in time; it changes constantly, so the agent must learn continuously and keep making decisions.

 Example
 Flight controller
Single agent vs Multi agent

 Single agent
 A single, well-defined agent operates alone.

 Example
 Boat driving (a single agent perceives and acts)
Single agent vs Multi agent

 Multi agent
 Several agents, or several groups of agents, work in the same environment and take decisions.

1. Multi-agent independent environment
 Many agents in a maze game, each acting on its own.
Single agent vs Multi agent

2. Multi-agent cooperative environment
 Many agents working together to achieve a goal.
 Football

3. Multi-agent competitive environment
 Many agents working against each other.
 Trading agents
Single agent vs Multi agent

4. Multi-agent antagonistic environment
 Multiple agents work against each other, but one side (an agent or team of agents) has a negative goal.
 War games
Types of agent

1. Simple Reflex Agent
2. Model Based Reflex Agent
3. Goal Based Agent
4. Utility Based Agent
1. Simple Reflex Agent

• Example
• ATM agent system: if the PIN matches the given account number, then the customer gets money.
1. Simple Reflex Agent

 Ignores the rest of the percept history and acts only on the basis of the current percept.
 Uses condition-action rules: a condition-action rule maps a state (i.e., a condition) to an action, as sketched below.
 If the condition is true, the action is taken; otherwise it is not.
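A minimal sketch of a simple reflex agent in Python; the rules and the percept format are made up for illustration:

```python
# Condition-action rules: each pairs a test on the current percept with an action.
RULES = [
    (lambda percept: percept["pin_ok"], "dispense_money"),
    (lambda percept: not percept["pin_ok"], "reject_card"),
]

def simple_reflex_agent(percept: dict) -> str:
    # Acts on the current percept only; no percept history is kept.
    for condition, action in RULES:
        if condition(percept):
            return action
    return "no_op"

# The ATM example from the slide above as a condition-action rule.
print(simple_reflex_agent({"pin_ok": True}))   # -> dispense_money
```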
1. Simple Reflex Agent

Properties
• Very limited intelligence.
• No knowledge of non-perceptual parts of the state.
• The rule table is usually too big to generate and store.
• If any change occurs in the environment, the collection of rules needs to be updated.
Model Based Reflex Agent

• Example
• A car-driving agent that maintains its own internal state and then takes action according to how the environment appears to it.
Model Based Reflex Agent
• It works by finding a rule whose condition matches the current situation.
• It can handle partially observable environments by using a model of the world.
• The agent has to keep track of an internal state, adjusted by each percept, that depends on the percept history.
Model Based Reflex Agent
Updating the state requires information about:

• how the world evolves independently of the agent,
• how the agent's actions affect the world.
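A minimal sketch of the resulting update-then-act loop; the model object with its update and apply_action methods is a made-up placeholder for a real world model:

```python
def model_based_reflex_agent(percept, state, model, rules):
    # Fold the new percept into the internal state using the world model
    # (how the world evolves and how actions affect it).
    state = model.update(state, percept)
    # Pick the first condition-action rule matching the internal state.
    for condition, action in rules:
        if condition(state):
            # Record the predicted effect of the chosen action.
            state = model.apply_action(state, action)
            return action, state
    return "no_op", state
```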


Goal Based Agent

• Example
• An agent searching for a solution to the 8-queens puzzle.
Goal Based Agent
• Agents take decisions based on how far they currently are from their goal (a description of desirable situations).
• This gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state.
• The knowledge that supports its decisions is represented explicitly and can be modified, which makes these agents more flexible.
Goal Based Agent
• They usually require search and planning, as sketched below.
• The goal-based agent's behavior can easily be changed.
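A minimal sketch of goal-directed action selection with one step of lookahead; result (the agent's model of action outcomes) and goal_test are assumed to be given:

```python
def goal_based_agent(state, actions, result, goal_test):
    # Simulate each action and prefer one that reaches a goal state.
    for action in actions:
        if goal_test(result(state, action)):
            return action
    # A real agent would search or plan further ahead here;
    # this sketch just falls back to the first available action.
    return actions[0] if actions else "no_op"
```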
Utility Based Agent

• Example
• A military planning robot that provides a certain plan of action to be taken.
• Its environment is very complex, and the expected performance is also high.
Utility Based Agent
• When there are multiple possible alternatives, utility-based agents are used to decide which one is best.
• They choose actions based on a preference (utility) for each state.
Utility Based Agent
• Utility describes how “happy” the agent is.
• Because of the uncertainty in the world, a utility agent chooses the action that maximizes the expected utility, as sketched below.
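A minimal sketch of expected-utility maximization; the transition_model (returning (probability, next_state) pairs for an action) and the utility function are made-up placeholders:

```python
def expected_utility(action, state, transition_model, utility):
    # Utility of each possible outcome, weighted by its probability.
    return sum(p * utility(next_state)
               for p, next_state in transition_model(state, action))

def utility_based_agent(state, actions, transition_model, utility):
    # Choose the action with the highest expected utility.
    return max(actions, key=lambda a: expected_utility(a, state, transition_model, utility))
```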
Properties of Environments

 Accessible / Inaccessible
 If an agent's sensors give it access to the complete state of the environment needed to choose an action, the environment is accessible.
 Such environments are convenient, since the agent is freed from the task of keeping track of changes in the environment.

 Deterministic / Nondeterministic
 An environment is deterministic if the next state of the environment is completely determined by the current state of the environment and the action of the agent.
 In an accessible and deterministic environment the agent need not deal with uncertainty.
Contd..
 Episodic / Non-episodic
 An episodic environment means that subsequent episodes do not depend on what actions occurred in previous episodes.
 Such environments do not require the agent to plan ahead.



Properties of Environments

 Static / Dynamic
 An environment which does not change while the agent is thinking is static.
 In a static environment the agent need not worry about the passage of time while it is thinking, nor does it have to observe the world while thinking.

 Discrete / Continuous
 If the number of distinct percepts and actions is limited, the environment is discrete; otherwise it is continuous.
Contd..
 With / Without rational adversaries
 If an environment does not contain other rationally thinking, adversarial agents, the agent need not worry about strategic, game-theoretic aspects of the environment.
 As an example of a game with a rational adversary, try the Prisoner's Dilemma.

