
Agent

Artificial Intelligence
By
Malik Abdul Manan
Today's topics
• Agent
• Rational agent
• PEAS Model
• Types of environment
• Types of Agent
– Simple reflex agents;
– Model-based reflex agents;
– Goal-based agents; and
– Utility-based agents
Agent

An agent is anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through
actuators.

• We use the term percept to refer to the agent's perceptual inputs at
any given instant.
• An agent's percept sequence is the complete
history of everything the agent has ever perceived.
• In general, an agent’s choice of action at any given instant can depend
on the entire percept sequence observed to date, but not on anything
it hasn’t perceived.
Agent

• Examples: a human (five senses as sensors), a computer
program (console as input; files, sound, etc. as output), a robot.
• The agent function is a mathematical mapping from any given
percept sequence to an action.
– For an artificial agent, it is implemented by an agent program.
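The distinction between the abstract agent function and the concrete agent program can be sketched in Python. This is only an illustration; the `TableDrivenAgent` class and its percept table are hypothetical, not part of the lecture:

```python
class TableDrivenAgent:
    """Agent program: implements the agent function by looking up
    the full percept sequence in a table (hypothetical example)."""

    def __init__(self, table):
        self.table = table      # maps percept sequences to actions
        self.percepts = []      # percept sequence observed so far

    def program(self, percept):
        self.percepts.append(percept)
        # The choice of action depends only on what the agent
        # has perceived to date, never on anything unperceived.
        return self.table.get(tuple(self.percepts), "NoOp")

# Usage: a tiny table for a two-square vacuum world
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = TableDrivenAgent(table)
print(agent.program(("A", "Clean")))   # Right
print(agent.program(("B", "Dirty")))   # Suck
```

Note that the table grows with every possible percept sequence, which is why table-driven designs do not scale.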
GOOD BEHAVIOR: THE CONCEPT OF
RATIONALITY
• A rational agent is one that does the right thing.
• A performance measure evaluates any given
sequence of environment states.
• What is rational at any given time depends
on four things:
– The performance measure that defines the criterion of
success.
– The agent's prior knowledge of the environment.
– The actions that the agent can perform.
– The agent's percept sequence to date.
Rational Agent
• Definition of a rational agent:
– For each possible percept sequence, a rational
agent should select an action that is expected to
maximize its performance measure, given the
evidence provided by the percept sequence and
whatever built-in knowledge the agent has.
PEAS (Performance, Environment, Actuators, Sensors)
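As a concrete illustration, the four PEAS components can be written out for the classic automated-taxi example. The dictionary layout below is just a sketch; the `taxi_peas` name is hypothetical:

```python
# PEAS description of an automated taxi driver (classic textbook example)
taxi_peas = {
    "Performance": ["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators":   ["steering", "accelerator", "brake", "signal", "horn"],
    "Sensors":     ["cameras", "sonar", "speedometer", "GPS", "odometer"],
}
for component, items in taxi_peas.items():
    print(f"{component}: {', '.join(items)}")
```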
Types of environment
• Fully observable vs. partially observable:
– If an agent's sensors give it access to the complete state of the environment at
each point in time, then we say that the task environment is fully observable.
– Example: playing chess is fully observable; driving a car and medical
diagnosis are partially observable (the complete state is unknown).

• Single-agent (solving a puzzle) vs. multi-agent (playing chess).
• Deterministic (the next state is fully determined by the
current state and the agent's action) vs. stochastic.
• Static (the environment does not change while the agent
deliberates) vs. dynamic.
• Discrete (a countable number of distinct states, e.g. chess
moves) vs. continuous (e.g. driving, where location varies continuously).
Agent Structure

The job of AI is to design an agent program that implements the agent
function: the mapping from percepts to actions.
Agent = architecture + program
• If the program is going to recommend actions like Walk, the architecture
had better have legs.
• The architecture might be just an ordinary PC, or it might be a robotic car
with several onboard computers, cameras, and other sensors.
Types of agent programs
• Simple reflex agents;
• Model-based reflex agents;
• Goal-based agents; and
• Utility-based agents
Simple reflex agents
• These agents select actions on the basis of the
current percept,
• ignoring the rest of the percept history.
• They work well only if the environment is fully observable.
• Condition–action rule: if condition, then action.
• Example:
– If temp > 50 then turn on the AC.
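The condition–action rule above can be sketched as a tiny simple reflex agent. This is a minimal illustration; the threshold and the action names are hypothetical:

```python
def simple_reflex_ac_agent(percept):
    """Simple reflex agent: acts on the current percept only,
    via a single condition-action rule (hypothetical thermostat)."""
    temperature = percept  # the current percept is the room temperature
    # Condition-action rule: if temp > 50 then turn on the AC.
    if temperature > 50:
        return "turn_on_AC"
    return "do_nothing"

print(simple_reflex_ac_agent(55))  # turn_on_AC
print(simple_reflex_ac_agent(30))  # do_nothing
```

Note that the agent keeps no history: two calls with the same percept always yield the same action.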
Simple reflex agents
Problems with the simple reflex agent design approach:

• They have very limited intelligence.
• They have no knowledge of non-perceptual parts of the current state.
• Their rule tables are mostly too big to generate and to store.
• They are not adaptive to changes in the environment.

Examples:

• A room-cleaner agent: it works only if there is dirt in the room.
• A TV timer.
• An AC temperature sensor.
Model-based reflex agents
• The most effective way to handle partial
observability
• A model-based agent has two important
factors:
Model: It is knowledge about "how things happen
in the world," so it is called a Model-based agent.
Internal State: It is a representation of the current
state based on percept history
• The agent keeps track of the part of the world it can't
see now (it stores information from past percepts).
Model-based reflex agents

• These agents have a model, i.e. "knowledge of the world," and
based on that model they perform actions.
•Updating the agent state requires information about:
• How the world evolves
• How the agent's action affects the world.
• Some examples of items with model-based agents aboard include
the Roomba vacuum cleaner and the autonomous car known as
Waymo.
• Both interact with their environments by using what they know (an
internal model of the world) together with their on-board sensors to
make moment-to-moment decisions about their actions.
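A model-based reflex agent can be sketched as follows. This is a minimal illustration; the two-square vacuum world, its state-update rule, and the action names are hypothetical:

```python
class ModelBasedReflexAgent:
    """Keeps an internal state, updated from the percept history and a
    model of how the world works (hypothetical two-square vacuum world)."""

    def __init__(self):
        # Internal state: last known status of each square (may be stale).
        self.state = {"A": "Unknown", "B": "Unknown"}

    def update_state(self, percept):
        # Model: the perceived status is the current state of that square.
        location, status = percept
        self.state[location] = status

    def program(self, percept):
        self.update_state(percept)
        location, _ = percept
        # Condition-action rules applied to the internal state,
        # not just the raw percept.
        if self.state[location] == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

agent = ModelBasedReflexAgent()
print(agent.program(("A", "Dirty")))  # Suck
print(agent.program(("A", "Clean")))  # Right
```

The internal state lets the agent remember squares it cannot currently perceive, which is exactly what handles partial observability.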
Goal based Agent
• An extension of the model-based reflex agent.
• It pursues a desirable situation (a goal).
• It uses searching and planning to choose actions.
• It works in partially observable environments.
• Google's Waymo driverless cars are good examples
of a goal-based agent when they are programmed
with an end destination, or goal, in mind.
• The car will then "think" and make the right
decisions in order to deliver the passengers to
their intended destination.
Goal based Agent

• These agents may have to consider a long sequence of possible
actions before deciding whether the goal is achieved. Such
consideration of different scenarios is called searching and
planning, which makes an agent proactive.
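Searching toward a goal can be sketched with a breadth-first search over a small road map. The map, the connections, and the intermediate city names below are hypothetical illustrations:

```python
from collections import deque

# Hypothetical road map: each city maps to its directly connected cities.
road_map = {
    "D.G.Khan": ["Multan", "Quetta"],
    "Multan": ["Lahore", "Islamabad"],
    "Quetta": ["Karachi"],
    "Lahore": ["Islamabad"],
    "Karachi": [],
    "Islamabad": [],
}

def goal_based_search(start, goal):
    """Breadth-first search: find a sequence of actions
    (cities to drive through) that reaches the goal state."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for city in road_map[path[-1]]:
            if city not in visited:
                visited.add(city)
                frontier.append(path + [city])
    return None  # goal unreachable

print(goal_based_search("D.G.Khan", "Islamabad"))
# ['D.G.Khan', 'Multan', 'Islamabad']
```

The agent considers whole sequences of future actions and keeps only those that end in the goal state, which is the "searching" the slide describes.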
Utility-based agents
• Focuses on utility, not just the goal.
• Uses a utility function (a function that measures how
"happy" or "unhappy" each state makes the agent).
• Works in partially observable environment
• The Utility-based agent is useful when there are
multiple possible alternatives, and an agent has to
choose in order to perform the best action.
• The utility function maps each state to a real
number to check how efficiently each action
achieves the goals.
Let's say you want to travel from D.G.Khan to Islamabad: the goal-based
agent will get you there. Islamabad is the goal and this agent will map the
right path to get you there.
But if you're traveling from D.G.Khan to Islamabad and encounter a
closed road, the utility-based agent will kick into gear and analyze other
routes to get you there, selecting the best option for maximum utility. In
this regard, the utility-based agent is a step above the goal-based agent.
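The route-choice scenario above can be sketched with a utility function over candidate routes. The routes, travel times, and the scoring formula are hypothetical:

```python
# Hypothetical candidate routes from D.G.Khan to Islamabad, with
# estimated travel time (hours) and whether the road is open.
routes = [
    {"name": "via Multan", "hours": 8, "open": False},  # closed road
    {"name": "via Mianwali", "hours": 9, "open": True},
    {"name": "via Lahore", "hours": 11, "open": True},
]

def utility(route):
    """Map each route (state) to a real number: shorter open routes
    score higher; a closed route has the lowest possible utility."""
    if not route["open"]:
        return float("-inf")
    return -route["hours"]  # less travel time -> higher utility

best = max(routes, key=utility)
print(best["name"])  # via Mianwali
```

A goal-based agent would be stuck once its planned route closes; the utility function lets the agent rank all remaining alternatives and pick the best one.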
