
Intelligent Agents

OBJECTIVE
Define an agent
Define an intelligent agent
Define a rational agent
Explain bounded rationality
Discuss different types of environment
Explain different agent architectures
OBJECTIVE
On completion of this lesson the student will be able to:
Understand what an agent is and how an agent interacts with the environment.
Given a problem situation, the student should be able to identify
• the percepts available to the agent and
• the actions that the agent can execute.
Understand the performance measures used to evaluate an agent.
OBJECTIVE
o Be familiar with
⚫ Different agent architectures
⚫ Stimulus-response agents
⚫ State based agents
⚫ Deliberative / goal-directed agents
⚫ Utility based agents
⚫ Learning agents
o Be able to analyze a problem situation and be able to
⚫ Identify the characteristics of the environment
⚫ Recommend the architecture of the desired agent
AGENT & ENVIRONMENT
[Diagram: the agent receives percepts from the environment and acts on the environment through its actions.]
AGENTS
Operate in an environment
Perceive the environment through sensors
Act upon the environment through actuators/effectors
Have goals
SENSORS AND EFFECTORS
An agent perceives its environment through sensors
⚫ The complete set of inputs at a given time is called a
percept
⚫ The current percept or a sequence of percepts can
influence the actions of an agent
It can change the environment through effectors
⚫ An operation involving an actuator is called an action
⚫ Actions can be grouped into action sequences
AGENTS
Have sensors, actuators
Have goals

Implement a mapping from percept sequences to actions (the agent program)
Performance measure to evaluate agents
AGENTS
An autonomous agent decides autonomously which action to take in the current situation to maximize progress towards its goals.
PERFORMANCE
We describe the behavior and performance of intelligent agents in terms of the agent function.
⚫ Mapping from the perception history (percept sequence) to an action
⚫ Ideal mapping: specifies which actions an agent ought to take at any point
⚫ Performance measure: a subjective measure to characterize how successful an agent is (e.g., speed, power usage, accuracy, money); a minimal code sketch follows
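As an illustration, here is a minimal Python sketch of an agent as a mapping from percept sequences to actions. All names and percepts are illustrative assumptions, not a standard API.

# Minimal sketch: an agent program maps the percept sequence so far to an action.
class Agent:
    def __init__(self, program):
        self.program = program   # agent program: percept sequence -> action
        self.percepts = []       # percept history

    def perceive_and_act(self, percept):
        self.percepts.append(percept)
        return self.program(self.percepts)

# A trivial agent program that looks only at the latest percept.
agent = Agent(lambda percepts: "stop" if percepts[-1] == "obstacle" else "go")
print(agent.perceive_and_act("clear"))     # -> go
print(agent.perceive_and_act("obstacle"))  # -> stop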
EXAMPLES OF AGENTS
Humans
⚫ eyes, ears, skin, taste buds, etc. for sensors
⚫ hands, fingers, legs, mouth for effectors
Robots
⚫ Cameras, infrared, bumper, etc. for sensors
⚫ grippers, wheels, lights, speakers, etc. for actuators
Software agents (softbot)
⚫ functions as sensors
⚫ functions as actuators
TYPES OF AGENTS
Xavier was designed and built by CMU graduate students
in 1993. The overall goal was to develop autonomous
mobile robots that can operate reliably over extended
periods of time, can learn and adapt to their environments,
and can interact with people in a socially acceptable
manner.

Robot: Cog (MIT)
The motivation behind creating Cog was the goal of building a humanoid robot: a robot with intelligence like a human being.
AIBO ROBOT: SONY
The AIBO entertainment robot is a totally new kind of robot: autonomous, sensitive to its environment, and able to learn and mature like a living creature. Since each AIBO experiences its world differently, each develops its own unique personality, different from any other AIBO in the world!
TYPES OF AGENTS

Softbots
Expert systems
⚫ e.g., a cardiologist expert system
Autonomous spacecraft
Intelligent buildings
AGENTS
Fundamental faculties of intelligence
⚫ Acting
⚫ Sensing
⚫ Understanding, reasoning, learning
In order to act you must sense. Blind action is not a characterization of intelligence.
Robotics: sensing and acting, understanding not
necessary.
Sensing needs understanding to be useful.
INTELLIGENT AGENTS
Intelligent Agent
⚫ must sense,
⚫ must act,
⚫ must be autonomous (to some extent),
⚫ must be rational.
RATIONAL AGENT
AI is about building rational agents.
An agent is something that perceives and acts.
A rational agent always does the right thing.
⚫ What are the functionalities (goal)?
⚫ What are the components?
⚫ How do we build them?
RATIONALITY
Perfect Rationality
⚫ Assumes that the rational agent knows everything and will take the action that maximizes its utility
⚫ Human beings do not satisfy this definition of
rationality.
Bounded Rationality, Herbert Simon, 1972
⚫ Because of the limitations of the human mind, humans
must use approximate methods to handle many tasks.
RATIONALITY
Rational action: the action that maximizes the expected value of the performance measure, given the percept sequence to date
⚫ Rational = Best?
Yes, to the best of its knowledge
⚫ Rational = Optimal?
Yes, to the best of its abilities
And its constraints
OMNISCIENCE
A rational agent is not omniscient
• It doesn't know the actual outcome of its action
• It may not know certain aspects of its
environment
Rationality must take into account the
limitations of the agent
• Percept sequence, background knowledge, feasible actions
• Deal with the expected outcomes of actions
BOUNDED RATIONALITY
Evolution did not give rise to optimal agents, but to
agents which are in some senses locally optimal at
best.
In 1957, Simon proposed the notion of Bounded
Rationality:
⚫ The property of an agent that behaves in a manner that is as nearly optimal with respect to its goals as its resources allow
AGENT ENVIRONMENT
Environments in which agents operate can be
defined in different ways.
⚫ It is helpful to view the following definitions as referring
to the way the environment appears from the point of view
of the agent itself
TASK ENVIRONMENTS
A task environment is specified by the Performance measure, the Environment, and the agent's Actuators and Sensors (PEAS).
A SKETCH OF AN AUTOMATED TAXI DRIVER
Performance measure: safe, fast, legal, comfortable trip; maximize profits
Environment: roads, other traffic, pedestrians, customers
Actuators: steering, accelerator, brake, signal, horn
Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors
ENVIRONMENT: OBSERVABILITY
Fully observable
⚫ All of the environment relevant to the action being
considered is observable
⚫ Such environments are convenient, since the agent is freed from the task of keeping track of changes in the environment.
Partially observable
⚫ The relevant features of the environment are only
partially observable
Examples:
⚫ Fully observable: chess; partially observable: poker
ENVIRONMENT: DETERMINISM
Deterministic: the next state of the environment is completely determined by the current state and the agent's action (e.g., image analysis).
Stochastic: if an element of interference or uncertainty occurs, then the environment is stochastic (e.g., Ludo). Note that a deterministic yet partially observable environment will appear to be stochastic to the agent.
Strategic: an environment whose state is wholly determined by the preceding state and the actions of multiple agents (e.g., chess).
ENVIRONMENT: EPISODICITY
Episodic/ Sequential
⚫ An episodic environment means that subsequent
episodes do not depend on what actions occurred
in previous episodes.
⚫ In a sequential environment, the agent engages in
a series of connected episodes.
ENVIRONMENT: DYNAMISM
Static environment: does not change from one state to the next while the agent is considering its course of action. The only changes to the environment are those caused by the agent itself.
Dynamic environment: changes over time independent of the actions of the agent; thus if an agent does not respond in a timely manner, this counts as a choice to do nothing.
⚫ Example: an interactive tutor
ENVIRONMENT: DYNAMISM
Static/ Dynamic
⚫ A static environment does not change while the agent
is thinking.
⚫ The passage of time as an agent deliberates is
irrelevant.
⚫ The agent doesn’t need to observe the world during
deliberation.
ENVIRONMENTS: CONTINUITY
Discrete/ Continuous.
⚫ If the number of distinct percepts and actions is
limited, the environment is discrete, otherwise it
is continuous.
ENVIRONMENTS: OTHER AGENTS
Single agent/ Multi-agent
⚫ If the environment contains other intelligent agents, the agent needs to be concerned about strategic, game-theoretic aspects of the environment (for either cooperative or competitive agents)
⚫ Most engineering environments don’t have multi-agent
properties, whereas most social and economic systems get
their complexity from the interactions of (more or less)
rational agents
COMPLEX ENVIRONMENTS
Complexity of the environment includes
⚫ Knowledge rich: enormous amount of information that the
environment contains and
⚫ Input rich: the enormous amount of input the environment can send to an agent
The agent must have a way of managing this
complexity. Often such considerations lead to the
development of
⚫ Sensing strategies and
⚫ attentional mechanisms
So that the agent may more readily focus its efforts
in such rich environments.
EXAMPLE: VACUUM CLEANER AGENT
Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp
EXAMPLES OF TASK ENVIRONMENTS
Taxi driving: partially observable, stochastic, sequential, dynamic, continuous, multi-agent
Image analysis: fully observable, deterministic, episodic, static, continuous, single-agent
Chess: fully observable, strategic, sequential, static, discrete, multi-agent
Poker: partially observable, strategic, sequential, static, discrete, multi-agent
TABLE BASED AGENT
• Information comes from sensors (percepts)
• Look the percept (or percept sequence) up in a table
• Trigger actions through the effectors
❑ Reactive agents
– No notion of history; the current state is as the sensors see it right now
[Diagram: percepts map through the agent's lookup table to actions.]
TABLE BASED AGENT
A table is a simple way to specify a mapping from
percepts to actions
⚫ The table may become very large
⚫ All work is done by the designer
⚫ No autonomy: all actions are predetermined
⚫ Learning might take a very long time (see the sketch after this slide)
Alternatively, the mapping can be defined implicitly by a program:
⚫ Production system
⚫ Rule-based system
⚫ Neural network
⚫ Algorithm
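As a concrete illustration, a table-based agent is a literal lookup from the percept sequence seen so far to an action. The percepts, actions and table entries below are illustrative assumptions.

# Table-based agent: the designer enumerates every percept sequence -> action.
table = {
    ("dark",): "lights_on",
    ("dark", "bright"): "lights_off",
    ("dark", "bright", "dark"): "lights_on",
    # ... the table grows exponentially with the length of the history
}

percepts = []

def table_agent(percept):
    percepts.append(percept)
    # Every behavior is predetermined by the designer: no autonomy.
    return table.get(tuple(percepts), "NoOp")

print(table_agent("dark"))    # -> lights_on
print(table_agent("bright"))  # -> lights_off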
PERCEPT BASED AGENT
Information comes from sensors (percepts)
Percepts change the agent's current view of the world
Triggers actions through the effectors
❑ Reactive agents
❑ Stimulus-response agents
No notion of history; the current state is as the sensors see it right now.
PERCEPT BASED AGENT
Efficient
No internal representation for reasoning, inference.
No strategic planning, learning.
Percept-based agents are not good for multiple, opposing goals.
TYPES OF AGENT PROGRAMS
▪ The job of AI is to design an agent program that implements the agent function: the mapping from percepts to actions.
▪ This program will run on some sort of computing device
with physical sensors and actuators—we call this the
architecture:
agent = architecture + program .
▪ Four types
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
STATE-BASED AGENTS / SIMPLE REFLEX AGENT
Information comes from sensors (percepts)
Percepts change the agent's current state of the world
Based on the state of the world and knowledge (memory), it triggers actions through the effectors
SIMPLE REFLEX AGENTS
• It uses just condition-action rules
• The rules are of the form "if ... then ..."
• Efficient, but with a narrow range of applicability
• because knowledge sometimes cannot be stated explicitly
• Works only if the environment is fully observable
EXAMPLE: SIMPLE REFLEX VACUUM AGENT
The example can be reconstructed as the standard two-location reflex vacuum agent; a sketch follows.
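A minimal Python sketch in the spirit of the classic REFLEX-VACUUM-AGENT pseudocode, using the percepts and actions of the vacuum world above.

# Simple reflex vacuum agent for the two-location world (locations A and B).
# Acts on the current percept only: (location, status).
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(reflex_vacuum_agent(("A", "Clean")))  # -> Right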
MODEL-BASED REFLEX AGENTS
• For a world that is partially observable
• the agent has to keep track of an internal state
• that depends on the percept history
• reflecting some of the unobserved aspects
• e.g., driving a car and changing lanes
• Requires two types of knowledge:
• How the world evolves independently of the agent
• How the agent's actions affect the world
EXAMPLE: TABLE AGENT WITH INTERNAL STATE
IF saw an object ahead, turned right, and it's now clear ahead THEN go straight
IF saw an object ahead, turned right, and object ahead again THEN halt
IF see no objects ahead THEN go straight
IF see an object ahead THEN turn randomly
(A code sketch of this table follows.)
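This table can be implemented directly; the following is a minimal sketch that keeps the previous action as internal state. All names are illustrative assumptions.

import random

# Model-based sketch of the table above: the action depends not only on the
# current percept but also on what the agent did previously.
class ObstacleAgent:
    def __init__(self):
        self.last_action = None  # internal state: remembers the previous action

    def step(self, percept):     # percept: "object_ahead" or "clear"
        turned = self.last_action in ("turn_left", "turn_right")
        if percept == "object_ahead" and turned:
            action = "halt"      # object ahead again after turning: halt
        elif percept == "object_ahead":
            action = random.choice(["turn_left", "turn_right"])  # turn randomly
        else:
            action = "go_straight"  # clear ahead (whether or not we just turned)
        self.last_action = action
        return action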
MODEL-BASED REFLEX AGENTS
The agent is equipped with memory: an internal model of the current state of the world.
GOAL-BASED AGENT
Information comes from sensors (percepts)
Percepts change the agent's current state of the world
Based on the state of the world, knowledge (memory) and goals/intentions, it chooses actions and performs them through the effectors
GOAL BASED AGENT
Agent’s actions will depend upon its goal.
Goal formulation based on the current situation is a
way of solving many problems and search is a
universal problem solving mechanism in AI.
The sequence of steps required to solve a problem is
not known a priori and must be determined by a
systematic exploration of alternatives.
AGENTS WITH EXPLICIT GOALS
[Diagram: goal-based agent architecture.]
UTILITY-BASED AGENTS
• Goals alone are not enough
• to generate high-quality behavior
• e.g., meals in a canteen: good or not?
• Many action sequences can achieve the goals
• some are better and some are worse
• If goal means success,
• then utility means the degree of success (how successful it is)
UTILITY-BASED AGENT
A more general framework.
Different preferences for different goals.
A utility function maps a state or a sequence of states to a real-valued utility.
The agent acts so as to maximize expected utility.
UTILITY-BASED AGENTS
• State A is said to have higher utility
• if state A is preferred over the others
• Utility is therefore a function
• that maps a state onto a real number
• the degree of success
UTILITY-BASED AGENTS
• Utility has several advantages:
• When there are conflicting goals,
• Only some of the goals but not all can be achieved
• utility describes the appropriate trade-off
• When there are several goals,
• none of which can be achieved with certainty,
• utility provides a way to weigh the likelihood of success against the importance of the goals
UTILITY-BASED AGENTS
• Utility function
• a mapping of states onto real numbers
• allows rational decisions in two kinds of situations:
• evaluation of the trade-offs among conflicting goals
• evaluation of competing goals, none of which is certain to be achieved
(An expected-utility sketch follows.)
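A minimal sketch of expected-utility action selection. The outcome model, probabilities and utility numbers are illustrative assumptions, not data from the lesson.

# Expected-utility action selection over a made-up stochastic model.
utility = {"on_time": 10.0, "late": -5.0, "accident": -100.0}

# P(outcome | action): an assumed model of how the environment responds.
outcome_probs = {
    "drive_fast": {"on_time": 0.7, "late": 0.1, "accident": 0.2},
    "drive_safe": {"on_time": 0.5, "late": 0.5, "accident": 0.0},
}

def expected_utility(action):
    return sum(p * utility[outcome]
               for outcome, p in outcome_probs[action].items())

# The agent picks the action that maximizes expected utility.
best = max(outcome_probs, key=expected_utility)
print(best, expected_utility(best))  # -> drive_safe 2.5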
LEARNING AGENT
Learning allows an agent to operate in initially
unknown environments
The learning element modifies the performance
element
Learning is required for true autonomy
LEARNING AGENTS
Four main components (a structural sketch follows):
⚫ Performance element: the agent function, which selects actions
⚫ Learning element: responsible for making improvements by observing performance
⚫ Critic: gives feedback to the learning element by measuring the agent's performance
⚫ Problem generator: suggests other possible courses of action (exploration)
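One way to arrange the four components, as a structural sketch only; the interfaces are assumptions for illustration, not a standard API.

# Structural sketch of a learning agent. Each component is passed in as a
# callable; real systems would make these far richer.
class LearningAgent:
    def __init__(self, performance_element, learning_element,
                 critic, problem_generator):
        self.performance_element = performance_element  # selects actions
        self.learning_element = learning_element        # improves the above
        self.critic = critic                            # scores recent behavior
        self.problem_generator = problem_generator      # proposes experiments

    def step(self, percept):
        feedback = self.critic(percept)                 # how well are we doing?
        self.learning_element(self.performance_element, feedback)
        exploratory = self.problem_generator(percept)   # try something new?
        return exploratory or self.performance_element(percept)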
STATE SPACE SEARCH
Formulate a problem as a state space search by showing the
legal problem states, the legal operators, and the initial and
goal states.
A state is defined by the specification of the values of all attributes of interest in the world.
An operator changes one state into another; it has a precondition, which is the value of certain attributes prior to the application of the operator, and a set of effects, which are the attributes altered by the operator.
The initial state is where you start
The goal state is the partial description of the solution
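A minimal breadth-first sketch of state-space search; the problem representation (states, operators as successor pairs) and the toy problem are assumptions for illustration.

from collections import deque

def breadth_first_search(initial, goal_test, successors):
    # successors(state) yields (operator, next_state) pairs.
    # Returns the list of operators from the initial state to a goal, or None.
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        for op, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [op]))
    return None

# Toy problem: reach 10 from 0 using the operators +3 and +5.
print(breadth_first_search(
    0, lambda s: s == 10,
    lambda s: [("+3", s + 3), ("+5", s + 5)]))  # -> ['+5', '+5']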
SUMMARY
• An agent perceives and acts in an environment, has
an architecture, and is implemented by an agent
program.
• An ideal agent always chooses the action which
maximizes its expected performance, given its
percept sequence so far.
• An autonomous agent uses its own experiences rather
than built-in knowledge of the environment by the
designer.
SUMMARY
An agent program maps from percept to action and
updates its internal state.
Reflex agents respond immediately to percepts.
Goal-based agents act in order to achieve their goal(s).
Utility-based agents maximize their own utility function.
Representing knowledge is important for successful agent
design.
The most challenging environments are partially observable, stochastic, sequential, dynamic and continuous, and contain multiple intelligent agents.
