
Artificial Intelligence

Lecture #3: Agents

Dr. Md. Sazzad Hossain, PhD (Japan)


Professor
Department of CSE
Mawlana Bhashani Science and Technology University
Email: sazzad.hossain14@northsouth.edu

What is an agent?
 “An over-used term” (Patti Maes, MIT Labs, 1996)
 Many different definitions exist ...
 Who is right?

Agent Definition (1)
 American Heritage Dictionary:
agent – “… one that acts or has the power or authority to act… or represent another”
(“I can relax, my agents will do all the jobs on my behalf.”)
Agent Definition (2)
 “…agents are software entities that carry out some set of operations on behalf of a user or another program ...” [IBM]
 Potentially, agents may have “Everything-as-a-User”!
Agent Definition (3)
Agent Definition (4)
 “An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors.”
Russell & Norvig
Agent Definition (5)
 “… An agent is anything that is capable of acting upon information it perceives. An intelligent agent is an agent capable of making decisions about how it acts based on experience.”
F. Mills & R. Stufflebeam
Agent Definition (6)
 An agent is an entity which is: …
• proactive: … should not simply act in response to their environment, … should be able to exhibit opportunistic, goal-directed behavior and take the initiative when appropriate; …
• social: … should be able to interact with humans or other artificial agents …
“A Roadmap of Agent Research and Development”, N. Jennings, K. Sycara, M. Wooldridge (1998)
Agent & Environments
 The agent takes sensory input from its environment, and produces as output actions that affect it.

Environment → (sensor input) → Agent → (action output) → Environment
Agents and Intelligent Agents
 An agent is anything that can be viewed as
 perceiving its environment through sensors, and
 acting upon that environment through actuators.
 An intelligent agent further acts in its own interest.
Example of Agents
 Human agent:
• Sensors: eyes, ears, nose, …
• Actuators: hands, legs, mouth, …
 Robotic agent:
• Sensors: cameras and infrared range finders
• Actuators: various motors
 Agents include humans, robots, thermostats, etc.
 Percepts: vision, speech recognition, etc.
Agent Function & Program
 An agent is specified by an agent function f that maps sequences of percepts Y to actions A:
Y = {y0, y1, ..., yT}
A = {a0, a1, ..., aT}
f : Y → A
 The agent program runs on the physical architecture to produce f
• agent = architecture + program
 “Easy” solution: a table that maps every possible percept sequence Y to an action A
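The “easy” table solution above can be sketched in a few lines. This is an illustrative toy, not code from the slides: the table keys, percept tuples, and action names are our own, and a real table would be astronomically large.

```python
# Minimal sketch of a table-driven agent: a lookup table maps each
# percept sequence seen so far to an action (entries are illustrative).

def table_driven_agent(table):
    percepts = []                       # percept sequence to date
    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NOP")  # default: do nothing
    return agent

# Toy table for a two-square vacuum world:
table = {
    (("A", "dirty"),): "SUCK",
    (("A", "clean"),): "RIGHT",
    (("A", "clean"), ("B", "dirty")): "SUCK",
}
agent = table_driven_agent(table)
print(agent(("A", "clean")))   # -> RIGHT
print(agent(("B", "dirty")))   # -> SUCK
```

Note how the agent indexes the table with the entire percept history, not just the latest percept, which is exactly why the table grows unmanageably fast.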
Agents and Environments
 The agent function maps from percept histories (sequences of percepts) to actions:
f : P* → A
Example: A Vacuum-Cleaner Agent

(Figure: two adjacent squares, A and B.)

 Percepts: location and contents, e.g., (A, dust)
• (Idealization: locations are discrete)
 Actions: move, clean, do nothing:
LEFT, RIGHT, SUCK, NOP
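A simple reflex program for this vacuum world might look as follows; this is a sketch assuming the percept is a (location, status) pair, with the action names taken from the slide.

```python
# Simple reflex vacuum agent for the two-square world (A, B).
# Percept: (location, status); actions follow the slide's names.

def vacuum_agent(percept):
    location, status = percept
    if status == "dust":
        return "SUCK"       # clean the current square
    if location == "A":
        return "RIGHT"      # square A is clean, move to B
    if location == "B":
        return "LEFT"       # square B is clean, move to A
    return "NOP"            # unknown location: do nothing

print(vacuum_agent(("A", "dust")))   # -> SUCK
print(vacuum_agent(("A", "clean")))  # -> RIGHT
```

Unlike the table-driven agent, this program looks only at the current percept, so its size does not grow with the percept history.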
Properties of Agent
 mobility: the ability of an agent to move around in an environment.
 veracity: an agent will not knowingly communicate false information.
 benevolence: agents do not have conflicting goals, so every agent will always try to do what is asked of it.
 rationality: an agent will act in order to achieve its goals, and will not act in such a way as to prevent its goals being achieved.
 learning/adaptation: agents improve performance over time.
The Concept of Rationality
 What is rational at any given time depends on four things:
 The performance measure that defines the criterion of success.
 The agent’s prior knowledge of the environment.
 The actions the agent can perform.
 The agent’s percept sequence to date.
Rational Agents
 Rational Agent: A rational agent is one that does the right
thing—conceptually speaking, every entry in the table for the
agent function is filled out correctly.
For each possible percept sequence, a rational agent should
select an action that is expected to maximize its performance
measure.
 Performance measure: An objective criterion for success of an
agent's behavior, given the evidence provided by the percept
sequence.
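The idea of “selecting the action expected to maximize the performance measure” can be sketched as an argmax over actions. Everything here is a made-up toy: the outcome model, the probabilities, and the performance function are illustrative assumptions, not part of the slides.

```python
# Illustrative sketch: a rational agent picks the action with the highest
# expected value of its performance measure, under an assumed outcome model.

def expected_performance(action, outcome_model, performance):
    """Sum over possible outcome states weighted by their probability."""
    return sum(p * performance(state) for state, p in outcome_model[action])

def rational_action(actions, outcome_model, performance):
    return max(actions,
               key=lambda a: expected_performance(a, outcome_model, performance))

# Toy vacuum example: 'Suck' cleans with probability 0.9; 'NoOp' leaves dirt.
outcome_model = {
    "Suck": [("clean", 0.9), ("dirty", 0.1)],
    "NoOp": [("dirty", 1.0)],
}
performance = lambda state: 1 if state == "clean" else 0

print(rational_action(["Suck", "NoOp"], outcome_model, performance))  # -> Suck
```

The expectation matters: rationality is judged on what the agent can expect given its knowledge, not on the actual outcome.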
Nature of Task Environment
 To design a rational agent we need to specify a task environment
• a problem specification for which the agent is a solution
 PEAS: to specify a task environment
• Performance measure
• Environment
• Actuators
• Sensors
PEAS: Specifying an Automated Taxi Driver

 Performance measure:
• safe, fast, legal, comfortable, maximize profits
 Environment:
• roads, other traffic, pedestrians, customers
 Actuators:
• steering, accelerator, brake, signal, horn
 Sensors:
• cameras, sonar, speedometer, GPS
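A PEAS description is just structured data, so it can be captured in a small record type. The class and field names below are our own convention, not a standard library; the taxi entries come from the slide.

```python
# Illustrative only: a PEAS task-environment specification as a dataclass.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list   # criteria of success
    environment: list   # what the agent operates in
    actuators: list     # how the agent acts
    sensors: list       # how the agent perceives

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS"],
)
```

Writing the specification down this way makes it easy to compare task environments (taxi vs. medical diagnosis) field by field.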
PEAS: Another Example
 Agent: Medical diagnosis system
 Performance measure: Healthy patient, minimize costs.
 Environment: Patient, hospital, staff
 Actuators: Screen display (questions, tests, diagnoses,
treatments, referrals)
 Sensors: Keyboard (entry of symptoms, findings, patient's
answers)
Properties of Task Environment
1) Fully observable / Partially observable
• If an agent’s sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is fully observable. (e.g. chess – what about Kriegspiel?)
• An environment might be partially observable because of noisy and inaccurate sensors or because parts of the state are simply missing from the sensor data: for example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares, and an automated taxi cannot see what other drivers are thinking.
2) Deterministic / Stochastic
• An environment is deterministic if the next state of the environment is completely determined by the current state of the environment and the action of the agent.
• In a stochastic environment, there are multiple, unpredictable outcomes. Most real situations are so complex that it is impossible to keep track of all the unobserved aspects; for practical purposes, they must be treated as stochastic.
For example, taxi driving is clearly stochastic in this sense, because one can never predict the behavior of traffic exactly; moreover, one’s tires may blow out and one’s engine may seize up without warning. The vacuum world as we described it is deterministic, but variations can include stochastic elements such as randomly appearing dirt and an unreliable suction mechanism.

In a fully observable, deterministic environment, the agent need not deal with uncertainty.

Note: Uncertainty can also arise because of computational limitations. E.g., we may be playing an omniscient (“all-knowing”) opponent but we may not be able to compute his/her moves.
 3) Episodic / Sequential
• In an episodic environment, the agent’s experience is divided into atomic episodes. Each episode consists of the agent perceiving and then performing a single action.
• Subsequent episodes do not depend on what actions occurred in previous episodes. Choice of action in each episode depends only on the episode itself. (E.g., classifying images.)
• In a sequential environment, the agent engages in a series of connected episodes. The current decision can affect future decisions. (E.g., chess and driving.)
 4) Static / Dynamic
• A static environment does not change while the agent is thinking.
• Dynamic environments, on the other hand, are continuously asking the agent what it wants to do; if it hasn’t decided yet, that counts as deciding to do nothing.
• The environment is semidynamic if the environment itself does not change with the passage of time but the agent’s performance score does.
• Taxi driving is clearly dynamic: the other cars and the taxi itself keep moving while the driving algorithm hesitates about what to do next. Chess, when played with a clock, is semidynamic. Crossword puzzles are static.
 5) Discrete / Continuous
• If the number of distinct percepts and actions is limited, the environment is discrete; otherwise it is continuous.
• For example, the chess environment has a finite number of distinct states (excluding the clock). Chess also has a discrete set of percepts and actions.
• Taxi driving is a continuous-state and continuous-time problem: the speed and location of the taxi and of the other vehicles sweep through a range of continuous values and do so smoothly over time. Taxi-driving actions are also continuous (steering angles, etc.). Input from digital cameras is discrete, strictly speaking, but is typically treated as representing continuously varying intensities and locations.
 6) Single agent / Multi-agent
• If the environment contains other intelligent agents, the agent needs to be concerned about strategic, game-theoretic aspects of the environment (for either cooperative or competitive agents).
• Most engineering environments don’t have multi-agent properties, whereas most social and economic systems get their complexity from the interactions of (more or less) rational agents.
• For example, an agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess is in a two-agent environment.
Examples of task environments
and their characteristics

Structure Of Agent
 Goals
• Given a PEAS task environment,
• construct the agent function f,
• design an agent program that implements f on a particular architecture.
• Agent = Architecture + Program.

 Agent Architecture:
• A computing device with physical sensors and actuators.
• Takes the percepts from the sensors and makes them available to the program.
• Runs the program.
• Feeds the program’s action choices to the actuators.
Belief-Desire-Intention (BDI) architectures
 It involves two processes:
• Deliberation: deciding which goals we want to achieve.
• Means-ends reasoning (“planning”): deciding how we are going to achieve these goals.
 To differentiate between these three concepts:
• I believe that if I study hard I will pass this course.
• I desire to pass this course.
• I intend to study hard.
BDI architectures
 First: try to understand what options are available.
 Then: choose among them, and commit to some.
These chosen options become intentions, which then determine the agent’s actions.
 Intentions influence beliefs upon which future reasoning is based.
Schematic of BDI Architecture
 A belief revision function (brf)
 A set of current beliefs
 An option generation function
 A set of current desires (options)
 A filter function
 A set of current intentions
 An action selection function


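The schematic above can be sketched as a control loop. The function names (brf, option generation, filter) mirror the schematic, but their bodies are invented toy placeholders, not a real BDI implementation; the alien scenario anticipates the example on the next slides.

```python
# Illustrative BDI control loop: revise beliefs, generate options (desires),
# then filter them into intentions. All component bodies are toy stand-ins.

def brf(beliefs, percept):
    """Belief revision: merge the new percept into current beliefs."""
    revised = dict(beliefs)
    revised.update(percept)
    return revised

def generate_options(beliefs, intentions):
    """Option generation: desires that make sense given current beliefs."""
    if beliefs.get("alien_at") is not None:
        return {"kill the alien"}
    return set()

def filter_intentions(beliefs, desires, intentions):
    """Filter: commit to some desires as intentions (keep old ones if none)."""
    return set(desires) or intentions

beliefs = {"alien_at": None}
intentions = set()
for percept in [{"alien_at": "P"}]:          # one perception step
    beliefs = brf(beliefs, percept)
    desires = generate_options(beliefs, intentions)
    intentions = filter_intentions(beliefs, desires, intentions)

print(intentions)  # -> {'kill the alien'}
```

An action selection function would then choose concrete actions (e.g., move toward P) from the committed intentions.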
BDI architectures: reconsideration
of intentions
 Example (taken from Cisneros et al.)

Time t = 0
Desire: Kill the alien
Intention: Reach point P
Belief: The alien is at P

Time t = 1
Desire: Kill the alien
Intention: Kill the alien
Belief: The alien is at P (wrong!)
End of Presentation

Questions/Suggestions

Thanks to all !!!
