11.1.5
So far, the agents that have been discussed are the agents with state maintenance, the agents without state maintenance, and the reactive agents. The agent's decision making procedure has been modeled as an abstract function named action, that indicates which action to perform. Now it is time to study the implementation of the decision making function and its various related factors. Hence, for further discussion, the agents would be classified into four categories that are as follows,
1. Logic based agents carry out decision making through logical deduction.
2. Reactive agents carry out decision making through some form of direct mapping from situation to action.
3. Belief-desire-intention agents carry out decision making that depends upon the manipulation of data structures which are used to represent the agent's beliefs, the agent's desires, and the agent's intentions.
4. Layered architectures carry out decision making via various software layers, each of which reasons about the environment at a different level of abstraction.
Fig. : A reactive agent - sensors determine 'What is the world like now' from the environment; condition-action (if-then) rules determine the 'Action to be done'; actuators carry out the actions.
3. The idea of agents as theorem provers is highly attractive. For example, consider that there is a certain theory of agency that explains how an intelligent agent should behave. This theory may explain, say, how an agent generates goals so as to satisfy its design objective, how it interleaves goal-directed and reactive behavior in order to achieve these goals, and all such aspects. Then such a theory can be considered as a specification for how an agent should behave.
4. The traditional approach for the implementation of a system that will satisfy this
specification would involve refining the specification through a series of
progressively more concrete stages, until finally an implementation is completely
done.
5. While thinking of an intelligent agent as a theorem prover, no such refinement takes place. Instead, the theory is viewed as an executable specification : it is directly executed in order to produce the agent's behavior.
6. Example of logic-based agents
These agents would be termed as deliberate agents. In such agents, the internal state is assumed to be a database of formulae of classical first-order predicate logic. For example, the agent's database might contain formulae such as :
Start(machine 102)
Temperature(blower 267, 430)
Pressure(digester 76, 184)
Using such formulae, the agent's complete environment is described. The database contains the complete information that the agent has about its environment. The database of an agent can be thought of as analogous to beliefs in humans. Thus, just as a person might have a belief that machine 102 is started, the agent might have the predicate Start(machine 102) in its database. Like humans, agents can be in error. Thus one might believe that machine 102 is started even though it is off; the fact that an agent has Start(machine 102) in its database does not mean that machine 102 is started. There can be multiple reasons why the perception of the agent can go wrong. Say, the agent's sensors may be faulty, its reasoning may be faulty, the information may not be up to date, or the interpretation of the formula Start(machine 102) intended by the agent's designer may be different altogether.
Let L be the set of sentences of classical first-order logic, and let DB = 2^L be the set of L databases, i.e., the set of sets of L-formulae. The internal state of an agent is then an element of DB. We write Δ, Δ1, ... for members of DB. The agent's decision making process is modelled through a set of deduction rules, ρ. These are rules of inference for the logic. We write Δ ⊢ρ φ if the formula φ can be proved from the database Δ using only the deduction rules ρ. An agent's perception function observe remains the same as earlier,

observe : S → P
For example, consider a robotic agent that can suck up dirt, move forward one "step", or turn right 90°. The agent can move around a room, which is divided grid-like into a number of equally sized squares. A deduction rule in its database might look like :

In(0, 0) ∧ Facing(north) ∧ ¬Dirt(0, 0) → Do(forward)
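To make the idea concrete, here is an illustrative sketch only (not the book's formalism): the database is held as a set of ground tuples, and each deduction rule is a plain Python function that tries to "prove" a Do(action) conclusion from the database. The rule bodies and the noop fallback are assumptions made for the example.

```python
# Sketch of a logic-based (deliberate) agent: facts are ground tuples,
# deduction rules try to derive Do(a) from the database.

def rule_suck(db):
    # In(x, y) AND Dirt(x, y) -> Do(suck)
    for fact in db:
        if fact[0] == "In" and ("Dirt", fact[1], fact[2]) in db:
            return "suck"
    return None

def rule_forward(db):
    # In(0, 0) AND Facing(north) AND NOT Dirt(0, 0) -> Do(forward)
    if ("In", 0, 0) in db and ("Facing", "north") in db \
            and ("Dirt", 0, 0) not in db:
        return "forward"
    return None

def action(db, rules):
    # Try each deduction rule in turn until some Do(a) can be "proved".
    for rule in rules:
        a = rule(db)
        if a is not None:
            return a
    return "noop"   # nothing can be deduced

db = {("In", 0, 0), ("Facing", "north")}
print(action(db, [rule_suck, rule_forward]))   # -> forward
db.add(("Dirt", 0, 0))
print(action(db, [rule_suck, rule_forward]))   # -> suck
```

The point of the sketch is only the control structure: action selection reduces to attempting a proof of Do(a) against the agent's database of beliefs.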
9. Logic based approach - advantages
(a) In logic-based approaches to building agents, decision making is viewed as deduction. An agent's "program", i.e., its decision making strategy, is encoded as a logical theory, and the process of selecting an action reduces to a problem of proof.
(b) Logic-based approaches are good to work with and have a clean (logical) semantics, due to which they can be used over a long period of time.
Logic based approach - disadvantages
(a) The issues associated with representing and reasoning about complex, dynamic, possibly physical environments are essentially unsolved, which makes developing a logic based agent a tedious task.
Fig. : Subsumption architecture - the agent consists of a set of stimulus-response behaviours (State → Action), connected between its sensors and effectors.
The observe function, which represents the agent's perceptual ability, is the same as in the earlier intelligent agent architecture. But in subsumption architecture systems, there is tight coupling between perception and action. The raw sensor input is not processed or transformed much, and there is certainly no attempt to transform it into a symbolic representation. Associated with an agent's set of behavioral rules R is the binary inhibition relation on the set of behaviors, ≺ ⊆ R × R. It is a relation that is a total ordering on R, which means that the relation is transitive, irreflexive, and antisymmetric. The statement b1 ≺ b2 means that b1 inhibits b2 and that b1 is lower in the hierarchy than b2, so b1 will be chosen.
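A minimal sketch of this action selection scheme follows; the behavior names and the percept format are invented for the example, and the list order stands in for the inhibition ordering ≺ (earlier entries inhibit later ones).

```python
# Subsumption-style action selection: behaviors are (condition, action)
# pairs; list order encodes the inhibition relation (earlier = higher priority).

def subsumption_action(percept, behaviors):
    # Fire the first behavior whose condition matches the percept;
    # all behaviors it inhibits (those after it) are suppressed.
    for condition, act in behaviors:
        if condition(percept):
            return act
    return None

behaviors = [
    (lambda p: p.get("obstacle"), "avoid"),    # highest priority
    (lambda p: p.get("sample"),   "collect"),
    (lambda p: True,              "explore"),  # default, lowest priority
]

print(subsumption_action({"obstacle": True, "sample": True}, behaviors))  # avoid
print(subsumption_action({"sample": True}, behaviors))                    # collect
print(subsumption_action({}, behaviors))                                  # explore
```

Note that selection is a single linear scan over the behaviors, which is what makes the complexity bound discussed next plausible.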
5. Reactive agent approach - advantages
(a) The overall time complexity of the subsumption action function is no worse than O(n²), where n is the larger of the number of behaviors or the number of percepts. Thus, even with the naive algorithm discussed above, the decision making process is tractable. In practice, it works considerably better than O(n²). The decision making logic can even be encoded into hardware, giving constant decision time; for modern hardware, this means extremely fast decision making.
Reactive agent approach - disadvantages
(a) In reactive agents, if the agents do not employ models of their environment, then they must have sufficient information available in their local environment for them to determine an acceptable action. As purely reactive agents make decisions based on local information (i.e., information about the agent's current state), it is difficult to see how such decision making could take into account non-local information, for which it must inherently take a "short-term" view.
(b) It is difficult to model purely reactive agents that can be designed to learn from experience so as to improve their performance over time. The central benefit, and source of the popularity, of purely reactive systems is that overall behavior emerges from the interaction of the component behaviors when the agent is placed in its environment. But the term "emerges" suggests that the relationship between individual behaviors, environment, and overall behavior is not understandable. This necessarily makes it very hard to engineer agents to fulfill specific tasks. Also, there is no principled methodology for building such agents. The process of designing such an agent is highly tedious and requires a lot of trial and error.
(c) The complexity of reactive agents goes on increasing when they contain many layers that encompass behaviors. The dynamics of the interactions between the different behaviors become too complex to understand. Researchers have proposed solutions to these problems. One of the widely accepted solutions is to design an evolving agent to perform certain tasks.
(a) Beliefs - Beliefs represent the informational state of the agent. They need not necessarily be true (and in fact may change in the future). The beliefs are stored in a database (sometimes called a belief base or a belief set); this is decided by an implementation decision.
(b) Desires - Desires represent the motivational state of the agent. They model objectives or situations that the agent would like to accomplish or bring about. For example : find the best offer, go to the function, or become the winner.
(c) Goals - A goal is a desire that has been adopted for active pursuit by the agent. Usage of the term goals puts the restriction that the set of active desires must be consistent. For example, one should not have concurrent goals to go to a function and to stay at the garden, though they could both be desirable for the agent.
(d) Intentions - Intentions represent the deliberative state of the agent, indicating what the agent has chosen to do.
(e) Events - Events are triggers for reactive activity by the agent. An event may update beliefs, trigger plans or modify goals. Events may be generated externally and received by sensors or integrated systems. Also, events may be generated internally to trigger decoupled updates or plans of activity.
3. This architecture involves two crucial processes of practical reasoning, namely deciding what goals are to be achieved, and deciding how these goals are going to get achieved. The process of deciding what goals to achieve is known as deliberation, whereas deciding how to achieve the goals is known as means-ends reasoning.
4. Role of intentions in BDI architecture
(a) A central problem in the design of practical reasoning agents is that of achieving a good balance between different concerns related with intentions. Specifically, it seems clear that an agent should at times drop some intentions (because it comes to believe that either they will never be achieved, they are achieved, or else because the reason for having the intention is no longer present). It may happen that, from time to time, the agent has to reconsider its intentions. But reconsideration incurs a cost in terms of both time and computational resources. Therefore, it poses a critical issue wherein two situations may arise,
Either an agent that does not reconsider sufficiently often will continue attempting to achieve its intentions even after it is clear that they cannot be achieved, or that there is no longer any reason for achieving them.
Or an agent that constantly reconsiders its intentions may spend insufficient time actually working to achieve them, and hence runs the risk of never practically achieving them.
This critical issue is essentially the problem of balancing pro-active (goal directed)
and reactive (event driven) behavior. There is clearly a tradeoff between the
degree of commitment and reconsideration at work.
(b) Another issue is that different types of environment require different types of decision strategies. In a static, unchanging environment, purely pro-active, goal-directed behavior is adequate. But in more dynamic environments, the ability to react to changes by modifying intentions becomes more important.
Reasoning process in a BDI agent
1. Main components of a BDI agent are,
(a) Fig. 11.1.6 : Belief-desire-intention architecture (sensor input revises beliefs; a generate-options step maps beliefs and intentions to desires; a filter step maps beliefs, desires, and intentions to intentions; intentions lead to actions and action output).
(b) Let Blf be the set of all possible beliefs, Dsr be the set of all possible desires, and Int be the set of all possible intentions. Usually, beliefs, desires, and intentions are represented as logical formulae, perhaps of first-order logic. An agent's belief revision function is a mapping brf : 2^Blf × P → 2^Blf, which on the basis of the current percept and current beliefs determines a new set of beliefs. The option generation function, options : 2^Blf × 2^Int → 2^Dsr, maps a set of beliefs and a set of intentions to a set of desires.
(c) The options function plays several roles. First, it is responsible for the agent's
means-ends reasoning - the process of deciding how to achieve intentions. Thus
once an agent has formed an intention to x, it must subsequently consider
options to achieve x. BDI agent's option generation process is one of recursively
elaborating a hierarchical plan structure, considering and committing to
progressively more specific intentions, until finally it reaches the intentions that
correspond to immediately executable actions.
(d) Important properties of option function - Option function must be consistent that
is any options generated must be consistent with both the agent's current beliefs
and current intentions. Option function must be opportunistic, in that it should
recognize when environmental circumstances change advantageously, to offer the
agent new ways of achieving intentions, or the possibility of achieving intentions
that were otherwise unachievable.
(e) A BDI agent's deliberation process (deciding what to do) is represented in the
filter function, which updates the agent's intentions on the basis of its
previously-held intentions and current beliefs and desires. This function takes
care of two major tasks. First, it must
drop any intentions that are no longer
achievable, or for which the expected cost of achieving them exceeds the expected
gain associated with successfully achieving them. Second, it should retain
intentions that are not achieved, and that are still expected to have a
positive
overall benefit. Also, it should adopt new intentions, either to achieve existing intentions, or to exploit new opportunities. This function does not introduce any new intentions. In other words, current intentions are either previously held intentions or newly adopted options.
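The belief revision, option generation, and filter functions can be sketched as a minimal control loop. This is an illustrative sketch only: the bodies of brf, options, and filter_ below (the belief "thirsty", the desire "get_water", the "impossible" marker) are invented assumptions, not a standard BDI API.

```python
# Minimal BDI control loop: revise beliefs, generate options (desires),
# filter to intentions, then execute a directly executable intention.

def brf(beliefs, percept):
    # Belief revision: fold the new percept into the belief set.
    return beliefs | {percept}

def options(beliefs, intentions):
    # Option generation: map beliefs and current intentions to desires.
    desires = set()
    if "thirsty" in beliefs:          # toy rule, stands in for real reasoning
        desires.add("get_water")
    return desires

def filter_(beliefs, desires, intentions):
    # Deliberation: keep achievable previously-held intentions and adopt
    # newly generated options; intentions come from nowhere else.
    return {d for d in desires | intentions if d != "impossible"}

def execute(intentions):
    # Return an arbitrary directly executable intention as the action.
    return next(iter(intentions), None)

def action(percept, beliefs, intentions):
    beliefs = brf(beliefs, percept)
    desires = options(beliefs, intentions)
    intentions = filter_(beliefs, desires, intentions)
    return execute(intentions), beliefs, intentions

act, b, i = action("thirsty", set(), set())
print(act)   # -> get_water
```

The sequencing in action mirrors the text: the filter function only retains old intentions or adopts options produced by options, so no intention appears from nowhere.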
(f) The execute function returns an executable intention : one that corresponds to a directly executable action. The agent decision function, action, of a BDI agent is then obtained by applying these component functions in sequence : revise beliefs, generate options, filter, and execute.
Fig. : Layered architectures - perceptual input and action output connected through Layer 1, Layer 2, ... ; the agent's overall behavior is generated by these interacting layers.
Usually, there will be at least two layers, to deal with reactive and pro-active
behaviors respectively. Depending upon requirement of task there can be many
more layers. Multiple layers provide better separation of tasks to be performed.
2. Formally, there are two types of control flow within layered architectures,
Horizontal layering - In horizontally layered architectures, the software layers are each directly connected to the sensory input and action output. In effect, each layer itself acts like an agent, and generates suggestions as to what action to perform.
Vertical layering - In vertically layered architectures, sensory input and action output are each dealt with by at most one layer each.
3. The advantage of horizontally layered architectures is their conceptual simplicity. That is, as per the requirement, each of the agent's behaviors can be modeled as a different layer. But as the layers are competing with one another to generate action suggestions, there is a possible problem that the overall behavior of the agent will not be coherent. To ensure that horizontally layered architectures are consistent, they generally include a mediator function, which makes decisions about which layer has "control" of the agent at any given point of time. For this there should be a properly designed interaction between the layers. If there are n layers in the architecture, and each layer is capable of suggesting m possible actions, then this means there are m^n such interactions to be considered, which makes the design difficult.
4. In vertically layered architectures the number of interactions between layers is reduced, which makes the system clearly much simpler than the horizontally layered case. Though this looks simple, a vertically layered architecture needs to pass control through each layer so as to make the decision. So failures in any one layer are likely to have serious consequences.
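The horizontally layered scheme with a mediator can be sketched as follows; the two layers and the priority-based mediation policy are assumptions invented for the example, not a prescribed design.

```python
# Horizontally layered agent: every layer sees the percept directly and may
# suggest an action; a mediator decides which layer has control.

def reactive_layer(percept):
    # Suggests an action only when an urgent situation is perceived.
    return "flee" if percept.get("threat") else None

def proactive_layer(percept):
    # Always has a goal-directed suggestion.
    return "work_on_goal"

LAYERS = [reactive_layer, proactive_layer]   # index = priority (0 highest)

def mediator(suggestions):
    # Mediation policy: take the suggestion of the highest-priority layer
    # that produced one, keeping the overall behavior coherent.
    for s in suggestions:
        if s is not None:
            return s
    return "noop"

def act(percept):
    return mediator([layer(percept) for layer in LAYERS])

print(act({"threat": True}))   # -> flee
print(act({}))                 # -> work_on_goal
```

Here the mediator is a fixed priority rule; in a real horizontally layered system the mediation logic is exactly the part that must be carefully designed, since it must anticipate the interactions between all layers.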
5. Two very popular examples of layered architectures are Ferguson's TOURINGMACHINES, which is a horizontally layered architecture, and Müller's INTERRAP, which is a vertically layered architecture.