
11.1.5 Intelligent Agent's Architecture

So far the agents that have been discussed are the agents with state maintenance, the agents without state maintenance and the reactive agents. The agent's decision making procedure has been modeled as an abstract function named action, that indicates which action to perform. Now it is time to study the implementation of the decision making function and its various related factors. Hence, for further discussion the agents would be classified into 4 categories that are as follows,

1. Logic based agents - carry out decision making through logical deduction.

2. Reactive agents - carry out decision making through the implementation of some form of direct mapping from situation to action.

3. Belief-desire-intention agents - carry out decision making that depends upon the manipulation of data structures which are used to represent the agent's beliefs, the agent's desires, and the agent's intentions.

4. Layered architectures - wherein the decision making is done through various software layers, each of which explicitly reasons about the environment at a different level of abstraction as per the requirement of the problem under consideration.
Now, moving further in the discussion, rather than just working around the
abstract view of agents, it is time to make precise specifications about the internal
structure and operation of agents. The next section studies the nature of these
specifications, the assumptions upon which the architectures depend, and the
comparative analysis of advantages and disadvantages of all the architectures.

11.1.5.1 Logic Based Agent Architecture

[Figure: the agent receives percepts through its sensors, maintains a view of "what is the world like now", and applies condition-action (if-then) rules to choose the action to be done, which the actuators carry out in the environment.]

Fig. 11.1.4 Logic based agent architecture
1. The "traditional" approach to building artificially intelligent systems (known as symbolic AI) suggests that intelligent behavior can be generated in a system by providing that system a symbolic representation of its environment and its desired behavior, and syntactically manipulating this representation.
2. In these systems, the symbolic representations are logical formulae, and the
syntactic manipulation corresponds to logical deduction, or theorem proving.

3. The idea of agents as theorem provers is highly attractive. For example, consider that there is a certain theory of agency that explains how an intelligent agent should behave. This theory may explain, say, how an agent generates goals so as to satisfy its design objective and how it interleaves goal-directed and reactive behavior in order to achieve these goals, and all such aspects. Then such a theory φ can be considered as a specification for how an agent should behave.
4. The traditional approach for the implementation of a system that will satisfy this
specification would involve refining the specification through a series of
progressively more concrete stages, until finally an implementation is completely
done.
5. While thinking of an intelligent agent as a theorem prover, no such refinement takes place. Instead, φ is viewed as an executable specification. It is directly executed in order to produce the agent's behavior.
6. Example of logic-based agents
These agents would be termed as deliberate agents. In such agents, the internal state is assumed to be a database of formulae of classical first-order predicate logic. For example, the agent's database might contain formulae such as
Start(machine 102)
Temperature(blower 267, 430)
Pressure(digester 76, 184)
Using such formulae the complete environment of the agent is described. The database contains the complete information that the agent has about its environment. The database of an agent can be thought of as analogous to beliefs in humans. Thus a user might have a belief that machine 102 is started, in the same way that the agent might have the predicate Start(machine 102) in its database. Just as humans can hold erroneous beliefs, the same is applicable to agents. Thus one might believe that machine 102 is started though it is actually off; similarly, the fact that an agent has Start(machine 102) in its database does not mean that machine 102 is started. There can be multiple reasons why the perception of the agent can go wrong. The agent's sensors may be faulty, its reasoning may be faulty, the information may not be up to date, or the interpretation of the formula Start(machine 102) intended by the agent's designer may be different altogether.
7. Let L be the set of sentences of classical first-order logic, and let DB = 2^L be the set of L databases, i.e., the set of sets of L-formulae. The internal state of an agent is then an element of DB. We write Δ, Δ1, ... for members of DB. The agent's decision making process is modelled through a set of deduction rules, ρ. These are rules of inference for the logic. We write Δ ⊢ρ φ if the formula φ can be proved from the database Δ using only the deduction rules ρ. An agent's perception function observe remains the same as earlier,
observe : S → P
Similarly, the next function has the form
next : DB × P → DB
It thus maps a database and a percept to a new database. However, an agent's action selection function, which has the signature
action : DB → A
is defined in terms of its deduction rules.
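The following is a minimal Python sketch (not taken from the text) of how such an action function could be realized, assuming the database is a set of ground atoms represented as strings, each deduction rule is a pair of premises and a conclusion, and proving is done by naive forward chaining. The names prove and action and the Do(...) convention for action formulae follow the discussion above but are otherwise illustrative.

from typing import List, Set, Tuple

Formula = str
Rule = Tuple[Set[Formula], Formula]          # (premises, conclusion)

def prove(db: Set[Formula], rules: List[Rule], goal: Formula) -> bool:
    """Return True if goal can be derived from db using the rules
    (naive forward chaining until a fixed point is reached)."""
    known = set(db)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return goal in known

def action(db: Set[Formula], rules: List[Rule], actions: List[str]) -> str:
    """Select the first action a for which Do(a) is provable from the database."""
    for a in actions:
        if prove(db, rules, f"Do({a})"):
            return a
    return "null"                            # no action could be deduced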
8. The vacuum world cleaning agent as a logic based agent
In the vacuum world the robotic agent cleans up a house by taking various moves. The robot is equipped with a sensor that will tell it whether it is over any dirt, and a vacuum cleaner is used to suck up dirt. In addition, the robot always has a definite orientation (one of north, south, east, or west). In addition to being able to suck up dirt, the agent can move forward one "step" or turn right 90°. The agent moves around a room, which is divided grid-like into a number of equally sized squares (conveniently corresponding to the unit of movement of the agent). The cleaning agent only does the cleaning task and no other work. The room is designed to be a 3 x 3 grid, and the agent always starts in grid square (0, 0) facing north.
In the vacuum cleaning world, the agent receives a percept dirt (signifying that there is dirt beneath it), or null (indicating no special information). It can perform any one of three possible actions: forward, suck, or turn. The goal is to traverse the room continually searching for and removing dirt.

Predicates for the agent would be,
In(x, y) - implies the agent is at (x, y)
Dirt(x, y) - implies there is dirt at (x, y)
Facing(d) - implies the agent is facing direction d
The history and current state of the agent are maintained so as to decide the agent's move.
The rules can be formed as below,
In(0, 0) ∧ Facing(north) ∧ ¬Dirt(0, 0) → Do(forward)
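Continuing the illustrative sketch given after the formal definitions above, this rule could be encoded as follows. The suck rule and the ordering of the candidate actions are assumptions added for the example; in particular the negated premise ¬Dirt(0, 0) is approximated by trying suck before forward.

# Reuses prove/action from the sketch above.
db = {"In(0,0)", "Facing(north)"}            # agent at (0, 0), facing north, no dirt sensed

rules = [
    # In(0,0) AND Facing(north) [AND not Dirt(0,0)]  ->  Do(forward)
    ({"In(0,0)", "Facing(north)"}, "Do(forward)"),
    # If there is dirt under the agent, sucking should win (assumed rule).
    ({"In(0,0)", "Dirt(0,0)"}, "Do(suck)"),
]

# suck is listed first so that it is chosen whenever both rules fire.
print(action(db, rules, ["suck", "forward", "turn"]))   # -> forward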
9. Logic based approach - advantages
(a) In logic-based approaches to building agents, decision making is viewed as deduction. An agent's "program", that is, its decision making strategy, is encoded as a logical theory, and the process of selecting an action reduces to a problem of proof.
(b) Logic-based approaches are elegant to work with and have a clean (logical) semantics, due to which they can be used over a long period of time.
10. Logic based approach - disadvantages
(a) The inbuilt computational complexity of theorem proving makes it questionable whether agents as theorem provers can operate effectively in time-constrained environments.
(b) Decision making in such agents is predicated on the assumption of calculative rationality. Another important assumption made is that the world will not change in any significant way while the agent is deciding its further action.
(c) The issues associated with representing and reasoning about complex, dynamic, possibly physical environments are also essentially unsolved, which makes developing a logic based agent a tedious task.

11.1.5.2 Reactive Agent Architecture

[Figure: the agent is composed of stimulus-response behaviours; each state is mapped directly to an action.]

Fig. 11.1.5 Reactive agent architecture


1. The reactive agent approaches are sometimes referred to as behavioral (because a common theme is that of developing and combining individual behaviors), situated (because a common theme is that of agents actually situated in some environment, rather than being disembodied from it), and ultimately reactive (because such systems are understood to be just reacting to an environment, without reasoning about it).
2. The best-known reactive agent architecture is the subsumption architecture. It was developed by Rodney Brooks, who has done pioneering work on this reactive architecture. He has been one of the most influential critics of the symbolic approach and gave another approach for developing intelligent agents.
3. Characteristics of subsumption architecture
(a) In subsumption architecture, an agent's decision-making is realized through a set of task accomplishing behaviors. Each behavior of an agent may be thought of as an individual action function, which continually takes perceptual input and maps it to an action to perform. Each of these behavior modules is designed to achieve some particular task. In Brooks' implementation, the behavior modules are modeled as finite state machines. An important point to note in this case is that these task accomplishing modules do not comprise any complex symbolic representations, and they do not do any symbolic reasoning at all. In many implementations, these behaviors are implemented as rules of the form situation → action (which maps perceptual input directly to actions).
(b) In subsumption architecture multiple behaviors can "fire" simultaneously. Hence there is a need for a mechanism to choose between the different actions selected by these multiple behaviors. Brooks has proposed arranging the modules into a hierarchy, with the behaviors arranged into layers. Lower layers in the hierarchy are able to inhibit higher layers, which means that the lower the layer, the higher is its priority. The idea is that higher layers represent more abstract (overall/consolidated) behaviors. For example, one might desire a behavior in a mobile robot for the behavior "avoid obstacles". It makes sense to give obstacle avoidance a high priority, and hence this behavior will typically be encoded in a low-level layer, which has high priority.
4. Subsumption architecture model implementation and working
The observe function, which represents the agent's perceptual ability, is the same as in the earlier intelligent agent architectures. But in subsumption architecture systems, there is tight coupling between perception and action. The raw sensor input is not processed or transformed much, and there is certainly no attempt to transform images to symbolic representations.
The decision function action is realized through a set of behaviors, together with an inhibition relation holding between these behaviors. A behavior is a pair (c, a), where c ⊆ P is a set of percepts called the condition, and a ∈ A is an action. A behavior (c, a) will fire when the environment is in state s ∈ S if observe(s) ∈ c.
Let Behave = {(c, a) | c ⊆ P and a ∈ A} be the set of all behavioral rules. Associated with an agent's set of behavioral rules R ⊆ Behave is a binary inhibition relation on the set of behaviors, ≺ ⊆ R × R. This relation is a strict total ordering on R, which means that the relation is transitive, irreflexive and antisymmetric. The statement b1 ≺ b2 means that b1 inhibits b2, that is, b1 is lower in the hierarchy than b2 and so b1 will get higher priority than b2.
The action selection begins by first computing the set named fired of all behaviors that fire. Then, each behavior (c, a) that fires is checked, to determine whether there is some other higher priority behavior that also fires. If not, then the action part of the behavior, a, is returned as the selected action. If no behavior fires, then the distinguished action null is returned, indicating that no action has been chosen.
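A minimal Python sketch of this action selection scheme is given below, assuming percepts and actions are strings, a behavior fires when the current percept belongs to its condition set, and the behavior list is already sorted by priority. The example behaviors for obstacle avoidance and wandering are illustrative.

from typing import List, Optional, Set, Tuple

Behavior = Tuple[Set[str], str]              # (condition c, action a)

def select_action(percept: str, behaviors: List[Behavior]) -> Optional[str]:
    fired = [(c, a) for (c, a) in behaviors if percept in c]
    # behaviors is priority-ordered (index 0 = lowest layer = highest priority),
    # so the first behavior that fires is not inhibited by any other fired behavior.
    return fired[0][1] if fired else None    # None plays the role of the null action

behaviors = [
    ({"obstacle_ahead"}, "turn"),                 # layer 0: avoid obstacles
    ({"obstacle_ahead", "clear"}, "forward"),     # layer 1: wander around
]
print(select_action("obstacle_ahead", behaviors))    # -> turn
print(select_action("clear", behaviors))             # -> forward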
5. Reactive agent approach - advantages
(a) The overall time complexity of the subsumption action function is no worse than O(n²), where n is the larger of the number of behaviors or the number of percepts. Thus, even with the naive algorithm as discussed above, decision making is tractable. In practice, it works considerably better than O(n²). The decision making logic can be encoded into hardware, giving constant decision time. For modern hardware, this means that an agent can be guaranteed to select an action within nano-seconds. This computational simplicity is the strength and major advantage of the subsumption architecture.
(b) The major advantages of reactive approaches such as the Brooks subsumption architecture are simplicity, economy, computational tractability, and robustness against failure.

6. Reactive agent approach - disadvantages
(a) In reactive agents, if the agents do not employ models of their environment, then they must have sufficient information available in their local environment for them to determine an acceptable action. As purely reactive agents make decisions based on local information (i.e., information about the agent's current state), it is difficult to see how such decision making could take into account non-local information, and hence they must inherently take a "short term" view.
(b) It is difficult to model purely reactive agents that can be designed to learn from experience so as to improve their performance over time. The important central benefit and popularity of purely reactive systems is that overall behavior emerges from the interaction of the component behaviors when the agent is placed in its environment. But the term "emerges" suggests that the relationship between individual behaviors, environment, and overall behavior is not understandable. This necessarily makes it very hard to engineer agents to fulfill specific tasks. Also, there is no principled methodology for building such agents. The process of designing such an agent is highly tedious and requires a lot of trial and error.
(c) The complexity of reactive agents still goes on increasing when they contain many layers that encompass behaviors. The dynamics of the interactions between the different behaviors become too complex to understand.
Researchers have proposed solutions to these problems. One of the widely accepted solutions is to design an evolving agent to perform certain tasks.

11.1.5.3 Belief-Desire-Intention (BDI) Agent Architecture


1. This architecture has been thought of from the philosophical tradition of understanding practical reasoning, the process of deciding, moment by moment, which action to perform in the furtherance of one's goals.

2. The terms beliefs, desires and intentions can be defined as follows,


(a) Beliefs - Beliefs represent the informational state of the agent, that is, the agent's beliefs about the world (including itself and other agents). Beliefs can also include inference rules, allowing forward chaining to lead to new beliefs. Using the term belief instead of knowledge recognizes that what an agent believes may not necessarily be true (and in fact may change in the future). The beliefs are stored in a database (sometimes called a belief base or a belief set), though how they are stored is an implementation decision.
(b) Desires - Desires represent the motivational state of the agent. They model objectives or situations that the agent would like to accomplish or bring about. For example, find the best offer, go to the function, or become the winner.
(c) Goals - A goal is a desire that has been adopted for active pursuit by the agent. Usage of the term goals puts on the restriction that the set of active desires must be consistent. For example, one should not have concurrent goals to go to a function and to stay at the garden, though they could both be desirable for the agent.
(d) Intentions - Intentions represent the deliberative state of the agent, indicating what the agent has chosen to do. Intentions are a subset of desires to which the agent has, up to some extent, committed. In implemented systems, this means the agent has begun executing a plan to achieve the goal.
(e) Plans - The sequences of actions (recipes or knowledge areas) that an agent can perform to achieve one or more of its intentions are termed as plans. Plans may include other sub-plans to attain the final goal. For example, an agent can have a plan to go for shopping that may include a sub-plan to search for the best offer. Plans are initially only partially conceived, and the details are filled in as the agent progresses in executing them.
(f) Events - Events are triggers for reactive activity to be carried out by the agent. An event may update beliefs, trigger plans or modify goals. Events may be generated externally and received by sensors or integrated systems. Also, events may be generated internally to trigger decoupled updates or plans of activity or updated details.
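As a rough illustration, the structures described above could be captured with plain data structures as in the following Python sketch. The field names and the shapes of Plan and Event are assumptions made for this example; implemented BDI platforms differ in detail.

from dataclasses import dataclass, field
from typing import List, Set

Belief = str                                  # e.g. "Dirt(0,0)"
Desire = str                                  # e.g. "room_clean"

@dataclass
class Plan:
    goal: Desire                              # the desire/intention this plan serves
    steps: List[str]                          # sequence of actions or sub-goal names
    subplans: List["Plan"] = field(default_factory=list)

@dataclass
class Event:
    kind: str                                 # e.g. "belief_update", "new_goal"
    payload: str

@dataclass
class AgentState:
    beliefs: Set[Belief] = field(default_factory=set)        # belief base
    desires: Set[Desire] = field(default_factory=set)        # motivational state
    intentions: List[Desire] = field(default_factory=list)   # adopted desires
    plans: List[Plan] = field(default_factory=list)          # plan library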
3. This architecture involves two crucial processes of practical reasoning, namely deciding what goals are to be achieved and deciding how these goals are going to be achieved. The process of deciding what goals to achieve is known as deliberation, whereas deciding how to achieve the goals is known as means-ends reasoning.
4. Role of intentions in BDI architecture
Intentions play a number of important roles in practical reasoning. Most of the means-ends reasoning is intention driven. Intentions put constraints on future deliberation. Intentions have a persisting nature and hence they affect the agent's working until the goal is achieved. Also, intentions influence the beliefs of the agent upon which future practical reasoning is based.
5. Problems associated with BDI architecture
(a) A central problem in the design of practical reasoning agents is that of achieving a good balance between the different concerns related with intentions. Specifically, it seems clear that an agent should at times drop some intentions (because it comes to believe that either they will never be achieved, they are already achieved, or else because the reason for having the intention is no longer present). It may happen that, from time to time, the agent has to reconsider its intentions. But reconsideration incurs a cost in terms of both time and computational resources. Therefore, it poses a critical issue wherein two situations may arise,
Either an agent that does not reconsider sufficiently often will continue attempting to achieve its intentions even after it is clear that they cannot be achieved, or that there is no longer any reason for achieving them.
Or an agent that constantly reconsiders its intentions may spend insufficient time actually working to achieve them, and hence runs the risk of never practically achieving them.
This critical issue is essentially the problem of balancing pro-active (goal directed) and reactive (event driven) behavior. There is clearly a tradeoff between the degree of commitment and reconsideration at work.
(b) Another issue is that different types of environment require different types of decision strategies. In a static, unchanging environment, purely pro-active, goal-directed behavior is adequate. But in more dynamic environments, the ability to react to changes by modifying intentions becomes more important.
6. Reasoning process in a BDI agent
(a) Main components of a BDI agent are,
1. A set of current beliefs, representing information the agent has about its current environment.
2. A belief revision function, (brf), which takes a perceptual input and the agent's current beliefs, and on the basis of these, determines a new set of beliefs.
3. An option generation function, (options), which determines the options available to the agent (its desires), on the basis of its current beliefs about its environment and its current intentions.
4. A set of current options, representing possible courses of action available to the agent.
5. A filter function (filter), which represents the agent's deliberation process, and which determines the agent's intentions on the basis of its current beliefs, desires, and intentions.
6. A set of current intentions, representing the agent's current focus. These are the states of affairs that it has committed to trying to bring about.
7. An action selection function (execute), which determines an action to perform on the basis of current intentions.
[Figure: data flow in a BDI agent - sensor input → brf → beliefs → generate options → desires → filter → intentions → action → action output.]

Fig. 11.1.6 Belief-desire-intention architecture
(b) Let Blf be the set of all possible beliefs, Dsr be the set of all possible desires, and Int be the set of all possible intentions. Usually, beliefs, desires, and intentions are represented as logical formulae, perhaps of first-order logic. An agent's belief revision function is a mapping, brf : 2^Blf × P → 2^Blf, which on the basis of the current percept and current beliefs determines a new set of beliefs. The option generation function, options : 2^Blf × 2^Int → 2^Dsr, maps a set of beliefs and a set of intentions to a set of desires.
(c) The options function plays several roles. First, it is responsible for the agent's means-ends reasoning - the process of deciding how to achieve intentions. Thus, once an agent has formed an intention to achieve x, it must subsequently consider options to achieve x. The BDI agent's option generation process is one of recursively elaborating a hierarchical plan structure, considering and committing to progressively more specific intentions, until finally it reaches intentions that correspond to immediately executable actions.
(d) Important properties of the options function - The options function must be consistent, that is, any options generated must be consistent with both the agent's current beliefs and current intentions. The options function must be opportunistic, in that it should recognize when environmental circumstances change advantageously, to offer the agent new ways of achieving intentions, or the possibility of achieving intentions that were otherwise unachievable.
(e) A BDI agent's deliberation process (deciding what to do) is represented in the filter function, which updates the agent's intentions on the basis of its previously-held intentions and current beliefs and desires. This function takes care of two major tasks. First, it must drop any intentions that are no longer achievable, or for which the expected cost of achieving them exceeds the expected gain associated with successfully achieving them. Second, it should retain intentions that are not yet achieved, and that are still expected to have a positive overall benefit. Also, it should adopt new intentions, either to achieve existing intentions, or to exploit new opportunities. This function does not introduce intentions from nowhere. In other words, current intentions are either previously held intentions or newly adopted options.
(f) The execute function returns any executable intention, i.e., one that corresponds to a directly executable action. The agent decision function, action, of a BDI agent is then a function action : P → A.
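Putting the pieces together, one cycle of a BDI agent could be sketched in Python as below. The function name bdi_step and the choice of returning the revised beliefs and intentions along with the action (so that the state can persist across cycles) are illustrative; only the order of the calls (brf, then options, then filter, then execute) follows the description above.

from typing import Callable, List, Set, Tuple

Belief = Desire = Intention = Percept = Action = str

def bdi_step(percept: Percept,
             beliefs: Set[Belief],
             intentions: List[Intention],
             brf: Callable[[Set[Belief], Percept], Set[Belief]],
             options: Callable[[Set[Belief], List[Intention]], Set[Desire]],
             filt: Callable[[Set[Belief], Set[Desire], List[Intention]], List[Intention]],
             execute: Callable[[List[Intention]], Action]
             ) -> Tuple[Action, Set[Belief], List[Intention]]:
    """One pass of the BDI cycle: revise beliefs, generate options,
    deliberate (filter), then act on the chosen intentions."""
    beliefs = brf(beliefs, percept)                   # belief revision
    desires = options(beliefs, intentions)            # option generation
    intentions = filt(beliefs, desires, intentions)   # deliberation
    return execute(intentions), beliefs, intentions   # action plus updated state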


(g) Generally a priority is attached with each intention, indicating its relative importance. Another natural idea is to represent intentions as a stack. An intention is pushed on to the stack when it is adopted, and popped when it is either achieved or else not achievable. More abstract intentions will tend to be at the bottom of the stack and more concrete intentions towards the top.
(h) To conclude, BDI architectures are practical reasoning architectures, in which the process of deciding what to do resembles the kind of practical reasoning that humans adopt. The basic components of a BDI architecture are data structures representing the beliefs, desires, and intentions of the agent, and functions that represent its deliberation and means-ends reasoning. Intentions play a central role in the BDI model by providing stability for decision making, and act to focus the agent's practical reasoning.
7. BDI approach - advantages
(a) As this model uses a standard human reasoning process to reach the goal, it is easy to understand.
(b) It has a clear functional decomposition, which indicates what sorts of subsystems might be required to build an agent.

8. BDI approach - disadvantages
The main difficulty lies in knowing how to efficiently implement all the BDI model functions.

11.1.5.4 Layered Architecture


[Figure: three layering schemes - (a) horizontal layering: every layer receives the perceptual input and produces action output; (b) vertical layering (one pass control): perceptual input enters the lowest layer and action output leaves the topmost layer; (c) vertical layering (two pass control): information flows up through the layers and control flows back down to the action output.]

Fig. 11.1.7 Layered agent architecture types

1. In layered architectures various subsystems are arranged into a hierarchy of interacting layers. Usually, there will be at least two layers, to deal with reactive and pro-active behaviors respectively. Depending upon the requirement of the task there can be many more layers. Multiple layers provide a better separation of the tasks to be performed.
2. Formally, there are two types of control flow within layered architectures,
Horizontal layering - In horizontally layered architectures, the software layers are each directly connected to the sensory input and action output. In effect, each layer itself acts like an agent and generates what action to perform.
Vertical layering - In vertically layered architectures, sensory input and action output are each dealt with by at most one layer each.
3. The advantage of horizontally layered architectures is their conceptual simplicity. That is, as per the requirement, each of the agent's behaviors will be modeled as a different layer. But as the layers are competing with one another to generate action suggestions, there is a possible problem that the overall behavior of the agent will not be coherent. To ensure that horizontally layered architectures are consistent, they generally include a mediator function, which makes decisions about which layer has "control" of the agent at any given point of time. For this there should be a properly designed interaction between the layers. If there are n layers in the architecture, and each layer is capable of suggesting m possible actions, then there are m^n such interactions to be considered, which makes the system complex in terms of processing time. The introduction of a central control system also introduces a bottleneck into the agent's decision making. These problems are partially handled by a vertically layered architecture.
4. One can subdivide vertically layered architectures into one pass architectures and two pass architectures. In one-pass architectures, control flows sequentially through each layer, until the final layer generates action output. In two-pass architectures, information flows up the architecture (the first pass) and control then flows back down. There are some interesting similarities between the idea of two-pass vertically layered architectures and the way that organisations work, with information flowing up to the highest levels of the organisation, and commands then flowing down. In both one pass and two pass vertically layered architectures, the complexity of interactions between layers is reduced because there are n - 1 interfaces between n layers; then, if each layer is capable of suggesting m actions, there are at most m²(n - 1) interactions to be considered between layers. This is clearly much simpler than the horizontally layered case. Though this looks simple, a vertically layered architecture needs to pass control through each layer so as to make the decision. So failures in any one layer are likely to have serious consequences for agent performance.
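As a quick worked comparison under assumed values (n = 4 layers, each suggesting m = 3 actions), the two interaction counts can be computed as below.

m, n = 3, 4                      # m actions per layer, n layers (assumed values)
horizontal = m ** n              # the mediator must consider m^n combinations
vertical = (m ** 2) * (n - 1)    # at most m^2 interactions at each of the n - 1 interfaces
print(horizontal, vertical)      # 81 27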

5. Two very popular examples of layered architectures are Ferguson's TOURINGMACHINES, which is a horizontally layered architecture, and Jörg Müller's INTERRAP, which is a two-pass vertically layered architecture.
