
Unit IV

SOFTWARE AGENTS

Seminar link:
https://drive.google.com/file/d/145lCDlpTamSMsMQ_5N4nKZbp8chtcMf/view?usp=sharing
Intelligent Agent
• An intelligent agent is one that exhibits the following types of behavior in
order to meet its delegated objectives:
1) Proactiveness: Intelligent agents are able to exhibit goal-directed
behavior by taking the initiative in order to satisfy their delegated
objectives. (Goal-driven nature)
2) Reactivity: Intelligent agents are able to perceive their environment,
and respond in a timely fashion to changes that occur in it in order to
satisfy their delegated objectives. (Event-driven)
3) Social ability: Intelligent agents are capable of interacting with other
agents (and possibly humans) in order to satisfy their design objectives.
Architecture for Intelligent Agents
There are four main types of architecture:
1) Logic-based agents – in which the decision about what action to perform is
made via logical deduction.
2) Reactive agents – in which decision making is implemented in some form of
direct mapping from situation to action. (situation -> action)
3) Belief-desire-intention agents – in which decision making depends upon the
manipulation of data structures representing the beliefs, desires, and intentions
of the agent.
4) Layered architectures – in which decision making is realized via various
software layers, each of which is more or less explicitly reasoning about the
environment at different levels of abstraction.
Logic-Based Architecture

The “traditional” approach to building artificially intelligent systems (known as
symbolic AI) suggests that intelligent behavior can be generated in a system by
giving that system a symbolic representation of its environment and its desired
behavior, and syntactically manipulating this representation.

In logic-based agents, the symbolic representations are logical formulae,
and the syntactic manipulation corresponds to logical deduction, or
theorem proving.

Logic-based agents are also called deliberate agents.

Such agents are assumed to maintain an internal database of
formulae of classical first-order predicate logic, which represents in symbolic
form the information they have about their environment.
 The database is the information that the agent has about
its environment.
 An agent’s database plays a somewhat analogous role to that of belief in
humans.
 Let L be the set of sentences of first-order logic.
 The internal state of a deliberate agent – the agent’s “beliefs” – is then a subset
of L, i.e., a set of formulae of first-order logic.
 We write Δ, Δ1, ... to denote such belief databases.
 An agent’s decision-making process is modeled through a set of deduction
rules, ρ(rho).
 These are simply rules of inference for the logic.
 Let ϕ be a first-order formula, Δ a database, and ρ a set of deduction rules.
 We write Δ ⊢ρ ϕ if the first-order formula ϕ can be proved from the database
Δ using only the deduction rules ρ.
1. Our agent can receive a percept dirt (signifying that there is dirt
beneath it), or null (indicating no special information).
2. It can perform any one of three possible actions:
forward, suck, or turn.
3. The goal is to traverse the room, continually searching for and removing dirt.
4. The domain predicates used here are:
• In(x, y) – agent is at (x, y)
• Dirt(x, y) – there is dirt at (x, y)
• Facing(d) – the agent is facing direction d
5. The rules ρ govern our agent's behavior. The rules we use have the form
ϕ(...) −→ ψ(...)
where ϕ (phi) and ψ (psi) are predicates over some arbitrary list of
constants and variables. The idea is that if ϕ matches against the agent's
database, then ψ can be concluded, with any variables in ψ instantiated.
• The first rule deals with the basic cleaning action of the agent
In(x, y) ∧ Dirt(x, y) −→ Do(suck)
• The robot will always move
from (0,0) to (0,1) to (0,2) and then to (1,2), to (1,1), and so on. Once it reaches (2, 2),
it must head back to (0, 0).
• The rules dealing with the traversal up
to (0, 2) are very simple,
In(0, 0) ∧ Facing(north) ∧ ¬Dirt(0, 0) −→ Do(forward)
In(0, 1) ∧ Facing(north) ∧ ¬Dirt(0, 1) −→ Do(forward)
In(0, 2) ∧ Facing(north) ∧ ¬Dirt(0, 2) −→ Do(turn)
In(0, 2) ∧ Facing(east) −→ Do(forward)

• Similar rules can easily be generated that will get the agent to (2, 2),and once at (2, 2)
back to (0, 0).
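The deduction rules above can be sketched in Python. This is a hedged simplification, not the formalism itself: the database Δ becomes a set of ground-fact tuples, and Δ ⊢ρ Do(a) is approximated by checking the rules in order and returning the first action that matches; only the rules shown above are encoded, with a fallback turn for situations they do not cover.

```python
# A minimal logic-based vacuum agent: the belief database is a set of ground
# facts such as ('In', 0, 0) or ('Facing', 'north'); each rule maps a
# condition on the database to an action, mimicking Δ ⊢ρ Do(a).

def rules(db):
    """Return the action suggested by the first matching deduction rule."""
    x, y = next((f[1], f[2]) for f in db if f[0] == 'In')
    facing = next(f[1] for f in db if f[0] == 'Facing')
    if ('Dirt', x, y) in db:                    # In(x,y) ∧ Dirt(x,y) → Do(suck)
        return 'suck'
    if (x, y) == (0, 2) and facing == 'north':  # In(0,2) ∧ Facing(north) ∧ ¬Dirt(0,2) → Do(turn)
        return 'turn'
    if facing == 'north':                       # traversal up the first column
        return 'forward'
    if facing == 'east':                        # In(0,2) ∧ Facing(east) → Do(forward)
        return 'forward'
    return 'turn'                               # fallback (not in the rules above)

db = {('In', 0, 0), ('Facing', 'north'), ('Dirt', 0, 0)}
print(rules(db))  # suck: the cleaning rule takes priority over traversal
```

Note that the dirt check comes first, so the forward rules are only reached when ¬Dirt holds at the current square, matching the rule set above.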

• Calculative Rationality: An agent is said to enjoy the property of calculative rationality
if, and only if, its decision-making apparatus will suggest an action that was optimal
when the decision-making process began.
Reactive Architecture
• Reactive agents – in which decision making is implemented in some form
of direct mapping from situation to action.
• The best-known reactive agent architecture is the subsumption architecture,
developed by Rodney Brooks.
• There are two defining characteristics of the subsumption architecture. The first is that
an agent’s decision making is realized through a set of task-accomplishing behaviors.
Each behavior may be thought of as an individual action-selection process.
• An important point to note is that these task accomplishing modules are assumed to
include no complex symbolic representations, and are assumed to do no symbolic
reasoning at all.
• The second defining characteristic of the subsumption architecture is that
many behaviors can “fire” simultaneously.
• Brooks proposed arranging the behavior modules into a subsumption hierarchy,
with the behaviors arranged into layers. Lower layers in the hierarchy are able to
inhibit higher layers.
• The lower a layer is, the higher is its priority.
• The idea is that higher layers represent more abstract behaviors.
• For example , one might desire a behavior in a mobile robot for the behavior
“avoid obstacles.”
• It makes sense to give obstacle avoidance a high priority – hence this behavior
will typically be encoded in a low-level layer, which has a high priority.
Action Selection in Layered Architectures
• Action selection is realized through a set of behaviors together with an
inhibition relation .
• We write b1 ≺ b2, and read this as 'b1 inhibits b2' - b1 is lower in the hierarchy
than b2, and will hence get priority over b2. (b1 is the inhibiting layer)
Example : Mars Explorer
Mechanisms used in the Explorer

1) A gradient field :
• The mother ship generates a radio signal so that agents can know in which
direction the mother ship lies .
• An agent needs to travel 'up the gradient' of signal strength .
• The signal need not carry any information
2) Indirect communication:
• Agents will carry 'radioactive crumbs', which can be dropped, picked up and
detected by passing robots .
The various behaviors that can be derived are:
b0: if detect an obstacle then change direction
b1: if carrying a sample and at the base then drop sample
b2: if carrying a sample and not at the base then travel up gradient
b3: if detect a sample and not at the base then pick up sample
b4: if true then move randomly (The final behavior ensures that an agent with
“nothing better to do” will explore randomly)
The above behaviors are arranged into the hierarchy:
b0 ≺ b1 ≺ b2 ≺ b3 ≺ b4
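The Mars explorer's action selection can be sketched in Python. This is a hedged sketch: each behavior is a (condition, action) pair, the list order encodes the inhibition relation b0 ≺ b1 ≺ b2 ≺ b3 ≺ b4, and the percept dictionary keys are names chosen here for illustration.

```python
# Subsumption-style action selection: behaviors are checked in inhibition
# order, so the first (lowest, highest-priority) behavior whose condition
# fires inhibits all later (higher) behaviors.

def select_action(percepts):
    behaviors = [
        (lambda p: p['obstacle'],                      'change direction'),    # b0
        (lambda p: p['carrying'] and p['at_base'],     'drop sample'),         # b1
        (lambda p: p['carrying'] and not p['at_base'], 'travel up gradient'),  # b2
        (lambda p: p['sample'] and not p['at_base'],   'pick up sample'),      # b3
        (lambda p: True,                               'move randomly'),       # b4
    ]
    for condition, action in behaviors:  # lower layers checked first
        if condition(percepts):
            return action

p = {'obstacle': False, 'carrying': True, 'at_base': False, 'sample': True}
print(select_action(p))  # travel up gradient: b2 inhibits b3 and b4
```

Because b4's condition is always true, an agent with nothing better to do falls through to random exploration, as in the text.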
Belief-Desire-Intention Architectures
• In BDI architectures, decision making depends upon the manipulation of data
structures representing the beliefs, desires, and intentions of the agent.
Layered Architecture

• Given the requirement that an agent be capable of reactive and proactive
behavior, this involves creating separate subsystems to deal with these
different types of behavior.
• This idea leads naturally to a class of architectures (layered architectures) in
which the various subsystems are arranged into a hierarchy of interacting
layers (software layers).
• Examples of such layered architectures: INTERRAP and TOURINGMACHINES.
• Typically, there will be at least two layers, to deal with reactive and proactive
behaviors, respectively.
• Typology: layered architectures are classified by the information and control flows between the layers.
• There are two major types of layering :
1)Horizontal layering 2) Vertical layering
Horizontal Layering:
Here, each software layer is directly connected to the sensory input and
action output. In effect, each layer itself acts like an agent, producing
suggestions as to what action to perform.

• In order to ensure that horizontally layered architectures are consistent,
they generally include a mediator function, which makes decisions about
which layer has “control” of the agent at any given time.
• The introduction of a central control system also introduces a bottleneck
into the agent’s decision making.
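The mediator idea can be sketched in Python. This is a hedged illustration, not a real architecture: the layer functions and their names are invented here, and the mediator simply prefers the first layer that proposes an action.

```python
# Horizontally layered agent: every layer sees the same percept and proposes
# an action (or None); a mediator function decides which layer has control.

def reactive_layer(percept):
    return 'avoid' if percept.get('obstacle') else None

def planning_layer(percept):
    return 'follow plan'  # always has a suggestion

def mediator(percept):
    # All layers run on the same input; the mediator arbitrates between them,
    # here by giving the reactive layer control whenever it fires.
    proposals = [reactive_layer(percept), planning_layer(percept)]
    return next(a for a in proposals if a is not None)

print(mediator({'obstacle': True}))  # avoid
print(mediator({}))                  # follow plan
```

The single `mediator` call through which every decision passes is exactly the bottleneck the text mentions.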
Vertical Layering:
Here the sensory input and action output are each dealt with by at most one
layer. We further subdivide vertically layered architectures into one-pass
and two-pass architectures. In a one-pass architecture, control flows
sequentially through each layer until the final layer generates action
output. In a two-pass architecture, information flows up the architecture
(the first pass) and control then flows back down (the second pass).

(Figures: vertical one-pass and vertical two-pass layering)
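A one-pass vertical flow can be sketched in Python. This is a hedged sketch with invented layer functions: each layer transforms the data and hands it to the next, and only the final layer emits the action.

```python
# One-pass vertically layered agent: control flows sequentially through the
# layers; each layer is connected only to its neighbors, and the last layer's
# output is the action.

def one_pass(percept, layers):
    data = percept
    for layer in layers:
        data = layer(data)  # each layer processes and passes the result up
    return data             # action output from the final layer

layers = [
    lambda p: {'threat': p.get('obstacle', False)},   # perception layer
    lambda d: 'evade' if d['threat'] else 'proceed',  # decision layer
]
print(one_pass({'obstacle': True}, layers))  # evade
```

A two-pass variant would run the same chain upward with information and then back downward with control, rather than producing the action at the top.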


TOURINGMACHINES – HORIZONTAL LAYERING

• TOURINGMACHINES consists of three activity producing layers.

1. Reactive Layer: implemented as a set of situation-action rules, like the
behaviors in Brooks' subsumption architecture.
2. Planning Layer: achieves the agent’s proactive behavior via plans based on a
library of schemas. In order to achieve a goal, the planning layer attempts to
find a schema in its library that matches that goal.
3.Modeling Layer : represents the various entities in the world (including the
agent itself, as well as other agents), predicts conflicts
between agents, and generates new goals to be achieved in order to resolve these
conflicts.
• The three control layers are embedded within a control subsystem, which is
effectively responsible for deciding which of the layers should have control over
the agent.
INTERRAP - VERTICAL LAYERING

• INTERRAP contains three control layers, as in TOURINGMACHINES.


• The lowest (behavior-based) layer deals with reactive behavior; the middle (local
planning) layer deals with everyday planning to achieve the agent’s goals, and the
uppermost (cooperative planning) layer deals with social interactions.
• Each layer has associated with it a knowledge base, i.e., a representation of the world
appropriate for that layer.
• The two main types of interaction between layers are bottom-up activation and top-
down execution.
• Bottom-up activation occurs when a lower layer passes control to a higher layer
because it is not competent to deal with the current situation.
• Top-down execution occurs when a higher layer makes use of the facilities provided by
a lower layer to achieve one of its goals.
• The basic flow of control in INTERRAP begins when perceptual input arrives at the
lowest layer in the architecture.
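The INTERRAP control flow described above can be sketched in Python. This is a hedged illustration only: the layer names follow the text, but the situation strings and the competence tests are invented for the example.

```python
# INTERRAP-style control flow: input arrives at the lowest layer; a layer
# that is not competent for the situation returns None, passing control to
# the layer above (bottom-up activation); the competent layer's action is
# then carried out through the layers below it (top-down execution).

def behaviour_layer(situation):
    return 'reactive response' if situation == 'routine' else None

def local_planning_layer(situation):
    return 'execute local plan' if situation == 'needs plan' else None

def cooperative_planning_layer(situation):
    return 'negotiate with other agents'  # the top layer always handles the rest

def interrap(situation):
    layers = (behaviour_layer, local_planning_layer, cooperative_planning_layer)
    for layer in layers:              # lowest layer tried first
        action = layer(situation)
        if action is not None:        # competent layer found; control flows back down
            return action

print(interrap('routine'))     # reactive response
print(interrap('needs plan'))  # execute local plan
```

Each real INTERRAP layer would also consult its own knowledge base; that detail is omitted here for brevity.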