
UNIT IV SOFTWARE AGENTS

• Architecture for Intelligent Agents
• Agent communication
• Negotiation and Bargaining
• Argumentation among Agents
• Trust and Reputation in Multi-agent systems.
An agent is commonly made up of a number of elements. These include one or more sensors that
are used to perceive the environment, one or more effectors that manipulate the environment, and
a control system. The control system provides a mapping from sensors to effectors, and is responsible for the agent's intelligent (or rational) behaviour.
EXAMPLE: Web Spider
• Virtual agent that gathers and filters information for another party.
• A web spider uses the Hypertext Transfer Protocol (HTTP) as its primary sensor, gathering data from web pages.
• Its control system is an application, which can be written in almost any language,
that drives the behavior of the web spider.
• This behavior includes web-data parsing and filtering.
• The web spider can identify new links to follow to collect additional information, and it uses HTTP to navigate the web environment.
• Finally, the web spider can communicate with a managing user through email using the Simple Mail Transfer Protocol (SMTP).
• The user can configure the web spider for collection, navigation, or filtering, and
also receive emails indicating its current status.
• The web spider is an intermediary agent for web-data gathering and filtering for a
user.
• The web spider acts on the user’s behalf for data gathering, given a set of
constraints from the user.
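As a rough illustration, the following is a minimal web-spider sketch in Python. The seed URL, the keyword filter, and the page limit are illustrative assumptions, and status reporting over SMTP is omitted; it only shows the HTTP "sensing", the parsing/filtering control logic, and the link-following described above.

# Minimal web-spider sketch (illustrative only). It uses HTTP as its
# "sensor" to fetch pages, a simple parser/filter as its control system,
# and prints results instead of emailing them via SMTP.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collects the href of every <a> tag encountered on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_url, keyword, max_pages=5):
    """Follow links breadth-first, keeping pages whose text mentions keyword."""
    frontier, visited, matches = [seed_url], set(), []
    while frontier and len(visited) < max_pages:
        url = frontier.pop(0)
        if url in visited:
            continue
        visited.add(url)
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "ignore")
        except OSError:
            continue  # unreachable page: skip, as a real spider would log it
        if keyword.lower() in html.lower():
            matches.append(url)          # filtering step
        parser = LinkParser()
        parser.feed(html)                # parsing step
        frontier.extend(urljoin(url, link) for link in parser.links)
    return matches

if __name__ == "__main__":
    # Seed URL and keyword are placeholder constraints supplied by the user.
    print(crawl("https://example.com", keyword="agent"))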
ROBOT
• A robot can also be considered an agent.
• A robot includes a variety of sensors: vision (cameras, ultrasonic transducers, infrared detectors), hearing (microphones), touch (bump sensors), as well as other sensors for pressure, temperature, and movement detection (accelerometers).
• Effectors include motors (to drive wheels, tracks, or limbs), and a
speaker for sound vocalization.
• A robot can also include a number of other effectors that can
manipulate the environment, such as a vacuum, a water pump, or
even a weapon.
• The property of rationality simply means that the agent does the right
thing at the right time, given a known outcome.
• Autonomy simply means that the agent is able to navigate its
environment without guidance from an external entity (such as a
human operator).
• Persistence implies that the agent exists over time and continuously
exists in its environment.
• Agents can communicate with other agents to provide them with information, or communicate with the users whom they represent.
ARCHITECTURE FOR INTELLIGENT AGENTS
Types of Architectures

• Reactive Architectures
• Deliberative Architectures
• Blackboard Architectures
• Belief-Desire-Intention (BDI) Architecture
• Mobile Architectures
Reactive Architectures
• A reactive architecture is the simplest
architecture for agents. In this
architecture, agent behaviors are simply a
mapping between stimulus and response.
The agent has no decision-making skills,
only reactions to the environment in
which it exists.
• Agent simply reads the environment and
then maps the state of the environment
to one or more actions. Given the
environment, more than one action may
be appropriate, and therefore the agent
must choose.
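A minimal sketch of this stimulus-response mapping, assuming a vacuum-style domain with made-up percepts; when more than one rule matches, the agent chooses by a fixed priority ordering.

# Minimal reactive-agent sketch: behaviour is a direct mapping from the
# perceived state to an action, with a fixed priority to resolve ties.
RULES = [
    # (condition on percept, action) pairs, highest priority first
    (lambda p: p.get("obstacle"), "turn_left"),
    (lambda p: p.get("dirt"),     "vacuum"),
    (lambda p: True,              "move_forward"),   # default behaviour
]

def react(percept):
    """Return the first action whose condition matches the percept."""
    for condition, action in RULES:
        if condition(percept):
            return action

if __name__ == "__main__":
    print(react({"dirt": True}))                    # -> "vacuum"
    print(react({"dirt": True, "obstacle": True}))  # ties resolved by priority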
Deliberative Architectures
A deliberative architecture includes some
deliberation over the action to perform given the
current set of inputs.
It considers the sensors, state, prior results of
given actions, and other information in order to
select the best action to perform.
Action selection can be implemented by a variety of mechanisms, including a production system, a neural network, or any other intelligent algorithm.
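A minimal sketch of deliberative action selection, assuming a battery-powered robot domain; the scoring rule that combines the current state with prior action results is an illustrative stand-in for a production system or neural network.

# Minimal deliberative-agent sketch: candidate actions are scored using the
# current state and the remembered outcomes of earlier actions, and the
# highest-scoring action is selected. The scoring rule is an assumption.
prior_results = {"explore": 0.2, "recharge": 0.9}   # learned payoffs (illustrative)

def deliberate(state, candidates):
    def score(action):
        # Combine immediate need (from state) with past success of the action.
        urgency = 1.0 - state["battery"] if action == "recharge" else state["battery"]
        return 0.5 * urgency + 0.5 * prior_results.get(action, 0.5)
    return max(candidates, key=score)

if __name__ == "__main__":
    print(deliberate({"battery": 0.15}, ["explore", "recharge"]))  # -> "recharge"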
Blackboard Architectures
The blackboard is a common work area
for a number of agents that work
cooperatively to solve a given problem.
The blackboard therefore contains information about the environment, as well as intermediate results produced by the cooperating agents.
• In this example, two separate agents are used: one samples the environment through the available sensors (the sensor agent) and the other acts on it through the available actuators (the action agent).
• The blackboard contains the current state of the environment that is
constantly updated by the sensor agent, and when an action can be
performed (as specified in the blackboard), the action agent
translates this action into control of the actuators.
• The control of the agent system is provided by one or more reasoning
agents. These agents work together to achieve the goals, which would
also be contained in the blackboard.
• In this example, the first reasoning agent could implement the goal-definition behaviors, while the second reasoning agent could implement the planning portion (to translate goals into sequences of actions).
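A minimal sketch of the blackboard arrangement described above, using a made-up temperature-control domain; the sensed values and the planning rule are assumptions.

# Minimal blackboard sketch: a shared data structure through which a sensor
# agent, a reasoning agent, and an action agent cooperate.
blackboard = {"environment": {}, "goal": "keep temperature at 20", "action": None}

def sensor_agent(reading):
    blackboard["environment"]["temperature"] = reading   # keep state current

def reasoning_agent():
    # Translate the goal plus the current state into a concrete action.
    temp = blackboard["environment"].get("temperature")
    if temp is not None:
        blackboard["action"] = "heat" if temp < 20 else "cool" if temp > 20 else "idle"

def action_agent():
    # Turn the planned action into actuator control (here: just report it).
    if blackboard["action"]:
        print(f"actuator command: {blackboard['action']}")

if __name__ == "__main__":
    sensor_agent(17.5)
    reasoning_agent()
    action_agent()   # -> actuator command: heat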
Belief-Desire-Intention (BDI) Architecture
• Belief represents the view of the world by the
agent (what it believes to be the state of the
environment in which it exists).
• Desires are the goals that define the motivation
of the agent (what it wants to achieve). The agent
may have numerous desires, which must be
consistent.
• Intentions specify that the agent uses its Beliefs and Desires to choose one or more actions in order to meet its desires.
• The BDI architecture defines the basic architecture of any deliberative agent. It stores a representation of the state of the environment (beliefs), maintains a set of goals (desires), and finally includes an intentional element that maps desires onto beliefs (to provide one or more actions that modify the state of the environment based on the agent's needs).
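A minimal BDI-style sketch, assuming a delivery-robot domain; the beliefs, desires, and the rules that commit to intentions are illustrative.

# Minimal BDI sketch: beliefs describe the perceived environment, desires are
# goals, and intentions commit to actions that the beliefs say are achievable.
beliefs = {"location": "dock", "battery": 0.8, "package_waiting": True}
desires = ["deliver_package", "recharge"]

def form_intentions(beliefs, desires):
    intentions = []
    if "deliver_package" in desires and beliefs["package_waiting"] and beliefs["battery"] > 0.3:
        intentions.append("pick_up_and_deliver")
    if "recharge" in desires and beliefs["battery"] <= 0.3:
        intentions.append("go_to_charger")
    return intentions

if __name__ == "__main__":
    print(form_intentions(beliefs, desires))   # -> ['pick_up_and_deliver']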
Mobile Architectures
• The mobile architectural pattern introduces the ability for agents to migrate themselves between hosts. The agent architecture includes
the mobility element, which allows an agent to migrate from one host
to another. An agent can migrate to any host that implements the
mobile framework.
• The mobile agent framework provides a protocol that permits
communication between hosts for agent migration.
• This framework also requires some kind of authentication and security, to prevent the mobile agent framework from becoming a conduit for viruses.
• Also implicit in the mobile agent framework is a means for discovery.
For example, which hosts are available for migration, and what
services do they provide?
• Communication is also implicit, as agents can communicate with one
another on a host, or across hosts in preparation for migration.
• The mobile agent architecture is advantageous in that it supports the development of intelligent distributed systems: distributed systems that are dynamic, and whose configuration and loading are defined by the agents themselves.
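A minimal sketch of agent migration between hosts. Real frameworks use network transport, a discovery protocol, and proper authentication; here the hosts live in one process, discovery is a dictionary, and authentication is a shared secret, all as illustrative assumptions.

# Minimal mobile-agent sketch: hosts register in a shared directory (the
# "discovery" service), and an agent migrates by handing its serialised state
# to another host, which checks a shared secret before accepting it.
import pickle

DIRECTORY = {}          # host name -> Host object (discovery service)
SHARED_SECRET = "demo"  # stand-in for real authentication

class Host:
    def __init__(self, name):
        self.name = name
        self.agents = []
        DIRECTORY[name] = self

    def accept(self, blob, secret):
        if secret != SHARED_SECRET:
            raise PermissionError("migration rejected: bad credentials")
        agent = pickle.loads(blob)          # reconstruct the agent's state
        agent["host"] = self.name
        self.agents.append(agent)

def migrate(agent, destination):
    blob = pickle.dumps(agent)              # serialise the agent for transfer
    DIRECTORY[destination].accept(blob, SHARED_SECRET)

if __name__ == "__main__":
    Host("alpha"); beta = Host("beta")
    agent = {"name": "collector", "host": "alpha", "data": [1, 2, 3]}
    migrate(agent, "beta")
    print(beta.agents)    # agent now resides on host "beta"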
AGENT COMMUNICATION
• An agent is an active object with the ability to perceive, reason, and
act.
• An agent has explicitly represented knowledge and a mechanism for operating on or drawing inferences from its knowledge.
• An agent has the ability to communicate.
• This ability is part perception (the receiving of messages) and part
action (the sending of messages).
• Communication can enable the agents to coordinate their actions and behavior, resulting in systems that are more coherent.
• Coordination
• Coordination is a property of a system of agents performing some activity in a shared environment. The degree of coordination is the extent to which they avoid extraneous activity by reducing resource contention, avoiding livelock and deadlock, and maintaining applicable safety conditions.
• Coherence
• Coherence is how well a system behaves as a unit. A problem for a
multiagent system is how it can maintain global coherence without
explicit global control.
• The agents must be able on their own to determine goals they share
with other agents, determine common tasks, avoid unnecessary
conflicts, and pool knowledge and evidence. It is helpful if there is some
form of organization among the agents.
• Dimensions of Meaning
• Three aspects to the formal study of communication:
• Syntax (how the symbols of communication are structured)
• Semantics (what the symbols denote)
• Pragmatics (how the symbols are interpreted)
• Identity
• When a communication occurs among agents, its meaning is
dependent on the identities and roles of the agents involved, and on
how the involved agents are specified. A message might be sent to a
particular agent, or to just any agent satisfying a specified condition.
• Cardinality
• A message sent privately to one agent would be understood
differently than the same message broadcast publicly
Communication Levels
• Communication protocols are typically specified at several levels.
• Lowest level of the protocol specifies the method of interconnection;
• Middle level specifies the format, or syntax, of the information being transferred;
• Top level specifies the meaning, or semantics, of the information.
• The semantics refers not only to the substance of the message, but also to the
type of the message.
• There are both binary and n-ary communication protocols.
• A binary protocol involves a single sender and a single receiver, whereas an n-ary protocol involves a single sender and multiple receivers (sometimes called broadcast or multicast).
Knowledge Query and Manipulation Language (KQML)
• A fundamental decision for the interaction of agents is to separate the
semantics of the communication protocol (which must be domain
independent) from the semantics of the enclosed message (which
may depend on the domain).
• The communication protocol must be universally shared by all agents.
It should be concise and have only a limited number of primitive
communication acts.
• The knowledge query and manipulation language (KQML) is a protocol
for exchanging information and knowledge.
• The elegance of KQML is that all information for understanding the
content of the message is included in the communication itself.
• The basic protocol is defined by the following structure:
(KQML-performative
  :sender ...
  :receiver ...
  :language ...
  :ontology ...
  :content ...)
• The KQML performatives are modeled on speech act performatives. Thus, the semantics of the KQML performatives is domain independent, while the semantics of the message is defined by the fields:
• :content (the message itself),
• :language (the language in which the message is expressed), and
• :ontology (the vocabulary of the "words" in the message).
• In effect, KQML "wraps" a message in a structure that can be
understood by any agent. (To understand the message itself, the
recipient must understand the language and have access to the
ontology.) The terms :content, :language, and :ontology delineate the
semantics of the message.
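A minimal sketch of composing a message with this structure. The performative, agent names, language, and ontology values are illustrative assumptions; only the field layout follows the KQML form shown above.

# Minimal sketch of composing a KQML message as a plain string.
def kqml_message(performative, sender, receiver, language, ontology, content):
    return (f"({performative}\n"
            f"  :sender {sender}\n"
            f"  :receiver {receiver}\n"
            f"  :language {language}\n"
            f"  :ontology {ontology}\n"
            f"  :content {content})")

if __name__ == "__main__":
    print(kqml_message("ask-one", "agent-A", "stock-server",
                       "LPROLOG", "NYSE-TICKS", '"(PRICE IBM ?price)"'))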
NEGOTIATION
• A frequent form of interaction that occurs among agents with different
goals is termed negotiation.
• Negotiation is a process by which a joint decision is reached by two or
more agents, each trying to reach an individual goal or objective.
• The agents first communicate their positions, which might conflict, and
then try to move towards agreement by making concessions or
searching for alternatives.
• The major features of negotiation are
• (1) Language used by the participating agents,
• (2) Protocol followed by the agents as they negotiate, and
• (3) Decision process that each agent uses to determine its positions,
concessions, and criteria for agreement
Two types of systems and techniques for negotiation: environment-centered and agent-centered.
• Environment-centered techniques Developers of environment-centered techniques
focus on the following problem: "How can the rules of the environment be designed so
that the agents in it, regardless of their origin, capabilities, or intentions, will interact
productively and fairly?"
• The resultant negotiation mechanism should ideally have the following attributes:
• Efficiency: the agents should not waste resources in coming to an agreement.
• Stability: no agent should have an incentive to deviate from agreed-upon strategies.
• Simplicity: the negotiation mechanism should impose low computational and bandwidth
demands on the agents.
• Distribution: the mechanism should not require a central decision maker.
• Symmetry: the mechanism should not be biased against any agent for arbitrary or
inappropriate reasons
• Agent-centered negotiation mechanisms
• Developers of agent-centered negotiation mechanisms focus on the
following problem: "Given an environment in which my agent must
operate, what is the best strategy for it to follow?"
• Most such negotiation strategies have been developed for specific problems, so few general principles of negotiation have emerged. Two elements common to many approaches are:
• Speech-act classifiers with a possible world semantics
• The assumption that agents are economically rational
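A minimal sketch of an agent-centered strategy: two economically rational agents bargain over a price by monotonic concession until their offers cross. The reservation prices and the concession step are illustrative assumptions.

# Minimal negotiation sketch: alternating monotonic concessions over a price.
def negotiate(buyer_limit, seller_limit, step=5.0, max_rounds=20):
    buyer_offer, seller_offer = 0.0, 2 * seller_limit
    for _ in range(max_rounds):
        if buyer_offer >= seller_offer:                 # offers have crossed
            return (buyer_offer + seller_offer) / 2     # split the difference
        # Each agent concedes, but never beyond its private reservation price.
        buyer_offer = min(buyer_offer + step, buyer_limit)
        seller_offer = max(seller_offer - step, seller_limit)
    return None   # no agreement reached within the protocol's round limit

if __name__ == "__main__":
    print(negotiate(buyer_limit=60.0, seller_limit=40.0))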
ARGUMENTATION AMONG AGENTS
• AI offers argumentation theory a laboratory for examining implementations of its rules and concepts.
• Approaches to argumentation in AI integrate insights from different
perspectives.
• In the artificial systems perspective, the aim is to build computer
programs that model or support argumentative tasks.
• Argumentation theory, or argumentation, is the interdisciplinary study of how conclusions can be reached through logical reasoning; that is, claims based, soundly or not, on premises.
• It includes the arts and sciences of civil debate, dialogue, conversation, and persuasion. It studies rules of inference, logic, and procedural rules in both artificial and real-world settings.
• Argumentation includes deliberation and negotiation, which are concerned with collaborative decision-making procedures.
Key components of argumentation
• Understanding and identifying arguments.
• Identifying the premises from which conclusions are derived.
• Establishing the "burden of proof": determining who made the initial claim and is thus responsible for providing evidence why his/her position merits acceptance.
• For the one carrying the "burden of proof", the advocate, to marshal evidence
for his/her position in order to convince or force the opponent's acceptance.
• In a debate, fulfillment of the burden of proof creates a burden of rejoinder.
• One must try to identify faulty reasoning in the opponent's argument, to attack the reasons/premises of the argument, to provide counterexamples if possible, to identify any fallacies, and to show why a valid conclusion cannot be derived from the reasons provided for his/her argument.
• Forms of Argument Defeat
• 1. An argument can be undermined. In this form of defeat, the premises or assumptions of an argument are attacked. This form of defeat corresponds to Hart's denial of the premises.
• 2. An argument can be undercut. In this form of defeat, the
connection between a (set of) reason(s) and a conclusion in an
argument is attacked.
• 3. An argument can be rebutted. In this form of defeat, an argument is attacked by giving an argument for an opposite conclusion.
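One common way to make such defeat relations computable is an abstract argumentation framework in the style of Dung, where arguments attack one another (whether by undermining, undercutting, or rebutting) and the arguments that ultimately survive are collected. The sketch below uses made-up arguments and attacks.

# Minimal sketch of an abstract argumentation framework: compute the set of
# arguments that survive given a set of attack relations.
def grounded_extension(arguments, attacks):
    """attacks: set of (attacker, target) pairs."""
    accepted, rejected = set(), set()
    changed = True
    while changed:
        changed = False
        for arg in arguments:
            if arg in accepted or arg in rejected:
                continue
            attackers = {a for (a, t) in attacks if t == arg}
            if attackers <= rejected:          # every attacker already defeated
                accepted.add(arg); changed = True
            elif attackers & accepted:         # attacked by an accepted argument
                rejected.add(arg); changed = True
    return accepted

if __name__ == "__main__":
    args = {"A", "B", "C"}
    attacks = {("A", "B"), ("B", "C")}         # A defeats B, so C survives
    print(grounded_extension(args, attacks))   # -> {'A', 'C'}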
TRUST AND REPUTATION IN MULTI-AGENT SYSTEMS
• Trust is subjective and contingent on the uncertainty of future
outcome (as a result of trusting).
• It depends on the level at which we apply it:
• User confidence
• Can we trust the user behind the agent?
• Is he/she a trustworthy source of some kind of knowledge?
• Does he/she act in the agent system (through his/her agents) in a trustworthy way?
Why Trust?
• Closed environments
• In closed environments, cooperation among agents is included as part of the design process: the multi-agent system is usually built by a single developer or a single team of developers, and the chosen option to reduce complexity is to ensure cooperation among the agents they build by including it as an important system requirement.
• Kindness assumption: an agent ai requesting information or a certain service from agent aj can be sure that aj will answer if it has the capabilities and resources needed; otherwise, aj will inform ai that it cannot perform the requested action.
• Trust and Reputation
• Most authors in the literature do not clearly distinguish between trust and reputation, but some do make a difference between them:
• Trust is an individual measure of confidence that a given agent has in other agent(s).
• Reputation is a social measure of confidence that a group of agents or a society has in agents or groups.
• (Social) reputation is one mechanism to compute (individual) trust.
• I will trust an agent that has a good reputation more.
• My reputation clearly affects the amount of trust that others have towards me.
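A minimal sketch of the distinction: trust is one agent's own rating of another, while reputation aggregates the ratings held by a group and can be blended with direct experience. The rating values and the weighting are illustrative assumptions.

# Minimal trust/reputation sketch.
trust = {                      # trust[rater][ratee] in [0, 1]
    "alice": {"carol": 0.9},
    "bob":   {"carol": 0.6},
    "dave":  {"carol": 0.3},
}

def reputation(ratee, society):
    """Social measure: average of the individual trust values in the group."""
    ratings = [trust[r][ratee] for r in society if ratee in trust.get(r, {})]
    return sum(ratings) / len(ratings) if ratings else None

def combined_trust(my_rating, rep, weight_own=0.7):
    """Blend direct experience with reputation when deciding whom to trust."""
    return weight_own * my_rating + (1 - weight_own) * rep

if __name__ == "__main__":
    rep = reputation("carol", ["alice", "bob", "dave"])
    print(round(rep, 2))                          # -> 0.6
    print(round(combined_trust(0.8, rep), 2))     # -> 0.74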
