Artificial Intelligence 60 (1993) 51-92
Elsevier

ARTINT 931
Agent-oriented programming
Yoav Shoham
Robotics Laboratory, Computer Science Department, Stanford University, Stanford, CA 94305, USA
Received June 1991
Revised February 1992
Abstract
Shoham, Y., Agent-oriented programming, Artificial Intelligence 60 (1993) 51-92.

A new computational framework is presented, called agent-oriented programming (AOP), which can be viewed as a specialization of object-oriented programming. The state of an agent consists of components such as beliefs, decisions, capabilities, and obligations; for this reason the state of an agent is called its mental state. The mental state of agents is described formally in an extension of standard epistemic logics: beside temporalizing the knowledge and belief operators, AOP introduces operators for obligation, decision, and capability. Agents are controlled by agent programs, which include primitives for communicating with other agents. In the spirit of speech act theory, each communication primitive is of a certain type: informing, requesting, offering, and so on. This article presents the concept of AOP, discusses the concept of mental state and its formal underpinning, defines a class of agent interpreters, and then describes in detail a specific interpreter that has been implemented.
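As a loose illustration of the abstract's last ingredient, a speech-act-typed communication primitive can be rendered as a message that carries its type alongside its content. The sketch below is hypothetical (it is not the syntax of the interpreter described later in the paper), and all names in it are invented for exposition:

from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical sketch: in the spirit of speech act theory, every message
# carries a type (its illocutionary force), not just a content.

class SpeechAct(Enum):
    INFORM = auto()   # assert a fact to another agent
    REQUEST = auto()  # ask another agent to act
    OFFER = auto()    # propose to take on a commitment

@dataclass
class Message:
    sender: str
    receiver: str
    act: SpeechAct
    content: str

msg = Message("agent_a", "agent_b", SpeechAct.REQUEST, "open(valve7)")

Typing the message rather than burying the force inside the content is what lets an interpreter react uniformly: all requests are handled one way, all informs another, regardless of what is requested or asserted.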
1. Introduction
This paper proposes a new programming paradigm. The paradigm promotes a societal view of computation, in which multiple "agents" interact with one another, although in this document we will concentrate on the design of the individual agent. Many of the ideas here intersect and interact with the ideas of others. For the sake of continuity, however, I will not place this work in the context of other work until the end.
Correspondence to: Y. Shoham, Robotics Laboratory, Computer Science Department, Stanford University, Stanford, CA 94305, USA. Telephone: (415) 723-3432. E-mail: shoham@cs.stanford.edu.

0004-3702/93/$06.00 © 1993 Elsevier Science Publishers B.V. All rights reserved
 
1.1. What is an agent?
The term "agent" is used frequently these days. This is true in AI, butalso outside it, for example in connection with databases and manufacturingautomation. Although increasingly popular, the term has been used in suchdiverse ways that it has become meaningless without reference to a particularnotion of agenthood. Some notions are primarily intuitive, others quiteformal. Some are very austere, defining an agent in automata-theoreticterms, and others use a more lavish vocabulary. The original sense of theword, of someone acting on behalf of someone else, has been all but lostin At (an exception that comes to mind is the use of the word in theintelligent-interfaces community, where there is talk of "software agents"carrying out the user's wishes; this is also the sense of
agency theoo'
ineconomics [64]). Most often, when people in AI use the term "agent",they refer to an entity that functions continuously and autonomously inan environment in which other processes take place and other agents exist.This is perhaps the only property that is assumed uniformly by those in AIwho use the term. The sense of "'autonomy" is not precise, but the termis taken to mean that the agents" activities do not require constant humanguidance or intervention. Often certain further assumptions are made aboutthe environment, for example that it is physical and partially unpredictable.In fact, agents are sometimes taken to be robotic agents, in which caseother issues such as sensory input, motor control, and time pressure arementioned. Finally, agents are often taken to be "high-level". Althoughthis sense is quite vague, many take some version of it to distinguishagents from other software or hardware components. The "high level" ismanifested in symbolic representation and/or some cognitive-like function:agents may be "informable" [24], may contain symbolic plans in additionto stimulus-response rules
[13.18,29,53],
and may even possess naturallanguage capabilities. This sense is
not
assumed uniformly in At, and in facta certain counter-ideology deliberately denies the centrality or even existenceof high-level representation in agents [I,7].Clearly, the notion of agenthood in AI is anything but crisp. I shouldtherefore make it clear what 1 mean by the term "agent", which is preciselythis: An agent is an entity whose state is viewed as consisting of mentalcomponents such as beliefs, capabilities, choices, and commitments. Thesecomponents are defined in a precise fashion, and stand in rough correspon-dence to their common sense counterparts. In this view, therefore, agenthoodis in the mind of the programmer: What makes any hardware or softwarecomponent an agent is precisely the fact that one has chosen to analyze andcontrol it in these mental terms.The question of what an agent is is now replaced by the question ofwhat entities can be viewed as having mental state. The answer is that
 
Agent-oriented programming
53
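To make this definitional move concrete, one might record an agent's state in mental terms roughly as follows. This is a hypothetical sketch only, with invented names; the paper's actual formalization is a temporal extension of epistemic logic, not a program:

from dataclasses import dataclass, field

# Hypothetical sketch: a state that the programmer chooses to *describe*
# in mental terms, per the definition above.

@dataclass
class MentalState:
    beliefs: set = field(default_factory=set)       # facts the agent holds true
    capabilities: set = field(default_factory=set)  # actions it is able to perform
    choices: set = field(default_factory=set)       # decisions it has made
    commitments: set = field(default_factory=set)   # obligations to itself or others

# Viewing a component as an agent means choosing to analyze it this way:
thermostat = MentalState(
    beliefs={"room_temp=17C", "target=20C"},
    capabilities={"heat_on", "heat_off"},
    choices={"heat_on"},
    commitments={"maintain(target)"},
)

Nothing about the thermostat itself changes under this description; what changes is the vocabulary in which we analyze and control it.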
The question of what an agent is is now replaced by the question of what entities can be viewed as having mental state. The answer is that anything can be so described, although it is not always advantageous to do so. This view is not original to me. For example, in [15] and other publications, Daniel Dennett proposes the "intentional stance", from which systems are ascribed mental qualities such as intentions and free will. The issue, according to Dennett, is not whether a system really is intentional, but whether we can coherently view it as such. Similar sentiments are expressed by John McCarthy in [47], who also distinguishes between the "legitimacy" of ascribing mental qualities to machines and its "usefulness":

To ascribe certain beliefs, free will, intentions, consciousness, abilities or wants to a machine or computer program is legitimate when such an ascription expresses the same information about the machine that it expresses about a person. It is useful when the ascription helps us understand the structure of the machine, its past or future behavior, or how to repair or improve it. It is perhaps never logically required even for humans, but expressing reasonably briefly what is actually known about the state of the machine in a particular situation may require mental qualities or qualities isomorphic to them. Theories of belief, knowledge and wanting can be constructed for machines in a simpler setting than for humans, and later applied to humans. Ascription of mental qualities is most straightforward for machines of known structure such as thermostats and computer operating systems, but is most useful when applied to entities whose structure is very incompletely known.

In [67] I illustrate the point through the light switch example. It is perfectly coherent to treat a light switch as a (very cooperative) agent with the capability of transmitting current at will, who invariably transmits current when it believes that we want it transmitted and not otherwise; flicking the switch is simply our way of communicating our desires. However, while this is a coherent view, it does not buy us anything, since we essentially understand the mechanism sufficiently to have a simpler, mechanistic description of its behavior. In contrast, we do not have equally good knowledge of the operation of complex systems such as robots, people, and, arguably, operating systems. In these cases it is often most convenient to employ mental terminology; the application of the concept of "knowledge" to distributed computation, discussed below, is an example of this convenience.¹
¹ In [67] I discuss how the gradual elimination of animistic explanations with the increase in knowledge is correlated very nicely with both developmental and evolutionary phenomena. In the evolution of science, theological notions were replaced over the centuries with mathematical ones. Similarly, in Piaget's stages of child development, there is a clear transition from the animistic stages around the ages of 4-6 (when, for example, children claim that clouds move because they follow us around) to the more mature later stages.
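The light switch ascription above can be made entirely literal, which is precisely what shows how little it buys us. The following throwaway sketch is hypothetical (not from [67] or the paper); it models the flick as communication that updates the switch's single "belief":

# Hypothetical toy model of the light switch viewed as a cooperative agent.
# Flicking the switch communicates our desire; the switch transmits current
# exactly when it believes we want current transmitted, and not otherwise.

class LightSwitchAgent:
    def __init__(self):
        self.believes_current_wanted = False  # the switch's single "belief"

    def flick(self):
        # Our way of communicating a desire: it toggles the agent's belief.
        self.believes_current_wanted = not self.believes_current_wanted

    def transmits_current(self) -> bool:
        # The agent invariably acts on its belief.
        return self.believes_current_wanted

switch = LightSwitchAgent()
switch.flick()
assert switch.transmits_current()  # coherent, but no gain over "the switch is on"

The mental description is coherent but redundant here, since the mechanistic one is just as short; the payoff comes only for systems whose internals we understand far less completely.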
