
Communication Methods: Instructors' Positions at Istanbul Aydin University Distance Education Institute

credibility of the process and created confidence for the researcher. As reflection is the central essence of work-based learning, having a high level of responsibility for pre-planned actions gave the researcher confidence, and reflection and negotiation on these activities increased the learning.

5. Conclusions

This is a work-based project that requires collaboration among colleagues to propose changes for better working practice. The researcher's reflections and learning experiences from the research process are intended to be shared with the academic community.

High risks and questionable rewards are the reality for most complex organizations experiencing rapid change. Work, even in higher education, is shifting toward greater interdependence among individuals to create collective and synergistic products and services using advanced technology. As the boundaries between traditional positions blur, role clarification becomes increasingly important. In this learning environment, the role of the ODL instructor requires the merging of multiple roles. The convergence of advances in computer technology, rapidly growing enrollment needs, and cost-cutting measures for higher education suggests that innovative solutions are required. The findings of this study illustrate the complexity of the role of the online instructor through a unique perspective in which two types of roles were examined in great detail.
Journal of Communication and Computer 10 (2013) 1105-1113

Coordination in Competitive Environments

Salvador Ibarra-Martinez, Jose A. Castan-Rocha and Julio Laria-Menchaca


The Engineering School, Autonomous University of Tamaulipas, Victoria 87000, Mexico

Received: August 02, 2013 / Accepted: August 19, 2013 / Published: August 31, 2013.

Abstract: Despite extensive research on autonomous agents, important theoretical aspects of multi-agent coordination have remained largely untreated. Multiple cooperating situated agents hold the promise of improved performance, but they also enlarge the task allocation problems that arise in cooperative environments. We present a general structure for coordinating heterogeneous situated agents that allows both the autonomy of each agent and their explicit coordination. Such situated agents are embodied so that they take their situation into account when performing any action. Organizational features are used as a metaphor to achieve higher levels of interaction in an agent system, and a decision algorithm is developed to perform a match between a situated agent's knowledge and the requirements of an action. Finally, this paper presents preliminary results in a simulated robot soccer scenario showing an improvement of around 92% between the worst and the best cases.

Key words: Multi-agent coordination, e-institutions, interactive norms, soccer robotics.

1. Introduction way and under a wide-range of conditions or


constraints. In fact, an agent system will have to be
Coordination depends on how autonomous agents
handled with a great level of awareness because the
make collective decisions to work jointly in real
failure of a single agent may cause a total degradation
cooperative environments [1]. Nowadays, several
of the system performance. For thus, this paper aims
researchers have proposed that autonomous agent
to introduce a decision algorithm based on the e-I
systems are computational systems in which two or
(electronic Institution) features [3], which it represents
more agents work together to perform some set of
the rules needed to support an agent society.
tasks or satisfy some set of goals. Research in
Specifically, such algorithm uses knowledge of the
multi-agent systems is then based on the assumption
agent situation regards to three perspectives:
that multiples agents are able to solve problems more
interaction with social information and other relevant
efficiently than a single agent does [2]. Special
details to entrust in other agents or humans; awareness
attention has been given to MAS developed to operate
representing the knowledge of the physical body
in dynamic environments, where uncertainty and
reflecting in the body’s skills; and world including
unforeseen changes can happen due to presence of
information perceived directly from the environment.
other physical representation (i.e., agents) and other
But each type of agent reacts to its perception of the
environmental representations that directly affect the
environment in different ways, modifying the overall
agents’ decisions. Such coordination allows agents to
system performance. In particular, a match function
reach high levels of interaction and increase their
has been formulated to reach a suitability rate based
successful decisions, improving the performance of
on the situated agents’ capabilities and the actions’
complex tasks. Agents must therefore work in some
requirements. In fact, agents can select those actions
for which they are the best qualified. The
Corresponding author: Salvador Ibarra-Martinez, doctor, effectiveness of this work is illustrated by developing
research fields: intelligent systems, autonomous robots and
coordination algorithms. E-mail: sibarram@uat.edu.mx. several examples that analyze cooperative agents’

The effectiveness of this work is illustrated by developing several examples that analyze cooperative agents' behavior in different situations in a real cooperative environment. Section 2 introduces the formal coordination structure proposed in this approach. In Section 3, an example of the implementation is presented. Finally, the results and conclusions are given in Section 4.

2. Our Approach

A group of situated agents is presented here as a cooperative system constituted by autonomous agents who must cooperate among themselves in order to reach specific goals within real cooperative environments. When agent interaction exists, each element of the agent group must be able to be differentiated from the others. These agents require a sense of themselves as distinct and autonomous individuals obliged to interact with others within cooperative environments, i.e., they require an agent identification [4]. This identification refers to the property of each agent of knowing who it is and what it does within the group. In this sense, this work proposes two agent classifications: CA (coach agents) and SA (situated agents).

2.1 Adopting e-Features

In order to imitate the ideology of the e-I (i.e., the e-I uses a set of rules to manage the performance of actions in groups of agents), the paper describes how agents that work in temporal groups are able to achieve collective behaviour. Such behaviour is made possible by communication among the agents. Let us suppose a scene sα as a spatial region where a set of actions must be performed by a group of situated agents.

$\forall s_i, s_j \in S \mid s_i \neq s_j$, where $S = \{s_1, s_2, s_3, \ldots, s_n\}$

S is the set of all possible scenes.

Let us define a coach agent caα in charge of supervising the execution of the actions in a particular sα.

$\forall ca_i, ca_j \in G_{CA} \mid ca_i \neq ca_j \wedge G_{CA} \subseteq CA$, where $CA = \{ca_1, ca_2, ca_3, \ldots, ca_n\}$

where CA is the set of all possible supervisor agents. When a saα has identified its sα, it must request information in order to know which actions must be achieved in that sα. It is possible, then, to define a saα as sensitive to the events that happen in real cooperative environments, based on the agent paradigm [5].

Let us define a situated agent sai as an entity that has a physical representation in the environment and through which the system can produce changes in the world.

$\forall sa_i, sa_j \in G_{SA} \mid sa_i \neq sa_j \wedge G_{SA} \subseteq SA$, where $SA = \{sa_1, sa_2, sa_3, \ldots, sa_n\}$

SA is the set of all possible situated agents. A sai could be represented in many ways (i.e., an autonomous robot with arms, cameras, handles, etc.), but for the scope of our proposal a sai is embodied as an entity characterized by the consideration of three parameters: interaction, awareness and world.

In fact, the paper argues for coordination at two meta-levels (the cognitive level, supervision of the intentions; and the physical level, execution of the action in the world, Fig. 1), where the coach agents coordinate among themselves to allocate a set of actions to a group of situated agents.

Fig. 1 Levels of interaction.
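To make the set definitions above concrete, the following minimal Python sketch models scenes, coach (supervisor) agents and situated agents as plain containers; the class and field names are illustrative assumptions and are not part of the paper's formal notation.

```python
from dataclasses import dataclass

@dataclass
class Scene:
    """A spatial region s_alpha where a set of actions must be performed."""
    name: str

@dataclass
class CoachAgent:
    """ca_alpha: supervises the execution of the actions in one scene."""
    name: str
    scene: Scene

@dataclass
class SituatedAgent:
    """sa_i: an entity with a physical representation in the environment,
    characterized by three knowledge parameters: interaction, awareness, world."""
    name: str
    interaction: float = 0.0  # I: trust built from previous interactions
    awareness: float = 0.0    # A: self-knowledge about physical capabilities
    world: float = 0.0        # W: knowledge perceived from the environment

# Example: one scene supervised by one coach, with two situated agents.
s1 = Scene("s1")
ca1 = CoachAgent("ca1", s1)
team = [SituatedAgent("sa1"), SituatedAgent("sa2")]
```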

Let us define a norm ni as a rule that must be respected, or that fixes the behaviour a sai must keep when trying to perform an action in a sα. We indicate the conception of a norm within a scene by a set of rules of the form:

if (ni) do/don't {action}

$\forall n_i, n_j \in N(s_\alpha) \mid n_i \neq n_j \wedge N(s_\alpha) \subseteq N$, where $N(s_\alpha) = \{n_1, n_2, n_3, \ldots, n_a\}$

Let us define an obligation obl as the imposition given to some sai to perform some action, established following a set of rules. To denote the notion of obligation, the predicate of [3] is used:

$obl(pa_i, \theta, s_\alpha)$

where a pai is obligated to do θ in sα.

2.2 Cooperative Actions

Studies of which actions are involved in a determined scene are needed to obtain the knowledge that makes the organization of that scene possible. Once a coach knows in which scene it will develop its function, it must identify the goals to be accomplished in that spatial region, indicate the tasks that must be performed to achieve these goals, and determine which roles are necessary for the task achievement. A coach is therefore defined by its knowledge base KB(caα), which considers a set of goals G, a set of tasks T and a set of roles R:

$KB(ca_\alpha) = G(s_\alpha) \cup T(s_\alpha) \cup R(s_\alpha)$

where KB(caα) is the information about all the issues regarding a specific scene sα, such that G(sα) is the set of goals, T(sα) is the set of all tasks, and R(sα) is the set of all roles involved in the determined scene sα.

Indeed, it is necessary to propose a priority index that represents the importance of every action. A saα will know both the order in which the goals and the tasks must be performed and the order of the role allocation process for its supervised sα. This priority index is established according to system requirements (i.e., a timeline) in order to achieve the saα's aims.

Goals embody the overall system purpose; however, a caα could achieve a particular goal without the necessity of performing another goal in the same sα.

$\forall g_i, g_j \in G(s_\alpha) \mid g_i \neq g_j \wedge G(s_\alpha) \subseteq G$, where $G(s_\alpha) = \{g_1, g_2, g_3, \ldots, g_o\}$

$\forall g_i \in G(s_\alpha) \; \exists \; p(g_i) \in P_G(s_\alpha) \mid 0 \leq p(g_i) \leq 1$

where G is the set of all possible goals and G(sα) is the set of goals involved in sα.

Let us define a set of tasks T which represents the issues that must be performed to achieve a specific goal g. A goal could be achieved without the implicit necessity of performing all of its involved tasks. The selected tasks are therefore independent, but their development could affect, in a positive or negative way, the development of other tasks.

$\forall t_i, t_j \in T(g_\beta) \mid t_i \neq t_j \wedge T(g_\beta) \subseteq T(s_\alpha) \subseteq T$, where $T(g_\beta) = \{t_1, t_2, t_3, \ldots, t_p\}$

$\forall t_i \in T(g_\beta) \; \exists \; p(t_i) \in P_T(g_\beta) \mid 0 \leq p(t_i) \leq 1$

where T is the set of all possible tasks.

Let us define a set of roles R which represents the actions that a pai must fulfil to perform a task t within a sα.

$\forall r_i, r_j \in R(t_\gamma) \mid r_i \neq r_j \wedge R(t_\gamma) \subseteq R(s_\alpha) \subseteq R$, where $R(t_\gamma) = \{r_1, r_2, r_3, \ldots, r_q\}$

$\forall r_i \in R(t_\gamma) \; \exists \; p(r_i) \in P_R(t_\gamma) \mid 0 \leq p(r_i) \leq 1$

where R is the set of all possible roles.

In order to illustrate how this process is performed, let us suppose a scene s1 supervised by the coach ca1, which performs a decision process to define which goal must be attended to first (Fig. 2).

Fig. 2 The coach ca1 defines which goal must be performed first.
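The coach's knowledge base can thus be pictured as nested priority maps from goals to tasks to roles. The Python sketch below is one possible encoding under the assumption that priorities are plain floats in [0, 1]; the names and the dictionary layout are illustrative, not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class RoleSpec:
    priority: float                          # p(r_i) in [0, 1]

@dataclass
class TaskSpec:
    priority: float                          # p(t_i) in [0, 1]
    roles: Dict[str, RoleSpec] = field(default_factory=dict)

@dataclass
class GoalSpec:
    priority: float                          # p(g_i) in [0, 1]
    tasks: Dict[str, TaskSpec] = field(default_factory=dict)

@dataclass
class CoachKB:
    """KB(ca_alpha): goals, tasks and roles for one supervised scene."""
    scene: str
    goals: Dict[str, GoalSpec] = field(default_factory=dict)

    def next_goal(self) -> str:
        """Pick the goal to attend first, i.e. the one with the highest priority."""
        return max(self.goals, key=lambda g: self.goals[g].priority)

# Example: a coach supervising scene s1 with two goals of different priority.
kb = CoachKB("s1", goals={
    "g1": GoalSpec(0.9, tasks={"t1": TaskSpec(0.7, roles={"r1": RoleSpec(0.8)})}),
    "g2": GoalSpec(0.4),
})
print(kb.next_goal())  # -> "g1"
```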

2.3 Embodying Situated Agents

Supposing that a situated agent lives in a real environment, it has the ability to consider its physical representation in that world. Although these characteristics could in principle cover a great many "things" regarding the environment, our proposal takes three kinds of knowledge that seek to capture all the information characterizing the perception of a particular sai.

2.3.1 Interaction

Interaction I refers to the certainty that an agent wants to interact with other agents, assuming a specific behaviour with success and high reliability, to achieve any action proposed within any determined scene. Such information is useful in the interaction process of the agents because they can trust other agents based on the result of their previous interactions. Obviously, if a sai performs its actions positively, its interaction level increases; but if the outcome of the action is negative, its interaction level decreases. Such knowledge is obtained when a sai has a direct relationship with a caα.

$\forall sa_i \in G_{SA} \; \exists \; I_{r,s_\alpha}(sa_i) \in I(sa_i)$

where I r,sα(sai) is the interaction level of a sai to perform rγ in the sα.

2.3.2 Awareness

Awareness A refers to the set of physical self-knowledge that a physical agent has about its skills and physical characteristics to execute any proposed action. This physical representation is considered as the embodiment of the physical features that constitute all the information that physical agents can include in their decision-making.

Physical agents can be any physical objects "handled" by an intelligent agent (i.e., an autonomous robot, a machine or an electric device). Such a pai has features that describe its physical body properties (i.e., its dynamics, its physical structure), usually used when it commits to perform some task or to assume a specific behaviour within a cooperative environment. This represents the skill of the physical agents to know which actions can be performed based on the knowledge of their bodies, which is achieved by representing them on a capabilities basis.

$\forall pa_i \in PA \; \exists \; A_{t,s_\alpha}(pa_i) \in A(pa_i)$

where A t,sα(pai) is the awareness of pai to perform t in the sα.

2.3.3 World

World W refers to the set of environmental knowledge that physical agents have in order to perform the proposed set of actions. This domain representation is considered as the embodiment of the environment knowledge, representing all the physical information that has influence on the physical agents' reasoning process.

Let us define a set of world conditions that represent information about empirical knowledge of the environmental state, such that:

$\forall pa_i \in PA \; \exists \; W_{t,s_\alpha}(pa_i) \in W(pa_i)$

where W t,sα(pai) is the environmental condition of pai to perform t in a sα; a saα uses this information to know the physical situation of each pai.

All the knowledge of a particular pai, KB(pai), is then constituted by the information provided by the three modules, such that:

$\forall pa_i \; \exists \; KB(pa_i) = [\, I(pa_i) \;\; A(pa_i) \;\; W(pa_i) \,]$

In particular, all knowledge related to a specific t in sα is given such that:

$KB(pa_i)_{t,s_\alpha} = [\, I_{t,s_\alpha}(pa_i) \;\; A_{t,s_\alpha}(pa_i) \;\; W_{t,s_\alpha}(pa_i) \,]$
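A supervisor's view of a physical agent for one task can therefore be stored as a three-component vector. The Python sketch below (with assumed class and field names) builds KB(pai)t,sα from the interaction, awareness and world modules described above; the example values are those reported later for pa3 and the kick-ball task in Table 2.

```python
from dataclasses import dataclass

@dataclass
class PhysicalAgentKB:
    """KB(pa_i)_{t,s}: knowledge about one physical agent for one task in one scene."""
    interaction: float  # I_{t,s}(pa_i): trust from previous interactions, in [0, 1]
    awareness: float    # A_{t,s}(pa_i): physical capability (introspection), in [0, 1]
    world: float        # W_{t,s}(pa_i): environmental condition (proximity), in [0, 1]

    def as_vector(self) -> list:
        """Return [I, A, W] in the order used by the paper's KB(pa_i) tuple."""
        return [self.interaction, self.awareness, self.world]

# Example (values taken from Table 2 for pa_3 and the kick-ball task in s2):
kb_pa3 = PhysicalAgentKB(interaction=0.71, awareness=0.69, world=0.79)
print(kb_pa3.as_vector())  # [0.71, 0.69, 0.79]
```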

2.4 Communication Process

Humans have a communication process that allows them to transmit information or ideas in a common language in order to make sure and reliable commitments with one another. Likewise, artificial intelligence offers several approaches with the same purpose [5-6], exploiting the advantages of explicit communication. To accomplish an action, a group of agents must establish communication (to coordinate themselves). During such coordination, agents must "converse" among themselves to agree on who is who within the group (Fig. 3). A communication protocol with three simple dialogues based on the KQML specification is then presented as follows:

$Request(sa_\alpha, sa_\beta, \theta, s_\alpha)$

where saα asks saβ for its θ in the scene sα;

$Reply(sa_\beta, sa_\alpha, \delta)$

where saβ responds to saα with its decision δ based on the information dispatched;

$Inform(sa_\alpha, sa_\beta, \sigma, s_\alpha)$

where saα informs saβ of its state σ in the scene sα.

This process helps the saα to communicate among themselves and with a pai.

Fig. 3 Conversation between the sa1 and sa2.
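As a rough illustration of these three dialogues, the sketch below models Request, Reply and Inform exchanges as simple Python message objects passed between supervisor agents; the message classes, field names and the in-memory exchange are assumptions made for illustration and are not the KQML wire format.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Request:
    sender: str      # sa_alpha
    receiver: str    # sa_beta
    subject: Any     # theta: what is being asked about
    scene: str       # s_alpha

@dataclass
class Reply:
    sender: str      # sa_beta
    receiver: str    # sa_alpha
    decision: Any    # delta: decision based on the information dispatched

@dataclass
class Inform:
    sender: str
    receiver: str
    state: Any       # sigma: the sender's state in the scene
    scene: str

# Example exchange between two supervisor agents about scene "s2":
req = Request("sa1", "sa2", subject="execution-order", scene="s2")
rep = Reply("sa2", "sa1", decision="sa2-goes-first")
inf = Inform("sa1", "sa2", state="ready", scene="s2")
for msg in (req, rep, inf):
    print(msg)
```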
Several concepts have been explained throughout this work, but none of them clarifies how a saα decides which pai (or group) will take part in the actions of the sα it is responsible for. A saα therefore attaches an ID (influence degree) to all the actions involved in a sα by means of the tuple ID(sα), based on the aforementioned parameters, in order to generate a utility function that helps it in its decision-making structure.

$ID(s_\alpha) = [\, id_{EC}(s_\alpha) \;\; id_{PK}(s_\alpha) \;\; id_{TV}(s_\alpha) \,]$

where id EC(sα), id PK(sα) and id TV(sα) are values that establish the relevance of each parameter related to a sα. These values are in the range [0, 1]. The saα responsible for sα then uses KB(pai)t,sα and ID(sα) to perform a match function by means of Eq. (1):

$match(ID(s_\alpha), KB(pa_i)_{t,s_\alpha}) = \dfrac{\sum_{j=1}^{3} ID(s_\alpha)^{(j)} \cdot KB(pa_i)_{t,s_\alpha}^{(j)}}{3 - \sum_{j=1}^{3}\left(1 - ID(s_\alpha)^{(j)}\right)}$   (1)

A saα uses the match value to determine which pai must perform rq in a sα, assigning the highest-rated pai to the highest-priority rq in sα. Fig. 4 shows an example of the match process.

Fig. 4 Empirical example of a match process.
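Eq. (1) is effectively an average of the KB components selected by the binary influence degree. A small Python sketch of this reading follows; pairing the ID and KB components by name (tv, pk, ec) is an assumption made to avoid the ordering ambiguity between the two tuples, and the numeric check uses the values reported later in Tables 2 and 3.

```python
def match(id_degree: dict, kb: dict) -> float:
    """Eq. (1): sum of ID-weighted KB components divided by the number of
    components that are actually considered (3 minus the discarded ones)."""
    keys = ("tv", "pk", "ec")  # trust, physical knowledge, environmental condition
    numerator = sum(id_degree[k] * kb[k] for k in keys)
    denominator = 3 - sum(1 - id_degree[k] for k in keys)
    return numerator / denominator

# KB(pa_i)_{kick-ball, s2} values from Table 2 (Trust, Intro., Prox.):
kb_pa = {
    "pa1": {"tv": 0.43, "pk": 0.47, "ec": 0.31},
    "pa2": {"tv": 0.65, "pk": 0.52, "ec": 0.46},
    "pa3": {"tv": 0.71, "pk": 0.69, "ec": 0.79},
    "pa4": {"tv": 0.83, "pk": 0.77, "ec": 0.63},
}

# ID7 considers all three parameters (Table 1); Table 3 reports 0.40 for pa1.
id7 = {"tv": 1, "pk": 1, "ec": 1}
print(round(match(id7, kb_pa["pa1"]), 2))                   # 0.4
print(max(kb_pa, key=lambda pa: match(id7, kb_pa[pa])))     # pa4, as in Table 3
```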



2.5 Decision Algorithm

An important criterion for the development of collective actions within real cooperative environments is the flow of information from the perception of the intentions to their execution. We have therefore defined a decision algorithm of four simple stages.

Stage 1. This refers to the property of a saα to perceive which sα it must manage; a saα then knows the goals, tasks, roles (the priority of every item is also perceived) and ID involved in its sα. Hence, the knowledge base of each saα can be built.

Stage 2. All the sa (of the entire SA) must organize themselves to define the order in which they may begin the recruitment of pa to perform the actions within their sα. To do so, the sa must converse among themselves using the dialogues developed above.

Stage 3. Based on the order obtained above, a saα is allowed to start the communication with the entire PA to determine which pai will be selected to perform every action. To do so, a saα must obtain the physical knowledge of each pai by communicating directly with them; the environmental conditions and trust value of each pai are obtained when the saα uses the aforementioned modules (one per parameter).

Once a saα completes the KB(pai) of the entire PA, it uses this information to perform the match by means of Eq. (1), considering the priority index of all the roles. The saα then has a detailed list (from highest to lowest coefficient) of the entire PA. Afterwards, the saα knows which pai must perform which role; it is therefore able to obligate a determined pai to perform a role, which means that the corresponding action must be performed.

Hence, the best pai (of the entire PA) is assigned the highest-priority role, and then the others successively, until all the roles in that sα are covered. This process guarantees a suitable allocation because each rq is always allocated to the best available pai. Indeed, a saα knows how many PA it needs, because it needs as many PA as there are roles in R(sα). Supposing that the system has enough PA to take all the defined roles, every saα is thus able to exclude the pai that present the lowest action capability.

Stage 4. Show-time. A pai knows the rq that it must perform. This involves physical changes in the environment. Once the environment has been modified, a new consensus among the SA can be performed to adjust to the current changes in the environment.
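The four stages can be read as a loop in which a supervisor builds its knowledge base, agrees on an ordering with its peers, recruits physical agents via Eq. (1), and lets them act. The Python sketch below exercises only the allocation core (Stages 1 and 3) on the Table 2 data; the role names, their priority order and the greedy one-role-per-agent assignment are illustrative assumptions.

```python
def match(id_degree, kb):
    """Eq. (1): ID-weighted average of the considered KB components."""
    keys = ("tv", "pk", "ec")
    num = sum(id_degree[k] * kb[k] for k in keys)
    den = 3 - sum(1 - id_degree[k] for k in keys)
    return num / den

def allocate_roles(roles_by_priority, id_degree, kbs):
    """Stage 3 in miniature: greedily give the most prior role to the best pa,
    the next role to the next best, and so on (one role per physical agent)."""
    ranked = sorted(kbs, key=lambda pa: match(id_degree, kbs[pa]), reverse=True)
    return dict(zip(roles_by_priority, ranked))

# Stage 1 (assumed data): the supervisor of s2 knows its roles and its ID.
roles = ["r2-kick-ball", "r1-go-to-ball"]            # ordered by priority
id7 = {"tv": 1, "pk": 1, "ec": 1}

# Stage 3 (data from Table 2): KB of every physical agent for kick-ball in s2.
kbs = {
    "pa1": {"tv": 0.43, "pk": 0.47, "ec": 0.31},
    "pa2": {"tv": 0.65, "pk": 0.52, "ec": 0.46},
    "pa3": {"tv": 0.71, "pk": 0.69, "ec": 0.79},
    "pa4": {"tv": 0.83, "pk": 0.77, "ec": 0.63},
}
print(allocate_roles(roles, id7, kbs))
# {'r2-kick-ball': 'pa4', 'r1-go-to-ball': 'pa3'}
```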

3. Implementation

In our implementation, each physical agent has a different movement controller which differentiates it from the others. We have segmented the scenario into three spatial regions (Fig. 5), each representing a scene sα.

Fig. 5 Geographic segmentation of the experimental environment.

For the sake of simplicity, we have defined only one goal per scene: G(s1) = g1, G(s2) = g2 and G(s3) = g3. The consensus that defines the execution order of the scenes is derived as shown in Fig. 6.

Fig. 6 Supervisor agent consensus.

The cbp is the current ball position in the environment. The spatial regions are delimited according to the simulator dimensions (axis x: [0, 220]; axis y: [0, 180]). Moreover, specific tasks are defined in order to accomplish each gi, such that:

$T(g_1) = \{t_1, t_2\} \;\wedge\; T(g_2) = \{t_3, t_4\} \;\wedge\; T(g_3) = \{t_5, t_6\}$

where t1 is make-pass, t2 is shooting, t3 is player-on, t4 is kick-ball, t5 is protect-ball and t6 is covering a position.

Following the rule presented for the goals, the tasks also use the cbp as a reference to determine their execution order. Using the ranges above, a saα may then decide the task to perform at any time. However, to attempt to achieve such tasks, a saα must define which roles must be performed and the priority order of those roles. Therefore, by means of human analysis, we have proposed four roles that can be used to perform any task, such that:

$R(t_\delta) = \{r_1, r_2, r_3, r_4\}$

where r1 goes to the ball, r2 kicks the ball, r3 covers a zone and r4 takes a position, to be used in each t.
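For reference, the scene, goal, task and role vocabulary of the soccer testbed can be written down as plain data. The Python sketch below simply encodes the sets defined above; the dictionary layout and string labels are assumptions made for readability.

```python
# Goals: one per spatial region (scene) of the segmented field.
goals_per_scene = {"s1": "g1", "s2": "g2", "s3": "g3"}

# Tasks needed to accomplish each goal, T(g_i).
tasks_per_goal = {
    "g1": ["make-pass", "shooting"],
    "g2": ["player-on", "kick-ball"],
    "g3": ["protect-ball", "covering-position"],
}

# The four roles proposed by human analysis, usable by any task, R(t).
roles = ["go-to-ball", "kick-ball", "cover-zone", "take-position"]

# Example: list what can be recruited when the ball is inside scene s2.
goal = goals_per_scene["s2"]
print(goal, tasks_per_goal[goal], roles)
```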
In addition, we have combined this information with the environment-based knowledge. Such a combination is used by a saα to perform the match process considering the aforementioned parameters. A binary combination then lets us generate eight influence degrees (Table 1). We now review how these parameters have been implemented in the robot soccer testbed.

Table 1 Influence degree consideration (0: is not considered; 1: is considered).

Influence degree   TV   PK   EC
ID0                 0    0    0
ID1                 0    0    1
ID2                 0    1    0
ID3                 0    1    1
ID4                 1    0    0
ID5                 1    0    1
ID6                 1    1    0
ID7                 1    1    1

Interaction, here called trust (TV), represents the social relationship among agents, taking into account the result of past interactions of a saα with a pai. Eq. (2) shows the trust update if the aim is reached; otherwise, Eq. (3) shows the trust update if the aim is not reached.

$tv_{t,s_\alpha}(pa_i) = tv_{t,s_\alpha}(pa_i) + A(s_\alpha, \varphi)$   (2)

$tv_{t,s_\alpha}(pa_i) = tv_{t,s_\alpha}(pa_i) - P(s_\alpha, \omega)$   (3)

where tv t,sα(pai) is in [0, 1] and a higher tv t,sα(pai) indicates a better pai to perform t in sα; A(sα, φ) and P(sα, ω) are the awards and punishments given in sα respectively, with φ ranging from 1 to Q(sα) and ω from 1 to Q'(sα), the numbers of awards and punishments in sα.

Awareness, here called physical knowledge (PK), represents the knowledge of the agents about their physical capabilities to perform any proposed task. In particular, the introspection process is performed using neural networks, taking into account the knowledge that a pai has related to performing t in sα, with PK t,sα(pai) in [0, 1] and a high value representing a suitable pai.

World, here called environmental conditions (EC), is a value related to the distance between the current location of a pai and the location of the ball. Eq. (4) shows the calculation:

$ec_{t,s_\alpha}(pa_i) = 1 - \dfrac{d(pa_i, r(t, s_\alpha))}{d_{max}(s_\alpha)}, \quad ec_{t,s_\alpha}(pa_i) \in [0, 1]$   (4)

where ec t,sα(pai) is the value of a pai to perform a t in sα; d(pai, r(t, sα)) is the distance between the pai and r(t, sα); and dmax(sα) is the maximal such distance over all pa in sα. Eq. (5) shows the dmax(sα) calculation, where m is the total number of pa in the IAS:

$d_{max}(s_\alpha) = \max\left(d(1, s_\alpha), \ldots, d(m, s_\alpha)\right)$   (5)

In order to show how our approach performs the role allocation process, we present a possible situation (Fig. 7) where the ball is within s2 and all the generated influence degrees are used to perform the pa selection. We show the allocation for only one action (kick the ball). In Table 2 we present the values of each pai regarding the proposed action, and in Table 3 we show the match values obtained by means of Eq. (1); it is thus possible to see which pai is selected by sa2 to perform the proposed action. Additionally, the remaining physical agents follow a fixed strategy defined to assign actions to the entire PA.

Fig. 7 Possible situation for the PA in the environment.

Table 2 Physical agents' knowledge bases.

pa                         Trust   Intro.   Prox.
KB(pa1) kick-ball, s2      0.43    0.47     0.31
KB(pa2) kick-ball, s2      0.65    0.52     0.46
KB(pa3) kick-ball, s2      0.71    0.69     0.79
KB(pa4) kick-ball, s2      0.83    0.77     0.63

Table 3 Some examples of physical agent selection.

ID(s2)    pa1    pa2    pa3    pa4    pa selected
ID1(s2)   0.31   0.46   0.79   0.63   pa3
ID2(s2)   0.47   0.52   0.69   0.77   pa4
ID3(s2)   0.39   0.49   0.74   0.70   pa3
ID4(s2)   0.43   0.65   0.71   0.83   pa4
ID5(s2)   0.37   0.55   0.75   0.73   pa3
ID6(s2)   0.45   0.58   0.70   0.80   pa4
ID7(s2)   0.40   0.54   0.73   0.74   pa4

4. Results and Conclusions

We ran two experimental evaluations to validate the proposed approach. In the experiments, our IAS uses all the binary combinations of the ID to perform the match process. In Exp. 1, our IAS competed against a blind opponent in 30 games. Here, the IAS performance improves when all the parameters are considered: IAS(ID7) shows a better average (improvement rate: +81%) than IAS(ID0), in which no parameter is considered. Then, in Exp. 2, a league of 28 games was performed to confront the IAS configurations among themselves. Again, the IAS performance increases when all the parameters are used jointly; in fact, IAS(ID7) shows a better average (improvement rate: +92%) than IAS(ID0).

As conclusions, we argue the need for agent meta-coordination to exploit the advantages of the abstract environment knowledge (held by the supervisor agents) and use it to influence the reasoning process of the physical agents. In addition, a combination (named influence degree) describes which of these parameters are considered, giving the saα the ability to run a decision process that performs a match between the scene requirements and the physical agent capabilities. In fact, the best performance is obtained when our team agent takes all the parameters into account in its decision process.

It is also really interesting to analyze how the cooperative IAS performance increases as the system takes the parameters into consideration. In conclusion, the situation matching approach is a promising method to be used as a utility function between task requirements and physical agent capabilities in MAS.

In Table 4 we show some approaches regarding architectures for multi-agent cooperation. In particular, these architectures express behaviour by implementing different kinds of knowledge, which can be related to our approach.

Table 4 Our approach vs. other approaches.

ID   T   I   P   VS
0    0   0   0   References take at least one of these parameters.
1    0   0   1   No references yet.
2    0   1   0   [4-7]
3    0   1   1   [8-10]
4    1   0   0   No references yet.
5    1   0   1   [11-13]
6    1   1   0   [14]
7    1   1   1   [15]

References
[1] S. Ibarra, C. Quintero, J.A. Ramon, J.Ll. de la Rosa, J. Castan, PAULA: Multi-agent architecture for coordination to intelligent agent systems, in: Proc. of European Control Conference (ECC'07), Kos, Greece, July 2-5, 2007.
[2] D. Jung, A. Zelinsky, An architecture for distributed cooperative-planning in a behaviour-based multi-robot system, Journal of Robotics & Autonomous Systems (RA&S) 26 (1999) 149-174.
[3] M. Esteva, J.A. Rodriguez, C. Sierra, J.L. Arcos, On the formal specification of electronic institutions, Agent Mediated Electronic Commerce, Lecture Notes in Computer Science 1991 (2001) 126-147.
[4] B. Duffy, Robots social embodiment in autonomous mobile robotics, Int. J. of Advanced Robotic Systems 1 (2004) 155-170.
[5] S. Russell, P. Norvig, Artificial Intelligence: A Modern Approach, 3rd ed., Prentice Hall, London, Dec. 2009, p. 1152.
[6] M. Luck, P. McBurney, O. Shehory, S. Willmott, Agent Technology: Computing as Interaction (A Roadmap for Agent Based Computing), AgentLink, 2005.
[7] A. Oller, DPA2: Architecture for co-operative dynamical physical agents, Doctoral Thesis, Universitat Rovira i Virgili.
[8] C.G. Quintero, J.Ll. de la Rosa, J. Vehi, Self-knowledge based on the atomic capabilities concept: A perspective to achieve sure commitments among physical agents, in: 2nd International Conference on Informatics in Control, Automation and Robotics, Barcelona, Spain, Sep. 14-17, 2005.
[9] L. Pat, An adaptive architecture for physical agents, in: IEEE/WIC/ACM International Conference on Intelligent Agent Technology, Sep. 19-22, 2005, pp. 18-25.
[10] D. Busquets, R. Lopez de Mantaras, C. Sierra, T.G. Dietterich, A multi-agent architecture integrating learning and fuzzy techniques for landmark-based robot navigation, Lecture Notes in Computer Science 2504 (2002) 269-281.
[11] C.G. Quintero, J. Zubelzu, J.A. Ramon, J.Ll. de la Rosa, Improving the decision making structure about commitments among physical intelligent agents in a collaborative world, in: Proc. of V Workshop on Physical Agents, Girona, Spain, Mar. 25-27, 2004, pp. 219-223.
[12] R.S. Aylett, D.P. Barnes, A multi-robot architecture for planetary rovers, in: 5th ESA Workshop on Advanced Space Technologies for Robotics and Automation, ESTEC, Noordwijk, The Netherlands, Dec. 1-3, 1998.
[13] R. Simmons, T. Smith, M. Bernardine, D. Goldberg, D. Hershberger, A. Stentz, et al., A layered architecture for coordination of mobile robots, in: Multi-robot Systems: From Swarms to Intelligent Automata, May 2002.
[14] C.G. Quintero, J.L. de la Rosa, J. Vehi, Physical intelligent agents' capabilities management for sure commitments in a collaborative world, Frontiers in Artificial Intelligence and Applications, IOS Press, 2004, pp. 251-258.
[15] S. Ibarra, C. Quintero, J. Ramon, J.L. De la Rosa, J. Castan, Studies about multi-agent teamwork coordination in the robot soccer environment, in: Proc. of 11th FIRA Robot World Congress, 2006, pp. 63-67.
