
Science of Computer Programming 95 (2014) 3–25


An extensible argument-based ontology matching negotiation approach

Paulo Maio*, Nuno Silva
GECAD – Knowledge Engineering and Decision Support Research Group, School of Engineering of Polytechnic of Porto, Rua Dr. Bernardino de
Almeida 431, 4200-072 Porto, Portugal

Highlights

• A novel argument-based ontology matching negotiation approach is proposed.
• An explicit, formal, shared and extensible argumentation model is adopted.
• Experiments demonstrate the usefulness and pertinence of the approach.
• Easy to adapt and evolve the approach to support different scenarios' requirements.
• A Software Development Framework for the adoption of the proposed approach.

Article info

Article history: Received 1 February 2013; Received in revised form 22 October 2013; Accepted 24 January 2014; Available online 31 January 2014.

Keywords: Ontology matching; Argumentation; Negotiation; Systems interoperability

Abstract

Computational systems operating in open, dynamic and decentralized environments are required to share data with previously unknown computational systems. Due to this ill specification and emergent operation the systems are required to share the data's respective schemas and semantics so that the systems can correctly manipulate, understand and reason upon the shared data. The schemas and semantics are typically provided by ontologies using specific semantics provided by the ontology language. Because computational systems adopt different ontologies to describe their domain of discourse, a consistent and compatible communication relies on the ability to reconcile (in run-time) the vocabulary used in their ontologies. Since each computational system might have its own perspective about what are the best correspondences between the adopted ontologies, conflicts can arise. To address such conflicts, computational systems may engage in any kind of negotiation process that is able to lead them to a common and acceptable agreement.
This paper proposes an argumentation-based approach where the computational entities describe their own arguments according to a commonly agreed argumentation meta-model. In order to support autonomy and conceptual differences, the community argumentation model can be individually extended yet maintaining computational effectiveness. Based on the formal specification, a software development framework is proposed.

© 2014 Elsevier B.V. All rights reserved.

1. Introduction

More and more computational systems (e.g. agents, web services) operating in open, dynamic and decentralized envi-
ronments (e.g. semantic web, e-commerce, peer-to-peer, agent-based systems) require information sharing with previously
unknown systems. Due to this ill specification and emergent operation, the computational systems are now required to

* Corresponding author. Tel.: +351 22 834 05 00.


E-mail addresses: pam@isep.ipp.pt (P. Maio), nps@isep.ipp.pt (N. Silva).

0167-6423/$ – see front matter © 2014 Elsevier B.V. All rights reserved.
http://dx.doi.org/10.1016/j.scico.2014.01.011

share the data’s respective schemas and semantics, so that the systems can correctly manipulate, understand and reason
upon the shared data. The schemas and semantics are typically provided by ontologies using specific semantics provided
by the ontology language. Nevertheless, computational systems maintain their autonomy and conceptual specificities, lead-
ing to different ontologies and thus preventing the direct information sharing. Accordingly, a successful systems interaction
relies on the ability to reconcile their ontologies in run-time. In literature the ontology reconciliation problem is usually
referred to as Ontology Matching [1]. A reconciliation process consists of establishing a set of correspondences (referred to
as alignment) between the system’s ontologies, which are further exploited to interpret or translate exchanged messages
and their content. Therefore, systems need to autonomously decide on each and all correspondences between the ontolo-
gies they adopt in a conversation/interaction. For that purpose, a common approach found in literature consists in providing
an ontology matching service such that the interacting computational systems agree (implicitly or explicitly) on using that
service and, therefore, an alignment is requested as needed. However, ontology matching is a burdensome and error-prone
process due to different factors. Firstly, because of the different applied semantics of the ontology languages and modeling
approaches. Secondly, because of the conceptual interpretation of the linguistic dimension of the ontology, which typically
grounds the ontology to the domain of knowledge, but unfortunately is a source for multiple interpretations and therefore
for matching ambiguities. Consequently, the ontology matching process can lead to different and contradictory results (i.e.
alignments) depending on the adopted matching approaches. Thus, considering that distinct computational systems may
have different needs and objectives and, therefore, different preferences concerning the matching process, computational
systems may be able to exploit the matching services they find more convenient instead of relying on a common matching
service. For example, a computational system may prefer alignments having a high recall in disfavor of precision, while
the other one may prefer precision instead of recall. In scenarios like the one described above, i.e. where each interact-
ing computational system may adopt its most suitable matching service, it is necessary to provide a mechanism enabling
those systems to avoid and/or resolve possible alignment conflicts. In that sense, state-of-the-art literature refers to two
negotiation-based approaches: relaxation-based [2] and argument-based approaches [3,4].
This paper proposes a novel argument-based approach where arguments are described according to a state-of-the-art
argumentation meta-model that captures general argumentation semantics. Moreover, the adopted meta-model is first
instantiated by the negotiating community into a community argumentation model capturing the commonly agreed ar-
guments (types or schemes) regarding the domain application. Further, in order to support autonomy and conceptual
differences between individual systems, the community argumentation model can be individually extended, yet maintaining
computational effectiveness.
Based on the formal specification (Sections 3 and 4), a software development framework is proposed and its architecture
and design are discussed (Section 5). Examples and experiments adopting the proposals are finally presented (Section 6).
Finally, in order to introduce the reader to important concepts and terminology, the next section reviews important background
knowledge.

2. Background knowledge

First, this section concisely surveys the ontology matching domain. Further, it defines the ontology matching negotiation
problem and briefly describes current state-of-the-art approaches.

2.1. Ontology matching

Ontology matching is seen as the process of discovering, (semi-) automatically, the correspondences between semanti-
cally related entities of two different but overlapping ontologies. Thus, as stated in [1], the matching process is formally
defined as a function f : (O1, O2, p, res, A) → A′ which, from a pair of ontologies to match O1 and O2, a set of parameters
p, a set of oracles and resources res, and an input alignment A, returns an alignment A′ between the matched ontologies.
Ontologies O1 and O2 are often denominated the source and target ontologies, respectively. An alignment is a set of
correspondences expressed according to:

• Two entity languages QL1 and QL2 associated with the ontology languages L1 and L2 of the matched ontologies (respectively), defining the matchable entities (e.g. classes, object properties, data properties, individuals);
• A set of relations R that is used to express the relation held between the entities (e.g. equivalence, subsumption, disjointness, concatenation, split);
• A confidence structure φ that is used to assign a degree of confidence to a correspondence. It has a greatest element ⊤ and a smallest element ⊥. The most common structure is the real numbers in the interval [0, 1], where 0 represents the lowest confidence and 1 represents the highest confidence.

Hence, a correspondence (or a match) is a 4-tuple c = (e, e′, r, n) where e ∈ QL1(O1) and e′ ∈ QL2(O2) are the entities
between which a relation r ∈ R is asserted and n ∈ φ is the degree of confidence in the correspondence.
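The 4-tuple above can be sketched as a small data structure. This is an illustrative encoding only: the class and field names below are ours, not part of the paper's formalization, and the confidence structure is assumed to be [0, 1].

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Correspondence:
    """c = (e, e', r, n): a single correspondence of an alignment."""
    entity1: str       # e in QL1(O1)
    entity2: str       # e' in QL2(O2)
    relation: str      # r in R, e.g. "=" for equivalence, "<" for subsumption
    confidence: float  # n in the confidence structure, here [0, 1]

# An alignment is simply a set of correspondences.
alignment = {
    Correspondence("o1:Person", "o2:Human", "=", 0.92),
    Correspondence("o1:Author", "o2:Person", "<", 0.75),
}
```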
Over recent years, research initiatives in ontology matching have developed many systems (e.g. [5]) that rely on the com-
bination of several basic algorithms yielding different and complementary competencies, to achieve better results. A basic
algorithm generates correspondences based on a single matching criterion [6]. These algorithms can be multiply classified

Fig. 1. An overview of the ontology matching negotiation process.

as proposed in [1,7] (e.g. terminological, structural, semantic). Moreover, systems make use of a variety of functions such
as:

• Aggregation functions whose purpose is to aggregate two or more sets of correspondences into a single one (e.g. min,
max, linear average);
• Alignment Extraction functions whose purpose is to select from a set of correspondences those that will be part of the
resulting alignment. The selection method may rely on the simplest methods such as the ones based on threshold-values
(summarized in [1]) or more complex methods based on, for example, local and global optimizations (e.g. [8,9]).
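As an illustration of these two function families, a max aggregation and a threshold-based extraction might be sketched as follows. The function names and the plain-tuple encoding of correspondences are ours, not the paper's:

```python
def aggregate_max(*corr_sets):
    """Max aggregation: for each (entity pair, relation) keep the highest
    confidence reported by any matcher."""
    merged = {}
    for corrs in corr_sets:
        for (e1, e2, r, n) in corrs:
            key = (e1, e2, r)
            merged[key] = max(merged.get(key, 0.0), n)
    return [(e1, e2, r, n) for (e1, e2, r), n in merged.items()]

def extract_by_threshold(correspondences, threshold):
    """Simplest extraction method: keep correspondences whose confidence
    is at or above the threshold."""
    return [c for c in correspondences if c[3] >= threshold]

# Two matchers' candidate sets, aggregated then filtered.
terminological = [("o1:Person", "o2:Human", "=", 0.9)]
structural = [("o1:Person", "o2:Human", "=", 0.6),
              ("o1:Paper", "o2:Review", "=", 0.3)]
candidates = aggregate_max(terminological, structural)
alignment = extract_by_threshold(candidates, 0.5)
# alignment keeps only the (o1:Person, o2:Human) correspondence
```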

The selection of the most suitable algorithms/system is still an open issue as they should not be chosen exclusively
with respect to the given data but also adapted to the problem that is to be solved [1]. However, this question has already
been dealt with in [10–12]. Despite all the existing (conceptual and practical) differences between matching systems and
algorithms, we will refer to both as matchers as all of them have a set of (candidate) correspondences as output.

2.2. Ontology matching negotiation

Generically, ontology matching negotiation (OMN) approaches take into consideration that negotiation occurs between
two honest and co-operative computational systems whose purpose is to agree on an alignment between their ontologies
that satisfies and ensures confidence for the business interaction process. Moreover, it is assumed that each computational
system is capable of devising an alignment by itself or, alternatively, in collaboration with other systems not participating in
negotiation. In this respect, it is important to bear in mind that each business system willing to interact may delegate the
negotiation task to a third-party entity acting on its behalf. The object of negotiation is the alignment content to establish
between the systems’ ontologies. Therefore, systems negotiate about the inclusion or exclusion of each correspondence
suggested by one of them into the agreed alignment. The value that each system associates to correspondences is highly
subjective and depends on several factors such as (i) the pertinence of the correspondence with respect to the business
interoperability and (ii) dependencies between other correspondences (e.g. some correspondences may imply or depend on
other correspondences in a valid alignment).
Fig. 1 graphically depicts an overview of the ontology matching negotiation process.
In literature, considering the completely automatic negotiation processes, i.e. where there is no user intervention, one
can find two distinct categories of approaches applied to this problem: (i) the ones based on relaxation mechanisms (e.g.
[2]) and (ii) the argument-based approaches (e.g. [3,4]).
Concerning the former, such approaches rely on a set of utility functions enabling a system to (re)classify each correspondence as accepted, negotiable or rejected. Their major drawback is the difficulty of defining proper utility functions.
Concerning the latter, such approaches instantiate the Value-based Argumentation Framework (VAF) [13], which captures
existing arguments and attack relations between arguments. Each argument promotes a value that is further used to de-
termine if an attack succeeds or not, based on a preferred ordered list of values. Because arguments are generated from
correspondences provided by matchers, possible argument values have been restricted to the five categories of matchers
proposed in [1]:

• Terminological (T): are those that compare the names, labels and comments that are related to the ontological entities;
• Internal Structural (IS): are those that exploit the internal characteristics of entities such as the domain and range of
their properties, the cardinality of attributes or even transitivity and/or symmetry assertions;

Fig. 2. The three TLAF/EAF modeling layers as captured by the respective OWL ontology.

• External Structural (ES): are those that exploit the (external) relations that an entity has with the other entities of the
ontology such as super-entity, sub-entity or sibling;
• Semantic (S): are those that utilize theoretical models to determine whether there is a correspondence or not between
two entities;
• Extensional (E): are those that compare the set of instances of entities being evaluated.

Despite being simple and quite effective when adopted by systems, this approach has some important limitations, namely regarding the characteristics of autonomy and rationality [14] that are typical of systems dealing with the matching problem (e.g. agents), and the incapacity to take into consideration the (positive or negative) effect of accepting or rejecting a correspondence on the acceptability of other correspondences under discussion.
To overcome these and other limitations (e.g. lack of quantitative or opinion factors) of current argument-based approaches, this paper follows a different line of research. It proposes a novel approach where the negotiating systems adopt the generic and domain-independent Argument-based Negotiation Process (ANP) presented in [15]. The next section briefly describes the Extensible Argumentation Framework (EAF) [15–17] on which the ANP relies. The application of both (ANP and EAF) to the ontology matching negotiation context is novel and goes beyond the state of the art.

3. Extensible Argumentation Framework

The Extensible Argumentation Framework (EAF) [15–17] is based on and extends the Three-Layer Argumentation Framework (TLAF) [18], which comprises the three modeling layers described in Section 3.1. Further, since arguments may be more or less persuasive and their persuasiveness may vary according to their audience, argument acceptability is also addressed (cf. Section 3.2).

3.1. Conceptual layering modeling

Unlike abstract argumentation frameworks such as AF [19], BAF [20] and VAF [13], TLAF adopts a general and intuitive
argument structure and a conceptual layer for the specification of the semantics of argumentation data applied in a specific
domain of application. Therefore, despite being less abstract than AF, BAF and VAF, TLAF remains domain independent.
While the Meta-Model Layer and the Instance Layer of the adopted argumentation framework roughly correspond to the
(meta-) model layer and the instance layer of abstract argumentation frameworks, the Model Layer does not have any
correspondence in the surveyed abstract argumentation frameworks (illustrated in Fig. 2).
The meta-model layer defines the core argumentation concepts and relations holding between them. It adopts and ex-
tends the minimal definition presented by Walton in [21] where “an argument is a set of statements (propositions), made up
of three parts, a conclusion, a set of premises, and an inference from premises to the conclusion”. For that, the meta-model
layer defines the notion of Argument, Statement and Reasoning Mechanism, and a set of relations between these concepts. An
argument applies a reasoning mechanism (such as rules, methods, or processes) to conclude a conclusion-statement from
a set of premise-statements. An IntentionalArgument is the type of argument whose content corresponds to an intention

Fig. 3. Partial graphical representation of a Model Layer for the ontology matching domain.

[14,22]. Domain data and its meaning are captured by the notion of Statement. This mandatorily includes the domain in-
tentions, but also the desires and beliefs. The distinction between arguments and statements allows the application of the
same domain data (i.e. statement) in and by different means to arguments. Also the same statement can be concluded by
different arguments, and serve as the premise of several arguments.
With respect to ontology matching negotiation, an intentional argument represents the will to include/exclude a corre-
spondence in/from the agreed alignment while information used to support/attack that is represented by a non-intentional
argument.
The Model Layer captures the semantics of argumentation data (e.g. argument types/schemes) applied in a specific
domain of application (e.g. ontology matching, e-commerce, legal reasoning and decision making) and the relations existing
between them. In that sense, the model layer is important for the purpose of enabling knowledge sharing and reuse between
computational systems. In this context, a model is a specification used for making model commitments. Practically, a model
commitment is an agreement to use a vocabulary in a way that is consistent (but not complete) with respect to the theory
specified by a model [23,24]. Systems then commit to models and models are designed so that the knowledge can be shared
among these systems. Accordingly, the content of this layer directly depends on:

• The domain of application to be captured, and


• The perception one (e.g. a community of systems or an individual system) has about that domain.

Due to this, we adopt the vocabulary of (i) argument (or statement)-instance as an instance of an (ii) argument (or state-
ment)-type defined at the Model Layer. Similarly, we adopt the vocabulary of (i) relation between types, and (ii) relationship
between instances.
At the model layer, an argument-type (or argument scheme) is characterized by the statement-type it concludes, the
applied class of reasoning mechanism (e.g. Deductive, Inductive, Heuristic) and the set of affectation relations (R) it has.
The R relation is a conceptual abstraction of the attack (R att ) and support (R sup ) relationships. The purpose of R is to
define at the conceptual level that argument-instances of an argument-type may affect (either positively or negatively)
instances of another argument-type. For example, according to the model layer of Fig. 2, (C , D ) ∈ R means instances of
argument-type C may attack or may support instances of argument-type D depending on the instances content. On the
other hand, if ( X , Y ) ∈
/ R it means that instances of argument-type X cannot (in any circumstance) attack/support instances
of argument-type Y . Yet, the R relation is also used to determine the types of statements that are admissible as premises of
an argument-instance. So, an argument-instance of type X can only have as premises statements of type S iif S is concluded
by an argument-type Y and Y affects X (i.e. (Y , X ) ∈ R). For example, considering again the model layer of Fig. 2, instances
of argument-type D can only have as premises statements of type B because D is affected by argument-type C only.
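The admissible-premise rule above can be sketched as a few lines of code. The data mirrors the text's example, (C, D) ∈ R with C concluding statements of type B; the dictionary encoding and function name are illustrative:

```python
# Model-layer data: the affectation relation R between argument-types, and
# the statement-type each argument-type concludes (names are illustrative).
R = {("C", "D")}                  # (affecting type, affected type) pairs
concludes = {"C": "B", "D": "A"}  # argument-type -> concluded statement-type

def admissible_premises(arg_type):
    """Statement-types admissible as premises of arg_type: those concluded
    by some argument-type Y such that (Y, arg_type) is in R."""
    return {concludes[y] for (y, x) in R if x == arg_type}
```

With this data, `admissible_premises("D")` yields `{"B"}`, while `admissible_premises("C")` is empty because nothing affects C.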
Fig. 3 depicts an example of an argumentation model for the ontology matching domain without mentioning the reason-
ing mechanisms.
The MatchArg is an intentional argument representing the intention to include a correspondence into the alignment,
which is affected by three arguments used by the state-of-the-art approaches: Terminological, External Structural and Se-
mantic (cf. Section 2.2). Each correspondence generated by an algorithm classified in one of those categories is seen as a
direct reason for or against the intention of including that correspondence into the alignment.
The Instance Layer corresponds to the instantiation of a particular model layer for a given scenario. Here, an
argument-instance applies a concrete reasoning mechanism to conclude a conclusion-statement-instance from a set of
premise-statement-instances. The relation conflictWith is established between two statement-instances only. A statement-
instance b1 is said to be in conflict with another statement-instance b2 when b1 states something that implies or suggests
that b2 is not true or does not hold. The conflictWith relation is symmetric (in Fig. 2, b2 conflicts with b1 too). It is worth

Fig. 4. Instantiation of a Model Layer for the ontology matching domain.

Fig. 5. The inferred support and attack relationships.

noticing that all instances existing in the instance layer must have an existing type in the model layer and according to the
type characterization.
Considering our application scenario, the instance layer captures the correspondences and the reasons for and against
those correspondences that are being exchanged between computational systems. Fig. 4 depicts an instantiation example of
the argumentation model presented above (cf. Fig. 3).
The support (R sup ) and attack (R att ) relationships between argument-instances are automatically inferred by means of
four rules:

• An argument-instance x supports another argument-instance y when the argument-type of x affects the argument-type
of y and either:
◦ The conclusion of x is a premise of y (R1) or
◦ Both argument-instances have the same conclusion (R2);
• An argument-instance x attacks another argument-instance y when the argument-type of x affects the argument-type
of y and either:
◦ The conclusion of x is in conflict with any premise of y (R3) or
◦ The conclusion of x is in conflict with the conclusion of y (R4).

Considering the instantiation depicted in Fig. 4, the inferred support and attack relationships between argument-
instances are the ones illustrated in Fig. 5.
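The four rules can be sketched as a single inference function. The representation of argument-instances as dictionaries and the parameter names are our assumptions; only the rule logic (R1–R4) comes from the text:

```python
def infer_relationships(instances, affects, conflicts):
    """Derive R_sup and R_att between argument-instances via rules R1-R4.
    `affects` holds (type_x, type_y) pairs from the model layer; `conflicts`
    holds (statement, statement) pairs of the conflictWith relationship."""
    r_sup, r_att = set(), set()
    for x in instances:
        for y in instances:
            if x is y or (x["type"], y["type"]) not in affects:
                continue  # x's type must affect y's type
            pair = (x["id"], y["id"])
            if x["conclusion"] in y["premises"]:        # R1
                r_sup.add(pair)
            elif x["conclusion"] == y["conclusion"]:    # R2
                r_sup.add(pair)
            if any((x["conclusion"], p) in conflicts for p in y["premises"]):
                r_att.add(pair)                         # R3
            elif (x["conclusion"], y["conclusion"]) in conflicts:
                r_att.add(pair)                         # R4
    return r_sup, r_att

# A terminological argument concluding b1, which is a premise of a MatchArg.
t1 = {"id": "t1", "type": "TerminologicalArg", "premises": [],
      "conclusion": "b1"}
m1 = {"id": "m1", "type": "MatchArg", "premises": ["b1"],
      "conclusion": "match(e, e')"}
sup, att = infer_relationships([t1, m1],
                               affects={("TerminologicalArg", "MatchArg")},
                               conflicts=set())
# sup contains ("t1", "m1") by rule R1; att is empty
```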

Fig. 6. Partial graphical representation of an extended Model Layer for the ontology matching domain.

It is worth noticing that computational systems adopting this argumentation framework may use arguments with two
purposes: (i) to represent and communicate intentions (i.e. intentional arguments) and (ii) to provide considerations (i.e. be-
liefs, desires) for and against those intentions (i.e. non-intentional arguments). Thus, an intentional argument may be affected
by several non-intentional arguments. Additionally, to capture dependency between intentions, intentional arguments may
be also affected (directly or indirectly) by other intentional arguments. A defeasible argument is affected by other (sub-)
arguments (i.e. the ones concluding its premises or the ones undermining those premises) while an indefeasible argument
can only be affected by its negation since it cannot have premises. Given that, in a Model Layer, intentional arguments
should be always defeasible. On the contrary, non-intentional arguments can be both defeasible and indefeasible.
EAF extends TLAF by providing the constructs and respective semantics for supporting modularization and extensibility
features to TLAF. In that sense, any EAF model is a TLAF model but not the inverse. In the EAF Model Layer, arguments,
statements and reasoning mechanisms can be structured through the H A , H S and H M relations respectively. These are
acyclic transitive relations established between similar entity types (e.g. arguments), in the sense that in some specific con-
text entities of type e 1 are understood as entities of type e 2 . While these relations are vaguely similar to the specialization
relation (i.e. subclass/superclass between entities) it does not have the same semantics and it is constrained to 1–1 relation-
ship (cf. [16]). An EAF model may reuse and extend the argumentation conceptualizations of several existing EAF models.
Inclusion of an EAF into another EAF is governed by a set of modularization constraints ensuring that no information of the included EAF is lost. Fig. 6 illustrates the usage of the EAF extensibility feature regarding the ontology matching domain. The
model depicted in this figure (called EAF S1 ) extends the model previously depicted in Fig. 3 (called EAF C ) such that the new
arguments and statements are colored white while the arguments and statements of the extended model are colored gray.
According to this example, the EAF semantics imply (for example) that any instance of SubEntitiesArg is understood and
is translatable to an instance of ExtStructuralArg. In the argument exchange context, this feature is relevant considering
that each computational system internally adopting a distinct EAF model (e.g. EAF S1) extended from a common/shared EAF
model (e.g. EAF C) may translate arguments represented in its internal model into the shared model, thereby enabling
the understanding of those arguments by the other computational systems (cf. Section 4.2.6 for details). Therefore, the
EAF features allow the systems to conceptualize their private argumentation model, maintaining the compatibility and the
semantic understanding with the remaining community.
In light of EAF S1 , one may say that argument-instances of type ExtStructuralArg are affected by argument-instances of
type SubEntitiesArg since their conclusions are seen as premises that lead to the conclusion about the relation between
the external structure of the two entities. On the other hand, an argument-instance a1 of type SubEntitiesArg is affected
by the intention of accepting/rejecting a correspondence (MatchArg) when the entities related on the MatchArg instance are
sub-entities of the entities related by a1 . Thus, MatchArg affects SubEntitiesArg. Generalizing, intentional arguments are being
used to support/attack other arguments.

3.2. Argument acceptability

In any computational system using argumentation, one of the most important processes is to determine the acceptability
of argument-instances, i.e. to state which argument-instances hold (are undefeated) and which argument-instances do not
hold (are defeated). Most of the argumentation systems (e.g. the Prakken version of ASPIC [25], MbA [4], FDO [3]) use the
abstraction provided by the argumentation frameworks (e.g. AF [19], BAF [20], VAF [13]) to make logical inferences, i.e. to
select the conclusions of the associated sets of arguments. For that, an abstract argumentation semantics such as the ones
described in [26] is applied.

An application adopting the EAF is still able to exploit such techniques because an EAF instance-pool can be easily
represented in a more abstract formalism such as BAF [17]¹ and as AF² [20]. Yet, because EAF assumes that bipolarity is
important for the application domain (e.g. ontology matching), argumentation systems may also opt to apply an argument
evaluation process that exploits the bipolarity such as the ones proposed in [28–31]. However, none of these processes
are able to (i) deal with the cyclic relationships that may exist between argument-instances and (ii) take advantage of
the EAF Model Layer. To overcome such limitations an EAF’s argument evaluation process was devised comprehending two
complementary steps.
The first step determines the strength of each argument-instance based on (i) the type and (ii) the strength value of the
argument-instances supporting and attacking the argument-instance being evaluated. For that, and grounded on the idea that
different argument-types may demand different forms of evaluation, each argument-type is required to have an associated
argument evaluation function (f), which is responsible for the strength evaluation of all argument-instances of that
type. For example, an argument-type applying a deductive reasoning method may be evaluated by a function (f1) that
returns a value stating that an argument-instance holds iff the argument-instance being evaluated is not attacked by any
other argument-instance; otherwise the function returns a value stating that the argument-instance does not hold. On the
other hand, an argument-type applying a voting reasoning method may be evaluated by a function (f2) that considers the
difference between the number of argument-instances attacking and supporting the argument-instance being evaluated to
state whether the argument-instance holds or not. Yet, it is important to bear in mind that each audience (e.g. computational
system) has distinct preferences and, therefore, may evaluate the argument strength through a distinct set of evaluation
functions.
Additionally, due to possible cyclic relationships between argument-instances it is also required:

• An algorithm (alg) that iterates over the argument evaluation functions to (re)evaluate the argument-instances strength
until a defined criterion is reached; and
• A matrix of the argument-instances' strength values (mapV), where each column represents an argument-instance
and each row represents the strength of every argument-instance in a given iteration of the algorithm being used.
Therefore, mapV_i denotes the values of all argument-instances in the i-th iteration, and mapV_i,a denotes the strength of an
argument-instance a in the i-th iteration. In particular, mapV_0,a denotes the initial strength of the argument-instance a
and mapV_a denotes the strength of an argument a in the last iteration (row) of the matrix.
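The iteration scheme above can be sketched as follows. Everything concrete here is an assumption for illustration: the numeric value scale [0, 1] with V_m = 0.5, the particular voting-style evaluation function, and the fixed-point stopping criterion; the paper leaves all of these open to the audience's preferences.

```python
V_MIN, V_M, V_MAX = 0.0, 0.5, 1.0  # assumed value scale

def voting_f(a, prev, r_sup, r_att, base):
    """A voting-style evaluation function (in the spirit of f2): shift a's
    initial strength by supporters' minus attackers' strengths, relative to
    V_M; the scaling is our assumption."""
    delta = sum(prev[s] - V_M for (s, t) in r_sup if t == a) \
          - sum(prev[s] - V_M for (s, t) in r_att if t == a)
    return min(V_MAX, max(V_MIN, base[a] + delta))

def evaluate(args, r_sup, r_att, base, max_iter=50, eps=1e-9):
    """alg: re-evaluate strengths until they stabilise. mapV is the matrix,
    one row per iteration; row 0 holds the initial strengths mapV_0,a."""
    mapV = [dict(base)]
    for _ in range(max_iter):
        prev = mapV[-1]
        row = {a: voting_f(a, prev, r_sup, r_att, base) for a in args}
        mapV.append(row)
        if all(abs(row[a] - prev[a]) < eps for a in args):  # criterion met
            break
    return mapV

# b attacks a; a starts undecided (V_M), b starts strong.
mapV = evaluate(["a", "b"], r_sup=set(), r_att={("b", "a")},
                base={"a": 0.5, "b": 0.8})
# mapV[-1]["a"] ends below V_M, so argument a is defeated
```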

Distinct argument evaluation functions may exploit differently the relationships between argument-instances and the
strength/information of those argument-instances. Despite those differences, it is necessary that the values returned by
all functions follow a common semantics understood by alg. Thus, for the sake of simplicity, consider that an argument
evaluation function is defined as f : (AI, mapV_i) → V, where AI is the set of all existing argument-instances and V is an
ordered set {V_min, ..., V_m, ..., V_max} with at least three possible values, such that:

• V_min represents the minimal strength value,
• V_max represents the maximal strength value, and
• V_m represents a value whose distance to V_min and V_max is the same.

Hence, the strength value of an argument-instance a ∈ AI in iteration i evaluated by f, such that mapV_i,a =
f(a, mapV_i−1), has the following semantics:

• mapV_i,a > V_m means that the argument a holds and is therefore undefeated. In addition, if mapV_i,a > mapV_i,b > V_m,
this means that the confidence in considering argument a undefeated is greater than the confidence in considering
argument b undefeated;
• mapV_i,a < V_m means that the argument a does not hold and is therefore defeated. In addition, if V_m > mapV_i,a > mapV_i,b,
it means that the confidence in considering argument b defeated is greater than the confidence in considering argument a defeated;
• mapV_i,a = V_m means that the argument a has an undefined status, i.e. it might be considered either as defeated or as
undefeated. This means that the positive force given by the support relationships and the negative force given by the
attack relationships are equivalent.

The result of the execution of the algorithm alg is therefore the mapV matrix, populated with the arguments' strength values computed by the evaluation functions. This matrix is used as input information to the next step.
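The iterative construction of the mapV matrix described above can be sketched as follows; the function and variable names are illustrative assumptions, not part of the paper's API:

```python
def evaluate(arguments, f, initial, max_iterations=100):
    """Build the mapV matrix row by row; mapV[i][a] is the strength of
    argument-instance a in iteration i. Iteration stops at a fixed point
    (the last row then holds mapV_a) or after max_iterations."""
    mapV = [dict(initial)]  # row 0: initial strengths mapV_a^0
    for i in range(1, max_iterations + 1):
        row = {a: f(a, mapV[i - 1]) for a in arguments}
        mapV.append(row)
        if row == mapV[i - 1]:  # strengths stabilised
            break
    return mapV
```

For instance, with an evaluation function that maps each strength to the sign-like values {−1, 0, 1}, the matrix converges after one or two rows.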
1 The EAF's set of argument-instances and the derived argument-instance relationships (R_sup and R_att) correspond to the three elements constituting a BAF instance.
2 The EAF's set of argument-instances and the derived argument-instance relationships (R_sup and R_att) are represented as an AF by first representing the EAF as a BAF and then representing the resulting BAF as an AF. The process to represent a BAF as an AF is described in [27].

Fig. 7. Overview of the proposed argument-based negotiation approach.

The second step consists in selecting a preferred extension, which is a sub-set of argument-instances representing a consistent position within the EAF instance-pool that, according to an audience, is defensible against all attacks and cannot be further extended without introducing a conflict. For that, the selection process makes use of two initially empty sets of argument-instances (T and T′) and runs as follows. For each argument-instance (say a) whose type is an intentional argument:

1. If the defeat status of a is undefeated, then a is added to T and all argument-instances supporting it are added to T′;
2. If the defeat status of a is defeated, then a is not added to T, but all argument-instances attacking it are added to T′;
3. If the defeat status of a is undefined (mapV_a = V_m), then multiple preferred extensions exist, each resulting from the execution of one of the two steps above.

At the end, the preferred extension is obtained by the union of T and T′ (prefext = T ∪ T′), such that the set T corresponds to the intentional preferred extension (iprefext) while the set T′ corresponds to the belief preferred extension (bprefext). Thus, an EAF preferred extension is composed of the undefeated intentional arguments and all the non-intentional arguments that support (directly or indirectly) the undefeated intentional arguments. Again, notice that the undefined status of argument-instances gives rise to multiple preferred extensions. Thus, one considers that (i) an argument is sceptically admissible if it belongs to every preferred extension and (ii) an argument is credulously admissible if it belongs to at least one preferred extension.
Given a preferred extension (prefext), the intentions and beliefs of a computational system correspond to the statement-
instances concluded by the argument-instances of the preferred extension.
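The selection procedure above can be sketched as follows; the numeric defeat-status encoding (V_min = −1, V_m = 0, V_max = 1) and the helper names are assumptions for illustration, and the undefined case (strength equal to V_m) is simply skipped rather than branched:

```python
V_MIN, V_M, V_MAX = -1, 0, 1  # assumed encoding of the ordered set V

def select_preferred_extension(intentional, strength, supporters, attackers):
    """T collects the undefeated intentional arguments; T2 (T') collects the
    argument-instances supporting them, or attacking the defeated ones."""
    T, T2 = set(), set()
    for a in intentional:
        if strength[a] > V_M:          # undefeated: keep a and its supporters
            T.add(a)
            T2 |= supporters.get(a, set())
        elif strength[a] < V_M:        # defeated: keep only its attackers
            T2 |= attackers.get(a, set())
        # strength[a] == V_M: undefined -> multiple preferred extensions;
        # a full implementation would branch on both alternatives here.
    return T | T2, T, T2               # prefext = T ∪ T', iprefext, bprefext
```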

4. The proposed argument-based approach

This section presents the proposed argument-based approach, which is inspired by and relies on the general Argument-based Negotiation Process (ANP) described in [15]. First, an overview of the overall approach is provided, followed by a detailed description of the argument-based negotiation process and its phases.

4.1. Overview

The proposed argument-based approach assumes that the negotiation occurs in the scope of a given community. Fig. 7 graphically depicts an overview of the proposed argument-based negotiation approach for the ontology matching domain. It exploits (and somehow mimics) the way humans argue with each other, in the sense that humans share a large common knowledge/perception about a given domain (e.g. ontology matching) but each one has their own perception and rationality over that domain. To that end, it comprises the notion of a (private/public) argumentation model (AM), which is an artifact that captures (partially or totally) the perception and rationality that one has about a specific domain regarding the argumentation process. Accordingly, an argumentation model conceptually defines the vocabulary used to form arguments, the arguments' structure and even the way arguments affect (i.e. attack and support) each other.

Fig. 8. The argument-based negotiation process.

The community in which the negotiation occurs defines a set of rules by which all interactions are governed. In that
sense, the community is also responsible for defining a public argumentation model, which is a shared argumentation
model capturing the minimal common understanding of argumentation over the domain problem being addressed (e.g.
ontology matching) by the members of that community. Therefore, all members of that community are able to understand
the defined public argumentation model and reason on it.
Further, each system (member of the community) must be able to extend the public argumentation model so it better
fits its own needs and knowledge. As a result, the members freely specify their own private argumentation model. Contrary
to a public argumentation model, a private argumentation model captures the individual understanding of argumentation
that a system has over the domain problem being addressed. It is worth noticing that the EAF model layer, together with the extensibility and modularization features, satisfies the above definitions of public/private argumentation model. Therefore, from now on, the ANP description adopts the EAF.
Because systems adopt their own private argumentation model, each system has the responsibility of searching, iden-
tifying and selecting sources of information (e.g. matching algorithms) that can provide the most relevant and significant
information needed to instantiate its private model. After the private model instantiation each system has a set of argu-
ments that need to be evaluated in order to extract the system’s preferred extension. In this context, a preferred extension
defines the correspondences that a given system wants to include in the alignment and a set of reasons supporting those
correspondences. Therefore, by exchanging the arguments of their preferred extensions, systems might be able to achieve an
agreement, i.e. a consensus about which correspondences belong (or not) to the alignment to be established between their
ontologies.

4.2. The negotiation process

The computational system’s internal phases and its external interaction are illustrated in Fig. 8 as defined by the adopted
general Argument-based Negotiation Process (ANP) [15], which is an iterative and incremental process. A description of each
phase concerning the ontology matching negotiation domain is provided in the next sub-sections.

4.2.1. Setup
In the Setup phase a set of interactions between the systems participating in the negotiation occurs to define the context
and the parameters of the negotiation. In particular, it is during this phase that:

• Each system informs the opponent system of the subject ontology (i.e. the system’s ontology to align);
• The systems identify and accept the public argumentation model (AM_C) provided by the community as the minimal common understanding between them. As a consequence, the private argumentation model of each system (AM_S) is the same as AM_C or extends it, such that AM_C ⊆ AM_S;
• A priori alignment properties (e.g. the alignment level and cardinality) can be established between the systems.

Complementary to negotiation parameters, each participant creates an instance-pool of its own argumentation model
(IP(AM S )) that will capture the argumentation data of the ongoing negotiation.
In contrast to other phases, this phase occurs only once.

4.2.2. Data Acquisition phase


During the Data Acquisition phase the computational system collects data/information that constitutes the grounds to
generate arguments. The set of data/information collected by a negotiating participant is referred to as D S such that d ∈ D S
is a pair (G , c) where c is a correspondence and G is a univocal identification of the matcher from where c was collected.
To collect this information, participants (i) exploit internal matching algorithms and/or (ii) interact with other systems
that are not directly participating in the negotiation process. This might be the case of specialized systems providing
matching services and ontology matching repositories (OMRs). Also, as a result of the upcoming phases, correspondences
temporarily agreed (but not settled as definitive) may be used to feed data-collecting mechanisms. This is especially relevant for the systems wishing to apply matching algorithms (e.g. semantic algorithms) in which the received correspondences play the role of anchors or inductive facts.
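The collection of D_S described above can be sketched as building a set of (G, c) pairs, where G identifies the matcher that produced correspondence c; the matcher-registry representation and the correspondence tuple layout are illustrative assumptions:

```python
def acquire(matchers, ontology1, ontology2):
    """Data Acquisition sketch: run every registered matcher and collect
    its correspondences, tagged with the matcher's univocal identifier G."""
    D_S = set()
    for G, match in matchers.items():
        for c in match(ontology1, ontology2):  # c = (e, e', r, n)
            D_S.add((G, c))
    return D_S
```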

4.2.3. Argumentation Model Instantiation


In the Argumentation Model Instantiation phase, the participant makes use of one or more data transformation processes
over the collected data to generate a set of arguments structured according to its argumentation model. In this context, it is
important to bear in mind that EAF does not specify any structure for statements or reasoning mechanisms. Consequently,
the responsibility to specify such entities is left to the application level.
Regarding the ontology matching application, the structure for statements and reasoning mechanisms as well as the
argument instantiation process is defined as proposed in [17] and described in brief, next.
A statement-instance is a 3-tuple s = (G, c, pos) where c is a correspondence, G is a univocal matcher identification, and pos ∈ {+, −} states the position of G about c, i.e. whether G is for (+) or against (−) c. On the other hand, an instance of a reasoning method is a tuple rm = (Γ, desc) where Γ is a univocal identification of the algorithm used by the matcher and desc is a textual description of Γ. For the sake of simplicity, and in order to be able to distinguish between different matchers using the same base algorithm Γ but with different configuration parameters, G is the univocal identification of the Γ algorithm instance.
The position of a matcher G about a correspondence c = (e, e′, r, n) is determined based on its degree of confidence (n). In this sense, it is considered that G is:

• In favor (+) of c if its confidence value on c is equal to or greater than a given threshold value (n ≥ tr+);
• Against (−) c if its confidence value on c is less than another threshold value (n < tr−);
• Neither in favor of nor against c if tr− ≤ n < tr+, and therefore c may be ignored.

Collected data is transformed into argument-instances through an interpretation function (ψ) that maps correspondences to the system's private argumentation model based on their content and provenance. In this sense, an interpretation function is defined as ψ : G × c → S × M × pos, where G is a univocal identification of the generator of correspondence c, S and M are a statement type and a reasoning mechanism of AM_S, respectively, and pos is the value resulting from the interpretation of the matcher's position.
Details about how the interpretation function is further exploited to generate argument-instances are provided in [17]. Some examples of interpretation functions are provided in Section 6.1.
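A minimal sketch of the threshold-based position rule and of an interpretation function follows; the configuration table and dictionary-based correspondence are illustrative assumptions (the thresholds mirror matcher G_A2 in Table 4):

```python
def position(n, tr_plus, tr_minus):
    """Position of a matcher about a correspondence with confidence n."""
    if n >= tr_plus:
        return '+'   # in favor
    if n < tr_minus:
        return '-'   # against
    return None      # neither: the correspondence may be ignored

def interpret(G, c, config):
    """psi sketch: (G, c) -> (statement type, reasoning mechanism, pos),
    looked up from an assumed per-matcher configuration table."""
    statement_type, mechanism, tr_plus, tr_minus = config[G]
    pos = position(c['n'], tr_plus, tr_minus)
    return None if pos is None else (statement_type, mechanism, pos)
```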

4.2.4. Argument Evaluation


During the Argument Evaluation phase, previously generated argument-instances are evaluated by the negotiation par-
ticipant in order to extract a preferred extension. In this context, a preferred extension includes two kinds of argument:

• Intentional arguments, which define the intentions of the agent with respect to the agreement, i.e. the correspondences that a participant wants to include in or exclude from the alignment;
• Non-intentional arguments, which represent a set of reasons supporting the intentions, i.e. supporting the inclusion or exclusion of correspondences in or from the alignment.

If the argument evaluation process extracts more than one preferred extension then it is necessary to select one. The
selection criterion has a special relevance during the negotiation process because it directly defines the system’s intentions
and the reasons behind those intentions. Given this, instead of a simple criterion, a more elaborate selection criterion may
be taken into consideration. For example, instead of the "selection of the preferred extension that is maximal with respect to set inclusion", one may consider "the preferred extension that minimizes the changes with respect to the previous one".
The argument evaluation process to be adopted in this phase was previously presented in Section 3.2.

4.2.5. Agreement Attempt


The Agreement Attempt phase consists of two steps.

In the first step, participants in the negotiation exchange the intentional arguments of their preferred extensions to
perceive:

• Their convergences (AgreedArgs), i.e. the correspondences proposed/accepted by both participants. The set AgreedArgs represents a candidate alignment (or agreement);
• Their divergences (DisagreedArgs), i.e. the correspondences proposed/accepted by a single participant. The set DisagreedArgs represents the existing conflicts between the participants.

In the second step, according to the content of AgreedArgs and DisagreedArgs, participants must decide whether to:

• Settle the candidate alignment as definitive and, therefore, proceed to the Settlement phase;
• Continue the negotiation, and therefore proceed to the Persuasion phase in order to try to resolve their conflicts;
• Conclude the negotiation without an agreement.
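The first step of the Agreement Attempt amounts to plain set operations over the correspondences proposed by each participant; a minimal sketch (the function name is illustrative):

```python
def agreement_attempt(proposed_1, proposed_2):
    """Split the participants' proposals into convergences and divergences."""
    agreed = proposed_1 & proposed_2     # AgreedArgs: candidate alignment
    disagreed = proposed_1 ^ proposed_2  # DisagreedArgs: proposed by one side only
    return agreed, disagreed
```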

4.2.6. Persuasion
In order to persuade its opponent to accept or to give up the disagreed correspondences, each system exchanges the set of non-intentional arguments existing in its preferred extension that support its position, and therefore attack the other system's divergent positions.
At the end of this phase each system has collected a new set of information (ED S ), corresponding to the received
arguments presented by the other negotiating systems.
Furthermore, it is important to perceive the consequences of the systems making use of private arguments (the ones existing only in the system's private argumentation model). Therefore, for each argument exchanged between two participants, one of four possible scenarios occurs:

• The type of the argument-instance exists in the community’s argumentation model and:
◦ The receiver system does not re-interpret the received argument-instance according to its private argumentation
model (P1);
◦ The receiver system re-interprets the received argument-instance according to its private argumentation model (P2);
• The type of the argument-instance does not exist in the community’s argumentation model and:
◦ The sender system makes use of H A , H S and H M relations to send the argument-instance as the most specific
community’s argument type (P3);
◦ The sender system is not able to send the argument-instance according to the community’s argumentation model
(P4).

To exemplify each of these scenarios, consider a negotiation between two systems (S1 and S2). Further, consider (i) the EAF model previously depicted in Fig. 3 (say EAF_C) as the community argumentation model, (ii) that system S1 uses as its private argumentation model the EAF model layer previously depicted in Fig. 6 (say EAF_S1), which extends EAF_C, and (iii) that the private argumentation model of system S2 is the one defined by the community, such that EAF_S2 ≡ EAF_C.
The scenario P1 corresponds to the simplest scenario where argument-instances are straightforwardly exchanged and
similarly understood by both systems. For example, if an argument-instance of type TerminologicalArg is exchanged between
S 1 and S 2 , they will similarly understand it because none of them is able to reclassify the argument-instance to another
type.
In the scenario P2, argument-instances are also straightforwardly exchanged, but the receiver system interprets the
argument-instances differently than the sender system. This implies that the receiver system is able to re-classify the
argument-instances to another type. For example, S 2 sends an argument-instance of type ExtStructuralArg to S 1 , which
has the ability to re-classify it to SubEntitiesArg based (i) on the content of the argument-instance, (ii) on its knowledge re-
garding the argument instantiation process and (iii) on the H relations existing in its private argumentation model (EAF S1 ).
With respect to the scenario P3, the sender realizes that the receiver system is (probably) not able to understand the argument because it is not (fully or partially) represented according to the common argumentation model. For the purpose of exchanging arguments, the sender system internally reclassifies those argument-instances as the most specific common argument type through the existing H_A, H_S and H_M relations. This is the case of S1 with respect to argument-instances of type SubEntitiesArg, since that type does not belong to EAF_C, which makes S2 unable to understand such argument-instances. In the case of S1, the most specific common argument type of SubEntitiesArg is ExtStructuralArg. Therefore, S2 will receive instances of SubEntitiesArg as instances of ExtStructuralArg.
In the scenario P4, the sender is not able to reclassify the argument-instances to the community's model. In such cases, two mutually exclusive possibilities arise:

• Those argument-instances are not exchanged;
• Those argument-instances are exchanged in a general way (e.g. classified as Argument only), expecting that the receiver is able to understand them based on their content (similarly to what was described in P2).
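Scenarios P3/P4 can be sketched as a walk up the H relations until a type known to the community model is found; representing H_A as a child-to-parent map is an assumption for illustration:

```python
def most_specific_common_type(arg_type, h_parent, community_types):
    """Follow H relations (child -> parent) until reaching a type that the
    community argumentation model defines. Returning None corresponds to
    scenario P4 (no common ancestor type exists)."""
    t = arg_type
    while t is not None and t not in community_types:
        t = h_parent.get(t)  # climb one H_A step
    return t
```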

Fig. 9. Software Development Framework package overview.

4.2.7. Argumentation Model Refinement


The Argumentation Model Refinement phase concerns the refinement of the community's argumentation model (AM_C) according to the exchanged arguments and the private argumentation models (AM_S). Hence, it requires the systems' ability to learn from interactions with other systems and from other systems' knowledge.
Due to the envisaged difficulty of the related tasks, this phase is seen as optional and, therefore, may be skipped. This task is out of the scope of this paper.

4.2.8. Instance Pool Update


In the Instance Pool Update phase, the participant analyses, processes and possibly reclassifies the arguments received during the Persuasion phase in light of its private argumentation model. As a result, the system adds new arguments and/or updates existing arguments. Therefore, the previous preferred extension becomes invalid and is discarded. The added/updated arguments are taken into consideration by the participant in the next round of proposals. The negotiation process then proceeds (again) to the Data Acquisition phase.
At this point, an iteration of the argumentation process is concluded. The process has as many iterations as are needed to reach an agreed alignment or, instead, runs until no more (new) arguments are generated by the participants. Additionally, a maximum number of iterations may have been previously defined in the Setup phase. In the two latter cases, the negotiation may end without an agreement, and therefore unsuccessfully.
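The overall iteration described above can be sketched as a control loop; the phase method names follow the text, but the exact control flow and the `system` object's interface are assumptions for illustration:

```python
def negotiate(system, max_iterations=None):
    """ANP skeleton: Setup once, then iterate the remaining phases until an
    agreement is settled, no new arguments appear, or the iteration cap hits."""
    system.setup()  # occurs only once
    i = 0
    while True:
        i += 1
        system.acquire_data()            # Data Acquisition
        system.instantiate_model()       # Argumentation Model Instantiation
        system.evaluate_arguments()      # Argument Evaluation
        agreed, disagreed = system.attempt_agreement()  # Agreement Attempt
        if system.settle(agreed, disagreed):
            return agreed                # Settlement: definitive agreement
        if not system.has_new_arguments() or (max_iterations and i >= max_iterations):
            return None                  # negotiation ends without agreement
        system.persuade()                # Persuasion
        system.update_instance_pool()    # Instance Pool Update
```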

4.2.9. Settlement
The goal of the Settlement phase is to transform the candidate agreement into a definitive agreement. In this respect, this phase is seen as an initiator of a set of tasks that depend on the business interaction process that previously led the computational systems to the ontology matching negotiation process. Each negotiating participant makes use of the agreed alignment to develop the business interaction process.

5. Software Development Framework

This section describes the proposed Software Development Framework (SDF) that captures the previously described argument-based ontology matching negotiation process. The SDF allows the easy and guided development of the negotiation process in the scope of computational systems. The provided SDF is composed of six main packages (Fig. 9):

• ANP, which captures the Argumentation Negotiation Process;
• EAF, which captures the concepts of the Extensible Argumentation Framework;
• ANP4OM, which captures the adoption of ANP for ontology matching;
• EAF4OM, which captures the adoption of EAF for ontology matching;
• Matching, which captures the ontology matching domain-specific concepts applied in the proposed approach;
• Ontology, which captures the core concepts of the negotiating ontologies.

Fig. 10. Details of the proposed EAF4OM classes/interfaces through the class diagram.

The EAF package adopts the EAF original layering structure. The conceptual meta-model layer entities are captured by the classes in the Model Layer package (Argument Model, Argument Type, Statement Type and Reasoner Type). Regarding the model layer entities, two design approaches were considered: (i) to adopt classes that subclass the meta-model entities and (ii) to adopt instances for capturing the diverse types (Argument Type, Statement Type and Reasoner Type). The design decision fell on the use of instances because the class approach would require manual programming of the classes or run-time reflection-based development of classes. Instead, using instances is very simple and straightforward. Further, the proposed API helps abstract from this. The Instance Layer package captures the original EAF instance layer. The instances from the instance layer are related to the instances in the model layer through the "typeof" relation.
The class diagram depicted in Fig. 10 refines the view of the package diagram of Fig. 9 (all entities are in fact interfaces, but for the sake of simplicity of the diagram the interface stereotype was dropped). Notice that the classes and methods marked with an asterisk (∗) are those introduced in EAF4OM. Also, notice that OO design best practices led to the adoption of the GoF Strategy pattern in order to represent several evaluation processes (as described in previous sections), namely the statement conflict evaluation process, the preferred extension evaluation process, the argument evaluation, the evaluation of the R_sup and R_att relationships, the correspondence interpretation, and the decision to either proceed with or stop the negotiation. Accordingly, several interfaces with similar names are proposed.
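The use of the Strategy pattern for pluggable evaluation processes can be illustrated with a short sketch (in Python for brevity; the interface and class names are illustrative, not the SDF's actual API):

```python
from abc import ABC, abstractmethod

class ArgumentEvaluationStrategy(ABC):
    """Strategy interface: any argument evaluation function can be plugged in."""
    @abstractmethod
    def evaluate(self, argument, previous_strengths):
        ...

class CountingStrategy(ArgumentEvaluationStrategy):
    """A concrete strategy in the spirit of f1: compares the number of
    supporters and attackers of the argument being evaluated."""
    def __init__(self, supporters, attackers):
        self.supporters, self.attackers = supporters, attackers

    def evaluate(self, argument, previous_strengths):
        n_sup = len(self.supporters.get(argument, ()))
        n_att = len(self.attackers.get(argument, ()))
        return (n_sup > n_att) - (n_sup < n_att)  # 1, -1 or 0
```

Swapping strategies then changes the evaluation behavior without touching the process code that invokes `evaluate`.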
The Process class in the ANP package captures the core functioning of the ANP process, namely the phase creation and flow control. The process's phases are captured by nine interfaces and several strategies. The class diagram in Fig. 11 complements the previous diagrams, depicting the creation relations between the ANP entities and the data structures and strategies defined in the scope of EAF. Notice that the phases' instantiation (i.e. creation, not represented in Fig. 11) is performed by the "config" method of the Process class through a specific factory (not represented).
Several evolution points are then available for the customization of the framework while maintaining the core principles of the approach. While the most common evolution points are the Argument Type, Statement Type, Reasoner Type and the strategies, every phase and data entity can potentially be specialized for the domain at hand.
An implementation of the described framework was developed and adopted during the experiments for evaluating the combination and application of the EAF and ANP in the context of ontology matching.

6. Experiments

In order to evaluate the effectiveness of the proposed approach, an empirical evaluation was conducted. The experiments aim to:

• Compare the proposed approach with the state-of-the-art FDO approach [3], which is an improvement of MbA [4];
• Evaluate the systems' ability to capture the dependency between intentional arguments (i.e. correspondences) in the outcome of the negotiation process;
• Evaluate the relevance of the H relations in the outcome of the negotiation process.

Fig. 11. Details of the proposed ANP entities and their creation relation with EAF.

Table 1
The test set of ontologies and their characteristics.

Ontology Named classes Object properties Data properties Expressivity


Cmt 36 49 10 ALCIN(D)
Conference 60 46 18 ALCHIF(D)
ConfOf 38 13 23 SIN(D)
Edas 104 30 20 ALCOIN(D)
Ekaw 74 33 0 SHIN
Iasted 140 38 3 ALCIN(D)
Sigkdd 49 17 11 ALEI(D)

For this, the experiments are analyzed in a two-fold manner:

• Measuring the resolved conflicts and their correctness;
• Measuring the accuracy of the agreed alignment achieved by the systems through the proposed argumentation process when compared to the systems' initial state, i.e. before the argumentation process.

Before this, however, the set-up of the experiments is described in the next section.

6.1. Experimental set-up

Seven ontologies representing different theories and origins for the same real-world domain (conference organization)
and, therefore, reflecting real-world heterogeneity were taken from the OAEI 2011 Conference Track [5] repository (Table 1).
Even though other ontologies are available in this repository, they were not used because there is no reference alignment
available.
Since the ordering of the ontologies in each possible pair is irrelevant, a total of 21 ontology pairs were identified.
However, for the sake of brevity and simplicity, the experiment results are presented considering the negotiation of all
individual alignments as just one huge alignment. Accordingly, the reference alignment contains 305 correspondences which
correspond to the sum of the number of correspondences of all reference alignments.
Three distinct systems (further referred to as systems A, B and C) have been conceived, adopting different data acquisition methods, argumentation models, argument-generation interpretation functions and evaluation functions depending on the experimentation scenario depicted in Table 2. All the argumentation scenarios were executed for the system pairs (A, B) and (A, C).
Table 2
The argumentation scenarios during the experimentation.

Sc.   System A / System B / System C — per-system columns: Arg. model; Evaluation functions (defeasible, indefeasible); H's; Recl.
1 EAF C { T , ES} k – – EAF C {ES, T } k – – EAF C {ES, T } k – –
2 EAF C f1 k – – EAF C {ES, T } k – – EAF C {ES, T } k – –
3 EAF C { T , ES} k – – EAF C f1 k – – EAF C f1 k – –
4 EAF C f1 k – – EAF C f1 k – – EAF C f1 k – –
5 EAF C f2 k – – EAF C f2 k – – EAF C f2 k – –
6 EAF DC f2 k – – EAF C f2 k – – EAF C f2 k – –
7 EAF C f2 k – – EAF DC f2 k – – EAF DC f2 k – –
8 EAF DC f2 k – – EAF DC f2 k – – EAF DC f2 k – –
9 EAF A f2 k No No EAF B f2 k No No EAF C f2 k No No
10 EAF A f2 k Yes No EAF B f2 k No Yes EAF C f2 k No Yes
11 EAF A f2 k No Yes EAF B f2 k Yes No EAF C f2 k Yes No
12 EAF A f2 k Yes Yes EAF B f2 k Yes Yes EAF C f2 k Yes Yes

Fig. 12. Partial representation of two argumentation models: a) the community argumentation model (EAF C ) and b) the community argumentation model
extended to capture dependency between correspondences (EAF DC ).

Fig. 13. Partial representation of the argumentation model adopted by system A (EAF A ).

The first scenario mimics the FDO approach. Since the FDO approach does not have the notion of intentional argument, the argument-instantiation process was constrained to instantiate the intentional arguments with the value of the most preferred argument-type of each system (scenarios 2–4 also have this constraint). This guarantees that every intentional argument-instance is supported by the most preferred argument-instance. The other scenarios (2–8) exploit the EAF-based approach's feature concerning the adoption of argument evaluation functions instead of preferences on argument-types. Additionally, scenarios 9–12 have been set up based on (i) the systems' ability to exchange arguments through the H relations and (ii) the systems' ability to reclassify terminological argument-instances.
Five different argumentation models have been used in the experiments. EAF_C (Fig. 12a) is a simplified version of the argumentation model previously introduced in Fig. 3. Notice that intentional and non-intentional arguments are represented by rounded and non-rounded rectangles respectively, and statements are represented by dashed rectangles. EAF_DC (Fig. 12b) is the systems' private extension of EAF_C that introduces an R relation between the arguments MatchArg and ExtStructuralArg. Further, to show the relevance of the H relations (H_A, H_S and H_M), it has been decided to use EAF_DC as the common argumentation model for all systems. Additionally, this argumentation model (EAF_DC) has been extended differently and privately by each system: EAF_A (Fig. 13), EAF_B (Fig. 14) and EAF_C (Fig. 15). The elements filled in gray are those belonging to the community argumentation model.
To foster the exchange of arguments, H A and H S relations have been established between the new elements and the
elements of the common argumentation model. Given that, instances of these new arguments might be exchanged as
instances of the terminological argument by systems’ internal reclassification as captured in Table 3. Notice that this table
reflects the systems’ internal and private knowledge, thus a system does not know the reclassification rules of the opponent.
As an example, argument-instances of type EAF A : LexicalLabelArg are exchanged as instances of EAF DC : TerminologicalArg
which are further reclassified as (i) instances of EAF B : WNLabelArg by system B and (ii) as instances of EAF C : LabelArg by
system C.
Concerning the data acquisition phase, the correspondences between each pair of ontologies were generated according
to the matchers mentioned in Tables 4, 5 and 6 for systems A, B and C, respectively.

Fig. 14. The argumentation model adopted by system B (EAF B ).

Fig. 15. The argumentation model adopted by system C (EAF C ).

Table 3
Reclassification of arguments exchanged as terminological.

Original argument type sent Reclassified as


EAF A : LexicalLabelArg EAF B : WNLabelArg
EAF A : LexicalLabelArg EAF C : LabelArg
EAF A : SyntacticLabelArg EAF C : LabelArg
EAF B : SoundexLabelArg EAF A : LabelArg
EAF B : WNLabelArg EAF A : LexicalLabelArg
EAF C : LabelArg GC1 EAF A : LabelArg
EAF C : LabelArg GC2 EAF A : LexicalLabelArg

These tables also present the interpretation functions and the thresholds adopted by the respective systems to generate arguments, considering that: (i) the correspondence content can be anything (e.g. a correspondence between concepts, between properties, or between a concept and a property) and (ii) the reasoning mechanism is Heuristic (cf. [17] for details).
Concerning the argument evaluation phase, two dimensions have to be considered. The first regards the argument evaluation functions used by the systems. These are described in the "Evaluation functions" column of Table 2 according to the arguments' defeasibility: defeasible or indefeasible. An evaluation function defined as {T, ES} or {ES, T} means that arguments are evaluated according to the FDO approach, where terminological arguments (T) are preferred to external structural arguments (ES), or vice versa, respectively. Function f_1 counts the number of support relationships (n_sup) and the number of attack relationships (n_att) of the argument-instance being evaluated (x), such that:

f_1(x, mapV_{i−1}) =  1,  if n_sup(x) > n_att(x)
                     −1,  if n_att(x) > n_sup(x)
                      0,  otherwise

Table 4
The interpretation function of system A.

ID     Matcher description     Statement type       tr+    tr−
G_A1   WNMatcher [32]          LexicalLabelSt       1.00   1.00
G_A2   String-distance [33]    SyntacticalLabelSt   0.75   0.75
G_A3   V-Doc [34]              LabelSt              0.70   0.70
G_A4   Max(G_A1, G_A2)^a       TerminologicalSt     0.80   0.80
G_A5   GMO [35]                ExtStructuralSt      0.50   0.50
G_A6   Falcon-AO [33]          MatchSt              0.70   0.70

^a Corresponds to the aggregation of the alignments outputted by the input matching algorithms through the max function.

Table 5
The interpretation function of system B.

ID     Matcher description           Statement type     tr+    tr−
G_B1   Soundex [36]^a                SoundexLabelSt     0.75   0.75
G_B2   WNPlusMatcher [32]            WNLabelSt          1.00   1.00
G_B3   OWA(G_B1, G_B2, BiGram^b)^c   TerminologicalSt   0.60   0.60
G_B4   StructureMatcher [32]         ExtStructuralSt    0.70   0.70
G_B5   Max(G_B2, SMOA [39])          MatchSt            0.25   0.25

^a Implemented in the SimMetrics project available at http://sourceforge.net/projects/simmetrics/.
^b Corresponds to the string-based matching algorithm available in SimPack [37] that exploits the frequency of substrings with length 2.
^c Corresponds to the aggregation of the alignments outputted by the input matching algorithms through the OWA operator [38].

Table 6
The interpretation function of system C.

ID     Matcher description           Statement type     tr+    tr−
G_C1   Levenshtein [40]              LabelSt            0.75   0.75
G_C2   WNPlusMatcher [32]            LabelSt            1.00   1.00
G_C3   Avg(G_C1, G_C2, SMOA)^a       TerminologicalSt   0.70   0.70
G_C4   Avg(G_B4, SMOA)               ExtStructuralSt    0.80   0.80
G_C6   Op(Max(G_C2, SMOA, G_B4))^b   MatchSt            0.25   0.25

^a Corresponds to the aggregation of the alignments outputted by the input matching algorithms through the linear average function.
^b Corresponds to the global optimization of the input alignment by the Hungarian method [9].

Function f_2 returns the weighted average between (i) the strength value of the argument-instance being evaluated (x) and (ii) the normalized difference between the sum of the strength values of all argument-instances supporting it and the sum of the strength values of all argument-instances attacking it, such that:
    
$$
f_2(x, mapV_{i-1}) = \frac{1}{3}\, mapV_{i-1}^{x}
+ \frac{2}{3} \left( \Bigl( \sum_{y\,R_{sup}\,x} mapV_{i-1}^{y}
- \sum_{y\,R_{att}\,x} mapV_{i-1}^{y} \Bigr) \Big/ \, \lvert y\,R\,x \rvert \right)
$$
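A minimal sketch of f_2 follows (names and the guard for arguments with no related arguments are assumptions; the paper's f_2 takes the map of strength values mapV_{i-1}, here flattened into the supporters' and attackers' strength lists):

```python
# Minimal sketch of f2: weighted average of the argument's own strength
# and the normalised balance between supporting and attacking strengths.
def f2(strength, supporters, attackers):
    related = len(supporters) + len(attackers)   # |y R x|
    if related == 0:
        return strength  # assumption: no related arguments, keep own strength
    balance = (sum(supporters) - sum(attackers)) / related
    return strength / 3 + 2 * balance / 3

print(f2(0.9, [0.8, 0.6], []))  # 1/3 * 0.9 + 2/3 * 0.7
```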

Second, regarding the selection of a preferred extension, a common criterion shared by all systems was defined. This criterion states that if more than one preferred extension is generated, the system must adopt the preferred extension that differs least from the one adopted in the previous iteration of the negotiation process. By using this criterion, the systems (i) remain more consistent with the position previously assumed in the negotiation and (ii) do not give up their initial position too easily.
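Measuring "differs least" as the size of the symmetric difference between extensions, this shared criterion can be sketched as follows (the distance metric is an assumption; the paper does not fix one):

```python
# Hedged sketch of the shared selection criterion: among several preferred
# extensions, adopt the one with the smallest symmetric difference from the
# extension adopted in the previous negotiation iteration.
def select_extension(preferred, previous):
    return min(preferred, key=lambda ext: len(ext ^ previous))

prev = {"a1", "a2", "a3"}
exts = [{"a1", "a4"}, {"a1", "a2", "a4"}]
print(select_extension(exts, prev))  # {'a1', 'a2', 'a4'}
```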

6.2. Results

With respect to the resolution of conflicts, the results of the argument-based negotiation for the system pairs (A, B) and (A, C) are depicted in Table 7. The table shows: (i) the initial number of conflicts existing between the two systems before the argumentation process, (ii) the number of conflicts resolved during the argumentation process, (iii) the number of conflicts remaining after the argumentation process, (iv) the percentage of resolved conflicts, and the percentages of conflicts (v) correctly and (vi) badly resolved, both relative to the number of resolved conflicts.
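The bookkeeping behind Table 7 can be sketched as below (function and column names are hypothetical; the correct-resolution count of 168 for scenario 3, pair (A, B), is derived from the reported 98.82% of 170 resolved conflicts):

```python
# Sketch of the Table 7 bookkeeping: remaining conflicts and the resolved /
# correctly / badly resolved percentages.
def conflict_stats(initial, resolved, correctly):
    remain = initial - resolved
    total_pct = 100 * resolved / initial
    correct_pct = 100 * correctly / resolved if resolved else 0.0
    badly_pct = 100 - correct_pct if resolved else 0.0
    return remain, total_pct, correct_pct, badly_pct

# scenario 3, pair (A, B): 1319 initial, 170 resolved, 168 correctly
remain, total, correct, bad = conflict_stats(1319, 170, 168)
print(remain, round(total, 2), round(correct, 2), round(bad, 2))
```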
Regarding alignment accuracy, Table 8 summarizes and characterizes two kinds of alignments: (i) the alignment generated by each system before the argument-based negotiation process and (ii) the agreed alignment obtained in each scenario after the argument-based negotiation process. Each alignment is characterized qualitatively by the accuracy measures Precision, Recall and F-Measure. It is worth noticing that each system's alignment before the negotiation process comes from the preferred extension evaluated by the system in the argument evaluation phase of the first iteration of the proposed process, which only considers the arguments generated by the system itself. Arguments put forward by the counter-part system are only considered in the subsequent iterations of the process. This means that each system exploits

Table 7
Analysis of the conflicts between systems.

      System A vs. system B                             System A vs. system C
      Number of conflicts     Conflicts resolved (%)    Number of conflicts    Conflicts resolved (%)
Sc.   Initial Resolved Remain Total  Correctly Badly    Initial Resolved Remain Total   Correctly Badly
1     1319    0        1319   0.00   0.00      0.00     493     0        493    0.00    0.00      0.00
2     1319    88       1231   6.67   67.05     32.95    493     51       442    10.34   78.43     21.57
3     1319    170      1149   12.89  98.82     1.18     493     75       418    15.21   98.67     1.33
4     1319    258      1061   19.56  87.98     12.02    493     126      367    25.56   90.48     9.52
5     995     769      226    77.29  93.76     6.24     360     221      139    66.39   90.50     9.50
6     995     742      253    74.57  95.15     4.85     360     213      147    59.17   94.37     5.63
7     995     740      255    74.37  95.27     4.73     356     206      150    57.87   93.69     6.31
8     995     757      238    76.08  94.06     5.94     356     219      137    61.52   93.15     6.85
9     293     3        290    1.02   100.00    0.00     50      24       26     48.00   75.00     25.00
10    293     243      50     82.94  89.30     10.70    50      50       0      100.00  66.00     34.00
11    293     29       264    9.90   75.86     24.14    50      37       13     74.00   67.57     32.43
12    293     257      36     87.71  89.88     10.12    50      38       12     76.00   71.05     28.95

Table 8
Summary and characterization of the alignments.

      System A             System B             System C             System A vs. system B  System A vs. system C
Sc.   Prec.  Rec.   F-M.   Prec.  Rec.   F-M.   Prec.  Rec.   F-M.   Prec.  Rec.   F-M.     Prec.  Rec.   F-M.
1     57.70  57.70  57.70  11.86  56.39  19.60  27.49  63.28  38.33  68.35  48.85  56.98    64.98  54.75  59.43
2     57.70  57.70  57.70  11.86  56.39  19.60  27.49  63.28  38.33  67.87  49.18  57.03    64.62  55.08  59.47
3     57.70  57.70  57.70  11.86  56.39  19.60  27.49  63.28  38.33  67.73  48.85  56.76    64.98  54.75  59.53
4     57.70  57.70  57.70  11.86  56.39  19.60  27.49  63.28  38.33  67.26  49.18  56.82    64.62  55.08  59.47
5     57.91  56.39  57.14  18.44  72.79  29.42  35.13  57.70  43.67  65.25  55.41  59.93    66.07  48.52  55.95
6     57.91  56.39  57.14  18.44  72.79  29.42  35.13  57.70  43.67  65.35  54.43  59.39    66.96  49.84  57.14
7     57.91  56.39  57.14  18.44  72.79  29.42  35.21  57.38  43.64  65.25  55.41  59.93    66.07  48.52  55.95
8     57.91  56.39  57.14  18.44  72.79  29.42  35.21  57.38  43.64  63.77  57.70  60.59    67.09  52.13  58.67
9     76.26  49.51  60.04  38.68  57.70  46.32  76.14  43.93  55.72  80.87  48.52  60.66    78.49  47.87  59.47
10    76.26  49.51  60.04  38.68  57.70  46.32  76.14  43.93  55.72  80.77  48.20  60.37    76.50  50.16  60.59
11    76.26  49.51  60.04  38.68  57.70  46.32  76.14  43.93  55.72  81.15  50.82  62.50    78.49  47.87  59.47
12    76.26  49.51  60.04  38.68  57.70  46.32  76.14  43.93  55.72  81.08  49.18  61.22    78.19  48.20  59.63

argumentation for both: (i) reasoning about what to believe (e.g. determining the system's initial alignment) and (ii) establishing an agreement with other systems (e.g. determining the agreed alignment). Therefore, a system's alignment before the negotiation process may change from one scenario to another due to changes in the adopted argumentation model, data acquisition, argument generation and argument evaluation processes.
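Treating alignments as sets of correspondences, the Precision, Recall and F-Measure values reported in Table 8 follow the standard definitions, sketched below against a reference alignment (the example correspondences are hypothetical):

```python
# Standard precision/recall/f-measure over alignments, treated as sets of
# correspondences, against a reference alignment.
def accuracy(alignment, reference):
    tp = len(alignment & reference)                  # true positives
    precision = tp / len(alignment) if alignment else 0.0
    recall = tp / len(reference) if reference else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

ref = {("c1", "d1"), ("c2", "d2"), ("c3", "d3"), ("c4", "d4")}
ali = {("c1", "d1"), ("c2", "d2"), ("c5", "d5")}
print(accuracy(ali, ref))  # 2 of 3 found are correct, 2 of 4 expected found
```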

6.3. Analysis and discussion

The examination of these results shows that in the FDO approach (scenario 1) the systems were not able to resolve any conflict. This is a consequence of the argument evaluation process of the FDO approach: whenever a system is able to generate an argument-instance of its preferred argument-type, the best the opponent system can do is send an argument-instance of the same type but with the opposite position, i.e. one system asserts a and the opponent negates it (¬a). In this case, the system obtains two possible preferred extensions. Due to the established criterion for preferred extension selection, the system opts for the preferred extension that maintains its previous position. Since none of the conflicts between the systems are resolved during the argumentation, the agreed alignment corresponds exactly to the intersection of the alignments devised by the systems before the argumentation process runs.
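The mutual-attack deadlock described above can be reproduced with a naive enumeration of preferred extensions (maximal admissible sets) in the sense of Dung [19]. This brute-force sketch is for illustration only, not the framework's actual evaluation machinery:

```python
from itertools import combinations

# Naive enumeration of Dung preferred extensions (maximal admissible sets).
def preferred(args, attacks):
    def conflict_free(S):
        return not any((x, y) in attacks for x in S for y in S)
    def defends(S, x):
        # every attacker of x is attacked by some member of S
        return all(any((z, y) in attacks for z in S)
                   for y in args if (y, x) in attacks)
    admissible = [S for r in range(len(args) + 1)
                  for S in map(frozenset, combinations(args, r))
                  if conflict_free(S) and all(defends(S, x) for x in S)]
    return [S for S in admissible
            if not any(S < T for T in admissible)]

# one system asserts a, the opponent asserts not-a: two preferred extensions,
# so the selection criterion lets each system keep its previous position
exts = preferred({"a", "not_a"}, {("a", "not_a"), ("not_a", "a")})
print(sorted(sorted(e) for e in exts))  # [['a'], ['not_a']]
```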
On the other hand, in all the other scenarios (where at least one feature of the proposed EAF-based approach is exploited) the systems were always able to resolve some conflicts. For example, by changing the argument evaluation function of the defeasible arguments only (scenarios 2 to 5), the rate of resolved conflicts varies from 6.67% to 77.29%. Moreover, independently of the number of resolved conflicts, the percentage of conflicts correctly resolved is always very high (66% in the worst case).
Comparing the alignment accuracy, in terms of f-measure, achieved by the FDO approach (scenario 1) with the accuracy of the agreed alignments in all the other scenarios, one realizes that the alignment accuracy varies by at most approximately +5.5% or −3.5%. These small variations occur while the conflicts are being resolved, which reveals two issues:

• The great difficulty a system has in persuading its opponent to accept the inclusion of a given correspondence in the agreement; which arises from
• The lack of evidence supporting the inclusion of a given correspondence in the agreement, in contrast with the evidence against such inclusion.

Thus, it becomes clear that, contrary to the FDO approach, the proposed EAF-based approach is able to resolve conflicts even when the argumentation skills of the systems are very limited (only two kinds of arguments exist). Moreover, while the conflicts are resolved, the accuracy of the agreed alignment improves.
Yet, it is important to bear in mind that saying that one agreed alignment is more or less sound than another depends on two metrics: (i) the resolved conflicts and their correctness and (ii) the alignment accuracy. However, since the proposed negotiation process aims to resolve conflicts, one may argue that an agreed alignment with a high level of correctly resolved conflicts can be taken (i.e. exploited by the computational systems) with more confidence than the same alignment with a lower level of correctly resolved conflicts. Thus, variations in alignment accuracy assess the impact of conflict resolution on the quality of the alignment. Another important fact concerns the initial number of conflicts. It is perceivable that as the individual systems' matching abilities evolve (e.g. by adopting extended argumentation models), (i) the initial number of conflicts between systems decreases and (ii) the accuracy of the alignment devised by the systems before the argumentation process improves. In this respect, the adoption of EAF_DC by system C (scenarios 7 and 8) leads to a reduction of correspondences from its previous initial alignment (from 360 to 356).
Regarding the dependency between correspondences feature, by comparing the results of scenario 5, where neither system exploits the dependency feature, with the scenarios where at least one of the systems exploits it (scenarios 6 to 8), it is perceivable that:

• The number of resolved conflicts slightly decreased;
• The percentage of conflicts correctly resolved slightly increased;
• The f-measure of the agreed alignment in scenarios 6 to 8 is greater than the one achieved in scenario 5. This is even more evident in scenario 8, where the two systems exploit the dependency feature simultaneously.

The combination of these three facts supports the conclusion that the dependency feature helps to improve both (i) the quality of the resolved conflicts and (ii) the accuracy of the agreed alignment.
The usefulness of the H relations feature should be measured in combination with the systems' ability to reclassify arguments. Hence, by comparing the number of resolved conflicts and their correctness between scenario 9 (where none of these features are exploited) and scenario 12 (where both features are exploited), the usefulness of these two features for conflict resolution becomes evident. The results of scenarios 10 and 11, when compared to those of scenario 9, allow conclusions about the persuasiveness of the system exploiting the H relations. System A was very persuasive against both systems, and system C was also very persuasive (though less so than system A). In contrast, system B was inefficient, since the rate of resolved conflicts grows only from 1.02% to 9.90%. Similarly, the agreed alignment of scenarios 10 to 12 is better than or equal to the agreed alignment of scenario 9; the exception is scenario 10 for the system pair (A, B). Thus, considering the accuracy of the agreed alignments and the quantity and quality of the resolved conflicts, one may conclude that establishing H relations in a system's private argumentation model is useful when the opponent system is able to reclassify the exchanged argument-instances based on that feature. This might also be seen by the systems as an indication to refine the community argumentation model, as foreseen in the adopted ANP.
Finally, comparing in terms of f-measure the alignments devised individually by the systems with the agreed alignment, it becomes clear that the systems profit from the argumentation process:

• System A is the one that profits least, with f-measure disparities from approximately −1.5% to +3.4%. This occurs because system A generates the best initial alignments and is very confident in them. Despite this, in most of the scenarios it gains rather than loses;
• System B has f-measure improvements varying from approximately +14% to +36%;
• System C has f-measure improvements varying from approximately +4% to +21%.

These f-measure improvements happen at the same time the conflicts are resolved.

7. Conclusions

The primary emphasis of the research presented in this paper is on proposing a novel argument-based negotiation approach that enables computational systems to resolve their ontology matching divergences. The proposed approach combines and applies to the ontology matching negotiation problem the core concepts of two generic artifacts: (i) ANP [15] as the negotiation process and (ii) EAF [16] as the argumentation framework. It is our conviction that both ANP and EAF are suitable for many negotiation scenarios/domains, including e-commerce and web services selection. In this respect, the proposed approach is captured in a software development framework that allows a guided and easy development of such features in diverse computational systems. This framework proved to be suitable and versatile by providing the mechanisms to perform extensive and diverse experiments for evaluating the proposal with respect to the state-of-the-art approaches. To outperform the state-of-the-art ontology matching negotiation approaches, the proposed approach followed a different line of

research. However, it is able to mimic such approaches, as demonstrated during the experiments. Furthermore, the proposed approach goes beyond the state-of-the-art approaches in the following ways:

1. It encourages computational systems to employ private arguments in their internal reasoning process by privately extending the public argumentation model. This feature relies on the EAF-specific constructs and semantics;
2. It makes it possible to take into consideration dependencies between the correspondences under negotiation by explicitly capturing such dependencies in the model layer through the R relation between intentional arguments. An intentional argument is a fully-fledged argument corresponding to an object under negotiation, which can be affected (either directly or indirectly) by other intentional arguments;
3. It adopts an argument evaluation process allowing computational systems to express more complex and flexible preferences over arguments. This is supported by the ability to apply arbitrarily complex domain-dependent or domain-independent evaluation functions that typically exploit the adopted argumentation model, namely the R relation;
4. It allows the approach to be easily adapted and evolved to support scenarios with different requirements, namely concerning the number and types of arguments that computational systems can plausibly exploit.

Considering the previous exposition, it is believed that the proposed contributions exceed the state of the art while providing a formal yet pragmatic software development framework for their application in diverse computational systems. Additionally, the presented experiments showed that the proposed argument-based negotiation process performs better than the state-of-the-art ontology matching argument-based negotiation approaches both quantitatively and qualitatively, regarding the resolved conflicts and the accuracy of the agreed alignment. Moreover, the improvements over the state of the art demonstrate the need for, and the benefits of, adopting an explicit, formal and extensible specification of a shared argumentation model in order to resolve conflicts and achieve better agreements. The proposed ideas depend on several factors, such as the (interpretation of) matchers, the argumentation models and their modeling methodologies, and the design of the evaluation functions. While these factors are indeed important and constrain the adoption of the proposed ideas, they are not systematically addressed in this paper and will deserve our future attention. Despite that, they are supported by the provided software development framework.

Acknowledgements

This work is partially supported by the Portuguese projects: COALESCE (PTDC/EIA/74417/2006) of MCTES-FCT and World
Search (QREN11495) of FEDER. The authors would like to acknowledge Jorge Santos, Maria João Viamonte, Jorge Coelho and
Besik Dundua for their useful counsels and Jane Walker for her revision of the document.

References

[1] J. Euzenat, P. Shvaiko, Ontology Matching, 1st ed., Springer-Verlag, Heidelberg, Germany, 2007.
[2] N. Silva, P. Maio, J. Rocha, An approach to ontology mapping negotiation, in: Workshop on Integrating Ontologies of the Third International Conference on Knowledge Capture, Banff (Alberta), Canada, 2005.
[3] P. Doran, T. Payne, V. Tamma, I. Palmisano, Deciding agent orientation on ontology mappings, in: 9th International Semantic Web Conference (ISWC), 2010.
[4] L. Laera, I. Blacoe, V. Tamma, T.R. Payne, J. Euzenat, T. Bench-Capon, Argumentation over ontology correspondences in MAS, in: 6th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2007), Honolulu, Hawaii, USA, 2007, p. 228.
[5] OAEI, Ontology Alignment Evaluation Initiative, 2011 Campaign, available online: http://oaei.ontologymatching.org/2011/, 2011.
[6] E. Rahm, P.A. Bernstein, A survey of approaches to automatic schema matching, VLDB J. 10 (4) (2001) 334–350.
[7] P. Shvaiko, J. Euzenat, A survey of schema-based matching approaches, J. Data Semant. IV (2005) 146–171.
[8] D. Gale, L.S. Shapley, College admissions and the stability of marriage, Am. Math. Mon. 69 (1) (1962) 5–15.
[9] J. Munkres, Algorithms for the assignment and transportation problems, J. Soc. Ind. Appl. Math. 5 (1) (1957) 32–38.
[10] D.H. Ngo, Z. Bellahsene, R. Coletta, et al., A flexible system for ontology matching, in: S. Nurcan (Ed.), IS Olympics: Information Systems in a Diverse World, Springer, Berlin, Heidelberg, 2012, pp. 79–94.
[11] K. Saruladha, G. Aghila, B. Sathiya, A comparative analysis of ontology and schema matching systems, Int. J. Comput. Appl. 34 (8) (2011) 14–21.
[12] P. Maio, N. Silva, GOALS – A test-bed for ontology matching, in: 1st IC3K International Conference on Knowledge Engineering and Ontology Development (KEOD), Funchal (Madeira), Portugal, 2009, pp. 293–299.
[13] T. Bench-Capon, Persuasion in practical argument using value-based argumentation frameworks, J. Log. Comput. 13 (3) (2003) 429–448.
[14] M. Wooldridge, An Introduction to MultiAgent Systems, 2nd ed., Wiley, 2009.
[15] P. Maio, N. Silva, J. Cardoso, Iterative, incremental and evolving EAF-based negotiation process, in: T. Ito, M. Zhang, V. Robu, T. Matsuo (Eds.), Complex Automated Negotiations: Theories, Models, and Software Competitions, Springer, Berlin, Heidelberg, 2013, pp. 161–179.
[16] P. Maio, An extensible argumentation model for ontology matching negotiation, Ph.D. thesis, University of Trás-os-Montes, Vila Real, Portugal, 2012.
[17] P. Maio, N. Silva, J. Cardoso, Generating arguments for ontology matching, in: 10th International Workshop on Web Semantics (WebS) at DEXA, Toulouse, France, 2011, pp. 239–243.
[18] P. Maio, N. Silva, A three-layer argumentation framework, in: S. Modgil, N. Oren, F. Toni (Eds.), Theories and Applications of Formal Argumentation, vol. 7132, Springer, Berlin, Heidelberg, 2012, pp. 163–180.
[19] P.M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games, Artif. Intell. 77 (2) (1995) 321–357.
[20] C. Cayrol, M.C. Lagasquie-Schiex, On the acceptability of arguments in bipolar argumentation frameworks, in: Symbolic and Quantitative Approaches to Reasoning with Uncertainty, 2005, pp. 378–389.
[21] D. Walton, Argumentation theory: a very short introduction, in: Argumentation in Artificial Intelligence, Springer Publishing Company, Incorporated, 2009.
[22] M.E. Bratman, Intention, Plans and Practical Reason, Harvard University Press, Cambridge, MA, 1987.
[23] T.R. Gruber, A translation approach to portable ontology specifications, Knowl. Acquis. 5 (2) (1993) 199–220.
[24] T. Gruber, What is an ontology?, available online: http://www-ksl.stanford.edu/kst/what-is-an-ontology.html.
[25] H. Prakken, An abstract framework for argumentation with structured arguments, Argument & Computation 1 (2) (2010) 93.
[26] P. Baroni, M. Giacomin, Semantics of abstract argument systems, in: Argumentation in Artificial Intelligence, 2009, pp. 25–44.
[27] C. Cayrol, M.C. Lagasquie-Schiex, Coalitions of arguments: A tool for handling bipolar argumentation frameworks, Int. J. Intell. Syst. 25 (1) (2010) 83–109.
[28] C. Cayrol, M.C. Lagasquie-Schiex, Gradual valuation for bipolar argumentation frameworks, in: Symbolic and Quantitative Approaches to Reasoning with Uncertainty, 2005, pp. 366–377.
[29] L. Amgoud, C. Cayrol, M.C. Lagasquie-Schiex, P. Livet, On bipolarity in argumentation frameworks, Int. J. Intell. Syst. 23 (10) (2008) 1062–1093.
[30] N. Karacapilidis, D. Papadias, Computer supported argumentation and collaborative decision making: the Hermes system, Inf. Syst. 26 (2001) 259–277.
[31] B. Verheij, On the existence and multiplicity of extensions in dialectical argumentation, arXiv:cs/0207067, July 2002.
[32] Y. Kalfoglou, B. Hu, N. Shadbolt, D. Reynolds, CROSI – capturing representing and operationalising semantic integration, available online: http://www.aktors.org/crosi/, 2005.
[33] N. Jian, W. Hu, G. Cheng, Y. Qu, Falcon-AO: aligning ontologies with falcon, in: Proceedings of the K-CAP Workshop on Integrating Ontologies, Banff, Canada, 2005, pp. 87–93.
[34] Y. Qu, W. Hu, G. Cheng, Constructing virtual documents for ontology matching, in: Proceedings of the 15th International Conference on World Wide Web, 2006, pp. 23–31.
[35] W. Hu, N. Jian, Y. Qu, Q. Wang, GMO: a graph matching for ontologies, in: Proceedings of the K-CAP Workshop on Integrating Ontologies, Banff, Canada, 2005, pp. 43–50.
[36] R.C. Russell, US Patent 1261167 (A), 2 Apr. 1918.
[37] A. Bernstein, E. Kaufmann, C. Kiefer, C. Burki, SimPack: a generic Java library for similarity measures in ontologies, Technical report, University of Zurich, Department of Informatics, 2005.
[38] Q. Ji, P. Haase, G. Qi, Combination of similarity measures in ontology matching using the OWA operator, in: Proceedings of the 12th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU'08), 2008.
[39] G. Stoilos, G. Stamou, S. Kollias, A string metric for ontology alignment, in: The Semantic Web – ISWC 2005, 2005, pp. 624–637.
[40] V. Levenshtein, Binary codes capable of correcting deletions, insertions, and reversals, Dokl. Akad. Nauk SSSR 163 (4) (1965) 845–848.
