Journal of Artificial Intelligence Research 31 (2008) 273-318 Submitted 07/07; published 02/08

Modular Reuse of Ontologies: Theory and Practice
Bernardo Cuenca Grau berg@comlab.ox.ac.uk
Ian Horrocks ian.horrocks@comlab.ox.ac.uk
Yevgeny Kazakov yevgeny.kazakov@comlab.ox.ac.uk
Oxford University Computing Laboratory
Oxford, OX1 3QD, UK
Ulrike Sattler sattler@cs.man.ac.uk
School of Computer Science
The University of Manchester
Manchester, M13 9PL, UK
Abstract
In this paper, we propose a set of tasks that are relevant for the modular reuse of on-
tologies. In order to formalize these tasks as reasoning problems, we introduce the notions
of conservative extension, safety and module for a very general class of logic-based ontology
languages. We investigate the general properties of and relationships between these notions
and study the relationships between the relevant reasoning problems we have previously
identified. To study the computability of these problems, we consider, in particular, De-
scription Logics (DLs), which provide the formal underpinning of the W3C Web Ontology
Language (OWL), and show that all the problems we consider are undecidable or algo-
rithmically unsolvable for the description logic underlying OWL DL. In order to achieve a
practical solution, we identify conditions sufficient for an ontology to reuse a set of sym-
bols “safely”—that is, without changing their meaning. We provide the notion of a safety
class, which characterizes any sufficient condition for safety, and identify a family of safety
classes, called locality, which enjoys a collection of desirable properties. We use the notion
of a safety class to extract modules from ontologies, and we provide various modulariza-
tion algorithms that are appropriate to the properties of the particular safety class in use.
Finally, we show practical benefits of our safety checking and module extraction algorithms.
1. Motivation
Ontologies—conceptualizations of a domain shared by a community of users—play a ma-
jor role in the Semantic Web, and are increasingly being used in knowledge management
systems, e-Science, bio-informatics, and Grid applications (Staab & Studer, 2004).
The design, maintenance, reuse, and integration of ontologies are complex tasks. Like
software engineers, ontology engineers need to be supported by tools and methodologies
that help them to minimize the introduction of errors, i.e., to ensure that ontologies are
consistent and do not have unexpected consequences. In order to develop this support, im-
portant notions from software engineering, such as module, black-box behavior, and controlled
interaction, need to be adapted.
Recently, there has been growing interest in the topic of modularity in ontology engi-
neering (Seidenberg & Rector, 2006; Noy, 2004a; Lutz, Walther, & Wolter, 2007; Cuenca
Grau, Parsia, Sirin, & Kalyanpur, 2006b; Cuenca Grau, Horrocks, Kazakov, & Sattler,
2007), which has been motivated by the above-mentioned application needs. In this paper,
we focus on the use of modularity to support the partial reuse of ontologies. In particular,
we consider the scenario in which we are developing an ontology P and want to reuse a set
S of symbols from a “foreign” ontology O without changing their meaning.
For example, suppose that an ontology engineer is building an ontology about research
projects, which specifies different types of projects according to the research topic they
focus on. The ontology engineer in charge of the projects ontology P may use terms such
as Cystic Fibrosis and Genetic Disorder in his descriptions of medical research projects. The
ontology engineer is an expert on research projects; he may be unfamiliar, however, with
most of the topics the projects cover and, in particular, with the terms Cystic Fibrosis and
Genetic Disorder. In order to complete the projects ontology with suitable definitions for
these medical terms, he decides to reuse the knowledge about these subjects from a well-
established medical ontology O.
The most straightforward way to reuse these concepts is to construct the logical union
P ∪ O of the axioms in P and O. It is reasonable to assume that the additional knowledge
about the medical terms used in both P and O will have implications on the meaning of the
projects defined in P; indeed, the additional knowledge about the reused terms provides new
information about medical research projects which are defined using these medical terms.
Less intuitive is the fact that importing O may also result in new entailments concerning
the reused symbols, namely Cystic Fibrosis and Genetic Disorder. Since the ontology engineer
of the projects ontology is not an expert in medicine and relies on the designers of O, it is
to be expected that the meaning of the reused symbols is completely specified in O; that
is, the fact that these symbols are used in the projects ontology P should not imply that
their original “meaning” in O changes. If P does not change the meaning of these symbols
in O, we say that P ∪ O is a conservative extension of O.
In realistic application scenarios, it is often unreasonable to assume the foreign ontology
O to be fixed; that is, O may evolve beyond the control of the modelers of P. The ontology
engineers in charge of P may not be authorized to access all the information in O or,
most importantly, they may decide at a later time to reuse the symbols Cystic Fibrosis and
Genetic Disorder from a medical ontology other than O. For application scenarios in which
the external ontology O may change, it is reasonable to “abstract” from the particular O
under consideration. In particular, given a set S of “external symbols”, the fact that the
axioms in P do not change the meaning of any symbol in S should be independent of the
particular meaning ascribed to these symbols by O. In that case, we will say that P is safe
for S.
Moreover, even if P “safely” reuses a set of symbols from an ontology O, it may still
be the case that O is a large ontology. In particular, in our example, the foreign medical
ontology may be huge, and importing the whole ontology would make the consequences
of the additional information costly to compute and difficult for our ontology engineers in
charge of the projects ontology (who are not medical experts) to understand. In practice,
therefore, one may need to extract a module O₁ of O that includes only the relevant
information. Ideally, this module should be as small as possible while still guaranteeing to
capture the meaning of the terms used; that is, when answering queries against the research
projects ontology, importing the module O₁ would give exactly the same answers as if the
whole medical ontology O had been imported. In this case, importing the module will have
the same observable effect on the projects ontology as importing the entire ontology.
Furthermore, the fact that O₁ is a module in O should be independent of the particular P
under consideration.
The contributions of this paper are as follows:
1. We propose a set of tasks that are relevant to ontology reuse and formalize them as
reasoning problems. To this end, we introduce the notions of conservative extension,
safety and module for a very general class of logic-based ontology languages.
2. We investigate the general properties of and relationships between the notions of
conservative extension, safety, and module and use these properties to study the re-
lationships between the relevant reasoning problems we have previously identified.
3. We consider Description Logics (DLs), which provide the formal underpinning of the
W3C Web Ontology Language (OWL), and study the computability of our tasks. We
show that all the tasks we consider are undecidable or algorithmically unsolvable for
the description logic underlying OWL DL—the most expressive dialect of OWL that
has a direct correspondence to description logics.
4. We consider the problem of deciding safety of an ontology for a signature. Given that
this problem is undecidable for OWL DL, we identify sufficient conditions for safety,
which are decidable for OWL DL—that is, if an ontology satisfies our conditions
then it is safe; the converse, however, does not necessarily hold. We propose the
notion of a safety class, which characterizes any sufficient condition for safety, and
identify a family of safety classes, called locality, which enjoys a collection of desirable
properties.
5. We next apply the notion of a safety class to the task of extracting modules from
ontologies; we provide various modularization algorithms that are appropriate to the
properties of the particular safety class in use.
6. We present empirical evidence of the practical benefits of our techniques for safety
checking and module extraction.
This paper extends the results in our previous work (Cuenca Grau, Horrocks, Kutz, &
Sattler, 2006; Cuenca Grau et al., 2007; Cuenca Grau, Horrocks, Kazakov, & Sattler, 2007).
2. Preliminaries
In this section we introduce description logics (DLs) (Baader, Calvanese, McGuinness,
Nardi, & Patel-Schneider, 2003), a family of knowledge representation formalisms which
underlie modern ontology languages, such as OWL DL (Patel-Schneider, Hayes, & Hor-
rocks, 2004). A hierarchy of commonly-used description logics is summarized in Table 1.
The syntax of a description logic L is given by a signature and a set of constructors. A
signature (or vocabulary) Sg of a DL is the (disjoint) union of countably infinite sets AC
of atomic concepts (A, B, . . . ) representing sets of elements, AR of atomic roles (r, s, . . . )
representing binary relations between elements, and Ind of individuals (a, b, c, . . . ) repre-
senting constants. We assume the signature to be fixed for every DL.
DLs    Constructors                                   Axioms [Ax]
       Rol      Con                        RBox        TBox                ABox
EL     r        ⊤, A, C₁ ⊓ C₂, ∃R.C                    A ≡ C, C₁ ⊑ C₂      a : C, r(a, b)
ALC    ––       ––, ¬C                                 ––                  ––
S      ––       ––                         Trans(r)    ––                  ––
+ I    r⁻
+ H                                        R₁ ⊑ R₂
+ F                                        Funct(R)
+ N             (≥ n S)
+ Q             (≥ n S.C)
+ O             {i}

Here r ∈ AR, A ∈ AC, a, b ∈ Ind, R(i) ∈ Rol, C(i) ∈ Con, n ≥ 1, and S ∈ Rol is a simple
role (see Horrocks & Sattler, 2005).
Table 1: The hierarchy of standard description logics
Every DL provides constructors for defining the set Rol of (general) roles (R, S, . . . ),
the set Con of (general) concepts (C, D, . . . ), and the set Ax of axioms (α, β, . . . ) which is
a union of role axioms (RBox), terminological axioms (TBox) and assertions (ABox).
EL (Baader, Brandt, & Lutz, 2005) is a simple description logic which allows one to
construct complex concepts using conjunction C₁ ⊓ C₂ and existential restriction ∃R.C
starting from atomic concepts A, roles R and the top concept ⊤. EL provides no role
constructors and no role axioms; thus, every role R in EL is atomic. The TBox axioms of
EL can be either concept definitions A ≡ C or general concept inclusion axioms (GCIs)
C₁ ⊑ C₂. EL assertions are either concept assertions a : C or role assertions r(a, b). In this
paper we assume the concept definition A ≡ C is an abbreviation for two GCIs A ⊑ C and
C ⊑ A.
The basic description logic ALC (Schmidt-Schauß & Smolka, 1991) is obtained from EL
by adding the concept negation constructor ¬C. We introduce some additional constructors
as abbreviations: the bottom concept ⊥ is a shortcut for ¬⊤, the concept disjunction C₁ ⊔ C₂
stands for ¬(¬C₁ ⊓ ¬C₂), and the value restriction ∀R.C stands for ¬(∃R.¬C). In contrast
to EL, ALC can express contradiction axioms like ⊤ ⊑ ⊥. The logic S is an extension of
ALC where, additionally, some atomic roles can be declared to be transitive using a role
axiom Trans(r).
Further extensions of description logics add features such as inverse roles r⁻ (indicated
by appending a letter I to the name of the logic), role inclusion axioms (RIs) R₁ ⊑ R₂ (+H),
functional roles Funct(R) (+F), number restrictions (≥ n S), with n ≥ 1, (+N), qualified
number restrictions (≥ n S.C), with n ≥ 1, (+Q)¹, and nominals {a} (+O). Nominals make
it possible to construct a concept representing a singleton set {a} (a nominal concept) from
an individual a. These extensions can be used in different combinations; for example, ALCO
is an extension of ALC with nominals; SHIQ is an extension of S with role hierarchies,
inverse roles and qualified number restrictions; and SHOIQ is the DL that uses all the
constructors and axiom types we have presented.

1. The dual constructors (≤ n S) and (≤ n S.C) are abbreviations for ¬(≥ (n+1) S) and ¬(≥ (n+1) S.¬C),
respectively.
Modern ontology languages, such as OWL, are based on description logics and, to a cer-
tain extent, are syntactic variants thereof. In particular, OWL DL corresponds to SHOIN
(Horrocks, Patel-Schneider, & van Harmelen, 2003). In this paper, we assume an ontology
O based on a description logic L to be a finite set of axioms in L. The signature of an
ontology O (of an axiom α) is the set Sig(O) (Sig(α)) of atomic concepts, atomic roles and
individuals that occur in O (respectively in α).
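For instance, for the single axiom α = (A ⊑ ∃r.{b}), where A is an atomic concept, r an atomic role and b an individual, we have Sig(α) = {A, r, b}.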
The main reasoning task for ontologies is entailment: given an ontology O and an axiom
α, check if O implies α. The logical entailment ⊨ is defined using the usual Tarski-style
set-theoretic semantics for description logics as follows. An interpretation I is a pair I =
(Δ^I, ·^I), where Δ^I is a non-empty set, called the domain of the interpretation, and ·^I is the
interpretation function that assigns: to every A ∈ AC a subset A^I ⊆ Δ^I, to every r ∈ AR
a binary relation r^I ⊆ Δ^I × Δ^I, and to every a ∈ Ind an element a^I ∈ Δ^I. Note that the
sets AC, AR and Ind are not defined by the interpretation I but assumed to be fixed for
the ontology language (DL).

The interpretation function ·^I is extended to complex roles and concepts via DL-
constructors as follows:

    (⊤)^I         = Δ^I
    (C ⊓ D)^I     = C^I ∩ D^I
    (∃R.C)^I      = { x ∈ Δ^I | ∃y. ⟨x, y⟩ ∈ R^I ∧ y ∈ C^I }
    (¬C)^I        = Δ^I \ C^I
    (r⁻)^I        = { ⟨x, y⟩ | ⟨y, x⟩ ∈ r^I }
    (≥ n R)^I     = { x ∈ Δ^I | #{ y ∈ Δ^I | ⟨x, y⟩ ∈ R^I } ≥ n }
    (≥ n R.C)^I   = { x ∈ Δ^I | #{ y ∈ Δ^I | ⟨x, y⟩ ∈ R^I ∧ y ∈ C^I } ≥ n }
    ({a})^I       = { a^I }
The satisfaction relation I ⊨ α between an interpretation I and a DL axiom α (read as I
satisfies α, or I is a model of α) is defined as follows:

    I ⊨ C₁ ⊑ C₂   iff  C₁^I ⊆ C₂^I;        I ⊨ a : C     iff  a^I ∈ C^I;
    I ⊨ R₁ ⊑ R₂   iff  R₁^I ⊆ R₂^I;        I ⊨ r(a, b)   iff  ⟨a^I, b^I⟩ ∈ r^I;
    I ⊨ Trans(r)  iff  ∀x, y, z ∈ Δ^I [ ⟨x, y⟩ ∈ r^I ∧ ⟨y, z⟩ ∈ r^I ⇒ ⟨x, z⟩ ∈ r^I ];
    I ⊨ Funct(R)  iff  ∀x, y, z ∈ Δ^I [ ⟨x, y⟩ ∈ R^I ∧ ⟨x, z⟩ ∈ R^I ⇒ y = z ].
An interpretation I is a model of an ontology O if I satisfies all axioms in O. An
ontology O implies an axiom α (written O ⊨ α) if I ⊨ α for every model I of O. Given
a set ℑ of interpretations, we say that an axiom α (an ontology O) is valid in ℑ if every
interpretation I ∈ ℑ is a model of α (respectively O). An axiom α is a tautology if it is valid
in the set of all interpretations (or, equivalently, is implied by the empty ontology).
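As a small illustration of these definitions (with A, B, C fresh atomic concepts and r an atomic role), consider O = {A ⊑ ∃r.B, B ⊑ C}. Then O ⊨ (A ⊑ ∃r.C): in every model I of O, each x ∈ A^I has an r-successor y ∈ B^I, and B^I ⊆ C^I yields y ∈ C^I, so x ∈ (∃r.C)^I. The axiom ⊤ ⊑ ⊤, in contrast, is a tautology, since it is satisfied by every interpretation.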
We say that two interpretations I = (Δ^I, ·^I) and J = (Δ^J, ·^J) coincide on the subset
S of the signature (notation: I|S = J|S) if Δ^I = Δ^J and X^I = X^J for every X ∈ S. We
say that two sets of interpretations ℑ and 𝔍 are equal modulo S (notation: ℑ|S = 𝔍|S) if for
every I ∈ ℑ there exists J ∈ 𝔍 such that J|S = I|S and for every J ∈ 𝔍 there exists I ∈ ℑ
such that I|S = J|S.
Ontology of medical research projects P:
P1  Genetic Disorder Project ≡ Project ⊓ ∃has Focus.Genetic Disorder
P2  Cystic Fibrosis EUProject ≡ EUProject ⊓ ∃has Focus.Cystic Fibrosis
P3  EUProject ⊑ Project
P4  ∃has Focus.⊤ ⊑ Project
E1  Project ⊓ (Genetic Disorder ⊓ Cystic Fibrosis) ⊑ ⊥
E2  ∀has Focus.Cystic Fibrosis ⊑ ∃has Focus.Genetic Disorder

Ontology of medical terms Q:
M1  Cystic Fibrosis ≡ Fibrosis ⊓ ∃located In.Pancreas ⊓ ∃has Origin.Genetic Origin
M2  Genetic Fibrosis ≡ Fibrosis ⊓ ∃has Origin.Genetic Origin
M3  Fibrosis ⊓ ∃located In.Pancreas ⊑ Genetic Fibrosis
M4  Genetic Fibrosis ⊑ Genetic Disorder
M5  DEFBI Gene ⊑ Immuno Protein Gene ⊓ ∃associated With.Cystic Fibrosis
Figure 1: Reusing medical terminology in an ontology on research projects
3. Ontology Integration and Knowledge Reuse
In this section, we elaborate on the ontology reuse scenario sketched in Section 1. Based on
this application scenario, we motivate and define the reasoning tasks to be investigated in
the remainder of the paper. In particular, our tasks are based on the notions of a conservative
extension (Section 3.2), safety (Sections 3.2 and 3.3) and module (Section 3.4). These notions
are defined relative to a language L. Within this section, we assume that L is an ontology
language based on a description logic; in Section 3.6, we will define formally the class of
ontology languages for which the given definitions of conservative extensions, safety and
modules apply. For the convenience of the reader, all the tasks we consider in this paper
are summarized in Table 2.
3.1 A Motivating Example
Suppose that an ontology engineer is in charge of a SHOIQ ontology on research projects,
which specifies different types of projects according to the research topic they are concerned
with. Assume that the ontology engineer defines two concepts—Genetic Disorder Project and
Cystic Fibrosis EUProject—in his ontology P. The first one describes projects about genetic
disorders; the second one describes European projects about cystic fibrosis, as given by the
axioms P1 and P2 in Figure 1. The ontology engineer is an expert on research projects: he
knows, for example, that every instance of EUProject must be an instance of Project (the
concept-inclusion axiom P3) and that the role has Focus can be applied only to instances
of Project (the domain axiom P4). He may be unfamiliar, however, with most of the topics
the projects cover and, in particular, with the terms Cystic Fibrosis and Genetic Disorder
mentioned in P1 and P2. In order to complete the projects ontology with suitable definitions
for these medical terms, he decides to reuse the knowledge about these subjects from a well-
established and widely-used medical ontology.
Suppose that Cystic Fibrosis and Genetic Disorder are described in an ontology Q containing
the axioms M1–M5 in Figure 1. The most straightforward way to reuse these concepts
is to import in P the ontology Q—that is, to add the axioms from Q to the axioms in P
and work with the extended ontology P ∪ Q. Importing additional axioms into an ontology
may result in new logical consequences. For example, it is easy to see that axioms M1–M4
in Q imply that every instance of Cystic Fibrosis is an instance of Genetic Disorder:

    Q ⊨ α := (Cystic Fibrosis ⊑ Genetic Disorder)     (1)

Indeed, the concept inclusion α₁ := (Cystic Fibrosis ⊑ Genetic Fibrosis) follows from axioms
M1 and M2 as well as from axioms M1 and M3; α follows from axioms α₁ and M4.
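Spelled out, the first derivation runs as follows: by M1, Cystic Fibrosis ⊑ Fibrosis ⊓ ∃has Origin.Genetic Origin, which is exactly the right-hand side of the definition M2 of Genetic Fibrosis, so Cystic Fibrosis ⊑ Genetic Fibrosis; combining this with M4 gives Cystic Fibrosis ⊑ Genetic Disorder, that is, the axiom α.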
Using inclusion α from (1) and axioms P1–P3 from ontology P we can now prove that
every instance of Cystic Fibrosis EUProject is also an instance of Genetic Disorder Project:

    P ∪ Q ⊨ β := (Cystic Fibrosis EUProject ⊑ Genetic Disorder Project)     (2)

This inclusion β, however, does not follow from P alone—that is, P ⊭ β. The ontology
engineer might not be aware of Entailment (2), even though it concerns the terms of primary
scope in his projects ontology P.

It is natural to expect that entailments like α in (1) from an imported ontology Q result
in new logical consequences, like β in (2), over the terms defined in the main ontology P.
One would not expect, however, that the meaning of the terms defined in Q changes as a
consequence of the import, since these terms are supposed to be completely specified within
Q. Such a side effect would be highly undesirable for the modeling of ontology P, since the
ontology engineer of P might not be an expert on the subject of Q and is not supposed to
alter the meaning of the terms defined in Q, not even implicitly.
The meaning of the reused terms, however, might change during the import, perhaps due
to modeling errors. In order to illustrate such a situation, suppose that the ontology engineer
has learned about the concepts Genetic Disorder and Cystic Fibrosis from the ontology Q
(including the inclusion (1)) and has decided to introduce additional axioms formalizing
the following statements:

    “Every instance of Project is different from every instance of Genetic Disorder
     and every instance of Cystic Fibrosis.”                                        (3)

    “Every project that has Focus on Cystic Fibrosis, also has Focus on Genetic Disorder.”  (4)

Note that the statements (3) and (4) can be thought of as adding new information about
projects and, intuitively, they should not change or constrain the meaning of the medical
terms.

Suppose the ontology engineer has formalized the statements (3) and (4) in ontology
P using axioms E1 and E2, respectively. At this point, the ontology engineer has intro-
duced modeling errors and, as a consequence, axioms E1 and E2 do not correspond to (3)
and (4): E1 actually formalizes the following statement: “Every instance of Project is diffe-
rent from every common instance of Genetic Disorder and Cystic Fibrosis”, and E2 expresses
that “Every object that either has Focus on nothing, or has Focus only on Cystic Fibrosis,
also has Focus on Genetic Disorder”. These kinds of modeling errors are difficult to detect,
especially when they do not cause inconsistencies in the ontology.
Note that, although axiom E1 does not correspond to fact (3), it is still a consequence
of (3), which means that it should not constrain the meaning of the medical terms. On the
other hand, E2 is not a consequence of (4) and, in fact, it constrains the meaning of the
medical terms. Indeed, the axioms E1 and E2 together with axioms P1–P4 from P imply new
axioms about the concepts Cystic Fibrosis and Genetic Disorder, namely their disjointness:

    P ⊨ γ := (Genetic Disorder ⊓ Cystic Fibrosis ⊑ ⊥)     (5)

The entailment (5) can be proved using axiom E2, which is equivalent to:

    ⊤ ⊑ ∃has Focus.(Genetic Disorder ⊔ ¬Cystic Fibrosis)     (6)

The inclusion (6) and P4 imply that every element in the domain must be a project—that
is, P ⊨ (⊤ ⊑ Project). Now, together with axiom E1, this implies (5).
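In more detail, E2 = (∀has Focus.Cystic Fibrosis ⊑ ∃has Focus.Genetic Disorder) is equivalent to ⊤ ⊑ ¬∀has Focus.Cystic Fibrosis ⊔ ∃has Focus.Genetic Disorder, that is, to ⊤ ⊑ ∃has Focus.¬Cystic Fibrosis ⊔ ∃has Focus.Genetic Disorder, and hence, since ∃R.C ⊔ ∃R.D ≡ ∃R.(C ⊔ D), to (6). From (⊤ ⊑ Project) and E1 we then obtain Genetic Disorder ⊓ Cystic Fibrosis ⊑ Project ⊓ Genetic Disorder ⊓ Cystic Fibrosis ⊑ ⊥, which is (5).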
The axioms E1 and E2 not only imply new statements about the medical terms, but also
cause inconsistencies when used together with the imported axioms from Q. Indeed, from
(1) and (5) we obtain P ∪ Q ⊨ δ := (Cystic Fibrosis ⊑ ⊥), which expresses the inconsistency
of the concept Cystic Fibrosis.
To summarize, we have seen that importing an external ontology can lead to undesirable
side effects in our knowledge reuse scenario, like the entailment of new axioms or even incon-
sistencies involving the reused vocabulary. In the next section we discuss how to formalize
the effects we consider undesirable.
3.2 Conservative Extensions and Safety for an Ontology
As argued in the previous section, an important requirement for the reuse of an ontology O
within an ontology P should be that P ∪ O produces exactly the same logical consequences
over the vocabulary of O as O alone does. This requirement can be naturally formulated
using the well-known notion of a conservative extension, which has recently been investi-
gated in the context of ontologies (Ghilardi, Lutz, & Wolter, 2006; Lutz et al., 2007).
Definition 1 (Deductive Conservative Extension). Let O and O₁ ⊆ O be two L-
ontologies, and S a signature over L. We say that O is a deductive S-conservative extension
of O₁ w.r.t. L, if for every axiom α over L with Sig(α) ⊆ S, we have O ⊨ α iff O₁ ⊨ α.
We say that O is a deductive conservative extension of O₁ w.r.t. L if O is a deductive
S-conservative extension of O₁ w.r.t. L for S = Sig(O₁). ♦
In other words, an ontology O is a deductive S-conservative extension of O₁ for a
signature S and language L if and only if every logical consequence α of O constructed
using the language L and symbols only from S is already a logical consequence of O₁; that
is, the additional axioms in O \ O₁ do not result in new logical consequences over the
vocabulary S. Note that if O is a deductive S-conservative extension of O₁ w.r.t. L, then
O is a deductive S₁-conservative extension of O₁ w.r.t. L for every S₁ ⊆ S.
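As a simple illustration (with A, B, C fresh atomic concepts, not part of the running example), let O₁ = {A ⊑ B} and O = O₁ ∪ {B ⊑ C}. Then O is a deductive {A, B}-conservative extension of O₁: every model of O₁ becomes a model of O by interpreting C as the whole domain, which leaves A and B untouched, so no new consequences over {A, B} arise. However, O is not a deductive {A, C}-conservative extension of O₁, since O ⊨ (A ⊑ C) whereas O₁ ⊭ (A ⊑ C).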
The notion of a deductive conservative extension can be directly applied to our ontology
reuse scenario.
Definition 2 (Safety for an Ontology). Given L-ontologies O and O′, we say that O
is safe for O′ (or O imports O′ in a safe way) w.r.t. L if O ∪ O′ is a deductive conservative
extension of O′ w.r.t. L. ♦
Hence, the first reasoning task relevant for our ontology reuse scenario can be formulated
as follows:
T1. given L-ontologies O and O′, determine if O is safe for O′ w.r.t. L.
We have shown in Section 3.1 that, given P consisting of axioms P1–P4, E1, E2, and Q
consisting of axioms M1–M5 from Figure 1, there exists an axiom δ = (Cystic Fibrosis ⊑ ⊥)
that uses only symbols in Sig(Q) such that Q ⊭ δ but P ∪ Q ⊨ δ. According to Definition 1,
this means that P ∪ Q is not a deductive conservative extension of Q w.r.t. any language
L in which δ can be expressed (e.g. L = ALC). It is possible, however, to show that if the
axiom E2 is removed from P, then for the resulting ontology P₁ = P \ {E2}, P₁ ∪ Q is a
deductive conservative extension of Q. The following notion is useful for proving deductive
conservative extensions:
Definition 3 (Model Conservative Extension, Lutz et al., 2007).
Let O and O₁ ⊆ O be two L-ontologies and S a signature over L. We say that O is a model
S-conservative extension of O₁, if for every model I of O₁, there exists a model J of O
such that I|S = J|S. We say that O is a model conservative extension of O₁ if O is a model
S-conservative extension of O₁ for S = Sig(O₁). ♦
The notion of model conservative extension in Definition 3 can be seen as a semantic
counterpart for the notion of deductive conservative extension in Definition 1: the latter is
defined in terms of logical entailment, whereas the former is defined in terms of models.
Intuitively, an ontology O is a model S-conservative extension of O₁ if for every model of
O₁ one can find a model of O over the same domain which interprets the symbols from S
in the same way. The notion of model conservative extension, however, does not provide
a complete characterization of deductive conservative extensions, as given in Definition 1;
that is, this notion can be used for proving that an ontology is a deductive conservative
extension of another, but not vice versa:
Proposition 4 (Model vs. Deductive Conservative Extensions, Lutz et al., 2007)

1. For every two L-ontologies O, O₁ ⊆ O, and a signature S over L, if O is a model
S-conservative extension of O₁ then O is a deductive S-conservative extension of O₁
w.r.t. L;

2. There exist two ALC ontologies O and O₁ ⊆ O such that O is a deductive conservative
extension of O₁ w.r.t. ALC, but O is not a model conservative extension of O₁.
Example 5 Consider an ontology P₁ consisting of axioms P1–P4 and E1, and an ontology
Q consisting of axioms M1–M5 from Figure 1. We demonstrate that P₁ ∪ Q is a deductive
conservative extension of Q. According to Proposition 4, it is sufficient to show that P₁ ∪ Q
is a model conservative extension of Q; that is, for every model I of Q there exists a model
J of P₁ ∪ Q such that I|S = J|S for S = Sig(Q).

The required model J can be constructed from I as follows: take J to be identi-
cal to I except for the interpretations of the atomic concepts Genetic Disorder Project,
Cystic Fibrosis EUProject, Project, EUProject and the atomic role has Focus, all of which
we interpret in J as the empty set. It is easy to check that all the axioms P1–P4, E1 are
satisfied in J and hence J ⊨ P₁. Moreover, since the interpretation of the symbols in Q
remains unchanged, we have I|Sig(Q) = J|Sig(Q) and J ⊨ Q. Hence, P₁ ∪ Q is a model
conservative extension of Q.

For an example of Statement 2 of the Proposition, we refer the interested reader to the
literature (Lutz et al., 2007, p. 455). ♦
3.3 Safety of an Ontology for a Signature
So far, in our ontology reuse scenario we have assumed that the reused ontology Q is fixed
and that the axioms of Q are simply copied into P during the import. In practice, however, it
is often convenient to keep Q separate from P and make its axioms available on demand
via a reference. This makes it possible to continue developing P and Q independently. For
example, the ontology engineers of the projects ontology P may not be willing to depend
on a particular version of Q, or may even decide at a later time to reuse the medical terms
(Cystic Fibrosis and Genetic Disorder) from another medical ontology instead. Therefore, for
many application scenarios it is important to develop a stronger safety condition for P
which depends as little as possible on the particular ontology Q to be reused. In order to
formulate such a condition, we abstract from the particular ontology Q to be imported and
focus instead on the symbols from Q that are to be reused:
Definition 6 (Safety for a Signature). Let O be an L-ontology and S a signature over L.
We say that O is safe for S w.r.t. L, if for every L-ontology O′ with Sig(O) ∩ Sig(O′) ⊆ S,
we have that O is safe for O′ w.r.t. L; that is, O ∪ O′ is a deductive conservative extension
of O′ w.r.t. L. ♦
Intuitively, in our knowledge reuse scenario, an ontology O is safe for a signature S w.r.t.
a language L if and only if it imports in a safe way any ontology O′ written in L that shares
only symbols from S with O. The associated reasoning problem is then formulated in the
following way:

T2. given an L-ontology O and a signature S over L,
determine if O is safe for S w.r.t. L.
As seen in Section 3.2, the ontology P consisting of axioms P1–P4, E1, E2 from Figure 1
does not import Q, consisting of axioms M1–M5 from Figure 1, in a safe way for L = ALC.
According to Definition 6, since Sig(P) ∩ Sig(Q) ⊆ S = {Cystic Fibrosis, Genetic Disorder},
the ontology P is not safe for S w.r.t. L.

In fact, it is possible to show a stronger result, namely, that any ontology O containing
axiom E2 is not safe for S = {Cystic Fibrosis, Genetic Disorder} w.r.t. L = ALC. Consider an
ontology O′ = {β₁, β₂}, where β₁ = (⊤ ⊑ Cystic Fibrosis) and β₂ = (Genetic Disorder ⊑ ⊥).
Since E2 is equivalent to axiom (6), it is easy to see that O ∪ O′ is inconsistent. Indeed, E2,
β₁ and β₂ imply the contradiction α = (⊤ ⊑ ⊥), which is not entailed by O′. Hence, O ∪ O′
is not a deductive conservative extension of O′. By Definition 6, since Sig(O) ∩ Sig(O′) ⊆ S,
this means that O is not safe for S.
It is clear how one could prove that an ontology O is not safe for a signature S: simply find
an ontology O′ with Sig(O) ∩ Sig(O′) ⊆ S, such that O ∪ O′ is not a deductive conservative
extension of O′. It is not so clear, however, how one could prove that O is safe for S. It
turns out that the notion of model conservative extensions can also be used for this purpose.
The following lemma introduces a property which relates the notion of model conservative
extension with the notion of safety for a signature. Intuitively, it says that the notion of
model conservative extension is stable under expansion with new axioms provided they
share only symbols from S with the original ontologies.
Lemma 7 Let O, O₁ ⊆ O, and O′ be L-ontologies and S a signature over L such that
O is a model S-conservative extension of O₁ and Sig(O) ∩ Sig(O′) ⊆ S. Then O ∪ O′ is a
model S′-conservative extension of O₁ ∪ O′ for S′ = S ∪ Sig(O′).
Proof. In order to show that O ∪ O′ is a model S′-conservative extension of O₁ ∪ O′ according
to Definition 3, let I be a model of O₁ ∪ O′. We construct a model J of O ∪ O′ such
that I|S′ = J|S′. (∗)

Since O is a model S-conservative extension of O₁, and I is a model of O₁, by Definition 3
there exists a model J₁ of O such that I|S = J₁|S. Let J be an interpretation such
that J|Sig(O) = J₁|Sig(O) and J|S′ = I|S′. Since Sig(O) ∩ S′ = Sig(O) ∩ (S ∪ Sig(O′)) ⊆
S ∪ (Sig(O) ∩ Sig(O′)) ⊆ S and I|S = J₁|S, such an interpretation J always exists. Since
J|Sig(O) = J₁|Sig(O) and J₁ ⊨ O, we have J ⊨ O; since J|S′ = I|S′, Sig(O′) ⊆ S′, and
I ⊨ O′, we have J ⊨ O′. Hence J ⊨ O ∪ O′ and I|S′ = J|S′, which proves (∗).
Lemma 7 allows us to identify a condition sufficient to ensure the safety of an ontology
for a signature:
Proposition 8 (Safety for a Signature vs. Model Conservative Extensions)
Let O be an L-ontology and S a signature over L such that O is a model S-conservative
extension of the empty ontology O₁ = ∅; that is, for every interpretation I there exists a
model J of O such that J|S = I|S. Then O is safe for S w.r.t. L.
Proof. In order to prove that O is safe for S w.r.t. L according to Definition 6, take any
SHOIQ ontology O′ such that Sig(O) ∩ Sig(O′) ⊆ S. We need to demonstrate that O ∪ O′
is a deductive conservative extension of O′ w.r.t. L. (∗)

Indeed, by Lemma 7, since O is a model S-conservative extension of O₁ = ∅, and
Sig(O) ∩ Sig(O′) ⊆ S, we have that O ∪ O′ is a model S′-conservative extension of O₁ ∪ O′ = O′
for S′ = S ∪ Sig(O′). In particular, since Sig(O′) ⊆ S′, we have that O ∪ O′ is a deductive
conservative extension of O′, as was required to prove (∗).
Example 9 Let P₁ be the ontology consisting of axioms P1–P4 and E1 from Figure 1.
We show that P₁ is safe for S = {Cystic Fibrosis, Genetic Disorder} w.r.t. L = SHOIQ. By
Proposition 8, in order to prove safety for P₁ it is sufficient to demonstrate that P₁ is a
model S-conservative extension of the empty ontology, that is, for every interpretation I
there exists a model J of P₁ such that I|S = J|S.

Consider the model J obtained from I as in Example 5. As shown in Example 5, J is
a model of P₁ and I|Sig(Q) = J|Sig(Q), where Q consists of axioms M1–M5 from Figure 1.
In particular, since S ⊆ Sig(Q), we have I|S = J|S. ♦
3.4 Extraction of Modules from Ontologies
In our example from Figure 1 the medical ontology Q is very small. Well-established medical
ontologies, however, can be very large and may describe subject matters in which the
designer of P is not interested. For example, the medical ontology Q could contain information
about genes, anatomy, surgical techniques, etc.

Even if P imports Q without changing the meaning of the reused symbols, processing—
that is, browsing, reasoning over, etc.—the resulting ontology P ∪ Q may be considerably
harder than processing P alone. Ideally, one would like to extract a (hopefully small) frag-
ment Q₁ of the external medical ontology—a module—that describes just the concepts that
are reused in P.

Intuitively, when answering an arbitrary query over the signature of P, importing the
module Q₁ should give exactly the same answers as if the whole ontology Q had been
imported.
Definition 10 (Module for an Ontology). Let O, O′ and O′₁ ⊆ O′ be L-ontologies.
We say that O′₁ is a module for O in O′ w.r.t. L, if O ∪ O′ is a deductive S-conservative
extension of O ∪ O′₁ for S = Sig(O) w.r.t. L. ♦
The task of extracting modules from imported ontologies in our ontology reuse scenario
can thus be formulated as follows:

T3. given L-ontologies O, O′,
compute a module O′₁ for O in O′ w.r.t. L.
Example 11 Consider the ontology P₁ consisting of axioms P1–P4, E1 and the ontology Q
consisting of axioms M1–M5 from Figure 1. Recall that the axiom α from (1) is a conse-
quence of axioms M1, M2 and M4 as well as of axioms M1, M3 and M4 from ontology Q.
In fact, these sets of axioms are actually minimal subsets of Q that imply α. In particular,
for the subset Q₀ of Q consisting of axioms M2, M3, M4 and M5, we have Q₀ ⊭ α. It can
be demonstrated that P₁ ∪ Q₀ is a deductive conservative extension of Q₀. In particular,
P₁ ∪ Q₀ ⊭ α. But then, according to Definition 10, Q₀ is not a module for P₁ in Q w.r.t.
L = ALC or its extensions, since P₁ ∪ Q is not a deductive S-conservative extension of
P₁ ∪ Q₀ w.r.t. L = ALC for S = Sig(P₁). Indeed, P₁ ∪ Q ⊨ α, Sig(α) ⊆ S but P₁ ∪ Q₀ ⊭ α.
Similarly, one can show that any subset of Q that does not imply α is not a module in Q
w.r.t. P₁.

On the other hand, it is possible to show that both subsets Q₁ = {M1, M2, M4} and
Q₂ = {M1, M3, M4} of Q are modules for P₁ in Q w.r.t. L = SHOIQ. To do so, according
to Definition 10, we need to demonstrate that P₁ ∪ Q is a deductive S-conservative extension of
both P₁ ∪ Q₁ and P₁ ∪ Q₂ for S = Sig(P₁) w.r.t. L. As usual, we demonstrate a stronger
fact, namely that P₁ ∪ Q is a model S-conservative extension of P₁ ∪ Q₁ and of P₁ ∪ Q₂ for S,
which is sufficient by Claim 1 of Proposition 4.

In order to show that P₁ ∪ Q is a model S-conservative extension of P₁ ∪ Q₁ for S =
Sig(P₁), consider any model I of P₁ ∪ Q₁. We need to construct a model J of P₁ ∪ Q
such that I|S = J|S. Let J be defined exactly as I except that the interpretations of the
atomic concepts Fibrosis, Pancreas, Genetic Fibrosis, Genetic Origin, DEFBI Gene, and
Immuno Protein Gene are defined as the interpretation of Cystic Fibrosis in I, and the
interpretations of the atomic roles located In, has Origin, and associated With are defined as
the identity relation. It is easy to see that axioms M1–M3 and M5
are satisfied in J. Since we do not modify the interpretation of the symbols in P₁, J also
satisfies all axioms from P₁. Moreover, J is a model of M4, because Genetic Fibrosis and
Genetic Disorder are interpreted in J like Cystic Fibrosis and Genetic Disorder in I, and I is
a model of Q₁, which implies the concept inclusion α = (Cystic Fibrosis ⊑ Genetic Disorder).
Hence we have constructed a model J of P₁ ∪ Q such that I|S = J|S, thus P₁ ∪ Q is a
model S-conservative extension of P₁ ∪ Q₁.

In fact, the construction above works if we replace Q₁ with any subset of Q that implies
α. In particular, P₁ ∪ Q is also a model S-conservative extension of P₁ ∪ Q₂. In this way, we
have demonstrated that the modules for P₁ in Q are exactly those subsets of Q that imply
α. ♦
An algorithm implementing the task T3 can be used for extracting a module from an
ontology O imported into P prior to performing reasoning over the terms in P. However,
when the ontology P is modified, the module has to be extracted again, since a module O₁
for P in O might not necessarily be a module for the modified ontology. Since the extraction
of modules is potentially an expensive operation, it would be more convenient to extract a
module only once and reuse it for any version of the ontology P that reuses the specified
symbols from O. This idea motivates the following definition:
Definition 12 (Module for a Signature). Let O′ and O′₁ ⊆ O′ be L-ontologies and S
a signature over L. We say that O′₁ is a module for S in O′ (or an S-module in O′) w.r.t.
L, if for every L-ontology O with Sig(O) ∩ Sig(O′) ⊆ S, we have that O′₁ is a module for O
in O′ w.r.t. L. ♦
Intuitively, the notion of a module for a signature is a uniform analog of the notion of a
module for an ontology, in a similar way as the notion of safety for a signature is a uniform
analog of safety for an ontology. The reasoning task corresponding to Definition 12 can be
formulated as follows:
T4. given an L-ontology O′ and a signature S over L,
compute a module O′₁ for S in O′.
Continuing Example 11, it is possible to demonstrate that any subset of Q that implies
axiom α is in fact a module for S = {Cystic Fibrosis, Genetic Disorder} in Q, that is, it can
be imported instead of Q into every ontology O that shares with Q only symbols from S. In
order to prove this, we use the following sufficient condition based on the notion of model
conservative extension:
Proposition 13 (Modules for a Signature vs. Model Conservative Extensions)
Let O′ and O′₁ ⊆ O′ be L-ontologies and S a signature over L such that O′ is a model
S-conservative extension of O′₁. Then O′₁ is a module for S in O′ w.r.t. L.
Proof. In order to prove that O′₁ is a module for S in O′ w.r.t. L according to Definition 12,
take any SHOIQ ontology O such that Sig(O) ∩ Sig(O′) ⊆ S. We need to demonstrate
that O′₁ is a module for O in O′ w.r.t. L, that is, according to Definition 10, that O ∪ O′ is a
deductive S′-conservative extension of O ∪ O′₁ w.r.t. L for S′ = Sig(O). (∗)

Indeed, by Lemma 7, since O′ is a model S-conservative extension of O′₁, and Sig(O′) ∩
Sig(O) ⊆ S, we have that O′ ∪ O is a model S″-conservative extension of O′₁ ∪ O for
S″ = S ∪ Sig(O). In particular, since S′ = Sig(O) ⊆ S″, we have that O′ ∪ O is a deductive
S′-conservative extension of O′₁ ∪ O w.r.t. L, as was required to prove (∗).
Example 14 Let Q and α be as in Example 11. We demonstrate that any subset Q₁ of Q
which implies α is a module for S = {Cystic Fibrosis, Genetic Disorder} in Q. According to
Proposition 13, it is sufficient to demonstrate that Q is a model S-conservative extension of
Q₁, that is, for every model I of Q₁ there exists a model J of Q such that I|S = J|S. It is
easy to see that if the model J is constructed from I as in Example 11, then the required
property holds. ♦
Note that a module for a signature S in Q does not necessarily contain all the axioms that
contain symbols from S. For example, the module Q₁ consisting of the axioms M1, M2 and
M4 from Q does not contain axiom M5, which mentions the atomic concept Cystic Fibrosis
from S. Also note that even in a minimal module like Q₁ there might still be some axioms,
like M2, that do not mention symbols from S at all.
3.5 Minimal Modules and Essential Axioms
One is usually not interested in extracting arbitrary modules from a reused ontology, but
in extracting modules that are easy to process afterwards. Ideally, the extracted modules
should be as small as possible. Hence it is reasonable to consider the problem of extracting
minimal modules; that is, modules that contain no other module as a subset.
Examples 11 and 14 demonstrate that a minimal module for an ontology or a signature
is not necessarily unique: the ontology Q consisting of axioms M1–M5 has two minimal mo-
dules, Q₁ = {M1, M2, M4} and Q₂ = {M1, M3, M4}, for the ontology P₁ = {P1, P2, P3, P4, E1}
as well as for the signature S = {Cystic Fibrosis, Genetic Disorder}, since these are the min-
imal sets of axioms that imply axiom α = (Cystic Fibrosis ⊑ Genetic Disorder). Depending
on the application scenario, one can consider several variations of tasks T3 and T4 for com-
puting minimal modules. In some applications it might be necessary to extract all minimal
modules, whereas in others any minimal module suffices.
Axioms that do not occur in a minimal module in O are not essential for P because
they can always be removed from every module of O; thus they never need to be imported
into P. This is not true for the axioms that occur in minimal modules of O. It might be
necessary to import such axioms into P in order not to lose essential information from O.
These arguments motivate the following notion:
Definition 15 (Essential Axiom). Let O and O′ be L-ontologies, S a signature and α
an axiom over L. We say that α is essential for O in O′ w.r.t. L if α is contained in some
minimal module for O in O′ w.r.t. L. We say that α is an essential axiom for S in O′ w.r.t.
L (or S-essential in O′) if α is contained in some minimal module for S in O′ w.r.t. L. ♦
In our example above, the axioms M1–M4 from Q are essential for the ontology P₁ and
for the signature S = {Cystic Fibrosis, Genetic Disorder}, but the axiom M5 is not essential.
In certain situations one might be interested in computing the set of essential axioms of
an ontology, which can be done by computing the union of all minimal modules. Note that
computing the union of minimal modules might be easier than computing all the minimal
modules, since one does not need to identify which axiom belongs to which minimal module.

In Table 2 we have summarized the reasoning tasks we found to be potentially relevant
for ontology reuse scenarios and have included the variants T3am, T3sm, and T3um of the
task T3 and T4am, T4sm, and T4um of the task T4 for computation of minimal modules
discussed in this section.

Notation         Input        Task
Checking safety:
T1               O, O′, L     Check if O is safe for O′ w.r.t. L
T2               O, S, L      Check if O is safe for S w.r.t. L
Extraction of [all / some / union of] [minimal] module(s):
T3[a,s,u][m]     O, O′, L     Extract modules in O′ w.r.t. O and L
T4[a,s,u][m]     O′, S, L     Extract modules for S in O′ w.r.t. L
where O, O′ are ontologies and S a signature over L

Table 2: Summary of reasoning tasks relevant for ontology integration and reuse
Further variants of the tasks T3 and T4 could be considered relevant to ontology reuse.
For example, instead of computing minimal modules, one might be interested in computing
modules with the smallest number of axioms, or modules of the smallest size measured in
the number of symbols, or in any other complexity measure of the ontology. The theoretical
results that we will present in this paper can easily be extended to many such reasoning
tasks.
3.6 Safety and Modules for General Ontology Languages
All the notions introduced in Section 3 are defined with respect to an “ontology language”.
So far, however, we have implicitly assumed that the ontology languages are the description
logics defined in Section 2—that is, the fragments of the DL SHOIQ. The notions consid-
ered in Section 3 can be applied, however, to a much broader class of ontology languages.
Our definitions apply to any ontology language with a notion of entailment of axioms from
ontologies, and a mechanism for identifying their signatures.
Definition 16. An ontology language is a tuple L = (Sg, Ax, Sig, ⊨), where Sg is a set
of signature elements (or vocabulary) of L, Ax is a set of axioms in L, Sig is a function
that assigns to every axiom α ∈ Ax a finite set Sig(α) ⊆ Sg called the signature of α, and
⊨ is the entailment relation between sets of axioms O ⊆ Ax and axioms α ∈ Ax, written
O ⊨ α. An ontology over L is a finite set of axioms O ⊆ Ax. We extend the function Sig
to ontologies O as follows: Sig(O) := ⋃_{α ∈ O} Sig(α). ♦
Definition 16 provides a very general notion of an ontology language. A language L is
given by a set of symbols (a signature), a set of formulae (axioms) that can be constructed
over those symbols, a function that assigns to each formula its signature, and an entailment
relation between sets of axioms and axioms. The ontology language OWL DL as well as all
description logics defined in Section 2 are examples of ontology languages in accordance
with Definition 16. Other examples of ontology languages are First Order Logic, Second
Order Logic, and Logic Programs.
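To illustrate Definition 16 from a programming perspective (a minimal sketch of ours, not part of the formal development; all class, method and type names are hypothetical), an ontology language can be viewed as the following interface:

    # A minimal sketch of Definition 16 as an abstract interface (illustrative only).
    from abc import ABC, abstractmethod
    from typing import FrozenSet, Hashable, Iterable

    Symbol = Hashable   # an element of Sg
    Axiom = Hashable    # an element of Ax

    class OntologyLanguage(ABC):
        """An ontology language L = (Sg, Ax, Sig, |=) in the sense of Definition 16."""

        @abstractmethod
        def sig(self, axiom: Axiom) -> FrozenSet[Symbol]:
            """Return the finite signature Sig(axiom) of an axiom."""

        @abstractmethod
        def entails(self, ontology: Iterable[Axiom], axiom: Axiom) -> bool:
            """Decide (where possible) whether ontology |= axiom."""

        def sig_of_ontology(self, ontology: Iterable[Axiom]) -> FrozenSet[Symbol]:
            """Sig(O) is the union of the signatures of the axioms of O."""
            symbols: set = set()
            for alpha in ontology:
                symbols |= self.sig(alpha)
            return frozenset(symbols)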
It is easy to see that the notions of deductive conservative extension (Definition 1), safety
(Definitions 2 and 6) and modules (Definitions 10 and 12), as well as all the reasoning tasks
from Table 2, are well-defined for every ontology language L given by Definition 16. The
definition of model conservative extension (Definition 3) and the propositions involving
model conservative extensions (Propositions 4, 8, and 13) can also be extended to other
languages with a standard Tarski model-theoretic semantics, such as Higher Order Logic.
To simplify the presentation, however, we will not formulate general requirements on the
semantics of ontology languages, and assume that we deal with sublanguages of SHOIQ
whenever semantics is taken into account.
In the remainder of this section, we establish the relationships between the different
notions of safety and modules for arbitrary ontology languages.
Proposition 17 (Safety vs. Modules for an Ontology) Let L be an ontology language,
and let O, O′, and O′₁ ⊆ O′ be ontologies over L. Then:

1. O′ is safe for O w.r.t. L iff the empty ontology is a module for O in O′ w.r.t. L.

2. If O′ \ O′₁ is safe for O ∪ O′₁ w.r.t. L, then O′₁ is a module for O in O′ w.r.t. L.
Proof. 1. By Definition 2, O′ is safe for O w.r.t. L iff (a) O ∪ O′ is a deductive conservative
extension of O w.r.t. L. By Definition 10, the empty ontology O′₀ = ∅ is a module for O in
O′ w.r.t. L iff (b) O ∪ O′ is a deductive S-conservative extension of O ∪ O′₀ = O w.r.t. L
for S = Sig(O). It is easy to see that (a) is the same as (b).

2. By Definition 2, O′ \ O′₁ is safe for O ∪ O′₁ w.r.t. L iff (c) O ∪ O′₁ ∪ (O′ \ O′₁) = O ∪ O′
is a deductive conservative extension of O ∪ O′₁ w.r.t. L. In particular, O ∪ O′ is a deductive
S-conservative extension of O ∪ O′₁ w.r.t. L for S = Sig(O), which implies, by Definition 10,
that O′₁ is a module for O in O′ w.r.t. L.
We also provide the analog of Proposition 17 for the notions of safety and modules for
a signature:
Proposition 18 (Safety vs. Modules for a Signature) Let L be an ontology language,
O′ and O′₁ ⊆ O′ ontologies over L, and S a subset of the signature of L. Then:

1. O′ is safe for S w.r.t. L iff the empty ontology O′₀ = ∅ is an S-module in O′ w.r.t. L.

2. If O′ \ O′₁ is safe for S ∪ Sig(O′₁) w.r.t. L, then O′₁ is an S-module in O′ w.r.t. L.
Proof. 1. By Definition 6, O′ is safe for S w.r.t. L iff (a) for every O with Sig(O′) ∩ Sig(O) ⊆
S, it is the case that O′ is safe for O w.r.t. L. By Definition 12, O′₀ = ∅ is an S-module
in O′ w.r.t. L iff (b) for every O with Sig(O′) ∩ Sig(O) ⊆ S, it is the case that O′₀ = ∅ is
a module for O in O′ w.r.t. L. By Claim 1 of Proposition 17, it is easy to see that (a) is
equivalent to (b).

2. By Definition 6, O′ \ O′₁ is safe for S ∪ Sig(O′₁) w.r.t. L iff (c) for every O with
Sig(O′ \ O′₁) ∩ Sig(O) ⊆ S ∪ Sig(O′₁), we have that O′ \ O′₁ is safe for O w.r.t. L. By
Definition 12, O′₁ is an S-module in O′ w.r.t. L iff (d) for every O with Sig(O′) ∩ Sig(O) ⊆ S,
we have that O′₁ is a module for O in O′ w.r.t. L.

In order to prove that (c) implies (d), let O be such that Sig(O′) ∩ Sig(O) ⊆ S. We need
to demonstrate that O′₁ is a module for O in O′ w.r.t. L. (∗)

Let Õ := O ∪ O′₁. Note that Sig(O′ \ O′₁) ∩ Sig(Õ) = Sig(O′ \ O′₁) ∩ (Sig(O) ∪ Sig(O′₁)) ⊆
(Sig(O′ \ O′₁) ∩ Sig(O)) ∪ Sig(O′₁) ⊆ S ∪ Sig(O′₁). Hence, by (c) we have that O′ \ O′₁ is safe
for Õ = O ∪ O′₁ w.r.t. L, which implies by Claim 2 of Proposition 17 that O′₁ is a module
for O in O′ w.r.t. L, which proves (∗).
4. Undecidability and Complexity Results
In this section we study the computational properties of the tasks in Table 2 for ontology
languages that correspond to fragments of the description logic SHOIQ. We demonstrate
that most of these reasoning tasks are algorithmically unsolvable even for relatively inex-
pressive DLs, and are computationally hard for simple DLs.

Since the notions of modules and safety are defined in Section 3 using the notion of
deductive conservative extension, it is reasonable to identify which (un)decidability and
complexity results for conservative extensions are applicable to the reasoning tasks in Ta-
ble 2. The computational properties of conservative extensions have recently been studied
in the context of description logics. Given O₁ ⊆ O over a language L, the problem of
deciding whether O is a deductive conservative extension of O₁ w.r.t. L is 2-EXPTIME-
complete for L = ALC (Ghilardi et al., 2006). This result was extended by Lutz et al.
(2007), who showed that the problem is 2-EXPTIME-complete for L = ALCIQ and unde-
cidable for L = ALCIQO. Recently, the problem has also been studied for simple DLs; it
has been shown that deciding deductive conservative extensions is EXPTIME-complete for
L = EL (Lutz & Wolter, 2007). These results can be immediately applied to the notions of
safety and modules for an ontology:
Proposition 19 Given ontologies O and O′ over L, the problem of determining whether
O is safe for O′ w.r.t. L is EXPTIME-complete for L = EL, 2-EXPTIME-complete for
L = ALC and L = ALCIQ, and undecidable for L = ALCIQO. Given ontologies O, O′,
and O′₁ ⊆ O′ over L, the problem of determining whether O′₁ is a module for O in O′ is
EXPTIME-complete for L = EL, 2-EXPTIME-complete for L = ALC and L = ALCIQ,
and undecidable for L = ALCIQO.
Proof. By Definition 2, an ontology O is safe for O′ w.r.t. L iff O ∪ O′ is a deductive
conservative extension of O′ w.r.t. L. By Definition 10, an ontology O′₁ is a module for O in
O′ w.r.t. L iff O ∪ O′ is a deductive S-conservative extension of O ∪ O′₁ for S = Sig(O) w.r.t.
L. Hence, any algorithm for checking deductive conservative extensions can be reused for
checking safety and modules.

Conversely, we demonstrate that any algorithm for checking safety or modules can be
used for checking deductive conservative extensions. Indeed, O is a deductive conservative
extension of O₁ ⊆ O w.r.t. L iff O \ O₁ is safe for O₁ w.r.t. L iff, by Claim 1 of Proposition 17,
O₀ = ∅ is a module for O₁ in O \ O₁ w.r.t. L.
Corollary 20 There exist algorithms performing tasks T1 and T3[a,s,u]m from Table 2
for L = EL and L = ALCIQ that run in EXPTIME and 2-EXPTIME, respectively. There
is no algorithm performing tasks T1 or T3[a,s,u]m from Table 2 for L = ALCIQO.

Proof. The task T1 corresponds directly to the problem of checking the safety of an ontology,
as given in Definition 2.

Suppose that we have a 2-EXPTIME algorithm that, given ontologies O, O′ and O′₁ ⊆
O′, determines whether O′₁ is a module for O in O′ w.r.t. L = ALCIQ. We demonstrate
that this algorithm can be used to solve the reasoning tasks T3am, T3sm and T3um for
L = ALCIQ in 2-EXPTIME. Indeed, given ontologies O and O′, one can enumerate all
subsets of O′ and check in 2-EXPTIME which of these subsets are modules for O in O′
w.r.t. L. Then we can determine which of these modules are minimal and return all of them,
one of them, or the union of them, depending on the reasoning task.

Finally, we prove that solving any of the reasoning tasks T3am, T3sm and T3um is not
easier than checking the safety of an ontology. Indeed, by Claim 1 of Proposition 17, an
ontology O is safe for O′ w.r.t. L iff O₀ = ∅ is a module for O′ in O w.r.t. L. Note that
the empty ontology O₀ = ∅ is a module for O′ in O w.r.t. L iff O₀ = ∅ is the only minimal
module for O′ in O w.r.t. L.
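For illustration only, the enumeration argument in the proof can be phrased as the following sketch (ours, not one of the modularization algorithms developed later in the paper); is_module is a hypothetical oracle deciding whether a candidate subset is a module for O in O′ w.r.t. L, for instance a decision procedure for deductive conservative extensions in ALCIQ:

    # Brute-force sketch of tasks T3am/T3sm/T3um (illustrative only).
    # 'is_module(o, candidate, oprime)' is an assumed oracle for Definition 10;
    # axioms are assumed to be hashable objects.
    from itertools import combinations

    def minimal_modules(o, oprime, is_module):
        """Return all subset-minimal modules for o in oprime, as frozensets of axioms."""
        axioms = list(oprime)
        found = []
        for k in range(len(axioms) + 1):          # enumerate candidates by size
            for subset in combinations(axioms, k):
                candidate = frozenset(subset)
                # A proper superset of a module found earlier cannot be minimal.
                if any(module <= candidate for module in found):
                    continue
                if is_module(o, candidate, oprime):
                    found.append(candidate)
        return found

    def union_of_minimal_modules(o, oprime, is_module):
        """Task T3um: the union of all minimal modules (the essential axioms)."""
        union = frozenset()
        for module in minimal_modules(o, oprime, is_module):
            union |= module
        return union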
We have demonstrated that the reasoning tasks T1 and T3[a,s,u]m are computationally
unsolvable for DLs that are as expressive as ALCIQO, and are 2-EXPTIME-hard for ALC.
In the remainder of this section, we focus on the computational properties of the reasoning
tasks T2 and T4[a,s,u]m related to the notions of safety and modules for a signature. We
demonstrate that all of these reasoning tasks are undecidable for DLs that are as expressive
as ALCO.

Theorem 21 (Undecidability of Safety for a Signature) The problem of checking
whether an ontology O consisting of a single ALC-axiom is safe for a signature S is unde-
cidable w.r.t. L = ALCO.

Proof. The proof is a variation of the construction for undecidability of deductive conser-
vative extensions in ALCIQO (Lutz et al., 2007), based on a reduction from a domino tiling
problem.
A domino system is a triple D = (T, H, V) where T = {1, . . . , k} is a finite set of tiles
and H, V ⊆ T × T are horizontal and vertical matching relations. A solution for a domino
system D is a mapping t_{i,j} that assigns to every pair of integers i, j ≥ 1 an element of T,
such that ⟨t_{i,j}, t_{i,j+1}⟩ ∈ H and ⟨t_{i,j}, t_{i+1,j}⟩ ∈ V. A periodic solution for a domino system
D is a solution t_{i,j} for which there exist integers m ≥ 1, n ≥ 1, called periods, such that
t_{i+m,j} = t_{i,j} and t_{i,j+n} = t_{i,j} for every i, j ≥ 1.
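For example, the domino system D = ({1}, {⟨1, 1⟩}, {⟨1, 1⟩}), consisting of a single tile that matches itself both horizontally and vertically, admits the periodic solution t_{i,j} = 1 with periods m = n = 1, whereas the system ({1}, ∅, ∅) admits no solution at all.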
Let T be the set of all domino systems, T_s be the subset of T consisting of those systems
that admit a solution, and T_ps be the subset of T_s consisting of those that admit a periodic
solution. It is well known (Börger, Grädel, & Gurevich, 1997, Theorem 3.1.7) that the sets
T \ T_s and T_ps are recursively inseparable, that is, there is no recursive (i.e. decidable)
subset T′ ⊆ T of domino systems such that T_ps ⊆ T′ ⊆ T_s.

We use this property in our reduction. For each domino system D, we construct a
signature S = S(D) and an ontology O = O(D) which consists of a single ALC-axiom such
that:
290
Modular Reuse of Ontologies: Theory and Practice
(q
1
) · _ A
1
. . A
k
where T = ¦1, . . . , k¦
(q
2
) A
t
¯ A
t
_ ⊥ 1 ≤ t < t
t
≤ k
(q
3
) A
t
_ ∃r
H
.(

/t,t

)∈H
A
t
) 1 ≤ t ≤ k
(q
4
) A
t
_ ∃r
V
.(

/t,t

)∈V
A
t
) 1 ≤ t ≤ k
Figure 2: An ontology O
tile
= O
tile
(D) expressing tiling conditions for a domino system D
(a) if D does not have a solution then O = O(D) is safe for S = S(D) w.r.t. L = /L(O,
and
(b) if D has a periodic solution then O = O(D) is not safe for S = S(D) w.r.t. L = /L(O.
In other words, for the set T′ consisting of the domino systems D such that O = O(D) is not safe for S = S(D) w.r.t. L = ALCO, we have T^{ps} ⊆ T′ ⊆ T^s. Since T \ T^s and T^{ps} are recursively inseparable, this implies undecidability for T′ and hence for the problem of checking whether O is S-safe w.r.t. L = ALCO, because otherwise one could use this problem for deciding membership in T′.
The signature S = S(D) and the ontology O = O(D) are constructed as follows. Given a domino system D = (T, H, V), let S consist of fresh atomic concepts A_t for every t ∈ T and two atomic roles r_H and r_V. Consider the ontology O_tile in Figure 2 constructed for D. Note that Sig(O_tile) = S. The axioms of O_tile express the tiling conditions for the domino system D: (q1) and (q2) express that every domain element is assigned a unique tile t ∈ T; (q3) and (q4) express that every domain element has horizontal and vertical matching successors.
Now let s be an atomic role and B an atomic concept such that s, B ∉ S. Let O := {β} where:

β := ⊤ ⊑ ∃s.( ⊔_{(C_i ⊑ D_i) ∈ O_tile} (C_i ⊓ ¬D_i)  ⊔  (∃r_H.∃r_V.B ⊓ ∃r_V.∃r_H.¬B) )

We say that r_H and r_V commute in an interpretation I = (Δ^I, ·^I) if for all domain elements a, b, c, d_1 and d_2 from Δ^I such that ⟨a, b⟩ ∈ r_H^I, ⟨b, d_1⟩ ∈ r_V^I, ⟨a, c⟩ ∈ r_V^I, and ⟨c, d_2⟩ ∈ r_H^I, we have d_1 = d_2. The following claims can be easily proved:
Claim 1. If O_tile(D) has a model I in which r_H and r_V commute, then D has a solution.
Indeed, a model I = (Δ^I, ·^I) of O_tile(D) can be used to guide the construction of a solution t_{i,j} for D as follows. For every i, j ≥ 1, we construct t_{i,j} inductively together with elements a_{i,j} ∈ Δ^I such that ⟨a_{i,j}, a_{i,j+1}⟩ ∈ r_H^I and ⟨a_{i,j}, a_{i+1,j}⟩ ∈ r_V^I. We set a_{1,1} to be any element of Δ^I.
Now suppose a_{i,j} with i, j ≥ 1 is constructed. Since I is a model of axioms (q1) and (q2) from Figure 2, there is a unique A_t with 1 ≤ t ≤ k such that a_{i,j} ∈ A_t^I. Then we set t_{i,j} := t. Since I is a model of axioms (q3) and (q4) and a_{i,j} ∈ A_t^I, there exist b, c ∈ Δ^I and t′, t″ ∈ T such that ⟨a_{i,j}, b⟩ ∈ r_H^I, ⟨a_{i,j}, c⟩ ∈ r_V^I, ⟨t, t′⟩ ∈ H, ⟨t, t″⟩ ∈ V, b ∈ A_{t′}^I, and c ∈ A_{t″}^I. In this case we assign a_{i,j+1} := b, a_{i+1,j} := c, t_{i,j+1} := t′, and t_{i+1,j} := t″.
Note that some values of a_{i,j} and t_{i,j} can be assigned twice: a_{i+1,j+1} and t_{i+1,j+1} are constructed both for a_{i,j+1} and for a_{i+1,j}. However, since r_V and r_H commute in I, the value for a_{i+1,j+1} is unique, and because of (q2), the value for t_{i+1,j+1} is unique. It is easy to see that t_{i,j} is a solution for D.
Claim 2. If I is a model of O_tile ∪ O, then r_H and r_V do not commute in I.
Indeed, it is easy to see that O_tile ∪ O ⊨ (⊤ ⊑ ∃s.[∃r_H.∃r_V.B ⊓ ∃r_V.∃r_H.¬B]). Hence, if I = (Δ^I, ·^I) is a model of O_tile ∪ O, then there exist a, b, c, d_1 and d_2 such that ⟨x, a⟩ ∈ s^I for every x ∈ Δ^I, ⟨a, b⟩ ∈ r_H^I, ⟨b, d_1⟩ ∈ r_V^I, d_1 ∈ B^I, ⟨a, c⟩ ∈ r_V^I, ⟨c, d_2⟩ ∈ r_H^I, and d_2 ∈ (¬B)^I. This implies that d_1 ≠ d_2, and so r_H and r_V do not commute in I.
Finally, we demonstrate that O = O(D) satisfies properties (a) and (b).
In order to prove property (a), we use the sufficient condition for safety given in Proposition 8 and demonstrate that if D has no solution then for every interpretation I there exists a model J of O such that J|_S = I|_S. By Proposition 8, this will imply that O is safe for S w.r.t. L.
Let I be an arbitrary interpretation. Since D has no solution, by the contrapositive of Claim 1, either (1) I is not a model of O_tile, or (2) r_H and r_V do not commute in I. We demonstrate for both of these cases how to construct the required model J of O such that J|_S = I|_S.
Case (1). If I = (Δ^I, ·^I) is not a model of O_tile then there exists an axiom (C_i ⊑ D_i) ∈ O_tile such that I ⊭ (C_i ⊑ D_i). That is, there exists a domain element a ∈ Δ^I such that a ∈ C_i^I but a ∉ D_i^I. Let us define J to be identical to I except for the interpretation of the atomic role s, which we define in J as s^J = {⟨x, a⟩ | x ∈ Δ}. Since the interpretations of the symbols in S have remained unchanged, we have a ∈ C_i^J and a ∉ D_i^J, and so J ⊨ (⊤ ⊑ ∃s.[C_i ⊓ ¬D_i]). This implies that J ⊨ β, and so we have constructed a model J of O such that J|_S = I|_S.
Case (2). Suppose that r_H and r_V do not commute in I = (Δ^I, ·^I). This means that there exist domain elements a, b, c, d_1 and d_2 from Δ^I with ⟨a, b⟩ ∈ r_H^I, ⟨b, d_1⟩ ∈ r_V^I, ⟨a, c⟩ ∈ r_V^I, and ⟨c, d_2⟩ ∈ r_H^I, such that d_1 ≠ d_2. Let us define J to be identical to I except for the interpretation of the atomic role s and the atomic concept B. We interpret s in J as s^J = {⟨x, a⟩ | x ∈ Δ}. We interpret B in J as B^J = {d_1}. Note that a ∈ (∃r_H.∃r_V.B)^J and a ∈ (∃r_V.∃r_H.¬B)^J since d_1 ≠ d_2. So we have J ⊨ (⊤ ⊑ ∃s.[∃r_H.∃r_V.B ⊓ ∃r_V.∃r_H.¬B]), which implies that J ⊨ β, and thus we have constructed a model J of O such that J|_S = I|_S.
In order to prove property (b), assume that D has a periodic solution t_{i,j} with periods m, n ≥ 1. We demonstrate that O is not S-safe w.r.t. L. For this purpose we construct an ALCO-ontology O′ with Sig(O) ∩ Sig(O′) ⊆ S such that O ∪ O′ ⊨ (⊤ ⊑ ⊥), but O′ ⊭ (⊤ ⊑ ⊥). This will imply that O is not safe for O′ w.r.t. L = ALCO, and, hence, is not safe for S w.r.t. L = ALCO.
We define O′ such that every model of O′ is a finite encoding of the periodic solution t_{i,j} with periods m and n. For every pair (i, j) with 1 ≤ i ≤ m and 1 ≤ j ≤ n we introduce a fresh individual a_{i,j} and define O′ to be the extension of O_tile with the following axioms:

(p1)  {a_{i_1,j}} ⊑ ∃r_V.{a_{i_2,j}}      (p2)  {a_{i_1,j}} ⊑ ∀r_V.{a_{i_2,j}},      i_2 = i_1 + 1 mod m
(p3)  {a_{i,j_1}} ⊑ ∃r_H.{a_{i,j_2}}      (p4)  {a_{i,j_1}} ⊑ ∀r_H.{a_{i,j_2}},      j_2 = j_1 + 1 mod n
(p5)  ⊤ ⊑ ⊔_{1≤i≤m, 1≤j≤n} {a_{i,j}}
The purpose of axioms (p1)–(p5) is to ensure that r_H and r_V commute in every model of O′. It is easy to see that O′ has a model corresponding to every periodic solution for D with periods m and n; hence O′ ⊭ (⊤ ⊑ ⊥). On the other hand, by Claim 2, since O′ contains O_tile, r_H and r_V do not commute in any model of O′ ∪ O. This is only possible if O′ ∪ O has no models, so O′ ∪ O ⊨ (⊤ ⊑ ⊥).
A direct consequence of Theorem 21 and Proposition 18 is the undecidability of the problem of checking whether a subset of an ontology is a module for a signature:
Corollary 22 Given a signature S and ALC-ontologies O′ and O′_1 ⊆ O′, the problem of determining whether O′_1 is an S-module in O′ w.r.t. L = ALCO is undecidable.
Proof. By Claim 1 of Proposition 18, O is S-safe w.r.t. L iff O_0 = ∅ is an S-module in O w.r.t. L. Hence any algorithm for recognizing modules for a signature in L can be used for checking whether an ontology is safe for a signature in L.
Corollary 23 There is no algorithm that can perform tasks T2 or T4[a,s,u]m for L = ALCO.
Proof. Theorem 21 directly implies that there is no algorithm for task T2, since this task corresponds to the problem of checking safety for a signature.
Solving any of the reasoning tasks T4am, T4sm, or T4um for L is at least as hard as checking the safety of an ontology, since, by Claim 1 of Proposition 18, an ontology O is S-safe w.r.t. L iff O_0 = ∅ is (the only minimal) S-module in O w.r.t. L.
5. Sufficient Conditions for Safety
Theorem 21 establishes the undecidability of checking whether an ontology expressed in OWL DL is safe w.r.t. a signature. This undecidability result is discouraging and leaves us with two alternatives: first, we could focus on simple DLs for which this problem is decidable; alternatively, we could look for sufficient conditions for the notion of safety—that is, if an ontology satisfies our conditions, then we can guarantee that it is safe; the converse, however, does not necessarily hold.
The remainder of this paper focuses on the latter approach. Before we go any further, however, it is worth noting that Theorem 21 still leaves room for investigating the former approach. Indeed, safety may still be decidable for weaker description logics, such as EL, or even for very expressive logics such as SHIQ. In the case of SHIQ, however, existing results (Lutz et al., 2007) strongly indicate that checking safety is likely to be exponentially harder than reasoning, and practical algorithms may be hard to design. This said, in what follows we focus on defining sufficient conditions for safety that we can use in practice, and restrict ourselves to OWL DL—that is, SHOIQ—ontologies.
5.1 Safety Classes
In general, any sufficient condition for safety can be defined by giving, for each signature
S, the set of ontologies over a language that satisfy the condition for that signature. These
ontologies are then guaranteed to be safe for the signature under consideration. These
intuitions lead to the notion of a safety class.
Definition 24 (Class of Ontologies, Safety Class). A class of ontologies for a language L is a function O(·) that assigns to every subset S of the signature of L a set O(S) of ontologies in L. A class O(·) is anti-monotonic if S_1 ⊆ S_2 implies O(S_2) ⊆ O(S_1); it is compact if O ∈ O(S ∩ Sig(O)) for each O ∈ O(S); it is subset-closed if O_1 ⊆ O_2 and O_2 ∈ O(S) implies O_1 ∈ O(S); and it is union-closed if O_1 ∈ O(S) and O_2 ∈ O(S) implies (O_1 ∪ O_2) ∈ O(S).
A safety class (also called a sufficient condition for safety) for an ontology language L is a class of ontologies O(·) for L such that, for each S, (i) ∅ ∈ O(S), and (ii) each ontology O ∈ O(S) is S-safe for L. ♦
Intuitively, a class of ontologies is a collection of sets of ontologies parameterized by a
signature. A safety class represents a sufficient condition for safety: each ontology in O(S)
should be safe for S. Also, w.l.o.g., we assume that the empty ontology ∅ belongs to
every safety class for every signature. In what follows, whenever an ontology O belongs to
a safety class for a given signature and the safety class is clear from the context, we will
sometimes say that O passes the safety test for S.
Safety classes may enjoy several natural properties, as given in Definition 24. Anti-monotonicity intuitively means that if an ontology O can be proved to be safe w.r.t. S using the sufficient condition, then O can be proved to be safe w.r.t. every subset of S. Compactness means that it suffices to consider only the common elements of Sig(O) and S when checking safety. Subset-closure (respectively, union-closure) means that if O (respectively, O_1 and O_2) satisfies the sufficient condition for safety, then every subset of O (respectively, the union of O_1 and O_2) also satisfies this condition.
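To make these closure properties concrete, the following is a minimal sketch (not taken from the paper) of how a safety class and two of its properties could be modelled programmatically; here an ontology is a frozenset of opaque axiom objects, a signature is a set of symbol names, and a safety class is simply a membership test member(O, S)—all of these representational choices are our own illustrative assumptions.

    from typing import Callable, FrozenSet, Set

    Axiom = str                        # placeholder: axioms are treated as opaque objects here
    Ontology = FrozenSet[Axiom]
    Signature = Set[str]

    # member(O, S) == True is read as "O belongs to O(S)", i.e. O passes the safety test for S.
    SafetyClass = Callable[[Ontology, Signature], bool]

    def union_closed_on(member: SafetyClass, o1: Ontology, o2: Ontology, s: Signature) -> bool:
        """Check the union-closure condition of Definition 24 on one concrete instance."""
        if member(o1, s) and member(o2, s):
            return member(o1 | o2, s)
        return True   # the condition is vacuously satisfied

    def anti_monotonic_on(member: SafetyClass, o: Ontology, s1: Signature, s2: Signature) -> bool:
        """Check anti-monotonicity (S1 ⊆ S2 and O ∈ O(S2) imply O ∈ O(S1)) on one instance."""
        if s1 <= s2 and member(o, s2):
            return member(o, s1)
        return True

Such per-instance checks cannot, of course, establish a property for all inputs; they merely illustrate how the definitions read operationally.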
5.2 Locality
In this section we introduce a family of safety classes for L = SHOIQ that is based on the semantic properties underlying the notion of model conservative extensions. In Section 3 we have seen that, according to Proposition 8, one way to prove that O is S-safe is to show that O is a model S-conservative extension of the empty ontology.
The following definition formalizes the classes of ontologies, called local ontologies, for which safety can be proved using Proposition 8.
Definition 25 (Class of Interpretations, Locality). Given a SHOIQ signature S, we say that a set of interpretations I is local w.r.t. S if for every SHOIQ-interpretation I there exists an interpretation J ∈ I such that I|_S = J|_S.
A class of interpretations is a function I(·) that, given a SHOIQ signature S, returns a set of interpretations I(S); it is local if I(S) is local w.r.t. S for every S; it is monotonic if S_1 ⊆ S_2 implies I(S_1) ⊆ I(S_2); it is compact if for every S_1, S_2 and S such that (S_1 △ S_2) ∩ S = ∅ we have that I(S_1)|_S = I(S_2)|_S, where S_1 △ S_2 is the symmetric difference of the sets S_1 and S_2, defined by S_1 △ S_2 := (S_1 \ S_2) ∪ (S_2 \ S_1).
Given a class of interpretations I(·), we say that O(·) is the class of ontologies based on I(·) if, for every S, O(S) is the set of ontologies that are valid in I(S); if I(·) is local then we say that O(·) is a class of local ontologies, and for every S, every O ∈ O(S) and every α ∈ O, we say that O and α are local (based on I(·)). ♦
Example 26 Let I^{r←∅,A←∅}(·) be the class of SHOIQ interpretations defined as follows. Given a signature S, the set I^{r←∅,A←∅}(S) consists of the interpretations J such that r^J = ∅ for every atomic role r ∉ S and A^J = ∅ for every atomic concept A ∉ S. It is easy to show that I^{r←∅,A←∅}(S) is local for every S: for every interpretation I = (Δ^I, ·^I), the interpretation J = (Δ^J, ·^J) defined by Δ^J := Δ^I, r^J := ∅ for r ∉ S, A^J := ∅ for A ∉ S, and X^J := X^I for the remaining symbols X satisfies J ∈ I^{r←∅,A←∅}(S) and I|_S = J|_S. Since I^{r←∅,A←∅}(S_1) ⊆ I^{r←∅,A←∅}(S_2) for every S_1 ⊆ S_2, the class I^{r←∅,A←∅}(·) is monotonic; I^{r←∅,A←∅}(·) is also compact, since for every S_1 and S_2 the sets of interpretations I^{r←∅,A←∅}(S_1) and I^{r←∅,A←∅}(S_2) are defined differently only for elements in S_1 △ S_2.
Given a signature S, the set Ax^{r←∅,A←∅}(S) of axioms that are local w.r.t. S based on I^{r←∅,A←∅}(S) consists of all axioms α such that J ⊨ α for every J ∈ I^{r←∅,A←∅}(S). The class of local ontologies based on I^{r←∅,A←∅}(·) is then defined by O ∈ O^{r←∅,A←∅}(S) iff O ⊆ Ax^{r←∅,A←∅}(S). ♦
Proposition 27 (Locality Implies Safety) Let O(·) be a class of ontologies for SHOIQ based on a local class of interpretations I(·). Then O(·) is a subset-closed and union-closed safety class for L = SHOIQ. Additionally, if I(·) is monotonic then O(·) is anti-monotonic, and if I(·) is compact then O(·) is also compact.
Proof. Assume that O(·) is a class of ontologies based on I(·). Then, by Definition 25, for every SHOIQ signature S we have O ∈ O(S) iff O is valid in I(S) iff J ⊨ O for every interpretation J ∈ I(S). Since I(·) is a local class of interpretations, for every SHOIQ-interpretation I there exists J ∈ I(S) such that J|_S = I|_S. Hence for every O ∈ O(S) and every SHOIQ interpretation I there is a model J ∈ I(S) of O such that J|_S = I|_S, which implies, by Proposition 8, that O is safe for S w.r.t. L = SHOIQ. Thus O(·) is a safety class.
The fact that O(·) is subset-closed and union-closed follows directly from Definition 25, since (O_1 ∪ O_2) ∈ O(S) iff (O_1 ∪ O_2) is valid in I(S) iff O_1 and O_2 are valid in I(S) iff O_1 ∈ O(S) and O_2 ∈ O(S). If I(·) is monotonic then I(S_1) ⊆ I(S_2) for every S_1 ⊆ S_2, and so O ∈ O(S_2) implies that O is valid in I(S_2), which implies that O is valid in I(S_1), which implies that O ∈ O(S_1). Hence O(·) is anti-monotonic.
If I(·) is compact then for every S_1, S_2 and S such that (S_1 △ S_2) ∩ S = ∅ we have that I(S_1)|_S = I(S_2)|_S; hence for every O with Sig(O) ⊆ S we have that O is valid in I(S_1) iff O is valid in I(S_2), and so O ∈ O(S_1) iff O ∈ O(S_2). In particular, O ∈ O(S) iff O ∈ O(S ∩ Sig(O)), since (S △ (S ∩ Sig(O))) ∩ Sig(O) = (S \ Sig(O)) ∩ Sig(O) = ∅. Hence O(·) is compact.
Corollary 28 The class of ontologies O^{r←∅,A←∅}(·) defined in Example 26 is an anti-monotonic, compact, subset-closed and union-closed safety class.
Example 29 Recall that in Example 5 from Section 3 we demonstrated that the ontology P_1 ∪ O given in Figure 1, with P_1 = {P1, ..., P4, E1}, is a deductive conservative extension of O for S = {Cystic Fibrosis, Genetic Disorder}. This was done by showing that every S-interpretation I can be expanded to a model J of the axioms P1–P4, E1 by interpreting the symbols in Sig(P_1) \ S as the empty set. In terms of Example 26 this means that P_1 ∈ O^{r←∅,A←∅}(S). Since O^{r←∅,A←∅}(·) is a class of local ontologies, by Proposition 27 the ontology P_1 is safe for S w.r.t. L = SHOIQ. ♦
Proposition 27 and Example 29 suggest a particular way of proving the safety of ontologies. Given a SHOIQ ontology O and a signature S, it is sufficient to check whether O ∈ O^{r←∅,A←∅}(S); that is, whether every axiom α in O is satisfied by every interpretation from I^{r←∅,A←∅}(S). If this property holds, then O must be safe for S according to Proposition 27. It turns out that this notion provides a powerful sufficiency test for safety which works surprisingly well for many real-world ontologies, as will be shown in Section 8. In the next section we discuss how to perform this test in practice.
5.3 Testing Locality
In this section, we focus in more detail on the safety class O^{r←∅,A←∅}(·) introduced in Example 26. When ambiguity does not arise, we refer to this safety class simply as locality.^2
2. This notion of locality is exactly the one we used in our previous work (Cuenca Grau et al., 2007).
From the definition of Ax^{r←∅,A←∅}(S) given in Example 26, it is easy to see that an axiom α is local w.r.t. S (based on I^{r←∅,A←∅}(S)) if α is satisfied in every interpretation that fixes the interpretation of all atomic roles and concepts outside S to the empty set. Note that for defining locality we do not fix the interpretation of the individuals outside S, although in principle this could be done. The reason is that there is no elegant way to describe such interpretations: every individual needs to be interpreted as an element of the domain, and there is no "canonical" element of every domain to choose.
In order to test the locality of α w.r.t. S, it is sufficient to interpret every atomic concept and atomic role not in S as the empty set and then check whether α is satisfied in all interpretations of the remaining symbols. This observation suggests the following test for locality:
Proposition 30 (Testing Locality) Given a SHOIQ signature S, a concept C, an axiom α and an ontology O, let τ(C, S), τ(α, S) and τ(O, S) be defined recursively as follows:

τ(C, S) ::=  τ(⊤, S) = ⊤;  (a)
  |  τ(A, S) = ⊥ if A ∉ S and otherwise = A;  (b)
  |  τ({a}, S) = {a};  (c)
  |  τ(C_1 ⊓ C_2, S) = τ(C_1, S) ⊓ τ(C_2, S);  (d)
  |  τ(¬C_1, S) = ¬τ(C_1, S);  (e)
  |  τ(∃R.C_1, S) = ⊥ if Sig(R) ⊈ S and otherwise = ∃R.τ(C_1, S);  (f)
  |  τ((≥ n R.C_1), S) = ⊥ if Sig(R) ⊈ S and otherwise = (≥ n R.τ(C_1, S)).  (g)

τ(α, S) ::=  τ(C_1 ⊑ C_2, S) = (τ(C_1, S) ⊑ τ(C_2, S));  (h)
  |  τ(R_1 ⊑ R_2, S) = (⊥ ⊑ ⊥) if Sig(R_1) ⊈ S, otherwise
         = (∃R_1.⊤ ⊑ ⊥) if Sig(R_2) ⊈ S, otherwise = (R_1 ⊑ R_2);  (i)
  |  τ(a : C, S) = a : τ(C, S);  (j)
  |  τ(r(a, b), S) = (⊤ ⊑ ⊥) if r ∉ S and otherwise = r(a, b);  (k)
  |  τ(Trans(r), S) = (⊥ ⊑ ⊥) if r ∉ S and otherwise = Trans(r);  (l)
  |  τ(Funct(R), S) = (⊥ ⊑ ⊥) if Sig(R) ⊈ S and otherwise = Funct(R).  (m)

τ(O, S) ::=  ⋃_{α ∈ O} τ(α, S)  (n)

Then O ∈ O^{r←∅,A←∅}(S) iff every axiom in τ(O, S) is a tautology.
Proof. It is easy to check that every atomic concept A and atomic role r occurring in τ(C, S) satisfies A ∈ S and r ∈ S; in other words, all atomic concepts and roles that are not in S are eliminated by the transformation.^3 It is also easy to show by induction that for every interpretation I ∈ I^{r←∅,A←∅}(S) we have C^I = (τ(C, S))^I and that I ⊨ α iff I ⊨ τ(α, S). Hence an axiom α is local w.r.t. S iff I ⊨ α for every interpretation I ∈ I^{r←∅,A←∅}(S) iff I ⊨ τ(α, S) for every Sig(α)-interpretation I ∈ I^{r←∅,A←∅}(S) iff τ(α, S) is a tautology.
Example 31 Let O = {α} be the ontology consisting of the axiom α = M2 from Figure 1. We demonstrate using Proposition 30 that O is local w.r.t. S_1 = {Fibrosis, Genetic Origin}, but not local w.r.t. S_2 = {Genetic Fibrosis, has Origin}.
Indeed, according to Proposition 30, in order to check whether O is local w.r.t. S_1 it suffices to perform the following replacements in α (only the symbols outside S_1 are replaced):

M2:  Genetic Fibrosis ≡ Fibrosis ⊓ ∃has Origin.Genetic Origin
     ⇝  ⊥ ≡ Fibrosis ⊓ ⊥        [Genetic Fibrosis ↦ ⊥ by (b); ∃has Origin.Genetic Origin ↦ ⊥ by (f)]   (7)

Similarly, in order to check whether O is local w.r.t. S_2, it suffices to perform the following replacements in α (again, only the symbols outside S_2 are replaced):

M2:  Genetic Fibrosis ≡ Fibrosis ⊓ ∃has Origin.Genetic Origin
     ⇝  Genetic Fibrosis ≡ ⊥ ⊓ ∃has Origin.⊥        [Fibrosis ↦ ⊥ and Genetic Origin ↦ ⊥ by (b)]   (8)

In the first case we obtain τ(M2, S_1) = (⊥ ≡ Fibrosis ⊓ ⊥), which is a SHOIQ-tautology; hence O is local w.r.t. S_1 and hence, by Proposition 8, is S_1-safe w.r.t. SHOIQ. In the second case τ(M2, S_2) = (Genetic Fibrosis ≡ ⊥ ⊓ ∃has Origin.⊥), which is not a SHOIQ-tautology; hence O is not local w.r.t. S_2. ♦
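The transformation τ(·, S) of Proposition 30 is straightforward to implement over a syntax tree of concepts. The following is a minimal sketch under our own simplifying assumptions: concepts are nested tuples, roles are plain role names (inverses and role hierarchies are omitted), and the resulting τ(α, S) would still have to be handed to a DL reasoner to decide whether it is a tautology.

    # Concepts as tuples: ("top",), ("atom", "A"), ("nominal", "a"), ("not", C),
    # ("and", C1, C2), ("some", "r", C), ("atleast", n, "r", C).
    def tau(concept, sig):
        """The transformation τ(C, S) from Proposition 30, restricted to the constructors above."""
        kind = concept[0]
        if kind in ("top", "nominal"):
            return concept                                           # cases (a) and (c)
        if kind == "atom":
            return concept if concept[1] in sig else ("bottom",)     # case (b)
        if kind == "not":
            return ("not", tau(concept[1], sig))                     # case (e)
        if kind == "and":
            return ("and", tau(concept[1], sig), tau(concept[2], sig))   # case (d)
        if kind == "some":                                           # case (f)
            return ("some", concept[1], tau(concept[2], sig)) if concept[1] in sig else ("bottom",)
        if kind == "atleast":                                        # case (g)
            return ("atleast", concept[1], concept[2], tau(concept[3], sig)) if concept[2] in sig else ("bottom",)
        raise ValueError(f"unknown constructor: {kind}")

    def tau_gci(lhs, rhs, sig):
        """Case (h): a GCI C1 ⊑ C2 is translated to τ(C1, S) ⊑ τ(C2, S)."""
        return (tau(lhs, sig), tau(rhs, sig))

Applied to the two sides of M2 with S_1 above, tau yields ⊥ and Fibrosis ⊓ ⊥, matching the replacement shown in (7).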
5.4 A Tractable Approximation to Locality
One of the important conclusions of Proposition 30 is that one can use the standard capabilities of available DL reasoners such as FaCT++ (Tsarkov & Horrocks, 2006), RACER (Möller & Haarslev, 2003), Pellet (Sirin & Parsia, 2004) or KAON2 (Motik, 2006) for testing locality, since these reasoners, among other things, allow testing for DL-tautologies. Checking for tautologies in description logics is, theoretically, a difficult problem (e.g., for the DL SHOIQ it is known to be NEXPTIME-complete, Tobies, 2000). There are, however, several reasons to believe that the locality test would perform well in practice. The primary reason is that the sizes of the axioms which need to be tested for tautologies are usually relatively small compared to the sizes of ontologies. Secondly, modern DL reasoners are highly optimized for standard reasoning tasks and behave well for most realistic ontologies. In case reasoning is too costly, it is possible to formulate a tractable approximation to the locality conditions for SHOIQ:
3. Recall that the constructors ⊥, C_1 ⊔ C_2, ∀R.C, and (≤ n R.C) are assumed to be expressed using ¬, C_1 ⊓ C_2, ∃R.C and (≥ n R.C); hence, in particular, every role R′ with Sig(R′) ⊈ S occurs in O only in constructs of the form ∃R′.C, (≥ n R′.C), R′ ⊑ R, R ⊑ R′, Trans(R′), or Funct(R′), and hence will be eliminated. All atomic concepts A′ ∉ S will be eliminated likewise. Note that it is not necessarily the case that Sig(τ(α, S)) ⊆ S, since τ(α, S) may still contain individuals that do not occur in S.
Definition 32 (Syntactic Locality for SHOIQ). Let S be a signature. The following grammar recursively defines two sets of concepts Con^∅(S) and Con^Δ(S) for a signature S:

Con^∅(S) ::= A^∅ | (¬C^Δ) | (C ⊓ C^∅) | (C^∅ ⊓ C) | (∃R^∅.C) | (∃R.C^∅) | (≥ n R^∅.C) | (≥ n R.C^∅) .
Con^Δ(S) ::= ⊤ | (¬C^∅) | (C^Δ_1 ⊓ C^Δ_2) .

where A^∅ ∉ S is an atomic concept, R^∅ is (possibly the inverse of) an atomic role r^∅ ∉ S, C is any concept, R is any role, C^∅ ∈ Con^∅(S), C^Δ_{(i)} ∈ Con^Δ(S), and i ∈ {1, 2}.
An axiom α is syntactically local w.r.t. S if it is of one of the following forms: (1) R^∅ ⊑ R, or (2) Trans(r^∅), or (3) Funct(R^∅), or (4) C^∅ ⊑ C, or (5) C ⊑ C^Δ, or (6) a : C^Δ. We denote by Ãx^{r←∅,A←∅}(S) the set of all SHOIQ-axioms that are syntactically local w.r.t. S.
A SHOIQ-ontology O is syntactically local w.r.t. S if O ⊆ Ãx^{r←∅,A←∅}(S). We denote by Õ^{r←∅,A←∅}(S) the set of all SHOIQ ontologies that are syntactically local w.r.t. S. ♦
Intuitively, syntactic locality provides a simple syntactic test to ensure that an axiom is satisfied in every interpretation from I^{r←∅,A←∅}(S). It is easy to see from the inductive definitions of Con^∅(S) and Con^Δ(S) in Definition 32 that for every interpretation I = (Δ^I, ·^I) from I^{r←∅,A←∅}(S) it is the case that (C^∅)^I = ∅ and (C^Δ)^I = Δ^I for every C^∅ ∈ Con^∅(S) and C^Δ ∈ Con^Δ(S). Hence, every syntactically local axiom is satisfied in every interpretation I from I^{r←∅,A←∅}(S), and so we obtain the following conclusion:
Proposition 33 Ãx^{r←∅,A←∅}(S) ⊆ Ax^{r←∅,A←∅}(S).
Further, it can be shown that the safety class for SHOIQ based on syntactic locality enjoys all of the properties from Definition 24:
Proposition 34 The class of syntactically local ontologies Õ^{r←∅,A←∅}(·) given in Definition 32 is an anti-monotonic, compact, subset-closed, and union-closed safety class.
Proof. Õ^{r←∅,A←∅}(·) is a safety class by Proposition 33. Anti-monotonicity of Õ^{r←∅,A←∅}(·) can be shown by induction, by proving that Con^∅(S_2) ⊆ Con^∅(S_1), Con^Δ(S_2) ⊆ Con^Δ(S_1) and Ãx^{r←∅,A←∅}(S_2) ⊆ Ãx^{r←∅,A←∅}(S_1) whenever S_1 ⊆ S_2. One can also show by induction that α ∈ Ãx^{r←∅,A←∅}(S) iff α ∈ Ãx^{r←∅,A←∅}(S ∩ Sig(α)), so Õ^{r←∅,A←∅}(·) is compact. Since O ∈ Õ^{r←∅,A←∅}(S) iff O ⊆ Ãx^{r←∅,A←∅}(S), we have that Õ^{r←∅,A←∅}(·) is subset-closed and union-closed.
Example 35 (Example 31 continued) It is easy to see that axiom M2 from Figure 1 is syntactically local w.r.t. S_1 = {Fibrosis, Genetic Origin}. Below we indicate the sub-concepts of α that belong to Con^∅(S_1):

M2:  Genetic Fibrosis ≡ Fibrosis ⊓ ∃has Origin.Genetic Origin
     Genetic Fibrosis ∈ Con^∅(S_1)                              [matches A^∅]
     ∃has Origin.Genetic Origin ∈ Con^∅(S_1)                    [matches ∃R^∅.C]
     Fibrosis ⊓ ∃has Origin.Genetic Origin ∈ Con^∅(S_1)         [matches C ⊓ C^∅]   (9)

It is easy to show in a similar way that the axioms P1–P4 and E1 from Figure 1 are syntactically local w.r.t. S = {Cystic Fibrosis, Genetic Disorder}. Hence the ontology P_1 = {P1, ..., P4, E1} considered in Example 29 is syntactically local w.r.t. S. ♦
I^{r←∗,A←∗}(S)          r ∉ S : r^J                         A ∉ S : A^J
I^{r←∅,A←∅}(S)          ∅                                   ∅
I^{r←Δ×Δ,A←∅}(S)        Δ^J × Δ^J                           ∅
I^{r←id,A←∅}(S)         {⟨x, x⟩ | x ∈ Δ^J}                  ∅
I^{r←∅,A←Δ}(S)          ∅                                   Δ^J
I^{r←Δ×Δ,A←Δ}(S)        Δ^J × Δ^J                           Δ^J
I^{r←id,A←Δ}(S)         {⟨x, x⟩ | x ∈ Δ^J}                  Δ^J

Table 3: Examples for Different Local Classes of Interpretations
The converse of Proposition 33 does not hold in general, since there are semantically local axioms that are not syntactically local. For example, the axiom α = (A ⊑ A ⊔ B) is local w.r.t. every S since it is a tautology (and hence true in every interpretation). On the other hand, it is easy to see that α is not syntactically local w.r.t. S = {A, B} according to Definition 32, since it involves symbols from S only. Another example, which is not a tautology, is the GCI α = (∃r.A ⊑ ∃r.B). The axiom α is semantically local w.r.t. S = {r}, since τ(α, S) = (∃r.⊥ ⊑ ∃r.⊥) is a tautology, but it is not syntactically local. These examples show that the limitation of our syntactic notion of locality is its inability to "compare" different occurrences of concepts from the given signature S. As a result, syntactic locality does not detect all tautological axioms. It is reasonable to assume, however, that tautological axioms do not occur often in realistic ontologies. Furthermore, syntactic locality checking can be performed in polynomial time by matching an axiom according to Definition 32.
Proposition 36 There exists an algorithm that, given a SHOIQ ontology O and a signature S, determines whether O ∈ Õ^{r←∅,A←∅}(S), and whose running time is polynomial in |O| + |S|, where |O| and |S| are the numbers of symbols occurring in O and S respectively.^4
4. We assume that the numbers in number restrictions are written down using binary coding.
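Such a polynomial-time test can be implemented by a direct recursion over the structure of concepts. The sketch below is illustrative only: it covers just the concept constructors of Definition 32 with plain role names (no inverses) and, among the axiom forms, only GCIs—it is not a full SHOIQ implementation.

    def in_con_empty(concept, sig):
        """Membership in Con^∅(S) for the r←∅, A←∅ syntactic locality of Definition 32."""
        kind = concept[0]
        if kind == "atom":
            return concept[1] not in sig                          # A^∅
        if kind == "not":
            return in_con_delta(concept[1], sig)                  # ¬C^Δ
        if kind == "and":                                         # C ⊓ C^∅ or C^∅ ⊓ C
            return in_con_empty(concept[1], sig) or in_con_empty(concept[2], sig)
        if kind == "some":                                        # ∃R^∅.C or ∃R.C^∅
            return concept[1] not in sig or in_con_empty(concept[2], sig)
        if kind == "atleast":                                     # (≥ n R^∅.C) or (≥ n R.C^∅)
            return concept[2] not in sig or in_con_empty(concept[3], sig)
        return False

    def in_con_delta(concept, sig):
        """Membership in Con^Δ(S)."""
        kind = concept[0]
        if kind == "top":
            return True
        if kind == "not":
            return in_con_empty(concept[1], sig)                  # ¬C^∅
        if kind == "and":                                         # C^Δ_1 ⊓ C^Δ_2
            return in_con_delta(concept[1], sig) and in_con_delta(concept[2], sig)
        return False

    def syntactically_local_gci(lhs, rhs, sig):
        """A GCI lhs ⊑ rhs is syntactically local (forms (4) and (5)) if lhs ∈ Con^∅(S) or rhs ∈ Con^Δ(S)."""
        return in_con_empty(lhs, sig) or in_con_delta(rhs, sig)

Each call inspects every sub-concept at most once, which is where the polynomial bound of Proposition 36 comes from.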
5.5 Other Locality Classes
The locality condition given in Example 26, based on the class of local interpretations I^{r←∅,A←∅}(·), is just one particular example of a locality condition that can be used for testing safety. Other classes of local interpretations can be constructed in a similar way by fixing the interpretations of elements outside S to different values. In Table 3 we have listed several such classes of local interpretations, where we fix the interpretation of atomic roles outside S to be either the empty set ∅, the universal relation Δ × Δ, or the identity relation id on Δ, and the interpretation of atomic concepts outside S to be either the empty set ∅ or the set Δ of all elements.
Each local class of interpretations in Table 3 defines a corresponding class of local ontologies analogously to Example 26. In Table 4 we have listed all of these classes together with examples of typical types of axioms used in ontologies. These axioms are assumed to be an extension of our project ontology from Figure 1. We indicate which axioms are local w.r.t. S for which locality conditions, assuming, as usual, that the symbols from S are underlined.

P4  ∃has Focus.⊤ ⊑ Project
P5  BioMedical Project ≡ Project ⊓ ∃has Focus.Bio Medicine
P6  Project ⊓ Bio Medicine ⊑ ⊥
P7  Funct(has Focus)
P8  Human Genome : Project
P9  has Focus(Human Genome, Gene)
E2  ∀has Focus.Cystic Fibrosis ⊑ ∃has Focus.Cystic Fibrosis

Table 4: A Comparison between Different Types of Locality Conditions (the columns α ∈ Ax^{r←∅,A←∅}, Ax^{r←Δ×Δ,A←∅}, Ax^{r←id,A←∅}, Ax^{r←∅,A←Δ}, Ax^{r←Δ×Δ,A←Δ}, Ax^{r←id,A←Δ}, with ticks indicating which axioms are local for which condition, and the underlining of the symbols from S, are not reproducible here)

It can be seen from Table 4 that different types of locality conditions are appropriate for different types of axioms. The locality condition based on I^{r←∅,A←∅}(S) captures the domain axiom P4, the definition P5, the disjointness axiom P6, and the functionality axiom P7, but neither of the assertions P8 or P9, since the individuals Human Genome and Gene prevent us from interpreting the atomic role has Focus and the atomic concept Project with the empty set. The locality condition based on I^{r←Δ×Δ,A←Δ}(S), where the atomic roles and concepts outside S are interpreted as the largest possible sets, can capture such assertions, but is generally poor for other types of axioms. For example, the functionality axiom P7 is not captured by this locality condition, since the atomic role has Focus is interpreted as the universal relation Δ × Δ, which is not necessarily functional. In order to capture such functionality axioms, one can use locality based on I^{r←id,A←∅}(S) or I^{r←id,A←Δ}(S), where every atomic role outside S is interpreted as the identity relation id on the interpretation domain. Note that the modeling error E2 is not local for any of the given locality conditions. Note also that it is not possible to come up with a locality condition that captures all the axioms P4–P9, since P6 and P8 together imply the axiom β = ({Human Genome} ⊓ Bio Medicine ⊑ ⊥), which uses symbols from S only. Hence, every subset of P containing P6 and P8 is not safe w.r.t. S, and so cannot be local w.r.t. S.
It might be possible to come up with algorithms for testing the locality conditions for the classes of interpretations in Table 3 similar to the one presented in Proposition 30. For example, locality based on the class I^{r←∅,A←Δ}(S) can be tested as in Proposition 30, where case (b) of the definition of τ(C, S) is replaced with the following:

τ(A, S) = ⊤ if A ∉ S and otherwise = A   (b′)

For the remaining classes of interpretations, that is, for I^{r←Δ×Δ,A←∗}(S) and I^{r←id,A←∗}(S), checking locality is not that straightforward, since it is not clear how to eliminate the universal and identity roles from the axioms while preserving validity in the respective classes of interpretations.
Still, it is easy to come up with tractable syntactic approximations for all the locality conditions considered in this section, in a similar manner to what has been done in Section 5.4. The idea is the same as that used in Definition 32, namely to define two sets Con^∅(S) and Con^Δ(S) of concepts for a signature S which are interpreted as the empty set and, respectively, as Δ^I in every interpretation I from the class, and to see in which situations the DL constructors produce elements of these sets. In Figure 3 we give recursive definitions of the syntactically local axioms Ãx^{r←∗,A←∗}(S) that correspond to the classes I^{r←∗,A←∗}(S) of interpretations from Table 3, where some cases in the recursive definitions are present only for the indicated classes of interpretations.

Con^∅(S) ::= (¬C^Δ) | (C ⊓ C^∅) | (C^∅ ⊓ C) | (∃R.C^∅) | (≥ n R.C^∅)
    for I^{r←∗,A←∅}(·):   | A^∅
    for I^{r←∅,A←∗}(·):   | (∃R^∅.C) | (≥ n R^∅.C)
    for I^{r←id,A←∗}(·):  | (≥ m R^id.C), m ≥ 2 .

Con^Δ(S) ::= ⊤ | (¬C^∅) | (C^Δ_1 ⊓ C^Δ_2)
    for I^{r←∗,A←Δ}(·):   | A^Δ
    for I^{r←Δ×Δ,A←∗}(·): | (∃R^{Δ×Δ}.C^Δ) | (≥ n R^{Δ×Δ}.C^Δ)
    for I^{r←id,A←∗}(·):  | (∃R^id.C^Δ) | (≥ 1 R^id.C^Δ) .

Ãx^{r←∗,A←∗}(S) ::= C^∅ ⊑ C | C ⊑ C^Δ | a : C^Δ
    for I^{r←∅,A←∗}(·):   | R^∅ ⊑ R | Trans(r^∅) | Funct(R^∅)
    for I^{r←Δ×Δ,A←∗}(·): | R ⊑ R^{Δ×Δ} | Trans(r^{Δ×Δ}) | r^{Δ×Δ}(a, b)
    for I^{r←id,A←∗}(·):  | Trans(r^id) | Funct(R^id)

Where: A^∅, A^Δ, r^∅, r^{Δ×Δ}, r^id ∉ S; Sig(R^∅), Sig(R^{Δ×Δ}), Sig(R^id) ⊈ S; C^∅ ∈ Con^∅(S), C^Δ_{(i)} ∈ Con^Δ(S); C is any concept, R is any role.

Figure 3: Syntactic Locality Conditions for the Classes of Interpretations in Table 3
5.6 Combining and Extending Safety Classes
In the previous section we gave examples of several safety classes based on different local
classes of interpretations and demonstrated that different classes are suitable for different
types of axioms. In order to check safety of ontologies in practice, one may try to apply
different sufficient tests and check if any of them succeeds. Obviously, this gives a more
powerful sufficient condition for safety, which can be seen as the union of the safety classes
used in the tests.
Formally, given two classes of ontologies O_1(·) and O_2(·), their union (O_1 ∪ O_2)(·) is the class of ontologies defined by (O_1 ∪ O_2)(S) = O_1(S) ∪ O_2(S). It is easy to see from Definition 24 that if both O_1(·) and O_2(·) are safety classes then their union (O_1 ∪ O_2)(·) is a safety class. Moreover, if each of the safety classes is also anti-monotonic or subset-closed, then the union is anti-monotonic or, respectively, subset-closed as well. Unfortunately, the union-closure property of safety classes is not preserved under unions, as demonstrated in the following example:
Example 37 Consider the union (O^{r←∅,A←∅} ∪ O^{r←Δ×Δ,A←Δ})(·) of the two classes of local ontologies O^{r←∅,A←∅}(·) and O^{r←Δ×Δ,A←Δ}(·) defined in Section 5.5. This safety class is not union-closed since, for example, the ontology O_1 consisting of axioms P4–P7 from Table 4 satisfies the first locality condition and the ontology O_2 consisting of axioms P8–P9 satisfies the second locality condition, but their union O_1 ∪ O_2 satisfies neither the first nor the second locality condition and, in fact, is not even safe for S, as we have shown in Section 5.5. ♦
As shown in Proposition 27, every locality condition gives a union-closed safety class; however, as seen in Example 37, the union of such safety classes might no longer be union-closed. One may wonder whether locality classes already provide the most powerful sufficient conditions for safety that satisfy all the desirable properties from Definition 24. Surprisingly, this is the case, to a certain extent, for some of the locality classes considered in Section 5.5.
Definition 38 (Maximal Union-Closed Safety Class). A safety class O_2(·) extends a safety class O_1(·) if O_1(S) ⊆ O_2(S) for every S. A safety class O_1(·) is maximal union-closed for a language L if O_1(·) is union-closed and, for every union-closed safety class O_2(·) that extends O_1(·), every signature S and every ontology O over L, we have that O ∈ O_2(S) implies O ∈ O_1(S). ♦
Proposition 39 The classes of local ontologies O^{r←∅,A←∅}(·) and O^{r←∅,A←Δ}(·) defined in Section 5.5 are maximal union-closed safety classes for L = SHIQ.
Proof. According to Definition 38, if a safety class O(·) is not maximal union-closed for a language L, then there exist a signature S and an ontology O over L such that (i) O ∉ O(S), (ii) O is safe w.r.t. S in L, and (iii) for every P ∈ O(S), the ontology O ∪ P is safe w.r.t. S in L; that is, for every ontology Q over L with Sig(Q) ∩ Sig(O ∪ P) ⊆ S, it is the case that O ∪ P ∪ Q is a deductive conservative extension of Q. We demonstrate that this is not possible for O(·) = O^{r←∅,A←∅}(·) or O(·) = O^{r←∅,A←Δ}(·).
We first consider the case O(·) = O^{r←∅,A←∅}(·) and then show how to modify the proof for the case O(·) = O^{r←∅,A←Δ}(·).
Let O be an ontology over L that satisfies conditions (i)–(iii) above. We define ontologies P and Q as follows. Take P to consist of the axioms A ⊑ ⊥ and ∃r.⊤ ⊑ ⊥ for every atomic concept A and atomic role r from Sig(O) \ S. It is easy to see that P ∈ O^{r←∅,A←∅}(S). Take Q to consist of all tautologies of the form ⊥ ⊑ A and ⊥ ⊑ ∃r.⊤ for every A, r ∈ S. Note that Sig(O ∪ P) ∩ Sig(Q) ⊆ S. We claim that O ∪ P ∪ Q is not a deductive Sig(Q)-conservative extension of Q. (∗)
Intuitively, the ontology P is chosen in such a way that P ∈ O^{r←∅,A←∅}(S) and O ∪ P ∪ Q has only models from I^{r←∅,A←∅}(S); Q is an ontology which implies nothing but tautologies and uses all the atomic concepts and roles from S.
Since O ∉ O^{r←∅,A←∅}(S), there exists an axiom α ∈ O such that α ∉ Ax^{r←∅,A←∅}(S). Let β := τ(α, S), where τ(·, ·) is defined as in Proposition 30. As has been shown in the proof of this proposition, I ⊨ α iff I ⊨ β for every I ∈ I^{r←∅,A←∅}(S). Now, since O ⊨ α and O ∪ P ∪ Q has only models from I^{r←∅,A←∅}(S), it is the case that O ∪ P ∪ Q ⊨ β. By Proposition 30, since β does not contain individuals, we have that Sig(β) ⊆ S = Sig(Q) and, since α ∉ Ax^{r←∅,A←∅}(S), β is not a tautology; thus Q ⊭ β. Hence, by Definition 1, O ∪ P ∪ Q is not a deductive Sig(Q)-conservative extension of Q (∗).
For O(·) = O^{r←∅,A←Δ}(·) the proof can be repeated by taking P to consist of the axioms ⊤ ⊑ A and ∃r.⊤ ⊑ ⊥ for all A, r ∈ Sig(O) \ S, and modifying τ(·, ·) as discussed in Section 5.5.
There are some difficulties in extending the proof of Proposition 39 to the other locality classes considered in Section 5.5. First, it is not clear how to force the interpretations of roles to be the universal or identity relation using SHOIQ axioms. Second, it is not clear how to define the function τ(·, ·) in these cases (see the related discussion in Section 5.5). Note also that the proof of Proposition 39 does not work in the presence of nominals, since then it is not guaranteed that β = τ(α, S) contains symbols from S only (see Footnote 3). Hence there is probably room to extend the locality classes O^{r←∅,A←∅}(·) and O^{r←∅,A←Δ}(·) for L = SHOIQ while preserving union-closure.
6. Extracting Modules Using Safety Classes
In this section we revisit the problem of extracting modules from ontologies. As shown in Corollary 23 in Section 4, there exists no general procedure that can recognize or extract all the (minimal) modules for a signature in an ontology in finite time.
The techniques described in Section 5, however, can be reused for extracting particular families of modules that satisfy certain sufficient conditions. Proposition 18 establishes the relationship between the notions of safety and module; more precisely, a subset O_1 of O is an S-module in O provided that O \ O_1 is safe for S ∪ Sig(O_1). Therefore, any safety class O(·) can provide a sufficient condition for testing modules—that is, in order to prove that O_1 is an S-module in O, it is sufficient to show that O \ O_1 ∈ O(S ∪ Sig(O_1)). A notion of module based on this property can be defined as follows.
Definition 40 (Modules Based on a Safety Class). Let L be an ontology language and O(·) a safety class for L. Given an ontology O and a signature S over L, we say that O_m ⊆ O is an O(·)-based S-module in O if O \ O_m ∈ O(S ∪ Sig(O_m)). ♦
Remark 41 Note that for every safety class O(·), ontology O and signature S, there exists at least one O(·)-based S-module in O, namely O itself; indeed, by Definition 24, the empty ontology ∅ = O \ O belongs to O(S) for every O(·) and S.
Note also that it follows from Definition 40 that O_m is an O(·)-based S-module in O iff O_m is an O(·)-based S′-module in O for every S′ with S ⊆ S′ ⊆ (S ∪ Sig(O_m)). ♦
It is clear that, according to Definition 40, any procedure for checking membership in a safety class O(·) can be used directly for checking whether O_m is a module based on O(·). In order to extract an O(·)-based module, it is sufficient to enumerate all possible subsets of the ontology and check whether any of these subsets is a module based on O(·).
In practice, however, it is possible to avoid checking all possible subsets of the input ontology. Figure 4 presents an optimized version of the module-extraction algorithm. The procedure manipulates configurations of the form O_m | O_u | O_s, which represent a partitioning of the ontology O into three disjoint subsets O_m, O_u and O_s. The set O_m accumulates the axioms of the extracted module; the set O_s is intended to be safe w.r.t. S ∪ Sig(O_m). The set O_u, which is initialized to O, contains the unprocessed axioms. These axioms are distributed among O_m and O_s according to rules R1 and R2. Given an axiom α from O_u, rule R1 moves α into O_s provided O_s remains safe w.r.t. S ∪ Sig(O_m) according to the safety class O(·). Otherwise, rule R2 moves α into O_m and moves all axioms from O_s back into O_u, since Sig(O_m) might expand and axioms from O_s might no longer be safe w.r.t. S ∪ Sig(O_m). At the end of this process, when no axioms are left in O_u, the set O_m is an O(·)-based module in O.
The rewrite rules R1 and R2 preserve the invariants I1–I3 given in Figure 4. Invariant I1 states that the three sets O_m, O_u and O_s form a partitioning of O; I2 states that the set O_s satisfies the safety test for S ∪ Sig(O_m) w.r.t. O(·); finally, I3 establishes that each rewrite step either adds elements to O_m, or adds elements to O_s without changing O_m; in other words, the pair (|O_m|, |O_s|) consisting of the sizes of these sets increases in lexicographic order.
Input: an ontology O, a signature S, a safety class O(·)
Output: a module O_m in O based on O(·)
Configuration:  O_m | O_u | O_s     (module | unprocessed | safe)
Initial Configuration:  ∅ | O | ∅        Termination Condition:  O_u = ∅
Rewrite rules:
R1.  O_m | O_u ∪ {α} | O_s  ⟹  O_m | O_u | O_s ∪ {α}          if (O_s ∪ {α}) ∈ O(S ∪ Sig(O_m))
R2.  O_m | O_u ∪ {α} | O_s  ⟹  O_m ∪ {α} | O_u ∪ O_s | ∅      if (O_s ∪ {α}) ∉ O(S ∪ Sig(O_m))
Invariants for O_m | O_u | O_s:
I1.  O = O_m ⊎ O_u ⊎ O_s          I2.  O_s ∈ O(S ∪ Sig(O_m))
Invariant for O_m | O_u | O_s ⟹ O′_m | O′_u | O′_s:
I3.  (|O_m|, |O_s|) <_lex (|O′_m|, |O′_s|)

Figure 4: A Procedure for Computing Modules Based on a Safety Class O(·)
Proposition 42 (Correctness of the Procedure from Figure 4) Let O(·) be a safety class for an ontology language L, O an ontology over L, and S a signature over L. Then:
(1) the procedure in Figure 4 with input O, S, O(·) terminates and returns an O(·)-based S-module O_m in O; and
(2) if, additionally, O(·) is anti-monotonic, subset-closed and union-closed, then there is a unique minimal O(·)-based S-module in O, and the procedure returns precisely this minimal module.
Proof. (1) The procedure based on the rewrite rules from Figure 4 always terminates, for the following reasons: (i) for every configuration derived by the rewrite rules, the sets O_m, O_u and O_s form a partitioning of O (see invariant I1 in Figure 4), and therefore the size of every set is bounded; (ii) at each rewrite step, (|O_m|, |O_s|) increases in lexicographic order (see invariant I3 in Figure 4). Additionally, if O_u ≠ ∅ then it is always possible to apply one of the rewrite rules R1 or R2, and hence the procedure always terminates with O_u = ∅. Upon termination, by invariant I1 from Figure 4, O is partitioned into O_m and O_s, and by invariant I2, O_s ∈ O(S ∪ Sig(O_m)), which implies, by Definition 40, that O_m is an O(·)-based S-module in O.
(2) Now suppose that, in addition, O(·) is an anti-monotonic, subset-closed and union-closed safety class, and suppose that O′_m is an O(·)-based S-module in O. We demonstrate by induction that for every configuration O_m | O_u | O_s derivable from ∅ | O | ∅ by the rewrite rules R1 and R2, it is the case that O_m ⊆ O′_m. This will prove that the module computed by the procedure is a subset of every O(·)-based S-module in O, and hence is the smallest O(·)-based S-module in O.
Indeed, for the base case we have O_m = ∅ ⊆ O′_m. The rewrite rule R1 does not change the set O_m. For the rewrite rule R2 we have: O_m | O_u ∪ {α} | O_s ⟹ O_m ∪ {α} | O_u ∪ O_s | ∅ if (O_s ∪ {α}) ∉ O(S ∪ Sig(O_m)). (∗)
Suppose, to the contrary, that O_m ⊆ O′_m but O_m ∪ {α} ⊈ O′_m. Then α ∈ O′_s := O \ O′_m. Since O′_m is an O(·)-based S-module in O, we have that O′_s ∈ O(S ∪ Sig(O′_m)). Since O(·) is subset-closed, {α} ∈ O(S ∪ Sig(O′_m)). Since O_m ⊆ O′_m and O(·) is anti-monotonic, we have {α} ∈ O(S ∪ Sig(O_m)). Since, by invariant I2 from Figure 4, O_s ∈ O(S ∪ Sig(O_m)) and O(·) is union-closed, O_s ∪ {α} ∈ O(S ∪ Sig(O_m)), which contradicts (∗). This contradiction implies that the rule R2 also preserves the property O_m ⊆ O′_m.

Input: an ontology O, a signature S, a safety class O(·)
Output: a module O_m in O based on O(·)
Initial Configuration:  ∅ | O | ∅        Termination Condition:  O_u = ∅
Rewrite rules:
R1.  O_m | O_u ∪ {α} | O_s  ⟹  O_m | O_u | O_s ∪ {α}                    if {α} ∈ O(S ∪ Sig(O_m))
R2.  O_m | O_u ∪ {α} | O^α_s ∪ O_s  ⟹  O_m ∪ {α} | O_u ∪ O^α_s | O_s    if {α} ∉ O(S ∪ Sig(O_m)) and
                                                                         Sig(O_s) ∩ Sig(α) ⊆ Sig(O_m)

Figure 5: An Optimized Procedure for Computing Modules Based on a Compact Subset-Closed Union-Closed Safety Class O(·)
Claim (1) of Proposition 42 establishes that the procedure from Figure 4 terminates for every input and produces a module based on the given safety class. Moreover, it is possible to show that this procedure runs in polynomial time, assuming that the safety test can itself be performed in polynomial time.
If the safety class O(·) satisfies additional desirable properties, like those based on the classes of local interpretations described in Section 5.2, the procedure in fact produces the smallest possible module based on the safety class, as stated in claim (2) of Proposition 42. In this case, it is possible to optimize the procedure shown in Figure 4. If O(·) is union-closed, then, instead of checking whether (O_s ∪ {α}) ∈ O(S ∪ Sig(O_m)) in the conditions of rules R1 and R2, it is sufficient to check whether {α} ∈ O(S ∪ Sig(O_m)), since it is already known that O_s ∈ O(S ∪ Sig(O_m)). If O(·) is compact and subset-closed, then, instead of moving all the axioms in O_s to O_u in rule R2, it is sufficient to move only those axioms O^α_s that contain at least one symbol from α which did not occur in O_m before, since the set of remaining axioms will stay in O(S ∪ Sig(O_m)). In Figure 5 we present an optimized version of the algorithm from Figure 4 for such locality classes.
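As an illustration of how the optimized procedure of Figure 5 can be realized, the following is a minimal sketch under our own assumptions: axioms are opaque objects, is_local(axiom, sig) is the per-axiom safety test (for instance, the syntactic locality check sketched earlier), and signature_of(axiom) returns the set of symbols occurring in an axiom; none of these names come from the paper.

    def extract_module(ontology, sig, is_local, signature_of):
        """Locality-based module extraction in the style of Figure 5."""
        module, unprocessed, safe = set(), set(ontology), set()
        mod_sig = set(sig)                                  # plays the role of S ∪ Sig(O_m)
        while unprocessed:
            alpha = unprocessed.pop()
            if is_local(alpha, mod_sig):                    # rule R1
                safe.add(alpha)
            else:                                           # rule R2
                module.add(alpha)
                new_symbols = signature_of(alpha) - mod_sig
                mod_sig |= new_symbols
                # only the axioms of O_s that mention a new symbol need to be re-examined
                back = {beta for beta in safe if signature_of(beta) & new_symbols}
                safe -= back
                unprocessed |= back
        return module

With a polynomial per-axiom test, an axiom is re-examined only when the module signature grows, in line with the polynomial-time remark made after Proposition 42.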
Example 43 In Table 5 we present a trace of the algorithm from Figure 5 for the ontology O consisting of axioms M1–M5 from Figure 1, the signature S = {Cystic Fibrosis, Genetic Disorder} and the safety class O(·) = O^{r←∅,A←∅}(·) defined in Example 26. The first column of the table lists the configurations obtained from the initial configuration ∅ | O | ∅ by applying the rewrite rules R1 and R2 from Figure 5; in each row, the axiom α marked with an asterisk is the one being tested for safety. The second column of the table shows the elements of S ∪ Sig(O_m) that appear in the current configuration but were not present in the preceding configurations. In the last column we indicate whether the first condition of the rules R1 and R2 is fulfilled for the selected axiom α from O_u—that is, whether α is local for I^{r←∅,A←∅}(·). The rewrite rule corresponding to the result of this test is then applied to the configuration.
    O_m | O_u | O_s                          New elements in S ∪ Sig(O_m)               {α} ∈ O(S ∪ Sig(O_m))?
1   − | M1, M2*, M3, M4, M5 | −              Cystic Fibrosis, Genetic Disorder          Yes ⇒ R1
2   − | M1, M3*, M4, M5 | M2                 −                                          Yes ⇒ R1
3   − | M1*, M4, M5 | M2, M3                 −                                          No ⇒ R2
4   M1 | M2, M3*, M4, M5 | −                 Fibrosis, located In, Pancreas,
                                             has Origin, Genetic Origin                 No ⇒ R2
5   M1, M3 | M2, M4*, M5 | −                 Genetic Fibrosis                           No ⇒ R2
6   M1, M3, M4 | M2, M5* | −                 −                                          Yes ⇒ R1
7   M1, M3, M4 | M2* | M5                    −                                          No ⇒ R2
8   M1, M2, M3, M4 | − | M5                  −

Table 5: A trace of the procedure in Figure 5 for the input O = {M1, ..., M5} from Figure 1 and S = {Cystic Fibrosis, Genetic Disorder} (the axiom marked with * is the one tested for safety in that configuration)
Note that some axioms α are tested for safety several times, in different configurations, because the set O_u may grow after applications of rule R2; for example, the axiom α = M2 is tested for safety in both configurations 1 and 7, and α = M3 in configurations 2 and 4. Note also that different results for the locality tests are obtained in these cases: both M2 and M3 were local w.r.t. S ∪ Sig(O_m) when O_m = ∅, but became non-local when new axioms were added to O_m. It is also easy to see that, in our case, syntactic locality produces the same results for the tests.
In our example, the rewrite procedure produces a module O_m consisting of the axioms M1–M4. Note that it is possible to apply the rewrite rules for different choices of the axiom α in O_u, which results in different computations; in other words, the procedure from Figure 5 has implicit non-determinism. According to Claim (2) of Proposition 42, all such computations produce the same module O_m, which is the smallest O^{r←∅,A←∅}(·)-based S-module in O; that is, the implicit non-determinism in the procedure from Figure 5 does not have any impact on the result of the procedure. However, alternative choices for α may result in shorter computations: in our example we could have selected axiom M1 in the first configuration instead of M2, which would have led to a shorter trace consisting of configurations 1 and 4–8 only. ♦
It is worth examining the connection between the S-modules in an ontology O based on a particular safety class O(·) and the actual minimal S-modules in O. It turns out that any O(·)-based module O_m is guaranteed to cover the set of minimal modules, provided that O(·) is anti-monotonic and subset-closed. In other words, given O and S, O_m contains all the S-essential axioms in O. The following lemma provides the main technical argument underlying this result.
Lemma 44 Let O(·) be an anti-monotonic, subset-closed safety class for an ontology language L, O an ontology, and S a signature over L. Let O_1 be an S-module in O w.r.t. L and O_m an O(·)-based S-module in O. Then O_2 := O_1 ∩ O_m is an S-module in O w.r.t. L.
Proof. By Definition 40, since O_m is an O(·)-based S-module in O, we have that O \ O_m ∈ O(S ∪ Sig(O_m)). Since O_1 \ O_2 = O_1 \ O_m ⊆ O \ O_m and O(·) is subset-closed, it is the case that O_1 \ O_2 ∈ O(S ∪ Sig(O_m)). Since O(·) is anti-monotonic and O_2 ⊆ O_m, we have that O_1 \ O_2 ∈ O(S ∪ Sig(O_2)); hence O_2 is an O(·)-based S-module in O_1. In particular, O_2 is an S-module in O_1 w.r.t. L. Since O_1 is an S-module in O w.r.t. L, O_2 is an S-module in O w.r.t. L.
Corollary 45 Let O(·) be an anti-monotonic, subset-closed safety class for L and let O_m be an O(·)-based S-module in O. Then O_m contains all S-essential axioms in O w.r.t. L.
Proof. Let O_1 be a minimal S-module in O w.r.t. L. We demonstrate that O_1 ⊆ O_m. Indeed, otherwise, by Lemma 44, O_1 ∩ O_m would be an S-module in O w.r.t. L that is strictly contained in O_1, contradicting the minimality of O_1. Hence O_m is a superset of every minimal S-module in O and hence contains all S-essential axioms in O w.r.t. L.
As shown in Section 3.4, all the axioms M1–M4 are essential for the ontology O and the signature S considered in Example 43. We have seen in this example that the locality-based S-module extracted from O contains all of these axioms, in accordance with Corollary 45. In our case, the extracted module contains only essential axioms; in general, however, locality-based modules might contain non-essential axioms.
An interesting application of modules is the pruning of irrelevant axioms when checking whether an axiom α is implied by an ontology O. Indeed, in order to check whether O ⊨ α it suffices to retrieve the module for Sig(α) and verify whether the implication holds w.r.t. this module. In some cases it is sufficient to extract a module for a subset of the signature of α, which, in general, leads to smaller modules. In particular, in order to test subsumption between a pair of atomic concepts, if the safety class being used enjoys some nice properties then it suffices to extract a module for one of them, as given by the following proposition:
Proposition 46 Let O(·) be a compact, union-closed safety class for an ontology language L, O an ontology, and A, B atomic concepts. Let O_A be an O(·)-based module for S = {A} in O. Then:
1. If O ⊨ α := (A ⊑ B) and {B ⊑ ⊥} ∈ O(∅), then O_A ⊨ α;
2. If O ⊨ α := (B ⊑ A) and {⊤ ⊑ B} ∈ O(∅), then O_A ⊨ α.
Proof. 1. Consider two cases: (a) B ∈ Sig(O_A) and (b) B ∉ Sig(O_A).
(a) By Remark 41, O_A is an O(·)-based module in O for S = {A, B}. Since Sig(α) ⊆ S, by Definition 12 and Definition 1, O ⊨ α implies O_A ⊨ α.
(b) Consider O′ = O ∪ {B ⊑ ⊥}. Since Sig(B ⊑ ⊥) = {B} and B ∉ Sig(O_A), {B ⊑ ⊥} ∈ O(∅) and O(·) is compact, by Definition 24 it is the case that {B ⊑ ⊥} ∈ O(S ∪ Sig(O_A)). Since O_A is an O(·)-based S-module in O, by Definition 40, O \ O_A ∈ O(S ∪ Sig(O_A)). Since O(·) is union-closed, it is the case that (O \ O_A) ∪ {B ⊑ ⊥} ∈ O(S ∪ Sig(O_A)). Note that B ⊑ ⊥ ∉ O_A, since B ∉ Sig(O_A); hence (O \ O_A) ∪ {B ⊑ ⊥} = O′ \ O_A, and, by Definition 40, we have that O_A is an O(·)-based S-module in O′. Now, since O ⊨ (A ⊑ B), it is the case that O′ ⊨ (A ⊑ ⊥), and hence, since O_A is a module in O′ for S = {A}, we have that O_A ⊨ (A ⊑ ⊥), which implies O_A ⊨ (A ⊑ B).
2. The proof of this case is analogous to that for Case 1: Case (a) applies without changes; in Case (b) we show that O_A is an O(·)-based module for S = {A} in O′ = O ∪ {⊤ ⊑ B}, and hence, since O′ ⊨ (⊤ ⊑ A), it is the case that O_A ⊨ (⊤ ⊑ A), which implies O_A ⊨ (B ⊑ A).
Corollary 47 Let O be a SHOIQ ontology and A, B atomic concepts. Let O^{r←∗,A←∅}(·) and O^{r←∗,A←Δ}(·) be locality classes based on local classes of interpretations of the form I^{r←∗,A←∅}(·) and I^{r←∗,A←Δ}(·), respectively, from Table 3. Let O_A be a module for S = {A} based on O^{r←∗,A←∅}(·) and O_B a module for S = {B} based on O^{r←∗,A←Δ}(·). Then O ⊨ (A ⊑ B) iff O_A ⊨ (A ⊑ B) iff O_B ⊨ (A ⊑ B).
Proof. It is easy to see that {B ⊑ ⊥} ∈ O^{r←∗,A←∅}(∅) and {⊤ ⊑ A} ∈ O^{r←∗,A←Δ}(∅).
Proposition 46 implies that a module based on a safety class for a single atomic concept A can be used for capturing either the super-concepts (Case 1) or the sub-concepts (Case 2) B of A, provided that the safety class captures, when applied to the empty signature, axioms of the form (B ⊑ ⊥) (Case 1) or (⊤ ⊑ B) (Case 2). That is, B is a super-concept or a sub-concept of A in the ontology if and only if it is such in the module. This property can be used, for example, to optimize the classification of ontologies. In order to check whether the subsumption A ⊑ B holds in an ontology O, it is sufficient to extract a module in O for S = {A} using a modularization algorithm based on a safety class in which the ontology {B ⊑ ⊥} is local w.r.t. the empty signature, and to check whether this subsumption holds w.r.t. the module. For this purpose, it is convenient to use a tractable syntactic approximation of the safety class in use; for example, one could use the syntactic locality conditions given in Figure 3 instead of their semantic counterparts.
It is possible to combine modularization procedures to obtain modules that are smaller than the ones obtained using these procedures individually. For example, in order to check the subsumption O ⊨ (A ⊑ B), one could first extract an O^{r←∗,A←∅}(·)-based module M_1 for S = {A} in O; by Corollary 47 this module is complete for all the super-concepts of A in O, including B—that is, if an atomic concept is a super-concept of A in O, then it is also a super-concept of A in M_1. One could then extract an O^{r←∗,A←Δ}(·)-based module M_2 for S = {B} in M_1 which, by Corollary 47, is complete for all the sub-concepts of B in M_1, including A. Indeed, M_2 is an S-module in M_1 for S = {A, B} and M_1 is an S-module in the original ontology O. By Proposition 46, therefore, it is the case that M_2 is also an S-module in O.
7. Related Work
We have seen in Section 3 that the notion of conservative extension is valuable in the formalization of ontology reuse tasks. The problem of deciding conservative extensions has recently been investigated in the context of ontologies (Ghilardi et al., 2006; Lutz et al., 2007; Lutz & Wolter, 2007). The problem of deciding whether P ∪ O is a deductive S-conservative extension of O is EXPTIME-complete for EL (Lutz & Wolter, 2007), 2-EXPTIME-complete w.r.t. ALCQI (Lutz et al., 2007) (roughly OWL-Lite), and undecidable w.r.t. ALCQIO (roughly OWL DL). Furthermore, checking model conservative extensions is already undecidable for EL (Lutz & Wolter, 2007), and for ALC it is even not semi-decidable (Lutz et al., 2007).
In the last few years, a rapidly growing body of work has been developed under the
headings of Ontology Mapping and Alignment, Ontology Merging, Ontology Integration,
and Ontology Segmentation (Kalfoglou & Schorlemmer, 2003; Noy, 2004a, 2004b). This field
is rather diverse and has roots in several communities.
In particular, numerous techniques for extracting fragments of ontologies for the pur-
poses of knowledge reuse have been proposed. Most of these techniques rely on syntactically
traversing the axioms in the ontology and employing various heuristics to determine which
axioms are relevant and which are not.
An example of such a procedure is the algorithm implemented in the Prompt-Factor tool (Noy & Musen, 2003). Given a signature S and an ontology O, the algorithm retrieves a fragment O1 ⊆ O as follows: first, the axioms in O that mention any of the symbols in S are added to O1; second, S is expanded with the symbols in Sig(O1). These steps are repeated until a fixpoint is reached. For our example in Section 3, when S = {Cystic Fibrosis, Genetic Disorder} and O consists of axioms M1–M5 from Figure 1, the algorithm first retrieves the axioms M1, M4, and M5 containing these terms, then expands S with the symbols mentioned in these axioms, such that S contains all the symbols of O. After this step, all the remaining axioms of O are retrieved. Hence, the fragment extracted by the Prompt-Factor algorithm consists of all the axioms M1–M5. In this case, the Prompt-Factor algorithm extracts a module (though not a minimal one). In general, however, the extracted fragment is not guaranteed to be a module. For example, consider an ontology O = {A ≡ ¬A, B ⊑ C} and α = (C ⊑ B). The ontology O is inconsistent due to the axiom A ≡ ¬A: any axiom (and α in particular) is thus a logical consequence of O. Given S = {B, C}, the Prompt-Factor algorithm extracts O2 = {B ⊑ C}; however, O2 ⊭ α, and so O2 is not a module in O. The Prompt-Factor algorithm may fail even if O is consistent. For example, consider an ontology O = {⊤ ⊑ {a}, A ⊑ B}, α = (A ⊑ ∀r.A), and S = {A}. It is easy to see that O is consistent, admits only single-element models, and α is satisfied in every such model; that is, O ⊨ α. In this case, the Prompt-Factor algorithm extracts O1 = {A ⊑ B}, which does not imply α.
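The fixpoint traversal described above can be summarized by the following sketch, which is an assumed reading of the Prompt-Factor procedure rather than the PROMPT tool's actual code; axioms are abstracted to the sets of symbols they mention, since the traversal is purely syntactic.

```python
# Signature-expansion fixpoint: collect axioms mentioning S, grow S with their symbols,
# repeat until no further axioms are picked up.

def prompt_factor(ontology, signature):
    """Return the fragment reached by the syntactic traversal, given axioms as symbol sets."""
    fragment, sig = set(), set(signature)
    while True:
        new = {ax for ax in ontology if ax not in fragment and ax & sig}
        if not new:
            return fragment
        fragment |= new
        for ax in new:
            sig |= ax

# The second counterexample above: O = {⊤ ⊑ {a}, A ⊑ B}, S = {A}.
O = {frozenset({"a"}), frozenset({"A", "B"})}     # each axiom reduced to its signature
print(prompt_factor(O, {"A"}))                    # only the axiom over {A, B} is returned
```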
Another example is Seidenberg’s segmentation algorithm (Seidenberg & Rector, 2006),
which was used for segmentation of the medical ontology GALEN (Rector & Rogers, 1999).
Currently, the full version of GALEN cannot be processed by reasoners, so the authors
investigate the possibility of splitting GALEN into small “segments” which can be processed
by reasoners separately. The authors describe a segmentation procedure which, given a set
of atomic concepts S, computes a “segment” for S in the ontology. The description of
the procedure is very high-level. The authors discuss which concepts and roles should be
included in the segment and which should not. In particular, the segment should contain
all “super-” and “sub-” concepts of the input concepts, concepts that are “linked” from the
input concepts (via existential restrictions) and their “super-concepts”, but not their “sub-
concepts”; for the included concepts, also “their restrictions”, “intersection”, “union”, and
“equivalent concepts” should be considered by including the roles and concepts they contain,
together with their “super-concepts” and “super-roles” but not their “sub-concepts” and
“sub-roles”. From the description of the procedure it is not entirely clear whether it works
with a classified ontology (which is unlikely in the case of GALEN since the full version of
GALEN has not been classified by any existing reasoner), or, otherwise, how the “super-”
and “sub-” concepts are computed. It is also not clear which axioms should be included in
the segment in the end, since the procedure talks only about the inclusion of concepts and
roles.
A different approach to module extraction proposed in the literature (Stuckenschmidt
& Klein, 2004) consists of partitioning the concepts in an ontology to facilitate visualization
of and navigation through an ontology. The algorithm uses a set of heuristics for measuring
the degree of dependency between the concepts in the ontology and outputs a graphical
representation of these dependencies. The algorithm is intended as a visualization technique,
and does not establish a correspondence between the nodes of the graph and sets of axioms
in the ontology.
What is common between the modularization procedures we have mentioned is the lack
of a formal treatment for the notion of module. The papers describing these modularization
procedures do not attempt to formally specify the intended outputs of the procedures, but rather argue, on the basis of intuitive notions, what should and should not be in the modules. In
particular, they do not take the semantics of the ontology languages into account. It might
be possible to formalize these algorithms and identify ontologies for which the “intuition-
based” modularization procedures work correctly. Such studies are beyond the scope of this
paper.
Module extraction in ontologies has also been investigated from a formal point of view
(Cuenca Grau et al., 2006b). Cuenca Grau et al. (2006b) define a notion of a module O_A in an ontology O for an atomic concept A. One of the requirements for the module is that O should be a conservative extension of O_A (in the paper, O_A is called a logical module in O). The paper imposes an additional requirement on modules, namely that the module O_A should entail all the subsumptions in the original ontology between atomic concepts involving A and other atomic concepts in O_A. The authors present an algorithm for partitioning an ontology into disjoint modules and prove that the algorithm is correct provided that certain safety
requirements for the input ontology hold: the ontology should be consistent, should not
contain unsatisfiable atomic concepts, and should have only “safe” axioms (which in our
terms means that they are local for the empty signature). In contrast, the algorithm we
present here works for any ontology, including those containing “non-safe” axioms.
The growing interest in the notion of “modularity” in ontologies has been recently
reflected in a workshop on modular ontologies5 held in conjunction with the International
Semantic Web Conference (ISWC-2006). Concerning the problem of ontology reuse, there
have been various proposals for “safely” combining modules; most of these proposals, such
as E-connections (Cuenca Grau, Parsia, & Sirin, 2006a), Distributed Description Logics
(Borgida & Serafini, 2003) and Package-based Description Logics (Bao, Caragea, & Honavar,
2006) propose a specialized semantics for controlling the interaction between the importing
and the imported modules to avoid side-effects. In contrast to these works, we assume here
that reuse is performed by simply building the logical union of the axioms in the modules
under the standard semantics, and we establish a collection of reasoning services, such as
safety testing, to check for side-effects. The interested reader can find in the literature a
detailed comparison between the different approaches for combining ontologies (Cuenca
Grau & Kutz, 2007).
5. For information, see the homepage of the workshop: http://www.cild.iastate.edu/events/womo.html
8. Implementation and Proof of Concept
In this section, we provide empirical evidence of the appropriateness of locality for safety
testing and module extraction. For this purpose, we have implemented a syntactic locality checker for the locality classes Õ^{r←∅}_{A←∅}() and Õ^{r←∆×∆}_{A←∆}(), as well as the algorithm for extracting modules given in Figure 5 from Section 6.
First, we show that the locality class Õ^{r←∅}_{A←∅}() provides a powerful sufficiency test for safety which works for many real-world ontologies. Second, we show that Õ^{r←∅}_{A←∅}()-based modules are typically very small compared to both the size of the ontology and the modules extracted using other techniques. Third, we report on our implementation in the ontology editor Swoop (Kalyanpur, Parsia, Sirin, Cuenca Grau, & Hendler, 2006) and illustrate the combination of the modularization procedures based on the classes Õ^{r←∅}_{A←∅}() and Õ^{r←∆×∆}_{A←∆}().
8.1 Locality for Testing Safety
We have run our syntactic locality checker for the class Õ^{r←∅}_{A←∅}() over the ontologies from a library of 300 ontologies of various sizes and complexity, some of which import each other (Gardiner, Tsarkov, & Horrocks, 2006).6 For all ontologies P that import an ontology Q, we check whether P belongs to the locality class for S = Sig(P) ∩ Sig(Q).
It turned out that, of the 96 ontologies in the library that import other ontologies, all but 11 were syntactically local for S (and hence also semantically local for S). Of the 11 non-local ontologies, 7 are written in the OWL Full species of OWL (Patel-Schneider et al., 2004), to which our framework does not yet apply. The remaining 4 non-localities are due to the presence of so-called mapping axioms of the form A ≡ B′, where A ∉ S and B′ ∈ S. Note that these axioms simply indicate that the atomic concepts A and B′ in the two ontologies under consideration are synonyms. Indeed, we were able to easily repair these non-localities as follows: we replace every occurrence of A in P with B′ and then remove this axiom from the ontology. After this transformation, all 4 non-local ontologies turned out to be local.
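The repair of these mapping axioms is a simple syntactic substitution; a hypothetical sketch (with an ad-hoc tuple encoding of axioms, not the OWL API) is given below.

```python
# Repair of a mapping axiom A ≡ B' with A outside the reused signature and B' inside it:
# rename every occurrence of A to B' and drop the mapping axiom itself.
# Axioms are modelled as nested tuples such as ("subclass", "A", ("exists", "r", "C")).

def rename(term, old, new):
    """Recursively replace the symbol `old` by `new` inside a nested-tuple axiom."""
    if term == old:
        return new
    if isinstance(term, tuple):
        return tuple(rename(t, old, new) for t in term)
    return term

def repair_mapping_axiom(ontology, a, b_prime):
    """Remove A ≡ B' and rename A to B' everywhere else."""
    mapping = ("equiv", a, b_prime)
    return {rename(ax, a, b_prime) for ax in ontology if ax != mapping}

P = {("equiv", "A", "Bp"), ("subclass", "A", ("exists", "r", "C"))}
print(repair_mapping_axiom(P, "A", "Bp"))
# {('subclass', 'Bp', ('exists', 'r', 'C'))}
```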
8.2 Extraction of Modules
In this section, we compare three modularization7 algorithms that we have implemented using Manchester's OWL API:8
A1: The Prompt-Factor algorithm (Noy & Musen, 2003);
A2: The segmentation algorithm proposed by Cuenca Grau et al. (2006b);
A3: Our modularization algorithm (Algorithm 5), based on the locality class Õ^{r←∅}_{A←∅}().
The aim of the experiments described in this section is not to provide a thorough com-
parison of the quality of existing modularization algorithms since each algorithm extracts
“modules” according to its own requirements, but rather to give an idea of the typical size
of the modules extracted from real ontologies by each of the algorithms.
6. The library is available at http://www.cs.man.ac.uk/~horrocks/testing/
7. In this section, by “module” we understand the result of the considered modularization procedures, which may not necessarily be a module according to Definition 10 or 12.
8. http://sourceforge.net/projects/owlapi
Figure 6: Distribution of the sizes of syntactic locality-based modules: the X-axis gives the number of concepts in the modules and the Y-axis the number of modules extracted for each size range. Panels: (a) modularization of NCI; (b) modularization of GALEN-Small; (c) modularization of SNOMED; (d) modularization of GALEN-Full; (e) small modules of GALEN-Full; (f) large modules of GALEN-Full.
Ontology      Atomic Concepts   A1: Prompt-Factor    A2: Segmentation     A3: Loc.-based mod.
                                Max.(%)  Avg.(%)     Max.(%)  Avg.(%)     Max.(%)  Avg.(%)
NCI           27772             87.6     75.84       55       30.8        0.8      0.08
SNOMED        255318            100      100         100      100         0.5      0.05
GO            22357             1        0.1         1        0.1         0.4      0.05
SUMO          869               100      100         100      100         2        0.09
GALEN-Small   2749              100      100         100      100         10       1.7
GALEN-Full    24089             100      100         100      100         29.8     3.5
SWEET         1816              96.4     88.7        83.3     51.5        1.9      0.1
DOLCE-Lite    499               100      100         100      100         37.3     24.6
Table 6: Comparison of Different Modularization Algorithms
As a test suite, we have collected a set of well-known ontologies available on the Web,
which we divided into two groups:
Simple. In this group, we have included the National Cancer Institute (NCI) Ontology,9 the SUMO Upper Ontology,10 the Gene Ontology (GO),11 and the SNOMED Ontology.12
These ontologies are expressed in a simple ontology language and are of a simple structure;
in particular, they do not contain GCIs, but only definitions.
Complex. This group contains the well-known GALEN ontology (GALEN-Full),13 the DOLCE upper ontology (DOLCE-Lite),14 and NASA's Semantic Web for Earth and Environmental Terminology (SWEET).15 These ontologies are complex since they use many
constructors from OWL DL and/or include a significant number of GCIs. In the case of
GALEN, we have also considered a version GALEN-Small that has commonly been used as
a benchmark for OWL reasoners. This ontology is almost 10 times smaller than the original
GALEN-Full ontology, yet similar in structure.
Since there is no benchmark for ontology modularization and only a few use cases are
available, there is no systematic way of evaluating modularization procedures. Therefore
we have designed a simple experimental setup which, even if it does not necessarily reflect an actual ontology reuse scenario, should give an idea of typical module sizes. For each
ontology, we took the set of its atomic concepts and extracted modules for every atomic
concept. We compare the maximal and average sizes of the extracted modules.
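A possible harness for this setup, assuming that module size is reported as a percentage of the number of atomic concepts and with a stub extractor standing in for algorithms A1–A3, is sketched below.

```python
# For each atomic concept, extract one module and report the maximal and average module
# size as a percentage of the number of atomic concepts in the ontology.

def module_size_stats(atomic_concepts, extract_module):
    """`extract_module` is any function mapping a seed concept to a set of concepts/axioms."""
    sizes = [len(extract_module(a)) for a in atomic_concepts]
    n = len(atomic_concepts)
    return 100.0 * max(sizes) / n, 100.0 * (sum(sizes) / len(sizes)) / n

# Toy usage with a stub extractor; in the experiments this role is played by A1, A2 or A3.
concepts = ["A", "B", "C", "D"]
stub = lambda a: {a} if a != "A" else {"A", "B"}   # pretend A's module has two concepts
print(module_size_stats(concepts, stub))            # (50.0, 31.25)
```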
It is worth emphasizing here that our algorithm A3 does not just extract a module for
the input atomic concept: the extracted fragment is also a module for its whole signature,
which typically includes a fair amount of other concepts and roles.
9. http://www.mindswap.org/2003/CancerOntology/nciOncology.owl
10. http://ontology.teknowledge.com/
11. http://www.geneontology.org
12. http://www.snomed.org
13. http://www.openclinical.org/prj_galen.html
14. http://www.loa-cnr.it/DOLCE.html
15. http://sweet.jpl.nasa.gov/ontology/
Figure 7: The Module Extraction Functionality in Swoop: (a) the concepts DNA Sequence and Micro Anatomy in NCI; (b) the Õ^{r←∅}_{A←∅}(S)-based module for DNA Sequence in NCI; (c) the Õ^{r←∆×∆}_{A←∆}(S)-based module for Micro Anatomy in the fragment of panel (b).
The results we have obtained are summarized in Table 6. The table provides the size
of the largest module and the average size of the modules obtained using each of these
algorithms. In the table, we can clearly see that locality-based modules are significantly
smaller than the ones obtained using the other methods; in particular, in the case of SUMO,
DOLCE, GALEN and SNOMED, the algorithms A1 and A2 retrieve the whole ontology as
the module for each input signature. In contrast, the modules we obtain using our algorithm
are significantly smaller than the size of the input ontology.
For NCI, SNOMED, GO and SUMO, we have obtained very small locality-based mo-
dules. This can be explained by the fact that these ontologies, even if large, are simple
in structure and logical expressivity. For example, in SNOMED, the largest locality-based
module obtained is approximately 0.5% of the size of the ontology, and the average size of
the modules is 1/10 of the size of the largest module. In fact, most of the modules we have
obtained for these ontologies contain less than 40 atomic concepts.
For GALEN, SWEET and DOLCE, the locality-based modules are larger. Indeed, the
largest module in GALEN-Small is 1/10 of the size of the ontology, as opposed to 1/200
in the case of SNOMED. For DOLCE, the modules are even bigger—1/3 of the size of
the ontology—which indicates that the dependencies between the different concepts in the
ontology are very strong and complicated. The SWEET ontology is an exception: even
though the ontology uses most of the constructors available in OWL, the ontology is heavily
underspecified, which yields small modules.
In Figure 6, we have a more detailed analysis of the modules for NCI, SNOMED,
GALEN-Small and GALEN-Full. Here, the X-axis represents the size ranges of the ob-
tained modules and the Y-axis the number of modules whose size is within the given range.
The plots thus give an idea of the distribution for the sizes of the different modules.
For SNOMED, NCI and GALEN-Small, we can observe that the size of the modules
follows a smooth distribution. In contrast, for GALEN-Full, we have obtained a large number
of small modules and a significant number of very big ones, but no medium-sized modules
in-between. This abrupt distribution indicates the presence of a big cycle of dependencies in
the ontology. The presence of this cycle can be spotted more clearly in Figure 6(f); the figure
shows that there is a large number of modules of size between 6515 and 6535 concepts.
This cycle does not occur in the simplified version of GALEN and thus we obtain a smooth
distribution for that case. In contrast, in Figure 6(e) we can see that the distribution for
the “small” modules in GALEN-Full is smooth and much more similar to the one for the
simplified version of GALEN.
The considerable differences in the size of the modules extracted by algorithms A1 –
A3 are due to the fact that these algorithms extract modules according to different require-
ments. Algorithm A1 produces a fragment of the ontology that contains the input atomic
concept and is syntactically separated from the rest of the axioms—that is, the fragment
and the rest of the ontology have disjoint signatures. Algorithm A2 extracts a fragment of
the ontology that is a module for the input atomic concept and is additionally semantically
separated from the rest of the ontology: no entailment between an atomic concept in the
module and an atomic concept not in the module should hold in the original ontology. Since
our algorithm is based on weaker requirements, it is to be expected that it extracts smaller
modules. What is surprising is that the difference in the size of modules is so significant.
In order to explore the use of our results for ontology design and analysis, we have
integrated our algorithm for extracting modules in the ontology editor Swoop (Kalyanpur
et al., 2006). The user interface of Swoop allows for the selection of an input signature and
the retrieval of the corresponding module.16
Figure 7a shows the classification of the concepts DNA Sequence and Micro Anatomy in the NCI ontology. Figure 7b shows the minimal Õ^{r←∅}_{A←∅}()-based module for DNA Sequence, as obtained in Swoop. Recall that, according to Corollary 47, an Õ^{r←∅}_{A←∅}()-based module for an atomic concept contains all necessary axioms for, at least, all its (entailed) super-concepts in O. Thus, this module can be seen as the “upper ontology” for A in O. In fact, Figure 7 shows that this module contains only the concepts in the “path” from DNA Sequence to the top-level concept Anatomy Kind. This suggests that the knowledge in NCI about the particular concept DNA Sequence is very shallow, in the sense that NCI only “knows” that a DNA Sequence is a macromolecular structure, which, in the end, is an anatomy kind. If one wants to refine the module by including only the information in the ontology necessary to entail the “path” from DNA Sequence to Micro Anatomy, one could extract the Õ^{r←∆×∆}_{A←∆}()-based module for Micro Anatomy in the fragment of Figure 7b. By Corollary 47, this module contains all the sub-concepts of Micro Anatomy in the previously extracted module. The resulting module is shown in Figure 7c.
16. The tool can be downloaded at http://code.google.com/p/swoop/
9. Conclusion
In this paper, we have proposed a set of reasoning problems that are relevant for ontology
reuse. We have established the relationships between these problems and studied their com-
putability. Using existing results (Lutz et al., 2007) and the results obtained in Section 4, we
have shown that these problems are undecidable or algorithmically unsolvable for the logic
underlying OWL DL. We have dealt with these problems by defining sufficient conditions
for a solution to exist, which can be computed in practice. We have introduced and studied
the notion of a safety class, which characterizes any sufficiency condition for safety of an
ontology w.r.t. a signature. In addition, we have used safety classes to extract modules from
ontologies.
For future work, we would like to study other approximations which can produce small
modules in complex ontologies like GALEN, and exploit modules to optimize ontology
reasoning.
References
Baader, F., Brandt, S., & Lutz, C. (2005). Pushing the EL envelope. In IJCAI-05, Pro-
ceedings of the Nineteenth International Joint Conference on Artificial Intelligence,
Edinburgh, Scotland, UK, July 30-August 5, 2005, pp. 364–370. Professional Book
Center.
Baader, F., Calvanese, D., McGuinness, D. L., Nardi, D., & Patel-Schneider, P. F. (Eds.).
(2003). The Description Logic Handbook: Theory, Implementation, and Applications.
Cambridge University Press.
Bao, J., Caragea, D., & Honavar, V. (2006). On the semantics of linking and importing in
modular ontologies. In Proceedings of the 5th International Semantic Web Conference
(ISWC-2006), Athens, GA, USA, November 5-9, 2006, Vol. 4273 of Lecture Notes in
Computer Science, pp. 72–86.
Börger, E., Grädel, E., & Gurevich, Y. (1997). The Classical Decision Problem. Perspectives
of Mathematical Logic. Springer-Verlag. Second printing (Universitext) 2001.
Borgida, A., & Serafini, L. (2003). Distributed description logics: Assimilating information
from peer sources. J. Data Semantics, 1, 153–184.
Cuenca Grau, B., Horrocks, I., Kazakov, Y., & Sattler, U. (2007). A logical framework
for modularity of ontologies. In IJCAI-07, Proceedings of the Twentieth International
Joint Conference on Artificial Intelligence, Hyderabad, India, January 2007, pp. 298–
304. AAAI.
Cuenca Grau, B., & Kutz, O. (2007). Modular ontology languages revisited. In Proceedings
of the Workshop on Semantic Web for Collaborative Knowledge Acquisition, Hyder-
abad, India, January 5, 2007.
Cuenca Grau, B., Parsia, B., & Sirin, E. (2006a). Combining OWL ontologies using c-
connections. J. Web Sem., 4(1), 40–59.
Cuenca Grau, B., Parsia, B., Sirin, E., & Kalyanpur, A. (2006b). Modularity and web ontolo-
gies. In Proceedings of the Tenth International Conference on Principles of Knowledge
Representation and Reasoning (KR-2006), Lake District of the United Kingdom, June
2-5, 2006, pp. 198–209. AAAI Press.
Cuenca Grau, B., Horrocks, I., Kazakov, Y., & Sattler, U. (2007). Just the right amount:
extracting modules from ontologies. In Proceedings of the 16th International Confe-
rence on World Wide Web (WWW-2007), Banff, Alberta, Canada, May 8-12, 2007,
pp. 717–726. ACM.
Cuenca Grau, B., Horrocks, I., Kutz, O., & Sattler, U. (2006). Will my ontologies fit
together?. In Proceedings of the 2006 International Workshop on Description Logics
(DL-2006), Windermere, Lake District, UK, May 30 - June 1, 2006, Vol. 189 of CEUR
Workshop Proceedings. CEUR-WS.org.
Gardiner, T., Tsarkov, D., & Horrocks, I. (2006). Framework for an automated compari-
son of description logic reasoners. In Proceedings of the 5th International Semantic
Web Conference (ISWC-2006), Athens, GA, USA, November 5-9, 2006, Vol. 4273 of
Lecture Notes in Computer Science, pp. 654–667. Springer.
Ghilardi, S., Lutz, C., & Wolter, F. (2006). Did I damage my ontology? a case for con-
servative extensions in description logics. In Proceedings of the Tenth International
Conference on Principles of Knowledge Representation and Reasoning (KR-2006),
Lake District of the United Kingdom, June 2-5, 2006, pp. 187–197. AAAI Press.
Horrocks, I., Patel-Schneider, P. F., & van Harmelen, F. (2003). From SHIQ and RDF to
OWL: the making of a web ontology language. J. Web Sem., 1(1), 7–26.
Horrocks, I., & Sattler, U. (2005). A tableaux decision procedure for SHOIQ. In Proceedings
of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI-05),
Edinburgh, Scotland, UK, July 30-August 5, 2005, pp. 448–453. Professional Book
Center.
Kalfoglou, Y., & Schorlemmer, M. (2003). Ontology mapping: The state of the art. The
Knowledge Engineering Review, 18, 1–31.
Kalyanpur, A., Parsia, B., Sirin, E., Cuenca Grau, B., & Hendler, J. A. (2006). Swoop: A
web ontology editing browser. J. Web Sem., 4(2), 144–153.
Lutz, C., Walther, D., & Wolter, F. (2007). Conservative extensions in expressive description
logics. In Proceedings of the Twentieth International Joint Conference on Artificial
Intelligence (IJCAI-07), Hyderabad, India, January 2007, pp. 453–459. AAAI.
Lutz, C., & Wolter, F. (2007). Conservative extensions in the lightweight description logic
EL. In Proceedings of the 21st International Conference on Automated Deduction
(CADE-21), Bremen, Germany, July 17-20, 2007, Vol. 4603 of Lecture Notes in Com-
puter Science, pp. 84–99. Springer.
Möller, R., & Haarslev, V. (2003). Description logic systems. In The Description Logic
Handbook, chap. 8, pp. 282–305. Cambridge University Press.
Motik, B. (2006). Reasoning in Description Logics using Resolution and Deductive
Databases. Ph.D. thesis, Universität Karlsruhe (TH), Karlsruhe, Germany.
Noy, N. F. (2004a). Semantic integration: A survey of ontology-based approaches. SIGMOD
Record, 33(4), 65–70.
Noy, N. F. (2004b). Tools for mapping and merging ontologies. In Staab, & Studer (Staab
& Studer, 2004), pp. 365–384.
Noy, N., & Musen, M. (2003). The PROMPT suite: Interactive tools for ontology mapping
and merging. Int. Journal of Human-Computer Studies, 59(6).
Patel-Schneider, P., Hayes, P., & Horrocks, I. (2004). Web ontology language OWL Abstract
Syntax and Semantics. W3C Recommendation.
Rector, A., & Rogers, J. (1999). Ontological issues in using a description logic to represent
medical concepts: Experience from GALEN. In IMIA WG6 Workshop, Proceedings.
Schmidt-Schauß, M., & Smolka, G. (1991). Attributive concept descriptions with comple-
ments. Artificial Intelligence, 48(1), 1–26.
Seidenberg, J., & Rector, A. L. (2006). Web ontology segmentation: analysis, classification
and use. In Proceedings of the 15th international conference on World Wide Web
(WWW-2006), Edinburgh, Scotland, UK, May 23-26, 2006, pp. 13–22. ACM.
Sirin, E., & Parsia, B. (2004). Pellet system description. In Proceedings of the 2004 In-
ternational Workshop on Description Logics (DL2004), Whistler, British Columbia,
Canada, June 6-8, 2004, Vol. 104 of CEUR Workshop Proceedings. CEUR-WS.org.
Staab, S., & Studer, R. (Eds.). (2004). Handbook on Ontologies. International Handbooks
on Information Systems. Springer.
Stuckenschmidt, H., & Klein, M. (2004). Structure-based partitioning of large class hier-
archies. In Proceedings of the Third International Semantic Web Conference (ISWC-
2004), Hiroshima, Japan, November 7-11, 2004, Vol. 3298 of Lecture Notes in Com-
puter Science, pp. 289–303. Springer.
Tobies, S. (2000). The complexity of reasoning with cardinality restrictions and nominals
in expressive description logics. J. Artif. Intell. Res. (JAIR), 12, 199–217.
Tsarkov, D., & Horrocks, I. (2006). FaCT++ description logic reasoner: System description.
In Proceedings of the Third International Joint Conference on Automated Reasoning
(IJCAR 2006), Seattle, WA, USA, August 17-20, 2006, Vol. 4130 of Lecture Notes in
Computer Science, pp. 292–297. Springer.
318

Cuenca Grau, Horrocks, Kazakov, & Sattler

2007), which has been motivated by the above-mentioned application needs. In this paper, we focus on the use of modularity to support the partial reuse of ontologies. In particular, we consider the scenario in which we are developing an ontology P and want to reuse a set S of symbols from a “foreign” ontology Q without changing their meaning. For example, suppose that an ontology engineer is building an ontology about research projects, which specifies different types of projects according to the research topic they focus on. The ontology engineer in charge of the projects ontology P may use terms such as Cystic Fibrosis and Genetic Disorder in his descriptions of medical research projects. The ontology engineer is an expert on research projects; he may be unfamiliar, however, with most of the topics the projects cover and, in particular, with the terms Cystic Fibrosis and Genetic Disorder. In order to complete the projects ontology with suitable definitions for these medical terms, he decides to reuse the knowledge about these subjects from a wellestablished medical ontology Q. The most straightforward way to reuse these concepts is to construct the logical union P ∪ Q of the axioms in P and Q. It is reasonable to assume that the additional knowledge about the medical terms used in both P and Q will have implications on the meaning of the projects defined in P; indeed, the additional knowledge about the reused terms provides new information about medical research projects which are defined using these medical terms. Less intuitive is the fact that importing Q may also result in new entailments concerning the reused symbols, namely Cystic Fibrosis and Genetic Disorder. Since the ontology engineer of the projects ontology is not an expert in medicine and relies on the designers of Q, it is to be expected that the meaning of the reused symbols is completely specified in Q; that is, the fact that these symbols are used in the projects ontology P should not imply that their original “meaning” in Q changes. If P does not change the meaning of these symbols in Q, we say that P ∪ Q is a conservative extension of Q In realistic application scenarios, it is often unreasonable to assume the foreign ontology Q to be fixed; that is, Q may evolve beyond the control of the modelers of P. The ontology engineers in charge of P may not be authorized to access all the information in Q or, most importantly, they may decide at a later time to reuse the symbols Cystic Fibrosis and Genetic Disorder from a medical ontology other than Q. For application scenarios in which the external ontology Q may change, it is reasonable to “abstract” from the particular Q under consideration. In particular, given a set S of “external symbols”, the fact that the axioms in P do not change the meaning of any symbol in S should be independent of the particular meaning ascribed to these symbols by Q. In that case, we will say that P is safe for S. Moreover, even if P “safely” reuses a set of symbols from an ontology Q, it may still be the case that Q is a large ontology. In particular, in our example, the foreign medical ontology may be huge, and importing the whole ontology would make the consequences of the additional information costly to compute and difficult for our ontology engineers in charge of the projects ontology (who are not medical experts) to understand. In practice, therefore, one may need to extract a module Q1 of Q that includes only the relevant information. 
Ideally, this module should be as small as possible while still guarantee to capture the meaning of the terms used; that is, when answering queries against the research projects ontology, importing the module Q1 would give exactly the same answers as if the whole medical ontology Q had been imported. In this case, importing the module will have the
274

Modular Reuse of Ontologies: Theory and Practice

same observable effect on the projects ontology as importing the entire ontology. Furthermore, the fact that Q1 is a module in Q should be independent of the particular P under consideration. The contributions of this paper are as follows: 1. We propose a set of tasks that are relevant to ontology reuse and formalize them as reasoning problems. To this end, we introduce the notions of conservative extension, safety and module for a very general class of logic-based ontology languages. 2. We investigate the general properties of and relationships between the notions of conservative extension, safety, and module and use these properties to study the relationships between the relevant reasoning problems we have previously identified. 3. We consider Description Logics (DLs), which provide the formal underpinning of the W3C Web Ontology Language (OWL), and study the computability of our tasks. We show that all the tasks we consider are undecidable or algorithmically unsolvable for the description logic underlying OWL DL—the most expressive dialect of OWL that has a direct correspondence to description logics. 4. We consider the problem of deciding safety of an ontology for a signature. Given that this problem is undecidable for OWL DL, we identify sufficient conditions for safety, which are decidable for OWL DL—that is, if an ontology satisfies our conditions then it is safe; the converse, however, does not necessarily hold. We propose the notion of a safety class, which characterizes any sufficiency condition for safety, and identify a family of safety classes–called locality—which enjoys a collection of desirable properties. 5. We next apply the notion of a safety class to the task of extracting modules from ontologies; we provide various modularization algorithms that are appropriate to the properties of the particular safety class in use. 6. We present empirical evidence of the practical benefits of our techniques for safety checking and module extraction. This paper extends the results in our previous work (Cuenca Grau, Horrocks, Kutz, & Sattler, 2006; Cuenca Grau et al., 2007; Cuenca Grau, Horrocks, Kazakov, & Sattler, 2007).

2. Preliminaries
In this section we introduce description logics (DLs) (Baader, Calvanese, McGuinness, Nardi, & Patel-Schneider, 2003), a family of knowledge representation formalisms which underlie modern ontology languages, such as OWL DL (Patel-Schneider, Hayes, & Horrocks, 2004). A hierarchy of commonly-used description logics is summarized in Table 1. The syntax of a description logic L is given by a signature and a set of constructors. A signature (or vocabulary) Sg of a DL is the (disjoint) union of countably infinite sets AC of atomic concepts (A, B, . . . ) representing sets of elements, AR of atomic roles (r, s, . . . ) representing binary relations between elements, and Ind of individuals (a, b, c, . . . ) representing constants. We assume the signature to be fixed for every DL.
275

Cuenca Grau, Horrocks, Kazakov, & Sattler

Rol RBox EL r ALC – – S – – Trans(r) + I r− + H R 1 R2 + F Funct(R) + N ( n S) + Q ( n S.C) + O {i} Here r ∈ AR, A ∈ AC, a, b ∈ Ind, R(i) ∈ Rol, C(i) ∈ Con, n ≥ 1 and S ∈ Rol is a simple role (see (Horrocks & Sattler, 2005)). Table 1: The hierarchy of standard description logics

DLs

Constructors Con , A, C1 C2 , ∃R.C – –, ¬C – –

Axioms [ Ax ] TBox ABox A ≡ C, C1 C2 a : C, r(a, b) – – – – – – – –

Every DL provides constructors for defining the set Rol of (general) roles (R, S, . . . ), the set Con of (general) concepts (C, D, . . . ), and the set Ax of axioms (α, β, . . . ) which is a union of role axioms (RBox), terminological axioms (TBox) and assertions (ABox). EL (Baader, Brandt, & Lutz, 2005) is a simple description logic which allows one to construct complex concepts using conjunction C1 C2 and existential restriction ∃R.C starting from atomic concepts A, roles R and the top concept . EL provides no role constructors and no role axioms; thus, every role R in EL is atomic. The TBox axioms of EL can be either concept definitions A ≡ C or general concept inclusion axioms (GCIs) C1 C2 . EL assertions are either concept assertions a : C or role assertions r(a, b). In this paper we assume the concept definition A ≡ C is an abbreviation for two GCIs A C and C A. The basic description logic ALC (Schmidt-Schauß & Smolka, 1991) is obtained from EL by adding the concept negation constructor ¬C. We introduce some additional constructors as abbreviations: the bottom concept ⊥ is a shortcut for ¬ , the concept disjunction C1 C2 stands for ¬(¬C1 ¬C2 ), and the value restriction ∀R.C stands for ¬(∃R.¬C). In contrast to EL, ALC can express contradiction axioms like ⊥. The logic S is an extension of ALC where, additionally, some atomic roles can be declared to be transitive using a role axiom Trans(r). Further extensions of description logics add features such as inverse roles r− (indicated by appending a letter I to the name of the logic), role inclusion axioms (RIs) R1 R2 (+H), functional roles Funct(R) (+F), number restrictions ( n S), with n ≥ 1, (+N ), qualified number restrictions ( n S.C), with n ≥ 1, (+Q)1 , and nominals {a} (+O). Nominals make it possible to construct a concept representing a singleton set {a} (a nominal concept) from an individual a. These extensions can be used in different combinations; for example ALCO is an extension of ALC with nominals; SHIQ is an extension of S with role hierarchies,
1. the dual constructors ( n S) and ( n S.C) are abbreviations for ¬( n + 1 S) and ¬( n + 1 S.¬C), respectively

276

and ·I is the interpretation function that assigns: to every A ∈ AC a subset AI ⊆ ∆I . An axiom α is a tautology if it is valid in the set of all interpretations (or. An interpretation I is a pair I = (∆I . I I R2 iff R1 ⊆ R2 . we say that an axiom α (an ontology O) is valid in I if every interpretation I ∈ I is a model of α (respectively O). ·I ). and SHOIQ is the DL that uses all the constructors and axiom types we have presented.C)I = { x ∈ ∆I | {y ∈ ∆I | x. such as OWL. 2003). We say that two sets of interpretations I and J are equal modulo S (notation: I|S = J|S ) if for every I ∈ I there exits J ∈ J such that J |S = I|S and for every J ∈ J there exists I ∈ I such that I|S = J |S . are based on description logics and. z ∈ ∆I [ x. I |= Trans(r) iff ∀x. OWL DL corresponds to SHOIN (Horrocks. I |= Funct(R) iff ∀x. I |= r(a. z ∈ rI ]. In particular. The main reasoning task for ontologies is entailment: given an ontology O and an axiom α. z ∈ RI ⇒ y = z ]. 277 . An ontology O implies an axiom α (written O |= α) if I |= α for every model I of O. We say that two interpretations I = (∆I . x. In this paper. AR and Ind are not defined by the interpretation I but assumed to be fixed for the ontology language (DL). check if O implies α. y ∈ RI ∧ y ∈ C I } ∆I \ C I { x. x ∈ rI } ( n R)I = { x ∈ ∆I | {y ∈ ∆I | x. equivalently. and to every a ∈ Ind an element aI ∈ ∆I . An interpretation I is a model of an ontology O if I satisfies all axioms in O.Modular Reuse of Ontologies: Theory and Practice inverse roles and qualified number restrictions. y ∈ RI ∧ x.C)I (¬C)I (r− )I = = = = = ∆ C I ∩ DI {x ∈ ∆I | ∃y. y. to a certain extent. z ∈ rI ⇒ x. b) iff aI . y ∈ rI ∧ y. atomic roles and individuals that occur in O (respectively in α). are syntactic variants thereof. y | y. Note that the sets AC. where ∆I is a non-empty set. I |= a : C iff aI ∈ C I . called the domain of the interpretation. ·J ) coincide on the subset S of the signature (notation: I|S = J |S ) if ∆I = ∆J and X I = X J for every X ∈ S. to every r ∈ AR a binary relation rI ⊆ ∆I × ∆I . Patel-Schneider. ·I ) and J = (∆J . Given a set I of interpretations. & van Harmelen. bI ∈ rI . or I is a model of α) is defined as follows: I |= C1 I |= R1 I I C2 iff C1 ⊆ C2 . The interpretation function ·I is extended to complex roles and concepts via DLconstructors as follows: ( )I (C D)I (∃R. is implied by the empty ontology). we assume an ontology O based on a description logic L to be a finite set of axioms in L. Modern ontology languages. y ∈ RI ∧ y ∈ C I } ≥ n } {a}I = {aI } The satisfaction relation I |= α between an interpretation I and a DL axiom α (read as I satisfies α. The logical entailment |= is defined using the usual Tarski-style set-theoretic semantics for description logics as follows. z ∈ ∆I [ x. y ∈ RI } ≥ n } ( n R. The signature of an ontology O (of an axiom α) is the set Sig(O) (Sig(α)) of atomic concepts. y.

2). & Sattler Ontology of medical research projects P: P1 P2 P3 P4 E1 E2 Genetic Disorder Project ≡ Project EUProject ∃has Focus. with the terms Cystic Fibrosis and Genetic Disorder mentioned in P1 and P2.Genetic Origin ∃located In. in Section 3. as given by the axioms P1 and P2 in Figure 1.Cystic Fibrosis ∃has Focus.1 A Motivating Example Suppose that an ontology engineer is in charge of a SHOIQ ontology on research projects. Within this section. safety and modules apply.Genetic Disorder ∃has Focus. we motivate and define the reasoning tasks to be investigated in the remainder of the paper. The first one describes projects about genetic disorders. In order to complete the projects ontology with suitable definitions 278 . for example.Cuenca Grau. The ontology engineer is an expert on research projects: he knows.Cystic Fibrosis ∃has Origin. Based on this application scenario. the second one describes European projects about cystic fibrosis. that every instance of EUProject must be an instance of Project (the concept-inclusion axiom P3) and that the role has Focus can be applied only to instances of Project (the domain axiom P4).Pancreas Genetic Fibrosis ∃associated With.Pancreas Genetic Disorder Immuno Protein Gene Figure 1: Reusing medical terminology in an ontology on research projects 3.Cystic Fibrosis Cystic Fibrosis EUProject ≡ EUProject Project Project (Genetic Disorder :: Cystic Fibrosis) ⊥ ∀ has Focus.2 and 3.Genetic Disorder Ontology of medical terms Q: M1 Cystic Fibrosis ≡ Fibrosis M2 Genetic Fibrosis ≡ Fibrosis M3 Fibrosis M4 Genetic Fibrosis M5 DEFBI Gene ∃located In. These notions are defined relative to a language L. Horrocks. our tasks are based on the notions of a conservative extension (Section 3. however. with most of the topics the projects cover and. 3. we will define formally the class of ontology languages for which the given definitions of conservative extensions. in particular.6. all the tasks we consider in this paper are summarized in Table 2. Assume that the ontology engineer defines two concepts—Genetic Disorder Project and Cystic Fibrosis EUProject—in his ontology P. In particular. which specifies different types of projects according to the research topic they are concerned with. He may be unfamiliar. For the convenience of the reader. we assume that L is an ontology language based on a description logic.Genetic Origin ∃has Origin. safety (Sections 3. Ontology Integration and Knowledge Reuse In this section.4). Kazakov. we elaborate on the ontology reuse scenario sketched in Section 1.3) and module (Section 3. Project :: ∃has Focus.

like β. that the meaning of the terms defined in Q changes as a consequence of the import since these terms are supposed to be completely specified within Q. Using inclusion α from (1) and axioms P1–P3 from ontology P we can now prove that every instance of Cystic Fibrosis EUProject is also an instance of Genetic Disorder Project: P ∪ Q |= β := (Cystic Fibrosis EUProject Genetic Disorder Project) (2) This inclusion β. also has Focus on Genetic Disorder” Every project (3) (4) Note that the statements (3) and (4) can be thought of as adding new information about projects and. intuitively. For example. Such a side effect would be highly undesirable for the modeling of ontology P since the ontology engineer of P might not be an expert on the subject of Q and is not supposed to alter the meaning of the terms defined in Q not even implicitly. the ontology engineer has introduced modeling errors and. Importing additional axioms into an ontology may result in new logical consequences. It is natural to expect that entailments like α in (1) from an imported ontology Q result in new logical consequences. P |= β. however. axioms E1 and E2 do not correspond to (3) and (4): E1 actually formalizes the following statement: “Every instance of Project is different from every common instance of Genetic Disorder and Cystic Fibrosis”. α follows from axioms α1 and M4. At this point.” ::: “:::::::::::::: that has Focus on Cystic Fibrosis. as a consequence. suppose that the ontology engineer has learned about the concepts Genetic Disorder and Cystic Fibrosis from the ontology Q (including the inclusion (1)) and has decided to introduce additional axioms formalizing the following statements: “Every instance of Project is different from every instance of Genetic Disorder and every instance of Cystic Fibrosis.Modular Reuse of Ontologies: Theory and Practice for these medical terms. however. One would not expect. Suppose that Cystic Fibrosis and Genetic Disorder are described in an ontology Q containing the axioms M1-M5 in Figure 1. The most straightforward way to reuse these concepts is to import in P the ontology Q—that is. over the terms defined in the main ontology P. to add the axioms from Q to the axioms in P and work with the extended ontology P ∪ Q. might change during the import. and E2 expresses 279 . it is easy to see that axioms M1–M4 in Q imply that every instance of Cystic Fibrosis is an instance of Genetic Disorder: Q |= α := (Cystic Fibrosis Genetic Disorder) (1) Indeed. In order to illustrate such a situation. the concept inclusion α1 := (Cystic Fibrosis Genetic Fibrosis) follows from axioms M1 and M2 as well as from axioms M1 and M3. even though it concerns the terms of primary scope in his projects ontology P. Suppose the ontology engineer has formalized the statements (3) and (4) in ontology P using axioms E1 and E2 respectively. in (2). The ontology engineer might be not aware of Entailment (2). does not follow from P alone—that is. perhaps due to modeling errors. The meaning of the reused terms. they should not change or constrain the meaning of the medical terms. he decides to reuse the knowledge about these subjects from a wellestablished and widely-used medical ontology. however.

or has Focus only on Cystic Fibrosis. Lutz. an important requirement for the reuse of an ontology Q within an ontology P should be that P ∪ Q produces exactly the same logical consequences over the vocabulary of Q as Q alone does. We say that O is a deductive conservative extension of O1 w. from (1) and (5) we obtain P ∪ Q |= δ := (Cystic Fibrosis ⊥). then O is a deductive S1 -conservative extension of O1 w. is already a logical consequence of O1 . & Sattler that “Every object that either has Focus on nothing. if for every axiom α over L with Sig(α) ⊆ S. this implies (5).. the axioms E1 and E2 together with axioms P1-P4 from P imply new axioms about the concepts Cystic Fibrosis and Genetic Disorder. also has Focus on Genetic Disorder”. Horrocks. In the next section we discuss how to formalize the effects we consider undesirable. On the other hand.(Genetic Disorder ¬Cystic Fibrosis) (6) The inclusion (6) and P4 imply that every element in the domain must be a project—that is. but also cause inconsistencies when used together with the imported axioms from Q. To summarize.t. The notion of a deductive conservative extension can be directly applied to our ontology reuse scenario. The axioms E1 and E2 not only imply new statements about the medical terms. 2007). L for every S1 ⊆ S. in fact.2 Conservative Extensions and Safety for an Ontology As argued in the previous section.t.Cuenca Grau. ♦ In other words. and S a signature over L. the additional axioms in O \ O1 do not result into new logical consequences over the vocabulary S. it is still a consequence of (3) which means that it should not constrain the meaning of the medical terms. we have seen that importing an external ontology can lead to undesirable side effects in our knowledge reuse scenario. together with axiom E1. Lutz et al. especially when they do not cause inconsistencies in the ontology.t.r. Definition 1 (Deductive Conservative Extension). Indeed. Kazakov. which has recently been investigated in the context of ontologies (Ghilardi. it constrains the meaning of medical terms.r. 280 .r.t. P |= ( Project). Note that. & Wolter. we have O |= α iff O1 |= α. E2 is not a consequence of (4) and. although axiom E1 does not correspond to fact (3). 2006. that is. These kinds of modeling errors are difficult to detect. We say that O is a deductive S-conservative extension of O1 w. L for S = Sig(O1 ). L.r. which expresses the inconsistency of the concept Cystic Fibrosis. Let O and O1 ⊆ O be two Lontologies. Now. L if O is a deductive S-conservative extension of O1 w. an ontology O is a deductive S-conservative extension of O1 for a signature S and language L if and only if every logical consequence α of O constructed using the language L and symbols only from S. Note that if O is a deductive S-conservative extension of O1 w. like the entailment of new axioms or even inconsistencies involving the reused vocabulary. Indeed. L.t.r. namely their disjointness: P |= γ := (Genetic Disorder Cystic Fibrosis ⊥) (5) The entailment (5) can be proved using axiom E2 which is equivalent to: ∃has Focus. 3. This requirement can be naturally formulated using the well-known notion of a conservative extension.

According to Definition 1. does not provide a complete characterization of deductive conservative extensions. if for every model I of O1 . L = ALC). L..t.r. Intuitively. It is possible. an ontology O is a model S-conservative extension of O1 if for every model of O1 one can find a model of O over the same domain which interprets the symbols from S in the same way. We have shown in Section 3. We say that O is a model conservative extension of O1 if O is a model S-conservative extension of O1 for S = Sig(O1 ).1 that. Deductive Conservative Extensions. given P consisting of axioms P1–P4. as given in Definition 1. For every two L-ontologies O. and E1 and an ontology Q consisting of axioms M1–M5 from Figure 1. however. Lutz et al. this means that P ∪ Q is not a deductive conservative extension of Q w.r. given L-ontologies O and O . L if O ∪ O is a deductive conservative extension of O w.Modular Reuse of Ontologies: Theory and Practice Definition 2 (Safety for an Ontology).r. L. this notion can be used for proving that an ontology is a deductive conservative extension of another. ALC. that is. ♦ The notion of model conservative extension in Definition 3 can be seen as a semantic counterpart for the notion of deductive conservative extension in Definition 1: the latter is defined in terms of logical entailment. Let O and O1 ⊆ O be two L-ontologies and S a signature over L. determine if O is safe for O w.t. 2007) 1. We say that O is a model S-conservative extension of O1 . we say that O is safe for O (or O imports O in a safe way) w. We demonstrate that P1 ∪ Q is a deductive conservative extension of Q. it is sufficient to show that P1 ∪ Q is a model conservative extension of Q. The following notion is useful for proving deductive conservative extensions: Definition 3 (Model Conservative Extension. and a signature S over L. O1 ⊆ O.r. L. 281 . but not vice versa: Proposition 4 (Model vs.r.t. but O is not a model conservative extension of O1 . and Q consisting of axioms M1–M5 from Figure 1.t. ♦ Hence.t. According to proposition 4. there exists an axiom δ = (Cystic Fibrosis ⊥) that uses only symbols in Sig(Q) such that Q |= δ but P ∪ Q |= δ.g. that is. P1 ∪ Q is a deductive conservative extension of Q. for every model I of Q there exists a model J of P1 ∪ Q such that I|S = J |S for S = Sig(Q). Given L-ontologies O and O .r. There exist two ALC ontologies O and O1 ⊆ O such that O is a deductive conservative extension of O1 w. E2. there exists a model J of O such that I|S = J |S . Lutz et al.. The notion of model conservative extension. Example 5 Consider an ontology P1 consisting of axioms P1–P4.t. 2007). to show that if the axiom E2 is removed from P then for the resulting ontology P1 = P \ {E2}. whereas the former is defined in terms of models. any language L in which δ can be expressed (e. E1. the first reasoning task relevant for our ontology reuse scenario can be formulated as follows: T1. however. if O is a model S-conservative extension of O1 then O is a deductive S-conservative extension of O1 w. 2.

it is possible to show a stronger result. given an L-ontology O and a signature S over L.t. As seen in Section 3.3 Safety of an Ontology for a Signature So far. 2007.r.t.r. we have that O is safe for O w. we refer the interested reader to the literature (Lutz et al. Consider an ontology O = {β1 . & Sattler The required model J can be constructed from I as follows: take J to be identical to I except for the interpretations of the atomic concepts Genetic Disorder Project. however. E1 are satisfied in J and hence J |= P1 . the ontology P consisting of axioms P1–P4. it is easy to see that O ∪ O is inconsistent. β2 }. or may even decide at a later time to reuse the medical terms (Cystic Fibrosis and Genetic Disorder) from another medical ontology instead. This makes it possible to continue developing P and Q independently. 282 . O ∪ O is not a deductive conservative extension of O . we have I|Sig(Q) = J |Sig(Q) and J |= Q. in our ontology reuse scenario we have assumed that the reused ontology Q is fixed and the axioms of Q are just copied into P during the import.t. Indeed E2. In practice. L. Hence. this means that O is not safe for S.t. L. since Sig(O) ∩ Sig(O ) ⊆ S. Let O be a L-ontology and S a signature over L. L.r. in our knowledge reuse scenario. the ontology engineers of the project ontology P may not be willing to depend on a particular version of Q. Therefore. namely. By Definition 6. Genetic Disorder}. EUProject and the atomic role has Focus.r. since Sig(P) ∩ Sig(Q) ⊆ S = {Cystic Fibrosis.t. does not import Q consisting of axioms M1–M5 from Figure 1 in a safe way for L = ALC. where β1 = ( Cystic Fibrosis) and β2 = (Genetic Disorder ⊥). O ∪ O is a deductive conservative extension of O w. which is not entailed by O .r. determine if O is safe for S w.r. Kazakov. We say that O is safe for S w. The associated reasoning problem is then formulated in the following way: T2. since the interpretation of the symbols in Q remains unchanged. L = ALC.t. ♦ 3. it is often convenient to keep Q separate from P and make its axioms available on demand via a reference. Genetic Disorder} w.Cuenca Grau.. a language L if and only if it imports in a safe way any ontology O written in L that shares only symbols from S with O. Hence. 455). For example. Moreover. the ontology P is not safe for S w. In order to formulate such condition. if for every L-ontology O with Sig(O) ∩ Sig(O ) ⊆ S. p. E2 from Figure 1.2. ♦ Intuitively. For an example of Statement 2 of the Proposition. Cystic Fibrosis EUProject. that is. L. Since E2 is equivalent to axiom (6). In fact. an ontology O is safe for a signature S w. L. that any ontology O containing axiom E2 is not safe for S = {Cystic Fibrosis. According to Definition 6. P1 ∪ Q is a model conservative extension of Q.t. E1. Horrocks. we abstract from the particular ontology Q to be imported and focus instead on the symbols from Q that are to be reused: Definition 6 (Safety for a Signature). β1 and β2 imply the contradiction α = ( ⊥). for many application scenarios it is important to develop a stronger safety condition for P which depends as little as possible on the particular ontology Q to be reused.r. all of which we interpret in J as the empty set. It is easy to check that all the axioms P1–P4. Project.

t. Hence J |= O ∪ O and I|S = J |S . By Proposition 8. In particular. Example 9 Let P1 be the ontology consisting of axioms P1–P4. however. for every interpretation I there exists a model J of O such that J |S = I|S . since S ⊆ Sig(Q). that is. that is. () Indeed. L. Intuitively. Since Sig(O) ∩ S = Sig(O) ∩ (S ∪ Sig(O )) ⊆ S ∪ (Sig(O) ∩ Sig(O )) ⊆ S and I|S = J1 |S . since J |S = I|S . and E1 from Figure 1.Modular Reuse of Ontologies: Theory and Practice It is clear how one could prove that an ontology O is not safe for a signature S: simply find an ontology O with Sig(O) ∩ Sig(O ) ⊆ S. Genetic Disorder} w. which proves ( ). since O is a model S-conservative extension of O1 = ∅. In order to show that O∪O is a model S -conservative extension of O1 ∪O according to Definition 3. We show that P1 is safe for S = {Cystic Fibrosis. such interpretation J always exists. L = SHOIQ. ♦ 283 .r. We need to demonstrate that O ∪ O is a deductive conservative extension of O w. O1 ⊆ O. Sig(O ) ⊆ S .r. and I is a model of O1 .t. Consider the model J obtained from I as in Example 5. such that O ∪ O is not a deductive conservative extension of O . as was required to prove ( ).t. we have J |= O . by Lemma 7. take any SHOIQ ontology O such that Sig(O) ∩ Sig(O ) ⊆ S. In particular. and O be L-ontologies and S a signature over L such that O is a model S-conservative extension of O1 and Sig(O) ∩ Sig(O ) ⊆ S. Then O ∪ O is a model S -conservative extension of O1 ∪ O for S = S ∪ Sig(O ). Then O is safe for S w. The following lemma introduces a property which relates the notion of model conservative extension with the notion of safety for a signature. we have J |= O. Since J |Sig(O) = J1 |Sig(O) and J1 |= O. () Since O is a model S-conservative extension of O1 . it says that the notion of model conservative extension is stable under expansion with new axioms provided they share only symbols from S with the original ontologies. Let J be an interpretation such that J |Sig(O) = J1 |Sig(O) and J |S = I|S . we have that O∪O is a model S -conservative extension of O1 ∪O = O for S = S ∪ Sig(O ). As shown in Example 5. Lemma 7 Let O. how one could prove that O is safe for S.t. We just construct a model J of O ∪ O such that I|S = J |S . In order to prove that O is safe for S w. It turns out that the notion of model conservative extensions can also be used for this purpose. Model Conservative Extensions) Let O be an L-ontology and S a signature over L such that O is a model S-conservative extension of the empty ontology O1 = ∅. we have that O ∪ O is a deductive conservative extension of O . Proof. in order to prove safety for P1 it is sufficient to demonstrate that P1 is a model S-conservative extension of the empty ontology. J is a model of P1 and I|Sig(Q) = J |Sig(Q) where Q consists of axioms M1–M5 from Figure 1. Lemma 7 allows us to identify a condition sufficient to ensure the safety of an ontology for a signature: Proposition 8 (Safety for a Signature vs. Proof. L according to Definition 6. and I |= O . since Sig(O ) ⊆ S . by Definition 3 there exists a model J1 of O such that I|S = J1 |S . we have I|S = J |S .r. L. and Sig(O)∩Sig(O ) ⊆ S. let I be a model of O1 ∪ O . It is not so clear. for every S-interpretation I there exists a model J of P1 such that I|S = J |S .r.

3.4 Extraction of Modules from Ontologies
In our example from Figure 1 the medical ontology Q is very small. Well established medical ontologies, however, can be very large and may describe subject matters in which the designer of P is not interested; for example, the medical ontology Q could contain information about genes, anatomy, surgical techniques, etc. Even if P imports Q without changing the meaning of the reused symbols, processing—that is, browsing, reasoning over, etc.—the resulting ontology P ∪ Q may be considerably harder than processing P alone. Ideally, one would like to extract a (hopefully small) fragment Q1 of the external medical ontology—a module—that describes just the concepts that are reused in P. Intuitively, when answering an arbitrary query over the signature of P, importing the module Q1 should give exactly the same answers as if the whole ontology Q had been imported. This intuition motivates the following definition:
Definition 10 (Module for an Ontology). Let O, O′ and O1 ⊆ O′ be L-ontologies. We say that O1 is a module for O in O′ w.r.t. L if O ∪ O′ is a deductive S-conservative extension of O ∪ O1 for S = Sig(O) w.r.t. L. ♦
As usual, the task of extracting modules from imported ontologies in our ontology reuse scenario can thus be formulated as follows:
T3. given L-ontologies O and O′, compute a module O1 for O in O′ w.r.t. L.
Example 11 Consider the ontology P1 consisting of axioms P1–P4, E1 and the ontology Q consisting of axioms M1–M5 from Figure 1. Recall that the axiom α from (1) is a consequence of axioms M1, M2 and M4, as well as of axioms M1, M3 and M4 from ontology Q; in particular, P1 ∪ Q |= α. For the subset Q0 of Q consisting of axioms M2, M3, M4 and M5, however, we have Q0 ⊭ α. It can be demonstrated that P1 ∪ Q0 is a deductive conservative extension of Q0; in particular P1 ∪ Q0 ⊭ α. But then, Sig(α) ⊆ S, P1 ∪ Q0 ⊭ α and P1 ∪ Q |= α; hence P1 ∪ Q is not a deductive S-conservative extension of P1 ∪ Q0 w.r.t. L = ALC for S = Sig(P1), and so, according to Definition 10, Q0 is not a module for P1 in Q w.r.t. L. Similarly, one can show that any subset of Q that does not imply α is not a module for P1 in Q w.r.t. L = ALC or its extensions. On the other hand, it is possible to show that both subsets Q1 = {M1, M2, M4} and Q2 = {M1, M3, M4} of Q are modules for P1 in Q w.r.t. L = SHOIQ; these sets of axioms are actually minimal subsets of Q that imply α. According to Definition 10, we need to demonstrate that P1 ∪ Q is a deductive S-conservative extension of both P1 ∪ Q1 and P1 ∪ Q2 for S = Sig(P1) w.r.t. L. To do so, we demonstrate a stronger fact, namely that P1 ∪ Q is a model S-conservative extension of P1 ∪ Q1 and of P1 ∪ Q2 for S, which is sufficient by Claim 1 of Proposition 4. In order to show that P1 ∪ Q is a model S-conservative extension of P1 ∪ Q1 for S = Sig(P1), consider any model I of P1 ∪ Q1. We need to construct a model J of P1 ∪ Q such that I|S = J|S. Let J be defined exactly as I except that the interpretations of the atomic concepts Fibrosis, Genetic Fibrosis, Pancreas and Genetic Origin are defined as the interpretation of Cystic Fibrosis in I, and the interpretations of the atomic roles located In and
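Example 11 also suggests a brute-force way of locating module candidates in this small setting: the modules for P1 in Q turn out to be exactly the subsets of Q that imply α, so one can enumerate subsets and ask a reasoner whether each of them entails α. The sketch below is only an illustration of that idea, not an algorithm from the paper; `entails(ontology, axiom)` is a hypothetical stand-in for a call to a DL reasoner.

```python
from itertools import combinations

def subsets_entailing(axioms, alpha, entails):
    """Return all subsets of `axioms` that entail `alpha`.

    `entails(ontology, axiom)` is an assumed oracle (e.g. a DL reasoner
    checking P1 ∪ Qi |= alpha); it is not implemented here.
    """
    axioms = list(axioms)
    result = []
    for k in range(len(axioms) + 1):
        for combo in combinations(axioms, k):
            if entails(set(combo), alpha):
                result.append(set(combo))
    return result

def minimal_subsets(subsets):
    """Keep only the subset-minimal elements (candidate minimal modules)."""
    return [s for s in subsets if not any(t < s for t in subsets)]
```

Note that the enumeration is exponential in the size of Q; it is only feasible for toy inputs and serves to make the definitions concrete.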

has Origin are defined as the identity relation. Since we do not modify the interpretation of the symbols in P1, J also satisfies all axioms from P1. It is easy to see that axioms M1–M3 and M5 are satisfied in J. Moreover, J is a model of M4, because Genetic Fibrosis and Genetic Disorder are interpreted in J like Cystic Fibrosis and Genetic Disorder in I, and I is a model of Q1, which implies the concept inclusion α = (Cystic Fibrosis ⊑ Genetic Disorder). Hence we have constructed a model J of P1 ∪ Q such that I|S = J|S; thus P1 ∪ Q is a model S-conservative extension of P1 ∪ Q1. In fact, the construction above works if we replace Q1 with any subset of Q that implies α. In particular, P1 ∪ Q is also a model S-conservative extension of P1 ∪ Q2. ♦
An algorithm implementing the task T3 can be used for extracting a module from an ontology Q imported into P prior to performing reasoning over the terms in P. Since the extraction of modules is potentially an expensive operation, however, when the ontology P is modified, the module has to be extracted again, since a module Q1 for P in Q might not necessarily be a module for the modified ontology. Therefore, it would be more convenient to extract a module only once and reuse it for any version of the ontology P that reuses the specified symbols from Q: in this way, it can be imported instead of Q into every ontology O that shares with Q only symbols from S. This idea motivates the following definition:
Definition 12 (Module for a Signature). Let O′ and O1 ⊆ O′ be L-ontologies and S a signature over L. We say that O1 is a module for S in O′ (or an S-module in O′) w.r.t. L if for every L-ontology O with Sig(O) ∩ Sig(O′) ⊆ S, we have that O1 is a module for O in O′ w.r.t. L. ♦
Intuitively, the notion of a module for a signature is a uniform analog of the notion of a module for an ontology, in a similar way as the notion of safety for a signature is a uniform analog of safety for an ontology. The reasoning task corresponding to Definition 12 can be formulated as follows:
T4. given an L-ontology O′ and a signature S over L, compute a module O1 for S in O′.
Continuing Example 11, we have demonstrated that the modules for P in Q are exactly those subsets of Q that imply α. Moreover, it is possible to demonstrate that any subset of Q that implies axiom α is in fact a module for S = {Cystic Fibrosis, Genetic Disorder} in Q. In order to prove this, we use the following sufficient condition based on the notion of model conservative extension:
Proposition 13 (Modules for a Signature vs. Model Conservative Extensions) Let O′ and O1 ⊆ O′ be L-ontologies and S a signature over L such that O′ is a model S-conservative extension of O1. Then O1 is a module for S in O′ w.r.t. L.
Proof. In order to prove that O1 is a module for S in O′ w.r.t. L according to Definition 12, take any SHOIQ ontology O such that Sig(O) ∩ Sig(O′) ⊆ S. We need to demonstrate that O1 is a module for O in O′ w.r.t. L, that is, according to Definition 10, that O ∪ O′ is a deductive S-conservative extension of O ∪ O1 for S = Sig(O) w.r.t. L. (∗)

but in extracting modules that are easy to process afterwards. It might be necessary to import such axioms into P in order not to lose essential information from Q. Note that 286 . In some applications it might be necessary to extract all minimal modules. S a signature and α an axiom over L. M3. In particular.r.Cuenca Grau. We say that α is an essential axiom for S in O w. whereas in others any minimal module suffices. Examples 11 and 14 demonstrate that a minimal module for an ontology or a signature is not necessarily unique: the ontology Q consisting of axioms M1–M5 has two minimal modules Q1 = {M1. since O is a model S-conservative extension of O1 . Example 14 Let Q and α be as in Example 11. the extracted modules should be as small as possible. and Sig(O ) ∩ Sig(O) ⊆ S. Genetic Disorder}. Let O and O be L-ontologies. Genetic Disorder} in Q.t.r. it is sufficient to demonstrate that Q is a model S-conservative extension of Q1 . that is. Ideally. We demonstrate that any subset Q1 of Q which implies α is a module for S = {Cystic Fibrosis. thus they never need not be imported into P. since S = Sig(O) ⊆ S . M4}. For example. It is easy to see that if the model J is constructed from I as in Example 11. which can be done by computing the union of all minimal modules. Depending on the application scenario. for the ontology P1 = {P1. we have that O ∪ O is a deductive S -conservative extension of O1 ∪ O w. Axioms that do not occur in a minimal module in Q are not essential for P because they can always be removed from every module of Q. but the axiom M5 is not essential. P2. These arguments motivate the following notion: Definition 15 (Essential Axiom). the module Q1 consisting of the axiom M1. that is. modules that contain no other module as a subset. then the required property holds. ♦ In our example above the axioms M1–M4 from Q are essential for the ontology P1 and for the signature S = {Cystic Fibrosis. since these are the minimal sets of axioms that imply axiom α = (Cystic Fibrosis Genetic Disorder).t.t. for every model I of Q1 there exists a model J of Q such that I|S = J |S . M2. L. by Lemma 7. Horrocks. L. P3. M4}.r.5 Minimal Modules and Essential Axioms One is usually not interested in extracting arbitrary modules from a reused ontology. and Q2 = {M1.r. Kazakov. According to Proposition 13. ♦ Note that a module for a signature S in Q does not necessarily contain all the axioms that contain symbols from S. & Sattler Indeed.t. 3.t. as was required to prove ( ). Also note that even in a minimal module like Q1 there might still be some axioms like M2 that do not mention symbols from S at all. L (or S-essential in O ) if α is contained in some minimal module for S in O w.r. In certain situations one might be interested in computing the set of essential axioms of an ontology. we have that O ∪ O is a model S -conservative extension of O1 ∪ O for S = S ∪ Sig(O). L. Genetic Disorder}. Hence it is reasonable to consider the problem of extracting minimal modules. E1} as well as for the signature S = {Cystic Fibrosis. L if α is contained in any minimal module in O for O w. one can consider several variations of tasks T3 and T4 for computing minimal modules. M2 and M4 from Q does not contain axiom M5 which mentions the atomic concept Cystic Fibrosis from S. We say that α is essential for O in O w. This is not true for the axioms that occur in minimal modules of Q.
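The notions of minimal module and essential axiom admit a direct, if naive, computation once all minimal modules are known: by Definition 15 an axiom is essential exactly when it occurs in some minimal module, so the set of essential axioms is the union of the minimal modules. The toy check below hard-codes the two minimal modules of Q from Example 14 purely for illustration.

```python
def essential_axioms(minimal_modules):
    """Essential axioms (Definition 15): those contained in some minimal
    module, i.e. the union of all minimal modules."""
    essential = set()
    for module in minimal_modules:
        essential |= module
    return essential

# Example 14 in miniature: Q has the minimal modules {M1, M2, M4} and
# {M1, M3, M4}, so M1-M4 are essential while M5 is not.
mods = [frozenset({"M1", "M2", "M4"}), frozenset({"M1", "M3", "M4"})]
assert essential_axioms(mods) == {"M1", "M2", "M3", "M4"}
assert "M5" not in essential_axioms(mods)
```

Computing the minimal modules themselves can, for small inputs, reuse the brute-force enumeration sketched after Example 11.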

computing the union of minimal modules might be easier than computing all the minimal modules, since one does not need to identify which axiom belongs to which minimal module.

Notation         Input       Task
T1               O, O′, L    Check if O is safe for O′ w.r.t. L                (checking safety)
T2               O, S, L     Check if O is safe for S w.r.t. L                 (checking safety)
T3[a|s|u][m]     O, O′, L    Extract [all / some / union of] [minimal] module(s) for O in O′ w.r.t. L
T4[a|s|u][m]     O′, S, L    Extract [all / some / union of] [minimal] module(s) for S in O′ w.r.t. L

Table 2: Summary of reasoning tasks relevant for ontology integration and reuse, where O, O′ are ontologies and S a signature over L

In Table 2 we have summarized the reasoning tasks we found to be potentially relevant for ontology reuse scenarios, and have included the variants T3am, T3sm and T3um of the task T3 and T4am, T4sm and T4um of the task T4 for the computation of minimal modules discussed in this section. Further variants of the tasks T3 and T4 could be considered relevant to ontology reuse. For example, instead of computing minimal modules, one might be interested in computing modules with the smallest number of axioms, or modules of the smallest size measured in the number of symbols, or in any other complexity measure of the ontology. The theoretical results that we will present in this paper can easily be extended to many such reasoning tasks.
3.6 Safety and Modules for General Ontology Languages
All the notions introduced in Section 3 are defined with respect to an “ontology language”. So far, we have implicitly assumed that the ontology languages are the description logics defined in Section 2—that is, the fragments of the DL SHOIQ. The notions considered in Section 3 can be applied, however, to a much broader class of ontology languages: our definitions apply to any ontology language with a notion of entailment of axioms from ontologies.
Definition 16. An ontology language is a tuple L = (Sg, Ax, Sig, |=), where Sg is a set of signature elements (or vocabulary) of L, Ax is a set of axioms in L, Sig is a function that assigns to every axiom α ∈ Ax a finite set Sig(α) ⊆ Sg called the signature of α, and |= is the entailment relation between sets of axioms O ⊆ Ax and axioms α ∈ Ax, written O |= α. An ontology over L is a finite set of axioms O ⊆ Ax. We extend the function Sig to ontologies O as follows: Sig(O) := ∪α∈O Sig(α). ♦
Definition 16 provides a very general notion of an ontology language. Intuitively, a language L is given by a set of symbols (a signature), a set of formulae (axioms) that can be constructed over those symbols, a mechanism for identifying their signatures (a function that assigns to each formula its signature), and an entailment relation between sets of axioms and axioms.
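Beyond entailment, the only computational ingredient Definition 16 asks for is the signature function, and its extension from axioms to ontologies is just a union. A minimal Python sketch, with invented axiom names and signatures used purely as an illustration:

```python
def sig_of_ontology(axiom_signatures, ontology):
    """Sig(O) := union of Sig(alpha) over alpha in O (Definition 16).

    `axiom_signatures` maps each axiom identifier to its finite signature;
    in a real system this mapping would be obtained by parsing the axiom.
    """
    result = set()
    for alpha in ontology:
        result |= axiom_signatures[alpha]
    return result

# Toy illustration with invented axiom names and signatures.
sigs = {"alpha1": {"A", "r", "B"}, "alpha2": {"B", "C"}}
assert sig_of_ontology(sigs, {"alpha1", "alpha2"}) == {"A", "r", "B", "C"}
```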

r.r.t.r.r. & Sattler logics defined in Section 2 are examples of ontology languages in accordance to Definition 16. and Logic Programs. Then: 1. Proposition 17 (Safety vs.r. however. If O \ O1 is safe for O ∪ O1 then O1 is a module for O in O w. O \ O1 is safe for S ∪ Sig(O1 ) w. L. L iff the empty ontology is a module for O in O w. ontologies over L. 2. O is safe for O w. L.r.t.t.r. Horrocks. The definition of model conservative extension (Definition 3) and the propositions involving model conservative extensions (Propositions 4. 2. Proof. Proof. L.r. L. 8.t. and let O. with Sig(O \ O1 ) ∩ Sig(O) ⊆ S ∪ Sig(O1 ). L iff (c) for every O.r. O0 = ∅ is an S-module in O w. O and O1 ⊆ O .r. we will not formulate general requirements on the semantics of ontology languages. L. and S a subset of the signature of L.r. By Definition 12. O . Kazakov.r. Modules for an Ontology) Let L be an ontology language.t. O ∪ O is a deductive S-conservative extension of O ∪ O1 w. and 13) can be also extended to other languages with a standard Tarski model-theoretic semantics. L. 2. L. we have that O \ O1 is safe for O w. L. that O1 is a module for O in O w. Second Order Logic.t.r. L iff (a) O ∪ O is a deductive conservative extension of O w. and assume that we deal with sublanguages of SHOIQ whenever semantics is taken into account. are well-defined for every ontology language L given by Definition 16. and O1 ⊆ O be ontologies over L. Other examples of ontology languages are First Order Logic. By Definition 6.t.r. O is safe for S w.r. L for S = Sig(O). By Definition 10. it is easy to see that (a) is equivalent to (b).t. such as Higher Order Logic.t. L. To simplify the presentation. By Claim 1 of Proposition 17. We also provide the analog of Proposition 17 for the notions of safety and modules for a signature: Proposition 18 (Safety vs. the empty ontology O0 = ∅ is a module for O in O w.t.r. By 288 . O is safe for O w. L for S = Sig(O).t.t. which implies.r. by Definition 10. Then: 1. L iff (b) for every O with Sig(O ) ∩ Sig(O) ⊆ S. it is the case that O0 = ∅ is a module for O in O w. In particular. 1. It is easy to see that (a) is the same as (b). 2. safety (Definitions 2 and 6) and modules (Definitions 10 and 12). In the remainder of this section. then O1 is an S-module in O w. it is the case that O is safe for O w. O \ O1 is safe for O ∪ O1 w.t.r. as well as all the reasoning tasks from Table 2. It is easy to see that the notions of deductive conservative extension (Definition 1). By Definition 2. L. By Definition 2. we establish the relationships between the different notions of safety and modules for arbitrary ontology languages. Modules for a Signature) Let L be an ontology language.t.t.t. L iff (b) O ∪ O is a deductive S-conservative extension of O ∪ O0 = O w.t.r. If O \ O1 is safe for S ∪ Sig(O1 ) w.r.Cuenca Grau. L iff (a) for every O with Sig(O ) ∩ Sig(O) ⊆ S. L iff (c) O ∪ O1 ∪ (O \ O1 ) = O ∪ O is a deductive conservative extension of O ∪ O1 w. O is safe for S w. 1.r.t.t. By Definition 6.t. L iff the empty ontology O0 = ∅ is an S-module in O w.t. L.

t.r.r. the problem of deciding whether O is a deductive conservative extension of O1 w. L. L is EXPTIME-complete for L = EL. L iff (d) for every O with Sig(O )∩Sig(O) ⊆ S.t. L which implies by Claim 2 of Proposition 17 that O1 is a module for O in O w. Given O1 ⊆ O over a language L. L .t. In order to prove that (c) implies (d). O1 is an S-module in O w. 2-EXPTIME-complete for L = ALC and L = ALCIQ. 2-EXPTIME complete for L = ALC and L = ALCIQ. and undecidable for L = ALCIQO. it is reasonable to identify which (un)decidability and complexity results for conservative extensions are applicable to the reasoning tasks in Table 2.r. Recently.t. the problem of determining whether O is safe for O w.. 2007). Since the notions of modules and safety are defined in Section 3 using the notion of deductive conservative extension.r. ( ) ˜ := O ∪ O . and undecidable for L = ALCIQO. Undecidability and Complexity Results In this section we study the computational properties of the tasks in Table 2 for ontology languages that correspond to fragments of the description logic SHOIQ. the problem of determining whether O1 is a module for O in O is EXPTIME-complete for L = EL.t. 4. L. L iff O\O1 is safe for O1 w. the problem has also been studied for simple DLs.t. By Definition 2. Hence.t. and O1 ⊆ O over L. 2006). The computational properties of conservative extensions have recently been studied in the context of description logics. an ontology O is safe for O w. This result was extended by Lutz et al. L if O ∪ O is a deductive S-conservative extension of O ∪ O1 for S = Sig(O) w. By Definition 2.r. L. L. Conversely.r. Hence. an ontology O1 is a module for O in O w. L ( ). we demonstrate that any algorithm for checking safety or modules can be used for checking deductive conservative extensions.r. it has been shown that deciding deductive conservative extensions is EXPTIME-complete for L = EL(Lutz & Wolter. any algorithm for checking deductive conservative extensions can be reused for checking safety and modules.t. and are computationally hard for simple DLs. who showed that the problem is 2-EXPTIME-complete for L = ALCIQ and undecidable for L = ALCIQO. L iff. O is a deductive conservative extension of O1 ⊆ O w.r.t.r. O .t.t. L iff O ∪ O is a deductive conservative extension of O w. We need to demonstrate that O1 is a module for O in O w.r. let O be such that Sig(O ) ∩ Sig(O) ⊆ S. Indeed. Proof. by (c) we have that O \ O1 is safe ˜ for O = O ∪ O1 w. Note that Sig(O \ O ) ∩ Sig(O) = Sig(O \ O ) ∩ (Sig(O) ∪ Sig(O )) ⊆ ˜ Let O 1 1 1 1 (Sig(O \ O1 ) ∩ Sig(O)) ∪ Sig(O1 ) ⊆ S ∪ Sig(O1 ).r. Given ontologies O.r.t.t.t. These results can be immediately applied to the notions of safety and modules for an ontology: Proposition 19 Given ontologies O and O over L.Modular Reuse of Ontologies: Theory and Practice Definition 12. We demonstrate that most of these reasoning tasks are algorithmically unsolvable even for relatively inexpressive DLs. O0 = ∅ is a module for O1 in O \ O1 w. we have that O1 is a module for O in O w.r.r. by Claim 1 of Proposition 17. 289 . (2007). L is 2-EXPTIMEcomplete for L = ALC (Ghilardi et al.

e. k} is a finite set of tiles and H. 2007). The task T1 corresponds directly to the problem of checking the safety of an ontology. Theorem 21 (Undecidability of Safety for a Signature) The problem of checking whether an ontology O consisting of a single ALC-axiom is safe for a signature S is undecidable w.s. T3sm and T3um for L = ALCIQ in 2-EXPTIME. and are 2-EXPTIME-hard for ALC. Suppose that we have a 2-EXPTIME algorithm that.j ∈ V .t. or T3[a. n ≥ 1 called periods such that ti+m. given ontologies O. Indeed.t. Finally. o a & Gurevich. and T3[a.r. We use this property in our reduction.j and ti. . A solution for a domino system D is a mapping ti.t. .r. L = ALCO. V ) where T = {1.j .u]m from Table 2 for L = EL and L = ALCIQ that run in EXPTIME and 2-EXPTIME respectively. we prove that solving any of the reasoning tasks T3am. Indeed.u]m are computationally unsolvable for DLs that are as expressive as ALCQIO. decidable) subset D ⊆ D of domino systems such that Dps ⊆ D ⊆ Ds .t. there is no recursive (i. and an ontology O = O(D) which consists of a single ALC-axiom such that: 290 . Proof. one of them. A domino system is a triple D = (T. j ≥ 1.j .7) that the sets D \ Ds and Dps are recursively inseparable.j+1 ∈ H and ti.r.u]m related to the notions of safety and modules for a signature. given ontologies O and O . Then we can determine which of these modules are minimal and return all of them. & Sattler Corollary 20 There exist algorithms performing tasks T1. We have demonstrated that the reasoning tasks T1 and T3[a.j that assigns to every pair of integers i. L. Horrocks. one can enumerate all subsets of O and check in 2-EXPTIME which of these subsets are modules for O in O w. Kazakov.j for every i. ti+1. Note that the empty ontology O0 = ∅ is a module for O in O w. L iff O0 = ∅ is the only minimal module for O in O w.s. .r. based on a reduction to a domino tiling problem. determines whether O1 is a module for O in O w. H. V ⊆ T × T are horizontal and vertical matching relations. Proof. There is no algorithm performing tasks T1.j+n = ti.Cuenca Grau.j for which there exist integers m ≥ 1 . we focus on the computational properties of the reasoning tasks T2 and T4[a. The proof is a variation of the construction for undecidability of deductive conservative extensions in ALCQIO (Lutz et al.u]m from Table 2 for L = ALCIQO. L = ALCIQ. ti. or the union of them depending on the reasoning task.. Theorem 3. Let D be the set of all domino systems. that is. . we construct a signature S = S(D). In the remainder of this section. L iff O0 = ∅ is a module for O in O w. O and O1 ⊆ O .1. Gr¨del. L.s.t.t. Ds be the subset of D that admit a solution and Dps be the subset of Ds that admit a periodic solution. by Claim 1 of Proposition 17. For each domino system D. an ontology O is safe for O w. It is well-known (B¨rger.r. 1997. as given in Definition 2.r. We demonstrate that this algorithm can be used to solve the reasoning tasks T3am.j = ti.r.t. T3sm and T3um is not easier than checking the safety of an ontology. such that ti. We demonstrate that all of these reasoning tasks are undecidable for DLs that are as expressive as ALCO.s. A periodic solution for a domino system D is a solution ti. L. j ≥ 1 an element of T .

d1 ∈ rV I . L = ALCO. we have Dps ⊆ D ⊆ Ds . Given a domino system D = (T. ai. b ∈ At I . The axioms of Otile express the tiling conditions for a domino system D.j .B ∃rV .j . For every i. (Ci Di )∈Otile (Ci ¬Di ) (∃rH .j := c.j ∈ At I there exist b.t ∈H t.t.r.j+1 := t .j and ti. . a. b ∈ rH I .j can be assigned two times: ai+1. H. . b.¬B) We say that rH and rV commute in an interpretation I = (∆I . Now let s be an atomic role and B an atomic concept such that s. because otherwise one can use this problem for deciding membership in D . . ·I ) if for all domain elements a.j for D as follows. Note that some values of ai. Since D \ Ds and Dps are recursively inseparable. ti. In other words. j ≥ 1 is constructed. b ∈ rH I . We set a1. we have d1 = d2 . b. The signature S = S(D).j+1 := b. Consider an ontology Otile in Figure 2 constructed for D. (q3 ) and (q4 ) express that every domain element has horizontal and vertical matching successors. this implies undecidability for D and hence for the problem of checking if O is S-safe w. namely (q1 ) and (q2 ) express that every domain element is assigned with a unique tile t ∈ T . j ≥ 1. ai+1. k} 1≤t<t ≤k At ) At ) 1≤t≤k 1≤t≤k ∃rH .j .j+1 ∈ rV I and ai. and the ontology O = O(D) are constructed as follows. Since I is a model of axioms (q1 ) and (q2 ) from Figure 2. and ti+1.r. Now suppose ai.j inductively together with elements ai. L = ALCO.j ∈ ∆I such that ai. L = ALCO.∃rH . we have a unique At with 1 ≤ t ≤ k such that ai.j+1 are constructed for ai.j .t ∈V where T = {1. The following claims can be easily proved: Claim 1. ·I ) of Otile (D) can be used to guide the construction of a solution ti. d2 ∈ rH I . V ).( Figure 2: An ontology Otile = Otile (D) expressing tiling conditions for a domino system D (a) if D does not have a solution then O = O(D) is safe for S = S(D) w. t.j ∈ rH I . for the set D consisting of the domino systems D such that O = O(D) is not safe for S = S(D) w.j := t .j with i. ai. L = ALCO.j+1 and ti+1. t ∈ T such that ai. the value for 291 . B ∈ S. Note that Sig(Otile ) = S. t ∈ H. t ∈ V .j+1 . we construct ti.1 to any element from ∆I .j ∈ At I . let S consist of fresh atomic concepts At for every t ∈ T and two atomic roles rH and rV . Since I is a model of axioms (q3 ) and (q4 ) and ai. In this case we assign ai. and c ∈ At I .r.( ∃rV .Modular Reuse of Ontologies: Theory and Practice (q1 ) (q2 ) (q3 ) (q4 ) At At At A1 At ··· ⊥ Ak t. Then we set ti.∃rV .t. then D has a solution. c ∈ rV I .t. and (b) if D has a periodic solution then O = O(D) is not safe for S = S(D) w.j+1 and ai. Let O := {β} / where: β := ∃s. If Otile (D) has a model I in which rH and rV commute.r. c ∈ rV I . However. c. t. and c. Indeed a model I = (∆. c ∈ ∆I and t . since rV and rH commute in I.t. .j := t. ai+1. d1 and d2 from ∆I such that a.

1≤i≤m. we demonstrate that O = O(D) satisfies properties (a) and (b). is not safe for S w. For every pair (i. If I = (∆I .j1 } (p5 ) ∀rV .¬B)J since d1 = d2 . assume that D has a periodic solution ti. and so.j with the periods m. b ∈ rH I . c. Claim 2.r. d1 and d2 from ∆I with a.j1 } ∃rV . ∀rH . ·I ) is not a model of Otile then there exists an axiom (Ci Di ) ∈ Otile such that I |= (Ci Di ). ·I ) is a model of O I I = (∆ tile ∪ O. then by the contra-positive of Claim 1 either (1) I is not a model of Otile .∃rV . a | x ∈ ∆}. Note that a ∈ (∃rH . By Proposition 8. a | x ∈ ∆}. In order to prove property (a) we use the sufficient condition for safety given in Proposition 8 and demonstrate that if D has no solution then for every interpretation I there exists a model J of O such that J |S = I|S . but O |= ( ⊥). a. This will imply that O is not safe for O w.∃rV .B ∃rV . or (2) rH and rV do not commute in I.∃rH . and c. L = ALCO. This means that there exist domain elements a. This implies that d1 = d2 . then there exist a. Let I be an arbitrary interpretation. j) with 1 ≤ i ≤ m and 1 ≤ j ≤ n we introduce a fresh individual ai. We demonstrate that O is not S-safe w. We interpret B in J as B J = {d1 }.r. Indeed.{ai.{ai2 . d I .r. we have a ∈ CiJ .¬B]). and.j } ∃rH . Since D has no solution. d ∈ B I . b ∈ r I . Let us define J to be identical to I except for the interpretation of the atomic role s and the atomic concept B.j with the periods m and n. d1 ∈ rV I .j+1 is unique.{ai2 .[∃rH . Kazakov.B ∃rV .j } i2 = i1 + 1 mod m j2 = j1 + 1 mod n 292 . and so J |= ( ∃s. n ≥ 1. This implies that J |= β. c ∈ rV I . d1 and d2 such that x.j2 }. if I .[Ci ¬Dj ]).¬B]) which implies that J |= β.{ai. d2 ∈ rH I . and because of (q2 ). a ∈ s I . Hence.[∃rH .Cuenca Grau.∃rH .j and define O to be an extension of Otile with the following axioms: (p1 ) {ai1 . c. L. a. Case (2). c.j+1 is unique.j }.j is a solution for D. Finally. We interpret s in J as sJ = { x. and I . a ∈ ¬Di . b. b. For this purpose we construct an ALCO-ontology O with Sig(O) ∩ Sig(O ) ⊆ S such that O ∪ O |= ( ⊥). b.t. b. it is easy to see that Otile ∪ O |= ( ∃s.t. & Sattler ai+1. a. we have J |= ( ∃s. there exists a domain element a ∈ ∆I such that I but a ∈ D I . Let us define J to be identical to I except for the interpretation of the a ∈ Ci i atomic role s which we define in J as sJ = { x. It is easy to see that ti.j } (p4 ) {ai.t. such that d1 = d2 . the value for ti+1. we have constructed a model J of O such that J |S = I|S . In order to prove property (b). 1≤j≤n {ai. Horrocks. We define O such that every model of O is a finite encoding of the periodic solution ti. we have constructed a model J of O such that J |S = I|S .r. That is. this will imply that O is safe for S w.j2 } (p2 ) {ai1 . then rH and rV do not commute in I. If I is a model of Otile ∪ O. Case (1). and thus. d for every x ∈ ∆ 2 ∈ rH 1 1 ∈ rV V H d2 ∈ (¬B)I . rh and rV do not commute in I. Since the interpretations of J the symbols in S have remained unchanged.∃rV .∃rH .j } (p3 ) {ai. hence. Suppose that rH and rV do not commute in I = (∆I . We demonstrate for both of these cases how to construct the required model J of O such that J |S = I|S . c ∈ r I .B)J and a ∈ (∃rV . and so. So.t. ·I ). L. L = ALCO.

5. Sufficient Conditions for Safety Theorem 21 establishes the undecidability of checking whether an ontology expressed in OWL DL is safe w. Solving any of the reasoning tasks T4am. we could focus on simple DLs for which this problem is decidable. Proof.Modular Reuse of Ontologies: Theory and Practice The purpose of axioms (p1 )–(p5 ) is to ensure that rH and rV commute in every model of O .u]m for L = ALCO. Hence any algorithm for recognizing modules for a signature in L can be used for checking if an ontology is safe for a signature in L.1 Safety Classes In general. if an ontology satisfies our conditions. by Claim 1 of Proposition 18.. in what follows we will focus on defining sufficient conditions for safety that we can use in practice and restrict ourselves to OWL DL—that is. then we can guarantee that it is safe. 293 . or T4um for L is at least as hard as checking the safety of an ontology. however. the converse. it is worth noting that Theorem 21 still leaves room for investigating the former approach.r. Before we go any further. This said. O is S-safe w. for each signature S. 2007) strongly indicate that checking safety is likely to be exponentially harder than reasoning and practical algorithms may be hard to design. any sufficient condition for safety can be defined by giving. since this task corresponds to the problem of checking safety for a signature. Alternatively. L. By Claim 1 of Proposition 18.r.t. Proof.t. the problem of determining whether O1 is an S-module in O w. On the other hand. safety may still be decidable for weaker description logics. These intuitions lead to the notion of a safety class. This undecidability result is discouraging and leaves us with two alternatives: First. an ontology O is S-safe w. L = ALCO is undecidable. does not necessarily hold. however. rH and rV do not commute.r.t. L. The remainder of this paper focuses on the latter approach. existing results (Lutz et al. we could look for sufficient conditions for the notion of safety— that is. since O contains Otile . by Claim 2. Indeed.t. 5. Corollary 23 There is no algorithm that can perform tasks T2.t. the set of ontologies over a language that satisfy the condition for that signature.t. so O ∪ O |= ( ⊥).r. since. however. such as EL. In the case of SHIQ. or T4[a. then in every model of O ∪ O. Theorem 21 directly implies that there is no algorithm for task T2. SHOIQ—ontologies. This is only possible if O ∪ O have no models. or even for very expressive logics such as SHIQ. L iff O0 = ∅ is (the only minimal) S-module in O w. It is easy to see that O has a model corresponding to every periodic solution for D with periods m and n. A direct consequence of Theorem 21 and Proposition 18 is the undecidability of the problem of checking whether a subset of an ontology is a module for a signature: Corollary 22 Given a signature S and ALC-ontologies O and O1 ⊆ O .s. L if O0 = ∅ is an S-module in O w. a signature. T4sm.r. Hence O |= ( ⊥).r. These ontologies are then guaranteed to be safe for the signature under consideration.

Definition 24 (Class of Ontologies, Safety Class). A class of ontologies for a language L is a function O(·) that assigns to every subset S of the signature of L a subset O(S) of ontologies in L. A safety class (also called a sufficient condition to safety) for an ontology language L is a class of ontologies O(·) for L such that for each S it is the case that (i) ∅ ∈ O(S), and (ii) each ontology O ∈ O(S) is S-safe for L. A class O(·) is anti-monotonic if S1 ⊆ S2 implies O(S2) ⊆ O(S1); it is subset-closed if O1 ⊆ O2 and O2 ∈ O(S) implies O1 ∈ O(S); it is union-closed if O1 ∈ O(S) and O2 ∈ O(S) implies (O1 ∪ O2) ∈ O(S); it is compact when O ∈ O(S ∩ Sig(O)) for each O ∈ O(S). ♦
Intuitively, a class of ontologies is a collection of sets of ontologies parameterized by a signature. A safety class represents a sufficient condition for safety: each ontology in O(S) should be safe for S. Safety classes may admit many natural properties. Anti-monotonicity intuitively means that if an ontology O can be proved to be safe w.r.t. S using the sufficient condition, then O can be proved to be safe w.r.t. every subset of S. Subset-closure (union-closure) means that if O (O1 and O2) satisfy the sufficient condition for safety, then every subset of O (the union of O1 and O2) also satisfies this condition. Compactness means that it is sufficient to consider only the common elements of Sig(O) and S for checking safety. In what follows, whenever an ontology O belongs to a safety class for a given signature and the safety class is clear from the context, we will sometimes say that O passes the safety test for S. Also, w.l.o.g., we assume that the empty ontology ∅ belongs to every safety class for every signature.
5.2 Locality
In this section we introduce a family of safety classes for L = SHOIQ that is based on the semantic properties underlying the notion of model conservative extensions. In Section 3 we have seen that, according to Proposition 8, one way to prove that O is S-safe is to show that O is a model S-conservative extension of the empty ontology. The following definition formalizes the classes of ontologies (as given in Definition 24), called local ontologies, for which safety can be proved using Proposition 8.
Definition 25 (Class of Interpretations, Locality). A class of interpretations is a function I(·) that, given a SHOIQ signature S, returns a set of interpretations I(S). Given a SHOIQ signature S, we say that a set of interpretations I is local w.r.t. S if for every SHOIQ-interpretation I there exists an interpretation J ∈ I such that I|S = J|S. A class I(·) is monotonic if S1 ⊆ S2 implies I(S1) ⊆ I(S2); it is compact if for every S1, S2 and S such that (S1 △ S2) ∩ S = ∅ we have that I(S1)|S = I(S2)|S, where S1 △ S2 is the symmetric difference of the sets S1 and S2 defined by S1 △ S2 := S1 \ S2 ∪ S2 \ S1; it is local if I(S) is local w.r.t. S for every S. Given a class of interpretations I(·), we say that O(·) is the class of ontologies based on I(·) if, for every S, O(S) is the set of ontologies that are valid in I(S). If I(·) is local then we say that O(·) is a class of local ontologies, and for every S, every O ∈ O(S) and every α ∈ O, we say that O and α are local (based on I(·)). ♦
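It may help to view Definition 24 operationally: a safety class is just a black-box test taking a signature and an ontology. The sketch below fixes one such encoding (an assumption of this sketch, not a data structure from the paper) and shows how anti-monotonicity could be spot-checked on finitely many samples; the real properties of course quantify over all signatures and ontologies.

```python
from typing import Callable, FrozenSet, Iterable, Set

# A safety class, in this simplified encoding, is a predicate that takes a
# signature S and an ontology O (a set of axiom identifiers) and answers
# whether O passes the safety test for S (mirrors Definition 24).
SafetyClass = Callable[[FrozenSet[str], Set[str]], bool]

def violates_anti_monotonicity(cls: SafetyClass,
                               s1: Iterable[str], s2: Iterable[str],
                               sample_ontologies) -> bool:
    """Spot-check anti-monotonicity (S1 ⊆ S2 implies O(S2) ⊆ O(S1)) on a
    finite collection of sample ontologies; returns True if a
    counterexample is found among the samples."""
    s1, s2 = frozenset(s1), frozenset(s2)
    assert s1 <= s2, "anti-monotonicity is only stated for S1 ⊆ S2"
    return any(cls(s2, o) and not cls(s1, o) for o in sample_ontologies)
```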

Additionally. Then O(·) is a subset-closed and union-closed safety class for L = SHOIQ. Hence O(·) is anti-monotonic. If I(·) is compact then for every S1 . Hence for every O ∈ I(S) and every SHOIQ interpretation I there is a model J ∈ I(S) such that J |S = I|S . it is the case that J |= α. and if I(·) is compact then O(·) is also compact.r. O(·) is compact. if I(·) is monotonic. r←∅ r←∅ Given a signature S. the set AxA←∅ (S) of axioms that are local w. . we have O ∈ O(S) iff O is valid in I(S) iff J |= O for every interpretation J ∈ I(S). S based on IA←∅ (S) r←∅ consists of all axioms α such for every J ∈ IA←∅ (S). E1 by interpreting the symbols in Sig(P1 ) \ S as the empty set. Given a r←∅ signature S. S2 and S such that (S1 S2 ) ∩ S = ∅ we have that I(S1 )|S = I(S2 )|S . Since IA←∅ (S1 ) ⊆ IA←∅ (S2 ) r←∅ r←∅ for every S1 ⊆ S2 .t. Example 29 Recall that in Example 5 from Section 3. ·I ) and the interpretation is local for every S. since for every interpretation I = (∆ J = (∆J . Genetic Disorder}. Proof. IA←∅ (·) is also compact.r. rJ = ∅ for r ∈ S. It is easy to show that IA←∅ (S) / / I . we demonstrated that the ontology P1 ∪ Q given in Figure 1 with P1 = {P1. and so O ∈ O(S2 ) implies that O is valid in I(S2 ) which implies that O is valid in I(S1 ) which implies that O ∈ O(S1 ).Modular Reuse of Ontologies: Theory and Practice r←∅ Example 26 Let IA←∅ (·) be a class of SHOIQ interpretations defined as follows. This has been done by showing that every S-interpretation I can be expanded to a model J of axioms P1–P4. since for r←∅ r←∅ every S1 and S2 the sets of interpretations IA←∅ (S1 ) and IA←∅ (S2 ) are defined differently only for elements in S1 S2 . ♦ Proposition 27 (Locality Implies Safety) Let O(·) be a class of ontologies for SHOIQ based on a local class of interpretations I(·). it is the case that IA←∅ (·) is monotonic. Thus O(·) is a safety class. The fact that O(·) is subset-closed and union-closed follows directly from Definition 25 since (O1 ∪ O2 ) ∈ O(S) iff (O1 ∪ O2 ) is valid in I(S) iff O1 and O2 are valid in I(S) iff O1 ∈ O(S) and O2 ∈ O(S). which implies by Proposition 8 that O is safe for S w. and X J := X I / / r←∅ r←∅ r←∅ for the remaining symbols X.t. . E1} is a deductive conservative extension of Q for S = {Cystic Fibrosis. In terms of Example 26 this means that 295 . In particular. . Then by Definition 25 for every SHOIQ signature S. then J ∈ IA←∅ (S) and I|S = J |S . Since I(·) is a local class of interpretations. O ∈ O(S1 ) iff O ∈ O(S2 ). we have that for every SHOIQ-interpretation I there exists J ∈ I(S) such that J |S = I|S . and so. If I(·) is monotonic then I(S1 ) ⊆ I(S2 ) for every S1 ⊆ S2 . ·J ) defined by ∆J := ∆I . P4. . Assume that O(·) is a class of ontologies based on I(·). AJ = ∅ for A ∈ S. then O(·) is anti-monotonic. O ∈ O(S) iff O ∈ O(S ∩ Sig(O)) since (S (S ∩ Sig(O))) ∩ Sig(O) = (S \ Sig(O)) ∩ Sig(O) = ∅. Hence. the set IA←∅ (S) consists of interpretations J such that rJ = ∅ for every atomic r←∅ role r ∈ S and AJ = ∅ for every atomic concept A ∈ S. Then the r←∅ r←∅ r←∅ class of local ontologies based on IA←∅ (·) is defined by O ∈ OA←∅ (S) iff O ⊆ AxA←∅ (S). hence for every O with Sig(O) ⊆ S we have that O is valid in I(S1 ) iff O is valid in I(S2 ). L = SHOIQ. r←∅ Corollary 28 The class of ontologies OA←∅ (S) defined in Example 26 is an anti-monotonic compact subset-closed and union-closed safety class.
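Example 26 fixes the interpretation of every out-of-signature atomic concept and role to the empty set; for a finite interpretation the membership condition is a one-line check. The encoding of interpretations as a dictionary of extensions is an assumption of this sketch.

```python
def in_empty_local_class(interpretation, s):
    """Membership in I^{A←∅, r←∅}(S) from Example 26 for a finite
    interpretation, encoded as a dict from atomic concept/role names to
    their extensions: every symbol outside S must have the empty
    extension."""
    return all(not extension
               for name, extension in interpretation.items()
               if name not in s)

# A toy interpretation: the symbols outside S = {"A"} have empty
# extensions, so the interpretation belongs to the class.
toy = {"A": {"d1", "d2"}, "B": set(), "r": set()}
assert in_empty_local_class(toy, {"A"})
assert not in_empty_local_class({"A": {"d1"}, "B": {"d1"}}, {"A"})
```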

t. If this property holds. S) | τ ({a}.τ (C1 . b). S) 2. otherwise = (R1 R2 ).C1 .r.C 1 . This notion of locality is exactly the one we used in our previous work (Cuenca Grau et al. S) ::= C2 .r. S) = a : τ (C. S. / | τ (Trans(r). that is. The reason is that there is no elegant way to describe such interpretations.r.τ (C1 . (a) (b) (c) (d) (e) (f ) (g) (h) (i) (j) (k) (l) (m) (n) τ (α. introduced in Example 26. S) = (τ (C1 . / = {a}. S) be defined recursively as follows: τ (C. it is sufficient to interpret every atomic concept and atomic role not in S as the empty set and then check if α is satisfied in all interpretations of the remaining symbols. S) τ (C2 . 5.3 Testing Locality r←∅ In this section. When ambiguity does not arise. In order to test the locality of α w. This observation suggests the following test for locality: Proposition 30 (Testing Locality) Given a SHOIQ signature S. | τ (a : C. = ¬τ (C1 . Horrocks. but in principle. Kazakov.. R2 . S) and τ (O. Since OA←∅ (·) is a class of local ontologies. S) = ⊥ ⊥ if r ∈ S and otherwise = Trans(r). axiom α and ontology O let τ (C. the ontology P1 is safe for S w. L = SHOIQ. Note that for defining locality we do not fix the interpretation of the individuals outside S. / | τ (Funct(R). whether every axiom α in O is satisfied by every interpretation from r←∅ IA←∅ (S). S)). concept C. this could be done. S). S (based on IA←∅ (S)) if α is satisfied in every interpretation that fixes the interpretation of all atomic roles and concepts outside S to the empty set. S) | τ (∃R. S) τ (C2 . S) = ⊥ if r ∈ S and otherwise = r(a. In the next section we discuss how to perform this test in practice. then O must be safe for S according to Proposition 27. S)). | τ (r(a. S) τ (C1 | τ (R1 = . = τ (C1 . every individual needs to be interpreted as an element of the domain. 2007). = ⊥ if Sig(R) S and otherwise = ∃R. S) = ⊥ ⊥ if Sig(R) S and otherwise = Funct(R). S) | τ (A. by Proposition 27. S). we refer to this safety class simply as locality. 296 . It turns out that this notion provides a powerful sufficiency test for safety which works surprisingly well for many real-world ontologies. S) ::= τ (α. otherwise = ∃R1 . = ⊥ if A ∈ S and otherwise = A. ♦ Proposition 27 and Example 29 suggest a particular way of proving the safety of ontologies. as will be shown in Section 8. S) | τ (C1 C2 .t. = ⊥ if Sig(R) S and otherwise = ( n R. Given a SHOIQ ontology O and a signature S it is sufficient to check whether r←∅ O ∈ OA←∅ (S). S) = (⊥ ⊥) if Sig(R1 ) S. τ (α.t. b). Namely. S) | τ ( n R.2 r←∅ From the definition of AxA←∅ (S) given in Example 26 it is easy to see that an axiom α is r←∅ local w. ⊥ if Sig(R2 ) S. α∈O τ (O. we focus in more detail on the safety class OA←∅ (·).Cuenca Grau. and there is no “canonical” element of every domain to choose. S). S) ::= τ ( . S). S) | τ (¬C1 . S). & Sattler r←∅ r←∅ P1 ∈ OA←∅ (S).

r. S))I and that I |= α iff I |= τ (α.t.4 A Tractable Approximation to Locality One of the important conclusions of Proposition 30 is that one can use the standard capabilities of available DL-reasoners such as FaCT++ (Tsarkov & Horrocks.r. Recall that the constructors ⊥. The primary reason is that the sizes of the axioms which need to be tested for tautologies are usually relatively small compared to the sizes of ontologies.t. SHOIQ. modern DL reasoners are highly optimized for standard reasoning tasks and behave well for most realistic ontologies.⊥) which is not a SHOIQ tautology.C.t. S1 ) = (⊥ ≡ Fibrosis ⊥) which is a SHOIQ-tautology. and ( n R. It is easy to check that for every atomic concept A and atomic role r from τ (C. ( n R∅ . in other words. C1 C2 . in order to check whether O is local w. Trans(R∅ ). we have A ∈ S and r ∈ S. Secondly. In case reasoning is too costly. Note that it is not necessarily the case that Sig(τ (α. has Origin}. allow testing for DL-tautologies. S).C) are assumed to be expressed using . O ∈ OA←∅ (S) iff every axiom in τ (O. 2000). for the DL SHOIQ it is known to be NEXPTIME-complete. S) is a tautology. RACER (M¨ller & Haarslev.C). among other things. R R∅ . S1 = {Fibrosis. Example 31 Let O = {α} be the ontology consisting of axiom α = M2 from Figure 1. In the second case τ (M2. S2 . C1 C2 .g. in order to check whether O is local w. Tobies. 2003). every role R∅ with Sig(R∅ ) S occurs in O as either ∃R∅ . and hence will be eliminated.C. or Funct(R∅ ). hence O is not local w. Genetic Origin}.t. it is sufficient to perform the following replacements in α (the symbols from S2 are underlined): ⊥ [by (b)] M2 Genetic Fibrosis ≡ Fibrosis ⊥ [by (b)] (8) ∃has Origin. / since τ (α. 2006).r. several reasons to believe that the locality test would perform well in practice. All atomic concepts A∅ ∈ S will be eliminated likewise. a difficult problem (e.Genetic Origin (7) Similarly. ♦ 5.t. 2006) for testing o locality since these reasoners.r.C and ( n R. S) r←∅ for every Sig(α)-interpretation I ∈ IA←∅ (S) iff τ (α. S)) ⊆ S.C). however. Hence O is local w. hence. 297 . it is possible to formulate a tractable approximation to the locality conditions for SHOIQ: 3. R∅ R.Genetic Origin In the first case we obtain τ (M2. S2 ) = (Genetic Fibrosis ≡ ⊥ ∃has Origin. Indeed. Checking for tautologies in description logics is. S2 . theoretically.r. S2 = {Genetic Fibrosis. There are. Pellet (Sirin & Parsia.3 It is also easy to show by induction that for every r←∅ interpretation I ∈ IA←∅ (S). 2004) or KAON2 (Motik.r. according to Proposition 30. Proof. ∃R.r. in particular. all atomic concepts and roles that are not in S are eliminated by the transformation.r.t. S). S1 it is sufficient to perform the following replacements in α (the symbols from S1 are underlined): ⊥ [by (b)] M2 Genetic Fibrosis ≡ Fibrosis ⊥ [by (f)] ∃has Origin. S) is a tautology.t. S iff I |= α for every interpretation I ∈ IA←∅ (S) iff I |= τ (α. S1 and hence by Proposition 8 is S1 -safe w.Modular Reuse of Ontologies: Theory and Practice r←∅ Then. we have C I = (τ (C. Hence r←∅ an axiom α is local w.t. S) may still contain individuals that do not occur in S. but not local w. We demonstrate using Proposition 30 that O is local w. ∀R.

Definition 32 (Syntactic Locality for SHOIQ). Let S be a signature. The following grammar recursively defines two sets of concepts Con∅(S) and Con∆(S) for a signature S:
  Con∅(S) ::= A∅ | ¬C∆ | C ⊓ C∅ | ∃R∅.C | ∃R.C∅ | (≥ n R∅.C) | (≥ n R.C∅)
  Con∆(S) ::= ⊤ | ¬C∅ | C1∆ ⊓ C2∆
where A∅ ∉ S is an atomic concept, R is any role, C is any concept, R∅ is (possibly the inverse of) an atomic role r∅ ∉ S, C∅ ∈ Con∅(S), C∆, Ci∆ ∈ Con∆(S), and i ∈ {1, 2}. An axiom α is syntactically local w.r.t. S if it is of one of the following forms: (1) R∅ ⊑ R, or (2) Trans(r∅), or (3) Funct(R∅), or (4) C∅ ⊑ C, or (5) C ⊑ C∆, or (6) a : C∆. We denote by Ãx^{r←∅}_{A←∅}(S) the set of all SHOIQ-axioms that are syntactically local w.r.t. S. A SHOIQ-ontology O is syntactically local w.r.t. S if O ⊆ Ãx^{r←∅}_{A←∅}(S). We denote by Õ^{r←∅}_{A←∅}(S) the set of all SHOIQ ontologies that are syntactically local w.r.t. S. ♦
Intuitively, syntactic locality provides a simple syntactic test to ensure that an axiom is satisfied in every interpretation from I^{r←∅}_{A←∅}(S). It is easy to see from the inductive definitions of Con∅(S) and Con∆(S) in Definition 32 that for every interpretation I = (∆I, ·I) from I^{r←∅}_{A←∅}(S) it is the case that (C∅)I = ∅ and (C∆)I = ∆I for every C∅ ∈ Con∅(S) and C∆ ∈ Con∆(S). Hence, every syntactically local axiom is satisfied in every interpretation I from I^{r←∅}_{A←∅}(S), and so we obtain the following conclusion:
Proposition 33 Ãx^{r←∅}_{A←∅}(S) ⊆ Ax^{r←∅}_{A←∅}(S).
Further, it can be shown that the safety class for SHOIQ based on syntactic locality enjoys all of the properties from Definition 24:
Proposition 34 The class of syntactically local ontologies Õ^{r←∅}_{A←∅}(·) given in Definition 32 is an anti-monotonic, compact, subset-closed, and union-closed safety class.
Proof. Õ^{r←∅}_{A←∅}(·) is a safety class by Proposition 33. Since O ∈ Õ^{r←∅}_{A←∅}(S) iff O ⊆ Ãx^{r←∅}_{A←∅}(S), we have that Õ^{r←∅}_{A←∅}(·) is subset-closed and union-closed. Anti-monotonicity for Õ^{r←∅}_{A←∅}(·) can be shown by induction, by proving that Con∅(S2) ⊆ Con∅(S1), Con∆(S2) ⊆ Con∆(S1) and Ãx^{r←∅}_{A←∅}(S2) ⊆ Ãx^{r←∅}_{A←∅}(S1) when S1 ⊆ S2. Also one can show by induction that α ∈ Ãx^{r←∅}_{A←∅}(S) iff α ∈ Ãx^{r←∅}_{A←∅}(S ∩ Sig(α)), and so Õ^{r←∅}_{A←∅}(·) is compact.
Example 35 (Example 31 continued) It is easy to see that axiom M2 from Figure 1 is syntactically local w.r.t. S1 = {Fibrosis, Genetic Origin}. Below we indicate the sub-concepts of M2 that belong to Con∅(S1):
  M2   Genetic Fibrosis ≡ Fibrosis ⊓ ∃has Origin.Genetic Origin        (9)
where Genetic Fibrosis ∈ Con∅(S1) [matches A∅], ∃has Origin.Genetic Origin ∈ Con∅(S1) [matches ∃R∅.C], and Fibrosis ⊓ ∃has Origin.Genetic Origin ∈ Con∅(S1) [matches C ⊓ C∅]. It is easy to show in a similar way that axioms P1–P4 and E1 from Figure 1 are syntactically local w.r.t. S = {Cystic Fibrosis, Genetic Disorder}. Hence the ontology P1 = {P1, …, P4, E1} considered in Example 29 is syntactically local w.r.t. S. ♦
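Definition 32 is designed to be checkable by structural recursion over the axiom. A sketch for the same small ALC-style fragment and tuple encoding used above (an assumption of these examples; the full definition also covers number restrictions, role axioms and assertions):

```python
def in_con_empty(c, s):
    """Membership in Con∅(S) (Definition 32), restricted to ⊤/⊥/atoms/¬/⊓/∃."""
    kind = c[0]
    if kind == "bot":
        return True                       # ⊥ ≡ ¬⊤, which the grammar covers
    if kind == "atom":
        return c[1] not in s              # A∅
    if kind == "not":
        return in_con_delta(c[1], s)      # ¬C∆
    if kind == "and":                     # C ⊓ C∅ (either conjunct suffices)
        return in_con_empty(c[1], s) or in_con_empty(c[2], s)
    if kind == "exists":                  # ∃R∅.C or ∃R.C∅
        role, filler = c[1], c[2]
        return role not in s or in_con_empty(filler, s)
    return False

def in_con_delta(c, s):
    """Membership in Con∆(S) (Definition 32), same restriction."""
    kind = c[0]
    if kind == "top":
        return True
    if kind == "not":
        return in_con_empty(c[1], s)
    if kind == "and":
        return in_con_delta(c[1], s) and in_con_delta(c[2], s)
    return False

def syntactically_local_gci(lhs, rhs, s):
    """A GCI lhs ⊑ rhs is syntactically local w.r.t. S if lhs ∈ Con∅(S)
    or rhs ∈ Con∆(S) (cases (4) and (5) of Definition 32)."""
    return in_con_empty(lhs, s) or in_con_delta(rhs, s)
```

Unlike the semantic test of Proposition 30, this check needs no reasoner and runs in time linear in the size of the axiom, in line with Proposition 36.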

is a GCI α = (∃r.¬A ∃r. that the symbols from S are underlined. B} according to Definition 32 since it involves symbols in S only.¬⊥) is a tautology.t. the universal relation ∆ × ∆. S = {r}. We assume that the numbers in number restrictions are written down using binary coding.Modular Reuse of Ontologies: Theory and Practice r←∗ IA←∗ (S) r←∅ IA←∅ (S) r←∆×∆ IA←∅ (S) r←id IA←∅ (S) r.t. S for which locality conditions assuming. A ∈ S : rJ ∅ AJ ∅ ∅ ∅ r←∗ IA←∗ (S) r←∅ IA←∆ (S) r←∆×∆ IA←∆ (S) r←id IA←∆ (S) r. which is not a tautology. Proposition 36 There exists an algorithm that given a SHOIQ ontology O and a sig˜ r←∅ nature S. 299 . definition P5.¬⊥ ∃r.t. S) = (∃r. and the interpretation of atomic concepts outside S to either the empty set ∅ or the set ∆ of all elements. These examples show that the limitation of our syntactic notion of locality is its inability to “compare” different occurrences of concepts from the given signature S. syntactic locality checking can be performed in polynomial time by matching an axiom according to Definition 32. syntactic locality does not detect all tautological axioms. Each local class of interpretations in Table 3 defines a corresponding class of local ontologies analogously to Example 26. Another example. Other classes of local interpretations can be constructed in a similar way by fixing the interpretations of elements outside S to different values. In Table 3 we have listed several such classes of local interpretations where we fix the interpretation of atomic roles outside S to be either the empty set ∅.r. and whose running time is polynomial in |O| + |S|. In Table 4 we have listed all of these classes together with examples of typical types of axioms used in ontologies. that tautological axioms do not occur often in realistic ontologies. however.t.4 5. It is reasonable to assume. These axioms are assumed to be an extension of our project ontology from Figure 1. the axiom α = (A A B) is local w. As a result. The axiom α is semantically local w. x | x ∈ ∆J } Table 3: Examples for Different Local Classes of Interpretations The converse of Proposition 33 does not hold in general since there are semantically local axioms that are not syntactically local. For example. It can be seen from Table 4 that different types of locality conditions are appropriate r←∅ for different types of axioms. where |O| and |S| are the number of symbols occurring in O and S respectively. as usual. S = {A. since τ (α. or the identity relation id on ∆. it is easy to see that α is not syntactically local w.5 Other Locality Classes r←∅ The locality condition given in Example 26 based on a class of local interpretations IA←∅ (·) is just a particular example of locality which can be used for testing safety.r. x | x ∈ ∆J } ∆J × ∆J { x. The locality condition based on IA←∅ (S) captures the domain axiom P4. the disjointness axiom P6. A ∈ S : rJ ∅ AJ ∆J ∆J ∆J ∆J × ∆J { x. On the other hand.¬B). and the functionality axiom P7 but 4. We indicate which axioms are local w. every S since it is a tautology (and hence true in every interpretation). determines whether O ∈ OA←∅ (S). but not syntactically local.r.r. Furthermore.

and so cannot be local w. since the individuals Human Genome and Gene prevent us from interpreting the atomic role has Focus and atomic concept Project with the empty r←∆×∆ set.4. that is for IA←∗ (S) and IA←∗ (S). the functionality axiom P7 is not captured by this locality condition since the atomic role has Focus is interpreted as the universal relation ∆×∆. can capture such assertions but are generally poor for other types of axioms. Still. Kazakov. namely to define two sets Con∅(S) and Con∆(S) of concepts for a signature S which are interpreted as the empty 300 . S) is replaced with the following: τ (A.r. The locality condition based on IA←∆ (S).Cuenca Grau. S. Hence. & Sattler α Axiom Project ? α ∈ Ax r←∅ A←∅ r←∆×∆ A←∅ r←id A←∅ r←∅ A←∆ r←∆×∆ A←∆ r←id A←∆ P4 ∃has Focus. one can use locality based on IA←∅ (S) or IA←∆ (S). however. The idea is the same as that used in Definition 32. S) = if A ∈ S and otherwise = A / (a ) r←∆×∆ r←id For the remaining classes of interpretations. For example.r. For r←∅ example. is not that straightforward. where the atomic roles and concepts outside S are interpreted as the largest possible sets. where the case (a) of the definition for τ (C. P5                                           BioMedical Project ≡ Project ∃has Focus. where every atomic role outside S is interpreted with the identity relation id on the interpretation domain. since it is not clear how to eliminate the universal roles and identity roles from the axioms and preserve validity in the respective classes of interpretations. Horrocks. S. since in P6 and P8 together imply axiom β = ({Human Genome} Bio Medicine ⊥) which uses the symbols from S only. Note that the modeling error E2 is not local for any of the given locality conditions. Gene) E2 ∀has Focus. checking locality. locality based on the class IA←∆ (S) can be tested as in Proposition 30. It might be possible to come up with algorithms for testing locality conditions for the classes of interpretation in Table 3 similar to the ones presented in Proposition 30. every subset of P containing P6 and P8 is not safe w.Cystic Fibrosis Table 4: A Comparison between Different Types of Locality Conditions neither of the assertions P8 or P9.t. which is not necessarily functional. In order to capture such functionality r←id r←id axioms.Cystic Fibrosis ∃has Focus. Note also that it is not possible to come up with a locality condition that captures all the axioms P4–P9. it is easy to come up with tractable syntactic approximations for all the locality conditions considered in this section in a similar manner to what has been done in Section 5.t.Bio Medicine Bio Medicine ⊥ P6 Project P7 Funct(has Focus) P8 Human Genome : Project P9 has Focus(Human Genome.

C is any concept. given two classes of ontologies O1 (·) and O2 (·). This safety class is not union-closed since.C ∆ ) . their union (O1 ∪ O2 )(·) is a class of ontologies defined by (O1 ∪O2 )(S) = O1 (S)∪O2 (S). It is easy to see by Definition 24 that if both O1 (·) and O2 (·) are safety classes then their union (O1 ∪ O2 )(·) is a safety class. Sig(R∅ ). In order to check safety of ontologies in practice. C(i) ∈ Con∆(S). Formally. for example. however. the union of such safety classes might be no longer unionclosed. in fact. subset-closed as well. one may try to apply different sufficient tests and check if any of them succeeds. m ≥ 2 . if each of the safety classes is also anti-monotonic or subset-closed. this gives a more powerful sufficient condition for safety. where some cases in the recursive definitions are present only for the indicated classes of interpretations. respectively. Obviously. as ∆I in every interpretation I from the class and see in which situations DL-constructors produce the elements from these sets.C ∆ | ( 1 Rid . In Figure 3 we gave recursive r←∗ ˜ r←∗ definitions for syntactically local axioms AxA←∗ (S) that correspond to classes IA←∗ (S) of interpretations from Table 3. ∆ C ∅ ∈ Con∅(S).5. A∆ .Modular Reuse of Ontologies: Theory and Practice Con∅(S) ::= ¬C ∆ | C | ∃R. R is any role | A∅ | ∃R∅ .C). r∅ . the ontology O1 consisting of axioms P4–P7 from Table 4 satisfies the first locality condition. r∆×∆ . One may wonder if locality classes already provide the most powerful sufficient 301 . rid ∈ S. but their union O1 ∪O2 satisfies neither the first nor the second locality condition and.C ∆ | ( n R∆×∆ . which can be seen as the union of the safety classes used in the tests.C C Con∆(S) ::= r←∗ for IA←∆ (·): r←∆×∆ for IA←∗ (·): r←id for IA←∗ (·): ∆ | ¬C ∅ | C1 ∆ C2 | A∆ | ∃R∆×∆ . Moreover. as seen in Example 37.5. 5. the ontology O2 consisting of axioms P8–P9 satisfies the second locality condition.C | | ( m Rid .6 Combining and Extending Safety Classes In the previous section we gave examples of several safety classes based on different local classes of interpretations and demonstrated that different classes are suitable for different types of axioms. b) | Trans(rid ) | Funct(Rid ) Figure 3: Syntactic Locality Conditions for the Classes of Interpretations in Table 3 set and.C ∅ n R∅ . Where: A∅ .C ∆ ) | ∃Rid . Sig(Rid ) S. or respectively. is not even safe for S as we have shown in Section 5.C ∅ | r←∗ for IA←∅ (·): r←∅ for IA←∗ (·): r←id for IA←∗ (·): C∅ | C∅ n R. as demonstrated in the following example: r←∆×∆ r←∅ Example 37 Consider the union (OA←∅ ∪ OA←∆ )(·) of two classes of local ontologies r←∆×∆ r←∅ OA←∅ (·) and OA←∆ (·) defined in Section 5. ♦ As shown in Proposition 33. Unfortunately the unionclosure property for safety classes is not preserved under unions. then the union is anti-monotonic. Sig(R∆×∆ ). every locality condition gives a union-closed safety class. C|C C∆ | a : C∆ ˜ r←∗ AxA←∗ (S) ::= C ∅ r←∅ for IA←∗ (·): r←∆×∆ for IA←∗ (·): r←id for IA←∗ (·): | R∅ |R R | Trans(r∅ ) | Funct(R∅ ) R∆×∆ | Trans(r∆×∆ ) | r∆×∆ (a.
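The union construction of Section 5.6 is a one-line combinator over the predicate encoding of safety classes used in the earlier sketches; as Example 37 warns, the result is a safety class but need not be union-closed.

```python
def union_of_safety_classes(*classes):
    """(O1 ∪ O2)(S) = O1(S) ∪ O2(S): an ontology passes the combined test
    if it passes at least one of the component tests.  Each component is a
    predicate taking (signature, ontology), as assumed in the earlier
    sketches; this encoding is an illustration, not an API from the paper."""
    def combined(signature, ontology):
        return any(cls(signature, ontology) for cls in classes)
    return combined
```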

There are some difficulties in extending the proof of Proposition 39 to other locality classes considered in Section 5. since α ∈ AxA←∅ (S). Hence there is probably room to extend the locality classes OA←∅ (·) and OA←∆ (·) for L = SHOIQ while preserving union-closure. ·) in these cases (see a related discussion in Section 5. Take Q to consist of all tautologies of form ⊥ A and ⊥ ∃r. ·) as has been discussed in Section 5. and modifying τ (·. for every A. We claim that O ∪ P ∪ Q is not a deductive Sig(Q)-conservative extension of Q. 302 . Hence. Horrocks. Note that Sig(O ∪ P) ∩ Sig(Q) ⊆ S. Kazakov.Cuenca Grau. r ∈ Sig(O) \ S. First.5.5. () r←∅ Intuitively.5). S) where τ (·. According to Definition 38. since O |= α and O ∪ P ∪ Q r←∅ has only models from IA←∅ (S). ⊥ for every atomic r←∅ concept A and atomic role r from Sig(O) \ S. ⊥ for all A. It is easy to see that P ∈ OA←∅ (S). the ontology P is chosen in such a way that P ∈ OA←∅ (S) and O ∪ P ∪ Q r←∅ has only models from IA←∅ (S). S) contains symbols from S only (see Footnote 3 on r←∅ r←∅ p. it is the case that O ∪ P is safe w. then there exists a signature S and an ontology O over L. / β is not a tautology. As has been shown in the proof of r←∅ this proposition. S in L. that is. We demonstrate that r←∅ r←∅ this is not possible for O(·) = OA←∅ (·) or O(·) = OA←∆ (·) r←∅ We first consider the case O(·) = OA←∅ (·) and then show how to modify the proof for r←∅ the case O(·) = OA←∆ (·). A safety class O1 (·) is maximal unionclosed for a language L if O1 (·) is union-closed and for every union-closed safety class O2 (·) that extends O1 (·) and every O over L we have that O ∈ O2 (S) implies O ∈ O1 (S).r. (ii) O is safe w. since β r←∅ does not contain individuals. it is the case that O ∪ P ∪ Q |= β. Second. and (iii) for every P ∈ O(S).5 are maximal union-closed safety classes for L = SHIQ. for every ontology Q over L with Sig(Q) ∩ Sig(O ∪ P) ⊆ S it is the case that O ∪ P ∪ Q is a deductive conservative extension of Q. O ∪ P ∪ Q is not a deductive Sig(Q)-conservative extension of Q ( ). Let / / β := τ (α. it is not clear how to define the function τ (·. it is not clear how to force interpretations of roles to be the universal or identity relation using SHOIQ axioms. Definition 38 (Maximal Union-Closed Safety Class). since this will not guarantee that β = τ (α. 297). r←∅ r←∅ Since O ∈ OA←∅ (S). Note also that the proof of Proposition 39 does not work in the presence of nominals. if a safety class O(·) is not maximal union-closed for a language L. Take P to consist of the axioms A ⊥ and ∃r. thus Q |= β. there exists an axiom α ∈ O such that α ∈ AxA←∅ (S). ♦ r←∅ r←∅ Proposition 39 The classes of local ontologies OA←∅ (·) and OA←∆ (·) defined in Section 5. I |= α iff I |= β for every I ∈ IA←∅ (S). such that (i) O ∈ / O(S). Proof. by Definition 1. & Sattler conditions for safety that satisfy all the desirable properties from Definition 24. A safety class O2 (·) extends a safety class O1 (·) if O1 (S) ⊆ O2 (S) for every S. Surprisingly this is the case to a certain extent for some locality classes considered in Section 5.r. S in L.t.t. ·) is defined as in Proposition 30. r←∅ For O(·) = OA←∆ (·) the proof can be repeated by taking P to consist of axioms A and ∃r.5. Let O be an ontology over L that satisfies conditions (i)–(iii) above. Now. We define ontologies P and Q as follows. Q is an ontology which implies nothing but tautologies and uses all the atomic concepts and roles from S. r ∈ S. By Proposition 30. we have that Sig(β) ⊆ S = Sig(Q) and.

6. Extracting Modules Using Safety Classes

In this section we revisit the problem of extracting modules from ontologies. As shown in Corollary 23 in Section 4, there exists no general procedure that can recognize or extract all the (minimal) modules for a signature in an ontology in finite time. The techniques described in Section 5, however, can be reused for extracting particular families of modules that satisfy certain sufficient conditions. Proposition 18 establishes the relationship between the notions of safety and module: a subset O1 of O is an S-module in O provided that O \ O1 is safe for S ∪ Sig(O1). Therefore, any safety class O(·) can provide a sufficient condition for testing modules—that is, in order to prove that O1 is an S-module in O, it is sufficient to show that O \ O1 ∈ O(S ∪ Sig(O1)). A notion of modules based on this property can be defined as follows.

Definition 40 (Modules Based on Safety Class) Let L be an ontology language and O(·) a safety class for L. Given an ontology O and a signature S over L, we say that Om ⊆ O is an O(·)-based S-module in O if O \ Om ∈ O(S ∪ Sig(Om)). ♦

Remark 41 Note that for every safety class O(·), ontology O and signature S, there exists at least one O(·)-based S-module in O, namely O itself: indeed, by Definition 24, the empty ontology ∅ = O \ O belongs to O(S) for every O(·) and S. Note also that it follows from Definition 40 that Om is an O(·)-based S-module in O iff Om is an O(·)-based S′-module in O for every S′ with S ⊆ S′ ⊆ (S ∪ Sig(Om)). ♦

It is clear that, according to Definition 40, any procedure for checking membership in a safety class O(·) can be used directly for checking whether Om is a module based on O(·). In order to extract an O(·)-based module, it is in principle sufficient to enumerate all possible subsets of the ontology and check whether any of them is a module based on O(·). In practice, however, it is possible to avoid checking all the possible subsets of the input ontology. Figure 4 presents such an optimized module-extraction procedure. The procedure manipulates configurations of the form Om | Ou | Os, which represent a partitioning of the ontology O into three disjoint subsets Om, Ou and Os. The set Om accumulates the axioms of the extracted module; the set Os is intended to be safe w.r.t. S ∪ Sig(Om); the set Ou, which is initialized to O, contains the unprocessed axioms. These axioms are distributed among Om and Os according to rules R1 and R2. Given an axiom α from Ou, rule R1 moves α into Os provided Os remains safe w.r.t. S ∪ Sig(Om) according to the safety class O(·); otherwise, rule R2 moves α into Om and moves all axioms from Os back into Ou, since Sig(Om) might expand and axioms from Os might no longer be safe w.r.t. S ∪ Sig(Om). At the end of this process, when no axioms are left in Ou, the set Om is an O(·)-based module in O. The rewrite rules R1 and R2 preserve the invariants I1–I3 given in Figure 4: invariant I1 states that the three sets Om, Ou and Os form a partitioning of O; invariant I2 states that the set Os satisfies the safety test for S ∪ Sig(Om) w.r.t. O(·); and invariant I3 establishes that the rewrite rules either add elements to Om, or add elements to Os without changing Om—in other words, the pair (|Om|, |Os|) consisting of the sizes of these sets increases in lexicographical order.

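Read operationally, the rewrite rules R1 and R2 just described (and given in Figure 4 below) amount to the following sketch, where is_safe(axioms, signature) stands in for an arbitrary membership test for the safety class O(·) and sig(axioms) for signature extraction; both helpers, and the representation of axioms as a set, are assumptions made for illustration.

    def extract_module(O, S, is_safe, sig):
        # Om accumulates the module, Ou holds the unprocessed axioms, and Os the axioms
        # currently known to be safe w.r.t. S extended with the module signature.
        O_m, O_u, O_s = set(), set(O), set()
        while O_u:
            alpha = O_u.pop()
            if is_safe(O_s | {alpha}, S | sig(O_m)):
                O_s.add(alpha)                  # rule R1: alpha stays outside the module
            else:
                O_m.add(alpha)                  # rule R2: alpha joins the module ...
                O_u |= O_s                      # ... and previously safe axioms are re-examined
                O_s = set()
        return O_m                              # on termination, O \ Om belongs to O(S u sig(Om))

With the locality test sketched earlier, is_safe(axioms, signature) would simply check local_axiom for every axiom in the given set.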
Input: an ontology O and a signature S over L, a safety class O(·)
Output: a module Om in O based on O(·)
Configuration: Om | Ou | Os (module | unprocessed | safe)
Initial configuration: ∅ | O | ∅        Termination condition: Ou = ∅
Rewrite rules:
  R1. Om | Ou ∪ {α} | Os  ⟹  Om | Ou | Os ∪ {α}         if (Os ∪ {α}) ∈ O(S ∪ Sig(Om))
  R2. Om | Ou ∪ {α} | Os  ⟹  Om ∪ {α} | Ou ∪ Os | ∅      if (Os ∪ {α}) ∉ O(S ∪ Sig(Om))
Invariants for Om | Ou | Os:
  I1. O = Om ⊎ Ou ⊎ Os
  I2. Os ∈ O(S ∪ Sig(Om))
  I3. For every step Om | Ou | Os ⟹ O′m | O′u | O′s:  (|Om|, |Os|) <lex (|O′m|, |O′s|)

Figure 4: A Procedure for Computing Modules Based on a Safety Class O(·)

Proposition 42 (Correctness of the Procedure from Figure 4) Let O(·) be a safety class for an ontology language L, O an ontology over L, and S a signature over L. Then: (1) the procedure in Figure 4 with input O, S, O(·) terminates and returns an O(·)-based S-module Om in O; and (2) if, in addition, O(·) is an anti-monotonic, subset-closed and union-closed safety class, then there is a unique minimal O(·)-based S-module in O, and the procedure returns precisely this minimal module.

Proof. (1) The procedure based on the rewrite rules from Figure 4 always terminates for the following reasons: (i) for every configuration derived by the rewrite rules, the sets Om, Ou and Os form a partitioning of O (see invariant I1 in Figure 4), and therefore the size of every set is bounded; (ii) at each rewrite step, the pair (|Om|, |Os|) increases in lexicographical order (see invariant I3 in Figure 4); and (iii) if Ou ≠ ∅ then it is always possible to apply one of the rewrite rules R1 or R2. Hence the procedure always terminates with Ou = ∅. Upon termination, O is partitioned into Om and Os, and by invariant I2 we have Os ∈ O(S ∪ Sig(Om)), which implies, by Definition 40, that Om is an O(·)-based S-module in O.

(2) Now suppose that, additionally, O(·) is anti-monotonic, subset-closed and union-closed, and suppose that O′m is an O(·)-based S-module in O. We demonstrate by induction that for every configuration Om | Ou | Os derivable from ∅ | O | ∅ by the rewrite rules R1 and R2, it is the case that Om ⊆ O′m. This will prove that the module computed by the procedure is a subset of every O(·)-based S-module in O and hence is the smallest O(·)-based S-module in O. Indeed, for the base case we have Om = ∅ ⊆ O′m, and the rewrite rule R1 does not change the set Om. For the rewrite rule R2 we have: Om | Ou ∪ {α} | Os ⟹ Om ∪ {α} | Ou ∪ Os | ∅ if (Os ∪ {α}) ∉ O(S ∪ Sig(Om)) (†).

Suppose, to the contrary, that Om ⊆ O′m but Om ∪ {α} ⊄ O′m, that is, α ∉ O′m. Then α ∈ O′s := O \ O′m. Since O′m is an O(·)-based S-module in O, by Definition 40 we have that O′s ∈ O(S ∪ Sig(O′m)). Since O(·) is subset-closed, {α} ∈ O(S ∪ Sig(O′m)); and since Om ⊆ O′m and O(·) is anti-monotonic, we have {α} ∈ O(S ∪ Sig(Om)). Since, by invariant I2 from Figure 4, Os ∈ O(S ∪ Sig(Om)) and O(·) is union-closed, we have Os ∪ {α} ∈ O(S ∪ Sig(Om)), which contradicts (†). This contradiction implies that the rule R2 also preserves the property Om ⊆ O′m.

Claim (1) of Proposition 42 establishes that the procedure from Figure 4 terminates for every input and produces a module based on a given safety class. If the safety class O(·) satisfies additional desirable properties then, as stated in claim (2) of Proposition 42, the procedure, in fact, produces the smallest possible module based on the safety class.

If the safety class O(·) satisfies additional desirable properties, like those based on the classes of local interpretations described in Section 5, it is also possible to optimize the procedure shown in Figure 4. If O(·) is union-closed, then, instead of checking whether (Os ∪ {α}) ∈ O(S ∪ Sig(Om)) in the conditions of rules R1 and R2, it is sufficient to check whether {α} ∈ O(S ∪ Sig(Om)), since it is already known that Os ∈ O(S ∪ Sig(Om)). If O(·) is, in addition, compact and subset-closed, then, instead of moving all the axioms in Os back to Ou in rule R2, it is sufficient to move only those axioms from Os that contain at least one symbol from α which did not occur in Om before, since the set of remaining axioms will stay in O(S ∪ Sig(Om)). In Figure 5 we present an optimized version of the algorithm from Figure 4 for such safety classes. Moreover, it is possible to show that this procedure runs in polynomial time, assuming that the safety test can also be performed in polynomial time.

Input: an ontology O, a signature S, a safety class O(·)
Output: a module Om in O based on O(·)
Initial configuration: ∅ | O | ∅        Termination condition: Ou = ∅
Rewrite rules:
  R1. Om | Ou ∪ {α} | Os  ⟹  Om | Ou | Os ∪ {α}              if {α} ∈ O(S ∪ Sig(Om))
  R2. Om | Ou ∪ {α} | Os ∪ Os^α  ⟹  Om ∪ {α} | Ou ∪ Os^α | Os   if {α} ∉ O(S ∪ Sig(Om)) and Sig(Os) ∩ Sig(α) ⊆ Sig(Om)
(that is, Os^α collects exactly the previously safe axioms that share with α a symbol not already in Sig(Om))

Figure 5: An Optimized Procedure for Computing Modules Based on a Compact, Subset-Closed and Union-Closed Safety Class O(·)

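Under the assumptions of Figure 5—a compact, subset-closed and union-closed safety class such as locality—the earlier sketch can be tightened so that each axiom is tested in isolation and, when rule R2 fires, only the previously processed axioms that mention one of the newly added symbols are re-queued. As before, is_local and sig are illustrative placeholders, not the interface of our implementation.

    def extract_module_optimized(O, S, is_local, sig):
        # Variant of the previous sketch along the lines of the rules in Figure 5;
        # is_local(axiom, signature) tests a single axiom for membership in O(.).
        O_m, O_u, O_s = set(), set(O), set()
        while O_u:
            alpha = O_u.pop()
            sig_m = S | sig(O_m)
            if is_local(alpha, sig_m):
                O_s.add(alpha)                              # rule R1
            else:
                O_m.add(alpha)                              # rule R2
                new_symbols = sig({alpha}) - sig_m
                retest = {b for b in O_s if sig({b}) & new_symbols}
                O_u |= retest                               # only axioms touching new symbols
                O_s -= retest                               # the rest provably remain safe
        return O_m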
M3. In other words. located In. M4 | − | M5 New elements in S ∪ Sig(Om ) − − Fibrosis. in our case. M5 | − M1. Kazakov. but became non-local when new axioms were added to Om . Note that it is possible to apply the rewrite rules for different choices of the axiom α in Ou . Genetic Origin Genetic Fibrosis − − − {α} ∈ O(S ∪ Sig(Om ))? Yes Yes No No No Yes No ⇒ ⇒ ⇒ ⇒ ⇒ ⇒ ⇒ R1 R1 R2 R2 R2 R1 R2 1 − | M1. It turns out that any O(·)-based module Om is guaranteed to cover the set of minimal modules. which is the smallest OA←∅ (·)-based S-module in O. However. M4. and 4–8 only. Om contains all the S-essential axioms in O. M5 | − Cystic Fibrosis. . for example. M3. Note also that different results for the locality tests are obtained in these cases: both M2 and M3 were local w. M4. M4 | M2 | M5 M1. the implicit non-determinism in the procedure from Figure 5 does not have any impact on the result of the procedure. Then O2 := O1 ∩ Om is an S-module in O w. α | Os 2 3 4 5 6 7 8 − | M1. and α = M3 in configurations 2 and 4. the rewrite procedure produces a module Om consisting of axioms M1– M4.t. According to Claim (2) of Proposition 42 all such r←∅ computations should produce the same module Om . L and Om an O(·)-based S-module in O. which results in a different computation. It is also easy to see that. M4. Lemma 44 Let O(·) be an anti-monotonic subset-closed safety class for an ontology language L.r. . Pancreas. Genetic Disorder} Note that some axioms α are tested for safety several times for different configurations.r. M5 | M2. Let O1 be an S-module in O w. O an ontology. S ∪ Sig(Om ) when Om = ∅. M4 | M2. ♦ It is worth examining the connection between the S-modules in an ontology O based on a particular safety class O(·) and the actual minimal S-modules in O. .t.Cuenca Grau. given O and S. Genetic Disorder Table 5: A trace of the Procedure in Figure 5 for the input Q = {M1. M3 | M2. L.r. provided that O(·) is anti-monotonic and subset-closed.t. M5 | − M1. M4. The following Lemma provides the main technical argument underlying this result. M3. In other words. that is. M2. M4. M3. M3. the procedure from Figure 5 has implicit non-determinism. the axiom α = M2 is tested for safety in both configurations 1 and 7. M2. because the set Ou may increase after applications of rule R2. In our example. & Sattler Om | Ou . alternative choices for α may result in shorter computations: in our example we could have selected axiom M1 in the first configuration instead of M2 which would have led to a shorter trace consisting of configurations 1. M5} from Figure 1 and S = {Cystic Fibrosis. M3. M5 | M2 − | M1. has Origin. and S a signature over L. M3 M1 | M2. syntactic locality produces the same results for the tests. M5 | − M1. 306 . . Horrocks.

Proof. Since Om is an O(·)-based S-module in O, by Definition 40 we have that O \ Om ∈ O(S ∪ Sig(Om)). Since O1 \ O2 = O1 \ Om ⊆ O \ Om and O(·) is subset-closed, it is the case that O1 \ O2 ∈ O(S ∪ Sig(Om)). Since Sig(O2) ⊆ Sig(Om) and O(·) is anti-monotonic, we have that O1 \ O2 ∈ O(S ∪ Sig(O2)); in particular, O2 is an S-module in O1 w.r.t. L. Since O1 is an S-module in O w.r.t. L, by Definition 12 and Definition 1, O2 is an S-module in O w.r.t. L.

Corollary 45 Let O(·) be an anti-monotonic, subset-closed safety class for L and Om an O(·)-based S-module in O. Then Om contains all the S-essential axioms in O w.r.t. L.

Proof. Let O1 be a minimal S-module in O w.r.t. L. By Lemma 44, O2 := O1 ∩ Om is an S-module in O w.r.t. L; if O1 were not contained in Om, then O2 would be an S-module in O w.r.t. L which is strictly contained in O1, contradicting the minimality of O1. Hence Om is a superset of every minimal S-module in O and hence contains all the S-essential axioms in O w.r.t. L.

As shown in Section 3.4, all the axioms M1–M4 are essential for the ontology O and signature S considered in Example 43. We have seen in this example that the locality-based S-module extracted from O contains all of these axioms, in accordance with Corollary 45. In some cases the extracted module contains only essential axioms; in general, however, locality-based modules might contain non-essential axioms.

An interesting application of modules is the pruning of irrelevant axioms when checking whether an axiom α is implied by an ontology O: in order to check whether O |= α, it suffices to retrieve the module for Sig(α) and verify whether the implication holds w.r.t. this module. Moreover, in some cases it is sufficient to extract a module for a subset of the signature of α, which, in general, leads to smaller modules. Indeed, if the safety class being used enjoys some nice properties then, in order to test subsumption between a pair of atomic concepts, it suffices to extract a module for only one of them, as given by the following proposition:

Proposition 46 Let O(·) be a compact, union-closed safety class for an ontology language L, O an ontology, and A, B atomic concepts. Let OA be an O(·)-based module for S = {A} in O. Then:
1. If O |= α := (A ⊑ B) and {B ⊑ ⊥} ∈ O(∅), then OA |= α.
2. If O |= α := (B ⊑ A) and {⊤ ⊑ B} ∈ O(∅), then OA |= α.

Proof. 1. Consider two cases: (a) B ∈ Sig(OA) and (b) B ∉ Sig(OA).
(a) By Remark 41, OA is an O(·)-based module in O for S = {A, B}. Since Sig(α) ⊆ S, it is the case that O |= α implies OA |= α.
(b) Consider O′ = O ∪ {B ⊑ ⊥}. Note that B ⊑ ⊥ ∉ OA, since B ∉ Sig(OA); hence (O \ OA) ∪ {B ⊑ ⊥} = O′ \ OA. Since Sig(B ⊑ ⊥) = {B}, B ∉ Sig(OA), {B ⊑ ⊥} ∈ O(∅) and O(·) is compact, by Definition 24 it is the case that {B ⊑ ⊥} ∈ O(S ∪ Sig(OA)). Since OA is an O(·)-based S-module in O, by Definition 40, O \ OA ∈ O(S ∪ Sig(OA)); and since O(·) is union-closed, it is the case that (O \ OA) ∪ {B ⊑ ⊥} ∈ O(S ∪ Sig(OA)). Hence, by Definition 40, OA is an O(·)-based S-module in O′. Now, since O |= (A ⊑ B), it is the case that O′ |= (A ⊑ ⊥); and since OA is a module in O′ for S = {A}, we have that OA |= (A ⊑ ⊥), which implies OA |= (A ⊑ B).

2. The proof of this case is analogous to that of Case 1: Case (a) is applicable without changes, and in Case (b) we show that OA is an O(·)-based module for S = {A} in O′ = O ∪ {⊤ ⊑ B}. Since O |= (B ⊑ A), it is the case that O′ |= (⊤ ⊑ A); hence OA |= (⊤ ⊑ A), which implies OA |= (B ⊑ A).

Proposition 46 implies that a module based on a safety class for a single atomic concept A can be used for capturing either the super-concepts (Case 1) or the sub-concepts (Case 2) B of A, provided that the safety class captures, when applied to the empty signature, axioms of the form B ⊑ ⊥ (Case 1) or ⊤ ⊑ B (Case 2). That is, B is a super-concept or a sub-concept of A in the ontology if and only if it is such in the module. This property can be used, for example, to optimize the classification of ontologies. For this purpose, it is convenient to use a syntactically tractable approximation of the safety class in use; for example, one could use the syntactic locality conditions given in Figure 3 instead of their semantic counterparts.

In order to check whether the subsumption A ⊑ B holds in an ontology O, it is therefore sufficient to extract a module in O for S = {A} using a modularization algorithm based on a safety class in which the ontology {B ⊑ ⊥} is local w.r.t. the empty signature, and check whether this subsumption holds w.r.t. the module.

Corollary 47 Let O be a SHOIQ ontology and A, B atomic concepts. Let O^{r←∗}_{A←∅}(·) and O^{r←∗}_{A←∆}(·) be locality classes based on local classes of interpretations of the form I^{r←∗}_{A←∅}(·) and I^{r←∗}_{A←∆}(·), respectively, from Table 3. Let OA be a module for S = {A} in O based on O^{r←∗}_{A←∅}(·) and OB a module for S = {B} in O based on O^{r←∗}_{A←∆}(·). Then O |= (A ⊑ B) iff OA |= (A ⊑ B) iff OB |= (A ⊑ B).

Proof. It is easy to see that {B ⊑ ⊥} ∈ O^{r←∗}_{A←∅}(∅) and {⊤ ⊑ A} ∈ O^{r←∗}_{A←∆}(∅). The corollary then follows from Proposition 46.

It is possible to combine modularization procedures to obtain modules that are smaller than the ones obtained using these procedures individually. For example, in order to check the subsumption O |= (A ⊑ B), one could first extract a O^{r←∗}_{A←∅}(·)-based module M1 for S = {A} in O; by Corollary 47, this module is complete for all the super-concepts of A in O, including B. One could then extract a O^{r←∗}_{A←∆}(·)-based module M2 for S = {B} in M1 which, by Corollary 47, is complete for all the sub-concepts of B in M1, including A. Since M2 is an S-module in M1 for S = {A, B} and M1 is an S-module in the original ontology O, it is the case that M2 is also an S-module in O, and it suffices to check the subsumption w.r.t. the (typically much smaller) fragment M2.

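As a sketch of the combination just described (and of the module-based pruning behind Proposition 46), the subsumption test O |= A ⊑ B can be delegated to a small nested fragment. The two extraction functions and the reasoner call below are placeholders for module extraction based on the two dual locality classes and for an arbitrary DL reasoner, respectively; they are assumptions made for illustration.

    def subsumes_via_nested_modules(O, A, B, extract_bot_module, extract_top_module,
                                    reasoner_entails_subsumption):
        # extract_bot_module: extraction for the class reading non-S symbols as empty,
        #   so M1 preserves all super-concepts of A in O (including B, if entailed).
        # extract_top_module: extraction for the dual class, so M2 preserves all
        #   sub-concepts of B in M1 (including A, if entailed).
        M1 = extract_bot_module(O, {A})
        M2 = extract_top_module(M1, {B})
        return reasoner_entails_subsumption(M2, A, B)   # checked over the small fragment M2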
7. Related Work

We have seen in Section 3 that the notion of conservative extension is valuable in the formalization of ontology reuse tasks. The problem of deciding conservative extensions has been recently investigated in the context of ontologies (Ghilardi et al., 2006; Lutz et al., 2007; Lutz & Wolter, 2007). The problem of deciding whether P ∪ Q is a deductive S-conservative extension of Q is EXPTIME-complete for EL (Lutz & Wolter, 2007), 2-EXPTIME-complete w.r.t. ALCIQ (roughly OWL-Lite) (Lutz et al., 2007), and undecidable w.r.t. ALCIQO (roughly OWL DL) (Lutz et al., 2007). Furthermore, checking model conservative extensions is already undecidable for EL (Lutz & Wolter, 2007), and for ALC it is even not semi-decidable (Lutz et al., 2007).

In the last few years, a rapidly growing body of work has been developed under the headings of Ontology Mapping and Alignment, Ontology Merging, Ontology Integration, and Ontology Segmentation (Kalfoglou & Schorlemmer, 2003; Noy, 2004a, 2004b). This field is rather diverse and has roots in several communities. In particular, numerous techniques for extracting fragments of ontologies for the purposes of knowledge reuse have been proposed. Most of these techniques rely on syntactically traversing the axioms in the ontology and employing various heuristics to determine which axioms are relevant and which are not.

An example of such a procedure is the algorithm implemented in the Prompt-Factor tool (Noy & Musen, 2003). Given a signature S and an ontology Q, the algorithm retrieves a fragment Q1 ⊆ Q as follows: first, the axioms in Q that mention any of the symbols in S are added to Q1; second, S is expanded with the symbols in Sig(Q1). These steps are repeated until a fixpoint is reached. For our example in Section 3, when S = {Cystic Fibrosis, Genetic Disorder} and Q consists of axioms M1–M5 from Figure 1, the algorithm first retrieves the axioms M1, M4 and M5 containing these terms, then expands S with the symbols mentioned in these axioms; after this step, all the remaining axioms of Q are retrieved, that is, the fragment extracted by the Prompt-Factor algorithm consists of all the axioms M1–M5. In this case, the Prompt-Factor algorithm extracts a module (though not a minimal one); note that the procedure returns the whole ontology whenever the expanded signature ends up containing all the symbols of Q. In general, however, the extracted fragment is not guaranteed to be a module. For example, consider an ontology Q = {A ≡ ¬A, B ⊑ C} and α = (C ⊑ B). The ontology Q is inconsistent due to the axiom A ≡ ¬A: any axiom (and α in particular) is thus a logical consequence of Q. Given S = {B, C}, the Prompt-Factor algorithm extracts Q2 = {B ⊑ C}; however, Q2 ⊭ α, and so Q2 is not a module in Q. The Prompt-Factor algorithm may fail even if Q is consistent. For example, consider an ontology Q = {⊤ ⊑ {a}, A ⊑ B}, α = (A ⊑ ∀r.A), and S = {A}. It is easy to see that Q is consistent, admits only single-element models, and α is satisfied in every such model; that is, Q |= α. In this case, the Prompt-Factor algorithm extracts Q1 = {A ⊑ B}, which does not imply α, and so Q1 is not a module in Q.

Another example is Seidenberg's segmentation algorithm (Seidenberg & Rector, 2006), which was used for segmentation of the medical ontology GALEN (Rector & Rogers, 1999). Currently, the full version of GALEN cannot be processed by reasoners, so the authors investigate the possibility of splitting GALEN into small "segments" which can be processed by reasoners separately. The authors describe a segmentation procedure which, given a set of atomic concepts S, computes a "segment" for S in the ontology, and they discuss which concepts and roles should be included in the segment and which should not. In particular, the segment should contain all "super-" and "sub-" concepts of the input concepts, as well as concepts that are "linked" from the input concepts together with their "super-concepts" but not their "sub-concepts"; for the included concepts, "their restrictions", "intersection", "union", and "equivalent concepts" should be considered by including the roles and concepts they contain, together with their "super-concepts" and "super-roles" but not their "sub-concepts" and "sub-roles". The description of the procedure is very high-level. From the description it is not entirely clear whether the procedure works with a classified ontology (which is unlikely in the case of GALEN, since the full version of GALEN has not been classified by any existing reasoner) or, otherwise, how the "super-" and "sub-" concepts are computed. It is also not clear which axioms should be included in the segment in the end, since the procedure talks only about the inclusion of concepts and roles.

A different approach to module extraction proposed in the literature (Stuckenschmidt & Klein, 2004) consists of partitioning the concepts in an ontology to facilitate visualization of and navigation through the ontology. The algorithm uses a set of heuristics for measuring the degree of dependency between the concepts in the ontology and outputs a graphical representation of these dependencies. The algorithm is intended as a visualization technique and does not establish a correspondence between the nodes of the graph and sets of axioms in the ontology.

What is common between the modularization procedures we have mentioned is the lack of a formal treatment of the notion of module. The papers describing these procedures do not attempt to formally specify their intended outputs, but rather argue what should and should not be in the modules based on intuitive notions; in particular, they do not take the semantics of the ontology languages into account. It might be possible to formalize these algorithms and identify ontologies for which the "intuition-based" modularization procedures work correctly; such studies, however, are beyond the scope of this paper.

Module extraction in ontologies has also been investigated from a formal point of view (Cuenca Grau et al., 2006b). Cuenca Grau et al. (2006b) define a notion of a module QA in an ontology Q for an atomic concept A. One of the requirements for the module is that Q should be a conservative extension of QA (in the paper, QA is called a logical module in Q). The paper imposes an additional requirement on modules, namely that the module QA should entail all the subsumptions in the original ontology between atomic concepts involving A and other atomic concepts in QA. The authors present an algorithm for partitioning an ontology into disjoint modules and prove that the algorithm is correct provided that certain safety requirements on the input ontology hold: the ontology should be consistent, should not contain unsatisfiable atomic concepts, and should have only "safe" axioms (which, in our terms, means that they are local for the empty signature). In contrast, the algorithm we present here works for any ontology, including those containing "non-safe" axioms.

Concerning the problem of ontology reuse, there have been various proposals for "safely" combining modules, such as E-connections (Cuenca Grau, Parsia, & Sirin, 2006a), Distributed Description Logics (Borgida & Serafini, 2003), and Package-based Description Logics (Bao, Caragea, & Honavar, 2006). Most of these proposals introduce a specialized semantics for controlling the interaction between the importing and the imported modules in order to avoid side-effects. In contrast, we assume here that reuse is performed by simply building the logical union of the axioms in the modules under the standard semantics, and we establish a collection of reasoning services, such as safety testing, to check for side-effects. The interested reader can find in the literature a detailed comparison between the different approaches for combining ontologies (Cuenca Grau & Kutz, 2007).

The growing interest in the notion of "modularity" in ontologies has recently been reflected in a workshop on modular ontologies5 held in conjunction with the International Semantic Web Conference (ISWC-2006).

5. For information, see the homepage of the workshop: http://www.cild.iastate.edu/events/womo.html

8. Implementation and Proof of Concept

In this section we provide empirical evidence of the appropriateness of locality for safety testing and module extraction. For this purpose, we have implemented a syntactic locality checker for the locality classes Õ^{r←∅}_{A←∅}(·) and Õ^{r←∆×∆}_{A←∆}(·), as well as the algorithm for extracting modules given in Figure 5 of Section 6. First, we show that the locality class Õ^{r←∅}_{A←∅}(·) provides a powerful sufficiency test for safety which works for many real-world ontologies. Second, we show that Õ^{r←∅}_{A←∅}(·)-based modules are typically very small compared to both the size of the ontology and the modules extracted using other techniques. Third, we report on our implementation in the ontology editor Swoop (Kalyanpur, Parsia, Sirin, Cuenca Grau, & Hendler, 2006) and illustrate the combination of the modularization procedures based on the classes Õ^{r←∅}_{A←∅}(·) and Õ^{r←∆×∆}_{A←∆}(·).

8.1 Locality for Testing Safety

We have run our syntactic locality checker for the class Õ^{r←∅}_{A←∅}(·) over the ontologies from a library of 300 ontologies of various sizes and complexity, some of which import each other (Gardiner, Tsarkov, & Horrocks, 2006).6 For all ontologies P that import an ontology Q, we check whether P belongs to the locality class for S = Sig(P) ∩ Sig(Q). It turned out that, of the 96 ontologies in the library that import other ontologies, all but 11 were syntactically local for S (and hence also semantically local for S). Of the 11 non-local ontologies, 7 are written in the OWL-Full species of OWL (Patel-Schneider et al., 2004), to which our framework does not yet apply. The remaining 4 non-localities are due to the presence of so-called mapping axioms of the form A ≡ B′, where A ∈ S and B′ ∉ S. Note that these axioms simply indicate that the atomic concepts A and B′ in the two ontologies under consideration are synonyms. We were able to easily repair these non-localities as follows: we replace every occurrence of A in P with B′ and then remove the mapping axiom from the ontology. After this transformation, all 4 non-local ontologies turned out to be local.

6. The library is available at http://www.cs.man.ac.uk/~horrocks/testing/

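The safety check over the ontology library just described amounts to the loop sketched below; the library representation, the sig helper and the per-axiom locality test is_local are assumptions made for illustration, not the interface of our checker.

    def check_import_safety(library, sig, is_local):
        # library maps each importing ontology P (a set of axioms) to the ontologies it imports.
        non_local = []
        for P, imported in library.items():
            for Q in imported:
                shared = sig(P) & sig(Q)                    # symbols P reuses from Q
                if not all(is_local(axiom, shared) for axiom in P):
                    non_local.append((P, Q))                # P may constrain the reused symbols
        return non_local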
8.2 Extraction of Modules

In this section, we compare three modularization7 algorithms that we have implemented using Manchester's OWL API:8

A1: The Prompt-Factor algorithm (Noy & Musen, 2003).
A2: The segmentation algorithm proposed by Cuenca Grau et al. (2006).
A3: Our modularization algorithm (the procedure in Figure 5), based on the locality class Õ^{r←∅}_{A←∅}(·).

The aim of the experiments described in this section is not to provide a thorough comparison of the quality of existing modularization algorithms, since each algorithm extracts "modules" according to its own requirements, but rather to give an idea of the typical size of the modules extracted from real ontologies by each of the algorithms.

7. In this section, by "module" we understand the result of the considered modularization procedures, which may not necessarily be a module according to Definition 10 or 12.
8. http://sourceforge.net/projects/owlapi

As a test suite, we have collected a set of well-known ontologies available on the Web, which we divided into two groups:

Simple. These ontologies are expressed in a simple ontology language and are of a simple structure; in particular, they do not contain GCIs, but only definitions. In this group we have included the National Cancer Institute (NCI) Ontology,9 the SUMO Upper Ontology,10 the Gene Ontology (GO),11 and the SNOMED Ontology.12

Complex. These ontologies are complex since they use many constructors from OWL DL and/or include a significant number of GCIs. This group contains the well-known GALEN ontology (GALEN-Full),13 the DOLCE upper ontology (DOLCE-Lite),14 and NASA's Semantic Web for Earth and Environmental Terminology (SWEET).15 In the case of GALEN, we have also considered a version GALEN-Small that has commonly been used as a benchmark for OWL reasoners; this ontology is almost 10 times smaller than the original GALEN-Full ontology, yet similar in structure.

Since there is no benchmark for ontology modularization and only a few use cases are available, there is no systematic way of evaluating modularization procedures. We have therefore designed a simple experimental setup which, even if it may not necessarily reflect an actual ontology reuse scenario, should give an idea of typical module sizes. For each ontology, we took the set of its atomic concepts and extracted modules for every atomic concept, and we compare the maximal and average sizes of the extracted modules. It is worth emphasizing here that our algorithm A3 does not just extract a module for the input atomic concept: the extracted fragment is also a module for its whole signature, which typically includes a fair amount of other concepts and roles.

[Table 6: Comparison of the different modularization algorithms. For each ontology—NCI (27,772 atomic concepts), SNOMED (255,318), GO (22,357), SUMO (869), GALEN-Small (2,749), GALEN-Full (24,089), SWEET (1,816), and DOLCE-Lite (499)—the table reports the maximal and the average size of the extracted modules, as a percentage of the size of the ontology, for A1 (Prompt-Factor), A2 (Segmentation), and A3 (locality-based modules).]

9. http://www.mindswap.org/2003/CancerOntology/nciOncology.owl
10. http://ontology.teknowledge.com/
11. http://www.geneontology.org
12. http://www.snomed.com/
13. http://www.openclinical.org/prj_galen.html
14. http://www.loa-cnr.it/DOLCE.html
15. http://sweet.jpl.nasa.gov/ontology/

The results we have obtained are summarized in Table 6, which provides the size of the largest module and the average size of the modules obtained using each of the algorithms. In the table, we can clearly see that locality-based modules are significantly smaller than the ones obtained using the other methods. Indeed, for NCI, SNOMED, GO and SUMO we have obtained very small locality-based modules: in SNOMED, for example, the largest locality-based module obtained is approximately 0.5% of the size of the ontology, and the average size of the modules is 1/10 of the size of the largest module. This can be explained by the fact that these ontologies, even if large, are simple in structure and logical expressivity. In contrast, in the case of SUMO, the algorithms A1 and A2 retrieve the whole ontology as the module for each input signature. For GALEN, DOLCE and SWEET, the locality-based modules are larger: the largest module in GALEN-Small is 1/10 of the size of the ontology, as opposed to 1/200 in the case of SNOMED. For DOLCE, the modules are even bigger—1/3 of the size of the ontology—which indicates that the dependencies between the different concepts in the ontology are very strong and complicated. The SWEET ontology is an exception: even though it uses most of the constructors available in OWL, the ontology is heavily underspecified, which yields small modules; in fact, most of the modules we have obtained for this ontology contain fewer than 40 atomic concepts.

[Figure 6: Distribution of the sizes of syntactic locality-based modules for (a) NCI, (b) GALEN-Small, (c) SNOMED, (d) GALEN-Full, and for (e) the small and (f) the large modules of GALEN-Full. The X-axis gives the number of concepts in the modules and the Y-axis the number of modules extracted for each size range.]

In Figure 6, we give a more detailed analysis of the modules for NCI, SNOMED, GALEN-Small and GALEN-Full. In the figure, the X-axis represents the size ranges of the obtained modules and the Y-axis the number of modules whose size is within the given range.

The plots thus give an idea of the distribution of the sizes of the different modules. For SNOMED, NCI and GALEN-Small, we can observe that the size of the modules follows a smooth distribution. In contrast, for GALEN-Full we have obtained a large number of small modules and a significant number of very big ones, but no medium-sized modules in-between; in fact, the figure shows that there is a large number of modules of size between 6515 and 6535 concepts. This abrupt distribution indicates the presence of a big cycle of dependencies in the ontology, which can be spotted more clearly in Figure 6(f). This cycle does not occur in the simplified version of GALEN, and thus we obtain a smooth distribution for that case; indeed, in Figure 6(e) we can see that the distribution for the "small" modules in GALEN-Full is smooth and much more similar to the one for the simplified version of GALEN.

The considerable differences in the size of the modules extracted by algorithms A1–A3 are due to the fact that these algorithms extract modules according to different requirements. Algorithm A1 produces a fragment of the ontology that contains the input atomic concept and is syntactically separated from the rest of the axioms—that is, the fragment and the rest of the ontology have disjoint signatures. Algorithm A2 extracts a fragment of the ontology that is a module for the input atomic concept and is additionally semantically separated from the rest of the ontology: no entailment between an atomic concept in the module and an atomic concept not in the module should hold in the original ontology. Since our algorithm is based on weaker requirements, it is to be expected that it extracts smaller modules; what is surprising is that the difference in the size of the modules is so significant.

In order to explore the use of our results for ontology design and analysis, we have integrated our algorithm for extracting modules into the ontology editor Swoop (Kalyanpur et al., 2006).16 The user interface of Swoop allows for the selection of an input signature and the retrieval of the corresponding module. Figure 7a shows the classification of the concepts DNA Sequence and Microanatomy in the NCI ontology, and Figure 7b shows the minimal Õ^{r←∅}_{A←∅}(·)-based module for DNA Sequence, as obtained in Swoop. Recall that, by Corollary 47, a Õ^{r←∅}_{A←∅}(·)-based module for an atomic concept A contains all the axioms necessary for, at least, all its (entailed) super-concepts in O; thus this module can be seen as the "upper ontology" for A in O. Figure 7 shows that this module contains only the concepts in the "path" from DNA Sequence to the top-level concept Anatomy Kind. This suggests that the knowledge in NCI about the particular concept DNA Sequence is very shallow, in the sense that NCI only "knows" that a DNA Sequence is a macromolecular structure, which, in the end, is an anatomy kind. If one wants to refine the module by including only the information in the ontology necessary to entail the "path" from DNA Sequence to Micro Anatomy, one could extract the Õ^{r←∆×∆}_{A←∆}(·)-based module for Micro Anatomy in the fragment of Figure 7b; by Corollary 47, this module contains all the sub-concepts of Micro Anatomy in the previously extracted module. The resulting module is shown in Figure 7c.

[Figure 7: The module extraction functionality in Swoop: (a) the concepts DNA Sequence and Microanatomy in NCI; (b) the Õ^{r←∅}_{A←∅}(S)-based module for DNA Sequence in NCI; (c) the Õ^{r←∆×∆}_{A←∆}(S)-based module for Micro Anatomy in the fragment of (b).]

16. The tool can be downloaded at http://code.google.com/p/swoop/

9. Conclusion

In this paper, we have proposed a set of reasoning problems that are relevant for ontology reuse. We have established the relationships between these problems and studied their computability. Using existing results (Lutz et al., 2007) and the results obtained in Section 4, we have shown that these problems are undecidable or algorithmically unsolvable for the logic underlying OWL DL. We have dealt with these problems by defining sufficient conditions for a solution to exist, which can be computed in practice. In particular, we have introduced and studied the notion of a safety class, which characterizes any sufficiency condition for safety of an ontology w.r.t. a signature, and we have used safety classes to extract modules from ontologies. For future work, we would like to study other approximations which can produce small modules in complex ontologies like GALEN, and to exploit modules to optimize ontology reasoning.

References

Baader, F., Brandt, S., & Lutz, C. (2005). Pushing the EL envelope. In Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI-05), Edinburgh, Scotland, UK, July 30–August 5, 2005, pp. 364–370. Professional Book Center.

Baader, F., Calvanese, D., McGuinness, D., Nardi, D., & Patel-Schneider, P. (Eds.). (2003). The Description Logic Handbook: Theory, Implementation, and Applications. Cambridge University Press.

Bao, J., Caragea, D., & Honavar, V. (2006). On the semantics of linking and importing in modular ontologies. In Proceedings of the 5th International Semantic Web Conference (ISWC-2006), Athens, GA, USA, November 5–9, 2006, Vol. 4273 of Lecture Notes in Computer Science, pp. 72–86. Springer.

Borgida, A., & Serafini, L. (2003). Distributed description logics: Assimilating information from peer sources. Journal on Data Semantics, 1, 153–184.

Börger, E., Grädel, E., & Gurevich, Y. (1997). The Classical Decision Problem. Perspectives of Mathematical Logic. Springer-Verlag. Second printing (Universitext) 2001.

Cuenca Grau, B., Horrocks, I., Kazakov, Y., & Sattler, U. (2006). Will my ontologies fit together?. In Proceedings of the 2006 International Workshop on Description Logics (DL-2006), Windermere, Lake District, UK, May 30–June 1, 2006, Vol. 189 of CEUR Workshop Proceedings. CEUR-WS.org.

Cuenca Grau, B., Horrocks, I., Kazakov, Y., & Sattler, U. (2007). A logical framework for modularity of ontologies. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI-07), Hyderabad, India, January 2007, pp. 298–304.

Cuenca Grau, B., Horrocks, I., Kazakov, Y., & Sattler, U. (2007). Just the right amount: extracting modules from ontologies. In Proceedings of the 16th International Conference on World Wide Web (WWW-2007), Banff, Alberta, Canada, May 8–12, 2007, pp. 717–726. ACM.

Cuenca Grau, B., & Kutz, O. (2007). Modular ontology languages revisited. In Proceedings of the Workshop on Semantic Web for Collaborative Knowledge Acquisition, Hyderabad, India, January 2007.

Cuenca Grau, B., Parsia, B., & Sirin, E. (2006a). Combining OWL ontologies using E-connections. Journal of Web Semantics, 4(1), 40–59.

Cuenca Grau, B., Parsia, B., Sirin, E., & Kalyanpur, A. (2006b). Modularity and web ontologies. In Proceedings of the Tenth International Conference on Principles of Knowledge Representation and Reasoning (KR-2006), Lake District of the United Kingdom, June 2–5, 2006, pp. 198–209. AAAI Press.

Gardiner, T., Tsarkov, D., & Horrocks, I. (2006). Framework for an automated comparison of description logic reasoners. In Proceedings of the 5th International Semantic Web Conference (ISWC-2006), Athens, GA, USA, November 5–9, 2006, Vol. 4273 of Lecture Notes in Computer Science, pp. 654–667. Springer.

Ghilardi, S., Lutz, C., & Wolter, F. (2006). Did I damage my ontology? A case for conservative extensions in description logics. In Proceedings of the Tenth International Conference on Principles of Knowledge Representation and Reasoning (KR-2006), Lake District of the United Kingdom, June 2–5, 2006, pp. 187–197. AAAI Press.

Horrocks, I., Patel-Schneider, P. F., & van Harmelen, F. (2003). From SHIQ and RDF to OWL: the making of a web ontology language. Journal of Web Semantics, 1(1), 7–26.

Horrocks, I., & Sattler, U. (2005). A tableaux decision procedure for SHOIQ. In Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI-05), Edinburgh, Scotland, UK, July 30–August 5, 2005, pp. 448–453. Professional Book Center.

Kalfoglou, Y., & Schorlemmer, M. (2003). Ontology mapping: The state of the art. The Knowledge Engineering Review, 18(1), 1–31.

Kalyanpur, A., Parsia, B., Sirin, E., Cuenca Grau, B., & Hendler, J. (2006). Swoop: A web ontology editing browser. Journal of Web Semantics, 4(2), 144–153.

Lutz, C., Walther, D., & Wolter, F. (2007). Conservative extensions in expressive description logics. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI-07), Hyderabad, India, January 2007, pp. 453–459.

Lutz, C., & Wolter, F. (2007). Conservative extensions in the lightweight description logic EL. In Proceedings of the 21st International Conference on Automated Deduction (CADE-21), Bremen, Germany, July 17–20, 2007, Vol. 4603 of Lecture Notes in Computer Science, pp. 84–99. Springer.

Möller, R., & Haarslev, V. (2003). Description logic systems. In The Description Logic Handbook, chap. 8, pp. 282–305. Cambridge University Press.

Motik, B. (2006). Reasoning in Description Logics using Resolution and Deductive Databases. Ph.D. thesis, Universität Karlsruhe (TH), Karlsruhe, Germany.

Noy, N. (2004a). Semantic integration: A survey of ontology-based approaches. SIGMOD Record, 33(4), 65–70.

Noy, N. (2004b). Tools for mapping and merging ontologies. In Staab & Studer (2004), pp. 365–384.

Noy, N., & Musen, M. (2003). The PROMPT suite: Interactive tools for ontology mapping and merging. International Journal of Human-Computer Studies, 59(6).

Patel-Schneider, P. F., Hayes, P., & Horrocks, I. (2004). Web Ontology Language OWL Abstract Syntax and Semantics. W3C Recommendation.

Rector, A., & Rogers, J. (1999). Ontological issues in using a description logic to represent medical concepts: Experience from GALEN. In IMIA WG6 Workshop.

Schmidt-Schauß, M., & Smolka, G. (1991). Attributive concept descriptions with complements. Artificial Intelligence, 48(1), 1–26.

Seidenberg, J., & Rector, A. (2006). Web ontology segmentation: analysis, classification and use. In Proceedings of the 15th International Conference on World Wide Web (WWW-2006), Edinburgh, Scotland, May 23–26, 2006. ACM.

Sirin, E., & Parsia, B. (2004). Pellet system description. In Proceedings of the 2004 International Workshop on Description Logics (DL2004), Whistler, British Columbia, Canada, June 6–8, 2004, Vol. 104 of CEUR Workshop Proceedings. CEUR-WS.org.

Staab, S., & Studer, R. (Eds.). (2004). Handbook on Ontologies. International Handbooks on Information Systems. Springer.

Stuckenschmidt, H., & Klein, M. (2004). Structure-based partitioning of large class hierarchies. In Proceedings of the Third International Semantic Web Conference (ISWC-2004), Hiroshima, Japan, November 7–11, 2004, Vol. 3298 of Lecture Notes in Computer Science, pp. 289–303. Springer.

Tobies, S. (2000). The complexity of reasoning with cardinality restrictions and nominals in expressive description logics. Journal of Artificial Intelligence Research (JAIR), 12, 199–217.

Tsarkov, D., & Horrocks, I. (2006). FaCT++ description logic reasoner: System description. In Proceedings of the Third International Joint Conference on Automated Reasoning (IJCAR 2006), Seattle, WA, USA, August 17–20, 2006, Vol. 4130 of Lecture Notes in Computer Science, pp. 292–297. Springer.
