Journal of Artificial Intelligence Research 31 (2008) 273-318 Submitted 07/07; published 02/08

Modular Reuse of Ontologies: Theory and Practice
Bernardo Cuenca Grau berg@comlab.ox.ac.uk
Ian Horrocks ian.horrocks@comlab.ox.ac.uk
Yevgeny Kazakov yevgeny.kazakov@comlab.ox.ac.uk
Oxford University Computing Laboratory
Oxford, OX1 3QD, UK
Ulrike Sattler sattler@cs.man.ac.uk
School of Computer Science
The University of Manchester
Manchester, M13 9PL, UK
Abstract
In this paper, we propose a set of tasks that are relevant for the modular reuse of on-
tologies. In order to formalize these tasks as reasoning problems, we introduce the notions
of conservative extension, safety and module for a very general class of logic-based ontology
languages. We investigate the general properties of and relationships between these notions
and study the relationships between the relevant reasoning problems we have previously
identified. To study the computability of these problems, we consider, in particular, De-
scription Logics (DLs), which provide the formal underpinning of the W3C Web Ontology
Language (OWL), and show that all the problems we consider are undecidable or algo-
rithmically unsolvable for the description logic underlying OWL DL. In order to achieve a
practical solution, we identify conditions sufficient for an ontology to reuse a set of sym-
bols “safely”—that is, without changing their meaning. We provide the notion of a safety
class, which characterizes any sufficient condition for safety, and identify a family of safety
classes–called locality—which enjoys a collection of desirable properties. We use the notion
of a safety class to extract modules from ontologies, and we provide various modulariza-
tion algorithms that are appropriate to the properties of the particular safety class in use.
Finally, we show practical benefits of our safety checking and module extraction algorithms.
1. Motivation
Ontologies—conceptualizations of a domain shared by a community of users—play a ma-
jor role in the Semantic Web, and are increasingly being used in knowledge management
systems, e-Science, bio-informatics, and Grid applications (Staab & Studer, 2004).
The design, maintenance, reuse, and integration of ontologies are complex tasks. Like
software engineers, ontology engineers need to be supported by tools and methodologies
that help them to minimize the introduction of errors, i.e., to ensure that ontologies are
consistent and do not have unexpected consequences. In order to develop this support, im-
portant notions from software engineering, such as module, black-box behavior, and controlled
interaction, need to be adapted.
Recently, there has been growing interest in the topic of modularity in ontology engi-
neering (Seidenberg & Rector, 2006; Noy, 2004a; Lutz, Walther, & Wolter, 2007; Cuenca
Grau, Parsia, Sirin, & Kalyanpur, 2006b; Cuenca Grau, Horrocks, Kazakov, & Sattler,
c (2008 AI Access Foundation. All rights reserved.
Cuenca Grau, Horrocks, Kazakov, & Sattler
2007), which has been motivated by the above-mentioned application needs. In this paper,
we focus on the use of modularity to support the partial reuse of ontologies. In particular,
we consider the scenario in which we are developing an ontology { and want to reuse a set
S of symbols from a “foreign” ontology O without changing their meaning.
For example, suppose that an ontology engineer is building an ontology about research
projects, which specifies different types of projects according to the research topic they
focus on. The ontology engineer in charge of the projects ontology { may use terms such
as Cystic Fibrosis and Genetic Disorder in his descriptions of medical research projects. The
ontology engineer is an expert on research projects; he may be unfamiliar, however, with
most of the topics the projects cover and, in particular, with the terms Cystic Fibrosis and
Genetic Disorder. In order to complete the projects ontology with suitable definitions for
these medical terms, he decides to reuse the knowledge about these subjects from a well-
established medical ontology O.
The most straightforward way to reuse these concepts is to construct the logical union
{ ∪ O of the axioms in { and O. It is reasonable to assume that the additional knowledge
about the medical terms used in both { and O will have implications on the meaning of the
projects defined in {; indeed, the additional knowledge about the reused terms provides new
information about medical research projects which are defined using these medical terms.
Less intuitive is the fact that importing O may also result in new entailments concerning
the reused symbols, namely Cystic Fibrosis and Genetic Disorder. Since the ontology engineer
of the projects ontology is not an expert in medicine and relies on the designers of O, it is
to be expected that the meaning of the reused symbols is completely specified in O; that
is, the fact that these symbols are used in the projects ontology { should not imply that
their original “meaning” in O changes. If { does not change the meaning of these symbols
in O, we say that { ∪ O is a conservative extension of O
In realistic application scenarios, it is often unreasonable to assume the foreign ontology
O to be fixed; that is, O may evolve beyond the control of the modelers of {. The ontology
engineers in charge of { may not be authorized to access all the information in O or,
most importantly, they may decide at a later time to reuse the symbols Cystic Fibrosis and
Genetic Disorder from a medical ontology other than O. For application scenarios in which
the external ontology O may change, it is reasonable to “abstract” from the particular O
under consideration. In particular, given a set S of “external symbols”, the fact that the
axioms in { do not change the meaning of any symbol in S should be independent of the
particular meaning ascribed to these symbols by O. In that case, we will say that { is safe
for S.
Moreover, even if { “safely” reuses a set of symbols from an ontology O, it may still
be the case that O is a large ontology. In particular, in our example, the foreign medical
ontology may be huge, and importing the whole ontology would make the consequences
of the additional information costly to compute and difficult for our ontology engineers in
charge of the projects ontology (who are not medical experts) to understand. In practice,
therefore, one may need to extract a module O
1
of O that includes only the relevant infor-
mation. Ideally, this module should be as small as possible while still guarantee to capture
the meaning of the terms used; that is, when answering queries against the research projects
ontology, importing the module O
1
would give exactly the same answers as if the whole
medical ontology O had been imported. In this case, importing the module will have the
274
Modular Reuse of Ontologies: Theory and Practice
same observable effect on the projects ontology as importing the entire ontology. Further-
more, the fact that O
1
is a module in O should be independent of the particular { under
consideration.
The contributions of this paper are as follows:
1. We propose a set of tasks that are relevant to ontology reuse and formalize them as
reasoning problems. To this end, we introduce the notions of conservative extension,
safety and module for a very general class of logic-based ontology languages.
2. We investigate the general properties of and relationships between the notions of
conservative extension, safety, and module and use these properties to study the re-
lationships between the relevant reasoning problems we have previously identified.
3. We consider Description Logics (DLs), which provide the formal underpinning of the
W3C Web Ontology Language (OWL), and study the computability of our tasks. We
show that all the tasks we consider are undecidable or algorithmically unsolvable for
the description logic underlying OWL DL—the most expressive dialect of OWL that
has a direct correspondence to description logics.
4. We consider the problem of deciding safety of an ontology for a signature. Given that
this problem is undecidable for OWL DL, we identify sufficient conditions for safety,
which are decidable for OWL DL—that is, if an ontology satisfies our conditions
then it is safe; the converse, however, does not necessarily hold. We propose the
notion of a safety class, which characterizes any sufficiency condition for safety, and
identify a family of safety classes–called locality—which enjoys a collection of desirable
properties.
5. We next apply the notion of a safety class to the task of extracting modules from
ontologies; we provide various modularization algorithms that are appropriate to the
properties of the particular safety class in use.
6. We present empirical evidence of the practical benefits of our techniques for safety
checking and module extraction.
This paper extends the results in our previous work (Cuenca Grau, Horrocks, Kutz, &
Sattler, 2006; Cuenca Grau et al., 2007; Cuenca Grau, Horrocks, Kazakov, & Sattler, 2007).
2. Preliminaries
In this section we introduce description logics (DLs) (Baader, Calvanese, McGuinness,
Nardi, & Patel-Schneider, 2003), a family of knowledge representation formalisms which
underlie modern ontology languages, such as OWL DL (Patel-Schneider, Hayes, & Hor-
rocks, 2004). A hierarchy of commonly-used description logics is summarized in Table 1.
The syntax of a description logic L is given by a signature and a set of constructors. A
signature (or vocabulary) Sg of a DL is the (disjoint) union of countably infinite sets AC
of atomic concepts (A, B, . . . ) representing sets of elements, AR of atomic roles (r, s, . . . )
representing binary relations between elements, and Ind of individuals (a, b, c, . . . ) repre-
senting constants. We assume the signature to be fixed for every DL.
275
Cuenca Grau, Horrocks, Kazakov, & Sattler
DLs
Constructors Axioms [ Ax ]
Rol Con RBox TBox ABox
cL r ·, A, C
1
¯ C
2
, ∃R.C A ≡ C, C
1
_ C
2
a : C, r(a, b)
/L( –– ––, C –– ––
o –– –– Trans(r) –– ––
+ 1 r

+ H R
1
_ R
2
+ T Funct(R)
+ ^ (nS)
+ O (nS.C)
+ O ¦i¦
Here r ∈ AR, A ∈ AC, a, b ∈ Ind, R
(i)
∈ Rol, C
(i)
∈ Con, n ≥ 1 and S ∈ Rol is a simple
role (see (Horrocks & Sattler, 2005)).
Table 1: The hierarchy of standard description logics
Every DL provides constructors for defining the set Rol of (general) roles (R, S, . . . ),
the set Con of (general) concepts (C, D, . . . ), and the set Ax of axioms (α, β, . . . ) which is
a union of role axioms (RBox), terminological axioms (TBox) and assertions (ABox).
cL (Baader, Brandt, & Lutz, 2005) is a simple description logic which allows one to
construct complex concepts using conjunction C
1
¯ C
2
and existential restriction ∃R.C
starting from atomic concepts A, roles R and the top concept ·. cL provides no role
constructors and no role axioms; thus, every role R in cL is atomic. The TBox axioms of
cL can be either concept definitions A ≡ C or general concept inclusion axioms (GCIs)
C
1
_ C
2
. cL assertions are either concept assertions a : C or role assertions r(a, b). In this
paper we assume the concept definition A ≡ C is an abbreviation for two GCIs A _ C and
C _ A.
The basic description logic /L( (Schmidt-Schauß & Smolka, 1991) is obtained from cL
by adding the concept negation constructor C. We introduce some additional constructors
as abbreviations: the bottom concept ⊥ is a shortcut for ·, the concept disjunction C
1
. C
2
stands for (C
1
¯ C
2
), and the value restriction ∀R.C stands for (∃R.C). In contrast
to cL, /L( can express contradiction axioms like · _ ⊥. The logic o is an extension of
/L( where, additionally, some atomic roles can be declared to be transitive using a role
axiom Trans(r).
Further extensions of description logics add features such as inverse roles r

(indicated
by appending a letter 1 to the name of the logic), role inclusion axioms (RIs) R
1
_ R
2
(+H),
functional roles Funct(R) (+T), number restrictions (nS), with n ≥ 1, (+^), qualified
number restrictions (nS.C), with n ≥ 1, (+O)
1
, and nominals ¦a¦ (+O). Nominals make
it possible to construct a concept representing a singleton set ¦a¦ (a nominal concept) from
an individual a. These extensions can be used in different combinations; for example /L(O
is an extension of /L( with nominals; oH1O is an extension of o with role hierarchies,
1. the dual constructors (nS) and (nS.C) are abbreviations for ¬(n + 1 S) and ¬(n + 1 S.¬C),
respectively
276
Modular Reuse of Ontologies: Theory and Practice
inverse roles and qualified number restrictions; and oHO1O is the DL that uses all the
constructors and axiom types we have presented.
Modern ontology languages, such as OWL, are based on description logics and, to a cer-
tain extent, are syntactic variants thereof. In particular, OWL DL corresponds to oHO1^
(Horrocks, Patel-Schneider, & van Harmelen, 2003). In this paper, we assume an ontology
O based on a description logic L to be a finite set of axioms in L. The signature of an
ontology O (of an axiom α) is the set Sig(O) (Sig(α)) of atomic concepts, atomic roles and
individuals that occur in O (respectively in α).
The main reasoning task for ontologies is entailment: given an ontology O and an axiom
α, check if O implies α. The logical entailment [= is defined using the usual Tarski-style
set-theoretic semantics for description logics as follows. An interpretation 1 is a pair 1 =
(∆
7
,
7
), where ∆
7
is a non-empty set, called the domain of the interpretation, and
7
is the
interpretation function that assigns: to every A ∈ AC a subset A
7
⊆ ∆
7
, to every r ∈ AR
a binary relation r
7
⊆ ∆
7

7
, and to every a ∈ Ind an element a
7
∈ ∆
7
. Note that the
sets AC, AR and Ind are not defined by the interpretation 1 but assumed to be fixed for
the ontology language (DL).
The interpretation function
7
is extended to complex roles and concepts via DL-
constructors as follows:
(·)
7
= ∆
(C ¯ D)
7
= C
7
∩ D
7
(∃R.C)
7
= ¦x ∈ ∆
7
[ ∃y.'x, y` ∈ R
7
∧ y ∈ C
7
¦
(C)
7
= ∆
7
` C
7
(r

)
7
= ¦'x, y` [ 'y, x` ∈ r
7
¦
(nR)
7
= ¦ x ∈ ∆
7
[ ¦y ∈ ∆
7
[ 'x, y` ∈ R
7
¦ ≥ n¦
(nR.C)
7
= ¦ x ∈ ∆
7
[ ¦y ∈ ∆
7
[ 'x, y` ∈ R
7
∧ y ∈ C
7
¦ ≥ n¦
¦a¦
7
= ¦a
7
¦
The satisfaction relation 1 [= α between an interpretation 1 and a DL axiom α (read as 1
satisfies α, or 1 is a model of α) is defined as follows:
1 [= C
1
_ C
2
iff C
7
1
⊆ C
7
2
; 1 [= a : C iff a
7
∈ C
7
;
1 [= R
1
_ R
2
iff R
7
1
⊆ R
7
2
; 1 [= r(a, b) iff 'a
7
, b
7
` ∈ r
7
;
1 [= Trans(r) iff ∀x, y, z ∈ ∆
7
[ 'x, y` ∈ r
7
∧ 'y, z` ∈ r
7
⇒ 'x, z` ∈ r
7
];
1 [= Funct(R) iff ∀x, y, z ∈ ∆
7
[ 'x, y` ∈ R
7
∧ 'x, z` ∈ R
7
⇒ y = z ];
An interpretation 1 is a model of an ontology O if 1 satisfies all axioms in O. An
ontology O implies an axiom α (written O [= α) if 1 [= α for every model 1 of O. Given
a set I of interpretations, we say that an axiom α (an ontology O) is valid in I if every
interpretation 1 ∈ I is a model of α (respectively O). An axiom α is a tautology if it is valid
in the set of all interpretations (or, equivalently, is implied by the empty ontology).
We say that two interpretations 1 = (∆
7
,
7
) and . = (∆
.
,
.
) coincide on the subset
S of the signature (notation: 1[
S
= .[
S
) if ∆
7
= ∆
.
and X
7
= X
.
for every X ∈ S. We
say that two sets of interpretations I and J are equal modulo S (notation: I[
S
= J[
S
) if for
every 1 ∈ I there exits . ∈ J such that .[
S
= 1[
S
and for every . ∈ J there exists 1 ∈ I
such that 1[
S
= .[
S
.
277
Cuenca Grau, Horrocks, Kazakov, & Sattler
Ontology of medical research projects P:
P1 Genetic Disorder Project ≡ Project ¯ ∃has Focus.Genetic Disorder
P2 Cystic Fibrosis EUProject ≡ EUProject ¯ ∃has Focus.Cystic Fibrosis
P3 EUProject _ Project
P4 ∃has Focus.· _ Project
E1 Project ¯ (Genetic Disorder
::
Cystic Fibrosis) _ ⊥
E2 ∀
::
has Focus.Cystic Fibrosis _ ∃has Focus.Genetic Disorder
Ontology of medical terms Q:
M1 Cystic Fibrosis ≡ Fibrosis ¯ ∃located In.Pancreas ¯ ∃has Origin.Genetic Origin
M2 Genetic Fibrosis ≡ Fibrosis ¯ ∃has Origin.Genetic Origin
M3 Fibrosis ¯ ∃located In.Pancreas _ Genetic Fibrosis
M4 Genetic Fibrosis _ Genetic Disorder
M5 DEFBI Gene _ Immuno Protein Gene ¯ ∃associated With.Cystic Fibrosis
Figure 1: Reusing medical terminology in an ontology on research projects
3. Ontology Integration and Knowledge Reuse
In this section, we elaborate on the ontology reuse scenario sketched in Section 1. Based on
this application scenario, we motivate and define the reasoning tasks to be investigated in
the remainder of the paper. In particular, our tasks are based on the notions of a conservative
extension (Section 3.2), safety (Sections 3.2 and 3.3) and module (Section 3.4). These notions
are defined relative to a language L. Within this section, we assume that L is an ontology
language based on a description logic; in Section 3.6, we will define formally the class of
ontology languages for which the given definitions of conservative extensions, safety and
modules apply. For the convenience of the reader, all the tasks we consider in this paper
are summarized in Table 2.
3.1 A Motivating Example
Suppose that an ontology engineer is in charge of a oHO1O ontology on research projects,
which specifies different types of projects according to the research topic they are concerned
with. Assume that the ontology engineer defines two concepts—Genetic Disorder Project and
Cystic Fibrosis EUProject—in his ontology {. The first one describes projects about genetic
disorders; the second one describes European projects about cystic fibrosis, as given by the
axioms P1 and P2 in Figure 1. The ontology engineer is an expert on research projects: he
knows, for example, that every instance of EUProject must be an instance of Project (the
concept-inclusion axiom P3) and that the role has Focus can be applied only to instances
of Project (the domain axiom P4). He may be unfamiliar, however, with most of the topics
the projects cover and, in particular, with the terms Cystic Fibrosis and Genetic Disorder
mentioned in P1 and P2. In order to complete the projects ontology with suitable definitions
278
Modular Reuse of Ontologies: Theory and Practice
for these medical terms, he decides to reuse the knowledge about these subjects from a well-
established and widely-used medical ontology.
Suppose that Cystic Fibrosis and Genetic Disorder are described in an ontology O contai-
ning the axioms M1-M5 in Figure 1. The most straightforward way to reuse these concepts
is to import in { the ontology O—that is, to add the axioms from O to the axioms in {
and work with the extended ontology { ∪O. Importing additional axioms into an ontology
may result in new logical consequences. For example, it is easy to see that axioms M1–M4
in O imply that every instance of Cystic Fibrosis is an instance of Genetic Disorder:
O [= α := (Cystic Fibrosis _ Genetic Disorder) (1)
Indeed, the concept inclusion α
1
:= (Cystic Fibrosis _ Genetic Fibrosis) follows from axioms
M1 and M2 as well as from axioms M1 and M3; α follows from axioms α
1
and M4.
Using inclusion α from (1) and axioms P1–P3 from ontology { we can now prove that
every instance of Cystic Fibrosis EUProject is also an instance of Genetic Disorder Project:
{ ∪ O [= β := (Cystic Fibrosis EUProject _ Genetic Disorder Project) (2)
This inclusion β, however, does not follow from { alone—that is, { [= β. The ontology
engineer might be not aware of Entailment (2), even though it concerns the terms of primary
scope in his projects ontology {.
It is natural to expect that entailments like α in (1) from an imported ontology O result
in new logical consequences, like β, in (2), over the terms defined in the main ontology {.
One would not expect, however, that the meaning of the terms defined in O changes as a
consequence of the import since these terms are supposed to be completely specified within
O. Such a side effect would be highly undesirable for the modeling of ontology { since the
ontology engineer of { might not be an expert on the subject of O and is not supposed to
alter the meaning of the terms defined in O not even implicitly.
The meaning of the reused terms, however, might change during the import, perhaps due
to modeling errors. In order to illustrate such a situation, suppose that the ontology engineer
has learned about the concepts Genetic Disorder and Cystic Fibrosis from the ontology O
(including the inclusion (1)) and has decided to introduce additional axioms formalizing
the following statements:
“Every instance of Project is different from every instance of Genetic Disorder
:::
and every instance of Cystic Fibrosis.”
(3)

::::::
Every
::::::::
project that has Focus on Cystic Fibrosis, also has Focus on Genetic Disorder” (4)
Note that the statements (3) and (4) can be thought of as adding new information about
projects and, intuitively, they should not change or constrain the meaning of the medical
terms.
Suppose the ontology engineer has formalized the statements (3) and (4) in ontology
{ using axioms E1 and E2 respectively. At this point, the ontology engineer has intro-
duced modeling errors and, as a consequence, axioms E1 and E2 do not correspond to (3)
and (4): E1 actually formalizes the following statement: “Every instance of Project is diffe-
rent from every common instance of Genetic Disorder and Cystic Fibrosis”, and E2 expresses
279
Cuenca Grau, Horrocks, Kazakov, & Sattler
that “Every object that either has Focus on nothing, or has Focus only on Cystic Fibrosis,
also has Focus on Genetic Disorder”. These kinds of modeling errors are difficult to detect,
especially when they do not cause inconsistencies in the ontology.
Note that, although axiom E1 does not correspond to fact (3), it is still a consequence
of (3) which means that it should not constrain the meaning of the medical terms. On the
other hand, E2 is not a consequence of (4) and, in fact, it constrains the meaning of medical
terms. Indeed, the axioms E1 and E2 together with axioms P1-P4 from { imply new axioms
about the concepts Cystic Fibrosis and Genetic Disorder, namely their disjointness:
{ [= γ := (Genetic Disorder ¯ Cystic Fibrosis _ ⊥) (5)
The entailment (5) can be proved using axiom E2 which is equivalent to:
· _ ∃has Focus.(Genetic Disorder . Cystic Fibrosis) (6)
The inclusion (6) and P4 imply that every element in the domain must be a project—that
is, { [= (· _ Project). Now, together with axiom E1, this implies (5).
The axioms E1 and E2 not only imply new statements about the medical terms, but also
cause inconsistencies when used together with the imported axioms from O. Indeed, from
(1) and (5) we obtain { ∪O [= δ := (Cystic Fibrosis _ ⊥), which expresses the inconsistency
of the concept Cystic Fibrosis.
To summarize, we have seen that importing an external ontology can lead to undesirable
side effects in our knowledge reuse scenario, like the entailment of new axioms or even incon-
sistencies involving the reused vocabulary. In the next section we discuss how to formalize
the effects we consider undesirable.
3.2 Conservative Extensions and Safety for an Ontology
As argued in the previous section, an important requirement for the reuse of an ontology O
within an ontology { should be that { ∪O produces exactly the same logical consequences
over the vocabulary of O as O alone does. This requirement can be naturally formulated
using the well-known notion of a conservative extension, which has recently been investi-
gated in the context of ontologies (Ghilardi, Lutz, & Wolter, 2006; Lutz et al., 2007).
Definition 1 (Deductive Conservative Extension). Let O and O
1
⊆ O be two L-
ontologies, and S a signature over L. We say that O is a deductive S-conservative extension
of O
1
w.r.t. L, if for every axiom α over L with Sig(α) ⊆ S, we have O [= α iff O
1
[= α.
We say that O is a deductive conservative extension of O
1
w.r.t. L if O is a deductive
S-conservative extension of O
1
w.r.t. L for S = Sig(O
1
). ♦
In other words, an ontology O is a deductive S-conservative extension of O
1
for a
signature S and language L if and only if every logical consequence α of O constructed
using the language L and symbols only from S, is already a logical consequence of O
1
; that
is, the additional axioms in O ` O
1
do not result into new logical consequences over the
vocabulary S. Note that if O is a deductive S-conservative extension of O
1
w.r.t. L, then
O is a deductive S
1
-conservative extension of O
1
w.r.t. L for every S
1
⊆ S.
The notion of a deductive conservative extension can be directly applied to our ontology
reuse scenario.
280
Modular Reuse of Ontologies: Theory and Practice
Definition 2 (Safety for an Ontology). Given L-ontologies O and O
t
, we say that O
is safe for O
t
(or O imports O
t
in a safe way) w.r.t. L if O∪O
t
is a deductive conservative
extension of O
t
w.r.t. L. ♦
Hence, the first reasoning task relevant for our ontology reuse scenario can be formulated
as follows:
T1. given L-ontologies O and O
t
, determine if O is safe for O
t
w.r.t. L.
We have shown in Section 3.1 that, given { consisting of axioms P1–P4, E1, E2, and O
consisting of axioms M1–M5 from Figure 1, there exists an axiom δ = (Cystic Fibrosis _ ⊥)
that uses only symbols in Sig(O) such that O [= δ but {∪O [= δ. According to Definition 1,
this means that { ∪ O is not a deductive conservative extension of O w.r.t. any language
L in which δ can be expressed (e.g. L = /L(). It is possible, however, to show that if the
axiom E2 is removed from { then for the resulting ontology {
1
= { ` ¦E2¦, {
1
∪ O is a
deductive conservative extension of O. The following notion is useful for proving deductive
conservative extensions:
Definition 3 (Model Conservative Extension, Lutz et al., 2007).
Let O and O
1
⊆ O be two L-ontologies and S a signature over L. We say that O is a model
S-conservative extension of O
1
, if for every model 1 of O
1
, there exists a model . of O
such that 1[
S
= .[
S
. We say that O is a model conservative extension of O
1
if O is a model
S-conservative extension of O
1
for S = Sig(O
1
). ♦
The notion of model conservative extension in Definition 3 can be seen as a semantic
counterpart for the notion of deductive conservative extension in Definition 1: the latter is
defined in terms of logical entailment, whereas the former is defined in terms of models.
Intuitively, an ontology O is a model S-conservative extension of O
1
if for every model of
O
1
one can find a model of O over the same domain which interprets the symbols from S
in the same way. The notion of model conservative extension, however, does not provide
a complete characterization of deductive conservative extensions, as given in Definition 1;
that is, this notion can be used for proving that an ontology is a deductive conservative
extension of another, but not vice versa:
Proposition 4 (Model vs. Deductive Conservative Extensions, Lutz et al., 2007)
1. For every two L-ontologies O, O
1
⊆ O, and a signature S over L, if O is a model
S-conservative extension of O
1
then O is a deductive S-conservative extension of O
1
w.r.t. L;
2. There exist two /L( ontologies O and O
1
⊆ O such that O is a deductive conservative
extension of O
1
w.r.t. /L(, but O is not a model conservative extension of O
1
.
Example 5 Consider an ontology {
1
consisting of axioms P1–P4, and E1 and an ontology
O consisting of axioms M1–M5 from Figure 1. We demonstrate that {
1
∪ O is a deductive
conservative extension of O. According to proposition 4, it is sufficient to show that {
1
∪O
is a model conservative extension of O; that is, for every model 1 of O there exists a model
. of {
1
∪ O such that 1[
S
= .[
S
for S = Sig(O).
281
Cuenca Grau, Horrocks, Kazakov, & Sattler
The required model . can be constructed from 1 as follows: take . to be identi-
cal to 1 except for the interpretations of the atomic concepts Genetic Disorder Project,
Cystic Fibrosis EUProject, Project, EUProject and the atomic role has Focus, all of which
we interpret in . as the empty set. It is easy to check that all the axioms P1–P4, E1 are
satisfied in .and hence . [= {
1
. Moreover, since the interpretation of the symbols in O
remains unchanged, we have 1[
Sig(O)
= .[
Sig(O)
and . [= O. Hence, {
1
∪ O is a model
conservative extension of O.
For an example of Statement 2 of the Proposition, we refer the interested reader to the
literature (Lutz et al., 2007, p. 455). ♦
3.3 Safety of an Ontology for a Signature
So far, in our ontology reuse scenario we have assumed that the reused ontology O is fixed
and the axioms of O are just copied into { during the import. In practice, however, it
is often convenient to keep O separate from { and make its axioms available on demand
via a reference. This makes it possible to continue developing { and O independently. For
example, the ontology engineers of the project ontology { may not be willing to depend
on a particular version of O, or may even decide at a later time to reuse the medical terms
(Cystic Fibrosis and Genetic Disorder) from another medical ontology instead. Therefore, for
many application scenarios it is important to develop a stronger safety condition for {
which depends as little as possible on the particular ontology O to be reused. In order to
formulate such condition, we abstract from the particular ontology O to be imported and
focus instead on the symbols from O that are to be reused:
Definition 6 (Safety for a Signature). Let O be a L-ontology and S a signature over L.
We say that O is safe for S w.r.t. L, if for every L-ontology O
t
with Sig(O) ∩ Sig(O
t
) ⊆ S,
we have that O is safe for O
t
w.r.t. L; that is, O∪O
t
is a deductive conservative extension
of O
t
w.r.t. L. ♦
Intuitively, in our knowledge reuse scenario, an ontology O is safe for a signature S w.r.t.
a language L if and only if it imports in a safe way any ontology O
t
written in L that shares
only symbols from S with O. The associated reasoning problem is then formulated in the
following way:
T2. given an L-ontology O and a signature S over L,
determine if O is safe for S w.r.t. L.
As seen in Section 3.2, the ontology { consisting of axioms P1–P4, E1, E2 from Figure 1,
does not import O consisting of axioms M1–M5 from Figure 1 in a safe way for L = /L(.
According to Definition 6, since Sig({) ∩ Sig(O) ⊆ S = ¦Cystic Fibrosis, Genetic Disorder¦,
the ontology { is not safe for S w.r.t. L.
In fact, it is possible to show a stronger result, namely, that any ontology O containing
axiom E2 is not safe for S = ¦Cystic Fibrosis, Genetic Disorder¦ w.r.t. L = /L(. Consider an
ontology O
t
= ¦β
1
, β
2
¦, where β
1
= (· _ Cystic Fibrosis) and β
2
= (Genetic Disorder _ ⊥).
Since E2 is equivalent to axiom (6), it is easy to see that O∪O
t
is inconsistent. Indeed E2,
β
1
and β
2
imply the contradiction α = (· _ ⊥), which is not entailed by O
t
. Hence, O∪O
t
is not a deductive conservative extension of O
t
. By Definition 6, since Sig(O) ∩Sig(O
t
) ⊆ S,
this means that O is not safe for S.
282
Modular Reuse of Ontologies: Theory and Practice
It is clear how one could prove that an ontology O is not safe for a signature S: simply find
an ontology O
t
with Sig(O) ∩Sig(O
t
) ⊆ S, such that O∪O
t
is not a deductive conservative
extension of O
t
. It is not so clear, however, how one could prove that O is safe for S. It
turns out that the notion of model conservative extensions can also be used for this purpose.
The following lemma introduces a property which relates the notion of model conservative
extension with the notion of safety for a signature. Intuitively, it says that the notion of
model conservative extension is stable under expansion with new axioms provided they
share only symbols from S with the original ontologies.
Lemma 7 Let O, O
1
⊆ O, and O
t
be L-ontologies and S a signature over L such that
O is a model S-conservative extension of O
1
and Sig(O) ∩ Sig(O
t
) ⊆ S. Then O ∪ O
t
is a
model S
t
-conservative extension of O
1
∪ O
t
for S
t
= S ∪ Sig(O
t
).
Proof. In order to show that O∪O
t
is a model S
t
-conservative extension of O
1
∪O
t
according
to Definition 3, let 1 be a model of O
1
∪ O
t
. We just construct a model . of O ∪ O
t
such
that 1[
S
= .[
S
. ()
Since O is a model S-conservative extension of O
1
, and 1 is a model of O
1
, by Definition 3
there exists a model .
1
of O such that 1[
S
= .
1
[
S
. Let . be an interpretation such
that .[
Sig(C)
= .
1
[
Sig(C)
and .[
S
= 1[
S
. Since Sig(O) ∩ S
t
= Sig(O) ∩ (S ∪ Sig(O
t
)) ⊆
S ∪ (Sig(O) ∩ Sig(O
t
)) ⊆ S and 1[
S
= .
1
[
S
, such interpretation . always exists. Since
.[
Sig(C)
= .
1
[
Sig(C)
and .
1
[= O, we have . [= O; since .[
S
= 1[
S
, Sig(O
t
) ⊆ S
t
, and
1 [= O
t
, we have . [= O
t
. Hence . [= O ∪ O
t
and 1[
S
= .[
S
, which proves ().
Lemma 7 allows us to identify a condition sufficient to ensure the safety of an ontology
for a signature:
Proposition 8 (Safety for a Signature vs. Model Conservative Extensions)
Let O be an L-ontology and S a signature over L such that O is a model S-conservative
extension of the empty ontology O
1
= ∅; that is, for every interpretation 1 there exists a
model . of O such that .[
S
= 1[
S
. Then O is safe for S w.r.t. L.
Proof. In order to prove that O is safe for S w.r.t. L according to Definition 6, take any
oHO1O ontology O
t
such that Sig(O) ∩Sig(O
t
) ⊆ S. We need to demonstrate that O∪O
t
is a deductive conservative extension of O
t
w.r.t. L. ()
Indeed, by Lemma 7, since O is a model S-conservative extension of O
1
= ∅, and
Sig(O)∩Sig(O
t
) ⊆ S, we have that O∪O
t
is a model S
t
-conservative extension of O
1
∪O
t
= O
t
for S
t
= S ∪ Sig(O
t
). In particular, since Sig(O
t
) ⊆ S
t
, we have that O ∪ O
t
is a deductive
conservative extension of O
t
, as was required to prove ().
Example 9 Let {
1
be the ontology consisting of axioms P1–P4, and E1 from Figure 1.
We show that {
1
is safe for S = ¦Cystic Fibrosis, Genetic Disorder¦ w.r.t. L = oHO1O. By
Proposition 8, in order to prove safety for {
1
it is sufficient to demonstrate that {
1
is a
model S-conservative extension of the empty ontology, that is, for every S-interpretation 1
there exists a model . of {
1
such that 1[
S
= .[
S
.
Consider the model . obtained from 1 as in Example 5. As shown in Example 5, . is
a model of {
1
and 1[
Sig(O)
= .[
Sig(O)
where O consists of axioms M1–M5 from Figure 1.
In particular, since S ⊆ Sig(O), we have 1[
S
= .[
S
. ♦
283
Cuenca Grau, Horrocks, Kazakov, & Sattler
3.4 Extraction of Modules from Ontologies
In our example from Figure 1 the medical ontology O is very small. Well established medical
ontologies, however, can be very large and may describe subject matters in which the de-
signer of { is not interested. For example, the medical ontology O could contain information
about genes, anatomy, surgical techniques, etc.
Even if { imports O without changing the meaning of the reused symbols, processing—
that is, browsing, reasoning over, etc—the resulting ontology { ∪ O may be considerably
harder than processing { alone. Ideally, one would like to extract a (hopefully small) frag-
ment O
1
of the external medical ontology—a module—that describes just the concepts that
are reused in {.
Intuitively, when answering an arbitrary query over the signature of {, importing the
module O
1
should give exactly the same answers as if the whole ontology O had been
imported.
Definition 10 (Module for an Ontology). Let O, O
t
and O
t
1
⊆ O
t
be L-ontologies.
We say that O
t
1
is a module for O in O
t
w.r.t. L, if O ∪ O
t
is a deductive S-conservative
extension of O ∪ O
t
1
for S = Sig(O) w.r.t. L. ♦
The task of extracting modules from imported ontologies in our ontology reuse scenario
can be thus formulated as follows:
T3. given L-ontologies O, O
t
,
compute a module O
t
1
for O in O
t
w.r.t. L.
Example 11 Consider the ontology {
1
consisting of axioms P1–P4, E1 and the ontology O
consisting of axioms M1–M5 from Figure 1. Recall that the axiom α from (1) is a conse-
quence of axioms M1, M2 and M4 as well as of axioms M1, M3 and M4 from ontology O.
In fact, these sets of axioms are actually minimal subsets of O that imply α. In particular,
for the subset O
0
of O consisting of axioms M2, M3, M4 and M5, we have O
0
[= α. It can
be demonstrated that {
1
∪ O
0
is a deductive conservative extension of O
0
. In particular
{
1
∪ O
0
[= α. But then, according to Definition 10, O
0
is not a module for {
1
in O w.r.t.
L = /L( or its extensions, since {
1
∪ O is not an deductive S-conservative extension of
{
1
∪O
0
w.r.t. L = /L( for S = Sig({
1
). Indeed, {
1
∪O [= α, Sig(α) ⊆ S but {
1
∪O
0
[= α.
Similarly, one can show that any subset of O that does not imply α is not a module in O
w.r.t. {
1
.
On the other hand, it is possible to show that both subsets O
1
= ¦M1, M2, M4¦ and
O
2
= ¦M1, M3, M4¦ of O are modules for {
1
in O w.r.t. L = oHO1O. To do so, Defi-
nition 10, we need to demonstrate that {
1
∪ O is a deductive S-conservative extension of
both {
1
∪ O
1
and {
1
∪ O
2
for S = Sig({
1
) w.r.t. L. As usual, we demonstrate a stronger
fact, that {
1
∪O is a model S-conservative extension of {
1
∪O
1
and of {
1
∪O
2
for S which
is sufficient by Claim 1 of Proposition 4.
In order to show that {
1
∪ O is a model S-conservative extension of {
1
∪ O
1
for S =
Sig({
1
), consider any model 1 of {
1
∪ O
1
. We need to construct a model . of {
1
∪ O
such that 1[
S
= .[
S
. Let . be defined exactly as 1 except that the interpretations of the
atomic concepts Fibrosis, Pancreas, Genetic Fibrosis, and Genetic Origin are defined as the
interpretation of Cystic Fibrosis in 1, and the interpretations of atomic roles located In and
284
Modular Reuse of Ontologies: Theory and Practice
has Origin are defined as the identity relation. It is easy to see that axioms M1–M3 and M5
are satisfied in .. Since we do not modify the interpretation of the symbols in {
1
, . also
satisfies all axioms from {
1
. Moreover, . is a model of M4, because Genetic Fibrosis and
Genetic Disorder are interpreted in . like Cystic Fibrosis and Genetic Disorder in 1, and 1 is
a model of O
1
, which implies the concept inclusion α = (Cystic Fibrosis _ Genetic Disorder).
Hence we have constructed a model . of {
1
∪ O such that 1[
S
= .[
S
, thus {
1
∪ O is a
model S-conservative extension of {
1
∪ O
1
.
In fact, the construction above works if we replace O
1
with any subset of O that implies
α. In particular, {
1
∪O is also a model S-conservative extension of {
1
∪O
2
. In this way, we
have demonstrated that the modules for { in O are exactly those subsets of O that imply
α. ♦
An algorithm implementing the task T3 can be used for extracting a module from an
ontology O imported into { prior to performing reasoning over the terms in {. However,
when the ontology { is modified, the module has to be extracted again since a module O
1
for { in O might not necessarily be a module for the modified ontology. Since the extraction
of modules is potentially an expensive operation, it would be more convenient to extract a
module only once and reuse it for any version of the ontology { that reuses the specified
symbols from O. This idea motivates the following definition:
Definition 12 (Module for a Signature). Let O
t
and O
t
1
⊆ O
t
be L-ontologies and S
a signature over L. We say that O
t
1
is a module for S in O
t
(or an S-module in O
t
) w.r.t.
L, if for every L-ontology O with Sig(O) ∩Sig(O
t
) ⊆ S, we have that O
t
1
is a module for O
in O
t
w.r.t. L. ♦
Intuitively, the notion of a module for a signature is a uniform analog of the notion of a
module for an ontology, in a similar way as the notion of safety for a signature is a uniform
analog of safety for an ontology. The reasoning task corresponding to Definition 12 can be
formulated as follows:
T4. given a L-ontology O
t
and a signature S over L,
compute a module O
t
1
for S in O
t
.
Continuing Example 11, it is possible to demonstrate that any subset of O that implies
axiom α, is in fact a module for S = ¦Cystic Fibrosis, Genetic Disorder¦ in O, that is, it can
be imported instead of O into every ontology O that shares with O only symbols from S. In
order to prove this, we use the following sufficient condition based on the notion of model
conservative extension:
Proposition 13 (Modules for a Signature vs. Model Conservative Extensions)
Let O
t
and O
t
1
⊆ O
t
be L-ontologies and S a signature over L such that O
t
is a model
S-conservative extension of O
t
1
. Then O
t
1
is a module for S in O
t
w.r.t. L.
Proof. In order to prove that O
t
1
is a module for S in O
t
w.r.t. L according to Definition 12,
take any oHO1O ontology O such that Sig(O) ∩ Sig(O
t
) ⊆ S. We need to demonstrate
that O
t
1
is a module for O in O
t
w.r.t. L, that is, according to Definition 10, O ∪ O
t
is a
deductive S
t
-conservative extension of O ∪ O
t
1
w.r.t. L for S
t
= Sig(O). ()
285
Cuenca Grau, Horrocks, Kazakov, & Sattler
Indeed, by Lemma 7, since O
t
is a model S-conservative extension of O
t
1
, and Sig(O
t
) ∩
Sig(O) ⊆ S, we have that O
t
∪ O is a model S
tt
-conservative extension of O
t
1
∪ O for
S
tt
= S ∪ Sig(O). In particular, since S
t
= Sig(O) ⊆ S
tt
, we have that O
t
∪ O is a deductive
S
t
-conservative extension of O
t
1
∪ O w.r.t. L, as was required to prove ().
Example 14 Let O and α be as in Example 11. We demonstrate that any subset O
1
of O
which implies α is a module for S = ¦Cystic Fibrosis, Genetic Disorder¦ in O. According to
Proposition 13, it is sufficient to demonstrate that O is a model S-conservative extension of
O
1
, that is, for every model 1 of O
1
there exists a model . of O such that 1[
S
= .[
S
. It is
easy to see that if the model . is constructed from 1 as in Example 11, then the required
property holds. ♦
Note that a module for a signature S in Odoes not necessarily contain all the axioms that
contain symbols from S. For example, the module O
1
consisting of the axiom M1, M2 and
M4 from O does not contain axiom M5 which mentions the atomic concept Cystic Fibrosis
from S. Also note that even in a minimal module like O
1
there might still be some axioms
like M2 that do not mention symbols from S at all.
3.5 Minimal Modules and Essential Axioms
One is usually not interested in extracting arbitrary modules from a reused ontology, but
in extracting modules that are easy to process afterwards. Ideally, the extracted modules
should be as small as possible. Hence it is reasonable to consider the problem of extracting
minimal modules; that is, modules that contain no other module as a subset.
Examples 11 and 14 demonstrate that a minimal module for an ontology or a signature
is not necessarily unique: the ontology O consisting of axioms M1–M5 has two minimal mo-
dules O
1
= ¦M1, M2, M4¦, and O
2
= ¦M1, M3, M4¦, for the ontology {
1
= ¦P1, P2, P3, E1¦
as well as for the signature S = ¦Cystic Fibrosis, Genetic Disorder¦, since these are the min-
imal sets of axioms that imply axiom α = (Cystic Fibrosis _ Genetic Disorder). Depending
on the application scenario, one can consider several variations of tasks T3 and T4 for com-
puting minimal modules. In some applications it might be necessary to extract all minimal
modules, whereas in others any minimal module suffices.
Axioms that do not occur in a minimal module in O are not essential for { because
they can always be removed from every module of O, thus they never need not be imported
into {. This is not true for the axioms that occur in minimal modules of O. It might be
necessary to import such axioms into { in order not to lose essential information from O.
These arguments motivate the following notion:
Definition 15 (Essential Axiom). Let O and O
t
be L-ontologies, S a signature and α
an axiom over L. We say that α is essential for O in O
t
w.r.t. L if α is contained in any
minimal module in O
t
for O w.r.t. L. We say that α is an essential axiom for S in O
t
w.r.t.
L (or S-essential in O
t
) if α is contained in some minimal module for S in O
t
w.r.t. L. ♦
In our example above the axioms M1–M4 from O are essential for the ontology {
1
and
for the signature S = ¦Cystic Fibrosis, Genetic Disorder¦, but the axiom M5 is not essential.
In certain situations one might be interested in computing the set of essential axioms of
an ontology, which can be done by computing the union of all minimal modules. Note that
286
Modular Reuse of Ontologies: Theory and Practice
Notation Input Task
Checking Safety:
T1 O, O
t
, L Check if O is safe for O
t
w.r.t. L
T2 O, S, L Check if O is safe for S w.r.t. L
Extracting of [all / some / union of] [minimal] module(s):
T3[a,s,u][m] O, O
t
, L Extract modules in O
t
w.r.t. O and L
T4[a,s,u][m] O
t
, S, L Extract modules for S in O
t
w.r.t. L
where O, O
t
are ontologies and S a signature over L
Table 2: Summary of reasoning tasks relevant for ontology integration and reuse
computing the union of minimal modules might be easier than computing all the minimal
modules since one does not need to identify which axiom belongs to which minimal module.
In Table 2 we have summarized the reasoning tasks we found to be potentially relevant
for ontology reuse scenarios and have included the variants T3am, T3sm, and T3um of the
task T3 and T4am, T4sm, and T4um of the task T4 for computation of minimal modules
discussed in this section.
Further variants of the tasks T3 and T4 could be considered relevant to ontology reuse.
For example, instead of computing minimal modules, one might be interested in computing
modules with the smallest number of axioms, or modules of the smallest size measured in
the number of symbols, or in any other complexity measure of the ontology. The theoretical
results that we will present in this paper can easily be extended to many such reasoning
tasks.
3.6 Safety and Modules for General Ontology Languages
All the notions introduced in Section 3 are defined with respect to an “ontology language”.
So far, however, we have implicitly assumed that the ontology languages are the description
logics defined in Section 2—that is, the fragments of the DL oHO1O. The notions consid-
ered in Section 3 can be applied, however, to a much broader class of ontology languages.
Our definitions apply to any ontology language with a notion of entailment of axioms from
ontologies, and a mechanism for identifying their signatures.
Definition 16. An ontology language is a tuple L = (Sg, Ax, Sig, [=), where Sg is a set
of signature elements (or vocabulary) of L, Ax is a set of axioms in L, Sig is a function
that assigns to every axiom α ∈ Ax a finite set Sig(α) ⊆ Sg called the signature of α, and
[= is the entailment relation between sets of axioms O ⊆ Ax and axioms α ∈ Ax, written
O [= α. An ontology over L is a finite set of axioms O ⊆ Ax. We extend the function Sig
to ontologies O as follows: Sig(O) :=

α∈C
Sig(α). ♦
Definition 16 provides a very general notion of an ontology language. A language L is
given by a set of symbols (a signature), a set of formulae (axioms) that can be constructed
over those symbols, a function that assigns to each formula its signature, and an entailment
relation between sets of axioms. The ontology language OWL DL as well as all description
287
Cuenca Grau, Horrocks, Kazakov, & Sattler
logics defined in Section 2 are examples of ontology languages in accordance to Definition 16.
Other examples of ontology languages are First Order Logic, Second Order Logic, and Logic
Programs.
It is easy to see that the notions of deductive conservative extension (Definition 1), safety
(Definitions 2 and 6) and modules (Definitions 10 and 12), as well as all the reasoning tasks
from Table 2, are well-defined for every ontology language L given by Definition 16. The
definition of model conservative extension (Definition 3) and the propositions involving
model conservative extensions (Propositions 4, 8, and 13) can be also extended to other
languages with a standard Tarski model-theoretic semantics, such as Higher Order Logic.
To simplify the presentation, however, we will not formulate general requirements on the
semantics of ontology languages, and assume that we deal with sublanguages of oHO1O
whenever semantics is taken into account.
In the remainder of this section, we establish the relationships between the different
notions of safety and modules for arbitrary ontology languages.
Proposition 17 (Safety vs. Modules for an Ontology) Let L be an ontology language,
and let O, O
t
, and O
t
1
⊆ O
t
be ontologies over L. Then:
1. O
t
is safe for O w.r.t. L iff the empty ontology is a module for O in O
t
w.r.t. L.
2. If O
t
` O
t
1
is safe for O ∪ O
t
1
then O
t
1
is a module for O in O
t
w.r.t. L.
Proof. 1. By Definition 2, O
t
is safe for O w.r.t. L iff (a) O∪O
t
is a deductive conservative
extension of O w.r.t. L. By Definition 10, the empty ontology O
t
0
= ∅ is a module for O in
O
t
w.r.t. L iff (b) O ∪ O
t
is a deductive S-conservative extension of O ∪ O
t
0
= O w.r.t. L
for S = Sig(O). It is easy to see that (a) is the same as (b).
2. By Definition 2, O
t
` O
t
1
is safe for O∪O
t
1
w.r.t. L iff (c) O∪O
t
1
∪(O
t
` O
t
1
) = O∪O
t
is a deductive conservative extension of O∪O
t
1
w.r.t. L. In particular, O∪O
t
is a deductive
S-conservative extension of O∪O
t
1
w.r.t. L for S = Sig(O), which implies, by Definition 10,
that O
t
1
is a module for O in O
t
w.r.t. L.
We also provide the analog of Proposition 17 for the notions of safety and modules for
a signature:
Proposition 18 (Safety vs. Modules for a Signature) Let L be an ontology language,
O
t
and O
t
1
⊆ O
t
, ontologies over L, and S a subset of the signature of L. Then:
1. O
t
is safe for S w.r.t. L iff the empty ontology O
t
0
= ∅ is an S-module in O
t
w.r.t. L.
2. If O
t
` O
t
1
is safe for S ∪ Sig(O
t
1
) w.r.t. L, then O
t
1
is an S-module in O
t
w.r.t. L.
Proof. 1. By Definition 6, O
t
is safe for S w.r.t. L iff (a) for every O with Sig(O
t
)∩Sig(O) ⊆
S, it is the case that O
t
is safe for O w.r.t. L. By Definition 12, O
t
0
= ∅ is an S-module
in O
t
w.r.t. L iff (b) for every O with Sig(O
t
) ∩ Sig(O) ⊆ S, it is the case that O
t
0
= ∅ is
a module for O in O
t
w.r.t. L. By Claim 1 of Proposition 17, it is easy to see that (a) is
equivalent to (b).
2. By Definition 6, O
t
` O
t
1
is safe for S ∪ Sig(O
t
1
) w.r.t. L iff (c) for every O, with
Sig(O
t
` O
t
1
) ∩ Sig(O) ⊆ S ∪ Sig(O
t
1
), we have that O
t
` O
t
1
is safe for O w.r.t. L. By
288
Modular Reuse of Ontologies: Theory and Practice
Definition 12, O
t
1
is an S-module in O
t
w.r.t. L iff (d) for every O with Sig(O
t
)∩Sig(O) ⊆ S,
we have that O
t
1
is a module for O in O
t
w.r.t. L.
In order to prove that (c) implies (d), let O be such that Sig(O
t
) ∩Sig(O) ⊆ S. We need
to demonstrate that O
t
1
is a module for O in O
t
w.r.t. L. ()
Let
˜
O := O∪O
t
1
. Note that Sig(O
t
` O
t
1
) ∩Sig(
˜
O) = Sig(O
t
` O
t
1
) ∩(Sig(O) ∪Sig(O
t
1
)) ⊆
(Sig(O
t
` O
t
1
) ∩Sig(O)) ∪Sig(O
t
1
) ⊆ S∪Sig(O
t
1
). Hence, by (c) we have that O
t
` O
t
1
is safe
for
˜
O = O ∪ O
t
1
w.r.t. L which implies by Claim 2 of Proposition 17 that O
t
1
is a module
for O in O
t
w.r.t. L ().
4. Undecidability and Complexity Results
In this section we study the computational properties of the tasks in Table 2 for ontology
languages that correspond to fragments of the description logic oHO1O. We demonstrate
that most of these reasoning tasks are algorithmically unsolvable even for relatively inex-
pressive DLs, and are computationally hard for simple DLs.
Since the notions of modules and safety are defined in Section 3 using the notion of
deductive conservative extension, it is reasonable to identify which (un)decidability and
complexity results for conservative extensions are applicable to the reasoning tasks in Ta-
ble 2. The computational properties of conservative extensions have recently been studied
in the context of description logics. Given O
1
⊆ O over a language L, the problem of
deciding whether O is a deductive conservative extension of O
1
w.r.t. L is 2-EXPTIME-
complete for L = /L( (Ghilardi et al., 2006). This result was extended by Lutz et al.
(2007), who showed that the problem is 2-EXPTIME-complete for L = /L(1O and unde-
cidable for L = /L(1OO. Recently, the problem has also been studied for simple DLs; it
has been shown that deciding deductive conservative extensions is EXPTIME-complete for
L = cL(Lutz & Wolter, 2007). These results can be immediately applied to the notions of
safety and modules for an ontology:
Proposition 19 Given ontologies O and O
t
over L, the problem of determining whether
O is safe for O
t
w.r.t. L is EXPTIME-complete for L = cL, 2-EXPTIME-complete for
L = /L( and L = /L(1O, and undecidable for L = /L(1OO. Given ontologies O, O
t
,
and O
t
1
⊆ O
t
over L, the problem of determining whether O
t
1
is a module for O in O
t
is
EXPTIME-complete for L = cL, 2-EXPTIME complete for L = /L( and L = /L(1O,
and undecidable for L = /L(1OO.
Proof. By Definition 2, an ontology O is safe for O
t
w.r.t. L iff O ∪ O
t
is a deductive
conservative extension of O
t
w.r.t. L. By Definition 2, an ontology O
t
1
is a module for O in
O
t
w.r.t. L if O∪O
t
is a deductive S-conservative extension of O∪O
t
1
for S = Sig(O) w.r.t.
L. Hence, any algorithm for checking deductive conservative extensions can be reused for
checking safety and modules.
Conversely, we demonstrate that any algorithm for checking safety or modules can be
used for checking deductive conservative extensions. Indeed, O is a deductive conservative
extension of O
1
⊆ O w.r.t. L iff O`O
1
is safe for O
1
w.r.t. L iff, by Claim 1 of Proposition 17,
O
0
= ∅ is a module for O
1
in O ` O
1
w.r.t. L .
289
Cuenca Grau, Horrocks, Kazakov, & Sattler
Corollary 20 There exist algorithms performing tasks T1, and T3[a,s,u]m from Table 2
for L = cL and L = /L(1O that run in EXPTIME and 2-EXPTIME respectively. There
is no algorithm performing tasks T1, or T3[a,s,u]m from Table 2 for L = /L(1OO.
Proof. The task T1 corresponds directly to the problem of checking the safety of an ontology,
as given in Definition 2.
Suppose that we have a 2-EXPTIME algorithm that, given ontologies O, O
t
and O
t
1

O
t
, determines whether O
t
1
is a module for O in O
t
w.r.t. L = /L(1O. We demonstrate
that this algorithm can be used to solve the reasoning tasks T3am, T3sm and T3um for
L = /L(1O in 2-EXPTIME. Indeed, given ontologies O and O
t
, one can enumerate all
subsets of O
t
and check in 2-EXPTIME which of these subsets are modules for O in O
t
w.r.t. L. Then we can determine which of these modules are minimal and return all of them,
one of them, or the union of them depending on the reasoning task.
Finally, we prove that solving any of the reasoning tasks T3am, T3sm and T3um is not
easier than checking the safety of an ontology. Indeed, by Claim 1 of Proposition 17, an
ontology O is safe for O
t
w.r.t. L iff O
0
= ∅ is a module for O
t
in O w.r.t. L. Note that
the empty ontology O
0
= ∅ is a module for O
t
in O w.r.t. L iff O
0
= ∅ is the only minimal
module for O
t
in O w.r.t. L.
We have demonstrated that the reasoning tasks T1 and T3[a,s,u]m are computationally
unsolvable for DLs that are as expressive as /L(O1O, and are 2-EXPTIME-hard for /L(.
In the remainder of this section, we focus on the computational properties of the reasoning
tasks T2 and T4[a,s,u]m related to the notions of safety and modules for a signature. We
demonstrate that all of these reasoning tasks are undecidable for DLs that are as expressive
as /L(O.
Theorem 21 (Undecidability of Safety for a Signature) The problem of checking
whether an ontology O consisting of a single /L(-axiom is safe for a signature S is unde-
cidable w.r.t. L = /L(O.
Proof. The proof is a variation of the construction for undecidability of deductive conser-
vative extensions in /L(O1O (Lutz et al., 2007), based on a reduction to a domino tiling
problem.
A domino system is a triple D = (T, H, V ) where T = ¦1, . . . , k¦ is a finite set of tiles
and H, V ⊆ T T are horizontal and vertical matching relations. A solution for a domino
system D is a mapping t
i,j
that assigns to every pair of integers i, j ≥ 1 an element of T,
such that 't
i,j
, t
i,j+1
` ∈ H and 't
i,j
, t
i+1,j
` ∈ V . A periodic solution for a domino system
D is a solution t
i,j
for which there exist integers m ≥ 1 , n ≥ 1 called periods such that
t
i+m,j
= t
i,j
and t
i,j+n
= t
i,j
for every i, j ≥ 1.
Let T be the set of all domino systems, T
s
be the subset of T that admit a solution and
T
ps
be the subset of T
s
that admit a periodic solution. It is well-known (B¨orger, Gr¨adel,
& Gurevich, 1997, Theorem 3.1.7) that the sets T` T
s
and T
ps
are recursively inseparable,
that is, there is no recursive (i.e. decidable) subset T
t
⊆ T of domino systems such that
T
ps
⊆ T
t
⊆ T
s
.
We use this property in our reduction. For each domino system D, we construct a
signature S = S(D), and an ontology O = O(D) which consists of a single /L(-axiom such
that:
290
Modular Reuse of Ontologies: Theory and Practice
(q
1
) · _ A
1
. . A
k
where T = ¦1, . . . , k¦
(q
2
) A
t
¯ A
t
_ ⊥ 1 ≤ t < t
t
≤ k
(q
3
) A
t
_ ∃r
H
.(

/t,t

)∈H
A
t
) 1 ≤ t ≤ k
(q
4
) A
t
_ ∃r
V
.(

/t,t

)∈V
A
t
) 1 ≤ t ≤ k
Figure 2: An ontology O
tile
= O
tile
(D) expressing tiling conditions for a domino system D
(a) if D does not have a solution then O = O(D) is safe for S = S(D) w.r.t. L = /L(O,
and
(b) if D has a periodic solution then O = O(D) is not safe for S = S(D) w.r.t. L = /L(O.
In other words, for the set T
t
consisting of the domino systems D such that O = O(D)
is not safe for S = S(D) w.r.t. L = /L(O, we have T
ps
⊆ T
t
⊆ T
s
. Since T ` T
s
and T
ps
are recursively inseparable, this implies undecidability for T
t
and hence for the problem of
checking if O is S-safe w.r.t. L = /L(O, because otherwise one can use this problem for
deciding membership in T
t
.
The signature S = S(D), and the ontology O = O(D) are constructed as follows. Given
a domino system D = (T, H, V ), let S consist of fresh atomic concepts A
t
for every t ∈ T and
two atomic roles r
H
and r
V
. Consider an ontology O
tile
in Figure 2 constructed for D. Note
that Sig(O
tile
) = S. The axioms of O
tile
express the tiling conditions for a domino system
D, namely (q
1
) and (q
2
) express that every domain element is assigned with a unique tile
t ∈ T; (q
3
) and (q
4
) express that every domain element has horizontal and vertical matching
successors.
Now let s be an atomic role and B an atomic concept such that s, B / ∈ S. Let O := ¦β¦
where:
β := · _ ∃s.

(C
i
¦D
i
)∈C
tile
(C
i
¯ D
i
) . (∃r
H
.∃r
V
.B ¯ ∃r
V
.∃r
H
.B)

We say that r
H
and r
V
commute in an interpretation 1 = (∆
7
,
7
) if for all domain
elements a, b, c, d
1
and d
2
from ∆
7
such that 'a, b` ∈ r
H
7
, 'b, d
1
` ∈ r
V
7
, 'a, c` ∈ r
V
7
, and
'c, d
2
` ∈ r
H
7
, we have d
1
= d
2
. The following claims can be easily proved:
Claim 1. If O
tile
(D) has a model 1 in which r
H
and r
V
commute, then D has a solution.
Indeed a model 1 = (∆,
7
) of O
tile
(D) can be used to guide the construction of a solution
t
i,j
for D as follows. For every i, j ≥ 1, we construct t
i,j
inductively together with elements
a
i,j
∈ ∆
7
such that 'a
i,j
, a
i,j+1
` ∈ r
V
7
and 'a
i,j
, a
i+1,j
` ∈ r
H
7
. We set a
1,1
to any element
from ∆
7
.
Now suppose a
i,j
with i, j ≥ 1 is constructed. Since 1 is a model of axioms (q
1
) and
(q
2
) from Figure 2, we have a unique A
t
with 1 ≤ t ≤ k such that a
i,j
∈ A
t
7
. Then we set
t
i,j
:= t. Since 1 is a model of axioms (q
3
) and (q
4
) and a
i,j
∈ A
t
7
there exist b, c ∈ ∆
7
and t
t
, t
tt
∈ T such that 'a
i,j
, b` ∈ r
H
7
, 'a
i,j
, c` ∈ r
V
7
, 't, t
t
` ∈ H, 't, t
tt
` ∈ V , b ∈ A
t

7
,
and c ∈ A
t

7
. In this case we assign a
i,j+1
:= b, a
i+1,j
:= c, t
i,j+1
:= t
t
, and t
i+1,j
:= t
tt
.
Note that some values of a
i,j
and t
i,j
can be assigned two times: a
i+1,j+1
and t
i+1,j+1
are
constructed for a
i,j+1
and a
i,j+1
. However, since r
V
and r
H
commute in 1, the value for
291
Cuenca Grau, Horrocks, Kazakov, & Sattler
a
i+1,j+1
is unique, and because of (q
2
), the value for t
i+1,j+1
is unique. It is easy to see that
t
i,j
is a solution for D.
Claim 2. If 1 is a model of O
tile
∪ O, then r
H
and r
V
do not commute in 1.
Indeed, it is easy to see that O
tile
∪ O [= (· _ ∃s.[∃r
H
.∃r
V
.B ¯ ∃r
V
.∃r
H
.B]). Hence, if
1 = (∆
7
,
7
) is a model of O
tile
∪O, then there exist a, b, c, d
1
and d
2
such that 'x, a` ∈ s
7
for every x ∈ ∆
7
, 'a, b` ∈ r
H
7
, 'b, d
1
` ∈ r
V
7
, d
1
∈ B
7
, 'a, c` ∈ r
V
7
, 'c, d
2
` ∈ r
H
7
, and
d
2
∈ (B)
7
. This implies that d
1
= d
2
, and so, r
h
and r
V
do not commute in 1.
Finally, we demonstrate that O = O(D) satisfies properties (a) and (b).
In order to prove property (a) we use the sufficient condition for safety given in Propo-
sition 8 and demonstrate that if D has no solution then for every interpretation 1 there
exists a model . of O such that .[
S
= 1[
S
. By Proposition 8, this will imply that O is safe
for S w.r.t. L.
Let 1 be an arbitrary interpretation. Since D has no solution, then by the contra-positive
of Claim 1 either (1) 1 is not a model of O
tile
, or (2) r
H
and r
V
do not commute in 1. We
demonstrate for both of these cases how to construct the required model . of O such that
.[
S
= 1[
S
.
Case (1). If 1 = (∆
7
,
7
) is not a model of O
tile
then there exists an axiom (C
i
_ D
i
) ∈
O
tile
such that 1 [= (C
i
_ D
i
). That is, there exists a domain element a ∈ ∆
7
such that
a ∈ C
7
i
but a ∈ D
7
i
. Let us define . to be identical to 1 except for the interpretation of the
atomic role s which we define in . as s
.
= ¦'x, a` [ x ∈ ∆¦. Since the interpretations of
the symbols in S have remained unchanged, we have a ∈ C
.
i
, a ∈ D
.
i
, and so . [= (· _
∃s.[C
i
¯D
j
]). This implies that . [= β, and so, we have constructed a model . of O such
that .[
S
= 1[
S
.
Case (2). Suppose that r
H
and r
V
do not commute in 1 = (∆
7
,
7
). This means that
there exist domain elements a, b, c, d
1
and d
2
from ∆
7
with 'a, b` ∈ r
H
7
, 'b, d
1
` ∈ r
V
7
,
'a, c` ∈ r
V
7
, and 'c, d
2
` ∈ r
H
7
, such that d
1
= d
2
. Let us define . to be identical to 1 except
for the interpretation of the atomic role s and the atomic concept B. We interpret s in . as
s
.
= ¦'x, a` [ x ∈ ∆¦. We interpret B in . as B
.
= ¦d
1
¦. Note that a ∈ (∃r
H
.∃r
V
.B)
.
and
a ∈ (∃r
V
.∃r
H
.B)
.
since d
1
= d
2
. So, we have . [= (· _ ∃s.[∃r
H
.∃r
V
.B ¯ ∃r
V
.∃r
H
.B])
which implies that . [= β, and thus, we have constructed a model . of O such that
.[
S
= 1[
S
.
In order to prove property (b), assume that D has a periodic solution t
i,j
with the
periods m, n ≥ 1. We demonstrate that O is not S-safe w.r.t. L. For this purpose we
construct an /L(O-ontology O
t
with Sig(O) ∩ Sig(O
t
) ⊆ S such that O ∪ O
t
[= (· _ ⊥),
but O
t
[= (· _ ⊥). This will imply that O is not safe for O
t
w.r.t. L = /L(O, and, hence,
is not safe for S w.r.t. L = /L(O.
We define O
t
such that every model of O
t
is a finite encoding of the periodic solution t
i,j
with the periods m and n. For every pair (i, j) with 1 ≤ i ≤ m and 1 ≤ j ≤ n we introduce
a fresh individual a
i,j
and define O
t
to be an extension of O
tile
with the following axioms:
(p
1
) ¦a
i
1
,j
¦ _ ∃r
V
.¦a
i
2
,j
¦ (p
2
) ¦a
i
1
,j
¦ _ ∀r
V
.¦a
i
2
,j
¦, i
2
= i
1
+ 1 mod m
(p
3
) ¦a
i,j
1
¦ _ ∃r
H
.¦a
i,j
2
¦ (p
4
) ¦a
i,j
1
¦ _ ∀r
H
.¦a
i,j
2
¦, j
2
= j
1
+ 1 mod n
(p
5
) · _

1≤i≤m, 1≤j≤n
¦a
i,j
¦
292
Modular Reuse of Ontologies: Theory and Practice
The purpose of axioms (p
1
)–(p
5
) is to ensure that r
H
and r
V
commute in every model of
O
t
. It is easy to see that O
t
has a model corresponding to every periodic solution for D with
periods m and n. Hence O
t
[= (· _ ⊥). On the other hand, by Claim 2, since O
t
contains
O
tile
, then in every model of O
t
∪ O, r
H
and r
V
do not commute. This is only possible if
O
t
∪ O have no models, so O
t
∪ O [= (· _ ⊥).
A direct consequence of Theorem 21 and Proposition 18 is the undecidability of the
problem of checking whether a subset of an ontology is a module for a signature:
Corollary 22 Given a signature S and /L(-ontologies O
t
and O
t
1
⊆ O
t
, the problem of
determining whether O
t
1
is an S-module in O
t
w.r.t. L = /L(O is undecidable.
Proof. By Claim 1 of Proposition 18, O is S-safe w.r.t. L if O
0
= ∅ is an S-module in O
w.r.t. L. Hence any algorithm for recognizing modules for a signature in L can be used for
checking if an ontology is safe for a signature in L.
Corollary 23 There is no algorithm that can perform tasks T2, or T4[a,s,u]m for L =
/L(O.
Proof. Theorem 21 directly implies that there is no algorithm for task T2, since this task
corresponds to the problem of checking safety for a signature.
Solving any of the reasoning tasks T4am, T4sm, or T4um for L is at least as hard as
checking the safety of an ontology, since, by Claim 1 of Proposition 18, an ontology O is
S-safe w.r.t. L iff O
0
= ∅ is (the only minimal) S-module in O w.r.t. L.
5. Sufficient Conditions for Safety
Theorem 21 establishes the undecidability of checking whether an ontology expressed in
OWL DL is safe w.r.t. a signature. This undecidability result is discouraging and leaves
us with two alternatives: First, we could focus on simple DLs for which this problem is
decidable. Alternatively, we could look for sufficient conditions for the notion of safety—
that is, if an ontology satisfies our conditions, then we can guarantee that it is safe; the
converse, however, does not necessarily hold.
The remainder of this paper focuses on the latter approach. Before we go any further,
however, it is worth noting that Theorem 21 still leaves room for investigating the former
approach. Indeed, safety may still be decidable for weaker description logics, such as cL,
or even for very expressive logics such as oH1O. In the case of oH1O, however, existing
results (Lutz et al., 2007) strongly indicate that checking safety is likely to be exponentially
harder than reasoning and practical algorithms may be hard to design. This said, in what
follows we will focus on defining sufficient conditions for safety that we can use in practice
and restrict ourselves to OWL DL—that is, oHO1O—ontologies.
5.1 Safety Classes
In general, any sufficient condition for safety can be defined by giving, for each signature
S, the set of ontologies over a language that satisfy the condition for that signature. These
ontologies are then guaranteed to be safe for the signature under consideration. These
intuitions lead to the notion of a safety class.
293
Cuenca Grau, Horrocks, Kazakov, & Sattler
Definition 24 (Class of Ontologies, Safety Class). A class of ontologies for a language
L is a function O() that assigns to every subset S of the signature of L, a subset O(S)
of ontologies in L. A class O() is anti-monotonic if S
1
⊆ S
2
implies O(S
2
) ⊆ O(S
1
); it is
compact when O ∈ O(S ∩ Sig(O)) for each O ∈ O(S); and it is subset-closed if O
1
⊆ O
2
and O
2
∈ O(S) implies O
1
∈ O(S); it is union-closed if O
1
∈ O(S) and O
2
∈ O(S) implies
(O
1
∪ O
2
) ∈ O(S).
A safety class (also called a sufficient condition to safety) for an ontology language L
is a class of ontologies O() for L such that for each S it is the case that (i) ∅ ∈ O(S), and
(ii) each ontology O ∈ O(S) is S-safe for L. ♦
Intuitively, a class of ontologies is a collection of sets of ontologies parameterized by a
signature. A safety class represents a sufficient condition for safety: each ontology in O(S)
should be safe for S. Also, w.l.o.g., we assume that the the empty ontology ∅ belongs to
every safety class for every signature. In what follows, whenever an ontology O belongs to
a safety class for a given signature and the safety class is clear from the context, we will
sometimes say that O passes the safety test for S.
Safety classes may admit many natural properties, as given in Definition 24. Anti-
monotonicity intuitively means that if an ontology O can be proved to be safe w.r.t. S
using the sufficient condition, then O can be proved to be safe w.r.t. every subset of S.
Compactness means that it is sufficient to consider only common elements of Sig(O) and S
for checking safety. Subset-closure (union closure) means that if O (O
1
and O
2
) satisfy the
sufficient condition for safety, then every subset of O (the union of O
1
and O
2
) also satisfies
this condition.
5.2 Locality
In this section we introduce a family of safety classes for L = oHO1O that is based on the
semantic properties underlying the notion of model conservative extensions. In Section 3,
we have seen that, according to Proposition 8, one way to prove that O is S-safe is to show
that O is a model S-conservative extension of the empty ontology.
The following definition formalizes the classes of ontologies, called local ontologies, for
which safety can be proved using Proposition 8.
Definition 25 (Class of Interpretations, Locality). Given a oHO1O signature S,
we say that a set of interpretations I is local w.r.t. S if for every oHO1O-interpretation 1
there exists an interpretation . ∈ I such that 1[
S
= .[
S
.
A class of interpretations is a function I() that given a oHO1Osignature S returns a set
of interpretations I(S); it is local if I(S) is local w.r.t. S for every S; it is monotonic if S
1
⊆ S
2
implies I(S
1
) ⊆ I(S
2
); it is compact if for every S
1
, S
2
and S such that (S
1
S
2
) ∩ S = ∅
we have that I(S
1
)[
S
= I(S
2
)[
S
, where S
1
S
2
is the symmetric difference of sets S
1
and
S
2
defined by S
1
S
2
:= S
1
` S
2
∪ S
2
` S
1
.
Given a class of interpretations I(), we say that O() is the class of ontologies O() based
on I() if for every S, O(S) is the set of ontologies that are valid in I(S); if I() is local then
we say that O() is a class of local ontologies, and for every S and O ∈ O(S) and every
α ∈ O, we say that O and α are local (based on I()). ♦
294
Modular Reuse of Ontologies: Theory and Practice
Example 26 Let I
r←∅
A←∅
() be a class of oHO1O interpretations defined as follows. Given a
signature S, the set I
r←∅
A←∅
(S) consists of interpretations . such that r
.
= ∅ for every atomic
role r / ∈ S and A
.
= ∅ for every atomic concept A / ∈ S. It is easy to show that I
r←∅
A←∅
(S)
is local for every S, since for every interpretation 1 = (∆
7
,
7
) and the interpretation
. = (∆
.
,
.
) defined by ∆
.
:= ∆
7
, r
.
= ∅ for r / ∈ S, A
.
= ∅ for A / ∈ S, and X
.
:= X
7
for the remaining symbols X, then . ∈ I
r←∅
A←∅
(S) and 1[
S
= .[
S
. Since I
r←∅
A←∅
(S
1
) ⊆ I
r←∅
A←∅
(S
2
)
for every S
1
⊆ S
2
, it is the case that I
r←∅
A←∅
() is monotonic; I
r←∅
A←∅
() is also compact, since for
every S
1
and S
2
the sets of interpretations I
r←∅
A←∅
(S
1
) and I
r←∅
A←∅
(S
2
) are defined differently
only for elements in S
1
S
2
.
Given a signature S, the set Ax
r←∅
A←∅
(S) of axioms that are local w.r.t. S based on I
r←∅
A←∅
(S)
consists of all axioms α such for every . ∈ I
r←∅
A←∅
(S), it is the case that . [= α. Then the
class of local ontologies based on I
r←∅
A←∅
() is defined by O ∈ O
r←∅
A←∅
(S) iff O ⊆ Ax
r←∅
A←∅
(S).

Proposition 27 (Locality Implies Safety) Let O() be a class of ontologies for oHO1O
based on a local class of interpretations I(). Then O() is a subset-closed and union-closed
safety class for L = oHO1O. Additionally, if I() is monotonic, then O() is anti-monotonic,
and if I() is compact then O() is also compact.
Proof. Assume that O() is a class of ontologies based on I(). Then by Definition 25 for
every oHO1O signature S, we have O ∈ O(S) iff O is valid in I(S) iff . [= O for every
interpretation . ∈ I(S). Since I() is a local class of interpretations, we have that for
every oHO1O-interpretation 1 there exists . ∈ I(S) such that .[
S
= 1[
S
. Hence for
every O ∈ I(S) and every oHO1O interpretation 1 there is a model . ∈ I(S) such that
.[
S
= 1[
S
, which implies by Proposition 8 that O is safe for S w.r.t. L = oHO1O. Thus
O() is a safety class.
The fact that O() is subset-closed and union-closed follows directly from Definition 25
since (O
1
∪ O
2
) ∈ O(S) iff (O
1
∪ O
2
) is valid in I(S) iff O
1
and O
2
are valid in I(S) iff
O
1
∈ O(S) and O
2
∈ O(S). If I() is monotonic then I(S
1
) ⊆ I(S
2
) for every S
1
⊆ S
2
, and
so O ∈ O(S
2
) implies that O is valid in I(S
2
) which implies that O is valid in I(S
1
) which
implies that O ∈ O(S
1
). Hence O() is anti-monotonic.
If I() is compact then for every S
1
, S
2
and S such that (S
1
S
2
) ∩ S = ∅ we have
that I(S
1
)[
S
= I(S
2
)[
S
, hence for every O with Sig(O) ⊆ S we have that O is valid in
I(S
1
) iff O is valid in I(S
2
), and so, O ∈ O(S
1
) iff O ∈ O(S
2
). In particular, O ∈ O(S) iff
O ∈ O(S ∩ Sig(O)) since (S (S ∩ Sig(O))) ∩ Sig(O) = (S ` Sig(O)) ∩ Sig(O) = ∅. Hence,
O() is compact.
Corollary 28 The class of ontologies O
r←∅
A←∅
(S) defined in Example 26 is an anti-monotonic
compact subset-closed and union-closed safety class.
Example 29 Recall that in Example 5 from Section 3, we demonstrated that the ontology
{
1
∪ O given in Figure 1 with {
1
= ¦P1, . . . , P4, E1¦ is a deductive conservative extension
of O for S = ¦Cystic Fibrosis, Genetic Disorder¦. This has been done by showing that every
S-interpretation 1 can be expanded to a model . of axioms P1–P4, E1 by interpreting
the symbols in Sig({
1
) ` S as the empty set. In terms of Example 26 this means that
295
Cuenca Grau, Horrocks, Kazakov, & Sattler
{
1
∈ O
r←∅
A←∅
(S). Since O
r←∅
A←∅
() is a class of local ontologies, by Proposition 27, the ontology
{
1
is safe for S w.r.t. L = oHO1O. ♦
Proposition 27 and Example 29 suggest a particular way of proving the safety of on-
tologies. Given a oHO1O ontology O and a signature S it is sufficient to check whether
O ∈ O
r←∅
A←∅
(S); that is, whether every axiom α in O is satisfied by every interpretation from
I
r←∅
A←∅
(S). If this property holds, then O must be safe for S according to Proposition 27.
It turns out that this notion provides a powerful sufficiency test for safety which works
surprisingly well for many real-world ontologies, as will be shown in Section 8. In the next
section we discuss how to perform this test in practice.
5.3 Testing Locality
In this section, we focus in more detail on the safety class O
r←∅
A←∅
(), introduced in Exam-
ple 26. When ambiguity does not arise, we refer to this safety class simply as locality.
2
From the definition of Ax
r←∅
A←∅
(S) given in Example 26 it is easy to see that an axiom α is
local w.r.t. S (based on I
r←∅
A←∅
(S)) if α is satisfied in every interpretation that fixes the inter-
pretation of all atomic roles and concepts outside S to the empty set. Note that for defining
locality we do not fix the interpretation of the individuals outside S, but in principle, this
could be done. The reason is that there is no elegant way to describe such interpretations.
Namely, every individual needs to be interpreted as an element of the domain, and there is
no “canonical” element of every domain to choose.
In order to test the locality of α w.r.t. S, it is sufficient to interpret every atomic concept
and atomic role not in S as the empty set and then check if α is satisfied in all interpretations
of the remaining symbols. This observation suggests the following test for locality:
Proposition 30 (Testing Locality) Given a oHO1O signature S, concept C, axiom α
and ontology O let τ(C, S), τ(α, S) and τ(O, S) be defined recursively as follows:
τ(C, S) ::= τ(·, S) = ·; (a)
[ τ(A, S) = ⊥ if A / ∈ S and otherwise = A; (b)
[ τ(¦a¦, S) = ¦a¦; (c)
[ τ(C
1
¯ C
2
, S) = τ(C
1
, S) ¯ τ(C
2
, S); (d)
[ τ(C
1
, S) = τ(C
1
, S); (e)
[ τ(∃R.C
1
, S) = ⊥ if Sig(R) S and otherwise = ∃R.τ(C
1
, S); (f)
[ τ(nR.C
1
, S) = ⊥ if Sig(R) S and otherwise = (nR.τ(C
1
, S)). (g)
τ(α, S) ::= τ(C
1
_ C
2
, S) = (τ(C
1
, S) _ τ(C
2
, S)); (h)
[ τ(R
1
_ R
2
, S) = (⊥ _ ⊥) if Sig(R
1
) S, otherwise
= ∃R
1
.· _ ⊥ if Sig(R
2
) S, otherwise = (R
1
_ R
2
); (i)
[ τ(a : C, S) = a : τ(C, S); (j)
[ τ(r(a, b), S) = · _ ⊥ if r / ∈ S and otherwise = r(a, b); (k)
[ τ(Trans(r), S) = ⊥ _ ⊥ if r / ∈ S and otherwise = Trans(r); (l)
[ τ(Funct(R), S) = ⊥ _ ⊥ if Sig(R) S and otherwise = Funct(R). (m)
τ(O, S) ::=

α∈C
τ(α, S) (n)
2. This notion of locality is exactly the one we used in our previous work (Cuenca Grau et al., 2007).
296
Modular Reuse of Ontologies: Theory and Practice
Then, O ∈ O
r←∅
A←∅
(S) iff every axiom in τ(O, S) is a tautology.
Proof. It is easy to check that for every atomic concept A and atomic role r from τ(C, S),
we have A ∈ S and r ∈ S, in other words, all atomic concepts and roles that are not in S
are eliminated by the transformation.
3
It is also easy to show by induction that for every
interpretation 1 ∈ I
r←∅
A←∅
(S), we have C
7
= (τ(C, S))
7
and that 1 [= α iff 1 [= τ(α, S). Hence
an axiom α is local w.r.t. S iff 1 [= α for every interpretation 1 ∈ I
r←∅
A←∅
(S) iff 1 [= τ(α, S)
for every Sig(α)-interpretation 1 ∈ I
r←∅
A←∅
(S) iff τ(α, S) is a tautology.
Example 31 Let O = ¦α¦ be the ontology consisting of axiom α = M2 from Figure 1. We
demonstrate using Proposition 30 that O is local w.r.t. S
1
= ¦Fibrosis, Genetic Origin¦, but
not local w.r.t. S
2
= ¦Genetic Fibrosis, has Origin¦.
Indeed, according to Proposition 30, in order to check whether O is local w.r.t. S
1
it is
sufficient to perform the following replacements in α (the symbols from S
1
are underlined):
M2
⊥ [by (b)]

Genetic Fibrosis ≡ Fibrosis ¯
⊥ [by (f)]

∃has Origin.Genetic Origin (7)
Similarly, in order to check whether O is local w.r.t. S
2
, it is sufficient to perform the
following replacements in α (the symbols from S
2
are underlined):
M2 Genetic Fibrosis ≡
⊥ [by (b)]

Fibrosis ¯ ∃has Origin.
⊥ [by (b)]

Genetic Origin (8)
In the first case we obtain τ(M2, S
1
) = (⊥ ≡ Fibrosis ¯ ⊥) which is a oHO1O-tautology.
Hence O is local w.r.t. S
1
and hence by Proposition 8 is S
1
-safe w.r.t. oHO1O. In the
second case τ(M2, S
2
) = (Genetic Fibrosis ≡ ⊥ ¯ ∃has Origin.⊥) which is not a oHO1O
tautology, hence O is not local w.r.t. S
2
. ♦
5.4 A Tractable Approximation to Locality
One of the important conclusions of Proposition 30 is that one can use the standard ca-
pabilities of available DL-reasoners such as FaCT++ (Tsarkov & Horrocks, 2006), RACER
(M¨oller & Haarslev, 2003), Pellet (Sirin & Parsia, 2004) or KAON2 (Motik, 2006) for testing
locality since these reasoners, among other things, allow testing for DL-tautologies. Che-
cking for tautologies in description logics is, theoretically, a difficult problem (e.g. for the
DL oHO1O it is known to be NEXPTIME-complete, Tobies, 2000). There are, however,
several reasons to believe that the locality test would perform well in practice. The primary
reason is that the sizes of the axioms which need to be tested for tautologies are usually
relatively small compared to the sizes of ontologies. Secondly, modern DL reasoners are
highly optimized for standard reasoning tasks and behave well for most realistic ontologies.
In case reasoning is too costly, it is possible to formulate a tractable approximation to the
locality conditions for oHO1O:
3. Recall that the constructors ⊥, C1 C2, ∀R.C, and (nR.C) are assumed to be expressed using ,
C1 C2, ∃R.C and (nR.C), hence, in particular, every role R

with Sig(R

) S occurs in O as either
∃R

.C, (nR

.C), R

R, R R

, Trans(R

), or Funct(R

), and hence will be eliminated. All atomic
concepts A

/ ∈ S will be eliminated likewise. Note that it is not necessarily the case that Sig(τ(α, S)) ⊆ S,
since τ(α, S) may still contain individuals that do not occur in S.
297
Cuenca Grau, Horrocks, Kazakov, & Sattler
Definition 32 (Syntactic Locality for oHO1O). Let S be a signature. The following
grammar recursively defines two sets of concepts Con

(S) and Con

(S) for a signature S:
Con

(S) ::= A

[ C

[ C ¯ C

[ C

¯ C [ ∃R

.C [ ∃R.C

[ (nR

.C) [ (nR.C

) .
Con

(S) ::= · [ C

[ C

1
¯ C

2
.
where A

/ ∈ S is an atomic concept, R

(possibly the inverse of) an atomic role r

/ ∈ S, C
any concept, R any role, C

∈ Con

(S), C

(i)
∈ Con

(S), and i ∈ ¦1, 2¦.
An axiom α is syntactically local w.r.t. S if it is of one of the following forms: (1) R

_ R,
or (2) Trans(r

), or (3) Funct(R

), or (4) C

_ C, or (5) C _ C

, or (6) a : C

. We denote
by A
˜
x
r←∅
A←∅
(S) the set of all oHO1O-axioms that are syntactically local w.r.t. S.
A oHO1O-ontology O is syntactically local w.r.t. S if O ⊆ A
˜
x
r←∅
A←∅
(S). We denote by
˜
O
r←∅
A←∅
(S) the set of all oHO1O ontologies that are syntactically local w.r.t. S. ♦
Intuitively, syntactic locality provides a simple syntactic test to ensure that an axiom is
satisfied in every interpretation from I
r←∅
A←∅
(S). It is easy to see from the inductive definitions
of Con

(S) and Con

(S) in Definition 32 that for every interpretation 1 = (∆
7
,
7
) from
I
r←∅
A←∅
(S) it is the case that (C

)
7
= ∅ and (C

)
7
= ∆
7
for every C

∈ Con

(S) and
C

∈ Con

(S). Hence, every syntactically local axiom is satisfied in every interpretation
1 from I
r←∅
A←∅
(S), and so we obtain the following conclusion:
Proposition 33 A
˜
x
r←∅
A←∅
(S) ⊆ Ax
r←∅
A←∅
(S).
Further, it can be shown that the safety class for oHO1O based on syntactic locality
enjoys all of the properties from Definition 24:
Proposition 34 The class of syntactically local ontologies
˜
O
r←∅
A←∅
() given in Definition 32
is an anti-monotonic, compact, subset-closed, and union-closed safety class.
Proof.
˜
O
r←∅
A←∅
() is a safety class by Proposition 33. Anti-monotonicity for
˜
O
r←∅
A←∅
() can
be shown by induction, by proving that Con

(S
2
) ⊆ Con

(S
1
), Con

(S
2
) ⊆ Con

(S
1
)
and A
˜
x
r←∅
A←∅
(S
2
) ⊆ A
˜
x
r←∅
A←∅
(S
1
) when S
1
⊆ S
2
. Also one can show by induction that
α ∈ A
˜
x
r←∅
A←∅
(S) iff α ∈ A
˜
x
r←∅
A←∅
(S ∩ Sig(α)), so
˜
O
r←∅
A←∅
() is compact. Since O ∈
˜
O
r←∅
A←∅
(S)
iff O ⊆ A
˜
x
r←∅
A←∅
(S), we have that
˜
O
r←∅
A←∅
() is subset-closed and union-closed.
Example 35 (Example 31 continued) It is easy to see that axiom M2 from Figure 1 is
syntactically local w.r.t. S
1
= ¦Fibrosis, Genetic Origin¦. Below we indicate sub-concepts in
α from Con

(S
1
):
M2
∈ Con

(S
1
) [matches A

]

Genetic Fibrosis ≡ Fibrosis ¯
∈ Con

(S
1
) [matches ∃R

.C]

∃has Origin.Genetic Origin

∈ Con

(S
1
) [matches C ¯ C

]
(9)
It is easy to show in a similar way that axioms P
1
—P4, and E1 from Figure 1 are syn-
tactically local w.r.t. S = ¦Cystic Fibrosis, Genetic Disorder¦. Hence the ontology {
1
=
¦P1, . . . , P4, E1¦ considered in Example 29 is syntactically local w.r.t. S. ♦
298
Modular Reuse of Ontologies: Theory and Practice
I
r←∗
A←∗
(S) r, A ∈ S : r
.
A
.
I
r←∅
A←∅
(S) ∅ ∅
I
r←∆∆
A←∅
(S) ∆
.

.

I
r←id
A←∅
(S) ¦'x, x` [ x ∈ ∆
.
¦ ∅
I
r←∗
A←∗
(S) r, A ∈ S : r
.
A
.
I
r←∅
A←∆
(S) ∅ ∆
.
I
r←∆∆
A←∆
(S) ∆
.

.

.
I
r←id
A←∆
(S) ¦'x, x` [ x ∈ ∆
.
¦ ∆
.
Table 3: Examples for Different Local Classes of Interpretations
The converse of Proposition 33 does not hold in general since there are semantically
local axioms that are not syntactically local. For example, the axiom α = (A _ A . B) is
local w.r.t. every S since it is a tautology (and hence true in every interpretation). On the
other hand, it is easy to see that α is not syntactically local w.r.t. S = ¦A, B¦ according to
Definition 32 since it involves symbols in S only. Another example, which is not a tautology,
is a GCI α = (∃r.A _ ∃r.B). The axiom α is semantically local w.r.t. S = ¦r¦, since
τ(α, S) = (∃r.⊥ _ ∃r.⊥) is a tautology, but not syntactically local. These examples show
that the limitation of our syntactic notion of locality is its inability to “compare” different
occurrences of concepts from the given signature S. As a result, syntactic locality does not
detect all tautological axioms. It is reasonable to assume, however, that tautological axioms
do not occur often in realistic ontologies. Furthermore, syntactic locality checking can be
performed in polynomial time by matching an axiom according to Definition 32.
Proposition 36 There exists an algorithm that given a oHO1O ontology O and a sig-
nature S, determines whether O ∈
˜
O
r←∅
A←∅
(S), and whose running time is polynomial in
[O[ +[S[, where [O[ and [S[ are the number of symbols occurring in O and S respectively.
4
5.5 Other Locality Classes
The locality condition given in Example 26 based on a class of local interpretations I
r←∅
A←∅
()
is just a particular example of locality which can be used for testing safety. Other classes
of local interpretations can be constructed in a similar way by fixing the interpretations
of elements outside S to different values. In Table 3 we have listed several such classes of
local interpretations where we fix the interpretation of atomic roles outside S to be either
the empty set ∅, the universal relation ∆∆, or the identity relation id on ∆, and the
interpretation of atomic concepts outside S to either the empty set ∅ or the set ∆ of all
elements.
Each local class of interpretations in Table 3 defines a corresponding class of local
ontologies analogously to Example 26. In Table 4 we have listed all of these classes together
with examples of typical types of axioms used in ontologies. These axioms are assumed
to be an extension of our project ontology from Figure 1. We indicate which axioms are
local w.r.t. S for which locality conditions assuming, as usual, that the symbols from S are
underlined.
It can be seen from Table 4 that different types of locality conditions are appropriate
for different types of axioms. The locality condition based on I
r←∅
A←∅
(S) captures the domain
axiom P4, definition P5, the disjointness axiom P6, and the functionality axiom P7 but
4. We assume that the numbers in number restrictions are written down using binary coding.
299
Cuenca Grau, Horrocks, Kazakov, & Sattler
α Axiom ? α ∈ Ax
r←∅
A←∅
r←∆∆
A←∅
r←id
A←∅
r←∅
A←∆
r←∆∆
A←∆
r←id
A←∆
P4 ∃has Focus.· _ Project
P5
BioMedical Project ≡ Project ¯
¯ ∃has Focus.Bio Medicine

P6 Project ¯ Bio Medicine _ ⊥
P7 Funct(has Focus)
P8 Human Genome : Project
P9 has Focus(Human Genome, Gene)
E2
∀has Focus.Cystic Fibrosis _
_ ∃has Focus.Cystic Fibrosis

Table 4: A Comparison between Different Types of Locality Conditions
neither of the assertions P8 or P9, since the individuals Human Genome and Gene prevent
us from interpreting the atomic role has Focus and atomic concept Project with the empty
set. The locality condition based on I
r←∆∆
A←∆
(S), where the atomic roles and concepts outside
S are interpreted as the largest possible sets, can capture such assertions but are generally
poor for other types of axioms. For example, the functionality axiom P7 is not captured
by this locality condition since the atomic role has Focus is interpreted as the universal
relation ∆∆, which is not necessarily functional. In order to capture such functionality
axioms, one can use locality based on I
r←id
A←∅
(S) or I
r←id
A←∆
(S), where every atomic role outside
S is interpreted with the identity relation id on the interpretation domain. Note that the
modeling error E2 is not local for any of the given locality conditions. Note also that it is
not possible to come up with a locality condition that captures all the axioms P4–P9, since
in P6 and P8 together imply axiom β = (¦Human Genome¦ ¯ Bio Medicine _ ⊥) which
uses the symbols from S only. Hence, every subset of { containing P6 and P8 is not safe
w.r.t. S, and so cannot be local w.r.t. S.
It might be possible to come up with algorithms for testing locality conditions for the
classes of interpretation in Table 3 similar to the ones presented in Proposition 30. For
example, locality based on the class I
r←∅
A←∆
(S) can be tested as in Proposition 30, where the
case (a) of the definition for τ(C, S) is replaced with the following:
τ(A, S) = · if A / ∈ S and otherwise = A (a
t
)
For the remaining classes of interpretations, that is for I
r←∆∆
A←∗
(S) and I
r←id
A←∗
(S), checking
locality, however, is not that straightforward, since it is not clear how to eliminate the
universal roles and identity roles from the axioms and preserve validity in the respective
classes of interpretations.
Still, it is easy to come up with tractable syntactic approximations for all the locality
conditions considered in this section in a similar manner to what has been done in Sec-
tion 5.4. The idea is the same as that used in Definition 32, namely to define two sets
Con

(S) and Con

(S) of concepts for a signature S which are interpreted as the empty
300
Modular Reuse of Ontologies: Theory and Practice
Con

(S) ::= C

[ C ¯ C

[ C

¯ C
[ ∃R.C

[ nR.C

for I
r←∗
A←∅
(): [ A

for I
r←∅
A←∗
(): [ ∃R

.C [ nR

.C
for I
r←id
A←∗
(): [ (mR
id
.C), m ≥ 2 .
Con

(S) ::= · [ C

[ C

1
¯ C

2
for I
r←∗
A←∆
(): [ A

for I
r←∆∆
A←∗
(): [ ∃R
∆∆
.C

[ (nR
∆∆
.C

)
for I
r←id
A←∗
(): [ ∃R
id
.C

[ (1 R
id
.C

) .
A
˜
x
r←∗
A←∗
(S) ::= C

_ C [ C _ C

[ a : C

for I
r←∅
A←∗
(): [ R

_ R [ Trans(r

) [ Funct(R

)
for I
r←∆∆
A←∗
(): [ R _ R
∆∆
[ Trans(r
∆∆
) [ r
∆∆
(a, b)
for I
r←id
A←∗
(): [ Trans(r
id
) [ Funct(R
id
)
Where:
A

, A

, r

, r
∆∆
, r
id
∈ S;
Sig(R

), Sig(R
∆∆
), Sig(R
id
) S;
C

∈ Con

(S), C

(i)
∈ Con

(S);
C is any concept, R is any role
Figure 3: Syntactic Locality Conditions for the Classes of Interpretations in Table 3
set and, respectively, as ∆
7
in every interpretation 1 from the class and see in which situa-
tions DL-constructors produce the elements from these sets. In Figure 3 we gave recursive
definitions for syntactically local axioms A
˜
x
r←∗
A←∗
(S) that correspond to classes I
r←∗
A←∗
(S) of
interpretations from Table 3, where some cases in the recursive definitions are present only
for the indicated classes of interpretations.
5.6 Combining and Extending Safety Classes
In the previous section we gave examples of several safety classes based on different local
classes of interpretations and demonstrated that different classes are suitable for different
types of axioms. In order to check safety of ontologies in practice, one may try to apply
different sufficient tests and check if any of them succeeds. Obviously, this gives a more
powerful sufficient condition for safety, which can be seen as the union of the safety classes
used in the tests.
Formally, given two classes of ontologies O
1
() and O
2
(), their union (O
1
∪ O
2
)() is a
class of ontologies defined by (O
1
∪O
2
)(S) = O
1
(S)∪O
2
(S). It is easy to see by Definition 24
that if both O
1
() and O
2
() are safety classes then their union (O
1
∪ O
2
)() is a safety
class. Moreover, if each of the safety classes is also anti-monotonic or subset-closed, then
the union is anti-monotonic, or respectively, subset-closed as well. Unfortunately the union-
closure property for safety classes is not preserved under unions, as demonstrated in the
following example:
Example 37 Consider the union (O
r←∅
A←∅
∪ O
r←∆∆
A←∆
)() of two classes of local ontologies
O
r←∅
A←∅
() and O
r←∆∆
A←∆
() defined in Section 5.5. This safety class is not union-closed since,
for example, the ontology O
1
consisting of axioms P4–P7 from Table 4 satisfies the first
locality condition, the ontology O
2
consisting of axioms P8–P9 satisfies the second locality
condition, but their union O
1
∪O
2
satisfies neither the first nor the second locality condition
and, in fact, is not even safe for S as we have shown in Section 5.5. ♦
As shown in Proposition 33, every locality condition gives a union-closed safety class;
however, as seen in Example 37, the union of such safety classes might be no longer union-
closed. One may wonder if locality classes already provide the most powerful sufficient
301
Cuenca Grau, Horrocks, Kazakov, & Sattler
conditions for safety that satisfy all the desirable properties from Definition 24. Surprisingly
this is the case to a certain extent for some locality classes considered in Section 5.5.
Definition 38 (Maximal Union-Closed Safety Class). A safety class O
2
() extends
a safety class O
1
() if O
1
(S) ⊆ O
2
(S) for every S. A safety class O
1
() is maximal union-
closed for a language L if O
1
() is union-closed and for every union-closed safety class O
2
()
that extends O
1
() and every O over L we have that O ∈ O
2
(S) implies O ∈ O
1
(S). ♦
Proposition 39 The classes of local ontologies O
r←∅
A←∅
() and O
r←∅
A←∆
() defined in Section 5.5
are maximal union-closed safety classes for L = oH1O.
Proof. According to Definition 38, if a safety class O() is not maximal union-closed for a
language L, then there exists a signature S and an ontology O over L, such that (i) O / ∈
O(S), (ii) O is safe w.r.t. S in L, and (iii) for every { ∈ O(S), it is the case that O∪ { is
safe w.r.t. S in L; that is, for every ontology O over L with Sig(O) ∩ Sig(O ∪ {) ⊆ S it is
the case that O ∪ { ∪ O is a deductive conservative extension of O. We demonstrate that
this is not possible for O() = O
r←∅
A←∅
() or O() = O
r←∅
A←∆
()
We first consider the case O() = O
r←∅
A←∅
() and then show how to modify the proof for
the case O() = O
r←∅
A←∆
().
Let O be an ontology over L that satisfies conditions (i)–(iii) above. We define ontologies
{ and O as follows. Take { to consist of the axioms A _ ⊥ and ∃r.· _ ⊥ for every atomic
concept A and atomic role r from Sig(O) ` S. It is easy to see that { ∈ O
r←∅
A←∅
(S). Take O
to consist of all tautologies of form ⊥ _ A and ⊥ _ ∃r.· for every A, r ∈ S. Note that
Sig(O ∪ {) ∩ Sig(O) ⊆ S. We claim that O ∪ { ∪ O is not a deductive Sig(O)-conservative
extension of O. ()
Intuitively, the ontology { is chosen in such a way that { ∈ O
r←∅
A←∅
(S) and O ∪ { ∪ O
has only models from I
r←∅
A←∅
(S). O is an ontology which implies nothing but tautologies and
uses all the atomic concepts and roles from S.
Since O / ∈ O
r←∅
A←∅
(S), there exists an axiom α ∈ O such that α / ∈ Ax
r←∅
A←∅
(S). Let
β := τ(α, S) where τ(, ) is defined as in Proposition 30. As has been shown in the proof of
this proposition, 1 [= α iff 1 [= β for every 1 ∈ I
r←∅
A←∅
(S). Now, since O [= α and O ∪ { ∪ O
has only models from I
r←∅
A←∅
(S), it is the case that O∪{∪O [= β. By Proposition 30, since β
does not contain individuals, we have that Sig(β) ⊆ S = Sig(O) and, since α / ∈ Ax
r←∅
A←∅
(S),
β is not a tautology, thus O [= β. Hence, by Definition 1, O ∪ { ∪ O is not a deductive
Sig(O)-conservative extension of O ().
For O() = O
r←∅
A←∆
() the proof can be repeated by taking { to consist of axioms · _ A
and ∃r.· _ ⊥ for all A, r ∈ Sig(O) ` S, and modifying τ(, ) as has been discussed in
Section 5.5.
There are some difficulties in extending the proof of Proposition 39 to other locality
classes considered in Section 5.5. First, it is not clear how to force interpretations of roles
to be the universal or identity relation using oHO1O axioms. Second, it is not clear how
to define the function τ(, ) in these cases (see a related discussion in Section 5.5). Note
also that the proof of Proposition 39 does not work in the presence of nominals, since
this will not guarantee that β = τ(α, S) contains symbols from S only (see Footnote 3 on
p. 297). Hence there is probably room to extend the locality classes O
r←∅
A←∅
() and O
r←∅
A←∆
()
for L = oHO1O while preserving union-closure.
302
Modular Reuse of Ontologies: Theory and Practice
6. Extracting Modules Using Safety Classes
In this section we revisit the problem of extracting modules from ontologies. As shown in
Corollary 23 in Section 4, there exists no general procedure that can recognize or extract
all the (minimal) modules for a signature in an ontology in finite time.
The techniques described in Section 5, however, can be reused for extracting particular
families of modules that satisfy certain sufficient conditions. Proposition 18 establishes the
relationship between the notions of safety and module; more precisely, a subset O
1
of O is
an S-module in O provided that O ` O
1
is safe for S ∪ Sig(O
1
). Therefore, any safety class
O() can provide a sufficient condition for testing modules—that is, in order to prove that
O
1
is a S-module in O, it is sufficient to show that O ` O
1
∈ O(S ∪ Sig(O
1
)). A notion of
modules based on this property can be defined as follows.
Definition 40 (Modules Based on Safety Class).
Let L be an ontology language and O() be a safety class for L. Given an ontology O and
a signature S over L, we say that O
m
⊆ O is an O()-based S-module in O if O ` O
m

O(S ∪ Sig(O
m
)). ♦
Remark 41 Note that for every safety class O(), ontology O and signature S, there exists
at least one O()-based S-module in O, namely O itself; indeed, by Definition 24, the empty
ontology ∅ = O ` O also belongs to O(S) for every O() and S.
Note also that it follows from Definition 40 that O
m
is an O()-based S-module in O iff
O
m
is an O()-based S
t
-module in O for every S
t
with S ⊆ S
t
⊆ (S ∪ Sig(O
m
)). ♦
It is clear that, according to Definition 40, any procedure for checking membership of a
safety class O() can be used directly for checking whether O
m
is a module based on O().
In order to extract an O()-module, it is sufficient to enumerate all possible subsets of the
ontology and check if any of these subsets is a module based on O().
In practice, however, it is possible to avoid checking all the possible subsets of the
input ontology. Figure 4 presents an optimized version of the module-extraction algorithm.
The procedure manipulates configurations of the form O
m
[ O
u
[ O
s
, which represent a
partitioning of the ontology O into three disjoint subsets O
m
, O
u
and O
s
. The set O
m
accumulates the axioms of the extracted module; the set O
s
is intended to be safe w.r.t.
S∪Sig(O
m
). The set O
u
, which is initialized to O, contains the unprocessed axioms. These
axioms are distributed among O
m
and O
s
according to rules R1 and R2. Given an axiom α
from O
u
, rule R1 moves α into O
s
provided O
s
remains safe w.r.t. S ∪ Sig(O
m
) according
to the safety class O(). Otherwise, rule R2 moves α into O
m
and moves all axioms from
O
s
back into O
u
, since Sig(O
m
) might expand and axioms from O
s
might become no longer
safe w.r.t. S ∪ Sig(O
m
). At the end of this process, when no axioms are left in in O
u
, the
set O
m
is an O()-based module in O.
The rewrite rules R1 and R2 preserve invariants I1–I3 given in Figure 4. Invariant I1
states that the three sets O
m
, O
u
and O
s
form a partitioning of O; I2 states that the set O
s
satisfies the safety test for S ∪ Sig(O
m
) w.r.t. O(); finally, I3 establishes that the rewrite
rules either add elements to O
m
, or they add elements to O
s
without changing O
m
; in other
words, the pair ([O
m
[, [O
s
[) consisting of the sizes for these sets increases in lexicographical
order.
303
Cuenca Grau, Horrocks, Kazakov, & Sattler
Input: an ontology O, a signature S, a safety class O()
Output: a module O
m
in O based on O()
Configuration: O
m

module
[
unprocessed

O
u
[ O
s

safe
;
Initial Configuration = ∅ [ O [ ∅
Termination Condition: O
u
= ∅
Rewrite rules:
R1. O
m
[ O
u
∪ ¦α¦ [ O
s
=⇒ O
m
[ O
u
[ O
s
∪ ¦α¦ if (O
s
∪ ¦α¦) ∈ O(S ∪ Sig(O
m
))
R2. O
m
[ O
u
∪ ¦α¦ [ O
s
=⇒ O
m
∪ ¦α¦ [ O
u
∪ O
s
[ ∅ if (O
s
∪ ¦α¦) ∈ O(S ∪ Sig(O
m
))
Invariants for O
m
[ O
u
[ O
s
:
I1. O = O
m
¬ O
u
¬ O
s
I2. O
s
∈ O(S ∪ Sig(O
m
))
Invariant for O
m
[ O
u
[ O
s
=⇒ O
t
m
[ O
t
u
[ O
t
s
:
I3. ([O
m
[, [O
s
[) <
lex
([O
t
m
[, [O
t
s
[)
Figure 4: A Procedure for Computing Modules Based on a Safety Class O()
Proposition 42 (Correctness of the Procedure from Figure 4) Let O() be a safety
class for an ontology language L, O an ontology over L, and S a signature over L. Then:
(1) The procedure in Figure 4 with input O, S, O() terminates and returns an O()-
based S-module O
m
in O; and
(2) If, additionally, O() is anti-monotonic, subset-closed and union-closed, then there
is a unique minimal O()-based S-module in O, and the procedure returns precisely this
minimal module.
Proof. (1) The procedure based on the rewrite rules from Figure 4 always terminates for
the following reasons: (i) for every configuration derived by the rewrite rules, the sets O
m
,
O
u
and O
s
form a partitioning of O (see invariant I1 in Figure 4), and therefore the size
of every set is bounded; (ii) at each rewrite step, ([O
m
[, [O
s
[) increases in lexicographical
order (see invariant I3 in Figure 4). Additionally, if O
u
= ∅ then it is always possible to
apply one of the rewrite rules R1 or R2, and hence the procedure always terminates with
O
u
= ∅. Upon termination, by invariant I1 from Figure 4, O is partitioned into O
m
and O
s
and by invariant I2, O
s
∈ O(S ∪ Sig(O
m
)), which implies, by Definition 40, that O
m
is an
O()-based S-module in O.
(2) Now, suppose that, in addition, O() is an anti-monotonic, subset-closed, and union-
closed safety class, and suppose that O
t
m
is an O()-based S-module in O. We demonstrate
by induction that for every configuration O
m
[ O
u
[ O
s
derivable from ∅ [ O [ ∅ by rewrite
rules R1 and R2, it is the case that O
m
⊆ O
t
m
. This will prove that the module computed
by the procedure is a subset of every O()-based S-module in O, and hence, is the smallest
O()-based S-module in O.
Indeed, for the base case we have O
m
= ∅ ⊆ O
t
m
. The rewrite rule R1 does not change
the set O
m
. For the rewrite rule R2 we have: O
m
[ O
u
∪¦α¦ [ O
s
=⇒ O
m
∪¦α¦ [ O
u
∪O
s
[ ∅
if (O
s
∪ ¦α¦) ∈ O(S ∪ Sig(O
m
)). ()
304
Modular Reuse of Ontologies: Theory and Practice
Input: an ontology O, a signature S, a safety class O()
Output: a module O
m
in O based on O()
Initial Configuration = ∅ [ O [ ∅ Termination Condition: O
u
= ∅
Rewrite rules:
R1. O
m
[ O
u
∪ ¦α¦ [ O
s
=⇒ O
m
[ O
u
[ O
s
∪ ¦α¦ if ¦α¦ ∈ O(S ∪ Sig(O
m
))
R2. O
m
[ O
u
∪ ¦α¦ [ O
α
s
∪ O
s
=⇒ O
m
∪ ¦α¦ [ O
u
∪ O
α
s
[ O
s
if ¦α¦ ∈ O(S ∪ Sig(O
m
)), and
Sig(O
s
) ∩ Sig(α) ⊆ Sig(O
m
)
Figure 5: An Optimized Procedure for Computing Modules Based on a Compact Subset-
Closed Union-Closed Safety Class O()
Suppose, to the contrary, that O
m
⊆ O
t
m
but O
m
∪¦α¦ O
t
m
. Then α ∈ O
t
s
:= O`O
t
m
.
Since O
t
m
is an O()-based S-module in O, we have that O
t
s
∈ O(S ∪ Sig(O
t
m
)). Since O()
is subset-closed, ¦α¦ ∈ O(S ∪ Sig(O
t
m
)). Since O
m
⊆ O
t
m
and O() is anti-monotonic, we
have ¦α¦ ∈ O(S∪Sig(O
m
)). Since by invariant I2 from Figure 4, O
s
∈ O(S∪Sig(O
m
)) and
O() is union-closed, O
s
∪¦α¦ ∈ O(S∪Sig(O
m
)), which contradicts (). This contradiction
implies that the rule R2 also preserves property O
m
⊆ O
t
m
.
Claim (1) of Proposition 42 establishes that the procedure from Figure 4 terminates for
every input and produces a module based on a given safety class. Moreover, it is possible
to show that this procedure runs in polynomial time assuming that the safety test can be
also performed in polynomial time.
If the safety class O() satisfies additional desirable properties, like those based on classes
of local interpretations described in Section 5.2, the procedure, in fact, produces the smallest
possible module based on the safety class, as stated in claim (2) of Proposition 42. In this
case, it is possible to optimize the procedure shown in Figure 4. If O() is union closed,
then, instead of checking whether (O
s
∪ ¦α¦) ∈ O(S ∪ Sig(O
m
)) in the conditions of rules
R1 and R2, it is sufficient to check if ¦α¦ ∈ O(S ∪ Sig(O
m
)) since it is already known that
O
s
∈ O(S ∪ Sig(O
m
)). If O() is compact and subset closed, then instead of moving all the
axioms in O
s
to O
u
in rule R2, it is sufficient to move only those axioms O
α
s
that contain
at least one symbol from α which did not occur in O
m
before, since the set of remaining
axioms will stay in O(S ∪ Sig(O
m
)). In Figure 5 we present an optimized version of the
algorithm from Figure 4 for such locality classes.
Example 43 In Table 5 we present a trace of the algorithm from Figure 5 for the ontology O
consisting of axioms M1–M5 from Figure 1, signature S = ¦Cystic Fibrosis, Genetic Disorder¦
and safety class O() = O
r←∅
A←∅
() defined in Example 26. The first column of the table lists
the configurations obtained from the initial configuration ∅ [ O [ ∅ by applying the rewrite
rules R1 and R2 from Figure 5; in each row, the underlined axiom α is the one that is being
tested for safety. The second column of the table shows the elements of S ∪ Sig(O
m
) that
have appeared for the current configuration but have not been present for the preceding
configurations. In the last column we indicate whether the first conditions of the rules R1
and R2 are fulfilled for the selected axiom α from O
u
—that is, whether α is local for I
r←∅
A←∅
().
The rewrite rule corresponding to the result of this test is applied to the configuration.
305
Cuenca Grau, Horrocks, Kazakov, & Sattler
O
m
[ O
u
, α [ O
s
New elements in S ∪ Sig(O
m
) ¦α¦ ∈ O(S ∪ Sig(O
m
))?
1 − [ M1, M2, M3, M4, M5 [ − Cystic Fibrosis, Genetic Disorder Yes ⇒ R1
2 − [ M1, M3, M4, M5 [ M2 − Yes ⇒ R1
3 − [ M1, M4, M5 [ M2, M3 − No ⇒ R2
4 M1 [ M2, M3, M4, M5 [ − Fibrosis, located In, Pancreas,
has Origin, Genetic Origin No ⇒ R2
5 M1, M3 [ M2, M4, M5 [ − Genetic Fibrosis No ⇒ R2
6 M1, M3, M4 [ M2, M5 [ − − Yes ⇒ R1
7 M1, M3, M4 [ M2 [ M5 − No ⇒ R2
8 M1, M2, M3, M4 [ − [ M5 −
Table 5: A trace of the Procedure in Figure 5 for the input O = ¦M1, . . . , M5¦ from Figure 1
and S = ¦Cystic Fibrosis, Genetic Disorder¦
Note that some axioms α are tested for safety several times for different configurations,
because the set O
u
may increase after applications of rule R2; for example, the axiom
α = M2 is tested for safety in both configurations 1 and 7, and α = M3 in configurations
2 and 4. Note also that different results for the locality tests are obtained in these cases:
both M2 and M3 were local w.r.t. S ∪ Sig(O
m
) when O
m
= ∅, but became non-local when
new axioms were added to O
m
. It is also easy to see that, in our case, syntactic locality
produces the same results for the tests.
In our example, the rewrite procedure produces a module O
m
consisting of axioms M1–
M4. Note that it is possible to apply the rewrite rules for different choices of the axiom
α in O
u
, which results in a different computation. In other words, the procedure from
Figure 5 has implicit non-determinism. According to Claim (2) of Proposition 42 all such
computations should produce the same module O
m
, which is the smallest O
r←∅
A←∅
()-based
S-module in O; that is, the implicit non-determinism in the procedure from Figure 5 does
not have any impact on the result of the procedure. However, alternative choices for α
may result in shorter computations: in our example we could have selected axiom M1 in
the first configuration instead of M2 which would have led to a shorter trace consisting of
configurations 1, and 4–8 only. ♦
It is worth examining the connection between the S-modules in an ontology O based on
a particular safety class O() and the actual minimal S-modules in O. It turns out that any
O()-based module O
m
is guaranteed to cover the set of minimal modules, provided that
O() is anti-monotonic and subset-closed. In other words, given O and S, O
m
contains all
the S-essential axioms in O. The following Lemma provides the main technical argument
underlying this result.
Lemma 44 Let O() be an anti-monotonic subset-closed safety class for an ontology lan-
guage L, O an ontology, and S a signature over L. Let O
1
be an S-module in O w.r.t. L
and O
m
an O()-based S-module in O. Then O
2
:= O
1
∩O
m
is an S-module in O w.r.t. L.
306
Modular Reuse of Ontologies: Theory and Practice
Proof. By Definition 40, since O
m
is an O()-based S-module in O, we have that O` O
m

O(S ∪ Sig(O
m
)). Since O
1
`O
2
= O
1
`O
m
⊆ O`O
m
and O() is subset-closed, it is the case
that O
1
` O
2
∈ O(S ∪ Sig(O
m
)). Since O() is anti-monotonic, and O
2
⊆ O
m
, we have that
O
1
` O
2
∈ O(S ∪ Sig(O
2
)), hence, O
2
is an O()-based S-module in O
1
. In particular O
2
is
an S-module in O
1
w.r.t. L. Since O
1
is an S-module in O w.r.t. L, O
2
is an S-module in
O w.r.t. L.
Corollary 45 Let O() be an anti-monotonic, subset-closed safety class for L and O
m
be
an O()-based S-module in O. Then O
m
contains all S-essential axioms in O w.r.t. L.
Proof. Let O
1
be a minimal S-module in O w.r.t. L. We demonstrate that O
1
⊆ O
m
. Indeed,
otherwise, by Lemma 44, O
1
∩O
m
is an S-module in O w.r.t. L which is strictly contained
in O
1
. Hence O
m
is a superset of every minimal S-module in O and hence, contains all
S-essential axioms in O w.r.t. L.
As shown in Section 3.4, all the axioms M1–M4 are essential for the ontology O and
signature S considered in Example 43. We have seen in this example that the locality-based
S-module extracted from O contains all of these axioms, in accordance to Corollary 45. In
our case, the extracted module contains only essential axioms; in general, however, locality-
based modules might contain non-essential axioms.
An interesting application of modules is the pruning of irrelevant axioms when checking
if an axiom α is implied by an ontology O. Indeed, in order to check whether O [= α
it suffices to retrieve the module for Sig(α) and verify if the implication holds w.r.t. this
module. In some cases, it is sufficient to extract a module for a subset of the signature of
α which, in general, leads to smaller modules. In particular, in order to test subsumption
between a pair of atomic concepts, if the safety class being used enjoys some nice properties
then it suffices to extract a module for one of them, as given by the following proposition:
Proposition 46 Let O() be a compact union-closed safety class for an ontology language
L, O an ontology and A, B atomic concepts. Let O
A
be an O()-based module in O for
S = ¦A¦ in O. Then:
1 If O [= α := (A _ B) and ¦B _ ⊥¦ ∈ O(∅) then O
A
[= α;
2 If O [= α := (B _ A) and ¦· _ B¦ ∈ O(∅) then O
A
[= α;
Proof. 1. Consider two cases: (a) B ∈ Sig(O
A
) and (b) B / ∈ Sig(O
A
).
(a) By Remark 41 it is the case that O
A
is an O()-based module in O for S = ¦A, B¦.
Since Sig(α) ⊆ S, by Definition 12 and Definition 1, it is the case that O [= α implies
O
A
[= α.
(b) Consider O
t
= O∪¦B _ ⊥¦. Since Sig(B _ ⊥) = ¦B¦ and B / ∈ Sig(O
A
), ¦B _ ⊥¦ ∈
O(∅) and O() is compact, by Definition 24 it is the case that ¦B _ ⊥¦ ∈ O(S ∪ Sig(O
A
)).
Since O
A
is an O()-based S-module in O, by Definition 40, then O`O
A
∈ O(S∪Sig(O
A
)).
Since O() is union-closed, it is the case that (O ` O
A
) ∪ ¦B _ ⊥¦ ∈ O(S ∪ Sig(O
A
)). Note
that B _ ⊥ ∈ O
A
, since B ∈ Sig(O
A
), hence (O ` O
A
) ∪ ¦B _ ⊥¦ = O
t
` O
A
, and, by
Definition 40, we have that O
A
is an O()-based S-module in O
t
. Now, since O [= (A _ B),
it is the case that O
t
[= (A _ ⊥), and hence, since O
A
is a module in O
t
for S = ¦A¦, we
have that O
A
[= (A _ ⊥) which implies O
A
[= (A _ B).
307
Cuenca Grau, Horrocks, Kazakov, & Sattler
2. The proof of this case is analogous to that for Case 1: Case (a) is applicable without
changes; in Case (b) we show that O
A
is an O()-based module for S = ¦A¦ in O
t
=
O ∪ ¦· _ B¦, and, hence, since O
t
[= (· _ A), it is the case that O
B
[= (· _ A), which
implies O
t
[= (B _ A).
Corollary 47 Let O be a oHO1O ontology and A, B atomic concepts. Let O
r←∗
A←∅
() and
O
r←∗
A←∆
() be locality classes based on local classes of interpretations of form I
r←∗
A←∅
() and
I
r←∗
A←∆
(), respectively, from Table 3. Let O
A
be a module for S = ¦A¦ based on O
r←∗
A←∅
() and
O
B
be a module for S = ¦B¦ based on O
r←∗
A←∆
(). Then O [= (A _ B) iff O
A
[= (A _ B) iff
O
B
[= (A _ B).
Proof. It is easy to see that ¦B _ ⊥¦ ∈ O
r←∗
A←∅
(∅) and ¦· _ A¦ ∈ O
r←∗
A←∆
(∆).
Proposition 46 implies that a module based on a safety class for a single atomic concept
A can be used for capturing either the super-concepts (Case 1), or the sub-concepts (Case
2) B of A, provided that the safety class captures, when applied to the empty signature,
axioms of the form B _ ⊥ (Case 1) or (· _ B) (Case 2). That is, B is a super-concept or
a sub-concept of A in the ontology if and only if it is such in the module. This property
can be used, for example, to optimize the classification of ontologies. In order to check if
the subsumption A _ B holds in an ontology O, it is sufficient to extract a module in
O for S = ¦A¦ using a modularization algorithm based on a safety class in which the
ontology ¦B _ ⊥¦ is local w.r.t. the empty signature, and check whether this subsumption
holds w.r.t. the module. For this purpose, it is convenient to use a syntactically tractable
approximation of the safety class in use; for example, one could use the syntactic locality
conditions given in Figure 3 instead of their semantic counterparts.
It is possible to combine modularization procedures to obtain modules that are smaller
than the ones obtained using these procedures individually. For example, in order to check
the subsumption O [= (A _ B) one could first extract a O
r←∗
A←∅
()-based module ´
1
for
S = ¦A¦ in O; by Corollary 47 this module is complete for all the super-concepts of A in
O, including B—that is, if an atomic concept is a super-concept of A in O, then it is also a
super-concept of A in ´
1
. One could extract a O
r←∗
A←∆
()-based module ´
2
for S = ¦B¦ in
´
1
which, by Corollary 47, is complete for all the sub-concepts of B in ´
1
, including A.
Indeed, ´
2
is an S-module in ´
1
for S = ¦A, B¦ and ´
1
is an S-module in the original
ontology O. By Proposition 46, therefore, it is the case that ´
2
is also an S-module in O.
7. Related Work
We have seen in Section 3 that the notion of conservative extension is valuable in the forma-
lization of ontology reuse tasks. The problem of deciding conservative extensions has been
recently investigated in the context of ontologies (Ghilardi et al., 2006; Lutz et al., 2007;
Lutz & Wolter, 2007). The problem of deciding whether {∪O is a deductive S-conservative
extension of O is EXPTIME-complete for cL (Lutz & Wolter, 2007), 2-EXPTIME-complete
w.r.t. /L(1O (Lutz et al., 2007) (roughly OWL-Lite), and undecidable w.r.t. /L(1OO
(roughly OWL DL). Furthermore, checking model conservative extensions is already un-
decidable for cL (Lutz & Wolter, 2007), and for /L( it is even not semi-decidable (Lutz
et al., 2007).
308
Modular Reuse of Ontologies: Theory and Practice
In the last few years, a rapidly growing body of work has been developed under the
headings of Ontology Mapping and Alignment, Ontology Merging, Ontology Integration,
and Ontology Segmentation (Kalfoglou & Schorlemmer, 2003; Noy, 2004a, 2004b). This field
is rather diverse and has roots in several communities.
In particular, numerous techniques for extracting fragments of ontologies for the pur-
poses of knowledge reuse have been proposed. Most of these techniques rely on syntactically
traversing the axioms in the ontology and employing various heuristics to determine which
axioms are relevant and which are not.
An example of such a procedure is the algorithm implemented in the Prompt-Factor
tool (Noy & Musen, 2003). Given a signature S and an ontology O, the algorithm re-
trieves a fragment O
1
⊆ O as follows: first, the axioms in O that mention any of the
symbols in S are added to O
1
; second, S is expanded with the symbols in Sig(O
1
). These
steps are repeated until a fixpoint is reached. For our example in Section 3, when S =
¦Cystic Fibrosis, Genetic Disorder¦, and O consists of axioms M1–M5 from Figure 1, the al-
gorithm first retrieves axioms M1, M4, and M5 containing these terms, then expands S with
the symbols mentioned in these axioms, such that S contains all the symbols of O. After
this step, all the remaining axioms of O are retrieved. Hence, the fragment extracted by
the Prompt-Factor algorithm consists of all the axioms M1-M5. In this case, the Prompt-
Factor algorithm extracts a module (though not a minimal one). In general, however, the
extracted fragment is not guaranteed to be a module. For example, consider an ontology
O = ¦A ≡ A, B _ C¦ and α = (C _ B). The ontology O is inconsistent due to the
axiom A ≡ A: any axiom (and α in particular) is thus a logical consequence of O. Given
S = ¦B, C¦, the Prompt-Factor algorithm extracts O
2
= ¦B _ C¦; however, O
2
[= α, and
so O
2
is not a module in O. In general, the Prompt-Factor algorithm may fail even if O is
consistent. For example, consider an ontology O = ¦· _ ¦a¦, A _ B¦, α = (A _ ∀r.A),
and S = ¦A¦. It is easy to see that O is consistent, admits only single element models,
and α is satisfied in every such a model; that is, O [= α. In this case, the Prompt-Factor
algorithm extracts O
1
= ¦A _ B¦, which does not imply α.
Another example is Seidenberg’s segmentation algorithm (Seidenberg & Rector, 2006),
which was used for segmentation of the medical ontology GALEN (Rector & Rogers, 1999).
Currently, the full version of GALEN cannot be processed by reasoners, so the authors
investigate the possibility of splitting GALEN into small “segments” which can be processed
by reasoners separately. The authors describe a segmentation procedure which, given a set
of atomic concepts S, computes a “segment” for S in the ontology. The description of
the procedure is very high-level. The authors discuss which concepts and roles should be
included in the segment and which should not. In particular, the segment should contain
all “super-” and “sub-” concepts of the input concepts, concepts that are “linked” from the
input concepts (via existential restrictions) and their “super-concepts”, but not their “sub-
concepts”; for the included concepts, also “their restrictions”, “intersection”, “union”, and
“equivalent concepts” should be considered by including the roles and concepts they contain,
together with their “super-concepts” and “super-roles” but not their “sub-concepts” and
“sub-roles”. From the description of the procedure it is not entirely clear whether it works
with a classified ontology (which is unlikely in the case of GALEN since the full version of
GALEN has not been classified by any existing reasoner), or, otherwise, how the “super-”
and “sub-” concepts are computed. It is also not clear which axioms should be included in
309
Cuenca Grau, Horrocks, Kazakov, & Sattler
the segment in the end, since the procedure talks only about the inclusion of concepts and
roles.
A different approach to module extraction proposed in the literature (Stuckenschmidt
& Klein, 2004) consists of partitioning the concepts in an ontology to facilitate visualization
of and navigation through an ontology. The algorithm uses a set of heuristics for measuring
the degree of dependency between the concepts in the ontology and outputs a graphical
representation of these dependencies. The algorithm is intended as a visualization technique,
and does not establish a correspondence between the nodes of the graph and sets of axioms
in the ontology.
What is common between the modularization procedures we have mentioned is the lack
of a formal treatment for the notion of module. The papers describing these modularization
procedures do not attempt to formally specify the intended outputs of the procedures, but
rather argue what should be in the modules and what not based on intuitive notions. In
particular, they do not take the semantics of the ontology languages into account. It might
be possible to formalize these algorithms and identify ontologies for which the “intuition-
based” modularization procedures work correctly. Such studies are beyond the scope of this
paper.
Module extraction in ontologies has also been investigated from a formal point of view
(Cuenca Grau et al., 2006b). Cuenca Grau et al. (2006) define a notion of a module O
A
in an
ontology Ofor an atomic concept A. One of the requirements for the module is that Oshould
be a conservative extension of O
A
(in the paper O
A
is called a logical module in O). The
paper imposes an additional requirement on modules, namely that the module O
A
should
entail all the subsumptions in the original ontology between atomic concepts involving A and
other atomic concepts in O
A
. The authors present an algorithm for partitioning an ontology
into disjoint modules and proved that the algorithm is correct provided that certain safety
requirements for the input ontology hold: the ontology should be consistent, should not
contain unsatisfiable atomic concepts, and should have only “safe” axioms (which in our
terms means that they are local for the empty signature). In contrast, the algorithm we
present here works for any ontology, including those containing “non-safe” axioms.
The growing interest in the notion of “modularity” in ontologies has been recently
reflected in a workshop on modular ontologies
5
held in conjunction with the International
Semantic Web Conference (ISWC-2006). Concerning the problem of ontology reuse, there
have been various proposals for “safely” combining modules; most of these proposals, such
as c-connections (Cuenca Grau, Parsia, & Sirin, 2006a), Distributed Description Logics
(Borgida & Serafini, 2003) and Package-based Description Logics (Bao, Caragea, & Honavar,
2006) propose a specialized semantics for controlling the interaction between the importing
and the imported modules to avoid side-effects. In contrast to these works, we assume here
that reuse is performed by simply building the logical union of the axioms in the modules
under the standard semantics, and we establish a collection of reasoning services, such as
safety testing, to check for side-effects. The interested reader can find in the literature a
detailed comparison between the different approaches for combining ontologies (Cuenca
Grau & Kutz, 2007).
5. for information see the homepage of the workshop http://www.cild.iastate.edu/events/womo.html
310
Modular Reuse of Ontologies: Theory and Practice
8. Implementation and Proof of Concept
In this section, we provide empirical evidence of the appropriateness of locality for safety
testing and module extraction. For this purpose, we have implemented a syntactic loca-
lity checker for the locality classes
˜
O
r←∅
A←∅
() and
˜
O
r←∆∆
A←∆
() as well as the algorithm for
extracting modules given in Figure 5 from Section 6.
First, we show that the locality class
˜
O
r←∅
A←∅
() provides a powerful sufficiency test for
safety which works for many real-world ontologies. Second, we show that
˜
O
r←∅
A←∅
()-based
modules are typically very small compared to both the size of the ontology and the modules
extracted using other techniques. Third, we report on our implementation in the ontology
editor Swoop (Kalyanpur, Parsia, Sirin, Cuenca Grau, & Hendler, 2006) and illustrate the
combination of the modularization procedures based on the classes
˜
O
r←∅
A←∅
() and
˜
O
r←∆∆
A←∆
().
8.1 Locality for Testing Safety
We have run our syntactic locality checker for the class
˜
O
r←∅
A←∅
() over the ontologies from a
library of 300 ontologies of various sizes and complexity some of which import each other
(Gardiner, Tsarkov, & Horrocks, 2006).
6
For all ontologies { that import an ontology O,
we check if { belongs to the locality class for S = Sig({) ∩ Sig(O).
It turned out that from the 96 ontologies in the library that import other ontologies, all
but 11 were syntactically local for S (and hence also semantically local for S). From the 11
non-local ontologies, 7 are written in the OWL-Full species of OWL (Patel-Schneider et al.,
2004) to which our framework does not yet apply. The remaining 4 non-localities are due
to the presence of so-called mapping axioms of the form A ≡ B
t
, where A / ∈ S and B
t
∈ S.
Note that these axioms simply indicate that the atomic concepts A, B
t
in the two ontologies
under consideration are synonyms. Indeed, we were able to easily repair these non-localities
as follows: we replace every occurrence of A in { with B
t
and then remove this axiom from
the ontology. After this transformation, all 4 non-local ontologies turned out to be local.
8.2 Extraction of Modules
In this section, we compare three modularization
7
algorithms that we have implemented
using Manchester’s OWL API:
8
A1: The Prompt-Factor algorithm (Noy & Musen, 2003);
A2: The segmentation algorithm proposed by Cuenca Grau et al. (2006);
A3: Our modularisation algorithm (Algorithm 5), based on the locality class
˜
O
r←∅
A←∅
().
The aim of the experiments described in this section is not to provide a throughout com-
parison of the quality of existing modularization algorithms since each algorithm extracts
“modules” according to its own requirements, but rather to give an idea of the typical size
of the modules extracted from real ontologies by each of the algorithms.
6. The library is available at http://www.cs.man.ac.uk/
~
horrocks/testing/
7. In this section by “module” we understand the result of the considered modularization procedures which
may not necessarily be a module according to Definition 10 or 12
8. http://sourceforge.net/projects/owlapi
311
Cuenca Grau, Horrocks, Kazakov, & Sattler
(a) Modularization of NCI (a) Modularization of NCI (b) Modularization of GALEN-Small (b) Modularization of GALEN-Small
(c) Modularization of SNOMED (c) Modularization of SNOMED (d) Modularization of GALEN-Full (d) Modularization of GALEN-Full
(e) Small modules of GALEN-Full (e) Small modules of GALEN-Full (f) Large modules of GALEN-Full (f) Large modules of GALEN-Full
Figure 6: Distribution for the sizes of syntactic locality-based modules: the X-Axis gives the
number of concepts in the modules and the Y-Axis the number of modules extracted for
each size range.
312
Modular Reuse of Ontologies: Theory and Practice
Ontology Atomic A1: Prompt-Factor A2: Segmentation A3: Loc.-based mod.
Concepts Max.(%) Avg.(%) Max.(%) Avg.(%) Max.(%) Avg.(%)
NCI 27772 87.6 75.84 55 30.8 0.8 0.08
SNOMED 255318 100 100 100 100 0.5 0.05
GO 22357 1 0.1 1 0.1 0.4 0.05
SUMO 869 100 100 100 100 2 0.09
GALEN-Small 2749 100 100 100 100 10 1.7
GALEN-Full 24089 100 100 100 100 29.8 3.5
SWEET 1816 96.4 88.7 83.3 51.5 1.9 0.1
DOLCE-Lite 499 100 100 100 100 37.3 24.6
Table 6: Comparison of Different Modularization Algorithms
As a test suite, we have collected a set of well-known ontologies available on the Web,
which we divided into two groups:
Simple. In this group, we have included the National Cancer Institute (NCI) Ontology,
9
the SUMO Upper Ontology,
10
the Gene Ontology (GO),
11
and the SNOMED Ontology
12
.
These ontologies are expressed in a simple ontology language and are of a simple structure;
in particular, they do not contain GCIs, but only definitions.
Complex. This group contains the well-known GALEN ontology (GALEN-Full),
13
the
DOLCE upper ontology (DOLCE-Lite),
14
and NASA’s Semantic Web for Earth and En-
vironmental Terminology (SWEET)
15
. These ontologies are complex since they use many
constructors from OWL DL and/or include a significant number of GCIs. In the case of
GALEN, we have also considered a version GALEN-Small that has commonly been used as
a benchmark for OWL reasoners. This ontology is almost 10 times smaller than the original
GALEN-Full ontology, yet similar in structure.
Since there is no benchmark for ontology modularization and only a few use cases are
available, there is no systematic way of evaluating modularization procedures. Therefore
we have designed a simple experiment setup which, even if it may not necessarily reflect
an actual ontology reuse scenario, it should give an idea of typical module sizes. For each
ontology, we took the set of its atomic concepts and extracted modules for every atomic
concept. We compare the maximal and average sizes of the extracted modules.
It is worth emphasizing here that our algorithm A3 does not just extract a module for
the input atomic concept: the extracted fragment is also a module for its whole signature,
which typically includes a fair amount of other concepts and roles.
9. http://www.mindswap.org/2003/CancerOntology/nciOncology.owl
10. http://ontology.teknowledge.com/
11. http://www.geneontology.org
12. http://www.snomed.org
13. http://www.openclinical.org/prj_galen.html
14. http://www.loa-cnr.it/DOLCE.html
15. http://sweet.jpl.nasa.gov/ontology/
313
Cuenca Grau, Horrocks, Kazakov, & Sattler
(a) Concepts DNA Sequence and
Microanatomy in NCI
(b)
˜
O
r←∅
A←∅
(S)-based module for
DNA Sequence in NCI
(c)
˜
O
r←∆×∆
A←∆
(S)-based module for
Micro Anatomy in the fragment 7b
Figure 7: The Module Extraction Functionality in Swoop
The results we have obtained are summarized in Table 6. The table provides the size
of the largest module and the average size of the modules obtained using each of these
algorithms. In the table, we can clearly see that locality-based modules are significantly
smaller than the ones obtained using the other methods; in particular, in the case of SUMO,
DOLCE, GALEN and SNOMED, the algorithms A1 and A2 retrieve the whole ontology as
the module for each input signature. In contrast, the modules we obtain using our algorithm
are significantly smaller than the size of the input ontology.
For NCI, SNOMED, GO and SUMO, we have obtained very small locality-based mo-
dules. This can be explained by the fact that these ontologies, even if large, are simple
in structure and logical expressivity. For example, in SNOMED, the largest locality-based
module obtained is approximately 0.5% of the size of the ontology, and the average size of
the modules is 1/10 of the size of the largest module. In fact, most of the modules we have
obtained for these ontologies contain less than 40 atomic concepts.
For GALEN, SWEET and DOLCE, the locality-based modules are larger. Indeed, the
largest module in GALEN-Small is 1/10 of the size of the ontology, as opposed to 1/200
in the case of SNOMED. For DOLCE, the modules are even bigger—1/3 of the size of
the ontology—which indicates that the dependencies between the different concepts in the
ontology are very strong and complicated. The SWEET ontology is an exception: even
though the ontology uses most of the constructors available in OWL, the ontology is heavily
underspecified, which yields small modules.
In Figure 6, we have a more detailed analysis of the modules for NCI, SNOMED,
GALEN-Small and GALEN-Full. Here, the X-axis represents the size ranges of the ob-
314
Modular Reuse of Ontologies: Theory and Practice
tained modules and the Y-axis the number of modules whose size is within the given range.
The plots thus give an idea of the distribution for the sizes of the different modules.
For SNOMED, NCI and GALEN-Small, we can observe that the size of the modules
follows a smooth distribution. In contrast, for GALEN-Full, we have obtained a large number
of small modules and a significant number of very big ones, but no medium-sized modules
in-between. This abrupt distribution indicates the presence of a big cycle of dependencies in
the ontology. The presence of this cycle can be spotted more clearly in Figure 6(f); the figure
shows that there is a large number of modules of size in between 6515 and 6535 concepts.
This cycle does not occur in the simplified version of GALEN and thus we obtain a smooth
distribution for that case. In contrast, in Figure 6(e) we can see that the distribution for
the “small” modules in GALEN-Full is smooth and much more similar to the one for the
simplified version of GALEN.
The considerable differences in the size of the modules extracted by algorithms A1 –
A3 are due to the fact that these algorithms extract modules according to different require-
ments. Algorithm A1 produces a fragment of the ontology that contains the input atomic
concept and is syntactically separated from the rest of the axioms—that is, the fragment
and the rest of the ontology have disjoint signatures. Algorithm A2 extracts a fragment of
the ontology that is a module for the input atomic concept and is additionally semantically
separated from the rest of the ontology: no entailment between an atomic concept in the
module and an atomic concept not in the module should hold in the original ontology. Since
our algorithm is based on weaker requirements, it is to be expected that it extracts smaller
modules. What is surprising is that the difference in the size of modules is so significant.
In order to explore the use of our results for ontology design and analysis, we have
integrated our algorithm for extracting modules in the ontology editor Swoop (Kalyanpur
et al., 2006). The user interface of Swoop allows for the selection of an input signature and
the retrieval of the corresponding module.
16
Figure 7a shows the classification of the concepts DNA Sequence and Microanatomy in
the NCI ontology. Figure 7b shows the minimal
˜
O
r←∅
A←∅
()-based module for DNA Sequence,
as obtained in Swoop. Recall that, according to Corollary 47, a
˜
O
r←∅
A←∅
()-based module for an
atomic concept contains all necessary axioms for, at least, all its (entailed) super-concepts
in O. Thus this module can be seen as the “upper ontology” for A in O. In fact, Figure 7
shows that this module contains only the concepts in the “path” from DNA Sequence to
the top level concept Anatomy Kind. This suggests that the knowledge in NCI about the
particular concept DNA Sequence is very shallow in the sense that NCI only “knows” that a
DNA Sequence is a macromolecular structure, which, in the end, is an anatomy kind. If one
wants to refine the module by only including the information in the ontology necessary to
entail the “path” from DNA Sequence to Micro Anatomy, one could extract the
˜
O
r←∆∆
A←∆
()-
based module for Micro Anatomy in the fragment 7b. By Corollary 47, this module contains
all the sub-concepts of Micro Anatomy in the previously extracted module. The resulting
module is shown in Figure 7b.
16. The tool can be downloaded at http://code.google.com/p/swoop/
315
Cuenca Grau, Horrocks, Kazakov, & Sattler
9. Conclusion
In this paper, we have proposed a set of reasoning problems that are relevant for ontology
reuse. We have established the relationships between these problems and studied their com-
putability. Using existing results (Lutz et al., 2007) and the results obtained in Section 4, we
have shown that these problems are undecidable or algorithmically unsolvable for the logic
underlying OWL DL. We have dealt with these problems by defining sufficient conditions
for a solution to exist, which can be computed in practice. We have introduced and studied
the notion of a safety class, which characterizes any sufficiency condition for safety of an
ontology w.r.t. a signature. In addition, we have used safety classes to extract modules from
ontologies.
For future work, we would like to study other approximations which can produce small
modules in complex ontologies like GALEN, and exploit modules to optimize ontology
reasoning.
References
Baader, F., Brandt, S., & Lutz, C. (2005). Pushing the cL envelope. In IJCAI-05, Pro-
ceedings of the Nineteenth International Joint Conference on Artificial Intelligence,
Edinburgh, Scotland, UK, July 30-August 5, 2005, pp. 364–370. Professional Book
Center.
Baader, F., Calvanese, D., McGuinness, D. L., Nardi, D., & Patel-Schneider, P. F. (Eds.).
(2003). The Description Logic Handbook: Theory, Implementation, and Applications.
Cambridge University Press.
Bao, J., Caragea, D., & Honavar, V. (2006). On the semantics of linking and importing in
modular ontologies. In Proceedings of the 5th International Semantic Web Conference
(ISWC-2006), Athens, GA, USA, November 5-9, 2006, Vol. 4273 of Lecture Notes in
Computer Science, pp. 72–86.
B¨orger, E., Gr¨adel, E., & Gurevich, Y. (1997). The Classical Decision Problem. Perspectives
of Mathematical Logic. Springer-Verlag. Second printing (Universitext) 2001.
Borgida, A., & Serafini, L. (2003). Distributed description logics: Assimilating information
from peer sources. J. Data Semantics, 1, 153–184.
Cuenca Grau, B., Horrocks, I., Kazakov, Y., & Sattler, U. (2007). A logical framework
for modularity of ontologies. In IJCAI-07, Proceedings of the Twentieth International
Joint Conference on Artificial Intelligence, Hyderabad, India, January 2007, pp. 298–
304. AAAI.
Cuenca Grau, B., & Kutz, O. (2007). Modular ontology languages revisited. In Proceedings
of the Workshop on Semantic Web for Collaborative Knowledge Acquisition, Hyder-
abad, India, January 5, 2007.
Cuenca Grau, B., Parsia, B., & Sirin, E. (2006a). Combining OWL ontologies using c-
connections. J. Web Sem., 4(1), 40–59.
Cuenca Grau, B., Parsia, B., Sirin, E., & Kalyanpur, A. (2006b). Modularity and web ontolo-
gies. In Proceedings of the Tenth International Conference on Principles of Knowledge
316
Modular Reuse of Ontologies: Theory and Practice
Representation and Reasoning (KR-2006), Lake District of the United Kingdom, June
2-5, 2006, pp. 198–209. AAAI Press.
Cuenca Grau, B., Horrocks, I., Kazakov, Y., & Sattler, U. (2007). Just the right amount:
extracting modules from ontologies. In Proceedings of the 16th International Confe-
rence on World Wide Web (WWW-2007), Banff, Alberta, Canada, May 8-12, 2007,
pp. 717–726. ACM.
Cuenca Grau, B., Horrocks, I., Kutz, O., & Sattler, U. (2006). Will my ontologies fit
together?. In Proceedings of the 2006 International Workshop on Description Logics
(DL-2006), Windermere, Lake District, UK, May 30 - June 1, 2006, Vol. 189 of CEUR
Workshop Proceedings. CEUR-WS.org.
Gardiner, T., Tsarkov, D., & Horrocks, I. (2006). Framework for an automated compari-
son of description logic reasoners. In Proceedings of the 5th International Semantic
Web Conference (ISWC-2006), Athens, GA, USA, November 5-9, 2006, Vol. 4273 of
Lecture Notes in Computer Science, pp. 654–667. Springer.
Ghilardi, S., Lutz, C., & Wolter, F. (2006). Did I damage my ontology? a case for con-
servative extensions in description logics. In Proceedings of the Tenth International
Conference on Principles of Knowledge Representation and Reasoning (KR-2006),
Lake District of the United Kingdom, June 2-5, 2006, pp. 187–197. AAAI Press.
Horrocks, I., Patel-Schneider, P. F., & van Harmelen, F. (2003). From SHIQ and RDF to
OWL: the making of a web ontology language. J. Web Sem., 1(1), 7–26.
Horrocks, I., & Sattler, U. (2005). A tableaux decision procedure for oHO1O. In Proceedings
of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI-05),
Edinburgh, Scotland, UK, July 30-August 5, 2005, pp. 448–453. Professional Book
Center.
Kalfoglou, Y., & Schorlemmer, M. (2003). Ontology mapping: The state of the art. The
Knowledge Engineering Review, 18, 1–31.
Kalyanpur, A., Parsia, B., Sirin, E., Cuenca Grau, B., & Hendler, J. A. (2006). Swoop: A
web ontology editing browser. J. Web Sem., 4(2), 144–153.
Lutz, C., Walther, D., & Wolter, F. (2007). Conservative extensions in expressive description
logics. In Proceedings of the Twentieth International Joint Conference on Artificial
Intelligence (IJCAI-07), Hyderabad, India, January 2007, pp. 453–459. AAAI.
Lutz, C., & Wolter, F. (2007). Conservative extensions in the lightweight description logic
cL. In Proceedings of the 21st International Conference on Automated Deduction
(CADE-21), Bremen, Germany, July 17-20, 2007, Vol. 4603 of Lecture Notes in Com-
puter Science, pp. 84–99. Springer.
M¨oller, R., & Haarslev, V. (2003). Description logic systems. In The Description Logic
Handbook, chap. 8, pp. 282–305. Cambridge University Press.
Motik, B. (2006). Reasoning in Description Logics using Resolution and Deductive
Databases. Ph.D. thesis, Univesit¨at Karlsruhe (TH), Karlsruhe, Germany.
Noy, N. F. (2004a). Semantic integration: A survey of ontology-based approaches. SIGMOD
Record, 33(4), 65–70.
317
Cuenca Grau, Horrocks, Kazakov, & Sattler
Noy, N. F. (2004b). Tools for mapping and merging ontologies. In Staab, & Studer (Staab
& Studer, 2004), pp. 365–384.
Noy, N., & Musen, M. (2003). The PROMPT suite: Interactive tools for ontology mapping
and merging. Int. Journal of Human-Computer Studies, Elsevier, 6(59).
Patel-Schneider, P., Hayes, P., & Horrocks, I. (2004). Web ontology language OWL Abstract
Syntax and Semantics. W3C Recommendation.
Rector, A., & Rogers, J. (1999). Ontological issues in using a description logic to represent
medical concepts: Experience from GALEN. In IMIA WG6 Workshop, Proceedings.
Schmidt-Schauß, M., & Smolka, G. (1991). Attributive concept descriptions with comple-
ments. Artificial Intelligence, Elsevier, 48(1), 1–26.
Seidenberg, J., & Rector, A. L. (2006). Web ontology segmentation: analysis, classification
and use. In Proceedings of the 15th international conference on World Wide Web
(WWW-2006), Edinburgh, Scotland, UK, May 23-26, 2006, pp. 13–22. ACM.
Sirin, E., & Parsia, B. (2004). Pellet system description. In Proceedings of the 2004 In-
ternational Workshop on Description Logics (DL2004), Whistler, British Columbia,
Canada, June 6-8, 2004, Vol. 104 of CEUR Workshop Proceedings. CEUR-WS.org.
Staab, S., & Studer, R. (Eds.). (2004). Handbook on Ontologies. International Handbooks
on Information Systems. Springer.
Stuckenschmidt, H., & Klein, M. (2004). Structure-based partitioning of large class hier-
archies. In Proceedings of the Third International Semantic Web Conference (ISWC-
2004), Hiroshima, Japan, November 7-11, 2004, Vol. 3298 of Lecture Notes in Com-
puter Science, pp. 289–303. Springer.
Tobies, S. (2000). The complexity of reasoning with cardinality restrictions and nominals
in expressive description logics. J. Artif. Intell. Res. (JAIR), 12, 199–217.
Tsarkov, D., & Horrocks, I. (2006). FaCT++ description logic reasoner: System description.
In Proceedings of the Third International Joint Conference on Automated Reasoning
(IJCAR 2006), Seattle, WA, USA, August 17-20, 2006, Vol. 4130 of Lecture Notes in
Computer Science, pp. 292–297. Springer.
318

Cuenca Grau, Horrocks, Kazakov, & Sattler

2007), which has been motivated by the above-mentioned application needs. In this paper, we focus on the use of modularity to support the partial reuse of ontologies. In particular, we consider the scenario in which we are developing an ontology P and want to reuse a set S of symbols from a “foreign” ontology Q without changing their meaning. For example, suppose that an ontology engineer is building an ontology about research projects, which specifies different types of projects according to the research topic they focus on. The ontology engineer in charge of the projects ontology P may use terms such as Cystic Fibrosis and Genetic Disorder in his descriptions of medical research projects. The ontology engineer is an expert on research projects; he may be unfamiliar, however, with most of the topics the projects cover and, in particular, with the terms Cystic Fibrosis and Genetic Disorder. In order to complete the projects ontology with suitable definitions for these medical terms, he decides to reuse the knowledge about these subjects from a wellestablished medical ontology Q. The most straightforward way to reuse these concepts is to construct the logical union P ∪ Q of the axioms in P and Q. It is reasonable to assume that the additional knowledge about the medical terms used in both P and Q will have implications on the meaning of the projects defined in P; indeed, the additional knowledge about the reused terms provides new information about medical research projects which are defined using these medical terms. Less intuitive is the fact that importing Q may also result in new entailments concerning the reused symbols, namely Cystic Fibrosis and Genetic Disorder. Since the ontology engineer of the projects ontology is not an expert in medicine and relies on the designers of Q, it is to be expected that the meaning of the reused symbols is completely specified in Q; that is, the fact that these symbols are used in the projects ontology P should not imply that their original “meaning” in Q changes. If P does not change the meaning of these symbols in Q, we say that P ∪ Q is a conservative extension of Q In realistic application scenarios, it is often unreasonable to assume the foreign ontology Q to be fixed; that is, Q may evolve beyond the control of the modelers of P. The ontology engineers in charge of P may not be authorized to access all the information in Q or, most importantly, they may decide at a later time to reuse the symbols Cystic Fibrosis and Genetic Disorder from a medical ontology other than Q. For application scenarios in which the external ontology Q may change, it is reasonable to “abstract” from the particular Q under consideration. In particular, given a set S of “external symbols”, the fact that the axioms in P do not change the meaning of any symbol in S should be independent of the particular meaning ascribed to these symbols by Q. In that case, we will say that P is safe for S. Moreover, even if P “safely” reuses a set of symbols from an ontology Q, it may still be the case that Q is a large ontology. In particular, in our example, the foreign medical ontology may be huge, and importing the whole ontology would make the consequences of the additional information costly to compute and difficult for our ontology engineers in charge of the projects ontology (who are not medical experts) to understand. In practice, therefore, one may need to extract a module Q1 of Q that includes only the relevant information. Ideally, this module should be as small as possible while still guarantee to capture the meaning of the terms used; that is, when answering queries against the research projects ontology, importing the module Q1 would give exactly the same answers as if the whole medical ontology Q had been imported. In this case, importing the module will have the
274

Modular Reuse of Ontologies: Theory and Practice

same observable effect on the projects ontology as importing the entire ontology. Furthermore, the fact that Q1 is a module in Q should be independent of the particular P under consideration. The contributions of this paper are as follows: 1. We propose a set of tasks that are relevant to ontology reuse and formalize them as reasoning problems. To this end, we introduce the notions of conservative extension, safety and module for a very general class of logic-based ontology languages. 2. We investigate the general properties of and relationships between the notions of conservative extension, safety, and module and use these properties to study the relationships between the relevant reasoning problems we have previously identified. 3. We consider Description Logics (DLs), which provide the formal underpinning of the W3C Web Ontology Language (OWL), and study the computability of our tasks. We show that all the tasks we consider are undecidable or algorithmically unsolvable for the description logic underlying OWL DL—the most expressive dialect of OWL that has a direct correspondence to description logics. 4. We consider the problem of deciding safety of an ontology for a signature. Given that this problem is undecidable for OWL DL, we identify sufficient conditions for safety, which are decidable for OWL DL—that is, if an ontology satisfies our conditions then it is safe; the converse, however, does not necessarily hold. We propose the notion of a safety class, which characterizes any sufficiency condition for safety, and identify a family of safety classes–called locality—which enjoys a collection of desirable properties. 5. We next apply the notion of a safety class to the task of extracting modules from ontologies; we provide various modularization algorithms that are appropriate to the properties of the particular safety class in use. 6. We present empirical evidence of the practical benefits of our techniques for safety checking and module extraction. This paper extends the results in our previous work (Cuenca Grau, Horrocks, Kutz, & Sattler, 2006; Cuenca Grau et al., 2007; Cuenca Grau, Horrocks, Kazakov, & Sattler, 2007).

2. Preliminaries
In this section we introduce description logics (DLs) (Baader, Calvanese, McGuinness, Nardi, & Patel-Schneider, 2003), a family of knowledge representation formalisms which underlie modern ontology languages, such as OWL DL (Patel-Schneider, Hayes, & Horrocks, 2004). A hierarchy of commonly-used description logics is summarized in Table 1. The syntax of a description logic L is given by a signature and a set of constructors. A signature (or vocabulary) Sg of a DL is the (disjoint) union of countably infinite sets AC of atomic concepts (A, B, . . . ) representing sets of elements, AR of atomic roles (r, s, . . . ) representing binary relations between elements, and Ind of individuals (a, b, c, . . . ) representing constants. We assume the signature to be fixed for every DL.
275

Cuenca Grau, Horrocks, Kazakov, & Sattler

Rol RBox EL r ALC – – S – – Trans(r) + I r− + H R 1 R2 + F Funct(R) + N ( n S) + Q ( n S.C) + O {i} Here r ∈ AR, A ∈ AC, a, b ∈ Ind, R(i) ∈ Rol, C(i) ∈ Con, n ≥ 1 and S ∈ Rol is a simple role (see (Horrocks & Sattler, 2005)). Table 1: The hierarchy of standard description logics

DLs

Constructors Con , A, C1 C2 , ∃R.C – –, ¬C – –

Axioms [ Ax ] TBox ABox A ≡ C, C1 C2 a : C, r(a, b) – – – – – – – –

Every DL provides constructors for defining the set Rol of (general) roles (R, S, . . . ), the set Con of (general) concepts (C, D, . . . ), and the set Ax of axioms (α, β, . . . ) which is a union of role axioms (RBox), terminological axioms (TBox) and assertions (ABox). EL (Baader, Brandt, & Lutz, 2005) is a simple description logic which allows one to construct complex concepts using conjunction C1 C2 and existential restriction ∃R.C starting from atomic concepts A, roles R and the top concept . EL provides no role constructors and no role axioms; thus, every role R in EL is atomic. The TBox axioms of EL can be either concept definitions A ≡ C or general concept inclusion axioms (GCIs) C1 C2 . EL assertions are either concept assertions a : C or role assertions r(a, b). In this paper we assume the concept definition A ≡ C is an abbreviation for two GCIs A C and C A. The basic description logic ALC (Schmidt-Schauß & Smolka, 1991) is obtained from EL by adding the concept negation constructor ¬C. We introduce some additional constructors as abbreviations: the bottom concept ⊥ is a shortcut for ¬ , the concept disjunction C1 C2 stands for ¬(¬C1 ¬C2 ), and the value restriction ∀R.C stands for ¬(∃R.¬C). In contrast to EL, ALC can express contradiction axioms like ⊥. The logic S is an extension of ALC where, additionally, some atomic roles can be declared to be transitive using a role axiom Trans(r). Further extensions of description logics add features such as inverse roles r− (indicated by appending a letter I to the name of the logic), role inclusion axioms (RIs) R1 R2 (+H), functional roles Funct(R) (+F), number restrictions ( n S), with n ≥ 1, (+N ), qualified number restrictions ( n S.C), with n ≥ 1, (+Q)1 , and nominals {a} (+O). Nominals make it possible to construct a concept representing a singleton set {a} (a nominal concept) from an individual a. These extensions can be used in different combinations; for example ALCO is an extension of ALC with nominals; SHIQ is an extension of S with role hierarchies,
1. the dual constructors ( n S) and ( n S.C) are abbreviations for ¬( n + 1 S) and ¬( n + 1 S.¬C), respectively

276

I |= r(a. 277 . are syntactic variants thereof. z ∈ ∆I [ x. are based on description logics and. atomic roles and individuals that occur in O (respectively in α).Modular Reuse of Ontologies: Theory and Practice inverse roles and qualified number restrictions. x ∈ rI } ( n R)I = { x ∈ ∆I | {y ∈ ∆I | x. check if O implies α. called the domain of the interpretation. z ∈ ∆I [ x. or I is a model of α) is defined as follows: I |= C1 I |= R1 I I C2 iff C1 ⊆ C2 .C)I (¬C)I (r− )I = = = = = ∆ C I ∩ DI {x ∈ ∆I | ∃y. In particular. we say that an axiom α (an ontology O) is valid in I if every interpretation I ∈ I is a model of α (respectively O). and ·I is the interpretation function that assigns: to every A ∈ AC a subset AI ⊆ ∆I .C)I = { x ∈ ∆I | {y ∈ ∆I | x. such as OWL. ·J ) coincide on the subset S of the signature (notation: I|S = J |S ) if ∆I = ∆J and X I = X J for every X ∈ S. to every r ∈ AR a binary relation rI ⊆ ∆I × ∆I . I I R2 iff R1 ⊆ R2 . y ∈ RI ∧ x. equivalently. y ∈ RI ∧ y ∈ C I } ≥ n } {a}I = {aI } The satisfaction relation I |= α between an interpretation I and a DL axiom α (read as I satisfies α. The interpretation function ·I is extended to complex roles and concepts via DLconstructors as follows: ( )I (C D)I (∃R. z ∈ rI ]. Modern ontology languages. z ∈ rI ⇒ x. we assume an ontology O based on a description logic L to be a finite set of axioms in L. bI ∈ rI . where ∆I is a non-empty set. y ∈ rI ∧ y. b) iff aI . & van Harmelen. to a certain extent. y. In this paper. I |= a : C iff aI ∈ C I . We say that two interpretations I = (∆I . We say that two sets of interpretations I and J are equal modulo S (notation: I|S = J|S ) if for every I ∈ I there exits J ∈ J such that J |S = I|S and for every J ∈ J there exists I ∈ I such that I|S = J |S . 2003). I |= Trans(r) iff ∀x. An interpretation I is a model of an ontology O if I satisfies all axioms in O. I |= Funct(R) iff ∀x. y. The main reasoning task for ontologies is entailment: given an ontology O and an axiom α. An interpretation I is a pair I = (∆I . y ∈ RI ∧ y ∈ C I } ∆I \ C I { x. ·I ) and J = (∆J . Patel-Schneider. x. y | y. The logical entailment |= is defined using the usual Tarski-style set-theoretic semantics for description logics as follows. is implied by the empty ontology). and to every a ∈ Ind an element aI ∈ ∆I . The signature of an ontology O (of an axiom α) is the set Sig(O) (Sig(α)) of atomic concepts. and SHOIQ is the DL that uses all the constructors and axiom types we have presented. y ∈ RI } ≥ n } ( n R. An axiom α is a tautology if it is valid in the set of all interpretations (or. Given a set I of interpretations. OWL DL corresponds to SHOIN (Horrocks. AR and Ind are not defined by the interpretation I but assumed to be fixed for the ontology language (DL). An ontology O implies an axiom α (written O |= α) if I |= α for every model I of O. z ∈ RI ⇒ y = z ]. Note that the sets AC. ·I ).

the second one describes European projects about cystic fibrosis. These notions are defined relative to a language L.6. our tasks are based on the notions of a conservative extension (Section 3. with most of the topics the projects cover and. with the terms Cystic Fibrosis and Genetic Disorder mentioned in P1 and P2. Assume that the ontology engineer defines two concepts—Genetic Disorder Project and Cystic Fibrosis EUProject—in his ontology P. safety and modules apply.Pancreas Genetic Disorder Immuno Protein Gene Figure 1: Reusing medical terminology in an ontology on research projects 3.Genetic Origin ∃has Origin. Horrocks. we assume that L is an ontology language based on a description logic. 3. & Sattler Ontology of medical research projects P: P1 P2 P3 P4 E1 E2 Genetic Disorder Project ≡ Project EUProject ∃has Focus.Genetic Disorder ∃has Focus. which specifies different types of projects according to the research topic they are concerned with. in particular.4).Cuenca Grau.1 A Motivating Example Suppose that an ontology engineer is in charge of a SHOIQ ontology on research projects.Genetic Origin ∃located In. in Section 3. The ontology engineer is an expert on research projects: he knows. Based on this application scenario.Cystic Fibrosis ∃has Origin. that every instance of EUProject must be an instance of Project (the concept-inclusion axiom P3) and that the role has Focus can be applied only to instances of Project (the domain axiom P4). He may be unfamiliar. we motivate and define the reasoning tasks to be investigated in the remainder of the paper. however.Cystic Fibrosis Cystic Fibrosis EUProject ≡ EUProject Project Project (Genetic Disorder :: Cystic Fibrosis) ⊥ ∀ has Focus. Within this section. for example. we elaborate on the ontology reuse scenario sketched in Section 1. For the convenience of the reader. Kazakov. In particular. The first one describes projects about genetic disorders.Pancreas Genetic Fibrosis ∃associated With.2 and 3. In order to complete the projects ontology with suitable definitions 278 .Genetic Disorder Ontology of medical terms Q: M1 Cystic Fibrosis ≡ Fibrosis M2 Genetic Fibrosis ≡ Fibrosis M3 Fibrosis M4 Genetic Fibrosis M5 DEFBI Gene ∃located In.Cystic Fibrosis ∃has Focus.2). as given by the axioms P1 and P2 in Figure 1. all the tasks we consider in this paper are summarized in Table 2. we will define formally the class of ontology languages for which the given definitions of conservative extensions.3) and module (Section 3. Project :: ∃has Focus. Ontology Integration and Knowledge Reuse In this section. safety (Sections 3.

perhaps due to modeling errors. intuitively. and E2 expresses 279 . Importing additional axioms into an ontology may result in new logical consequences. over the terms defined in the main ontology P. In order to illustrate such a situation. as a consequence. he decides to reuse the knowledge about these subjects from a wellestablished and widely-used medical ontology. Using inclusion α from (1) and axioms P1–P3 from ontology P we can now prove that every instance of Cystic Fibrosis EUProject is also an instance of Genetic Disorder Project: P ∪ Q |= β := (Cystic Fibrosis EUProject Genetic Disorder Project) (2) This inclusion β. Suppose the ontology engineer has formalized the statements (3) and (4) in ontology P using axioms E1 and E2 respectively. P |= β. that the meaning of the terms defined in Q changes as a consequence of the import since these terms are supposed to be completely specified within Q. in (2). Suppose that Cystic Fibrosis and Genetic Disorder are described in an ontology Q containing the axioms M1-M5 in Figure 1. One would not expect. like β.” ::: “:::::::::::::: that has Focus on Cystic Fibrosis. At this point. even though it concerns the terms of primary scope in his projects ontology P.Modular Reuse of Ontologies: Theory and Practice for these medical terms. however. For example. does not follow from P alone—that is. the concept inclusion α1 := (Cystic Fibrosis Genetic Fibrosis) follows from axioms M1 and M2 as well as from axioms M1 and M3. It is natural to expect that entailments like α in (1) from an imported ontology Q result in new logical consequences. they should not change or constrain the meaning of the medical terms. it is easy to see that axioms M1–M4 in Q imply that every instance of Cystic Fibrosis is an instance of Genetic Disorder: Q |= α := (Cystic Fibrosis Genetic Disorder) (1) Indeed. the ontology engineer has introduced modeling errors and. axioms E1 and E2 do not correspond to (3) and (4): E1 actually formalizes the following statement: “Every instance of Project is different from every common instance of Genetic Disorder and Cystic Fibrosis”. α follows from axioms α1 and M4. to add the axioms from Q to the axioms in P and work with the extended ontology P ∪ Q. however. however. might change during the import. The most straightforward way to reuse these concepts is to import in P the ontology Q—that is. Such a side effect would be highly undesirable for the modeling of ontology P since the ontology engineer of P might not be an expert on the subject of Q and is not supposed to alter the meaning of the terms defined in Q not even implicitly. The meaning of the reused terms. The ontology engineer might be not aware of Entailment (2). suppose that the ontology engineer has learned about the concepts Genetic Disorder and Cystic Fibrosis from the ontology Q (including the inclusion (1)) and has decided to introduce additional axioms formalizing the following statements: “Every instance of Project is different from every instance of Genetic Disorder and every instance of Cystic Fibrosis. also has Focus on Genetic Disorder” Every project (3) (4) Note that the statements (3) and (4) can be thought of as adding new information about projects and.

& Sattler that “Every object that either has Focus on nothing. also has Focus on Genetic Disorder”. or has Focus only on Cystic Fibrosis. P |= ( Project). which expresses the inconsistency of the concept Cystic Fibrosis.2 Conservative Extensions and Safety for an Ontology As argued in the previous section. from (1) and (5) we obtain P ∪ Q |= δ := (Cystic Fibrosis ⊥)..t. L. This requirement can be naturally formulated using the well-known notion of a conservative extension.r. and S a signature over L.(Genetic Disorder ¬Cystic Fibrosis) (6) The inclusion (6) and P4 imply that every element in the domain must be a project—that is. On the other hand. L. we have seen that importing an external ontology can lead to undesirable side effects in our knowledge reuse scenario. 2006.t.r. Indeed. E2 is not a consequence of (4) and. The axioms E1 and E2 not only imply new statements about the medical terms. Lutz. in fact. L if O is a deductive S-conservative extension of O1 w. These kinds of modeling errors are difficult to detect. 280 .r. Lutz et al. Kazakov.t.t. L for S = Sig(O1 ). then O is a deductive S1 -conservative extension of O1 w. an ontology O is a deductive S-conservative extension of O1 for a signature S and language L if and only if every logical consequence α of O constructed using the language L and symbols only from S. which has recently been investigated in the context of ontologies (Ghilardi. Note that. We say that O is a deductive conservative extension of O1 w. Let O and O1 ⊆ O be two Lontologies.t. We say that O is a deductive S-conservative extension of O1 w. together with axiom E1. the additional axioms in O \ O1 do not result into new logical consequences over the vocabulary S. In the next section we discuss how to formalize the effects we consider undesirable. Indeed. ♦ In other words. namely their disjointness: P |= γ := (Genetic Disorder Cystic Fibrosis ⊥) (5) The entailment (5) can be proved using axiom E2 which is equivalent to: ∃has Focus. is already a logical consequence of O1 . 2007). Now. To summarize. especially when they do not cause inconsistencies in the ontology. we have O |= α iff O1 |= α.Cuenca Grau. 3. L for every S1 ⊆ S. although axiom E1 does not correspond to fact (3). that is. Note that if O is a deductive S-conservative extension of O1 w. Horrocks. an important requirement for the reuse of an ontology Q within an ontology P should be that P ∪ Q produces exactly the same logical consequences over the vocabulary of Q as Q alone does. it is still a consequence of (3) which means that it should not constrain the meaning of the medical terms. like the entailment of new axioms or even inconsistencies involving the reused vocabulary. & Wolter. but also cause inconsistencies when used together with the imported axioms from Q. it constrains the meaning of medical terms.r. if for every axiom α over L with Sig(α) ⊆ S. the axioms E1 and E2 together with axioms P1-P4 from P imply new axioms about the concepts Cystic Fibrosis and Genetic Disorder. The notion of a deductive conservative extension can be directly applied to our ontology reuse scenario. this implies (5). Definition 1 (Deductive Conservative Extension).r.

Given L-ontologies O and O . 2007) 1. ALC. if for every model I of O1 . We say that O is a model S-conservative extension of O1 . if O is a model S-conservative extension of O1 then O is a deductive S-conservative extension of O1 w. to show that if the axiom E2 is removed from P then for the resulting ontology P1 = P \ {E2}. this means that P ∪ Q is not a deductive conservative extension of Q w. Intuitively. E2. O1 ⊆ O. Lutz et al. but O is not a model conservative extension of O1 . L if O ∪ O is a deductive conservative extension of O w. it is sufficient to show that P1 ∪ Q is a model conservative extension of Q. L. and E1 and an ontology Q consisting of axioms M1–M5 from Figure 1.t. however. Let O and O1 ⊆ O be two L-ontologies and S a signature over L. There exist two ALC ontologies O and O1 ⊆ O such that O is a deductive conservative extension of O1 w. L. We demonstrate that P1 ∪ Q is a deductive conservative extension of Q. According to proposition 4. Lutz et al.r. given L-ontologies O and O . we say that O is safe for O (or O imports O in a safe way) w. It is possible. the first reasoning task relevant for our ontology reuse scenario can be formulated as follows: T1.r. The following notion is useful for proving deductive conservative extensions: Definition 3 (Model Conservative Extension. there exists an axiom δ = (Cystic Fibrosis ⊥) that uses only symbols in Sig(Q) such that Q |= δ but P ∪ Q |= δ. for every model I of Q there exists a model J of P1 ∪ Q such that I|S = J |S for S = Sig(Q). determine if O is safe for O w.1 that. Deductive Conservative Extensions. given P consisting of axioms P1–P4. and a signature S over L. as given in Definition 1.t. however. whereas the former is defined in terms of models.r. there exists a model J of O such that I|S = J |S . and Q consisting of axioms M1–M5 from Figure 1. that is. ♦ The notion of model conservative extension in Definition 3 can be seen as a semantic counterpart for the notion of deductive conservative extension in Definition 1: the latter is defined in terms of logical entailment.r.Modular Reuse of Ontologies: Theory and Practice Definition 2 (Safety for an Ontology).t. E1. For every two L-ontologies O.r.t. P1 ∪ Q is a deductive conservative extension of Q. The notion of model conservative extension.g. Example 5 Consider an ontology P1 consisting of axioms P1–P4. this notion can be used for proving that an ontology is a deductive conservative extension of another. According to Definition 1.t.t. L. an ontology O is a model S-conservative extension of O1 if for every model of O1 one can find a model of O over the same domain which interprets the symbols from S in the same way. any language L in which δ can be expressed (e.. L = ALC). 2. that is.. 2007). does not provide a complete characterization of deductive conservative extensions. but not vice versa: Proposition 4 (Model vs. We say that O is a model conservative extension of O1 if O is a model S-conservative extension of O1 for S = Sig(O1 ). ♦ Hence. 281 . We have shown in Section 3.r.

This makes it possible to continue developing P and Q independently. Cystic Fibrosis EUProject. Moreover. we abstract from the particular ontology Q to be imported and focus instead on the symbols from Q that are to be reused: Definition 6 (Safety for a Signature). it is often convenient to keep Q separate from P and make its axioms available on demand via a reference. a language L if and only if it imports in a safe way any ontology O written in L that shares only symbols from S with O.r. we refer the interested reader to the literature (Lutz et al.3 Safety of an Ontology for a Signature So far. O ∪ O is a deductive conservative extension of O w. that any ontology O containing axiom E2 is not safe for S = {Cystic Fibrosis. In fact.t.t. L. L = ALC. determine if O is safe for S w. As seen in Section 3. β1 and β2 imply the contradiction α = ( ⊥). 455). 2007. 282 . E2 from Figure 1. however. By Definition 6. which is not entailed by O . where β1 = ( Cystic Fibrosis) and β2 = (Genetic Disorder ⊥).r. it is easy to see that O ∪ O is inconsistent. does not import Q consisting of axioms M1–M5 from Figure 1 in a safe way for L = ALC. Consider an ontology O = {β1 . in our ontology reuse scenario we have assumed that the reused ontology Q is fixed and the axioms of Q are just copied into P during the import. E1 are satisfied in J and hence J |= P1 .r. all of which we interpret in J as the empty set.t. it is possible to show a stronger result. the ontology engineers of the project ontology P may not be willing to depend on a particular version of Q.t.r. The associated reasoning problem is then formulated in the following way: T2. we have that O is safe for O w. in our knowledge reuse scenario. or may even decide at a later time to reuse the medical terms (Cystic Fibrosis and Genetic Disorder) from another medical ontology instead. Hence.Cuenca Grau. Let O be a L-ontology and S a signature over L.r.2. Since E2 is equivalent to axiom (6). namely. & Sattler The required model J can be constructed from I as follows: take J to be identical to I except for the interpretations of the atomic concepts Genetic Disorder Project. Horrocks. since Sig(O) ∩ Sig(O ) ⊆ S. For example. Genetic Disorder} w. It is easy to check that all the axioms P1–P4. In practice. given an L-ontology O and a signature S over L. p. β2 }. the ontology P consisting of axioms P1–P4.r. According to Definition 6.t. ♦ 3. O ∪ O is not a deductive conservative extension of O . the ontology P is not safe for S w. In order to formulate such condition.t. for many application scenarios it is important to develop a stronger safety condition for P which depends as little as possible on the particular ontology Q to be reused. since Sig(P) ∩ Sig(Q) ⊆ S = {Cystic Fibrosis. For an example of Statement 2 of the Proposition. Indeed E2.r. Genetic Disorder}. Hence. that is. since the interpretation of the symbols in Q remains unchanged. an ontology O is safe for a signature S w. E1. EUProject and the atomic role has Focus. ♦ Intuitively. if for every L-ontology O with Sig(O) ∩ Sig(O ) ⊆ S. this means that O is not safe for S. we have I|Sig(Q) = J |Sig(Q) and J |= Q.. L. Project. L. Therefore. L. P1 ∪ Q is a model conservative extension of Q. Kazakov. L. We say that O is safe for S w.t.

It is not so clear. As shown in Example 5. by Definition 3 there exists a model J1 of O such that I|S = J1 |S . however. Sig(O ) ⊆ S . since Sig(O ) ⊆ S . L. Model Conservative Extensions) Let O be an L-ontology and S a signature over L such that O is a model S-conservative extension of the empty ontology O1 = ∅. we have I|S = J |S .r. The following lemma introduces a property which relates the notion of model conservative extension with the notion of safety for a signature. O1 ⊆ O. which proves ( ). Lemma 7 Let O. ♦ 283 . In particular.t. take any SHOIQ ontology O such that Sig(O) ∩ Sig(O ) ⊆ S.Modular Reuse of Ontologies: Theory and Practice It is clear how one could prove that an ontology O is not safe for a signature S: simply find an ontology O with Sig(O) ∩ Sig(O ) ⊆ S. for every interpretation I there exists a model J of O such that J |S = I|S . Hence J |= O ∪ O and I|S = J |S . Intuitively. Then O ∪ O is a model S -conservative extension of O1 ∪ O for S = S ∪ Sig(O ). J is a model of P1 and I|Sig(Q) = J |Sig(Q) where Q consists of axioms M1–M5 from Figure 1. We just construct a model J of O ∪ O such that I|S = J |S . Lemma 7 allows us to identify a condition sufficient to ensure the safety of an ontology for a signature: Proposition 8 (Safety for a Signature vs. By Proposition 8.r. and I |= O . Example 9 Let P1 be the ontology consisting of axioms P1–P4. and Sig(O)∩Sig(O ) ⊆ S. We need to demonstrate that O ∪ O is a deductive conservative extension of O w. L = SHOIQ. it says that the notion of model conservative extension is stable under expansion with new axioms provided they share only symbols from S with the original ontologies. in order to prove safety for P1 it is sufficient to demonstrate that P1 is a model S-conservative extension of the empty ontology. In order to prove that O is safe for S w. It turns out that the notion of model conservative extensions can also be used for this purpose.r.t. Proof. let I be a model of O1 ∪ O . () Indeed. for every S-interpretation I there exists a model J of P1 such that I|S = J |S . Then O is safe for S w. we have that O∪O is a model S -conservative extension of O1 ∪O = O for S = S ∪ Sig(O ).t. Genetic Disorder} w. Since Sig(O) ∩ S = Sig(O) ∩ (S ∪ Sig(O )) ⊆ S ∪ (Sig(O) ∩ Sig(O )) ⊆ S and I|S = J1 |S . how one could prove that O is safe for S. such that O ∪ O is not a deductive conservative extension of O . we have that O ∪ O is a deductive conservative extension of O . L. In order to show that O∪O is a model S -conservative extension of O1 ∪O according to Definition 3. since O is a model S-conservative extension of O1 = ∅. Let J be an interpretation such that J |Sig(O) = J1 |Sig(O) and J |S = I|S . we have J |= O . since J |S = I|S . In particular. that is. that is. L according to Definition 6. we have J |= O.r. and E1 from Figure 1. Proof. since S ⊆ Sig(Q). by Lemma 7.t. Consider the model J obtained from I as in Example 5. such interpretation J always exists. as was required to prove ( ). We show that P1 is safe for S = {Cystic Fibrosis. Since J |Sig(O) = J1 |Sig(O) and J1 |= O. and I is a model of O1 . () Since O is a model S-conservative extension of O1 . and O be L-ontologies and S a signature over L such that O is a model S-conservative extension of O1 and Sig(O) ∩ Sig(O ) ⊆ S.

Genetic Fibrosis. L = ALC or its extensions. L. when answering an arbitrary query over the signature of P. Even if P imports Q without changing the meaning of the reused symbols. L. M2 and M4 as well as of axioms M1. Intuitively.r. Definition 10 (Module for an Ontology). given L-ontologies O. that P1 ∪ Q is a model S-conservative extension of P1 ∪ Q1 and of P1 ∪ Q2 for S which is sufficient by Claim 1 of Proposition 4. M2. Recall that the axiom α from (1) is a consequence of axioms M1. surgical techniques. anatomy. if O ∪ O is a deductive S-conservative extension of O ∪ O1 for S = Sig(O) w. P1 . O and O1 ⊆ O be L-ontologies. It can be demonstrated that P1 ∪ Q0 is a deductive conservative extension of Q0 . In fact. We need to construct a model J of P1 ∪ Q such that I|S = J |S . we demonstrate a stronger fact. etc—the resulting ontology P ∪ Q may be considerably harder than processing P alone. For example.r. it is possible to show that both subsets Q1 = {M1. L = SHOIQ.t. since P1 ∪ Q is not an deductive S-conservative extension of P1 ∪ Q0 w. P1 ∪ Q |= α.4 Extraction of Modules from Ontologies In our example from Figure 1 the medical ontology Q is very small. Let O.r. M4} and Q2 = {M1. one would like to extract a (hopefully small) fragment Q1 of the external medical ontology—a module—that describes just the concepts that are reused in P.t. the medical ontology Q could contain information about genes.r.r. & Sattler 3.t. Indeed. E1 and the ontology Q consisting of axioms M1–M5 from Figure 1. In order to show that P1 ∪ Q is a model S-conservative extension of P1 ∪ Q1 for S = Sig(P1 ). Similarly. we need to demonstrate that P1 ∪ Q is a deductive S-conservative extension of both P1 ∪ Q1 and P1 ∪ Q2 for S = Sig(P1 ) w. Example 11 Consider the ontology P1 consisting of axioms P1–P4. Pancreas.t. M3. Sig(α) ⊆ S but P1 ∪ Q0 |= α. Kazakov. we have Q0 |= α. Q0 is not a module for P1 in Q w.r. L. Horrocks.t. To do so. In particular.t. etc. reasoning over. M4} of Q are modules for P1 in Q w. O . M4 and M5. We say that O1 is a module for O in O w. browsing. But then. according to Definition 10. one can show that any subset of Q that does not imply α is not a module in Q w. and Genetic Origin are defined as the interpretation of Cystic Fibrosis in I. these sets of axioms are actually minimal subsets of Q that imply α. compute a module O1 for O in O w. On the other hand. processing— that is. consider any model I of P1 ∪ Q1 . and the interpretations of atomic roles located In and 284 . L = ALC for S = Sig(P1 ). Definition 10.t.Cuenca Grau. Well established medical ontologies. however.t. for the subset Q0 of Q consisting of axioms M2. L. M3.r. Ideally. Let J be defined exactly as I except that the interpretations of the atomic concepts Fibrosis. M3 and M4 from ontology Q. In particular P1 ∪ Q0 |= α. ♦ The task of extracting modules from imported ontologies in our ontology reuse scenario can be thus formulated as follows: T3.r. can be very large and may describe subject matters in which the designer of P is not interested. importing the module Q1 should give exactly the same answers as if the whole ontology Q had been imported. As usual.

J is a model of M4. when the ontology P is modified. we have demonstrated that the modules for P in Q are exactly those subsets of Q that imply α. L. because Genetic Fibrosis and Genetic Disorder are interpreted in J like Cystic Fibrosis and Genetic Disorder in I. it can be imported instead of Q into every ontology O that shares with Q only symbols from S. Continuing Example 11. Proof.t. ♦ Intuitively. Genetic Disorder} in Q. J also satisfies all axioms from P1 . However. it is possible to demonstrate that any subset of Q that implies axiom α.r.Modular Reuse of Ontologies: Theory and Practice has Origin are defined as the identity relation.r. if for every L-ontology O with Sig(O) ∩ Sig(O ) ⊆ S. O ∪ O is a deductive S -conservative extension of O ∪ O1 w. The reasoning task corresponding to Definition 12 can be formulated as follows: T4. Since the extraction of modules is potentially an expensive operation. Hence we have constructed a model J of P1 ∪ Q such that I|S = J |S . Moreover.r. that is.r. This idea motivates the following definition: Definition 12 (Module for a Signature). L. We need to demonstrate that O1 is a module for O in O w. compute a module O1 for S in O . is in fact a module for S = {Cystic Fibrosis. according to Definition 10. and I is a model of Q1 . () 285 .t. the module has to be extracted again since a module Q1 for P in Q might not necessarily be a module for the modified ontology.t. ♦ An algorithm implementing the task T3 can be used for extracting a module from an ontology Q imported into P prior to performing reasoning over the terms in P.r.t.t. L for S = Sig(O).r. that is. the construction above works if we replace Q1 with any subset of Q that implies α. Then O1 is a module for S in O w. P1 ∪ Q is also a model S-conservative extension of P1 ∪ Q2 . given a L-ontology O and a signature S over L. Since we do not modify the interpretation of the symbols in P1 . In fact. Let O and O1 ⊆ O be L-ontologies and S a signature over L. we use the following sufficient condition based on the notion of model conservative extension: Proposition 13 (Modules for a Signature vs. L. the notion of a module for a signature is a uniform analog of the notion of a module for an ontology. In order to prove this. in a similar way as the notion of safety for a signature is a uniform analog of safety for an ontology. It is easy to see that axioms M1–M3 and M5 are satisfied in J . In particular. L. it would be more convenient to extract a module only once and reuse it for any version of the ontology P that reuses the specified symbols from Q. L according to Definition 12. which implies the concept inclusion α = (Cystic Fibrosis Genetic Disorder). thus P1 ∪ Q is a model S-conservative extension of P1 ∪ Q1 .t. Model Conservative Extensions) Let O and O1 ⊆ O be L-ontologies and S a signature over L such that O is a model S-conservative extension of O1 . In order to prove that O1 is a module for S in O w. we have that O1 is a module for O in O w. In this way. take any SHOIQ ontology O such that Sig(O) ∩ Sig(O ) ⊆ S. We say that O1 is a module for S in O (or an S-module in O ) w.

We say that α is essential for O in O w. the module Q1 consisting of the axiom M1. Let O and O be L-ontologies. M4}. one can consider several variations of tasks T3 and T4 for computing minimal modules. Ideally.r. & Sattler Indeed.Cuenca Grau. ♦ In our example above the axioms M1–M4 from Q are essential for the ontology P1 and for the signature S = {Cystic Fibrosis. thus they never need not be imported into P. but the axiom M5 is not essential.t. ♦ Note that a module for a signature S in Q does not necessarily contain all the axioms that contain symbols from S. then the required property holds. E1} as well as for the signature S = {Cystic Fibrosis. S a signature and α an axiom over L. Genetic Disorder} in Q. Hence it is reasonable to consider the problem of extracting minimal modules. M3. L (or S-essential in O ) if α is contained in some minimal module for S in O w. modules that contain no other module as a subset. Examples 11 and 14 demonstrate that a minimal module for an ontology or a signature is not necessarily unique: the ontology Q consisting of axioms M1–M5 has two minimal modules Q1 = {M1. M4}. L. we have that O ∪ O is a deductive S -conservative extension of O1 ∪ O w. since these are the minimal sets of axioms that imply axiom α = (Cystic Fibrosis Genetic Disorder).t. P3.r.t. as was required to prove ( ).r. that is. Genetic Disorder}.r. and Sig(O ) ∩ Sig(O) ⊆ S. Genetic Disorder}. M2 and M4 from Q does not contain axiom M5 which mentions the atomic concept Cystic Fibrosis from S. These arguments motivate the following notion: Definition 15 (Essential Axiom). L. Also note that even in a minimal module like Q1 there might still be some axioms like M2 that do not mention symbols from S at all. since S = Sig(O) ⊆ S . L if α is contained in any minimal module in O for O w.t. by Lemma 7. the extracted modules should be as small as possible. Example 14 Let Q and α be as in Example 11. We demonstrate that any subset Q1 of Q which implies α is a module for S = {Cystic Fibrosis. It is easy to see that if the model J is constructed from I as in Example 11. Axioms that do not occur in a minimal module in Q are not essential for P because they can always be removed from every module of Q. 3. We say that α is an essential axiom for S in O w. for every model I of Q1 there exists a model J of Q such that I|S = J |S . This is not true for the axioms that occur in minimal modules of Q. and Q2 = {M1. In certain situations one might be interested in computing the set of essential axioms of an ontology.5 Minimal Modules and Essential Axioms One is usually not interested in extracting arbitrary modules from a reused ontology. since O is a model S-conservative extension of O1 . we have that O ∪ O is a model S -conservative extension of O1 ∪ O for S = S ∪ Sig(O).t. M2. but in extracting modules that are easy to process afterwards.r. it is sufficient to demonstrate that Q is a model S-conservative extension of Q1 . which can be done by computing the union of all minimal modules. In particular. Depending on the application scenario. For example. L. Horrocks. According to Proposition 13. whereas in others any minimal module suffices. P2. In some applications it might be necessary to extract all minimal modules. Kazakov. for the ontology P1 = {P1. It might be necessary to import such axioms into P in order not to lose essential information from Q. Note that 286 . that is.

u][m] Input O. Further variants of the tasks T3 and T4 could be considered relevant to ontology reuse. instead of computing minimal modules. Ax. Sig. The ontology language OWL DL as well as all description 287 . O and L Extract modules for S in O w. L Extract modules in O w. the fragments of the DL SHOIQ. L O. S. and T3um of the task T3 and T4am. Sig is a function that assigns to every axiom α ∈ Ax a finite set Sig(α) ⊆ Sg called the signature of α. Ax is a set of axioms in L. The theoretical results that we will present in this paper can easily be extended to many such reasoning tasks. L O. however. however.6 Safety and Modules for General Ontology Languages All the notions introduced in Section 3 are defined with respect to an “ontology language”.s. a function that assigns to each formula its signature. and T4um of the task T4 for computation of minimal modules discussed in this section. For example. Our definitions apply to any ontology language with a notion of entailment of axioms from ontologies. a set of formulae (axioms) that can be constructed over those symbols.t. T4sm. or in any other complexity measure of the ontology.t. The notions considered in Section 3 can be applied. and a mechanism for identifying their signatures.Modular Reuse of Ontologies: Theory and Practice Notation T1 T2 T3[a. where Sg is a set of signature elements (or vocabulary) of L. L Task Check if O is safe for O w. O are ontologies and S a signature over L Table 2: Summary of reasoning tasks relevant for ontology integration and reuse computing the union of minimal modules might be easier than computing all the minimal modules since one does not need to identify which axiom belongs to which minimal module. T3sm. O . |=). ♦ Definition 16 provides a very general notion of an ontology language. or modules of the smallest size measured in the number of symbols. written O |= α. 3. We extend the function Sig to ontologies O as follows: Sig(O) := α∈O Sig(α).t.s. Definition 16. we have implicitly assumed that the ontology languages are the description logics defined in Section 2—that is.r. L Check if O is safe for S w.r. L Checking Safety: Extracting of [all / some / union of] [minimal] module(s): where O.u][m] T4[a. An ontology over L is a finite set of axioms O ⊆ Ax. L O . and |= is the entailment relation between sets of axioms O ⊆ Ax and axioms α ∈ Ax. and an entailment relation between sets of axioms. one might be interested in computing modules with the smallest number of axioms. O .r.t. S.r. An ontology language is a tuple L = (Sg. In Table 2 we have summarized the reasoning tasks we found to be potentially relevant for ontology reuse scenarios and have included the variants T3am. to a much broader class of ontology languages. So far. A language L is given by a set of symbols (a signature).

O is safe for S w. L iff (a) for every O with Sig(O ) ∩ Sig(O) ⊆ S. Then: 1. In the remainder of this section.r. ontologies over L.r. L.t.r.r.r.t.t. By Definition 10.r. By Definition 12. L. however. O . and let O.r. L iff (c) O ∪ O1 ∪ (O \ O1 ) = O ∪ O is a deductive conservative extension of O ∪ O1 w. By Definition 2.t. 1. L. Then: 1. with Sig(O \ O1 ) ∩ Sig(O) ⊆ S ∪ Sig(O1 ).t. L iff (c) for every O. By Definition 6. L iff the empty ontology O0 = ∅ is an S-module in O w. 2.t.t.t.r. Second Order Logic. L. O0 = ∅ is an S-module in O w. that O1 is a module for O in O w.r. To simplify the presentation.t. Modules for an Ontology) Let L be an ontology language. We also provide the analog of Proposition 17 for the notions of safety and modules for a signature: Proposition 18 (Safety vs. Proof. it is easy to see that (a) is equivalent to (b). By Definition 6. L. By Definition 2.t.t. the empty ontology O0 = ∅ is a module for O in O w. & Sattler logics defined in Section 2 are examples of ontology languages in accordance to Definition 16. L iff (b) for every O with Sig(O ) ∩ Sig(O) ⊆ S. L. it is the case that O0 = ∅ is a module for O in O w. Other examples of ontology languages are First Order Logic. O is safe for O w.r. O ∪ O is a deductive S-conservative extension of O ∪ O1 w.Cuenca Grau. Proposition 17 (Safety vs. L. O and O1 ⊆ O . Proof. it is the case that O is safe for O w. by Definition 10. such as Higher Order Logic.r.r. L for S = Sig(O). L. It is easy to see that the notions of deductive conservative extension (Definition 1). Kazakov. O \ O1 is safe for O ∪ O1 w.t.r. If O \ O1 is safe for O ∪ O1 then O1 is a module for O in O w.t. If O \ O1 is safe for S ∪ Sig(O1 ) w. then O1 is an S-module in O w.t. L. we establish the relationships between the different notions of safety and modules for arbitrary ontology languages.r.r. and S a subset of the signature of L. which implies. The definition of model conservative extension (Definition 3) and the propositions involving model conservative extensions (Propositions 4.r.t. 2. 1. It is easy to see that (a) is the same as (b).r.t. 2.r.r. are well-defined for every ontology language L given by Definition 16. L. Horrocks.t.t.t. Modules for a Signature) Let L be an ontology language. O \ O1 is safe for S ∪ Sig(O1 ) w.t. L iff the empty ontology is a module for O in O w. and assume that we deal with sublanguages of SHOIQ whenever semantics is taken into account.r. and O1 ⊆ O be ontologies over L.t. In particular. 2. 8. and Logic Programs. L iff (b) O ∪ O is a deductive S-conservative extension of O ∪ O0 = O w. O is safe for O w. O is safe for S w. L iff (a) O ∪ O is a deductive conservative extension of O w. By 288 . as well as all the reasoning tasks from Table 2. By Claim 1 of Proposition 17. we have that O \ O1 is safe for O w. we will not formulate general requirements on the semantics of ontology languages.r. and 13) can be also extended to other languages with a standard Tarski model-theoretic semantics. L for S = Sig(O). safety (Definitions 2 and 6) and modules (Definitions 10 and 12). L.

t.t.t. These results can be immediately applied to the notions of safety and modules for an ontology: Proposition 19 Given ontologies O and O over L. O0 = ∅ is a module for O1 in O \ O1 w. Since the notions of modules and safety are defined in Section 3 using the notion of deductive conservative extension. and undecidable for L = ALCIQO.t.r. L if O ∪ O is a deductive S-conservative extension of O ∪ O1 for S = Sig(O) w. L is EXPTIME-complete for L = EL.t. the problem of determining whether O1 is a module for O in O is EXPTIME-complete for L = EL. the problem has also been studied for simple DLs. L. 2007). Note that Sig(O \ O ) ∩ Sig(O) = Sig(O \ O ) ∩ (Sig(O) ∪ Sig(O )) ⊆ ˜ Let O 1 1 1 1 (Sig(O \ O1 ) ∩ Sig(O)) ∪ Sig(O1 ) ⊆ S ∪ Sig(O1 ). let O be such that Sig(O ) ∩ Sig(O) ⊆ S. L. Undecidability and Complexity Results In this section we study the computational properties of the tasks in Table 2 for ontology languages that correspond to fragments of the description logic SHOIQ. L iff O ∪ O is a deductive conservative extension of O w. Given ontologies O. Conversely. L. (2007).r.r. We demonstrate that most of these reasoning tasks are algorithmically unsolvable even for relatively inexpressive DLs. 289 . who showed that the problem is 2-EXPTIME-complete for L = ALCIQ and undecidable for L = ALCIQO. We need to demonstrate that O1 is a module for O in O w.t.t. L iff. Given O1 ⊆ O over a language L. any algorithm for checking deductive conservative extensions can be reused for checking safety and modules. Indeed. L is 2-EXPTIMEcomplete for L = ALC (Ghilardi et al.r. Recently. O is a deductive conservative extension of O1 ⊆ O w. O .r.r. L iff (d) for every O with Sig(O )∩Sig(O) ⊆ S. L.t. 2-EXPTIME complete for L = ALC and L = ALCIQ. an ontology O1 is a module for O in O w.r. L iff O\O1 is safe for O1 w. Hence.r. the problem of determining whether O is safe for O w. By Definition 2.r.t.t. an ontology O is safe for O w.t. Proof. 2006). it has been shown that deciding deductive conservative extensions is EXPTIME-complete for L = EL(Lutz & Wolter.t. The computational properties of conservative extensions have recently been studied in the context of description logics.r. the problem of deciding whether O is a deductive conservative extension of O1 w. Hence. by (c) we have that O \ O1 is safe ˜ for O = O ∪ O1 w. L ( ).t.r. it is reasonable to identify which (un)decidability and complexity results for conservative extensions are applicable to the reasoning tasks in Table 2.r. and O1 ⊆ O over L. 4. by Claim 1 of Proposition 17. This result was extended by Lutz et al. ( ) ˜ := O ∪ O .t. 2-EXPTIME-complete for L = ALC and L = ALCIQ. L .r.r. and undecidable for L = ALCIQO. In order to prove that (c) implies (d).Modular Reuse of Ontologies: Theory and Practice Definition 12.. and are computationally hard for simple DLs. By Definition 2. O1 is an S-module in O w. we demonstrate that any algorithm for checking safety or modules can be used for checking deductive conservative extensions. L which implies by Claim 2 of Proposition 17 that O1 is a module for O in O w. we have that O1 is a module for O in O w.

decidable) subset D ⊆ D of domino systems such that Dps ⊆ D ⊆ Ds .t.u]m from Table 2 for L = EL and L = ALCIQ that run in EXPTIME and 2-EXPTIME respectively.r.r. and T3[a. O and O1 ⊆ O .t.Cuenca Grau. We have demonstrated that the reasoning tasks T1 and T3[a. determines whether O1 is a module for O in O w. Suppose that we have a 2-EXPTIME algorithm that.r.t. given ontologies O and O . we focus on the computational properties of the reasoning tasks T2 and T4[a. as given in Definition 2. o a & Gurevich. one of them. Proof. L iff O0 = ∅ is the only minimal module for O in O w. L. 1997. In the remainder of this section. 2007).r.s.s. L = ALCO. It is well-known (B¨rger. based on a reduction to a domino tiling problem. Finally. H. V ) where T = {1.j+n = ti.j and ti. A solution for a domino system D is a mapping ti. T3sm and T3um is not easier than checking the safety of an ontology.j ∈ V .t.j+1 ∈ H and ti. Let D be the set of all domino systems. . and an ontology O = O(D) which consists of a single ALC-axiom such that: 290 . Indeed. Ds be the subset of D that admit a solution and Dps be the subset of Ds that admit a periodic solution. The proof is a variation of the construction for undecidability of deductive conservative extensions in ALCQIO (Lutz et al. by Claim 1 of Proposition 17.1.s. ti+1. & Sattler Corollary 20 There exist algorithms performing tasks T1. The task T1 corresponds directly to the problem of checking the safety of an ontology. there is no recursive (i. we construct a signature S = S(D).u]m from Table 2 for L = ALCIQO.t. . k} is a finite set of tiles and H.r.j for which there exist integers m ≥ 1 . Note that the empty ontology O0 = ∅ is a module for O in O w. such that ti. we prove that solving any of the reasoning tasks T3am.j = ti. T3sm and T3um for L = ALCIQ in 2-EXPTIME. one can enumerate all subsets of O and check in 2-EXPTIME which of these subsets are modules for O in O w.j that assigns to every pair of integers i. given ontologies O.t. Then we can determine which of these modules are minimal and return all of them. j ≥ 1 an element of T .u]m related to the notions of safety and modules for a signature. Theorem 21 (Undecidability of Safety for a Signature) The problem of checking whether an ontology O consisting of a single ALC-axiom is safe for a signature S is undecidable w. . L.r. Gr¨del. Horrocks. Indeed. . Proof. Kazakov. j ≥ 1. Theorem 3. that is.s. We demonstrate that this algorithm can be used to solve the reasoning tasks T3am. L.e. A domino system is a triple D = (T. or the union of them depending on the reasoning task.u]m are computationally unsolvable for DLs that are as expressive as ALCQIO. L = ALCIQ. n ≥ 1 called periods such that ti+m. We use this property in our reduction. There is no algorithm performing tasks T1. and are 2-EXPTIME-hard for ALC. We demonstrate that all of these reasoning tasks are undecidable for DLs that are as expressive as ALCO. ti. V ⊆ T × T are horizontal and vertical matching relations.7) that the sets D \ Ds and Dps are recursively inseparable. L iff O0 = ∅ is a module for O in O w.r.j . an ontology O is safe for O w. A periodic solution for a domino system D is a solution ti.j .j for every i..t. or T3[a. For each domino system D.

( ∃rV . ai. let S consist of fresh atomic concepts At for every t ∈ T and two atomic roles rH and rV . ti. t. b ∈ rH I .j ∈ rH I . we have Dps ⊆ D ⊆ Ds . since rV and rH commute in I. .j ∈ At I there exist b. L = ALCO. However. Now suppose ai.r.j . In other words.r. b ∈ rH I . c ∈ rV I . ai+1. and ti+1. for the set D consisting of the domino systems D such that O = O(D) is not safe for S = S(D) w. Since I is a model of axioms (q1 ) and (q2 ) from Figure 2.j ∈ ∆I such that ai.¬B) We say that rH and rV commute in an interpretation I = (∆I .j ∈ At I . t.j+1 := b. the value for 291 .j . c ∈ ∆I and t . d1 ∈ rV I . t ∈ H.j for D as follows. then D has a solution.j inductively together with elements ai.t ∈V where T = {1.1 to any element from ∆I .j := t. b. ai. (Ci Di )∈Otile (Ci ¬Di ) (∃rH . j ≥ 1.( Figure 2: An ontology Otile = Otile (D) expressing tiling conditions for a domino system D (a) if D does not have a solution then O = O(D) is safe for S = S(D) w.j can be assigned two times: ai+1. If Otile (D) has a model I in which rH and rV commute. namely (q1 ) and (q2 ) express that every domain element is assigned with a unique tile t ∈ T . Since I is a model of axioms (q3 ) and (q4 ) and ai. because otherwise one can use this problem for deciding membership in D . L = ALCO. Now let s be an atomic role and B an atomic concept such that s.j+1 := t . d2 ∈ rH I . L = ALCO. Let O := {β} / where: β := ∃s. The axioms of Otile express the tiling conditions for a domino system D.j+1 and ai. and c ∈ At I .t. ·I ) of Otile (D) can be used to guide the construction of a solution ti. Then we set ti. we construct ti.B ∃rV . c ∈ rV I . V ). H. Note that Sig(Otile ) = S. We set a1. this implies undecidability for D and hence for the problem of checking if O is S-safe w.r.j := c.∃rH .j . and (b) if D has a periodic solution then O = O(D) is not safe for S = S(D) w. Note that some values of ai.j+1 ∈ rV I and ai. The signature S = S(D).t. t ∈ V . Since D \ Ds and Dps are recursively inseparable.j .∃rV . d1 and d2 from ∆I such that a. The following claims can be easily proved: Claim 1. we have d1 = d2 . B ∈ S.j with i.t ∈H t.j and ti. j ≥ 1 is constructed.Modular Reuse of Ontologies: Theory and Practice (q1 ) (q2 ) (q3 ) (q4 ) At At At A1 At ··· ⊥ Ak t. b ∈ At I . Indeed a model I = (∆. t ∈ T such that ai. and the ontology O = O(D) are constructed as follows. For every i.j := t .j+1 are constructed for ai.t. k} 1≤t<t ≤k At ) At ) 1≤t≤k 1≤t≤k ∃rH . and c. Given a domino system D = (T. ·I ) if for all domain elements a. Consider an ontology Otile in Figure 2 constructed for D. c. ai+1.j+1 . a. L = ALCO. .t. In this case we assign ai. (q3 ) and (q4 ) express that every domain element has horizontal and vertical matching successors. we have a unique At with 1 ≤ t ≤ k such that ai.j+1 and ti+1.r. . . b.

In order to prove property (b). If I is a model of Otile ∪ O.∃rV . and so.∃rV . ·I ) is not a model of Otile then there exists an axiom (Ci Di ) ∈ Otile such that I |= (Ci Di ).B ∃rV .j } (p3 ) {ai. d1 and d2 from ∆I with a. By Proposition 8. ·I ) is a model of O I I = (∆ tile ∪ O. it is easy to see that Otile ∪ O |= ( ∃s. Finally. c.j1 } (p5 ) ∀rV . So. Note that a ∈ (∃rH . L = ALCO. we have a ∈ CiJ . hence. Since the interpretations of J the symbols in S have remained unchanged. and so. It is easy to see that ti. L. c ∈ r I . 1≤i≤m. then rH and rV do not commute in I. c. Since D has no solution. L. Let I be an arbitrary interpretation. d1 ∈ rV I . ·I ). L = ALCO.j+1 is unique.{ai2 . and c. we demonstrate that O = O(D) satisfies properties (a) and (b). and because of (q2 ). we have constructed a model J of O such that J |S = I|S .{ai. 1≤j≤n {ai. assume that D has a periodic solution ti.j and define O to be an extension of Otile with the following axioms: (p1 ) {ai1 .j } i2 = i1 + 1 mod m j2 = j1 + 1 mod n 292 . b.t. then by the contra-positive of Claim 1 either (1) I is not a model of Otile . For this purpose we construct an ALCO-ontology O with Sig(O) ∩ Sig(O ) ⊆ S such that O ∪ O |= ( ⊥). a | x ∈ ∆}.t. there exists a domain element a ∈ ∆I such that I but a ∈ D I .B ∃rV . b. j) with 1 ≤ i ≤ m and 1 ≤ j ≤ n we introduce a fresh individual ai. b ∈ rH I . Case (1).B)J and a ∈ (∃rV . a ∈ ¬Di . Let us define J to be identical to I except for the interpretation of the atomic role s and the atomic concept B. That is. then there exist a.¬B]). We interpret B in J as B J = {d1 }. if I . a ∈ s I .[∃rH . a | x ∈ ∆}.t.j1 } ∃rV . b ∈ r I . Let us define J to be identical to I except for the interpretation of the a ∈ Ci i atomic role s which we define in J as sJ = { x. b.[∃rH . d2 ∈ rH I . and thus. Suppose that rH and rV do not commute in I = (∆I .[Ci ¬Dj ]).Cuenca Grau.r.j }. ∀rH . but O |= ( ⊥). Indeed.∃rH .j } (p4 ) {ai. & Sattler ai+1. and.{ai. we have constructed a model J of O such that J |S = I|S .r.r.j2 } (p2 ) {ai1 . rh and rV do not commute in I.j is a solution for D. b. Hence. the value for ti+1. c ∈ rV I . Horrocks.{ai2 . Case (2). a. such that d1 = d2 .t. c.j with the periods m and n. Kazakov. d ∈ B I .j2 }.¬B]) which implies that J |= β. This will imply that O is not safe for O w.j } ∃rH .j with the periods m. d for every x ∈ ∆ 2 ∈ rH 1 1 ∈ rV V H d2 ∈ (¬B)I .r. we have J |= ( ∃s.∃rV . This implies that d1 = d2 . We interpret s in J as sJ = { x. We define O such that every model of O is a finite encoding of the periodic solution ti. is not safe for S w. For every pair (i. or (2) rH and rV do not commute in I. n ≥ 1. a. We demonstrate for both of these cases how to construct the required model J of O such that J |S = I|S .∃rH . We demonstrate that O is not S-safe w.j+1 is unique. Claim 2. This means that there exist domain elements a. this will imply that O is safe for S w. In order to prove property (a) we use the sufficient condition for safety given in Proposition 8 and demonstrate that if D has no solution then for every interpretation I there exists a model J of O such that J |S = I|S . This implies that J |= β. a. d1 and d2 such that x.∃rH .¬B)J since d1 = d2 . and I . d I . and so J |= ( ∃s. If I = (∆I .

or even for very expressive logics such as SHIQ. by Claim 1 of Proposition 18.r. since O contains Otile . safety may still be decidable for weaker description logics. Alternatively. so O ∪ O |= ( ⊥). On the other hand. or T4um for L is at least as hard as checking the safety of an ontology.r. Solving any of the reasoning tasks T4am.t. or T4[a. any sufficient condition for safety can be defined by giving. L = ALCO is undecidable.r. 5. Hence O |= ( ⊥). Proof.1 Safety Classes In general. The remainder of this paper focuses on the latter approach. then in every model of O ∪ O. This said.t. T4sm.s. since this task corresponds to the problem of checking safety for a signature. O is S-safe w.. an ontology O is S-safe w. the converse. By Claim 1 of Proposition 18. however. we could look for sufficient conditions for the notion of safety— that is.u]m for L = ALCO. it is worth noting that Theorem 21 still leaves room for investigating the former approach. a signature. does not necessarily hold.Modular Reuse of Ontologies: Theory and Practice The purpose of axioms (p1 )–(p5 ) is to ensure that rH and rV commute in every model of O . SHOIQ—ontologies. the set of ontologies over a language that satisfy the condition for that signature.t. L. These ontologies are then guaranteed to be safe for the signature under consideration. if an ontology satisfies our conditions. L iff O0 = ∅ is (the only minimal) S-module in O w. by Claim 2. we could focus on simple DLs for which this problem is decidable. This is only possible if O ∪ O have no models. It is easy to see that O has a model corresponding to every periodic solution for D with periods m and n.t. L if O0 = ∅ is an S-module in O w. A direct consequence of Theorem 21 and Proposition 18 is the undecidability of the problem of checking whether a subset of an ontology is a module for a signature: Corollary 22 Given a signature S and ALC-ontologies O and O1 ⊆ O . Sufficient Conditions for Safety Theorem 21 establishes the undecidability of checking whether an ontology expressed in OWL DL is safe w. This undecidability result is discouraging and leaves us with two alternatives: First. 2007) strongly indicate that checking safety is likely to be exponentially harder than reasoning and practical algorithms may be hard to design. 293 . in what follows we will focus on defining sufficient conditions for safety that we can use in practice and restrict ourselves to OWL DL—that is.t. Before we go any further. rH and rV do not commute. Corollary 23 There is no algorithm that can perform tasks T2. Theorem 21 directly implies that there is no algorithm for task T2. Proof. however. These intuitions lead to the notion of a safety class. L.r.r. 5. the problem of determining whether O1 is an S-module in O w.t. however. In the case of SHIQ. existing results (Lutz et al. for each signature S. then we can guarantee that it is safe. Hence any algorithm for recognizing modules for a signature in L can be used for checking if an ontology is safe for a signature in L.r. such as EL. since. Indeed.

t. then every subset of O (the union of O1 and O2 ) also satisfies this condition. it is compact if for every S1 . one way to prove that O is S-safe is to show that O is a model S-conservative extension of the empty ontology. ♦ Intuitively. then O can be proved to be safe w. A class O(·) is anti-monotonic if S1 ⊆ S2 implies O(S2 ) ⊆ O(S1 ). & Sattler Definition 24 (Class of Ontologies.r. Definition 25 (Class of Interpretations.g.o. we say that a set of interpretations I is local w. where S1 S2 is the symmetric difference of sets S1 and S2 defined by S1 S2 := S1 \ S2 ∪ S2 \ S1 . ♦ 294 . 5. The following definition formalizes the classes of ontologies. Given a SHOIQ signature S. Compactness means that it is sufficient to consider only common elements of Sig(O) and S for checking safety. Also.r. S for every S. S if for every SHOIQ-interpretation I there exists an interpretation J ∈ I such that I|S = J |S . A class of interpretations is a function I(·) that given a SHOIQ signature S returns a set of interpretations I(S). In Section 3. A safety class (also called a sufficient condition to safety) for an ontology language L is a class of ontologies O(·) for L such that for each S it is the case that (i) ∅ ∈ O(S). every subset of S. if I(·) is local then we say that O(·) is a class of local ontologies. we say that O and α are local (based on I(·)). and (ii) each ontology O ∈ O(S) is S-safe for L. a subset O(S) of ontologies in L.l. S2 and S such that (S1 S2 ) ∩ S = ∅ we have that I(S1 )|S = I(S2 )|S . Kazakov. it is local if I(S) is local w. and for every S and O ∈ O(S) and every α ∈ O.t. we will sometimes say that O passes the safety test for S.. it is monotonic if S1 ⊆ S2 implies I(S1 ) ⊆ I(S2 ). Safety Class). w. whenever an ontology O belongs to a safety class for a given signature and the safety class is clear from the context. Antimonotonicity intuitively means that if an ontology O can be proved to be safe w. and it is subset-closed if O1 ⊆ O2 and O2 ∈ O(S) implies O1 ∈ O(S). In what follows. O(S) is the set of ontologies that are valid in I(S). a class of ontologies is a collection of sets of ontologies parameterized by a signature. according to Proposition 8. Subset-closure (union closure) means that if O (O1 and O2 ) satisfy the sufficient condition for safety. Locality).r.Cuenca Grau. we say that O(·) is the class of ontologies O(·) based on I(·) if for every S. A class of ontologies for a language L is a function O(·) that assigns to every subset S of the signature of L. S using the sufficient condition. Horrocks.t. we have seen that. it is compact when O ∈ O(S ∩ Sig(O)) for each O ∈ O(S).2 Locality In this section we introduce a family of safety classes for L = SHOIQ that is based on the semantic properties underlying the notion of model conservative extensions.r. we assume that the the empty ontology ∅ belongs to every safety class for every signature. it is union-closed if O1 ∈ O(S) and O2 ∈ O(S) implies (O1 ∪ O2 ) ∈ O(S). called local ontologies. Given a class of interpretations I(·). for which safety can be proved using Proposition 8.t. Safety classes may admit many natural properties. as given in Definition 24. A safety class represents a sufficient condition for safety: each ontology in O(S) should be safe for S.

P4. hence for every O with Sig(O) ⊆ S we have that O is valid in I(S1 ) iff O is valid in I(S2 ). This has been done by showing that every S-interpretation I can be expanded to a model J of axioms P1–P4. Proof. Hence for every O ∈ I(S) and every SHOIQ interpretation I there is a model J ∈ I(S) such that J |S = I|S . Then the r←∅ r←∅ r←∅ class of local ontologies based on IA←∅ (·) is defined by O ∈ OA←∅ (S) iff O ⊆ AxA←∅ (S). AJ = ∅ for A ∈ S. we have O ∈ O(S) iff O is valid in I(S) iff J |= O for every interpretation J ∈ I(S). it is the case that J |= α. E1} is a deductive conservative extension of Q for S = {Cystic Fibrosis. which implies by Proposition 8 that O is safe for S w. r←∅ r←∅ Given a signature S. then J ∈ IA←∅ (S) and I|S = J |S . S2 and S such that (S1 S2 ) ∩ S = ∅ we have that I(S1 )|S = I(S2 )|S . O ∈ O(S1 ) iff O ∈ O(S2 ). r←∅ Corollary 28 The class of ontologies OA←∅ (S) defined in Example 26 is an anti-monotonic compact subset-closed and union-closed safety class. ·I ) and the interpretation is local for every S. since for every interpretation I = (∆ J = (∆J .r. In particular. . rJ = ∅ for r ∈ S.Modular Reuse of Ontologies: Theory and Practice r←∅ Example 26 Let IA←∅ (·) be a class of SHOIQ interpretations defined as follows. Hence O(·) is anti-monotonic. ·J ) defined by ∆J := ∆I . If I(·) is compact then for every S1 . Thus O(·) is a safety class. O ∈ O(S) iff O ∈ O(S ∩ Sig(O)) since (S (S ∩ Sig(O))) ∩ Sig(O) = (S \ Sig(O)) ∩ Sig(O) = ∅. since for r←∅ r←∅ every S1 and S2 the sets of interpretations IA←∅ (S1 ) and IA←∅ (S2 ) are defined differently only for elements in S1 S2 . and so O ∈ O(S2 ) implies that O is valid in I(S2 ) which implies that O is valid in I(S1 ) which implies that O ∈ O(S1 ). IA←∅ (·) is also compact. If I(·) is monotonic then I(S1 ) ⊆ I(S2 ) for every S1 ⊆ S2 . .r.t. The fact that O(·) is subset-closed and union-closed follows directly from Definition 25 since (O1 ∪ O2 ) ∈ O(S) iff (O1 ∪ O2 ) is valid in I(S) iff O1 and O2 are valid in I(S) iff O1 ∈ O(S) and O2 ∈ O(S). then O(·) is anti-monotonic. E1 by interpreting the symbols in Sig(P1 ) \ S as the empty set. and so. L = SHOIQ. we demonstrated that the ontology P1 ∪ Q given in Figure 1 with P1 = {P1. Since I(·) is a local class of interpretations.t. Hence. Example 29 Recall that in Example 5 from Section 3. It is easy to show that IA←∅ (S) / / I . the set AxA←∅ (S) of axioms that are local w. and X J := X I / / r←∅ r←∅ r←∅ for the remaining symbols X. Additionally. Assume that O(·) is a class of ontologies based on I(·). . In terms of Example 26 this means that 295 . S based on IA←∅ (S) r←∅ consists of all axioms α such for every J ∈ IA←∅ (S). Since IA←∅ (S1 ) ⊆ IA←∅ (S2 ) r←∅ r←∅ for every S1 ⊆ S2 . Genetic Disorder}. . it is the case that IA←∅ (·) is monotonic. Then O(·) is a subset-closed and union-closed safety class for L = SHOIQ. the set IA←∅ (S) consists of interpretations J such that rJ = ∅ for every atomic r←∅ role r ∈ S and AJ = ∅ for every atomic concept A ∈ S. if I(·) is monotonic. ♦ Proposition 27 (Locality Implies Safety) Let O(·) be a class of ontologies for SHOIQ based on a local class of interpretations I(·). O(·) is compact. Then by Definition 25 for every SHOIQ signature S. and if I(·) is compact then O(·) is also compact. we have that for every SHOIQ-interpretation I there exists J ∈ I(S) such that J |S = I|S . Given a r←∅ signature S.

S) = ⊥ ⊥ if Sig(R) S and otherwise = Funct(R). L = SHOIQ. S) = ⊥ if r ∈ S and otherwise = r(a. S). S) | τ (C1 C2 . S)). axiom α and ontology O let τ (C. The reason is that there is no elegant way to describe such interpretations. S).t. = ⊥ if Sig(R) S and otherwise = ( n R. 296 . S). If this property holds. Kazakov. R2 . and there is no “canonical” element of every domain to choose. S) | τ (¬C1 . τ (α. (a) (b) (c) (d) (e) (f ) (g) (h) (i) (j) (k) (l) (m) (n) τ (α. S) 2. S. & Sattler r←∅ r←∅ P1 ∈ OA←∅ (S). it is sufficient to interpret every atomic concept and atomic role not in S as the empty set and then check if α is satisfied in all interpretations of the remaining symbols.r. | τ (r(a. Horrocks. In the next section we discuss how to perform this test in practice. S) = (τ (C1 . concept C. S) τ (C2 . otherwise = ∃R1 . 5. S) be defined recursively as follows: τ (C. = ¬τ (C1 . S) and τ (O. / = {a}. by Proposition 27.2 r←∅ From the definition of AxA←∅ (S) given in Example 26 it is easy to see that an axiom α is r←∅ local w.. otherwise = (R1 R2 ). S) τ (C1 | τ (R1 = . S) | τ ({a}.t. S) ::= τ (α.C1 . S)). S) = a : τ (C. b). It turns out that this notion provides a powerful sufficiency test for safety which works surprisingly well for many real-world ontologies. This notion of locality is exactly the one we used in our previous work (Cuenca Grau et al. we refer to this safety class simply as locality.τ (C1 . S) = ⊥ ⊥ if r ∈ S and otherwise = Trans(r). = τ (C1 . Since OA←∅ (·) is a class of local ontologies.τ (C1 . Given a SHOIQ ontology O and a signature S it is sufficient to check whether r←∅ O ∈ OA←∅ (S). S). Note that for defining locality we do not fix the interpretation of the individuals outside S. / | τ (Funct(R). When ambiguity does not arise. whether every axiom α in O is satisfied by every interpretation from r←∅ IA←∅ (S).3 Testing Locality r←∅ In this section. S) = (⊥ ⊥) if Sig(R1 ) S. S) | τ ( n R. / | τ (Trans(r). S (based on IA←∅ (S)) if α is satisfied in every interpretation that fixes the interpretation of all atomic roles and concepts outside S to the empty set. S) ::= τ ( . | τ (a : C. ♦ Proposition 27 and Example 29 suggest a particular way of proving the safety of ontologies. Namely. we focus in more detail on the safety class OA←∅ (·). S) | τ (∃R. S) | τ (A.C 1 .r. = ⊥ if A ∈ S and otherwise = A. 2007). ⊥ if Sig(R2 ) S. the ontology P1 is safe for S w.Cuenca Grau. S) ::= C2 . as will be shown in Section 8. but in principle. S). This observation suggests the following test for locality: Proposition 30 (Testing Locality) Given a SHOIQ signature S. S) τ (C2 . introduced in Example 26. = ⊥ if Sig(R) S and otherwise = ∃R. α∈O τ (O. this could be done.t. every individual needs to be interpreted as an element of the domain. then O must be safe for S according to Proposition 27.r. b). that is. In order to test the locality of α w.

C). however.r. Example 31 Let O = {α} be the ontology consisting of axiom α = M2 from Figure 1. Trans(R∅ ). C1 C2 . In case reasoning is too costly. All atomic concepts A∅ ∈ S will be eliminated likewise.4 A Tractable Approximation to Locality One of the important conclusions of Proposition 30 is that one can use the standard capabilities of available DL-reasoners such as FaCT++ (Tsarkov & Horrocks.Modular Reuse of Ontologies: Theory and Practice r←∅ Then. and ( n R. in particular. several reasons to believe that the locality test would perform well in practice.t. S) is a tautology. Checking for tautologies in description logics is. Indeed. R R∅ . hence. S2 . a difficult problem (e. Proof. S1 ) = (⊥ ≡ Fibrosis ⊥) which is a SHOIQ-tautology. allow testing for DL-tautologies. O ∈ OA←∅ (S) iff every axiom in τ (O. Recall that the constructors ⊥. in order to check whether O is local w.t. / since τ (α. S) may still contain individuals that do not occur in S. or Funct(R∅ ).r.C and ( n R.t.r. Genetic Origin}.t.C) are assumed to be expressed using .3 It is also easy to show by induction that for every r←∅ interpretation I ∈ IA←∅ (S). S). There are. S1 = {Fibrosis. we have C I = (τ (C.C. ( n R∅ . it is possible to formulate a tractable approximation to the locality conditions for SHOIQ: 3. among other things. S) r←∅ for every Sig(α)-interpretation I ∈ IA←∅ (S) iff τ (α.⊥) which is not a SHOIQ tautology. but not local w.C). S iff I |= α for every interpretation I ∈ IA←∅ (S) iff I |= τ (α.r. hence O is not local w.C.Genetic Origin (7) Similarly. S1 it is sufficient to perform the following replacements in α (the symbols from S1 are underlined): ⊥ [by (b)] M2 Genetic Fibrosis ≡ Fibrosis ⊥ [by (f)] ∃has Origin. for the DL SHOIQ it is known to be NEXPTIME-complete. We demonstrate using Proposition 30 that O is local w. 2004) or KAON2 (Motik. all atomic concepts and roles that are not in S are eliminated by the transformation. SHOIQ.t. Hence O is local w. has Origin}. S2 ) = (Genetic Fibrosis ≡ ⊥ ∃has Origin. RACER (M¨ller & Haarslev. C1 C2 . and hence will be eliminated.r. in order to check whether O is local w. S)) ⊆ S. Hence r←∅ an axiom α is local w. In the second case τ (M2. theoretically. S) is a tautology. ∀R.t.t. The primary reason is that the sizes of the axioms which need to be tested for tautologies are usually relatively small compared to the sizes of ontologies. according to Proposition 30. S1 and hence by Proposition 8 is S1 -safe w. 2006). S))I and that I |= α iff I |= τ (α. 2003).r. 2000). we have A ∈ S and r ∈ S. modern DL reasoners are highly optimized for standard reasoning tasks and behave well for most realistic ontologies.t. every role R∅ with Sig(R∅ ) S occurs in O as either ∃R∅ . S). Pellet (Sirin & Parsia. S2 . It is easy to check that for every atomic concept A and atomic role r from τ (C. in other words. 2006) for testing o locality since these reasoners. Secondly. 297 .g. ♦ 5. ∃R. Note that it is not necessarily the case that Sig(τ (α.r.Genetic Origin In the first case we obtain τ (M2. S2 = {Genetic Fibrosis. Tobies. R∅ R.r. it is sufficient to perform the following replacements in α (the symbols from S2 are underlined): ⊥ [by (b)] M2 Genetic Fibrosis ≡ Fibrosis ⊥ [by (b)] (8) ∃has Origin.

where A∅ ∈ S is an atomic concept. R any role. Genetic Origin}. Genetic Disorder}. or (6) a : C ∆ . S. Hence the ontology P1 = {P1. P4. C | ∃R∅ . S = {Cystic Fibrosis. Kazakov. Since O ∈ OA←∅ (S) ˜ r←∅ ˜ r←∅ iff O ⊆ AxA←∅ (S).C ∅ | ( n R∅ .C) | ( n R. R∅ (possibly the inverse of) an atomic role r∅ ∈ S. OA←∅ (·) is a safety class by Proposition 33. Below we indicate sub-concepts in α from Con∅(S1 ): ∈ Con∅(S1 ) [matches A∅ ] M2 Genetic Fibrosis ≡ Fibrosis ∈ Con∅(S1 ) [matches ∃R∅ . and E1 from Figure 1 are syntactically local w.t. it can be shown that the safety class for SHOIQ based on syntactic locality enjoys all of the properties from Definition 24: ˜ r←∅ Proposition 34 The class of syntactically local ontologies OA←∅ (·) given in Definition 32 is an anti-monotonic.t.r. ♦ Intuitively. An axiom α is syntactically local w. S if O ⊆ AxA←∅ (S).r. syntactic locality provides a simple syntactic test to ensure that an axiom is r←∅ satisfied in every interpretation from IA←∅ (S). ·I ) from of Con r←∅ IA←∅ (S) it is the case that (C ∅ )I = ∅ and (C ∆ )I = ∆I for every C ∅ ∈ Con∅(S) and C ∆ ∈ Con∆(S). Hence. Con∆(S ) ⊆ Con∆(S ) be shown by induction. We denote ˜ r←∅ by AxA←∅ (S) the set of all SHOIQ-axioms that are syntactically local w.t. or (3) Funct(R∅ ). so OA←∅ (·) is compact. The following grammar recursively defines two sets of concepts Con∅(S) and Con∆(S) for a signature S: Con∅(S) ::= A∅ | ¬C ∆ | C Con∆(S) ::= ∆ | ¬C ∅ | C1 C∅ | C∅ ∆ C2 . and so we obtain the following conclusion: r←∅ ˜ r←∅ Proposition 33 AxA←∅ (S) ⊆ AxA←∅ (S). Further.r. S. & Sattler Definition 32 (Syntactic Locality for SHOIQ). subset-closed. we have that OA←∅ (·) is subset-closed and union-closed. by proving that Con 2 1 2 1 ˜ r←∅ ˜ r←∅ and AxA←∅ (S2 ) ⊆ AxA←∅ (S1 ) when S1 ⊆ S2 .r. ♦ 298 . and i ∈ {1. C ∅ ∈ Con∅(S).C | ∃R. Let S be a signature. and union-closed safety class.Genetic Origin ∈ Con∅(S1 ) [matches C C ∅] (9) It is easy to show in a similar way that axioms P1 —P4.t. E1} considered in Example 29 is syntactically local w. We denote by ˜ r←∅ OA←∅ (S) the set of all SHOIQ ontologies that are syntactically local w. or (5) C C ∆ . 2}.t. . Horrocks. . C(i) ∈ Con∆(S).C ∅ ) . S1 = {Fibrosis. compact. ˜ r←∅ A SHOIQ-ontology O is syntactically local w.r.t. S. ˜ r←∅ ˜ r←∅ Proof. Also one can show by induction that ˜ r←∅ ˜ r←∅ ˜ r←∅ ˜ r←∅ α ∈ AxA←∅ (S) iff α ∈ AxA←∅ (S ∩ Sig(α)). Anti-monotonicity for OA←∅ (·) can ∅(S ) ⊆ Con∅(S ). .C] ∃has Origin. C / / ∆ any concept.t. or (2) Trans(r∅ ). S if it is of one of the following forms: (1) R∅ R. Example 35 (Example 31 continued) It is easy to see that axiom M2 from Figure 1 is syntactically local w. It is easy to see from the inductive definitions ∅(S) and Con∆(S) in Definition 32 that for every interpretation I = (∆I .r.r. every syntactically local axiom is satisfied in every interpretation r←∅ I from IA←∅ (S). . or (4) C ∅ C.Cuenca Grau.

¬⊥ ∃r.Modular Reuse of Ontologies: Theory and Practice r←∗ IA←∗ (S) r←∅ IA←∅ (S) r←∆×∆ IA←∅ (S) r←id IA←∅ (S) r. As a result. and the interpretation of atomic concepts outside S to either the empty set ∅ or the set ∆ of all elements. the universal relation ∆ × ∆.r. since τ (α. S for which locality conditions assuming. A ∈ S : rJ ∅ AJ ∆J ∆J ∆J ∆J × ∆J { x. and whose running time is polynomial in |O| + |S|. For example.¬⊥) is a tautology. or the identity relation id on ∆. The locality condition based on IA←∅ (S) captures the domain axiom P4.r. and the functionality axiom P7 but 4. determines whether O ∈ OA←∅ (S).4 5. It is reasonable to assume.¬A ∃r. that tautological axioms do not occur often in realistic ontologies. x | x ∈ ∆J } Table 3: Examples for Different Local Classes of Interpretations The converse of Proposition 33 does not hold in general since there are semantically local axioms that are not syntactically local. In Table 4 we have listed all of these classes together with examples of typical types of axioms used in ontologies. the disjointness axiom P6. These axioms are assumed to be an extension of our project ontology from Figure 1.t. In Table 3 we have listed several such classes of local interpretations where we fix the interpretation of atomic roles outside S to be either the empty set ∅. S = {A. which is not a tautology. x | x ∈ ∆J } ∆J × ∆J { x. We indicate which axioms are local w. but not syntactically local. the axiom α = (A A B) is local w.r. These examples show that the limitation of our syntactic notion of locality is its inability to “compare” different occurrences of concepts from the given signature S. 299 . Proposition 36 There exists an algorithm that given a SHOIQ ontology O and a sig˜ r←∅ nature S. every S since it is a tautology (and hence true in every interpretation). A ∈ S : rJ ∅ AJ ∅ ∅ ∅ r←∗ IA←∗ (S) r←∅ IA←∆ (S) r←∆×∆ IA←∆ (S) r←id IA←∆ (S) r. it is easy to see that α is not syntactically local w. definition P5. syntactic locality does not detect all tautological axioms. syntactic locality checking can be performed in polynomial time by matching an axiom according to Definition 32. It can be seen from Table 4 that different types of locality conditions are appropriate r←∅ for different types of axioms. however. is a GCI α = (∃r. Each local class of interpretations in Table 3 defines a corresponding class of local ontologies analogously to Example 26. B} according to Definition 32 since it involves symbols in S only.5 Other Locality Classes r←∅ The locality condition given in Example 26 based on a class of local interpretations IA←∅ (·) is just a particular example of locality which can be used for testing safety. as usual. S = {r}. Furthermore. where |O| and |S| are the number of symbols occurring in O and S respectively. Other classes of local interpretations can be constructed in a similar way by fixing the interpretations of elements outside S to different values. that the symbols from S are underlined. Another example. S) = (∃r.r. We assume that the numbers in number restrictions are written down using binary coding.¬B). The axiom α is semantically local w.t.t. On the other hand.t.

Note that the modeling error E2 is not local for any of the given locality conditions.Cystic Fibrosis Table 4: A Comparison between Different Types of Locality Conditions neither of the assertions P8 or P9. since it is not clear how to eliminate the universal roles and identity roles from the axioms and preserve validity in the respective classes of interpretations. It might be possible to come up with algorithms for testing locality conditions for the classes of interpretation in Table 3 similar to the ones presented in Proposition 30. & Sattler α Axiom Project ? α ∈ Ax r←∅ A←∅ r←∆×∆ A←∅ r←id A←∅ r←∅ A←∆ r←∆×∆ A←∆ r←id A←∆ P4 ∃has Focus. For example.t. which is not necessarily functional. S) = if A ∈ S and otherwise = A / (a ) r←∆×∆ r←id For the remaining classes of interpretations. S. For r←∅ example. one can use locality based on IA←∅ (S) or IA←∆ (S). Horrocks. Kazakov. and so cannot be local w. Still.Cuenca Grau. that is for IA←∗ (S) and IA←∗ (S).r. Note also that it is not possible to come up with a locality condition that captures all the axioms P4–P9.t. since the individuals Human Genome and Gene prevent us from interpreting the atomic role has Focus and atomic concept Project with the empty r←∆×∆ set.Cystic Fibrosis ∃has Focus.4. S. The locality condition based on IA←∆ (S).r. checking locality. The idea is the same as that used in Definition 32. locality based on the class IA←∆ (S) can be tested as in Proposition 30. In order to capture such functionality r←id r←id axioms. it is easy to come up with tractable syntactic approximations for all the locality conditions considered in this section in a similar manner to what has been done in Section 5.Bio Medicine Bio Medicine ⊥ P6 Project P7 Funct(has Focus) P8 Human Genome : Project P9 has Focus(Human Genome. where every atomic role outside S is interpreted with the identity relation id on the interpretation domain. Hence. where the case (a) of the definition for τ (C. every subset of P containing P6 and P8 is not safe w. Gene) E2 ∀has Focus. can capture such assertions but are generally poor for other types of axioms. is not that straightforward. S) is replaced with the following: τ (A. since in P6 and P8 together imply axiom β = ({Human Genome} Bio Medicine ⊥) which uses the symbols from S only. the functionality axiom P7 is not captured by this locality condition since the atomic role has Focus is interpreted as the universal relation ∆×∆. where the atomic roles and concepts outside S are interpreted as the largest possible sets. P5                                           BioMedical Project ≡ Project ∃has Focus. however. namely to define two sets Con∅(S) and Con∆(S) of concepts for a signature S which are interpreted as the empty 300 .

this gives a more powerful sufficient condition for safety. Unfortunately the unionclosure property for safety classes is not preserved under unions.C ∅ n R∅ . m ≥ 2 .C | | ( m Rid . Formally.5. Where: A∅ .5.6 Combining and Extending Safety Classes In the previous section we gave examples of several safety classes based on different local classes of interpretations and demonstrated that different classes are suitable for different types of axioms. for example. Sig(R∅ ).Modular Reuse of Ontologies: Theory and Practice Con∅(S) ::= ¬C ∆ | C | ∃R. Sig(Rid ) S. C(i) ∈ Con∆(S).C ∆ | ( 1 Rid . 5. as seen in Example 37. Obviously. the ontology O1 consisting of axioms P4–P7 from Table 4 satisfies the first locality condition. one may try to apply different sufficient tests and check if any of them succeeds. C|C C∆ | a : C∆ ˜ r←∗ AxA←∗ (S) ::= C ∅ r←∅ for IA←∗ (·): r←∆×∆ for IA←∗ (·): r←id for IA←∗ (·): | R∅ |R R | Trans(r∅ ) | Funct(R∅ ) R∆×∆ | Trans(r∆×∆ ) | r∆×∆ (a.C C Con∆(S) ::= r←∗ for IA←∆ (·): r←∆×∆ for IA←∗ (·): r←id for IA←∗ (·): ∆ | ¬C ∅ | C1 ∆ C2 | A∆ | ∃R∆×∆ . the union of such safety classes might be no longer unionclosed. ♦ As shown in Proposition 33. rid ∈ S. In order to check safety of ontologies in practice. their union (O1 ∪ O2 )(·) is a class of ontologies defined by (O1 ∪O2 )(S) = O1 (S)∪O2 (S).C ∅ | r←∗ for IA←∅ (·): r←∅ for IA←∗ (·): r←id for IA←∗ (·): C∅ | C∅ n R. Moreover. b) | Trans(rid ) | Funct(Rid ) Figure 3: Syntactic Locality Conditions for the Classes of Interpretations in Table 3 set and. Sig(R∆×∆ ). r∅ . C is any concept. but their union O1 ∪O2 satisfies neither the first nor the second locality condition and. It is easy to see by Definition 24 that if both O1 (·) and O2 (·) are safety classes then their union (O1 ∪ O2 )(·) is a safety class. as ∆I in every interpretation I from the class and see in which situations DL-constructors produce the elements from these sets.C). which can be seen as the union of the safety classes used in the tests. however. is not even safe for S as we have shown in Section 5. given two classes of ontologies O1 (·) and O2 (·). In Figure 3 we gave recursive r←∗ ˜ r←∗ definitions for syntactically local axioms AxA←∗ (S) that correspond to classes IA←∗ (S) of interpretations from Table 3. subset-closed as well. then the union is anti-monotonic. R is any role | A∅ | ∃R∅ . the ontology O2 consisting of axioms P8–P9 satisfies the second locality condition. every locality condition gives a union-closed safety class. or respectively. One may wonder if locality classes already provide the most powerful sufficient 301 . This safety class is not union-closed since. ∆ C ∅ ∈ Con∅(S). respectively. A∆ . as demonstrated in the following example: r←∆×∆ r←∅ Example 37 Consider the union (OA←∅ ∪ OA←∆ )(·) of two classes of local ontologies r←∆×∆ r←∅ OA←∅ (·) and OA←∆ (·) defined in Section 5. in fact. where some cases in the recursive definitions are present only for the indicated classes of interpretations.C ∆ ) .C ∆ | ( n R∆×∆ . if each of the safety classes is also anti-monotonic or subset-closed.C ∆ ) | ∃Rid . r∆×∆ .

() r←∅ Intuitively.r. Q is an ontology which implies nothing but tautologies and uses all the atomic concepts and roles from S.5 are maximal union-closed safety classes for L = SHIQ. and (iii) for every P ∈ O(S). it is not clear how to force interpretations of roles to be the universal or identity relation using SHOIQ axioms.t. r ∈ S. First. ⊥ for all A. and modifying τ (·.5. Note also that the proof of Proposition 39 does not work in the presence of nominals.5. Hence there is probably room to extend the locality classes OA←∅ (·) and OA←∆ (·) for L = SHOIQ while preserving union-closure. 297).5. We claim that O ∪ P ∪ Q is not a deductive Sig(Q)-conservative extension of Q. since α ∈ AxA←∅ (S). By Proposition 30. then there exists a signature S and an ontology O over L. There are some difficulties in extending the proof of Proposition 39 to other locality classes considered in Section 5. r←∅ r←∅ Since O ∈ OA←∅ (S). Horrocks. since O |= α and O ∪ P ∪ Q r←∅ has only models from IA←∅ (S). ·) is defined as in Proposition 30. & Sattler conditions for safety that satisfy all the desirable properties from Definition 24. ⊥ for every atomic r←∅ concept A and atomic role r from Sig(O) \ S. it is the case that O ∪ P ∪ Q |= β. by Definition 1. Hence. S in L. Second. ·) as has been discussed in Section 5. since this will not guarantee that β = τ (α. (ii) O is safe w. the ontology P is chosen in such a way that P ∈ OA←∅ (S) and O ∪ P ∪ Q r←∅ has only models from IA←∅ (S). We define ontologies P and Q as follows. ·) in these cases (see a related discussion in Section 5.r. it is the case that O ∪ P is safe w. that is. O ∪ P ∪ Q is not a deductive Sig(Q)-conservative extension of Q ( ). Proof. if a safety class O(·) is not maximal union-closed for a language L. It is easy to see that P ∈ OA←∅ (S). Let / / β := τ (α. A safety class O2 (·) extends a safety class O1 (·) if O1 (S) ⊆ O2 (S) for every S. such that (i) O ∈ / O(S). Note that Sig(O ∪ P) ∩ Sig(Q) ⊆ S. ♦ r←∅ r←∅ Proposition 39 The classes of local ontologies OA←∅ (·) and OA←∆ (·) defined in Section 5. 302 . S in L.5). Surprisingly this is the case to a certain extent for some locality classes considered in Section 5. Definition 38 (Maximal Union-Closed Safety Class). According to Definition 38. Now. Kazakov. r←∅ For O(·) = OA←∆ (·) the proof can be repeated by taking P to consist of axioms A and ∃r. Let O be an ontology over L that satisfies conditions (i)–(iii) above.Cuenca Grau. for every ontology Q over L with Sig(Q) ∩ Sig(O ∪ P) ⊆ S it is the case that O ∪ P ∪ Q is a deductive conservative extension of Q. We demonstrate that r←∅ r←∅ this is not possible for O(·) = OA←∅ (·) or O(·) = OA←∆ (·) r←∅ We first consider the case O(·) = OA←∅ (·) and then show how to modify the proof for r←∅ the case O(·) = OA←∆ (·).t. / β is not a tautology. it is not clear how to define the function τ (·. since β r←∅ does not contain individuals. for every A. S) where τ (·. As has been shown in the proof of r←∅ this proposition. r ∈ Sig(O) \ S. A safety class O1 (·) is maximal unionclosed for a language L if O1 (·) is union-closed and for every union-closed safety class O2 (·) that extends O1 (·) and every O over L we have that O ∈ O2 (S) implies O ∈ O1 (S). we have that Sig(β) ⊆ S = Sig(Q) and. there exists an axiom α ∈ O such that α ∈ AxA←∅ (S). S) contains symbols from S only (see Footnote 3 on r←∅ r←∅ p. Take P to consist of the axioms A ⊥ and ∃r. I |= α iff I |= β for every I ∈ IA←∅ (S). thus Q |= β. Take Q to consist of all tautologies of form ⊥ A and ⊥ ∃r.

rule R1 moves α into Os provided Os remains safe w. I3 establishes that the rewrite rules either add elements to Om . a subset O1 of O is an S-module in O provided that O \ O1 is safe for S ∪ Sig(O1 ). O(·). Given an axiom α from Ou . the set Om is an O(·)-based module in O. The rewrite rules R1 and R2 preserve invariants I1–I3 given in Figure 4. contains the unprocessed axioms. however. Therefore. These axioms are distributed among Om and Os according to rules R1 and R2. or they add elements to Os without changing Om . it is sufficient to enumerate all possible subsets of the ontology and check if any of these subsets is a module based on O(·). namely O itself. Otherwise. Ou and Os form a partitioning of O. Let L be an ontology language and O(·) be a safety class for L. S ∪ Sig(Om ) according to the safety class O(·). more precisely. which is initialized to O. Ou and Os . ♦ It is clear that. however. Invariant I1 states that the three sets Om . finally. by Definition 24. any procedure for checking membership of a safety class O(·) can be used directly for checking whether Om is a module based on O(·). In order to extract an O(·)-module. The set Om accumulates the axioms of the extracted module.t. there exists at least one O(·)-based S-module in O.t.t.r. there exists no general procedure that can recognize or extract all the (minimal) modules for a signature in an ontology in finite time.r. the pair (|Om |. we say that Om ⊆ O is an O(·)-based S-module in O if O \ Om ∈ O(S ∪ Sig(Om )).r. I2 states that the set Os satisfies the safety test for S ∪ Sig(Om ) w. it is sufficient to show that O \ O1 ∈ O(S ∪ Sig(O1 )). Figure 4 presents an optimized version of the module-extraction algorithm. The set Ou . Note also that it follows from Definition 40 that Om is an O(·)-based S-module in O iff Om is an O(·)-based S -module in O for every S with S ⊆ S ⊆ (S ∪ Sig(Om )). At the end of this process. which represent a partitioning of the ontology O into three disjoint subsets Om . the set Os is intended to be safe w. when no axioms are left in in Ou . according to Definition 40. Definition 40 (Modules Based on Safety Class).r. 303 .Modular Reuse of Ontologies: Theory and Practice 6. rule R2 moves α into Om and moves all axioms from Os back into Ou . The procedure manipulates configurations of the form Om | Ou | Os . in order to prove that O1 is a S-module in O. |Os |) consisting of the sizes for these sets increases in lexicographical order. the empty ontology ∅ = O \ O also belongs to O(S) for every O(·) and S. The techniques described in Section 5. in other words. Given an ontology O and a signature S over L. S ∪ Sig(Om ). As shown in Corollary 23 in Section 4. any safety class O(·) can provide a sufficient condition for testing modules—that is. A notion of modules based on this property can be defined as follows.t. Extracting Modules Using Safety Classes In this section we revisit the problem of extracting modules from ontologies. indeed. since Sig(Om ) might expand and axioms from Os might become no longer safe w. ♦ Remark 41 Note that for every safety class O(·). Proposition 18 establishes the relationship between the notions of safety and module. ontology O and signature S. can be reused for extracting particular families of modules that satisfy certain sufficient conditions. In practice. it is possible to avoid checking all the possible subsets of the input ontology. S ∪ Sig(Om ).

For the rewrite rule R2 we have: Om | Ou ∪ {α} | Os =⇒ Om ∪ {α} | Ou ∪ Os | ∅ if (Os ∪ {α}) ∈ O(S ∪ Sig(Om )). O(·) is anti-monotonic. in addition. by Definition 40. a signature S. Ou and Os form a partitioning of O (see invariant I1 in Figure 4). by invariant I1 from Figure 4. O(·) terminates and returns an O(·)based S-module Om in O. (|Om |. Om | Ou ∪ {α} | Os =⇒ Om ∪ {α} | Ou ∪ Os | ∅ Invariants for Om | Ou | Os : I1. and hence the procedure always terminates with Ou = ∅. subset-closed. O = Om Ou Os if (Os ∪ {α}) ∈ O(S ∪ Sig(Om )) if (Os ∪ {α}) ∈ O(S ∪ Sig(Om )) Invariant for Om | Ou | Os =⇒ Om | Ou | Os : I3. Proof. Kazakov. O(·) is an anti-monotonic. The rewrite rule R1 does not change the set Om . additionally. and unionclosed safety class. |Os |) <lex (|Om |. (|Om |. Om | Ou ∪ {α} | Os =⇒ Om | Ou | Os ∪ {α} R2.Cuenca Grau. (2) Now. |Os |) I2. Horrocks. |Os |) increases in lexicographical order (see invariant I3 in Figure 4). This will prove that the module computed by the procedure is a subset of every O(·)-based S-module in O. O an ontology over L. the sets Om . is the smallest O(·)-based S-module in O. ↑ ↑ Initial Configuration = ∅|O|∅ Termination Condition: Ou = ∅ module safe Rewrite rules: R1. O is partitioned into Om and Os and by invariant I2. Indeed. () 304 . Upon termination. it is the case that Om ⊆ Om . (1) The procedure based on the rewrite rules from Figure 4 always terminates for the following reasons: (i) for every configuration derived by the rewrite rules. that Om is an O(·)-based S-module in O. We demonstrate by induction that for every configuration Om | Ou | Os derivable from ∅ | O | ∅ by rewrite rules R1 and R2. and (2) If. and S a signature over L. Os ∈ O(S ∪ Sig(Om )). and suppose that Om is an O(·)-based S-module in O. which implies. if Ou = ∅ then it is always possible to apply one of the rewrite rules R1 or R2. a safety class O(·) a module Om in O based on O(·) unprocessed ↓ Configuration: Om | Ou | Os . subset-closed and union-closed. for the base case we have Om = ∅ ⊆ Om . Os ∈ O(S ∪ Sig(Om )) Figure 4: A Procedure for Computing Modules Based on a Safety Class O(·) Proposition 42 (Correctness of the Procedure from Figure 4) Let O(·) be a safety class for an ontology language L. Then: (1) The procedure in Figure 4 with input O. and hence. & Sattler Input: Output: an ontology O. S. and the procedure returns precisely this minimal module. suppose that. Additionally. then there is a unique minimal O(·)-based S-module in O. and therefore the size of every set is bounded. (ii) at each rewrite step.

whether α is local for IA←∅ (·). Since O(·) is subset-closed. Os ∪ {α} ∈ O(S ∪ Sig(Om )). Then α ∈ Os := O \ Om .2.Modular Reuse of Ontologies: Theory and Practice Input: an ontology O. instead of checking whether (Os ∪ {α}) ∈ O(S ∪ Sig(Om )) in the conditions of rules R1 and R2. it is possible to show that this procedure runs in polynomial time assuming that the safety test can be also performed in polynomial time. that Om ⊆ Om but Om ∪ {α} Om . the procedure. If the safety class O(·) satisfies additional desirable properties. to the contrary. Genetic Disorder} r←∅ and safety class O(·) = OA←∅ (·) defined in Example 26. in fact. like those based on classes of local interpretations described in Section 5. as stated in claim (2) of Proposition 42. Os ∈ O(S ∪ Sig(Om )) and O(·) is union-closed. Since Om is an O(·)-based S-module in O. we have that Os ∈ O(S ∪ Sig(Om )). In Figure 5 we present an optimized version of the algorithm from Figure 4 for such locality classes. we have {α} ∈ O(S ∪ Sig(Om )). If O(·) is union closed. it is possible to optimize the procedure shown in Figure 4. Claim (1) of Proposition 42 establishes that the procedure from Figure 4 terminates for every input and produces a module based on a given safety class. If O(·) is compact and subset closed. a safety class O(·) Output: a module Om in O based on O(·) Initial Configuration = Rewrite rules: R1. since the set of remaining axioms will stay in O(S ∪ Sig(Om )). In the last column we indicate whether the first conditions of the rules R1 r←∅ and R2 are fulfilled for the selected axiom α from Ou —that is. {α} ∈ O(S ∪ Sig(Om )). in each row. The second column of the table shows the elements of S ∪ Sig(Om ) that have appeared for the current configuration but have not been present for the preceding configurations. it is sufficient to move only those axioms Os that contain at least one symbol from α which did not occur in Om before. The first column of the table lists the configurations obtained from the initial configuration ∅ | O | ∅ by applying the rewrite rules R1 and R2 from Figure 5. the underlined axiom α is the one that is being tested for safety. then instead of moving all the α axioms in Os to Ou in rule R2. it is sufficient to check if {α} ∈ O(S ∪ Sig(Om )) since it is already known that Os ∈ O(S ∪ Sig(Om )). a signature S. produces the smallest possible module based on the safety class. 305 . This contradiction implies that the rule R2 also preserves property Om ⊆ Om . signature S = {Cystic Fibrosis. Example 43 In Table 5 we present a trace of the algorithm from Figure 5 for the ontology O consisting of axioms M1–M5 from Figure 1. The rewrite rule corresponding to the result of this test is applied to the configuration. Moreover. Since by invariant I2 from Figure 4. Om | Ou ∪ {α} | Os ∪ Os =⇒ Om ∪ {α} | Ou ∪ Os | Os if {α} ∈ O(S ∪ Sig(Om )). Since Om ⊆ Om and O(·) is anti-monotonic. In this case. then. and Sig(Os ) ∩ Sig(α) ⊆ Sig(Om ) ∅|O|∅ Termination Condition: Ou = ∅ Figure 5: An Optimized Procedure for Computing Modules Based on a Compact SubsetClosed Union-Closed Safety Class O(·) Suppose. Om | Ou ∪ {α} | Os =⇒ Om | Ou | Os ∪ {α} if {α} ∈ O(S ∪ Sig(Om )) α α R2. which contradicts ( ).

M4. the implicit non-determinism in the procedure from Figure 5 does not have any impact on the result of the procedure. M3. in our case. M4. M4 | M2. the rewrite procedure produces a module Om consisting of axioms M1– M4. and α = M3 in configurations 2 and 4. M2. Om contains all the S-essential axioms in O. Lemma 44 Let O(·) be an anti-monotonic subset-closed safety class for an ontology language L.r. M5 | − Cystic Fibrosis. L. the axiom α = M2 is tested for safety in both configurations 1 and 7. It is also easy to see that. In other words. M4 | M2 | M5 M1. which is the smallest OA←∅ (·)-based S-module in O. According to Claim (2) of Proposition 42 all such r←∅ computations should produce the same module Om . M5 | − M1. M5 | M2. M3 | M2. Note that it is possible to apply the rewrite rules for different choices of the axiom α in Ou . M4. which results in a different computation. Genetic Origin Genetic Fibrosis − − − {α} ∈ O(S ∪ Sig(Om ))? Yes Yes No No No Yes No ⇒ ⇒ ⇒ ⇒ ⇒ ⇒ ⇒ R1 R1 R2 R2 R2 R1 R2 1 − | M1. ♦ It is worth examining the connection between the S-modules in an ontology O based on a particular safety class O(·) and the actual minimal S-modules in O. It turns out that any O(·)-based module Om is guaranteed to cover the set of minimal modules. alternative choices for α may result in shorter computations: in our example we could have selected axiom M1 in the first configuration instead of M2 which would have led to a shorter trace consisting of configurations 1. M2. L and Om an O(·)-based S-module in O. that is. M3. The following Lemma provides the main technical argument underlying this result. Let O1 be an S-module in O w. M5} from Figure 1 and S = {Cystic Fibrosis. Horrocks. Genetic Disorder Table 5: A trace of the Procedure in Figure 5 for the input Q = {M1. M3. given O and S. Kazakov. because the set Ou may increase after applications of rule R2. . M3 M1 | M2.t. M3. . provided that O(·) is anti-monotonic and subset-closed.r. for example. M5 | − M1. M5 | − M1. and 4–8 only. M5 | M2 − | M1.Cuenca Grau. Genetic Disorder} Note that some axioms α are tested for safety several times for different configurations. M3. However. and S a signature over L. . the procedure from Figure 5 has implicit non-determinism. M4. but became non-local when new axioms were added to Om .r.t. In our example. & Sattler Om | Ou . M3. . Pancreas. Note also that different results for the locality tests are obtained in these cases: both M2 and M3 were local w. syntactic locality produces the same results for the tests. Then O2 := O1 ∩ Om is an S-module in O w.t. In other words. 306 . has Origin. located In. M4. O an ontology. M4 | − | M5 New elements in S ∪ Sig(Om ) − − Fibrosis. S ∪ Sig(Om ) when Om = ∅. α | Os 2 3 4 5 6 7 8 − | M1.

in order to test subsumption between a pair of atomic concepts. as given by the following proposition: Proposition 46 Let O(·) be a compact union-closed safety class for an ontology language L. hence. by Definition 40. L. the extracted module contains only essential axioms. since O |= (A B). leads to smaller modules. it is the case that O |= α implies OA |= α. hence (O \ OA ) ∪ {B ⊥} = O \ OA . L. Since O(·) is anti-monotonic. O1 ∩ Om is an S-module in O w. by Definition 40. An interesting application of modules is the pruning of irrelevant axioms when checking if an axiom α is implied by an ontology O.4. L.r. Indeed. in order to check whether O |= α it suffices to retrieve the module for Sig(α) and verify if the implication holds w. this module. (b) Consider O = O ∪{B ⊥}.t. In some cases.r. we have that OA |= (A ⊥) which implies OA |= (A B). In particular. 307 . by Lemma 44. Hence Om is a superset of every minimal S-module in O and hence. L.t.r. since Om is an O(·)-based S-module in O. by Definition 12 and Definition 1. and O2 ⊆ Om . Indeed. in general. in accordance to Corollary 45. it is the case that (O \ OA ) ∪ {B ⊥} ∈ O(S ∪ Sig(OA )).r. / (a) By Remark 41 it is the case that OA is an O(·)-based module in O for S = {A. contains all S-essential axioms in O w.r. O2 is an O(·)-based S-module in O1 .t. Let OA be an O(·)-based module in O for S = {A} in O.r. Note that B ⊥ ∈ OA . Proof. since OA is a module in O for S = {A}. and. L.t. Then: 1 If O |= α := (A 2 If O |= α := (B B) and {B A) and { ⊥} ∈ O(∅) then OA |= α. by Definition 24 it is the case that {B ⊥} ∈ O(S ∪ Sig(OA )).t. however. Let O1 be a minimal S-module in O w. in general. it is the case that O |= (A ⊥). it is the case that O1 \ O2 ∈ O(S ∪ Sig(Om )). we have that O1 \ O2 ∈ O(S ∪ Sig(O2 )). L which is strictly contained in O1 . By Definition 40. we have that OA is an O(·)-based S-module in O . Since Sig(α) ⊆ S.Modular Reuse of Ontologies: Theory and Practice Proof. Consider two cases: (a) B ∈ Sig(OA ) and (b) B ∈ Sig(OA ). {B ⊥} ∈ / O(∅) and O(·) is compact. Now. Since O1 is an S-module in O w.r. if the safety class being used enjoys some nice properties then it suffices to extract a module for one of them. Then Om contains all S-essential axioms in O w. O an ontology and A. Proof. We demonstrate that O1 ⊆ Om . all the axioms M1–M4 are essential for the ontology O and signature S considered in Example 43. then O \ OA ∈ O(S ∪ Sig(OA )). O2 is an S-module in O w. since B ∈ Sig(OA ). Corollary 45 Let O(·) be an anti-monotonic. otherwise. Since OA is an O(·)-based S-module in O. and hence.t. we have that O \ Om ∈ O(S ∪ Sig(Om )). Since O(·) is union-closed. We have seen in this example that the locality-based S-module extracted from O contains all of these axioms. Since Sig(B ⊥) = {B} and B ∈ Sig(OA ).t.r. 1.t. Since O1 \ O2 = O1 \ Om ⊆ O \ Om and O(·) is subset-closed. localitybased modules might contain non-essential axioms. B atomic concepts. B}. L. In our case. subset-closed safety class for L and Om be an O(·)-based S-module in O. B} ∈ O(∅) then OA |= α. it is sufficient to extract a module for a subset of the signature of α which. In particular O2 is an S-module in O1 w. As shown in Section 3.

2007. checking model conservative extensions is already undecidable for EL (Lutz & Wolter. 2007). and undecidable w. 2007) (roughly OWL-Lite). Indeed.. 7. Lutz & Wolter. In order to check if the subsumption A B holds in an ontology O..r. it is the case that M2 is also an S-module in O. M2 is an S-module in M1 for S = {A. therefore. by Corollary 47 this module is complete for all the super-concepts of A in O. r←∗ Corollary 47 Let O be a SHOIQ ontology and A. 2007). Furthermore. is complete for all the sub-concepts of B in M1 . it is sufficient to extract a module in O for S = {A} using a modularization algorithm based on a safety class in which the ontology {B ⊥} is local w. and. since O |= ( A).. That is.r. including B—that is. in Case (b) we show that OA is an O(·)-based module for S = {A} in O = O∪{ B}. By Proposition 46. which implies O |= (B A). to optimize the classification of ontologies. Proof.t.t. Proposition 46 implies that a module based on a safety class for a single atomic concept A can be used for capturing either the super-concepts (Case 1).r. provided that the safety class captures. Lutz et al. or the sub-concepts (Case 2) B of A. For this purpose. For example. B is a super-concept or a sub-concept of A in the ontology if and only if it is such in the module. One could extract a OA←∆ (·)-based module M2 for S = {B} in M1 which. if an atomic concept is a super-concept of A in O. it is the case that OB |= ( A).Cuenca Grau. & Sattler 2. It is possible to combine modularization procedures to obtain modules that are smaller than the ones obtained using these procedures individually. by Corollary 47. Let OA←∅ (·) and r←∗ r←∗ OA←∆ (·) be locality classes based on local classes of interpretations of form IA←∅ (·) and r←∗ (·). Then O |= (A B) iff OA |= (A B) iff OB |= (A B). hence.. The problem of deciding conservative extensions has been recently investigated in the context of ontologies (Ghilardi et al. Related Work We have seen in Section 3 that the notion of conservative extension is valuable in the formalization of ontology reuse tasks. one could use the syntactic locality conditions given in Figure 3 instead of their semantic counterparts. the empty signature. B atomic concepts. It is easy to see that {B r←∗ ⊥} ∈ OA←∅ (∅) and { r←∗ A} ∈ OA←∆ (∆).t. in order to check r←∗ the subsumption O |= (A B) one could first extract a OA←∅ (·)-based module M1 for S = {A} in O. and for ALC it is even not semi-decidable (Lutz et al.t. 2007). 2006. 308 . and check whether this subsumption holds w. Horrocks. This property can be used. axioms of the form B ⊥ (Case 1) or ( B) (Case 2). for example. ALCIQ (Lutz et al. 2007). the module.r. The problem of deciding whether P ∪ Q is a deductive S-conservative extension of Q is EXPTIME-complete for EL (Lutz & Wolter. when applied to the empty signature. it is convenient to use a syntactically tractable approximation of the safety class in use. for example. then it is also a r←∗ super-concept of A in M1 . Let O be a module for S = {A} based on O r←∗ (·) and IA←∆ A A←∅ r←∗ OB be a module for S = {B} based on OA←∆ (·). from Table 3. B} and M1 is an S-module in the original ontology O. The proof of this case is analogous to that for Case 1: Case (a) is applicable without changes. respectively. ALCIQO (roughly OWL DL). including A. Kazakov. 2-EXPTIME-complete w.

A B}. or. concepts that are “linked” from the input concepts (via existential restrictions) and their “super-concepts”. From the description of the procedure it is not entirely clear whether it works with a classified ontology (which is unlikely in the case of GALEN since the full version of GALEN has not been classified by any existing reasoner). together with their “super-concepts” and “super-roles” but not their “sub-concepts” and “sub-roles”. and so Q2 is not a module in Q. the extracted fragment is not guaranteed to be a module. Q |= α. Ontology Integration. In this case. the axioms in Q that mention any of the symbols in S are added to Q1 . α = (A ∀r. and S = {A}. In particular. For example. An example of such a procedure is the algorithm implemented in the Prompt-Factor tool (Noy & Musen. Currently. when S = {Cystic Fibrosis. the Prompt-Factor algorithm extracts Q1 = {A B}. C}. and Ontology Segmentation (Kalfoglou & Schorlemmer. so the authors investigate the possibility of splitting GALEN into small “segments” which can be processed by reasoners separately. then expands S with the symbols mentioned in these axioms. a rapidly growing body of work has been developed under the headings of Ontology Mapping and Alignment. the fragment extracted by the Prompt-Factor algorithm consists of all the axioms M1-M5. “intersection”. which was used for segmentation of the medical ontology GALEN (Rector & Rogers. These steps are repeated until a fixpoint is reached. S is expanded with the symbols in Sig(Q1 ). Genetic Disorder}. and Q consists of axioms M1–M5 from Figure 1. 2003). For our example in Section 3. consider an ontology Q = { {a}. Hence. 2004a. the Prompt-Factor algorithm extracts Q2 = {B C}. the segment should contain all “super-” and “sub-” concepts of the input concepts. M4. admits only single element models. the Prompt-Factor algorithm may fail even if Q is consistent. the PromptFactor algorithm extracts a module (though not a minimal one). and α is satisfied in every such a model. such that S contains all the symbols of Q. which does not imply α. Given S = {B. for the included concepts. Q2 |= α. 2004b). but not their “subconcepts”. numerous techniques for extracting fragments of ontologies for the purposes of knowledge reuse have been proposed. This field is rather diverse and has roots in several communities. that is. and M5 containing these terms.A). For example. In particular. the algorithm retrieves a fragment Q1 ⊆ Q as follows: first. After this step. B C} and α = (C B). It is easy to see that Q is consistent. It is also not clear which axioms should be included in 309 . given a set of atomic concepts S. also “their restrictions”.Modular Reuse of Ontologies: Theory and Practice In the last few years. how the “super-” and “sub-” concepts are computed. 2003. Another example is Seidenberg’s segmentation algorithm (Seidenberg & Rector. In general. however. second. The authors discuss which concepts and roles should be included in the segment and which should not. Given a signature S and an ontology Q. and “equivalent concepts” should be considered by including the roles and concepts they contain. Ontology Merging. computes a “segment” for S in the ontology. consider an ontology Q = {A ≡ ¬A. 1999). Noy. 2006). however. Most of these techniques rely on syntactically traversing the axioms in the ontology and employing various heuristics to determine which axioms are relevant and which are not. The authors describe a segmentation procedure which. “union”. otherwise. the algorithm first retrieves axioms M1. the full version of GALEN cannot be processed by reasoners. In this case. all the remaining axioms of Q are retrieved. The description of the procedure is very high-level. In general. The ontology Q is inconsistent due to the axiom A ≡ ¬A: any axiom (and α in particular) is thus a logical consequence of Q.

for information see the homepage of the workshop http://www. 2004) consists of partitioning the concepts in an ontology to facilitate visualization of and navigation through an ontology. Parsia.cild. 5. & Sirin. 2007). should not contain unsatisfiable atomic concepts. Cuenca Grau et al. such as E-connections (Cuenca Grau. Such studies are beyond the scope of this paper. & Sattler the segment in the end. there have been various proposals for “safely” combining modules. What is common between the modularization procedures we have mentioned is the lack of a formal treatment for the notion of module. The algorithm is intended as a visualization technique. namely that the module QA should entail all the subsumptions in the original ontology between atomic concepts involving A and other atomic concepts in QA . most of these proposals. to check for side-effects. The growing interest in the notion of “modularity” in ontologies has been recently reflected in a workshop on modular ontologies5 held in conjunction with the International Semantic Web Conference (ISWC-2006). Concerning the problem of ontology reuse.. In contrast to these works. A different approach to module extraction proposed in the literature (Stuckenschmidt & Klein. but rather argue what should be in the modules and what not based on intuitive notions. they do not take the semantics of the ontology languages into account.html 310 . including those containing “non-safe” axioms.Cuenca Grau. 2006b). The algorithm uses a set of heuristics for measuring the degree of dependency between the concepts in the ontology and outputs a graphical representation of these dependencies. Horrocks. such as safety testing. Distributed Description Logics (Borgida & Serafini. In contrast. and should have only “safe” axioms (which in our terms means that they are local for the empty signature). The interested reader can find in the literature a detailed comparison between the different approaches for combining ontologies (Cuenca Grau & Kutz.iastate. 2003) and Package-based Description Logics (Bao. 2006) propose a specialized semantics for controlling the interaction between the importing and the imported modules to avoid side-effects. Caragea. and does not establish a correspondence between the nodes of the graph and sets of axioms in the ontology. The authors present an algorithm for partitioning an ontology into disjoint modules and proved that the algorithm is correct provided that certain safety requirements for the input ontology hold: the ontology should be consistent.edu/events/womo. since the procedure talks only about the inclusion of concepts and roles. 2006a). Module extraction in ontologies has also been investigated from a formal point of view (Cuenca Grau et al. It might be possible to formalize these algorithms and identify ontologies for which the “intuitionbased” modularization procedures work correctly. we assume here that reuse is performed by simply building the logical union of the axioms in the modules under the standard semantics. One of the requirements for the module is that Q should be a conservative extension of QA (in the paper QA is called a logical module in Q). The papers describing these modularization procedures do not attempt to formally specify the intended outputs of the procedures. the algorithm we present here works for any ontology. (2006) define a notion of a module QA in an ontology Q for an atomic concept A. & Honavar. In particular. Kazakov. The paper imposes an additional requirement on modules. and we establish a collection of reasoning services.

we check if P belongs to the locality class for S = Sig(P) ∩ Sig(Q). 2004) to which our framework does not yet apply. we compare three modularization7 algorithms that we have implemented using Manchester’s OWL API:8 A1: The Prompt-Factor algorithm (Noy & Musen. 2006) and illustrate the ˜ r←∅ ˜ r←∆×∆ combination of the modularization procedures based on the classes OA←∅ (·) and OA←∆ (·). Cuenca Grau. Implementation and Proof of Concept In this section. we provide empirical evidence of the appropriateness of locality for safety testing and module extraction.2 Extraction of Modules In this section.. The library is available at http://www. 8. all 4 non-local ontologies turned out to be local.1 Locality for Testing Safety ˜ r←∅ We have run our syntactic locality checker for the class OA←∅ (·) over the ontologies from a library of 300 ontologies of various sizes and complexity some of which import each other (Gardiner. 6. In this section by “module” we understand the result of the considered modularization procedures which may not necessarily be a module according to Definition 10 or 12 8. It turned out that from the 96 ontologies in the library that import other ontologies. we show that the locality class OA←∅ (·) provides a powerful sufficiency test for ˜ r←∅ safety which works for many real-world ontologies.cs. we have implemented a syntactic loca˜ r←∅ ˜ r←∆×∆ lity checker for the locality classes OA←∅ (·) and OA←∆ (·) as well as the algorithm for extracting modules given in Figure 5 from Section 6. all but 11 were syntactically local for S (and hence also semantically local for S). B in the two ontologies under consideration are synonyms. & Horrocks. Third. Second. Parsia. (2006).uk/~horrocks/testing/ 7. After this transformation. For this purpose. we were able to easily repair these non-localities as follows: we replace every occurrence of A in P with B and then remove this axiom from the ontology. 8. we show that OA←∅ (·)-based modules are typically very small compared to both the size of the ontology and the modules extracted using other techniques. 2003). From the 11 non-local ontologies.ac. 2006). http://sourceforge.6 For all ontologies P that import an ontology Q. where A ∈ S and B ∈ S. A2: The segmentation algorithm proposed by Cuenca Grau et al. Tsarkov. The aim of the experiments described in this section is not to provide a throughout comparison of the quality of existing modularization algorithms since each algorithm extracts “modules” according to its own requirements. & Hendler.Modular Reuse of Ontologies: Theory and Practice 8.man. but rather to give an idea of the typical size of the modules extracted from real ontologies by each of the algorithms. Indeed. ˜ r←∅ A3: Our modularisation algorithm (Algorithm 5). ˜ r←∅ First. we report on our implementation in the ontology editor Swoop (Kalyanpur. based on the locality class OA←∅ (·).net/projects/owlapi 311 . Sirin. / Note that these axioms simply indicate that the atomic concepts A. 7 are written in the OWL-Full species of OWL (Patel-Schneider et al. The remaining 4 non-localities are due to the presence of so-called mapping axioms of the form A ≡ B .

Horrocks. & Sattler (a) Modularization of (a) Modularization of NCINCI (b) Modularization of GALEN-Small (b) Modularization of GALEN-Small (c) Modularization of of SNOMED (c) Modularization SNOMED (d) Modularization of of GALEN-Full (d) Modularization GALEN-Full (e) Small modules GALEN-Full (e) Smallmodules of of GALEN-Full (f) Large modules GALEN-Full (f) Largemodules of of GALEN-Full Figure 6: Distribution for the sizes of syntactic locality-based modules: the X-Axis gives the number of concepts in the modules and the Y-Axis the number of modules extracted for each size range.Cuenca Grau. Kazakov. 312 .

it should give an idea of typical module sizes. we have also considered a version GALEN-Small that has commonly been used as a benchmark for OWL reasoners.-based mod.5 0.14 and NASA’s Semantic Web for Earth and Environmental Terminology (SWEET)15 .mindswap. It is worth emphasizing here that our algorithm A3 does not just extract a module for the input atomic concept: the extracted fragment is also a module for its whole signature. which we divided into two groups: Simple. These ontologies are complex since they use many constructors from OWL DL and/or include a significant number of GCIs. 13.5 0.05 0.geneontology.org/2003/CancerOntology/nciOncology.owl http://ontology.4 100 Avg. In this group.(%) 30. These ontologies are expressed in a simple ontology language and are of a simple structure. 12. 10.com/ http://www.9 37. Since there is no benchmark for ontology modularization and only a few use cases are available.org http://www. there is no systematic way of evaluating modularization procedures. 15.1 24.html http://sweet. in particular.(%) 55 100 1 100 100 100 83.jpl. even if it may not necessarily reflect an actual ontology reuse scenario. Complex. In the case of GALEN. Therefore we have designed a simple experiment setup which.8 0.11 and the SNOMED Ontology12 . we have collected a set of well-known ontologies available on the Web.6 NCI SNOMED GO SUMO GALEN-Small GALEN-Full SWEET DOLCE-Lite 27772 255318 22357 869 2749 24089 1816 499 Table 6: Comparison of Different Modularization Algorithms As a test suite. For each ontology.84 100 0.08 0. This group contains the well-known GALEN ontology (GALEN-Full). Max. This ontology is almost 10 times smaller than the original GALEN-Full ontology.org/prj_galen. but only definitions.1 100 100 100 51.(%) 0.Modular Reuse of Ontologies: Theory and Practice Ontology Atomic Concepts A1: Prompt-Factor Max.10 the Gene Ontology (GO). we took the set of its atomic concepts and extracted modules for every atomic concept. We compare the maximal and average sizes of the extracted modules.3 100 Avg. we have included the National Cancer Institute (NCI) Ontology.05 0.09 1. 14. yet similar in structure.(%) 87. 9.5 100 A3: Loc.7 100 A2: Segmentation Max. http://www.teknowledge.html http://www.it/DOLCE.8 100 0.snomed.gov/ontology/ 313 .13 the DOLCE upper ontology (DOLCE-Lite).loa-cnr.1 100 100 100 88.4 2 10 29.3 Avg. they do not contain GCIs.7 3.org http://www.(%) 0.6 100 1 100 100 100 96.8 1.nasa.(%) 75. 11.openclinical.9 the SUMO Upper Ontology. which typically includes a fair amount of other concepts and roles.

the ontology is heavily underspecified. are simple in structure and logical expressivity. DOLCE. SNOMED. Indeed. in SNOMED. The table provides the size of the largest module and the average size of the modules obtained using each of these algorithms. The SWEET ontology is an exception: even though the ontology uses most of the constructors available in OWL. For DOLCE. as opposed to 1/200 in the case of SNOMED.Cuenca Grau. which yields small modules. & Sattler ˜ r←∅ ˜ r←∆×∆ (a) Concepts DNA Sequence and (b) OA←∅ (S)-based module for (c) OA←∆ (S)-based module for Microanatomy in NCI DNA Sequence in NCI Micro Anatomy in the fragment 7b Figure 7: The Module Extraction Functionality in Swoop The results we have obtained are summarized in Table 6. This can be explained by the fact that these ontologies. and the average size of the modules is 1/10 of the size of the largest module. In fact. SWEET and DOLCE. Here. Kazakov. the largest locality-based module obtained is approximately 0. the locality-based modules are larger. in particular. For NCI. we have obtained very small locality-based modules.5% of the size of the ontology. In Figure 6. In contrast. we can clearly see that locality-based modules are significantly smaller than the ones obtained using the other methods. GALEN and SNOMED. we have a more detailed analysis of the modules for NCI. in the case of SUMO. even if large. Horrocks. SNOMED. For example. For GALEN. In the table. GO and SUMO. the X-axis represents the size ranges of the ob314 . the algorithms A1 and A2 retrieve the whole ontology as the module for each input signature. the modules are even bigger—1/3 of the size of the ontology—which indicates that the dependencies between the different concepts in the ontology are very strong and complicated. GALEN-Small and GALEN-Full. the largest module in GALEN-Small is 1/10 of the size of the ontology. most of the modules we have obtained for these ontologies contain less than 40 atomic concepts. the modules we obtain using our algorithm are significantly smaller than the size of the input ontology.

all its (entailed) super-concepts in O. What is surprising is that the difference in the size of modules is so significant. which. for GALEN-Full. This suggests that the knowledge in NCI about the particular concept DNA Sequence is very shallow in the sense that NCI only “knows” that a DNA Sequence is a macromolecular structure..Modular Reuse of Ontologies: Theory and Practice tained modules and the Y-axis the number of modules whose size is within the given range. we have obtained a large number of small modules and a significant number of very big ones. In order to explore the use of our results for ontology design and analysis. Algorithm A1 produces a fragment of the ontology that contains the input atomic concept and is syntactically separated from the rest of the axioms—that is. 16. the fragment and the rest of the ontology have disjoint signatures. 2006). Figure 7b shows the minimal OA←∅ (·)-based module for DNA Sequence. In contrast. a OA←∅ (·)-based module for an atomic concept contains all necessary axioms for. This cycle does not occur in the simplified version of GALEN and thus we obtain a smooth distribution for that case. The resulting module is shown in Figure 7b.google. Algorithm A2 extracts a fragment of the ontology that is a module for the input atomic concept and is additionally semantically separated from the rest of the ontology: no entailment between an atomic concept in the module and an atomic concept not in the module should hold in the original ontology. The presence of this cycle can be spotted more clearly in Figure 6(f). The considerable differences in the size of the modules extracted by algorithms A1 – A3 are due to the fact that these algorithms extract modules according to different requirements. Figure 7 shows that this module contains only the concepts in the “path” from DNA Sequence to the top level concept Anatomy Kind. we can observe that the size of the modules follows a smooth distribution. NCI and GALEN-Small. The tool can be downloaded at http://code. The plots thus give an idea of the distribution for the sizes of the different modules. but no medium-sized modules in-between. The user interface of Swoop allows for the selection of an input signature and the retrieval of the corresponding module. Recall that. is an anatomy kind. in the end. one could extract the OA←∆ (·)based module for Micro Anatomy in the fragment 7b. Thus this module can be seen as the “upper ontology” for A in O. this module contains all the sub-concepts of Micro Anatomy in the previously extracted module. in Figure 6(e) we can see that the distribution for the “small” modules in GALEN-Full is smooth and much more similar to the one for the simplified version of GALEN. the figure shows that there is a large number of modules of size in between 6515 and 6535 concepts.com/p/swoop/ 315 . it is to be expected that it extracts smaller modules. we have integrated our algorithm for extracting modules in the ontology editor Swoop (Kalyanpur et al. If one wants to refine the module by only including the information in the ontology necessary to ˜ r←∆×∆ entail the “path” from DNA Sequence to Micro Anatomy. according to Corollary 47.16 Figure 7a shows the classification of the concepts DNA Sequence and Microanatomy in ˜ r←∅ the NCI ontology. This abrupt distribution indicates the presence of a big cycle of dependencies in the ontology. For SNOMED. at least. By Corollary 47. In contrast. In fact. ˜ r←∅ as obtained in Swoop. Since our algorithm is based on weaker requirements.

AAAI. GA.. Modular ontology languages revisited. Pushing the EL envelope. In Proceedings of the Tenth International Conference on Principles of Knowledge 316 . Y. We have introduced and studied the notion of a safety class.. E. we have proposed a set of reasoning problems that are relevant for ontology reuse. & Sattler 9. D. Horrocks. The Classical Decision Problem. we have used safety classes to extract modules from ontologies. we have shown that these problems are undecidable or algorithmically unsolvable for the logic underlying OWL DL. Baader. F.. & Kalyanpur. F.. (2005). Data Semantics. Modularity and web ontologies. Scotland. 2006. 364–370.. January 2007. Athens. (2003). (2006a). Cuenca Grau. & Patel-Schneider. Brandt. U. which characterizes any sufficiency condition for safety of an ontology w. 2005.. we would like to study other approximations which can produce small modules in complex ontologies like GALEN. 153–184. L. pp.. D.. B. & Serafini. B. 4 (1).. On the semantics of linking and importing in modular ontologies. We have established the relationships between these problems and studied their computability. E. & Sattler. & Honavar. July 30-August 5. January 5.. Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence. J. F. & Kutz.. UK. B. 298– 304. (2003). Parsia. Edinburgh. D. Professional Book Center. 2007. In addition. Nardi. I. S. In IJCAI-05. Combining OWL ontologies using Econnections. Vol. The Description Logic Handbook: Theory...r. Hyderabad.Cuenca Grau. McGuinness. A. & Gurevich. Kazakov. In IJCAI-07. Horrocks. E. B.. USA. Hyderabad. Y. (1997). India. (2007). Cuenca Grau. Cuenca Grau. In Proceedings of the Workshop on Semantic Web for Collaborative Knowledge Acquisition.. Proceedings of the Twentieth International Joint Conference on Artificial Intelligence. Conclusion In this paper. B¨rger. J. Bao. Kazakov. 40–59. D. L. B.. 4273 of Lecture Notes in Computer Science.. J. O. B. In Proceedings of the 5th International Semantic Web Conference (ISWC-2006). a signature. References Baader. 1. Caragea. C. For future work. Using existing results (Lutz et al. 2007) and the results obtained in Section 4. Borgida.. which can be computed in practice.. Calvanese. pp. and exploit modules to optimize ontology reasoning.t. Perspectives o a of Mathematical Logic. A. pp. India.). (2006b). Implementation. (2006).. (Eds. Cambridge University Press. 72–86. Parsia. & Sirin. E. We have dealt with these problems by defining sufficient conditions for a solution to exist. Sirin. Distributed description logics: Assimilating information from peer sources. and Applications. P. November 5-9. Second printing (Universitext) 2001. Cuenca Grau. Gr¨del. V. Web Sem... A logical framework for modularity of ontologies. Springer-Verlag. (2007). & Lutz.

D. Web Sem.Modular Reuse of Ontologies: Theory and Practice Representation and Reasoning (KR-2006). (2006). pp. I. 4273 of Lecture Notes in Computer Science. 654–667. Lutz. May 30 . (2003). Swoop: A web ontology editing browser. June 2-5. D. Conservative extensions in the lightweight description logic EL. Lake District of the United Kingdom. J. Ph. USA. In Proceedings of the 21st International Conference on Automated Deduction (CADE-21). Walther.. & Schorlemmer.org. Semantic integration: A survey of ontology-based approaches. I. Horrocks. 2007. N.. Just the right amount: extracting modules from ontologies. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI-07). & Hendler. GA. Cuenca Grau. 33 (4).. May 8-12. O. A.. 2005. (2003). F. Gardiner. 2007. Tsarkov. Karlsruhe. F. P. (2007). 4 (2). & van Harmelen. Lutz.. Sirin.. Kazakov. Lutz. J. 2006. & Horrocks. From SHIQ and RDF to OWL: the making of a web ontology language. Cuenca Grau. Canada. Ontology mapping: The state of the art. pp. Cuenca Grau.. Vol. Horrocks. Alberta. 282–305. 198–209. F.. Bremen. U. Motik. Did I damage my ontology? a case for conservative extensions in description logics. Hyderabad. In Proceedings of the 2006 International Workshop on Description Logics (DL-2006). (2005). 65–70. Conservative extensions in expressive description logics.. 717–726. India. In Proceedings of the Tenth International Conference on Principles of Knowledge Representation and Reasoning (KR-2006). In Proceedings of the 5th International Semantic Web Conference (ISWC-2006). Horrocks. AAAI Press. & Wolter. In Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI-05). 453–459. B.. Will my ontologies fit together?. (2003). C. Lake District of the United Kingdom.. F. AAAI. 2006. & Wolter. (2007).. C.. M¨ller. A. In Proceedings of the 16th International Conference on World Wide Web (WWW-2007). Horrocks. & Sattler. (2006). & Haarslev. Cambridge University Press. F. July 30-August 5. B. 2006. Univesit¨t Karlsruhe (TH). UK. Professional Book Center. R. The Knowledge Engineering Review. Edinburgh.. 448–453. Reasoning in Description Logics using Resolution and Deductive Databases. pp. November 5-9. Germany. AAAI Press. January 2007. Kalyanpur. (2006). 2006. B. chap. 18. 144–153. June 2-5. pp. UK. Banff. 8. Description logic systems. & Wolter. (2007). Scotland.. Web Sem.. Patel-Schneider. 1 (1). (2006). J. a Noy. I. U. thesis. D. 187–197. 84–99. 1–31. Y. Lake District. I. 7–26. pp. pp. SIGMOD Record. M. F. Athens. S. Springer.. E. I. (2004a).. C. & Sattler. & Sattler. (2006). Germany. 189 of CEUR Workshop Proceedings. Parsia... ACM. U. A tableaux decision procedure for SHOIQ. Kutz. pp. Ghilardi. 4603 of Lecture Notes in Computer Science... V. T. Kalfoglou. Y. Vol.. CEUR-WS. 317 . Springer. B. B. pp. Framework for an automated comparison of description logic reasoners. In The Description Logic o Handbook.June 1.. Windermere. Vol. July 17-20.

Cuenca Grau. W3C Recommendation. 289–303.. Ontological issues in using a description logic to represent medical concepts: Experience from GALEN. May 23-26. & Horrocks. Artificial Intelligence.. 3298 of Lecture Notes in Computer Science. International Handbooks on Information Systems. D. I. Attributive concept descriptions with complements. Tools for mapping and merging ontologies. 199–217. Edinburgh. R. In Proceedings of the 2004 International Workshop on Description Logics (DL2004). & Sattler Noy. & Horrocks. USA. Whistler. 2006. & Rogers. Japan. pp. Tsarkov. Pellet system description. 1–26. (2006). November 7-11. Seattle. L. J. & Studer (Staab & Studer. Hiroshima.. M. June 6-8. F. Journal of Human-Computer Studies. Springer. (1999). Hayes. (2004). Tobies. In Proceedings of the 15th international conference on World Wide Web (WWW-2006). Res. 365–384. Schmidt-Schauß. Elsevier. Handbook on Ontologies. 2004. classification and use. Vol. 6 (59). P. (2004). & Studer. 2004. A. pp. Web ontology language OWL Abstract Syntax and Semantics. Structure-based partitioning of large class hierarchies. Artif. M. N. Patel-Schneider. 2006. Springer. British Columbia. In Proceedings of the Third International Semantic Web Conference (ISWC2004). The PROMPT suite: Interactive tools for ontology mapping and merging. pp. CEUR-WS. In Proceedings of the Third International Joint Conference on Automated Reasoning (IJCAR 2006). Springer. Scotland.. Noy. & Musen. UK. In IMIA WG6 Workshop. In Staab. S. B. ACM.). (2006). 292–297. Kazakov. (2004b). Rector. & Parsia. J. Web ontology segmentation: analysis. M. & Smolka. Sirin. Intell. 4130 of Lecture Notes in Computer Science. Vol. (2000).. Elsevier. 104 of CEUR Workshop Proceedings. P. 13–22. (JAIR). I.. 12. pp.. & Klein.. August 17-20. N. WA. (2004). J.. A. 48 (1). FaCT++ description logic reasoner: System description. Proceedings. G. (2004). & Rector. Canada. 2004). Int. S. Vol.. Horrocks. (Eds. Staab. Stuckenschmidt. (2003). E. Seidenberg. The complexity of reasoning with cardinality restrictions and nominals in expressive description logics. (1991). 318 . H.org.

Sign up to vote on this title
UsefulNot useful