Journal of Artificial Intelligence Research 31 (2008) 273-318 Submitted 07/07; published 02/08

Modular Reuse of Ontologies: Theory and Practice
Bernardo Cuenca Grau berg@comlab.ox.ac.uk
Ian Horrocks ian.horrocks@comlab.ox.ac.uk
Yevgeny Kazakov yevgeny.kazakov@comlab.ox.ac.uk
Oxford University Computing Laboratory
Oxford, OX1 3QD, UK
Ulrike Sattler sattler@cs.man.ac.uk
School of Computer Science
The University of Manchester
Manchester, M13 9PL, UK
Abstract
In this paper, we propose a set of tasks that are relevant for the modular reuse of on-
tologies. In order to formalize these tasks as reasoning problems, we introduce the notions
of conservative extension, safety and module for a very general class of logic-based ontology
languages. We investigate the general properties of and relationships between these notions
and study the relationships between the relevant reasoning problems we have previously
identified. To study the computability of these problems, we consider, in particular, De-
scription Logics (DLs), which provide the formal underpinning of the W3C Web Ontology
Language (OWL), and show that all the problems we consider are undecidable or algo-
rithmically unsolvable for the description logic underlying OWL DL. In order to achieve a
practical solution, we identify conditions sufficient for an ontology to reuse a set of sym-
bols “safely”—that is, without changing their meaning. We provide the notion of a safety
class, which characterizes any sufficient condition for safety, and identify a family of safety
classes–called locality—which enjoys a collection of desirable properties. We use the notion
of a safety class to extract modules from ontologies, and we provide various modulariza-
tion algorithms that are appropriate to the properties of the particular safety class in use.
Finally, we show practical benefits of our safety checking and module extraction algorithms.
1. Motivation
Ontologies—conceptualizations of a domain shared by a community of users—play a ma-
jor role in the Semantic Web, and are increasingly being used in knowledge management
systems, e-Science, bio-informatics, and Grid applications (Staab & Studer, 2004).
The design, maintenance, reuse, and integration of ontologies are complex tasks. Like
software engineers, ontology engineers need to be supported by tools and methodologies
that help them to minimize the introduction of errors, i.e., to ensure that ontologies are
consistent and do not have unexpected consequences. In order to develop this support, im-
portant notions from software engineering, such as module, black-box behavior, and controlled
interaction, need to be adapted.
Recently, there has been growing interest in the topic of modularity in ontology engi-
neering (Seidenberg & Rector, 2006; Noy, 2004a; Lutz, Walther, & Wolter, 2007; Cuenca
Grau, Parsia, Sirin, & Kalyanpur, 2006b; Cuenca Grau, Horrocks, Kazakov, & Sattler,
2007), which has been motivated by the above-mentioned application needs. In this paper,
we focus on the use of modularity to support the partial reuse of ontologies. In particular,
we consider the scenario in which we are developing an ontology P and want to reuse a set
S of symbols from a "foreign" ontology Q without changing their meaning.
For example, suppose that an ontology engineer is building an ontology about research
projects, which specifies different types of projects according to the research topic they
focus on. The ontology engineer in charge of the projects ontology P may use terms such
as Cystic Fibrosis and Genetic Disorder in his descriptions of medical research projects. The
ontology engineer is an expert on research projects; he may be unfamiliar, however, with
most of the topics the projects cover and, in particular, with the terms Cystic Fibrosis and
Genetic Disorder. In order to complete the projects ontology with suitable definitions for
these medical terms, he decides to reuse the knowledge about these subjects from a well-
established medical ontology Q.
The most straightforward way to reuse these concepts is to construct the logical union
P ∪ Q of the axioms in P and Q. It is reasonable to assume that the additional knowledge
about the medical terms used in both P and Q will have implications on the meaning of the
projects defined in P; indeed, the additional knowledge about the reused terms provides new
information about medical research projects which are defined using these medical terms.
Less intuitive is the fact that importing Q may also result in new entailments concerning
the reused symbols, namely Cystic Fibrosis and Genetic Disorder. Since the ontology engineer
of the projects ontology is not an expert in medicine and relies on the designers of Q, it is
to be expected that the meaning of the reused symbols is completely specified in Q; that
is, the fact that these symbols are used in the projects ontology P should not imply that
their original "meaning" in Q changes. If P does not change the meaning of these symbols
in Q, we say that P ∪ Q is a conservative extension of Q.
In realistic application scenarios, it is often unreasonable to assume the foreign ontology
Q to be fixed; that is, Q may evolve beyond the control of the modelers of P. The ontology
engineers in charge of P may not be authorized to access all the information in Q or,
most importantly, they may decide at a later time to reuse the symbols Cystic Fibrosis and
Genetic Disorder from a medical ontology other than Q. For application scenarios in which
the external ontology Q may change, it is reasonable to "abstract" from the particular Q
under consideration. In particular, given a set S of "external symbols", the fact that the
axioms in P do not change the meaning of any symbol in S should be independent of the
particular meaning ascribed to these symbols by Q. In that case, we will say that P is safe
for S.
Moreover, even if P "safely" reuses a set of symbols from an ontology Q, it may still
be the case that Q is a large ontology. In particular, in our example, the foreign medical
ontology may be huge, and importing the whole ontology would make the consequences
of the additional information costly to compute and difficult for our ontology engineers in
charge of the projects ontology (who are not medical experts) to understand. In practice,
therefore, one may need to extract a module Q₁ of Q that includes only the relevant infor-
mation. Ideally, this module should be as small as possible while still guaranteeing to capture
the meaning of the terms used; that is, when answering queries against the research projects
ontology, importing the module Q₁ would give exactly the same answers as if the whole
medical ontology Q had been imported. In this case, importing the module will have the
same observable effect on the projects ontology as importing the entire ontology. Further-
more, the fact that Q₁ is a module in Q should be independent of the particular P under
consideration.
The contributions of this paper are as follows:
1. We propose a set of tasks that are relevant to ontology reuse and formalize them as
reasoning problems. To this end, we introduce the notions of conservative extension,
safety and module for a very general class of logic-based ontology languages.
2. We investigate the general properties of and relationships between the notions of
conservative extension, safety, and module and use these properties to study the re-
lationships between the relevant reasoning problems we have previously identified.
3. We consider Description Logics (DLs), which provide the formal underpinning of the
W3C Web Ontology Language (OWL), and study the computability of our tasks. We
show that all the tasks we consider are undecidable or algorithmically unsolvable for
the description logic underlying OWL DL—the most expressive dialect of OWL that
has a direct correspondence to description logics.
4. We consider the problem of deciding safety of an ontology for a signature. Given that
this problem is undecidable for OWL DL, we identify sufficient conditions for safety,
which are decidable for OWL DL—that is, if an ontology satisfies our conditions
then it is safe; the converse, however, does not necessarily hold. We propose the
notion of a safety class, which characterizes any sufficiency condition for safety, and
identify a family of safety classes–called locality—which enjoys a collection of desirable
properties.
5. We next apply the notion of a safety class to the task of extracting modules from
ontologies; we provide various modularization algorithms that are appropriate to the
properties of the particular safety class in use.
6. We present empirical evidence of the practical benefits of our techniques for safety
checking and module extraction.
This paper extends the results in our previous work (Cuenca Grau, Horrocks, Kutz, &
Sattler, 2006; Cuenca Grau et al., 2007; Cuenca Grau, Horrocks, Kazakov, & Sattler, 2007).
2. Preliminaries
In this section we introduce description logics (DLs) (Baader, Calvanese, McGuinness,
Nardi, & Patel-Schneider, 2003), a family of knowledge representation formalisms which
underlie modern ontology languages, such as OWL DL (Patel-Schneider, Hayes, & Hor-
rocks, 2004). A hierarchy of commonly-used description logics is summarized in Table 1.
The syntax of a description logic L is given by a signature and a set of constructors. A
signature (or vocabulary) Sg of a DL is the (disjoint) union of countably infinite sets AC
of atomic concepts (A, B, . . . ) representing sets of elements, AR of atomic roles (r, s, . . . )
representing binary relations between elements, and Ind of individuals (a, b, c, . . . ) repre-
senting constants. We assume the signature to be fixed for every DL.
DLs    Constructors                            Axioms [Ax]
       Rol    Con                              RBox        TBox              ABox
EL     r      ⊤, A, C₁ ⊓ C₂, ∃R.C                          A ≡ C, C₁ ⊑ C₂    a : C, r(a, b)
ALC    ——     ——, ¬C                                       ——                ——
S      ——     ——                               Trans(r)    ——                ——
+I     r⁻
+H                                             R₁ ⊑ R₂
+F                                             Funct(R)
+N            (≥ n S)
+Q            (≥ n S.C)
+O            {a}
Here r ∈ AR, A ∈ AC, a, b ∈ Ind, R(i) ∈ Rol, C(i) ∈ Con, n ≥ 1 and S ∈ Rol is a simple
role (see Horrocks & Sattler, 2005).
Table 1: The hierarchy of standard description logics
Every DL provides constructors for defining the set Rol of (general) roles (R, S, . . . ),
the set Con of (general) concepts (C, D, . . . ), and the set Ax of axioms (α, β, . . . ) which is
a union of role axioms (RBox), terminological axioms (TBox) and assertions (ABox).
EL (Baader, Brandt, & Lutz, 2005) is a simple description logic which allows one to
construct complex concepts using conjunction C₁ ⊓ C₂ and existential restriction ∃R.C
starting from atomic concepts A, roles R and the top concept ⊤. EL provides no role
constructors and no role axioms; thus, every role R in EL is atomic. The TBox axioms of
EL can be either concept definitions A ≡ C or general concept inclusion axioms (GCIs)
C₁ ⊑ C₂. EL assertions are either concept assertions a : C or role assertions r(a, b). In this
paper we assume the concept definition A ≡ C is an abbreviation for two GCIs A ⊑ C and
C ⊑ A.
The basic description logic ALC (Schmidt-Schauß & Smolka, 1991) is obtained from EL
by adding the concept negation constructor ¬C. We introduce some additional constructors
as abbreviations: the bottom concept ⊥ is a shortcut for ¬⊤, the concept disjunction C₁ ⊔ C₂
stands for ¬(¬C₁ ⊓ ¬C₂), and the value restriction ∀R.C stands for ¬(∃R.¬C). In contrast
to EL, ALC can express contradiction axioms like ⊤ ⊑ ⊥. The logic S is an extension of
ALC where, additionally, some atomic roles can be declared to be transitive using a role
axiom Trans(r).
Further extensions of description logics add features such as inverse roles r⁻ (indicated
by appending a letter I to the name of the logic), role inclusion axioms (RIs) R₁ ⊑ R₂ (+H),
functional roles Funct(R) (+F), number restrictions (≥ n S), with n ≥ 1, (+N), qualified
number restrictions (≥ n S.C), with n ≥ 1, (+Q),¹ and nominals {a} (+O). Nominals make
it possible to construct a concept representing a singleton set {a} (a nominal concept) from
an individual a. These extensions can be used in different combinations; for example ALCO
is an extension of ALC with nominals; SHIQ is an extension of S with role hierarchies,
inverse roles and qualified number restrictions; and SHOIQ is the DL that uses all the
constructors and axiom types we have presented.
1. The dual constructors (≤ n S) and (≤ n S.C) are abbreviations for ¬(≥ n+1 S) and ¬(≥ n+1 S.¬C),
respectively.
Modern ontology languages, such as OWL, are based on description logics and, to a cer-
tain extent, are syntactic variants thereof. In particular, OWL DL corresponds to SHOIN
(Horrocks, Patel-Schneider, & van Harmelen, 2003). In this paper, we assume an ontology
O based on a description logic L to be a finite set of axioms in L. The signature of an
ontology O (of an axiom α) is the set Sig(O) (Sig(α)) of atomic concepts, atomic roles and
individuals that occur in O (respectively in α).
The main reasoning task for ontologies is entailment: given an ontology O and an axiom
α, check if O implies α. The logical entailment ⊨ is defined using the usual Tarski-style
set-theoretic semantics for description logics as follows. An interpretation ℐ is a pair
ℐ = (Δ^ℐ, ·^ℐ), where Δ^ℐ is a non-empty set, called the domain of the interpretation, and ·^ℐ
is the interpretation function that assigns: to every A ∈ AC a subset A^ℐ ⊆ Δ^ℐ, to every
r ∈ AR a binary relation r^ℐ ⊆ Δ^ℐ × Δ^ℐ, and to every a ∈ Ind an element a^ℐ ∈ Δ^ℐ.
Note that the sets AC, AR and Ind are not defined by the interpretation ℐ but assumed to
be fixed for the ontology language (DL).
The interpretation function ·^ℐ is extended to complex roles and concepts via DL-
constructors as follows:
    (⊤)^ℐ = Δ^ℐ
    (C ⊓ D)^ℐ = C^ℐ ∩ D^ℐ
    (∃R.C)^ℐ = { x ∈ Δ^ℐ | ∃y. ⟨x, y⟩ ∈ R^ℐ ∧ y ∈ C^ℐ }
    (¬C)^ℐ = Δ^ℐ \ C^ℐ
    (r⁻)^ℐ = { ⟨x, y⟩ | ⟨y, x⟩ ∈ r^ℐ }
    (≥ n R)^ℐ = { x ∈ Δ^ℐ | #{ y ∈ Δ^ℐ | ⟨x, y⟩ ∈ R^ℐ } ≥ n }
    (≥ n R.C)^ℐ = { x ∈ Δ^ℐ | #{ y ∈ Δ^ℐ | ⟨x, y⟩ ∈ R^ℐ ∧ y ∈ C^ℐ } ≥ n }
    ({a})^ℐ = { a^ℐ }
The satisfaction relation ℐ ⊨ α between an interpretation ℐ and a DL axiom α (read as ℐ
satisfies α, or ℐ is a model of α) is defined as follows:
    ℐ ⊨ C₁ ⊑ C₂ iff C₁^ℐ ⊆ C₂^ℐ;        ℐ ⊨ a : C iff a^ℐ ∈ C^ℐ;
    ℐ ⊨ R₁ ⊑ R₂ iff R₁^ℐ ⊆ R₂^ℐ;        ℐ ⊨ r(a, b) iff ⟨a^ℐ, b^ℐ⟩ ∈ r^ℐ;
    ℐ ⊨ Trans(r) iff ∀x, y, z ∈ Δ^ℐ [ ⟨x, y⟩ ∈ r^ℐ ∧ ⟨y, z⟩ ∈ r^ℐ ⇒ ⟨x, z⟩ ∈ r^ℐ ];
    ℐ ⊨ Funct(R) iff ∀x, y, z ∈ Δ^ℐ [ ⟨x, y⟩ ∈ R^ℐ ∧ ⟨x, z⟩ ∈ R^ℐ ⇒ y = z ].
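To make these definitions concrete, the following sketch (our own illustration, not part of the paper; all class and function names are ours) represents a finite interpretation explicitly and evaluates the constructors and axiom types above by direct set manipulation; the general definition, of course, allows arbitrary non-empty domains.

    # Illustrative sketch (ours, not from the paper): a *finite* interpretation
    # I = (domain, .^I) stored as explicit sets, with the constructors and axiom
    # types defined above evaluated by direct set manipulation.

    class Interpretation:
        def __init__(self, domain, concepts, roles, individuals):
            self.domain = set(domain)        # Delta^I (non-empty)
            self.concepts = concepts         # atomic concept name -> subset of domain
            self.roles = roles               # atomic role name -> set of pairs over domain
            self.individuals = individuals   # individual name -> element of domain

        def role(self, R):
            # R is an atomic role name or ('inv', r) for the inverse role r^-
            if isinstance(R, tuple) and R[0] == 'inv':
                return {(y, x) for (x, y) in self.roles[R[1]]}
            return self.roles[R]

        def concept(self, C):
            # C is 'TOP', an atomic concept name, or a tuple:
            # ('not', C), ('and', C, D), ('some', R, C), ('atleast', n, R, C), ('one_of', a)
            if C == 'TOP':
                return set(self.domain)
            if isinstance(C, str):
                return self.concepts[C]
            op = C[0]
            if op == 'not':
                return self.domain - self.concept(C[1])
            if op == 'and':
                return self.concept(C[1]) & self.concept(C[2])
            if op == 'some':
                R, D = self.role(C[1]), self.concept(C[2])
                return {x for x in self.domain if any((x, y) in R for y in D)}
            if op == 'atleast':
                n, R, D = C[1], self.role(C[2]), self.concept(C[3])
                return {x for x in self.domain if len({y for y in D if (x, y) in R}) >= n}
            if op == 'one_of':
                return {self.individuals[C[1]]}
            raise ValueError(f"unknown concept constructor: {C!r}")

        def satisfies(self, axiom):
            kind = axiom[0]
            if kind == 'sub':        # C1 ⊑ C2
                return self.concept(axiom[1]) <= self.concept(axiom[2])
            if kind == 'role_sub':   # R1 ⊑ R2
                return self.role(axiom[1]) <= self.role(axiom[2])
            if kind == 'instance':   # a : C
                return self.individuals[axiom[1]] in self.concept(axiom[2])
            if kind == 'related':    # r(a, b)
                return (self.individuals[axiom[1]],
                        self.individuals[axiom[2]]) in self.role(axiom[3])
            if kind == 'trans':      # Trans(r)
                R = self.role(axiom[1])
                return all((x, z) in R for (x, y1) in R for (y2, z) in R if y1 == y2)
            if kind == 'funct':      # Funct(R)
                R = self.role(axiom[1])
                return all(y == z for (x1, y) in R for (x2, z) in R if x1 == x2)
            raise ValueError(f"unknown axiom type: {axiom!r}")

An ontology is then satisfied by such an interpretation exactly when satisfies holds for each of its axioms, which matches the definition of a model given next.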
An interpretation ℐ is a model of an ontology O if ℐ satisfies all axioms in O. An
ontology O implies an axiom α (written O ⊨ α) if ℐ ⊨ α for every model ℐ of O. Given
a set I of interpretations, we say that an axiom α (an ontology O) is valid in I if every
interpretation ℐ ∈ I is a model of α (respectively O). An axiom α is a tautology if it is valid
in the set of all interpretations (or, equivalently, is implied by the empty ontology).
We say that two interpretations ℐ = (Δ^ℐ, ·^ℐ) and 𝒥 = (Δ^𝒥, ·^𝒥) coincide on the subset
S of the signature (notation: ℐ|S = 𝒥|S) if Δ^ℐ = Δ^𝒥 and X^ℐ = X^𝒥 for every X ∈ S. We
say that two sets of interpretations I and J are equal modulo S (notation: I|S = J|S) if for
every ℐ ∈ I there exists 𝒥 ∈ J such that 𝒥|S = ℐ|S and for every 𝒥 ∈ J there exists ℐ ∈ I
such that ℐ|S = 𝒥|S.
Ontology of medical research projects P:
P1   Genetic_Disorder_Project ≡ Project ⊓ ∃has_Focus.Genetic_Disorder
P2   Cystic_Fibrosis_EUProject ≡ EUProject ⊓ ∃has_Focus.Cystic_Fibrosis
P3   EUProject ⊑ Project
P4   ∃has_Focus.⊤ ⊑ Project
E1   Project ⊓ (Genetic_Disorder ⊓ Cystic_Fibrosis) ⊑ ⊥
E2   ∀has_Focus.Cystic_Fibrosis ⊑ ∃has_Focus.Genetic_Disorder
Ontology of medical terms Q:
M1   Cystic_Fibrosis ≡ Fibrosis ⊓ ∃located_In.Pancreas ⊓ ∃has_Origin.Genetic_Origin
M2   Genetic_Fibrosis ≡ Fibrosis ⊓ ∃has_Origin.Genetic_Origin
M3   Fibrosis ⊓ ∃located_In.Pancreas ⊑ Genetic_Fibrosis
M4   Genetic_Fibrosis ⊑ Genetic_Disorder
M5   DEFBI_Gene ⊑ Immuno_Protein_Gene ⊓ ∃associated_With.Cystic_Fibrosis
Figure 1: Reusing medical terminology in an ontology on research projects
3. Ontology Integration and Knowledge Reuse
In this section, we elaborate on the ontology reuse scenario sketched in Section 1. Based on
this application scenario, we motivate and define the reasoning tasks to be investigated in
the remainder of the paper. In particular, our tasks are based on the notions of a conservative
extension (Section 3.2), safety (Sections 3.2 and 3.3) and module (Section 3.4). These notions
are defined relative to a language L. Within this section, we assume that L is an ontology
language based on a description logic; in Section 3.6, we will define formally the class of
ontology languages for which the given definitions of conservative extensions, safety and
modules apply. For the convenience of the reader, all the tasks we consider in this paper
are summarized in Table 2.
3.1 A Motivating Example
Suppose that an ontology engineer is in charge of a SHOIQ ontology on research projects,
which specifies different types of projects according to the research topic they are concerned
with. Assume that the ontology engineer defines two concepts—Genetic Disorder Project and
Cystic Fibrosis EUProject—in his ontology P. The first one describes projects about genetic
disorders; the second one describes European projects about cystic fibrosis, as given by the
axioms P1 and P2 in Figure 1. The ontology engineer is an expert on research projects: he
knows, for example, that every instance of EUProject must be an instance of Project (the
concept-inclusion axiom P3) and that the role has Focus can be applied only to instances
of Project (the domain axiom P4). He may be unfamiliar, however, with most of the topics
the projects cover and, in particular, with the terms Cystic Fibrosis and Genetic Disorder
mentioned in P1 and P2. In order to complete the projects ontology with suitable definitions
for these medical terms, he decides to reuse the knowledge about these subjects from a well-
established and widely-used medical ontology.
Suppose that Cystic Fibrosis and Genetic Disorder are described in an ontology Q contai-
ning the axioms M1–M5 in Figure 1. The most straightforward way to reuse these concepts
is to import in P the ontology Q—that is, to add the axioms from Q to the axioms in P
and work with the extended ontology P ∪ Q. Importing additional axioms into an ontology
may result in new logical consequences. For example, it is easy to see that axioms M1–M4
in Q imply that every instance of Cystic Fibrosis is an instance of Genetic Disorder:
    Q ⊨ α := (Cystic_Fibrosis ⊑ Genetic_Disorder)    (1)
Indeed, the concept inclusion α₁ := (Cystic_Fibrosis ⊑ Genetic_Fibrosis) follows from axioms
M1 and M2 as well as from axioms M1 and M3; α follows from axioms α₁ and M4.
Using inclusion α from (1) and axioms P1–P3 from ontology P we can now prove that
every instance of Cystic Fibrosis EUProject is also an instance of Genetic Disorder Project:
    P ∪ Q ⊨ β := (Cystic_Fibrosis_EUProject ⊑ Genetic_Disorder_Project)    (2)
This inclusion β, however, does not follow from P alone—that is, P ⊭ β. The ontology
engineer might not be aware of Entailment (2), even though it concerns the terms of primary
scope in his projects ontology P.
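Spelled out, the chain of inclusions behind Entailment (2) reads as follows (this is just the argument above made explicit, using P2, α, P3 and P1 in turn):

    Cystic_Fibrosis_EUProject ≡ EUProject ⊓ ∃has_Focus.Cystic_Fibrosis     (P2)
                              ⊑ EUProject ⊓ ∃has_Focus.Genetic_Disorder    (α)
                              ⊑ Project ⊓ ∃has_Focus.Genetic_Disorder      (P3)
                              ≡ Genetic_Disorder_Project                   (P1)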
It is natural to expect that entailments like α in (1) from an imported ontology Q result
in new logical consequences, like β in (2), over the terms defined in the main ontology P.
One would not expect, however, that the meaning of the terms defined in Q changes as a
consequence of the import, since these terms are supposed to be completely specified within
Q. Such a side effect would be highly undesirable for the modeling of ontology P since the
ontology engineer of P might not be an expert on the subject of Q and is not supposed to
alter the meaning of the terms defined in Q, not even implicitly.
The meaning of the reused terms, however, might change during the import, perhaps due
to modeling errors. In order to illustrate such a situation, suppose that the ontology engineer
has learned about the concepts Genetic Disorder and Cystic Fibrosis from the ontology Q
(including the inclusion (1)) and has decided to introduce additional axioms formalizing
the following statements:
    "Every instance of Project is different from every instance of Genetic Disorder
    and every instance of Cystic Fibrosis."    (3)
    "Every project that has Focus on Cystic Fibrosis, also has Focus on Genetic Disorder."    (4)
Note that the statements (3) and (4) can be thought of as adding new information about
projects and, intuitively, they should not change or constrain the meaning of the medical
terms.
Suppose the ontology engineer has formalized the statements (3) and (4) in ontology
P using axioms E1 and E2 respectively. At this point, the ontology engineer has intro-
duced modeling errors and, as a consequence, axioms E1 and E2 do not correspond to (3)
and (4): E1 actually formalizes the following statement: “Every instance of Project is diffe-
rent from every common instance of Genetic Disorder and Cystic Fibrosis”, and E2 expresses
that “Every object that either has Focus on nothing, or has Focus only on Cystic Fibrosis,
also has Focus on Genetic Disorder”. These kinds of modeling errors are difficult to detect,
especially when they do not cause inconsistencies in the ontology.
Note that, although axiom E1 does not correspond to fact (3), it is still a consequence
of (3) which means that it should not constrain the meaning of the medical terms. On the
other hand, E2 is not a consequence of (4) and, in fact, it constrains the meaning of medical
terms. Indeed, the axioms E1 and E2 together with axioms P1–P4 from P imply new axioms
about the concepts Cystic Fibrosis and Genetic Disorder, namely their disjointness:
    P ⊨ γ := (Genetic_Disorder ⊓ Cystic_Fibrosis ⊑ ⊥)    (5)
The entailment (5) can be proved using axiom E2, which is equivalent to:
    ⊤ ⊑ ∃has_Focus.(Genetic_Disorder ⊔ ¬Cystic_Fibrosis)    (6)
The inclusion (6) and P4 imply that every element in the domain must be a project—that
is, P ⊨ (⊤ ⊑ Project). Now, together with axiom E1, this implies (5).
The axioms E1 and E2 not only imply new statements about the medical terms, but also
cause inconsistencies when used together with the imported axioms from Q. Indeed, from
(1) and (5) we obtain P ∪ Q ⊨ δ := (Cystic_Fibrosis ⊑ ⊥), which expresses the inconsistency
of the concept Cystic Fibrosis.
To summarize, we have seen that importing an external ontology can lead to undesirable
side effects in our knowledge reuse scenario, like the entailment of new axioms or even incon-
sistencies involving the reused vocabulary. In the next section we discuss how to formalize
the effects we consider undesirable.
3.2 Conservative Extensions and Safety for an Ontology
As argued in the previous section, an important requirement for the reuse of an ontology O
within an ontology { should be that { ∪O produces exactly the same logical consequences
over the vocabulary of O as O alone does. This requirement can be naturally formulated
using the well-known notion of a conservative extension, which has recently been investi-
gated in the context of ontologies (Ghilardi, Lutz, & Wolter, 2006; Lutz et al., 2007).
Definition 1 (Deductive Conservative Extension). Let O and O₁ ⊆ O be two L-
ontologies, and S a signature over L. We say that O is a deductive S-conservative extension
of O₁ w.r.t. L, if for every axiom α over L with Sig(α) ⊆ S, we have O ⊨ α iff O₁ ⊨ α.
We say that O is a deductive conservative extension of O₁ w.r.t. L if O is a deductive
S-conservative extension of O₁ w.r.t. L for S = Sig(O₁). ♦
In other words, an ontology O is a deductive S-conservative extension of O₁ for a
signature S and language L if and only if every logical consequence α of O constructed
using the language L and symbols only from S is already a logical consequence of O₁; that
is, the additional axioms in O \ O₁ do not result in new logical consequences over the
vocabulary S. Note that if O is a deductive S-conservative extension of O₁ w.r.t. L, then
O is a deductive S₁-conservative extension of O₁ w.r.t. L for every S₁ ⊆ S.
The notion of a deductive conservative extension can be directly applied to our ontology
reuse scenario.
Definition 2 (Safety for an Ontology). Given L-ontologies O and O′, we say that O
is safe for O′ (or O imports O′ in a safe way) w.r.t. L if O ∪ O′ is a deductive conservative
extension of O′ w.r.t. L. ♦
Hence, the first reasoning task relevant for our ontology reuse scenario can be formulated
as follows:
    T1. Given L-ontologies O and O′, determine if O is safe for O′ w.r.t. L.
We have shown in Section 3.1 that, given P consisting of axioms P1–P4, E1, E2, and Q
consisting of axioms M1–M5 from Figure 1, there exists an axiom δ = (Cystic_Fibrosis ⊑ ⊥)
that uses only symbols in Sig(Q) such that Q ⊭ δ but P ∪ Q ⊨ δ. According to Definition 1,
this means that P ∪ Q is not a deductive conservative extension of Q w.r.t. any language
L in which δ can be expressed (e.g. L = ALC). It is possible, however, to show that if the
axiom E2 is removed from P then, for the resulting ontology P₁ = P \ {E2}, P₁ ∪ Q is a
deductive conservative extension of Q. The following notion is useful for proving deductive
conservative extensions:
Definition 3 (Model Conservative Extension, Lutz et al., 2007).
Let O and O₁ ⊆ O be two L-ontologies and S a signature over L. We say that O is a model
S-conservative extension of O₁, if for every model ℐ of O₁, there exists a model 𝒥 of O
such that ℐ|S = 𝒥|S. We say that O is a model conservative extension of O₁ if O is a model
S-conservative extension of O₁ for S = Sig(O₁). ♦
The notion of model conservative extension in Definition 3 can be seen as a semantic
counterpart for the notion of deductive conservative extension in Definition 1: the latter is
defined in terms of logical entailment, whereas the former is defined in terms of models.
Intuitively, an ontology O is a model S-conservative extension of O₁ if for every model of
O₁ one can find a model of O over the same domain which interprets the symbols from S
in the same way. The notion of model conservative extension, however, does not provide
a complete characterization of deductive conservative extensions, as given in Definition 1;
that is, this notion can be used for proving that an ontology is a deductive conservative
extension of another, but not vice versa:
Proposition 4 (Model vs. Deductive Conservative Extensions, Lutz et al., 2007)
1. For every two L-ontologies O, O₁ ⊆ O, and a signature S over L, if O is a model
S-conservative extension of O₁ then O is a deductive S-conservative extension of O₁
w.r.t. L;
2. There exist two ALC ontologies O and O₁ ⊆ O such that O is a deductive conservative
extension of O₁ w.r.t. ALC, but O is not a model conservative extension of O₁.
Example 5 Consider an ontology P₁ consisting of axioms P1–P4 and E1, and an ontology
Q consisting of axioms M1–M5 from Figure 1. We demonstrate that P₁ ∪ Q is a deductive
conservative extension of Q. According to Proposition 4, it is sufficient to show that P₁ ∪ Q
is a model conservative extension of Q; that is, for every model ℐ of Q there exists a model
𝒥 of P₁ ∪ Q such that ℐ|S = 𝒥|S for S = Sig(Q).
The required model 𝒥 can be constructed from ℐ as follows: take 𝒥 to be identi-
cal to ℐ except for the interpretations of the atomic concepts Genetic_Disorder_Project,
Cystic_Fibrosis_EUProject, Project, EUProject and the atomic role has_Focus, all of which
we interpret in 𝒥 as the empty set. It is easy to check that all the axioms P1–P4, E1 are
satisfied in 𝒥 and hence 𝒥 ⊨ P₁. Moreover, since the interpretation of the symbols in Q
remains unchanged, we have ℐ|Sig(Q) = 𝒥|Sig(Q) and 𝒥 ⊨ Q. Hence, P₁ ∪ Q is a model
conservative extension of Q.
For an example of Statement 2 of the Proposition, we refer the interested reader to the
literature (Lutz et al., 2007, p. 455). ♦
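As a concrete illustration of this construction (our own sketch, not from the paper; the domain and the interpretation of the Q-symbols below are arbitrary choices), the modified interpretation 𝒥 of Example 5 can be built and the axioms P1–P4 and E1 checked directly as set conditions:

    # Illustrative sketch (ours): replay the construction of Example 5 on a tiny
    # finite interpretation. Only the P-axioms need to be checked in J; the
    # Q-symbols keep their interpretation from I, so J agrees with I on Sig(Q).

    domain = {1, 2, 3}

    # An arbitrary interpretation I of (some of) the symbols of Q.
    I = {
        'Cystic_Fibrosis':  {1},
        'Genetic_Disorder': {1, 2},
        # ... the remaining Q-symbols may be interpreted arbitrarily ...
    }

    # J: identical to I on Sig(Q), but every symbol that occurs only in P_1 is
    # interpreted as the empty set / empty relation.
    J = dict(I)
    J.update({
        'Project': set(), 'EUProject': set(),
        'Genetic_Disorder_Project': set(), 'Cystic_Fibrosis_EUProject': set(),
        'has_Focus': set(),          # empty binary relation
    })

    def some(role, concept):
        # (exists role.concept)^J
        return {x for x in domain if any((x, y) in J[role] and y in concept for y in domain)}

    # P1 and P2: the defined project concepts coincide with their definitions
    assert J['Genetic_Disorder_Project'] == J['Project'] & some('has_Focus', J['Genetic_Disorder'])
    assert J['Cystic_Fibrosis_EUProject'] == J['EUProject'] & some('has_Focus', J['Cystic_Fibrosis'])
    # P3: EUProject is subsumed by Project
    assert J['EUProject'] <= J['Project']
    # P4: the domain of has_Focus is subsumed by Project
    assert some('has_Focus', domain) <= J['Project']
    # E1: Project, Genetic_Disorder and Cystic_Fibrosis have no common instance
    assert J['Project'] & J['Genetic_Disorder'] & J['Cystic_Fibrosis'] == set()

    print("J satisfies P1-P4 and E1, and agrees with I on the symbols of Q")

Since only symbols outside Sig(Q) are modified, 𝒥 agrees with ℐ on Sig(Q) by construction.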
3.3 Safety of an Ontology for a Signature
So far, in our ontology reuse scenario we have assumed that the reused ontology Q is fixed
and the axioms of Q are just copied into P during the import. In practice, however, it
is often convenient to keep Q separate from P and make its axioms available on demand
via a reference. This makes it possible to continue developing P and Q independently. For
example, the ontology engineers of the project ontology P may not be willing to depend
on a particular version of Q, or may even decide at a later time to reuse the medical terms
(Cystic Fibrosis and Genetic Disorder) from another medical ontology instead. Therefore, for
many application scenarios it is important to develop a stronger safety condition for P
which depends as little as possible on the particular ontology Q to be reused. In order to
formulate such a condition, we abstract from the particular ontology Q to be imported and
focus instead on the symbols from Q that are to be reused:
Definition 6 (Safety for a Signature). Let O be an L-ontology and S a signature over L.
We say that O is safe for S w.r.t. L, if for every L-ontology O′ with Sig(O) ∩ Sig(O′) ⊆ S,
we have that O is safe for O′ w.r.t. L; that is, O ∪ O′ is a deductive conservative extension
of O′ w.r.t. L. ♦
Intuitively, in our knowledge reuse scenario, an ontology O is safe for a signature S w.r.t.
a language L if and only if it imports in a safe way any ontology O′ written in L that shares
only symbols from S with O. The associated reasoning problem is then formulated in the
following way:
    T2. Given an L-ontology O and a signature S over L, determine if O is safe for S w.r.t. L.
As seen in Section 3.2, the ontology P consisting of axioms P1–P4, E1, E2 from Figure 1
does not import Q, consisting of axioms M1–M5 from Figure 1, in a safe way for L = ALC.
According to Definition 6, since Sig(P) ∩ Sig(Q) ⊆ S = {Cystic_Fibrosis, Genetic_Disorder},
the ontology P is not safe for S w.r.t. L.
In fact, it is possible to show a stronger result, namely, that any ontology O containing
axiom E2 is not safe for S = {Cystic_Fibrosis, Genetic_Disorder} w.r.t. L = ALC. Consider an
ontology O′ = {β₁, β₂}, where β₁ = (⊤ ⊑ Cystic_Fibrosis) and β₂ = (Genetic_Disorder ⊑ ⊥).
Since E2 is equivalent to axiom (6), it is easy to see that O ∪ O′ is inconsistent. Indeed, E2,
β₁ and β₂ imply the contradiction α = (⊤ ⊑ ⊥), which is not entailed by O′. Hence, O ∪ O′
is not a deductive conservative extension of O′. By Definition 6, since Sig(O) ∩ Sig(O′) ⊆ S,
this means that O is not safe for S.
It is clear how one could prove that an ontology O is not safe for a signature S: simply find
an ontology O′ with Sig(O) ∩ Sig(O′) ⊆ S, such that O ∪ O′ is not a deductive conservative
extension of O′. It is not so clear, however, how one could prove that O is safe for S. It
turns out that the notion of model conservative extension can also be used for this purpose.
The following lemma introduces a property which relates the notion of model conservative
extension with the notion of safety for a signature. Intuitively, it says that the notion of
model conservative extension is stable under expansion with new axioms provided they
share only symbols from S with the original ontologies.
Lemma 7 Let O, O₁ ⊆ O, and O′ be L-ontologies and S a signature over L such that
O is a model S-conservative extension of O₁ and Sig(O) ∩ Sig(O′) ⊆ S. Then O ∪ O′ is a
model S′-conservative extension of O₁ ∪ O′ for S′ = S ∪ Sig(O′).
Proof. In order to show that O ∪ O′ is a model S′-conservative extension of O₁ ∪ O′ according
to Definition 3, let ℐ be a model of O₁ ∪ O′. We construct a model 𝒥 of O ∪ O′ such
that ℐ|S′ = 𝒥|S′. (∗)
Since O is a model S-conservative extension of O₁, and ℐ is a model of O₁, by Definition 3
there exists a model 𝒥₁ of O such that ℐ|S = 𝒥₁|S. Let 𝒥 be an interpretation such
that 𝒥|Sig(O) = 𝒥₁|Sig(O) and 𝒥|S′ = ℐ|S′. Since Sig(O) ∩ S′ = Sig(O) ∩ (S ∪ Sig(O′)) ⊆
S ∪ (Sig(O) ∩ Sig(O′)) ⊆ S and ℐ|S = 𝒥₁|S, such an interpretation 𝒥 always exists. Since
𝒥|Sig(O) = 𝒥₁|Sig(O) and 𝒥₁ ⊨ O, we have 𝒥 ⊨ O; since 𝒥|S′ = ℐ|S′, Sig(O′) ⊆ S′, and
ℐ ⊨ O′, we have 𝒥 ⊨ O′. Hence 𝒥 ⊨ O ∪ O′ and ℐ|S′ = 𝒥|S′, which proves (∗).
Lemma 7 allows us to identify a condition sufficient to ensure the safety of an ontology
for a signature:
Proposition 8 (Safety for a Signature vs. Model Conservative Extensions)
Let O be an L-ontology and S a signature over L such that O is a model S-conservative
extension of the empty ontology O₁ = ∅; that is, for every interpretation ℐ there exists a
model 𝒥 of O such that 𝒥|S = ℐ|S. Then O is safe for S w.r.t. L.
Proof. In order to prove that O is safe for S w.r.t. L according to Definition 6, take any
SHOIQ ontology O′ such that Sig(O) ∩ Sig(O′) ⊆ S. We need to demonstrate that O ∪ O′
is a deductive conservative extension of O′ w.r.t. L. (∗)
Indeed, by Lemma 7, since O is a model S-conservative extension of O₁ = ∅, and
Sig(O) ∩ Sig(O′) ⊆ S, we have that O ∪ O′ is a model S′-conservative extension of O₁ ∪ O′ = O′
for S′ = S ∪ Sig(O′). In particular, since Sig(O′) ⊆ S′, we have that O ∪ O′ is a deductive
conservative extension of O′, as was required to prove (∗).
Example 9 Let P₁ be the ontology consisting of axioms P1–P4 and E1 from Figure 1.
We show that P₁ is safe for S = {Cystic_Fibrosis, Genetic_Disorder} w.r.t. L = SHOIQ. By
Proposition 8, in order to prove safety for P₁ it is sufficient to demonstrate that P₁ is a
model S-conservative extension of the empty ontology, that is, for every interpretation ℐ
there exists a model 𝒥 of P₁ such that ℐ|S = 𝒥|S.
Consider the model 𝒥 obtained from ℐ as in Example 5. As shown in Example 5, 𝒥 is
a model of P₁ and ℐ|Sig(Q) = 𝒥|Sig(Q), where Q consists of axioms M1–M5 from Figure 1.
In particular, since S ⊆ Sig(Q), we have ℐ|S = 𝒥|S. ♦
3.4 Extraction of Modules from Ontologies
In our example from Figure 1 the medical ontology Q is very small. Well-established medical
ontologies, however, can be very large and may describe subject matters in which the de-
signer of P is not interested. For example, the medical ontology Q could contain information
about genes, anatomy, surgical techniques, etc.
Even if P imports Q without changing the meaning of the reused symbols, processing—
that is, browsing, reasoning over, etc.—the resulting ontology P ∪ Q may be considerably
harder than processing P alone. Ideally, one would like to extract a (hopefully small) frag-
ment Q₁ of the external medical ontology—a module—that describes just the concepts that
are reused in P.
Intuitively, when answering an arbitrary query over the signature of P, importing the
module Q₁ should give exactly the same answers as if the whole ontology Q had been
imported.
Definition 10 (Module for an Ontology). Let O, O′ and O′₁ ⊆ O′ be L-ontologies.
We say that O′₁ is a module for O in O′ w.r.t. L, if O ∪ O′ is a deductive S-conservative
extension of O ∪ O′₁ for S = Sig(O) w.r.t. L. ♦
The task of extracting modules from imported ontologies in our ontology reuse scenario
can thus be formulated as follows:
    T3. Given L-ontologies O and O′, compute a module O′₁ for O in O′ w.r.t. L.
Example 11 Consider the ontology P₁ consisting of axioms P1–P4, E1 and the ontology Q
consisting of axioms M1–M5 from Figure 1. Recall that the axiom α from (1) is a conse-
quence of axioms M1, M2 and M4 as well as of axioms M1, M3 and M4 from ontology Q.
In fact, these sets of axioms are actually minimal subsets of Q that imply α. In particular,
for the subset Q₀ of Q consisting of axioms M2, M3, M4 and M5, we have Q₀ ⊭ α. It can
be demonstrated that P₁ ∪ Q₀ is a deductive conservative extension of Q₀; in particular,
P₁ ∪ Q₀ ⊭ α. But then, according to Definition 10, Q₀ is not a module for P₁ in Q w.r.t.
L = ALC or its extensions, since P₁ ∪ Q is not a deductive S-conservative extension of
P₁ ∪ Q₀ w.r.t. L = ALC for S = Sig(P₁). Indeed, P₁ ∪ Q ⊨ α, Sig(α) ⊆ S, but P₁ ∪ Q₀ ⊭ α.
Similarly, one can show that any subset of Q that does not imply α is not a module in Q
w.r.t. P₁.
On the other hand, it is possible to show that both subsets Q₁ = {M1, M2, M4} and
Q₂ = {M1, M3, M4} of Q are modules for P₁ in Q w.r.t. L = SHOIQ. To do so, by Defi-
nition 10, we need to demonstrate that P₁ ∪ Q is a deductive S-conservative extension of
both P₁ ∪ Q₁ and P₁ ∪ Q₂ for S = Sig(P₁) w.r.t. L. As usual, we demonstrate a stronger
fact, namely that P₁ ∪ Q is a model S-conservative extension of P₁ ∪ Q₁ and of P₁ ∪ Q₂ for S,
which is sufficient by Claim 1 of Proposition 4.
In order to show that P₁ ∪ Q is a model S-conservative extension of P₁ ∪ Q₁ for S =
Sig(P₁), consider any model ℐ of P₁ ∪ Q₁. We need to construct a model 𝒥 of P₁ ∪ Q
such that ℐ|S = 𝒥|S. Let 𝒥 be defined exactly as ℐ except that the interpretations of the
atomic concepts Fibrosis, Pancreas, Genetic_Fibrosis, and Genetic_Origin are defined as the
interpretation of Cystic_Fibrosis in ℐ, the interpretations of the atomic roles located_In and
has_Origin are defined as the identity relation, and the interpretation of the atomic concept
DEFBI_Gene is defined as the empty set. It is easy to see that axioms M1–M3 and M5
are satisfied in 𝒥. Since we do not modify the interpretation of the symbols in P₁, 𝒥 also
satisfies all axioms from P₁. Moreover, 𝒥 is a model of M4, because Genetic_Fibrosis and
Genetic_Disorder are interpreted in 𝒥 like Cystic_Fibrosis and Genetic_Disorder in ℐ, and ℐ is
a model of Q₁, which implies the concept inclusion α = (Cystic_Fibrosis ⊑ Genetic_Disorder).
Hence we have constructed a model 𝒥 of P₁ ∪ Q such that ℐ|S = 𝒥|S; thus P₁ ∪ Q is a
model S-conservative extension of P₁ ∪ Q₁.
In fact, the construction above works if we replace Q₁ with any subset of Q that implies
α. In particular, P₁ ∪ Q is also a model S-conservative extension of P₁ ∪ Q₂. In this way, we
have demonstrated that the modules for P₁ in Q are exactly those subsets of Q that imply
α. ♦
An algorithm implementing the task T3 can be used for extracting a module from an
ontology O imported into P prior to performing reasoning over the terms in P. However,
when the ontology P is modified, the module has to be extracted again since a module O₁
for P in O might not necessarily be a module for the modified ontology. Since the extraction
of modules is potentially an expensive operation, it would be more convenient to extract a
module only once and reuse it for any version of the ontology P that reuses the specified
symbols from O. This idea motivates the following definition:
Definition 12 (Module for a Signature). Let O′ and O′₁ ⊆ O′ be L-ontologies and S
a signature over L. We say that O′₁ is a module for S in O′ (or an S-module in O′) w.r.t.
L, if for every L-ontology O with Sig(O) ∩ Sig(O′) ⊆ S, we have that O′₁ is a module for O
in O′ w.r.t. L. ♦
Intuitively, the notion of a module for a signature is a uniform analog of the notion of a
module for an ontology, in a similar way as the notion of safety for a signature is a uniform
analog of safety for an ontology. The reasoning task corresponding to Definition 12 can be
formulated as follows:
    T4. Given an L-ontology O′ and a signature S over L, compute a module O′₁ for S in O′.
Continuing Example 11, it is possible to demonstrate that any subset of Q that implies
axiom α is in fact a module for S = {Cystic_Fibrosis, Genetic_Disorder} in Q, that is, it can
be imported instead of Q into every ontology O that shares with Q only symbols from S. In
order to prove this, we use the following sufficient condition based on the notion of model
conservative extension:
Proposition 13 (Modules for a Signature vs. Model Conservative Extensions)
Let O′ and O′₁ ⊆ O′ be L-ontologies and S a signature over L such that O′ is a model
S-conservative extension of O′₁. Then O′₁ is a module for S in O′ w.r.t. L.
Proof. In order to prove that O′₁ is a module for S in O′ w.r.t. L according to Definition 12,
take any SHOIQ ontology O such that Sig(O) ∩ Sig(O′) ⊆ S. We need to demonstrate
that O′₁ is a module for O in O′ w.r.t. L, that is, according to Definition 10, O ∪ O′ is a
deductive S′-conservative extension of O ∪ O′₁ w.r.t. L for S′ = Sig(O). (∗)
Indeed, by Lemma 7, since O′ is a model S-conservative extension of O′₁, and Sig(O′) ∩
Sig(O) ⊆ S, we have that O′ ∪ O is a model S″-conservative extension of O′₁ ∪ O for
S″ = S ∪ Sig(O). In particular, since S′ = Sig(O) ⊆ S″, we have that O′ ∪ O is a deductive
S′-conservative extension of O′₁ ∪ O w.r.t. L, as was required to prove (∗).
Example 14 Let Q and α be as in Example 11. We demonstrate that any subset Q₁ of Q
which implies α is a module for S = {Cystic_Fibrosis, Genetic_Disorder} in Q. According to
Proposition 13, it is sufficient to demonstrate that Q is a model S-conservative extension of
Q₁, that is, for every model ℐ of Q₁ there exists a model 𝒥 of Q such that ℐ|S = 𝒥|S. It is
easy to see that if the model 𝒥 is constructed from ℐ as in Example 11, then the required
property holds. ♦
Note that a module for a signature S in Q does not necessarily contain all the axioms that
contain symbols from S. For example, the module Q₁ consisting of the axioms M1, M2 and
M4 from Q does not contain axiom M5, which mentions the atomic concept Cystic_Fibrosis
from S. Also note that even in a minimal module like Q₁ there might still be some axioms,
like M2, that do not mention symbols from S at all.
3.5 Minimal Modules and Essential Axioms
One is usually not interested in extracting arbitrary modules from a reused ontology, but
in extracting modules that are easy to process afterwards. Ideally, the extracted modules
should be as small as possible. Hence it is reasonable to consider the problem of extracting
minimal modules; that is, modules that contain no other module as a subset.
Examples 11 and 14 demonstrate that a minimal module for an ontology or a signature
is not necessarily unique: the ontology Q consisting of axioms M1–M5 has two minimal mo-
dules Q₁ = {M1, M2, M4} and Q₂ = {M1, M3, M4}, for the ontology P₁ = {P1, P2, P3, P4, E1}
as well as for the signature S = {Cystic_Fibrosis, Genetic_Disorder}, since these are the min-
imal sets of axioms that imply axiom α = (Cystic_Fibrosis ⊑ Genetic_Disorder). Depending
on the application scenario, one can consider several variations of tasks T3 and T4 for com-
puting minimal modules. In some applications it might be necessary to extract all minimal
modules, whereas in others any minimal module suffices.
Axioms that do not occur in a minimal module in Q are not essential for P because
they can always be removed from every module of Q, and thus never need to be imported
into P. This is not true for the axioms that occur in minimal modules of Q. It might be
necessary to import such axioms into P in order not to lose essential information from Q.
These arguments motivate the following notion:
Definition 15 (Essential Axiom). Let O and O′ be L-ontologies, S a signature and α
an axiom over L. We say that α is essential for O in O′ w.r.t. L if α is contained in some
minimal module for O in O′ w.r.t. L. We say that α is an essential axiom for S in O′ w.r.t.
L (or S-essential in O′) if α is contained in some minimal module for S in O′ w.r.t. L. ♦
In our example above the axioms M1–M4 from Q are essential for the ontology P₁ and
for the signature S = {Cystic_Fibrosis, Genetic_Disorder}, but the axiom M5 is not essential.
In certain situations one might be interested in computing the set of essential axioms of
an ontology, which can be done by computing the union of all minimal modules. Note that
Notation        Input        Task
Checking safety:
T1              O, O′, L     Check if O is safe for O′ w.r.t. L
T2              O, S, L      Check if O is safe for S w.r.t. L
Extraction of [all / some / union of] [minimal] module(s):
T3[a,s,u][m]    O, O′, L     Extract module(s) in O′ w.r.t. O and L
T4[a,s,u][m]    O′, S, L     Extract module(s) for S in O′ w.r.t. L
where O, O′ are ontologies and S a signature over L
Table 2: Summary of reasoning tasks relevant for ontology integration and reuse
computing the union of minimal modules might be easier than computing all the minimal
modules since one does not need to identify which axiom belongs to which minimal module.
In Table 2 we have summarized the reasoning tasks we found to be potentially relevant
for ontology reuse scenarios and have included the variants T3am, T3sm, and T3um of the
task T3 and T4am, T4sm, and T4um of the task T4 for computation of minimal modules
discussed in this section.
Further variants of the tasks T3 and T4 could be considered relevant to ontology reuse.
For example, instead of computing minimal modules, one might be interested in computing
modules with the smallest number of axioms, or modules of the smallest size measured in
the number of symbols, or in any other complexity measure of the ontology. The theoretical
results that we will present in this paper can easily be extended to many such reasoning
tasks.
3.6 Safety and Modules for General Ontology Languages
All the notions introduced in Section 3 are defined with respect to an “ontology language”.
So far, however, we have implicitly assumed that the ontology languages are the description
logics defined in Section 2—that is, the fragments of the DL SHOIQ. The notions consid-
ered in Section 3 can be applied, however, to a much broader class of ontology languages.
Our definitions apply to any ontology language with a notion of entailment of axioms from
ontologies, and a mechanism for identifying their signatures.
Definition 16. An ontology language is a tuple L = (Sg, Ax, Sig, ⊨), where Sg is a set
of signature elements (or vocabulary) of L, Ax is a set of axioms in L, Sig is a function
that assigns to every axiom α ∈ Ax a finite set Sig(α) ⊆ Sg called the signature of α, and
⊨ is the entailment relation between sets of axioms O ⊆ Ax and axioms α ∈ Ax, written
O ⊨ α. An ontology over L is a finite set of axioms O ⊆ Ax. We extend the function Sig
to ontologies O as follows: Sig(O) := ⋃_{α∈O} Sig(α). ♦
Definition 16 provides a very general notion of an ontology language. A language L is
given by a set of symbols (a signature), a set of formulae (axioms) that can be constructed
over those symbols, a function that assigns to each formula its signature, and an entailment
relation between sets of axioms. The ontology language OWL DL as well as all description
logics defined in Section 2 are examples of ontology languages in accordance with Definition 16.
Other examples of ontology languages are First Order Logic, Second Order Logic, and Logic
Programs.
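To make Definition 16 concrete, the following sketch (ours; all names and the restriction to an explicitly enumerated pool of test axioms are illustrative assumptions) packages an ontology language as its four components and extends Sig to ontologies as in the definition:

    # Illustrative sketch (ours): an ontology language as in Definition 16, given by
    # a vocabulary, a set of axioms, a signature function and an entailment relation.
    # Any concrete logic (a DL, FOL, logic programs, ...) plugs in its own components.

    from typing import Callable, FrozenSet, Hashable, Iterable

    Axiom = Hashable
    Symbol = Hashable

    class OntologyLanguage:
        def __init__(self,
                     is_symbol: Callable[[Symbol], bool],              # membership in Sg
                     is_axiom: Callable[[Axiom], bool],                # membership in Ax
                     sig: Callable[[Axiom], FrozenSet[Symbol]],        # Sig(alpha), finite
                     entails: Callable[[FrozenSet[Axiom], Axiom], bool]):  # O |= alpha
            self.is_symbol, self.is_axiom, self.sig, self.entails = \
                is_symbol, is_axiom, sig, entails

        def signature(self, ontology: Iterable[Axiom]) -> FrozenSet[Symbol]:
            # Sig(O) := union of Sig(alpha) for alpha in O
            result = frozenset()
            for alpha in ontology:
                result |= self.sig(alpha)
            return result

        def is_deductive_conservative(self, O: FrozenSet[Axiom],
                                      O1: FrozenSet[Axiom],
                                      S: FrozenSet[Symbol],
                                      candidate_axioms: Iterable[Axiom]) -> bool:
            # Definition 1, restricted to an explicitly given finite pool of candidate
            # axioms over S; the real condition quantifies over *all* axioms of L.
            assert O1 <= O
            return all(self.entails(O, a) == self.entails(O1, a)
                       for a in candidate_axioms if self.sig(a) <= S)

The last method only approximates Definition 1 over a finite pool of candidate axioms; as the results in Section 4 show, the unrestricted condition cannot be decided in general for expressive DLs.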
It is easy to see that the notions of deductive conservative extension (Definition 1), safety
(Definitions 2 and 6) and modules (Definitions 10 and 12), as well as all the reasoning tasks
from Table 2, are well-defined for every ontology language L given by Definition 16. The
definition of model conservative extension (Definition 3) and the propositions involving
model conservative extensions (Propositions 4, 8, and 13) can also be extended to other
languages with a standard Tarski model-theoretic semantics, such as Higher Order Logic.
To simplify the presentation, however, we will not formulate general requirements on the
semantics of ontology languages, and assume that we deal with sublanguages of SHOIQ
whenever semantics is taken into account.
In the remainder of this section, we establish the relationships between the different
notions of safety and modules for arbitrary ontology languages.
Proposition 17 (Safety vs. Modules for an Ontology) Let L be an ontology language,
and let O, O′, and O′₁ ⊆ O′ be ontologies over L. Then:
1. O′ is safe for O w.r.t. L iff the empty ontology is a module for O in O′ w.r.t. L.
2. If O′ \ O′₁ is safe for O ∪ O′₁ then O′₁ is a module for O in O′ w.r.t. L.
Proof. 1. By Definition 2, O′ is safe for O w.r.t. L iff (a) O ∪ O′ is a deductive conservative
extension of O w.r.t. L. By Definition 10, the empty ontology O′₀ = ∅ is a module for O in
O′ w.r.t. L iff (b) O ∪ O′ is a deductive S-conservative extension of O ∪ O′₀ = O w.r.t. L
for S = Sig(O). It is easy to see that (a) is the same as (b).
2. By Definition 2, O′ \ O′₁ is safe for O ∪ O′₁ w.r.t. L iff (c) O ∪ O′₁ ∪ (O′ \ O′₁) = O ∪ O′
is a deductive conservative extension of O ∪ O′₁ w.r.t. L. In particular, O ∪ O′ is a deductive
S-conservative extension of O ∪ O′₁ w.r.t. L for S = Sig(O), which implies, by Definition 10,
that O′₁ is a module for O in O′ w.r.t. L.
We also provide the analog of Proposition 17 for the notions of safety and modules for
a signature:
Proposition 18 (Safety vs. Modules for a Signature) Let L be an ontology language,
O′ and O′₁ ⊆ O′ ontologies over L, and S a subset of the signature of L. Then:
1. O′ is safe for S w.r.t. L iff the empty ontology O′₀ = ∅ is an S-module in O′ w.r.t. L.
2. If O′ \ O′₁ is safe for S ∪ Sig(O′₁) w.r.t. L, then O′₁ is an S-module in O′ w.r.t. L.
Proof. 1. By Definition 6, O′ is safe for S w.r.t. L iff (a) for every O with Sig(O′) ∩ Sig(O) ⊆
S, it is the case that O′ is safe for O w.r.t. L. By Definition 12, O′₀ = ∅ is an S-module
in O′ w.r.t. L iff (b) for every O with Sig(O′) ∩ Sig(O) ⊆ S, it is the case that O′₀ = ∅ is
a module for O in O′ w.r.t. L. By Claim 1 of Proposition 17, it is easy to see that (a) is
equivalent to (b).
2. By Definition 6, O′ \ O′₁ is safe for S ∪ Sig(O′₁) w.r.t. L iff (c) for every O, with
Sig(O′ \ O′₁) ∩ Sig(O) ⊆ S ∪ Sig(O′₁), we have that O′ \ O′₁ is safe for O w.r.t. L. By
Definition 12, O′₁ is an S-module in O′ w.r.t. L iff (d) for every O with Sig(O′) ∩ Sig(O) ⊆ S,
we have that O′₁ is a module for O in O′ w.r.t. L.
In order to prove that (c) implies (d), let O be such that Sig(O′) ∩ Sig(O) ⊆ S. We need
to demonstrate that O′₁ is a module for O in O′ w.r.t. L. (∗)
Let Õ := O ∪ O′₁. Note that Sig(O′ \ O′₁) ∩ Sig(Õ) = Sig(O′ \ O′₁) ∩ (Sig(O) ∪ Sig(O′₁)) ⊆
(Sig(O′ \ O′₁) ∩ Sig(O)) ∪ Sig(O′₁) ⊆ S ∪ Sig(O′₁). Hence, by (c) we have that O′ \ O′₁ is safe
for Õ = O ∪ O′₁ w.r.t. L, which implies by Claim 2 of Proposition 17 that O′₁ is a module
for O in O′ w.r.t. L. (∗)
4. Undecidability and Complexity Results
In this section we study the computational properties of the tasks in Table 2 for ontology
languages that correspond to fragments of the description logic SHOIQ. We demonstrate
that most of these reasoning tasks are algorithmically unsolvable even for relatively inex-
pressive DLs, and are computationally hard for simple DLs.
Since the notions of modules and safety are defined in Section 3 using the notion of
deductive conservative extension, it is reasonable to identify which (un)decidability and
complexity results for conservative extensions are applicable to the reasoning tasks in Ta-
ble 2. The computational properties of conservative extensions have recently been studied
in the context of description logics. Given O₁ ⊆ O over a language L, the problem of
deciding whether O is a deductive conservative extension of O₁ w.r.t. L is 2-EXPTIME-
complete for L = ALC (Ghilardi et al., 2006). This result was extended by Lutz et al.
(2007), who showed that the problem is 2-EXPTIME-complete for L = ALCQI and unde-
cidable for L = ALCQIO. Recently, the problem has also been studied for simple DLs; it
has been shown that deciding deductive conservative extensions is EXPTIME-complete for
L = EL (Lutz & Wolter, 2007). These results can be immediately applied to the notions of
safety and modules for an ontology:
Proposition 19 Given ontologies O and O′ over L, the problem of determining whether
O is safe for O′ w.r.t. L is EXPTIME-complete for L = EL, 2-EXPTIME-complete for
L = ALC and L = ALCQI, and undecidable for L = ALCQIO. Given ontologies O, O′,
and O′₁ ⊆ O′ over L, the problem of determining whether O′₁ is a module for O in O′ is
EXPTIME-complete for L = EL, 2-EXPTIME-complete for L = ALC and L = ALCQI,
and undecidable for L = ALCQIO.
Proof. By Definition 2, an ontology O is safe for O′ w.r.t. L iff O ∪ O′ is a deductive
conservative extension of O′ w.r.t. L. By Definition 10, an ontology O′₁ is a module for O in
O′ w.r.t. L iff O ∪ O′ is a deductive S-conservative extension of O ∪ O′₁ for S = Sig(O) w.r.t.
L. Hence, any algorithm for checking deductive conservative extensions can be reused for
checking safety and modules.
Conversely, we demonstrate that any algorithm for checking safety or modules can be
used for checking deductive conservative extensions. Indeed, O is a deductive conservative
extension of O₁ ⊆ O w.r.t. L iff O \ O₁ is safe for O₁ w.r.t. L iff, by Claim 1 of Proposition 17,
O₀ = ∅ is a module for O₁ in O \ O₁ w.r.t. L.
Corollary 20 There exist algorithms performing tasks T1 and T3[a,s,u]m from Table 2
for L = EL and L = ALCQI that run in EXPTIME and 2-EXPTIME respectively. There
is no algorithm performing tasks T1 or T3[a,s,u]m from Table 2 for L = ALCQIO.
Proof. The task T1 corresponds directly to the problem of checking the safety of an ontology,
as given in Definition 2.
Suppose that we have a 2-EXPTIME algorithm that, given ontologies O, O′ and O′₁ ⊆
O′, determines whether O′₁ is a module for O in O′ w.r.t. L = ALCQI. We demonstrate
that this algorithm can be used to solve the reasoning tasks T3am, T3sm and T3um for
L = ALCQI in 2-EXPTIME. Indeed, given ontologies O and O′, one can enumerate all
subsets of O′ and check in 2-EXPTIME which of these subsets are modules for O in O′
w.r.t. L. Then we can determine which of these modules are minimal and return all of them,
one of them, or the union of them depending on the reasoning task.
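The enumeration argument above can be phrased as the following illustrative procedure (our sketch; is_module stands for the assumed decision procedure, whose cost dominates the overall, exponential, running time):

    # Illustrative sketch (ours) of the enumeration argument above. is_module(O, O1, Oprime)
    # is the assumed decision procedure for "O1 is a module for O in Oprime w.r.t. L";
    # axioms are assumed to be hashable. The loop over all subsets of Oprime makes the
    # procedure exponential in the number of axioms of Oprime.

    from itertools import combinations

    def minimal_modules(O, Oprime, is_module):
        """All minimal modules for O in Oprime (task T3am)."""
        Oprime = list(Oprime)
        modules = []
        # Enumerate candidates by increasing size, so every strictly smaller module
        # has already been seen; a candidate containing a found module is skipped.
        for size in range(len(Oprime) + 1):
            for subset in combinations(Oprime, size):
                candidate = frozenset(subset)
                if any(m <= candidate for m in modules):
                    continue                      # contains a smaller module: not minimal
                if is_module(frozenset(O), candidate, frozenset(Oprime)):
                    modules.append(candidate)     # no smaller module inside it, hence minimal
        return modules

    def some_minimal_module(O, Oprime, is_module):
        """One minimal module (task T3sm); Oprime itself is always a module, so one exists."""
        return minimal_modules(O, Oprime, is_module)[0]

    def union_of_minimal_modules(O, Oprime, is_module):
        """Union of all minimal modules, i.e. the essential axioms (task T3um)."""
        return frozenset().union(*minimal_modules(O, Oprime, is_module))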
Finally, we prove that solving any of the reasoning tasks T3am, T3sm and T3um is not
easier than checking the safety of an ontology. Indeed, by Claim 1 of Proposition 17, an
ontology O is safe for O′ w.r.t. L iff O₀ = ∅ is a module for O′ in O w.r.t. L. Note that
the empty ontology O₀ = ∅ is a module for O′ in O w.r.t. L iff O₀ = ∅ is the only minimal
module for O′ in O w.r.t. L.
We have demonstrated that the reasoning tasks T1 and T3[a,s,u]m are computationally
unsolvable for DLs that are as expressive as ALCQIO, and are 2-EXPTIME-hard for ALC.
In the remainder of this section, we focus on the computational properties of the reasoning
tasks T2 and T4[a,s,u]m related to the notions of safety and modules for a signature. We
demonstrate that all of these reasoning tasks are undecidable for DLs that are as expressive
as ALCO.
Theorem 21 (Undecidability of Safety for a Signature) The problem of checking
whether an ontology O consisting of a single ALC-axiom is safe for a signature S is unde-
cidable w.r.t. L = ALCO.
Proof. The proof is a variation of the construction for undecidability of deductive conser-
vative extensions in ALCQIO (Lutz et al., 2007), based on a reduction to a domino tiling
problem.
A domino system is a triple D = (T, H, V) where T = {1, . . . , k} is a finite set of tiles
and H, V ⊆ T × T are horizontal and vertical matching relations. A solution for a domino
system D is a mapping t_{i,j} that assigns to every pair of integers i, j ≥ 1 an element of T,
such that ⟨t_{i,j}, t_{i,j+1}⟩ ∈ H and ⟨t_{i,j}, t_{i+1,j}⟩ ∈ V. A periodic solution for a domino system
D is a solution t_{i,j} for which there exist integers m ≥ 1, n ≥ 1, called periods, such that
t_{i+m,j} = t_{i,j} and t_{i,j+n} = t_{i,j} for every i, j ≥ 1.
Let 𝔗 be the set of all domino systems, 𝔗_s be the subset of 𝔗 that admit a solution and
𝔗_ps be the subset of 𝔗_s that admit a periodic solution. It is well-known (Börger, Grädel,
& Gurevich, 1997, Theorem 3.1.7) that the sets 𝔗 \ 𝔗_s and 𝔗_ps are recursively inseparable,
that is, there is no recursive (i.e. decidable) subset 𝔗′ ⊆ 𝔗 of domino systems such that
𝔗_ps ⊆ 𝔗′ ⊆ 𝔗_s.
We use this property in our reduction. For each domino system D, we construct a
signature S = S(D), and an ontology O = O(D) which consists of a single ALC-axiom such
that:
290
Modular Reuse of Ontologies: Theory and Practice
(q
1
) · _ A
1
. . A
k
where T = ¦1, . . . , k¦
(q
2
) A
t
¯ A
t
_ ⊥ 1 ≤ t < t
t
≤ k
(q
3
) A
t
_ ∃r
H
.(

/t,t

)∈H
A
t
) 1 ≤ t ≤ k
(q
4
) A
t
_ ∃r
V
.(

/t,t

)∈V
A
t
) 1 ≤ t ≤ k
Figure 2: An ontology O
tile
= O
tile
(D) expressing tiling conditions for a domino system D
(a) if D does not have a solution then O = O(D) is safe for S = S(D) w.r.t. L = /L(O,
and
(b) if D has a periodic solution then O = O(D) is not safe for S = S(D) w.r.t. L = /L(O.
In other words, for the set T
t
consisting of the domino systems D such that O = O(D)
is not safe for S = S(D) w.r.t. L = /L(O, we have T
ps
⊆ T
t
⊆ T
s
. Since T ` T
s
and T
ps
are recursively inseparable, this implies undecidability for T
t
and hence for the problem of
checking if O is S-safe w.r.t. L = /L(O, because otherwise one can use this problem for
deciding membership in T
t
.
The signature S = S(D) and the ontology O = O(D) are constructed as follows. Given a domino system D = (T, H, V), let S consist of fresh atomic concepts A_t for every t ∈ T and two atomic roles r_H and r_V. Consider the ontology O_tile in Figure 2 constructed for D. Note that Sig(O_tile) = S. The axioms of O_tile express the tiling conditions for the domino system D: (q1) and (q2) express that every domain element is assigned a unique tile t ∈ T; (q3) and (q4) express that every domain element has horizontal and vertical matching successors.
Now let s be an atomic role and B an atomic concept such that s, B ∉ S. Let O := {β} where:

  β := ⊤ ⊑ ∃s.( ⊔_{(C_i ⊑ D_i) ∈ O_tile} (C_i ⊓ ¬D_i)  ⊔  (∃r_H.∃r_V.B ⊓ ∃r_V.∃r_H.¬B) )

We say that r_H and r_V commute in an interpretation I = (Δ^I, ·^I) if for all domain elements a, b, c, d_1 and d_2 from Δ^I such that ⟨a, b⟩ ∈ r_H^I, ⟨b, d_1⟩ ∈ r_V^I, ⟨a, c⟩ ∈ r_V^I, and ⟨c, d_2⟩ ∈ r_H^I, we have d_1 = d_2. The following claims can be easily proved:
Claim 1. If O_tile(D) has a model I in which r_H and r_V commute, then D has a solution.

Indeed, a model I = (Δ^I, ·^I) of O_tile(D) can be used to guide the construction of a solution t_{i,j} for D as follows. For every i, j ≥ 1, we construct t_{i,j} inductively together with elements a_{i,j} ∈ Δ^I such that ⟨a_{i,j}, a_{i,j+1}⟩ ∈ r_H^I and ⟨a_{i,j}, a_{i+1,j}⟩ ∈ r_V^I. We set a_{1,1} to any element from Δ^I.

Now suppose a_{i,j} with i, j ≥ 1 is constructed. Since I is a model of axioms (q1) and (q2) from Figure 2, there is a unique A_t with 1 ≤ t ≤ k such that a_{i,j} ∈ A_t^I. Then we set t_{i,j} := t. Since I is a model of axioms (q3) and (q4) and a_{i,j} ∈ A_t^I, there exist b, c ∈ Δ^I and t', t'' ∈ T such that ⟨a_{i,j}, b⟩ ∈ r_H^I, ⟨a_{i,j}, c⟩ ∈ r_V^I, ⟨t, t'⟩ ∈ H, ⟨t, t''⟩ ∈ V, b ∈ A_{t'}^I, and c ∈ A_{t''}^I. In this case we assign a_{i,j+1} := b, a_{i+1,j} := c, t_{i,j+1} := t', and t_{i+1,j} := t''.

Note that some values of a_{i,j} and t_{i,j} can be assigned twice: a_{i+1,j+1} and t_{i+1,j+1} are constructed both for a_{i+1,j} and for a_{i,j+1}. However, since r_V and r_H commute in I, the value for a_{i+1,j+1} is unique, and because of (q2), the value for t_{i+1,j+1} is unique. It is easy to see that t_{i,j} is a solution for D.
Claim 2. If I is a model of O_tile ∪ O, then r_H and r_V do not commute in I.

Indeed, it is easy to see that O_tile ∪ O ⊨ (⊤ ⊑ ∃s.[∃r_H.∃r_V.B ⊓ ∃r_V.∃r_H.¬B]). Hence, if I = (Δ^I, ·^I) is a model of O_tile ∪ O, then there exist a, b, c, d_1 and d_2 such that ⟨x, a⟩ ∈ s^I for some x ∈ Δ^I, ⟨a, b⟩ ∈ r_H^I, ⟨b, d_1⟩ ∈ r_V^I, d_1 ∈ B^I, ⟨a, c⟩ ∈ r_V^I, ⟨c, d_2⟩ ∈ r_H^I, and d_2 ∈ (¬B)^I. This implies that d_1 ≠ d_2, and so r_H and r_V do not commute in I.
Finally, we demonstrate that O = O(D) satisfies properties (a) and (b).

In order to prove property (a) we use the sufficient condition for safety given in Proposition 8 and demonstrate that, if D has no solution, then for every interpretation I there exists a model J of O such that J|_S = I|_S. By Proposition 8, this will imply that O is safe for S w.r.t. L.

Let I be an arbitrary interpretation. Since D has no solution, by the contrapositive of Claim 1, either (1) I is not a model of O_tile, or (2) r_H and r_V do not commute in I. We demonstrate for both of these cases how to construct the required model J of O such that J|_S = I|_S.

Case (1). If I = (Δ^I, ·^I) is not a model of O_tile, then there exists an axiom (C_i ⊑ D_i) ∈ O_tile such that I ⊭ (C_i ⊑ D_i). That is, there exists a domain element a ∈ Δ^I such that a ∈ C_i^I but a ∉ D_i^I. Let us define J to be identical to I except for the interpretation of the atomic role s, which we define in J as s^J = {⟨x, a⟩ | x ∈ Δ^I}. Since the interpretations of the symbols in S have remained unchanged, we have a ∈ C_i^J, a ∉ D_i^J, and so J ⊨ (⊤ ⊑ ∃s.[C_i ⊓ ¬D_i]). This implies that J ⊨ β, and so we have constructed a model J of O such that J|_S = I|_S.
Case (2). Suppose that r_H and r_V do not commute in I = (Δ^I, ·^I). This means that there exist domain elements a, b, c, d_1 and d_2 from Δ^I with ⟨a, b⟩ ∈ r_H^I, ⟨b, d_1⟩ ∈ r_V^I, ⟨a, c⟩ ∈ r_V^I, and ⟨c, d_2⟩ ∈ r_H^I, such that d_1 ≠ d_2. Let us define J to be identical to I except for the interpretation of the atomic role s and the atomic concept B. We interpret s in J as s^J = {⟨x, a⟩ | x ∈ Δ^I}. We interpret B in J as B^J = {d_1}. Note that a ∈ (∃r_H.∃r_V.B)^J and a ∈ (∃r_V.∃r_H.¬B)^J since d_1 ≠ d_2. So we have J ⊨ (⊤ ⊑ ∃s.[∃r_H.∃r_V.B ⊓ ∃r_V.∃r_H.¬B]), which implies that J ⊨ β, and thus we have constructed a model J of O such that J|_S = I|_S.
In order to prove property (b), assume that D has a periodic solution t_{i,j} with periods m, n ≥ 1. We demonstrate that O is not S-safe w.r.t. L. For this purpose we construct an ALCO-ontology O' with Sig(O) ∩ Sig(O') ⊆ S such that O ∪ O' ⊨ (⊤ ⊑ ⊥), but O' ⊭ (⊤ ⊑ ⊥). This will imply that O is not safe for O' w.r.t. L = ALCO, and, hence, is not safe for S w.r.t. L = ALCO.

We define O' such that every model of O' is a finite encoding of the periodic solution t_{i,j} with periods m and n. For every pair (i, j) with 1 ≤ i ≤ m and 1 ≤ j ≤ n we introduce a fresh individual a_{i,j} and define O' to be the extension of O_tile with the following axioms:
  (p1)  {a_{i1,j}} ⊑ ∃r_V.{a_{i2,j}}      (p2)  {a_{i1,j}} ⊑ ∀r_V.{a_{i2,j}},   where i2 = i1 + 1 mod m
  (p3)  {a_{i,j1}} ⊑ ∃r_H.{a_{i,j2}}      (p4)  {a_{i,j1}} ⊑ ∀r_H.{a_{i,j2}},   where j2 = j1 + 1 mod n
  (p5)  ⊤ ⊑ ⊔_{1≤i≤m, 1≤j≤n} {a_{i,j}}
The purpose of axioms (p1)–(p5) is to ensure that r_H and r_V commute in every model of O'. It is easy to see that O' has a model corresponding to every periodic solution for D with periods m and n. Hence O' ⊭ (⊤ ⊑ ⊥). On the other hand, by Claim 2, since O' contains O_tile, in every model of O' ∪ O the roles r_H and r_V do not commute. This is only possible if O' ∪ O has no models, so O' ∪ O ⊨ (⊤ ⊑ ⊥).
A direct consequence of Theorem 21 and Proposition 18 is the undecidability of the problem of checking whether a subset of an ontology is a module for a signature:

Corollary 22 Given a signature S and ALC-ontologies O' and O'_1 ⊆ O', the problem of determining whether O'_1 is an S-module in O' w.r.t. L = ALCO is undecidable.

Proof. By Claim 1 of Proposition 18, O is S-safe w.r.t. L iff O_0 = ∅ is an S-module in O w.r.t. L. Hence any algorithm for recognizing modules for a signature in L can be used for checking if an ontology is safe for a signature in L.

Corollary 23 There is no algorithm that can perform tasks T2 or T4[a,s,u]m for L = ALCO.

Proof. Theorem 21 directly implies that there is no algorithm for task T2, since this task corresponds to the problem of checking safety for a signature.

Solving any of the reasoning tasks T4am, T4sm, or T4um for L is at least as hard as checking the safety of an ontology, since, by Claim 1 of Proposition 18, an ontology O is S-safe w.r.t. L iff O_0 = ∅ is (the only minimal) S-module in O w.r.t. L.
5. Sufficient Conditions for Safety
Theorem 21 establishes the undecidability of checking whether an ontology expressed in OWL DL is safe w.r.t. a signature. This undecidability result is discouraging and leaves us with two alternatives: First, we could focus on simple DLs for which this problem is decidable. Alternatively, we could look for sufficient conditions for the notion of safety—that is, if an ontology satisfies our conditions, then we can guarantee that it is safe; the converse, however, does not necessarily hold.

The remainder of this paper focuses on the latter approach. Before we go any further, however, it is worth noting that Theorem 21 still leaves room for investigating the former approach. Indeed, safety may still be decidable for weaker description logics, such as EL, or even for very expressive logics such as SHIQ. In the case of SHIQ, however, existing results (Lutz et al., 2007) strongly indicate that checking safety is likely to be exponentially harder than reasoning, and practical algorithms may be hard to design. This said, in what follows we will focus on defining sufficient conditions for safety that we can use in practice and restrict ourselves to OWL DL—that is, SHOIQ—ontologies.
5.1 Safety Classes
In general, any sufficient condition for safety can be defined by giving, for each signature
S, the set of ontologies over a language that satisfy the condition for that signature. These
ontologies are then guaranteed to be safe for the signature under consideration. These
intuitions lead to the notion of a safety class.
Definition 24 (Class of Ontologies, Safety Class). A class of ontologies for a language L is a function O(·) that assigns to every subset S of the signature of L a subset O(S) of ontologies in L. A class O(·) is anti-monotonic if S_1 ⊆ S_2 implies O(S_2) ⊆ O(S_1); it is compact if O ∈ O(S ∩ Sig(O)) for each O ∈ O(S); it is subset-closed if O_1 ⊆ O_2 and O_2 ∈ O(S) implies O_1 ∈ O(S); and it is union-closed if O_1 ∈ O(S) and O_2 ∈ O(S) implies (O_1 ∪ O_2) ∈ O(S).

A safety class (also called a sufficient condition for safety) for an ontology language L is a class of ontologies O(·) for L such that, for each S, (i) ∅ ∈ O(S), and (ii) each ontology O ∈ O(S) is S-safe for L. ♦
Intuitively, a class of ontologies is a collection of sets of ontologies parameterized by a signature. A safety class represents a sufficient condition for safety: each ontology in O(S) should be safe for S. Also, w.l.o.g., we assume that the empty ontology ∅ belongs to every safety class for every signature. In what follows, whenever an ontology O belongs to a safety class for a given signature and the safety class is clear from the context, we will sometimes say that O passes the safety test for S.

Safety classes may admit many natural properties, as given in Definition 24. Anti-monotonicity intuitively means that if an ontology O can be proved to be safe w.r.t. S using the sufficient condition, then O can be proved to be safe w.r.t. every subset of S. Compactness means that it is sufficient to consider only the common elements of Sig(O) and S for checking safety. Subset-closure (union-closure) means that if O (respectively, O_1 and O_2) satisfies the sufficient condition for safety, then every subset of O (the union of O_1 and O_2) also satisfies this condition.
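These notions are easy to state operationally. In the following Python sketch (our own modelling, not from the paper), a safety class is represented extensionally as a membership test that takes a signature and an ontology, the union of two safety classes is the pointwise disjunction of the two tests (this union is revisited in Section 5.6), and subset-closure can be spot-checked by brute force for a fixed input.

```python
from itertools import combinations
from typing import Callable, FrozenSet

Axiom = str                                            # placeholder axiom representation
Ontology = FrozenSet[Axiom]
Signature = FrozenSet[str]
SafetyClass = Callable[[Signature, Ontology], bool]    # answers "O ∈ O(S)?"

def union_class(c1: SafetyClass, c2: SafetyClass) -> SafetyClass:
    """(O1 ∪ O2)(S) = O1(S) ∪ O2(S): an ontology passes if either test accepts it."""
    return lambda S, O: c1(S, O) or c2(S, O)

def spot_check_subset_closed(c: SafetyClass, S: Signature, O: Ontology) -> bool:
    """Check subset-closure of c for one particular S and O by brute force
    (exponential in |O|; only meant to illustrate the definition)."""
    if not c(S, O):
        return True
    axioms = list(O)
    return all(c(S, frozenset(sub))
               for k in range(len(axioms) + 1)
               for sub in combinations(axioms, k))
```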
5.2 Locality
In this section we introduce a family of safety classes for L = SHOIQ that is based on the semantic properties underlying the notion of model conservative extensions. In Section 3 we have seen that, according to Proposition 8, one way to prove that O is S-safe is to show that O is a model S-conservative extension of the empty ontology.

The following definition formalizes the classes of ontologies, called local ontologies, for which safety can be proved using Proposition 8.
Definition 25 (Class of Interpretations, Locality). Given a SHOIQ signature S, we say that a set of interpretations I is local w.r.t. S if for every SHOIQ-interpretation I there exists an interpretation J ∈ I such that I|_S = J|_S.

A class of interpretations is a function I(·) that, given a SHOIQ signature S, returns a set of interpretations I(S); it is local if I(S) is local w.r.t. S for every S; it is monotonic if S_1 ⊆ S_2 implies I(S_1) ⊆ I(S_2); it is compact if for every S_1, S_2 and S such that (S_1 △ S_2) ∩ S = ∅ we have that I(S_1)|_S = I(S_2)|_S, where S_1 △ S_2 is the symmetric difference of the sets S_1 and S_2 defined by S_1 △ S_2 := (S_1 \ S_2) ∪ (S_2 \ S_1).

Given a class of interpretations I(·), we say that O(·) is the class of ontologies based on I(·) if, for every S, O(S) is the set of ontologies that are valid in I(S); if I(·) is local then we say that O(·) is a class of local ontologies, and for every S and O ∈ O(S) and every α ∈ O, we say that O and α are local (based on I(·)). ♦
Example 26 Let I^{r←∅}_{A←∅}(·) be the class of SHOIQ interpretations defined as follows. Given a signature S, the set I^{r←∅}_{A←∅}(S) consists of the interpretations J such that r^J = ∅ for every atomic role r ∉ S and A^J = ∅ for every atomic concept A ∉ S. It is easy to show that I^{r←∅}_{A←∅}(S) is local for every S: for every interpretation I = (Δ^I, ·^I), the interpretation J = (Δ^J, ·^J) defined by Δ^J := Δ^I, r^J = ∅ for r ∉ S, A^J = ∅ for A ∉ S, and X^J := X^I for the remaining symbols X satisfies J ∈ I^{r←∅}_{A←∅}(S) and I|_S = J|_S. Since I^{r←∅}_{A←∅}(S_1) ⊆ I^{r←∅}_{A←∅}(S_2) for every S_1 ⊆ S_2, the class I^{r←∅}_{A←∅}(·) is monotonic; I^{r←∅}_{A←∅}(·) is also compact, since for every S_1 and S_2 the sets of interpretations I^{r←∅}_{A←∅}(S_1) and I^{r←∅}_{A←∅}(S_2) are defined differently only for elements in S_1 △ S_2.

Given a signature S, the set Ax^{r←∅}_{A←∅}(S) of axioms that are local w.r.t. S based on I^{r←∅}_{A←∅}(S) consists of all axioms α such that for every J ∈ I^{r←∅}_{A←∅}(S) it is the case that J ⊨ α. The class of local ontologies based on I^{r←∅}_{A←∅}(·) is then defined by O ∈ O^{r←∅}_{A←∅}(S) iff O ⊆ Ax^{r←∅}_{A←∅}(S). ♦
Proposition 27 (Locality Implies Safety) Let O(·) be a class of ontologies for SHOIQ based on a local class of interpretations I(·). Then O(·) is a subset-closed and union-closed safety class for L = SHOIQ. Additionally, if I(·) is monotonic, then O(·) is anti-monotonic, and if I(·) is compact then O(·) is also compact.

Proof. Assume that O(·) is a class of ontologies based on I(·). Then, by Definition 25, for every SHOIQ signature S we have O ∈ O(S) iff O is valid in I(S) iff J ⊨ O for every interpretation J ∈ I(S). Since I(·) is a local class of interpretations, for every SHOIQ-interpretation I there exists J ∈ I(S) such that J|_S = I|_S. Hence for every O ∈ O(S) and every SHOIQ interpretation I there is a model J ∈ I(S) of O such that J|_S = I|_S, which implies by Proposition 8 that O is safe for S w.r.t. L = SHOIQ. Thus O(·) is a safety class.

The fact that O(·) is subset-closed and union-closed follows directly from Definition 25, since (O_1 ∪ O_2) ∈ O(S) iff (O_1 ∪ O_2) is valid in I(S) iff O_1 and O_2 are valid in I(S) iff O_1 ∈ O(S) and O_2 ∈ O(S). If I(·) is monotonic then I(S_1) ⊆ I(S_2) for every S_1 ⊆ S_2, and so O ∈ O(S_2) implies that O is valid in I(S_2), which implies that O is valid in I(S_1), which implies that O ∈ O(S_1). Hence O(·) is anti-monotonic.

If I(·) is compact then for every S_1, S_2 and S such that (S_1 △ S_2) ∩ S = ∅ we have that I(S_1)|_S = I(S_2)|_S; hence for every O with Sig(O) ⊆ S we have that O is valid in I(S_1) iff O is valid in I(S_2), and so O ∈ O(S_1) iff O ∈ O(S_2). In particular, O ∈ O(S) iff O ∈ O(S ∩ Sig(O)), since (S △ (S ∩ Sig(O))) ∩ Sig(O) = (S \ Sig(O)) ∩ Sig(O) = ∅. Hence O(·) is compact.
Corollary 28 The class of ontologies O^{r←∅}_{A←∅}(·) defined in Example 26 is an anti-monotonic, compact, subset-closed, and union-closed safety class.
Example 29 Recall that in Example 5 from Section 3 we demonstrated that the ontology P_1 ∪ O given in Figure 1, with P_1 = {P1, ..., P4, E1}, is a deductive conservative extension of O for S = {Cystic Fibrosis, Genetic Disorder}. This has been done by showing that every S-interpretation I can be expanded to a model J of axioms P1–P4, E1 by interpreting the symbols in Sig(P_1) \ S as the empty set. In terms of Example 26 this means that P_1 ∈ O^{r←∅}_{A←∅}(S). Since O^{r←∅}_{A←∅}(·) is a class of local ontologies, by Proposition 27 the ontology P_1 is safe for S w.r.t. L = SHOIQ. ♦
Proposition 27 and Example 29 suggest a particular way of proving the safety of ontologies. Given a SHOIQ ontology O and a signature S, it is sufficient to check whether O ∈ O^{r←∅}_{A←∅}(S); that is, whether every axiom α in O is satisfied by every interpretation from I^{r←∅}_{A←∅}(S). If this property holds, then O must be safe for S according to Proposition 27. It turns out that this notion provides a powerful sufficiency test for safety which works surprisingly well for many real-world ontologies, as will be shown in Section 8. In the next section we discuss how to perform this test in practice.
5.3 Testing Locality
In this section, we focus in more detail on the safety class O^{r←∅}_{A←∅}(·) introduced in Example 26. When ambiguity does not arise, we refer to this safety class simply as locality.²

From the definition of Ax^{r←∅}_{A←∅}(S) given in Example 26 it is easy to see that an axiom α is local w.r.t. S (based on I^{r←∅}_{A←∅}(S)) if α is satisfied in every interpretation that fixes the interpretation of all atomic roles and concepts outside S to the empty set. Note that for defining locality we do not fix the interpretation of the individuals outside S, although in principle this could be done. The reason is that there is no elegant way to describe such interpretations: every individual needs to be interpreted as an element of the domain, and there is no "canonical" element of every domain to choose.
In order to test the locality of α w.r.t. S, it is sufficient to interpret every atomic concept
and atomic role not in S as the empty set and then check if α is satisfied in all interpretations
of the remaining symbols. This observation suggests the following test for locality:
Proposition 30 (Testing Locality) Given a SHOIQ signature S, a concept C, an axiom α and an ontology O, let τ(C, S), τ(α, S) and τ(O, S) be defined recursively as follows:

  τ(C, S) ::=   τ(⊤, S) = ⊤;                                                             (a)
              | τ(A, S) = ⊥ if A ∉ S and otherwise = A;                                  (b)
              | τ({a}, S) = {a};                                                         (c)
              | τ(C_1 ⊓ C_2, S) = τ(C_1, S) ⊓ τ(C_2, S);                                 (d)
              | τ(¬C_1, S) = ¬τ(C_1, S);                                                 (e)
              | τ(∃R.C_1, S) = ⊥ if Sig(R) ⊈ S and otherwise = ∃R.τ(C_1, S);             (f)
              | τ(≥ n R.C_1, S) = ⊥ if Sig(R) ⊈ S and otherwise = (≥ n R.τ(C_1, S)).     (g)

  τ(α, S) ::=   τ(C_1 ⊑ C_2, S) = (τ(C_1, S) ⊑ τ(C_2, S));                               (h)
              | τ(R_1 ⊑ R_2, S) = (⊥ ⊑ ⊥) if Sig(R_1) ⊈ S, otherwise
                                = (∃R_1.⊤ ⊑ ⊥) if Sig(R_2) ⊈ S, otherwise = (R_1 ⊑ R_2); (i)
              | τ(a : C, S) = a : τ(C, S);                                               (j)
              | τ(r(a, b), S) = (⊤ ⊑ ⊥) if r ∉ S and otherwise = r(a, b);                (k)
              | τ(Trans(r), S) = (⊥ ⊑ ⊥) if r ∉ S and otherwise = Trans(r);              (l)
              | τ(Funct(R), S) = (⊥ ⊑ ⊥) if Sig(R) ⊈ S and otherwise = Funct(R).         (m)

  τ(O, S) ::= ⋃_{α ∈ O} τ(α, S)                                                          (n)

2. This notion of locality is exactly the one we used in our previous work (Cuenca Grau et al., 2007).

Then O ∈ O^{r←∅}_{A←∅}(S) iff every axiom in τ(O, S) is a tautology.

Proof. It is easy to check that, for every atomic concept A and atomic role r occurring in τ(C, S), we have A ∈ S and r ∈ S; in other words, all atomic concepts and roles that are not in S are eliminated by the transformation.³ It is also easy to show by induction that for every interpretation I ∈ I^{r←∅}_{A←∅}(S) we have C^I = (τ(C, S))^I and that I ⊨ α iff I ⊨ τ(α, S). Hence an axiom α is local w.r.t. S iff I ⊨ α for every interpretation I ∈ I^{r←∅}_{A←∅}(S) iff I ⊨ τ(α, S) for every Sig(α)-interpretation I ∈ I^{r←∅}_{A←∅}(S) iff τ(α, S) is a tautology.
Example 31 Let O = {α} be the ontology consisting of the axiom α = M2 from Figure 1. We demonstrate using Proposition 30 that O is local w.r.t. S_1 = {Fibrosis, Genetic Origin}, but not local w.r.t. S_2 = {Genetic Fibrosis, has Origin}.

Indeed, according to Proposition 30, in order to check whether O is local w.r.t. S_1 it is sufficient to perform the following replacements in α for the symbols that are not in S_1:

  M2:  Genetic Fibrosis ≡ Fibrosis ⊓ ∃has Origin.Genetic Origin
       with Genetic Fibrosis ↦ ⊥ [by (b)] and ∃has Origin.Genetic Origin ↦ ⊥ [by (f)]     (7)

Similarly, in order to check whether O is local w.r.t. S_2, it is sufficient to perform the following replacements in α for the symbols that are not in S_2:

  M2:  Genetic Fibrosis ≡ Fibrosis ⊓ ∃has Origin.Genetic Origin
       with Fibrosis ↦ ⊥ [by (b)] and Genetic Origin ↦ ⊥ [by (b)]                          (8)

In the first case we obtain τ(M2, S_1) = (⊥ ≡ Fibrosis ⊓ ⊥), which is a SHOIQ-tautology. Hence O is local w.r.t. S_1, and hence by Proposition 8 is S_1-safe w.r.t. SHOIQ. In the second case τ(M2, S_2) = (Genetic Fibrosis ≡ ⊥ ⊓ ∃has Origin.⊥), which is not a SHOIQ-tautology; hence O is not local w.r.t. S_2. ♦
5.4 A Tractable Approximation to Locality
One of the important conclusions of Proposition 30 is that one can use the standard capabilities of available DL reasoners such as FaCT++ (Tsarkov & Horrocks, 2006), RACER (Möller & Haarslev, 2003), Pellet (Sirin & Parsia, 2004) or KAON2 (Motik, 2006) for testing locality, since these reasoners, among other things, allow testing for DL-tautologies. Checking for tautologies in description logics is, theoretically, a difficult problem (e.g., for the DL SHOIQ it is known to be NEXPTIME-complete, Tobies, 2000). There are, however, several reasons to believe that the locality test would perform well in practice. The primary reason is that the sizes of the axioms which need to be tested for tautologies are usually relatively small compared to the sizes of ontologies. Secondly, modern DL reasoners are highly optimized for standard reasoning tasks and behave well for most realistic ontologies. In case reasoning is too costly, it is possible to formulate a tractable approximation to the locality conditions for SHOIQ:

3. Recall that the constructors ⊥, C_1 ⊔ C_2, ∀R.C, and (≤ n R.C) are assumed to be expressed using ¬, C_1 ⊓ C_2, ∃R.C and (≥ n R.C); hence, in particular, every role R' with Sig(R') ⊈ S occurs in O only in the form ∃R'.C, (≥ n R'.C), R' ⊑ R, R ⊑ R', Trans(R'), or Funct(R'), and hence will be eliminated. All atomic concepts A' ∉ S will be eliminated likewise. Note that it is not necessarily the case that Sig(τ(α, S)) ⊆ S, since τ(α, S) may still contain individuals that do not occur in S.
Definition 32 (Syntactic Locality for SHOIQ). Let S be a signature. The following grammar recursively defines two sets of concepts Con^∅(S) and Con^Δ(S) for a signature S:

  Con^∅(S) ::= A^∅ | ¬C^Δ | C ⊓ C^∅ | C^∅ ⊓ C | ∃R^∅.C | ∃R.C^∅ | (≥ n R^∅.C) | (≥ n R.C^∅) .
  Con^Δ(S) ::= ⊤ | ¬C^∅ | C^Δ_1 ⊓ C^Δ_2 .

where A^∅ ∉ S is an atomic concept, R^∅ is (possibly the inverse of) an atomic role r^∅ ∉ S, C is any concept, R is any role, C^∅ ∈ Con^∅(S), C^Δ_(i) ∈ Con^Δ(S), and i ∈ {1, 2}.

An axiom α is syntactically local w.r.t. S if it is of one of the following forms: (1) R^∅ ⊑ R, or (2) Trans(r^∅), or (3) Funct(R^∅), or (4) C^∅ ⊑ C, or (5) C ⊑ C^Δ, or (6) a : C^Δ. We denote by Ãx^{r←∅}_{A←∅}(S) the set of all SHOIQ-axioms that are syntactically local w.r.t. S.

A SHOIQ-ontology O is syntactically local w.r.t. S if O ⊆ Ãx^{r←∅}_{A←∅}(S). We denote by Õ^{r←∅}_{A←∅}(S) the set of all SHOIQ ontologies that are syntactically local w.r.t. S. ♦
Intuitively, syntactic locality provides a simple syntactic test to ensure that an axiom is satisfied in every interpretation from I^{r←∅}_{A←∅}(S). It is easy to see from the inductive definitions of Con^∅(S) and Con^Δ(S) in Definition 32 that for every interpretation I = (Δ^I, ·^I) from I^{r←∅}_{A←∅}(S) it is the case that (C^∅)^I = ∅ and (C^Δ)^I = Δ^I for every C^∅ ∈ Con^∅(S) and C^Δ ∈ Con^Δ(S). Hence every syntactically local axiom is satisfied in every interpretation I from I^{r←∅}_{A←∅}(S), and so we obtain the following conclusion:

Proposition 33 Ãx^{r←∅}_{A←∅}(S) ⊆ Ax^{r←∅}_{A←∅}(S).
Further, it can be shown that the safety class for SHOIQ based on syntactic locality enjoys all of the properties from Definition 24:

Proposition 34 The class of syntactically local ontologies Õ^{r←∅}_{A←∅}(·) given in Definition 32 is an anti-monotonic, compact, subset-closed, and union-closed safety class.

Proof. Õ^{r←∅}_{A←∅}(·) is a safety class by Proposition 33. Anti-monotonicity of Õ^{r←∅}_{A←∅}(·) can be shown by induction, by proving that Con^∅(S_2) ⊆ Con^∅(S_1), Con^Δ(S_2) ⊆ Con^Δ(S_1) and Ãx^{r←∅}_{A←∅}(S_2) ⊆ Ãx^{r←∅}_{A←∅}(S_1) when S_1 ⊆ S_2. Also, one can show by induction that α ∈ Ãx^{r←∅}_{A←∅}(S) iff α ∈ Ãx^{r←∅}_{A←∅}(S ∩ Sig(α)), so Õ^{r←∅}_{A←∅}(·) is compact. Since O ∈ Õ^{r←∅}_{A←∅}(S) iff O ⊆ Ãx^{r←∅}_{A←∅}(S), we have that Õ^{r←∅}_{A←∅}(·) is subset-closed and union-closed.
Example 35 (Example 31 continued) It is easy to see that axiom M2 from Figure 1 is syntactically local w.r.t. S_1 = {Fibrosis, Genetic Origin}. Below we indicate the sub-concepts of α that belong to Con^∅(S_1):

  Genetic Fibrosis ∈ Con^∅(S_1)                              [matches A^∅]
  ∃has Origin.Genetic Origin ∈ Con^∅(S_1)                    [matches ∃R^∅.C]
  Fibrosis ⊓ ∃has Origin.Genetic Origin ∈ Con^∅(S_1)         [matches C ⊓ C^∅]       (9)

It is easy to show in a similar way that axioms P1–P4 and E1 from Figure 1 are syntactically local w.r.t. S = {Cystic Fibrosis, Genetic Disorder}. Hence the ontology P_1 = {P1, ..., P4, E1} considered in Example 29 is syntactically local w.r.t. S. ♦
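Because the grammar of Definition 32 is purely structural, the test can be implemented by a direct recursion over the concept syntax tree. The following Python sketch reuses the toy tuple encoding from the sketch after Example 31 (again, our own names, not the paper's); it covers the Con^∅/Con^Δ grammar and the GCI cases (4) and (5) of the definition.

```python
def in_con_empty(C, S):
    """Does concept C belong to Con^∅(S)?  (toy tuple encoding as before)"""
    kind = C[0]
    if kind == 'atom':
        return C[1] not in S                                    # A^∅
    if kind == 'not':
        return in_con_delta(C[1], S)                            # ¬C^Δ
    if kind == 'and':
        return in_con_empty(C[1], S) or in_con_empty(C[2], S)   # C ⊓ C^∅, C^∅ ⊓ C
    if kind == 'some':
        return C[1][1] not in S or in_con_empty(C[2], S)        # ∃R^∅.C, ∃R.C^∅
    if kind == 'atleast':
        return C[2][1] not in S or in_con_empty(C[3], S)        # (≥n R^∅.C), (≥n R.C^∅)
    return False

def in_con_delta(C, S):
    """Does concept C belong to Con^Δ(S)?"""
    kind = C[0]
    if kind == 'top':
        return True                                             # ⊤
    if kind == 'not':
        return in_con_empty(C[1], S)                            # ¬C^∅
    if kind == 'and':
        return in_con_delta(C[1], S) and in_con_delta(C[2], S)  # C^Δ_1 ⊓ C^Δ_2
    return False

def gci_syntactically_local(lhs, rhs, S):
    """A GCI lhs ⊑ rhs is syntactically local if lhs ∈ Con^∅(S) (case 4)
    or rhs ∈ Con^Δ(S) (case 5)."""
    return in_con_empty(lhs, S) or in_con_delta(rhs, S)
```

Since each case inspects a sub-concept at most once, the test runs in time linear in the size of the axiom, which is the source of the polynomial bound stated in Proposition 36 below.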
  Class I^{r←∗}_{A←∗}(S)       r^J for r ∉ S                A^J for A ∉ S
  I^{r←∅}_{A←∅}(S)             ∅                            ∅
  I^{r←Δ×Δ}_{A←∅}(S)           Δ^J × Δ^J                    ∅
  I^{r←id}_{A←∅}(S)            {⟨x, x⟩ | x ∈ Δ^J}           ∅
  I^{r←∅}_{A←Δ}(S)             ∅                            Δ^J
  I^{r←Δ×Δ}_{A←Δ}(S)           Δ^J × Δ^J                    Δ^J
  I^{r←id}_{A←Δ}(S)            {⟨x, x⟩ | x ∈ Δ^J}           Δ^J

Table 3: Examples for Different Local Classes of Interpretations
The converse of Proposition 33 does not hold in general, since there are semantically local axioms that are not syntactically local. For example, the axiom α = (A ⊑ A ⊔ B) is local w.r.t. every S since it is a tautology (and hence true in every interpretation). On the other hand, it is easy to see that α is not syntactically local w.r.t. S = {A, B} according to Definition 32, since it involves symbols in S only. Another example, which is not a tautology, is the GCI α = (∃r.A ⊑ ∃r.B). The axiom α is semantically local w.r.t. S = {r}, since τ(α, S) = (∃r.⊥ ⊑ ∃r.⊥) is a tautology, but it is not syntactically local. These examples show that the limitation of our syntactic notion of locality is its inability to "compare" different occurrences of concepts from the given signature S. As a result, syntactic locality does not detect all tautological axioms. It is reasonable to assume, however, that tautological axioms do not occur often in realistic ontologies. Furthermore, syntactic locality checking can be performed in polynomial time by matching an axiom according to Definition 32.

Proposition 36 There exists an algorithm that, given a SHOIQ ontology O and a signature S, determines whether O ∈ Õ^{r←∅}_{A←∅}(S), and whose running time is polynomial in |O| + |S|, where |O| and |S| are the numbers of symbols occurring in O and S respectively.⁴
5.5 Other Locality Classes
The locality condition given in Example 26, based on the class of local interpretations I^{r←∅}_{A←∅}(·), is just one particular example of a locality condition that can be used for testing safety. Other classes of local interpretations can be constructed in a similar way by fixing the interpretations of elements outside S to different values. In Table 3 we have listed several such classes of local interpretations, where we fix the interpretation of atomic roles outside S to be either the empty set ∅, the universal relation Δ×Δ, or the identity relation id on Δ, and the interpretation of atomic concepts outside S to be either the empty set ∅ or the set Δ of all elements.

Each local class of interpretations in Table 3 defines a corresponding class of local ontologies analogously to Example 26. In Table 4 we have listed all of these classes together with examples of typical types of axioms used in ontologies. These axioms are assumed to be an extension of our project ontology from Figure 1; which of them are local w.r.t. S for which locality conditions is discussed below.

  P4   ∃has Focus.⊤ ⊑ Project
  P5   BioMedical Project ≡ Project ⊓ ∃has Focus.Bio Medicine
  P6   Project ⊓ Bio Medicine ⊑ ⊥
  P7   Funct(has Focus)
  P8   Human Genome : Project
  P9   has Focus(Human Genome, Gene)
  E2   ∀has Focus.Cystic Fibrosis ⊑ ∃has Focus.Cystic Fibrosis

Table 4: A Comparison between Different Types of Locality Conditions
It can be seen from Table 4 that different types of locality conditions are appropriate for different types of axioms. The locality condition based on I^{r←∅}_{A←∅}(S) captures the domain axiom P4, the definition P5, the disjointness axiom P6, and the functionality axiom P7, but neither of the assertions P8 or P9, since the individuals Human Genome and Gene prevent us from interpreting the atomic role has Focus and the atomic concept Project with the empty set. The locality condition based on I^{r←Δ×Δ}_{A←Δ}(S), where the atomic roles and concepts outside S are interpreted as the largest possible sets, can capture such assertions, but is generally poor for other types of axioms. For example, the functionality axiom P7 is not captured by this locality condition, since the atomic role has Focus is interpreted as the universal relation Δ×Δ, which is not necessarily functional. In order to capture such functionality axioms, one can use locality based on I^{r←id}_{A←∅}(S) or I^{r←id}_{A←Δ}(S), where every atomic role outside S is interpreted as the identity relation id on the interpretation domain. Note that the modeling error E2 is not local for any of the given locality conditions. Note also that it is not possible to come up with a locality condition that captures all the axioms P4–P9, since P6 and P8 together imply the axiom β = ({Human Genome} ⊓ Bio Medicine ⊑ ⊥), which uses symbols from S only. Hence, every subset of P containing P6 and P8 is not safe w.r.t. S, and so cannot be local w.r.t. S.

4. We assume that the numbers in number restrictions are written down using binary coding.
It might be possible to come up with algorithms for testing the locality conditions for the classes of interpretations in Table 3 similar to the one presented in Proposition 30. For example, locality based on the class I^{r←∅}_{A←Δ}(S) can be tested as in Proposition 30, where case (b) of the definition of τ(C, S) is replaced with the following:

  τ(A, S) = ⊤ if A ∉ S and otherwise = A    (b')

For the remaining classes of interpretations, that is, for I^{r←Δ×Δ}_{A←∗}(S) and I^{r←id}_{A←∗}(S), checking locality is not that straightforward, since it is not clear how to eliminate the universal roles and identity roles from the axioms while preserving validity in the respective classes of interpretations.
Still, it is easy to come up with tractable syntactic approximations for all the locality conditions considered in this section, in a similar manner to what has been done in Section 5.4. The idea is the same as that used in Definition 32, namely to define two sets Con^∅(S) and Con^Δ(S) of concepts for a signature S which are interpreted as the empty set and, respectively, as Δ^I in every interpretation I from the class, and to see in which situations DL constructors produce elements from these sets. In Figure 3 we give recursive definitions of syntactically local axioms Ãx^{r←∗}_{A←∗}(S) that correspond to the classes I^{r←∗}_{A←∗}(S) of interpretations from Table 3, where some cases in the recursive definitions are present only for the indicated classes of interpretations.

  Con^∅(S) ::= ¬C^Δ | C ⊓ C^∅ | C^∅ ⊓ C | ∃R.C^∅ | (≥ n R.C^∅)
      for I^{r←∗}_{A←∅}(·):    | A^∅
      for I^{r←∅}_{A←∗}(·):    | ∃R^∅.C | (≥ n R^∅.C)
      for I^{r←id}_{A←∗}(·):   | (≥ m R^{id}.C), m ≥ 2 .

  Con^Δ(S) ::= ⊤ | ¬C^∅ | C^Δ_1 ⊓ C^Δ_2
      for I^{r←∗}_{A←Δ}(·):    | A^Δ
      for I^{r←Δ×Δ}_{A←∗}(·):  | ∃R^{Δ×Δ}.C^Δ | (≥ n R^{Δ×Δ}.C^Δ)
      for I^{r←id}_{A←∗}(·):   | ∃R^{id}.C^Δ | (≥ 1 R^{id}.C^Δ) .

  Ãx^{r←∗}_{A←∗}(S) ::= C^∅ ⊑ C | C ⊑ C^Δ | a : C^Δ
      for I^{r←∅}_{A←∗}(·):    | R^∅ ⊑ R | Trans(r^∅) | Funct(R^∅)
      for I^{r←Δ×Δ}_{A←∗}(·):  | R ⊑ R^{Δ×Δ} | Trans(r^{Δ×Δ}) | r^{Δ×Δ}(a, b)
      for I^{r←id}_{A←∗}(·):   | Trans(r^{id}) | Funct(R^{id})

  Where: A^∅, A^Δ, r^∅, r^{Δ×Δ}, r^{id} ∉ S; Sig(R^∅), Sig(R^{Δ×Δ}), Sig(R^{id}) ⊈ S;
  C^∅ ∈ Con^∅(S), C^Δ_(i) ∈ Con^Δ(S); C is any concept, R is any role.

Figure 3: Syntactic Locality Conditions for the Classes of Interpretations in Table 3
5.6 Combining and Extending Safety Classes
In the previous section we gave examples of several safety classes based on different local
classes of interpretations and demonstrated that different classes are suitable for different
types of axioms. In order to check safety of ontologies in practice, one may try to apply
different sufficient tests and check if any of them succeeds. Obviously, this gives a more
powerful sufficient condition for safety, which can be seen as the union of the safety classes
used in the tests.
Formally, given two classes of ontologies O_1(·) and O_2(·), their union (O_1 ∪ O_2)(·) is the class of ontologies defined by (O_1 ∪ O_2)(S) = O_1(S) ∪ O_2(S). It is easy to see from Definition 24 that if both O_1(·) and O_2(·) are safety classes then their union (O_1 ∪ O_2)(·) is a safety class. Moreover, if each of the safety classes is also anti-monotonic or subset-closed, then the union is anti-monotonic or, respectively, subset-closed as well. Unfortunately, the union-closure property for safety classes is not preserved under unions, as demonstrated in the following example:
Example 37 Consider the union (O^{r←∅}_{A←∅} ∪ O^{r←Δ×Δ}_{A←Δ})(·) of the two classes of local ontologies O^{r←∅}_{A←∅}(·) and O^{r←Δ×Δ}_{A←Δ}(·) defined in Section 5.5. This safety class is not union-closed since, for example, the ontology O_1 consisting of axioms P4–P7 from Table 4 satisfies the first locality condition, the ontology O_2 consisting of axioms P8–P9 satisfies the second locality condition, but their union O_1 ∪ O_2 satisfies neither the first nor the second locality condition and, in fact, is not even safe for S, as we have shown in Section 5.5. ♦
As shown in Proposition 33, every locality condition gives a union-closed safety class; however, as seen in Example 37, the union of such safety classes might no longer be union-closed. One may wonder whether locality classes already provide the most powerful sufficient conditions for safety that satisfy all the desirable properties from Definition 24. Surprisingly, this is the case to a certain extent for some locality classes considered in Section 5.5.
Definition 38 (Maximal Union-Closed Safety Class). A safety class O_2(·) extends a safety class O_1(·) if O_1(S) ⊆ O_2(S) for every S. A safety class O_1(·) is maximal union-closed for a language L if O_1(·) is union-closed and, for every union-closed safety class O_2(·) that extends O_1(·) and every O over L, we have that O ∈ O_2(S) implies O ∈ O_1(S). ♦

Proposition 39 The classes of local ontologies O^{r←∅}_{A←∅}(·) and O^{r←∅}_{A←Δ}(·) defined in Section 5.5 are maximal union-closed safety classes for L = SHIQ.
Proof. According to Definition 38, if a safety class O(·) is not maximal union-closed for a language L, then there exist a signature S and an ontology O over L such that (i) O ∉ O(S), (ii) O is safe w.r.t. S in L, and (iii) for every P ∈ O(S), the union O ∪ P is safe w.r.t. S in L; that is, for every ontology Q over L with Sig(Q) ∩ Sig(O ∪ P) ⊆ S it is the case that O ∪ P ∪ Q is a deductive conservative extension of Q. We demonstrate that this is not possible for O(·) = O^{r←∅}_{A←∅}(·) or O(·) = O^{r←∅}_{A←Δ}(·).

We first consider the case O(·) = O^{r←∅}_{A←∅}(·) and then show how to modify the proof for the case O(·) = O^{r←∅}_{A←Δ}(·).

Let O be an ontology over L that satisfies conditions (i)–(iii) above. We define ontologies P and Q as follows. Take P to consist of the axioms A ⊑ ⊥ and ∃r.⊤ ⊑ ⊥ for every atomic concept A and atomic role r from Sig(O) \ S. It is easy to see that P ∈ O^{r←∅}_{A←∅}(S). Take Q to consist of all tautologies of the form ⊥ ⊑ A and ⊥ ⊑ ∃r.⊤ for every A, r ∈ S. Note that Sig(O ∪ P) ∩ Sig(Q) ⊆ S. We claim that O ∪ P ∪ Q is not a deductive Sig(Q)-conservative extension of Q. (∗)

Intuitively, the ontology P is chosen in such a way that P ∈ O^{r←∅}_{A←∅}(S) and O ∪ P ∪ Q has only models from I^{r←∅}_{A←∅}(S). Q is an ontology which implies nothing but tautologies and uses all the atomic concepts and roles from S.

Since O ∉ O^{r←∅}_{A←∅}(S), there exists an axiom α ∈ O such that α ∉ Ax^{r←∅}_{A←∅}(S). Let β := τ(α, S), where τ(·,·) is defined as in Proposition 30. As has been shown in the proof of that proposition, I ⊨ α iff I ⊨ β for every I ∈ I^{r←∅}_{A←∅}(S). Now, since O ⊨ α and O ∪ P ∪ Q has only models from I^{r←∅}_{A←∅}(S), it is the case that O ∪ P ∪ Q ⊨ β. By Proposition 30, since β does not contain individuals, we have that Sig(β) ⊆ S = Sig(Q) and, since α ∉ Ax^{r←∅}_{A←∅}(S), β is not a tautology, thus Q ⊭ β. Hence, by Definition 1, O ∪ P ∪ Q is not a deductive Sig(Q)-conservative extension of Q, which establishes claim (∗) and contradicts condition (iii).

For O(·) = O^{r←∅}_{A←Δ}(·) the proof can be repeated by taking P to consist of the axioms ⊤ ⊑ A and ∃r.⊤ ⊑ ⊥ for all A, r ∈ Sig(O) \ S, and modifying τ(·,·) as has been discussed in Section 5.5.
There are some difficulties in extending the proof of Proposition 39 to the other locality classes considered in Section 5.5. First, it is not clear how to force the interpretations of roles to be the universal or identity relation using SHOIQ axioms. Second, it is not clear how to define the function τ(·,·) in these cases (see the related discussion in Section 5.5). Note also that the proof of Proposition 39 does not work in the presence of nominals, since then it is not guaranteed that β = τ(α, S) contains symbols from S only (see Footnote 3). Hence there is probably room to extend the locality classes O^{r←∅}_{A←∅}(·) and O^{r←∅}_{A←Δ}(·) for L = SHOIQ while preserving union-closure.
6. Extracting Modules Using Safety Classes
In this section we revisit the problem of extracting modules from ontologies. As shown in Corollary 23 in Section 4, there exists no general procedure that can recognize or extract all the (minimal) modules for a signature in an ontology in finite time.

The techniques described in Section 5, however, can be reused for extracting particular families of modules that satisfy certain sufficient conditions. Proposition 18 establishes the relationship between the notions of safety and module; more precisely, a subset O_1 of O is an S-module in O provided that O \ O_1 is safe for S ∪ Sig(O_1). Therefore, any safety class O(·) can provide a sufficient condition for testing modules—that is, in order to prove that O_1 is an S-module in O, it is sufficient to show that O \ O_1 ∈ O(S ∪ Sig(O_1)). A notion of modules based on this property can be defined as follows.
Definition 40 (Modules Based on Safety Class). Let L be an ontology language and O(·) a safety class for L. Given an ontology O and a signature S over L, we say that O_m ⊆ O is an O(·)-based S-module in O if O \ O_m ∈ O(S ∪ Sig(O_m)). ♦

Remark 41 Note that for every safety class O(·), ontology O and signature S, there exists at least one O(·)-based S-module in O, namely O itself; indeed, by Definition 24, the empty ontology ∅ = O \ O belongs to O(S) for every O(·) and S.

Note also that it follows from Definition 40 that O_m is an O(·)-based S-module in O iff O_m is an O(·)-based S'-module in O for every S' with S ⊆ S' ⊆ (S ∪ Sig(O_m)). ♦
It is clear that, according to Definition 40, any procedure for checking membership in a safety class O(·) can be used directly for checking whether O_m is a module based on O(·). In order to extract an O(·)-based module, it is sufficient to enumerate all possible subsets of the ontology and check whether any of these subsets is a module based on O(·).

In practice, however, it is possible to avoid checking all the possible subsets of the input ontology. Figure 4 presents an optimized version of the module-extraction algorithm. The procedure manipulates configurations of the form O_m | O_u | O_s, which represent a partitioning of the ontology O into three disjoint subsets O_m, O_u and O_s. The set O_m accumulates the axioms of the extracted module; the set O_s is intended to be safe w.r.t. S ∪ Sig(O_m). The set O_u, which is initialized to O, contains the unprocessed axioms. These axioms are distributed among O_m and O_s according to rules R1 and R2. Given an axiom α from O_u, rule R1 moves α into O_s provided O_s remains safe w.r.t. S ∪ Sig(O_m) according to the safety class O(·). Otherwise, rule R2 moves α into O_m and moves all axioms from O_s back into O_u, since Sig(O_m) might expand and axioms from O_s might no longer be safe w.r.t. S ∪ Sig(O_m). At the end of this process, when no axioms are left in O_u, the set O_m is an O(·)-based module in O.
The rewrite rules R1 and R2 preserve the invariants I1–I3 given in Figure 4. Invariant I1 states that the three sets O_m, O_u and O_s form a partitioning of O; I2 states that the set O_s satisfies the safety test for S ∪ Sig(O_m) w.r.t. O(·); finally, I3 establishes that the rewrite rules either add elements to O_m, or add elements to O_s without changing O_m; in other words, the pair (|O_m|, |O_s|) consisting of the sizes of these sets increases in lexicographic order.
Input: an ontology O, a signature S, a safety class O(·)
Output: a module O_m in O based on O(·)

Configuration:  O_m (module) | O_u (unprocessed) | O_s (safe)
Initial Configuration:  ∅ | O | ∅
Termination Condition:  O_u = ∅

Rewrite rules:
  R1.  O_m | O_u ∪ {α} | O_s  =⇒  O_m | O_u | O_s ∪ {α}         if (O_s ∪ {α}) ∈ O(S ∪ Sig(O_m))
  R2.  O_m | O_u ∪ {α} | O_s  =⇒  O_m ∪ {α} | O_u ∪ O_s | ∅     if (O_s ∪ {α}) ∉ O(S ∪ Sig(O_m))

Invariants for O_m | O_u | O_s:
  I1.  O = O_m ⊎ O_u ⊎ O_s
  I2.  O_s ∈ O(S ∪ Sig(O_m))

Invariant for O_m | O_u | O_s =⇒ O'_m | O'_u | O'_s:
  I3.  (|O_m|, |O_s|) <_lex (|O'_m|, |O'_s|)

Figure 4: A Procedure for Computing Modules Based on a Safety Class O(·)
Proposition 42 (Correctness of the Procedure from Figure 4) Let O(·) be a safety class for an ontology language L, O an ontology over L, and S a signature over L. Then:

(1) The procedure in Figure 4 with input O, S, O(·) terminates and returns an O(·)-based S-module O_m in O; and

(2) If, additionally, O(·) is anti-monotonic, subset-closed and union-closed, then there is a unique minimal O(·)-based S-module in O, and the procedure returns precisely this minimal module.

Proof. (1) The procedure based on the rewrite rules from Figure 4 always terminates for the following reasons: (i) for every configuration derived by the rewrite rules, the sets O_m, O_u and O_s form a partitioning of O (see invariant I1 in Figure 4), and therefore the size of every set is bounded; (ii) at each rewrite step, (|O_m|, |O_s|) increases in lexicographic order (see invariant I3 in Figure 4). Additionally, if O_u ≠ ∅ then it is always possible to apply one of the rewrite rules R1 or R2, and hence the procedure always terminates with O_u = ∅. Upon termination, by invariant I1 from Figure 4, O is partitioned into O_m and O_s, and by invariant I2, O_s ∈ O(S ∪ Sig(O_m)), which implies, by Definition 40, that O_m is an O(·)-based S-module in O.

(2) Now suppose that, in addition, O(·) is an anti-monotonic, subset-closed, and union-closed safety class, and suppose that O'_m is an O(·)-based S-module in O. We demonstrate by induction that for every configuration O_m | O_u | O_s derivable from ∅ | O | ∅ by rewrite rules R1 and R2, it is the case that O_m ⊆ O'_m. This will prove that the module computed by the procedure is a subset of every O(·)-based S-module in O, and hence is the smallest O(·)-based S-module in O.

Indeed, for the base case we have O_m = ∅ ⊆ O'_m. The rewrite rule R1 does not change the set O_m. For the rewrite rule R2 we have: O_m | O_u ∪ {α} | O_s =⇒ O_m ∪ {α} | O_u ∪ O_s | ∅ if (O_s ∪ {α}) ∉ O(S ∪ Sig(O_m)). (∗)
Input: an ontology O, a signature S, a safety class O(·)
Output: a module O_m in O based on O(·)

Initial Configuration:  ∅ | O | ∅        Termination Condition:  O_u = ∅

Rewrite rules:
  R1.  O_m | O_u ∪ {α} | O_s  =⇒  O_m | O_u | O_s ∪ {α}                    if {α} ∈ O(S ∪ Sig(O_m))
  R2.  O_m | O_u ∪ {α} | O_s^α ∪ O_s  =⇒  O_m ∪ {α} | O_u ∪ O_s^α | O_s    if {α} ∉ O(S ∪ Sig(O_m)), and
                                                                            Sig(O_s) ∩ Sig(α) ⊆ Sig(O_m)

Figure 5: An Optimized Procedure for Computing Modules Based on a Compact, Subset-Closed, Union-Closed Safety Class O(·)
Suppose, to the contrary, that O_m ⊆ O'_m but O_m ∪ {α} ⊈ O'_m. Then α ∈ O'_s := O \ O'_m. Since O'_m is an O(·)-based S-module in O, we have that O'_s ∈ O(S ∪ Sig(O'_m)). Since O(·) is subset-closed, {α} ∈ O(S ∪ Sig(O'_m)). Since O_m ⊆ O'_m and O(·) is anti-monotonic, we have {α} ∈ O(S ∪ Sig(O_m)). Since, by invariant I2 from Figure 4, O_s ∈ O(S ∪ Sig(O_m)) and O(·) is union-closed, O_s ∪ {α} ∈ O(S ∪ Sig(O_m)), which contradicts (∗). This contradiction implies that the rule R2 also preserves the property O_m ⊆ O'_m.
Claim (1) of Proposition 42 establishes that the procedure from Figure 4 terminates for every input and produces a module based on the given safety class. Moreover, it is possible to show that this procedure runs in polynomial time provided that the safety test can also be performed in polynomial time.

If the safety class O(·) satisfies additional desirable properties, like those based on classes of local interpretations described in Section 5.2, the procedure in fact produces the smallest possible module based on the safety class, as stated in Claim (2) of Proposition 42. In this case, it is possible to optimize the procedure shown in Figure 4. If O(·) is union-closed then, instead of checking whether (O_s ∪ {α}) ∈ O(S ∪ Sig(O_m)) in the conditions of rules R1 and R2, it is sufficient to check whether {α} ∈ O(S ∪ Sig(O_m)), since it is already known that O_s ∈ O(S ∪ Sig(O_m)). If O(·) is compact and subset-closed then, instead of moving all the axioms in O_s to O_u in rule R2, it is sufficient to move only those axioms O_s^α that contain at least one symbol from α which did not occur in O_m before, since the set of remaining axioms will stay in O(S ∪ Sig(O_m)). In Figure 5 we present an optimized version of the algorithm from Figure 4 for such locality classes.
Example 43 In Table 5 we present a trace of the algorithm from Figure 5 for the ontology O consisting of axioms M1–M5 from Figure 1, the signature S = {Cystic Fibrosis, Genetic Disorder}, and the safety class O(·) = O^{r←∅}_{A←∅}(·) defined in Example 26. The first column of the table lists the configurations obtained from the initial configuration ∅ | O | ∅ by applying the rewrite rules R1 and R2 from Figure 5; in each row, the axiom α being tested for safety is marked with an asterisk. The second column of the table shows the elements of S ∪ Sig(O_m) that appear in the current configuration but were not present in the preceding configurations. In the last column we indicate whether the first condition of the rules R1 and R2 is fulfilled for the selected axiom α from O_u—that is, whether α is local for I^{r←∅}_{A←∅}(·). The rewrite rule corresponding to the result of this test is applied to the configuration.

     O_m | O_u, α | O_s                  New elements in S ∪ Sig(O_m)          {α} ∈ O(S ∪ Sig(O_m))?
  1  − | M1, M2*, M3, M4, M5 | −         Cystic Fibrosis, Genetic Disorder     Yes ⇒ R1
  2  − | M1, M3*, M4, M5 | M2            −                                     Yes ⇒ R1
  3  − | M1*, M4, M5 | M2, M3            −                                     No ⇒ R2
  4  M1 | M2, M3*, M4, M5 | −            Fibrosis, located In, Pancreas,
                                         has Origin, Genetic Origin            No ⇒ R2
  5  M1, M3 | M2, M4*, M5 | −            Genetic Fibrosis                      No ⇒ R2
  6  M1, M3, M4 | M2, M5* | −            −                                     Yes ⇒ R1
  7  M1, M3, M4 | M2* | M5               −                                     No ⇒ R2
  8  M1, M2, M3, M4 | − | M5             −

Table 5: A trace of the Procedure in Figure 5 for the input O = {M1, ..., M5} from Figure 1 and S = {Cystic Fibrosis, Genetic Disorder}
Note that some axioms α are tested for safety several times in different configurations, because the set O_u may increase after applications of rule R2; for example, the axiom α = M2 is tested for safety in both configurations 1 and 7, and α = M3 in configurations 2 and 4. Note also that different results for the locality tests are obtained in these cases: both M2 and M3 were local w.r.t. S ∪ Sig(O_m) when O_m = ∅, but became non-local when new axioms were added to O_m. It is also easy to see that, in our case, syntactic locality produces the same results for the tests.

In our example, the rewrite procedure produces a module O_m consisting of axioms M1–M4. Note that it is possible to apply the rewrite rules for different choices of the axiom α in O_u, which results in a different computation. In other words, the procedure from Figure 5 has implicit non-determinism. According to Claim (2) of Proposition 42 all such computations should produce the same module O_m, which is the smallest O^{r←∅}_{A←∅}(·)-based S-module in O; that is, the implicit non-determinism in the procedure from Figure 5 does not have any impact on the result of the procedure. However, alternative choices for α may result in shorter computations: in our example we could have selected axiom M1 in the first configuration instead of M2, which would have led to a shorter trace consisting of configurations 1 and 4–8 only. ♦
It is worth examining the connection between the S-modules in an ontology O based on a particular safety class O(·) and the actual minimal S-modules in O. It turns out that any O(·)-based module O_m is guaranteed to cover the set of minimal modules, provided that O(·) is anti-monotonic and subset-closed. In other words, given O and S, O_m contains all the S-essential axioms in O. The following lemma provides the main technical argument underlying this result.

Lemma 44 Let O(·) be an anti-monotonic, subset-closed safety class for an ontology language L, O an ontology, and S a signature over L. Let O_1 be an S-module in O w.r.t. L and O_m an O(·)-based S-module in O. Then O_2 := O_1 ∩ O_m is an S-module in O w.r.t. L.
Proof. By Definition 40, since O_m is an O(·)-based S-module in O, we have that O \ O_m ∈ O(S ∪ Sig(O_m)). Since O_1 \ O_2 = O_1 \ O_m ⊆ O \ O_m and O(·) is subset-closed, it is the case that O_1 \ O_2 ∈ O(S ∪ Sig(O_m)). Since O(·) is anti-monotonic and O_2 ⊆ O_m, we have that O_1 \ O_2 ∈ O(S ∪ Sig(O_2)); hence O_2 is an O(·)-based S-module in O_1. In particular, O_2 is an S-module in O_1 w.r.t. L. Since O_1 is an S-module in O w.r.t. L, O_2 is an S-module in O w.r.t. L.
Corollary 45 Let O(·) be an anti-monotonic, subset-closed safety class for L and O_m an O(·)-based S-module in O. Then O_m contains all S-essential axioms in O w.r.t. L.

Proof. Let O_1 be a minimal S-module in O w.r.t. L. We demonstrate that O_1 ⊆ O_m. Indeed, otherwise, by Lemma 44, O_1 ∩ O_m is an S-module in O w.r.t. L which is strictly contained in O_1. Hence O_m is a superset of every minimal S-module in O and hence contains all S-essential axioms in O w.r.t. L.
As shown in Section 3.4, all the axioms M1–M4 are essential for the ontology O and the signature S considered in Example 43. We have seen in this example that the locality-based S-module extracted from O contains all of these axioms, in accordance with Corollary 45. In our case, the extracted module contains only essential axioms; in general, however, locality-based modules might contain non-essential axioms.
An interesting application of modules is the pruning of irrelevant axioms when checking whether an axiom α is implied by an ontology O. Indeed, in order to check whether O ⊨ α it suffices to retrieve the module for Sig(α) and verify whether the implication holds w.r.t. this module. In some cases, it is sufficient to extract a module for a subset of the signature of α, which, in general, leads to smaller modules. In particular, in order to test subsumption between a pair of atomic concepts, if the safety class being used enjoys some nice properties then it suffices to extract a module for one of them, as given by the following proposition:
Proposition 46 Let O(·) be a compact, union-closed safety class for an ontology language L, O an ontology and A, B atomic concepts. Let O_A be an O(·)-based module for S = {A} in O. Then:

1. If O ⊨ α := (A ⊑ B) and {B ⊑ ⊥} ∈ O(∅) then O_A ⊨ α;

2. If O ⊨ α := (B ⊑ A) and {⊤ ⊑ B} ∈ O(∅) then O_A ⊨ α.

Proof. 1. Consider two cases: (a) B ∈ Sig(O_A) and (b) B ∉ Sig(O_A).

(a) By Remark 41, O_A is an O(·)-based module in O for S = {A, B}. Since Sig(α) ⊆ S, by Definition 12 and Definition 1, it is the case that O ⊨ α implies O_A ⊨ α.

(b) Consider O' = O ∪ {B ⊑ ⊥}. Since Sig(B ⊑ ⊥) = {B} and B ∉ Sig(O_A), {B ⊑ ⊥} ∈ O(∅) and O(·) is compact, by Definition 24 it is the case that {B ⊑ ⊥} ∈ O(S ∪ Sig(O_A)). Since O_A is an O(·)-based S-module in O, by Definition 40, O \ O_A ∈ O(S ∪ Sig(O_A)). Since O(·) is union-closed, it is the case that (O \ O_A) ∪ {B ⊑ ⊥} ∈ O(S ∪ Sig(O_A)). Note that B ⊑ ⊥ ∉ O_A, since B ∉ Sig(O_A); hence (O \ O_A) ∪ {B ⊑ ⊥} = O' \ O_A, and, by Definition 40, we have that O_A is an O(·)-based S-module in O'. Now, since O ⊨ (A ⊑ B), it is the case that O' ⊨ (A ⊑ ⊥), and hence, since O_A is a module in O' for S = {A}, we have that O_A ⊨ (A ⊑ ⊥), which implies O_A ⊨ (A ⊑ B).
2. The proof of this case is analogous to that for Case 1: Case (a) is applicable without changes; in Case (b) we show that O_A is an O(·)-based module for S = {A} in O' = O ∪ {⊤ ⊑ B}, and hence, since O' ⊨ (⊤ ⊑ A), it is the case that O_A ⊨ (⊤ ⊑ A), which implies O_A ⊨ (B ⊑ A).
Corollary 47 Let O be a SHOIQ ontology and A, B atomic concepts. Let O^{r←∗}_{A←∅}(·) and O^{r←∗}_{A←Δ}(·) be locality classes based on local classes of interpretations of the form I^{r←∗}_{A←∅}(·) and I^{r←∗}_{A←Δ}(·), respectively, from Table 3. Let O_A be a module for S = {A} based on O^{r←∗}_{A←∅}(·) and O_B a module for S = {B} based on O^{r←∗}_{A←Δ}(·). Then O ⊨ (A ⊑ B) iff O_A ⊨ (A ⊑ B) iff O_B ⊨ (A ⊑ B).

Proof. It is easy to see that {B ⊑ ⊥} ∈ O^{r←∗}_{A←∅}(∅) and {⊤ ⊑ A} ∈ O^{r←∗}_{A←Δ}(∅).
Proposition 46 implies that a module based on a safety class for a single atomic concept A can be used for capturing either the super-concepts (Case 1) or the sub-concepts (Case 2) B of A, provided that the safety class captures, when applied to the empty signature, axioms of the form (B ⊑ ⊥) (Case 1) or (⊤ ⊑ B) (Case 2). That is, B is a super-concept or a sub-concept of A in the ontology if and only if it is such in the module. This property can be used, for example, to optimize the classification of ontologies. In order to check whether the subsumption A ⊑ B holds in an ontology O, it is sufficient to extract a module in O for S = {A} using a modularization algorithm based on a safety class in which the ontology {B ⊑ ⊥} is local w.r.t. the empty signature, and check whether this subsumption holds w.r.t. the module. For this purpose, it is convenient to use a syntactically tractable approximation of the safety class in use; for example, one could use the syntactic locality conditions given in Figure 3 instead of their semantic counterparts.
It is possible to combine modularization procedures to obtain modules that are smaller than the ones obtained using these procedures individually. For example, in order to check the subsumption O ⊨ (A ⊑ B), one could first extract an O^{r←∗}_{A←∅}(·)-based module M_1 for S = {A} in O; by Corollary 47 this module is complete for all the super-concepts of A in O, including B—that is, if an atomic concept is a super-concept of A in O, then it is also a super-concept of A in M_1. One could then extract an O^{r←∗}_{A←Δ}(·)-based module M_2 for S = {B} in M_1 which, by Corollary 47, is complete for all the sub-concepts of B in M_1, including A. Indeed, M_2 is an S-module in M_1 for S = {A, B} and M_1 is an S-module in the original ontology O. By Proposition 46, therefore, it is the case that M_2 is also an S-module in O.
7. Related Work
We have seen in Section 3 that the notion of conservative extension is valuable in the formalization of ontology reuse tasks. The problem of deciding conservative extensions has recently been investigated in the context of ontologies (Ghilardi et al., 2006; Lutz et al., 2007; Lutz & Wolter, 2007). The problem of deciding whether P ∪ O is a deductive S-conservative extension of O is EXPTIME-complete for EL (Lutz & Wolter, 2007), 2-EXPTIME-complete w.r.t. ALCQI (Lutz et al., 2007) (roughly OWL Lite), and undecidable w.r.t. ALCQIO (roughly OWL DL). Furthermore, checking model conservative extensions is already undecidable for EL (Lutz & Wolter, 2007), and for ALC it is even not semi-decidable (Lutz et al., 2007).
In the last few years, a rapidly growing body of work has been developed under the
headings of Ontology Mapping and Alignment, Ontology Merging, Ontology Integration,
and Ontology Segmentation (Kalfoglou & Schorlemmer, 2003; Noy, 2004a, 2004b). This field
is rather diverse and has roots in several communities.
In particular, numerous techniques for extracting fragments of ontologies for the pur-
poses of knowledge reuse have been proposed. Most of these techniques rely on syntactically
traversing the axioms in the ontology and employing various heuristics to determine which
axioms are relevant and which are not.
An example of such a procedure is the algorithm implemented in the Prompt-Factor tool (Noy & Musen, 2003). Given a signature S and an ontology O, the algorithm retrieves a fragment O_1 ⊆ O as follows: first, the axioms in O that mention any of the symbols in S are added to O_1; second, S is expanded with the symbols in Sig(O_1). These steps are repeated until a fixpoint is reached. For our example in Section 3, when S = {Cystic Fibrosis, Genetic Disorder} and O consists of axioms M1–M5 from Figure 1, the algorithm first retrieves axioms M1, M4, and M5 containing these terms, then expands S with the symbols mentioned in these axioms, such that S contains all the symbols of O. After this step, all the remaining axioms of O are retrieved. Hence, the fragment extracted by the Prompt-Factor algorithm consists of all the axioms M1–M5. In this case, the Prompt-Factor algorithm extracts a module (though not a minimal one). In general, however, the extracted fragment is not guaranteed to be a module. For example, consider an ontology O = {A ≡ ¬A, B ⊑ C} and α = (C ⊑ B). The ontology O is inconsistent due to the axiom A ≡ ¬A: any axiom (and α in particular) is thus a logical consequence of O. Given S = {B, C}, the Prompt-Factor algorithm extracts O_2 = {B ⊑ C}; however, O_2 ⊭ α, and so O_2 is not a module in O. In general, the Prompt-Factor algorithm may fail even if O is consistent. For example, consider an ontology O = {⊤ ⊑ {a}, A ⊑ B}, α = (A ⊑ ∀r.A), and S = {A}. It is easy to see that O is consistent, admits only single-element models, and α is satisfied in every such model; that is, O ⊨ α. In this case, the Prompt-Factor algorithm extracts O_1 = {A ⊑ B}, which does not imply α.
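The traversal just described can be summarized by the following sketch (our illustration, not the Prompt-Factor code), assuming each axiom is given simply by the set of symbols it mentions.

```python
from typing import Dict, FrozenSet, Set

def prompt_factor_fragment(
    axiom_signatures: Dict[str, FrozenSet[str]],  # axiom id -> symbols it mentions
    seed: Set[str],                               # the input signature S
) -> Set[str]:
    """Fixpoint traversal in the style described above: repeatedly add every
    axiom that mentions a symbol of S, then extend S with all symbols of the
    added axioms, until nothing changes."""
    s = set(seed)
    fragment: Set[str] = set()
    changed = True
    while changed:
        changed = False
        for ax, sig in axiom_signatures.items():
            if ax not in fragment and sig & s:
                fragment.add(ax)
                s |= sig
                changed = True
    return fragment

# On the example of Figure 1, seeding with {"Cystic_Fibrosis", "Genetic_Disorder"}
# eventually pulls in all of M1-M5, as discussed in the text.
```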
Another example is Seidenberg’s segmentation algorithm (Seidenberg & Rector, 2006),
which was used for segmentation of the medical ontology GALEN (Rector & Rogers, 1999).
Currently, the full version of GALEN cannot be processed by reasoners, so the authors
investigate the possibility of splitting GALEN into small “segments” which can be processed
by reasoners separately. The authors describe a segmentation procedure which, given a set
of atomic concepts S, computes a “segment” for S in the ontology. The description of
the procedure is very high-level. The authors discuss which concepts and roles should be
included in the segment and which should not. In particular, the segment should contain
all “super-” and “sub-” concepts of the input concepts, concepts that are “linked” from the
input concepts (via existential restrictions) and their “super-concepts”, but not their “sub-
concepts”; for the included concepts, also “their restrictions”, “intersection”, “union”, and
“equivalent concepts” should be considered by including the roles and concepts they contain,
together with their “super-concepts” and “super-roles” but not their “sub-concepts” and
“sub-roles”. From the description of the procedure it is not entirely clear whether it works
with a classified ontology (which is unlikely in the case of GALEN since the full version of
GALEN has not been classified by any existing reasoner), or, otherwise, how the “super-”
and “sub-” concepts are computed. It is also not clear which axioms should be included in
the segment in the end, since the procedure talks only about the inclusion of concepts and
roles.
A different approach to module extraction proposed in the literature (Stuckenschmidt
& Klein, 2004) consists of partitioning the concepts in an ontology to facilitate visualization
of and navigation through an ontology. The algorithm uses a set of heuristics for measuring
the degree of dependency between the concepts in the ontology and outputs a graphical
representation of these dependencies. The algorithm is intended as a visualization technique,
and does not establish a correspondence between the nodes of the graph and sets of axioms
in the ontology.
What the modularization procedures we have mentioned have in common is the lack of a formal treatment of the notion of module. The papers describing these modularization procedures do not attempt to formally specify the intended outputs of the procedures, but rather argue, on the basis of intuitive notions, about what should and should not be included in a module. In
particular, they do not take the semantics of the ontology languages into account. It might
be possible to formalize these algorithms and identify ontologies for which the “intuition-
based” modularization procedures work correctly. Such studies are beyond the scope of this
paper.
Module extraction in ontologies has also been investigated from a formal point of view (Cuenca Grau et al., 2006b). Cuenca Grau et al. (2006b) define a notion of a module O_A in an ontology O for an atomic concept A. One of the requirements for the module is that O should be a conservative extension of O_A (in the paper O_A is called a logical module in O). The paper imposes an additional requirement on modules, namely that the module O_A should entail all the subsumptions in the original ontology between atomic concepts involving A and other atomic concepts in O_A. The authors present an algorithm for partitioning an ontology into disjoint modules and prove that the algorithm is correct provided that certain safety requirements for the input ontology hold: the ontology should be consistent, should not contain unsatisfiable atomic concepts, and should have only “safe” axioms (which in our terms means that they are local for the empty signature). In contrast, the algorithm we present here works for any ontology, including those containing “non-safe” axioms.
The growing interest in the notion of “modularity” in ontologies has been recently reflected in a workshop on modular ontologies held in conjunction with the International Semantic Web Conference (ISWC-2006).[5] Concerning the problem of ontology reuse, there have been various proposals for “safely” combining modules; most of these proposals, such as E-connections (Cuenca Grau, Parsia, & Sirin, 2006a), Distributed Description Logics (Borgida & Serafini, 2003) and Package-based Description Logics (Bao, Caragea, & Honavar, 2006), propose a specialized semantics for controlling the interaction between the importing and the imported modules in order to avoid side-effects. In contrast to these works, we assume here that reuse is performed by simply building the logical union of the axioms in the modules under the standard semantics, and we establish a collection of reasoning services, such as safety testing, to check for side-effects. The interested reader can find in the literature a detailed comparison between the different approaches for combining ontologies (Cuenca Grau & Kutz, 2007).
5. For information, see the homepage of the workshop: http://www.cild.iastate.edu/events/womo.html
8. Implementation and Proof of Concept
In this section, we provide empirical evidence of the appropriateness of locality for safety testing and module extraction. For this purpose, we have implemented a syntactic locality checker for the locality classes Õ^{r←∅}_{A←∅}() and Õ^{r←∆×∆}_{A←∆}(), as well as the algorithm for extracting modules given in Figure 5 from Section 6.
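As a rough illustration of what such a checker looks like, the following Python sketch (ours, not the implementation used in the experiments) tests the syntactic condition for the class Õ^{r←∅}_{A←∅}() on a toy fragment only (atomic concepts, ⊤, ⊥, conjunction and existential restriction), using a tuple-based concept encoding of our own; the full checker follows the grammar of Figure 3.

```python
from typing import FrozenSet, Tuple

# Toy concept encoding (our own, for illustration):
#   ("top",) | ("bot",) | ("atom", "A") | ("and", C1, C2) | ("some", "r", C)
Concept = Tuple
Axiom = Tuple[Concept, Concept]  # (C, D) read as C SubClassOf D

def _is_bot(c: Concept, s: FrozenSet[str]) -> bool:
    """C is interpreted as the empty set whenever symbols outside s are
    interpreted as empty (the bottom part of the locality grammar,
    restricted to this toy fragment)."""
    tag = c[0]
    if tag == "bot":
        return True
    if tag == "atom":
        return c[1] not in s
    if tag == "and":
        return _is_bot(c[1], s) or _is_bot(c[2], s)
    if tag == "some":
        return c[1] not in s or _is_bot(c[2], s)
    return False  # "top"

def _is_top(c: Concept, s: FrozenSet[str]) -> bool:
    """C is interpreted as the whole domain under the same substitution."""
    if c[0] == "top":
        return True
    if c[0] == "and":
        return _is_top(c[1], s) and _is_top(c[2], s)
    return False

def is_local(axiom: Axiom, s: FrozenSet[str]) -> bool:
    """Syntactic locality of a subsumption C SubClassOf D w.r.t. signature s."""
    c, d = axiom
    return _is_bot(c, s) or _is_top(d, s)

# Example: "A SubClassOf exists r.B" is local w.r.t. {"B"}, since A is
# replaced by the empty concept and the subsumption then holds trivially.
assert is_local((("atom", "A"), ("some", "r", ("atom", "B"))), frozenset({"B"}))
```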
First, we show that the locality class Õ^{r←∅}_{A←∅}() provides a powerful sufficiency test for safety which works for many real-world ontologies. Second, we show that Õ^{r←∅}_{A←∅}()-based modules are typically very small compared to both the size of the ontology and the modules extracted using other techniques. Third, we report on our implementation in the ontology editor Swoop (Kalyanpur, Parsia, Sirin, Cuenca Grau, & Hendler, 2006) and illustrate the combination of the modularization procedures based on the classes Õ^{r←∅}_{A←∅}() and Õ^{r←∆×∆}_{A←∆}().
8.1 Locality for Testing Safety
We have run our syntactic locality checker for the class Õ^{r←∅}_{A←∅}() over the ontologies from a library of 300 ontologies of various sizes and complexity, some of which import each other (Gardiner, Tsarkov, & Horrocks, 2006).[6] For all ontologies P that import an ontology Q, we check whether P belongs to the locality class for S = Sig(P) ∩ Sig(Q).
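In outline, the experiment amounts to the following loop (a sketch with hypothetical accessors for the library; `is_local` can be any per-axiom syntactic locality test such as the one sketched above).

```python
from typing import Callable, Dict, FrozenSet, Set

def find_non_local_importers(
    axioms: Dict[str, Set[object]],        # ontology name -> its axioms
    signature: Dict[str, FrozenSet[str]],  # ontology name -> its signature
    imports: Dict[str, Set[str]],          # ontology name -> names of imported ontologies
    is_local: Callable[[object, FrozenSet[str]], bool],
) -> Set[str]:
    """Return the importing ontologies that are not syntactically local (and
    hence not provably safe) for the signature they share with some import."""
    non_local: Set[str] = set()
    for p, imported in imports.items():
        for q in imported:
            shared = signature[p] & signature[q]
            if not all(is_local(ax, shared) for ax in axioms[p]):
                non_local.add(p)
    return non_local
```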
It turned out that, of the 96 ontologies in the library that import other ontologies, all but 11 were syntactically local for S (and hence also semantically local for S). Of the 11 non-local ontologies, 7 are written in the OWL-Full species of OWL (Patel-Schneider et al., 2004), to which our framework does not yet apply. The remaining 4 non-localities are due to the presence of so-called mapping axioms of the form A ≡ B′, where A ∉ S and B′ ∈ S. Note that these axioms simply indicate that the atomic concepts A and B′ in the two ontologies under consideration are synonyms. Indeed, we were able to easily repair these non-localities as follows: we replace every occurrence of A in P with B′ and then remove the mapping axiom from the ontology. After this transformation, all 4 non-local ontologies turned out to be local.
8.2 Extraction of Modules
In this section, we compare three modularization[7] algorithms that we have implemented using Manchester's OWL API:[8]
A1: The Prompt-Factor algorithm (Noy & Musen, 2003);
A2: The segmentation algorithm proposed by Cuenca Grau et al. (2006b);
A3: Our modularization algorithm (Algorithm 5), based on the locality class Õ^{r←∅}_{A←∅}().
The aim of the experiments described in this section is not to provide a thorough comparison of the quality of existing modularization algorithms, since each algorithm extracts “modules” according to its own requirements, but rather to give an idea of the typical size of the modules extracted from real ontologies by each of the algorithms.
6. The library is available at http://www.cs.man.ac.uk/~horrocks/testing/
7. In this section, by “module” we understand the result of the considered modularization procedures, which may not necessarily be a module according to Definition 10 or 12.
8. http://sourceforge.net/projects/owlapi
Figure 6: Distribution of the sizes of syntactic locality-based modules for (a) NCI, (b) GALEN-Small, (c) SNOMED, (d) GALEN-Full, (e) small modules of GALEN-Full, and (f) large modules of GALEN-Full. The X-axis gives the number of concepts in the modules and the Y-axis the number of modules extracted for each size range.
Ontology Atomic A1: Prompt-Factor A2: Segmentation A3: Loc.-based mod.
Concepts Max.(%) Avg.(%) Max.(%) Avg.(%) Max.(%) Avg.(%)
NCI 27772 87.6 75.84 55 30.8 0.8 0.08
SNOMED 255318 100 100 100 100 0.5 0.05
GO 22357 1 0.1 1 0.1 0.4 0.05
SUMO 869 100 100 100 100 2 0.09
GALEN-Small 2749 100 100 100 100 10 1.7
GALEN-Full 24089 100 100 100 100 29.8 3.5
SWEET 1816 96.4 88.7 83.3 51.5 1.9 0.1
DOLCE-Lite 499 100 100 100 100 37.3 24.6
Table 6: Comparison of Different Modularization Algorithms
As a test suite, we have collected a set of well-known ontologies available on the Web,
which we divided into two groups:
Simple. In this group, we have included the National Cancer Institute (NCI) Ontology,[9] the SUMO Upper Ontology,[10] the Gene Ontology (GO),[11] and the SNOMED Ontology.[12]
These ontologies are expressed in a simple ontology language and are of a simple structure;
in particular, they do not contain GCIs, but only definitions.
Complex. This group contains the well-known GALEN ontology (GALEN-Full),[13] the DOLCE upper ontology (DOLCE-Lite),[14] and NASA's Semantic Web for Earth and Environmental Terminology (SWEET).[15] These ontologies are complex since they use many constructors from OWL DL and/or include a significant number of GCIs. In the case of GALEN, we have also considered a version, GALEN-Small, that has commonly been used as a benchmark for OWL reasoners. This ontology is almost 10 times smaller than the original GALEN-Full ontology, yet similar in structure.
Since there is no benchmark for ontology modularization and only a few use cases are available, there is no systematic way of evaluating modularization procedures. Therefore, we have designed a simple experimental setup which, even if it does not necessarily reflect an actual ontology reuse scenario, should give an idea of typical module sizes. For each ontology, we took the set of its atomic concepts and extracted a module for every atomic concept. We compare the maximal and average sizes of the extracted modules.
It is worth emphasizing here that our algorithm A3 does not just extract a module for
the input atomic concept: the extracted fragment is also a module for its whole signature,
which typically includes a fair amount of other concepts and roles.
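The setup boils down to the following loop (our sketch; `extract_module` is again a hypothetical locality-based extractor, and size is measured naively as the number of axioms, whereas Table 6 reports percentages of atomic concepts).

```python
from typing import Callable, FrozenSet, Iterable, Set, Tuple

def module_size_statistics(
    ontology: Set[str],
    atomic_concepts: Iterable[str],
    extract_module: Callable[[Set[str], FrozenSet[str]], Set[str]],
) -> Tuple[float, float]:
    """Extract one module per atomic concept and report the maximal and
    average module size as a percentage of the ontology size."""
    sizes = [len(extract_module(ontology, frozenset({a}))) for a in atomic_concepts]
    if not sizes or not ontology:
        return 0.0, 0.0
    total = len(ontology)
    return 100.0 * max(sizes) / total, 100.0 * (sum(sizes) / len(sizes)) / total
```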
9. http://www.mindswap.org/2003/CancerOntology/nciOncology.owl
10. http://ontology.teknowledge.com/
11. http://www.geneontology.org
12. http://www.snomed.org
13. http://www.openclinical.org/prj_galen.html
14. http://www.loa-cnr.it/DOLCE.html
15. http://sweet.jpl.nasa.gov/ontology/
Figure 7: The Module Extraction Functionality in Swoop: (a) the concepts DNA Sequence and Microanatomy in NCI; (b) the Õ^{r←∅}_{A←∅}(S)-based module for DNA Sequence in NCI; (c) the Õ^{r←∆×∆}_{A←∆}(S)-based module for Micro Anatomy in the fragment of 7b.
The results we have obtained are summarized in Table 6. The table provides the size
of the largest module and the average size of the modules obtained using each of these
algorithms. In the table, we can clearly see that locality-based modules are significantly
smaller than the ones obtained using the other methods; in particular, in the case of SUMO,
DOLCE, GALEN and SNOMED, the algorithms A1 and A2 retrieve the whole ontology as
the module for each input signature. In contrast, the modules we obtain using our algorithm
are significantly smaller than the size of the input ontology.
For NCI, SNOMED, GO and SUMO, we have obtained very small locality-based mo-
dules. This can be explained by the fact that these ontologies, even if large, are simple
in structure and logical expressivity. For example, in SNOMED, the largest locality-based
module obtained is approximately 0.5% of the size of the ontology, and the average size of
the modules is 1/10 of the size of the largest module. In fact, most of the modules we have
obtained for these ontologies contain less than 40 atomic concepts.
For GALEN, SWEET and DOLCE, the locality-based modules are larger. Indeed, the
largest module in GALEN-Small is 1/10 of the size of the ontology, as opposed to 1/200
in the case of SNOMED. For DOLCE, the modules are even bigger—1/3 of the size of
the ontology—which indicates that the dependencies between the different concepts in the
ontology are very strong and complicated. The SWEET ontology is an exception: even
though the ontology uses most of the constructors available in OWL, the ontology is heavily
underspecified, which yields small modules.
In Figure 6, we have a more detailed analysis of the modules for NCI, SNOMED,
GALEN-Small and GALEN-Full. Here, the X-axis represents the size ranges of the obtained modules and the Y-axis the number of modules whose size is within the given range.
The plots thus give an idea of the distribution for the sizes of the different modules.
For SNOMED, NCI and GALEN-Small, we can observe that the size of the modules
follows a smooth distribution. In contrast, for GALEN-Full, we have obtained a large number
of small modules and a significant number of very big ones, but no medium-sized modules
in-between. This abrupt distribution indicates the presence of a big cycle of dependencies in
the ontology. The presence of this cycle can be spotted more clearly in Figure 6(f); the figure
shows that there is a large number of modules of size in between 6515 and 6535 concepts.
This cycle does not occur in the simplified version of GALEN and thus we obtain a smooth
distribution for that case. In contrast, in Figure 6(e) we can see that the distribution for
the “small” modules in GALEN-Full is smooth and much more similar to the one for the
simplified version of GALEN.
The considerable differences in the size of the modules extracted by algorithms A1 –
A3 are due to the fact that these algorithms extract modules according to different require-
ments. Algorithm A1 produces a fragment of the ontology that contains the input atomic
concept and is syntactically separated from the rest of the axioms—that is, the fragment
and the rest of the ontology have disjoint signatures. Algorithm A2 extracts a fragment of
the ontology that is a module for the input atomic concept and is additionally semantically
separated from the rest of the ontology: no entailment between an atomic concept in the
module and an atomic concept not in the module should hold in the original ontology. Since
our algorithm is based on weaker requirements, it is to be expected that it extracts smaller
modules. What is surprising is that the difference in the size of modules is so significant.
In order to explore the use of our results for ontology design and analysis, we have
integrated our algorithm for extracting modules in the ontology editor Swoop (Kalyanpur
et al., 2006). The user interface of Swoop allows for the selection of an input signature and
the retrieval of the corresponding module.[16]
Figure 7a shows the classification of the concepts DNA Sequence and Microanatomy in the NCI ontology. Figure 7b shows the minimal Õ^{r←∅}_{A←∅}()-based module for DNA Sequence, as obtained in Swoop. Recall that, according to Corollary 47, a Õ^{r←∅}_{A←∅}()-based module for an atomic concept contains all necessary axioms for, at least, all its (entailed) super-concepts in O. Thus, this module can be seen as the “upper ontology” for A in O. In fact, Figure 7 shows that this module contains only the concepts in the “path” from DNA Sequence to the top level concept Anatomy Kind. This suggests that the knowledge in NCI about the particular concept DNA Sequence is very shallow, in the sense that NCI only “knows” that a DNA Sequence is a macromolecular structure, which, in the end, is an anatomy kind. If one wants to refine the module by including only the information in the ontology necessary to entail the “path” from DNA Sequence to Micro Anatomy, one could extract the Õ^{r←∆×∆}_{A←∆}()-based module for Micro Anatomy in the fragment of Figure 7b. By Corollary 47, this module contains all the sub-concepts of Micro Anatomy in the previously extracted module. The resulting module is shown in Figure 7c.
16. The tool can be downloaded at http://code.google.com/p/swoop/
9. Conclusion
In this paper, we have proposed a set of reasoning problems that are relevant for ontology
reuse. We have established the relationships between these problems and studied their com-
putability. Using existing results (Lutz et al., 2007) and the results obtained in Section 4, we
have shown that these problems are undecidable or algorithmically unsolvable for the logic
underlying OWL DL. We have dealt with these problems by defining sufficient conditions
for a solution to exist, which can be computed in practice. We have introduced and studied
the notion of a safety class, which characterizes any sufficiency condition for safety of an
ontology w.r.t. a signature. In addition, we have used safety classes to extract modules from
ontologies.
For future work, we would like to study other approximations which can produce small
modules in complex ontologies like GALEN, and exploit modules to optimize ontology
reasoning.
References
Baader, F., Brandt, S., & Lutz, C. (2005). Pushing the EL envelope. In IJCAI-05, Pro-
ceedings of the Nineteenth International Joint Conference on Artificial Intelligence,
Edinburgh, Scotland, UK, July 30-August 5, 2005, pp. 364–370. Professional Book
Center.
Baader, F., Calvanese, D., McGuinness, D. L., Nardi, D., & Patel-Schneider, P. F. (Eds.).
(2003). The Description Logic Handbook: Theory, Implementation, and Applications.
Cambridge University Press.
Bao, J., Caragea, D., & Honavar, V. (2006). On the semantics of linking and importing in
modular ontologies. In Proceedings of the 5th International Semantic Web Conference
(ISWC-2006), Athens, GA, USA, November 5-9, 2006, Vol. 4273 of Lecture Notes in
Computer Science, pp. 72–86.
Börger, E., Grädel, E., & Gurevich, Y. (1997). The Classical Decision Problem. Perspectives
of Mathematical Logic. Springer-Verlag. Second printing (Universitext) 2001.
Borgida, A., & Serafini, L. (2003). Distributed description logics: Assimilating information
from peer sources. J. Data Semantics, 1, 153–184.
Cuenca Grau, B., Horrocks, I., Kazakov, Y., & Sattler, U. (2007). A logical framework
for modularity of ontologies. In IJCAI-07, Proceedings of the Twentieth International
Joint Conference on Artificial Intelligence, Hyderabad, India, January 2007, pp. 298–
304. AAAI.
Cuenca Grau, B., & Kutz, O. (2007). Modular ontology languages revisited. In Proceedings
of the Workshop on Semantic Web for Collaborative Knowledge Acquisition, Hyder-
abad, India, January 5, 2007.
Cuenca Grau, B., Parsia, B., & Sirin, E. (2006a). Combining OWL ontologies using E-
connections. J. Web Sem., 4(1), 40–59.
Cuenca Grau, B., Parsia, B., Sirin, E., & Kalyanpur, A. (2006b). Modularity and web ontolo-
gies. In Proceedings of the Tenth International Conference on Principles of Knowledge
Representation and Reasoning (KR-2006), Lake District of the United Kingdom, June
2-5, 2006, pp. 198–209. AAAI Press.
Cuenca Grau, B., Horrocks, I., Kazakov, Y., & Sattler, U. (2007). Just the right amount:
extracting modules from ontologies. In Proceedings of the 16th International Confe-
rence on World Wide Web (WWW-2007), Banff, Alberta, Canada, May 8-12, 2007,
pp. 717–726. ACM.
Cuenca Grau, B., Horrocks, I., Kutz, O., & Sattler, U. (2006). Will my ontologies fit
together?. In Proceedings of the 2006 International Workshop on Description Logics
(DL-2006), Windermere, Lake District, UK, May 30 - June 1, 2006, Vol. 189 of CEUR
Workshop Proceedings. CEUR-WS.org.
Gardiner, T., Tsarkov, D., & Horrocks, I. (2006). Framework for an automated compari-
son of description logic reasoners. In Proceedings of the 5th International Semantic
Web Conference (ISWC-2006), Athens, GA, USA, November 5-9, 2006, Vol. 4273 of
Lecture Notes in Computer Science, pp. 654–667. Springer.
Ghilardi, S., Lutz, C., & Wolter, F. (2006). Did I damage my ontology? A case for con-
servative extensions in description logics. In Proceedings of the Tenth International
Conference on Principles of Knowledge Representation and Reasoning (KR-2006),
Lake District of the United Kingdom, June 2-5, 2006, pp. 187–197. AAAI Press.
Horrocks, I., Patel-Schneider, P. F., & van Harmelen, F. (2003). From SHIQ and RDF to
OWL: the making of a web ontology language. J. Web Sem., 1(1), 7–26.
Horrocks, I., & Sattler, U. (2005). A tableaux decision procedure for SHOIQ. In Proceedings
of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI-05),
Edinburgh, Scotland, UK, July 30-August 5, 2005, pp. 448–453. Professional Book
Center.
Kalfoglou, Y., & Schorlemmer, M. (2003). Ontology mapping: The state of the art. The
Knowledge Engineering Review, 18, 1–31.
Kalyanpur, A., Parsia, B., Sirin, E., Cuenca Grau, B., & Hendler, J. A. (2006). Swoop: A
web ontology editing browser. J. Web Sem., 4(2), 144–153.
Lutz, C., Walther, D., & Wolter, F. (2007). Conservative extensions in expressive description
logics. In Proceedings of the Twentieth International Joint Conference on Artificial
Intelligence (IJCAI-07), Hyderabad, India, January 2007, pp. 453–459. AAAI.
Lutz, C., & Wolter, F. (2007). Conservative extensions in the lightweight description logic
EL. In Proceedings of the 21st International Conference on Automated Deduction
(CADE-21), Bremen, Germany, July 17-20, 2007, Vol. 4603 of Lecture Notes in Com-
puter Science, pp. 84–99. Springer.
Möller, R., & Haarslev, V. (2003). Description logic systems. In The Description Logic
Handbook, chap. 8, pp. 282–305. Cambridge University Press.
Motik, B. (2006). Reasoning in Description Logics using Resolution and Deductive
Databases. Ph.D. thesis, Universität Karlsruhe (TH), Karlsruhe, Germany.
Noy, N. F. (2004a). Semantic integration: A survey of ontology-based approaches. SIGMOD
Record, 33(4), 65–70.
Noy, N. F. (2004b). Tools for mapping and merging ontologies. In Staab & Studer (2004), pp. 365–384.
Noy, N., & Musen, M. (2003). The PROMPT suite: Interactive tools for ontology mapping
and merging. Int. Journal of Human-Computer Studies, 59(6).
Patel-Schneider, P., Hayes, P., & Horrocks, I. (2004). Web ontology language OWL Abstract
Syntax and Semantics. W3C Recommendation.
Rector, A., & Rogers, J. (1999). Ontological issues in using a description logic to represent
medical concepts: Experience from GALEN. In IMIA WG6 Workshop, Proceedings.
Schmidt-Schauß, M., & Smolka, G. (1991). Attributive concept descriptions with comple-
ments. Artificial Intelligence, 48(1), 1–26.
Seidenberg, J., & Rector, A. L. (2006). Web ontology segmentation: analysis, classification
and use. In Proceedings of the 15th international conference on World Wide Web
(WWW-2006), Edinburgh, Scotland, UK, May 23-26, 2006, pp. 13–22. ACM.
Sirin, E., & Parsia, B. (2004). Pellet system description. In Proceedings of the 2004 In-
ternational Workshop on Description Logics (DL2004), Whistler, British Columbia,
Canada, June 6-8, 2004, Vol. 104 of CEUR Workshop Proceedings. CEUR-WS.org.
Staab, S., & Studer, R. (Eds.). (2004). Handbook on Ontologies. International Handbooks
on Information Systems. Springer.
Stuckenschmidt, H., & Klein, M. (2004). Structure-based partitioning of large class hier-
archies. In Proceedings of the Third International Semantic Web Conference (ISWC-
2004), Hiroshima, Japan, November 7-11, 2004, Vol. 3298 of Lecture Notes in Com-
puter Science, pp. 289–303. Springer.
Tobies, S. (2000). The complexity of reasoning with cardinality restrictions and nominals
in expressive description logics. J. Artif. Intell. Res. (JAIR), 12, 199–217.
Tsarkov, D., & Horrocks, I. (2006). FaCT++ description logic reasoner: System description.
In Proceedings of the Third International Joint Conference on Automated Reasoning
(IJCAR 2006), Seattle, WA, USA, August 17-20, 2006, Vol. 4130 of Lecture Notes in
Computer Science, pp. 292–297. Springer.
318

Cuenca Grau, Horrocks, Kazakov, & Sattler

2007), which has been motivated by the above-mentioned application needs. In this paper, we focus on the use of modularity to support the partial reuse of ontologies. In particular, we consider the scenario in which we are developing an ontology P and want to reuse a set S of symbols from a “foreign” ontology Q without changing their meaning. For example, suppose that an ontology engineer is building an ontology about research projects, which specifies different types of projects according to the research topic they focus on. The ontology engineer in charge of the projects ontology P may use terms such as Cystic Fibrosis and Genetic Disorder in his descriptions of medical research projects. The ontology engineer is an expert on research projects; he may be unfamiliar, however, with most of the topics the projects cover and, in particular, with the terms Cystic Fibrosis and Genetic Disorder. In order to complete the projects ontology with suitable definitions for these medical terms, he decides to reuse the knowledge about these subjects from a wellestablished medical ontology Q. The most straightforward way to reuse these concepts is to construct the logical union P ∪ Q of the axioms in P and Q. It is reasonable to assume that the additional knowledge about the medical terms used in both P and Q will have implications on the meaning of the projects defined in P; indeed, the additional knowledge about the reused terms provides new information about medical research projects which are defined using these medical terms. Less intuitive is the fact that importing Q may also result in new entailments concerning the reused symbols, namely Cystic Fibrosis and Genetic Disorder. Since the ontology engineer of the projects ontology is not an expert in medicine and relies on the designers of Q, it is to be expected that the meaning of the reused symbols is completely specified in Q; that is, the fact that these symbols are used in the projects ontology P should not imply that their original “meaning” in Q changes. If P does not change the meaning of these symbols in Q, we say that P ∪ Q is a conservative extension of Q In realistic application scenarios, it is often unreasonable to assume the foreign ontology Q to be fixed; that is, Q may evolve beyond the control of the modelers of P. The ontology engineers in charge of P may not be authorized to access all the information in Q or, most importantly, they may decide at a later time to reuse the symbols Cystic Fibrosis and Genetic Disorder from a medical ontology other than Q. For application scenarios in which the external ontology Q may change, it is reasonable to “abstract” from the particular Q under consideration. In particular, given a set S of “external symbols”, the fact that the axioms in P do not change the meaning of any symbol in S should be independent of the particular meaning ascribed to these symbols by Q. In that case, we will say that P is safe for S. Moreover, even if P “safely” reuses a set of symbols from an ontology Q, it may still be the case that Q is a large ontology. In particular, in our example, the foreign medical ontology may be huge, and importing the whole ontology would make the consequences of the additional information costly to compute and difficult for our ontology engineers in charge of the projects ontology (who are not medical experts) to understand. In practice, therefore, one may need to extract a module Q1 of Q that includes only the relevant information. 
Ideally, this module should be as small as possible while still guarantee to capture the meaning of the terms used; that is, when answering queries against the research projects ontology, importing the module Q1 would give exactly the same answers as if the whole medical ontology Q had been imported. In this case, importing the module will have the
274

Modular Reuse of Ontologies: Theory and Practice

same observable effect on the projects ontology as importing the entire ontology. Furthermore, the fact that Q1 is a module in Q should be independent of the particular P under consideration. The contributions of this paper are as follows: 1. We propose a set of tasks that are relevant to ontology reuse and formalize them as reasoning problems. To this end, we introduce the notions of conservative extension, safety and module for a very general class of logic-based ontology languages. 2. We investigate the general properties of and relationships between the notions of conservative extension, safety, and module and use these properties to study the relationships between the relevant reasoning problems we have previously identified. 3. We consider Description Logics (DLs), which provide the formal underpinning of the W3C Web Ontology Language (OWL), and study the computability of our tasks. We show that all the tasks we consider are undecidable or algorithmically unsolvable for the description logic underlying OWL DL—the most expressive dialect of OWL that has a direct correspondence to description logics. 4. We consider the problem of deciding safety of an ontology for a signature. Given that this problem is undecidable for OWL DL, we identify sufficient conditions for safety, which are decidable for OWL DL—that is, if an ontology satisfies our conditions then it is safe; the converse, however, does not necessarily hold. We propose the notion of a safety class, which characterizes any sufficiency condition for safety, and identify a family of safety classes–called locality—which enjoys a collection of desirable properties. 5. We next apply the notion of a safety class to the task of extracting modules from ontologies; we provide various modularization algorithms that are appropriate to the properties of the particular safety class in use. 6. We present empirical evidence of the practical benefits of our techniques for safety checking and module extraction. This paper extends the results in our previous work (Cuenca Grau, Horrocks, Kutz, & Sattler, 2006; Cuenca Grau et al., 2007; Cuenca Grau, Horrocks, Kazakov, & Sattler, 2007).

2. Preliminaries
In this section we introduce description logics (DLs) (Baader, Calvanese, McGuinness, Nardi, & Patel-Schneider, 2003), a family of knowledge representation formalisms which underlie modern ontology languages, such as OWL DL (Patel-Schneider, Hayes, & Horrocks, 2004). A hierarchy of commonly-used description logics is summarized in Table 1. The syntax of a description logic L is given by a signature and a set of constructors. A signature (or vocabulary) Sg of a DL is the (disjoint) union of countably infinite sets AC of atomic concepts (A, B, . . . ) representing sets of elements, AR of atomic roles (r, s, . . . ) representing binary relations between elements, and Ind of individuals (a, b, c, . . . ) representing constants. We assume the signature to be fixed for every DL.
275

Cuenca Grau, Horrocks, Kazakov, & Sattler

Rol RBox EL r ALC – – S – – Trans(r) + I r− + H R 1 R2 + F Funct(R) + N ( n S) + Q ( n S.C) + O {i} Here r ∈ AR, A ∈ AC, a, b ∈ Ind, R(i) ∈ Rol, C(i) ∈ Con, n ≥ 1 and S ∈ Rol is a simple role (see (Horrocks & Sattler, 2005)). Table 1: The hierarchy of standard description logics

DLs

Constructors Con , A, C1 C2 , ∃R.C – –, ¬C – –

Axioms [ Ax ] TBox ABox A ≡ C, C1 C2 a : C, r(a, b) – – – – – – – –

Every DL provides constructors for defining the set Rol of (general) roles (R, S, . . . ), the set Con of (general) concepts (C, D, . . . ), and the set Ax of axioms (α, β, . . . ) which is a union of role axioms (RBox), terminological axioms (TBox) and assertions (ABox). EL (Baader, Brandt, & Lutz, 2005) is a simple description logic which allows one to construct complex concepts using conjunction C1 C2 and existential restriction ∃R.C starting from atomic concepts A, roles R and the top concept . EL provides no role constructors and no role axioms; thus, every role R in EL is atomic. The TBox axioms of EL can be either concept definitions A ≡ C or general concept inclusion axioms (GCIs) C1 C2 . EL assertions are either concept assertions a : C or role assertions r(a, b). In this paper we assume the concept definition A ≡ C is an abbreviation for two GCIs A C and C A. The basic description logic ALC (Schmidt-Schauß & Smolka, 1991) is obtained from EL by adding the concept negation constructor ¬C. We introduce some additional constructors as abbreviations: the bottom concept ⊥ is a shortcut for ¬ , the concept disjunction C1 C2 stands for ¬(¬C1 ¬C2 ), and the value restriction ∀R.C stands for ¬(∃R.¬C). In contrast to EL, ALC can express contradiction axioms like ⊥. The logic S is an extension of ALC where, additionally, some atomic roles can be declared to be transitive using a role axiom Trans(r). Further extensions of description logics add features such as inverse roles r− (indicated by appending a letter I to the name of the logic), role inclusion axioms (RIs) R1 R2 (+H), functional roles Funct(R) (+F), number restrictions ( n S), with n ≥ 1, (+N ), qualified number restrictions ( n S.C), with n ≥ 1, (+Q)1 , and nominals {a} (+O). Nominals make it possible to construct a concept representing a singleton set {a} (a nominal concept) from an individual a. These extensions can be used in different combinations; for example ALCO is an extension of ALC with nominals; SHIQ is an extension of S with role hierarchies,
1. the dual constructors ( n S) and ( n S.C) are abbreviations for ¬( n + 1 S) and ¬( n + 1 S.¬C), respectively

276

z ∈ rI ⇒ x. where ∆I is a non-empty set. We say that two sets of interpretations I and J are equal modulo S (notation: I|S = J|S ) if for every I ∈ I there exits J ∈ J such that J |S = I|S and for every J ∈ J there exists I ∈ I such that I|S = J |S . are syntactic variants thereof. z ∈ ∆I [ x. b) iff aI . bI ∈ rI . I |= Trans(r) iff ∀x. y ∈ RI ∧ y ∈ C I } ≥ n } {a}I = {aI } The satisfaction relation I |= α between an interpretation I and a DL axiom α (read as I satisfies α.Modular Reuse of Ontologies: Theory and Practice inverse roles and qualified number restrictions.C)I (¬C)I (r− )I = = = = = ∆ C I ∩ DI {x ∈ ∆I | ∃y. y ∈ RI } ≥ n } ( n R. In this paper. atomic roles and individuals that occur in O (respectively in α). I I R2 iff R1 ⊆ R2 . z ∈ rI ]. called the domain of the interpretation. An interpretation I is a model of an ontology O if I satisfies all axioms in O. to a certain extent. x ∈ rI } ( n R)I = { x ∈ ∆I | {y ∈ ∆I | x. I |= Funct(R) iff ∀x. The main reasoning task for ontologies is entailment: given an ontology O and an axiom α. OWL DL corresponds to SHOIN (Horrocks. I |= r(a. we say that an axiom α (an ontology O) is valid in I if every interpretation I ∈ I is a model of α (respectively O). are based on description logics and. ·I ). In particular. y | y. & van Harmelen. and to every a ∈ Ind an element aI ∈ ∆I . y ∈ RI ∧ y ∈ C I } ∆I \ C I { x. to every r ∈ AR a binary relation rI ⊆ ∆I × ∆I . The signature of an ontology O (of an axiom α) is the set Sig(O) (Sig(α)) of atomic concepts. 277 . and SHOIQ is the DL that uses all the constructors and axiom types we have presented.C)I = { x ∈ ∆I | {y ∈ ∆I | x. y ∈ RI ∧ x. I |= a : C iff aI ∈ C I . The interpretation function ·I is extended to complex roles and concepts via DLconstructors as follows: ( )I (C D)I (∃R. z ∈ ∆I [ x. such as OWL. ·I ) and J = (∆J . Note that the sets AC. We say that two interpretations I = (∆I . y. Patel-Schneider. and ·I is the interpretation function that assigns: to every A ∈ AC a subset AI ⊆ ∆I . Given a set I of interpretations. is implied by the empty ontology). z ∈ RI ⇒ y = z ]. An ontology O implies an axiom α (written O |= α) if I |= α for every model I of O. check if O implies α. 2003). The logical entailment |= is defined using the usual Tarski-style set-theoretic semantics for description logics as follows. y. ·J ) coincide on the subset S of the signature (notation: I|S = J |S ) if ∆I = ∆J and X I = X J for every X ∈ S. we assume an ontology O based on a description logic L to be a finite set of axioms in L. or I is a model of α) is defined as follows: I |= C1 I |= R1 I I C2 iff C1 ⊆ C2 . equivalently. Modern ontology languages. x. y ∈ rI ∧ y. An interpretation I is a pair I = (∆I . An axiom α is a tautology if it is valid in the set of all interpretations (or. AR and Ind are not defined by the interpretation I but assumed to be fixed for the ontology language (DL).

with most of the topics the projects cover and.3) and module (Section 3.Pancreas Genetic Fibrosis ∃associated With. we motivate and define the reasoning tasks to be investigated in the remainder of the paper. Horrocks. Kazakov. In order to complete the projects ontology with suitable definitions 278 . in Section 3. we assume that L is an ontology language based on a description logic. Project :: ∃has Focus. as given by the axioms P1 and P2 in Figure 1. Based on this application scenario.Genetic Disorder Ontology of medical terms Q: M1 Cystic Fibrosis ≡ Fibrosis M2 Genetic Fibrosis ≡ Fibrosis M3 Fibrosis M4 Genetic Fibrosis M5 DEFBI Gene ∃located In. our tasks are based on the notions of a conservative extension (Section 3. we will define formally the class of ontology languages for which the given definitions of conservative extensions.4).Cuenca Grau. for example. The ontology engineer is an expert on research projects: he knows. In particular.2 and 3.2). The first one describes projects about genetic disorders.Cystic Fibrosis ∃has Focus. however.6. that every instance of EUProject must be an instance of Project (the concept-inclusion axiom P3) and that the role has Focus can be applied only to instances of Project (the domain axiom P4). safety (Sections 3. in particular. all the tasks we consider in this paper are summarized in Table 2.Pancreas Genetic Disorder Immuno Protein Gene Figure 1: Reusing medical terminology in an ontology on research projects 3. 3. Within this section.Genetic Disorder ∃has Focus. which specifies different types of projects according to the research topic they are concerned with. with the terms Cystic Fibrosis and Genetic Disorder mentioned in P1 and P2. These notions are defined relative to a language L. safety and modules apply.Cystic Fibrosis ∃has Origin. For the convenience of the reader.Genetic Origin ∃located In.Cystic Fibrosis Cystic Fibrosis EUProject ≡ EUProject Project Project (Genetic Disorder :: Cystic Fibrosis) ⊥ ∀ has Focus. we elaborate on the ontology reuse scenario sketched in Section 1.Genetic Origin ∃has Origin.1 A Motivating Example Suppose that an ontology engineer is in charge of a SHOIQ ontology on research projects. the second one describes European projects about cystic fibrosis. & Sattler Ontology of medical research projects P: P1 P2 P3 P4 E1 E2 Genetic Disorder Project ≡ Project EUProject ∃has Focus. Assume that the ontology engineer defines two concepts—Genetic Disorder Project and Cystic Fibrosis EUProject—in his ontology P. He may be unfamiliar. Ontology Integration and Knowledge Reuse In this section.

The most straightforward way to reuse these concepts is to import in P the ontology Q—that is. over the terms defined in the main ontology P. however. In order to illustrate such a situation. Importing additional axioms into an ontology may result in new logical consequences. The meaning of the reused terms. might change during the import. in (2). the ontology engineer has introduced modeling errors and. however. Using inclusion α from (1) and axioms P1–P3 from ontology P we can now prove that every instance of Cystic Fibrosis EUProject is also an instance of Genetic Disorder Project: P ∪ Q |= β := (Cystic Fibrosis EUProject Genetic Disorder Project) (2) This inclusion β. It is natural to expect that entailments like α in (1) from an imported ontology Q result in new logical consequences. it is easy to see that axioms M1–M4 in Q imply that every instance of Cystic Fibrosis is an instance of Genetic Disorder: Q |= α := (Cystic Fibrosis Genetic Disorder) (1) Indeed.Modular Reuse of Ontologies: Theory and Practice for these medical terms. For example. Suppose the ontology engineer has formalized the statements (3) and (4) in ontology P using axioms E1 and E2 respectively. suppose that the ontology engineer has learned about the concepts Genetic Disorder and Cystic Fibrosis from the ontology Q (including the inclusion (1)) and has decided to introduce additional axioms formalizing the following statements: “Every instance of Project is different from every instance of Genetic Disorder and every instance of Cystic Fibrosis. At this point. Suppose that Cystic Fibrosis and Genetic Disorder are described in an ontology Q containing the axioms M1-M5 in Figure 1. P |= β. perhaps due to modeling errors.” ::: “:::::::::::::: that has Focus on Cystic Fibrosis. the concept inclusion α1 := (Cystic Fibrosis Genetic Fibrosis) follows from axioms M1 and M2 as well as from axioms M1 and M3. Such a side effect would be highly undesirable for the modeling of ontology P since the ontology engineer of P might not be an expert on the subject of Q and is not supposed to alter the meaning of the terms defined in Q not even implicitly. does not follow from P alone—that is. axioms E1 and E2 do not correspond to (3) and (4): E1 actually formalizes the following statement: “Every instance of Project is different from every common instance of Genetic Disorder and Cystic Fibrosis”. even though it concerns the terms of primary scope in his projects ontology P. α follows from axioms α1 and M4. like β. as a consequence. The ontology engineer might be not aware of Entailment (2). that the meaning of the terms defined in Q changes as a consequence of the import since these terms are supposed to be completely specified within Q. intuitively. One would not expect. to add the axioms from Q to the axioms in P and work with the extended ontology P ∪ Q. he decides to reuse the knowledge about these subjects from a wellestablished and widely-used medical ontology. however. also has Focus on Genetic Disorder” Every project (3) (4) Note that the statements (3) and (4) can be thought of as adding new information about projects and. they should not change or constrain the meaning of the medical terms. and E2 expresses 279 .

t.t. which expresses the inconsistency of the concept Cystic Fibrosis. although axiom E1 does not correspond to fact (3). L.2 Conservative Extensions and Safety for an Ontology As argued in the previous section. L for S = Sig(O1 ). Note that.. L for every S1 ⊆ S. like the entailment of new axioms or even inconsistencies involving the reused vocabulary.r. from (1) and (5) we obtain P ∪ Q |= δ := (Cystic Fibrosis ⊥). To summarize. This requirement can be naturally formulated using the well-known notion of a conservative extension. & Sattler that “Every object that either has Focus on nothing.r. 3. Indeed. an ontology O is a deductive S-conservative extension of O1 for a signature S and language L if and only if every logical consequence α of O constructed using the language L and symbols only from S. We say that O is a deductive S-conservative extension of O1 w. especially when they do not cause inconsistencies in the ontology. The notion of a deductive conservative extension can be directly applied to our ontology reuse scenario. On the other hand. but also cause inconsistencies when used together with the imported axioms from Q. ♦ In other words.r.t. Horrocks. We say that O is a deductive conservative extension of O1 w. Let O and O1 ⊆ O be two Lontologies. it is still a consequence of (3) which means that it should not constrain the meaning of the medical terms. also has Focus on Genetic Disorder”.r. 2006. and S a signature over L.Cuenca Grau. an important requirement for the reuse of an ontology Q within an ontology P should be that P ∪ Q produces exactly the same logical consequences over the vocabulary of Q as Q alone does. The axioms E1 and E2 not only imply new statements about the medical terms. 2007). it constrains the meaning of medical terms. P |= ( Project). Definition 1 (Deductive Conservative Extension). L. we have seen that importing an external ontology can lead to undesirable side effects in our knowledge reuse scenario. or has Focus only on Cystic Fibrosis. Lutz. namely their disjointness: P |= γ := (Genetic Disorder Cystic Fibrosis ⊥) (5) The entailment (5) can be proved using axiom E2 which is equivalent to: ∃has Focus. Indeed. Lutz et al. then O is a deductive S1 -conservative extension of O1 w. L if O is a deductive S-conservative extension of O1 w. in fact. 280 . this implies (5). the axioms E1 and E2 together with axioms P1-P4 from P imply new axioms about the concepts Cystic Fibrosis and Genetic Disorder.t. is already a logical consequence of O1 . E2 is not a consequence of (4) and. if for every axiom α over L with Sig(α) ⊆ S. Kazakov. Note that if O is a deductive S-conservative extension of O1 w. In the next section we discuss how to formalize the effects we consider undesirable. the additional axioms in O \ O1 do not result into new logical consequences over the vocabulary S. Now. we have O |= α iff O1 |= α. & Wolter. together with axiom E1. that is. which has recently been investigated in the context of ontologies (Ghilardi.r.t. These kinds of modeling errors are difficult to detect.(Genetic Disorder ¬Cystic Fibrosis) (6) The inclusion (6) and P4 imply that every element in the domain must be a project—that is.

281 . given P consisting of axioms P1–P4.. however. but not vice versa: Proposition 4 (Model vs. that is. Deductive Conservative Extensions. 2007). L. Lutz et al. There exist two ALC ontologies O and O1 ⊆ O such that O is a deductive conservative extension of O1 w. determine if O is safe for O w. Let O and O1 ⊆ O be two L-ontologies and S a signature over L. there exists a model J of O such that I|S = J |S . as given in Definition 1.1 that. there exists an axiom δ = (Cystic Fibrosis ⊥) that uses only symbols in Sig(Q) such that Q |= δ but P ∪ Q |= δ. According to proposition 4. if O is a model S-conservative extension of O1 then O is a deductive S-conservative extension of O1 w. E1. L if O ∪ O is a deductive conservative extension of O w.r. We have shown in Section 3.t. Given L-ontologies O and O . for every model I of Q there exists a model J of P1 ∪ Q such that I|S = J |S for S = Sig(Q). It is possible. this means that P ∪ Q is not a deductive conservative extension of Q w..t. L. and a signature S over L. We demonstrate that P1 ∪ Q is a deductive conservative extension of Q. and E1 and an ontology Q consisting of axioms M1–M5 from Figure 1. For every two L-ontologies O. P1 ∪ Q is a deductive conservative extension of Q. ♦ The notion of model conservative extension in Definition 3 can be seen as a semantic counterpart for the notion of deductive conservative extension in Definition 1: the latter is defined in terms of logical entailment.t. ALC.t. an ontology O is a model S-conservative extension of O1 if for every model of O1 one can find a model of O over the same domain which interprets the symbols from S in the same way. however. ♦ Hence. The notion of model conservative extension. Lutz et al. to show that if the axiom E2 is removed from P then for the resulting ontology P1 = P \ {E2}.r. According to Definition 1. this notion can be used for proving that an ontology is a deductive conservative extension of another. it is sufficient to show that P1 ∪ Q is a model conservative extension of Q. L = ALC).r. that is.r. any language L in which δ can be expressed (e. we say that O is safe for O (or O imports O in a safe way) w. O1 ⊆ O. 2007) 1. L. and Q consisting of axioms M1–M5 from Figure 1. Example 5 Consider an ontology P1 consisting of axioms P1–P4. if for every model I of O1 . given L-ontologies O and O . whereas the former is defined in terms of models. We say that O is a model S-conservative extension of O1 .r. but O is not a model conservative extension of O1 .t.t. 2.Modular Reuse of Ontologies: Theory and Practice Definition 2 (Safety for an Ontology). The following notion is useful for proving deductive conservative extensions: Definition 3 (Model Conservative Extension. does not provide a complete characterization of deductive conservative extensions. We say that O is a model conservative extension of O1 if O is a model S-conservative extension of O1 for S = Sig(O1 ).g.r. Intuitively. E2. the first reasoning task relevant for our ontology reuse scenario can be formulated as follows: T1.

determine if O is safe for S w.r. we refer the interested reader to the literature (Lutz et al. since Sig(P) ∩ Sig(Q) ⊆ S = {Cystic Fibrosis. ♦ 3. For example. we have that O is safe for O w. β2 }. however. As seen in Section 3. O ∪ O is not a deductive conservative extension of O .. if for every L-ontology O with Sig(O) ∩ Sig(O ) ⊆ S. since Sig(O) ∩ Sig(O ) ⊆ S. Let O be a L-ontology and S a signature over L. that is. it is often convenient to keep Q separate from P and make its axioms available on demand via a reference. it is possible to show a stronger result. E2 from Figure 1. p. the ontology P is not safe for S w. EUProject and the atomic role has Focus.2.t. It is easy to check that all the axioms P1–P4.r.r. L. where β1 = ( Cystic Fibrosis) and β2 = (Genetic Disorder ⊥). Since E2 is equivalent to axiom (6). given an L-ontology O and a signature S over L.t. Project. Indeed E2. Genetic Disorder} w.r. an ontology O is safe for a signature S w. a language L if and only if it imports in a safe way any ontology O written in L that shares only symbols from S with O. the ontology engineers of the project ontology P may not be willing to depend on a particular version of Q. E1. Kazakov.t.t.r. For an example of Statement 2 of the Proposition. it is easy to see that O ∪ O is inconsistent. The associated reasoning problem is then formulated in the following way: T2. O ∪ O is a deductive conservative extension of O w. In practice. Hence. Genetic Disorder}. namely. P1 ∪ Q is a model conservative extension of Q. L. 2007. Hence. since the interpretation of the symbols in Q remains unchanged. Cystic Fibrosis EUProject. E1 are satisfied in J and hence J |= P1 . or may even decide at a later time to reuse the medical terms (Cystic Fibrosis and Genetic Disorder) from another medical ontology instead. in our ontology reuse scenario we have assumed that the reused ontology Q is fixed and the axioms of Q are just copied into P during the import.3 Safety of an Ontology for a Signature So far. all of which we interpret in J as the empty set. L. Moreover. this means that O is not safe for S. & Sattler The required model J can be constructed from I as follows: take J to be identical to I except for the interpretations of the atomic concepts Genetic Disorder Project.Cuenca Grau. We say that O is safe for S w.r. ♦ Intuitively. the ontology P consisting of axioms P1–P4. Horrocks.r.t. that any ontology O containing axiom E2 is not safe for S = {Cystic Fibrosis. which is not entailed by O . In order to formulate such condition. L = ALC. 455).t. L. In fact. By Definition 6. Consider an ontology O = {β1 . we abstract from the particular ontology Q to be imported and focus instead on the symbols from Q that are to be reused: Definition 6 (Safety for a Signature). Therefore. This makes it possible to continue developing P and Q independently. in our knowledge reuse scenario. According to Definition 6. we have I|Sig(Q) = J |Sig(Q) and J |= Q. for many application scenarios it is important to develop a stronger safety condition for P which depends as little as possible on the particular ontology Q to be reused. β1 and β2 imply the contradiction α = ( ⊥).t. L. does not import Q consisting of axioms M1–M5 from Figure 1 in a safe way for L = ALC. 282 .

such that O ∪ O is not a deductive conservative extension of O . The following lemma introduces a property which relates the notion of model conservative extension with the notion of safety for a signature. however. O1 ⊆ O. let I be a model of O1 ∪ O . Then O ∪ O is a model S -conservative extension of O1 ∪ O for S = S ∪ Sig(O ). we have I|S = J |S . In order to show that O∪O is a model S -conservative extension of O1 ∪O according to Definition 3. Proof. Lemma 7 allows us to identify a condition sufficient to ensure the safety of an ontology for a signature: Proposition 8 (Safety for a Signature vs.r. L = SHOIQ. and O be L-ontologies and S a signature over L such that O is a model S-conservative extension of O1 and Sig(O) ∩ Sig(O ) ⊆ S. that is. As shown in Example 5. we have that O ∪ O is a deductive conservative extension of O . Genetic Disorder} w. since J |S = I|S . Then O is safe for S w.r. It turns out that the notion of model conservative extensions can also be used for this purpose. L according to Definition 6. ♦ 283 .Modular Reuse of Ontologies: Theory and Practice It is clear how one could prove that an ontology O is not safe for a signature S: simply find an ontology O with Sig(O) ∩ Sig(O ) ⊆ S. () Indeed. Proof. L. that is.t. we have J |= O . () Since O is a model S-conservative extension of O1 . Intuitively. such interpretation J always exists. we have that O∪O is a model S -conservative extension of O1 ∪O = O for S = S ∪ Sig(O ). by Lemma 7. Lemma 7 Let O. for every interpretation I there exists a model J of O such that J |S = I|S . since O is a model S-conservative extension of O1 = ∅. how one could prove that O is safe for S. and Sig(O)∩Sig(O ) ⊆ S. by Definition 3 there exists a model J1 of O such that I|S = J1 |S . we have J |= O. and I |= O . as was required to prove ( ). By Proposition 8. Since J |Sig(O) = J1 |Sig(O) and J1 |= O. Consider the model J obtained from I as in Example 5. In particular. Example 9 Let P1 be the ontology consisting of axioms P1–P4.r. We need to demonstrate that O ∪ O is a deductive conservative extension of O w.t. it says that the notion of model conservative extension is stable under expansion with new axioms provided they share only symbols from S with the original ontologies. for every S-interpretation I there exists a model J of P1 such that I|S = J |S . since S ⊆ Sig(Q). In particular. take any SHOIQ ontology O such that Sig(O) ∩ Sig(O ) ⊆ S. We just construct a model J of O ∪ O such that I|S = J |S . which proves ( ). J is a model of P1 and I|Sig(Q) = J |Sig(Q) where Q consists of axioms M1–M5 from Figure 1. In order to prove that O is safe for S w. in order to prove safety for P1 it is sufficient to demonstrate that P1 is a model S-conservative extension of the empty ontology. Sig(O ) ⊆ S .r. L. We show that P1 is safe for S = {Cystic Fibrosis.t. Hence J |= O ∪ O and I|S = J |S . It is not so clear. Since Sig(O) ∩ S = Sig(O) ∩ (S ∪ Sig(O )) ⊆ S ∪ (Sig(O) ∩ Sig(O )) ⊆ S and I|S = J1 |S . and E1 from Figure 1. and I is a model of O1 .t. Let J be an interpretation such that J |Sig(O) = J1 |Sig(O) and J |S = I|S . Model Conservative Extensions) Let O be an L-ontology and S a signature over L such that O is a model S-conservative extension of the empty ontology O1 = ∅. since Sig(O ) ⊆ S .

3.4 Extraction of Modules from Ontologies

In our example from Figure 1 the medical ontology Q is very small. Well established medical ontologies, however, can be very large and may describe subject matters in which the designer of P is not interested; for example, the medical ontology Q could contain information about genes, anatomy, surgical techniques, etc. Even if P imports Q without changing the meaning of the reused symbols, browsing, reasoning over, or otherwise processing the resulting ontology P ∪ Q may be considerably harder than processing P alone. Ideally, one would like to extract a (hopefully small) fragment Q1 of the external medical ontology, a module, that describes just the concepts that are reused in P. When answering an arbitrary query over the signature of P, importing the module Q1 should give exactly the same answers as if the whole ontology Q had been imported.

Definition 10 (Module for an Ontology). Let O, O′ and O1 ⊆ O′ be L-ontologies. We say that O1 is a module for O in O′ w.r.t. L if O ∪ O′ is a deductive S-conservative extension of O ∪ O1 for S = Sig(O) w.r.t. L. ♦

The task of extracting modules from imported ontologies in our ontology reuse scenario can thus be formulated as follows:

T3. Given L-ontologies O and O′, compute a module O1 for O in O′ w.r.t. L.

Example 11 Consider the ontology P1 consisting of axioms P1–P4, E1 and the ontology Q consisting of axioms M1–M5 from Figure 1. Recall that the axiom α from (1) is a consequence of axioms M1, M2 and M4, as well as of axioms M1, M3 and M4 from ontology Q; in particular, P1 ∪ Q |= α. For the subset Q0 of Q consisting of axioms M2, M3, M4 and M5, we have Q0 ⊭ α, and it can be demonstrated that P1 ∪ Q0 is a deductive conservative extension of Q0; in particular, P1 ∪ Q0 ⊭ α. But then, since Sig(α) ⊆ S = Sig(P1), P1 ∪ Q |= α and P1 ∪ Q0 ⊭ α, the ontology P1 ∪ Q is not a deductive S-conservative extension of P1 ∪ Q0 w.r.t. L. Hence, according to Definition 10, Q0 is not a module for P1 in Q w.r.t. L = ALC. Similarly, one can show that any subset of Q that does not imply α is not a module for P1 in Q.

On the other hand, it is possible to show that both subsets Q1 = {M1, M2, M4} and Q2 = {M1, M3, M4} of Q, which are in fact minimal subsets of Q that imply α, are modules for P1 in Q w.r.t. L = ALC or its extensions. According to Definition 10, we need to demonstrate that P1 ∪ Q is a deductive S-conservative extension of both P1 ∪ Q1 and P1 ∪ Q2 for S = Sig(P1) w.r.t. L. To do so, we demonstrate a stronger fact, namely that P1 ∪ Q is a model S-conservative extension of P1 ∪ Q1 and of P1 ∪ Q2 for S, which is sufficient by Claim 1 of Proposition 4.

In order to show that P1 ∪ Q is a model S-conservative extension of P1 ∪ Q1 for S = Sig(P1), consider any model I of P1 ∪ Q1. We need to construct a model J of P1 ∪ Q such that I|S = J|S.

Let J be defined exactly as I except that the interpretations of the atomic concepts Fibrosis, Genetic Fibrosis, Pancreas, and Genetic Origin are defined as the interpretation of Cystic Fibrosis in I, and the interpretations of the atomic roles located In and has Origin are defined as the identity relation. It is easy to see that axioms M1–M3 and M5 are satisfied in J. Moreover, J is a model of M4, because Genetic Fibrosis and Genetic Disorder are interpreted in J like Cystic Fibrosis and Genetic Disorder in I, and I is a model of Q1, which implies the concept inclusion α = (Cystic Fibrosis ⊑ Genetic Disorder). Since we do not modify the interpretation of the symbols in P1, J also satisfies all axioms from P1. Hence we have constructed a model J of P1 ∪ Q such that I|S = J|S, and thus P1 ∪ Q is a model S-conservative extension of P1 ∪ Q1. In fact, the construction above works if we replace Q1 with any subset of Q that implies α; in particular, P1 ∪ Q is also a model S-conservative extension of P1 ∪ Q2. In this way, we have demonstrated that the modules for P1 in Q are exactly those subsets of Q that imply α. ♦

An algorithm implementing the task T3 can be used for extracting a module from an ontology Q imported into P prior to performing reasoning over the terms in P. However, when the ontology P is modified, the module has to be extracted again, since a module Q1 for P in Q might not necessarily be a module for the modified ontology. Since the extraction of modules is potentially an expensive operation, it would be more convenient to extract a module only once and reuse it for any version of the ontology P that reuses the specified symbols from Q. This idea motivates the following definition:

Definition 12 (Module for a Signature). Let O′ and O1 ⊆ O′ be L-ontologies and S a signature over L. We say that O1 is a module for S in O′ (or an S-module in O′) w.r.t. L if, for every L-ontology O with Sig(O) ∩ Sig(O′) ⊆ S, we have that O1 is a module for O in O′ w.r.t. L. ♦

Intuitively, the notion of a module for a signature is a uniform analog of the notion of a module for an ontology, in a similar way as the notion of safety for a signature is a uniform analog of safety for an ontology. If O1 is an S-module in Q, it can be imported instead of Q into every ontology O that shares with Q only symbols from S. The reasoning task corresponding to Definition 12 can be formulated as follows:

T4. Given an L-ontology O′ and a signature S over L, compute a module O1 for S in O′.

Continuing Example 11, it is possible, moreover, to demonstrate that any subset of Q that implies axiom α is in fact a module for S = {Cystic Fibrosis, Genetic Disorder} in Q. In order to prove this, we use the following sufficient condition based on the notion of model conservative extension:

Proposition 13 (Modules for a Signature vs. Model Conservative Extensions) Let O′ and O1 ⊆ O′ be L-ontologies and S a signature over L such that O′ is a model S-conservative extension of O1. Then O1 is a module for S in O′ w.r.t. L.

Proof. In order to prove that O1 is a module for S in O′ w.r.t. L according to Definition 12, take any SHOIQ ontology O such that Sig(O) ∩ Sig(O′) ⊆ S. We need to demonstrate that O1 is a module for O in O′ w.r.t. L, that is, according to Definition 10, that O ∪ O′ is a deductive S′-conservative extension of O ∪ O1 w.r.t. L for S′ = Sig(O). (✱)

Indeed, since O′ is a model S-conservative extension of O1 and Sig(O) ∩ Sig(O′) ⊆ S, by Lemma 7 we have that O ∪ O′ is a model S′′-conservative extension of O ∪ O1 for S′′ = S ∪ Sig(O). This implies that O ∪ O′ is a deductive S′′-conservative extension of O ∪ O1 w.r.t. L and, since S′ = Sig(O) ⊆ S′′, that O ∪ O′ is a deductive S′-conservative extension of O ∪ O1 w.r.t. L, as was required to prove (✱).

Example 14 Let Q and α be as in Example 11. We demonstrate that any subset Q1 of Q which implies α is a module for S = {Cystic Fibrosis, Genetic Disorder} in Q. According to Proposition 13, it is sufficient to demonstrate that Q is a model S-conservative extension of Q1, that is, that for every model I of Q1 there exists a model J of Q such that I|S = J|S. It is easy to see that if the model J is constructed from I as in Example 11, then the required property holds. ♦

Note that a module for a signature S in Q does not necessarily contain all the axioms that mention symbols from S. For example, the module Q1 consisting of the axioms M1, M2 and M4 from Q does not contain axiom M5, which mentions the atomic concept Cystic Fibrosis from S.

3.5 Minimal Modules and Essential Axioms

One is usually not interested in extracting arbitrary modules from a reused ontology, but in extracting modules that are easy to process afterwards. Ideally, the extracted modules should be as small as possible. Hence it is reasonable to consider the problem of extracting minimal modules, that is, modules that contain no other module as a subset. Examples 11 and 14 demonstrate that a minimal module for an ontology or a signature is not necessarily unique: the ontology Q consisting of axioms M1–M5 has two minimal modules, Q1 = {M1, M2, M4} and Q2 = {M1, M3, M4}, both for the ontology P1 = {P1, P2, P3, P4, E1} and for the signature S = {Cystic Fibrosis, Genetic Disorder}, since these are the minimal sets of axioms that imply the axiom α = (Cystic Fibrosis ⊑ Genetic Disorder). Depending on the application scenario, one can consider several variations of tasks T3 and T4 for computing minimal modules: in some applications it might be necessary to extract all minimal modules, whereas in others any minimal module suffices.

Axioms that do not occur in a minimal module of Q are not essential for P because they can always be removed from every module of Q; thus they never need to be imported into P. This is not true for the axioms that occur in minimal modules of Q: it might be necessary to import such axioms into P in order not to lose essential information from Q. These arguments motivate the following notion:

Definition 15 (Essential Axiom). Let O and O′ be L-ontologies, S a signature and α an axiom over L. We say that α is essential for O in O′ w.r.t. L if α is contained in some minimal module for O in O′ w.r.t. L. We say that α is an essential axiom for S in O′ w.r.t. L (or S-essential in O′) if α is contained in some minimal module for S in O′ w.r.t. L. ♦

In our example above, the axioms M1–M4 from Q are essential for the ontology P1 as well as for the signature S = {Cystic Fibrosis, Genetic Disorder}, but the axiom M5 is not essential. Also note that even in a minimal module like Q1 there might still be some axioms, like M2, that do not mention symbols from S at all. In certain situations one might be interested in computing the set of essential axioms of an ontology, which can be done by computing the union of all minimal modules. Note that computing the union of minimal modules might be easier than computing all the minimal modules, since one does not need to identify which axiom belongs to which minimal module.

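Examples 11 and 14 and Definition 15 suggest a simple, if expensive, way to compute minimal modules and essential axioms in small cases such as the running example: enumerate subsets and test entailment with a black-box reasoner. The sketch below is an illustration under that assumption only; `entails(axioms, axiom)` is a hypothetical stand-in for a DL reasoner call and is not part of the paper.

```python
from itertools import combinations

def minimal_entailing_subsets(ontology, axiom, entails):
    """Enumerate the subset-minimal sets of axioms of `ontology` that entail
    `axiom`.  In the running example, for Q = {M1, ..., M5} and alpha =
    (Cystic Fibrosis ⊑ Genetic Disorder), this yields {M1, M2, M4} and
    {M1, M3, M4}.  Exponential in |ontology|; for illustration only."""
    axioms = list(ontology)
    minimal = []
    for size in range(len(axioms) + 1):
        for subset in combinations(axioms, size):
            candidate = set(subset)
            # Any strict superset of an already-found entailing set is not minimal.
            if any(found <= candidate for found in minimal):
                continue
            if entails(candidate, axiom):
                minimal.append(candidate)
    return minimal

def essential_axioms(minimal_modules):
    """Definition 15, read operationally: an axiom is essential iff it occurs
    in some minimal module, so the set of essential axioms is their union."""
    essential = set()
    for module in minimal_modules:
        essential |= set(module)
    return essential
```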
For example, instead of computing minimal modules, one might be interested in computing modules with the smallest number of axioms, modules of the smallest size measured in the number of symbols, or modules that are minimal w.r.t. any other complexity measure of the ontology. Further variants of the tasks T3 and T4 could likewise be considered relevant to ontology reuse. The theoretical results that we will present in this paper can easily be extended to many such reasoning tasks.

In Table 2 we have summarized the reasoning tasks we found to be potentially relevant for ontology reuse scenarios and have included the variants T3am, T3sm, and T3um of the task T3 and T4am, T4sm, and T4um of the task T4 for the computation of minimal modules discussed in this section.

Notation          Input        Task
Checking safety:
  T1              O, O′, L     Check if O is safe for O′ w.r.t. L
  T2              O, S, L      Check if O is safe for S w.r.t. L
Extraction of [all / some / union of] [minimal] module(s):
  T3[a.s.u][m]    O, O′, L     Extract modules in O′ w.r.t. O and L
  T4[a.s.u][m]    O′, S, L     Extract modules for S in O′ w.r.t. L
where O, O′ are ontologies and S is a signature over L

Table 2: Summary of reasoning tasks relevant for ontology integration and reuse

3.6 Safety and Modules for General Ontology Languages

All the notions introduced in Section 3 are defined with respect to an “ontology language”. So far, we have implicitly assumed that the ontology languages are the description logics defined in Section 2, that is, the fragments of the DL SHOIQ. The notions considered in Section 3 can be applied, however, to a much broader class of ontology languages: our definitions apply to any ontology language with a notion of entailment of axioms from ontologies and a mechanism for identifying their signatures.

Definition 16. An ontology language is a tuple L = (Sg, Ax, Sig, |=), where Sg is a set of signature elements (or vocabulary) of L, Ax is a set of axioms in L, Sig is a function that assigns to every axiom α ∈ Ax a finite set Sig(α) ⊆ Sg called the signature of α, and |= is the entailment relation between sets of axioms O ⊆ Ax and axioms α ∈ Ax. An ontology over L is a finite set of axioms O ⊆ Ax. We extend the function Sig to ontologies O as follows: Sig(O) := ∪α∈O Sig(α). ♦

Definition 16 provides a very general notion of an ontology language: a language L is given by a set of symbols (a signature), a set of formulae (axioms) that can be constructed over those symbols, a function that assigns to each formula its signature, and an entailment relation between sets of axioms and axioms. The ontology language OWL DL, as well as all description logics defined in Section 2, are examples of ontology languages in accordance with Definition 16.

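Definition 16 is deliberately abstract. As an illustration only, it can be mirrored by a small programming interface exposing the signature function and the entailment relation; the names and types below are assumptions made for this sketch and are not part of the paper.

```python
from typing import AbstractSet, Hashable, Iterable, Protocol, Set

Axiom = Hashable    # an opaque axiom from Ax
Symbol = Hashable   # a signature element from Sg

class OntologyLanguage(Protocol):
    """A minimal rendering of L = (Sg, Ax, Sig, |=) from Definition 16."""

    def sig(self, axiom: Axiom) -> AbstractSet[Symbol]:
        """Sig(alpha): the finite signature of an axiom."""
        ...

    def entails(self, ontology: Iterable[Axiom], axiom: Axiom) -> bool:
        """O |= alpha: the entailment relation of the language."""
        ...

def sig_of_ontology(language: OntologyLanguage, ontology: Iterable[Axiom]) -> Set[Symbol]:
    """Sig(O): the union of Sig(alpha) over all alpha in O."""
    signature: Set[Symbol] = set()
    for alpha in ontology:
        signature |= set(language.sig(alpha))
    return signature
```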
t. it is the case that O0 = ∅ is a module for O in O w. L iff (c) O ∪ O1 ∪ (O \ O1 ) = O ∪ O is a deductive conservative extension of O ∪ O1 w. & Sattler logics defined in Section 2 are examples of ontology languages in accordance to Definition 16. L iff (a) for every O with Sig(O ) ∩ Sig(O) ⊆ S.t.r. L. Proof.t. L for S = Sig(O).r.t. 2. that O1 is a module for O in O w. 1. we have that O \ O1 is safe for O w. and let O.r.r. L. L.t. and O1 ⊆ O be ontologies over L. O \ O1 is safe for S ∪ Sig(O1 ) w. By Definition 2. By 288 .Cuenca Grau. O is safe for O w. L iff the empty ontology is a module for O in O w. L. we will not formulate general requirements on the semantics of ontology languages. By Definition 6.r. L iff (b) for every O with Sig(O ) ∩ Sig(O) ⊆ S.r.r. By Definition 6. Then: 1. The definition of model conservative extension (Definition 3) and the propositions involving model conservative extensions (Propositions 4. L. If O \ O1 is safe for S ∪ Sig(O1 ) w. To simplify the presentation. safety (Definitions 2 and 6) and modules (Definitions 10 and 12).r.t. L iff the empty ontology O0 = ∅ is an S-module in O w.r.r.r.r. Then: 1. By Definition 10.t.r. as well as all the reasoning tasks from Table 2.r. 1. such as Higher Order Logic. By Claim 1 of Proposition 17.r.t. O0 = ∅ is an S-module in O w. however. L.t. ontologies over L. L for S = Sig(O). are well-defined for every ontology language L given by Definition 16. which implies. Second Order Logic.r.t. with Sig(O \ O1 ) ∩ Sig(O) ⊆ S ∪ Sig(O1 ). It is easy to see that (a) is the same as (b).t. and S a subset of the signature of L. it is the case that O is safe for O w. O is safe for S w. L.r.r.r. 8. the empty ontology O0 = ∅ is a module for O in O w. In the remainder of this section.t.t. L iff (c) for every O. Other examples of ontology languages are First Order Logic. it is easy to see that (a) is equivalent to (b). 2. It is easy to see that the notions of deductive conservative extension (Definition 1). and assume that we deal with sublanguages of SHOIQ whenever semantics is taken into account. If O \ O1 is safe for O ∪ O1 then O1 is a module for O in O w.t.t. Proposition 17 (Safety vs. 2. O and O1 ⊆ O . We also provide the analog of Proposition 17 for the notions of safety and modules for a signature: Proposition 18 (Safety vs.t.t. by Definition 10. L iff (a) O ∪ O is a deductive conservative extension of O w. O . Proof. and Logic Programs. O is safe for S w. Modules for a Signature) Let L be an ontology language.r.t. L. O ∪ O is a deductive S-conservative extension of O ∪ O1 w. O \ O1 is safe for O ∪ O1 w. and 13) can be also extended to other languages with a standard Tarski model-theoretic semantics.t. By Definition 2. L. 2. O is safe for O w. we establish the relationships between the different notions of safety and modules for arbitrary ontology languages. L.t. L iff (b) O ∪ O is a deductive S-conservative extension of O ∪ O0 = O w. By Definition 12. then O1 is an S-module in O w. Horrocks. In particular.t. Modules for an Ontology) Let L be an ontology language. L.r. Kazakov.

Conversely. it has been shown that deciding deductive conservative extensions is EXPTIME-complete for L = EL(Lutz & Wolter.t.r. who showed that the problem is 2-EXPTIME-complete for L = ALCIQ and undecidable for L = ALCIQO. the problem of determining whether O1 is a module for O in O is EXPTIME-complete for L = EL. These results can be immediately applied to the notions of safety and modules for an ontology: Proposition 19 Given ontologies O and O over L.t.r.t.. We demonstrate that most of these reasoning tasks are algorithmically unsolvable even for relatively inexpressive DLs.t. L is EXPTIME-complete for L = EL. 2-EXPTIME complete for L = ALC and L = ALCIQ. O . 4. In order to prove that (c) implies (d).r.t. L iff O ∪ O is a deductive conservative extension of O w.t. 2006). L iff (d) for every O with Sig(O )∩Sig(O) ⊆ S.t.r. and undecidable for L = ALCIQO.r. any algorithm for checking deductive conservative extensions can be reused for checking safety and modules. and are computationally hard for simple DLs. (2007). This result was extended by Lutz et al.t. Since the notions of modules and safety are defined in Section 3 using the notion of deductive conservative extension. an ontology O1 is a module for O in O w. O1 is an S-module in O w.t. 2-EXPTIME-complete for L = ALC and L = ALCIQ. we have that O1 is a module for O in O w. L . the problem of determining whether O is safe for O w.t. O0 = ∅ is a module for O1 in O \ O1 w. By Definition 2.r.Modular Reuse of Ontologies: Theory and Practice Definition 12.r. O is a deductive conservative extension of O1 ⊆ O w.r.t. Hence.r. let O be such that Sig(O ) ∩ Sig(O) ⊆ S. and O1 ⊆ O over L. The computational properties of conservative extensions have recently been studied in the context of description logics.t.r. L iff O\O1 is safe for O1 w. L. L ( ). the problem of deciding whether O is a deductive conservative extension of O1 w. 289 . Note that Sig(O \ O ) ∩ Sig(O) = Sig(O \ O ) ∩ (Sig(O) ∪ Sig(O )) ⊆ ˜ Let O 1 1 1 1 (Sig(O \ O1 ) ∩ Sig(O)) ∪ Sig(O1 ) ⊆ S ∪ Sig(O1 ).r. it is reasonable to identify which (un)decidability and complexity results for conservative extensions are applicable to the reasoning tasks in Table 2. L if O ∪ O is a deductive S-conservative extension of O ∪ O1 for S = Sig(O) w. By Definition 2. and undecidable for L = ALCIQO. We need to demonstrate that O1 is a module for O in O w. L iff. L. ( ) ˜ := O ∪ O . we demonstrate that any algorithm for checking safety or modules can be used for checking deductive conservative extensions.r.t. Indeed. Undecidability and Complexity Results In this section we study the computational properties of the tasks in Table 2 for ontology languages that correspond to fragments of the description logic SHOIQ.r. the problem has also been studied for simple DLs. 2007).t. Given ontologies O.r. Proof. L. L. L which implies by Claim 2 of Proposition 17 that O1 is a module for O in O w. by (c) we have that O \ O1 is safe ˜ for O = O ∪ O1 w. by Claim 1 of Proposition 17. Hence. Given O1 ⊆ O over a language L. Recently. an ontology O is safe for O w. L is 2-EXPTIMEcomplete for L = ALC (Ghilardi et al.

and are 2-EXPTIME-hard for ALC.1.j for every i.r. 2007).r.u]m related to the notions of safety and modules for a signature. by Claim 1 of Proposition 17. Indeed. It is well-known (B¨rger. & Sattler Corollary 20 There exist algorithms performing tasks T1. or the union of them depending on the reasoning task. Theorem 21 (Undecidability of Safety for a Signature) The problem of checking whether an ontology O consisting of a single ALC-axiom is safe for a signature S is undecidable w. there is no recursive (i.t. one can enumerate all subsets of O and check in 2-EXPTIME which of these subsets are modules for O in O w. L iff O0 = ∅ is a module for O in O w. Finally. we construct a signature S = S(D). H. V ) where T = {1. Proof.u]m from Table 2 for L = EL and L = ALCIQ that run in EXPTIME and 2-EXPTIME respectively.j+n = ti. j ≥ 1 an element of T .s.t.7) that the sets D \ Ds and Dps are recursively inseparable. The proof is a variation of the construction for undecidability of deductive conservative extensions in ALCQIO (Lutz et al.r. The task T1 corresponds directly to the problem of checking the safety of an ontology. For each domino system D.t. o a & Gurevich.t.r.j . determines whether O1 is a module for O in O w.t. Proof. an ontology O is safe for O w.s.Cuenca Grau. decidable) subset D ⊆ D of domino systems such that Dps ⊆ D ⊆ Ds . T3sm and T3um for L = ALCIQ in 2-EXPTIME. A periodic solution for a domino system D is a solution ti.u]m are computationally unsolvable for DLs that are as expressive as ALCQIO. We use this property in our reduction.s. L = ALCIQ.j that assigns to every pair of integers i.r. n ≥ 1 called periods such that ti+m. We have demonstrated that the reasoning tasks T1 and T3[a. k} is a finite set of tiles and H.j and ti. We demonstrate that all of these reasoning tasks are undecidable for DLs that are as expressive as ALCO.j = ti.t. based on a reduction to a domino tiling problem. such that ti. Suppose that we have a 2-EXPTIME algorithm that. Indeed. ti+1. Let D be the set of all domino systems. T3sm and T3um is not easier than checking the safety of an ontology.s.t.u]m from Table 2 for L = ALCIQO.j+1 ∈ H and ti. L. given ontologies O and O . as given in Definition 2. and an ontology O = O(D) which consists of a single ALC-axiom such that: 290 . and T3[a. Then we can determine which of these modules are minimal and return all of them. Note that the empty ontology O0 = ∅ is a module for O in O w.j ∈ V . We demonstrate that this algorithm can be used to solve the reasoning tasks T3am. we prove that solving any of the reasoning tasks T3am. given ontologies O. In the remainder of this section. that is.r. L. one of them. There is no algorithm performing tasks T1. . Kazakov. Ds be the subset of D that admit a solution and Dps be the subset of Ds that admit a periodic solution. or T3[a. j ≥ 1. L iff O0 = ∅ is the only minimal module for O in O w. 1997.r. L = ALCO. . A domino system is a triple D = (T.j . Horrocks. . L.e. V ⊆ T × T are horizontal and vertical matching relations. O and O1 ⊆ O .j for which there exist integers m ≥ 1 .. ti. Theorem 3. Gr¨del. . we focus on the computational properties of the reasoning tasks T2 and T4[a. A solution for a domino system D is a mapping ti.

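The undecidability proof treats domino systems as a black box. For intuition only, the check below verifies that a finite m × n grid of tiles induces a periodic solution of a domino system D = (T, H, V) when repeated with periods m and n; the grid-of-lists representation and the orientation chosen for the two matching relations are assumptions made for this illustration.

```python
def is_periodic_solution(grid, horizontal, vertical):
    """Check that `grid` (a list of m rows of n tile identifiers, read as
    t[i][j]) tiles the plane periodically: (t[i][j], t[(i+1) mod m][j]) must
    lie in `horizontal` and (t[i][j], t[i][(j+1) mod n]) in `vertical`,
    matching the conditions t(i+m, j) = t(i, j) and t(i, j+n) = t(i, j) of a
    periodic solution with periods m and n."""
    m, n = len(grid), len(grid[0])
    for i in range(m):
        for j in range(n):
            tile = grid[i][j]
            if (tile, grid[(i + 1) % m][j]) not in horizontal:
                return False
            if (tile, grid[i][(j + 1) % n]) not in vertical:
                return False
    return True
```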
j ∈ ∆I such that ai. L = ALCO.∃rH . L = ALCO. and c ∈ At I .j+1 := t . ai+1. ai. c. because otherwise one can use this problem for deciding membership in D .j+1 are constructed for ai.j . B ∈ S.j+1 and ai.t ∈V where T = {1. In other words. t ∈ V . and (b) if D has a periodic solution then O = O(D) is not safe for S = S(D) w. In this case we assign ai. .¬B) We say that rH and rV commute in an interpretation I = (∆I . we have Dps ⊆ D ⊆ Ds . let S consist of fresh atomic concepts At for every t ∈ T and two atomic roles rH and rV . Then we set ti. this implies undecidability for D and hence for the problem of checking if O is S-safe w. we construct ti.j := t .j ∈ At I there exist b. Since D \ Ds and Dps are recursively inseparable.j . L = ALCO. Now let s be an atomic role and B an atomic concept such that s.j and ti. .j . and c.j ∈ At I . (q3 ) and (q4 ) express that every domain element has horizontal and vertical matching successors. b ∈ rH I . the value for 291 . The signature S = S(D).r. t ∈ H.Modular Reuse of Ontologies: Theory and Practice (q1 ) (q2 ) (q3 ) (q4 ) At At At A1 At ··· ⊥ Ak t.r. Now suppose ai. b.j can be assigned two times: ai+1. Given a domino system D = (T. Note that Sig(Otile ) = S. c ∈ ∆I and t . j ≥ 1 is constructed. The axioms of Otile express the tiling conditions for a domino system D.t. However.j inductively together with elements ai. and the ontology O = O(D) are constructed as follows. t. d1 and d2 from ∆I such that a. ·I ) of Otile (D) can be used to guide the construction of a solution ti. b ∈ At I . L = ALCO.j . H. since rV and rH commute in I. j ≥ 1. ai+1. .r.t. k} 1≤t<t ≤k At ) At ) 1≤t≤k 1≤t≤k ∃rH . (Ci Di )∈Otile (Ci ¬Di ) (∃rH . ti.B ∃rV .j+1 and ti+1. Consider an ontology Otile in Figure 2 constructed for D.t. V ). then D has a solution.∃rV . If Otile (D) has a model I in which rH and rV commute. d2 ∈ rH I .j+1 ∈ rV I and ai.( Figure 2: An ontology Otile = Otile (D) expressing tiling conditions for a domino system D (a) if D does not have a solution then O = O(D) is safe for S = S(D) w. ·I ) if for all domain elements a.j := c.1 to any element from ∆I .j := t. For every i. ai.j+1 := b.j ∈ rH I .j with i.j for D as follows. Note that some values of ai.( ∃rV .j+1 . d1 ∈ rV I . Since I is a model of axioms (q3 ) and (q4 ) and ai. Indeed a model I = (∆.t ∈H t. b ∈ rH I . for the set D consisting of the domino systems D such that O = O(D) is not safe for S = S(D) w. Let O := {β} / where: β := ∃s. We set a1. and ti+1. c ∈ rV I .r. . b. we have d1 = d2 . t ∈ T such that ai. Since I is a model of axioms (q1 ) and (q2 ) from Figure 2. namely (q1 ) and (q2 ) express that every domain element is assigned with a unique tile t ∈ T .t. The following claims can be easily proved: Claim 1. t. we have a unique At with 1 ≤ t ≤ k such that ai. c ∈ rV I . a.

L. c. j) with 1 ≤ i ≤ m and 1 ≤ j ≤ n we introduce a fresh individual ai. By Proposition 8. Suppose that rH and rV do not commute in I = (∆I . this will imply that O is safe for S w. d1 and d2 such that x. This will imply that O is not safe for O w.j1 } ∃rV .t. the value for ti+1. a ∈ ¬Di .r. We interpret s in J as sJ = { x.j+1 is unique. L = ALCO. b ∈ r I . b.B ∃rV . ·I ) is not a model of Otile then there exists an axiom (Ci Di ) ∈ Otile such that I |= (Ci Di ). we have constructed a model J of O such that J |S = I|S .∃rV .j2 }. Let us define J to be identical to I except for the interpretation of the atomic role s and the atomic concept B. For this purpose we construct an ALCO-ontology O with Sig(O) ∩ Sig(O ) ⊆ S such that O ∪ O |= ( ⊥). In order to prove property (b). This implies that J |= β.∃rH . We demonstrate for both of these cases how to construct the required model J of O such that J |S = I|S . Case (1). b. We interpret B in J as B J = {d1 }.j1 } (p5 ) ∀rV .B)J and a ∈ (∃rV . We define O such that every model of O is a finite encoding of the periodic solution ti. we have a ∈ CiJ . c ∈ r I . Claim 2. d1 and d2 from ∆I with a.j with the periods m.t. That is. c.Cuenca Grau. Since D has no solution.j is a solution for D.¬B]). Finally. if I . & Sattler ai+1. This means that there exist domain elements a.∃rH . Case (2).j } i2 = i1 + 1 mod m j2 = j1 + 1 mod n 292 . d ∈ B I . and because of (q2 ). L. b. hence. Kazakov. and I . and c. and thus. We demonstrate that O is not S-safe w. b ∈ rH I . ∀rH . d I . If I = (∆I .∃rV . a | x ∈ ∆}. a | x ∈ ∆}. then there exist a. and so. we have J |= ( ∃s. b.t.[∃rH . c.j2 } (p2 ) {ai1 .r.j }. and. ·I ).r.¬B]) which implies that J |= β.j and define O to be an extension of Otile with the following axioms: (p1 ) {ai1 .[∃rH . a. it is easy to see that Otile ∪ O |= ( ∃s. ·I ) is a model of O I I = (∆ tile ∪ O. In order to prove property (a) we use the sufficient condition for safety given in Proposition 8 and demonstrate that if D has no solution then for every interpretation I there exists a model J of O such that J |S = I|S . c ∈ rV I .¬B)J since d1 = d2 .j } (p4 ) {ai.t. a. n ≥ 1. Let us define J to be identical to I except for the interpretation of the a ∈ Ci i atomic role s which we define in J as sJ = { x. 1≤i≤m. then by the contra-positive of Claim 1 either (1) I is not a model of Otile .r. This implies that d1 = d2 .B ∃rV .j } (p3 ) {ai. we demonstrate that O = O(D) satisfies properties (a) and (b). a. or (2) rH and rV do not commute in I. For every pair (i. Indeed. and so. assume that D has a periodic solution ti. d for every x ∈ ∆ 2 ∈ rH 1 1 ∈ rV V H d2 ∈ (¬B)I . Note that a ∈ (∃rH .{ai. such that d1 = d2 .j+1 is unique. d2 ∈ rH I . but O |= ( ⊥). d1 ∈ rV I .∃rH .{ai2 . there exists a domain element a ∈ ∆I such that I but a ∈ D I . If I is a model of Otile ∪ O. So.j with the periods m and n.j } ∃rH . is not safe for S w. and so J |= ( ∃s. we have constructed a model J of O such that J |S = I|S . a ∈ s I . L = ALCO. rh and rV do not commute in I.{ai2 . Horrocks.{ai. Since the interpretations of J the symbols in S have remained unchanged.[Ci ¬Dj ]). It is easy to see that ti. then rH and rV do not commute in I. Hence.∃rV . Let I be an arbitrary interpretation. 1≤j≤n {ai.

L if O0 = ∅ is an S-module in O w. we could focus on simple DLs for which this problem is decidable. the converse.t.r. Solving any of the reasoning tasks T4am. On the other hand. T4sm. by Claim 2. since this task corresponds to the problem of checking safety for a signature. the set of ontologies over a language that satisfy the condition for that signature. Proof. then in every model of O ∪ O.. SHOIQ—ontologies. This said. or T4um for L is at least as hard as checking the safety of an ontology. it is worth noting that Theorem 21 still leaves room for investigating the former approach. since O contains Otile .Modular Reuse of Ontologies: Theory and Practice The purpose of axioms (p1 )–(p5 ) is to ensure that rH and rV commute in every model of O . rH and rV do not commute.1 Safety Classes In general. 5. This undecidability result is discouraging and leaves us with two alternatives: First. for each signature S. does not necessarily hold. since.t.r. L. In the case of SHIQ. Before we go any further. however. These ontologies are then guaranteed to be safe for the signature under consideration. or T4[a. such as EL. Theorem 21 directly implies that there is no algorithm for task T2. by Claim 1 of Proposition 18. L iff O0 = ∅ is (the only minimal) S-module in O w.r. however. existing results (Lutz et al.r. 5. we could look for sufficient conditions for the notion of safety— that is. Hence O |= ( ⊥).r. This is only possible if O ∪ O have no models. Sufficient Conditions for Safety Theorem 21 establishes the undecidability of checking whether an ontology expressed in OWL DL is safe w. the problem of determining whether O1 is an S-module in O w.u]m for L = ALCO. 2007) strongly indicate that checking safety is likely to be exponentially harder than reasoning and practical algorithms may be hard to design. Proof. in what follows we will focus on defining sufficient conditions for safety that we can use in practice and restrict ourselves to OWL DL—that is. so O ∪ O |= ( ⊥). if an ontology satisfies our conditions. These intuitions lead to the notion of a safety class. The remainder of this paper focuses on the latter approach. any sufficient condition for safety can be defined by giving. Indeed. L. 293 . then we can guarantee that it is safe. Alternatively. It is easy to see that O has a model corresponding to every periodic solution for D with periods m and n. By Claim 1 of Proposition 18. Corollary 23 There is no algorithm that can perform tasks T2.r.t. L = ALCO is undecidable. however.t. O is S-safe w. an ontology O is S-safe w. safety may still be decidable for weaker description logics.s. Hence any algorithm for recognizing modules for a signature in L can be used for checking if an ontology is safe for a signature in L.t.t. or even for very expressive logics such as SHIQ. a signature. A direct consequence of Theorem 21 and Proposition 18 is the undecidability of the problem of checking whether a subset of an ontology is a module for a signature: Corollary 22 Given a signature S and ALC-ontologies O and O1 ⊆ O .

Definition 24 (Class of Ontologies, Safety Class). A class of ontologies for a language L is a function O(·) that assigns to every subset S of the signature of L a subset O(S) of ontologies in L. A class O(·) is anti-monotonic if S1 ⊆ S2 implies O(S2) ⊆ O(S1); it is subset-closed if O1 ⊆ O2 and O2 ∈ O(S) implies O1 ∈ O(S); it is union-closed if O1 ∈ O(S) and O2 ∈ O(S) implies (O1 ∪ O2) ∈ O(S); and it is compact if O ∈ O(S ∩ Sig(O)) for every O ∈ O(S). A safety class (also called a sufficient condition for safety) for an ontology language L is a class of ontologies O(·) for L such that, for each S, (i) ∅ ∈ O(S), and (ii) each ontology O ∈ O(S) is S-safe for L. ♦

Intuitively, a class of ontologies is a collection of sets of ontologies parameterized by a signature, and a safety class represents a sufficient condition for safety: each ontology in O(S) should be safe for S. In what follows, whenever an ontology O belongs to a safety class for a given signature and the safety class is clear from the context, we will sometimes say that O passes the safety test for S, or that O can be proved to be safe w.r.t. S using the sufficient condition. Safety classes may admit many natural properties. Anti-monotonicity intuitively means that if an ontology O can be proved to be safe w.r.t. S, then O can be proved to be safe w.r.t. every subset of S. Subset-closure (union-closure) means that if O (respectively, O1 and O2) satisfies the sufficient condition for safety, then every subset of O (the union of O1 and O2) also satisfies this condition. Compactness means that it is sufficient to consider only the common elements of Sig(O) and S for checking safety. In what follows, we assume w.l.o.g. that the empty ontology ∅ belongs to every safety class for every signature.

5.2 Locality

In this section we introduce a family of safety classes for L = SHOIQ that is based on the semantic properties underlying the notion of model conservative extensions. In Section 3 we have seen that, according to Proposition 8, one way to prove that O is S-safe is to show that O is a model S-conservative extension of the empty ontology, that is, that for every SHOIQ-interpretation I there exists a model J of O such that I|S = J|S. The following definition formalizes the classes of ontologies, called local ontologies, for which safety can be proved using Proposition 8.

Definition 25 (Class of Interpretations, Locality). A class of interpretations is a function I(·) that, given a SHOIQ signature S, returns a set of interpretations I(S). We say that a set of interpretations I is local w.r.t. S if for every SHOIQ-interpretation I there exists an interpretation J ∈ I such that I|S = J|S. A class of interpretations I(·) is local if I(S) is local w.r.t. S for every S; it is monotonic if S1 ⊆ S2 implies I(S1) ⊆ I(S2); and it is compact if for every S1, S2 and S such that (S1 △ S2) ∩ S = ∅ we have I(S1)|S = I(S2)|S, where S1 △ S2 is the symmetric difference of the sets S1 and S2, defined by S1 △ S2 := (S1 \ S2) ∪ (S2 \ S1).

Given a class of interpretations I(·), we say that O(·) is the class of ontologies based on I(·) if, for every S, O(S) is the set of ontologies that are valid in I(S). If I(·) is local, then we say that O(·) is a class of local ontologies and, for every S, every O ∈ O(S) and every α ∈ O, we say that O and α are local (based on I(·)). ♦

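Operationally, a safety class O(·) from Definition 24 can be represented as a membership test parameterized by a signature. The sketch below fixes that shape and shows how a compact class would be queried; the `SafetyClass` type and the helper names are assumptions made for this and the following sketches.

```python
from typing import Callable, FrozenSet, Hashable, Iterable

Axiom = Hashable
Signature = FrozenSet[Hashable]

# A safety class O(.) as a predicate: it may return True only if the ontology
# is guaranteed to be S-safe (Definition 24); returning False is always allowed.
SafetyClass = Callable[[FrozenSet[Axiom], Signature], bool]

def passes_safety_test(safety_class: SafetyClass,
                       ontology: Iterable[Axiom],
                       signature: Iterable[Hashable],
                       sig_of) -> bool:
    """Membership test for O in O(S).  If the class is compact, it is
    sufficient to consider only the common elements of Sig(O) and S, which is
    what the restriction below does; `sig_of` maps a set of axioms to its
    signature."""
    axioms = frozenset(ontology)
    relevant = frozenset(signature) & frozenset(sig_of(axioms))
    return safety_class(axioms, relevant)
```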
t. we demonstrated that the ontology P1 ∪ Q given in Figure 1 with P1 = {P1. ♦ Proposition 27 (Locality Implies Safety) Let O(·) be a class of ontologies for SHOIQ based on a local class of interpretations I(·). then J ∈ IA←∅ (S) and I|S = J |S . we have that for every SHOIQ-interpretation I there exists J ∈ I(S) such that J |S = I|S . and so O ∈ O(S2 ) implies that O is valid in I(S2 ) which implies that O is valid in I(S1 ) which implies that O ∈ O(S1 ). If I(·) is compact then for every S1 . Thus O(·) is a safety class. and if I(·) is compact then O(·) is also compact.Modular Reuse of Ontologies: Theory and Practice r←∅ Example 26 Let IA←∅ (·) be a class of SHOIQ interpretations defined as follows. E1 by interpreting the symbols in Sig(P1 ) \ S as the empty set. If I(·) is monotonic then I(S1 ) ⊆ I(S2 ) for every S1 ⊆ S2 . Proof.t. Then by Definition 25 for every SHOIQ signature S. O ∈ O(S1 ) iff O ∈ O(S2 ). L = SHOIQ. the set IA←∅ (S) consists of interpretations J such that rJ = ∅ for every atomic r←∅ role r ∈ S and AJ = ∅ for every atomic concept A ∈ S. Since IA←∅ (S1 ) ⊆ IA←∅ (S2 ) r←∅ r←∅ for every S1 ⊆ S2 . Since I(·) is a local class of interpretations. This has been done by showing that every S-interpretation I can be expanded to a model J of axioms P1–P4. O ∈ O(S) iff O ∈ O(S ∩ Sig(O)) since (S (S ∩ Sig(O))) ∩ Sig(O) = (S \ Sig(O)) ∩ Sig(O) = ∅. it is the case that J |= α. we have O ∈ O(S) iff O is valid in I(S) iff J |= O for every interpretation J ∈ I(S). Example 29 Recall that in Example 5 from Section 3. since for r←∅ r←∅ every S1 and S2 the sets of interpretations IA←∅ (S1 ) and IA←∅ (S2 ) are defined differently only for elements in S1 S2 .r. Then O(·) is a subset-closed and union-closed safety class for L = SHOIQ. The fact that O(·) is subset-closed and union-closed follows directly from Definition 25 since (O1 ∪ O2 ) ∈ O(S) iff (O1 ∪ O2 ) is valid in I(S) iff O1 and O2 are valid in I(S) iff O1 ∈ O(S) and O2 ∈ O(S). P4. ·J ) defined by ∆J := ∆I . which implies by Proposition 8 that O is safe for S w. Hence for every O ∈ I(S) and every SHOIQ interpretation I there is a model J ∈ I(S) such that J |S = I|S . ·I ) and the interpretation is local for every S. Then the r←∅ r←∅ r←∅ class of local ontologies based on IA←∅ (·) is defined by O ∈ OA←∅ (S) iff O ⊆ AxA←∅ (S). . In terms of Example 26 this means that 295 . E1} is a deductive conservative extension of Q for S = {Cystic Fibrosis. Hence. IA←∅ (·) is also compact. S2 and S such that (S1 S2 ) ∩ S = ∅ we have that I(S1 )|S = I(S2 )|S . since for every interpretation I = (∆ J = (∆J . r←∅ r←∅ Given a signature S. . hence for every O with Sig(O) ⊆ S we have that O is valid in I(S1 ) iff O is valid in I(S2 ). It is easy to show that IA←∅ (S) / / I . Hence O(·) is anti-monotonic. S based on IA←∅ (S) r←∅ consists of all axioms α such for every J ∈ IA←∅ (S). r←∅ Corollary 28 The class of ontologies OA←∅ (S) defined in Example 26 is an anti-monotonic compact subset-closed and union-closed safety class. O(·) is compact.r. the set AxA←∅ (S) of axioms that are local w. it is the case that IA←∅ (·) is monotonic. Assume that O(·) is a class of ontologies based on I(·). Genetic Disorder}. rJ = ∅ for r ∈ S. then O(·) is anti-monotonic. and so. and X J := X I / / r←∅ r←∅ r←∅ for the remaining symbols X. Additionally. . AJ = ∅ for A ∈ S. In particular. . if I(·) is monotonic. Given a r←∅ signature S.

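Example 26 fixes the interpretation of every atomic concept and atomic role outside S to the empty set. With interpretations represented naively as dictionaries from symbol names to extensions, that transformation is a one-liner per symbol, as sketched below; the dictionary representation is an assumption made for the illustration.

```python
def restrict_outside_signature(interpretation, signature, atomic_concepts, atomic_roles):
    """Build J from I in the spirit of Example 26: J agrees with I on the
    domain, on the symbols in S and on all individuals, but interprets every
    atomic concept and atomic role outside S as the empty set."""
    transformed = dict(interpretation)
    for symbol in set(atomic_concepts) | set(atomic_roles):
        if symbol not in signature:
            transformed[symbol] = set()   # A^J = {} for concepts, r^J = {} for roles
    return transformed
```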
the ontology P1 is safe for S w. | τ (r(a. 5. S) = ⊥ ⊥ if r ∈ S and otherwise = Trans(r). L = SHOIQ. R2 . S)).t. = ⊥ if Sig(R) S and otherwise = ∃R. S) | τ (A. / = {a}.τ (C1 . Namely.r. that is. S) ::= τ (α. S (based on IA←∅ (S)) if α is satisfied in every interpretation that fixes the interpretation of all atomic roles and concepts outside S to the empty set. α∈O τ (O. b). Kazakov. S). ♦ Proposition 27 and Example 29 suggest a particular way of proving the safety of ontologies. we focus in more detail on the safety class OA←∅ (·). τ (α. Given a SHOIQ ontology O and a signature S it is sufficient to check whether r←∅ O ∈ OA←∅ (S).C1 . axiom α and ontology O let τ (C. S) | τ (∃R. we refer to this safety class simply as locality. = ⊥ if A ∈ S and otherwise = A.3 Testing Locality r←∅ In this section. 296 . S) | τ (¬C1 . The reason is that there is no elegant way to describe such interpretations. This notion of locality is exactly the one we used in our previous work (Cuenca Grau et al. by Proposition 27. | τ (a : C. In the next section we discuss how to perform this test in practice. / | τ (Trans(r). concept C. every individual needs to be interpreted as an element of the domain. otherwise = ∃R1 . When ambiguity does not arise. S) τ (C2 . S) τ (C1 | τ (R1 = . 2007). & Sattler r←∅ r←∅ P1 ∈ OA←∅ (S). but in principle. S)). b). S) = a : τ (C. it is sufficient to interpret every atomic concept and atomic role not in S as the empty set and then check if α is satisfied in all interpretations of the remaining symbols. as will be shown in Section 8.r. S) = (τ (C1 . S) and τ (O. If this property holds.t. S) τ (C2 . Horrocks. Note that for defining locality we do not fix the interpretation of the individuals outside S. and there is no “canonical” element of every domain to choose.r. S) 2.C 1 . S) = ⊥ ⊥ if Sig(R) S and otherwise = Funct(R). = τ (C1 . S) ::= C2 . S) = (⊥ ⊥) if Sig(R1 ) S. It turns out that this notion provides a powerful sufficiency test for safety which works surprisingly well for many real-world ontologies. ⊥ if Sig(R2 ) S.Cuenca Grau.t.τ (C1 . This observation suggests the following test for locality: Proposition 30 (Testing Locality) Given a SHOIQ signature S. = ⊥ if Sig(R) S and otherwise = ( n R. S). this could be done. S) | τ (C1 C2 . S) | τ ( n R. S). then O must be safe for S according to Proposition 27. S) | τ ({a}. introduced in Example 26. S). (a) (b) (c) (d) (e) (f ) (g) (h) (i) (j) (k) (l) (m) (n) τ (α. S) be defined recursively as follows: τ (C. = ¬τ (C1 . In order to test the locality of α w. otherwise = (R1 R2 ). S. S). / | τ (Funct(R). S) ::= τ ( . Since OA←∅ (·) is a class of local ontologies. S) = ⊥ if r ∈ S and otherwise = r(a.. whether every axiom α in O is satisfied by every interpretation from r←∅ IA←∅ (S).2 r←∅ From the definition of AxA←∅ (S) given in Example 26 it is easy to see that an axiom α is r←∅ local w.

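Proposition 30 reduces locality checking to checking whether the rewritten axiom τ(α, S) is a tautology, which can be handed to a standard DL reasoner. The sketch below covers only the concept-level cases that are clearly recoverable from the definition above (atomic concepts and restrictions over roles outside S collapse to ⊥, nominals are kept); roles are simplified to atomic names, the tagged-tuple concept representation is an assumption made for these sketches, and the remaining axiom-level cases follow the same pattern.

```python
# Concepts as tagged tuples, e.g. ("atomic", "Cystic_Fibrosis"),
# ("not", C), ("and", C1, C2), ("nominal", "a"),
# ("exists", R, C), ("atleast", n, R, C).
BOT = ("bot",)

def tau_concept(concept, signature):
    """tau(C, S), concept cases: atomic concepts outside S become bottom, and
    existential or at-least restrictions over a role outside S become bottom;
    other constructors are rewritten recursively and nominals are kept."""
    tag = concept[0]
    if tag in ("bot", "top", "nominal"):
        return concept
    if tag == "atomic":
        return concept if concept[1] in signature else BOT
    if tag == "not":
        return ("not", tau_concept(concept[1], signature))
    if tag == "and":
        return ("and",
                tau_concept(concept[1], signature),
                tau_concept(concept[2], signature))
    if tag == "exists":
        _, role, filler = concept
        return BOT if role not in signature else ("exists", role, tau_concept(filler, signature))
    if tag == "atleast":
        _, n, role, filler = concept
        return BOT if role not in signature else ("atleast", n, role, tau_concept(filler, signature))
    raise ValueError(f"unknown constructor: {tag}")
```

A GCI C1 ⊑ C2 would then be rewritten to τ(C1, S) ⊑ τ(C2, S) and tested for being a tautology with a reasoner, as in Example 31.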
C). S2 . S))I and that I |= α iff I |= τ (α.t. Indeed. in particular.3 It is also easy to show by induction that for every r←∅ interpretation I ∈ IA←∅ (S).r. hence.Modular Reuse of Ontologies: Theory and Practice r←∅ Then. 2006). modern DL reasoners are highly optimized for standard reasoning tasks and behave well for most realistic ontologies. Trans(R∅ ). all atomic concepts and roles that are not in S are eliminated by the transformation. It is easy to check that for every atomic concept A and atomic role r from τ (C.Genetic Origin In the first case we obtain τ (M2. Example 31 Let O = {α} be the ontology consisting of axiom α = M2 from Figure 1. 2003). but not local w. we have C I = (τ (C. and hence will be eliminated. and ( n R.t. S1 ) = (⊥ ≡ Fibrosis ⊥) which is a SHOIQ-tautology.C. allow testing for DL-tautologies. SHOIQ. S2 ) = (Genetic Fibrosis ≡ ⊥ ∃has Origin. or Funct(R∅ ). 297 . S)) ⊆ S. Secondly. in order to check whether O is local w. The primary reason is that the sizes of the axioms which need to be tested for tautologies are usually relatively small compared to the sizes of ontologies. R∅ R. it is possible to formulate a tractable approximation to the locality conditions for SHOIQ: 3.Genetic Origin (7) Similarly. ∀R. Checking for tautologies in description logics is. has Origin}. several reasons to believe that the locality test would perform well in practice. We demonstrate using Proposition 30 that O is local w. theoretically. Hence r←∅ an axiom α is local w. S1 it is sufficient to perform the following replacements in α (the symbols from S1 are underlined): ⊥ [by (b)] M2 Genetic Fibrosis ≡ Fibrosis ⊥ [by (f)] ∃has Origin.t.r. ( n R∅ . Proof. Pellet (Sirin & Parsia. a difficult problem (e. S) may still contain individuals that do not occur in S. ∃R.r. There are.g. hence O is not local w.C) are assumed to be expressed using . 2000).r.t. in order to check whether O is local w.t.t. RACER (M¨ller & Haarslev. In the second case τ (M2. in other words.⊥) which is not a SHOIQ tautology. we have A ∈ S and r ∈ S.t.C and ( n R. S1 = {Fibrosis. Genetic Origin}. it is sufficient to perform the following replacements in α (the symbols from S2 are underlined): ⊥ [by (b)] M2 Genetic Fibrosis ≡ Fibrosis ⊥ [by (b)] (8) ∃has Origin.r. O ∈ OA←∅ (S) iff every axiom in τ (O. S) is a tautology. In case reasoning is too costly. for the DL SHOIQ it is known to be NEXPTIME-complete.r. 2004) or KAON2 (Motik. according to Proposition 30. among other things. Tobies.4 A Tractable Approximation to Locality One of the important conclusions of Proposition 30 is that one can use the standard capabilities of available DL-reasoners such as FaCT++ (Tsarkov & Horrocks. every role R∅ with Sig(R∅ ) S occurs in O as either ∃R∅ .r. All atomic concepts A∅ ∈ S will be eliminated likewise. 2006) for testing o locality since these reasoners. Hence O is local w.t. S2 = {Genetic Fibrosis. C1 C2 . ♦ 5. S iff I |= α for every interpretation I ∈ IA←∅ (S) iff I |= τ (α.C. S). S2 . Recall that the constructors ⊥. S). Note that it is not necessarily the case that Sig(τ (α.C). R R∅ . / since τ (α. S1 and hence by Proposition 8 is S1 -safe w. however. C1 C2 .r. S) is a tautology. S) r←∅ for every Sig(α)-interpretation I ∈ IA←∅ (S) iff τ (α.

Con∆(S ) ⊆ Con∆(S ) be shown by induction.C) | ( n R. Example 35 (Example 31 continued) It is easy to see that axiom M2 from Figure 1 is syntactically local w. C / / ∆ any concept. Further. C | ∃R∅ . where A∅ ∈ S is an atomic concept. Kazakov.r.C ∅ ) . C ∅ ∈ Con∅(S). . S. Let S be a signature.r. 2}. R∅ (possibly the inverse of) an atomic role r∅ ∈ S. we have that OA←∅ (·) is subset-closed and union-closed. An axiom α is syntactically local w.t. ♦ 298 . Genetic Disorder}. We denote by ˜ r←∅ OA←∅ (S) the set of all SHOIQ ontologies that are syntactically local w. & Sattler Definition 32 (Syntactic Locality for SHOIQ). or (3) Funct(R∅ ). Hence the ontology P1 = {P1.t. ·I ) from of Con r←∅ IA←∅ (S) it is the case that (C ∅ )I = ∅ and (C ∆ )I = ∆I for every C ∅ ∈ Con∅(S) and C ∆ ∈ Con∆(S).C | ∃R.Genetic Origin ∈ Con∅(S1 ) [matches C C ∅] (9) It is easy to show in a similar way that axioms P1 —P4. E1} considered in Example 29 is syntactically local w. Below we indicate sub-concepts in α from Con∅(S1 ): ∈ Con∅(S1 ) [matches A∅ ] M2 Genetic Fibrosis ≡ Fibrosis ∈ Con∅(S1 ) [matches ∃R∅ .r. Anti-monotonicity for OA←∅ (·) can ∅(S ) ⊆ Con∅(S ). S1 = {Fibrosis. Since O ∈ OA←∅ (S) ˜ r←∅ ˜ r←∅ iff O ⊆ AxA←∅ (S). . or (6) a : C ∆ . C(i) ∈ Con∆(S). and so we obtain the following conclusion: r←∅ ˜ r←∅ Proposition 33 AxA←∅ (S) ⊆ AxA←∅ (S). We denote ˜ r←∅ by AxA←∅ (S) the set of all SHOIQ-axioms that are syntactically local w.r.r. and i ∈ {1. P4. Horrocks. Also one can show by induction that ˜ r←∅ ˜ r←∅ ˜ r←∅ ˜ r←∅ α ∈ AxA←∅ (S) iff α ∈ AxA←∅ (S ∩ Sig(α)). subset-closed.t. .C ∅ | ( n R∅ . it can be shown that the safety class for SHOIQ based on syntactic locality enjoys all of the properties from Definition 24: ˜ r←∅ Proposition 34 The class of syntactically local ontologies OA←∅ (·) given in Definition 32 is an anti-monotonic.r.Cuenca Grau. by proving that Con 2 1 2 1 ˜ r←∅ ˜ r←∅ and AxA←∅ (S2 ) ⊆ AxA←∅ (S1 ) when S1 ⊆ S2 . or (2) Trans(r∅ ). It is easy to see from the inductive definitions ∅(S) and Con∆(S) in Definition 32 that for every interpretation I = (∆I . and E1 from Figure 1 are syntactically local w. OA←∅ (·) is a safety class by Proposition 33.t. Genetic Origin}. Hence. S.r. S.C] ∃has Origin. S if O ⊆ AxA←∅ (S). S if it is of one of the following forms: (1) R∅ R. and union-closed safety class. The following grammar recursively defines two sets of concepts Con∅(S) and Con∆(S) for a signature S: Con∅(S) ::= A∅ | ¬C ∆ | C Con∆(S) ::= ∆ | ¬C ∅ | C1 C∅ | C∅ ∆ C2 .t. every syntactically local axiom is satisfied in every interpretation r←∅ I from IA←∅ (S). S = {Cystic Fibrosis.t.t. so OA←∅ (·) is compact. ˜ r←∅ ˜ r←∅ Proof. R any role. ♦ Intuitively. compact. or (4) C ∅ C. syntactic locality provides a simple syntactic test to ensure that an axiom is r←∅ satisfied in every interpretation from IA←∅ (S). ˜ r←∅ A SHOIQ-ontology O is syntactically local w. . or (5) C C ∆ .

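Definition 32 gives a purely syntactic test that avoids the reasoner call altogether. The sketch below decides membership in Con∅(S) and Con∆(S) for the constructors that are clearly recoverable from the definition, over the same tagged-tuple representation as before, and applies it to general concept inclusions (cases (4) and (5)); the other axiom shapes of the definition can be handled analogously, and inverse roles are again simplified to atomic role names.

```python
def in_con_empty(concept, signature):
    """C in Con_0(S): C is interpreted as the empty set whenever every atomic
    concept and role outside S is interpreted as the empty set."""
    tag = concept[0]
    if tag == "atomic":
        return concept[1] not in signature              # A outside S
    if tag == "not":
        return in_con_delta(concept[1], signature)      # not C_Delta
    if tag == "and":                                    # some conjunct in Con_0
        return any(in_con_empty(c, signature) for c in concept[1:])
    if tag in ("exists", "atleast"):                    # role outside S or filler in Con_0
        role, filler = concept[-2], concept[-1]
        return role not in signature or in_con_empty(filler, signature)
    return False

def in_con_delta(concept, signature):
    """C in Con_Delta(S): C is interpreted as the whole domain under the same
    interpretations."""
    tag = concept[0]
    if tag == "not":
        return in_con_empty(concept[1], signature)      # not C_0
    if tag == "and":                                    # all conjuncts in Con_Delta
        return all(in_con_delta(c, signature) for c in concept[1:])
    return False

def syntactically_local_gci(lhs, rhs, signature):
    """A GCI lhs ⊑ rhs is syntactically local if lhs is in Con_0(S) or
    rhs is in Con_Delta(S)."""
    return in_con_empty(lhs, signature) or in_con_delta(rhs, signature)
```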
syntactic locality checking can be performed in polynomial time by matching an axiom according to Definition 32. but not syntactically local. In Table 4 we have listed all of these classes together with examples of typical types of axioms used in ontologies. We indicate which axioms are local w. is a GCI α = (∃r. Each local class of interpretations in Table 3 defines a corresponding class of local ontologies analogously to Example 26. x | x ∈ ∆J } Table 3: Examples for Different Local Classes of Interpretations The converse of Proposition 33 does not hold in general since there are semantically local axioms that are not syntactically local. syntactic locality does not detect all tautological axioms.5 Other Locality Classes r←∅ The locality condition given in Example 26 based on a class of local interpretations IA←∅ (·) is just a particular example of locality which can be used for testing safety. S = {r}. the universal relation ∆ × ∆. Other classes of local interpretations can be constructed in a similar way by fixing the interpretations of elements outside S to different values. the axiom α = (A A B) is local w. and whose running time is polynomial in |O| + |S|. On the other hand. S = {A. The locality condition based on IA←∅ (S) captures the domain axiom P4. B} according to Definition 32 since it involves symbols in S only. and the functionality axiom P7 but 4. it is easy to see that α is not syntactically local w. Another example. It can be seen from Table 4 that different types of locality conditions are appropriate r←∅ for different types of axioms. For example. S for which locality conditions assuming.t. every S since it is a tautology (and hence true in every interpretation). and the interpretation of atomic concepts outside S to either the empty set ∅ or the set ∆ of all elements. 299 .r. The axiom α is semantically local w. These axioms are assumed to be an extension of our project ontology from Figure 1.¬B).r.¬⊥) is a tautology.Modular Reuse of Ontologies: Theory and Practice r←∗ IA←∗ (S) r←∅ IA←∅ (S) r←∆×∆ IA←∅ (S) r←id IA←∅ (S) r. since τ (α. x | x ∈ ∆J } ∆J × ∆J { x. which is not a tautology.t. Proposition 36 There exists an algorithm that given a SHOIQ ontology O and a sig˜ r←∅ nature S. A ∈ S : rJ ∅ AJ ∆J ∆J ∆J ∆J × ∆J { x. Furthermore. however. We assume that the numbers in number restrictions are written down using binary coding.4 5. determines whether O ∈ OA←∅ (S). where |O| and |S| are the number of symbols occurring in O and S respectively.¬A ∃r. as usual. A ∈ S : rJ ∅ AJ ∅ ∅ ∅ r←∗ IA←∗ (S) r←∅ IA←∆ (S) r←∆×∆ IA←∆ (S) r←id IA←∆ (S) r. These examples show that the limitation of our syntactic notion of locality is its inability to “compare” different occurrences of concepts from the given signature S. In Table 3 we have listed several such classes of local interpretations where we fix the interpretation of atomic roles outside S to be either the empty set ∅. As a result.t. that the symbols from S are underlined. S) = (∃r. It is reasonable to assume.r.t.¬⊥ ∃r. the disjointness axiom P6.r. definition P5. that tautological axioms do not occur often in realistic ontologies. or the identity relation id on ∆.

For example. Kazakov. however. Note also that it is not possible to come up with a locality condition that captures all the axioms P4–P9. & Sattler α Axiom Project ? α ∈ Ax r←∅ A←∅ r←∆×∆ A←∅ r←id A←∅ r←∅ A←∆ r←∆×∆ A←∆ r←id A←∆ P4 ∃has Focus. locality based on the class IA←∆ (S) can be tested as in Proposition 30. In order to capture such functionality r←id r←id axioms. namely to define two sets Con∅(S) and Con∆(S) of concepts for a signature S which are interpreted as the empty 300 . Horrocks. that is for IA←∗ (S) and IA←∗ (S). the functionality axiom P7 is not captured by this locality condition since the atomic role has Focus is interpreted as the universal relation ∆×∆. For r←∅ example. checking locality. It might be possible to come up with algorithms for testing locality conditions for the classes of interpretation in Table 3 similar to the ones presented in Proposition 30. is not that straightforward. every subset of P containing P6 and P8 is not safe w.Cystic Fibrosis ∃has Focus. since the individuals Human Genome and Gene prevent us from interpreting the atomic role has Focus and atomic concept Project with the empty r←∆×∆ set. and so cannot be local w. since in P6 and P8 together imply axiom β = ({Human Genome} Bio Medicine ⊥) which uses the symbols from S only. S) is replaced with the following: τ (A. it is easy to come up with tractable syntactic approximations for all the locality conditions considered in this section in a similar manner to what has been done in Section 5. where the case (a) of the definition for τ (C. Still. since it is not clear how to eliminate the universal roles and identity roles from the axioms and preserve validity in the respective classes of interpretations. P5                                           BioMedical Project ≡ Project ∃has Focus. Hence. can capture such assertions but are generally poor for other types of axioms. The idea is the same as that used in Definition 32. where every atomic role outside S is interpreted with the identity relation id on the interpretation domain. which is not necessarily functional. Note that the modeling error E2 is not local for any of the given locality conditions.r. S.Bio Medicine Bio Medicine ⊥ P6 Project P7 Funct(has Focus) P8 Human Genome : Project P9 has Focus(Human Genome.t. Gene) E2 ∀has Focus. S) = if A ∈ S and otherwise = A / (a ) r←∆×∆ r←id For the remaining classes of interpretations.4. one can use locality based on IA←∅ (S) or IA←∆ (S).Cystic Fibrosis Table 4: A Comparison between Different Types of Locality Conditions neither of the assertions P8 or P9.Cuenca Grau. The locality condition based on IA←∆ (S). where the atomic roles and concepts outside S are interpreted as the largest possible sets.t.r. S.

C). Sig(R∅ ). C(i) ∈ Con∆(S). Sig(R∆×∆ ).6 Combining and Extending Safety Classes In the previous section we gave examples of several safety classes based on different local classes of interpretations and demonstrated that different classes are suitable for different types of axioms. as demonstrated in the following example: r←∆×∆ r←∅ Example 37 Consider the union (OA←∅ ∪ OA←∆ )(·) of two classes of local ontologies r←∆×∆ r←∅ OA←∅ (·) and OA←∆ (·) defined in Section 5.C ∅ n R∅ .C ∆ | ( 1 Rid . their union (O1 ∪ O2 )(·) is a class of ontologies defined by (O1 ∪O2 )(S) = O1 (S)∪O2 (S). where some cases in the recursive definitions are present only for the indicated classes of interpretations. Unfortunately the unionclosure property for safety classes is not preserved under unions. In Figure 3 we gave recursive r←∗ ˜ r←∗ definitions for syntactically local axioms AxA←∗ (S) that correspond to classes IA←∗ (S) of interpretations from Table 3. C is any concept. 5. Moreover. the ontology O1 consisting of axioms P4–P7 from Table 4 satisfies the first locality condition. C|C C∆ | a : C∆ ˜ r←∗ AxA←∗ (S) ::= C ∅ r←∅ for IA←∗ (·): r←∆×∆ for IA←∗ (·): r←id for IA←∗ (·): | R∅ |R R | Trans(r∅ ) | Funct(R∅ ) R∆×∆ | Trans(r∆×∆ ) | r∆×∆ (a. the ontology O2 consisting of axioms P8–P9 satisfies the second locality condition. One may wonder if locality classes already provide the most powerful sufficient 301 . as seen in Example 37. if each of the safety classes is also anti-monotonic or subset-closed. It is easy to see by Definition 24 that if both O1 (·) and O2 (·) are safety classes then their union (O1 ∪ O2 )(·) is a safety class. Formally. this gives a more powerful sufficient condition for safety. Sig(Rid ) S.5. for example. the union of such safety classes might be no longer unionclosed. Obviously. respectively. ♦ As shown in Proposition 33.C ∅ | r←∗ for IA←∅ (·): r←∅ for IA←∗ (·): r←id for IA←∗ (·): C∅ | C∅ n R. ∆ C ∅ ∈ Con∅(S). one may try to apply different sufficient tests and check if any of them succeeds. Where: A∅ . b) | Trans(rid ) | Funct(Rid ) Figure 3: Syntactic Locality Conditions for the Classes of Interpretations in Table 3 set and. then the union is anti-monotonic. In order to check safety of ontologies in practice.C ∆ ) | ∃Rid . m ≥ 2 . which can be seen as the union of the safety classes used in the tests. subset-closed as well. r∅ . is not even safe for S as we have shown in Section 5.Modular Reuse of Ontologies: Theory and Practice Con∅(S) ::= ¬C ∆ | C | ∃R. however. A∆ .C ∆ ) . as ∆I in every interpretation I from the class and see in which situations DL-constructors produce the elements from these sets. but their union O1 ∪O2 satisfies neither the first nor the second locality condition and. every locality condition gives a union-closed safety class.C C Con∆(S) ::= r←∗ for IA←∆ (·): r←∆×∆ for IA←∗ (·): r←id for IA←∗ (·): ∆ | ¬C ∅ | C1 ∆ C2 | A∆ | ∃R∆×∆ . R is any role | A∅ | ∃R∅ . rid ∈ S. r∆×∆ . in fact. or respectively.C ∆ | ( n R∆×∆ .C | | ( m Rid .5. This safety class is not union-closed since. given two classes of ontologies O1 (·) and O2 (·).

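As discussed above, the union of two safety classes is again a safety class and amounts to a disjunction of the two membership tests, although, as Example 37 shows, union-closure may be lost. A minimal sketch over the hypothetical `SafetyClass` predicate shape used in the earlier sketches:

```python
def union_of_safety_classes(first, second):
    """(O1 ∪ O2)(S): an ontology passes the combined safety test if it passes
    either of the two sufficient conditions."""
    def combined(ontology, signature):
        return first(ontology, signature) or second(ontology, signature)
    return combined
```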
r. r←∅ r←∅ Since O ∈ OA←∅ (S). and modifying τ (·. Hence there is probably room to extend the locality classes OA←∅ (·) and OA←∆ (·) for L = SHOIQ while preserving union-closure. ·) is defined as in Proposition 30. Second. Let / / β := τ (α. since α ∈ AxA←∅ (S). the ontology P is chosen in such a way that P ∈ OA←∅ (S) and O ∪ P ∪ Q r←∅ has only models from IA←∅ (S). for every A. ♦ r←∅ r←∅ Proposition 39 The classes of local ontologies OA←∅ (·) and OA←∆ (·) defined in Section 5. S in L. Kazakov. 302 . (ii) O is safe w. for every ontology Q over L with Sig(Q) ∩ Sig(O ∪ P) ⊆ S it is the case that O ∪ P ∪ Q is a deductive conservative extension of Q.5). by Definition 1. Let O be an ontology over L that satisfies conditions (i)–(iii) above. Definition 38 (Maximal Union-Closed Safety Class). Hence. & Sattler conditions for safety that satisfy all the desirable properties from Definition 24. it is not clear how to define the function τ (·. since β r←∅ does not contain individuals. ⊥ for every atomic r←∅ concept A and atomic role r from Sig(O) \ S. and (iii) for every P ∈ O(S). Proof. Surprisingly this is the case to a certain extent for some locality classes considered in Section 5. if a safety class O(·) is not maximal union-closed for a language L. Q is an ontology which implies nothing but tautologies and uses all the atomic concepts and roles from S. Note also that the proof of Proposition 39 does not work in the presence of nominals. S in L. it is not clear how to force interpretations of roles to be the universal or identity relation using SHOIQ axioms. Now. There are some difficulties in extending the proof of Proposition 39 to other locality classes considered in Section 5. since O |= α and O ∪ P ∪ Q r←∅ has only models from IA←∅ (S). I |= α iff I |= β for every I ∈ IA←∅ (S).5. 297). By Proposition 30. S) where τ (·. S) contains symbols from S only (see Footnote 3 on r←∅ r←∅ p.t. since this will not guarantee that β = τ (α. We claim that O ∪ P ∪ Q is not a deductive Sig(Q)-conservative extension of Q. We define ontologies P and Q as follows. r←∅ For O(·) = OA←∆ (·) the proof can be repeated by taking P to consist of axioms A and ∃r. it is the case that O ∪ P is safe w.5. A safety class O1 (·) is maximal unionclosed for a language L if O1 (·) is union-closed and for every union-closed safety class O2 (·) that extends O1 (·) and every O over L we have that O ∈ O2 (S) implies O ∈ O1 (S). A safety class O2 (·) extends a safety class O1 (·) if O1 (S) ⊆ O2 (S) for every S. we have that Sig(β) ⊆ S = Sig(Q) and. / β is not a tautology. Take Q to consist of all tautologies of form ⊥ A and ⊥ ∃r.5 are maximal union-closed safety classes for L = SHIQ. It is easy to see that P ∈ OA←∅ (S). Take P to consist of the axioms A ⊥ and ∃r. it is the case that O ∪ P ∪ Q |= β. ⊥ for all A. r ∈ Sig(O) \ S. that is. According to Definition 38. there exists an axiom α ∈ O such that α ∈ AxA←∅ (S). () r←∅ Intuitively. thus Q |= β. As has been shown in the proof of r←∅ this proposition.5. O ∪ P ∪ Q is not a deductive Sig(Q)-conservative extension of Q ( ). Note that Sig(O ∪ P) ∩ Sig(Q) ⊆ S. ·) in these cases (see a related discussion in Section 5. First. We demonstrate that r←∅ r←∅ this is not possible for O(·) = OA←∅ (·) or O(·) = OA←∆ (·) r←∅ We first consider the case O(·) = OA←∅ (·) and then show how to modify the proof for r←∅ the case O(·) = OA←∆ (·). Horrocks. ·) as has been discussed in Section 5.r. then there exists a signature S and an ontology O over L.t. such that (i) O ∈ / O(S).Cuenca Grau. r ∈ S.

6. Extracting Modules Using Safety Classes

In this section we revisit the problem of extracting modules from ontologies. As shown in Corollary 23 in Section 4, there exists no general procedure that can recognize or extract all the (minimal) modules for a signature in an ontology in finite time. The techniques described in Section 5, however, can be reused for extracting particular families of modules that satisfy certain sufficient conditions.

Proposition 18 establishes the relationship between the notions of safety and module: a subset O1 of O is an S-module in O provided that O \ O1 is safe for S ∪ Sig(O1). A notion of modules based on this property can be defined as follows.

Definition 40 (Modules Based on Safety Class). Let L be an ontology language and O(·) be a safety class for L. Given an ontology O and a signature S over L, we say that Om ⊆ O is an O(·)-based S-module in O if O \ Om ∈ O(S ∪ Sig(Om)). ♦

It is clear that, according to Definition 40, any safety class O(·) can provide a sufficient condition for testing modules; that is, in order to prove that O1 is an S-module in O, it is sufficient to show that O \ O1 ∈ O(S ∪ Sig(O1)). Hence, any procedure for checking membership of a safety class O(·) can be used directly for checking whether Om is a module based on O(·).

Remark 41 Note that for every safety class O(·), ontology O and signature S, there exists at least one O(·)-based S-module in O, namely O itself; indeed, the empty ontology ∅ = O \ O belongs to O(S) for every O(·) and S by Definition 24. Note also that it follows from Definition 40 that Om is an O(·)-based S-module in O iff Om is an O(·)-based S′-module in O for every S′ with S ⊆ S′ ⊆ (S ∪ Sig(Om)). ♦

Therefore, in order to extract an O(·)-based module, it is sufficient to enumerate all possible subsets of the ontology and check if any of these subsets is a module based on O(·). In practice, however, it is possible to avoid checking all the possible subsets of the input ontology. Figure 4 presents an optimized version of the module-extraction algorithm. The procedure manipulates configurations of the form Om | Ou | Os, which represent a partitioning of the ontology O into three disjoint subsets Om, Ou and Os. The set Om accumulates the axioms of the extracted module. The set Ou, which is initialized to O, contains the unprocessed axioms; these axioms are distributed among Om and Os according to the rules R1 and R2. The set Os is intended to be safe w.r.t. S ∪ Sig(Om); more precisely, given an axiom α from Ou, rule R1 moves α into Os provided that Os remains safe w.r.t. S ∪ Sig(Om) according to the safety class O(·). Otherwise, rule R2 moves α into Om and moves all axioms from Os back into Ou, since Sig(Om) might expand and axioms from Os might become no longer safe w.r.t. S ∪ Sig(Om). At the end of this process, when no axioms are left in Ou, the set Om is an O(·)-based module in O.

The rewrite rules R1 and R2 preserve the invariants I1–I3 given in Figure 4. Invariant I1 states that the three sets Om, Ou and Os form a partitioning of O; I2 states that the set Os satisfies the safety test for S ∪ Sig(Om) w.r.t. O(·); and I3 establishes that the rewrite rules either add elements to Om, or they add elements to Os without changing Om; in other words, the pair (|Om|, |Os|) consisting of the sizes of these sets increases in lexicographical order.
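For illustration, here is a minimal Python sketch of the procedure just described; the formal rewrite rules and invariants follow in Figure 4. The helpers is_safe(axioms, signature), standing for the membership test of the chosen safety class O(·), and sig(axiom), returning the symbols of an axiom, are assumptions of the sketch and are not tied to any particular ontology API.

```python
# Sketch of the module-extraction procedure of Figure 4.
# `is_safe(axioms, signature)` is an assumed membership test for the
# safety class O(·); `sig(axiom)` is an assumed helper returning the
# set of symbols occurring in an axiom.

def extract_module(ontology, signature, is_safe, sig):
    o_m = set()                    # accumulates the module (Om)
    o_u = set(ontology)            # unprocessed axioms (Ou)
    o_s = set()                    # axioms currently known to be safe (Os)
    while o_u:                     # termination condition: Ou = ∅
        alpha = o_u.pop()
        local_sig = set(signature) | {s for ax in o_m for s in sig(ax)}
        if is_safe(o_s | {alpha}, local_sig):
            o_s.add(alpha)         # rule R1: alpha stays outside the module
        else:
            o_m.add(alpha)         # rule R2: alpha joins the module and
            o_u |= o_s             # every previously safe axiom is re-examined
            o_s = set()
    return o_m
```

By Proposition 42 below, this process terminates and returns an O(·)-based S-module; for an anti-monotonic, subset-closed and union-closed safety class it returns the smallest such module.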

Input: an ontology O, a signature S, a safety class O(·)
Output: a module Om in O based on O(·)

Configuration: Om | Ou | Os   (module | unprocessed | safe)
Initial Configuration: ∅ | O | ∅
Termination Condition: Ou = ∅

Rewrite rules:
R1. Om | Ou ∪ {α} | Os =⇒ Om | Ou | Os ∪ {α}   if (Os ∪ {α}) ∈ O(S ∪ Sig(Om))
R2. Om | Ou ∪ {α} | Os =⇒ Om ∪ {α} | Ou ∪ Os | ∅   if (Os ∪ {α}) ∉ O(S ∪ Sig(Om))

Invariants for Om | Ou | Os:
I1. O = Om ∪ Ou ∪ Os, with Om, Ou and Os pairwise disjoint
I2. Os ∈ O(S ∪ Sig(Om))
I3. For every rewrite step Om | Ou | Os =⇒ O′m | O′u | O′s: (|Om|, |Os|) <lex (|O′m|, |O′s|)

Figure 4: A Procedure for Computing Modules Based on a Safety Class O(·)

Proposition 42 (Correctness of the Procedure from Figure 4) Let O(·) be a safety class for an ontology language L, O an ontology over L, and S a signature over L. Then:
(1) The procedure in Figure 4 with input O, S, O(·) terminates and returns an O(·)-based S-module Om in O.
(2) If, additionally, O(·) is an anti-monotonic, subset-closed and union-closed safety class, then there is a unique minimal O(·)-based S-module in O, and the procedure returns precisely this minimal module.

Proof. (1) The procedure based on the rewrite rules from Figure 4 always terminates for the following reasons: (i) for every configuration derived by the rewrite rules, the sets Om, Ou and Os form a partitioning of O (see invariant I1 in Figure 4), and therefore the size of every set is bounded; (ii) at each rewrite step, the pair (|Om|, |Os|) increases in lexicographical order (see invariant I3 in Figure 4). Indeed, if Ou ≠ ∅ then it is always possible to apply one of the rewrite rules R1 or R2, and hence the procedure always terminates with Ou = ∅. Upon termination, by invariant I1 from Figure 4, O is partitioned into Om and Os, and by invariant I2, Os ∈ O(S ∪ Sig(Om)), which implies, by Definition 40, that Om is an O(·)-based S-module in O.

(2) Now, suppose that, in addition, O(·) is anti-monotonic, subset-closed and union-closed, and suppose that O′m is an O(·)-based S-module in O. We demonstrate by induction that for every configuration Om | Ou | Os derivable from ∅ | O | ∅ by the rewrite rules R1 and R2, it is the case that Om ⊆ O′m. This will prove that the module computed by the procedure is a subset of every O(·)-based S-module in O and hence is the smallest O(·)-based S-module in O. Indeed, for the base case we have Om = ∅ ⊆ O′m. The rewrite rule R1 does not change the set Om. For the rewrite rule R2 we have:

Om | Ou ∪ {α} | Os =⇒ Om ∪ {α} | Ou ∪ Os | ∅   if (Os ∪ {α}) ∉ O(S ∪ Sig(Om)).   (∗∗)

Suppose, to the contrary, that Om ⊆ O′m but Om ∪ {α} ⊈ O′m. Then α ∈ O′s := O \ O′m. Since O′m is an O(·)-based S-module in O, we have that O′s ∈ O(S ∪ Sig(O′m)); since Om ⊆ O′m and O(·) is anti-monotonic, we have that O′s ∈ O(S ∪ Sig(Om)), and, since O(·) is subset-closed, {α} ∈ O(S ∪ Sig(Om)). Since, by invariant I2 from Figure 4, Os ∈ O(S ∪ Sig(Om)) and O(·) is union-closed, Os ∪ {α} ∈ O(S ∪ Sig(Om)), which contradicts (∗∗). This contradiction implies that the rule R2 also preserves the property Om ⊆ O′m.

Claim (1) of Proposition 42 establishes that the procedure from Figure 4 terminates for every input and produces a module based on a given safety class. If the safety class O(·) satisfies additional desirable properties, like those based on the classes of local interpretations described in Section 5, it is possible to optimize the procedure shown in Figure 4; in this case, the procedure, in fact, produces the smallest possible module based on the safety class, as stated in claim (2) of Proposition 42.

If O(·) is union-closed, then instead of checking whether (Os ∪ {α}) ∈ O(S ∪ Sig(Om)) in the conditions of rules R1 and R2, it is sufficient to check if {α} ∈ O(S ∪ Sig(Om)), since it is already known that Os ∈ O(S ∪ Sig(Om)). If O(·) is compact and subset-closed, then instead of moving all the axioms in Os back to Ou in rule R2, it is sufficient to move only those axioms of Os that contain at least one symbol from α which did not occur in Om before, since the set of remaining axioms will stay in O(S ∪ Sig(Om)). In Figure 5 we present an optimized version of the algorithm from Figure 4 for such locality classes. Moreover, it is possible to show that this procedure runs in polynomial time, assuming that the safety test can also be performed in polynomial time.

Input: an ontology O, a signature S, a safety class O(·)
Output: a module Om in O based on O(·)

Initial Configuration: ∅ | O | ∅
Termination Condition: Ou = ∅

Rewrite rules:
R1. Om | Ou ∪ {α} | Os =⇒ Om | Ou | Os ∪ {α}   if {α} ∈ O(S ∪ Sig(Om))
R2. Om | Ou ∪ {α} | Os^α ∪ O′s =⇒ Om ∪ {α} | Ou ∪ Os^α | O′s   if {α} ∉ O(S ∪ Sig(Om)) and Sig(O′s) ∩ Sig(α) ⊆ Sig(Om)

Figure 5: An Optimized Procedure for Computing Modules Based on a Compact, Subset-Closed, Union-Closed Safety Class O(·)
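Under the same assumptions as the earlier sketch (the hypothetical is_safe and sig helpers), the two optimizations amount to a weaker test in R1 and more careful bookkeeping in R2; the following Python sketch mirrors the procedure of Figure 5.

```python
# Sketch of the optimized procedure of Figure 5, applicable when the
# safety class is compact, subset-closed and union-closed.  The
# helpers `is_safe` and `sig` are the same assumptions as before.

def extract_module_optimized(ontology, signature, is_safe, sig):
    o_m, o_u, o_s = set(), set(ontology), set()
    module_sig = set(signature)               # S ∪ Sig(Om)
    while o_u:
        alpha = o_u.pop()
        if is_safe({alpha}, module_sig):      # R1: test the single axiom only
            o_s.add(alpha)
        else:                                 # R2: alpha enters the module
            o_m.add(alpha)
            new_symbols = sig(alpha) - module_sig
            module_sig |= sig(alpha)
            # Re-examine only the safe axioms that mention a new symbol;
            # the remaining axioms stay safe for the enlarged signature.
            affected = {ax for ax in o_s if sig(ax) & new_symbols}
            o_u |= affected
            o_s -= affected
    return o_m
```

With a safety test that runs in polynomial time, such as syntactic locality, the whole procedure then runs in polynomial time, as noted above.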

Example 43 In Table 5 we present a trace of the algorithm from Figure 5 for the ontology O consisting of the axioms M1–M5 from Figure 1, the signature S = {Cystic Fibrosis, Genetic Disorder}, and the safety class O(·) = O^{r←∅}_{A←∅}(·) defined in Example 26. The first column of the table lists the configurations obtained from the initial configuration ∅ | O | ∅ by applying the rewrite rules R1 and R2 from Figure 5; in each row, the underlined axiom α is the one that is being tested for safety, and the rewrite rule corresponding to the result of this test is applied to the configuration. The second column of the table shows the elements of S ∪ Sig(Om) that have appeared for the current configuration but have not been present for the preceding configurations. In the last column we indicate whether the first conditions of the rules R1 and R2 are fulfilled for the selected axiom α from Ou; that is, in our case, whether α is local for I^{r←∅}_{A←∅}(·).

Table 5: A trace of the procedure in Figure 5 for the input O = {M1, . . . , M5} from Figure 1 and S = {Cystic Fibrosis, Genetic Disorder}. (The trace runs through eight configurations; the rules applied are R1, R1, R2, R2, R2, R1 and R2, and the new elements of S ∪ Sig(Om) that appear along the way include Fibrosis, Genetic Fibrosis, Genetic Origin, has Origin, located In and Pancreas.)

Note that some axioms α are tested for safety several times in different configurations, because the set Ou may increase after applications of rule R2. In our example, the axiom α = M2 is tested for safety in both configurations 1 and 7, and α = M3 in configurations 2 and 4. Note also that different results for the locality tests are obtained in these cases: both M2 and M3 were local w.r.t. S ∪ Sig(Om) when Om = ∅, but became non-local when new axioms were added to Om. In our example, the rewrite procedure produces a module Om consisting of the axioms M1–M4, which is the smallest O^{r←∅}_{A←∅}(·)-based S-module in O. ♦

Note that it is possible to apply the rewrite rules for different choices of the axiom α in Ou, which results in a different computation; the procedure from Figure 5 thus has implicit non-determinism. According to claim (2) of Proposition 42, all such computations should produce the same module Om, provided that O(·) is anti-monotonic and subset-closed. It is also easy to see that, given O and S, syntactic locality produces the same results for the tests; in other words, the implicit non-determinism in the procedure from Figure 5 does not have any impact on the result of the procedure. However, alternative choices for α may result in shorter computations: in our example we could have selected axiom M1 in the first configuration instead of M2, which would have led to a shorter trace consisting of configurations 1 and 4–8 only.

It is worth examining the connection between the S-modules in an ontology O based on a particular safety class O(·) and the actual minimal S-modules in O. It turns out that any O(·)-based module Om is guaranteed to cover the set of minimal modules, provided that O(·) is anti-monotonic and subset-closed; that is, Om contains all the S-essential axioms in O. The following lemma provides the main technical argument underlying this result.

Lemma 44 Let O(·) be an anti-monotonic subset-closed safety class for an ontology language L, O an ontology, and S a signature over L. Let O1 be an S-module in O w.r.t. L and Om an O(·)-based S-module in O. Then O2 := O1 ∩ Om is an S-module in O w.r.t. L.

by Definition 40. we have that OA |= (A ⊥) which implies OA |= (A B). We have seen in this example that the locality-based S-module extracted from O contains all of these axioms. In some cases. O2 is an O(·)-based S-module in O1 . if the safety class being used enjoys some nice properties then it suffices to extract a module for one of them. In particular O2 is an S-module in O1 w. since B ∈ Sig(OA ). Since O(·) is union-closed.4. Since Sig(α) ⊆ S. Now. Note that B ⊥ ∈ OA . and. contains all S-essential axioms in O w. it is sufficient to extract a module for a subset of the signature of α which. {B ⊥} ∈ / O(∅) and O(·) is compact. hence (O \ OA ) ∪ {B ⊥} = O \ OA . / (a) By Remark 41 it is the case that OA is an O(·)-based module in O for S = {A.t. however. L.t. leads to smaller modules.t. by Definition 24 it is the case that {B ⊥} ∈ O(S ∪ Sig(OA )). Since Sig(B ⊥) = {B} and B ∈ Sig(OA ). Since O1 \ O2 = O1 \ Om ⊆ O \ Om and O(·) is subset-closed. hence.t. In particular. then O \ OA ∈ O(S ∪ Sig(OA )). in general. L. B atomic concepts. Since OA is an O(·)-based S-module in O.r. We demonstrate that O1 ⊆ Om . in accordance to Corollary 45. Then Om contains all S-essential axioms in O w. Since O(·) is anti-monotonic. since O |= (A B). 307 .t. As shown in Section 3. Proof. as given by the following proposition: Proposition 46 Let O(·) be a compact union-closed safety class for an ontology language L. Consider two cases: (a) B ∈ Sig(OA ) and (b) B ∈ Sig(OA ). L. by Definition 40. otherwise. L.r. Let O1 be a minimal S-module in O w. in order to test subsumption between a pair of atomic concepts. we have that O \ Om ∈ O(S ∪ Sig(Om )). Proof. in general. the extracted module contains only essential axioms. O an ontology and A. Corollary 45 Let O(·) be an anti-monotonic. (b) Consider O = O ∪{B ⊥}. this module. L.r. since Om is an O(·)-based S-module in O.r. B} ∈ O(∅) then OA |= α. we have that O1 \ O2 ∈ O(S ∪ Sig(O2 )). subset-closed safety class for L and Om be an O(·)-based S-module in O. all the axioms M1–M4 are essential for the ontology O and signature S considered in Example 43. An interesting application of modules is the pruning of irrelevant axioms when checking if an axiom α is implied by an ontology O. By Definition 40. O2 is an S-module in O w. Then: 1 If O |= α := (A 2 If O |= α := (B B) and {B A) and { ⊥} ∈ O(∅) then OA |= α. L which is strictly contained in O1 . O1 ∩ Om is an S-module in O w.r. L. by Lemma 44. In our case. since OA is a module in O for S = {A}. 1.t. it is the case that O |= (A ⊥). Indeed. localitybased modules might contain non-essential axioms. and hence.r.t. and O2 ⊆ Om . Indeed.r.t. it is the case that O |= α implies OA |= α. B}. by Definition 12 and Definition 1. it is the case that (O \ OA ) ∪ {B ⊥} ∈ O(S ∪ Sig(OA )). Let OA be an O(·)-based module in O for S = {A} in O. in order to check whether O |= α it suffices to retrieve the module for Sig(α) and verify if the implication holds w.r. we have that OA is an O(·)-based S-module in O . Hence Om is a superset of every minimal S-module in O and hence. it is the case that O1 \ O2 ∈ O(S ∪ Sig(Om )).Modular Reuse of Ontologies: Theory and Practice Proof. Since O1 is an S-module in O w.

2-EXPTIME-complete w. 2007). in Case (b) we show that OA is an O(·)-based module for S = {A} in O = O∪{ B}. & Sattler 2. by Corollary 47... ALCIQO (roughly OWL DL). Then O |= (A B) iff OA |= (A B) iff OB |= (A B). axioms of the form B ⊥ (Case 1) or ( B) (Case 2).r. 2007). therefore.t. 7.r. when applied to the empty signature.. Let OA←∅ (·) and r←∗ r←∗ OA←∆ (·) be locality classes based on local classes of interpretations of form IA←∅ (·) and r←∗ (·). or the sub-concepts (Case 2) B of A. Related Work We have seen in Section 3 that the notion of conservative extension is valuable in the formalization of ontology reuse tasks. in order to check r←∗ the subsumption O |= (A B) one could first extract a OA←∅ (·)-based module M1 for S = {A} in O. is complete for all the sub-concepts of B in M1 . and. ALCIQ (Lutz et al. Proof. Let O be a module for S = {A} based on O r←∗ (·) and IA←∆ A A←∅ r←∗ OB be a module for S = {B} based on OA←∆ (·). to optimize the classification of ontologies. B atomic concepts. Indeed. Lutz & Wolter. provided that the safety class captures. hence. since O |= ( A). checking model conservative extensions is already undecidable for EL (Lutz & Wolter.r. then it is also a r←∗ super-concept of A in M1 . 2007). and check whether this subsumption holds w. which implies O |= (B A). For this purpose. for example. one could use the syntactic locality conditions given in Figure 3 instead of their semantic counterparts. 2007) (roughly OWL-Lite). including A.r. Furthermore.t. the module. It is possible to combine modularization procedures to obtain modules that are smaller than the ones obtained using these procedures individually. if an atomic concept is a super-concept of A in O. M2 is an S-module in M1 for S = {A. The proof of this case is analogous to that for Case 1: Case (a) is applicable without changes.t. 2007). and undecidable w. it is the case that OB |= ( A). 308 . B is a super-concept or a sub-concept of A in the ontology if and only if it is such in the module.Cuenca Grau. The problem of deciding conservative extensions has been recently investigated in the context of ontologies (Ghilardi et al. respectively. r←∗ Corollary 47 Let O be a SHOIQ ontology and A. B} and M1 is an S-module in the original ontology O. by Corollary 47 this module is complete for all the super-concepts of A in O. the empty signature. Horrocks. One could extract a OA←∆ (·)-based module M2 for S = {B} in M1 which.t. and for ALC it is even not semi-decidable (Lutz et al. It is easy to see that {B r←∗ ⊥} ∈ OA←∅ (∅) and { r←∗ A} ∈ OA←∆ (∆). That is. including B—that is. it is convenient to use a syntactically tractable approximation of the safety class in use. Proposition 46 implies that a module based on a safety class for a single atomic concept A can be used for capturing either the super-concepts (Case 1). from Table 3. For example. 2007. In order to check if the subsumption A B holds in an ontology O.. 2006. it is the case that M2 is also an S-module in O. Lutz et al. By Proposition 46. The problem of deciding whether P ∪ Q is a deductive S-conservative extension of Q is EXPTIME-complete for EL (Lutz & Wolter. Kazakov. it is sufficient to extract a module in O for S = {A} using a modularization algorithm based on a safety class in which the ontology {B ⊥} is local w. for example. This property can be used.

the fragment extracted by the Prompt-Factor algorithm consists of all the axioms M1-M5. then expands S with the symbols mentioned in these axioms. M4. In this case. 2003. This field is rather diverse and has roots in several communities. and “equivalent concepts” should be considered by including the roles and concepts they contain. In this case. for the included concepts. In general. The authors describe a segmentation procedure which. The authors discuss which concepts and roles should be included in the segment and which should not. numerous techniques for extracting fragments of ontologies for the purposes of knowledge reuse have been proposed. and Ontology Segmentation (Kalfoglou & Schorlemmer. Q2 |= α. 1999). An example of such a procedure is the algorithm implemented in the Prompt-Factor tool (Noy & Musen. 2004b). Hence.Modular Reuse of Ontologies: Theory and Practice In the last few years. 2006). Ontology Integration. For our example in Section 3. A B}. Given S = {B. the Prompt-Factor algorithm may fail even if Q is consistent. In general. the full version of GALEN cannot be processed by reasoners. the Prompt-Factor algorithm extracts Q2 = {B C}. how the “super-” and “sub-” concepts are computed. computes a “segment” for S in the ontology. and Q consists of axioms M1–M5 from Figure 1. when S = {Cystic Fibrosis. Genetic Disorder}. Given a signature S and an ontology Q. but not their “subconcepts”. C}. which does not imply α. otherwise. It is easy to see that Q is consistent. the algorithm retrieves a fragment Q1 ⊆ Q as follows: first. the Prompt-Factor algorithm extracts Q1 = {A B}. the axioms in Q that mention any of the symbols in S are added to Q1 . also “their restrictions”. 2003). however. together with their “super-concepts” and “super-roles” but not their “sub-concepts” and “sub-roles”. the extracted fragment is not guaranteed to be a module. second. all the remaining axioms of Q are retrieved. In particular. concepts that are “linked” from the input concepts (via existential restrictions) and their “super-concepts”. that is. however. and so Q2 is not a module in Q. For example. S is expanded with the symbols in Sig(Q1 ). Q |= α. the PromptFactor algorithm extracts a module (though not a minimal one). “intersection”. a rapidly growing body of work has been developed under the headings of Ontology Mapping and Alignment. and S = {A}. or. After this step. In particular. admits only single element models.A). consider an ontology Q = {A ≡ ¬A. It is also not clear which axioms should be included in 309 . Ontology Merging. and M5 containing these terms. The ontology Q is inconsistent due to the axiom A ≡ ¬A: any axiom (and α in particular) is thus a logical consequence of Q. such that S contains all the symbols of Q. consider an ontology Q = { {a}. α = (A ∀r. B C} and α = (C B). From the description of the procedure it is not entirely clear whether it works with a classified ontology (which is unlikely in the case of GALEN since the full version of GALEN has not been classified by any existing reasoner). the algorithm first retrieves axioms M1. Noy. so the authors investigate the possibility of splitting GALEN into small “segments” which can be processed by reasoners separately. Currently. For example. and α is satisfied in every such a model. which was used for segmentation of the medical ontology GALEN (Rector & Rogers. The description of the procedure is very high-level. 2004a. Another example is Seidenberg’s segmentation algorithm (Seidenberg & Rector. 
the segment should contain all “super-” and “sub-” concepts of the input concepts. Most of these techniques rely on syntactically traversing the axioms in the ontology and employing various heuristics to determine which axioms are relevant and which are not. given a set of atomic concepts S. These steps are repeated until a fixpoint is reached. “union”.

In contrast. The growing interest in the notion of “modularity” in ontologies has been recently reflected in a workshop on modular ontologies5 held in conjunction with the International Semantic Web Conference (ISWC-2006). In contrast to these works. 2003) and Package-based Description Logics (Bao. since the procedure talks only about the inclusion of concepts and roles.html 310 . Kazakov. 5. 2006) propose a specialized semantics for controlling the interaction between the importing and the imported modules to avoid side-effects.iastate. namely that the module QA should entail all the subsumptions in the original ontology between atomic concepts involving A and other atomic concepts in QA . One of the requirements for the module is that Q should be a conservative extension of QA (in the paper QA is called a logical module in Q). 2007). Parsia. The algorithm is intended as a visualization technique. for information see the homepage of the workshop http://www. The paper imposes an additional requirement on modules. Module extraction in ontologies has also been investigated from a formal point of view (Cuenca Grau et al. 2006a). such as safety testing. Concerning the problem of ontology reuse. & Honavar. to check for side-effects. they do not take the semantics of the ontology languages into account.cild. there have been various proposals for “safely” combining modules. and we establish a collection of reasoning services. & Sattler the segment in the end. 2004) consists of partitioning the concepts in an ontology to facilitate visualization of and navigation through an ontology. Such studies are beyond the scope of this paper. 2006b). The algorithm uses a set of heuristics for measuring the degree of dependency between the concepts in the ontology and outputs a graphical representation of these dependencies. What is common between the modularization procedures we have mentioned is the lack of a formal treatment for the notion of module. and does not establish a correspondence between the nodes of the graph and sets of axioms in the ontology. Caragea. The papers describing these modularization procedures do not attempt to formally specify the intended outputs of the procedures. the algorithm we present here works for any ontology. we assume here that reuse is performed by simply building the logical union of the axioms in the modules under the standard semantics. including those containing “non-safe” axioms. It might be possible to formalize these algorithms and identify ontologies for which the “intuitionbased” modularization procedures work correctly. (2006) define a notion of a module QA in an ontology Q for an atomic concept A. The authors present an algorithm for partitioning an ontology into disjoint modules and proved that the algorithm is correct provided that certain safety requirements for the input ontology hold: the ontology should be consistent. Horrocks. A different approach to module extraction proposed in the literature (Stuckenschmidt & Klein.Cuenca Grau. but rather argue what should be in the modules and what not based on intuitive notions. Cuenca Grau et al. such as E-connections (Cuenca Grau. The interested reader can find in the literature a detailed comparison between the different approaches for combining ontologies (Cuenca Grau & Kutz. should not contain unsatisfiable atomic concepts. most of these proposals. & Sirin..edu/events/womo. In particular. Distributed Description Logics (Borgida & Serafini. 
and should have only “safe” axioms (which in our terms means that they are local for the empty signature).

In this section by “module” we understand the result of the considered modularization procedures which may not necessarily be a module according to Definition 10 or 12 8. ˜ r←∅ First. (2006). The aim of the experiments described in this section is not to provide a throughout comparison of the quality of existing modularization algorithms since each algorithm extracts “modules” according to its own requirements. 8. Tsarkov. based on the locality class OA←∅ (·). we check if P belongs to the locality class for S = Sig(P) ∩ Sig(Q). 7 are written in the OWL-Full species of OWL (Patel-Schneider et al. Parsia. all but 11 were syntactically local for S (and hence also semantically local for S). & Horrocks.cs. http://sourceforge. we show that the locality class OA←∅ (·) provides a powerful sufficiency test for ˜ r←∅ safety which works for many real-world ontologies. where A ∈ S and B ∈ S.. we show that OA←∅ (·)-based modules are typically very small compared to both the size of the ontology and the modules extracted using other techniques. 2004) to which our framework does not yet apply. 6. The remaining 4 non-localities are due to the presence of so-called mapping axioms of the form A ≡ B . Third. It turned out that from the 96 ontologies in the library that import other ontologies.net/projects/owlapi 311 .1 Locality for Testing Safety ˜ r←∅ We have run our syntactic locality checker for the class OA←∅ (·) over the ontologies from a library of 300 ontologies of various sizes and complexity some of which import each other (Gardiner. we have implemented a syntactic loca˜ r←∅ ˜ r←∆×∆ lity checker for the locality classes OA←∅ (·) and OA←∆ (·) as well as the algorithm for extracting modules given in Figure 5 from Section 6. / Note that these axioms simply indicate that the atomic concepts A. Sirin.Modular Reuse of Ontologies: Theory and Practice 8. The library is available at http://www. we provide empirical evidence of the appropriateness of locality for safety testing and module extraction. A2: The segmentation algorithm proposed by Cuenca Grau et al. Second. Indeed.uk/~horrocks/testing/ 7. & Hendler.ac.man. B in the two ontologies under consideration are synonyms. From the 11 non-local ontologies. 8. 2006) and illustrate the ˜ r←∅ ˜ r←∆×∆ combination of the modularization procedures based on the classes OA←∅ (·) and OA←∆ (·). we report on our implementation in the ontology editor Swoop (Kalyanpur. ˜ r←∅ A3: Our modularisation algorithm (Algorithm 5). After this transformation.6 For all ontologies P that import an ontology Q. we were able to easily repair these non-localities as follows: we replace every occurrence of A in P with B and then remove this axiom from the ontology. For this purpose. Implementation and Proof of Concept In this section. 2006). we compare three modularization7 algorithms that we have implemented using Manchester’s OWL API:8 A1: The Prompt-Factor algorithm (Noy & Musen. 2003). all 4 non-local ontologies turned out to be local. Cuenca Grau. but rather to give an idea of the typical size of the modules extracted from real ontologies by each of the algorithms.2 Extraction of Modules In this section.
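To convey the flavour of the syntactic locality test used in Section 8.1, the following small, self-contained Python sketch checks one case of ⊥-locality over a toy axiom representation. The data classes and the covered constructors are assumptions made purely for this illustration; they cover only a fragment of the grammar of Figure 3 and are not the data model of the checker described here or of any existing OWL toolkit.

```python
# Toy illustration of a syntactic ⊥-locality check (the class in which
# atomic concepts and roles outside the signature are interpreted as
# the empty set).  The representation below is an assumption of this
# sketch, not the data model of an actual OWL toolkit.

from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class Atomic:          # atomic concept A
    name: str

@dataclass(frozen=True)
class Exists:          # existential restriction ∃R.C
    role: str
    filler: Any

@dataclass(frozen=True)
class And:             # conjunction C1 ⊓ C2
    left: Any
    right: Any

@dataclass(frozen=True)
class SubClassOf:      # axiom C ⊑ D
    sub: Any
    sup: Any

def is_bottom(concept, signature):
    """Is the concept syntactically forced to be empty once every
    concept and role name outside `signature` is interpreted as ∅?"""
    if isinstance(concept, Atomic):
        return concept.name not in signature
    if isinstance(concept, Exists):
        return concept.role not in signature or is_bottom(concept.filler, signature)
    if isinstance(concept, And):
        return is_bottom(concept.left, signature) or is_bottom(concept.right, signature)
    return False

def is_local(axiom, signature):
    """One case of the definition: C ⊑ D is ⊥-local w.r.t. the
    signature whenever its left-hand side is forced to be empty."""
    return isinstance(axiom, SubClassOf) and is_bottom(axiom.sub, signature)

# Illustrative axiom B ⊑ ∃r.C with B, r and C all outside the signature:
axiom = SubClassOf(Atomic("B"), Exists("r", Atomic("C")))
print(is_local(axiom, {"A"}))   # True: B is interpreted as the empty set
```

Checking safety of an importing ontology P then amounts to testing every axiom of P for locality w.r.t. S = Sig(P) ∩ Sig(Q), as done in the experiments above.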

Figure 6: Distribution for the sizes of syntactic locality-based modules: the X-axis gives the number of concepts in the modules and the Y-axis the number of modules extracted for each size range. Panels: (a) modularization of NCI; (b) modularization of GALEN-Small; (c) modularization of SNOMED; (d) modularization of GALEN-Full; (e) small modules of GALEN-Full; (f) large modules of GALEN-Full.

owl http://ontology.(%) 30.4 2 10 29.11 and the SNOMED Ontology12 . but only definitions.13 the DOLCE upper ontology (DOLCE-Lite).5 100 A3: Loc.(%) 0.jpl.nasa. Complex.teknowledge. we have included the National Cancer Institute (NCI) Ontology.3 100 Avg. they do not contain GCIs. Max. Since there is no benchmark for ontology modularization and only a few use cases are available.(%) 0. Therefore we have designed a simple experiment setup which.mindswap.(%) 87. it should give an idea of typical module sizes.3 Avg.it/DOLCE. 13. 14. These ontologies are complex since they use many constructors from OWL DL and/or include a significant number of GCIs.05 0.10 the Gene Ontology (GO). 11.6 100 1 100 100 100 96. in particular.8 0.(%) 75.Modular Reuse of Ontologies: Theory and Practice Ontology Atomic Concepts A1: Prompt-Factor Max.1 100 100 100 51. It is worth emphasizing here that our algorithm A3 does not just extract a module for the input atomic concept: the extracted fragment is also a module for its whole signature.org/prj_galen.gov/ontology/ 313 .05 0. 10. In this group. In the case of GALEN. we have collected a set of well-known ontologies available on the Web.14 and NASA’s Semantic Web for Earth and Environmental Terminology (SWEET)15 .8 1.09 1. we have also considered a version GALEN-Small that has commonly been used as a benchmark for OWL reasoners. which we divided into two groups: Simple. yet similar in structure.1 24. 9. These ontologies are expressed in a simple ontology language and are of a simple structure.08 0.loa-cnr. 15.8 100 0.9 the SUMO Upper Ontology.geneontology.openclinical. We compare the maximal and average sizes of the extracted modules.4 100 Avg.9 37.5 0.html http://www.84 100 0. which typically includes a fair amount of other concepts and roles.org http://www. http://www. we took the set of its atomic concepts and extracted modules for every atomic concept. 12.snomed.(%) 55 100 1 100 100 100 83. there is no systematic way of evaluating modularization procedures. This ontology is almost 10 times smaller than the original GALEN-Full ontology. For each ontology.-based mod. even if it may not necessarily reflect an actual ontology reuse scenario.7 100 A2: Segmentation Max.1 100 100 100 88. This group contains the well-known GALEN ontology (GALEN-Full).5 0.org http://www.html http://sweet.org/2003/CancerOntology/nciOncology.7 3.com/ http://www.6 NCI SNOMED GO SUMO GALEN-Small GALEN-Full SWEET DOLCE-Lite 27772 255318 22357 869 2749 24089 1816 499 Table 6: Comparison of Different Modularization Algorithms As a test suite.

in the case of SUMO. Indeed. in particular. we can clearly see that locality-based modules are significantly smaller than the ones obtained using the other methods. & Sattler ˜ r←∅ ˜ r←∆×∆ (a) Concepts DNA Sequence and (b) OA←∅ (S)-based module for (c) OA←∆ (S)-based module for Microanatomy in NCI DNA Sequence in NCI Micro Anatomy in the fragment 7b Figure 7: The Module Extraction Functionality in Swoop The results we have obtained are summarized in Table 6. most of the modules we have obtained for these ontologies contain less than 40 atomic concepts.Cuenca Grau. the X-axis represents the size ranges of the ob314 . as opposed to 1/200 in the case of SNOMED. we have obtained very small locality-based modules. In fact.5% of the size of the ontology. In contrast. The SWEET ontology is an exception: even though the ontology uses most of the constructors available in OWL. the locality-based modules are larger. even if large. the modules we obtain using our algorithm are significantly smaller than the size of the input ontology. the ontology is heavily underspecified. and the average size of the modules is 1/10 of the size of the largest module. For NCI. are simple in structure and logical expressivity. This can be explained by the fact that these ontologies. we have a more detailed analysis of the modules for NCI. The table provides the size of the largest module and the average size of the modules obtained using each of these algorithms. the algorithms A1 and A2 retrieve the whole ontology as the module for each input signature. GALEN and SNOMED. Horrocks. in SNOMED. SNOMED. In Figure 6. Kazakov. the largest locality-based module obtained is approximately 0. Here. DOLCE. which yields small modules. SNOMED. In the table. the largest module in GALEN-Small is 1/10 of the size of the ontology. For example. GALEN-Small and GALEN-Full. For GALEN. SWEET and DOLCE. For DOLCE. GO and SUMO. the modules are even bigger—1/3 of the size of the ontology—which indicates that the dependencies between the different concepts in the ontology are very strong and complicated.

all its (entailed) super-concepts in O. this module contains all the sub-concepts of Micro Anatomy in the previously extracted module. the fragment and the rest of the ontology have disjoint signatures. The considerable differences in the size of the modules extracted by algorithms A1 – A3 are due to the fact that these algorithms extract modules according to different requirements. In order to explore the use of our results for ontology design and analysis. Recall that. 16. The plots thus give an idea of the distribution for the sizes of the different modules. By Corollary 47. 2006). we have obtained a large number of small modules and a significant number of very big ones. The tool can be downloaded at http://code. NCI and GALEN-Small. If one wants to refine the module by only including the information in the ontology necessary to ˜ r←∆×∆ entail the “path” from DNA Sequence to Micro Anatomy. The user interface of Swoop allows for the selection of an input signature and the retrieval of the corresponding module. This cycle does not occur in the simplified version of GALEN and thus we obtain a smooth distribution for that case. is an anatomy kind. The presence of this cycle can be spotted more clearly in Figure 6(f). Since our algorithm is based on weaker requirements. For SNOMED. Algorithm A2 extracts a fragment of the ontology that is a module for the input atomic concept and is additionally semantically separated from the rest of the ontology: no entailment between an atomic concept in the module and an atomic concept not in the module should hold in the original ontology. in the end. at least.16 Figure 7a shows the classification of the concepts DNA Sequence and Microanatomy in ˜ r←∅ the NCI ontology.. one could extract the OA←∆ (·)based module for Micro Anatomy in the fragment 7b. This abrupt distribution indicates the presence of a big cycle of dependencies in the ontology.google. Algorithm A1 produces a fragment of the ontology that contains the input atomic concept and is syntactically separated from the rest of the axioms—that is. the figure shows that there is a large number of modules of size in between 6515 and 6535 concepts. This suggests that the knowledge in NCI about the particular concept DNA Sequence is very shallow in the sense that NCI only “knows” that a DNA Sequence is a macromolecular structure. The resulting module is shown in Figure 7b. but no medium-sized modules in-between. ˜ r←∅ as obtained in Swoop. in Figure 6(e) we can see that the distribution for the “small” modules in GALEN-Full is smooth and much more similar to the one for the simplified version of GALEN.com/p/swoop/ 315 . according to Corollary 47. What is surprising is that the difference in the size of modules is so significant. a OA←∅ (·)-based module for an atomic concept contains all necessary axioms for. for GALEN-Full. we have integrated our algorithm for extracting modules in the ontology editor Swoop (Kalyanpur et al. Figure 7b shows the minimal OA←∅ (·)-based module for DNA Sequence. we can observe that the size of the modules follows a smooth distribution. it is to be expected that it extracts smaller modules. In contrast. which. In fact. In contrast. Figure 7 shows that this module contains only the concepts in the “path” from DNA Sequence to the top level concept Anatomy Kind.Modular Reuse of Ontologies: Theory and Practice tained modules and the Y-axis the number of modules whose size is within the given range. Thus this module can be seen as the “upper ontology” for A in O.

AAAI. GA. Y.. Cambridge University Press. pp.. O. we would like to study other approximations which can produce small modules in complex ontologies like GALEN. (Eds. Using existing results (Lutz et al. 1. pp.. Y. Cuenca Grau. L. Second printing (Universitext) 2001. & Kutz. Calvanese. (2007). E. which can be computed in practice. E. (2006). F. & Sirin. D. a signature.. Cuenca Grau. (2006a). & Gurevich. & Honavar. UK. (2007).t. (2005).). 4 (1). USA. In IJCAI-07. C. V. Data Semantics. Brandt. I. A. which characterizes any sufficiency condition for safety of an ontology w. References Baader. January 2007. Vol.r. B. we have shown that these problems are undecidable or algorithmically unsolvable for the logic underlying OWL DL. India. 2006. & Kalyanpur. Cuenca Grau. We have dealt with these problems by defining sufficient conditions for a solution to exist.. India. (2003). In addition. Web Sem. McGuinness. Parsia.. Baader. and Applications. pp. A. In Proceedings of the 5th International Semantic Web Conference (ISWC-2006). B¨rger.. 2005. Implementation. Professional Book Center. (1997). & Sattler 9. For future work.. S. Caragea... Borgida.. In Proceedings of the Tenth International Conference on Principles of Knowledge 316 . We have established the relationships between these problems and studied their computability. We have introduced and studied the notion of a safety class. D. 153–184. 72–86. The Classical Decision Problem. & Sattler. E. Athens. Modularity and web ontologies. July 30-August 5. Parsia. A logical framework for modularity of ontologies. B. In IJCAI-05. Conclusion In this paper.. P. Combining OWL ontologies using Econnections. Bao. 2007) and the results obtained in Section 4.. & Patel-Schneider.. 4273 of Lecture Notes in Computer Science.. and exploit modules to optimize ontology reasoning.. B. Horrocks. (2006b). D. Horrocks.Cuenca Grau. J. E. On the semantics of linking and importing in modular ontologies.. November 5-9.. In Proceedings of the Workshop on Semantic Web for Collaborative Knowledge Acquisition. J. Nardi. Cuenca Grau. Perspectives o a of Mathematical Logic.. 298– 304. & Lutz. The Description Logic Handbook: Theory. D. Edinburgh. F. 40–59. Gr¨del. Proceedings of the Twentieth International Joint Conference on Artificial Intelligence. Sirin. Distributed description logics: Assimilating information from peer sources. we have proposed a set of reasoning problems that are relevant for ontology reuse. & Serafini.. Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence. 2007.. F. B. Modular ontology languages revisited. Kazakov.. we have used safety classes to extract modules from ontologies. Springer-Verlag. L. J. Scotland. Pushing the EL envelope. (2003). U. Hyderabad. B. B. January 5. Kazakov. Hyderabad. 364–370.

Springer. Parsia. F. F. & Sattler. Lutz. B. Edinburgh. June 2-5. R. (2007). UK. F. & Sattler. M¨ller.. & Wolter. Gardiner. 448–453.. Canada.. 4 (2). 2006. Hyderabad. Just the right amount: extracting modules from ontologies. Cuenca Grau. Horrocks. AAAI Press. pp. J. Lutz. J. & van Harmelen. Horrocks. In Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI-05). Lake District. C. 144–153. A... (2003). O. pp. B. J. SIGMOD Record. B. I. Description logic systems. UK. pp.. S.. Walther. Swoop: A web ontology editing browser. GA.Modular Reuse of Ontologies: Theory and Practice Representation and Reasoning (KR-2006). 84–99. 654–667. In Proceedings of the 16th International Conference on World Wide Web (WWW-2007). 2005.D. Univesit¨t Karlsruhe (TH).. pp. V. Lake District of the United Kingdom. 4603 of Lecture Notes in Computer Science. C. Professional Book Center. 2006. Springer. F. (2006). The Knowledge Engineering Review.. In Proceedings of the 2006 International Workshop on Description Logics (DL-2006). I. Ph. Vol.. In Proceedings of the Tenth International Conference on Principles of Knowledge Representation and Reasoning (KR-2006). C. pp.. Vol. Cuenca Grau. thesis. 189 of CEUR Workshop Proceedings.. 2006. 2007.. P. & Haarslev. chap. & Horrocks. 8. India. Germany. Will my ontologies fit together?. Alberta. Bremen. F. 33 (4). In Proceedings of the 21st International Conference on Automated Deduction (CADE-21). & Sattler. Framework for an automated comparison of description logic reasoners. 282–305. & Hendler. November 5-9. Web Sem. (2006). E. Windermere. Cambridge University Press. (2007). In Proceedings of the 5th International Semantic Web Conference (ISWC-2006). Reasoning in Description Logics using Resolution and Deductive Databases. Germany. In The Description Logic o Handbook. Lutz. (2003). In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI-07). May 8-12.. & Wolter. T. Tsarkov. July 17-20. 65–70. & Wolter. Y. 1–31. pp. Did I damage my ontology? a case for conservative extensions in description logics. Kalyanpur. I.. Conservative extensions in the lightweight description logic EL. 2007. January 2007. 187–197. M. (2003).. U. USA. ACM. Horrocks. 18. Y. & Schorlemmer. I. Conservative extensions in expressive description logics. Vol. N. (2004a). Karlsruhe. CEUR-WS. (2006). pp.. Kutz. Scotland. 4273 of Lecture Notes in Computer Science. (2005). Cuenca Grau. Web Sem. July 30-August 5.. D. From SHIQ and RDF to OWL: the making of a web ontology language. Kalfoglou.. a Noy. I.. 7–26. Banff.. Semantic integration: A survey of ontology-based approaches. Horrocks. AAAI. 453–459. U. U. Athens. (2007). A tableaux decision procedure for SHOIQ. 317 . (2006). Ghilardi.June 1. Patel-Schneider. (2006). F.. 198–209. 2006. AAAI Press.. pp. B. Lake District of the United Kingdom. Kazakov. 717–726. Ontology mapping: The state of the art. June 2-5. A. D.. Motik. B. Sirin.org. 1 (1). May 30 .

Horrocks. P. 365–384. M.. (2004).. (2006).org. & Klein. Elsevier. G. Noy. Whistler. & Studer. Japan. 2004. classification and use. Artif. pp. W3C Recommendation. 6 (59). ACM. pp. pp. WA. & Musen. Proceedings. E. (1999). CEUR-WS. 199–217. Structure-based partitioning of large class hierarchies. 12.Cuenca Grau. L. Tsarkov. 104 of CEUR Workshop Proceedings. & Smolka... (2004b). In Staab. Scotland. May 23-26. (JAIR). & Horrocks. 318 . 2004). (2004). 48 (1). pp.). J. In IMIA WG6 Workshop. (2004). The PROMPT suite: Interactive tools for ontology mapping and merging. Vol. 2006. Canada. Tobies.. F. (2000). & Parsia. (2004). Tools for mapping and merging ontologies. UK. P. Sirin. & Sattler Noy. 2004. FaCT++ description logic reasoner: System description. 289–303. (2003). Stuckenschmidt. Res. (Eds. 13–22. In Proceedings of the Third International Joint Conference on Automated Reasoning (IJCAR 2006). Seidenberg. Artificial Intelligence. Edinburgh.. Int. Web ontology language OWL Abstract Syntax and Semantics. N. Springer. November 7-11. N. 4130 of Lecture Notes in Computer Science. In Proceedings of the Third International Semantic Web Conference (ISWC2004). A.. Elsevier. & Horrocks. USA. Vol. Rector.. & Rogers. Staab. (2006). D. June 6-8.. In Proceedings of the 2004 International Workshop on Description Logics (DL2004). Springer. M. R. Vol. & Studer (Staab & Studer. The complexity of reasoning with cardinality restrictions and nominals in expressive description logics. & Rector. Hayes.. (1991). J. August 17-20. In Proceedings of the 15th international conference on World Wide Web (WWW-2006). Springer. 1–26. Hiroshima. Attributive concept descriptions with complements. Kazakov. 3298 of Lecture Notes in Computer Science. Patel-Schneider. Web ontology segmentation: analysis. International Handbooks on Information Systems. British Columbia. Handbook on Ontologies. S. B. H. Intell. Pellet system description. I. I. Ontological issues in using a description logic to represent medical concepts: Experience from GALEN. 2006. Seattle. Journal of Human-Computer Studies. A. J. S. M. Schmidt-Schauß. 292–297.
