2 An Introduction to the Stable and Well-Founded Semantics of Logic Programs

Mirosław Truszczyński

This chapter provides a brief introduction to two main semantics of logic pro-
grams with negation, the stable-model semantics of Gelfond and Lifschitz, and the
well-founded semantics of Van Gelder, Ross, and Schlipf. We present definitions,
introduce basic results, and relate the two semantics to each other. We restrict
attention to the syntax of normal logic programs and focus on classical results.
However, throughout the chapter and in concluding remarks we briefly discuss
generalizations of the syntax and extensions of the semantics, and mention several
recent developments.

2.1 Introduction

The roots of logic programming can be traced back to efforts to build resolution-
based automated theorem provers in the mid-1960s [Robinson 1965]. A realization
that resolution can turn Horn theories into programs came about in the early 1970s
and rested on the foundational work of Kowalski and Kuehner [1971] and Kowalski
[1974], as well as the implementation effort of Colmerauer and his research group,
in which the Prolog programming language was developed [Colmerauer et al. 1973].
Negation was present in Prolog from the very beginning. Its meaning was spec-
ified operationally through its implementation. However, for almost two decades,
finding a satisfactory declarative account of negation was elusive. The first signifi-
cant progress was obtained by Clark, who proposed reading programs as definitions
and formalized that reading by means of the program completion [Clark 1978]. A dif-
ferent plan of attack was developed a few years later by Apt, Blair, and Walker [1988]
and, independently, by Przymusinski [1988a, 1988b]. They introduced a large and
natural class of programs with negation, called stratified programs, and showed
that the meaning of a stratified program is captured by a certain well-motivated
Herbrand model. Przymusinski [1988a] called this model perfect. Soon thereafter,
Gelfond and Lifschitz [1988] proposed the stable-model semantics and Van Gelder
et al. [1988, 1991] the well-founded semantics. The stable-model semantics was
strongly influenced by research in knowledge representation concerned with the
notion of nonmonotonic reasoning, and built on the semantics of default logic by
Reiter [1980] and autoepistemic logic by Moore [1985]. The well-founded semantics
followed the query-evaluation paradigm of Prolog but framed it in a three-valued
setting. Both semantics were heavily influenced by the perfect-model semantics and
can be regarded as its generalizations to the class of all programs. Moreover, despite
some fundamental differences, they also show strong and interesting connections
to the completion semantics by Clark.
The stable-model and the well-founded semantics have had major implications
on the fields of logic programming and knowledge representation. Since their in-
ception, they fueled research in these two fields and gave rise to fascinating theo-
retical results, implementations of declarative programming languages, and suc-
cessful applications. In particular, the answer-set programming paradigm [Marek
and Truszczyński 1999, Niemelä 1999, Brewka et al. 2011] and its modern imple-
mentations trace their roots to the stable-model semantics [Gebser et al. 2012],
while some closely related declarative programming systems grew out of general-
izations of the well-founded semantics [Denecker 2009]. The latter underlies also
a successful Prolog descendant, the XSB system [Sagonas et al. 1994], and several
other systems we mention later on.
In this tutorial presentation, we introduce the two semantics and show that,
despite their differences, they are closely related. Our goal is to present basic prop-
erties of the stable-model and the well-founded semantics, focusing on the most
significant lines of research. After introducing the terminology and the most es-
sential preliminaries, we start our presentation with the case of Horn programs
(Section 2.3). In Section 2.4 we discuss several examples that point out problems
that arise for programs with negation, and informally suggest ways in which these
problems could be addressed. We follow up with a formal discussion of the stable-
model semantics as an extension of the least-model semantics of Horn programs to
the case of programs with negation (Section 2.5). The key topics we discuss are strat-
ification and program splitting, supported models, completion, tight programs,
loops and the Loop Theorem, and strong equivalence of programs. We then change
gears and move into the realm of four-valued Herbrand interpretations. While
there, we define partial stable and partial supported models, and the most dis-
tinguished representatives of the two classes of “partial” models: the well-founded
and the Kripke-Kleene models, respectively (Section 2.6). We conclude with brief
closing comments.

2.2 Terminology, Notation, and Other Preliminaries

Logic Programming
The language of logic programming is determined by a countable vocabulary σ
consisting of function and predicate symbols, each assigned a non-negative arity.
Function symbols of arity 0 are called constant symbols. We assume that σ contains
at least one constant symbol. Expressions of the language may also contain variable
symbols. They come from a fixed infinite countable set Var that does not depend
on σ . In other words, the same set of variables is used with every vocabulary.
The terms of the language are defined in the same way as in first-order logic:
all constant and variable symbols are terms and, if t1 , . . . , tk are terms and f ∈ σ is
a k-ary function symbol, then f (t1 , . . . , tk ) is also a term.
An atom is an expression p(t1 , . . . , tk ), where p ∈ σ is a k-ary predicate symbol
and t1 , . . . , tk are terms. Thus, atoms in logic programming are of the same form
as atomic formulas in the language of first-order logic.
Terms and atoms that have no occurrences of variable symbols are ground. The
Herbrand universe comprises all ground terms of the language and the Herbrand
base comprises all ground atoms. One of the connectives in the language of logic
programming is the negation connective not. Negation is applied only to atoms.
Atoms and negated atoms are called literals. Rules are expressions constructed
from literals by means of the rule connective “←” and the conjunction connective
“,”. More precisely, a rule (sometimes called a clause) is an expression of the form

a ← b1 , . . . , bm , not c1 , . . . , not cn , (2.1)

where a and all bi ’s and cj ’s are atoms. The atom a is the head of the rule (2.1)
and the conjunction (list) b1 , . . . , bm , not c1 , . . . , not cn of literals is its body. If r
denotes a rule, we write H (r) and B(r) for the head and the body of r, respectively.
We extend this notation to programs and write H (P ) and B(P ) for the sets of heads
and bodies of rules in a program P .
We often write rules as a ← B, where a is an atom and B is a list of liter-
als. For every list B = b1 , . . . , bm , not c1 , . . . , not cn of literals we define B + =
{b1 , . . . , bm} and B − = {c1 , . . . , cn}, and we often specify a rule a ← B as a ←
B + , not B −. We note a slight abuse of the notation here. The expression in the
body, B + , not B −, is not a list but a pair of sets. Nevertheless, as all semantics of
logic programs we consider in this chapter are insensitive to the order of literals
in the bodies of rules, the notation gives all essential information about the rule it
describes.
If n = 0, rule (2.1) is a Horn rule and if m + n = 0, a fact. In the latter case, we
omit ‘←’ from the notation, that is, we write a instead of a ←. A logic program (or
just a program) is a collection of rules. A Horn program is a program consisting of
Horn rules.
A program can be considered within any language of logic programming whose
vocabulary σ contains all constant, function, and predicate symbols that occur
in the program. However, it is convenient to see a program as an element of the
minimal language within which it can be studied, the one determined by the set
of the symbols the program contains. For a program P , we denote by HU(P ) and
HB(P ) the Herbrand universe and the Herbrand base of this most economical
language, and we refer to them as the Herbrand universe and the Herbrand base
of P . We restrict attention to programs that contain at least one constant symbol
to make sure that the requirement we imposed earlier on the language of logic
programming is satisfied.
A ground instance of a rule r is any rule that can be obtained by consistently
substituting its variables with ground terms. The grounding of a program P , de-
noted by gr(P ), is the program consisting of all ground instances of rules in P .
The grounding of P is determined by the language within which we consider P .
However, unless stated otherwise, by the grounding of a program P we mean the
grounding of P in the language determined by the symbols of P . That is, we instan-
tiate variables to terms in the Herbrand universe of P , HU(P ).
To illustrate, let P be the program

even(0)
even(s 2(X)) ← even(X) (2.2)
odd(X) ← not even(X).

The language of P is determined by the vocabulary consisting of a constant symbol 0, a unary function symbol s, and unary predicate symbols even and odd.1 There
is one fact in P , even(0). The last two rules contain occurrences of a variable sym-
bol X. The last rule contains an occurrence of not. The Herbrand universe of P is
given by

HU(P ) = {0, s(0), s 2(0), . . .}

and the Herbrand base by

HB(P ) = {even(s i (0)) | i = 0, 1, . . .} ∪ {odd(s i (0)) | i = 0, 1, . . .}.

The program
gr(P ) = {even(0)}

∪ {even(s i+2(0)) ← even(s i (0)) | i = 0, 1, . . .}

∪ {odd(s i (0)) ← not even(s i (0)) | i = 0, 1, . . .}


is the grounding of P .
The program P is not a Horn program as its third rule is not a Horn rule.
However, the first two rules are Horn rules and the program consisting of these
two rules is a Horn program.
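
To make the grounding construction concrete, here is a small Python sketch that enumerates a finite portion of gr(P ) for the program (2.2). Since HU(P ) is infinite, the sketch grounds only up to a chosen term depth; the representation of terms, atoms, and rules (plain strings and tuples) is an assumption made here for illustration, not notation used in this chapter.

```python
# A sketch of grounding program (2.2) up to a bounded term depth.
# Terms are strings such as "0", "s(0)", "s(s(0))"; atoms are (predicate, term) pairs.

def herbrand_universe(depth):
    """Ground terms 0, s(0), ..., up to the given nesting depth of s."""
    terms = ["0"]
    for _ in range(depth):
        terms.append("s(" + terms[-1] + ")")
    return terms

def ground_program(depth):
    """Ground instances of the rules of program (2.2) over the bounded universe."""
    rules = [(("even", "0"), [], [])]          # fact: even(0)
    for t in herbrand_universe(depth):
        s2t = "s(s(" + t + "))"
        # even(s^2(X)) <- even(X)
        rules.append((("even", s2t), [("even", t)], []))
        # odd(X) <- not even(X)
        rules.append((("odd", t), [], [("even", t)]))
    return rules  # each rule: (head, positive body, negative body)

for head, pos, neg in ground_program(2):
    print(head, "<-", pos, [("not", a) for a in neg])
```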

Lattices, Operators, Pre-Fixpoints and Fixpoints, Monotonicity, and Antimonotonicity
A lattice is a structure ⟨L, ≤⟩, where L is a non-empty set, and ≤ is a partial order on
L such that every two elements x , y ∈ L have the least upper bound and the greatest
lower bound. We often write L to represent the lattice ⟨L, ≤⟩, especially when ≤ is
understood. A lattice is complete if every (even empty or infinite) set X of its elements
has the least upper bound and the greatest lower bound, denoted lub(X) and glb(X),
respectively. An operator on a lattice ⟨L, ≤⟩ (or simply an operator on L) is any
mapping O : L → L. An operator O on a lattice L is monotone if for every a, b ∈ L
such that a ≤ b, O(a) ≤ O(b). An element a ∈ L is a pre-fixpoint of an operator O on
a lattice L if O(a) ≤ a; it is a fixpoint of O if O(a) = a. The fundamental property of
monotone operators on complete lattices is given by the Knaster-Tarski theorem
[Tarski 1955].

1. As usual, s i (t) stands for s(s(. . . s(t) . . .)), with i occurrences of s. In particular, s 0(t) stands for t.

Theorem 2.1 Knaster-Tarski. Let O be a monotone operator on a complete lattice ⟨L, ≤⟩. Then
O has a least pre-fixpoint and a least fixpoint, and the two coincide.
Proof Let us define P = {x ∈ L: O(x) ≤ x}. In other words, P is the set of all pre-fixpoints
of O. We note that P ≠ ∅. This observation follows as a corollary from the argu-
ment below. However, it can also be seen in a more direct way. Indeed, since L is
complete, lub(L) is well defined. Clearly, O(lub(L)) ∈ L and so, O(lub(L)) ≤ lub(L).
Thus, lub(L) ∈ P .
Let p be the greatest lower bound of P (p is well-defined by the completeness of
L). For every y ∈ P , p ≤ y. Thus, by the monotonicity of O and the definition of P ,
O(p) ≤ O(y) ≤ y. It follows that O(p) is a lower bound for P . Since p is the greatest
lower bound for P , O(p) ≤ p. Thus, p ∈ P or, equivalently, p is a pre-fixpoint of O.
Consequently, p is the least pre-fixpoint of O. Moreover, using the monotonicity of
O again, we obtain O(O(p)) ≤ O(p); that is, O(p) ∈ P . Thus, p ≤ O(p). It follows
that O(p) = p. That is, p is a fixpoint of O and, since fixpoints are pre-fixpoints, the
least fixpoint of O.
The proof we presented is non-constructive. But the Knaster-Tarski theorem
has also a simple constructive proof. It makes use of ordinal numbers, transfinite
sequences, and transfinite induction. Reviewing these concepts goes beyond the
scope of this text. Instead, we refer to the monograph by Jech [2003] for an elegant
discussion (a brief overview can also be found in the classic logic programming text
by Lloyd [1984]).
In a constructive proof of the Knaster-Tarski theorem, we define a certain trans-
finite sequence of lattice elements (a sequence indexed by ordinal numbers). The
sequence starts with the least element of the lattice. Each element of the sequence
that is labeled with a successor ordinal is obtained from the previous one by applying O. Each
element of the sequence that is labeled with a limit ordinal is the least upper bound of all
previous elements of the sequence. Importantly, one can show that some element of the
sequence is a fixpoint of O and, in fact, the least fixpoint (and the least pre-fixpoint).
The least ordinal that appears as the label of such an element represents the “number” of
steps in which the fixpoint is reached. We will encounter a specific example of this
construction later on.
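
On a finite lattice the transfinite sequence collapses to an ordinary finite iteration, which can be sketched directly. The code below iterates a monotone operator on the powerset lattice of a small finite set; the particular operator is an ad hoc example chosen here for illustration, not one used later in the chapter.

```python
# Iterating a monotone operator on the (finite) lattice of subsets of a set
# until the least fixpoint is reached; a sketch of the constructive argument.

def least_fixpoint(operator, bottom=frozenset()):
    """Return the least fixpoint of a monotone operator on a finite powerset lattice."""
    current = bottom
    while True:
        nxt = operator(current)
        if nxt == current:
            return current
        current = nxt

# Example: a monotone operator on subsets of {0,...,9} that adds 0 and closes under "+2".
example = lambda X: frozenset({0} | {x + 2 for x in X if x + 2 < 10}) | X
print(sorted(least_fixpoint(example)))   # [0, 2, 4, 6, 8]
```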
An operator O on a complete lattice L is finitary if for every countable sequence
a0 ≤ a1 ≤ . . . of elements from L

O(lub({a0 , a1 , . . .})) ≤ lub({O(a0), O(a1), . . .}).

Operators that are monotone and finitary are continuous. The operators we will
consider in this presentation are continuous. One can show that for continuous
operators the inequality above becomes an equality, that is,

O(lub({a0 , a1 , . . .})) = lub({O(a0), O(a1), . . .}).

Moreover, the least fixpoint (whose existence is guaranteed by the monotonicity of a continuous operator) is reached in ω steps, where ω is the least infinite ordinal number.
Similarly to monotone operators, we define an operator O on a lattice L to be
antimonotone if for every a, b ∈ L such that a ≤ b, O(b) ≤ O(a). An easy to prove yet
important property of an antimonotone operator O is that its “square” O 2, defined
by O 2(x) = O(O(x)), is monotone.

Theorem 2.2 Let O be an antimonotone operator on a lattice ⟨L, ≤⟩. Then the operator O 2 on
⟨L, ≤⟩ is monotone.

Proof Let a, b ∈ L be such that a ≤ b. Since O is antimonotone, O(b) ≤ O(a). Using antimonotonicity of O again, we get O 2(a) = O(O(a)) ≤ O(O(b)) = O 2(b).

Herbrand Interpretations
The subsets of the Herbrand base of a program P are in a one-to-one and onto cor-
respondence with the standard first-order logic Herbrand interpretations of the
vocabulary of P [Doets 1994]. This is because an Herbrand interpretation is uniquely
determined by the set of ground atoms that are true in it. In fact, that set is often
taken to stand for the corresponding Herbrand interpretation. We will follow this
convention and refer to subsets of HB(P ) as Herbrand interpretations (of the lan-
guage determined by P ). Below, we express the standard definition of the satis-
faction relation for Herbrand interpretations under the assumption that Herbrand
interpretations are sets of ground atoms, and extend it to rules and programs.

Definition 2.1 Let I be an Herbrand interpretation. The interpretation I satisfies a ground atom
a if a ∈ I , and it satisfies a ground literal not a if a ∉ I . Furthermore, I satisfies
a conjunction B = b1 , . . . , bm , not c1 , . . . , not cn of ground literals if B + ⊆ I and
B − ∩ I = ∅.
The interpretation I satisfies a ground rule a ← B, if I does not satisfy B or
I satisfies a. Further, I satisfies a rule r, if I satisfies every ground instance of r.
Finally, I satisfies a program P , if I satisfies every rule in P or, equivalently, if I
satisfies every ground rule in gr(P ).
Instead of “satisfies” we will often say “is a model of.” We write I |= E when I
satisfies an expression (an atom, a literal, a rule, or a program) E. Otherwise, we
write I ⊭ E.

The set All(P ) of all Herbrand interpretations of a program P (that is, according
to our convention, the collection of all subsets of HB(P )), together with the inclu-
sion relation, forms a complete lattice. Similarly, the set All⁴(P ) = All(P ) × All(P )
(we explain the notation All⁴(P ) in the next paragraph) of pairs of Herbrand inter-
pretations forms a complete lattice with respect to the precision ordering ≤p defined
as follows:

(I , J ) ≤p (I′ , J′) if and only if I ⊆ I′ and J′ ⊆ J ,

where I , J , I′ , J′ ⊆ HB(P ) [Denecker et al. 2000]. The term “precision ordering” is
motivated by the “approximation” intuition. Namely, if I , J , and K are two-valued
interpretations such that I ⊆ K ⊆ J , then the pair (I , J ) can be viewed as an ap-
proximation of K, where I underestimates the set of atoms that are true in K and
J overestimates it. Under this intuition, if (I , J ) ≤p (I′ , J′) and (I , J ) and (I′ , J′)
both approximate K, then (I′ , J′) provides a higher-precision approximation to K
than (I , J ) does. Hence, the term precision ordering for ≤p .2
The notation All⁴(P ) is motivated by the observation that pairs (I , J ) of interpre-
tations from All(P ) can be viewed as 4-valued interpretations. Indeed, let us think
of ground atoms in I as certain and those in J as possible. The following definition
is then quite intuitive. A ground atom A is true in (I , J ) if A ∈ I ∩ J (if it is certain
and possible), unknown if A ∈ J \ I (if it is possible but not certain), inconsistent
if A ∈ I \ J (if it is certain but not possible), and false if A ∉ I ∪ J (if it is neither
certain nor possible).
All semantics we consider in this chapter have elegant descriptions in terms of
fixpoints of monotone operators on the lattices All(P ) and All⁴(P ).

The One-Step Provability Operator


With each program P we can associate its one-step provability operator TP intro-
duced by van Emden and Kowalski [1976]. It maps subsets of HB(P ) to subsets of
HB(P ). We will view it as an operator on the lattice All(P ). Thus, let P be a program
and let I be a subset of HB(P ). We define

TP (I ) = {a | a ← B ∈ gr(P ), I |= B}. (2.3)

It is clear from the definition that TP (I ) is indeed a subset of HB(P ), that is, that
TP is an operator on All(P ).
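
The definition (2.3) translates directly into code for finite ground programs. The sketch below uses an ad hoc representation of rules as triples (head, positive body, negative body); it is an illustration of the definition, not an implementation used by any system discussed here.

```python
# A direct transcription of definition (2.3): the one-step provability operator
# for a finite ground program. Rules are triples (head, positive_body, negative_body).

def t_p(rules, interpretation):
    """Heads of rules whose bodies are satisfied by the given interpretation."""
    derived = set()
    for head, pos, neg in rules:
        if set(pos) <= interpretation and not (set(neg) & interpretation):
            derived.add(head)
    return derived

# A small ad hoc program:  p <- ;  q <- p, not r.
P = [("p", [], []), ("q", ["p"], ["r"])]
print(t_p(P, set()))     # {'p'}: only the fact applies to the empty interpretation
print(t_p(P, {"p"}))     # {'p', 'q'}: p holds and r is absent, so both rules fire
```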

2. We point out that the intuition no longer applies if the comparison involves pairs (I , J ),
where I ⊈ J , as such pairs do not approximate any interpretation. Nevertheless, we use the term
“precision ordering” whenever any two pairs of interpretations are compared.

Herbrand models of a program can be described as pre-fixpoints of its one-step provability operator.

Theorem 2.3 Let P be a logic program. An Herbrand interpretation I ⊆ HB(P ) is an Herbrand model of P if and only if I is a pre-fixpoint of TP .

Proof Let us assume that I is an Herbrand model of P and let a ∈ TP (I ). By the definition
of the operator TP , there is a rule a ← B ∈ gr(P ) such that I |= B. Since I |= a ← B,
a ∈ I . Hence, TP (I ) ⊆ I .
Conversely, let us suppose that TP (I ) ⊆ I and let a ← B ∈ gr(P ). If I |= B then,
by definition, a ∈ TP (I ). Thus, a ∈ I . It follows that I |= a ← B and, as a ← B is an
arbitrary rule in gr(P ), I |= P .

In general, the one-step provability operator has no monotonicity properties, due to the fact that negated atoms may appear in the bodies of program rules.
However, for Horn programs (only non-negated occurrences of atoms in the bodies
of rules) we have the following property.

Theorem 2.4 If P is a Horn program, the operator TP is a monotone and finitary operator on the
lattice All(P ).

Proof Let I ⊆ I′ ⊆ HB(P ) and let a ∈ TP (I ). It follows that there is a rule a ← B in gr(P )
such that I |= B. Since P is a Horn program, B − = ∅. Thus, the condition I |= B is
equivalent to B + ⊆ I . By the assumption, B + ⊆ I′ and so, I′ |= B. This implies that
a ∈ TP (I′). Hence, TP (I ) ⊆ TP (I′) follows.

Next, let us consider a sequence A0 ⊆ A1 ⊆ . . . ⊆ HB(P ). If a ∈ TP (⋃k≥0 Ak ), then
there is a rule a ← B in gr(P ) such that B + ⊆ ⋃k≥0 Ak . Since B + is finite, there is n
such that B + ⊆ ⋃k≤n Ak = An . Consequently, a ∈ TP (An ). Thus, a ∈ ⋃k≥0 TP (Ak ). It
follows that TP is finitary.

Following Fitting [2002], we now extend the concept of one-step provability
to the four-valued setting. Specifically, we define a two-input one-step provability
operator ΨP : All⁴(P ) → All(P ), and a four-valued one-step provability operator
𝒯P : All⁴(P ) → All⁴(P ). To this end, for (I , J ) ∈ All⁴(P ), we set

ΨP (I , J ) = {a | a ← B ∈ gr(P ), B + ⊆ I , and B − ∩ J = ∅} (2.4)

and

𝒯P (I , J ) = (ΨP (I , J ), ΨP (J , I )). (2.5)

If we view an interpretation I as representing what is surely true, and J as what is
possible (possibly true), ΨP (I , J ) consists of atoms that are derivable in one step
by a rule in gr(P ) that surely applies in (I , J ) or, more formally, has all literals in its
body surely true in (I , J ) (positive atoms true, that is, in I , and negated atoms
surely false, that is, not in J ). Similarly, we can see ΨP (J , I ) as the set of atoms that
are derivable in one step by a rule that possibly applies in (I , J ) or, more formally,
has all literals in its body possible in (I , J ) (positive atoms in J , and negated atoms
possibly false, that is, not in I ). Thus, the operator 𝒯P can be seen as a
four-valued counterpart to TP .
Below, we often write ΨP (. , J ) to represent the operator on All(P ) that to any
interpretation I ∈ All(P ) assigns the interpretation ΨP (I , J ). We use the notation
ΨP (I , .) in a similar way.
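
For finite ground programs, the operators (2.4) and (2.5) can be sketched as follows, reusing the triple representation of rules from the earlier sketch; the function names psi_p and t4_p are ours, chosen only for illustration.

```python
# Sketches of the operators defined in (2.4) and (2.5) for finite ground programs.
# Rules are triples (head, positive_body, negative_body); I and J are sets of atoms.

def psi_p(rules, certain, possible):
    """Atoms derivable in one step when positive body atoms are read in `certain`
    and negated atoms are checked against `possible` (definition (2.4))."""
    return {head for head, pos, neg in rules
            if set(pos) <= certain and not (set(neg) & possible)}

def t4_p(rules, pair):
    """The four-valued one-step operator (2.5) on pairs (I, J)."""
    certain, possible = pair
    return (psi_p(rules, certain, possible), psi_p(rules, possible, certain))

# p <- ;  q <- p, not r.
P = [("p", [], []), ("q", ["p"], ["r"])]
print(t4_p(P, (set(), {"p", "q", "r"})))   # ({'p'}, {'p', 'q'})
```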
In the four-valued setting, we have several general monotonicity properties for
operators. These properties underlie all major semantics of logic programs.

Theorem 2.5 For every logic program P , the operator ΨP (. , J ) on All(P ) is monotone (with
respect to inclusion), the operator ΨP (I , .) on All(P ) is antimonotone (with respect
to inclusion), and the operator 𝒯P on All⁴(P ) is monotone (with respect to ≤p ).

Proof Let I ⊆ I′ ⊆ HB(P ) and J′ ⊆ J ⊆ HB(P ). The inclusions ΨP (I , J ) ⊆ ΨP (I′ , J ) and
ΨP (I , J ) ⊆ ΨP (I , J′) follow directly from the definition of ΨP by means of a rea-
soning similar to that we used in the proof of Theorem 2.4. The last assertion is
a direct corollary from these two properties. Let (I , J ) ≤p (I′ , J′). It follows that
I ⊆ I′ and J′ ⊆ J . Thus, we have

ΨP (I , J ) ⊆ ΨP (I′ , J ) ⊆ ΨP (I′ , J′)

and

ΨP (J′ , I′) ⊆ ΨP (J′ , I ) ⊆ ΨP (J , I ).

Consequently,

𝒯P (I , J ) = (ΨP (I , J ), ΨP (J , I )) ≤p (ΨP (I′ , J′), ΨP (J′ , I′)) = 𝒯P (I′ , J′).

Formal and Informal Reading of a Program


Any logic-based language for modeling and solving computational problems should preferably satisfy two general desiderata.

1. Each expression in the language must have a formal semantics providing a precise specification of its meaning.
2. Each expression in the language must also have an intuitive reading giving it a meaning corresponding to the one provided by the formal semantics.

The need for formal semantics is obvious. Expressions in the language (programs)
must have a precise unambiguous meaning to allow analyzing them for correct-
ness, and to serve as a specification for software to compile and execute them. The
second requirement is not absolutely necessary, but it is critical if a formalism is
to be used effectively by programmers. To build programs, programmers must un-
derstand what programs mean. This meaning should be unambiguously suggested
by the informal reading of rules and programs. What this informal reading should
be has been the subject of a considerable debate in the logic programming com-
munity. Denecker and his collaborators provide a comprehensive overview of issues
involved [Denecker et al. 2001, Denecker and Ternovska 2008, Denecker et al. 2012].
They argue that programs should be viewed as representations of informal defini-
tions that serve as restrictions on (Tarskian) possible worlds, and not as epistemic
expressions on beliefs of an agent, a view that influenced Gelfond and Lifschitz
[1988] in their work that led to the stable-model semantics.
We skirt this issue here and simply suggest reading a rule (2.1) as:

assuming that atoms c1 , . . . , cn cannot be established (alternatively, derived or computed), establish atom a if atoms b1 , . . . , bm have already been established.

We note two related difficulties with this reading. First, it is not necessarily clear
when we can assume that an atom “cannot be established.” Second, the statement
“cannot be established” is applied to a single rule but clearly depends on other rules
in the program. Hence, the meaning of program rules depends on the program,
and the meaning of programs is not derived from the meaning of its rules by any
obvious composition operator such as, for instance, conjunction. Nevertheless, the
proposed reading is useful in motivating formal semantics of logic programs and
we will refer to it below.

2.3 The Case of Horn Logic Programs

Since Horn rules do not contain negated atoms ci , we do not need to worry about
the precise meaning of the phrase “assuming that atoms ci cannot be established.”
Thus, for Horn programs the problematic part of the informal reading of rules
disappears and therefore they seem like a good departure point for our discussion.
Let us look at this program:
even(0)
(2.6)
even(s 2(X)) ← even(X).
The first rule has an empty body. Therefore, based on our informal reading of rules
we establish even(0). If in the second rule we replace X with 0, we obtain an instance
of that rule, even(s 2(0)) ← even(0). Since we already established even(0), based on
the informal reading of rules we can now establish even(s 2(0)). Continuing in this
way, we can establish even(s 4(0)), even(s 6(0)), etc. Thus, based on our informal
reading of rules we can establish all atoms in the set {even(0), even(s 2(0)), . . .} and
nothing else. It is worth noting that each atom we could establish is an element of
the Herbrand base, and that all derivations used rules from the grounding of the
program.
Next, let us consider the program

r(1, 2)
r(2, 3)
r(3, 4) (2.7)
tc(X, X)
tc(X, Y ) ← r(X, Z), tc(Z, Y ).

Reasoning as above, we can establish atoms r(1, 2), r(2, 3), and r(3, 4). The case of
the rule tc(X, X) requires some consideration. Establishing an atom tc(X, X), with-
out specifying what X is to be substituted with, is meaningless. Assuming that the
program describes all that is relevant to the problem at hand, the only reasonable
substitutions are those using terms from the Herbrand universe of the program,
in this case, the constants 1, 2, 3, and 4. Thus, the rule tc(X, X) allows us to es-
tablish tc(1, 1), tc(2, 2), tc(3, 3), and tc(4, 4).3 From that point on, new atoms can
only be established by means of instantiations of the last rule. Clearly, “potentially
usable” instantiations arise only when elements of the Herbrand universe, that is,
constants 1, 2, 3, and 4 are substituted for variables X, Y , and Z. Thus, we can es-
tablish tc(1, 2), since r(1, 2) and tc(2, 2) are already established, and tc(2, 3) and
tc(3, 4), by a similar reasoning. Next, we can establish tc(1, 3) and tc(2, 4) and,
finally, tc(1, 4). With that we reach a point where no new atoms can be established.
To summarize, each of the programs (2.6) and (2.7) has a clear meaning given
by a subset of its Herbrand base. That subset is constructed in a series of steps,

3. Rules such as tc(X, X) do not appear in programs arising in practice. The reason is precisely
the ambiguity concerning legal substitutions for X. Typically, instead of the rule tc(X, X) we
would have two rules tc(X, X) ← r(X, Y ) and tc(X, X) ← r(Y , X). We could now produce arbitrary
instantiations of these rules. However, the only “usable” instantiations would be those where X
and Y are replaced with values x and y such that r(x , y) or r(y , x) might possibly be true. Such
values x and y must come from the Herbrand universe of the program. In this way, we explicitly
enforce the principle that the only allowed substitutions are those by elements of the Herbrand
universe, rather than adopt it as a convention.

in which new atoms are derived based on those obtained previously by means of
ground instantiations of rules of the program. In the first case, the result represents
the concept of a non-negative even number and, in the second case, the graph given
by the relation r and its transitive closure given by the relation tc. This suggests that
Horn programs can be interpreted as definitions, an observation that underlies the
definitional reading of programs mentioned above [Denecker et al. 2001, Denecker
and Ternovska 2008, Denecker et al. 2012], as well as earlier extensive uses of Horn
programs as database queries [Ullman 1988, Ceri et al. 1990].
The derivation process used in our two examples can be defined for an arbitrary
Horn program P . Formally, we do this in terms of the one-step provability operator
TP and exploit its monotonicity (in this section we implicitly assume that all pro-
grams we consider are Horn and so Theorem 2.4 applies). Specifically, we construct
a sequence TP0 , TP1 , TP2 , . . . of subsets of HB(P ) by setting

TP0 = ∅

TPk = TP (TPk−1), for k > 0.

Since TP0 = ∅, we clearly have TP0 ⊆ TP1. By the monotonicity of TP , we also have

TP1 = TP (TP0) ⊆ TP (TP1) = TP2 .

Generalizing by induction, we obtain that for every k

TPk ⊆ TPk+1 ,

that is, every set in the sequence contains all elements belonging to the earlier ones.
It should be clear that the sequence TP0 , TP1 , TP2 , . . . is a formal representation of the
derivation process we described for programs (2.6) and (2.7). The identity TP0 = ∅
reflects that we start with no atoms established, and the formula TPk+1 = TP (TPk )
captures the informal reading of rules and programs: TPk+1 is the set of all atoms
that can be established by means of a single application of a rule from gr(P ) to
atoms established earlier.
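
For a finite ground Horn program, the derivation process described above terminates after finitely many steps, and the least fixpoint can be computed by iterating the one-step operator from the empty set. The sketch below (which repeats the small t_p helper so that it is self-contained) applies this to a hand-grounded version of program (2.7); the data layout is an assumption made for illustration.

```python
# Computing TP∞ for a finite ground Horn program by iterating the one-step
# operator from the empty interpretation, as described above (a sketch).

def t_p(rules, interpretation):
    return {head for head, pos, neg in rules
            if set(pos) <= interpretation and not (set(neg) & interpretation)}

def least_model(rules):
    """Iterate TP from the empty set until a fixpoint is reached."""
    current = set()
    while True:
        nxt = t_p(rules, current)
        if nxt == current:
            return current
        current = nxt

# Program (2.7) with constants 1..4, grounded by hand over its Herbrand universe:
facts = [(("r", 1, 2), [], []), (("r", 2, 3), [], []), (("r", 3, 4), [], [])]
tc_refl = [(("tc", x, x), [], []) for x in range(1, 5)]
tc_step = [(("tc", x, y), [("r", x, z), ("tc", z, y)], [])
           for x in range(1, 5) for y in range(1, 5) for z in range(1, 5)]
print(sorted(least_model(facts + tc_refl + tc_step)))
```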
In our examples we reached the limit of the process, that is, the set of atoms
from which no new atoms can be derived. It turns out that this is always possible. Let
us set

TP∞ = ⋃k≥0 TPk .

We will show that TP∞ is a fixpoint of the operator TP , that is, TP∞ = TP (TP∞). We
have
TP0 = ∅ ⊆ TP (TP∞), and

TPk+1 = TP (TPk ) ⊆ TP (TP∞), k = 0, 1, . . . .

Thus,

TP∞ = ⋃k≥0 TPk ⊆ TP (TP∞).

The converse inclusion also holds. Since TP is finitary,



TP (TP∞) ⊆ ⋃k≥0 TP (TPk ) = ⋃k≥0 TPk+1 ⊆ TP∞ .

The two inclusions show that TP (TP∞) = TP∞, that is, TP∞ is a fixpoint of TP . There-
fore, once we reach it, no new atoms can be derived. This argument shows that the
least fixpoint of TP exists and can be reached in no more than ω steps. In general,
as illustrated by the program (2.6), ω steps are in fact necessary.4 We gather our ob-
servations in the following theorem (which, obviously, also follows from Theorems
2.1 and 2.4).

Theorem 2.6 For every Horn program P , the one-step provability operator TP has a least pre-
fixpoint and the least fixpoint, and the two coincide. Moreover, the least (pre-
)fixpoint of TP can be reached in at most ω steps.

Proof We have already demonstrated that TP∞ is a fixpoint of TP . Let U be any pre-fixpoint
of TP . Then, for every k = 0, 1, . . . , TPk ⊆ U . Indeed, TP0 = ∅ ⊆ U and, if TPi ⊆ U , then
TPi+1 = TP (TPi ) ⊆ TP (U ) ⊆ U . Hence, the claim follows by induction. Consequently,
TP∞ ⊆ U . Since fixpoints are, in particular, pre-fixpoints, the assertion follows.

So far we described the fixpoint TP∞ in terms of a derivation procedure. It has, however, a simple declarative characterization as a certain Herbrand model of P ,
a property observed by van Emden and Kowalski [1976].

4. The discussion we presented above amounts to what we described earlier as the constructive
proof of the Knaster-Tarski theorem for the one-step provability operator TP , where P is a Horn
program. The sequence TP0 , TP1 , . . . , TPω (where we now write TPω for TP∞) is the transfinite sequence
we talked about then. We observe in passing that the proof remains almost the same for arbitrary
monotone operators on complete lattices. The key difference is that we need to use a transfinite
induction. In general, the least fixpoint of a monotone operator on a complete lattice cannot be
reached in ω steps.

Theorem 2.7 Let P be a Horn program. The least fixpoint of the one-step provability operator TP
and the least Herbrand model of P are equal.

Proof By Theorem 2.3, pre-fixpoints of TP and Herbrand models of P coincide. Thus, TP∞, being the least pre-fixpoint of TP , is the least Herbrand model of P .

From now on, we write LM(P ) for the least Herbrand model of P . The existence
of the least Herbrand model (the least fixpoint of the one-step provability operator),
allows us to formalize the meaning of Horn programs. They can be seen as defini-
tions, that is, devices that define relations, with the precise meaning of what they
define given by their least Herbrand models.

Definition 2.2 A Horn program P defines a relation R ⊆ HU(P )n if there is an n-ary relation symbol
r in the language of P such that for every t1 , . . . , tn ∈ HU(P ),

(t1 , . . . , tn) ∈ R if and only if r(t1 , . . . , tn) ∈ LM(P ).

This concept of definability of relations by Horn programs is closely related to Turing's computability. The following theorem due to Andréka and Németi [1978],
building on the results by Smullyan [1961], Büchi [1962], and Börger [1974], shows
the connection.

Theorem 2.8 A relation R over an Herbrand universe of a vocabulary σ is recursively enumerable if and only if R is defined by a finite Horn program P over some vocabulary σ′ extending σ .

Theorem 2.8 implies that Horn logic programming can be used as the foun-
dation for a Turing-complete programming language. This observation found its
implementation in Prolog [Colmerauer et al. 1973]. Prolog employs resolution pro-
cedures [Robinson 1965], in particular, the SLD (selective linear definite clause)
resolution [Kowalski and Kuehner 1971, Lloyd 1984], for finding whether a ground
atom belongs to the least Herbrand model of the program, that is, whether the
tuple of its ground terms has a property defined by the program.

2.4 Moving Beyond Horn Programs—An Informal Introduction

Horn programs are important both for the theory and for the practice of logic
programming. However, neither logic programming in general nor Prolog—its
particular implementation—are limited to Horn programs. From its very inception,
Prolog allowed negated atoms to appear in the bodies of rules. And while Prolog
implementers developed techniques to process programs with negation based on
generalizations of the basic SLD resolution, no corresponding characterization
in terms of a class of Herbrand models was offered. Identifying such a class of models became a long-standing problem in logic programming that for many years
fueled theoretical research of the semantics of logic programs. In fact, given some
problems with the semantics implemented by Prolog, somewhere along the way the
quest was redefined as the one to find a “correct” declarative account of programs
with negation, that is, a semantics interpreting them in accordance with their
informal reading.
Two semantics proposed at the end of 1980s gradually gained acceptance as
satisfactory solutions to the problem: the stable-model semantics [Gelfond and
Lifschitz 1988] and the well-founded semantics [Van Gelder et al. 1991]. In the
remainder of this tutorial presentation, we describe them and discuss their basic
properties.
When discussing the Horn case, we focused on Herbrand interpretations and
viewed programs as sets of ground instantiations of their rules. When discussing
programs with negation, we proceed similarly. Adopting these two “design” choices
allows us to think about elements of the Herbrand base as elementary propositional
atoms and about ground programs as propositional ones (albeit, in general, infi-
nite). Thus, in the remainder of this chapter, we focus attention on propositional
programs and only occasionally comment on how concepts and results we discuss
can be lifted to the general case. In particular, from now on, unless explicitly stated
otherwise, all programs we consider are propositional. We use the term programs
with variables whenever we need to speak about the general case.
Let At denote a countable (possibly finite) set of propositional atoms. The set At
can be viewed as the Herbrand base of the propositional language determined by At.
Similarly, the set of propositional atoms appearing in a propositional program P
can be viewed as the Herbrand base of P and could be denoted by HB(P ). However,
to stay closer to the most commonly used notation we write At(P ) for the set of all
atoms that appear in P .
We identify an interpretation of At (a truth assignment on At) with the subset of
At that consists of all elements of At that hold in the interpretation (are assigned
the logic value true). Thus, as in the general case, interpretations are subsets of the
Herbrand base.
Before we move on to introduce and formally discuss the main semantics that
emerged for programs with negation over the years, we illustrate some of the
difficulties resulting from allowing negated literals in the bodies of rules.
Let P be a program consisting of the rules

a
(2.8)
c ← a, not b,
where a, b, and c are propositional atoms. Reasoning as in the case of Horn pro-
grams, we see that a can be derived, as it is simply given as a fact, and b cannot be
established, as there is no rule with b in the head. Applying our informal reading
to the second rule shows that it “applies” and so c can be established. It follows
that the program “computes” or “justifies” the interpretation {a, c} in the way a Horn
program “computes” or “justifies” its least model. It is clear that this interpreta-
tion is a model of the program and that no other interpretation can be computed
in this way. It can then be reasonably taken as a unique intended model of the pro-
gram. We will later extend this example in a way that yields the class of stratified
programs [Apt et al. 1988, Przymusinski 1988a, Przymusinski 1988b]. These pro-
grams may include negation but the informal reading of rules allows us to identify
for them a unique intended model, called a perfect model [Przymusinski 1988a].
However, even in this simple way of using negation, there is already a difference
from the Horn case. The intended model {a, c} of the program (2.8) is no longer
the least model of the program. It is a minimal model of the program, but the pro-
gram has yet another minimal model namely, {a, b}. Hence, the property that an
intended model is a least model cannot be preserved as we move to the setting of
programs with negation, even if the use of negation is “stratified.”
In general, the uniqueness cannot be preserved either, at least not in the setting
of two logical values true and false—an essential caveat, as we will shortly see. Let
P be a program consisting of the rules

a ← not b,
(2.9)
b ← not a.

This program has three models (over the vocabulary {a, b}): {a, b}, {a}, and {b}.
The first of these interpretations is inconsistent with our informal reading of rules.
The two rules in the program can only be applied when a and b cannot be estab-
lished. But, if it were the case, both rules would be applicable and a and b would get
established, a contradiction. On the other hand, each of the two remaining inter-
pretations is consistent with the informal reading of rules. Indeed, assuming that b
cannot be established yields a (by the first rule). With a established, the second rule
cannot be used and so, b cannot be established, just as we assumed. A symmetric
argument justifies {b}. However, the symmetry implies that there is no difference
between the interpretations {a} and {b} that could allow us to select one of them as
a unique intended model of the program (2.9). The key to having multiple intended
models in programs with negation turns out to be the existence of cyclic dependen-
cies via negation. In our example, a depends on not b (the first rule) and b depends
on not a (the second rule). Analyzing other similar programs led to understanding
how to break such dependencies, which resulted in the concept of the reduct and
in the stable-model semantics [Gelfond and Lifschitz 1988].
In the previous paragraph we restricted attention to the two-valued case. If
we allow the three-valued setting or, as we do in this chapter, the four-valued
one, the uniqueness can be preserved at a cost of completeness of the model.
Considering again the program (2.9) one might posit that it does not provide
enough information to establish the logical value of a and b and so, the correct
logical value to assign to each of them is unknown. The resulting three-value model
is a natural candidate for the intended model of the program. While in this example
it offers no information about truth or falsity of a and b, in many other cases it
assigns true or false to a vast majority of atoms, leaving only some atoms as unknown.
Allowing the unknown truth value turned out to be quite fruitful. It paved the way to
the well-founded semantics [Van Gelder et al. 1988, Van Gelder et al. 1991], the most
broadly accepted generalization, to the class of all programs with negation, of the
least-model semantics of Horn programs and of the perfect-model semantics of
programs with “stratified” negation; it assigns to every program a unique intended
three-valued model.
We discuss the stable model and the well-founded model semantics in formal
terms in the remainder of the chapter. Along the way, we encounter two other
semantics of programs with negation, both related to the notion of Clark’s com-
pletion of the program [Clark 1978]: the two-valued supported-model semantics
[Clark 1978, Apt et al. 1988], and the three-valued Kripke-Kleene semantics [Fitting
1985]. Clark’s completion is a formalization of an intuition that programs are def-
initions, with all rules with the same atom, say p in the head, being the definition
of p.
Let us consider a program

a←b
a ← not c (2.10)
b ← d.
This program can be seen as defining a, b, c, and d by listing all cases when these
atoms are true and implicitly assuming that these are the only cases when they can
be. In our case, the program defines a to hold precisely when b is true or when c is
false. In all other cases, a must be false. The program defines b to be true precisely
when d is true and, as there are no rules defining c and d, the program defines c
and d as false. This is the same set of constraints we would obtain in classical logic
by means of the theory consisting of formulas obtained from the program by first
collecting together rules with the same head leading (with some abuse of notation)
to rules:
a ← b ∨ not c
(2.11)
b←d
and then replacing the rule connective ← with the if and only if connective ↔ of
classical logic:

a ↔ b ∨ ¬c
(2.12)
b ↔ d.
We changed in this last step the program connective not to ¬ as we want to see the
result as a theory in the standard language of logic. That theory is known as Clark’s
completion of a program and models of Clark’s completion are known as supported.
In our example, there is only one supported model: {a} (as already noted c and d are
false, b is false because of the formula b ↔ d, and a is true because of the formula
a ↔ b ∨ ¬c). It is worth noting that this model also arises from our adopted informal
reading of programs. As there are no rules for c and d, neither can be established
and so, they are false. Since d cannot be established, b cannot be established either
(the rule b ← d is the only rule that has b in its head). Last, a can be established
via the rule a ← not c. Since c cannot be established, a can! This program has
no dependency cycles involving only non-negated occurrences of atoms (positive
dependency cycles) and this is crucial. For such programs, supported models and
stable models coincide. In general, every stable model is supported but the converse does not hold.
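
The construction of Clark's completion described above is easy to sketch for a finite ground program: group the rule bodies by head atom and form one equivalence per atom, completing atoms without rules to False. The string-based rendering below is an illustration only; the function name completion and the output format are ours.

```python
# A sketch of Clark's completion for a finite ground program: collect the rule bodies
# for each head atom and form one equivalence per atom; atoms with no rules become False.

def completion(rules, atoms):
    """Return the completion as strings 'a <-> body1 | body2 | ...' (False for no rules)."""
    by_head = {a: [] for a in atoms}
    for head, pos, neg in rules:
        conj = [*pos, *("~" + c for c in neg)]
        by_head[head].append(" & ".join(conj) if conj else "True")
    return [a + " <-> " + (" | ".join(bodies) if bodies else "False")
            for a, bodies in by_head.items()]

# Program (2.10): a <- b;  a <- not c;  b <- d.
P = [("a", ["b"], []), ("a", [], ["c"]), ("b", ["d"], [])]
print(completion(P, ["a", "b", "c", "d"]))
# ['a <-> b | ~c', 'b <-> d', 'c <-> False', 'd <-> False']
```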
As positive dependency cycles are the main reason for the difference, let us
consider an example of a program where such dependencies are present:

a←b
(2.13)
b ← a.
According to our informal reading of programs, this program has one intended
model, ∅. It turns out to be the unique stable model of this program and the least
model of this program (the program is a Horn one). On the other hand, Clark's
completion of this program is the theory {a ↔ b} and this theory has two models:
∅ and {a, b}. The program thus has two supported models, only one of which is a
stable model.
So far, we considered two-valued models of Clark’s completion. If we change
the setting to the three-valued one, Clark's completion of the program (2.13)
will have one more model in which both a and b are unknown. It turns out we
can associate with every program with negation a single “least informative” three-
valued model. This model is known as the Kripke-Kleene model. We note that it is
different than the well-founded model of this program which, as it will be clear later
when we give formal definitions, turns out to make both a and b false. However,
we also note that the Kripke-Kleene model of the program (2.10) coincides with
the well-founded model. Thus, the relation between Kripke-Kleene and the well-
founded semantics follows a similar pattern to that between the supported-model
and the stable-model semantics.
The point of this discussion was to show informally complexities that arise when
we go beyond the class of Horn programs, and to build some intuitions. In the
remainder of this chapter, we offer a formal discussion of the concepts alluded to
above.

2.5 The Stable Model Semantics

We start our formal presentation of semantics of programs with negation with a
discussion of the stable-model semantics proposed by Gelfond and Lifschitz [1988].

2.5.1 The Definition and Basic Properties of Stable Models


The key concept behind the semantics of stable models is that of the reduct of a
program.

Definition 2.3 Let P be a program over a set At of atoms and let M ⊆ At be an interpretation of At.
The reduct of P with respect to M, denoted by P M , is the program obtained from
P by eliminating every rule r such that M ∩ B −(r) ≠ ∅ and replacing each remaining
rule a ← B + , not B − with a ← B +.

Informally, the reduct can be thought of as the result of evaluating negative literals in the bodies of rules with respect to M, and then simplifying in the standard
way: removing every “true” from the rule bodies, and eliminating every rule that
has at least one “false” in its body. To illustrate let us consider a program P over
At = {a, b, c, d} consisting of the rules:
a ← b, not c
b ← not a
(2.14)
b ← not c
c ← b, not a.
The atom d does not appear in P . Therefore, At(P ) = {a, b, c}. Let us now consider
an interpretation M ⊆ At, say M = {a, b, d}. To produce the reduct, we remove the
second and the fourth rule, in each case, because of the negated atom a, which is an
element of M (and so, not a evaluates to false in M). We then remove negative literals
from the remaining rules, in each case, the literal not c is removed (it evaluates to
true in M). The result is the program P M consisting of the rules:


a←b
(2.15)
b←.
We observe that the presence or absence of d in an interpretation does not affect
the reduct, as d is not present in the program.
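
The reduct construction of Definition 2.3 is straightforward to sketch in code for finite ground programs, again using the triple representation (head, positive body, negative body) assumed in the earlier sketches.

```python
# A sketch of the reduct construction from Definition 2.3 for finite ground programs.
# Rules are triples (head, positive_body, negative_body).

def reduct(rules, interpretation):
    """Drop rules blocked by M (some negated atom is in M); strip negation from the rest."""
    return [(head, pos, []) for head, pos, neg in rules
            if not (set(neg) & interpretation)]

# Program (2.14): a <- b, not c;  b <- not a;  b <- not c;  c <- b, not a.
P = [("a", ["b"], ["c"]), ("b", [], ["a"]), ("b", [], ["c"]), ("c", ["b"], ["a"])]
print(reduct(P, {"a", "b", "d"}))   # [('a', ['b'], []), ('b', [], [])]  -- program (2.15)
```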
Before we move on to define stable models of a program, we state some simple
properties of reducts.

Proposition 2.1 Let P be a program. For every interpretation M, TP (M) = TP M (M).

Proof Let a ∈ TP (M). It follows that there is a rule a ← B in P such that M |= B. For that
rule we have a ← B + ∈ P M and M |= B +. Thus, a ∈ TP M (M).
Conversely, if a ∈ TP M (M), there is a rule a ← C in P M such that M |= C. Since
a ← C belongs to P M , there is a rule a ← B in P such that M |= not b, for every
b ∈ B −, and also B + = C. Clearly, M |= B and so, a ∈ TP (M).

Corollary 2.1 If an interpretation M is a model of P , then it is a model of the reduct P M .

Proof If M is a model of P , TP (M) ⊆ M (cf. Theorem 2.3). Thus, by Proposition 2.1, TP M (M) ⊆ M and so, M is a model of P M .

With the notion of the reduct in hand, we are now ready to define stable models
of a program.

Definition 2.4 Let P be a program over a set At of propositional atoms. An interpretation M ⊆ At is a stable model of P if M is the least model of the reduct P M , that is, M = LM(P M ).

This definition relies on the fact that a reduct of a program is a Horn program and so, it has a least model. Directly from the definition it follows that for Horn
as the least-model semantics of Horn programs is universally accepted as the
right one.

Proposition 2.2 Let P be a Horn program over a set At of propositional atoms. Then the least model
of P is the only stable model of P .

Proof For every M ⊆ At, P M = P . Thus, if M is a stable model of P , M = LM(P M ) = LM(P ). On the other hand, for the same reason, M = LM(P ) satisfies M = LM(P M ) and so is a stable model of P .

Also directly from the definition, it follows that stable models are subsets of the
set of atoms that appear in the program and, even more, of the set of atoms that
appear as the heads of rules of the program. (We recall that we write H (P ) for the
set of atoms in the heads of rules of P .)

Proposition 2.3 Let P be a propositional program over a set At of propositional atoms. If M is a stable model of P , then M ⊆ H (P ).

Proof If M is a stable model of P then it is a least model of the reduct P M . If M contains an atom a that is not the head of any rule in P M , then M \ {a} is also a model of P M , a contradiction.

Proposition 2.3 is important for automated techniques to compute stable models as it narrows the scope of relevant atoms. For instance, let P be the program
given by the rules (2.14). Then the interpretation M = {a, b, d} is not a stable model
of P as d is not the head of any rule in P . In fact, every stable model of that program
must be a subset of the set {a, b, c}.

Remark 2.1 In view of Proposition 2.3, from now on whenever we discuss stable models of a
program P , we restrict attention to interpretations contained in At(P ).

Coming back to our example program P given by the rules (2.14), let us consider
the interpretation M′ = {a, b}. Since it is obtained from M = {a, b, d} by dropping
d, which does not appear in the program P , P M′ is also given by the rules (2.15).
Clearly, the least model of P M′ is {a, b}. Thus, M′ = {a, b} is a stable model of P .
There is another stable model, M′′ = {b, c}. Indeed, P M′′ consists of the rules

b←
(2.16)
c←b

and the least model of this program is {b, c}. On the other hand, the interpretation
{a, c} is not a stable model of P . The reduct of P with respect to {a, c} is empty and
so, its least model is ∅ and not {a, c}. Checking all subsets of At(P ) = {a, b, c} shows
that M′ = {a, b} and M′′ = {b, c} are the only stable models of P .
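
For small finite programs, Definition 2.4 can be checked by brute force: enumerate candidate interpretations, compute the reduct and its least model, and keep the candidates that reproduce themselves. The sketch below does exactly that for program (2.14); all function names and the rule representation are ours, chosen for illustration.

```python
# Checking Definition 2.4 by brute force on a finite ground program: a set M is
# stable iff it equals the least model of the reduct P^M (a sketch for small programs).
from itertools import chain, combinations

def t_p(rules, interpretation):
    return {h for h, pos, neg in rules
            if set(pos) <= interpretation and not (set(neg) & interpretation)}

def least_model(rules):
    current = set()
    while t_p(rules, current) != current:
        current = t_p(rules, current)
    return current

def reduct(rules, m):
    return [(h, pos, []) for h, pos, neg in rules if not (set(neg) & m)]

def stable_models(rules):
    atoms = {h for h, _, _ in rules}   # by Proposition 2.3 it suffices to consider head atoms
    candidates = chain.from_iterable(combinations(sorted(atoms), k)
                                     for k in range(len(atoms) + 1))
    return [set(m) for m in candidates if least_model(reduct(rules, set(m))) == set(m)]

# Program (2.14):
P = [("a", ["b"], ["c"]), ("b", [], ["a"]), ("b", [], ["c"]), ("c", ["b"], ["a"])]
print(stable_models(P))   # [{'a', 'b'}, {'b', 'c'}]
```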
We also note that not every program has a stable model. We will provide two
examples here. First, let P = {a ← not a}. Since At(P ) = {a}, there are only two
candidates for a stable model: M1 = ∅ and M2 = {a}. It is easy to see that P M1 = {a}
and so, LM(P M1 ) = {a} ≠ M1. Similarly, P M2 = ∅ and so, LM(P M2 ) = ∅ ≠ M2. For the
next example, let P consist of the rules
a←b
(2.17)
b ← not a.
There are now four candidates for a stable model as At(P ) = {a, b}. However, if
a is in a stable model, then b is not (the second rule does not contribute to the
reduct; thus, b is not in the least model of the reduct). Consequently, a is not in the
stable model either, a contradiction (to use the first rule to derive a, we must have
b, first). On the other hand, if a is not in a stable model, then b is, but then so is a,
a contradiction again.
We have been using the term stable model. But are stable models models of a
program? The next result shows that it is indeed the case.

Proposition 2.4 Let P be a program and M ⊆ At(P ) a stable model of P . Then, M is a model of P .

Proof Since M is a stable model of P , M is the least model of the reduct P M . In particular,
M is a model of P M . By Theorem 2.3, TP M (M) ⊆ M. By Proposition 2.1, TP M (M) =
TP (M). Thus, TP (M) ⊆ M and, again by Theorem 2.3, M is a model of P .

Stable models can be defined as fixpoints of operators. Let us define an operator
γP on the lattice All(P ) of interpretations (in this case, the lattice of subsets of At(P ))
by setting

γP (M) = LM(P M ).

The operator γP is antimonotone.

Proposition 2.5 Let P be a logic program. For all interpretations I ⊆ J ⊆ At(P ), γP (J ) ⊆ γP (I ).

Proof Let us consider a rule a ← B + , not B − in P . Clearly, if B − ∩ I ≠ ∅, then B − ∩ J ≠ ∅. Thus, by the definition of the reduct, P J ⊆ P I . It follows that γP (J ) = LM(P J ) ⊆ LM(P I ) = γP (I ).

The operator γP can be used to characterize stable models of P . Directly from the definitions, it follows that stable models of P are fixpoints of the operator γP .

Theorem 2.9 Let P be a program. An interpretation M ⊆ At(P ) is a stable model of P if and only
if M = γP (M).

Theorem 2.9, Corollary 2.1, and Proposition 2.5 imply one of the most important
properties of stable models from the perspective of knowledge representation,
namely, that they are minimal models.

Theorem 2.10 Every stable model of a program P is a minimal model of P .

Proof Let M be a stable model of P and let M′ be a model of P such that M′ ⊆ M.
By Proposition 2.5, γP (M) ⊆ γP (M′). Next, we recall that γP (M′) = LM(P M′). By
Corollary 2.1, M′ is a model of P M′. Thus, γP (M′) ⊆ M′. Finally, since M is a stable
model of P , Theorem 2.9 implies that M = γP (M). Putting these identities together
we have

M = γP (M) ⊆ γP (M′) ⊆ M′ ⊆ M.

Consequently, M = M′ and so, the minimality of M follows.

Algorithmic aspects of the problems of determining the existence of and computing stable models of propositional programs received a substantial amount of
attention. The reason is that computing stable models is a basic automated rea-
soning task underlying answer-set programming, a declarative programming para-
digm of growing popularity [Marek and Truszczyński 1999, Niemelä 1999, Brewka
et al. 2011]. We mention here two fundamental results in the long line of research
on the complexity of computing stable models of logic programs, and refer to the
excellent survey by Dantsin et al. [2001] for more details.

Theorem 2.11 Dowling and Gallier [1984]. The unique stable model (equivalently, the least
model) of a finite propositional Horn logic program P can be computed in time
linear in the size of P .

Proof (Sketch) To compute the least model of P , we start by setting X = ∅. We then arrange
all atoms that are heads of rules with the empty body into a queue. We repeatedly
take the front atom off the queue and add it to X. We also add to the end of the
queue all atoms that are heads of rules whose bodies are contained in X. When the
queue is empty, X contains the least model of P .
The task of adding atoms to the queue can be facilitated by keeping for each
atom the list of rules whose bodies contain it, and by keeping for each rule a counter
initially set to the number of atoms in its body. When an atom, say a, is taken off
the queue, we visit each rule that contains a in the body (we have a list of those
rules) and decrease the counter by 1. If the counter becomes 0, the head of the rule
is placed on the queue. With this arrangement, the algorithm works in linear time.
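A minimal Python sketch of this queue-and-counters procedure, under our own assumptions: a finite Horn program is a list of (head, body) pairs with atoms as strings, and all names are illustrative.

    from collections import deque

    def least_model_linear(rules):
        """Least model of a finite Horn program in (essentially) linear time."""
        counter = [len(body) for _, body in rules]   # unsatisfied body atoms per rule
        watch = {}                                   # atom -> indices of rules mentioning it in the body
        for i, (_, body) in enumerate(rules):
            for b in body:
                watch.setdefault(b, []).append(i)
        queue = deque(head for head, body in rules if not body)
        model = set()
        while queue:
            a = queue.popleft()
            if a in model:                           # already processed
                continue
            model.add(a)
            for i in watch.get(a, []):
                counter[i] -= 1
                if counter[i] == 0:                  # the whole body is now in the model
                    queue.append(rules[i][0])
        return model

    # {a; b <- a; c <- a, b; d <- e} has least model {a, b, c}.
    print(least_model_linear([('a', set()), ('b', {'a'}), ('c', {'a', 'b'}), ('d', {'e'})]))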

The situation is drastically different for the general case.

Theorem 2.12 Marek and Truszczyński [1991]. The following problem is NP-complete: given a
propositional logic program P , decide whether P has a stable model.

Proof (Sketch) The membership of the problem in the class NP is evident. Once we guess
a set M ⊆ At(P) of atoms, we can compute the reduct P^M, the least model of the
reduct and verify that M and this least model coincide. All these tasks can be
accomplished in time linear in the size of P .

To prove hardness, we show that the propositional satisfiability problem can be reduced to the problem of deciding whether a program has a stable model. Let F be a CNF formula. We introduce a fresh atom f and, for every atom a ∈ At(F), a fresh atom a′ (meant to represent the negation of a). Next, for each clause C of F, where

C = a1 ∨ . . . ∨ ak ∨ ¬b1 ∨ . . . ∨ ¬bm,

we define the program rule

r(C) = f ← not f , a1′ , . . . , ak′ , b1 , . . . , bm.

We define P(F) to be the program consisting of all rules r(C), where C is a clause of F, and of rules a ← not a′ and a′ ← not a, for every atom a appearing in F. One can show that F has a model M if and only if M ∪ {a′: a ∈ At(F) \ M} is a stable model of P(F).
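The construction of P(F) is easily mechanized. In the sketch below a clause is a pair (atoms occurring positively, atoms occurring negatively), rules are (head, positive body, negative body) triples, and the fresh atom a′ is encoded by appending a prime character; this encoding and the function names are our own choices.

    def prime(a):
        return a + "'"      # the fresh atom a', standing for the falsity of a

    def sat_to_program(clauses, atoms):
        """Build P(F) for a CNF F given as (positive atoms, negated atoms) pairs."""
        program = []
        for pos, neg in clauses:
            # r(C) = f <- not f, a1', ..., ak', b1, ..., bm
            program.append(('f', {prime(a) for a in pos} | set(neg), {'f'}))
        for a in atoms:
            program.append((a, set(), {prime(a)}))       # a <- not a'
            program.append((prime(a), set(), {a}))       # a' <- not a
        return program

    # F = (a or not b) and b; its only model, {a, b}, corresponds to the stable
    # model {a, b} of P(F).
    P = sat_to_program([({'a'}, {'b'}), ({'b'}, set())], {'a', 'b'})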

Extension to Programs with Variables. The concept of a stable model extends to


programs with variables. The key to the extension is grounding. Namely, if P is a
logic program with variables, we define an Herbrand interpretation M ⊆ HB(P ) to
be a stable model of P if M is a stable model of gr(P ). As noted at the beginning
of the section, gr(P ) can be viewed as a propositional logic program over the set
HB(P ) of ground atoms. Thus, the extension is well defined.
With this definition, most properties of stable models of propositional logic
programs generalize to programs with variables in a straightforward fashion. In
particular, stable models are Herbrand models of a program (cf. Proposition 2.4),
a Horn program has a unique stable model (cf. Proposition 2.2), stable models of
a program contain only atoms whose predicate symbol has a “head” occurrence
in the program (cf. Proposition 2.3), and stable models of a program are minimal
Herbrand models of the program (cf. Theorem 2.10).

2.5.2 Stratification and Splitting


By introducing negation into the bodies of rules of programs, we break a simple
semantic picture of Horn programs, which are guaranteed to have a unique stable
model. In the general case, some programs with negation have a single stable model
but many other programs have multiple stable models or no stable models at all.
The question of how stable models depend on the structure of a program and, in
particular, on how negation is used in its rules, has received much attention. The
notion of a “stratified” negation proposed by Apt et al. [1988] and Przymusinski
[1988b] turned out to be especially fruitful.

Let P be a program given by the rule

a ← not b. (2.18)

Since b is not the head of a rule in P , it cannot be established. Applying the


rule a ← not b, we establish a. Thus, our informal reading of rules and programs
suggests exactly one interpretation of a and b, namely M = {a}, which makes a true
and b false. It is easy to verify that, consistently with the informal reading, M is
a unique stable model of P . Hence, even though P contains negation, it still has
the key property of Horn programs, a unique stable model. On reflection, it is not
surprising at all. There is a negation in P but it is applied only to an atom that the
program has no rule for.
Programs like the one we just discussed not only have a unique stable model.
In fact, they have an even stronger property.

Proposition 2.6 Let P be a program such that no atom appearing negated in P is the head of a rule
in P . Then, for every set A of atoms the program P ∪ A has a unique stable model.

Proof Clearly, P^A ∪ A is a Horn program. Let M = LM(P^A ∪ A). It follows that A ⊆ M ⊆ H(P) ∪ A.
(Existence) By our assumption on P, (P ∪ A)^M = P^A ∪ A. Thus, M = LM((P ∪ A)^M) and so, M is a stable model of P ∪ A.
(Uniqueness) Let M′ be a stable model of P ∪ A. It follows that A ⊆ M′ ⊆ H(P) ∪ A. Thus, by our assumption on P, (P ∪ A)^M′ = P^A ∪ A. Since M′ = LM((P ∪ A)^M′) = LM(P^A ∪ A) = M, we have M′ = M. It follows that M is the unique stable model of P ∪ A.

Our discussion can be generalized to programs that are obtained by “stacking


up” programs of the type we have just discussed one over another. The formal term
is stratification and we will introduce it now.

Definition 2.5 Let P be a propositional logic program, α an ordinal (possibly transfinite). A sequence {Pβ}β<α of non-empty subsets of P is called a stratification of P in type α if

1. ⋃β<α Pβ = P; and
2. for every ordinal β < α, and for every propositional atom p that appears in the head of a rule in Pβ:
   (a) p has no occurrences in ⋃γ<β Pγ, and
   (b) p has no negated occurrences in Pβ.

The sets (programs) Pβ are called strata.

The idea is simple. By the condition (2a), atoms in the heads of rules in the
program Pβ , that is, atoms defined by Pβ are not “redefined” by any later stratum.

Thus, their logical values are determined by the program P≤β = ⋃γ≤β Pγ. Moreover,
atoms defined in the stratum β have no effect on atoms defined in earlier strata.
Thus, to determine the values of the atoms defined by the stratum Pβ , we first

establish the values of the atoms defined by the program P<β = ⋃γ<β Pγ. We then
use these values to simplify the program Pβ , that is, we eliminate rules whose
bodies contain literals evaluated in P<β to false and, from the remaining rules of
Pβ , we remove literals evaluated in P<β to true. Due to the condition (2b), the truth
values of all atoms that appear negated in the bodies of rules of Pβ are determined
by P<β . Thus, the result of this simplification is a Horn program, and its least
model specifies the logical values for atoms defined in Pβ . Applying this reasoning
according to the order of strata yields a model of the program. We will soon show
that this model is in fact the unique stable model of the program.
Let P consist of the rules
a ← not b
d ← a, e
(2.19)
e ← d , not b
c ← a, not d.

Let us define P0 = {a ← not b}, P1 = {d ← a, e; e ← d , not b} and P2 = {c ← a, not d}.


It is easy to verify that the sequence P0 , P1 , P2 is a stratification of P . The negated
atom in P0 has its value established by the set of rules in lower strata, that is, by
the empty program. Since the least model of the empty program is the empty set,
b is false. Thus, P0 simplifies to P0′ = {a} and so, the value of a is true. Given the logical values of a and b, P1 simplifies to P1′ = {d ← e; e ← d}. The least model of this program is empty and so, d and e are false. Moving on to P2, we simplify it to P2′ = {c} and obtain that c is true. In this way, we construct an interpretation {a, c} of P. It is easy to see that this interpretation is a stable model of P.
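For finite stratified programs this stratum-by-stratum evaluation is easy to mechanize. In the Python sketch below each stratum is a list of (head, positive body, negative body) triples; by condition (2b), every negated atom is already decided by the model of the earlier strata, so after dropping blocked rules each stratum becomes a Horn program. The representation and names are ours.

    def least_model(horn_rules):
        model, changed = set(), True
        while changed:
            changed = False
            for head, pos in horn_rules:
                if pos <= model and head not in model:
                    model.add(head)
                    changed = True
        return model

    def evaluate_stratified(strata):
        model = set()
        for stratum in strata:
            # Drop rules whose negated atom was already derived; what remains is Horn.
            horn = [(head, pos) for head, pos, neg in stratum if not (neg & model)]
            horn += [(a, set()) for a in model]   # earlier conclusions act as facts
            model = least_model(horn)
        return model

    # Program (2.19) with the strata P0, P1, P2 used above; the result is {a, c}.
    P0 = [('a', set(), {'b'})]
    P1 = [('d', {'a', 'e'}, set()), ('e', {'d'}, {'b'})]
    P2 = [('c', {'a'}, {'d'})]
    print(evaluate_stratified([P0, P1, P2]))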
Before we proceed with a discussion of formal properties of stratified programs,
we note that the concept of stratification, despite its intuitive nature, demonstrates
some subtle aspects. It is clear that there are stratified programs that require
infinitely many strata. What may be less obvious is that some infinite programs do
not admit stratifications in type ω or less. For instance, let P consist of the following
rules:
pi+1 ← not pi , i = 0, 1, . . .
(2.20)
q ← not pi , i = 0, 1, . . . .

Let us define Pi = {pi+1 ← not pi } and Pω = {q ← not pi : i = 0, 1, . . .}. It is easy to


verify that the sequence {Pβ }β<ω+1 is a stratification of P . In fact, it is the only
stratification of P . Indeed, in every stratification of P , the stratum of the rule
p1 ← not p0 must strictly precede the stratum of the rule p2 ← not p1, etc. Moreover,
by the condition (2a), all rules q ← not pi must be in the same stratum. Finally, the
stratum of the rule pi+1 ← not pi must strictly precede the stratum of the rule
q ← not pi+1. Thus, in every stratification, the strata must be the sets Pi , i = 0, 1, . . .,
and Pω , and they must appear in that order.
We now formally state the key property of stratified programs, which we have
already announced above.

Theorem 2.13 Let P be a stratified propositional logic program. Then P has a unique stable model.

The proof of this result depends on the Splitting Lemma, a result due to Lifschitz
and Turner [1994], also discovered independently by Eiter et al. [1994, 1997].

Definition 2.6 Let P be a propositional logic program. A pair (Q, R) of programs is a splitting of P
if Q and R are non-empty, P = Q ∪ R, and no atom appearing in the head of rules
in R appears in Q.

Informally, if (Q, R) is a splitting, Q may feed information into R (and, in gen-


eral, does) but not vice versa. This is similar to the “information flow” in stratified
programs and, indeed, splitting and stratification are closely related. The only dif-
ference is that for “strata” in splitting, the condition (2b) is not required. That is,
atoms in the heads of rules in a splitting stratum may appear negated in the bodies
of the rules in this stratum. In this sense, splitting generalizes stratification.5

Lemma 2.1 Splitting Lemma. Let P be a program with a splitting (Q, R). Then an interpreta-
tion M is a stable model of P if and only if M ∩ At(Q) is a stable model of Q and M
is a stable model of R ∪ (M ∩ At(Q)).

Proof (⇒) Let M be a stable model of P. We set N = M ∩ At(Q). Since (M \ N) ∩ At(Q) = ∅, Q^N = Q^M. By definition, M is the least model of P^M. We have Q ⊆ P and, consequently, Q^M ⊆ P^M. Thus, M is a model of Q^M and so, also a model of Q^N. Using again the property (M \ N) ∩ At(Q) = ∅, we obtain that N is a model of Q^N.
Let N′ ⊆ N be a model of Q^N. It follows that N′ is a model of Q^M (as Q^N = Q^M). Moreover, since (M \ N) ∩ At(Q) = ∅, also the interpretation N′ ∪ (M \ N) is a model of Q^M.

5. Even though we only defined splitting into two strata, extensions to the case of finite splittings
and splittings into an arbitrary ordinal type are straightforward.

Let us consider a rule r = a ← B+ from R^M. If B+ ⊆ N′ ∪ (M \ N), then B+ ⊆ N ∪ (M \ N) = M. Since M is a model of P^M, a ∈ M. By the definition of splitting, a ∉ At(Q). Thus, a ∉ N. It follows that a ∈ M \ N and, consequently, a ∈ N′ ∪ (M \ N). Hence, N′ ∪ (M \ N) is a model of R^M.
It follows that N′ ∪ (M \ N) is a model of P^M. Since N′ ⊆ N ⊆ M, we have N′ ∪ (M \ N) ⊆ M. We now recall that M is the least model of P^M. Thus, N′ ∪ (M \ N) = M, which implies that N′ = N. We conclude that N is the least model of Q^N and so, it is a stable model of Q.
Next, we show that M is a stable model of R ∪ N. First, we note that M is a model of P^M and, consequently, a model of R^M. Next, since N ⊆ M, M is a model of R^M ∪ N = (R ∪ N)^M. Let M′ ⊆ M and assume that M′ is a model of (R ∪ N)^M. Then M′ is a model of R^M ∪ N and so, M′ is a model of R^M and N ⊆ M′. Since M′ ⊆ M, M′ ∩ At(Q) ⊆ M ∩ At(Q) = N. Moreover, N ⊆ M′ and N ⊆ At(Q). Thus, M′ ∩ At(Q) = N and so, M′ is a model of Q^M (since N is a model of Q^M). It follows that M′ is a model of Q^M ∪ R^M = P^M. But M′ ⊆ M and M is the least model of P^M. Hence, M′ = M and so, M is the least model of (R ∪ N)^M. Consequently, M is a stable model of R ∪ N.
(⇐) Let M be an interpretation such that M is a stable model of R ∪ N, where N = M ∩ At(Q) and N is a stable model of Q. Since M is a stable model of R ∪ N, M is a model of (R ∪ N)^M = R^M ∪ N. Thus, M is a model of R^M. Similarly, N is a model of Q^N. Since N = M ∩ At(Q), M is a model of Q^M. It follows that M is a model of Q^M ∪ R^M = P^M.
Let us consider an interpretation M′ ⊆ M and assume M′ is a model of P^M. Writing N′ for M′ ∩ At(Q), we obtain that N′ is a model of Q^M (since Q ⊆ P and M′ is a model of P^M). Consequently, N′ is a model of Q^N and, since N is the least model of Q^N, N′ = N. It follows that N ⊆ M′ and so, M′ is a model of R^M ∪ N. Since M is the least model of R^M ∪ N, M′ = M. Thus, M is the least model of P^M and so, a stable model of P.

The Splitting Lemma has some fundamental implications. First, it is the basis
for the development of modular logic programs, where each next module intro-
duces new concepts defined in terms of those defined already. In particular, split-
ting underlies the generate-define-test methodology proposed by Lifschitz [2002],
which is now broadly used in answer-set programming [Brewka et al. 2011].
Second, splitting supports “stratum-by-stratum” computation of stable models.
Consequently, at each stage stable-model finding algorithms can limit the search
space of interpretations to those restricted to the language of the stratum at hand,
which improves performance.

To make this last observation clear, let us assume that (Q, R) is a splitting of
a program P . For every N ⊆ At(Q), we write R|N for the simplification of R with
respect to N , that is, the program obtained by

1. eliminating from R each rule a ← B such that for some b ∈ B+ ∩ At(Q), N ⊭ b (that is, b ∉ N), or for some atom b ∈ B− ∩ At(Q), N ⊭ not b (that is, b ∈ N), and
2. removing from the resulting program all remaining literals b and not b, where
b ∈ At(Q).

It is easy to see that M ⊆ At(P ) is a stable model of R ∪ (M ∩ At(Q)) if and only if


M ∩ At(R) is a stable model of R|M∩At(Q). Thus, the Splitting Lemma implies that to
find all stable models of P, it suffices to compute stable models of Q and, for each
stable model N of Q, to compute stable models of the program R|N .
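A small Python sketch of the simplification R|N (rules again as (head, positive body, negative body) triples over string atoms; the function name and the tiny example are ours):

    def simplify(R, atoms_Q, N):
        """R|N: drop rules decided false by N on At(Q), then drop the decided literals."""
        result = []
        for head, pos, neg in R:
            if (pos & atoms_Q) - N:     # some positive b in At(Q) with b not in N
                continue
            if neg & atoms_Q & N:       # some negated b in At(Q) with b in N
                continue
            result.append((head, pos - atoms_Q, neg - atoms_Q))
        return result

    # With Q = {a <- not b} (so At(Q) = {a, b} and N = {a}) and
    # R = {c <- a, not d; e <- b}, the program R|N consists of c <- not d alone.
    print(simplify([('c', {'a'}, {'d'}), ('e', {'b'}, set())], {'a', 'b'}, {'a'}))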
Third, and this is how we will use the Splitting Lemma here, it is a powerful
technical tool. In particular, it implies Theorem 2.13.

Proof of Theorem 2.13 Let us assume that {Pβ}β<α is a stratification of P in type α. We recall that we write P<β for ⋃γ<β Pγ. In particular, P<α = P. We will prove by transfinite induction that
for every β ≤ α, P<β has a unique stable model, say Mβ , and that for every γ < β,
Mγ ⊆ Mβ and no atom in Mβ \ Mγ appears in P<γ .
The claim holds for β = 0. Indeed, we have P<0 = ∅ and so, P<0 has a unique
stable model M0 = ∅. The second part of the claim holds vacuously. Let us consider
an ordinal β, 0 < β ≤ α and assume the claim holds for every ordinal γ such that
γ < β. We will prove the claim for β.
First, let us assume that for some ordinal β′ we have β = β′ + 1. Clearly, P<β = P<β′ ∪ Pβ′. By stratification, it follows that (P<β′, Pβ′) is a splitting of P<β. By the induction hypothesis, P<β′ has a unique stable model Mβ′. Moreover, by the Splitting Lemma and Proposition 2.6, P<β has a unique stable model, say Mβ, Mβ′ ⊆ Mβ, and no atom in Mβ \ Mβ′ appears in P<β′.
To complete the argument in this case, it remains to show that for every γ < β′, Mγ ⊆ Mβ and that no atom in Mβ \ Mγ appears in P<γ. By the induction hypothesis, Mγ ⊆ Mβ′ and no atom in Mβ′ \ Mγ appears in P<γ. Since Mβ′ ⊆ Mβ, we have Mγ ⊆ Mβ. Moreover, since Mγ ⊆ Mβ′ ⊆ Mβ, Mβ \ Mγ = (Mβ \ Mβ′) ∪ (Mβ′ \ Mγ). Thus, no atom in Mβ \ Mγ appears in P<γ (atoms in Mβ \ Mβ′ do not appear in P<β′ and so, they do not appear in P<γ).

Second, let us assume that β is a limit ordinal. It follows that P<β = ⋃γ<β P<γ. Indeed, for every δ < β, Pδ ⊆ P<δ+1. Since β is a limit ordinal, δ + 1 < β and so, Pδ ⊆ ⋃γ<β P<γ. Consequently, P<β ⊆ ⋃γ<β P<γ. The converse inclusion is evident.

Let Mβ = ⋃γ<β Mγ. Using the induction hypothesis, we can show that for every γ < β, Mγ ⊆ Mβ and no atom in Mβ \ Mγ appears in P<γ. We can also show that Mβ is a model of P<β. Indeed, let r be a rule in P<β. Then, there is γ < β such that r ∈ P<γ. Since Mγ is a stable model of P<γ (by the induction hypothesis), Mγ ⊆ Mβ, and Mβ \ Mγ has no atoms from P<γ (as noted above), Mβ is a model of r.
By Corollary 2.1, Mβ is a model of P<β^Mβ. Let M′ ⊆ Mβ be a model of P<β^Mβ. Then, for every γ < β, M′ is a model of P<γ^Mβ. Consequently, M′γ = M′ ∩ At(P<γ) is a model of P<γ^Mβ. Since P<γ^Mβ = P<γ^Mγ, M′γ is a model of P<γ^Mγ. By the induction hypothesis, for every γ < β, Mγ ⊆ M′γ and so, Mβ ⊆ M′. It follows that Mβ is a least model of P<β^Mβ and so, a stable model of P<β.
Let M′ be another stable model of P<β. By the definition of stratification, for every γ < β, (P<γ, P′), where P′ = P<β \ P<γ, is a splitting of P<β. By the Splitting Lemma, M′ ∩ At(P<γ) is a stable model of P<γ. By the induction hypothesis, for every γ < β, M′ ∩ At(P<γ) = Mγ. Thus, M′ = Mβ.

We will now look briefly at matters of computation. First, given a propositional


logic program one can decide in linear time in the size of the program whether
it is stratified. Second, given a stratified propositional logic program, its unique
stable model can be computed in linear time in the size of the program. To do this
we proceed stratum-by-stratum, computing the least model of the Horn program
resulting by simplifying the stratum with respect to the model computed so far.
Once that is done, we append that model to the set of atoms we are constructing.
A stratification testing algorithm and an algorithm to compute a unique stable
model of a logic program were first described by Niemelä and Rintanen [1994] in
the setting of autoepistemic counterparts of programs.

Extension to Programs with Variables. Unlike in most other cases, there are at
least two ways in which the notion of stratification can be extended to the case of
programs with variables. First, we can adapt the idea of stratification to the general
case by restricting the way relation symbols can occur in a program. We recall that
programs with variables are assumed to be finite. Thus, there is no need to consider
infinite stratifications.

Definition 2.7 Let P be a logic program over a first-order vocabulary. A sequence P0, P1, . . . , Pn of non-empty subsets of P is a stratification of P if

1. ⋃0≤i≤n Pi = P; and
2. for every i, 0 ≤ i ≤ n, and for every relation symbol r that appears in the head of a rule in Pi:
   (a) r has no occurrences in ⋃j<i Pj, and
   (b) r has no negated occurrences in Pi.
Programs that have a stratification are stratified.

It is clear that if a program P over a first-order vocabulary is stratified then


gr(P ) is stratified according to Definition 2.5 (in fact, even finitely stratified). Thus,
Theorem 2.13 implies the following corollary.

Corollary 2.2 Let P be a program over a first-order vocabulary. If P is stratified, then P has a
unique stable model.

The program (2.2) is stratified. Indeed, if we define P0 to consist of the first two
rules of the program and P1 of the third rule, then P0 , P1 is a stratification of P .
The second way to generalize the notion of stratification to programs with vari-
ables is motivated by an observation that some such programs are not stratified but
their groundings are. For instance, let P be a program consisting of the following
three rules (this program defines even and odd non-negative integers, but does it
differently from the program (2.2) we considered before):
even(0)
even(s(X)) ← not odd(X) (2.21)
odd(s(X)) ← not even(X).
This program is not stratified in the sense of Definition 2.7. Indeed, the last two
rules cannot belong to the same stratum. Moreover, neither of the two rules can
appear in a stratum preceding the stratum containing the other. However, the
grounding of the program is stratified in the sense of Definition 2.5.

Definition 2.8 Let P be a logic program over a first-order vocabulary. Then, P is locally stratified
if gr(P ) is stratified.

The following result follows directly from the corresponding definitions and
from Theorem 2.13.

Theorem 2.14 If P is locally stratified then P has a unique stable model.

2.5.3 Connections to Classical Logic: Supported Models, Tight Programs,


and the Loop Theorem
Stable models of programs are rooted in the intuition of non-circular justification.
Relaxing the notion of non-circular justification to that of support (possibly, a
circular self-support) leads to the semantics of supported models [Apt et al. 1988].
They are of interest as they can be viewed as a bridge between programs with the
stable-model semantics and classical propositional theories.

Definition 2.9 Let P be a logic program. An interpretation M is a supported model of P if M =


TP (M).

Let us say that an atom a is "supported" by a program P with respect to an interpretation M if P contains a rule a ← B such that M |= B. Then, M is a supported
model of a program P when it is precisely the set of atoms “supported” by P with
respect to M. This intuition motivates the term “supported.”
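In code, the supportedness test is just a fixpoint check for the one-step provability operator. A minimal sketch, with our own rule representation and names:

    def tp(program, m):
        """One-step provability operator T_P for rules given as (head, pos, neg) triples."""
        return {head for head, pos, neg in program if pos <= m and not (neg & m)}

    def is_supported(program, m):
        """M is a supported model iff M = T_P(M) (Definition 2.9)."""
        return tp(program, set(m)) == set(m)

    # P = {p <- p} has two supported models, {} and {p}; only {} is stable.
    P = [('p', {'p'}, set())]
    assert is_supported(P, set()) and is_supported(P, {'p'})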
Let us note three simple properties of supported models. First, just as stable
models, supported models of a program are contained in the set of atoms that
appear in the heads of rules of the program. This follows from the fact that TP (M) ⊆
H (P ).

Proposition 2.7 Let P be a logic program. If M is a supported model of P , then M ⊆ H (P ).

Second, again as stable models, supported models of a program are models of


the program.

Proposition 2.8 Let P be a logic program. Every supported model of P is a model of P .

Proof From the definition it follows that supported models are pre-fixpoints of the oper-
ator TP (in fact, even fixpoints). Thus, the assertion follows from Theorem 2.3.

The converse does not hold. There is an obvious reason why not. Namely, if M is a model of a program P, extending M by any atom that does not occur in P yields a model of P, too. However, by Proposition 2.7, that model cannot be supported. More
interestingly, the implication in Proposition 2.8 cannot be reversed even when we
restrict attention to models that consist of atoms that occur as heads of rules of
the program. For instance, let P = {p ← not p}. This program has no supported
models. Indeed, there are only two candidates for a supported model here, M1 = ∅
and M2 = {p}. However, TP (M1) = M2 and TP (M2) = M1. Thus, neither M1 nor M2
is supported. On the other hand, M2 is a model of P .
Finally, supported models of a program are also supported models of the reducts
they determine. Speaking precisely, we have the following result.

Proposition 2.9 If M is a supported model of a logic program P , then M is a supported model of


the reduct P^M.

Proof Let M be a supported model of P. Then, M = TP(M) and so, by Proposition 2.1, M = TP^M(M). In other words, M is a supported model of P^M.

Intuitively, every element a in a stable model M of a program P has a justification


provided by the rule of P whose "reduced" version belongs to the reduct P^M and, in the bottom-up computation of the least model of P^M, "fires" to add a to M. That
rule of P “supports” a with respect to M. Thus, one might expect that stable models
are supported. It is indeed so.

Theorem 2.15 Let P be a logic program and M a stable model of P . Then, M is a supported model
of P .

Proof Since M is a stable model of P, M = LM(P^M). By Theorem 2.7, M is a fixpoint of the operator TP^M (even a least fixpoint, but we do not need this here). Thus, M = TP^M(M). By Proposition 2.1, TP^M(M) = TP(M). Thus, M = TP(M), that is, M is
a supported model of P .

Again, the converse does not hold. Let P = {p ← p}. This program has two
supported models: M1 = ∅ and M2 = {p}. Indeed, it is easy to verify that TP (M1) = M1
and TP (M2) = M2. On the other hand, M2 is not a stable model of P (P is a Horn
program and its least model, M1, is the only stable model of P ). We note that p in
M2 is self-supported. This is the reason why M2 is not stable (all atoms must have
non-circular support). This example also shows that supported models do not need
to be minimal.

Remark 2.2 We saw that stable models are both supported and minimal. It is then natural to
ask whether supportedness and minimality together characterize stable models.
The answer is no. Let us assume that P = {p ← p, p ← not p}. It is clear that {p}
is both a supported model of P and a minimal model of P . However, it is equally
clear that it is not stable.

For some programs though, supported models are stable. In particular, it is so


for those programs whose structure precludes the existence of self-support. The
following definition and the subsequent result are due to Fages [1994].

Definition 2.10 A propositional program P is tight if there is a function (labeling) λ from atoms to
ordinals such that for every rule a ← B in P and every b ∈ B +, λ(a) > λ(b). We call
each labeling λ satisfying this condition a tight labeling of P .

We note that there are tight programs that require transfinite ordinals in each
of their tight labelings. For instance, let us consider a program given by rules:
pi+1 ← pi , i = 0, 1, . . .
(2.22)
q ← pi , i = 0, 1, . . . .

In every tight labeling of this program, the label of pi+1 must be strictly larger than
the label of pi . Moreover, the label of q must be strictly larger than the label of every
atom pi . Thus, no tight labeling in the set {0, 1, . . .} is possible. On the other hand,
the labeling λ, where λ(pi ) = i and λ(q) = ω, is tight. Of course, for every finite tight
program, there is a tight labeling that uses only integers.
The intuition behind tightness is clear. Supported models are those models
that can be “self-justified” by means of the one-step provability operator. Such
self-justification may, in general, involve self-justification of individual atoms. For
instance, if P = {p ← p}, then {p} is self-justified by means of TP (formally, {p} =
TP ({p})) and so, it is a supported model of P . For this program, the only way to
justify p is on the basis of p itself, and this self-dependence of p is the reason
why {p} is not a stable model of P . The tightness condition eliminates a possibility
for positive self-dependence of individual atoms. For instance, the program {p ←
not q; q ← not p} is tight. Setting λ(p) = λ(q) = 1 yields a tight labeling of this
program. Similarly, the program {a ← b, c; b ← d; d} is tight, too (set λ(c) = λ(d) = 1,
λ(b) = 2 and λ(a) = 3). Clearly, there are no positive self-dependencies of atoms in
these programs and it is easy to verify that their supported models are also stable.
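For a finite program, a tight labeling with integer labels exists precisely when the positive dependencies (a depends on b whenever some rule a ← B has b ∈ B+) contain no cycle, so tightness can be tested with a standard topological-sort check. A sketch under our usual rule representation; all names are ours.

    from collections import defaultdict, deque

    def is_tight(program):
        """Finite case: a tight labeling exists iff the positive dependencies are acyclic."""
        succ = defaultdict(set)              # head -> atoms it depends on positively
        atoms = set()
        for head, pos, neg in program:
            atoms.add(head)
            atoms |= pos | neg
            succ[head] |= pos
        indeg = {a: 0 for a in atoms}
        for a in atoms:
            for b in succ[a]:
                indeg[b] += 1
        queue = deque(a for a in atoms if indeg[a] == 0)
        removed = 0
        while queue:                         # Kahn's algorithm
            a = queue.popleft()
            removed += 1
            for b in succ[a]:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
        return removed == len(atoms)         # every atom removed iff no cycle

    assert is_tight([('p', set(), {'q'}), ('q', set(), {'p'})])   # {p <- not q; q <- not p}
    assert not is_tight([('p', {'p'}, set())])                    # {p <- p}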
These considerations can be made formal to show that tightness of a program
is indeed a sufficient condition for supported models to be stable, as discovered by
Fages [1994].

Theorem 2.16 Fages Lemma. Let P be a tight logic program. Then every supported model of P
is a stable model of P .

Proof Let M be a supported model of P. It follows that M is a model of P and so, a model of the reduct P^M (cf. Corollary 2.1). Let N be a model of P^M. We will show that M ⊆ N. In this way, we will show that M = LM(P^M), that is, that M is stable.
Let us consider a tight labeling λ for P . To show the inclusion, we prove by
transfinite induction that for every ordinal α, if a ∈ M and λ(a) = α, then a ∈ N . For
α = 0, the property holds trivially. Indeed, since M is a supported model of P , M =
TP (M) and so, there is a rule a ← B in P such that M |= B. Clearly, a ← B + belongs
to P^M. Consequently, N |= a ← B+. Moreover, for every b ∈ B+, 0 = λ(a) > λ(b) ≥ 0.
Thus, B + = ∅ and so, N |= B +. It follows then that N |= a, that is, a ∈ N .
Let us then consider an ordinal α > 0 and assume that the claim holds for
every ordinal that is less than α. Let a ∈ M satisfy λ(a) = α. Since M is a supported
model of P , M = TP (M) and so, there is a rule a ← B in P such that B + ⊆ M and
B − ∩ M = ∅. By the definition of tightness, for every b ∈ B +, λ(b) < α. Since B + ⊆ M,
the induction hypothesis yields b ∈ N . Hence, B + ⊆ N . By the definition of the
reduct, the rule a ← B+ belongs to P^M. Since N is a model of P^M, a ∈ N. Thus,
the claim follows for α and, by induction, it holds for every ordinal. Consequently,
the claim implies that M ⊆ N , as needed.

The result of Fages can be strengthened. The generalization was discovered by


Erdem and Lifschitz [2003].

Definition 2.11 A program P is tight on a set X ⊆ At(P ) if the program consisting of those rules
a ← B in P that satisfy B + ⊆ X is tight.

The following result is a simple corollary of Fages Lemma.

Corollary 2.3 Relativized Fages Lemma [Erdem and Lifschitz 2003]. If a program P is tight on a
set X ⊆ At(P ) and M is a supported model of P such that M ⊆ X, then M is a stable
model of P .

Proof Let Q be the program consisting of all rules a ← B in P such that B + ⊆ X. Since
Q ⊆ P , TQ(M) ⊆ TP (M). Moreover, since M ⊆ X, TP (M) ⊆ TQ(M). It follows that
TQ(M) = TP (M). Since M is a supported model of P , M = TP (M). This implies
M = TQ(M), that is, M is a supported model of Q. By Fages Lemma, M is a stable
model of Q. That is, M = LM(Q^M). Since M is a model of P, M is a model of P^M. Moreover, Q^M ⊆ P^M implies LM(Q^M) ⊆ LM(P^M). Thus, we have that M is a model of P^M and M ⊆ LM(P^M). It follows that M = LM(P^M) and so, M is a stable model
of P .

To see the value of the generalization, let us consider the program consisting of
the following rules:
p ← q , not s
r ← p, not q , not s
s ← not q (2.23)
q ← not s
p ← r.
This program has two supported models: M1 = {p, q} and M2 = {s}. However, this
program is not tight. The second rule imposes on any tight labeling λ of the program
a constraint λ(r) > λ(p), while the last rule requires that λ(p) > λ(r). Thus, we
cannot apply Fages Lemma to conclude that these supported models are stable. It
is easy to check that while not tight, P is tight on M1 and on M2. Thus, by Corollary
2.3, both supported models are in fact stable models of P .
Supported models can be characterized in terms of classical models of proposi-
tional theories obtained from programs by means of the so-called Clark’s completion
[Clark 1978]. In the case of arbitrary (possibly infinite) propositional programs, the
completion of a program is, in general, a formula in propositional logic with in-
finitary conjunctions and disjunctions. Thus, from now on in this section, we allow
formulas with infinitary disjunctions and infinitary conjunctions. Since the seman-
tics of such formulas is a straightforward generalization from the finite case and
since we only rely on semantic arguments below, it is not a problem.6 Moreover,
throughout the remainder of this section, given a possibly infinite set X of formu-
las we will often write X ∧ and X ∨ for the (infinitary) conjunction and disjunction
of formulas in X, respectively.
For a rule r = a ← b1 , . . . , bm , not c1 , . . . , not cn, we define

body∧(r) = b1 ∧ . . . ∧ bm ∧ ¬c1 ∧ . . . ∧ ¬cn .

Informally, body∧(r) is the propositional formula representing the condition of the


body of the rule r. We note that if the body of r is empty, we have body∧(r) = ⊤.7 If
we use the notation a ← B for a rule r, we write B ∧ to denote the formula body∧(r).
Next, given a program P and an atom a, we set

def P (a) = {body∧(r): r ∈ P and H (r) = a}∨ .

That is, def P (a) is the disjunction (in general, infinitary) of all conditions of rules
that “define” a in P .

Definition 2.12 Let P be a logic program over a propositional vocabulary At. We set

cmpl ←(P ) = {def P (a) → a: a ∈ At}∧


cmpl →(P ) = {a → def P (a): a ∈ At}∧
cmpl(P ) = cmpl ←(P ) ∧ cmpl →(P )
(≡ {a ↔ def P (a): a ∈ At}∧).
We call the last of these three (infinitary, in general) formulas the (Clark’s) comple-
tion of P .

The formula cmpl ←(P ) can be viewed as a propositional counterpart to the pro-
gram. Its models are precisely the models of the program. The formula cmpl →(P )
captures the idea that if a holds, at least one of its defining conditions holds. Thus,

6. Those readers that are not comfortable with infinitary propositional logic may simply restrict
the scope of the discussion to programs that are finite.
7. Here ⊤ is the standard symbol from the language of propositional logic that is always interpreted as true.

models of cmpl(P ) are precisely those models of P whose elements are justified
in the sense captured by the formula cmpl →(P ). This notion of justification is pre-
cisely that of “support” in supported models. That is, models of the completion of
a program are precisely the supported models of the program.
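For a finite program the completion can be written out directly. The sketch below renders each def_P(a), and hence the equivalences a ↔ def_P(a), as plain strings; the rule representation, the connective symbols in the output, and the names are our own choices.

    def body_formula(pos, neg):
        lits = sorted(pos) + ['-' + c for c in sorted(neg)]
        return ' & '.join(lits) if lits else 'true'

    def completion(program, atoms):
        """Map every atom a to a rendering of def_P(a); cmpl(P) is the set of a <-> def_P(a)."""
        by_head = {a: [] for a in atoms}
        for head, pos, neg in program:
            by_head[head].append(body_formula(pos, neg))
        return {a: ' | '.join(bodies) if bodies else 'false'
                for a, bodies in by_head.items()}

    # For P = {a <- not b; b <- c} over {a, b, c}: a <-> -b, b <-> c, c <-> false.
    print(completion([('a', set(), {'b'}), ('b', {'c'}, set())], {'a', 'b', 'c'}))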

Theorem 2.17 Let P be a propositional program. An interpretation M is a supported model of P


if and only if M is a model of cmpl(P ).

Proof (⇒) Since M is a supported model of P , M = TP (M). Let us consider any atom a ∈ At.
If a ∈ M, there is a rule r = a ← B in P , such that B + ⊆ M and M ∩ B − = ∅. Clearly,
M |= body∧(r) and so, M |= def P (a). Thus, M |= def P (a) ↔ a. If a ∉ M, then for every rule a ← B in P, we have B+ ⊄ M or M ∩ B− ≠ ∅. It follows that M ⊭ def P (a) and so, M |= def P (a) ↔ a. Consequently, M |= cmpl(P).
(⇐) Let M |= cmpl(P). It follows that M is a model of cmpl←(P) and so, M is a
model of P . This implies that TP (M) ⊆ M. Moreover, since M |= cmpl →(P ), we have
that for every a ∈ At, M |= a → def P (a). In particular, if a ∈ M, M |= def P (a). That is,
there is a rule r = a ← B in P such that M |= body∧(r). It follows that B + ⊆ M and
M ∩ B − = ∅, that is, a ∈ TP (M). Thus, M ⊆ TP (M) and, consequently, M = TP (M),
that is, M is a supported model of P .

It follows that in all those cases when supported and stable models coincide,
stable models are models of the completion. Thus, we have the following corollary
to our earlier results.

Corollary 2.4 Let P be a tight program. Then an interpretation M is a stable model of P if and
only if M is a model of the completion of P .

This connection to propositional logic is important. It provides a direct way to


use SAT solvers to compute stable models of tight programs. All that is needed is to
construct the completion and convert it to the conjunctive normal form (clausify it).
This observation is the foundation of an answer-set programming solver8 cmodels9
[Giunchiglia et al. 2006].
Clearly, not every program is tight and programs often have supported models
that are not stable. For such programs, the completion is too weak to provide a

8. Programs computing stable models are commonly called answer-set solvers. It is because they typically handle programs in an extended language (allowing, among other things, disjunctions in the heads of rules, two types of negation, constraint atoms, and aggregates), in which the term answer set is used instead of stable model.
9. http://www.cs.utexas.edu/users/tag/cmodels.html

description of stable models. However, it is possible to strengthen the completion


so as to obtain a representation of stable models of an arbitrary logic program in
terms of models of (infinitary) propositional formulas. The discussion we provide
here follows closely that of Ferraris et al. [2006].

Definition 2.13 Let P be a logic program and Y a nonempty set of atoms. The external support
formula for Y , written esP (Y ), is the disjunction of all formulas body∧(r), where r is
a rule in P such that H (r) ∈ Y and B(r)+ ∩ Y = ∅.

We note that for every atom a we have esP ({a}) |= def P (a). Indeed, def P (a) =
esP ({a}) ∨ F , where F is the disjunction of formulas body∧(r), for every rule r = a ←
B such that a ∈ B +. Such rules are evidently circular and can be removed without
affecting the class of stable models (such removals may affect the class of supported
models, though). Thus, “reasonable” programs do not contain such rules, and for
those programs we even have esP ({a}) = def P (a). Let us define formulas

es∧(P) = {Y∧ → esP(Y): Y ⊆ At(P), Y ≠ ∅}∧, and

es∨(P) = {Y∨ → esP(Y): Y ⊆ At(P), Y ≠ ∅}∧,

and let us call them the conjunctive and the disjunctive external support formulas,
respectively. The following two properties follow from the observation above:

es∧(P ) |= cmpl →(P ), and


es∨(P ) |= cmpl →(P ).

Thus, we have

cmpl←(P) ∧ es∧(P) |= cmpl(P), and

cmpl←(P) ∧ es∨(P) |= cmpl(P).

The strengthening of the formula cmpl←(P) with es∧(P) (or es∨(P)), rather than
with cmpl →(P ), is what is needed to translate logic programs under the stable-
model semantics into (infinitary, in the general case) propositional theories. The
following theorem, in the form we present it, is due to Ferraris et al. [2006]. How-
ever, as they state in their paper, the theorem can essentially be attributed to Saccà
and Zaniolo [1990], who provided a semantic characterization of stable models
in terms of unfounded sets, the concept we will encounter later in the chapter.
Since the intuitions behind unfounded sets and external support are closely related,
the result of Saccà and Zaniolo can be directly reformulated in terms of external
support.10

Theorem 2.18 Let P be a logic program. The following conditions are equivalent.
1. X is a stable model of P .
2. X is a model of cmpl ←(P ) ∧ es∨(P ).
3. X is a model of cmpl ←(P ) ∧ es∧(P ).

Proof (1) ⇒ (2). Let X be a stable model of P. It follows that X is a model of P. Consequently, X is a model of cmpl←(P). Let us consider Y ⊆ At(P) such that Y ≠ ∅ and let us assume that X |= Y∨. It follows that X ∩ Y ≠ ∅. Since X = LM(P^X), let a be the first element of X ∩ Y derived in the bottom-up computation of LM(P^X). Let a ← B be the rule in P such that X ∩ B− = ∅ and a ← B+ ∈ P^X is used in the derivation. It follows that X |= B+ and, so, since X ∩ B− = ∅, also X |= B∧. Moreover, by the choice of a, B+ ∩ Y = ∅. Thus, B∧ is a disjunct of esP(Y) and, consequently, X |= esP(Y).
(2) ⇒ (3). This implication follows from the fact that es ∨(P ) entails es∧(P ).
Indeed, let us consider an interpretation M ⊆ At such that M |= es ∨(P ). Let Y be a
non-empty subset of At(P ) such that M |= Y ∧. Then, M |= Y ∨ and so, M |= esP (Y ).
Thus, M |= es ∧(P ).
(3) ⇒ (1). Let us assume that X |= cmpl←(P) ∧ es∧(P). Since X |= cmpl←(P), X |= P and, consequently, X |= P^X. Let X′ = LM(P^X). Clearly, X′ ⊆ X. Let Y = X \ X′ and let us assume that Y ≠ ∅.
Since X |= es∧(P), X |= Y∧ → esP(Y). Moreover, we have Y ⊆ X and so, X |= Y∧. Thus, X |= esP(Y). It follows that there is a rule a ← B in P such that a ∈ Y, B+ ∩ Y = ∅, and X |= B∧. The last property implies that a ← B+ belongs to P^X and B+ ⊆ X. Let z ∈ B+. Since B+ ∩ Y = ∅, z ∉ Y. Thus, z ∈ X′ (as we have B+ ⊆ X). This shows that B+ ⊆ X′. Since X′ is a model of P^X, it is a model of a ← B+. Thus, a ∈ X′, a contradiction. It follows that X′ = X, that is, X = LM(P^X). Hence, X is a stable model of P.
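For a finite program, condition (3) amounts to a brute-force test: X must be a model of P, and every non-empty Y ⊆ X must be externally supported with respect to X. The sketch below, usable only for small programs, follows this reading; representation and names are ours.

    from itertools import combinations

    def is_model(program, x):
        """X satisfies every rule (head, pos, neg) whose body it satisfies."""
        return all(head in x for head, pos, neg in program
                   if pos <= x and not (neg & x))

    def externally_supported(program, x, y):
        """X |= es_P(Y): some rule has its head in Y, positive body disjoint
        from Y, and its whole body satisfied by X."""
        return any(head in y and not (pos & y) and pos <= x and not (neg & x)
                   for head, pos, neg in program)

    def is_stable(program, x):
        """Theorem 2.18(3), checked over all non-empty subsets of X."""
        x = set(x)
        if not is_model(program, x):
            return False
        return all(externally_supported(program, x, set(y))
                   for k in range(1, len(x) + 1)
                   for y in combinations(sorted(x), k))

    # For {p <- p}: {p} is a model (even supported) but lacks external support,
    # so it is not stable; the empty interpretation is.
    assert not is_stable([('p', {'p'}, set())], {'p'})
    assert is_stable([('p', {'p'}, set())], set())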

Theorem 2.18 can be formulated in a slightly different but equivalent way.


Namely, the role of cmpl ←(P ) is only to ensure that X is a model of P . Thus,
conditions (2) and (3) can be phrased as:

10. The result by Saccà and Zaniolo was generalized by Leone et al. [1997] to the so-called disjunctive logic programs (that is, programs with disjunctions of atoms as rule heads). Ferraris et al. [2006] also considered the disjunctive programs in their study.

2. X is a model of P and of es∨(P ).


3. X is a model of P and of es∧(P ),

respectively.
For finite programs, Theorem 2.18 could be used as the basis of an algorithm
for computing stable models, as it reduces the problem to that of finding models
of propositional theories and allows one to use satisfiability solvers for the task.
We discussed a similar application of the completion construction on tight pro-
grams earlier, where we mentioned an answer-set programming solver cmodels. The
problem here is that the theories es∧(P ) and es∨(P ) can be large, in the worst case
exponential in the size of the program. However, their size can be restricted, thanks
to the concept of a loop formula proposed by Lin and Zhao [2002] and an important
characterization of stable models they discovered.

Definition 2.14 A positive dependency graph of a finite propositional program P , denoted by G+(P ),
has atoms of P as its nodes and there is an edge from an atom a to an atom b in
G+(P ) if P has a rule a ← B such that b ∈ B +. A set X ⊆ At(P ) is a loop in P if the
subgraph of G+(P ) induced by X is strongly connected. We write L(P ) for the set
of loops of P .

We note that all subgraphs of G+(P ) induced by single vertices are trivially
strongly connected (each vertex can be reached from itself by a path of length 0).
Therefore, all singleton subsets of At(P ) are loops.
Let us define formulas
lp∧(P ) = {Y ∧ → esP (Y ): Y ∈ L(P )}∧ , and
lp∨(P ) = {Y ∨ → esP (Y ): Y ∈ L(P )}∧ ,

and let us call them the conjunctive and disjunctive loop formulas, respectively.
Clearly, es∧(P ) |= lp∧(P ) and es∨(P ) |= lp∨(P ). Moreover, in general, the entailment
relation cannot be reversed (the conjunctive and disjunctive loop formulas are
weaker than the corresponding external support formulas). However, for finite programs, the formulas lp∧(P) and lp∨(P) can be used in place of es∧(P) and es∨(P), respectively.
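Enumerating L(P) is straightforward for very small programs (although, as discussed below, the number of loops can be exponential). The brute-force sketch below lists all loops of Definition 2.14; the representation and names are ours.

    from itertools import combinations

    def loops(program):
        """All non-empty X that induce a strongly connected subgraph of G+(P)."""
        atoms, edges = set(), set()
        for head, pos, neg in program:
            atoms.add(head)
            atoms |= pos | neg
            edges |= {(head, b) for b in pos}

        def strongly_connected(nodes):
            nodes = set(nodes)
            adj = {a: {b for (x, b) in edges if x == a and b in nodes} for a in nodes}
            def reach(src):
                seen, stack = {src}, [src]
                while stack:
                    for b in adj[stack.pop()]:
                        if b not in seen:
                            seen.add(b)
                            stack.append(b)
                return seen
            return all(reach(a) == nodes for a in nodes)

        return [set(ys) for k in range(1, len(atoms) + 1)
                for ys in combinations(sorted(atoms), k)
                if strongly_connected(ys)]

    # {a <- b; b <- a; c <- a} has the loops {a}, {b}, {c}, and {a, b}.
    print(loops([('a', {'b'}, set()), ('b', {'a'}, set()), ('c', {'a'}, set())]))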

Theorem 2.19 Let P be a finite logic program. The following conditions are equivalent.
1. X is a stable model of P .
2. X is a model of cmpl ←(P ) ∧ lp∨(P ).
3. X is a model of cmpl ←(P ) ∧ lp∧(P ).

Proof (1) ⇒ (2). This implication follows from the entailment es∨(P ) |= lp∨(P ) and Theo-
rem 2.18.
(2) ⇒ (3). The argument used in the proof of the implication (2) ⇒ (3) of Theorem
2.18 can be used here without any change.
(3) ⇒ (1). Let us assume that X |= cmpl←(P) ∧ lp∧(P). Since X |= cmpl←(P), X |= P and, consequently, X |= P^X. Let X′ = LM(P^X). Clearly, X′ ⊆ X. Let us assume that X \ X′ ≠ ∅. Let H be the subgraph of G+(P) induced by X \ X′ and let Y be the set of nodes in a terminal strongly connected component in that graph (a strongly connected component is terminal if no edge starts in that component and ends in another). Clearly, Y ∈ L(P) and so, X |= Y∧ → esP(Y).
Since Y ⊆ X, X |= Y∧ and, consequently, X |= esP(Y). Thus, there is a rule a ← B in P such that a ∈ Y, B+ ∩ Y = ∅, and X |= B∧. It follows that a ← B+ belongs to P^X and B+ ⊆ X.
Let z ∈ B+. Then the dependency graph G+(P) contains an edge (a, z). We observe that a ∈ X \ X′ (because a ∈ Y). If z ∈ X \ X′ then both a and z are nodes of H and so, the edge (a, z) is an edge of H. Since Y is a set of nodes of a terminal strongly connected component in H and a ∈ Y, z ∈ Y. However, we have Y ∩ B+ = ∅, a contradiction. Thus, z ∉ X \ X′. Since B+ ⊆ X and z ∈ B+, we have z ∈ X. Thus, z ∈ X′.
It follows that B+ ⊆ X′ and, since X′ = LM(P^X), a ∈ X′. This is a contradiction. It follows that X′ = X, that is, X = LM(P^X). Hence, X is a stable model of P.

Theorem 2.19 can be reformulated similarly to Theorem 2.18, by stating explic-


itly that X must be a model of P rather than ensuring this by involving cmpl ←(P )
in the conditions (2) and (3).
Theorem 2.19 is the basis for the answer-set solver assat, developed by Lin and
Zhao [2002]. Lin and Zhao demonstrated that it is not necessary to pre-compute the
entire loop formula at once. It is possible to compute it incrementally, one loop at
a time, and a stable model (if one exists) is often found after just a few loops have
been exploited. Experiments proved the method worked well for many non-tight
programs (for such programs, the completion is in general too weak to capture
stable models). Nevertheless, the approach has its limitations as there are strong
computational complexity arguments suggesting that in the worst case the number
of loops must be exponential in the size of the program [Lifschitz and Razborov
2006].
It is worth noting that the use of external support formulas to find a proposi-
tional representation of a program is related to the approach proposed and inves-
tigated by Brass and Dix [1995a, 1995b, 1997]. By replacing atoms in the positive
bodies of rules by the bodies of rules that define them, and by eliminating rules
that cannot be reduced in this way to rules with only negative literals in the bodies,
Brass and Dix construct the so called residual program for a program P . It has the
same stable models as P . Since it is tight (it only contains rules that have empty
positive body), the models of its completion are precisely the stable models of P .

Extensions to Programs with Variables. Concepts discussed in this subsection and


the results we presented here generalize to programs with variables. For instance,
we define an Herbrand interpretation M to be a supported model of a program with
variables, say P , if M is a supported model of the ground program gr(P ) (treated as a
propositional program over the set HB(P ) of atoms). Similarly, grounding allows us
to extend the notion of tightness to such programs. A first-order sentence capturing
the completion of a program P with variables can be constructed by adapting to the
first-order case the construction we gave for propositional programs [Clark 1978].
Under these extensions the results we presented here generalize. In particular,
stable models of programs with variables are supported, supported models of tight
programs are stable, and supported models of programs are precisely Herbrand
models of the completion.
The results characterizing stable models by means of the external support for-
mulas and the loop formulas (Theorems 2.18 and 2.19) suggested extensions of
the concept of a stable model to the case of arbitrary first-order sentences (not
just programs), and to arbitrary (not just Herbrand) interpretations [Ferraris et al.
2007, Lifschitz 2010, Ferraris et al. 2011]. The general form of the characterization
exposed a close relationship between the stable-model semantics and circumscrip-
tion [Ferraris et al. 2011].
Stable models can also be characterized in terms of the so called equilibrium
logic [Pearce 1997, Pearce 2006]. This characterization and the connection of the
equilibrium logic to the logic of here-and-there [Heyting 1930] led to the notion of
strong equivalence, which we discuss below. However, through extensions of the
equilibrium logic to the first-order language case, it also resulted in the correspond-
ing generalization of the stable-model semantics [Pearce and Valverde 2004, Pearce
and Valverde 2008], equivalent to the one developed by Ferraris et al.

2.5.4 Equivalence and Strong Equivalence


Perhaps the most fundamental notion in any formal system is that of equivalence
of formulas. A generic definition stipulates that two formulas are equivalent if they
have the same models (under the semantics considered) or, informally, if they
have the same meaning. Rewriting formulas into equivalent ones is an important
technique of automated reasoning as it often allows us to replace formulas by


simpler ones. To rewrite F , we typically identify in it a subformula, say G, and
replace G with another one, say G. The rewriting is correct if F and the result of
replacing G with G in F are equivalent. If replacing G with G yields a correct
rewriting no matter to what formula we apply the substitution, then G and G are
equivalent for replacement. In classical logic, two formulas G and G are equivalent
for replacement if and only if they have the same models, that is, have the same
meaning. This is a simple consequence of the following monotonicity property of
classical logic: for every formulas F and G, Mod(F ∧ G) = Mod(F ) ∩ Mod(G).11
Informally, extending a theory in classical logic by additional formulas has the
effect of eliminating models.
The situation is different in the case of logic programming. Let us consider two
programs P = {a ← not b} and P′ = P ∪ {b}. It is clear that M = {a} is a unique stable model of P and M′ = {b} is a unique stable model of P′. Thus, adding a new rule
to P does not act on stable models in the same way as adding formulas does on
models. This behavior, called non-monotonicity of the stable-model semantics, is
the reason why equivalence of logic programs does not characterize the equivalence
for the replacement. To give a concrete example, the programs {a} and {a ← not b}
have the same stable models and so are equivalent. However, replacing in the
program {a; b} the subprogram {a} with the program {a ← not b} is not a correct
rewriting. The resulting program {a ← not b; b} has a unique stable model {b} while
the original program has a unique stable model {a, b}. Replacing one subprogram
with an equivalent one changes stable models!
We will now formally introduce the notion of equivalence for replacement for
logic programs called strong equivalence. The definition is due to Lifschitz et al.
[2001].

Definition 2.15 Two logic programs P and Q are strongly equivalent if for every program R, the
programs P ∪ R and Q ∪ R have the same stable models.

To characterize strong equivalence we exploit the notion of an se-model


[Lifschitz et al. 2001]. Se-models arise from a connection between logic program-
ming and a certain intermediate logic, the logic here-and-there of Heyting [Heyting
1930, Chagrov and Zakharyaschev 1997]. The connection has been discovered by
Pearce [1997]. He used it to define the equilibrium logic. As noted at the end of the
previous section, the equilibrium logic played an important role in generalizing
the stable-model semantics to the first-order setting.

11. We write Mod(F ) for the set of models of a formula F .



Definition 2.16 Let At be a propositional vocabulary and let X and Y be subsets of At. A pair (X, Y )
is an se-model of a program P over At if
1. X ⊆ Y ,
2. Y |= P , and
3. X |= P Y .
We denote the set of all se-models of P by SE(P ).

We note that se-models contain all information necessary to identify stable


models of a program. The second components Y of se-models are models of a
program P and, therefore, also of the reduct P Y . For each model Y of P , the
collection of se-models of the form (X, Y ) allows us to decide whether Y is a stable
model. If the only X such that (X, Y ) is an se-model of P is X = Y , then Y is the
least model of the reduct P Y and, therefore, a stable model of P . Otherwise, Y is
not a least model of P Y and, therefore, not a stable model of P . This discussion is
formalized below.

Proposition 2.10 An interpretation Y is a stable model of a program P if and only if (Y, Y) ∈ SE(P) and for every X ⊂ Y, (X, Y) ∉ SE(P).

Proof If Y is a stable model of P, then Y |= P and Y |= P^Y. Thus, (Y, Y) ∈ SE(P). If for some X ⊂ Y, we have X |= P^Y, then Y ≠ LM(P^Y), a contradiction. Thus, X ⊭ P^Y and so (X, Y) ∉ SE(P). The converse implication follows by a similar argument.

We can now formulate and prove a characterization of strong equivalence dis-


covered by Lifschitz et al. [2001].

Theorem 2.20 Logic programs P and Q are strongly equivalent if and only if SE(P ) = SE(Q), that
is, they have the same se-models.

Proof We first observe that for every three programs P , Q, and R,

SE(P ∪ R) = SE(P ) ∩ SE(R). (2.24)

Indeed, let (X, Y ) be a pair of interpretations. By the definition, (X, Y ) ∈ SE(P ∪ R)


if and only if X ⊆ Y, Y |= P ∪ R and X |= (P ∪ R)^Y = P^Y ∪ R^Y. These conditions together are equivalent to the conditions X ⊆ Y, Y |= P, Y |= R, X |= P^Y, and X |= R^Y. This new set of conditions is clearly equivalent to (X, Y) ∈ SE(P) and (X, Y) ∈
SE(R) or, equivalently, (X, Y ) ∈ SE(P ) ∩ SE(R).

(⇐). Let us assume that SE(P ) = SE(Q). By (2.24), for every program R,

SE(P ∪ R) = SE(P ) ∩ SE(R) = SE(Q) ∩ SE(R) = SE(Q ∪ R).

Thus, by Proposition 2.10, P ∪ R and Q ∪ R have the same stable models.


(⇒). Let us assume that P and Q are strongly equivalent and that SE(P) \ SE(Q) ≠ ∅. Let (X, Y) ∈ SE(P) \ SE(Q). Then, (X, Y) ∈ SE(P) and (X, Y) ∉ SE(Q). The former implies that Y is a model of P and, so, of P^Y, as well. Thus, Y = LM(P^Y ∪ Y). Since P^Y ∪ Y = (P ∪ Y)^Y, it follows that Y is a stable model of P ∪ Y. Since P and Q are strongly equivalent, Y is a stable model of Q ∪ Y and, so, a model of Q. Since (X, Y) ∉ SE(Q), X ⊭ Q^Y.
Let us define R = X ∪ {y ← y′ : y, y′ ∈ Y \ X}. We have that Y is a model of both Q ∪ R and (Q ∪ R)^Y. Let Z be any model of (Q ∪ R)^Y such that Z ⊆ Y. Since R is a Horn program, (Q ∪ R)^Y = Q^Y ∪ R and, so, Z |= Q^Y ∪ R. In particular, Z |= Q^Y and, so, X ≠ Z (we recall that X ⊭ Q^Y). Since Z |= R, X ⊆ Z. Thus, Z \ X ≠ ∅. Let y ∈ Z \ X. Clearly, we have y ∈ Y \ X. Moreover, for every y′ ∈ Y \ X, we also have that the rule y′ ← y is in R. Since Z |= R, Y \ X ⊆ Z. Consequently, Y ⊆ Z. It follows that Z = Y and, so, Y is a stable model of Q ∪ R. By the strong equivalence of P and Q, Y is a stable model of P ∪ R and, consequently, Y = LM((P ∪ R)^Y). Since X |= P^Y ∪ R = (P ∪ R)^Y and X ⊆ Y, X = Y. We argued above that X ⊭ Q^Y. Thus, Y ⊭ Q^Y and, so, Y ⊭ Q, a contradiction. Hence, we obtain SE(P) ⊆ SE(Q). The converse inclusion follows by the symmetry argument.
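For small finite programs over a fixed vocabulary, Theorem 2.20 yields a direct, if exponential, test of strong equivalence: compute and compare the sets of se-models. A sketch under our usual rule representation; names are ours.

    from itertools import combinations

    def is_model(rules, m):
        return all(head in m for head, pos, neg in rules
                   if pos <= m and not (neg & m))

    def reduct(program, m):
        return [(head, pos, set()) for head, pos, neg in program if not (neg & m)]

    def se_models(program, atoms):
        subsets = [set(c) for k in range(len(atoms) + 1)
                   for c in combinations(sorted(atoms), k)]
        return {(frozenset(x), frozenset(y))
                for y in subsets if is_model(program, y)
                for x in subsets if x <= y and is_model(reduct(program, y), x)}

    def strongly_equivalent(p, q, atoms):
        return se_models(p, atoms) == se_models(q, atoms)

    # {a} and {a <- not b} have the same stable models but are not strongly
    # equivalent, in line with the example discussed earlier in this section.
    print(strongly_equivalent([('a', set(), set())], [('a', set(), {'b'})], {'a', 'b'}))  # False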

Comments. The notion of strong equivalence has received a significant amount


of attention. Several variants of the notion have been proposed, among which the
most interesting one is that of uniform equivalence [Sagiv 1988, Maher 1988, Eiter
and Fink 2003] (two propositional programs P and Q are uniformly equivalent if for
every set A of atoms, the programs P ∪ A and Q ∪ A have the same stable models).
As with other concepts, the notions of strong and uniform equivalence have been
extended to the first-order setting. Discussing this line of research, including the
vast array of complexity results is beyond the scope of this tutorial. Instead, we refer
to a survey by Woltran [2011], which is a good source of references.

The Well-Founded Model Semantics


2.6 For Horn programs, the stable-model semantics coincides with the least-model
semantics and forces a truth value on every atom appearing in the program. For
programs with negation it is no longer so. Let P be the program consisting of rules
a
b←c
(2.25)
d ← not e, not b
e ← a, not d.

This program has two stable models: {a, d} and {a, e}. Considering the stable
models of P as representations of two possible ways how things may be, P entails
a, ¬b and ¬c. However, P entails neither d nor ¬d, and neither e nor ¬e. These
properties of P are captured by a three-valued interpretation of At = {a, b, c, d , e},
in which a is true, b and c are false and d and e are unknown. The problem we
study in this section is how to derive this three-valued interpretation or, at least,
approximate it, without having to compute all stable models. In this example, it
seems possible. The program clearly makes a true. Since there is no way to derive c,
the program makes c false. But now, there is no way to derive b. Thus, b is false, too.
Simplifying the program with respect to these truth values results in the program

d ← not e
(2.26)
e ← not d.

At this point, we see that no further derivations are possible and, so, d and e are
unknown.
In the remainder of this section, we study methods to approximate truth values
forced by a program on its atoms. To this end, we will consider three- and four-
valued interpretations on sets of propositional atoms, most often on the set At(P )
of atoms of a program P . We recall (cf. Section 2.2) that we represent a four-
valued interpretation of At(P ) by a pair (I , J ) of interpretations (subsets) of At(P )
and that we write All(P) and All4(P) for the lattices of two-valued and four-valued interpretations, respectively, the former ordered by inclusion and the latter by the precision ordering. We recall that both lattices are complete.
Given a four-valued interpretation (I , J ), atoms in I can be thought of as certain,
and those not in J as impossible. Because of the latter, atoms in J will often be
referred to as possible. In the four-valued setting, an atom is true if it is certain and
possible, false if it is not certain and not possible, unknown if it is possible but
not certain, and inconsistent if it is certain and impossible (cf. Section 2.2). We are
primarily interested in the space of three-valued interpretations, that is, four-valued
interpretations (I , J ) such that I ⊆ J (they do not use the inconsistent truth value).
We call such interpretations consistent. However, for the sake of elegance and
simplicity of technical arguments, it is convenient not to impose any restrictions
on the class of four-valued interpretations and consider the entire lattice All 4(P ).
The mappings ΦP : All4(P) → All(P) and TP : All4(P) → All4(P) play the central role in our discussion. We recall that

ΦP(I, J) = {a : a ← B+, not B− ∈ P, B+ ⊆ I, and B− ∩ J = ∅}.

If (I, J) is an approximation of the set of atoms that are true under P, then all atoms in ΦP(I, J) can be regarded as true under P, too. Indeed, they can be derived by rules whose positive body contains only atoms that are certain and whose negative body consists of atoms that are impossible. Similarly, ΦP(J, I) can be regarded as the set of atoms that are possible. Indeed, it contains those atoms that can be derived by means of a rule that possibly might "fire" (atoms that appear in the body of the rule non-negated are possible and those that appear negated are not certain and so, might possibly be false). Consequently, the pair (ΦP(I, J), ΦP(J, I)) can also be viewed as an approximation to the set of atoms that are true under the program. This is the intuition that motivated the four-valued one-step provability operator TP which, we recall, is given by

TP(I, J) = (ΦP(I, J), ΦP(J, I)).

We also recall that given two interpretations (I, J) and (I′, J′), we defined (I, J) ≤p (I′, J′) to hold if and only if I ⊆ I′ and J′ ⊆ J. We say that an interpretation
(I , J ) is a four-valued model of P if TP (I , J ) ≤p (I , J ). This notion is equivalent to
the notion of a model in the four-valued Belnap logic [Belnap 1977, Fitting 2002].
Fixpoints of TP are of interest as they are those models of P that TP “revises”
back into themselves or, to put it differently, that cannot be “revised away” by
applying TP . We call fixpoints of TP four-valued supported models of P . Consistent
fixpoints of TP are called partial supported models. The terminology is motivated by
the following property.
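Both ΦP and the four-valued operator TP can be implemented directly from their definitions. The sketch below (Python; the encoding of rules as (head, positive body, negative body) triples and the function names are ours, not notation used in this chapter) also checks the two notions just introduced: whether TP(I, J) ≤p (I, J), and whether (I, J) is a fixpoint of TP.

    # A rule is (head, pos, neg); a four-valued interpretation is a pair (I, J) of sets of atoms.
    P = [("a", set(), set()),
         ("b", {"c"}, set()),
         ("d", set(), {"e", "b"}),
         ("e", {"a"}, {"d"})]          # program (2.25)

    def phi(prog, I, J):
        # Phi_P(I, J): heads of rules whose positive body lies within I and whose
        # negated atoms all lie outside J
        return {h for h, pos, neg in prog if pos <= I and not (neg & J)}

    def T4(prog, I, J):
        # four-valued one-step provability: T_P(I, J) = (Phi_P(I, J), Phi_P(J, I))
        return (phi(prog, I, J), phi(prog, J, I))

    def is_model4(prog, I, J):
        # (I, J) is a four-valued model iff T_P(I, J) <=_p (I, J)
        TI, TJ = T4(prog, I, J)
        return TI <= I and J <= TJ

    def is_supported4(prog, I, J):
        # a four-valued supported model is a fixpoint of T_P
        return T4(prog, I, J) == (I, J)

    # {a, d} is a two-valued supported model of (2.25), so ({a, d}, {a, d}) is a fixpoint of T_P:
    print(is_model4(P, {"a", "d"}, {"a", "d"}), is_supported4(P, {"a", "d"}, {"a", "d"}))   # True True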

Proposition 2.11 Let P be a program and M an interpretation. Then M is a supported model of P if and only if (M, M) is a fixpoint of TP.

Proof It follows directly from the definition that ΦP(M, M) = TP(M). Thus,

TP (M , M) = (TP (M), TP (M))

and so, TP (M) = M if and only if TP (M , M) = (M , M).

There are programs with no two-valued supported models (for instance, {p ← not p}). However, partial supported models are guaranteed to exist. In fact, there is a unique least partial supported model, a property that follows from Theorem 2.5, according to which the operator TP is monotone and, therefore, has a least
fixpoint. We call this least partial supported model the Kripke-Kleene model of P and denote it by KK(P). The Kripke-Kleene model of P is three-valued.

Proposition 2.12 Let P be a propositional logic program and let KK(P ) = (I , J ). Then, I ⊆ J , that is,
KK(P ) is three-valued.

Proof Let us consider a three-valued interpretation (I, J). By definition, I ⊆ J. By Theorem 2.5,

ΦP(I, J) ⊆ ΦP(I, I) ⊆ ΦP(J, I).

It follows that TP(I, J) = (ΦP(I, J), ΦP(J, I)) is three-valued. The least fixpoint of TP is obtained by iterating it over the three-valued interpretation (∅, H(P))
(the argument we used to construct the least fixpoint of the operator TP applies
almost literally, except that the construction needs to extend to ordinals beyond ω;
footnote 4). Since (∅, H (P )) is three-valued, using the property we just proved, it is
easy to show by transfinite induction that every step of the process to construct the
least fixpoint of TP yields a three-valued interpretation.

Let us illustrate the iterative construction behind the Kripke-Kleene model. To this end, let P be the program given by the rules (2.25). We have At(P) = {a, b, c, d, e} and
TP (∅, {a, b, c, d , e}) = ({a}, {a, b, d , e})
TP ({a}, {a, b, d , e}) = ({a}, {a, d , e})
TP ({a}, {a, d , e}) = ({a}, {a, d , e}).
Since, as we see, we have already reached a fixpoint, KK(P ) = ({a}, {a, d , e}) and it
is precisely the interpretation we derived informally above. However, in general the
Kripke-Kleene model is quite weak. Let now P be the program given by the rules
a ← a        (2.27)
b ← not b, not a.
We have At(P ) = {a, b} and

TP (∅, {a, b}) = (∅, {a, b}).

Thus, KK(P ) = (∅, {a, b}) and it gives us no information about the truth values of the
atoms. At the same time, {a} is the only supported model of P and the construction
of the Kripke-Kleene model fails to inform us about it.
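For a finite propositional program, the Kripke-Kleene model can be computed by iterating TP from the least precise interpretation (∅, At(P)) until a fixpoint is reached, exactly as in the two calculations above. A minimal sketch under the same ad hoc Python encoding used earlier (rules as (head, positive body, negative body) triples; names ours):

    def phi(prog, I, J):
        return {h for h, pos, neg in prog if pos <= I and not (neg & J)}

    def kripke_kleene(prog, atoms):
        I, J = set(), set(atoms)                       # least precise interpretation
        while True:
            I2, J2 = phi(prog, I, J), phi(prog, J, I)  # one application of T_P
            if (I2, J2) == (I, J):
                return I, J
            I, J = I2, J2

    P25 = [("a", set(), set()), ("b", {"c"}, set()),
           ("d", set(), {"e", "b"}), ("e", {"a"}, {"d"})]    # program (2.25)
    P27 = [("a", {"a"}, set()), ("b", set(), {"b", "a"})]    # program (2.27)

    print(kripke_kleene(P25, {"a", "b", "c", "d", "e"}))   # KK = ({a}, {a, d, e})
    print(kripke_kleene(P27, {"a", "b"}))                  # KK = (∅, {a, b})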
While the Kripke-Kleene construction offers an approximation to two-valued
supported models (albeit often a poor one, as the last example demonstrated), we
are primarily interested in approximating stable models. Since stable models are
supported models, we can use the Kripke-Kleene model as an approximation to
stable models (when they exist). However, the Kripke-Kleene model often fails to
provide a good approximation even for Horn programs. Let P = {p ← p}. It is easy
to verify that KK(P ) = (∅, {p}) while P has a unique stable model in which p is
false. The Kripke-Kleene process fails to discover that. The problem is that p is
self-supported.
To obtain more information about the truth values of atoms under a program, we need a stronger approximation revision method. Let (I, J) be an approximation. The assumption that all atoms not in J are false implies that every rule in P with each negative literal in its body having the form not a, where a ∉ J, can be used when deriving atoms from P. The reduct P^J is the set of all such rules with their negative literals simplified away. Thus, any atom we can derive from P^J can be taken as true under P (as long as every atom not in J is false), and the set of all those atoms can be taken as the lower bound on the revised approximation. That is, we can revise the lower bound of the approximation to LM(P^J) = γP(J). Similarly, using the fact that all atoms not in I are possibly false, we are justified in using the rules of the reduct P^I to establish atoms that are possibly true under P (given an approximation (I, J)). That is, we can revise the upper bound of the approximation to LM(P^I) = γP(I). This motivates the following approximation revision operator:

SP(I, J) = (γP(J), γP(I)).

The operator SP has the following properties similar to those of TP .

Theorem 2.21 For every logic program P, SP is monotone with respect to the precision ordering and, for all interpretations I and J of At(P) such that I ⊆ J, γP(J) ⊆ γP(I), that is, SP(I, J) is consistent.

Proof Let us assume that (I, J) ≤p (I′, J′). It follows that I ⊆ I′ and J′ ⊆ J. By the antimonotonicity of γP (cf. Proposition 2.5),

γP(J) ⊆ γP(J′) and γP(I′) ⊆ γP(I).

Thus,

SP(I, J) = (γP(J), γP(I)) ≤p (γP(J′), γP(I′)) = SP(I′, J′).

Moreover, if I ⊆ J then γP(J) ⊆ γP(I). Thus, a single iteration of the operator SP on a consistent interpretation yields a consistent interpretation. It is a routine matter to use this property in transfinite induction to show that every interpretation obtained by iterating SP over (∅, At(P)) is consistent.
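The operator SP is just as easy to implement: compute the two reducts and take their least models. The sketch below (Python; rule encoding and names ours, as in the earlier snippets) implements γP(J) = LM(P^J) and SP(I, J) = (γP(J), γP(I)), and illustrates the characterization of stable models given next in Proposition 2.13.

    # A rule is (head, pos, neg).
    def least_model(horn):
        m = set()
        while True:
            new = {h for h, pos, _ in horn if pos <= m}
            if new <= m:
                return m
            m |= new

    def gamma(prog, J):
        # gamma_P(J) = LM(P^J), the least model of the Gelfond-Lifschitz reduct
        return least_model([(h, pos, set()) for h, pos, neg in prog if not (neg & J)])

    def S(prog, I, J):
        # S_P(I, J) = (gamma_P(J), gamma_P(I))
        return (gamma(prog, J), gamma(prog, I))

    P25 = [("a", set(), set()), ("b", {"c"}, set()),
           ("d", set(), {"e", "b"}), ("e", {"a"}, {"d"})]

    # M is a stable model exactly when (M, M) is a fixpoint of S_P:
    print(S(P25, {"a", "d"}, {"a", "d"}) == ({"a", "d"}, {"a", "d"}))   # True
    print(S(P25, {"a"}, {"a"}) == ({"a"}, {"a"}))                       # False: {a} is not stable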
The following result shows that stable models of a program P can be described
in terms of (some) fixpoints of the operator SP .

Proposition 2.13 For every program P , an interpretation M ⊆ At(P ) is a stable model of P if and only
if (M , M) is a fixpoint of SP .

Proof It is clear that SP(M, M) = (γP(M), γP(M)). Thus, the assertion follows.

Proposition 2.13 motivates the following concepts. An interpretation (I, J) ∈ All4(P) is a four-valued stable model of P if it is a fixpoint of the operator SP. A
consistent four-valued stable model of P is a partial stable model of P (or a three-
valued stable model of P ). Under this notation, we have the following generalization
of Theorem 2.15.

Theorem 2.22 Let P be a propositional logic program and (I , J ) a four-valued stable model of P .
Then (I , J ) is a four-valued supported model of P .

Proof Let (I, J) be a four-valued stable model of P. By the definition, (I, J) = SP(I, J), that is, I = γP(J) and J = γP(I). It follows that I, as a least fixpoint of T_{P^J}, satisfies I = ΦP(I, J). Similarly, J satisfies J = ΦP(J, I). Thus, (I, J) = TP(I, J) and so, (I, J) is a four-valued supported model of P.

The Knaster-Tarski theorem combined with Theorem 2.21 implies that the
operator SP has a least fixpoint, which is consistent and provides the basis for the
following definition of the well-founded model of a program. The concept of the
well-founded model is due to Van Gelder et al. [1991]. Our presentation follows the
exposition proposed by Fitting [2002].

Definition 2.17 The well-founded model of a propositional logic program P is defined as the least
fixpoint of SP . We denote it by WF(P ).

By the definition, the well-founded model approximates all partial stable models
and, in particular, all two-valued stable models (if they exist).
The well-founded model of a program is a “tighter” approximation than the
Kripke-Kleene model. This property follows directly from Theorem 2.22.

Corollary 2.5 For every propositional program P , KK(P ) ≤p WF(P ).

Let us consider again the program P = {p ← p}. We have seen that KK(P ) =
(∅, {p}). On the other hand, we have
SP (∅, {p}) = (∅, ∅), and
SP (∅, ∅) = (∅, ∅).
Thus, WF(P ) = (∅, ∅) and so, p is false (rather than unknown) in the well-founded
model, in agreement with the intuitive meaning of the program.
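In the finite case, WF(P) can be computed by iterating SP from (∅, At(P)) until a fixpoint is reached, exactly as in the calculation just shown. A sketch under the same assumptions as the earlier snippets (Python; ad hoc rule encoding, names ours):

    # A rule is (head, pos, neg).
    def least_model(horn):
        m = set()
        while True:
            new = {h for h, pos, _ in horn if pos <= m}
            if new <= m:
                return m
            m |= new

    def gamma(prog, J):
        return least_model([(h, pos, set()) for h, pos, neg in prog if not (neg & J)])

    def well_founded(prog, atoms):
        I, J = set(), set(atoms)
        while True:
            I2, J2 = gamma(prog, J), gamma(prog, I)   # one application of S_P
            if (I2, J2) == (I, J):
                return I, J
            I, J = I2, J2

    print(well_founded([("p", {"p"}, set())], {"p"}))        # (∅, ∅): p is false
    P25 = [("a", set(), set()), ("b", {"c"}, set()),
           ("d", set(), {"e", "b"}), ("e", {"a"}, {"d"})]
    print(well_founded(P25, {"a", "b", "c", "d", "e"}))      # ({a}, {a, d, e})

For program (2.25) the well-founded model happens to coincide with the Kripke-Kleene model computed earlier; in general it is at least as precise (Corollary 2.5).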

Remark 2.3 We note that originally the semantics we are discussing here were introduced with
their own specific motivations. In particular, the motivation behind the Kripke-
Kleene and the well-founded semantics was to extend the least-model semantics
of Horn programs to programs with negation, with the objective to preserve the
property of the uniqueness of an intended model. In general, this meant moving
beyond the setting of two-valued interpretations. In contrast, the objective behind
the proposal of the stable-model semantics was to keep intended models two
valued. But that meant that the existence and uniqueness properties of intended
models could no longer be guaranteed. Interestingly, it turned out that all major
semantics can be seen from the perspective of the unifying algebraic framework
of fixpoints of lattice operators. This perspective elucidates salient features of the
semantics and allows us to relate them to each other showing, for instance, how
stable models can be approximated by the Kripke-Kleene and the well-founded
semantics. We discuss several other similar connections below.

We have seen that there is a close connection between the semantics of stable
models and the well-founded semantics. First, the well-founded model of a pro-
gram is one of (possibly many) four-valued stable models of the program. Moreover,
if WF(P) = (I, J) and M is a two-valued stable model of P then we have I ⊆ M ⊆ J.
That is, the well-founded model provides some information about stable models.
We will now discuss some stronger connections between the two.

Theorem 2.23 Let P be a propositional logic program such that WF(P) is two-valued and equals (I, I). Then I is the unique stable model of P.

Proof Since WF(P) is a fixpoint of the operator SP,

(I, I) = SP(I, I) = (γP(I), γP(I)).

Thus, I = γP(I) and so, I is a stable model of P. Let J be any stable model of P. It follows that (J, J) is a fixpoint of SP (Proposition 2.13). Thus, (I, I) ≤p (J, J). Consequently, I ⊆ J and J ⊆ I and so, I = J.

In view of this result, it is natural to ask if there are any interesting classes of
programs for which the well-founded model is two-valued. The classes of Horn
programs and, more generally, stratified programs are obvious candidates and,
indeed, stratified programs have the property in question.

Theorem 2.24 Let P be a stratified propositional program. Then, WF(P ) is two-valued.


Proof We will only consider Horn programs and leave the more technically (but not conceptually) involved case of stratified programs to the reader. If P is a Horn program, for every interpretation I ⊆ At(P), γP(I) = LM(P). Thus, SP(∅, At(P)) = (LM(P), LM(P)) and SP(LM(P), LM(P)) = (LM(P), LM(P)). It follows that WF(P) = (LM(P), LM(P)).

We conclude with two alternative characterizations of the well-founded model. First, we observe that the operator γP² is monotone.

Proposition 2.14 For every propositional program P, the operator γP² is monotone.

Proof Let I ⊆ J. Since γP is antimonotone (cf. Proposition 2.5), γP(J) ⊆ γP(I). Thus, again by the antimonotonicity of γP, γP²(I) ⊆ γP²(J).

The essence of Proposition 2.14 is that γP² has a least fixpoint, which we will denote by LF(γP²). We now have the following result that can be traced back to an
early paper on the well-founded semantics due to Van Gelder [1993] in which he
proposed and studied alternating fixpoints of programs.

Theorem 2.25 Let P be a propositional logic program and let A be the least fixpoint of γP². Then WF(P) = (A, γP(A)).

Proof By definition, SP(I, J) = (I, J) if and only if γP(J) = I and γP(I) = J or, equivalently, γP²(I) = I and J = γP(I). Thus, if A is the least fixpoint of γP², then (A, γP(A)) is the least fixpoint of SP, that is, WF(P) = (A, γP(A)).
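Theorem 2.25 suggests yet another way to compute the well-founded model: iterate γP² from ∅ to its least fixpoint A and pair A with γP(A). A sketch under the same assumptions as the earlier snippets (Python; encoding and names ours):

    # A rule is (head, pos, neg).
    def least_model(horn):
        m = set()
        while True:
            new = {h for h, pos, _ in horn if pos <= m}
            if new <= m:
                return m
            m |= new

    def gamma(prog, J):
        return least_model([(h, pos, set()) for h, pos, neg in prog if not (neg & J)])

    def well_founded_alt(prog):
        # alternating fixpoint: A = least fixpoint of gamma_P squared, WF(P) = (A, gamma_P(A))
        A = set()
        while True:
            A2 = gamma(prog, gamma(prog, A))
            if A2 == A:
                return A, gamma(prog, A)
            A = A2

    P25 = [("a", set(), set()), ("b", {"c"}, set()),
           ("d", set(), {"e", "b"}), ("e", {"a"}, {"d"})]
    print(well_founded_alt(P25))   # ({a}, {a, d, e}), as before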

Finally, we present yet another characterization of the well-founded model, the one proposed in the original paper by Van Gelder et al. [1991].

Definition 2.18 Let P be a propositional logic program and (I, J) a four-valued interpretation of At(P). A set X ⊆ At(P) is an unfounded set for P with respect to (I, J) if for each atom a ∈ X and for each rule a ← B in P at least one of these two conditions holds:
1. B+ ⊄ J or I ∩ B− ≠ ∅.
2. B+ ∩ X ≠ ∅.
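Checking whether a given set is unfounded amounts to testing these two conditions for every rule whose head belongs to the set. A small sketch (Python; the rule encoding and the function name are ours): for program (2.27) it confirms that {a} is unfounded with respect to (∅, {a, b}), since the only rule for a, namely a ← a, relies on a itself.

    # A rule is (head, pos, neg).
    def is_unfounded(prog, X, I, J):
        # X is unfounded w.r.t. (I, J) iff every rule for every atom of X satisfies (1) or (2)
        return all(not (pos <= J) or (I & neg)    # (1): the body is false in (I, J)
                   or (pos & X)                   # (2): the rule relies on another atom of X
                   for h, pos, neg in prog if h in X)

    P27 = [("a", {"a"}, set()), ("b", set(), {"b", "a"})]   # program (2.27)
    print(is_unfounded(P27, {"a"}, set(), {"a", "b"}))      # True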

We note that unfounded sets are in some sense complementary to the idea of
external support, which we exploited when discussing the Loop Theorem.
Informally, the condition (1) says that B evaluates to false in (I , J ) and so, the
rule a ← B cannot be used. The condition (2) says that using a ← B to derive a
depends on some other element of X being true, and implies that self-support
cannot be avoided when deriving elements of X. Thus, elements of an unfounded
set cannot be derived from a program under the assumption that I contains atoms
that are certain and J those that are possible (those that are not impossible).
Consequently, elements of unfounded sets can be regarded as false. This is the
reason why unfounded sets are of interest to the semantics of a program. The key
property of unfounded sets is that they are closed under unions.

Theorem 2.26 Let P be a propositional logic program and (I, J) a four-valued interpretation of At(P). The union of any collection of subsets of At(P) that are unfounded for P with respect to (I, J) is unfounded for P with respect to (I, J).

Proof Let 𝒳 be a family of subsets of At(P) that are unfounded for P with respect to (I, J). Let us set Y = ⋃𝒳. To show that Y is unfounded for P with respect to (I, J), let a ∈ Y and let us consider a rule a ← B in P. If B+ ⊄ J or I ∩ B− ≠ ∅, then the condition (1) of Definition 2.18 holds. Thus, let us assume otherwise (B+ ⊆ J and I ∩ B− = ∅). Since a ∈ Y, there is X ∈ 𝒳 such that a ∈ X. Since X is unfounded for P with respect to (I, J) and the condition (1) does not hold for the atom a and the rule a ← B, it follows that B+ ∩ X ≠ ∅. Thus, B+ ∩ Y ≠ ∅ (as X ⊆ Y).

As a corollary of this result, we obtain that for every propositional program P and every four-valued interpretation (I, J) of At(P), there exists a greatest unfounded set for P with respect to (I, J). We will denote it by GUSP(I, J). The mapping GUSP is monotone with respect to the precision ordering.

Proposition 2.15 Let P be a propositional logic program and (I, J) and (I′, J′) two four-valued interpretations of At(P) such that (I, J) ≤p (I′, J′). Then, GUSP(I, J) ⊆ GUSP(I′, J′).

Proof It suffices to prove that if X is unfounded for P with respect to (I, J) then X is unfounded for P with respect to (I′, J′). To this end, let us consider an atom a ∈ X and a rule a ← B in P. If B+ ⊄ J or I ∩ B− ≠ ∅ then, since (I, J) ≤p (I′, J′), B+ ⊄ J′ or I′ ∩ B− ≠ ∅. If B+ ∩ X ≠ ∅ then, obviously, X satisfies the same condition for a and a ← B with respect to (I′, J′).

We now define the operator WP : All4(P) → All4(P) by setting

WP(I, J) = (ΦP(I, J), At(P) \ GUSP(I, J)),        (2.28)

where (I, J) ∈ All4(P). We note that the operator WP is monotone, which follows from our earlier result on the monotonicity of the operator TP (cf. Theorem 2.5; it implies that if (I, J) ≤p (I′, J′) then ΦP(I, J) ⊆ ΦP(I′, J′)) and Proposition 2.15. Thus, it has a least fixpoint. In general, the operator WP is different from SP.12 However, the least fixpoints of the two operators coincide.

12. Indeed, we have ΦP(∅, ∅) = T_{P^∅}(∅) and γP(∅) = LM(P^∅). In general, T_{P^∅}(∅) ≠ LM(P^∅) (the latter is the limit of iterating T_{P^∅} over ∅). Whenever the two sets are different, WP(∅, ∅) ≠ SP(∅, ∅).
Theorem 2.27 Let P be a propositional program. The well-founded model WF(P) of P is the least fixpoint of the operator WP.
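The greatest unfounded set, and with it the operator WP, can also be computed directly: an atom stays out of the greatest unfounded set exactly when some rule for it has a body that is not false with respect to (I, J) and whose positive part consists entirely of atoms already known to stay out. This iterative reading is our own rephrasing, not taken from the chapter; the sketch below (Python; encoding and names ours) uses it to compute GUSP, WP, and the least fixpoint of WP, which by Theorem 2.27 is WF(P).

    # A rule is (head, pos, neg).
    def phi(prog, I, J):
        return {h for h, pos, neg in prog if pos <= I and not (neg & J)}

    def gus(prog, atoms, I, J):
        # greatest unfounded set w.r.t. (I, J): the complement of the set of atoms that
        # can be derived, iteratively, by rules whose bodies are not false in (I, J)
        outside = set()
        while True:
            new = {h for h, pos, neg in prog
                   if pos <= J and not (I & neg) and pos <= outside}
            if new <= outside:
                return set(atoms) - outside
            outside |= new

    def W(prog, atoms, I, J):
        # W_P(I, J) = (Phi_P(I, J), At(P) \ GUS_P(I, J))
        return (phi(prog, I, J), set(atoms) - gus(prog, atoms, I, J))

    def lfp_W(prog, atoms):
        I, J = set(), set(atoms)
        while True:
            I2, J2 = W(prog, atoms, I, J)
            if (I2, J2) == (I, J):
                return I, J
            I, J = I2, J2

    P25 = [("a", set(), set()), ("b", {"c"}, set()),
           ("d", set(), {"e", "b"}), ("e", {"a"}, {"d"})]
    print(lfp_W(P25, {"a", "b", "c", "d", "e"}))   # ({a}, {a, d, e}), i.e., WF(P)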

The fact that the well-founded model approximates stable models allows us to
use it (or, more commonly, computation identifying unfounded sets) as a strong
propagation rule (search-space pruning mechanism) when computing stable mod-
els of programs. The propagation method based on the well-founded semantics
is, in general, stronger than similar methods based on the Kripke-Kleene seman-
tics. However, computation of the well-founded model is relatively expensive. The
well-founded model of a program can be computed in time that is quadratic in the
size of P , as opposed to the linear-time computation of the Kripke-Kleene model.
Some improvements over the quadratic-time worst-case bound have been obtained
[Berman et al. 1995, Lonc and Truszczyński 2001], but it is still an open problem
whether significantly faster algorithms exist. It is, however, important to note that the quadratic worst-case behavior arises quite rarely. Moreover, for many programs written
to exploit the well-founded model as the unique intended model in a Prolog-style
fashion, the well-founded semantics can be computed fast enough to make sys-
tems such as XSB [Sagonas et al. 1994] effective in many practical settings (more
comments on XSB follow below).

Extensions to Programs with Variables. The well-founded semantics can be defined


for programs with variables. As in the other cases we discussed, the extension ex-
ploits the concept of grounding. Let P be a program with variables over a vocabulary
σ . A pair (I , J ) of two Herbrand interpretations I and J of σ (a four-valued Herbrand
interpretation of σ ) is the well-founded model of P if (I , J ) is the well-founded model
of gr(P ). It follows directly from this definition that the well-founded model of a
program is, in fact, a three-valued interpretation of σ .
All other notions, most importantly those of partial stable and supported mod-
els and the Kripke-Kleene semantics, extend to programs with variables in the same
way. The properties we discussed generalize, too. In particular, the Kripke-Kleene
model approximates all partial supported models and so, also all partial stable
models, including the well-founded model. Moreover, the well-founded model ap-
proximates all partial stable models. Finally, the well-founded model of any strati-
fied or locally stratified program is two-valued.
The fact that for many programs with negation, including, as we just noted, those that are stratified, the well-founded model is two-valued gave rise to efforts aiming at making the well-founded semantics the theoretical basis of logic programming with negation. A Prolog-like query-evaluation programming language, XSB, was proposed, developed, and implemented by Warren and his collaborators [Chen and
Warren 1993a, Chen and Warren 1993b, Chen et al. 1993, Sagonas et al. 1994]. It is
at present among the most broadly used Prolog descendants.
In an orthogonal effort, Denecker proposed the well-founded semantics as the basis for the concept of an inductive definition [Denecker 1998, Denecker 2000, Denecker and Ternovska 2008]. Together with his collaborators he designed and implemented a knowledge representation and reasoning system FO(.) [Denecker
2009], in which first-order logic is extended by modules called definitions that
are interpreted in terms of the well-founded semantics adjusted to account for
“inputs.” This computational knowledge representation system is now among the
best in terms of performance and modeling capabilities [Denecker et al. 2009,
Calimeri et al. 2014].

Concluding Remarks
2.7 This brings our brief tour of the two semantics to an end. There is much that has
been omitted or addressed only briefly. For instance, we said very little about the
impressive body of results concerning the complexity and the expressivity issues.
Excellent papers by Schlipf [1995] and Dantsin et al. [2001] can help fill this gap.
Similarly, we only briefly mentioned answer-set programming, which transformed the stable-model semantics into a practical computational methodology and a formalism for modeling and solving hard search problems. For more details we refer to the
paper by Brewka et al. [2011], the monograph by Gebser et al. [2012], papers report-
ing on the answer-set programming competitions [Denecker et al. 2009, Calimeri
et al. 2014], and to the special issue of the AI Magazine dedicated to answer-set
programming [Brewka et al. 2016]. Ongoing development efforts that have already resulted in answer-set program processing software with excellent performance and a growing track record of successful practical applications include the family of tools built around the gringo grounder and the clasp solver,13 the IDP project,14 and the dlv project.15
Practical systems based on the well-founded semantics form another thriving area of research, with XSB,16 Ontobroker,17 and Flora-218 among the most prominent examples.
13. The Potsdam group: http://potassco.sourceforge.net .


14. The Leuven group: http://dtai.cs.kuleuven.be/software/idp .
15. The University of Calabria group: http://www.dlvsystem.com .
16. http://xsb.sourceforge.net .
17. http://www.semafora-systems.com/en/products/ontobroker/ .
18. http://flora.sourceforge.net .
Yap19 and Ciao Prolog20 offer partial support for the well-founded semantics and plan to expand it.

19. http://www.dcc.fc.up.pt/~vsc/Yap/ .
20. http://ciao-lang.org .
Finally, we have only mentioned but not discussed in any detail recent gener-
alizations of the stable-model semantics to the full syntax of first-order logic and
arbitrary (not only Herbrand) interpretations. The papers by Ferraris et al. [2011]
and Pearce and Valverde [2008] are good starting points for an in-depth look into
that topic. A useful tool to study extensions to the first-order language is the theory
of the stable-model semantics for infinitary propositional logic [Truszczynski 2012,
Gebser et al. 2015, Harrison et al. 2015].

Acknowledgments
The author thanks the reviewers for many constructive comments that helped to
improve the presentation. The author wants to extend special thanks to Thomas
Eiter and Michael Kifer who commented extensively on earlier drafts of the chapter.

References
H. Andréka and I. Németi. 1978. The generalized completeness of Horn predicate-logic as a programming language. Acta Cybern., 4(1): 3–10. 135
K. Apt, H. Blair, and A. Walker. 1988. Towards a theory of declarative knowledge. In J. Minker,
ed., Foundations of Deductive Databases and Logic Programming, pp. 89–142. Morgan
Kaufmann, Los Altos, CA. DOI: 10.1016/B978-0-934613-40-8.50006-3. 122, 137, 138,
145, 152
N. D. Belnap. 1977. A useful four-valued logic. In J. M. Dunn and G. Epstein, eds., Modern
Uses of Multiple-Valued Logic. D. Reidel. DOI: 10.1007/978-94-010-1161-7_2. 168
K. Berman, J. Schlipf, and J. Franco. 1995. Computing the well-founded semantics faster.
In Logic Programming and Nonmonotonic Reasoning (Lexington, KY, 1995), volume 928
of Lecture Notes in Artificial Intelligence, pp. 113–125. Springer. DOI: 10.1007/3-540-
59487-6_9. 175
E. Börger. 1974. Beitrag zur Reduktion des Entscheidungsproblems auf Klassen von
Horn-formeln mit kurzen Alternationen. Archiv für mathematische Logik und
Grundlagenforschung, 16: 67–84. 135
S. Brass and J. Dix. 1995a. Characterizations of the stable semantics by partial evaluation.
In V. W. Marek and A. Nerode, eds., Proceedings of the 3rd International Conference on
Logic Programming and Nonmonotonic Reasoning, LPNMR 1995, volume 928 of LNCS,
pp. 85–98. Springer. DOI: 10.1007/3-540-59487-6_7. 162

S. Brass and J. Dix. 1995b. Disjunctive semantics based upon partial and bottom-up
evaluation. In L. Sterling, ed., Proceedings of the 12th International Conference on
Logic Programming, ICLP 1995, pp. 199–213. MIT Press. DOI: 10.1007/978-3-642-
51136-3_13. 162
S. Brass and J. Dix. 1997. Characterizations of the disjunctive stable semantics by partial
evaluation. J. Log. Program., 32(3): 207–228. DOI: 10.1016/S0743-1066(96)00115-X.
162
G. Brewka, T. Eiter, and M. Truszczynski. 2011. Answer set programming at a glance.
Commun. ACM, 54(12): 92–103. DOI: 10.1145/2043174.2043195. 122, 144, 149, 176
G. Brewka, T. Eiter, and M. Truszczynski, eds. 2016. Special Issue on Answer Set Programming,
volume 37(3) of AI Magazine. AAAI. DOI: 10.1609/aimag.v37i3.2669. 176
J. Büchi. 1962. Turing Machines and the Entscheidungsproblem. Mathematische Annalen,
148. DOI: 10.1007/978-1-4613-8928-6_34. 135
F. Calimeri, G. Ianni, and F. Ricca. 2014. The third open answer set programming
competition. Theory and Practice of Logic Programming, 14(1): 117–135. DOI:
10.1017/S1471068412000105. 176
S. Ceri, G. Gottlob, and L. Tanca. 1990. Logic programming and databases. Surveys in computer
science. Springer-Verlag. DOI: 10.1007/978-3-642-83952-8. 133
A. Chagrov and M. Zakharyaschev. 1997. Modal Logic. Oxford University Press. 164
W. Chen and D. S. Warren. 1993a. A goal-oriented approach to computing the well-founded
semantics. J. Log. Program., 17(2/3&4): 279–300. DOI: 10.1016/0743-1066(93)90034-E.
176
W. Chen and D. S. Warren. 1993b. Query evaluation under the well founded semantics.
In C. Beeri, ed., Proceedings of the 12th ACM SIGACT-SIGMOD-SIGART Symposium on
Principles of Database Systems, PODS 1993, pp. 168–179. ACM Press. DOI: 10.1145/
153850.153865. 176
W. Chen, T. Swift, and D. S. Warren. 1993. Goal-directed evaluation of well-founded
semantics for XSB. In D. Miller, ed., Proceedings of the International Symposium on
Logic Programming, ISLP 1993, p. 679. MIT Press. 176
K. Clark. 1978. Negation as failure. In H. Gallaire and J. Minker, eds., Logic and data bases,
pp. 293–322. Plenum Press, New York-London. DOI: 10.1007/978-1-4684-3384-5_11.
122, 138, 157, 163
A. Colmerauer, H. Kanoui, P. Roussel, and R. Passero. 1973. Un système de communication
homme-machine en Francais. Technical report, Groupe de Recherche en Intelligence
Artificielle, Universitè d’Aix-Marseille II. 121, 135
E. Dantsin, T. Eiter, G. Gottlob, and A. Voronkov. 2001. Complexity and expressive power of
logic programming. ACM Comput. Surv., 33(3): 374–425. DOI: 10.1145/502807.502810.
144, 176
M. Denecker. 1998. The well-founded semantics is the principle of inductive definition. In
J. Dix, L. F. del Cerro, and U. Furbach, eds., Proceedings of the European Workshop on
Logics in Artificial Intelligence, European Workshop, JELIA 1998, volume 1489 of Lecture
Notes in Computer Science, pp. 1–16. Springer. DOI: 10.1007/3-540-49545-2_1. 176
M. Denecker. 2000. Extending classical logic with inductive definitions. In J. W. Lloyd,
V. Dahl, U. Furbach, M. Kerber, K.-K. Lau, C. Palamidessi, L. M. Pereira, Y. Sagiv, and
P. J. Stuckey, eds., Proceedings of the First International Conference on Computational
Logic, CL 2000, volume 1861 of Lecture Notes in Computer Science, pp. 703–717.
Springer. DOI: 10.1007/3-540-44957-4_47. 176
M. Denecker. 2009. A knowledge base system project for fo(.). In P. M. Hill and D. S.
Warren, eds., Proceedings of the 25th International Conference on Logic Programming,
ICLP 2009, volume 5649 of Lecture Notes in Computer Science, p. 22. Springer. DOI:
10.1007/978-3-642-02846-5_2. 122, 176
M. Denecker and E. Ternovska. 2008. A logic for non-monotone inductive definitions. ACM
Transactions on Computational Logic, 9(2). DOI: 10.1145/1342991.1342998. 131, 133,
176
M. Denecker, V. Marek, and M. Truszczynski. 2000. Approximations, stable operators, well-
founded fixpoints and applications in nonmonotonic reasoning. In J. Minker, ed.,
Logic-Based Artificial Intelligence, pp. 127–144. Kluwer Academic Publishers. DOI:
10.1007/978-1-4615-1567-8_6. 128
M. Denecker, M. Bruynooghe, and V. Marek. 2001. Logic programming revisited: Logic
programs as inductive definitions. ACM Transactions on Computational Logic, 2(4):
623–654. DOI: 10.1145/383779.383789. 131, 133
M. Denecker, J. Vennekens, S. Bond, M. Gebser, and M. Truszczynski. 2009. The second
answer set programming competition. In E. Erdem, F. Lin, and T. Schaub,
eds., Proceedings of the 10th International Conference on Logic Programming and
Nonmonotonic Reasoning, volume 5753 of Lecture Notes in Computer Science, pp.
637–654. Springer. DOI: 10.1007/978-3-642-04238-6_75. 176
M. Denecker, Y. Lierler, M. Truszczynski, and J. Vennekens. 2012. A Tarskian Informal
Semantics for Answer Set Programming. In A. Dovier and V. S. Costa, eds., Technical
Communications of the 28th International Conference on Logic Programming (ICLP
2012), volume 17 of Leibniz International Proceedings in Informatics (LIPIcs), pp. 277–
289. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, Dagstuhl, Germany. DOI:
10.4230/LIPIcs.ICLP.2012.277. 131, 133
K. Doets. 1994. From Logic to Logic Programming. Foundations of Computing Series. MIT
Press, Cambridge, MA. 127
W. Dowling and J. Gallier. 1984. Linear-time algorithms for testing the satisfiability of
propositional Horn formulae. Journal of Logic Programming, 1(3): 267–284. DOI:
10.1016/0743-1066(84)90014-1. 144
T. Eiter and M. Fink. 2003. Uniform equivalence of logic programs under the stable model
semantics. In C. Palamidessi, ed., Proceedings of the 19th International Conference on
Logic Programming, ICLP 2003, volume 2916 of Lecture Notes in Computer Science, pp.
224–238. Springer. DOI: 10.1007/978-3-540-24599-5_16. 166
T. Eiter, G. Gottlob, and H. Mannila. 1994. Adding disjunction to datalog. In V. Vianu, ed., Proceedings of the 13th ACM SIGACT-SIGMOD-SIGART Symposium on Principles of
Database Systems, PODS 1994, pp. 267–278. ACM Press. DOI: 10.1145/182591.182639.
148
T. Eiter, G. Gottlob, and H. Mannila. 1997. Disjunctive datalog. ACM Trans. Database Syst.,
22(3): 364–418. DOI: 10.1145/261124.261126. 148
E. Erdem and V. Lifschitz. 2003. Tight logic programs. Theory and Practice of Logic
Programming, 3(4-5): 499–518. DOI: 10.1017/S1471068403001765. 156
F. Fages. 1994. Consistency of Clark’s completion and existence of stable models. Journal of
Methods of Logic in Computer Science, 1: 51–60. 154, 155
P. Ferraris, J. Lee, and V. Lifschitz. 2006. A generalization of the lin-zhao theorem. Ann. Math.
Artif. Intell., 47(1-2): 79–101. DOI: 10.1007/s10472-006-9025-2. 159, 160
P. Ferraris, J. Lee, and V. Lifschitz. 2007. A new perspective on stable models. In M. M. Veloso,
ed., Proceedings of the 20th International Joint Conference on Artificial Intelligence, IJCAI
2007, pp. 372–379. 163
P. Ferraris, J. Lee, and V. Lifschitz. 2011. Stable models and circumscription. Artif. Intell.,
175(1): 236–263. DOI: 10.1016/j.artint.2010.04.011. 163, 177
M. C. Fitting. 1985. A Kripke-Kleene semantics for logic programs. Journal of Logic
Programming, 2(4): 295–312. DOI: 10.1016/S0743-1066(85)80005-4. 138
M. C. Fitting. 2002. Fixpoint semantics for logic programming—a survey. Theoretical
Computer Science, 278: 25–51. DOI: 10.1016/S0304-3975(00)00330-3. 129, 168, 171
M. Gebser, R. Kaminski, B. Kaufmann, and T. Schaub. 2012. Answer Set Solving in Practice.
Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan &
Claypool Publishers. 122, 176
M. Gebser, A. Harrison, R. Kaminski, V. Lifschitz, and T. Schaub. 2015. Abstract Gringo. In
T. Eiter and F. Toni, eds., Proceedings of International Conference on Logic Programming,
ICLP 2015. (to appear). DOI: 10.1017/S1471068415000150. 177
M. Gelfond and V. Lifschitz. 1988. The Stable Model Semantics for Logic Programming. In R. A. Kowalski
and K. A. Bowen, eds., Proceedings of the 5th International Conference and Symposium
on Logic Programming, pp. 1070–1080. MIT Press. 122, 131, 136, 138, 140
E. Giunchiglia, Y. Lierler, and M. Maratea. 2006. Answer set programming based
on propositional satisfiability. Journal of Automated Reasoning, 36: 345–377.
DOI: 10.1007/s10817-006-9033-2. 158
A. Harrison, V. Lifschitz, D. Pearce, and A. Valverde. 2015. Infinitary equilibrium logic and
strong equivalence. In F. Calimeri, G. Ianni, and M. Truszczynski, eds., Proceedings of
International Conference on Logic Programming and Nonmonotonic Reasoning, LPNMR
2015, volume 9345 of LNAI. DOI: 10.1007/978-3-319-23264-5_33. 177
A. Heyting. 1930. Die formalen Regeln der intuitionistischen Logik. Sitzungsberichte der
Preussischen Akademie von Wissenschaften. Physikalisch-mathematische Klasse, pp.
42–56. 163, 164
T. Jech. 2003. Set Theory. Springer Monographs in Mathematics. Springer, Berlin, New York.
126
R. Kowalski. 1974. Predicate logic as a programming language. In Proceedings of the Congress
of the International Federation for Information Processing (IFIP-1974), pp. 569–574.
North Holland, Amsterdam. 121
R. A. Kowalski and D. Kuehner. 1971. Linear resolution with selection function. Artif. Intell.,
2(3/4): 227–260. DOI: 10.1016/0004-3702(71)90012-9. 121, 135
N. Leone, P. Rullo, and F. Scarcello. 1997. Disjunctive stable models: Unfounded sets,
fixpoint semantics, and computation. Inf. Comput., 135(2): 69–112. DOI: 10.1006/
inco.1997.2630. 160
V. Lifschitz. 2002. Answer set programming and plan generation. Artificial Intelligence,
138(1-2): 39–54. DOI: 10.1016/S0004-3702(02)00186-8. 149
V. Lifschitz. 2010. Thirteen definitions of a stable model. In A. Blass, N. Dershowitz, and
W. Reisig, eds., Fields of Logic and Computation, Essays Dedicated to Yuri Gurevich on
the Occasion of His 70th Birthday, volume 6300 of Lecture Notes in Computer Science,
pp. 488–503. Springer. DOI: 10.1007/978-3-642-15025-8_24. 163
V. Lifschitz and A. A. Razborov. 2006. Why are there so many loop formulas? ACM Trans.
Comput. Log., 7(2): 261–268. DOI: 10.1145/1131313.1131316. 162
V. Lifschitz and H. Turner. 1994. Splitting a logic program. In P. V. Hentenryck, ed.,
Proceedings of the 11th Internationall Conference on Logic Programming, ICLP 1994, pp.
23–37. MIT Press. 148
V. Lifschitz, D. Pearce, and A. Valverde. 2001. Strongly equivalent logic programs. ACM
Transactions on Computational Logic, 2(4): 526–541. DOI: 10.1145/383779.383783.
164, 165
F. Lin and Y. Zhao. 2002. ASSAT: Computing answer sets of a logic program by SAT solvers.
In Proceedings of the 18th National Conference on Artificial Intelligence (AAAI 2002), pp.
112–117. AAAI Press. DOI: 10.1016/j.artint.2004.04.004. 161, 162
J. W. Lloyd. 1984. Foundations of logic programming. Symbolic Computation. Artificial
Intelligence. Springer, Berlin-New York. 126, 135
Z. Lonc and M. Truszczyński. 2001. On the problem of computing the well-founded
semantics. Theory and Practice of Logic Programming, 5: 591–609. DOI: 10.1007/3-
540-44957-4_45. 175
M. J. Maher. 1988. Equivalences of logic programs. In J. Minker, ed., Foundations of Deductive
Databases and Logic Programming., pp. 627–658. Morgan Kaufmann, Los Altos, CA.
DOI: 10.1007/3-540-16492-8_91. 166
V. Marek and M. Truszczyński. 1999. Stable models and an alternative logic programming
paradigm. In K. Apt, W. Marek, M. Truszczyński, and D. Warren, eds., The Logic
Programming Paradigm: a 25-Year Perspective, pp. 375–398. Springer, Berlin. DOI:
10.1007/978-3-642-60085-2_17. 122, 144
W. Marek and M. Truszczyński. 1991. Autoepistemic logic. Journal of the ACM, 38(3): 588–619.
DOI: 10.1145/116825.116836. 144
R. Moore. 1985. Semantical considerations on nonmonotonic logic. Artificial Intelligence, 25(1): 75–94. DOI: 10.1016/0004-3702(85)90042-6. 122
I. Niemelä. 1999. Logic programming with stable model semantics as a constraint
programming paradigm. Annals of Mathematics and Artificial Intelligence, 25(3-4):
241–273. DOI: 10.1023/A:1018930122475. 122, 144
I. Niemelä and J. Rintanen. 1994. On the impact of stratification on the complexity of
nonmonotonic reasoning. Journal of Applied Non-Classical Logics, 4(2): 141–179. DOI:
10.1007/3-540-58107-3_16. 151
D. Pearce. 1997. A new logical characterisation of stable models and answer sets. In
J. Dix, L. M. Pereira, and T. C. Przymusinski, eds., Non-Monotonic Extensions of Logic
Programming, NMELP ’96, volume 1216 of Lecture Notes in Computer Science, pp.
57–70. Springer. DOI: 10.1007/BFb0023801. 163, 164
D. Pearce. 2006. Equilibrium logic. Ann. Math. Artif. Intell., 47(1-2): 3–41. DOI: 10.1007/
s10472-006-9028-z. 163
D. Pearce and A. Valverde. 2004. Towards a first order equilibrium logic for nonmonotonic
reasoning. In J. J. Alferes and J. A. Leite, eds., Proceedings of the 9th European
Conference on Logics in Artificial Intelligence, JELIA 2004, volume 3229 of LNCS, pp.
147–160. Springer. DOI: 10.1007/978-3-540-30227-8_15. 163
D. Pearce and A. Valverde. 2008. Quantified equilibrium logic and foundations for answer
set programs. In M. G. de la Banda and E. Pontelli, eds., Proceedings of the 24th
International Conference on Logic Programming, ICLP 2008, volume 5366 of LNCS, pp.
546–560. Springer. DOI: 10.1007/978-3-540-89982-2_46. 163, 177
T. Przymusinski. 1988a. On the declarative semantics of deductive databases and
logic programs. In J. Minker, ed., Foundations of Deductive Databases and Logic
Programming, pp. 193–216. Morgan Kaufmann, Los Altos, CA. DOI: 10.1016/B978-0-
934613-40-8.50009-9. 122, 137
T. C. Przymusinski. 1988b. Perfect model semantics. In R. A. Kowalski and K. A. Bowen, eds.,
Proceedings of the Fifth International Conference and Symposium on Logic Programming,
pp. 1081–1096. MIT Press. 122, 137, 145
R. Reiter. 1980. A logic for default reasoning. Artificial Intelligence, 13(1-2): 81–132. DOI:
10.1016/0004-3702(80)90014-4. 122
J. Robinson. 1965. A machine-oriented logic based on the resolution principle. Journal of the
ACM, 12: 23–41. DOI: 10.1145/321250.321253. 121, 135
D. Saccà and C. Zaniolo. 1990. Stable models and non-determinism in logic programs
with negation. In D. J. Rosenkrantz and Y. Sagiv, eds., Proceedings of the 9th ACM
SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, PODS 1990,
pp. 205–217. ACM Press. DOI: 10.1145/298514.298572. 159
Y. Sagiv. 1988. Optimizing datalog programs. In J. Minker, ed., Foundations of Deductive
Databases and Logic Programming, pp. 659–698. Morgan Kaufmann, Los Altos, CA.
DOI: 10.1016/B978-0-934613-40-8.50021-X. 166
K. F. Sagonas, T. Swift, and D. S. Warren. 1994. XSB as an efficient deductive database engine. In R. T. Snodgrass and M. Winslett, eds., Proceedings of the 1994 ACM
SIGMOD International Conference on Management of Data, pp. 442–453. ACM Press.
DOI: 10.1145/191839.191927. 122, 175, 176
J. S. Schlipf. 1995. The expressive powers of the logic programming semantics. J. Comput.
Syst. Sci., 51(1): 64–86. DOI: 10.1006/jcss.1995.1053. 176
R. M. Smullyan. 1961. Theory of Formal Systems, volume 47 of Annals of Mathematical Studies.
Princeton University Press, Princeton, New Jersey. 135
A. Tarski. 1955. Lattice-theoretic fixpoint theorem and its applications. Pacific Journal of
Mathematics, 5: 285–309. DOI: 10.2140/pjm.1955.5.285. 125
M. Truszczynski. 2012. Connecting first-order ASP and the logic FO(ID) through reducts. In
E. Erdem, J. Lee, Y. Lierler, and D. Pearce, eds., Correct Reasoning - Essays on Logic-
Based AI in Honour of Vladimir Lifschitz, volume 7265 of LNCS, pp. 543–559. Springer.
DOI: 10.1007/978-3-642-30743-0_37. 177
J. Ullman. 1988. Principles of Database and Knowledge-Base Systems. Computer Science Press,
Rockville, MD. 133
M. van Emden and R. Kowalski. 1976. The semantics of predicate logic as a programming
language. Journal of the ACM, 23(4): 733–742. DOI: 10.1145/321978.321991. 128, 134
A. Van Gelder. 1993. The alternating fixpoint of logic programs with negation. Journal of
Computer and System Sciences, 47(1): 185–221. DOI: 10.1016/0022-0000(93)90024-Q.
173
A. Van Gelder, K. A. Ross, and J. S. Schlipf. 1988. Unfounded sets and well-founded semantics
for general logic programs. In C. Edmondson-Yurkanan and M. Yannakakis, eds.,
Proceedings of the Seventh ACM SIGACT-SIGMOD-SIGART Symposium on Principles of
Database Systems, PODS 1988, pp. 221–230. ACM Press. DOI: 10.1145/308386.308444.
122, 138
A. Van Gelder, K. Ross, and J. Schlipf. 1991. The well-founded semantics for general logic
programs. Journal of the ACM, 38(3): 620–650. DOI: 10.1145/116825.116838. 122, 136,
138, 171, 173
S. Woltran. 2011. Equivalence between extended datalog programs - a brief survey. In
O. de Moor, G. Gottlob, T. Furche, and A. J. Sellers, eds., Datalog Reloaded - First
International Workshop, Datalog 2010, volume 6702 of Lecture Notes in Computer
Science, pp. 106–119. Springer. DOI: 10.1007/978-3-642-24206-9_7. 166
Index

2DHP-PSP problem, 389–390 Aggregation and aggregates


Answer Set Programming, 369–370
A Modeling Language for Mathematical CTL model checking, 434
Programming (AMPL), 332 Datalog, 23–25
Abducible predicates, 505–506 Datomic, 90
Abductive reasoning, 505 declarative networking, 81
Abstractions and implementations, 519 IDP system, 288
constraint satisfaction, 536–541 join queries, 527, 529
control abstractions, 522–526 LDL language, 67
data abstractions, 521–522 Prolog, 434–435
join queries, 526–532 Agreement in natural language processing,
recursive rules and queries, 532–536 488–489
Accepting states in Büchi PDS, 455 AI (artificial intelligence)
ACID properties, 336 Datalog, 13–14
Action Description Languages, 403 Stable-Model Semantics, 21
Actions AI Winter, 99
data-independent systems, 448–449 Ailog2 system, 221
Datalog, 40–41, 43 AIMMS (Advanced Integrated Multidi-
labeled transition systems, 436–437 mensional Modeling Software),
planning problems, 459, 464–465 332
Activation Records (ARs), 239–241 Alaska prototype, 28
Active programs in LogicBlox, 335 Algebraic Modeling Languages (AMLs). See
Acyclic Datalog LPADs, 218 SolverBox
Acyclic logic programs, 217 Algebras
Additive probability space in distribution distribution semantics, 200–205
semantics, 204–205 finite-state model checking, 431
Adenine in DNA, 362–364 AllegroGraph system, 79
Aditi database project, 72 Alleles, 380
Aditi-Prolog language, 72 allocate instruction in WAM, 247–248, 250,
Advanced Integrated Multidimensional 266, 272
Modeling Software (AIMMS), 332 Alloy system, 322
560 Index

Alternating fixed points in modal mu- Approximation intuition in Herbrand


calculus, 437–438 interpretations, 128
Alternation depth in modal mu-calculus, Approximation revision operator in well-
438 founded semantics, 170
Ambiguity in natural language processing, Arbitrary logic programs, 159
487 Architecture in IDP system, 294–296
Amino acids, 364–366 Arginine in DNA, 365
AMLs (Algebraic Modeling Languages). See Arity
SolverBox deterministic Prolog records, 256–257
AMPL (A Modeling Language for first-order logic, 284
Mathematical Programming), 332 ARs (Activation Records), 239–241
Anaphora in natural language processing, Artificial intelligence (AI)
494, 496–497 Datalog, 13–14
Andersen’s pointer analysis, 535 Stable-Model Semantics, 21
Annotated disjunctive clauses, 190–191 ASP. See Answer Set Programming (ASP)
Annotated Probabilistic Logic Program- ASP language, 322
ming, 214–215 Assertions
Answer Set Programming (ASP) control abstractions, 523
applications, 543 join queries, 527
constraint satisfaction, 538 Associativity in natural language processing,
distribution semantics, 208–209 485–486
gene regulatory networks, 397–398 Assumption Grammars, 481, 492–497
Haplotype inference, 381–383 Assumption of exclusion in query
metabolic networks, 401–402 evaluation, 225
phylogenetic trees, 374–377 Assumption of independence in query
RNA secondary structure prediction, evaluation, 225
386–388, 390–392 Atomic choices in distribution semantics,
from Stable Model semantics, 22, 76, 122 192–193, 201–204
tabled logic programs, 430 atomic_constraint predicate, 441
Answer-set solvers, 158 Atomic constraints, 441, 443
Answer subsumption Atomic formulas in logic programming, 123
Binary Decision Diagrams, 224 Atomicity in transactions, 45
deterministic planning, 459–463 Atoms
infinite-state model checking, 443–444 deterministic Datalog, 244
PITA systems, 225 first-order logic, 284–285
Antagonism toward Datalog, 75 Herbrand interpretations, 128
Antimonotonicity Horn programs, 138
Horn programs, 130 logic programming, 123–124
logic programming, 127 well-founded semantics, 167, 170
well-founded semantics, 173 Attractors in gene regulatory networks, 403
append instruction in WAM, 271 Attributes in Datalog rules, 7
Approximation Autoepistemic logic, 122
ground-and-solve, 314–315 Automated symmetry breaking in IDP
inferencing in PLP, 226–227 system, 283
Index 561

Automated translation in IDP system, 283 BIOCHAM system, 404


Bioinformatics, 359
B language, 322 Answer Set Programming, 366–370
B-Prolog language biology overview, 362–366
natural language processing, 483–484, Haplotype inference, 379–384
508 instances, 408–412
query evaluation for restricted programs, logic programming approaches, 404–
224 406
Backbones in DNA, 365 phylogenetics, 370–379
Backjumping in constraint satisfaction, 539 protein structure prediction, 389–393
Backtrackable updates, 45 RNA secondary structure prediction,
Backtracking in constraint satisfaction, 538 384–389
Backward anaphora, 496–497 systems biology, 393–404
Backward Fixpoint Procedure (BFP), 61 Biological species concept, 379
BANG files, 69 BioSigNet-RR tool, 404
Barber paradox, 207–208 Bits in data abstractions, 522
Barman planning domain, 467–468 bldcon instruction in WAM, 260
Base pairing in RNA, 363–364 Blocks
Baum-Welch algorithm, 226 LogicBlox, 335–336
Bayes classifier, 531 LogiQL clauses, 335
Bayesian Logic Programs (BLPs), 186–187, Bloom system, 544
215, 217–219 BLPs (Bayesian Logic Programs), 186–187,
Bayesian networks 215, 217–219
conversions, 217–219 Bodies
Knowledge Base Model Construction, Answer Set Programming rules, 367
215–217 Datalog rules, 5
BDDs (Binary Decision Diagrams) logic programming rules, 123–124
ProbLog, 185, 187 LogiQL clauses, 335
query evaluation for unrestricted natural language processing constraints,
programs, 221–224 498
Belnap logic, 168 Bottom-up evaluation in Datalog, 9, 48–52,
BERMUDA system, 63 61
BFP (Backward Fixpoint Procedure), 61 Bounded left-recursive shortest path (SPLB)
Bifurcations in gene regulatory networks, deterministic planning, 462–463
403 grid planning, 465–466
Big data in Datalog, 82–83 Box operator, 436
BigDatalog, 83 BPDS (Büchi PDS), 455–457
Binary Decision Diagrams (BDDs), 79 Branching time logics, 438–440
ProbLog, 185, 187 Büchi acceptance conditions, 451
query evaluation for unrestricted Büchi automatons, 439
programs, 221–224 Büchi PDS (BPDS), 455–457
Binder language in trust management, 536 Buffers in data-independent systems,
Binding requirements in Datalog, 23 448–449
BinProlog language, 496 Business intelligence (BI), 531
562 Index

C language, 239–243 Chord overlay algorithm, 81


call instruction in WAM, 248–253, 268, CHR Grammars (CHRGs), 499–501, 504–508
272–273 Chromosomes, 379
Camin-Sokal requirement, 374 CHRs (Constraint Handling Rules), 481
Carbon atoms in amino acids, 365 grammars, 499–501
Cardinality definition, 498–499
Haplotype inference, 381 CIAO system, 483
IDP system, 288 Clark’s completion
Cardinality constraint literals in Answer Set Horn programs, 138–139
Programming, 369 Prolog, 18
Cardinality-minimal repairs in metabolic supported models, 156–157
networks, 402 Clasp system, 322
CASP program, 322 Classes in Coherent Definition Framework,
Cassandra framework, 82, 536 530
Cataphora in natural language processing, Clausal Normal Form (CNF) in IDP system,
496–497 321
Cavediving planning domain, 467–468 Clauses
CBC (COIN-OR Branch and Cut) solver, 347 logic programming, 123
CDAO (Comparative Data Analysis LogiQL, 335
Ontology), 379 Clingcon system, 322–323
CDF (Coherent Definition Framework), Clingo system, 541
529–531 Closed-world assumptions for databases,
Central dogma of biology, 362 13
Certain atoms CLP (Constraint Logic Programming), 322,
Herbrand interpretations, 128 341, 361, 404–406
well-founded semantics, 167 CLP(BN) program, 186, 215, 217
ceval predicate in infinite-state model CNF (Clausal Normal Form) in IDP system,
checking, 443 321
CFGs (context-free grammars) Coalescent model in Haplotype inference,
natural language processing, 478 384
tabled logic programs, 429 CodeQuest system, 79, 535
Character compatibility in phylogenetic Codons in RNA, 364
trees, 372–377 Coelom
Chart parsing in natural language Haplotype inference, 384
processing, 482 phylogenetic trees, 374
Chase process, 27 Coherent Definition Framework (CDF),
Chat-80 system, 12 529–531
CheckAccess function, 532 COIN-OR Branch and Cut (CBC) solver, 347
Checkpoints in nondeterministic Prolog, Coin tosses, 190
263–264 Collatz conjecture in data-independent
Chlamydomonas reinhardtii, 402 systems, 448
Choice points in nondeterministic Prolog, Colmerauer, Alain, 481
263–264 Colon-dash (:-) in Datalog rules, 5
Chomsky Normal Form, 454 Combined complexity in Datalog, 76
Index 563

Combining rule in Bayesian Logic metabolic networks, 402


Programs, 215 Conditional probability in distribution
Commas (,) semantics, 194
conjunction connectives, 123 Conjunction connectives in logic
Datalog rules, 5 programming, 123
IDP system, 299 Conjunction of constraints in infinite-state
LogicBlox, 337 model checking, 441
Comments in IDP system, 298 Conjunctions
Common Table Expressions, 79 control abstractions, 523
Compactness theorem, 281 LogicBlox, 337
Comparative Data Analysis Ontology Conjunctive external support formulas, 159
(CDAO), 379 Conjunctive loop formulas, 161
Compatible characters in phylogenetic Consistency in regulatory networks, 396
trees, 372–374 Consistent atoms
Compatible worlds in distribution distribution semantics, 193
semantics, 201 well-founded semantics, 167
Compilers in finite-state model checking, Constant symbols in logic programming,
431 123
Compiling nondeterministic Prolog, 265– Constants in deterministic Datalog, 244–
268 246
Complementary strings in DNA, 362 Constraint Handling Rules (CHRs), 481
Complete lattices in logic programming, grammars, 499–501
125–126 definition, 498–499
Completion of definitions in IDP system, Constraint Logic Programming (CLP), 322,
293 341, 361, 404–406
Completion stacks in WAM, 275 Constraints, 28–29
Complex objects, 33–34 Answer Set Programming, 368–369
Complexity bioinformatics, 404–405
CTL model checking, 435 constraint satisfaction, 536–541
Datalog, 76–77 control abstractions, 526
query evaluation in PLP, 220 Datalog, 28–32
Composite atomic choices in distribution EKS, 70
semantics, 193, 201–204 infinite-state model checking, 441–445
Compositionality, 39 LogicBlox, 335, 339
Computation in Stable Model semantics, LogiQL, 344
151 maintenance, 30–31
Computation Tree Logic (CTL) natural language processing, 498–504
finite-state model checking, 431–436 Constructed types in IDP system, 304–305
vs. Linear Temporal Logic, 438–440 Construction process in IDP system, 290
modal mu-calculus, 436–438 Context dependencies, 32
Computing with Logic (Maier and Warren), Context-free grammars (CFGs)
15–16 natural language processing, 478
Conditional literals tabled logic programs, 429
Answer Set Programming, 369 Context-sensitive points-to analyses, 80
564 Index

Contextual independence in query artificial intelligence, 13–14


evaluation for unrestricted big data, 82–83
programs, 221 bottom-up evaluation, 48–52
Continuous operators in complete lattices, bottom-up vs. top-down evaluation, 61
126–127 coining of name, 15–16
Control abstractions, 522–526 conclusions, 98–99
Control locations in Timed Safety Automata, contributions, 76–77
444 data and queries, 84–85
Conversions in Bayesian networks, 217–219 database system evaluation methods,
CORAL deductive system, 67–68 61–63
Cost terms in IDP system, 303–304 database systems, 12–13
CP integration in ground-and-solve, 316– in Datomic, 88–90
317 declarative networking, 80–82
CP-logic, 186 decline, 74–77
cpgroundatoms option in IDP system, 319 destructive updates in rule bodies, 42–46
cplint system, 222, 224 deterministic. See Deterministic Datalog
cpsupport option in IDP system, 318–319 distribution semantics, 192
CTL (Computation Tree Logic) early systems, 65–73
finite-state model checking, 431–436 emergence, 10–14
vs. Linear Temporal Logic, 438–440 evaluation techniques, 46–63
modal mu-calculus, 436–438 existential variables, 25–28
Curly braces ({ }) in natural language experiments, 93–98
processing, 480 explicit state identifiers, 40–42
CVE inferencing, 221, 224 extensions overview, 16
Cytosine in DNA, 362–364 in Flora-2 and Ergo, 91–93
higher-order extensions, 36–38
DADM databases, 13, 63 introduction, 3–10
Dahl, Veronica, 481 Logic Programming, 11–12
Data abstractions, 521–522 in LogicBlox, 87–88
Data complexity in Datalog, 76 negation, 17–22
Data-independent systems, 448–451 object-oriented logic programming,
Data management applications, 544 32–36
Database systems program analysis, 79–80
deductive, 65–73 query optimization, 64
evaluation methods, 61–63 resurgence, 77–84
IDP system, 291 Semantic Web, 78–79
LogicBlox, 336–337 top-down evaluation, 52–60
schema, 28 typing and constraints, 28–32
Datalog, 3 updates, 38–46
aggregation and sets, 23–25 in XSB, 85–87
algebraic modeling. See SolverBox Dates in logenetic trees, 372
application areas, 83–84 Datomic, 88–90
arithmetic and evaluable predicates, DBMs (Difference bound matrices), 446
22–23 DCA (Domain closure axiom), 304
Index 565

DCG. See Definite Clause Grammar (DCG) Demand transformation in bottom-up


notation evaluation, 52, 61
dDatalog, 82 Denial axioms, 29–30
Deadlock freedom in al mu-calculus, 437 Deoxyribonucleic Acid (DNA) sequences
deallocate instruction in WAM, 248 bioinformatics, 360, 362–365
DeALS system, 83 diploid organisms, 379–380
Decision nodes in Binary Decision Dependencies
Diagrams, 222 context, 32
Decision support applications, 544 natural language processing, 494–497
Declarative language in IDP system, 294 TGDs and EGDs, 26
Declarative networking in Datalog, 80–82 Dependency cycles in Horn programs, 139
Declarative semantics, 39 Dependency graphs
DECLARE language, 72 Bayesian networks, 217
DEDUCE 2 databases, 13, 63 Stable Model semantics, 161
Deduction in IDP system, 307–308 Dereferencing deterministic Datalog, 246
Deductive database systems, 13, 65–73 Derivation rules in LogicBlox, 337
Default negation in Answer Set Program- Derivations vs. answers in Datalog, 46–47
ming rules, 367 Derived predicates in LogicBlox, 336
Defeasibility, 36 Derived relations in Datalog, 5
Definability of relations for Horn programs, Description logic
135 Datalog, 64
Definite Clause Grammar (DCG) notation Semantic Web, 78
assumption grammars, 492–497 Design decisions in IDP system, 294–296
Hyprolog, 506 Destructive updates, 39
natural language processing, 479–481, rule bodies, 43–46
484–488 rule heads, 42–43
push-down systems, 453–454 Deterministic Datalog, 244
toy DCG, 509–511 examples, 255–256
Definite Horn clauses, 16 ground facts, 250–251
Definite programs in Answer Set instruction summary, 254–255
Programming, 367–368 new variables, 251–252
Definitional implication in IDP system, 296 simple programs, 246–249
Definitions StackTop pointer, 250
Horn programs, 138 temporary variables, 252–254
IDP system, 289–293 variables and constants, 244–246
well-founded semantics, 176 Deterministic planning
Definitions of concepts, IDP system as, answer subsumption and shortest-path
280–281 problem, 459–463
Delay transitions in Timed Safety Automata, tabled logic programs, 458–459
445–446 Deterministic Prolog, 256–263
Delta predicates in LogicBlox, 338 DFJ model, 348–349
Demand-driven computation DIADEM project, 83
join queries, 529 Diamond operator in labeled transition
recursive queries, 533–534 systems, 436
Difference bound matrices (DBMs), 446 Dollo requirement
Difference operator in Datalog, 17 Haplotype inference, 384
Diploid organisms, 379–380 phylogenetic trees, 374
Directed graphs Domain atoms in first-order logic, 285
Binary Decision Diagrams, 222 Domain closure axiom (DCA), 304
CTL model checking, 432 Domain specific languages, 332
labeled transition systems, 436 Domains
Disjoint-statements in Probabilistic Horn datatypes, 28
Abduction, 188 first-order logic, 284–285
Disjoint types in IDP system, 293 Double helix in DNA, 362–363
Disjunctions Downward closed constraints, 31
control abstractions, 523–524 Dynamic filtering in bottom-up evaluation,
LogicBlox, 337 51
Disjunctive Datalog, 83 Dynamic logic, 39
Disjunctive external support formulas in Dynamic predicates in Prolog, 274
Stable Model semantics, 159 Dynamic stratification
Disjunctive logic programs (DLPs), 22, 160, Datalog, 20
368 negation, 434–435
Disjunctive loop formulas in Stable Model
semantics, 161 Early Datalog systems, 65–73
Distribution semantics, 186–187, 192 ECA (event-condition-action) rules, 40, 42,
algebras and probability measures, 46
Ectocarpus siliculosus, 402
bioinformatics, 405 EDBs (extensional databases)
examples, 197–198 Datalog, 9–10, 15, 29–31, 62–63
expressive power, 195–197 LogicBlox, 336
Logic Programs with Annotated Edges
Disjunctions, 190–191 gene regulatory networks, 397
non-stratified programs, 207–210 phylogenetic trees, 371
Probabilistic Horn Abduction, 188–189 EDS (Expert Database Systems), 14
ProbLog, 191 EFA (extended finite automata), 448–
Sato and Kameya’s definition, 206 449
Stable Model semantics, 208–210 EGDs (equality-generating dependencies),
stratified programs, 192–195 26–28
stratified programs with function EKS system, 69–70
symbols, 198–207 Elog language, 83
well defined, 206–207 EMBLEM system, 227
DLOG system, 14 Encoding knowledge in IDP system, 283
DLPs (disjunctive logic programs), 22, 160, Enterprise decision making in constraint
368 satisfaction, 539–540
DLV language, 28, 83, 322 Enterprise management in join queries, 531
DNA (Deoxyribonucleic Acid) sequences Entities in LogicBlox, 336
bioinformatics, 360, 362–365 Environment trimming in WAM, 272–273
diploid organisms, 379–380 Enzymes in DNA, 363
Equality-generating dependencies (EGDs), Extension Table (ET) evaluation, 59
26–28 Extensional databases (EDBs)
Equilibrium logic in Stable Model Datalog, 9–10, 15, 29–31, 62–63
semantics, 163 LogicBlox, 336
Equivalence in Stable Model semantics, External support formulas in Stable Model
163–165 semantics, 159
Equivalent composite choices in distribu- Extra arguments in natural language
tion semantics, 202–203 processing, 488–489
Erase relation in Büchi PDS, 456–457 EZ system, 322–323
Ergo system
vs. Datalog, 91–93 F-logic, 33–36
F-logic, 34–36 objects, 545
HiLog, 38 Semantic Web, 78
Escherichia coli, 402 Facts
ET (Extension Table) evaluation, 59 control abstractions, 523–524
Evaluable predicates in Datalog, 22–23 Datalog, 4–5, 8, 47
Event-B language, 287, 322 FAD language, 66
Event calculus, 41 Fages Lemma, 155–156
Event-condition-action (ECA) rules, 40, 42, False atoms
46 first-order logic, 285
Evidential probability, 213–214 Herbrand interpretation grounds, 128
Evolutionary theory in phylogenetics, Horn programs, 138
370–371 well-founded semantics, 167, 170
Exclamation points (!) in LogicBlox, 337 Fields in deterministic Prolog records, 256
Exclusion axiom in query evaluation for Fifth-Generation Computer Systems (FGCS),
restricted programs, 224–225 11–12
execute instruction in WAM, 252, 268 findall predicate in Prolog, 434–435
Executing blocks in LogicBlox, 336–337 Finitary operators
Existential variables in Datalog, 25–28 complete lattices, 126
EXODUS storage manager, 68 Horn programs, 129
Expert Database Systems (EDS), 14 Finite directed graphs
Explanations CTL model checking, 432
distribution semantics, 205 labeled transition systems, 436
IDP system, 297 Finite Domain, 404
query evaluation for unrestricted Finite-state model checking, 430–431
programs, 221–224 CTL, 431–436
Explicit constraints, 537 linear-time vs. branching time logics,
Explicit state identifiers, 39–42 438–440
Expression complexity in Datalog, 76 modal mu-calculus, 436–438
Expressive power in distribution semantics, First-order (FO) logic, 22, 284–286
195–197 aggregates, 288
Expressiveness in database systems, 12–13 arithmetic, 287–288
EXPTIME-complete data complexity, 77 control abstractions, 525
Extended finite automata (EFA), 448–449 definitions, 289–293
First-order (FO) logic (continued) Frame axioms, 41
IDP system, 281–282, 295–296 Frame-based systems, 14
partial functions, 286–287 Frame problems, 41
types, 293–294 Frame representation in F-logic, 34–35
Fixed points Function detection in IDP system, 283
Backward Fixpoint Procedure, 61 Function-free logic, 13
CTL model checking, 433–435 Function symbols
Datalog, 9, 13, 19 logic programming, 123
modal mu-calculus, 436–438 stratified programs with, 198–207
Fixpoints Functional dependencies in IDP system,
Horn programs, 134–135 312
logic programming, 125–127 functiondetection option in IDP system,
well-founded semantics, 168–169, 171– 319
173 Functions in IDP system, 298–300
FlatZinc, 321
Floortile planning domain, 467–468 Gambler program, 209–210
FLOPC++ (Formulation of Linear Games and puzzles
Optimization Problems in C++), applications, 544
332 constraint satisfaction, 540–541
Flora-2 system, 33–36 GAMS (General Algebraic Modeling System),
vs. Datalog, 91–93 332
transaction logic, 46 Gecode system, 322
FLORID system, 34, 36, 73 Gene regulatory networks
Flux balance analysis in metabolic Answer Set Programming encoding,
networks, 399 397–398
FO(ID). See First-order (FO) logic instances, 411
Folding in protein structure prediction, 390 modeling, 395–397
For-loops in join queries, 527–528 systems biology, 394–395
Foreign keys, 28 General Algebraic Modeling System (GAMS),
Formal definitions in IDP system, 289 332
Formal semantics, 130–131 Generalized counting
Formulas bottom-up evaluation, 51
first-order logic, 284 SALAD prototype, 66
IDP system, 300 Generate-define-test methodology, 149
LogicBlox, 337 Genes
partial functions, 286–287 DNA, 362–363
Formulation of Linear Optimization phylogenetic trees, 371
Problems in C++ (FLOPC++), 332 Genomes, 362
Founded semantics, 22 Genomics, 360–362
Four-valued models in well-founded Genotypes, 379
semantics, 138, 168, 171 getcon instruction in WAM, 250–251, 255
Four-valued one-step provability operator, getnum instruction in WAM, 255
129–130 getstr instruction in WAM, 259
FPspace complete problem, 220 gettval instruction in WAM, 253, 255
getvar instruction in WAM, 247, 249, 255 Hairpin loops in RNA, 386, 388
Global memory in natural language Haplotype inference, 379–380
processing, 492–497 Answer Set Programming encoding,
Globalizing variables in WAM, 270 381–383
Glue language, 68–69 bioinformatics, 361
Grammar induction in natural language instances, 409
processing constraints, 501–504 modeling, 380–381
GraphDB system, 79 Head constraints in natural language
GraphX, 83 processing, 498
Greater-than comparisons in Datalog, 23 Heads
Greatest upper bounds (gubs) in complete Answer Set Programming rules, 367
lattices, 125–126 Datalog rules, 5, 7, 9, 25–28
Grids planning example, 463–467 IDP rules, 289–290
Gringo language, 322 logic programming rules, 123–124
Ground-and-solve optimization in IDP LogiQL clauses, 335
system, 309 Heap in deterministic Prolog, 256–257
Ground atoms Herbrand base
distribution semantics, 194 Bayesian networks, 217–218
Herbrand interpretations, 128 ground atoms, 123–124
Ground facts in deterministic Datalog, Horn programs, 136
250–251 logic programming, 124–125
Ground terms in logic programming, Herbrand interpretations
123 Horn programs, 136
Grounding and ground instances IDP system, 291
Answer Set Programming rules, 367 logic programming, 127–128
Bayesian networks, 218 one-step provability operator, 128–130
Datalog rules, 6 Stable Model semantics, 145
IDP system, 313–317 well-founded semantics, 175
logic programming, 124 Herbrand model for stratified programs,
LogiQL, 346–347 122
query evaluation for restricted programs, Herbrand universe, 123–124
225 Here-and-there logic in Stable Model
Grounding phase in IDP system, 296 semantics, 163–164
Grounding with bounds in IDP system, 283 Heterozygous sites, 380
groundpropagate procedure in IDP system, Hidden Markov Models (HMMs)
307 distribution semantics, 198–200, 206
groundwithbounds option in IDP system, inferencing in PLP, 226–227
318 query evaluation for restricted programs,
Guanine in DNA, 362–364 226
Guards HiLog, 36–37
natural language processing, 498 Semantic Web, 78
Timed Safety Automata, 444 HIPP-DEC, 381–383
gubs (greatest upper bounds) in complete HMMs (Hidden Markov Models)
lattices, 125–126 distribution semantics, 198–200, 206
HMMs (Hidden Markov Models) (continued) inference methods, 307–308
inferencing in PLP, 226–227 input*-definitions, 311
query evaluation for restricted programs, KBS, 294–297
226 language, 297–304
Homozygous sites, 380 logic, 298–302
Horn clauses optimization workflow, 317–319
IDP system, 279 output*-definitions, 311–312
KBS systems, 14 output vocabulary, 306
LDL language, 65–66 partial functions, 286–287
logic programming, 11–12 post-processing, 317
Magic Templates, 51 preprocessing, 309–313
Prolog, 15–16 procedures, 302–303
Horn programs, 131–135 quantification depth, 312
extensions, 135–140 scalability and infinity, 319–320
logic programming, 124–125 structure consistency, 309–310
one-step provability operator, 129 structures, 297–299
Stable Model semantics, 145 structuring components, 305–306
well-founded semantics, 173 symmetries, 313
Horn rules, 124–125 terms, 303–304
Human Genome Project, 362 theory, 300–302
Hybrid Annotated PLPs, 215 types, 293–294
Hypotheses uses, 320–321
control abstractions, 524 vocabulary, 297–299
join queries, 526–527 IDP3 tool, 321
Hypothetical reasoning in natural language If-statements in join queries, 527–528
processing, 504–508 IGlue language, 69
Hyprolog language, 504–508 IGlue-to-IGlue static optimizer, 69
ILOG Concert, 332
ICL. See Independent Choice Logic (ICL) ILP (Inductive Logic Programming), 405
IDBs (intensional databases) Immediate consequence operator in
Datalog, 9–10, 29–31, 62–63 Datalog, 8, 48
IDP system, 291 Immediate Dominance/ Linear Precedence
LogicBlox, 336–337 (IDLP) formalism, 501–502
IDLP (Immediate Dominance/ Linear Imperative-Declarative Programming.
Precedence) formalism, 501–502 See IDP (Imperative-Declarative
IDP (Imperative-Declarative Programming), Programming)
279–284 Imperative programming languages in
aggregates, 288 knowledge base systems, 282–
arithmetic, 287–288 283
component operations, 308–309 Implication arrows (->) in LogicBlox, 339
constructed types, 304–305 Impossible atoms in well-founded
definitions, 289–293 semantics, 167
first-order logic, 284–286 Inca system, 322
ground-and-solve, 313–317 Inclusion dependencies, 32
Inclusion-exclusion principle in inferencing Inferencing in PLP, 219–220
in PLP, 226–227 approximation and other inferencing
Incompatible characters in phylogenetic tasks, 226–227
trees, 372–374 distribution semantics, 187
Incompatible worlds in distribution query evaluation complexity, 220
semantics, 201 query evaluation for restricted programs,
Inconsistent atoms 224–226
Herbrand interpretations, 128 query evaluation for unrestricted
well-founded semantics, 167 programs, 221–224
Incremental completion algorithm in Infinitary propositional logic, 177
top-down evaluation, 60–61 Infinite loops in natural language
Incremental computation in recursive processing, 482, 484–485
queries, 534 Infinite sites model, 384
Incrementality in LogicBlox, 336 Infinite-state model checking, 440–441
Independence axiom, 224 data-independent systems, 448–451
Independence in query evaluation for push-down systems, 451–458
restricted programs, 224–225 real-time systems, 444–448
Independent and identically distributed tabled evaluation with constraints,
(iid) variables, 188 441–444
Independent Choice Logic (ICL), 186 Infinite words in Linear Temporal Logic,
composite choices, 205 439
disjoint-statements, 188–189 Infinity in IDP system, 319–320
distribution semantics, 196 Influence graphs in gene regulatory
query evaluation for unrestricted networks, 395–396, 403
programs, 221 Informal semantics, 130–131
well defined distribution semantics, Inheritance rules, 197–198
206–207 Inner fixed points in modal mu-calculus,
Indexing 437
join queries, 528 Input data load in Datalog, 94
LogiQL, 341–343 Input*-definitions in IDP system, 311
WAM, 270–272 Instances in Datalog rules, 6
Induction process in IDP system, 291 Integrity constraints in natural language
Inductive definitions in IDP system, 281, processing, 505–506
290 Intelligent backtracking in LDL language,
Inductive Logic Programming (ILP), 405 66–67
Inference engines Intensional databases (IDBs)
IDP system, 283 Datalog, 9–10, 29–31, 62–63
LogicBlox, 336–337 IDP system, 291
XSB system, 70–72 LogicBlox, 336–337
Inference in recursive queries, 533–534 Interaction graphs in gene regulatory
Inference methods networks, 403
IDP system, 294, 296–297, 307–308 Interior nodes in Binary Decision Diagrams,
knowledge base systems, 282 222
Inference rules in Datalog, 5 Intermediate code in WAM, 238–239
Interpretation in first-order logic, 284 Knowledge base systems (KBSs)
Interrogative clauses in natural language EDS relationship, 14
processing, 494 IDP system, 281–283, 294–297
Intractable languages in IDP system, 295 Knowledge bases, 14
Intractable problems in bioinformatics, 360 Knowledge management applications, 544
Intuitionistic assumptions in natural Knowledge representation (KR) languages,
language processing, 492, 495 282
Invariant locations in Timed Safety Knuth’s Attribute Grammars, 480
Automata, 444 Kripke-Kleene model
Inverse scope problem in metabolic Horn programs, 139–140
networks, 399–400 three-valued semantics, 138
Invocation code in procedural languages, well-founded semantics, 169–172, 175
241–242 Kripke structures in CTL model checking,
IRIS prototype, 28 432–435
Irrelevant results in bottom-up evaluation,
49–52 Labeled transition systems (LTSs), 436
Iterative computation in recursive queries, Lambda-calculus in natural language
534 processing, 482–483
Iterative QSQ (QSQI), 60–61 Lambda Prolog, 38
Large fact bases, 47
Jena system, 79 Last call optimization in WAM, 268–270
Jeopardy Man vs. Machine Challenge, 535 Lattices
Join queries, 525–527 Datalog, 24–25
Aditi, 72 logic programming, 125–127
applications, 529–532 Lazy clause generation (LCG), 322
control abstractions, 523 Lazy Grounding, 320
implementation, 527–529 LDL language, 24, 46, 65–67
LDL++ language, 24, 67
K-incompatibility problem, 374 Least fixed points
KBMC (Knowledge Base Model Construc- CTL model checking, 433–434
tion), 212, 215–217 Datalog, 9, 13, 19
KBMS (Knowledge-Base Management Least models
System), 69–70 CTL model checking, 433–434
KBSs (knowledge base systems) Datalog, 9
EDS relationship, 14 Stable Model semantics, 141–142
IDP system, 281–283, 294–297 Least sets in IDP system, 290
Keys, 28 Least upper bounds (lubs) in complete
Knaster-Tarski theorem lattices, 125–126
complete lattices, 125–126 Left-recursive grammars, 484–486
well-founded semantics, 171 Left-recursive shortest path (SPL)
Knowledge-Base Management System deterministic planning, 461–462
(KBMS), 69–70 grid planning, 465
Knowledge Base Model Construction LeProbLog system, 227
(KBMC), 212, 215–217 Leucine in DNA, 365
LFI-ProbLog system, 227 LogicBlox
Libraries for Algebraic Modeling Languages, applications, 543
332 concepts, 335–336
LIFE language, 34 constraint satisfaction, 539–540
Lifted unit propagation in ground-and- vs. Datalog, 87–88
solve, 314–315 experiments, 95–98
liftedunitpropagation option in IDP system, join queries, 531
318 LogiQL by example, 336–339
Ligands in bioinformatics, 365 LOGIN language, 34
Linear assumptions in natural language LogiQL language, 87, 332–334
processing, 492–493 components, 335–336
Linear Datalog, 77 LogiQL by example, 336–339
Linear resolution in Datalog, 17 mathematical programming, 339–348
Linear tabling in Extension Table algorithm, optimization, 346–348
59 production-transportation model, 341–
Linear Temporal Logic (LTL), 438–440 345
Linear time calculus (LTC), 308 Traveling Salesman Problem, 348–353
Linearity in natural language processing LOGSPACE (logarithmic space) data
constraints, 502 complexity, 76
Linguistic applications, 544 Logtalk, 33, 35
LINQ, 282 LOLA system, 73
Lisp for natural language processing, 484 Long-distance dependencies in natural
Lists in data abstractions, 522 language processing, 494–497
Literals Loop formulas in Stable Model semantics,
Answer Set Programming, 369 161–163
first-order logic, 284 LPADs (Logic Programs with Annotated
logic programming, 123–124 Disjunctions), 186, 190–191, 194–
metabolic networks, 402 195
Lixto system, 83 LPS system, 542
Local stratification LTC (linear time calculus), 308, 438–440
Datalog, 20, 24 LTSs (labeled transition systems), 436
Stable Model semantics, 151 lubs (least upper bounds) in complete
Location-specifier variables in NDlog, 81 lattices, 125–126
Locations in push-down systems, 451 Lunar Science System in natural language
Logarithmic space (LOGSPACE) data processing, 482
complexity, 76
Logging in nondeterministic Prolog, 264 Machine learning in Semantic Web, 78
Logic in IDP system, 298–302 Magic Sets, 50–52
Logic Programming, 123–125 declarative networking, 81
Datalog, 11–12, 32–36 bottom-up vs. top-down, 61
Logic Programs with Annotated Dis- SALAD prototype, 66
junctions (LPADs), 186, 190–191, Starburst, 73
194–195 Magic Templates
Logical variables in Datalog rules, 5–6 bottom-up evaluation, 51
Magic Templates (continued) metabolic networks, 402
CORAL, 68 Minimization inference in IDP system, 297
Maier, David, 15–16 Minimum-cardinality in Haplotype
Markov Logic Networks (MLNs), 212–213 inference, 381
Mathematical programming in LogiQL, Minimum functions in IDP system, 288
339–348 MiniZinc system, 321–322
Maximum A Posteriori inference, 226 Minus signs (-) in Datalog rules, 5, 7–8
Maximum functions in IDP system, 288 Mixed integer programming (MIP)
Maximum likelihood criteria in Haplotype constraint satisfaction, 539
inference, 384 LogiQL, 346–348
Maximum quartet consistency (MQC), 378 MLNs (Markov Logic Networks), 212–213
Measurable sets in distribution semantics, Modal mu-calculus
200 model checking, 436–438
MegaLog system, 69–70 timed, 447–448
Meld applications, 544 Model checking in IDP system, 297, 307
MELD language, 83 Model checking problems
Membership relations in F-logic, 34 finite-state model checking, 430–440
Memo-ization in WAM, 275 linear-time vs. branching time logics,
Memoing 438–440
natural language processing, 482–483 modal mu-calculus, 436–438
top-down evaluation, 54–58 tabled logic programs, 429
Memory Model expansions in IDP system, 296–297,
deterministic Datalog, 245–246 308
deterministic Prolog, 256–258 Model generation in IDP system, 296
nondeterministic Prolog, 263–265 Model-theoretic database views, 13
Mendelian rules of inheritance, 197–198 modelexpand procedure in IDP system,
Meta-interactions in gene regulatory 302–303
networks, 403 models function in CTL model checking,
Metabolic networks, 398–399 435–436
Answer Set Programming encoding, Modular languages in IDP system, 295–296
401–402 Monadic Datalog, 64, 77, 83
completion, 399 Monotonicity
instances, 411–412 Horn programs, 129–130
modeling, 399–400 logic programming, 125–127
systems biology, 394 Stable Model semantics, 164
Methionine in DNA, 365 well-founded semantics, 170, 173
Metropolis-Hastings algorithm, 226 Montague grammar, 483
Minimal languages in logic programming, Monte Carlo simulations, 226
124 Most precise structure in ground-and-solve,
Minimal models 315
Horn programs, 137 Most Probable Explanation inference, 226
Stable Model semantics, 143–144, 154 movreg instruction in WAM, 253
Minimal repairs MQC (maximum quartet consistency), 378
gene regulatory networks, 397 mRNA synthesis in systems biology, 394
MRRPS 3.0 databases, 13 Negation
MTZ model, 348–349 Answer Set Programming rules, 367
Multiple inference methods in IDP system, constraint satisfaction, 537
296–297 control abstractions, 524
Mutual embedding, 67–68 CORAL, 68
Myria system, 82–83 CTL model checking, 434–435
Datalog, 17–22
n-ary symbols in first-order logic, 284–285 EKS, 69
N-queens puzzle, 540–541 Horn programs, 138
Nail language, 68–69 IDP system, 280
NailLog system, 68–69 join queries, 527, 529
Naïve approach to bottom-up evaluation, constraint satisfaction, 537
48–49 LogicBlox, 337
Names Prolog, 121–122
Datalog variables, 10 recursive queries, 532–533
IDP system, 298–300 Nested Relation model, 34
Namespaces in IDP system, 283, 305–306 Networks
Native state in proteins, 365 Bayesian, 215–219
Natural language processing (NLP), 477– declarative, 80–82
481 gene regulatory, 394–398
assumption grammars, 492–497 metabolic. See Metabolic networks
constraints, 498–504 phylogenetics, 379
history, 481–483 systems biology, 394–397
hypothetical reasoning, 504–508 Nilsson’s probabilistic logic, 211–212
probabilistic logic programming, 508– NLP. See Natural language processing (NLP)
509 Non-First-Normal-Form model, 34
recursive queries, 535 Non-ground rules in Answer Set Program-
semantic structures, 490–492 ming, 367
syntactic features and agreement, 488– Non-monotonicity
489 Answer Set Programming, 368
syntax trees, 489–490 Stable Model semantics, 164
tabled logic programming and Definite Non-recursive Datalog, 77
Clause Grammars, 484–488 Non-stratified programs, 207–210
tabling origins, 483–484 Nondeterministic Prolog, 263–268
toy DCG, 509–511 Nonmonotonic reasoning in stable-model
Natural-language questions in Chat-80 semantics, 122
system, 12 Normalized SLP, 211
Natural languages in IDP system, 295–296 Notational convenience in Datalog rules, 10
Natural models in IDP system, 283 NP-complete problem
nbmodels option in IDP system, 319 Answer Set Programming, 368
NDlog language phylogenetic trees, 374
applications, 544 RNA secondary structure prediction,
declarative networking, 81 389–390
NED-2 Intelligent Information System, 63 Stable Model semantics, 144–145
NP-hard problem in Datalog, 21–22 Datalog, 64
NU-Prolog system, 72 IDP system, 317–319
Nucleotides in DNA, 362–364 LogiQL, 346–348
Nulls, 26 WAM, 268–270
Optimization inferences in IDP system,
O-logic language, 34 307–309
Object-logic (OLOG), 33 Optimization Services Instance Language
Object-oriented programming (OSiL), 347
Datalog, 32–36 Orchestra, 83
description, 545–546 Order sorted logic in IDP system, 293
Object-oriented Prolog (OOP), 32 Ordering
Objects Binary Decision Diagrams, 222
Coherent Definition Framework, 530 join queries, 528
data abstractions, 521–522 Ordinal numbers in Knaster-Tarski
Oblivious chases, 27 theorem, 126
Ockham’s principle of parsimony, 381 OSiL (Optimization Services Instance
ODBC interface, 72 Language), 347
ODC (ontology-directed classifier), 531 Out-of-order message delivery in extended
OLDT resolution finite automata, 449
natural language processing, 483 Output*- definitions in IDP system, 311–312
top-down evaluation, 59 Output vocabulary in IDP system, 306
OLOG (object-logic), 33 Overlay networks in declarative networking,
OLTP (Online transaction processing), 531 81
One-step provability operator, 128–130 Overlog, 544
Horn programs, 133–135 OWL in Semantic Web, 78
Online transaction processing (OLTP), 531
Ontobroker system, 34, 545 P-log, 186, 208–210
Ontology-directed classifier (ODC), 531 P2 facility, 81
Ontology management in join queries, P2Ps (predicate to predicate mappings),
529–531 337–338
OOP (object-oriented Prolog), 32 Pairwise incompatible composite choices,
Open symbols in IDP system, 289 201–202
Open-world assumptions for databases, 13 Parallelism in Aditi, 72
Operators Parameterized systems in infinite-state
complete lattices, 126–127 model checking, 458
CTL model checking, 432 Parameters
Horn programs, 129 IDP system, 289
logic programming, 125–127 procedural languages, 242–243
Operons in gene regulatory networks, 396 Parsimony in Haplotype inference, 381
OPL Technologies, 332 Parsing natural language processing,
Optimal plan generation problem in tabled 482–483
logic programs, 429 Partial bottom-up evaluation, 51
Optimization Partial functions in IDP system, 286–287,
Answer Set Programming, 370 299
Partial stable model for well-founded grids, 463–467
semantics, 171 Planning via tabled search, 458–469
Partial structures for first-order logic, PLP. See Probabilistic logic programming
285–286 (PLP)
Partial supported models for well-founded PODS (Principals of Database Systems),
semantics, 168–169 12–13
Passing parameters in procedural Pointer analysis in recursive queries, 535
languages, 242–243 Points-to analyses, 79–80
Path quantifiers in CTL model checking, Polymeric strands in DNA, 362
431 Polynomial time (PTIME) data complexity
pD language, 186, 221 bioinformatics, 360
PDSs (push-down systems), 451–458 Datalog, 76–77
Peptidic bonds in DNA, 365 Polytomies, 378
Pereira, Fernando, 481 Porter, Harry, 16
Perfect-model semantics, 122 Positive dependency graphs in Stable Model
Perfect models semantics, 161
Datalog programs, 19–20 Possible atoms
Horn programs, 137 Herbrand interpretations, 128
Perfect phylogeny in Haplotype inference, well-founded semantics, 167
384 Post-processing in IDP system, 317
Periods (.) in IDP system, 300 postprocessdefs option in IDP system,
Persistent data in XSB, 72 318
Petri nets in metabolic networks, 399 Practical limitations in Datalog, 74
PHA. See Probabilistic Horn Abduction Pre-fixpoints in logic programming, 125–
(PHA) 127
Phylogenetics, 370–371 Pre-images in timed modal mu-calculus,
Answer Set Programming encoding, 447
374–377 Precision order
inference, 361 first-order logic, 286
instances, 408–409 Herbrand interpretations, 128
modeling, 371–374 Predicate logic in IDP system. See
Phyloinformatics, 379 IDP (Imperative-Declarative
Picat-based planners Programming)
domains, 467 Predicate stratification in Datalog, 20, 24
grids, 464–466 Predicate symbols in logic programming,
tabled logic programs, 458–459 123
Pigeonhole problem, 313 Predicate to predicate mappings (P2Ps),
PITA systems, 185 337–338
query evaluation for restricted programs, Predicates
224–225 Datalog, 22–23
query evaluation for unrestricted IDP system, 298
programs, 222, 224 LogicBlox, 335–337
Planning examples, 463 LogiQL, 344
domains, 467–469 Prolog, 38–39, 274
Prediction problem in gene regulatory distribution semantics. See Distribution
networks, 397 semantics
Prefix language in push-down systems, evidential probability, 213–214
452–453 inferencing, 219–227
Preprocessing in IDP system, 309–313 Logic Programs with Annotated
Prescriptive analysis in constraint Disjunctions, 190–191
satisfaction, 539–540 Markov Logic Networks, 212–213
Primary keys, 28 natural language processing, 508–509
Primary sequences of proteins in RNA, 364 Nilsson’s probabilistic logic, 211–212
Principals of Database Systems (PODS), PRISM, 189–190
12–13 Probabilistic Horn Abduction, 188–189
printmodels procedure in IDP system, ProbLog, 191
302–303 Stochastic Logic Programs, 211
PRISM system, 185–186 Probability atoms in P-log, 209
applications, 544 Probability distributions in Markov Logic
bioinformatics, 405 Networks, 212–213
inferencing in PLP, 226 Probability measures in distribution
natural language processing, 508 semantics, 200–205
PRISM language, 189–190 Probability of pairwise incompatible sets of
query evaluation for restricted programs, composite choices, 202
224–226 Probability space in distribution semantics,
well defined distribution semantics, 200
206–207 PROBE system, 13
PRISMAlog language, 63 ProbLog system, 186, 191
ProB language, 544 applications, 544
Probabilistic Context-Free Grammars Binary Decision Diagrams, 185, 187
inferencing in PLP, 226 bioinformatics, 405
natural language processing, 508 inferencing in PLP, 226
Probabilistic facts in ProbLog, 191 Nilsson’s probabilistic logic, 212
Probabilistic Horn Abduction (PHA), 186, query evaluation for unrestricted
188–189 programs, 221–223
composite choices, 205 ProbLog2, 226
distribution semantics, 194–196 Procedural integration
query evaluation for restricted programs, IDP system, 294
224 knowledge base systems, 283
well defined distribution semantics, Procedural languages run-time environ-
206–207 ments, 239–243
Probabilistic logic programming (PLP), Procedures in IDP system, 297, 302–303
185–187 proceed instruction in WAM, 248
Annotated Probabilistic Logic Program- Process calculi in metabolic networks, 399
ming, 214–215 Process logic, 39
background and assumptions, 187–188 Product functions in IDP system, 288
Bayesian networks, 215–219 Production-transportation model, 341–345
Products of reaction in metabolic networks, Protein structure prediction, 389
399 bioinformatics, 361
Program analysis instances, 410
applications, 544 modeling, 389–390
Datalog, 79–80 Proteins, 364–366
recursive queries, 535 Pseudo-knots in RNA secondary structure
Program completion, 122 prediction, 388–389
Program points in push-down systems, PTIME (polynomial time) data complexity
451–452 bioinformatics, 360
Programs in Datalog, 8–9 Datalog, 76–77
project predicate in infinite-state model Pure SLP, 211
checking, 441 Push-down systems (PDSs), 451–458
Prolog putcon instruction in WAM, 248, 255
aggregation, 24 putnum instruction in WAM, 255
Constraint Handling Rules, 498 putstr instruction in WAM, 260
CTL model checking, 434 puttval instruction in WAM, 255
Datalog relationship, 10–12 puttvar instruction in WAM, 255, 269
deterministic, 256–263 putuval instruction in WAM, 270, 273
natural language processing, 479–481, putval instruction in WAM, 248, 252, 255,
504–508 270
negation, 17–18, 121–122 putvar instruction in WAM, 252, 269
nondeterministic, 263–268 Puzzles
predicates, 38–39 applications, 544
variables and constants, 244–246 constraint satisfaction, 540–541
WAM, 238, 273–274
Prolog++, 33 QBF (Quantified Boolean Form), 321
Pronoun references in natural language QSQ (Query-SubQuery) approach, 60–61
processing, 494–495 QSQI (iterative QSQ), 60–61
Proof-theoretic database views, 13 QSQR (recursive QSQ), 60–61
propagate procedure in IDP system, 307 Quantification
Propagation inference in IDP system, 296, IDP system, 312
307 natural language processing, 490–492
Propagation rules Quantified Boolean Form (QBF), 321
EKS, 70 Quantifiers in join queries, 529
natural language processing, 499–500 Queries
well-founded semantics, 175 Datomic, 88–90
Properties for natural language processing Flora-2, 91–93
constraints, 501 LogicBlox, 87–88, 336
Property grammars, 498, 502 minimization inference, 297
Propositional logic in Stable Model recursion, 532–536
semantics, 157–158 Stochastic Logic Programs, 211
Propositions in CTL model checking, 431 XSB, 85–87
Protein interaction networks, 394 Query complexity in Datalog, 76
Query evaluation natural language processing, 484–486
inferencing in PLP, 220 PROBE system, 13
restricted programs, 224–226 push-down systems, 451–453
unrestricted programs, 221–224 rules and queries, 532–536
Query inferences in IDP system, 307 Recursive definitions in IDP system, 281
Query optimization Recursive QSQ (QSQR), 60–61
Datalog, 64 reducedgrounding option in IDP system,
XSB, 71 318
Query-SubQuery (QSQ) approach, 60–61 Reduct program
Question-answering system QA1, 13 Answer Set Programming, 367–368
Question mark–dash (?-) in Datalog rules, Stable Model semantics, 140–141
7–8 Reduction in Binary Decision Diagrams,
222
Radial restraints, 28 References in evidential probability, 213–
Random selection rules in P-log, 209 214
Random switch names in PRISM, 189 Referential integrity, 28
Ranges in Answer Set Programming, 369 Refinements in IDP system, 291–292
RBAC (role-based access control), 531–532 Regular-expression paths, 541–542
RCP register in WAM, 264, 266, 274 Regulatory graphs, 403
RCPheaptop register in WAM, 267 Regulatory networks in systems biology,
RDBMSs (relational database management 394–397
systems), 11 Reiter’s hitting set algorithm, 205
RDF format, 34, 78–79 Relation areas in Datalog, 4
RDL system, 73 Relation symbols in Stable Model
Re-computations in natural language semantics, 151
processing, 482 Relational database management systems
Reachability (RDBMSs), 11
bottom-up evaluation, 51–52 Relational semantics for partial functions,
extended finite automata, 450 287
push-down systems, 451, 455, 457 Relationships
Timed Safety Automata, 446–447 control abstractions, 523
Reachable metabolites, 399 data abstractions, 522
Reactants of reaction, 399 Datalog rules, 8
Reactive rules Relative clauses in natural language
description, 40 processing, 494, 497
LogicBlox, 338–339 Relativized Fages Lemma, 156
Real-time systems in infinite-state model Repairs
checking, 444–448 gene regulatory networks, 396–397
Reasoning in Semantic Web, 78 metabolic networks, 402
Records in deterministic Prolog, 256–258 Repeated derivation in bottom-up
Recursion evaluation, 48–49
aggregation, 23–24 Resets in Timed Safety Automata, 445
control abstractions, 523–526 Residual programs
Datalog rules, 8, 47 modal mu-calculus, 438
Stable Model semantics, 163 Datalog, 5–9, 23, 25–28
Residues, 31–32 destructive updates in, 42–46
Resolution theorem proving in Logic IDP system, 281, 289–290, 300
Programming, 11 logic programming, 123–125
Resource allocation in constraint LogicBlox, 337–339
satisfaction, 540 LogiQL, 347
Restricted chases, 27 natural language processing constraints,
Restricted programs, query evaluation for, 498–504
224–226 program analysis, 80
retrymeelse instruction in WAM, 266 recursion, 532–536
retrymeelsefail instruction in WAM, 266 Stable Model semantics, 140–145
Reusing knowledge in IDP system, 283 Rules of inheritance, 197–198
Rewriting natural language processing Run-time environments in procedural
rules, 501 languages, 239–243
Rheaptop register in WAM, 258, 267 Run-time stacks in C language, 239–240
Ribonucleic Acid (RNA) sequences, 360,
362–364 Safe Datalog rules, 10, 23
RIF (Rule Interchange Format), 78 SALAD prototype, 66–67
Right-recursive shortest path (SPR), 462 Sample space in distribution semantics,
Rmode register in WAM, 258–259 200
RMSD (root-mean-square deviations), 393 Satisfiability checking in IDP system, 307
RNA (Ribonucleic Acid) sequences, 360, Satisfiability solvers in Stable Model
362–364 semantics, 161
RNA secondary structure prediction, 384– Sato and Kameya’s definition in distribution
386 semantics, 206
Answer Set Programming encoding, Scalability in IDP system, 319–320
386–388, 390–392 Scope in metabolic networks, 399, 402
bioinformatics, 361, 364 SD3 language, 536
instances, 409–410 SDS (Smart Data System), 72
modeling, 386 Se-models in Stable Model semantics,
Robetta prediction, 393 164–165
Rodin toolset, 287 Search in ground-and-solve, 315–317
Role-based access control (RBAC), 531–532 Search-space pruning mechanism, 175
Root-mean-square deviations (RMSD), 393 Second-order existential quantifiers in
Rooted trees in phylogenetics, 378 LogiQL, 341
Rsreg register in WAM, 258–259 Secondary structure prediction in RNA,
RT language in trust management, 536 384–389
RTrail register in WAM, 264, 266–267 Security applications, 544
Rule-based reasoning in Semantic Web, 78 Security policy frameworks
Rule connectives in logic programming, 123 join queries, 531–532
Rule Interchange Format (RIF), 78 recursive queries, 535–536
Rules Seed metabolites in metabolic networks,
Answer Set Programming, 367 399, 401
control abstractions, 523–524 Selections in distribution semantics, 193
Self-justified models in Stable Model tabled logic programs, 458–469
semantics, 155 Simplification
SEM-CP-logic system, 227 ground-and-solve, 314
Semantic equivalence parsing, 483 natural language processing, 499
Semantic networks, 14 Stable Model semantics splitting, 150
Semantic query optimization, 31 Single Nucleotide Polymorphism (SNP), 380
Semantic Web, 78–79 Single quotes (’) in Datalog, 4
Semantics Situation calculus, 40
distribution. See Distribution semantics Skolem constants, 26
natural language processing, 490–492 SLD resolution and trees
Stable Model. See Stable Model semantics Datalog, 46–47
(SMS) Logic Programming, 11
well-founded. See Well-founded query evaluation, 26
semantics (WFS) SLDNF
Semi-naïve approach in bottom-up proof procedure, 284
evaluation, 48–49 query evaluation for unrestricted
Semi-structured data, 34 programs, 224
Semicolons (;) in LogicBlox, 337 SLG algorithm
Sentences in first-order logic, 284 natural language processing, 482–483
Serial conjunction, 45 top-down evaluation, 59
Serine in DNA, 365 SLIPCASE system, 227
Sets SLIPCOVER system, 227
Datalog, 23–25 SLPs (Stochastic Logic Programs), 211
IDP system, 288, 290 Smart Data System (SDS), 72
Sets of actions SMS. See Stable Model semantics (SMS)
data abstractions, 522 SNP (Single Nucleotide Polymorphism), 380
labeled transition systems, 436 SociaLite big data, 82–83
Sets of equivalent states in tabled logic Sokoban planning domain, 467–469
programs, 429 solve procedure in LogiQL, 347
Shortest-path problem in deterministic SolverBox, 331–333. See also LogiQL
planning, 459–463 language
SICStus, 33 Datalog relation, 333–334
Side chains in amino acids, 365 Traveling Salesman Problem, 348–353
Sideways information passing in bottom-up Soundness in probabilistic logic program-
evaluation, 51 ming, 192
Sign Consistency Model in gene regulatory Spark, 83
networks, 396, 403 SPARQL language, 76
Signaling networks in systems biology, 394 Spatial model in protein structure
Simpagation rules in natural language prediction, 389
processing, 499 Speciation in phylogenetic trees, 371
Simple planning via tabled search Specificity in evidential probability, 214
answer subsumption and the shortest- SPL (left-recursive shortest path)
path problem, 459–463 deterministic planning, 461–462
planning examples, 463–469 grid planning, 465
SPLB (bounded left-recursive shortest path) Static filtering in bottom-up evaluation, 51
deterministic planning, 462–463 Static models in bioinformatics, 361
grid planning, 465–466 Static predicates in Prolog, 274
splitdefs option in IDP system, 318 Statistical approach in natural language
Splitting processing, 508–509
composite choices in distribution stdoptions components in IDP system,
semantics, 202–203 317–319
Stable Model semantics, 145–150 Steady state in gene regulatory networks,
Splitting Algorithm in query evaluation for 396
unrestricted programs, 221 Stem loops in RNA, 386
Splitting Lemma, 148–150 Stochastic Logic Programs (SLPs), 211
SPR (right-recursive shortest path), 462 Stored predicates, 12
SQL in program analysis, 79 Stored procedures, 39
Stable Model semantics (SMS), 121–123 Stratification
aggregation, 24–25 Datalog, 20, 24
Answer Set programming, 76 Stable Model semantics, 145–152
bioinformatics, 361 Stratified negation
constraint satisfaction, 537–538 Horn programs, 138
control abstractions, 525 recursive queries, 533
Datalog, 20–22, 77 Stratified programs
definition and basic properties, 140–145 CTL model checking, 434–435
distribution semantics, 208–210 Datalog, 19
equivalence and strong equivalence, distribution semantics, 192–195
163–165 with function symbols, 198–207
loop formulas, 161–163 Horn programs, 137
one-step provability operator, 128–130 negation, 122
stratification and splitting, 145–150 Strict refinements in IDP system, 292
terminology and notation, 123–131 Strong equivalence in Stable Model
tightness, 155–158 semantics, 163–165
StackTop pointer in deterministic Datalog, Strong propagation rule in well-founded
250 semantics, 175
Starburst project, 73 Structural studies in bioinformatics, 360–
State identifiers, 40–42 361
State quantifiers in CTL model checking, Structures in IDP system, 297, 299–300
431 components, 305–306
State space consistency, 309–310
deterministic planning, 459 Subclasses in F-logic, 34
grids, 463–464 Subroutines, 39
searches in tabled logic programs. See Subsets in distribution semantics, 200
Tabled logic programs Subsumptive demand transformation in
States bottom-up evaluation, 52, 61
Büchi PDS, 455 Subsumptive tabling
gene regulatory networks, 396 bottom-up evaluation, 52
Timed Safety Automata, 445 top-down evaluation, 55
Subtours in LogiQL, 349–353 simple planning via tabled search,
Sum functions in IDP system, 288 458–469
Super-tokenizer tool, 534–535 Tabling
Supplementary Magic Sets, 51 Extension Table algorithm, 59
Supported models join queries, 528
Clark’s completion, 139 origins, 483–484
Stable Model semantics, 152–163 top-down evaluation, 54–58
well-founded semantics, 168–169 WAM extensions, 274–275
SWI Prolog, 483, 496 XSB, 71
Switch names in PRISM, 189 Tags in deterministic Prolog records,
switch_on_constant instruction in WAM, 256–257
271–272 Target metabolites, 399, 401
switch_on_structure instruction in WAM, Targets in evidential probability, 213–214
271–272 Taxonomic units in phylogenetics, 370, 375
switch_on_type instruction in WAM, 271 Teaching applications, 544
Syllog system, 14 Temporal operators in CTL model checking,
Symbolic unit propagation in ground-and- 432
solve, 315 Terminal well-founded inductions in IDP
Symbols in IDP system, 289 system, 292
Symmetries Terms
Horn programs, 137 first-order logic, 284
IDP system, 313 IDP system, 297, 303–304
Symmetry breaking in IDP system, 283, 296 logic programming, 123
Symmetry detection in IDP system, 296, 308 Ternary predicate models in timed modal
symmetrybreaking option in IDP system, mu-calculus, 447
318 Text processing in recursive queries, 534–
Syntax in natural language processing, 535
477–478 TGDs (tuple-generating dependencies),
features, 488–489 26–28, 32
trees, 489–490 Theoretical limitations in Datalog, 74
Systems biology, 366 Theories
gene regulatory networks, 395–398 first-order logic, 284
metabolic networks, 398–404 IDP system, 297, 300–302
overview, 393–395 Three-valued interpretation in well-founded
Systems studies in bioinformatics, 360–361 semantics, 167, 169
Three-valued setting in Horn programs, 138
Table functions in Starburst, 73 Three-valued stable model in well-founded
Tabled evaluation with constraints, 441–444 semantics, 171
Tabled logic programs, 427 Thymine in DNA, 362
discussion, 469–471 Tightness
finite-state model checking, 430–440 Stable Model semantics, 154–158
infinite-state model checking. See well-founded semantics, 171
Infinite-state model checking Timed modal mu-calculus, 447–448
natural language processing, 484–488 Timed Safety Automata (TSA), 444–448
Timed systems in tabled logic programs, Trust management (TM) for recursive
429 queries, 535–536
timeFunc function, 312 trustmeelsefail instruction in WAM, 266
Timeless assumptions in natural language Truth order in first-order logic, 286
processing, 496–497 trymeelse instruction in WAM, 265–266,
timeOf function, 312 272
TM (trust management) for recursive trymeelsefail instruction in WAM, 265–266
queries, 535–536 Tryptophan in DNA, 365
tnot operation in CTL model checking, 435 TSA (Timed Safety Automata), 444–448
Top-down evaluation in Datalog, 9, 52–60 TSP (Traveling Salesman Problem), 348–
Topicalization in natural language 353
processing, 494 Tuple-generating dependencies (TGDs),
Toy DCG, 509–511 26–28, 32
Trail stacks in nondeterministic Prolog, 264 Tuples in IDP system, 299
Transaction Logic, 36 Turing’s computability for Horn programs,
extensions, 542 135
rule updates, 43–46
Transactions in LogicBlox, 335–336 Two-input one-step provability operator,
Transcription in biology, 362 129
Transcriptional regulatory networks, 394 Two-valued structures in first-order logic,
Transfinite induction in Knaster-Tarski 284
theorem, 126 Type groups in LOLA system, 73
Transfinite sequences in Knaster-Tarski Types
theorem, 126 Datalog, 28–32
Transformation IDP system, 293–294, 298–302, 304–305
Datalog, 8
query evaluation for unrestricted UDAs (user-defined aggregates), 67
programs, 221 UNA (unique names axiom), 304
Transitions Unbounded dependencies in natural
labeled transition systems, 436 language processing, 494
Timed Safety Automata, 444–446 Unbounded planner for grids, 466
Transitive closure in IDP system, 290 Uncertain inferences, 186
Translation in biology, 362 Undecidable languages, 295
Translators in Prolog, 11 Unfounded sets
Traveling Salesman Problem (TSP), 348–353 Stable Model semantics, 159
Traversal recursion in PROBE system, 13 well-founded semantics, 173–174
Tree reconstruction in phylogenetics, unicon instruction in WAM, 259
371–374 Unidirectional Grid domain in grid
Triggers for reactive rules, 40 planning, 466
True atoms Uniform equivalence
first-order logic, 285 Datalog, 64
Herbrand interpretations, 128 Stable Model semantics, 166
Horn programs, 138 Union of conjunctive queries in Datalog, 64
well-founded semantics, 167, 170 Unique names axiom (UNA), 304
Unique stable models View definitions, 6–7
Stable Model semantics, 144, 146, 148 Virtual machines. See Warren Abstract
well-founded semantics, 170 Machine (WAM)
Uniqueness constraints, 28 Virtual relations in Datalog, 5, 8
unitval instruction in WAM, 261 Visualization inference in IDP system, 297
unitvar instruction in WAM, 261 Viterbi paths, 226–227
unival instruction in WAM, 261 Vocabulary
univar instruction in WAM, 261 first-order logic, 284
Universal models, 27 IDP system, 297–299, 306
Unknown (truth value of) atoms logic programming, 123
Herbrand interpretations, 128
Horn programs, 138 Warren, David H. D., 37, 238, 481–482
well-founded semantics, 167 Warren, David Scott, 15–16, 480–482
Unreachable nodes in Datalog, 18–19 Warren Abstract Machine (WAM), 237
Unrestricted programs, query evaluation deterministic Datalog, 244–256
for, 221–224 deterministic Prolog, 256–263
Unrooted trees in phylogenetics, 378 environment trimming, 272–273
Unsat-core extraction in IDP system, 308 examples, 255–256
Updates ground facts, 250–251
in Datalog, 38–46 indexing, 270–272
extensions, 542–543 instruction summary, 254–255
rule bodies, 43–46 last call optimization, 268–270
rule heads, 42–43 natural language processing, 483
Uracil in DNA, 362–363 nondeterministic Prolog, 263–268
User-defined aggregates (UDAs), 67 procedural languages, 239–243
Prolog features, 273–274
VALIDITY database, 70 simple programs, 246–249
Value-passing automata in data- StackTop pointer, 250
independent systems, 448 tabling extensions, 274–275
Value types in LogicBlox, 336 temporary variables, 252–254
Variables variables and constants, 244–246
Datalog, 25–28 Watson-Crick pairs, 364
Datalog rules, 5–6, 10 wCSP in bioinformatics, 405
deterministic Datalog, 244–246, 251–254 WebDamlog language, 83–84
IDP system, 294, 298, 302, 319–320 WebLog, 36
NDlog, 81 Well defined distribution semantics, 205–
Stable Model semantics, 145, 151, 163 207
WAM, 269–270 Well-founded induction in IDP system,
well-founded semantics, 175 291–293
Variant tabling Well-founded semantics (WFS), 121–123
bottom-up evaluation, 52 aggregation, 24–25
top-down evaluation, 55 constraint satisfaction, 537–538
verbosity option in IDP system, 319 control abstractions, 525
Versioned predicates in LogicBlox, 338 Datalog, 20–21, 77
distribution semantics, 207–208 Coherent Definition Framework, 530–531
IDP system, 291 vs. Datalog, 85–87
one-step provability operator, 128–130 experiments, 94–98
definition, 166–176 Flora-2, 91
terminology and notation, 123–131 grid planning, 466
WFM (workforce management) in natural language processing, 483–484
constraint satisfaction, 540 planning domains, 469
WFS. See Well-founded semantics (WFS) SLG algorithm, 59
Womb Grammars, 498, 503–504 variant tabling, 52
Workforce management (WFM) in xsb option in IDP system, 317–318
constraint satisfaction, 540
Workspaces in LogicBlox, 335–336 YAP Prolog
Worlds in distribution semantics, 186, natural language processing, 483
193–194, 201 probability computations, 223
Yedalog, 83
XMC applications, 544
XSB Logic Programming System, 70–72 Zinc language, 322
aggregation, 24–25 Zones in Timed Safety Automata, 446
applications, 543 Zuker’s algorithm, 388
Biographies
Editors
Michael Kifer is a professor with the Department of Computer Science, Stony
Brook University, USA. He received his Ph.D. in Computer Science in 1984 from the
Hebrew University of Jerusalem, Israel, and the M.S. degree in Mathematics in 1976
from Lomonosov Moscow State University, Russia. Since 2012, Dr. Kifer has served
as the President of the Rules and Reasoning Association (RRA). His work spans the
areas of knowledge representation and reasoning (KRR), logic programming, Web
information systems, and databases. He published four textbooks and numerous
articles in these areas as well as co-invented F-logic, HiLog, Annotated Logic, and
Transaction Logic, which are among the most widely cited works in Computer
Science and Semantic Web research, in particular. Twice, in 1999 and 2002, he
was a recipient of the prestigious ACM-SIGMOD “Test of Time” awards for his
works on F-logic and object-oriented database languages. In 2008, he received
SUNY Chancellor’s Award for Excellence in Scholarship. In 2013, Dr. Kifer received
another prestigious award: The 20-year “Test of Time” award from the Association
for Logic Programming (ALP) for his work on Transaction Logic. In 2013, Kifer
co-founded Coherent Knowledge Systems, a startup that commercializes semantic
and KRR technologies.

Yanhong Annie Liu is a professor of Computer Science at Stony Brook University.
She received her B.S. from Peking University, M.Eng. from Tsinghua University,
and Ph.D. from Cornell University, all in Computer Science. Her primary research
is in languages and algorithms, especially on systematic design and optimization,
centered around incrementalization—the discrete counterpart of differentiation in
calculus. Her current research focus is on languages and efficient implementations
for secure distributed programming and for declarative system specifications. She
has published in many prestigious venues, taught in a wide range of computer
science areas, and presented over 100 conference and invited talks worldwide.
She serves on the ACM Books Editorial Board as the Area Editor for Programming
Languages, and she is a member of IFIP WG 2.1 on Algorithmic Languages and
Calculi. Her awards include a State University of New York Chancellor’s Award for
Excellence in Scholarship and Creative Activities.

Contributors
Molham Aref earned a Bachelor’s degree in Computer Engineering (1989) and two
Master’s degrees in Electrical Engineering (1990) and Computer Science (1991)
from the Georgia Institute of Technology. He has more than 27 years of experience
developing high-value enterprise analytical solutions across various industries,
including retail, telco, and financial services. He started his career as a software
engineer and scientist at AT&T and founded Relational AI, LogicBlox, Predictix,
Optimi, and Brickstream.

Bart Bogaerts studied mathematics at the University of Leuven (KU Leuven), Bel-
gium. His Master’s thesis was titled “Unieke factorisatie in reguliere lokale ringen
(Unique Factorisation in Regular Local Rings)” and was supervised by Dr. Jan Schep-
ers. He graduated summa cum laude in July 2011. In September 2011, he joined the
DTAI (Declaratieve Talen en Artificiele Intelligentie/Declarative Languages and Arti-
ficial Intelligence) research group in the Department of Computer Science at KU
Leuven to investigate knowledge representation and reasoning techniques, super-
vised by Prof. Dr. Marc Denecker.
As his research interests evolved, Prof. Dr. Joost Vennekens and Prof. Dr. Jan
Van den Bussche became his co-supervisors. He defended his Ph.D. thesis, entitled
“Groundedness in Logics with a Fixpoint Definition” in June 2015. He graduated
with the congratulations of the examination board, which included, among others,
the renowned researchers Thomas Eiter and Gerhard Brewka. In September 2015,
Bart Bogaerts joined the Computational Logic group of Aalto University (Espoo,
Finland) to work with Tomi Janhunen, among others, on Logic Programming, SAT
and QBF. In October 2016, he returned to his alma mater, KU Leuven, where he
currently works as an FWO post-doctoral fellow.

Dr. Conrado Borraz-Sánchez has over 10 years of mathematical modeling expe-
rience, solving real-life decision-making problems with extensive computational
experience in optimization methods, high-performance computing, and experi-
mental design. After obtaining his doctorate from the University of Bergen, Norway,
in 2010, Dr. Borraz-Sánchez held a postdoctoral position at Northwestern University
(2011–2014) where he was engaged in multiple projects sponsored by the U.S. De-
partment of Transportation and U.S. Class-I railroad companies on data analytics
for track maintenance, scheduling and routing problems, infrastructure deploy-
ment of electric vehicle charging station problems, and WDM optical network
problems. He also held a postdoctoral position at Los Alamos National Labora-
tory (2014–2016) where he developed mathematical-model-based approaches for
Energy Infrastructure Planning projects sponsored by the U.S. Department of En-
ergy, including natural gas and grid-power transmission, expansion, and design
problems. Since 2016, Dr. Borraz-Sánchez has been part of an R&D group in one
of the Big Four firms, KPMG, where his wide range of engagement projects in-
cludes developing and deploying solutions for Professional Sport Leagues like the
National Basketball Association (NBA), the Women’s National Basketball Associa-
tion (WNBA), Major League Baseball (MLB), and the National Basketball League
(NBL, Australia, and New Zealand).

Maurice Bruynooghe has been professor emeritus at the Katholieke Universiteit
Leuven since October 2015. His Ph.D. work on logic programming was completed
in 1979. It provided the foundation for starting a research group on Declarative
Languages and Artificial Intelligence in the 1980s and for participating in several
European ESPRIT projects. He supervised Luc De Raedt’s Ph.D. work on inductive
logic programming and together they initiated the machine learning subgroup. His
research interests cover the three research areas of the group: design, analysis, and
implementation of declarative programming languages; knowledge representation
and reasoning; and machine learning and data mining. He was Editor-in-Chief of
the Journal of Logic Programming (1991–2000) and of Theory and Practice of Logic
Programming (2001–2005); the latter journal started as a protest against the exces-
sive price policy of Elsevier. He is a fellow of ECCAI, the European Coordinating
Committee for Artificial Intelligence.

Henning Christiansen is a full professor of Computer Science at Roskilde Univer-
sity, Denmark. He obtained his Master’s degree from Aarhus University, Denmark,
in 1981 and his Ph.D. from Roskilde University in 1988. His main interests in-
clude logic and constraint programming, abductive reasoning, probabilistic-logic
models, language analysis, and artificial intelligence. He also works with artistic
interactive installations and robots in scenic contexts.

Verónica Dahl is an Argentine/Canadian mother, computer scientist, musician, and
writer, recognized as one of the 15 founders of the field of logic programming. She
has contributed over 120 scientific publications in computational linguistics, logic
programming, deductive knowledge bases, computational molecular biology, and
web-based virtual worlds. She received several awards for her scientific results in
AI and three first prizes for her literary work.

Alessandro Dal Palù, Ph.D., is an Associate Professor and head of the Bachelor’s
degree program in Computer Science at the University of Parma, Italy. Over the last
15 years, he has conducted research on protein structure prediction and constraint
programming applications and has published over 40 papers on these topics. His
research interests span from bioinformatics to high-performance computing and
logic programming. He is one of the editors of the Constraint in Bioinformatics se-
ries of the Algorithms for Molecular Biology journal. He has organized the Workshop
on Constraint-Based Methods in Bioinformatics since 2005.

Broes De Cat is a researcher at Flanders Make (Belgium), working on automated
reasoning in the context of industrial robotics and flexible assembly applications.
Previously, he developed and analyzed software at OM Partners, a major supply
chain software company. During his Ph.D., he worked on model expansion, fo-
cusing specifically on lazy approaches to increase scalability. He holds a Master’s
degree in Civil Engineering-Computer Science (2009) and a Ph.D. in Computer Sci-
ence (2014), both from KU Leuven.

Marc Denecker is a professor at KU Leuven and head of the Knowledge Represen-
tation and Reasoning research group in the Department of Computer Science. His
research is on the foundations of knowledge representation, logics for logic program-
ming, and nonmonotonic reasoning. His work is
concerned with formal and informal semantics, methodology, inference, and ap-
plications. His group develops the IDP knowledge base system.

Agostino Dovier received his Ph.D. in Computer Science from the University of Pisa
and is a professor of Computer Science at the University of Udine, Italy, where he is
coordinating the bachelor’s and master’s degree programs in computer science. His
research interests include logic and constraint programming, and computational
biology. He served as general and program chair of the International Conference
on Logic Programming (2008, 2012) and as president of the Italian Association for
Logic Programming (2012–2018). Since 2005, he has contributed to the organiza-
tion of the Workshops on Constraint-Based Methods in Bioinformatics. He is a
member of the editorial board of Theory and Practice of Logic Programming and of
the constraints series of Algorithms for Molecular Biology.

Andrea Formisano is an associate professor of Computer Science in the Depart-
ment of Mathematics and Computer Science at the University of Perugia, Italy. He
received his Ph.D. in Computer Science from the University “La Sapienza” in Rome.
His main research interests include computational logics, automated reasoning,
knowledge representation and nonmonotonic reasoning, multi-agent systems, and
GPU computing.

Gerda Janssens is a professor in the Department of Computer Science at KU Leuven.
She has been a member of the Logic Programming group since the start of her
research career. She obtained her Ph.D. in Computer Science (KU Leuven) in March
1990. She is a member of the Knowledge Representation and Reasoning group and
is actively involved in the development of the FO(.) knowledge base system.

Diego Klabjan is a professor at Northwestern University, Department of Industrial
Engineering and Management Sciences. He is also Founding Director of the Mas-
ter of Science in Analytics program. After obtaining his doctorate in Algorithms,
Combinatorics, and Optimization from the School of Industrial and Systems Engi-
neering at the Georgia Institute of Technology in 1999, he joined the University of
Illinois at Urbana-Champaign that same year. In 2007, he became an associate pro-
fessor at Northwestern and, in 2012, was promoted to a full professor. His research
is focused on machine learning, deep learning, and analytics with concentrations
in finance, transportation, sports, and bioinformatics. Professor Klabjan has led
projects with large companies such as Intel, Baxter, Allstate, AbbVie, FedEx Express,
General Motors, and United Continental, among many others, and is assisting
numerous start-ups with their analytics needs. He is also a founder of Opex An-
alytics LLC.

David Maier is the Maseeh Professor of Emerging Technologies at Portland State
University. Prior to his current position, he was on the faculty at SUNY-Stony Brook
and the Oregon Graduate Institute. He has made extended visits to INRIA, Uni-
versity of Wisconsin–Madison, Microsoft Research, and the National University of
Singapore. He is the author of books on relational databases, logic programming,
and object-oriented databases, as well as papers in database theory, object-oriented
technology, scientific databases, and data streams. He is a recognized expert on the
challenges of large-scale data in the sciences. He received an NSF Young Investiga-
tor Award in 1984, the 1997 SIGMOD Innovations Award for his contributions in
objects and databases, and a Microsoft Research Outstanding Collaborator Award
in 2016. He is also an ACM Fellow and IEEE Senior Member. He holds a dual
B.A. in Mathematics and Computer Science from the University of Oregon (Honors
College, 1974) and a Ph.D. in Electrical Engineering and Computer Science from
Princeton University (1978).

Emir Pasalic is a Director of Software Development at Infor. He has worked in the
areas of type theory, programming languages, mathematical programming, data-
bases, and machine learning. He was a postdoctoral fellow at the Rice University
Department of Computer Science (2004–2007) and has worked at LogicBlox Inc. on
implementing in-database linear programming systems (2007–present). He holds
a Ph.D. in Computer Science from Oregon Health and Science University (OGI
School of Engineering, 2004).

Enrico Pontelli is a Regents Professor of Computer Science and Dean of the College
of Arts & Sciences at New Mexico State University. He received a Ph.D. in Com-
puter Science from NMSU in 1997. His research interests are in computational
logic, constraint programming, high-performance computing, knowledge repre-
sentation and reasoning, bioinformatics, and assistive technologies. He is the re-
cipient of an NSF CAREER award and has raised over $16M in research funding
in the last 15 years. He has published over 250 peer-reviewed papers and serves on
the editorial boards of several journals.

C. R. Ramakrishnan is a professor in the Department of Computer Science at Stony
Brook University. His research interests revolve around logic programming and for-
mal methods. He works on various aspects of inference in logic programs, ranging
from query evaluation in probabilistic logic programs to incremental evaluation.
He also works on model checking concurrent systems, such as mobile systems,
and expressive logics for specifying properties of such systems. His research has
been consistently funded by grants from the NSF and the ONR.

Fabrizio Riguzzi is an associate professor of Computer Science in the Department
of Mathematics and Computer Science at the University of Ferrara. He was previ-
ously an assistant professor at the same university. He earned his Master’s degree
and Ph.D. in Computer Engineering from the University of Bologna. Dr. Riguzzi is
the Editor-in-Chief of Intelligenza Artificiale, the official journal of the Italian As-
sociation for Artificial Intelligence. He is the author of more than 150 papers in
the areas of machine learning, inductive logic programming, and statistical rela-
tional learning. His aim is to develop intelligent systems by combining in novel
ways techniques from artificial intelligence, logic, and statistics.

K. Tuncay Tekle is a research assistant professor at Stony Brook University with a re-
search focus on high-level programming languages and efficient implementations.
After completing his Ph.D. thesis on the analysis and efficient implementation of
Datalog queries under the supervision of Prof. Annie Liu, he has worked at Log-
icBlox, a company whose state-of-the-art Datalog engine helps solve complex enter-
prise problems. Apart from his research work, he is a software consultant solving
database-centric problems for top retail companies.

Miroslaw Truszczynski is a professor of computer science at the University of Ken-
tucky. Truszczynski’s research interests include knowledge representation, non-
monotonic reasoning, logic programming, and constraint satisfaction and pro-
gramming. He has authored and co-authored more than 190 technical papers in
journals, conference proceedings, and books, and is a co-author of the research
monograph Nonmonotonic Logic on the mathematical foundations of nonmonotonic
reasoning, which marked a milestone in the development of that area. He is one of
the founders of the field of answer-set programming. In 2013, Truszczynski was
elected Fellow of the Association for the Advancement of Artificial Intelligence
(AAAI).

Dr. Theresa Swift has over 25 years of experience in computational logic research
and the design and development of major computer systems. She led the develop-
ment of the XSB programming system, a major open-source Prolog implementation. As a researcher,
Dr. Swift has published more than 75 articles in major refereed journals and con-
ferences and edited three books. Her research interests include high-performance
logic systems, nonmonotonic reasoning and reasoning under uncertainty, and
knowledge representation.

David S. Warren is Professor Emeritus at Stony Brook University and Vice President
of Sciences at XSB, Inc. His entire research career has been in the area of logic
programming, including theory, systems, and applications. He is the author of
more than 100 technical papers in the area and has served as President of the
Association for Logic Programming. He is the leader of the XSB project that has
developed and supported the open source XSB Tabled Prolog system, which has
been the leading tabled logic programming system for the last 25 years. He is a
Fellow of the ACM, past chair of the Stony Brook Computer Science Department,
and advisor of 20 Ph.D. students. He is co-founder of XSB, Inc., a 20-year-old
software company that commercializes applications supported by the XSB Prolog
system.
