P = NP
Vinay Deolalikar
HP Research Labs, Palo Alto
vinay.deolalikar@hp.com
August 11, 2010
This work is dedicated to my late parents:
my father Shri. Shrinivas Deolalikar, my mother Smt. Usha Deolalikar,
and my maushi Kum. Manik Deogire,
for all their hard work in raising me;
and to my late grandparents:
Shri. Rajaram Deolalikar and Smt. Vimal Deolalikar,
for their struggle to educate my father in spite of extreme poverty.
This work is part of my MatruPitru Rin.¹
I am forever indebted to my wife for her faith during these years.
¹ The debt to mother and father that a pious Hindu regards as his obligation to repay in this life.
Abstract
We demonstrate the separation of the complexity class NP from its subclass P. Throughout our proof, we observe that the ability to compute a property on structures in polynomial time is intimately related to an atypical property of the space of solutions — namely, the space is parametrizable with only c^{poly(log n)}, c > 1, parameters instead of the typical c^n parameters required for a joint distribution of n covariates.

This type of exponentially smaller parametrization arises as a result of severe limitations placed on the interaction between the variates. In particular, it may arise from range limited interactions where variates interact at short ranges, and chain together such interactions to create long range interactions. Such long range interactions would then be characterized by the statistical notions of conditional independence and sufficient statistics. The presence of conditional independencies manifests in the form of economical parametrizations of the joint distribution of covariates. Likewise, such economical parametrizations can arise from interactions which take only c^{poly(log n)} many values. In both cases, the result on the joint distribution is the same — it is parametrizable with only c^{poly(log n)} independent parameters. In order to apply this analysis to the space of solutions of random constraint satisfaction problems, we utilize and expand upon ideas from several fields spanning logic, statistics, graphical models, random ensembles, and statistical physics.
We begin by introducing the requisite framework of graphical models for a set of interacting variables. We focus on the correspondence between Markov and Gibbs properties for directed and undirected models as reflected in the factorization of their joint distribution, and the number of independent parameters required to specify the distribution.
Next, we build the central contribution of this work. We show that there are fundamental conceptual relationships between polynomial time computation, which is completely captured by the logic FO(LFP) on classes of successor structures, and poly(log n)-parametrization. In particular, monadic LFP is a range limited interaction model that possesses certain directed Markov properties that may be stated in terms of conditional independence and sufficient statistics. In order to demonstrate these relationships, we view the LFP computation as “factoring through” several stages of first order computations, and then utilize the limitations of first order logic. Specifically, we exploit the limitation that first order logic can only express properties in terms of a bounded number of local neighborhoods of the underlying structure. Then we relate complex fixed points to value limited interactions, which again result in poly(log n)-parametrization.
Next we introduce ideas from the 1RSB replica symmetry breaking ansatz of statistical physics. We recollect the description of the clustered phase for random k-SAT that arises when the clause density is sufficiently high and k ≥ 9. In this phase, known as the d1RSB phase, an arbitrarily large fraction of all variables in cores freeze within exponentially many clusters in the thermodynamic limit, as the clause density is increased towards the SAT-unSAT threshold. The Hamming distance between a solution that lies in one cluster and that in another is O(n). Note that the onset of this phase is rigorously proven only for k ≥ 9, and it is here that we will demonstrate our separation.
Next, we encode k-SAT formulae as structures on which FO(LFP) captures polynomial time. By asking FO(LFP) to extend partial assignments on ensembles of random k-SAT, we build distributions of solutions. We then construct a dynamic graphical model on a product space that captures all the information flows through the various stages of a LFP computation on ensembles of k-SAT structures. Distributions computed by LFP must satisfy this model. This model is directed, which allows us to compute factorizations locally and parametrize using Gibbs potentials on cliques. We then use results from ensembles of factor graphs of random k-SAT to bound the various information flows in this directed graphical model. We parametrize the resulting distributions in a manner that demonstrates that irreducible interactions between covariates — namely, those that may not be factored any further through conditional independencies — cannot grow faster than poly(log n) in the range limited monadic LFP computed distributions. For value limited complex LFP, we show how to obtain a parametrization of the solution space by merging potentials with scope O(n). This allows us to analyze the behavior of the entire class of polynomial time algorithms on ensembles simultaneously.
Using the aforementioned limitations of LFP, we demonstrate that a purported polynomial time solution to k-SAT would result in a solution space that is a mixture of distributions each having an exponentially smaller parametrization than is consistent with the highly constrained d1RSB phases of k-SAT. We show that this would contradict the behavior exhibited by the solution space in the d1RSB phase. This corresponds to the intuitive picture provided by physics about the emergence of extensive (meaning O(n)) long-range correlations between variables in this phase and also explains the empirical observation that all known polynomial time algorithms break down in this phase.

Our work shows that every polynomial time algorithm must fail to produce solutions to large enough problem instances of k-SAT in the d1RSB phase. This shows that polynomial time algorithms are not capable of solving NP-complete problems in their hard phases, and demonstrates the separation of P from NP.
Contents
1 Introduction 3
1.1 Synopsis of Proof . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2 Interaction Models and Conditional Independence 15
2.1 Conditional Independence . . . . . . . . . . . . . . . . . . . . . . . 15
2.2 Conditional Independence in Undirected Graphical Models . . . 17
2.2.1 Gibbs Random Fields and the Hammersley-Clifford Theorem . . . 21
2.3 Factor Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.4 The Markov-Gibbs Correspondence for Directed Models . . . . . 26
2.5 D-maps and I-maps . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3 Distributions with poly(log n)-Parametrization 30
3.1 Two Kinds of poly(log n)-parameterizations . . . . . . . . . . . . . 32
3.1.1 Range Limited Interactions . . . . . . . . . . . . . . . . . . 32
3.1.2 Value Limited Interactions . . . . . . . . . . . . . . . . . . . 35
3.1.3 On the Atypical Nature of poly(log n)-parameterization . . 38
3.1.4 Our Treatment of Range and Value Limited Distributions . 38
4 Logical Descriptions of Computations 40
4.1 Inductive Definitions and Fixed Points . . . . . . . . . . . . . . . . 41
4.2 Fixed Point Logics for P and PSPACE . . . . . . . . . . . . . . . . 44
5 The Link Between Polynomial Time Computation and Conditional Independence 48
5.1 The Limitations of LFP . . . . . . . . . . . . . . . . . . . . . . . . . 50
5.1.1 Locality of First Order Logic . . . . . . . . . . . . . . . . . 51
5.2 Simple Monadic LFP and Conditional Independence . . . . . . . . 55
5.3 Conditional Independence in Complex Fixed Points . . . . . . . . 60
5.4 Aggregate Properties of LFP over Ensembles . . . . . . . . . . . . 62
6 The 1RSB Ansatz of Statistical Physics 64
6.1 Ensembles and Phase Transitions . . . . . . . . . . . . . . . . . . . 64
6.2 The d1RSB Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
6.2.1 Cores and Frozen Variables . . . . . . . . . . . . . . . . . . 68
6.2.2 Performance of Known Algorithms in the d1RSB Phase . . 71
7 Random Graph Ensembles 74
7.1 Properties of Factor Graph Ensembles . . . . . . . . . . . . . . . . 75
7.1.1 Locally Tree-Like Property . . . . . . . . . . . . . . . . . . 75
7.1.2 Degree Profiles in Random Graphs . . . . . . . . . . . . . . 76
8 Separation of Complexity Classes 78
8.1 Measuring Conditional Independence in Range Limited Models . 78
8.2 Generating Distributions from LFP . . . . . . . . . . . . . . . . . . 80
8.2.1 Encoding k-SAT into Structures . . . . . . . . . . . . . . . 80
8.2.2 The LFP Neighborhood System . . . . . . . . . . . . . . . . 83
8.2.3 Generating Distributions . . . . . . . . . . . . . . . . . . . 85
8.3 Disentangling the Interactions: The ENSP Model . . . . . . . . . . 87
8.4 Parametrization of the ENSP . . . . . . . . . . . . . . . . . . . . . . 93
8.5 Separation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
8.6 Some Perspectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
A Reduction to a Single LFP Operation 105
A.1 The Transitivity Theorem for LFP . . . . . . . . . . . . . . . . . . . 105
A.2 Sections and the Simultaneous Induction Lemma for LFP . . . . . 106
1. Introduction
The P ?= NP question is generally considered one of the most important and far reaching questions in contemporary mathematics and computer science. The origin of the question seems to date back to a letter from Gödel to von Neumann in 1956 [Sip92]. Formal definitions of the class NP awaited work by Edmonds [Edm65], Cook [Coo71], and Levin [Lev73]. The Cook-Levin theorem showed the existence of complete problems for this class, and demonstrated that SAT – the problem of determining whether a set of clauses of Boolean literals has a satisfying assignment – was one such problem. Later, Karp [Kar72] showed that twenty-one well known combinatorial problems, which include TRAVELLING SALESMAN, CLIQUE, and HAMILTONIAN CIRCUIT, were also NP-complete. In subsequent years, many problems central to diverse areas of application were shown to be NP-complete (see [GJ79] for a list). If P ≠ NP, we could never solve these problems efficiently. If, on the other hand, P = NP, the consequences would be even more stunning, since every one of these problems would have a polynomial time solution. The implications of this on applications such as cryptography, and on the general philosophical question of whether human creativity can be automated, would be profound.
The P ?= NP question is also singular in the number of approaches that researchers have brought to bear upon it over the years. From the initial question in logic, the focus moved to complexity theory where early work used diagonalization and relativization techniques. However, [BGS75] showed that these methods were perhaps inadequate to resolve P ?= NP by demonstrating relativized worlds in which P = NP and others in which P ≠ NP (both relations for the appropriately relativized classes). This shifted the focus to methods using circuit complexity and for a while this approach was deemed the one most likely to resolve the question. Once again, a negative result in [RR97] showed that a class of techniques known as “Natural Proofs” that subsumed the above could not separate the classes NP and P, provided one-way functions exist.
Owing to the difficulty of resolving the question, and also to the negative results mentioned above, there has been speculation that resolving the P ?= NP question might be outside the domain of mathematical techniques. More precisely, the question might be independent of standard axioms of set theory. The first such results in [HH76] show that some relativized versions of the P ?= NP question are independent of reasonable formalizations of set theory.
The influence of the P ?= NP question is felt in other areas of mathematics. We mention one of these, since it is central to our work. This is the area of descriptive complexity theory — the branch of finite model theory that studies the expressive power of various logics viewed through the lens of complexity theory. This field began with the result [Fag74] that showed that NP corresponds to queries that are expressible in second order existential logic over finite structures. Later, characterizations of the classes P [Imm86], [Var82] and PSPACE over ordered structures were also obtained.
There are several introductions to the P ?= NP question and the enormous amount of research that it has produced. The reader is referred to [Coo06] for an introduction which also serves as the official problem description for the Clay Millennium Prize. An older excellent review is [Sip92]. See [Wig07] for a more recent introduction. Most books on theoretical computer science in general, and complexity theory in particular, also contain accounts of the problem and attempts made to resolve it. See the books [Sip97] and [BDG95] for standard references.
Preliminaries and Notation
Treatments of standard notions from complexity theory, such as definitions of the complexity classes P, NP, PSPACE, and notions of reductions and completeness for complexity classes, etc., may be found in [Sip97, BDG95].
Our work will span various developments in three broad areas. While we have endeavored to be relatively complete in our treatment, we feel it would be helpful to provide standard textual references for these areas, in the order in which they appear in the work. Additional references to results will be provided within the chapters.

Standard references for graphical models include [Lau96] and the more recent [KF09]. For an engaging introduction, please see [Bis06, Ch. 8]. For an early treatment in statistical mechanics of Markov random fields and Gibbs distributions, see [KS80].

Preliminaries from logic, such as notions of structure, vocabulary, first order language, models, etc., may be obtained from any standard text on logic such as [Hod93]. In particular, we refer to [EF06, Lib04] for excellent treatments of finite model theory and [Imm99] for descriptive complexity.

For a treatment of the statistical physics approach to random CSPs, we recommend [MM09]. An earlier text is [MPV87].
1.1 Synopsis of Proof
This proof requires a convergence of ideas and an interplay of principles that span several areas within mathematics and physics. This represents the majority of the effort that went into constructing the proof. Given this, we felt that it would be beneficial to explain the various stages of the proof, and highlight their interplay. The technical details of each stage are described in subsequent chapters.

Consider a system of n interacting variables such as is ubiquitous in mathematical sciences. For example, these may be the variables in a k-SAT instance that interact with each other through the clauses present in the k-SAT formula, or n Ising spins that interact with each other in a ferromagnet. For ease of presentation, we will assume our variables are binary. Through their interaction, variables exert an influence on each other, and affect the values each other may take. The proof centers on the study of logical and algorithmic constructs where such complex interactions have “simple” descriptions.
What constitutes a simple description of the interaction of n variables? The number of independent parameters required to specify the joint distribution is a measure of the complexity of interactions between the covariates. There are two components to this. The first measures correlations, and the second measures “ampleness” under those correlations. This is best explained with two examples. Consider first the uniform distribution over all binary pairs

{(0, 0), (0, 1), (1, 0), (1, 1)}.

There is no correlation between the two variables in this distribution. They are independent. Consider next the distribution over 5 covariates which is uniformly supported only on

(0, 0, 0, 0, 0) and (1, 1, 1, 1, 1).

In this distribution, the covariates are tightly correlated, but the distribution is not “ample”. A distribution over n covariates is defined to be ample when it is supported on c^n, c > 1, points.

Though initially these two distributions appear quite different, there is a commonality. Both can be specified with just two parameters. In the first example, the two parameters are the probability of the first variate and the probability of the second variate taking the value 1. With this much information, we can specify the joint distribution since the variates are independent.

In the second example, we again need two parameters to specify the distribution — namely, the two points on which it is supported.
Though both distributions have simple descriptions, the reasons are very different. We will study distributions on n covariates that require only 2^{poly(log n)} parameters to specify. We will call such distributions poly(log n)-parametrizable. We will see that such distributions are at the heart of polynomial time computability. Conversely, in hard phases of constraint satisfaction problems such as k-SAT, the space of solutions is both correlated and ample. This causes all polynomial time algorithms to fail on them.
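To get a sense of the gap between these growth rates, take n = 10^6 binary covariates and, purely as an illustrative choice of the exponent, poly(log n) = (log_2 n)^2 ≈ 400. Then

2^{poly(log n)} ≈ 2^{400}    versus    2^n = 2^{10^6},

so a poly(log n)-parametrizable family needs fewer parameters by a factor of roughly 2^{n − (log_2 n)^2}, a quantity that is itself exponential in n.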
A distribution is simple to describe if there is either independence between the variates (as was the case in our first example) or limited support of the distribution (as was the case in our second example). We call the first case a range limited interaction because variates interact with a limited range of other variates. The second case is called value limited since the number of joint values the variates can take is limited. The common feature underlying both cases is that the distribution has a very economical parametrization as compared to “true” joint distributions (more precisely, statistically typical joint distributions) on n covariates, which require O(2^n) parameters to specify. Thus, we wish to study such distributions, and will consider both the cases of range and value limited interactions.
At this point, we visit the topic of graphical interaction models and conditional independence, which is a manifestation of range limited interactions. While complete independence between variates in a complex system is rare, conditional independence between blocks of variables is fairly frequent. We see that factorization into conditionally independent pieces manifests in terms of economical parametrizations of the joint distribution. Graphical models offer us a way to measure the size of these interactions.

The factorization of interactions can be represented by a corresponding factorization of the joint distribution of the variables over the space of configurations of the n variables subject to the constraints of the problem. It has been realized in the statistics and physics communities for long that certain multivariate distributions decompose into the product of a few types of factors, with each factor itself having only a few variables. Such a factorization of joint distributions into simpler factors can often be represented by graphical models whose vertices index the variables. A factorization of the joint distribution according to the graph implies that the interactions between variables can be factored into a sequence of “local interactions” between vertices that lie within neighborhoods of each other.
Consider the case of an undirected graphical model. The factoring of interactions may be stated in terms of either a Markov property, or a Gibbs property with respect to the graph. Specifically, the local Markov property of such models states that the distribution of a variable is only dependent directly on that of its neighbors in an appropriate neighborhood system. Of course, two variables arbitrarily far apart can influence each other, but only through a sequence of successive local interactions. The global Markov property for such models states that when two sets of vertices are separated by a third, this induces a conditional independence on variables corresponding to these sets of vertices, given those corresponding to the third set. On the other hand, the Gibbs property of a distribution with respect to a graph asserts that the distribution factors into a product of potential functions over the maximal cliques of the graph. Each potential captures the interaction between the set of variables that form the clique. The Hammersley-Clifford theorem states that a positive distribution having the Markov property with respect to a graph must have the Gibbs property with respect to the same graph.
The condition of positivity is essential in the Hammersley-Clifford theorem for undirected graphs. However, it is not required when the distribution satisfies certain directed models. In that case, the Markov property with respect to the directed graph implies that the distribution factorizes into local conditional probability distributions (CPDs). Furthermore, if the model is a directed acyclic graph (DAG), we can obtain the Gibbs property with respect to an undirected graph constructed from the DAG by a process known as moralization. We will return to the directed case shortly.

Chapter 2 develops the principles underlying the framework of graphical models. We will not use any of these models in particular, but construct another directed model on a larger product space that utilizes these principles and tailors them to the case of least fixed point logic, which we turn to next.
At this point, we change to the setting of finite model theory. Finite model theory is a branch of mathematical logic that has provided machine independent characterizations of various important complexity classes including P, NP, and PSPACE. In particular, the class of polynomial time computable queries on successor structures has a precise description — it is the class of queries expressible in the logic FO(LFP), which extends first order logic with the ability to compute least fixed points of positive first order formulae. Least fixed point constructions iterate an underlying positive first order formula, thereby building up a relation in stages. We take a geometric picture of a monadic LFP computation. Initially the relation to be built is empty. At the first stage, certain elements, whose types satisfy the first order formula, enter the relation. This changes the neighborhoods of these elements, and therefore in the next stage, other elements (whose neighborhoods have been thus changed in the previous stages) become eligible for entering the relation. The positivity of the formula implies that once an element is in the relation, it cannot be removed, and so the iterations reach a fixed point in a polynomial number of steps. Importantly from our point of view, the positivity and the stagewise nature of LFP means that the computation has a directed representation on a graphical model that we will construct. Recall at this stage that distributions over directed models enjoy factorization even when they are not defined over the entire space of configurations.
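To make the stagewise picture concrete, the sketch below runs the textbook least fixed point induction for reachability from a source vertex s (a standard illustration, not an example drawn from this paper): the positive formula φ(R, x) ≡ (x = s) ∨ ∃y (R(y) ∧ E(y, x)) is iterated starting from the empty relation, and each stage is one first order computation over the current state of the relation.

```python
# A minimal sketch of a monadic LFP computation: reachability from a source s
# in a digraph, built up in stages by iterating a positive first order formula
# phi(R, x) = (x == s) or exists y (R(y) and E(y, x)).
def lfp_reachable(vertices, edges, s):
    R = set()                      # stage 0: the relation starts empty
    while True:
        # one first order stage: an element enters R if its local condition holds
        new_R = {x for x in vertices
                 if x == s or any((y, x) in edges for y in R)}
        if new_R == R:             # fixed point reached within |vertices| stages
            return R
        R = new_R

# Example: a path 1 -> 2 -> 3 with an isolated vertex 4.
print(lfp_reachable({1, 2, 3, 4}, {(1, 2), (2, 3)}, 1))   # {1, 2, 3}
```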
We may interpret this as follows: monadic LFP relies on the assumption that variables that are highly entangled with each other due to constraints can be disentangled in a way that they now interact with each other through conditional independencies induced by a certain directed graphical model construction. Of course, an element does influence others arbitrarily far away, but only through a sequence of such successive local and bounded interactions. The reason LFP computations terminate in polynomial time is analogous to the notions of conditional independence that underlie efficient algorithms on graphical models having sufficient factorization into local interactions.

In order to apply this picture in full generality to all LFP computations, we use the simultaneous induction lemma to push all simultaneous inductions into nested ones, and then employ the transitivity theorem to encode nested fixed points as sections of a single relation of higher arity. We then see that this is the case of a value limited interaction between O(n) variates. Namely, although n variates interact with each other, they do not take c^n joint values. Building the machinery that can precisely map all these cases to the picture of either factorization into range limited or value limited interactions is the subject of Chapter 5.
The preceding insights now direct us to the setting necessary in order to separate P from NP. We need a regime of NP-complete problems where interactions between variables have the following two properties.

1. They are so “dense” that they cannot be factored through the bottleneck of the local and bounded properties of first order logic that limit each stage of LFP computation.

2. The distribution is ample. Namely, it takes c^n joint values.

Intuitively, this should happen when each variable has to simultaneously satisfy constraints involving an extensive (O(n)) fraction of the variables in the problem, and blocks of n variables are instantiated c^n distinct ways under these strong correlations. Namely, we have ample, and highly correlated distributions having no factorization into conditionally independent pieces (remember the value limited case is already ruled out since the distribution is ample).
In search of regimes where such situations arise, we turn to the study of ensemble random k-SAT where the properties of the ensemble are studied as a function of the clause density parameter. We will now add ideas from this field, which lies at the intersection of statistical mechanics and computer science, to the set of ideas in the proof.

In the past two decades, the phase changes in the solution geometry of random k-SAT ensembles as the clause density increases have gathered much research attention. The 1RSB ansatz of statistical mechanics says that the space of solutions of random k-SAT shatters into exponentially many clusters of solutions when the clause density is sufficiently high. This phase is called d1RSB (1-Step Dynamic Replica Symmetry Breaking) and was conjectured by physicists as part of the 1RSB ansatz. It has since been rigorously proved for high values of k. It demonstrates the properties of high correlation between large sets of variables that we will need. Specifically, the emergence of cores that are sets of
C clauses all of whose variables lie in a set of size C (this actually forces C to be O(n)). As the clause density is increased, the variables in these cores “freeze.” Namely, they take the same value throughout the cluster. Changing the value of a variable within a cluster necessitates changing O(n) other variables in order to arrive at another satisfying solution, which would be in a different cluster. Furthermore, as the clause density is increased towards the SAT-unSAT threshold, each cluster collapses steadily towards a single solution that is maximally far apart from every other cluster. Physicists think of this as an “energy gap” between the clusters. Such stages are precisely the ones that we need since they possess the following two properties.

1. Due to strong O(n) correlations that cannot be factored through conditional independencies, they resist attack by local and bounded first order stages of a monadic LFP computation.

2. Due to their ampleness, which arises from their instantiations in exponentially many clusters, they resist attack by complex fixed points that produce value limited distributions.

Finally, as the clause density increases above the SAT-unSAT threshold, the solution space vanishes, and the underlying instance of SAT is no longer satisfiable.
We should stress that the picture described above is known to hold in the case of random k-SAT only for k ≥ 9. For lower values of k, such as k = 3, there is empirical evidence that this picture does not hold. In other words, the “true” d1RSB phase arises in random k-SAT for k ≥ 9 as the clause density rises above (2^k/k) ln k. Since we need all the known properties of the d1RSB phase, we will work in this regime. Therefore, our proof does not say anything about the efficacy of various algorithms for 3-SAT, for instance. We specifically prove that the d1RSB phase is out of reach for polynomial time algorithms, and this phase is only reached at k ≥ 9. We reproduce the rigorously proved picture of the 1RSB ansatz that we will need in Chapter 6.
In Chapter 7, we make a brief excursion into the random graph theory of the factor graph ensembles underlying random k-SAT. From here, we obtain results that asymptotically almost surely upper bound the size of the largest cliques in the neighborhood systems on the Gaifman graphs that we study later when we build models for the range limited interactions that occur during monadic LFP. These provide us with bounds on the largest irreducible interactions between variables during the various stages of a LFP computation.
Finally in Chapter 8, we pull all the threads and machinery together. First, we encode k-SAT instances as queries on structures over a certain vocabulary in a way that LFP captures all polynomial time computable queries on them. We then set up the framework whereby we can generate distributions of solutions to each instance by asking a purported LFP algorithm for k-SAT to extend partial assignments on variables to full satisfying assignments.

Next, we embed the space of covariates into a larger product space which allows us to “disentangle” the flow of information during a LFP computation. This allows us to study the computations performed by the LFP with various initial values under a directed graphical model. This model is only polynomially larger than the structure itself. We call this the Element-Neighborhood-Stage Product, or ENSP model. The distribution of solutions generated by LFP then is a mixture of distributions each of which factors according to an ENSP.
At this point, we wish to measure the growth of independent parameters of distributions of solutions whose embeddings into the larger product space factor over the ENSP. In order to do so, we utilize the following properties for range limited models.

1. The directed nature of the model that comes from properties of LFP.

2. The properties of neighborhoods that are obtained by studies on random graph ensembles, specifically that neighborhoods that occur during the LFP computation are of size poly(log n) asymptotically almost surely in the n → ∞ limit.

3. The locality and boundedness properties of FO that put constraints upon each individual stage of the LFP computation.

4. Simple properties of LFP, such as the closure ordinal being a polynomial in the structure size.

The crucial property that allows us to analyze mixtures of range limited distributions that factor according to some ENSP is that we can parametrize the distribution using potentials on cliques of its moralized graph that are of size at most poly(log n). This means that when the mixture is exponentially numerous, we will see features that reflect the poly(log n) factor size of the conditionally independent parametrization.
Next, we come to value limited models. Here, interactions are of size O(n), but they are limited to only c^{poly(log n)} values, thereby giving us poly(log n)-parametrization. We show how to deal with mixtures of value limited models. We build a technique that merges various O(n) potentials that are poly(log n)-parametrizable into a single potential that is also poly(log n)-parametrizable, and covers the entire graphical model (which has poly(n) variables).
Now we close the loop and show that a distribution of solutions for k-SAT constructed by any purported LFP algorithm (monadic or complex) would not have enough parameters to describe the known picture of k-SAT in the d1RSB phase for k ≥ 9 — namely, the presence of extensive frozen variables in exponentially many clusters with Hamming distance between the clusters being O(n). In particular, in exponentially numerous mixtures of range limited models, we would have conditionally independent variation between blocks of poly(log n) variables, causing the Hamming distance between solutions to be of this order as well. In other words, solutions for k-SAT that are constructed using range limited LFP will display aggregate behavior that reflects that they are constructed out of “building blocks” of size poly(log n). This behavior will manifest when exponentially many solutions are generated by the LFP construction. The case of value limited LFP also leads to a contradiction since it would be unable to explain the exponentially many cluster instantiations of cores that are present in the d1RSB phase.
This shows that LFP cannot express the satisfiability query in the d1RSB phase for high enough k, and separates P from NP. This also explains the empirical observation that all known polynomial time algorithms fail in the d1RSB phase for high values of k, and also establishes on rigorous principles the physics intuition about the onset of extensive long range correlations in the d1RSB phase that causes all known polynomial time algorithms to fail. It also completes this picture, since it says that extensive O(n) correlations that (a) cannot factor through conditional independencies and (b) are amply instantiated, are the source of failure of polynomial time algorithms.
2. Interaction Models and
Conditional Independence
Systems involving a large number of variables interacting in complex ways are ubiquitous in the mathematical sciences. These interactions induce dependencies between the variables. Because of the presence of such dependencies in a complex system with interacting variables, it is not often that one encounters independence between variables. However, one frequently encounters conditional independence between sets of variables. Both independence and conditional independence among sets of variables have been standard objects of study in probability and statistics. Speaking in terms of algorithmic complexity, one often hopes that by exploiting the conditional independence between certain sets of variables, one may avoid the cost of enumeration of an exponential number of hypotheses in evaluating functions of the distribution that are of interest.
2.1 Conditional Independence
We first fix some notation. Random variables will be denoted by upper case letters such as X, Y, Z, etc. The values a random variable takes will be denoted by the corresponding lower case letters, such as x, y, z. Throughout this work, we assume our random variables to be discrete unless stated otherwise. We may also assume that they take values in a common finite state space, which we usually denote by Λ following physics convention. We denote the probability mass functions of discrete random variables X, Y, Z by P_X(x), P_Y(y), P_Z(z) respectively. Similarly, P_{X,Y}(x, y) will denote the joint mass of (X, Y), and so on. We drop subscripts on the P when it causes no confusion. We freely use the term “distribution” for the probability mass function.
The notion of conditional independence is central to our proof. The intuitive definition of the conditional independence of X from Y given Z is that the conditional distribution of X given (Y, Z) is equal to the conditional distribution of X given Z alone. This means that once the value of Z is given, no further information about the value of X can be extracted from the value of Y. This is an asymmetric definition, and can be replaced by the following symmetric definition. Recall that X is independent of Y if

P(x, y) = P(x)P(y).

Definition 2.1. Let notation be as above. X is conditionally independent of Y given Z, written X ⊥⊥ Y | Z, if

P(x, y | z) = P(x | z)P(y | z).

The asymmetric version, which says that the information contained in Y is superfluous to determining the value of X once the value of Z is known, may be represented as

P(x | y, z) = P(x | z).
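As a quick numerical illustration (the toy distribution below is ours, not one from the text), the following sketch constructs a joint distribution in which X and Y interact only through Z, and verifies Definition 2.1 by direct enumeration.

```python
from itertools import product

# Toy joint distribution: Z is a fair coin, and given Z, X and Y are drawn
# independently with P(X=1|Z=z) = P(Y=1|Z=z) = 0.9 if z == 1 else 0.2.
def p_joint(x, y, z):
    q = 0.9 if z == 1 else 0.2
    return 0.5 * (q if x == 1 else 1 - q) * (q if y == 1 else 1 - q)

# Marginals needed for the check, obtained by summing the joint mass.
def p_z(z):     return sum(p_joint(x, y, z) for x, y in product((0, 1), repeat=2))
def p_xz(x, z): return sum(p_joint(x, y, z) for y in (0, 1))
def p_yz(y, z): return sum(p_joint(x, y, z) for x in (0, 1))

# Check P(x, y | z) == P(x | z) P(y | z) for every configuration.
for x, y, z in product((0, 1), repeat=3):
    lhs = p_joint(x, y, z) / p_z(z)
    rhs = (p_xz(x, z) / p_z(z)) * (p_yz(y, z) / p_z(z))
    assert abs(lhs - rhs) < 1e-12
print("X is conditionally independent of Y given Z")
```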
The notion of conditional independence pervades statistical theory [Daw79, Daw80]. Several notions from statistics may be recast in this language.

EXAMPLE 2.2. The notion of sufficiency may be seen as the presence of a certain conditional independence [Daw79]. A sufficient statistic T in the problem of parameter estimation is that which renders the estimate of the parameter independent of any further information from the sample X. Thus, if Θ is the parameter to be estimated, then T is a sufficient statistic if

P(θ | x) = P(θ | t).

Thus, all there is to be gained from the sample in terms of information about Θ is already present in T alone. In particular, if Θ is a posterior that is being computed by Bayesian inference, then the above relation says that the posterior depends on the data X through the value of T alone. Clearly, such a statement would lead to a reduction in the complexity of inference.
2.2 Conditional Independence in Undirected Graphical Models
Graphical models offer a convenient framework and methodology to describe and exploit conditional independence between sets of variables in a system. One may think of the graphical model as representing the family of distributions whose law fulfills the conditional independence statements made by the graph. A member of this family may satisfy any number of additional conditional independence statements, but not fewer than those prescribed by the graph.

In general, we will consider graphs G = (V, E) whose n vertices index a set of n random variables (X_1, . . . , X_n). The random variables all take their values in a common state space Λ. The random vector (X_1, . . . , X_n) then takes values in a configuration space Ω_n = Λ^n. We will denote values of the random vector (X_1, . . . , X_n) simply by x = (x_1, . . . , x_n). The notation X_{V\I} will denote the set of variables excluding those whose indices lie in the set I. Let P be a probability measure on the configuration space. We will study the interplay between conditional independence properties of P and its factorization properties.
There are, broadly, two kinds of graphical models: directed and undirected.
We ﬁrst consider the case of undirected models. Fig. 2.1 illustrates an undirected
graphical model with ten variables.
Random Fields and Markov Properties
Graphical models are very useful because they allow us to read off conditional independencies of the distributions that satisfy these models from the graph itself. Recall that we wish to study the relation between conditional independence of a distribution with respect to a graphical model, and its factorization.
Figure 2.1: An undirected graphical model. Each vertex represents a random variable. The vertices in set A are separated from those in set B by set C. For random variables to satisfy the global Markov property relative to this graphical model, the corresponding sets of random variables must be conditionally independent. Namely, A ⊥⊥ B | C.
Towards that end, one may write increasingly stringent conditional independence properties that a set of random variables satisfying a graphical model may possess, with respect to the graph. In order to state these, we first define two graph theoretic notions — those of a general neighborhood system, and of separation.
Definition 2.3. Given a set of variables S known as sites, a neighborhood system N_S on S is a collection of subsets {N_i : 1 ≤ i ≤ n} indexed by the sites in S that satisfy

1. a site is not a neighbor to itself (this also means there are no self-loops in the induced graph): s_i ∉ N_i, and

2. the relationship of being a neighbor is mutual: s_i ∈ N_j ⇔ s_j ∈ N_i.
In many applications, the sites are vertices on a graph, and the neighborhood system N_i is the set of neighbors of vertex s_i on the graph. We will often be interested in homogeneous neighborhood systems of S on a graph in which, for each s_i ∈ S, the neighborhood N_i is defined as

N_i := {s_j ∈ S : d(s_i, s_j) ≤ r}.

Namely, in such neighborhood systems, the neighborhood of a site is simply the set of sites that lie in the radius r ball around that site. Note that a nearest neighbor system that is often used in physics is just the case of r = 1. We will need to use the general case, where r will be determined by considerations from logic that will be introduced in the next two chapters. We will use the term “variable” freely in place of “site” when we move to logic.
Definition 2.4. Let A, B, C be three disjoint subsets of the vertices V of a graph G. The set C is said to separate A and B if every path from a vertex in A to a vertex in B must pass through C.
Now we return to the case of the vertices indexing random variables (X_1, . . . , X_n) and the vector (X_1, . . . , X_n) taking values in a configuration space Ω_n. A probability measure P on Ω_n is said to satisfy certain Markov properties with respect to the graph when it satisfies the appropriate conditional independencies with respect to that graph. We will study the following two Markov properties, and their relation to factorization of the distribution.

Definition 2.5. 1. The local Markov property. The distribution of X_i (for every i) is conditionally independent of the rest of the graph given just the variables that lie in the neighborhood of the vertex. In other words, the influence that variables exert on any given variable is completely described by the influence that is exerted through the neighborhood variables alone.

2. The global Markov property. For any disjoint subsets A, B, C of V such that C separates A from B in the graph, it holds that

A ⊥⊥ B | C.
We are interested in distributions that do satisfy such properties, and will examine what effect these Markov properties have on the factorization of the distributions. For most applications, this is done in the context of Markov random fields.
We motivate a Markov random field with the simple example of a Markov chain {X_n : n ≥ 0}. The Markov property of this chain is that any variable in the chain is conditionally independent of all other variables in the chain given just its immediate neighbors:

X_n ⊥⊥ {X_k : k ∉ {n − 1, n, n + 1}} | X_{n−1}, X_{n+1}.

A Markov random field is the natural generalization of this picture to higher dimensions and more general neighborhood systems.
Definition 2.6. The collection of random variables X_1, . . . , X_n is a Markov random field with respect to a neighborhood system on G if and only if the following two conditions are satisfied.

1. The distribution is positive on the space of configurations: P(x) > 0 for x ∈ Ω_n.

2. The distribution at each vertex is conditionally independent of all other vertices given just those in its neighborhood:

P(X_i | X_{V\i}) = P(X_i | X_{N_i}).

These local conditional distributions are known as local characteristics of the field.

The second condition says that Markov random fields satisfy the local Markov property with respect to the neighborhood system. Thus, we can think of interactions between variables in Markov random fields as being characterized by “piecewise local” interactions. Namely, the influence of far away vertices must “factor through” local interactions. This may be interpreted as:

The influence of far away variables is limited to that which is transmitted through the interspersed intermediate variables — there is no “direct” influence of far away vertices beyond that which is factored through such intermediate interactions.
However, through such local interactions, a vertex may influence any other arbitrarily far away. Notice though, that this is a considerably simpler picture than having to consult the joint distribution over all variables for all interactions, for here, we need only know the local joint distributions and use these to infer the correlations of far away variables. We shall see in later chapters that this picture, with some additional caveats, is at the heart of polynomial time computations.

Note the positivity condition on Markov random fields. With this positivity condition, the complete set of conditionals given by the local characteristics of a field determines the joint distribution [Bes74].

Markov random fields satisfy the global Markov property as well.

Theorem 2.7. Markov random fields with respect to a neighborhood system satisfy the global Markov property with respect to the graph constructed from the neighborhood system.
Markov random fields originated in statistical mechanics [Dob68], where they model probability measures on configurations of interacting particles, such as Ising spins. See [KS80] for a treatment that focusses on this setting. Their local properties were later found to have applications to analysis of images and other systems that can be modelled through some form of spatial interaction. This field started with [Bes74] and came into its own with [GG84], which exploited the Markov-Gibbs correspondence that we will deal with shortly. See also [Li09].
2.2.1 Gibbs Random Fields and the Hammersley-Clifford Theorem
We are interested in how the Markov properties of the previous section translate into factorization of the distribution. Note that Markov random fields are characterized by a local condition — namely, their local conditional independence characteristics. We now describe another random field that has a global characterization — the Gibbs random field.
Definition 2.8. A Gibbs random field (or Gibbs distribution) with respect to a neighborhood system N_G on the graph G is a probability measure on the set of configurations Ω_n having a representation of the form

P(x_1, . . . , x_n) = (1/Z) exp(−U(x)/T),

where

1. Z is the partition function and is a normalizing factor that ensures that the measure sums to unity,

Z = Σ_{x ∈ Ω_n} exp(−U(x)/T).

Evaluating Z explicitly is hard in general since it is a summation over each of the Λ^n configurations in the space.

2. T is a constant known as the “temperature” that has origins in statistical mechanics. It controls the sharpness of the distribution. At high temperatures, the distribution tends to be uniform over the configurations. At low temperatures, it tends towards a distribution that is supported only on the lowest energy states.

3. U(x) is the “energy” of configuration x and takes the following form as a sum

U(x) = Σ_{c ∈ C} V_c(x)

over the set of cliques C of G. The functions V_c : c ∈ C are the clique potentials such that the value of V_c(x) depends only on the coordinates of x that lie in the clique c. These capture the interactions between vertices in the clique.
Thus, a Gibbs random field has a probability distribution that factorizes into its constituent “interaction potentials.” This says that the probability of a configuration depends only on the interactions that occur between the variables, broken up into cliques. For example, let us say that in a system, each particle interacts with only 2 other particles at a time (if one prefers to think in terms of statistical mechanics); then the energy of each state would be expressible as a sum of potentials, each of which has just three variables in its support. Thus, the Gibbs factorization carries in it a faithful representation of the underlying interactions between the particles. This type of factorization obviously yields a “simpler description” of the distribution. The precise notion is that of the number of independent parameters it takes to specify the distribution. Factorization into conditionally independent interactions of scope k means that we can specify the distribution in O(γ^k) parameters rather than O(γ^n). We will return to this at the end of this chapter.
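A small count makes the gap concrete (the numbers are our own illustration): with γ = 2 states per variable, n = 100 variables, and a factorization whose potentials have scope k = 3, specifying every potential on every 3-clique takes at most

C(100, 3) · 2^3 ≈ 1.3 × 10^6

numbers, whereas an unrestricted joint distribution on the same variables has 2^100 − 1 ≈ 1.3 × 10^30 free parameters. The former is polynomial in n for fixed k; the latter is exponential in n.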
Definition 2.9. Let P be a Gibbs distribution whose energy function is U(x) = Σ_{c ∈ C} V_c(x). The support of the potential V_c is the cardinality of the clique c. The degree of the distribution P, denoted by deg(P), is the maximum of the supports of the potentials. In other words, the degree of the distribution is the size of the largest clique that occurs in its factorization.
One may immediately see that the degree of a distribution is a measure of
the complexity of interactions in the system since it is the size of the largest set
of variables whose interaction cannot be split up in terms of smaller interactions
between subsets. One would expect this to be the hurdle in efﬁcient algorithmic
applications.
The Hammersley-Clifford theorem relates the two types of random fields.

Theorem 2.10 (Hammersley-Clifford). X is a Markov random field with respect to a neighborhood system N_G on the graph G if and only if it is a Gibbs random field with respect to the same neighborhood system.

The theorem appears in the unpublished manuscript [HC71] and uses a certain “blackening algebra” in the proof. The first published proofs appear in [Bes74] and [Mou74].
Note that the condition of positivity on the distribution (which is part of the definition of a Markov random field) is essential to state the theorem in full generality. The following example from [Mou74] shows that relaxing this condition allows us to build distributions having the Markov property, but not the Gibbs property.
EXAMPLE 2.11. Consider a system of four binary variables {X_1, X_2, X_3, X_4}. Each of the following combinations has probability 1/8, while the remaining combinations are disallowed.

(0, 0, 0, 0) (1, 0, 0, 0) (1, 1, 0, 0) (1, 1, 1, 0)
(0, 0, 0, 1) (0, 0, 1, 1) (0, 1, 1, 1) (1, 1, 1, 1)

We may check that this distribution has the global Markov property with respect to the 4-vertex cycle graph. Namely we have

X_1 ⊥⊥ X_3 | X_2, X_4   and   X_2 ⊥⊥ X_4 | X_1, X_3.

However, the distribution does not factorize into Gibbs potentials.
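The claimed global Markov property can be checked mechanically. The sketch below (our own verification, with the caveat that conditional independence is only tested at conditioning values of positive probability) enumerates the distribution of Example 2.11 and confirms X_1 ⊥⊥ X_3 | (X_2, X_4).

```python
from itertools import product

# The eight equiprobable configurations (x1, x2, x3, x4) of Example 2.11.
support = {(0,0,0,0), (1,0,0,0), (1,1,0,0), (1,1,1,0),
           (0,0,0,1), (0,0,1,1), (0,1,1,1), (1,1,1,1)}
P = {x: (1/8 if x in support else 0.0) for x in product((0, 1), repeat=4)}

def marginal(P, idx):
    """Marginal distribution of the coordinates listed in idx."""
    m = {}
    for x, p in P.items():
        key = tuple(x[i] for i in idx)
        m[key] = m.get(key, 0.0) + p
    return m

# Check X1 ⊥⊥ X3 | (X2, X4): for every (x2, x4) of positive probability,
# P(x1, x3 | x2, x4) must equal P(x1 | x2, x4) * P(x3 | x2, x4).
p24, p124, p324, p1324 = (marginal(P, i) for i in [(1,3), (0,1,3), (2,1,3), (0,2,1,3)])
for (x1, x3, x2, x4), p in p1324.items():
    cond = p24[(x2, x4)]
    if cond > 0:
        lhs = p / cond
        rhs = (p124[(x1, x2, x4)] / cond) * (p324[(x3, x2, x4)] / cond)
        assert abs(lhs - rhs) < 1e-12
print("X1 is conditionally independent of X3 given (X2, X4)")
```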
2.3 Factor Graphs
Factor graphs are bipartite graphs that express the decomposition of a “global”
multivariate function into “local” functions of subsets of the set of variables.
They are a class of undirected models. The two types of nodes in a factor graph
correspond to variable nodes, and factor nodes. See Fig. 2.2.
Figure 2.2: A factor graph showing the three clause 3-SAT formula (X_1 ∨ X_4 ∨ X_6) ∧ (X_1 ∨ X_2 ∨ X_3) ∧ (X_4 ∨ X_5 ∨ X_6). A dashed line indicates that the variable appears negated in the clause.
The distribution modelled by this factor graph will show a factorization as follows:

p(x_1, . . . , x_6) = (1/Z) φ_1(x_1, x_4, x_6) φ_2(x_1, x_2, x_3) φ_3(x_4, x_5, x_6),   (2.1)

where

Z = Σ_{x_1,...,x_6} φ_1(x_1, x_4, x_6) φ_2(x_1, x_2, x_3) φ_3(x_4, x_5, x_6).   (2.2)
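For a factor graph this small, Z and any marginal can be computed by brute force. The sketch below does this for the formula of Fig. 2.2, treating each factor as a hard clause constraint; since the figure's dashed (negation) edges are not reproduced above, the particular polarities used are our own illustrative choice.

```python
from itertools import product

# Factors of Eq. (2.1) as hard 3-SAT clause constraints (1 if satisfied, 0 if not).
# Illustrative polarities: clause 1 = (x1 or not x4 or x6),
# clause 2 = (not x1 or x2 or x3), clause 3 = (x4 or x5 or not x6).
phi1 = lambda x1, x4, x6: int(x1 or (not x4) or x6)
phi2 = lambda x1, x2, x3: int((not x1) or x2 or x3)
phi3 = lambda x4, x5, x6: int(x4 or x5 or (not x6))

def weight(x):
    x1, x2, x3, x4, x5, x6 = x
    return phi1(x1, x4, x6) * phi2(x1, x2, x3) * phi3(x4, x5, x6)

# Partition function Z of Eq. (2.2): here it simply counts satisfying assignments.
configs = list(product((0, 1), repeat=6))
Z = sum(weight(x) for x in configs)

# A single-variable marginal, p(x1 = 1), obtained from the same enumeration.
p_x1_is_1 = sum(weight(x) for x in configs if x[0] == 1) / Z
print(Z, p_x1_is_1)
```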
Factor graphs offer a finer grained view of factorization of a distribution than Bayesian networks or Markov networks. One should keep in mind that this factorization is (in general) far from being a factorization into conditionals and does not express conditional independence. The system must embed each of these factors in ways that are global and not obvious from the factors. This global information is contained in the partition function. Thus, in general, these factors do not represent conditionally independent pieces of the joint distribution. In summary, the factorization above is not the one we are seeking — it does not imply a series of conditional independencies in the joint distribution.

Factor graphs have been very useful in various applications, most notably perhaps in coding theory, where they are used as graphical models that underlie various decoding algorithms based on forms of belief propagation (also known as the sum-product algorithm), which is an exact algorithm for computing marginals on tree graphs but performs remarkably well even in the presence of loops. See [KFaL98] and [AM00] for surveys of this field. As might be expected from the preceding comments, these do not focus on conditional independence, but rather on algorithmic applications of local features (such as locally tree-like) of factor graphs.
A Hammersley-Clifford type theorem holds over the completion of a factor graph. A clique in a factor graph is a set of variable nodes such that every pair in the set is connected by a function node. The completion of a factor graph is obtained by introducing a new function node for each clique, and connecting it to all the variable nodes in the clique, and no others. Then, a positive distribution that satisfies the global Markov property with respect to a factor graph satisfies the Gibbs property with respect to its completion.
2.4 The Markov-Gibbs Correspondence for Directed Models
Consider first a directed acyclic graph (DAG), which is simply a directed graph without any directed cycles in it. Some specific points of additional terminology for directed graphs are as follows. If there is a directed edge from x to y, we say that x is a parent of y, and y is the child of x. The set of parents of x is denoted by pa(x), while the set of children of x is denoted by ch(x). The set of vertices from which directed paths lead to x is called the ancestor set of x and is denoted an(x). Similarly, the set of vertices to which directed paths from x lead is called the descendant set of x and is denoted de(x). Note that DAGs are allowed to have loops (and loopy DAGs are central to the study of iterative decoding algorithms on graphical models). Finally, we often assume that the graph is equipped with a distance function d(·, ·) between vertices, which is just the length of the shortest path between them. A set of random variables whose interdependencies may be represented using a DAG is known as a Bayesian network or a directed Markov field. The idea is best illustrated with a simple example.
Consider the DAG of Fig. 2.3 (left). The corresponding factorization of the joint density that is induced by the DAG model is

p(x_1, . . . , x_5) = p(x_1) p(x_2) p(x_3) p(x_4 | x_1) p(x_5 | x_2, x_3, x_4).
Thus, every joint distribution that satisﬁes this DAG factorizes as above.
Given a directed graphical model, one may construct an undirected one by
a process known as moralization. In moralization, we (a) replace a directed edge
from one vertex to another by an undirected one between the same two vertices
and (b) “marry” the parents of each vertex by introducing edges between each
pair of parents of the vertex at the head of the former directed edge. The process
is illustrated in the ﬁgure below.
In general, if we denote the set of parents of the variable x_i by pa(x_i), then
Figure 2.3: The moralization of the DAG on the left to obtain the moralized undirected graph on the right.
the joint distribution of (x_1, . . . , x_n) factorizes as

p(x_1, . . . , x_n) = Π_{i=1}^{n} p(x_i | pa(x_i)).
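Moralization is mechanical, and a short sketch makes the two steps explicit; the DAG used is the one read off from the factorization above, and the helper name moralize is ours.

```python
# Moralize a DAG: (a) connect ("marry") every pair of parents of each vertex,
# (b) drop edge directions. Returns an undirected edge set.
def moralize(vertices, directed_edges):
    parents = {v: {u for (u, w) in directed_edges if w == v} for v in vertices}
    undirected = {frozenset(e) for e in directed_edges}        # step (b)
    for v in vertices:
        ps = sorted(parents[v])
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):                    # step (a)
                undirected.add(frozenset((ps[i], ps[j])))
    return undirected

# The DAG of Fig. 2.3: x4 has parent x1; x5 has parents x2, x3, x4.
dag = {("x1", "x4"), ("x2", "x5"), ("x3", "x5"), ("x4", "x5")}
print(sorted(tuple(sorted(e)) for e in moralize({"x1","x2","x3","x4","x5"}, dag)))
# The output contains the new "marrying" edges among {x2, x3, x4}.
```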
What we want, however, is to obtain a Markov-Gibbs equivalence for such graphical models in the same manner that the Hammersley-Clifford theorem provided for positive Markov random fields. We have seen that relaxing the positivity condition on the distribution in the Hammersley-Clifford theorem (Thm. 2.10) cannot be done in general. In some cases however, one may remove the positivity condition safely. In particular, [LDLL90] extends the Hammersley-Clifford correspondence to the case of arbitrary distributions (namely, dropping the positivity requirement) for the case of directed Markov fields. In doing so, they simplify and strengthen an earlier criterion for directed graphs given by [KSC84]. We will use the result from [LDLL90], which we reproduce next.
Definition 2.12. A measure p admits a recursive factorization according to the graph $\mathcal{G}$ if there exist nonnegative functions, known as kernels, $k_v(\cdot, \cdot)$ for $v \in V$, defined on $\Lambda \times \Lambda^{\mathrm{pa}(v)}$, where the first factor is the state space for $X_v$ and the second for $X_{\mathrm{pa}(v)}$, such that
$$\int k_v(y_v, x_{\mathrm{pa}(v)})\,\mu_v(dy_v) = 1$$
and
$$p = f \cdot \mu, \quad \text{where } f(x) = \prod_{v \in V} k_v(x_v, x_{\mathrm{pa}(v)}).$$
In this case, the kernels $k_v(\cdot, x_{\mathrm{pa}(v)})$ are the conditional densities for the distribution of $X_v$ conditioned on the value of its parents, $X_{\mathrm{pa}(v)} = x_{\mathrm{pa}(v)}$. Now let $\mathcal{G}^m$ be the moral graph corresponding to $\mathcal{G}$.

Theorem 2.13. If p admits a recursive factorization according to $\mathcal{G}$, then it admits a factorization (into potentials) according to the moral graph $\mathcal{G}^m$.
D-separation

We have considered the notion of separation on undirected models and its effect on the set of conditional independencies satisfied by the distributions that factor according to the model. For directed models, there is an analogous notion of separation known as D-separation. The notion is what one would expect intuitively if one views directed models as representing "flows" of probabilistic influence.

We simply state the property and refer the reader to [KF09, §3.3.1] and [Bis06, §8.2.2] for discussion and examples. Let A, B, and C be sets of vertices of a directed model. Consider the set of all paths (ignoring edge directions) from a node in A to a node in B. Such a path is said to be blocked if one of the following two scenarios occurs at some node on the path.

1. The arrows on the path meet head-to-tail or tail-to-tail at a node in C.

2. The arrows meet head-to-head at a node, and neither the node nor any of its descendants is in C.

If all paths from A to B are blocked as above, then C is said to D-separate A from B, and the joint distribution must satisfy A ⊥⊥ B | C.
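A minimal sketch of this check in Python follows, under our own encoding of the DAG as a dictionary from node to parent set; it enumerates all paths explicitly and is intended only for small graphs such as the one read off Fig. 2.3 above. The function names are illustrative, not from the paper.

def d_separated(parents, A, B, C):
    """Check whether the set C D-separates A from B in a DAG.

    `parents` maps each node to the set of its parents.  Every path
    between A and B (ignoring edge directions) is tested against the
    two blocking rules stated above.
    """
    children = {v: set() for v in parents}
    for v, ps in parents.items():
        for p in ps:
            children.setdefault(p, set()).add(v)

    def descendants(v):
        out, stack = set(), [v]
        while stack:
            for w in children.get(stack.pop(), set()):
                if w not in out:
                    out.add(w)
                    stack.append(w)
        return out

    def neighbours(v):
        return parents.get(v, set()) | children.get(v, set())

    def paths(src, dst, sofar):
        # enumerate simple paths in the underlying undirected graph
        if src == dst:
            yield sofar
            return
        for w in neighbours(src):
            if w not in sofar:
                yield from paths(w, dst, sofar + [w])

    def blocked(path):
        for i in range(1, len(path) - 1):
            prev, node, nxt = path[i - 1], path[i], path[i + 1]
            if prev in parents.get(node, set()) and nxt in parents.get(node, set()):
                # head-to-head: blocked unless node or a descendant is in C
                if node not in C and not (descendants(node) & C):
                    return True
            elif node in C:
                # head-to-tail or tail-to-tail at a node in C
                return True
        return False

    return all(blocked(p) for a in A for b in B for p in paths(a, b, [a]))

# The DAG of Fig. 2.3: x5 is a head-to-head (collider) node.
dag = {"x1": set(), "x2": set(), "x3": set(),
       "x4": {"x1"}, "x5": {"x2", "x3", "x4"}}
print(d_separated(dag, {"x1"}, {"x2"}, set()))   # True: the path is blocked at x5
print(d_separated(dag, {"x1"}, {"x2"}, {"x5"}))  # False: conditioning on x5 opens it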
2.5 D-maps and I-maps

We have seen that there are two broad classes of graphical models — undirected and directed — which may be used to represent the interaction of variables in a system. The conditional independence properties of these two classes are obtained differently.

Definition 2.14. A graph (directed or undirected) is said to be a D-map ('dependencies map') for a distribution if every conditional independence statement of the form A ⊥⊥ B | C, for sets of variables A, B, and C, that is satisfied by the distribution is reflected in the graph. Thus, a completely disconnected graph having no edges is trivially a D-map for any distribution.

A D-map may express more conditional independencies than the distribution possesses.

Definition 2.15. A graph (directed or undirected) is said to be an I-map ('independencies map') for a distribution if every conditional independence statement of the form A ⊥⊥ B | C, for sets of variables A, B, and C, that is expressed by the graph is also satisfied by the distribution. Thus, a completely connected graph is trivially an I-map for any distribution.

An I-map may express fewer conditional independencies than the distribution possesses.

Definition 2.16. A graph that is both a D-map and an I-map for a distribution is called its P-map ('perfect map').

In other words, a P-map expresses precisely the set of conditional independencies that are present in the distribution. Not all distributions have P-maps. Indeed, the class of distributions having directed P-maps is itself distinct from the class having undirected P-maps, and neither equals the class of all distributions (see [Bis06, §3.8.4] for examples).
3. Distributions with poly(log n)-Parametrization
We now come to a central theme in our work. Consider a system of n binary covariates (X_1, . . . , X_n). To specify their joint distribution p(x_1, . . . , x_n) in the absence of any additional information, we would have to give the probability mass function at each of the 2^n configurations that these n variables can take jointly. The only constraint given on these probability masses is that they must sum to 1. Thus, given the function value at 2^n − 1 configurations, we could find that at the remaining configuration. This means that in the absence of any additional information, n covariates require 2^n − 1 parameters to specify their joint distribution. Thus, it takes exponentially many (in n) parameters to specify a "true" joint distribution of n covariates. This statement can be made more precise — the typical joint distribution on n variates requires O(2^n) parameters for its specification.
In light of the above, a joint distribution that requires only 2^{poly(log n)} parameters to specify would seem quite unusual. We would intuitively expect it to be "much simpler" in some way than the typical joint distribution on n variates. Indeed, because of the exponent of poly(log n), we would expect it to be "somewhat like" a joint distribution on only poly(log n) covariates. In other words, distributions on n variates that require only 2^{poly(log n)} parameters for their specification are like the typical distribution on poly(log n) variates. We shall refer to such distributions as having poly(log n)-parametrization.
Let us take an extreme case of such a “simple” joint distribution. Take the
case of n covariates, except that we are provided with one critical piece of extra
information — that the n variates are independent of each other. In that case,
we would need 1 parameter to specify each of their individual distributions —
namely, the probability that it takes the value 1. These n parameters then spec
ify the joint distribution simply because the distribution factorizes completely
into factors whose scopes are single variables (namely, just the p(x_i)), as a result of the independence. Thus, we go from exponentially many independent parameters to linearly many if we know that the variates are independent.
Let us consider another extreme case of such a distribution. Consider the
distribution on n variates that is nonzero only at (0, 0, . . . , 0) and (1, 1, . . . , 1).
Here, the variates are highly correlated. But once again, we require only two
parameters to specify the distribution. In this case, it is because the distribu
tion is supported on only two out of a possible 2^n values. In other words, it is
severely limited by the small number of joint values the covariates take.
In both cases above, the distribution on n covariates required far fewer pa
rameters to specify than the typical n variate distribution does.
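A quick numerical illustration of these counts follows (a small sketch; the value n = 20 is arbitrary and the variable names are our own):

n = 20  # an illustrative number of binary covariates

full_joint = 2**n - 1   # a typical ("true") joint distribution
independent = n         # one parameter per covariate when all are independent
two_point = 2           # point masses at (0,...,0) and (1,...,1)

print(full_joint, independent, two_point)  # 1048575 20 2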
In order to state the typical case of an n-variate distribution, we make the following definition.

Definition 3.1. A distribution on n variates will be called ample if it is supported on c^n joint values for some c > 1.
In other words, ample distributions take the typical number of joint values.
Deﬁnition 3.2. A distribution on n variates will be said to have irreducible O(n)
correlations if there exist correlations between O(n) variates that do not permit
factorization into smaller scopes through conditional independencies.
It is distributions that possess both these properties that are problematic for
polynomial time algorithms. We will see that distributions constructed by poly
nomial time algorithms can have one or the other property, but not both. Note
that distributions having both these properties require O(2^n) independent parameters to specify. There is neither factorization nor limited support that would permit a more economical parametrization.
This brings us to a key motivating question: What if n covariates had a joint distribution that required only 2^{poly(log n)} parameters to specify? When would such a distribution arise, and what would be its limitations? This question is really the heart of P ?= NP. Indeed, all the machinery we build and use in this work really takes us to the following insight: Polynomial time computations build distributions of solutions that can be parametrized using only 2^{poly(log n)} parameters. Namely, they have poly(log n)-parametrization. In contrast, in the hard phases of NP-complete problems like k-SAT, the distribution of solutions requires exponentially many parameters to specify. In particular, the distribution of solutions in the hard phases of NP-complete problems displays two properties:

1. The variates are as far from being independent as possible — they interact with each other O(n) at a time, with no possibility for factorization into conditional independencies. In other words, the distribution has irreducible O(n) correlations.

2. The distribution is ample.

Note that both conditions are required. It is not only long range correlations, but (a) the non-factorizability of such correlations and (b) ampleness under such non-factorizable correlations that characterizes the solution spaces in hard phases of NP-complete problems.
3.1 Two Kinds of poly(log n)-Parametrizations

We have seen that distributions on n variates that are poly(log n)-parametrizable are very atypical. When do they arise? They can be studied in two categories, both of which will correspond to polynomial time algorithms.
3.1.1 Range Limited Interactions
As noted earlier, it is not often that complex systems of n interacting variables
have complete independence between some subsets. What is far more frequent
is that there are conditional independencies between certain subsets given some intermediate subset. In this case, the joint will factorize into factors each of whose scope is a subset of (X_1, . . . , X_n). If the factorization is into conditionally independent factors, each of whose scope is of size at most k, then we can parametrize the joint distribution with at most n·2^k independent parameters. We should emphasize that the factors must give us conditional independence for this to be true. For example, factor graphs give us a factorization, but it is, in general, not a factorization into conditional independents, and so we cannot conclude anything about the number of independent parameters by just examining the factor graph. From our perspective, a major feature of directed graphical models is that their factorizations are already globally normalized once they are locally normalized, meaning that there is a recursive factorization of the joint into conditionally independent pieces. The conditional independence in this case is from all non-descendants, given the parents. Therefore, if each node has at most k parents, we can parametrize the distribution using at most n·2^k independent parameters. We may also moralize the graph and see this as a factorization over cliques in the moralized graph. Note that such a factorization (namely, starting from a directed model and moralizing) holds even if the distribution is not positive, in contrast with those distributions which do not factor over directed models and where we have to invoke the Hammersley-Clifford theorem to get a similar factorization. See [KF09] for further discussion on parametrizations for directed and undirected graphical models.
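As a small sketch of this count for binary variables (the chain graph below is hypothetical; each binary node with p parents contributes 2^p conditional probabilities, one per configuration of its parents):

def num_parameters(parents):
    """Independent parameters of a binary Bayesian network given as
    {node: set of parents}: each node contributes 2**len(parents)
    conditional probabilities."""
    return sum(2 ** len(ps) for ps in parents.values())

# A chain x1 -> x2 -> ... -> x100: every node has at most k = 1 parent,
# so the count is at most n * 2**k = 200, versus 2**100 - 1 for an
# unstructured joint distribution on 100 binary covariates.
chain = {f"x{i}": ({f"x{i-1}"} if i > 1 else set()) for i in range(1, 101)}
print(num_parameters(chain))  # 199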
Our proof scheme requires us to distinguish distributions based on the size
of the irreducible direct interactions between subsets of the covariates. Namely,
we would like to distinguish distributions where there are O(n) such covariates
whose joint interaction cannot be factored through smaller interactions (having
less than O(n) covariates) chained together by conditional independencies. We
would like to contrast such distributions from others which can be so factored
through factors having only poly(log n) variates in their scope. The measure that
allows us to make this distinction is the number of independent parameters it
takes to specify the distribution. When the size of the smallest irreducible interactions is O(n), we need O(c^n) parameters, where c > 1. On the other hand, if we were able to demonstrate that the distribution factors through interactions which always have scope poly(log n), then we would need only O(c^{poly(log n)}) parameters. See Fig. 3.1.

Figure 3.1: A range limited joint distribution on n covariates that has poly(log n)-parametrization. Although interactions between variables may be ample within their range, that range is limited to poly(log n).
Let us consider the example of a Markov random field. By Hammersley-Clifford, it is also a Gibbs random field over the set of maximal cliques in the graph encoding the neighborhood system of the Markov random field. This Gibbs field comes with a conditional independence assurance, and therefore we have an upper bound on the number of parameters it takes to specify the distribution. Namely, it is just $\sum_{c \in \mathcal{C}} 2^{|c|}$, where $\mathcal{C}$ is the set of maximal cliques. Thus, if at most k < n variables interact directly at a time, then the largest clique size would be k, and this would give us a more economical parametrization than the one which requires 2^n − 1 parameters.
In Chapter 5, we will build machinery that shows that if a problem lies in P
as a result of a range limited algorithm (like monadic LFP), then the factoriza
tion of the distribution of solutions to that problem causes it to have economi
cal parametrization, precisely because variables do not interact all at once, but
rather in smaller subsets in a directed manner that gives us conditional inde
pendencies between sets that are of size poly(log n).
Note that the case where all n variates are independent falls into the range
limited category with range being one. The resulting distribution is ample.
3.1.2 Value Limited Interactions
In the previous section we saw the ﬁrst type of interaction between n covari
ates that can be parametrized by just poly(log n) independent parameters. This
was the case where the n variates interact directly only poly(log n) at a time, and
such interactions are chained together through conditional independencies. In
this section, we will see another such limited interaction, where the n variates do interact directly O(n) at a time, but they are restricted to taking only c^{poly(log n)} many distinct values (see Fig. 3.2). One sees immediately that the underlying limitation in this case and the previous one is the same — the set of n covariates does not take 2^n different values with extensive O(n) correlations that do not factor through conditional independencies, as a "true" (or more precisely, typical) joint distribution of n variates would. Instead, in both cases, the n covariates behave in ways similar to a system of poly(log n) covariates. Namely, in both cases, their "jointness" resembles that of a system of poly(log n) covariates.
How do we state this property precisely? Through the notion of independent parameters. We will measure the jointness of a distribution by the number of independent parameters required to specify it. A "true" joint distribution takes O(c^n), c > 1, independent parameters to specify. On the other hand, both range and value limited interactions require only O(c^{poly(log n)}) independent parameters to specify. This is the crux of the P = NP question, as we shall see. In particular, we shall see that in the hard phases of problems such as k-SAT for k > 8, O(c^{poly(log n)}) independent parameters simply will not suffice to explain the behavior of the solution space of the problem. We will recall this behavior in some detail in Chapter 6. We should stress that this behavior has been rigorously shown to hold for some phases of k-SAT for high values of k. It is in these phases that our separation of complexity classes can be demonstrated, not elsewhere. We should also point out that once we have isolated the precise notion that is at the heart of polynomial time computation — namely, poly(log n)-parametrizability of the space of solutions — several apparent issues resolve themselves. Take the case of clustering in XOR-SAT, for instance. We need only note that the linear nature of XOR-SAT solution spaces means there is a poly(log n)-parametrization (the basis provides this for linear spaces). The core issue is the number of independent parameters it takes to specify the distribution of the entire space of solutions.
In both cases — range limited and value limited interactions — the system of n covariates behaves as though it were a system of only poly(log n) covariates. In the range limited case, this is because the covariates only jointly vary with poly(log n) other variates at a time. In the value limited case, this is because though O(n) variates vary jointly, they only take 2^{poly(log n)} joint values. Thus, in both cases, the joint distribution has a very economical parametrization using only 2^{poly(log n)} independent parameters.
In later chapters, we will build machinery to see that polynomial time LFP
algorithms can capture either range or value limited behaviors, but not the joint
behavior of a “true” joint distribution of n covariates.
It is also useful to notice that neither type of limitation implies the other. For instance, n independent variates are range limited but not value limited, whereas the distribution supported on only the all-ones and all-zeros tuples is value limited but not range limited. Regimes of problems where the distributions of solutions are neither value limited nor range limited cause the failure of polynomial time algorithms on average.
Figure 3.2: A value limited joint distribution on n covariates that has poly(log n)-parametrization. Although interactions between variables are O(n) at a time, they do not display ampleness in their joint distribution.
3.1.3 On the Atypical Nature of poly(log n)-Parameterization

We briefly mentioned earlier that the typical member of the space of distributions on n covariates requires O(2^n) parameters. Note that this is a statistical statement. Namely, if we picked an n-variable distribution at random, with high likelihood we would get a distribution that required O(2^n) parameters to specify. In other words, with high likelihood, we would not get a poly(log n)-parametrizable distribution. This observation may be used to state results about average case complexity in hard phases of random k-SAT. In many ways, these hard phases are simply typical, nothing more. The solution space shows the behavior of a typical joint distribution on n covariates in that it is ample and correlated. It is polynomial time solution spaces that are atypical for n-variate distributions, in that they are either not ample (the value limited case) or not correlated solidly enough (the range limited case, where they admit Gibbs factorizations into smaller potentials).
This short section owes its existence to Leonid Levin and Avi Wigderson,
both of whom asked us whether our methods could be used to make statements
about average case complexity. We will return to this issue in future versions of
this paper or in the manuscript [Deo10] which is under preparation.
3.1.4 Our Treatment of Range and Value Limited Distributions
The two types of distributions that we have mentioned above are only superﬁ
cially dissimilar. In both cases, the range of behaviors of the n covariates can be
parametrized with the number of independent parameters it takes to specify a
joint distribution of only poly(log n) covariates. For purposes of pedagogy, we
will disregard this superﬁcial dissimilarity and provide a full treatment of the
range limited case. We can even think of the value limited behavior as a type of
range limited behavior where, even though a covariate sees O(n) other covari
ates, it only utilizes poly(log n) amount of the information in them in order to
make its decision.
We end this chapter by tying poly(log n)-parametrizations to Markov or (equivalently, for directed models) Gibbs models. Once again, consider the two kinds of poly(log n)-parametrizations — range limited and value limited. A range limited parametrization would correspond to a Gibbs field whose potentials are specified over maximal cliques of size poly(log n). A value limited parametrization could have maximal cliques of size O(n), but the number of parameters for such a clique would only be 2^{poly(log n)} instead of the possible 2^{O(n)}. In either case, the random field would have poly(log n)-parametrization. See Figs. 3.1 and 3.2.
4. Logical Descriptions of
Computations
Work in ﬁnite model theory and descriptive complexity theory — a branch of
ﬁnite model theory that studies the expressive power of various logics in terms
of complexity classes — has resulted in machine independent characterizations
of various complexity classes. In particular, over ordered structures, there is
a precise and highly insightful characterization of the class of queries that are
computable in polynomial time, and those that are computable in polynomial
space. In order to keep the treatment relatively complete, we begin with a brief
précis of this theory. Readers from a finite model theory background may skip
this chapter.
We quickly set notation. A vocabulary, denoted by σ, is a set consisting of finitely many relation and constant symbols,
$$\sigma = \langle R_1, \ldots, R_m, c_1, \ldots, c_s \rangle.$$
Each relation has a fixed arity. We consider only relational vocabularies, in that there are no function symbols. This poses no shortcomings since functions may be encoded as relations. A σ-structure A consists of a set A, which is the universe of A, interpretations R^A for each of the relation symbols in the vocabulary, and interpretations c^A for each of the constant symbols in the vocabulary. Namely,
$$A = \langle A, R_1^{A}, \ldots, R_m^{A}, c_1^{A}, \ldots, c_s^{A} \rangle.$$
An example is the vocabulary of graphs which consists of a single relation
symbol having arity two. Then, a graph may be seen as a structure over this
vocabulary, where the universe is the set of nodes, and the relation symbol is
interpreted as an edge. In addition, some applications may require us to work
with a graph vocabulary having two constants interpreted in the structure as
source and sink nodes respectively.
We also denote by σ_n the extension of σ by n additional constants, and denote by (A, a) the structure where the tuple a has been identified with these additional constants.
4.1 Inductive Deﬁnitions and Fixed Points
The material in this section is standard, and we refer the reader to [Mos74] for
the ﬁrst monograph on the subject, and to [EF06, Lib04] for detailed treatments
in the context of ﬁnite model theory. See [Imm99] for a text on descriptive com
plexity theory. Our treatment is taken mostly from these sources, and stresses
the facts we need.
Inductive deﬁnitions are a fundamental primitive of mathematics. The idea
is to build up a set in stages, where the deﬁning relation for each stage can be
written in the ﬁrst order language of the underlying structure and uses elements
added to the set in previous stages. In the most general case, there is an underlying structure $A = \langle A, R_1, \ldots, R_m \rangle$ and a formula
$$\varphi(S, x) \equiv \varphi(S, x_1, \ldots, x_n)$$
in the first order language of A. The variable S is a second-order relation variable that will eventually hold the set we are trying to build up in stages. At the ξ-th stage of the induction, denoted by $I^{\xi}_{\varphi}$, we insert into the relation S the tuples according to
$$x \in I^{\xi}_{\varphi} \iff \varphi\Big(\bigcup_{\eta < \xi} I^{\eta}_{\varphi},\, x\Big).$$
We will denote the stage at which a tuple enters the relation in the induction defined by φ by $|\cdot|^{A}_{\varphi}$. The decomposition into its various stages is a central characteristic of inductively defined relations. We will also require that φ have only positive occurrences of the n-ary relation variable S, namely, that all occurrences of S be
within the scope of an even number of negations. Such inductions are called positive elementary. In the most general case, a transfinite induction may result. The least ordinal κ at which $I^{\kappa}_{\varphi} = I^{\kappa+1}_{\varphi}$ is called the closure ordinal of the induction, and is denoted by $|\varphi^{A}|$. When the underlying structures are finite, this is also known as the inductive depth. Note that the cardinality of the ordinal κ is at most $|A|^n$.
Finally, we define the relation
$$I_{\varphi} = \bigcup_{\xi} I^{\xi}_{\varphi}.$$
Sets of the form $I_{\varphi}$ are known as fixed points of the structure. Relations that may be defined by
$$R(x) \iff I_{\varphi}(a, x)$$
for some choice of tuple a over A are known as inductive relations. Thus, inductive relations are sections of fixed points.
Note that there are definitions of the set $I_{\varphi}$ that are equivalent, but can be stated only in the second-order language of A. Note that the definition above is
1. elementary at each stage, and
2. constructive.
We will use both these properties throughout our work.
We now proceed more formally by introducing operators and their ﬁxed
points, and then consider the operators on structures that are induced by ﬁrst
order formulae. We begin by deﬁning two classes of operators on sets.
Definition 4.1. Let A be a finite set, and $\mathcal{P}(A)$ be its power set. An operator F on A is a function $F : \mathcal{P}(A) \to \mathcal{P}(A)$. The operator F is monotone if it respects subset inclusion, namely, for all subsets X, Y of A, if X ⊆ Y, then F(X) ⊆ F(Y). The operator F is inflationary if it maps sets to their supersets, namely, X ⊆ F(X).
Next, we deﬁne sequences induced by operators, and characterize the se
quences induced by monotone and inﬂationary operators.
Definition 4.2. Let F be an operator on A. Consider the sequence of sets $F^0, F^1, \ldots$ defined by
$$F^0 = \emptyset, \qquad F^{i+1} = F(F^i). \qquad (4.1)$$
This sequence $(F^i)$ is called inductive if it is increasing, namely, if $F^i \subseteq F^{i+1}$ for all i. In this case, we define
$$F^{\infty} := \bigcup_{i=0}^{\infty} F^i. \qquad (4.2)$$
Lemma 4.3. If F is either monotone or inflationary, the sequence $(F^i)$ is inductive.
Now we are ready to deﬁne ﬁxed points of operators on sets.
Deﬁnition 4.4. Let F be an operator on A. The set X ⊆ A is called a ﬁxed point
of F if F(X) = X. A ﬁxed point X of F is called its least ﬁxed point, denoted
LFP(F), if it is contained in every other ﬁxed point Y of F, namely, X ⊆ Y
whenever Y is a ﬁxed point of F.
Not all operators have fixed points, let alone least fixed points. The Tarski-Knaster theorem guarantees that monotone operators do, and also provides two constructions of the least fixed point for such operators: one "from above" and the other "from below." The latter construction uses the sequence (4.1).

Theorem 4.5 (Tarski-Knaster). Let F be a monotone operator on a set A.

1. F has a least fixed point LFP(F) which is the intersection of all the fixed points of F. Namely,
$$\mathrm{LFP}(F) = \bigcap \{\, Y : Y = F(Y) \,\}.$$

2. LFP(F) is also equal to the union of the stages of the sequence $(F^i)$ defined in (4.1). Namely,
$$\mathrm{LFP}(F) = \bigcup_i F^i = F^{\infty}.$$
However, not all operators are monotone; therefore we need a means of con
structing ﬁxed points for nonmonotone operators.
Definition 4.6. For an inflationary operator F, the sequence $F^i$ is inductive, and hence eventually stabilizes to the fixed point $F^{\infty}$. For an arbitrary operator G, we associate the inflationary operator $G_{\mathrm{infl}}$ defined by $G_{\mathrm{infl}}(Y) := Y \cup G(Y)$. The set $G_{\mathrm{infl}}^{\infty}$ is called the inflationary fixed point of G, and is denoted by IFP(G).

Definition 4.7. Consider the sequence $(F^i)$ induced by an arbitrary operator F on A. The sequence may or may not stabilize. In the first case, there is a positive integer n such that $F^{n+1} = F^n$, and therefore for all m > n, $F^m = F^n$. In the latter case, the sequence $F^i$ does not stabilize, namely, for all $n \leq 2^{|A|}$, $F^n \neq F^{n+1}$. We define the partial fixed point of F, denoted PFP(F), as $F^n$ in the first case, and as the empty set in the second case.
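The constructions above are all iterations of an operator starting from the empty set. A minimal sketch follows; the encoding of an operator as a Python function on frozensets is our own assumption, not the paper's.

def iterate(F, universe_size):
    """Iterate F from the empty set: F^0 = {}, F^{i+1} = F(F^i).

    If the sequence stabilizes, the stable set is returned; for a
    monotone F this is LFP(F) (Theorem 4.5); in general it is PFP(F).
    If it does not stabilize within 2**universe_size steps, it never
    will, and PFP is the empty set (Definition 4.7).
    """
    X = frozenset()
    for _ in range(2 ** universe_size):
        Y = F(X)
        if Y == X:
            return X
        X = Y
    return frozenset()

def inflationary(G):
    """Turn an arbitrary operator G into G_infl(Y) = Y union G(Y)
    (Definition 4.6), whose iteration always stabilizes to IFP(G)."""
    return lambda Y: Y | G(Y)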
4.2 Fixed Point Logics for P and PSPACE
We now specialize the theory of ﬁxed points of operators to the case where the
operators are deﬁned by means of ﬁrst order formulae.
Definition 4.8. Let σ be a relational vocabulary, and R a relational symbol of arity k that is not in σ. Let $\varphi(R, x_1, \ldots, x_k) = \varphi(R, x)$ be a formula of vocabulary σ ∪ {R}. Now consider a structure A of vocabulary σ. The formula $\varphi(R, x)$ defines an operator $F_{\varphi} : \mathcal{P}(A^k) \to \mathcal{P}(A^k)$ on $A^k$ which acts on a subset $X \subseteq A^k$ as
$$F_{\varphi}(X) = \{\, a \mid A \models \varphi(X/R, a) \,\}, \qquad (4.3)$$
where $\varphi(X/R, a)$ means that R is interpreted as X in φ.
We wish to extend FO by adding fixed points of operators of the form $F_{\varphi}$, where φ is a formula in FO. This gives us fixed point logics, which play a central role in descriptive complexity theory.
Definition 4.9. Let the notation be as above.

1. The logic FO(IFP) is obtained by extending FO with the following formation rule: if $\varphi(R, x)$ is a formula and t a k-tuple of terms, then $[\mathrm{IFP}_{R,x}\, \varphi(R, x)](t)$ is a formula whose free variables are those of t. The semantics are given by
$$A \models [\mathrm{IFP}_{R,x}\, \varphi(R, x)](a) \ \text{ iff } \ a \in \mathrm{IFP}(F_{\varphi}).$$

2. The logic FO(PFP) is obtained by extending FO with the following formation rule: if $\varphi(R, x)$ is a formula and t a k-tuple of terms, then $[\mathrm{PFP}_{R,x}\, \varphi(R, x)](t)$ is a formula whose free variables are those of t. The semantics are given by
$$A \models [\mathrm{PFP}_{R,x}\, \varphi(R, x)](a) \ \text{ iff } \ a \in \mathrm{PFP}(F_{\varphi}).$$
We cannot deﬁne the closure of FO under taking least ﬁxed points in the
above manner without further restrictions since least ﬁxed points are guaran
teed to exist only for monotone operators, and testing for monotonicity is un
decidable. If we were to form a logic by extending FO by least ﬁxed points
without further restrictions, we would obtain a logic with an undecidable syn
tax. Hence, we make some restrictions on the formulae which guarantee that
the operators obtained from them as described by (4.3) will be monotone, and
thus will have a least ﬁxed point. We need a deﬁnition.
Deﬁnition 4.10. Let notation be as earlier. Let ϕ be a formula containing a rela
tional symbol R. An occurrence of R is said to be positive if it is under the scope
of an even number of negations, and negative if it is under the scope of an odd
number of negations. A formula is said to be positive in R if all occurrences of R
in it are positive, or there are no occurrences of R at all. In particular, there are
no negative occurrences of R in the formula.
Lemma 4.11. Let notation be as earlier. If the formula ϕ(R, x) is positive in R, then
the operator obtained from ϕ by construction (4.3) is monotone.
Now we can deﬁne the closure of FO under least ﬁxed points of operators
obtained from formulae that are positive in a relational variable.
Definition 4.12. The logic FO(LFP) is obtained by extending FO with the following formation rule: if $\varphi(R, x)$ is a formula that is positive in the k-ary relational variable R, and t is a k-tuple of terms, then $[\mathrm{LFP}_{R,x}\, \varphi(R, x)](t)$ is a formula whose free variables are those of t. The semantics are given by
$$A \models [\mathrm{LFP}_{R,x}\, \varphi(R, x)](a) \ \text{ iff } \ a \in \mathrm{LFP}(F_{\varphi}).$$
As earlier, the stage at which the tuple a enters the relation R is denoted by $|a|^{A}_{\varphi}$, and inductive depths are denoted by $|\varphi^{A}|$. This is well defined for least
ﬁxed points since a tuple enters a relation only once, and is never removed
from it after. In ﬁxed points (such as partial ﬁxed points) where the underlying
formula is not necessarily positive, this is not true. A tuple may enter and leave
the relation being built multiple times.
Next, we informally state two wellknown results on the expressive power
of ﬁxed point logics. First, adding the ability to do simultaneous induction
over several formulae does not increase the expressive power of the logic, and
secondly, FO(IFP) = FO(LFP) over finite structures. See [Lib04, §10.3, p. 184] for
details.
We have introduced various ﬁxed point constructions and extensions of ﬁrst
order logic by these constructions. We end this section by relating these log
ics to various complexity classes. These are the central results of descriptive
complexity theory.
Fagin [Fag74] obtained the first machine-independent logical characterization of an important complexity class. Here, ∃SO refers to the restriction of second-order logic to formulae of the form
$$\exists X_1 \cdots \exists X_m\, \varphi,$$
where φ does not have any second-order quantification.

Theorem 4.13 (Fagin). ∃SO = NP.
Immerman [Imm82] and Vardi [Var82] obtained the following central result
that captures the class P on ordered structures.
Theorem 4.14 (Immerman-Vardi). Over finite, ordered structures, the queries ex
pressible in the logic FO(LFP) are precisely those that can be computed in polynomial
time. Namely,
FO(LFP) = P.
A characterization of PSPACE in terms of PFP was obtained in [AV91,
Var82].
Theorem 4.15 (Abiteboul-Vianu, Vardi). Over finite, ordered structures, the queries
expressible in the logic FO(PFP) are precisely those that can be computed in polynomial
space. Namely,
FO(PFP) = PSPACE.
Note: We will often use the term LFP generically instead of FO(LFP) when we
wish to emphasize the ﬁxed point construction being performed, rather than the
language.
5. The Link Between Polynomial
Time Computation and Conditional
Independence
In Chapter 2 we saw how certain joint distributions that encode interactions
between collections of variables “factor through” smaller, simpler interactions.
This necessarily affects the type of inﬂuence a variable may exert on other vari
ables in the system. Thus, while a variable in such a system can exert its inﬂu
ence throughout the system, this inﬂuence must necessarily be bottlenecked by
the simpler interactions that it must factor through. In other words, the inﬂu
ence must propagate with bottlenecks at each stage. In the case where there are
conditional independencies, the inﬂuence can only be “transmitted through”
the values of the intermediate conditioning variables.
In this chapter, we will uncover a similar phenomenon underlying the log
ical description of polynomial time computation on ordered structures. The
fundamental observation is the following:
Least ﬁxed point computations “factor through” ﬁrst order computations,
and so limitations of ﬁrst order logic must be the source of the bottleneck at
each stage to the propagation of information in such computations.
The treatment of LFP versus FO in finite model theory centers around the fact
that FO can only express local properties, while LFP allows nonlocal properties
such as transitive closure to be expressed. We are taking as given the nonlocal
capability of LFP, and asking how this nonlocal nature factors at each step, and what is
the effect of such a factorization on the joint distribution of LFP acting upon ensembles.
Fixed point logics allow variables to be nonlocal in their inﬂuence, but this
nonlocal inﬂuence must factor through ﬁrst order logic at each stage. This is
a very similar underlying idea to the statistical mechanical picture of random
fields over spaces of configurations that we saw in Chapter 2, but comes cloaked in a very different garb — that of logic and operators. The sequence $(F^i_{\varphi})$ of operators that construct fixed points may be seen as the propagation of influence
in a structure by means of setting values of “intermediate variables”. In this
case, the variables are set by inducting them into a relation at various stages
of the induction. We want to understand the stagewise bottleneck that a ﬁxed
point computation faces at each step of its execution, and tie this back to no
tions of conditional independence and factorization of distributions. In order
to accomplish this, we must understand the limitations of each stage of a LFP
computation and understand how this affects the propagation of longrange in
ﬂuence in relations computed by LFP. Namely, we will bring to bear ideas from
statistical mechanics and message passing to the logical description of compu
tations.
It will be beneﬁcial to state this intuition with the example of transitive clo
sure.
EXAMPLE 5.1. The transitive closure of an edge in a graph is the standard exam
ple of a nonlocal property that cannot be expressed by ﬁrst order logic. It can
be expressed in FO(LFP) as follows. Let E be a binary relation that expresses
the presence of an edge between its arguments. Then we can see that iterating
the positive ﬁrst order formula ϕ(R, x, y) given by
$$\varphi(R, x, y) \equiv E(x, y) \vee \exists z\,(E(x, z) \wedge R(z, y))$$
builds the transitive closure relation in stages.
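As a minimal illustration, one can iterate this formula directly; the sketch below is in Python and the example edge set is hypothetical, chosen only to show the stages.

def transitive_closure_lfp(edges):
    """Iterate phi(R, x, y) = E(x, y) or exists z (E(x, z) and R(z, y))
    starting from R = {} until a fixed point is reached."""
    E = set(edges)
    nodes = {v for e in edges for v in e}
    R = set()
    while True:
        new_R = {(x, y) for x in nodes for y in nodes
                 if (x, y) in E
                 or any((x, z) in E and (z, y) in R for z in nodes)}
        if new_R == R:
            return R
        R = new_R

# A directed path 1 -> 2 -> 3 -> 4: each stage extends reachability by
# one more step, so the fixed point is reached after a few stages.
print(sorted(transitive_closure_lfp([(1, 2), (2, 3), (3, 4)])))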
Notice that the decision of whether a vertex enters the relation is based on
the immediate neighborhood of the vertex. In other words, the relation is built
stage by stage, and at each stage, vertices that have entered a relation make
other vertices that are adjacent to them eligible to enter the relation at the next
stage. Thus, though the resulting property is nonlocal, the information ﬂow used to
compute it is stagewise local. The computation factors through a local property
at each stage, but by chaining many such local factors together, we obtain the
nonlocal relation of transitive closure. This picture relates to a Markov random
ﬁeld, where such local interactions are chained together in a way that variables
can exert their inﬂuence to arbitrary lengths, but the factorization of that inﬂu
ence (encoded in the joint distribution) reveals the stagewise local nature of the
interaction. There are important differences however — the ﬂow of LFP com
putation is directed, whereas a Markov random ﬁeld is undirected, for instance.
We have used this simple example just to provide some preliminary intuition.
We will now proceed to build the requisite framework.
5.1 The Limitations of LFP
Many of the techniques in model theory break down when restricted to finite models. A notable exception is the Ehrenfeucht-Fraïssé game for first order logic. This has led to much research attention to game theoretic characterizations of various logics. The primary technique for demonstrating the limitations of fixed point logics in expressing properties is to consider them a segment of the logic $L^k_{\infty\omega}$, which extends first order logic with infinitary connectives, and then use the characterization of expressibility in this logic in terms of k-pebble games. This is, however, not useful for our purpose (namely, separating P from NP) since NP ⊆ PSPACE and the latter class is captured by PFP, which is also a segment of $L^k_{\infty\omega}$.
One of the central contributions of our work is demonstrating a completely
different viewpoint of LFP computations in terms of the concepts of conditional
independence and factoring of distributions, both of which are fundamental to
statistics and probability theory. In order to arrive at this correspondence, we
will need to understand the limitations of ﬁrst order logic. Least ﬁxed point
is an iteration of ﬁrst order formulas. The limitations of ﬁrst order formulae
mentioned in the previous section therefore appear at each step of a least ﬁxed
point computation.
Viewing LFP as “stagewise ﬁrst order” is central to our analysis. Let us
pause for a while and see how this ﬁts into our global framework. We are in
terested in factoring complex interactions between variables into their smallest
constituent irreducible factors. Viewed this way, LFP has a natural factorization
into its stages, which are all described by ﬁrst order formulae.
Let us now analyze the limitations of the LFP computation through this
viewpoint.
5.1.1 Locality of First Order Logic
The local properties of ﬁrst order logic have received considerable research at
tention and expositions can be found in standard references such as [Lib04, Ch.
4], [EF06, Ch. 2], [Imm99, Ch. 6]. The basic idea is that ﬁrst order formulae can
only “see” up to a certain distance away from their free variables. This distance
is determined by the quantiﬁer rank of the formula.
The idea that ﬁrst order formulae are local has been formalized in essen
tially two different ways. This has led to two major notions of locality — Hanf
locality [Han65] and Gaifman locality [Gai82]. Informally, Hanf locality says
that whether or not a ﬁrst order formula ϕ holds in a structure depends only on
its multiset of isomorphism types of spheres of radius r. Gaifman locality says
that whether or not ϕ holds in a structure depends on the number of elements
of that structure having pairwise disjoint rneighborhoods that fulﬁll ﬁrst order
formulae of quantiﬁer depth d for some ﬁxed d (which depends on ϕ). Clearly,
both notions express properties of combinations of neighborhoods of ﬁxed size.
In the literature of ﬁnite model theory, these properties were developed to
deal with cases where the neighborhoods of the elements in the structure had
bounded diameters. In particular, some of the most striking applications of
such properties are in graphs with bounded degree, such as the linear time al
gorithm to evaluate ﬁrst order properties on bounded degree graphs [See96].
In contrast, we will use some of the normal forms developed in the context of
locality properties in ﬁnite model theory, but in the scenario where neighbor
hoods of elements have unbounded diameter. Thus, it is not only the locality
that is of interest to us, but the exact speciﬁcation of the ﬁnitary nature of the
ﬁrst order computation. We will see that what we need is that ﬁrst order logic
can only exploit a bounded number of local properties. We will need both these
properties in our analysis.
Recall the notation and deﬁnitions from the previous chapter. We need some
deﬁnitions in order to state the results.
Definition 5.2. The Gaifman graph of a σ-structure A is denoted by $G_A$ and defined as follows. The set of nodes of $G_A$ is A. There is an edge between two nodes $a_1$ and $a_2$ in $G_A$ if there is a relation R in σ and a tuple $t \in R^A$ such that both $a_1$ and $a_2$ appear in t.
With the graph defined, we have a notion of distance between elements $a_i, a_j$ of A, denoted by $d(a_i, a_j)$, as simply the length of the shortest path between $a_i$ and $a_j$ in $G_A$. We extend this to a notion of distance between tuples from A as follows. Let $a = (a_1, \ldots, a_n)$ and $b = (b_1, \ldots, b_m)$. Then
$$d^{A}(a, b) = \min\{\, d^{A}(a_i, b_j) : 1 \leq i \leq n,\ 1 \leq j \leq m \,\}.$$
There is no restriction on n and m above. In particular, the deﬁnition above
also applies to the case where either of them is equal to one. Namely, we have
the notion of distance between a tuple and a singleton element. We are now
ready to define neighborhoods of tuples. Recall that $\sigma_n$ is the expansion of σ by n additional constants.
Definition 5.3. Let A be a σ-structure and let a be a tuple over A. The ball of radius r around a is the set defined by
$$B^{A}_r(a) = \{\, b \in A : d^{A}(a, b) \leq r \,\}.$$
The r-neighborhood of a in A is the $\sigma_n$-structure $N^{A}_r(a)$ whose universe is $B^{A}_r(a)$; each relation R is interpreted as $R^{A}$ restricted to $B^{A}_r(a)$; and the n additional constants are interpreted as $a_1, \ldots, a_n$.
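A minimal sketch of these two definitions follows, under our own encoding of a structure as a universe plus a list of relations, each given as a set of tuples.

from itertools import combinations
from collections import deque

def gaifman_graph(universe, relations):
    """Build the Gaifman graph: connect two elements whenever they
    occur together in some tuple of some relation."""
    adj = {a: set() for a in universe}
    for rel in relations:
        for tup in rel:
            for a, b in combinations(set(tup), 2):
                adj[a].add(b)
                adj[b].add(a)
    return adj

def ball(adj, centres, r):
    """B_r(a): all elements within Gaifman distance r of the tuple a."""
    dist = {c: 0 for c in centres}
    queue = deque(centres)
    while queue:
        u = queue.popleft()
        if dist[u] == r:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return set(dist)

# A structure for the vocabulary of graphs: one binary relation (edges).
edges = {(1, 2), (2, 3), (3, 4), (4, 5)}
adj = gaifman_graph({1, 2, 3, 4, 5}, [edges])
print(ball(adj, (3,), 1))  # {2, 3, 4}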
We recall the notion of a type. Informally, if L is a logic (or language), the L-type of a tuple is the sum total of the information that can be expressed about it in the language L. Thus, the first order type of an m-tuple in a structure is defined as the set of all FO formulae having m free variables that are satisfied by the tuple. Over finite structures, this notion is far too powerful since it characterizes the structure (A, a) up to isomorphism. A more useful notion is the local type of a tuple. In particular, a neighborhood is a $\sigma_n$-structure, and a type of a neighborhood is an equivalence class of such structures up to isomorphism. Note that any isomorphism between $N^{A}_r(a_1, \ldots, a_n)$ and $N^{B}_r(b_1, \ldots, b_n)$ must send $a_i$ to $b_i$ for $1 \leq i \leq n$.
Definition 5.4. Notation as above. The local r-type of a tuple a in A is the type of a in the substructure induced by the r-neighborhood of a in A, namely in $N^{A}_r(a)$.
In what follows, we may drop the superscript if the underlying structure is clear. The following three notions of locality are used in stating the results.

Definition 5.5. 1. Formulas whose truth at a tuple a depends only on $B_r(a)$ are called r-local. In other words, quantification in such formulas is restricted to the structure $N_r(x)$.

2. Formulas that are r-local around their variables for some value of r are said to be local.

3. Boolean combinations of formulas that are local around the various coordinates $x_i$ of x are said to be basic local.
As mentioned earlier, there are two broad ﬂavors of locality results in lit
erature – those that follow from Hanf’s theorem, and those that follow from
Gaifman’s theorem. The ﬁrst relates two different structures. [Han65] proved
his result for inﬁnite structures. We provide below the locality result due to
[FSV95] that is suitable for ﬁnite models. To proceed, we need a deﬁnition.
Definition 5.6. Let A, B be σ-structures and let m ∈ ℕ. If, for every isomorphism type τ of an r-neighborhood of a point, either

1. both A and B have the same number of elements of type τ, or

2. both A and B have more than m elements of type τ,

then we say that A and B are threshold (r, m)-equivalent.
Theorem 5.7 ([FSV95]). For each k, l > 0, there exist r, m > 0 such that if A and B are threshold (r, m)-equivalent and every element has degree at most l, then they satisfy the same first order formulae up to quantifier rank k, written $A \equiv_k B$. Furthermore, r depends only on k.

We refer the reader to [FSV95] for a discussion comparing the Fagin-Stockmeyer-Vardi theorem with Hanf's theorem in the context of applications to finite model theory. In particular, neither theorem seems to imply the other.
The Hanf locality lemma for formulae having a single free variable has a simple form and is an easy consequence of Thm. 5.7.

Lemma 5.8. Notation as above. Let φ(x) be a formula of quantifier depth q. Then there is a radius r and a threshold t such that if A and B have the same multiset of local types up to threshold t, and the elements a ∈ A and b ∈ B have the same local type up to radius r, then
$$A \models \varphi(a) \leftrightarrow B \models \varphi(b).$$

See [Lin05] for an application to computing simple monadic fixed points on structures of bounded degree in linear time.
Next we come to Gaifman's version of locality.

Theorem 5.9 ([Gai82]). Every FO formula φ(x) over a relational vocabulary is equivalent to a Boolean combination of

1. local formulas around x, and

2. sentences of the form
$$\exists x_1, \ldots, x_s \Big( \bigwedge_{i=1}^{s} \phi(x_i) \;\wedge\; \bigwedge_{1 \leq i < j \leq s} d(x_i, x_j) > 2r \Big),$$
where the φ are r-local.
In words, for every first order formula, there is an r such that the truth of the formula on a structure depends only on the number of elements having disjoint r-neighborhoods that satisfy certain local formulas. This again expresses the bounded number of local properties feature that limits first order logic.

The following normal form for first order logic was developed in an attempt to merge some of the ideas from Hanf and Gaifman locality.
Theorem 5.10 ([SB99]). Every first-order sentence is logically equivalent to one of the form
$$\exists x_1 \cdots \exists x_l\, \forall y\, \varphi(x, y),$$
where φ is local around y.
5.2 Simple Monadic LFP and Conditional Indepen
dence
In this section, we exploit the limitations described in the previous section to
build conceptual bridges from least fixed point logic to the Markov-Gibbs picture of the preceding section. At first, this may seem to be an unlikely union. But
we will establish that there are fundamental conceptual relationships between
the directed Markovian picture and least ﬁxed point computations. The key is
to see the constructions underlying least ﬁxed point computations through the
lens of inﬂuence propagation and conditional independence. In this section,
we will demonstrate this relationship for the case of simple monadic least ﬁxed
points. Namely, a FO(LFP) formula without any nesting or simultaneous induc
tion, and where the LFP relation being constructed is monadic. In later sections,
we show how to deal with complex ﬁxed points as well.
We wish to build a view of fixed point computation as an information propa
gation algorithm. In order to do so, let us examine the geometry of information
ﬂow during an LFP computation. At stage zero of the ﬁxed point computation,
none of the elements of the structure are in the relation being computed. At the
ﬁrst stage, some subset of elements enters the relation. This changes the local
neighborhoods of these elements, and the vertices that lie in these local neigh
borhoods change their local type. Due to the global changes in the multiset of
local types, more elements in the structure become eligible for inclusion into the
relation at the next stage. This process continues, and the changes “propagate”
through the structure. Thus, the fundamental vehicle of this information propagation
is that a ﬁxed point computation ϕ(R, x) changes local neighborhoods of elements at
each stage of the computation.
This propagation is
1. directed, and
2. relies on a bounded number of local neighborhoods at each stage.
In other words, we observe that
The inﬂuence of an element during LFP computation propagates in a simi
lar manner to the inﬂuence of a random variable in a directed Markov ﬁeld.
This correspondence is important to us. Let us try to uncover the under
lying principles that cause it. The directed property comes from the positivity
of the ﬁrst order formula that is being iterated. This ensures that once an ele
ment is inserted into the relation that is being computed, it is never removed.
Thus, inﬂuence ﬂows in the direction of the stages of the LFP computation. Fur
thermore, this inﬂuence ﬂow is local in the following sense: the inﬂuence of an
element can propagate throughout the structure, but only through its inﬂuence
on various local neighborhoods.
This correspondence is most striking in the case of bounded degree struc
tures. In that case, we have only O(1) local types.
Lemma 5.11. On a graph of bounded degree, there is a fixed number of non-isomorphic neighborhoods of radius r. Consequently, there are only a fixed number of local r-types.
In order to determine whether an element in a structure satisﬁes a ﬁrst order
formula, we need (a) the multiset of local r-types in the structure (also known
as its global type) for some value of r, and (b) the local type of the element.
Furthermore, by threshold Hanf, we only need to know the multiset of local
types up to a certain threshold.
For large enough structures, we will cross the Hanf threshold for the multiset of r-types. At this point, we will be making a decision of whether an element enters the relation based solely on its local r-type. This type potentially changes with each stage of the LFP. At the time when this change renders the element eligible for entering the relation, it will do so. Once it enters the relation, it changes the local r-type of all those elements which lie within an r-neighborhood of it, and such changes render them eligible, and so on. This is how the computation proceeds, in a purely stagewise local manner. This is a Markov property: the influence of an element upon another must factor entirely through the local neighborhood of the latter.
In the more general case where degrees are not bounded, we still have fac
toring through local neighborhoods, except that we have to consider all the lo
cal neighborhoods in the structure. However, here the bounded nature of FO
comes in. The FO formula that is being iterated can only express a property
about some bounded number of such local neighborhoods. For example, in
the Gaifman form, there are s distinguished disjoint neighborhoods that must
satisfy some local condition.
Remark 5.12. The same concept can be expressed in the language of sufﬁcient
statistics. Namely, knowing some information about certain local neighbor
hoods renders the rest of the information about variable values that have en
tered the relation in previous stages of the graph superﬂuous. In particular,
Gaifman’s theorem says that for ﬁrst order properties, there exists a sufﬁcient
statistic that is gathered locally at a bounded number of elements. Knowing this statis
tic gives us conditional independence from the values of other elements that
have already entered the relation previously, but not from elements that will
enter the relation subsequently. This is similar to the directed Markov picture
where there is conditional independence of any variable from nondescendants
given the value of the parents.
Figure 5.1: Range limited LFP computation process viewed as conditional independencies. (Figure labels: interacting variables $X_1, \ldots, X_n$, highly constrained by one another; a bounded number of local statistics $\Phi_1, \ldots, \Phi_s$ gathered at each stage; LFP assumes conditional independence after these statistics are obtained; conditional independence and factorization over a larger directed model called the ENSP, developed in Chapter 7.)
At this point, we have exhibited a correspondence between two apparently
very different formalisms. This correspondence is illustrated in Fig. 5.1.
5.3 Conditional Independence in Complex Fixed Points
In the previous sections, we showed that the natural “factorization” of LFP into
ﬁrst order logic, coupled with the bounded local property of ﬁrst order logic can
be used to exhibit conditional independencies in the relation being computed.
But the argument we provided was for simple ﬁxed points having one free
variable, namely, for monadic least ﬁxed points. How can we show that this
picture is the same for complex ﬁxed points? We accomplish this in stages.
1. First, we use the transitivity theorem for ﬁxed point logic to move nested
ﬁxed points into simultaneous ﬁxed points without nesting.
2. Next, we use the simultaneous induction lemma for ﬁxed point logic to
encode the relation to be computed as a “section” of a single LFP relation
of higher arity.
Steps 1 and 2 involve standard constructions in ﬁnite model theory, which we
recall in Appendix A.
At this point, we are now working with k-tuples, for a k fixed for all problem sizes, instead of single elements. This will change the distance properties of the resulting structure of k-tuples. Let us examine the case of a 2-ary relation that is being computed. In this case, we have the following situation. Every pair of elements occurs in the set of 2-tuples. This means that the neighborhood of every pair is O(n), since for any element a of the structure, every other element b, c, d, . . . occurs in some pair along with a.

This means that when there is a change to a 2-tuple containing a, that change affects the neighborhoods of O(n) other 2-tuples. At this point, we see that we are in the situation of O(n) range interactions. The key point to note is that we still have only poly(log n)-parametrization. This is because even though the interactions are of O(n) range, the computation terminates in poly(n) steps, giving us an economical parametrization of the state space. Put another way, though the interactions are indeed between O(n) elements at a time, they are severely value limited, leading once again to poly(log n)-parametrization. Recall the discussion
of the two kinds of poly(log n) parameterizations (range limited and value lim
ited) from Chapter 3. We will actually build a graphical model to give us the
parameterization in Chapter 8.
We also need to ensure that our original structure has a relation that allows an order to be established on k-tuples. In particular, this does not pose a problem for encoding instances of k-SAT. The basic nature of information gathering and processing in LFP does not change when the arity of the computation rises. It merely adds the ability to gather polynomially more information at each stage, taken from O(n) variates at a time. But since the LFP terminates in polynomially many steps, the number of joint values taken by the system of n variables is only 2^{poly(log n)}. Although each element sees O(n) variates at each stage of the LFP, it has the capability to utilize only a poly(log n) amount of that information, in the following precise sense. A "true" joint distribution over n variates takes c^n, c > 1, different values. It requires, therefore, O(c^n) independent parameters to specify. This happens because the behavior of one variable is dependent on all n − 1 others simultaneously. In the case of a joint distribution of n covariates which takes only 2^{poly(log n)} values, this cannot be the case, since the resulting distribution can be parametrized far too economically.
Remark 5.13. We could work over a product structure where LFP captures the
class of polynomial time computable queries. In other words, we have to work
in a structure whose elements are k-tuples of our original structure. In this way,
a k-ary LFP over the original structure would be a monadic LFP over this struc
ture. The O(n) nature of interactions remains, but again the parametrization is
only poly(log n).
Note that there are elegant ways to work with the space of equivalence
classes of k-tuples, with equivalence under first order logic with k variables.
For instance, one can consider a construction known as the canonical structure
due originally to [DLW95] who used it to provide a model theoretic proof of
the important theorem in [AV95] that P = PSPACE if and only if LFP = PFP.
Note that this is for all structures, not just for ordered structures.
The issue one faces is that there is a linear order on the canonical structure,
which renders the Gaifman graph trivial (totally connected). See [Lib04, §11.5]
for more details on canonical structures. The simple scheme described above
sufﬁces for our purposes.
Remark 5.14. Though the Immerman-Vardi theorem is usually stated for ordered
structures, it holds for structures equipped with a successor relation (and no lin-
ear ordering). See [LR03, §11.2, p. 204] where the result is stated for successor
structures. The beneﬁt of equipping our structures only with a successor struc
ture is that the Gaifman graph remains nontrivial.
5.4 Aggregate Properties of LFP over Ensembles
We have shown that any polynomial time computation will update its relation
according to a certain Markov type property on the space of ktypes of the un
derlying structure, after extracting a statistic from the local neighborhoods of
the underlying structure. Thus far, there is no probabilistic picture, or a distri
bution that we can analyze. We are only describing a fully deterministic com
putation.
The distribution we seek will arise when we examine the aggregate behav
ior of LFP over ensembles of structures that come from ensembles of constraint
satisfaction problems (CSPs) such as random kSAT. When we examine the
properties in the aggregate of LFP running over ensembles, we will ﬁnd the
following.
The “bounded number of local” property of each stage of monadic LFP
computation manifests as conditional independencies in the distribution,
making the distribution of solutions poly(log n)parametrizable. Likewise,
value limited interactions in higher arity LFP computations also lead to
distribution of solutions that are poly(log n)parametrizable.
This gives us the setting where we can exploit the full machinery of graphical
models of Chapter 2.
Before we examine the distributions arising from LFP acting on ensembles
of structures, we will bring in ideas from statistical physics into the proof. We
begin this in the next chapter.
6. The 1RSB Ansatz of Statistical
Physics
6.1 Ensembles and Phase Transitions
The study of random ensembles of various constraint satisfaction problems
(CSPs) is over two decades old, dating back at least to [CF86]. While a given
CSP — say, 3SAT— might be NPcomplete, many instances of the CSP might
be quite easy to solve, even using fairly simple algorithms. Furthermore, such
“easy” instances lay in certain well deﬁned regimes of the CSP, while “harder”
instances lay in clearly separated regimes. Thus, researchers were motivated to
study randomly generated ensembles of CSPs having certain parameters that
would specify which regime the instances of the ensemble belonged to. We will
see this behavior in some detail for the speciﬁc case of the ensemble known as
random kSAT.
An instance of k-SAT is a propositional formula in conjunctive normal form

Φ = C_1 ∧ C_2 ∧ · · · ∧ C_m

having m clauses C_i, each of which is a disjunction of k literals taken from n
variables {x_1, . . . , x_n}. The decision problem of whether a satisfying assignment
to the variables exists is NP-complete for k ≥ 3. The ensemble known as ran-
dom k-SAT consists of instances of k-SAT generated randomly as follows. An
instance is generated by drawing each of the m clauses {C_1, . . . , C_m} uniformly
from the 2^k \binom{n}{k} possible clauses having k variables. The entire ensemble of ran-
dom k-SAT having m clauses over n variables will be denoted by SAT_k(n, m),
and a single instance of this ensemble will be denoted by Φ_k(n, m). The clause
density, denoted by α and defined as α := m/n, is the single most important
parameter that controls the geometry of the solution space of random k-SAT.
Thus, we will mostly be interested in the case where every formula in the en-
semble has clause density α. We will denote this ensemble by SAT_k(n, α), and
an individual formula in it by Φ_k(n, α).
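To make this ensemble concrete, the following sketch samples a formula from SAT_k(n, m) exactly as described: each clause is drawn uniformly over k distinct variables with independent random signs. The function names and the representation of a clause as a tuple of signed integers are our own illustrative conventions, not notation from the text.

    import random

    def random_ksat_instance(n, m, k, seed=None):
        """Sample a formula from SAT_k(n, m): m clauses drawn uniformly from the
        2^k * C(n, k) clauses over k distinct variables.
        A clause is a tuple of signed integers: +i means x_i, -i means its negation."""
        rng = random.Random(seed)
        clauses = []
        for _ in range(m):
            vars_ = rng.sample(range(1, n + 1), k)          # k distinct variables
            clause = tuple(v if rng.random() < 0.5 else -v  # independent random signs
                           for v in vars_)
            clauses.append(clause)
        return clauses

    def random_ksat_alpha(n, alpha, k, seed=None):
        """Sample from SAT_k(n, alpha) by fixing the clause density alpha = m/n."""
        return random_ksat_instance(n, int(alpha * n), k, seed)

    # Example: a random 3-SAT instance at clause density 4.2
    phi = random_ksat_alpha(n=100, alpha=4.2, k=3, seed=0)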
Random CSPs such as k-SAT have attracted the attention of physicists be-
cause they model disordered systems such as spin glasses, where the Ising spin of
each particle is a binary variable ("up" or "down") and must satisfy some con-
straints that are expressed in terms of the spins of other particles. The energy of
such a system can then be measured by the number of unsatisfied clauses of a
certain k-SAT instance, where the clauses of the formula model the constraints
upon the spins. The case of zero energy then corresponds to a solution to the
k-SAT instance. The following formulation is due to [MZ97]. First we trans-
late the Boolean variables x_i to Ising variables S_i in the standard way, namely
S_i = −(−1)^{x_i}. Then we introduce new variables C_{li} as follows. The variable C_{li}
is equal to 1 if the clause C_l contains x_i, it is −1 if the clause contains x̄_i, and is
zero if neither appears in the clause. In this way, the sum \sum_{i=1}^{n} C_{li} S_i measures
the satisfiability of clause C_l. Specifically, if \sum_{i=1}^{n} C_{li} S_i + k > 0, the clause is
satisfied by the Ising variables. The energy of the system is then measured by
the Hamiltonian

H = \sum_{l=1}^{m} \delta\Big( \sum_{i=1}^{n} C_{li} S_i , \, -k \Big).

Here δ(i, j) is the Kronecker delta. Thus, satisfaction of the k-SAT instance
translates to vanishing of this Hamiltonian. Statistical mechanics then offers
techniques such as replica symmetry to analyze the macroscopic properties of
this ensemble.
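As a check on this formulation, the sketch below counts unsatisfied clauses through the Ising variables S_i and couplings C_{li}: a clause is unsatisfied exactly when its sum over C_{li} S_i equals −k, which is the Kronecker delta term in the Hamiltonian. The signed-integer clause representation is carried over from the earlier sketch and is an assumption of ours, not part of [MZ97].

    def energy(clauses, assignment):
        """Number of unsatisfied clauses. assignment[i] is the Boolean value of x_i (i >= 1)."""
        # Ising spins: S_i = -(-1)^{x_i}, i.e. +1 if x_i is true, -1 if false.
        S = {i: (1 if assignment[i] else -1) for i in assignment}
        unsat = 0
        for clause in clauses:
            # C_{li} is +1 if the clause contains x_i, -1 if it contains the negation.
            total = sum((1 if lit > 0 else -1) * S[abs(lit)] for lit in clause)
            # The clause is unsatisfied exactly when every literal is false,
            # i.e. when the sum equals -k; this is the Kronecker delta term.
            if total == -len(clause):
                unsat += 1
        return unsat

    # Example: a toy formula (x1 ∨ ¬x2 ∨ x3) ∧ (¬x1 ∨ x2 ∨ x4)
    formula = [(1, -2, 3), (-1, 2, 4)]
    assignment = {1: True, 2: True, 3: False, 4: False}
    print(energy(formula, assignment))   # prints 0: both clauses are satisfied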
Also very interesting from the physicist’s point of view is the presence of a
sharp phase transition [CKT91, MSL92] (see also [KS94]) between satisﬁable and
unsatisﬁable regimes of random kSAT. Namely, empirical evidence suggested
that the properties of this ensemble undergo a clearly defined transition when
the clause density is varied. This transition is conjectured to be as follows. For
each value of k, there exists a transition threshold α_c(k) such that with proba-
bility approaching 1 as n → ∞ (called the thermodynamic limit by physicists),
• if α < α_c(k), an instance of random k-SAT is satisfiable. Hence this region
is called the SAT phase.
• If α > α_c(k), an instance of random k-SAT is unsatisfiable. This region is
known as the unSAT phase.
There has been intense research attention on determining the numerical value
of the threshold between the SAT and unSAT phases as a function of k. [Fri99]
provides a sharp but nonuniform construction (namely, the value α_c is a func-
tion of the problem size, and is conjectured to converge as n → ∞). Functional
upper bounds have been obtained using the ﬁrst moment method [MA02] and
improved using the second moment method [AP04] that improves as k gets
larger.
6.2 The d1RSB Phase
More recently, another thread on this crossroad has originated once again from
statistical physics and is most germane to our perspective. This is the work in
the progression [MZ97], [BMW00], [MZ02], and [MPZ02] that studies the evo
lution of the solution space of random k-SAT as the constraint density increases
towards the transition threshold. In these papers, physicists have conjectured
that there is a second threshold that divides the SAT phase into two — an “easy”
SAT phase, and a “hard” SAT phase. In both phases, there is a solution with
high probability, but while in the easy phase one giant connected cluster of
solutions contains almost all the solutions, in the hard phase this giant clus
ter shatters into exponentially many communities that are far apart from each
other in terms of least Hamming distance between solutions that lie in distinct
communities. Furthermore, these communities shrink and recede maximally
far apart as the constraint density is increased towards the SATunSAT thresh
old. As this threshold is crossed, they vanish altogether.
As the clause density is increased, a picture known as the “1RSB hypothesis”
emerges that is illustrated in Fig. 6.1, and described below.
RS For α < α_d, a problem has many solutions, but they all form one giant
cluster within which going from one solution to another involves flipping
only a finite (bounded) set of variables. This is the replica symmetric phase.
d1RSB At some value of α = α_d which is below α_c, it has been observed that
the space of solutions splits up into "communities" of solutions such that
solutions within a community are close to one another, but are far away
from the solutions in any other community. This effect is known as shat-
tering [ACO08]. Within a community, flipping a bounded finite number
of variable assignments on one satisfying assignment takes one to another
satisfying assignment. But to go from one satisfying assignment in one
community to a satisfying assignment in another, one has to flip a fraction
of the set of variables and therefore encounters what physicists would
consider an "energy barrier" between states. This is the dynamical one step
replica symmetry breaking phase.
unSAT Above the SAT-unSAT threshold, the formulas of random k-SAT are
unsatisfiable with high probability.
Using statistical physics methods, [KMRT+07] obtained another phase that
lies between d1RSB and unSAT. In this phase, known as 1RSB (one step replica
symmetry breaking), there is a “condensation” of the solution space into a sub
exponential number of clusters, and the sizes of these clusters go to zero as the
transition occurs, after which there are no more solutions. This phase has not
been proven rigorously thus far to our knowledge and we will not revisit it in
this work.
The 1RSB hypothesis has been proven rigorously for high values of k. Specif
ically, the existence of the d1RSB phase has been proven rigorously for the case
of k > 8, starting with [MMZ05] (see also [DMMZ08]) who showed the exis
tence of clusters in a certain region of the SAT phase using ﬁrst moment meth
ods. Later, [ART06] rigorously proved that there exist exponentially many clus
ters in the d1RSB phase and showed that within any cluster, the fraction of
variables that take the same value in the entire cluster (the so-called frozen vari
ables) goes to one as the SATunSAT threshold is approached. Further [ACO08]
obtained analytical expressions for the threshold at which the solution space of
random kSAT (as also two other CSPs — random graph coloring and random
hypergraph 2colorability) shatters, as well as conﬁrmed the O(n) Hamming
separation between clusters.
Figure 6.1: The clustering of solutions just before the SAT-unSAT threshold.
Below α_d, the space of solutions is largely connected. Between α_d and α_c, the
solutions break up into exponentially many communities. Above α_c, there are
no more solutions, which is indicated by the unfilled circle.
In summary, in the region of constraint density α ∈ [α_d, α_c], the solution
space comprises exponentially many communities of solutions which require a
fraction of the variable assignments to be flipped in order to move from one
community to another.
6.2.1 Cores and Frozen Variables
In this section, we reproduce results about the distribution of variable assign
ments within each cluster of the d1RSB phase from [MMW07], [ART06], and
[ACO08].
We ﬁrst need the notion of the core of a cluster. Given any solution in a clus
ter, one may obtain the core of the cluster by “peeling away” variable assign
ments that, loosely speaking, occur only in clauses that are satisfied by other
variable assignments. This process leads to the core of the cluster.
To get a formal deﬁnition, ﬁrst we deﬁne a partial assignment of a set of vari
ables (x_1, . . . , x_n) as an assignment of each variable to a value in {0, 1, ∗}. The
∗ assignment is akin to a "joker state" which can take whichever value is most
useful in order to satisfy the k-SAT formula.
Next, we say that a variable in a partial assignment is free when each clause
it occurs in has at least one other variable that satisfies the clause, or has an
assignment of ∗.
Finally, to obtain the core of a cluster, we repeat the following starting with
any solution in the cluster: if a variable is free, assign it a ∗.
This process will eventually lead to a ﬁxed point, and that is the core of the
cluster. We may easily see that the core is not dependent upon the choice of the
initial solution.
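A minimal sketch of this peeling procedure is given below, under the same signed-integer clause representation assumed in the earlier sketches; the ∗ value is represented by None, and the function name is hypothetical.

    def core(clauses, solution):
        """Peel a satisfying assignment down to the core of its cluster.
        solution[i] in {True, False}; a * assignment is represented by None."""
        partial = dict(solution)

        def is_free(v):
            # v is free if every clause containing v (or its negation) is already
            # satisfied by some other literal, or contains another * variable.
            for clause in clauses:
                if v in (abs(lit) for lit in clause):
                    others_ok = any(
                        abs(lit) != v and (partial[abs(lit)] is None
                                           or partial[abs(lit)] == (lit > 0))
                        for lit in clause)
                    if not others_ok:
                        return False
            return True

        changed = True
        while changed:                      # iterate until a fixed point is reached
            changed = False
            for v in partial:
                if partial[v] is not None and is_free(v):
                    partial[v] = None       # assign the joker state *
                    changed = True
        return partial                      # frozen variables keep a 0/1 value

    # Example: in (x1 ∨ x2) ∧ (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2) both variables are frozen,
    # since each is the unique support of some clause.
    print(core([(1, 2), (1, -2), (-1, 2)], {1: True, 2: True}))  # {1: True, 2: True}

The example illustrates that a variable stays frozen exactly when some clause relies on it as its unique support relative to the rest of the assignment.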
What does the core of a cluster look like? Recall that the core is itself a
partial assignment, with each variable being assigned a 0, 1 or a ∗. Of obvious
interest are those variables that are assigned 0 or 1. These variables are said to be
frozen. Note that since the core can be arrived at starting from any choice of an
initial solution in the cluster, it follows that frozen variables take the same value
throughout the cluster. For example, if the variable x_i takes value 1 in the core
of a cluster, then every solution lying in the cluster has x_i assigned the value
1. The non-frozen variables are those that are assigned the value ∗ in the core.
These take both values 0 and 1 in the cluster. Clearly the number of ∗ variables
is a measure of the internal entropy (and therefore the size) of a cluster since it
is only these variables whose values vary within the cluster.
A priori, we have no way to tell that the core will not be the all-∗ partial
assignment. Namely, we do not know whether there are any frozen variables at
all. However, [ART06] proved that for high enough values of k, with probability
going to 1 in the thermodynamic limit, almost every variable in a core is frozen
as we increase the clause density towards the SATunSAT threshold.
Theorem 6.1 ([ART06]). For every r ∈ [0, 1/2] there is a constant k_r such that for all
k ≥ k_r, there exists a clause density α(k, r) < α_c such that for all α ∈ [α(k, r), α_c],
asymptotically almost surely
1. every cluster of solutions of Φ_k(n, αn) has at least (1 − r)n frozen variables,
2. fewer than rn variables take the value ∗.
This gives us the corollary.
Corollary 6.2 ([ART06]). For every k ≥ 9, there exists α < α_c(k) such that with high
probability, every cluster of the solution space of Φ_k(n, αn) has frozen variables.
Note that this picture is known to hold only for k ≥ 9 and is an open question
for k < 9. See also the remark at the end of this section.
We end this section with a physical picture of what forms a core. If a formula
Φ has a core with C clauses, then these clauses must have literals that come
from a set of at most C variables. By bounding the probability of this event,
[MMW07] obtained a lower bound on the size of cores. The bound is linear,
which means that when nontrivial cores do exist ( [ART06] proved their exis
tence for k ≥ 9), they must involve a fraction of all the variables in the formula.
In other words, a core may be thought of as the onset of a large single interaction
of degree O(n) among the variables. Furthermore, this core is instantiated am
ply in the solution space (by that we mean it takes exponentially many values
in those many clusters of the d1RSB phase). As the reader may imagine after
reading the previous chapters, this sort of interaction cannot be dealt with by
LFP algorithms. We will need more work to make this precise, but informally
cores are too large to pass through the bottlenecks that the stagewise ﬁrst order
LFP algorithms create.
This may also be interpreted as follows. Algorithms based on LFP can tackle
long range interactions between variables, but only when they can be factored
into interactions of degree poly(log n) or are value limited. But the appearance
of cores is equivalent to the onset of O(n) degree interactions which cannot be
further factored into poly(log n) degree interactions, and are ample. Such am
ple irreducible O(n) interactions, caused by increasing the clause density sufﬁ
ciently, cannot be dealt with using an LFP algorithm.
We have already noted that this is because LFP algorithms factor through
ﬁrst order computations, and in a ﬁrst order computation, the decision of whether
an element is to enter the relation being computed is based on information col
lected from local neighborhoods and combined in a bounded fashion. This bot
tleneck is too small for a core to factor through in range limited LFP. The am
pleness precludes value limited interactions also as we shall see. The precise
statement of this intuitive picture will be provided in the next chapter when we
build our conditional independence hierarchies.
The freezing of variables in cores is known to happen only for k ≥ 9
[ART06]. It remains open for k ≤ 8. Indeed, for low values of k such
as k = 2, 3, there is empirical evidence that this phenomenon does not
take place [MMW05]; see also the discussion in [ART06, §1]. Hence, our
separation of complexity classes needs the regime of k ≥ 9.
6.2.2 Performance of Known Algorithms in the d1RSB Phase
We end this chapter with a brief overview of the performance of known algo
rithms as a function of the clause density, and pointers to more detailed surveys.
Beginning with [CKT91] and [MSL92], there has been an understanding that
hard instances of random kSAT tend to occur when the constraint density α
is near the transition threshold, and that this behavior was similar to phase tran
sitions in spin glasses [KS94]. Now that we have surveyed the known results
about the geometry of the space of solutions in this region, we turn to the ques
tion of how the two are related.
It has been empirically observed that the onset of the d1RSB transition seems
to coincide with the constraint density where traditional solvers tend to exhibit
exponential slowdown; see [ACO08] and [CO09]. See also [CO09] for the best
current algorithm along with a comparison of various other algorithms to it.
Thus, while both regimes in SAT have solutions with high probability, the ease
of ﬁnding a solution differs quite dramatically on traditional SAT solvers due to
a clustering of the solution space into numerous communities that are far apart
from each other in terms of Hamming distance. In particular, for clause den-
sities above O(2^k/k), no algorithms are known to produce solutions in polyno-
mial time with probability Ω(1) — neither on the basis of rigorous or empirical
analysis, nor any other evidence [CO09]. Compare this to the SAT-unSAT thresh-
old, which is asymptotically 2^k ln 2. Thus, well below the SAT-unSAT threshold,
in regimes where we know solutions exist, we are currently unable to find them
in polynomial time. Our work will explain that indeed, this is fundamentally
a limitation of polynomial time algorithms. Specifically, in such phases (for
k ≥ 9), the solution space geometry is not expressible as a mixture of range
or value limited poly(log n)-parametrizable pieces. This is because in the d1RSB
phase, the distribution of solutions is both irreducibly correlated at ranges O(n),
and ample, precluding both range and value limited parametrizations.
Please see [CO09] for the best known algorithm that does solve SAT in-
stances with nonvanishing probability for densities up to 2^k ω(k)/k for any
sequence ω(k) → ∞. See [ACO08] for proofs that the clause density where
all known polynomial time algorithms fail on NP-complete problems such as
k-SAT and graph coloring coincides with the onset of the d1RSB phase in these
problems. This clause density threshold for the onset of the d1RSB phase is
(2^k/k) ln k [ACO08]. The earlier [ART06] had established the existence of shatter-
ing and freezing of variables within cores for α = Θ(2^k).
The significance of the value of k By the results of [ART06] and [ACO08, §2.1,
Rem. 2], we are guaranteed the presence of the full d1RSB phenomena only for k ≥ 9
and clause density above (2^k/k) ln k.
Hence, for our separation of complexity classes, we will work with ran-
dom k-SAT in the k ≥ 9 regime, and the clause density sufficiently high so
that we are in the d1RSB phase. We will require all known properties of the
d1RSB phase — namely, the exponentially many clusters, the freezing of vari-
ables within clusters, and the O(n) variable changes required to move from one
cluster to another. These properties are not known to hold except for k ≥ 9 and
clause densities above (2^k/k) ln k.
It should be noted that there is empirical evidence that the d1RSB phase is
not present in random 3SAT in the following sense. The cores in the clusters
of random 3SAT are trivial. By that we mean that they tend to be the all ∗ core,
unlike k ≥ 9 where [ART06] show the existence of nontrivial cores for almost
all clusters after the d1RSB threshold.
We should also point out that the experimental behavior of algorithms for
kSAT is largely characterized for lower values of k = 2, 3, 4, where the full
d1RSB picture is not known to hold. For instance, the experimental behavior
of algorithms reported in [MRTS07] is on random 4-SAT. See also [KMRT+06],
where experiments are reported on 4SAT. We are not aware of experimental
work done that shows the efﬁcacy (even under mild requirements) of any algo
rithm on k ≥ 9 after the onset of the d1RSB phase with nontrivial cores.
Incomplete algorithms are a class that do not always ﬁnd a solution when
it exists, nor do they indicate the lack of solution except to the extent that
they were unable to ﬁnd one. Incomplete algorithms are obviously very im
portant for hard regimes of constraint satisfaction problems since we do not
have complete algorithms in these regimes that have economical running times.
More recently, a breakthrough for incomplete algorithms in this ﬁeld came with
[MPZ02] who used the cavity method from spin glass theory to construct an
algorithm named survey propagation that does very well on instances of random
kSAT with constraint density above the aforementioned clustering threshold,
and continues to perform well very close to the threshold α_c for low values of
k. Survey propagation seems to scale as n log n in this region. The algorithm
uses the 1RSB hypothesis about the clustering of the solution space into numer
ous communities. The original work reported in [MPZ02] was on 3SAT. The
behavior of survey propagation for higher values of k is still being researched.
7. Random Graph Ensembles
We will use factor graphs as a convenient means to encode various properties
of the random kSAT ensemble. In this section we introduce the factor graph
ensembles that represent random kSAT. Our treatment of this section follows
[MM09, Chapter 9].
Definition 7.1. The random k-factor graph ensemble, denoted by G_k(n, m), consists
of graphs having n variable nodes and m function nodes constructed as follows.
A graph in the ensemble is constructed by picking, for each of the m function
nodes in the graph, a k-tuple of variables uniformly from the \binom{n}{k} possibilities
for such a k-tuple chosen from n variables.
Graphs constructed in this manner may have two function nodes connected
to the same k-tuple of variables. In this ensemble, function nodes all have de-
gree k, while the degree of the variable nodes is a random variable with expec-
tation km/n.
Definition 7.2. The random (k, α)-factor graph ensemble, denoted by G_k(n, α), con-
sists of graphs constructed as follows. For each of the \binom{n}{k} k-tuples of variables,
a function node that connects to only these k variables is added to the graph
with probability αn/\binom{n}{k}.
In this ensemble, the number of function nodes is a random variable with
expectation αn, and the degree of variable nodes is a random variable with
expectation αk.
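For concreteness, the sketch below samples from both ensembles as just defined, returning for each function node the k-tuple of variable nodes it touches. Names and data structures are illustrative; the literal implementation of G_k(n, α) performs one Bernoulli trial per k-tuple and is therefore only feasible for small n.

    import itertools, random
    from math import comb

    def sample_G_k_nm(n, m, k, seed=None):
        """G_k(n, m): each of the m function nodes picks a k-tuple of variable
        nodes uniformly (tuples may repeat across function nodes)."""
        rng = random.Random(seed)
        return [tuple(sorted(rng.sample(range(n), k))) for _ in range(m)]

    def sample_G_k_nalpha(n, alpha, k, seed=None):
        """G_k(n, alpha): every one of the C(n, k) k-tuples receives a function
        node independently with probability alpha*n / C(n, k).
        (Literal implementation; feasible only for small n.)"""
        rng = random.Random(seed)
        p = alpha * n / comb(n, k)
        return [t for t in itertools.combinations(range(n), k) if rng.random() < p]

    # Example: both ensembles at clause density alpha = 4 and k = 3.
    g1 = sample_G_k_nm(n=30, m=120, k=3, seed=0)
    g2 = sample_G_k_nalpha(n=30, alpha=4.0, k=3, seed=0)
    print(len(g1), len(g2))   # 120 function nodes vs. roughly alpha*n = 120 on average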
We will be interested in the case of the thermodynamic limit of n, m → ∞
with the ratio α := m/n being held constant. In this case, both the ensembles
converge in the properties that are important to us, and both can be seen as the
underlying factor graph ensembles to our random k-SAT ensemble SAT_k(n, α)
(see Chapter 6 for deﬁnitions and our notation for random kSAT ensembles).
With the deﬁnitions in place, we are ready to describe two properties of
random graph ensembles that are pertinent to our problem.
7.1 Properties of Factor Graph Ensembles
The ﬁrst property provides us with intuition on why algorithms ﬁnd it so hard
to put together local information to form a global perspective in CSPs.
7.1.1 Locally TreeLike Property
We have seen in Chapter 5 that the propagation of inﬂuence of variables during
a LFP computation is stagewiselocal. This is really the fundamental limitation
of LFP that we seek to exploit. In order to understand why this is a limitation,
we need to examine what local neighborhoods of the factor graphs underly
ing NPcomplete problems like kSAT look like in hard phases such as d1RSB.
In such phases, there are many extensive (meaning O(n)) correlations between
variables that arise due to loops of sizes O(log n) and above.
However, remarkably, such graphs are locally trivial. By that we mean that
there are no cycles in a O(1) sized neighborhood of any vertex as the size of the
graph goes to inﬁnity [MM09, '9.5]. One may demonstrate this for the Erdos
Renyi random graph as follows. Here, there are n vertices, and there is an edge
between any two with probability c/n where c is a constant that parametrizes
the density of the graph. Edges are “drawn” uniformly and independently of
each other. Consider the probability of a certain graph (V, E) occurring as a
subgraph of the ErdosRenyi graph. Such a graph can occur in
n
V 
positions.
At each position, the probability of the graph structure occurring is
p
E
(1 −p)
V 
2
−E
.
Applying Stirling’s approximations, we see that such a graph occurs asymptot
ically O([V [ − [E[) times. If the graph is connected, [V [ ≤ [E[ − 1 with equality
75
7. RANDOM GRAPH ENSEMBLES 76
only for trees. Thus, in the limit of n → ∞, ﬁnite connected graphs have van
ishing probability of occurring in ﬁnite neighborhoods of any element.
In short, if only local neighborhoods are examined, the two ensembles G_k(n, m)
and T_k(n, m) are indistinguishable from each other.
Theorem 7.3. Let G be a randomly chosen graph in the ensemble G_k(n, m), and i
be a uniformly chosen node in G. Then the r-neighborhood of i in G converges in
distribution to T_k(n, m) as n → ∞.
Let us see what this means in terms of the information such graphs divulge
locally. The simplest local property is degrees of elements. These are, of course,
available through local inspection. The next would be small connected sub
graphs (triangles, for instance). But even this next step is not available. In
other words, such random graphs do not provide any of their global proper
ties through local inspection at each element.
Let us think about what this implies. We know from the onset of cores and
frozen variables in the d1RSB phase of k-SAT that there are strong correlations
between blocks of variables of size O(n) in that phase. However, these loops
are invisible when we inspect local neighborhoods of a ﬁxed ﬁnite size, as the
problem size grows.
7.1.2 Degree Proﬁles in Random Graphs
The degree of a variable node in the ensemble G_k(n, m) is a random variable.
We wish to understand the distribution of this random variable. The expected
value of the fraction of variables in G_k(n, m) having degree d is the same as the
probability that a single variable node has degree d, both being equal to

P(deg v_i = d) = \binom{m}{d} p^d (1 − p)^{m−d}, where p = k/n.

In the large graph limit we get

\lim_{n→∞} P(deg v_i = d) = e^{−kα} \frac{(kα)^d}{d!}.
In other words, the degree is asymptotically a Poisson random variable.
A corollary is that the maximum degree of a variable node is almost surely
less than O(log n) in the large graph case.
Lemma 7.4. The maximum variable node degree in G_k(n, m) is asymptotically almost
surely O(log n). In particular, it asymptotically almost surely satisfies the following

\frac{d_{max}}{kαe} = \frac{z}{\log(z/\log z)} \left[ 1 + Θ\left( \frac{\log\log z}{(\log z)^2} \right) \right],   (7.1)

where z = (log n)/(kαe).
Proof. See [MM09, p. 184] for a discussion of this upper bound, as well as a
lower bound.
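These asymptotics are easy to check numerically. The sketch below, an illustrative aid only, tabulates the empirical degree distribution of variable nodes in G_k(n, m) with m = αn against the Poisson(kα) prediction, and prints the maximum degree alongside log n.

    import random
    from collections import Counter
    from math import exp, factorial, log

    def degree_profile(n, alpha, k, seed=None):
        """Empirical variable-node degrees in G_k(n, m) with m = alpha*n."""
        rng = random.Random(seed)
        degrees = [0] * n
        for _ in range(int(alpha * n)):            # one k-tuple per function node
            for v in rng.sample(range(n), k):
                degrees[v] += 1
        return degrees

    n, alpha, k = 20000, 4.0, 3
    deg = degree_profile(n, alpha, k, seed=1)
    hist = Counter(deg)
    for d in range(8):
        empirical = hist[d] / n
        poisson = exp(-k * alpha) * (k * alpha) ** d / factorial(d)
        print(d, round(empirical, 4), round(poisson, 4))
    print("max degree:", max(deg), " log n:", round(log(n), 1))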
8. Separation of Complexity Classes
We have built a framework that connects ideas from graphical models, logic,
statistical mechanics, and random graphs. We are now ready to begin our ﬁnal
constructions that will yield the separation of complexity classes.
We have described the fundamental similarity between range limited and
value limited distributions in Chapter 3. Both are hampered by the same un-
derlying property: that in spite of being distributions on n covariates, they
can be specified with only 2^{poly(log n)} parameters. In our terminology, they are
both poly(log n)-parametrizable. Informally, this means that their joint distribu-
tion behaves like the joint distribution of only poly(log n) covariates instead of
n covariates.
In light of the above, we first consider the case of range limited poly(log n)-
parametrizations. We return to value limited poly(log n)-parametrizations just
before the final separation of complexity classes in Section 8.5.
8.1 Measuring Conditional Independence in Range
Limited Models
Our central concern with respect to range limited models is to understand which
variable interactions in a system are irreducible — namely, those that cannot be
expressed in terms of interactions between smaller sets of variables with con
ditional independencies between them. Such irreducible interactions can be
2-interactions (between pairs), 3-interactions (between triples), and so on, up to
n-interactions between all n variables simultaneously.
A joint distribution encodes the interaction of a system of n variables. What
would happen if all the direct interactions between variables in the system were
all of less than a certain ﬁnite range k, with k < n? In such a case, the “joint
ness” of the covariates really would lie at a lower “level” than n. We would like
to measure the “level” of conditional independence in a system of interacting
variables by inspecting their joint distribution. At level zero of this “hierarchy”,
the covariates should be independent of each other. At level n, they are coupled
together n at a time, without the possibility of being decoupled. In this way,
we can make statements about how deeply entrenched the conditional inde
pendence between the covariates is, or dually, about how large the set of direct
interactions between variables is.
Remark 8.1. Similarly, if the variables did interact n at a time, but took only
2^{poly(log n)} joint values, the "jointness" of the distribution would lie at a lower
level than n. In both cases above, as stated in Chapter 3, the n covariates do not
display the behavior of a typical joint distribution of n variables. Instead, they
behave in ways similar to a set of poly(log n) covariates.
When the largest irreducible interactions are k-interactions, the distribution
can be parametrized with n2^k independent parameters. Thus, in families of dis-
tributions where the irreducible interactions are of fixed size, the independent
parameter space grows polynomially with n, whereas in a general distribution
without any conditional independencies, it grows exponentially with n. The
case of monadic LFP lies in between — the interactions are not of fixed size, but
they grow relatively slowly. The case of complex LFP is also one of poly(log n)-
parametrization, except it is a value-limited O(n) interaction model.
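The growth rates involved can be tabulated directly. The short sketch below compares the 2^n − 1 independent parameters of an unrestricted joint distribution of n binary covariates with the n·2^k parameters sufficient when the largest irreducible interactions are k-interactions, and with a 2^{poly(log n)} budget; the particular choice poly(log n) = (log_2 n)^2 is ours, for illustration only.

    from math import log

    def full_joint_params(n):
        """Independent parameters of an unrestricted joint distribution of n binary covariates."""
        return 2 ** n - 1

    def k_interaction_params(n, k):
        """Parameter count when the largest irreducible interactions are k-interactions."""
        return n * 2 ** k

    def polylog_budget(n, c=2, degree=2):
        """An illustrative c^{poly(log n)} budget, here with poly(log n) = (log2 n)^degree."""
        return c ** int(log(n, 2) ** degree)

    for n in (16, 64, 256, 1024):
        print(n, full_joint_params(n), k_interaction_params(n, k=3), polylog_budget(n))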
There are some technical issues with constructing such a hierarchy to mea
sure conditional independence. The ﬁrst issue would be how to measure the
level of a distribution in this hierarchy. If, for instance, the distribution has a
directed P-map, then we could measure the size of the largest clique that ap-
pears in its moralized graph. However, as noted in Sec. 2.5, not all distributions
have such maps. We may, of course, upper and lower bound the level using
minimal I-maps and maximal I-maps for the distribution. In the case of or-
dered graphs, we should note that there may be different minimal I-maps for
the same distribution for different orderings of the variables. See [KF09, p. 80]
for an example.
The insight that allows us to resolve the issue is as follows. If we could
somehow embed the distribution of solutions generated by LFP into a larger dis
tribution, such that
1. the larger distribution factorized recursively according to some directed
graphical model, and
2. the larger distribution had only polynomially many more variates than
the original one,
then we would have obtained a parametrization of our distribution that would
reﬂect the factorization of the larger distribution, and would cost us only poly
nomially more, which does not affect us.
By pursuing the above course, we aim to demonstrate that distributions of
solutions generated by LFP lie at a lower level of conditional independence than
distributions that occur in the d1RSB phase of random kSAT. Consequently,
they have more economical parametrizations than the space of solutions in the
d1RSB phase does.
We will return to the task of constructing such an embedding in Sec. 8.3.
First we describe how we use LFP to create a distribution of solutions.
8.2 Generating Distributions from LFP
We will describe the method of generating distributions and showing economic
parametrizations by embedding the covariates into a larger directed graphical
model below for monadic LFP. We will indicate the differences for complex
LFP.
8.2.1 Encoding kSAT into Structures
In order to use the framework from Chapters 4 and 5, we will encode kSAT
formulae as structures over a ﬁxed vocabulary.
Our vocabularies are relational, and so we need only specify the set of rela
tions, and the set of constants. We will use three relations.
1. The first relation R_C will encode the clauses that a SAT formula comprises.
Since we are studying ensembles of random k-SAT, this relation will have
arity k.
2. We need a relation in order to make FO(LFP) capture polynomial time
queries on the class of k-SAT structures. We will not introduce a linear
ordering since that would make the Gaifman graph a clique. Rather, we
will include a relation such that FO(LFP) can capture all the polynomial
time queries on the structure. This will be a binary relation R_E.
3. Lastly, we need a relation R_P to hold "partial assignments" to the SAT
formulae. We will describe these in Sec. 8.2.3.
4. We do not require constants.
This describes our vocabulary

σ = {R_C, R_E, R_P}.
Next, we come to the universe. A SAT formula is defined over n variables,
but they can come either in positive or negative form. Thus, our universe will
have 2n elements corresponding to x_1, . . . , x_n, x̄_1, . . . , x̄_n. In order to avoid new
notation, we will simply use the same notation to indicate the corresponding
element in the universe. We denote by lower case x_i the literals of the formula,
while the corresponding upper case X_i denotes the corresponding variable in a
model.
Finally, we need to interpret our relations in our universe. We dispense with
the superscripts since the underlying structure is clear. The relation R_C will
consist of k-tuples from the universe interpreted as clauses consisting of dis-
junctions between the variables in the tuple. The relation R_E will be interpreted
as an "edge" between successive variables. The relation R_P will be a partial
assignment of values to the underlying variables.
Now we encode our k-SAT formulae into σ-structures in the natural way.
For example, for k = 3, the clause x_1 ∨ x_2 ∨ x_3 in the SAT formula will be
encoded by inserting the tuple (x_1, x_2, x_3) in the relation R_C. Similarly, the
pairs (x_i, x_{i+1}) and (x̄_i, x̄_{i+1}), both for 1 ≤ i < n, as well as the pair (x_n, x̄_1),
will be in the relation R_E. This chains together the elements of the structure.
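Since the encoding is entirely mechanical, a small sketch may help fix it. Assuming the signed-integer clause representation of the earlier sketches, the code below produces the universe of 2n literal elements and the three relations: R_C holding the clause tuples, R_E chaining x_1, . . . , x_n, x̄_1, . . . , x̄_n by a successor relation as described above, and R_P holding the literals made true by an initial partial assignment. The element names are our own choice.

    def encode_ksat(clauses, n, partial_assignment):
        """Encode a k-SAT formula as a sigma-structure over 2n literal elements.
        Elements are strings 'x1', ..., 'xn', '~x1', ..., '~xn'."""
        def elem(lit):
            return ('x' if lit > 0 else '~x') + str(abs(lit))

        universe = [f'x{i}' for i in range(1, n + 1)] + [f'~x{i}' for i in range(1, n + 1)]

        # R_C: one k-tuple of literal elements per clause.
        R_C = [tuple(elem(lit) for lit in clause) for clause in clauses]

        # R_E: successor chain x1 -> ... -> xn -> ~x1 -> ... -> ~xn.
        R_E = [(universe[i], universe[i + 1]) for i in range(len(universe) - 1)]

        # R_P: the literals made true by the initial partial assignment.
        R_P = [elem(i if val else -i) for i, val in partial_assignment.items()]

        return {'universe': universe, 'R_C': R_C, 'R_E': R_E, 'R_P': R_P}

    # Example: encode (x1 ∨ ¬x2 ∨ x3) with the partial assignment x1 = 1.
    print(encode_ksat([(1, -2, 3)], n=3, partial_assignment={1: True}))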
The reason for the relation R_E that creates the chain is that on such struc-
tures, polynomial time queries are captured by FO(LFP) [EF06, §11.2]. This is
a technicality. Recall that an order on the structure enables the LFP computa-
tion (or the Turing machine that runs this computation) to represent tuples in
a lexicographical ordering. In our problem of k-SAT, it plays no further role.
Specifically, the assignments to the variables that are computed by the LFP have
nothing to do with their order. They depend only on the relation R_C, which en-
codes the clauses, and the relation R_P, which holds the initial partial assignment
that we are going to ask the LFP to extend. In other words, each stage of the
LFP is order-invariant. It is known that the class of order invariant queries is also
Gaifman local [GS00]. However, to allow LFP to capture polynomial time on the
class of encodings, we need to give the LFP something it can use to create an
ordering. We could encode our structures with a linear order, but that would
make the Gaifman graph fully connected. What we want is something weaker
that still suffices. Thus, we encode our structures as successor-type structures
through the relation R_E. This seems most natural, since it imparts on the struc-
ture an ordering based on that of the variables. Note also that SAT problems
may be represented as matrices (rows for clauses, columns for variables that
appear in them), which have a well defined notion of order on them.
Ensembles of k-SAT Let us now create ensembles of σ-structures using the
encoding described above. We will start with the ensemble SAT_k(n, α) and
encode each k-SAT instance as a σ-structure. The resulting ensemble will be
denoted by S_k(n, α). The encoding of the problem Φ_k(n, α) as a σ-structure will
be denoted by P_k(n, α).
8.2.2 The LFP Neighborhood System
In this section, we wish to describe the neighborhood system that underlies the
monadic LFP computations on structures of S_k(n, α). We begin with the factor
graph, and build the neighborhood system through the Gaifman graph.
Let us recall the factor graph ensemble G_k(n, m). Each graph in this ensemble
encodes an instance of random k-SAT. We encode the k-SAT instance as
a structure as described in the previous section. Next, we build the Gaifman
graph of each such structure. The set of vertices of the Gaifman graph are sim
ply the set of variable nodes in the factor graph and their negations since we
are using both variables and their negations for convenience (this is simply an
implementation detail). For instance, the Gaifman graph for the factor graph of
Fig 2.2 will have 12 vertices. Two vertices are joined by an edge in the Gaifman
graph either when the two corresponding variable nodes were joined to a single
function node (i.e., appeared in a single clause) of the factor graph or if they are
adjacent to each other in the chain that relation R_E has created on the structure.
On this Gaifman graph, the simple monadic LFP computation induces a
neighborhood system described as follows. The sites of the neighborhood sys
temare the variable nodes. The neighborhood A
s
of a site s is the set of all nodes
that lie in the rneighborhood of a site, where r is the locality rank of the ﬁrst
order formula ϕ whose ﬁxed point is being constructed by the LFP computation.
Finally, we make the neighborhood system into a graph in the standard way.
Namely, the vertices of the graph will be the set of sites. Each site s will be con
nected by an edge to every other site in A
s
. This graph will be called the interac
tion graph of the LFP computation. The ensemble of such graphs, parametrized
by the clause density α, will be denoted by I
k
(n, α).
Note that this interaction graph has many more edges in general than the
Gaifman graph. In particular, every node that was within the locality rank
neighborhood of the Gaifman graph is now connected to it by a single edge.
The resulting graph is, therefore, far more dense than the Gaifman graph.
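The construction of the interaction graph can likewise be sketched. The code below, which assumes the encoding sketch of the previous section, builds the Gaifman graph (edges from co-occurrence in R_C or R_E) and then connects every site to all sites within a given distance r; the locality rank r of ϕ is simply taken as a parameter, since no LFP computation is being performed here.

    from collections import defaultdict, deque

    def gaifman_graph(structure):
        """Adjacency sets: two elements are adjacent iff they co-occur in R_C or R_E."""
        adj = defaultdict(set)
        for tup in list(structure['R_C']) + list(structure['R_E']):
            for a in tup:
                for b in tup:
                    if a != b:
                        adj[a].add(b)
        return adj

    def interaction_graph(adj, r):
        """Connect each site to every site within distance r in the Gaifman graph."""
        inter = defaultdict(set)
        for s in adj:
            seen, frontier = {s}, deque([(s, 0)])
            while frontier:                           # breadth-first search to depth r
                u, d = frontier.popleft()
                if d == r:
                    continue
                for v in adj[u]:
                    if v not in seen:
                        seen.add(v)
                        frontier.append((v, d + 1))
            inter[s] = seen - {s}
        return inter

    # Example (uses encode_ksat from the earlier sketch):
    structure = encode_ksat([(1, -2, 3), (2, 3, -4)], n=4, partial_assignment={})
    print(len(interaction_graph(gaifman_graph(structure), r=2)['x3']))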
What is the size of cliques in this interaction graph? This is not the same as
the size of cliques in the factor graph, or the Gaifman graph, because the density
of the graph is higher. The size of the largest clique is a random variable. What
we want is an asymptotic almost sure (by this we mean with probability tending
to 1 in the thermodynamic limit) upper bound on the size of the cliques in the
distribution of the ensemble I_k(n, α).
Note: From here on, all the statements we make about ensembles should be under
stood to hold asymptotically almost surely in the respective random ensembles. By that
we mean that they hold with probability 1 as n → ∞.
Lemma 8.2. The sizes of cliques that appear in graphs of the ensemble I_k(n, α) are
upper bounded by poly(log n) asymptotically almost surely.
Proof. Let d_max be as in (7.1), and r be the locality rank of ϕ. The maximum
degree of a node in the Gaifman graph is asymptotically almost surely upper
bounded by d_max = O(log n). The locality rank is a fixed number (roughly equal
to 3^d, where d is the quantifier depth of the first order formula that is being
iterated). The node under consideration could have at most d_max others adjacent
to it, and the same for those, and so on. This gives us a coarse d_max^r upper bound
on the size of cliques.
Remark 8.3. While this bound is coarse, there is not much point trying to tighten
it because any constant power factor (r in the case above) can always be in
troduced by computing an r-ary LFP relation. This bound will be sufficient for
us.
Remark 8.4. High degree nodes in the Gaifman graph become signiﬁcant fea
tures in the interaction graph since they connect a large number of other nodes
to each other, and therefore allow the LFP computation to access a lot of infor
mation through a neighborhood system of given radius. It is these high degree
nodes that reduce factorization of the joint distribution since they represent di
rect interaction of a large number of variables with each other. Note that al
though the radii of neighborhoods are O(1), the number of nodes in them is
not O(1) due to the Poisson distribution of the variable node degrees, and the
existence of high degree nodes.
Remark 8.5. The relation being constructed is monadic, and so it does not intro
duce new edges into the Gaifman graph at each stage of the LFP computation.
When we compute a k-ary LFP, we can encode it into a monadic LFP over a
polynomially (n^k) larger product space, as is done in the canonical structure,
for instance, but with the linear order replaced by a weaker successor type rela
tion. Therefore, we can always choose to deal with monadic LFP. This is really a
restatement of the transitivity principle for inductive deﬁnitions that says that
if one can write an inductive deﬁnition in terms of other inductively deﬁned
relations over a structure, then one can write it directly in terms of the original
relations that existed in the structure [Mos74, p. 16].
8.2.3 Generating Distributions
The standard scenario in ﬁnite model theory is to ask a query about a structure
and obtain a Yes/No answer. For example, given a graph structure, we may ask
the query “Is the graph connected?” and get an answer.
But what we want are distributions of solutions that are computed by a pur
ported LFP algorithm for kSAT. This is not generally the case in ﬁnite model
theory. Intuitively, we want to generate solutions lying in exponentially many
clusters of the solution space of SAT in the d1RSB phase. How do we do this?
To generate these distributions, we will start with partial assignments to the set
of variables in the formula, and ask the question whether such a partial as
signment can be extended to a satisfying assignment. We need the following
deﬁnition.
Deﬁnition 8.6. A global relation associated to a decision problem on a class K is
a relation R of a ﬁxed arity k over A associated to each structure A ∈ K.
The following is a restatement of the Immerman-Vardi theorem phrased in
terms of computability of global relations. See [LR03, §11.2, p. 206] for a proof.
Theorem 8.7. A global relation R on a class of successor structures is computable in
polynomial time if and only if R is inductive.
We wish to see that the global relation that associates to each structure a
complete assignment that coincides with the partial assignment placed in the
relation R
P
is inductive. By the theorem above, this is equivalent to showing
that it is computable in polynomial time. In order to see this, we recall that
decision problems that are NP-complete have a property called self-reducibility
that allows us to query a decision procedure for them a polynomial number of
times and build a solution to the search version of the problem. If P = NP,
then all decision problems in NP have polynomial time solutions, and one can
use selfreducibility to see that the search version will also be polynomial time
solvable — namely, a solution will be constructible in polynomial time. Next
we will deﬁne our search problem in a way that a solution to it will be a global
relation: an instance of the problem will be a structure with partial assignments,
and the question will be whether the partial assignment can be extended to a
complete assignment. The complete assignment can be represented by a global
unary relation that will store all the literals assigned +1, and which must concur
with the partial assignment on its overlap. This decision problem is clearly in
NP, and therefore if P = NP, it would have a polynomial time search solution,
making R computable in polynomial time. The theorem then says R must be
inductive.
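The self-reducibility argument is short enough to write down as a procedure: given a polynomial time decision oracle answering "can this partial assignment be extended to a satisfying assignment?", one fixes the unassigned variables one at a time, consulting the oracle after each tentative choice. In the sketch below, extendable is a hypothetical oracle supplied by the user; the brute-force stand-in shown is adequate only for tiny instances and is not an algorithm claimed anywhere in the text.

    def complete_assignment(clauses, n, partial, extendable):
        """Extend `partial` to a full satisfying assignment using polynomially many
        calls to the decision oracle `extendable(clauses, partial)` -> bool."""
        assignment = dict(partial)
        if not extendable(clauses, assignment):
            return None                      # the partial assignment is not extendable
        for v in range(1, n + 1):
            if v in assignment:
                continue
            assignment[v] = True             # tentatively set x_v = 1
            if not extendable(clauses, assignment):
                assignment[v] = False        # the oracle forces x_v = 0
        return assignment

    # A brute-force stand-in oracle, adequate only for tiny instances.
    from itertools import product
    def extendable(clauses, partial, n=4):
        free = [v for v in range(1, n + 1) if v not in partial]
        for bits in product([False, True], repeat=len(free)):
            a = dict(partial, **dict(zip(free, bits)))
            if all(any(a[abs(l)] == (l > 0) for l in c) for c in clauses):
                return True
        return False

    print(complete_assignment([(1, -2), (2, 3)], n=4, partial={1: True}, extendable=extendable))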
Since we want to generate exponentially many such solutions, we will have
to partially assign O(n) (a small fraction) of the variables, and ask the LFP to
extend this assignment, whenever possible, to a satisfying assignment to all
variables. Thus, we now see what the relation R_P in our vocabulary stands for.
It holds the partial assignment to the variables. For example, suppose we want
to ask whether the partial assignment x_1 = 1, x_2 = 0, x_3 = 1 can be extended to
a satisfying assignment to the SAT formula; we would then store this partial
assignment in the tuple (x_1, x̄_2, x_3) in the relation R_P in our structure.
As mentioned earlier, the output satisfying assignment will be computed as
a unary relation which holds all the literals that are assigned the value 1. This
means that x_i is in the relation if x_i has been assigned the value 1 by the LFP,
and otherwise x̄_i is in the relation, meaning that x_i has been assigned the value
0 by the LFP computation. This is the simplest case where the FO(LFP) formula
is simple monadic. For more complex formulas, the output will be some section
of a relation of higher arity (please see Appendix A for details), and we will
view it as monadic over a polynomially larger structure.
Now we “initialize” our structure with different partial assignments and
ask the LFP to compute complete assignments when they exist. If the partial
assignment cannot be extended, we simply abort that particular attempt and
carry on with other partial assignments until we generate enough solutions. By
“enough” we mean rising exponentially with the underlying problem size. In
this way we get a distribution of solutions that is exponentially numerous, and
we now analyze it and compare it to the one that arises in the d1RSB phase of
random kSAT.
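The generation procedure just described amounts to a simple loop, sketched below. The procedure extend stands for the purported polynomial time extension algorithm whose existence is being assumed for the sake of argument (for experimentation one could substitute any complete SAT solver); everything else is bookkeeping of our own devising.

    import random

    def generate_solutions(clauses, n, extend, num_samples, fraction=0.1,
                           seed=None, max_tries=10**6):
        """Draw partial assignments on a small fraction of the variables and ask
        `extend(clauses, partial)` to complete them; collect the completions.
        `extend` returns a full assignment (dict) or None if no extension exists."""
        rng = random.Random(seed)
        solutions, tries = [], 0
        while len(solutions) < num_samples and tries < max_tries:
            tries += 1
            chosen = rng.sample(range(1, n + 1), max(1, int(fraction * n)))
            partial = {v: rng.random() < 0.5 for v in chosen}   # random initial values
            full = extend(clauses, partial)
            if full is not None:                                # abort silently otherwise
                solutions.append(tuple(full[v] for v in range(1, n + 1)))
        return solutions

For small instances one could, for example, pass the complete_assignment procedure from the self-reducibility sketch of the previous section as extend.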
8.3 Disentangling the Interactions: The ENSP Model
Now that we have a distribution of solutions computed by LFP, we would like
to examine its conditional independence characteristics. Does it factor through
any particular graphical model, for instance?
In Chapter 2, we considered various graphical models and their conditional
independence characteristics. Once again, our situation is not exactly like any
of these models. We will have to build our own, based on the principles we
have learnt. Let us ﬁrst note two issues.
The ﬁrst issue is that graphical models considered in literature are mostly
static. By this we mean that
1. they are of ﬁxed size, over a ﬁxed set of variables, and
2. the relations between the variables encoded in the models are ﬁxed.
In short, they model ﬁxed interactions between a ﬁxed set of variables. Since
we wish to apply them to the setting of complexity theory, we are interested in
families of such models, with a focus on how their structure changes with the
problem size.
The second issue that faces us now is as follows. Even within a certain size n,
we do not have a ﬁxed graph on n vertices that will model all our interactions.
The way a LFP computation proceeds through the structure will, in general,
vary with the initial partial assignment. We would expect a different “trajec
tory” of the LFP computation for different clusters in the d1RSB phase. So, if
one initial partial assignment landed us in cluster X, and another in cluster Y,
the way the LFP would go about assigning values to the unassigned variables
would be, in general, different. Even within a cluster, the trajectories of two
different initial partial assignments will not be the same, although we would
expect them to be similar. How do we deal with this situation?
In order to model this dynamic behavior, let us build some intuition ﬁrst.
1. We know that there is a “directedness” to LFP in that elements that are
assigned values at a certain stage of the computation then go on to inﬂu
ence other elements who are as yet unassigned. Thus, there is a directed
ﬂow of inﬂuence as the LFP computation progresses. This is, for exam
ple, different from a Markov random ﬁeld distribution which has no such
direction.
2. There are two types of ﬂows of information in a LFP computation. Con
sider simple monadic LFP. In one type of ﬂow, neighborhoods across the
structure inﬂuence the value an unassigned node will take. In the other
type of ﬂow, once an element is assigned a value, it changes the neighbor
hoods (or more precisely the local types of various other elements) in its
vicinity. Note that while the ﬁrst type of ﬂow happens during a stage of
the LFP, the second type is implicit. Namely, there is no separate stage of
the LFP where it happens. It implicitly happens once any element enters
the relation being computed.
3. Because the ﬂow of information is as described above, we will not be able
to express it using a simple DAG on either the set of vertices, or the set of
neighborhoods. Thus, we have to consider building a graphical model on
certain larger product spaces.
4. The stagewise nature of LFP is central to our analysis, and the various
stages cannot be bundled into one without losing crucial information.
Thus, we do need a model which captures each stage separately.
5. In order to exploit the factorization properties of directed graphical models,
and the resulting parametrization by potentials, we would like to avoid
any closed directed paths.
Let us now incorporate this intuition into a model, which we will call a
Element-Neighborhood-Stage Product Model, or ENSP model for short. This model
appears to be of independent interest. We now describe the ENSP model for
a simple monadic least ﬁxed point computation. The model is illustrated in
Fig. 8.1. It has two types of vertices.
Element Vertices These vertices, which encode the variables of the kSAT in
stance, are represented by the smaller circles in Fig. 8.1. They therefore
correspond to elements in the structure (recall that elements of the struc
ture represent the literals in the kSAT formula). However, each variable in
our original system X_1, . . . , X_n is represented by a different vertex at each stage
of the computation. Thus, each variable in the original system gives rise to
|ϕ^A| vertices in the ENSP model. Also recall that there are 2n elements in
the k-SAT structure, where n is the number of variables in the SAT for-
mula. However, in Fig. 8.1, we have only shown one vertex per variable,
and allowed it to be colored one of two colors: green indicating the variable
has been assigned the value +1, and red indicating the variable has been
assigned the value −1. Since the underlying formula ϕ that is being iter-
ated is positive, elements do not change their color once they have been
assigned.
Neighborhood Vertices These vertices, denoted by the larger circles with blue
shading in Fig. 8.1, represent the rneighborhoods of the elements in the
structure. Just like variables, each neighborhood is also represented by a
different vertex at each stage of the LFP computation. Each of their pos
sible values are the possible isomorphism types of the rneighborhoods,
89
8. SEPARATION OF COMPLEXITY CLASSES 90
N(x
1,1
)
X
1,1
N(x
2,1
)
N(x
3,1
)
N(x
n1,1
)
N(x
n,1
)
N(x
1,2
)
N(x
2,2
)
N(x
3,2
)
N(x
n1,2
)
N(x
n,2
)
N(x
1,3
)
N(x
2,3
)
N(x
3,3
)
N(x
n1,3
)
N(x
n,3
)
N(x
1
)
N(x
2
)
N(x
3
)
N(x
n1
)
N(x
n
)
⋮
X
2,1
X
3,1
X
4,1
X
i,1
X
i+1,1
X
n1,1
X
n,1
⋮
⋮
X
1,2
X
2,2
X
3,2
X
4,2
X
i,2
X
i+1,2
X
n1,2
X
n,2
X
1,3
X
2,3
X
3,3
X
4,3
X
i,3
X
i+1,3
X
n1,3
X
n,3
X
1
X
2
X
3
X
4
X
i
X
i+1
X
n1
X
n
⋮ ⋮
⋮ ⋮ ⋮
Stages of LFP
E
l
e
m
e
n
t
s
N
e
i
g
h
b
o
r
h
o
o
d
s
Figure 8.1: The ElementNeighborhoodStage Product (ENSP) model for LFP
ϕ
. See
text for description.
90
8. SEPARATION OF COMPLEXITY CLASSES 91
namely, the local rtypes of the corresponding element. These vertices
may be thought of as vectors of size poly(log n) corresponding to the cliques
that occur in the neighborhood system we described in Sec. 8.2.2, or one
may think of them as a single variable taking the value of the various local
rtypes.
Now we describe the stages of the ENSP. There are 2|ϕ^A| stages, starting
from the leftmost and terminating at the rightmost. Each stage of the LFP com
putation is represented by two stages in the ENSP. Initially, at the start of the
LFP computation, we are in the leftmost stage. Here, notice that some variable
vertices are colored green, and some red. In the figure, X_{4,1} is green, and X_{i,1} is
red. This indicates that the initial partial assignment that we provided the LFP
had variable X_4 assigned +1 and variable X_i assigned −1. In this way, a small
fraction O(n) of the variables are assigned values. The LFP is asked to extend
this partial assignment to a complete satisfying assignment on all variables (if it
exists, and abort if not).
Let us now look at the transition to the second stage of the ENSP. At this
stage, based on the conditions expressed by the formula ϕ in terms of their
own local neighborhoods, and the existence of a bounded number of other local
neighborhoods in the structure, some elements enter the relation. This means
they get assigned +1 or −1. In the figure, the variable X_{3,2} takes the color green
based on information gathered from its own neighborhood N(X_{3,1}) and two
other neighborhoods N(X_{2,1}) and N(X_{n−1,1}). This indicates that at the first
stage, the LFP assigned the value +1 to the variable X_3. Similarly, it assigns
the value −1 to variable X_n
(remember that the ﬁrst two stages in the ENSP
correspond to the ﬁrst stage of the LFP computation). The vertices that do not
change state simply transmit their existing state to the corresponding vertices
in the next stage by a horizontal arrow, which we do not show in the ﬁgure in
order to avoid clutter.
Once some variables have been assigned values in the first stage, their neighborhoods, and the neighborhoods in their vicinity (meaning, the neighborhoods of other elements that are in their vicinity) change. This is indicated by the dotted arrows between the second and third stages of the ENSP. Note that this happens implicitly during LFP computation. That is why we have represented each stage of the actual LFP computation by two stages in the ENSP. The first stage is the explicit stage, where variables get assigned values. The second stage is the implicit stage, where variables “update their neighborhoods” and those neighborhoods in their vicinity. For example, once X_3 has been assigned the value +1, it updates its neighborhood and also the neighborhood of variable X_2, which lies in its vicinity (in this example). In this way, influence propagates through the structure during an LFP computation. There are two stages of the ENSP for each stage of the LFP. Thus, there are 2|ϕ^A| stages of the ENSP in all.
By the end of the computation, all variables have been assigned values, and we have a satisfying assignment. The variables at the last stage, X_{i,|ϕ^A|}, are just the original X_i. Thus, we recover our original variables (X_1, …, X_n) by looking only at the last (rightmost in the figure) level of the ENSP.
By introducing extra variables to represent each stage of each variable and
each neighborhood in the SAT formula, we have accomplished our original
aim. We have embedded our original set of variates into a polynomially larger
product space, and obtained a directed graphical model on this larger space.
This product space has a nice factorization due to the directed graph structure.
This is what we will exploit.
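To make the construction concrete, the following is a minimal, hypothetical Python sketch (not from the paper) of the bookkeeping the ENSP performs: every variable and every neighborhood is replicated once per stage, an assignment made at an explicit stage persists thereafter, and directed edges record which neighborhood vertices informed each step. The class interface and names are illustrative assumptions.

    # Illustrative sketch of ENSP bookkeeping (toy data structure, not the paper's formalism).
    class ENSP:
        def __init__(self, n_vars, n_stages):
            self.n_vars = n_vars
            self.n_stages = n_stages      # 2 * |phi^A| stages, as in the text
            self.color = {}               # (variable index, stage) -> +1 / -1, once assigned
            self.parents = {}             # vertex -> list of parent vertices

        def assign(self, i, t, value, informing_neighborhoods):
            """Explicit stage: variable i takes `value` at stage t, based on the listed
            neighborhood vertices from stage t - 1; positive induction means the value
            persists for all later stages."""
            for s in range(t, self.n_stages):
                self.color[(i, s)] = value
            self.parents[("var", i, t)] = [("nbhd", j, t - 1)
                                           for j in informing_neighborhoods]

        def update_neighborhood(self, j, t, touched_vars):
            """Implicit stage: neighborhood j is recomputed from the variables assigned
            at the previous explicit stage."""
            self.parents[("nbhd", j, t)] = [("var", i, t - 1) for i in touched_vars]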
Remark 8.8. The explicit stages of the ENSP also perform the task of propagating the local constraints placed by the various factors in the underlying factor graph outward into the larger graphical model. For example, in our case of the factors encoding clauses of a k-SAT formula, the local constraint placed by a clause is that the global assignment must evade exactly one restriction to a specified set of k coordinates. For example, in the case of k = 3 the clause x_1 ∨ x_2 ∨ ¬x_3 permits all global assignments except those whose first three coordinates are (−1, −1, +1). In contrast, if the factor were a XOR-SAT clause, the local restrictions are all in the form of linear spaces, and so the global solution is an intersection of such spaces. k-SAT asks whether certain spaces
of the form

    {ω : (ω_{i_1}, …, ω_{i_k}) = (ν_1, …, ν_k)}

have nonempty intersections, where 1 ≤ i_1 < i_2 < ⋯ < i_k ≤ n and the prohibited ν_i are ±1. Note that these are O(1) local constraints per factor. In contrast, XOR-SAT asks whether certain linear spaces have a nonempty intersection. Linearity is a global constraint. Of course, all messages are coded into the formula ϕ. Thus, the end result of multiple runs of the LFP will be a space of solutions conditioned upon the requirements. So, for instance, if we were to try to solve XOR-SAT formulae, we would obtain a space that would be linear.
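As a small, hedged illustration of the contrast drawn in this remark, the snippet below checks a single 3-SAT clause, which forbids exactly one pattern on its three coordinates, against a XOR-SAT clause, which imposes a parity (linear) constraint in the ±1 encoding used above. The encoding convention and function names are assumptions made for the example.

    # Toy encoding assumption: each variable takes the value +1 ("true") or -1 ("false").

    def sat_clause_ok(assignment, coords=(0, 1, 2), forbidden=(-1, -1, +1)):
        # A k-SAT clause excludes exactly one restriction (`forbidden`) on `coords`;
        # with forbidden = (-1, -1, +1) this is the clause x1 OR x2 OR (NOT x3).
        return tuple(assignment[i] for i in coords) != forbidden

    def xorsat_clause_ok(assignment, coords=(0, 1, 2), rhs=+1):
        # A XOR-SAT clause is a parity constraint (linear over GF(2)): the product of
        # the +/-1 values on `coords` must equal rhs, so its solution set is affine.
        prod = 1
        for i in coords:
            prod *= assignment[i]
        return prod == rhs

    example = [+1, -1, -1, +1]
    print(sat_clause_ok(example), xorsat_clause_ok(example))   # True True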
Thus, we have a directed graph with 2n + n = 3n vertices at each stage, and 2|ϕ^A| stages. Since the LFP completes its computation in under a fixed
polynomial number of steps, this means that we have managed to represent
the LFP computation on a structure as a directed model using a polynomial
overhead in the number of parameters of our representation space. In other
words, by embedding the covariates into a polynomially larger space, we have
been able to put a common structure on various computations done by LFP
on them. Note that without embedding the covariates into a larger space, we
would not be able to place the various computations done by LFP into a single
graphical model. The insight that we can afford to incur a polynomial cost in
order to obtain a common graphical model on a larger product space was key
to this section.
8.4 Parametrization of the ENSP
Our goal is to demonstrate the following.
If LFP were able to compute solutions to the d1RSB phase of random k-SAT, then the distribution of the entire space of solutions would have a substantially simpler parametrization than we know it does.
In order to accomplish this, we need to measure the growth in the number of independent parameters required to parametrize the distribution of solutions that we have just computed using LFP.
In order to do this, we have embedded our variates into a polynomially larger space that factorizes according to a directed model, the ENSP. We have seen that the cliques in the ENSP are of size poly(log n). By employing the version of the Hammersley-Clifford theorem for directed models, Theorem 2.13, we also know that we can parametrize the distribution by specifying a system of potentials over its cliques, automatically ensuring conditional independence. The directed nature of the ENSP also means that we can factor the resulting distribution into conditional probability distributions (CPDs) at each vertex of the model, of the form P(x | pa(x)), and then normalize each CPD. Once again, each CPD will have scope only poly(log n). From our perspective, the major benefit of directed graphical models is that we can always do this, without any added positivity constraints. Recall that positivity is required in order to apply the Hammersley-Clifford theorem to obtain factorizations for undirected models.
How do we compute the CPDs or potentials? We assign various initial partial assignments to the variables as described in Sec. 8.2.3 and let the LFP computations run. We only consider successful computations, namely those where the LFP was able to extend the partial assignment to a full satisfying assignment to the underlying k-SAT formula. We represent each stage of the LFP computation on the corresponding two stages of the ENSP and thus obtain one full instantiation of the representation space. We do this exponentially many times, and build up our local CPDs by simply recording local statistics over all these runs. This gives us the factorization (over the expanded representation space) of our distribution, assuming that P = NP.
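A hedged sketch of this counting procedure follows: run the purported LFP solver from many initial partial assignments, keep only the successful runs, and estimate each vertex's conditional probability distribution P(x | pa(x)) by recording local statistics over parent configurations. The solver run_lfp and the parent map are abstract placeholders, not objects defined in the paper.

    from collections import defaultdict

    def estimate_cpds(run_lfp, initial_assignments, parents_of):
        """Estimate P(x | pa(x)) for every ENSP vertex by frequency counts over
        successful runs.  `run_lfp` stands in for the purported polynomial time
        solver; it returns a full instantiation of the representation space, or
        None if the partial assignment cannot be extended."""
        counts = defaultdict(lambda: defaultdict(int))
        for sigma in initial_assignments:
            instantiation = run_lfp(sigma)
            if instantiation is None:
                continue                      # keep successful computations only
            for v, pa in parents_of.items():
                pa_val = tuple(instantiation[u] for u in pa)
                counts[(v, pa_val)][instantiation[v]] += 1
        cpds = {}
        for key, table in counts.items():     # normalize each table into a CPD
            total = sum(table.values())
            cpds[key] = {val: c / total for val, c in table.items()}
        return cpds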
The ENSP for different runs of the LFP will, in general, be different. This is because the flow of influences through the stages of the ENSP will, in general, depend on the initial partial assignment. What is important is that each such model will have certain properties in common, such as the largest clique size, which determines the order of the number of parameters. Let us inspect the properties that determine the parametrization of the ENSP model.
1. There are polynomially many more vertices in the ENSP model than elements in the underlying structure.
2. Lemma 8.2 gives us a poly(log n) upper bound on the size of the neighborhoods. The number of local r-types whose value each neighborhood vertex can take is 2^{poly(log n)}.
3. By Theorem 5.9 there is a fixed constant s such that there must exist s neighborhoods in the structure satisfying certain local conditions for the formula to hold. Remember, we are presently analyzing a single stage of the LFP. This again gives us poly(n) (O(n^s) in this case) different possibilities for each explicit stage of the ENSP. The same can also be arrived at by utilizing the normal form of Theorem 5.10. By the previous point, each of these possibilities can be parametrized by 2^{poly(log n)} parameters, giving us a total of 2^{poly(log n)} parameters required.
4. At each implicit stage of the ENSP, we have to update the types of the neighborhoods that were affected by the induction of elements at the previous explicit stage. There are only n neighborhoods, and each has at most poly(log n) elements.
The ENSP is an interaction model where direct interactions are of size poly(log n),
and are chained together through conditional independencies.
Proposition 8.9. A distribution that factorizes according to the ENSP can be parametrized with 2^{poly(log n)} independent parameters. The scope of the factors in the parametrization grows as poly(log n).
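To make the gap behind Proposition 8.9 concrete, here is a small numerical comparison, under illustrative assumptions (base 2 and exponent 2 in poly(log n)), between a 2^{poly(log n)} parametrization and the c^n parameters a typical joint distribution of n binary covariates requires.

    import math

    def log2_polylog_params(n, exponent=2):
        # log2 of 2^{(log2 n)^exponent}: a poly(log n)-parametrizable family.
        return math.log2(n) ** exponent

    def log2_generic_params(n):
        # log2 of 2^n: a typical joint distribution of n binary covariates.
        return float(n)

    for n in (64, 1024, 10**6):
        print(n, log2_polylog_params(n), log2_generic_params(n))
    # Already at n = 1024 the exponents differ by an order of magnitude
    # (100 versus 1024), and the gap widens rapidly with n.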
This also underscores the principle that the description of the parameter space is simpler because it only involves direct interactions between ℓ variates at a time, and then chains these together through conditional independencies. In the case of the LFP neighborhood system, the size of the largest cliques is poly(log n) for each single run of the LFP. This will not change if we were computing using complex fixed points, since the space of k-types is only polynomially larger than the underlying structure.
The crucial property of the distribution of the ENSP is that it admits a recursive factorization. This is what drastically reduces the parameter space required
to specify the distribution. It also allows us to parametrize the ENSP by simply
specifying potentials on its maximal cliques, which are of size poly(log n).
While the entire distribution obtained by LFP may not factor according to any one ENSP, it is a mixture of distributions each of which factorizes as per some ENSP. Next, we analyze the features of such a mixture when exponentially many instantiations of it are provided. As the reader may intuit, when such a mixture is asked to provide exponentially many samples, these will show features of scope poly(log n). This is simply a statement about the paucity of independent parameters in the component distributions of the mixture.
8.5 Separation
We continue our treatment of range limited poly(log n)-parametrizations. We will treat the value limited case shortly. The property of the ENSP for range limited models that allows us to analyze the behavior of mixtures is that it is specified by local Gibbs potentials on its cliques. In other words, a variable interacts with the rest of the model only through the cliques that it is part of. These cliques are parametrized by potentials. We may think of the cliques as the building blocks of each ENSP. The cliques are also upper bounded in size by poly(log n). Furthermore, a vertex may be in at most O(log n) such cliques. Therefore, a vertex displays collective behavior only of range poly(log n). Thus, the mixture comprises distributions that can be parametrized by a subspace of R^{poly(log n)}, in contrast to requiring the larger space R^{O(n)}. This means that when exponentially many solutions are generated, the features in the mixture will be of size poly(log n), not of size O(n).
Next, let us examine the value limited case. In this case, the differences are as follows.
1. The solutions are generated by complex LFP, as sections of inductive relations of higher arity.
2. There are O(n) interactions at each stage, but the graphical model is parametrizable with only 2^{poly(log n)} parameters.
3. Since the interactions are O(n), the Gibbs potentials are specified over cliques of size O(n).
4. However, the potentials are parametrized with only 2^{poly(log n)} parameters in spite of having O(n) size. If we think of a potential as a CPD, then the CPDs are wide (they have O(n) columns), but are not very long (they have only 2^{poly(log n)} rows); see the sketch below.
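The sketch below is a toy illustration, with invented numbers, of such a “wide but short” table: its scope is n columns, but only a handful of parent configurations (rows) are ever realized, so it carries far fewer parameters than a full table over its scope would.

    import random

    # Toy numbers: a table whose scope is n = 40 parent variables would generically
    # need 2^40 rows; a value limited table stores only a few realizable rows.
    n = 40
    rows = [tuple(random.choice((-1, +1)) for _ in range(n)) for _ in range(8)]
    cpd = {row: {+1: 0.9, -1: 0.1} for row in rows}   # wide (n columns), short (8 rows)

    print("columns per row:", n, "rows stored:", len(cpd), "vs 2^n =", 2 ** n)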
How do we analyze mixtures of such potentials? The idea is as follows. The potentials are already large in their scope (O(n)). So we will create a single potential over the entire graphical model which will have scope poly(n) (since the computation terminates in polynomial time). How do we merge various O(n) potentials into a single poly(n) sized potential? And what will be the resulting parametrization of this merged potential?
In order to merge the potentials, we observe that they have a certain sheaf-like property. Namely, since they are CPDs of the same LFP, they must agree on overlaps. This means that two CPDs cannot specify different behavior for the same priors. Remember, these CPDs are nothing but the rules by which the computation proceeds, and these rules are the same for different computations since it is the same LFP that is being used. Thus, the final merged potential will be compatible with each smaller potential on overlaps.
Using this property, we can see that if each of the potentials has a poly(log n)-parametrization, then so must the final merged potential. Once again we see that we cannot instantiate exponentially many solutions from such a limited parametrization and obtain the d1RSB picture, which requires ample O(n) joint distributions.
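A hedged sketch of the merging step: since the tables come from one and the same LFP, they must not disagree wherever they overlap, so merging reduces to a union with a consistency check. Here a configuration is simplified to a hashable key and agreement is checked only on identical keys; the paper's requirement is the analogous consistency on shared variables.

    def merge_potentials(tables):
        """Merge CPD-like tables (dicts mapping a configuration key to a value),
        insisting that they agree on any key they share; this is the sheaf-like
        compatibility described in the text."""
        merged = {}
        for table in tables:
            for config, value in table.items():
                if config in merged and merged[config] != value:
                    raise ValueError("tables disagree on an overlap: same prior, "
                                     "different behaviour cannot come from one LFP")
                merged[config] = value
        return merged

    # The merged table is no larger than the union of its parts, so merging tables
    # that each carry only 2^{poly(log n)} rows does not blow up the parametrization.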
This explains why polynomial time algorithms fail when interactions between variables are ample, namely O(n), without the possibility of factoring into smaller pieces through conditional independencies. This also puts on rigorous ground the empirical observation that even NP-complete problems are easy in large regimes, and become hard only when the densities of constraints increase above a certain threshold. This threshold is precisely the value where ample irreducible O(n) interactions first appear in almost all randomly constructed instances.
In the case of random k-SAT in the d1RSB phase, these ample irreducible O(n) interactions manifest through the appearance of cores, which comprise clauses whose variables are coupled so tightly that one has to assign them “simultaneously.” Cores arise when a set of C = O(n) clauses have all their variables also lying in a set of size C. Once the clause density is sufficiently high, cores cannot be assigned poly(log n) variables at a time, with successive such assignments chained together through conditional independencies. Nor are they value limited, since they instantiate in each of the exponentially many clusters in the d1RSB phase. Since cores do not factor through conditional independencies, and are not value limited either, polynomial time algorithms cannot assign their variables correctly. Intuitively, variables in a core are so tightly coupled together that they can only vary jointly, without any conditional independencies between subsets. Furthermore, their variation is ample. In other words, they represent irreducible interactions of size O(n) which may not be factored any further, and which display the ample joint behavior of a system of n covariates, which requires O(c^n) independent parameters to specify. In such cases, parametrization over cliques of size only poly(log n) is insufficient to specify the joint distribution. Likewise, parametrization over cliques of size O(n), but with only 2^{poly(log n)} parameters, is insufficient.
We have shown that in the ENSP for range limited models, the size of the largest such irreducible interactions is poly(log n), not O(n). Furthermore, since the model is directed, it guarantees us conditional independencies at the level of its largest interactions. More precisely, it guarantees us that there will exist conditional independencies in sets of size larger than the largest cliques in its moral graph, which are of size poly(log n). In other words, there will be independent variation within cores when conditioned upon values of intermediate variables that also lie within the core, should the core factorize as per the ENSP. This is illustrated in Fig. 8.2. This contradicts the known behavior of cores for sufficiently high values of k and clause density in the d1RSB phase. In other words, while the space of solutions generated by LFP has features of size poly(log n), the features present in cores in the d1RSB phase have size O(n).
The framework we have constructed allows us to analyze the set of polynomial time algorithms simultaneously, since they can all be captured by some LFP, instead of dealing with each individual algorithm separately. It makes precise the notion that polynomial time algorithms can take into account only interactions between variables that grow as poly(log n), not as O(n).
[Figure 8.2: The factorization and conditional independencies within a core due to potentials of size poly(log n); blocks of size poly(log n) are independent given intermediate values.]
At this point, we are ready to state our main theorem.
Theorem 8.10. P ≠ NP.
Proof. Consider the solution space of k-SAT in the d1RSB phase for k > 8, as recalled in Section 6.2.1. We know that for high enough values of the clause density α, we have O(n) frozen variables in almost all of the exponentially many clusters. The first observation we make is that since the variables in cores are instantiated in exponentially many clusters, we can preclude a value limited poly(log n)-parametrization. Let us consider then the situation where these clusters were generated by a purported range limited LFP algorithm for k-SAT that can be parametrized by the ENSP model with clique sizes poly(log n). When exponentially many solutions have been generated from distributions having the parametrization of the ENSP model, we will see the effect of conditional independencies beyond range poly(log n). Let αβγ be a representation of the variables in cliques α, β and γ. Given a value of β, we will see independent variation of the variables of α and γ over all their possible conditional values. If each set of such variables has scope at most poly(log n), then once we have generated more than 2^{poly(log n)} distinct solutions, we will have nontrivial conditional distributions conditioned upon values of the β variables. At this point, the conditional independencies ensure that we will see cross terms of the form

    α_1βγ_1, α_2βγ_2, α_1βγ_2, α_2βγ_1.

Note that since O(n) variables have to be changed when jumping from one cluster to another, we may even choose our poly(log n) blocks to lie in overlaps of these variables. This would mean that with a poly(log n) change in the frozen variables of one cluster, we would get a solution in another cluster. But we know that in the highly constrained phases of d1RSB, we need O(n) variable flips to get from one cluster to the next. This gives us the contradiction that we seek.
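The cross-term step of this argument can be illustrated numerically. Under conditional independence of the blocks α and γ given β, the support of the conditional distribution is a product set, so observing α_1βγ_1 and α_2βγ_2 forces the cross terms α_1βγ_2 and α_2βγ_1. The following sketch, with purely symbolic block values, computes that closure; it is an illustration of the statement, not part of the proof.

    from itertools import product

    def cross_term_closure(solutions):
        """Given observed (alpha, beta, gamma) triples, return the triples forced by
        conditional independence of alpha and gamma given beta: for each beta, the
        support must be the product of the observed alpha values and gamma values."""
        by_beta = {}
        for a, b, g in solutions:
            by_beta.setdefault(b, (set(), set()))
            by_beta[b][0].add(a)
            by_beta[b][1].add(g)
        forced = set()
        for b, (alphas, gammas) in by_beta.items():
            for a, g in product(alphas, gammas):
                forced.add((a, b, g))
        return forced

    obs = {("a1", "b", "g1"), ("a2", "b", "g2")}
    print(cross_term_closure(obs) - obs)   # {('a1', 'b', 'g2'), ('a2', 'b', 'g1')}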
The basic question in analyzing such mixtures is: how many variables do we need to condition upon in order to split the distribution into conditionally independent pieces? The answer is given by (a) the size of the largest cliques and (b) the number of such cliques that a single variable can occur in. In our case, these two give us a poly(log n) quantity. When exponentially many solutions have been generated, there will be conditional distributions that exhibit conditional independence between blocks of variates of size poly(log n). Namely, there will be no effect of the values of one upon those of the other. This is what prevents the Hamming distance between solutions from being O(n). This is shown pictorially in Fig. 8.2.
We may think of such mixtures as possessing only c^{poly(log n)} “channels” through which to communicate directly with other variables. All long range correlations transmitted in such a distribution must pass through only these many channels. Therefore, exponentially many solutions cannot independently transmit O(n) correlations (namely, the variables that have to be changed when jumping from one cluster to another). Their correlations must factor through this bottleneck, which gives us conditional independencies beyond range poly(log n). This means that blocks of size larger than this now vary independently of each other conditioned upon some intermediate variables. This gives us the cross-terms described earlier, and prevents the Hamming distance from being O(n) on average over exponentially many solutions. Instead, it must be poly(log n).
We can see that due to the limited parameter space that determines each
variable, it can only display a limited joint behavior. This behavior is completely
determined by poly(log n) other variates, not by O(n) other variates. Thus, the
“jointness” in this distribution lies at a level poly(log n). This is why when
enough solutions have been generated by the LFP, the resulting distribution
will start showing features that are at most of size poly(log n). In other words,
there will be solutions that show cross-terms between features whose size is
poly(log n).
It is also useful to consider how many different parametrizations a block of size poly(log n) may have. Each variable may choose poly(log n) partners out of O(n) to form a potential, and it may participate in O(log n) such potentials. Even coarsely, this means that blocks of variables of size poly(log n) only “see” the rest of the distribution through equivalence classes that grow as O(n^{poly(log n)}). This quantity would have to grow exponentially with n in order to display the behavior of the d1RSB phase. Once again we return to the same point: the jointness of the distribution that a purported LFP algorithm would generate would lie at the poly(log n) levels of conditional independence, whereas the jointness in the distribution of the d1RSB solution space is truly O(n). Namely, there are irreducible interactions of size O(n) that cannot be expressed as interactions between poly(log n) variates at a time, chained together by conditional independencies, as would be done by an LFP. This is central to the separation of complexity classes. Hard regimes of NP-complete problems allow O(n) variates to vary jointly and irreducibly, and accounting for such O(n) jointness that cannot be factored any further is beyond the capability of polynomial time algorithms.
We collect some observations in the following.
Remark 8.11. The poly(log n) size of features, and therefore of the Hamming distance between solutions, tells us that polynomial time algorithms correspond to the RS phase of the 1RSB picture, not to the d1RSB phase.
Remark 8.12. We can see from the preceding discussion that the number of independent parameters required to specify the distribution of the entire solution space in the d1RSB phase (for k > 8) rises as c^n, c > 1. This is because it takes that many parameters to specify the exponentially many O(n)-variable “jumps” between the clusters. These jumps are independent, and cannot be factored through poly(log n) sized factors, since that would mean conditional independence of pieces of size poly(log n) and would ensure that the Hamming distance between solutions was of that order.
Remark 8.13. Note that the central notion is the number of independent parameters, not frozen variables. For example, frozen variables would occur even in low dimensional parametrizations in the presence of additional constraints placed by the problem. This is what happens in XOR-SAT, where the linearity of the problem causes frozen variables to occur. The frozen variables in XOR-SAT do not arise due to a high dimensional parametrization, but simply because the 2-core percolates [MM09, §18.3]. Each cluster is a linear space tagged onto a solution for the 2-core, which is also why the clusters are all of the same size. Linear spaces always admit a simple description as the linear span of a basis, which takes on the order of the log of the size of the space.
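A small GF(2) illustration of this remark, under toy assumptions: the solution set of a XOR-SAT system is an affine space, so one particular solution plus a basis of the kernel describes the entire (possibly exponentially large) set, and the number of parameters is on the order of the log of the number of solutions, even though many variables may be frozen.

    import itertools

    def xor_solutions(equations, n):
        """Brute force the solution set of XOR (parity) equations over n binary
        variables; each equation is (indices, rhs), meaning the XOR of those
        variables equals rhs.  Brute force is only for illustration."""
        sols = []
        for x in itertools.product((0, 1), repeat=n):
            if all(sum(x[i] for i in idx) % 2 == rhs for idx, rhs in equations):
                sols.append(x)
        return sols

    eqs = [([0, 1], 0), ([1, 2], 1)]       # x0 + x1 = 0 and x1 + x2 = 1 over GF(2)
    sols = xor_solutions(eqs, n=4)
    print(len(sols))                        # 4 = 2^(4 - rank) solutions
    # One particular solution plus a basis of the 2-dimensional kernel describes
    # the whole set: roughly n * log2(len(sols)) bits, not one entry per solution.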
Remark 8.14. It is tempting to think that there will be such a parametrization whenever the algorithmic procedure used to generate the solutions is stagewise local. This is not so. We need the added requirement that “mistakes” are not allowed; namely, we cannot change a decision once it has been made. Otherwise, even PFP has the stagewise bounded local property, but it can give rise to distributions without any conditional independence factorizations whose factors are of size poly(log n). When placed in the ENSP, we see that there is factorization, but over an exponentially larger space, where the clique sizes are exponential. One might observe that it is the requirement that we make no trial and error at all that limits LFP computations in a fundamentally different manner than the locality of information flows does. See [Put65] for an interesting related notion of “trial and error predicates” in computability theory.
8.6 Some Perspectives
The following perspectives are reinforced by this work.
1. The most natural object of study for constraint satisfaction problems is the entire space of solutions. It is in this space that the dependencies and independencies which the CSP imposes upon the covariates that satisfy it manifest themselves.
2. There is an intimate relation between the geometry of the space and its
parametrization. Studying the parametrization of the space of solutions is
a worthwhile pursuit.
3. The view that an algorithm is a means to generate one solution is limited
in the sense that it is oblivious to the geometry of the space of all solutions.
It may, of course, be the appropriate approach in many applications. But
there are applications where requiring algorithms to generate numerous
solutions and approximate with increasing accuracy the entire space of
solutions seems more natural.
4. Conditional independence over factors of small scope is at the heart of resolving CSPs by means of polynomial time algorithms. In other words, polynomial time algorithms succeed by successively “breaking up” the problem into smaller subproblems that are joined to each other through conditional independence. Consequently, polynomial time algorithms cannot solve problems in regimes where blocks whose order is the same as the underlying problem instance require simultaneous resolution.
5. Polynomial time algorithms resolve the variables in CSPs in a certain order, and with a certain structure. This structure is important in their study. In order to bring this structure under study, we may have to embed the space of covariates into a larger space (as done by the ENSP).
A. Reduction to a Single LFP
Operation
A.1 The Transitivity Theorem for LFP
We now gather a few results that will enable us to cast any LFP into one having
just one application of the LFP operator. Since we use this construction to deal
with complex ﬁxed points, we reproduce it in this appendix. The presentation
here closely follows [EF06, Ch. 8].
The first result, known as the transitivity theorem, tells us that nested fixed points can always be replaced by simultaneous fixed points. Let ϕ(x, X, Y) and ψ(y, X, Y) be first order formulas positive in X and Y. Moreover, assume that no individual variable free in LFP_{y,Y} ψ(y, X, Y) gets into the scope of a corresponding quantifier or LFP operator in (A.1):

    [LFP_{x,X} ϕ(x, X, [LFP_{y,Y} ψ(y, X, Y)])] t.        (A.1)

Then (A.1) is equivalent to a formula of the form

    ∃(∀) u [LFP_{z,Z} χ(z, Z)] u,

where χ is first order.
A.2 Sections and the Simultaneous Induction Lemma
for LFP
Next we deal with simultaneous fixed points. Recall that simultaneous inductions do not increase the expressive power of LFP. The proof utilizes a coding procedure whereby each simultaneous induction is embedded as a section in a single LFP operation of higher arity. First, we introduce the notion of a section.
Definition A.1. Let R be a relation of arity k + l on A and let a ∈ A^k. Then the a-section of R, denoted by R_a, is given by

    R_a := {b ∈ A^l : R(b a)}.
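For concreteness, a small Python illustration of Definition A.1 follows, representing a relation as a set of tuples; the ordering convention (the section index a appended at the end, matching R(b a) above) and the function name are assumptions of the example.

    def a_section(R, a, l):
        """The a-section of a relation R of arity l + len(a): all l-tuples b such
        that the concatenated tuple b + a lies in R (Definition A.1)."""
        return {t[:l] for t in R if len(t) == l + len(a) and t[l:] == a}

    # Example: a ternary relation on A = {0, 1, 2}, sectioned by a = (2,) with l = 2.
    R = {(0, 1, 2), (1, 1, 2), (0, 0, 1)}
    print(a_section(R, (2,), l=2))          # {(0, 1), (1, 1)}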
Next we see how sections can be used to encode multiple simultaneous operators producing relations of lower arity into a single operator producing a relation of higher arity. Let m operators F_1, …, F_m act as follows:

    F_1 : Pow(A^{k_1}) × ⋯ × Pow(A^{k_m}) → Pow(A^{k_1})
    F_2 : Pow(A^{k_1}) × ⋯ × Pow(A^{k_m}) → Pow(A^{k_2})
    ⋮
    F_m : Pow(A^{k_1}) × ⋯ × Pow(A^{k_m}) → Pow(A^{k_m})        (A.2)
We wish to embed these operators as sections of a “larger” operator, which
is known as their simultaneous join.
We will denote a tuple consisting only of a’s by ã; the length of ã will be clear from context.
Definition A.2. Let F_1, …, F_m be operators acting as above. Set

    k := max{k_1, …, k_m} + m + 1.
The simultaneous join of F_1, …, F_m, denoted by J(F_1, …, F_m), is an operator acting as

    J(F_1, …, F_m) : Pow(A^k) → Pow(A^k)

such that for any a, b ∈ A, the ãb^i-section (where the length of ã here is k − i + 1) of the n-th power of J is the n-th power of the operator F_i. Concretely, the simultaneous join is given by

    J(R) := ⋃_{a,b ∈ A, a ≠ b} ( (F_1(R_{ãb^1}, …, R_{ãb^m}) × {ãb^1}) ∪ ⋯ ∪ (F_m(R_{ãb^1}, …, R_{ãb^m}) × {ãb^m}) ).        (A.3)
The simultaneous join operator deﬁned above has properties we will need
to use. These are collected below.
Lemma A.3. The i-th power J^i of the simultaneous join operator satisfies

    J^i = ⋃_{a,b ∈ A, a ≠ b} ( (F_1^i × {ãb^1}) ∪ ⋯ ∪ (F_m^i × {ãb^m}) ).        (A.4)
The following corollaries are now immediate.
Corollary A.4. The fixed point J^∞ of the simultaneous join of the operators (F_1, …, F_m) exists if and only if their simultaneous fixed point (F_1^∞, …, F_m^∞) exists.
Corollary A.5. The simultaneous join of inductive operators is inductive.
Finally, we need to show that the simultaneous join can itself be expressed as an LFP computation. We need formulas that will help us define sections of a simultaneous induction. Since the sections are coded using tuples of the form ã b^i (a run of a’s followed by copies of b), we will need formulas that can express this.
Definition A.6. For l ≥ 1 and i = 1, …, l, the section formulas δ^l_i(x_1, …, x_l, v, w) are given by

    δ^l_i(x_1, …, x_l, v, w) :=
        (v ≠ w) ∧ (x_1 = ⋯ = x_l = v)                                            if i = 1,
        (v ≠ w) ∧ (x_1 = ⋯ = x_{l−i+1} = v) ∧ (x_{l−i+2} = ⋯ = x_l = w)          if i > 1.        (A.5)

For distinct a, b ∈ A, we have A ⊨ δ^l_i[ãb^j a b] if and only if i = j.
Now we are ready to show that simultaneous fixed-point inductions of formulas can be replaced by the fixed point induction of a single formula.
Definition A.7. Let

    ϕ_1(R_1, …, R_m, x_1), …, ϕ_m(R_1, …, R_m, x_m)

be formulas of LFP. As always, we let R_i be a k_i-ary relation and x_i be a k_i-tuple. Furthermore, let ϕ_1, …, ϕ_m be positive in R_1, …, R_m. Set k := max{k_1, …, k_m} + m + 1. Define a new first order formula χ_J having k variables and computing a single k-ary relation Z by

    χ_J(Z, z_1, …, z_k) := ∃v ∃w ( v ≠ w ∧
        ( (ϕ_1(Z_{ṽw^1}, …, Z_{ṽw^m}, z_1, …, z_k) ∧ δ^k_1(z_1, …, z_k, v, w))
        ∨ (ϕ_2(Z_{ṽw^1}, …, Z_{ṽw^m}, z_1, …, z_k) ∧ δ^k_2(z_1, …, z_k, v, w))
        ⋮
        ∨ (ϕ_m(Z_{ṽw^1}, …, Z_{ṽw^m}, z_1, …, z_k) ∧ δ^k_m(z_1, …, z_k, v, w)) ) ).        (A.6)

Then the relation computed by the least fixed point of χ_J contains all the individual least fixed points computed by the simultaneous induction as its sections.
Bibliography
[ACO08] D. Achlioptas and A. Coja-Oghlan. Algorithmic barriers from phase transitions. arXiv:0803.2122v2 [math.CO], 2008.
[AM00] Srinivas M. Aji and Robert J. McEliece. The generalized distribu
tive law. IEEE Trans. Inform. Theory, 46(2):325–343, 2000.
[AP04] Dimitris Achlioptas and Yuval Peres. The threshold for random k
SAT is 2
k
log 2−O(k). J. Amer. Math. Soc., 17(4):947–973 (electronic),
2004.
[ART06] Dimitris Achlioptas and Federico RicciTersenghi. On the solution
space geometry of random constraint satisfaction problems. In
STOC’06: Proceedings of the 38th Annual ACM Symposium on The
ory of Computing, pages 130–139. ACM, New York, 2006.
[AV91] Serge Abiteboul and Victor Vianu. Datalog extensions for database
queries and updates. J. Comput. Syst. Sci., 43(1):62–124, 1991.
[AV95] Serge Abiteboul and Victor Vianu. Computing with ﬁrstorder
logic. Journal of Computer and System Sciences, 50:309–335, 1995.
[BDG95] José Luis Balcázar, Josep Díaz, and Joaquim Gabarró. Structural complexity. I. Texts in Theoretical Computer Science. An EATCS Series. Springer-Verlag, Berlin, second edition, 1995.
[Bes74] Julian Besag. Spatial interaction and the statistical analysis of lattice systems. J. Roy. Statist. Soc. Ser. B, 36:192–236, 1974. With discussion by D. R. Cox, A. G. Hawkes, P. Clifford, P. Whittle, K. Ord, R. Mead, J. M. Hammersley, and M. S. Bartlett and with a reply by the author.
[BGS75] Theodore Baker, John Gill, and Robert Solovay. Relativizations of the P =? NP question. SIAM J. Comput., 4(4):431–442, 1975.
[Bis06] Christopher M. Bishop. Pattern recognition and machine learning. In
formation Science and Statistics. Springer, New York, 2006.
[BMW00] G Biroli, R Monasson, and M Weigt. A variational description
of the ground state structure in random satisﬁability problems.
PHYSICAL JOURNAL B, 568:551–568, 2000.
[CF86] MingTe Chao and John V. Franco. Probabilistic analysis of
two heuristics for the 3satisﬁability problem. SIAM J. Comput.,
15(4):1106–1118, 1986.
[CKT91] Peter Cheeseman, Bob Kanefsky, and William M. Taylor. Where
the really hard problems are. In IJCAI, pages 331–340, 1991.
[CO09] A. CojaOghlan. A better algorithm for random ksat.
arXiv:0902.3583v1 [math.CO], 2009.
[Coo71] Stephen A. Cook. The complexity of theoremproving procedures.
In STOC ’71: Proceedings of the third annual ACM symposium on The
ory of computing, pages 151–158, New York, NY, USA, 1971. ACM
Press.
[Coo06] Stephen Cook. The P versus NP problem. In The millennium prize
problems, pages 87–104. Clay Math. Inst., Cambridge, MA, 2006.
[Daw79] A. P. Dawid. Conditional independence in statistical theory. J. Roy.
Statist. Soc. Ser. B, 41(1):1–31, 1979.
[Daw80] A. Philip Dawid. Conditional independence for statistical opera
tions. Ann. Statist., 8(3):598–617, 1980.
[Deo10] Vinay Deolalikar. A distribution centric approach to constraint sat
isfaction problems. Under preparation, 2010.
[DLW95] Anuj Dawar, Steven Lindell, and Scott Weinstein. Inﬁnitary logic
and inductive deﬁnability over ﬁnite structures. Inform. and Com
put., 119(2):160–175, 1995.
[DMMZ08] Hervé Daudé, Marc Mézard, Thierry Mora, and Riccardo Zecchina. Pairs of SAT-assignments in random Boolean formulæ. Theor. Comput. Sci., 393(1-3):260–279, 2008.
[Dob68] R. L. Dobrushin. The description of a random ﬁeld by means of
conditional probabilities and conditions on its regularity. Theory
Prob. Appl., 13:197–224, 1968.
[Edm65] Jack Edmonds. Minimum partition of a matroid into independents
subsets. Journal of Research of the National Bureau of Standards, 69:67–
72, 1965.
[EF06] Heinz-Dieter Ebbinghaus and Jörg Flum. Finite model theory. Springer Monographs in Mathematics. Springer-Verlag, Berlin, enlarged edition, 2006.
[Fag74] Ronald Fagin. Generalized ﬁrstorder spectra and polynomial
time recognizable sets. In Complexity of computation (Proc. SIAM
AMS Sympos. Appl. Math., New York, 1973), pages 43–73. SIAM–
AMS Proc., Vol. VII. Amer. Math. Soc., Providence, R.I., 1974.
[Fri99] E. Friedgut. Necessary and sufﬁcient conditions for sharp thresh
olds and the ksat problem. J. Amer. Math. Soc., 12(20):1017–1054,
1999.
[FSV95] Ronald Fagin, Larry J. Stockmeyer, and Moshe Y. Vardi. On
monadic np vs. monadic conp. Inf. Comput., 120(1):78–92, 1995.
[Gai82] Haim Gaifman. On local and nonlocal properties. In Proceedings of
the Herbrand symposium (Marseilles, 1981), volume 107 of Stud. Logic
Found. Math., pages 105–135, Amsterdam, 1982. NorthHolland.
[GG84] Stuart Geman and Donald Geman. Stochastic relaxation, gibbs
distributions and the bayesian restoration of images. IEEE Trans
actions on Pattern Analysis and Machine Intelligence, 6(6):721–741,
November 1984.
[GJ79] Michael R. Garey and David S. Johnson. Computers and intractabil
ity. W. H. Freeman and Co., San Francisco, Calif., 1979. A guide
to the theory of NPcompleteness, A Series of Books in the Mathe
matical Sciences.
[GS00] Martin Grohe and Thomas Schwentick. Locality of orderinvariant
ﬁrstorder formulas. ACM Trans. Comput. Log., 1(1):112–130, 2000.
[Han65] WilliamHanf. Modeltheoretic methods in the study of elementary
logic. In Theory of Models (Proc. 1963 Internat. Sympos. Berkeley),
pages 132–145. NorthHolland, Amsterdam, 1965.
[HC71] J. M. Hammersley and P. Clifford. Markov ﬁelds on ﬁnite graphs
and lattices. 1971.
[HH76] J. Hartmanis and J. E. Hopcroft. Independence results in computer
science. SIGACT News, 8(4):13–24, 1976.
[Hod93] Wilfrid Hodges. Model theory, volume 42 of Encyclopedia of Math
ematics and its Applications. Cambridge University Press, Cam
bridge, 1993.
[Imm82] Neil Immerman. Relational queries computable in polynomial
time (extended abstract). In STOC ’82: Proceedings of the fourteenth
annual ACMsymposiumon Theory of computing, pages 147–152, New
York, NY, USA, 1982. ACM.
[Imm86] Neil Immerman. Relational queries computable in polynomial
time. Inform. and Control, 68(13):86–104, 1986.
[Imm99] Neil Immerman. Descriptive complexity. Graduate Texts in Com
puter Science. SpringerVerlag, New York, 1999.
[Kar72] R. M. Karp. Reducibility among combinatorial problems. In R. E.
Miller and J. W. Thatcher, editors, Complexity of Computer Computa
tions, pages 85–103. Plenum Press, 1972.
[KF09] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles
and Techniques. MIT Press, 2009.
[KFaL98] Frank R. Kschischang, Brendan J. Frey, and Hans-Andrea Loeliger. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47:498–519, 1998.
[KMRT+06] Florent Krzakala, Andrea Montanari, Federico Ricci-Tersenghi, Guilhem Semerjian, and Lenka Zdeborová. Gibbs states and the set of solutions of random constraint satisfaction problems. CoRR, abs/cond-mat/0612365, 2006.
[KMRT+07] Florent Krząkała, Andrea Montanari, Federico Ricci-Tersenghi, Guilhem Semerjian, and Lenka Zdeborová. Gibbs states and the set of solutions of random constraint satisfaction problems. Proc. Natl. Acad. Sci. USA, 104(25):10318–10323 (electronic), 2007.
[KS80] R. Kinderman and J. L. Snell. Markov random ﬁelds and their ap
plications. American Mathematical Society, 1:1–142, 1980.
[KS94] Scott Kirkpatrick and Bart Selman. Critical behavior in the satisﬁ
ability of random boolean formulae. Science, 264:1297–1301, 1994.
[KSC84] Harri Kiiveri, T. P. Speed, and J. B. Carlin. Recursive causal models.
J. Austral. Math. Soc. Ser. A, 36(1):30–52, 1984.
[Lau96] Steffen L. Lauritzen. Graphical models, volume 17 of Oxford Statis
tical Science Series. The Clarendon Press Oxford University Press,
New York, 1996. Oxford Science Publications.
[LDLL90] S. L. Lauritzen, A. P. Dawid, B. N. Larsen, and H.G. Leimer.
Independence properties of directed Markov ﬁelds. Networks,
20(5):491–505, 1990. Special issue on inﬂuence diagrams.
[Lev73] Leonid A. Levin. Universal sequential search problems. Problems
of Information Transmission, 9(3), 1973.
[Li09] Stan Z. Li. Markov random ﬁeld modeling in image analysis. Ad
vances in Pattern Recognition. SpringerVerlag London Ltd., Lon
don, third edition, 2009. With forewords by Anil K. Jain and Rama
Chellappa.
[Lib04] Leonid Libkin. Elements of ﬁnite model theory. Texts in Theoretical
Computer Science. An EATCS Series. SpringerVerlag, Berlin, 2004.
[Lin05] S. Lindell. Computing monadic ﬁxed points in linear
time on doubly linked data structures. available online at
http://citeseerx.ist.psu.edu/doi=10.1.1.122.1447, 2005.
[LR03] Richard Lassaigne and Michel De Rougemont. Logic and Complex
ity. SpringerVerlag, London, 2003.
[MA02] Cristopher Moore and Dimitris Achlioptas. Random ksat: Two
moments sufﬁce to cross a sharp threshold. FOCS, pages 779–788,
2002.
[MM09] Marc Mézard and Andrea Montanari. Information, physics, and computation. Oxford Graduate Texts. Oxford University Press, Oxford, 2009.
[MMW05] Elitza N. Maneva, Elchanan Mossel, and Martin J. Wainwright. A
new look at survey propagation and its generalizations. In SODA,
pages 1089–1098, 2005.
[MMW07] Elitza Maneva, Elchanan Mossel, and Martin J. Wainwright. A
new look at survey propagation and its generalizations. J. ACM,
54(4):Art. 17, 41 pp. (electronic), 2007.
[MMZ05] M. Mézard, T. Mora, and R. Zecchina. Clustering of solutions in the random satisfiability problem. Phys. Rev. Lett., 94(19):197–205, May 2005.
[Mos74] Yiannis N. Moschovakis. Elementary induction on abstract structures.
NorthHolland Publishing Co., Amsterdam, 1974. Studies in Logic
and the Foundations of Mathematics, Vol. 77.
[Mou74] John Moussouris. Gibbs and Markov random systems with con
straints. J. Statist. Phys., 10:11–33, 1974.
[MPV87] Marc Mézard, Giorgio Parisi, and Miguel Angel Virasoro. Spin glass theory and beyond, volume 9 of World Scientific Lecture Notes in Physics. World Scientific Publishing Co. Inc., Teaneck, NJ, 1987.
[MPZ02] M. Mézard, G. Parisi, and R. Zecchina. Analytic and algorithmic solution of random satisfiability problems. Science, 297(August):812–815, 2002.
[MRTS07] Andrea Montanari, Federico RicciTersenghi, and Guilhem Se
merjian. Solving constraint satisfaction problems through belief
propagationguided decimation, Sep 2007.
[MSL92] David Mitchell, Bart Selman, and Hector Levesque. Hard and easy
distributions of sat problems. In AAAI, pages 459–465, 1992.
[MZ97] Rémi Monasson and Riccardo Zecchina. Statistical mechanics of the random K-satisfiability model. Phys. Rev. E, 56(2):1357–1370, Aug 1997.
[MZ02] Marc Mézard and Riccardo Zecchina. Random k-satisfiability problem: From an analytic solution to an efficient algorithm. Phys. Rev. E, 66(5):056126, Nov 2002.
[Put65] Hilary Putnam. Trial and error predicates and the solution to a
problem of mostowski. J. Symb. Log., 30(1):49–57, 1965.
[RR97] Alexander A. Razborov and Steven Rudich. Natural proofs. J.
Comput. System Sci., 55(1, part 1):24–35, 1997. 26th Annual ACM
Symposium on the Theory of Computing (STOC ’94) (Montreal,
PQ, 1994).
[SB99] Thomas Schwentick and Klaus Barthelmann. Local normal forms
for ﬁrstorder logic with applications to games and automata. In
Discrete Mathematics and Theoretical Computer Science, pages 444–
454. Springer Verlag, 1999.
[See96] Detlef Seese. Linear time computable problems and ﬁrstorder de
scriptions. Math. Structures Comput. Sci., 6(6):505–526, 1996. Joint
COMPUGRAPH/SEMAGRAPH Workshop on Graph Rewriting
and Computation (Volterra, 1995).
[Sip92] Michael Sipser. The history and status of the p versus np question.
In STOC, pages 603–618, 1992.
[Sip97] M. Sipser. Introduction to the Theory of Computation. PWS Publishing
Company, 1997.
[Var82] Moshe Y. Vardi. The complexity of relational query languages (ex
tended abstract). In STOC ’82: Proceedings of the fourteenth annual
ACM symposium on Theory of computing, pages 137–146, New York,
NY, USA, 1982. ACM.
[Wig07] Avi Wigderson. P, NP, and Mathematics  a computational com
plexity perspective. Proceedings of the ICM 2006, 1:665–712, 2007.
ÚÊ
This work is dedicated to my late parents: my father Shri. Shrinivas Deolalikar, my mother Smt. Usha Deolalikar, and my maushi Kum. Manik Deogire, for all their hard work in raising me; and to my late grand parents: Shri. Rajaram Deolalikar and Smt. Vimal Deolalikar, for their struggle to educate my father inspite of extreme poverty. This work is part of my MatruPitru Rin1 .
I am forever indebted to my wife for her faith during these years.
1
The debt to mother and father that a pious Hindu regards as his obligation to repay in this
life.
Abstract We demonstrate the separation of the complexity class NP from its subclass P. Throughout our proof, we observe that the ability to compute a property on structures in polynomial time is intimately related to an atypical property of the space of solutions — namely, the space is parametrizable with only cpoly(log n) , c > 1 parameters instead of the typical cn parameters required for a joint distribution of n covariates. This type of exponentially smaller parametrization arises as a result of severe limitations placed on the interaction between the variates. In particular, it may arise from range limited interactions where variates interact at short ranges, and chain together such interactions to create long range interactions. Such long range interactions then would be characterized by the statistical notions of conditional independence and sufﬁcient statistics. The presence of conditional independencies manifests in the form of economical parametrizations of the joint distribution of covariates. Likewise, such economical parametrizations can arise from interactions which take only cpoly(log n) many values. In both cases, the result on the joint distribution is the same — it is parametrizable with only cpoly(log n) independent parameters. In order to apply this analysis to the space of solutions of random constraint satisfaction problems, we utilize and expand upon ideas from several ﬁelds spanning logic, statistics, graphical models, random ensembles, and statistical physics. We begin by introducing the requisite framework of graphical models for a set of interacting variables. We focus on the correspondence between Markov and Gibbs properties for directed and undirected models as reﬂected in the factorization of their joint distribution, and the number of independent parameters required to specify the distribution. Next, we build the central contribution of this work. We show that there are fundamental conceptual relationships between polynomial time computation,
we encode kSAT formulae as structures on which FO(LFP) captures polynomial time. Distributions computed by LFP must satisfy this model. The Hamming distance between a solution that lies in one cluster and that in another is O(n). Then we relate complex ﬁxed points to value limited interactions. an arbitrarily large fraction of all variables in cores freeze within exponentially many clusters in the thermodynamic limit. as the clause density is increased towards the SATunSAT threshold. In particular. known as the d1RSB phase. we build distributions of solutions. We recollect the description of the clustered phase for random kSAT that arises when the clause density is sufﬁciently high and k ≥ 9. and poly(log n)parametrization. . Note that the onset of this phase is rigorously proven only for k ≥ 9. We then use results from ensembles of factor graphs of random kSAT to bound the various information ﬂows in this directed graphical model. In this phase. monadic LFP is a range limited interaction model that possesses certain directed Markov properties that may be stated in terms of conditional independence and sufﬁcient statistics. Next. we exploit the limitation that ﬁrst order logic can only express properties in terms of a bounded number of local neighborhoods of the underlying structure. and then utilize the limitations of ﬁrst order logic. which allows us to compute factorizations locally and parameterize using Gibbs potentials on cliques. By asking FO(LFP) to extend partial assignments on ensembles of random kSAT. We parametrize the resulting distributions in a manner that demonstrates that irreducible interactions between covariates — namely. which again result in poly(log n)parametrization. Next we introduce ideas from the 1RSB replica symmetry breaking ansatz of statistical physics. we view the LFP computation as “factoring through” several stages of ﬁrst order computations. In order to demonstrate these relationships.which is completely captured by the logic FO(LFP) on classes of successor structures. We then construct a dynamic graphical model on a product space that captures all the information ﬂows through the various stages of a LFP computation on ensembles of kSAT structures. This model is directed. and it is here that we will demonstrate our separation. Speciﬁcally.
For value limited complex LFP.those that may not be factored any further through conditional independencies — cannot grow faster than poly(log n) in the range limited monadic LFP computed distributions. and demonstrates the separation of P from NP. Using the aforementioned limitations of LFP. This allows us to analyze the behavior of the entire class of polynomial time algorithms on ensembles simultaneously. Our work shows that every polynomial time algorithm must fail to produce solutions to large enough problem instances of kSAT in the d1RSB phase. This shows that polynomial time algorithms are not capable of solving NPcomplete problems in their hard phases. we demonstrate that a purported polynomial time solution to kSAT would result in solution space that is a mixture of distributions each having an exponentially smaller parametrization than is consistent with the highly constrained d1RSB phases of kSAT. we show how to obtain a parametrization of the solution space by merging potentials with scope O(n). We show that this would contradict the behavior exhibited by the solution space in the d1RSB phase. . This corresponds to the intuitive picture provided by physics about the emergence of extensive (meaning O(n)) longrange correlations between variables in this phase and also explains the empirical observation that all known polynomial time algorithms break down in this phase.
. . . . . . . . . . . . .1 4. . . . . . . . . . . . . . . . . 3 Distributions with poly(log n)Parametrization 3. 1 . . . . . . . .3 2. . . . . . . . .Contents 1 Introduction 1. . . . . . . Imaps and Dmaps . .1 Two Kinds of poly(log n)parameterizations . . . . . . . . . . . . . . . . 2.1 3.1. . .3 3. . . . . . Conditional Independence in Undirected Graphical Models . .1 2. . . . .2 3. . . . . .2 Conditional Independence . . The MarkovGibbs Correspondence for Directed Models . . . . . . . . . . 3. . . Factor Graphs . .1. . . . . . . . . . . . . . . .2. . . . . . . . . . . . . . . . . . Value Limited Interactions . . 4 Logical Descriptions of Computations 4. 3 5 15 15 17 21 24 26 29 30 32 32 35 38 38 40 41 44 Interaction Models and Conditional Independence 2. . . . . Our Treatment of Range and Value Limited Distributions . . . . . . . .1. . . . . . . . . . .4 2. . . . . . . . . . . . . . On the Atypical Nature of poly(log n)parameterization . . . . . .4 Range Limited Interactions . . . . . . . . . . . . .2 Inductive Deﬁnitions and Fixed Points . . .1 2. . . .1 2 Synopsis of Proof . . . . . . . . . . . . . .1. . . . . . . .5 Gibbs Random Fields and the HammersleyClifford Theorem . . . . . . . . Fixed Point Logics for P and PSPACE . . . . .
5. . . . . . . . . . . 6. .1 8. . . . . . . . . . . . . . . . . .4 6 The Limitations of LFP . . . . . . . . 8 Separation of Complexity Classes 8. .1 The Transitivity Theorem for LFP . . Aggregate Properties of LFP over Ensembles .1 6. . . . . . . . .4 8. . . Performance of Known Algorithms in the d1RSB Phase . . 106 2 . . . . . . . 8. . . . . . . . . .1. 7 Random Graph Ensembles 7. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Some Perspectives . The d1RSB Phase . . . . . . . . . 105 A. . . . Degree Proﬁles in Random Graphs . .3 8. . . .2. .1 8. 48 50 51 55 60 62 64 64 66 68 71 74 75 75 76 78 78 80 80 83 85 87 93 96 The 1RSB Ansatz of Statistical Physics 6. . . . . . .2 Locally TreeLike Property . . . . . . . . . . . . . . . .2 Cores and Frozen Variables . . . . . . .2 Ensembles and Phase Transitions . . . . . . . . . . . .3 8. . . . . . Separation .5 8. . . . . . . . . . 7. . . . . . . . . . . . . . . . . . Conditional Independence in Complex Fixed Points . . . .2. . . . . .1 Locality of First Order Logic . 103 105 A Reduction to a Single LFP Operation A. . . . . . . . .2 5 The Link Between Polynomial Time Computation and Conditional Independence 5. . . . . . . .2. . . . . . . . . . . . . . .6 Encoding kSAT into Structures . . . . . . .2 Sections and the Simultaneous Induction Lemma for LFP . . . . . . . . . . . . . The LFP Neighborhood System . . Simple Monadic LFP and Conditional Independence . . . . . . . . . . .2 Measuring Conditional Independence in Range Limited Models . . Parametrization of the ENSP . . . . . . . . . . . . . . .2 5. . .1 5. . . . . . .1. . . . . . . . . . .2. . . . . . . . . . . . . . . . . Generating Distributions .1 7. . . . . . . . . . . . . . . . . . . . . . . . . . . . Disentangling the Interactions: The ENSP Model . . . .2 8. . . . . .3 5. . . . . . . . . . . . .1 6. Generating Distributions from LFP .1 Properties of Factor Graph Ensembles . . . .2. . . . .1. .
If. would be profound. If P = NP. and H AMILTONIAN C IRCUIT. Later. since every one of these problems would have a polynomial time solution. Cook [Coo71]. This shifted the focus to methods us? ? ? 3 . and on the general philosophical question of whether human creativity can be automated. Introduction The P = NP question is generally considered one of the most important and far reaching questions in contemporary mathematics and computer science. we could never solve these problems efﬁciently. The CookLevin theorem showed the existence of complete problems for this class. the consequences would be even more stunning. and Levin [Lev73]. which include T RAVELLING S ALESMAN.1. on the other hand P = NP. C LIQUE. From the initial question in logic. many problems central to diverse areas of application were shown to be NPcomplete (see [GJ79] for a list). were also NPcomplete. The P = NP question is also singular in the number of approaches that researchers have brought to bear upon it over the years. the focus moved to complexity theory where early work used diagonalization and relativization techniques. ¨ The origin of the question seems to date back to a letter from Godel to Von Neumann in 1956 [Sip92]. In subsequent years. However. Formal deﬁnitions of the class NP awaited work by Edmonds [Edm65]. and demonstrated that SAT– the problem of determining whether a set of clauses of Boolean literals has a satisfying assignment – was one such problem. Karp [Kar72] showed that twentyone well known combinatorial problems. [BGS75] showed that these methods were perhaps inadequate to resolve P = NP by demonstrating relativized worlds in which P = NP and others in which P = NP (both relations for the appropriately relativized classes). The implications of this on applications such as cryptography.
This is the area of descriptive complexity theory — the branch of ﬁnite model theory that studies the expressive power of various logics viewed through the lens of complexity theory. and also to the negative results mentioned above. the question might be independent of standard axioms of set theory. there has been speculation that resolving the P = NP question might be outside the domain of mathematical techniques. etc. More precisely. The reader is referred to [Coo06] for an introduction which also serves as the ofﬁcial problem description for the Clay Millenium Prize. provided oneway functions exist. also contain accounts of the problem and attempts made to resolve it. INTRODUCTION 4 ing circuit complexity and for a while this approach was deemed the one most likely to resolve the question. NP. and notions of reductions and completeness for complexity classes. The inﬂuence of the P = NP question is felt in other areas of mathematics. Owing to the difﬁculty of resolving the question. Once again. PSPACE. Preliminaries and Notation Treatments of standard notions from complexity theory. An older excellent review is [Sip92]. See [Wig07] for a more recent introduction. a negative result in [RR97] showed that a class of techniques known as “Natural Proofs” that subsumed the above could not separate the classes NP and P. We mention one of these. and complexity theory in particular. 4 ? ? ? ? . such as deﬁnitions of the complexity classes P. There are several introductions to the P = NP question and the enormous amount of research that it has produced. See the books [Sip97] and [BDG95] for standard references. The ﬁrst such results in [HH76] show that some relativized versions of the P = NP question are independent of reasonable formalizations of set theory. BDG95]. Later.1. characterizations of the classes P [Imm86]. This ﬁeld began with the result [Fag74] that showed that NP corresponds to queries that are expressible in second order existential logic over ﬁnite structures. [Var82] and PSPACE over ordered structures were also obtained. may be found in [Sip97. Most books on theoretical computer science in general. since it is central to our work.
For an engaging introduction. please see [Bis06. these may be the variables in a kSAT instance that interact with each other through the clauses present in the kSAT formula. and affect the values each other may take. or n Ising spins that interact with each other in a ferromagnet. INTRODUCTION 5 Our work will span various developments in three broad areas. While we have endeavored to be relatively complete in our treatment. ﬁrst order language. An earlier text is [MPV87]. Through their interaction. etc. Additional references to results will be provided within the chapters. For an early treatment in statistical mechanics of Markov random ﬁelds and Gibbs distributions. For example. in the order in which they appear in the work. For ease of presentation. Ch. The proof centers on the study of logical and algorithmic constructs where 5 . Lib04] for excellent treatments of ﬁnite model theory and [Imm99] for descriptive complexity. Preliminaries from logic. we recommend [MM09]. 1. vocabulary. Standard references for graphical models include [Lau96] and the more recent [KF09]. may be obtained from any standard text on logic such as [Hod93]. we will assume our variables are binary. Given this. This represents the majority of the effort that went into constructing the proof. In particular. models. see [KS80].. variables exert an inﬂuence on each other. we felt that it would be beneﬁcial to explain the various stages of the proof. such as notions of structure. 8]. For a treatment of the statistical physics approach to random CSPs. we refer to [EF06.1 Synopsis of Proof This proof requires a convergence of ideas and an interplay of principles that span several areas within mathematics and physics. and highlight their interplay. we feel it would be helpful to provide standard textual references for these areas.1. Consider a system of n interacting variables such as is ubiquitous in mathematical sciences. The technical details of each stage are described in subsequent chapters.
What constitutes a simple description of the interaction of n variables? The number of independent parameters required to specify the joint distribution is a measure of the complexity of interactions between the covariates. We will study distributions on n covariates that require only 2^poly(log n) parameters to specify. We will call such distributions poly(log n)-parametrizable. We will see that such distributions are at the heart of polynomial time computability.

There are two components to this. The first measures correlations, and the second measures "ampleness" under those correlations. A distribution over n covariates is defined to be ample when it is supported on c^n, c > 1 points. This is best explained with two examples.

Consider first the uniform distribution over all binary pairs {(0, 0), (0, 1), (1, 0), (1, 1)}. There is no correlation between the two variables in this distribution. They are independent. In this distribution, the two parameters are the probability of the first variate and the probability of the second variate taking the value 1. With this much information, we can specify the joint distribution since the variates are independent.

Consider next the distribution over 5 covariates which is uniformly supported only on the all-zero tuple (0, 0, 0, 0, 0) and the all-one tuple (1, 1, 1, 1, 1). In this distribution, the covariates are tightly correlated. Here, we again need two parameters to specify the distribution — namely, the two points on which it is supported.

Though initially these two distributions appear quite different, there is a commonality: both can be specified with just two parameters. Though both distributions have simple descriptions, the reasons are very different. In the first example, the variates are independent. In the second example, the covariates are tightly correlated, but the distribution is not "ample". Conversely, in hard phases of constraint satisfaction problems such as kSAT, the space of solutions is both correlated and ample. This causes all polynomial time algorithms to fail on them.
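As a minimal illustrative sketch (ours, not part of the original text), the two toy distributions above can be written down explicitly and compared with the generic parameter count for n binary covariates; the thresholds and functions below are purely for illustration.

    from itertools import product

    def generic_parameter_count(n):
        # one probability per configuration, minus one for normalization
        return 2 ** n - 1

    def independent_distribution(p1, p2):
        # Example 1: two independent bits; the two parameters are P(X1=1), P(X2=1)
        return {(x1, x2): (p1 if x1 else 1 - p1) * (p2 if x2 else 1 - p2)
                for x1, x2 in product((0, 1), repeat=2)}

    def two_point_distribution(a, b):
        # Example 2: uniform on just two support points; the "parameters" are the points themselves
        n = len(a)
        return {x: (0.5 if x in (a, b) else 0.0) for x in product((0, 1), repeat=n)}

    print(generic_parameter_count(5))                        # 31 for a generic joint on 5 bits
    print(independent_distribution(0.5, 0.5))                # ample but uncorrelated
    print(two_point_distribution((0,) * 5, (1,) * 5)[(1,) * 5])  # correlated but not ample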
A distribution is simple to describe if there is either independence between the variates (as was the case in our first example) or limited support of the distribution (as was the case in our second example). We call the first case a range limited interaction because variates interact with a limited range of other variates. The second case is called value limited since the number of joint values the variates can take is limited. The common feature underlying both cases is that the distribution has a very economical parametrization as compared to "true" joint distributions (more precisely, statistically typical joint distributions) on n covariates, which require O(2^n) parameters to specify. Thus, we wish to study such distributions, and will consider both the cases of range and value limited interactions.

At this point, we visit the topic of graphical interaction models and conditional independence, which is a manifestation of range limited interactions. While complete independence between variates in a complex system is rare, conditional independence between blocks of variables is fairly frequent. We see that factorization into conditionally independent pieces manifests in terms of economical parametrizations of the joint distribution. Graphical models offer us a way to measure the size of these interactions.

The factorization of interactions can be represented by a corresponding factorization of the joint distribution of the variables over the space of configurations of the n variables subject to the constraints of the problem. It has been realized in the statistics and physics communities for a long time that certain multivariate distributions decompose into the product of a few types of factors, with each factor itself having only a few variables. Such a factorization of joint distributions into simpler factors can often be represented by graphical models whose vertices index the variables. A factorization of the joint distribution according to the graph implies that the interactions between variables can be factored into a sequence of "local interactions" between vertices that lie within neighborhoods of each other.

Consider the case of an undirected graphical model. The factoring of interactions may be stated in terms of either a Markov property, or a Gibbs property
with respect to the graph. Speciﬁcally, the local Markov property of such models states that the distribution of a variable is only dependent directly on that of its neighbors in an appropriate neighborhood system. Of course, two variables arbitrarily far apart can inﬂuence each other, but only through a sequence of successive local interactions. The global Markov property for such models states that when two sets of vertices are separated by a third, this induces a conditional independence on variables corresponding to these sets of vertices, given those corresponding to the third set. On the other hand, the Gibbs property of a distribution with respect to a graph asserts that the distribution factors into a product of potential functions over the maximal cliques of the graph. Each potential captures the interaction between the set of variables that form the clique. The HammersleyClifford theorem states that a positive distribution having the Markov property with respect to a graph must have the Gibbs property with respect to the same graph. The condition of positivity is essential in the HammersleyClifford theorem for undirected graphs. However, it is not required when the distribution satisﬁes certain directed models. In that case, the Markov property with respect to the directed graph implies that the distribution factorizes into local conditional probability distributions (CPDs). Furthermore, if the model is a directed acyclic graph (DAG), we can obtain the Gibbs property with respect to an undirected graph constructed from the DAG by a process known as moralization. We will return to the directed case shortly. Chapter 2 develops the principles underlying the framework of graphical models. We will not use any of these models in particular, but construct another directed model on a larger product space that utilizes these principles and tailors them to the case of least ﬁxed point logic, which we turn to next. At this point, we change to the setting of ﬁnite model theory. Finite model theory is a branch of mathematical logic that has provided machine independent characterizations of various important complexity classes including P, NP, and PSPACE. In particular, the class of polynomial time computable queries on successor structures has a precise description — it is the class of queries
expressible in the logic FO(LFP), which extends first order logic with the ability to compute least fixed points of positive first order formulae. Least fixed point constructions iterate an underlying positive first order formula, thereby building up a relation in stages. We take a geometric picture of a monadic LFP computation. Initially the relation to be built is empty. At the first stage, certain elements, whose types satisfy the first order formula, enter the relation. This changes the neighborhoods of these elements, and therefore in the next stage, other elements (whose neighborhoods have been thus changed in the previous stages) become eligible for entering the relation. The positivity of the formula implies that once an element is in the relation, it cannot be removed, and so the iterations reach a fixed point in a polynomial number of steps. Importantly from our point of view, the positivity and the stagewise nature of LFP mean that the computation has a directed representation on a graphical model that we will construct. Recall at this stage that distributions over directed models enjoy factorization even when they are not defined over the entire space of configurations.

We may interpret this as follows: monadic LFP relies on the assumption that variables that are highly entangled with each other due to constraints can be disentangled in a way that they now interact with each other through conditional independencies induced by a certain directed graphical model construction. Of course, an element does influence others arbitrarily far away, but only through a sequence of such successive local and bounded interactions. The reason LFP computations terminate in polynomial time is analogous to the notions of conditional independence that underlie efficient algorithms on graphical models having sufficient factorization into local interactions.

In order to apply this picture in full generality to all LFP computations, we use the simultaneous induction lemma to push all simultaneous inductions into nested ones, and then employ the transitivity theorem to encode nested fixed points as sections of a single relation of higher arity. We then see that this is the case of a value limited interaction between O(n) variates. Namely, although n variates interact with each other, they do not take c^n joint values.
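The stagewise picture above can be illustrated with a small sketch (ours, not from the text); the choice of reachability as the underlying positive formula phi(x, R) := (x = source) OR exists y (E(y, x) AND R(y)) is purely for illustration. Positivity means elements only ever enter the relation, so the iteration stabilizes in at most |vertices| stages.

    def lfp_stages(vertices, edges, source):
        R = set()                      # the relation being built, initially empty
        stages = []
        while True:
            new_R = {x for x in vertices
                     if x == source or any((y, x) in edges and y in R for y in vertices)}
            if new_R == R:             # fixed point reached
                return R, stages
            stages.append(new_R - R)   # elements entering at this stage
            R = new_R

    verts = range(6)
    edges = {(0, 1), (1, 2), (2, 3), (4, 5)}
    fixpoint, stages = lfp_stages(verts, edges, source=0)
    print(fixpoint)   # {0, 1, 2, 3}
    print(stages)     # [{0}, {1}, {2}, {3}] -- one new element per stage here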
Building the machinery that can precisely map all these cases to the picture of either factorization into range limited or value limited interactions is the subject of Chapter 5.

The preceding insights now direct us to the setting necessary in order to separate P from NP. We need a regime of NP-complete problems where interactions between variables have the following two properties.

1. They are so "dense" that they cannot be factored through the bottleneck of the local and bounded properties of first order logic that limit each stage of LFP computation. Intuitively, this should happen when each variable has to simultaneously satisfy constraints involving an extensive (O(n)) fraction of the variables in the problem.

2. The distribution is ample. Namely, it takes c^n joint values, and blocks of n variables are instantiated c^n distinct ways under these strong correlations.

In search of regimes where such situations arise, we turn to the study of ensemble random kSAT, where the properties of the ensemble are studied as a function of the clause density parameter. In the past two decades, the phase changes in the solution geometry of random kSAT ensembles as the clause density increases have gathered much research attention. We will now add ideas from this field, which lies on the intersection of statistical mechanics and computer science, to the set of ideas in the proof.

The 1RSB ansatz of statistical mechanics says that the space of solutions of random kSAT shatters into exponentially many clusters of solutions when the clause density is sufficiently high. This phase is called d1RSB (1-Step Dynamic Replica Symmetry Breaking) and was conjectured by physicists as part of the 1RSB ansatz. It has since been rigorously proved for high values of k. It demonstrates the properties of high correlation between large sets of variables that we will need.
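For concreteness, here is a small sketch (ours) of the random kSAT ensemble parametrized by the clause density alpha = m/n; for tiny n one can enumerate the solution space directly, whereas the phases discussed in the text concern the n -> infinity limit.

    import random
    from itertools import product

    def random_ksat(n, alpha, k=3, rng=random):
        m = int(alpha * n)                       # number of clauses at density alpha
        formula = []
        for _ in range(m):
            vars_ = rng.sample(range(n), k)      # k distinct variables per clause
            formula.append([(v, rng.choice((True, False))) for v in vars_])  # (variable, negated?)
        return formula

    def satisfying_assignments(n, formula):
        sols = []
        for x in product((0, 1), repeat=n):
            if all(any((x[v] == 0) if neg else (x[v] == 1) for v, neg in clause)
                   for clause in formula):
                sols.append(x)
        return sols

    f = random_ksat(n=12, alpha=3.0)
    print(len(satisfying_assignments(12, f)))    # size of the solution space of this instance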
Specifically, as the clause density is increased towards the SAT-unSAT threshold, we will need the following two features of this phase.

1. The emergence of cores that are sets of C clauses all of whose variables lie in a set of size C (this actually forces C to be O(n)).

2. The variables in these cores "freeze." Namely, they take the same value throughout the cluster. Changing the value of a variable within a cluster necessitates changing O(n) other variables in order to arrive at another satisfying solution, which would be in a different cluster.

As the clause density is increased, each cluster collapses steadily towards a single solution that is maximally far apart from every other cluster. Physicists think of this as an "energy gap" between the clusters. Furthermore, as the clause density increases above the SAT-unSAT threshold, the solution space vanishes, and the underlying instance of SAT is no longer satisfiable.

Such phases are precisely the ones that we need since they possess the following two properties.

1. Due to strong O(n) correlations that cannot be factored through conditional independencies, they resist attack by local and bounded first order stages of a monadic LFP computation.

2. Due to their ampleness, which arises from their instantiations in exponentially many clusters, they resist attack by complex fixed points that produce value limited distributions.

We specifically prove that the d1RSB phase is out of reach for polynomial time algorithms. We should stress that the picture described above is known to hold in the case of random kSAT only for k ≥ 9. For lower values of k, such as k = 3, there is empirical evidence that this picture does not hold. In other words, the "true" d1RSB phase arises in random kSAT for k ≥ 9 as the clause density rises above (2^k / k) ln k, and this phase is only reached at k ≥ 9. Therefore, our proof does not say anything about the efficacy of various algorithms for 3SAT, for instance. Since we need all the known properties of the d1RSB phase, we will work in this regime.

We reproduce the rigorously proved picture of the 1RSB ansatz that we will need in Chapter 6. In Chapter 7, we make a brief excursion into the random graph theory of
the factor graph ensembles underlying random kSAT. We obtain results that asymptotically almost surely upper bound the size of the largest cliques in the neighborhood systems on the Gaifman graphs that we study later when we build models for the range limited interactions that occur during monadic LFP. These provide us with bounds on the largest irreducible interactions between variables during the various stages of a LFP computation, specifically that neighborhoods that occur during the LFP computation are of size poly(log n) asymptotically almost surely in the n → ∞ limit.

Finally, in Chapter 8, we pull all the threads and machinery together. First, we encode kSAT instances as queries on structures over a certain vocabulary in a way that LFP captures all polynomial time computable queries on them. We then set up the framework whereby we can generate distributions of solutions to each instance by asking a purported LFP algorithm for kSAT to extend partial assignments on variables to full satisfying assignments.

Next, we embed the space of covariates into a larger product space which allows us to "disentangle" the flow of information during a LFP computation. We call this the Element-Neighborhood-Stage Product, or ENSP model. This model is only polynomially larger than the structure itself. This allows us to study the computations performed by the LFP with various initial values under a directed graphical model. At this point, the distribution of solutions generated by LFP is a mixture of distributions each of whom factors according to an ENSP.

From here, we wish to measure the growth of independent parameters of distributions of solutions whose embeddings into the larger product space factor over the ENSP. In order to do so, we utilize the following properties for range limited models.

1. The locality and boundedness properties of FO that put constraints upon each individual stage of the LFP computation.

2. The directed nature of the model that comes from properties of LFP.

3. The properties of neighborhoods that are obtained by studies on random graph ensembles.
4. Simple properties of LFP, such as the closure ordinal being a polynomial in the structure size.

The crucial property that allows us to analyze mixtures of range limited distributions that factor according to some ENSP is that we can parametrize the distribution using potentials on cliques of its moralized graph that are of size at most poly(log n). We build a technique that merges various O(n) potentials that are poly(log n)-parametrizable into a single potential that is also poly(log n)-parametrizable and covers the entire graphical model (that has poly(n) variables), thereby giving us poly(log n)-parametrization. This means that when the mixture is exponentially numerous, we would have conditionally independent variation between blocks of poly(log n) variables. In other words, in exponentially numerous mixtures of range limited models, solutions for kSAT that are constructed using range limited LFP will display aggregate behavior that reflects that they are constructed out of "building blocks" of size poly(log n). This behavior will manifest when exponentially many solutions are generated by the LFP construction. In particular, we will see features that reflect the poly(log n) factor size of the conditionally independent parametrization.

Next, we come to value limited models. Here, interactions are of size O(n), but they are limited to only c^poly(log n) values. We show how to deal with mixtures of value limited models.

Now we close the loop and show that a distribution of solutions for kSAT constructed by any purported LFP algorithm (monadic or complex) would not have enough parameters to describe the known picture of kSAT in the d1RSB phase for k ≥ 9 — namely, the presence of extensive frozen variables in exponentially many clusters, where interactions are of size O(n), causing the Hamming distance between the clusters to be of this order as well. The case of value limited LFP also leads to a contradiction since it would be unable to explain the exponentially many cluster instantiations of cores that are present in the d1RSB phase. This shows that LFP cannot express the satisfiability query in the d1RSB phase for high enough k, and separates P from NP. This also explains the
empirical observation that all known polynomial time algorithms fail in the d1RSB phase for high values of k, and also establishes on rigorous principles the physics intuition about the onset of extensive long range correlations in the d1RSB phase that causes all known polynomial time algorithms to fail. It also completes this picture, since it says that extensive O(n) correlations that (a) cannot factor through conditional independencies and (b) are amply instantiated, are the source of failure of polynomial time algorithms.
2. Interaction Models and Conditional Independence

Systems involving a large number of variables interacting in complex ways are ubiquitous in the mathematical sciences. These interactions induce dependencies between the variables. Because of the presence of such dependencies in a complex system with interacting variables, it is not often that one encounters independence between variables. However, one frequently encounters conditional independence between sets of variables. Both independence and conditional independence among sets of variables have been standard objects of study in probability and statistics. Speaking in terms of algorithmic complexity, one often hopes that by exploiting the conditional independence between certain sets of variables, one may avoid the cost of enumeration of an exponential number of hypotheses in evaluating functions of the distribution that are of interest.

2.1 Conditional Independence

We first fix some notation. Throughout this work, we assume our random variables to be discrete unless stated otherwise. We may also assume that they take values in a common finite state space, which we usually denote by Λ following physics convention. Random variables will be denoted by upper case letters such as X, Y, Z, etc. The values a random variable takes will be denoted by the corresponding lower case letters, such as x, y, z. We denote the probability mass functions of discrete random variables X, Y, Z by P_X(x), P_Y(y), P_Z(z) respectively. Similarly, P_{X,Y}(x, y) will denote the joint mass of (X, Y), and so on.
We drop subscripts on the P when it causes no confusion. We freely use the term "distribution" for the probability mass function.

Recall that X is independent of Y if P(x, y) = P(x)P(y). The notion of conditional independence is central to our proof. The intuitive definition of the conditional independence of X from Y given Z is that the conditional distribution of X given (Y, Z) is equal to the conditional distribution of X given Z alone. This means that once the value of Z is given, no further information about the value of X can be extracted from the value of Y. The asymmetric version, which says that the information contained in Y is superfluous to determining the value of X once the value of Z is known, may be represented as

P(x | y, z) = P(x | z).

This is an asymmetric definition, and can be replaced by the following symmetric definition.

Definition 2.1. Let notation be as above. X is conditionally independent of Y given Z, written X ⊥ Y | Z, if

P(x, y | z) = P(x | z) P(y | z).

The notion of conditional independence pervades statistical theory [Daw79, Daw80]. Several notions from statistics may be recast in this language.

Example 2.2. The notion of sufficiency may be seen as the presence of a certain conditional independence [Daw79]. A sufficient statistic T in the problem of parameter estimation is that which renders the estimate of the parameter independent of any further information from the sample X. Thus, if Θ is the parameter to be estimated, then T is a sufficient statistic if

P(θ | x) = P(θ | t).

Thus, all there is to be gained from the sample in terms of information about Θ is already present in T alone. In particular, if Θ is a posterior that is being computed by Bayesian inference, then the above relation says that the posterior depends on the data X through the value of T alone.
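As an illustrative sketch (ours, not from the text), the symmetric condition of Definition 2.1 can be checked numerically on a small joint distribution; the particular noisy-copy construction below is just an example in which X ⊥ Y | Z holds.

    def conditionally_independent(p, tol=1e-12):
        # p: dict mapping (x, y, z) -> probability; checks P(x,y,z) P(z) = P(x,z) P(y,z)
        def marg(pred):
            return sum(v for k, v in p.items() if pred(k))
        for x, y, z in p:
            pz  = marg(lambda k: k[2] == z)
            pxz = marg(lambda k: k[0] == x and k[2] == z)
            pyz = marg(lambda k: k[1] == y and k[2] == z)
            if abs(p[(x, y, z)] * pz - pxz * pyz) > tol:
                return False
        return True

    # X and Y are independent noisy copies of Z, so X ⊥ Y | Z holds by construction.
    joint = {}
    for z in (0, 1):
        for x in (0, 1):
            for y in (0, 1):
                joint[(x, y, z)] = 0.5 * (0.9 if x == z else 0.1) * (0.8 if y == z else 0.2)

    print(conditionally_independent(joint))   # True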
2.2 Conditional Independence in Undirected Graphical Models

Graphical models offer a convenient framework and methodology to describe and exploit conditional independence between sets of variables in a system. There are, broadly, two kinds of graphical models: directed and undirected. We first consider the case of undirected models.

In general, we will consider graphs G = (V, E) whose n vertices index a set of n random variables (X_1, . . . , X_n). The random variables all take their values in a common state space Λ. The random vector (X_1, . . . , X_n) then takes values in a configuration space Ω_n = Λ^n. We will denote values of the random vector (X_1, . . . , X_n) simply by x = (x_1, . . . , x_n). The notation X_{V\I} will denote the set of variables excluding those whose indices lie in the set I. Let P be a probability measure on the configuration space. We will study the interplay between conditional independence properties of P and its factorization properties.

One may think of the graphical model as representing the family of distributions whose law fulfills the conditional independence statements made by the graph. A member of this family may satisfy any number of additional conditional independence statements, but not less than those prescribed by the graph. Fig. 2.1 illustrates an undirected graphical model with ten variables.

Random Fields and Markov Properties

Graphical models are very useful because they allow us to read off conditional independencies of the distributions that satisfy these models from the graph itself. Recall that we wish to study the relation between conditional independence of a distribution with respect to a graphical model, and its factorization.
Figure 2.1: An undirected graphical model. Each vertex represents a random variable. The vertices in set A are separated from those in set B by set C. For random variables to satisfy the global Markov property relative to this graphical model, the corresponding sets of random variables must be conditionally independent: A ⊥ B | C.

Towards that end, one may write increasingly stringent conditional independence properties that a set of random variables satisfying a graphical model may possess. In order to state these, we first define two graph theoretic notions — those of a general neighborhood system, and of separation.

Definition 2.3. Given a set of variables S known as sites, a neighborhood system N_S on S is a collection of subsets {N_i : 1 ≤ i ≤ n} indexed by the sites in S that satisfy

1. a site is not a neighbor to itself (this also means there are no self-loops in the induced graph): s_i ∉ N_i, and

2. the relationship of being a neighbor is mutual: s_i ∈ N_j ⇔ s_j ∈ N_i.

In many applications, the sites are vertices on a graph, and the neighborhood system N_i is the set of neighbors of vertex s_i on the graph. We will often be interested in homogeneous neighborhood systems of S on a graph in which, for
each s_i ∈ S, the neighborhood N_i is defined as

G_i := {s_j ∈ S : d(s_i, s_j) ≤ r}.

Namely, in such neighborhood systems, the neighborhood of a site is simply the set of sites that lie in the radius r ball around that site. Note that a nearest neighbor system that is often used in physics is just the case of r = 1. We will need to use the general case, where r will be determined by considerations from logic that will be introduced in the next two chapters.

Now we return to the case of the vertices indexing random variables (X_1, . . . , X_n), with the random vector (X_1, . . . , X_n) taking values in a configuration space Ω_n. A probability measure P on Ω_n is said to satisfy certain Markov properties with respect to the graph when it satisfies the appropriate conditional independencies with respect to that graph.

Definition 2.4. Let A, B, C be three disjoint subsets of the vertices V of a graph G. The set C is said to separate A and B if every path from a vertex in A to a vertex in B must pass through C.

Definition 2.5. We will study the following two Markov properties.

1. The local Markov property. The distribution of X_i (for every i) is conditionally independent of the rest of the graph given just the variables that lie in the neighborhood of the vertex. In other words, the influence that variables exert on any given variable is completely described by the influence that is exerted through the neighborhood variables alone.

2. The global Markov property. For any disjoint subsets A, B, C of V such that C separates A from B in the graph, it holds that A ⊥ B | C.

We are interested in distributions that do satisfy such properties, and will examine what effect these Markov properties have on the factorization of the
distributions. For most applications, this is done in the context of Markov random fields.

We motivate a Markov random field with the simple example of a Markov chain {X_n : n ≥ 0}. The Markov property of this chain is that any variable in the chain is conditionally independent of all other variables in the chain given just its immediate neighbors:

X_n ⊥ {X_k : k ∉ {n − 1, n, n + 1}} | X_{n−1}, X_{n+1}.

A Markov random field is the natural generalization of this picture to higher dimensions and more general neighborhood systems.

Definition 2.6. The collection of random variables X_1, . . . , X_n is a Markov random field with respect to a neighborhood system on G if and only if the following two conditions are satisfied.

1. The distribution is positive on the space of configurations: P(x) > 0 for x ∈ Ω_n.

2. The distribution at each vertex is conditionally independent of all other vertices given just those in its neighborhood:

P(X_i | X_{V\i}) = P(X_i | X_{N_i}).

These local conditional distributions are known as local characteristics of the field.

The second condition says that Markov random fields satisfy the local Markov property with respect to the neighborhood system. This may be interpreted as: the influence of far away variables is limited to that which is transmitted through the interspersed intermediate variables — there is no "direct" influence of far away vertices beyond that which is factored through such intermediate interactions. Namely, the influence of far away vertices must "factor through" local interactions. Thus, we can think of interactions between variables in Markov random fields as being characterized by "piecewise local" interactions.
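As a small sketch (ours), the local Markov property of the chain example can be verified numerically: conditioning X_1 on its neighborhood {X_0, X_2} and additionally on X_3 gives the same law. The chain below, with an arbitrary flip probability, is only an illustration.

    from itertools import product

    def chain_joint(p_flip=0.2):
        # X0 uniform; each later variable copies its predecessor with probability 1 - p_flip
        joint = {}
        for x in product((0, 1), repeat=4):
            p = 0.5
            for a, b in zip(x, x[1:]):
                p *= (1 - p_flip) if a == b else p_flip
            joint[x] = p
        return joint

    def cond(joint, target_idx, given):
        # P(X_target | given), where given = dict {index: value}
        num, den = {}, 0.0
        for x, p in joint.items():
            if all(x[i] == v for i, v in given.items()):
                den += p
                num[x[target_idx]] = num.get(x[target_idx], 0.0) + p
        return {k: v / den for k, v in num.items()}

    joint = chain_joint()
    print(cond(joint, 1, {0: 0, 2: 1}))
    print(cond(joint, 1, {0: 0, 2: 1, 3: 0}))   # identical, as the local Markov property demands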
However, a vertex may influence any other arbitrarily far away, through such local interactions. Notice though, that this is a considerably simpler picture than having to consult the joint distribution over all variables for all interactions, for here, we need only know the local joint distributions and use these to infer the correlations of far away variables. We shall see in later chapters that this picture, with some additional caveats, is at the heart of polynomial time computations.

Note the positivity condition on Markov random fields. With this positivity condition, the complete set of conditionals given by the local characteristics of a field determine the joint distribution [Bes74].

Note that Markov random fields are characterized by a local condition — namely, their local conditional independence characteristics. Markov random fields satisfy the global Markov property as well.

Theorem 2.7. Markov random fields with respect to a neighborhood system satisfy the global Markov property with respect to the graph constructed from the neighborhood system.

Markov random fields originated in statistical mechanics [Dob68], where they model probability measures on configurations of interacting particles, such as Ising spins. See [KS80] for a treatment that focusses on this setting. Their local properties were later found to have applications to analysis of images and other systems that can be modelled through some form of spatial interaction. This field started with [Bes74] and came into its own with [GG84], which exploited the Markov-Gibbs correspondence that we will deal with shortly. See also [Li09].

We now describe another random field that has a global characterization — the Gibbs random field.

2.2.1 Gibbs Random Fields and the Hammersley-Clifford Theorem

We are interested in how the Markov properties of the previous section translate into factorization of the distribution.
Definition 2.8. A Gibbs random field (or Gibbs distribution) with respect to a neighborhood system N_G on the graph G is a probability measure on the set of configurations Ω_n having a representation of the form

P(x_1, . . . , x_n) = (1/Z) exp(−U(x)/T),

where

1. Z is the partition function and is a normalizing factor that ensures that the measure sums to unity,

Z = Σ_{x ∈ Ω_n} exp(−U(x)/T).

Evaluating Z explicitly is hard in general since it is a summation over each of the Λ^n configurations in the space.

2. T is a constant known as the "Temperature" that has origins in statistical mechanics. It controls the sharpness of the distribution. At high temperatures, the distribution tends to be uniform over the configurations. At low temperatures, it tends towards a distribution that is supported only on the lowest energy states.

3. U(x) is the "energy" of configuration x and takes the following form as a sum

U(x) = Σ_{c ∈ C} V_c(x)

over the set of cliques C of G. The functions V_c : c ∈ C are the clique potentials such that the value of V_c(x) depends only on the coordinates of x that lie in the clique c. These capture the interactions between vertices in the clique.

Thus, a Gibbs random field has a probability distribution that factorizes into its constituent "interaction potentials." This says that the probability of a configuration depends only on the interactions that occur between the variables, broken up into cliques. For example, let us say that in a system, each particle interacts with only 2 other particles at a time; then (if one prefers to think in terms of statistical mechanics) the energy of each state would be expressible as a sum of potentials, each of which had just three variables in its support.
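A minimal sketch (ours) of Definition 2.8: four binary spins on a cycle with one pairwise potential per edge; the ferromagnetic potential and the two temperatures are arbitrary choices made only to show the sharpening effect of T.

    from itertools import product
    from math import exp

    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]          # cliques of the cycle graph

    def V(xi, xj):
        return -1.0 if xi == xj else 1.0              # illustrative pair potential

    def energy(x):
        return sum(V(x[i], x[j]) for i, j in edges)   # U(x) = sum of clique potentials

    def gibbs(T):
        weights = {x: exp(-energy(x) / T) for x in product((0, 1), repeat=4)}
        Z = sum(weights.values())                      # partition function
        return {x: w / Z for x, w in weights.items()}

    for T in (0.5, 5.0):
        p = gibbs(T)
        print(T, max(p.values()), min(p.values()))     # low T: sharp; high T: near-uniform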
Thus, the Gibbs factorization carries in it a faithful representation of the underlying interactions between the particles. This type of factorization obviously yields a "simpler description" of the distribution. The precise notion is that of the number of independent parameters it takes to specify the distribution. Factorization into conditionally independent interactions of scope k means that we can specify the distribution in O(γ^k) parameters rather than O(γ^n). We will return to this at the end of this chapter.

Definition 2.9. Let P be a Gibbs distribution whose energy function is U(x) = Σ_{c ∈ C} V_c(x). The support of the potential V_c is the cardinality of the clique c. The degree of the distribution P, denoted by deg(P), is the maximum of the supports of the potentials. In other words, the degree of the distribution is the size of the largest clique that occurs in its factorization.

One may immediately see that the degree of a distribution is a measure of the complexity of interactions in the system since it is the size of the largest set of variables whose interaction cannot be split up in terms of smaller interactions between subsets. One would expect this to be the hurdle in efficient algorithmic applications.

The Hammersley-Clifford theorem relates the two types of random fields.

Theorem 2.10 (Hammersley-Clifford). X is a Markov random field with respect to a neighborhood system N_G on the graph G if and only if it is a Gibbs random field with respect to the same neighborhood system.

The theorem appears in the unpublished manuscript [HC71] and uses a certain "blackening algebra" in the proof. The first published proofs appear in [Bes74] and [Mou74]. Note that the condition of positivity on the distribution (which is part of the definition of a Markov random field) is essential to state the theorem in full generality.
The following example from [Mou74] shows that relaxing this condition allows us to build distributions having the Markov property, but not the Gibbs property.

Example 2.11. Consider a system of four binary variables {X_1, X_2, X_3, X_4}. Each of the following combinations has probability 1/8, while the remaining combinations are disallowed:

(0, 0, 0, 0)   (1, 0, 0, 0)   (1, 1, 0, 0)   (1, 1, 1, 0)
(0, 0, 0, 1)   (0, 0, 1, 1)   (0, 1, 1, 1)   (1, 1, 1, 1)

We may check that this distribution has the global Markov property with respect to the 4 vertex cycle graph. Namely, we have X_1 ⊥ X_3 | {X_2, X_4} and X_2 ⊥ X_4 | {X_1, X_3}. However, the distribution does not factorize into Gibbs potentials.

2.3 Factor Graphs

Factor graphs are bipartite graphs that express the decomposition of a "global" multivariate function into "local" functions of subsets of the set of variables. They are a class of undirected models. The two types of nodes in a factor graph correspond to variable nodes and factor nodes. See Fig. 2.2.

Figure 2.2: A factor graph showing the three clause 3SAT formula (X1 ∨ X4 ∨ ¬X6) ∧ (¬X1 ∨ X2 ∨ ¬X3) ∧ (X4 ∨ X5 ∨ X6). A dashed line indicates that the variable appears negated in the clause.
The distribution modelled by this factor graph will show a factorization as follows:

p(x1, . . . , x6) = (1/Z) φ1(x1, x2, x3) φ2(x1, x4, x6) φ3(x4, x5, x6),     (2.1)

where

Z = Σ_{x1, . . . , x6} φ1(x1, x2, x3) φ2(x1, x4, x6) φ3(x4, x5, x6).     (2.2)

One should keep in mind that this factorization is (in general) far from being a factorization into conditionals and does not express conditional independence. Thus, the factorization above is not the one we are seeking — it does not imply a series of conditional independencies in the joint distribution. In general, these factors do not represent conditionally independent pieces of the joint distribution. The system must embed each of these factors in ways that are global and not obvious from the factors. This global information is contained in the partition function.

Factor graphs offer a finer grained view of factorization of a distribution than Bayesian networks or Markov networks. Factor graphs have been very useful in various applications, most notably perhaps in coding theory, where they are used as graphical models that underlie various decoding algorithms based on forms of belief propagation (also known as the sum-product algorithm), which is an exact algorithm for computing marginals on tree graphs but performs remarkably well even in the presence of loops. See [KFaL98] and [AM00] for surveys of this field. As might be expected from the preceding comments, these do not focus on conditional independence, but rather on algorithmic applications of local features (such as locally tree like) of factor graphs.

A clique in a factor graph is a set of variable nodes such that every pair in the set is connected by a function node. The completion of a factor graph is obtained by introducing a new function node for each clique, and connecting it to all the variable nodes in the clique, and no others. A Hammersley-Clifford type theorem holds over the completion of a factor graph. In summary, a positive distribution that satisfies the global Markov property with respect to a factor graph satisfies the Gibbs property with respect to its completion.
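As a sketch (ours) of equations (2.1)-(2.2) for the formula of Fig. 2.2, one natural choice — an assumption on our part, not stated in the text — is to let each clause contribute a 0/1 indicator factor, so that Z counts satisfying assignments.

    from itertools import product

    def phi1(x1, x2, x3):  # clause (not X1 or X2 or not X3)
        return 1.0 if (not x1) or x2 or (not x3) else 0.0

    def phi2(x1, x4, x6):  # clause (X1 or X4 or not X6)
        return 1.0 if x1 or x4 or (not x6) else 0.0

    def phi3(x4, x5, x6):  # clause (X4 or X5 or X6)
        return 1.0 if x4 or x5 or x6 else 0.0

    def weight(x):
        x1, x2, x3, x4, x5, x6 = x
        return phi1(x1, x2, x3) * phi2(x1, x4, x6) * phi3(x4, x5, x6)

    Z = sum(weight(x) for x in product((0, 1), repeat=6))       # equation (2.2)
    p = {x: weight(x) / Z for x in product((0, 1), repeat=6)}   # equation (2.1)
    print(Z)                 # number of satisfying assignments of the formula
    print(sum(p.values()))   # 1.0 -- a normalized distribution over configurations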
2.4 The Markov-Gibbs Correspondence for Directed Models

Consider first a directed acyclic graph (DAG), which is simply a directed graph without any directed cycles in it. A set of random variables whose interdependencies may be represented using a DAG is known as a Bayesian network or a directed Markov field. Note that DAGs are allowed to have loops (and loopy DAGs are central to the study of iterative decoding algorithms on graphical models).

Some specific points of additional terminology for directed graphs are as follows. If there is a directed edge from x to y, we say that x is a parent of y, and y is the child of x. The set of parents of x is denoted by pa(x), while the set of children of x is denoted by ch(x). The set of vertices from whom directed paths lead to x is called the ancestor set of x and is denoted an(x). Similarly, the set of vertices to whom directed paths from x lead is called the descendant set of x and is denoted de(x). Finally, we often assume that the graph is equipped with a distance function d(·, ·) between vertices, which is just the length of the shortest path between them.

Given a directed graphical model, one may construct an undirected one by a process known as moralization. In moralization, we (a) replace a directed edge from one vertex to another by an undirected one between the same two vertices and (b) "marry" the parents of each vertex by introducing edges between each pair of parents of the vertex at the head of the former directed edge. The process is illustrated in the figure below.

The idea is best illustrated with a simple example. Consider the DAG of Fig. 2.3 (left). The corresponding factorization of the joint density that is induced by the DAG model is

p(x1, . . . , x6) = p(x1) p(x2) p(x3) p(x4 | x1) p(x5 | x2, x3) p(x6 | x4, x5).

In general, every joint distribution that satisfies this DAG factorizes as above. Thus, if we denote the set of parents of the variable x_i by pa(x_i), then the joint distribution of (x_1, . . . , x_n) factorizes as

p(x_1, . . . , x_n) = ∏_{i=1}^{n} p(x_i | pa(x_i)).
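A small sketch (ours, on a made-up 5 node DAG rather than the one of Fig. 2.3): moralizing a DAG and counting the independent parameters of its recursive factorization when every variable is binary.

    def moralize(parents):
        # parents: dict node -> list of parent nodes
        undirected = set()
        for child, pars in parents.items():
            for p in pars:
                undirected.add(frozenset((p, child)))        # drop edge directions
            for i, p in enumerate(pars):                     # "marry" the parents
                for q in pars[i + 1:]:
                    undirected.add(frozenset((p, q)))
        return undirected

    def parameter_count(parents):
        # each binary node contributes a CPD with 2**|pa(v)| free entries
        return sum(2 ** len(pars) for pars in parents.values())

    dag = {"x1": [], "x2": [], "x3": [], "x4": ["x1"], "x5": ["x2", "x3", "x4"]}
    print(sorted(tuple(sorted(e)) for e in moralize(dag)))
    print(parameter_count(dag))   # 1 + 1 + 1 + 2 + 8 = 13, versus 2**5 - 1 = 31 generic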
Figure 2.3: The moralization of the DAG on the left to obtain the moralized undirected graph on the right.

We have seen that relaxing the positivity condition on the distribution in the Hammersley-Clifford theorem (Thm. 2.10) cannot be done in general. In some cases, however, one may remove the positivity condition safely. In particular, [LDLL90] extends the Hammersley-Clifford correspondence to the case of arbitrary distributions (namely, dropping the positivity requirement) for the case of directed Markov fields. In doing so, they simplify and strengthen an earlier criterion for directed graphs given by [KSC84]. What we want, however, is to obtain a Markov-Gibbs equivalence for such graphical models in the same manner that the Hammersley-Clifford theorem provided for positive Markov random fields. We will use the result from [LDLL90], which we reproduce next.

Definition 2.12. A measure p admits a recursive factorization according to graph G if there exist nonnegative functions, known as kernels, k^v(·, ·) for v ∈ V defined on Λ × Λ^{pa(v)}, where the first factor is the state space for X_v and the second for X_{pa(v)}, such that

∫ k^v(y_v, x_{pa(v)}) µ_v(dy_v) = 1
and p = f · µ, where

f(x) = ∏_{v ∈ V} k^v(x_v, x_{pa(v)}).

In this case, the kernels k^v(·, x_{pa(v)}) are the conditional densities for the distribution of X_v conditioned on the value of its parents X_{pa(v)} = x_{pa(v)}. Now let G^m be the moral graph corresponding to G.

Theorem 2.13. If p admits a recursive factorization according to G, then it admits a factorization (into potentials) according to the moral graph G^m.

D-separation

We have considered the notion of separation on undirected models and its effect on the set of conditional independencies satisfied by the distributions that factor according to the model. For directed models, there is an analogous notion of separation known as D-separation. The notion is what one would expect intuitively if one views directed models as representing "flows" of probabilistic influence. We simply state the property and refer the reader to [KF09, §3.3.1] and [Bis06, §8.2.2] for discussion and examples.

Let A, B, and C be sets of vertices on a directed model. Consider the set of all directed paths coming from a node in A and going to a node in B. Such a path is said to be blocked if one of the following two scenarios occurs.

1. Arrows on the path meet head-to-tail or tail-to-tail at a node in C.

2. Arrows meet head-to-head at a node, and neither the node nor any of its descendants is in C.

If all paths from A to B are blocked as above, then C is said to D-separate A from B, and the joint distribution must satisfy A ⊥ B | C.
2.5 I-maps and D-maps

We have seen that there are two broad classes of graphical models — undirected and directed — which may be used to represent the interaction of variables in a system. The conditional independence properties of these two classes are obtained differently.

Definition 2.14. A graph (directed or undirected) is said to be a D-map ('dependencies map') for a distribution if every conditional independence statement of the form A ⊥ B | C for sets of variables A, B, and C that is satisfied by the distribution is reflected in the graph.

A D-map may express more conditional independencies than the distribution possesses. Thus, a completely disconnected graph having no edges is trivially a D-map for any distribution.

Definition 2.15. A graph (directed or undirected) is said to be an I-map ('independencies map') for a distribution if every conditional independence statement of the form A ⊥ B | C for sets of variables A, B, and C that is expressed by the graph is also satisfied by the distribution.

An I-map may express less conditional independencies than the distribution possesses. Thus, a completely connected graph is trivially an I-map for any distribution.

Definition 2.16. A graph that is both an I-map and a D-map for a distribution is called its P-map ('perfect map'). In other words, a P-map expresses precisely the set of conditional independencies that are present in the distribution.

Not all distributions have P-maps. Indeed, the class of distributions having directed P-maps is itself distinct from the class having undirected P-maps, and neither equals the class of all distributions (see [Bis06, §3.4] for examples).
3. Distributions with poly(log n)-Parametrization

We now come to a central theme in our work. It takes exponentially many in n parameters to specify a "true" joint distribution of n covariates. This statement can be made more precise — the typical joint distribution on n variates requires O(2^n) parameters for its specification. Consider a system of n binary covariates (X_1, . . . , X_n). To specify their joint distribution p(x_1, . . . , x_n) in the absence of any additional information, we would have to give the probability mass function at each of the 2^n configurations that these n variables can take jointly. The only constraint given on these probability masses is that they must sum up to 1. Thus, given the function value at 2^n − 1 configurations, we could find the value at the remaining configuration. This means that in the absence of any additional information, n covariates require 2^n − 1 parameters to specify their joint distribution.

In light of the above, a joint distribution that requires only 2^poly(log n) parameters to specify would seem quite unusual. We would intuitively expect it to be "much simpler" in some way than the typical joint distribution on n variates. Indeed, because of the exponent of poly(log n), we would expect that it would be "somewhat like" a joint distribution on only poly(log n) covariates. In other words — distributions on n variates but requiring only 2^poly(log n) parameters for their specification are like the typical distribution on poly(log n) variates. We shall refer to such distributions as having poly(log n)-parametrization.

Let us take an extreme case of such a "simple" joint distribution. Take the case of n covariates, except that we are provided with one critical piece of extra
information — that the n variates are independent of each other. In that case, as a result of the independence, we would need 1 parameter to specify each of their individual distributions — namely, the probability that it takes the value 1. These n parameters then specify the joint distribution simply because the distribution factorizes completely into factors whose scopes are single variables (namely, just the p(x_i)). Thus, we go from exponentially many independent parameters to linearly many if we know that the variates are independent.

Let us consider another extreme case of such a distribution. Consider the distribution on n variates that is nonzero only at (0, 0, . . . , 0) and (1, 1, . . . , 1). Here, the variates are highly correlated. In this case, we require only two parameters to specify the distribution. But once again, it is because the distribution is supported on only two out of a possible 2^n values. In other words, it is severely limited by the small number of joint values the covariates take.

In both cases above, the distribution on n covariates required far fewer parameters to specify than the typical n variate distribution does. In order to state the typical case of a n variate distribution, we make the following definition.

Definition 3.1. A distribution on n variates will be called ample if it is supported on c^n joint values for c > 1.

In other words, ample distributions take the typical number of joint values.

Definition 3.2. A distribution on n variates will be said to have irreducible O(n) correlations if there exist correlations between O(n) variates that do not permit factorization into smaller scopes through conditional independencies.

Note that distributions having both these properties require O(2^n) independent parameters to specify. There is neither factorization, nor limited support, that will permit more economical parametrization. It is distributions that possess both these properties that are problematic for polynomial time algorithms. We will see that distributions constructed by polynomial time algorithms can have one or the other property, but not both.
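As a sketch (ours), "ampleness" in the sense of Definition 3.1 concerns supports growing like c^n for some constant c > 1; the finite comparison below uses an arbitrary illustrative threshold c = 1.25 and is not part of the original text.

    from itertools import product
    import random

    n = 10
    configs = list(product((0, 1), repeat=n))

    supports = {
        "independent (all of {0,1}^n)": configs,                      # 2**n points: ample
        "two point (all-0 and all-1)":  [(0,) * n, (1,) * n],         # limited support: not ample
        "random half of {0,1}^n":       random.sample(configs, 2 ** (n - 1)),
    }

    c = 1.25
    for name, support in supports.items():
        print(f"{name:30s} |support| = {len(support):5d}  ample (>= c**n)? {len(support) >= c ** n}")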
This brings us to a key motivating question: What if n covariates had a joint distribution that required only exponential in poly(log n) many parameters to specify? When would such a distribution arise, and what would be its limitations? This question is really the heart of P = NP. Indeed, all the machinery we build and use in this work really takes us to the following insight: polynomial time computations build distributions of solutions that can be parameterized using only exponential in poly(log n) many parameters. Namely, they have poly(log n)-parametrization.

In contrast, as noted earlier, the distribution of solutions in the hard phases of NP-complete problems such as kSAT displays two properties:

1. The variates are as far from being independent as possible — they interact with each other O(n) at a time, with no possibility for factorization into conditional independencies. In other words, the distribution has irreducible O(n) correlations.

2. The distribution is ample.

In such phases, the distribution of solutions requires exponentially many parameters to specify. Note that both conditions are required. It is not only long range correlations, but (a) the non-factorizability of such correlations and (b) ampleness under such non-factorizable correlations, that characterizes the solution spaces in hard phases of NP-complete problems.

3.1 Two Kinds of poly(log n)-parameterizations

We have seen that distributions on n variates that are poly(log n)-parametrizable are very atypical. When do they arise? They can be studied in two categories, both of which will correspond to polynomial time algorithms.

3.1.1 Range Limited Interactions

It is not often that complex systems of n interacting variables have complete independence between some subsets. What is far more frequent
is that there are conditional independencies between certain subsets given some intermediate subset. In this case, the joint will factorize into factors each of whose scope is a subset of (X_1, . . . , X_n). We should emphasize that the factors must give us conditional independence for this to be true. Namely, factor graphs give us a factorization, but it is, in general, not a factorization into conditional independents, and so we cannot conclude anything about the number of independent parameters by just examining the factor graph.

If the factorization is into conditionally independent factors, each of whose scope is of size at most k — meaning that there is a recursive factorization of the joint into conditionally independent pieces — then we can parametrize the distribution using at most n2^k independent parameters. For example, if each node has at most k parents, we can parametrize the joint distribution with at most n2^k independent parameters; without such a bound we can say only that at most n2^n independent parameters are needed. The conditional independence in this case is from all nondescendants, given the parents. We may also moralize the graph and see this as a factorization over cliques in the moralized graph. Note that such a factorization (namely, starting from a directed model and moralizing) holds even if the distribution is not positive, in contrast with those distributions which do not factor over directed models and where we have to invoke the Hammersley-Clifford theorem to get a similar factorization. From our perspective, a major feature of directed graphical models is that their factorizations are already globally normalized once they are locally normalized. See [KF09] for further discussion on parameterizations for directed and undirected graphical models.

Our proof scheme requires us to distinguish distributions based on the size of the irreducible direct interactions between subsets of the covariates. Namely, we would like to distinguish distributions where there are O(n) such covariates whose joint interaction cannot be factored through smaller interactions (having less than O(n) covariates) chained together by conditional independencies. We would like to contrast such distributions from others which can be so factored through factors having only poly(log n) variates in their scope. The measure that allows us to make this distinction is the number of independent parameters it takes to specify the distribution.
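A brief numerical sketch (ours) of the counts just quoted for binary variables: a generic joint needs 2^n − 1 parameters, while a directed model in which every node has at most k parents needs at most n · 2^k; the particular choice k = (log2 n)^2 below is only an illustration of a poly(log n) sized scope.

    import math

    def generic(n):
        return 2 ** n - 1

    def bounded_parents(n, k):
        return n * 2 ** k            # one CPD of size 2**k per node

    n = 1024
    k = int(math.log2(n)) ** 2       # (log2 n)**2 = 100 here
    print(generic(n))                # astronomically large: 2**1024 - 1
    print(bounded_parents(n, k))     # n * 2**poly(log n): far smaller, though still super-polynomial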
Figure 3.1: A range limited joint distribution on n covariates that has poly(log n)-parametrization. Although interactions between variables may be ample for their range, their range is limited to poly(log n).
When the size of the smallest irreducible interactions is O(n), then we need O(c^n) parameters, where c > 1. On the other hand, if we were able to demonstrate that the distribution factors through interactions which always have scope poly(log n), then we would need only O(c^poly(log n)) parameters, and this would give us a more economical parameterization than the one which requires 2^n − 1 parameters.

Let us consider the example of a Markov random field. By Hammersley-Clifford, it is also a Gibbs random field over the set of maximal cliques in the graph encoding the neighborhood system of the Markov random field. This Gibbs field comes with conditional independence assurance. Thus, if at most k < n variables interact directly at a time, then the largest clique size would be k, and we have an upper bound on the number of parameters it takes to specify the distribution — namely, it is just Σ_{c ∈ C} 2^{|c|}.

In Chapter 5, we will build machinery that shows that if a problem lies in P as a result of a range limited algorithm (like monadic LFP), then the factorization of the distribution of solutions to that problem causes it to have economical parametrization, precisely because variables do not interact all at once, but rather in smaller subsets in a directed manner that gives us conditional independencies between sets that are of size poly(log n). The resulting distribution is ample. See Fig. 3.1. Note that the case where all n variates are independent falls into the range limited category with range being one.

3.1.2 Value Limited Interactions

In the previous section we saw the first type of interaction between n covariates that can be parametrized by just poly(log n) independent parameters. This was the case where the n variates interact directly only poly(log n) at a time, and such interactions are chained together through conditional independencies. In this section, we will see another such limited interaction, where the n variates do interact directly O(n) at a time, but they are restricted to taking only c^poly(log n) many distinct values (see Fig. 3.2). One sees immediately that the underlying limitation in both this case and the previous is common — the set of n covariates
do not take 2^n different values with extensive O(n) correlations that do not factor through conditional independencies, like a "true" (or more precisely, typical) joint distribution of n variates. In the case of range limited interactions, this is because the covariates only jointly vary with poly(log n) other variates at a time. In the case of value limited interactions, this is because though O(n) variates vary jointly, they only take 2^poly(log n) joint values. In both cases — range limited and value limited interactions — the system of n covariates behaves as though it was a system of only poly(log n) covariates. Namely, in both cases, the n covariates behave in ways that are similar to a system of poly(log n) covariates, and their "jointness" resembles a system of poly(log n) covariates.

How do we precisely state this property? Through the notion of independent parameters. We will measure the jointness of a distribution by the number of independent parameters required to specify it. A "true" joint distribution takes O(c^n), c > 1 independent parameters to specify. In both cases above, both range and value limited interactions require only O(c^poly(log n)) independent parameters to specify. On the other hand, as we shall see, in the hard phases of problems such as kSAT for k > 8, O(c^poly(log n)) independent parameters simply will not suffice to explain the behavior of the solution space of the problem. This is the crux of the P = NP question. It is in these phases that our separation of complexity classes can be demonstrated, not elsewhere.

We should also point out that once we have isolated the precise notion that is at the heart of polynomial time computation — namely, poly(log n)-parametrizability of the space of solutions — several apparent issues resolve themselves. Take the case of clustering in XORSAT, for instance. We should stress that this behavior has been rigorously shown to hold for some phases of kSAT for high values of k, and we will recall it in some detail in Chapter 6. Instead, the core issue is that of the number of independent parameters it takes to specify the distribution of the entire space of solutions. We only need note that the linear nature of XORSAT solution spaces means there is a poly(log n)-parametrization (the basis provides this for linear spaces). In particular, the joint distribution
has a very economical parametrization using only 2^{poly(log n)} independent parameters.

It is also useful to notice that neither type of limitation implies the other. For instance, n independent variates are range limited, but not value limited. Whereas the distribution supported on the all-1 and all-0 tuples is value limited, but not range limited: although interactions between variables are O(n) at a time, they do not display ampleness in their joint distribution, and so do not show the joint behavior of a "true" joint distribution of n covariates.

Figure 3.2: A value limited joint distribution on n covariates that has poly(log n)-parametrization. (The figure depicts interactions of range O(n) whose joint values are limited to 2^{poly(log n)}.)

In later chapters, we will build machinery to see that polynomial time LFP algorithms can capture either range or value limited behaviors. Regimes of problems where the distributions of solutions are neither value limited nor range limited cause the failure of polynomial time algorithms on the average.
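As a purely illustrative aside (not part of the argument), the short sketch below tallies the three parameter counts discussed above for a concrete n. The constant c = 2, the clique structure, and the poly(log n) exponent are assumptions made only for this example.

```python
import math

def full_joint_params(n):
    # A "true" joint distribution of n binary covariates needs 2^n - 1
    # independent parameters: one probability per outcome, minus normalisation.
    return 2 ** n - 1

def range_limited_params(n, clique_size):
    # Range limited case: a Gibbs factorization over maximal cliques needs one
    # potential table per clique; with O(n) cliques of size clique_size over
    # binary variables this is roughly n * 2^clique_size parameters.
    return n * 2 ** clique_size

def value_limited_params(num_joint_values):
    # Value limited case: if the n covariates jointly take only
    # num_joint_values distinct values, then num_joint_values - 1 parameters
    # specify the distribution, regardless of n.
    return num_joint_values - 1

n = 1024
polylog = int(math.log2(n) ** 2)            # poly(log n), exponent 2, for the example
print("full joint    :", full_joint_params(n))
print("range limited :", range_limited_params(n, clique_size=polylog))
print("value limited :", value_limited_params(2 ** polylog))
```

Both limited counts are of order 2^{poly(log n)} and are dwarfed by the 2^n − 1 parameters of a full joint distribution, which is the point of this chapter.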
We will return to this issue in future versions of this paper or in the manuscript [Deo10] which is under preparation. even though a covariate sees O(n) other covariates. if we picked a n variable distribution at random. The solution space shows the behavior of a typical joint distribution on n covariates in that it is ample and correlated.3. In other words.3 On the Atypical Nature of poly(log n)parameterization We brieﬂy mentioned earlier that the typical member of the space of distributions on n covariates requires O(2n ) parameters. We end this chapter by tying poly(log n)parameterizations to a Markov or 38 . This short section owes its existence to Leonid Levin and Avi Wigderson. This observation may be used to state results about average case complexity in hard phases of random kSAT. 3. it only utilizes poly(log n) amount of the information in them in order to make its decision. these hard phases are simply typical. In both cases. It is polynomial time solution spaces that are atypical for n variate distributions in that they are either not ample (the value limited case) or they are not correlated solidly enough (the range limited case. where they admit Gibbs factorizations into smaller potentials). For purposes of pedagogy. We can even think of the value limited behavior as a type of range limited behavior where.1. Namely. nothing more. both of whom asked us whether our methods could be used to make statements about average case complexity. DISTRIBUTIONS WITH poly(LOG N )PARAMETRIZATION 38 3. with high likelihood. In many ways.4 Our Treatment of Range and Value Limited Distributions The two types of distributions that we have mentioned above are only superﬁcially dissimilar.1. with high likelihood we would get a distribution that required O(2n ) parameters to specify. Note that this is a statistical statement. we will disregard this superﬁcial dissimilarity and provide a full treatment of the range limited case. we would not get a poly(log n)parametrizable distribution. the range of behaviors of the n covariates can be parametrized with the number of independent parameters it takes to specify a joint distribution of only poly(log n) covariates.
(equivalently for directed) Gibbs models. Once again, consider two kinds of poly(log n)-parameterizations — range limited and value limited. A range limited parametrization would correspond to a Gibbs field whose potentials are specified over maximum cliques of size poly(log n). A value limited parameterization could have maximum cliques of size O(n), but the number of parameters for such a clique would only be 2^{poly(log n)} instead of the possible 2^{O(n)}. In either case, the random field would have poly(log n)-parametrization. See Figs. 3.1 and 3.2.
Rm . over ordered structures. cA . . there is a precise and highly insightful characterization of the class of queries that are computable in polynomial time. . Each relation has a ﬁxed arity. Readers from a ﬁnite model theory background may skip e this chapter. c1 . This poses no shortcomings since functions may be encoded as relations. . A σstructure A consists of a set A which is the universe of A. . cs . . Then. R1 . . σ = R1 .4. A A A = A. . and those that are computable in polynomial space. In particular. Rm . . denoted by σ. we begin with a brief pr´ cis of this theory. We quickly set notation. We consider only relational vocabularies in that there are no function symbols. . . In order to keep the treatment relatively complete. 1 s An example is the vocabulary of graphs which consists of a single relation symbol having arity two. and interpretations cA for each of the constant symbols in the vocabulary. interpretations RA for each of the relation symbols in the vocabulary. . cA . . . A vocabulary. Namely. Logical Descriptions of Computations Work in ﬁnite model theory and descriptive complexity theory — a branch of ﬁnite model theory that studies the expressive power of various logics in terms of complexity classes — has resulted in machine independent characterizations of various complexity classes. is a set consisting of ﬁnitely many relation and constant symbols. . . . a graph may be seen as a structure over this 40 .
vocabulary, where the universe is the set of nodes, and the relation symbol is interpreted as an edge. In addition, some applications may require us to work with a graph vocabulary having two constants interpreted in the structure as source and sink nodes respectively. We also denote by σ_n the extension of σ by n additional constants, and denote by (A, a) the structure where the tuple a has been identified with these additional constants.

Inductive definitions are a fundamental primitive of mathematics, and we refer the reader to [Mos74] for the first monograph on the subject, and to [EF06, Lib04] for detailed treatments in the context of finite model theory. See [Imm99] for a text on descriptive complexity theory. Our treatment is taken mostly from these sources, and stresses the facts we need.

The idea is to build up a set in stages, where the defining relation for each stage can be written in the first order language of the underlying structure and uses elements added to the set in previous stages. In the most general case, there is an underlying structure A = ⟨A, R_1, . . . , R_m⟩ and a formula φ(S, x) ≡ φ(S, x_1, . . . , x_n) in the first-order language of A. The variable S is a second-order relation variable that will eventually hold the set we are trying to build up in stages. We will also require that φ have only positive occurrences of the n-ary relation variable S, namely that all occurrences of S be within the scope of an even number of negations.

At the ξ-th stage of the induction, denoted by I_φ^ξ, we insert into the relation S the tuples according to

x ∈ I_φ^ξ ⇔ φ( ⋃_{η<ξ} I_φ^η , x ).

The decomposition into its various stages is a central characteristic of inductively defined relations. We will denote the stage at which a tuple enters the relation in the induction defined by φ by |·|_φ^A.
1. a transﬁnite induction may result. and 2. constructive. X ⊆ F (X). The operator F is monotone if it respects subset inclusion. we deﬁne sequences induced by operators. and then consider the operators on structures that are induced by ﬁrst order formulae. 42 . and P(A) be its power set. We begin by deﬁning two classes of operators on sets. Finally. Relations that may be deﬁned by R(x) ⇔ Iφ (a. Sets of the form Iφ are known as ﬁxed points of the structure. inductive relations are sections of ﬁxed points. this is also known as the inductive depth. Note that the cardinality of the ordinal κ is at most An . we deﬁne the relation Iφ = ξ ξ Iφ . In the most general case. Note that the deﬁnition above is 1. LOGICAL DESCRIPTIONS OF COMPUTATIONS 42 within the scope of an even number of negations. if X ⊆ Y . Let A be a ﬁnite set. then F (X) ⊆ F (Y ). We will use both these properties throughout our work. Deﬁnition 4. Thus. κ+1 κ The least ordinal κ at which Iφ = Iφ is called the closure ordinal of the induc tion.4. for all subsets X. When the underlying structures are ﬁnite. The operator F is inﬂationary if it maps sets to their supersets. x) for some choice of tuple a over A are known as inductive relations. Such inductions are called positive elementary. namely. Note that there are deﬁnitions of the set Iφ that are equivalent. and characterize the sequences induced by monotone and inﬂationary operators. Next. but can be stated only in the second order language of A. elementary at each stage. We now proceed more formally by introducing operators and their ﬁxed points. namely. An operator F on A is a function F : P(A) → P(A). and is denoted by φA . Y of A.
Definition 4.3. Let F be an operator on A. Consider the sequence of sets F^0, F^1, . . . defined by

F^0 = ∅,   F^{i+1} = F(F^i).    (4.1)

This sequence (F^i) is called inductive if it is increasing, namely, if F^i ⊆ F^{i+1} for all i. In this case, we define

F^∞ := ⋃_{i=0}^{∞} F^i.    (4.2)

If F is either monotone or inflationary, the sequence (F^i) is inductive.

Now we are ready to define fixed points of operators on sets.

Definition 4.4. Let F be an operator on A. The set X ⊆ A is called a fixed point of F if F(X) = X. A fixed point X of F is called its least fixed point, denoted LFP(F), if it is contained in every other fixed point Y of F; namely, X ⊆ Y whenever Y is a fixed point of F.

Not all operators have fixed points, let alone least fixed points. The Tarski-Knaster theorem guarantees that monotone operators do, and also provides two constructions of the least fixed point for such operators: one "from above" and the other "from below." The latter construction uses the sequence (4.1).

Theorem 4.5 (Tarski-Knaster). Let F be a monotone operator on a set A. Then
1. F has a least fixed point LFP(F) which is the intersection of all the fixed points of F; namely, LFP(F) = ⋂ {Y : Y = F(Y)}.
2. LFP(F) is also equal to the union of the stages of the sequence (F^i) defined in (4.1); namely, LFP(F) = ⋃_i F^i = F^∞.

Not all operators are monotone; therefore we need a means of constructing fixed points for nonmonotone operators.
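The "from below" construction of the least fixed point is easy to state operationally. The sketch below is ours and purely illustrative: it iterates the stages F^0 = ∅, F^{i+1} = F(F^i) for a small hand-made monotone operator (single-source reachability).

```python
def lfp(F):
    """Least fixed point of a monotone operator F, computed 'from below' by
    iterating the stages F^0 = {}, F^{i+1} = F(F^i) until they stabilise."""
    stage = set()                      # F^0
    while True:
        nxt = F(stage)
        if nxt == stage:               # F^{i+1} = F^i: least fixed point reached
            return stage
        stage = nxt

# Toy monotone operator: vertices reachable from vertex 0 along directed edges.
edges = {(0, 1), (1, 2), (3, 4)}
def reach_op(S):
    # Monotone: enlarging S can only enlarge the output.
    return {0} | {w for (v, w) in edges if v in S}

print(sorted(lfp(reach_op)))           # stages {} -> {0} -> {0,1} -> {0,1,2}: [0, 1, 2]
```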
4. F m = F n . We wish to extend FO by adding ﬁxed points of operators of the form Fφ . namely. Now. Deﬁnition 4. Let ϕ(R. 4. In the ﬁrst case. as F n in the ﬁrst case. . The sequence may or may not stabilize. x) deﬁnes an operator Fϕ : P(Ak ) → P(Ak ) on Ak which acts on a subset X ⊆ Ak as Fϕ (X) = {a  A = ϕ(X/R. The logic FO(IFP) is obtained by extending FO with the following formation rule: if ϕ(R. LOGICAL DESCRIPTIONS OF COMPUTATIONS 44 Deﬁnition 4. For an arbitrary operator G. and the empty set in the second case. and hence eventually stabilizes to the ﬁxed point F ∞ . Consider the sequence (F i ) induced by an arbitrary operator F on A. where ϕ(X/R. The formula ϕ(R. 1. where φ is a formula in FO.3) . then [IFPR. a}. we associate the inﬂationary operator Ginﬂ deﬁned by Ginﬂ (Y ) set Ginﬂ ∞ Y ∪ G(Y ). Let the notation be as above.8. x) is a formula and t a ktuple of terms. x) be a formula of vocabulary σ ∪ {R}.x ϕ(R. the sequence F i is inductive. for all n ≤ 2A . Deﬁnition 4.7. we deﬁne the partial ﬁxed point of F . x1 . . x)](t) 44 (4. there is a positive integer n such that F n+1 = F n . a} means that R is interpreted as X in ϕ. Let σ be a relational vocabulary. This gives us ﬁxed point logics which play a central role in descriptive complexity theory. and denoted by IFP(G). denoted PFP(F ). F n = F n+1 . the sequence F i does not stabilize. Deﬁnition 4.2 Fixed Point Logics for P and PSPACE We now specialize the theory of ﬁxed points of operators to the case where the operators are deﬁned by means of ﬁrst order formulae. .6. Now consider a structure A of vocabulary σ. and R a relational symbol of arity k that is not in σ. and therefore for all m > n. . For an inﬂationary operator F . xn ) = ϕ(R. In the latter case. The is called the inﬂationary ﬁxed point of G.9.
x) is a formula and t a ktuple of terms.4.3) will be monotone. Hence. and thus will have a least ﬁxed point. We need a deﬁnition. there are no negative occurrences of R in the formula. Let notation be as earlier. A formula is said to be positive in R if all occurrences of R in it are positive. An occurrence of R is said to be positive if it is under the scope of an even number of negations. Lemma 4. We cannot deﬁne the closure of FO under taking least ﬁxed points in the above manner without further restrictions since least ﬁxed points are guaranteed to exist only for monotone operators.x ϕ(R. LOGICAL DESCRIPTIONS OF COMPUTATIONS 45 is a formula whose free variables are those of t. The semantics are given by A = [PFPR. x) is positive in R. The semantics are given by A = [IFPR. then the operator obtained from ϕ by construction (4. 2. x)](a) iff a ∈ PFP(Fϕ ). In particular.10. The logic FO(PFP) is obtained by extending FO with the following formation rule: if ϕ(R.3) is monotone. If we were to form a logic by extending FO by least ﬁxed points without further restrictions.11. and negative if it is under the scope of an odd number of negations. then [PFPR. we would obtain a logic with an undecidable syntax. Let ϕ be a formula containing a relational symbol R.x ϕ(R. and testing for monotonicity is undecidable. 45 . x)](t) is a formula whose free variables are those of t. we make some restrictions on the formulae which guarantee that the operators obtained from them as described by (4. Let notation be as earlier.x ϕ(R. Deﬁnition 4. x)](a) iff a ∈ IFP(Fϕ ). Now we can deﬁne the closure of FO under least ﬁxed points of operators obtained from formulae that are positive in a relational variable. or there are no occurrences of R at all. If the formula ϕ(R.
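For contrast with the least fixed point, here is a small sketch (ours, not part of the development) of the inflationary and partial fixed point constructions applied to a toy non-monotone operator; the operator and the step bound are invented for the example.

```python
def ifp(F):
    """Inflationary fixed point: iterate Y -> Y | F(Y).  The sequence is
    increasing by construction, so over a finite universe it stabilises."""
    Y = set()
    while True:
        nxt = Y | F(Y)
        if nxt == Y:
            return Y
        Y = nxt

def pfp(F, max_steps):
    """Partial fixed point: iterate Y -> F(Y); return the fixed point if the
    sequence stabilises within max_steps, and the empty set otherwise."""
    Y = set()
    for _ in range(max_steps):
        nxt = F(Y)
        if nxt == Y:
            return Y
        Y = nxt
    return set()

# A toy non-monotone operator: it always adds 0 and "toggles" membership of 2.
def toggle(S):
    return (S | {0}) ^ {2}

print(sorted(ifp(toggle)))                # -> [0, 2]
print(sorted(pfp(toggle, max_steps=16)))  # the sequence never stabilises -> []
```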
Immerman [Imm82] and Vardi [Var82] obtained the following central result that captures the class P on ordered structures. Theorem 4. adding the ability to do simultaneous induction over several formulae does not increase the expressive power of the logic.x ϕ(R. 184] for details. First. The semantics are given by A = [LFPR. the stage at which the tuple a enters the relation R is denoted by aA .12. We end this section by relating these logics to various complexity classes. These are the central results of descriptive complexity theory. The logic FO(LFP) is obtained by extending FO with the following formation rule: if ϕ(R.4. Next. and inductive depths are denoted by ϕA . 46 . As earlier. A tuple may enter and leave the relation being built multiple times.13 (Fagin). x) is a formula that is positive in the kary relational variable R. ∃SO refers to the restriction of secondorder logic to formulae of the form ∃X1 · · · ∃Xm ϕ. In ﬁxed points (such as partial ﬁxed points) where the underlying formula is not necessarily positive. and secondly FO(IFP) = FO(LFP) over ﬁnite structures. x)](a) iff a ∈ LFP(Fϕ ). Fagin [Fag74] obtained the ﬁrst machine independent logical characterization of an important complexity class. ∃SO = NP. and t is a ktuple of terms. Here. We have introduced various ﬁxed point constructions and extensions of ﬁrst order logic by these constructions. LOGICAL DESCRIPTIONS OF COMPUTATIONS 46 Deﬁnition 4. p. where ϕ does not have any secondorder quantiﬁcation. we informally state two wellknown results on the expressive power of ﬁxed point logics. then [LFPR. This is well deﬁned for least ϕ ﬁxed points since a tuple enters a relation only once. and is never removed from it after. §10.3. See [Lib04. x)](t) is a formula whose free variables are those of t. this is not true.x ϕ(R.
14 (Immerman-Vardi). Over finite, ordered structures, the queries expressible in the logic FO(LFP) are precisely those that can be computed in polynomial time. Namely, FO(LFP) = P.

A characterization of PSPACE in terms of PFP was obtained in [AV91, Var82].

Theorem 4.15 (Abiteboul-Vianu, Vardi). Over finite, ordered structures, the queries expressible in the logic FO(PFP) are precisely those that can be computed in polynomial space. Namely, FO(PFP) = PSPACE.

Note: We will often use the term LFP generically instead of FO(LFP) when we wish to emphasize the fixed point construction being performed, rather than the language.
The treatment of LFP versus FO in ﬁnite model theory centers around the fact that FO can only express local properties. Thus. while LFP allows nonlocal properties such as transitive closure to be expressed. this inﬂuence must necessarily be bottlenecked by the simpler interactions that it must factor through. In this chapter. the inﬂuence can only be “transmitted through” the values of the intermediate conditioning variables.5. and asking how this nonlocal nature factors at each step. and what is the effect of such a factorization on the joint distribution of LFP acting upon ensembles. simpler interactions. This necessarily affects the type of inﬂuence a variable may exert on other variables in the system. 48 . The Link Between Polynomial Time Computation and Conditional Independence In Chapter 2 we saw how certain joint distributions that encode interactions between collections of variables “factor through” smaller. In the case where there are conditional independencies. In other words. while a variable in such a system can exert its inﬂuence throughout the system. the inﬂuence must propagate with bottlenecks at each stage. we will uncover a similar phenomenon underlying the logical description of polynomial time computation on ordered structures. We are taking as given the nonlocal capability of LFP. and so limitations of ﬁrst order logic must be the source of the bottleneck at each stage to the propagation of information in such computations. The fundamental observation is the following: Least ﬁxed point computations “factor through” ﬁrst order computations.
we will bring to bear ideas from statistical mechanics and message passing to the logical description of computations. We want to understand the stagewise bottleneck that a fixed point computation faces at each step of its execution. In order to accomplish this, we must understand the limitations of each stage of a LFP computation and understand how this affects the propagation of long-range influence in relations computed by LFP.

Fixed point logics allow variables to be nonlocal in their influence, but this nonlocal influence must factor through first order logic at each stage. The sequence (F_ϕ^i) of operators that construct fixed points may be seen as the propagation of influence in a structure by means of setting values of "intermediate variables". Namely, the variables are set by inducting them into a relation at various stages of the induction. This is a very similar underlying idea to the statistical mechanical picture of random fields over spaces of configurations that we saw in Chapter 2, but comes cloaked in a very different garb — that of logic and operators. It will be beneficial to state this intuition with the example of transitive closure.

Example 5.1. The transitive closure of an edge in a graph is the standard example of a nonlocal property that cannot be expressed by first order logic. It can be expressed in FO(LFP) as follows. Let E be a binary relation that expresses the presence of an edge between its arguments. Then we can see that iterating the positive first order formula ϕ(R, x, y) given by

ϕ(R, x, y) ≡ E(x, y) ∨ ∃z(E(x, z) ∧ R(z, y))

builds the transitive closure relation in stages. In this case, the relation is built stage by stage, and at each stage, vertices that have entered a relation make other vertices that are adjacent to them eligible to enter the relation at the next stage. Notice that the decision of whether a vertex enters the relation is based on the immediate neighborhood of the vertex. Thus, though the resulting property is nonlocal, the information flow used to compute it is stagewise local.
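A direct way to see this stagewise locality is to run the iteration of ϕ on a small graph and record the stages. The sketch below is ours and only an illustration; the graph is arbitrary.

```python
def transitive_closure_stages(E, V):
    """Iterate the positive formula
         phi(R, x, y)  =  E(x, y)  or  exists z ( E(x, z) and R(z, y) )
    and record each stage of the induction."""
    R, stages = set(), []
    while True:
        nxt = {(x, y) for x in V for y in V
               if (x, y) in E                                   # local: the edge itself
               or any((x, z) in E and (z, y) in R for z in V)}  # local: one E-step into R
        if nxt == R:
            return stages
        R = nxt
        stages.append(frozenset(R))

V = {1, 2, 3, 4}
E = {(1, 2), (2, 3), (3, 4)}
for i, stage in enumerate(transitive_closure_stages(E, V), start=1):
    print("stage", i, ":", sorted(stage))
# stage 1 : [(1, 2), (2, 3), (3, 4)]
# stage 2 : adds (1, 3) and (2, 4);  stage 3 : adds (1, 4)
```

Each stage only consults the edge neighborhood of a pair, yet the final relation connects vertices at arbitrary distance.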
The primary technique for demonstrating the limitations of ﬁxed point logics in expressing properties is to consider them a segment of the logic Lk .5.1 The Limitations of LFP Many of the techniques in model theory break down when restricted to ﬁnite models. we will need to understand the limitations of ﬁrst order logic. This has led to much research attention to game theoretic characterizations of various logics. but the factorization of that inﬂuence (encoded in the joint distribution) reveals the stagewise local nature of the interaction. 5. where such local interactions are chained together in a way that variables can exert their inﬂuence to arbitrary lengths. which extends ﬁrst order logic with inﬁnitary connectives. A notable exception is the EhrenfeuchtFra¨ss´ game for ﬁrst order ı e logic. The computation factors through a local property at each stage. separating P from NP) since NP ⊆ PSPACE and the latter class is captured by PFP. 50 . We will now proceed to build the requisite framework. but by chaining many such local factors together. for instance. This picture relates to a Markov random ﬁeld. We have used this simple example just to provide some preliminary intuition. ∞ω One of the central contributions of our work is demonstrating a completely different viewpoint of LFP computations in terms of the concepts of conditional independence and factoring of distributions. we obtain the nonlocal relation of transitive closure. whereas a Markov random ﬁeld is undirected. and ∞ω then use the characterization of expressibility in this logic in terms of kpebble games. both of which are fundamental to statistics and probability theory. In order to arrive at this correspondence. The limitations of ﬁrst order formulae mentioned in the previous section therefore appear at each step of a least ﬁxed point computation. which is also a segment of Lk . This is however not useful for our purpose (namely. Least ﬁxed point is an iteration of ﬁrst order formulas. THE LINK BETWEEN POLYNOMIAL TIME COMPUTATION AND CONDITIONAL INDEPENDENCE 50 compute it is stagewise local. There are important differences however — the ﬂow of LFP computation is directed.
Informally. it is not only the locality 51 . some of the most striking applications of such properties are in graphs with bounded degree. these properties were developed to deal with cases where the neighborhoods of the elements in the structure had bounded diameters. THE LINK BETWEEN POLYNOMIAL TIME COMPUTATION AND CONDITIONAL INDEPENDENCE 51 Viewing LFP as “stagewise ﬁrst order” is central to our analysis. Hanf locality says that whether or not a ﬁrst order formula ϕ holds in a structure depends only on its multiset of isomorphism types of spheres of radius r. The basic idea is that ﬁrst order formulae can only “see” up to a certain distance away from their free variables. we will use some of the normal forms developed in the context of locality properties in ﬁnite model theory. such as the linear time algorithm to evaluate ﬁrst order properties on bounded degree graphs [See96]. Ch. In particular. 4].1 Locality of First Order Logic The local properties of ﬁrst order logic have received considerable research attention and expositions can be found in standard references such as [Lib04. In contrast. but in the scenario where neighborhoods of elements have unbounded diameter. We are interested in factoring complex interactions between variables into their smallest constituent irreducible factors.5. Ch. [EF06. LFP has a natural factorization into its stages. 6]. both notions express properties of combinations of neighborhoods of ﬁxed size. The idea that ﬁrst order formulae are local has been formalized in essentially two different ways. Gaifman locality says that whether or not ϕ holds in a structure depends on the number of elements of that structure having pairwise disjoint rneighborhoods that fulﬁll ﬁrst order formulae of quantiﬁer depth d for some ﬁxed d (which depends on ϕ). This distance is determined by the quantiﬁer rank of the formula. 5. 2]. Clearly. Thus.1. Let us now analyze the limitations of the LFP computation through this viewpoint. Ch. Viewed this way. which are all described by ﬁrst order formulae. Let us pause for a while and see how this ﬁts into our global framework. This has led to two major notions of locality — Hanf locality [Han65] and Gaifman locality [Gai82]. In the literature of ﬁnite model theory. [Imm99.
Deﬁnition 5. We extend this to a notion of distance between tuples from A as follows. We will need both these properties in our analysis. .5. Namely. A A The rneighborhood of a in A is the σn structure Nr (a) whose universe is Br (a). . 1 ≤ j ≤ m}. There is no restriction on n and m above. . . as simply the length of the shortest path between ai and aj in GA . and the n additional constants are interpreted as a1 . we have the notion of distance between a tuple and a singleton element. . bm ). The ball of radius r around a is a set deﬁned by A Br (a) = {b ∈ A : dA (a. . . A each relation R is interpreted as RA restricted to Br (a). an ) and b = (b1 . In particular. . Deﬁnition 5. an . but the exact speciﬁcation of the ﬁnitary nature of the ﬁrst order computation. With the graph deﬁned. The Gaifman graph of a σstructure A is denoted by GA and deﬁned as follows. We need some deﬁnitions in order to state the results. the deﬁnition above also applies to the case where either of them is equal to one. aj of A. b) ≤ r}. . Let a = (a1 . . THE LINK BETWEEN POLYNOMIAL TIME COMPUTATION AND CONDITIONAL INDEPENDENCE 52 that is of interest to us. There is an edge between two nodes a1 and a2 in GA if there is a relation R in σ and a tuple t ∈ RA such that both a1 and a2 appear in t. We will see that what we need is that ﬁrst order logic can only exploit a bounded number of local properties.2. Recall that σn is the expansion of σ by n additional constants. Then dA (a. We are now ready to deﬁne neighborhoods of tuples. The set of nodes of GA is A.3. b) = min{dA (ai . Recall the notation and deﬁnitions from the previous chapter. if L is a logic (or language). Informally. . the Ltype of a tuple is the sum total of the information that can be expressed about it 52 . We recall the notion of a type. Let A be a σstructure and let a be a tuple over A. denoted by d(ai . we have a notion of distance between elements ai . . aj ). bj ) : 1 ≤ i ≤ n.
an ) and Nr (b1 . The ﬁrst relates two different structures. As mentioned earlier. . 2. namely in Nr (a). and those that follow from Gaifman’s theorem. B be σstructures and let m ∈ N. . Note that any A B isomorphism between Nr (a1 . Notation as above. .5. bn ) must send ai to bi for 1 ≤ i ≤ n. Deﬁnition 5. 1.5. the ﬁrst order type of a mtuple in a structure is deﬁned as the set of all FO formulae having m free variables that are satisﬁed by the tuple.6. there are two broad ﬂavors of locality results in literature – those that follow from Hanf’s theorem. A more useful notion is the local type of a tuple. . Deﬁnition 5. quantiﬁcation in such formulas is restricted to the structure Nr (x). either 1. this notion is far too powerful since it characterizes the structure (A. THE LINK BETWEEN POLYNOMIAL TIME COMPUTATION AND CONDITIONAL INDEPENDENCE 53 in the language L. Thus. The following three notions of locality are used in stating the results. Let A. . In what follows. a) up to isomorphism. If for every isomorphism type τ of a rneighborhood of a point. .4. we need a deﬁnition. In other words. Over ﬁnite structures. we may drop the superscript if the underlying structure is clear. 53 . We provide below the locality result due to [FSV95] that is suitable for ﬁnite models. Both A and B have the same number of elements of type τ . The local rtype of a tuple a in A is the type of A a in the substructure induced by the rneighborhood of a in A. To proceed. Formulas whose truth at a tuple a depends only on Br (a) are called rlocal. Deﬁnition 5. Boolean combinations of formulas that are local around the various coordinates xi of x are said to be basic local. Formulas that are rlocal around their variables for some value of r are said to be local. . a neighborhood is a σn structure. . and a type of a neighborhood is an equivalence class of such structures up to isomorphism. [Han65] proved his result for inﬁnite structures. 3. In particular.
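The following sketch (ours, for illustration only) computes the Gaifman graph of a small relational structure and the balls B_r(a) that underlie the locality notions above; the example structure is an arbitrary path graph with a single binary relation.

```python
from collections import deque
from itertools import combinations

def gaifman_graph(universe, relations):
    """Gaifman graph of a relational structure: vertices are the elements of
    the universe; two elements are adjacent iff they co-occur in some tuple
    of some relation."""
    adj = {a: set() for a in universe}
    for rel in relations:
        for tup in rel:
            for a, b in combinations(set(tup), 2):
                adj[a].add(b)
                adj[b].add(a)
    return adj

def ball(adj, a, r):
    """B_r(a): all elements at Gaifman distance at most r from a (BFS)."""
    dist = {a: 0}
    queue = deque([a])
    while queue:
        v = queue.popleft()
        if dist[v] == r:
            continue
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return set(dist)

# Example: a path graph on {0,...,5} given by one binary relation E.
universe = range(6)
E = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)}
adj = gaifman_graph(universe, [E])
print(sorted(ball(adj, 2, 1)))   # 1-neighbourhood of 2 -> [1, 2, 3]
print(sorted(ball(adj, 2, 2)))   # 2-neighbourhood of 2 -> [0, 1, 2, 3, 4]
```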
there exist r. Let ϕ(x) be a formula of quantiﬁer depth q. Furthermore. Notation as above. 54 . We refer the reader to [FSV95] for a discussion comparing the FaginStockmeyerVardi theorem with Hanf’s theorem in the context of applications to ﬁnite model theory. m)equivalent and every element has degree at most l. local formula around x. then they satisfy the same ﬁrst order formulae up to quantiﬁer rank k. 5. Theorem 5. where the φ are rlocal. then A = ϕ(a) ↔ B = ϕ(b). Next we come to Gaifman’s version of locality. xs i=1 φ(xi ) ∧ 1≤i≤j≤s d>2r (xi . m > 0 such that if A and B are threshold (r.9 ([Gai82]). In particular. 54 Theorem 5. and 2.7.8. l > 0. THE LINK BETWEEN POLYNOMIAL TIME COMPUTATION AND CONDITIONAL INDEPENDENCE 2. Then there is a radius r and threshold t such that if A and B have the same multiset of local types up to threshold t. written A ≡k B. and the elements a ∈ A and b ∈ B have the same local type up to radius r. Every FO formula ϕ(x) over a relational vocabulary is equivalent to a Boolean combination of 1.7 ([FSV95]). The Hanf locality lemma for formulae having a single free variable has a simple form and is an easy consequence of Thm. neither theorem seems to imply the other. For each k. sentences of the form s ∃x1 . See [Lin05] for an application to computing simple monadic ﬁxed points on structures of bounded degree in linear time. . Lemma 5. r depends only on k. xj ) . .5. . Both A and B have more than m elements of type τ . m)equivalent. . Then we say that A and B are threshold (r.
we exploit the limitations described in the previous section to build conceptual bridges from least ﬁxed point logic to the MarkovGibbs picture of the preceding section. y).10 ([SB99]). this may seem to be an unlikely union. some subset of elements enters the relation. This changes the local 55 . The key is to see the constructions underlying least ﬁxed point computations through the lens of inﬂuence propagation and conditional independence. In later sections. Namely. This again expresses the bounded number of local properties feature that limits ﬁrst order logic. THE LINK BETWEEN POLYNOMIAL TIME COMPUTATION AND CONDITIONAL INDEPENDENCE 55 In words.5. and where the LFP relation being constructed is monadic. for every ﬁrst order formula. We wish to build a view of ﬁxed point computation as an information propagation algorithm. we will demonstrate this relationship for the case of simple monadic least ﬁxed points. where ϕ is local around y. there is an r such that the truth of the formula on a structure depends only on the number of elements having disjoint rneighborhoods that satisfy certain local formulas. none of the elements of the structure are in the relation being computed. In order to do so. At ﬁrst. we show how to deal with complex ﬁxed points as well. At stage zero of the ﬁxed point computation. let us examine the geometry of information ﬂow during an LFP computation. At the ﬁrst stage.2 Simple Monadic LFP and Conditional Independence In this section. The following normal form for ﬁrst order logic that was developed in an attempt to merge some of the ideas from Hanf and Gaifman locality. a FO(LFP) formula without any nesting or simultaneous induction. But we will establish that there are fundamental conceptual relationships between the directed Markovian picture and least ﬁxed point computations. Every ﬁrstorder sentence is logically equivalent to one of the form ∃x1 · · · ∃xl ∀yϕ(x. In this section. Theorem 5. 5.
Consequently. Thus. Thus. In that case. On a graph of bounded degree. we observe that The inﬂuence of an element during LFP computation propagates in a similar manner to the inﬂuence of a random variable in a directed Markov ﬁeld. x) changes local neighborhoods of elements at each stage of the computation.11. the fundamental vehicle of this information propagation is that a ﬁxed point computation ϕ(R.5. This correspondence is important to us. inﬂuence ﬂows in the direction of the stages of the LFP computation. This propagation is 1. and the changes “propagate” through the structure. directed. Lemma 5. This ensures that once an element is inserted into the relation that is being computed. The directed property comes from the positivity of the ﬁrst order formula that is being iterated. In other words. it is never removed. Let us try to uncover the underlying principles that cause it. Due to the global changes in the multiset of local types. Furthermore. and 2. This correspondence is most striking in the case of bounded degree structures. and the vertices that lie in these local neighborhoods change their local type. but only through its inﬂuence on various local neighborhoods. we have only O(1) local types. This process continues. this inﬂuence ﬂow is local in the following sense: the inﬂuence of an element can propagate throughout the structure. In order to determine whether an element in a structure satisﬁes a ﬁrst order formula we need (a) the multiset of local rtypes in the structure (also known 56 . there are only a ﬁxed number of local rtypes. there is a ﬁxed number of nonisomorphic neighborhoods with radius r. more elements in the structure become eligible for inclusion into the relation at the next stage. relies on a bounded number of local neighborhoods at each stage. THE LINK BETWEEN POLYNOMIAL TIME COMPUTATION AND CONDITIONAL INDEPENDENCE 56 neighborhoods of these elements.
12. THE LINK BETWEEN POLYNOMIAL TIME COMPUTATION AND CONDITIONAL INDEPENDENCE 57 as its global type) for some value of r. in a purely stagewise local manner. For example. Once it enters the relation. This type potentially changes with each stage of the LFP. At this point. The FO formula that is being iterated can only express a property about some bounded number of such local neighborhoods. by threshold Hanf. Remark 5.5. In particular. and such changes render them eligible. we only need to know the multiset of local types up to a certain threshold. it will do so. This is a Markov property: the inﬂuence of an element upon another must factor entirely through the local neighborhood of the latter. we will cross the Hanf threshold for the multiset of rtypes. in the Gaifman form. we will be making a decision of whether an element enters the relation based solely on its local rtype. knowing some information about certain local neighborhoods renders the rest of the information about variable values that have entered the relation in previous stages of the graph superﬂuous. The same concept can be expressed in the language of sufﬁcient statistics. This is how the computation proceeds. except that we have to consider all the local neighborhoods in the structure. but not from elements that will enter the relation subsequently. here the bounded nature of FO comes in. For large enough structures. In the more general case where degrees are not bounded. Knowing this statistic gives us conditional independence from the values of other elements that have already entered the relation previously. However. we still have factoring through local neighborhoods. and so on. 57 . Gaifman’s theorem says that for ﬁrst order properties. Namely. there exists a sufﬁcient statistic that is gathered locally at a bounded number of elements. there are s distinguished disjoint neighborhoods that must satisfy some local condition. and (b) the local type of the element. it changes the local rtype of all those elements which lie within a rneighborhood of it. Furthermore. This is similar to the directed Markov picture where there is conditional independence of any variable from nondescendants given the value of the parents. At the time when this change renders the element eligible for entering the relation.
Figure 5.1: Range limited LFP computation process viewed as conditional independencies. (The figure depicts interacting variables X1, X2, . . . , Xn−1, Xn, highly constrained by one another; at each stage the LFP computation gathers a bounded number of local statistics Φ1, Φ2, . . . , Φs−1, Φs and assumes conditional independence after these statistics are obtained, which yields conditional independence and factorization over a larger directed model called the ENSP, developed in Chapter 7.)
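To make the picture in Fig. 5.1 concrete, here is a small simulation (ours, purely illustrative) of a monadic fixed point whose stage update consults only each element's radius-1 neighbourhood; the particular rule and the path structure are invented for the example.

```python
def monadic_lfp_stages(universe, adj, initially, local_rule):
    """Run a monadic fixed point whose stage update consults only the
    radius-1 neighbourhood of each element: an element enters the relation as
    soon as local_rule(element, neighbour_values) holds, and is never removed.
    Returns the stage at which each element entered."""
    in_R = {a: (a in initially) for a in universe}
    stage_of = {a: 0 for a in initially}
    stage, changed = 0, True
    while changed:
        changed = False
        stage += 1
        snapshot = dict(in_R)            # the local statistic: neighbour values
        for a in universe:               # taken from the *previous* stage only
            if not in_R[a] and local_rule(a, [snapshot[b] for b in adj[a]]):
                in_R[a] = True
                stage_of[a] = stage
                changed = True
    return stage_of

# Influence spreads from element 0 along a path: an element enters once at
# least one neighbour already belongs to the relation.
universe = list(range(6))
adj = {i: {j for j in universe if abs(i - j) == 1} for i in universe}
print(monadic_lfp_stages(universe, adj, initially={0},
                         local_rule=lambda a, nbrs: any(nbrs)))
# -> {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5}
```

The decision for each element depends only on a local statistic of the previous stage, and once an element enters it never leaves, mirroring the directed, stagewise-local propagation described above.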
At this point, we have exhibited a correspondence between two apparently very different formalisms. This correspondence is illustrated in Fig. 5.1.
leading once again to poly(log n) parametrization. we have the following situation. This is because even though the interactions are of O(n) range. Next. we showed that the natural “factorization” of LFP into ﬁrst order logic. namely. Steps 1 and 2 involve standard constructions in ﬁnite model theory. 2. Let us examine the case of a 2ary relation that is being computed. the computation terminates in poly(n) steps. Recall the discussion 60 . First. since for any element a of the structure. c. 1. we use the simultaneous induction lemma for ﬁxed point logic to encode the relation to be computed as a “section” of a single LFP relation of higher arity. How can we show that this picture is the same for complex ﬁxed points? We accomplish this in stages. The key point to note is that we still have only poly(log n) parametrization. Every pair of elements occurs in the set of 2tuples. At this point. giving us a economical parametrization of the state space. we use the transitivity theorem for ﬁxed point logic to move nested ﬁxed points into simultaneous ﬁxed points without nesting. In this case. This means that when there is a change to a 2tuple containing a. which we recall in Appendix A. At this point. This means that the neighborhood of every pair is O(n). we see that we are in the situation of O(n) range interactions. instead of single elements. This will change the distance properties of the resulting structure of ktuples. they are severely value limited. we are now working with ktuples. coupled with the bounded local property of ﬁrst order logic can be used to exhibit conditional independencies in the relation being computed. Put another way. But the argument we provided was for simple ﬁxed points having one free variable.3 Conditional Independence in Complex Fixed Points In the previous sections.5. for monadic least ﬁxed points. that change affects the neighborhoods of O(n) other 2tuples. d. THE LINK BETWEEN POLYNOMIAL TIME COMPUTATION AND CONDITIONAL INDEPENDENCE 60 5. every other element b. · · · occurs in a pair along with a. for a k ﬁxed for all problem sizes. though the interactions are indeed between O(n) elements at a time.
This happens because the behavior of one variable is dependent on all n − 1 others simultaneously.5. In this way. 61 . one can consider a construction known as the canonical structure due originally to [DLW95] who used it to provide a model theoretic proof of the important theorem in [AV95] that P = PSPACE if and only if LFP = PFP. c > 1 different values. THE LINK BETWEEN POLYNOMIAL TIME COMPUTATION AND CONDITIONAL INDEPENDENCE 61 of the two kinds of poly(log n) parameterizations (range limited and value limited) from Chapter 3. The issue one faces is that there is a linear order on the canonical structure. the number of joint values taken by the system of n variables is only 2poly(log n) . not just for ordered structures. The basic nature of information gathering and processing in LFP does not change when the arity of the computation rises. Note that there are elegant ways to work with the space of equivalence classes of ktuples with equivalence under ﬁrst order logic with kvariables. In other words. We could work over a product structure where LFP captures the class of polynomial time computable queries. it has the capability to utilize only poly(log n) amount of that information in the following precise sense. It requires. this does not pose a problem for encoding instances of kSAT. In particular. but again the parametrization is only poly(log n). A “true” joint distribution over n takes cn . we have to work in a structure whose elements are ktuples of our original structure. a kary LFP over the original structure would be a monadic LFP over this structure. We also need to ensure that our original structure has a relation that allows an order to be established on ktuples. Remark 5.13. We will actually build a graphical model to give us the parameterization in Chapter 8. Note that this is for all structures. But since the LFP terminates in polynomially many steps. this can not be the case since the resulting distribution can be parameterized far too economically. therefore. For instance. O(cn ) independent parameters to specify. It merely adds the ability to gather polynomially more information at each stage taken from O(n) variates at a time. Although each element sees O(n) variates at each stage of the LFP. The O(n) nature of interactions remains. In cases of joint distributions of n covariates which take only 2poly(log n) values.
The simple scheme described above sufﬁces for our purposes. 204] where the result is stated for successor structures. p. 5. there is no probabilistic picture. We are only describing a fully deterministic computation. Remark 5. When we examine the properties in the aggregate of LFP running over ensembles.2. it holds for structures equipped with a successor relation (and no linear ordering). §11. after extracting a statistic from the local neighborhoods of the underlying structure. THE LINK BETWEEN POLYNOMIAL TIME COMPUTATION AND CONDITIONAL INDEPENDENCE 62 which renders the Gaifman graph trivial (totally connected). Likewise.5] for more details on canonical structures. Though the ImmermanVardi theorem is usually stated for ordered structures. 62 . The beneﬁt of equipping our structures only with a successor structure is that the Gaifman graph remains nontrivial. The distribution we seek will arise when we examine the aggregate behavior of LFP over ensembles of structures that come from ensembles of constraint satisfaction problems (CSPs) such as random kSAT.14. This gives us the setting where we can exploit the full machinery of graphical models of Chapter 2. we will ﬁnd the following.5. §11. Thus far. The “bounded number of local” property of each stage of monadic LFP computation manifests as conditional independencies in the distribution. See [Lib04.4 Aggregate Properties of LFP over Ensembles We have shown that any polynomial time computation will update its relation according to a certain Markov type property on the space of ktypes of the underlying structure. or a distribution that we can analyze. value limited interactions in higher arity LFP computations also lead to distribution of solutions that are poly(log n)parametrizable. See [LR03. making the distribution of solutions poly(log n)parametrizable.
63 . we will bring in ideas from statistical physics into the proof.5. THE LINK BETWEEN POLYNOMIAL TIME COMPUTATION AND CONDITIONAL INDEPENDENCE 63 Before we examine the distributions arising from LFP acting on ensembles of structures. We begin this in the next chapter.
Cm } uniformly from the 2k n k possible clauses having k variables. 3SAT— might be NPcomplete. . An instance is generated by drawing each of the m clauses {C1 . Furthermore. The 1RSB Ansatz of Statistical Physics 6. . xn }. . The entire ensemble of ran dom kSAT having m clauses over n literals will be denoted by SATk (n. . m). dating back at least to [CF86]. . 64 . such “easy” instances lay in certain well deﬁned regimes of the CSP. . The decision problem of whether a satisfying assignment to the variables exists is NPcomplete for k ≥ 3. We will see this behavior in some detail for the speciﬁc case of the ensemble known as random kSAT. researchers were motivated to study randomly generated ensembles of CSPs having certain parameters that would specify which regime the instances of the ensemble belonged to. each of whom is a disjunction of k literals taken from n variables {x1 . The ensemble known as random kSAT consists of instances of kSAT generated randomly as follows. While a given CSP — say. . many instances of the CSP might be quite easy to solve. Thus. An instance of kSAT is a propositional formula in conjunctive normal form Φ = C1 ∧ C2 ∧ · · · ∧ Cm having m clauses Ci .6. . even using fairly simple algorithms. while “harder” instances lay in clearly separated regimes.1 Ensembles and Phase Transitions The study of random ensembles of various constraint satisfaction problems (CSPs) is over two decades old.
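For concreteness, the following sketch (ours) draws an instance from the ensemble just described. It uses one standard sampling convention, k distinct variables per clause with each literal negated independently, which may differ in inessential ways from the convention intended above.

```python
import random

def random_ksat(n, alpha, k=3, seed=0):
    """Draw a random k-SAT instance with n variables and m = alpha*n clauses.
    Each clause chooses k distinct variables uniformly at random and negates
    each one independently with probability 1/2.  A clause is a tuple of
    signed variable indices, e.g. (3, -7, 12) stands for x3 or not-x7 or x12."""
    rng = random.Random(seed)
    m = int(alpha * n)
    clauses = []
    for _ in range(m):
        chosen = rng.sample(range(1, n + 1), k)
        clauses.append(tuple(v if rng.random() < 0.5 else -v for v in chosen))
    return clauses

phi = random_ksat(n=20, alpha=4.0, k=3)
print(len(phi), "clauses; first three:", phi[:3])
```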
A single instance of this ensemble will be denoted by Φk(n, m). The clause density, denoted by α and defined as α := m/n, is the single most important parameter that controls the geometry of the solution space of random k-SAT. Thus, we will mostly be interested in the case where every formula in the ensemble has clause density α. We will denote this ensemble by SATk(n, α), and an individual formula in it by Φk(n, α).

Random CSPs such as k-SAT have attracted the attention of physicists because they model disordered systems such as spin glasses, where the Ising spin of each particle is a binary variable ("up" or "down") and must satisfy some constraints that are expressed in terms of the spins of other particles. The energy of such a system can then be measured by the number of unsatisfied clauses of a certain k-SAT instance, where the clauses of the formula model the constraints upon the spins. The case of zero energy then corresponds to a solution to the k-SAT instance. The following formulation is due to [MZ97]. First we translate the Boolean variables xi to Ising variables Si in the standard way, namely Si = −(−1)^{xi}. Then we introduce new variables Cli as follows. The variable Cli is equal to 1 if the clause Cl contains xi, it is −1 if the clause contains ¬xi, and is zero if neither appears in the clause. In this way, the sum ∑_{i=1}^{n} Cli Si measures the satisfiability of clause Cl. Specifically, if ∑_{i=1}^{n} Cli Si + k > 0, the clause is satisfied by the Ising variables. The energy of the system is then measured by the Hamiltonian

H = ∑_{l=1}^{m} δ( ∑_{i=1}^{n} Cli Si , −k ).
Here δ(i, j) is the Kronecker delta. Thus, satisfaction of the kSAT instance translates to vanishing of this Hamiltonian. Statistical mechanics then offers techniques such as replica symmetry, to analyze the macroscopic properties of this ensemble. Also very interesting from the physicist’s point of view is the presence of a sharp phase transition [CKT91, MSL92] (see also [KS94]) between satisﬁable and unsatisﬁable regimes of random kSAT. Namely, empirical evidence suggested that the properties of this ensemble undergoes a clearly deﬁned transition when the clause density is varied. This transition is conjectured to be as follows. For 65
each value of k, there exists a transition threshold αc (k) such that with probability approaching 1 as n → ∞ (called the Thermodynamic limit by physicists), • if α < αc (k), an instance of random kSAT is satisﬁable. Hence this region is called the SAT phase. • If α > αc (k), an instance of random kSAT is unsatisﬁable. This region is known as the unSAT phase. There has been intense research attention on determining the numerical value of the threshold between the SAT and unSAT phases as a function of k. [Fri99] provides a sharp but nonuniform construction (namely, the value αc is a function of the problem size, and is conjectured to converge as n → ∞). Functional upper bounds have been obtained using the ﬁrst moment method [MA02] and improved using the second moment method [AP04] that improves as k gets larger.
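Returning to the Ising formulation above, the sketch below (ours, illustrative) evaluates the Hamiltonian H, that is, counts violated clauses, for a given assignment; the clause encoding matches the generator sketched earlier.

```python
def energy(clauses, assignment, k=3):
    """The Hamiltonian H = sum over clauses l of delta( sum_i C_li * S_i , -k ):
    S_i = +1 if x_i is true and -1 otherwise; C_li is +1, -1 or 0 according to
    whether x_i appears positively, negatively, or not at all in clause l.
    A clause contributes 1 exactly when all of its k literals are falsified,
    so H counts the unsatisfied clauses and vanishes on satisfying assignments."""
    H = 0
    for clause in clauses:
        s = sum((1 if lit > 0 else -1) * (1 if assignment[abs(lit)] else -1)
                for lit in clause)
        H += 1 if s == -k else 0          # Kronecker delta(s, -k)
    return H

clauses = [(1, -2, 3), (-1, 2, 4)]
assignment = {1: False, 2: True, 3: False, 4: False}
print(energy(clauses, assignment))        # -> 1 (only the first clause is violated)
```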
6.2 The d1RSB Phase
More recently, another thread on this crossroad has originated once again from statistical physics and is most germane to our perspective. This is the work in the progression [MZ97], [BMW00], [MZ02], and [MPZ02] that studies the evolution of the solution space of random kSAT as the constraint density increases towards the transition threshold. In these papers, physicists have conjectured that there is a second threshold that divides the SAT phase into two — an “easy” SAT phase, and a “hard” SAT phase. In both phases, there is a solution with high probability, but while in the easy phase one giant connected cluster of solutions contains almost all the solutions, in the hard phase this giant cluster shatters into exponentially many communities that are far apart from each other in terms of least Hamming distance between solutions that lie in distinct communities. Furthermore, these communities shrink and recede maximally far apart as the constraint density is increased towards the SATunSAT threshold. As this threshold is crossed, they vanish altogether. 66
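The clustering picture can be mimicked on toy instances by brute force: enumerate the solutions and join two solutions when they differ in a bounded number of variables. The sketch below is ours and purely illustrative; actual d1RSB clusters involve large n and O(n) Hamming separation, far beyond what this enumeration can reach.

```python
from itertools import product

def satisfying_assignments(clauses, n):
    """Brute-force the solution space of a small formula; exponential in n,
    so this is for toy illustration only."""
    sols = []
    for bits in product([False, True], repeat=n):
        a = {i + 1: bits[i] for i in range(n)}
        if all(any((lit > 0) == a[abs(lit)] for lit in cl) for cl in clauses):
            sols.append(bits)
    return sols

def clusters(solutions, max_flips=1):
    """Group solutions into connected components, where two solutions are
    adjacent if they differ in at most max_flips variables.  These components
    play the role of the solution 'communities' discussed in the text."""
    comps, unseen = [], set(solutions)
    while unseen:
        frontier = {unseen.pop()}
        comp = set()
        while frontier:
            s = frontier.pop()
            comp.add(s)
            near = {t for t in unseen
                    if sum(x != y for x, y in zip(s, t)) <= max_flips}
            unseen -= near
            frontier |= near
        comps.append(comp)
    return comps

clauses = [(1, 2, 3), (-1, -2, 3), (1, -2, -3)]
sols = satisfying_assignments(clauses, n=3)
print(len(sols), "solutions in", len(clusters(sols)), "clusters")
# -> 5 solutions in 2 clusters
```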
As the clause density is increased, a picture known as the “1RSB hypothesis” emerges that is illustrated in Fig. 6.1, and described below. RS For α < αd , a problem has many solutions, but they all form one giant cluster within which going from one solution to another involves ﬂipping only a ﬁnite (bounded) set of variables. This is the replica symmetric phase. d1RSB At some value of α = αd which is below αc , it has been observed that the space of solutions splits up into “communities” of solutions such that solutions within a community are close to one another, but are far away from the solutions in any other community. This effect is known as shattering [ACO08]. Within a community, ﬂipping a bounded ﬁnite number of variable assignments on one satisfying takes one to another satisfying assignment. But to go from one satisfying assignment in one community to a satisfying assignment in another, one has to ﬂip a fraction of the set of variables and therefore encounters what physicists would consider an “energy barrier” between states. This is the dynamical one step replica symmetry breaking phase. unSAT Above the SATunSAT threshold, the formulas of random kSAT are unsatisﬁable with high probability. Using statistical physics methods, [KMRT+ 07] obtained another phase that lies between d1RSB and unSAT. In this phase, known as 1RSB (one step replica symmetry breaking), there is a “condensation” of the solution space into a subexponential number of clusters, and the sizes of these clusters go to zero as the transition occurs, after which there are no more solutions. This phase has not been proven rigorously thus far to our knowledge and we will not revisit it in this work. The 1RSB hypothesis has been proven rigorously for high values of k. Specifically, the existence of the d1RSB phase has been proven rigorously for the case of k > 8, starting with [MMZ05] (see also [DMMZ08]) who showed the existence of clusters in a certain region of the SAT phase using ﬁrst moment methods. Later, [ART06] rigorously proved that there exist exponentially many clus67
in the region of constraint density α ∈ [αd .6. one may obtain the core of the cluster by “peeling away” variable assignments that. the fraction of variables that take the same value in the entire cluster (the socalled frozen variables) goes to one as the SATunSAT threshold is approached. α αd αc Figure 6.1: The clustering of solutions just before the SATunSAT threshold. In summary. loosely speaking occur only in clauses that are satisﬁed by other 68 . [ART06]. as well as conﬁrmed the O(n) Hamming separation between clusters. and [ACO08]. Between αd and αc . We ﬁrst need the notion of the core of a cluster.1 Cores and Frozen Variables In this section. Above αc . which is indicated by the unﬁlled circle. Given any solution in a cluster. αc ]. there are no more solutions. the solutions break up into exponentially many communities. the solution space is comprised of exponentially many communities of solutions which require a fraction of the variable assignments to be ﬂipped in order to move between each other. Further [ACO08] obtained analytical expressions for the threshold at which the solution space of random kSAT (as also two other CSPs — random graph coloring and random hypergraph 2colorability) shatters. Below αd .2. 6. THE 1RSB ANSATZ OF STATISTICAL PHYSICS 68 ters in the d1RSB phase and showed that within any cluster. the space of solution is largely connected. we reproduce results about the distribution of variable assignments within each cluster of the d1RSB phase from [MMW07].
we say that a variable in a partial assignment is free when each clause it occurs in has at least one other variable that satisﬁes the clause. we have no way to tell that the core will not be the all ∗ partial assignment. assign it a ∗. with each variable being assigned a 0. This process will eventually lead to a ﬁxed point.6. r) < αc such that for all α ∈ [α(k. THE 1RSB ANSATZ OF STATISTICAL PHYSICS variable assignments. 1. to obtain the core of a cluster. it follows that frozen variables take the same value throughout the cluster. there exists a clause density α(k. . . αc ]. if the variable xi takes value 1 in the core of a cluster. This process leads to the core of the cluster. we repeat the following starting with any solution in the cluster: if a variable is free. r). Finally. ∗}. However. xn ) as an assignment of each variable to a value in {0. . we do not know whether there are any frozen variables at all. almost every variable in a core is frozen as we increase the clause density towards the SATunSAT threshold. For example. . 69 To get a formal deﬁnition. ﬁrst we deﬁne a partial assignment of a set of variables (x1 . 1 or a ∗. Apriori. 69 . Note that since the core can be arrived at starting from any choice of an initial solution in the cluster. with probability going to 1 in the thermodynamic limit. Of obvious interest are those variables that are assigned 0 or 1. For every r ∈ [0. Namely. These take both values 0 and 1 in the cluster. or has as assignment to ∗. and that is the core of the cluster. Next.1 ([ART06]). These variables are said to be frozen. [ART06] proved that for high enough values of k. The nonfrozen variables are those that are assigned the value ∗ in the core. Theorem 6. We may easily see that the core is not dependent upon the choice of the initial solution. then every solution lying in the cluster has xi assigned the value 1. 1 ] there is a constant kr such that for all 2 k ≥ kr . The ∗ assignment is akin to a “joker state” which can take whichever value is most useful in order to satisfy the kSAT formula. Clearly the number of ∗ variables is a measure of the internal entropy (and therefore the size) of a cluster since it is only these variables whose values vary within the cluster. What does the core of a cluster look like? Recall that the core is itself a partial assignment.
fewer than rn variables take the value ∗. but informally cores are too large to pass through the bottlenecks that the stagewise ﬁrst order LFP algorithms create. which means that when nontrivial cores do exist ( [ART06] proved their existence for k ≥ 9). By bounding the probability of this event. there exist α < αc (k) such that with high probability. this sort of interaction cannot be dealt with by LFP algorithms. As the reader may imagine after reading the previous chapters. they must involve a fraction of all the variables in the formula. cannot be dealt with using an LFP algorithm. 2. For every k ≥ 9. This gives us the corollary.6. but only when they can be factored into interactions of degree poly(log n) or are value limited. αn) has frozen variables. In other words. [MMW07] obtained a lower bound on the size of cores. But the appearance of cores is equivalent to the onset of O(n) degree interactions which cannot be further factored into poly(log n) degree interactions. αn) has at least (1 − r)n frozen variables. The bound is linear. this core is instantiated amply in the solution space (by that we mean it takes exponentially many values in those many clusters of the d1RSB phase). We will need more work to make this precise. every cluster of the solution space of Φk (n. 70 . caused by increasing the clause density sufﬁciently.2 ([ART06]). 70 Corollary 6. and are ample. This may also be interpreted as follows. a core may be thought of as the onset of a large single interaction of degree O(n) among the variables. Algorithms based on LFP can tackle long range interactions between variables. THE 1RSB ANSATZ OF STATISTICAL PHYSICS asymptotically almost surely 1. See also the remark at the end of this section. Furthermore. We end this section with a physical picture of what forms a core. then these clauses must have literals that come from a set of at most C variables. If a formula Φ has a core with C clauses. every cluster of solutions of Φk (n. Such ample irreducible O(n) interactions. Note that this picture is known to hold only for k ≥ 9 and is an open question for k < 9.
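The peeling process described above is mechanical enough to state as code. The sketch below is ours; it applies the "free variable becomes ∗" rule repeatedly to a toy formula, using the same clause encoding as the earlier sketches.

```python
def core(clauses, assignment):
    """Peel a satisfying assignment down to the core of its cluster: repeatedly
    mark as '*' every free variable, i.e. every variable all of whose clauses
    are already satisfied by some other non-'*' variable.  The result is the
    fixed point of this process and does not depend on the peeling order."""
    val = dict(assignment)                       # variable -> True / False / '*'

    def lit_satisfied(lit):
        v = val[abs(lit)]
        return v != '*' and (lit > 0) == v

    changed = True
    while changed:
        changed = False
        for var in val:
            if val[var] == '*':
                continue
            clauses_of_var = [cl for cl in clauses if var in (abs(l) for l in cl)]
            if all(any(lit_satisfied(l) for l in cl if abs(l) != var)
                   for cl in clauses_of_var):
                val[var] = '*'                   # the 'joker' state
                changed = True
    return val

clauses = [(1, 2, 3), (-1, -2, 3)]
print(core(clauses, {1: True, 2: False, 3: True}))
# -> {1: '*', 2: '*', 3: True}: only x3 is pinned down by this toy core.
```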
1. We have already noted that this is because LFP algorithms factor through first order computations, and in a first order computation the decision of whether an element is to enter the relation being computed is based on information collected from local neighborhoods and combined in a bounded fashion. This bottleneck is too small for a core to factor through in range limited LFP. The precise statement of this intuitive picture will be provided in the next chapter, when we build our conditional independence hierarchies.

2. The ampleness precludes value limited interactions as well, as we shall see.

3. The freezing of variables in cores is known to happen only for k ≥ 9 [ART06]; it remains open for k ≤ 8. Indeed, for low values of k such as k = 3, there is empirical evidence that this phenomenon does not take place [MMW05]; see also the discussion in [ART06, §1]. Hence, our separation of complexity classes needs the regime of k ≥ 9.

6.2 Performance of Known Algorithms in the d1RSB Phase

Now that we have surveyed the known results about the geometry of the space of solutions in this region, we turn to the question of how the two are related. We end this chapter with a brief overview of the performance of known algorithms as a function of the clause density, and pointers to more detailed surveys; see [ACO08] and [CO09]. Beginning with [CKT91] and [MSL92], there has been an understanding that hard instances of random k-SAT tend to occur when the constraint density α is near the transition threshold, and that this behavior is similar to phase transitions in spin glasses [KS94]. It has been empirically observed that the onset of the d1RSB transition seems to coincide with the constraint density at which traditional solvers tend to exhibit exponential slowdown; see also [CO09] for the best current algorithm, along with a comparison of various other algorithms to it. Indeed, while both regimes in SAT have solutions with high probability, the ease of finding a solution differs quite dramatically for traditional SAT solvers, due to a clustering of the solution space into numerous communities that are far apart
from each other in terms of Hamming distance. Hence, in regimes where we know solutions exist, we are currently unable to find them in polynomial time. In particular, for clause densities above O(2^k/k), well below the SAT-unSAT threshold, no algorithms are known to produce solutions in polynomial time with probability Ω(1), neither on the basis of rigorous analysis, nor of empirical analysis, nor any other evidence [CO09]. Compare this to the SAT-unSAT threshold, which is asymptotically 2^k ln 2. Please see [CO09] for the best known algorithm, which does solve SAT instances with nonvanishing probability for densities up to 2^k ω(k)/k for any sequence ω(k) → ∞. See [ACO08] for proofs that the clause density where all known polynomial time algorithms fail on NP-complete problems such as k-SAT and graph coloring coincides with the onset of the d1RSB phase in these problems. This clause density threshold for the onset of the d1RSB phase is (2^k/k) ln k [ACO08]; the earlier [ART06] had established the existence of shattering and freezing of variables within cores for α = Θ(2^k). Our work will explain that this is indeed fundamentally a limitation of polynomial time algorithms.

The significance of the value of k and of clause densities above (2^k/k) ln k is as follows. For our separation of complexity classes, we will require all known properties of the d1RSB phase: namely, the exponentially many clusters, the freezing of variables within clusters, and the O(n) variable changes required to move from one cluster to another. By the results of [ART06] and [ACO08, §2, Rem. 2], we are guaranteed the presence of the full d1RSB phenomenon only for k ≥ 9; these properties are not known to hold except for k ≥ 9 and clause densities above (2^k/k) ln k. Thus, we will work with random k-SAT in the k ≥ 9 regime, with the clause density sufficiently high so that we are in the d1RSB phase. This is because in the d1RSB phase, the distribution of solutions is both irreducibly correlated at ranges O(n) and ample, precluding both range and value limited parametrizations. Specifically, in such phases (for k ≥ 9), the solution space geometry is not expressible as a mixture of range or value limited poly(log n)-parametrizable pieces.
Incomplete algorithms are a class of algorithms that do not always find a solution when one exists, nor do they indicate the lack of a solution except to the extent that they were unable to find one. Incomplete algorithms are obviously very important for hard regimes of constraint satisfaction problems, since we do not have complete algorithms with economical running times in these regimes. More recently, a breakthrough for incomplete algorithms in this field came with [MPZ02], which used the cavity method from spin glass theory to construct an algorithm named survey propagation that does very well on instances of random k-SAT with constraint density above the aforementioned clustering threshold, and continues to perform well very close to the threshold αc for low values of k. The algorithm uses the 1RSB hypothesis about the clustering of the solution space into numerous communities, and seems to scale as n log n in this region. See also [KMRT+06]. The behavior of survey propagation for higher values of k is still being researched.

We should also point out that the experimental behavior of algorithms for k-SAT is largely characterized for lower values of k = 2, 3, 4, where the full d1RSB picture is not known to hold. The original work reported in [MPZ02] was on 3-SAT; the experimental behavior of algorithms reported in [MRTS07], for instance, is on random 4-SAT. It should be noted that there is empirical evidence that the d1RSB phase is not present in random 3-SAT in the following sense: the cores in the clusters of random 3-SAT are trivial, by which we mean that they tend to be the all-∗ core, unlike the case k ≥ 9, where [ART06] show the existence of nontrivial cores for almost all clusters after the d1RSB threshold. We are not aware of experimental work showing the efficacy (even under mild requirements) of any algorithm on k ≥ 9 after the onset of the d1RSB phase with nontrivial cores.
7 Random Graph Ensembles

We will use factor graphs as a convenient means to encode various properties of the random k-SAT ensemble. In this section we introduce the factor graph ensembles that represent random k-SAT. Our treatment of this section follows [MM09, Chapter 9].

Definition 7.1. The random k-factor graph ensemble, denoted by Gk(n, m), consists of graphs having n variable nodes and m function nodes, constructed as follows. A graph in the ensemble is constructed by picking, for each of the m function nodes in the graph, a k-tuple of variables uniformly from the n-choose-k possibilities for such a k-tuple chosen from the n variables, and connecting the function node to these k variables. Graphs constructed in this manner may have two function nodes connected to the same k-tuple of variables. In this ensemble, the function nodes all have degree k, while the degree of the variable nodes is a random variable with expectation km/n.

Definition 7.2. The random (k, α)-factor graph ensemble, denoted by Gk(n, α), consists of graphs constructed as follows. For each of the n-choose-k k-tuples of variables, with probability αn/(n choose k) a function node that connects to only these k variables is added to the graph. In this ensemble, the number of function nodes is a random variable with expectation αn, and the degree of the variable nodes is a random variable with expectation αk.
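The two constructions above are simple to simulate. The following sketch is our own illustration of sampling from both ensembles; the particular parameter values at the end are assumptions made only for the demonstration.

```python
# Sampling from the two random factor graph ensembles defined above.
# A factor graph is represented as a list of function nodes, each a k-tuple of variables.
import itertools
import random
from math import comb

def sample_Gk_nm(n, m, k, rng=random):
    """G_k(n, m): m function nodes, each picking a uniform k-tuple of the n variables."""
    return [tuple(sorted(rng.sample(range(n), k))) for _ in range(m)]

def sample_Gk_alpha(n, alpha, k, rng=random):
    """G_k(n, alpha): each of the C(n, k) k-tuples is included independently with
    probability alpha*n / C(n, k); the number of factors then has mean alpha*n."""
    p = alpha * n / comb(n, k)
    return [t for t in itertools.combinations(range(n), k) if rng.random() < p]

# Assumed toy parameters: n = 20 variables, k = 3, clause density alpha = 4.2.
factors = sample_Gk_nm(n=20, m=84, k=3)
degree = [sum(v in f for f in factors) for v in range(20)]
print(len(factors), sum(degree) / 20)   # 84 function nodes, mean degree ~ k*m/n = 12.6
```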
We will be interested in the thermodynamic limit n, m → ∞ with the ratio α := m/n held constant. In this case, both ensembles converge in the properties that are important to us, and both can be seen as the underlying factor graph ensembles of our random k-SAT ensemble SATk(n, α) (see Chapter 6 for definitions and our notation for random k-SAT ensembles).

7.1 Properties of Factor Graph Ensembles

With the definitions in place, we are ready to describe two properties of random graph ensembles that are pertinent to our problem.

7.1.1 Locally Tree-Like Property

The first property provides us with intuition on why algorithms find it so hard to put together local information to form a global perspective in CSPs. We have seen in Chapter 5 that the propagation of influence of variables during an LFP computation is stagewise-local. This is really the fundamental limitation of LFP that we seek to exploit. In order to understand why this is a limitation, we need to examine what local neighborhoods of the factor graphs underlying NP-complete problems like k-SAT look like in hard phases such as d1RSB. In such phases, there are many extensive (meaning O(n)) correlations between variables that arise due to loops of sizes O(log n) and above. However, remarkably, such graphs are locally tree-like: there are no cycles in an O(1)-sized neighborhood of any vertex as the size of the graph goes to infinity [MM09, §9.5].

One may demonstrate this for the Erdős-Rényi random graph as follows. Here, there are n vertices, and there is an edge between any two with probability p = c/n, where c is a constant that parametrizes the density of the graph; edges are "drawn" uniformly and independently of each other. Consider the probability of a certain graph (V, E) occurring as a subgraph of the Erdős-Rényi graph. Such a graph can occur in roughly n^|V| positions (more precisely, in (n choose |V|) of them), and at each position the probability of the graph structure occurring is p^|E| (1 − p)^((|V| choose 2) − |E|). Applying Stirling's approximation, we see that such a graph occurs asymptotically O(n^(|V| − |E|)) times. If the graph is connected, then |V| ≤ |E| + 1, with equality only for trees. Thus, in the limit n → ∞, finite connected graphs that contain a cycle have vanishing probability of occurring in finite neighborhoods of any element.
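The following small simulation is our own numerical illustration of this property; the edge density c = 2 and the radius r = 2 are assumed values chosen only for the demonstration.

```python
# In a sparse Erdos-Renyi graph G(n, c/n), the r-neighborhood of a typical vertex is a
# tree (contains no cycle) with probability tending to 1 as n grows.
import random
from collections import defaultdict

def erdos_renyi(n, c, rng=random):
    adj = defaultdict(set)
    p = c / n
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def ball_is_tree(adj, root, r):
    """BFS to depth r; the ball is a tree iff #edges = #vertices - 1 inside it."""
    dist = {root: 0}
    frontier = [root]
    while frontier:
        nxt = []
        for u in frontier:
            if dist[u] == r:
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    ball = set(dist)
    edges = sum(1 for u in ball for v in adj[u] if v in ball) // 2
    return edges == len(ball) - 1

random.seed(0)
for n in (200, 1000, 3000):
    g = erdos_renyi(n, c=2.0)
    frac = sum(ball_is_tree(g, v, r=2) for v in range(n)) / n
    print(n, round(frac, 3))   # the fraction of tree-like 2-neighborhoods grows with n
```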
Let us see what this means in terms of the information such graphs divulge locally. We know from the onset of cores and frozen variables in the d1RSB phase of k-SAT that there are strong correlations between blocks of variables of size O(n) in that phase. However, the loops through which these correlations arise are invisible when we inspect local neighborhoods of a fixed finite size. Let us think about what this implies. The simplest local property is the degrees of elements. The next would be small connected subgraphs (triangles, for instance). But even this next step is not available, as the problem size grows. In short, such random graphs do not provide any of their global properties through local inspection at each element. In other words, if only local neighborhoods are examined, the two ensembles Gk(n, m) and Tk(n, m) are indistinguishable from each other.

Theorem 7.3. Let G be a randomly chosen graph in the ensemble Gk(n, m), and let i be a uniformly chosen node in G. Then the r-neighborhood of i in G converges in distribution to Tk(n, m) as n → ∞.

7.1.2 Degree Profiles in Random Graphs

The degree of a variable node in the ensemble Gk(n, m) is a random variable; we wish to understand its distribution. The expected value of the fraction of variables in Gk(n, m) having degree d is the same as the probability that a single variable node has degree d, both being equal to

P(deg v_i = d) = (m choose d) p^d (1 − p)^(m−d),   where p = k/n.

In the large graph limit we get

lim_{n→∞} P(deg v_i = d) = e^(−kα) (kα)^d / d!.
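A quick empirical check of this limit is sketched below; this is our own illustration, and the values n = 5000, k = 3, α = 4.2 are assumed parameters chosen only for the demonstration.

```python
# Compare the empirical degree histogram of G_k(n, m), m = alpha*n, with the
# Poisson(k*alpha) limit derived above.
import math
import random
from collections import Counter

def degree_profile(n, m, k, rng=random):
    degree = Counter()
    for _ in range(m):                       # each function node picks a uniform k-tuple
        for v in rng.sample(range(n), k):
            degree[v] += 1
    return [degree[v] for v in range(n)]

random.seed(0)
n, k, alpha = 5000, 3, 4.2
degs = degree_profile(n, int(alpha * n), k)
lam = k * alpha
for d in (8, 10, 12, 14, 16):
    empirical = sum(1 for x in degs if x == d) / n
    poisson = math.exp(-lam) * lam**d / math.factorial(d)
    print(d, round(empirical, 4), round(poisson, 4))   # the two columns should be close
```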
In other words, the degree is asymptotically a Poisson random variable. A corollary is that the maximum degree of a variable node is almost surely O(log n) in the large graph limit.

Lemma 7.4. The maximum variable node degree in Gk(n, m) is asymptotically almost surely O(log n). In particular, it asymptotically almost surely satisfies

d_max / (kαe) = ( z / log(z / log z) ) · ( 1 + Θ( log log z / (log z)^2 ) ),   where z = (log n)/(kαe).   (7.1)

Proof. See [MM09, p. 184] for a discussion of this upper bound, as well as a lower bound.
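The slow growth of the maximum degree is easy to observe numerically. The following is our own sketch; k = 3 and α = 4.2 are assumed parameters, and the printout only illustrates that the maximum degree grows very slowly with n, consistent with the O(log n) bound of the lemma (it is not a verification of the precise constants in (7.1)).

```python
# Simulated maximum variable-node degree in G_k(n, m) with m = alpha*n.
import math
import random

def max_degree(n, alpha, k, rng=random):
    deg = [0] * n
    for _ in range(int(alpha * n)):
        for v in rng.sample(range(n), k):
            deg[v] += 1
    return max(deg)

random.seed(1)
for n in (10**3, 10**4, 10**5):
    print(n, max_degree(n, alpha=4.2, k=3), round(math.log(n), 1))
```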
8 Separation of Complexity Classes

We have built a framework that connects ideas from graphical models, logic, statistics, statistical mechanics, and random graphs. We are now ready to begin our final constructions that will yield the separation of complexity classes.

8.1 Measuring Conditional Independence in Range Limited Models

A joint distribution encodes the interaction of a system of n variables. Our central concern with respect to range limited models is to understand which variable interactions in a system are irreducible: namely, those that cannot be expressed in terms of interactions between smaller sets of variables with conditional independencies between them. Such irreducible interactions can be 2-interactions (between pairs), 3-interactions (between triples), and so on, up to n-interactions between all n variables simultaneously.

We have described the fundamental similarity between range limited and value limited distributions in Chapter 3. Both are hampered by the same underlying property: in spite of being distributions on n covariates, they are both poly(log n)-parametrizable, meaning that they can be specified with only 2^poly(log n) parameters. Informally, their joint distribution behaves like the joint distribution of only poly(log n) covariates instead of n covariates.

In light of the above, we first consider the case of range limited poly(log n)-parametrizations. We return to value limited poly(log n)-parametrizations just before the final separation of complexity classes in Section 8.5.
Remark 8. At level zero of this “hierarchy”. or dually. We may. with k < n? In such a case. When the largest irreducible interactions are kinteractions. they are coupled together n at a time. but they grow relatively slowly. they behave in ways similar to a set of poly(log n) covariates. In the case of ordered graphs. the covariates should be independent of each other.5. SEPARATION OF COMPLEXITY CLASSES 79 would happen if all the direct interactions between variables in the system were all of less than a certain ﬁnite range k. without the possibility of being decoupled. the independent parameter space grows polynomially with n. the “jointness” of the distribution would lie at a lower level than n. we can make statements about how deeply entrenched the conditional independence between the covariates is. the n covariates do not display the behavior of a typical joint distribution of n variables. upper and lower bound the level using minimal Imaps and maximal Dmaps for the distribution. 2. the “jointness” of the covariates really would lie at a lower “level” than n. as noted in Sec. The case of complex LFP is also one of poly(log n)parametrization. However. If. Similarly. it grows exponentially with n. In both cases above. if the variables did interact n at a time. as stated in Chapter 3. whereas in a general distribution without any conditional independencies. There are some technical issues with constructing such a hierarchy to measure conditional independence. Instead. for instance. the distribution can be parametrized with n2k independent parameters. We would like to measure the “level” of conditional independence in a system of interacting variables by inspecting their joint distribution. Thus. the distribution has a directed Pmap. The ﬁrst issue would be how to measure the level of a distribution in this hierarchy. The case of monadic LFP lies in between — the interactions are not of ﬁxed size.1. but took only 2poly(log n) joint values. in families of distributions where the irreducible interactions are of ﬁxed size. not all distributions have such maps. In this way. of course. we should note that there may be different minimal Imaps for 79 . then we could measure the size of the largest clique that appears in its moralized graph. except it is a valuelimited O(n) interaction model. about how large the set of direct interactions between variables is.8. At level n.
the same distribution for different orderings of the variables; see [KF09, p. 80] for an example.

The insight that allows us to resolve the issue is as follows. If we could somehow embed the distribution of solutions generated by LFP into a larger distribution such that

1. the larger distribution factorized recursively according to some directed graphical model, and
2. the larger distribution had only polynomially many more variates than the original one,

then we would have obtained a parametrization of our distribution that reflects the factorization of the larger distribution, and would cost us only polynomially more. We will return to the task of constructing such an embedding in Sec. 8.3. By pursuing the above course, we aim to demonstrate that distributions of solutions generated by LFP lie at a lower level of conditional independence than distributions that occur in the d1RSB phase of random k-SAT. Consequently, they have more economical parametrizations than the space of solutions in the d1RSB phase does.

8.2 Generating Distributions from LFP

We will describe the method of generating distributions and showing economical parametrizations, by embedding the covariates into a larger directed graphical model, below for the case of monadic LFP; we will indicate the differences for complex LFP. First we describe how we use LFP to create a distribution of solutions.

8.2.1 Encoding k-SAT into Structures

In order to use the framework from Chapters 4 and 5, we will encode k-SAT formulae as structures over a fixed vocabulary.
Our vocabularies are relational, and so we need only specify the set of relations and the set of constants. We do not require constants. We will use three relations.

1. The first relation, RC, will encode the clauses that a SAT formula comprises. Since we are studying ensembles of random k-SAT, this relation will have arity k.
2. Next, we need a relation in order to make FO(LFP) capture polynomial time queries on the class of k-SAT structures. We will not introduce a linear ordering, since that would make the Gaifman graph a clique; rather, we will include a relation such that FO(LFP) can capture all the polynomial time queries on the structure. This will be a binary relation RE.
3. Finally, we need a relation RP to hold "partial assignments" to the SAT formulae. We will describe these in Sec. 8.2.3.

This describes our vocabulary σ = {RC, RE, RP}.

Next, we come to the universe. A SAT formula is defined over n variables, but these can appear either in positive or negative form. Thus, our universe will have 2n elements, corresponding to the literals x1, . . . , xn, ¬x1, . . . , ¬xn. In order to avoid new notation, we will simply use the same symbols to indicate the corresponding elements in the universe. We denote by lower case xi the literals of the formula, while the corresponding upper case Xi denotes the corresponding variable in a model; we dispense with superscripts since the underlying structure is clear.

Lastly, we need to interpret our relations in our universe. The relation RC will consist of k-tuples from the universe, interpreted as clauses consisting of disjunctions between the literals in the tuple. The relation RE will be interpreted as an "edge" between successive variables. The relation RP will hold a partial assignment of values to the underlying variables.
Now we encode our k-SAT formulae into σ-structures in the natural way. For example, for k = 3, the clause x1 ∨ ¬x2 ∨ ¬x3 in the SAT formula will be encoded by inserting the tuple (x1, ¬x2, ¬x3) into the relation RC. Similarly, the pairs (xi, xi+1) and (¬xi, ¬xi+1), both for 1 ≤ i < n, as well as the pair (xn, ¬x1), will be in the relation RE. This chains together the elements of the structure. Note also that SAT problems may be represented as matrices (rows for clauses, columns for the variables that appear in them).

The reason for the relation RE that creates the chain is that on such successor-type structures, polynomial time queries are captured by FO(LFP) [EF06, §11.2]. Recall that an order on the structure enables the LFP computation (or the Turing machine that runs this computation) to represent tuples in a lexicographical ordering. We could encode our structures with a linear order, but that would make the Gaifman graph fully connected. What we want is something weaker that still suffices to allow LFP to capture polynomial time on the class of encodings: we need to give the LFP something it can use to create an ordering. Thus, we encode our structures as successor-type structures through the relation RE. This seems most natural, since it imparts on the structure an ordering based on that of the variables. In our problem of k-SAT, the assignments to the variables that are computed by the LFP have nothing to do with their order; they depend only on the relation RC, which encodes the clauses, and the relation RP, which holds the initial partial assignment that we are going to ask the LFP to extend. In other words, each stage of the LFP is order-invariant. It is known that the class of order-invariant queries is also Gaifman local [GS00]; this is a technicality which does not affect us.

Ensembles of k-SAT

Let us now create ensembles of σ-structures using the encoding described above. We will start with the ensemble SATk(n, α) and encode each k-SAT instance as a σ-structure. The resulting ensemble will be denoted by Sk(n, α). The encoding of the problem Φk(n, α) as a σ-structure will be denoted by Pk(n, α).
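The encoding is mechanical, and a small sketch may help fix ideas. The following is our own illustration, not part of the original text; representing the literal xi as +i and ¬xi as -i is an assumption made for the example.

```python
# Encoding a k-SAT instance as a sigma-structure over the vocabulary {R_C, R_E, R_P}.

def encode_ksat(n, clauses, partial_assignment):
    """clauses: list of k-tuples of signed literals, e.g. (1, -2, -3) for x1 v ~x2 v ~x3.
    partial_assignment: dict var -> 0/1 for the initially assigned variables."""
    universe = [i for i in range(1, n + 1)] + [-i for i in range(1, n + 1)]

    # R_C holds one k-tuple of literals per clause.
    R_C = [tuple(clause) for clause in clauses]

    # R_E chains x_i -> x_{i+1}, ~x_i -> ~x_{i+1}, and x_n -> ~x_1 (a successor-type relation).
    R_E = ([(i, i + 1) for i in range(1, n)]
           + [(-i, -(i + 1)) for i in range(1, n)]
           + [(n, -1)])

    # R_P stores the partial assignment as the set of literals it makes true.
    R_P = [v if val == 1 else -v for v, val in partial_assignment.items()]

    return {"universe": universe, "R_C": R_C, "R_E": R_E, "R_P": R_P}

# The clause x1 v ~x2 v ~x3 from the text, with the partial assignment x1 = 1:
structure = encode_ksat(3, [(1, -2, -3)], {1: 1})
print(structure["R_C"], structure["R_E"], structure["R_P"])
```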
8.2.2 The LFP Neighborhood System

In this section, we wish to describe the neighborhood system that underlies the monadic LFP computations on structures of Sk(n, α). We begin with the factor graph, build the Gaifman graph of each such structure, and build the neighborhood system through the Gaifman graph.

Let us recall the factor graph ensemble Gk(n, m), parametrized by the clause density α. Each graph in this ensemble encodes an instance of random k-SAT, which we encode as a structure as described in the previous section. Next, we build the Gaifman graph of each such structure. The set of vertices of the Gaifman graph is simply the set of variable nodes in the factor graph and their negations, since we are using both variables and their negations for convenience (this is simply an implementation detail). For instance, the Gaifman graph for the factor graph of Fig. 2.2 will have 12 vertices. Two vertices are joined by an edge in the Gaifman graph either when the two corresponding variable nodes are joined to a single function node of the factor graph (i.e., appear in a single clause), or when they are adjacent to each other in the chain that the relation RE has created on the structure.

On this Gaifman graph, the simple monadic LFP computation induces a neighborhood system described as follows. The sites of the neighborhood system are the variable nodes. The neighborhood Ns of a site s is the set of all nodes that lie in the r-neighborhood of the site, where r is the locality rank of the first order formula ϕ whose fixed point is being constructed by the LFP computation. Finally, we make the neighborhood system into a graph in the standard way: the vertices of the graph are the set of sites, and each site s is connected by an edge to every other site in Ns. In other words, every node that was within the locality rank neighborhood of s in the Gaifman graph is now connected to s by a single edge. This graph will be called the interaction graph of the LFP computation, and the ensemble of such graphs, parametrized by the clause density α, will be denoted by Ik(n, α). Note that this interaction graph has, in general, many more edges than the Gaifman graph, and is therefore far more dense.
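The two graph constructions just described can be sketched directly. The following is our own illustration; the toy structure and the locality rank r = 2 are assumed values chosen only for the example.

```python
# Build the Gaifman graph of an encoded k-SAT structure, then the induced interaction
# graph: every pair of vertices within Gaifman distance r becomes an edge.
from collections import deque
from itertools import combinations

def gaifman_graph(structure):
    adj = {e: set() for e in structure["universe"]}
    for clause in structure["R_C"]:            # co-occurrence in a clause
        for u, v in combinations(clause, 2):
            adj[u].add(v); adj[v].add(u)
    for u, v in structure["R_E"]:              # successor chain edges
        adj[u].add(v); adj[v].add(u)
    return adj

def interaction_graph(adj, r):
    """Connect each site to every node within distance r in the Gaifman graph."""
    inter = {u: set() for u in adj}
    for u in adj:
        dist = {u: 0}
        queue = deque([u])
        while queue:
            x = queue.popleft()
            if dist[x] == r:
                continue
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    queue.append(y)
        inter[u] = set(dist) - {u}
    return inter

toy = {"universe": [1, 2, 3, -1, -2, -3],
       "R_C": [(1, -2, -3)],
       "R_E": [(1, 2), (2, 3), (-1, -2), (-2, -3), (3, -1)]}
g = gaifman_graph(toy)
print(max(len(nbrs) for nbrs in interaction_graph(g, r=2).values()))
```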
What is the size of the cliques in this interaction graph? This is not the same as the size of cliques in the factor graph, or in the Gaifman graph, because the density of the interaction graph is higher. The size of the largest clique is a random variable. What we want is an asymptotically almost sure (by this we mean with probability tending to 1 in the thermodynamic limit) upper bound on the size of the cliques in the distribution of the ensemble Ik(n, α).

Note: From here on, all the statements we make about ensembles should be understood to hold asymptotically almost surely in the respective random ensembles; by that we mean that they hold with probability tending to 1 as n → ∞.

Lemma 8.2. The size of the cliques that appear in graphs of the ensemble Ik(n, α) is upper bounded by poly(log n) asymptotically almost surely.

Proof. Let dmax be as in (7.1), and let r be the locality rank of ϕ. The maximum degree of a node in the Gaifman graph is asymptotically almost surely upper bounded by dmax = O(log n). The locality rank is a fixed number (roughly equal to 3^d, where d is the quantifier depth of the first order formula that is being iterated). The node under consideration can have at most dmax others adjacent to it, and the same holds for each of those, and so on. This gives us a coarse upper bound of dmax^r on the size of cliques; since dmax = O(log n) and r is fixed, this is poly(log n).

Remark 8.3. While this bound is coarse, there is not much point in trying to tighten it, because any constant power factor (r in the case above) can always be introduced by computing an r-ary LFP relation. This bound will be sufficient for us.

Remark 8.4. High degree nodes in the Gaifman graph become significant features in the interaction graph, since they connect a large number of other nodes to each other and therefore allow the LFP computation to access a lot of information through a neighborhood system of a given radius. It is these high degree nodes that reduce factorization of the joint distribution, since they represent direct interaction of a large number of variables with each other. Note that although the radii of the neighborhoods are O(1), the number of nodes in them is not O(1), due to the Poisson distribution of the variable node degrees and the existence of high degree nodes.
Remark 8.5. The relation being constructed is monadic, and so it does not introduce new edges into the Gaifman graph at each stage of the LFP computation. When we compute a k-ary LFP, we can encode it into a monadic LFP over a polynomially (n^k) larger product space, as is done in the canonical structure, for instance, but with the linear order replaced by a weaker successor-type relation. Therefore, we can always choose to deal with monadic LFP. This is really a restatement of the transitivity principle for inductive definitions, which says that if one can write an inductive definition in terms of other inductively defined relations over a structure, then one can write it directly in terms of the original relations that existed in the structure [Mos74, p. 16].
8.2.3 Generating Distributions
The standard scenario in finite model theory is to ask a query about a structure and obtain a Yes/No answer. For example, given a graph structure, we may ask the query "Is the graph connected?" and get an answer. But what we want are distributions of solutions that are computed by a purported LFP algorithm for k-SAT; this is not generally the case in finite model theory. Intuitively, we want to generate solutions lying in exponentially many clusters of the solution space of SAT in the d1RSB phase. How do we do this? To generate these distributions, we will start with partial assignments to the set of variables in the formula, and ask whether such a partial assignment can be extended to a satisfying assignment. We need the following definition.

Definition 8.6. A global relation associated to a decision problem on a class K is a relation R of a fixed arity k over A associated to each structure A ∈ K.

The following is a restatement of the Immerman-Vardi theorem, phrased in terms of computability of global relations. See [LR03, §11.2, p. 206] for a proof.

Theorem 8.7. A global relation R on a class of successor structures is computable in polynomial time if and only if R is inductive.
We wish to see that the global relation that associates to each structure a complete assignment coinciding with the partial assignment placed in the relation RP is inductive. By the theorem above, this is equivalent to showing that it is computable in polynomial time. In order to see this, we recall that decision problems that are NP-complete have a property called self-reducibility, which allows us to query a decision procedure for them a polynomial number of times and build a solution to the search version of the problem. If P = NP, then all decision problems in NP have polynomial time solutions, and one can use self-reducibility to see that the search version will also be polynomial time solvable; namely, a solution will be constructible in polynomial time. Next we define our search problem in a way that a solution to it will be a global relation: an instance of the problem will be a structure with a partial assignment, and the question will be whether the partial assignment can be extended to a complete satisfying assignment. The complete assignment can be represented by a global unary relation that stores all the literals assigned +1, and which must concur with the partial assignment on their overlap. This decision problem is clearly in NP, and therefore if P = NP, it would have a polynomial time search solution, making R computable in polynomial time. The theorem then says R must be inductive.

Since we want to generate exponentially many such solutions, we will partially assign O(n) (a small fraction) of the variables, and ask the LFP to extend this assignment, whenever possible, to a satisfying assignment of all the variables. Thus, we now see what the relation RP in our vocabulary stands for: it holds the partial assignment to the variables. For example, if we want to ask whether the partial assignment x1 = 1, x2 = 0, x3 = 1 can be extended to a satisfying assignment of the SAT formula, we store this partial assignment as the tuple (x1, ¬x2, x3) in the relation RP in our structure. As mentioned earlier, the output satisfying assignment will be computed as a unary relation which holds all the literals that are assigned the value 1. This means that xi is in the relation if xi has been assigned the value 1 by the LFP, and otherwise ¬xi is in the relation, meaning that xi has been assigned the value 0 by the LFP computation. If the partial assignment cannot be extended, we simply abort that particular attempt and carry on with other partial assignments until we generate enough solutions; by "enough" we mean a number rising exponentially with the underlying problem size. Now we "initialize" our structure with different partial assignments and ask the LFP to compute complete assignments whenever they exist. In this way we get a distribution of solutions that is exponentially numerous, and we now analyze it and compare it to the one that arises in the d1RSB phase of random k-SAT. This is the simplest case, where the FO(LFP) formula is simple monadic. For more complex formulas, the output will be some section of a relation of higher arity (please see Appendix A for details), and we will view it as monadic over a polynomially larger structure.
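The generation procedure described above can be sketched schematically. The following is our own illustration, not the author's construction: `decides_sat` is a hypothetical stand-in for the purported polynomial time decision procedure (the LFP), and the self-reducibility loop and brute-force oracle at the end are assumptions made only for the toy demonstration.

```python
# Generating a distribution of solutions by seeding random partial assignments and
# extending each one, variable by variable, using a decision procedure (self-reducibility).
import random
from itertools import product

def extend_assignment(n, clauses, partial, decides_sat):
    """partial: dict var -> 0/1. Returns a full assignment extending it, or None."""
    assignment = dict(partial)
    if not decides_sat(clauses, assignment):
        return None                              # this partial assignment is a dead end
    for v in range(1, n + 1):
        if v in assignment:
            continue
        assignment[v] = 1                        # try x_v = 1, keep it if still extendable
        if not decides_sat(clauses, assignment):
            assignment[v] = 0
    return assignment

def sample_solutions(n, clauses, decides_sat, num_samples, fraction=0.1, rng=random):
    """Repeatedly seed with random partial assignments on a small fraction of variables."""
    solutions = []
    while len(solutions) < num_samples:
        seed_vars = rng.sample(range(1, n + 1), max(1, int(fraction * n)))
        partial = {v: rng.randint(0, 1) for v in seed_vars}
        sol = extend_assignment(n, clauses, partial, decides_sat)
        if sol is not None:
            solutions.append(sol)
    return solutions

# For a toy demonstration, plug in a brute-force decision procedure:
def brute_force_decides(clauses, partial):
    n_vars = max(abs(l) for c in clauses for l in c)
    free = [v for v in range(1, n_vars + 1) if v not in partial]
    for bits in product((0, 1), repeat=len(free)):
        a = {**partial, **dict(zip(free, bits))}
        if all(any(a[abs(l)] == (1 if l > 0 else 0) for l in c) for c in clauses):
            return True
    return False

print(sample_solutions(3, [(1, -2, -3)], brute_force_decides, num_samples=2))
```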
8.3 Disentangling the Interactions: The ENSP Model

Now that we have a distribution of solutions computed by LFP, we would like to examine its conditional independence characteristics. Does it factor through any particular graphical model, for instance? In Chapter 2, we considered various graphical models and their conditional independence characteristics. Let us first note two issues. The first issue is that the graphical models considered in the literature are mostly static. By this we mean that

1. the relations between the variables encoded in the models are fixed, and
2. they are of fixed size, over a fixed set of variables.

In short, they model fixed interactions between a fixed set of variables, and so our situation is not exactly like any of these models. Since we wish to apply such models to the setting of complexity theory, we are interested in families of models, with a focus on how their structure changes with the problem size. We will have to build our own, based on the principles we have learnt.
The second issue that faces us now is as follows. We know that there is a "directedness" to LFP, in that elements that are assigned values at a certain stage of the computation then go on to influence other elements that are as yet unassigned; there is a directed flow of influence as the LFP computation progresses. This is, in general, different from a Markov random field distribution, which has no such direction. How do we deal with this situation? In order to model this dynamic behavior, we have to consider building a graphical model on certain larger product spaces. Let us build some intuition first. Consider simple monadic LFP.

1. There are two types of flows of information in an LFP computation. In one type of flow, neighborhoods across the structure influence the value an unassigned node will take. In the other type of flow, once an element is assigned a value, it changes the neighborhoods (or, more precisely, the local types of various other elements) in its vicinity. Note that while the first type of flow happens during a stage of the LFP, the second type is implicit: there is no separate stage of the LFP where it happens; it happens implicitly once any element enters the relation being computed.

2. Because the flow of information is as described above, we would expect a different "trajectory" of the LFP computation for different clusters in the d1RSB phase. Namely, if one initial partial assignment landed us in cluster X, and another in cluster Y, the way the LFP would go about assigning values to the unassigned variables would be, in general, different. Even within a cluster, the trajectories of two different initial partial assignments will not, in general, be the same, although we would expect them to be similar.

3. The way an LFP computation proceeds through the structure will, in general, vary with the initial partial assignment. Thus, even within a certain size n, we do not have a fixed graph on n vertices that will model all our interactions, and we will not be able to express the computation using a simple DAG on either the set of elements or the set of neighborhoods.
4. In order to exploit the factorization properties of directed graphical models, and the resulting parametrization by potentials, we would like to avoid any closed directed paths.

5. The stagewise nature of LFP is central to our analysis, and the various stages cannot be bundled into one without losing crucial information. We do need a model which captures each stage separately.

Let us now incorporate this intuition into a model, which we will call an Element-Neighborhood-Stage Product model, or ENSP model for short. This model appears to be of independent interest. We now describe the ENSP model for a simple monadic least fixed point computation LFPϕ. The model is illustrated in Fig. 8.1. It has two types of vertices.

Element Vertices. These vertices, represented by the smaller circles in Fig. 8.1, encode the variables of the k-SAT instance. They therefore correspond to elements in the structure (recall that elements of the structure represent the literals in the k-SAT formula, and that there are 2n elements in the k-SAT structure, where n is the number of variables in the SAT formula). However, in order to avoid clutter, in Fig. 8.1 we have shown only one vertex per variable, and allowed it to be colored with one of two colors: green indicating that the variable has been assigned the value +1, and red indicating that the variable has been assigned the value −1. Each variable in our original system X1, . . . , Xn is represented by a different vertex at each stage of the computation; thus, each variable in the original system gives rise to |ϕ^A| vertices in the ENSP model. Since the underlying formula ϕ that is being iterated is positive, elements do not change their color once they have been assigned.

Neighborhood Vertices. These vertices, denoted by the larger circles with blue shading in Fig. 8.1, represent the r-neighborhoods of the elements in the structure. Each of their possible values is one of the possible isomorphism types of an r-neighborhood, namely a local r-type of the corresponding element. Just like variables, each neighborhood is also represented by a different vertex at each stage of the LFP computation. These vertices may be thought of as vectors of size poly(log n), corresponding to the cliques that occur in the neighborhood system we described in Sec. 8.2.2; or one may think of each as a single variable taking the value of the various local r-types.
[Figure 8.1: The Element-Neighborhood-Stage Product (ENSP) model for LFPϕ, showing the element vertices and neighborhood vertices arranged across the stages of the LFP. See text for description.]
Now we describe the stages of the ENSP. There are 2|ϕ^A| stages, starting from the leftmost and terminating at the rightmost; each stage of the LFP computation is represented by two stages in the ENSP. Initially, at the start of the LFP computation, we are in the leftmost stage. At this stage, a small fraction O(n) of the variables have been assigned values: in the figure, X4,1 is green and Xi,1 is red, indicating that the initial partial assignment that we provided the LFP had variable X4 assigned +1 and variable Xi assigned −1. The LFP is asked to extend this partial assignment to a complete satisfying assignment on all variables (if one exists, and abort if not).

Let us now look at the transition to the second stage of the ENSP. In the figure, notice that some further variable vertices are colored green, and some red. This means that they get assigned +1 or −1, based on the conditions expressed by the formula ϕ in terms of their own local neighborhoods and the existence of a bounded number of other local neighborhoods in the structure. For example, the variable X3,2 takes the color green based on information gathered from its own neighborhood N(X3,1) and two other neighborhoods N(X2,1) and N(Xn−1,1). This indicates that at the first stage, the LFP assigned the value +1 to the variable X3 (remember that the first two stages in the ENSP correspond to the first stage of the LFP computation). Similarly, it assigns the value −1 to variable Xn. The vertices that do not change state simply transmit their existing state to the corresponding vertices in the next stage by a horizontal arrow, which we do not show in the figure in order to avoid clutter.

Once some variables have been assigned values in the first stage, their neighborhoods, and the neighborhoods of other elements that are in their vicinity (meaning, the neighborhoods of other elements that these variables lie in), change.
This is indicated by the dotted arrows between the second and third stages of the ENSP. For example, once X3 has been assigned the value +1, it updates its neighborhood and also the neighborhood of variable X2, which lies in its vicinity (in this example). Note that this happens implicitly during the LFP computation. In this way, influence propagates through the structure during an LFP computation. That is why we have represented each stage of the actual LFP computation by two stages in the ENSP: the first stage is the explicit stage, where variables get assigned values; the second stage is the implicit stage, where variables "update" their neighborhoods and those neighborhoods in their vicinity.

The explicit stages of the ENSP also perform the task of propagating the local constraints placed by the various factors in the underlying factor graph outward into the larger graphical model. In our case of factors encoding the clauses of a k-SAT formula, the local constraint placed by a clause is that the global assignment must evade exactly one restriction to a specified set of k coordinates. For example, in the case of k = 3, the clause x1 ∨ x2 ∨ ¬x3 permits all global assignments except those whose first three coordinates are (−1, −1, +1). In contrast, if the factor were a XORSAT clause, the local restrictions would all be in the form of linear spaces, and the global solution would be an intersection of such spaces.

The variables at the last stage, Xi,|ϕ^A|, are just the original Xi. By the end of the computation, all variables have been assigned values, and we have a satisfying assignment; we recover our original variables (X1, . . . , Xn) by simply looking at the last (rightmost in the figure) level of the ENSP. Thus, we have accomplished our original aim: by introducing extra variables to represent each stage of each variable and each neighborhood in the SAT formula, we have embedded our original set of variates into a polynomially larger product space, and obtained a directed graphical model on this larger space. This product space has a nice factorization due to the directed graph structure. This is what we will exploit.

Remark 8.8. k-SAT asks a question about whether certain spaces
of the form {ω : (ω_i1, . . . , ω_ik) ≠ (ν1, . . . , νk)}, where 1 ≤ i1 < i2 < · · · < ik ≤ n and the prohibited values νi are ±1, have nonempty intersections. In contrast, XORSAT asks whether certain linear spaces have a nonempty intersection. Linearity is a global constraint. Thus, if we were to try to solve XORSAT formulae, the end result of multiple runs of the LFP would be a space of solutions conditioned upon these requirements; of course, we would obtain a space that would be linear.

Note that without embedding the covariates into a larger space, we would not be able to place the various computations done by LFP into a single graphical model. By embedding the covariates into a polynomially larger space, we have been able to put a common structure on the various computations done by LFP on them. Since the LFP completes its computation in under a fixed polynomial number of steps, and all messages are coded into the formula ϕ, we have a directed graph with 2n + n = 3n vertices at each stage, and 2|ϕ^A| stages. In other words, we have managed to represent the LFP computation on a structure as a directed model using only a polynomial overhead in the number of parameters of our representation space. The insight that we can afford to incur a polynomial cost in order to obtain a common graphical model on a larger product space was key to this section.

8.4 Parametrization of the ENSP

Our goal is to demonstrate the following: if LFP were able to compute solutions in the d1RSB phase of random k-SAT, then the distribution of the entire space of solutions would have a substantially simpler parametrization than we know it does.
In order to accomplish this, we need to measure the growth in the dimension of independent parameters required to parametrize the distribution of solutions that we have just computed using LFP (assuming that P = NP). Once again, we have embedded our variates into a polynomially larger space that has a factorization according to a directed model, the ENSP. The directed nature of the ENSP means that we can factor the resulting distribution into conditional probability distributions (CPDs) at each vertex of the model, of the form P(x | pa(x)). Recall that positivity is required in order to apply the Hammersley-Clifford theorem to obtain factorizations for undirected models; from our perspective, the major benefit of directed graphical models is that we can always obtain such a factorization, without any added positivity constraints, automatically ensuring the corresponding conditional independencies. By employing the version of Hammersley-Clifford for directed models, Theorem 2.13, we also know that we can parametrize the distribution by specifying a system of potentials over its cliques. We have seen that the cliques in the ENSP are of size poly(log n); thus, in general, each CPD will have scope only poly(log n).

How do we compute the CPDs or potentials? We assign various initial partial assignments to the variables as described in Sec. 8.2.3 and let the LFP computations run. We only consider successful computations, namely those where the LFP was able to extend the partial assignment to a full satisfying assignment of the underlying k-SAT formula. We represent each stage of the LFP computation on the corresponding two stages of the ENSP, and thus obtain one full instantiation of the representation space. We do this exponentially many times, build up our local CPDs by simply recording local statistics over all these runs, and then normalize each CPD. This gives us the factorization (over the expanded representation space) of our distribution.

The ENSP for different runs of the LFP will, in general, be different. This is because the flow of influences through the stages of the ENSP will, in general, depend on the initial partial assignment. What is important is that each such model will have certain properties (such as largest clique size) in common, and it is these that determine the order of the number of parameters required.
Proposition 8.9. A distribution that factorizes according to the ENSP can be parametrized with 2^poly(log n) independent parameters.

Let us inspect the properties of the ENSP model that determine this parametrization.

1. There are polynomially many more vertices in the ENSP model than elements in the underlying structure.

2. The number of local r-types whose value each neighborhood vertex can take is 2^poly(log n). Lemma 8.2 gives us a poly(log n) upper bound on the size of the neighborhoods: there are only n neighborhoods, and each has at most poly(log n) elements, so each of these possibilities can be parametrized by 2^poly(log n) parameters.

3. At each implicit stage of the ENSP, we have to update the types of the neighborhoods that were affected by the induction of elements at the previous explicit stage. By the previous point, this gives us a total of 2^poly(log n) parameters required.

4. By Theorem 5.9, there is a fixed constant s such that there must exist s neighborhoods in the structure satisfying certain local conditions for the formula to hold. This again gives us poly(n) (O(n^s) in this case) different possibilities for each explicit stage of the ENSP. Remember, we are presently analyzing a single stage of the LFP. The same count can also be arrived at by utilizing the normal form theorem of Chapter 5.

In summary, the scope of the factors in the parametrization grows as poly(log n). The ENSP is an interaction model in which direct interactions are of size poly(log n) and are chained together through conditional independencies. This also underscores the principle that the description of the parameter space is simpler because it involves direct interactions between only poly(log n) variates at a time, and then chains these together through conditional independencies.
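As a rough numerical illustration of the gap between the two kinds of parametrization, the following sketch is our own; the scope s = (log2 N)^2 is an assumed stand-in for poly(log n), and the count for the directed model is the standard one (each binary vertex with at most s − 1 binary parents contributes at most 2^(s−1) free parameters).

```python
# Independent parameters: full joint distribution vs. a directed model with bounded scope.
import math

def full_joint_params(N):
    return 2**N - 1                       # a general joint distribution on N binary variables

def directed_model_params(N, s):
    return N * 2**(s - 1)                 # N vertices, each CPD of scope at most s

for N in (100, 1000, 10000):
    s = int(math.log2(N)**2)              # scope growing as poly(log N)
    print(N, s, directed_model_params(N, s) < full_joint_params(N))   # always True
```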
In the case of the LFP neighborhood system, the size of the largest cliques is poly(log n) for each single run of the LFP, not O(n). This will not change if we were computing using complex fixed points, since the space of k-types is only polynomially larger than the underlying structure. This is what drastically reduces the parameter space required to specify the distribution. The crucial property of the distribution of the ENSP is that it admits a recursive factorization; it also allows us to parametrize the ENSP by simply specifying potentials on its maximal cliques, which are of size poly(log n).

8.5 Separation

We continue our treatment of range limited poly(log n)-parametrizations; we will treat the value limited case shortly. The property of the ENSP for range limited models that allows us to analyze the behavior of mixtures is that it is specified by local Gibbs potentials on its cliques. We may think of the cliques as the building blocks of each ENSP; these cliques are parametrized by potentials. In other words, a variable interacts with the rest of the model only through the cliques that it is part of. The cliques are upper bounded in size by poly(log n), and furthermore, a vertex may be in at most O(log n) such cliques. Thus, a vertex displays collective behavior only of range poly(log n).

While the entire distribution obtained by LFP may not factor according to any one ENSP, it is a mixture of distributions, each of which factorizes as per some ENSP. Next, we analyze the features of such a mixture when exponentially many instantiations of it are provided. As the reader may intuit, when such a mixture is asked to provide exponentially many samples, the features in the mixture will be of size poly(log n). This means that when exponentially many solutions are generated, they will show features of scope poly(log n). In other words, the mixture comprises distributions that can be parametrized by a subspace of R^poly(log n), in contrast to requiring the larger space R^O(n). This is simply a statement about the paucity of independent parameters in the component distributions of the mixture.

Next, let us examine the value limited case. In this case, the differences are as follows.
1. The solutions are generated by complex LFP, as sections of inductive relations of higher arity.

2. There are O(n) interactions at each stage; namely, the Gibbs potentials are specified over cliques of size O(n). However, the potentials are parametrized with only 2^poly(log n) parameters in spite of having O(n) size.

3. If we think of a potential as a CPD, then the CPDs are wide (have O(n) columns), but are not very long (they have only 2^poly(log n) rows).

4. Since the interactions are O(n), the potentials are already large in their scope; nonetheless, the graphical model is parametrizable with only 2^poly(log n) parameters.

How do we analyze mixtures of such potentials? The idea is as follows. How do we merge various O(n) potentials into a single poly(n)-sized potential, and what will be the resulting parametrization of this merged potential? In order to merge the potentials, we observe that they have a certain sheaf-like property: namely, since they are CPDs of the same LFP, they must agree on overlaps. Remember, these CPDs are nothing but the rules by which the computation proceeds, and these rules are the same for different computations, since it is the same LFP that is being used. This means that two CPDs cannot specify different behavior for the same priors. So we will create a single potential over the entire graphical model, which will have scope poly(n) (since the computation terminates in polynomial time), and the final merged potential will be compatible with each smaller potential on overlaps. Using this property, we can see that if each of the potentials had a poly(log n) parametrization, then so must the final merged potential. Once again, we see that we cannot instantiate exponentially many solutions from such a limited parametrization and obtain the d1RSB picture, which requires ample O(n) joint distributions
without the possibility of factoring into smaller pieces through conditional independencies.

This explains why polynomial time algorithms fail when interactions between variables are ample and of size O(n), without any conditional independencies between subsets. It also puts on rigorous ground the empirical observation that even NP-complete problems are easy in large regimes, and become hard only when the densities of constraints increase above a certain threshold. This threshold is precisely the value where ample irreducible O(n) interactions first appear in almost all randomly constructed instances.

In the case of random k-SAT in the d1RSB phase, these ample irreducible O(n) interactions manifest through the appearance of cores, which comprise clauses whose variables are coupled so tightly that one has to assign them "simultaneously." Cores arise, once the clause density is sufficiently high, when a set of C = O(n) clauses have all their variables also lying in a set of size C. Intuitively, variables in a core are so tightly coupled together that they can only vary jointly: they represent irreducible interactions of size O(n) which may not be factored any further through conditional independencies between subsets. Furthermore, their variation is ample, since they instantiate in each of the exponentially many clusters of the d1RSB phase, and so they display the ample joint behavior of a system of n covariates, which requires O(c^n) independent parameters to specify. Since cores do not factor through conditional independencies, and are not value limited either, this makes it impossible for polynomial time algorithms to assign their variables correctly. In other words, cores cannot be assigned poly(log n) variables at a time, with successive such assignments chained together through conditional independencies; nor can they be captured by a value limited parametrization.

We have shown that in the ENSP for range limited models, the size of the largest irreducible interactions is poly(log n), not O(n). More precisely, parametrization over cliques of size only poly(log n) is insufficient to specify such a joint distribution; likewise, parametrization over cliques of size O(n), but with only 2^poly(log n) parameters, is also insufficient. Furthermore, since the model is directed, it guarantees us conditional independencies at the level of its largest interactions: it guarantees that there will exist conditional independencies in sets of size larger than the largest cliques in
its moral graph. In other words, should a core factorize as per the ENSP, there would be independent variation within the core when conditioned upon values of intermediate variables that also lie within the core. This is contradictory to the known behaviour of cores for sufficiently high values of k and clause density in the d1RSB phase: the features present in cores in the d1RSB phase have size O(n), while the space of solutions generated by LFP has features of size only poly(log n), i.e. O(poly(log n)). This is illustrated in Fig. 8.2.

[Figure 8.2: The factorization and conditional independencies within a core due to potentials of size poly(log n): the blocks of size poly(log n) are independent given the intermediate values.]

At this point, we are ready to state our main theorem. The framework we have constructed allows us to analyze the set of polynomial time algorithms simultaneously, since they can all be captured by some LFP, instead of dealing with each individual algorithm separately. It makes precise the notion that polynomial time algorithms can take into account only interactions between variables that grow as poly(log n), not as O(n).

Theorem 8.10. P ≠ NP.

Proof. Consider the solution space of k-SAT in the d1RSB phase for k > 8, as recalled in Section 6.2. We know that for high enough values of the clause
density α, we have O(n) frozen variables in almost all of the exponentially many clusters. Let us consider, then, the situation where these clusters were generated by a purported range limited LFP algorithm for k-SAT that can be parametrized by the ENSP model with clique sizes poly(log n).

The first observation we make is that since the variables in cores are instantiated in exponentially many clusters, we have generated more than exponential-in-poly(log n) many distinct solutions; this precludes a value limited poly(log n)-parametrization.

Next, when exponentially many solutions have been generated from distributions having the parametrization of the ENSP model, we will see the effect of conditional independencies beyond range poly(log n). The basic question in analyzing such mixtures is: how many variables do we need to condition upon in order to split the distribution into conditionally independent pieces? The answer is given by (a) the size of the largest cliques, and (b) the number of such cliques that a single variable can occur in. In our case, these two give us a poly(log n) quantity. Namely, when exponentially many solutions have been generated, there will be conditional distributions that exhibit conditional independence between blocks of variates of size poly(log n). Note that since O(n) variables have to be changed when jumping from one cluster to another, we may even choose our poly(log n) blocks to lie within overlaps of these variables.

Let αβγ be a representation of the variables in cliques α, β and γ. If each set of such variables has scope at most poly(log n), then given a value of β, we will have nontrivial conditional distributions conditioned upon the values of the β variables, and we will see independent variation, over all their possible conditional values, in the variables of α and γ. Namely, the conditional independencies ensure that we will see cross terms of the form α1βγ1, α2βγ2, α1βγ2, α2βγ1. But we know that in the highly constrained phases of d1RSB, we need O(n) variable flips to get from one cluster to the next. The cross terms would mean that with a poly(log n) change in the frozen variables of one cluster, we would obtain a solution in another cluster. This gives us the contradiction that we seek.
It is also useful to consider how many different parametrizations a block of size poly(log n) may have. Each variable may choose poly(log n) partners out of O(n) to form a potential, and it may choose O(log n) such potentials. Even coarsely, these give us a poly(log n) quantity. We can see that due to the limited parameter space that determines each variable, it can only display a limited joint behavior. This behavior is completely determined by poly(log n) other variates, not by O(n) other variates.

In other words, blocks of variables of size poly(log n) only “see” the rest of the distribution through equivalence classes that grow as O(n^poly(log n)). We may think of such mixtures as possessing only c^poly(log n) “channels” to communicate directly with other variables. This quantity would have to grow exponentially with n in order to display the behavior of the d1RSB phase. All long range correlations transmitted in such a distribution must pass through only these many channels; their correlations must factor through this bottleneck, which gives us conditional independences after range poly(log n). This means that blocks of size larger than this are now varying independently of each other conditioned upon some intermediate variables: there will be no effect of the values of one upon those of the other. This gives us the cross terms described earlier. This is shown pictorially in Fig. 8.2.

This is why, when enough solutions have been generated by the LFP, there will be solutions that show cross terms between features whose size is poly(log n). Thus, the resulting distribution will start showing features that are at most of size poly(log n). This is what prevents the Hamming distance between solutions from being O(n), and prevents the Hamming distance from being O(n) on the average over exponentially many solutions. Exponentially many solutions cannot independently transmit O(n) correlations (namely, the variables that have to be changed when jumping from one cluster to another). Instead, the “jointness” in this distribution lies at a level poly(log n).

Once again we return to the same point: the jointness of the distribution that a purported LFP algorithm would generate would lie at the poly(log n) levels of conditional independence, whereas the jointness in the distribution of the d1RSB solution space is truly O(n), and accounting for such O(n) jointness that cannot be factored any further is beyond the capability of polynomial time algorithms. This is central to the separation of complexity classes.
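The coarse count in the paragraph above can be written out explicitly. As an illustration only, suppose that “poly(log n) partners” means at most log^c n partners for some constant c, and that each variable takes part in O(log n) potentials (both are assumptions made for this sketch, not quantities fixed by the text). The number of distinct ways to wire a single variable is then at most

\[
\binom{n}{\log^{c} n}^{O(\log n)} \;\le\; \left(n^{\log^{c} n}\right)^{O(\log n)} \;=\; 2^{O(\log^{c+2} n)} \;=\; c_1^{\,\mathrm{poly}(\log n)}, \qquad c_1 > 1,
\]

which is exponentially smaller than the c^n independent parameters that the discussion below indicates are needed to describe the d1RSB solution space.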
We collect some observations in the following.

Remark 8.11. We can see from the preceding discussion that the number of independent parameters required to specify the distribution of the entire solution space in the d1RSB phase (for k > 8) rises as c^n, c > 1. This is because it takes that many parameters to specify the exponentially many O(n) variable “jumps” between the clusters. These jumps are independent, and cannot be factored through poly(log n) sized factors, since that would mean conditional independence of pieces of size poly(log n) and would ensure that the Hamming distance between solutions was of that order.

Remark 8.12. Hard regimes of NP-complete problems allow O(n) variates to irreducibly jointly vary. Namely, there are irreducible interactions of size O(n) that cannot be expressed as interactions between poly(log n) variates at a time, chained together by conditional independencies as would be done by a LFP.

Remark 8.13. Note that the central notion is that of the number of independent parameters, not frozen variables. For example, frozen variables would occur even in low dimensional parametrizations in the presence of additional constraints placed by the problem. This is what happens in XORSAT, where the linearity of the problem causes frozen variables to occur. The frozen variables in XORSAT do not arise due to a high dimensional parametrization, but simply because the 2-core percolates [MM09, §18.3]. Each cluster is a linear space tagged on to a solution for the 2-core. Linear spaces always admit a simple description as the linear span of a basis, which takes the order of log of the size of the space, which is also why the clusters are all of the same size.

Remark 8.14. The poly(log n) size of features, and therefore of the Hamming distance between solutions, tells us that polynomial time algorithms correspond to the RS phase of the 1RSB picture, not to the d1RSB phase.
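A minimal sketch of the XORSAT observation above, using a small hand-rolled linear system over GF(2) (not an instance from the paper): the solution set is an affine subspace, so it is described by one particular solution together with a nullspace basis of roughly log2(number of solutions) vectors, which is why the clusters of a XORSAT instance all have the same size and admit a short description even though they contain frozen variables.

import itertools
import numpy as np

# A x = b over GF(2); each row is one XOR constraint on five variables.
A = np.array([[1, 1, 0, 0, 1],
              [0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0]], dtype=np.uint8)
b = np.array([1, 0, 1], dtype=np.uint8)

solutions = [np.array(x, dtype=np.uint8)
             for x in itertools.product([0, 1], repeat=A.shape[1])
             if np.array_equal(A.dot(x) % 2, b)]

# Affine-subspace check: for solutions x, y, z, the vector x ^ y ^ z is again a solution.
sol_set = {tuple(s) for s in solutions}
affine = all(tuple((x + y + z) % 2) in sol_set
             for x, y, z in itertools.product(solutions, repeat=3))

print(f"{len(solutions)} solutions; affine subspace: {affine}; "
      f"basis size ~ {int(np.log2(len(solutions)))}")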
It is tempting to think that there will be such a parametrization whenever the algorithmic procedure used to generate the solutions is stagewise local. This is not so. We need the added requirement that “mistakes” are not allowed. Namely, we cannot change a decision that has been made. One might observe that it is this requirement, that we make no trial and error at all, that limits LFP computations in a fundamentally different manner than the locality of information flows. See [Put65] for an interesting related notion of “trial and error predicates” in computability theory. Otherwise, even PFP has the stagewise bounded local property, but it can give rise to distributions without any conditional independence factorizations whose factors are of size poly(log n). When placed in the ENSP, we see that there is factorization, but over an exponentially larger space, where clique sizes are of exponential size.

8.6 Some Perspectives

The following perspectives are reinforced by this work.

1. The most natural object of study for constraint satisfaction problems is the entire space of solutions. It is this space where the dependencies and independencies that the CSP imposes upon covariates that satisfy it manifest.

2. The view that an algorithm is a means to generate one solution is limited, in the sense that it is oblivious to the geometry of the space of all solutions. It may, of course, be the appropriate approach in many applications. But there are applications where requiring algorithms to generate numerous solutions, and approximate with increasing accuracy the entire space of solutions, seems more natural.

3. Studying the parametrization of the space of solutions is a worthwhile pursuit. There is an intimate relation between the geometry of the space and its parametrization.

4. Conditional independence over factors of small scope is at the heart of resolving CSPs by means of polynomial time algorithms. Polynomial time algorithms succeed by successively “breaking up” the
problem into smaller subproblems that are joined to each other through conditional independence (a toy illustration appears after this list). Consequently, polynomial time algorithms cannot solve problems in regimes where blocks whose order is the same as the underlying problem instance require simultaneous resolution.

5. Polynomial time algorithms resolve the variables in CSPs in a certain order, and with a certain structure. This structure is important in their study. In order to bring this structure under study, we may have to embed the space of covariates into a larger space (as done by the ENSP).
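The following sketch makes perspective 4 concrete on a toy, hypothetical chain-structured CSP (adjacent variables over a three-value domain must differ; none of this comes from the paper). A polynomial-time dynamic program counts the solutions by exploiting exactly the factorization discussed above: conditioned on a separator variable, the two halves of the chain vary independently, so the conditional solution count is a product.

from itertools import product

DOMAIN = [0, 1, 2]
N = 9          # chain x_0 - x_1 - ... - x_{N-1}; constraint: x_i != x_{i+1}
MID = N // 2   # separator variable

def count_chain(length):
    """Number of valid colorings of a chain with `length` variables, by dynamic programming."""
    counts = {v: 1 for v in DOMAIN}
    for _ in range(length - 1):
        counts = {v: sum(c for u, c in counts.items() if u != v) for v in DOMAIN}
    return sum(counts.values())

def brute_force_given_mid(mid_value):
    """All solutions with x_MID fixed, plus their left- and right-half projections."""
    sols = [x for x in product(DOMAIN, repeat=N)
            if all(x[i] != x[i + 1] for i in range(N - 1)) and x[MID] == mid_value]
    lefts = {x[:MID + 1] for x in sols}
    rights = {x[MID:] for x in sols}
    return sols, lefts, rights

if __name__ == "__main__":
    print("total solutions (poly-time DP):", count_chain(N))
    sols, lefts, rights = brute_force_given_mid(0)
    # Conditional independence given the separator: every left half combines
    # with every right half, so the conditional count is a product.
    assert len(sols) == len(lefts) * len(rights)
    print(f"given x_{MID}=0: {len(lefts)} x {len(rights)} = {len(sols)} solutions")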
Appendix A

Reduction to a Single LFP Operation

A.1 The Transitivity Theorem for LFP

We now gather a few results that will enable us to cast any LFP into one having just one application of the LFP operator. Since we use this construction to deal with complex fixed points, we reproduce it in this appendix. The presentation here closely follows [EF06, Ch. 8].

The first result, known as the transitivity theorem, tells us that nested fixed points can always be replaced by simultaneous fixed points.

Theorem (Transitivity Theorem). Let ϕ(x, X, Y) and ψ(y, X, Y) be first order formulas positive in X and Y. Moreover, assume that no individual variable free in [LFP_{y,Y} ψ(y, X, Y)] gets into the scope of a corresponding quantifier or LFP operator in ϕ. Then

    [LFP_{x,X} ϕ(x, X, [LFP_{y,Y} ψ(y, X, Y)])] t                                    (A.1)

is equivalent to a formula of the form ∃(∀)u [LFP_{z,Z} χ(z, Z)] u, where χ is first order.
We will denote a tuple consisting only of a's by ã; the length of ã will be clear from context.

A.2 Sections and the Simultaneous Induction Lemma for LFP

Next we deal with simultaneous fixed points. Recall that simultaneous inductions do not increase the expressive power of LFP. The proof utilizes a coding procedure whereby each simultaneous induction is embedded as a section in a single LFP operation of higher arity. First, we introduce the notion of a section.

Definition A.1. Let R be a relation of arity (k + l) on A and a ∈ A^l. Then the a-section of R, denoted by R_a, is given by

    R_a := {b ∈ A^k : R(ba)}.

Next we see how sections can be used to encode multiple simultaneous operators producing relations of lower arity into a single operator producing a relation of higher arity. Let m operators F_1, ..., F_m act as follows:

    F_1 : P(A^{k_1}) × ··· × P(A^{k_m}) → P(A^{k_1})
    F_2 : P(A^{k_1}) × ··· × P(A^{k_m}) → P(A^{k_2})
    ⋮
    F_m : P(A^{k_1}) × ··· × P(A^{k_m}) → P(A^{k_m})

We wish to embed these operators as sections of a “larger” operator, which is known as their simultaneous join.
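A small sketch of the a-section, assuming a toy encoding in which relations are sets of Python tuples (the relation R and the tuple a below are hypothetical examples, not objects from the text):

# Minimal sketch of the a-section of a relation (Definition A.1).
def section(R, a):
    """R_a := { b : R(b + a) }, where b ranges over prefixes of tuples in R."""
    la = len(a)
    return {t[:-la] for t in R if t[-la:] == tuple(a)}

# A ternary relation on a small domain, viewed as arity (k + l) with k = 2, l = 1.
R = {(1, 2, 0), (3, 4, 0), (5, 6, 1)}
print(section(R, (0,)))   # {(1, 2), (3, 4)}, the 0-section
print(section(R, (1,)))   # {(5, 6)}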
Definition A.2. Let F_1, ..., F_m be operators acting as above. Set k := max{k_1, ..., k_m} + m + 1. The simultaneous join of F_1, ..., F_m, denoted by J(F_1, ..., F_m), is an operator acting as

    J(F_1, ..., F_m) : P(A^k) → P(A^k)                                               (A.2)

such that for any a, b ∈ A with a ≠ b, the simultaneous join is given by

    J(R) := ⋃_{a,b ∈ A, a ≠ b} ((F_1(R_{ãb̃_1}, ..., R_{ãb̃_m}) × {ãb̃_1}) ∪ ··· ∪ (F_m(R_{ãb̃_1}, ..., R_{ãb̃_m}) × {ãb̃_m})).        (A.3)

The simultaneous join operator defined above has properties we will need to use. These are collected below.

Lemma A.3. The i-th power J^i of the simultaneous join operator satisfies

    J^i = ⋃_{a,b ∈ A, a ≠ b} ((F_1^i × {ãb̃_1}) ∪ ··· ∪ (F_m^i × {ãb̃_m})).            (A.4)

In particular, the ãb̃_i-section (where the length of ãb̃_i here is k − k_i) of the n-th power of J is the n-th power of the operator F_i.

The following corollaries are now immediate.

Corollary A.4. The simultaneous join of inductive operators is inductive.

Corollary A.5. The fixed point J^∞ of the simultaneous join of operators (F_1, ..., F_m) exists if and only if their simultaneous fixed point (F_1^∞, ..., F_m^∞) exists.

Finally, we need to show that the simultaneous join can itself be expressed as a LFP computation. We need formulas that will help us define sections of a simultaneous induction; concretely, since the sections are coded using tuples of the form ãb̃_i, we will need formulas that can express this.

Definition A.6. For l ≥ 1 and i = 1, ..., l, the section formulas δ_i(x_1, ..., x_l, v, w) are given by

    δ_1(x_1, ..., x_l, v, w) := ¬(v = w) ∧ (x_1 = ··· = x_l = v),
    δ_i(x_1, ..., x_l, v, w) := ¬(v = w) ∧ (x_1 = ··· = x_{l−i+1} = v) ∧ (x_{l−i+2} = ··· = x_l = w)   for i > 1.        (A.5)

For distinct a, b ∈ A, we have A ⊨ δ_i[ãb̃_j ab] if and only if i = j.

Now we are ready to show that simultaneous fixed-point inductions of formulas can be replaced by the fixed point induction of a single formula.
Furthermore. . x1 ). . . . . . . . . let ϕ1 . 108 . v. . . . xm ) 108 be formulas of LFP. zk ) ∧ δ1 (z1 . zk ) ∧ δ2 (z1 . . v. .APPENDIX A. . . z1 . .6) ˜ ˜ Then. . zk . km }+ m + 1. As always. . v. . . we let Ri be a ki ary relation and xi be a ki tuple. . . . Zvwm . zk . z1 . . . . the relation computed by the least ﬁxed point of χJ contains all the individual least ﬁxed points computed by the simultaneous induction as its sections. . Set k := max{k1 . . Rm . . . . . . . . ϕm (R1 .7. . . . z1 . Rm . . Rm . . . ϕm be positive in R1 . . zk ) ∧ δm (z1 . . w)) ˜ ˜ k ∨ (ϕ2 (Zvw1 . . . w)) ˜ ˜ . . . . zk . . zk ) := ∃v∃w(¬v = w∧ k ((ϕ1 (Zvw1 . . Zvwm . z1 . . . . . k ∨ (ϕm (Zvw1 . Let ϕ1 (R1 . REDUCTION TO A SINGLE LFP OPERATION Deﬁnition A. . Zvwm . Deﬁne a new ﬁrst order formula χJ having k variables and computing a single kary relation Z by χJ (Z. . . . . . . w)))) (A. . . . .
Bibliography

[ACO08] D. Achlioptas and A. Coja-Oghlan. Algorithmic barriers from phase transitions. 2008. arXiv:0803.2122v2 [math.CO].

[AM00] Srinivas M. Aji and Robert J. McEliece. The generalized distributive law. IEEE Trans. Inform. Theory, 46(2):325–343, 2000.

[AP04] Dimitris Achlioptas and Yuval Peres. The threshold for random k-SAT is 2^k log 2 − O(k). J. Amer. Math. Soc., 17(4):947–973 (electronic), 2004.

[ART06] Dimitris Achlioptas and Federico Ricci-Tersenghi. On the solution-space geometry of random constraint satisfaction problems. In STOC'06: Proceedings of the 38th Annual ACM Symposium on Theory of Computing, pages 130–139. ACM, New York, 2006.

[AV91] Serge Abiteboul and Victor Vianu. Datalog extensions for database queries and updates. J. Comput. Syst. Sci., 43(1):62–124, 1991.

[AV95] Serge Abiteboul and Victor Vianu. Computing with first-order logic. Journal of Computer and System Sciences, 50:309–335, 1995.

[BDG95] José Luis Balcázar, Josep Díaz, and Joaquim Gabarró. Structural complexity. I. Texts in Theoretical Computer Science. An EATCS Series. Springer-Verlag, Berlin, second edition, 1995.

[Bes74] Julian Besag. Spatial interaction and the statistical analysis of lattice systems. J. Roy. Statist. Soc. Ser. B, 36:192–236, 1974. With discussion by D. R. Cox, A. G. Hawkes, P. Clifford, P. Whittle, K. Ord, R. Mead, J. M. Hammersley, and M. S. Bartlett, and with a reply by the author.
[BGS75] Theodore Baker, John Gill, and Robert Solovay. Relativizations of the P =? NP question. SIAM J. Comput., 4(4):431–442, 1975.

[Bis06] Christopher M. Bishop. Pattern recognition and machine learning. Information Science and Statistics. Springer, New York, 2006.

[BMW00] G. Biroli, R. Monasson, and M. Weigt. A variational description of the ground state structure in random satisfiability problems. European Physical Journal B, 14:551–568, 2000.

[CF86] Ming-Te Chao and John V. Franco. Probabilistic analysis of two heuristics for the 3-satisfiability problem. SIAM J. Comput., 15(4):1106–1118, 1986.

[CKT91] Peter Cheeseman, Bob Kanefsky, and William M. Taylor. Where the really hard problems are. In IJCAI, pages 331–340, 1991.

[CO09] A. Coja-Oghlan. A better algorithm for random k-SAT. 2009. arXiv:0902.3583v1 [math.CO].

[Coo71] Stephen A. Cook. The complexity of theorem-proving procedures. In STOC '71: Proceedings of the third annual ACM symposium on Theory of computing, pages 151–158, New York, NY, USA, 1971. ACM Press.

[Coo06] Stephen Cook. The P versus NP problem. In The millennium prize problems, pages 87–104. Clay Math. Inst., Cambridge, MA, 2006.

[Daw79] A. Philip Dawid. Conditional independence in statistical theory. J. Roy. Statist. Soc. Ser. B, 41(1):1–31, 1979.

[Daw80] A. Philip Dawid. Conditional independence for statistical operations. Ann. Statist., 8(3):598–617, 1980.
[Deo10] Vinay Deolalikar. A distribution centric approach to constraint satisfaction problems. Under preparation, 2010.

[DLW95] Anuj Dawar, Steven Lindell, and Scott Weinstein. Infinitary logic and inductive definability over finite structures. Inform. and Comput., 119(2):160–175, 1995.

[DMMZ08] Hervé Daudé, Thierry Mora, Marc Mézard, and Riccardo Zecchina. Pairs of sat-assignments in random boolean formulæ. Theor. Comput. Sci., 393(1-3):260–279, 2008.

[Dob68] R. L. Dobrushin. The description of a random field by means of conditional probabilities and conditions on its regularity. Theory Prob. Appl., 13:197–224, 1968.

[Edm65] Jack Edmonds. Minimum partition of a matroid into independent subsets. Journal of Research of the National Bureau of Standards, 69:67–72, 1965.

[EF06] Heinz-Dieter Ebbinghaus and Jörg Flum. Finite model theory. Springer Monographs in Mathematics. Springer-Verlag, Berlin, enlarged edition, 2006.

[Fag74] Ronald Fagin. Generalized first-order spectra and polynomial-time recognizable sets. In Complexity of computation (Proc. SIAM-AMS Sympos., New York, 1973), SIAM–AMS Proc., Vol. VII, pages 43–73. Amer. Math. Soc., Providence, R.I., 1974.

[Fri99] E. Friedgut. Necessary and sufficient conditions for sharp thresholds and the k-sat problem. J. Amer. Math. Soc., 12(20):1017–1054, 1999.

[FSV95] Ronald Fagin, Larry J. Stockmeyer, and Moshe Y. Vardi. On monadic NP vs. monadic co-NP. Inform. and Comput., 120(1):78–92, 1995.
[Gai82] Haim Gaifman. On local and nonlocal properties. In Proceedings of the Herbrand symposium (Marseilles, 1981), volume 107 of Stud. Logic Found. Math., pages 105–135. North-Holland, Amsterdam, 1982.

[GG84] Stuart Geman and Donald Geman. Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(6):721–741, November 1984.

[GJ79] Michael R. Garey and David S. Johnson. Computers and intractability: A guide to the theory of NP-completeness. A Series of Books in the Mathematical Sciences. W. H. Freeman and Co., San Francisco, 1979.

[GS00] Martin Grohe and Thomas Schwentick. Locality of order-invariant first-order formulas. ACM Trans. Comput. Log., 1(1):112–130, 2000.

[Han65] William Hanf. Model-theoretic methods in the study of elementary logic. In Theory of Models (Proc. 1963 Internat. Sympos. Berkeley), pages 132–145. North-Holland, Amsterdam, 1965.

[HC71] J. M. Hammersley and P. Clifford. Markov fields on finite graphs and lattices. 1971.

[HH76] J. Hartmanis and J. Hopcroft. Independence results in computer science. SIGACT News, 8(4):13–24, 1976.

[Hod93] Wilfrid Hodges. Model theory, volume 42 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, 1993.

[Imm82] Neil Immerman. Relational queries computable in polynomial time (extended abstract). In STOC '82: Proceedings of the fourteenth annual ACM symposium on Theory of computing, pages 147–152, New York, NY, USA, 1982. ACM.
[Imm86] Neil Immerman. Relational queries computable in polynomial time. Inform. and Control, 68(1-3):86–104, 1986.

[Imm99] Neil Immerman. Descriptive complexity. Graduate Texts in Computer Science. Springer-Verlag, New York, 1999.

[Kar72] R. M. Karp. Reducibility among combinatorial problems. In R. E. Miller and J. W. Thatcher, editors, Complexity of Computer Computations, pages 85–103. Plenum Press, 1972.

[KF09] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.

[KFaL98] Frank R. Kschischang, Brendan J. Frey, and Hans-Andrea Loeliger. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47:498–519, 1998.

[KMRT+06] Florent Krzakala, Andrea Montanari, Federico Ricci-Tersenghi, Guilhem Semerjian, and Lenka Zdeborová. Gibbs states and the set of solutions of random constraint satisfaction problems. CoRR, abs/cond-mat/0612365, 2006.

[KMRT+07] Florent Krzakała, Andrea Montanari, Federico Ricci-Tersenghi, Guilhem Semerjian, and Lenka Zdeborová. Gibbs states and the set of solutions of random constraint satisfaction problems. Proc. Natl. Acad. Sci. USA, 104(25):10318–10323 (electronic), 2007.

[KS80] R. Kinderman and J. L. Snell. Markov random fields and their applications, 1:1–142. American Mathematical Society, 1980.

[KS94] Scott Kirkpatrick and Bart Selman. Critical behavior in the satisfiability of random boolean formulae. Science, 264:1297–1301, 1994.

[KSC84] Harri Kiiveri, T. P. Speed, and J. B. Carlin. Recursive causal models. J. Austral. Math. Soc. Ser. A, 36(1):30–52, 1984.
[Lau96] Steffen L. Lauritzen. Graphical models, volume 17 of Oxford Statistical Science Series. The Clarendon Press Oxford University Press, New York, 1996. Oxford Science Publications.

[LDLL90] S. L. Lauritzen, A. P. Dawid, B. N. Larsen, and H.-G. Leimer. Independence properties of directed Markov fields. Networks, 20(5):491–505, 1990. Special issue on influence diagrams.

[Lev73] Leonid A. Levin. Universal sequential search problems. Problems of Information Transmission, 9(3), 1973.

[Li09] Stan Z. Li. Markov random field modeling in image analysis. Advances in Pattern Recognition. Springer-Verlag London Ltd., London, third edition, 2009. With forewords by Anil K. Jain and Rama Chellappa.

[Lib04] Leonid Libkin. Elements of finite model theory. Texts in Theoretical Computer Science. An EATCS Series. Springer-Verlag, Berlin, 2004.

[Lin05] S. Lindell. Computing monadic fixed points in linear time on doubly linked data structures. 2005. Available online at http://citeseerx.ist.psu.edu/doi=10.1.1.122.1447.

[LR03] Richard Lassaigne and Michel De Rougemont. Logic and Complexity. Springer-Verlag, London, 2003.

[MA02] Cristopher Moore and Dimitris Achlioptas. Random k-sat: Two moments suffice to cross a sharp threshold. In FOCS, pages 779–788, 2002.

[MM09] Marc Mézard and Andrea Montanari. Information, physics, and computation. Oxford Graduate Texts. Oxford University Press, Oxford, 2009.
[MMW05] Elitza N. Maneva, Elchanan Mossel, and Martin J. Wainwright. A new look at survey propagation and its generalizations. In SODA, pages 1089–1098, 2005.

[MMW07] Elitza Maneva, Elchanan Mossel, and Martin J. Wainwright. A new look at survey propagation and its generalizations. J. ACM, 54(4):Art. 17, 41 pp. (electronic), 2007.

[MMZ05] M. Mézard, T. Mora, and R. Zecchina. Clustering of solutions in the random satisfiability problem. Phys. Rev. Lett., 94(19):197–205, May 2005.

[Mos74] Yiannis N. Moschovakis. Elementary induction on abstract structures. Studies in Logic and the Foundations of Mathematics, Vol. 77. North-Holland Publishing Co., Amsterdam, 1974.

[Mou74] John Moussouris. Gibbs and Markov random systems with constraints. J. Statist. Phys., 10:11–33, 1974.

[MPV87] Marc Mézard, Giorgio Parisi, and Miguel Angel Virasoro. Spin glass theory and beyond, volume 9 of World Scientific Lecture Notes in Physics. World Scientific Publishing Co., Inc., Teaneck, NJ, 1987.

[MPZ02] M. Mézard, G. Parisi, and R. Zecchina. Analytic and Algorithmic Solution of Random Satisfiability Problems. Science, 297(August):812–815, 2002.

[MRTS07] Andrea Montanari, Federico Ricci-Tersenghi, and Guilhem Semerjian. Solving constraint satisfaction problems through belief propagation-guided decimation. Sep 2007.

[MSL92] David Mitchell, Bart Selman, and Hector Levesque. Hard and easy distributions of SAT problems. In AAAI, pages 459–465, 1992.

[MZ97] Rémi Monasson and Riccardo Zecchina. Statistical mechanics of the random k-satisfiability model. Phys. Rev. E, 56(2):1357–1370, Aug 1997.
1999. [See96] Detlef Seese. PQ. 55(1. [Put65] 116 Random ksatisﬁability problem: From an analytic solution to an efﬁcient algorithm. e Rev. 1992. pages 444– 454. In STOC ’82: Proceedings of the fourteenth annual ACM symposium on Theory of computing.a computational complexity perspective. 1997. Razborov and Steven Rudich. USA. 1965. [SB99] Thomas Schwentick and Klaus Barthelmann.. pages 603–618. Structures Comput. Trial and error predicates and the solution to a problem of mostowski. 1996. Springer Verlag. Math. Vardi. 1995). E. 2007.. J. 1:665–712. Introduction to the Theory of Computation. [Sip92] Michael Sipser. Linear time computable problems and ﬁrstorder descriptions. Symb. 30(1):49–57. Joint COMPUGRAPH/SEMAGRAPH Workshop on Graph Rewriting and Computation (Volterra. [Sip97] M. The history and status of the p versus np question. J. System Sci. Sci. ACM. Log. [Var82] Moshe Y. Local normal forms for ﬁrstorder logic with applications to games and automata. Nov 2002. PWS Publishing Company. 1997. P. 26th Annual ACM Symposium on the Theory of Computing (STOC ’94) (Montreal. Hilary Putnam. part 1):24–35. Comput. Natural proofs. 1982. Sipser. NY. 1994).. and Mathematics . Proceedings of the ICM 2006.BIBLIOGRAPHY [MZ02] Marc M´ zard and Riccardo Zecchina. [RR97] Alexander A. New York. pages 137–146. Phys. 6(6):505–526. 66(5):056126. NP. 116 . [Wig07] Avi Wigderson. In Discrete Mathematics and Theoretical Computer Science. In STOC. The complexity of relational query languages (extended abstract).