You are on page 1of 18

Computers & Operations Research 33 (2006) 2787 – 2804

www.elsevier.com/locate/cor

Exploiting semidefinite relaxations in constraint programming
Willem-Jan van Hoeve
CWI, P.O. Box 94079, 1090 GB Amsterdam, The Netherlands Available online 5 March 2005

Abstract Constraint programming uses enumeration and search tree pruning to solve combinatorial optimization problems. In order to speed up this solution process, we investigate the use of semidefinite relaxations within constraint programming. In principle, we use the solution of a semidefinite relaxation to guide the traversal of the search tree, using a limited discrepancy search strategy. Furthermore, a semidefinite relaxation produces a bound for the solution value, which is used to prune parts of the search tree. Experimental results on stable set and maximum clique problem instances show that constraint programming can indeed greatly benefit from semidefinite relaxations. 2005 Elsevier Ltd. All rights reserved.
Keywords: Constraint programming; Semidefinite programming; Limited discrepancy search

1. Introduction Constraint programming models for combinatorial optimization problems consist of variables on finite domains, constraints on those variables and an objective function to be optimized. In general, constraint programming solvers use domain value enumeration to solve combinatorial optimization problems. By propagation of the constraints (i.e. removal of inconsistent values), large parts of the resulting search tree may be pruned. Because combinatorial optimization problems are NP-hard in general, constraint propagation is essential to make constraint programming solvers practically applicable. Another essential part concerns the enumeration scheme, that defines and traverses a search tree. Variable and value ordering heuristics as well as tree traversal heuristics greatly influence the performance of the resulting constraint programming solver.
E-mail address: W.J.van.Hoeve@cwi.nl. URL: http://homepages.cwi.nl/∼wjvh/ 0305-0548/$ - see front matter doi:10.1016/j.cor.2005.01.011 2005 Elsevier Ltd. All rights reserved.

standard linear relaxations are not very tight and not informative. and by enlarging the model the solution process may slow down. semidefinite programs are more time-consuming to . which is more concise while preserving the behaviour of the algorithm. although not all problems will be equally suitable. in [2] the method was proposed for stable set problems only. By applying a semidefinite relaxation in this way. in [2] the method uses a subproblem generation framework on which limited discrepancy search is applied. on maximum clique problems. linear relaxations are widely used. However. Namely. being tighter. two classical combinatorial optimization problems. We conclude in Section 7 with a summary and future directions. and gradually scans a wider area around this solution. Motivation NP-hard combinatorial optimization problems are often solved with the use of a polynomially solvable relaxation. The outline of the paper is as follows. we use the solution value of the semidefinite relaxation as a bound for the objective function. A description of our solution framework is given in Section 4. We propose to use the solution of a semidefinite relaxation to define search tree ordering and traversal heuristics. This investigation involves the extraction of semidefinite relaxations from a constraint programming model. Often (continuous) linear relaxations are chosen for this purpose.-J. Moreover. more experimental results are presented. several papers on approximation theory following [4] have shown the tightness of semidefinite relaxations. In Section 5 we introduce the stable set problem and the maximum clique problem. As computational results will show. We compare our method with a standard constraint programming solver. In the current work it has been replaced by limited discrepancy search on single values. These ideas were motivated by a previous work [1].2788 W. integer optimization formulations and a semidefinite relaxation. the specialized solvers appear to be much faster than our method. in Section 3 some preliminaries on constraint and semidefinite programming are given. One way to overcome this problem is to identify and add linear constraints that strengthen the relaxation. The next section gives a motivation for the approach proposed in this work. and the actual use of the relaxation inside the solution scheme. Furthermore. this means that our enumeration scheme starts at the suggestion made by the semidefinite relaxation. 2. Finally. Also within constraint programming. in which a linear relaxation was proved to be helpful in constraint programming. On the other hand. We implemented our method and provide experimental results on the stable set problem and the maximum clique problem. our method obtains far better results than a standard constraint programming solver. we hope to speed up the constraint programming solver significantly. However. while in this paper we propose the method to be applied to any constraint programming problem. see [3] for an overview. a more general view on the proposed method is presented. Then. But it may be time-consuming to identify such constraints. including problem characterizations and instances of the DIMACS benchmark set for the maximum clique problem. For some problems. and with specialized solvers for maximum clique problems. for instance for the stable set problem. which results in stronger pruning. This paper is an extended and revised version of [2]. Let us first motivate why in this paper a semidefinite relaxation is used rather than a linear relaxation. Effectively. van Hoeve / Computers & Operations Research 33 (2006) 2787 – 2804 In this work we investigate the possibility of using semidefinite relaxations in constraint programming. In the current version. Section 6 presents the computational results.

we are willing to make the trade-off in favour of the semidefinite relaxation. Because there are exponentially many possible assignments. provided that the constraint programming solver contains an algorithm to check its satisfiability. The general solution scheme is an iterative process in which branching decisions are made. Finally. constraint propagation is needed to prune large parts of the corresponding search tree. In case of optimization problems. The next waves (discrepancy i. Hence. greatly influence the constraint propagation. However. To our knowledge the cross-fertilization of semidefinite programming and constraint programming has not yet been investigated. and depth-first search to traverse the tree. and the effects are propagated subsequently. when good heuristics are available. with i > 0). we briefly introduce the basic concepts of constraint programming that are used in this paper. while still preserving a complete (exact) solution scheme. Hence one has to trade strength for computation time. or even to identify globally inconsistent domain values. The first wave (discrepancy 0) exactly follows the heuristic. we propose to solve only once a relaxation. If no suitable variable and value ordering heuristics are available. Variable and value ordering heuristics. etcetera). Therefore. In this work the variable domains are assumed to be finite. Constraint propagation tries to remove inconsistent values from variable domains before the variables are actually instantiated. investigating the possibility of using semidefinite relaxations in constraint programming is worthwile in itself. Typically. Constraint programming In this section. it will lead us directly to the optimal solution (possibly unproven). For some (large scale) applications. which define the search tree. corresponding variable domains. some heuristics come pretty close. they should be applied. logical. symbolic. this paper should be seen as a first step toward the cooperation of constraint programming and semidefinite programming. Preliminaries 3. A constraint c is defined as a subset of the Cartesian product of the domains of the variables that are in c. Moreover. Constraints may be of any form (linear. and with that the performance of the solver. LDS is organized in waves of increasing discrepancy from the first solution provided by the heuristic. A thorough explanation of the principles of constraint programming can be found in [6]. one should try to deviate from the first heuristic solution as little as possible. A constraint programming model consists of a set of variables. When ‘perfect’ value and variable ordering heuristics are followed. before entering the search tree. LDS is applied until a maximum discrepancy has been reached. Instead. one does not need to generate the whole search tree. our intention is not to solve a relaxation at every node of the search tree. . constraint programming solvers often use a lexicographic variable and value ordering.-J. In such cases. explore all the solutions that can be reached when i derivations from the heuristic are made. Although being incomplete (inexact). and a set of constraints restricting those variables. This is done by traversing the search tree using a limited discrepancy search strategy (LDS) [7] instead of depth-first search.W. but only a part of it. Basically. nonlinear. van Hoeve / Computers & Operations Research 33 (2006) 2787 – 2804 2789 solve than linear programs in practice. also an objective function is added. Although perfect heuristics are often not available. a constraint programming solver tries to find a solution of the model by enumerating all possible variable-value assignments such that the constraints are all satisfied. say 3 or 4. Hence.1. 3. semidefinite relaxations are well-suited to be used within a branch and bound framework (see for instance [5]).

. tr(X) = n Xii . then a semidefinite programming constraint tr(Aj X) bj corresponds to a linear T programming constraint aj x bj . Then we give a description of the usage of the relaxation within the enumeration scheme. where aj represents the diagonal of Aj . 3. . . Semidefinite programs have the form max tr(W X) s. The m reals bj and the m matrices Aj define m constraints.9].1. . quadratic programming problems. or search tree. Theoretically. i=1 The matrix X. nowadays fast ‘interior point’ methods are being used for this purpose (see [13] for an overview). where the matrix X is replaced by a non-negative vector of variables x ∈ Rn . Namely. We can view semidefinite programming as an extension of linear programming.e.1 the resulting semidefinite program is equal to a linear program. the maximum cut problem. . as explained in Section 3. and many others (see [11] for an overview). provided that the heuristic is informative. Of course LDS can also be applied until all possible discrepancies have been considered. it is not a trivial task to obtain a computationally efficient semidefinite program that provides a tight solution for a given problem. . m) are all supposed to be diagonal matrices. 4. . tr(Aj X) bj (j = 1.2790 W. m). . semidefinite programs have been proved to be polynomially solvable to any fixed precision using the so-called ellipsoid method (see for instance [12]). In practice. In this section we first show how to extract a semidefinite relaxation from a constraint programming model. for a number of combinatorial optimization problems such semidefinite relaxations do exist. van Hoeve / Computers & Operations Research 33 (2006) 2787 – 2804 the resulting strategy often finds good solutions very fast. 1 A diagonal matrix is a matrix with non-negative values on its diagonal entries only. which is the sum of its diagonal elements.A matrix X ∈ Rn×n is said to be positive semidefinite (denoted by X 0) when y T Xy 0 for all vectors y ∈ Rn . Semidefinite programming In this section. . However.11].-J. Semidefinite programming makes use of positive semidefinite matrices of variables. (1) Here tr(X) denotes the trace of X. resulting in a complete strategy. Semidefinite programs may serve as a continuous relaxation for (integer) combinatorial optimization problems. In particular. the maximum satisfiability problem. A large number of references to papers concerning semidefinite programming are on the web pages [8. Unfortunately. X 0. we briefly introduce semidefinite programming. when the matrices W and Aj (j = 1. A general introduction on semidefinite programming applied to combinatorial optimization is given in [10. Within this skeleton. Solution framework The skeleton of our solution framework is formed by the constraint programming enumeration scheme.t. we want to use the solution of a semidefinite relaxation to define the variable and value ordering heuristics. for instance the stable set problem.2. the cost matrix W ∈ Rn×n and the constraint matrices Aj ∈ Rn×n are supposed to be symmetric. i.

this relaxation can be used inside our framework. which will be discussed below. In that case. where N is a positive integer. Note however that the latter constraint is relaxed by requiring X to be positive semidefinite. In general. Condition (4) expresses the fact that di2 = di . . for linear constraints on binary variables a straightforward translation is given in Section 3. not all constraints may be trivially transformed accordingly.1. van Hoeve / Computers & Operations Research 33 (2006) 2787 – 2804 2791 4. . Otherwise. presented in [14]. 1}. we may choose among the set of constraints an appropriate subset to include in the relaxation. . Let d ∈ {0. N}.2. In general. . we transform the variables vi and the domains Di into corresponding binary variables xij for i ∈ {1. . Dn are binary. Although there is no ‘recipe’ to transform any given original constraint into the form of program (1). Then X can be constrained to satisfy X 0. On the other hand. we need to rewrite (a part of) the original constraints into the form of program (1) in order to build the semidefinite relaxation. the domains are non-binary. Dn }. From this model we need to extract a semidefinite relaxation. .W. . . which is explained below. however. The method to transform a model with binary variables into a semidefinite relaxation. we need to build up a relaxation from scratch. Obviously. . a set of constraints and an objective function. This can be done in the following way. the diagonal entries (as well as the first row and column) of this matrix represent the binary variables from which we started. The matrix X contains the variables to model our semidefinite relaxation. a semidefinite relaxation can be extracted using a method proposed by [14]. Especially because the original constraints may be of any form. The same holds for the objective function. vn }. . . one may use results from the literature [11]. . the transformation has consequences for the constraints also. ∀i ∈ {1. Construct the (N + 1) × (N + 1) variable matrix X as X= 1 (1 d T ) = d 1 d dT ddT . is the following. . . . a relaxation is obtained by removing or replacing one or more constraints such that all solutions are preserved. Using these variables. (2) We will then use the binary variables xij to construct a semidefinite relaxation. the constraints itself are allowed to be relaxed. Of course. . Moreover. If all domains D1 . . For instance.-J. . In case the binary variables are obtained from transformation (2). . If it is possible to identify a subset of constraints for which a semidefinite relaxation is known. . 1}N be a vector of binary variables. Building a semidefinite relaxation We start from a constraint programming model consisting of a set of variables {v1 . (3) (4) Xii = X0i where the rows and columns of X are indexed from 0 to N. . which is equivalent to di ∈ {0. n} and j ∈ Di : vi = j ⇔ xij = 1. vi = j ⇔ xij = 0. corresponding finite domains {D1 . as we are constructing a relaxation.

A last remark concerns the solution value of the semidefinite relaxation. Then we assign value k to variable vj . Consider for example again the above matrix X. Our value ordering heuristic is to select first the corresponding suggested value. the solution to the semidefinite relaxation yields fractional values for its variable matrix. . These fractional values serve as an indication for the original constraint programming variables. where vj is a constraint programming variable and k ∈ Dj . which corresponds to assigning vj = k.1. Communication between constraint programming (CP) and semidefinite programming (SDP). it leads to more propagation and a smaller search space. then also xj k is supposed to be close to 1. If this bound is tight.2792 W. 1. also depicted in Fig. Applying the semidefinite relaxation At this point. for some k ∈ Dj and corresponding integer i. In general. Our variable ordering heuristic is to select first the constraint programming variable for which the corresponding fractional solution is closest to the corresponding integer solution. the selected variable is accepted with a probability proportional to the corresponding fractional value. or built up our own relaxation. which is the case in our experiments. Assume that variable Xii corresponds to the binary variable xj k (for some integer j and k). We have also implemented a randomized variant of the above variable ordering heuristic. We expect the semidefinite relaxation to provide promising values. We select first the variable vj for which Xii . we have either identified a subset of constraints for which a semidefinite relaxation exists. is closest to 1. If variable Xii is close to 1. the diagonal variables Xii of the above matrix will be assigned to a value between 0 and 1. For example. consider again the above matrix X. van Hoeve / Computers & Operations Research 33 (2006) 2787 – 2804 CP model: variables domains constraints objective function search: variable ordering value ordering traversal strategy (LDS) solve relaxation: solution value fractional solution SDP Fig. obtained from non-binary variables by transformation (2).2. Hence. For example. Therefore the resulting search tree will be traversed using limited discrepancy search. and suppose it is obtained from non-binary original variables.-J. 4. which corresponds to vj = k. our variable and value ordering heuristics for the constraint programming variables are based upon the fractional solution values of the corresponding variables in the semidefinite relaxation. 1. representing the binary variable xj k . In the randomized case. by transformation (2). Now we show how to apply the solution to the semidefinite relaxation inside the constraint programming framework. which is used as a bound on the objective function in the constraint programming model. defined in Section 3.

and the equivalence of the two problems. Hence. but with edge set E = {(i. We introduce binary variables to indicate whether or not a vertex belongs to the stable set S. 1} ∀(i. where V = {1. First we give their definitions. 5.3 The complement graph of G is G = (V . E). and focus on the stable set problem. this problem amounts to the maximum cardinality stable set problem. So. . van Hoeve / Computers & Operations Research 33 (2006) 2787 – 2804 2793 5. with initial domains {0. it is not necessary to make this distinction. j ) ∈ E. j ∈ V . Integer optimization formulation Let us first consider an integer linear programming formulation for the stable set problem. Finally.2. (i. for which good semidefinite relaxations exist. ∀i ∈ V . which has been shown to be already NP-hard [17]. and xi = 0 otherwise. We can now state the objective function.1. with the same set of vertices V = {1. .W. .2 In the unweighted case (when all weights are equal to 1).-J. a maximum / clique problem can be translated into a stable set problem on the complement graph. From this. n} is the set of vertices and E a subset of edges {(i. . xi = 1 if vertex i is in S.16] for a survey). (5) 2 In the literature (G) usually denotes the unweighted stable set number. . j ∈ V . . j ) ∈ E. We will do exactly this in our implementation. . A stable set is a set S ⊆ V such that no two vertices in S are joined by an edge in E. This value is called the clique number of G and is denoted by (G).t. j ) ∈ E. on which we have tested our algorithm. . . for n vertices. The stable set problem and maximum clique problem This section describes the stable set problem and the maximum clique problem (see [15. 5. Then we will focus on the stable set problem. with |E| = m. n}. To each vertex i ∈ V a weight wi ∈ R is assigned (without loss of generality. 1}. The weighted stable set number is then denoted as w (G). The maximum clique problem is the problem of finding a clique of maximum total weight in G. we have n integer variables xi indexed by i ∈ V . i = j } of G. being the sum of the weights of vertices that are in S. Hence the integer linear programming model becomes: (G) = max s. E). It is well known that (G) = (G). In this way. The stable set problem is the problem of finding a stable set of maximum total weight in G. for all edges (i. as n wi xi . and formulate it as an integer optimization problem. In this work. j ) | i. j ) | i. A clique is a set C ⊆ V such that every two vertices in C are joined by an edge in E. 3 w (G) is defined similar to w (G) and also not distinguished in this paper. a semidefinite relaxation is inferred. This value is called the stable set number of G and is denoted by (G). n i=1 wi xi xi + xj 1 xi ∈ {0. we can assume all weights to be non-negative). Definitions Consider an undirected weighted graph G = (V . i = j }. we define the constraints that forbid two adjacent vertices to be both i=1 inside S as xi + xj 1.

(9) Note that program (9) can easily be rewritten into the general form of program (1). indicated by ϑ(G). . . . n}. . because Xij represents xi xj . except for (Ai )ii = 1 1 1. Namely. The semidefinite relaxation thus becomes ϑ(G) = max tr(W X) s. . ∀i ∈ {1. Then the objective function translates into tr(W X). Xii = X0i ∀i ∈ V . we can immediately define the (n + 1) × (n + 1) matrix variable X of our relaxation as X= 1 x xT xxT . As our constraint programming model uses binary variables already. . For its derivation into a form similar to program (1). van Hoeve / Computers & Operations Research 33 (2006) 2787 – 2804 Another way of describing the same solution set is presented by the following integer quadratic program (G) = max s. j ∈ {0. where the binary vector x again represents the stable set. This quadratic formulation will be used below to infer a semidefinite relaxation of the stable set problem. X 0. since the quadratic constraints take more time to propagate than the linear constraints. (Ai )i0 = − 2 and (Ai )0i = − 2 . n} (7) (8) Xii = X0i as described in Section 4.2. i = j. Wij = 0 ∀i ∈ {0. Xij = 0 ∀(i. which makes the corresponding right-hand side (bi entry) equal to 0 (similarly for the edge constraints). First we impose the constraints X 0. n}.t. .3. we first define the (n+1)×(n+1) weight matrix W as Wii = wi ∀i ∈ {1. In fact. n}. . j ) xi2 = xi ∀i ∈ V . . We have chosen the first model. (6) Note that here the constraint xi ∈ {0. both model (5) and model (6) can be used as a constraint programming model.-J. j ) ∈ E. while having the same pruning power. . however. we will use the equivalent model (6). ∈ E. The value of the objective function of this relaxation has been named the theta number of a graph G.t. . Semidefinite programming relaxation The integer quadratic program (6) gives rise to a well-known semidefinite relaxation introduced by Lovász [18] (see [12] for a comprehensive treatment). n i=1 wi xi xi xj = 0 ∀(i. Next we translate the edge constraints xi xj = 0 from program (6) into Xij = 0. . . similar to condition (4) in Section 4. In order to translate the objective function. as in Section 5. 1} is replaced by xi2 = xi . . 5.2794 W. Xii = X0i is equal to tr(Ai X) = 0 where the (n + 1) × (n + 1) matrix Ai consists of all zeroes. To infer the semidefinite relaxation. . . . we will follow the same idea as in Section 4 for the general case.

The reason for our choices is that both solvers are among the fastest in their field. j ∈ V . let the n×n cost matrix U be defined as Uij = wi wj for i. while program (9) uses matrices of dimension n + 1 and m + n constraints. In our implementation we have used the formulation that has been shown to be computationally most efficient among those alternatives [19]. we use the randomized variable ordering heuristic. In fact. In this case. on which our algorithms only use one processor of 400 MHz at a time.4. different from the above. The first algorithm is a sole constraint programming solver. It first solves the semidefinite program (10). . see [12]. The second algorithm is the one proposed in Section 4.0 [23] libraries for matrix computations. let ˜ x ∈ {0. tr(X) ˜ Xij = 0 ∀(i. version 5. The resulting search tree is traversed using a limited discrepancy search strategy. and CSDP is written in C.1 [21].-J. (where n is the number of variables). and the best solution found is the heuristic solution to be followed by the limited discrepancy search strategy. When (10) is solved to optimality. This means we use a lexicographic variable ordering. Let us introduce that particular formulation (called ϑ3 in [12]). Observe that in these definitions we exploit the fact that wi 0 for all i ∈ V . The following semidefinite program ϑ(G) = max ˜ tr(U X) ˜ = 1. see [12]. in order to improve our starting solution. and then calls the constraint programming solver. which uses a standard enumeration strategy. j ) ∈ E.2. As semidefinite programming solver. Define the n × n matrix X = T √ √ n where i = wi / j =1 wj xj xi . Furthermore. defined by the solution of the semidefinite relaxation. After each branching decision. 1}n be a vector of binary variables representing a stable set. Computational results All our experiments are performed on a Sun Enterprise 450 (4 X UltraSPARC-II 400 MHz) with maximum 2048 Mb memory size. s.1 [22] and LAPACK 3. Again. as was argued in Section 5. the scaled ˜ diagonal element ϑ(G)Xii (a fractional value between 0 and 1) serves as an indication for the value of xi (i ∈ V ) in a maximum stable set (see for instance [19]). they can be hooked together relatively easy. This gives an indication why program (10) is computationally more efficient. we use CSDP version 4. its effect is directly propagated through the constraints. 6.1 [20].t. we repeat the search for the first solution n times. Again. As constraint programming solver we use the ILOG Solver library. We distinguish two algorithms to perform our experiments. The resulting search tree is traversed using a depth-first search strategy.W. ˜ 0 X (10) has been shown to also give the theta number of G. and we select domain value 1 before value 0. van Hoeve / Computers & Operations Research 33 (2006) 2787 – 2804 2795 The theta number also arises from other formulations. with the optimized ATLAS 3. and because ILOG Solver is written in C++. Program (10) uses matrices of dimension n and m + 1 constraints. As constraint programming model we have used model (5). it is not difficult to rewrite program (10) into the general form of program (1).

the semidefinite program increases accordingly. we can identify the kind of problems our algorithm is suitable for. for the instances up to 0. 0.01 up to 0. 75. On the other hand.e.5 0.2. while the semidefinite programming solver only solves the semidefinite relaxation. Our aim is to identify the hardness of the instances for both solvers. From these figures. We therefore generated random graphs on 30.-J. parametrized by the density. i.2.4 0.5 0. Characterization of problem instances We will first identify general characteristics of the constraint programming solver and the semidefinite programming solver applied to stable set problems. We have plotted the performance of both solvers in Figs. 40. our semidefinite relaxation suffers from every edge that is added. Random weighted and unweighted graphs Our first experiments are performed on random weighted and unweighted graphs.2796 1e+07 W. 6. 2 and 3. 2.1 0.6 0. Here the constraint programming solver solves the problems to optimality. the constraint programming solver is expected to use less time than the semidefinite programming solver. For the constraint programming solver.15. m/((n2 − n)/2) for a graph with n vertices and m edges.7 0.1. 50 and 60 vertices.2. as well as its computation time. As the number of constraints increases. but a so-called primal-dual interior point algorithm. 125 and 150 vertices and edge density from 0.1 0. For graphs with a higher density. Here we see the effect of constraint propagation. the search tree can be heavily pruned. We generated graphs with 50. Note that we use a log-scale for time and number of backtracks in these pictures.01 0 0.8 0. corresponding to the .3 0. Based upon this information. Namely.7 0.3 0.95. 6. For the semidefinite programming solver we only plotted the time needed to solve the relaxation. Fortunately. 100.2 0.10 and 0.8 0. we can conclude that the constraint programming solver has the most difficulties with instances up to density around 0. van Hoeve / Computers & Operations Research 33 (2006) 2787 – 2804 10000 n=30 n=40 n=50 n=60 1000 100 10 1 0. As the density increases.2 0. It appears that both solvers are highly dependent on the edge density of the graph.9 1 0 0. Consequently. which makes the application of our method unnecessary. our algorithm is expected to behave best for graphs that have edge density up to around 0.05. with density ranging from 0.2. the computation time for the semidefinite relaxation is very small.9 1 n=30 n=40 n=50 n=60 log number of backtracks 1e+06 100000 10000 1000 100 10 edge density log time (s) edge density Fig. we depict both the number of backtracks and the time needed to prove optimality.1 0.4 0.6 0. Performance of the constraint programming solver on random instances with n vertices. this solver does not use a tree search.

7 0. We have used those instances that were solvable in reasonable time by the semidefinite programming solver (here reasonable means within 1000 s). Graphs arising from coding theory The next experiments are performed on structured (unweighted) graphs arising from coding theory. The results are reported in Table 2.01 0 0. and a column for the time needed by the semidefinite programming solver (sdp time). The first five columns of the table are dedicated to the instance.3. This becomes more obvious for larger n. reporting its name. . our method always finds better solutions than the standard constraint programming solver. the total time and the total number of backtracks needed to prove optimality. 6. A final observation concerns the discrepancy of the best found solutions. obtained from [24].9 1 edge density Fig. The last five columns (SDP+CP) present the performance of our method. Namely. in less time or within the time limit. Unweighted graphs on n vertices and edge density r are named ‘gndr’. Our method appears to find those (often optimal) solutions at rather low discrepancies. reporting the best found estimate of . The results are presented in Table 1. It shows the same behaviour as the results on random graphs.6 0.1 0.1 0. van Hoeve / Computers & Operations Research 33 (2006) 2787 – 2804 10000 n=30 n=40 1000 n=50 n=60 2797 100 log time (s) 10 1 0. where also a column has been added for the discrepancy of the best found value (best discr). the value of happened to be known already. 3.4 0. interesting problem area. Weighted graphs are similarly named ‘wgndr’.3 0.2 0. For these instances. the number of vertices n and edges m. The next three columns (CP) present the performance of the constraint programming solver. Table 1 shows that our approach always finds a better (or equally good) estimate for than the standard constraint programming approach. Performance of the semidefinite programming solver on random instances with n vertices. However. This is not surprising. the edge density and the (best known) value of a stable set. which follows the same format as Table 1.5 0.8 0. .-J. there are two (out of 30) instances in which our method needs substantially more time to achieve this result (g75d015 and wg75d010).W.

05 0.05 0.10 0.15 0.14 0.59 1.05 0.81 2.92 40.29 325.48 57.68 82.83 Limit Limit 170.07 4.10 0.59 6.54 5.49 0.10 0.5 38.34 0.4 15.56 Limit Limit Limit Limit Limit Limit Limit Limit Limit 4.2 51.29 10.4 2.92 12.08 2.15 0.06 0.-J.35 0.51 28.15 5.16 0.3 0.73 Limit 664.05 0.15 0.2798 Table 1 Computational results on random graphs.41 3.41 0.21 5.15 0.10 0.93 1.05 1.1 Limit Limit Limit Limit Limit Limit Limit Limit 0.29 0.05 0. The time limit is set to 1000 s.12 4.05 0.24 13.31 Limit Limit Limit Limit Limit Limit Limit Limit Limit 0 0 0 0 5 0 0 0 4 1 6 1 3 4 8 0 0 0 0 13 0 0 2 2 3 0 4 3 9 11 0.09 27.58 29.72 1.07 924.49 0. van Hoeve / Computers & Operations Research 33 (2006) 2787 – 2804 g50d005 g50d010 g50d015 g75d005 g75d010 g75d015 g100d005 g100d010 g100d015 g125d005 g125d010 g125d015 g150d005 g150d010 g150d015 wg50d005 wg50d010 wg50d015 wg75d005 wg75d010 wg75d015 wg100d005 wg100d010 wg100d015 wg125d005 wg125d010 wg125d015 wg150d005 wg150d010 wg150d015 31 24 49 33 27 52 38 29 740 636 568 1761 1198 972 2302 1778 1412 3779 2796 1991 4381 3265 2828 30 528 19 608 25 533 4 036 453 1 764 478 208 146 40 30 24 44 30 24 44 32 26 740 636 568 1761 1198 972 2176 1778 1412 3390 2175 1899 3759 2533 2518 31 24 49 33 27 52 38 29 740 636 568 1761 1198 972 2302 1778 1412 3779 2796 1991 4381 3265 2828 50 50 50 75 75 75 100 100 100 125 125 125 150 150 150 50 50 50 75 75 75 100 100 100 125 125 125 150 150 150 70 114 190 138 282 426 254 508 736 393 791 1160 545 1111 1566 70 126 171 128 284 409 233 488 750 372 767 1144 588 1167 1630 0.92 2.9 3.10 0.10 0. .09 744.41 4.15 0.57 40.81 4.26 0.06 0.94 9.05 0.10 0.10 0.27 0.10 0.62 2.05 0.39 18.14 0.86 0.15 0.09 0. with n vertices and m edges CP Edge density Total time Best discr 27 22 17 36 25 1 209 019 21 43 sdp time Total time 27 22 17 36 25 21 43 21 35 25 27 22 17 50 567 256 932 48 969 Backtracks Backtracks 0 0 0 0 1 641 692 0 SDP + CP Instance Name n m W.36 0.62 Limit Limit Limit Limit Limit Limit Limit Limit 0 0 13 042 0 1 974 913 87 490 0 All times are in seconds.

09 49.98 882.44 Limit Limit 273.512 1zc.128 1dc.95 882. van Hoeve / Computers & Operations Research 33 (2006) 2787 – 2804 All times are in seconds.18 0.02 0.64 1tc.86 5.128 1et.27 0.79 8.05 0.128 28 46 20 37 58 100 18 64 128 256 64 128 256 64 128 256 512 128 543 1471 3839 264 672 1664 192 512 1312 3264 2240 0.75 719.64 1et. 2799 .21 1.58 0.14 72.10 0.33 1.256 1tc.08 49.07 11.64 1dc.128 1tc.28 10 16 30 18 28 50 20 38 63 110 18 11.08 0.08 Limit Limit 0.12 0.04 0.22 107.Table 2 Computational results on graphs arising from coding theory.06 Limit Limit Limit Limit Limit Limit Limit 0 0 0 0 0 0 0 0 4 2 4 5. with n vertices and m edges CP Edge density Total time 10 16 26 18 2 312 832 28 50 20 38 63 110 18 79 519 10 16 30 18 Backtracks Best discr SDP + CP sdp time Total time Backtracks 0 0 0 0 Instance Name n m 1dc.56 129.18 Limit Limit Limit 0 0 W.06 0.256 1tc.256 1et.13 0.-J. The time limit is set to 1000 s.78 8.

We then proceeded our algorithm for a couple of seconds more. 6. For higher densities however. In general. we have transformed these maximum clique problems to stable set problems on the complement graph. our method outperforms the standard constraint programming approach. and follows a branch-and-bound approach. it is still interesting to see its performance on this standard benchmark set. with a corresponding propagation algorithm. In Table 4 we compare our method with two methods that are specialized for maximum clique problems. For all instances with edge density smaller than 0. Graphs from the DIMACS benchmarks set Our final experiments are performed on a subset of the DIMACS benchmark set for the maximum clique problem [25]. the opposite holds. following the time comparison a made in [28].1.spec.-J. using a special constraint for the maximum clique problem. although there is one instance on which our method performs best (san200_0. Again. The results are reported in Table 3. We stopped the semidefinite programming solver at the time limit of 1000 s.2800 W. our method finds the best solutions at a low discrepancy. The fractional solution values of the relaxation serve as an indication for the corresponding constraint programming variables.org/ . The difference in computation time to prove optimality is huge.24.4. and used its intermediate feasible solution as if it were the optimal fractional solution. Since all methods are performed on different machines. The second a method is a constraint programming approach. Conclusion and future work We have presented a method to use semidefinite relaxations within constraint programming.64 shows the strength of the semidefinite relaxation with respect to standard constraint programming. our method is outperformed by the other two methods. Moreover. as analyzed in Section 6. The first method was presented by Österg˚ rd [26]. 7.9_3). A machine comparison from SPEC4 shows that our times are comparable with the times of Österg˚ rd. This idea was introduced by Fahle [27] and extended and improved by Régin [28]. Although our method is not intended to be competitive with the best heuristics and exact methods for maximum clique problems. which again follows the same format as Table 1. The choice for this particular subset of instances is made by the solvability of an instance by a semidefinite programming solver in reasonable time (again. 4 http://www.64 and 1zc. A special treatment has been given to instance MANN_a45. We have multiplied the times of Régin with 3. Note that the instance 1et. van Hoeve / Computers & Operations Research 33 (2006) 2787 – 2804 because the edge density of these instances are exactly in the region in which our method is supposed to behave best (with the exception of 1dc. we need to identify a time ratio between them. the solution value of the relaxation is used as a bound for the corresponding constraint programming objective function.1. to search for a solution up to discrepancy 1.128). As pointed out in Section 5. This is exactly what could be expected from the analysis of Section 6. reasonable means 1000 s).

57 0.01 169.100 0.35 4.90 131.29 45.100 0.22 0.102 32 4 128 4 14 8 16 126 345 70 60 44 42 20.9_2 san200_0.9 103 156 45 36 26 34 64 64 256 28 70 120 45 378 1035 200 200 200 200 192 1312 1024 168 560 1680 72 702 1980 1990 1990 1990 2037 0.05 15.29 0.06 170.35 4.99 Limit 0 0 0 W.46 Limit Limit 170.83 43.19 169.010 0.17 70.51 157. The time limit is set to 1000 s.235 0. with n vertices and m edges CP Edge density 32 4 128 4 14 8 16 1 738 506 255 100 156 140 172 804 Total time Backtracks Best discr sdp time SDP + CP Total time Backtracks 0 706 0 0 0 0 411 104 Instance Name n m hamming6-2 hamming6-4 hamming8-2 johnson8-2-4 johnson8-4-4 johnson16-2-4 MANN_a9 MANN_a27 MANN_a45 san200_0.05 Limit 162.Table 3 Computational results on graphs from the DIMACS benchmark set for maximum clique problems.16 0.9_1 san200_0.095 0.32 82.9_3 sanr200_0.-J.68 27.69 28.072 0. van Hoeve / Computers & Operations Research 33 (2006) 2787 – 2804 All times are in seconds.82 43.444 0.28 Limit 0.100 0.232 0.35 157.10 45.29 1047.031 0.81 Limit Limit Limit Limit Limit Limit 32 4 128 4 14 8 16 125 338 70 60 44 41 0 0 0 0 0 0 1 3 1 0 0 0 4 0.651 0. 2801 .004 0.55 0.

651 0.095 0. Moreover.00 0. Specialized algorithms for the maximum clique problem however.51 157.27 0.9_1 san200_0.83 43. Possible future investigations include the comparison of our method with methods that use linear relaxations. Indeed.235 0. Computational results show that constraint programming can greatly benefit from semidefinite programming. generally outperform our method.01 0.00 55.24 Backtracks 17 42 65 14 140 250 505 50 1 258 768 1040 6638 758 545 541 496 32 4 128 4 14 8 16 125 338 70 60 44 41 CP + SDP Total time 0. structured graphs from coding theory.100 0.102 32 4 128 4 14 8 16 126 345 70 60 44 42 32 4 128 16 14 8 16 Österg˚ rd a Total time 0.28 > 10 000 > 10 000 32 4 128 4 14 8 16 126 345 70 60 44 42 Régin Total time 0.2802 W.32 82.40 0. for the stable set problem socalled clique-constraints (among others) can be added to both the semidefinite and the linear relaxation. Monique Laurent and Sebastian Brand for fruitful discussions and helpful comments while writing (earlier drafts of) this paper. for instance branch-and-cut algorithms.04 0. by adding socalled discrepancy constraints to the semidefinite or the linear relaxation. van Hoeve / Computers & Operations Research 33 (2006) 2787 – 2804 Table 4 A comparison of different methods on graphs from the DIMACS benchmark set for maximum clique problems. Acknowledgements Many thanks to Michela Milano. We have implemented our method to find the maximum stable set in a graph. our method obtains far better results.004 0.86 548. The current work has investigated the possibility of exploiting semidefinite relaxations in constraint programming. Finally.232 0.100 0.9_3 sanr200_0.01 > 10 000 > 10 000 0. the proof of optimality may be accelerated similar to a method presented in [1].10 45.19 169.00 0.10 450.12 7. Also thanks to the anonymous referees for useful comments.00 11. and recompute the solution to the relaxation.00 0.-J.99 > 1000 Backtracks 0 706 0 0 0 0 411 104 70 60 0 0 0 All times are in seconds.35 4.01 0. For instance.444 0.9_2 san200_0. one may investigate the effects of strengthening the relaxation by adding redundant constraints.01 0.01 0. and on a subset of the DIMACS benchmarks set for maximum clique problems.44 > 43 200 3.010 0.031 0.69 28.100 0.073 0. . Experiments are performed on random weighted and unweighted graphs.00 0.46 > 1000 > 1000 170.27 4. Compared to a standard constraint programming approach.55 0. the solution to the semidefinite relaxations turn out to be very informative. with n vertices and m edges Instance Name hamming6-2 hamming6-4 hamming8-2 johnson8-2-4 johnson8-4-4 johnson16-2-4 MANN_a9 MANN_a27 MANN_a45 san200_0.9 Edge density 0.

A hybrid constraint programming and semidefinite programming approach for the stable set problem. editors. 1995.. editor. In: Milano M. [4] Goemans MX. SIAM Journal on Optimization 2003. 3rd ed. 1999.tu-chemnitz. Poljak S.11(1):613–23 http://www. vol.rutgers. Semidefinite programming website. Lecture Notes in Computer Science. Petitet A. [10] Goemans M. Lovász L. [14] Laurent M. [2] van Hoeve WJ. Proceedings of the fourteenth international joint conference on artificial intelligence (IJCAI-95). p. Bischof C. Weismantel R.rutgers. In: Rossi F. 1993. A C library for semidefinite programming. Philadelphia. Reference manual. The semidefinite programming page. van Hoeve / Computers & Operations Research 33 (2006) 2787 – 2804 2803 References [1] Milano M.org/lapack/ [24] Sloane NJA. Handbook of semidefinite programming. Dordrecht: Kluwer. Xue J. Schrijver A.27(1–2):3–35 http://math-atlas. Parallel Computing 2001.1. Nemhauser G. Connections between semidefinite relaxations of the max-cut and stable set problems. [13] Alizadeh F. New York: Wiley. 2470. Rendl F. Steiglitz K. ILOG Solver 5.42(6):1115–45. [18] Lovász L.120:197–207. Reduced cost-based ranking for generating promising subproblems. Discrete Applied Mathematics 2002.. [16] Bomze IM. CWI. Semidefinite programming and integer programming. Hammarling S. PA: SIAM. Sorensen D. editor. Exploiting relaxations in CP. p. Challenge problems: independent sets in graphs. 2004. Blackford LS. SIAM Journal of Global Optimization 1994. 2003. Berlin: Springer. In: Wolkowicz H. a . SIAM Journal on Optimization 1995. INFORMS Journal on Computing 2000. [23] Anderson E. 2002. vol. Vandenberghe L. Limited discrepancy search. Bai Z. Amsterdam: Elsevier.4:301–28.nmt. Automated empirical optimizations of software and the ATLAS project.edu/∼alizadeh/ sdp. http://new-rutcor. Dongarra JJ. Journal of the ACM 1995. 1. [22] Whaley RC.html.de/∼helmberg/ semidef. Dongarra JJ. [11] Laurent M. Discrete optimization.edu/pub/challenge/ graph/benchmarks/clique [26] Österg˚ rd PRJ. McKenney A. Englewood Cliffs.5(1):13–51.25:1–7. http://www.com/∼njas/doc/ graphs. ftp://dimacs. 2001. 407–21. Williamson DP. [21] Borchers B.net/. Optimization Methods and Software 1999. 1988. p. In: Du D-Z. [12] Grötschel M. http://www-user. Lodi A. The maximum clique problem.12(3):177–91. Constraint and integer programming: toward a unified methodology. NJ: PrenticeHall.edu/∼borchers/csdp. vol. 1–16. Computational experience with stable set relaxations. 1999. [17] Papadimitriou CH. DIMACS maximum clique benchmark. [5] Karisch SE. [6] Apt KR. van Hoeve WJ. Saigal R. p. Greenbaum A.13(4):1014 –28.netlib. Interior point methods in semidefinite programming with applications to combinatorial optimization. 607–15. Milano M.html [25] DIMACS. Mathematical Programming 1997. Ginsberg ML. Rendl F. Cambridge: Cambridge University Press.-J.sourceforge.. Pardalos PM. [8] Helmberg C. Combinatorial optimization: algorithms and complexity. also available as Technical Report PNA-R0210. Geometric algorithms and combinatorial optimization. Budinich M. Eighth international conference on the principles and practice of constraint programming (CP’02). vol. 343–60. On the Shannon capacity of a graph. Du Croz J. Rendl F. A fast algorithm for the maximum clique problem. Demmel J. In: Mellish CS. Combinatorial optimization. Rendl F. The maximum clique problem. [7] Harvey WD.html. 2005. 2000.77:225–46.. [chapter 5]. 2003. Rendl F. editor. IEEE Transactions on Information Theory 1979. [3] Focacci F. Clausen J. 2833. Ninth international conference on principles and practice of constraint programming (CP’03). Berlin: Springer. http://www. editors. LAPACK Users’Guide. Handbooks in operations research and management science. Pelillo M. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. editor. [19] Gruber G. [15] Pardalos PM. Principles of constraint programming. Solving graph bisection problems with semidefinite programming. In: Aardal K. Handbook of combinatorial optimization.att. Dordrecht: Kluwer. [9] Alizadeh F.html. Amsterdam. 1982. In: van Hentenryck P.W. editors. [20] ILOG. Lecture Notes in Computer Science.research. 4. Pardalos PM. Dordrecht: Kluwer Academic Publishers.

In: 10th annual European symposium on algorithms (ESA 2002). 634–48.2804 W. editor. In: Rossi F. 2003. Lecture Notes in Computer Science. Ninth international conference on principles and practice of constraint programming (CP’03). p. Simple and fast: improving a branch-and-bound algorithm for maximum clique. Berlin: Springer. Lecture Notes in Computer Science. 2002. 2833. vol.-J. vol. 2461. van Hoeve / Computers & Operations Research 33 (2006) 2787 – 2804 [27] Fahle T. p. .. 485–98. Berlin: Springer. [28] Régin J-C. Using constraint programming to solve the maximum clique problem.