
Heuristic Algorithms - Contents

Cover Page
Title Page
Preface

CLASSES OF PROBLEMS
Introduction
Computational Problems
The classes P and NP
An NP-complete Set
More NP-Complete Problems
Historical Notes and References
Problems

INTEGER PROGRAMMING
Introduction
Linear Programming
Transition to Integer Solutions
Cutting Planes
Upper Bounds for Integer Programs
Historical Notes and References
Problems

ENUMERATION TECHNIQUES
Introduction
Enumerating 0-1 Integer Programs
Intelligent Solution Space Enumeration
General Branch and Bound Algorithms
http://www.cs.uky.edu/~lewis/cs-heuristic/text/contents.html (1 of 3)12/2/2015 10:06:28 AM


Historical Notes and References


Problems

DYNAMIC PROGRAMMING
Introduction
A Shortest Path Problem
Characteristics and Approaches
More Examples
Historical Notes and References
Problems

APPROXIMATE SOLUTIONS
Introduction
Bounds for Heuristics
Performance Analysis
Terminating Exact Solutions
Historical Notes and References
Problems

LOCAL OPTIMIZATION
Introduction
The Greedy Method
Divide and Conquer
Local Improvement
General Techniques for Local Search
Gradient Methods
Historical Notes and References
Problems

NATURAL MODELS
Introduction
Force Directed Optimization

Simulated Annealing
Neural Networks
Genetic Algorithms
DNA Computing (Slides)
Historical Notes and References
Problems



Solving NP-Complete Problems


F. D. Lewis
University of Kentucky

Copyright by F. D. Lewis.
All rights reserved.
No part of this publication may be reproduced, stored in a retrieval system, or
transmitted, in any form or by any means, electronic, mechanical, photocopying,
recording, or otherwise, without prior written permission.


Classes of Problems

Computational problems can be described in many ways. Almost every discipline has
its own special or favorite way of defining and grouping the problems that it
computes. In this section we shall characterize several classes of problems using
methods familiar to mathematicians and computer scientists. Then we shall
concentrate upon one very prevalent class of problems: those whose solutions can be
verified easily but seem to require vast amounts of time to solve optimally.
The sections are:
Computational Problems
The classes P and NP
An NP-complete Set
More NP-Complete Problems
Historical Notes and References
Problems


Computational Problems
When asked to characterize computation and to describe all of the problems that are
solved through computation, a computer scientist will usually begin by describing a
programming language in which to do the computation. Most would present an
imperative language such as C++, but some might mention functional languages or
logic programming languages. All, however, would probably then discuss computation
in terms of the resources required to perform the required task.
There is another way, namely mathematical programming. This is where a problem is
defined as the solution to a system of simultaneous equations. In fact, every one of
the problems which we compute using programs written in our favorite languages can
also be described in terms of mathematical programming. An example of such a
problem is finding values for the triple of variables <x, y, z> which satisfy the
constraints specified in the following two equations:
6x² + y + 4z⁴ ≤ 178
7x + 8y³ + z² ≥ 11
The triples <1, 1, 1> and <2, 2, 1> are among the many solutions to this particular
problem.
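Such a check is easy to mechanize. Below is a small Python sketch; the function name is my own, and the inequality directions (≤ 178 and ≥ 11) are assumed here because they are the ones consistent with the sample triples.

```python
def satisfies(x, y, z):
    """Check a candidate triple against the two constraints above
    (directions assumed: first at most 178, second at least 11)."""
    return (6 * x**2 + y + 4 * z**4 <= 178) and (7 * x + 8 * y**3 + z**2 >= 11)

# Both sample triples from the text are solutions.
print(satisfies(1, 1, 1))  # True
print(satisfies(2, 2, 1))  # True
```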
Instead of computational resources, the standard method used to classify
mathematical programming problems is by the largest exponent found in any of the
constraint equations. This provides a hierarchy which contains the classes of linear
programming problems, quadratic programming problems, third power problems,
and so forth. Our above example is a fourth power mathematical programming
problem. In graphics and geometric modeling problems with higher powers abound,
but in most areas of computer science and operations research one concentrates
primarily upon linear programming problems.
A practical example is the following truck packing problem. Suppose we have a truck
with a capacity of 2000 pounds and wish to load it with packages. We are allowed to
make up the load from a collection of three kinds of packages which weigh 540, 1270,
and 303 pounds each. To make things a little more difficult, we find that the
packages are worth $770, $1900, and $400 respectively and that our load must be
worth at least $2670. Setting x, y, and z as the numbers of each type of package that
we place on the truck, we now easily describe possible solutions to this problem with
the equations:
540x + 1270y + 303z ≤ 2000
770x + 1900y + 400z ≥ 2670
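Because the package counts must be whole numbers, the feasible loads can be found by plain enumeration. The sketch below assumes the weight constraint reads "at most 2000" and the value constraint "at least 2670", as the problem statement indicates; the function name is hypothetical.

```python
def feasible_loads(max_weight=2000, min_value=2670):
    """Enumerate all package counts <x, y, z> that fit the truck
    and meet the minimum value (a brute-force sketch)."""
    loads = []
    for x in range(max_weight // 540 + 1):
        for y in range(max_weight // 1270 + 1):
            for z in range(max_weight // 303 + 1):
                weight = 540 * x + 1270 * y + 303 * z
                value = 770 * x + 1900 * y + 400 * z
                if weight <= max_weight and value >= min_value:
                    loads.append((x, y, z))
    return loads

print(feasible_loads())  # [(0, 1, 2), (1, 1, 0), (2, 0, 3), (3, 0, 1)]
```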
The feasible solution space or collection of correct solutions to a mathematical
programming problem can be represented geometrically by a polytope in n-space
where n is the number of variables found in the constraint equations. Consider the
two linear equations:
3x + 5y ≤ 15
x - 2y ≤ 2
Pairs of values <x, y> that satisfy both equations must lie below the line defined by 3x
+ 5y = 15 and above that defined by x - 2y = 2. These two lines are shown in figure 1.

Figure 1 A Linear Programming Solution Space


A common further constraint is to require the values of x and y to be no less than
zero. This corresponds to practical problems where we cannot have negative values
as solutions. The shaded area in figure 1 is where all of the feasible solutions to our
example can be found.
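Testing whether a point lies in the shaded area is just a matter of checking each constraint. The sketch below assumes both inequalities take the "below the line" form described in the text, plus the nonnegativity requirement.

```python
def in_region(x, y):
    """Is (x, y) in the feasible region of figure 1?  That is,
    below 3x + 5y = 15, above x - 2y = 2, and nonnegative."""
    return 3 * x + 5 * y <= 15 and x - 2 * y <= 2 and x >= 0 and y >= 0

print(in_region(1, 1))  # True: inside the shaded area
print(in_region(4, 2))  # False: violates 3x + 5y <= 15
```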
A particular problem may be termed convex or nonconvex depending upon whether
the geometric representation of its solution space forms a convex or a nonconvex
polytope. Examples of both of these for 3-space may be found in figure 2.

Figure 2 - Convex and Nonconvex Solution Spaces


On the left is a convex polytope and on the right is a collection of polytopes which
form a solution space which is nonconvex.
A picture of mathematical programming problems appears as Figure 3. As mentioned
above, of particular interest are the linear programming problems because these are
the problems for which we actually compute optimum solutions. This is because they
are convex, and we shall explore this further below. The other class of problems that
interests computer scientists is problems with integer values as solutions, those for
which fractions are not appropriate. (The truck packing problem mentioned above is
one, since we were not allowed to load part of a package.)

Figure 3 - Mathematical Programming Problems

Note that not all of the linear integer programming problems are convex. The convex
integer linear programming problems are of great interest to computer scientists
since they are exactly the set of graph problems solvable in polynomial time.
We are interested in two styles of computational problems. The first group we shall
examine are decision problems. These are all of the problems with true or false as
solutions. Several examples are:
a. Is x a prime number?
b. Are there any solutions to the first mathematical programming
problem presented above?
c. Can the truck mentioned above be packed with a cargo worth at
least $2700?
We are often interested in not only finding solutions, but optimum solutions
to problems. To do this, a problem must be stated in such a way that an optimum
solution is requested. This is done by either maximizing or minimizing a relationship
between the variables called an objective function. Below is an example stated in the
general format for optimization problems.
maximize:
w = 3x - y + 4z
subject to the constraints:
x + 5y + z ≤ 75
17x - y - 3z ≤ 45
5x - y + z ≤ 38
where x, y, and z ≥ 0
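A sketch of how such a problem can be evaluated (not solved) appears below. It assumes the constraints take the "at most" form, and it simply compares the objective at a few hand-picked feasible points; a real solver such as the simplex method would search the polytope systematically.

```python
def feasible(x, y, z):
    """Check the three constraints and nonnegativity."""
    return (x + 5*y + z <= 75 and 17*x - y - 3*z <= 45
            and 5*x - y + z <= 38 and x >= 0 and y >= 0 and z >= 0)

def objective(x, y, z):
    """The quantity w to be maximized."""
    return 3*x - y + 4*z

# Compare the objective over a few feasible candidates.
candidates = [(0, 0, 0), (1, 1, 1), (0, 0, 38)]
best = max((p for p in candidates if feasible(*p)), key=lambda p: objective(*p))
print(best, objective(*best))  # (0, 0, 38) 152
```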
In an optimization problem we must find a solution which provides either a minimum
or maximum value for the objective function. This is depicted geometrically by
selecting the point on the surface of the feasible solution polytope which provides an
optimum value for the objective function. With convex problems this appears
straightforward if we hop on the surface and go uphill until we reach the top.
With nonconvex problems things are not so simple. Consider the curve shown in
Figure 4. There are two places on the curve with values larger than the points
adjacent to them, one at point a and one at point b. The greatest of these (that at
point b) is called the global maximum while any other point at which there is a
maximum relative to immediately adjacent points is called a local maximum. The
solution space on the right in figure 2 also contains local and global maxima.

Figure 4 - Local and Global Maxima


Note that the polytope on the left in Figure 2 has only one maximum and thus it is a
global maximum, while the solution space represented by the collection of polytopes
on the right in Figure 2 has many maxima. One of the nice things about restricting
our attention to convex problems is that all maxima and minima are guaranteed to be
global.
Theorem. Any local maximum (or minimum) for a convex
programming problem is also a global maximum (or minimum).
Unfortunately many of the problems of interest to us as computer scientists are
nonconvex and thus usually have several local maxima (or minima). This makes
finding optimum solutions more difficult and leads to some very interesting methods
for finding these solutions.

The Classes P and NP


We now shift gears slightly and move to the examination of two families of problems
which are very important to computer scientists. These two families constitute the
bulk of our practical computational problems and have been central to the theory of
computation for many years.
The first is a class which contains all of the problems we solve using computers. If we
think about the problems we actually present to the computer we note that not too
many computations require more than O(n³) or O(n⁴) time. In fact, most of the
important algorithms we compute are somewhere in the O(log n) to O(n³) range. Thus
we shall state that practical computation resides within polynomial time bounds.
There is a name for this class of problems.
Definition. The class of polynomially solvable problems, P, contains all
sets in which membership may be decided by an algorithm whose
running time is bounded by a polynomial.
Besides containing all of what we have decided to consider practical computational
tasks, the class P has another attractive attribute. Its use allows us not to worry about
our machine model, since all reasonable models of computation (including programs
and Turing machines) have time complexities which are polynomially related.
That was the class of problems we actually compute. But there is another important
class. This one is the class of problems that we would love to solve but are unable to
do so exactly. Since that sounds strange, let's look at an example. Consider final
examination scheduling. A school has n courses and five days in which to schedule
examinations. An optimal schedule would be one where no student has to take two
examinations on the same day. This seems like an easy problem. But there are O(5ⁿ)
possible different schedules. If we looked at them all with a computer which could
check a million schedules every second, the time spent checking for a value of n = 50
would be about
3,000,000,000,000,000,000,000 years!
Yes, that's right. Obviously this will not be done between registration and the end of
the semester.
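The arithmetic behind an estimate of this kind is easy to reproduce (assuming one million checks per second and a 365-day year):

```python
# Rough arithmetic behind the estimate: 5^50 schedules checked at
# one million schedules per second, converted to years.
schedules = 5 ** 50
seconds = schedules / 1_000_000
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.1e}")  # about 2.8e+21 years
```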
One might wonder if the above analysis was needed because, after all, who would
look at all of the schedules? You only need to check a few of the obvious ones. Or do
you? Think back over all of the examination schedules you have seen. Were there any
which were optimal? No! So, there must be a small problem somewhere. We shall see
more on this problem later.
Let us think a little more about examination schedules. While it might be very difficult
to find a good one, it is easy to check a schedule to see how near perfect it is. This
process is called verification and allows us to know quickly if we stumble upon a
good schedule.
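A verifier of this kind can be sketched in a few lines of Python; the conflict-list representation used here is my own.

```python
def conflicts_in(schedule, conflicts):
    """Count the conflicts in a schedule: pairs of conflicting courses
    (courses sharing a student) that landed on the same day.
    `schedule` maps each course to its examination day."""
    return sum(1 for a, b in conflicts if schedule[a] == schedule[b])

conflicts = [("math", "physics"), ("math", "art"), ("physics", "art")]
good = {"math": "Mon", "physics": "Tue", "art": "Wed"}
bad = {"math": "Mon", "physics": "Mon", "art": "Wed"}
print(conflicts_in(good, conflicts))  # 0 -- an optimal schedule
print(conflicts_in(bad, conflicts))   # 1
```

Verifying one candidate is linear in the number of conflicts; it is finding a good candidate among the exponentially many schedules that is hard.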
Consider another problem, that of finding a minimal length tour of n cities where we
begin and end at the same place. (This is called the closed tour problem.) Again, there
are many solutions, in fact n factorial different tours are possible. And, once more, if
we have a tour, we can easily check to see how long it is. Thus if we want a tour of
less than some fixed length, we can quickly check candidates to see if they qualify.
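Checking a candidate tour is equally quick. The sketch below measures a closed tour over points in the plane (the representation is my own) and, for contrast, counts the orderings a brute-force search would face.

```python
from math import dist, factorial

def tour_length(cities, tour):
    """Length of a closed tour: visit the cities in the given order and
    return to the start.  `cities` maps names to (x, y) coordinates."""
    legs = zip(tour, tour[1:] + tour[:1])
    return sum(dist(cities[a], cities[b]) for a, b in legs)

cities = {"A": (0, 0), "B": (3, 0), "C": (3, 4), "D": (0, 4)}
print(tour_length(cities, ["A", "B", "C", "D"]))  # 14.0 (the perimeter)
print(factorial(10))  # orderings of just 10 cities: 3628800
```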
This is interesting and provides some hope of solving problems of this kind. If we can
determine the worth of an answer, then maybe we can investigate promising solutions
and keep the best one.
Let us consider a class of problems which all seem very complex but have solutions
which are easily checked. Here is a class which contains the problems for which
solutions can be verified in polynomial time.
Definition. The class of nondeterministic polynomially acceptable
problems, NP, contains all sets in which membership can be verified in
polynomial time.
This may seem to be quite a bizarre collection of problems. But think for a moment.
The examination scheduling problem does fit here. If we were to find a solution, it
could be checked out very quickly. Lots of other problems fall into this category.
Another instance is closed tours of groups of cities. Many graph problems used in
CAD algorithms for computer chip design fit in here also, as do most scheduling
problems. This is a very interesting collection of problems.
One might wonder about the time actually involved in solving membership in this
class. The only known relationship between NP and deterministic time is the
following result.

Theorem 1. For every set A in NP there is a polynomial p(n) such that
the problem of determining whether a data item of size n is a member
of A can be solved in 2^p(n) time.
A useful tool in studying the relationships between members of a class is the
translation or mapping of one to another. If we can translate one set into another, we
can often deduce properties of one by the properties that we know the other
possesses. This is called reducibility, is pictured in Figure 1, and defined below.
Definition. The set A is many-one polynomial-time reducible to the set B
(this is written as A ≤p B) if and only if there is a recursive function g(x)
which can be computed in polynomial time such that for all x: x ∈ A if
and only if g(x) ∈ B.

Figure 1 - A many to one mapping between sets


Note that all of the members of A map into a portion of B and all elements not in A
map into a part of B's complement. This gives us a way to solve membership in A if
we know how to solve membership in B. If A is reducible to B via the function g(x),
then all we need do to determine if x is in A is to check to see if g(x) is in B.
One of the properties preserved by reducibility is complexity. Recall that to decide
whether x was in A, we had to:
a. Compute g(x), and
b. Check to see if g(x) was in B.
Thus the complexity of deciding membership in A is the sum of the complexities of
computing g(x) and deciding membership in B. If computing g(x) does not take very
long then we can say that A is no more complex than B. From this discussion we can
state the following theorem.

Theorem 2. If A ≤p B and B is in P, then A is in P also.


And of course if A ≤p B and B is in NP, then A is in NP for exactly the same reasons.
This brings up another concept.
Definition. The set A is hard for a class of sets if and only if every set in
the class is many-one reducible to A.
If the reducibility function is not very complex, this means that the set A is at least as
complex as any of the members of the class it is hard for. Thus an NP-hard set would
be as difficult to decide membership in as any set in NP. If it were itself in NP, it
would be the most complex set in the class. We have a name for this.
Definition. A set is complete for a class if and only if it is a member of
the class and hard for the class.
Here is another fact about NP-complete sets and polynomial reducibilities, which will
be our major tool in proving sets NP-complete.
Theorem 3. If A ≤p B for a set B in NP, and A is NP-complete, then B is
NP-complete also.
Polynomial reducibilities also may be used to place upper bounds upon sets in P. For
example, the following result is based on this.
Theorem 4. If A is NP-complete then A is a member of P if and only if
P = NP.
Proof. Almost obvious. If A is a member of P then every set polynomially
reducible to A is also in P. Thus the NP-completeness of A forces every
single one of the sets in NP to be members of P.
On the other hand, if P = NP then of course A is a member of P as well.
This is very interesting. If we know that membership in one NP-complete set can be
decided in polynomial time then we know that every set in NP can be decided using
some polynomial algorithm! This means that we would get all of their recognition
algorithms for the price of one. But, it is felt that this is highly unlikely since we know
of no sub-exponential algorithms for membership in any of these sets and the
problem has been around for a while.
In closing, here is a small list of some of the many problems that are members of NP,
and are in fact, NP-complete.
0-1 Integer Programming (0-1 INT). Given a matrix A and a vector b, is
there a vector x with values from {0, 1} such that Ax ≥ b?
CLIQUE. Given a graph and an integer k, are there k vertices in the
graph which are all adjacent to each other?
Vertex Cover (VC). Given a graph and an integer k, is there a collection
of k vertices such that each edge is connected to one of the vertices in
the collection?
Chromatic Number (COLOR). Given a graph and an integer k, is there a
way to color the vertices with k colors such that adjacent vertices are
colored differently?
Examination Scheduling (EXAM). Given a list of courses, a list of
conflicts between them, and an integer k; is there an exam schedule
consisting of k dates such that there are no conflicts between courses
which have examinations on the same date?
Closed Tour (TOUR). Given n cities and an integer k, is there a tour, of
length less than k, of the cities which begins and ends at the same city?
Rectilinear Steiner Spanning Tree (STEINER). Given n points in
Euclidean space and an integer k, is there a collection of vertical and
horizontal lines of total length less than k, which spans the points?
Knapsack. Given n items, each with a weight and a value, and two
integers k and m, is there a collection of items with total weight less
than k, which has a total value greater than m?
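To make one of these concrete, here is a brute-force decision procedure for Knapsack as stated above. It is a sketch only: it tries every subset, so its running time is exponential in n, which is exactly the behavior that makes these problems interesting.

```python
from itertools import combinations

def knapsack_decision(items, k, m):
    """KNAPSACK as a decision problem: is there a collection of items
    with total weight less than k and total value greater than m?
    Brute force over all 2^n subsets, so viable only for small n."""
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if (sum(w for w, v in subset) < k
                    and sum(v for w, v in subset) > m):
                return True
    return False

items = [(5, 10), (4, 40), (6, 30), (3, 50)]  # (weight, value) pairs
print(knapsack_decision(items, k=10, m=80))   # True: weights 4 and 3, values 40 and 50
print(knapsack_decision(items, k=5, m=80))    # False
```

Note that a "yes" answer comes with a certificate (the subset itself) that can be verified in polynomial time, which is what places the problem in NP.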

An NP-complete Set
The definitions and discussion about P and NP were very interesting. But, of course for
any of this discussion to be worthwhile we need to see an NP-complete set. Or at least
prove that there is one. The following definitions from the propositional calculus lead to
our first NP-complete problem.
Definition. A clause is a finite collection of literals, which in turn are
Boolean variables or their complements.
Definition. A clause is satisfiable if and only if at least one literal in the
clause is true.
Suppose we examine the clauses below which are made up of literals from the set of
Boolean variables {v1, ..., vn}.

The first clause is satisfiable if either v1 or v3 is true or v2 is false. Now let us consider
the entire collection of clauses. All three are true (at once) when all three variables are
true. Thus we shall say that a collection of clauses is satisfiable if and only if there is
some assignment of truth values to the variables which makes all of the clauses true
simultaneously. The collection:

is not satisfiable because at least one of the three clauses will be false no matter how the
truth values are assigned to the variables. Now for the first decision problem which is
NP-complete. It is central to theorem proving procedures and the propositional calculus.
The Satisfiability Problem (SAT). Given a set of clauses, is there an
assignment of truth values to the variables such that the collection of
clauses is satisfiable?
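A brute-force decision procedure for SAT is easy to write, though it takes exponential time. The sketch below uses a common encoding (not from the text) in which a literal is a signed integer: v for the variable v and -v for its complement.

```python
from itertools import product

def satisfiable(clauses, n):
    """Brute-force SAT over clauses on variables 1..n.  A literal is an
    int: v means variable v is true, -v means variable v is false.
    Tries all 2^n assignments, mirroring the exponential blow-up."""
    for bits in product([False, True], repeat=n):
        assign = {v: bits[v - 1] for v in range(1, n + 1)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

# (v1 or not v2 or v3) and (v2): satisfiable, e.g. all variables true.
print(satisfiable([[1, -2, 3], [2]], n=3))  # True
# (v1) and (not v1): no assignment works.
print(satisfiable([[1], [-1]], n=1))        # False
```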
Since some collections are satisfiable and some are not, this is obviously a nontrivial
decision problem. And it just happens to be NP-complete! By the way, it is not the
general satisfiability problem for propositional calculus, but the conjunctive normal
form satisfiability problem. Here is the theorem and its proof.
Theorem 5. The satisfiability problem is NP-complete.
Proof Sketch. The first part of the proof is to show that the satisfiability
problem is in NP. This is simple. A machine which checks this merely jots
down a truth value for each Boolean variable in a nondeterministic manner,
plugs these into each clause, and then checks to see if one literal per clause
is true. A Turing machine can do this as quickly as it can read the clauses.
The hard part is showing that every set in NP is reducible to the
satisfiability problem. Let's start. First of all, if a set is in NP then there is
some one-tape Turing machine Mi with alphabet {0, 1, b} which recognizes
members (i.e., verifies membership) of the set within time p(n) for a
polynomial p(n). What we wish is to design a polynomial time computable
recursive function gi(x) such that:
Mi recognizes x if and only if gi(x) ∈ SAT.
For gi(x) to be a member of SAT, it must be some collection of clauses which
contain at least one true literal per clause under some assignment of truth
values. This means that gi must produce a logical expression which states
that Mi accepts x. Let us recall what we know about computations and
arithmetization. Now examine the following collections of assertions.
a) When Mi begins computation:

#x is on the tape,
the tape head is on square one, and
instruction I1 is about to be executed.

b) At each step of Mi's computation:

only one instruction is about to be executed,
only one tape square is being scanned, and
every tape square contains exactly one symbol.

c) At each computational step, the instruction being executed and the
symbol on the square being scanned completely determine:

the symbol written on the square being read,
the next position of the head, and
the next instruction to be executed.

d) Before p(n) steps, Mi must be in a halting configuration.


These assertions tell us about the computation of Mi(x). So, if we can
determine how to transform x into a collection of clauses which mean exactly
the same things as the assertions written above, we have indeed found our gi(x).
And, if gi(x) is polynomially computable we are done.
First let us review our parameters for the Turing machine Mi. It uses the
alphabet {0, 1, b, #} (where # is used only as an endmarker) and has m
instructions. Since the computation time is bounded by the polynomial p(n)
we know that only p(n) squares of tape may be written upon.
Now let us examine the variables used in the clauses we are about to
generate. There are three families of them. For all tape squares from 1 to p(n)
and computational steps from time 0 to time p(n), we have the collection of
Boolean variables of the form
HEAD[s, t] which is true if Mi has its tape head positioned on tape
square s at time t.
(Note that there are p(n)² of these variables.) For the same time bounds and all
instructions, we have the variables of the form
INSTR[i, t] which is true if Mi is about to execute instruction number
i at time t.
There are only m*p(n) of these variables. The last family contains variables of
the form
CHAR[c, s, t] which is true if character c in {0, 1, b, #} is found upon
tape square s at time t.
So, we have O(p(n)²) variables in all. This is still a polynomial.
Now let's build the clauses which mean the same as the above assertions.
First, the machine must begin properly. At time 0 we have #x on the tape. If x
= 0110 then the clauses which state this are:
(CHAR[#,1,0]), (CHAR[0,2,0]), (CHAR[1,3,0]),
(CHAR[1,4,0]), (CHAR[0,5,0])
and blanks are placed upon the remainder of the tape with:
(CHAR[b,6,0]), ... , (CHAR[b,p(n),0]).
Since the machine begins on square one with instruction 1, we also include:
(HEAD[1,0]), (INSTR[1,0]).
That finishes our first assertion. Note that all of the variables in these
clauses must be true for gi(x) to be satisfiable since each clause contains
exactly one literal. This starts Mi(x) off properly. Also note that there are p(n)+2
of these particular one-variable clauses.
(NB. We shall keep count of the total number of literals used so far as we go
so that we will know |gi(x)|.)
During computation one instruction may be executed at each step.
But, if the computation has halted then no more instructions can be
executed. To remedy this we introduce a bogus instruction numbered 0 and
make Mi switch to it whenever a halt instruction is encountered. Since Mi
remains on instruction 0 from then on, at each step exactly one instruction
is executed.
The family of clauses (one for each time t ≤ p(n)) of the form:
(INSTR[0,t], INSTR[1,t], ... , INSTR[m,t])
maintain that Mi is executing at least one instruction during each
computational step. There are (m+1)·p(n) literals in these. We can outlaw
pairs of instructions (or more) at each step by including a clause of the form:
(¬INSTR[i, t], ¬INSTR[j, t])
for each instruction pair i and j (where i < j) and each time t. These clauses
state that no pair of instructions can be executed at once and there are about
p(n)·m² literals in them.
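These two clause families are mechanical enough to generate by program. The Python sketch below (the clause and literal formats are my own) emits the "at least one" and "at most one" instruction clauses for every time step.

```python
def instruction_clauses(m, p):
    """Generate the 'exactly one instruction per step' clauses for a
    machine with instructions 0..m and time bound p.  A clause is a
    list of string literals; '-' marks a complemented literal."""
    clauses = []
    for t in range(p + 1):
        # at least one instruction executes at time t
        clauses.append([f"INSTR[{i},{t}]" for i in range(m + 1)])
        # no two instructions execute at once
        for i in range(m + 1):
            for j in range(i + 1, m + 1):
                clauses.append([f"-INSTR[{i},{t}]", f"-INSTR[{j},{t}]"])
    return clauses

cl = instruction_clauses(m=2, p=1)
print(len(cl))  # two time steps, each with 1 + 3 clauses: 8
```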

Clauses which mandate the tape head to be on one and only one square at
each step are very much the same. So are the clauses which state that exactly
one symbol is written upon each tape square at each step of the
computation. The number of literals in these clauses is on the order of p(n)².
(So, we still have a polynomial number of literals in our clauses to date.)
Now we must describe the action of Mi when it changes from configuration to
configuration during computation. Consider the Turing machine instruction:
I27: reading a 0 → print 1, move right, goto I42
     reading a 1 → halt
Thus if Mi is to execute instruction 27 at step 239 and is reading a 0 on
square 45 we would state the following implication:
if (INSTR[27,239] and HEAD[45,239] and CHAR[0,45,239])
then (CHAR[1,45,240] and HEAD[46,240] and INSTR[42,240]).
Recalling that the phrase (if A then B) is equivalent to (not(A) or B), we now
translate the above statement into the clauses:
(¬INSTR[27,239], ¬HEAD[45,239], ¬CHAR[0,45,239], CHAR[1,45,240])
(¬INSTR[27,239], ¬HEAD[45,239], ¬CHAR[0,45,239], HEAD[46,240])
(¬INSTR[27,239], ¬HEAD[45,239], ¬CHAR[0,45,239], INSTR[42,240])
Note that the second line of instruction 27 contains a halt. In this case we
switch to instruction 0 and place the tape head on a bogus tape square
(square number 0). This would be something like:
(¬INSTR[27,t], ¬HEAD[s,t], ¬CHAR[1,s,t], INSTR[0,t+1])
(¬INSTR[27,t], ¬HEAD[s,t], ¬CHAR[1,s,t], HEAD[0,t+1])
(¬INSTR[27,t], ¬HEAD[s,t], ¬CHAR[1,s,t], CHAR[1,s,t+1])
(These clauses are not very intuitive, but they do mean exactly the same as
the if-then way of saying it. And besides, we've got it in clauses just like we
needed to. This was quite convenient.)

In general, we need trios of clauses like the above for every line of each
instruction, at every time, for all of the tape squares. Again, O(p(n)²) literals
are involved in this.
To make sure that the rest of the symbols (those not changed by the
instruction) remain on the tape for the next step, we need to state things like:
if (CHAR[c, s, t] and not HEAD[s, t]) then CHAR[c, s, t+1]
which become clauses such as:
(¬CHAR[c, s, t], HEAD[s, t], CHAR[c, s, t+1])
These must be jotted down for each tape square and each symbol, for every
single time unit. Again, we have O(p(n)²) literals.
When Mi halts we pretend that it goes to instruction 0 and place the head on
square 0. Since the machine should stay in that configuration for the rest of
the computation, we need to state for all times t:
if INSTR[0, t] then (INSTR[0, t+1] and HEAD[0, t+1])
which becomes the clauses:
(¬INSTR[0, t], INSTR[0, t+1]), (¬INSTR[0, t], HEAD[0, t+1])
and note that there are O(p(n)) literals here.
One more assertion and we are done. Before p(n) steps, Mi must halt if it is
going to accept. This is an easy one since the machine goes to instruction 0
only if it halts. This is merely the clause
(INSTR[0, p(n)]).
Of course this one must be true if the entire collection of clauses is to be
satisfiable.
That is the construction of gi(x). We need to show that it can be done in
polynomial time. Let us think about it. Given the machine and the time
bound p(n), it is easy (long and tedious, but easy) to read the description of
the Turing machine and generate the above clauses. In fact we could write
them down in a steady stream as we counted to p(n) in loops such as
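For instance, the family of clauses stating that every square holds exactly one symbol could be streamed by nested loops like these (a Python sketch; the clause format is my own):

```python
def uniqueness_clauses(p, alphabet=("0", "1", "b", "#")):
    """Stream the clauses stating that each tape square holds exactly
    one symbol at each time step (one family from the construction)."""
    for t in range(p + 1):                     # count to p(n) over time...
        for s in range(1, p + 1):              # ...and over tape squares
            yield [f"CHAR[{c},{s},{t}]" for c in alphabet]   # at least one
            for i, c in enumerate(alphabet):                 # at most one
                for d in alphabet[i + 1:]:
                    yield [f"-CHAR[{c},{s},{t}]", f"-CHAR[{d},{s},{t}]"]

# With p(n) = 2: (p+1) times * p squares * (1 + C(4,2)) clauses each.
print(sum(1 for _ in uniqueness_clauses(2)))  # 42
```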

So, computing gi(x) takes about as much time as it does to write
it down. Thus its complexity is O(|gi(x)|), the same as the length of all of the
literals in the clauses. Since there are O(p(n)²) of these and the length of a
literal will not exceed log₂(p(n)), we arrive at polynomial time complexity for the
computation of gi(x).
The remainder of the proof is to show that
Mi accepts x if and only if gi(x) ∈ SAT.
While not completely trivial, it does follow from an examination of the
definitions of how Turing machines operate compared to the satisfiability of
the clauses in the above construction. The first part of the proof is to argue
that if Mi accepts x, then there is a sequence of configurations which Mi
progresses through. Setting the HEAD, CHAR, and INSTR variables so that
they describe these configurations makes the set of clauses computed by gi(x)
satisfiable. The remainder of the proof is to argue that if gi(x) can be satisfied
then there is an accepting computation for Mi(x).
That was our first NP -complete problem. It may not be quite everyone's favorite, but at
least we have shown that one does indeed exist. And now we are able to state a result
having to do with the question about whether P = NP in very explicit terms. In fact the
satisfiability problem has become central to that question. And by the second corollary,
this problem can aid in proving NP-completeness.
Corollary. SAT is in P if and only if P = NP.
Corollary. If A ∈ NP and SAT ≤p A then A is NP-complete.

So, all we need to do is determine the complexity of the satisfiability problem and we
have discovered whether P and NP are the same. Unfortunately this seems much easier
said than done!

More NP-complete Problems


One of the claims made in the last section was that there are lots and lots of NP-complete problems which are of interest to the practical computer scientist. Now it is
time to fulfill this prophecy and demonstrate this. We shall examine some of the
popular NP-complete problems from various computational areas.
Logicians should be quite pleased that satisfiability for the propositional calculus is
NP-complete. It means that they will still be needed to prove theorems since it seems
unlikely that anyone will develop a computer program to do so. But we, as computer
scientists, need to see problems which are closer to home. This is also more than a
theoretical exercise because we know that any problem which is NP-complete is a
candidate for approximation since no subexponential time bounded algorithms are
known for these problems.
First, we shall review the process of proving a problem NP-complete. We could do it
from scratch like we did for SAT. But that is far too time consuming, especially when
we have a nifty technique like reduction. All we need to do is:
a. show that the problem is in NP,
b. reduce an NP-complete problem to it, and
c. show that the reduction is a polynomial time function.
That's not too bad at all. Basically, all we must do is transform a known NP-complete problem into the new one. As a first example, let us simplify satisfiability by
specifying exactly how many literals must be in each clause. Then we shall reduce this
problem to others.
Satisfiability with 3 literals per clause (3-SAT). Given a finite set of clauses,
each containing exactly three literals, is there some truth assignment for the
variables which satisfies all of the clauses?
Theorem 1. 3-SAT is NP-complete.
Proof. We know that since 3-SAT is merely a special case of SAT, it must be in

NP. (That is, we can verify that a truth assignment satisfies all of the clauses as
fast as we can read the clauses.)
To show that 3-SAT is hard for NP, we will reduce SAT to it by transforming
any instance of the satisfiability problem to an instance of 3-SAT. This means
we must demonstrate how to convert clauses which do not contain exactly three
literals into ones which do. It is easy if a clause contains two literals. Let us take
(x1, x2) as an example. This is equivalent to the pair:

(x1, x2, u) and (x1, x2, ¬u)

where u is a new variable. Note that each clause of the pair contains exactly
three literals and that the pair is satisfiable exactly when (x1, x2) is.
So far, so good. Now we will transform clauses such as (x) which contain one
literal. This will require two steps. We begin by converting it to the pair of two
literal clauses:

(x, u) and (x, ¬u)

much as before. Then we change each of these just as before and get:

(x, u, v), (x, u, ¬v), (x, ¬u, v), (x, ¬u, ¬v)
This was easy. (But you'd better plug in all possible truth values for the literals
and fully check it out.)
One case remains. We might have a clause such as (x1, ... , xk) which contains
more than three literals. We shall arrange these literals as a cascade of three
literal clauses. Consider the sequence of clauses:

(x1, x2, u1), (¬u1, x3, u2), (¬u2, x4, u3), ... , (¬uk-3, xk-1, xk)
Let us look at this. If the original clause were satisfiable then one of the xi's had
to be true. Let us set all of the ui's to true up to the point in the sequence where
xi was encountered and false thereafter. A little thought convinces us that this
works just fine since it provides a truth assignment which satisfies the
collection of clauses. So, if the original clause was satisfiable, this collection is
satisfiable too.
Now for the other part of the proof. Suppose the original clause is not
satisfiable. This means that all of the xi's are false. We claim that in this case the
collection of clauses we constructed is unsatisfiable also. Assume that there is
some way to satisfy the sequence of clauses. For it to be satisfiable, the last
clause must be satisfied. For the last clause to be satisfied, uk-3 must be false
since xk-1 and xk are false. This in turn forces uk-4 to be false. Thus all of the ui's all
the way down the line have got to be false. And when we reach the first clause
we are in big trouble since u1 is false. So, if the xi's are all false there is nothing
we can do with the truth values for the ui's that satisfies all of the clauses.
Note that the above transformation is indeed a polynomial time mapping. Thus
SAT ≤p 3-SAT and we are done.
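The clause transformations of this proof can be collected into a short routine. This is a sketch under my own encoding (a literal is a nonzero integer, and -j denotes the complement of xj); the brute-force checker is only for confirming small cases.

```python
from itertools import product

def to_3sat(clause, next_var):
    """Turn one clause (tuple of signed ints) into clauses of exactly three
    literals.  next_var is the first unused variable number; returns the
    pair (new_clauses, updated next_var)."""
    k = len(clause)
    if k == 3:
        return [tuple(clause)], next_var
    if k == 2:                        # (x1, x2) -> (x1, x2, u), (x1, x2, -u)
        a, b = clause
        u = next_var
        return [(a, b, u), (a, b, -u)], next_var + 1
    if k == 1:                        # (x) -> (x, u), (x, -u), then pad again
        x, u, v = clause[0], next_var, next_var + 1
        # note: this sketch reuses one fresh variable v for both pairs
        return [(x, u, v), (x, u, -v), (x, -u, v), (x, -u, -v)], next_var + 2
    # k > 3: the cascade (x1, x2, u1), (-u1, x3, u2), ..., (-u_{k-3}, x_{k-1}, x_k)
    us = list(range(next_var, next_var + k - 3))
    out = [(clause[0], clause[1], us[0])]
    for i in range(1, k - 3):
        out.append((-us[i - 1], clause[i + 1], us[i]))
    out.append((-us[-1], clause[-2], clause[-1]))
    return out, next_var + k - 3

def satisfiable(clauses, n_vars):
    """Brute-force satisfiability check over all truth assignments."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] ^ (l < 0) for l in c) for c in clauses):
            return True
    return False

cs, nv = to_3sat((1, 2, 3, 4, 5), 6)
print(cs)                       # [(1, 2, 6), (-6, 3, 7), (-7, 4, 5)]
print(satisfiable(cs, nv - 1))  # True, just as (x1 or ... or x5) is
```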

One of the reasons that showing that 3-SAT is NP-complete is not too difficult is that
it is a restricted version of the satisfiability problem. This allowed us to merely
modify a group of clauses when we did the reduction. In the future we shall use 3-SAT in reductions and be very pleased with the fact that having only three literals per
clause makes our proofs less cumbersome.
Of course having only two literals per clause would be better yet. But attempting to
change clauses with three literals into equivalent two literal clauses is very difficult.
Try this. I'll bet you cannot do it. One reason is because 2-SAT is in P. In fact, if you
could reduce 3-SAT to 2-SAT by translating clauses with three literals into clauses
with two literals, you would have shown that P = NP.
Let us return to introducing more NP-complete problems. We immediately use 3-SAT
for the reduction to our next NP-complete problem which comes from the field of
mathematical programming and operations research. It is a variant of integer
programming.
0-1 Integer Programming (0-1 INT). Given a matrix A and a vector b, is there
a vector x with values from {0, 1} such that Ax ≥ b?
If we did not require the vector x to have integer values, then this is the linear
programming problem and is solvable in polynomial time. This one is more difficult.
Theorem 2. 0-1 INT is NP-complete.
Proof. As usual it is easy to show that 0-1 INT is in NP. Just guess the values in
x and multiply it out. (The exact degree of the polynomial in the time bound is
left as an exercise.)
A reduction from 3-SAT finishes the proof. In order to develop the mapping
from clauses to a matrix we must change a problem in logic into an exercise in
arithmetic. Examine the following chart. It is just a spreadsheet with values for
the variables x1, x2, and x3 and values for some expressions formed from them.
Expressions          Values

  X1                  0    0    0    0    1    1    1    1
  X2                  0    0    1    1    0    0    1    1
  X3                  0    1    0    1    0    1    0    1

  + X1 + X2 + X3      0    1    1    2    1    2    2    3
  + X1 + X2 - X3      0   -1    1    0    1    0    2    1
  + X1 - X2 - X3      0   -1   -1   -2    1    0    0   -1
  - X1 - X2 - X3      0   -1   -1   -2   -1   -2   -2   -3
Above is a table of values for arithmetic expressions. Now we shall interpret the
expressions in a logical framework. Let the plus signs mean true and the minus
signs mean false. Place or's between the variables. So, +x1 + x2 - x3 now means that
x1 is true, or x2 is true, or x3 is false.
If 1 denotes true and 0 means false, then we could read the expression as x1=1 or
x2=1 or x3=0.
Now note that in each row headed by an arithmetic expression there is a
minimum value and it occurs exactly once. Find exactly which column contains
this minimum value. The first expression row has a zero in the column where
http://www.cs.uky.edu/~lewis/cs-heuristic/text/class/more-np.html (4 of 12)12/2/2015 10:06:59 AM

More NP-Complete Problems

each xi is also zero. Look at the expression. Recall that +x1 + x2 + x3 means that at
least one of the xi should have the value 1. So, the minimum value occurs when
the expression is not satisfied.
Look at the row headed by +x1 - x2 - x3 . This expression means that x1 should be a
1 or one of the others should be 0. In the column containing the minimum value
this is again not the case.
The points to remember now for each expression row are:
a) Each has exactly one column of minimum value.
b) This column corresponds to a nonsatisfying truth assignment.
c) Every other column satisfies the expression.
d) All other columns have higher values.
Here is how we build a matrix from a set of clauses. First let the columns of the
matrix correspond to the variables from the clauses. The rows of the matrix
represent the clauses - one row for each one. For each clause, put a 1 under
each variable which is not complemented and a -1 under those that are. Fill in
the rest of the row with zeros. Or we could say:

aij = 1 if xj appears uncomplemented in clause i,
aij = -1 if xj appears complemented in clause i, and
aij = 0 if xj does not appear in clause i.
The vector b is merely made up of the appropriate minimum values plus one
from the above chart. In other words:
bi = 1 - (the number of complemented variables in clause i).
The above chart provides the needed ammunition for the proof that our
construction is correct. The proper vector x is merely the truth assignment to
the variables which satisfies all of the clauses. If there is such a truth
assignment then each value in the vector Ax will indeed be greater than the
minimum value in the appropriate chart column.
If a 0-1 valued vector x does exist such that Ax ≥ b, then from the chart we
can easily see that it is a truth assignment for the variables which satisfies each
and every clause. If not, then one of the values of the Ax vector will always be
less than the corresponding value in b. This means that at least one
clause is not satisfied for any truth assignment.


Here is a quick example. If we have the three clauses:

then according to the above algorithm we build A and b as follows.

Note that everything comes out fine if the proper values for the xi are put in place. If
x3 is 0 then the first entry of Ax cannot come out less than 0 nor can the second ever
be below -1. And if either x2 or x1 is 1 then the third entry will be at least 1.
Problems in graph theory are always interesting, and seem to pop up in lots of
application areas in computing. So let us move to graph theory for our next problem.
CLIQUE. Given a graph and an integer k, are there k vertices in the
graph which are all adjacent to each other?
This does not sound like a very practical problem, does it? Interesting, yes, but
practical? Consider this. Suppose that you had a graph whose nodes were wires on a
silicon chip. And there was an edge between any two nodes whose wires might
overlap if placed on the same horizontal coordinate of the chip. Finding the cliques
tells the designer how much horizontal room is needed to route all of the wires.
Theorem 3. CLIQUE is NP-complete.
Proof. Again, it is easy to verify that a graph has a clique of size k if we
guess the vertices forming the clique. We merely examine the edges. This
can be done in polynomial time.
We shall now reduce 3-SAT to CLIQUE. We are given a set of k clauses and
must build a graph which has a clique if and only if the clauses are
satisfiable. The literals from the clauses become the graph's vertices. And
collections of true literals shall make up the clique in the graph we build.
Then a truth assignment which makes at least one literal true per clause
will force a clique of size k to appear in the graph. And, if no truth
assignment satisfies all of the clauses, there will not be a clique of size k
in the graph.
To do this, let every literal in every clause be a vertex of the graph we are
building. We wish to be able to connect true literals, but not two from the
same clause. And two which are complements cannot both be true at
once. So, connect all of the literals which are not in the same clause and
are not complements of each other. We are building the graph G = (V, E)
where:
V = {<x, i> | x is in the i-th clause}
E = {(<x, i>, <y, j>) | x ≠ ¬y and i ≠ j}

Now we shall claim that if there were k clauses and there is some truth
assignment to the variables which satisfies them, then there is a clique of
size k in our graph. If the clauses are satisfiable then one literal from each
clause is true. That is the clique. Why? Because a collection of literals (one
from each clause) which are all true cannot contain a literal and its
complement. And they are all connected by edges because we connected
literals not in the same clause (except for complements).
On the other hand, suppose that there is a clique of size k in the graph.
These k vertices must have come from different clauses since no two
literals from the same clause are connected. And, no literal and its
complement are in the clique, so setting the truth assignment to make the
literals in the clique true provides satisfaction.
A small inspection reveals that the above transformation can indeed be
carried out in polynomial time. (The degree will again be left as an
exercise.) Thus the CLIQUE problem has been shown to be NP-hard just
as we wished.
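The graph construction can be sketched straight from the definitions of V and E; the clause set below is a hypothetical example, and the clique search is brute force:

```python
from itertools import product, combinations

def clauses_to_graph(clauses):
    """Vertices are (literal, clause_index) pairs; edges join literals from
    different clauses that are not complements.  Literals are signed ints."""
    V = [(lit, i) for i, c in enumerate(clauses) for lit in c]
    E = {frozenset({u, v}) for u, v in combinations(V, 2)
         if u[1] != v[1] and u[0] != -v[0]}
    return V, E

def has_clique(V, E, k):
    """Brute force: does some k-subset of V have all pairs connected?"""
    return any(all(frozenset({u, v}) in E for u, v in combinations(sub, 2))
               for sub in combinations(V, k))

clauses = [(1, 2, 3), (-1, 2, 3), (1, -2, -3)]   # a satisfiable example
V, E = clauses_to_graph(clauses)
print(has_clique(V, E, len(clauses)))   # True: one true literal per clause
```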
One of the neat things about graph problems is that asking a question about a graph
is often equivalent to asking quite a different one about the graph's complement.
Such is the case for the clique problem. Consider the next problem which inquires as
to how many vertices must be in any set which is connected to or covers all of the
edges.
Vertex Cover (VC). Given a graph and an integer k, is there a collection
of k vertices such that each edge is connected to one of the vertices in


the collection?
It turns out that if a graph with n vertices contains a clique consisting of k vertices
then the size of the vertex cover of the graph's complement is exactly n-k.
Convenient. For an example of this, examine the graphs in figure 1. Note that there is
a 4-clique (consisting of vertices a, b, d, and f) in the graph on the left. Note also that
the vertices not in this clique (namely c and e) do form a cover for the complement of
this graph (which appears on the right).
Since the proof of VC's NP-completeness depends upon proving the relationship
between CLIQUE and VC, we shall leave it as an exercise and just state the theorem.
Theorem 4. VC is NP-complete.

Figure 1 - A graph and its complement.
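Since figure 1's exact edges are not reproduced here, a hypothetical graph with a 4-clique illustrates the relationship; `complement` and `is_cover` are my own helper names:

```python
from itertools import combinations

def complement(vertices, edges):
    """All vertex pairs that are NOT edges of the graph."""
    return {frozenset(p) for p in combinations(sorted(vertices), 2)} - edges

def is_cover(cover, edges):
    """Does every edge touch at least one vertex of the cover?"""
    return all(e & cover for e in edges)

# A hypothetical stand-in for figure 1: a 4-clique on {a, b, d, f},
# with c and e connected to nothing.
V = set("abcdef")
E = {frozenset(p) for p in combinations("abdf", 2)}

print(is_cover({"c", "e"}, complement(V, E)))   # True: the non-clique vertices
print(len(V) - 4)                               # n - k = 2, the cover's size
```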


On to another graph problem. This time we shall examine one of a very different
nature. In this problem we ask about coloring the vertices of a graph so that adjacent
ones are distinct. Here is the definition.
Chromatic Number (COLOR). Given a graph and an integer k, is there a
way to color the vertices with k colors such that adjacent vertices are
colored differently?
This is the general problem for coloring. A special case, map coloring, can always be
done with four colors. But as we shall see presently, the general problem is NP-complete when we must use more than four colors.
Theorem 5. COLOR is NP-complete.
Proof. To show that COLOR is in NP, again just guess the method of
coloring vertices and check it out.
We shall reduce 3-SAT to COLOR. Suppose that we have r clauses which
contain n ≥ 3 variables. We need to construct a graph which can be
colored with n+1 colors if and only if the clauses are satisfiable.
Begin by making all of the variables {v1, ... , vn} and their complements
vertices of the graph. Then connect each variable to its complement. They
must be colored differently, so color one of each pair false and the other
true.
Now we will force the true colors to be different from each other.
Introduce a new collection of vertices {x1, ... , xn} and connect them all
together. The n xi's now form a clique. Connect each xi to all of the vj and
their complements except when i = j. Thus if we have n different true
colors (call them t1, ... , tn) we may color the xi's with these. And, since
neither vi nor its complement is connected to xi, one of these may also be
colored with ti. So far we have colored:
a. each xi with ti,
b. either vi or its complement with ti, and the other false.
An example for three variables is depicted in figure 2. Since shades of
gray are difficult to see, we have used three for the true colors and have
drawn as squares all of the vertices to be colored with the false color.
Note that v1 and v2 are true while v3 is false.

Figure 2 - Variables, True and False Colors


So far, so good. We have constructed a graph which cannot be colored
with fewer than n+1 colors. And, the coloring scheme outlined above is
the only one which will work. This is because the xi's must be different
colors and either vi or its complement has to be the (n+1)-st (false) color.
3-SAT enters at this point. Add a vertex for each clause and name them
c1, ... , cr. Connect each of them to all the variables and their complements
except for the three literals which are in the clause. We now have the
following edges in our graph for all i and j between 1 and n, and k
between 1 and r, except where otherwise noted:

(vi, ¬vi)
(xi, xj) for i ≠ j
(xi, vj) and (xi, ¬vj) for i ≠ j
(ck, vi) whenever vi is not a literal of clause k
(ck, ¬vi) whenever ¬vi is not a literal of clause k
Here's a recap. One of each variable and complement pair must be false
and the other, one of the true colors. These true colors must be different
because the xi's form a clique. Then, the clauses (the ci's) are connected to
all of the literals not in the clause.
Suppose that there is a truth assignment to the variables which satisfies
all of the clauses. Color each true literal with the appropriate ti and color
its complement false. Examine one of the clauses (say, ci). One of its
literals must have been colored with one of the true colors since the
clause is satisfied. The vertex ci can be colored that way too since it is not
connected to that literal. That makes exactly n+1 colors for all the vertices
of the graph.
If there is no truth assignment which satisfies all of the clauses, then for
each of these assignments there must be a clause (again, say ci) which has
all its literals colored with the false or (n+1)-st color. (Because otherwise
we would have a satisfying truth assignment and one of each literal pair
must be colored false if n+1 colors are to suffice.) This means that ci is
connected to vertices of every true color since it is connected to all those
it does not contain. And since it is connected to all but three of the literal
vertices, it must be connected to a vertex colored false also since there
are at least three variables. Thus the graph cannot be colored with only
n+1 colors.
Since constructing the graph takes polynomial time, we have shown that 3-SAT ≤p COLOR and thus COLOR is NP-complete.
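The whole construction can be checked by brute force on a tiny instance. The vertex naming below is my own; fixing the xi colors up front is legitimate because the xi's form a clique and colors are interchangeable:

```python
from itertools import product

def color_graph(n, clauses):
    """Edges of the reduction graph.  Vertices: ('v', i) and ('nv', i) for a
    variable and its complement, ('x', i) for the clique, ('c', k) per clause.
    Clauses are tuples of signed ints (-i is the complement of variable i)."""
    E = set()
    for i in range(1, n + 1):
        E.add(frozenset({('v', i), ('nv', i)}))
        for j in range(1, n + 1):
            if i != j:
                E.add(frozenset({('x', i), ('x', j)}))
                E.add(frozenset({('x', i), ('v', j)}))
                E.add(frozenset({('x', i), ('nv', j)}))
    for k, clause in enumerate(clauses, 1):
        for i in range(1, n + 1):
            if i not in clause:            # ck joins literals it lacks
                E.add(frozenset({('c', k), ('v', i)}))
            if -i not in clause:
                E.add(frozenset({('c', k), ('nv', i)}))
    return E

def colorable(n, clauses, edges):
    """Brute-force (n+1)-colorability, fixing ('x', i) to color i-1 since
    the x's form a clique and the colors may be permuted freely."""
    fixed = {('x', i): i - 1 for i in range(1, n + 1)}
    rest = ([('v', i) for i in range(1, n + 1)] +
            [('nv', i) for i in range(1, n + 1)] +
            [('c', k) for k in range(1, len(clauses) + 1)])
    for colors in product(range(n + 1), repeat=len(rest)):
        col = {**fixed, **dict(zip(rest, colors))}
        if all(col[u] != col[v] for u, v in map(tuple, edges)):
            return True
    return False

clauses = [(1, 2, 3)]              # one satisfiable clause over n = 3 variables
E = color_graph(3, clauses)
print(colorable(3, clauses, E))    # True: n+1 = 4 colors suffice
```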

An interesting aspect of the COLOR problem is that it can be almost immediately


converted into a scheduling problem. In fact, one that is very familiar to anyone who
has spent some time in academe. It is the problem of scheduling final examinations
which we examined earlier.
Examination Scheduling (EXAM). Given a list of courses, a list of
conflicts between them, and an integer k; is there an exam schedule
consisting of k dates such that there are no conflicts between courses
which have examinations on the same date?
Here is how we shall set up the problem. Assign courses to vertices, place edges
between courses if someone takes both, and color the courses by their examination
dates, so that no two courses taken by the same person have the same color.
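Framed as coloring, the scheduling question can be answered by brute force on a small instance (the course names and conflicts below are invented for illustration):

```python
from itertools import product

def can_schedule(courses, conflicts, k):
    """Exactly graph k-colorability: courses are vertices, conflicts are
    edges, and an exam date is a color.  Brute force over all schedules."""
    for dates in product(range(k), repeat=len(courses)):
        date = dict(zip(courses, dates))
        if all(date[a] != date[b] for a, b in conflicts):
            return True
    return False

# hypothetical conflicts: some student takes both courses of each pair
courses = ["calc", "physics", "cs", "art"]
conflicts = [("calc", "physics"), ("calc", "cs"), ("physics", "cs")]

print(can_schedule(courses, conflicts, 2))   # False: three mutual conflicts
print(can_schedule(courses, conflicts, 3))   # True
```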
We have looked at seven problems and shown them to be NP-complete. These are
problems for which no subexponential time algorithms for finding optimal solutions
are known. This means that we must approximate them when we encounter them. There just happen
to be many more in areas of computer science such as systems programming, VLSI
design, and database systems. Thus it is important to be able to recognize them when
they pop up. And, since their solutions are related, methods to approximate them
often work for other problems.
In closing, here are three more NP-complete problems.


Closed Tour (TOUR). Given n cities and an integer k, is there a tour, of
length less than k, of the cities which begins and ends at the same city?
Rectilinear Steiner Spanning Tree (STEINER). Given n points in
Euclidean space and an integer k, is there a collection of vertical and
horizontal lines of total length less than k which spans the points?
Knapsack. Given n items, each with a weight and a value, and two
integers k and m, is there a collection of items with total weight less
than k, which has a total value greater than m?
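A brute-force sketch of the Knapsack decision problem (the item weights and values are invented; note the strict inequalities from the definition above):

```python
from itertools import combinations

def knapsack_decision(items, k, m):
    """items: (weight, value) pairs.  Is there a subset of total weight
    less than k whose total value is greater than m?  (Brute force.)"""
    for r in range(len(items) + 1):
        for sub in combinations(items, r):
            if sum(w for w, _ in sub) < k and sum(v for _, v in sub) > m:
                return True
    return False

items = [(3, 4), (4, 5), (2, 3)]            # hypothetical (weight, value) pairs
print(knapsack_decision(items, k=6, m=6))   # True: weights 3+2 < 6, values 4+3 > 6
print(knapsack_decision(items, k=6, m=7))   # False: no light-enough subset beats 7
```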


Integer Programming

At the core of the problems we shall be considering lies the class of integer
programming problems. From our earlier discussion of optimization and decision
problems, we know that these problems can either be convex and linear or NP-complete. Thus they span the space from very simple computing tasks to extremely
complex optimization problems. Now we shall examine standard methods for solving
these problems in the framework of mathematical programming.
The sections are entitled:
Linear Programming
Transition to Integer Solutions
Cutting Planes
Upper Bounds for Integer Programs
Historical Notes and References
Problems


Linear Programming
The intersection of integer programming and linear programming seems the logical
place to begin our discussion of integer programming. We know that these are exactly
the convex problems which can be stated in integer programming terms. Thus if we can
find a minimum (or maximum) solution we know that that is a global minimum (or
maximum). And, since these are included in the realm of linear programming, we are
guaranteed an optimal solution in polynomial time. These problems not only can be
solved in polynomial time, but comprise a major portion of the algorithms
encountered in computer science. Even though very few are solved in a linear
programming context ordinarily, all can be stated in linear programming terms.
For example, consider bipartite graph matching. An example of a bipartite graph (one
whose vertices can be partitioned into two sets so that every edge joins the two sets) appears in figure 1b.
The bipartite graph matching problem is to find a set of unconnected edges which
cover as many of the vertices as possible. If we select the set of edges:
{<a, b>, <c, f>, <e, d>}
then we have covered all of the vertices of the graph. This is a maximal matching for
the graph.
Now we shall state the problem in linear programming terms. For each edge <u, v> of
the graph we introduce the variable xuv. If the variable is set to 1 then the edge is part
of the matching, and if set to 0, the edge is not in the matching set.
First, we would like to cover as many vertices as possible. This means including as
many edges as we are able. We can accomplish this by maximizing the objective
function:
z = xab + xad + xcd + xcf + xed.
Next we must make sure that no connected edges occur in the matching. Since two
edges leave vertex a, we add the constraint:


xab + xad ≤ 1
in order to insure that only one of the two edges ends up in the matching. We add
similar constraints for vertices c and d. The complete linear program appears as
figure 1a.

(a)

(b)

Figure 1 - Bipartite Graph Matching
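The program of figure 1a is small enough to solve by enumerating every 0-1 assignment, which also previews the integer programs of the next chapter; the edge and constraint lists below are transcribed from the discussion above:

```python
from itertools import product

# The five edges of the graph in figure 1b become 0-1 variables.
edges = ["ab", "ad", "cd", "cf", "ed"]

# one "at most one chosen edge" constraint per vertex meeting several edges
constraints = [("ab", "ad"),           # vertex a
               ("cd", "cf"),           # vertex c
               ("ad", "cd", "ed")]     # vertex d

best, best_x = 0, None
for x in product([0, 1], repeat=len(edges)):
    chosen = dict(zip(edges, x))
    if all(sum(chosen[e] for e in c) <= 1 for c in constraints):
        if sum(x) > best:
            best, best_x = sum(x), [e for e in edges if chosen[e]]

print(best, best_x)   # 3 ['ab', 'cf', 'ed'] -- the maximal matching
```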


Let us now back up a step and introduce linear programming in a new fashion: as a
geometric entity in the plane. Consider the line 2x1 + x2 = 6 which divides the plane
into two halfplanes. If we wish to designate the halfplane above the line we would
write 2x1 + x2 ≥ 6. This halfplane is shown as the shaded region of figure 2a. Now add
the line -2x1 + x2 = -2 to the first line. The area below it is represented by
-2x1 + x2 ≤ -2. The area bounded by these two lines forms the shaded region in
figure 2b. Note that it is unbounded to the right.

(a)

(b)

(c)

Figure 2 - Areas Bounded by Lines


Adding x1 + 3x2 = 15 and -x1 + x2 = -3 to the first two lines provides a closed perimeter
in the plane. This enclosed region is the shaded area pictured in figure 2c. Defining this
shaded region is done by stating that the area lies above the first line, below the
second and third, and above the last line. The four inequalities (one for each line) below
specify exactly what pairs <x1, x2> lie within the region.

2x1 + x2 ≥ 6
-2x1 + x2 ≤ -2
x1 + 3x2 ≤ 15
-x1 + x2 ≥ -3
Another area of the plane defined by lines is pictured in figure 3. This area is defined
by the constraining equations provided in the chart at the left along with the edges
they define.

Figure 3 - Constraints and Feasible Solution Region


As before, the area bounded by the lines (and axes) is precisely the region of pairs <x1,
x2> which satisfy the constraining equations. This is called the feasible solution
region. Finding the largest pair in the feasible solution region is the optimization
problem:
maximize z = x1 + x2
subject to the conditions:
- x1 + x2 ≤ 3
2x1 + 3x2 ≤ 19
-3x1 + x2 ≥ -12
where the values of x1, x2 ≥ 0

A quick glance at the picture in figure 3 tells us that the best solution to the above
optimization problem occurs at vertex d, where x1 takes the value 5 and x2 is 3.
Now we are ready for two definitions which formally describe this particular class of
optimization problems.
Definition. A general linear programming problem may be stated as
follows: Given real numbers b1 , b2 , ... , bm , c1 , c2 , ... , cn and aij (for i =
1, ... , m and j = 1, ... , n), minimize (or maximize) the objective function:
z(x1 , x2 , ... , xn) = c1 x1 + c2 x2 + ... + cnxn
subject to the conditions

ai1 x1 + ai2 x2 + ... + ain xn  (≤, =, or ≥)  bi     for i = 1, ... , m
Definition. In a linear programming problem with nonnegativity


constraints all of the variables xi are greater than or equal to zero.
If we could state the optimization as m equations in n unknowns, then maybe we
could solve for some of the unknowns by methods such as Gaussian elimination from
linear algebra. We now take our problem from figure 3 and rewrite the equations so
that they are still valid, but now contain the equals operator. Here is the
transformation:
- x1 + x2 ≤ 3      →      - x1 + x2 + y1 = 3
2x1 + 3x2 ≤ 19     →      2x1 + 3x2 + y2 = 19
-3x1 + x2 ≥ -12    →      -3x1 + x2 - y3 = -12
Note exactly what was done. In the first equation we added y1 to bring the value of the
left hand side up to 3. Thus y1 takes up the slack and is called a slack variable. In the
last equation we subtracted the surplus variable y3 so that the value (which was over
-12) could be equal to -12. This problem is now almost as we wish.


Definition. A linear programming problem is in standard form if and
only if all of the xi and bj are greater than or equal to zero and the
constraints can be stated: Ax = b.
By changing all of the signs in our third equation, we finally arrive at the related
standard problem for our previous problem.
maximize z(x1 , x2 , y1 , y2 , y3) = x1 + x2
subject to the constraints:
- x1 + x2 + y1 = 3
2x1 + 3x2 + y2 = 19
3x1 - x2 + y3 = 12
where the values of x1 , x2 , y1 , y2 , y3 ≥ 0.
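The slack-and-surplus transformation can be sketched as a routine; the input format (coefficient list, operator, right-hand side) is my own convention:

```python
def to_standard_form(constraints):
    """constraints: list of (coeffs, op, rhs) with op '<=' or '>='.
    Appends one slack ('<=') or surplus ('>=') variable per row, then flips
    signs where necessary so every right-hand side is nonnegative."""
    m = len(constraints)
    rows = []
    for i, (coeffs, op, rhs) in enumerate(constraints):
        extra = [0] * m
        extra[i] = 1 if op == '<=' else -1     # slack or surplus variable
        row, b = coeffs + extra, rhs
        if b < 0:                              # make the right-hand side >= 0
            row, b = [-a for a in row], -b
        rows.append((row, b))
    return rows

prog = [([-1, 1], '<=', 3), ([2, 3], '<=', 19), ([-3, 1], '>=', -12)]
for row, b in to_standard_form(prog):
    print(row, b)
# [-1, 1, 1, 0, 0] 3
# [2, 3, 0, 1, 0] 19
# [3, -1, 0, 0, 1] 12
```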
One very important piece of information concerning the relationship between
problems and their related standard problems needs to be stressed. Namely:
There is a one-to-one correspondence between a problem's feasible
solutions and those for its related standard problem.
In our case, if we omit the slack and surplus variable (yi) values from any feasible
solution to the related standard problem, we have a solution to our original problem.
Now that we have a set of three equations in five unknowns, let's try to solve for x1
and x2 by standard linear algebra methods. We do this in a series of tableaux. Our
tableau setup is designed to not only help solve the problem, but impart information
as we progress. Compare this one:
      x1    x2    y1    y2    y3     bj
       1     1     0     0     0      0    -z
      -1     1     1     0     0      3    y1
       2     3     0     1     0     19    y2
       3    -1     0     0     1     12    y3

to the equations above. The columns are assigned to the original, slack, and surplus
variables. The first row holds the objective function and for each equation, there is a
row holding its coefficients (the aij and the bj's).
Now for some linear algebra. If we can find m independent columns, (one for each
constraint or row) then we have a basic solution to our problem. This basic solution
comes from expressing the vector b as a linear combination of these columns (which
we call a basis). At the moment our problem can be expressed:
Ax + Iy = b
(where I is an m by m identity matrix) and a basic feasible solution for our problem
can be found by setting the xi = 0 and yj = bj. This is:
      x1    x2    y1    y2    y3
       0     0     3    19    12

Consult the picture in figure 3 and note that <0, 0> was indeed a feasible solution of
the original problem. Now look again at the tableau above and note that the basis
variables are indicated on the right.
At this point we would like to move along toward the optimum solution for our
problem. Making column one into a unit vector (one 1 and the rest 0's) would mean
that we could express b as a linear combination of x1 and two of the yj. This is a step
forward.
Look in column one. We would like to set the column one entry of one of the rows to
1 and then pivot on it. (By pivoting we mean add or subtract that row from the rest to
get 0's in the first column.) We cannot do that in the first row without making the
value of b1 negative (which is not allowed in our definition of standard problem).
Using row two would set b2 to 19/2. This is legal but not the value of x1 in any feasible
solution. So, we are forced to use row three as our pivot.


After pivoting on column one, row three, we produce the tableau:


      x1    x2    y1    y2    y3     bj
       0   4/3     0     0  -1/3     -4    -z
       0   2/3     1     0   1/3      7    y1
       0  11/3     0     1  -2/3     11    y2
       1  -1/3     0     0   1/3      4    x1

By doing this, we have added x1 to the basis and removed y3 from it. The feasible
solution is now:
      x1    x2    y1    y2    y3
       4     0     7    11     0

which corresponds to the lower right corner of the polygon in figure 3. In the same
manner, we select column two to pivot on next (so that x2 joins the basis). There is a
choice between rows one and two. We select row two and produce the tableau:
      x1    x2    y1    y2    y3     bj
       0     0     0 -4/11 -1/11     -8    -z
       0     0     1 -2/11  5/11      5    y1
       0     1     0  3/11 -2/11      3    x2
       1     0     0  1/11  3/11      5    x1

which provides us with the solution:

      x1    x2    y1    y2    y3
       5     3     5     0     0

Here we halt with the optimum (maximum) feasible solution <5, 3> to our original
linear programming problem.
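The whole pivoting process can be collected into a tiny simplex sketch (not the book's code). It uses the steepest descent column rule and the usual ratio test discussed in this section, with exact fractions so the numbers match the tableaux:

```python
from fractions import Fraction as F

def pivot(T, r, c):
    """Divide row r by the pivot entry, then clear column c from other rows."""
    p = T[r][c]
    T[r] = [a / p for a in T[r]]
    for i in range(len(T)):
        if i != r and T[i][c] != 0:
            f = T[i][c]
            T[i] = [a - f * b for a, b in zip(T[i], T[r])]

def simplex(T, basis):
    """Maximize; T[0] is the objective row with -z in the last column.
    (No unboundedness guard: this sketch assumes a bounded problem.)"""
    while True:
        # steepest descent: the column with the largest positive top-row entry
        c = max(range(len(T[0]) - 1), key=lambda j: T[0][j])
        if T[0][c] <= 0:
            return                     # nothing can improve z: optimal
        # ratio test: the row keeping every b_j nonnegative after the pivot
        rows = [i for i in range(1, len(T)) if T[i][c] > 0]
        r = min(rows, key=lambda i: T[i][-1] / T[i][c])
        pivot(T, r, c)
        basis[r - 1] = c               # column c enters the basis

T = [[F(v) for v in row] for row in (
    [ 1,  1, 0, 0, 0,  0],    # objective x1 + x2, -z at the right
    [-1,  1, 1, 0, 0,  3],    # -x1 +  x2 + y1      =  3
    [ 2,  3, 0, 1, 0, 19],    # 2x1 + 3x2      + y2 = 19
    [ 3, -1, 0, 0, 1, 12])]   # 3x1 -  x2      + y3 = 12
basis = [2, 3, 4]             # columns of y1, y2, y3

simplex(T, basis)
sol = [F(0)] * 5
for row, col in enumerate(basis):
    sol[col] = T[row + 1][-1]
print(int(sol[0]), int(sol[1]), int(-T[0][-1]))   # 5 3 8: optimum <5, 3>, z = 8
```

The pivots it performs are exactly the two from the text: column one on row three, then column two on the row holding y2.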
(A note in passing on a topic which we shall return to later. As we pivoted and
modified our tableaux until we found an optimal solution, we were traversing the
vertices of the polygon in figure 3. In fact, we began at vertex a and went through
vertex e so that we could end at vertex d. This geometric interpretation of this
method of solving linear programming problems will be examined in a little more
detail at the end of this section.)
Several topics need some explanation. The first one concerns just exactly how we
choose pivots. The top row of the tableau helps with this since it indicates how much
the objective function would change if we added that column to the basis. For
example, in the first tableau of the above example we note that if either x1 or x2 is
added to the basis, then the objective function would increase by the new value of x1
or x2. We added x1 = 4 to the basis and the objective function went from 0 to 4. (Note
that there is a -4 at the top right.) In the second tableau, the top row informs us that
placing x2 in the basis will increase the objective function by 4/3 of x2's new value.
This happened, as it went from 4 to 8 as we went from <4, 0> to a solution of <5, 3>.
Thus selecting a pivot which will increase the value of the objective function is
preferable. The way to do this, however, is controversial. A method named steepest
descent calls for pivoting on the column with the largest positive entry in the top row.
Another method (this one called greatest increment) tells us to pivot on the column
which will increase the objective function by the greatest amount. The first is of
course the easier in terms of computational time, but the second might just get us
our optimal answer sooner. Both methods have devotees.
Now that we have selected a column, what row do we pick? It is easy to find out what
row not to pick. First of all, do not select a row that has a nonpositive entry at the
selected column. Since all bj must remain positive, pivoting on a negative entry would
ruin this. (If none of the entries in the column are positive, then there is a problem
since the top row entry tells us that the solution can be improved and the other
entries claim that it cannot. This inconsistency means that the solution is unbounded
and thus no optimum exists.) Next, do not select a row which will lead to an infeasible
solution. An example here is column 1, row 2 of the first tableau in our last example.

Pivoting on it will set x1 to 19/2, which is not feasible. Another problem occurs when
pivoting on some row will make one of the bj entries negative. (This would happen to
b3 if we pivoted on row 2, column 1 in the first tableau.) This is forbidden also in our
standard form problem.
There is one other subtlety in pivoting. It is possible to cycle if one is unlucky. There
are methods for keeping this from taking place, but we shall not investigate them
here.
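The pivoting mechanics described above can be sketched in a few lines of code. The following is a minimal illustration of our own (not from the text): choose the pivot column by the largest positive top-row entry (the "steepest descent" rule), choose the pivot row by the minimum-ratio test, and pivot. The tableau layout follows the text, with the objective row first (ending in -z) and each constraint row ending in its b value; the small example problem at the bottom is made up for demonstration.

```python
def pivot(tableau, row, col):
    """Pivot so that column `col` becomes a unit vector with a 1 in `row`."""
    new = [r[:] for r in tableau]
    p = new[row][col]
    new[row] = [v / p for v in new[row]]          # scale pivot row to get a 1
    for i, r in enumerate(new):
        if i != row and r[col] != 0:
            factor = r[col]
            new[i] = [v - factor * pv for v, pv in zip(r, new[row])]
    return new

def simplex_step(tableau):
    """One step: pick the pivot column and row; return a new tableau, or None at the optimum."""
    obj = tableau[0][:-1]
    col = max(range(len(obj)), key=lambda j: obj[j])
    if obj[col] <= 1e-9:
        return None                               # no improving column: optimal
    # minimum-ratio test over rows with a positive entry in the pivot column
    best, row = None, None
    for i in range(1, len(tableau)):
        if tableau[i][col] > 1e-9:
            ratio = tableau[i][-1] / tableau[i][col]
            if best is None or ratio < best:
                best, row = ratio, i
    if row is None:
        raise ValueError("column is unbounded: no optimum exists")
    return pivot(tableau, row, col)

# made-up example: maximize z = x1 + x2 subject to x1 + 2*x2 <= 4 and x1 <= 3
t = [[1, 1, 0, 0, 0],      # objective row, -z = 0
     [1, 2, 1, 0, 4],      # x1 + 2*x2 + y1 = 4
     [1, 0, 0, 1, 3]]      # x1        + y2 = 3
while (nxt := simplex_step(t)) is not None:
    t = nxt
print(-t[0][-1])           # optimum z for this little problem
```

Note that this sketch omits the feasibility safeguards discussed above (and any anti-cycling rule), so it is only a model of a single pivot step, not a complete solver.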
Our last problem concerns the place we start when solving the equations. If we are
fortunate and have a problem in the standard form: Ax + Iy = b, then we merely
set each xi = 0 and yj = bj and take that as our first basic feasible solution.
Otherwise we have to do some extra work. We must find a basic feasible solution
before we can begin to solve our optimization problem.
Consider our very first example, that of figure 2. After putting it in standard form
with slack and surplus variables we have:
maximize z(x1 , x2) = 2x1 + x2
subject to the constraints:
2x1 + x2 - y1 = 6
2x1 - x2 - y2 = 2
x1 + 3x2 + y3 = 15
x1 - x2 + y4 = 3
where all xi, yj ≥ 0.
This looks good. But, what do we use as our first basic feasible solution? Looking
merely at the equations, there are values for the yj which satisfy them. In fact,
something like:
     x1   x2   y1   y2   y3   y4
      0    0   -6   -2   15    3

ought to be a solution to the problem. But, this of course is not a basic feasible
solution because the yj are negative and this is not allowed in a standard linear
programming problem.
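The negative values claimed above are easy to verify by direct substitution: at the origin, solve each standard-form constraint for its slack or surplus variable. (A quick arithmetic check, not a solver.)

```python
# at the proposed starting point <0, 0> ...
x1, x2 = 0, 0

# ... each constraint of the figure 2 problem forces the y values below
y1 = 2*x1 + x2 - 6        # from 2*x1 + x2 - y1 = 6
y2 = 2*x1 - x2 - 2        # from 2*x1 - x2 - y2 = 2
y3 = 15 - x1 - 3*x2       # from x1 + 3*x2 + y3 = 15
y4 = 3 - x1 + x2          # from x1 - x2 + y4 = 3

print(y1, y2, y3, y4)     # -6 -2 15 3: y1 and y2 violate nonnegativity
```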


We are in trouble. There seems to be no obvious basic feasible solution. Looking at
the picture of this problem in figure 2 tells us what happened. Since <0, 0> was not in
the feasible solution space, we could not begin there. So, how do we begin with no
proper starting tableau?
Well, we just might be able to transform this into another linear programming
problem. For example, if we added some extra variables to the first two equations
which had the proper signs, then the yi could be set to zero and there would be a
basic feasible solution to the new problem. Consider the related problem:
minimize z(s1, s2) = s1 + s2
subject to the constraints:
2x1 + x2 - y1 + s1 = 6
2x1 - x2 - y2 + s2 = 2
x1 + 3x2 + y3 = 15
x1 - x2 + y4 = 3
where all xi, yj, sj ≥ 0
where the variables s1 and s2 have been added. The basic feasible solution to this new
problem is:
     x1   x2   y1   y2   y3   y4   s1   s2
      0    0    0    0   15    3    6    2

Minimizing s1 + s2 means bringing them down to zero. If we are lucky, we can get
both of the sj to zero and toss them out of the equations. This will leave a basic
feasible solution to our related standard problem.
Since minimization is just reversing signs and maximizing, we begin with a tableau
like that below with -1's on the top row over the sj for our new objective function.
     x1   x2   y1   y2   y3   y4   s1   s2    bj
      0    0    0    0    0    0   -1   -1     0    -z
      2    1   -1    0    0    0    1    0     6    s1
      2   -1    0   -1    0    0    0    1     2    s2
      1    3    0    0    1    0    0    0    15    y3
      1   -1    0    0    0    1    0    0     3    y4

The tableau is not in proper form though. We want to have zeros, not negative
numbers in the top row over the basis columns. To do this we merely add the first
two rows (those in which s1 and s2 appear) to the top row. This sets -z = 8 as our
beginning point and we now have:
     x1   x2   y1   y2   y3   y4   s1   s2    bj
      4    0   -1   -1    0    0    0    0     8    -z
      2    1   -1    0    0    0    1    0     6    s1
      2   -1    0   -1    0    0    0    1     2    s2
      1    3    0    0    1    0    0    0    15    y3
      1   -1    0    0    0    1    0    0     3    y4

We wish to get rid of the sj variables, so the first step is to pivot on the first column
(which looks very promising since there is a 4 in the top row) and the second row to
add x1 to the basis and get:
     x1    x2    y1    y2   y3   y4   s1    s2    bj
      0     2    -1     1    0    0    0    -2     4    -z
      0     2    -1     1    0    0    1    -1     4    s1
      1  -1/2     0  -1/2    0    0    0   1/2     1    x1
      0   7/2     0   1/2    1    0    0  -1/2    14    y3
      0  -1/2     0   1/2    0    1    0  -1/2     2    y4

The second column now looks very attractive and so we select it for our next pivot.
After pivoting on the second column, first row we have:
     x1   x2    y1     y2    y3   y4    s1     s2    bj
      0    0     0      0     0    0    -1     -1     0    -z
      0    1  -1/2    1/2     0    0   1/2   -1/2     2    x2
      1    0  -1/4   -1/4     0    0   1/4    1/4     2    x1
      0    0   7/4   -5/4     1    0  -7/4    5/4     7    y3
      0    0  -1/4    3/4     0    1   1/4   -3/4     3    y4

At last the sj are zero and we know that we have a basic feasible solution to our
related standard problem. It is
     x1   x2   y1   y2   y3   y4
      2    2    0    0    7    3

and the xi pair <2, 2> is indeed a feasible solution to the original.
Thus linear programming itself provides the method which is used to discover the
basic feasible solution needed in order to start solving the related standard problem.
It also informs us as to whether or not there are feasible solutions. The algorithm we
went through above is named the simplex method and is the standard method for
solving linear programming problems.
With the simplex method in hand we can either solve linear programming problems,
or detect situations where optimal solutions cannot be found. The procedure is

outlined in figure 4.
place the problem in standard form
if there is no basis then [PHASE I]
add an artificial basis of sj variables
solve problem to minimize sum sj
if unsuccessful then no solution exists
otherwise discard the sj variables and restore original objective function
solve problem [PHASE II]
if unable to pivot then problem is unbounded

Figure 4 - Two Phase Linear Programming Solution


We close our discussion of algebraic solutions to the linear programming problem
with two fundamental theorems.
Theorem 1. Exactly one of the following situations exists for each linear
programming problem.
a) There is no solution,
b) the problem is unbounded, or
c) there is an optimal feasible solution.
Theorem 2. During the simplex algorithm's execution, if the top tableau
row indicates that a basic feasible solution cannot be improved by
pivoting, then it is optimum.
Now we shall return to geometry in order to provide intuition for linear programming
problems and the simplex algorithm. First we move from the plane to n dimensional
space. In n dimensions the counterpart of a line is a hyperplane. It is a set of points
satisfying an equation such as:
a1x1 + ... + anxn = b
A hyperplane divides the space into two halfspaces according to the inequalities:
a1x1 + ... + anxn ≤ b and a1x1 + ... + anxn ≥ b
Since a halfspace is a convex set, so is the intersection of several halfspaces. If the
intersection of a finite number of halfspaces is bounded then it is a polytope, in fact,

a convex polytope.
Adding a dimension to the polygon featured in figure 3 shall provide us with a three
dimensional polytope. Figure 5 contains it and the halfspaces whose intersection
form it.

Figure 5 - Constraints Forming a Polytope


Some more terminology is in order. The outside of a polytope is composed of faces.
Three kinds exist. We have vertices, which are faces of dimension zero, edges, which
have dimension one, and facets, which are of dimension n-1. A theorem describes this.
Theorem 3. A polytope is the convex hull of its vertices.
This shows us a little of what happens while solving linear programming problems.
The basic feasible solutions are just the vertices of the polytope built by intersecting
the constraints. And, since the feasible solution area is inside the convex hull of
vertices, the optimum solution is always at a vertex.
When we pivot, we are traversing an edge from one vertex (basic feasible solution) to
another. When we cannot improve a solution, we have reached an optimum vertex.
Since the polytope is convex, this optimum must be a global optimum.
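The claim that an optimum always sits at a vertex can be checked directly on the figure 2 problem (maximize z = 2x1 + x2). The sketch below, our own illustration and not the simplex method, intersects every pair of constraint lines, keeps the feasible intersection points (the vertices of the polygon), and evaluates z at each; exact rational arithmetic avoids rounding trouble.

```python
from fractions import Fraction as F
from itertools import combinations

# each line a*x1 + b*x2 = c as (a, b, c): the four constraints plus the two axes
lines = [(2, 1, 6), (2, -1, 2), (1, 3, 15), (1, -1, 3), (1, 0, 0), (0, 1, 0)]

def feasible(x1, x2):
    return (2*x1 + x2 >= 6 and 2*x1 - x2 >= 2 and
            x1 + 3*x2 <= 15 and x1 - x2 <= 3 and x1 >= 0 and x2 >= 0)

vertices = set()
for (a1, p1, c1), (a2, p2, c2) in combinations(lines, 2):
    det = a1*p2 - a2*p1
    if det == 0:
        continue                      # parallel lines: no intersection point
    x1 = F(c1*p2 - c2*p1, det)        # Cramer's rule for the 2x2 system
    x2 = F(a1*c2 - a2*c1, det)
    if feasible(x1, x2):
        vertices.add((x1, x2))

best = max(vertices, key=lambda v: 2*v[0] + v[1])
print(best[0], best[1], 2*best[0] + best[1])   # 6 3 15
```

The maximum of z over the vertices is also the maximum over the whole polygon, which is exactly what Theorem 3 and the convexity argument above predict.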


Transition to Integer Solutions.


One interesting circumstance from the last section is the following observation. Since
we traveled the edges of the polytope formed by intersecting the constraints of our
linear programming problems, we often came upon integer solutions to our problems.
And, in problems where integer solutions are necessary (such as bipartite graph
matching), we found them. Let us explore this further.
Consider minimal spanning trees for weighted graphs. This problem is never solved
as a linear programming problem, but can be easily stated as one. Recall that we want
to find a minimum weight collection of edges that connect all of the graph's vertices.
That is the minimum spanning tree for the graph. Figure 1a contains a small graph
whose minimum spanning tree is the pair of edges: {<a, b>, <b, c>}.

Figure 1 - A Minimal Spanning Tree and an Integer Program


To express the minimum spanning tree problem for this graph as a linear
programming problem, we need to state some conditions. We begin by assigning a
variable to each edge of the graph. (For example, xab represents the edge from node a
to node b.) If a variable takes the value one, then that edge is in the minimum
spanning tree. Thus the tree consisting of {<a, b>, <b, c>} would be defined by setting
xac to zero and both xab and xbc to one.

http://www.cs.uky.edu/~lewis/cs-heuristic/text/integer/transit.html (1 of 12)12/2/2015 10:07:21 AM

Transition to integer solutions.

The first constraint of figure 1b calls for two edges in the tree and the others require
the variables to take values between zero and one. (In other words, no more than two
edges may be in the tree and each edge can be in it at most once.)
The shaded region of Figure 1c is the polytope formed by the intersection of all of the
constraints. The constraints that limit the variables to values between zero and one
define a unit cube and the first constraint slices the cube on a diagonal. This shaded
triangular plane contains all of the feasible solutions to the problem. Note that some
of the solutions call for portions of the edges to be in the spanning tree. (For
example, two-thirds of each edge is a feasible solution!) And especially note that the
vertices of this region are the three integer solutions to our problem. This means that
when we minimize the sum of the variables times the weight of their edges, we will
indeed get the proper solution since the vertices of the polytope defined by the
constraints are basic feasible solutions.
After placing the constraints in standard form we find that the complete linear
programming problem statement for the minimum spanning tree of the small graph
in figure 1a is:
minimize z = 2xab + 7xac + 3xbc
subject to the constraints:
xab + xac + xbc = 2
xab + y1 = 1
xac + y2 = 1
xbc + y3 = 1
where xab, xac, xbc ≥ 0
Since there is no feasible solution at the origin, we of course would have to apply the
two-phase process to extract a solution. (We must also note that Gaussian elimination
is far more time consuming than any of the standard methods for building a
minimum spanning tree.)
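Since this integer program is tiny, its optimum can be confirmed by brute force over all 0-1 assignments. (An illustrative check of our own; the edge weights 2, 7, and 3 are read off the objective function above.)

```python
from itertools import product

best = None
for xab, xac, xbc in product((0, 1), repeat=3):
    if xab + xac + xbc != 2:          # the tree must contain exactly two edges
        continue
    z = 2*xab + 7*xac + 3*xbc         # cost of this edge selection
    if best is None or z < best[0]:
        best = (z, (xab, xac, xbc))

print(best)    # (5, (1, 0, 1)): cost 5 using edges <a,b> and <b,c>
```

This matches the spanning tree {<a, b>, <b, c>} named in the text.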
Going on to larger problems, we must do a bit more than require that two of three
edges be in the spanning tree. Figure 2 contains a slightly larger graph whose
minimum spanning tree is:
{<a, c>, <a, e>, <b, c>, <b, d>, <c, f>}
Let us develop the constraints for this problem. As before, we assign a variable to
each edge of the graph, and, if the variable xuv is 1 then the edge from node u to node


v is in the tree. To find the minimum spanning tree we again minimize the sum of the
variable-weight products. And, once more, we require the variables to have values
between zero and one.

Figure 2 - A Weighted Graph


To achieve a spanning tree we require that exactly 5 edges are in the tree, it spans the
vertices, and there are no cycles. To ensure that exactly 5 edges are placed in the tree,
we state:
xab + xac + xae + xbc + xbd + xcd + xce + xcf + xdf + xef = 5
With just the above constraint, one could select the five smallest edges and have a
feasible solution. This, however, would not span the graph. Making sure that the tree
spans the graph means insisting that for every node, one of the edges connected to it
must be in the tree. This requirement induces the following constraints:
xab + xac + xae ≥ 1      [vertex a]
xab + xbc + xbd ≥ 1      [vertex b]
xac + xbc + xcd + xce + xcf ≥ 1      [vertex c]
xbd + xcd + xdf ≥ 1      [vertex d]
xae + xce + xef ≥ 1      [vertex e]
xcf + xdf + xef ≥ 1      [vertex f]


Keeping cycles from appearing is done by taking all subgraphs in which it is possible
to have a cycle and bounding the number of edges that may be included in the spanning tree.
For example, the subgraph containing the nodes {a, b, c} might have a cycle of length
three, so we write:
xab + xac + xbc ≤ 2
and for the subgraph containing {a, b, c, d} we include:
xab + xac + xbc + xbd + xcd ≤ 3
Completing the collection of anti-cycle conditions such as those above completes the
description of the minimum spanning tree.
As before, we get an integer valued answer when we apply the simplex method. This
is again because the vertices of the polytope defined by the constraints have values of
zero and one for all variables.
There is one problem though. There are about 26 constraints needed for cycle
prevention in the last graph and it was not a very large graph. And, since it was
planar, it did not have many edges. To find the minimum spanning tree for an
arbitrary graph we might need a great many constraints. In fact, it is possible in some
problems to have an exponential number of constraints. This is why we do not solve
these problems with linear programming methods.
A much easier problem to define in linear programming terms is the NP-complete
knapsack problem. Recall that in this problem we have n objects, and the i-th one
weighs wi and has a value of vi. We wish to pack a knapsack of capacity K with a
valuable collection of objects. Our variable set shall be {x1, x2, ..., xn}, and we set the
variable xi to 1 when the i-th object is in the knapsack. Maximizing the value held in
the knapsack is done by the following objective function:

maximize z = v1x1 + v2x2 + ... + vnxn

We also require that all variables are nonnegative (xi ≥ 0) and bound the knapsack
weight with the constraint:

w1x1 + w2x2 + ... + wnxn ≤ K


This seems to work quite well and is easy to express. And, in fact, we may easily look
at a small example. Let us consider a problem with only two objects so that
we can draw the polytope of feasible solutions. Let one object weigh 5 pounds and
the other weigh 8 pounds. Also, let them have values of 1 and 2. If we wish to pack a
knapsack of capacity 23, the problem can be very simply stated in linear
programming terms as indicated in figure 3.

Figure 3 - A Simple Knapsack Problem


The picture in figure 3 provides the feasible solution space for this problem. We see
that applying linear programming to this problem will provide a correct optimum
feasible solution of x1 = 0 and x2 = 23/8. This would be fine if the objects were liquid
or if we could chop them up. In those cases one merely fills the knapsack with pieces
of the object which has the largest value per pound.
But, we want to place whole objects in the knapsack! The solution we are looking for
is x1 = 1 and x2 = 2. Linear programming comes close to the solution, but does not
provide it. This is because we must have integer solutions and that is a nonlinear
constraint. In fact, the feasible integer solution space is the collection of grid points
that are in the shaded area of the graph. This is not a convex space. So, it seems that
knapsack is not so easy to solve after all.
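The gap between the relaxed and the integer optimum for this instance can be computed directly. The sketch below uses the fill-by-best-value-per-pound rule for the relaxation (valid here because there are no upper bounds on the variables) and plain enumeration for the integer program; it is our own illustration of the discussion above.

```python
from fractions import Fraction as F

w, v, K = (5, 8), (1, 2), 23          # weights, values, capacity from the text

# LP relaxation: fill the knapsack entirely with the best value-per-pound object
rates = [F(v[i], w[i]) for i in range(2)]
i = rates.index(max(rates))
lp_value = F(K, w[i]) * v[i]          # x2 = 23/8 of the second object

# integer program: brute force over whole numbers of each object
ip_value = max(x1*v[0] + x2*v[1]
               for x1 in range(K // w[0] + 1)
               for x2 in range(K // w[1] + 1)
               if x1*w[0] + x2*w[1] <= K)

print(lp_value, ip_value)             # 23/4 5
```

The relaxation reports 23/4 = 5.75, while no whole-object packing beats 5, so rounding the fractional answer cannot be trusted to land on the integer optimum.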
Now consider the closed city tour problem. Recall that there are n cities with costs cab
(to travel from city a to city b) and we wish to make a minimum cost closed tour (a
loop) of the cities visiting each city exactly once. As before, we shall assign one
variable to each edge of the graph of cities. Thus variable xab = 1 indicates that we
have traveled from city a to city b on the tour. We must keep the variables at values of
zero or one and ensure that we leave and enter each city exactly once. In integer
linear programming form this becomes:

minimize z = Σ cijxij   (summed over all pairs i ≠ j)


subject to the constraints:

xi1 + xi2 + ... + xin = 1 for each city i      [leave city i exactly once]
x1j + x2j + ... + xnj = 1 for each city j      [enter city j exactly once]

xij ∈ {0, 1} for all i, j


But things are not that simple. A collection of regional tours (as shown in figure 4a)
connecting all of the cities meets the conditions set forth above. To eliminate these
regional tours, we must have additional constraints. We note that for every subset of
cities, part of the tour must lead into and part must lead out of the subset. This is
illustrated in figure 4b.

Figure 4 - Closed Tour Considerations


Elimination of local subtours in some subset S of the n cities is done by specifying a
constraint which requires entering or leaving the subset. These constraints are of the
form:

Σ xij ≥ 1   (summed over all cities i in S and j not in S)

for every proper subset of cities S. This does take care of the regional tour problem,

but introduces a number of additional constraints equal to the number of subsets of
the n cities. Unfortunately, this is exponential in the number of cities.
Getting out of this constraint explosion can be done by placing an order upon visits to
cities. First, mandate that the tour must begin with city 1. Then assign a variable ti to
each city to indicate the city's place on the tour. (That is, if ti = 17 then city i is the
seventeenth city on the tour.) In order to ensure that:
a. All cities are between 2nd and n-th on the tour, and
b. Cities adjacent on the tour have consecutively numbered places on the
tour,
we set t1 to 1, and for all i ≠ k between 2 and n, we include the constraint:
ti - tk + nxik ≤ n - 1
and note that if xik = 1, then tk must be greater than ti. That is, city i must precede city k
on the tour. It also follows that any city that comes after city i on the tour has a larger
t-value than ti. Since the above inequality also requires each of these values to be at most
n, we may rest assured that we have a proper tour.
This is fine since we have a suitable number of constraints for the problem. So, why
not solve it with linear programming methods and achieve an answer in polynomial
time? Because the same problem we ran into with the knapsack problem arises,
namely that the polytope in question does not have all integer vertices. We might get
solutions with cities visited in places 17.35 or 24.7 on the tour and this of course is
not acceptable.
(One solution to this problem is to use constraints such as:
xik (ti - tk) = 1
but this is no longer a linear relationship and so we cannot use linear programming to
solve it.)
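The ordering constraints above can be exercised by brute force on a hypothetical four-city instance of our own: a pair of regional tours 1-2-1 and 3-4-3 enters and leaves every city exactly once, yet admits no valid t-values, while a genuine closed tour 1-2-3-4-1 does.

```python
from itertools import permutations

n = 4

def ordering_feasible(x):
    """Is there an assignment of places t (with t1 = 1) satisfying
    t_i - t_k + n*x_ik <= n - 1 for all i != k between 2 and n?"""
    for rest in permutations(range(2, n + 1)):        # candidate t-values for cities 2..n
        t = {1: 1, **dict(zip(range(2, n + 1), rest))}
        ok = all(t[i] - t[k] + n * x.get((i, k), 0) <= n - 1
                 for i in range(2, n + 1) for k in range(2, n + 1) if i != k)
        if ok:
            return True
    return False

subtours = {(1, 2): 1, (2, 1): 1, (3, 4): 1, (4, 3): 1}   # two regional tours
full_tour = {(1, 2): 1, (2, 3): 1, (3, 4): 1, (4, 1): 1}  # one closed tour

print(ordering_feasible(subtours), ordering_feasible(full_tour))   # False True
```

The subtour fails because edges <3, 4> and <4, 3> simultaneously force t3 < t4 and t4 < t3, exactly the contradiction the constraint is designed to create.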
There is of course the possibility that a feasible solution found through linear
programming is close to the optimum integer solution. But this is not always the case
since we could in fact have rather nasty polytopes for some problems. Consider those
pictured in Figure 5.

Figure 5 - Non Integer Optimum Solutions


The feasible solution spaces are the shaded areas and if we maximize z = x1 + 2x2 in
both of these cases, the optimum integer solutions (the dots) are nowhere near the
best feasible solutions found by linear programming. Thus we cannot even round the
best solution off to the closest integer and be assured of the correct answer. We need
other methods to solve these systems since as we mentioned above, requiring
answers to take on integer values is a nonlinear constraint.

This is not a complete disaster though. If we can express a problem in integer
programming terms and have some method of solving integer programs, then we
have achieved our goal. Let us concentrate upon the class to which the Knapsack
problem and the Closed Tour problem belong: the class of NP-complete problems.
Since 0-1 integer programming is NP-complete, we know that all of the problems in
NP can be expressed as integer programs. In particular, we know how to express the
satisfiability problem as an integer program. This means that expressing a problem in
propositional calculus leads to an integer programming representation of the
problem. In the proof where satisfiability is reduced to integer programming, clauses
were changed to equations almost immediately and the constants on the right hand
sides of the equations were set to one more than the number of negative literals in
the clause. For example:

Requiring all of the xi ≤ 1 completes the development of the linear program. In order to
solve the problem we execute phase one of the linear programming algorithm, and
immediately find a feasible solution. In the case above, we get a solution that contains
values of zero and one for all of the variables. This is exactly as we wished! But, we
are not always so fortunate. In Figure 6 is a very simple satisfiability problem with the
first basic feasible solution at <0.5, 0.5, 0.5>.
But sometimes one is fortunate, and often a linear programming solution to an
integer programming problem leads to a reasonable approximation to the original
problem. For this reason we shall explore techniques for defining problems in terms
of propositional calculus.
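The clause-to-inequality translation mentioned above can be sanity-checked exhaustively on a sample clause of our own. Writing each negative literal x̄ as (1 - x), the clause (x1 OR NOT x2 OR x3) becomes x1 + (1 - x2) + x3 ≥ 1, that is, x1 - x2 + x3 ≥ 0.

```python
from itertools import product

# over every 0-1 assignment, the clause and its inequality must agree
for x1, x2, x3 in product((0, 1), repeat=3):
    clause = bool(x1) or (not x2) or bool(x3)       # (x1 OR NOT x2 OR x3)
    constraint = (x1 - x2 + x3 >= 0)                # its 0-1 inequality form
    assert clause == constraint

print("clause and 0-1 constraint agree on all assignments")
```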


Figure 6 - Non Integer Satisfiability Polytope


Sometimes it helps to express a problem in integer programming terms if we can first
express it as a set of formulas in the propositional calculus. For example, consider
graph coloring. Recall that this problem requires one to color the vertices of a graph
so that adjacent vertices do not share a color. The graph of figure 2 can be colored in
this manner with four colors, but not with three.
To express this as an optimization problem we shall introduce the family of variables
xuk which represent node u being colored with color k. Then for each edge <u, v> in
the graph we state that both u and v cannot be the same color. This is stated as the
collection of clauses:

(¬xuk ∨ ¬xvk)   for each color k

where each merely states that either u or v must not be color k. Each of these
translates into the constraint:
xuk + xvk ≤ 1
for each edge <u, v> and color k. To make sure that each vertex is colored, we add the
clause:
(xu1 ∨ ... ∨ xun)
which states that u is indeed one of the colors. This is translated into the constraint:

xu1 + xu2 + ... + xun ≥ 1


which if we are not careful will allow vertex u to be colored with several colors. We
shall fix this later though.
Let's pause a moment. The constraints which state that adjacent nodes should not be
the same color can be pooled if there are cliques in the graph. Finding cliques is not
often cost effective, except for small ones (size 3 or 4). In the graph of figure 2 we
pool cliques and get the following constraints for each color k:
xak + xbk + xck ≤ 1
xak + xck + xek ≤ 1
xck + xek + xfk ≤ 1
xck + xdk + xfk ≤ 1
xbk + xck + xdk ≤ 1
Note that each equation merely states that color k is used for at most one of the
vertices in the clique.
The optimization portion of this problem requires some care since we wish to use the
minimum number of colors. If we weight each color, then we can favor the first colors
enough so that new colors are never introduced unless needed. We can think of this
in terms of cost. For example, charge $1 for each vertex that is color one, $n for each
color two vertex, $n^2 for each that is color three, and so forth. Thus minimum cost
means that we should use lower numbered colors on the vertices of the graph. The
objective function is the following:

minimize z = Σu Σk n^(k-1) xuk

This makes using an additional color very expensive.
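Brute force confirms the coloring claim made earlier for the graph of figure 2, whose edges (read off from the spanning-tree constraint in the previous section) are ab, ac, ae, bc, bd, cd, ce, cf, df, ef: four colors suffice but three do not.

```python
from itertools import product

nodes = "abcdef"
edges = [("a", "b"), ("a", "c"), ("a", "e"), ("b", "c"), ("b", "d"),
         ("c", "d"), ("c", "e"), ("c", "f"), ("d", "f"), ("e", "f")]

def colorable(k):
    """Does some assignment of k colors give every edge two distinct endpoints?"""
    for colors in product(range(k), repeat=len(nodes)):
        assign = dict(zip(nodes, colors))
        if all(assign[u] != assign[v] for u, v in edges):
            return True
    return False

print(colorable(3), colorable(4))   # False True
```

Three colors fail because vertex c is adjacent to every other vertex, and the remaining vertices a-b-d-f-e form an odd cycle needing three colors of their own.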


One of the important things we discovered in this section was that when polynomially
computable problems are expressed as linear programs, the solutions often emerge
as integers. But when NP-complete problems are expressed as linear programming
problems, basic feasible solutions often are not integers. Recalling a little matrix
algebra helps explain this phenomenon.


Our basic feasible solutions come from a set of m unit vectors which appear as
columns during execution of the simplex algorithm. If we denote the original m
linearly independent columns which make up this basis as the matrix B then we may
state that Bx = b. And, solving for the values of x we get:

x = B^(-1)b = (B^adj b) / det(B)

in terms of the adjoint of B (B^adj) and B's determinant. This ties in with the next two
definitions.
Definition. A square, integer matrix is unimodular if its determinant has
the value ±1.
Definition. An integer matrix is totally unimodular if all of its square,
nonsingular submatrices are unimodular.
Now at last things fall into place a little. Linear programming problems that can be
stated in totally unimodular form will indeed have integer feasible solutions since the
denominator of the above formula will be ±1. It conveniently turns out that path
problems, flow problems, and matching problems in networks and graphs have this
property. Therefore, they can be solved with linear programming, but NP-complete
problems cannot. We need some different methods for these.
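A small illustration of total unimodularity: the vertex-edge incidence matrix of a bipartite graph is a standard example of a totally unimodular matrix. The sketch below takes our own example, the complete bipartite graph on two plus two vertices, and checks that every square submatrix has determinant -1, 0, or 1 (equivalently, that every nonsingular square submatrix is unimodular).

```python
from itertools import combinations

# rows: vertices u1, u2, v1, v2; columns: edges u1v1, u1v2, u2v1, u2v2
M = [[1, 1, 0, 0],
     [0, 0, 1, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1]]

def det(m):
    """Integer determinant by cofactor expansion (fine for tiny matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(len(m)))

totally_unimodular = all(
    det([[M[i][j] for j in cols] for i in rows]) in (-1, 0, 1)
    for k in range(1, 5)
    for rows in combinations(range(4), k)
    for cols in combinations(range(4), k))

print(totally_unimodular)   # True
```

For matrices like this one, the formula above always divides by ±1, which is why the corresponding matching and flow problems have integer basic feasible solutions.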


Cutting Plane Techniques


We have found a large class of problems which can be stated conveniently as integer
programming problems. We have also discovered that a large subclass of these cannot
be solved by straight linear programming techniques. Thus we need to look further
for ways to solve integer programming problems.
Previously we mentioned the idea of solving our integer programs with linear
programming methods. Removing the constraint, which requires integer solutions, is
called the relaxation of the integer programming problem. Solving this relaxed
problem always brings an optimum solution, but as we have seen, rounding off this
solution often does not always provide the optimum integer solution. We do know
however that:
The optimum solution to the relaxation of an integer programming
problem is an upper bound for the optimum integer solution.
Consider the following example. Figure 1a shows a convex region of feasible solutions
defined by several constraints. The grid indicates where inside the polygon the
feasible integer solutions lie. The dot represents the optimal solution (for the linear
programming problem) gained from maximizing x1 + x2. Note that although it is not an
integer solution, it is an upper bound for the optimum one.

http://www.cs.uky.edu/~lewis/cs-heuristic/text/integer/cutting.html (1 of 8)12/2/2015 10:07:28 AM


Figure 1 - Cutting Plane Example


If we could shave off some of the area that contains noninteger solutions, we could
possibly find an optimal integer solution. Examine the vertical line through x1 = 3 in
figure 1a. Cutting the polygon along this line will not destroy any feasible integer
solutions to the problem. In figure 1b we have done this and have a new polygon.
The line we used to shave off part of the polygon is called a cut or a cutting plane
since it pares off some of the noninteger area we do not care about. And, to do the
carving, all we need do is to introduce this cut as an additional constraint. Note that
no feasible integer solutions were omitted by including this new constraining line in
our linear programming problem.
The dot in figure 1b again represents the optimum solution we get from solving the
relaxation. In figure 1c we have added yet another constraint and finally arrive at an
optimum integer solution.
This seems like a good idea. All we need do is solve the relaxation of the integer
programming problem and generate additional constraints until we get an integer
solution. During this process we would like to guarantee that as we add constraints:
a) No feasible integer solutions are excluded.
b) Each constraint reduces the feasible solution region.
c) Each constraint passes through an integer point.
d) An optimum solution is eventually found.
Let's go straight to an example. In figure 2a we have the linear programming problem
specified by:

maximize z = x1 + x2

subject to the constraints:

-5x1 + 4x2 ≤ 0
5x1 + 2x2 ≤ 15

where all xi ≥ 0.


Figure 2 - Cutting Plane Application


Solving the relaxation of the integer program gives us the optimum solution indicated
by the dot at the top of the triangle in figure 2a. The values for the variables in this
feasible solution are:

x1 = 2   x2 = 5/2
and the final tableau after solving for this solution is:
     x1   x2     y1      y2     bj
      0    0   -1/10   -3/10   -9/2    -z
      0    1    1/6     1/6     5/2    x2
      1    0   -1/15    2/15     2     x1

In the second row of the tableau there is a noninteger value for the variable x . In this
2

row we find the equation:

Let us leave the fractional portions of our variables on the left hand side of the
equation and move x to the right. If we separate the right side into integer and
2

fraction portions, we get the following.

Let us examine this equation. Suppose that all of the variables were set to their
optimum integer solutions. Since we do not allow negative solutions, the left hand
side of the equation cannot be negative. This means that the right hand side of
the equation cannot be negative either. Thus:

(2 - x2) + 1/2 ≥ 0

This in turn forces the quantity (x2 - 2) to be no more than 1/2. Since the value of x2
must be a nonnegative integer, x2 can only be zero, one, or two. This means that the
left side of the equation above will always have a value of at least 1/2. Putting this
all together we assert that if x2 is to have an integer value then the following holds.

(1/6)y1 + (1/6)y2 ≥ 1/2

This is a necessary (but not sufficient) condition for optimum integer values for the
variables. Adding this condition to our collection of constraints (along with its
surplus variable y3) at this point in the solution has the same effect as beginning with
the additional constraint x2 ≤ 2. This cuts off the area above 2 for x2 and gives us the
polygon in figure 2b and the following tableau for our linear programming problem.
-z = -9/2 + (1/10)y1 + (3/10)y2
x2 = 5/2 - (1/6)y1 - (1/6)y2
x1 = 2 + (1/15)y1 - (2/15)y2
(1/6)y1 + (1/6)y2 - y3 = 1/2

We are now one column shy of a basis and must remedy that immediately.
Examination of the tableau reveals that y3 cannot enter the basis, but both y1 and y2
might if so desired. We may select either. We choose to pivot on the bottom row and
place y1 into the basis. This results in the tableau:

-z = -21/5 + (1/5)y2 + (3/5)y3
x1 = 11/5 - (1/5)y2 + (2/5)y3
x2 = 2 - y3
y1 = 3 - y2 + 6y3

Again we have an optimum feasible solution. This one is indicated by the dot on the
picture in figure 2b and corresponds to:

x1 = 11/5, x2 = 2

As before, we select the row that provided a noninteger solution, this time involving
x1. This gives us the equation:

x1 + (1/5)y2 - (2/5)y3 = 11/5

We wish to do as before and end up with nonnegative fractions on the left hand side.
To do this, we just add y3 to both sides:

x1 + (1/5)y2 + (3/5)y3 = 11/5 + y3

Then we transform the equation into:

(1/5)y2 + (3/5)y3 = 11/5 - x1 + y3

by moving x1 to the right. Now we group the integer portions of the right hand side
together and get:

(1/5)y2 + (3/5)y3 = (2 - x1 + y3) + 1/5

Again we see that the left side cannot be negative. Thus the right side cannot be
negative either, and by employing similar arguments to those used above, we may
assert that the following holds.

(1/5)y2 + (3/5)y3 ≥ 1/5

Adding this new cutting plane restricts our solutions to values of x1 no greater than
two. So, we add the new cutting plane to the collection of constraints (along with its
surplus variable y4) and pivot. Again we need one more variable in the basis and this
time we choose y2. This leads to the final tableau:

-z = -4 + y4
x1 = 2 + y3 - y4
x2 = 2 - y3
y1 = 2 + 9y3 - 5y4
y2 = 1 - 3y3 + 5y4

with the optimum integer solution shown in figure 2c:

x1 = 2, x2 = 2, z = 4
A recapitulation is in order. First we relax the integer programming problem and
solve for the optimum solution with linear programming methods. If we achieve an
integer solution, then of course we are finished. If not, then there is a row of the
tableau such as:

x + a1y1 + ... + anyn = b

where b is not an integer and x is a basic variable. We then split b and all of the ai
into integer and nonnegative fractional parts. (The integer portion of b is written [b]
and its fractional part is bf, so that b = [b] + bf with 0 ≤ bf < 1; similarly ai = [ai] + fi.)
Now the equation is rearranged so that it looks like this:

f1y1 + ... + fnyn = bf + ([b] - x - [a1]y1 - ... - [an]yn)

We now consider the case where all of the variables are set to their optimum integer
values (which must of course be nonnegative), and deduce several things from the
above equation. The fractional portions fi of the ai are nonnegative by construction,
so we know that both sides of the equation are no less than zero. Thus

[b] - x - [a1]y1 - ... - [an]yn ≥ 0

since it has an integer value and is not less than -bf. This in turn makes

f1y1 + ... + fnyn ≥ bf


If we add the above fractional equation to the collection of constraints in the tableau,
it is the same as if we began with the previous integer equation as an initial condition.
This is the essence of the cutting plane method of solving integer linear programming
problems. It makes linear programming problems larger and larger as new
constraints are added.
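The bookkeeping in this recapitulation is mechanical enough to automate. Below is a minimal sketch of the cut extraction (our own, not from the text); it uses Python's exact fractions, which also sidesteps the finite-precision problem discussed next.

```python
from fractions import Fraction
from math import floor

def gomory_cut(coeffs, rhs):
    """From a tableau row  x + sum_i a_i*y_i = b  with noninteger b,
    return the fractional parts (f_1 ... f_n, b_f) that give the
    cutting plane  sum_i f_i*y_i >= b_f."""
    return [a - floor(a) for a in coeffs], rhs - floor(rhs)

# The x2 row of the worked example:  x2 + (1/6)y1 + (1/6)y2 = 5/2
coeffs, f = gomory_cut([Fraction(1, 6), Fraction(1, 6)], Fraction(5, 2))
print(coeffs, f)   # the cut (1/6)y1 + (1/6)y2 >= 1/2, i.e. x2 <= 2
```

Keeping every coefficient as a Fraction means a value such as 5.99999999 can never masquerade as an integer.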
We merely iterate this process and hope for integer solutions to appear quickly. But
there are several problems. First, the tableaux can become very large indeed. Often,
though, this is avoided by dropping slack variables introduced with cutting planes
whenever they enter the basis.
A second problem arises because we are using computers to solve our linear
programming equations, and computers have finite precision. Thus, noninteger
solutions (such as 5.99999999) might be difficult to detect. Employing algorithms in
which coefficients remain integers solves this; for example, one can store the
numerator and denominator of each fraction. But this adds to the execution time.


Upper bounds on integer programs.

Upper Bounds on Integer Programs


Probably the most important fact concerning linear programming techniques and their
relationship to integer programming problems is:
The relaxed (or linear programming) solution to an integer
programming problem is an upper bound for all feasible integer
solutions.
With this in mind, examine the optimization problem pictured in the shaded area of
figure 1a.

Figure 1 - Bounding Example


If the objective function for this problem is z = x1 + x2, the optimum solution found by
linear programming is the pair:

<x1, x2> = <6.5, 7.5>
and due to the above observation, we know that no integer solution can produce a
higher value for the objective function than this solution.

http://www.cs.uky.edu/~lewis/cs-heuristic/text/integer/bounds.html (1 of 4)12/2/2015 10:07:31 AM


Now suppose we were to break the problem into two subproblems, one where x1 is
restricted to values no less than 7 and one where x1 is restricted to values no greater
than 6. This is easily done by adding constraints (namely: x1 ≥ 7 and x1 ≤ 6,
respectively) to the original collection. These subproblems are pictured as the shaded
areas of figure 1b.
Note particularly that no feasible integer solutions have been omitted as they are all in
one of the two shaded areas. Only the vertical strip of noninteger space between 6 and
7 has been removed. Relaxing these two new problems and solving for their optima
provides the solutions

<6, 6.5> and <7, 6.5>

to the two subproblems. This is closer to the kind of solution we want, but it is still
not an integer solution.
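The split just performed on x1 can be stated as a small procedure: find a fractional component of the relaxed solution and emit the two bounding constraints. A Python sketch (the tuple encoding of a constraint is our own invention):

```python
import math

def branch_on_fractional(solution):
    """Return the pair of constraints that split a subproblem on the first
    fractional component of a relaxed solution, or None if the solution is
    already all-integer.  A constraint is (variable index, relation, bound)."""
    for i, v in enumerate(solution):
        if v != int(v):
            return (i, "<=", math.floor(v)), (i, ">=", math.ceil(v))
    return None

# Splitting on x1 of the relaxed solution <6.5, 7.5> from figure 1a:
print(branch_on_fractional([6.5, 7.5]))   # ((0, '<=', 6), (0, '>=', 7))
```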
Subdividing these two problems results in three new problems that are shown as the
shaded areas of Figure 1c. This time we removed horizontal strips from the solution
space.
Continuing on, we divide the remaining problems as indicated by the tree in Figure 2. It
shows all of the subproblems and the optimum solutions for each.

Figure 2 - Solution Search Tree


One more subdivision took place after that shown in Figure 1c. The shaded region to
the right was split into two subproblems. By restricting x1 to be no greater than 7, we
get as a feasible solution space a line with an x1 value of 7 and x2 ranging from 2 to 6.
By restricting x1 to be no less than 8 we end up with an area containing the single point
<8, 4.75>.
At this stage in the process three integer solutions have been found and one mixed
solution still remains. The integer solutions all set the objective function to 13 and this
is better than the remaining mixed solution <8, 4.75>. Thus any of the three is the
optimum and we need not pursue the path in the search tree below the mixed solution.
In figure 3 the algorithm for this method is provided.

Figure 3 - A Bounding Algorithm for Integer Programming


A final note on this method is in order. This algorithm seems to have a slight
advantage over the cutting plane method because the problem involves a smaller
solution space at each stage. And, if we are fairly clever, some of the constraints in the
original problem can be removed if the new bounding constraint supersedes them. For
example, in figure 1a we solved a problem with four constraints, while in figure 1b
there were two problems, each with three constraints. And since we are always
splitting the problem at a vertex of its feasible solution space, at least one constraint
can disappear from the new problem at each stage.


Enumeration Techniques

Solving integer programming problems by dividing them into subproblems and using
linear programming methods until integer solutions are found points to a general
method for the exact solution of optimization problems. This method primarily
involves setting up a tree structure in which to consider the entire feasible solution
space for a problem in an organized manner.
First we shall merely enumerate the solution space and then refine our methods. To
save computation time and effort, we cut off the search when we know it cannot
succeed, by computing upper and lower bounds on the possible solutions. This leads
to a general method for solving optimization problems named branch and bound.
The sections are entitled:
Enumerating 0-1 Integer Programs
Intelligent Solution Space Enumeration
General Branch and Bound Algorithms
Historical Notes and References
Problems

http://www.cs.uky.edu/~lewis/cs-heuristic/text/enumerat/enumintr.html12/2/2015 10:07:32 AM

Enumerating 0-1 integer programs.

Enumerating 0-1 Integer Programs


We know that any problem which is in NP can be stated as a 0-1 integer programming
problem since 0-1 integer programming is NP-complete. In addition, we know how to
easily map the satisfiability problem into the 0-1 integer programming problem.
Therefore, we may turn any problem in NP into an integer program with the mapping:

This is often not too difficult to implement. All we do is state the problem in the
language of predicate calculus and then either attempt to satisfy the clauses we
developed or map it into 0-1 integer programming and then solve that problem.
This suggests an intriguing method of finding solutions to problems. All we need do is
to compute the objective function for all combinations of zero and one for all variables
and record the best feasible solution. Since each variable is restricted to values of zero or
one (or, in the case of predicate calculus clauses, true and false), the number of solutions
seems not as large as when we allow arbitrary integer values. This however is misleading
since in mapping arbitrary integer programming problems into 0-1 integer programming
problems there can be a significant variable explosion.
Let us examine this. If we think of 1 as true and 0 as false, then enumerating all
candidates for a feasible integer programming solution is exactly the same as
enumerating all subsets of the set of variables. In this manner we interpret a subset of
the variables such as {x2, x4} as representing the solution where x2 = x4 = 1 and all other
variables have values of zero. Thus any enumeration of subsets of a set of variables
provides all possible candidates for feasible solutions. We even know exactly how many
cases make up the enumeration. It is the same as the size of a truth table for an
n-variable formula, namely 2^n.
A simple example of an enumeration of the subsets of four variables is pictured as a
graph in figure 1. The subsets are ordered by set inclusion with the empty set at the top
and the set of all variables: {x1 , x2 , x3 , x4} at the bottom.

http://www.cs.uky.edu/~lewis/cs-heuristic/text/enumerat/enum01.html (1 of 8)12/2/2015 10:07:38 AM


Figure 1 - Combinations of four Variables


One of the first enumeration methods which comes to mind is a depth-first search of the
graph in figure 1. To do this, we examine all combinations of variables where x1 is set to
true or one, then all combinations where x2 is set to true or one, but x1 is false or zero,
and so forth. The tree in figure 2 is the corresponding depth-first search tree.


Figure 2 - Depth First Search Tree for four Variables


If we look at combinations of variables as the quadruples <x1, x2, x3, x4>, then depth-first
search takes us through the sequence:
0000,
1000, 1100, 1110, 1111, 1101, 1010, 1011, 1001,
0100, 0110, 0111, 0101,
0010, 0011,
0001
Note that we first check the case where all variables are zero, then fix x1 at one and do a
depth-first search of its subtree. Next we fix x1 at zero and x2 at one and search that
subtree. This continues until the entire tree has been visited. The recursive procedure
presented in figure 3 does exactly this when called with i set to 1 using z(x1, ... , xn) as the
objective function for the problem we are optimizing.

Figure 3 - Finding the Optimum by Depth-First Enumeration
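Figure 3 is not reproduced in this copy, so here is a sketch of such a recursive depth-first enumeration in Python; it visits the quadruples in exactly the order listed above. The function and parameter names are ours, and the index is 0-based rather than starting at 1.

```python
def depth_first_opt(x, i, z):
    """Visit every 0-1 setting of x[i..n-1]: the current setting first,
    then each later variable fixed to 1 in turn.  Returns the best
    objective value z(x) found."""
    best = z(x)
    for j in range(i, len(x)):
        x[j] = 1
        best = max(best, depth_first_opt(x, j + 1, z))
        x[j] = 0   # restore before trying the next branch
    return best

# Four variables with a hypothetical objective z = 2*x1 + 4*x2 + x3:
best = depth_first_opt([0, 0, 0, 0], 0, lambda x: 2*x[0] + 4*x[1] + x[2])
print(best)   # 7
```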


Even though a depth-first search such as that described in figure 3 is essentially the
technique we shall use to find optimum solutions, we should examine the other obvious
graph search technique, namely breadth-first search. In this method, we visit the nodes
of the search tree of figure 2 in the following order:


0000,
1000, 0100, 0010, 0001,
1100, 1010, 1001, 0110, 0101, 0011,
1110, 1101, 1011, 0111,
1111
A quick inspection reveals that this is just examining combinations of no variables, one
variable, two variables, three variables, and four variables. Further examination shows us
the way to do this recursively: all combinations of k variables can be found by setting xi to
one for i from one to n-k+1, fixing x1, ... , xi, and looking at all combinations of k-1 variables
from the sequence xi+1, ... , xn.
The algorithm of figure 4 provides this sequence when called from a loop which
sets k to zero through n.

Figure 4 - Finding the Optimum by Breadth-First Enumeration
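Figure 4 is likewise not reproduced here; an equivalent breadth-first enumeration can be sketched with the standard library's combinations in place of the recursion just described (the names are ours):

```python
from itertools import combinations

def breadth_first_opt(n, z):
    """Examine combinations of no variables, one variable, two variables,
    and so on, returning the best objective value found."""
    best = None
    for k in range(n + 1):
        for ones in combinations(range(n), k):
            x = [1 if i in ones else 0 for i in range(n)]
            best = z(x) if best is None else max(best, z(x))
    return best

# Same hypothetical objective as before:
print(breadth_first_opt(4, lambda x: 2*x[0] + 4*x[1] + x[2]))   # 7
```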


This seems to be reasonable, but maybe if we are clever, we might restrict our
examination to a portion of the depth-first search tree. At each vertex we could decide
whether or not to descend further. For example, if setting a variable to 1 will not lead to:


a) a feasible solution, or
b) a better objective function value,
then we should not continue on down that portion of the search tree any further.
Another decision which might reduce enumeration time is to carefully select which
branch of the graph (in figure 1) to pursue so that we go directly to the subtree that has
the greatest chance of containing an optimum solution.
Let us examine this with a very simple problem. Consider the problem shown in figure 5.
This is one which was mapped from satisfiability to 0-1 integer programming.

maximize z = 2x1 + 4x2 + x3

subject to the conditions:

(x1 ∨ x2)            x1 + x2 ≥ 1
(x2 ∨ x3 ∨ x4)       x2 + x3 + x4 ≥ 1
(x1 ∨ x4)            x1 + x4 ≥ 1

where all xi ∈ {0, 1}

Figure 5 - Clauses and a 0-1 Integer Programming Problem
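The bookkeeping in the walkthrough that follows can be checked mechanically. A small Python sketch of the figure 5 problem, with each clause encoded as a tuple of 0-based variable indices (an encoding of our own):

```python
CLAUSES = [(0, 1), (1, 2, 3), (0, 3)]   # x1+x2, x2+x3+x4, x1+x4

def satisfied(x):
    """How many of the >= 1 clause constraints hold for the 0-1 vector x."""
    return sum(1 for c in CLAUSES if any(x[i] for i in c))

def z(x):
    """The objective function of figure 5 (x4 has coefficient zero)."""
    return 2*x[0] + 4*x[1] + x[2]

# Setting x2 alone satisfies two clauses and gives the largest objective:
print(satisfied([0, 1, 0, 0]), z([0, 1, 0, 0]))   # 2 4
# Adding x1 makes the solution feasible with objective value 6:
print(satisfied([1, 1, 0, 0]), z([1, 1, 0, 0]))   # 3 6
```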


We first check out the solution where all xi are set to zero and find that not a single
equation (or clause) is satisfied and the objective function is zero. If we set each variable
(individually) to 1 then we observe the following for the problem.

Action:     Equations Satisfied:     Objective Function:
x1 = 1      first and third          2
x2 = 1      first and second         4
x3 = 1      second                   1
x4 = 1      second and third         0

Obviously setting x1, x2, or x4 to 1 will improve the values of the constraints the most.
Setting x2 to 1, however, improves the objective function the most. Based upon this, let us
rearrange our depth-first search tree as indicated in figure 6.


Figure 6 - Modified Search Tree


At this point we shall fix the value of x2 at 1 and proceed. Note that only the third
constraint (x1 + x4 ≥ 1) is violated now. Thus setting any of the other variables to 1
might help. Evaluating these actions provides:

Action:     Equations Satisfied:     Objective Function:
x1 = 1      all three                6
x3 = 1      first and second         5
x4 = 1      all three                4

Again, we have a tie when we consider satisfying the constraints: setting x1 or x4 to one
both provide feasible solutions. Taking x1 and x2 as one provides us with a feasible
solution with the best objective function value, namely six. We now rearrange the search
tree once more to reflect the priority of setting variables to one and get the tree of figure
7.

Figure 7 - The New Search Tree


Setting additional variables to 1 will not satisfy any more equations, but may bring a
better value for the objective function.

Action:     Equations Satisfied:     Objective Function:
x3 = 1      all three                7
x4 = 1      all three                6

Setting x1, x2, and x3 as one gives us the most improvement in the objective function, so
we shall do that. Continuing in this manner for the entire enumeration provides the
search tree depicted in figure 8.

Figure 8 - The Rearranged Depth-First Search Tree


In the search tree, feasible solutions occur at the shaded nodes and the value of the
objective function is provided for each combination of variables. The search tree was
rearranged so that feasible solutions (or combinations of variables closest to feasible
solutions) were examined first and these in turn, were ordered by the value of the
objective function at each level.
Note that searching could be terminated when the objective function reached seven for
the <x1, x2, x3> combination since that is the maximum value that can be achieved. In
general however things are not this simple, but one should always watch for this to
happen.
Developing the algorithm is not very difficult. As before, we use depth-first search on the
graph of Figure 1, but this time we are smarter about selecting the branches to go down.
Thus we must keep track of the variables which we set as we traverse the search tree.
One way to do this is to order the variables at each step as was done above.


It should be noted that ordering the variables requires some computation time. Before
implementing a clever search algorithm, one should consider this and compare the
added computation to that of much simpler search techniques.


Intelligent Solution Space Enumeration.

Intelligent Solution Space Enumeration


As we saw in our examination of linear integer programming, the solution space for
an optimization problem can be successively partitioned until each of the portions
is bounded by optimal integer solutions. It is even possible to discontinue some of
the search paths when it is known that a solution no better than one already found is
forthcoming. This suggests a rather famous, yet quite similar method for solving
NP-complete problems.
We wish to examine methods of solution space enumeration not based upon
geometry, but upon the actual integer solutions themselves. Consider the chromatic
number problem for the graph pictured below in figure 1.

Figure 1 - A Graph
It is rather obvious that it can be colored with three colors if nodes b and c are the
same color. Let us examine all possible combinations of three colors that can be
applied to the nodes of the graph. Since each node can be any of the three colors,
there are exactly 3^4 = 81 colorings, but we can easily reduce this to 3^3 = 27 if we specify
that node a is to be a particular color. Let us set our colors as blue, yellow, and white.
We shall color node a blue. In figure 2 all of the coloring combinations are specified
for three colors and the remaining graph vertices.
A cursory examination reveals that only a few of these are feasible solutions. Those in
the tree on the left (with node b set to blue) are not feasible since node a was set to
that color, node b is adjacent to it, and both cannot be the same color.

http://www.cs.uky.edu/~lewis/cs-heuristic/text/enumerat/intsrch.html (1 of 12)12/2/2015 10:07:44 AM


Figure 2 - Coloring Combinations


In the other trees, we need not consider any portion of the subtree with node c set to
blue since node c is also adjacent to node a. The same is true when coloring node d.
Continuing on and deleting infeasible solutions from the search space, we can prune
the search tree until we arrive at the search tree of figure 3. Note that there are
exactly two feasible solutions when node a is originally colored blue.

Figure 3 - Intelligent Feasible Solution Tree


Looking even closer at our search tree, we see that if we were to search the tree
in a depth-first manner for a three-color solution we would only traverse the leftmost
branches of the tree in figure 3. Describing the nodes of the tree as quadruples of
colors (blue, yellow, and white), we would examine the sequence of partial solutions:


<b, ?, ?, ?>, <b, y, ?, ?>, <b, y, y, ?>, <b, y, y, w>


before finding a feasible solution. Thus by intelligent pruning of the feasible solution
tree we may find a solution without looking at the entire set of solutions.
Slightly different reasoning could also be applied to the problem. At each level of the
tree we might note that:

a node must not be the same color as any adjacent node that is already
colored, and
a node need only be colored with one of the colors already assigned or the
next color not assigned yet.

Looking back at the original search tree, we now know we need not examine
combinations where node b is blue. This also cuts down the size of the feasible
solution space that we must examine.
Several things should be noted at this point. We could have solved the optimization
problem for the chromatic number problem in graphs by making sure that all
portions of the search tree that contained one and two color solutions were
considered. In addition, we made our decisions of what must be examined based
upon partial solutions. This is often the case and we need not enumerate all of the
full solutions in order to rule out many cases of optimization problems.
This is an example of a very famous algorithmic technique named branch and bound.
Branching takes place since the solution space is modeled as a tree and a search of
the tree is performed. Bounding takes place when a subtree is eliminated because it is
infeasible or none of the solutions in the subtree are better than the best found so
far.
Here, in figure 4, is the branch and bound algorithm for the chromatic number
problem that we have just developed.


Figure 4 - Chromatic Number Solution Space Search
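Since figure 4 is not reproduced in this copy, here is a Python sketch of the pruned search just described. The graph of figure 1 is not shown either, so the example graph below is a hypothetical one; all names are ours.

```python
def color_graph(n, edges, k):
    """Depth-first search for a k-coloring.  Pruning follows the text: a
    node never takes the color of an already-colored neighbor, and it only
    tries colors used so far plus one fresh color (so node 0 is effectively
    fixed to color 0, as node a was fixed to blue above)."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    color = [None] * n

    def extend(v, used):
        if v == n:
            return True
        for c in range(min(used + 1, k)):      # used colors plus one new
            if all(color[u] != c for u in adj[v]):
                color[v] = c
                if extend(v + 1, max(used, c + 1)):
                    return True
                color[v] = None                # backtrack
        return False

    return color[:] if extend(0, 0) else None

# A hypothetical example: a 4-cycle with one chord needs three colors.
print(color_graph(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)], 3))
```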


Let us now examine the knapsack problem, in particular, the 0-1 knapsack problem.
For our example we have four items weighing:
31, 26, 15, and 7 pounds,
and wish to fill a knapsack that can contain 49 pounds. First, we note that this
problem is very similar to those depicted in the material on enumeration of solution
spaces for integer programming. A depth-first search was used successfully there, so
we shall use one here as well.
Our strategy is somewhat greedy. We first examine all of the feasible solutions
containing the 31 pound item, then those containing the 26 pound item but not the
31 pound one, and so forth. Figure 5 contains such a depth-first search tree for this
0-1 knapsack problem. Note that infeasible solutions are blue and actual solutions are
yellow.


Figure 5 - Depth-First Search Tree for 0-1 Knapsack Problem


This search tree also bears some similarity to the coloring problem in that we do not
continue down a path when there cannot be a feasible solution below a node. On the
leftmost path through the search tree of figure 5 the capacity was exceeded, so no
additional combinations containing the 31 and 26 pound items were examined.
An important fact about the search tree for the knapsack problem emerges at this
point.
The knapsack weight at any node is a lower bound for the knapsack
weights in that node's subtree.
Thus, if the knapsack capacity has been exceeded at some node of the search tree, the
subtree below that node need not be searched. Our knapsack limit is 49 pounds, so
we cease our search at the blue node labeled 31+26 as this sums to 57 pounds.
The search tree provides even more information than the lower bounds that reveal
when the knapsack overflows or is about to overflow. We can also easily compute
upper bounds on the knapsack weight in the subtree for each node based upon the
sum of the possible weights that could be added to the load. Consider the tree in
figure 6 that now has labels on some of its nodes.


Figure 6 - Search Tree with Upper Bounds


The numbers to the left of some of the nodes provide upper bounds on the knapsack
loads in their subtrees. Thus there is a 79 at the root since that is the sum of all the
weights and that, of course, is the largest load that one might attempt to place in the
knapsack. At the node containing the 31 pound weight there is a 79 upper bound for
the same reason. Examining the load below (with the 31 and 15 pound weights) we
find that only the 7 pound weight can be added and so the maximum load for this
subtree is 53. At the node containing the 26 pound weight, the 15 and 7 pound
weights could be added to the knapsack, so the upper bound for this subtree is 48.
When we reach the node containing the weight of 15 pounds we find that the
maximum knapsack load in its subtree is 22 pounds and realize that we need not
search that subtree since we have already encountered a better solution, namely 26
+15+7 = 48 pounds.
Thus we estimated the best possible solution for each subtree by adding up all of the
weights that could be added to the knapsack, and if they did not provide a weight
greater than that encountered so far in the search, the subtree was not visited.
We now have two rules for not visiting subtrees based upon the bounds that were
computed at each node.

If the lower bound is greater than the knapsack capacity, then no feasible
solution exists in the subtree.
If the upper bound is less than the best solution found thus far, then an
optimum solution is not in the subtree.

By traversing the solution space tree, we are BRANCHING to new solutions, and we
compute a BOUND at each node that helps limit our search. For this reason, these
algorithms have been called branch and bound algorithms.
Let us develop a branch and bound algorithm for the knapsack problem. Let the set W
= {w1, ... , wn} be the weights of objects to be placed in the knapsack and let the set of
variables X = {x1, ... , xn} indicate which objects are to be placed in the knapsack. (That
is, if xi = 1, then the i-th object is in the knapsack.) This algorithm is described in
figure 7.

Figure 7 - Depth-First Branch and Bound Knapsack Algorithm


Let us represent the knapsack content as the vector <x1, ... , x4> and perform a quick
partial trace of the algorithm. We begin with nothing in the knapsack and set best = 0.
Then we proceed to <1,0,0,0> and declare 31 to be the best so far. Continuing to
<1,1,0,0> we find that we have exceeded the knapsack limit and backtrack. This
brings us to <1,0,1,0> and 46 becomes our new best effort. After another overflow at
the leaf <1,0,1,1>, we backtrack and find that the upper bound for <1,0,0,1> will not
be better than 46, so we immediately backtrack all the way to the root. Next we set x1
= 0 and try the subtree rooted at <0,1,0,0>. This brings us a best load of 48.
Insufficient upper bounds prevent examining any more nodes of the subtree.
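The Pack procedure of figure 7 is not shown in this copy. A Python sketch that applies the same two bounding rules reproduces the behavior of the trace above (all names are ours):

```python
def pack(weights, capacity):
    """Depth-first branch and bound for the 0-1 knapsack (weights only).
    A subtree is skipped when its load exceeds the capacity (lower bound)
    or when even adding every remaining weight cannot beat the best load
    found so far (upper bound)."""
    n = len(weights)
    # suffix[k] = total weight of items k..n-1, the most a subtree can add
    suffix = [0] * (n + 1)
    for k in range(n - 1, -1, -1):
        suffix[k] = suffix[k + 1] + weights[k]
    best = 0

    def search(k, load):
        nonlocal best
        if load > capacity:           # lower bound: infeasible subtree
            return
        best = max(best, load)
        if load + suffix[k] <= best:  # upper bound: cannot improve
            return
        for j in range(k, n):
            search(j + 1, load + weights[j])

    search(0, 0)
    return best

# The example: weights 31, 26, 15, 7 and a 49 pound knapsack.
print(pack([31, 26, 15, 7], 49))   # 48
```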
If we omit the subtrees that the Pack procedure does not visit from our previous
search trees, we find that the algorithm traverses the search tree shown in figure 8.
As before, the nodes where the capacity was exceeded are darker and the subtree
upper bounds (the sum + uk values from the algorithm) have been placed to the left of
each node which is not a leaf.

Figure 8 - Search Tree for 0-1 Knapsack Algorithm


Let us turn our attention to another NP-complete problem, that of finding a
minimum rectilinear Steiner spanning tree. Here is the formal definition of the
problem.
Minimum Rectilinear Steiner Spanning Tree. Given a set of points in the
plane, find a set of vertical and horizontal lines of minimal length that
span the points.
Our example input data for this problem is the set of points on a unit grid pictured in
Figure 9a. Figure 9b is an ordinary minimum spanning tree while figure 9c shows the
special spanning tree made up of vertical and horizontal lines that is called a
rectilinear Steiner spanning tree. If we use a rectilinear metric to measure the edges in
the minimum spanning tree, its length is 13 while the Steiner version measures 12.


Figure 9 - Points, an MST, and Steiner Tree


Before finding the optimum rectilinear Steiner spanning tree (or Steiner tree in the
sequel) for this problem, we mention a fact about minimum Steiner trees that will
help us with our search.
There is a minimum Steiner tree containing only L-shaped edges
between points that can be drawn so that exactly one line passes
through each point.
The Steiner tree in figure 9c was constructed from the set of L-shaped edges {ad, bd,
cd} by drawing one line through each point. Note that if we were to draw the same
tree with a horizontal line passing through point a and one line through each point,
the resulting tree would have measured 19. Note also that the tree with the set of
edges {ac, bc, cd} is the same size as that of figure 9c.
Again we shall do a depth-first search, adding edges to the tree as we go in a manner
reminiscent of Prim's algorithm for minimum spanning trees. To aid our endeavors,
we order the edge list by length in hopes that a small tree will surface quickly. If the
grid in figure 9 is made up of unit squares, note that the edges are of rectilinear
length:
Edge:    bc   cd   bd   ac   ab   ad
Length:   …    …    …    …    …   10

We initially examine the smallest edge (bc), and place its most central vertex (c) in the
tree. At this point we do a depth-first search on the edges from c: bc, cd, and ac. Note
that we have ordered them by length with the smallest first. The resulting search tree
appears as figure 10.


Figure 10 - Depth-First Search Tree for Steiner Tree Problem


At the node on the left labeled bc the tree contains the set of vertices {b, c} and the
edge bc. Below this we add the edges leading from b and c to the remaining nodes a
and d, and we again visit them in order by length. At the node on the left labeled cd,
the tree contains the vertices {b, c, d} and the edges {bc, cd}.
At each vertex of the search tree, we check to see if the spanning tree that is being
built is larger than the best found so far. As an initial upper bound on the spanning
tree size we use the size of the minimal spanning tree over the points.
Observing the grid that is induced by the points reveals that any rectilinear spanning
tree must be at least as big as the grid width plus the grid length. In addition, a
theorem by Hwang states that the smallest rectilinear Steiner spanning tree is no
smaller than two-thirds the size of the minimal spanning tree over the points. Thus
we may use the larger of these as a lower bound on the size of the best solution and
cut off the search if a tree of that size is found.
The algorithms for this process are presented in figures 11 and 12.


Figure 11 - Steiner Tree Main Program


Figure 12 - Depth-First Search Algorithm for Steiner Trees


In our example from figure 9, half of the perimeter is 11 while the minimum spanning
tree measures 13. Two-thirds of 13 is 8.67, which rounds up to 9. So, we begin with a
best tree size of 13 and can cut off our search as soon as we find a tree of size 11,
since no rectilinear spanning tree can be smaller.
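The cutoff computation itself is small enough to sketch directly; the numbers are the ones quoted for figure 9:

```python
# Combining the two lower bounds discussed above: half the grid
# perimeter, and Hwang's two-thirds-of-the-MST bound. The search can
# stop as soon as a tree meeting the larger bound is found.
from math import ceil

mst_length = 13      # rectilinear minimum spanning tree of figure 9
half_perimeter = 11  # grid width plus grid length
cutoff = max(half_perimeter, ceil(2 * mst_length / 3))  # = max(11, 9) = 11
```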


The General Branch and Bound Method


Our general technique for branch and bound algorithms involves modeling the
solution space as a tree and then traversing the tree exploring the most promising
subtrees first. This is continued until either there are no subtrees into which to
further break the problem, or we have arrived at a point where, if we continue, only
inferior solutions will be found. A general algorithm for branch and bound searching
is presented in figure 1.

Figure 1 - General Branch and Bound Searching


Let's examine this technique more closely and find out what is needed to solve
problems with the branch and bound method using the chromatic number and
knapsack algorithms from the 'Intelligent Search' section of this chapter.
We need first to define the objects that make up the original problem and possible
solutions to it.

Problem instances. For the knapsack problem this would consist of two lists,
one for the weights of the items and one for their values. Also we need an
integer for the knapsack capacity. For chromatic numbers (or graph coloring),


this is just a graph that could be presented as an adjacency matrix or, better yet, an adjacency list.

Solution tree. This must be an ordered edition of the solution search space,
possibly containing partial and infeasible solution candidates as well as all
feasible solutions as vertices. For knapsack we built a depth-first search tree for
the associated integer programming problem with the objects ordered by
weight. In the chromatic number solution tree we presented partial graph
colorings with the first k nodes colored at level k. These were ordered so that if a
graph node had a particular color at a search tree vertex, then it kept that color
throughout the subtree below.
Solution candidates. For knapsack, a list of the items placed in the knapsack will
suffice. Chromatic numbering involves a list of the colors for each vertex in the
graph. But, it is a little more complex since we use partial solutions in our
search, so we must indicate vertices yet to be colored in the list.

An essential rule to be followed in defining solution spaces for branch and bound
algorithms is the following.
If a solution tree vertex is not part of a feasible solution, then the
subtree for which it is the root cannot contain any feasible solutions.
This rule guarantees that if we cut off search at a vertex due to infeasibility, then we
have not ignored any optimum solutions.
Now, we present the definitions for bounds used in the above algorithm.
Lower bound at a vertex. The smallest value of the objective function for
any node of the subtree rooted at the vertex.
Upper bound at a vertex. The largest value of the objective function for
any node of the subtree rooted at the vertex.
For chromatic number we used the number of colors for the lower bound of a partial
or complete solution. The lower bound for knapsack vertices was the current load,
while the upper bound was the possible weight of the knapsack in the subtree.
Next we must have the following methods (or algorithms) which operate upon and
help us to analyze the objects.

Feasible solution checker. For knapsack, we merely insure that the sum of the


weights of the items in the knapsack is no more than its capacity. Chromatic
numbering involves checking to see if any two adjacent vertices are the same
color.

Objective function. For knapsack, sum the values of the items in the knapsack.
For chromatic numbers, count the colors.
Lower bound function. For knapsack and chromatic number, this is just the
objective function.
Upper bound function. For knapsack this was the lower bound plus the sum of
the weights that could be added. Chromatic numbers did not have a useful
upper bound function since a minimum was optimal.

At this point complexity should be mentioned. Computing these for the knapsack
problem is easy because they all involve summing the weights. A good strategy is to
record the knapsack loads as each vertex in the search tree is visited so that the
objective and upper bound functions require one addition and the feasibility check
utilizes one comparison.
Chromatic numbering involves more work when solution candidates are checked for
feasibility. In the worst case, all of the graph edges must be examined, and this
possibly requires O(n²) steps. One way to reduce this a little is to use partial solutions
where the children of a vertex have one more node colored than their parent.
Let us now turn our attention to two interrelated topics: solution space design and
searching the space. Creative designers build a space that can be searched without
too much complexity - either in the bounding computations or in the space required
to hold the solution candidates under consideration and those about to be
considered. Some helpful techniques are the following.

Design a solution space that is a subset of the entire solution space but still
includes an optimum solution.
Use a depth-first strategy so that only a small portion of the search tree needs
to be stored at any stage.
Make the feasibility checks and bound computations cumulative so that time is
minimized.
Order the children of each vertex so that the most promising solutions are
examined first.


Use a good approximation to the optimum solution as the initial best solution.

Our last note involves correctness. Two things must be shown: first, that an optimum
solution exists in the solution space tree; and second, that the branch and bound
search algorithm finds it.
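As a concrete illustration of these pieces working together, here is a sketch of depth-first branch and bound for the 0-1 knapsack problem. Everything here is invented for the example: the function name, the item data, and the bound, which is the simple "current value plus all remaining values" estimate, kept cumulative as recommended above.

```python
# Depth-first branch and bound for 0-1 knapsack (illustrative sketch).
# "rest" carries the cumulative sum of values not yet decided, so the
# bound check is a single addition and comparison at each vertex.
def knapsack_bb(weights, values, capacity):
    n = len(weights)
    best = 0                   # value of the best feasible solution so far

    def search(i, load, value, rest):
        nonlocal best
        best = max(best, value)            # objective function check
        if i == n or value + rest <= best:
            return                         # leaf, or bound says: prune
        if load + weights[i] <= capacity:  # feasibility check for item i
            search(i + 1, load + weights[i], value + values[i],
                   rest - values[i])       # branch: take item i
        search(i + 1, load, value, rest - values[i])  # branch: skip item i

    search(0, 0, 0, sum(values))
    return best
```

Seeding `best` with a good heuristic solution, as suggested above, only strengthens the pruning.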


Dynamic Programming

Another exact technique which enumerates the feasible solutions for an optimization
problem is named dynamic programming. Like branch and bound, all feasible
solutions are considered, but in a very different manner. Instead of forming
permutations of the elements found in solutions, we concentrate on combinations.
Also, we shall work backwards from solutions instead of forward as in other
enumeration techniques. Thus dynamic programming is a deductive rather than an
inductive process.
The sections are entitled:
A Shortest Path Problem
Characteristics and Approaches
More Examples
Related Top-Down Techniques
Historical Notes and References
Problems



Shortest Path Problems.


Everyone's favorite way to explain dynamic programming seems to be by example.
One of the favorite examples is a simple shortest path problem. We shall be no
different.
Consider the directed, weighted graph of Figure 1. In order to find the shortest path
from node s to node t, we could use enumeration.

Figure 1 - A Network Path Problem


This would involve examining the following six sequences of vertices.
s→a→c→f→t        s→b→e→g→t
s→a→d→f→t        s→b→d→g→t
s→a→d→g→t        s→b→d→f→t

After some computation we would discover that the shortest path was the one that
went through a, c, and f. But we did have to consider all of the possible paths and did
not find the answer until the very end of our search for the shortest path. Also, we
had to do 24 additions in order to find the lengths of each path.
Other methods that involved building paths from s to t were developed by Dijkstra,
Lee, and Moore. All three of these were quite similar and in essence involved

successively examining the remaining vertex that is closest to s. Thus it is noted that
s→a costs 1, s→a→d costs 2, s→a→c costs 4, and so forth until the shortest path
to t is found. This computation involves 12 additions to sum all of the path lengths
plus the overhead needed to determine at each stage which remaining node is closest
to s. The method does however have the attractive feature of determining shortest
paths from s to all of the nodes and it is far better than enumerating all of the
possible paths.
The graph in figure 1 is called a layered network because it has five distinct zones of
vertices:
{s}, {a, b}, {c, d, e}, {f, g}, and {t},
and if we wish to find the shortest path from node s to node t we must pass through
one vertex from each zone. But, rather than go from s to t as before, we shall branch
backwards from t going from zone to zone until we reach s.
In order to get to node t we must have come from either f or g. The costs involved are
5 for the edge f→t and 2 for the edge g→t. Backing up one zone, we find that to reach
nodes f or g, we had to come directly from c, d, or e. In order to go to t from d there is
a choice, namely through f or g. The path through g is shorter, so we select that. The
shortest paths from nodes in these two zones to t are shown in Table 1. The way to
read this chart is to follow the next links. For example, to go from d to t, we go from
d to the next node under d, namely g. Then we look at the next node under g, which is
t. Thus the shortest path from d to t is d→g→t and its cost is 10.

Table 1 - Distances to node t


At this point we know the shortest paths to t from the zone containing nodes f and g
as well as that containing c, d, and e. In turn we find that the shortest path from b to t
is through e rather than d and the shortest path to t from a goes through c. The
entries in the Table 2 complete the task of finding a path from s to t.
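The backward zone-by-zone sweep can be sketched as follows. Since the figure and tables are not reproduced here, the edge costs below are hypothetical, chosen only to agree with the costs quoted in the text (g→t costs 2, f→t costs 5, d's best route to t costs 10 and goes through g, and b reaches t through e).

```python
# Zone-by-zone backward computation of shortest paths to node t.
# The edge costs are made up for illustration; edges[u] maps u to
# {successor: cost}.
edges = {
    's': {'a': 1, 'b': 5},
    'a': {'c': 3, 'd': 1},
    'b': {'d': 2, 'e': 2},
    'c': {'f': 2},
    'd': {'f': 6, 'g': 8},
    'e': {'g': 3},
    'f': {'t': 5},
    'g': {'t': 2},
}
zones = [['f', 'g'], ['c', 'd', 'e'], ['a', 'b'], ['s']]  # back to front

dist, nxt = {'t': 0}, {}
for zone in zones:          # sweep backward from t, one zone at a time
    for u in zone:
        # pick the successor minimizing edge cost plus its distance to t
        v = min(edges[u], key=lambda v: edges[u][v] + dist[v])
        dist[u], nxt[u] = edges[u][v] + dist[v], v

path = ['s']                # follow the next links to read off the path
while path[-1] != 't':
    path.append(nxt[path[-1]])
```

With these made-up weights, dist['s'] comes out 11 along s→a→c→f→t, and each edge of the network is examined exactly once.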


Table 2 - Paths to t in a network


We should note several things at this point. First of all, we not only solved a single
path problem, but the all pairs, single destination path problem for a network of this
type. Thus we got more for our effort than we initially wanted just like the popular
shortest path algorithms presented earlier. The computation involved was 12 additions
with no additional overhead to keep track of intermediate results.
Most of all, we used the solutions of subproblems to solve longer path problems. We
progressed from zone to zone making decisions based upon our results from the
previous zone. We were able to do this because of the following important fact.
Every portion of a shortest path is itself a shortest path.
Suppose the graph in Figure 1 was an undirected graph rather than a layered network.
We could still use subpaths to build a solution to the all pairs, single destination path
problem using methods similar to those of Dijkstra, Lee, or Moore that build paths
out of subpaths. We still branch backward, but instead of filling in complete zones at
each step, we enter the closest node to our completed paths at each step.
For example, the closest node to t is g. Next come e and f, which are 5 away from t
along the paths e→g→t and f→t. Then we add the vertices labeled b and c that are 7
units away from t along b→e→g→t and c→f→t. The complete chart of paths to t is
shown in Table 3.

Table 3 - Paths to node t in a graph


Again we were able to take advantage of the fact that every shortest path was made
up of shortest subpaths. Another nice thing about our solution is that we only looked at


each edge of the graph once. Compare this to the situation where one might
enumerate and examine all of the paths through the graph. For a complete graph this
means only O(n²) computational steps instead of an exponential number. And, since
we were able to use information again and again, we saved time.
The other rather famous path problem is the all pairs shortest path problem that is
sometimes called transitive closure. This also is solved by filling in a chart, except
this chart will be of size O(n²) since there are exactly that many paths to find.
We begin by jotting down the distances between all of the pairs of vertices along
paths that do not go through any other nodes. Let us take figure 1 and turn it into an
undirected graph. For this example the shortest paths which go directly from one
node to another and do not pass through other nodes appear in figure 4.

Figure 4 - Short Paths Going Through No Nodes


These are, of course, just the weights from the graph of figure 1. We can build from
this and produce a table that gives all of the shortest paths that go through either no
vertices or vertex s. This only changes the table in figure 4 by adding a path between
a and b of length 10. The next step is to allow paths that can pass through vertices s
and a. If we continue on in this manner, things get interesting when we can go
through nodes {s, a, b, c, d}. Now it is possible to go from a to b in two ways: a→s→b,
a path with length 10, or a→d→b, a path with length 2.
What we did was to compare the shortest path from a to b which went only through
{s, a, b, c} with one which went from a to d and then from d to b. We continue in this
manner until we know the shortest paths going from any vertex to another passing

through any of the nodes in the graph.


Here is the general method due to Floyd and Warshall. We first define subsets of the
vertices in a graph as
A0 = ∅, A1 = {v1}, ..., Ai = {v1, ..., vi}, ..., An = {v1, ..., vn}.
Let us now define d(Ai, vj, vk) as the distance or cost of a path from vj to vk going through
only vertices in the set Ai. Then the following equation provides this value for the
next subset of vertices:
d(Ai+1, vj, vk) = minimum[ d(Ai, vj, vk), d(Ai, vj, vi+1) + d(Ai, vi+1, vk) ]
In other words, the shortest path is either one of length d(Ai, vj, vk), which does not go
through vi+1, or one of length d(Ai, vj, vi+1) + d(Ai, vi+1, vk), which does. This
recursive computing procedure allows us to find all shortest paths connecting any
two vertices in O(n³) steps compared to the possibly exponential number of steps
necessary if all paths were enumerated.
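The recurrence translates almost line for line into code; a minimal sketch, with ∞ marking missing edges:

```python
# The Floyd-Warshall recurrence: d starts as the direct edge costs
# (d(A0, vj, vk)) and is relaxed once per intermediate vertex.
INF = float('inf')  # no direct edge

def floyd_warshall(w):
    n = len(w)
    d = [row[:] for row in w]      # copy: paths through no other vertices
    for i in range(n):             # now allow paths through v1, ..., vi
        for j in range(n):
            for k in range(n):
                # either avoid vertex i, or go j -> i -> k
                d[j][k] = min(d[j][k], d[j][i] + d[i][k])
    return d
```

Three nested loops over n vertices give the O(n³) step count claimed above.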



Characteristics and Approaches


We shall now extract some of the properties and techniques that were used to
construct solutions to path problems. The two design methods that dominate this
process are:
a. Define the problem in terms of subproblems.
b. Construct the recursive relationship between them.
This is easy to accomplish for shortest path problems. Recalling the graph shown in
figure 1 we shall do just this.

Figure 1 - A Network Path Problem


Using the function path(u, v) to represent the shortest distance between nodes u and
v, we noted that since one had to go through either node f or g to reach t, then:
path(s, t) = min[path(s, f)+path(f, t), path(s, g)+path(g, t)].
We then worked backwards through the zones of the network to construct optimum

solutions from the subproblems which were just going to and from nodes in the
zones.
Our next step towards solving the problem is to compute values for subproblems and
use them to construct the optimum solution. If
every subsolution of an optimum solution is optimum
then we shall be able to construct optimal solutions from optimal subsolutions. This
statement or rule is named the Principle of Optimality by those in the dynamic
programming field.
The second example from path problems was the dynamic programming solution to
all pairs shortest path problem due to Floyd and Warshall. Here, if we recall that we
defined subsets of the vertices in a graph as
A0 = ∅, A1 = {v1}, ..., Ai = {v1, ..., vi}, ..., An = {v1, ..., vn}.
and define subproblems involving constrained paths as:
d(Ai, vj, vk) = distance from vj to vk going through only vertices in Ai,
then the recursive relationship between these subproblems is:
d(Ai+1, vj, vk) = minimum[ d(Ai, vj, vk), d(Ai, vj, vi+1) + d(Ai, vi+1, vk) ].
Let's consider another dramatic example: computing Fibonacci numbers. They are
defined by the recursive relationship:
fn = fn-1 + fn-2
which seems to indicate that we should compute them by a top-down recursive
procedure. But, as everyone knows, this would be a hideous mistake. An exponential
amount of work would be done if we did this since many of the numbers would be
computed over and over again. We instead need to compute them in the order f1, f2, ...,
fn. In other words, we organize the order in which we compute the subproblems, much
the same way that we did with the path problems.
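A bottom-up sketch makes the point: linear work instead of an exponential recursion tree.

```python
# Computing f_n bottom-up: each value is computed once, in order, so the
# work is linear rather than the exponential blow-up of naive recursion.
def fib(n):
    a, b = 0, 1               # f_0 and f_1
    for _ in range(n):
        a, b = b, a + b       # slide the window forward to (f_i, f_{i+1})
    return a
```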
Thus two more steps emerge in our process.


c. Characterize the necessary subproblem space.
d. Determine the order in which to compute subproblems.
All that remains is to fill in the subproblem table. The time needed to do this depends
on the second important requirement for superb dynamic programming, namely:
there should be numerous common subproblems.
This, in fact, is what separates good recursive divide and conquer algorithms (such as
mergesort) from problems that should be solved with the dynamic programming
techniques. If there are a lot of common subproblems and the subproblem space is
not too large, we can efficiently solve the problem using dynamic programming.



More Dynamic Programming Examples


Let us begin with the most easily stated integer programming problem, the knapsack
problem. Recall that we have n items which have weights wi and values vi and wish to
select the highest valued collection which does not exceed a weight limit b. Thus we
let xi be the amount of item i we include. The problem is then to maximize
v1x1 + v2x2 + ... + vnxn subject to w1x1 + w2x2 + ... + wnxn ≤ b.
Breaking this down into subproblems follows easily. We load a portion of the
knapsack with some of the items. For all values of k ≤ n and y ≤ b this is defined as:
Fk(y) = maximum[ v1x1 + ... + vkxk such that w1x1 + ... + wkxk ≤ y ].
So, for values of k between 1 and n and values of y from 1 to b, Fk(y) gives us a partially
loaded knapsack. We note that the principle of optimality does indeed apply since an
optimum load consists of several optimum subloads.
The next step is to determine the relationship between the Fk(y) values. This is not too
difficult to do. Suppose we wish to have y pounds of items 1 through k. Either item k
is in the load or not. If not, then the load is the same as the load involving items 1
through k-1, or Fk-1(y). Otherwise we subtract the weight of item k from y and look at
that optimum load. This means that
Fk(y) = maximum[ Fk-1(y), Fk(y - wk)+vk ]
All we need to do now is make a chart of all the Fk(y) values and fill in the chart.
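Filling in the chart is just a pair of nested loops. The weights and values below are invented for illustration (they are not the Table 1 data); the recurrence is exactly the Fk(y) one above.

```python
# Filling in the F_k(y) chart for the recurrence
#   F_k(y) = max[ F_{k-1}(y), F_k(y - w_k) + v_k ].
# Item data here is made up for the example.
def knapsack_chart(w, v, b):
    n = len(w)
    F = [[0] * (b + 1) for _ in range(n + 1)]       # F[0][y] = 0
    for k in range(1, n + 1):
        for y in range(1, b + 1):
            F[k][y] = F[k - 1][y]                   # item k not used
            if w[k - 1] <= y:                       # use a copy of item k
                F[k][y] = max(F[k][y], F[k][y - w[k - 1]] + v[k - 1])
    return F

F = knapsack_chart([2, 3, 4], [3, 4, 6], 7)         # F[3][7] is the answer
```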
Consider the four items whose weights and values are provided in table 1 below.


Table 1 - A Knapsack Problem


From these weights and values let us compute all of the Fk(y) values for this problem.
There are no one pound loads so all Fk(1) = 0. The only way to load up two pounds is
to use item 1. At y = 3 pounds we begin to have choices since item three is available.
When y can be 5, we can use combinations of all the weights to form an optimum
load. Table 2 reveals that the optimum seven-pound load is worth 55.

Table 2 - Fk(y) for a Knapsack Problem


Time and space bounds for computing the dynamic programming solution to the
knapsack problem are interesting. Since all we need do is fill in a table, we need O(nb)
time and space. This sounds pretty good! In fact, this seems to show that the
knapsack problem can be solved in quadratic time and space. This is very interesting
indeed since no NP-complete problem is known to be that easy. But we have been
fooled since the problem size depends on the space taken to write the weights and
values, not the weight bound. Thus the problem size is O(n log b), which makes O(nb)
exponential in the input size, as we suspected. Algorithms of this type are called
pseudopolynomial time algorithms.
Let us turn now to the closed city tour problem. We have n cities and a matrix A of
costs incurred while traveling between them. Thus aij is the cost of traveling from city
i to city j. For any set of cities S (not including city 1), we let C(S, k) be the minimum
cost for traveling from city 1 to city k, going through all of the cities in S once. That
is, a tour completely through S beginning with city 1 and ending in city k.
For sets containing one city, this cost is easy to compute. In fact, for all k,

C({k}, k) = a1k.

For larger sets of cities, we note that some city (say city i) had to precede city k. Thus
to go from city 1 through S to city k, we can go from city 1 through the set S − {k},
ending at city i, and then go directly from i to city k. To get the best tour, we simply
take the minimum over all such cities i. This is represented by the formula:
C(S, k) = minimum over i in S − {k} of [ C(S − {k}, i) + aik ],
which is the recursive relationship between closed tour subproblems.


Again we note that the principle of optimality does apply since any subtour of an
optimum tour is itself optimum. So, to solve the traveling salesman problem, we
begin with small tours and keep computing subproblems until we find our optimum
tour.
Analysis of the traveling salesman problem indicates that there are (n-1)! possible
tours. This is in the neighborhood of O(2^(n log n)). We did achieve a savings in time since
we can do the computations mentioned above in O(n²2ⁿ) steps, but unfortunately we
need about O(n2ⁿ) space for storing our table.
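A bitmask sketch of this computation, with city 0 playing the role of city 1 and a subset S encoded as an integer whose bit k is set when city k is in S (the function name and test data are invented for the example):

```python
# C[(S, k)] holds the cheapest path from city 0 through all of S, ending
# at city k; the final answer closes the tour back to city 0.
def held_karp(a):
    n = len(a)
    C = {(1 << k, k): a[0][k] for k in range(1, n)}    # C({k}, k) = a_1k
    for size in range(2, n):
        for S in range(1 << n):
            # skip sets containing city 0 or of the wrong size
            if S & 1 or bin(S).count('1') != size:
                continue
            for k in range(1, n):
                if not S & (1 << k):
                    continue
                prev = S ^ (1 << k)                    # S - {k}
                C[(S, k)] = min(C[(prev, i)] + a[i][k]
                                for i in range(1, n) if prev & (1 << i))
    full = (1 << n) - 2                                # every city but 0
    return min(C[(full, k)] + a[k][0] for k in range(1, n))
```

The table holds O(n2ⁿ) entries and each is computed with O(n) work, matching the counts above.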


Approximate Solutions

Thus far we have been concentrating on exact solutions to problems which can be
stated as integer programs. A major trouble we encounter is that since the time
complexity of these problems is often at least O(2ⁿ), we cannot solve them for large
inputs. Often when it is not feasible to compute an exact solution to a problem, we
revert to approximation because this is better than no solution at all. These
algorithms are often called heuristics since there is usually a rule of thumb at the
core of the algorithm. But before examining heuristics in detail, we shall study several
ways to analyze them.
The sections are:
Bounds for Heuristics
Performance Analysis
Terminating Exact Solutions
Historical Notes and References
Problems



Bounds for Heuristics


Whenever we opt for a quick algorithm that will find an approximate solution to a
problem, we hope that the solution will be as close to optimum as possible. It would
be even better if we could guarantee the solution to be within a certain distance from
the optimum solution. That is, given an instance of a problem, we wish the objective
function value of the solution provided by the approximation algorithm to be as close
to the optimum solution as possible. Being able to bound this closeness is better yet.
Thus, we would like to find a relationship between the algorithm and the optimum
solution. For an algorithm A and input (or instance) I, we shall call the value of the
objective function for the solution provided by the algorithm A(I). Let us denote the
value of the objective function for the optimum solution OPT(I). If we are looking at
minimization problems, then we wish to find a g(n) such that for all instances of the
problem:
A(I) ≤ g(OPT(I)).
Consider bin packing. Our input is n items {u1, u2, ..., un} of size s(ui) between 0 and 1.
We wish to minimize the number of bins needed to pack the items with the constraint
that each bin is of size 1. We shall use an algorithm named first fit as our first
example. It is a very simple greedy algorithm that works as follows. First, line up the
bins in a row from left to right. Then we merely go through our collection of items
placing each in the leftmost bin that has room for it. Figure 1 illustrates this for the
collection of items with sizes
{1/3, 3/4, 1/4, 2/3, 3/8, 1/4}.


Figure 1 - First Fit Bin Packing


This required three bins. If we were to think about it, we know that we will need at
least one bin and no more than n bins. If FF(I) is the objective value for the first fit
algorithm on I then:
1 ≤ FF(I) ≤ n
But we can refine these bounds a little more. Suppose the items were liquid and we
could pour them into the bins. Then we would have to have enough bins to take care
of the sum of the sizes. That is our new lower bound. In addition, we should note the
following:
Fact. No more than one bin can be half full or less.
Proof. Suppose that two were no more than half full. The items placed in the
rightmost of these each fit in half a bin, so to place them there we had to pass
over the other half-full bin that had room for them. This goes against the rules
for the first fit algorithm.
This fact provides the upper bound for the number of bins: the number we would
have if we poured each a tiny bit more than half full. Thus:
⌈Σ s(ui)⌉ ≤ FF(I) ≤ 2⌈Σ s(ui)⌉
The lower bound in the above equation is also the least that our optimal solution can
possibly be. Putting this all together we derive that
FF(I) ≤ 2 OPT(I).
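First fit itself is only a few lines. A sketch using the item sizes from the example above, with exact fractions so that an exactly full bin is not rejected by floating-point roundoff:

```python
# First fit: march down the items, putting each into the leftmost bin
# with room, opening a new bin on the right only when nothing fits.
from fractions import Fraction

def first_fit(sizes):
    bins = []                          # load of each bin, left to right
    for s in sizes:
        for i, load in enumerate(bins):
            if load + s <= 1:          # leftmost bin that has room
                bins[i] = load + s
                break
        else:
            bins.append(s)             # no bin had room: open a new one
    return bins

items = [Fraction(1, 3), Fraction(3, 4), Fraction(1, 4),
         Fraction(2, 3), Fraction(3, 8), Fraction(1, 4)]
bins = first_fit(items)
```

On these six items it opens three bins, matching figure 1.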
A rather exotic and long analysis of first fit does provide a tighter bound. In fact, it
can be shown that:
FF(I) ≤ (17/10) OPT(I) + 2.
For our next example, let us turn to the Closed Tour problem. It possesses a rather
famous algorithm with a provable optimality bound.


First, one constructs the minimum spanning tree (MST) for the collection of cities.
Then all of the edges in the minimum spanning tree are duplicated. An example
appears as the left portion of figure 2.

Figure 2 - Double MST and Extracted Tour


At this point every vertex has an even number of edges leading from it, so a tour
which traverses all of the edges such as:
a-b-e-c-d-c-e-h-e-g-f-g-e-b-a
is possible and can be easily generated. This well-known type of tour is called an
Euler tour.
Since an optimal closed tour can be no shorter than the minimum spanning tree, we
know that the Euler tour is no worse than twice the optimum closed tour that
visits all of the cities once.
At this point we merely extract a closed tour such as that on the right in figure 2 from
the Euler tour and are assured that:
OPT ≤ tour ≤ 2 OPT.
The complete algorithm appears as figure 3 below.


Figure 3 - Closed Tour Algorithm
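A compact sketch of the same idea: build the minimum spanning tree (Prim's algorithm here) and take its preorder traversal, which is exactly the Euler tour of the doubled tree with repeated vertices shortcut out. This is an illustrative rendering, not the book's figure 3 pseudocode.

```python
# Double-MST closed tour via a preorder walk of the spanning tree.
def double_mst_tour(d):
    n = len(d)
    # Prim's algorithm, rooted at city 0
    parent = [0] * n
    best = [d[0][j] for j in range(n)]     # cheapest link into the tree
    in_tree = [True] + [False] * (n - 1)
    children = [[] for _ in range(n)]
    for _ in range(n - 1):
        u = min((j for j in range(n) if not in_tree[j]),
                key=best.__getitem__)
        in_tree[u] = True
        children[parent[u]].append(u)
        for j in range(n):
            if not in_tree[j] and d[u][j] < best[j]:
                best[j], parent[j] = d[u][j], u
    # Preorder walk of the tree = shortcut Euler tour
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour + [0]                      # return home to close the tour
```

Under the triangle inequality, shortcutting never lengthens the walk, so the tour stays within twice the MST length.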


An even better method is to add just enough edges to the spanning tree so that each
vertex has even degree. Then, following all of the steps in the above algorithm
provides a tour with the following bounds:
OPT ≤ tour ≤ (3/2) OPT.
Another bounding result due to Hwang that depends upon minimum spanning trees
concerns rectilinear Steiner spanning trees.
Theorem (Hwang). The shortest rectilinear Steiner spanning tree over a
set of points is no less than two-thirds the size of the rectilinear
minimal spanning tree over the points.
Thus if one can show that an algorithm produces a tree no larger than the minimum
spanning tree, then it is no worse than one and a half times the optimum.


Local Optimization

Almost the first scheme thought of when trying to figure out how to solve some
problem by approximation is to try to be as good as possible on as large an area as
possible. In other words, try to be nearly optimal locally and hope that this carries
over to the rest of the problem. We shall investigate several ways to accomplish this.
The sections are:
The Greedy Method
Divide and Conquer
Local Improvement
General Techniques for Local Search
Gradient Methods
Historical Notes and References
Problems



Greedy Methods
Compulsive algorithm designers adore greedy methods. All that seems to be required
for this is to jump in and do whatever seems best at the time. A great example is the
first fit algorithm for bin packing shown in the section on approximations. In that
algorithm one merely takes items in order and places each in the leftmost bin that has room for it.
The fact that the results are often quite good in practice makes techniques like this
very attractive.
Another greedy method much in the spirit of first fit is the nearest neighbor
algorithm for the closed tour problem. It is much like Kruskal's famous algorithm
for minimum spanning trees, since we merely keep connecting the closest cities
until we have a tour. An example is pictured in figure 1.

Figure 1 - Nearest Neighbor Closed Tour


Here though, the relationship between tours found by the nearest neighbor algorithm
and optimum tours is:

NN(I)/OPT(I) ≤ (⌈log₂ n⌉ + 1)/2

which depends on n, the size of the problem. So, the theoretical guarantee on
performance degrades as the problem instances grow larger.
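The tour construction just described can be sketched in a few lines. This is a minimal version assuming cities are given as coordinate pairs; the problem could equally be stated with a distance matrix.

```python
import math

def nearest_neighbor_tour(cities):
    """Greedy closed tour: start at the first city and repeatedly
    move to the nearest city not yet on the tour."""
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: math.dist(cities[last], cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour  # the tour closes by returning from tour[-1] to tour[0]

def tour_length(cities, tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))
```

Because each step is irrevocable, an early commitment to a long edge can never be repaired, which is one reason the guarantee degrades on larger instances.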

http://www.cs.uky.edu/~lewis/cs-heuristic/text/local/greedy.html (1 of 7)12/2/2015 10:08:19 AM

Our next problem comes from the field of CAD algorithms for VLSI design. It is called
channel routing. A routed channel is shown in figure 2 and defined formally as:
Definition. A channel is a sequence of pairs of integers
<t1, b1>,<t2, b2>, ... , <tn, bn>.
Unfortunately the definition, although precise, is not very intuitive and thus does not
help one to understand what a channel actually is. The intuition behind the definition
is that a channel consists of two rows of pins (or terminals), some of which must have
a common electrical connection. The ti represent the pins on the top of the channel,
while the bi are those on the bottom. Examine figure 2.

Figure 2 - A Routed Channel


Note that there is a row of numbered pins along the top and one along the bottom.
(We call these sides shores to go along with the nautical motif of channels.) Those of
figure 2 correspond to the sequence:
<1, 2>, <0, 1>, <2, 3>, <2, 1>, <3, 4>, <0, 0>, <4, 5>, <3, 5>
which satisfies the above definition.
Those pins bearing the same label (number) must be connected together. Pins labeled
zero however are not connected to any others. A collection of pins which must be
connected is called a net. The labels on the pins name the net.
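Working directly from the definition, a short sketch (the helper name is ours, not the book's) that groups the pins of the example channel into nets:

```python
from collections import defaultdict

# The channel of figure 2, written as the sequence of <top, bottom> pairs.
channel = [(1, 2), (0, 1), (2, 3), (2, 1), (3, 4), (0, 0), (4, 5), (3, 5)]

def nets_of(channel):
    """Group pins into nets: for each nonzero label, record the columns
    (numbered from 1) and shores where its pins appear."""
    pins = defaultdict(list)
    for col, (top, bottom) in enumerate(channel, start=1):
        if top:
            pins[top].append((col, 'top'))
        if bottom:
            pins[bottom].append((col, 'bottom'))
    return dict(pins)
```

For example, net 1 has pins at column 1 on the top shore and columns 2 and 4 on the bottom shore; label 0 produces no net at all.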
The small dark squares are called vias and indicate where two wires (the lines) are
connected, as they are insulated from each other at all other points. The horizontal
wires are routed on tracks. An optimum solution contains the fewest tracks or least
area. Figure 2 illustrates a 5-track routing.
The greedy router we are about to examine makes one pass over the channel from left
to right. As it progresses along the channel, it brings in wires from pins on the
shores column by column (or pin by pin) into the channel and attaches them to wires
on horizontal tracks until every pin is connected to the rest of those bearing the
same label.
Here is an example. First, tracks are assigned to nets, such as net 1, that enter the
channel from the left as shown in figure 3. Then nets 1 and 2 are brought in from
the top and bottom in the first column. Net 2 is assigned to a new track and extended
to the right. Net 1 is attached to the existing net 1 track in both the first and second
columns. Then both tracks are extended to column three.

Figure 3 - Beginning a Routing


Next, net 2 is attached to its existing track and net 3 is brought into the channel to an
empty track. This is the state of affairs in figure 3. Examine figure 4.

Figure 4 - Continuing the Routing


Now all existing tracks (those for nets 1, 2, and 3) are extended to column 4 and nets
2 and 1 are brought into the channel. Net 1 is attached to the existing net 1 track and
net 2 is brought in to an empty track at the top.
At this point a problem arises. We cannot join net 2 to its existing track because this
would cause an overlap with net 1, which is not allowed. Thus a new track must be
assigned to net 2, causing it to exist on two tracks. This is shown in the next channel
installment in figure 5.

Figure 5 - More of the Routing


Also in figure 5 we see that at the next column, nets 4 and 3 were brought into the
channel and net 3 was connected to an existing track. And, since the next pin for
net 3 is on the top shore, the extension of net 3's track was made as near the top as
possible. Note also that on the next column we shall be able to consolidate the tracks
for net 2 since no nets enter there.
The process of bringing nets into the channel and either assigning them to new tracks
or attaching them to existing tracks continues column by column until the end of the
channel.
We are ready now to state the entire algorithm, but first we need some terminology. A
net is said to be rising if its next pin further along the channel is on the top shore
and its following pin (if any) does not reside on the bottom shore within a pre-defined
distance called the steady net constant. Similarly, a net is falling if its next pin is on
the bottom shore and the following pin is not located on the top shore within the
distance specified by the steady net constant. In our example, net 1 is falling and net
2 is rising after column one. A steady net, by default, is one that is neither rising nor
falling. Split nets are nets that have been placed upon two different tracks at a
column. Net 2 has been split on columns four, five, and six.
The greedy algorithm for channel routing is presented as figure 6 below.

Figure 6 - The Greedy Channel Router


The algorithm begins by assigning tracks to left border nets, if any. Here, track
selection for the nets is done by placing rising nets above steady nets which, in turn,
are placed above falling nets. This group is placed upon the central tracks of the channel.
The algorithm then continues through each column of the channel by first trying to
bring in the non-zero pins from the top and bottom shores to either the first unused
track, or to a track containing its net, whichever comes first. The vertical wires that
are used to bring in the pins must not cross over each other in the process and if
such a situation arises, the pin that requires the shorter distance to be brought into
the channel is assigned its existing track, and a new track is created for the other pin
such that there is no overlap of vertical wires.
The algorithm next locates all split nets (nets occupying more than one track) and
tries to 'collapse' as many of these as possible into one track each by connecting them
together with a vertical jog. This obviously frees up one track if a split net occupies
two tracks and more if the net is spread on more than two tracks. Care must be taken
to see that a vertical jog of one net does not overlap the vertical jog of another net or
of an incoming net unless of course they are the same net. Net 2's two tracks were
reunited in this manner in column 6.
The next step is to narrow the distance between the tracks of as many remaining split
nets as possible by bringing them closer together with vertical wires, which must be at
least the minimum jog length. Also, these wires must not be incompatible with
vertical wires that may have been placed in earlier steps.
This is followed up by locating all single track nets that are rising or falling, and
attempting to move them to higher or lower tracks if possible using vertical wires.
This was done to net 3 at column 5 and net 4 at column 6.
As the algorithm progresses through these steps some bookkeeping is done so that
when:

new pins are brought in,
other vertical wires are placed in the channel, or
a new track is created,

the list of available tracks is continually updated to reflect the changes in availability
of the tracks made along the way.
Now the routing for this column is over and at this point, the column count is
incremented and routing begins on the new column.
When the end of the channel is reached, all tracks are checked for their availability
and if they are still in use, then there are two possibilities for them. The first is that
the tracks contain split nets that were unable to be collapsed earlier within the
channel area. They may now be collapsed, one at a time if necessary. This might mean
extending the right edge of the channel by some more columns. The second
possibility is that the tracks are continuing with those nets because they comprise the
list of right border nets. These are as they should be and end there.
In order to calculate the time complexity for this routing algorithm, the parameters
are the length of the channel and the number of tracks. The algorithm makes one
pass over a channel having length n. As it processes each column, it checks every
track. This means that the time taken is equal to the channel length multiplied by the
number of tracks. If the number of nets is proportional to n (the length of the
channel), then the time complexity comes out to be O(n²) since n tracks could be
required in the worst case.


Divide and Conquer


One of the most popular algorithmic devices for computer scientists has been divide
and conquer. Use of this method has led to the fast searches and sorts that every
beginning student encounters. For some problems, it is also a good method for quick
and easy approximations. Our strategy will be to divide an instance of a problem into
smaller and smaller pieces until we are able to solve the problem for the smaller
pieces. Then, if all goes well, joining together these good solutions for portions of the
problem should provide a reasonable approximation to the original problem instance.
Closed City Tours shall be our first example. First, we divide our cities into eastern
and western regions so that each region contains the same number of cities. Then we
shall divide these into northern and southern sections in the same manner. Thus
instead of merely bisecting the city space as a mathematician might do, we quadrisect
the space in a computer science manner. Each region now contains roughly a quarter
of the cities present in the original problem. This means that instead of (n-1)! possible
tours, we now need only consider (n/4 - 1)! tours for each region. Figure 1 contains an
example of a city space that has been partitioned.

Figure 1 - Regional Closed City Tours


If a region is small enough so that we can find an optimum closed tour then we do so.
Otherwise, we keep partitioning the regions until we can compute optimum tours.
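The quadrisection step described above can be sketched as follows, splitting west/east first and then north/south; the text does not fix a particular splitting order, so this is one reasonable choice.

```python
def quadrisect(cities):
    """Split (x, y) city coordinates into four regions of roughly equal
    population: west/east halves by x coordinate, then each half split
    into south/north halves by y coordinate."""
    by_x = sorted(cities)                                  # sort by x
    west, east = by_x[:len(by_x) // 2], by_x[len(by_x) // 2:]
    def split_by_y(region):
        by_y = sorted(region, key=lambda c: c[1])
        return by_y[:len(by_y) // 2], by_y[len(by_y) // 2:]
    return [*split_by_y(west), *split_by_y(east)]
```

Splitting at medians, rather than at fixed coordinates, is what guarantees that each region receives roughly a quarter of the cities.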
http://www.cs.uky.edu/~lewis/cs-heuristic/text/local/divide.html (1 of 3)12/2/2015 10:08:25 AM

In figure 1, shortest closed tours have been found for each quadrant. The last step is
to make connections between quadrants, and omit one of the links in each regional
tour. A possibility is shown in figure 2.

Figure 2 - Connecting the Regional Tours


In general, a problem instance is split into halves (or quarters) and then these smaller
problems are solved optimally if possible. If the subproblems are still too large, they
are divided again. After all of the subproblems have been solved, then they are
combined to form a solution to the main problem.
Note that the resulting solution is not necessarily optimum, even though it was built
from optimum subsolutions. It is tempting to attempt to apply the Principle of
Optimality from dynamic programming, but close examination reveals that it stated
that optimum solutions are composed of optimum subsolutions, not the other way
around.
The next application of the divide and conquer technique shall be the chromatic
number problem, or graph coloring. First, we take a graph such as that on the left in
figure 3 and divide it into the two subgraphs as shown on the right. In this case, the
division was done so that as few edges as possible crossed the boundary. The reason
for that is so that there will be as few conflicts as possible between the regions. Now
the two subgraphs can be colored optimally with three colors. The colorings are then
resolved with the pairs <a, e>, <b, d>, and <c, f> taking identical colors.
Figure 3 - A Partitioned Graph


Figure 4 contains a general algorithm for divide and conquer strategies.

Figure 4 - General Divide and Conquer Algorithm
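In the spirit of figure 4, the general strategy can be sketched as a higher-order routine; the parameter names here are illustrative, not the book's.

```python
def divide_and_conquer(instance, small_enough, solve, split, combine):
    """Generic divide and conquer skeleton: split an instance until the
    pieces are small enough to solve directly, then combine the
    sub-solutions into a solution for the whole."""
    if small_enough(instance):
        return solve(instance)
    return combine([divide_and_conquer(piece, small_enough, solve, split, combine)
                    for piece in split(instance)])
```

For instance, with `split` halving a list, `solve` returning a singleton's element, and `combine` taking the maximum of the sub-results, the skeleton computes the maximum of a list.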


Local Improvement
Examining the geometric interpretation of integer programming reveals that a
problem's constraints form a polytope. Optimum solutions for the relaxation of the
problem to linear programming can be found on the convex hull of this polytope. But
unfortunately, optimum integer solutions are often found inside the polytope rather
than on the convex hull. This is why linear programming does not solve integer
programming problems.
Consider the two dimensional optimization problem shown in figure 1. The feasible
solution space is the darker area and we wish to maximize the sum of the two
variables. Integer solutions to the problem occur at the intersections of the grid edges.

Figure 1 - Local Search Path Through Neighborhoods

http://www.cs.uky.edu/~lewis/cs-heuristic/text/local/improve.html (1 of 10)12/2/2015 10:08:35 AM

Suppose that we were able somehow to find a feasible solution, perhaps through the
use of a greedy method. Suppose that the solution indicated by the dot labeled a in
figure 1 is such a solution. Let us search the area directly around this solution (which
we call the neighborhood of the solution) to see if there is a better solution which is
not too different than the one we have. If we find a better solution, then we continue
the search for better solutions. This process is illustrated in figure 1. The dots are
feasible solutions and the large circles are neighborhoods. As we find better and
better solutions, we progress upwards and to the right on the path:
a → b → c → d → e,
until an optimal solution (in this case found at point e) is encountered.
This method is entitled local search and calls for searching a small area around a
solution and adopting a better solution if found. The process halts when no better
solutions occur. This algorithm is illustrated in figure 2.

Figure 2 - A Basic Local Search Algorithm
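The basic algorithm of figure 2 can be sketched as follows; this minimal version adopts the best improving neighbor at each step.

```python
def local_search(initial, neighborhood, cost):
    """Basic local search: repeatedly adopt the best improving neighbor
    until no neighbor is better than the current solution."""
    current = initial
    while True:
        better = [s for s in neighborhood(current) if cost(s) < cost(current)]
        if not better:
            return current               # no improvement: a local optimum
        current = min(better, key=cost)  # adopt the best neighbor
```

Run on a one-dimensional toy problem (minimize (x - 3)² with neighbors x ± 1), the search walks straight downhill to x = 3 and halts there.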


We shall begin with Closed City Tours as our first example. If t is a tour of n cities,
then a 1-change neighborhood is defined as:
N1c(t) = {s | s = t with one city's position changed}
After forming the neighborhood, it is searched for a better tour. In figure 3, a city is
moved in going from (a) to (b), resulting in a better tour.

Figure 3 - A Local Change in a Closed Tour


An even better neighborhood definition for closed tours is a class called k-optimal
neighborhoods. These are the result of removing k edges from a tour and
reconnecting the tour. Local search methods using 3-optimal neighborhoods have
proven very effective.
One of the most famous local search algorithms is Kernighan and Lin's min-cut
algorithm for the graph partition problem. The problem's formal definition appears
below.
Graph Partition. Given a weighted graph G = (V, W), find disjoint sets of
vertices A and B such that A ∪ B = V and the sum of the weights of the
edges between A and B is minimized.
Let us take a weighted graph G = (V, W) where W[i, j] provides the weight of the edge
between vi and vj. In figure 4 we find an example with the vertex set V = {r, s, t, u, v, w}.

Figure 4 - Weighted Graph


First we partition V into two subsets of the same size which we shall name A and B.
Then we call the sum of the weights of all of the edges between A and B the cost of
the partition. This is denoted Cost(A, B). In our example, we partition the graph into A
= {r, s, t} and B = {u, v, w}. After adding up the weights of all the edges passing
between vertices in A and B we find that Cost(A, B) = 20.
As noted above, the Min-Cut algorithm strategy begins with an arbitrary partition of
the vertex set V into sets A and B. Then we attempt to form better partitions by
swapping vertices between A and B until the cost seems to be the best that can be
achieved.
To do this we must examine neighborhoods formed by exchanging pairs of vertices
from A and B. If the partition P is the pair <A, B>, then for two vertices a ∈ A and
b ∈ B the partition Pab is:
Pab = <A ∪ {b} - {a}, B ∪ {a} - {b}>.
It is formed by swapping the vertices a and b between partitions. This makes the
neighborhood of P = <A, B> the collection of all such partitions. That is:
N(P) = { Pab | for all a ∈ A and b ∈ B }.
Now we need to formulate the change in cost that swapping induces.
Definition. The external cost or gain E(a) of moving vertex a out of the
set A is the sum of the weights of all the edges leaving A from a.
Definition. The internal cost or loss I(a) of moving vertex a out of the set
A is the sum of the weights of the edges between a and other vertices in
A.
Definition. The total cost or difference incurred by moving vertex a out
of the set A is: D(a) = E(a) - I(a).
An easy technical lemma follows directly from these definitions.
Lemma. The gain incurred from swapping vertices a ∈ A and b ∈ B is:
g(a, b) = D(a) + D(b) - 2W[a, b].
Let's look at the values of these items using our last graph and the partition of A = {r,
s, t} and B = {u, v, w}.
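The definitions can be exercised directly in code. The edge weights below are hypothetical stand-ins, since the weights of figure 4 appear only in the figure itself.

```python
# Hypothetical edge weights on vertices r, s, t, u, v, w; the actual
# weights of the book's figure 4 are not reproduced here.
edges = {('r', 's'): 1, ('r', 'u'): 5, ('s', 'v'): 2,
         ('t', 'w'): 6, ('t', 'u'): 1, ('v', 'w'): 3}
W = {}
for (x, y), wt in edges.items():
    W[(x, y)] = W[(y, x)] = wt          # weights are symmetric

A, B = {'r', 's', 't'}, {'u', 'v', 'w'}

def D(x, side, other):
    E = sum(W.get((x, y), 0) for y in other)  # external cost E(x)
    I = sum(W.get((x, y), 0) for y in side)   # internal cost I(x)
    return E - I

def gain(a, b):
    # the lemma: g(a, b) = D(a) + D(b) - 2 W[a, b]
    return D(a, A, B) + D(b, B, A) - 2 * W.get((a, b), 0)
```

With these weights, swapping r and u has gain D(r) + D(u) - 2W[r, u] = 4 + 6 - 10 = 0, while swapping t and w would lose weight.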

Now that we know the external and internal costs incurred by swapping vertices
across the partition, here are the costs when pairs are swapped.

From these numbers we can conclude that swapping r and u seems to be a good idea.
So is swapping t and w. We should probably not wish to swap s and v however. Or at
least not at this time.

Let us apply the local search algorithm. We take the graph G = (V, W) and divide the
vertex set V arbitrarily into sets A = {r, s, t} and B = {u, v, w}. Then we examine the
nine neighborhoods that result from exchanging pairs of vertices from A and B.
Figure 5a contains difference values for all of the vertices in the table on the left while
the table on the right indicates all of the gains involved in swapping vertices between
A and B.

a) Swap r and u, total gain = 6

b) Best swap: s and v, total gain = 6


Figure 5 - Tables used during Min-Cut Local Search
Swapping either r and u or t and w provides a gain of 6. We elect to exchange r
and u. Then we recalculate the differences and gains for all vertices. These new values
appear in figure 5b. At this point it seems that the only swap which is not destructive
involves s and v. The local search terminates at this point because there is no gain.
It is intriguing to wonder if swapping s and v leads to a neighborhood where better
partitions reside. As we see in figure 6, this is not to be.

Best new swap: t and r, total gain = -1


Figure 6 - More Min-Cut Local Search Tables
The idea is worth following and Kernighan and Lin did so. They argued that partitions
such as the one above are often local minima, and that continuing to swap vertices,
even when there is negative gain, might lead to neighborhoods with better partitions.
Consider the graph in figure 7 that illustrates how total gain might possibly change
over time.

Figure 7 - Total Gain over Time


After swaps one, three, and eight, local maxima occur in the total gain function. The
local search algorithm would halt at any of these. What we would like to do is
continue to swap and possibly reach the global maximum at step eight. Kernighan
and Lin's variable depth method of local search provides this capability.
We shall now examine sequences of swaps. The process begins as before by searching
the neighborhood of the partition for the best pair of vertices to exchange. These are
swapped even if this reduces the total gain. In our example the sequence looks like
this:

To prevent cycling, we only allow swapping a vertex once in a sequence of exchanges.


Thus, after r and u are swapped in the above sequence, they become unavailable for
exchanging. Vertex swapping is continued until A and B have been totally
interchanged. We then retain all swaps up to the point of maximum gain in the
sequence. In our example, this means retaining only the first swap. At this point we
begin again with another sequence of swaps and continue swapping sequences until
no gain takes place. The second swapping sequence is:

Since no gain took place, we halt and present {u, s, t} and {r, v, w} as our partition.
Figure 8 provides a description of the entire algorithm.

Figure 8 - Graph Partitioning Algorithm
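A compact sketch in the spirit of figure 8 follows. It recomputes the differences from scratch at every step rather than using the faster incremental update, and the function name is ours, not the book's.

```python
def kl_pass(A, B, W):
    """One variable-depth pass in the style of the min-cut algorithm:
    keep swapping the best available pair (even at negative gain) until
    A and B are fully interchanged, then undo every swap past the point
    of maximum total gain."""
    A, B = set(A), set(B)
    availA, availB = set(A), set(B)
    def D(x, side, other):
        return (sum(W.get((x, y), 0) for y in other)
                - sum(W.get((x, y), 0) for y in side))
    def g(a, b):
        return D(a, A, B) + D(b, B, A) - 2 * W.get((a, b), 0)
    seq, total, best, best_k = [], 0, 0, 0
    while availA and availB:
        a, b = max(((a, b) for a in availA for b in availB),
                   key=lambda p: g(*p))
        total += g(a, b)
        A.discard(a); B.discard(b); A.add(b); B.add(a)   # swap even if gain < 0
        availA.remove(a); availB.remove(b)
        seq.append((a, b))
        if total > best:
            best, best_k = total, len(seq)
    for a, b in reversed(seq[best_k:]):                  # undo past the peak
        A.discard(b); B.discard(a); A.add(a); B.add(b)
    return A, B, best
```

The outer repeat loop of the full algorithm would call this pass again on the returned partition until a pass reports no gain.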


Several implementation features are of value since they make the algorithm run a bit
faster. For example, the recalculation of D[u] for all u ∈ V that are still available just
requires a small update, not an entire one. If ai and bk have been swapped and u ∈ A
then:
D[u] := D[u] + 2(W[u, ai] - W[u, bk])
Finding the pair to swap is much quicker also if the values of D[a] and D[b] have been
sorted. Also, we need not do n/2 swaps since the last one merely completes the entire
interchange of A and B and changes the total gain to zero.
Now let us turn our attention to the analysis of this algorithm. Three things must be
examined: correctness, complexity, and possibility of reaching an optimum solution.
We shall take the latter first.
Theorem 1. Using the basic operations of the MinCut algorithm it is
possible to reach an optimal solution.
Proof. The MinCut algorithm swaps vertices. Thus we need to show that
by swapping vertices it is possible to go from any partition to an optimal
one, that is, one with the minimum cut.
Suppose we are given an arbitrary partition P and an optimal one POPT. A
brief examination of P reveals that it either is the same as POPT or it is not.
If not, then there is a vertex in each half of P that does not belong. Swap
them and continue until the desired partition appears.
For this problem, correctness is rather simple. As there are no guarantees, we only
need show that a feasible solution has been found. Thus we may state the following
with very little proof.
Theorem 2. The MinCut algorithm is correct.
Proof. We begin with a feasible solution. Swapping two vertices provides
another feasible solution. Thus at every stage of the algorithm we have a
feasible solution.
With approximation algorithms, correctness is not as much of an issue as with
optimum algorithms. However, the complexity of the procedure is of great
importance to us and must be examined.
Computing the differences D[u] initially is an O(n²) operation since all of the graph
edges need to be examined. Setting the vertices as available requires only linear time.
Inside the for loop, selection of the pair to be swapped can be O(n²). Recalculating
each D[u] takes a constant amount of time for each available u, so this too is O(n).
Empirical results indicate that the repeat loop will be repeated less than four times,
even for very large values of n. These results indicate a time complexity of O(n³) for
the algorithm.


General Techniques in Local Search


Variable depth search seems to show more promise than elementary local search
because one need not get stuck at local optima. Continuing along the search path,
even though it brings less attractive solutions at times, can lead to better solutions.
Some notation is required in order to develop a description of the general algorithm.
An instance of a problem contains units which can be manipulated in order to form
new solutions.
Here are some examples. For graph partitioning, an instance is the graph, units are
vertices, and a solution is a partition. Thus swapping vertices between partitions
forms new solutions. In the closed tour problem an instance is the distance matrix,
units are cities, and solutions are tours. Here, changing a city's position in the tour
forms a new solution.
For a problem of size n, we shall say that there are n possible units that can be
manipulated to form new solutions since this should be proportional to the problem
size. After a group of units (denoted U) is changed, all of the units in U become
unavailable for further change. A neighborhood for a solution S is then:
N(S) = { SU | the units in U were changed in S to form SU }.
Each solution has a cost and we denote the gain of changing from solution S to
solution SU as:
g(U) = cost(SU) - cost(S).
In the algorithm, we construct a sequence of solutions: S0, ... , Sm after which there are
no units remaining which can be changed. The integer m depends upon the
neighborhood definition. In the MinCut graph partitioning algorithm this was one less
than n/2, and in the 1-change closed city tour algorithm this was n-1. At each stage in
the sequence we define G[i] as the total gain of Si over S0 or if the units in U were
modified in order to form Si:
http://www.cs.uky.edu/~lewis/cs-heuristic/text/local/gen-tech.html (1 of 4)12/2/2015 10:08:37 AM

G[i] = G[i-1] + g(U).


Figure 1 contains the general algorithm.

Figure 1 - Variable Depth Local Search Algorithm


Examining the algorithm reveals that in order to apply this technique to a problem,
we must define:

an initial feasible solution,
neighborhoods for feasible solutions, and
costs of solutions or objective functions.

Initial solutions appear in two flavors: random, and the result of a greedy algorithm.
Each has its champions. Possibly the best approach is to try both. That is, add a
greedy initial solution to a collection of randomly generated solutions.

Neighborhood definition is an art form. They can be obvious, but many are clever and
elegant, and some border upon genius. The key to a good neighborhood definition is
primarily ease of manipulation. A good formulation makes all of the manipulation
and computation flow quickly and easily. A clumsy neighborhood adds to the
algorithm's complexity.
This brings up the computational methods and data structures, which are part of the
local search algorithm. We must also develop:

representations for feasible solutions,
search techniques for neighborhoods, and
evaluation methods for gains in cost.

Most feasible solution representations are straightforward. A sequence of cities can
be used for closed tours or two sets represent a partition of graph vertices. But,
occasionally clever representations can be found which are easier to manipulate than
obvious ones. Thus designing solution representations is also an art form.
Searching neighborhoods can be very time consuming if the neighborhood is large
and if evaluating the objective function for solutions in the neighborhood is lengthy.
There are two common search strategies. One is a greedy search called first
improvement because the first solution found that is better than the original is
immediately adopted. The other involves searching the entire neighborhood for the
best solution and is called steepest descent. It is not clear that the extra time involved
in searching an entire neighborhood is that useful. Consider the two-dimensional
optimization problem shown in figure 2.

Figure 2 - First Improvement Search Path


Only in a few cases was the best solution in a neighborhood selected, but the
algorithm did find the optimal solution just the same. In fact, it took one more
neighborhood examination than if a full search of each neighborhood took place, but
this is not bad at all if a full search requires O(n2) steps and a restricted search O(n).
This example was of course hypothetical, but it does illustrate the attractiveness of
not searching entire neighborhoods.
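The two strategies differ only in how much of the neighborhood is examined before a move is made; a sketch:

```python
def first_improvement(current, neighborhood, cost):
    """Adopt the first neighbor found that beats the current solution."""
    for s in neighborhood(current):
        if cost(s) < cost(current):
            return s
    return None      # no improving neighbor: current is locally optimal

def steepest_descent(current, neighborhood, cost):
    """Examine the whole neighborhood; adopt the best neighbor if it improves."""
    best = min(neighborhood(current), key=cost)
    return best if cost(best) < cost(current) else None
```

Either routine can drive the outer loop of the basic local search; first improvement may make more moves overall, but each move costs only a partial scan of the neighborhood.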
The variable depth method is a nice compromise between the first improvement and
steepest descent methods. It begins by searching the entire neighborhood, but
reduces its search area as units become unavailable for manipulation.
Finally, if computing the gain involved by changing to a neighboring solution can be
sped up, lots of time can be saved. The min-cut algorithm accomplishes this by
updating the gain values in linear time during each iteration rather than recomputing
all of them at a cost of quadratic time.
In order to perform proper algorithm analysis, three items must be examined when
presenting a local search algorithm:

correctness,
complexity, and
the possibility of achieving an optimal solution.

Correctness is often rather simple to guarantee since all that needs to be shown is
that a feasible solution is produced. Of course, if the cost can be bounded as some of
the previous examples were, this is better.
Complexity is mainly the size of the neighborhood searched times the number of
solutions in a sequence if the outer loop is executed only a few times. Otherwise, the
algorithm might run for an exponential amount of time. After all, if solutions get
better and better it is possible to examine a large number of feasible solutions before
halting. In practice, most algorithms execute the outer loop less than five times for
large problem instance sizes.
The last consideration, proving that it is possible to go from any initial feasible
solution to an optimal one, is a nice touch. It really says that the algorithm has a
chance of achieving optimality if one is very lucky. It also in some sense certifies that
the neighborhood definition and search procedure are reasonable. It is not as trivial
as one might think, since there are highly cited algorithms in the literature that can be
shown to never produce an optimal solution for certain inputs.


RANDOMIZED

One of the difficulties when dealing with nonconvex or NP-complete optimization
problems is that one often falls into local optima. When this happens, the global
optimum can become impossible to reach. In figure 1a there is a convex solution space.
Here there are no local optima, just a global optimum. In figure 1b however we have a
space with lots of local optima (the pyramid tops). If an algorithm gets stuck on one
of these it is possible that it will stay there.

Figure 1 - Solution Space Polytopes


Thus methods designed to lead away from local optima become attractive with
nonconvex solution spaces. We shall examine several that are based upon natural
systems.
The sections are:

Force Directed Optimization
Simulated Annealing
Neural Networks
Genetic Algorithms
DNA Computing (Slides)
Historical Notes and References
Problems

http://www.cs.uky.edu/~lewis/cs-heuristic/text/natural/natrintr.html (2 of 2)12/2/2015 10:08:39 AM

DNA Computing
Table of Contents
What Next?
Geographic Tours
Hamiltonian Paths
Solving NP Problems
deoxyribonucleic acid (DNA) molecule
Complementary Bases Attract
PPT Slide
Using DNA as a Computer
Remember Graphs and Paths?
Building Vertices from DNA
Building Edges from DNA
Vertex and Edge Bonding
Operations on Molecules
Encoding Binary Sequences
DNA Operations
Advantages
Disadvantages
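Though the slides themselves are images, the construction they outline, Adleman's DNA encoding of a graph, can be sketched in code. In this sketch the strand length, vertex names, and helper names are illustrative assumptions, not taken from the slides: each vertex is assigned an arbitrary 8-base strand, an edge strand joins the second half of its source's strand to the first half of its target's, and the Watson-Crick complement of a vertex strand overlaps the junction between consecutive edge strands, splinting them together so that paths self-assemble.

```python
import random

BASES = "ACGT"
PAIRING = str.maketrans("ACGT", "TGCA")   # A-T and C-G are complementary

def complement(strand):
    """Watson-Crick complement of a strand (orientation ignored here)."""
    return strand.translate(PAIRING)

def make_vertex_strands(vertices, length=8, seed=1):
    """Assign each vertex an arbitrary fixed-length DNA strand."""
    rng = random.Random(seed)
    return {v: "".join(rng.choice(BASES) for _ in range(length))
            for v in vertices}

def make_edge_strand(strands, u, v):
    """Edge oligo: second half of u's strand + first half of v's strand."""
    half = len(strands[u]) // 2
    return strands[u][half:] + strands[v][:half]
```

Concatenating the edge strands for a path a, b, c yields the tail of a's strand, all of b's, and the head of c's; the region spelled out by b's strand is exactly where the complement of b can bind to hold the two edge oligos together.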

[The seventeen slides listed above are images and are not reproduced here.]