BRUCE WEIDE
Department of Computer Science, Carnegie-Mellon University, Pittsburgh, Pennsylvania 15213
even more complicated than it needs to be for most purposes. Knuth [43, 44, 46] chooses an intermediate ground for his MIX machine, and derives running times for particular implementations of algorithms. Between the two extremes, results can differ by more than simply a constant scale factor. In choosing a model of computation, we try to balance realism against mathematical tractability. However, our experience is that results can be useful despite what appear to be overly simplified computational models.
Rather than finding exact running times on particular machines, most analyses count only certain "elementary" operations. The complexity measure reports the asymptotic growth of this operation count. Mathematicians have used various notations for such "order" results, as reported by Knuth [47], who suggests the following generally accepted version. To express the fact that a function g(n) grows asymptotically no faster than another function f(n), we write g(n) = O(f(n)), which reads "of order at most f(n)". It is helpful to regard O(f(n)) as the set of functions with the property that for any function g(n) in O(f(n)) there is a constant c such that g(n) ≤ cf(n). Similarly, Ω(f(n)) ("of order at least f(n)") is the set of functions with the property that for any function g(n) in Ω(f(n)) there is a constant c > 0 such that g(n) ≥ cf(n). Finally, Θ(f(n)) ("of order exactly f(n)") is the set of functions with the property that for any function g(n) in Θ(f(n)) there are constants c1 > 0 and c2 such that c1 f(n) ≤ g(n) ≤ c2 f(n).
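Restated in display form (a paraphrase of the definitions just given; the qualifier "for all sufficiently large n" is left implicit, as in the prose):

    \begin{align*}
    O(f(n))      &= \{\, g(n) : \exists\, c > 0,\ g(n) \le c\,f(n) \,\},\\
    \Omega(f(n)) &= \{\, g(n) : \exists\, c > 0,\ g(n) \ge c\,f(n) \,\},\\
    \Theta(f(n)) &= \{\, g(n) : \exists\, c_1, c_2 > 0,\ c_1 f(n) \le g(n) \le c_2 f(n) \,\}.
    \end{align*}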
We commonly write, for example, g(n) = O(f(n)) rather than g(n) ∈ O(f(n)). In the present context, O-notation is used to describe upper bounds and Ω-notation is used for lower bounds. The Θ-notation is used when it is possible to characterize the complexity to within a constant factor.
Aho, Hopcroft, and Ullman [1] argue convincingly that such asymptotic analyses are especially meaningful in light of faster hardware. How much larger problems such machines can solve, and therefore their cost-effectiveness, depends on the growth rate of the computation time.

Consider the problem of finding the maximum element in a list of n items from a linearly ordered set. An appropriate choice for an elementary operation is a comparison between two items of the list, because the items might be records for which comparisons are nontrivial, while bookkeeping operations such as loop control and pointer management are usually proportional to the number of comparisons. By counting dominant operations, the asymptotic complexity measure is off by only a constant factor from the true number of operations performed.
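To illustrate the idea of counting only the dominant operation, here is a minimal sketch (Python is used for concreteness; the instrumentation is purely illustrative). Finding the maximum performs exactly n - 1 item-to-item comparisons, and every bookkeeping step is proportional to that count:

    def maximum(items):
        """Return the maximum of a nonempty list, counting only
        item-to-item comparisons (the dominant operation)."""
        comparisons = 0
        best = items[0]
        for x in items[1:]:
            comparisons += 1          # one comparison per remaining item
            if x > best:
                best = x
        return best, comparisons      # comparisons == len(items) - 1

    print(maximum([3, 1, 4, 1, 5, 9, 2, 6]))  # (9, 7)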
Some analyses use the "uniform cost criterion," where memory references, comparisons, arithmetic operations, and all other instructions each take unit time. Still others employ the "logarithmic cost criterion," in which the charge for accessing a piece of information is proportional to the number of symbols needed to represent it. Normally, the choice of operations to be counted is not crucial to asymptotic analysis, assuming that the dominant operation is among them.

Measuring Problem Size

Problem size is another vague concept. It could be made exact by letting n be the number of symbols required to encode the problem for a particular Turing machine or for some other computational model. This encoding is extremely important (see [1], Section 10.1). Suppose that an algorithm takes as input a single integer, and that the input k requires ck time for some constant c. Let n(k) refer to the length of the encoding of the integer k (i.e., n(k) is the problem size when the input is the integer k). If k is encoded in unary notation, then n(k) = k, so the algorithm runs in cn time. However, if k is encoded in binary, then n(k) = log2 k, so that the running time of the algorithm is c·2^n. Notice that for any radix representation, the size of the representation is within a constant factor of the size of the binary encoding. Because of this, along with the fact that in practice we nearly always use a radix representation for integers, unary notation is not appropriate.
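A quick sketch of the encoding-size point (illustrative names, not from the article): a running time proportional to k is linear in the unary problem size n(k) = k but exponential in the binary problem size n(k) ≈ log2 k:

    def unary_size(k):
        return k                      # n(k) = k symbols in unary

    def binary_size(k):
        return k.bit_length()         # n(k) = floor(log2 k) + 1 symbols

    k = 1_000_000
    # A running time of c*k steps is c*n in unary terms (n = unary_size(k)),
    # but roughly c*2**n in binary terms (n = binary_size(k)).
    print(unary_size(k), binary_size(k))   # 1000000 20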
Problem representation can be of major
must produce this output if it works properly.

Suppose that one of the comparisons between adjacent elements from this final list had not been made during the course of execution of the algorithm; say, A1:B2. Then the configuration:

    B1 < B2 < A1 < A2 < ⋯ < Bn < An

would also be a legitimate possible outcome, being indistinguishable from the correct answer on the basis of the comparisons which were made. Hence, all 2n - 1 comparisons between adjacent elements of the final list must be performed for the algorithm to produce the correct output. An algorithm with this performance is easily constructed. We therefore know that for the problem of merging two ordered lists of n elements, L(n) = U(n) = 2n - 1.
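Such an algorithm is, as the text says, easy to construct; a sketch of the standard two-pointer merge, instrumented to count comparisons, performs exactly 2n - 1 of them on two fully interleaved n-element lists:

    def merge(a, b):
        """Merge two sorted lists, counting element comparisons.
        Worst case (fully interleaved inputs): len(a) + len(b) - 1."""
        out, comparisons = [], 0
        i = j = 0
        while i < len(a) and j < len(b):
            comparisons += 1
            if a[i] <= b[j]:
                out.append(a[i]); i += 1
            else:
                out.append(b[j]); j += 1
        out.extend(a[i:]); out.extend(b[j:])
        return out, comparisons

    # B1 < A1 < B2 < A2 < ... : the adversary's worst case
    print(merge([2, 4, 6], [1, 3, 5]))  # ([1, 2, 3, 4, 5, 6], 5)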
For searching an ordered table, inquiries are of the form: "Is the key element less than this element?" The obvious oracle simply responds to inquiries in such a way that the key item is always in the larger of the two portions of the list. Thus at most half the table can be eliminated from consideration with each comparison, and at least ⌈log n⌉ comparisons are required. Hyafil [33], among many others, has used an oracle to prove a lower bound for the selection problem (finding the kth-largest of n elements), where the trivial bound is simply n. In this case, both upper and lower bounds are known to be Θ(n), and the oracle provides a way of refining the lower bound to permit comparison with precise upper bounds.
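A sketch of the matching strategy (an illustrative rendering, not code from the article): binary search issues only "is the key less than this element?" inquiries, and about ⌈log2 n⌉ of them suffice, meeting the oracle's bound:

    def search(table, key):
        """Binary search over a sorted table using only
        'is key < element?' inquiries; counts the inquiries made."""
        lo, hi = 0, len(table)        # invariant: key's slot lies in [lo, hi)
        inquiries = 0
        while hi - lo > 1:
            mid = (lo + hi) // 2
            inquiries += 1
            if key < table[mid]:
                hi = mid              # discard the upper portion
            else:
                lo = mid              # discard the lower portion
        return table[lo] == key, inquiries   # about ceil(log2 n) inquiries

    print(search([1, 3, 5, 7, 9, 11, 13, 15], 7))  # (True, 3)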
Problem Reduction

One of the most elegant means of proving a lower bound on a problem P1 is to show that an algorithm for solving P1, along with a transformation on problem instances, could be used to construct an algorithm to solve another problem P2 for which a lower bound is known. The power of this approach is substantial.

Shamos and Hoey [66] use problem reduction to show that an algorithm for finding the Euclidean minimum spanning tree of n points in the plane can be used to solve the element uniqueness problem, and must therefore take time Ω(n log n). The reduction is quite simple. Suppose that we want to determine whether any two of the numbers x1, x2, ⋯ xn are equal. We can solve this problem by giving any Euclidean minimum spanning tree algorithm the points (x1, 0), (x2, 0), ⋯ (xn, 0). The two closest points are known to be joined by one of the n - 1 spanning-tree edges, so we can simply scan these edges and, in linear time, determine if any edge has zero length. Such an edge exists if and only if two of the x_i are equal. Therefore, if the spanning tree algorithm could operate in less than O(n log n) time, the element uniqueness problem could be solved in less than O(n log n) time, contradicting the Ω(n log n) lower bound mentioned earlier.
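The reduction itself is only a few lines. In the sketch below, euclidean_mst is a hypothetical routine (not specified in the text) returning the n - 1 tree edges; the zero-length scan is the linear-time step just described:

    import math

    def has_duplicate(xs, euclidean_mst):
        """Element uniqueness via the EMST reduction sketched above.
        euclidean_mst: hypothetical routine mapping a list of points to
        the (n - 1) spanning-tree edges, as ((x1, y1), (x2, y2)) pairs;
        it is assumed to run in O(n log n)."""
        points = [(x, 0.0) for x in xs]          # embed the numbers on a line
        edges = euclidean_mst(points)
        return any(math.dist(p, q) == 0.0 for p, q in edges)  # linear scan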
Other applications include the reduction of context-free language recognition to matrix multiplication [74] and the mutual reductions between Boolean matrix multiplication and transitive closure [18]. In these cases, the reductions provide the potential for proving lower bounds, but nontrivial lower bounds are not known for these particular problems.

Many other examples of this technique are found in transformations between so-called NP-complete problems, where the purpose is not to show lower bounds but to demonstrate membership in the equivalence class of NP-complete problems (see Section 5). Note that it is not always clear how to identify the problem P2, which is of course a requirement for using this approach.

Miscellaneous Tricks

Among the newer approaches for proving lower bounds is the use of graph models of algorithms. Hyafil and Kung [34] show tradeoffs between the depth and breadth of trees describing the parallel evaluation of arithmetic expressions to show that the possible speedup using k processors is bounded by (2k+1)/3. This result is counter to the intuition that having k processors available would allow a speed-
for which the exact solution is T(n) = n(n - 1)/2. Of course, it is clearly not sufficient to choose x arbitrarily if a worst case of Θ(n²) must be avoided. Rather, x should partition S into approximately equal parts. Aho, Hopcroft, and Ullman [1] call this the "principle of balancing." It can be accomplished by choosing x as the median element of S, whereupon the recurrence becomes

    T(n) ≤ 2T(⌊n/2⌋) + P(n) + C(n)   for n > 1,
    T(1) = T(0) = 0.
As in the case of binary search, '≤' rather than '=' is used because n may be odd and because the partitioning element is already in place and need not be considered in the recursive step. T(n) still provides an upper bound since neither remaining list can contain more than half the elements. It is convenient to think of T as being defined not only for integer arguments, but on the entire real line. Since T is a nondecreasing function, T(⌊x⌋) ≤ T(x), so the inequality remains valid if ⌊n/2⌋ is replaced by n/2. This step greatly simplifies the task of solving the recurrence. Now, P(n) = n - 1 as before; however, C(n) is no longer zero but the number of comparisons necessary to find the median of n elements. Blum et al. [5] present an algorithm which finds the median in at most 5.43n comparisons, and Schonhage, Paterson, and Pippenger [63] have an algorithm which uses at most 3n comparisons. Taking C(n) ≤ 3n, we have

    T(n) ≤ 2T(n/2) + 4n - 1   for n > 1,
    T(1) = T(0) = 0,

for which the solution is T(n) ≤ 4n log n - n + 1, so that T(n) is O(n log n). This algorithm is not practical, however, because the 3n median algorithm is extremely complicated, and because the expected behavior of Quicksort is very good without such a modification (see Section 4).
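The stated solution can be verified by unrolling the recurrence for n = 2^k with T(1) = 0 (a routine expansion, supplied here as a check):

    \begin{align*}
    T(n) &\le 2T(n/2) + 4n - 1
          \le \sum_{i=0}^{k-1} 2^i \left( 4 \cdot \frac{n}{2^i} - 1 \right)\\
         &= 4kn - (2^k - 1) = 4n \log n - n + 1 .
    \end{align*}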
As a final example, consider Strassen's algorithm [71] for matrix multiplication. The algorithm is based on recursive application of a method for multiplying 2 × 2 matrices with elements from an arbitrary ring, using seven multiplications and 18 additions (as opposed to the textbook method which uses eight multiplications and four additions). Aho, Hopcroft, and Ullman [1] explain how the number of additions can be reduced to 15, which is optimal.
Suppose n = 2^k. Strassen's algorithm begins by partitioning each of the original matrices into four submatrices of size 2^(k-1) × 2^(k-1) (to which the algorithm is applied recursively), then multiplies the 2 × 2 matrices (which have 2^(k-1) × 2^(k-1) matrices as elements) using the seven-multiplication 15-addition algorithm. The recursive application of the algorithm to perform the seven multiplications is the key to its efficiency. Even though the total number of scalar operations used to multiply 2 × 2 matrices of scalars is 22, as opposed to 12 using the classical method, the number of multiplications is reduced by the new algorithm. When the elements of the 2 × 2 matrices are themselves matrices, this fact becomes important, because matrix multiplications are more costly than matrix additions.
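A compact sketch of the recursion for n = 2^k (plain Python lists, no attention to constant factors; it uses the seven products in their original 18-addition form rather than the 15-addition variant assumed by the analysis below):

    def add(A, B):  return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
    def sub(A, B):  return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

    def split(M):
        """Partition an n x n matrix (n a power of 2) into four quadrants."""
        h = len(M) // 2
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])

    def strassen(A, B):
        """Multiply n x n matrices, n = 2^k, using 7 recursive products."""
        if len(A) == 1:
            return [[A[0][0] * B[0][0]]]      # T(1) = 1 scalar multiplication
        A11, A12, A21, A22 = split(A)
        B11, B12, B21, B22 = split(B)
        M1 = strassen(add(A11, A22), add(B11, B22))
        M2 = strassen(add(A21, A22), B11)
        M3 = strassen(A11, sub(B12, B22))
        M4 = strassen(A22, sub(B21, B11))
        M5 = strassen(add(A11, A12), B22)
        M6 = strassen(sub(A21, A11), add(B11, B12))
        M7 = strassen(sub(A12, A22), add(B21, B22))
        C11 = add(sub(add(M1, M4), M5), M7)
        C12 = add(M3, M5)
        C21 = add(M2, M4)
        C22 = add(sub(add(M1, M3), M2), M6)
        top = [r1 + r2 for r1, r2 in zip(C11, C12)]
        bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
        return top + bot

    print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]

The recursion bottoms out at 1 × 1 matrices, mirroring the initial condition T(1) = 1 used in the analysis that follows.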
An analysis of the tradeoffs involved in sacrificing 11 additions in order to save one multiplication begins with the recurrence describing the number of scalar operations used to multiply a pair of n × n matrices:

    T(n) = 7T(n/2) + 15(n/2)²   for n > 1,
    T(1) = 1.

Here, the recursive step requires seven applications of the algorithm to (n/2) × (n/2) matrices, plus 15 additions of such matrices. Each matrix addition takes (n/2)² scalar operations. The initial condition is obvious, since multiplication of two 1 × 1 matrices consists of a single scalar multiplication.
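The claimed closed form can be checked by substitution (a routine calculation, added here): with n = 2^k, try T(n) = a·7^k + b·4^k in the recurrence.

    \begin{align*}
    b\,4^k &= 7b\,4^{k-1} + 15 \cdot 4^{k-1} \implies b = -5,\\
    T(1)   &= a + b = 1 \implies a = 6,\\
    T(n)   &= 6 \cdot 7^{\log_2 n} - 5 \cdot 4^{\log_2 n}
            = 6\,n^{\log_2 7} - 5n^2 .
    \end{align*}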
The solution is T(n) = 6n^(log 7) - 5n²; thus T(n) is O(n^(log 7)) = O(n^2.81), compared to O(n³) for the classical method. Because of the factor of 6, though, the classical algorithm (which takes 2n³ - n² operations) is still faster for n less than about 300. By using a hybrid scheme which uses Strassen's algorithm for large matrices and the classical algorithm for smaller ones, this crossover point can be
techniques reported in the previous section are still applicable, although some of the recurrences are tougher to handle and therefore stronger solution methods may need to be used.
Pros and Cons of Average-Case Analysis

The primary reason for analyzing the behavior of algorithms "on the average" is, of course, that a worst case may arise so rarely (perhaps never) in practice that some other complexity measure would be more useful. An alternative to worst-case analysis that immediately comes to mind is some sort of average-case analysis. Rather than try to define and analyze a particular case which is somehow "average," the approach is to simultaneously analyze all cases and to weight the individual case complexities with the appropriate probabilities of each case occurring. Obviously, this complicates the mathematics considerably. If this were the only objection to doing average-case analysis, all that would be required would be more sophisticated tools, which could be discussed in detail here. However, more serious questions have been raised which tend to cast considerable doubt on the entire venture; this is the main reason for not going into more detailed description of the methods used in average-case analysis. Rather, the reader is urged to consult the original sources.
The most important objection is that, typically, there is no way to identify the probability distribution over all problem occurrences. While it may be reasonable in some situations to assume that every possible abstract problem instance is equally likely (for example, that every item is equally likely to be the key in a binary search, or that every permutation is equally likely to be the input to Quicksort, or that every n-vertex graph is equally likely), this assumption really only makes sense if the problem space is finite. It clearly makes no sense, for example, to say that every integer program is equally likely. Furthermore, even when it does have meaning, the assumption of a uniform distribution over all possible inputs may not be at all realistic if we have prior knowledge about the likelihood of various inputs. While a uniform distribution is not the only possible assumption, it is the one most often encountered. Exceptions are Spira [70] on the expected time for finding all shortest paths in a graph, and Bentley and Shamos [4] on the expected behavior of some geometric algorithms. In both of these cases, the only important assumptions deal with independence of the random variables involved and not with their distribution. The assumptions made in analyzing scheduling algorithms or parallel computations often include exponentially distributed or Erlangian distributed service times for tasks [8, 2].
In one attempt to answer this objection, Yuval [79] has suggested that algorithms might "randomize" their inputs in order to make the assumption appear valid. He has pointed out that if suitable random steps were being taken at certain stages, an algorithm could have good expected behavior for every input, and thereby assure good expected-case solution times regardless of the probability distribution being assumed (see also [55]). For example, Quicksort could choose the partitioning element randomly (this idea was offered by Hoare [28] in his original paper on Quicksort). Even though this might seem a case of the tail wagging the dog, there is some justification for such an approach in this instance. In order to make the analysis of the algorithm tractable, Sedgewick [64] assumes that the files to be sorted are random, and presents evidence that the algorithm works better if they are! Clearly, not all algorithms can be modified in this manner. The key element in binary search cannot be "randomized"; it is given as the sole input. Similarly, the array in which the search is to be made is certainly fixed during the search, so there is no room to manipulate the algorithm in this way.
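A sketch of the randomizing step applied to Quicksort (a simplified list-building variant for illustration, not Hoare's in-place partition): because the pivot is drawn at random, the expected O(n log n) analysis applies to every fixed input, whatever distribution inputs are drawn from:

    import random

    def quicksort(xs):
        """Quicksort with a randomly chosen partitioning element, so the
        expected-case analysis holds for every input ordering."""
        if len(xs) <= 1:
            return xs
        pivot = random.choice(xs)                 # the randomizing step
        less    = [x for x in xs if x < pivot]
        equal   = [x for x in xs if x == pivot]
        greater = [x for x in xs if x > pivot]
        return quicksort(less) + equal + quicksort(greater)

    print(quicksort([5, 3, 8, 1, 9, 2, 7]))  # [1, 2, 3, 5, 7, 8, 9]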
Despite the shaky basis for assuming random inputs, the results of analysis using this assumption may be reasonable approximations for other distributions. Moreover, nonuniform distributions complicate the recurrences.

Knowing the average behavior of an
of NP. While there is overwhelming circumstantial evidence that such problems are not also in P, no proof of this conjecture has yet been produced.
A language L1 is "polynomially reducible" to L2 if there is a deterministic polynomial-time algorithm which transforms a string x into a string f(x) such that x is in L1 iff f(x) is in L2. Among the consequences of reducibility is the fact that if there is a polynomial-time deterministic algorithm to recognize L2 and L1 is reducible to L2, then a polynomial-time deterministic algorithm to recognize L1 can be constructed. It consists of applying the transformation f to the input x and then checking whether f(x) is in L2, all of which can be done deterministically in polynomial time.
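The constructed recognizer is literally the composition of the two polynomial-time pieces; as a sketch (f and recognize_L2 are placeholders for the transformation and the L2 recognizer):

    def make_recognizer_L1(f, recognize_L2):
        """Given a polynomial-time transformation f and a polynomial-time
        recognizer for L2, build a polynomial-time recognizer for L1."""
        def recognize_L1(x):
            return recognize_L2(f(x))   # x in L1  iff  f(x) in L2
        return recognize_L1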
The key to the argument that P ≠ NP is a remarkable theorem by Cook [12] stating that every problem in NP can be polynomially reduced to Boolean satisfiability. This problem is very simply stated: Is there an assignment of truth values to the literals of a Boolean expression which makes the expression true? This means that every problem which can be solved in polynomial time on a nondeterministic TM can also be solved by subjecting the input string to a transformation (done deterministically in polynomial time) that converts it to an instance of satisfiability, and then solving the resulting satisfiability problem. Many other problems have subsequently been shown to have the same property. A problem such as satisfiability is called "NP-complete." One way to prove that a problem is NP-complete is to demonstrate that the problem is in NP, and that another NP-complete problem is reducible to it. All NP-complete problems are also "P-complete" (see Sahni [60]), i.e., in the class of problems solvable in deterministic polynomial time iff P = NP. The distinction between these definitions has all but disappeared in the literature, and the term "NP-complete" has come to be used for either one. Under either definition, if there is a polynomial-time deterministic algorithm for any NP-complete problem, then P = NP.

Karp [39] shows reducibilities among problems which demonstrate that the class of NP-complete problems is quite large, and includes all the optimization and graph problems mentioned above. This fact forms the basis for believing (even if one cannot prove) that P ≠ NP, since none of the hundreds of algorithms for the scores of problems that are NP-complete runs in polynomial time. If any of these problems could be solved quickly, all of them could be; the fact that so far none of them can be is a convincing argument (although not a proof) that they never will be.

Whether P = NP is not the only open question in this area. Some problems remain unclassified. For example, deciding whether two regular expressions are equivalent, or whether a string is in a given context-sensitive language, are problems at least as hard as any problems in NP, but are not known to be in NP themselves (see Aho, Hopcroft, and Ullman [1]). Such problems are called "NP-hard" because their inclusion in P would imply that P = NP. On the other hand, deciding whether two given graphs are isomorphic, whether a given integer is a prime, or whether a given integer is not a prime, have not been shown to be NP-complete, even though they are in NP and cannot currently be solved in polynomial time. Hence, these may be problems in NP but not in P, whose solutions will not help solve the other problems in NP. Ladner [50] has shown that P ≠ NP implies the existence of such "intermediate" problems, but it is not known whether the above problems are in this class.

"Guaranteed" Approximation Algorithms

All known algorithms for solving NP-complete problems run in nonpolynomial deterministic time. It is therefore impractical to solve very large instances of these problems, even though some of them are among the most important problems of graph theory and operations research. Fortunately many applications requiring solution of such problems do not require exact solutions. To take advantage of this fact, algorithm designers have developed many new approximation methods. Most of these algorithms seek near-optimal solutions to all instances of a problem.
A generally accepted error measure for these algorithms is the maximum relative error over all problem instances [37]. Normally, the quantities from which the relative error is computed are obvious: an integer program's objective function value, a graph's chromatic number for the graph coloring problem, or a schedule's length. Sometimes, as for the satisfiability problem, the original problem must be rephrased as an optimization problem in order to study approximate solutions. The ratio of the absolute error to the exact solution value is given the symbol ε.
In designing approximation algorithms of this type, one is concerned with guaranteeing that the relative error ε is never larger than some prescribed maximum, which essentially means that ε is taken as given. The goal is to develop an algorithm that always solves the problem to within a factor of 1 ± ε. For this reason, many of these algorithms tend to resemble the patterns commonly encountered in calculus proofs, with strange functions of ε appearing, as if by magic, in the early stages of the algorithm in order to assure that the final solution will be within a factor of 1 - ε of optimal. Recognizing this takes much of the mystery out of what can appear to be very complicated algorithms. Because error analysis is part of the design of such algorithms, understanding the analysis is tantamount to understanding the operation of the algorithm. Sahni [62] nicely summarizes the techniques in this class, dividing the methods into three categories. These are especially useful for the knapsack problem, packing problems, and certain scheduling problems. The worst-case complexities they produce are usually like O(n^a/ε^b) for constants a and b which depend on the problem.
An example of one of Sahni's techniques, called interval partitioning, is a simple algorithm for solving the 0/1 knapsack problem:

    maximize: c·x
    subject to: a·x ≤ b
                x ∈ {0, 1}^n

Here, c and a are n-vectors of positive "utilities" and "weights", respectively, and b is a scalar representing the capacity of a knapsack. The objective is to fill the knapsack with a subset of the n items so that the total utility c·x is maximized without violating the capacity constraint a·x ≤ b. Item i is to be included iff x_i = 1.
The problem can be solved by a straightforward tree-search method. For any partial assignment to the variables x1, x2, ⋯ x_i which we may have at level i of the tree, there correspond two more assignments (letting x_{i+1} be either 0 or 1) at level i + 1. A feasible solution is one for which a·x ≤ b, and a partial assignment at level i consists of the fixed variables x1 through x_i, with the remaining ones set to 0. The key observation is that the infeasibility of a partial assignment implies the infeasibility of every completion; accordingly, we may prune the tree at that point. However, even with such pruning rules, the total number of solution candidates generated may be exponential in n.

An approximation scheme using interval partitioning is constructed as follows: let P_i be the maximum total utility of partial solutions on level i. Divide the interval [0, P_i] into subintervals of size P_i ε/n, and discard all candidates with total utility in the same interval except the one with the least total weight. There are now at most n/ε + 1 nodes on each level, and therefore only O(n²/ε) nodes in the entire tree. The errors introduced at each stage are additive, so the total error is bounded by ε. This means that in O(n²/ε) time we can solve the 0/1 knapsack problem to within ε, for every ε > 0. By another approach, Ibarra and Kim [35] have shown that this can be reduced to O(n log n) + O((3/ε)⁴ log(3/ε)).
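A sketch of the scheme as just described (the function name is hypothetical, and ties and interval bookkeeping are handled in the simplest way): partial solutions grow level by level, infeasible ones are pruned, and within each utility subinterval of width P_i·ε/n only the lightest candidate survives:

    def knapsack_interval_partition(utilities, weights, capacity, eps):
        """Interval-partitioning approximation for the 0/1 knapsack problem:
        returns a total utility within a factor of (1 - eps) of optimal."""
        n = len(utilities)
        candidates = [(0, 0)]                       # (total utility, total weight)
        for i in range(n):
            c, w = utilities[i], weights[i]
            nxt = candidates + [(u + c, wt + w) for u, wt in candidates
                                if wt + w <= capacity]   # prune infeasible ones
            p = max(u for u, _ in nxt)              # P_i: best utility on level i
            width = p * eps / n if p > 0 else 1.0   # subinterval size P_i*eps/n
            best = {}                               # lightest candidate per subinterval
            for u, wt in nxt:
                k = int(u / width)
                if k not in best or wt < best[k][1]:
                    best[k] = (u, wt)
            candidates = list(best.values())        # at most n/eps + 1 survive
        return max(u for u, _ in candidates)

    # Optimal here is 13 (items 1 and 2); the answer is within a factor 1 - eps:
    print(knapsack_interval_partition([6, 7, 4], [4, 5, 3], 9, 0.1))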
Shamos and Yuval [67] have demonstrated an interesting approximation algorithm for a problem which is relatively easy. It finds an ε-approximation to the mean distance between n points in the plane in O(n) time although an optimal solution requires O(n²) time.

Not all NP-complete problems lend themselves equally well to approximation, even though they are, in a certain sense, of the same time complexity. D. S. John-

Median: Find the median of a list of n elements of a linearly ordered set. (The median is one which is at least as large as half the elements, and as small as the others.)

Merging: Coalesce two ordered lists into one.

Minimum spanning tree: For a given graph G with edge weights, find the minimum-cost set of edges which connects all the vertices of G without forming any cycles. For the Euclidean version, the vertices are points in space and the edge weights are Euclidean interpoint distances.

Nonprimes: Determine whether a given integer is a composite number (i.e., can be factored).

Planarity of a graph: Determine whether a given graph can be drawn in the plane without any crossing edges.

Primes: Determine whether a given integer is a prime number.

Satisfiability: For a given Boolean expression, determine whether there is any assignment of values to the literals for which the expression is true.

Scheduling: Given n tasks and some constraints on the order in which they may be done, find the best sequence in which to perform them. There are many variations which involve number of processors, processing times, precedence, deadlines, and penalties for late completion.

Search: Given a set of n elements and a "key" element, determine whether the key element is in the given set. Many variations of this problem have been defined.

Selection: Find the kth-smallest element from a given list of n linearly ordered elements.

Set identity: Determine whether two given n-element sets are identical.

Set manipulation (UNION-FIND): A UNION operation forms the union of two sets; a FIND operation determines which set contains a particular element. Perform a sequence of n such operations.

Shortest path: Given a graph G with nonnegative edge weights, and a distinguished vertex v, find the minimum-weight path from v to every other vertex of G. A modified version of this single-source problem allows negative edge weights. The all-pairs version requires minimum-weight paths between all pairs of vertices of G.

Sorting: Arrange a list of n elements {x_i} from a linearly ordered set so that x1 ≤ x2 ≤ ⋯ ≤ xn.

Transitive closure: Given a Boolean matrix A, compute A + A² + A³ + ⋯.

Traveling salesman: Given a graph G with nonnegative edge weights, find the minimum-cost cycle which passes through each vertex exactly once. For the Euclidean version of this problem, the vertices are points in space, and the edge weights are Euclidean interpoint distances.

ACKNOWLEDGMENTS

The author would like to thank Michael I. Shamos, H. T. Kung, Samuel H. Fuller, Jon L. Bentley, and Richard M. Karp for their helpful critiques, and Peter Denning for also suggesting the problem glossary.
REFERENCES

[1] AHO, A. V.; HOPCROFT, J. E.; AND ULLMAN, J. D. The design and analysis of computer algorithms, Addison-Wesley, Reading, Mass., 1974.
[2] BAUDET, G. Numerical computations on asynchronous multiprocessors, Carnegie-Mellon Univ., Dept. Computer Sci., Pittsburgh, Penn., April 1976.
[3] BEARDWOOD, J.; HALTON, J. H.; AND HAMMERSLEY, J. M. "The shortest path through many points," Proc. Cambridge Philosophical Soc. 55, 4 (Oct. 1959), 299-327.
[4] BENTLEY, J. L.; AND SHAMOS, M. I. Divide and conquer for linear expected time, Carnegie-Mellon Univ., Dept. Computer Sci., Pittsburgh, Penn., March 1977; to appear in Inf. Process. Lett.
[5] BLUM, M.; FLOYD, R. W.; PRATT, V.; RIVEST, R. L.; AND TARJAN, R. E. "Time bounds for selection," J. Comput. Syst. Sci. 7, 4 (Aug. 1973), 448-461.
[6] BORODIN, A. "Computational complexity theory and practice," in Currents in the theory of computing, A. V. Aho, [Ed.], Prentice-Hall, Inc., Englewood Cliffs, N.J., 1973, pp. 35-89.
[7] BORODIN, A.; AND MUNRO, I. The computational complexity of algebraic and numeric problems, American Elsevier, N.Y., 1975.
[8] CHANDY, K. M.; AND REYNOLDS, P. F. "Scheduling partially ordered tasks with probabilistic execution times," in Proc. 5th Symp. Operating Systems Principles, ACM, N.Y., 1975, pp. 169-177.
[9] COFFMAN, E. G. [Ed.] Computer and job shop scheduling theory, John Wiley and Sons, Inc., N.Y., 1976.
[10] COFFMAN, E. G.; AND SETHI, R. "A generalized bound on LPT sequencing," in Proc. Int. Symp. Computer Performance Modeling, Measurement, and Evaluation, ACM, N.Y., 1976, pp. 306-310.
[11] COHEN, J.; AND ROTH, M. "On the implementation of Strassen's fast multiplication algorithm," Acta Inf. 6, 4 (1976), 341-355.
[12] COOK, S. A. "The complexity of theorem proving procedures," in Proc. 3rd Ann. ACM Symp. Theory of Computing, ACM, N.Y., 1971, pp. 151-158.
[13] DAHLQUIST, G.; AND BJÖRCK, Å. Numerical methods, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1974.
[14] DIJKSTRA, E. W. "A note on two problems in connexion with graphs," Numer. Math. 1, (1959), 269-271.
[15] DOBKIN, D.; AND LIPTON, R. On the complexity of computations under varying sets of primitives, Tech. Rep. no. 42, Yale Univ., Dept. Computer Sci., New Haven, Conn., 1975.
[16] EDMONDS, J.; AND KARP, R. M. "Theoretical improvements in algorithmic efficiency for network flow problems," J. ACM 19, 2 (April 1972), 248-264.
[17] ERDÖS, P.; AND SPENCER, J. Probabilistic methods in combinatorics, Academic Press, N.Y., 1974.
[18] FISCHER, M. J.; AND MEYER, A. R. "Boolean matrix multiplication and transitive closure," in Conf. Record, IEEE 12th Ann. Symp. Switching Automata Theory, IEEE, N.Y., 1971, pp. 129-131.
[19] FLOYD, R. W. "Algorithm 245: Treesort 3," Commun. ACM 7, 12 (Dec. 1964), 701.
[20] FLOYD, R. W.; AND RIVEST, R. L. "Expected time bounds for selection," Commun. ACM 18, 3 (March 1975), 165-172.
[21] FORD, L. R.; AND JOHNSON, S. M. "A tournament problem," Am. Math. Monthly 66, (1959), 387-389.
[22] FRAZER, W. D. "Analysis of combinatory algorithms--a sample of current methodology," in Proc. AFIPS 1972 Spring Jt. Computer Conf., Vol. 40, AFIPS Press, Montvale, N.J., pp. 483-491.
[23] FULLER, S. H.; GASCHNIG, J. G.; AND GILLOGLY, J. J. Analysis of the alpha-beta pruning algorithm, Carnegie-Mellon Univ., Dept. Computer Sci., Pittsburgh, Penn., July 1973.
[24] GAREY, M. R.; AND JOHNSON, D. S. "The complexity of near-optimal graph coloring," J. ACM 23, 1 (Jan. 1976), 43-49.
[25] GAREY, M. R.; AND JOHNSON, D. S. "Approximation algorithms for combinatorial problems: an annotated bibliography," in Algorithms and complexity: New directions and recent results, J. F. Traub, [Ed.], Academic Press, N.Y., 1976, pp. 41-52.
[26] GRAHAM, R. L. "An efficient algorithm for determining the convex hull of a finite planar set," Inf. Process. Lett. 1, (1972), 132-133.
[27] GUIBAS, L. J.; AND SZEMERÉDI, E. "The analysis of double hashing," extended abstract in Proc. 8th Ann. ACM Symp. Theory Computing, ACM, N.Y., 1976, pp. 187-191.
[28] HOARE, C. A. R. "Quicksort," Comput. J. 5, (1962), 10-15.
[29] HOPCROFT, J. E. "Complexity of computer computations," in Proc. IFIP Congress 74, Vol. 3, North-Holland Publ. Co., Amsterdam, The Netherlands, 1974, pp. 620-626.
[30] HOPCROFT, J. E.; AND TARJAN, R. E. "Efficient algorithms for graph manipulation," Commun. ACM 16, 6 (June 1973), 372-378.
[31] HOPCROFT, J. E.; AND ULLMAN, J. D. Formal languages and their relation to automata, Addison-Wesley, Reading, Mass., 1969.
[32] HOPCROFT, J. E.; AND ULLMAN, J. D. "Set merging algorithms," SIAM J. Comput. 2, 4 (Dec. 1973), 294-303.
[33] HYAFIL, L. "Bounds on selection," SIAM J. Comput. 5, 1 (March 1976), 109-114.
[34] HYAFIL, L.; AND KUNG, H. T. "The complexity of parallel evaluation of linear recurrences," in Proc. 7th Ann. ACM Symp. Theory Computing, ACM, N.Y., 1975, pp. 12-22.
[35] IBARRA, O.; AND KIM, C. "Fast approximation algorithms for the knapsack and sum of subsets problems," J. ACM 22, 4 (Oct. 1975), 463-468.
[36] JOHNSON, D. B. "A note on Dijkstra's shortest path algorithm," J. ACM 20, 3 (July 1973), 385-388.
[37] JOHNSON, D. S. "Approximation algorithms for combinatorial problems," in Proc. 5th Ann. ACM Symp. Theory Computing, ACM, N.Y., 1973, pp. 38-49; also in J. Comput. Syst. Sci. 9, 3 (Dec. 1974), 256-278.
[38] JOHNSON, D. S. "Fast algorithms for bin packing," J. Comput. Syst. Sci. 8, 3 (June 1974), 272-314.
[39] KARP, R. M. "Reducibility among combinatorial problems," in Complexity of computer computations, R. E. Miller, and J. W. Thatcher, [Eds.], Plenum Press, N.Y., 1972, pp. 85-103.
[40] KARP, R. M. "On the computational complexity of combinatorial problems," Networks 5, 1 (Jan. 1975), 45-68.
[41] KARP, R. M. "The probabilistic analysis of some combinatorial search algorithms," in Algorithms and complexity: New directions and recent results, J. F. Traub, [Ed.], Academic Press, N.Y., 1976, 1-19.
[42] KLEINROCK, L. Queueing systems, Vol. I: Theory, John Wiley and Sons, Inc., N.Y., 1975.
[43] KNUTH, D. E. The art of computer programming, Vol. I: Fundamental algorithms, Addison-Wesley, Reading, Mass., 1968.
[44] KNUTH, D. E. The art of computer programming, Vol. II: Seminumerical algorithms, Addison-Wesley, Reading, Mass., 1969.
[45] KNUTH, D. E. "Mathematical analysis of algorithms," in Proc. IFIP Congress 71, Vol. 1, North-Holland Publ. Co., Amsterdam, The Netherlands, 1971, pp. 135-143.
[46] KNUTH, D. E. The art of computer programming, Vol. III: Sorting and searching, Addison-Wesley, Reading, Mass., 1973.
[47] KNUTH, D. E. "Big omicron and big omega and big theta," SIGACT News 8, 2 (April/June 1976), 18-24.
[48] KNUTH, D. E.; AND MOORE, R. W. "An analysis of alpha-beta pruning," Artif. Intell. 6, (1975), 293-326.
[49] KNUTH, D. E.; MORRIS, J. H.; AND PRATT, V. R. Fast pattern matching in strings, Tech. Rep. STAN-CS-74-440, Computer Sci. Dept., Stanford Univ., Stanford, Calif., Aug. 1974; also Knuth, D. E.; Morris, J. H.; and Pratt,