
Graphs & Combinatorial Problems

• A new part of the course – will cover the more theoretical aspects required in later lectures
  – Graphs, cliques, and colouring
  – Algorithms and intractability
  – Linear programming and integer linear programming
  – Shortest and longest path algorithms
• This lecture covers
  – Definition of graph (revision), clique, and clique number
  – Graph colouring, chromatic number
  – Interval graphs

Graphs

• Formal Definition:
  – A graph G is a finite nonempty set V together with an [irreflexive], symmetric relation E on V
• The relation E relates vertices to other vertices and is known as the edge relation, or "edge set"
• If relation E is symmetric, it means that
  – (a,b)∈E ⇒ (b,a)∈E
  – an edge has no concept of "direction"
• In mathematics, an edge relation is usually considered irreflexive:
  – ¬∃a : (a,a)∈E
  – engineers often relax this constraint (hence the brackets)

1/15/2007 Lecture5 gac1

Directed Graphs

• Formal definition:
  – A directed graph G is a finite nonempty set V together with an [irreflexive] relation E on V
  – This time the concept of direction is implicit, as we could have (a,b)∈E and (b,a)∉E
• You may see directed graphs referred to as "digraphs"

[Figure: a directed graph on vertices v1…v4]

Cliques

• A complete graph is a special type of graph where all possible edges are in the edge set

[Figure: the complete graphs K1, K2, and K3]

• A subgraph G'(V',E') of a graph G(V,E) is a graph whose vertex and edge sets obey
  – V' ⊆ V, E' ⊆ E

[Figure: a graph G(V,E) and a subgraph G'(V',E')]
Cliques

• A clique is a complete subgraph

[Figures: the graph G(V,E) on v1…v4, a clique G'(V',E'), and a subgraph G''(V,E) that is not a clique]

• G' is a clique. G'' is not a clique (but it is a subgraph of G)

Clique Number

• The clique number ω(G) of a graph G is the size of the node set of its largest clique

[Figure: the example graph G(V,E) on vertices v1…v4]

• This graph has cliques with the following node subsets:
  – {v1}, {v2}, {v3}, {v4}, {v1,v2}, {v1,v3}, {v2,v3}, {v2,v4}, {v1,v2,v3}
• Its clique number is 3
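The clique number can be found by brute force, checking vertex subsets from largest to smallest; the edge set of the example graph can be read off its clique list. A Python sketch (exponential in |V|, in keeping with the "hard" problems discussed next lecture):

```python
from itertools import combinations

def clique_number(vertices, edges):
    """Brute-force omega(G): test vertex subsets from largest to smallest,
    returning the size of the first subset that is complete."""
    adj = set(edges) | {(b, a) for (a, b) in edges}  # make edges symmetric
    for size in range(len(vertices), 0, -1):
        for subset in combinations(vertices, size):
            if all((u, v) in adj for u, v in combinations(subset, 2)):
                return size  # first (largest) complete subset found
    return 0

# The example graph: its edge set follows from the clique list on the slide
V = ["v1", "v2", "v3", "v4"]
E = [("v1", "v2"), ("v1", "v3"), ("v2", "v3"), ("v2", "v4")]
print(clique_number(V, E))  # -> 3
```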

Graph Colouring

• Graph colouring is the process of labelling each node of a graph such that no two connected nodes share the same label

[Figure: the example graph coloured with three colours]

• The graph above is coloured with three different colours
• Graph colouring can model many problems
  – e.g. colouring a conflict graph (Lecture 2) will result in a resource binding

A Colouring Algorithm

• A simple algorithm for colouring a graph is given below

Colour_Graph( G(V,E) )
begin
  foreach v ∈ V {
    c = 1;
    while ∃(v,v') ∈ E : v' has colour c
      c = c + 1;
    label v with colour c }
end

• This will always correctly colour a graph, but the number of distinct colours used depends on the order in which the nodes are visited
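The greedy procedure above can be sketched directly in Python; the graph data is the example graph from the clique slides:

```python
def colour_graph(vertices, edges):
    """Greedy colouring: give each vertex the smallest colour not
    already used by one of its coloured neighbours."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    colour = {}
    for v in vertices:  # the visiting order determines how many colours we use
        c = 1
        while any(colour.get(n) == c for n in adj[v]):
            c += 1
        colour[v] = c
    return colour

V = ["v1", "v2", "v3", "v4"]
E = [("v1", "v2"), ("v1", "v3"), ("v2", "v3"), ("v2", "v4")]
print(colour_graph(V, E))  # -> {'v1': 1, 'v2': 2, 'v3': 3, 'v4': 1}
```

Passing the vertices in a different order may change the number of colours used, which is exactly the order-dependence noted on the slide.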
Chromatic Number

• The smallest number of colours with which it is possible to colour a graph G is called its chromatic number χ(G)
• For a general graph, finding χ(G) is a "hard" problem
  – the algorithm presented does not guarantee a colouring with χ(G) colours
  – we'll be discussing "hard" problems next lecture
• In resource binding, the chromatic number tells us the minimum number of distinct resources required
• Since every node in a clique must be coloured differently to every other node in the clique,
  – ω(G) ≤ χ(G)

Interval Graphs

• Luckily, not all graphs are "hard" to colour. One type of graph which is easy to colour with the minimum number of colours is an "interval graph"
• An interval graph is a graph whose vertices can be put in one-to-one correspondence with a set of intervals, such that two vertices are connected by an edge iff the corresponding intervals intersect

[Figures: a set of intervals with its interval graph; a graph that is NOT an interval graph]
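The defining correspondence is easy to sketch: given named intervals, connect two vertices exactly when their intervals intersect. The interval data below is hypothetical, chosen only for illustration:

```python
def interval_graph(intervals):
    """Build the interval graph of a dict name -> (left, right):
    two vertices share an edge iff their closed intervals intersect."""
    names = list(intervals)
    edges = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            (la, ra), (lb, rb) = intervals[a], intervals[b]
            if la <= rb and lb <= ra:  # closed intervals overlap
                edges.append((a, b))
    return edges

# Hypothetical intervals for illustration
ivals = {"v1": (0, 3), "v2": (2, 5), "v3": (4, 7)}
print(interval_graph(ivals))  # -> [('v1', 'v2'), ('v2', 'v3')]
```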

The Left Edge Algorithm

• The left-edge algorithm colours interval graphs optimally.
• Let us denote by li and ri the left-most and right-most point of the interval corresponding to vertex vi.

Left_Edge( G(V,E) )
begin
  sort nodes in ascending order of left edge – store in L
  c := 1;
  while( not all vertices have been coloured ) {
    r := 0;
    while( ∃ an element s in L with ls > r ) {
      vs := first node in L with ls > r;
      r := rs;
      label vs with colour c;
      L := L \ {vs}; }
    c := c + 1; }
end

The Left Edge Algorithm

• Some set theory:
  – \ represents set subtraction
  – X \ Y = { z : z∈X ∧ z∉Y }
• The left edge algorithm tries to colour as many intervals as possible with one colour, before moving on to the next colour
• Left Edge was originally introduced to pack wire segments tightly on a VLSI layout. It is now used for many other purposes – particularly resource binding.
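A minimal Python sketch of the pseudocode above, working directly on the intervals (the interval data is hypothetical). Following the pseudocode's strict test ls > r, two intervals that merely touch at an endpoint are treated as conflicting:

```python
import math

def left_edge(intervals):
    """Left-edge algorithm: sweep the intervals in ascending order of
    left endpoint, packing non-overlapping ones into the current colour
    before moving on to the next colour."""
    L = sorted(intervals, key=lambda v: intervals[v][0])  # ascending left edge
    colour, c = {}, 1
    while L:
        r = -math.inf  # right edge of the last interval given colour c
        remaining = []
        for v in L:
            l_v, r_v = intervals[v]
            if l_v > r:  # starts after the current colour's last interval
                colour[v] = c
                r = r_v
            else:
                remaining.append(v)
        L, c = remaining, c + 1
    return colour

ivals = {"v1": (0, 3), "v2": (2, 5), "v3": (4, 7)}
print(left_edge(ivals))  # -> {'v1': 1, 'v3': 1, 'v2': 2}
```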
Left Edge – Example

[Figures: a worked example – an interval graph on v1…v7, its interval list L sorted by left edge, the resulting coloured graph, and the intervals packed into colours c=1, c=2, and c=3]

Summary

• This lecture has covered
  – graphs and digraphs
  – cliques and clique number
  – colouring and chromatic number
  – interval graphs and the Left Edge algorithm
• Next lecture will examine the ideas behind designing "good" algorithms, and what it means for a problem to be "hard"

Suggested Problems

• For the graph below, apply the general colouring algorithm for the following two vertex orders. Compare and contrast your results. (*)
  – (a) (v1, v2, v3, v4)
  – (b) (v1, v4, v3, v2)
• By applying the left-edge algorithm, or otherwise, demonstrate that one of the two orders above results in an optimum colouring (*)

[Figure: a graph on vertices v1…v4]
Algorithms and Intractability

• Part of our 4-lecture "theory break"
  – Graphs, cliques, and colouring
  – Algorithms and intractability
  – Linear programming and integer linear programming
  – Shortest and longest path algorithms
• This lecture covers
  – The definition of an "algorithm"
  – Polynomial-time and intractability
  – P and NP
  – Polynomial reduction, NP-completeness and NP-hardness

The Purpose of This Lecture

• Synthesis is all about writing algorithms to solve problems in digital design
• This lecture will consider some of the more theoretical aspects concerning
  – problems, algorithms, and complexity
• We will formalize what is meant by a "hard" problem
• You will not be required to prove the hardness of any unseen problems as part of this course
• You may be required to describe the ideas involved

1/15/2007 Lecture6 gac1

Problems and Instances

• We have already discussed several problems and algorithms. We will now take a few minutes to formalize these concepts
• A problem is a general question to be answered, usually possessing several parameters, whose values are left unspecified
  – e.g. Can I schedule a DFG G(V,E) to complete within λ cycles using at most n multipliers?
• An instance of a problem is obtained by specifying particular values for all parameters
  – e.g. Can I schedule the DFG given in Lecture 1, slide 5, to complete within 10 cycles using at most 2 multipliers?

"Hard" Problems

[Figures from Garey & Johnson 1979]

Algorithms and Efficiency

• An algorithm is a general step-by-step procedure for solving problems
• An algorithm is said to solve a problem Π if the algorithm can be applied to any instance of Π and is guaranteed to always produce a solution for that instance
• An efficient algorithm is one that solves the problem "quickly"
  – there are other factors such as memory usage, but we will ignore these

Complexity

• Usually, we can describe the worst-case performance of an algorithm as a function of the "size" n of the problem instance
• We are generally concerned with the "big picture" of how performance scales with size (especially for large sizes), rather than specific execution times
• The Big-Oh notation allows us to express this behaviour
  – O(n), O(n²), O(eⁿ)
• An algorithm is O( f(n) ) if its worst-case performance is bounded by k·f(n) for large n

Complexity

• Example: A (good) algorithm to add n numbers will be O(n)
• Example: An algorithm to sort n numbers in order. You may be familiar with
  – quicksort: O(n²)
  – heapsort: O(n log n)
• Example: An algorithm which considers all possible k-colourings that a graph could have would be O(kⁿ)

Polynomial vs Exponential Time

• A polynomial-time algorithm is one which is O( p(n) ) for some polynomial p(⋅).
• An exponential-time algorithm is any algorithm which is not polynomial-time.
• Clearly for large n, exponential-time algorithms take much longer than polynomial-time algorithms
  – the main distinction is thus: "is this algorithm exponential (bad) or polynomial (good)?"
  – the order of the polynomial is of secondary concern
• All problems which can be solved by polynomial algorithms are said to belong to the class P

Nondeterministic Polynomial Time

• To complicate matters, computer scientists have come up with another class, NP (nondeterministic polynomial).
• A problem is in NP if a solution to the problem can be checked in polynomial time
  – this doesn't mean it has to be solvable in polynomial time
• Example:
  – scheduling G(V,E) in time λ given resource constraints may or may not be solvable in polynomial time
  – it is clear that given a schedule, we could check in polynomial time that it is a valid schedule and it completes within λ cycles

Want to Earn Some Money?

• The problem "does P = NP?" is unsolved
• If you solve it you will
  – be famous
  – win $1,000,000 (the Clay Mathematics Institute's Millennium Prize)
• …but don't let it distract you from your degree!
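The scheduling example shows why checking is easy: given proposed start times, we only sweep the time steps once. A minimal sketch (the operation names, durations, and instance below are hypothetical):

```python
def check_schedule(start, duration, m, lam):
    """Polynomial-time check of a proposed schedule: every operation
    finishes by the deadline `lam`, and at no time step are more than
    `m` operations running at once."""
    if any(start[a] + duration[a] > lam for a in start):
        return False
    for t in range(lam):  # count operations active at each time step
        active = sum(1 for a in start if start[a] <= t < start[a] + duration[a])
        if active > m:
            return False
    return True

# Hypothetical instance: three unit-time operations, 2 resources, deadline 2
dur = {"a": 1, "b": 1, "c": 1}
print(check_schedule({"a": 0, "b": 0, "c": 1}, dur, m=2, lam=2))  # -> True
print(check_schedule({"a": 0, "b": 0, "c": 0}, dur, m=2, lam=2))  # -> False
```

Finding such a schedule in the first place may still require exponential search – that is exactly the P vs NP gap.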
Polynomial Reduction

• Many interesting and difficult problems (like scheduling) are in NP but we don't know whether they're in P
• Since it is generally hard to prove that a given problem is not in P, we instead concentrate on proving that it's "at least as hard" as a known hard problem
• If we can transform any instance of a hard problem ΠH into an instance of our problem Π, and that transformation can be done in polynomial time, then
  – if we can solve Π, we can solve ΠH ⇒ Π is also hard!

NP-completeness & NP-hardness

• There are some problems which are in NP and which are known to be at least as hard as any other problem in NP.
  – these are called NP-complete
• NP-complete problems are of particular interest, as if a solution to any NP-complete problem can be found in polynomial time then P = NP
• A problem which is at least as hard as an NP-complete problem is called NP-hard
  – this is our formal definition: for "hard problem" read "NP-hard problem"

A Hierarchy of Problems

• Assuming P ≠ NP, this is how our "world of problems" looks

[Figure: P lies inside NP; the NP-complete problems lie inside NP; the NP-hard problems extend beyond NP]

Proving Hardness

• Proving NP-hardness requires two stages
  – pick a known NP-hard problem
  – demonstrate a transformation from this problem to your problem
• There are some NP-complete problems which form the basis of many proofs. We will look at one: Partition
• Partition: Given a finite set A and a measure s(a) ∈ Z+ for each a ∈ A, is there a subset A' ⊆ A such that the following equation holds?

  ∑a∈A' s(a) = ∑a∈A−A' s(a)
Proving Hardness

• An example instance of "partition":
  – A = {v1, v2, v3} with s(v1) = 1, s(v2) = 2, s(v3) = 1
  – for this instance, the answer is clearly "yes":
    • A' = {v2} or A' = {v1, v3}

Example: Scheduling is NP-hard

• To finish off, we'll prove the NP-hardness of an example problem (a simple form of scheduling)
• Our simple scheduling problem has no data dependencies and only one type of operation
• Remember that you won't be asked to do such a proof for an unseen problem, but this proof has been included
  – for completeness
  – to give a more "practical" end to a highly theoretical lecture
  – to justify past and future comments about scheduling being a "hard" task to perform
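Partition can be decided by a classic subset-sum dynamic program. This does not contradict NP-completeness: the running time is pseudo-polynomial (polynomial in the total sum, which is exponential in the input's bit length). A sketch, using the example instance from the slides:

```python
def partition(sizes):
    """Decide Partition by dynamic programming: can the multiset
    `sizes` be split into two halves of equal sum?"""
    total = sum(sizes)
    if total % 2:
        return False
    reachable = {0}  # subset sums reachable so far
    for s in sizes:
        reachable |= {r + s for r in reachable}
    return total // 2 in reachable

# The example instance: s(v1) = 1, s(v2) = 2, s(v3) = 1
print(partition([1, 2, 1]))  # -> True  (A' = {v2}, or A' = {v1, v3})
print(partition([1, 2, 4]))  # -> False
```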

Scheduling is NP-hard

• Let's start by defining our problem:
  – given a finite set A of operations, a latency d(a) ∈ Z+ for each a ∈ A, a number m ∈ Z+ of resources, and a deadline λ ∈ Z+
  – is there a schedule such that all operations complete within the deadline and no more than m resources are used?

Scheduling is NP-hard

• Let us rephrase the question:
  – is there a partition A = A1 ∪ A2 ∪ … ∪ Am of A into m disjoint subsets such that

  max { ∑a∈Ai d(a) : 1 ≤ i ≤ m } ≤ λ

  – Ai represents the set of operations assigned to processor i, and no two operations can be executed at the same time on a single resource

Scheduling is NP-hard

• Let's consider a special case of our problem, for m = 2 and

  λ = (1/2) ∑a∈A d(a)

• Then the problem reduces to:
  – given a finite set A, and a value d(a) ∈ Z+ for each a ∈ A
  – is there a partition into 2 disjoint subsets A' and A − A' such that

  max { ∑a∈A' d(a), ∑a∈A−A' d(a) } ≤ (1/2) ∑a∈A d(a)

Scheduling is NP-hard

• Rewriting, we require

  (1/2) max { ∑a∈A' d(a) − ∑a∈A−A' d(a), ∑a∈A−A' d(a) − ∑a∈A' d(a) } ≤ 0

• But for any k, max(k, −k) ≤ 0 ⇒ k = 0, so we require

  ∑a∈A' d(a) = ∑a∈A−A' d(a)

• But this is the "partition" problem. So "partition" is a special case of our problem and hence our problem is NP-hard
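The reduction can be checked numerically on small instances: with m = 2 and λ set to half the total latency, the scheduling question has a "yes" answer exactly when Partition does. A brute-force sketch (the second instance is hypothetical):

```python
from itertools import combinations

def schedulable(durations, m, lam):
    """Brute-force the simple scheduling decision problem for m = 2:
    is there a subset A' keeping both resources within the deadline?"""
    assert m == 2
    items = list(durations)
    total = sum(durations.values())
    for size in range(len(items) + 1):
        for subset in combinations(items, size):
            load = sum(durations[a] for a in subset)
            if max(load, total - load) <= lam:
                return True
    return False

# With lam = half the total latency this is exactly Partition
d = {"v1": 1, "v2": 2, "v3": 1}
print(schedulable(d, m=2, lam=sum(d.values()) // 2))   # -> True
d2 = {"v1": 1, "v2": 2, "v3": 4}
print(schedulable(d2, m=2, lam=sum(d2.values()) // 2))  # -> False
```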

Summary

• This lecture has covered
  – The definition of an "algorithm"
  – Polynomial-time and intractability
  – P and NP
  – Polynomial reduction, NP-completeness and NP-hardness
• Next lecture we will look at the (NP-hard!) problem of Integer Linear Programming (ILP) and how we can use ILP solving software to help us optimize our hardware
[Integer] Linear Programming

• Part of our 4-lecture "theory break"
  – Graphs, cliques, and colouring
  – Algorithms and intractability
  – Linear programming and integer linear programming
  – Shortest and longest path algorithms
• This lecture covers
  – Mathematical programming, integer / mixed-integer programming, and linear programming
  – Slack variables
  – Application example: Capital budgeting

Mathematical Programming

• Mathematical "programming" is the name given to the branch of mathematics that considers the following optimization problem:

  max f(x), x ∈ S ⊆ Rⁿ

• Here Rⁿ represents the set of n-dimensional vectors of real numbers, and f is a real-valued function defined on S. S is the constraint set and f is the objective function.
• By choosing f and S appropriately, we can model a wide variety of real-life problems in this way.

1/15/2007 Lecture7 gac1

Feasibility and Optimality

• Any x ∈ S is called a feasible solution
• If there is an xᵒ ∈ S such that f(x) ≤ f(xᵒ) for all x ∈ S, then xᵒ is called an optimal solution
• The aim is to find an optimal solution for a given f and S

Integer Programming

• An integer programming problem is one where S is restricted to have only integer values

  S ⊆ Zⁿ ⊆ Rⁿ

• A mixed integer programming problem is one where some elements of S are restricted to integers
• Integer programming problems are typically harder than the equivalent real problem. You can gain an intuition why by considering the following problems
  – find the value of x minimizing cos(x/5)
    • 5π
  – find the integer value of x minimizing cos(x/5)
    • round( 5π ) ? round( 5π + 10π ) ? …
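The cos(x/5) example can be checked numerically: the real minimizer is x = 5π, but the best integer is not simply round(5π) – integers near each minimum 5π + 10πk must be compared. A small sketch:

```python
import math

# Real minimizer of cos(x/5): x = 5*pi, where cos reaches -1.
best_real = 5 * math.pi

# Integer candidates near the first two minima, 5*pi and 5*pi + 10*pi
candidates = [math.floor(best_real), math.ceil(best_real),
              math.floor(best_real + 10 * math.pi),
              math.ceil(best_real + 10 * math.pi)]
best_int = min(candidates, key=lambda x: math.cos(x / 5))
print(round(best_real, 3))  # -> 15.708
print(best_int)             # -> 47
```

Here round(5π) = 16 gives cos(3.2) ≈ −0.998, but x = 47 (near 15π) does better with cos(9.4) ≈ −0.9997: rounding the real optimum is not enough.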
Linear Programming

• Problems where f and S are restricted to linear form are of particular interest
  – f(x) = cᵀx, S = { x | Ax = b, x ≥ 0 }
  – c is an n × 1 vector, A is an m × n matrix and b is an m × 1 vector
• Imposing the linearity constraints restricts the domain of problems, but allows us to use known solution techniques
• For general x, these problems can be solved exactly (e.g. Simplex technique). For integer x, the problem is NP-complete.

Why Are We Interested?

• We are interested in expressing problems as integer or mixed integer linear programs because
  – it provides a way to formalize the problem
  – we can apply known general techniques to solve the problem
  – lots of software exists to solve MILPs (e.g. lp_solve, available free from the web)
  – I will be introducing ILP formulations for scheduling and resource binding in later lectures

Modelling Complex Problems

• At first glance, linear constraints may seem very restrictive – this is not necessarily the case, if you build your model carefully.
• Here are three types of constraint that could be useful in synthesis
  – inequalities (e.g. x1 + x2 ≤ b1, rather than x1 + x2 = b1)
  – dichotomy (e.g. x1 + x2 ≤ b1 OR x3 + x4 ≤ b2)
  – conditionals (e.g. x1 + x2 ≤ b1 ⇒ x3 + x4 ≤ b2)
• We will only be considering the first in this brief introduction. If you wish to use the others, see
  – R.S. Garfinkel and G.L. Nemhauser, "Integer Programming", Wiley and Sons, 1972

Inequality

• Inequality constraints can easily be introduced by adding an extra variable
• For example, consider the program:
  max 2x1 + 3x2 subject to x1 + x2 ≤ 10
  This is the same as
  max 2x1 + 3x2 subject to x1 + x2 + x3 = 10
• For "≥", we would insert (−x3) into the constraint
• The extra variable is called a slack variable – it does not appear in the objective function
• Because this is so straightforward, many ILP solving programs allow you to express constraints with inequalities directly. From now on, we will use inequalities freely without considering slack variables explicitly
Example: Capital Budgeting

• From Garfinkel and Nemhauser (1972):
  – A firm has n projects that it would like to undertake, but due to budget limitations, not all can be selected. In particular, project j has a value of cj, and requires an investment of aij in time period i, i = 1,…,m. The capital available in time period i is bi.
  – Problem: Maximize the total value, subject to budget constraints

Example: Capital Budgeting

• Let's introduce a set of variables xj, which we interpret as:
  – xj = 1 ⇒ project j is selected
  – xj = 0 ⇒ project j is not selected
• Then the objective function can be formulated as

  ∑j=1..n cj xj

• The constraints are

  ∑j=1..n aij xj ≤ bi, i = 1,…,m;  xj ≤ 1, j = 1,…,n
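For small n this 0-1 ILP can be solved by trying every selection vector – an ILP solver such as lp_solve would do the same job far more cleverly. A sketch, on a hypothetical instance (the values, investments, and budgets are made up for illustration):

```python
from itertools import product

def capital_budget(values, invest, budget):
    """Brute-force the 0-1 capital-budgeting ILP: try every selection
    vector x in {0,1}^n and keep the best one that fits every period's
    budget."""
    n, m = len(values), len(budget)
    best_val, best_x = 0, (0,) * n
    for x in product((0, 1), repeat=n):
        if all(sum(invest[i][j] * x[j] for j in range(n)) <= budget[i]
               for i in range(m)):
            val = sum(values[j] * x[j] for j in range(n))
            if val > best_val:
                best_val, best_x = val, x
    return best_val, best_x

# Hypothetical instance: 3 projects, 2 time periods
c = [5, 4, 3]    # project values c_j
a = [[2, 3, 1],  # investment a_ij needed in period i
     [1, 2, 2]]
b = [4, 3]       # capital b_i available in period i
print(capital_budget(c, a, b))  # -> (8, (1, 0, 1))
```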

Summary

• This lecture has covered
  – Mathematical programming, integer / mixed-integer programming, and linear programming
  – Slack variables
  – Application example: Capital budgeting
• Next lecture (the last in our "theory break") looks at finding the shortest and longest path through a graph

Path Problems and Algorithms

• Part of our 4-lecture "theory break"
  – Graphs, cliques, and colouring
  – Algorithms and intractability
  – Linear programming and integer linear programming
  – Shortest and longest path algorithms
• This lecture covers
  – Edge-weighted graphs, shortest and longest path problems
  – Longest path through a DAG
  – Longest path through a general graph: Liao-Wong
  – Longest path as a LP

Edge Weighted Graphs

• An edge-weighted graph is a graph G(V,E) together with a weighting function w: E → R
• We can represent this graphically by annotating each edge e ∈ E with its weight w(e)

[Figures: an edge-weighted DAG on vertices 0…6; an edge-weighted graph with cycles on v0…v3]

1/15/2007 Lecture8 gac1

Shortest and Longest Path

• A path through a graph is an alternating sequence of vertices and edges

[Figure: the cyclic example graph, with a path from v0 to v3 via v1 highlighted]

• A path between vertices v0 and v3, with total edge weight 3+1 = 4, has been highlighted

Shortest and Longest Path

• The longest path problem is to find a path of maximum total weight between a given "source" vertex and any other vertex in the graph
  – the shortest path problem is defined similarly
  – we will consider only longest path problems – shortest path can then be achieved by inverting all weights: w'(e) = −w(e)
• Bellman's equations define the total weight sv of the longest path to any vertex v

  sv = max(u,v)∈E ( su + w(u,v) )
Longest Path Through a DAG

• The longest path through a DAG is an easier problem than the equivalent for a general graph
• This is because we can find an order of nodes to visit such that the right-hand side of each Bellman's equation is known
• For our example DAG, let's choose vertex 0 as our source. Then s0 = 0. If we now proceed to apply Bellman's equations in the order (s1, s2, s3, s4, s5, s6), we can determine the total weight for each node
  – s1 = 0, s2 = 0, s3 = 0, s4 = 2, s5 = 2, s6 = 4
• Note that this would not work with an arbitrary order. We must calculate sv before su for all (v,u) ∈ E
• For a graph with cycles, it is not possible to find such an order

DAG Algorithm

• Below is one possible algorithm (apologies to the recursion-phobics)

Algorithm DAG_Longest_Path( G(V,E), source )
  Set ssource = 0;
  foreach v ∈ V
    Find_DAG_Path( G(V,E), v );
end DAG_Longest_Path

Algorithm Find_DAG_Path( G(V,E), v )
  if already know sv, return
  else
    foreach (u,v) ∈ E
      Find_DAG_Path( G(V,E), u )
    Apply Bellman's equation to find sv
end Find_DAG_Path
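A Python sketch of the recursive algorithm, memoizing each sv so that Bellman's equation is applied once per vertex. It assumes every vertex is reachable from the source; the data is the forward edge set of the cyclic example graph (itself a DAG):

```python
def dag_longest_path(vertices, wedges, source):
    """Longest path in a DAG by memoized recursion on Bellman's
    equations: s_v = max over edges (u,v) of s_u + w(u,v)."""
    preds = {v: [] for v in vertices}
    for u, v, w in wedges:
        preds[v].append((u, w))
    s = {source: 0}

    def find(v):
        if v not in s:  # recurse into predecessors first
            s[v] = max(find(u) + w for u, w in preds[v])
        return s[v]

    for v in vertices:
        find(v)
    return s

# Forward edges E of the cyclic example graph, source v0
E = [("v0", "v1", 3), ("v0", "v2", 1), ("v2", "v1", 1),
     ("v1", "v3", 1), ("v2", "v3", 4)]
s = dag_longest_path(["v0", "v1", "v2", "v3"], E, "v0")
print(sorted(s.items()))  # -> [('v0', 0), ('v1', 3), ('v2', 1), ('v3', 5)]
```

These values match the initial DAG longest path quoted later in the Liao-Wong example.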

DAG Example

[Figure: the example DAG on vertices 0…6 with its edge weights]

• Let's assume the vertices are stored in V in an arbitrary order – say (4, 1, 2, 3, 5, 0, 6)
• A call to DAG_Longest_Path( G(V,E), 0 ) will set s0 = 0, and then follow the following execution profile

1. Find_DAG_Path( G(V,E), 4 )
   1. Find_DAG_Path( G(V,E), 1 )
      1. Find_DAG_Path( G(V,E), 0 )
      2. Calculate s1 = 0
   2. Find_DAG_Path( G(V,E), 2 )
      1. Find_DAG_Path( G(V,E), 0 )
      2. Calculate s2 = 0
   3. Calculate s4 = 2
2. Find_DAG_Path( G(V,E), 1 )
3. Find_DAG_Path( G(V,E), 2 )
4. Find_DAG_Path( G(V,E), 3 )
   1. Find_DAG_Path( G(V,E), 0 )
   2. Calculate s3 = 0
5. Find_DAG_Path( G(V,E), 5 )
   1. Find_DAG_Path( G(V,E), 3 )
   2. Calculate s5 = 2
6. Find_DAG_Path( G(V,E), 0 )
7. Find_DAG_Path( G(V,E), 6 )
   1. Find_DAG_Path( G(V,E), 4 )
   2. Find_DAG_Path( G(V,E), 5 )
   3. Calculate s6 = 4
General Longest Path

• Many algorithms to find the longest path of general graphs have been proposed in the literature
• We will consider Liao and Wong's algorithm as it is very efficient for cases where the graph edge set E ∪ F can be partitioned into a "forward" edge set E and a feedback edge set F where G(V,E) is a DAG and |E| >> |F|
  – this is often the case with graphs arising in synthesis – we will consider some of these in future lectures

Edge Set Partition

• Consider our example graph. If we remove the edges labelled "−1" and "−6", we obtain a DAG

[Figures: the cyclic graph on v0…v3, and the DAG that remains when edges (v1,v2) and (v3,v0) are removed]

• The remaining edges form the set E, whereas the two we removed form the set F

General Algorithm

Algorithm Liao_Wong( G(V,E ∪ F), source )
for j = 1 to |F| + 1 {
  DAG_Longest_Path( G(V,E), source );
  flag = TRUE;
  foreach (u,v) in F {
    if sv < su + w(u,v) {
      flag = FALSE;
      E = E ∪ { (source, v) };
      w(source,v) = su + w(u,v);
    }
  }
  if( flag ) return;
}
end Liao_Wong

Algorithm Description

• Liao-Wong first applies the DAG algorithm on the forward edges only. If no feedback edge provides a longer path alternative, the algorithm terminates
• If a longer path alternative is found, the algorithm models this as an extra forward edge directly from the source
• This process is repeated, until no more changes to the edge set are necessary
• It is provable that if the graph contains no cycles where the sum of weights around the cycle is positive, the outer loop need only be executed at most |F|+1 times.
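A Python sketch of Liao-Wong, using the example graph's forward and feedback edge sets. It assumes every vertex is reachable from the source through forward edges; storing forward edges in a dict means that re-adding an existing (source, v) edge simply modifies its weight, as in the slides:

```python
def liao_wong(vertices, forward, feedback, source):
    """Liao-Wong sketch: repeatedly solve longest path on the forward
    DAG, then fold each violated feedback edge (u,v) into a direct
    source->v forward edge of weight s_u + w(u,v)."""
    fwd = {(u, v): w for u, v, w in forward}  # (u,v) -> weight

    def dag_longest():
        s = {source: 0}
        def find(v):
            if v not in s:
                preds = [(u, w) for (u, x), w in fwd.items() if x == v]
                s[v] = max(find(u) + w for u, w in preds)
            return s[v]
        for v in vertices:
            find(v)
        return s

    for _ in range(len(feedback) + 1):
        s = dag_longest()
        changed = False
        for u, v, w in feedback:
            if s[v] < s[u] + w:  # feedback edge offers a longer path
                fwd[(source, v)] = s[u] + w
                changed = True
        if not changed:
            return s
    raise ValueError("positive-weight cycle suspected")

E = [("v0", "v1", 3), ("v0", "v2", 1), ("v2", "v1", 1),
     ("v1", "v3", 1), ("v2", "v3", 4)]
F = [("v3", "v0", -6), ("v1", "v2", -1)]
s = liao_wong(["v0", "v1", "v2", "v3"], E, F, "v0")
print(sorted(s.items()))  # -> [('v0', 0), ('v1', 3), ('v2', 2), ('v3', 6)]
```

On this instance the first pass promotes feedback edge (v1,v2) into the forward edge (v0,v2) of weight 2, and the second pass terminates with no changes, matching the worked example on the following slides.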

General Example

• Let us examine our example graph

[Figures: the cyclic graph and its forward DAG]

• Performing our initial DAG longest path, with v0 as the source, leads to
  – sv0 = 0, sv1 = 3, sv2 = 1, sv3 = 5

General Example

• We now examine each of the feedback edges in turn
  – for edge (v3,v0), sv0 ≥ sv3 − 6 (0 ≥ −1), so no change needs to be made
  – for edge (v1,v2), sv2 < sv1 − 1 (1 < 2), so we must insert a new forward edge (v0,v2) with weight 2 [in this example, (v0,v2) is already in E, so we just modify the weight]

[Figure: the modified DAG, with w(v0,v2) = 2]

General Example

• Calculating the longest path on the modified DAG leads to
  – sv0 = 0, sv1 = 3, sv2 = 2, sv3 = 6
• Examining each feedback edge in turn
  – for edge (v3,v0), sv0 ≥ sv3 − 6 (0 ≥ 0), so no change needs to be made
  – for edge (v1,v2), sv2 ≥ sv1 − 1 (2 ≥ 2), so no change needs to be made
• At this point, the algorithm terminates as no changes are necessary

Longest Path as a LP

• To keep up our interest in LP, let's formulate the longest path problem as a LP
• Let's revisit Bellman's equations:

  sv = max(u,v)∈E ( su + w(u,v) )

• A necessary condition for satisfaction is:

  ∀(u,v) ∈ E, sv ≥ su + w(u,v)  (*)

• The minimum values of sv that satisfy (*) are the solutions to Bellman's equations

Longest Path as a LP

• We can write this as:
  minimize ∑v sv subject to:
  sv ≥ su + w(u,v) for all (u,v) ∈ E
  and ssource = 0
• This is a standard LP formulation (c.f. Lecture 7), which can easily be cast in matrix notation Ax ≥ b

LP Example

• For our general graph example, the LP objective function and constraints are given below

[Figure: the cyclic example graph]

minimize s0 + s1 + s2 + s3
subject to:
s1 ≥ s0 + 3; s1 ≥ s2 + 1
s2 ≥ s0 + 1; s2 ≥ s1 − 1
s3 ≥ s1 + 1; s3 ≥ s2 + 4
s0 ≥ s3 − 6; s0 = 0
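Rather than calling an LP solver, the minimal feasible sv can also be found by Bellman-Ford-style relaxation over all edges (forward and feedback together); on this example it reproduces the Liao-Wong result. The sketch assumes no positive-weight cycle, so |V| − 1 rounds suffice:

```python
def longest_path_relax(vertices, wedges, source):
    """Find the minimal s satisfying s_v >= s_u + w(u,v) for every edge
    (the LP constraints) by repeated relaxation, maximising form.
    Assumes no positive-weight cycle."""
    s = {v: float("-inf") for v in vertices}
    s[source] = 0
    for _ in range(len(vertices) - 1):
        for u, v, w in wedges:
            if s[u] + w > s[v]:  # relax: edge (u,v) offers a longer path
                s[v] = s[u] + w
    return s

# All seven edges of the cyclic example (forward and feedback)
edges = [("v0", "v1", 3), ("v0", "v2", 1), ("v2", "v1", 1),
         ("v1", "v3", 1), ("v2", "v3", 4),
         ("v3", "v0", -6), ("v1", "v2", -1)]
s = longest_path_relax(["v0", "v1", "v2", "v3"], edges, "v0")
print(s)  # -> {'v0': 0, 'v1': 3, 'v2': 2, 'v3': 6}
```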

Some Applications

• Longest and shortest path problems have many real-life applications, including
  – Circuits: Determining the critical path in a circuit, and hence the performance of that circuit
  – Transport: Finding the (shortest / cheapest / least fuel) route between two places
  – Networking and Comms: Shortest path through a network

Worked Example

• Consider the edge-weighted graph shown below

[Figure: an edge-weighted graph on v1…v4 with weights 3, 1, 2, 1]

• (a) determine the longest path from v1 to all other vertices in the graph
• (b) if an edge (v2,v3) with weight w(v2,v3) = −4 were added, how would this affect the longest paths?
Worked Example

(a) It should be easy to see that sv1 = 0, sv2 = 5, sv3 = 1, sv4 = 3 (verify by applying Bellman's equations in the order (v1, v3, v4, v2))
(b) This edge would close a cycle {v3, v4, v2}. We therefore use Liao-Wong to determine whether any change has occurred to the longest paths. Examining the feedback edge (v2,v3), we see that sv3 ≥ sv2 − 4 (1 ≥ 5 − 4) and therefore the extra edge has not affected the longest paths

Summary

• This lecture has covered
  – Edge-weighted graphs, shortest and longest path problems
  – Longest path through a DAG
  – Longest path through a general graph: Liao-Wong
  – Longest path as a LP
• This brings us to the end of our "theory break". Next lecture will look at scheduling digital circuits.

Suggested Problems

• Find the shortest path through the DAG used as an example in this lecture (*)
• Try to apply the Liao-Wong algorithm to find the shortest path through the cyclic graph example. Does it work? If not, why not? (***)
• In the cyclic example, change the weight of edge (v3,v0) to −4. Now apply Liao-Wong to the shortest path problem. (*)