
COMP4500 2012 Exam

Question 1
(a)
T(n) = 5T(n/2) + n^2
a = 5, b = 2, f(n) = n^2
n^(log_b a) = n^(log2 5) ≈ n^2.32, and f(n) = n^2 is polynomially smaller than this, so Case 1 of the Master method applies.
So T(n) is Θ(n^(log2 5)), i.e. Θ(n^2.32...).

(b)
T(n) = 5T(n/2) + n^3
a = 5, b = 2, f(n) = n^3
f(n) = n^3 is polynomially larger than n^(log2 5) ≈ n^2.32, so Case 3 of the Master method applies.
Need to check the regularity condition a·f(n/b) <= c·f(n) for some c < 1:
5(n/2)^3 = (5/8)n^3, so c = 5/8 < 1 works.
Therefore, T(n) is Θ(n^3).
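Written out formally, the two case conditions are as follows (the particular bounds on ε are our own choice; any ε in the stated range works):

\[
\begin{aligned}
\text{(a)}\quad & f(n) = n^2 = O\!\left(n^{\log_2 5 - \varepsilon}\right) \text{ for any } 0 < \varepsilon \le \log_2 5 - 2 \approx 0.32
    &&\Rightarrow\; T(n) = \Theta\!\left(n^{\log_2 5}\right) \\
\text{(b)}\quad & f(n) = n^3 = \Omega\!\left(n^{\log_2 5 + \varepsilon}\right) \text{ for any } 0 < \varepsilon \le 3 - \log_2 5 \approx 0.68,
    \text{ and } 5f(n/2) = \tfrac{5}{8}f(n)
    &&\Rightarrow\; T(n) = \Theta\!\left(n^3\right)
\end{aligned}
\]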

Question 2
(a)
One possible solution (that is thought to be correct):
Modified version of Breadth First Search. The intuition behind the algorithm is that we use BFS
to construct a search tree one level at a time. We know that all shortest paths must appear in
the same level of the tree, and so when the destination vertex w is first encountered, the
distance of its parent + 1 gives a shortest path distance to w. After this, we only have to search
through the remainder of the same level in the tree to find any more paths that lead to w (and
these are also shortest paths since they have the same distance). Once we reach the next level
of the tree, the algorithm can terminate, as it is impossible to find any more shortest paths from
this point onwards.
NUM-SHORTEST-PATHS(G, v, w)
    for u in G.V
        u.distance = 0
        u.colour = white
    v.colour = grey
    Q.initialise()
    Q.enqueue(v)
    numShortestPaths = 0
    shortestPathDistance = 0
    while not Q.isEmpty()
        current = Q.dequeue()
        if shortestPathDistance != 0 and current.distance >= shortestPathDistance
            // a shortest path was already found on the previous level of the
            // breadth-first search tree; we are now on the next level, so no
            // further shortest paths can be found and we stop searching
            break
        for u in G.adj[current]
            if u == w
                // the destination vertex has been encountered; if the shortest
                // path distance has not yet been set (it is still 0), set it now
                if shortestPathDistance == 0
                    shortestPathDistance = current.distance + 1
                numShortestPaths++
            if u.colour == white
                u.distance = current.distance + 1
                u.colour = grey
                Q.enqueue(u)
        current.colour = black
    return numShortestPaths
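As a cross-check, here is a runnable sketch of the same algorithm in Python. It assumes the graph is given as an adjacency-list dictionary mapping every vertex to a list of its neighbours; the function and variable names are illustrative, not part of the exam pseudocode.

from collections import deque

def num_shortest_paths(adj, v, w):
    """Count the shortest v-to-w paths with a single level-by-level BFS pass."""
    distance = {v: 0}       # discovered ("non-white") vertices and their BFS distances
    queue = deque([v])
    count = 0
    shortest = None         # distance of the first shortest path found, if any

    while queue:
        current = queue.popleft()
        # Once we move past the level of w's parents, no further shortest paths exist.
        if shortest is not None and distance[current] >= shortest:
            break
        for u in adj[current]:
            if u == w:
                if shortest is None:
                    shortest = distance[current] + 1
                count += 1
            if u not in distance:               # white vertex: first discovery
                distance[u] = distance[current] + 1
                queue.append(u)
    return count

For example, with adj = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}, num_shortest_paths(adj, 1, 4) returns 2: one shortest path through vertex 2 and one through vertex 3.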

(b)
An adjacency list representation for the graph is used. In the worst case, all vertices and edges
are visited once, so the time complexity of the algorithm is O(|V|+|E|). The space complexity is
O(|V|) in the worst case when all vertices are on the queue at the same time.
Question 3
Dynamic programming applies to an optimisation problem when the problem exhibits:
● Optimal substructure (an optimal solution can be constructed from optimal solutions to
subproblems)
● Overlapping subproblems
● A polynomial number of subproblems with respect to the problem size, as the table used
for storing subproblem solutions cannot be exponential in size if the problem is to be
solved efficiently
Greedy algorithms apply to an optimisation problem when the problem exhibits:
● Optimal substructure
● The greedy choice property (making many locally optimal choices leads to a globally
optimal solution)

The essential differences are that DP is exhaustive and must solve all subproblems, while greedy algorithms are myopic and only need to solve one subproblem at each iteration. Bottom-up DP (as opposed to memoization) constructs a solution from the bottom up, first solving the smaller subproblems that the larger subproblems depend on. Greedy algorithms operate in a top-down fashion, reducing the problem size at each iteration.
DP is more widely applicable, as most problems do not exhibit the greedy choice property and therefore have to be solved exhaustively to find an optimal solution. When both methods are applicable, greedy is likely to be faster, as it solves far fewer subproblems than DP does to arrive at an optimal solution.

Question 4
(a)
P(i, S) =
    0                                          if i = 0 or S = 0  (base case: no tracks left or no time left)
    P(i-1, S)                                  if s[i] > S        (can't include track i, as its playing time is too long)
    max(P(i-1, S - s[i]) + c[i], P(i-1, S))    if s[i] <= S       (can include track i; take whichever of including it in or excluding it from T is better)
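The recurrence can be transcribed almost directly into memoised Python. This is only a sketch: it assumes s and c are 1-indexed lists holding each track's playing time and value respectively, with a dummy entry at index 0; these names are chosen here for illustration.

from functools import lru_cache

def max_value_memo(s, c, S):
    """Memoised evaluation of the recurrence P(i, S).

    s[i] = playing time of track i, c[i] = value of track i (1-indexed,
    with s[0] and c[0] unused), S = total time available.
    """
    n = len(s) - 1  # tracks are numbered 1..n

    @lru_cache(maxsize=None)
    def P(i, t):
        if i == 0 or t == 0:        # no tracks left or no time left
            return 0
        if s[i] > t:                # track i does not fit in the remaining time
            return P(i - 1, t)
        # otherwise, take the better of including or excluding track i
        return max(P(i - 1, t - s[i]) + c[i], P(i - 1, t))

    return P(n, S)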
(b)
Dynamic programming pseudocode:

P(i, S):
    T = new integer array of size (i+1) by (S+1)
    // fill in base cases
    for j = 0 to i
        T[j, 0] = 0
    for j = 0 to S
        T[0, j] = 0
    // find solution from bottom up
    for j = 1 to i
        for k = 1 to S
            if s[j] > k
                // can't include item j
                T[j, k] = T[j-1, k]
            else
                // can include item j; check whether it is better to include or exclude it
                T[j, k] = max(T[j-1, k-s[j]] + c[j], T[j-1, k])
    return T[i, S]
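And the same table-filling procedure as runnable Python, using the same illustrative 1-indexed s and c as in the sketch above:

def max_value_table(s, c, S):
    """Bottom-up table for P(i, S); s and c are 1-indexed, tracks are 1..n."""
    n = len(s) - 1
    # T[j][k] = best value achievable using tracks 1..j with k time units available
    T = [[0] * (S + 1) for _ in range(n + 1)]   # row 0 and column 0 are the base cases
    for j in range(1, n + 1):
        for k in range(1, S + 1):
            if s[j] > k:
                T[j][k] = T[j - 1][k]           # track j cannot fit
            else:
                T[j][k] = max(T[j - 1][k - s[j]] + c[j], T[j - 1][k])
    return T[n][S]

For example, with s = [0, 3, 5, 4], c = [0, 10, 40, 30] and S = 7, both versions return 40 (either track 2 alone, or tracks 1 and 3 together).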

Question 5
To show that any sequence of m operations performed on an initially empty queue takes O(m) time, we need to show that each operation has amortised constant cost. The MAXIMUM and DEQUEUE operations clearly have constant actual running times, so ENQUEUE is the interesting case. We shall use the potential method.

Recall that we want to choose the potential so that it cancels out the non-constant part of the actual cost, making the amortised cost constant. Examining the ENQUEUE operation, we see that its actual cost is a = 3 + k, where k is the number of items popped from the tail of M because they are less than the new item x (the 3 accounts for the two insertions and the final evaluation of the while-loop condition that ends the loop). Define the potential p(D_i) to be the number of items stored in M after the i-th operation. An ENQUEUE removes k items from M and inserts one, so p(D_i) - p(D_{i-1}) = 1 - k, and its amortised cost is

c_i = a + p(D_i) - p(D_{i-1})
    = (3 + k) + (1 - k)
    = 4

DEQUEUE and MAXIMUM never increase the potential, so their amortised costs are bounded by their constant actual costs. We also check that p(D_0) = 0 (the queue and M both start empty) and that p(D_i) >= p(D_0) for all i, so the total amortised cost is an upper bound on the total actual cost. Thus every operation has amortised constant cost, and a sequence of m such operations takes O(m) time.
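For concreteness, here is a small Python sketch of the kind of structure this analysis has in mind: a plain FIFO queue together with an auxiliary deque M kept in non-increasing order. The exam's data structure may differ in detail, so treat the class and method names below as assumptions.

from collections import deque

class MaxQueue:
    """FIFO queue with O(1) MAXIMUM; ENQUEUE is amortised O(1)."""

    def __init__(self):
        self.items = deque()   # the queue proper
        self.M = deque()       # candidates for the maximum, kept non-increasing

    def enqueue(self, x):
        self.items.append(x)
        # pop the k tail items of M that are less than x (the 3 + k actual cost)
        while self.M and self.M[-1] < x:
            self.M.pop()
        self.M.append(x)

    def dequeue(self):
        x = self.items.popleft()
        if self.M and self.M[0] == x:
            self.M.popleft()
        return x

    def maximum(self):
        return self.M[0]

Each element enters M at most once and leaves it at most once, which is an aggregate-method way of seeing the same constant amortised bound.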
Question 6
(a)
Statement 1 is true. As the statement says, if the graph can be coloured with two colours, then the vertices of each colour form one of the two partite sets of a bipartite graph.

Statement 2 is true. The graph colouring decision problem in general (for k colours) is NP-
Complete, and therefore is also NP-hard (see
https://en.wikipedia.org/wiki/Graph_coloring#Algorithms).

Statement 3 is true. There may exist polynomial time algorithms to solve NP-hard problems, but
no one currently knows of any. If someone were able to find such a polynomial time algorithm
then they would have shown that P=NP.

Statement 4 is false. Testing if a graph is bipartite is a special case of the general graph
colouring problem, with k=2. This problem can be solved in polynomial time.

(b)
A simple algorithm to determine whether a graph is bipartite is a slightly modified version of Breadth First Search. Run BFS as usual, but assign a different colour to the vertices of each alternating layer of the BFS tree. During the search, if we encounter a neighbour of a vertex that has the same colour as that vertex (the neighbour has already been visited and given that colour along some other path, which means the graph contains an odd-length cycle), then the graph is not bipartite. This algorithm runs in O(|V|+|E|) time.
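A sketch of that check in Python, again over an adjacency-list dictionary with illustrative names; it restarts the search from every uncoloured vertex so that disconnected graphs are handled too.

from collections import deque

def is_bipartite(adj):
    """BFS two-colouring; adj maps each vertex to a list of its neighbours."""
    colour = {}                                       # vertex -> 0 or 1
    for start in adj:                                 # cover every component
        if start in colour:
            continue
        colour[start] = 0
        queue = deque([start])
        while queue:
            current = queue.popleft()
            for u in adj[current]:
                if u not in colour:
                    colour[u] = 1 - colour[current]   # alternate layer, alternate colour
                    queue.append(u)
                elif colour[u] == colour[current]:    # same-colour neighbour: odd cycle
                    return False
    return True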
