
COMP4500 2013 Exam

Question 1
(a)
(1)
T(n) = 4T(n/4) + ϴ(n)
a = 4, b = 4, f(n) = n

n^(log_b a) = n^(log_4 4) = n^1

f(n) ∈ ϴ(n^(log_b a)), so case 2 of the Master Method applies.

Solution: T(n) ∈ ϴ(n^(log_b a) lg n) = ϴ(n lg n)

This algorithm is not asymptotically faster than the original algorithm.

(2)
T(n) = T(n/2) + ϴ(n^2)
a = 1, b = 2, f(n) = n^2

n^(log_b a) = n^(log_2 1) = n^0 = 1

f(n) is polynomially larger than n^(log_b a), so case 3 of the Master Method may apply.
Check that the regularity condition a*f(n/b) <= c*f(n) holds for some constant c < 1:

1*(n/2)^2 = n^2/4 = ¼ n^2

¼ n^2 <= c*n^2 for any ¼ <= c < 1, so the regularity condition holds.

Solution: T(n) ∈ ϴ(f(n)) = ϴ(n^2)

This algorithm is not asymptotically faster than the original algorithm.
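
As a quick sanity check (my own addition, not part of the exam answer), both recurrences can be unrolled numerically. A minimal Python sketch, ignoring floors and using an arbitrary base case:

import math

# T(n) = 4T(n/4) + n; the Master Method predicts Theta(n lg n)
def t1(n):
    return 1 if n <= 1 else 4 * t1(n // 4) + n

# T(n) = T(n/2) + n^2; the Master Method predicts Theta(n^2)
def t2(n):
    return 1 if n <= 1 else t2(n // 2) + n * n

for n in (4**6, 4**8, 4**10):
    print(n, t1(n) / (n * math.log2(n)))  # ratio approaches 1/2
for n in (2**8, 2**12, 2**16):
    print(n, t2(n) / (n * n))             # ratio approaches 4/3

The ratios settling to constants is consistent with the closed forms derived above.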

(b)
F(n) = aF(n/3) + ϴ(n^2)
b = 3, f(n) = n^2

Find a such that F(n) ∈ ϴ(n^2 lg n).


For F(n) to have this bound, case 2 of the Master Method must apply, and so n^(log_b a)
must be of the same order as f(n), i.e.:

n^(log_b a) = f(n)
n^(log_3 a) = n^2
log_3 a = 2
∴ a = 9, as 3^2 = 9

So for a = 9, F(n) ∈ ϴ(n^2 lg n). When a < 9, log_3 a < 2 and so f(n) is polynomially larger
than n^(log_3 a), and case 3 of the Master Method applies (assuming the regularity condition
holds), giving F(n) ∈ ϴ(n^2), which is asymptotically smaller than ϴ(n^2 lg n). When a > 9,
log_3 a > 2 and so n^(log_3 a) is polynomially larger than f(n), and case 1 of the Master
Method applies, giving F(n) ∈ ϴ(n^(log_3 a)), which is asymptotically larger than ϴ(n^2 lg n).
Thus, the maximum number of recursive calls that can be made within F(n) so
that its time complexity is no worse than ϴ(n^2 lg n) is 9.
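
A quick numeric illustration of the three regimes (again my own check, not part of the answer):

import math

# F(n) = a*F(n/3) + n^2, evaluated for a below, at and above 9
def f(n, a):
    return 1 if n <= 1 else a * f(n // 3, a) + n * n

for a in (3, 9, 27):
    ratios = [f(3**k, a) / ((3**k)**2 * math.log2(3**k)) for k in (6, 8, 10)]
    print(a, [round(r, 3) for r in ratios])

For a = 3 the ratio against n^2 lg n shrinks towards 0 (F is ϴ(n^2)), for a = 9 it levels off
(F is ϴ(n^2 lg n)), and for a = 27 it grows without bound (F is ϴ(n^3)).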

Question 2
(a)
C = {(A, C), (C, B), (B, E), (E, F), (F, B), (B, D), (D, A)}

(b)
C = {(A, B), (B, D), (D, A)}
Choose B as the new source.
C’ = {(B, C), (C, E), (E, F), (F, D), (D, E), (E, B)}
Insert C’ into C before (B, D):
C = {(A, B), {(B, C), (C, E), (E, F), (F, D), (D, E), (E, B)}, (B, D), (D, A)}

(c)
Remember we are only concerned with graphs that satisfy the 4 properties listed at the
start of the question. Since any graph given as input to the algorithm is connected, there is
guaranteed to be a path from the source vertex to every other vertex. Additionally, since
every vertex has even degree, whenever the walk enters a vertex other than the source it
has used an odd number of that vertex's edges, so an unused edge always remains to leave
by. The walk can therefore never get stuck at a non-source vertex, and must eventually
return to the source.

(d)
GET-CYCLE(G, v, s)
    C = <s>                    // the cycle, as a sequence of vertices
    while v.HASEDGE()          // v still has an unused edge
        w = v.PICKNEIGHBOUR()  // follow any edge out of v
        C.ADD(w)
        G.DELETEEDGE(v, w)     // each edge is used at most once
        if w == s              // back at the source: cycle complete
            break
        v = w
    return C
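
A runnable version of this pseudocode (my own sketch; the adjacency-set representation and the example graph from part (a) are my assumptions, not given in the exam):

def get_cycle(adj, s):
    # adj maps each vertex to the set of its neighbours; it is
    # modified in place as edges are consumed.
    cycle = [s]
    v = s
    while adj[v]:               # v still has an unused edge
        w = next(iter(adj[v]))  # pick any neighbour of v
        cycle.append(w)
        adj[v].remove(w)        # delete edge (v, w) in both
        adj[w].remove(v)        # directions (undirected graph)
        if w == s:
            break
        v = w
    return cycle

adj = {'A': {'C', 'D'}, 'B': {'C', 'E', 'F', 'D'}, 'C': {'A', 'B'},
       'D': {'B', 'A'}, 'E': {'B', 'F'}, 'F': {'E', 'B'}}
print(get_cycle(adj, 'A'))  # one cycle through A; it may leave edges
                            # unused, which is what part (b) repairs
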
Question 3
(a)
minEdit(i, j) = {
    0                                              if i = N and j = M
        (base case: both indices are at the end of their strings, so no more
        transformations can be applied)

    minEdit(i+1, j+1) + 1                          if x[i] = y[j]
        (the current characters in both strings are equal, so perform the
        COPY transformation)

    min(minEdit(i+1, j) + 1, minEdit(i, j+1) + 1)  if x[i] != y[j]
        (the current characters are not equal, so take the cheaper of the
        DELETE and INSERT transformations)
}
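
A memoised translation of this recurrence (my own sketch; the boundary cases where only one
string is exhausted are left implicit by the recurrence above, but are needed for the code to
run):

from functools import lru_cache

def min_edit(x, y):
    N, M = len(x), len(y)

    @lru_cache(maxsize=None)
    def go(i, j):
        if i == N and j == M:  # both strings consumed
            return 0
        if i == N:             # only INSERTs remain
            return M - j
        if j == M:             # only DELETEs remain
            return N - i
        if x[i] == y[j]:       # COPY, cost 1
            return go(i + 1, j + 1) + 1
        # DELETE or INSERT, cost 1 each
        return min(go(i + 1, j) + 1, go(i, j + 1) + 1)

    return go(0, 0)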

(b)
Dynamic programming pseudocode:

minEdit(i, j):
    T = new integer array of size (N+1) by (M+1)  // N+1 rows by M+1 columns,
                                                  // with (0, 0) the bottom-left cell
    // Base case: both strings fully consumed.
    T[N, M] = 0
    // Bootstrap the topmost row and rightmost column of T. Each step to the
    // left in the top row adds 1 to the cost of the cell to its immediate
    // right (a move left is an INSERT of cost 1), and each step down in the
    // rightmost column adds 1 to the cost of the cell immediately above it
    // (a move down is a DELETE of cost 1). This is required because every
    // other cell depends on the cell directly above it, the cell directly to
    // its right, and the cell diagonal from it (up and to the right).
    for m = M-1 down to j
        T[N, m] = T[N, m+1] + 1
    for n = N-1 down to i
        T[n, M] = T[n+1, M] + 1
    // Fill in the rest of the table.
    for n = N-1 down to i
        for m = M-1 down to j
            if x[n] == y[m]
                T[n, m] = T[n+1, m+1] + 1                // COPY
            else
                T[n, m] = min(T[n+1, m]+1, T[n, m+1]+1)  // DELETE or INSERT
    return T[i, j]
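
The same table fill as runnable Python (a sketch under the exam's cost model, in which COPY
also costs 1; the test strings are arbitrary):

def min_edit_dp(x, y, i=0, j=0):
    N, M = len(x), len(y)
    # T[n][m] = cheapest transformation of x[n:] into y[m:]
    T = [[0] * (M + 1) for _ in range(N + 1)]
    for m in range(M - 1, j - 1, -1):  # top row: INSERTs only
        T[N][m] = T[N][m + 1] + 1
    for n in range(N - 1, i - 1, -1):  # rightmost column: DELETEs only
        T[n][M] = T[n + 1][M] + 1
    for n in range(N - 1, i - 1, -1):
        for m in range(M - 1, j - 1, -1):
            if x[n] == y[m]:
                T[n][m] = T[n + 1][m + 1] + 1                # COPY
            else:
                T[n][m] = min(T[n + 1][m], T[n][m + 1]) + 1  # DELETE or INSERT
    return T[i][j]

print(min_edit_dp("kitten", "sitting"))  # agrees with min_edit above
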
(c)
Time complexity:
The top two for loops iterate M-j and N-i times respectively, giving them respective
time complexities of ϴ(M-j) and ϴ(N-i). The two main for loops have these same iteration
counts, but they are nested, giving an overall time complexity of ϴ((N-i)(M-j)).

Space complexity:
ϴ(N M) as T is of size (N+1) by (M+1). Note that T does not actually have to be this large if i
and j are not equal to 0 (since the algorithm would not have to iterate down to the bottom left
hand corner of the table), but defining T with these dimensions makes it easier to write the
pseudocode and think about the problem.

(d)
If you think about the table in the dynamic programming solution, there is always a naive
way to reach the goal cell: travel left until you hit the far left of the table and then travel
down until you hit the bottom of the table, or vice versa. On this path every transformation
(movement to another cell) has a cost of 1, since the transformations being applied are just
a sequence of INSERTs followed by a sequence of DELETEs, or vice versa, all with cost 1.
If the path goes through N rows and M columns then the total cost of the transformations is
just N+M, i.e. the length of x plus the length of y.

Normally it would be cheapest to travel as far as you can diagonally in the table (as moving
diagonally, which corresponds to a COPY transformation, normally has a cost of 1), and then
resort to moving left and downwards when required. This is because you could move one
unit diagonally by moving left and then downwards (INSERT then DELETE), but this has a
cost of 2. If the cost of COPY is changed to 2 or more, then moving diagonally is never going
to give an advantage; it is best to only move leftwards or downwards. The greedy strategy is
therefore the same as the naive strategy described above and the edit distance between two
strings will simply be N+M.

Question 4
(a)
ENQUEUE has an actual cost of 1, since it only performs one basic operation.

DEQUEUE has an actual cost of 3*S1 + 3, where S1 is the number of elements on stack1
when the operation is called. The while loop iterates as long as stack1 is not empty,
meaning it iterates a total of S1 times. On each iteration, 3 units of work are performed:
checking that stack1 is non-empty, popping the top element off stack1, and pushing this
element onto stack2. This corresponds to the 3*S1 component of the actual cost. In order
for the while loop to terminate, its condition has to be checked one final time after stack1
has been emptied, and this accrues another unit of work. The remaining 2 units of work are
accrued by initially checking whether stack2 is empty and finally popping the top element
off stack2.
(b)
We want to amortise DEQUEUE, since it is the expensive operation for the queue. To do
this, we need to give ENQUEUE a higher amortised cost than its actual cost (so it helps pay
for DEQUEUE operations).

Every time ENQUEUE is called, it takes 1 unit of work to push an element to stack1. Later,
when DEQUEUE is called, it takes 3 units of work to take this same element off stack1.
Therefore, assign an amortised cost of 4 to ENQUEUE. This will pay for the 3*S1 portion of
the actual cost of DEQUEUE.

The amortised cost of DEQUEUE should therefore be 3, as this will pay for the additional 3
units of work in the actual cost of DEQUEUE.

(c)
Consider the DEQUEUE operation, with actual cost 3*S1 + 3:

Amortised cost = Actual cost + (Φ(Di) - Φ(Di-1))


Amortised cost = 3*S1 + 3 + (Φ(Di) - Φ(Di-1))

We need (Φ(Di) - Φ(Di-1)) = -3*S1 in order for the amortised cost to be 3 (as we defined
in part (b)). Therefore, take the potential function to be Φ(D) = 3*S1, i.e. three times the
number of elements on stack1. This works, as after a DEQUEUE operation (when stack2 is
empty), the size of stack1 drops from S1 to 0, and thus the potential drops by 3*S1.
Checking ENQUEUE against the same potential: it pushes one element onto stack1, so its
amortised cost is 1 + 3 = 4, matching part (b).

Another way to think about it is that every time we want to get one element off stack1, we
need to expend 3 units of work. This means we need to store 3 units of potential for each
element we push to stack1 so we can later pop it off the stack.
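
A small simulation of this analysis (my own sketch; the unit costs follow the model from
part (a), and the flat charge of 3 for a DEQUEUE that skips the transfer is a safe
overestimate):

import random

stack1, stack2 = [], []
actual = amortised = 0

def enqueue(x):
    global actual, amortised
    stack1.append(x)
    actual += 1     # one push
    amortised += 4  # 1 for the push + 3 stored credit

def dequeue():
    global actual, amortised
    if not stack2:
        actual += 3 * len(stack1) + 3  # empty stack1 into stack2, then pop
        while stack1:
            stack2.append(stack1.pop())
    else:
        actual += 3                    # at most: the checks plus the pop
    amortised += 3
    return stack2.pop()

for step in range(1000):
    if not (stack1 or stack2) or random.random() < 0.6:
        enqueue(step)
    else:
        dequeue()
    assert actual <= amortised         # the credit never goes negative
print("actual:", actual, "amortised:", amortised)
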
Question 5
(a)
OAS-CHECK(G, A, k):
    // check that the array A contains every vertex of G exactly once
    vertices = G.V
    for i = 1 to A.size
        current = A[i]
        if vertices.contains(current)
            vertices.remove(current)
        else
            // current vertex was previously removed, meaning it is duplicated in A
            return false
    if vertices.size != 0
        return false

    // check that A describes a valid path of weight < k
    totalCost = 0
    for i = 1 to (A.size - 1)
        start = A[i]
        finish = A[i+1]
        if weight(start, finish) != ∞
            totalCost += weight(start, finish)
        else
            return false
    if totalCost < k
        return true
    else
        return false
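
The same check as runnable Python (my own sketch; the graph is represented as a vertex list
plus a weight dictionary, with missing entries standing in for ∞):

import math

def oas_check(vertices, weight, A, k):
    # A must contain every vertex of the graph exactly once.
    if sorted(A) != sorted(vertices):
        return False
    # Sum the edge weights along A; a missing edge means no valid path.
    total = 0
    for u, v in zip(A, A[1:]):
        w = weight.get((u, v), math.inf)
        if w == math.inf:
            return False
        total += w
    return total < k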

(b)
OAS(G):
    minPath = NONE  // an array of vertices that compose a minimum cost path in G
    pathCost = |G.V| * G.MAXWEIGHT()  // cost of minPath, initialised to an upper
                                      // bound on the cost of any minimum cost path in G
    for each of the |G.V|! permutations of G.V
        A = current permutation
        k = 0
        while k < pathCost
            if OAS-CHECK(G, A, k+1) = true
                minPath = A
                pathCost = k
                break while loop
            k++
        if pathCost == 0  // won't be able to find a shorter path
            break for loop
    return minPath
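
A direct Python translation (my own sketch, reusing oas_check from part (a); like the
pseudocode, it is brute force over all |V|! permutations):

from itertools import permutations

def oas(vertices, weight, max_weight):
    min_path = None
    path_cost = len(vertices) * max_weight  # upper bound on any path cost
    for A in permutations(vertices):
        k = 0
        while k < path_cost:
            if oas_check(vertices, weight, list(A), k + 1):
                min_path, path_cost = list(A), k
                break
            k += 1
        if path_cost == 0:                  # no shorter path is possible
            break
    return min_path
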
(c)
To show that OAS is NP-Complete, we need to show that OAS is in both NP and NP-Hard.

We already know that OAS ∈ NP, as we have designed a verification algorithm that runs in
polynomial time in part (a) of this question. We will now show that OAS ∈ NP-Hard by reducing an
NP-Complete problem to OAS.

Using the transitivity of polynomial time reductions, if X is reducible to Y in polynomial time
(statement 3) and Y is reducible to OAS in polynomial time (statement 1), then X is reducible
to OAS in polynomial time. Statement 2 tells us that X ∈ P, and since P ⊆ NP, X ∈ NP.
Statement 4 tells us that X ∈ NP-Hard. Since X ∈ NP and X ∈ NP-Hard, X ∈ NP-Complete.
Since we can reduce X to OAS in polynomial time, OAS ∈ NP-Hard.

Therefore, we have shown that OAS ∈ NP and OAS ∈ NP-Hard, proving that
OAS ∈ NP-Complete.
