
Advanced Algorithms & Data Structures


November 8, 2017
The University of Queensland COMP4500/COMP7500
Dr Larissa Meinicke 2017, Semester 2

[Exam Paper] [ECP (2017, Sem 2): Assessment]

Question 1. Recurrence Relations [15 marks] (3 parts)


Assuming that T(n) ∈ Θ(1) for all n ≤ n0, for a suitable constant n0, solve each of the following recurrences to
obtain an asymptotic bound on their complexity as a closed form. Make your bounds as tight as possible.
Show your working.

a) [5 marks] T(n) = 6T(n/3) + n^2
b) [5 marks] T(n) = 16T(n/4) + n
c) [5 marks] T(n) = 4T(n−1) + 1

Solution:

(a)

a = 6
b = 3
f(n) = n^2

log_b a = log_3 6, and since 3^1 < 6 < 3^2, we have 1 < log_3 6 < 2.

n^(log_b a) = n^(log_3 6) ≈ n^1.63

f(n) = n^2 ∈ Θ(n^2)

Comparing f(n) against n^(log_b a), we get f(n) = Ω(n^(log_3 6 + ε)) for some constant ε > 0, which is case 3 of the Master
Theorem, so T(n) = Θ(n^2), provided the regularity condition holds.

Check regularity condition: a·f(n/b) = 6·(n/3)^2 = (2/3)·n^2 ≤ c·f(n) with the constant c = 2/3 < 1. Thus the regularity check holds.

I used a·f(n/b) <= c·f(n) and got c = 2/3. (+1)

I didn't see c·f(n/b) at all, even in the slides. Can anyone clarify? (+1)(+1)
The correct form is definitely a·f(n/b) <= c·f(n), straight from the lecture notes. (+1)
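As a quick numeric sanity check (my addition, not part of the expected working), the recurrence can be evaluated directly with integer division standing in for n/3; the ratio T(n)/n^2 settles at a constant (about 3, since T(n) ≈ n^2 · Σ (6/9)^i), consistent with Θ(n^2):

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 6T(n/3) + n^2, with T(n) = Θ(1) below the cutoff
    if n < 3:
        return 1
    return 6 * T(n // 3) + n * n

for n in (3 ** k for k in range(4, 14)):
    print(n, T(n) / n ** 2)   # ratio tends to 3, so T(n) ∈ Θ(n^2)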

(b)

a = 16
b = 4
log_b a = log_4 16 = 2
n^(log_b a) = n^2

f(n) = n ∈ Θ(n)

Comparing f(n) against n^(log_b a), we get f(n) = O(n^(log_b a − ε)) for ε = 1, which is case 1 of the Master Theorem.

T(n) = Θ(n^2).

(c) Iterate.

I got T(n) = O(4^n). Double check anyone? (+12)

Lazy way via recursion tree. Although I think if you go this way, you need to verify by
induction that it works (it's only a guess).

Nodes per layer: 4^i for i = 0, 1, …
Work done per node: O(1)
Layers: n
Get the same summation: Σ_{i=0}^{n−1} 4^i · O(1), which is Θ(4^n).
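Spelling out the iteration (with T(0) ∈ Θ(1) as the base case) gives the same closed form, which doubles as the guess for the induction check:

T(n) = 4·T(n−1) + 1
     = 4^2·T(n−2) + 4 + 1
     = …
     = 4^n·T(0) + Σ_{i=0}^{n−1} 4^i
     = 4^n·T(0) + (4^n − 1)/3
     ∈ Θ(4^n)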

Yeah I did a recursion tree as well; I reckon we might lose 1 mark
because the notation isn't completely "Larissa" style ¯\_(ツ)_/¯
Question 2. Undirected Graphs and Shortest Paths [20 marks] (1 part)

Alternate Solution: Basically BFS

def set_nearest(G, X):
    Q = Queue()
    for u in G.V:
        u.nearest = None   // infinity
        u.colour = white   // unvisited
    let x be any vertex from X
    x.nearest = x
    Q.enqueue(x)
    x.colour = grey
    while not Q.is_empty():
        u = Q.dequeue()
        for v in G.adj[u]:
            if v.colour == white:   // unvisited
                if v in X:
                    v.nearest = v
                else:
                    v.nearest = u.nearest   // why u.nearest instead of u? (+1) because u.nearest is in X and u might not be
                v.colour = grey
                Q.enqueue(v)
        u.colour = black

(I think this answer is wrong: with v.nearest = u.nearest, u.nearest will always be x, since the search starts from a single x; vertices closer to some other member of X get the wrong answer.)
Alternate solution 2:

def set_nearest(G, X):
    Q = Queue()
    for u in G.V:
        u.nearest = None
        u.colour = white
    for x in X:   // we should only choose one x to start? No: enqueueing all of X (multi-source BFS) is what fixes the problem above
        x.nearest = x
        Q.enqueue(x)
        x.colour = grey
    while not Q.is_empty():
        u = Q.dequeue()
        for v in G.adj[u]:
            if v.colour == white:
                v.nearest = u.nearest
                v.colour = grey
                Q.enqueue(v)
        u.colour = black
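For reference, a runnable version of alternate solution 2 in plain Python, assuming the graph is given as an adjacency-list dict (that interface is my assumption, not the exam's):

from collections import deque

def set_nearest(adj, X):
    # adj: dict vertex -> list of neighbours; X: set of vertices.
    # Multi-source BFS: every member of X starts at distance 0, so each
    # vertex inherits the nearest member of X along a shortest path.
    nearest = {u: None for u in adj}    # None plays the role of white/unvisited
    Q = deque()
    for x in X:
        nearest[x] = x
        Q.append(x)
    while Q:
        u = Q.popleft()
        for v in adj[u]:
            if nearest[v] is None:      # unvisited
                nearest[v] = nearest[u]
                Q.append(v)
    return nearest

# Path 1-2-3-4-5 with X = {1, 5}: each end claims its own half.
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(set_nearest(adj, {1, 5}))   # e.g. {1: 1, 2: 1, 3: 1, 4: 5, 5: 5}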
Question 3. Dynamic Programming [30 marks] (3 parts)

(a)
Best Answer:
Base case. If you have zero guests, well, no rating is possible. Thus that's the base case: every subproblem with i = 0 is worth 0.
General case. Need to consider every possible assignment, basically. The choices are to assign the i'th
guest to each room in turn, then pass the occupied room on to the next recursive call (the next guest) to
indicate it cannot be reused. Do this in a backtracking pattern and you get the following recurrence
(equivalent to Person 2's below), where M(i, j) is the best total satisfaction for the first i guests given that
guest i may not use room j, with j = 0 meaning no room is forbidden:

M(0, j) = 0 for 0 <= j <= m
M(i, j) = max{ s[i][x] + M(i−1, x) | 1 <= x <= m, x != j } for 1 <= i <= n

General solution. The general solution is M(n, 0).

Runtime. As there are Θ(nm) subproblems, and each subproblem spends O(m) time, we have a runtime of
O(nm^2).
Person 1:
M(1, j) = s[1][j] (base case)
M(i, 0) = max(M(i, x)) (x ranges over 1 to m)
M(i, j) = max(M(i−1, x)) + s[i][j] (where x ranges over 1, 2, …, m except the value of j)

Person 2: Alternate suggestion: (+1)
M(0, j) = 0
M(i, j) = max(s[i][x] + M(i−1, x) ; x != j) (is it correct?)
// I think the one above is calculating the satisfaction with the guest in room j, rather than not in room j, for
the base case. Also the M(i, 0) case I think can just get absorbed into the other one. Please correct me if I
am wrong.

Imagine a table of sub-problem solutions M(i, j) with n + 1 rows and m + 1 columns, indices ranging over 0..n and
0..m.

All cells in the i == 0 row are initialised to 0.

Each subproblem M[i][j] in rows i = 1..n relies on all cells in the previous row except the jth column value.

Therefore calculate sub-problem solutions row by row to respect sub-problem dependencies.

Optional code:

for j <-- 0 to m:   // base cases
    M[0, j] = 0

for i <-- 1 to n:   // general cases
    for j <-- 0 to m:
        M[i, j] = ...

(b)

for (int i = 1; i <= n; i++)
    for (int j = m; j >= 0; j--)
        M(i, j) = ...

Here row i = 1 must be calculated first, and among the j-values, j = 0 must be calculated last under Person 1's
convention (M(i, 0) is a max over the current row), as you must know the optimal arrangement given guest i
stayed in some room j. The time complexity is O(nm^2) (+1), as each max operation requires at most m
additions, and this is done about mn times.

Alternative answer:

s:
i\j   1   2   3
 1    1   3   4
 2    1   5   3
 3    9   4   1

// M(i, 0) = max{ M(i−1, j) + s[i][j] | 1 <= j <= m }

M:
i\j   0    1    2    3
 0    0    0    0    0
 1    4    4    4    3
 2    9    9    6    9
 3    18   10   18   18

Return M[3][0].

Pseudocode:

for i = 0 to n:
    if i == 0:
        // M(0, j) = 0 for 0 <= j <= m
        for j = 0 to m: M[0][j] = 0
    else:
        // M(i, 0) = max{ M(i-1, j) + s[i][j] | 1 <= j <= m }
        ans = 0
        for j = 1 to m:
            if M[i-1][j] + s[i][j] > ans: opt = j   // remember the argmax room
            ans = max(ans, M[i-1][j] + s[i][j])
        M[i][0] = ans

        // M(i, X) for 1 <= X <= m:
        // M(i, X) = max{ M(i-1, k) + s[i][k] | 1 <= k <= m and k != X }
        for X = 1 to m:
            if X != opt:
                M[i][X] = M[i][0]   // forbidding room X does not exclude the best choice
            else:
                ans = 0
                for k = 1 to m:
                    if k != X:
                        ans = max(M[i-1][k] + s[i][k], ans)
                M[i][X] = ans

Runtime: n rows, each doing m work for M[i][0] plus m work for the single recomputed column, i.e. n·(m + m), so Θ(nm).
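A runnable Python sketch of the O(nm) version above; the function name and the 0-based encoding of s are my choices, not the paper's:

def max_satisfaction(s):
    # s[i][j]: satisfaction of guest i in room j (both 0-based);
    # consecutive guests may not use the same room.
    # prev[x] (x < m) plays the role of M(i, room x); prev[m] is M(i, 0).
    n, m = len(s), len(s[0])
    prev = [0] * (m + 1)                       # M(0, j) = 0 for all j
    for i in range(n):
        # vals[x] = s[i][x] + M(i-1, x): best total if guest i takes room x
        vals = [s[i][x] + prev[x] for x in range(m)]
        best = max(vals)
        opt = vals.index(best)
        second = max((v for x, v in enumerate(vals) if x != opt), default=0)
        # Forbidding any room other than opt leaves the max unchanged.
        cur = [best if j != opt else second for j in range(m)]
        cur.append(best)                       # the unrestricted M(i, 0)
        prev = cur
    return prev[m]

# The worked example above (rooms 1..3 become columns 0..2):
s = [[1, 3, 4], [1, 5, 3], [9, 4, 1]]
print(max_satisfaction(s))   # 18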
Alternative Sol: the same recurrence evaluated with the argmax trick runs in O(nm) overall.

(c)
For a greedy algorithm to exist, the problem needs to have the property of optimal substructure (which the
above problem does) and the greedy-choice property (roughly, "locally optimal choices lead to globally
optimal solutions").

The problem would have the greedy-choice property if the same meeting room could be used for
meetings with consecutive dignitaries.

In that case we could iterate through the n dignitaries, scan through their satisfaction preferences for each
of the m meeting rooms, and choose the room they prefer most before moving to the next dignitary.

This could be done in Θ(nm) time. It would be faster than a dynamic programming solution because we
don't need to check all subproblems at each step.

(For a greedy algorithm to exist, a locally optimal choice must be globally optimal. Hence if such an
algorithm exists it would be faster, as a DP solution requires all subproblems to be solved, which takes
longer. Greedy selects one subproblem and doesn't look back: fewer subproblems, less time.)

The DP solution can be done in O(nm) time, so greedy would not be asymptotically faster, just faster
by a constant factor.
Question 4. Amortised Analysis [20 marks] (2 parts)
Question states “measure the complexity of the operations in terms of the number of method calls to the
iterator procedure HASNEXT() only”

a)

Actual cost of DEQUEUE, measured in HASNEXT calls:

Σ_{i=0}^{S1−1} (S2 + i + 1) = S1·S2 + S1(S1 + 1)/2

The above arises since for every element in List 1, we go through List 2 (why iterate all of List 2?
Because hasNext() only returns false once the scan runs past the last element, and in the worst case
every inserted element is larger than everything already there. Agree +1 <- worst case):
S2 calls, plus the elements we've added so far (i), plus 1 since a final call to HASNEXT is made
after we go through every element of List 2. ENQUEUE makes no HASNEXT calls, so its actual cost is 0.
(This only changes the amortised ENQUEUE below to S2 + S1 + 1.)
I think the answer above is correct (+1). As a worst-case example of what is happening,
let’s say:

List 1 = [5, 6, 7, 8]
S1 = 4
List 2 = [1, 2, 3, 4]
S2 = 4

When DEQUEUE is called we take the first element of List 1 (in this case 5) and begin
iterating over List 2 to find its correct position. Since all of the elements of List 2 are smaller
than 5, we iterate over all the elements of List 2, which results in a total of S2 + 1 calls to the
HASNEXT function (the iterator begins at the very start of the list, such that NEXT will
return the first element, 1; HASNEXT is called once for each element of List 2 and then once
more past the final element: START -> 1 -> 2 -> 3 -> 4 -> null, where each -> is a call to HASNEXT).
So for adding the first element of List 1 to List 2, we have S2 + 1 calls to HASNEXT. For the
next element, there is now one extra element in List 2 than before (since one was just
added from List 1). Thus, for the second element, there will be S2 + 1 + 1 calls to
HASNEXT. Generalising to the nth element of List 1, there will be
S2 + 1 + (n − 1) calls to HASNEXT. Taking this as a summation over all the elements of List 1
we get the summation in the previous answer (where (n − 1) becomes the number of
elements already moved to List 2, i):

Σ_{i=0}^{S1−1} (S2 + i + 1)
i=0

Which can be solved as shown in the other answer. Hope this is correct and maybe helps
someone. Let me know if my thinking is wrong.
People 2 & 3
Array-based Linked List Implementation
If the size of the list S1 <= S2, then (1 + S1) * S1 / 2
Else (1 + S2) * S2 / 2
Pointer-based Linked List Implementation
(1 + S1) * S1 / 2
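A small simulation sketch of the structure as described above (List 1 holds raw enqueues; DEQUEUE merges them into the sorted List 2), counting only the HASNEXT calls; the function name and list encoding are mine:

import bisect

def dequeue_hasnext_calls(list1, list2):
    # Merge each List 1 element into its sorted place in List 2.
    # Scanning past p elements costs p + 1 HASNEXT calls: the extra call
    # either peeks the element that ends the scan or returns false at the end.
    calls = 0
    for e in list1:
        pos = 0
        while True:
            calls += 1                  # one HASNEXT call
            if pos >= len(list2) or list2[pos] >= e:
                break
            pos += 1
        bisect.insort(list2, e)         # the insertion itself costs no HASNEXT
    return calls

# Worst case from the example: every new element exceeds all of List 2.
print(dequeue_hasnext_calls([5, 6, 7, 8], [1, 2, 3, 4]))   # 26
print(4 * 4 + 4 * (4 + 1) // 2)                            # S1·S2 + S1(S1+1)/2 = 26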

b) Φ(S1, S2) = S1·S2 + S1(S1 + 1)/2

Amortised cost = actual cost + change in potential:

ENQUEUE = 0 + Φ(S1 + 1, S2) − Φ(S1, S2)
        = S2 + S1 + 1
(Expanding: Φ(S1+1, S2) − Φ(S1, S2) = S2 + (S1+1)(S1+2)/2 − S1(S1+1)/2 = S2 + S1 + 1, so yes, it is S2 + S1 + 1 rather than S2 + S1.) (+3)

DEQUEUE = S1·S2 + S1(S1 + 1)/2 + Φ(0, S1 + S2) − Φ(S1, S2)
        = S1·S2 + S1(S1 + 1)/2 + (0 + 0) − (S1·S2 + S1(S1 + 1)/2)
        = 0

And for every operation, it is trivially true that Φ(D_i) >= Φ(D_0).

^ can someone explain how they got this (bolded)? Since S1, S2 >= 0 we have Φ(S1, S2) >= 0, and the
structure starts empty with Φ(D_0) = Φ(0, 0) = 0, so the potential never drops below its starting value.

My suggested answer: potential function = the number of elements in the current two lists that are less than the
element z about to be inserted. The potential change is 1 if this element is >= the previous element (the previous
element is then also less than this element, so add 1), otherwise 0 (no change).

Isn't this question similar to 2015 Q3, where we can only dequeue as many items as we have enqueued
(in 2015 we could only sell as many items as we had bought)? If the actual cost is in terms of the number of
method calls to HasNext(), and we defined in (a) that:

ENQUEUE() = 0
DEQUEUE() = S1·S2 + S1(S1+1)/2

then can we say that for the amortised cost of both functions, the potential is the "credit" for possibly
dequeuing later?

E.g. (actual cost + change in potential):

ENQUEUE() = S2 + 1 + S1 (this credit can be used to iterate through the existing List 2 [+S2], make the final
HasNext() call that returns null [+1], and finally pay for the element to be iterated over by the remaining i
elements of List 1 [+S1])

DEQUEUE() = 0

But I don't know how to prove this for dequeue. Can someone confirm or reject this?

Question 5. Reductions [15 marks] (3 parts)

(a) Showing NP-hardness:

Call LONGEST-SIMPLE-CYCLE with k = |G.V|, the minimum number of edges needed to form a Hamiltonian
cycle. If this algorithm finds such a cycle, it also outputs a yes for the Hamiltonian cycle problem. This
transformation of the problem instance can be done in polynomial time by mere variable assignment. (+2)

I think it should be k = |G.E|, shouldn't it? As k is the minimum number of edges in the cycle and not
the minimum number of vertices. Also set v to be any vertex in G.V. (+1)

Should be k = |G.V|. Consider e.g. 5 vertices: a simple cycle through all of them will have no more (or fewer) than 5
edges. (+1)
Counterexample to k = |G.E|: a graph with two vertices connected by a pair of parallel edges, where one vertex also
has 10 self-loop edges. We can get a simple cycle of |G.V| = 2 edges here, but not one of |G.E| = 12 edges.
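The reduction itself is tiny; a sketch in Python, treating LONGEST-SIMPLE-CYCLE as a given decision procedure (the names here are placeholders, not from the paper):

def ham_cycle(G):
    # HAM-CYCLE via LONGEST-SIMPLE-CYCLE: a Hamiltonian cycle is exactly a
    # simple cycle with |G.V| edges, and it passes through every vertex,
    # so any choice of v works. The transformation is variable assignment.
    if not G.V:
        return False
    v = next(iter(G.V))                     # any vertex of G
    k = len(G.V)                            # edge count of a Hamiltonian cycle
    return longest_simple_cycle(G, v, k)    # hypothetical oracle being reduced to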

(b) Showing NP-completeness:

A problem is NP-complete if it meets two requirements:

1. It is shown to be in NP.
2. It is shown to be NP-hard.

Showing the problem is in NP: to be in NP, there must exist some polynomial-time verification algorithm
V such that, for all inputs, the decision problem answers 1 (yes) if and only if there exists a polynomial-length
"certificate" for which the verifier also outputs 1 (yes).

<Polytime verification algorithm goes here>. Checks:

● There are at least k edges.
● The start and end vertex of the path are the same (forms a cycle).
● The set of cycle vertices contains v.
● The path is simple (see the algorithm below).
Poly-time verification algorithm:
Let S = the vertices visited, in order.
LSC-Check(G, S, k, v):
1. Check no vertex appears more than once in S.
2. Check that all pairs of adjacent vertices in S are connected by an edge in G.E.
3. Check that an edge exists in G.E connecting the first and last vertices in S.
4. Check that vertex v is in S.
5. Return Length(S) >= k (a cycle on Length(S) vertices has exactly Length(S) edges).
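A runnable sketch of LSC-Check, assuming the graph is encoded as a vertex set V and a set E of unordered edge pairs (my encoding, not the paper's):

def lsc_check(V, E, S, k, v):
    # Verify certificate S: the vertices of a claimed simple cycle, in order.
    if len(set(S)) != len(S):                  # 1. no vertex repeats (simple)
        return False
    for u, w in zip(S, S[1:]):                 # 2. consecutive vertices adjacent
        if frozenset((u, w)) not in E:
            return False
    if frozenset((S[0], S[-1])) not in E:      # 3. closing edge exists
        return False
    if v not in S:                             # 4. cycle passes through v
        return False
    return len(S) >= k                         # 5. cycle has >= k edges

# Triangle a-b-c: is there a simple cycle of >= 3 edges through 'a'?
V = {'a', 'b', 'c'}
E = {frozenset(p) for p in [('a', 'b'), ('b', 'c'), ('c', 'a')]}
print(lsc_check(V, E, ['a', 'b', 'c'], 3, 'a'))   # True

All checks run in polynomial time and S has polynomial length, so the problem is in NP.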

Showing the problem is NP-hard: established in part (a).

This shows HAM-CYCLE reduces to LONGEST-SIMPLE-CYCLE in polynomial time, so LONGEST-SIMPLE-CYCLE
is at least as hard as HAM-CYCLE; together with membership in NP, this makes LONGEST-SIMPLE-CYCLE NP-complete.

(c) No known polynomial-time algorithm to solve a particular NP problem. Implications: you are
unlikely to be able to obtain an efficient algorithm (without approximations).

I think this is asking what happens if there is a proof (currently, none exists) that P != NP.

My thinking: to truly prove P != NP, our friend needs to find a way to prove EVERY problem in NP
has no polynomial-time algorithm to solve it. He has only found a proof for one particular
problem. While this might have implications for dedicating more resources towards improving
the approximation algorithm for this particular problem, it still means there might be a poly-time
algorithm for another NP-complete problem, in which case P = NP. He has only proven a single
instance, not the entire set.

Remember that P is a subset of NP. So trivially, there are problems in NP with polynomial-time
solutions.

My thoughts: this means that NO NP-complete problems have polynomial-time solutions.


If we can reduce problem X to problem Y, it means Y is at least as hard as X. Since NPC problems
are the hardest in NP (every problem in NP can be reduced to them), the fact that one of them can't be
solved in polynomial time has no bearing on anything less hard (the rest of NP).

But all NPC problems can be reduced to each other, and so are "equally" hard. If another NPC
problem A had a polynomial-time solution, we could reduce our provably-not-polynomial problem B
to A in polynomial time and thereby solve B in polynomial time: a contradiction.

Therefore, no NPC problem can have a polynomial-time solution.

END OF EXAMINATION
