
Good luck to everyone!

(+9999999)
Thanks fellas

Anyone there for a deferred exam? -2 G Luck

Q1

a) |G.V|^(N-2)

n = 5, m = 4: the while loop makes 5 calls with m = 3
n = 5, m = 3: those make 25 calls with m = 2
n = 5, m = 2: those make 125 calls with m = 1
n = 5, m = 1: those make 625 calls with m = 0
So 5^4 calls reach m = 0.
|G.V|^(n-1) or |G.V|^m; as m = n-1, both are equal if you state that m = n-1
(+3)
n^(n-1) where n is |G.V| (+3. n = |G.V| is given in the question)
Isn't that |G.V|^m rather than n-2? (+1)
They only ask for exactly how many times the method will call
SHORTESTPATH(G, weight, x, y, 0), NOT the total number of recursive calls.
So we get |G.V|^(n-2), or n^(m-1) if m = n-1

Must be n^(n-1) (Not an official answer)


(+3) I made a terrible diagram to back this up, but it was too ugly. I believe
this is the correct answer
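A quick sketch to sanity-check the counting above (not an official answer). It assumes each call with m > 0 loops over all n = |G.V| vertices and recurses with m - 1, which is the structure the trace above implies; the function name is made up.

# Hedged check: count how many calls reach m == 0, assuming every call with m > 0
# makes one recursive call per vertex with m - 1 (as in the trace above).
def calls_reaching_zero(n, m):
    if m == 0:
        return 1
    return n * calls_reaching_zero(n, m - 1)

print(calls_reaching_zero(5, 4))  # 625 == 5**4, matching the trace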

b)
Base case
T(m) = O(1) if m == 0
General case
T(m) = n * T(m - 1) + 1 if m != 0 (can anyone explain this more?)
Explanation: regardless of what is passed in as v, we iterate over every
vertex (i.e. n times) and make a recursive call with argument m-1. Total
time is T(m) = n*T(m-1) + 1 (the +1 is for the constant-time work outside of
the loop).
Tight bound: Θ(n * 3^m)? Isn't n*3^m = 3n^m = n^m????????
Where is the 3 coming from? (+4)
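A hedged check of the recurrence above (not an official answer): unrolling T(m) = n*T(m-1) + 1 with T(0) = 1 gives T(m) = n^m + n^(m-1) + ... + 1 = (n^(m+1) - 1)/(n - 1), which is Θ(n^m) for n ≥ 2, so it is not obvious where a 3 would come from.

# Unroll the recurrence numerically and compare against n^m.
def T(n, m):
    return 1 if m == 0 else n * T(n, m - 1) + 1

n = 5
for m in range(6):
    print(m, T(n, m), n ** m)  # T(m) stays within a small constant factor of n^m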
Q2

Use BFS recording parent vertices. Stop when you reach the source vertex

Find-Cycle(G, source) {
    for u in G.V
        u.dist = infinity
        u.colour = white
        u.parent = null

    Q = Queue()
    Q.enqueue(source)

    while not Q.isEmpty()
        curr = Q.dequeue()
        for u in G.adjacent[curr]
            if u == source
                return u → curr → curr.parent → curr.parent.parent → … and so forth
                // we should have an extra function to return the path?
            if u.colour == white
                u.colour = grey
                u.parent = curr
                Q.enqueue(u)
        curr.colour = black

    return null
}

It should be (vᵢ₋₁, vᵢ) instead of (vᵢ₋ᵢ, vᵢ) at the 3rd line. The question asks for
the sequence of the cycle, but the previous answer didn't do that part, so I'll write
a new one. The idea is the same, using the BFS algorithm.
By Pinyao (not official answer):

BFS(G, v) // v is the source node v0
1   for u in G.V - {v}
2       u.distance = ∞; u.colour = white; u.parent = NULL // initialize all nodes except source v
3   v.distance = 0; v.colour = grey; v.parent = NULL; destination = NULL // init the source
4   Q.initialize() // init an empty queue
5   Q.enqueue(v)
6   while not Q.isEmpty() && destination == NULL // stop once all nodes are visited or the destination is reached
7       current = Q.dequeue()
8       for u in G.adj[current]
9           if u == v // an edge back to the source closes the cycle, so break the loop
10              destination = current
11              break
12          if u.colour == white
13              u.distance = current.distance + 1
14              u.colour = grey; u.parent = current
15              Q.enqueue(u)
16      current.colour = black
17  if destination == NULL // return NIL, as the question asks, if the source is never reached again
18      return NIL
19  else // otherwise, trace back from the destination
20      sequence = {destination} // add the destination to the list first
21      while destination != v
22          destination = destination.parent // iterate until we reach the source v
23          sequence.add(destination)
24      sequence.reverse() // the list runs from destination back to source, so reverse it
25      return sequence

Below is an analysis of the time complexity.

As we all know, BFS has a time complexity of Θ(V+E).
In this question V = x and E = y, so the first 16 lines take Θ(x+y) time.
From line 17 to 25 there are 2 cases:
1. destination is NULL: the time complexity is Θ(1).
2. destination is not NULL, so a cycle exists. We iterate through the parents of destination; the
time complexity is at most Θ(x), which is the case where the cycle contains all the nodes in the graph.
Both Θ(x+y+1) and Θ(x+x+y) belong to Θ(x+y).
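A minimal runnable sketch of the same idea (not an official answer), assuming a directed graph stored as an adjacency-list dict; find_cycle is just an illustrative name.

from collections import deque

def find_cycle(adj, source):
    parent = {source: None}              # visited set doubles as the parent map
    queue = deque([source])
    while queue:
        current = queue.popleft()
        for u in adj[current]:
            if u == source:
                # Edge current -> source closes a cycle; trace parents back to the source.
                path = [current]
                while path[-1] != source:
                    path.append(parent[path[-1]])
                path.reverse()
                return path + [source]   # source, ..., current, source
            if u not in parent:          # white vertex
                parent[u] = current
                queue.append(u)
    return None                          # NIL: no cycle through the source

print(find_cycle({0: [1], 1: [2], 2: [0, 1]}, 0))  # [0, 1, 2, 0]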
Q3

a)
Total cost is 4
What is the total cost of performing this sequence of (m) ADDTEMPERATURE
operations?
Size of P or R (not really sure why?) - after m operations, the size of R is m? Hmm
m is the number of calls to ADDTEMPERATURE, so yes

For this sort of question, the exact mathematical way of describing the sequence isn't
important because it would be very awkward; just describing the potential function as
the length of that sequence is enough, and then reasoning about the potential change
relative to that definition is enough. From Luke
X: 1 R: [1] P: [-1] cost: 0
X: 2 R: [1, 2] P: [-1, -1] cost: 1
X: 3 R: [1, 2, 3] P: [-1, -1, -1] cost: 2
X: 4 R: [1, 2, 3, 4] P: [-1, -1, -1, -1] cost: 3
X: 3 R: [1, 2, 3, 4, 3] P: [-1, -1, -1, -1, 3] cost: 3
X: 7 R: [1, 2, 3, 4, 3, 7] P: [-1, -1, -1, -1, 3, -1] cost: 5
You could say that x = 7 pays the cost for those x that haven't paid yet (the 3 at the end
and the 1 at the start haven't paid their cost yet).
Notice that before x = 7 is inserted into R, the size is 5 and the total cost is also 5. Note:
the while loop runs before the append, so the cost is incurred before x is appended; or you can use
S - 1 after x is appended, which is the same thing, as long as you can show that the potential
satisfies Φ(D0) = 0 and Φ(Di) ≥ Φ(D0).
The definition is important to state:
If there exists an x that is greater than all k in R, then total cost = size of R before x is
added to the array (S - 1 once x has been appended).
Going a bit deeper:
- The worst case is a non-decreasing (or all-equal) x sequence.
- If x is greater than the previous x, then the total cost increases by 1.
- If there exists an x that is less than or equal to the previous x, then the current cost = 0 (+1)
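For reference, here is a hedged reconstruction of ADDTEMPERATURE from the trace above (guessed from the behaviour, not copied from the exam): R holds the readings, P[i] holds the index of the most recent earlier reading ≥ R[i] (or -1 if none), and each step of the while loop counts 1 toward the cost.

R, P = [], []

def add_temperature(x):
    cost = 0
    i = len(R) - 1
    while i != -1 and R[i] < x:
        i = P[i]          # jump to the previous reading that was >= R[i]
        cost += 1
    R.append(x)
    P.append(i)
    return cost

total = 0
for x in [1, 2, 3, 4, 3, 7]:
    total += add_temperature(x)
    print(f"x={x} R={R} P={P} cost so far: {total}")

Running this reproduces the R, P and cumulative cost columns of the trace above.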
b)
Let the potential function be Φ(S), where S = size of P (or R), following the lecture slides.
Then calculate the change and the amortised cost:
Amortised cost = c_m + Φ(D_m) - Φ(D_m-1) = size of P + 1 (the change in potential is +1)
(not sure how to prove this to be constant?) Then we have O(m) for m operations.
Prove it is not negative: done.
Ensure that Φ(D0) ≤ Φ(Di) for all i > 0; this is true, since Φ(Di) will never be negative for
a positive size of P.

Total is O(m)

I'm not convinced by the logic of this answer. The change in potential is meant to
cancel out the actual cost. The potential function you've defined has a change of
1, yet in the example provided the addition of 7 has a cost of 3.

We can try to consider the best and worst cases to see how this will work.
Consider inserting temperatures of 10, 11, 12. Each insert here has an actual
cost of 1 because R.get(i) < x and P.get(i) = -1, so it breaks after 1 iteration. The
arrays look like this:
R [10, 11, 12]
P [-1, -1, -1]

Similarly, we can look at the worst case by inserting 12, 11, 10. Each insert here
has an actual cost proportional to the number of previous elements, since it
must step back through every element in R (-1: it doesn't call the while loop,
why? It should be 0 for the actual cost). The arrays would look like this:
R [12, 11, 10]
P [-1, 0, 1]
With this in mind, we'll try to define the potential function. If the data structure is
in a state where the actual cost could be higher, it should have higher potential. If
we stare at R for long enough, defining the potential as the length of the longest
decreasing subsequence which includes the last element might work.

This is great because it makes computing the amortised cost really easy.

The actual cost of an "add" operation is the number of steps back (from the last
element in the array) until we find an element >= the new element. This is exactly
cancelled (up to a constant) by the change in the longest decreasing subsequence, i.e. the change in potential.

Working this out with the example from (a), the potential after each insert is 1, 2,
2, 3, and 1. The actual costs were 0, 0, 0, 1, 0, and 3. Using the formula
“amortised cost = actual cost + Φ(Di) - Φ(Di-1)” gives us an amortised cost of 1 for
each add operation.
+1+1

Alternatively, the potential can be thought of as the length of the linked list
defined by the hint in the question. Because of the way R is constructed, this
turns out to be exactly the longest decreasing subsequence as above.

How about Φ = p, where p is the number of unique elements in P after and including the last -1? +1
Change in Φ:
= +1 if R.size == 0 or x <= last element of R,
= +(1-p) if x > last element of R (1 for adding the new -1 to P, and p is the
previous value of Φ)
Actual costs:
= 0 if R.size == 0 or x <= last element of R,
= p-1 if x > last element of R (because it includes the -1, so p-1 instead of p)
Amortised costs:
= 1 if R.size == 0 or x <= last element of R,
= 0 if x > last element of R
The potential function is therefore bounded below by 1 (although it starts at 0
initially) and the amortised costs are all constant => m operations are O(m)
Actually I think this is pretty much the same logically as purple, maybe a different
way of looking at it though +2

My solution is quite similar as the one above in green (by Pinyao)


The first thing we need to know is that the element whose P value is -1 with the latest index is the largest
element among all in the current list.
So we can divide the situation of adding a new element x into 2 cases:
● The first i = -1, or R.get(i) ≥ x
This means there's no need to search through the predecessors to find the largest
element; we can insert x directly.
● R.get(i) < x and the first i ≠ -1
This means we need to iterate through a list of predecessors to reach the latest
(-1), the largest element so far.
Thus, the thing we need to focus on is the size of the predecessors after the latest (-1).
(So it means you exclude the -1?)
Let Φ = size(predecessors after the latest -1)
Our formula: Ci~ = Ci + (Φ(Di) - Φ(Di-1))
For case 1 (the first i = -1):
Ci = 0 (we don't need to call the P.get(i) function, so there's no cost)
Φ(Di) = k+1 (the previous size of the predecessors + 1)
Φ(Di-1) = k (the previous size of the predecessors)
From the above, we get Ci~ = 0 + (k+1 - k) = 1 for the first case

For case 2 (the first i ≠ -1):
Ci = k (we call the P.get(i) function k times to find the -1, where k is the size of the predecessors)
Φ(Di) = 1 (the new size of the predecessors). (Should this be 0 if you exclude the latest
-1? That's why you have k as the actual cost; otherwise it would be k-1.)
Φ(Di-1) = k (the previous size of the predecessors)
From the above, we get Ci~ = k + (1 - k) = 1 for the second case
Both Ci~ = 1. Thus, for m elements, we will have a total cost of O(m)
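A hedged numerical check of this style of argument (not an official answer), reusing the ADDTEMPERATURE reconstruction from part (a) and taking the potential to be the length of the predecessor chain from the last element (equivalent to the longest decreasing run reachable through P):

def add_temp(R, P, x):                # same reconstruction as in part (a)
    cost, i = 0, len(R) - 1
    while i != -1 and R[i] < x:
        i, cost = P[i], cost + 1
    R.append(x)
    P.append(i)
    return cost

def potential(R, P):                  # length of the predecessor chain from the last element
    phi, i = 0, len(R) - 1
    while i != -1:
        phi, i = phi + 1, P[i]
    return phi

R, P, phi_old = [], [], 0
for x in [1, 2, 3, 4, 3, 7]:
    cost = add_temp(R, P, x)
    phi = potential(R, P)
    print(f"x={x} actual={cost} potential={phi} amortised={cost + phi - phi_old}")
    phi_old = phi

On this sequence every amortised value comes out as 1, which is the constant bound the arguments above are after.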
Q4

a)
Python version (recursive):
# c is the global list of candidate costs; x and y are the remaining budgets of the two centers.
def run(i, x, y):
    if i == len(c) or (x < c[i] and y < c[i]):
        return 0
    if x >= c[i] and y >= c[i]:
        return 1 + max(run(i + 1, x - c[i], y), run(i + 1, x, y - c[i]))
    elif x >= c[i]:
        return 1 + run(i + 1, x - c[i], y)
    elif y >= c[i]:
        return 1 + run(i + 1, x, y - c[i])

DP recurrence cases
If c[i] exceeds both x and y <- base case
m(i, x, y) = 0
If both x and y can pay <- general case
m(i, x, y) = 1 + max(m(i+1, x-c[i], y), m(i+1, x, y-c[i])). This missed a case where we
choose not to hire at all. If we can hire this candidate, we need to hire this candidate;
we aren't allowed to skip over anyone. I think this solution is fine. It should be c[i+1],
because m(0, X, Y) = max(m(1, X-c[0], ...)) is incorrect (c[0] does not exist); it should be
X-c[1]. Or m(n-1, X, Y) = max(m(n, X-c[n-1], ...)); this does not include X-c[n] in the next m(n, X, Y).
If only x can pay
m(i, x, y) = 1 + m(i+1, x-c[i], y)
If only y can pay
m(i, x, y) = 1 + m(i+1, x, y-c[i])

a)
For M(i, x, y), we can divide it into 4 cases:
● Base case 0: there are no candidates left in the list (it is better to say i = n)
● Base case 1: ci > x && ci > y
This means we cannot employ any more candidates; both centers have run out of
their budgets.
● Case 1: ci ≤ x && ci ≤ y
This means both centers have enough money to hire pi. We call m(i+1, x-ci, y) to find
the result of center X employing pi, and m(i+1, x, y-ci) to see the result of center Y
employing pi, and choose the one with the maximum value (which is the number of
people employed).
● Case 2: ci ≤ x only (ci > y)
This means only center X has enough money to hire pi, so we call m(i+1, x-ci, y) only.
● Case 3: ci ≤ y only (ci > x)
This means only center Y has enough money to hire pi, so we call m(i+1, x, y-ci) only.
Below is the pseudo code: (By Pinyao, not an official answer)

M(i, x, y)
1   if i == len(c) // Base case 0, no candidates left in the list
2       return 0
3   if c[i] > x AND c[i] > y // Base case 1, no center can afford pi
4       return 0
5   if c[i] ≤ x AND c[i] ≤ y // Case 1, both centers can afford pi
6       return max( m(i + 1, x - c[i], y), // i+1 since we move on to the next candidate; x/y - c[i] to pay for pi (c[i+1], not c[i])
7                   m(i + 1, x, y - c[i]) ) + 1 // plus 1 since pi is employed
8   if c[i] ≤ x // Case 2, only center X is able to hire pi
9       return m(i + 1, x - c[i], y) + 1
10  if c[i] ≤ y // Case 3, only center Y is able to hire pi
11      return m(i + 1, x, y - c[i]) + 1
b)

+2
We have to do M[i+1, x - C[i + 1], y] because at i+1 we are assuming that the
(i+1)-th job application has been paid for. (+1, although in real life this just means you need
to put an empty value at the start of your C array)

We can simplify the code a bit by handling several cases at the same time.
Below, we have a base case and the recursive case is defined with if
statements.

def solve(n, X, Y, c):
    assert n == len(c)
    # M is an n * (X+1) * (Y+1) matrix of zeros.
    # (Built with comprehensions: repeating a list with * would alias the same rows.)
    M = [[[0] * (Y + 1) for _ in range(X + 1)] for _ in range(n)]

    # base case where i = n-1: we can hire one person if either X or Y
    # has enough money, or else zero.
    for x in range(X + 1):
        for y in range(Y + 1):
            M[n-1][x][y] = 1 if x >= c[n-1] or y >= c[n-1] else 0

    # fill in the rest of the matrix by decreasing i.
    # the order of x and y does not matter.
    for i in range(n - 1)[::-1]:
        for x in range(X + 1):
            for y in range(Y + 1):
                m = 0
                if x >= c[i]:
                    m = max(m, 1 + M[i+1][x - c[i]][y])
                if y >= c[i]:
                    m = max(m, 1 + M[i+1][x][y - c[i]])
                M[i][x][y] = m

    return M[0][X][Y]

if __name__ == "__main__":
    print(solve(10, 5, 5, [1] * 10))

b)
A little hint for everyone: if there are k variables used in the recursive method, there will
be k for loops used in the dynamic-programming method.
For example, in this question we used 3 variables i, x, y in the recursive method M(i, x, y).
Thus, there will be 3 for loops iterating through i, x and y.
Below is the pseudo code: (by Pinyao, not an official answer)
M_dynamic(c, X, Y) // c is the cost array, X is the budget of center X, Y is the budget of center Y
1   if c.size == 0 return 0
    // M[i][x][y] is a 3-D array storing the total number of people hired at each i, x, y
2   init(M) // initialize each cell to 0
3   for (i = c.size - 1; i ≥ 1; i--)
4       for (x = 0; x ≤ X; x++)
5           for (y = 0; y ≤ Y; y++)
6               if c[i] ≤ x AND c[i] ≤ y // refer to case 1, both centers can afford pi
7                   M[i, x, y] = 1 + max( M[i + 1, x - c[i], y], M[i + 1, x, y - c[i]] )
8               else if c[i] ≤ x // refer to case 2, only center X can afford pi
9                   M[i, x, y] = 1 + M[i + 1, x - c[i], y]
10              else if c[i] ≤ y // refer to case 3, only center Y can afford pi
11                  M[i, x, y] = 1 + M[i + 1, x, y - c[i]]
12  return M[1, X, Y]

Cannot be -c[i+1] since the index will be out of bound, just change the first for loop to ≥ 1
Lines 8 and 10 need to be else if statements. Also - c[i] needs to be - c[i+1] as the c arr is
indexed by 1 not 0.
Q5

a)
The clique decision problem is in NP if a polynomial-time algorithm to verify a given
certificate exists. Such an algorithm could be implemented as follows. (Isn't this the same
as the clique decision problem? One is simpler I think, hmm, fair.)
As long as the following checks run in polynomial time, this shows the problem is in the
complexity class NP.
This
Verify_clique(G, C, k)
1. Check C is a clique of G: O(V+E) using DFS/BFS adjacency lookups (steps 2 and 3 below spell this out)
2. Check the vertices of C are from G.V: O(V)
    for v in C
        if v is not in G.V
            return false
3. Check each pair of vertices in C is joined by an edge of G: O(|C|^2) pairs, at most O(E) lookups
    for each pair (u, v) of distinct vertices in C
        if (u, v) is not in G.E
            return false
4. Check the number of vertices in the set C is >= k: O(1)
    if C.size() < k return false (assume we store C as a set)
return true
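A hedged runnable sketch of this verifier (not an official answer); it assumes the graph is given as a vertex list and a set of undirected edges, and verify_clique is just an illustrative name.

def verify_clique(vertices, edges, C, k):
    if len(C) < k:                        # check 4: need at least k vertices
        return False
    if not set(C) <= set(vertices):       # check 2: every certificate vertex is in G.V
        return False
    nodes = list(set(C))
    for i in range(len(nodes)):           # check 3: every pair in C must be adjacent
        for j in range(i + 1, len(nodes)):
            if frozenset((nodes[i], nodes[j])) not in edges:
                return False
    return True

V = [1, 2, 3, 4]
E = {frozenset(p) for p in [(1, 2), (1, 3), (2, 3), (3, 4)]}
print(verify_clique(V, E, [1, 2, 3], 3))  # True: {1, 2, 3} is a clique of size 3
print(verify_clique(V, E, [1, 2, 4], 3))  # False: 2 and 4 are not adjacent

All checks are polynomial in the size of the input, which is what membership in NP needs.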

b)
It is a Google problem, so …? I don't have Google, use Bing +1
