
CS325: Analysis of Algorithms, Fall 2017

Midterm

• I don’t know policy: you may write “I don’t know” and nothing else to answer a question and receive 25 percent
of the total points for that problem whereas a completely wrong answer will receive zero.
• There are 6 problems in this exam.
• These formulas may be useful:

1 + c + c² + ⋯ + cⁿ = ∑_{i=0}^{n} cⁱ = Θ(1) if c < 1; Θ(n) if c = 1; Θ(cⁿ) if c > 1

1 + 2 + 3 + ⋯ + n = n(n + 1)/2 = Θ(n²)

Problem 1. [6 pts] Decide whether each of the following statements is true or false.
(a) If f (n) = 123n − 21 then f (n) = Ω(n). True
(b) If f (n) = 34n¹⁰ + n + 1 then f (n) = O(n). False
(c) If f (n) = 2f (n − 1) and f (1) = 1, then f (n) = O(n log n). False
(d) If f (n) = 24n log n + 12n then f (n) = Θ(n log n). True
(e) If f (n) = 2²⁺ⁿ then f (n) = O(2ⁿ). True

(f) If f (n) = n then f (n) = O(n). True

Problem 2. [5 pts] Bob thinks it is possible to obtain a faster sorting algorithm by breaking the
array into three pieces (instead of two). Of course, this idea came to his mind after he came up
with an algorithm to merge three sorted arrays in linear time. He implemented his algorithm and
called the procedure Merge(A[1 . . . n], k, ℓ). Assuming A[1 . . . k], A[k + 1, . . . , ℓ], and A[ℓ + 1, . . . , n]
are sorted, Merge merges them into one sorted array in O(n) time. Based on this procedure, Bob
has designed the recursive algorithm that follows.

1: procedure Sort(A[1 ⋅ ⋅ n])
2:     if n > 1 then
3:         k ← ⌊n/3⌋
4:         ℓ ← ⌊2n/3⌋
5:         Sort(A[1, . . . , k])
6:         Sort(A[k + 1, . . . , ℓ])
7:         Sort(A[ℓ + 1, . . . , n])
8:         Merge(A[1 . . . n], k, ℓ)

What is the running time of this sorting algorithm? Write the recursion for the running time,
and use the recursion tree method to solve it.

Solution. Bob’s algorithm runs in O(n log n) time, similar to regular merge sort. The algorithm
has three recursive calls to subproblems of size n/3, and spends O(n) for merging. Hence, if T (n)
shows the running time we have:
T (n) = 3T (n/3) + O(n).
We use the recursion tree method to solve this recursion, as follows.
The root of the recursion tree is a subproblem of size n with non-recursive (merge) work n; its
three children are subproblems of size n/3, contributing 3 · (n/3) = n total work at level 2; in
general, level i contributes 3^(i−1) · (n/3^(i−1)) = n. So the total non-recursive work at each
level is O(n), and there are O(log n) levels. Therefore, we have T (n) = O(n log n).
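For concreteness, Bob's algorithm can be sketched in Python. Here `heapq.merge` plays the role of his linear-time Merge procedure, and the function name `sort3` is ours:

```python
from heapq import merge  # merges sorted iterables in linear total time


def sort3(a):
    """Three-way merge sort: split into thirds, sort recursively, merge."""
    n = len(a)
    if n <= 1:
        return a
    k, l = n // 3, 2 * n // 3
    # Recursively sort the three pieces, then merge them in O(n) time.
    return list(merge(sort3(a[:k]), sort3(a[k:l]), sort3(a[l:])))
```

Despite the three-way split, the recursion still does Θ(n) merge work per level over Θ(log n) levels, so this is no faster asymptotically than two-way merge sort.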

Problem 3. [4 pts] Here is a divide and conquer algorithm for computing the maximum number
in an array, which is not necessarily sorted.

1: procedure RecMax(A[1 ⋅ ⋅ n])
2:     if n > 1 then
3:         m ← ⌊n/2⌋
4:         ℓ1 ← RecMax(A[1 ⋅ ⋅ m])
5:         ℓ2 ← RecMax(A[m + 1 ⋅ ⋅ n])
6:         return max(ℓ1, ℓ2)
7:     else
8:         return A[1]

Do you think this maximum finder is faster than the regular iterative one? What is its running
time? Write the recursion for the running time, and use the recursion tree method to solve it.

Solution. This algorithm runs in O(n) time, so it is not faster than the regular maximum finder.
The algorithm makes two recursive calls on subproblems of size n/2, and spends O(1) computing
the max of ℓ1 and ℓ2. Hence, if T (n) denotes the running time, we have:

T (n) = 2T (n/2) + O(1).

We use the recursion tree method to solve this recursion, as follows.

The root of the recursion tree is a subproblem of size n with O(1) non-recursive work; its two
children are subproblems of size n/2, contributing 2 · O(1) work at level 2; in general, level i
contributes 2^(i−1) · O(1). Summing over the levels, the total running time is:

O(1) (1 + 2 + 4 + . . . + 2^h) = O(1) (1 + 2 + 4 + . . . + 2^⌈log n⌉) = O(n)

where h is the height of the recursion tree (so, h ≤ ⌈log n⌉). Note, 1 + 2 + 4 + . . . + 2^t = O(2^t), as given
in the cover page.
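A direct Python rendering of RecMax may make the O(n) bound concrete (the function name `rec_max` is ours; slicing stands in for passing subarray bounds):

```python
def rec_max(a):
    """Divide-and-conquer maximum: T(n) = 2T(n/2) + O(1) = O(n)."""
    n = len(a)
    if n == 1:
        return a[0]          # base case: a single element is its own max
    m = n // 2
    # Two recursive calls on halves, plus O(1) work to combine.
    return max(rec_max(a[:m]), rec_max(a[m:]))
```

Every element is visited exactly once across the leaves of the recursion tree, which is another way to see that the running time is linear.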

Problem 4. [5 pts] Alice has two sorted arrays A[1, . . . , n], B[1, . . . , n + 1]. She knows that A
is composed of distinct positive numbers, and B is derived from inserting a zero into A. She would
like to know the index of this zero. Unfortunately, she does not have enough time to search the
arrays, so she is looking for a faster algorithm. She wonders if you can design and analyze a fast
algorithm for her to find the index of the zero in B. (Also, she has told me not to give many points
for algorithms with running time much larger than O(log n)).

She has provided the following example to ensure that the problem statement is clear.

A ∶ 1, 3, 4, 6, 7, 8, 9, 20
B ∶ 1, 3, 0, 4, 6, 7, 8, 9, 20.

Your algorithm should return 3 in this case, which is the index of the zero in B.

Solution. Let t be the index of the zero in B. Here is the key observation: for any i ∈ {1, . . . , n},
if A[i] = B[i] then t > i, otherwise t ≤ i. Therefore, we can do a binary search.

1: procedure FindIndex(A[1 ⋅ ⋅ n], B[1 ⋅ ⋅ m])
2:     if n = 0 then
3:         return 1
4:     i ← ⌊n/2⌋
5:     if A[i] ≠ B[i] then
6:         return FindIndex(A[1 ⋅ ⋅ i − 1], B[1 ⋅ ⋅ i])
7:     else
8:         return FindIndex(A[i + 1 ⋅ ⋅ n], B[i + 1 ⋅ ⋅ m]) + i

Similar to binary search, there is a single recursive call on a subproblem of half the size, so for
the running time T (n), we have:

T (n) = T (n/2) + O(1) = O(log n).
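The same binary search can be written iteratively in Python; this sketch keeps the problem's 1-based indexing for the returned position (the function name `find_index` is ours):

```python
def find_index(a, b):
    """Find the 1-based index of the 0 inserted into sorted array a to get b.

    Key observation: if a[i] == b[i] (1-based), the zero lies to the
    right of position i; otherwise it lies at position i or earlier.
    """
    lo, hi = 1, len(b)            # candidate 1-based positions of the zero
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid - 1] == b[mid - 1]:
            lo = mid + 1          # arrays still agree: zero is further right
        else:
            hi = mid              # arrays disagree: zero is at mid or earlier
    return lo
```

On Alice's example, `find_index([1, 3, 4, 6, 7, 8, 9, 20], [1, 3, 0, 4, 6, 7, 8, 9, 20])` returns 3, as required.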

Problem 5. [6 pts] Let P (n) be the number of binary strings of length n that do not have any
three consecutive ones (i.e. they do not have “111” as a substring). For example:

P (1) = 2 ∶ 0, 1,
P (2) = 4 ∶ 00, 01, 10, 11,
P (3) = 7 ∶ 000, 001, 010, 011, 100, 101, 110,
P (4) = 13 ∶ 0000, 0001, 0010, 0011, 0100, 0101, 0110, 1000, 1001, 1010, 1011, 1100, 1101.

(a) Design a recursive algorithm to compute P (n). Justify the correctness of your algorithm.

(b) Turn the recursive algorithm of part (a) into a dynamic programming.

(c) What is the running time of your dynamic programming?

Solution.

(a) Let n ≥ 4. Each acceptable binary string has exactly one of the following properties: (1) its
last bit is 0, (2) its last two bits are 01, (3) its last three bits are 011. The number of strings
with property (1) is P (n − 1), the number of strings with property (2) is P (n − 2), and the
number of strings with property (3) is P (n − 3). Therefore, we have:

P (n) = P (n − 1) + P (n − 2) + P (n − 3).

(b) Here is the dynamic programming:

1: procedure Tribonacci(n)
2:     p[1] = 2, p[2] = 4, p[3] = 7
3:     for i = 4 to n do
4:         p[i] = p[i − 1] + p[i − 2] + p[i − 3]
5:     return p[n]

(c) The running time is O(n).
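The dynamic program translates directly to Python; this sketch (the function name `count_no_111` is ours) uses the base cases P(1) = 2, P(2) = 4, P(3) = 7 from the problem statement:

```python
def count_no_111(n):
    """P(n): number of length-n binary strings with no '111' substring."""
    base = {1: 2, 2: 4, 3: 7}          # base cases from the problem statement
    if n in base:
        return base[n]
    p = [0, 2, 4, 7] + [0] * (n - 3)   # p[i] will hold P(i); p[0] is unused
    for i in range(4, n + 1):
        # Each acceptable string ends in 0, 01, or 011.
        p[i] = p[i - 1] + p[i - 2] + p[i - 3]
    return p[n]
```

The loop fills the table bottom-up in O(n) iterations of constant work each, matching the stated running time.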

Problem 6. [6 pts] Recently, Vankin has learned to jump. Hence, we can think about the
following new game, called “White Squares”.
White Squares is played on an n × n chessboard, where n is an odd positive number, and the
top left corner is white. The player starts the game by placing a token on the top left square. Then
on each turn, the player moves the token either one square to the right or one square down. The
game ends when the token is on the bottom right corner. The player starts with a score of zero;
whenever the token lands on a white square, the player adds its value to his score. The object of
the game is to score as many points as possible.
For example, given the 5 × 5 chessboard below, the player can score −1 + 5 + 4 − 3 − 3 = 2 moving
down, down, right, down, right, right, down, right.

(a) Describe an efficient algorithm to compute the maximum possible score for a game of White
Squares, given the n × n array of values as input.

(b) What is the running time of your algorithm?

Solution.

(a) Let A[1, . . . , n][1, . . . , n] specify the numbers on the chessboard. First make a pass and
change all “black numbers” to zero. Then do something very similar to GA2; note the
problem is slightly different, as you always start at the top left corner and end at the bottom
right corner. Now, let S[i, j] be the max score that can be gained by going from the top left
corner to square (i, j); in the end, we return S[n, n]. We have the following recursion (similar
to GA2):
S[i, j] = max(S[i, j − 1], S[i − 1, j]) + A[i, j].
Now you can come up with the base cases and write the dynamic program.

(b) The running time will be O(n2 ); it is similar to GA2.
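The steps above can be sketched in Python. This is a minimal version assuming 0-indexed arrays and the convention that square (i, j) is white exactly when i + j is even (so the top left (0, 0) is white, as required); the function name `white_squares` is ours:

```python
def white_squares(a):
    """Max score in White Squares on an n x n board of values a."""
    n = len(a)
    NEG = float("-inf")
    # Pass 1: zero out the black squares (white iff i + j is even).
    v = [[a[i][j] if (i + j) % 2 == 0 else 0 for j in range(n)]
         for i in range(n)]
    # Pass 2: DP with S[i][j] = max(S[i][j-1], S[i-1][j]) + value(i, j).
    s = [[NEG] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == 0 and j == 0:
                best = 0                              # base case: start square
            else:
                best = max(s[i][j - 1] if j > 0 else NEG,
                           s[i - 1][j] if i > 0 else NEG)
            s[i][j] = best + v[i][j]
    return s[n - 1][n - 1]
```

Each of the n² table entries is filled with O(1) work, which is where the O(n²) bound in part (b) comes from.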
