
Advanced Algorithms, Fall 2012

Prof. Bernard Moret

Homework Assignment #1
due Sunday night, Sept. 30

Group members
Afroze Baqapuri, Nicolas Stucki, Thaddée Tyl

Question 1. Implement a queue using two ordinary stacks and analyze the amortized cost of each Insert-Queue and Delete-Queue operation (assume the cost of each Pop or Push operation is 1). Now implement a double-ended queue with two ordinary stacks and give a counterexample to show that the amortized cost of its four operations (Insert-Back, Insert-Front, Delete-Back, and Delete-Front) is no longer O(1). A counterexample is a sequence of operations that takes time growing faster than the number of operations in the sequence. Indicate which single operation(s) are such that their removal makes the amortized cost of the remaining three operations O(1).

Solution 1. Simple queue

To implement the queue with two stacks, we use the first stack for enqueueing and the second for dequeueing. Every element is enqueued by pushing it onto the enqueueing stack and dequeued by popping it from the dequeueing stack. If the dequeueing stack is empty when we want to delete the next element, we first perform an additional step: we pop each element of the enqueueing stack and push it onto the dequeueing stack.

The Insert-Queue operation is clearly O(1), because it consists of a single Push. The Delete-Queue operation is O(1) if the dequeueing stack is not empty, but O(n) if it is necessary to transfer all n elements from the enqueueing stack to the dequeueing stack before deleting the element. It is therefore trivial to see that Insert-Queue is O(1) in an amortized analysis, because it is O(1) in every case. To prove that Delete-Queue is also amortized O(1), we will show that n calls to that operation produce a cost of only O(n), instead of the O(n^2) that the non-amortized analysis would suggest. To do so, we count the operations from the perspective of each element during its life in the queue.
Assume there are n calls to Delete-Queue. Every element in the queue was inserted by an Insert-Queue operation, so every element is or has been in the enqueueing stack. We also know that every element must be dequeued from the dequeueing stack, so each deleted element has to be transferred from the enqueueing stack to the dequeueing stack, and this happens exactly once. Therefore each of the n elements produces at most two Push and two Pop operations, that is, at most 4n operations for the n elements. Hence Delete-Queue has an O(1) amortized cost.
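As a concrete illustration, the scheme can be written in Python (the class and method names below are ours, not part of the assignment):

```python
class TwoStackQueue:
    """FIFO queue built from two stacks; amortized O(1) per operation."""

    def __init__(self):
        self.enq = []  # enqueueing stack: receives every inserted element
        self.deq = []  # dequeueing stack: every deleted element is popped here

    def insert_queue(self, x):
        # A single Push: O(1) in the worst case.
        self.enq.append(x)

    def delete_queue(self):
        # If the dequeueing stack is empty, transfer all elements first;
        # each element is transferred at most once during its lifetime.
        if not self.deq:
            while self.enq:
                self.deq.append(self.enq.pop())
        return self.deq.pop()  # raises IndexError if the queue is empty
```

Each element is pushed at most twice and popped at most twice, which is exactly the 4n bound of the element-centric accounting.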

Solution 1. Double-ended queue

The implementation of the double-ended queue is similar to that of the simple queue. We implement the same functions as for the simple queue, renaming the stacks front (instead of enqueueing) and back (instead of dequeueing), and we implement a second version of these operations with the roles of the two stacks inverted. The costs of these operations are the same as those of the equivalent operations on the single queue: Insert-Back and Insert-Front are O(1), while Delete-Back and Delete-Front are O(n) (or O(1) if the stack at the deleted end is not empty).

To show that these operations cannot be amortized to O(1), we show that n calls need not cost O(n); we exhibit one case that "amortizes" to O(n^2) by calling Delete-Front and Delete-Back operations. Assume that all elements of the queue are in only one of the stacks. The problem arises if we call Delete-Front and then Delete-Back repeatedly (in a cyclic manner) until the queue is empty. Each time a delete operation is called, all the elements are on the other stack; that is, the stack at the deleted end is completely empty. This creates a situation where every call to a delete operation has to transfer all the elements from one stack to the other. Consequently, every delete call is O(n), and the total time for the n calls is O(n^2). With this we have shown a sequence of operations whose cost cannot be amortized to O(1) per operation.

To make the operations O(1) in an amortized analysis, we have to remove one of the two delete operations. This leaves us with the same case that we had with the simple queue, plus one operation that pushes directly onto the dequeueing stack. This new operation only simplifies things: an inserted element now causes at most 4 stack operations (the ones from the simple-queue analysis) or at most 2 (a Push onto and a Pop out of the stack at the remaining delete end). Therefore the analysis of the simple queue still holds, and all three operations of the semi-double-ended queue are amortized to O(1).

Question 2. Suppose that, instead of using sums of powers of two, we represent integers as sums of Fibonacci numbers; that is, instead of an array of 0/1 bits, we keep an array of 0/1 "fits" F_k ... F_3 F_2 F_1 F_0, where F_i ∈ {0, 1} and the ith least significant fit indicates whether the sum includes the ith Fibonacci number. For example, the fitstring 101110_F represents the number F_6 + F_4 + F_3 + F_2 = 8 + 3 + 2 + 1 = 14. Verify that a number need not have a unique fitstring representation (in contrast to bitstrings) and describe an algorithm to implement increase and decrease operations on a single fitstring in constant amortized time.

Solution 2. An example of a number that has more than one fitstring representation is the number 2, which can be written either as 11_F or as 100_F.

We first consider the increase operation. If either F_0 or F_1 is 0, we flip it to a one and the increase operation is done. Otherwise, if both F_0 and F_1 are set to 1, we find the first position (starting from the right) where F_k = 0, with F_{k-(2l+1)} = 1 for l from 0 to the maximum integer we can find, call it n, and with F_{k-2(n+1)} = 1. We then flip these fits: F_k = 1, F_{k-(2l+1)} = 0 for every such l, and F_{k-2(n+1)} = 0. Using the definition of the Fibonacci numbers (F_k = F_{k-1} + F_{k-3} + ...), we notice that this does not change the number. Then we start the procedure over (tail recursion). This eventually ends when F_0 or F_1 becomes 0, and flipping that fit to 1 completes the increase. For instance, 01011111_F gets an n of 1, and applying the transformation yields 01101001_F.
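The increase operation can be sketched in Python. This is an illustrative implementation of ours, not part of the assignment: we store fitstrings least significant fit first, take F_0 = F_1 = 1 (under which the example fitstring 101110_F evaluates to 14), and use the value-preserving identity F_k = F_{k-1} + F_{k-3} + ... described above.

```python
_FIB = [1, 1]  # F_0 = F_1 = 1 in this convention


def _fib(i):
    while len(_FIB) <= i:
        _FIB.append(_FIB[-1] + _FIB[-2])
    return _FIB[i]


def value(fits):
    """Number represented by a fitstring, least significant fit first."""
    return sum(_fib(i) for i, f in enumerate(fits) if f)


def increase(fits):
    """Return a fitstring representing value(fits) + 1."""
    fits = list(fits)
    if len(fits) < 2:
        fits += [0] * (2 - len(fits))  # pad so F_0 and F_1 exist
    while fits[0] == 1 and fits[1] == 1:
        # Both F_0 and F_1 are set: find the first zero fit F_k, k >= 2
        # (every fit below position k is then a one).
        try:
            k = fits.index(0, 2)
        except ValueError:
            k = len(fits)
            fits.append(0)
        # Value-preserving rewrite of F_k = F_{k-1} + F_{k-3} + ...:
        # set F_k, clear every other fit below it plus the terminal fit.
        fits[k] = 1
        for i in range(k - 1, 1, -2):
            fits[i] = 0
        fits[1] = 0
        if k % 2 == 0:
            fits[0] = 0
    # Now F_0 or F_1 is zero: flip it to complete the increase.
    if fits[0] == 0:
        fits[0] = 1
    else:
        fits[1] = 1
    return fits
```

The decrease operation is symmetric (it spreads a single F_k into F_{k-1} + F_{k-3} + ... so that F_0 or F_1 can be cleared).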

To prove the amortization of the increase, let the potential Φ(i) be the number of ones in the fitstring representation of i. Notice that the real cost of an increase is k/2 + 3, where k is the number of consecutive ones on the right: at worst, half of the fits get flipped, as when the transformation goes from, say, 0111111_F to 1010101_F. The change in potential is ΔΦ = −(k/2 − 1), so the amortized running time is k/2 + 3 − (k/2 − 1) = 4. It is important to note that after every iteration the number of consecutive ones on the right decreases. Therefore the amortized cost of an increase is O(1) over n increases.

The algorithm to decrease by one is very similar to the increase operation. If either F_0 or F_1 is one, we flip it to a zero and the decrease operation is done. Otherwise, if both F_0 and F_1 are set to 0, we find the first position (from the left) where F_k = 1, with F_{k-(2l+1)} = 0 for l from 0 to the maximum integer we can find, call it n, and with F_{k-2(n+1)} = 0. We then flip these fits: F_k = 0, F_{k-(2l+1)} = 1 for every such l, and F_{k-2(n+1)} = 1. Using the definition of the Fibonacci numbers, we notice that this does not change the number, and we start the procedure over (tail recursion yet again). This eventually ends when F_0 or F_1 becomes one.

To prove the amortization of the decrease, let the potential Φ(i) be the number of zeros in the fitstring of i. The number of operations taken to decrease by one is k/2 + 3, where k is now the number of consecutive zeros on the right. The change in potential is again ΔΦ = −(k/2 − 1), so the amortized running time is k/2 + 3 − (k/2 − 1) = 4. After every iteration the number of consecutive zeros on the right decreases. Therefore the amortized cost of a decrease is O(1) over n decreases.

Question 3. Using arrays, as in a simple queue or a binary heap, creates a problem: when the needs grow beyond the allocated array size, there is no support for simply increasing the array. Assume that returning an array to free storage takes constant time, but that creating a new array takes time proportional to its size. Therefore, consider the following solution, in the context of a data structure that grows or shrinks one element at a time. Let n be the number of elements in the array and m be its capacity (or size); obviously, m ≥ n.
• When the next insertion encounters a full array, the array is copied into one twice its size (and the old array returned to free storage) and the insertion then proceeds.
• When the next deletion reduces the number of elements below one quarter of the allocated size, the array is copied into one half its size (and the old array returned to free storage) and the deletion then proceeds.
• Prove that the amortized cost per operation of array doubling and array halving is constant.
• Why not halve the array as soon as the number of elements falls below one half of the allocated size?

Solution 3. Let us use the cost-accounting analysis. Inserting an element costs a constant time k1, except when m = n, in which case the cost is actually 2·k2·n + k1, since we combine the allocation of an array of double the capacity with the insertion itself. We claim that all insertions take a constant amortized time of 2·k, with k := k1 + 2·k2.
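The doubling/halving policy itself can be sketched with an explicit cost counter (a sketch under the problem's cost model; the class name, counter, and delete-from-the-end choice are ours):

```python
class DynArray:
    """Array that doubles when full and halves when below one-quarter full.

    `cost` charges one unit per element written and `new_cap` units for
    allocating an array of capacity `new_cap`, following the model in which
    creating an array takes time proportional to its size and freeing is free.
    """

    def __init__(self):
        self.cap, self.n, self.cost = 1, 0, 0
        self.buf = [None]

    def _realloc(self, new_cap):
        self.cost += new_cap              # allocation proportional to size
        new_buf = [None] * new_cap
        new_buf[: self.n] = self.buf[: self.n]
        self.buf, self.cap = new_buf, new_cap

    def insert(self, x):
        if self.n == self.cap:            # full: copy into one twice the size
            self._realloc(2 * self.cap)
        self.buf[self.n] = x
        self.n += 1
        self.cost += 1

    def delete(self):
        x = self.buf[self.n - 1]          # delete from the end
        self.n -= 1
        self.cost += 1
        if self.cap > 1 and self.n < self.cap // 4:   # below one quarter
            self._realloc(self.cap // 2)
        return x
```

Running long insert/delete sequences against this model shows a total cost linear in the number of operations, which is what the accounting argument below establishes.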

Indeed, in order to insert the element that requires a reallocation when inserting into a full array of size n, we had to insert at least n/2 elements beforehand (half of the previous size of the array, i.e., the point where it previously had to change the size of the array); it is important to note that after each growth or shrink of the array, the array is half-full (or half-empty). Inserting all those elements, in itself, causes a real cost of at most k1·n, so we observe a real total cost of at most k1·n + 2·k2·n + k1. The amortized total cost charged to these at least n/2 + 1 insertions is at least 2·k·(n/2) + 2·k = k1·n + 2·k2·n + 2·k, which is clearly bigger. Since the amortized total cost is trivially higher than the real total cost, the amortized cost per insertion is indeed 2·k, regardless of the value of n, which is Θ(1).

Similarly, the amortized cost per deletion is 2·k again, this time with k := k1 + k2. At worst, we just inserted the one element that made the array double in size up to 4·n, and we are now removing n + 1 elements from the array. The actual cost of each removal is k1 (constant time), except for the last one: upon reaching a quarter of the size of the array (n elements), the array is halved to 2·n, and the halving costs k2·2·n. The real total cost is therefore (k1 + 2·k2)·n + k1, while the claimed amortized total cost is 2·k·n + 2·k = 2·(k1 + k2)·n + 2·(k1 + k2), which is again clearly bigger. Hence each deletion also has amortized cost Θ(1). The half-full invariant after each reallocation assures that combinations of the two operations work as well, because the structure is always a linear number of cheap operations away from the next growth or shrink, regardless of the value of n.

Finally, the reduction of size is not made as soon as the number of elements falls below one half of the allocated size because that solution does not amortize to constant time. The problem arises when we have just reduced the array: if we then add two elements, the insertion takes O(n) because the array must grow again, and removing two elements immediately forces it to shrink again. If we add those two elements and then remove them in a repeated manner, we perform 2n operations that take O(n^2) time.
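The failure mode of halving at one half can be seen concretely with a toy cost model (our own construction, tracking only sizes and costs; allocation of capacity c costs c, each write costs 1):

```python
class HalveAtHalf:
    """Bad policy: halve the array as soon as n falls below cap / 2."""

    def __init__(self, cap, n):
        self.cap, self.n, self.cost = cap, n, 0

    def insert(self):
        if self.n == self.cap:
            self.cost += 2 * self.cap   # grow: allocate an array of size 2*cap
            self.cap *= 2
        self.n += 1
        self.cost += 1

    def delete(self):
        self.n -= 1
        self.cost += 1
        if self.n < self.cap // 2:      # shrink immediately below one half
            self.cap //= 2
            self.cost += self.cap       # allocate an array of size cap/2
```

Starting just after a shrink (capacity 1024, 1023 elements), every cycle of two insertions followed by two deletions triggers one doubling and one halving, so 4 operations cost about 3·1024: the cost per operation is Θ(n), not O(1).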