# LOVELY PROFESSIONAL UNIVERSITY

Algorithm Design and Analysis (CSE408)

Topic: Amortized Analysis: The Aggregate Method

Submitted to: Mr. Vijay Kumar Garg

Submitted by: Navdeep Singh, RK1R08B42, 10804943, B.Tech-M.Tech (CSE)
## Acknowledgment

First of all, I am heartily thankful to my teacher, Mr. Vijay Kumar Garg, who gave me the opportunity to learn more about the topic "Amortized Analysis: The Aggregate Method." I am also thankful to my friends and my parents, who supported me in the completion of this term paper. Last but not least, I am very thankful to the University for providing an Internet facility inside the campus and excellent library facilities, which made the work easier.

## Introduction

Amortized analysis is a method of analyzing algorithms that considers the entire sequence of operations of the program. It allows a worst-case bound on the performance of an algorithm to be established irrespective of the inputs, by looking at all of the operations together. At the heart of the method is the idea that while certain operations may be extremely costly in resources, they cannot occur frequently enough to weigh down the entire program, because the number of less costly operations will far outnumber the costly ones in the long run, "paying back" the program over a number of iterations. Amortized analysis is particularly useful because it guarantees worst-case performance rather than making assumptions about the state of the program.

Amortization was initially used for very specific types of algorithms, particularly those involving binary trees and union operations; however, it is now ubiquitous and comes into play when analyzing many other algorithms as well. The technique was first formally introduced by Robert Tarjan in his paper "Amortized Computational Complexity," which addressed the need for a more useful form of analysis than the common probabilistic methods in use. Amortized analysis initially emerged from a method called aggregate analysis, which is now subsumed under amortized analysis.

In an amortized analysis, the time required to perform a sequence of data-structure operations is averaged over all the operations performed. Amortized analysis can therefore be used to show that the average cost of an operation is small, if one averages over a sequence of operations, even though a single operation might be expensive. Amortized analysis differs from average-case analysis in that probability is not involved; an amortized analysis guarantees the average performance of each operation in the worst case.

Key ideas:

● Amortized analysis gives an upper bound: it is the average performance of each operation in the worst case.

● Amortized analysis is concerned with the overall cost of a sequence of operations. It says nothing about the cost of a specific operation in that sequence. For example, if the amortized cost of insertion into a splay tree with n items is O(log n), a single operation in the sequence may still cost more than this bound; it is invalid to reason, "The amortized cost of insertion into a splay tree with n items is O(log n), so when I insert '45' into this tree, the cost will be O(log n)." In fact, inserting '45' might require O(n) operations! It is only appropriate to say, "When I insert m items into a tree, the average time for each operation will be O(log n)."

● Amortized analysis is concerned with the overall cost of arbitrary sequences: if the amortized cost of insertion is O(log n), it is so regardless of whether you insert the sequence '10', '160', '2' or the sequence '2', '10', '160', '399', etc. A single operation in a sequence may have a cost worse than the amortized bound, but the average cost over all operations in any valid sequence will always stay within the bound, and this bound will always hold.

● Amortized analysis can be understood to take advantage of the fact that some expensive operations may "pay" for future operations by somehow limiting the number or cost of expensive operations that can happen in the near future. If good amortized cost is a goal, an algorithm may even be designed to explicitly perform this "clean-up" during expensive operations.

● The two points above imply that both amortized and worst-case bounds should be understood when choosing an algorithm to use in practice. Practical systems in which it is important that all operations have low and/or comparable costs may require an algorithm with a worse amortized cost but a better worst-case bound per operation.

## Comparison to other analysis techniques

As mentioned above, amortized analysis is similar to average-case analysis in that it is concerned with the cost averaged over a sequence of operations. However, average-case analysis relies on probabilistic assumptions about the data structures and operations in order to compute an expected running time of an algorithm. Its applicability is therefore dependent on certain assumptions about the probability distributions of algorithm inputs, which means the analysis is invalid if these assumptions do not hold (or that probabilistic analysis cannot be used at all if input distributions cannot be described!) (Cormen et al. 2001, 92-3). Also, an average-case bound does not preclude the possibility that one will get "unlucky" and encounter an input that requires much more than the expected computation time, even if the assumptions on the distribution of inputs are valid.

Amortized analysis needs no such assumptions: it offers an upper bound on the worst-case running time of a sequence of operations, and this bound will always hold. Note that the bound offered by amortized analysis is, in fact, a worst-case bound on the average time per operation; it may still be possible that one operation in the sequence requires a huge cost. Moreover, amortized analysis may lead to a more realistic worst-case bound than ordinary worst-case analysis, which can give overly pessimistic bounds for sequences of operations because it ignores interactions among different operations on the same data structure (Tarjan 1985). These differences between probabilistic and amortized analysis have important consequences for the interpretation and relevance of the resulting bounds.

Amortized analysis is also closely related to competitive analysis, which involves comparing the worst-case performance of an online algorithm to the performance of an optimal off-line algorithm on the same data. Amortization is useful here because competitive analysis's performance bounds must hold regardless of the particular input, which by definition is seen by the online algorithm in sequence rather than all at once at the beginning of processing. Sleator and Tarjan (1985a) offer an example of using amortized analysis to perform competitive analysis.

## Types

Amortized analysis is of three types:

1) Aggregate analysis, in which we determine an upper bound T(n) on the total cost of a sequence of n operations. The amortized cost per operation is then T(n)/n.

2) The accounting method, in which we determine an amortized cost of each operation. This method overcharges some operations early in the sequence, storing the overcharge as "prepaid credit" on specific objects in the data structure. The credit is used later in the sequence to pay for operations that are charged less than they actually cost.

3) The potential method, which is like the accounting method in that we determine the amortized cost of each operation and may overcharge operations early on to compensate for undercharges later. The potential method maintains the credit as the "potential energy" of the data structure as a whole, instead of associating the credit with individual objects within the data structure.

## The aggregate analysis method

In the aggregate method of amortized analysis, we show that for all n, a sequence of n operations takes worst-case time T(n) in total. In the worst case, the average cost, or amortized cost, per operation is therefore T(n)/n. Note that this amortized cost applies to each operation, even when there are several types of operations in the sequence; the other two methods, the accounting method and the potential method, may assign different amortized costs to different types of operations.

We shall use two examples to examine the aggregate method. One is a stack with the additional operation MULTIPOP, which pops several objects at once. The other is a binary counter that counts up from 0 by means of the single operation INCREMENT.
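The arithmetic of the aggregate method is nothing more than a division. The following is a small illustrative sketch (the function name `amortized_cost` is my own, not a standard routine): it sums the actual costs of an entire sequence and charges every operation the average.

```python
def amortized_cost(actual_costs):
    """Aggregate method: amortized cost = T(n) / n, where T(n) is the
    total cost of the entire sequence of n operations."""
    total = sum(actual_costs)          # T(n): total cost of the sequence
    return total / len(actual_costs)   # T(n)/n: charged to every operation

# Eight operations: seven cheap ones and one expensive one.
costs = [1, 1, 1, 1, 1, 1, 1, 9]
print(amortized_cost(costs))  # 2.0
```

Even though the single most expensive operation cost 9, the aggregate view charges each of the eight operations only 2.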

## Stack operations

In our first example of the aggregate method, we analyze stacks that have been augmented with a new operation. PUSH(S, x) pushes object x onto stack S. POP(S) pops the top of stack S and returns the popped object. Since each of these operations runs in O(1) time, let us consider the cost of each to be 1. The total cost of a sequence of n PUSH and POP operations is therefore n, and the actual running time for n operations is therefore Θ(n).

The situation becomes more interesting if we add the stack operation MULTIPOP(S, k), which removes the k top objects of stack S, or pops the entire stack if it contains fewer than k objects. In the following pseudocode, the operation STACK-EMPTY returns TRUE if there are no objects currently on the stack, and FALSE otherwise.

MULTIPOP(S, k)
1  while not STACK-EMPTY(S) and k ≠ 0
2      do POP(S)
3         k ← k - 1

Figure 1 shows an example of MULTIPOP.

Figure 1: The action of MULTIPOP on a stack S, shown initially in (a). The top 4 objects are popped by MULTIPOP(S, 4), whose result is shown in (b). The next operation is MULTIPOP(S, 7), which empties the stack, shown in (c), since there were fewer than 7 objects remaining.

What is the running time of MULTIPOP(S, k) on a stack of s objects? The actual running time is linear in the number of POP operations actually executed, and thus it suffices to analyze MULTIPOP in terms of the abstract costs of 1 each for PUSH and POP. The number of iterations of the while loop is the number min(s, k) of objects popped off the stack. For each iteration of the loop, one call is made to POP in line 2. Thus, the total cost of MULTIPOP is min(s, k), and the actual running time is a linear function of this cost.
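The stack and MULTIPOP are easy to mirror in Python. This is an illustrative sketch (the class and the instrumentation counters are my own additions); it tallies the abstract cost of 1 per PUSH and per POP so that the aggregate argument below can be checked on a sample sequence.

```python
class Stack:
    def __init__(self):
        self.items = []
        self.pushes = 0   # number of PUSH operations performed
        self.pops = 0     # number of POPs, including those inside MULTIPOP

    def push(self, x):            # PUSH(S, x): cost 1
        self.items.append(x)
        self.pushes += 1

    def pop(self):                # POP(S): cost 1
        self.pops += 1
        return self.items.pop()

    def multipop(self, k):        # MULTIPOP(S, k): cost min(s, k)
        while self.items and k != 0:
            self.pop()
            k -= 1

s = Stack()
for i in range(6):
    s.push(i)          # stack now holds 0 1 2 3 4 5
s.multipop(4)          # pops the top 4 objects
s.multipop(7)          # only 2 objects remain, so this pops just 2

# Each object is popped at most once per push, so pops <= pushes <= n.
assert s.pops <= s.pushes
print(s.pushes, s.pops)   # 6 6
```

The assertion is the heart of the aggregate bound: no matter how the sequence interleaves PUSH, POP, and MULTIPOP, the pops can never outnumber the pushes.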

Let us analyze a sequence of n PUSH, POP, and MULTIPOP operations on an initially empty stack. The worst-case cost of a MULTIPOP operation in the sequence is O(n), since the stack size is at most n. The worst-case time of any stack operation is therefore O(n), and hence a sequence of n operations costs O(n^2), since we may have O(n) MULTIPOP operations costing O(n) each. Although this analysis is correct, the O(n^2) result, obtained by considering the worst-case cost of each operation individually, is not tight.

Using the aggregate method of amortized analysis, we can obtain a better upper bound that considers the entire sequence of n operations. In fact, although a single MULTIPOP operation can be expensive, any sequence of n PUSH, POP, and MULTIPOP operations on an initially empty stack can cost at most O(n). Why? Each object can be popped at most once for each time it is pushed. Therefore, the number of times that POP can be called on a nonempty stack, including calls within MULTIPOP, is at most the number of PUSH operations, which is at most n. For any value of n, any sequence of n PUSH, POP, and MULTIPOP operations thus takes a total of O(n) time, and the amortized cost of an operation is the average: O(n)/n = O(1). We emphasize that although we have just shown that the average cost, and hence the running time, of a stack operation is O(1), no probabilistic reasoning was involved. We actually showed a worst-case bound of O(n) on a sequence of n operations; dividing this total cost by n yielded the average cost per operation, or the amortized cost.

## Incrementing a binary counter

As another example of the aggregate method, consider the problem of implementing a k-bit binary counter that counts upward from 0. We use an array A[0 .. k-1] of bits, where length[A] = k, as the counter. A binary number x that is stored in the counter has its lowest-order bit in A[0] and its highest-order bit in A[k-1], so that x = Σ_{i=0}^{k-1} A[i] · 2^i. Initially x = 0, and thus A[i] = 0 for i = 0, 1, ..., k-1. To add 1 (modulo 2^k) to the value in the counter, we use the following procedure.

INCREMENT(A)
1  i ← 0
2  while i < length[A] and A[i] = 1
3      do A[i] ← 0
4         i ← i + 1
5  if i < length[A]
6      then A[i] ← 1

This algorithm is essentially the same one implemented in hardware by a ripple-carry counter. At the start of each iteration of the while loop in lines 2-4, we wish to add a 1 into position i. If A[i] = 1, then adding 1 flips the bit to 0 in position i and yields a carry of 1, to be added into position i + 1 on the next iteration of the loop. Otherwise, the loop ends, and then, if i < k, we know that A[i] = 0, so that adding a 1 into position i, flipping the 0 to a 1, is taken care of in line 6. The cost of each INCREMENT operation is linear in the number of bits flipped. Figure 2 shows what happens to the counter as it is incremented repeatedly, starting with the initial value 0.

| x | A[4] | A[3] | A[2] | A[1] | A[0] | Total cost |
|---|------|------|------|------|------|------------|
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 0 | 0 | 0 | 1 | 1 |
| 2 | 0 | 0 | 0 | 1 | 0 | 3 |
| 3 | 0 | 0 | 0 | 1 | 1 | 4 |
| 4 | 0 | 0 | 1 | 0 | 0 | 7 |
| 5 | 0 | 0 | 1 | 0 | 1 | 8 |
| 6 | 0 | 0 | 1 | 1 | 0 | 10 |
| 7 | 0 | 0 | 1 | 1 | 1 | 11 |
| 8 | 0 | 1 | 0 | 0 | 0 | 15 |
| 9 | 0 | 1 | 0 | 0 | 1 | 16 |

Figure 2: An 8-bit binary counter as its value x goes up from 0 under a sequence of INCREMENT operations (only values 0-9 are shown; the higher-order bits, which remain 0, are omitted). In the original figure, the bits that flip to achieve each next value are shaded; the running cost of flipping bits is shown at the right. Notice that the total cost is never more than twice the total number of INCREMENT operations.

As with the stack example, a cursory analysis yields a bound that is correct but not tight. A single execution of INCREMENT takes time Θ(k) in the worst case, in which array A contains all 1's. Thus, a sequence of n INCREMENT operations on an initially zero counter takes time O(nk) in the worst case.

We can tighten our analysis to yield a worst-case cost of O(n) for a sequence of n INCREMENTs by observing that not all bits flip each time INCREMENT is called. As Figure 2 shows, A[0] does flip each time INCREMENT is called, but the next-highest-order bit, A[1], flips only every other time.
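This flip-counting pattern can be checked empirically. The sketch below (the function and the flip counter are my own instrumentation, mirroring the INCREMENT pseudocode above) counts every bit flip over n operations and confirms that the total stays below 2n.

```python
def increment(A):
    """INCREMENT from the text; returns the number of bits flipped."""
    flips = 0
    i = 0
    while i < len(A) and A[i] == 1:
        A[i] = 0              # flip a 1 to 0; the carry propagates
        flips += 1
        i += 1
    if i < len(A):
        A[i] = 1              # flip the final 0 to 1
        flips += 1
    return flips

k, n = 16, 1000               # a 16-bit counter, n INCREMENT operations
A = [0] * k                   # A[0] is the lowest-order bit
total_flips = sum(increment(A) for _ in range(n))

assert total_flips < 2 * n    # aggregate bound: total cost is O(n), not O(nk)
print(total_flips)            # 1994
```

Here 1994 < 2000 = 2n, matching the geometric-series bound derived next, even though a single increment can flip up to k = 16 bits.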

A sequence of n INCREMENT operations on an initially zero counter causes A[1] to flip ⌊n/2⌋ times. Similarly, bit A[2] flips only every fourth time, or ⌊n/4⌋ times in a sequence of n INCREMENTs. In general, for i = 0, 1, ..., ⌊lg n⌋, bit A[i] flips ⌊n/2^i⌋ times in a sequence of n INCREMENT operations on an initially zero counter, and for i > ⌊lg n⌋, bit A[i] never flips at all. The total number of flips in the sequence is thus

    Σ_{i=0}^{⌊lg n⌋} ⌊n/2^i⌋ < n · Σ_{i=0}^{∞} 1/2^i = 2n,

by equation (3.4). The worst-case time for a sequence of n INCREMENT operations on an initially zero counter is therefore O(n), so the amortized cost of each operation is O(n)/n = O(1).

## Conclusion

Amortized analysis is a useful tool that complements other techniques such as worst-case and average-case analysis. It has been applied to a variety of problems, and it is essential to understanding the implications of theoretical bounds on real-world performance and to thoroughly appreciating the design and purpose of certain data structures. An understanding of amortized analysis is essential to success in an algorithms course, and it is also crucial to appreciating structures such as splay trees that have been designed to have good amortized bounds.

To perform an amortized analysis, one should choose either the accounting method or the potential method. The approaches yield equivalent results, but one might be more intuitively appropriate to the problem under consideration. There is no magic formula for arriving at a potential function or accounting credit scheme that will always work. Some strategies that sometimes work, however, include: enumerating the ways in which an algorithm might operate on a data structure and then performing an analysis for each case; designing the potential function around the desired form of the result (e.g., relating the potential to the log of the subtree size for splay trees, as the desired outcome is a logarithmic bound); computing the potential of a data structure as a sum of "local" potentials, so that one can reason about the effects of local changes while ignoring irrelevant and unchanging components of the structure; and reasoning about each type of operation in a sequence individually before coming up with a bound on an arbitrary sequence of operations.

To understand the application of amortized analysis to common problems, it is essential to know the basics of both the accounting method and the potential method. The resources presented here supply many examples of both methods applied to real problems, and the reader is therefore again urged to consult any of the sources mentioned here to improve his or her understanding of amortized analysis and to explore these algorithms in greater depth.

## References

1) http://www.eli.sdsu.edu/courses/fall95/cs660/notes/amortized/Amortized.html#RTFToC3
2) http://staff.ustc.edu.cn/~csli/graduate/algorithms/book6/chap18.htm
3) http://www.bowdoin.edu/~ltoma/teaching/cs231/fall11/Lectures/13amortized/amortized.pdf