At the end of the sequence, divide the total cost by the number of operations to determine the
average cost per operation.
1.1.3 The Potential Method
Instead of representing prepaid work as credit stored with specific objects in the data structure, the
potential method of amortized analysis represents the prepaid work as “potential energy,” or
just “potential,” which can be released to pay for future operations.
We associate the potential with the data structure as a whole rather than with specific objects
within the data structure.
We can designate a potential function Φ on a data structure's states if:
Φ(a0) = 0, where a0 is the starting state of the data structure.
Φ(at) ≥ 0 for all states at of the data structure occurring during the course of the computation.
At each stage in the computation, the potential function keeps track of the precharged time: it
measures the amount of saved-up time available to pay for expensive operations.
The amortized time of an operation can be defined as:
c + Φ(a′) − Φ(a), where c is the actual cost of the operation and a and a′ are the states of the
data structure before and after the operation, respectively.
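As a concrete illustration, here is a minimal sketch in Python that applies this formula to a stack supporting MULTIPOP; the stack example and the choice of Φ as the number of elements on the stack are assumptions for illustration, not taken from the text:

    def phi(stack):
        # Potential Φ: number of elements on the stack (Φ(a0) = 0, Φ ≥ 0).
        return len(stack)

    stack = []
    for op, arg in [("push", 1), ("push", 2), ("push", 3), ("multipop", 2)]:
        before = phi(stack)
        if op == "push":
            stack.append(arg)
            actual = 1                    # one element moved
        else:
            k = min(arg, len(stack))      # multipop: pop up to arg elements
            for _ in range(k):
                stack.pop()
            actual = k                    # k elements moved
        print(op, "actual =", actual, "amortized =", actual + phi(stack) - before)

Every push has amortized cost 2, and a multipop of k elements has amortized cost 0, because the potential it releases pays for the pops.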
1.2 Example of Amortized Analysis
Incrementing a Binary Counter
Consider the problem of implementing a k-bit binary counter that counts upward from 0. We use
an array A[0...k−1] of bits, where A.length = k, as the counter. A binary number x that is stored in
the counter has its lowest-order bit in A[0] and its highest-order bit in A[k−1], so that
x = Σ(i=0 to k−1) A[i]·2^i. Initially, x = 0, and thus A[i] = 0 for i = 0, 1, ..., k−1.
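The INCREMENT procedure analyzed below can be sketched as follows (a minimal sketch; having it return the number of bit assignments, i.e. the actual cost of the operation, is an addition used in the check later in this section):

    def increment(A):
        # Add 1 to the k-bit counter A, where A[0] is the lowest-order bit.
        # Returns the number of bit assignments performed (the actual cost).
        cost = 0
        i = 0
        while i < len(A) and A[i] == 1:
            A[i] = 0            # reset each trailing 1 to 0
            cost += 1
            i += 1
        if i < len(A):
            A[i] = 1            # set at most one bit to 1
            cost += 1
        return cost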
Solving it with the aggregate analysis method, the worst-case time for a sequence of n
INCREMENT operations on an initially zero counter is O(n), since at most 2n bits are flipped over
the whole sequence. The average cost of each operation, and therefore the amortized cost per
operation, is O(n)/n = O(1).
Solving it with the accounting method, we charge 2 dollars to set a bit to 1: one dollar pays for
actually setting the bit, and the other is stored as credit on the bit to pay for resetting it to 0 later.
The cost of resetting the bits within the while loop is thus paid for by the dollars on the bits that
are reset. Since the INCREMENT procedure sets at most one bit, the amortized cost of an
INCREMENT operation is at most 2 dollars. Hence for n INCREMENT operations, the total
amortized cost is O(n), which bounds the total actual cost.
Solving it with the potential method gives us an easy way to analyze the counter even when it
does not start at zero. Take the potential after the i-th operation to be bi, the number of 1s in the
counter; then the amortized cost of each INCREMENT is at most 2. The counter starts with b0 1s,
and after n INCREMENT operations it has bn 1s, where 0 ≤ b0, bn ≤ k, so the total actual cost is
at most 2n − bn + b0. Since b0 ≤ k, as long as we execute at least n = Ω(k) INCREMENT
operations, the total actual cost is O(n), no matter what initial value the counter contains.
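Continuing the sketch from above, this bound can be checked empirically; the counter size and test values below are arbitrary assumptions:

    def total_cost(A, n):
        b0 = sum(A)                          # number of 1s before the sequence
        total = sum(increment(A) for _ in range(n))
        bn = sum(A)                          # number of 1s after the sequence
        assert total <= 2 * n - bn + b0      # the potential-method bound
        return total

    A = [1, 0, 1, 1, 0, 0, 0, 0]             # k = 8, counter initially holds 13
    print(total_cost(A, 20))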
1.3 Advantages of Amortized Analysis:
1. More accurate predictions: Amortized analysis provides a more accurate prediction of the
average-case complexity of an algorithm over a sequence of operations, rather than just the
worst-case complexity of individual operations.
2. Provides insight into algorithm behavior: By analyzing the amortized cost of an
algorithm, we can gain insight into how it behaves over a longer period of time and how it
handles different types of inputs.
3. Helps in algorithm design: Amortized analysis can be used as a tool for designing
algorithms that are efficient over a sequence of operations.
4. Useful in dynamic data structures: Amortized analysis is particularly useful in dynamic
data structures like heaps, stacks, and queues, where the cost of an operation may depend
on the current state of the data structure.
1.4 Disadvantages of Amortized Analysis:
1. Complexity: Amortized analysis can be complex, especially when multiple operations are
involved, making it difficult to implement and understand.
2. Limited applicability: Amortized analysis may not be suitable for all types of algorithms,
especially those with highly unpredictable behavior or those that depend on external factors
like network latency or I/O operations.
3. Lack of precision: Although amortized analysis provides a more accurate prediction of
average-case complexity than worst-case analysis, it may not always provide a precise
estimate of the actual performance of an algorithm, especially in cases where there is high
variance in the cost of operations.
Master Theorem
The Master Theorem is a tool used to solve recurrence relations that arise in the analysis
of divide-and-conquer algorithms. The Master Theorem provides a systematic way of
solving recurrence relations of the form:
T(n) = a·T(n/b) + f(n)
where a ≥ 1 and b > 1 are constants, f(n) is an asymptotically positive function, and n is the size
of the problem. The Master Theorem provides conditions for the solution of the recurrence to be
of the form Θ(n^k) or Θ(n^k·log n) for some constant k, and it gives a formula for determining
the value of k.
Note: It is important to note that the Master Theorem is not applicable to all recurrence
relations, and it may not always provide an exact solution to a given recurrence.
It is possible to obtain an asymptotically tight bound in these three cases:
2.1.1 Case 1:
If f(n) = O(n^(log_b(a) − ε)) for some constant ε > 0, then it follows that:
T(n) = Θ(n^(log_b(a)))
2.1.2 Case 2:
If f(n) = Θ(n^(log_b(a))), then it follows that:
T(n) = Θ(n^(log_b(a))·log n)
2.1.3 Case 3:
If f(n) = Ω(n^(log_b(a) + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant
c < 1 and all sufficiently large n, then it follows that:
T(n) = Θ(f(n))
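For the common special case f(n) = Θ(n^d), comparing d with log_b(a) decides which case applies; a minimal sketch (the function name and the restriction to polynomial f(n) are simplifying assumptions for illustration):

    import math

    def master_theorem(a, b, d):
        # Solves T(n) = a*T(n/b) + Θ(n^d) for a >= 1, b > 1, d >= 0.
        c = math.log(a, b)                 # the critical exponent log_b(a)
        if math.isclose(d, c):             # Case 2: f(n) = Θ(n^c)
            return f"Θ(n^{c:g} log n)"
        if d < c:                          # Case 1: f(n) grows slower than n^c
            return f"Θ(n^{c:g})"
        return f"Θ(n^{d:g})"               # Case 3: f(n) dominates

    print(master_theorem(2, 2, 1))  # merge sort, T(n) = 2T(n/2) + Θ(n): Θ(n^1 log n)
    print(master_theorem(8, 2, 1))  # Case 1: Θ(n^3)
    print(master_theorem(2, 2, 2))  # Case 3: Θ(n^2)

For polynomial f(n), the regularity condition of Case 3 holds automatically, since a·(n/b)^d = (a/b^d)·n^d with a/b^d < 1.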
3.1.2 Fractional Knapsack Problem:
Like the 0/1 knapsack problem, this problem models real-world resource-allocation tasks. It is
solved by using the greedy approach. In this problem we are also allowed to divide the items,
meaning we can take a fractional part of an item, which is why it is called the fractional
knapsack problem. For example, if we have an item of 13 kg, then we can take 12 kg of it and
leave the remaining 1 kg. To solve the fractional problem, we first compute the value per unit
weight of each item.
Because the fractional knapsack problem allows using fractions of items, the greedy approach
yields an optimal solution.
The problem is solved by first sorting the items according to their value-to-weight ratio, which
can be done in O(N log N) time. This approach starts with the most valuable item, i.e. the item
with the highest ratio, and takes as much of it as possible. Then we consider the next item from
the sorted list, and so on until the knapsack is full; this scan over the sorted items is a linear
pass with O(N) time complexity.
Therefore, the overall running time is O(N log N) plus O(N), which equals O(N log N). We
can say that the fractional knapsack problem can be solved much faster than the 0/1
knapsack problem.
Time Complexity of Fractional Knapsack Problem is:
O(N log N)
Example of Fractional Knapsack Problem:
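A minimal sketch of the greedy strategy described above; the item values, weights, and the knapsack capacity below are assumed purely for illustration:

    def fractional_knapsack(items, capacity):
        # items: list of (value, weight) pairs; returns the maximum total value.
        # Sort by value per unit weight, highest ratio first: O(N log N).
        items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
        total = 0.0
        for value, weight in items:          # linear scan: O(N)
            if capacity == 0:
                break
            take = min(weight, capacity)     # take as much as possible
            total += value * take / weight   # possibly a fraction of the item
            capacity -= take
        return total

    # Three items given as (value, weight) pairs, knapsack capacity 50.
    print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0

Here the items have ratios 6, 5, and 4; the greedy strategy takes the first two items whole (value 160) and 20/30 of the third (value 80), for a total of 240.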
3.2 Differences between the 0/1 and Fractional knapsack problem:
1. The 0/1 knapsack problem is solved using a dynamic programming approach, while the
fractional knapsack problem is solved using a greedy approach.
2. The 0/1 knapsack problem does not satisfy the greedy-choice property, while the fractional
knapsack problem does, so a greedy algorithm is optimal only for the fractional version.
3. In the 0/1 knapsack problem, we are not allowed to break items, while in the fractional
knapsack problem, we can break items to maximize the total value of the knapsack.
4. The 0/1 knapsack problem finds the most valuable subset of items whose total weight is at
most the knapsack capacity, while in the fractional knapsack problem the optimal solution
can fill the knapsack to exactly its capacity.
5. In the 0/1 knapsack problem, we take objects only in integer (whole) quantities, while in
the fractional knapsack problem, we can take objects in fractional quantities.
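To see why point 2 holds, consider a small assumed instance (illustrative, not from the text) where selecting whole items greedily by ratio is not optimal for the 0/1 problem:

    from itertools import combinations

    items = [(60, 10), (100, 20), (120, 30)]     # (value, weight) pairs
    W = 50

    # Greedy by value/weight ratio, taking whole items only.
    greedy_val, cap = 0, W
    for v, w in sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True):
        if w <= cap:
            greedy_val += v
            cap -= w

    # Optimal 0/1 answer by brute force over all subsets.
    best = max(sum(v for v, w in s)
               for r in range(len(items) + 1)
               for s in combinations(items, r)
               if sum(w for v, w in s) <= W)

    print(greedy_val, best)   # 160 220: greedy is not optimal for 0/1

The same instance solved fractionally (above) achieves 240, which is why the greedy approach is safe only when items can be divided.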