
College of Vocational Studies

Presentation on Design and Analysis of Algorithms

Presented By: Aryan Garg
Roll No: 2K21/CS/18
BSc (Hons) Computer Science

Presented To: Mr. Dhananjaya Singh
INDEX
1. Amortized Analysis
   1.1 Techniques used in Amortized Analysis
      1.1.1 Aggregate analysis
      1.1.2 The Accounting method
      1.1.3 The Potential method
   1.2 Example of Amortized Analysis
   1.3 Advantages of Amortized Analysis
   1.4 Disadvantages of Amortized Analysis
2. Master Theorem
   2.1 Cases
      2.1.1 Case 1
      2.1.2 Case 2
      2.1.3 Case 3
   2.2 Some key points of Master Theorem
3. Knapsack Problem
   3.1 Types of Knapsack Problem
      3.1.1 0/1 Knapsack Problem
      3.1.2 Fractional Knapsack Problem
   3.2 Differences between the 0/1 and Fractional knapsack problem
Amortized Analysis
Amortized analysis is a technique used in computer science to analyze the average-case
time complexity of algorithms that perform a sequence of operations, where some
operations may be more expensive than others.
Instead of analyzing the worst-case time complexity of an algorithm in isolation, it provides an
average-case analysis by considering the cost of a sequence of operations performed over time.
The key idea behind amortized analysis is to spread the cost of an expensive operation
over several operations.
It is useful for designing efficient algorithms for data structures such as dynamic arrays,
priority queues, and disjoint-set data structures.

1.1 Techniques used in Amortized Analysis are:


1. Aggregate analysis
2. The accounting method
3. The potential method
1.1.1 Aggregate analysis
In aggregate analysis, we show that for all n, a sequence of n operations takes worst-case
time T(n) in total. In the worst case, the average cost, or amortized cost, per operation is
therefore T(n)/n.
We compute the total cost of a sequence of operations and divide it by the number of
operations to get the average cost per operation.

For a sequence of n operations with total worst-case cost T(n), we can compute the average cost by this formula:

amortized cost per operation = T(n) / n

Examples of Aggregate Analysis


1. Stack operations
2. Incrementing a binary counter
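The first of these examples can be sketched in code. Below is a minimal, hypothetical stack supporting PUSH and MULTIPOP; aggregate analysis says a sequence of n such operations costs O(n) in total, because each element is pushed once and popped at most once, so the amortized cost per operation is T(n)/n = O(1).

```python
def run_sequence(ops):
    """Execute a sequence of ('push', x) / ('multipop', k) operations
    and return the total actual cost (one unit per element moved)."""
    stack, total_cost = [], 0
    for op in ops:
        if op[0] == 'push':
            stack.append(op[1])
            total_cost += 1                 # one element pushed
        else:                               # ('multipop', k)
            popped = min(op[1], len(stack))
            for _ in range(popped):
                stack.pop()
            total_cost += popped            # one unit per element popped

    return total_cost

ops = [('push', i) for i in range(10)] + [('multipop', 4), ('push', 99), ('multipop', 100)]
# Aggregate bound: total cost <= 2n, since each pushed element is popped at most once.
assert run_sequence(ops) <= 2 * len(ops)
```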
1.1.2 The Accounting method
In the accounting method of amortized analysis, we assign differing charges to different
operations, with some operations charged more or less than they actually cost.
We call the amount we charge an operation its amortized cost.
When an operation’s amortized cost exceeds its actual cost, we assign the difference to specific
objects in the data structure as credit.

Steps to solve a problem using the above method:

1. Identify the sequence of operations the algorithm will perform, and determine which
operations are “cheap” and which are “expensive.”
2. Define a credit (or potential) function that will be used to track the credit accumulated
by the algorithm.
3. Initialize the credit to 0.
4. For each operation in the sequence:
   - If the operation is cheap, increment the credit by the cost of the operation.
   - If the operation is expensive, subtract the credit from the cost of the operation to
     determine the actual cost paid.
   - If the credit becomes negative, reset it to 0.
5. At the end of the sequence, divide the total cost by the number of operations to determine
the average cost per operation.
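The accounting idea can be sketched on a concrete data structure — a doubling dynamic array (a hypothetical example; the charge of 3 units per append is an assumption, chosen as 1 for inserting the element plus 2 saved as credit to pay for future copying):

```python
def appends_with_credit(n, amortized_charge=3):
    """Simulate n appends into a doubling array; return the credit
    balance after each operation."""
    capacity, size, credit = 1, 0, 0
    balances = []
    for _ in range(n):
        actual = 1                    # cost of writing the new element
        if size == capacity:          # array full: copy everything, then double
            actual += size
            capacity *= 2
        size += 1
        credit += amortized_charge - actual
        balances.append(credit)
    return balances

# The credit never goes negative, so the total actual cost of n appends
# is at most 3n, i.e. the amortized cost per append is O(1).
assert all(b >= 0 for b in appends_with_credit(100))
```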
1.1.3 The Potential method
Instead of representing prepaid work as credit stored with specific objects in the data structure, the
potential method of amortized analysis represents the prepaid work as “potential energy,” or
just “potential,” which can be released to pay for future operations.
We associate the potential with the data structure as a whole rather than with specific objects
within the data structure.
We can designate a potential function Φ on a data structure’s states if:
Φ(a0) = 0, where a0 is the starting state of the data structure, and
Φ(at) ≥ 0 for all states at of the data structure occurring during the course of the computation.
At each stage in the computation, the potential function keeps track of the precharged time: it
measures how much saved-up work is available to pay for expensive operations.
Amortized time of an operation can be defined as:
c + Φ(a’) − Φ(a), where c is the original cost of the operation and a and a’ are the states of the
data structure before and after the operation, respectively.
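The formula above can be illustrated on a hypothetical stack with MULTIPOP, taking Φ(a) = number of elements on the stack (a standard choice; the function names here are my own):

```python
def amortized_costs(ops):
    """Return the amortized cost c + Φ(a') - Φ(a) of each operation,
    with Φ = current stack size."""
    stack, costs = [], []
    for op in ops:
        phi_before = len(stack)
        if op[0] == 'push':
            stack.append(op[1])
            actual = 1
        else:                               # ('multipop', k)
            k = min(op[1], len(stack))
            del stack[len(stack) - k:]
            actual = k
        costs.append(actual + len(stack) - phi_before)
    return costs

ops = [('push', 1), ('push', 2), ('push', 3), ('multipop', 2), ('push', 4)]
# PUSH has amortized cost 1 + 1 = 2; MULTIPOP has amortized cost k - k = 0,
# so every operation is amortized O(1).
assert all(c <= 2 for c in amortized_costs(ops))
```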
1.2 Example of Amortized Analysis
Incrementing a Binary counter
Consider the problem of implementing a k-bit binary counter that counts upward from 0. We use
an array A[0...k−1] of bits, where A.length = k, as the counter. A binary number x stored in the
counter has its lowest-order bit in A[0] and its highest-order bit in A[k−1], so that
x = Σ (i = 0 to k−1) A[i]·2^i. Initially x = 0, and thus A[i] = 0 for i = 0, 1, ..., k−1.

After solving it with the Aggregate analysis method, we can say that the worst-case time for a
sequence of n INCREMENT operations on an initially zero counter is O(n). The average cost of
each operation, and therefore the amortized cost per operation, is O(n)/n = O(1).
After solving it with the Accounting method, the cost of resetting bits within the while loop is paid
for by the dollars on the bits that are reset. Since the INCREMENT procedure sets at most one bit,
the amortized cost of an INCREMENT operation is at most 2 dollars. Hence for n INCREMENT
operations, the total amortized cost is O(n), which bounds the total actual cost.

The Potential method gives us an easy way to analyze the counter even when it does not start at
zero. Suppose the counter starts with b0 1s and, after n INCREMENT operations, contains bn 1s,
where 0 ≤ b0, bn ≤ k. Since b0 ≤ k, if we execute at least n = Ω(k) INCREMENT operations, the
total actual cost is O(n), no matter what initial value the counter contains.
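The INCREMENT procedure analyzed above can be sketched as follows, counting bit flips as the actual cost (a minimal illustration):

```python
def increment(A):
    """Increment the counter stored in bit array A (low-order bit first).
    Return the number of bits flipped, i.e. the actual cost."""
    flips, i = 0, 0
    while i < len(A) and A[i] == 1:
        A[i] = 0                  # carry: reset a 1 bit
        flips += 1
        i += 1
    if i < len(A):
        A[i] = 1                  # set the first 0 bit
        flips += 1
    return flips

k, n = 16, 1000
A = [0] * k
total = sum(increment(A) for _ in range(n))
# Bit A[0] flips every time, A[1] every 2nd time, A[2] every 4th, ...,
# so the total number of flips is at most 2n: amortized O(1) per INCREMENT.
assert total <= 2 * n
```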
1.3 Advantages of Amortized Analysis:
1. More accurate predictions: Amortized analysis provides a more accurate prediction of the
average-case complexity of an algorithm over a sequence of operations, rather than just the
worst-case complexity of individual operations.
2. Provides insight into algorithm behavior: By analyzing the amortized cost of an
algorithm, we can gain insight into how it behaves over a longer period of time and how it
handles different types of inputs.
3. Helps in algorithm design: Amortized analysis can be used as a tool for designing
algorithms that are efficient over a sequence of operations.
4. Useful in dynamic data structures: Amortized analysis is particularly useful in dynamic
data structures like heaps, stacks, and queues, where the cost of an operation may depend
on the current state of the data structure.
1.4 Disadvantages of Amortized Analysis:
1. Complexity: Amortized analysis can be complex, especially when multiple operations are
involved, making it difficult to implement and understand.
2. Limited applicability: Amortized analysis may not be suitable for all types of algorithms,
especially those with highly unpredictable behavior or those that depend on external factors
like network latency or I/O operations.
3. Lack of precision: Although amortized analysis provides a more accurate prediction of
average-case complexity than worst-case analysis, it may not always provide a precise
estimate of the actual performance of an algorithm, especially in cases where there is high
variance in the cost of operations.
Master Theorem
The Master Theorem is a tool used to solve recurrence relations that arise in the analysis
of divide-and-conquer algorithms. It provides a systematic way of solving recurrence
relations of the form:

T(n) = a·T(n/b) + f(n)

where a ≥ 1 and b > 1 are constants, f(n) is an asymptotically positive function, and n is the size
of the problem. The Master Theorem provides conditions under which the solution of the
recurrence has the form Θ(n^k) or Θ(n^k log n) for some constant k, and it gives a formula for
determining the value of k.

Note: It is important to note that the Master Theorem is not applicable to all recurrence
relations, and it may not always provide an exact solution to a given recurrence.
2.1 Cases
It is possible to obtain an asymptotically tight bound in these three cases:
2.1.1 Case 1:
If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then it follows that:
T(n) = Θ(n^(log_b a))

Let's understand it with the help of an example:

T(n) = 8T(n/2) + 1000n^2
Here a = 8 and b = 2, so n^(log_b a) = n^(log_2 8) = n^3, and f(n) = 1000n^2 = O(n^(3−ε)) for
ε = 1; this results in the conclusion:
T(n) = Θ(n^3)
2.1.2 Case 2:
If f(n) = Θ(n^(log_b a)), then it follows that:
T(n) = Θ(n^(log_b a) · log n)

Let's understand it with the help of an example:

T(n) = 2T(n/2) + 10n
Here a = 2 and b = 2, so n^(log_b a) = n, and f(n) = 10n = Θ(n); this results in the conclusion:
T(n) = Θ(n log n)
2.1.3 Case 3:
If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and it is also true that a·f(n/b) ≤ c·f(n) for
some constant c < 1 and all sufficiently large n, then:
T(n) = Θ(f(n))

Let's understand it with the help of an example:

T(n) = 2T(n/2) + n^2
Here a = 2 and b = 2, so n^(log_b a) = n, and f(n) = n^2 = Ω(n^(1+ε)) for ε = 1; the regularity
condition holds since 2·(n/2)^2 = n^2/2 ≤ (1/2)·n^2. This results in the conclusion:
T(n) = Θ(n^2)


2.2 Some key points of Master Theorem:
1. Divide-and-conquer recurrences: The Master Theorem is specifically designed to solve
recurrence relations that arise in the analysis of divide-and-conquer algorithms.
2. Time complexity: The Master Theorem provides conditions for the solution of the
recurrence to be in the form of O(n^k) for some constant k, and it gives a formula for
determining the value of k.
3. Useful tool: Despite its limitations, the Master Theorem is a useful tool for analyzing the
time complexity of divide-and-conquer algorithms and provides a good starting point for
solving more complex recurrences.
4. Supplemented with other techniques: In some cases, the Master Theorem may need to
be supplemented with other techniques, such as the substitution method or the iteration
method, to completely solve a given recurrence relation.
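The three cases can be mechanized for the common situation f(n) = Θ(n^d). The sketch below is a simplified illustration (the function name and output format are my own; the full theorem also requires case 3's regularity condition, which any polynomial f(n) satisfies):

```python
import math

def master_theorem(a, b, d):
    """Solve T(n) = a*T(n/b) + Θ(n^d); return the bound as a string."""
    crit = math.log(a) / math.log(b)      # critical exponent log_b(a)
    if b ** d < a:                        # d < log_b(a): case 1, leaves dominate
        return f"Θ(n^{crit:g})"
    if b ** d == a:                       # d = log_b(a): case 2, all levels equal
        return "Θ(n log n)" if d == 1 else f"Θ(n^{d:g} log n)"
    return "Θ(n)" if d == 1 else f"Θ(n^{d:g})"   # case 3: root dominates

print(master_theorem(8, 2, 2))   # T(n) = 8T(n/2) + Θ(n^2)
print(master_theorem(2, 2, 1))   # T(n) = 2T(n/2) + Θ(n)
print(master_theorem(2, 2, 2))   # T(n) = 2T(n/2) + Θ(n^2)
```

Comparing b^d against a avoids floating-point trouble when deciding which case applies.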
Knapsack Problem
Suppose you have been given a knapsack, or bag, with a limited weight capacity, and a set of items,
each with some weight and value. The problem is: “Which items should be placed in the knapsack
so that the weight limit is not exceeded and the total value of the items is as large as possible?”
Consider a real-life example. A thief enters a museum carrying a knapsack, or bag, that has a
limited weight capacity. The museum contains various items of different values. The thief must
decide which items to keep in the bag so that his profit is maximized.
Some important points related to the knapsack problem are:
It is a combinatorial optimization-related problem.
Given a set of N items, usually numbered from 1 to N, each item i has a weight wi and a
value vi.
It determines the number of each item to include in a collection so that the total weight is
less than or equal to a given limit M and the total value is as large as possible.
The problem often arises in resource allocation where there are financial constraints.
3.1 Types of Knapsack Problem:
3.1.1 0/1 knapsack problem
3.1.2 Fractional knapsack problem
3.1.1 0/1 Knapsack Problem:
This problem is solved by using a dynamic programming approach. In this problem, the items
are either completely filled or no items are filled in a knapsack. 1 means items are completely
filled or 0 means no item in the bag. For example, we have two items having weights of 12kg
and 13kg, respectively. If we pick the 12kg item then we cannot pick the 10kg item from the
12kg item (because the item is not divisible); we have to pick the 12kg item completely.
In this problem, we cannot take the fraction of the items. Here, we have to decide whether
we have to take the item, i.e., x = 1 or not, i.e., x = 0.
The greedy approach does not provide the optimal result in this problem.
Another approach is to sort the items by cost per unit weight and starts from the highest until
the knapsack is full. Still, it is not a good solution. Suppose there are N items. We have two
options either we select or exclude the item. The brute force approach has O(2N)
exponential running time. Instead of using the brute force approach, we use the dynamic
programming approach to obtain the optimal solution.
Time Complexity of 1/0 Knapsack Problem is:
O(N*W)
Example of 0/1 Knapsack Problem:
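A minimal sketch of the dynamic-programming approach described above (the item weights, values, and capacity are made-up illustration data): dp[w] holds the best value achievable with capacity w using the items processed so far, and iterating w downward enforces the 0/1 (no-reuse) rule.

```python
def knapsack_01(weights, values, W):
    """Return the maximum total value for capacity W.  O(N*W) time."""
    dp = [0] * (W + 1)
    for wt, val in zip(weights, values):
        for w in range(W, wt - 1, -1):   # downward: each item used at most once
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[W]

# Hypothetical instance: capacity 10, three items.
print(knapsack_01([5, 4, 6], [10, 40, 30], 10))   # best is weights 4 + 6 -> value 70
```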
3.1.2 Fractional Knapsack Problem:
This problem is also used for solving real-world problems. It is solved using the Greedy
approach. In this problem we can also divide the items, meaning we can take a fractional part
of an item; that is why it is called the fractional knapsack problem. For example, if we have an
item of 13 kg, we can take 12 kg of it and leave the remaining 1 kg. To solve the fractional
problem, we first compute the value per unit weight of each item.
Because the fractional knapsack problem allows fractions of items, the greedy approach
yields an optimal solution for this problem.
The fractional knapsack problem can be solved by first sorting the items according to their
value-to-weight ratio, which can be done in O(N log N). This approach starts with the most
valuable item (the highest ratio) and takes as much of it as possible, then considers the next
item in the sorted list, and so on; this pass is a linear scan taking O(N) time.
Therefore, the overall running time is O(N log N) plus O(N), which equals O(N log N). We
can say that the fractional knapsack problem can be solved much faster than the 0/1
knapsack problem.
Time Complexity of the Fractional Knapsack Problem is:
O(N log N)
Example of Fractional Knapsack Problem:
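A minimal sketch of the greedy approach described above: sort items by value-to-weight ratio (O(N log N)), then take each item greedily, splitting the last one if needed. The weights, values, and capacity below are made-up illustration data.

```python
def fractional_knapsack(weights, values, W):
    """Return the maximum total value for capacity W, allowing fractions."""
    # Sort by value per unit weight, highest ratio first.
    items = sorted(zip(weights, values), key=lambda it: it[1] / it[0], reverse=True)
    total = 0.0
    for wt, val in items:
        if W <= 0:
            break
        take = min(wt, W)            # whole item, or the fraction that fits
        total += val * (take / wt)
        W -= take
    return total

# Hypothetical instance: capacity 15, three items.
print(fractional_knapsack([10, 5, 8], [60, 40, 40], 15))   # -> 100.0
```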
3.2 Differences between the 0/1 and Fractional knapsack problem:
1. The 0/1 knapsack problem is solved using a dynamic programming approach, while the
fractional knapsack problem is solved using a greedy approach.
2. The greedy-choice property does not hold for the 0/1 knapsack problem, but it does hold
for the fractional knapsack problem, so a greedy strategy is optimal only for the
fractional version.
3. In the 0/1 knapsack problem, we are not allowed to break items; in the fractional
knapsack problem, we can break items to maximize the total value of the knapsack.
4. The 0/1 knapsack problem finds the most valuable subset of items whose total weight is
at most the capacity; the fractional knapsack problem can fill the knapsack exactly to its
capacity.
5. In the 0/1 knapsack problem, we take items in integer (whole) amounts; in the fractional
knapsack problem, we can take items in fractional (floating-point) amounts.
Thank You
