
DESIGN AND ANALYSIS OF

ALGORITHM
Assignment # 01
Dated: 15/06/2023

Dynamic Programming
Definition:
Dynamic programming (DP) is defined as a technique that solves certain types of problems in polynomial
time. Dynamic programming solutions are faster than the exponential brute-force method, and their correctness
can be easily proved.

Dynamic programming is mainly an optimization over plain recursion. Whenever we see a recursive solution that
has repeated calls for the same inputs, we can optimize it using dynamic programming. The idea is simply to store the
results of subproblems so that we do not have to re-compute them when needed later. This simple optimization reduces
time complexities from exponential to polynomial.

Characteristics of Dynamic Programming Algorithm:


➢ In general, dynamic programming is one of the most powerful techniques for solving a certain class of problems.
➢ There is an elegant way to formulate the approach and a very simple thinking process, and the coding part is very
easy.
➢ Essentially, it is a simple idea: after solving a problem with a given input, save the result as a reference for future
use, so you won’t have to re-solve it.
➢ Additionally, the optimal solutions to the subproblems contribute to the optimal solution of the given problem
(referred to as the optimal substructure property).
➢ The solutions to the subproblems are stored in a table or array (memoization) or built up in a bottom-up manner
(tabulation) to avoid redundant computation.
➢ The solution to the problem can be constructed from the solutions to the subproblems.

Working Principles of Dynamic Programming:


❖ Characterize the structure of an optimal solution, i.e. build a mathematical model of the solution.
❖ Recursively define the value of the optimal solution.
❖ Using a bottom-up approach, compute the value of the optimal solution for each possible subproblem.
❖ Construct an optimal solution for the original problem using the information computed in the previous step.

Applications:
Dynamic programming is used to solve optimization problems. It is used to solve many real-life problems such as:

i. Making change (coin change) problem
ii. Knapsack problem
iii. Optimal binary search tree
Techniques to solve Dynamic Programming Problems
1) Top-down (Memoization):
Break down the given problem in order to begin solving it. If you see that a subproblem has already been
solved, return the saved answer. If it hasn’t been solved, solve it and save the answer. This is usually easy
to think of and very intuitive; this is referred to as memoization.

2) Bottom-up (Tabulation):
Analyze the problem and see in what order the subproblems should be solved, and work your way up from the
trivial subproblem to the given problem. This process ensures that the subproblems are solved before the
main problem. This is referred to as tabulation.

How to solve a Dynamic Programming Problem?


To solve a problem using dynamic programming, we need to check two necessary conditions:

▪ Overlapping Subproblems:
When the solutions to the same subproblems are needed repeatedly while solving the actual problem, the
problem is said to have the overlapping subproblems property.

▪ Optimal Substructure Property:
If the optimal solution of the given problem can be obtained from the optimal solutions of its subproblems,
the problem is said to have the optimal substructure property.

Steps to solve?
1) Identify if it is a dynamic programming problem.
2) Decide a state expression with the least parameters.
3) Formulate the state and transition relationships.
4) Do tabulation (or memoization).

How to solve a Dynamic Programming Problem through an example?


Problem:
Let’s find the Fibonacci sequence up to the nth term. The Fibonacci series is the sequence of numbers in which each
number is the sum of the two preceding ones. For example: 0, 1, 1, 2, 3, 5, and so on. Here, each number is the sum of the two
preceding numbers.

Naive Approach:
The basic way to find the nth Fibonacci number is to use recursion.

Below is the implementation for the above approach:

#include <iostream>
using namespace std;

// Naive recursive Fibonacci: recomputes the same subproblems many times.
int fib(int n) {
    if (n <= 1) {
        return n;
    }
    int x = fib(n - 1);
    int y = fib(n - 2);
    return x + y;
}

int main() {
    int n = 5;
    cout << fib(n);
    return 0;
}

Output: 5
Time Complexity: O(2^n)
Memoized Approach:
In this process of memoization, considering the above Fibonacci numbers example, it can be observed that
the total number of unique calls will be at most (n + 1).

Below is the implementation for the above approach:

#include <iostream>
using namespace std;

int fibo_helper(int n, int* ans) {
    if (n <= 1) {
        return n;
    }
    if (ans[n] != -1) {    // already computed, reuse the stored answer
        return ans[n];
    }
    int x = fibo_helper(n - 1, ans);
    int y = fibo_helper(n - 2, ans);
    ans[n] = x + y;
    return ans[n];
}

int fibo(int n) {
    int* ans = new int[n + 1];
    for (int i = 0; i <= n; i++) {
        ans[i] = -1;       // -1 marks "not yet computed"
    }
    int result = fibo_helper(n, ans);
    delete[] ans;
    return result;
}

int main() {
    int n = 5;
    cout << fibo(n);
    return 0;
}

Time Complexity: O(n)


Auxiliary Space: O(n)
Optimized Approach: Follow a bottom-up approach to reach the desired index, computing each
Fibonacci number iteratively from the two preceding values. This approach of converting recursion
into iteration is known as Dynamic Programming (DP).
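
Below is a minimal sketch of this bottom-up (tabulation) version, written in the same style as the recursive implementations above:

#include <iostream>
using namespace std;

// Bottom-up (tabulation) Fibonacci: fill the table from the trivial
// subproblems up to n, so every subproblem is solved exactly once.
int fibo(int n) {
    if (n <= 1) {
        return n;
    }
    int* table = new int[n + 1];
    table[0] = 0;
    table[1] = 1;
    for (int i = 2; i <= n; i++) {
        table[i] = table[i - 1] + table[i - 2];   // reuse earlier entries
    }
    int result = table[n];
    delete[] table;
    return result;
}

int main() {
    cout << fibo(5);   // prints 5
    return 0;
}

Time Complexity: O(n)
Auxiliary Space: O(n)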

Disjoint Set
Definition:
In the design and analysis of algorithms, a disjoint set is a data structure that represents a collection of disjoint
(non-overlapping) sets. It provides efficient operations for creating and manipulating sets, as well as determining whether
two elements belong to the same set.

Usage:
The disjoint set data structure is commonly used in algorithms that deal with partitioning elements into groups,
such as clustering, graph algorithms, and image processing.

Operations:
The main operations supported by a disjoint set data structure are:

1. MakeSet(x): Creates a new set with a single element x. Initially, each element is in its own set.
2. Find(x): Returns the representative or root element of the set that contains element x. The
representative is typically chosen as an arbitrary element of the set and is used to uniquely identify the
set.
3. Union(x, y): Merges the sets that contain elements x and y into a single set, combining their elements. This
operation modifies the disjoint set data structure by linking the roots of the two sets.

Implementation:
To efficiently implement these operations, various techniques can be employed. One commonly used technique is
known as “union by rank” or “union by size”, which optimizes the union operation by always appending the smaller tree to
the larger tree to keep the overall height of the trees small. Another technique is “path compression”, which optimizes the
find operation by making each element directly point to the representative, reducing the time required for subsequent
find operations.
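
As an illustration, here is a minimal C++ sketch combining union by rank and path compression (the structure and member names are our own, not from any standard library):

#include <utility>
#include <vector>
using namespace std;

// Disjoint set (union-find) with union by rank and path compression.
struct DisjointSet {
    vector<int> parent;
    vector<int> rank_;

    // MakeSet for elements 0..n-1: each element starts in its own set.
    DisjointSet(int n) : parent(n), rank_(n, 0) {
        for (int i = 0; i < n; i++) {
            parent[i] = i;
        }
    }

    // Find with path compression: every visited node is pointed at the root.
    int find(int x) {
        if (parent[x] != x) {
            parent[x] = find(parent[x]);
        }
        return parent[x];
    }

    // Union by rank: attach the shorter tree under the taller one.
    void unite(int x, int y) {
        int rx = find(x);
        int ry = find(y);
        if (rx == ry) return;                   // already in the same set
        if (rank_[rx] < rank_[ry]) swap(rx, ry);
        parent[ry] = rx;
        if (rank_[rx] == rank_[ry]) rank_[rx]++;
    }
};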

Time Complexity:
The time complexity of the operations in a disjoint set data structure can be analyzed using the amortized
analysis. The MakeSet operations runs in constant time O(1), while both find and union operations have an amortized
time complexity of nearly O(α(n)), where α(n) is a very slowly growing function known as the inverse Ackermann function.
In practical terms, this means that the disjoint set operations can be considered nearly constant time.

Conclusion:
Overall, the disjoint set data structure is a valuable tool for efficiently handling partitioning problems and is widely
used in various algorithmic applications.

NP-complete Problems
Definition:
In the design and analysis of algorithms, NP-complete problems play a crucial role. NP stands for
“Nondeterministic Polynomial Time”, which refers to a class of problems whose solutions can be verified in polynomial time. An NP-
complete problem is a specific type of problem within the NP class, and it has the property that any other problem in NP
can be reduced to it in polynomial time.

Conditions:
More formally, a problem is classified as NP-complete if it satisfies two conditions:

1) It belongs to the class NP, meaning that solutions to the problem can be verified in polynomial time.
2) It is at least as hard as any other problem in NP. This means that if there exists a polynomial-time algorithm to
solve one NP-complete problem, then there exists a polynomial-time algorithm for solving all NP problems.

Significance:
The significance of NP-complete problems lies in the fact that they are believed to be computationally intractable,
meaning that no efficient algorithm exists to solve them in the general case. This belief is based on the fact that many
attempts to find efficient algorithms for NP-complete problems have failed so far, and no polynomial-time solution has
been discovered.

Example:
A famous example of an NP-complete problem is the Boolean satisfiability problem (SAT). It involves
determining whether a given Boolean formula can be satisfied by assigning Boolean values to its variables.
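
To illustrate the “verifiable in polynomial time” part of the definition, here is a minimal sketch of a verifier for SAT formulas in conjunctive normal form (the literal encoding here is an assumption of this sketch, not a standard format):

#include <cstdlib>
#include <iostream>
#include <vector>
using namespace std;

// A clause is a list of literals: +v means variable v, -v means NOT v (v >= 1).
using Clause = vector<int>;

// Polynomial-time verifier for CNF-SAT: checks a candidate assignment
// against the formula in time linear in the size of the formula.
bool verifySAT(const vector<Clause>& formula, const vector<bool>& assignment) {
    for (const Clause& clause : formula) {
        bool satisfied = false;
        for (int lit : clause) {
            bool value = assignment[abs(lit) - 1];   // variables numbered from 1
            if ((lit > 0 && value) || (lit < 0 && !value)) {
                satisfied = true;                    // one true literal is enough
                break;
            }
        }
        if (!satisfied) return false;                // one failed clause fails all
    }
    return true;
}

int main() {
    // (x1 OR NOT x2) AND (x2 OR x3), with x1 = true, x2 = true, x3 = false.
    vector<Clause> formula = {{1, -2}, {2, 3}};
    vector<bool> assignment = {true, true, false};
    cout << (verifySAT(formula, assignment) ? "satisfied" : "not satisfied");
    return 0;
}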

Detail:
When dealing with NP-complete problems, the common approach is to develop approximation algorithms or
heuristic methods that provide reasonably good solutions in practice, even if they do not guarantee finding the optimal
solution in all cases. Researchers also work on identifying problem-specific characteristics that allow for the development
of specialized algorithms that perform well on certain instances of NP-complete problems.

The existence of NP-complete problems has significant implications for the field of computational complexity
theory and the study of algorithm efficiency. It demonstrates that there are problems for which finding an exact solution
efficiently is unlikely unless P (the class of problems solvable in polynomial time) is equal to NP, a question that remains
one of the most important unsolved problems in computer science.
Approximation Algorithm
Definition:
An approximation algorithm is an algorithm that provides a near-optimal solution for a given optimization problem,
usually in a more efficient manner. It trades off optimality for computational efficiency by finding a solution that is
guaranteed to be close to the optimal solution within a certain approximation ratio or factor.

Goal/Purpose:
The goal/purpose of an approximation algorithm is to strike a balance between quality and computational
resources. It aims to find a solution that is reasonably good, often within a known or provable bound, while avoiding the
high computational cost associated with finding the optimal solution.

Designing:
When designing an approximation algorithm, the following key elements are considered:

1. Problem Formulation: The problem is defined precisely in terms of inputs, outputs, and the objectives to be
optimized. The problem is typically an optimization problem where the goal is to maximize or minimize
an objective function under certain constraints.
2. Approximation Ratio: An approximation ratio or factor is defined, which is a bound on the performance
of the approximation algorithm compared to the optimal solution. It quantifies how close the obtained
solution is to the optimal solution. For example, if the approximation ratio is 2, it means the algorithm
provides a solution that is at most twice as bad as the optimal solution (a classic example is sketched
after this list).
3. Efficiency: The approximation algorithm is designed to run in polynomial time, ensuring that it can handle
large input sizes within a reasonable time frame. The focus is on finding a solution that is “good enough”
quickly rather than exhaustively searching for the optimal solution.
4. Analysis and Proof: The approximation algorithm is analyzed to establish its approximation guarantee.
The analysis typically involves bounding the performance of the algorithm in terms of the optimal
solution. The proof aims to show that the approximation algorithm’s solution is within the desired
approximation ratio.
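
As a concrete example of an approximation ratio of 2, here is a minimal sketch of the classic 2-approximation for the unweighted vertex cover problem (the function name and test graph are illustrative):

#include <iostream>
#include <set>
#include <utility>
#include <vector>
using namespace std;

// 2-approximation for unweighted vertex cover: for every edge that is not
// yet covered, add both endpoints. The chosen edges form a matching, and any
// cover must contain at least one endpoint of each matched edge, so the
// returned cover is at most twice the size of an optimal cover.
set<int> vertexCoverApprox(const vector<pair<int, int>>& edges) {
    set<int> cover;
    for (const auto& e : edges) {
        if (cover.count(e.first) == 0 && cover.count(e.second) == 0) {
            cover.insert(e.first);
            cover.insert(e.second);
        }
    }
    return cover;
}

int main() {
    vector<pair<int, int>> edges = {{0, 1}, {1, 2}, {2, 3}};
    set<int> cover = vertexCoverApprox(edges);
    cout << "Cover size: " << cover.size();   // 4, at most 2 * optimal (2)
    return 0;
}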

Usage:
Approximation algorithms are widely used in practice to solve optimization problems in various domains, such as
scheduling, resource allocation, network design, and facility location. They provide efficient solutions that are often
sufficient for real-world applications where finding the exact optimal solution is impractical or computationally infeasible.

Conclusion:
Overall, approximation algorithms offer a trade-off between solution quality and computational efficiency,
allowing for the efficient exploration of optimization problems in practice.

Network Clock
Definition:
A network clock refers to a mechanism used in computer networks to synchronize the time across multiple
devices or systems. It ensures that different nodes in the network have a consistent notion of time, which is crucial for
various network operations and applications.
Details:
Network clocks are essential for tasks such as timestamping network events, coordinating distributed
computations, maintaining data consistency, and enabling time-based security protocols. They help ensure that different
devices or processes can accurately coordinate their actions based on a shared understanding of time.

Key Concepts:
The design and analysis of algorithms related to network clocks involves addressing challenges such as clock drift,
clock skew, and clock synchronization protocols. Here are some key concepts:

1. Clock Drift: Clock drift refers to the phenomenon where a clock’s rate of ticking is not perfectly accurate
and tends to deviate over time. This can occur due to factors like variations in hardware, temperature, or
clock crystal quality. Analyzing clock drift helps determine the maximum error that can accumulate in
time synchronization over a given period.
2. Clock Skew: Clock skew refers to the difference in the readings of two clocks that should ideally be
synchronized. Clock skew can arise due to network delays, packet transmission times, or differences in
clock precision. It represents the relative difference in the local time between different devices or
processes.
3. Clock Synchronization Protocols: Clock synchronization protocols are algorithms or mechanisms used to
establish and maintain time synchronization among networked devices. These protocols aim to minimize
clock skew and drift to achieve accurate time synchronization. Examples of clock synchronization
protocols include the Network Time Protocol (NTP) and the Precision Time Protocol (PTP); a simplified
NTP-style offset calculation is sketched after this list.
4. Analysis of Synchronization Algorithms: The design and analysis of clock synchronization algorithms
involve assessing their accuracy, efficiency, and resilience to network delays or failures. Evaluating the
stability, convergence rate, and robustness of synchronization algorithms is crucial to ensure reliable time
synchronization in various network conditions.
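
Below is a minimal sketch of the offset estimate used by NTP-style protocols. The function name and example timestamps are illustrative assumptions, and the formula assumes symmetric network delay:

#include <iostream>
using namespace std;

// NTP-style clock offset estimate from a request/response exchange:
// offset = ((t1 - t0) + (t2 - t3)) / 2, assuming the one-way delays
// in both directions are equal.
double estimateOffset(double t0,    // client send time (client clock)
                      double t1,    // server receive time (server clock)
                      double t2,    // server send time (server clock)
                      double t3) {  // client receive time (client clock)
    return ((t1 - t0) + (t2 - t3)) / 2.0;
}

int main() {
    // Example: server clock runs 5 units ahead, one-way delay is 2 units.
    double offset = estimateOffset(100.0, 107.0, 107.0, 104.0);
    cout << "Estimated offset: " << offset;   // prints 5
    return 0;
}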

Conclusion:
Efficient clock synchronization algorithms contribute to the overall performance, reliability, and security of
computer networks. They enable accurate event ordering, coordination of distributed computations, detection of
anomalies or attacks based on time-related patterns, and proper functioning of time-sensitive applications.

Designing and analyzing algorithms related to network clocks often involves considering factors such as clock
precision, network latency, message exchange overhead, and fault tolerance. It is an important aspect of network protocol
design and plays a significant role in ensuring the proper functioning of distributed systems and networked applications.

Write the algorithm of Binary Search using the Recursive Method


Algorithm binarySearch(arr, key, low, high)

    if (low > high)
        return false
    else
        mid = (low + high) / 2
        if (key == arr[mid])
            return mid
        else if (key > arr[mid])
            return binarySearch(arr, key, mid + 1, high)
        else
            return binarySearch(arr, key, low, mid - 1)
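
For reference, here is a runnable C++ version of the same recursive algorithm, as a sketch that returns -1 (instead of false) when the key is absent:

#include <iostream>
using namespace std;

// Recursive binary search: returns the index of key in the sorted array arr,
// or -1 if the key is not present.
int binarySearch(int arr[], int key, int low, int high) {
    if (low > high)
        return -1;                                      // key not found
    int mid = (low + high) / 2;
    if (key == arr[mid])
        return mid;
    else if (key > arr[mid])
        return binarySearch(arr, key, mid + 1, high);   // search right half
    else
        return binarySearch(arr, key, low, mid - 1);    // search left half
}

int main() {
    int arr[] = {2, 5, 8, 12, 16, 23};
    cout << binarySearch(arr, 12, 0, 5);   // prints 3
    return 0;
}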

Which one takes less time?


$f(n) = 3n^{\sqrt{n}} \qquad g(n) = 2^{\sqrt{n}\log n}$

Taking the logarithm (base 2) of both:

$\log f(n) = \log\left(3n^{\sqrt{n}}\right) = \log 3 + \sqrt{n}\log n$

$\log g(n) = \log\left(2^{\sqrt{n}\log n}\right) = \sqrt{n}\log n \cdot \log 2 = \sqrt{n}\log n$

Since $\log 3 + \sqrt{n}\log n > \sqrt{n}\log n$, we have $f(n) > g(n)$, so $g(n)$ takes less time.

Find the time complexity of the following function:


void test(int n) {
    if (n > 1) {
        cout << n;
        test(n - 1);
        test(n - 1);
    }
}

Solution:
Each call to test(n) makes two recursive calls on n − 1 plus a constant amount of work, so

$T(n) = T(n-1) + T(n-1) + 1 = 2T(n-1) + 1$

$T(n) = \begin{cases} 1 & n \le 1 \\ 2T(n-1) + 1 & n > 1 \end{cases}$

Expanding the recurrence gives $T(n) = 2^n - 1$, so the time complexity is $O(2^n)$.

Show that $\frac{1}{2}n^2 + 3n = \Theta(n^2)$

If $n \ge 1$:

$\frac{1}{2}n^2 + 3n \le \frac{1}{2}n^2 + 3n^2 = \frac{7}{2}n^2$

so $\frac{1}{2}n^2 + 3n = O(n^2)$.

When $n \ge 0$:

$\frac{1}{2}n^2 \le \frac{1}{2}n^2 + 3n$

so $\frac{1}{2}n^2 + 3n = \Omega(n^2)$.

Since $\frac{1}{2}n^2 + 3n = O(n^2)$ and $\frac{1}{2}n^2 + 3n = \Omega(n^2)$,

$\frac{1}{2}n^2 + 3n = \Theta(n^2)$.

Thank You
Composed by M. Abdullah
