
Algorithm Design Methods

PREPARED BY: ABDUL JALIL NIAZAI


Algorithm Design Methods
 When it comes to solving problems, especially in computer science, algorithms play a crucial role.
 An algorithm is a set of rules for solving a given problem efficiently. But the question is: how do we design an algorithm to solve that problem?
 A given problem can be solved with several different approaches, and some approaches deliver far more efficient results than others.
 These slides discuss the following algorithm design techniques and their applications:

 Brute Force Search
 Greedy Method
 Divide and Conquer
 Backtracking
 Randomized Algorithms
 Dynamic Programming
 Branch and Bound



Brute Force Search
 Brute force algorithms are exactly what they sound like: straightforward methods that solve a problem by relying on sheer computing power and trying every possibility, rather than on advanced techniques that improve efficiency.
 Because it examines every candidate, brute force is guaranteed to find a correct solution to the problem.
 Many day-to-day problems can be solved with the brute force strategy, for example exploring all the paths to a nearby market to find the shortest one.



Brute Force Search
 For example, imagine you have a small padlock with 4 digits, each from 0-9. You forgot your combination, but you don't want to buy another padlock. Since you can't remember any of the digits, you have to use a brute force method to open the lock.

 So you set all the numbers back to 0 and try them one by one: 0001, 0002, 0003, and so on until it opens. In the worst case it would take 10^4 = 10,000 tries to find your combination.
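A minimal Python sketch of this enumeration; the opens function is a hypothetical stand-in for physically testing the lock:

# Brute force: try every 4-digit combination in order.
def crack(opens):
    for code in range(10_000):            # 10^4 possibilities
        guess = f"{code:04d}"             # "0000", "0001", ..., "9999"
        if opens(guess):                  # hypothetical test of the lock
            return guess
    return None

print(crack(lambda guess: guess == "0427"))   # prints "0427"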

 For brute force string searching, the time complexity is O(n*m): if we need to search for a string of "n" characters inside a string of "m" characters, it can take on the order of n*m comparisons.
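A minimal sketch of this naive string search:

# Naive string search: slide the pattern over the text one position
# at a time and compare character by character -- O(n*m) in the worst case.
def brute_force_search(text, pattern):
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):
        if text[i:i + m] == pattern:   # up to m comparisons per shift
            return i                   # index of the first match
    return -1

print(brute_force_search("hello world", "world"))   # 6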



Brute Force Search
 Pros of Brute Force:
 It guarantees finding a correct solution to the problem.
 It is applicable to a wide range of domains.
 It is well suited to solving small and simple problems.
 It can serve as a comparison benchmark when evaluating other algorithms for the same problem.

 Cons of Brute Force:
 It relies on the raw power of a computer system rather than on good algorithm design.
 Brute force algorithms are slow.
 Brute force algorithms are not constructive or creative compared to algorithms built with other design paradigms.



Brute Force Search
 Conclusion:
Brute force is a technique that guarantees a solution for problems in any domain, helps in solving simpler problems, and provides a solution that can serve as a benchmark for evaluating other design techniques; however, it takes a lot of run time and is inefficient.



Divide and Conquer
Divide and Conquer is an algorithmic paradigm that solves a problem using a Divide, Conquer, and Combine strategy.
How Do Divide and Conquer Algorithms Work?
Here are the steps involved:
 Divide: Break the given problem into sub-problems, typically using recursion.
 Conquer: Solve the smaller sub-problems recursively. If a sub-problem is small enough, solve it directly.
 Combine: Combine the solutions of the sub-problems to obtain the solution of the original problem.



Divide and Conquer
Example of the divide and conquer approach:
 Merge Sort Algorithm, sketched below.
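A minimal merge sort sketch illustrating the three steps: splitting the array is the divide step, the recursive calls conquer, and merging the sorted halves combines:

# Merge sort: a classic divide-and-conquer algorithm.
def merge_sort(arr):
    if len(arr) <= 1:                 # base case: already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])      # divide + conquer the left half
    right = merge_sort(arr[mid:])     # divide + conquer the right half
    merged = []                       # combine the two sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))   # [1, 2, 5, 7, 9]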



Divide and Conquer
Standard algorithms that follow the Divide and Conquer paradigm:
 Binary Search
 Quicksort
 Merge Sort
 Closest Pair of Points
 Strassen's Algorithm
 Cooley–Tukey Fast Fourier Transform (FFT) algorithm
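As an illustration of the first item, a minimal recursive binary search sketch; each call halves the sorted search range, and no combine step is needed:

# Binary search: divide the range in half, conquer one half recursively.
def binary_search(arr, target, lo=0, hi=None):
    if hi is None:
        hi = len(arr) - 1
    if lo > hi:
        return -1                     # target not present
    mid = (lo + hi) // 2
    if arr[mid] == target:
        return mid
    if arr[mid] < target:
        return binary_search(arr, target, mid + 1, hi)
    return binary_search(arr, target, lo, mid - 1)

print(binary_search([2, 5, 8, 12, 16], 12))   # 3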



Greedy Method
 The greedy method is one of the strategies, like Divide and Conquer, used to solve problems.
 This method is used for solving optimization problems.
 An optimization problem is a problem that demands either a maximum or a minimum result. Let's understand it through some terms.
 The greedy method is the simplest and most straightforward approach.
 It is not a single algorithm, but a technique.
 The defining feature of this approach is that each decision is taken on the basis of the currently available information.
 Whatever information is currently present, the decision is made without worrying about the effect of that decision on the future.



Greedy Method
 This technique is used to determine a feasible solution that may or may not be optimal.
 A feasible solution is one that satisfies the given criteria; the optimal solution is the best, most favorable solution in that set.
 If more than one solution satisfies the given criteria, all of them are considered feasible, whereas the optimal solution is the best solution among all the feasible ones.
Characteristics of the Greedy Method

 To construct the solution, this technique maintains two sets: one containing the chosen items and another containing the rejected items.
 A greedy algorithm makes locally good choices in the hope that the resulting solution is feasible, or even optimal.



Greedy Method
 Applications of the Greedy Method:
 Knapsack Problem (the fractional variant)
 Minimum Spanning Tree
 Job Scheduling Problem
 Prim's Minimal Spanning Tree Algorithm
 Kruskal's Minimal Spanning Tree Algorithm
 Dijkstra's Shortest Path Algorithm
 Huffman Coding
 Ford-Fulkerson Algorithm
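As a sketch of the first item, the greedy strategy for the fractional knapsack problem (assuming items may be taken in fractions) repeatedly takes the item with the best value-to-weight ratio:

# Greedy fractional knapsack: pick items in decreasing value/weight ratio.
def fractional_knapsack(items, capacity):
    # items: list of (value, weight) pairs
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)        # the greedy local choice
        total += value * (take / weight)
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))   # 240.0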



Backtracking
 A backtracking algorithm is a problem-solving algorithm that uses a brute force approach to find the desired output.
 The term backtracking suggests that if the current partial solution is not suitable, we backtrack and try other options. Recursion is therefore used in this approach.
 It uses recursive calls to build a solution step by step, one level at a time.
 To find these solutions, a search tree called the state-space tree is used: each branch corresponds to a choice for a variable, and each path from the root toward a leaf represents a candidate solution.



Backtracking
State Space Tree

 A state-space tree is a tree representing all the possible states (solution or non-solution) of the problem, from the root as the initial state to the leaves as terminal states.

Backtracking Algorithm

Backtrack(x)
    if x is not a solution
        return false                  // dead end: backtrack
    if x is a new solution
        add x to the list of solutions
    Backtrack(expand x)               // extend x and continue the search



Backtracking
Example of Backtracking

 Problem: You want to find all the possible ways of arranging B1, B2, and G on 3 benches. Constraint: G should not sit on the middle bench.

Solution: There are 3! = 6 possibilities in total. We recursively try all of them and keep those that satisfy the constraint.

All the possibilities are:
 B1, B2, G
 B1, G, B2 (rejected: G is on the middle bench)
 B2, B1, G
 B2, G, B1 (rejected: G is on the middle bench)
 G, B1, B2
 G, B2, B1
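A minimal recursive sketch of this search in Python; the constraint check prunes any partial arrangement that places G on the middle bench before it is extended further:

# Backtracking: place people one bench at a time, pruning invalid branches.
def arrange(benches, remaining, solutions):
    if len(benches) >= 2 and benches[1] == "G":   # constraint violated: prune
        return
    if not remaining:                             # all three placed: a solution
        solutions.append(tuple(benches))
        return
    for person in sorted(remaining):
        benches.append(person)                    # choose
        arrange(benches, remaining - {person}, solutions)
        benches.pop()                             # undo the choice (backtrack)

found = []
arrange([], {"B1", "B2", "G"}, found)
print(found)   # the 4 valid arrangements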



Backtracking
Example of Backtracking

 The following state-space tree shows the possible solutions.
[Figure: state tree with all the solutions]



Backtracking
Applications of Backtracking

 To find all Hamiltonian paths present in a graph.
 To solve the N-Queens problem.
 To solve maze problems.
 To solve the Knight's Tour problem.



Randomized Algorithms
 A randomized algorithm is a technique that uses a source of randomness as part of its logic.
 It is typically used to reduce either the running time (time complexity) or the memory used (space complexity) of a standard algorithm.
 The algorithm works by generating a random number, r, within a specified range and making decisions based on r's value.
 A randomized algorithm can help in a situation of doubt, much like flipping a coin or drawing a card from a deck in order to make a decision.
 Similarly, this kind of algorithm can speed up a brute force process by randomly sampling the input in order to obtain a solution that may not be totally optimal, but will be good enough for the specified purposes.
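As a small illustration, a sketch of randomized quicksort (listed among the applications on the next slide), where the random decision is the choice of pivot; picking it uniformly at random means no single input can reliably force the worst case:

import random

# Randomized quicksort: the pivot is the random number r in disguise.
def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)         # the random decision
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([9, 3, 7, 1, 8, 2]))   # [1, 2, 3, 7, 8, 9]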

PREPARED BY: ABDUL JALIL NIAZAI


Randomized Algorithms
Applications of Randomized Algorithms

 Randomized Quicksort
 Atlantic City Algorithms
 Las Vegas Algorithms
 Computational Complexity
 π Approximation
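As an illustration of the last item, a minimal Monte Carlo sketch that approximates π by sampling random points in the unit square and counting the fraction that land inside the quarter circle:

import random

# Monte Carlo estimate of pi: area of quarter circle / area of square = pi/4.
def estimate_pi(samples):
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:       # point falls inside the quarter circle
            inside += 1
    return 4.0 * inside / samples

print(estimate_pi(100_000))   # roughly 3.14; varies from run to run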



Dynamic Programming
 Dynamic Programming is a technique in computer programming that efficiently solves a class of problems that have the properties of
 optimal substructure and
 overlapping sub-problems.

 If a problem can be divided into sub-problems, which in turn are divided into smaller sub-problems, and if these sub-problems overlap, then the solutions to the sub-problems can be saved for future reference.
 In this way, CPU work is saved. This method of solving a problem is referred to as dynamic programming.
 Such problems otherwise involve repeatedly calculating the value of the same sub-problems to find the optimum solution.



Dynamic Programming
Dynamic Programming Example

 Let's find the Fibonacci sequence up to the 5th term. A Fibonacci series is a sequence of numbers in which each number is the sum of the two preceding ones.
 For example: 0, 1, 1, 2, 3. Here, each number is the sum of the two preceding numbers.

Let n be the term index, starting from 0.

1. If n <= 1, return n.
2. Else, return the sum of the two preceding terms.



Dynamic Programming
Dynamic Programming Example

We are calculating the Fibonacci sequence up to the 5th term.

 The first term is 0.
 The second term is 1.
 The third term is the sum of 0 (from step 1) and 1 (from step 2), which is 1.
 The fourth term is the sum of the third term (from step 3) and the second term (from step 2), i.e. 1 + 1 = 2.
 The fifth term is the sum of the fourth term (from step 4) and the third term (from step 3), i.e. 2 + 1 = 3.
Hence, we have the sequence 0, 1, 1, 2, 3. Here, we have reused the results of the previous steps, as shown above. This is called the dynamic programming approach.



Dynamic Programming
How Does Dynamic Programming Work?

 Dynamic programming works by storing the results of sub-problems so that when their solutions are required, they are at hand and we do not need to recalculate them.
 This technique of storing the values of sub-problems is called memoization.
 By saving the values (in an array or map), we save the time of recomputing sub-problems we have already come across.

# Memoized Fibonacci in Python: store each solved sub-problem in a map.
memo = {0: 0, 1: 1}              # base cases
def fib(n):
    if n not in memo:            # compute each sub-problem only once
        memo[n] = fib(n - 1) + fib(n - 2)
    return memo[n]
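Calling fib(4) returns 3, the fifth term computed above. For comparison, a bottom-up (tabulation) variant fills a table from the smallest sub-problem upward instead of recursing:

def fib_bottom_up(n):
    if n <= 1:
        return n
    table = [0, 1]                                 # smallest sub-problems
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])  # reuse stored results
    return table[n]

print(fib_bottom_up(4))   # 3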



Dynamic Programming
Some applications of dynamic programming:

 Longest Common Subsequence
 Finding the Shortest Path
 AI and machine learning (graph theory)
 Floyd-Warshall Algorithm



Branch and Bound

Assignment

