
MODULE-IV

DYNAMIC PROGRAMMING, BACKTRACKING AND BRANCH & BOUND

The Control Abstraction - The Optimality Principle - Matrix Chain Multiplication - Analysis, All Pairs Shortest Path Algorithm - Floyd-Warshall Algorithm - Analysis. The Control Abstraction of Backtracking - The N Queens Problem. Branch and Bound Algorithm for the Travelling Salesman Problem.

DYNAMIC PROGRAMMING
 Dynamic programming is an algorithm design technique.
 It is used when the solution to a problem can be viewed as the result of a sequence of decisions.
 A collection of sub-problems is identified and solved one by one, smallest first.
 Using the answers to the small problems, solutions to larger ones are figured out, until the whole set of sub-problems is solved.
 It is used for optimization problems.
 Such algorithms examine previously solved sub-problems and combine their solutions to give the best solution for the given problem.
 Eg: Travelling from A to B during rush hour. The shortest path to a point near A is found first; from there the next point is reached by a shortest path, and eventually B is reached along a shortest route.
Steps in DP:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution
3. Compute the value of an optimal solution in bottom up fashion
4. Construct an optimal solution from computed information
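As a concrete illustration of these four steps (a minimal sketch, not taken from the notes), a bottom-up solution to the 0/1 knapsack problem, assuming profits p, weights w and capacity W:

```python
# Bottom-up 0/1 knapsack sketch (illustrative; p, w, W are assumed inputs).
def knapsack(p, w, W):
    n = len(p)
    # Steps 2-3: f[i][c] = best profit using the first i items with capacity c,
    # computed bottom-up from the recursive definition.
    f = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(W + 1):
            f[i][c] = f[i - 1][c]                        # skip item i
            if w[i - 1] <= c:                            # or take item i
                f[i][c] = max(f[i][c], f[i - 1][c - w[i - 1]] + p[i - 1])
    # Step 4: construct the optimal subset from the computed table.
    chosen, c = [], W
    for i in range(n, 0, -1):
        if f[i][c] != f[i - 1][c]:                       # item i was taken
            chosen.append(i - 1)
            c -= w[i - 1]
    return f[n][W], chosen[::-1]

print(knapsack([1, 2, 5, 6], [2, 3, 4, 5], 8))  # (8, [1, 3])
```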
EXAMPLE PROBLEMS:
 ALL PAIRS SHORTEST PATH ALGORITHM
 SINGLE SOURCE SHORTEST PATH
 MATRIX CHAIN MULTIPLICATION
 OPTIMAL BINARY SEARCH TREE
 0/1 KNAPSACK PROBLEM
 TRAVELLING SALESMAN PROBLEM
 STRING EDITING

OPTIMALITY PRINCIPLE
 The principle of optimality is the basic principle of dynamic programming.
 An optimal path has the property that, whatever the initial conditions and the control variables (choices) over some initial period, the control (or decision) variables chosen over the remaining period must be optimal for the remaining problem, with the state resulting from the early decisions taken as the initial condition.
EXAMPLE:
 Shortest path problem: find the shortest path between vertices i and j in a directed graph G.
 To find it, one must decide the sequence of vertices to be visited along the route.
 An optimal sequence of decisions results in a path of least length.
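For the shortest path example, the principle can be stated explicitly (a restatement for clarity): if a shortest path from i to j passes through an intermediate vertex k, then

    d(i, j) = d(i, k) + d(k, j)

where d(x, y) denotes the length of a shortest path from x to y. The portion of the route from k to j must itself be a shortest k-to-j path, otherwise replacing it by a shorter one would shorten the whole route.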

DYNAMIC PROGRAMMING vs DIVIDE & CONQUER


 Divide and conquer and dynamic programming are two approaches to designing algorithms.
 A divide and conquer algorithm divides the problem into independent sub-problems, solves them, and combines their solutions to find the solution to the original problem.
 Dynamic programming does not solve the sub-problems independently. It stores the answers of sub-problems so that each one is solved only once and its answer is reused whenever the same sub-problem recurs.
ALL PAIRS SHORTEST PATH ALGORITHM
The all pairs shortest path problem is to find the shortest path between all n(n-1) ordered pairs of vertices in a graph. The Floyd-Warshall algorithm solves it with a computation time of O(n³).
FLOYD WARSHALL ALGORITHM
The Floyd-Warshall algorithm finds shortest paths in a weighted graph with positive or negative edge weights, provided the graph contains no negative cycles. A single execution of the
algorithm will find the lengths (summed weights) of shortest paths between all
pairs of vertices. Although it does not return details of the paths themselves, it is
possible to reconstruct the paths with simple modifications to the algorithm. The
Floyd-Warshall algorithm is an example of dynamic programming.
The Floyd-Warshall algorithm is used to solve the all pairs shortest path problem on a given weighted graph. As a result, it generates a matrix that represents the minimum distance from every node to every other node in the graph.
The algorithm works by allowing more and more vertices to act as intermediate points on paths. Starting with the n by n matrix D0 = [dij] of direct distances, n matrices D1, D2, ..., Dn are constructed sequentially. Matrix Dk may be thought of as the matrix whose (i, j)th entry gives the length of the shortest directed path from i to j with only vertices 1, 2, ..., k allowed as intermediate vertices, computed as Dk[i][j] = min(Dk-1[i][j], Dk-1[i][k] + Dk-1[k][j]).
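A minimal Python sketch of the algorithm (illustrative, not reproduced from the notes), assuming the graph is given as an n×n matrix of direct distances with float('inf') marking missing edges:

```python
INF = float('inf')

def floyd_warshall(d):
    """d is an n x n matrix of direct distances (d[i][i] = 0, INF where there is no edge).
    Returns the matrix of shortest path lengths between all pairs of vertices."""
    n = len(d)
    dist = [row[:] for row in d]             # start from D0, work on a copy
    for k in range(n):                       # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                # D_k[i][j] = min(D_{k-1}[i][j], D_{k-1}[i][k] + D_{k-1}[k][j])
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Small illustrative graph (data chosen for illustration only):
graph = [[0, 3, INF, 7],
         [8, 0, 2, INF],
         [5, INF, 0, 1],
         [2, INF, INF, 0]]
for row in floyd_warshall(graph):
    print(row)
```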
BACKTRACKING
 Backtracking is a type of algorithm that is a refinement of brute force
search.
 In backtracking, multiple solutions can be eliminated without being
explicitly examined, by using specific properties of the problem
 It systematically searches for a solution to a problem among all available
options.
 Usually used in constraint satisfaction problems
 A solution is built up incrementally, by building candidates for solutions.
 If the newly added candidate cannot possibly be completed to a valid solution, the algorithm backtracks.
 It does so by assuming that the solutions are represented by vectors (x1, ...,
xn) of values and by traversing, in a depth first manner, the domains of
the vectors until the solutions are found.
 When invoked, the algorithm starts with an empty vector.
 At each stage it extends the partial vector with a new value.
 Upon reaching a partial vector (x1, ..., xi) which can’t represent a partial
solution, the algorithm backtracks by removing the trailing value from the
vector, and then proceeds by trying to extend the vector with alternative
values.
 Backtracking algorithms determine the problem solution by systematically searching the problem space with a tree representation.
 The goal is to choose a sequence of objects from a specified set so that the sequence satisfies some criterion.
 Backtracking is the procedure whereby, after determining that a node can lead to nothing but dead ends, we “backtrack” to the node’s parent and proceed with the search on the next child.
State space tree of Backtracking:
 Each node defines a problem state.
 All paths from the root to other nodes define the state space of the problem.
 Solution states are those problem states s for which the path from the root to s defines a tuple in the solution space.
 Answer states are those solution states s for which the path from the root to s defines a tuple that is a member of the set of solutions (i.e. it satisfies the implicit constraints).
 The tree representation of the solution space is called the state space tree.
CONTROL ABSTRACTION:
The control abstraction of backtracking can be written either as a recursive algorithm or as an iterative algorithm.
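A minimal Python sketch of the recursive control abstraction (an illustration; candidates and feasible are hypothetical place-holders for the problem-specific parts):

```python
def backtrack(x, n, candidates, feasible, report):
    """Recursive backtracking control abstraction (a sketch).
    x is the partial vector (x1, ..., xk) built so far; n is the solution length."""
    k = len(x)
    if k == n:                       # a complete vector that passed every check: an answer state
        report(list(x))
        return
    for value in candidates(x):      # possible values for the next component
        x.append(value)
        if feasible(x):              # bounding function: can x still lead to a solution?
            backtrack(x, n, candidates, feasible, report)
        x.pop()                      # remove the trailing value and try an alternative
```

For the n queens problem below, candidates would return the columns 1..n and feasible would check that no two queens share a row, column or diagonal.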
Example problems:
 N QUEENS PROBLEM
 0/1 KNAPSACK PROBLEM
 SUM OF SUBSET PROBLEM

N QUEENS PROBLEM
› Position n queens on an n×n chessboard so that no two queens threaten
each other.

› No two queens can be in the same row, column or diagonal.

› A backtracking algorithm is used.

› The 4 queens and 8 queens instances are common examples.

4 Queens problem
› 4 queens have to be placed on a 4×4 chessboard so that no two queens attack each other.

› The rows and columns are numbered from 1 to 4.

› Each queen is placed in a different row.

› Every solution is represented as a 4-tuple (x1, x2, x3, x4), where xi is the column in which the queen of the ith row is placed.

› Start finding the solution by placing Q1 in the first column.

If Q2 is then placed in column 3, no safe column remains for Q3, so the algorithm backtracks and places Q2 in the 4th column instead.


Tree organisation of 4 queens solution space
ALGORITHM:
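One possible implementation in Python (a sketch of the backtracking approach described above), using the tuple representation in which x[k] holds the column of the queen in row k:

```python
def place(x, k, col):
    # A queen may go in (row k, column col) if no earlier queen shares the
    # column or a diagonal: same diagonal <=> |column difference| == |row difference|.
    for i in range(k):
        if x[i] == col or abs(x[i] - col) == k - i:
            return False
    return True

def n_queens(n, k=0, x=None, solutions=None):
    if x is None:
        x, solutions = [0] * n, []
    for col in range(1, n + 1):              # columns numbered 1..n as in the notes
        if place(x, k, col):
            x[k] = col
            if k == n - 1:
                solutions.append(tuple(x))   # an answer state
            else:
                n_queens(n, k + 1, x, solutions)
            # backtracking happens implicitly: the next column is tried for row k
    return solutions

print(n_queens(4))   # [(2, 4, 1, 3), (3, 1, 4, 2)]
```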

BRANCH & BOUND


 Branch and bound is a general algorithm for finding optimal solutions of
various optimization problems.
 It consists of a systematic enumeration of all candidate solutions, in which large subsets of fruitless candidates are discarded by using upper and lower estimated bounds of the quantity being optimized.
 The algorithm traverses the state space tree of the solution space, typically in a breadth first manner.
 It is a state space search method in which all children of the E-node are generated before any other live node can become the E-node.
FIFO & LIFO
 Here, a BFS-like state space search in which the live nodes are maintained in a queue is called FIFO branch and bound, and a D-search-like state space search in which the live nodes are maintained in a stack is called LIFO branch and bound.
 As in the case of backtracking, bounding functions are used to avoid the generation of subtrees that do not contain an answer node.
LC (Least Cost) Method
 In both FIFO and LIFO branch and bound, the selection rule for the next E-node does not give preference to a node that has a very good chance of getting the search to an answer node quickly.
 The search can be speeded up by adding a ranking function for live nodes.
 The next E-node is selected on the basis of this ranking function.
 The rank is based on the additional computational effort needed to reach an answer node from the live node.
 For any node x, this cost could be, for example, the number of nodes in the subtree rooted at x that need to be generated before an answer node is generated.
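The three strategies differ only in the data structure that holds the live nodes, as the following Python skeleton illustrates (a sketch; children, is_answer and rank stand for problem-specific functions, and bounding is omitted):

```python
import heapq
from collections import deque

def state_space_search(root, children, is_answer, rank, strategy="LC"):
    """Skeleton of FIFO / LIFO / LC branch and bound (illustrative only)."""
    if strategy == "FIFO":                 # BFS-like: live nodes kept in a queue
        live = deque([root])
        get, put = live.popleft, live.append
    elif strategy == "LIFO":               # D-search-like: live nodes kept in a stack
        live = [root]
        get, put = live.pop, live.append
    else:                                  # LC: live nodes kept in a min-heap keyed by rank
        live = [(rank(root), id(root), root)]
        get = lambda: heapq.heappop(live)[2]
        put = lambda node: heapq.heappush(live, (rank(node), id(node), node))
    while live:
        e_node = get()                     # next E-node chosen by the selection rule
        if is_answer(e_node):
            return e_node
        for child in children(e_node):     # a bounding function would prune children here
            put(child)
    return None
```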
TRAVELLING SALESMAN PROBLEM

ALGORITHM USING BRANCH & BOUND (LCBB)


Let G = (V, E) be a directed graph defining an instance of the travelling salesperson problem.
1. The graph is represented by a cost matrix, where
Cij = the cost of edge (i, j), if there is an edge from vertex i to vertex j
Cij = ∞, if there is no such edge
2. Convert the cost matrix into a reduced matrix, i.e. every row and column should contain at least one zero entry (unless it consists entirely of ∞).
3. The cost of the reduced matrix is the sum of the values subtracted from the rows and columns of the cost matrix to reduce it.
4. Build the state space tree for the reduced matrix.
5. To find the next node to be expanded, compute the reduced cost matrix for every child node and select the node with the least cost.
6. If edge (i, j) is to be included, three operations accomplish this:
(i) Change all entries in row i and column j to ∞
(ii) Set A[j, 1] = ∞ (so the tour cannot return to the start vertex 1 prematurely)
(iii) Reduce all rows and columns of the resulting matrix, except for rows and columns consisting entirely of ∞
7. Calculate the cost of the resulting node as
cost = L + cost(i, j) + r
where L is the cost of the reduced cost matrix of the parent node (initially, the original reduced matrix), cost(i, j) is the (i, j) entry of that matrix, and r is the total amount subtracted while reducing the new matrix.
8. Repeat the above steps for all nodes until all the nodes are generated and we
get a path.
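A Python sketch of the reduction step used above (illustrative only; float('inf') plays the role of ∞, and vertices are indexed from 0, so A[j, 1] becomes a[j][0]):

```python
INF = float('inf')

def reduce_matrix(a):
    """Reduce every row and column so each contains a zero (unless it is all INF).
    Returns the reduced matrix and the total amount subtracted (the reduction cost r)."""
    n = len(a)
    a = [row[:] for row in a]
    r = 0
    for i in range(n):                                   # row reduction
        m = min(a[i])
        if 0 < m < INF:
            r += m
            a[i] = [x - m if x < INF else INF for x in a[i]]
    for j in range(n):                                   # column reduction
        m = min(a[i][j] for i in range(n))
        if 0 < m < INF:
            r += m
            for i in range(n):
                if a[i][j] < INF:
                    a[i][j] -= m
    return a, r

def include_edge(a, i, j):
    """Child matrix after deciding to travel from vertex i to vertex j:
    row i, column j and the return edge (j, start) are set to INF, then the matrix is reduced."""
    n = len(a)
    a = [row[:] for row in a]
    for k in range(n):
        a[i][k] = INF                                    # vertex i is left only once
        a[k][j] = INF                                    # vertex j is entered only once
    a[j][0] = INF                                        # do not return to the start vertex yet
    return reduce_matrix(a)

# Cost of the child node = L (parent's cost) + parent_matrix[i][j] + r (step 7 above).
```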
