GREEDY METHOD

General characteristics:
1. Problems will have n inputs.
2. From the input set, find a subset that satisfies some constraints. This subset is called a feasible solution.
3. A feasible solution that maximizes or minimizes a given objective function is called an optimal solution.
4. A greedy algorithm works in stages, considering one input at a time.
5. At each stage, a decision is made as to whether the input under consideration can yield an optimal solution. If so, the input is accepted; otherwise it is rejected.
6. Inputs are considered by a selection procedure, which is based on some optimization measure. This measure may be the objective function itself.
7. If the inclusion of the next input into the partially constructed optimal solution would result in an infeasible solution, the input is not added to the partial solution; otherwise, it is added.

Examples:
1. Knapsack problem
2. Tree vertex splitting
3. Job sequencing with deadlines
4. Minimum spanning tree (MST)
5. Single-source shortest paths
6. Optimal storage on tapes
7. Optimal merge patterns

Control abstraction:

Algorithm Greedy(a, n)
{
    solution := ø;
    for i := 1 to n do
    {
        x := Select(a);
        if Feasible(solution, x) then
            solution := Union(solution, x);
    }
    return solution;
}

KNAPSACK PROBLEM

Assume there are n objects and a knapsack (bag) of capacity m. Object i has weight Wi and an associated profit Pi. Let Xi be the fraction of object i placed in the knapsack; if a fraction Xi of object i is placed, a profit PiXi is earned. The objective is to fill the knapsack so as to maximize the total profit.

Maximize    Σ (i=1 to n) PiXi                ---- (1)
Subject to  Σ (i=1 to n) WiXi <= m           ---- (2)
            0 <= Xi <= 1
where the profits and weights are positive numbers.

A feasible solution is any set (X1, ..., Xn) satisfying constraint (2).

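The greedy strategy for this fractional knapsack problem can be sketched as runnable code. This is a minimal Python sketch (the function name and the sample instance are illustrative, not from the notes):

```python
def greedy_knapsack(profits, weights, capacity):
    """Fractional knapsack: take objects in decreasing profit/weight order."""
    n = len(profits)
    # Selection procedure: order objects so that P[i]/W[i] >= P[i+1]/W[i+1].
    order = sorted(range(n), key=lambda i: profits[i] / weights[i], reverse=True)
    x = [0.0] * n          # x[i] = fraction of object i taken
    remaining = capacity
    for i in order:
        if weights[i] > remaining:
            # take only a fraction of the first object that does not fit
            x[i] = remaining / weights[i]
            break
        x[i] = 1.0
        remaining -= weights[i]
    return x, sum(p * xi for p, xi in zip(profits, x))

# Classic instance: n = 3, m = 20, P = (25, 24, 15), W = (18, 15, 10)
x, profit = greedy_knapsack([25, 24, 15], [18, 15, 10], 20)
print(x, profit)   # → [0.0, 1.0, 0.5] 31.5
```

The profit/weight ordering is exactly the selection procedure stated for the greedy knapsack algorithm; sorting makes the overall cost O(n log n).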
An optimal solution is a feasible solution for which equation (1) is maximized.

Algorithm GreedyKnapsack(m, n)
// m = capacity of the bag, n = number of objects
// W[1..n] = weights of the objects, P[1..n] = profits of the objects
{
    for i := 1 to n do X[i] := 0.0;
    U := m;
    for i := 1 to n do
    {
        if (W[i] > U) then break;
        X[i] := 1.0;
        U := U - W[i];
    }
    if (i <= n) then X[i] := U / W[i];
}

Selection procedure: objects are ordered such that P[i]/W[i] >= P[i+1]/W[i+1].

MINIMUM SPANNING TREE

Given a connected, undirected, weighted graph G, a spanning tree of G is a subgraph G' ⊆ G which is a tree and connects all the vertices together, i.e., V(G') = V(G). A graph can have many different spanning trees; if weights are assigned to the edges, a minimum spanning tree (MST) is one with the lowest total cost.

APPLICATIONS OF MST
1. Electric networks
2. Link/communication problems (e.g., connecting all cities with minimum cost)

PRIM'S ALGORITHM

Consider a weighted connected graph G with n vertices. Prim's algorithm finds a minimum spanning tree of G. We begin with a minimum-weight edge (u, v) in G = (V, E). Then, in each iteration, we choose a minimum-weight edge (u, v) connecting a vertex v in the tree T to a vertex u outside of T, and vertex u is added to T. This process is repeated until an MST is formed. Prim's algorithm has the property that the edges in T always form a single tree; in each iteration only safe edges are added (i.e., edges which will not form a cycle).

Algorithm:
Procedure Prim(G : weighted connected graph with n vertices)
{
    T := a minimum-weight edge;
    for i := 1 to n-2 do
    {

        e := an edge of minimum weight incident to a vertex in T and not forming a cycle in T if added;
        T := T with e added;
    }
    return T;
}

ANALYSIS
The minimum edge is selected by storing the edges in a priority queue. The priority queue can be implemented as:
1. A binary heap
2. A cost adjacency matrix: O(V^2)
3. A Fibonacci heap: O(V log V + E)

Binary heap implementation: construction of the heap takes O(V), and extracting the minimum takes O(log V). The for loop executes at most n-2+1 times, i.e., O(V) times, so edge selection takes O(V log V) in total. Deleting the chosen edge (v, w) from E and updating the MST are also done through the heap, each taking O(log V), over O(E) edge examinations. Therefore the total time for Prim's algorithm using a binary heap is O(V log V + E log V).

KRUSKAL'S ALGORITHM

Consider a weighted connected graph G with n vertices. Kruskal's algorithm finds a minimum spanning tree of G. Edges are considered in order of increasing cost, and in each iteration only safe edges are added, that is, edges which will not form a cycle. (A spanning tree has exactly E = n-1 edges.)

Algorithm:
{
    T := ø;
    while (T has fewer than n-1 edges) and (E ≠ ø) do
    {
        choose an edge (v, w) from E of lowest cost;
        delete (v, w) from E;
        if (v, w) does not create a cycle in T then
            add (v, w) to T;
        else
            discard (v, w);
    }
}

Explanation:

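Both MST algorithms above can be sketched in Python. This is a minimal illustration (the adjacency-list and edge-list encodings are my own choice, not from the notes); Kruskal's cycle test is done with a union-find structure:

```python
import heapq

def prim(n, adj):
    """Prim's MST total cost on vertices 0..n-1; adj[u] = list of (weight, v)."""
    visited = [False] * n
    heap = [(0, 0)]          # start from vertex 0 via a zero-cost pseudo-edge
    total, in_tree = 0, 0
    while heap and in_tree < n:
        w, u = heapq.heappop(heap)
        if visited[u]:
            continue          # skip edges into already-connected vertices
        visited[u] = True
        in_tree += 1
        total += w
        for weight, v in adj[u]:
            if not visited[v]:
                heapq.heappush(heap, (weight, v))
    return total

def kruskal(n, edges):
    """Kruskal's MST total cost; edges = list of (weight, u, v)."""
    parent = list(range(n))
    def find(x):                        # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    total = 0
    for w, u, v in sorted(edges):       # consider edges in increasing cost
        ru, rv = find(u), find(v)
        if ru != rv:                    # safe edge: does not form a cycle
            parent[ru] = rv
            total += w
    return total
```

On a triangle with edge costs 1, 2, 3, both return the MST cost 3 (the two cheapest edges).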
Kruskal's algorithm has the following properties at each iteration:
1. Edges with minimum cost are considered first.
2. Only safe edges are added, i.e., edges which will not form a cycle.
3. Sometimes T (the MST under construction) is not a tree but a forest. At the end of the iterations the forest becomes connected and a single MST is formed.

ANALYSIS
The minimum-weight edge is selected by storing the edges and vertices in a priority queue, which can be implemented as:
1. A binary heap
2. A cost adjacency matrix

Binary heap: outside the while loop, the heap is constructed for the n vertices, which takes O(n), i.e., O(V), time. Inside the while loop, two computations are carried out:
a. Retrieval from the heap, which is O(log n), i.e., O(log V).
b. Update of the MST, which is O(log n), i.e., O(log V).
The while loop executes O(E) times, so its cost is O(E log V + E log V). The total running time is therefore O(V + E log V + E log V), i.e., O(E log V).

SINGLE-SOURCE SHORTEST PATHS

Definition: given a directed weighted graph G = (V, E) with a cost weighting function and a source vertex V0, the problem is to determine the shortest paths from V0 to all the remaining vertices of G. It is assumed that the weights are positive.
Algorithm: the greedy algorithm for this problem is Dijkstra's algorithm.

DYNAMIC PROGRAMMING

Dynamic programming is an algorithmic approach that can be used when the solution to a problem can be viewed as the result of a sequence of decisions. In dynamic programming, an optimum sequence of decisions is obtained by using the principle of optimality.

Definition (principle of optimality): an optimal sequence of decisions has the property that whatever the initial state and decisions are, the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision.

GREEDY METHOD vs DYNAMIC PROGRAMMING
1. Greedy method: only one decision sequence is generated. Dynamic programming: many decision sequences may be generated.
2. Greedy method: used to solve selection problems (e.g., knapsack), which are solved easily compared to permutation problems. Dynamic programming: solves selection problems (e.g., OBST) as well as permutation problems (e.g., TSP).
3. Dynamic programming: optimal solutions to the subproblems are retained so that recomputation is avoided.
4. DP algorithms often have a polynomial complexity.

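Dijkstra's algorithm, named above as the greedy solution to the single-source shortest-path problem, can be sketched as follows (a minimal sketch assuming non-negative weights, as the notes require; the graph encoding is my own):

```python
import heapq

def dijkstra(n, adj, source):
    """Shortest distances from source on vertices 0..n-1; adj[u] = list of (cost, v)."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)      # greedy choice: closest unsettled vertex
        if d > dist[u]:
            continue                    # stale queue entry, already improved
        for cost, v in adj[u]:
            if d + cost < dist[v]:      # relax edge (u, v)
                dist[v] = d + cost
                heapq.heappush(heap, (dist[v], v))
    return dist
```

With a binary heap this runs in O(E log V), mirroring the priority-queue analyses above.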
Characteristics:
1. The problem can be divided into stages, with a decision required at each stage.
2. Each stage has a number of states associated with it.
3. A decision at one stage transforms one state into a state in the next stage.
4. Given the current state, the optimal decision for each of the remaining states does not depend on the previous states or decisions.
5. There exists a recursive relationship that identifies the optimal decision for stage j, given that stage j+1 has already been solved.
6. Optimal solutions to the subproblems are retained so that recomputation is avoided.

Examples:
1. Multistage graphs
2. All-pairs shortest paths
3. Single-source shortest paths
4. Optimal binary search trees (OBST) => a selection problem
5. Travelling salesperson problem (TSP) => a permutation problem

OBST

Binary search tree: a BST is a binary tree. It may be empty; if it is not empty, it satisfies the following properties:
1. Every element has a key and no two elements have the same key.
2. The keys in the left subtree are smaller than the key in the root.
3. The keys in the right subtree are larger than the key in the root.
4. The left and right subtrees are also BSTs.

In the worst case, TREE 1 takes four comparisons to find an identifier whereas TREE 2 takes three comparisons.

In the average case, TREE 1 takes 12/5 comparisons whereas TREE 2 takes (1+2+2+3+3)/5 = 11/5 comparisons, assuming each identifier is searched for with equal probability and no unsuccessful search is made.

But in general, each identifier is searched for with a different probability, and we can also expect unsuccessful searches. In this case, the following assumptions are made:
1. Let the identifiers be denoted {a1, a2, ..., an}, such that a1 < a2 < ... < an.
2. P(i) is the probability of searching for ai, where 1 <= i <= n.
3. q(i) is the probability of an unsuccessful search, i.e., x is not present and ai < x < ai+1, where 0 <= i <= n (x can be less than a1, hence the minimum value of i is 0).
4. A BST contains n internal nodes and n+1 external nodes. A successful search terminates at an internal node; an unsuccessful search terminates at an external node.
5. The identifiers not in the BST can be partitioned into n+1 classes Ei, 0 <= i <= n: E0 contains all identifiers x such that x < a1, Ei contains all identifiers x such that ai < x < ai+1, and En contains all identifiers x such that x > an.
6. If a successful search (x = ai) terminates at an internal node, its cost contribution is p(i) · level(ai).
7. If an unsuccessful search terminates at an external node, its cost contribution is q(i) · (level(Ei) − 1).

Expected cost of a BST = Σ (1<=i<=n) p(i) · level(ai) + Σ (0<=i<=n) q(i) · (level(Ei) − 1); an optimal BST is one that minimizes this expected cost.

DYNAMIC PROGRAMMING APPROACH

The dynamic programming approach to the OBST problem is to make a decision as to which of the ai's should be assigned as the root node of the tree. Suppose we select ak as the root node. Then the internal nodes a1, a2, ..., ak-1 and the external nodes E0, E1, ..., Ek-1 will be in the left subtree (l); the remaining nodes will be in the right subtree (r).

C(i, j) represents the cost of an OBST over ai+1, ..., aj:
C(i, j) = min (i < k <= j) { C(i, k-1) + C(k, j) } + w(i, j)     ---- (1)
w(i, j) = p(j) + q(j) + w(i, j-1)
C(i, i) = 0;   w(i, i) = q(i);   r(i, i) = 0
r(i, j) = k, the value of k which minimizes equation (1); r(i, j) is also recorded.

C(0, n) can be solved by first computing all C(i, j) such that j − i = 1, then all C(i, j) such that j − i = 2, and so on until j − i = n.

Analysis:
1. Computing one C(i, j) is O(n), since the minimum ranges over up to n values of k; w(i, j) and r(i, j) are computed alongside at no extra asymptotic cost.
2. There are O(n · n) = O(n^2) pairs (i, j), computed in the order j − i = 1, 2, ..., n.
Therefore the total complexity is O(n^2 · n) = O(n^3).

TRAVELLING SALESPERSON PROBLEM (TSP)

TSP is a permutation problem: there are n! permutations of n objects, whereas selection problems have only 2^n possible selections (n! > 2^n), so permutation problems are harder to solve than selection problems.

Problem definition: let G = (V, E) be a directed graph with edge costs Cij, where Cij > 0 for all i and j, and Cij = ∞ if (i, j) ∉ E. Let |V| = n.

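Returning to the OBST recurrences above: they can be implemented bottom-up over increasing j − i. A Python sketch (the index conventions follow the notes, with p[1..n] and q[0..n]; p[0] is unused padding, and the sample values below are a standard textbook instance with probabilities scaled by 16):

```python
def obst(p, q):
    """Cost of an optimal BST and the root table r.
    p[1..n] = success probabilities, q[0..n] = failure probabilities."""
    n = len(p) - 1
    w = [[0.0] * (n + 1) for _ in range(n + 1)]
    c = [[0.0] * (n + 1) for _ in range(n + 1)]
    r = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]              # w(i,i) = q(i); C(i,i) = 0; r(i,i) = 0
    for d in range(1, n + 1):       # solve in the order j - i = 1, 2, ..., n
        for i in range(n - d + 1):
            j = i + d
            w[i][j] = p[j] + q[j] + w[i][j - 1]
            # C(i,j) = min over i < k <= j of C(i,k-1) + C(k,j), plus w(i,j)
            best_k = min(range(i + 1, j + 1),
                         key=lambda k: c[i][k - 1] + c[k][j])
            c[i][j] = c[i][best_k - 1] + c[best_k][j] + w[i][j]
            r[i][j] = best_k
    return c[0][n], r

cost, r = obst([0, 3, 3, 1, 1], [2, 3, 1, 1, 1])
print(cost, r[0][4])   # cost 32 (scaled), root a2
```

The triple loop is the O(n^3) computation described in the analysis; r[i][j] lets the optimal tree itself be reconstructed recursively from r[0][n].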
Problem description: a tour of G is a directed simple cycle that includes every vertex in V. Every tour starts and ends at vertex 1, and each vertex is visited exactly once. The cost of a tour is the sum of the costs of the edges on the tour; the travelling salesperson problem is to find a tour of minimum cost.

Using the dynamic programming approach, let G(i, S) be the length of a shortest path starting at vertex i, going through all vertices in S, and terminating at vertex 1. Then:
G(i, S) = min (j ∈ S) { Cij + G(j, S − {j}) }
G(i, ø) = Ci1
The equation is solved for |S| = 1, |S| = 2, ..., |S| = n − 1, finishing with S = V − {1}.

Analysis: let N be the number of G(i, S) values that have to be computed. Then N = (n − 1) · 2^(n−2), and an algorithm can be developed with complexity O(n^2 · 2^n).

BACKTRACKING (GENERAL METHOD)

Characteristics:
1. Problems which deal with searching for a set of solutions satisfying some constraints can be solved using backtracking.
2. In the backtracking method, the desired solution is expressed as an n-tuple (x1, x2, ..., xn), where each xi is chosen from a solution set Si. Let mi be the size of Si.
3. Such problems must satisfy some constraints, which are classified into two categories: explicit and implicit. Explicit constraints are rules that restrict the value of each xi. Implicit constraints are rules that determine which of the tuples in the solution space satisfy the criterion function.
4. The solution should satisfy a bounding (criterion) function P(x1, x2, ..., xn).
5. In the backtracking method, the partial solution (x1, x2, ..., xi) is evaluated as it is built. If the partial solution can in no way lead to an optimum solution, the remaining components (xi+1, ..., xn) can be ignored; otherwise the partial solution is extended.

There are two methods to find the solution:
- Brute force approach
- Backtracking method
In the brute force approach, all the xi's are generated and only then is the tuple (x1, x2, ..., xn) checked for optimality; if the solution space contains m = m1 · m2 ··· mn tuples, we need m trials. In the backtracking approach, partial solutions that cannot lead to an optimum are abandoned early, so we need fewer than m trials.

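The recurrence G(i, S) above maps directly onto a bitmask implementation over subsets of vertices. A Python sketch (vertex 1 of the notes is vertex 0 here; the sample cost matrix is illustrative):

```python
from functools import lru_cache

def tsp(cost):
    """Dynamic programming over the recurrence
    G(i, S) = min over j in S of C[i][j] + G(j, S - {j}), with G(i, ø) = C[i][0].
    Vertices are 0..n-1; the tour starts and ends at vertex 0."""
    n = len(cost)

    @lru_cache(maxsize=None)
    def g(i, s):
        # s is a bitmask of the vertices still to be visited
        if s == 0:
            return cost[i][0]           # G(i, ø) = cost back to the start
        return min(cost[i][j] + g(j, s & ~(1 << j))
                   for j in range(n) if s & (1 << j))

    full = (1 << n) - 2                 # all vertices except vertex 0
    return g(0, full)

cost = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(tsp(cost))   # → 80
```

There are O(n · 2^n) distinct (i, S) states and each takes O(n) to evaluate, matching the O(n^2 · 2^n) bound stated above.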
Example: for the 8-queens problem, the explicit constraints are that each Xi takes a value from 1 to 8; the implicit constraints are that no two Xi's can be the same, i.e., all queens must be on different rows, columns, and diagonals.

Some of the problems that can be solved by the backtracking approach:
à8 Queens
àSum of Subsets
àHamiltonian Cycles
àGraph Coloring

Terminology regarding the tree organization of a solution space:
1. Each node in the tree defines a problem state.
2. All paths from the root to other nodes define the state space of the problem.
3. A solution state is a path from the root to a node that defines a solution.
4. An answer state is a solution state that satisfies the implicit constraints.
5. The tree organization of a solution space is called a state space tree.
6. A node which has been generated, but not all of whose children have yet been generated, is called a live node.
7. A live node whose children are currently being generated is called an E-node.
8. A dead node is one which is not to be generated or expanded further.

There are two types of state space trees:
àStatic trees: independent of the problem instance.
àDynamic trees: dependent on the problem instance.

Backtracking algorithms determine solutions by searching the solution space; the search uses a tree organization of the solution space. Two search methods are used in a state space tree:
àDepth-first node generation with a bounding function, which is called backtracking.
àBreadth-first node generation, which is called the branch-and-bound technique.

General method: let (X1, X2, ..., Xi) be a path from the root to a node in the state space tree, and let T(X1, X2, ..., Xi) be the set of all possible values for Xi+1 such that (X1, X2, ..., Xi+1) is also a path to a problem state. Let Bk be the bounding function.

Algorithm Backtracking(k)
{
    for each X[k] ∈ T(X[1], X[2], ..., X[k-1]) do
    {
        if (Bk(X[1], X[2], ..., X[k]) ≠ 0) then
        {

            if ((X[1], X[2], ..., X[k]) is a path to an answer node) then
                write(X[1:k]);
            if (k < n) then
                Backtracking(k+1);
        }
    }
}

The solution vector (X1, X2, ..., Xn) is treated as a global array. All possible elements for the k-th position of the tuple that satisfy Bk are generated one by one and adjoined to the current vector (X1, ..., Xk-1). Each time an Xk is attached, a check is made to determine whether a solution has been found, and the algorithm is invoked recursively.

The efficiency of a backtracking algorithm depends on four factors:
1. The time to generate the next Xk.
2. The number of Xk satisfying the explicit constraints.
3. The time for the bounding function Bk.
4. The number of Xk satisfying Bk.

àBounding functions are regarded as good if they reduce the number of nodes in the state space tree; there is a tradeoff in that good bounding functions take more time to evaluate.
àThe number of nodes generated can be estimated by a Monte Carlo estimator algorithm.
àSearching in a state space tree can be improved by the principle of rearrangement.

Tree 1 (figure)

Tree 2 (figure)

In tree 1, if a node at level 2 is removed, then 12 possible solutions are discarded. In tree 2, if a node at level 2 is removed, then 8 possible solutions are discarded.

EIGHT QUEENS PROBLEM

It is a combinatorial problem: place eight queens on an 8 x 8 chessboard so that no two "attack", i.e., no two queens are on the same row, column, or diagonal. Number the rows and columns of the chessboard 1 through 8; the queens are also numbered 1 to 8. Since every queen must be on a different row, queen i is placed on row i, and a solution is represented as (X1, ..., X8), where Xi is the column on which queen i is placed.

Explicit constraint: the value of each Xi varies from 1 to 8.
Implicit constraint: no two Xi's can be the same, and no two queens can be on the same diagonal. These constraints reduce the solution space from 8^8 to 8! tuples.

By representing the solution as (X1, ..., X8), where Xi is the column on which queen i is placed, it is easy to check that no two queens are in the same row or column. To check the diagonal positions, imagine the chessboard as a 2D array:
- A diagonal that runs from upper left to lower right has a constant row − column value.
- A diagonal that runs from upper right to lower left has a constant row + column value.
Suppose two queens are placed at positions (i, j) and (k, l). They are on the same diagonal only if i − j = k − l or i + j = k + l, i.e., j − l = i − k or j − l = k − i. Therefore two queens lie on the same diagonal iff |j − l| = |i − k|.

Algorithm Place(k, i)
// returns a Boolean value that is true if the k-th queen can be placed in column i
{
    for j := 1 to k-1 do
        if ((X[j] = i) or (abs(X[j] - i) = abs(j - k))) then
            return false;
    return true;
}

Algorithm NQueen(k, n)
// invoking NQueen(1, n) gives all complete solutions
{
    for i := 1 to n do
    {
        if Place(k, i) then
        {
            X[k] := i;
            if (k = n) then write(X[1:n]);
            else NQueen(k+1, n);
        }
    }
}

The total number of nodes generated in a state space tree for the n-queens problem is

    1 + Σ (j=0 to n-1) Π (i=0 to j) (n − i)

The total number of nodes generated in the 8-queens state space tree is

    1 + Σ (j=0 to 7) Π (i=0 to j) (8 − i)
    = 1 + 8 + 8·7 + 8·7·6 + 8·7·6·5 + 8·7·6·5·4 + 8·7·6·5·4·3 + 8·7·6·5·4·3·2 + 8·7·6·5·4·3·2·1
    = 1 + 8 + 56 + 336 + 1680 + 6720 + 20160 + 40320 + 40320
    = 109601 nodes
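The Place/NQueen pseudocode above translates directly into Python (0-based indices here; a minimal sketch):

```python
def solve_n_queens(n):
    """Backtracking n-queens following the Place/NQueen scheme above.
    x[k] = column of the queen on row k."""
    solutions = []
    x = [0] * n

    def place(k, i):
        # queen k may go in column i iff no earlier queen shares the
        # column or a diagonal: |x[j] - i| == |j - k|
        return all(x[j] != i and abs(x[j] - i) != abs(j - k) for j in range(k))

    def nqueen(k):
        for i in range(n):
            if place(k, i):
                x[k] = i
                if k == n - 1:
                    solutions.append(x[:])   # complete answer tuple
                else:
                    nqueen(k + 1)

    nqueen(0)
    return solutions

print(len(solve_n_queens(8)))   # → 92
```

The bounding function prunes the vast majority of the 109601 potential nodes; only 92 answer nodes exist for n = 8.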


Branch and Bound

Terminology

Definition 1 Live node is a node that has been generated but whose children have not yet been generated.

Definition 2 E-node is a live node whose children are currently being explored. In other words, an E-node is a node currently being expanded.

Definition 3 Dead node is a generated node that is not to be expanded or explored any further. All children of a dead node have already been expanded.

Definition 4 Branch-and-bound refers to all state space search methods in which all children of an E-node are generated before any other live node can become the E-node.

• Used for state space search
– In BFS, exploration of a new node cannot begin until the node currently being explored is fully explored

General method
• Both BFS and DFS generalize to branch-and-bound strategies
– BFS is an FIFO search in terms of live nodes, where the list of live nodes is stored in a queue
– DFS is an LIFO search in terms of live nodes, where the list of live nodes is stored in a stack
• Just like backtracking, we will use bounding functions to avoid generating subtrees that do not contain an answer node
• Example: 4-queens – FIFO branch-and-bound algorithm
∗ Initially, no queen has been placed on the chessboard, and there is only one live node
∗ The only live node becomes the E-node
∗ Expand it and generate all its children: a queen in column 1, 2, 3, and 4 of row 1 (the only live nodes left)
∗ The next E-node is the node with the queen in row 1 and column 1
∗ Expand this node and add the possible nodes to the queue of live nodes
∗ Bound the nodes that become dead nodes
– Compared with the backtracking algorithm, backtracking is the superior method for this search problem
• Least Cost (LC) search
– The FIFO selection rule does not give preference to nodes that will lead to an answer quickly; it just queues them behind the current live nodes
∗ In the 4-queens problem, if three queens have been placed on the board, it is obvious that the answer may be reached in one more move
∗ The rigid selection rule nevertheless requires that other live nodes be expanded first, and only then the promising node be tested
– Instead, rank the live nodes by a heuristic cˆ(·); the next E-node is selected on the basis of this ranking function
– The heuristic is based on the expected additional computational effort (cost) to reach a solution from the current live node
– For any node x, the cost could be given by:
1. The number of nodes in subtree x that need to be generated before an answer can be reached
∗ Search will then always generate the minimum number of nodes
2. The number of levels to the nearest answer node in the subtree x
∗ cˆ(root) for the 4-queens problem is 4
∗ The only nodes to become E-nodes are the nodes on the path from the root to the nearest answer node

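The FIFO branch-and-bound search for the n-queens example described above can be sketched with an explicit queue of live nodes; the bounding test is the same column/diagonal check used in backtracking (a minimal sketch, before the cost functions are refined below):

```python
from collections import deque

def fifo_nqueens(n):
    """FIFO branch-and-bound: each E-node is fully expanded and its
    feasible children are appended to the queue of live nodes."""
    def bound(partial, col):
        # kill children where the new queen attacks an earlier one
        k = len(partial)
        return all(c != col and abs(c - col) != k - j
                   for j, c in enumerate(partial))

    live = deque([()])               # root node: no queen placed yet
    answers = []
    while live:
        e_node = live.popleft()      # FIFO rule: E-node is the head of the queue
        for col in range(n):         # generate all children of the E-node
            if bound(e_node, col):
                child = e_node + (col,)
                if len(child) == n:
                    answers.append(child)   # answer node found
                else:
                    live.append(child)      # child becomes a live node
    return answers
```

For n = 4 this finds the two answer tuples (1, 3, 0, 2) and (2, 0, 3, 1) (0-based columns), illustrating why the rigid FIFO rule expands more nodes than backtracking does on this problem.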
– The problem with the above techniques for computing the cost at node x is that they involve the search of the subtree at x, implying the exploration of the subtree
∗ By the time the cost of a node is determined, that subtree has been searched and there is no need to explore x again
∗ The above can be avoided by using an estimate function gˆ(·) instead of actually expanding the nodes
– Let gˆ(x) be an estimate of the additional effort needed to reach an answer from node x
∗ x is assigned a rank using a function cˆ(·) such that cˆ(x) = f(h(x)) + gˆ(x), where h(x) is the cost of reaching x from the root and f(·) is any nondecreasing function
∗ The effort spent in reaching the live node cannot be reduced, and all we are concerned with at this point is to minimize the effort of reaching the solution from the live node
– A search strategy that uses the cost function cˆ(x) = f(h(x)) + gˆ(x) to select the next E-node always chooses a live node with least cˆ(·)
∗ Such a search strategy is called an LC-search (Least Cost search)
∗ Both BFS and DFS are special cases of LC-search
∗ In BFS, we use gˆ(x) ≡ 0 and f(h(x)) = the level of node x · LC-search then generates nodes by level
∗ In DFS, we use f(h(x)) ≡ 0 and gˆ(x) ≥ gˆ(y) whenever y is a child of x
– An LC-search coupled with bounding functions is called an LC branch-and-bound search
– Cost function
∗ If x is an answer node, c(x) is the cost of reaching x from the root of the state space tree
∗ If x is not an answer node, c(x) = ∞, provided the subtree x contains no answer node; if subtree x contains an answer node, c(x) is the cost of a minimum-cost answer node in subtree x
∗ cˆ(·) with f(h(x)) = h(x) is an approximation to c(·)
• Control abstraction for LC-search
– Search space tree t
– Cost function for the nodes in t: c(·)
– Let x be any node in t
∗ c(x) is the minimum cost of any answer node in the subtree with root x
∗ c(t) is the cost of the minimum-cost answer node in t
– Since it is not easy to compute c(), we will substitute it by a heuristic estimate cˆ()
– Algorithm LCSearch uses cˆ() to find an answer node

typedef struct list_node_t {
    struct list_node_t *next;
    struct list_node_t *parent;   /* pointer for the path to the root */
    float cost;
} list_node_t;

algorithm LCSearch ( t )
{   // Search t for an answer node
    if ( *t is an answer node ) { print ( *t ); return; }

    E = t;   // E-node
    Initialize the list of live nodes to be empty;
    while ( true ) {
        for each child x of E {
            if x is an answer node {
                print the path from x to t;
                return;
            }
            Add ( x );        // Add x to the list of live nodes
            x->parent = E;    // Pointer for path to root
        }
        if there are no more live nodes {
            print ( "No answer node" );
            return;
        }
        E = Least();
        // Find a live node with least estimated cost
        // The found node is deleted from the list of live nodes
    }
}

∗ The algorithm always keeps the list of live nodes in a list
∗ When all the children of E have been generated, E becomes a dead node · This happens only if none of E's children is an answer node
∗ If there are no live nodes left, the algorithm terminates; otherwise, Least() correctly chooses the next E-node and the search continues
– FIFO search
∗ Implement the list of live nodes as a queue
∗ Least() removes the head of the queue
∗ Add() adds the node to the end of the queue
– LIFO search: implement the list of live nodes as a stack
– The only difference among LC, FIFO, and LIFO search is in the implementation of the list of live nodes
• Bounding
– A branch-and-bound method searches a state space tree using any search mechanism in which all children of the E-node are generated before another node becomes the E-node
– Each answer node x has a cost c(x) and we have to find a minimum-cost answer node ∗ Common strategies include LC, FIFO, and LIFO
– Use a cost function cˆ(·) such that cˆ(x) ≤ c(x); it provides a lower bound on the cost of a solution obtainable from any node x
– If U is an upper bound on the cost of a minimum-cost solution, then all live nodes x with cˆ(x) > U may be killed
∗ All answer nodes reachable from x have cost c(x) ≥ cˆ(x) > U
∗ The starting value for U can be obtained by some heuristic or set to ∞
∗ Each time a new answer node is found, the value of U can be updated

Topological sort
1. Applies to a directed acyclic graph (DAG).

2. Every DAG has one or more topological orderings.

Applications: scheduling a sequence of jobs or tasks. Jobs are represented by vertices, and there is an edge from x to y if job x must be completed before job y can be started.

Algorithm:
1. L ← empty list
2. S ← set of all nodes with no incoming edge
3. While S is not empty do
   3.1 Remove a node n from S and insert n into L
   3.2 For each node m with an edge e from n to m do
       remove edge e from the graph;
       if m has no other incoming edges then insert m into S
4. If the graph still has edges then
       output an error message (the graph has at least one cycle)
   else
       display L

Analysis: O(|V| + |E|)
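The source-removal algorithm above can be sketched in Python, with in-degree counts standing in for explicit edge removal (the sample graph in the test is illustrative):

```python
from collections import deque

def topological_sort(vertices, edges):
    """Topological order of a DAG, following the steps above.
    Runs in O(|V| + |E|); raises ValueError if the graph has a cycle."""
    succ = {v: [] for v in vertices}
    indeg = {v: 0 for v in vertices}
    for x, y in edges:            # edge x -> y: job x must finish before y starts
        succ[x].append(y)
        indeg[y] += 1
    s = deque(v for v in vertices if indeg[v] == 0)   # S: no incoming edges
    order = []                                        # L
    while s:
        n = s.popleft()
        order.append(n)
        for m in succ[n]:         # "remove" edge n -> m by decrementing in-degree
            indeg[m] -= 1
            if indeg[m] == 0:
                s.append(m)
    if len(order) != len(vertices):
        raise ValueError("graph has at least one cycle")
    return order
```

Each vertex enters S exactly once and each edge is decremented exactly once, which gives the O(|V| + |E|) bound stated above.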