
**LECTURE-18: General Method**

Dynamic programming is a stage-wise search method suitable for optimization problems whose solutions may be viewed as the result of a sequence of decisions. The most attractive property of this strategy is that, during the search for a solution, it avoids full enumeration by pruning early those partial decision sequences that cannot possibly lead to an optimal solution. In many practical situations this strategy reaches the optimal solution in a polynomial number of decision steps; in the worst case, however, it may still end up performing full enumeration. Dynamic programming takes advantage of the duplication of subproblems and arranges to solve each subproblem only once, saving the solution (in a table, for instance) for later use. The underlying idea of dynamic programming is: avoid calculating the same thing twice, usually by keeping a table of known results of subproblems. Unlike divide-and-conquer, which solves the subproblems top-down, dynamic programming is a bottom-up technique. Bottom-up means we start with the smallest subproblems; by combining their solutions we obtain the solutions to subproblems of increasing size, until we arrive at the solution of the original problem.

**The Principle of Optimality:** Dynamic programming relies on a principle of optimality. This principle states that in an optimal sequence of decisions or choices, each subsequence must also be optimal. For example, in the matrix chain multiplication problem, not only is the value we are interested in optimal, but all the other entries in the table also represent optimal values. The principle can be stated as follows: the optimal solution to a problem is a combination of optimal solutions to some of its subproblems. The difficulty in turning the principle of optimality into an algorithm is that it is not usually obvious which subproblems are relevant to the problem under consideration.
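To make the bottom-up idea concrete, here is a small sketch (my illustration, not part of the lecture notes): computing Fibonacci numbers with a table, so that each subproblem is solved exactly once.

```python
def fib_bottom_up(n):
    """Bottom-up dynamic programming: start with the smallest subproblems,
    tabulate each answer, and combine them to solve larger subproblems."""
    if n < 2:
        return n
    table = [0] * (n + 1)   # table[k] stores the solution of subproblem k
    table[1] = 1
    for k in range(2, n + 1):
        # each entry is computed once, from two already-solved subproblems
        table[k] = table[k - 1] + table[k - 2]
    return table[n]
```

Compare this with the naive top-down recursion, which recomputes fib(k) for the same k exponentially many times; the table is exactly the "known results of subproblems" mentioned above.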

**LECTURE-19: Optimal Binary Search Trees**

Up to this point, we have assumed that an optimal search tree is one in which the probability of occurrence of all keys is equal (or is unknown, in which case we assume it to be equal). Thus we concentrated on balancing the tree so as to make the cost of finding any key at most log n. However, consider a dictionary of words used by a spelling checker for English language documents. Such a dictionary needs to be large: the average educated person has a vocabulary of 30 000 words, so it needs ~100 000 words in it to be effective. It is also reasonably easy to produce a table of the frequency of occurrence of words: words are simply counted in any suitable collection of documents considered to be representative of those for which the spelling checker will be used. The dictionary will be searched many more times for 'a', 'and', 'the', etc. than for the thousands of uncommon words which are in the dictionary just in case someone happens to use one of them. A balanced binary tree is likely to end up with a word such as 'miasma' at its root, guaranteeing that in 99.99+% of searches at least one comparison is wasted!

If key k has relative frequency r_k, then in an optimal tree the sum over all keys of d_k * r_k is minimized, where d_k is the distance of key k from the root (i.e. the number of comparisons which must be made before k is found).

Thus the problem is to determine which key should be placed at the root of the tree; the process can then be repeated for the left and right sub-trees. However, a divide-and-conquer approach would choose each key as a candidate root and repeat the process for each sub-tree. Since there are n choices for the root, and recursively O(n) choices for the roots of the two sub-trees, this leads to an O(n^n) algorithm. An efficient algorithm can instead be generated by the dynamic programming approach. We make use of the following property:

Lemma: Sub-trees of optimal trees are themselves optimal trees.
Proof: If a sub-tree of a search tree is not an optimal tree, then a better search tree will be produced if the sub-tree is replaced by an optimal tree.

We start by calculating the O(n) best trees consisting of just two elements (the neighbours in the sorted list of keys).

In the figure, there are two possible arrangements for the tree containing the keys F and G. The cost for (a) is 5*1 + 7*2 = 19, and for (b) it is 7*1 + 5*2 = 17. Thus (b) is the optimum tree, and its cost is saved as c(f,g). We also store g as the root of the best f-g sub-tree in best(f,g). Similarly, we calculate the best cost for every one of the n-1 sub-trees with two elements: c(g,h), c(h,i), etc. The sub-trees containing two elements are then used to calculate the best costs for sub-trees of 3 elements, and this process is continued until we have calculated the cost and the root for the optimal search tree with n elements. There are O(n^2) such sub-tree costs, and each one requires O(n) operations to determine once the costs of the smaller sub-trees are known. Thus the overall algorithm is O(n^3).

Code for the optimal binary search tree uses some C 'tricks' to handle dynamically allocated two-dimensional arrays, using pre-processor macros for C and BEST. After the initialisation steps, the data structures used contain the frequencies in c(i,i) (the costs of single-element trees), max everywhere below the diagonal, and zeroes in the positions just above the diagonal (to allow for the trees which don't have a left or right branch).

Construction of the optimal binary search tree:

    for i := 0 to n do
        w(i,i) := q(i)
        c(i,i) := 0
        r(i,i) := 0
    for length := 1 to n do
        for i := 0 to n - length do
            j := i + length
            w(i,j) := w(i,j-1) + p(j) + q(j)
            m := the value of k (with i < k ≤ j) which minimizes c(i,k-1) + c(k,j)
            c(i,j) := w(i,j) + c(i,m-1) + c(m,j)
            r(i,j) := m
            Leftson(r(i,j)) := r(i,m-1)
            Rightson(r(i,j)) := r(m,j)

In the first iteration, the positions just above the diagonal (c(i,i+1)) are filled in with the optimal costs of the two-element trees from i to i+1. In subsequent iterations, the optimal costs of larger trees (c(i,i+k)) are filled in using the previously calculated costs of smaller trees.

The time complexity of this algorithm is O(n^3). Making a slight change reduces the complexity to O(n^2): modify the range of the considered values of k as follows:

    if length = 1 then
        m := j
    else
        m := the value of k (with r(i,j-1) ≤ k ≤ r(i+1,j)) which minimizes c(i,k-1) + c(k,j)
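As a concrete illustration, here is a small Python sketch of the O(n^3) construction. It is my own code, not the lecture's C implementation, and for brevity it takes the unsuccessful-search frequencies q as zero, so w(i,j) reduces to the sum of the key frequencies p.

```python
import math

def optimal_bst(p):
    """Cost of an optimal binary search tree over keys 1..n with access
    frequencies p[0..n-1]; the dummy frequencies q of the pseudocode are
    taken as 0.  Returns (minimum weighted cost, root table r)."""
    n = len(p)
    p = [0] + list(p)  # shift to 1-based indexing, matching the pseudocode
    w = [[0] * (n + 1) for _ in range(n + 1)]   # w(i,j): weight of keys i+1..j
    c = [[0] * (n + 1) for _ in range(n + 1)]   # c(i,j): optimal cost
    r = [[0] * (n + 1) for _ in range(n + 1)]   # r(i,j): root of best sub-tree
    for length in range(1, n + 1):
        for i in range(0, n - length + 1):
            j = i + length
            w[i][j] = w[i][j - 1] + p[j]
            # try every k (i < k <= j) as the root of this sub-tree
            best_cost, best_root = math.inf, 0
            for k in range(i + 1, j + 1):
                cost = c[i][k - 1] + c[k][j]
                if cost < best_cost:
                    best_cost, best_root = cost, k
            c[i][j] = w[i][j] + best_cost
            r[i][j] = best_root
    return c[0][n], r
```

For frequencies p = (3, 1, 2) this yields cost 10 with key 1 at the root; restricting the inner loop to r(i,j-1) ≤ k ≤ r(i+1,j), as described above, would give the O(n^2) bound.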

**LECTURE-20: 0/1 Knapsack**

Let i be the highest-numbered item in an optimal solution S for W pounds. Then S' = S - {i} is an optimal solution for W - wi pounds, and the value of the solution S is vi plus the value of the subproblem. We can express this fact in the following formula: define c[i, w] to be the solution for items 1, 2, ..., i and maximum weight w. Then

    c[i, w] = 0                                       if i = 0 or w = 0
    c[i, w] = c[i-1, w]                               if i > 0 and wi > w
    c[i, w] = max{ vi + c[i-1, w-wi], c[i-1, w] }     if i > 0 and w ≥ wi

This says that the value of the solution for i items either includes the ith item, in which case it is vi plus a subproblem solution for (i-1) items and the weight excluding wi, or does not include the ith item, in which case it is a subproblem solution for (i-1) items and the same weight. That is, if the thief picks item i, he takes vi value and can then choose from items 1, 2, ..., i-1 up to the weight limit w - wi, getting c[i-1, w-wi] additional value; on the other hand, if the thief decides not to take item i, he can choose from items 1, 2, ..., i-1 up to the weight limit w, and gets c[i-1, w] value. The better of these two choices should be made.

The algorithm takes as input the maximum weight W, the number of items n, and the two sequences v = <v1, v2, ..., vn> and w = <w1, w2, ..., wn>. It stores the c[i, j] values in a table, a two-dimensional array c[0..n, 0..W] whose entries are computed in row-major order. That is, the first row of c is filled in from left to right, then the second row, and so on. At the end of the computation, c[n, W] contains the maximum value that can be packed into the knapsack.

    Dynamic-0-1-Knapsack(v, w, n, W)
        for w = 0 to W do
            c[0, w] = 0
        for i = 1 to n do
            c[i, 0] = 0
            for w = 1 to W do
                if wi ≤ w then
                    if vi + c[i-1, w-wi] > c[i-1, w] then
                        c[i, w] = vi + c[i-1, w-wi]
                    else
                        c[i, w] = c[i-1, w]
                else
                    c[i, w] = c[i-1, w]

The set of items to take can be deduced from the table, starting at c[n, W] and tracing backwards where the optimal values came from. If c[i, w] = c[i-1, w], item i is not part of the solution, and we continue tracing with c[i-1, w]. Otherwise, item i is part of the solution, and we continue tracing with c[i-1, w-wi].

Analysis: This Dynamic-0-1-Knapsack algorithm takes θ(nW) time, broken up as follows: θ(nW) time to fill the c-table, which has (n+1)·(W+1) entries, each requiring θ(1) time to compute, plus O(n) time to trace the solution, because the tracing process starts in row n of the table and moves up one row at each step.

The above formula for c is similar to the LCS formula: boundary values are 0, and other values are computed from the input and "earlier" values of c. So the 0-1 knapsack algorithm is like the LCS-length algorithm given in the CLR book for finding a longest common subsequence of two sequences.
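The table-filling and traceback just described can be sketched as a runnable program (Python for illustration; the function name and the 0-based item indexing are my own choices):

```python
def knapsack_01(v, w, W):
    """Row-major DP table for 0/1 knapsack; returns (best value, chosen items).
    v[i] and w[i] are the value and weight of item i+1; W is the capacity."""
    n = len(v)
    c = [[0] * (W + 1) for _ in range(n + 1)]   # row 0 and column 0 stay 0
    for i in range(1, n + 1):
        for wt in range(1, W + 1):
            if w[i - 1] <= wt and v[i - 1] + c[i - 1][wt - w[i - 1]] > c[i - 1][wt]:
                c[i][wt] = v[i - 1] + c[i - 1][wt - w[i - 1]]   # take item i
            else:
                c[i][wt] = c[i - 1][wt]                          # skip item i
    # trace back from c[n][W] to recover the chosen item set
    items, wt = [], W
    for i in range(n, 0, -1):
        if c[i][wt] != c[i - 1][wt]:     # item i was part of the solution
            items.append(i)
            wt -= w[i - 1]
    return c[n][W], sorted(items)
```

For v = (60, 100, 120), w = (10, 20, 30) and W = 50 this returns value 220 with items {2, 3}.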

**LECTURE-21+22: The Traveling Salesman Problem**

This is one of the most well-known difficult problems of our time. A salesperson must visit n cities, passing through each city only once, beginning from one city that is considered the base or starting city and returning to it. The cost of transportation between the cities is given. The problem is to find the route of minimum cost, that is, the order of visiting the cities in such a way that the cost is the minimum. Let's number the cities from 1 to n and let city 1 be the start city of the salesperson. Also, let's assume that c(i, j) is the visiting cost from city i to city j. Here is the systematic way of solving this problem:

Algorithm TSP
1. First, find out all (n-1)! possible solutions.
2. Next, determine the minimum cost by finding out the cost of every one of these (n-1)! solutions.
3. Finally, keep the one with the minimum cost.

Clearly, this requires at least (n-1)! steps, where n is the number of cities. For example, if there were 21 cities, the steps required are (n-1)! = (21-1)! = 20! steps. If each step required 1 msec, we would need about 770 centuries to calculate the minimum cost route. (My life is too short for this crap.) Clearly we cannot examine all possible solutions for minimum cost.

Outline of TSP Heuristic: Whenever the salesman is in town i, he chooses as his next city the city j for which c(i, j) is the minimum among all c(i, k) costs, where k ranges over the cities the salesman has not visited yet. In case more than one city gives the minimum cost, the city with the smaller k is chosen. This greedy algorithm selects the cheapest visit in every step and does not care whether this will lead to a wrong result or not.

Heuristic:
Input:
• Number of cities n
• Cost of traveling between the cities, c(i, j), i, j = 1, ..., n
• Start with city 1
Output:
• Vector of cities and total cost

The algorithm:
1. Initialization
       cost ← 0; visits ← 0; e ← 1  /* pointer of the visited city */
2. Main steps
       for 1 ≤ r ≤ n do {
           choose pointer j with minimum = c(e, j) = min{ c(e, k) : visits(k) = 0 and 1 ≤ k ≤ n }
           cost ← cost + minimum
           e ← j
           C(r) ← j
       }
       C(n) ← 1
       cost ← cost + c(e, 1)

Example (building an offspring tour from the edges of two parent tours). Start with two parents: p1 = (1 2 3 4 5 6 7 8 9) and p2 = (4 1 2 8 7 6 9 3 5). Using both parents, collect the edges available:

City 1: (1 2), (1 4), (1 9)
City 2: (2 1), (2 3), (2 8)
City 3: (3 2), (3 4), (3 5), (3 9)
City 4: (4 1), (4 3), (4 5)
City 5: (5 3), (5 4), (5 6)
City 6: (6 5), (6 7), (6 9)
City 7: (7 6), (7 8)
City 8: (8 2), (8 7), (8 9)
City 9: (9 1), (9 3), (9 6), (9 8)

Start with either one of the "start cities" in the edge lists of the parents, or with a city with the smallest number of edges; this latter criterion maximizes the probability that you will finish the tour using only the parental set of edges. If we start with city 1, we can reach 2, 4 and 9; 9 has 4 edges, while 2 and 4 have 3 edges each. Pick randomly between 2 and 4. Say you picked 4. You now have (1 4 x x x x x x x). 4 has edges to 1, 3 and 5, but the edge to 1 has already been used. Once you have decided on the first city, keep adding an edge to a city with the smallest number of edges: 5 has fewer edges than 3, giving (1 4 5 x x x x x x). Continuing in this fashion, we arrive at the offspring (1 4 5 6 7 8 2 3 9) without needing to introduce a new edge to complete the tour. It appears (experimentally) that failure to do so occurs in less than 1.5% of the cases.
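The greedy nearest-city heuristic above can be sketched as follows (my illustration: cities are 0-indexed, the tour starts and ends at city 0, and ties are broken by the smaller city index, as in the lecture):

```python
def greedy_tsp(c):
    """Nearest-neighbour TSP heuristic.  c is an n x n cost matrix with
    c[i][j] the cost of travelling from city i to city j.
    Returns the tour (starting and ending at city 0) and its total cost."""
    n = len(c)
    visited = [False] * n
    visited[0] = True
    tour, cost, e = [0], 0, 0
    for _ in range(n - 1):
        # cheapest unvisited city; min() breaks ties by the smaller index
        j = min((k for k in range(n) if not visited[k]), key=lambda k: c[e][k])
        cost += c[e][j]
        visited[j] = True
        tour.append(j)
        e = j
    cost += c[e][0]   # return to the start city
    tour.append(0)
    return tour, cost
```

Like any greedy heuristic, this picks the cheapest next step without regard to the overall result, so the tour it returns is not guaranteed to be optimal.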

Matrix Representations and Operators: There have been at least 3 attempts at matrix representations of tours.

1. Precedence Matrix. A tour such as (3 1 2 8 7 4 6 9 5) is represented by the matrix in which the element mij contains a 1 iff city i occurs before city j on the tour. Properties of the matrix:
1. The number of 1s is exactly n(n-1)/2.
2. mii = 0 for all 1 ≤ i ≤ n.
3. If mij = 1 and mjk = 1, then mik = 1 (precedence is transitive).

The two parents p1 = (1 2 3 4 5 6 7 8 9) and p2 = (4 1 2 8 7 6 9 3 5) correspond to two such matrices.

Intersecting the two parent matrices gives a partial order: for example, city 1 must precede cities 2, 3, 5, 6, 7, 8 and 9, while city 6 is only required to precede city 9. How do we complete the tour?

Operators. Two operators were defined. The first took a matrix, randomly selected several rows and columns, removed the set bits at the intersections of those rows and columns, and replaced them randomly. The marginal sums are calculated and stored, and the random replacement must agree with them. For example, assume rows 4, 6, 9 and columns 1, 3, 5, 7, 8, 9 are selected; the bits at the intersections are removed and replaced randomly, agreeing with the marginal sums, so that the resulting matrix (a) again corresponds to a tour.
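As an illustration (mine, not from the notes), the precedence matrix of a tour can be built with a short sketch, against which the three properties above can be checked:

```python
def precedence_matrix(tour):
    """m[i][j] = 1 iff city i+1 occurs before city j+1 on the tour
    (cities are numbered 1..n, matrix indices are 0-based)."""
    n = len(tour)
    pos = {city: idx for idx, city in enumerate(tour)}  # position of each city
    m = [[0] * n for _ in range(n)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i != j and pos[i] < pos[j]:
                m[i - 1][j - 1] = 1
    return m
```

For the tour (3 1 2 8 7 4 6 9 5) the matrix contains exactly 9·8/2 = 36 ones, has a zero diagonal, and is transitive, since a tour is a strict total order on the cities.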
