# PRESENTED BY…

Om Prakash Tripathi, 3rd year C.S.E., 514/06

GREEDY ALGORITHM

A greedy algorithm solves a problem by making the choice that seems best at a particular moment. Some problems may not have an efficient exact solution, but a greedy algorithm may provide a solution that is close to optimal.

**GREEDY ALGORITHM Contd..**

A greedy algorithm works if a problem exhibits the following two properties:

1) Greedy choice property: a globally optimal solution can be arrived at by making locally optimal choices, i.e. an optimal solution can be obtained by making greedy choices.

**GREEDY ALGORITHM Contd..**

2) Optimal substructure: an optimal solution contains optimal sub-solutions, i.e. the solutions to the subproblems of an optimal solution are themselves optimal.

**ACTIVITY SELECTION PROBLEM**

**Scheduling a resource among several competing activities. Definition:**

– S = {1, 2, …, n} is the set of activities that wish to use a resource
– Each activity i has a start time si and a finish time fi, where si ≤ fi
– If selected, activity i takes place during the half-open interval [si, fi)
– Activities i and j are compatible if [si, fi) and [sj, fj) don't overlap, i.e. si ≥ fj or sj ≥ fi

– The activity-selection problem is to select a maximum-size set of mutually compatible activities. Suppose f1 ≤ f2 ≤ f3 ≤ … ≤ fn; the greedy selection then takes Θ(n) time if the activities are already sorted by their finish times.

**Greedy-Activity-Selector Algorithm**
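The selector can be sketched in Python as follows (function and variable names are illustrative, not from the slides): sort the activities by finish time, then repeatedly take the first activity compatible with the last one selected.

```python
def greedy_activity_selector(start, finish):
    """Select a maximum-size set of mutually compatible activities.

    Activities are given as parallel lists of start/finish times.
    Sorting by finish time makes the greedy choice the compatible
    activity that finishes earliest.
    """
    order = sorted(range(len(start)), key=lambda i: finish[i])
    selected = []
    last_finish = float("-inf")
    for i in order:
        # Compatible if this activity starts no earlier than the
        # finish of the last selected activity.
        if start[i] >= last_finish:
            selected.append(i + 1)  # report 1-based activity numbers
            last_finish = finish[i]
    return selected
```

On the 10-activity example from the following slides this returns the schedule {A1, A3, A4, A6, A7, A9, A10}.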

EXAMPLE

Given 10 activities along with their start and finish times: S = {A1, A2, A3, A4, A5, A6, A7, A8, A9, A10}, Si = {1, 2, 3, 4, 7, 8, 9, 9, 11, 12}, Fi = {3, 5, 4, 7, 10, 9, 11, 13, 12, 14}. Compute a schedule in which the largest number of activities takes place.

SOLUTION

Arranging the activities in increasing order of finish time:

| Activity | A1 | A3 | A2 | A4 | A6 | A5 | A7 | A9 | A8 | A10 |
|----------|----|----|----|----|----|----|----|----|----|-----|
| Start    | 1  | 3  | 2  | 4  | 8  | 7  | 9  | 11 | 9  | 12  |
| Finish   | 3  | 4  | 5  | 7  | 9  | 10 | 11 | 12 | 13 | 14  |

(Figure: the activities A1–A10 plotted as intervals on a timeline from 1 to 14.)

Solution contd….

The final activity schedule is {A1, A3, A4, A6, A7, A9, A10}. The running time of GREEDY-ACTIVITY-SELECTOR is O(n log n), as the sorting can be done in O(n log n) time. There are O(1) operations per activity, thus the total time is O(n log n) + n·O(1) = O(n log n).

EXAMPLE

Consider the problem of making change for N cents using the least number of coins. Describe a greedy algorithm to make change consisting of quarters, dimes, nickels, and pennies.

SOLUTION

STEP 1: Let f(N) be the minimum number of coins needed to make change for N cents.

STEP 2: Define the recurrence:

f(N) = 1 if N = 1, 5, 10, or 25
f(N) = min{ f(N−25)+1, f(N−10)+1, f(N−5)+1, f(N−1)+1 } otherwise.

Continued…..

STEP 3: Compute f(N), continuing until we reach a base case. E.g. for N = 91:
f(91) = f(66)+1 = f(41)+1+1 = f(16)+1+1+1 = f(6)+1+1+1+1 = f(1)+1+1+1+1+1 = 1+1+1+1+1+1 = 6 coins.
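The recurrence above is realized greedily by always taking the largest coin that fits; a minimal Python sketch (the function name is illustrative). For US denominations this greedy choice is optimal, though for arbitrary coin systems it need not be.

```python
def make_change(n, denominations=(25, 10, 5, 1)):
    """Greedy change-making: repeatedly take the largest coin that fits.

    Denominations must be listed in decreasing order.
    Returns the list of coins used.
    """
    coins = []
    for d in denominations:
        # How many coins of value d fit into the remaining amount?
        count, n = divmod(n, d)
        coins.extend([d] * count)
    return coins
```

For N = 91 this yields three quarters, one dime, one nickel, and one penny: 6 coins, matching the trace above.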

KNAPSACK PROBLEM……..

Problem statement: A thief robbing a store can carry a maximal weight of W in their knapsack. There are n items; the ith item weighs wi and is worth vi dollars. What items should the thief take?

**There are two versions of the problem. 1) Fractional knapsack problem**

The setup is the same, but the thief can take fractions of items, meaning that the items can be broken into smaller pieces, so the thief may decide to carry only a fraction xi of item i, where 0 ≤ xi ≤ 1.

**2) 0-1 knapsack problem**

The setup is the same, but the items may not be broken into smaller pieces, so the thief must decide either to take an item or to leave it (a binary choice), and may not take a fraction of an item.

**0-1 knapsack cannot be solved by the greedy strategy**

– The greedy strategy may be unable to fill the knapsack to capacity, and the empty space lowers the effective value per pound of the load.
– We must compare the solution to the subproblem in which the item is included with the solution to the subproblem in which the item is excluded before we can make the choice.

Dynamic-Programming Solution to the 0-1 Knapsack Problem

Let i be the highest-numbered item in an optimal solution S for weight W. Then S′ = S − {i} is an optimal solution for weight W − wi, and the value of the solution S is vi plus the value of the subproblem solution. We can express this fact in the following formula: define c[i, w] to be the value of an optimal solution for items 1, 2, …, i and maximum weight w. Then:

Dynamic-Programming Solution to the 0-1 Knapsack Problem

c[0, w] = 0 for 0 ≤ w ≤ W
c[i, w] = −∞ for w < 0
c[i, w] = max{ vi + c[i−1, w−wi], c[i−1, w] } for 1 ≤ i ≤ n and 0 ≤ w ≤ W

This says that the optimal solution for i items either includes the ith item, in which case its value is vi plus a subproblem solution for (i − 1) items and the weight reduced by wi,

**Dynamic-Programming Solution to the 0-1 Knapsack Problem**

or it does not include the ith item, in which case it is the subproblem solution for (i − 1) items and the same weight. That is, if the thief picks item i, the thief takes vi value and can then choose from items 1, 2, …, i − 1 up to the weight limit w − wi, getting c[i − 1, w − wi] additional value. On the other hand, if the thief decides not to take item i, the thief can choose from items 1, 2, …, i − 1 up to the weight limit w, getting c[i − 1, w] value. The better of these two choices should be made.

**Dynamic-Programming Solution to the 0-1 Knapsack Problem**

Although we are solving the 0-1 knapsack problem, the above formula for c is similar to the LCS formula: the boundary values are 0, and the other values are computed from the input and "earlier" values of c. So the 0-1 knapsack algorithm is like the LCS-length algorithm for finding a longest common subsequence of two sequences.

**Dynamic-Programming Solution to the 0-1 Knapsack Problem**

The algorithm takes as input the maximum weight W, the number of items n, and the two sequences v = <v1, v2, …, vn> and w = <w1, w2, …, wn>. It stores the c[i, j] values in a table, that is, a two-dimensional array c[0..n, 0..W], whose entries are computed in row-major order. That is, the first row of c is filled in from left to right, then the second row, and so on. At the end of the computation, c[n, W] contains the maximum value that can be packed into the knapsack.

Dynamic-0-1-knapsack (v, w, n, W)

FOR w = 0 TO W DO
    c[0, w] = 0
FOR i = 1 TO n DO
    c[i, 0] = 0
    FOR w = 1 TO W DO
        IF wi ≤ w THEN
            IF vi + c[i−1, w−wi] > c[i−1, w] THEN
                c[i, w] = vi + c[i−1, w−wi]
            ELSE c[i, w] = c[i−1, w]
        ELSE c[i, w] = c[i−1, w]

Dynamic-0-1-knapsack

The set of items to take can be deduced from the table, starting at c[n, W] and tracing backwards where the optimal values came from. If c[i, w] = c[i−1, w], item i is not part of the solution, and we continue tracing with c[i−1, w]. Otherwise item i is part of the solution, and we continue tracing with c[i−1, w−wi].
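A minimal Python sketch of the table-filling pseudocode together with the traceback just described (the function name and the 0-based item lists are my own; reported item numbers are 1-based to match the slides):

```python
def knapsack_01(v, w, W):
    """0-1 knapsack by dynamic programming.

    v, w -- values and weights of the n items (v[0] is item 1)
    W    -- knapsack capacity
    Returns (maximum value, sorted 1-based list of items taken).
    """
    n = len(v)
    c = [[0] * (W + 1) for _ in range(n + 1)]
    # Fill the table in row-major order.
    for i in range(1, n + 1):
        for wt in range(1, W + 1):
            if w[i - 1] <= wt:
                c[i][wt] = max(v[i - 1] + c[i - 1][wt - w[i - 1]],
                               c[i - 1][wt])
            else:
                c[i][wt] = c[i - 1][wt]
    # Trace back: if the value came from the row above, item i was
    # not taken; otherwise it was, and the remaining weight shrinks.
    items, wt = [], W
    for i in range(n, 0, -1):
        if c[i][wt] != c[i - 1][wt]:
            items.append(i)
            wt -= w[i - 1]
    return c[n][W], sorted(items)
```

On the example from the following slides (W = 10, v = <10, 40, 30, 50>, w = <5, 4, 6, 3>) this gives value 90, taking the items with values 40 and 50 (weights 4 and 3).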

Analysis

This dynamic 0-1 knapsack algorithm takes Θ(nW) time, broken up as follows: Θ(nW) time to fill the c-table, which has (n + 1)·(W + 1) entries, each requiring Θ(1) time to compute, and O(n) time to trace the solution, because the tracing process starts in row n of the table and moves up one row at each step.

**Example for Dynamic-Programming 0-1 Knapsack Problem**

Find the solution for the problem with W = 10, i = <1, 2, 3, 4>, vi = <10, 40, 30, 50>, wi = <5, 4, 6, 3>.

Solution…….

| i\w | 0 | 1 | 2 | 3 | 4  | 5  | 6  | 7  | 8  | 9  | 10 |
|-----|---|---|---|---|----|----|----|----|----|----|----|
| 0   | 0 | 0 | 0 | 0 | 0  | 0  | 0  | 0  | 0  | 0  | 0  |
| 1   | 0 | 0 | 0 | 0 | 0  | 10 | 10 | 10 | 10 | 10 | 10 |
| 2   | 0 | 0 | 0 | 0 | 40 | 40 | 40 | 40 | 40 | 50 | 50 |
| 3   | 0 | 0 | 0 | 0 | 40 | 40 | 40 | 40 | 40 | 50 | 70 |
| 4   | 0 | 0 | 0 | 50 | 50 | 50 | 50 | 90 | 90 | 90 | 90 |

**Check for correctness……**

Since there are 4 items with W = 10, v = <10, 40, 30, 50>, w = <5, 4, 6, 3>, we can see that at most two items can be taken. Writing P<wi, wj> for the profit of the pair of items with weights wi and wj:
P<3, 6> = 50 + 30 = 80
P<3, 4> = 50 + 40 = 90
P<3, 5> = 50 + 10 = 60
P<6, 4> = 30 + 40 = 70
P<4, 5> = 40 + 10 = 50
So selecting the pair with weights <3, 4> gives the maximum profit, 90, matching c[4, 10].

THEOREM…

Assume that all edge costs are distinct. Let S be any subset of nodes that is neither empty nor equal to all of V, and let edge e = (v, w) be the minimum-cost edge with one end in S and the other in V − S. Then every minimum spanning tree contains the edge e.

PROOF….

Let T be a spanning tree that does not contain e; we need to show that T does not have the minimum possible cost. We will do this using an exchange argument: we will identify an edge e′ in T that is more expensive than e, with the property that exchanging e for e′ results in another spanning tree. This resulting tree will then be cheaper than T, as desired.

Swapping the edge e for the edge e′ in tree T…

(Figure: the cut between S and V − S, with e = (v, w) crossing the cut; e can be swapped for e′ = (v′, w′).)

**Appalachian Trail Problem…**

The problem states that some friends decide to hike the Appalachian Trail this summer. They want to hike as much as possible per day, but for obvious reasons, not after dark. On a map they have identified a large set of good stopping points for camping, and they are considering the following system for deciding when to stop for the day: each time they come to a potential stopping point, they determine whether they can make it to the next one before nightfall; if they can, they keep hiking, otherwise they stop. Is this approach the best one?

**Appalachian Trail Problem… Solution……**

Assumptions:
#1) We model the Appalachian Trail as a long line segment of length L.
#2) The friends can hike d miles per day, independent of conditions.
#3) The judgment made by the friends is always correct.
#4) Potential stopping points are located at distances x1, x2, …, xn from the start of the trail.
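The friends' rule is equivalent to hiking each day to the farthest stopping point still reachable before nightfall. A minimal Python sketch under the assumptions above (function name and sample values are illustrative):

```python
def greedy_stops(x, L, d):
    """Greedy stopping rule: each day, hike to the farthest potential
    stopping point within d miles; stop there only because the next
    point is out of reach before nightfall.

    x -- sorted distances of potential stopping points from the start
    L -- total trail length;  d -- miles hiked per day
    Returns the distances of the chosen stopping points.
    """
    stops, pos = [], 0.0
    while pos + d < L:  # can't finish the trail today, must camp
        # Farthest stopping point reachable from pos in one day.
        reachable = [xi for xi in x if pos < xi <= pos + d]
        if not reachable:
            raise ValueError("no stopping point within one day's hike")
        pos = max(reachable)
        stops.append(pos)
    return stops
```

For example, with L = 25, d = 10 and stopping points at 4, 8, 12, 17, 22, the greedy hikers camp at miles 8 and 17 and finish on the third day.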

**Appalachian Trail Problem… Solution……**

Let R = {Xp1, …, Xpk} denote the set of stopping points chosen by the greedy algorithm, and suppose, by way of contradiction, that there is a smaller valid set of stopping points; let us call this smaller set S = {Xq1, …, Xqm} with m < k.

**Appalachian Trail Problem… Solution……**

To obtain the contradiction, we first show that the stopping point reached by the greedy algorithm on each day j is farther along than the stopping point reached under the alternative solution. That is, for each j = 1, 2, …, m, we have Xpj ≥ Xqj.

**Appalachian Trail Problem… Solution……**

We prove this by induction on j. The case j = 1 follows directly from the definition of the greedy algorithm: the friends travel as far as possible on the first day before stopping. For the inductive step, assumption #2 gives

Xqj − Xq(j−1) ≤ d ……(1)

Since Xp(j−1) ≥ Xq(j−1) by the induction hypothesis, we have −Xp(j−1) ≤ −Xq(j−1), so

Xqj − Xp(j−1) ≤ Xqj − Xq(j−1) ……(2)

From (1) and (2) we get Xqj − Xp(j−1) ≤ d.

Appalachian Trail Problem… Solution…….

This means that the friends have the option of hiking all the way from Xp(j−1) to Xqj in one day, and hence the point Xpj at which they finally stop can only be farther along than Xqj. Putting j = m in the statement Xpj ≥ Xqj, we get

Xpm ≥ Xqm ……(3)

**Appalachian Trail Problem… Solution……**

Now if m < k, then we must have

Xpm < L − d ……(4)

for otherwise the friends would never have needed to stop at the location Xp(m+1). Combining (3) and (4) gives Xqm < L − d. But then the friends could not reach the end of the trail from Xqm in one day, which contradicts the assumption that S is a valid set of stopping points.

**Appalachian Trail Problem… Solution……**

Consequently m cannot be less than k. So we have proved that the greedy algorithm produces a valid set of stopping points of the minimum possible size.

**Minimum cost Problem…**

Suppose a security company needs to obtain licenses for n different pieces of cryptographic software. Due to regulations, the company can obtain these licenses at the rate of at most one per month. Each license currently sells for a price of $100, and the rate at which the price rises is different for each license. In which order should the company buy the licenses so that the total cost spent is minimum?

**Minimum cost Problem… Solution…**

The algorithm takes the n rates of price growth r1, r2, r3, …, rn. Let us prove that sorting the ri in decreasing order in fact always gives the optimal solution. We will show this by an exchange argument. Suppose there is an optimal solution O that differs from our solution S.

**Minimum cost Problem… Solution…**

Then this optimal solution must contain an inversion: there must exist two neighboring months t and t+1 such that the price growth rate of the license bought in month t (denoted rt) is less than that of the license bought in month t+1 (denoted rt+1); that is, rt < rt+1.

**Minimum cost Problem… Solution…**

If we swap these two purchases, the rest of the purchases are identically priced. In O, the amount paid during the two months involved in the swap is 100·(rt^t + rt+1^(t+1)). On the other hand, if we swapped these two purchases, we would pay 100·(rt+1^t + rt^(t+1)).

**Minimum cost Problem… Solution…**

The swap is an improvement when

rt+1^t + rt^(t+1) < rt^t + rt+1^(t+1)
⇔ rt^(t+1) − rt^t < rt+1^(t+1) − rt+1^t
⇔ rt^t·(rt − 1) < rt+1^t·(rt+1 − 1) ……(1)

Since ri > 1 for all i and rt < rt+1, we have rt^t < rt+1^t and 0 < rt − 1 < rt+1 − 1, so (1) holds. Swapping thus lowers the cost, contradicting the optimality of O. This concludes that S is the best solution.
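The exchange argument can be checked on a small instance; a minimal Python sketch (function names and sample rates are illustrative) computes the cost of a purchase order, with the license of growth rate r bought in month t costing 100·r^t, and brute-forces all orders to confirm that decreasing growth rate is optimal:

```python
from itertools import permutations

def total_cost(rates):
    """Total cost when the i-th rate in the list is bought in month i:
    a license with growth rate r bought in month t costs 100 * r**t."""
    return sum(100 * r ** t for t, r in enumerate(rates, start=1))

def greedy_order(rates):
    """Greedy rule from the slides: buy in decreasing order of growth rate."""
    return sorted(rates, reverse=True)

# Brute-force check on a small illustrative instance (all rates > 1).
rates = [1.1, 1.5, 1.2, 1.3]
best = min(total_cost(list(p)) for p in permutations(rates))
assert abs(total_cost(greedy_order(rates)) - best) < 1e-9
```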

**Minimum cost Problem… Solution… Analysis……**

The running time of the algorithm is O(n log n), since the sorting takes that much time and the rest (outputting the order) is linear. So the overall running time is O(n log n).