GREEDY METHOD
General Method:
The greedy technique is the most straightforward algorithm design technique, and it can be applied to a wide variety of problems. These problems have n inputs and require us to obtain a subset that satisfies some constraints. Any subset that satisfies these constraints is called a feasible solution. We need to find a feasible solution that either maximizes or minimizes a given objective function; such a solution is called an optimal solution.
The greedy method constructs an algorithm that works in stages, considering one input at a time. At each stage a decision is made as to whether a particular input should be included in the solution being built. If the inclusion of the next input into the partially constructed solution would result in an infeasible solution, then that input is not added to the partial solution; otherwise it is added. The selection procedure itself is based on some optimization measure; this measure may be the objective function of the problem.
Different optimization measures may be possible for a given problem, and most of these will generate algorithms that produce suboptimal solutions. This version of the greedy technique is called the subset paradigm.
Algorithm Greedy(a, n)
// a[1:n] contains the n inputs.
{
    solution = ∅;
    for i = 1 to n do
    {
        x = Select(a);
        if Feasible(solution, x) then
            solution = Union(solution, x);
    }
    return solution;
}
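The skeleton above can be made concrete with a short Python sketch. The particular Select and Feasible used here are illustrative assumptions, not fixed by the pseudocode: Select considers the largest remaining input first, and Feasible checks a total-weight limit.

```python
# A minimal Python sketch of the abstract greedy skeleton.
# Select and Feasible here are illustrative choices: Select picks the
# largest unconsidered input, Feasible checks a total-weight limit.

def greedy(a, limit):
    """Build a feasible subset of the inputs a, one decision per stage."""
    remaining = sorted(a, reverse=True)   # Select: largest input first
    solution = []                         # solution = empty set
    total = 0
    for x in remaining:                   # one input considered per stage
        if total + x <= limit:            # Feasible(solution, x)?
            solution.append(x)            # Union(solution, x)
            total += x
    return solution

print(greedy([4, 8, 1, 3, 7], limit=12))  # -> [8, 4]
```

Note that the optimization measure (here, "largest first") determines the quality of the answer; a different Select can give a different, possibly suboptimal, subset.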
[Figure: schedule of jobs A, B, E on machine M1 and F, G on machine M2 over the time axis 0 to 11]
Knapsack Problem
We are given n objects and a knapsack (bag) of capacity m. Object i has weight wi and profit pi. If a fraction xi (0 ≤ xi ≤ 1) of object i is placed into the knapsack, then a profit of pixi is earned. The objective is to obtain a feasible filling of the knapsack that maximizes the total profit earned. Since the knapsack capacity is m, the total weight of all chosen objects must not exceed m. This problem can be stated as:
Maximize   ∑1≤i≤n pi xi          (Eqn 1)
Subject to ∑1≤i≤n wi xi ≤ m      (Eqn 2)
Example: consider the instance with n = 3 objects and knapsack capacity m = 20, where
Weights: (w1, w2, w3) = (18, 15, 10)
Profits: (p1, p2, p3) = (25, 24, 15)
Profit/weight ratios:
p1/w1 = 25/18 = 1.39,   p2/w2 = 24/15 = 1.6,   p3/w3 = 15/10 = 1.5
Arranging the pi/wi in decreasing order: 1.6 > 1.5 > 1.39
i.e., Weights: (w2, w3, w1) = (15, 10, 18)
Profits: (p2, p3, p1) = (24, 15, 25)
Let RC be the remaining capacity of the knapsack after placing the ith item.

Item   Wi   Pi   X = 1 or RC/Wi   Profit = X*Pi   RC = RC - Wi*X
 -      -    -         -                -               20
 2     15   24         1               24                5
 3     10   15        0.5              7.5               0
 1     18   25         0                0                0

Total profit earned = 24 + 7.5 = 31.5.
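The hand computation above can be reproduced by a short Python sketch of the greedy fractional-knapsack rule. The function name and structure are illustrative (the text's GreedyKnapsack pseudocode is not shown in this excerpt); the instance and the capacity m = 20 are taken from the worked table.

```python
def fractional_knapsack(weights, profits, m):
    """Greedy fractional knapsack: take items in decreasing p/w order,
    whole items while they fit, then a fraction of the next item."""
    order = sorted(range(len(weights)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    x = [0.0] * len(weights)   # x[i] = fraction of object i taken
    rc = m                     # remaining capacity
    total = 0.0
    for i in order:
        if weights[i] <= rc:               # whole item fits: x_i = 1
            x[i] = 1.0
        elif rc > 0:                       # take the fraction rc / w_i
            x[i] = rc / weights[i]
        total += profits[i] * x[i]
        rc -= weights[i] * x[i]
    return x, total

x, total = fractional_knapsack([18, 15, 10], [25, 24, 15], 20)
print(x, total)   # -> [0.0, 1.0, 0.5] 31.5, matching the table
```

The solution x = (0, 1, 0.5) and total profit 31.5 agree with the table row by row.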
Theorem:
If p1/w1 ≥ p2/w2 ≥ . . . ≥ pn/wn, then GreedyKnapsack generates an optimal solution to the given instance of the knapsack problem.
Proof: Let x = (x1, x2, . . ., xn) be the solution generated by GreedyKnapsack. If all the xi are equal to 1, then clearly the solution is optimal. Otherwise, let j be the least index such that xj ≠ 1. From the way the algorithm works, xi = 1 for 1 ≤ i < j, xi = 0 for j < i ≤ n, and 0 ≤ xj < 1.
Let y = (y1, y2, . . ., yn) be an optimal solution. We may assume that every optimal solution fills the knapsack exactly, so ∑wiyi = m.
Let k be the least index such that yk ≠ xk. It follows that yk < xk. To see this, consider the three possibilities k < j, k = j, and k > j.
i. If k < j, then xk = 1. But yk ≠ xk, so yk < xk.
Now increase yk to xk and decrease as many of (yk+1, . . ., yn) as necessary so that the total capacity used is still m. This yields a new solution z = (z1, z2, . . ., zn) with zi = xi for 1 ≤ i ≤ k and ∑k<i≤n wi(yi − zi) = wk(zk − yk). For z we have

∑1≤i≤n pizi = ∑1≤i≤n piyi + (zk − yk)wk(pk/wk) − ∑k<i≤n (yi − zi)wi(pi/wi)
            ≥ ∑1≤i≤n piyi + [ (zk − yk)wk − ∑k<i≤n (yi − zi)wi ](pk/wk)
            = ∑1≤i≤n piyi

since pi/wi ≤ pk/wk for all i > k.
If ∑pizi > ∑piyi, then y could not have been an optimal solution. If these sums are equal, then either z = x and x is optimal, or z ≠ x. If z ≠ x, repeated use of the above argument will either show that y is not optimal or transform y into x, showing that x too is optimal.
The value of a feasible solution J is the sum of the profits of the jobs in J, i.e., ∑i∈J pi. An optimal solution is a feasible solution with maximum value.
The optimal solution is J = (1, 4). These jobs are processed in the sequence job 4 followed by job 1, i.e., job 4 is processed by its deadline 1 and job 1 by its deadline 2, and the total profit is 127 (100 + 27).
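This greedy rule (take jobs in decreasing profit order, placing each in the latest free slot at or before its deadline) can be sketched in Python. The slot-based implementation below is illustrative rather than the text's array-based JS procedure, and the four-job instance is assumed for illustration: the excerpt only states that jobs 1 and 4 (profits 100 and 27, deadlines 2 and 1) form the optimal solution.

```python
def job_sequencing(profits, deadlines):
    """Greedy job sequencing with deadlines: consider jobs in
    non-increasing profit order; place each in the latest free unit-time
    slot at or before its deadline, rejecting it if none is free."""
    n = len(profits)
    jobs = sorted(range(n), key=lambda i: profits[i], reverse=True)
    max_d = max(deadlines)
    slot = [None] * (max_d + 1)      # slot[t] = job run in period t (1-based)
    for i in jobs:
        t = deadlines[i]
        while t >= 1 and slot[t] is not None:
            t -= 1                   # search backwards for a free slot
        if t >= 1:
            slot[t] = i
    chosen = [i for i in slot[1:] if i is not None]
    return chosen, sum(profits[i] for i in chosen)

# Assumed illustrative instance: profits (100, 10, 15, 27),
# deadlines (2, 1, 2, 1); jobs are 0-indexed here.
chosen, total = job_sequencing([100, 10, 15, 27], [2, 1, 2, 1])
print(sorted(chosen), total)   # -> [0, 3] 127 (jobs 1 and 4, profit 127)
```

On this instance the greedy rule selects jobs 1 and 4 for a total profit of 127, matching the example above.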
The detailed Greedy algorithm for Job Sequencing with deadlines and profit is given below:
Algorithm JS(d, J, n)
// d[i] ≥ 1, 1 ≤ i ≤ n are the deadlines, n ≥ 1. The jobs are ordered such that p[1] ≥ p[2] ≥ . . . ≥ p[n].
// J[i] is the ith job in the optimal solution, 1 ≤ i ≤ k. Also, at termination d[J[i]] ≤ d[J[i+1]].
{
    d[0] = J[0] = 0;   // sentinel
    J[1] = 1;          // include job 1
    k = 1;
    for i = 2 to n do
    {
        // Consider jobs in non-increasing order of p[i]; find a position
        // for job i and check the feasibility of inserting it.
        r = k;
        while ((d[J[r]] > d[i]) and (d[J[r]] ≠ r)) do
            r = r - 1;
        if ((d[J[r]] ≤ d[i]) and (d[i] > r)) then
        {
            // Insert i into J[].
            for q = k to (r+1) step -1 do
                J[q+1] = J[q];
            J[r+1] = i;
            k = k + 1;
        }
    }
    return k;
}
Spanning trees have several applications:
i. They can be used to obtain an independent set of circuit equations for an electric network.
ii. They can be used to check the cyclicity and connectivity properties of a graph.
The minimum spanning tree of a given graph is a connected acyclic subgraph that contains all the vertices of the graph and whose sum of edge weights is minimum.
or
Let G = (V, E) be an undirected connected graph. A subgraph t = (V, E′) of G is a spanning tree of G if and only if t is a tree; a minimum spanning tree is a spanning tree for which the sum of the edge weights is minimum.
There are two algorithms to find the minimum spanning tree of the given graph:
Prim’s Algorithm:
Prim's algorithm is a greedy method that builds a minimum cost spanning tree edge by edge. The next edge to include is chosen according to some optimization criterion. The simplest such criterion is to choose the edge that results in the minimum increase in the sum of the costs of the edges so far included in the tree.
If A is the set of edges selected so far, then A forms a tree. The next edge (u, v) to be included in A is a minimum cost edge not in A with the property that A ∪ {(u, v)} is also a tree.
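This edge-by-edge construction can be sketched compactly in Python using a cost matrix and a near[] array, in the same spirit as the pseudocode that follows. As assumptions not taken from the text: the sketch starts from vertex 0 rather than from a minimum-cost edge, and the 4-vertex example graph is purely illustrative.

```python
import math

def prim(cost):
    """Prim's algorithm on an n x n cost matrix (math.inf = no edge).
    Returns (list of tree edges, total cost of the tree)."""
    n = len(cost)
    in_tree = [False] * n
    near = [0] * n          # near[i]: tree vertex currently closest to i
    in_tree[0] = True       # grow the tree from vertex 0
    t, mincost = [], 0
    for _ in range(n - 1):
        # Pick the cheapest edge joining a non-tree vertex to the tree.
        j = min((i for i in range(n) if not in_tree[i]),
                key=lambda i: cost[i][near[i]])
        t.append((near[j], j))
        mincost += cost[j][near[j]]
        in_tree[j] = True
        for i in range(n):  # update near[] for the remaining vertices
            if not in_tree[i] and cost[i][j] < cost[i][near[i]]:
                near[i] = j
    return t, mincost

INF = math.inf
# Illustrative symmetric cost matrix (assumed, not from the text).
cost = [[INF, 1, 4, INF],
        [1, INF, 2, 6],
        [4, 2, INF, 3],
        [INF, 6, 3, INF]]
edges, mincost = prim(cost)
print(edges, mincost)   # -> [(0, 1), (1, 2), (2, 3)] 6
```

Each iteration adds the cheapest edge (near[j], j) that keeps the selected edges a tree, exactly the greedy criterion described above.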
The following is Prim's algorithm:
Algorithm Prim(E, cost, n, t)
// E is the set of edges in G. cost[1:n, 1:n] is the cost matrix of an n vertex graph such that
// cost[i, j] is either a positive real number or ∞ if no edge (i, j) exists.
// A minimum spanning tree is computed and stored as a set of edges in the array
// t[1:n-1, 1:2]. (t[i,1], t[i,2]) is an edge in the minimum cost spanning tree. The final cost is
// returned.
{
    Let (k, l) be an edge of minimum cost in E;
    mincost = cost[k, l];
    t[1,1] = k; t[1,2] = l;
    for i = 1 to n do          // initialize near[]
        if (cost[i, l] < cost[i, k]) then
            near[i] = l;
        else
            near[i] = k;
    near[k] = near[l] = 0;
    for i = 2 to n-1 do        // find n-2 additional edges for t
    {
        Let j be an index such that near[j] ≠ 0 and cost[j, near[j]] is minimum;
        t[i,1] = j; t[i,2] = near[j];