General Method
Control abstraction
Applications
1. Knapsack problem
2. Job sequencing with Deadlines
3. Minimum Cost spanning trees
a) Prim’s Algorithm
b) Kruskal’s algorithm
4. Single Source shortest path Problem.
_____________________________________________________________________________
General Method:
The Greedy method is a straightforward design technique used to find an optimal solution.
Feasible Solution:
A problem is defined with n inputs. A subset of the inputs that satisfies the given
constraints (conditions) is known as a Feasible Solution or Candidate Solution.
Objective Function:
An objective function assigns a value to a solution, or to a partial solution.
Optimal Solution:
The feasible solution that maximizes or minimizes the objective function is called an
Optimal Solution.
The Greedy method suggests that one can devise an algorithm that works in stages.
This version of the Greedy technique is called the "Subset Paradigm".
1. Arrange all feasible inputs in some order.
2. Consider one input at a time.
3. If the input can be part of an optimal solution, include it in the partially constructed
optimal solution.
4. If the next input (in the order of step 1) extends the partially constructed optimal
solution without violating the constraints, include it; otherwise, reject the input.
5. The selection of the input is made on the basis of the objective function.
Algorithm Greedy(a, n)
// a[1:n] contains the n inputs.
{
    Solution := ∅; // initialize the solution.
    for i := 1 to n do
    {
        x := Select(a);
        if Feasible(Solution, x) then
            Solution := Union(Solution, x);
    }
    return Solution;
}
The function Select selects an input from a[ ] and removes it.
The selected input’s value is assigned to x.
Feasible is a Boolean-valued function that determines whether x can be included into the
solution vector.
The function Union combines x with the solution and updates the objective function.
For problems that do not call for the selection of an optimal subset, the Greedy method
makes its decisions by considering the inputs in some order based on an optimization
criterion. This version of the Greedy technique is known as the "Ordering Paradigm".
Problems that can be solved using the Subset paradigm are:
1. Container Loading
2. Knapsack Problem
3. Tree vertex splitting
4. Job sequencing with Deadlines
5. Minimum-Cost Spanning Trees
a) Prim’s Algorithm
b) Kruskal’s algorithm
Knapsack problem:
Given n objects and a knapsack of capacity m, object i has weight w[i] and profit p[i]. If a
fraction x[i] (0 <= x[i] <= 1) of object i is placed in the knapsack, a profit p[i] * x[i] is earned.
The objective is to fill the knapsack so as to maximize the total profit, subject to the
constraint that the total weight placed in the knapsack is at most m.
Algorithm:
Algorithm GreedyKnapsack(m, n)
// p[1:n] and w[1:n] contain the profits and weights respectively of the n objects,
// ordered such that p[i] / w[i] >= p[i+1] / w[i+1].
// m is the knapsack size and x[1:n] is the solution vector.
{
    for i := 1 to n do x[i] := 0.0; // initialize x.
    U := m;
    for i := 1 to n do
    {
        if (w[i] > U) then break;
        x[i] := 1.0; U := U - w[i];
    }
    if (i <= n) then x[i] := U / w[i];
}
The fractional knapsack algorithm has time complexity O(n log n), where n is the number of
items in S.
If S is a heap-based priority queue, each removal has complexity Θ(log n), so up to n
removals take O(n log n). The rest of the algorithm is O(n).
If instead the items are pre-sorted by profit/weight ratio, the removal is simply removing the
first element; with a circular list for S, the removal is O(1), so the selection phase is O(n).
Including the sort, we again have O(n log n).
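As a concrete illustration, GreedyKnapsack can be transcribed into a short runnable sketch. The function name and the sample instance below (n = 3, m = 20, p = (25, 24, 15), w = (18, 15, 10)) are illustrative, not taken from the text above:

```python
def greedy_knapsack(p, w, m):
    """Greedy fractional knapsack. Items are considered in nonincreasing
    p[i]/w[i] order; the first item that does not fit whole is taken
    fractionally and the loop stops. Returns the solution vector x."""
    order = sorted(range(len(p)), key=lambda i: p[i] / w[i], reverse=True)
    x = [0.0] * len(p)      # x[i] = fraction of object i placed in the sack
    U = m                   # remaining capacity
    for i in order:
        if w[i] > U:
            x[i] = U / w[i]     # take only the fraction that fits
            break
        x[i] = 1.0
        U -= w[i]
    return x

print(greedy_knapsack([25, 24, 15], [18, 15, 10], 20))   # [0.0, 1.0, 0.5]
```

On this instance the items are taken in ratio order 24/15, 15/10, 25/18, giving x = (0.0, 1.0, 0.5) and total profit 24 + 7.5 = 31.5.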
Job Sequencing with Deadline:
Problem Statement:
We are given a set of n jobs. Associated with job i is an integer deadline di >= 1 and a
profit pi > 0. The profit pi is earned iff job i is completed by its deadline. Each job takes
one unit of time on a single available machine, and the goal is to select a subset of jobs
that maximizes the total profit earned.
Feasible Solution:
A feasible solution for this problem is a subset J of jobs such that each job in this
subset can be implemented by its deadline.
The value of a feasible solution J is the sum of the profits of the jobs in J,
i.e., ∑ (i ∈ J) pi.
Optimal Solution:
An optimal solution is a feasible solution with maximum value.
Example:
Let n = 4, (p1, p2, p3, p4) = (100, 10, 15, 27) and (d1, d2, d3, d4) = (2, 1, 2, 1).
The feasible solutions and their values are: {1} = 100, {2} = 10, {3} = 15, {4} = 27,
{1, 2} = 110, {1, 3} = 115, {1, 4} = 127, {2, 3} = 25, and {3, 4} = 42.
Therefore J = {1, 4} with value 127 is an optimal solution for the given instance; job 4 is
processed first and job 1 second.
Theorem:
Let J be a set of k jobs and σ = i1, i2, i3, ..., ik a permutation of the jobs in J such that
di1 <= di2 <= ... <= dik.
Then J is a feasible solution iff the jobs in J can be processed in the order σ without
violating any deadline.
The greedy algorithm constructs an optimal set J of jobs that can be processed by their
deadlines. The selected jobs can be processed in the order given by the above theorem.
Algorithm GreedyJob(d, J, n)
// J is a set of jobs that can be completed by their deadlines.
{
    J := {1};
    for i := 2 to n do
    {
        if (all jobs in J ∪ {i} can be completed by their deadlines)
            then J := J ∪ {i};
    }
}
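The feasibility test used by GreedyJob follows directly from the theorem: J ∪ {i} is feasible iff its jobs, sorted by deadline, can each be finished in turn. A minimal Python sketch (function names are illustrative; jobs are considered in nonincreasing profit order):

```python
def greedy_job(profits, deadlines):
    """Greedy job selection for unit-time jobs (1-based job numbers).
    Considers jobs in nonincreasing profit order; keeps job i only if
    J U {i} is still feasible, using the theorem's test."""
    n = len(profits)
    order = sorted(range(1, n + 1), key=lambda i: -profits[i - 1])

    def feasible(J):
        # J is feasible iff, sorted by deadline, the job in position t
        # can be finished by time t
        byd = sorted(J, key=lambda i: deadlines[i - 1])
        return all(deadlines[j - 1] >= t for t, j in enumerate(byd, 1))

    J = []
    for i in order:
        if feasible(J + [i]):
            J.append(i)
    return sorted(J)

# example instance: p = (100, 10, 15, 27), d = (2, 1, 2, 1)
print(greedy_job([100, 10, 15, 27], [2, 1, 2, 1]))   # [1, 4]
```

This reproduces the example above: {1, 4} is selected, with value 127.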
Greedy Algorithm for sequencing unit-time jobs with deadlines and profits:
This algorithm assumes that the jobs are already sorted such that p1 >= p2 >= ... >= pn.
It also assumes that n >= 1 and that the deadline d[i] of job i is at least 1.
No job with d[i] < 1 can ever be finished by its deadline.
1. Algorithm JS(d, j, n)
2. // d[i] >= 1, 1 <= i < = n are the deadlines.
3. // The jobs are ordered such that p1 >= p2 >= ... >= pn.
4. // J[i] is the ith job in the optimal solution, 1 <= i < = k.
5. // Also, at termination d[J[ i ]] < = d[J[ i + 1]] , 1 <= i < = k.
6. {
7. d[0] := J[0] := 0; // Initialize.
8. J[1] := 1; // Include job 1.
9. k := 1;
10. for i := 2 to n do
11. {
12. // Consider jobs in nonincreasing order of p[i].
13. // Find position for i and check feasibility of insertion.
14. r := k;
15. while ((d[J[r]] > d[i]) and (d[J[r]] ≠ r)) do r := r - 1;
16. if ((d[J[r]] <= d[i]) and (d[i] > r)) then
17. {
18. // Insert i into J[ ].
19. for q := k to (r +1) step -1 do J[q + 1] := J[q];
20. J[r + 1] := i; k := k + 1;
21. }
22. }
23. return k;
24. }
Example: Let n = 5, (p1, p2, p3, p4, p5) = ( 20, 15, 10, 5, 1) and (d1, d2, d3, d4, d5) = ( 2, 2, 1, 3, 3).
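Algorithm JS can be transcribed into Python almost line for line; the sketch below runs it on this instance (a sentinel 0 is kept at index 0 so job numbers stay 1-based):

```python
def js(p, d):
    """Python transcription of Algorithm JS.
    p[i], d[i]: profit and deadline of job i (1-based; index 0 is a
    sentinel with d[0] = 0). Jobs must satisfy p[1] >= p[2] >= ... >= p[n].
    Returns the list J of selected jobs, ordered by deadline."""
    n = len(p) - 1
    J = [0] * (n + 1)            # J[0] = 0 is a sentinel
    k = 1
    J[1] = 1                     # include job 1
    for i in range(2, n + 1):
        # find position r for job i and check feasibility of insertion
        r = k
        while d[J[r]] > d[i] and d[J[r]] != r:
            r -= 1
        if d[J[r]] <= d[i] and d[i] > r:
            for q in range(k, r, -1):   # shift jobs right to insert i
                J[q + 1] = J[q]
            J[r + 1] = i
            k += 1
    return J[1:k + 1]

p = [0, 20, 15, 10, 5, 1]        # sentinel + profits from the example
d = [0, 2, 2, 1, 3, 3]           # sentinel + deadlines
J = js(p, d)
print(J, sum(p[j] for j in J))   # [1, 2, 4] 40
```

For the instance above it selects J = {1, 2, 4}, ordered by deadline, with total profit 40.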
The time complexity of Algorithm JS depends on two parameters: n (the number of jobs)
and s (the number of jobs included in the solution J).
The while loop of line 15 in the algorithm is iterated at most k times, and each iteration
takes Θ(1) time.
If the condition of line 16 is true, then lines 19 and 20 are executed; these lines require
Θ(k - r) time to insert job i.
Hence, the total time for each iteration of the for loop of line 10 is Θ(k). This loop is
iterated (n - 1) times.
If s is the final value of k, i.e., s is the number of jobs in the final solution, then the total
time needed by Algorithm JS is Θ(s·n).
Since s <= n, the worst-case time, as a function of n alone, is Θ(n²).
In addition to the space needed for d, Algorithm JS needs Θ(s) space for J.
The profit values are not needed by Algorithm JS; it is sufficient to know that pi >= pi+1
for 1 <= i < n.
The computing time of Algorithm JS can be reduced from O(n²) to nearly O(n) by using
disjoint set union and find algorithms and a different method to determine the feasibility
of a partial solution.
Spanning Tree : Given an undirected and connected graph G=(V,E), a spanning tree of the
graph G is a tree that spans G (that is, it includes every vertex of G) and is a subgraph
of G (every edge in the tree belongs to G).
The cost of the spanning tree is the sum of the weights of all the edges in the tree.
There can be many spanning trees.
A minimum spanning tree is a spanning tree whose cost is minimum among all the
spanning trees. There can also be many minimum spanning trees.
Applications:
Prim’s Algorithm:
A Greedy method to obtain a minimum-cost spanning tree builds the tree edge by edge.
The next edge to include is chosen according to some optimization criterion.
The simplest such criterion is to choose an edge that results in a minimum increase in the
sum of the costs of the edges so far included.
According to this criterion, the set of edges selected so far must form a tree;
i.e., if A is the set of edges selected so far, then A forms a tree.
The next edge (u, v) to be included in A is a minimum-cost edge not in A with the property
that A ∪ {(u, v)} is also a tree.
Stages in Prim's Algorithm (final minimum cost = 99):

Stage 1: selected edge {1, 6}, cost 10; mincost = 10.
    Candidate edges: {1, 2} = 28, {6, 5} = 25. Min{28, 25} = 25, so {6, 5} is selected next.
Stage 2: selected edge {6, 5}, cost 25; mincost = 10 + 25 = 35.
    Candidate edges for j ∈ {2, 4, 7}: {1, 2} = 28, {5, 4} = 22, {5, 7} = 24.
    Min{28, 22, 24} = 22, so {5, 4} is selected next.
Stage 3: selected edge {5, 4}, cost 22; mincost = 35 + 22 = 57.
    Candidate edges for j ∈ {2, 3, 7}: {1, 2} = 28, {5, 7} = 24, {4, 7} = 18, {4, 3} = 12.
    Min{28, 24, 18, 12} = 12, so {4, 3} is selected next.
Stage 4: selected edge {4, 3}, cost 12; mincost = 57 + 12 = 69.
    Candidate edges for j ∈ {2, 7}: {1, 2} = 28, {3, 2} = 16, {5, 7} = 24, {4, 7} = 18.
    Min{28, 16, 24, 18} = 16, so {3, 2} is selected next.
Stage 5: selected edge {3, 2}, cost 16; mincost = 69 + 16 = 85.
    Candidate edges for j ∈ {7}: {2, 7} = 14, {5, 7} = 24, {4, 7} = 18.
    Min{14, 24, 18} = 14, so {2, 7} is selected next.
Stage 6: selected edge {2, 7}, cost 14; mincost = 85 + 14 = 99. Return mincost = 99.
The Algorithm will start with a tree that includes only a minimum-cost edge of G.
Then, edges are added to this tree one by one.
The next edge (i, j) to be added is such that i is a vertex already included in the tree, j is a
vertex not yet included, and the cost of (i, j), cost[i, j], is minimum among all edges (k, l)
such that vertex k is in the tree and vertex l is not in the tree.
To determine this edge(i, j) efficiently, we associate with each vertex j not yet included in
the tree a value near[j].
The value near[j] is a vertex in the tree such that cost[ j, near[j] ] is minimum among all
choices for near[j].
near[j] = 0 for all vertices j that are already in the tree.
The next edge to include is defined by the vertex j such that near[j] ≠ 0 ( j not already in the
tree) and cost[ j, near[j] ] is minimum.
S.No    Algorithm                                                    Time Complexity
1 Algorithm Prim (E, cost, n, t)
2 // E is the set of edges in G. cost[1:n, 1:n] is the cost
3 // adjacency matrix of an n-vertex graph such that cost[i, j] is
4 // either a positive real number or ∞ if no edge (i, j) exists.
5 // A minimum spanning tree is computed and stored as a
6 // set of edges in the array t[1:n-1, 1:2]. (t[i, 1], t[i, 2]) is
7 // an edge in the minimum-cost spanning tree. The final cost is returned.
8 {
9 Let (k, l) be an edge of minimum cost in E; O ( |E| )
10 mincost := cost [k, l] ; Ɵ (1)
11 t[1, 1] := k; t[1, 2] := l;
12 for i := 1 to n do // Initialise near Ɵ (n)
13 if (cost [i, l] < cost [i, k]) then near [i] := l;
14 else near [i] := k;
15 near [k] := near [l] := 0;
16 for i := 2 to n-1 do O(n²)
17 { // find n – 2 additional edges for t.
18 Let j be an index such that near [j] ≠ 0 and O(n)
19 cost [j, near [j]] is minimum; O(n)
20 t[i, 1] := j; t[i, 2] := near [j];
21 mincost := mincost + cost [j, near [j]];
22 near [j] := 0;
23 for k := 1 to n do // Update near[ ]. O(n)
24 if (( near [k] ≠ 0) and (cost [k, near [k]] > cost [k, j]))
25 then near [k] := j;
26 }
27 return mincost;
28 }
The time complexity of Prim's algorithm is O(n²).
If the set of vertices not yet in the tree is maintained as a red-black tree, the time
complexity of Prim's algorithm becomes O((n + |E|) log n).
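The near[]-array version of Prim's algorithm above can be sketched in Python as follows, using the edge costs from the stage table (0-based vertex v corresponds to vertex v + 1 in the table; -1 plays the role of near[j] = 0; the graph is assumed connected):

```python
import math

INF = math.inf

def prim(cost):
    """Prim's MST over a cost adjacency matrix (INF = no edge).
    Mirrors the near[]-array pseudocode: O(n^2).
    Returns (tree edges, mincost)."""
    n = len(cost)
    # let (k, l) be an edge of minimum cost in E
    k, l = min(((i, j) for i in range(n) for j in range(i + 1, n)),
               key=lambda e: cost[e[0]][e[1]])
    mincost = cost[k][l]
    t = [(k, l)]
    near = [l if cost[i][l] < cost[i][k] else k for i in range(n)]
    near[k] = near[l] = -1              # -1 marks vertices already in the tree
    for _ in range(n - 2):              # find n - 2 additional edges for t
        j = min((v for v in range(n) if near[v] != -1),
                key=lambda v: cost[v][near[v]])
        t.append((j, near[j]))
        mincost += cost[j][near[j]]
        near[j] = -1
        for v in range(n):              # update near[] for remaining vertices
            if near[v] != -1 and cost[v][near[v]] > cost[v][j]:
                near[v] = j
    return t, mincost

# 7-vertex example graph from the stage table
E = {(0, 1): 28, (0, 5): 10, (1, 2): 16, (1, 6): 14, (2, 3): 12,
     (3, 4): 22, (3, 6): 18, (4, 5): 25, (4, 6): 24}
cost = [[INF] * 7 for _ in range(7)]
for (u, v), c in E.items():
    cost[u][v] = cost[v][u] = c
print(prim(cost)[1])   # 99
```

Running it on the example graph selects the same six edges as the stage table, in the same order, and returns mincost = 99.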
Kruskal’s Algorithm:
According to the second optimization criterion, the edges of the graph are considered in
nondecreasing order of cost.
The set t of edges selected so far must be such that it is possible to complete t into a tree;
thus t may not be a tree at all stages of the algorithm.
In fact, t forms a forest, since the set of edges t can be completed into a tree iff there are
no cycles in t.
Example:
No. of Edges covered Edge Cost of the edge
1 {1, 6} 10
2 {3, 4} 12
3 {2, 7} 14
4 {2, 3} 16
5 {4, 5} 22
6 {6, 5} 25
t has (n - 1) edges; stop. mincost = 10 + 12 + 14 + 16 + 22 + 25 = 99.
1. t := ∅;
2. while ((t has less than n - 1 edges) and (E ≠ ∅)) do
3. {
4. Choose an edge (v, w) from E of lowest cost;
5. Delete (v, w) from E;
6. if (v, w) does not create a cycle in t then add (v, w) to t;
7. else discard (v, w);
8. }
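The cycle test in the pseudocode can be implemented with a disjoint-set (union-find) structure; the following is a minimal Python sketch run on the same example graph (0-based vertex v corresponds to vertex v + 1 in the table above):

```python
def kruskal(n, edges):
    """Kruskal's MST using union-find with path compression.
    edges: list of (cost, u, v) with 0-based vertices.
    Returns (tree edges, mincost)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression step
            x = parent[x]
        return x

    t, mincost = [], 0
    for c, u, v in sorted(edges):           # nondecreasing order of cost
        ru, rv = find(u), find(v)
        if ru != rv:                        # (u, v) creates no cycle in t
            parent[ru] = rv
            t.append((u, v))
            mincost += c
            if len(t) == n - 1:             # t has n - 1 edges: stop
                break
    return t, mincost

# same 7-vertex example graph as in Prim's stage table
edges = [(28, 0, 1), (10, 0, 5), (16, 1, 2), (14, 1, 6), (12, 2, 3),
         (22, 3, 4), (18, 3, 6), (25, 4, 5), (24, 4, 6)]
print(kruskal(7, edges)[1])   # 99
```

On this graph the edges are accepted in the order shown in the example table, and mincost = 99.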
Any data structure that supports the operations of search min (or max), insert, and delete
min (or max) is called a priority queue.
Heap: A max (min) heap is a complete binary tree with the property that the value at each
node is at least as large as (as small as) the values at its children (if they exist).
In a max heap, one of the largest elements is at the root of the heap.
If the elements are distinct, then the root contains the largest item.
A max heap can be implemented using an array a[ ].
To insert an element into the heap, one adds it "at the bottom" of the heap and then
compares it with its parent, grandparent, great-grandparent, and so on, until it is less than
or equal to one of these values. Algorithm Insert performs this job.
Inserting a new element takes O(log n) time in the worst case.
To delete the maximum key from the max heap, an Algorithm Adjust is used.
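The insert operation described above can be sketched in Python as follows (the function name is illustrative; the heap is stored in a list with the children of index i at 2i + 1 and 2i + 2):

```python
def heap_insert(a, item):
    """Insert item into a max heap stored in list a.
    Place it at the bottom, then sift it up past parent, grandparent,
    ... until it is less than or equal to one of them."""
    a.append(item)
    i = len(a) - 1
    while i > 0 and a[(i - 1) // 2] < a[i]:
        parent = (i - 1) // 2
        a[i], a[parent] = a[parent], a[i]   # sift the new item up one level
        i = parent
    return a

h = []
for x in [10, 40, 20, 50]:
    heap_insert(h, x)
print(h)   # [50, 40, 20, 10]: the largest element is at the root
```

Each insert follows a root-to-leaf path, so its cost is bounded by the heap's height, O(log n).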