SEMESTER: V
2. Container Loading
3. Knapsack Problem
7. Prim's Algorithm
8. Kruskal's Algorithm
Greedy Method:
The following points characterize the greedy method.
First, we have to find the best option out of the many currently available.
In this method, we decide the output by focusing on the current stage alone; we
do not think about future consequences.
The greedy method may or may not give the best (globally optimal) output.
A greedy algorithm solves problems by making the choice that seems best at
that particular moment. Many optimization problems can be solved using a
greedy algorithm. For some problems that have no efficient exact solution, a
greedy algorithm may still provide a solution that is close to optimal. A
greedy algorithm works if a problem has the following two
properties:
1. Greedy Choice Property: By creating a locally optimal solution we can reach a
globally optimal solution. In other words, by making “greedy” choices we can
obtain an optimal solution.
2. Optimal substructure: Optimal solutions will always contain optimal
subsolutions. In other words, we say that the answers to subproblems of an optimal
solution are optimal.
Examples:
Following are a few examples of Greedy algorithms
Machine scheduling
Fractional Knapsack Problem
Minimum Spanning Tree
Huffman Code
Job Sequencing
Activity Selection Problem
Components of Greedy Algorithm
Greedy algorithms consist of the following five components −
A candidate set − A solution is created with the help of a candidate set.
A selection function − It is used to choose the best candidate that is to be added to
the solution.
A feasibility function − A feasibility function is useful in determining whether a
candidate can be used to contribute to the solution or not.
An objective function − This is used to assign a value to a solution or a partial
solution.
A solution function − A solution function is used to indicate whether a complete
solution has been reached or not.
Areas of Application
The greedy approach is used to solve many problems; a few of them are as
follows:
One of the applications could be finding the shortest path between two vertices
using Dijkstra’s algorithm.
Another is finding the minimal spanning tree in a graph using Prim’s /Kruskal’s
algorithm
Greedy Algorithm:
getOptimal(Item a[], int n)
    Initialize empty result, result = {}
    While (all items are not considered)
        i = SelectAnItem()          // choose the best remaining candidate
        If (feasible(result, i))    // can i extend the solution?
            result = result ∪ {i}
    Return result
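The skeleton above can be made concrete with a small runnable sketch. Greedy
coin change with a canonical coin system is used here purely as an
illustration; the function name and denominations are assumptions, not part of
the original notes:

```python
def get_optimal(items, capacity):
    """Generic greedy loop: repeatedly take the best remaining candidate.

    Illustrated with greedy coin change (works for canonical coin systems):
    'items' are coin denominations, 'capacity' is the amount to pay.
    """
    result = []                      # initialize empty result
    remaining = capacity
    # Selection function: always try the largest denomination first.
    for coin in sorted(items, reverse=True):
        while remaining >= coin:     # feasibility check
            result.append(coin)      # add the candidate to the solution
            remaining -= coin
    return result

print(get_optimal([1, 2, 5, 10], 28))   # [10, 10, 5, 2, 1]
```

Note that for non-canonical coin systems this greedy choice can be suboptimal,
which matches the earlier remark that the greedy method may or may not give the
best output.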
2. CONTAINER LOADING
The greedy algorithm constructs the loading plan of a single container layer by
layer from the bottom up. At the initial stage, the list of available surfaces
contains only the initial surface of size L x W with its initial position at height 0.
At each step, the algorithm picks the lowest usable surface and then
determines, via the procedure select layer, the box type to be packed onto the
surface, the number of boxes, and the rectangular area they are packed onto.
The procedure select layer calculates a layer of boxes of the same type with
the highest evaluation value. It uses a breadth-limited tree search heuristic
to determine the most promising layer, where the breadth differs depending on
the depth level in the tree search. The advantage is that the number of nodes
expanded is polynomial in the maximal depth of the problem, instead of growing
exponentially with the problem size. After packing the specified number of
boxes onto the surface according to the layer arrangement, the surface is
divided into up to three sub-surfaces by the procedure divide surfaces.
Then, the original surface is deleted from the list of available surfaces and the
newly generated sub-surfaces are inserted into the list. Then, the algorithm selects
the new lowest usable surface and repeats the above procedures until no surface is
available or all the boxes have been packed into the container. The algorithm
follows a similar basic framework.
while (there exist usable surfaces) and (not all boxes are packed) do
    set best layer := select layer(list of surfaces, list of box types, depth)
    pack the boxes of best layer onto the lowest usable surface
    divide the surface into sub-surfaces by divide surfaces
    delete the used surface from the list and insert the new sub-surfaces
end while
Given a layer of boxes of the same type arranged by the G4-heuristic, the layer is
always packed in the bottom-left corner of the loading surface.
The divisions are done according to the following criteria, which are similar to
those in [2] and [5]. The primary criterion is to minimize the total unusable area of
the division variant. If none of the remaining boxes can be packed onto a sub-
surface, the area of the sub-surface is unusable. The secondary criterion is to avoid
the creation of long narrow strips.
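The surface-division step can be sketched as follows. The tuple layout and the
single fixed division variant shown here are assumptions for illustration; the
actual procedure compares several division variants and keeps the one
minimizing unusable area:

```python
def divide_surface(surface, l, w, h):
    """Split a loading surface after an l x w layer of height h is packed
    in its bottom-left corner. 'surface' is (x, y, z, L, W): position
    (x, y) at height z, with length L and width W. Returns up to three
    sub-surfaces: the strip to the right of the layer, the strip in front
    of it, and the top of the newly packed layer."""
    x, y, z, L, W = surface
    subs = [
        (x + l, y, z, L - l, W),   # strip to the right of the layer
        (x, y + w, z, l, W - w),   # strip in front of the layer
        (x, y, z + h, l, w),       # top surface of the packed layer
    ]
    # keep only sub-surfaces with positive area
    return [s for s in subs if s[3] > 0 and s[4] > 0]

# A 12 x 10 surface at height 0, after packing an 8 x 6 layer of height 4:
print(divide_surface((0, 0, 0, 12, 10), 8, 6, 4))
```

The newly generated sub-surfaces would then replace the original surface in
the list of available surfaces, exactly as the text describes.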
3. Knapsack Problem
Given a set of items, each with a weight and a value, determine a subset of items to
include in a collection so that the total weight is less than or equal to a given limit
and the total value is as large as possible.
The knapsack problem is a combinatorial optimization problem. It appears as a
subproblem in many, more complex mathematical models of real-world problems.
One general approach to difficult problems is to identify the most restrictive
constraint, ignore the others, solve a knapsack problem, and somehow adjust the
solution to satisfy the ignored constraints.
Knapsack Problem Using Greedy Method: The fundamental idea behind all families
of knapsack problems is the selection of some items, each with a profit and a
weight value, to be packed into one or more knapsacks with limited capacity.
The knapsack problem has two versions:
1. Fractional Knapsack Problem
2. 0/1 Knapsack Problem
In this method, the knapsack is filled so that its maximum capacity is
utilized and the maximum profit can be earned from it. The knapsack problem
using the greedy method is stated as follows:
Given a list of n objects, say {I1, I2, ..., In}, and a knapsack (or bag).
The capacity of the knapsack is M.
Each object Ij has a weight wj and a profit pj.
If a fraction xj (where xj ∈ [0, 1]) of an object Ij is placed into the
knapsack, then a profit of pj·xj is earned.
The problem (or Objective) is to fill the knapsack (up to its maximum capacity M),
maximizing the total profit earned.
Mathematically:
maximize Σ (j = 1 to n) pj·xj
subject to Σ (j = 1 to n) wj·xj ≤ M, with 0 ≤ xj ≤ 1
Note that the value of xj will be any value between 0 and 1 (inclusive). If any
object Ij is completely placed into a knapsack, its value is 1 (xj = 1). If we do not
pick (or select) that object to fill into a knapsack, its value is 0 ( xj = 0). Otherwise,
if we take a fraction of any object, then its value will be any value between 0 and
1.
Knapsack Problem Algorithm Using Greedy Method
A pseudo-code for solving knapsack problems using the greedy method is;
greedy fractional-knapsack (P[1...n], W[1...n], X[1...n], M)
/* P and W contain the profits and weights of the n objects, ordered by
non-increasing ratio P[j]/W[j]; X is the solution vector and M is the
capacity of the knapsack */
{
    For j ← 1 to n do X[j] ← 0
    profit ← 0   // total profit of items filled in the knapsack
    weight ← 0   // total weight of items packed in the knapsack
    j ← 1
    While (weight < M and j ≤ n) do   // M is the knapsack capacity
        If (weight + W[j] ≤ M) then
            X[j] ← 1; weight ← weight + W[j]; profit ← profit + P[j]
        Else
            X[j] ← (M − weight)/W[j]; weight ← M; profit ← profit + P[j]·X[j]
        j ← j + 1
    Return X, profit
}
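The greedy fractional knapsack can be sketched in Python as below. The test
instance P = (25, 24, 15), W = (18, 15, 10), M = 20 is a commonly used
textbook example, assumed here for illustration:

```python
def fractional_knapsack(profits, weights, capacity):
    """Greedy fractional knapsack: take items in decreasing profit/weight
    ratio, splitting the last item if needed. Returns (total profit, x),
    where x[j] in [0, 1] is the fraction taken of object j."""
    n = len(profits)
    # Greedy order: highest profit-per-unit-weight first.
    order = sorted(range(n), key=lambda j: profits[j] / weights[j],
                   reverse=True)
    x = [0.0] * n
    total, remaining = 0.0, capacity
    for j in order:
        if remaining <= 0:
            break
        take = min(weights[j], remaining)   # whole object or a fraction
        x[j] = take / weights[j]
        total += profits[j] * x[j]
        remaining -= take
    return total, x

print(fractional_knapsack([25, 24, 15], [18, 15, 10], 20))
# (31.5, [0.0, 1.0, 0.5])
```

Here object 2 (ratio 1.6) is taken whole, then half of object 3 (ratio 1.5)
fills the remaining capacity, yielding the optimal profit 31.5.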
Tree Vertex Splitting Problem (TVSP)
The network model may not be able to tolerate losses beyond a certain level.
At places where the loss exceeds the tolerance value, boosters have to be
placed. Given a network and a tolerance value δ, the TVSP problem is to
determine an optimal placement of boosters. Boosters can only be placed at the
nodes of the tree.
The delay of a node u is computed as
d(u) = max { d(v) + w(u, v) : v ∈ child(u) }
where w(u, v) is the weight of the edge from u to its child v, d(v) = 0 for a
leaf v, and δ is the tolerance value. A booster is placed at every node u with
d(u) > δ.
d(7) = max{0 + w(4,7)} = 1
d(8) = max{0 + w(4,8)} = 4
d(10) = max{0 + w(6,10)} = 3
d(5) = max{0 + w(3,5)} = 1
d(6) = max{2 + 3, 3 + 3} = 6 > δ → booster at 6
d(2) = max{6 + w(1,2)} = max{6 + 4} = 10 > δ → booster at 2
Note: There is no need to compute the delay for node 1, because the source
only transmits power.
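The delay computation can be sketched as a post-order DFS. The tree, edge
weights, and tolerance value below are a hypothetical small instance, not the
one from the figure in the notes:

```python
def booster_nodes(tree, weight, root, delta):
    """Compute d(u) = max over children v of (d(v) + w(u, v)) bottom-up
    and return the nodes where d(u) > delta (booster placement).
    'tree' maps node -> list of children; leaves have d = 0."""
    d, boosters = {}, []
    def dfs(u):
        d[u] = 0
        for v in tree.get(u, []):
            dfs(v)
            d[u] = max(d[u], d[v] + weight[(u, v)])
        if u != root and d[u] > delta:
            boosters.append(u)
            d[u] = 0          # a booster resets the accumulated delay at u
    dfs(root)
    return boosters

# Hypothetical tree rooted at 1, with tolerance delta = 2:
tree = {1: [2, 3], 2: [4], 3: [5, 6]}
weight = {(1, 2): 4, (1, 3): 2, (2, 4): 2, (3, 5): 1, (3, 6): 3}
print(booster_nodes(tree, weight, root=1, delta=2))   # [3]
```

Here d(3) = max(0+1, 0+3) = 3 > δ, so a booster is placed at node 3; the root
is skipped, matching the note above.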
Formula
Step 1: the source vertex 1 is selected.
s[1] = T    dist[2] = 10
s[2] = F    dist[3] = ∞
s[3] = F    dist[4] = ∞
s[4] = F    dist[5] = ∞
s[5] = F    dist[6] = 30
s[6] = F    dist[7] = ∞
s[7] = F
Kruskal's algorithm to find the minimum cost spanning tree uses the greedy
approach. This algorithm treats the graph as a forest and every node in it as
an individual tree. A tree connects to another if and only if it has the least
cost among all available options and does not violate the MST properties.
To understand Kruskal's algorithm let us consider the following example −
Remove all loops and parallel edges from the given graph.
In case of parallel edges, keep the one which has the least cost associated and
remove all others.
The next step is to create a set of edges and weight, and arrange them in an
ascending order of weightage (cost).
Now we start adding edges to the graph, beginning with the one that has the
least weight. Throughout, we keep checking that the spanning-tree properties
remain intact. If adding an edge would break the spanning-tree property (i.e.
create a cycle), we do not include that edge in the graph.
The least cost is 2, and the edges involved are B,D and D,T. We add them.
Adding them does not violate the spanning-tree properties, so we continue to
the next edge selection.
Next cost is 3, and associated edges are A,C and C,D. We add them again −
The next cost in the table is 4, and we observe that adding it would create a
circuit in the graph, so we discard it.
5. Job Sequencing problem with deadlines
Problem Statement
Solution
Let us consider, a set of n given jobs which are associated with deadlines and
profit is earned, if a job is completed by its deadline. These jobs need to be ordered
in such a way that there is maximum profit.
It may happen that all of the given jobs may not be completed within their
deadlines.
Assume, deadline of ith job Ji is di and the profit received from this job is pi.
Hence, the optimal solution of this algorithm is a feasible solution with maximum
profit.
Thus, D(i) > 0 for 1 ≤ i ≤ n.
Initially, these jobs are ordered according to profit,
i.e. p1 ≥ p2 ≥ p3 ≥ ... ≥ pn.
Algorithm: Job-Sequencing-With-Deadline (D, J, n, k)
D(0) := J(0) := 0
k := 1
J(1) := 1 // means first job is selected
for i = 2 … n do
    r := k
    while D(J(r)) > D(i) and D(J(r)) ≠ r do
        r := r – 1
    if D(J(r)) ≤ D(i) and D(i) > r then
        for l = k … r + 1 by -1 do
            J(l + 1) := J(l)
        J(r + 1) := i
        k := k + 1
Analysis
In this algorithm, we use two nested loops. Hence, the complexity of this
algorithm is O(n²).
Example
Let us consider a set of given jobs as shown in the following table. We have to find
a sequence of jobs, which will be completed within their deadlines and will give
maximum profit. Each job is associated with a deadline and profit.
Job J1 J2 J3 J4 J5
Deadline 2 1 3 2 1
Profit 60 100 20 40 20
Solution
To solve this problem, the given jobs are sorted according to their profit in a
descending order. Hence, after sorting, the jobs are ordered as shown in the
following table.
Job J2 J1 J4 J3 J5
Deadline 1 2 2 3 1
Profit 100 60 40 20 20
From this set of jobs, first we select J2, as it can be completed within its deadline
and contributes maximum profit.
Next, J1 is selected as it gives more profit compared to J4.
In the next time slot, J4 cannot be selected as its deadline is over; hence J3
is selected, as it executes within its deadline.
The job J5 is discarded as it cannot be executed within its deadline.
Thus, the solution is the sequence of jobs (J2, J1, J3), which are being executed
within their deadline and gives maximum profit.
Total profit of this sequence is 100 + 60 + 20 = 180.
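The selection described above can be sketched with a latest-free-slot greedy.
The function name and the (name, deadline, profit) tuple layout are assumptions
for illustration; the job data is taken from the table in the text:

```python
def job_sequencing(jobs):
    """Greedy job sequencing with deadlines: sort jobs by profit
    (descending) and place each job in the latest free time slot on or
    before its deadline. 'jobs' is a list of (name, deadline, profit)."""
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)
    max_deadline = max(d for _, d, _ in jobs)
    slot = [None] * (max_deadline + 1)     # slot[t]: job run in slot t
    total = 0
    for name, deadline, profit in jobs:
        # Try the latest slot first so earlier slots stay free.
        for t in range(min(deadline, max_deadline), 0, -1):
            if slot[t] is None:
                slot[t] = name
                total += profit
                break
    return [j for j in slot[1:] if j is not None], total

jobs = [("J1", 2, 60), ("J2", 1, 100), ("J3", 3, 20),
        ("J4", 2, 40), ("J5", 1, 20)]
print(job_sequencing(jobs))   # (['J2', 'J1', 'J3'], 180)
```

This reproduces the worked solution: sequence (J2, J1, J3) with total profit
180, while J4 and J5 are discarded.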
7. Kruskal's Algorithm:
An algorithm to construct a Minimum Spanning Tree for a connected weighted
graph. It is a greedy algorithm: the greedy choice is to add the
smallest-weight edge that does not cause a cycle in the MST constructed so far.
Analysis: Where E is the number of edges in the graph and V is the number of
vertices, Kruskal's Algorithm can be shown to run in O (E log E) time, or simply,
O (E log V) time, all with simple data structures. These running times are
equivalent because:
o E is at most V², and log V² = 2 × log V is O(log V).
o If we ignore isolated vertices, each of which forms its own component of the
minimum spanning tree, then V ≤ 2E, so log V is O(log E).
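Kruskal's greedy choice can be sketched with a union-find structure. The small
edge list at the bottom is an assumed example graph, not the one in the figure
that follows:

```python
def kruskal(n, edges):
    """Kruskal's MST: consider edges in non-decreasing weight order and
    add each edge whose endpoints lie in different trees (checked with
    union-find). 'edges' is a list of (weight, u, v); vertices are 0..n-1."""
    parent = list(range(n))
    def find(x):                       # find root with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst, total = [], 0
    for w, u, v in sorted(edges):      # sort by non-decreasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # edge does not cause a cycle
            parent[ru] = rv            # merge the two trees
            mst.append((u, v, w))
            total += w
    return mst, total

# Assumed 4-vertex example graph:
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
mst, total = kruskal(4, edges)
print(mst, total)   # 3 edges, total cost 6
```

With a sort-dominated cost of O(E log E) and near-constant-time union-find
operations, this matches the running time discussed above.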
For Example: Find the Minimum Spanning Tree of the following graph using
Kruskal's algorithm.
Solution: First we initialize the set A to the empty set and create |V| trees,
one containing each vertex, with the MAKE-SET procedure. Then we sort the
edges in E into non-decreasing order by weight.
Step 4: Now, edge (h, i). Both h and i vertices are in the same set. Thus it creates a
cycle. So this edge is discarded.
Then edge (c, d), (b, c), (a, h), (d, e), (e, f) are considered, and the forest
becomes.
Step 5: In the (e, f) edge, both endpoints e and f are in the same tree, so
this edge is discarded. Then the (b, h) edge is considered; it also creates a
cycle, so it is discarded.
Step 6: After that, edge (d, f) is added, and the final spanning tree is shown
in dark lines.
Step 7: This is the required Minimum Spanning Tree, because it contains all
9 vertices and (9 - 1) = 8 edges.
Prim's Algorithm:
At every step, Prim's algorithm considers all the edges leaving the tree built
so far and picks the minimum-weight one. After picking the edge, it moves the
other endpoint of the edge into the set containing the MST.
Example: Generate minimum cost spanning tree for the following graph using
Prim's algorithm.
Solution: In Prim's algorithm, first we initialize the priority queue Q to
contain all the vertices, and set the key of each vertex to ∞ except for the
root, whose key is set to 0. Suppose vertex 0 is the root, i.e., r. By the
EXTRACT-MIN (Q) procedure, now u = r and Adj[u] = {5, 1}.
Removing u from set Q and adds it to set V - Q of vertices in the tree. Now, update
the key and π fields of every vertex v adjacent to u but not in a tree.
π[3] = 4, π[6] = 4
π[2] = 3, π[6] = 3
u = EXTRACT_MIN(Q)
u = 2, because key[2] < key[6] (12 < 18)
Now the current vertex is 2
Adj[2] = {3, 1}
3 is already in the tree
Taking 1: key[1] = 28, w(2, 1) = 16
w(2, 1) < key[1], so key[1] ← 16 and π[1] = 2
Now EXTRACT_MIN(Q) removes 1, because key[1] = 16 is minimum.
π[6] = 1
Now all the vertices have been spanned. Using the above table, we get the
Minimum Spanning Tree:
0 → 5 → 4 → 3 → 2 → 1 → 6
[Because π[5] = 0, π[4] = 5, π[3] = 4, π[2] = 3, π[1] = 2, π[6] = 1]
Total Cost = 10 + 25 + 22 + 12 + 16 + 14 = 99
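Prim's algorithm is commonly implemented with a min-heap instead of the
EXTRACT-MIN table above. The adjacency list below is an assumed 3-vertex
example, not the 7-vertex graph from the figure:

```python
import heapq

def prim(adj, root=0):
    """Prim's MST with a min-heap: grow the tree from 'root', always
    adding the minimum-weight edge leaving the tree. 'adj' maps each
    vertex to a list of (neighbour, weight) pairs. Returns (pi, total),
    where pi[u] is the parent of u in the spanning tree."""
    visited = set()
    heap = [(0, root, None)]           # (key, vertex, parent)
    pi, total = {}, 0
    while heap:
        key, u, parent = heapq.heappop(heap)   # EXTRACT-MIN
        if u in visited:
            continue                   # stale heap entry, skip it
        visited.add(u)
        pi[u] = parent
        total += key
        for v, w in adj[u]:
            if v not in visited:       # candidate edge leaving the tree
                heapq.heappush(heap, (w, v, u))
    return pi, total

adj = {0: [(1, 2), (2, 3)], 1: [(0, 2), (2, 1)], 2: [(0, 3), (1, 1)]}
pi, total = prim(adj)
print(pi, total)   # {0: None, 1: 0, 2: 1} 3
```

The heap plays the role of the priority queue Q, and pi corresponds to the
π table used in the worked example.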
Optimal Storage on Tapes
Given n programs P1, P2, …, Pn of lengths L1, L2, …, Ln respectively, store
them on a tape of length L such that the Mean Retrieval Time (MRT) is
minimized. The retrieval time of the jth program is the sum of the lengths of
the first j programs on the tape. Let Tj be the time to retrieve program Pj.
The retrieval time of Pj is computed as
Tj = L1 + L2 + … + Lj
The mean retrieval time of n programs is the average time required to retrieve
any program. It is required to store the programs in an order such that their
Mean Retrieval Time is minimum. MRT is computed as
MRT = (1/n) · (T1 + T2 + … + Tn)
Storage on Single Tape
In this case, we have to find the permutation of the program order which
minimizes the MRT after storing all programs on single tape only.
There are many permutations of programs. Each gives a different MRT.
Consider three programs (P1, P2, P3) with a length of (L1, L2, L3) = (5, 10,
2).
Let’s find the MRT for different permutations. 6 permutations are possible for
3 items. The Mean Retrieval Time for each permutation is listed in the
following table.
Ordering     Mean Retrieval Time (MRT)
P1, P2, P3   ( (5) + (5 + 10) + (5 + 10 + 2) ) / 3 = 37 / 3
P1, P3, P2   ( (5) + (5 + 2) + (5 + 2 + 10) ) / 3 = 29 / 3
P2, P1, P3   ( (10) + (10 + 5) + (10 + 5 + 2) ) / 3 = 42 / 3
P2, P3, P1   ( (10) + (10 + 2) + (10 + 2 + 5) ) / 3 = 39 / 3
P3, P1, P2   ( (2) + (2 + 5) + (2 + 5 + 10) ) / 3 = 26 / 3
P3, P2, P1   ( (2) + (2 + 10) + (2 + 10 + 5) ) / 3 = 31 / 3
It should be observed from the above table that the minimum MRT is 26/3, which
is achieved by storing the programs in ascending order of their length.
Thus, the greedy algorithm stores the programs on the tape in non-decreasing
order of their length, which ensures the minimum MRT.
Total ← 0
for i ← 1 to n do            // sum the retrieval times of all n programs
    for j ← 1 to i do
        Total ← Total + L[j] // retrieval time of Pi is L[1] + … + L[i]
    end
end
MRT ← Total / n
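The greedy ordering and the MRT computation above can be combined into one
short sketch; the function name is an assumption for illustration:

```python
def mean_retrieval_time(lengths):
    """Greedy optimal storage on a single tape: store programs in
    non-decreasing order of length, then compute
    MRT = (1/n) * sum over j of (L[1] + ... + L[j])."""
    order = sorted(lengths)            # greedy: shortest programs first
    total, prefix = 0, 0
    for length in order:
        prefix += length               # retrieval time of this program
        total += prefix                # accumulate T1 + T2 + ... + Tn
    return order, total / len(order)

order, mrt = mean_retrieval_time([5, 10, 2])
print(order, mrt)   # [2, 5, 10] and 26/3 ≈ 8.667
```

For the three programs from the example this reproduces the optimal ordering
(P3, P1, P2) with MRT = 26/3.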
Optimal Merge Pattern
Example
Let us consider the given files f1, f2, f3, f4 and f5 with 20, 30, 10, 5 and
30 elements respectively.
If merge operations are performed according to the provided sequence, then
M1 = merge f1 and f2 => 20 + 30 = 50
M2 = merge M1 and f3 => 50 + 10 = 60
M3 = merge M2 and f4 => 60 + 5 = 65
M4 = merge M3 and f5 => 65 + 30 = 95
Hence, the total number of operations is
50 + 60 + 65 + 95 = 270
Now, the question arises: is there any better solution?
Sorting the numbers according to their size in an ascending order, we get the
following sequence −
f4, f3, f1, f2, f5
Hence, merge operations can be performed on this sequence
M1 = merge f4 and f3 => 5 + 10 = 15
M2 = merge M1 and f1 => 15 + 20 = 35
M3 = merge M2 and f2 => 35 + 30 = 65
M4 = merge M3 and f5 => 65 + 30 = 95
Therefore, the total number of operations is
15 + 35 + 65 + 95 = 210
Obviously, this is better than the previous one.
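The merge pattern is typically implemented with a min-heap, always merging the
two smallest files currently available (including files produced by earlier
merges). Note that this true optimal merge pattern yields 205 for this
instance, slightly better than the 210 obtained by merging left-to-right along
the sorted sequence:

```python
import heapq

def optimal_merge(sizes):
    """Optimal merge pattern: repeatedly merge the two smallest files
    (Huffman-style) and return the total number of record moves."""
    heap = list(sizes)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)        # two smallest files
        b = heapq.heappop(heap)
        total += a + b                 # cost of this merge operation
        heapq.heappush(heap, a + b)    # merged file goes back in the pool
    return total

print(optimal_merge([20, 30, 10, 5, 30]))   # 205
```

Here the merges are 5+10 = 15, 15+20 = 35, 30+30 = 60, 35+60 = 95, for a total
of 15 + 35 + 60 + 95 = 205.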
In this context, we are now going to solve the problem using this algorithm.
The merge tree is built step by step (Initial Set, then Steps 1 through 4 in
the accompanying figures), always combining the two smallest files first.
A graph is a widely accepted data structure for representing a distance map.
The distances between cities are effectively represented using a graph.
Dijkstra proposed an efficient way to find the single-source shortest paths in
a weighted graph. For a given source vertex s, the algorithm finds the
shortest path to every other vertex v in the graph.
Assumption : Weight of all edges is non-negative.
Steps of Dijkstra’s algorithm are explained here:
1. Initialize the distance of the source vertex to zero and of all remaining
vertices to infinity.
2. Set the source node as the current node and put all remaining nodes in the
unvisited vertex list. Compute the tentative distance of every immediate
neighbour of the current node.
3. If the newly computed value is smaller than the old value, then update it.
For example, if C is the current node, whose distance from source S is
dist(S, C) = 5, and a neighbour V lies at distance 2 from C, then the
tentative distance of V via C is 5 + 2 = 7, which replaces dist(S, V) if it is
smaller than the current value.
4. Mark the current node as visited and remove it from the unvisited vertex
list. Select the unvisited node with the smallest tentative distance as the
new current node.
5. Stop when the destination node is visited or when the unvisited vertex
list becomes empty.
Algorithm DIJKSTRA_SHORTEST_PATH(G, s, t)
// s is the source vertex, t is the target vertex
// π[u] stores the parent / previous node of u
// V is the set of vertices in graph G
for each u in V do: dist[u] ← ∞, π[u] ← NIL
dist[s] ← 0, Q ← V
while Q is not empty do
    u ← vertex in Q with minimum dist[u]; remove u from Q
    for each neighbour v of u do
        if dist[u] + w(u, v) < dist[v] then
            dist[v] ← dist[u] + w(u, v); π[v] ← u
end while
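The algorithm is commonly implemented with a min-heap in place of the linear
scan for the minimum dist[u]. The tiny adjacency list below covers only the
first relaxations of the worked example (A's neighbours B, E, F, plus edge
B→C); it is a partial sketch, not the full graph from the figure:

```python
import heapq

def dijkstra(adj, s):
    """Dijkstra's single-source shortest paths with a min-heap.
    'adj' maps vertex -> list of (neighbour, weight); all weights must be
    non-negative. Returns (dist, pi) dictionaries."""
    dist = {s: 0}
    pi = {s: None}
    heap = [(0, s)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue                   # stale heap entry, skip it
        done.add(u)
        for v, w in adj.get(u, []):
            if v not in dist or d + w < dist[v]:
                dist[v] = d + w        # relax edge (u, v)
                pi[v] = u
                heapq.heappush(heap, (dist[v], v))
    return dist, pi

adj = {'A': [('B', 1), ('E', 4), ('F', 8)], 'B': [('C', 2)]}
dist, pi = dijkstra(adj, 'A')
print(dist)   # {'A': 0, 'B': 1, 'E': 4, 'F': 8, 'C': 3}
```

These values match the first two iterations of the table below: B, E and F are
relaxed from A, and then C is relaxed from B with dist[C] = 1 + 2 = 3.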
1. Draw a table showing the intermediate distance values of all the nodes at each
iteration of the algorithm.
2. Show the final shortest path tree.
Solution:
Here, source vertex is A.
Initialization:
dist[source] = 0 ⇒ dist[A] = 0
Vertex u A B C D E F G H
dist[u] 0 ∞ ∞ ∞ ∞ ∞ ∞ ∞
π[u] NIL NIL NIL NIL NIL NIL NIL NIL
Iteration 1:
u = unprocessed vertex in Q having minimum dist[u] = A
Adjacent[A] = {B, E, F}
dist[B] = dist[A] + w(A, B) = 0 + 1 = 1
dist[E] = dist[A] + w(A, E) = 0 + 4 = 4
dist[F] = dist[A] + w(A, F) = 0 + 8 = 8
Vertex u A B C D E F G H
dist[u] 0 1 ∞ ∞ 4 8 ∞ ∞
π[u] NIL A NIL NIL A A NIL NIL
Iteration 2:
u = unprocessed vertex in Q having minimum dist[u] = B
Adjacent[B] = {C, F, G}
dist[C] = dist[B] + w(B, C) = 1 + 2 = 3
dist[F] = dist[B] + w(B, F) = 1 + 6 = 7 (smaller than 8, so updated)
dist[G] = dist[B] + w(B, G) = 1 + 6 = 7
Vertex u A B C D E F G H
dist[u] 0 1 3 ∞ 4 7 7 ∞
π[u] NIL A B NIL A B B NIL
Iteration 3:
u = unprocessed vertex in Q having minimum dist[u] = C
Adjacent[C] = {D, G}
dist[D] = dist[C] + w(C, D) = 3 + 1 = 4
dist[G] = dist[C] + w(C, G) = 3 + 2 = 5 (smaller than 7, so updated)
Vertex u A B C D E F G H
dist[u] 0 1 3 4 4 7 5 ∞
π[u] NIL A B C A B C NIL
Iteration 4:
u = unprocessed vertex in Q having minimum dist[u] = E
Adjacent[E] = {F}
dist[E] + w(E, F) = 4 + 5 = 9, which is not smaller than 7, so dist[F] is not
updated
Vertex u A B C D E F G H
dist[u] 0 1 3 4 4 7 5 ∞
π[u] NIL A B C A B C NIL
Iteration 5:
u = unprocessed vertex in Q having minimum dist[u] = D
Adjacent[D] = {G, H}
val[G] = dist[D] + weight(D, G) = 4 + 1 = 5
val[H] = dist[D] + weight(D, H) = 4 + 4 = 8
Vertex u A B C D E F G H
dist[u] 0 1 3 4 4 7 5 8
π [u] NIL A B C A B D D
Iteration 6:
u = unprocessed vertex in Q having minimum dist[u] = G
Adjacent[G] = { F, H }
dist[F] = dist[G] + w(G, F) = 5 + 1 = 6 (smaller than 7, so updated)
dist[H] = dist[G] + w(G, H) = 5 + 1 = 6 (smaller than 8, so updated)
Vertex u A B C D E F G H
dist[u] 0 1 3 4 4 6 5 6
π [u] NIL A B C A G C G
Iteration 7:
u = unprocessed vertex in Q having minimum dist[u] = F
Adjacent[F] = { }
Vertex u A B C D E F G H
dist[u] 0 1 3 4 4 6 5 6
π[u] NIL A B C A G C G
Iteration 8:
u = unprocessed vertex in Q having minimum dist[u] = H
Adjacent[H] = { }
Vertex u A B C D E F G H
dist[u] 0 1 3 4 4 6 5 6
π[u] NIL A B C A G C G
We can easily derive the shortest path tree for the given graph from the above
table. In the table, π[u] indicates the parent node of vertex u. The shortest
path tree is shown in the following figure.