
ANALYSIS OF ALGORITHMS

CHAPTER THREE: GREEDY MODEL

INTRODUCTION
THE GREEDY METHOD
 A problem with N inputs will have some constraints; any subset of the inputs that
satisfies these constraints is called a feasible solution.
 A feasible solution that either maximizes or minimizes a given objective function is
called an optimal solution.

 The greedy method suggests that one can devise an algorithm which works in stages,
considering one input at a time. At each stage, a decision is made regarding whether
or not a particular input is in an optimal solution.
 This is done by considering the inputs in an order determined by some selection
procedure. If the inclusion of the next input into the partially constructed optimal
solution will result in an infeasible solution, then this input is not added to the partial
solution.
 The selection procedure itself is based on some optimization measure. This measure
may or may not be the objective function.
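The staged process just described can be sketched as a generic Python skeleton (illustrative only; `feasible` and `measure` are hypothetical callables standing for the problem's constraint check and optimization measure, and are not part of the text):

```python
def greedy_select(inputs, feasible, measure):
    """Generic greedy skeleton. `feasible(subset)` tests the problem's
    constraints; `measure(x)` is the optimization measure used by the
    selection procedure. Inputs are considered one at a time, in order
    of decreasing measure, and an input is kept only if the partial
    solution stays feasible; rejected inputs are never reconsidered."""
    solution = []
    for x in sorted(inputs, key=measure, reverse=True):
        if feasible(solution + [x]):
            solution.append(x)
    return solution
```

For example, selecting numbers subject to a capacity constraint, `greedy_select([4, 7, 2, 5], lambda s: sum(s) <= 10, lambda x: x)` keeps 7, then 2.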

JOB SCHEDULING WITH DEADLINES


In this problem, a number of jobs with their profits and deadlines are given, and we
have to find a sequence of jobs, each completed within its deadline, that yields the
maximum total profit.

Chapter 3 – Greedy Model Page 1


Points To remember:
 To complete a job, one has to process the job on a machine for one unit of time.
 Only one machine is available for processing jobs.
 A feasible solution for this problem is a subset J of jobs such that each job in
this subset can be completed by its deadline.
 If we select a job at a given time,
o Since only one job can be processed on a single machine, the other jobs have to
wait until that job is completed and the machine becomes free.
o So the waiting time plus the processing time of a job should be less than or
equal to the deadline of the job.
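These rules lead to the standard greedy procedure: consider jobs in nonincreasing order of profit and assign each to the latest free unit-time slot at or before its deadline. A minimal Python sketch (the job names and data below are illustrative, not from the text):

```python
def job_sequencing(jobs):
    """Greedy job sequencing with deadlines. `jobs` is a list of
    (name, deadline, profit) triples; each job takes one unit of time
    and only one machine is available. Jobs are considered in order of
    decreasing profit (the optimization measure), and each is placed in
    the latest free time slot on or before its deadline; if no slot is
    free, including the job would be infeasible, so it is skipped."""
    max_deadline = max(d for _, d, _ in jobs)
    slots = [None] * (max_deadline + 1)        # slots[1..max_deadline]
    for name, deadline, profit in sorted(jobs, key=lambda j: j[2], reverse=True):
        for t in range(deadline, 0, -1):       # latest free slot first
            if slots[t] is None:
                slots[t] = (name, profit)
                break
    scheduled = [s for s in slots[1:] if s is not None]
    order = [name for name, _ in scheduled]
    total = sum(p for _, p in scheduled)
    return order, total
```

For instance, with jobs ('J1', 2, 100), ('J2', 1, 19), ('J3', 2, 27), ('J4', 1, 25), the procedure schedules J3 then J1 for a total profit of 127.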



OPTIMAL MERGE PATTERN
 Two sorted files containing n and m records respectively could be merged together
to obtain one sorted file in time O(n + m).
 When more than two sorted files are to be merged together, the merge can be
accomplished by repeatedly merging sorted files in pairs.
 Thus, if files X1, X2, X3 and X4 are to be merged, we could first merge X1 and X2 to
get a file Y1. Then we could merge Y1 and X3 to get Y2. Finally, Y2 and X4 could
be merged to obtain the desired sorted file.
 Alternatively, we could first merge X1 and X2 getting Y1, then merge X3 and X4
getting Y2, and finally merge Y1 and Y2 to get the desired sorted file.



 Merge patterns such as the one just described will be referred to as 2-way
merge patterns. 2-way merge patterns may be represented by binary merge trees.
 The leaf nodes are drawn as squares and represent the given files. These nodes
will be called external nodes. The remaining nodes are drawn circular and are called
internal nodes.
 Each internal node has exactly two children and it represents the file obtained by
merging the files represented by its two children. The number in each node is the
length (i.e., the number of records) of the file represented by that node.
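The greedy rule that yields an optimal 2-way merge pattern is to always merge the two shortest files next. A minimal sketch using the standard-library heap (illustrative, not from the text):

```python
import heapq

def optimal_merge_cost(lengths):
    """Total record moves for an optimal 2-way merge pattern: always
    merge the two smallest files next. The result equals the weighted
    external path length of the optimal binary merge tree."""
    heap = list(lengths)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)      # two smallest files
        b = heapq.heappop(heap)
        total += a + b               # merging them moves a + b records
        heapq.heappush(heap, a + b)  # the merged file rejoins the pool
    return total
```

For five files of lengths 20, 30, 10, 5 and 30, the total comes to 205 record moves.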



HUFFMAN CODES
 Another application of binary trees with minimal weighted external path length is to
obtain an optimal set of codes for messages M1, . . . , Mn+1.
 Each code is a binary string which will be used for transmission of the corresponding
message. At the receiving end the code will be decoded using a decode tree.
 A decode tree is a binary tree in which external nodes represent messages. The
binary bits in the code word for a message determine the branching needed at each
level of the decode tree to reach the correct external node.
 For example, if we interpret a zero as a left branch and a one as a right branch, then
the decode tree of Figure 4.5 corresponds to codes 000, 001, 01, and 1 for messages
M1, M2, M3 and M4 respectively. These codes are called Huffman codes.
 The cost of decoding a code word is proportional to the number of bits in the code.
This number is equal to the distance of the corresponding external node from the
root node.
 The expected decode time is minimized by choosing code words resulting in a
decode tree with minimal weighted external path length. Hence the code which
minimizes expected decode time also minimizes the expected length of a message.
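A small Python sketch of Huffman's construction (the greedy merging itself is standard; representing subtrees as partial code tables in a heap is an illustrative choice, not from the text). It repeatedly merges the two lightest subtrees, prefixing 0 on the left subtree's codes and 1 on the right's, matching the decode-tree convention above:

```python
import heapq

def huffman_codes(freqs):
    """Build Huffman codes from message weights (a dict name -> weight).
    Repeatedly merge the two lightest subtrees; following the decode-tree
    convention above, 0 means a left branch and 1 a right branch."""
    # Heap entries: (weight, tiebreak id, partial code table).
    heap = [(w, i, {name: ''}) for i, (name, w) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        merged = {m: '0' + c for m, c in left.items()}
        merged.update({m: '1' + c for m, c in right.items()})
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]
```

The resulting code word lengths equal the external-node depths, so the weighted sum of lengths is the weighted external path length of the tree.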



SINGLE SOURCE SHORTEST PATH
 In this section, we consider the single-source shortest-paths problem: for a given
vertex called the source in a weighted connected graph, find shortest paths to all its
other vertices.
 The single-source shortest-paths problem asks for a family of paths, each leading
from the source to a different vertex in the graph, though some paths may, of course,
have edges in common.
 Multitudes of less obvious applications include finding shortest paths in social
networks, speech recognition, document formatting, robotics, compilers, and airline
crew scheduling.
 In the world of entertainment, one can mention path finding in video games and
finding best solutions to puzzles using their state-space.
 This algorithm is applicable to undirected and directed graphs with nonnegative
weights only.
 Dijkstra’s algorithm finds the shortest paths to a graph’s vertices in order of their
distance from a given source. First, it finds the shortest path from the source to a
vertex nearest to it, then to a second nearest, and so on.
 In general, before its ith iteration commences, the algorithm has already identified
the shortest paths to i − 1 other vertices nearest to the source. These vertices, the
source, and the edges of the shortest paths leading to them from the source form a
subtree Ti of the given graph (Figure 9.10).
 Since all the edge weights are nonnegative, the next vertex nearest to the source can
be found among the vertices adjacent to the vertices of Ti .
 The set of vertices adjacent to the vertices in Ti can be referred to as “fringe
vertices”; they are the candidates from which Dijkstra’s algorithm selects the next
vertex nearest to the source.
 To identify the ith nearest vertex, the algorithm computes, for every fringe vertex u,
the sum of the distance to the nearest tree vertex v (given by the weight of the edge
(v, u)) and the length dv of the shortest path from the source to v (previously
determined by the algorithm) and then selects the vertex with the smallest such sum.
The fact that it suffices to compare the lengths of such special paths is the central
insight of Dijkstra’s algorithm.
 To facilitate the algorithm’s operations, we label each vertex with two labels.



 The numeric label d indicates the length of the shortest path from the source to this
vertex found by the algorithm so far; when a vertex is added to the tree, d indicates
the length of the shortest path from the source to that vertex.
 The other label indicates the name of the next-to-last vertex on such a path, i.e., the
parent of the vertex in the tree being constructed. With such labeling, finding the
next nearest vertex u∗ becomes a simple task of finding a fringe vertex with the
smallest d value. Ties can be broken arbitrarily. After we have identified a vertex u∗
to be added to the tree, we need to perform two operations:
 Move u∗ from the fringe to the set of tree vertices.
 For each remaining fringe vertex u that is connected to u∗ by an edge of weight
w(u∗, u) such that du∗ + w(u∗, u) < du, update the labels of u by u∗ and
du∗ + w(u∗, u), respectively.

Figure 9.11 demonstrates the application of Dijkstra’s algorithm to a specific graph.

ALGORITHM Dijkstra(G, s)
//Dijkstra’s algorithm for single-source shortest paths
//Input: A weighted connected graph G = (V, E) with nonnegative weights
//       and its vertex s
//Output: The length dv of a shortest path from s to v
//        and its penultimate vertex pv for every vertex v in V
Initialize(Q) //initialize priority queue to empty
for every vertex v in V
    dv ← ∞; pv ← null
    Insert(Q, v, dv) //initialize vertex priority in the priority queue
ds ← 0; Decrease(Q, s, ds) //update priority of s with ds
VT ← ∅
for i ← 0 to |V| − 1 do
    u∗ ← DeleteMin(Q) //delete the minimum priority element
    VT ← VT ∪ {u∗}
    for every vertex u in V − VT that is adjacent to u∗ do
        if du∗ + w(u∗, u) < du
            du ← du∗ + w(u∗, u); pu ← u∗
            Decrease(Q, u, du)
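The pseudocode can be run as the following Python sketch, which uses the standard-library binary heap as the priority queue; since `heapq` has no Decrease operation, this version pushes duplicate queue entries and discards stale ones on removal (a common substitution, not part of the pseudocode above):

```python
import heapq

def dijkstra(graph, s):
    """Dijkstra's single-source shortest paths on an adjacency-dict
    graph {u: {v: weight, ...}} with nonnegative weights. Returns
    (dist, parent), where parent[v] is the penultimate vertex pv on a
    shortest path from s to v."""
    dist = {v: float('inf') for v in graph}
    parent = {v: None for v in graph}
    dist[s] = 0
    pq = [(0, s)]                    # priority queue keyed on d labels
    visited = set()                  # the tree vertices VT
    while pq:
        d, u = heapq.heappop(pq)     # DeleteMin
        if u in visited:
            continue                 # stale entry; u already in the tree
        visited.add(u)
        for v, w in graph[u].items():
            if d + w < dist[v]:      # du* + w(u*, u) < du
                dist[v] = d + w      # update the fringe vertex's labels
                parent[v] = u
                heapq.heappush(pq, (dist[v], v))
    return dist, parent
```

The sample graph in the test below is an illustrative assumption, not the figure from the text.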

Example



MINIMUM SPANNING TREES
PRIM’S ALGORITHM
 A spanning tree of a connected graph is its connected acyclic subgraph (i.e., a
tree) that contains all the vertices of the graph.
 A minimum spanning tree of a weighted connected graph is its spanning tree of
the smallest weight, where the weight of a tree is defined as the sum of the
weights on all its edges. The minimum spanning tree problem is the problem of
finding a minimum spanning tree for a given weighted connected graph.
There are two serious difficulties in constructing a minimum spanning tree by
exhaustive search:
 1. The number of spanning trees grows exponentially with the graph size (at least
for dense graphs).
 2. Generating all spanning trees for a given graph is not easy.



 Prim’s algorithm constructs a minimum spanning tree through a sequence of
expanding subtrees. The initial subtree in such a sequence consists of a single vertex
selected arbitrarily from the set V of the graph’s vertices.
 On each iteration, we expand the current tree in the greedy manner by simply
attaching to it the nearest vertex not in that tree. The algorithm stops after all the
graph’s vertices have been included in the tree being constructed.
 Since the algorithm expands a tree by exactly one vertex on each of its iterations, the
total number of such iterations is n-1, where n is the number of vertices in the graph.
 The tree generated by the algorithm is obtained as the set of edges used for the tree
expansions.

ALGORITHM

 The nature of Prim’s algorithm makes it necessary to provide each vertex not in the
current tree with the information about the shortest edge connecting the vertex to a
tree vertex.
 We can provide such information by attaching two labels to a vertex: the name of the
nearest tree vertex and the length (the weight) of the corresponding edge. Vertices
that are not adjacent to any of the tree vertices can be given a label indicating their
infinite distance to the tree vertices and a null label for the name of the nearest tree
vertex.
 With such labels, finding the next vertex to be added to the current tree T = (VT, ET)
becomes a simple task of finding a vertex with the smallest distance label in the set
V − VT. Ties can be broken arbitrarily.
 After we have identified a vertex u* to be added to the tree, we need to perform two
operations:
 Move u* from the set V − VT to the set of tree vertices VT.
 For each remaining vertex u in V − VT that is connected to u* by a shorter edge
than u’s current distance label, update its labels by u* and the weight of the
edge between u* and u, respectively.
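The two operations above can be sketched in runnable Python (illustrative, not from the text); fringe candidates are kept in a standard-library heap, and stale entries are discarded on removal rather than having labels updated in place:

```python
import heapq

def prim(graph, start):
    """Prim's algorithm on an adjacency-dict graph {u: {v: weight, ...}}.
    Grows a single tree from `start`, on each iteration attaching the
    nearest non-tree vertex; returns the list of tree edges (u, v, w)."""
    in_tree = {start}
    tree_edges = []
    # Fringe entries: (edge weight, tree vertex, fringe vertex).
    fringe = [(w, start, v) for v, w in graph[start].items()]
    heapq.heapify(fringe)
    while fringe and len(in_tree) < len(graph):
        w, u, v = heapq.heappop(fringe)
        if v in in_tree:
            continue                 # stale entry; v was attached earlier
        in_tree.add(v)
        tree_edges.append((u, v, w))
        for x, wx in graph[v].items():
            if x not in in_tree:
                heapq.heappush(fringe, (wx, v, x))
    return tree_edges
```

The algorithm performs exactly n − 1 attachments, so the returned list has |V| − 1 edges.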

The following figure demonstrates the application of Prim’s algorithm to a specific
graph.



Fig. Application of Prim’s algorithm

KRUSKAL’S ALGORITHM
 Kruskal’s algorithm looks at a minimum spanning tree of a weighted connected
graph G = (V, E) as an acyclic subgraph with |V| − 1 edges for which the sum of the
edge weights is the smallest.
 Consequently, the algorithm constructs a minimum spanning tree as an expanding
sequence of subgraphs that are always acyclic but are not necessarily connected on
the intermediate stages of the algorithm.
 The algorithm begins by sorting the graph’s edges in nondecreasing order of their
weights. Then, starting with the empty subgraph, it scans this sorted list, adding the
next edge on the list to the current subgraph if such an inclusion does not create a
cycle and simply skipping the edge otherwise.
ALGORITHM Kruskal(G)
//Kruskal’s algorithm for constructing a minimum spanning tree
//Input: A weighted connected graph G = (V, E)
//Output: ET, the set of edges composing a minimum spanning tree of G
sort E in nondecreasing order of the edge weights w(ei1) ≤ . . . ≤ w(ei|E|)
ET ← ∅; ecounter ← 0 //initialize the set of tree edges and its size
k ← 0 //initialize the number of processed edges
while ecounter < |V| − 1 do
    k ← k + 1
    if ET ∪ {eik} is acyclic
        ET ← ET ∪ {eik}; ecounter ← ecounter + 1
return ET
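A runnable Python sketch of this pseudocode; the acyclicity test, which the pseudocode leaves abstract, is implemented here with a simple union-find structure (an illustrative choice of representation, not specified in the text):

```python
def kruskal(vertices, edges):
    """Kruskal's algorithm. `edges` is a list of (w, u, v) triples;
    scan them in nondecreasing weight order, adding an edge iff its
    endpoints lie in different components, i.e. iff the inclusion does
    not create a cycle. The component test uses union-find."""
    parent = {v: v for v in vertices}

    def find(v):
        # Find the component root, halving paths as we go.
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                 # acyclic: different components
            parent[ru] = rv          # union the two components
            mst.append((u, v, w))
            if len(mst) == len(vertices) - 1:
                break                # |V| - 1 edges: tree complete
    return mst
```

On the same graph, Kruskal's and Prim's algorithms produce spanning trees of equal total weight, though possibly built in a different edge order.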


