
Design and Analysis of Algorithms

Unit- III : The Greedy Method

 General Method
 Control abstraction
 Applications
1. Knapsack problem
2. Job sequencing with Deadlines
3. Minimum Cost spanning trees
a) Prim’s Algorithm
b) Kruskal’s algorithm
4. Single Source shortest path Problem.
_____________________________________________________________________________
General Method:
 The Greedy method is a straightforward design technique used to find an optimal solution.
 Feasible Solution:
A problem is defined with n inputs. A subset of the inputs that satisfies the given
constraints (conditions) is known as a Feasible Solution or Candidate Solution.
 Objective Function:
An objective function assigns a value to a solution or to a partial solution.
 Optimal Solution:
The feasible solution that maximizes or minimizes the objective function is called an
Optimal Solution.

 The Greedy method suggests that one can devise an algorithm that works in stages.
This version of the Greedy technique is called the "Subset Paradigm".
1. Arrange all feasible inputs in some order.
2. Consider one input at a time.
3. If the input can be part of an optimal solution, include it in the partially constructed
optimal solution.
4. If the next input mentioned in step 1 yields an optimal solution together with the
previous inputs, include it in the partially constructed optimal solution; otherwise reject
the input.
5. The selection of the input is done on the basis of the objective function.

Greedy method control abstraction for the subset paradigm :

1. Algorithm Greedy(a, n)
2. // a[1:n] contains the n inputs.
3. {
4. Solution := ∅; // Initialize the solution.
5. for i := 1 to n do
6. {
7. x := Select(a);
8. if Feasible(Solution, x) then
9. Solution := Union(Solution, x);
10. }
11. return Solution;
12. }
 The function Select selects an input from a[ ] and removes it.
 The selected input’s value is assigned to x.
 Feasible is a Boolean-valued function that determines whether x can be included into the
solution vector.
 The function Union combines x with the solution and updates the objective function.
 For problems that do not call for the selection of an optimal subset, the Greedy method
makes its decisions by considering the inputs in some order based on an
optimization criterion. This version of the Greedy technique is known as the "Ordering
Paradigm".
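The control abstraction above can be sketched in Python. The function parameters mirror Select, Feasible, and Union from the pseudocode; all concrete names here are illustrative, not part of the pseudocode:

```python
def greedy(inputs, select, feasible, union, empty):
    """Subset-paradigm greedy control abstraction.

    select(remaining)  -> picks and removes one input
    feasible(sol, x)   -> True if x can extend the partial solution
    union(sol, x)      -> partial solution with x included
    """
    remaining = list(inputs)
    solution = empty
    while remaining:
        x = select(remaining)
        if feasible(solution, x):
            solution = union(solution, x)
    return solution
```

For example, picking numbers greedily (largest first) without exceeding a capacity of 6 from the inputs 5, 3, 2 keeps only 5, since adding 3 or 2 would violate the constraint.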
Problems that can be solved using the Subset paradigm are:

1. Container Loading
2. Knapsack Problem
3. Tree vertex splitting
4. Job sequencing with Deadlines
5. Minimum-Cost Spanning Trees
a) Prim’s Algorithm
b) Kruskal’s algorithm

Problems that can be solved using the Ordering paradigm are:

1. Optimal Storage on Tapes


2. Optimal Merge Patterns
3. Single-Source shortest paths

Example 1 to understand Greedy approach :


Change Making – The problem here is to issue change with the minimum number of coins.
65 paise can be issued as: 1 × 50 paise + 1 × 10 paise + 1 × 5 paise = 65 paise.
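The change-making step can be sketched in Python as a minimal illustration. (Greedy change making is optimal for canonical coin systems such as this one, though not for every denomination set.)

```python
def make_change(amount, denominations):
    """Greedy change making: repeatedly issue the largest coin
    that does not exceed the remaining amount."""
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            coins.append(d)   # issue one coin of denomination d
            amount -= d
    return coins
```

Calling `make_change(65, [50, 25, 10, 5])` issues 50, 10, and 5 — three coins, as in the example above.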
Example 2 to understand Greedy approach :
Machine Scheduling: Each machine can perform one task at a time. The problem here is to find
the minimum number of machines needed to complete all the tasks.
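One common greedy solution, sketched here as an assumption since the text does not fix an algorithm, sorts the tasks by start time and reuses the machine that frees up earliest, tracked with a min-heap of finish times:

```python
import heapq

def min_machines(tasks):
    """Greedy machine scheduling: process tasks in order of start
    time and reuse the machine that becomes free earliest; open a
    new machine only when every machine is still busy."""
    free_at = []                                 # min-heap of machine finish times
    for start, finish in sorted(tasks):
        if free_at and free_at[0] <= start:
            heapq.heapreplace(free_at, finish)   # reuse the earliest-free machine
        else:
            heapq.heappush(free_at, finish)      # open a new machine
    return len(free_at)
```

For tasks (0, 2), (1, 3), (2, 4), two machines suffice: the third task reuses the machine that finished at time 2.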

Knapsack problem:

 The problem is solved using the "subset paradigm" of the Greedy method.


 The problem statement: "n" objects, each with weight wi, where i takes values from 1
to n, and a knapsack with a capacity of "m" are given.
 If a fraction xi, 0 <= xi <= 1, of object i is placed into the knapsack, then a profit (pi * xi) is
earned.
 The objective is to obtain a filling of the knapsack that maximizes the total profit earned.
 Since the knapsack capacity is m, the total weight of all chosen objects must be
at most m.
 Feasible Solution: (x1, x2, … xn) subject to (1 <= i <= n) ∑ (wi * xi ) < = m
and (0 <= xi <= 1), (1 <= i <= n)
 Objective Function: Maximize (1 <= i <= n) ∑ (pi * xi )
 Optimal Solution: (x1, x2, … xn) with Maximum ∑ (pi * xi ).
 The profits pi , weights wi are positive numbers.
Example : Consider the following instance of the knapsack problem: n = 3, m = 20 ,
(p1, p2, p3) = (25, 24, 15) and (w1, w2, w3) = (18, 15, 10).

The Feasible solutions are :


w1 = 18, w2 = 15, w3 = 10, m = 20, p1 = 25, p2 = 24, p3 = 15

#   (x1, x2, x3)       w1x1   w2x2   w3x3   ∑(wi*xi)   p1*x1   p2*x2   p3*x3   ∑(pi*xi)
1   (1/2, 1/3, 1/4)     9      5      2.5    16.5       12.5    8       3.75    24.25
2   (1, 2/15, 0)       18      2      0      20         25      3.2     0       28.2
3   (0, 2/3, 1)         0     10     10      20          0     16      15       31
4   (0, 1, 1/2)         0     15      5      20          0     24       7.5     31.5

Algorithm:

1. Algorithm GreedyKnapsack(m, n)
2. // p[1:n] and w[1:n] contain the profits and weights respectively of the n objects such that
3. // p[i] / w[i] >= p[i+1] / w[i+1].
4. // m is the knapsack size and x[1:n] is the solution vector.
5. {
6. for i := 1 to n do x[i] := 0.0; // Initialize x.
7. U := m;
8. for i: = 1 to n do
9. {
10. if ( w[i] > U ) then break;
11. x[i] := 1.0; U := U – w[i];
12. }
13. if ( i <= n) then x[i] := U / w[i] ;
14. }
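Algorithm GreedyKnapsack can be sketched in Python as follows; unlike the pseudocode, this version sorts by profit/weight ratio internally rather than assuming presorted input (the function and variable names are illustrative):

```python
def greedy_knapsack(m, profits, weights):
    """Fractional knapsack: take items in nonincreasing profit/weight
    order, filling the sack greedily; the last item taken may be
    fractional. Returns x indexed like the input lists."""
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    x = [0.0] * len(profits)
    u = m                              # remaining capacity
    for i in order:
        if weights[i] > u:
            x[i] = u / weights[i]      # fractional fill with this item
            break
        x[i] = 1.0                     # item fits whole
        u -= weights[i]
    return x
```

On the instance above (m = 20, p = (25, 24, 15), w = (18, 15, 10)) it returns x = (0, 1, 0.5) with profit 31.5, matching the worked example.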

Example: Consider the following instance of the knapsack problem: n = 3, m = 20 ,


(p1, p2, p3) = (25, 24, 15) and (w1, w2, w3) = (18, 15, 10).

Arrange the items based on p[i] / w[i] >= p[i+1] / w[i+1].


p1 / w1 = 25 / 18 = 1.388
p2 / w2 = 24 / 15 = 1.6
p3 / w3 = 15 / 10 = 1.5
Therefore, 1.6 > 1.5 > 1.388
The objects can be chosen in the order of (2,3,1)
for i := 1 to n do x[i] := 0.0;        // x = (0, 0, 0)
U := m;                                // U = 20
i = 1 (object 2): w[2] = 15 <= U, so x[2] := 1.0; U := 20 – 15 = 5
i = 2 (object 3): w[3] = 10 > U = 5, so break
if (i <= n) then x[i] := U / w[i];     // i = 2 <= 3, so x[3] := 5 / 10 = 0.5
Optimal Solution (x1, x2, x3) = (0, 1, 0.5)
Total weight = w1x1 + w2x2 + w3x3 = 18*0 + 15*1 +10* 5/10 = 0 + 15 + 5 = 20
Total Profit = p1x1 + p2x2 + p3x3 = 25*0 + 24*1 + 15*0.5 = 0+ 24+7.5 = 31.5
Time Complexity:

 Fractional knapsack has time complexity O(n log n), where n is the number of items in S.

 If S is a heap-based priority queue, then each removal takes Θ(log n), so
up to n removals take O(n log n). The rest of the algorithm is O(n).

 Alternatively, S could be a sequence, and we could begin the algorithm by sorting S
with a Θ(n log n) sort.

 Then a removal is simply removing the first element. If we use a circular list for S, each
removal is O(1), so the main loop is O(n). Including the sort, we again have O(n log n).
Job Sequencing with Deadline:

Problem Statement:

 We are given a set of n jobs.


 Associated with job i is an integer deadline di >= 0 and a profit pi >= 0.
 For any job i, the profit pi is earned iff the job is completed by its deadline.
 To complete a job, one has to process the job on a machine for one unit of time.
 Only one machine is available for processing jobs.

Feasible Solution:

 A feasible solution for this problem is a subset J of jobs such that each job in the
subset can be completed by its deadline.
 The value of a feasible solution J is the sum of the profits of the jobs in J,
i.e., ∑(i ∈ J) pi.

Optimal Solution:

 An Optimal solution is a feasible solution with maximum value.


 Since the problem involves the identification of a subset, it fits the subset Paradigm.

Example:

 Let n = 4, (p1, p2, p3 , p4 ) = (100, 10, 15, 27) and (d1, d2, d3, d4) = (2, 1, 2, 1).

The Feasible solutions are :

(d1, d2, d3, d4) = (2, 1, 2, 1), (p1, p2, p3, p4) = (100, 10, 15, 27)

#   Feasible     Processing      Profits earned   ∑pi
    Solution     Sequence
1   (1, 2)       2, 1            100 + 10         110
2   (1, 3)       1, 3 or 3, 1    100 + 15         115
3   (1, 4)       4, 1            100 + 27         127
4   (2, 3)       2, 3            10 + 15          25
5   (3, 4)       4, 3            15 + 27          42
6   (1)          1               100              100
7   (2)          2               10               10
8   (3)          3               15               15
9   (4)          4               27               27

The Optimal solution is row No. 3.


 In this solution only jobs 1 and 4 are processed, and the value is 127.
 These jobs must be processed in the order: job 4 followed by job 1.
 Thus the processing of job 4 begins at time zero, and that of job 1 is completed at time 2.
Formulation of Greedy algorithm:

 Let the objective function be ∑(i ∈ J) pi.

 Using this objective function, the next job to include is the one that increases ∑pi the most,
subject to the constraint that the resulting J is a feasible solution.
 This requires keeping the jobs in nonincreasing order of the pi's.

Step   J            Processing Sequence   ∑pi
1      ∅            –                     0
2      {1}          1                     100
3      {1, 4}       4, 1                  127
4      {1, 3, 4}    Not feasible          Reject
5      {1, 2, 4}    Not feasible          Reject

Therefore J = {1, 4} with value 127 is an optimal solution for the given instance.

Theorem:
 Let J be a set of k jobs and σ = i1, i2, i3, …, ik be a permutation of the jobs in J such that
di1 <= di2 <= … <= dik.
 Then J is a feasible solution iff the jobs in J can be processed in the order σ without
violating any deadline.

High-level description of the Greedy Algorithm:

 This algorithm constructs an optimal set J of jobs that can be processed by their deadlines.
 The selected jobs can be processed in the order given by above theorem.

1. Algorithm GreedyJob(d, J, n)
2. // J is a set of jobs that can be completed by their deadlines.
3. {
4. J: = {1};
5. for i: = 2 to n do
6. {
7. if ( all jobs in J U {i} can be completed by their deadlines)
8. then J: = J U {i};
9. }
10. }

Greedy Algorithm for sequencing unit time jobs with deadlines and profits:

 This algorithm assumes that the jobs are already sorted such that p1 >= p2 >= … >= pn.
 It assumes that n >= 1 and the deadline d[i] of job i is at least 1.
 No job with d[i] < 1 can ever be finished by its deadline.
1. Algorithm JS(d, j, n)
2. // d[i] >= 1, 1 <= i <= n are the deadlines.
3. // The jobs are ordered such that p1 >= p2 >= … >= pn.
4. // J[i] is the ith job in the optimal solution, 1 <= i <= k.
5. // Also, at termination d[J[i]] <= d[J[i + 1]], 1 <= i < k.
6. {
7. d[0] := J[0] := 0; // Initialize.
8. J[1] := 1; // Include job 1.
9. k := 1;
10. for i := 2 to n do
11. {
12. // Consider jobs in nonincreasing order of p[i].
13. // Find position for i and check feasibility of insertion.
14. r := k;
15. while ((d[J[r]] > d[i]) and (d[J[r]] ≠ r)) do r := r – 1;
16. if ((d[J[r]] <= d[i]) and (d[i] > r)) then
17. {
18. // Insert i into J[ ].
19. for q := k to (r + 1) step –1 do J[q + 1] := J[q];
20. J[r + 1] := i; k := k + 1;
21. }
22. }
23. return k;
24. }
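Algorithm JS can be rendered in Python as follows; this sketch sorts by profit internally and converts the schedule back to the original (1-based) job numbers, so the helper variables are illustrative:

```python
def job_sequencing(profits, deadlines):
    """Greedy job sequencing (Algorithm JS): consider jobs in
    nonincreasing profit order and insert each into the latest
    feasible position of the schedule.  Jobs are numbered from 1.
    Returns (selected job numbers, total profit)."""
    n = len(profits)
    order = sorted(range(n), key=lambda i: profits[i], reverse=True)
    d = [0] + [deadlines[i] for i in order]   # d[0] is a sentinel
    J = [0]                                   # schedule of ranks; J[0] sentinel
    for i in range(1, n + 1):
        r = len(J) - 1
        # walk left past scheduled jobs with later deadlines
        while d[J[r]] > d[i] and d[J[r]] != r:
            r -= 1
        if d[J[r]] <= d[i] and d[i] > r:
            J.insert(r + 1, i)                # feasible: insert job i here
    jobs = sorted(order[i - 1] + 1 for i in J[1:])
    profit = sum(profits[order[i - 1]] for i in J[1:])
    return jobs, profit
```

On the instance n = 5, p = (20, 15, 10, 5, 1), d = (2, 2, 1, 3, 3) it selects jobs {1, 2, 4} with profit 40, matching the table below.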

Example: Let n = 5, (p1, p2, p3, p4, p5) = ( 20, 15, 10, 5, 1) and (d1, d2, d3, d4, d5) = ( 2, 2, 1, 3, 3).

Job considered   Slot [0, 1]          Slot [1, 2]   Slot [2, 3]   Profit earned
Job 1 (d = 2)                         Job 1                       20
Job 2 (d = 2)    Job 2                Job 1                       35
Job 3 (d = 1)    cannot fit, reject                               35
Job 4 (d = 3)    Job 2                Job 1         Job 4         40
Job 5 (d = 3)    cannot fit, reject                               40

Optimal Solution: J = {1, 2, 4}, profit = 40

The optimal solution is J = {1, 2, 4} with a profit of 40.

Measuring Time Complexity:

 The time complexity of Algorithm JS depends on two parameters: n (the number of jobs) and
s (the number of jobs included in the solution J).
 The while loop of line 15 is iterated at most k times. Each iteration takes
Θ(1) time.
 If the condition of line 16 is true, then lines 19 and 20 are executed. These lines require
Θ(k – r) time to insert job i.
 Hence, the total time for each iteration of the for loop of line 10 is Θ(k). This loop is
iterated (n – 1) times.
 If s is the final value of k, i.e., s is the number of jobs in the final solution, then the total time
needed by Algorithm JS is Θ(s · n).
 Since s <= n, the worst-case time, as a function of n alone, is Θ(s · n) = Θ(n · n) = Θ(n²).
 In addition to the space needed for d, Algorithm JS needs Θ(s) space for J.
 The profit values are not needed by Algorithm JS. It is sufficient to know that pi >= pi+1,
1 <= i < n.
 The computing time of Algorithm JS can be reduced from O(n²) to nearly O(n) by using
the disjoint-set union and find algorithms and a different method to determine the feasibility
of a partial solution.
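One way to realize this speedup, sketched here as an assumption since the text does not spell out the algorithm, is union-find over time slots: each job, taken in nonincreasing profit order, grabs the latest free slot at or before its deadline.

```python
def job_sequencing_dsu(profits, deadlines):
    """Near-O(n) job sequencing: union-find over slots, where
    parent[s] tracks the latest free slot <= s; reaching slot 0
    means no slot is free for this job."""
    max_d = max(deadlines)
    parent = list(range(max_d + 1))

    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]    # path halving
            s = parent[s]
        return s

    total = 0
    order = sorted(range(len(profits)), key=lambda i: profits[i], reverse=True)
    for i in order:
        slot = find(min(deadlines[i], max_d))
        if slot > 0:                         # a free slot exists by the deadline
            total += profits[i]
            parent[slot] = slot - 1          # occupy it; next search goes left
    return total
```

On the two instances used in this section it returns the same optimal profits, 40 and 127, as the Θ(n²) algorithm.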

Minimum Cost Spanning Trees

Spanning Tree : Given an undirected and connected graph G=(V,E), a spanning tree of the
graph G is a tree that spans G (that is, it includes every vertex of G) and is a subgraph
of G (every edge in the tree belongs to G).

Minimum Spanning Tree :

 The cost of the spanning tree is the sum of the weights of all the edges in the tree.
 There can be many spanning trees.
 Minimum spanning tree is the spanning tree where the cost is minimum among all the
spanning trees.
 There also can be many minimum spanning trees.

Applications:

 Minimum spanning tree has direct application in the design of networks.


 It is used in algorithms approximating the travelling salesman problem, multi-terminal
minimum cut problem and minimum-cost weighted perfect matching.
Other practical applications are:
 Cluster Analysis
 Handwriting recognition
 Image segmentation

Prim’s Algorithm:

 A Greedy method to obtain a minimum-cost spanning tree builds the tree edge by edge.
 The next edge to include is chosen according to some optimization criterion.
 The simplest such criterion is to choose an edge that results in a minimum increase in the
sum of the costs of the edges so far included.
 According to this criterion, the set of edges selected so far must always form a tree.
 i.e., if A is the set of edges selected so far, then A forms a tree.
 The next edge (u, v) to be included in A is a minimum-cost edge not in A with the property
that A U {(u, v)} is also a Tree.
Stages in the Prim’s Algorithm:

(Figure of the given graph omitted.)
Stage 1: {1, 6} is a minimum-cost edge.
Stage 2: {6, 5} is a minimum-cost edge.
Stage 3: {5, 4} is a minimum-cost edge.
Stage 4: {4, 3} is a minimum-cost edge.
Stage 5: {3, 2} is a minimum-cost edge.
Stage 6: {2, 7} is a minimum-cost edge.

The cost of the edge {1, 6} = 10


The cost of the edge {6, 5} = 25
The cost of the edge {5, 4} = 22
The cost of the edge {4, 3} = 12
The cost of the edge {3, 2} = 16
The cost of the edge {2, 7} = 14

The cost of the minimal spanning tree = 10 + 25 + 22 + 12 + 16 + 14 = 99.
Stage   Selected   Cost   mincost      near[j] ≠ 0   Candidate costs cost[j, near[j]]         Minimum / next selection
        Edge              so far       vertices
1       {1, 6}     10     10           {2, 5}        {1, 2} = 28, {6, 5} = 25                 min{28, 25} = 25 → {6, 5} selected
2       {6, 5}     25     10+25 = 35   {2, 4, 7}     {1, 2} = 28, {5, 4} = 22,                min{28, 22, 24} = 22 → {5, 4} selected
                                                     {5, 7} = 24
3       {5, 4}     22     35+22 = 57   {2, 3, 7}     {1, 2} = 28, {5, 7} = 24,                min{28, 24, 18, 12} = 12 → {4, 3} selected
                                                     {4, 7} = 18, {4, 3} = 12
4       {4, 3}     12     57+12 = 69   {2, 7}        {1, 2} = 28, {3, 2} = 16,                min{28, 16, 24, 18} = 16 → {3, 2} selected
                                                     {5, 7} = 24, {4, 7} = 18
5       {3, 2}     16     69+16 = 85   {7}           {2, 7} = 14, {5, 7} = 24,                min{14, 24, 18} = 14 → {2, 7} selected
                                                     {4, 7} = 18
6       {2, 7}     14     85+14 = 99   --            --                                       return mincost = 99

 The Algorithm will start with a tree that includes only a minimum-cost edge of G.
 Then, edges are added to this tree one by one.
 The next edge (i, j) to be added is such that i is a vertex already included in the tree and j is
a vertex not yet included, and
 the cost of (i, j), cost[i, j], is minimum among all edges (k, l) such that vertex k is in the tree
and vertex l is not in the tree.
 To determine this edge(i, j) efficiently, we associate with each vertex j not yet included in
the tree a value near[j].
 The value near[j] is a vertex in the tree such that cost[ j, near[j] ] is minimum among all
choices for near[j].
 near[j] = 0 for all vertices j that are already in the tree.
 The next edge to include is defined by the vertex j such that near[j] ≠ 0 ( j not already in the
tree) and cost[ j, near[j] ] is minimum.
Algorithm (the time complexity of key lines is shown on the right):
1 Algorithm Prim(E, cost, n, t)
2 // E is the set of edges in G. cost[1:n, 1:n] is the cost
3 // adjacency matrix of an n-vertex graph such that cost[i, j] is
4 // either a positive real number or ∞ if no edge (i, j) exists.
5 // A minimum spanning tree is computed and stored as a
6 // set of edges in the array t[1:n-1, 1:2]. (t[i, 1], t[i, 2]) is
7 // an edge in the minimum-cost spanning tree. The final cost is returned.
8 {
9 Let (k, l) be an edge of minimum cost in E; O ( |E| )
10 mincost := cost [k, l] ; Ɵ (1)
11 t[1, 1] := k; t[1, 2] := l;
12 for i := 1 to n do // Initialise near Ɵ (n)
13 if (cost [i, l] < cost [i, k]) then near [i] := l;
14 else near [i] := k;
15 near [k] := near [l] := 0;
16 for i := 2 to n -1 do O ( n2 )
17 { // find n – 2 additional edges for t.
18 Let j be an index such that near [j] ≠ 0 and O(n)
19 cost [j, near [j]] is minimum; O(n)
20 t[i, 1] := j; t[i, 2] := near [j];
21 mincost := mincost + cost [j, near [j]];
22 near [j] := 0;
23 for k := 1 to n do // Update near[ ]. O(n)
24 if (( near [k] ≠ 0) and (cost [k, near [k]] > cost [k, j]))
25 then near [k] := j;
26 }
27 return mincost;
28 }
The time complexity of Prim's algorithm is O(n²).
If the remaining vertices are maintained in a red-black tree, the time complexity of Prim's algorithm
becomes O((n + |E|) log n).
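Prim's algorithm can also be sketched with a binary heap of candidate edges. Note this is the lazy O(|E| log |E|) variant rather than the O(n²) array version above; the edge costs used in the test come from the example trace:

```python
import heapq

def prim(n, cost):
    """Prim's algorithm with a heap: grow the tree from vertex 1,
    always taking the cheapest edge that reaches a new vertex.
    cost is a dict {(u, v): weight} over undirected edges; vertices 1..n."""
    adj = {v: [] for v in range(1, n + 1)}
    for (u, v), w in cost.items():
        adj[u].append((w, v))
        adj[v].append((w, u))
    in_tree, mincost = {1}, 0
    heap = list(adj[1])
    heapq.heapify(heap)
    while heap and len(in_tree) < n:
        w, v = heapq.heappop(heap)
        if v not in in_tree:               # cheapest edge leaving the tree
            in_tree.add(v)
            mincost += w
            for edge in adj[v]:            # new candidate edges
                heapq.heappush(heap, edge)
    return mincost
```

On the example graph of this section (7 vertices, edge costs as in the trace) it returns 99.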
Kruskal’s Algorithm:

 According to the second optimization criterion, the edges of the graph are considered in
nondecreasing order of cost.
 i.e., the set t of edges selected so far for the spanning tree must be such that it is possible to
complete t into a tree.
 Thus t may not be a tree at all stages of the algorithm.
 In fact, t forms a forest, since the set of edges t can be completed into a tree iff there are no
cycles in t.

Example:
No. of Edges covered Edge Cost of the edge
1 {1, 6} 10
2 {3, 4} 12
3 {2, 7} 14
4 {2, 3} 16
5 {4, 5} 22
6 {6, 5} 25
t has (n – 1) edges, stop; mincost = 10 + 12 + 14 + 16 + 22 + 25 = 99

Early form of minimum-cost spanning tree algorithm due to Kruskal:

1. t := ∅;
2. while ((t has less than n – 1 edges) and (E ≠ ∅)) do
3. {
4. Choose an edge (v, w) from E of lowest cost;
5. Delete (v, w) from E;
6. if (v, w) does not create a cycle in t then add (v, w) to t;
7. else discard (v, w);
8. }

 Initially E is the set of all edges in G.


 The only operations performed on this set are:
1. Determine an edge with minimum cost (line 4).
2. Delete this edge (line 5).
 Both these operations can be performed efficiently if the edges in E are maintained as a
sorted sequential list.
 If the edges are maintained as a minheap, then the next edge to consider can be obtained in
O (log |E|) time.
 The construction of the heap itself takes O(|E|) time.
 To perform step 6 efficiently, the vertices in G should be grouped together in such a way
that one can easily determine whether vertices v and w are already connected by the
earlier selection of edges.
 If they are, then the edge (v, w) is to be discarded.
 If they are not, then (v, w) is to be added to t.
 One possible grouping is to place all vertices in the same connected component of t into a
set ( all connected components of t will also be trees).
 Then, two vertices v and w are connected in t iff they are in the same set.
 For example, when the edge (2, 6) is to be considered , the sets are {1,2}, {3, 4, 6} and {5}.
 The next edge to be considered is (1, 4). Since vertices 1 and 4 are in the same set, the
edge is rejected.
 The edge (3, 5) connects vertices in different sets and results in the final spanning tree.
 Using the set representation and union and find algorithms, we can obtain an efficient
implementation of line 6.
 The computing time is therefore determined by the time for lines 4 and 5, which in the
worst case is O(|E| log |E|).
1. Algorithm Kruskal (E, cost, n, t)
2. // E is the set of edges in G. G has n vertices.
3. // cost [u, v] is the cost of the edge (u, v). t is the set of edges in the minimum cost
4. // spanning tree. The final cost is returned.
5. {
6. Construct a heap out of the edge costs using Heapify;
7. for i := 1 to n do parent[i] := –1;
8. // Each vertex is in a different set.
9. i := 0; mincost := 0.0;
10. while ((i < n-1) and ( heap not empty)) do
11. {
12. Delete a minimum cost edge (u, v) from a heap
13. and reheapify using Adjust;
14. j := Find (u) ; k := Find (v) ;
15. if ( j ≠ k) then
16. {
17. i := i + 1;
18. t[i, 1] := u; t[i, 2] := v;
19. mincost := mincost + cost [u, v];
20. Union (j, k);
21. }
22. }
23. if ( i ≠ n-1) then write (“ No spanning Tree”);
24. else return mincost;
25. }

 In line 6 an initial heap of edges is constructed.


 In line 7 each vertex is assigned to a distinct set .
 The set t is the set of edges to be included in the minimum-cost spanning tree.
 i is the number of edges in t.
 The set t can be represented as a sequential list using a two-dimensional array t[1:n–1, 1:2].
 Edge (u, v) can be added to t by the assignments t[i, 1] := u; t[i, 2] := v;
 In the while loop of line 10, edges are removed from the heap one by one in nondecreasing
order of cost.
 Line 14 determines the sets containing u and v.
 If j ≠ k , then vertices u and v are in different sets. ( and so in different trees) and edge (u, v)
is included into t.
 The sets containing u and v are combined (line 20).
 If j = k, the edge (u, v) is discarded, as its inclusion in t would create a cycle.
 Line 23 determines whether a spanning tree was found.
 It follows that i ≠ n – 1 iff the graph G is not connected.
 The computing time is O (|E| log |E|), where E is the edge set of G.
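Algorithm Kruskal can be sketched in Python with a min-heap of edges and union-find; the path-halving compression used in find is an illustrative choice, not prescribed by the pseudocode:

```python
import heapq

def kruskal(n, edges):
    """Kruskal's algorithm: pop edges from a min-heap in nondecreasing
    cost order; union-find detects cycles.  edges is a list of
    (cost, u, v) triples; vertices are numbered 1..n."""
    parent = list(range(n + 1))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    heapq.heapify(edges)
    mincost, taken = 0, 0
    while edges and taken < n - 1:
        w, u, v = heapq.heappop(edges)
        ru, rv = find(u), find(v)
        if ru != rv:                        # different trees: no cycle
            parent[ru] = rv                 # union the two components
            mincost += w
            taken += 1
    return mincost if taken == n - 1 else None   # None: no spanning tree
```

On the example graph of this section it returns 99; on a disconnected graph it returns None, mirroring line 23 of the pseudocode.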
Priority Queues

 Any data structure that supports the operations search-min (or max), insert, and delete-min
(or max) is called a priority queue.
 Heap: A max (min) heap is a complete binary tree with the property that the value at each
node is at least as large as (as small as) the values at its children (if they exist).
 In a max heap, one of the largest elements is at the root of the heap.
 If the elements are distinct, then the root contains the largest item.
 A max heap can be implemented using an array a[ ].
 To insert an element into the heap, one adds it "at the bottom" of the heap and then compares
it with its parent, grandparent, great-grandparent, and so on, until it is less than or equal to
one of these values. Algorithm Insert performs this job.
 Inserting a new element takes Θ(log n) time in the worst case.
 To delete the maximum key from a max heap, Algorithm Adjust is used.

Insertion into a Heap:

1. Algorithm Insert (a, n)


2. {
3. // Inserts a[n] into the heap stored in a[1:n–1].
4. i := n; item := a[n];
5. while ((i > 1) and (a[⌊i/2⌋] < item)) do
6. {
7. a[i] := a[⌊i/2⌋]; i := ⌊i/2⌋;
8. }
9. a[i] := item; return true;
10. }
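Algorithm Insert can be rendered in Python, keeping the pseudocode's 1-based indexing (index 0 of the array is left unused):

```python
def heap_insert(a, n):
    """Insert a[n] into the max heap stored in a[1:n-1]
    (index 0 unused): sift the new item up past every
    smaller ancestor, then drop it into place."""
    i, item = n, a[n]
    while i > 1 and a[i // 2] < item:
        a[i] = a[i // 2]              # pull the smaller parent down
        i //= 2
    a[i] = item
    return True
```

For example, inserting 90 into the heap [80, 60, 70, 40] sifts it past 60 and 80, making 90 the new root.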

Example : steps to insert 90 into an existing heap.


Algorithm Heapify:
1. Algorithm Heapify(a, n)
2. // Readjust the elements in a[1:n] to form a heap.
3. {
4. for i := ⌊n/2⌋ to 1 step –1 do Adjust(a, i, n);
5. }
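Heapify can be sketched in Python together with the Adjust sift-down it calls, again with 1-based indexing (index 0 unused); Adjust's body is an assumption here, since its pseudocode is not given in these notes:

```python
def adjust(a, i, n):
    """Sift a[i] down in a[1:n] until neither child is larger."""
    item, j = a[i], 2 * i
    while j <= n:
        if j < n and a[j] < a[j + 1]:
            j += 1                    # pick the larger child
        if item >= a[j]:
            break                     # heap property restored
        a[j // 2] = a[j]              # promote the child
        j *= 2
    a[j // 2] = item

def heapify(a, n):
    """Build a max heap bottom-up: adjust every internal node,
    from the last parent ⌊n/2⌋ back to the root."""
    for i in range(n // 2, 0, -1):
        adjust(a, i, n)
```

After `heapify`, the root a[1] holds a maximum element and every parent is at least as large as its children.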
