
Analysis and Design of Algorithms – Unit - 3

Decrease-and-Conquer Technique
 The decrease-and-conquer technique is based on exploiting the relationship between a solution to a given instance of a problem and a solution to a smaller instance of the same problem.
 Once such a relationship is established, it can be exploited either top down (recursively) or bottom up (without recursion).

 There are three major variations of decrease-and-conquer:


 Decrease by a constant:
o Insertion sort
o Graph algorithms:
 DFS
 BFS
 Topological sorting
o Algorithms for generating permutations, subsets
 Decrease by a constant factor:
o Binary search
 Variable-size decrease:
o Euclid’s algorithm

Decrease-by-a-constant
 In the decrease-by-a-constant variation, the size of an instance is
reduced by the same constant (typically, this constant is equal to one) on
each iteration of the algorithm.


Example: Consider the exponentiation problem of computing a^n for positive integer exponents.
 The relationship between a solution to an instance of size n and an instance of size n − 1 is given by the formula:
a^n = a^(n−1) · a
 So, the function f(n) = a^n can be computed “top down” by using its recursive definition:
f(n) = f(n − 1) · a  if n > 1
       a             if n = 1
 The function f(n) = a^n can also be computed “bottom up” by multiplying a by itself n − 1 times.

Decrease-by-a-Constant Factor
 In the decrease-by-a-constant factor variation, the size of a problem
instance is reduced by the same constant factor on each iteration of the
algorithm. In most applications, this constant factor is equal to two.


 Example: Consider the exponentiation problem of computing a^n for positive integer exponents.
 If the instance of size n is to compute a^n, the instance of half its size will be to compute a^(n/2), with the obvious relationship between the two: a^n = (a^(n/2))^2.
 If n is odd, we have to compute a^(n−1) by using the rule for even-valued exponents and then multiply the result by a.
 To summarize, we have the following formula:
a^n = (a^(n/2))^2          if n is even and positive
      (a^((n−1)/2))^2 · a  if n is odd and greater than 1
      a                    if n = 1
 If we compute a^n recursively according to the above formula and measure the algorithm’s efficiency by the number of multiplications, the algorithm is expected to be in O(log n) because, on each iteration, the size is reduced by at least a half at the expense of no more than two multiplications.

Note:
Consider the problem of exponentiation: compute a^n.
 Brute force: a^n = a * a * a * . . . * a (n − 1 multiplications)
 Divide and conquer: a^n = a^(n/2) * a^(n/2)
 Decrease by one: a^n = a^(n−1) * a
 Decrease by a constant factor: a^n = (a^(n/2))^2 if n is even
                                a^n = (a^((n−1)/2))^2 · a if n is odd
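
To make the contrast concrete, here is a minimal Python sketch of the two decrease-and-conquer variants (the function names are ours, chosen for illustration):

def power_decrease_by_one(a, n):
    # Decrease by one: a^n = a^(n-1) * a; uses n - 1 multiplications.
    if n == 1:
        return a
    return power_decrease_by_one(a, n - 1) * a

def power_decrease_by_half(a, n):
    # Decrease by a constant factor: O(log n) multiplications.
    if n == 1:
        return a
    half = power_decrease_by_half(a, n // 2)   # n // 2 = (n - 1) // 2 for odd n
    return half * half if n % 2 == 0 else half * half * a

print(power_decrease_by_one(2, 10), power_decrease_by_half(2, 10))  # 1024 1024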

Variable-Size Decrease

 In this variation, the size-reduction pattern varies from one iteration of an algorithm to another.
 Euclid’s algorithm for computing the GCD provides a good example of such a situation:
gcd(m, n) = gcd(n, m mod n)


 Though the arguments on the right-hand side are always smaller than
those on the left-hand side, they are smaller neither by a constant nor by
a constant factor.

Insertion Sort
 The decrease-by-one technique is applied to sort an array A[0..n − 1].
 We assume that the smaller problem of sorting the array A[0..n − 2] has already been solved to give us a sorted array of size n − 1: A[0] ≤ . . . ≤ A[n − 2].
 How can we take advantage of this solution to the smaller problem to get a solution to the original problem by taking into account the element A[n − 1]?
 All we need is to find an appropriate position for A[n − 1] among the sorted elements and insert it there.

 As shown below, starting with A[1] and ending with A[n − 1], A[i] is inserted in its appropriate place among the first i elements of the array that have already been sorted (but, unlike selection sort, are generally not in their final positions):

A[0] ≤ . . . ≤ A[j] < A[j + 1] ≤ . . . ≤ A[i − 1] | A[i] . . . A[n − 1]

(The elements A[0..j] are smaller than or equal to A[i], the elements A[j + 1..i − 1] are greater than A[i], so A[i] is inserted after position j.)


Pseudocode of Insertion sort
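
ALGORITHM InsertionSort(A[0..n − 1])
//Sorts a given array by insertion sort
//Input: An array A[0..n − 1] of n orderable elements
//Output: Array A[0..n − 1] sorted in nondecreasing order
for i ← 1 to n − 1 do
    v ← A[i]
    j ← i − 1
    while j ≥ 0 and A[j] > v do
        A[j + 1] ← A[j]
        j ← j − 1
    A[j + 1] ← v

(Here v is the element being inserted; the comparison A[j] > v is the basic operation counted in the analysis below.)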

Analysis of Insertion sort


Worst-case Efficiency
 The number of key comparisons in this algorithm obviously depends on
the nature of the input.
 In the worst case, A[j] > v is executed the largest number of times, i.e., for every j = i − 1, . . . , 0.
 Thus, for the worst-case input, we get A[0] > A[1] (for i = 1), A[1] > A[2] (for i = 2), . . . , A[n − 2] > A[n − 1] (for i = n − 1).
 The worst-case input is therefore an array of strictly decreasing values. The number of key comparisons for such an input is

Cworst(n) = Σ (i=1 to n−1) Σ (j=0 to i−1) 1 = Σ (i=1 to n−1) i = (n − 1)n / 2 ∈ Θ(n^2)

 Thus, in the worst case, insertion sort makes exactly the same number of comparisons as selection sort.


Best-case Efficiency
 In the best case, the comparison A[j] > v is executed only once on every iteration of the outer loop.
 This happens if and only if A[i − 1] ≤ A[i] for every i = 1, . . . , n − 1, i.e., if the input array is already sorted in ascending order.

Cbest(n) = Σ (i=1 to n−1) 1 = n − 1 ∈ Θ(n)

Example: Apply the insertion sort technique to sort the list {89, 45, 68, 90, 29, 34, 17} in ascending order.


Soln:
89 | 45 68 90 29 34 17
45 89 | 68 90 29 34 17
45 68 89 | 90 29 34 17
45 68 89 90 | 29 34 17
29 45 68 89 90 | 34 17
29 34 45 68 89 90 | 17
17 29 34 45 68 89 90


Graph Traversal
 Graph traversal refers to the process of visiting each vertex in a graph.
 There are two principal Graph traversal algorithms:

 Depth-first search (DFS)

 Breadth-first search (BFS)

 Depth-First Search (DFS) and Breadth-First Search (BFS):


 Both work on directed and undirected graphs.
 The difference between the two algorithms is the order in which they visit the vertices.

Depth-First Search
 DFS starts visiting vertices of a graph at an arbitrary vertex by marking it
as having been visited.
 On each iteration, the algorithm proceeds to an unvisited vertex that is
adjacent to the last visited vertex.
 This process continues until a dead end – a vertex with no adjacent
unvisited vertices – is encountered.
 At a dead end, the algorithm backs up one edge to the vertex it came
from and tries to continue visiting unvisited vertices from there.
 The algorithm eventually halts after backing up to the starting vertex,
with the latter being a dead end.
 By this time, all the vertices in the same connected component have been
visited.
 If unvisited vertices still remain, the depth-first search must be restarted
at any one of them.
 It is convenient to use a stack to trace the operation of DFS: we push a vertex onto the stack when the vertex is reached for the first time, and we pop a vertex off the stack when it becomes a dead end.


 It is very useful to accompany a depth-first search traversal by constructing the so-called depth-first search forest.
 The traversal’s starting vertex serves as the root of the first tree in
such a forest.
 Whenever a new unvisited vertex is reached for the first time, it is
attached as a child to the vertex from which it is being reached.
Such an edge is called a tree edge because the set of all such edges
forms a forest.
 The algorithm may also encounter an edge leading to a previously
visited vertex other than its immediate predecessor (i.e., its parent in
the tree). Such an edge is called a back edge because it connects a
vertex to its ancestor, other than the parent, in the depth-first
search forest.


DFS Traversal for a digraph

 The DFS forest in the previous figure exhibits all four types of edges possible in a DFS forest of a directed graph: tree edges (ab, bc, de), back edges (ba) from vertices to their ancestors, forward edges (ac) from vertices to their descendants in the tree other than their children, and cross edges (dc), which are none of the above-mentioned types.

Note: A back edge in a DFS forest of a directed graph can connect a vertex to its parent.


Pseudocode of DFS
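
ALGORITHM DFS(G)
//Implements a depth-first search traversal of a given graph
//Input: Graph G = (V, E)
//Output: Graph G with its vertices marked with consecutive integers
//        in the order they are first encountered by the DFS traversal
mark each vertex in V with 0 as a mark of being “unvisited”
count ← 0
for each vertex v in V do
    if v is marked with 0
        dfs(v)

dfs(v)
//visits recursively all the unvisited vertices connected to vertex v
//and assigns them the numbers in the order they are encountered
count ← count + 1; mark v with count
for each vertex w in V adjacent to v do
    if w is marked with 0
        dfs(w)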

Analysis of DFS traversal

 For the adjacency matrix representation, the traversal’s time is in Θ(|V|^2), where |V| is the number of the graph’s vertices.
 For the adjacency list representation, it is in Θ(|V| + |E|), where |V| and |E| are the numbers of the graph’s vertices and edges, respectively.


Applications of DFS

 Checking connectivity: Since DFS halts after visiting all the vertices
connected by a path to the starting vertex, checking a graph’s
connectivity can be done as follows - Start a DFS traversal at an
arbitrary vertex and check, after the algorithm halts, whether all the
graph’s vertices will have been visited. If they have, the graph is
connected; otherwise, it is not connected.

 Checking acyclicity: If the DFS forest does not have back edges, then the graph is clearly acyclic.

 Finding articulation points of a graph: A vertex of a connected graph is said to be its articulation point if its removal together with all the edges incident to it breaks the graph into disjoint pieces.


Breadth-first search (BFS)

 It proceeds in a concentric manner by visiting first all the vertices that are adjacent to a starting vertex, then all unvisited vertices two edges apart from it, and so on, until all the vertices in the same connected component as the starting vertex are visited.

 If unvisited vertices still remain, the algorithm has to be restarted at an arbitrary vertex of another connected component of the graph.

 Instead of a stack, BFS uses a queue. The queue is initialized with the
traversal’s starting vertex, which is marked as visited.
 On each iteration, the algorithm identifies all unvisited vertices
that are adjacent to the front vertex, marks them as visited, and
adds them to the queue; after that, the front vertex is removed
from the queue.
 BFS traversal is similar to level-by-level tree traversal.

 It is useful to accompany a BFS traversal by constructing the breadth-first search forest. The traversal’s starting vertex serves as the root of the first tree in such a forest.
 Whenever a new unvisited vertex is reached for the first time, it is attached as a child to the vertex from which it is being reached, with an edge called a tree edge.
 The algorithm may also encounter an edge leading to a previously
visited vertex other than its immediate predecessor (i.e., its parent
in the tree). Such an edge is called a cross edge.


Pseudocode of BFS
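
ALGORITHM BFS(G)
//Implements a breadth-first search traversal of a given graph
//Input: Graph G = (V, E)
//Output: Graph G with its vertices marked with consecutive integers
//        in the order they are visited by the BFS traversal
mark each vertex in V with 0 as a mark of being “unvisited”
count ← 0
for each vertex v in V do
    if v is marked with 0
        bfs(v)

bfs(v)
//visits all the unvisited vertices connected to vertex v
//and assigns them the numbers in the order they are visited
count ← count + 1; mark v with count and initialize a queue with v
while the queue is not empty do
    for each vertex w in V adjacent to the front vertex do
        if w is marked with 0
            count ← count + 1; mark w with count
            add w to the queue
    remove the front vertex from the queue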

Analysis of BFS traversal

 BFS has the same efficiency as DFS:
 Θ(|V|^2) for the adjacency matrix representation, where |V| is the number of the graph’s vertices.
 Θ(|V| + |E|) for the adjacency list representation, where |V| and |E| are the numbers of the graph’s vertices and edges, respectively.


Applications of BFS

 Checking connectivity of a graph
 Checking acyclicity of a graph
 BFS can be helpful in some situations where DFS cannot be.
 For example, BFS can be used for finding a path with the fewest
number of edges between two given vertices.
o We start a BFS traversal at one of the two vertices given and
stop it as soon as the other vertex is reached.
o The simple path from the root of the BFS tree to the second
vertex is the path sought.
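
A minimal Python sketch of this use of BFS (the adjacency-list dictionary and the function name are ours, chosen for illustration); the parent links recorded during the traversal form the BFS tree, and following them back from the goal yields a minimum-edge path:

from collections import deque

def fewest_edge_path(adj, start, goal):
    # adj maps each vertex to the list of its adjacent vertices.
    parent = {start: None}            # parent links double as the visited set
    queue = deque([start])
    while queue:
        v = queue.popleft()
        if v == goal:                 # stop as soon as the other vertex is reached
            path = []
            while v is not None:      # walk back up the BFS tree to the root
                path.append(v)
                v = parent[v]
            return path[::-1]
        for w in adj[v]:
            if w not in parent:
                parent[w] = v
                queue.append(w)
    return None                       # no path: goal is in another component

adj = {'a': ['b', 'c'], 'b': ['a', 'd'], 'c': ['a', 'd'],
       'd': ['b', 'c', 'e'], 'e': ['d']}
print(fewest_edge_path(adj, 'a', 'e'))  # e.g. ['a', 'b', 'd', 'e']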

Main facts about DFS and BFS

                                  DFS                   BFS
Data structure                    stack                 queue
Number of vertex orderings        2 orderings           1 ordering
Edge types (undirected graphs)    tree and back edges   tree and cross edges
Applications                      connectivity,         connectivity,
                                  acyclicity,           acyclicity,
                                  articulation points   minimum-edge paths
Efficiency for adjacency matrix   Θ(|V|^2)              Θ(|V|^2)
Efficiency for adjacency lists    Θ(|V| + |E|)          Θ(|V| + |E|)


Topological Sorting

Example:
 Consider a set of five required courses {C1, C2, C3, C4, C5} a part-time
student has to take in some degree program.
 The courses can be taken in any order as long as the following course
prerequisites are met: C1 and C2 have no prerequisites, C3 requires C1
and C2, C4 requires C3, and C5 requires C3 and C4. The student can
take only one course per term.

In which order should the student take the courses?

 The above situation can be modeled by a digraph in which vertices represent courses and directed edges indicate prerequisite requirements.

 The question is whether we can list the vertices of a digraph in such an order that, for every edge in the graph, the vertex where the edge starts is listed before the vertex where the edge ends. This problem is called topological sorting.
 For topological sorting to be possible, a digraph must be a dag (directed acyclic graph): if a digraph has no cycles, the topological sorting problem for it has a solution.
 There are two efficient algorithms that both verify whether a digraph is a dag and, if it is, produce an ordering of vertices that solves the topological sorting problem:
 DFS-based algorithm
 Source-removal algorithm


Topological sorting Algorithms

1. DFS-based algorithm:
 DFS traversal noting the order vertices are popped off stack (i.e., the
order in which vertices become dead ends).
 Reverse order solves topological sorting
 Back edges encountered?→ NOT a dag!

Figure: (a) Digraph for which the topological sorting problem needs to be solved.
(b) DFS traversal stack with the subscript numbers indicating the popping-off
order.
(c) Solution to the problem.

2. Source-removal algorithm:

 This algorithm is based on a direct implementation of the decrease-(by one)-and-conquer technique: repeatedly identify in a remaining digraph a source, which is a vertex with no incoming edges, and delete it along with all the edges outgoing from it.
 The order in which the vertices are deleted yields a solution to the
topological sorting problem.
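
A minimal Python sketch of the source-removal algorithm (the dictionary layout and names are ours; we track in-degrees instead of physically deleting edges, which has the same effect):

def source_removal_topsort(adj):
    # adj maps each vertex to the list of vertices its outgoing edges point to.
    indegree = {v: 0 for v in adj}
    for v in adj:
        for w in adj[v]:
            indegree[w] += 1
    sources = [v for v in adj if indegree[v] == 0]
    order = []
    while sources:
        v = sources.pop()             # any current source will do
        order.append(v)
        for w in adj[v]:              # remove v's outgoing edges by decreasing in-degrees
            indegree[w] -= 1
            if indegree[w] == 0:
                sources.append(w)
    return order if len(order) == len(adj) else None   # None: not a dag

courses = {'C1': ['C3'], 'C2': ['C3'], 'C3': ['C4', 'C5'],
           'C4': ['C5'], 'C5': []}
print(source_removal_topsort(courses))  # e.g. ['C2', 'C1', 'C3', 'C4', 'C5']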


Example: Solve the topological sorting problem for the following digraph using:
a) DFS-based algorithm
b) Source removal algorithm

(a) Using DFS-based algorithm

(b) Using Source removal algorithm


Algorithms for Generating Combinatorial Objects

Generating Permutations

 Bottom-up minimal-change algorithm
 Johnson-Trotter algorithm
 Lexicographic ordering

Generating Permutations using Bottom-up minimal change algorithm

 The smaller-by-one problem is to generate all (n − 1)! permutations of {1, . . . , n − 1}.
 Assuming that the smaller problem is solved, we can get a solution to the larger one by inserting n into each of the n possible positions among the elements of every permutation of the n − 1 elements.
 There are two possible orders of insertion: either left to right or right to left.
 The total number of permutations obtained will be n · (n − 1)! = n!; hence, we obtain all the permutations of {1, . . . , n}.

 We can insert n into the previously generated permutations either left to right or right to left.
 We start by inserting n into 12 . . . (n − 1) by moving right to left and then switch direction every time a new permutation of {1, . . . , n − 1} needs to be processed.
 The minimal-change requirement is satisfied: each permutation is obtained from the previous one by exchanging only two elements (see the sketch after this list).
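
A minimal Python sketch of the bottom-up minimal-change construction (the function name is ours; it reproduces the insertion order used in the traces below):

def minimal_change_permutations(n):
    perms = [[1]]
    for k in range(2, n + 1):
        new_perms = []
        left_to_right = False                  # first insertion pass is right to left
        for p in perms:
            positions = range(len(p) + 1)
            if not left_to_right:
                positions = reversed(positions)
            for i in positions:                # insert k into every position of p
                new_perms.append(p[:i] + [k] + p[i:])
            left_to_right = not left_to_right  # switch direction for the next permutation
        perms = new_perms
    return perms

print(minimal_change_permutations(3))
# [[1, 2, 3], [1, 3, 2], [3, 1, 2], [3, 2, 1], [2, 3, 1], [2, 1, 3]]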


Problems:

1. Generate all the permutations of n = 3, i.e., {1, 2, 3}, by the bottom-up minimal-change algorithm.
Soln:
start 1
insert 2 into 1 right to left 12 21
insert 3 into 12 right to left 123 132 312
insert 3 into 21 left to right 321 231 213

2. Generate all the permutations of n = 4, i.e., {1, 2, 3, 4}, by the bottom-up minimal-change algorithm.
Soln:

start 1
insert 2 into 1 right to left 12 21
insert 3 into 12 right to left 123 132 312
insert 3 into 21 left to right 321 231 213
insert 4 into 123 right to left 1234 1243 1423 4123
insert 4 into 132 left to right 4132 1432 1342 1324
insert 4 into 312 right to left 3124 3142 3412 4312
insert 4 into 321 left to right 4321 3421 3241 3214
insert 4 into 231 right to left 2314 2341 2431 4231
insert 4 into 213 left to right 4213 2413 2143 2134

Generating Permutations using Johnson-Trotter algorithm

 It is possible to get the same ordering of the permutations of n elements without explicitly generating permutations for smaller values of n:
 Associate a direction with each element k in a permutation, indicated by a small arrow written above the element.
 The element k is said to be mobile in such an arrow-marked permutation if its arrow points to a smaller number adjacent to it.
• For example, in the permutation 3 2 4 1 with 3 pointing right and 2, 4, and 1 pointing left, the elements 3 and 4 are mobile, while 2 and 1 are not.
 The Johnson-Trotter algorithm uses this notion of a mobile element.


ALGORITHM JohnsonTrotter(n)
//Implements the Johnson-Trotter algorithm for generating permutations
//Input: A positive integer n
//Output: A list of all permutations of {1, . . . , n}
initialize the first permutation with 1 2 . . . n (all arrows pointing left)
while the last permutation has a mobile element do
    find its largest mobile element k
    swap k and the adjacent element k’s arrow points to
    reverse the direction of all the elements that are larger than k
    add the new permutation to the list
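
A Python sketch of the pseudocode above (the names are ours; a direction is stored as −1 for a left-pointing arrow and +1 for a right-pointing arrow):

def johnson_trotter(n):
    perm = list(range(1, n + 1))
    direction = [-1] * n                 # initially every arrow points left
    result = [perm[:]]
    while True:
        # find the largest mobile element
        m = -1
        for i in range(n):
            j = i + direction[i]
            if 0 <= j < n and perm[j] < perm[i] and (m == -1 or perm[i] > perm[m]):
                m = i
        if m == -1:
            break                        # no mobile element is left
        k = perm[m]
        j = m + direction[m]
        # swap k with the element its arrow points to (directions travel with the elements)
        perm[m], perm[j] = perm[j], perm[m]
        direction[m], direction[j] = direction[j], direction[m]
        # reverse the direction of all the elements larger than k
        for i in range(n):
            if perm[i] > k:
                direction[i] = -direction[i]
        result.append(perm[:])
    return result

print(johnson_trotter(3))
# [[1, 2, 3], [1, 3, 2], [3, 1, 2], [3, 2, 1], [2, 3, 1], [2, 1, 3]]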

Problem: Generate all the permutations of n = 3, i.e., {1, 2, 3}, using the Johnson-Trotter algorithm.

Soln: 123 → 132 → 312 → 321 → 231 → 213

 The natural place for the permutation n n−1 . . . 1 seems to be the last one on the list. This would be the case if permutations were listed in increasing order – also called the lexicographic order.
 The Johnson-Trotter algorithm does not produce permutations in lexicographic order.

Example:
 Johnson-Trotter algorithm: 123, 132, 312, 321, 231, 213
 Lexicographic order: 123, 132, 213, 231, 312, 321


Lexicographic order algorithm:
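
Each next permutation is obtained from the current one as follows (a sketch of the standard algorithm, in the same pseudocode style as above):

ALGORITHM LexicographicPermute(n)
//Generates permutations in lexicographic order
//Input: A positive integer n
//Output: A list of all permutations of {1, . . . , n} in lexicographic order
initialize the first permutation with 1 2 . . . n
while the last permutation has two consecutive elements in increasing order do
    let i be its largest index such that a_i < a_(i+1)
    find the largest index j such that a_i < a_j
    swap a_i with a_j        //a_(i+1) . . . a_n remains in decreasing order
    reverse the order of the elements from a_(i+1) to a_n inclusive
    add the new permutation to the list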

Problem: Generate all permutations of {1, 2, 3} by the lexicographic-order algorithm.

Soln:

123
132
213
231
312
321


Transform-and-Conquer Design Technique

 This method works as a two-stage procedure:
1. Transformation stage: the problem’s instance is modified to be more amenable to solution.
2. Conquering stage: the transformed instance is solved.

 There are three major variations, which differ by what we transform a given instance to:
 Instance simplification: transformation to a simpler or more convenient instance of the same problem.
 Representation change: transformation to a different representation of the same instance.
 Problem reduction: transformation to an instance of a different problem for which an algorithm is already available.

Figure: Transform-and-conquer strategy.


Instance Simplification – Presorting

 Solve a problem’s instance by transforming it into another, simpler or easier instance of the same problem.

Presorting:
Many problems involving lists are easier when the list is sorted, e.g.:
 searching
 checking if all elements are distinct (element uniqueness)

Example 1: Checking element uniqueness in an array.

 Presorting-based algorithm
Stage 1: Sort the array by an efficient sorting algorithm (e.g., mergesort).
Stage 2: Scan the array to check pairs of adjacent elements.

Worst-case efficiency:
T(n) = Tsort(n) + Tscan(n) ∈ Θ(n log n) + Θ(n) = Θ(n log n)

The running time of this algorithm is the sum of the time spent on sorting and the time spent on checking consecutive elements. It is the sorting part that determines the overall efficiency of the algorithm.

 Brute-force algorithm
Compares pairs of the array’s elements until either two equal elements are found or no more pairs are left.

Worst-case efficiency: Θ(n^2)


ALGORITHM PresortElementUniqueness(A[0..n − 1])
//Solves the element uniqueness problem by sorting the array first
//Input: An array A[0..n − 1] of orderable elements
//Output: Returns “true” if A has no equal elements, “false” otherwise
Sort the array A
for i ← 0 to n − 2 do
    if A[i] = A[i + 1] return false
return true

 We can sort the array first and then check only its consecutive elements:
if the array has equal elements, a pair of them must be next to each
other and vice versa.

Example 2: Computing a mode

 A mode is a value that occurs most often in a given list of numbers. For example, for 5, 1, 5, 7, 6, 5, 7, the mode is 5.

 The brute-force approach to computing a mode would scan the list and compute the frequencies of all its distinct values, then find the value with the largest frequency.

Worst-case efficiency: Θ(n^2)

 In the presort mode computation algorithm, the list is sorted first. Then all equal values will be adjacent to each other, and to compute the mode all we need to do is find the longest run of adjacent equal values in the sorted array.
 The efficiency of this presort algorithm is in Θ(n log n), dominated by the time spent on sorting.


ALGORITHM PresortMode(A[0..n − 1])
//Computes the mode of an array by sorting it first
//Input: An array A[0..n − 1] of orderable elements
//Output: The array’s mode
Sort the array A
i ← 0                      //current run begins at position i
modefrequency ← 0          //highest frequency seen so far
while i ≤ n − 1 do
    runlength ← 1; runvalue ← A[i]
    while i + runlength ≤ n − 1 and A[i + runlength] = runvalue
        runlength ← runlength + 1
    if runlength > modefrequency
        modefrequency ← runlength; modevalue ← runvalue
    i ← i + runlength
return modevalue

Balanced Search Trees

 There are two approaches to balancing a search tree:
 The first approach is of the instance-simplification variety: an unbalanced binary search tree is transformed into a balanced one. An AVL tree requires that the difference between the heights of the left and right subtrees of every node never exceed 1. Other trees of this kind are red-black trees and splay trees.
 The second approach is of the representation-change variety: allow more than one element in a node of a search tree. Specific cases of such trees are 2-3 trees, 2-3-4 trees, and B-trees. They differ in the number of elements admissible in a single node of a search tree, but all are perfectly balanced.


AVL TREES
 AVL trees were invented in 1962 by two Russian scientists, G. M.
Adelson-Velsky and E. M. Landis, after whom this data structure is
named.

 Definition: An AVL tree is a binary search tree in which the balance factor of every node, defined as the difference between the heights of the node’s left and right subtrees, is either 0, +1, or −1. (The height of the empty tree is defined as −1.)

 If an insertion of a new node makes an AVL tree unbalanced, we transform the tree by a rotation.
 A rotation in an AVL tree is a local transformation of its subtree rooted at a node whose balance factor has become either +2 or −2; if there are several such nodes, we rotate the tree rooted at the unbalanced node that is the closest to the newly inserted leaf.


 There are only four types of rotations:


 Single right rotation (R-rotation)
 Single left rotation (L-rotation)
 Double left-right rotation (LR – rotation)
 Double right-left rotation (RL – rotation)

 Single right rotation (R-rotation):
This rotation is performed after a new key is inserted into the left subtree of the left child of a tree whose root had the balance of +1 before the insertion. (Imagine rotating the edge connecting the root and its left child to the right.)


 Single left rotation (L-rotation):
 This is the mirror image of the single R-rotation.
 This rotation is performed after a new key is inserted into the right subtree of the right child of a tree whose root had the balance of −1 before the insertion. (Imagine rotating the edge connecting the root and its right child to the left.)


 Double left-right rotation (LR-rotation):
It is performed after a new key is inserted into the right subtree of the left child of a tree whose root had the balance of +1 before the insertion.
 It is a combination of two rotations: we perform the L-rotation of the left subtree of root r followed by the R-rotation of the new tree rooted at r.


 Double right-left rotation (RL-rotation):
 It is the mirror image of the double LR-rotation.
 It is performed after a new key is inserted into the left subtree of the right child of a tree whose root had the balance of −1 before the insertion.


Note:
 The balance factor of a node is 0 if and only if the heights of its left and right subtrees are the same.
 The balance factor of a node is +1 if the height of the left subtree is one more than the height of the right subtree (left-heavy).
 The balance factor of a node is −1 if the height of the right subtree is one more than the height of the left subtree (right-heavy).
 The height of the empty tree is −1.
 If there are several nodes with a ±2 balance, the rotation is done for the tree rooted at the unbalanced node that is the closest to the newly inserted leaf.

Efficiency of AVL trees

 The height h of any AVL tree with n nodes satisfies the inequalities
⌊log2 n⌋ ≤ h < 1.4405 log2(n + 2) − 1.3277.
 These inequalities imply that the operations of searching and insertion are Θ(log n) in the worst case.
 The operation of key deletion in an AVL tree is considerably more difficult than insertion, but fortunately it turns out to be in the same efficiency class as insertion, i.e., logarithmic.
 The drawbacks of AVL trees are frequent rotations, the need to maintain balance factors for the tree’s nodes, and overall complexity, especially of the deletion operation.


Example: Construct an AVL tree for the following sequence of elements: {100, 200, 300, 250, 270, 70, 40}.
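
Soln (a step-by-step trace under the rotation rules above; each rotation is triggered at the unbalanced node closest to the newly inserted leaf):
 Insert 100, then 200: no rotation needed.
 Insert 300: the balance of 100 becomes −2; an L-rotation at 100 makes 200 the root, with children 100 and 300.
 Insert 250: it becomes the left child of 300; no rotation needed.
 Insert 270: it goes into the right subtree of 250, and the balance of 300 becomes +2; since the new key is in the right subtree of the left child, an LR-rotation at 300 makes 270 the subtree root, with children 250 and 300.
 Insert 70: it becomes the left child of 100; no rotation needed.
 Insert 40: the balance of 100 becomes +2; since the new key is in the left subtree of the left child, an R-rotation at 100 makes 70 the subtree root, with children 40 and 100.
Final AVL tree: root 200, left subtree 70 (children 40 and 100), right subtree 270 (children 250 and 300).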


Heaps

Notion of the Heap

DEFINITION: A heap can be defined as a binary tree with keys assigned to its nodes (one key per node), provided the following two conditions are met:
1. The tree’s shape requirement – the binary tree is almost complete (or complete), i.e., all its levels are full except possibly the last level, where only some rightmost leaves may be missing.
2. The parental dominance requirement –
 the key at each node is greater than or equal to the keys at its children (we call this variation a max-heap), or
 the key at each node is smaller than or equal to the keys at its children (we call this variation a min-heap).


Important properties of Heaps

1. There exists exactly one almost complete binary tree with n nodes. Its height is equal to ⌊log2 n⌋.
2. The root of a heap always contains its largest element.
3. A node of a heap considered with all its descendants is also a heap.
4. A heap can be implemented as an array by recording its elements in the top-down, left-to-right fashion. It is convenient to store the heap’s elements in positions 1 through n of such an array, leaving H[0] unused. In such a representation,
a. the parental node keys will be in the first ⌊n/2⌋ positions of the array, while the leaf keys will occupy the last ⌈n/2⌉ positions;
b. the children of a key in the array’s parental position i (1 ≤ i ≤ ⌊n/2⌋) will be in positions 2i and 2i + 1, and, correspondingly, the parent of a key in position i (2 ≤ i ≤ n) will be in position ⌊i/2⌋.

Thus, we could also define a heap as an array H[1..n] in which every element in position i in the first half of the array is greater than or equal to the elements in positions 2i and 2i + 1, i.e.,
H[i] ≥ max{H[2i], H[2i + 1]} for i = 1, . . . , ⌊n/2⌋
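
As a quick check of this array definition, here is a small Python sketch (the function name and the test array are ours; index 0 is left unused to match the 1-based convention):

def is_max_heap(H):
    # H[0] is unused; the keys occupy H[1..n].
    n = len(H) - 1
    for i in range(1, n // 2 + 1):                  # parental positions only
        if H[i] < H[2 * i]:                         # left child always exists here
            return False
        if 2 * i + 1 <= n and H[i] < H[2 * i + 1]:  # right child, if any
            return False
    return True

print(is_max_heap([None, 9, 6, 8, 2, 5, 7]))  # True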


Construction of a Heap

Bottom-up heap construction algorithm:

 It initializes the almost complete binary tree with n nodes by placing keys in the order given and then “heapifies” the tree as follows:
 Starting with the last parental node, the algorithm checks whether the
parental dominance holds for the key at this node.
 If it doesn’t, the algorithm exchanges the node’s key K with the larger key
of its children and checks whether the parental dominance holds for K in
its new position.
 This process continues until the parental dominance requirement for K is
satisfied.
 After completing the “heapification” of the subtree rooted at the current
parental node, the algorithm proceeds to do the same for the node’s
immediate predecessor.
 The algorithm stops after this is done for the tree’s root.

Example: Construct a max-heap for the list {2, 9, 7, 6, 5, 8} using the bottom-up heap construction algorithm.
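
Soln (successive array states; the parental positions 3, 2, 1 are heapified in turn):
2 9 7 6 5 8 → 2 9 8 6 5 7 (7 is exchanged with its larger child 8)
→ 9 2 8 6 5 7 (2 is exchanged with its larger child 9)
→ 9 6 8 2 5 7 (2 is exchanged with its larger child 6)
The resulting max-heap is 9 6 8 2 5 7.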


Bottom-up Heap Construction Algorithm:
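
ALGORITHM HeapBottomUp(H[1..n])
//Constructs a heap from the elements of a given array
//by the bottom-up algorithm
//Input: An array H[1..n] of orderable items
//Output: A heap H[1..n]
for i ← ⌊n/2⌋ downto 1 do
    k ← i; v ← H[k]
    heap ← false
    while not heap and 2 * k ≤ n do
        j ← 2 * k
        if j < n              //there are two children
            if H[j] < H[j + 1] j ← j + 1
        if v ≥ H[j]
            heap ← true
        else H[k] ← H[j]; k ← j
    H[k] ← v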

Efficiency of Bottom-up algorithm in the worst case

 Assume, for simplicity, that n = 2^k − 1, so that the heap’s tree is full, i.e., the maximum number of nodes occurs on each level.
 Let h be the height of the tree, where h = ⌊log2 n⌋ (or just log2(n + 1) − 1 = k − 1 for such a tree).

 Each key on level i of the tree will traverse to the leaf level h in the worst
case of the heap construction algorithm.

 Since moving to the next level down requires two comparisons – one to find the larger child and the other to determine whether the exchange is required – the total number of key comparisons involving a key on level i will be 2(h − i).

 Therefore, the total number of key comparisons in the worst case will be

Cworst(n) = Σ (i=0 to h−1) Σ (keys on level i) 2(h − i) = Σ (i=0 to h−1) 2(h − i) 2^i = 2(n − log2(n + 1))

 Thus, with this bottom-up algorithm, a heap of size n can be constructed with fewer than 2n comparisons.


Top-down Heap Construction Algorithm

 The alternative (and less efficient) algorithm constructs a heap by successive insertions of a new key into a previously constructed heap; it is called the top-down heap construction algorithm.
 To insert a new key K into a heap, first attach a new node with key K in it
after the last leaf of the existing heap.
 Then shift K up to its appropriate place in the new heap as follows:
 Compare K with its parent’s key: if the latter is greater than or
equal to K, stop; otherwise, swap these two keys and compare K
with its new parent.
 This swapping continues until K is not greater than its last parent
or it reaches the root.

Example: Construct a max-heap for the list {2, 9, 7, 6, 5, 8} using the top-down heap construction algorithm.

Soln (successive array states as the keys are inserted and sifted up):
2 → 9 2 (9 is swapped with its parent 2)
→ 9 2 7 → 9 6 7 2 (6 is swapped with its parent 2)
→ 9 6 7 2 5 → 9 6 8 2 5 7 (8 is swapped with its parent 7)
The resulting max-heap is 9 6 8 2 5 7.


Deletion from a Heap

 Consider the deletion of the root’s key:

Maximum key deletion from a heap
Step 1: Exchange the root’s key with the last key K of the heap.
Step 2: Decrease the heap’s size by 1.
Step 3: “Heapify” the smaller tree by sifting K down the tree exactly as in the bottom-up heap construction algorithm, i.e., verify the parental dominance for K: if it holds, we are done; if not, swap K with the larger of its children and repeat this operation until the parental dominance condition holds for K in its new position.

Efficiency of top-down heap construction and deletion from a heap

 The insertion operation cannot require more key comparisons than the heap’s height. Since the height of a heap with n nodes is about log2 n, the time efficiency of insertion is in O(log n).
 The efficiency of deletion is determined by the number of key comparisons needed to “heapify” the tree after the swap has been made and the size of the tree is decreased by 1. The time efficiency of deletion is in O(log n) as well.


Heapsort

 This sorting algorithm was discovered by J. W. J. Williams.
 It is a two-stage algorithm that works as follows:
Stage 1 (Heap construction): Construct a heap for a given array.
Stage 2 (Maximum deletions): Apply the root-deletion operation n − 1 times to the remaining heap.
 The array elements are eliminated in decreasing order.
 But since under the array implementation of heaps, an element
being deleted is placed last, the resulting array will be exactly the
original array sorted in ascending order.
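
A compact Python sketch of the two stages (0-indexed arrays, so the children of position k are at 2k + 1 and 2k + 2; the names are ours):

def sift_down(H, k, n):
    # Restore parental dominance for the key at index k in H[0..n-1].
    v = H[k]
    while 2 * k + 1 < n:
        j = 2 * k + 1
        if j + 1 < n and H[j + 1] > H[j]:
            j += 1                        # j is the larger child
        if v >= H[j]:
            break
        H[k] = H[j]
        k = j
    H[k] = v

def heapsort(A):
    n = len(A)
    for k in range(n // 2 - 1, -1, -1):   # stage 1: bottom-up heap construction
        sift_down(A, k, n)
    for end in range(n - 1, 0, -1):       # stage 2: n - 1 root deletions
        A[0], A[end] = A[end], A[0]       # the deleted root lands in its final place
        sift_down(A, 0, end)

A = [2, 9, 7, 6, 5, 8]
heapsort(A)
print(A)  # [2, 5, 6, 7, 8, 9]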

Example: Sort the array {2, 9, 7, 6, 5, 8} in ascending order using heapsort.


Analysis of Heapsort using the bottom-up approach

 The heap construction stage of the algorithm is in O(n).

 Now it remains to analyze only the second stage. (For a heap of height h = ⌊log2 n⌋, reconstructing the heap after a swap takes at most 2h = 2⌊log2 n⌋ key comparisons.)
 Observe that after exchanging the root with the (n−1)th item, we have to reconstruct the heap for the remaining n − 1 elements, at a cost that depends only on the height of the tree.
 So, reconstructing the heap for n − 1 elements takes at most 2⌊log2(n − 1)⌋ comparisons.
 After exchanging the 0th item with the (n−2)th item, reconstructing the heap takes at most 2⌊log2(n − 2)⌋ comparisons.
 After exchanging the 0th item with the (n−3)th item, it takes at most 2⌊log2(n − 3)⌋ comparisons.
. . .
 After exchanging the 0th item with the 1st item, it takes at most 2⌊log2 1⌋ comparisons.

 So, the number of key comparisons, C(n), needed for eliminating the root keys from the heaps of diminishing sizes from n down to 2 satisfies the inequality

C(n) ≤ 2⌊log2(n − 1)⌋ + 2⌊log2(n − 2)⌋ + . . . + 2⌊log2 1⌋ ≤ 2 Σ (i=1 to n−1) log2 i ≤ 2(n − 1) log2 n

Therefore, the time complexity of the second stage is in O(n log2 n).

 So, the time complexity of heapsort =
time complexity of stage 1 + time complexity of stage 2
= O(n) + O(n log2 n)
= O(max{n, n log2 n})
= O(n log2 n)

Therefore, the time complexity of heapsort is O(n log2 n).

************* END OF UNIT - 3 *************
