
1. Sub-trees of each node can differ by at most 1 in their height.
2. Every sub-tree is an AVL tree.
3. All the nodes are balanced, i.e.
 An internal node is balanced if the heights of its two children differ by at most 1.
 Otherwise, such an internal node is unbalanced.
 AVL trees are height-balanced binary search trees.

 Balance factor of a node
◦ height(left subtree) - height(right subtree)
 An AVL tree has a balance factor calculated at every node:
◦ height of node = h
◦ balance factor = h(left) - h(right)
◦ height of an empty tree = -1

[Example trees: each node is annotated with its height and balance factor; e.g. a node whose left subtree has height 1 and right subtree height 0 has height 2 and BF = 1 - 0 = 1.]
An insert operation may cause the balance factor to become 2 or -2 for some node. We resort to techniques called rotations to restore the balance of the search tree.

Let the node that needs rebalancing be A.

 Outside cases (require a single rotation):
◦ 1. LL - insertion into the left subtree of the left child of A.
◦ 2. RR - insertion into the right subtree of the right child of A.
 Inside cases (require a double rotation):
◦ 3. LR - insertion into the right subtree of the left child of A.
◦ 4. RL - insertion into the left subtree of the right child of A.
The time complexity of an insertion operation in an AVL tree is O(height) = O(log n).
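As an illustration, the single right rotation that repairs the LL case can be sketched in C. This is a minimal sketch, assuming a node struct that stores its height (all names are mine, not from the slides):

```c
#include <stdlib.h>

/* Minimal AVL node; names are illustrative. */
struct node {
    int key;
    int height;              /* empty tree has height -1, so a leaf has height 0 */
    struct node *left, *right;
};

static int height(struct node *n) { return n ? n->height : -1; }
static int max(int a, int b)      { return a > b ? a : b; }

static void update(struct node *n) {
    n->height = 1 + max(height(n->left), height(n->right));
}

struct node *make(int key, struct node *l, struct node *r) {
    struct node *n = malloc(sizeof *n);
    n->key = key; n->left = l; n->right = r;
    update(n);
    return n;
}

/* Single right rotation: fixes the LL case, where node `a` has
   balance factor +2 because the insertion went into the left
   subtree of a's left child. */
struct node *rotate_right(struct node *a) {
    struct node *b = a->left;
    a->left  = b->right;     /* b's right subtree moves under a */
    b->right = a;            /* a becomes b's right child       */
    update(a);               /* recompute heights bottom-up     */
    update(b);
    return b;                /* b is the new root of this subtree */
}
```

The LL case on the chain 30 ← 20 ← 10 turns into a balanced tree rooted at 20; the symmetric left rotation handles RR, and the double rotations for LR/RL compose two such single rotations.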
Hashing
• In computer science, a hash table (or hash map) is a data structure that associates keys with values.

• It works by transforming the key, using a hash function, into a hash: a number that is used as an index into an array to locate the position where the value should be stored.
The efficient techniques for searching are those in which the number of comparisons is minimum.

The general idea of HASHING is to use the key to determine the address of a record.

The searching algorithm is then essentially independent of the number of elements, unlike the methods of the previous sections.
A hash function is made as per need; it depends on the input data and the probability distribution in the intended application.

It should have the following properties:

1. Quick - searching should take place with a minimum of comparisons.
2. Uniform - keys should be evenly distributed, for minimum collisions.
3. Deterministic - for a given input, the function must always generate the same value.
1. Direct Method - directly use the key as the index into the table, if the key is an integer and its value is small.

Position: 0 1 2 3 4 5 6 7
Keys 1, 2, 4, 5 and 7 occupy positions 1, 2, 4, 5 and 7 respectively; positions 0, 3 and 6 are empty.
2. Division Method - hash function H(k) = k modulo m. Here let m be 10, so H(121) = 1, etc.

Position: 0→100, 1→121, 2→112, 3→143, 5→105, 8→118 (positions 4, 6 and 7 are empty).
3. Midsquare Method - the key k is squared. The hash function is H(k) = L, where L is obtained by deleting digits from both ends of k². The same digit positions must be used for all keys.

k:    3205        7148        2345
k²:   10 272 025  51 093 904  5 499 025
H(k): 72          93          99
• Now apply the division method; let us take h(k) = k mod 4:

k:    3205  7148  2345
H(k): 72    93    99

72 mod 4 = 0, 93 mod 4 = 1, 99 mod 4 = 3, so the final array is:

Index: 0→3205, 1→7148, 3→2345 (slot 2 is empty).
4. Folding Method - the key is partitioned into a number of parts k1, ..., kr, each part having the same number of digits. These parts are added together, the last carry is ignored, and we get the hash function H(k) = k1 + k2 + ... + kr.

K:    3205  7148  2345
K1:   32    71    23
K2:   05    48    45
H(K): 37    19    68

(e.g. 71 + 48 = 119; ignoring the carry leaves 19.)
• Again applying the division method with h(k) = k mod 4:

K:    3205  7148  2345
H(K): 37    19    68

37 mod 4 = 1, 19 mod 4 = 3, 68 mod 4 = 0, so the final array is:

Index: 0→2345, 1→3205, 3→7148 (slot 2 is empty).
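These three hash functions can be written out directly in C. A minimal sketch (function names are mine; the midsquare digit positions are hard-coded to match the worked example above):

```c
/* Division method: h(k) = k mod m. */
unsigned division_hash(unsigned k, unsigned m) { return k % m; }

/* Midsquare method: square the key and keep two digits from the
   middle of k^2 (here: drop the last 3 digits, then keep 2),
   mimicking the slides' examples. The exact positions are a
   design choice but must be the same for every key. */
unsigned midsquare_hash(unsigned long k) {
    unsigned long sq = k * k;
    return (unsigned)((sq / 1000) % 100);
}

/* Folding method: split the key into 2-digit parts, add them,
   and ignore the final carry (i.e. keep the sum mod 100). */
unsigned folding_hash(unsigned k) {
    unsigned sum = 0;
    while (k > 0) {
        sum += k % 100;   /* take the lowest 2-digit part */
        k /= 100;         /* move to the next part        */
    }
    return sum % 100;     /* dropping the carry           */
}
```

With these definitions, midsquare_hash reproduces 72, 93, 99 and folding_hash reproduces 37, 19, 68 for the keys 3205, 7148, 2345.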
Suppose we have two keys k1 and k2 such that H(k1) equals H(k2). When k1 is entered in the table, it gets inserted at position H(k1). But when k2 is hashed, an attempt may be made to insert it at the same position as k1, which is not possible.

This situation is called a hash collision or hash clash.
1. Rehashing (or open addressing) - it involves a secondary hash function on the hash key of the item.

2. Chaining - it builds a linked list of all items whose keys hash to the same value.


Open addressing hash tables store the records directly within the array.

A hash collision is resolved by probing, i.e. searching through alternate locations in the array (following a probe sequence) until either the target record is found, or an unused array slot is found, which indicates that there is no such key in the table.
Suppose that a new record R with key k is to be added to the memory table T, but the memory location with hash address H(k) is already filled.

1. Linear Probing - assign R to the first available location following T[H(k)]. To search for this record, linearly search the locations

h(k, i) = (h(k) + i) mod m

where i is the i-th probe. The disadvantage is that records tend to cluster.
2. Quadratic Probing - quadratic probing operates by taking the original hash value and adding successive values of an arbitrary quadratic polynomial to the starting value. Let i be the i-th probe position:

h(k, i) = (h(k) + c1·i + c2·i²) mod m

3. Double Hashing - here a second hash function is used for resolving a collision:

h(k, j) = (h1(k) + j·h2(k)) mod m
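A linear-probing table can be sketched as follows, assuming non-negative integer keys and -1 as the "empty" marker (both assumptions are mine, not from the slides):

```c
#define M 13
#define EMPTY (-1)

/* Open-addressing insert with linear probing:
   try slots h(k, i) = (k mod M + i) mod M for i = 0, 1, 2, ... */
void probe_insert(int table[M], int k) {
    int i;
    for (i = 0; i < M; i++) {
        int pos = (k % M + i) % M;
        if (table[pos] == EMPTY) {
            table[pos] = k;
            return;
        }
    }
    /* table full: insertion silently fails in this sketch */
}

/* Search follows the same probe sequence; an EMPTY slot proves
   the key is absent, because insertion would have used it. */
int probe_search(const int table[M], int k) {
    int i;
    for (i = 0; i < M; i++) {
        int pos = (k % M + i) % M;
        if (table[pos] == EMPTY) return -1;  /* not present */
        if (table[pos] == k)     return pos; /* found at pos */
    }
    return -1;
}
```

Inserting the keys 41 18 44 59 32 22 31 73 from the example below places them at slots 2, 5, 6, 7, 8, 9, 10 and 11 respectively.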


 h(k) = k mod 13
 Insert keys: 41 18 44 59 32 22 31 73

With linear probing the keys end up at:

Index: 2→41, 5→18, 6→44, 7→59, 8→32, 9→22, 10→31, 11→73 (all other slots empty).

(44 collides with 18 at slot 5 and probes to 6; 32 probes 6→7→8; 31 probes 5→6→7→8→9→10; 73 probes 8→9→10→11.)
 h(k) = k mod 8
 Insert keys: 41, 51, 66, 74

Index: 1→41, 2→66, 3→51, 4→74 (all other slots empty).

(74 hashes to 2, which is occupied, probes 3, also occupied, and lands in slot 4.)
• h1(K) = K mod 13
• h2(K) = 3 + K mod 4
– we want h2 to be an offset to add
– Insert keys: 18 41 22 44 59 32 45 47

Index: 1→47, 2→41, 5→18, 6→32, 7→59, 8→44, 9→22, 10→45 (all other slots empty).

(44 collides at 5 and jumps by h2(44) = 3 to slot 8; 45 collides at 6 and jumps by h2(45) = 4 to slot 10; 47 collides at 8 and jumps by h2(47) = 6 to slot (8 + 6) mod 13 = 1.)
• In its simplest form, each slot in the array is a linked list, or the head cell of a linked list, where the list contains the elements that are hashed to the same location.
h(k) = k mod 10

Insert 2 3 5 7 8 122 65 88 12 25 15:

Slot 2: 2 → 122 → 12
Slot 3: 3
Slot 5: 5 → 65 → 25 → 15
Slot 7: 7
Slot 8: 8 → 88
h(k) = (mid k²) mod 5

Insert 16 13 15 17 12 19:

H(16) = mid(256) mod 5 = 5 mod 5 = 0
H(13) = mid(169) mod 5 = 6 mod 5 = 1
H(19) = mid(361) mod 5 = 6 mod 5 = 1

Slot 0: 16
Slot 1: 13 → 19
Slot 2: 15
Slot 3: 17
Slot 4: 12
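A minimal chaining table in C might look like this (the names and the head-insertion policy are illustrative choices, not from the slides):

```c
#include <stdlib.h>

#define M 10

/* Separate chaining: each slot heads a linked list of the keys
   that hash to it with h(k) = k mod M. */
struct chain_node {
    int key;
    struct chain_node *next;
};

/* Push the new key at the head of its chain (O(1) insert). */
void chain_insert(struct chain_node *table[M], int k) {
    struct chain_node *n = malloc(sizeof *n);
    n->key = k;
    n->next = table[k % M];
    table[k % M] = n;
}

/* Walk only the one chain the key can be in. Returns 1 if found. */
int chain_search(struct chain_node *const table[M], int k) {
    const struct chain_node *p;
    for (p = table[k % M]; p != NULL; p = p->next)
        if (p->key == k)
            return 1;
    return 0;
}
```

Unlike open addressing, the table never fills up; performance degrades gracefully as the chains grow.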
SORTING

Sorting means arranging a set of data in some order. There are different methods that are used to sort the data in ascending or descending order.
TYPES OF SORTING
 Bubble sort
 Selection sort
 Quick sort
 Insertion sort
 Heap sort
 Merge sort
 Radix sort
INSERTION SORT

Let a0, ..., an-1 be the sequence to be sorted. At the beginning and after each iteration of the algorithm, the sequence consists of two parts: the first part a0, ..., ai-1 is already sorted, the second part ai, ..., an-1 is still unsorted (i = 0, ..., n).

In order to insert element ai into the sorted part, it is compared with ai-1, ai-2, etc. When an element aj with aj ≤ ai is found, ai is inserted behind it. If no such element is found, then ai is inserted at the beginning of the sequence.

After inserting element ai, the length of the sorted part has increased by one. In the next iteration, ai+1 is inserted into the sorted part, etc. While at the beginning the sorted part consists of element a0 only, at the end it consists of all elements a0, ..., an-1.
Example

Initial array: 25 17 31 13 2

After the first iteration (insert 17):  17 25 31 13 2
After the second iteration (insert 31): 17 25 31 13 2
After the third iteration (insert 13):  13 17 25 31 2
After the fourth iteration (insert 2):  2 13 17 25 31

Sorted array: 2 13 17 25 31
INSERTION SORT: CODE

int i, j, temp;
for (i = 1; i < n; i++)
{
    temp = a[i];
    j = i;
    while (j > 0 && a[j-1] > temp)
    {
        a[j] = a[j-1];   /* shift larger elements one place right */
        j--;
    }
    a[j] = temp;         /* insert the element at its proper place */
}
Analysis Insertion Sort

The worst case occurs when in every step the proper position for the element that is inserted is found at the beginning of the sorted part of the sequence, i.e. in the while-loop, sequences of length 1, 2, 3, ..., n-1 are scanned. Altogether, these are (n-1)·n/2 ∈ Θ(n²) operations. This case occurs when the original sequence is sorted in decreasing order.

It is possible to find the inserting position of element ai faster, namely by binary search. However, moving the elements to the right in order to make room for the element to be inserted takes linear time anyway.

The exact number of steps required by insertion sort is given by the number of inversions of the sequence.

Definition: Let a = a0, ..., an-1 be a finite sequence. An inversion is a pair of index positions where the elements of the sequence are out of order. Formally, an inversion is a pair (i, j) where i < j and ai > aj.
Analysis Insertion Sort

Example: Let a = 5, 7, 4, 9, 7. Then (0, 2) is an inversion, since a0 > a2, namely 5 > 4. Also, (1, 2) is an inversion, since 7 > 4, and (3, 4) is an inversion, since 9 > 7. There are no other inversions in this sequence.

We now count the inversions (i, j) of a sequence a separately for each position j. The resulting value vj gives the number of elements ai to the left of aj which are greater than aj.

For instance, for the sequence a = 5, 7, 4, 9, 7 we have v2 = 2, since the two inversions (0, 2) and (1, 2) have 2 as their second component. Here, 5 and 7 are greater than 4.

Definition: The inversion sequence v = v0, ..., vn-1 of a sequence a = a0, ..., an-1 is defined by

vj = |{ (i, j) | i < j and ai > aj }|

for j = 0, ..., n-1.
Analysis Insertion Sort

Example: The above sequence a = 5, 7, 4, 9, 7 has the inversion sequence v = 0, 0, 2, 0, 1.

Obviously, we have vi ≤ i for all i = 0, ..., n-1. If and only if all vi are 0, the sequence a is sorted. If the sequence a is a permutation, then it is uniquely determined by its inversion sequence v. The permutation n-1, ..., 0 has inversion sequence 0, ..., n-1.

Theorem: Let a = a0, ..., an-1 be a sequence and v = v0, ..., vn-1 be its inversion sequence. Then the number of steps T(a) required by insertion sort to sort sequence a is

T(a) = Σ(i = 0, ..., n-1) vi

Proof: Obviously, insertion sort takes vi steps in each iteration i. Thus, the overall number of steps is equal to the sum of all vi.
Analysis Insertion Sort

Example: The following table shows the sequence a of the first example and its corresponding inversion sequence. For instance, v5 = 4, since 4 elements to the left of a5 = 2 are greater than 2 (namely 5, 7, 3 and 4). The total number of inversions is 17.

i:  0 1 2 3 4 5 6 7
ai: 5 7 0 3 4 2 6 1
vi: 0 0 2 2 2 4 1 6

Thus, insertion sort requires 17 steps to sort this sequence.
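The inversion sequence is easy to compute directly; a small sketch (the function name is mine) that can be checked against the examples above:

```c
/* Fill v[j] with the number of elements to the left of a[j] that
   are greater than a[j], and return the sum of all v[j] -- i.e.
   the number of shifting steps insertion sort performs on a. */
long inversion_total(const int a[], int n, int v[]) {
    long total = 0;
    int i, j;
    for (j = 0; j < n; j++) {
        v[j] = 0;
        for (i = 0; i < j; i++)
            if (a[i] > a[j])   /* (i, j) is an inversion */
                v[j]++;
        total += v[j];
    }
    return total;
}
```

For a = 5, 7, 0, 3, 4, 2, 6, 1 this yields v = 0, 0, 2, 2, 2, 4, 1, 6 and a total of 17, as in the table.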
SELECTION SORT

Find the smallest element in the list of n elements and put it in the first position. Find the smallest in the remaining sublist of n-1 elements and put it at the 2nd position. Continue this for the whole list.
SELECTION SORT

The idea behind selection sort is that to sort N items you make N passes through them. On the first pass you find the largest value and then swap it with the last item; thus the largest item is now in position N. On the second pass you scan through just the first N-1 entries; the largest of those items is swapped with the item in position N-1. Hence the largest item overall is now in the last position, and the second largest item is now in the second last position.

This process is repeated, with one item being placed in its correct location each time. After N passes, the entire collection of data is sorted. A simple variation is to find the smallest item each time and put it at the front. To sort into descending order, the largest item is found each time and moved to the front.
FIRST ITERATION (the front element is swapped whenever a smaller element is found):

25 17 31 13 2 → 17 25 31 13 2 → 13 25 31 17 2 → 2 25 31 17 13

AFTER FIRST ITERATION:  2 25 31 17 13
AFTER SECOND ITERATION: 2 13 31 25 17
AFTER THIRD ITERATION:  2 13 17 31 25
AFTER FOURTH ITERATION: 2 13 17 25 31
SELECTION SORT: ALGORITHM

for outer in (list start .. list end - 1) loop
    highest := outer;                 -- starting highest value
    for inner in (outer .. list end) loop
        if (list(inner) > list(highest)) then
            highest := inner          -- new highest value
        end if
    end loop
    swap list(highest) and list(outer)
end loop
ALGORITHM: SELECTION SORT

void selection_sort (int arr[], int size)
{
    int temp;
    for (int i = 0; i < size-1; i++)
    {
        for (int j = i+1; j < size; j++)
        {
            if (arr[i] > arr[j])   /* '>' sorts ascending, as in the example */
            {
                temp = arr[i];
                arr[i] = arr[j];
                arr[j] = temp;
            }
        }
    }
}
Analysis of SELECTION SORT

The number of comparisons and swaps that selection sort requires in the worst case, where the number of items to be sorted is represented by n:

# of swaps = (n-1)
# of comparisons = (n-1) + (n-2) + ... + 1

For n = 6:
# of comparisons = (n-1) + (n-2) + (n-3) + (n-4) + (n-5)
                 = (6-1) + (6-2) + (6-3) + (6-4) + (6-5)
                 = 5 + 4 + 3 + 2 + 1
                 = 15
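The swap bound of n-1 holds for the variant that remembers the index of the minimum and swaps once per pass; the code shown earlier swaps whenever it finds a smaller element. A sketch of the single-swap variant (ascending order; names are mine):

```c
/* Selection sort that tracks the index of the minimum and does a
   single swap per pass, so the total number of swaps is at most
   n-1, matching the analysis above. */
void selection_sort_min(int a[], int n) {
    int i, j, min, temp;
    for (i = 0; i < n - 1; i++) {
        min = i;                        /* index of smallest so far */
        for (j = i + 1; j < n; j++)
            if (a[j] < a[min])
                min = j;
        if (min != i) {                 /* one swap per pass */
            temp = a[i];
            a[i] = a[min];
            a[min] = temp;
        }
    }
}
```

The comparison count is unchanged; only the number of data movements drops.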
Bubble Sort

Compare the first two elements; if out of order, exchange them to put them in order. Move down one element, compare the 2nd and 3rd elements, and exchange if necessary. Continue until the end of the array. Then pass through the array again, exchanging as necessary.

 Each pair of adjacent elements is compared and swapped, until the smallest element "bubbles" to the top.
 Repeat this process, each time stopping one indexed element less, until you compare only the first (or last) two elements in the list.
 The largest element will have "sunk" to the bottom.
A Bubble Sort Example

List: 3 32 16 48 5

1st Pass. We start by comparing the first two elements in the List.
Compare 3 and 32: in order. Compare 32 and 16: out of order, swap → 3 16 32 48 5. Compare 32 and 48: in order. Compare 48 and 5: out of order, swap → 3 16 32 5 48.
As you can see, the largest number, 48, has "bubbled" down, or sunk to the bottom of the List after the first pass through the List.

2nd Pass. For our second pass through the List, we again start by comparing the first two elements.
Compare 3 and 16: in order. Compare 16 and 32: in order. Compare 32 and 5: out of order, swap → 3 16 5 32 48.
At the end of the second pass, we stop at element number n - 1 (n = 5 here), because the largest element in the list is already in the last position. This places the second largest element in the second to last spot.

3rd Pass. Compare 3 and 16: in order. Compare 16 and 5: out of order, swap → 3 5 16 32 48.
At the end of the third pass, we stop comparing and swapping at element number n - 2.

4th Pass. The last pass (the 4th pass here) compares only the first two elements of the List: 3 and 5 are in order. After this comparison and possible swap, the smallest element has "bubbled" to the top. The List is sorted: 3 5 16 32 48.
void bubble_sort (int arr[], int size)
{
    int temp;
    for (int i = 0; i < size; i++)
    {
        for (int j = 0; j < (size-1)-i; j++)
        {
            if (arr[j] > arr[j+1])   /* '>' sorts ascending, as in the example */
            {
                temp = arr[j];
                arr[j] = arr[j+1];
                arr[j+1] = temp;
            }
        }
    }
}
 Traditionally, the time for a sorting algorithm is measured in terms of comparisons.

 The number f(n) of comparisons is easily calculated: there are (n-1) comparisons during the first pass, (n-2) during the second pass, and so on:

f(n) = (n-1) + (n-2) + (n-3) + ... + 2 + 1 = n(n-1)/2 = n²/2 + O(n) = O(n²)

In other words, the time required to execute the bubble sort algorithm is proportional to n², where n is the number of input items.
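A runnable version of the bubble sort above, with an optional early-exit flag (the flag is my addition; it does not change the worst-case bound, but stops early on already-sorted input):

```c
/* Bubble sort in ascending order. If a whole pass makes no swap,
   the array is already sorted and we can stop early. */
void bubble_sort_asc(int a[], int n) {
    int i, j, temp, swapped;
    for (i = 0; i < n - 1; i++) {
        swapped = 0;
        for (j = 0; j < n - 1 - i; j++) {
            if (a[j] > a[j + 1]) {      /* out of order: exchange */
                temp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = temp;
                swapped = 1;
            }
        }
        if (!swapped)                    /* no swaps: already sorted */
            break;
    }
}
```

On the example list 3 32 16 48 5, the fourth pass makes no swap and the loop terminates.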
 Merge sort is based on the divide-and-conquer strategy:
 Divide the list into two smaller lists of about equal sizes.
 Sort each smaller list recursively.
 Merge the two sorted lists to get one sorted list.
The list 18 26 32 6 43 15 9 1 22 26 19 55 37 43 99 2 is repeatedly halved:

18 26 32 6 43 15 9 1 22 26 19 55 37 43 99 2
18 26 32 6 43 15 9 1 | 22 26 19 55 37 43 99 2
18 26 32 6 | 43 15 9 1 | 22 26 19 55 | 37 43 99 2
18 26 | 32 6 | 43 15 | 9 1 | 22 26 | 19 55 | 37 43 | 99 2
18 | 26 | 32 | 6 | 43 | 15 | 9 | 1 | 22 | 26 | 19 | 55 | 37 | 43 | 99 | 2
Merge Sort – Example

Merging the first half, 18 26 32 6 43 15 9 1, back into a sorted sequence:

18 | 26 | 32 | 6 | 43 | 15 | 9 | 1
18 26 | 6 32 | 15 43 | 1 9
6 18 26 32 | 1 9 15 43
1 6 9 15 18 26 32 43
MERGE SORT

 How do we divide the list?
 How do we merge the two sorted lists?

If the list is an array A[0..N-1], dividing takes O(1) time. We can represent a sublist by two integers left and right: to divide A[left..right], we compute

center = (left + right) / 2

and obtain A[left..center] and A[center+1..right].


 Input: two sorted arrays A and B
 Output: a sorted output array C
 Three counters: Actr, Bctr, and Cctr
◦ initially set to the beginning of their respective arrays

The smaller of A[Actr] and B[Bctr] is copied to the next entry in C, and the appropriate counters are advanced. When either input list is exhausted, the remainder of the other list is copied to C.
Pseudo Code for Merge-Sort

function merge_sort(m)
{
    int list left, right, result
    if length(m) ≤ 1
        return m

    int middle = length(m) / 2 - 1

    for each x in m up to middle
        add x to left
    for each x in m after middle
        add x to right

    left = merge_sort(left)
    right = merge_sort(right)
    result = merge(left, right)
    return result
}
function merge(left, right)
{
    int list result

    while length(left) > 0 and length(right) > 0
        if first(left) ≤ first(right)
            append first(left) to result
            left = rest(left)
        else
            append first(right) to result
            right = rest(right)
    end while

    while length(left) > 0
        append first(left) to result
        left = rest(left)
    while length(right) > 0
        append first(right) to result
        right = rest(right)

    return result
}
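The pseudocode above can be turned into a compact array-based C version (passing the temporary buffer in from the caller is my design choice, to avoid repeated allocation):

```c
#include <string.h>

/* Merge the two sorted halves a[left..center] and a[center+1..right]
   through a temporary buffer, then copy the result back. */
static void merge(int a[], int left, int center, int right, int tmp[]) {
    int i = left, j = center + 1, k = left;
    while (i <= center && j <= right)          /* copy the smaller front */
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= center) tmp[k++] = a[i++];     /* leftover left half     */
    while (j <= right)  tmp[k++] = a[j++];     /* leftover right half    */
    memcpy(a + left, tmp + left, (size_t)(right - left + 1) * sizeof a[0]);
}

/* Sort a[left..right]: split at center, sort both halves, merge. */
void merge_sort(int a[], int left, int right, int tmp[]) {
    if (left >= right)                          /* lists of length <= 1  */
        return;
    int center = (left + right) / 2;
    merge_sort(a, left, center, tmp);
    merge_sort(a, center + 1, right, tmp);
    merge(a, left, center, right, tmp);
}
```

On the first half of the example, 18 26 32 6 43 15 9 1, this produces 1 6 9 15 18 26 32 43, as shown above.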
Analysis of Merge Sort

Time for dividing: at the top level a list of size n is split; at the next level there are two lists of size n/2 (2 × n/2 = n), then four of size n/4 (4 × n/4 = n), and so on. There are log n levels, and the work at each level sums to n, so the total time for dividing is n log n.

Time for merging: likewise, the merging at each of the log n levels costs n in total, so the total time for merging is n log n.

Total running time: order of n log n.
QUICK SORT

Quick sort is a very efficient sorting algorithm invented by C.A.R. Hoare. It works on the Divide and Conquer rule.

 Quick-sort is a randomized sorting algorithm based on the divide-and-conquer paradigm:
◦ Divide: pick a random element x (called the pivot) and partition S into
 L - elements less than x
 E - elements equal to x
 G - elements greater than x
◦ Recur: sort L and G
◦ Conquer: join L, E and G
x
function quicksort(array)
    var list less, greater
    if length(array) ≤ 1
        return array
    select and remove a pivot value pivot from array
    for each x in array
        if x ≤ pivot then append x to less
        else append x to greater
    return concatenate(quicksort(less), pivot, quicksort(greater))
This version requires extra storage space, which is bad. The additional memory allocations required can also drastically impact speed and cache performance in practical implementations. So there is a more complex version which uses an in-place partition algorithm.
function partition(array, left, right, pivotIndex)
    pivotValue := array[pivotIndex]
    swap array[pivotIndex] and array[right]   // Move pivot to end
    storeIndex := left
    for i from left to right − 1
        if array[i] ≤ pivotValue
            swap array[i] and array[storeIndex]
            storeIndex := storeIndex + 1
    swap array[storeIndex] and array[right]   // Move pivot to its final place
    return storeIndex
IN-PLACE PARTITION

1. Partition the portion of the array between indexes left and right.
2. Move all elements less than or equal to a[pivotIndex] to the beginning of the subarray, leaving all the greater elements following them.
3. Find the final position of the pivot element.
4. Return the final position.

procedure quicksort(array, left, right)
    if right > left
        select a pivot index (e.g. pivotIndex := left)
        pivotNewIndex := partition(array, left, right, pivotIndex)
        quicksort(array, left, pivotNewIndex - 1)
        quicksort(array, pivotNewIndex + 1, right)
QuickSort(low, high) {
    if (high - low <= 1) return;
    pivot = MedianOf3(low, high);
    split = low;
    for (i = low; i < high-1; i++)
        if (a[i] < pivot)
        {
            swap a[i] and a[split];
            split++;
        }
    swap a[high-1] and a[split];
    QuickSort(low, split);
    QuickSort(split+1, high);
    return;
}

MedianOf3(low, high) {
    middle = (low + high) / 2;
    if (a[low] > a[middle])
        swap a[low] and a[middle];
    if (a[low] > a[high-1])
        swap a[low] and a[high-1];
    if (a[middle] < a[high-1])
        swap a[middle] and a[high-1];
    return a[high-1];   /* the median of the three ends up at a[high-1] */
}
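Putting the partition and quicksort procedures above together as runnable C (pivotIndex := left, as in the pseudocode's suggestion; helper names are mine):

```c
static void swap(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* In-place partition: move the pivot to the end, sweep elements
   <= pivot to the front, then put the pivot in its final place. */
static int partition(int a[], int left, int right, int pivot_index) {
    int pivot_value = a[pivot_index];
    int store = left, i;
    swap(&a[pivot_index], &a[right]);     /* move pivot to end */
    for (i = left; i < right; i++)
        if (a[i] <= pivot_value)
            swap(&a[i], &a[store++]);
    swap(&a[store], &a[right]);           /* pivot to final place */
    return store;
}

/* Recursive quicksort over a[left..right], inclusive bounds. */
void quicksort(int a[], int left, int right) {
    if (right > left) {
        int p = partition(a, left, right, left);  /* pivot = first element */
        quicksort(a, left, p - 1);
        quicksort(a, p + 1, right);
    }
}
```

Choosing the first element as pivot keeps the sketch short but gives the Θ(n²) worst case on already-sorted input; the median-of-3 selection above is one way to avoid that.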

 On average, quicksort makes Θ(n log n) comparisons to sort n items.
 In the worst case, it makes Θ(n²) comparisons.
 Typically, quicksort is significantly faster in practice than other Θ(n log n) algorithms, because its inner loop can be implemented efficiently on most real-world data, and it is possible to make design choices which minimize the number of comparisons required.
 In the best case, each time we perform a partition we divide the list into two nearly equal pieces, meaning each recursive call processes a list of half the size.
 We can make only log n nested calls before we reach a list of size 1.
 The depth of the call tree is Θ(log n).
 Each level of calls needs only Θ(n) time altogether.
 The result is that the algorithm uses only Θ(n log n) time.
A heap is a certain kind of complete binary tree.

When a complete binary tree is built, its first node must be the root. The second node is always the left child of the root, and the third node is always the right child of the root. The next nodes always fill the next level from left to right.
Example heap:

        45
    35      23
  27  21  22  4
 19

Each node in a heap contains a key that can be compared to other nodes' keys. The "heap property" requires that each node's key is >= the keys of its children.
Adding a node:
 Put the new node in the next available spot.
 Push the new node upward, swapping with its parent, until the new node reaches an acceptable location: either the parent has a key that is >= the new node, or the node reaches the root.
 The process of pushing the new node upward is called reheapification upward.

Example: inserting 42 into the heap above, it is first placed in the next available spot (as the second child of 27), then swaps with 27, then with 35, and stops below the root 45.
Removing the top of the heap:
 Move the last node onto the root.
 Push the out-of-place node downward, swapping with its larger child, until the node reaches an acceptable location: either the children all have keys <= the out-of-place node, or the node reaches a leaf.
 The process of pushing the node downward is called reheapification downward.

Example: after removing the root 45, the last node 27 is moved onto the root; it then swaps with its larger child 42, and then with 35, restoring the heap property.
Storing a heap in an array:
 We will store the data from the nodes in a partially-filled array.
 Data from the root goes in the first location of the array.
 Data from the next row goes in the next two array locations, and so on, row by row.
 We don't care what's in the rest of the array.

For the heap with root 42, children 35 and 23, and 27 and 21 below 35, the array is:

 42  35  23  27  21
[1] [2] [3] [4] [5]

 The links between the tree's nodes are not actually stored as pointers, or in any other way. The only way we "know" that "the array is a tree" is from the way we manipulate the data.
 If you know the index of a node, then it is easy to figure out the indexes of that node's parent and children.
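The index arithmetic can be stated explicitly. With 1-based indexes, as in the array picture above (the heap-sort code later in these notes uses the equivalent 0-based form, child = 2·parent + 1):

```c
/* Implicit tree navigation in a 1-based heap array:
   the tree links are pure index arithmetic, no pointers. */
int parent(int i)      { return i / 2; }       /* integer division */
int left_child(int i)  { return 2 * i; }
int right_child(int i) { return 2 * i + 1; }
```

E.g. for the array above, node 35 at [2] has children at [4] = 27 and [5] = 21, and the parent of [5] is [2].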


 A heap is a complete binary tree.
 There are two types of heaps:
 Max-heap: the value of any node is greater than or equal to the values of all its children.
 Min-heap: the value of any node is smaller than or equal to the values of all its children.
PHASES INVOLVED IN HEAP SORT

1. Construct a heap by adjusting the array elements.
2. Repeatedly eliminate the root element of the heap by shifting it to the end of the array, and then restore the heap structure with the remaining elements.
ARRAY & ITS EQUIVALENT BINARY TREE

Array: 11 2 9 13 57 25 17 1 90 3

Tree, level by level: root 11; children 2 and 9; below 2 are 13 and 57, below 9 are 25 and 17; below 13 are 1 and 90, below 57 is 3.
HEAP FROM BINARY TREE

Array: 90 57 25 13 11 9 17 1 2 3

Tree, level by level: root 90; children 57 and 25; below 57 are 13 and 11, below 25 are 9 and 17; below 13 are 1 and 2, below 11 is 3.

RESULT OF HEAP SORT

1 2 3 9 11 13 17 25 57 90
COMPLEXITY OF HEAP SORT

O(n log n)
HEAP SORT

HeapSort(size) {
    for (i = size/2; i >= 0; i--)
        ReHeap(size, i);
    for (i = size-1; i > 0; i--)
    {
        swap a[i] and a[0];
        ReHeap(i, 0);
    }
}

ReHeap(len, parent) {
    temp = a[parent];
    child = 2*parent + 1;
    while (child < len)
    {
        if (child < len-1 && a[child] < a[child+1])
            child++;
        if (temp >= a[child]) break;
        a[parent] = a[child];
        parent = child;
        child = 2*parent + 1;
    }
    a[parent] = temp;
    return;
}
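A runnable version of the routines above, with the array passed as a parameter instead of being global (the parameter style is my choice; the logic is unchanged):

```c
/* Sift a[parent] down within the first `len` elements until the
   max-heap property holds for that subtree. */
static void reheap(int a[], int len, int parent) {
    int temp = a[parent];
    int child = 2 * parent + 1;
    while (child < len) {
        if (child < len - 1 && a[child] < a[child + 1])
            child++;                     /* pick the larger child   */
        if (temp >= a[child])
            break;                       /* heap property restored  */
        a[parent] = a[child];
        parent = child;
        child = 2 * parent + 1;
    }
    a[parent] = temp;
}

void heap_sort(int a[], int size) {
    int i;
    for (i = size / 2; i >= 0; i--)      /* phase 1: build the heap */
        reheap(a, size, i);
    for (i = size - 1; i > 0; i--) {     /* phase 2: remove roots   */
        int t = a[i]; a[i] = a[0]; a[0] = t;
        reheap(a, i, 0);                 /* re-heap the shorter prefix */
    }
}
```

Applied to the array 11 2 9 13 57 25 17 1 90 3 from the earlier example, this yields 1 2 3 9 11 13 17 25 57 90.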
 9 6 7 1 15 12 8 3 10 14 13 11 4 5 2 
i
i

 9 6 7 1 15 12 8 3 10 14 13 11 4 5 2 
parent

 9 6 7 1 15 12 8 3 10 14 13 11 4 5 2 
parent child
 9 6 7 1 15 12 8 3 10 14 13 11 4 5 2 
i

 9 6 7 1 15 12 8 3 10 14 13 11 4 5 2 
parent

 9 6 7 1 15 12 8 3 10 14 13 11 4 5 2 
parent child
 9 6 7 1 15 12 8 3 10 14 13 11 4 5 2 
i

 9 6 7 1 15 12 8 3 10 14 13 11 4 5 2 
parent

 9 6 7 1 15 12 8 3 10 14 13 11 4 5 2 
parent child
 9 6 7 1 15 12 8 3 10 14 13 11 4 5 2 
i

 9 6 7 1 15 12 8 3 10 14 13 11 4 5 2 
parent

 9 6 7 1 15 12 8 3 10 14 13 11 4 5 2 
parent child
 9 6 7 1 15 12 8 3 10 14 13 11 4 5 2 
i

 9 6 7 1 15 12 8 3 10 14 13 11 4 5 2 
parent

 9 6 7 1 15 12 8 3 10 14 13 11 4 5 2 
parentchild

 9 6 7 1 15 12 8 3 10 14 13 11 4 5 2 
parent child
 9 6 7 10 15 12 8 3 10 14 13 11 4 5 2 
parent child

 9 6 7 10 15 12 8 3 10 14 13 11 4 5 2 
child
parent

 9 6 7 10 15 12 8 3 1 14 13 11 4 5 2 
parent child

 9 6 7 10 15 12 8 3 1 14 13 11 4 5 2 
i
 9 6 7 10 15 12 8 3 1 14 13 11 4 5 2 
parent

 9 6 7 10 15 12 8 3 1 14 13 11 4 5 2 
parent child

 9 6 12 10 15 12 8 3 1 14 13 11 4 5 2 
parent child

 9 6 7 10 15 12 8 3 1 14 13 11 4 5 2 
child
parent
 9 6 12 10 15 11 8 3 1 14 13 11 4 5 2 
parent child

 9 6 12 10 15 11 8 3 1 14 13 11 4 5 2 
child
parent

 9 6 12 10 15 11 8 3 1 14 13 11 4 5 2 
parent

 9 6 12 10 15 11 8 3 1 14 13 7 4 5 2 
parent
 9 6 12 10 15 11 8 3 1 14 13 7 4 5 2 
i

 9 6 12 10 15 11 8 3 1 14 13 7 4 5 2 
parent child

 9 6 12 10 15 11 8 3 1 14 13 7 4 5 2 
parent child

 9 15 12 10 15 11 8 3 1 14 13 7 4 5 2 
parent child
 9 15 12 10 15 11 8 3 1 14 13 7 4 5 2 
child
parent

 9 15 12 10 14 11 8 3 1 14 13 7 4 5 2 
parent child

 9 15 12 10 14 11 8 3 1 14 13 7 4 5 2 
child
parent

 9 15 12 10 14 11 8 3 1 14 13 7 4 5 2 
parent child
 9 15 12 10 14 11 8 3 1 6 13 7 4 5 2 
parent child

 9 15 12 10 14 11 8 3 1 6 13 7 4 5 2 
i

 9 15 12 10 14 11 8 3 1 6 13 7 4 5 2 
parent

 9 15 12 10 14 11 8 3 1 6 13 7 4 5 2 
parent child
 15 15 12 10 14 11 8 3 1 6 13 7 4 5 2 
parent child

 15 15 12 10 14 11 8 3 1 6 13 7 4 5 2 
child
parent

 15 15 12 10 14 11 8 3 1 6 13 7 4 5 2 
parent child

 15 15 12 10 14 11 8 3 1 6 13 7 4 5 2 
parent child
 15 14 12 10 14 11 8 3 1 6 13 7 4 5 2 
parent child

 15 14 12 10 14 11 8 3 1 6 13 7 4 5 2 
child
parent

 15 14 12 10 14 11 8 3 1 6 13 7 4 5 2 
parent child

 15 14 12 10 14 11 8 3 1 6 13 7 4 5 2 
parent child
 15 14 12 10 13 11 8 3 1 6 13 7 4 5 2 
parent child

 15 14 12 10 13 11 8 3 1 6 13 7 4 5 2 
child
parent

 15 14 12 10 13 11 8 3 1 6 9 7 4 5 2 
parent

 15 14 12 10 13 11 8 3 1 6 9 7 4 5 2 
i
 15 14 12 10 13 11 8 3 1 6 9 7 4 5 2 
i

 2 14 12 10 13 11 8 3 1 6 9 7 4 5 15 
i

 2 14 12 10 13 11 8 3 1 6 9 7 4 5 15 
parent

 2 14 12 10 13 11 8 3 1 6 9 7 4 5 15 
Parent child
 2 14 12 10 13 11 8 3 1 6 9 7 4 5 15 
Parent child

 14 14 12 10 13 11 8 3 1 6 9 7 4 5 15 
Child
parent

 14 14 12 10 13 11 8 3 1 6 9 7 4 5 15 
Child
parent

 14 14 12 10 13 11 8 3 1 6 9 7 4 5 15 
Child
parent
 2 13 12 10 13 11 8 3 1 6 9 7 4 5 15 
Parent child

 14 13 12 10 13 11 8 3 1 6 9 7 4 5 15 
Child
parent

 14 13 12 10 13 11 8 3 1 6 9 7 4 5 15 
Child
parent

 14 13 12 10 13 11 8 3 1 6 9 7 4 5 15 
Child
parent
 14 13 12 10 13 11 8 3 1 6 9 7 4 5 15 
Parent child

 14 13 12 10 9 11 8 3 1 6 9 7 4 5 15 
Child
parent

 14 13 12 10 9 11 8 3 1 6 2 7 4 5 15 
parent

 14 13 12 10 9 11 8 3 1 6 2 7 4 5 15 
i
 14 13 12 10 9 11 8 3 1 6 2 7 4 5 15 
i

 5 13 12 10 9 11 8 3 1 6 2 7 4 14 15 
i

 5 13 12 10 9 11 8 3 1 6 2 7 4 14 15 
parent

 5 13 12 10 9 11 8 3 1 6 2 7 4 14 15 
Parent child
 13 13 12 10 9 11 8 3 1 6 2 7 4 14 15 
child
Parent

 13 13 12 10 9 11 8 3 1 6 2 7 4 14 15 
child
Parent

 13 10 12 10 9 11 8 3 1 6 2 7 4 14 15 
child
Parent

 13 10 12 10 9 11 8 3 1 6 2 7 4 14 15 
Parent child
 13 10 12 5 9 11 8 3 1 6 2 7 4 14 15 
child
Parent

 13 10 12 5 9 11 8 3 1 6 2 7 4 14 15 
i

 4 10 12 5 9 11 8 3 1 6 2 7 13 14 15 
Parent child

 4 10 12 5 9 11 8 3 1 6 2 7 13 14 15 
Parent child
 4 10 12 5 9 11 8 3 1 6 2 7 13 14 15 
Parent Child

 12 10 12 5 9 11 8 3 1 6 2 7 13 14 15 
Child
Parent

 12 10 12 5 9 11 8 3 1 6 2 7 13 14 15 
Parent child

 12 10 11 5 9 11 8 3 1 6 2 7 13 14 15 
child
Parent
 12 10 11 5 9 11 8 3 1 6 2 7 13 14 15 
Parent Child

 12 10 7 5 9 11 8 3 1 6 2 7 13 14 15 
Child
Parent

 12 10 7 5 9 11 8 3 1 6 2 4 13 14 15 
Parent

 12 10 7 5 9 11 8 3 1 6 2 4 13 14 15 
i
 12 10 7 5 9 11 8 3 1 6 2 4 13 14 15 
i

 4 10 7 5 9 11 8 3 1 6 2 12 13 14 15 
Parent child

 4 10 7 5 9 11 8 3 1 6 2 12 13 14 15 
Parent child

 4 10 7 5 9 11 8 3 1 6 2 12 13 14 15 
i
