
In the name of Allah, the Most Gracious, the Most Merciful

Searching and Sorting
Searching

• Linear Search
– Simple, but costly in time and resources

• Binary Search
– Efficient, but assumes the input is sorted
Linear Search

Dr. Ossama Ismail AAST

Linear Search
#include <stdio.h>

int main()
{
    int array[100], search, c, n;

    printf("Enter number of elements in array\n");
    scanf("%d", &n);

    printf("Enter %d integer(s)\n", n);
    for (c = 0; c < n; c++)
        scanf("%d", &array[c]);

    printf("Enter a number to search\n");
    scanf("%d", &search);

    for (c = 0; c < n; c++)
    {
        if (array[c] == search) /* If required element is found */
        {
            printf("%d is present at location %d.\n", search, c + 1);
            break;
        }
    }
    if (c == n)
        printf("%d isn't present in the array.\n", search);

    return 0;
}
Linear Search Function using Pointers

long linear_search(long *p, long n, long find)
{
    long c;
    for (c = 0; c < n; c++)
    {
        if (*(p + c) == find)
            return c;     /* index of the first match */
    }
    return -1;            /* not found */
}
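The function above can be exercised with a short check (the sample data below is illustrative, not from the slides; the function is reproduced so the sketch is self-contained):

```c
/* The pointer-based search from the slide, reproduced for a self-contained sketch */
long linear_search(long *p, long n, long find)
{
    long c;
    for (c = 0; c < n; c++)
    {
        if (*(p + c) == find)
            return c;     /* index of the first match */
    }
    return -1;            /* not found */
}
```

For example, searching {14, 7, 25, 3, 9} for 25 returns index 2, while searching for 99 returns -1.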
Binary Search

Will be studied later on with the Divide-and-Conquer algorithms.
Sorting

We will cover:

1. Basic Sorting Algorithms
2. Faster Sorting Algorithms
3. A Comparison of Sorting Algorithms
Basic Sorting Algorithms

Sorting is a process that organizes a collection of data into ascending or descending order.

• Internal sort: the data fits in main memory
• External sort: the data must reside on secondary storage
• Sort key: the data item that determines the order
The Sorting Problem

Input: a sequence s1, s2, …, sn of numbers.

Output: a permutation s'1, s'2, …, s'n of the input sequence such that s'1 <= s'2 <= … <= s'n.

Example:
Input: 7 2 8 4 3 6 5 1
Output: 1 2 3 4 5 6 7 8

The Sorting Problem cont’d
• Problem
Input: a list of n objects
Output: a permutation of the objects such that they are in order

• Assumptions
– The objects are integers
– The list contains distinct integers (no repeats)
– Permute the list into increasing order

• The assumptions are not limiting in general
– Our algorithms will still work if:
• Objects are anything that can be compared to one another
• The list contains repeated objects
• The sorted list is in decreasing order
Sorting Definitions
• Internal sort: the data to be sorted is stored entirely in the computer’s main memory.

• In-place sort: sorting the data structure does not require any auxiliary data structure for storing intermediate steps.

• External sort: sorting of records not all present in memory; some of the data to be sorted may be stored on an external, slower device.

• Stable sort: if the same element is present multiple times, the copies retain their original relative order.
Sorting
Algorithmic description and analysis of:
• Bubble Sort
• Selection Sort
• Insertion Sort
• Merge Sort
• Quick Sort
Sorting Algorithms
• Insertion, Selection and Bubble sorts have quadratic worst-case performance
• The faster comparison-based algorithms run in O(n log n):
– Merge and Quick sorts

Bubble Sort
Bubble Sort

• Bubble Sort is the simplest sorting algorithm. It works by repeatedly swapping adjacent elements that are in the wrong order.
Bubble Sort cont’d

• Main Idea
– n – 1 passes
– Each pass finds the next maximum value and moves it into its correct place by consecutive swaps

• Example (sorting 7 8 6 2 in ascending order)
Pass 1: 7 8 6 2 → 7 6 8 2 → 7 6 2 8
Pass 2: 7 6 2 8 → 6 7 2 8 → 6 2 7 8
Pass 3: 6 2 7 8 → 2 6 7 8


Bubble Sorting
– An array is to be sorted in descending order.

1. Read an array a[n]
2. For j = 0 to n – 2
3.   For i = 0 to n – 2 – j
4.     If a[i] < a[i+1], swap a[i] and a[i+1]
5. Print the sorted array a[n]
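The descending-order steps above might be rendered in C as follows (0-indexed; the function name is illustrative):

```c
/* Bubble sort in descending order, following steps 2-4 above */
void bubble_sort_desc(int a[], int n)
{
    int i, j, temp;
    for (j = 0; j <= n - 2; j++)          /* step 2: n-1 passes */
    {
        for (i = 0; i <= n - 2 - j; i++)  /* step 3: shrink the unsorted region */
        {
            if (a[i] < a[i + 1])          /* step 4: smaller element sinks right */
            {
                temp = a[i];
                a[i] = a[i + 1];
                a[i + 1] = temp;
            }
        }
    }
}
```

Sorting {7, 2, 8, 5, 4} with this function yields 8 7 5 4 2.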
Bubble Sorting
Example of bubble sort (sorting 7 2 8 5 4 in ascending order)

Pass 1: 7 2 8 5 4 → 2 7 8 5 4 → 2 7 5 8 4 → 2 7 5 4 8
Pass 2: 2 7 5 4 8 → 2 5 7 4 8 → 2 5 4 7 8
Pass 3: 2 5 4 7 8 → 2 4 5 7 8
Pass 4: 2 4 5 7 8 (done)
Code for bubble sort

void bubbleSort(int[] a)
{
    int outer, inner;
    for (outer = a.length - 1; outer > 0; outer--) {   // counting down
        for (inner = 0; inner < outer; inner++) {      // bubbling up
            if (a[inner] > a[inner + 1]) {             // if out of order...
                int temp = a[inner];                   // ...then swap
                a[inner] = a[inner + 1];
                a[inner + 1] = temp;
            }
        }
    }
}
Analysis of bubble sort

for (outer = a.length - 1; outer > 0; outer--)
{
    for (inner = 0; inner < outer; inner++)
    {
        if (a[inner] > a[inner + 1])
        {   // code for swap omitted
        }
    }
}

• Let n = a.length = size of the array
• The outer loop is executed n-1 times (call it n, that’s close enough)
• Each time the outer loop is executed, the inner loop is executed
– The inner loop executes n-1 times at first, linearly dropping to just once
– On average, the inner loop executes about n/2 times for each execution of the outer loop
– In the inner loop, the comparison is always done (constant time); the swap might be done (also constant time)
• The result is n * n/2 * k for some constant k, that is, O(n²)


Bubble Sort

for (c = 0; c < n - 1; c++)
{
    for (d = 0; d < n - c - 1; d++)
    {
        if (array[d] > array[d+1]) /* For decreasing order use < */
        {
            swap = array[d];
            array[d] = array[d+1];
            array[d+1] = swap;
        }
    }
}
Bubble Sort cont’d

• Analysis
– Best case: O(n) (with the early-exit improvement shown later)
– Worst case: O(n²)
• Again, a poor choice for large amounts of data
Bubble Sort cont’d

• The basic algorithm (without an early exit) does the same work on all inputs, so it runs in Θ(n²)
• The number of comparisons dominates the number of swaps

• Can we do better?
Bubble Sort cont’d

• Main Idea
– n – 1 passes
– Find and move the next maximum value into its correct place by consecutive swapping
– Stop if no swaps were made in a pass


Bubble Sort cont’d
Algorithm Bubble-Sort-Better(A, n):
Input: List A of n integers
Output: The sorted list A

i <- n;
swapflag <- true;
while (swapflag) do
    i <- i – 1;
    swapflag <- false;
    for j <- 1 to i do
        if (A[j] > A[j + 1])
            Swap(A[j], A[j + 1]);
            swapflag <- true;
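A C sketch of the early-exit version (the function name is illustrative, and the array is 0-indexed, so each pass runs over j = 0..i-1 rather than 1..i):

```c
#include <stdbool.h>

/* Early-exit bubble sort: stop as soon as a pass makes no swap */
void bubble_sort_better(int a[], int n)
{
    int i = n, j, temp;
    bool swapflag = true;
    while (swapflag)
    {
        i = i - 1;               /* the last i elements are already in place */
        swapflag = false;
        for (j = 0; j < i; j++)
        {
            if (a[j] > a[j + 1])
            {
                temp = a[j];     /* swap the out-of-order pair */
                a[j] = a[j + 1];
                a[j + 1] = temp;
                swapflag = true;
            }
        }
    }
}
```

On an already-sorted array the first pass makes no swap, so the loop exits after n-1 comparisons: the O(n) best case.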
Better Bubble Sort

Analysis
• Best Case – Omega(n)
– Already sorted: n comparisons and 0 swaps
• Worst Case – O(n²)
– Input sorted in the opposite order AND
– Smallest value starts at the opposite end
– n comparisons * n passes
• Average Case – Theta(n²)
– Any input is equally likely
– Given an input, how many passes will it take to stop?
– Count and average the # of comparisons when stopping at every possible pass
– See page 65 of the text


Selection Sort

The selection sort algorithm sorts an array by repeatedly finding the minimum element (for ascending order) in the unsorted part and putting it at the beginning of that part. The algorithm maintains two subarrays within the given array: a sorted part and an unsorted part.

• Main Idea
– n passes (or iterations)
– Each pass finds the next minimum value and moves it to its correct place with a single swap


Selection Sort cont’d

• Example (a selection sort of an array of eight integers)

Initial list: 6 2 8 7 5 1 4 3
1 2 8 7 5 6 4 3
1 2 8 7 5 6 4 3
1 2 3 7 5 6 4 8
1 2 3 4 5 6 7 8
1 2 3 4 5 6 7 8
1 2 3 4 5 6 7 8
1 2 3 4 5 6 7 8
Selection sort
• Given an array of length n:
– Search elements 0 through n-1 and select the smallest; swap it with the element in location 0
– Search elements 1 through n-1 and select the smallest; swap it with the element in location 1
– Search elements 2 through n-1 and select the smallest; swap it with the element in location 2
– Search elements 3 through n-1 and select the smallest; swap it with the element in location 3
– Continue in this fashion until there’s nothing left to search


Selection Sort
Example and analysis of selection sort

Example (sorting 7 2 8 5 4):
7 2 8 5 4
2 7 8 5 4
2 4 8 5 7
2 4 5 8 7
2 4 5 7 8

• The selection sort might swap an array element with itself; this is harmless, and not worth checking for
• Analysis:
– The outer loop executes n-1 times
– The inner loop executes about n/2 times on average (from n down to 2 times)
– Work done in the inner loop is constant (swap two array elements)
– Time required is roughly (n-1)*(n/2)
– You should recognize this as O(n²)
Code for selection sort

void selectionSort(int[] a)
{
    int outer, inner, min;
    for (outer = 0; outer < a.length - 1; outer++)
    {   // outer marks the start of the unsorted region
        min = outer;
        for (inner = outer + 1; inner < a.length; inner++)
        {
            if (a[inner] < a[min])
            { min = inner; }
            // Invariant: a[min] is the least of a[outer]..a[inner]
        }
        // a[min] is least among a[outer]..a[a.length - 1]
        int temp = a[outer];
        a[outer] = a[min];
        a[min] = temp;
        // Invariant: a[0]..a[outer] are sorted and no larger than anything to their right
    }
}
Selection Sort cont’d

Algorithm Selection-Sort(A, n):
Input: List A of n integers
Output: The sorted list A

for i <- 1 to n do
    minpos <- i;
    for j <- (i + 1) to n do
        if (A[j] < A[minpos])
            minpos <- j;
    Swap(A[i], A[minpos]);
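A 0-indexed C rendering of the pseudocode above (the function name is illustrative):

```c
/* Selection sort (ascending): one swap per pass */
void selection_sort(int a[], int n)
{
    int i, j, minpos, temp;
    for (i = 0; i < n - 1; i++)
    {
        minpos = i;                  /* assume the current position holds the minimum */
        for (j = i + 1; j < n; j++)
        {
            if (a[j] < a[minpos])
                minpos = j;          /* remember the smallest element seen so far */
        }
        temp = a[i];                 /* move the minimum into place */
        a[i] = a[minpos];
        a[minpos] = temp;
    }
}
```

Applied to the slide's example 6 2 8 7 5 1 4 3, this produces 1 2 3 4 5 6 7 8.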


Selection Sort cont’d
Run-time Analysis
                                        Count
for i <- 1 to n do                      n+1
    minpos <- i;                        n
    for j <- (i + 1) to n do            T1 + 1
        if (A[j] < A[minpos])           T1
            minpos <- j;                T1
    Swap(A[i], A[minpos]);              n

• Where T1 = SUM(n – i)
• So, T(n) = Theta(n²)
Selection Sort cont’d

• Analysis
– This is an O(n²) algorithm
• In general, the selection sort algorithm is inefficient for large inputs.
Insertion Sort

Insertion sort is a simple sorting algorithm that works much like sorting playing cards in your hands. The array is virtually split into a sorted part and an unsorted part. Values from the unsorted part are picked and placed at the correct position in the sorted part.
The Insertion Sort

Take each item from the unsorted region and insert it into its correct position in the sorted region.
Insertion Sort cont’d

• Main Idea
– n - 1 iterations
– Take each item from the unsorted region and insert it into its correct position in the sorted region
• Insert the current value, v, into the sorted list of already traversed values
• Find v’s place in the sorted sub-list and shift higher values over

• Example
Initial array: 6 2 8 7 5 1 4 3
2 6 8 7 5 1 4 3
2 6 8 7 5 1 4 3
2 6 7 8 5 1 4 3
2 5 6 7 8 1 4 3
1 2 5 6 7 8 4 3
1 2 4 5 6 7 8 3
1 2 3 4 5 6 7 8
The Insertion Sort

An algorithm of order O(n²)
The Insertion Sort
– An array is to be sorted in descending order.

1. Read an array a[n]
2. j = 0
3. Let k be such that a[k] = max{ a[l] : j <= l < n }
4. Swap a[k] and a[j]
5. Increment j; if j < n - 1 then go to step 3
6. Print the sorted array a[n]

Lectures on Numerical Methods
The Insertion Sort

void insertionSort(int numbers[], int array_size)
{
    int i, j, index;
    for (i = 1; i < array_size; i++)
    {
        index = numbers[i];              /* value to insert */
        j = i;
        while ((j > 0) && (numbers[j-1] > index))
        {
            numbers[j] = numbers[j-1];   /* shift larger values right */
            j = j - 1;
        }
        numbers[j] = index;
    }
}
Insertion Sort cont’d
Algorithm Insertion-Sort(A, n):
Input: List A of n integers
Output: The sorted list A

for i <- 2 to n do
    insertvalue <- A[i];
    newpos <- i – 1;
    while ((newpos >= 1) && (A[newpos] > insertvalue)) do
        A[newpos + 1] <- A[newpos];
        newpos <- newpos - 1;
    A[newpos + 1] <- insertvalue;
Insertion Sort cont’d
Run-time Analysis
                                             Count
for i <- 2 to n do                           n
    insertvalue <- A[i];                     n-1
    newpos <- i – 1;                         n-1
    while ((newpos >= 1) &&
           (A[newpos] > insertvalue)) do     T1 + 1
        A[newpos + 1] <- A[newpos];          T1
        newpos <- newpos - 1;                T1
    A[newpos + 1] <- insertvalue;            n-1

• Where T1 = SUM(i)
• So, T(n) = O(n²)
Insertion Sort cont’d

• Analysis
– An algorithm of order O(n²)
– Best case O(n)
• Appropriate for 25 or fewer data items
Divide-and-Conquer
Divide-and-Conquer Method
• Divide the problem into subproblems
• Solve the subproblems
• If the subproblems have the same nature as the original problem, the strategy can be applied recursively
• Merge the subproblem solutions into a solution for the original problem
Divide-and-Conquer
• Divide-and-conquer is a general algorithm design paradigm:
– Divide: divide the input data S into two or more disjoint subsets S1, S2, …
– Recur: solve the subproblems recursively
– Conquer: combine the solutions for S1, S2, …, into a solution for S
• The base cases for the recursion are subproblems of constant size
• Analysis can be done using recurrence equations
The Divide and Conquer Algorithm

Divide_Conquer(problem P)
{
    if Small(P) return S(P);
    else {
        divide P into smaller instances P1, P2, …, Pk, k >= 1;
        Apply Divide_Conquer to each of these subproblems;
        return Combine(Divide_Conquer(P1),
                       Divide_Conquer(P2), …, Divide_Conquer(Pk));
    }
}
Divide_Conquer recurrence relation
The computing time of Divide_Conquer is

T(n) = { g(n)                                    n small
       { T(n1) + T(n2) + … + T(nk) + f(n)        otherwise

– T(n) is the time for Divide_Conquer on any input of size n.
– g(n) is the time to compute the answer directly (for small inputs).
– f(n) is the time for dividing P and combining the solutions.
A Typical Divide and Conquer Case

a problem of size n
→ split into subproblem 1 of size n/2 and subproblem 2 of size n/2
→ find a solution to subproblem 1 and a solution to subproblem 2
→ combine the two solutions to solve the original problem
Divide-and-Conquer Examples

• Binary search (?)
• Sorting: mergesort and quicksort
• Binary tree traversals
• Multiplication of large integers
• Matrix multiplication: Strassen’s algorithm
• Closest-pair and convex-hull algorithms
Binary Search
int BinarySearch(int x, int v[], int n)
{
    int low, high, mid;
    low = 0;
    high = n - 1;
    while (low <= high) {
        mid = (low + high) / 2;
        if (x < v[mid])
            high = mid - 1;
        else if (x > v[mid])
            low = mid + 1;
        else
            return mid;   /* found match */
    }
    return -1;            /* no match */
}
Binary Search
A very efficient algorithm for searching in a sorted array:

• A Divide and Conquer algorithm to find a key in an array
• Precondition: S is a sorted list. Check whether x ∈ S.

binsearch(number n, low, high, S[], x)
    if low <= high then
        mid = (low + high) / 2
        if x = S[mid] then return mid
        elsif x < S[mid] then return binsearch(n, low, mid-1, S, x)
        else return binsearch(n, mid+1, high, S, x)
    else return 0
end binsearch
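The 1-indexed pseudocode can be rendered in C as follows (0-indexed, returning -1 instead of 0 for "not found", since 0 is a valid index; the signature is one possible choice):

```c
/* Recursive binary search: index of x in sorted S[low..high], or -1 if absent */
int binsearch(int S[], int low, int high, int x)
{
    int mid;
    if (low > high)
        return -1;                              /* empty range: not found */
    mid = (low + high) / 2;
    if (x == S[mid])
        return mid;
    else if (x < S[mid])
        return binsearch(S, low, mid - 1, x);   /* search the left half */
    else
        return binsearch(S, mid + 1, high, x);  /* search the right half */
}
```

For example, searching the sorted array {1, 3, 5, 7, 9, 11} for 7 returns index 3.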
Analysis of Binary Search

• Time efficiency
– worst-case recurrence: T(n) = T(n/2) + 1, T(1) = 1
  solution: T(n) = log2(n+1) = O(log n)

This is VERY fast: e.g., T(10^9) ≈ 30

• Limitations: requires a sorted array (not a linked list)
Complexity Analysis
T(1) = 1
T(n) = 2T(n/2) + cn
     = 2(2T(n/2^2) + cn/2) + cn = 2^2 T(n/2^2) + 2cn
     = 2^2 (2T(n/2^3) + cn/2^2) + 2cn = 2^3 T(n/2^3) + 3cn
     = … = 2^k T(n/2^k) + k*cn
(let n = 2^k, then k = log n)
T(n) = n*T(1) + cn*log n = n + cn log n = O(n log n)
Linear vs. Binary Search
Sorting Problem
• Optimal number of comparisons
– An algorithm must be able to distinguish which one of the n! permutations is the correct permutation.
– This can be viewed as a binary decision tree
– Each permutation corresponds to a leaf in the tree
– The depth of the tree is the number of comparisons necessary for distinguishing the sorted permutation
– A binary tree with height h has at most 2^h leaves
– A sorting decision tree therefore has height at least log(n!)
– log(n!) = Θ(n log n)
– Comparison-based sorting therefore requires Ω(n log n) comparisons
Dr. Ossama Ismail AAST
Efficient Sorting Techniques
Table for the running time of the algorithms:
• In this table, n is the number of elements to be sorted.
• The columns "Best", "Average", and "Worst" give the time complexity in each case.
• "Memory" denotes the amount of auxiliary storage needed beyond that used by the list itself.
Merge Sort
• Merge sort is a sorting algorithm invented by John von Neumann (1903-1957), based on the divide and conquer technique.
• It always runs in O(n log n) time, but requires O(n) extra space.
• He developed merge sort for EDVAC in 1945.
Merge Sort
Merge Sort is a Divide and Conquer algorithm. It divides the input array into two halves, calls itself for the two halves, and then merges the two sorted halves.

The merge() function is used for merging the two halves. The call merge(Arr, l, m, r) is a key process that assumes that Arr[l..m] and Arr[m+1..r] are sorted and merges the two sorted sub-arrays into one. See the following algorithm for details.
Merge-Sort Review

Merge-sort on an input sequence S with n elements consists of three steps:
– Divide: partition S into two sequences S1 and S2 of about n/2 elements each
– Recur: recursively sort S1 and S2
– Conquer: merge S1 and S2 into a unique sorted sequence
Algorithm: Merge-Sort(S,p,r)
A procedure that sorts the elements in the sub-array S[p..r] using divide and conquer
• Merge-Sort(S,p,r)
– if p >= r, do nothing
– if p < r then q <- (p + r) / 2
• Merge-Sort(S,p,q) <== first half
• Merge-Sort(S,q+1,r) <== second half
• Merge(S,p,q,r)
• Start by calling Merge-Sort(S,1,n)
• How do we partition?
Partitioning - Choice 1
• First (n-1) elements into set A, last element into set B
• Sort A using this partitioning scheme recursively
– B is already sorted
• Combine A and B using function Insert() (= insertion into a sorted array)
• Leads to a recursive version of InsertionSort()
– Number of comparisons: O(n²)
• Best case = n-1
• Worst case = ???
Partitioning - Choice 2
• Put the element with the largest key in B, the remaining elements in A
• Sort A recursively
• To combine sorted A and B, append B to sorted A
– Use Max() to find the largest element → recursive SelectionSort()
– Use the bubbling process to find and move the largest element to the right-most position → recursive BubbleSort()
• All are O(n²)
Partitioning - Choice 3
• Let’s try to achieve balanced partitioning
• A gets n/2 elements, B gets the rest
• Sort A and B recursively
• Combine sorted A and B using a process called merge, which combines two sorted lists into one
Example
• Partition into lists of size n/2

[10, 4, 6, 3, 8, 2, 5, 7]
[10, 4, 6, 3] [8, 2, 5, 7]
[10, 4] [6, 3] [8, 2] [5, 7]
[4] [10] [3] [6] [2] [8] [5] [7]

Example (Cont.)
• Merge

[4] [10] [3] [6] [2] [8] [5] [7]
[4, 10] [3, 6] [2, 8] [5, 7]
[3, 4, 6, 10] [2, 5, 7, 8]
[2, 3, 4, 5, 6, 7, 8, 10]

Merge Sort
void mergesort(int lo, int hi)
{
    if (lo < hi) {
        int m = (lo + hi) / 2;
        mergesort(lo, m);
        mergesort(m + 1, hi);
        merge(lo, m, hi);
    }
}
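The skeleton above calls merge(lo, m, hi), which is not shown on the slide. A possible self-contained C sketch, passing the array explicitly (rather than using the global array the skeleton assumes) and using a temporary buffer:

```c
#include <stdlib.h>

/* Merge two sorted runs a[lo..m] and a[m+1..hi] into one sorted run */
void merge(int a[], int lo, int m, int hi)
{
    int n = hi - lo + 1;
    int *tmp = malloc(n * sizeof(int));
    int i = lo, j = m + 1, k = 0;

    while (i <= m && j <= hi)           /* take the smaller head element */
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= m) tmp[k++] = a[i++];   /* copy any leftovers from the left run */
    while (j <= hi) tmp[k++] = a[j++];  /* ...and from the right run */

    for (k = 0; k < n; k++)             /* copy back into the original array */
        a[lo + k] = tmp[k];
    free(tmp);
}

/* Recursive merge sort over a[lo..hi], mirroring the skeleton above */
void mergesort(int a[], int lo, int hi)
{
    if (lo < hi) {
        int m = (lo + hi) / 2;
        mergesort(a, lo, m);
        mergesort(a, m + 1, hi);
        merge(a, lo, m, hi);
    }
}
```

Taking the smaller head with `<=` keeps equal elements in their original order, which is what makes merge sort stable.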
Mergesort Example

The non-recursive version of Mergesort starts from merging single elements into sorted pairs (here, sorting in descending order):

8 3 9 2 7 1 5 4           (initial array: runs of single elements)
8 3 | 9 2 | 7 1 | 5 4     (sorted pairs)
9 8 3 2 | 7 5 4 1         (sorted runs of four)
9 8 7 5 4 3 2 1           (one sorted run)
Merge Sort

(Figure: levels of recursive calls to mergeSort, given an array of eight items)
Recurrence Equation Analysis
• The conquer step of merge-sort consists of merging two sorted sequences, each with n/2 elements; implemented by means of a doubly linked list, it takes at most bn steps, for some constant b.
• Likewise, the basis case (n < 2) takes at most b steps.
• Therefore, if we let T(n) denote the running time of merge-sort:

T(n) = { b                  if n < 2
       { 2T(n/2) + bn       if n >= 2

• We can therefore analyze the running time of merge-sort by finding a closed form solution to the above equation.
– That is, a solution that has T(n) only on the left-hand side.
Recursion tree

Solve T(n) = 2T(n/2) + cn, where c > 0 is constant.

Each level of the recursion tree contributes a total cost of cn; the tree has log2 n levels, and its n leaves each cost Θ(1). Total: Θ(n log n).
Analysis of Mergesort
• All cases have the same efficiency: Θ(n log n)

T(n) = 2T(n/2) + Θ(n), T(1) = 0

• The number of comparisons in the worst case is close to the theoretical minimum for comparison-based sorting:
log2 n! ≈ n log2 n - 1.44n

• Space requirement: Θ(n) (not in-place)
Solving the recurrence equation for Mergesort
Iterative Substitution
• In the iterative substitution, or “plug-and-chug,” technique, we iteratively apply the recurrence equation to itself and see if we can find a pattern:

T(n) = 2T(n/2) + bn
     = 2(2T(n/2^2) + b(n/2)) + bn = 2^2 T(n/2^2) + 2bn
     = 2^3 T(n/2^3) + 3bn
     = 2^4 T(n/2^4) + 4bn
     = ...
     = 2^i T(n/2^i) + ibn

Note that the base case, T(n) = b, occurs when 2^i = n, that is, i = log n.
So T(n) = bn + bn log n. Thus, T(n) is O(n log n).
The Recursion Tree
• Draw the recursion tree for the recurrence relation and look for a pattern:

T(n) = { b              if n < 2
       { 2T(n/2) + bn   if n >= 2

depth   number of T's   size      time per level
0       1               n         bn
1       2               n/2       bn
...     ...             ...       ...
i       2^i             n/2^i     bn

Total time = bn + bn log n (last level plus all previous levels)
Merge Sort

• Analysis
– Merge sort is of order O(n log n)
– This is significantly faster than O(n²)
Quick Sort

Quicksort
• Like mergesort, Quicksort is also based on the divide-and-conquer paradigm.
• It is the fastest known sorting algorithm in practice
– Caveat: it is not stable
• Average case complexity → O(n log n)
• Worst-case complexity → O(n²)
– This rarely happens, if implemented well
Quicksort cont’d
How it works:
◼ Pick some number p from the array
◼ Move all numbers less than p to the beginning of the array
◼ Move all numbers greater than (or equal to) p to the end of the array
◼ Quicksort the numbers less than p
◼ Quicksort the numbers greater than or equal to p

numbers less than p | p | numbers greater than or equal to p


Quick Sort Algorithm
Given an array of n elements (e.g., integers):
• If the array only contains one element, return
• Else
– pick one element to use as the pivot
– partition the elements into two sub-arrays:
• elements less than or equal to the pivot
• elements greater than the pivot
– quicksort the two sub-arrays
– return the results
Quicksort – cont’d
◼ Sort S[left...right]:
Quicksort S[left...right]
1. if left < right:
   1.1. Partition S[left...right] such that:
        all S[left...p-1] are < S[p], and
        all S[p+1...right] are >= S[p]
   1.2. Quicksort S[left...p-1]
   1.3. Quicksort S[p+1...right]
2. End
Quicksort Example
5 3 1 9 8 2 4 7   (pivot 5)
2 3 1 4 5 8 9 7   (after partitioning around 5)
1 2 3 4 5 7 8 9   (after recursively sorting both sides)
Example
We are given array of n integers to sort:

40 20 10 80 60 50 7 30 100
Example: Pick Pivot Element

• There are a number of ways to pick the pivot element. In this example, we will use the first element in the array:

40 20 10 80 60 50 7 30 100
Example: Partitioning Array
• Given a pivot, partition the elements of the array such that the resulting array consists of:
1. One sub-array that contains elements <= pivot
2. Another sub-array that contains elements > pivot

• The sub-arrays are stored in the original data array.
• Partitioning loops through, swapping elements below/above the pivot.
Example: Select Two Indices
Scan with two indices: too_big_index moves right from the start of the array, too_small_index moves left from the end.

40 20 10 80 60 50 7 30 100
[0] [1] [2] [3] [4] [5] [6] [7] [8]
pivot_index = 0

Example: Partition Steps
1. While data[too_big_index] <= data[pivot]
   ++too_big_index
2. While data[too_small_index] > data[pivot]
   --too_small_index
3. If too_big_index < too_small_index
   swap data[too_big_index] and data[too_small_index]
4. While too_small_index > too_big_index, go to 1.
5. Swap data[too_small_index] and data[pivot_index]

Array states while partitioning:
40 20 10 80 60 50 7 30 100   (start; too_big_index stops at 80, too_small_index at 30)
40 20 10 30 60 50 7 80 100   (after swapping 80 and 30; the indices stop next at 60 and 7)
40 20 10 30 7 50 60 80 100   (after swapping 60 and 7; the indices then cross)
7 20 10 30 40 50 60 80 100   (after swapping the pivot into place; pivot_index = 4)
Example: Partition Result

<= data[pivot] > data[pivot]

7 20 10 30 40 50 60 80 100
[0] [1] [2] [3] [4] [5] [6] [7] [8]
Example: Quick Sort Sub-Array

<= data[pivot] > data[pivot]

7 20 10 30 40 50 60 80 100
[0] [1] [2] [3] [4] [5] [6] [7] [8]
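The partition walkthrough above can be collected into a compact C sketch. The function names and the exact loop guards are one possible rendering of the slides' steps, not the only one:

```c
/* Partition a[left..right] around the pivot a[left], using the two-index
   scheme from the walkthrough; returns the pivot's final position */
int partition(int a[], int left, int right)
{
    int pivot = a[left];
    int too_big = left + 1, too_small = right, temp;

    while (1) {
        while (too_big <= right && a[too_big] <= pivot)
            too_big++;                       /* step 1: skip elements <= pivot */
        while (a[too_small] > pivot)
            too_small--;                     /* step 2: skip elements > pivot */
        if (too_big >= too_small)
            break;                           /* step 4: the indices crossed */
        temp = a[too_big];                   /* step 3: swap the misplaced pair */
        a[too_big] = a[too_small];
        a[too_small] = temp;
    }
    a[left] = a[too_small];                  /* step 5: move the pivot into place */
    a[too_small] = pivot;
    return too_small;
}

void quicksort(int a[], int left, int right)
{
    if (left < right) {
        int p = partition(a, left, right);
        quicksort(a, left, p - 1);           /* sort the elements <= pivot */
        quicksort(a, p + 1, right);          /* sort the elements > pivot */
    }
}
```

On the slides' example {40, 20, 10, 80, 60, 50, 7, 30, 100}, the first call to partition performs exactly the swaps shown above and returns pivot position 4.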
Quick Sort Analysis
• Assume that keys are random, uniformly distributed.
• What is the best case running time?
  – Recursion:
    1. Partition splits the array in two sub-arrays of size n/2
    2. Quicksort each sub-array
  – Depth of recursion tree? O(log2 n)
  – Number of accesses in partition? O(n)
  – Best case running time? O(n log2 n)
  – Worst case running time?
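The O(n log2 n) best case quoted above can be checked by unrolling the even-split recurrence (a standard derivation, not from the slides; c is the per-element partition cost):

```latex
T(n) = 2\,T(n/2) + cn
     = 4\,T(n/4) + 2cn
     = \dots
     = 2^{k}\,T(n/2^{k}) + kcn
```

After k = log2 n halvings the sub-arrays have size 1, so T(n) = nT(1) + cn log2 n = O(n log2 n).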
Quicksort – cont’d
Picking the pivot: a problem!
Somehow we have to select a pivot, and we hope
that we will get a good partitioning.
• How would you pick one?
• Strategy 1: Pick the first element in the list S
  – Works only if the input is random
  – What if the input S is sorted, or even mostly sorted?
    • All the remaining elements would go into either S1 or S2!
    • Terrible performance!
Quick Sort Worst Case

• Assume the first element is chosen as pivot.
• Assume we get an array that is already in order:

pivot_index = 0     2   4  10  12  13  50  57  63 100
                   [0] [1] [2] [3] [4] [5] [6] [7] [8]

1. While data[too_big_index] <= data[pivot]
   ++too_big_index
2. While data[too_small_index] > data[pivot]
   --too_small_index
3. If too_big_index < too_small_index
   swap data[too_big_index] and data[too_small_index]
4. While too_small_index > too_big_index, go to 1.
5. Swap data[too_small_index] and data[pivot_index]

Here the pivot (2) is the smallest element, so the two index
scans meet at the pivot itself and no elements move:

<= data[pivot] part: empty    > data[pivot] part: n-1 elements
Quick Sort Analysis
• Assume that keys are random, uniformly distributed.
• Best case running time: O(n log2 n)
• Worst case running time?
  – Recursion:
    1. Partition splits the array in two sub-arrays:
       • one sub-array of size 0
       • the other sub-array of size n-1
    2. Quicksort each sub-array
  – Depth of recursion tree? O(n)
  – Number of accesses per partition? O(n)
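Combining the O(n) recursion depth with the O(n) work per partition, or unrolling the unbalanced recurrence directly, gives the worst-case bound (a standard derivation, not from the slides):

```latex
T(n) = T(n-1) + cn
     = T(n-2) + c(n-1) + cn
     = \dots
     = c\sum_{k=1}^{n} k \;=\; c\,\frac{n(n+1)}{2} \;=\; \Theta(n^{2})
```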
Quick Sort Analysis

• Assume that keys are random, uniformly distributed.

• Best case running time: O(n log2 n)
• Worst case running time: O(n²) !!!

• What can we do to avoid the worst case?
Quicksort – cont’d
Picking the pivot (cont’d)

• Strategy 2: Pick the pivot randomly
  – Choose a random position in the (sub)array and use
    that element as the pivot.
  – Would usually work well, even for mostly sorted input
  – Unless the random number generator is not quite random!
Quicksort – cont’d
Picking the pivot (cont’d)

Obviously, it doesn’t make sense to sort the array just to
find the median to use as a pivot.

Instead, compare just three elements of the (sub)array:
the first, the last, and the middle.
Quicksort – cont’d
Picking the pivot (cont’d): picking a better pivot
• Strategy 3: Median-of-three partitioning
  – Ideally, the pivot should be the median of the input array S
    • Median = the element in the middle of the sorted sequence
  – Would divide the input into two almost equal partitions
  – Unfortunately, it is hard to calculate the median quickly,
    even though it can be done in O(n) time!
  – So, find an approximate median
    • Pivot = median of the left-most, right-most and center
      elements of the array S
Improved Pivot Selection
• Pick median value of three elements from data
array:
data[0], data[n/2], and data[n-1].

• Use this median value as pivot.
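A minimal sketch of this selection in C (the function name, the `long` element type, and the index parameters are my choices, not from the slides):

```c
#include <stddef.h>

/* Return the index of the median of data[lo], data[mid], data[hi],
   where mid is the center position. A common way to implement the
   median-of-three pivot choice. */
static size_t median_of_three(const long data[], size_t lo, size_t hi)
{
    size_t mid = lo + (hi - lo) / 2;
    long a = data[lo], b = data[mid], c = data[hi];

    if ((a <= b && b <= c) || (c <= b && b <= a)) return mid;
    if ((b <= a && a <= c) || (c <= a && a <= b)) return lo;
    return hi;
}
```

On the slide example S = {6, 1, 4, 9, 0, 3, 5, 2, 7, 8}, the candidates are 6, 0, and 8, and the median 6 at index 0 is returned.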


Improving Quick Sort Performance
• Improved selection of pivot.
• For sub-arrays of size 3 or less, sort directly by brute
  force:
  – Sub-array of size 1: trivial
  – Sub-array of size 2:
    if(data[first] > data[second]) swap them
  – Sub-array of size 3: left as an exercise.
Quicksort – cont’d
Picking the Pivot (contd.)
• Example: Median-of-three Partitioning
– Let input S = {6, 1, 4, 9, 0, 3, 5, 2, 7, 8}

– Left = 0 and S[left] = 6


– Right = 9 and S[right] = 8

– mid = (left + right)/2 = 4 and S[mid] = 0

– Pivot
• = Median of S[left], S[right], and S[mid]
• = median of 6, 8, and 0
• = S[left] = 6
Quick Sort
Algorithm
• Pick an element, called a pivot, from the list.
• Reorder the list so that all elements which are
less than the pivot come before the pivot and so
that all elements greater than the pivot come after
it (equal values can go either way). After this
partitioning, the pivot is in its final position. This is
called the partition operation.
• Recursively sort the list of lesser elements and
the list of greater elements in sequence.

#include <stdlib.h>
void qsort(void *base, size_t nmemb, size_t size,
int(*compar)(const void *, const void *));
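The library routine declared above can be used like this; a minimal sketch (the comparator `compare_ints` and wrapper `sort_ints` are illustrative names, not part of the library):

```c
#include <stdlib.h>

/* Comparator for qsort: must return <0, 0, or >0.
   (x > y) - (x < y) avoids the overflow that x - y could cause. */
static int compare_ints(const void *a, const void *b)
{
    int x = *(const int *)a;
    int y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Sort an int array in ascending order using the library qsort. */
static void sort_ints(int *data, size_t n)
{
    qsort(data, n, sizeof data[0], compare_ints);
}
```

For example, applying sort_ints to {7, 2, 8, 4, 3, 6, 5, 1} yields {1, 2, 3, 4, 5, 6, 7, 8}.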
Quicksort – Analysis

Quicksort is O(n log2 n) in the best case and average case.

Quicksort is slow when the array is already sorted and we
choose the first element as the pivot.

Although its worst case behavior is not so good, its
average case behavior is much better than its worst case.

So, Quicksort is one of the best sorting
algorithms using key comparisons.
Quicksort – Analysis
Worst case: the worst case for quicksort occurs when the
pivot is the unique minimum or maximum element.

◼ In the worst case, partitioning always divides the size-n
  array into these three partitions:
  ◼ A length-one part, containing the pivot itself
  ◼ A length-zero part, and
  ◼ A length n-1 part, containing everything else
◼ We don’t recur on the zero-length part
◼ Recurring on the length n-1 part requires (in the worst
  case) recurring to depth n-1
Quicksort – Analysis

Worst case (assume the pivot is the first element):

Original list: 9 8 7 6 5 4 3 2 1 0

[Figure: a worst-case partitioning with quicksort. Every
level of the recursion repartitions all the remaining
elements and peels off only a single element, so the
recursion reaches depth n-1.]
Partitioning

◼ To partition S[left...right]:
  1. Set pivot = S[left], L = left + 1, R = right
  2. While L < R, do
     2.1. While L < right and S[L] < pivot, set L = L + 1
     2.2. While R > left and S[R] >= pivot, set R = R - 1
     2.3. If L < R, swap S[L] and S[R]
  3. Set S[left] = S[R], S[R] = pivot
  4. END
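The steps above can be sketched in C as follows (0-based indices; I test L < R after the two scans rather than before them, since the literal step ordering can mis-handle a two-element sub-array; the names are mine):

```c
#include <stddef.h>

static void swap_long(long *a, long *b) { long t = *a; *a = *b; *b = t; }

/* Partition S[left..right] (inclusive) around pivot = S[left].
   Returns the pivot's final index. */
static size_t partition(long S[], size_t left, size_t right)
{
    long pivot = S[left];
    size_t L = left + 1, R = right;

    for (;;) {
        while (L < right && S[L] < pivot)  L++;   /* step 2.1 */
        while (R > left  && S[R] >= pivot) R--;   /* step 2.2 */
        if (L < R)
            swap_long(&S[L], &S[R]);              /* step 2.3 */
        else
            break;
    }
    S[left] = S[R];                               /* step 3 */
    S[R] = pivot;
    return R;
}
```

On the example S = {34, 41, 12, 51, 7, 61, 18, 20, 70, 55} this performs the same swaps as the trace below and returns index 4, leaving S = {7, 20, 12, 18, 34, 61, 51, 41, 70, 55}.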


Partitioning – Example
S[left...right] = S[1...10], left = 1 and right = 10
pivot = S[1] = 34, L = left + 1 = 2, R = right = 10

34 41 12 51 7 61 18 20 70 55

Step 2.1 leaves L = 2 (S[2] = 41 is not < 34).
Step 2.2 moves R from 10 down to 8 (S[10] = 55 and
S[9] = 70 are >= 34; S[8] = 20 is not).
L < R, so SWAP S[2] & S[8]:

34 20 12 51 7 61 18 41 70 55

Step 2.1 moves L to 4 (20 and 12 are < 34; S[4] = 51 is not).
Step 2.2 moves R to 7 (S[8] = 41 >= 34; S[7] = 18 is not).
L < R, so SWAP S[4] & S[7]:

34 20 12 18 7 61 51 41 70 55

Step 2.1 moves L to 6 (18 and 7 are < 34; S[6] = 61 is not).
Step 2.2 moves R to 5 (61 and 51 are >= 34; S[5] = 7 is not).
Now L = 6 > R = 5, so the loop ends.
Step 3: SWAP S[1] & S[5], placing the pivot at position 5:

7 20 12 18 34 61 51 41 70 55


Quicksort – Analysis
Time and space for quicksort

◼ Space complexity:
  – Average case and best case: O(log n)
  – Worst case: O(n)
◼ Time complexity:
  – Average case and best case: O(n log n)
  – Worst case: O(n²)


Quicksort – Analysis

Comments on quicksort
◼ Quicksort is among the fastest known comparison-based
  sorting algorithms in practice
◼ For better performance, choose the pivot carefully
◼ “Median of three” is a good technique for choosing the pivot
◼ However, no matter what you do, there will be
  some cases where quicksort runs in O(n²) time
◼ Warning: quicksort is not stable
The Quick Sort

FIGURE 11-9 A partition about a pivot


Analysis of Quicksort
• Best case: split in the middle: Θ(n log n)
• Worst case: sorted array: Θ(n²), since T(n) = T(n-1) + Θ(n)
• Average case: random arrays: Θ(n log n)
• Improvements:
  – better pivot selection: median-of-three partitioning
  – switch to insertion sort on small subfiles
  – elimination of recursion
  These combine to a 20-25% improvement
• Considered the method of choice for internal sorting of
  large files (n ≥ 10000)
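Two of the listed improvements, the insertion-sort cutoff for small sub-arrays and the elimination of (one side of) the recursion, can be sketched together in C. The CUTOFF value, the hole-style partition, and all names are my illustrative choices, not from the slides:

```c
#include <stddef.h>

enum { CUTOFF = 10 };  /* illustrative threshold */

/* Sort S[lo..hi] (inclusive) by straight insertion. */
static void insertion_sort(long S[], size_t lo, size_t hi)
{
    for (size_t i = lo + 1; i <= hi; i++) {
        long key = S[i];
        size_t j = i;
        while (j > lo && S[j - 1] > key) { S[j] = S[j - 1]; j--; }
        S[j] = key;
    }
}

/* Quicksort with a cutoff: small sub-arrays go to insertion sort,
   and the right-hand recursive call is turned into a loop. */
static void quick_sort(long S[], size_t lo, size_t hi)
{
    while (lo < hi) {
        if (hi - lo + 1 <= CUTOFF) {
            insertion_sort(S, lo, hi);
            return;
        }
        /* first-element pivot, hole-style partition */
        long pivot = S[lo];
        size_t L = lo, R = hi;
        while (L < R) {
            while (L < R && S[R] >= pivot) R--;
            S[L] = S[R];
            while (L < R && S[L] <= pivot) L++;
            S[R] = S[L];
        }
        S[L] = pivot;
        if (L > lo) quick_sort(S, lo, L - 1);
        lo = L + 1;  /* loop on the right sub-array instead of recursing */
    }
}
```

A production version would also plug in median-of-three pivot selection in place of the first-element pivot used here.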
Comparison of Sorting Algorithms

Approximate growth rates of time required for eight
sorting algorithms
