Searching and Sorting
Searching
• Linear Search
– Simplest, but costly in time and resources
• Binary Search
– Efficient, but requires the input to be sorted
Linear Search
Dr. Ossama Ismail AAST
#include <stdio.h>

int main()
{
    int array[100], search, c, n;

    printf("Enter number of elements in array\n");
    scanf("%d", &n);

    printf("Enter %d integer(s)\n", n);
    for (c = 0; c < n; c++)
        scanf("%d", &array[c]);

    printf("Enter a number to search\n");
    scanf("%d", &search);

    for (c = 0; c < n; c++)
    {
        if (array[c] == search)   /* If the required element is found */
        {
            printf("%d is present at location %d.\n", search, c + 1);
            break;
        }
    }
    if (c == n)
        printf("%d isn't present in the array.\n", search);

    return 0;
}
Linear Search Function using Pointers

long linear_search(long *p, long n, long find)
{
    long c;
    for (c = 0; c < n; c++)
    {
        if (*(p + c) == find)
            return c;
    }
    return -1;
}
Binary Search
Will be studied later on with the Divide-and-Conquer algorithms.
Sorting
We will cover:
Basic Sorting Algorithms
Sorting is a process that organizes a collection of data into ascending or descending order.
The Sorting Problem cont’d
• Example
– Input: 7 2 8 4 3 6 5 1
– Output: 1 2 3 4 5 6 7 8
• Assumptions
– The objects are integers
– The list contains distinct integers (no repeats)
– The goal is to permute the list into increasing order
Sorting Algorithms
• Insertion, Selection, and Bubble sorts have quadratic worst-case performance
• The fastest comparison-based algorithms run in O(n log n)
– Merge and Quick sorts
Bubble Sort
• Main Idea
– n – 1 passes
– Find and move the next maximum value into its correct place by consecutive swapping
• Example
Pass 1: 7 8 6 2 -> 7 6 8 2 -> 7 6 2 8
Pass 2: 7 6 2 8 -> 6 7 2 8 -> 6 2 7 8
Pass 3: 6 2 7 8 -> 2 6 7 8
Code for bubble sort

void bubbleSort(int[] a)
{
    int outer, inner;
    for (outer = a.length - 1; outer > 0; outer--) {   // counting down
        for (inner = 0; inner < outer; inner++) {      // bubbling up
            if (a[inner] > a[inner + 1]) {             // if out of order...
                int temp = a[inner];                   // ...then swap
                a[inner] = a[inner + 1];
                a[inner + 1] = temp;
            }
        }
    }
}
Analysis of bubble sort

for (outer = a.length - 1; outer > 0; outer--) {
    for (inner = 0; inner < outer; inner++) {
        if (a[inner] > a[inner + 1]) {
            // code for swap omitted
        }
    }
}

• Let n = a.length = size of the array
• The outer loop is executed n-1 times (call it n, that’s close enough)
• Each time the outer loop is executed, the inner loop is executed
– The inner loop executes n-1 times at first, linearly dropping to just once
– On average, the inner loop executes about n/2 times for each execution of the outer loop
– In the inner loop, the comparison is always done (constant time); the swap might be done (also constant time)
• Analysis
– Best case: an O(n) algorithm
– Worst case: an O(n²) algorithm
• Again, a poor choice for large amounts of data

Bubble Sort cont’d
• Can we do better?
Bubble Sort cont’d
• Main Idea
– n – 1 passes
– Find and move the next maximum value into its correct place by consecutive swapping
– Stop if no swaps were made in a pass

i <- n;
swapflag <- true;
while (swapflag) do
    i <- i - 1;
    swapflag <- false;
    for j <- 1 to i do
        if (A[j] > A[j + 1])
            Swap(A[j], A[j + 1]);
            swapflag <- true;
Better Bubble Sort
Analysis
• Best case: Ω(n)
– Already sorted: n comparisons and 0 swaps
• Worst case: O(n²)
– Input sorted in the opposite order AND the smallest value starts at the opposite end
– n comparisons × n passes
• Average case: Θ(n²)
– Any input is equally likely
– Given an input, how many passes will it take to stop?
– Count and average the number of comparisons when stopping at every possible pass
– See page 65 of the text
Selection Sort
• Main Idea
Initial list : 6 2 8 7 5 1 4 3
1 2 8 7 5 6 4 3
1 2 8 7 5 6 4 3
1 2 3 7 5 6 4 8
1 2 3 4 5 6 7 8
1 2 3 4 5 6 7 8
1 2 3 4 5 6 7 8
1 2 3 4 5 6 7 8

for i <- 1 to n do
    minpos <- i;
    for j <- (i + 1) to n do
        if (A[j] < A[minpos])
            minpos <- j;
    Swap(A[i], A[minpos]);

• Where T1 = SUM(n - i)
• So, T(n) = Θ(n²)
Selection Sort cont’d
• Analysis
– This is an O(n²) algorithm
• In general, selection sort is inefficient for large inputs.
Insertion Sort
Insertion Sort cont’d
• Main Idea
– n - 1 iterations
– Take each item from the unsorted region and insert it into its correct position in the sorted region
• Insert the current value, v, into the sorted list of already traversed values
• Find v’s place in the sorted sub-list and shift higher values over
• Example
Initial array : 6 2 8 7 5 1 4 3
2 6 8 7 5 1 4 3
2 6 8 7 5 1 4 3
2 6 7 8 5 1 4 3
2 5 6 7 8 1 4 3
1 2 5 6 7 8 4 3
1 2 4 5 6 7 8 3
1 2 3 4 5 6 7 8
The Insertion Sort
for i <- 2 to n do
    insertvalue <- A[i];
    newpos <- i - 1;
    while ((newpos >= 1) && (A[newpos] > insertvalue)) do
        A[newpos + 1] <- A[newpos];
        newpos <- newpos - 1;
    A[newpos + 1] <- insertvalue;
Insertion Sort cont’d
Run-time Analysis
                                             Count
for i <- 2 to n do                           n
    insertvalue <- A[i];                     n-1
    newpos <- i - 1;                         n-1
    while ((newpos >= 1) &&                  T1 + 1
           (A[newpos] > insertvalue)) do
        A[newpos + 1] <- A[newpos];          T1
• Where T1 = SUM(i)
• So, T(n) = O(n²)
Insertion Sort cont’d
• Analysis
– An algorithm of order O(n²)
– Best case: O(n)
• Appropriate for about 25 or fewer data items
Divide-and-Conquer

Divide-and-Conquer Method
• Divide the problem into subproblems
• Solve the subproblems
• If the subproblems have the same nature as the original problem, the strategy can be applied recursively
• Merge the subproblem solutions into a solution for the original problem
Divide-and-Conquer
• Divide-and-conquer is a general algorithm design paradigm:
– Divide: divide the input data S into two or more disjoint subsets S1, S2, …
– Recur: solve the subproblems recursively
– Conquer: combine the solutions for S1, S2, …, into a solution for S
• The base cases for the recursion are subproblems of constant size
• Analysis can be done using recurrence equations
The Divide and Conquer Algorithm

Divide_Conquer(problem P)
{
    if Small(P) return S(P);
    else {
        return Combine(Divide_Conquer(P1),
                       Divide_Conquer(P2), …, Divide_Conquer(Pk));
    }
}
Divide_Conquer recurrence relation
The computing time of Divide_Conquer is

T(n) = g(n)                                   if n is small
T(n) = T(n1) + T(n2) + ... + T(nk) + f(n)     otherwise

[Figure: a problem of size n is divided into subproblem 1 and subproblem 2, each of size n/2; the subproblem solutions are combined to solve the original problem.]
Divide-and-Conquer Examples

Binary Search
int BinarySearch(int x, int v[], int n)
{
    int low, high, mid;

    low = 0;
    high = n - 1;
    while (low <= high) {
        mid = (low + high) / 2;
        if (x < v[mid])
            high = mid - 1;
        else if (x > v[mid])
            low = mid + 1;
        else
            return mid;    /* found match */
    }
    return -1;             /* no match */
}
Binary Search
Very efficient algorithm for searching in a sorted array:
• Time efficiency
– worst-case recurrence: T(n) = 1 + T(n/2), T(1) = 1
– solution: T(n) = log2(n+1) = O(log n)
Complexity Analysis
(The recurrence T(n) = 2T(n/2) + cn arises for divide-and-conquer algorithms, such as Mergesort, that split the input in half and do linear work to combine.)

T(1) = 1
T(n) = 2T(n/2) + cn
     = 2(2T(n/4) + cn/2) + cn = 2^2 T(n/2^2) + 2cn
     = 2^2 (2T(n/2^3) + cn/2^2) + 2cn = 2^3 T(n/2^3) + 3cn
     = … = 2^k T(n/2^k) + k·cn
(let n = 2^k, then k = log n)
T(n) = n·T(1) + k·cn = n + c·n·log n = O(n log n)
Recursive Binary Search
• A Divide-and-Conquer algorithm to find a key in an array
• Precondition: S is a sorted list. Check whether x ∈ S.
Linear vs. Binary Search
Sorting Problem
• Optimal number of comparisons
– An algorithm must be able to distinguish which one of the n! permutations is the correct permutation
– This can be viewed as a binary decision tree
– Each permutation corresponds to a leaf in the tree
– The depth of the tree is the number of comparisons necessary for distinguishing the sorted permutation
– A binary tree with height h has at most 2^h leaves
– A sorting decision tree therefore has height at least log2(n!)
– log2(n!) = Θ(n log n)
– Comparison-based sorting therefore requires Ω(n log n) comparisons
Efficient Sorting Techniques

Table for the running time of the algorithms:
• In this table, n is the number of elements to be sorted
• The columns "Best", "Average", and "Worst" give the time complexity in each case
• "Memory" denotes the amount of auxiliary storage needed beyond that used by the list itself

Algorithm        Best         Average      Worst        Memory
Bubble sort      O(n)         O(n²)        O(n²)        O(1)
Selection sort   O(n²)        O(n²)        O(n²)        O(1)
Insertion sort   O(n)         O(n²)        O(n²)        O(1)
Merge sort       O(n log n)   O(n log n)   O(n log n)   O(n)
Quicksort        O(n log n)   O(n log n)   O(n²)        O(log n), worst O(n)
Merge Sort
• Merge sort is a sorting algorithm invented by John von Neumann (1903-1957), based on the divide-and-conquer technique.
• It always runs in Θ(n log n) time, but requires O(n) extra space.
Merge Sort
Merge Sort is a Divide-and-Conquer algorithm. It divides the input array into two halves, calls itself for the two halves, and then merges the two sorted halves.
Merge-Sort Review
Algorithm: Merge-Sort(S,p,r)
A procedure that sorts the elements in the sub-array A[p..r] using divide and conquer:
• Merge-Sort(S,p,r)
– if p >= r, do nothing
– if p < r then q <- (p + r) / 2
• Merge-Sort(S,p,q) <== first half
• Merge-Sort(S,q+1,r) <== second half
• Merge(S,p,q,r)
• Start by calling Merge-Sort(S,1,n)
• How do we partition?
Partitioning - Choice 1
• First (n-1) elements into set A, last element into set B
• Sort A using this partitioning scheme recursively
– B is already sorted
• Combine A and B using function Insert() (= insertion into a sorted array)
• Leads to a recursive version of InsertionSort()
– Number of comparisons: O(n²)
• Best case = n-1 comparisons

A better choice is to split the array into two equal halves, which leads to Mergesort:
[10, 4, 6, 3, 8, 2, 5, 7]
[10, 4, 6, 3] [8, 2, 5, 7]
[2, 3, 4, 5, 6, 7, 8, 10]
The non-recursive version of Mergesort starts from merging single elements into sorted pairs. Example (sorting into descending order):

8 3 9 2 7 1 5 4        (initial single elements)
8 3 | 9 2 | 7 1 | 5 4  (sorted pairs)
9 8 3 2 | 7 5 4 1      (sorted quadruples)
9 8 7 5 4 3 2 1        (final merge)
Merge Sort

T(n) = b               if n < 2
T(n) = 2T(n/2) + bn    if n >= 2

• We can therefore analyze the running time of merge-sort by finding a closed-form solution to the above equation.
– That is, a solution that has T(n) only on the left-hand side.
Recursion tree
[Figure: recursion tree for the Mergesort recurrence: each level costs Θ(n), and the tree has n leaves, each of cost Θ(1).]
Analysis of Mergesort
• All cases have the same efficiency: Θ(n log n)
Solving the recurrence equation for Mergesort
Iterative Substitution
• In the iterative substitution, or “plug-and-chug,” technique, we iteratively apply the recurrence equation to itself and see if we can find a pattern:

T(n) = 2T(n/2) + bn
     = 2(2T(n/2^2) + b(n/2)) + bn = 2^2 T(n/2^2) + 2bn
     = 2^3 T(n/2^3) + 3bn
     = 2^4 T(n/2^4) + 4bn
     = ...
     = 2^i T(n/2^i) + i·bn

The substitution stops when 2^i = n, i.e. i = log n, giving
T(n) = bn + bn log n
• Analysis
– Merge sort is of order O(n log n)
– This is significantly faster than O(n²)
Quick Sort
Quicksort cont’d
• Like mergesort, Quicksort is also based on the divide-and-conquer paradigm.
Example
We are given an array of n integers to sort:
40 20 10 80 60 50 7 30 100

Example: Pick Pivot Element
Pick the first element (pivot_index = 0), so the pivot is 40:
40 20 10 80 60 50 7 30 100

Example: Partitioning Array
• Given a pivot, partition the elements of the array such that the resulting array consists of:
1. One sub-array that contains elements >= pivot
2. Another sub-array that contains elements < pivot
40 20 10 80 60 50 7 30 100
[0] [1] [2] [3] [4] [5] [6] [7] [8]
pivot_index = 0; too_big_index scans from the left, too_small_index scans from the right
Example: Compare, Swap, Repeat
1. While data[too_big_index] <= data[pivot]
   ++too_big_index
2. While data[too_small_index] > data[pivot]
   --too_small_index
3. If too_big_index < too_small_index
   swap data[too_big_index] and data[too_small_index]
4. While too_small_index > too_big_index, go to 1.
5. Swap data[too_small_index] and data[pivot_index]

Tracing the scan (pivot = 40, pivot_index = 0):
40 20 10 80 60 50 7 30 100   (too_big_index stops at 80, too_small_index stops at 30)
40 20 10 30 60 50 7 80 100   (after swapping 80 and 30)
40 20 10 30 7 50 60 80 100   (after swapping 60 and 7; the indices then cross)
7 20 10 30 40 50 60 80 100   (after swapping the pivot into place; pivot_index = 4)

Example: Partition Result
7 20 10 30 40 50 60 80 100
[0] [1] [2] [3] [4] [5] [6] [7] [8]

Example: Quick Sort Sub-Arrays
Recursively quicksort the sub-arrays on either side of the pivot:
7 20 10 30   and   50 60 80 100
Quick Sort Worst Case
Consider an already sorted array with pivot_index = 0:
2 4 10 12 13 50 57 63 100
[0] [1] [2] [3] [4] [5] [6] [7] [8]

Applying the same five steps, too_small_index walks all the way back to the pivot: the partition leaves an empty left sub-array and a right sub-array of n-1 elements. Each recursive call therefore removes only the pivot, giving n levels of recursion with O(n) work per level, i.e. O(n²) in total.
A better choice is the median-of-three pivot:
– Pivot = median of S[left], S[right], and S[mid]
– e.g. the median of 6, 8, and 0 is 6, so pivot = S[left] = 6
Quick Sort
Algorithm
• Pick an element, called a pivot, from the list.
• Reorder the list so that all elements less than the pivot come before it and all elements greater than the pivot come after it (equal values can go either way). After this partitioning, the pivot is in its final position. This is called the partition operation.
• Recursively sort the list of lesser elements and the list of greater elements.

The C standard library declares a generic sorting function:

#include <stdlib.h>
void qsort(void *base, size_t nmemb, size_t size,
           int (*compar)(const void *, const void *));
Quicksort – Analysis
A worst-case partitioning with quicksort:
Original list: 9 8 7 6 5 4 3 2 1 0
With the first element as pivot, each partition step separates only the pivot from the rest, so the list must be partitioned n times.
Partitioning
◼ To partition S[left...right]:
1. Set pivot = S[left], L = left + 1, R = right
2. while L < R, do
   2.1. while L < right and S[L] < pivot, set L = L + 1
   2.2. while R > left and S[R] >= pivot, set R = R - 1
   2.3. if L < R, swap S[L] and S[R]
3. Set S[left] = S[R], S[R] = pivot
4. END

Trace (pivot = 34):
34 41 12 51 7 61 18 20 70 55   (L stops at 41, R stops at 20)
34 20 12 51 7 61 18 41 70 55   (after swapping 41 and 20; L stops at 51, R stops at 18)
34 20 12 18 7 61 51 41 70 55   (after swapping 51 and 18; L and R then cross)
7 20 12 18 34 61 51 41 70 55   (pivot swapped into its final position, S[4])
◼ Space complexity:
– Average case and best case: O(log n)
– Worst case: O(n)
◼ Time complexity:
– Average case and best case: O(n log n)
– Worst case: O(n²)
Comments on quicksort
◼ In practice, Quicksort is typically the fastest general-purpose comparison sort
◼ For better performance, choose the pivot carefully
◼ “Median of three” is a good technique for choosing the pivot
◼ However, no matter what you do, there will be some cases where Quicksort runs in O(n²) time
◼ Warning: Quicksort is not stable