
School of Computer Science and Applied Mathematics

University of the Witwatersrand, Johannesburg

Advanced Analysis of Algorithms


Notes
Sheng Yan Lim
Semester II, 2020

2 Sorting algorithms
2.1 Brute force sorting algorithms
The next group of common algorithms which we will consider will be those for sorting. This is
another task which is performed often when computers are used in industry so it is worthwhile
understanding some of the classic approaches. We will not study all of the sorting algorithms
which have been developed, merely an interesting subset of them.

These algorithms can be applied to sort data structures of various types. We will restrict our
study to lists of numbers. In particular we will be trying to sort lists of numbers stored in arrays
in memory.

2.1.1 Max sort


Suppose that you were asked to sort a list of numbers. One way to tackle the problem would
be to find the biggest number in the list and write that down, then to find the second biggest
number in the list, then to find the third biggest number and so on.

The max sort works in very much the same fashion. We start off by making a pass through
the list to find which position in the list holds the biggest number. Then we swop the biggest
number and the number in the last position. This means that the biggest number is now at the
end of the list (the correct position for it to be if we are sorting from smallest to biggest). Then
we look for the position of the biggest number in the list but now we do not consider the last
position in the list. After this pass we swop this number (the second biggest number) with the
number in the second last position in the original list. Having done this the last two numbers
in the list are in sorted order. We then continue this process for all the numbers in the list.

The detail of the max sort algorithm is given in Algorithm 1.

Let us now analyse this algorithm. Clearly the algorithm has no distinct best or worst case – for any list of length n it always does the same amount of work. To determine the amount of work
the algorithm does we must first identify a basic operation which will give us a good idea of
the amount of work done and then we must determine how many times this basic operation is
executed. In this algorithm most of the work is done in comparisons between list elements, so counting these comparisons gives a good approximation to the amount of work that the algorithm does.

The algorithm has two loops. The outer loop controls the moving of the biggest number at
each pass to the end of the list. The inner loop checks each number in the list being considered
at any stage to determine if it is the biggest number in that sublist.

Algorithm 1 maxSort(myList, n)
Input: myList, n where myList is an array with n entries (indexed 0, 1, . . . n − 1)
Output: myList where the values in myList are such that
myList[0] ≤ myList[1] ≤ . . . ≤ myList[n − 2] ≤ myList[n − 1]
01 For i from n − 1 down to 1
02     maxPos ← i
03     For j from 0 to i − 1
04         If myList[j] > myList[maxPos]
05         Then
06             maxPos ← j
07     swop(myList[maxPos], myList[i])
08 Return myList
In the first pass of the outer loop there are n numbers being considered so there are n − 1 comparisons (the inner
loop is executed n − 1 times to compare each number in that sublist to the current maximum
number). In the next pass through the outer loop the sublist is of length n − 1. The biggest
number has been moved to the end of the list (the n − 1th position in the array) and we are
only considering positions 0 to n − 2 in the array with the n − 2th array element originally set as
the maximum found so far. Thus there are n − 2 comparisons. The next time round the sublist
is of length n − 2, then n − 3, and finally of length 2. The total number of comparisons is thus
(n − 1) + (n − 2) + (n − 3) + . . . + 2 + 1. So the complexity function is
g(n) = (n − 1) + (n − 2) + . . . + 3 + 2 + 1 = n(n − 1)/2

The max sort algorithm is thus O(n²).
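
To make the algorithm concrete, here is a Python rendering of Algorithm 1 (a sketch of our own – the function and variable names are not part of the original pseudocode):

def max_sort(my_list):
    # Sort my_list in place in ascending order (Algorithm 1).
    n = len(my_list)
    for i in range(n - 1, 0, -1):           # i from n-1 down to 1
        max_pos = i
        for j in range(i):                  # j from 0 to i-1
            if my_list[j] > my_list[max_pos]:
                max_pos = j                 # remember position of biggest so far
        # swop the biggest number into position i
        my_list[max_pos], my_list[i] = my_list[i], my_list[max_pos]
    return my_list

print(max_sort([5, 1, 4, 2, 3]))            # prints [1, 2, 3, 4, 5]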

Once again we should prove that the algorithm we have just developed will do what we want it
to do if we supply it with valid input. That is, if our input is a list of numbers then the algorithm
will return the list of numbers sorted in ascending order (i.e. arranged from the smallest number
to the biggest number). This can be done by a straightforward inductive proof.

Theorem 1. maxSort works correctly for any valid input.


Proof
Base case
Let n = 2. That is, let the length of the list be 2 (there are two numbers in the list – a list
with only one number is already sorted). Then the outer For loop will work from 1 down to
1. That is, it will execute exactly once. In the body of the loop maxPos will be given the value of i which is 1 – it will point to array position 1 which holds the second number in the list. In the inner loop j goes from 0 to i − 1 = 1 − 1 = 0 and so will also execute once. In the body of the inner loop, if myList[0] > myList[1] then maxPos is updated to point to position 0 in the array (the first number is bigger than the second number). If myList[0] ≤ myList[1] then the first number is the smaller of the two and maxPos will not be changed. The inner loop will then be exited and the number in position 1 in the array will be swopped with the number in the position pointed to by maxPos. If the numbers were originally sorted then the
swop will change nothing – the number in position 1 in the array will be swopped with itself. If
the numbers were not sorted then the numbers in positions 0 and 1 will be swopped. This list
will now be sorted. The algorithm would then exit the outer loop and return the sorted list of
numbers. The algorithm thus works for a list of length 2.

Induction hypothesis

Assume now that the algorithm works for a list of length k − 1 where k > 2. This means that
the algorithm will correctly sort a list of length k − 1.

Inductive step
We now need to show that the algorithm would work for a list of length k. Consider what
happens in the first pass through the outer For loop. Here maxPos will be set to k − 1 (to point to the last element in the list). The inner loop will then compare each of the numbers in positions 0 to k − 2 to the number currently pointed at by maxPos and will update maxPos appropriately. After the inner loop terminates maxPos will point to the biggest number in the
list which will then be moved to the last position in the list. We thus have the biggest number in its correct position and a (possibly) unsorted list of length k − 1 in positions 0 to k − 2 of the array.

By our induction hypothesis the algorithm will correctly sort a list of length k − 1. The remaining passes through the outer loop are exactly the passes the algorithm would make when called on a list of length k − 1, so they will correctly sort the numbers in positions 0 to k − 2 of the array (the first k − 1 numbers in the list). When the outer loop terminates the algorithm will therefore return a correctly sorted list of numbers. So the algorithm will work correctly for a list of length k.

Thus by the principle of mathematical induction our result is proved. □

As we will see later the max sort algorithm is not an optimal algorithm. There are a number of
more efficient sorting algorithms.

2.1.2 Selection sort


The Selection sort (also called the min sort) algorithm is very similar to the max sort algorithm
except that in this case we move the smallest number into its correct position in the first pass
of the algorithm, the second smallest number into its correct position in the second pass, and
so on. The selection (or min) sort algorithm is shown in Algorithm 2.

Algorithm 2 selectionSort(myList, n)
Input: myList, n where myList is an array with n entries (indexed 0 . . . n − 1)
Output: myList where the values in myList are such that
myList[0] ≤ myList[1] ≤ . . . ≤ myList[n − 2] ≤ myList[n − 1]
01 For i from 0 to n − 2
02     minPos ← i
03     For j from i + 1 to n − 1
04         If myList[j] < myList[minPos]
05         Then
06             minPos ← j
07     swop(myList[minPos], myList[i])
08 Return myList
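
In Python, Algorithm 2 might be sketched as follows (again our own rendering, not part of the original notes):

def selection_sort(my_list):
    # Sort my_list in place in ascending order (Algorithm 2).
    n = len(my_list)
    for i in range(n - 1):                  # i from 0 to n-2
        min_pos = i
        for j in range(i + 1, n):           # scan the unsorted tail
            if my_list[j] < my_list[min_pos]:
                min_pos = j                 # remember position of smallest so far
        my_list[min_pos], my_list[i] = my_list[i], my_list[min_pos]
    return my_list

print(selection_sort([3, 1, 5, 2, 4]))      # prints [1, 2, 3, 4, 5]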

The analysis for the selection sort is very similar to that for the max sort. The inner loop first does n − 1 comparisons (comparing the numbers in positions 1 to n − 1 with the current minimum), then n − 2 comparisons, and so on. So once again the complexity function is g(n) = n(n − 1)/2 and the algorithm is O(n²).

The proof of correctness of the solution method is very similar to that for max sort and, like
max sort, selection sort is not an optimal algorithm.

2.1.3 Bubblesort
Another commonly used sorting algorithm is the bubble sort. This algorithm works by “bubbling” the largest number to the end of the list. Then “bubbling” the second largest number to the second last position in the list. Then the third largest number to the third last position
and so on until the list is sorted. The algorithm works by comparing numbers which are next
to each other in the list of numbers and interchanging the values if the first value is bigger than
the second.

The algorithm can be written down as shown in Algorithm 3.

Algorithm 3 bubbleSort(myList, n)
Input: myList, n where myList is an array with n entries (indexed 0 . . . n − 1)
Output: myList where the values in myList are such that
myList[0] ≤ myList[1] ≤ . . . ≤ myList[n − 2] ≤ myList[n − 1]
01 For i from n − 1 down to 1
02     For j from 0 to i − 1
03         If myList[j] > myList[j + 1]
04         Then
05             Swop(myList[j], myList[j + 1])
06 Return myList
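
A Python sketch of Algorithm 3 (our own rendering) is:

def bubble_sort(my_list):
    # Bubble the biggest remaining number to the end of the
    # unsorted portion on each pass (Algorithm 3).
    n = len(my_list)
    for i in range(n - 1, 0, -1):           # i from n-1 down to 1
        for j in range(i):                  # j from 0 to i-1
            if my_list[j] > my_list[j + 1]:
                my_list[j], my_list[j + 1] = my_list[j + 1], my_list[j]
    return my_list

print(bubble_sort([5, 1, 4, 2, 8]))         # prints [1, 2, 4, 5, 8]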

This algorithm does the job it is required to do but how efficient is it? In order to answer this question, let us look at the work that the algorithm does; once again we choose the comparison of two list elements as the basic operation.

The comparisons are made in the inner loop of the algorithm and there is one comparison each
time the inner loop is executed. Thus counting the number of times the inner loop is executed
will tell us how many comparisons are made. For a problem of size n, the inner loop is executed
first n − 1 times, then n − 2 times, ..., and finally once only.

This means that in total the inner loop is executed n(n − 1)/2 times. This is O(n²). This also means that the algorithm is O(n²) in the number of comparisons of list elements made.

This analysis gives us a measure of the efficiency of the bubble sort. (Most of the work is done in the inner loop.)

Exercise: Prove the bubble sort is O(n²).

Is this a good measure of the amount of work the algorithm has to do? What would the com-
plexity of the algorithm be if we chose the number of swops as the basic operation which we
count to determine the complexity function of the algorithm? In this case the algorithm would
display best and worst case performance. In the best case no swops would ever be done but we
would still do O(n²) comparisons – this would happen if every pair of numbers which was tested
was already “in order”. This could only happen if the original list was already sorted. In the
worst case a swop would occur for every comparison. This would be the case if the original list
was arranged in reverse sorted order (from biggest to smallest). The algorithm would then do
n − 1 swops to move the largest number from the beginning of the list to the end of the list.
Then it would do n − 2 swops to move the second largest number from the first position to the second last position in the list. And so on until the list is sorted. So the complexity function
would be g(n) = n(n − 1)/2 and the algorithm is O(n²) in the worst case.
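
A quick way to see the two extremes is to count the swops directly (a small experiment of our own; the counting function is ours, not part of the notes):

def bubble_swops(my_list):
    # Run bubble sort on a copy of my_list and count the swops made.
    a = list(my_list)
    swops = 0
    for i in range(len(a) - 1, 0, -1):
        for j in range(i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swops += 1
    return swops

n = 6
print(bubble_swops(list(range(n))))         # 0 swops: already sorted (best case)
print(bubble_swops(list(range(n, 0, -1))))  # 15 swops = n(n-1)/2 (reverse sorted, worst case)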

This raises the question of whether we can improve the bubble sort in any way so that it has an
improved best case performance. One way to do this is to break out of the algorithm if on any
pass of the inner loop no swops are made. If no swops are made then the list must be in order.
The algorithm in Algorithm 4 works in this way.

Algorithm 4 bubbleSortAgain(myList, n)
The bubble sort algorithm with an escape clause
Input: myList, n where myList is an array with n entries (indexed 0 . . . n − 1)
Output: myList where the values in myList are such that
myList[0] ≤ myList[1] ≤ . . . ≤ myList[n − 2] ≤ myList[n − 1]
01 i ← n − 1
02 sorting ← True
03 While i ≥ 1 and sorting = True
04     swopped ← False
05     For j from 0 to i − 1
06         If myList[j] > myList[j + 1]
07         Then
08             Swop(myList[j], myList[j + 1])
09             swopped ← True
10     If swopped = False
11     Then sorting ← False
12     Else i ← i − 1
13 Return myList
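
A Python sketch of Algorithm 4 (our own rendering):

def bubble_sort_again(my_list):
    # Bubble sort with an escape clause (Algorithm 4): stop as soon
    # as a complete pass makes no swops.
    i = len(my_list) - 1
    sorting = True
    while i >= 1 and sorting:
        swopped = False
        for j in range(i):
            if my_list[j] > my_list[j + 1]:
                my_list[j], my_list[j + 1] = my_list[j + 1], my_list[j]
                swopped = True
        if not swopped:
            sorting = False                 # no swops: the list is sorted
        else:
            i -= 1
    return my_list

print(bubble_sort_again([1, 2, 3, 4, 5]))   # one pass of n-1 comparisons, then exit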

So if on any pass of the list, no swops are made then the algorithm breaks out of the outer loop
and terminates. The algorithm will however always have to make at least one pass through the
inner loop in order to check if any swops are made. The algorithm will thus do at least n − 1
comparisons of list elements. The best case for this version of bubble sort is thus O(n) and the
worst case remains O(n²).

The bubble sort is simple (easy to understand, easy to remember and easy to program) but not very efficient. For short lists or one-off jobs it may be adequate, but there are better algorithms for sorting and we will be looking at some of these in the next sections of this chapter.

2.2 Decrease and Conquer Sorting Algorithms


2.2.1 Insertion sort
The next straightforward sort we will consider is the insertion sort. This sort works by maintaining a sorted sublist of numbers and inserting the next number into the correct position of
the sorted sublist which then becomes one number longer. The process is then repeated for the
next number.

The algorithm starts by treating the first number in the list as the sorted sublist and then inserting the second number into the appropriate position in the sublist – before the first number if it
is smaller than the first number and after it otherwise (remember we are dealing with distinct numbers so the two numbers cannot have the same value). Note if the second number is smaller
than the first then the first number has to be shifted from position 0 in the array into position
1 so that the second number can be slotted into position 0. If the second number is bigger than
the number in the sorted sublist then the length of the sorted sublist is increased.

The third number in the original list is handled in a similar fashion. First it is compared to
the second number in the sorted sublist. If it is bigger then the sorted sublist is increased in
length by 1. If it is smaller then it is compared to the first number in the sorted sublist. If it
is bigger than the first number it is slotted in between the first and second numbers by moving
the second number one space to the right and putting the new number into the position which
has been “opened up” in the sorted sublist. If the number we are inserting is smaller than the
first number in the list then it must be inserted at the beginning of the list. This means that all
of the numbers have to be shifted along to make room. First the second number in the sorted
list is shifted one place to the right (into the position which was occupied by the number we are
inserting). Then the first number is shifted into the position freed up by the second number and
then finally the number we are trying to place is inserted into the beginning of the sorted sublist
(at position 0). This process is then repeated for all of the other numbers in the unsorted sublist.

The insertion sort algorithm is shown in Algorithm 5.

Algorithm 5 insertionSort(myList, n)
The insertion sort algorithm
Input: myList, n where myList is an array with n entries (indexed 0 . . . n − 1)
Output: myList where the values in myList are such that
myList[0] ≤ myList[1] ≤ . . . ≤ myList[n − 2] ≤ myList[n − 1]
01 For i from 1 to n − 1
02     x ← myList[i]
03     j ← i − 1
04     While j ≥ 0 and myList[j] > x
05         myList[j + 1] ← myList[j]
06         j ← j − 1
07     myList[j + 1] ← x
08 Return myList
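
A Python rendering of Algorithm 5 (a sketch of our own):

def insertion_sort(my_list):
    # Sort my_list in place in ascending order (Algorithm 5).
    for i in range(1, len(my_list)):
        x = my_list[i]                      # the next number to insert
        j = i - 1
        while j >= 0 and my_list[j] > x:    # shift bigger numbers right
            my_list[j + 1] = my_list[j]
            j -= 1
        my_list[j + 1] = x                  # slot x into the opened position
    return my_list

print(insertion_sort([5, 2, 4, 6, 1, 3]))   # prints [1, 2, 3, 4, 5, 6]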

To analyse this algorithm we can once again consider the comparisons of two list elements as
the basic operation. It should be easy to see that this algorithm performs different amounts of
work on different inputs (i.e. it exhibits best and worst case behaviour).

When does the algorithm do the fewest comparisons? If the test myList[j] > x fails (i.e. myList[j] < x, since the numbers are distinct) then the algorithm will not enter the body of the While loop and will not test that condition again on that pass through the outer loop. So if on each pass of the
outer loop the test myList[j] > x fails immediately then the least number of comparisons will
be made. What would the format of the input list have to be for this to occur? On each pass
through the outer loop x (the number we are trying to insert into its correct place) would have
to be bigger than the biggest number in the sorted sublist. This would only occur if the original
list was already in sorted order. In this case we would do one comparison of list elements for
each pass of the outer loop so we would do n − 1 such comparisons in total. Our insertion sort
algorithm is thus O(n) in the best case.

Similarly the algorithm does the most work (the most comparisons of list elements) when the
number which is being inserted into the sorted sublist is smaller than all of the numbers which are
already in that sorted sublist. x is compared to the last number in the sublist, then to the second
last, and so on until it is compared to the first number in the sublist. When the sorted sublist
has only one number in it then only one comparison is made, when it has two numbers then two
comparisons are made, and so on. When the algorithm is trying to insert the last number into its
correct place the sorted sublist has n − 1 numbers in it and so n − 1 comparisons are made. Thus
gW(n) = 1 + 2 + 3 + . . . + (n − 2) + (n − 1) = n(n − 1)/2 and so insertion sort is O(n²) in the worst case.

What about the average case performance of insertion sort? The average case analysis presented
here is based on that by [1]. [3] present a similar analysis. To do the analysis we assume that the
numbers we are sorting are distinct and that all arrangements (permutations) of the elements
are equally likely. Let us now consider what will happen when we try to insert the ith number
of the original list into its correct position in the ordered sublist which is of length i − 1. There
are i places where this number could occur. It could be bigger than all the numbers, it could fit
between the biggest and the second biggest number in the list, or between the second biggest
and the third biggest number, and so on. It could also be smaller than the first number. If
the number is bigger than all the numbers in the sorted sublist then it takes one comparison
to establish this. If it is bigger than the second biggest number but smaller than the biggest
number then two comparisons are required to establish this. And so on. If the number we are
trying to insert into the list is smaller than all of the numbers in the sorted sublist except for
the first number in the sorted sublist then it takes i − 1 comparisons to establish this. If the
number we are trying to insert into the list is smaller than all of the numbers then we find this
out when we compare it to the first number and so this case also takes i − 1 comparisons. By
our assumption all of the positions are equally likely, so the average number of comparisons to insert our number into the sorted sublist of length i − 1 is
    Σ_{j=1}^{i−1} (1/i) · j + (1/i) · (i − 1) = (1/i) Σ_{j=1}^{i−1} j + 1 − 1/i = (i − 1)/2 + 1 − 1/i = (i + 1)/2 − 1/i

If we now consider the comparisons required for inserting each number in the original list into the sorted sublist, then

    gA(n) = Σ_{i=2}^{n} ((i + 1)/2 − 1/i) = n²/4 + 3n/4 − 1 − Σ_{i=2}^{n} 1/i

Now Σ_{i=2}^{n} 1/i ≈ ln n and therefore

    gA(n) ≈ n²/4 + 3n/4 − 1 − ln n

and so gA(n) ∈ Θ(n²).
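
As a sanity check on this formula (an experiment of our own, not part of the cited analyses), we can count the comparisons insertion sort makes over all permutations of a small list and compare the empirical average with gA(n) before the ln n approximation is applied:

import math
from itertools import permutations

def comparisons(seq):
    # Run insertion sort on a copy of seq, counting comparisons
    # of list elements (the basic operation in the analysis).
    a = list(seq)
    count = 0
    for i in range(1, len(a)):
        x = a[i]
        j = i - 1
        while j >= 0:
            count += 1                      # one comparison of a[j] with x
            if a[j] > x:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = x
    return count

n = 6
average = sum(comparisons(p) for p in permutations(range(n))) / math.factorial(n)
g_A = n * n / 4 + 3 * n / 4 - 1 - sum(1 / i for i in range(2, n + 1))
print(average, g_A)                          # both should print 11.05 for n = 6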

Insertion sort is thus O(n) in the best case and O(n²) in the average and worst cases. In practice, insertion sort works quite efficiently. We will, however, study some more efficient algorithms later in the text. The fact that there are more efficient sorting algorithms (that also work by comparisons of list elements) means that insertion sort (and max sort, selection sort and bubblesort) is not optimal.

Once again a simple inductive proof can be used to show that the insertion sort algorithm is
correct.

2.3 Divide and Conquer Sorting algorithms
2.3.1 Overview
Typically divide-and-conquer solutions to problems are based on the approach of dividing the
original problem instance into two (or more) roughly equal smaller parts, solving the smaller
instances recursively (i.e. in the same manner by once again dividing them into smaller instances)
until a limiting case is reached which can be solved directly and then combining the solutions
to produce the solution to the original problem. A skeleton of a typical divide and conquer
algorithm is given in Algorithm 6.

Algorithm 6 DivideAndConquer(Inputdata)
A typical divide and conquer algorithm
01 IF limitingCase
02 Then
03     Answer ← DirectSolution(Inputdata)
04 Else
05     Divide(Inputdata, I1, I2, ..., In)
06     A1 ← DivideAndConquer(I1)
07     A2 ← DivideAndConquer(I2)
08     ...
09     An ← DivideAndConquer(In)
10     Answer ← Combine(A1, A2, ..., An)

In different divide and conquer algorithms the work done in the dividing and the combining
phases could be different.

2.3.2 The Mergesort Algorithm


The mergesort algorithm is a typical divide and conquer algorithm.

A simple skeleton of the recursive mergesort algorithm is shown in Algorithm 7. This algorithm
works on a list of numbers stored in an array in memory. The parameters left and right are the positions of the first element and last element in the array to be sorted. Initially left would be 0 and right would be n − 1 where n is the length of the list being sorted.

This algorithm is clearly recursive (there are two calls to the algorithm inside its own body), it divides the current list in half at each level of recursion and then calls mergesort
to sort the halves of the list. The merge which reassembles the sorted list from the two sorted
half lists must still be specified properly.

We will look now at a more complete specification of mergesort which works by successively
dividing the list into smaller and smaller lists until the limiting case of a one element list is
reached. Once the limiting case of the recursion has been reached the algorithm begins “bubbling out”, merging the sorted half lists at each level of recursion.

To accomplish the merging of the two sorted half lists at each level we need a second array. This
array stores the unmerged half lists in a back to back fashion with the right half list in reverse
order.

Algorithm 7 mergeSort(left, right)
The mergesort algorithm
Input: left, right where left and right are indices into an array myList of n entries
(initially indexed 0 . . . n − 1)
Output: a portion of the array myList where the values in myList are such that
myList[left] ≤ myList[left + 1] ≤ . . . ≤ myList[right − 1] ≤ myList[right]
01 IF right − left > 0
02 Then
03     mid ← ⌊(left + right)/2⌋
04     mergeSort(left, mid)
05     mergeSort(mid + 1, right)
06     merge(left, mid, mid + 1, right)

Algorithm 8 mergeSort(left, right)
The mergesort algorithm
Input: left, right where left and right are indices into an array myList of n entries
(initially indexed 0 . . . n − 1)
Output: a portion of the array myList where the values in myList are such that
myList[left] ≤ myList[left + 1] ≤ . . . ≤ myList[right − 1] ≤ myList[right]
01 IF right − left > 0
02 Then
03     mid ← ⌊(left + right)/2⌋
04     mergeSort(left, mid)
05     mergeSort(mid + 1, right)
06     For i from mid down to left
07         temp[i] ← myList[i]
08     For j from mid + 1 to right
09         temp[right + mid + 1 − j] ← myList[j]
10     i ← left
11     j ← right
12     For k from left to right
13         If temp[i] < temp[j]
14         Then
15             myList[k] ← temp[i]
16             i ← i + 1
17         Else
18             myList[k] ← temp[j]
19             j ← j − 1

This temporary list is then used to merge the two half lists back into the original list: we look at the two ends of the temporary list, copy the smaller of the two end elements into the next position of the original list, shrink the temporary list appropriately and continue until all the values have been copied into the original list in sorted order.

The full mergesort algorithm can then be written out as in Algorithm 8.


Exercise: To make sure that you understand how mergeSort works, use the algorithm to sort
the list 3,1,19,2,4,21,7,11.
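
For concreteness, here is a Python rendering of Algorithm 8 (a sketch of our own; the auxiliary array temp is allocated once up front, something the pseudocode leaves implicit):

def merge_sort(my_list, left, right, temp=None):
    # Recursive mergesort of my_list[left..right] (Algorithm 8).
    if temp is None:
        temp = [0] * len(my_list)           # auxiliary array for merging
    if right - left > 0:
        mid = (left + right) // 2
        merge_sort(my_list, left, mid, temp)
        merge_sort(my_list, mid + 1, right, temp)
        # copy the two sorted halves into temp back to back,
        # with the right half in reverse order
        for i in range(left, mid + 1):
            temp[i] = my_list[i]
        for j in range(mid + 1, right + 1):
            temp[right + mid + 1 - j] = my_list[j]
        i, j = left, right
        for k in range(left, right + 1):    # merge back into my_list
            if temp[i] < temp[j]:
                my_list[k] = temp[i]
                i += 1
            else:
                my_list[k] = temp[j]
                j -= 1
    return my_list

print(merge_sort([3, 1, 19, 2, 4, 21, 7, 11], 0, 7))
# prints [1, 2, 3, 4, 7, 11, 19, 21]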

We make the claim that this algorithm is more efficient than the bubblesort. Let us now try to
determine the order of complexity of the mergesort algorithm.
Theorem 2. mergeSort requires about n log₂ n comparisons to sort a list of n elements.
Proof
For mergesort we have an algorithm that has to make a linear pass (the merging phase) through
the data after it has split and sorted the two halves. The splitting of the list can be done in
constant time. Sorting each half list takes half as much time as sorting the whole list and sorting
a list of length 1 takes no work.

So we can derive the recurrence relation

g(n) = g(⌊n/2⌋) + g(⌈n/2⌉) + n and g(1) = 0

which describes the amount of work done by the algorithm for n elements of input.

Now if we consider a list of size n = 2^m we get

    g(n) = 2 · g(n/2) + n, that is, g(2^m) = 2 · g(2^(m−1)) + 2^m

If we now divide through by 2^m we get

    g(2^m)/2^m = g(2^(m−1))/2^(m−1) + 1

Substituting for g(2^(m−1))/2^(m−1) in this formula gives

    g(2^m)/2^m = g(2^(m−2))/2^(m−2) + 1 + 1

We continue in this fashion, substituting for g(2^(m−2))/2^(m−2), g(2^(m−3))/2^(m−3), etc., and eventually we get g(2^m)/2^m = m (since g(1) = 0).

Therefore g(2^m) = m · 2^m. But we have that n = 2^m, that is, m = log₂ n.

Therefore g(n) = n log₂ n, so the relationship applies if n = 2^m.

It can be shown that this relationship holds even if n ≠ 2^m but we will not do this analysis here.

So mergesort is Θ(n log₂ n). □

Exercise: Prove that mergesort is Θ(n log₂ n).

2.3.3 Quicksort
Quicksort is another “divide-and-conquer” method for sorting. It works by partitioning the list
into two parts and then sorting the parts independently. The position of the partition depends

on the list being sorted.

A skeleton algorithm for the quicksort (again working on an array in memory) is thus as shown
in Algorithm 9.

Algorithm 9 quickSort(left, right)
The quicksort algorithm
Input: left, right where left and right are indices into an array of n entries (initially indexed
0 . . . n − 1)
Output: a portion of the array myList where the values in myList are such that
myList[left] ≤ myList[left + 1] ≤ . . . ≤ myList[right − 1] ≤ myList[right]
01 IF right > left
02 Then
03     i ← partition(left, right)
04     quickSort(left, i − 1)
05     quickSort(i + 1, right)

The crux of the algorithm is the partition.

The partition must rearrange the list to make the following 3 conditions hold:
1. the element myList[i] is in its final place in the list for some i
2. all the elements in myList[left] . . . myList[i − 1] are less than or equal to myList[i]
3. all the elements in myList[i + 1] . . . myList[right] are greater than or equal to myList[i]
How is this done in practice?
1. Choose myList[right] as the element to go to the correct place in the list.
2. Scan from the left until an element ≥ myList[right] is found.
3. Scan from the right until an element < myList[right] is found.
4. Swop these elements.
5. Continue this way until the scan pointers cross.
6. Exchange myList[right] with the element in the leftmost position of the right sublist (i.e. the element pointed to by the left pointer).
The complete quicksort is thus as shown in Algorithm 10.

Exercise: Use quicksort to sort the list 3, 1, 5, 2, 9, 7, 4.

Let us now consider the complexity of the quicksort algorithm. The best thing that could happen in quicksort would be that each partitioning divides the list exactly in half. This would mean the number of comparisons would satisfy the divide-and-conquer recurrence g(n) = 2 · g(n/2) + n and g(1) = 0.

The 2 · g(n/2) is the cost for sorting the two half lists and the n is the number of comparisons at each level. We have seen that g(n) = n log₂ n. That is, the best case for the quicksort is O(n log₂ n). We can prove (but won't do it here) that the average case for the quicksort is 2n ln n or O(n ln n).

Let us now consider the worst case of the quicksort algorithm. First notice that at any level of
the recursion the partition will make k − 1 comparisons of the partition element v with list
elements for a list of length k.

Algorithm 10 quickSort(left, right)
The quicksort algorithm
Input: left, right where left and right are indices into an array of n entries
(initially indexed 0 . . . n − 1)
Output: a portion of the array myList where the values in myList are such that
myList[left] ≤ myList[left + 1] ≤ . . . ≤ myList[right − 1] ≤ myList[right]
01 IF right > left
02 Then
03     v ← myList[right]
04     i ← left
05     j ← right
06     While i < j
07         While myList[i] < v
08             i ← i + 1
09         While j > i and myList[j] ≥ v
10             j ← j − 1
11         If j > i
12         Then
13             t ← myList[i]
14             myList[i] ← myList[j]
15             myList[j] ← t
16         Else t ← myList[i]
17              myList[i] ← myList[right]
18              myList[right] ← t
19     quickSort(left, i − 1)
20     quickSort(i + 1, right)

Once these comparisons have been made, the partition element is moved into its correct place in the list for that level. If the partition element is the biggest
number in the list then all that happens is that the list of length k is split into a list of length
k − 1 and a list of length 0 on opposite sides of the partition element.

At the next level of recursion, for the list of length k − 1, k − 2 comparisons would be done to find the correct place for the partition element. If the partition element was again the biggest element in the list then we would again get a bad partition – into a list of length k − 2 and a list of length 0.

For any list of length n the situation of the partition element always being the biggest element
in the sublist being considered would occur if the list was already sorted before the quicksort algorithm was called. In this case (n − 1) + (n − 2) + (n − 3) + ... + 2 + 1 comparisons
would be made. This means that n(n − 1)/2 comparisons of list elements would be done in total.

The worst case of quicksort, when the list is already sorted, is thus O(n²).

Note that there are a number of different ways of performing the partition in the quicksort algorithm but they all lead to the same analysis.
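
For reference, here is a Python sketch of the partition scheme used in Algorithm 10, together with the recursive driver (our own rendering of the pseudocode):

def partition(my_list, left, right):
    # Partition my_list[left..right] around the pivot my_list[right]
    # and return the pivot's final position.
    v = my_list[right]                      # the partition element
    i, j = left, right
    while i < j:
        while my_list[i] < v:               # scan from the left for an element >= v
            i += 1
        while j > i and my_list[j] >= v:    # scan from the right for an element < v
            j -= 1
        if j > i:
            my_list[i], my_list[j] = my_list[j], my_list[i]
    # the scan pointers have crossed: put the pivot into position i
    my_list[i], my_list[right] = my_list[right], my_list[i]
    return i

def quick_sort(my_list, left, right):
    if right > left:
        i = partition(my_list, left, right)
        quick_sort(my_list, left, i - 1)
        quick_sort(my_list, i + 1, right)
    return my_list

print(quick_sort([3, 1, 5, 2, 9, 7, 4], 0, 6))
# prints [1, 2, 3, 4, 5, 7, 9]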

2.4 Optimal sorting algorithms


We will not consider the optimality of these algorithms in any detail in this course – the proofs are quite difficult and we don't have the time to go into them. Note, however, that the lower bound for sorting algorithms which work by comparisons of list elements (as these do) is approximately n log₂ n comparisons. This means that mergesort can be considered an optimal sorting algorithm in this class of problem. Max sort, selection sort, bubblesort and insertion sort are obviously not optimal.

There are other sorting algorithms, based on other approaches and applicable only to inputs of particular forms, which work in linear time.

2.5 Additional reading material


For more on sorting see [2, 4] in addition to the texts previously cited.

Acknowledgments: Significant parts of this material were originally prepared by Prof. Ian
Sanders and further edited by Dr. Hima Vadapalli.

References
[1] Sara Baase. Computer Algorithms: Introduction to Design and Analysis (2nd ed.). Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 1988.

[2] Sara Baase and Allen Van Gelder. Computer Algorithms: Introduction to Design and Analysis. Pearson/Prentice Hall, 2000.

[3] Gilles Brassard and Paul Bratley. Fundamentals of Algorithmics. Prentice Hall, Englewood Cliffs, 1996.

[4] Thomas H. Cormen, Charles E. Leiserson, and Ronald L. Rivest. Introduction to Algorithms. The MIT Press and McGraw-Hill Book Company, 1989.
