
Input Size   Initial Order   Dupes?   Avg. Time (sort)   Num. Runs   Avg. Time (usel)
1000         random          yes      0.06s              10          0.03s
1000         sorted          yes      0.03s              10          0.00s
1000         reverse         yes      0.02s              10          0.06s
1000         random          no       0.04s              10          0.06s
1000         sorted          no       0.03s              10          0.09s
1000         reverse         no       0.03s              10          0.05s
10000        random          yes      0.19s              10          0.333s
10000        sorted          yes      0.05s              10          0.15s
10000        reverse         yes      0.08s              10          0.0s
10000        random          no       0.05s              10          0.196s
10000        sorted          no       0.03s              10          0.02s
10000        reverse         no       0.08s              10          0.06s
100000       random          yes      0.19s              -           130.025s
100000       sorted          yes      0.082s             -           0.121s
100000       reverse         yes      0.109s             -           0.109s
100000       random          no       0.103s             -           131.415s
100000       sorted          no       0.055s             -           0.087s
100000       reverse         no       0.053s             -           0.065s

As the input size increases, the time the usel program requires to
sort the numbers grows significantly faster than that of the sort
program, but only when the data is randomised. The usel sort is an
implementation of insertion sort, which has a worst-case complexity
of O(n^2), while the Linux sort uses merge sort, which has a
worst-case complexity of O(n log n). The timing tests show how a
better time complexity improves the speed of a program by a large
factor (in this case, reducing roughly 2 minutes to 0.2 seconds) and
demonstrate how important it is to find algorithms with good time
complexity.
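For reference, a generic insertion sort looks like the short C sketch
below. This is a textbook version written purely for illustration;
the function name and the array-based layout are assumptions, not the
actual usel source.

    #include <stddef.h>

    /* Illustrative insertion sort (not the usel source): grow a sorted
     * prefix a[0..i-1] by shifting larger elements one slot to the
     * right and dropping a[i] into the gap.  Worst case is O(n^2)
     * shifts; on already-sorted input the inner loop never runs, so a
     * full pass costs only n-1 comparisons, i.e. O(n). */
    void insertion_sort(int a[], size_t n)
    {
        for (size_t i = 1; i < n; i++) {
            int key = a[i];
            size_t j = i;
            while (j > 0 && a[j - 1] > key) {
                a[j] = a[j - 1];   /* shift right to make room */
                j--;
            }
            a[j] = key;            /* key lands in its sorted slot */
        }
    }

Merge sort, by contrast, always splits the input in half, sorts each
half, and merges the results, so its O(n log n) bound holds no matter
how the input is ordered.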

The presence of duplicate numbers did not affect the speed of either
sort by much, but sorted and reverse-ordered lists of numbers made
the usel program much faster. This is because the insertion sort
algorithm (which places each new integer at its position within an
already-sorted list) does less work on lists that are already sorted
or almost fully sorted.
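To make the effect concrete, the sketch below instruments the same
illustrative sort with a shift counter (the value of N and the use of
rand() are arbitrary choices for the example). Sorted input performs
zero shifts, so the cost is linear, while random input averages
roughly n^2/4 shifts, which is what blows the randomised
100000-element usel runs up to minutes. One caveat: for this plain
array version, reverse order would actually be the worst case, so the
fast reverse-order times in the table suggest usel uses a variant (a
list-based insert, for instance) that also handles descending input
cheaply; that is an assumption about the implementation.

    #include <stdio.h>
    #include <stdlib.h>

    static unsigned long shifts;   /* counts inner-loop element moves */

    /* Same illustrative insertion sort as above, instrumented. */
    static void insertion_sort(int a[], size_t n)
    {
        for (size_t i = 1; i < n; i++) {
            int key = a[i];
            size_t j = i;
            while (j > 0 && a[j - 1] > key) {
                a[j] = a[j - 1];
                j--;
                shifts++;
            }
            a[j] = key;
        }
    }

    int main(void)
    {
        enum { N = 10000 };
        static int a[N];

        for (int i = 0; i < N; i++) a[i] = i;      /* already sorted */
        shifts = 0;
        insertion_sort(a, N);
        printf("sorted input: %lu shifts\n", shifts);  /* prints 0 */

        for (int i = 0; i < N; i++) a[i] = rand(); /* randomised */
        shifts = 0;
        insertion_sort(a, N);
        printf("random input: %lu shifts\n", shifts);  /* ~ N*N/4 */

        return 0;
    }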
