• Dilemma: you have two (or more) solutions to a problem. How do you choose the best one?
• Simple but unreliable approach: implement each algorithm in Python and measure how long each takes to complete.
• An algorithm's runtime can differ across machines, inputs, and settings.
• Difficult but reliable approach: assess performance in an abstract way.
• Idea: analyze algorithm performance as the size of the input grows.
• We need an approach that shows the rate of growth of a program's running time, so that we can compare algorithms.
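As a concrete illustration of the timing approach above, the sketch below uses Python's timeit module to compare two ways of summing a list (the function names and the input are illustrative, not from the slides):

```python
import timeit

def sum_with_loop(lst):
    # Sum by iterating over every element: work grows with len(lst).
    total = 0
    for x in lst:
        total += x
    return total

def sum_builtin(lst):
    # Sum using Python's built-in sum(), implemented in C.
    return sum(lst)

data = list(range(10_000))

# Time 100 runs of each; the absolute numbers depend on the machine and
# Python version, which is why wall-clock timing is not a reliable comparison.
loop_time = timeit.timeit(lambda: sum_with_loop(data), number=100)
builtin_time = timeit.timeit(lambda: sum_builtin(data), number=100)
print(f"loop: {loop_time:.4f}s  built-in: {builtin_time:.4f}s")
```

Both functions compute the same result, yet their measured times differ; this is exactly the "different settings" problem that motivates the abstract analysis below.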
• The time complexity of an algorithm is also called its cost function, denoted f(n), where n is the size of the input.
• Cost: the amount of time the algorithm takes to complete.
• Time complexity is non-decreasing.
• The amount of time needed by an algorithm cannot decrease as the size of the input increases.
• e.g. finding a number in a larger list cannot take less time than finding a number in a smaller list.
• For large values of n, the value of the cost function mainly depends on the largest term in the function.
• In other words, when analyzing algorithms, we only care about the term that grows the fastest.
• In the example above, 3n² is the largest term.
CMPT 120, Spring 2023, Mohammad Tayebi 7
Big-O Notation
• Order of magnitude or Big-O notation is a mathematical notation for describing
the asymptotic growth of a time complexity function.
• The letter O stands for “Order”.
• With Big-O notation, instead of using the exact cost functions we use functions that
approximate the actual cost.
• Example: if f(n) = n³ + 2n + 6, then f(n) is of order n³, written O(n³), since n³ dominates the cost function for large values of n.
• The big-O notation expresses the relative performance of an algorithm, not its
absolute performance (being an approximation).
• Algorithms with complexity O(n) may have different absolute execution times.
• e.g. f(n) = n³ + 2n + 6, f(n) = n³, and f(n) = 2n³ + 5n² + 7n + 2 are all of order O(n³).
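A quick numerical check (my own illustration, using the f(n) = n³ + 2n + 6 example above) shows how the n³ term dominates for large n:

```python
def f(n):
    # Cost function from the example: f(n) = n^3 + 2n + 6.
    return n**3 + 2 * n + 6

# The ratio f(n) / n^3 approaches 1 as n grows, so the lower-order
# terms 2n + 6 become negligible and f(n) is O(n^3).
for n in [10, 100, 1000]:
    print(n, f(n) / n**3)
```

Already at n = 1000, the lower-order terms contribute less than 0.001% of the total cost.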
• A linear search traverses the list until the desired element is found.
• Algorithm: Check the items of the list in order, until the key is found, or
the end of the list is reached.
The complexity of a linear search is linear, O(n), where n is the size of the given list.
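The algorithm above can be sketched in Python as follows (the function name and the return convention are my own choices, not the slides'):

```python
def linear_search(lst, key):
    # Check the items of the list in order until the key is found
    # or the end of the list is reached.
    for i in range(len(lst)):
        if lst[i] == key:
            return i          # index of the first match
    return -1                 # key is not in the list

# Worst case: the key is absent, so all n items are examined -> O(n).
print(linear_search([3, 5, 12, 14, 41, 56, 71], 41))  # → 4
print(linear_search([3, 5, 12, 14, 41, 56, 71], 9))   # → -1
```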
Searching for 41 in the sorted list [3, 5, 12, 14, 41, 56, 71]:
• The middle element is 14; since 41 > 14, keep only the right half: [41, 56, 71].
• The middle element is 56; since 41 < 56, keep only the left half: [41].
• The remaining element is 41, so the key has been found.
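The trace above illustrates binary search on a sorted list; a minimal implementation consistent with that trace (my sketch, not code from the slides) is:

```python
def binary_search(lst, key):
    # lst must be sorted. Repeatedly compare key with the middle
    # element and discard the half that cannot contain it.
    low, high = 0, len(lst) - 1
    while low <= high:
        middle = (low + high) // 2
        if lst[middle] == key:
            return middle
        elif key > lst[middle]:
            low = middle + 1      # search the right half
        else:
            high = middle - 1     # search the left half
    return -1

# Each comparison halves the remaining range, so the complexity is O(log n).
print(binary_search([3, 5, 12, 14, 41, 56, 71], 41))  # → 4
```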
• Bubble sort scans the list of numbers and swaps any adjacent numbers that are not in sorted order.
• To sort the list, this scan-and-swap pass must be repeated n times.
1. def bubble_sort(lst):
2.     n = len(lst)
3.     for i in range(n):                     # 3. scan all elements of the list
4.         for j in range(n - i - 1):         # compare each pair of neighbours
5.             if lst[j] > lst[j + 1]:
6.                 lst[j], lst[j + 1] = lst[j + 1], lst[j]
• The insertion sort starts with the second element of the list:
• Compare it to the first element of the list. If it is smaller, swap these two elements.
• Now, the first two elements in the list are in sorted order, and the rest of the list is
still unsorted.
• Then, it moves on to the third element and starts sliding any elements in
the sorted part of the list to the right until it finds the right place to insert
the current element.
• It repeats these steps until the list is completely sorted.
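The steps above can be sketched as a Python function (an illustrative implementation; the variable names are mine):

```python
def insertion_sort(lst):
    # Start with the second element; everything before index i is sorted.
    for i in range(1, len(lst)):
        current = lst[i]
        j = i - 1
        # Slide larger elements in the sorted part to the right
        # until the right place for current is found.
        while j >= 0 and lst[j] > current:
            lst[j + 1] = lst[j]
            j = j - 1
        lst[j + 1] = current

nums = [5, -3, 60, 1, 18]
insertion_sort(nums)
print(nums)  # → [-3, 1, 5, 18, 60]
```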
• Selection sort repeatedly finds the smallest element in the unsorted part of the list and builds up a sorted list.
• The selection sort algorithm maintains two sublists of a given list: a sublist of already sorted elements and a sublist of unsorted elements.
1. def selection_sort(lst):
2.     n = len(lst)
3.     for i in range(n):
4.         min_index = i
5.         for j in range(i + 1, n):          # 5-7. find the element with the minimum
6.             if lst[j] < lst[min_index]:    #      value in the unsorted part
7.                 min_index = j
8.         lst[i], lst[min_index] = lst[min_index], lst[i]   # move it to the sorted part
Image source: https://www.geeksforgeeks.org/selection-sort-vs-bubble-sort/
• Merge sort works by breaking the original list down into smaller lists, sorting them, and merging the sorted lists until the whole list is sorted.
• Merge sort can be implemented in two ways:
• Iterative (bottom-up)
• Recursive (top-down)
• We learn about this approach in the recursion lecture.
Bottom-up merge sort passes on [5, -3, 60, 1, 18, 30, -10, 6]:
• After merging sublists of size 1: [-3, 5, 1, 60, 18, 30, -10, 6]
• After merging sublists of size 2: [-3, 1, 5, 60, -10, 6, 18, 30]
• After merging sublists of size 4: [-10, -3, 1, 5, 6, 18, 30, 60]
Merge Sort – Implementation 1

6.     for i in range(0, size, 2 * k):        # 6. for k = 1, i = [0, 2, 4, 6, 8, …]
7.         start = i                          #    for k = 2, i = [0, 4, 8, 12, …]
8.         middle = i + k - 1                 #    for k = 4, i = [0, 8, 16, …]
9.         end = min(i + 2 * k - 1, size - 1) # size - 1: end is an inclusive index
10.        merge(lst, temp, start, middle, end)
11.    k = 2 * k
1. # iterative merge sort …
2. lst = [5, 7, 19, 13, -4, 2, 10, 1]
3. print("Original list: ", lst)
4. merge_sort(lst)
5. print("Sorted list: ", lst)
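For reference, here is a complete, runnable bottom-up merge sort consistent with the fragments on these slides; the pieces not shown on the slides (the function header, the initialization k = 1 with its outer while loop, the leftover-copying loops in merge, and the copy-back step) are my assumptions:

```python
def merge(lst, temp, start, middle, end):
    # Merge the sorted halves lst[start..middle] and lst[middle+1..end]
    # into temp, then copy the merged run back into lst.
    m = start
    i = start
    j = middle + 1
    while i <= middle and j <= end:
        if lst[i] < lst[j]:
            temp[m] = lst[i]
            i = i + 1
        else:
            temp[m] = lst[j]
            j = j + 1
        m = m + 1
    while i <= middle:            # copy any leftovers from the left half
        temp[m] = lst[i]
        i, m = i + 1, m + 1
    while j <= end:               # copy any leftovers from the right half
        temp[m] = lst[j]
        j, m = j + 1, m + 1
    lst[start:end + 1] = temp[start:end + 1]

def merge_sort(lst):
    size = len(lst)
    temp = lst[:]                 # scratch space for merging
    k = 1                         # current size of the sorted sublists
    while k < size:
        for i in range(0, size, 2 * k):
            start = i
            middle = min(i + k - 1, size - 1)
            end = min(i + 2 * k - 1, size - 1)
            merge(lst, temp, start, middle, end)
        k = 2 * k

lst = [5, 7, 19, 13, -4, 2, 10, 1]
merge_sort(lst)
print(lst)  # → [-4, 1, 2, 5, 7, 10, 13, 19]
```

Because each pass doubles the sublist size k, there are about log n passes, and each pass touches all n elements, giving O(n log n) overall.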
Merge Sort – Implementation 2

• The number of merges performed is log n.
• To complete each merge,

1. def merge(lst, temp, start, middle, end):
2.     m = start
3.     i = start
4.     j = middle + 1
5.     while i <= middle and j <= end:        # 5. compare elements in the left
6.         if lst[i] < lst[j]:                #    and right of the middle element
7.             temp[m] = lst[i]
8.             i = i + 1
9.         else:
10.            temp[m] = lst[j]