
# Big-O Cheat Sheet


## Know Thy Complexities!
Hi there! This webpage covers the space and time Big-O complexities of common algorithms used in Computer Science. When preparing for technical interviews in the past, I found myself spending hours crawling the internet putting together the best, average, and worst case complexities for search and sorting algorithms so that I wouldn't be stumped when asked about them. Over the last few years, I've interviewed at several Silicon Valley startups, and also some bigger companies, like Yahoo, eBay, LinkedIn, and Google, and each time that I prepared for an interview, I thought to myself "Why oh why hasn't someone created a nice Big-O cheat sheet?". So, to save all of you fine folks a ton of time, I went ahead and created one. Enjoy!
*(Color legend from the web version: cells are shaded **Good**, **Fair**, or **Poor**.)*

## Searching

| Algorithm | Data Structure | Time Complexity: Average | Time Complexity: Worst | Space Complexity: Worst |
|---|---|---|---|---|
| Depth First Search (DFS) | Graph of \|V\| vertices and \|E\| edges | - | O(\|E\| + \|V\|) | O(\|V\|) |
| Breadth First Search (BFS) | Graph of \|V\| vertices and \|E\| edges | - | O(\|E\| + \|V\|) | O(\|V\|) |
| Binary search | Sorted array of n elements | O(log(n)) | O(log(n)) | O(1) |
| Linear (Brute Force) | Array | O(n) | O(n) | O(1) |
| Shortest path by Dijkstra, using a Min-heap as priority queue | Graph with \|V\| vertices and \|E\| edges | O((\|V\| + \|E\|) log \|V\|) | O((\|V\| + \|E\|) log \|V\|) | O(\|V\|) |
| Shortest path by Dijkstra, using an unsorted array as priority queue | Graph with \|V\| vertices and \|E\| edges | O(\|V\|^2) | O(\|V\|^2) | O(\|V\|) |
| Shortest path by Bellman-Ford | Graph with \|V\| vertices and \|E\| edges | O(\|V\| \|E\|) | O(\|V\| \|E\|) | O(\|V\|) |
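The O(log(n)) bound for binary search in the table above comes from halving the search interval at each step. A minimal sketch (mine, not part of the original page):

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.

    Each iteration halves the remaining interval, so at most about
    log2(n) + 1 iterations run: O(log(n)) time, O(1) extra space.
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1   # target must be in the right half
        else:
            hi = mid - 1   # target must be in the left half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 7))   # 3
print(binary_search([2, 3, 5, 7, 11, 13], 4))   # -1
```

Note the precondition: the input must already be sorted, which is why the table lists the data structure as "Sorted array of n elements".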

## Sorting

| Algorithm | Data Structure | Time Complexity: Best | Time Complexity: Average | Time Complexity: Worst | Worst-Case Auxiliary Space |
|---|---|---|---|---|---|
| Quicksort | Array | O(n log(n)) | O(n log(n)) | O(n^2) | O(n) |
| Mergesort | Array | O(n log(n)) | O(n log(n)) | O(n log(n)) | O(n) |
| Heapsort | Array | O(n log(n)) | O(n log(n)) | O(n log(n)) | O(1) |
| Bubble Sort | Array | O(n) | O(n^2) | O(n^2) | O(1) |
| Insertion Sort | Array | O(n) | O(n^2) | O(n^2) | O(1) |
| Select Sort | Array | O(n^2) | O(n^2) | O(n^2) | O(1) |
| Bucket Sort | Array | O(n+k) | O(n+k) | O(n^2) | O(nk) |
| Radix Sort | Array | O(nk) | O(nk) | O(nk) | O(n+k) |

## Data Structures

| Data Structure | Indexing (Avg) | Search (Avg) | Insertion (Avg) | Deletion (Avg) | Indexing (Worst) | Search (Worst) | Insertion (Worst) | Deletion (Worst) | Space Complexity (Worst) |
|---|---|---|---|---|---|---|---|---|---|
| Basic Array | O(1) | O(n) | - | - | O(1) | O(n) | - | - | O(n) |
| Dynamic Array | O(1) | O(n) | O(n) | O(n) | O(1) | O(n) | O(n) | O(n) | O(n) |
| Singly-Linked List | O(n) | O(n) | O(1) | O(1) | O(n) | O(n) | O(1) | O(1) | O(n) |
| Doubly-Linked List | O(n) | O(n) | O(1) | O(1) | O(n) | O(n) | O(1) | O(1) | O(n) |
| Skip List | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(n) | O(n) | O(n) | O(n) | O(n log(n)) |
| Hash Table | - | O(1) | O(1) | O(1) | - | O(n) | O(n) | O(n) | O(n) |
| Binary Search Tree | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(n) | O(n) | O(n) | O(n) | O(n) |
| Cartesian Tree | - | O(log(n)) | O(log(n)) | O(log(n)) | - | O(n) | O(n) | O(n) | O(n) |
| B-Tree | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(n) |
| Red-Black Tree | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(n) |
| Splay Tree | - | O(log(n)) | O(log(n)) | O(log(n)) | - | O(log(n)) | O(log(n)) | O(log(n)) | O(n) |
| AVL Tree | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(n) |
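The best-vs-worst gap for insertion sort in the sorting table above (O(n) on already-sorted input, O(n^2) on reversed input) can be observed directly by counting comparisons. A sketch of mine, with an instrumented counter:

```python
def insertion_sort(items):
    """Sort a copy of items; return (sorted_list, comparison_count)."""
    a = list(items)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # one comparison per inner-loop test
            if a[j] <= key:
                break                 # key is already in place
            a[j + 1] = a[j]           # shift larger element right
            j -= 1
        a[j + 1] = key
    return a, comparisons

n = 50
_, best = insertion_sort(list(range(n)))           # sorted input: n-1 comparisons
_, worst = insertion_sort(list(range(n, 0, -1)))   # reversed input: n(n-1)/2
print(best, worst)  # 49 1225
```

On sorted input each element needs exactly one comparison (linear), while on reversed input every element is compared against all elements before it (quadratic), matching the table's best and worst columns.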

## Heaps

| Heaps | Heapify | Find Max | Extract Max | Increase Key | Insert | Delete | Merge |
|---|---|---|---|---|---|---|---|
| Linked List (sorted) | - | O(1) | O(1) | O(n) | O(n) | O(1) | O(m+n) |
| Linked List (unsorted) | - | O(n) | O(n) | O(1) | O(1) | O(1) | O(1) |
| Binary Heap | O(n) | O(1) | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(m+n) |
| Binomial Heap | - | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) |
| Fibonacci Heap | - | O(1) | O(log(n))* | O(1)* | O(1) | O(log(n))* | O(1) |

(* = amortized)

## Graphs

Node / Edge Management:

| | Storage | Add Vertex | Add Edge | Remove Vertex | Remove Edge | Query |
|---|---|---|---|---|---|---|
| Adjacency list | O(\|V\|+\|E\|) | O(1) | O(1) | O(\|V\| + \|E\|) | O(\|E\|) | O(\|V\|) |
| Incidence list | O(\|V\|+\|E\|) | O(1) | O(1) | O(\|E\|) | O(\|E\|) | O(\|E\|) |
| Adjacency matrix | O(\|V\|^2) | O(\|V\|^2) | O(1) | O(\|V\|^2) | O(1) | O(1) |
| Incidence matrix | O(\|V\| ⋅ \|E\|) | O(\|V\| ⋅ \|E\|) | O(\|V\| ⋅ \|E\|) | O(\|V\| ⋅ \|E\|) | O(\|V\| ⋅ \|E\|) | O(\|E\|) |

## Notation for asymptotic growth

| letter | bound | growth |
|---|---|---|
| Θ (theta) | upper and lower, tight[1] | equal[2] |
| O (big-oh) | upper, tightness unknown | less than or equal[3] |
| o (small-oh) | upper, not tight | less than |
| Ω (big omega) | lower, tightness unknown | greater than or equal |
| ω (small omega) | lower, not tight | greater than |

[1] Big O is the upper bound, while Omega is the lower bound. Theta requires both Big O and Omega, so that's why it's referred to as a tight bound (it must be both the upper and lower bound). For example, an algorithm taking Omega(n log n) takes at least n log n time but has no upper limit. An algorithm taking Theta(n log n) is far preferable, since it takes at least n log n (Omega n log n) and no more than n log n (Big O n log n).

[2] f(x) = Θ(g(n)) means f (the running time of the algorithm) grows exactly like g when n (input size) gets larger. In other words, the growth rate of f(x) is asymptotically proportional to g(n).

[3] Same thing. Here the growth rate is no faster than g(n). Big-oh is the most useful because it represents the worst-case behavior.
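For reference, the tight-bound idea in footnote [1] has a standard formal statement (added here; it is not part of the original page):

```latex
% f is Theta(g) exactly when it is both O(g) and Omega(g):
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 : \forall n \ge n_0,\ f(n) \le c \cdot g(n)
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : \forall n \ge n_0,\ f(n) \ge c \cdot g(n)
f(n) = \Theta(g(n)) \iff \exists\, c_1, c_2 > 0,\ n_0 : \forall n \ge n_0,\ c_1\, g(n) \le f(n) \le c_2\, g(n)
```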

In short, if algorithm is __ then its performance is __:

| algorithm | performance |
|---|---|
| o(n) | < n |
| O(n) | ≤ n |
| Θ(n) | = n |
| Ω(n) | ≥ n |
| ω(n) | > n |

## Big-O Complexity Chart

This interactive chart, created by our friends over at MeteorCharts, shows the number of operations (y axis) required to obtain a result as the number of elements (x axis) increases. O(1) is the best complexity, requiring only a constant number of operations for any number of elements, while O(n!) is the worst, requiring 720 operations for just 6 elements.
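To make the chart's numbers concrete, here is a quick sketch of mine tabulating common growth rates at n = 6 (the factorial row is where the "720 operations for just 6 elements" figure comes from):

```python
import math

def growth_at(n):
    """Return operation counts for common complexity classes at input size n."""
    return {
        "O(1)": 1,
        "O(log n)": math.log2(n),
        "O(n)": n,
        "O(n log n)": n * math.log2(n),
        "O(n^2)": n ** 2,
        "O(2^n)": 2 ** n,
        "O(n!)": math.factorial(n),
    }

for name, ops in growth_at(6).items():
    print(f"{name:>10}: {ops:g}")
```

Even at this tiny input size, the factorial class is already two orders of magnitude above the linear one; the gap widens rapidly as n grows.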

## Contributors

Edit these tables!

1. Eric Rowell
2. Quentin Pleple
3. Nick Dizazzo
4. Michael Abed
5. Adam Forsyth

## Comments (excerpt)

**Oleksandr:** Big O (Omicron) is the worst-case scenario: it is the upper bound for the algorithm. Omega is the lower bound, and Theta is the upper and lower bound together. Theta is the most beneficial piece of information to have about an algorithm, but unfortunately it is usually very hard to find; you can usually approximate an algorithm's average efficiency by testing it in the average and worst cases together. For a linear sort algorithm, for instance, the worst case is when the list is fed to it completely out of order (sorted, but backwards). Knowing only Omega is like being told "it will take more than five dollars to get to New York" — essentially useless — whereas Big O, "at most 135 dollars", actually gives you a constraint. On this reading, Big O cannot be used for the best-case scenario. This is in my humble opinion, so if I'm off base or incorrect, feel free to flame me.

**Luis:** @Oleksandr You are confused. Big O and related concepts are used to bound the *order* (linear, exponential, etc.) of a function that describes how an algorithm grows (in space, time, etc.) with problem size. Take linear search: as the array to be searched grows, the best case still has an upper time bound of O(1) (it takes constant time to find an element at index 0, or another fixed position), while the worst case (the object is in the last index where we look) has an upper time bound of O(n) (a number of steps of order equal to the problem size, n). With this in mind, you can understand how Big O can be used for both the best and the worst case. Your example about the dollars states specific amounts (e.g. "at most 135 dollars"); to be more appropriate, it should say something like "it takes at most $2 per mile" (linear).

**Oleksandr:** You make a very poor assumption that because a specific value is given, it must be a linear function. $135 was given as an upper bound — it could be any polynomial function of my choice that produces 135. In fact, there are infinitely many Big O's for any elementary function. And in what way is O(n) better than O(log(n)) — 1024 vs. 10 increments that a sort algorithm has to perform, for instance? All in all this is good information, but in its current state it needs to be taken with a grain of salt and fact-checked against a good algorithms book. There is another problem: the color scheme is sometimes wrong.

**Yavuz Yetim:** @Oleksandr @Luis IMHO, the main confusion is between the terms "case" and "bound". Best, average, and worst *case* and upper and lower *bound* are orthogonal and can be combined freely — nine different, correct combinations in total, each useful for a different use case: you can have an upper bound for the best case, a lower bound for the average case, and so on. The claim "the table is wrong in using Big-O notation for all columns" is false, because Big-O notation is only a representation of a function and does not by itself have anything to do with the worst case. Let's say the best-case run time of an algorithm for a given input of size n is exactly (3*n + 1). One correct representation of this function is O(n); therefore, writing O(n) for a best-case entry is correct. I agree with Luis that the table is correct and not useless, and with Oleksandr that it is not complete — but not because of any mismatch between best/average case and Big O.

**Antoine Grondin:** I think DFS and BFS, under Searching, would be more appropriately listed as Graph instead of Tree. Thanks!

**ericdrowell (Mod):** I'll try to clarify that.
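The linear-search point above — that the best case and the worst case each have their own upper bound — is easy to demonstrate empirically. A small sketch of mine, with an instrumented step counter:

```python
def linear_search(items, target):
    """Scan left to right; return (index, steps_taken), or (-1, steps_taken)."""
    steps = 0
    for i, value in enumerate(items):
        steps += 1
        if value == target:
            return i, steps
    return -1, steps

data = list(range(100))

# Best case: target at index 0 -- one step regardless of list size, so O(1).
_, best_steps = linear_search(data, 0)

# Worst case: target at the last index -- steps equal the problem size n, so O(n).
_, worst_steps = linear_search(data, 99)

print(best_steps, worst_steps)  # 1 100
```

Both counts are upper bounds on their respective cases: the best case is bounded by a constant, the worst case by a linear function of n, exactly as the table's columns state.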