
SOFTWARE ENGINEERING

CAREER PROGRAM
Algorithms and Data Structures

Complexity of algorithms: time and space complexity,
asymptotic notations
Content:

1. Motivation
2. Time and Space Complexity
3. Notations and definitions
4. Methods for computing complexities
5. P vs NP briefly
6. Amortized complexities
7. Exercises
Motivation
What is the purpose?

1. We want an objective way of measuring the performance of an algorithm, independent of the machine that it is run on

2. We measure the performance by looking at the time complexity and memory complexity

3. It is very useful for predicting the behaviour and performance of an algorithm before implementing it. It is also useful
for finding the areas of an algorithm that can be improved and that are worth improving (bottlenecks)
Time and Space Complexity
What exactly are we measuring?

1. In order to objectively measure time, we count the number of elementary operations (arithmetic operations,
comparisons, writes to memory, reads from memory) that our algorithm performs with respect to the input size (see the sketch after this list)

2. Measuring memory is simpler: we look at the maximum amount of memory (the amount of RAM our program uses, not
actual storage) that our program uses, in bytes, with respect to the input size

3. We are talking about WORST-CASE analysis. That is: we are thinking about the absolute worst possible scenario. This type
of analysis is what we currently know the most about, and it is also very useful, as oftentimes the worst case actually does
occur
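
For instance, a minimal sketch of counting operations for summing an array (the function name sumArray is purely illustrative):

#include <vector>

// Each iteration does one comparison (i < v.size()), one read (v[i]),
// one addition, one write to sum and one increment (++i),
// so roughly 5n + 2 elementary operations in total: O(n) time.
// Extra memory: only sum and i, so O(1) additional space.
long long sumArray(const std::vector<int>& v) {
    long long sum = 0;
    for(std::size_t i = 0; i < v.size(); ++i)
        sum += v[i];
    return sum;
}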
Notations and Definitions
Notations:

1. Ω - lower bound
2. Θ - precise bound (lower and upper)
3. O - upper bound
4. ω - loose lower bound
5. o - loose upper bound
Notations and Definitions
Definitions using limits:

Let f(n) be the number of operations the program does as a function of the input size, n. Then:
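
Let g(n) be the reference function we compare against and suppose, for simplicity, that the limit L = lim(n→∞) f(n)/g(n) exists (the general definitions do not require this):

1. f(n) = o(g(n)) if L = 0
2. f(n) = O(g(n)) if L < ∞
3. f(n) = Θ(g(n)) if 0 < L < ∞
4. f(n) = Ω(g(n)) if L > 0
5. f(n) = ω(g(n)) if L = ∞

For example, 3n + 5 = Θ(n) because the ratio tends to 3, while 3n + 5 = o(n^2) because the ratio tends to 0.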
Notations and Definitions
Comments:
1. We compare the growth rates of the two functions as the input size goes to infinity. By doing so, we can distinguish
between lower/upper/precise bounds.

2. When computing complexities, constants do not matter. As n goes to infinity, a constant factor will not change whether
the limit is 0, a constant or infinity, so we can ignore them. Despite this, constant-factor optimizations are not to be
ignored in practice: sure, n and n/64 are about as fast as n goes to infinity, but if n is about 1000, we will see a significant
difference. Still, we will ignore constant factors when computing complexities.

3. Any logarithm grows slower than any polynomial, which in turn grows slower than any exponential.

4. When computing complexities, we only keep the most dominant term, because as the input size goes to infinity, the other
terms become negligible (see the example after this list).

5. Complexities can be functions of more than one variable; for instance, we can have an O(n + m*k) complexity.

6. We aim for polynomial complexities with an exponent that is as small as possible (Moore’s law)
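
For example, combining comments 2 and 4: if a program performs f(n) = 3n^2 + 10n·log(n) + 500 operations, we keep only the dominant term n^2 and drop its constant factor, so f(n) = Θ(n^2).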
Computing the complexity of a program
Intuitive rules:

As a very loose approach, which gives a correct upper bound but probably not the tightest one, we can think of the
following rules:
○ Nested loops have their complexities multiplied
○ Loops on the same level have their complexities summed
○ What about recursive functions?

for(int i = 1; i <= n; ++i) {     // O(n)
    for(int j = 1; j <= n; ++j) { // O(n)
        // do some O(1) operations
    }
}
O(n^2)

for(int i = 1; i <= n; ++i) { // O(n)
    // do some O(1) operations
}
for(int j = 1; j <= n; ++j) { // O(n)
    // do some O(1) operations
}
O(n + n) = O(2n) = O(n)

int fib(int n) {
    if(n <= 1) return 1;
    return fib(n-1) + fib(n-2);
}
O(????)


Computing the complexity of a program
Recursive functions are tricky:

1. You previously saw an example of a recursive function that is quite difficult to analyze. It is indeed more difficult to
compute the complexity of recursive functions, unless there is some nice property that we can make use of.
2. There is a general “recipe”, which in some cases tells us the exact complexity of the algorithm, but sometimes fails and
gives us no further information: the Master Theorem (stated after the code below).
3. Memory is also a bit harder to compute. We must keep in mind that function calls allocate memory (stack frames) and
that we only care about the maximum memory used at a given time. Which of the following approaches is better
and why? Is there actually any difference?

void printRecursive(int left, int right) {
    if(left == right) {
        cout << left << ' ';
        return;
    }
    int mid = (left + right)/2;
    printRecursive(left, mid);
    printRecursive(mid+1, right);
}

void printRecursive(int n) {
    if(n == 0) return;
    printRecursive(n-1);
    cout << n << ' ';
}
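
For reference, the “recipe” from point 2 is the Master Theorem. It applies to recurrences of the form T(n) = a·T(n/b) + f(n), with a ≥ 1 and b > 1:
○ if f(n) = O(n^(log_b(a) - ε)) for some ε > 0, then T(n) = Θ(n^log_b(a))
○ if f(n) = Θ(n^log_b(a)), then T(n) = Θ(n^log_b(a) · log n)
○ if f(n) = Ω(n^(log_b(a) + ε)) for some ε > 0, and f additionally satisfies a regularity condition, then T(n) = Θ(f(n))
For example, binary search gives T(n) = T(n/2) + 1, so a = 1, b = 2, log_b(a) = 0 and f(n) = Θ(n^0); the second case yields T(n) = Θ(log n).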
Briefly about P vs NP and why it is important
P vs NP:

1. NP stands for Non-Deterministic Polynomial. Without going into detail, these are very hard problems for
which no polynomial solution is known. Chances are that no polynomial algorithm solving such a problem exists
at all; this has yet to be proven.

2. Because they are so hard, it is also useful to know a couple of examples of well-known NP-complete problems. If you are
trying to solve a problem and some part of it is actually an instance of a well-known NP-complete problem, then
you are very unlikely to find a polynomial solution, and it is better not to waste time trying to do so.

3. Don’t be discouraged, though: maybe you will be the one to settle P vs NP once and for all.
Amortized complexities
What are amortized complexities?

1. An amortized complexity is one for which we can find a tighter bound than we would get by just applying the intuitive
rules we talked about and taking the worst case of each individual part of the process.

2. Sometimes, some parts of the algorithm impact other parts in such a way that the worst case is different from what it may
seem.

3. There are rigorous ways of computing such complexities, but we will focus on some classical examples and get a feel for
the intuition behind this.
Amortized complexities
A simple example

int n, m;
cin >> n >> m;
vector<vector<int>> friends(n + 1);
for(int i = 1; i <= m; ++i) {
    int a, b;
    cin >> a >> b;
    friends[a].push_back(b);
    friends[b].push_back(a);
}
for(int i = 1; i <= n; ++i) {
    cout << "Person " << i << " is friends with: ";
    for(int f : friends[i]) cout << f << ' ';
    cout << '\n';
}
Amortized complexities
What we might be tempted to do
int n, m;
cin >> n >> m;
vector<vector<int>> friends(n + 1);
for(int i = 1; i <= m; ++i) {
    int a, b;
    cin >> a >> b;
    friends[a].push_back(b);
    friends[b].push_back(a);
}
for(int i = 1; i <= n; ++i) { // O(n)
    cout << "Person " << i << " is friends with: ";
    for(int f : friends[i]) cout << f << ' '; // worst case: i has m friends, so O(m)
    cout << '\n';
}
-> O(n*m)
Amortized complexities
But we can do better
int n, m;
cin >> n >> m;
vector<vector<int>> friends(n + 1);
for(int i = 1; i <= m; ++i) {
    int a, b;
    cin >> a >> b;
    friends[a].push_back(b);
    friends[b].push_back(a);
}
for(int i = 1; i <= n; ++i) { // O(n)
    cout << "Person " << i << " is friends with: ";
    for(int f : friends[i]) cout << f << ' '; // in total there are 2*m entries among all the lists,
                                              // so across the n outer steps this loop does
                                              // 2*m steps in total
    cout << '\n';
}
-> O(n+m)
Amortized complexities
Other examples (whiteboard)

● Sieve
● Array that doubles in size each time it is full (see the sketch after this list)
● Queue implemented with two stacks
● DFS
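
For instance, a minimal sketch of the doubling array (real containers such as vector handle this internally):

// A dynamic array that doubles its capacity when full.
// A single push can cost O(size) when it triggers a reallocation,
// but a copy of n elements happens only after about n cheap pushes,
// so n pushes cost n + n/2 + n/4 + ... < 2n copies plus n writes:
// O(n) overall, i.e. amortized O(1) per push.
struct DynArray {
    int* data = nullptr;
    int size = 0, capacity = 0;

    void push(int x) {
        if(size == capacity) {                           // array is full
            capacity = capacity == 0 ? 1 : 2 * capacity; // double it
            int* bigger = new int[capacity];
            for(int i = 0; i < size; ++i)                // O(size) copy
                bigger[i] = data[i];
            delete[] data;
            data = bigger;
        }
        data[size++] = x;                                // the usual O(1) case
    }

    ~DynArray() { delete[] data; }
};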
Exercises
Check the following
Exercises
Find the following complexities

● T(n) = 2T(n-1) + 1 - Hanoi recurrence


● T(n) = 2*T(n/2) + n - Merge sort
Exercises
Find the complexities of the following code snippets
for(int i = 1; i <= n; ++i)
    // O(1) operation
for(int j = 1; j <= n; ++j)
    // O(1) operation

for(int i = 1; i <= n; ++i)
    // O(1) operation
for(int j = 1; j <= m; ++j)
    // O(1) operation
Exercises
Find the complexities of the following code snippets
for(int i = 1; i <= n; i += 2)
    for(int j = i; j <= n; ++j)
        // O(1) operation

for(int i = 1; i <= n; ++i) {
    for(int j = 1; j <= n; j += i) {
        // O(1)
    }
    for(int j = 1; j*j <= n; j += 100) {
        // O(1) operation
    }
}
