
Big O – Order of Magnitude

How do programmers measure the work performed by two algorithms?

Solution 1:
• Compare the execution times for running the two programs.
• The one with the shorter execution time is clearly the better algorithm.

Using this technique, we can determine only that program A is more efficient than program B on a
particular computer. Execution times are specific to a particular machine.

Of course, we could test the algorithms on all possible computers, but we want a more general
measure.
Big O – Order of Magnitude
How do programmers measure the work performed by two algorithms?

Solution 2
• Count the number of instructions or statements executed.
• This measure, however, varies with
– the programming language used as well as with
– the individual programmer’s style.

To standardize this measure somewhat, we could count the number of passes through a critical loop in the algorithm.
If each iteration involves a constant amount of work, this measure gives us a meaningful yardstick of efficiency.
Big O – Order of Magnitude
How do programmers measure the work performed by two algorithms?

Solution 3
• Isolate a particular operation fundamental to the algorithm and
• Count the number of times that this operation is performed.
Suppose, for example, that
• we are summing the elements in an integer list.
• to measure the amount of work required, we could count the integer addition operations.
Note:
We do not actually have to count the number of addition operations explicitly; the count is some function of the number of elements (i.e., n) in the list. Therefore, we can express the number of addition operations in terms of n.
Now we can compare the algorithms for the general case, not just for a specific list size on a specific computer.
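For concreteness, here is a minimal C sketch (the variable names and sample data are illustrative) that sums a list while counting the addition operations; the count comes out to exactly n:

#include <stdio.h>

int main(void) {
    int list[] = {4, 8, 15, 16, 23, 42};   /* sample data */
    int n = 6;
    int sum = 0;
    int additions = 0;                     /* the fundamental operation */

    for (int i = 0; i < n; i++) {
        sum = sum + list[i];               /* one integer addition */
        additions++;
    }
    printf("sum = %d, additions = %d (equal to n)\n", sum, additions);
    return 0;
}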
Big O – Order of Magnitude
Matrix Multiplication Case:
• On many computers, floating-point multiplication is much more expensive than addition in terms of computer time.
• We might as well count only the multiplication operations and ignore the additions in a matrix multiplication algorithm.

In analyzing algorithms,
we often can find one operation that dominates the algorithm, and count only that dominant operation (the other operations fade into the background).

If we want to buy elephants and goldfish, and we are considering two pet suppliers, we need to compare only the prices of elephants; the cost of the goldfish is insignificant in comparison.
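Returning to matrix multiplication, here is a minimal C sketch of the standard triple-loop algorithm (the fixed size N and names are illustrative); counting only the multiplications, the dominant operation, gives n * n * n = n³ for two n x n matrices:

#define N 3

/* Multiply two N x N matrices; count only the multiplications. */
void multiply(double a[N][N], double b[N][N], double c[N][N],
              long *mults) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            c[i][j] = 0.0;
            for (int k = 0; k < N; k++) {
                c[i][j] += a[i][k] * b[k][j];  /* one multiplication */
                (*mults)++;                    /* additions are ignored */
            }
        }
}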
Big O – Order of Magnitude – Another Justification
The order of magnitude of a function is identified with the term in the function that increases fastest relative to the size of the problem. For instance, if
f(n) = n² + 100n + log₁₀ n + 1000
then for sufficiently large values of n, some multiple of n² dominates the function, so
f(n) is of order n²
or f(n) = O(n²)
Why can we just drop the low-order terms?

n      f(n)              n²                100n             log₁₀ n       1000
1      1,101             1 (0.1%)          100 (9.1%)       0 (0.0%)      1000 (90.8%)
10     2,101             100 (4.8%)        1,000 (47.6%)    1 (0.05%)     1000 (47.6%)
100    21,002            10,000 (47.6%)    10,000 (47.6%)   2 (0.01%)     1000 (4.8%)
10⁵    10,010,001,005    10¹⁰ (99.9%)      10⁷ (0.1%)       5 (0.0%)      1000 (0.0%)

Note:
This doesn’t mean that the other terms do not contribute to the computing time, but rather that they are not significant in our approximation when n is “large.”
Big O – Order of Magnitude – Another Justification
Suppose that we want to write all the elements in a list into a file. How much
work is involved? The answer depends on how many elements are in the list. Our algorithm is
Open the file
while more elements in list do
write the next element
If n is the number of elements in the list, the time required to do this task is
T(n) = (n * time-to-write-one-element) + time-to-open-the-file
Since the time to open the file is a constant, for large values of n we can ignore it in determining the Big-O approximation.
So T(n) is O(n): T(n) is some multiple of n.

Note:
If the list has only a few elements, the time needed to open the file may seem significant.
For large values of n, writing the elements is an elephant in comparison with opening the file.
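A minimal C sketch of the algorithm above (the file name handling and output format are illustrative); the single fopen is the goldfish, the n writes are the elephant:

#include <stdio.h>

/* Write all n elements of the list to a file: O(n). */
void write_list(const int *list, int n, const char *filename) {
    FILE *f = fopen(filename, "w");    /* constant cost: the "goldfish" */
    if (f == NULL)
        return;                        /* could not open the file */
    for (int i = 0; i < n; i++)        /* n writes: the "elephant" */
        fprintf(f, "%d\n", list[i]);
    fclose(f);
}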
Big O – Order of Magnitude
Examples
Consider the following two algorithms to initialize to zero every element in an n-element array:

Algorithm A:
items[0] = 0;
items[1] = 0;
items[2] = 0;
items[3] = 0;
...
items[n - 1] = 0;

Algorithm B:
for (index = 0; index < n; index++)
    items[index] = 0;

Algorithm A is O(n)
Algorithm B is O(n)
Big O – Order of Magnitude
Now let’s look at two different algorithms that calculate the sum of the integers from 1 to n.
Algorithm Sum1:
sum = 0;
for (count = 1; count <= n; count++)
    sum = sum + count;

Algorithm Sum2:
sum = ((n + 1) * n) / 2;
Algorithm Sum1 is a simple for loop that adds successive integers to keep a running total.
Algorithm Sum2 calculates the sum by using a formula.

Consider the calculation when n = 9.

Sum1 builds a running total:
sum = 0+1 = 1
sum = 1+2 = 3
sum = 3+3 = 6
sum = 6+4 = 10
sum = 10+5 = 15
sum = 15+6 = 21
sum = 21+7 = 28
sum = 28+8 = 36
sum = 36+9 = 45

Sum2 applies the formula sum = (n * (n+1)) / 2:
sum = 9 * (9+1) / 2
sum = 45
Big O – Order of Magnitude
Algorithm Sum1:
sum = 0;
for (count = 1; count <= n; count++)
    sum = sum + count;

Algorithm Sum2:
sum = ((n + 1) * n) / 2;
Let’s compare them using Big-O notation.
Sum1
• The work done by Sum1 is a function of the magnitude of n;
• As n gets larger, the amount of work grows proportionally.
• If n is 50, Sum1 works 10 times as hard as when n is 5.
Sum1 = O(n)
Sum2
Consider the cases when n = 5 and n = 50
• whatever value we assign to n, the algorithm does the same amount of work to solve the problem.
Sum2 = O(1)
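Written as C functions (a sketch; long is assumed wide enough for the result), the contrast is plain: sum1 performs n additions, while sum2 performs the same three arithmetic operations for any n.

/* O(n): the number of additions grows with n. */
long sum1(long n) {
    long sum = 0;
    for (long count = 1; count <= n; count++)
        sum = sum + count;
    return sum;
}

/* O(1): the same amount of work for any n. */
long sum2(long n) {
    return ((n + 1) * n) / 2;
}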
Big O – Order of Magnitude
Is Sum2 always faster?
Is it always a better choice than Sum1?
That depends.

• Sum2 might seem to do more “work,” because the formula involves multiplication and division
• Sum1 calculates a simple running total.

For very small values of n, Sum2 actually might do more work than Sum1.
For very large values of n, Sum2 does the same amount of work, whereas Sum1 does a proportionally larger amount of work.

So the choice between the algorithms depends in part on how they are used, for small or large values
of n.
Big O – Order of Magnitude
Sum2 is more complicated than Sum1
• Sum2 is not as obvious as Sum1,
• It is more difficult for the programmer to understand.
Sometimes a more efficient solution to a problem is more complicated;
we may save computer time at the expense of the programmer’s time.

What’s the verdict? As usual in the design of computer programs, there are tradeoffs.

When we compare algorithms using Big-O notation, we are concerned with what happens when n is
“large.”
The Big-O analysis doesn’t give us precise information. Instead, it gives us an approximation.
100n and 90n are both O(n). Which one is “better”? We can’t say; in Big-O terms, they are roughly equivalent for large values of n.
Common Orders of Magnitude
O(1) – Constant (bounded time)
The amount of work is bounded by a constant and does not depend on the size of the problem.
Example: assigning a value to the ith element in an array of n elements, such as a[3] = 5, is O(1).
(Although bounded time is often called constant, the amount of work is not necessarily constant; rather, it is bounded by a constant.)

O(log₂ n) - logarithmic time
Algorithms that successively cut the amount of data to be processed in half at each step typically fall into this category. The amount of work depends on the log of the size of the problem.
- Finding a value in a sorted list using the binary search algorithm is O(log₂ n), as in the sketch below.
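A minimal C sketch of binary search over a sorted int array (names are illustrative); each comparison halves the remaining range, so at most about log₂ n + 1 probes are needed:

/* Return the index of target in sorted array a[0..n-1], or -1. */
int binary_search(const int *a, int n, int target) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  /* avoids int overflow */
        if (a[mid] == target)
            return mid;                    /* found */
        else if (a[mid] < target)
            low = mid + 1;                 /* discard the lower half */
        else
            high = mid - 1;                /* discard the upper half */
    }
    return -1;                             /* not found */
}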

O(n) - linear time
The amount of work is some constant times the size of the problem.
- Printing all the elements in a list of n elements is O(n).
- Searching for a particular value in a list of unordered elements is also O(n).
Common Orders of Magnitude
O(n log₂ n)
Algorithms of this type typically involve applying a logarithmic algorithm n times.
Quick Sort (on average), Heap Sort, and Merge Sort have O(n log₂ n) complexity.
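Merge sort shows the structure directly: the array is halved log₂ n times, and each level does O(n) merging work. A minimal C sketch (helper names are illustrative):

#include <stdlib.h>
#include <string.h>

/* Sort a[lo..hi) using tmp as scratch space. */
static void msort(int *a, int *tmp, int lo, int hi) {
    if (hi - lo < 2)
        return;                            /* 0 or 1 element: sorted */
    int mid = lo + (hi - lo) / 2;
    msort(a, tmp, lo, mid);                /* sort the left half  */
    msort(a, tmp, mid, hi);                /* sort the right half */
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi)              /* merge: O(n) per level */
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (size_t)(hi - lo) * sizeof(int));
}

void merge_sort(int *a, int n) {
    int *tmp = malloc((size_t)n * sizeof(int));
    if (tmp == NULL)
        return;                            /* allocation failed */
    msort(a, tmp, 0, n);
    free(tmp);
}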

O(n²) - quadratic time
Algorithms of this type typically involve applying a linear algorithm n times.
Bubble Sort, Selection Sort, and Insertion Sort are O(n²).

O(n³) - cubic time
An example of this class is a routine that increments every element in a three-dimensional table of integers, as in the sketch below.
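A minimal C sketch (the fixed size N is illustrative): incrementing every element of an N x N x N table performs N³ increments.

#define N 10

/* Touch every element of an N x N x N table: O(n^3). */
void increment_all(int table[N][N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            for (int k = 0; k < N; k++)
                table[i][j][k]++;          /* one increment per element */
}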

O(2ⁿ) - exponential time
These algorithms are extremely costly. Exponential running times increase dramatically in relation to the size of n; the computation time required for problems of this order can exceed the estimated life span of the universe.
Common Orders of Magnitude
[Table: Comparison of Rates of Growth; graph from Adam Drozdek’s book]


Which one is better: n (a linear algorithm) or n² (a quadratic algorithm)?
Suppose a computer takes 1000 times as long to process the basic operation once in Algorithm A as it takes to process the basic operation once in Algorithm B.

               Time complexity    Time to process basic operation
Algorithm A    n                  1000t
Algorithm B    n²                 t

Execution time to process an instance of size n:

Algorithm A: n x 1000t
Algorithm B: n² x t

Here Algorithm B (n²) is more efficient for small values of n. But we know that, for large enough n, the linear algorithm must win.
To determine when Algorithm A becomes more efficient, we solve the inequality (B’s time on the left, A’s on the right):

n² x t > n x 1000 x t

or n > 1000.

This shows that Algorithm A is more efficient when n > 1000.

If the application never has an input size larger than 1000, Algorithm B should be implemented; otherwise, implement Algorithm A.
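A quick C sketch of the arithmetic (t is taken as one time unit, so A costs 1000n and B costs n²) makes the crossover at n = 1000 visible:

#include <stdio.h>

int main(void) {
    long long sizes[] = {10, 100, 1000, 10000, 100000};
    for (int i = 0; i < 5; i++) {
        long long n = sizes[i];
        long long a = 1000 * n;            /* Algorithm A: n x 1000t */
        long long b = n * n;               /* Algorithm B: n^2 x t   */
        printf("n = %6lld   A = %12lld   B = %12lld   cheaper: %s\n",
               n, a, b, a < b ? "A" : (a > b ? "B" : "tie"));
    }
    return 0;
}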

Concluding Remarks:
An algorithm with time complexity n is more efficient than an algorithm with time complexity n² for sufficiently large values of n, regardless of how long it takes to process the basic operation in either algorithm.
Asymptotic Analysis

Asymptotic complexity studies the efficiency of an algorithm as the input size becomes large.
Big-Oh
• Definition:
f(n) is O(g(n)) if there exist positive numbers c and N such that
f(n) <= c * g(n) for all n >= N.
g(n) is called an upper bound on f(n); in other words,
f(n) grows at most as fast as g(n).
Example:
T(n) = n² + 3n + 4
n² + 3n + 4 <= 2n² for all n >= 10,
so we can say that T(n) is O(n²), or
T(n) is in the order of n².
T(n) is bounded above by a positive real multiple of n².
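A small C sketch that spot-checks the bound numerically (here c = 2 and N = 10 in the definition):

#include <stdio.h>

int main(void) {
    for (long long n = 10; n <= 1000000; n *= 10) {
        long long f = n * n + 3 * n + 4;   /* T(n)            */
        long long g = 2 * n * n;           /* c * g(n) = 2n^2 */
        printf("n = %8lld   T(n) = %16lld   2n^2 = %16lld   %s\n",
               n, f, g, f <= g ? "bound holds" : "violated");
    }
    return 0;
}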


Properties of Big-Oh
• If f(n) is O(g(n)) and g(n) is O(h(n)), then f(n) is O(h(n)).
• If f(n) is O(h(n)) and g(n) is O(h(n)), then f(n) + g(n) is O(h(n)).
• a * nᵏ is O(nᵏ), where a is a constant.
• nᵏ is O(nᵏ⁺ʲ) for any positive j.
• If f(n) = c * g(n), then f(n) is O(g(n)).
