Solution 1:
• Compare the execution times for running the two programs.
• The one with the shorter execution time is clearly the better algorithm.
Using this technique, we can determine only that program A is more efficient than program B on a
particular computer. Execution times are specific to a particular machine.
Of course, we could test the algorithms on all possible computers, but we want a more general
measure.
Big O – Order of Magnitude
How do programmers measure the work performed by two algorithms?
Solution 2
• Count the number of instructions or statements executed.
• This measure, however, varies with
– the programming language used, and
– the individual programmer’s style.
Solution 3
• Isolate a particular operation fundamental to the algorithm and
• Count the number of times that this operation is performed.
Suppose, for example, that
• we are summing the elements in an integer list.
• to measure the amount of work required, we could count the integer addition operations.
Note:
We do not actually have to count the number of addition operations; it is some function of the
number of elements (i.e. n) in the list. Therefore, we can express the number of addition
operations in terms of n.
Now we can compare the algorithms for the general case, not just for a specific list size on a
specific computer.
Matrix Multiplication Case:
• On many computers, floating-point multiplication is much more expensive than addition in terms
of computer time.
• We might as well count only the multiplication operations and ignore the additions in the matrix
multiplication algorithm.
In analyzing algorithms,
we often can find one operation that dominates the algorithm, and count only that dominant
operation (the other operations fade into the background).
Note:
Suppose an algorithm opens a file and then writes each of the n list elements to it.
If the list has only a few elements, the time needed to open the file may seem significant.
For large values of n, writing the elements is an elephant in comparison with opening the file.
Examples
Consider the following two algorithms to initialize to zero every element in an N-element array:
Algorithm A:
    items[0] = 0;
    items[1] = 0;
    items[2] = 0;
    items[3] = 0;
    . . .
    items[n - 1] = 0;

Algorithm B:
    for (index = 0; index < n; index++)
        items[index] = 0;
Algorithm A is O(n)
Algorithm B is O(n)
Now let’s look at two different algorithms that calculate the sum of the integers from 1 to n.
Algorithm Sum1:
    sum = 0;
    for (count = 1; count <= n; count++)
        sum = sum + count;

Algorithm Sum2:
    sum = ((n + 1) * n) / 2;
Algorithm Sum1 is a simple for loop that adds successive integers to keep a running total.
Algorithm Sum2 calculates the sum by using a formula.
• Sum2 might seem to do more “work,” because the formula involves multiplication and division,
while Sum1 calculates a simple running total.
• In fact, Sum2 performs a fixed number of operations regardless of n, while Sum1 executes its loop n times.
So the choice between the algorithms depends in part on how they are used, for small or large values
of n.
Sum2 is more complicated than Sum1:
• Sum2 is not as obvious as Sum1, and
• it is more difficult for the programmer to understand.
Sometimes a more efficient solution to a problem is more complicated;
we may save computer time at the expense of the programmer’s time.
When we compare algorithms using Big-O notation, we are concerned with what happens when n is
“large.”
The Big-O analysis doesn’t give us precise information. Instead, it gives us an approximation.
Functions such as 100*n and 90*n are all O(n).
Which one is “better”? We cannot say: in Big-O terms, they are all roughly equivalent for large values of n.
Common Orders of Magnitude
O(1) – Constant (bounded time)
The amount of work is bounded by a constant and does not depend on the size of the problem.
Example: a[3] = 5; assigning a value to the ith element in an array of n elements is O(1).
(Although bounded time is often called constant, the amount of work is not necessarily constant. Rather, it is bounded by a
constant.)
Suppose Algorithm A performs 1000 * n basic operations and Algorithm B performs n² basic operations,
each operation taking time t.
Here Algorithm B (n²) is efficient for small values of n. But we know that linear time
should be efficient for large n.
To determine when Algorithm A is efficient, we solve the inequality:
Slow           Fast
n² * t  >  n * 1000 * t
or n > 1000.
If n > 1000, implement Algorithm A; otherwise, implement Algorithm B.
Concluding Remarks:
An algorithm with time complexity O(n) is more efficient than an algorithm with time complexity
O(n²) for sufficiently large values of n, regardless of how long it takes to process the basic
operation in each of the two algorithms.
Asymptotic Analysis
Big-Oh
• Definition: