
Big Omega

We define big-oh notation by saying f(n) = O(g(n)) if there exists some
constant c such that for all large enough n, f(n) ≤ c g(n). If the same holds
for all c > 0, then f(n) = o(g(n)), the little-oh notation. Big-oh and little-oh
notation come in very handy in analyzing algorithms because we can
ignore implementation issues that could cost a constant factor.

To describe lower bounds we use the big-omega notation f(n) = Ω(g(n)),
usually defined by saying that for some constant c > 0 and all large enough n,
f(n) ≥ c g(n). This has a nice symmetry property: f(n) = O(g(n)) iff
g(n) = Ω(f(n)). Unfortunately it does not correspond to how we actually
prove lower bounds.

For example, consider the following algorithm to solve perfect matching: if
the number of vertices is odd, then output "No Perfect Matching";
otherwise try all possible matchings.

We would like to say the algorithm requires exponential time, but in fact
you cannot prove even an Ω(n²) lower bound using the usual definition of Ω,
since the algorithm runs in linear time for odd n. We should instead define
f(n) = Ω(g(n)) by saying that for some constant c > 0, f(n) ≥ c g(n) for infinitely
many n. This gives a nice correspondence between upper and lower
bounds: f(n) = Ω(g(n)) iff f(n) is not o(g(n)).

big-O notation

Definition: A theoretical measure of the execution of an algorithm, usually the time or
memory needed, given the problem size n, which is usually the number of items.
Informally, saying some equation f(n) = O(g(n)) means it is less than some constant
multiple of g(n). The notation is read, "f of n is big oh of g of n".
Formal Definition: f(n) = O(g(n)) means there are positive constants c and k, such that
0 ≤ f(n) ≤ cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and
must not depend on n.

Also known as O.
See also Ω(n), ω(n), Θ(n), little-o notation, asymptotic upper bound, asymptotically
tight bound, NP, complexity, model of computation.
Note: As an example, n² + 3n + 4 is O(n²), since n² + 3n + 4 < 2n² for all n > 10. Strictly
speaking, 3n + 4 is O(n²), too, but big-O notation is often misused to mean equal to
rather than less than. The notion of "equal to" is expressed by Θ(n).
The importance of this measure can be seen in trying to decide whether an algorithm is
adequate, but may just need a better implementation, or the algorithm will always be too
slow on a big enough input. For instance, quicksort, which is O(n log n) on average,
running on a small desktop computer can beat bubble sort, which is O(n²), running on a
supercomputer if there are a lot of numbers to sort. To sort 1,000,000 numbers, the
quicksort takes 20,000,000 steps on average, while the bubble sort takes
1,000,000,000,000 steps!
Any measure of execution must implicitly or explicitly refer to some computation model.
Usually this is some notion of the limiting factor. For one problem or machine, the
number of floating point multiplications may be the limiting factor, while for another, it
may be the number of messages passed across a network. Other measures which may be
important are compares, item moves, disk accesses, memory used, or elapsed ("wall
clock") time.
Introduction to Recursion

Recursion
What is it?

A recursive function is one that calls itself, directly or indirectly, to solve a
smaller version of its task, until a final call that requires no further self-call.

Divide and conquer approach

What do I need?

1. Decomposition into smaller problems of same type
2. Recursive calls must diminish problem size
3. Necessity of base case
4. Base case must be reached

Binary Search Algorithm

Looking up a word in the dictionary?

• Sequential search
• Binary search

Algorithm:
Search(dictionary)
{
    if (Dictionary has only 1 page)
        Sequentially search page for word
    else
    {
        Open the dictionary to the middle page
        Determine which half of the dictionary the word is in

        if (The word is in the first half)
            Search(first half of dictionary)   // ignore second half
        else
            Search(second half of dictionary)  // ignore first half
    }
}
Dictionary structure?
Compare to sequential search
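The pseudocode above can be sketched concretely as a recursive binary search over a sorted array of words (a hypothetical stand-in for the dictionary; the function name and parameters are illustrative, not from the notes):

```cpp
#include <string>
#include <vector>

// Recursive binary search mirroring the dictionary pseudocode:
// the "only 1 page" base case becomes a one-element range.
// Assumes `words` is sorted; returns true if `word` is in words[lo..hi].
bool search(const std::vector<std::string>& words,
            const std::string& word, int lo, int hi) {
    if (lo >= hi)                      // only one "page" left
        return words[lo] == word;      // search that page directly
    int mid = (lo + hi) / 2;           // open to the middle page
    if (word <= words[mid])
        return search(words, word, lo, mid);      // ignore second half
    else
        return search(words, word, mid + 1, hi);  // ignore first half
}
```

Each call discards half the remaining range, so the number of calls is about log₂ n, versus n comparisons for a sequential search.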

Factorial

Meet requirements?
Consider the code fragment:
int f(int a1);   // prototype so main can call f before its definition

int main()
{
    int i = 3;
    cout << f(i) << endl;   // prints 6
}

int f(int a1)
{
    if (a1 <= 1)            // base case
        return 1;
    else                    // recursive call on a smaller problem
        return a1 * f(a1 - 1);
}
Non-recursive version:
int fact(int n)
{
int i;
int prod = 1;

    for (i = 1; i <= n; i++)
        prod *= i;

return prod;
}

The Fibonacci Sequence

Definition: fib(1) = fib(2) = 1, and fib(n) = fib(n - 1) + fib(n - 2) for n > 2.

Consider:
int fib(int val)
{
if (val <= 2)
return 1;
else
return fib(val - 1) + fib(val - 2);
}
Call graph for fib(6): (diagram not reproduced here)
Non-recursive version:
int fib(int val)
{
    int current = 1;
    int old = 1;
    int older = 1;

    val -= 2;

    while (val > 0)
    {
        current = old + older;
        older = old;
        old = current;
        --val;
    }

    return current;
}

Greatest Common Divisor

Definition: gcd(a, b) is the largest integer dividing both a and b. Euclid's
algorithm uses the fact that gcd(a, b) = gcd(b, a mod b).

int gcd(int a, int b)
{
    int remainder = a % b;

    if (remainder == 0)
        return b;
    else
        return gcd(b, remainder);
}
