
23.1 Introduction

Suppose two algorithms perform the same task, such as search (linear search vs. binary search) or sort (selection sort vs. insertion sort). We want to know which algorithm is better.

A natural approach is to implement these algorithms in Java, run them, and compare their execution times. This approach is problematic for two reasons:

1. Many tasks run concurrently on a computer, so the execution time of a particular program depends on the system load.

2. Execution time depends on the specific input. Consider linear search and binary search: if the element to be found happens to be the first in the list, linear search will find it more quickly than binary search.

It is therefore difficult to compare algorithms by measuring their execution times.

A theoretical approach was developed to analyze algorithms independently of computers and specific input. This approach approximates the effect of a change in the size of the input: it lets us see how fast an algorithm's execution time increases as the input size increases. Thus, you compare two algorithms by examining their growth rates.

23.2 Big O Notation

The linear search algorithm compares the key with the elements in the array

sequentially until the key is found, or the array is exhausted.

If the key is not in the array, it requires n comparisons for an array of size n. If the key is in the array, it requires n/2 comparisons on average.

The algorithm's execution time is proportional to the size of the array. Doubling the size of the array doubles the number of comparisons, so the algorithm grows at a linear rate. The growth rate has an order of magnitude of n.

Computer scientists use Big O notation to represent "order of magnitude."

Using this notation, the complexity of the linear search algorithm is O(n),

pronounced as “order of n.”
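The linear search described above can be sketched in Java as follows. This is a minimal version for illustration; the class, method, and parameter names are our own, not the chapter's listing:

```java
public class LinearSearchDemo {
    /** Returns the index of key in list, or -1 if key is not found.
        Compares the key with each element sequentially, so it makes
        up to n comparisons for an array of size n: O(n). */
    public static int linearSearch(int[] list, int key) {
        for (int i = 0; i < list.length; i++) {
            if (list[i] == key) {
                return i;
            }
        }
        return -1; // key not in the array: all n comparisons were made
    }

    public static void main(String[] args) {
        int[] list = {4, 2, 7, 10, 0};
        System.out.println(linearSearch(list, 7));  // found at index 2
        System.out.println(linearSearch(list, 9));  // not found: -1
    }
}
```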

For the same input size, an algorithm's execution time may vary depending on the input. Input that results in the shortest execution time is called the best-case input; input that results in the longest execution time is the worst-case input. Best case and worst case are not representative, but worst-case analysis is very useful: an algorithm will never be slower than its worst case.

An average-case analysis attempts to determine the average amount of time among all possible inputs of the same size.

Average-case analysis is ideal but difficult to perform, because for many problems it is hard to determine the relative probabilities and distributions of the various input instances. Worst-case analysis is easier to perform, so analysis is generally conducted for the worst case.

The linear search algorithm requires n comparisons in the worst case, and n/2 comparisons in the average case if you are nearly always looking for something known to be in the list. Using Big O notation, both cases require O(n) time; the multiplicative constant 1/2 can be omitted.

Algorithm analysis is focused on growth rate, and multiplicative constants have no impact on growth rates. The growth rate for n/2 or 100n is the same as for n. Thus, O(n) = O(n/2) = O(100n).

Consider the algorithm for finding the maximum number in an array of n elements. If n is 2, it takes one comparison; if n is 3, it takes two comparisons. In general, it takes n – 1 comparisons to find the maximum number in a list of n elements.
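A minimal sketch of that algorithm (names are ours, not the book's listing); the loop body runs n – 1 times, one comparison per iteration:

```java
public class FindMaxDemo {
    /** Returns the largest element, using exactly n - 1 comparisons
        for an array of n elements. */
    public static int findMax(int[] list) {
        int max = list[0];
        for (int i = 1; i < list.length; i++) { // n - 1 iterations
            if (list[i] > max) {
                max = list[i];
            }
        }
        return max;
    }

    public static void main(String[] args) {
        System.out.println(findMax(new int[]{3, 9, 4, 1})); // 9
    }
}
```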

Algorithm analysis is for large input sizes. If the input size is small, there is no significance in estimating an algorithm's efficiency. As n grows larger, the n part in the expression n – 1 dominates the complexity. Big O notation allows us to ignore the nondominating part (the –1 in the expression n – 1) and highlight the important part (the n). So the complexity of this algorithm is O(n).

Big O notation estimates the execution time of an algorithm in relation to the input size. If the time is not related to the input size, the algorithm is said to take constant time, with the notation O(1). For example, a method that retrieves an element at a given index in an array takes constant time, because the time does not grow as the size of the array increases. The following mathematical summations are often useful in algorithm analysis:

1 + 2 + 3 + … + (n – 1) + n = n(n + 1)/2

a^0 + a^1 + a^2 + … + a^(n-1) + a^n = (a^(n+1) – 1)/(a – 1), for a ≠ 1

2^0 + 2^1 + 2^2 + … + 2^(n-1) + 2^n = (2^(n+1) – 1)/(2 – 1) = 2^(n+1) – 1

23.3 Examples: Determining Big O

This section gives examples of determining Big O for repetition, sequence, and selection statements.

Example 1: Consider the time complexity of the following loop:

for (i = 1; i <= n; i++) {
  k = k + 5;
}

It takes a constant time, c, to execute k = k + 5. Since the loop executes n times, the time complexity of the loop is T(n) = (a constant c) × n = O(n).

Example 2

What is the time complexity of the following loop?

for (i = 1; i <= n; i++) {
  for (j = 1; j <= n; j++) {
    k = k + i + j;
  }
}

It takes a constant time c to execute k = k + i + j. The outer loop executes n times, and for each iteration of the outer loop the inner loop executes n times. So the time complexity of the loop is

T(n) = (a constant c) × n × n = O(n^2)

Algorithms with O(n^2) time complexity are called quadratic algorithms. Quadratic algorithms grow quickly as the problem size increases: doubling the input size quadruples the algorithm's time. Algorithms with nested loops are often quadratic.

Example 3

Consider the following loop:

for (i = 1; i <= n; i++) {
  for (j = 1; j <= i; j++) {
    k = k + i + j;
  }
}

The outer loop executes n times. For i = 1, 2, …, n, the inner loop executes one time, two times, …, and n times. So the time complexity of the loop is

T(n) = c + 2c + 3c + … + nc = cn(n + 1)/2

= (c/2)n^2 + (c/2)n

= O(n^2)

Example 4

Consider the following loop:

for (i = 1; i <= n; i++) {
  for (j = 1; j <= 20; j++) {
    k = k + i + j;
  }
}

The inner loop executes 20 times and the outer loop n times. So the time complexity of the loop is T(n) = 20 × c × n = O(n).

Example 5

Consider the following sequence of loops:

for (j = 1; j <= 10; j++) {
  k = k + 4;
}

for (i = 1; i <= n; i++) {
  for (j = 1; j <= 20; j++) {
    k = k + i + j;
  }
}

The first loop executes 10 times and the second loop 20 × n times. The time complexity is T(n) = 10 × c + 20 × c × n = O(n).

Example 6

Consider the following selection statement:

if (list.contains(e)) {
  System.out.println(e);
}
else {
  for (Object t : list) {
    System.out.println(t);
  }
}

Suppose the list contains n elements. The execution time for list.contains(e) is O(n), and the loop in the else clause takes O(n) time. The time complexity for the entire statement is

T(n) = if-test time + worst-case time (if clause, else clause)

Example 7

Consider the computation a^n for an integer n. A simple algoritm would multiply a

n times, as follows:

result = 1;
for (int i = 1; i <= n; i++)
  result *= a;

This algorithm takes O(n) time. Without loss of generality, assume n = 2^k. You can improve the algorithm using the following scheme:

result = a;
for (int i = 1; i <= k; i++)
  result = result * result;

This algorithm takes O(log n) time. For an arbitrary n, you can revise the algorithm and prove that the complexity is still O(log n).
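One way to revise the scheme for an arbitrary n is repeated squaring over the binary digits of n. The following is a sketch under that assumption (the class and method names are ours, not the book's):

```java
public class FastPowerDemo {
    /** Computes a^n by repeated squaring. Each iteration halves n,
        so the loop runs about log2(n) times: O(log n). */
    public static long power(long a, int n) {
        long result = 1;
        long base = a;
        while (n > 0) {
            if (n % 2 == 1) {   // current binary digit of n is 1
                result *= base;
            }
            base *= base;       // square the base
            n /= 2;             // move to the next binary digit
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(power(2, 10)); // 1024
        System.out.println(power(3, 5));  // 243
    }
}
```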

23.4 Analyzing Algorithm Time Complexity

The following subsections analyze the time complexity of several well-known algorithms.

23.4.1 Analyzing Binary Search

Binary search searches for a key in a sorted array. Each iteration of the algorithm contains a fixed number of operations, denoted by c. Let T(n) denote the time complexity for a binary search on a list of n elements. Without loss of generality, assume n is a power of 2 and k = log n. Since binary search eliminates half of the input after each comparison,

T(n) = T(n/2) + c = T(n/2^2) + c + c = … = T(n/2^k) + kc

= T(1) + c log n = 1 + (log n)c

= O(log n)

Ignoring constants and nondominating terms, the complexity of the binary search algorithm is O(log n). Algorithms with O(log n) time complexity are called logarithmic algorithms. The base of the log is 2, but the base does not affect a logarithmic growth rate, so it can be omitted. Logarithmic algorithms grow slowly as the problem size increases: squaring the input size only doubles the time for the algorithm.
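The recurrence above corresponds to an iterative binary search such as the following (a minimal sketch; names are illustrative, not the book's exact listing):

```java
public class BinarySearchDemo {
    /** Searches for key in a sorted array. Each iteration does a
        fixed number of operations and halves the remaining range,
        so at most about log2(n) iterations run: O(log n).
        Returns the index of key, or -1 if key is not present. */
    public static int binarySearch(int[] list, int key) {
        int low = 0;
        int high = list.length - 1;
        while (low <= high) {
            int mid = (low + high) / 2;
            if (key < list[mid]) {
                high = mid - 1;   // discard the upper half
            } else if (key > list[mid]) {
                low = mid + 1;    // discard the lower half
            } else {
                return mid;       // key found
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] sorted = {1, 3, 5, 7, 9, 11};
        System.out.println(binarySearch(sorted, 7));  // 3
        System.out.println(binarySearch(sorted, 4));  // -1
    }
}
```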

23.4.2 Analyzing Selection Sort

Selection sort finds the smallest number in a list and places it first. Then it finds the smallest number remaining and places it after the first, and so on, until the list contains only a single number. The number of comparisons is n – 1 for the first iteration, n – 2 for the second iteration, and so on. Let T(n) denote the complexity of selection sort and c denote the total number of other operations, such as assignments and additional comparisons, in each iteration. So,

T(n) = (n – 1) + c + (n – 2) + c + … + 2 + c + 1 + c

= (n – 1)(n – 1 + 1)/2 + c(n – 1) = n^2/2 – n/2 + cn – c

= O(n^2)

Thus, complexity of the selection sort algorithm is O(n^2).
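The description above can be sketched as follows (a minimal in-place version; names are ours, not the book's listing). Pass i scans the remaining elements for the smallest, giving (n – 1) + (n – 2) + … + 1 comparisons overall:

```java
import java.util.Arrays;

public class SelectionSortDemo {
    /** Sorts list in place using selection sort: O(n^2) comparisons. */
    public static void selectionSort(int[] list) {
        for (int i = 0; i < list.length - 1; i++) {
            int minIndex = i;
            // find the smallest element among list[i..n-1]
            for (int j = i + 1; j < list.length; j++) {
                if (list[j] < list[minIndex]) {
                    minIndex = j;
                }
            }
            // place the smallest remaining element at position i
            int temp = list[i];
            list[i] = list[minIndex];
            list[minIndex] = temp;
        }
    }

    public static void main(String[] args) {
        int[] list = {5, 2, 9, 1, 6};
        selectionSort(list);
        System.out.println(Arrays.toString(list)); // [1, 2, 5, 6, 9]
    }
}
```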

23.4.3 Analyzing Insertion Sort

Insertion sort algorithm sorts a lsit of values by repeatedly inserting a new element

into a sorted partial array until the whole array is sorted. At the kth iteration, to

insert an element into an array of size k, it may take k comparisons to find the

insertion position, and k moves to insert the element. Let T(n) denote complexity

for insertion sort and c denote total number of other operations such as

assignments and additional comparisons in each iteration. So,

T(n) = (2 + c) + (2 × 2 + c) + … + (2 × (n – 1) + c)

= 2(1 + 2 + … + (n – 1)) + c(n – 1)

= 2(n – 1)n/2 + cn – c = n^2 – n + cn – c

= O(n^2)

Thus, the complexity of the insertion sort algorithm is O(n^2), so selection sort and insertion sort have the same time complexity.
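A minimal sketch of insertion sort (names are ours, not the book's listing); at iteration i the first i elements are already sorted, and inserting list[i] may take up to i comparisons and i moves:

```java
import java.util.Arrays;

public class InsertionSortDemo {
    /** Sorts list in place using insertion sort: O(n^2) overall. */
    public static void insertionSort(int[] list) {
        for (int i = 1; i < list.length; i++) {
            int current = list[i];
            int k = i - 1;
            // shift sorted elements right until the insertion
            // position for current is found
            while (k >= 0 && list[k] > current) {
                list[k + 1] = list[k];
                k--;
            }
            list[k + 1] = current;
        }
    }

    public static void main(String[] args) {
        int[] list = {5, 2, 9, 1, 6};
        insertionSort(list);
        System.out.println(Arrays.toString(list)); // [1, 2, 5, 6, 9]
    }
}
```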

23.4.4 Analyzing Towers of Hanoi Problem

The Towers of Hanoi algorithm recursively moves n disks from tower A to tower B with the assistance of tower C, as follows:

1. Move the first n – 1 disks from A to C with the assistance of tower B.

2. Move disk n from A to B.

3. Move the n – 1 disks from C to B with the assistance of tower A.

Let T(n) denote the complexity of the algorithm that moves n disks and c denote the constant time to move one disk; i.e., T(1) is c. So,

T(n) = T(n – 1) + c + T(n – 1)

= 2T(n – 1) + c

= 2(2T(n – 2) + c) + c

= 2(2(2T(n – 3) + c) + c) + c

= …

= 2^(n-1)T(1) + 2^(n-2)c + … + 2c + c

= 2^(n-1)c + 2^(n-2)c + … + 2c + c = (2^n – 1)c = O(2^n)

Algorithms with O(2^n) time complexity are called exponential algorithms. As the input size increases, the time for an exponential algorithm grows exponentially, so exponential algorithms are not practical for large input sizes.
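The three steps above can be sketched as a move-counting recursion (our own variant that counts moves rather than printing them; names are illustrative). The count it returns is exactly 2^n – 1, matching the derivation:

```java
public class TowersOfHanoiDemo {
    /** Returns the number of single-disk moves needed to move n
        disks from 'from' to 'to' using 'aux' as the auxiliary
        tower. The recursion mirrors steps 1-3 above; the total is
        2^n - 1, so the running time is O(2^n). */
    public static long moveDisks(int n, char from, char to, char aux) {
        if (n == 1) {
            return 1;                                 // move one disk directly
        }
        long moves = moveDisks(n - 1, from, aux, to); // step 1
        moves += 1;                                   // step 2: move disk n
        moves += moveDisks(n - 1, aux, to, from);     // step 3
        return moves;
    }

    public static void main(String[] args) {
        System.out.println(moveDisks(3, 'A', 'B', 'C'));  // 7 = 2^3 - 1
        System.out.println(moveDisks(10, 'A', 'B', 'C')); // 1023
    }
}
```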

23.4.5 Comparing Common Growth Functions

The following functions are ordered from least to greatest growth rate, showing the common time complexities:

O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(2^n)

| Function   | Name             | n = 25      | n = 50       | f(50)/f(25) |
|------------|------------------|-------------|--------------|-------------|
| O(1)       | Constant time    | 1           | 1            | 1           |
| O(log n)   | Logarithmic time | 4.64        | 5.64         | 1.21        |
| O(n)       | Linear time      | 25          | 50           | 2           |
| O(n log n) | Log-linear time  | 116         | 282          | 2.43        |
| O(n^2)     | Quadratic time   | 625         | 2500         | 4           |
| O(n^3)     | Cubic time       | 15625       | 125000       | 8           |
| O(2^n)     | Exponential time | 3.36 × 10^7 | 1.27 × 10^15 | 3.35 × 10^7 |

23.5 Case Studies: Finding Fibonacci Numbers

Here is a recursive method for finding the Fibonacci numbers…
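The recursive method follows the standard definition fib(0) = 0, fib(1) = 1, fib(n) = fib(n – 1) + fib(n – 2). A minimal sketch of that method (not the book's exact listing):

```java
public class FibonacciDemo {
    /** Returns the nth Fibonacci number, with fib(0) = 0 and
        fib(1) = 1. Each call spawns two further calls, so the
        running time grows exponentially in n. */
    public static long fib(int n) {
        if (n == 0) {
            return 0;
        } else if (n == 1) {
            return 1;
        }
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // 55
    }
}
```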

Left off here.


Chapter 23, Algorithm Efficiency (Liang, 8th edition)