
ALGORITHM ANALYSIS AND

DESIGN
Course Material

Department of Computer Science & Engineering, Toc H Institute of Science and Technology, Arakkunnam
Analysis of Algorithms

Algorithm-definition
• An algorithm is a step-by-step finite sequence of instructions that gives a
complete solution to a given problem.
• An algorithm is a finite set of instructions which, if followed, accomplish a
particular task.
✓ Algorithms are not specific to any programming language.
✓ An Algorithm can be implemented in any programming language.

Necessary criteria for an algorithm

✓ Input: - The input of an algorithm can either be given by the user or


generated internally.
✓ Output: - An algorithm should have at least one output.
✓ Finiteness: - An algorithm should end in a finite number of steps.
✓ Definiteness: - Every step of an algorithm should be clear and
unambiguously defined.
✓ Effectiveness: - Every instruction must be sufficiently basic that it can, in
principle, be carried out by a person using only pencil and paper.
Performance analysis

Criteria
1. Does the program meet the original specifications of the task?
2. Does it work correctly?
3. Does the program contain documentation that shows how to use it and how it
works?
4. Does the program effectively use functions to create logical units?
5. Is the code reusable?
6. Does the program efficiently use primary and secondary memory?
7. Is the program's running time acceptable for the task?

Stages

The Analysis is done at two stages:


1. A priori analysis: done before implementation.
2. A posteriori analysis: done after implementing the algorithm on a target
machine.

The primary resources available in a computer are the CPU and primary
memory. Analyzing an algorithm means finding how much time the algorithm
takes and how much space it occupies in memory when implemented.
Performance evaluation is based on two factors, namely

✓ Performance analysis
• Space Complexity: of a program is the amount of memory
that it needs to run to completion.
• Time Complexity: of a program is the amount of computer
time that it needs to run to completion.
✓ Performance measurement

Space Complexity: Analysis based on the memory required to execute the


algorithm is called Space Complexity of the algorithm. The space needed by a
program is the sum of

Fixed space requirements - e.g. variables, constants
Variable space requirements - e.g. pointers, recursion stack

Total space required S(P) = c + Sp(I),


where c is a constant representing the fixed space requirements of the program,
and Sp(I) is the variable space requirement of program P working on instance I.

Examples

1. Alg sum(a, n)
s = 0;
for i = 0 to n-1
s = s + a[i];
return s;

Ans: S(P) = n + 3, where Sp(I) = n for the array a and one word each for n, s and i.

2. Alg sum(a, n, m)
s = 0;
for i = 0 to n-1
for j = 0 to m-1
s = s + a[i][j];
return s;
Ans: S(P) = nm + 5 (nm for the array, plus one word each for n, m, s, i and j)

3.#include<stdio.h>
int main()
{ int a = 5, b = 5, c;
c = a + b;
printf("%d", c);
}
In the above program, 3 integer variables are used. The size of the integer data type
is 2 or 4 bytes, depending on the compiler. Now, let's assume the size is 4 bytes.
So, the total space occupied by the above program is 4 * 3 = 12 bytes. Since
no additional variables are used, no extra space is required.

4. float abc (float a, float b, float c)


{
return a + b + b * c + (a + b -c) / (a + b)+ 4.00;
}
Ans: 3 variables, so 3 * 4 = 12 bytes, and Sabc(I) = 0.
5. #include <stdio.h>
int main()
{ int n, i, sum = 0;
scanf("%d", &n);
int arr[n];
for(i = 0; i < n; i++)
{ scanf("%d", &arr[i]); sum = sum + arr[i];
}
printf("%d", sum);

}
In the above code, the array holds n integer elements, so the space
occupied by the array is 4 * n bytes. We also have the integer variables n, i and sum.
Assuming 4 bytes for each variable, the total space occupied by the program is 4n
+ 12 bytes.

Time Complexity
Analysis based on the time taken to execute the algorithm is called the Time
Complexity of the algorithm. Time complexity includes the compilation time and
the execution time, but compilation is done only once (similar to the fixed space
component, since it does not depend on the instance characteristics), whereas
execution may be repeated many times. So in most cases only the execution
time is considered, not the compilation time.

The running time of an algorithm depends on several factors like


i) Computer configuration
ii) Compiler
iii) Input to the algorithm.

The time taken is


T(P) = compile time + run time (or execution time)
i.e. T(P) = C + Tp, where C is the compilation time and Tp is the
program's execution time.

Time complexity can be estimated in 2 ways


1. Operation Count
Eg: Searching an array for the presence of an element. Here the
time complexity is estimated based on the number of search
operations
2. Step Count: can be determined in 2 ways
: using count statements
: using tabular method

Tabular method
1. Determine the step count for each statement (called steps per execution, or s/e).
2. Find the number of times each statement is executed (called frequency).
3. Multiply s/e by frequency to get the total steps.

Rules that may be used for determining frequency count
1. Comments-0 steps
2. No count for { and }
3. Assignment statement-1 step
4. Return statement-1 step
5. Conditional statement-1 step
6. Loop condition executed n times - n+1 steps; body of loop - n
steps

Examples

Asymptotic Notations
If we analyze an algorithm precisely, we usually end up with an equation in
terms of a variable characterizing the input. For example, by analyzing the work of
the algorithm A for problem P in terms of its input size, we may obtain the equation:
W_A(n) = 2n log n + 3n + 4 log n + 5. By applying the analysis method to another
algorithm, algorithm B, we may derive the equation: W_B(n) = 6n + 7 log² n + 8 log
n + 9. When given such equations, how should we interpret them? For example,
which of the two algorithms should we prefer? It is not easy to tell by simply
looking at the two equations. But what we can do is to calculate the two equations
for varying values of n and pick the algorithm that does the least amount of work
for the values of n that we are interested in.

The step count method was used to compare the time complexities of two
programs that compute the same function and to predict the growth in run time
as the instance characteristics change. But determining the exact step count of
a program is a difficult task. A natural solution is to find the upper
bound of the time complexity instead of calculating the exact step count. The order
of growth (or rate of growth) of an algorithm gives a simple characterization of the
algorithm's efficiency by identifying the most significant term in the step count.
Eg: For an algorithm with a step count 2n² + 3n + 1, the order of growth depends
on 2n² for large n.

Asymptotic analysis is a technique that focuses the analysis on the most significant term.


Asymptotic analysis is the evaluation of the performance of an algorithm in terms
of input size. ie How does the time/space taken by an algorithm change with the
input size?

Cases to analyse an algorithm


Worst Case Analysis
✓ In this case, we calculate the upper bound on the running time
of an algorithm.
✓ In this case, we consider such inputs so that the algorithm
executes the maximum number of operations.
Best Case Analysis
✓ In this case, we calculate the lower bound on the running time
of an algorithm.
✓ In this case, we consider such inputs that the algorithm
executes the minimum number of operations.
Average Case Analysis
✓ In this case, we calculate both upper & lower bound on the
running time of an algorithm.
✓ In this case, we consider all possible inputs and average
the number of operations the algorithm executes over them.

Eg: For a linear search algorithm, where N = length of the array
Worst Case: when the element to be searched is either not present in the array
or is present at the end of the array. Time Complexity: O(N)
Best Case: when the element to be searched is present at the first location of
the array. Time Complexity: O(1)
Average Case:
▪ Average of all the cases (complexities), when the element is present at
each of the locations.
▪ Time Complexity: (N + (N-1) + (N-2) + … + 1) / N = (N+1)/2
▪ Time Complexity: O(N)

Expressing a complexity function with reference to other known functions
is called asymptotic complexity. Asymptotic notations help us make
approximate, but meaningful, statements about time and space complexity.
Asymptotic notations are used to represent the complexities of algorithms for
asymptotic analysis. These notations are mathematical tools to represent the
complexities. There are three notations that are commonly used.

Big Oh Notation

✓ It represents the upper bound of the resources required to solve a problem.


✓ It is the measure of the longest amount of time taken for the algorithm to
complete.
✓ It is represented by O

ie Big-Oh (O) notation gives an upper bound for a function f(n) to within a constant
factor.

We write f(n) = O(g(n)) if there are positive constants n0 and c such that, to the right
of n0, f(n) always lies on or below c*g(n).
O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c g(n) for
all n ≥ n0 }

Big Omega Notation

✓ It represents the lower bound of the resources required to solve a problem.


✓ It is the measure of the smallest amount of time taken for the algorithm to
complete.

✓ It is represented by Ω

ie Big-Omega (Ω) notation gives a lower bound for a function f(n) to within a constant
factor.

We write f(n) = Ω(g(n)) if there are positive constants n0 and c such that, to the right
of n0, f(n) always lies on or above c*g(n).
Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ c g(n) ≤ f(n) for
all n ≥ n0 }

Big Theta Notation

✓ The lower and upper bounds for the function f are provided by the theta
notation Θ.
✓ It is the measure of both the longest and smallest amounts of time taken for the
algorithm to complete.

ie Big-Theta (Θ) notation gives a bound for a function f(n) to within constant factors.

We write f(n) = Θ(g(n)) if there are positive constants n0, c1 and c2 such that, to
the right of n0, f(n) always lies between c1*g(n) and c2*g(n) inclusive.
Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that 0 ≤ c1 g(n) ≤
f(n) ≤ c2 g(n) for all n ≥ n0 }

<Refer other pdf for definition of little o and little omega>

Asymptotic Analysis of Iterative Algorithms

◆ Iterative Algorithms

Some rules that can be used in general:

1) O(1): The time complexity of a function (or set of statements) is considered


O(1) if it doesn't contain a loop, recursion, or a call to any other non-constant-time
function.
A loop or recursion that runs a constant number of times is also considered
O(1). For example, the following loop is O(1).
// Here c is a constant
for (int i = 1; i <= c; i++) {
// some O(1) expressions
}
2) O(n): The time complexity of a loop is considered O(n) if the loop
variable is incremented / decremented by a constant amount. For example, the
following loops have O(n) time complexity.
// Here c is a positive integer constant
for (int i = 1; i <= n; i += c) {
// some O(1) expressions
}
for (int i = n; i > 0; i -= c) {
// some O(1) expressions
}
3) O(n^c): The time complexity of nested loops is equal to the number of times
the innermost statement is executed. For example, the following sample loops
have O(n²) time complexity.
for (int i = 1; i <=n; i += c) {
for (int j = 1; j <=n; j += c) {
// some O(1) expressions
}
}
for (int i = n; i > 0; i -= c) {
for (int j = i+1; j <=n; j += c) {
// some O(1) expressions
}
}

4) O(Log n): The time complexity of a loop is considered O(log n) if the loop
variable is divided / multiplied by a constant amount.
for (int i = 1; i <=n; i *= c) {
// some O(1) expressions
}
for (int i = n; i > 0; i /= c) {
// some O(1) expressions
}
The series that we get in the first loop is 1, c, c², c³, …, cᵏ. If we put k equal to
log_c n, we get c^(log_c n), which is n.

5) O(Log Log n): The time complexity of a loop is considered O(log log n) if


the loop variable is increased / decreased exponentially by a constant
power.
// Here c is a constant greater than 1
for (int i = 2; i <=n; i = pow(i, c)) {
// some O(1) expressions
}
//Here fun is sqrt or cuberoot or any other constant root
for (int i = n; i > 1; i = fun(i)) {
// some O(1) expressions
}
6) Time complexities of consecutive loops: when there are consecutive
loops, we calculate the time complexity as the sum of the time complexities
of the individual loops.
for (int i = 1; i <=m; i += c)
{
// some O(1) expressions
}
for (int i = 1; i <=n; i += c) {
// some O(1) expressions
}
Time complexity of the above code is O(m) + O(n), which is O(m + n).
If m == n, the time complexity becomes O(2n), which is O(n).

Problem 1
What is the time & space complexity of the following code:
let a = 0, b = 0;
for (let i = 0; i < n; ++i) {
a = a + i;
}
for (let j = 0; j < m; ++j) {
b = b + j;
}

Time Complexity: O(n + m)


Space Complexity: O(1)

Problem 2
What is the time & space complexity of the following code:
let a = 0, b = 0;
for (let i = 0; i < n; ++i) {
for (let j = 0; j < n; ++j) {
a = a + j;
}
}
for (let k = 0; k < n; ++k) {
b = b + k;
}

Time Complexity: O(n²)


Space Complexity: O(1)

Problem 3
What is the time and space complexity of the following code:
let a = 0;
for (let i = 0; i < n; ++i) {
for (let j = n; j > i; --j) {
a = a + i + j;
}
}

Time Complexity: O(n²)


Space Complexity: O(1)

Problem 4
// Here c is a positive integer constant
for (int i = 1; i <= n; i += c) {
// some O(1) expressions
}

for (int i = n; i > 0; i -= c) {


// some O(1) expressions
}

The time complexity of a loop is considered O(n) if the loop variable is incremented


/ decremented by a constant amount. For example, the loops above have O(n) time
complexity.
Problem 5
for (int i = 1; i <=n; i += c) {
for (int j = 1; j <=n; j += c) {
// some O(1) expressions
}

}

for (int i = n; i > 0; i -= c) {


for (int j = i+1; j <=n; j += c) {
// some O(1) expressions
}
}
O(n^c): The time complexity of nested loops is equal to the number of times the
innermost statement is executed. For example, the above sample loops have O(n²)
time complexity.

Problem 6
for (int i = 1; i <=n; i *= c) {
// some O(1) expressions
}
for (int i = n; i > 0; i /= c) {
// some O(1) expressions
}
O(Log n): The time complexity of a loop is considered O(log n) if the loop variable is
divided / multiplied by a constant amount. Let us see mathematically how it is O(log
n): the series that we get in the first loop is 1, c, c², c³, …, cᵏ. If we put k equal to
log_c n, we get c^(log_c n), which is n.

Problem 7
// Here c is a constant greater than 1
for (int i = 2; i <=n; i = pow(i, c)) {
// some O(1) expressions
}
//Here fun is sqrt or cuberoot or any other constant root
for (int i = n; i > 1; i = fun(i)) {
// some O(1) expressions
}
O(Log Log n): The time complexity of a loop is considered O(log log n) if the loop
variable is increased / decreased exponentially by a constant power, so the two
loops above are O(log log n).

for (int i = 1; i <=m; i += c) {
// some O(1) expressions
}
for (int i = 1; i <=n; i += c) {
// some O(1) expressions
}
Time complexity of the above code is O(m) + O(n), which is O(m + n).
If m == n, the time complexity becomes O(2n), which is O(n).

Problem 8
What is the time complexity of following code?
Assume n>=2
while (n > 1) {
n = n / 2;
}
Solution:
When n = 2 (2¹), the while loop executes 1 time;
when n = 4 (2²), it executes 2 times;
when n = 8 (2³), it executes 3 times; and so on.
Assuming n is a power of 2, i.e. n = 2ᵏ, we have k = log₂ n. Therefore the time
complexity is O(log₂ N).
Variation
Assume n>=2
while (n > 1) {
n = n / 5;
}
O(log₅ N)

Assume n >= 2
while (n > 1) {
n = n / 10;
}
O(log₁₀ N)

Problem 9
What is the time complexity of the following code?
A( )
{
i=1, s=1;
while (s <= n)
{ i++;
s = s + i;
print("hai");
}
}
Explanation
s takes the values 1, 3, 6, 10, … and i the values 1, 2, 3, …
The loop stops after k iterations when the sum k(k+1)/2 exceeds n, so
k = O(√n).

13
Problem10
What is the time complexity of the following code?
A( )
{
i=1;
for (i = 1; i*i <= n; i++)
print("hai");
}
Explanation
i² <= n is equivalent to i <= √n. Therefore the statement executes √n times.
Problem11
What is the time complexity of the following code? (unrolling)
A()
{
int i,j,k,n;
for (int i = 1; i <= n; i++)
{
for (int j = 1; j <= i; j++) {
for (int k = 1; k <= 100; k++)
{ print("hai"); }}
}
}
Explanation
The outer loop runs n times. The second loop's execution depends on the value of i,
so we need to unroll it. The third loop's execution is independent of i and j.
i = 1    2      3      …  n
j = 1    2      3      …  n
k = 100  2*100  3*100  …  n*100
So the total number of times print executes is
100 + 2*100 + 3*100 + … + n*100 = 100·n(n+1)/2 = O(n²).
Problem12
What is the time complexity of the following code? (unrolling)
A()
{
int i,j,k,n;
for (int i = 1; i <= n; i++)
{
for (int j = 1; j <= i*i; j++) {
for (int k = 1; k <= n/2; k++)
{ print("hai"); }}
}
}
Explanation
The outer loop runs n times. The second loop's execution depends on the value of i,
so we need to unroll it. The third loop's execution is independent of i and j.
i = 1        2        3        …  n
j = 1        4        9        …  n²
k = (n/2)*1  (n/2)*4  (n/2)*9  …  (n/2)*n²
So the total number of times print executes is
(n/2)*1 + (n/2)*4 + (n/2)*9 + … + (n/2)*n² = (n/2)·n(n+1)(2n+1)/6 = O(n⁴).

Problem13
What is the time complexity of the following code?
A()
{
int i;
for (int i = 1; i <= n; i *= 2)
print("hai");
}
Explanation
i = 1, 2, 4, …, n, i.e. 2⁰, 2¹, 2², …, 2ᵏ.
The print statement executes k times, where 2ᵏ = n, which is equivalent to k = log₂ n.
Therefore the time complexity is O(log n).

Problem14
What is the time complexity of the following code?
int i, j, k;
for (i = n / 2; i <= n; i++) {
for (j = 1; j <= n/2; j++) {
for (k = 1; k <= n; k = k*2)
print("hai"); }
}
Explanation
i and j each execute about n/2 times, and k executes log₂ n times. Every loop runs
independently of the others, so there is no need for unrolling.
Total time = (n/2) * (n/2) * log₂ n, which is O(n² log n).
Asymptotic Analysis of Recursive Algorithms

Methods for solving recurrences


Iteration method
Recursion tree method
Master method
Back Substitution Method
(Refer other pdf for iteration,recursive tree and back substitution method)
Master Method:
Master Method is a direct way to get the solution. The master method works only
for following type of recurrences or for recurrences that can be transformed to the
following type.

T(n) = a·T(n/b) + Θ(nᵏ logᵖ n), where a >= 1, b > 1, k >= 0 and p is a real


number

i) if a > bᵏ, then T(n) = Θ(n^(log_b a))

ii) if a = bᵏ:

a) if p > -1, then T(n) = Θ(n^(log_b a) · logᵖ⁺¹ n)

b) if p = -1, then T(n) = Θ(n^(log_b a) · log log n)

c) if p < -1, then T(n) = Θ(n^(log_b a))

iii) if a < bᵏ:

a) if p >= 0, then T(n) = Θ(nᵏ logᵖ n)

b) if p < 0, then T(n) = Θ(nᵏ)

Problem 1
i) T(n) = 3T(n/2) + n²
a=3, b=2, k=2, p=0
Case 3a. Ans: Θ(n²)

Problem 2
ii) T(n) = 4T(n/2) + n²
a=4, b=2, k=2, p=0
Case 2a
Ans: Θ(n^(log₂4) log n) = Θ(n² log n)

Problem 3
iii) T(n) = T(n/2) + n²
a=1, b=2, k=2, p=0
Case 3a. Ans: Θ(n²)

Problem 4
iv) T(n) = 16T(n/4) + n
a=16, b=4, k=1, p=0
Case 1. Ans: Θ(n^(log₄16)) = Θ(n²)

Problem 5
v) T(n) = 2T(n/2) + n log n
a=2, b=2, k=1, p=1
Case 2a. Ans: Θ(n log² n)

Problem 6
vi) T(n) = 2T(n/2) + n/log n
        = 2T(n/2) + n log⁻¹ n

a=2, b=2, k=1, p=-1
Case 2b. Ans: Θ(n log log n)

Problem 7
T(n) = 2T(n/4) + n^0.51
a=2, b=4, k=0.51, p=0 (bᵏ = 4^0.51 > a)
Case 3a: Θ(n^0.51)

Problem 8
T(n) = 0.5T(n/2) + 1/n
a = 0.5, which is less than 1, so the master theorem cannot be applied.

Problem 9
T(n) = 6T(n/3) + n² log n
a=6, b=3, k=2, p=1
Case 3a: Θ(n² log n)

Problem 10
T(n) = 64T(n/8) - n² log n
Note: The master theorem cannot be applied, as the driving function has a minus
sign (it must be asymptotically positive).

Problem 11
T(n) = 7T(n/3) + n²
a=7, b=3, k=2, p=0
Case 3a. Ans: Θ(n²)

Problem 12
T(n) = 4T(n/2) + log n

a=4, b=2, k=0, p=1
Case 1. Ans: Θ(n²)

Problem 13
T(n) = √2·T(n/2) + log n
a=√2, b=2, k=0, p=1
Case 1. Ans: Θ(n^(log₂√2)) = Θ(√n)

Problem 14
T(n) = 2T(n/2) + √n
a=2, b=2, k=1/2, p=0
Case 1. Ans: Θ(n^(log₂2)) = Θ(n)

Problem 15
T(n) = 3T(n/2) + n
a=3, b=2, k=1, p=0
Case 1. Ans: Θ(n^(log₂3))


Problem 16
T(n) = T(2n/3) + 1
a=1, b=3/2, k=0, p=0; a = bᵏ and p > -1, so Case 2a. Sol: Θ(log n)

SAMPLE QUESTIONS

1. Differentiate between recursive and iterative algorithms.

2. Is 2ⁿ⁺¹ = O(2ⁿ)? Justify.

3. Is 2²ⁿ = O(2ⁿ)? Justify.

4. Show that the function f(n) defined by f(1) = 1, f(n) = f(n-1) + 1/n for n > 1 has the

complexity O(log n).

[Note: 1 + 1/2 + 1/3 + … + 1/n ≈ log n]

5. Consider the following code

void func()

{

for(i=0;i<n;i++)

funci();

and consider the complexity of funci( ) to be log₂ n. Find the time complexity

of func( ).

[Note: T(n) = n * [ time complexity of inner statements]

6. Derive the Big Oh notation for T(n)=T(n-1)+n using back substitution method.

7. Show that the solution of T(n)=T(n/2)+1 is O(lg n).

8. Show that the solution to T(n)=2T(n/2+17)+n is O(n lg n).

9. Apply master theorem to find the solution to T(n)=4T(n/3)+n.

10. Apply master theorem to find the solution to T(n)=4T(n/2)+n2.

11. Calculate the run time efficiency of the following program segment using

tabular method

for(i=1;i<=n;i++)

for(j=1;j<=n;j++)

printf(“hello”);

12. Let f(n) = n² + n. Can you say f(n) is O(n³)? Justify.

13. Write an algorithm/pseudo code to find the sum of two square matrices and

find the time complexity of the algorithm using frequency count method.

14. Write a recursive function to find the factorial of a given number. Find its time

complexity.

15. Examine the need for calculating frequency count.

16. n² + n = O(n³). Justify.

17. Derive the Big Oh notation for f(n) = n² + 2n + 5.

18. Between O(log n) and O(n log n), which is better? Justify.

19. Derive the Big Oh notation for T(n) = T(n-1) + log n, T(0) = 0.
(Note: log(mn) = log m + log n)

20.

21. Write short notes on linear and nonlinear data structures.

22. Compare arrays and linked list.

23. Show the reversal of a linked list using iterative method

24. Discuss the array and linked representation of stack

25. Discuss the array and linked representation of queue.

26. Differentiate between singly linked list and circular linked list.

27. List applications of arrays, stacks, queues and linked lists.

LONG ANSWER QUESTIONS

1. Explain the master theorem with a suitable example.

2. Use a recursion tree to determine a good asymptotic upper bound on the

recurrence T (n) = 2 T(n/2) + n. Use the substitution method to verify the answer.

3. Explain recursion tree method and Substitution method for solving recurrence

with suitable examples.

4. Give a recursive algorithm to find the kth smallest element of an array. Write the

recurrence equation and perform asymptotic analysis for the worst case.

5. Write an algorithm to search for a key k in an N×N matrix. What is the time complexity

of the algorithm?

6. Describe the different notations used to describe the asymptotic running time of

an algorithm.

7. Exemplify the various operations that can be performed on singly linked list.

8. Exemplify the various operations that can be performed on doubly linked list.
