
UNIT I

Introduction to Algorithms: Introduction, Algorithm Specifications, Recursive Algorithms,


Performance Analysis of an Algorithm: Time and Space Complexity, Asymptotic Notations,
Amortized Analysis

1. Data Structures

Data Structure is a way to store and organize data so that it can be used efficiently. A data
structure should be seen as a logical concept that must address two fundamental concerns.

1. First, how the data will be stored, and


2. Second, what operations will be performed on it.

The data structure must be rich enough to capture the different relationships that exist
between the data. At the same time, the structure of the data should be simple, so that we can
process the data efficiently when required.

To develop a program for an algorithm, we should select an appropriate data structure for that
algorithm. This relationship is often expressed as:

Algorithm + Data structure = Program

1.1 Types of Data Structures

There are two types of data structures:

• Primitive data structure


• Non-primitive data structure

1.1.1 Primitive Data structure

The primitive data structures are primitive data types. The int, char, float, double, and pointer
are the primitive data structures that can hold a single value.
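
As a quick illustration, the following C snippet (the variable names are only examples) declares one value of each primitive type; the sizes reported by sizeof depend on the compiler and platform.

#include <stdio.h>

int main(void)
{
    int count = 10;         // integer value
    char grade = 'A';       // single character
    float price = 99.5f;    // single-precision real number
    double pi = 3.14159;    // double-precision real number
    int *ptr = &count;      // pointer to an int

    // Each primitive variable holds exactly one value.
    printf("count = %d, grade = %c, price = %.1f, pi = %f, *ptr = %d\n",
           count, grade, price, pi, *ptr);

    // The size of each type is compiler/platform dependent.
    printf("sizeof(int) = %zu bytes\n", sizeof(int));
    return 0;
}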

1.1.2 Non-Primitive Data structure

The non-primitive data structure is divided into two types:

a) Linear data structure


b) Non-linear data structure

Linear Data Structure:

The arrangement of data in a sequential manner is known as a linear data structure. The data
structures used for this purpose are Arrays, Linked Lists, Stacks, and Queues. In these data
structures, each element is connected to only one other element, in a linear (sequential) form.

Non-linear data structure

When one element can be connected to 'n' other elements, the structure is known as a non-linear
data structure. The best examples are trees and graphs. In this case, the elements are not
arranged in a sequential manner.

Data structures can also be classified as:

• Static data structure: It is a type of data structure where the size is allocated at
compile time. Therefore, the maximum size is fixed.
• Dynamic data structure: It is a type of data structure where the size is allocated at
run time. Therefore, the maximum size is flexible (see the sketch below).
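
As a rough sketch of this classification, the following C program contrasts a static (compile-time sized) array with a dynamically allocated one; the names staticArr and dynamicArr are only illustrative.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    // Static data structure: size fixed at compile time.
    int staticArr[10];                 // can never hold more than 10 ints

    // Dynamic data structure: size decided at run time.
    int n;
    printf("Enter the number of elements: ");
    scanf("%d", &n);
    int *dynamicArr = malloc(n * sizeof(int));   // size chosen at run time
    if (dynamicArr == NULL)
        return 1;                      // allocation failed

    staticArr[0]  = 1;
    dynamicArr[0] = 1;
    printf("First elements: %d %d\n", staticArr[0], dynamicArr[0]);

    free(dynamicArr);                  // release the run-time memory
    return 0;
}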

1.2 Operations on Data Structures

The major or the common operations that can be performed on the data structures are:

• Traversing: Every data structure contains the set of data elements. Traversing the data
structure means visiting each element of the data structure in order to perform some
specific operation like searching or sorting.
• Insertion: Insertion is the process of adding an element to the data structure at any
location. If we try to insert more elements than the data structure can hold, then
overflow occurs.
• Deletion: The process of removing an element from the data structure is called
Deletion. We can delete an element from any location in the data structure. If we try
to delete an element from an empty data structure, then underflow occurs (a small C
sketch after this list illustrates both checks).
• Searching: The process of finding the location of an element within the data structure
is called Searching.
• Sorting: The process of arranging the data structure in a specific order is known as
Sorting.
• Merging: When two lists, List A of size M and List B of size N, containing elements of
the same type, are combined to produce a third list, List C of size (M+N), the process
is called merging.
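
As a small illustration of the insertion and deletion operations (and the overflow/underflow conditions mentioned above), here is a minimal sketch of an array-based list in C; the names insert, delete_last, and MAX are assumptions made only for this example.

#include <stdio.h>

#define MAX 5                  // capacity of the list

int list[MAX];
int count = 0;                 // current number of elements

// Insertion: fails with overflow when the list is already full.
int insert(int value)
{
    if (count == MAX) {
        printf("Overflow: list is full\n");
        return 0;
    }
    list[count++] = value;
    return 1;
}

// Deletion: fails with underflow when the list is empty.
int delete_last(void)
{
    if (count == 0) {
        printf("Underflow: list is empty\n");
        return 0;
    }
    count--;
    return 1;
}

int main(void)
{
    insert(10);
    insert(20);
    delete_last();
    delete_last();
    delete_last();             // triggers the underflow message
    return 0;
}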

1.3 Advantages of Data structures

• Efficiency: If the choice of data structure for implementing a particular ADT is
proper, it makes the program very efficient in terms of time and space.
• Reusability: The data structure provides reusability means that multiple client
programs can use the data structure.
• Abstraction: The data structure provides a level of abstraction. The client cannot see
the internal working of the data structure, so it does not have to worry about the
implementation; the client sees only the interface.

2. Algorithm

An algorithm is a sequence of unambiguous instructions for solving a problem, which can
be implemented (as a program) on a computer.

2.1 Algorithm Specifications

Every algorithm must satisfy the following specifications:

1. Input - Every algorithm must take zero or more input values from an external source.
2. Output - Every algorithm must produce at least one output as a result.
3. Definiteness - Every statement/instruction in an algorithm must be clear and
unambiguous (only one interpretation)
4. Finiteness - For all different cases, the algorithm must produce result within a finite
number of steps.
5. Effectiveness - Every instruction must be basic enough to be carried out and it also
must be feasible

2.2 Advantages of Algorithms

• It is easy to understand.
• Algorithm is a step-wise representation of a solution to a given problem.
• In Algorithm the problem is broken down into smaller pieces or steps hence, it is easier
for the programmer to convert it into an actual program.

2.3 Disadvantages of Algorithms:

• Writing an algorithm takes a long time so it is time-consuming.


• Understanding complex logic through algorithms can be very difficult.
• Branching and Looping statements are difficult to show in Algorithms.

2.4 Designing an Algorithm

In order to write an algorithm, the following things are needed as prerequisites:

a) The problem that is to be solved by this algorithm.


b) The constraints of the problem that must be considered while solving the problem.
c) The input to be taken to solve the problem.
d) The output to be expected when the problem is solved.
e) The solution to this problem, in the given constraints.

Then the algorithm is written with the help of above parameters such that it solves the problem.

Example:

Consider the example to add three numbers and print the sum.

Step 1: Fulfilling the pre-requisites

As discussed above, in order to write an algorithm, its pre-requisites must be fulfilled.

• The problem that is to be solved by this algorithm: Add 3 numbers and print their sum.
• The constraints of the problem that must be considered while solving the problem: The
numbers must contain only digits and no other characters.
• The input to be taken to solve the problem: The three numbers to be added.
• The output to be expected when the problem is solved: The sum of the three numbers
taken as the input.
• The solution to this problem, in the given constraints: The solution consists of adding
the 3 numbers. It can be done with the help of ‘+’ operator, or bit-wise, or any other
method.

Step 2: Designing the algorithm

Now let’s design the algorithm with the help of above pre-requisites:

Algorithm to add 3 numbers and print their sum:

1. START
2. Declare 3 integer variables num1, num2 and num3.
3. Take the three numbers, to be added, as inputs in variables num1, num2, and num3
respectively.
4. Declare an integer variable sum to store the resultant sum of the 3 numbers.
5. Add the 3 numbers and store the result in the variable sum.
6. Print the value of variable sum
7. END

Step 3: Testing the algorithm by implementing it.

In order to test the algorithm, let’s implement it in C language.

// C program to add three numbers
// with the help of the above designed algorithm
#include <stdio.h>

int main()
{
    // Variables to take the input of the 3 numbers
    int num1, num2, num3;

    // Variable to store the resultant sum
    int sum;

    // Take the 3 numbers as input
    printf("Enter the 1st number: ");
    scanf("%d", &num1);
    printf("%d\n", num1);

    printf("Enter the 2nd number: ");
    scanf("%d", &num2);
    printf("%d\n", num2);

    printf("Enter the 3rd number: ");
    scanf("%d", &num3);
    printf("%d\n", num3);

    // Calculate the sum using the + operator
    // and store it in the variable sum
    sum = num1 + num2 + num3;

    // Print the sum
    printf("\nSum of the 3 numbers is: %d", sum);
    return 0;
}

3. Recursive Algorithm

The process in which a function calls itself directly or indirectly is called recursion and the
corresponding function is called a recursive function. Using a recursive algorithm, certain
problems can be solved quite easily. Examples of such problems are Towers of Hanoi (TOH),
Inorder / Preorder / Postorder Tree Traversals, DFS of Graph, etc. A recursive function solves
a particular problem by calling a copy of itself and solving smaller sub-problems of the original
problems. Many more recursive calls can be generated as and when required. It is essential to
provide a base case in order to terminate this recursion process.

The Three Laws of Recursion:

1) A recursive algorithm must have a base case.


2) A recursive algorithm must change its state and move toward the base case.
3) A recursive algorithm must call itself, recursively.

Example: Sum of Natural Numbers Using Recursion

#include <stdio.h>

int sum(int n);

int main() {
    int number, result;

    printf("Enter a positive integer: ");
    scanf("%d", &number);
    result = sum(number);
    printf("sum = %d", result);
    return 0;
}

int sum(int n) {
    if (n != 0)
        // sum() function calls itself
        return n + sum(n - 1);
    else
        return n;
}

Output:
Enter a positive integer: 3
sum = 6

[Figure: Recursive calls made by the sum() function]
3.1 Types of Recursion

Recursion is of two types, depending on whether a function calls itself from within itself or
whether two functions call one another mutually. The former is called direct recursion and
the latter is called indirect recursion (both forms are illustrated in the sketch after the list below).

• Direct - FunctionA calls FunctionA


• Indirect - FunctionA calls FunctionB, FunctionB calls FunctionA or FunctionA calls
FunctionB, FunctionB calls FunctionC, FunctionC calls FunctionA
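
The following sketch uses hypothetical functions to show both forms: countdown() calls itself directly, while isEven() and isOdd() call each other, giving indirect (mutual) recursion.

#include <stdio.h>
#include <stdbool.h>

// Direct recursion: the function calls itself.
void countdown(int n)
{
    if (n == 0)               // base case
        return;
    printf("%d ", n);
    countdown(n - 1);         // countdown calls countdown
}

// Indirect (mutual) recursion: isEven calls isOdd and vice versa.
bool isOdd(int n);

bool isEven(int n)
{
    if (n == 0)
        return true;
    return isOdd(n - 1);      // isEven calls isOdd
}

bool isOdd(int n)
{
    if (n == 0)
        return false;
    return isEven(n - 1);     // isOdd calls isEven
}

int main(void)
{
    countdown(3);
    printf("\n4 is even? %d\n", isEven(4));
    return 0;
}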

3.2 Differences between recursion and iteration

• Recursion is a process in which a function calls itself; iteration repeats a block of
statements using a loop construct (for, while).
• Recursion terminates when a base case is reached; iteration terminates when the loop
condition becomes false.
• Each recursive call adds a new activation record to the call stack, so recursion generally
uses more memory and incurs function-call overhead; iteration uses a fixed, small amount of
extra memory.
• Recursion often gives shorter, clearer code for naturally recursive problems (trees, Towers
of Hanoi); iteration is usually faster for simple repetitive tasks (compare the factorial
sketch below).
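
As an illustration of these differences, the sketch below computes the factorial of a number both recursively and iteratively (assuming a small non-negative input so that the result fits in an unsigned long).

#include <stdio.h>

// Recursive version: calls itself until the base case n == 0.
unsigned long factRecursive(unsigned int n)
{
    if (n == 0)
        return 1;                      // base case
    return n * factRecursive(n - 1);   // moves toward the base case
}

// Iterative version: a simple loop with a running product.
unsigned long factIterative(unsigned int n)
{
    unsigned long result = 1;
    for (unsigned int i = 2; i <= n; i++)
        result *= i;
    return result;
}

int main(void)
{
    unsigned int n = 5;
    printf("Recursive: %lu\n", factRecursive(n));
    printf("Iterative: %lu\n", factIterative(n));
    return 0;
}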

3.3 The Towers of Hanoi

The Tower of Hanoi is a mathematical puzzle which consists of three towers (pegs) and more than
one disk.

These disks are of different sizes and are stacked in ascending order, i.e. the smaller one
sits on top of the larger one. There are other variations of the puzzle where the number of disks
increases, but the tower count remains the same.

Rules:

The objective is to move all the disks to another tower without violating the sequence of
arrangement. A few rules to be followed for the Tower of Hanoi are:

1) Only one disk can be moved among the towers at any given time.
2) Only the "top" disk can be removed.
3) No large disk can sit over a small disk.

The recursive algorithm for moving n disks from tower A to tower B, using tower C as an
intermediate, works as follows.

If n = 1:

• Move the single disk from tower A to tower B.

If n > 1:

• Recursively move the top n − 1 disks from A to C. The largest disk remains on tower
A by itself.
• Move the single (largest) disk from A to B.
• Recursively move the n − 1 disks from C to B.

#include <stdio.h>
#include <stdlib.h>

// Move n disks from rodFrom to rodTo, using rodMiddle as the auxiliary rod
void hanoi(int n, char rodFrom, char rodMiddle, char rodTo)
{
    if (n == 1) {
        printf("Disk 1 moved from %c to %c \n", rodFrom, rodTo);
        return;
    }
    hanoi(n - 1, rodFrom, rodTo, rodMiddle);
    printf("Disk %d moved from %c to %c \n", n, rodFrom, rodTo);
    hanoi(n - 1, rodMiddle, rodFrom, rodTo);
}

int main()
{
    int no_of_disk;
    printf("Enter the no of disks\n");
    scanf("%d", &no_of_disk);
    hanoi(no_of_disk, 'A', 'B', 'C');
    return 0;
}

Output:
Enter the no of disks
3
Disk 1 moved from A to C
Disk 2 moved from A to B
Disk 1 moved from C to B
Disk 3 moved from A to C
Disk 1 moved from B to A
Disk 2 moved from B to C
Disk 1 moved from A to C

4. Performance Analysis of an algorithm

An algorithm can be analyzed at two levels: before it is implemented and after it is implemented.
Accordingly, there are two kinds of analysis of an algorithm:

• Priori Analysis: A priori analysis is the theoretical analysis of an algorithm, done
before implementing it. It is independent of factors such as processor speed, which
belong to the hardware rather than to the algorithm itself.
• Posterior Analysis: A posteriori analysis is the practical analysis of an algorithm. The
practical analysis is achieved by implementing the algorithm in a programming
language. This analysis evaluates how much running time and space the algorithm
actually takes.

The performance of an algorithm means predicting the resources that are required by the algorithm
to perform its task.

The performance of an algorithm depends on the following:

• Whether that algorithm is providing the exact solution for the problem?
• Whether it is easy to understand?
• Whether it is easy to implement?
• How much space (memory) it requires to solve the problem?
• How much time it takes to solve the problem? Etc.

When we want to analyse an algorithm, we consider only the space and time required by that
algorithm and we ignore all the remaining elements. Thus, performance analysis of an
algorithm can also be defined as the process of calculating space and time required by that
algorithm.

4.1 Algorithm Complexity

The performance of the algorithm can be measured in two factors:

4.1.1 Space Complexity

The total amount of computer memory required by an algorithm to complete its execution is called
the space complexity of that algorithm. To calculate the space complexity, we must know the
memory required to store values of the different data types, which depends on the compiler.
For example, assume a C compiler that requires:

• 2 bytes to store an integer value
• 4 bytes to store a floating-point value
• 1 byte to store a character value
• 8 bytes to store a double value

Space complexity can be constant or linear. If an algorithm requires a fixed amount of space
for all input values, then its space complexity is said to be Constant Space Complexity. If
the amount of space required by an algorithm increases as the input value increases, then
its space complexity is said to be Linear Space Complexity.

Example 1

int square(int a)
{
    return a*a;
}

In the above piece of code, 2 bytes of memory are required to store the variable 'a' and another
2 bytes are used for the return value. That means it requires a total of 4 bytes of memory to
complete its execution, and these 4 bytes are fixed for any input value of 'a'. This space
complexity is said to be Constant Space Complexity.

Example 2

int sum(int A[ ], int n)
{
    int sum = 0, i;
    for(i = 0; i < n; i++)
        sum = sum + A[i];
    return sum;
}

In the above piece of code, it requires:

• 'n*2' bytes of memory to store the array parameter 'A[ ]'
• 2 bytes of memory for the integer parameter 'n'
• 4 bytes of memory for the local integer variables 'sum' and 'i' (2 bytes each)
• 2 bytes of memory for the return value.

That means it requires a total of '2n+8' bytes of memory to complete its execution. Here, the
total amount of memory required depends on the value of 'n'. As the value of 'n' increases, the
space required also increases proportionately. This type of space complexity is said to be
Linear Space Complexity.

4.1.2 Time Complexity

The time complexity of an algorithm is the total amount of time required by an algorithm to
complete its execution. Time Complexity can be constant or Linear. If any program requires a
fixed amount of time for all input values, then its time complexity is said to be Constant Time
Complexity. If the amount of time required by an algorithm is increased with the increase of
input value, then that time complexity is said to be Linear Time Complexity.

To calculate the time complexity of an algorithm, we need to define a model machine. Let us
assume a machine with following configuration...

• It is a Single processor machine


• It is a 32 bit Operating System machine
• It performs sequential execution
• It requires 1 unit of time for Arithmetic and Logical operations
• It requires 1 unit of time for Assignment and Return value
• It requires 1 unit of time for Read and Write operations

Now, we calculate the time complexity of following example code by using the above-defined
model machine

Example 1

int sum(int a, int b)
{
    return a+b;
}

In the above sample code, it requires 1 unit of time to calculate a+b and 1 unit of time to return
the value. That means, totally it takes 2 units of time to complete its execution. And it does not
change based on the input values of a and b. That means for all input values, it requires the
same amount of time i.e. 2 units.

Example 2

int sum(int A[], int n)
{
    int sum = 0, i;
    for(i = 0; i < n; i++)
        sum = sum + A[i];
    return sum;
}

For the above code, the time complexity can be calculated as follows (one possible accounting
under the model machine defined above):

Statement                      Cost    Repetitions    Total
int sum = 0, i;                 1          1            1
for(i = 0; i < n; i++)          2        n + 1        2n + 2
sum = sum + A[i];               2          n           2n
return sum;                     1          1            1
                                         Total:       4n + 4

In the above calculation:

• Cost is the amount of computer time required for a single execution of the operation(s) in
each line.
• Repetitions is the number of times the operation(s) in that line are executed.
• Total is the amount of computer time required by each line over the whole execution
(Cost × Repetitions).

So the above code requires '4n+4' units of computer time to complete the task. Here the exact
time is not fixed; it changes based on the value of n. If we increase the value of n, then the
time required also increases linearly. Since it takes '4n+4' units of time in total, this is
Linear Time Complexity.

4.2 Asymptotic Notations

Whenever we want to analyse an algorithm, we need to calculate its complexity. But the
calculated complexity does not give the exact amount of resources required. So instead of
taking the exact amount of resources, we represent the complexity in a general form (notation),
which captures the basic nature of the algorithm. We use that general form (notation) for the
analysis process.

Asymptotic notation of an algorithm is a mathematical representation of its complexity.

In asymptotic notation, when we want to represent the complexity of an algorithm, we use only
the most significant terms in the complexity of that algorithm and ignore least significant terms
in the complexity of that algorithm (Here complexity can be Space Complexity or Time
Complexity).

The time required by an algorithm falls under three cases:

• Worst case: It defines the input for which the algorithm takes the maximum time.
• Average case: It is the time taken for an average (typical) input.
• Best case: It defines the input for which the algorithm takes the minimum time.

The commonly used asymptotic notations for expressing the running time complexity of
an algorithm are given below:

• Big - Oh Notation (O)


• Omega Notation (Ω)
• Theta Notation (θ)

4.2.1 Big - Oh Notation (O)

Big - Oh notation is used to define the upper bound of an algorithm in terms of Time
Complexity. That means Big - Oh notation always indicates the maximum time required by
an algorithm for all input values. That means Big - Oh notation describes the worst case of an
algorithm time complexity.

Big - Oh Notation can be defined as follows:

Consider the function f(n) as the time complexity of an algorithm and g(n) as its most
significant term. If f(n) <= C g(n) for all n >= n0, for some constants C > 0 and n0 >= 1,
then we can represent f(n) as O(g(n)).

f(n) = O(g(n))

Consider the following graph drawn for the values of f(n) and C g(n) for input (n) value on X-
Axis and time required is on Y-Axis

In above graph after a particular input value n0, always C g(n) is greater than f(n) which
indicates the algorithm's upper bound.

Example

Consider the following f(n) and g(n)...

f(n) = 3n + 2
g(n) = n

If we want to represent f(n) as O(g(n)), then we must find constants C > 0 and n0 >= 1 such
that f(n) <= C g(n) for all n >= n0.

f(n) <= C g(n)
⇒ 3n + 2 <= C n

The above condition is TRUE for C = 4 and all n >= 2.

By using Big - Oh notation we can represent the time complexity as follows:

3n + 2 = O(n)

4.2.2 Big - Omega Notation (Ω)

Big - Omega notation is used to define the lower bound of an algorithm in terms of Time
Complexity. That means Big-Omega notation always indicates the minimum time required by
an algorithm for all input values. That means Big-Omega notation describes the best case of
an algorithm time complexity.

Big - Omega Notation can be defined as follows:

Consider the function f(n) as the time complexity of an algorithm and g(n) as its most
significant term. If f(n) >= C g(n) for all n >= n0, for some constants C > 0 and n0 >= 1,
then we can represent f(n) as Ω(g(n)).

f(n) = Ω(g(n))

Consider the following graph drawn for the values of f(n) and C g(n) for input (n) value on X-
Axis and time required is on Y-Axis

In above graph after a particular input value n0, always C g(n) is less than f(n) which indicates
the algorithm's lower bound.

Example

Consider the following f(n) and g(n)...

f(n) = 3n + 2
g(n) = n

If we want to represent f(n) as Ω(g(n)), then we must find constants C > 0 and n0 >= 1 such
that f(n) >= C g(n) for all n >= n0.

f(n) >= C g(n)
⇒ 3n + 2 >= C n

The above condition is TRUE for C = 1 and all n >= 1.

By using Big - Omega notation we can represent the time complexity as follows...

3n + 2 = Ω(n)

4.2.3 Big - Theta Notation (Θ)

Big - Theta notation is used to define the tight bound of an algorithm in terms of Time
Complexity. That means Big - Theta notation bounds the time required by an algorithm from
both above and below, so it describes the exact order of growth of the algorithm's time
complexity.

Big - Theta Notation can be defined as follows:

Consider the function f(n) as the time complexity of an algorithm and g(n) as its most
significant term. If C1 g(n) <= f(n) <= C2 g(n) for all n >= n0, for some constants C1 > 0,
C2 > 0 and n0 >= 1, then we can represent f(n) as Θ(g(n)).

f(n) = Θ(g(n))

Consider the following graph drawn for the values of f(n) and C g(n) for input (n) value on
X-Axis and time required is on Y-Axis

In the above graph, after a particular input value n0, C1 g(n) is always less than f(n) and
C2 g(n) is always greater than f(n), which indicates the algorithm's tight bound.

Example

Consider the following f(n) and g(n)...

f(n) = 3n + 2
g(n) = n

If we want to represent f(n) as Θ(g(n)), then we must find constants C1 > 0, C2 > 0 and
n0 >= 1 such that C1 g(n) <= f(n) <= C2 g(n) for all n >= n0.

C1 g(n) <= f(n) <= C2 g(n)
⇒ C1 n <= 3n + 2 <= C2 n

The above condition is TRUE for C1 = 1, C2 = 4 and all n >= 2.

By using Big - Theta notation we can represent the time complexity as follows...

3n + 2 = Θ(n)

Common Asymptotic Notations

• constant: O(1)
• logarithmic: O(log n)
• linear: O(n)
• n log n: O(n log n)
• quadratic: O(n^2)
• cubic: O(n^3)
• polynomial: n^O(1)
• exponential: 2^O(n)
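
As a rough feel for two of these growth rates, the following sketch counts the basic operations performed by a linear loop and by a quadratic nested loop; the counter variables are only for demonstration.

#include <stdio.h>

int main(void)
{
    int n = 1000;
    long linearOps = 0, quadraticOps = 0;

    // Linear: the loop body executes n times -> O(n).
    for (int i = 0; i < n; i++)
        linearOps++;

    // Quadratic: the inner body executes n * n times -> O(n^2).
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            quadraticOps++;

    printf("n = %d: linear = %ld operations, quadratic = %ld operations\n",
           n, linearOps, quadraticOps);
    return 0;
}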

5. Amortized Analysis

Amortized analysis considers a sequence of operations on a given data structure. It computes
the average cost over the sequence of operations, and thereby guarantees the average
performance of each operation in the worst case for that sequence. Unlike average-case
complexity analysis, there is no involvement of probability in amortized analysis.

5.1 Techniques for Amortized Analysis

There are three main types of amortized analysis:

i. Aggregate analysis
ii. Accounting method
iii. Potential method

Aggregate analysis:

First find total cost of n operations, then divide by n to find amortized cost.

Example

Multipop Stack

• Consider a sequence of n PUSH, POP, and MULTIPOP operations on an initially empty stack.
• MULTIPOPs are just a sequence of POPs, so the analysis only needs to count the number of
PUSH and POP operations, where a POP is either a direct POP or a POP within a MULTIPOP
(see the C sketch after this list).
• Each element pushed can be popped at most once (either in a direct POP or a POP
within a MULTIPOP).
• So the number of POP operations (including the ones inside MULTIPOPs) ≤ the number of
PUSH operations.
• So the total time is at most O(n) (since PUSH and POP are each O(1))
• Hence the average cost of an operation is O(n)/n = O(1)
• We say that the amortized cost of a PUSH, POP, or MULTIPOP in a sequence of n
such operations is O(1)
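
To make the argument concrete, here is a minimal array-based sketch of such a stack in C; the capacity and function names are assumptions for illustration and are not part of the analysis above.

#include <stdio.h>

#define CAPACITY 100

int stack[CAPACITY];
int top = -1;                        // stack is initially empty

// PUSH: O(1) actual cost.
void push(int x)
{
    if (top < CAPACITY - 1)
        stack[++top] = x;
}

// POP: O(1) actual cost.
void pop(void)
{
    if (top >= 0)
        top--;
}

// MULTIPOP(S, k): pops min(|S|, k) elements, each pop costing O(1).
void multipop(int k)
{
    while (top >= 0 && k > 0) {
        pop();
        k--;
    }
}

int main(void)
{
    // A sequence of n operations: each element pushed is popped at most once,
    // so the total work over the whole sequence is O(n).
    for (int i = 0; i < 10; i++)
        push(i);
    multipop(4);
    pop();
    printf("Elements left on the stack: %d\n", top + 1);
    return 0;
}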

Accounting method:

Accounting method involves assigning differing charges to different operations. The amount
of the charge is called the amortized cost of that operation. Amortized cost of an operation can
be more or less than actual cost.

• When amortized cost > actual cost, the difference is saved in specific objects as credits
• The credits can be used by later operations whose amortized cost < actual cost

As a comparison, in aggregate analysis all operations have the same amortized cost: we do not
assign an amortized cost to any individual operation at the beginning; instead, we compute the
total cost and divide it by the number of operations, irrespective of what those operations are.

Example

Multipop Stack

• Actual cost of operations: PUSH = 1, POP = 1, MULTIPOP = min (|S|, k)


• Set amortized cost (charge) for operations as: PUSH = 2, POP = 0, MULTIPOP = 0

Intuition

• A POP cannot happen without a PUSH


• While pushing, pay cost 1 for the actual cost of PUSH, and leave a credit of 1 for a
POP (direct or within MULTIPOP) in case it is popped later
• If it is not popped, the credit simply stays; remember, we only need an upper bound on
the actual cost.

• So a POP later does not need to pay anything; it has already been paid for by the PUSH.
• PUSH is overcharged (charged more than its actual cost of 1), while POP/MULTIPOP are
undercharged (charged less than their actual cost).
• For a sequence of n PUSH, POP, and MULTIPOP operations, it is easy to see that the
conditions hold.
• After any step, the amount of total credit never becomes negative:
If the no. of POPs (direct or in a MULTIPOP) = the no. of PUSHes, the credit is 0.
If the no. of POPs < the no. of PUSHes, the credit is > 0.
• Total amortized cost ≥ total actual cost.
• So the total actual cost is bounded by the total amortized cost = 2n.
• So the average cost per operation in the sequence is O(1).

Potential Method:

The potential method is similar to the accounting method. The difference is that the potential
method stores the prepaid work as potential energy, or simply potential, instead of credit. The
potential is associated with the data structure as a whole rather than with specific objects
within the data structure.

• Initial data structure D0.
• n operations, transforming D0 into D1, …, Dn, with actual costs c1, c2, …, cn.
• A potential function Φ: {Di} → R (real numbers)
➢ Φ(Di) is called the potential of Di.
➢ So the potential of the data structure changes as operations are performed on it.
• Amortized cost ci' of the i-th operation is:
ci' = ci + Φ(Di) – Φ(Di-1)   (actual cost + change in potential)
• Total amortized cost (the intermediate potentials telescope away):
Σ ci' = Σ (ci + Φ(Di) – Φ(Di-1)) = Σ ci + Φ(Dn) – Φ(D0)
• We want the total amortized cost to be an upper bound of the total actual cost.
➢ So we need Φ(Dn) ≥ Φ(D0).
➢ But this has to be true for any n, so we need Φ(Di) ≥ Φ(D0) for any i.
➢ Define Φ(D0) = 0, and so Φ(Di) ≥ 0 for all i.
• If the potential change is positive (i.e., Φ(Di) – Φ(Di-1) > 0), then ci' is an overcharge
(so store the increase as potential).
• Otherwise, it is an undercharge (discharge the potential to pay the extra actual cost).

Example

Multipop Stack

• Assign Potential = number of elements in stack


• So Φ(D0)=0, and Φ(Di) ≥ 0
• Amortized cost of stack operations
➢ PUSH
Potential change = (|S|+1) – |S| = 1
Amortized cost = actual cost + potential change = 1 + 1 = 2

➢ POP
Potential change = (|S| – 1) – |S| = –1
Amortized cost = actual cost + potential change = 1 + (–1) = 0
➢ MULTIPOP(S, k): let k’ = min(|S|, k)
Potential change = – k’
Amortized cost = actual cost + potential change = k’ + (– k’) = 0
• So the amortized cost of each operation is O(1)
• Hence total amortized cost for n operations is O(n)
• So average cost per operation is O(1)
• Total amortized cost is an upper bound of total actual cost

IMPORTANT QUESTIONS

1. Define data structures and explain the types of data structures


2. List various operations that can be performed using data structures
3. What are the various specifications of an algorithm?
4. Design an algorithm for processing student data using structures.
5. What is recursion? Explain the laws of recursion using an example
6. Differentiate between recursion and iteration.
7. Explain the performance analysis of algorithms
8. What is time complexity and space complexity?
9. Consider the piece of code below
void Add(int n, int A[n][n], int B[n][n], int C[n][n])
{
    int i, j;
    for(i = 0; i < n; i++)
    {
        for(j = 0; j < n; j++)
        {
            C[i][j] = A[i][j] + B[i][j];
        }
    }
}
Find the time and space complexity of above mentioned code.
10. Calculate time and space complexity for recursive factorial program
11. What are asymptotic notations? Explain in detail.
12. Explain the three methods of amortized analysis
13. Evaluate the best, worst and average case for the following:
3n² + 2n + 1

