UNIT-1
INTRODUCTION
The word "algorithm" is derived from the name of the Persian mathematician Al-Khwarizmi.
Algorithms play a crucial role in various fields and have many applications.
Some of their key benefits are that they help to automate processes and make them more reliable, faster, and easier to perform.
Feasible: The algorithm must be simple, generic, and practical, so that it can be executed with the available resources. It must not depend on technology that does not yet exist.
Properties of Algorithm:
• It should be deterministic, meaning it gives the same output for the same input every time.
• Every step in the algorithm must be effective, i.e., every step should do some work.
Types of Algorithms:
2. Recursive Algorithm:
3. Backtracking Algorithm:
4. Searching Algorithm:
Searching algorithms are the ones that are used for searching elements or
groups of elements from a particular data structure. They can be of
different types based on their approach or the data structure in which the
element should be found.
5. Sorting Algorithm:
6. Hashing Algorithm:
7. Divide and Conquer Algorithm:
This type of algorithm works in three steps: Divide the problem into smaller subproblems, Solve (conquer) each subproblem, and Combine the sub-solutions into a solution for the whole problem.
8. Greedy Algorithm:
In this type of algorithm, the solution is built part by part. Each next part is chosen on the basis of its immediate benefit: the option that gives the most benefit at that step is selected as the solution for the next part.
9. Dynamic Programming Algorithm:
This type of algorithm reuses already computed solutions to avoid repeatedly calculating the same part of the problem. It divides the problem into smaller overlapping subproblems and solves them.
Advantages of Algorithms:
It is easy to understand.
Disadvantages of Algorithms:
An algorithm can be expressed in three ways:
a. Natural language
b. Pseudocode
c. Flowchart
a) Natural language:
Example: Algorithm to add two numbers:
Step 1: Start.
Step 2: Read the two numbers a and b.
Step 3: Add the above two numbers and store the result in c.
Step 4: Display c and stop.
b) Pseudocode:
• The left arrow "←" is used for assignment, two slashes "//" introduce a comment, and control constructs such as if conditions and for/while loops are written as in a programming language.
Example:
ALGORITHM Sum(a, b)
//Computes the sum of two numbers a and b
c ← a + b
return c
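The pseudocode above translates almost line for line into a programming language. A minimal Python rendering (the function name is mine; the unit itself uses only pseudocode):

```python
def algo_sum(a, b):
    # c <- a + b
    c = a + b
    return c

print(algo_sum(3, 4))  # 7
```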
c) Flowchart:
Once an algorithm has been specified then its correctness must be proved.
An algorithm must yield a required result for every legitimate input in a
finite amount of time.
For example, the correctness of Euclid's algorithm for computing the greatest common divisor stems from the correctness of the equality gcd(m, n) = gcd(n, m mod n).
A common technique for proving correctness is to use mathematical induction
because an algorithm’s iterations provide a natural sequence of steps needed for
such proofs.
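Euclid's algorithm can be sketched directly from that equality (a minimal Python sketch; the loop applies gcd(m, n) = gcd(n, m mod n) until the second argument becomes 0):

```python
def gcd(m, n):
    # Invariant: gcd(m, n) is unchanged by each step, since
    # gcd(m, n) = gcd(n, m mod n).
    while n != 0:
        m, n = n, m % n
    return m

print(gcd(60, 24))  # 12
print(gcd(7, 13))   # 1
```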
For an algorithm, efficiency is the most important concern. In fact, there are two kinds of algorithm efficiency: time efficiency, which indicates how fast the algorithm runs, and space efficiency, which indicates how much extra memory it uses.
Research experience has shown that for most problems, we can achieve much more spectacular progress in speed than in space.
Consider measuring the input size for an algorithm that checks the primality of a positive integer n. Here the input is just one number, and it is this number's magnitude that determines the input size. In such situations, it is preferable to measure size by the number b of bits in n's binary representation: b = ⌊log2 n⌋ + 1.
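A quick check of this formula in Python, using the standard library and the built-in `int.bit_length`:

```python
import math

n = 1000
b = math.floor(math.log2(n)) + 1  # b = floor(log2 n) + 1
print(b)               # 10, since 2**9 <= 1000 < 2**10
print(n.bit_length())  # 10, Python's built-in agrees
```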
Identify the most important operation of the algorithm, called the basic operation (the operation contributing the most to the total running time), and compute the number of times the basic operation is executed.
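As an illustration (a standard example, not taken from this unit's text): when finding the largest element of a list of n items, the basic operation is the comparison, and it is executed exactly n - 1 times.

```python
def find_max(a):
    # Basic operation: the comparison x > biggest.
    comparisons = 0
    biggest = a[0]
    for x in a[1:]:
        comparisons += 1
        if x > biggest:
            biggest = x
    return biggest, comparisons

print(find_max([3, 9, 2, 7]))  # (9, 3): 3 comparisons for n = 4
```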
Orders of Growth
Normally the order of growth is determined for large values of n, for two reasons: the differences between algorithms are insignificant for small inputs, and it is only for large inputs that the order of growth truly characterizes an algorithm's behavior.
Worst-Case Efficiency
Definition: The efficiency of an algorithm for the input of size n for which the algorithm takes the longest time to execute among all possible inputs.
In the worst-case analysis, we calculate the upper limit of the execution time of
an algorithm. It is necessary to know the case which causes the execution of the
maximum number of operations.
For linear search, the worst case occurs when the element to search for is not present in the array. When x is not present, the search() function compares it with all the elements of arr[] one by one. Therefore, the worst-case time complexity of linear search is Θ(n).
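A minimal sketch of linear search in Python (the names search, arr, and x follow the text; the implementation details are mine):

```python
def search(arr, x):
    # Returns the index of x in arr, or -1 if x is absent.
    # Worst case: x is absent, so all n elements are compared.
    for i, value in enumerate(arr):
        if value == x:
            return i
    return -1

arr = [10, 20, 30, 40]
print(search(arr, 30))  # 2  (found at index 2)
print(search(arr, 99))  # -1 (worst case: every element compared)
```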
Best-Case Efficiency
Definition: The efficiency of an algorithm for the input of size n for which the algorithm takes the least time to execute among all possible inputs.
In the best case analysis, we calculate the lower bound of the execution time of
an algorithm. It is necessary to know the case which causes the execution of the
minimum number of operations. In the linear search problem, the best case
occurs when x is present at the first location.
The number of operations in the best case is constant, so the best-case time complexity is Θ(1). Most of the time, we perform worst-case analysis, because the worst case guarantees an upper bound on the execution time of an algorithm, which is useful information.
Average-Case Efficiency
In the average-case analysis, we take all possible inputs and calculate the computation time for each input. We add up all the calculated values and divide the sum by the total number of inputs.
We need to predict the distribution of cases. For the linear search problem, assume that all n + 1 cases (x at each of the n positions, or x absent) are uniformly distributed. So we add the costs of all cases and divide the sum by (n + 1).
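Under that uniform assumption, the average number of comparisons can be computed directly (a quick Python check; finding x at 1-based position i costs i comparisons, and the not-found case costs n):

```python
n = 4
# n cases where x is at position i (cost i), plus 1 case where x is absent (cost n).
costs = [i for i in range(1, n + 1)] + [n]
average = sum(costs) / (n + 1)
print(average)  # 2.8 for n = 4, i.e., (1 + 2 + 3 + 4 + 4) / 5
```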
Time and space complexities of common sorting algorithms:

Algorithm        Data structure   Best case     Average case   Worst case    Space
Merge sort       Array            O(n log n)    O(n log n)     O(n log n)    O(n)
Heap sort        Array            O(n log n)    O(n log n)     O(n log n)    O(1)
Smooth sort      Array            O(n)          O(n log n)     O(n log n)    O(1)
Bubble sort      Array            O(n)          O(n²)          O(n²)         O(1)
Insertion sort   Array            O(n)          O(n²)          O(n²)         O(1)
Selection sort   Array            O(n²)         O(n²)          O(n²)         O(1)
Space complexity:
Data Space: The variables and constants that we use in the algorithm will
also require some space which is referred to as data space.
To calculate the space complexity we need to have an idea about the value of
the memory of each data type. This value will vary from one operating system
to another. However, the method used to calculate the space complexity remains
the same.
Let’s have a look into a few examples to understand how to calculate the space
complexity.
Example 1:

int add(int a, int b, int c)
{
    int sum;
    sum = a + b + c;
    return sum;
}

Here, in this example, we have 4 variables: a, b, c, and sum. All of these variables are of type int, so each requires 4 bytes of memory, for a total of 16 bytes of data space. Since this amount does not grow with the input, the space complexity is constant.
Time complexity:
To represent the time complexity of an algorithm, we most often use the big-O notation, which is an asymptotic notation.
Example:

for (i = 0; i < n; i++)
{
    for (j = 0; j < n; j++)
    {
        statement;
    }
}

Here, in this example, we have a loop inside another loop, so the statement executes n × n = n² times. The running time is therefore proportional to n², and the complexity is of quadratic type: O(n²).