
ANALYSIS AND DESIGN OF ALGORITHMS

Course code: 21BCA5C01


Module 1
Semester: 5th semester
Role of Algorithms in Computing

Prepared by
Dr.A.Kannagi
Introduction
What is an Algorithm?
An algorithm is a step-by-step procedure for solving logical and
mathematical problems.
Notion of Algorithms

Need for studying algorithms:


 The study of algorithms is the cornerstone of computer science. It can be
recognized as the core of computer science; computer programs would not
exist without algorithms.
 Another reason for studying algorithms is that knowing a standard set of
important algorithms furthers our analytical skills and helps us develop
new algorithms for the applications we need.
The essential characteristics of an algorithm

 Every step of an algorithm should perform a single task.


 Ambiguity (Confusion) should not be included at any stage in an algorithm.
 An algorithm should involve a finite number of steps to arrive at a solution.
 Simple statement and structures should be used in the development of the
algorithm.
 Every algorithm should lead to a unique solution of the problem.
 An algorithm should have the capability to handle unexpected situations that
may arise during the solution of a problem (for example, division by zero; a
minimal guard for this case is sketched below).
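To illustrate the last point, here is a minimal C sketch (the function name safe_divide and its return convention are purely illustrative) of an algorithm step that handles the unexpected situation of division by zero instead of failing.

#include <stdio.h>
#include <stdbool.h>

/* Illustrative helper: stores a/b in *result and returns true when b is nonzero;
   returns false instead of attempting a division by zero. */
bool safe_divide(double a, double b, double *result)
{
    if (b == 0.0)
        return false;            /* the unexpected situation is handled explicitly */
    *result = a / b;
    return true;
}

int main(void)
{
    double q;
    if (safe_divide(10.0, 0.0, &q))
        printf("quotient = %f\n", q);
    else
        printf("division by zero avoided\n");
    return 0;
}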
Requirements of an algorithm

An algorithm is a finite set of instructions that, if followed, accomplishes a
particular task. All algorithms must satisfy the following criteria (a small
example follows the list):

 Input. Zero or more quantities are externally supplied.


 Output. At least one quantity is produced.
 Definiteness. Each instruction is clear and unambiguous.
 Finiteness. If we trace out the instructions of an algorithm, then for all cases,
the algorithm terminates after a finite number of steps.
 Effectiveness. Every instruction must be very basic so that it can be carried
out, in principle, by a person using only pencil and paper. It is not enough that
each operation be definite as in criterion 3; it also must be feasible.
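To make the five criteria concrete, the following is a small C sketch (the function name find_max is illustrative) that finds the largest element of an array; the comments point out how each criterion is met.

#include <stdio.h>

/* Input: an array a of n >= 1 integers (criterion 1: input)   */
/* Output: the largest element of a (criterion 2: output)      */
int find_max(const int a[], int n)
{
    int max = a[0];                  /* every step is clear (definiteness) */
    for (int i = 1; i < n; i++) {    /* the loop runs exactly n-1 times (finiteness) */
        if (a[i] > max)
            max = a[i];              /* a basic, feasible operation (effectiveness) */
    }
    return max;
}

int main(void)
{
    int a[] = {3, 7, 2, 9, 4};
    printf("max = %d\n", find_max(a, 5));   /* prints max = 9 */
    return 0;
}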
Advantages of Algorithm

 It is a step-by-step representation of a solution to a given problem, which is


very easy to understand.
 It has a definite procedure, which can be executed within a set period of
time.
 It is easy to first develop an algorithm, and then convert it into a flowchart
and then into a computer program.
 It is independent of programming language.
 It is easy to debug as every step has its own logical sequence.

Disadvantages of Algorithm
 It is time-consuming and cumbersome, as an algorithm is developed first, then
converted into a flowchart, and then into a computer program.
Fundamentals of Algorithmic Problem Solving
 Understanding the Problem
 Decision making
 Methods of Specifying an Algorithm
 Proving an Algorithm’s Correctness
 Analyzing an Algorithm
 Coding an Algorithm
Role of algorithms in computing

 Sorting
 Searching
 String processing
 Graph problems
 Combinatorial problems
 Geometric problems
 Numerical problems
Algorithms as a technology
 The Internet
 Bio informatics
 Electronic commerce
 The approach of linear programming is one such widely used technique. For example:
 In manufacturing and other commercial enterprises, scarce resources need to be allocated in the
most beneficial way.
 Or an institution may want to determine where to spend money on advertising in order to maximize the
chances of the institution growing.
 Shortest-path algorithms also have extensive use:
 A transportation firm, such as a trucking or railroad company, may have a financial interest in finding
shortest paths through a road or rail network, because taking shortest paths results in lower labour or fuel
costs.
 Or a routing node on the Internet may need to find the shortest path through the network in order to route a
message quickly.

 Even an application that does not require algorithmic content at the application level relies heavily on
algorithms, since the application depends on hardware, GUIs, networking, or object orientation, and all of
these make extensive use of algorithms.
Fundamentals of the Analysis of Algorithm Efficiency

 What is the goal of analysis of algorithms?


 To compare algorithms mainly in terms of running time but also in terms
of other factors (e.g., memory requirements, programmer's effort etc.)

 What do we mean by running time analysis?


 Determine how running time increases as the size of the problem
increases.
Types of Analysis
 Worst case
 Provides an upper bound on running time
 An absolute guarantee that the algorithm would not run longer, no matter what
the inputs are
 Best case
 Provides a lower bound on running time
 Input is the one for which the algorithm runs the fastest

Lower Bound ≤ Running Time ≤ Upper Bound

 Average case
 Provides a prediction about the running time
 Assumes that the input is random (a linear-search sketch illustrating all three
cases follows)
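As a concrete illustration of the three cases, consider searching an array of n elements for a key with the sketch below (the function name linear_search is illustrative): the best case costs one comparison, the worst case costs n comparisons, and, assuming the key is equally likely to be found at any position, the average case costs roughly n/2 comparisons.

#include <stdio.h>

/* Returns the index of key in a[0..n-1], or -1 if it is absent. */
int linear_search(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)             /* basic operation: key comparison */
            return i;
    return -1;
}

int main(void)
{
    int a[] = {4, 8, 15, 16, 23, 42};
    printf("%d\n", linear_search(a, 6, 4));    /* best case: 1 comparison   */
    printf("%d\n", linear_search(a, 6, 99));   /* worst case: 6 comparisons */
    return 0;
}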
How do we compare algorithms?
We need to define a number of objective
measures.

(1) Compare execution times?


Not good: times are specific to a particular computer !!

(2) Count the number of statements executed?


Not good: the number of statements varies with the programming
language as well as the style of the individual programmer.
Complexity of an algorithm is analyzed from two
perspectives: time and space

 Time Complexity
It’s a function describing the amount of time required to run an algorithm in
terms of the size of the input. "Time" can mean the number of memory accesses
performed, the number of comparisons between integers, the number of times
some inner loop is executed, or some other natural unit related to the amount of
real time the algorithm will take.

 Space Complexity
It’s a function describing the amount of memory an algorithm takes in terms of
the size of input to the algorithm. We often speak of "extra" memory needed, not
counting the memory needed to store the input itself. Again, we use natural (but
fixed-length) units to measure this.
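For example, summing the n elements of an array (a minimal sketch, with the illustrative name sum_array) takes time proportional to n, while the extra memory it needs, beyond the input itself, is a single accumulator, i.e. constant space.

#include <stdio.h>

/* Time: proportional to n (one addition per element).
   Extra space: a single accumulator, independent of n. */
long sum_array(const int a[], int n)
{
    long sum = 0;
    for (int i = 0; i < n; i++)
        sum += a[i];
    return sum;
}

int main(void)
{
    int a[] = {1, 2, 3, 4, 5};
    printf("%ld\n", sum_array(a, 5));   /* prints 15 */
    return 0;
}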
The algorithm analysis framework consists of the
following:

 Measuring an Input’s Size


 Units for Measuring Running Time
 Orders of Growth
 Worst-Case, Best-Case, and Average-Case Efficiencies
Input Size

 Input size (number of elements in the input)


 size of an array

 polynomial degree

 # of elements in a matrix

 # of bits in the binary representation of the input

 vertices and edges in a graph


Units for Measuring Running Time

 Express running time as a function of the input size n (i.e., f(n)).


 Compare different functions corresponding to running times.
 Such an analysis is independent of machine time, programming style, etc.
 To count the number of times each of the algorithm’s operations is
executed.
Example

 Associate a "cost" with each statement.


Find the "total cost" by finding the total number of times each statement is
executed.

Algorithm 1                         Cost
arr[0] = 0;                         c1
arr[1] = 0;                         c1
arr[2] = 0;                         c1
...
arr[N-1] = 0;                       c1
-----------
c1 + c1 + ... + c1 = c1 x N

Algorithm 2                         Cost
for(i=0; i<N; i++)                  c2
    arr[i] = 0;                     c1
-------------
(N+1) x c2 + N x c1 = (c2 + c1) x N + c2
Example

 Algorithm 3                        Cost
sum = 0;                            c1
for(i=0; i<N; i++)                  c2
    for(j=0; j<N; j++)              c2
        sum += arr[i][j];           c3
------------
c1 + c2 x (N+1) + c2 x N x (N+1) + c3 x N²
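These counts can also be checked empirically. The sketch below (the counter variables are added purely for illustration) instruments Algorithm 2 and Algorithm 3 and prints how many times each statement runs for a given N.

#include <stdio.h>
#define N 5

int main(void)
{
    int arr[N], mat[N][N] = {{0}}, sum = 0;

    /* Algorithm 2: the loop test runs N+1 times, the body N times. */
    long body2 = 0, test2 = 0;
    for (int i = 0; (test2++, i < N); i++) {
        arr[i] = 0;
        body2++;
    }
    printf("Algorithm 2: tests=%ld (N+1=%d), assignments=%ld (N=%d)\n",
           test2, N + 1, body2, N);

    /* Algorithm 3: outer test N+1 times, inner test N*(N+1) times,
       innermost statement N*N times. */
    long outer = 0, inner = 0, body3 = 0;
    for (int i = 0; (outer++, i < N); i++)
        for (int j = 0; (inner++, j < N); j++) {
            sum += mat[i][j];
            body3++;
        }
    printf("Algorithm 3: outer=%ld inner=%ld body=%ld (sum=%d)\n",
           outer, inner, body3, sum);
    return 0;
}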
Orders of Growth
 A difference in running times on small inputs is not what really distinguishes
efficient algorithms from inefficient ones.
 Consider the example of buying elephants and goldfish:
Cost: cost_of_elephants + cost_of_goldfish
Cost ~ cost_of_elephants (approximation)
 The low-order terms in a function are relatively insignificant for large n
n⁴ + 100n² + 10n + 50 ~ n⁴

i.e., we say that n⁴ + 100n² + 10n + 50 and n⁴ have the same rate of growth
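A quick numerical check (a minimal sketch) shows how fast the low-order terms fade: the ratio of n⁴ + 100n² + 10n + 50 to n⁴ approaches 1 as n grows.

#include <stdio.h>

int main(void)
{
    for (long n = 10; n <= 100000; n *= 10) {
        double full = (double)n*n*n*n + 100.0*n*n + 10.0*n + 50.0;
        double lead = (double)n*n*n*n;
        printf("n=%-7ld ratio=%.6f\n", n, full / lead);   /* tends to 1 */
    }
    return 0;
}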
Worst-Case, Best-Case, and Average-Case
Efficiencies

 Analysis of an algorithm is the process of analyzing the problem-solving
capability of the algorithm in terms of the time and space required (the amount of
memory needed for storage during implementation). However, the main concern of
the analysis of algorithms is the required time, or performance. Generally, we
perform the following types of analysis −
 Worst-case − the maximum number of steps taken on any instance of size n.
 Best-case − the minimum number of steps taken on any instance of size n.
 Average case − the average number of steps taken on any instance of size n.
Asymptotic notation

 To compare two algorithms with running times f(n) and g(n), we need a rough
measure that characterizes how fast each function grows.

 Let t(n) and g(n) be any nonnegative functions defined on the set of natural
numbers. The algorithm’s running time t(n) is usually indicated by its basic
operation count C(n), and g(n) is some simple function to compare the count
with.

 Hint: use rate of growth


 Compare functions in the limit, that is, asymptotically!
(i.e., for large values of n)
Following asymptotic notations are used to calculate the running time complexity
of an algorithm.
 O − Big Oh
 Ω − Big omega
 θ − Big theta
Big Oh (O): Asymptotic Upper Bound
 ‘O’ (Big Oh) is the most commonly used notation. A function f(n) is said to be
of the order of g(n), written O(g(n)), if there exist a positive integer n₀ and a
positive constant c such that
 f(n) ≤ c·g(n) for all n ≥ n₀
 Hence, function g(n) is an upper bound for function f(n), as g(n) grows at least
as fast as f(n).

• We say fA(n) = 30n + 8 is order n, or O(n).
• It is, at most, roughly proportional to n.
• fB(n) = n² + 1 is order n², or O(n²).
• It is, at most, roughly proportional to n².
• In general, any O(n²) function is faster-growing than any O(n) function.
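A short numerical check (a minimal sketch) makes this concrete: fB(n) = n² + 1 eventually dwarfs fA(n) = 30n + 8, even though fA is the larger of the two for small n.

#include <stdio.h>

int main(void)
{
    for (long n = 1; n <= 1000; n *= 10) {
        long fA = 30 * n + 8;        /* O(n)   */
        long fB = n * n + 1;         /* O(n^2) */
        printf("n=%-5ld fA=%-7ld fB=%ld\n", n, fA, fB);
    }
    return 0;
}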
Algorithm 1                         Cost
arr[0] = 0;                         c1
arr[1] = 0;                         c1
arr[2] = 0;                         c1
...
arr[N-1] = 0;                       c1
-----------
c1 + c1 + ... + c1 = c1 x N

Algorithm 2                         Cost
for(i=0; i<N; i++)                  c2
    arr[i] = 0;                     c1
-------------
(N+1) x c2 + N x c1 = (c2 + c1) x N + c2

 Both algorithms are of the same order: O(N)

Algorithm 3                         Cost
sum = 0;                            c1
for(i=0; i<N; i++)                  c2
    for(j=0; j<N; j++)              c2
        sum += arr[i][j];           c3
------------
c1 + c2 x (N+1) + c2 x N x (N+1) + c3 x N² = O(N²)
Big-O Visualization

O(g(n)) is the set of functions with smaller or same order of growth as g(n).
Examples

2n² = O(n³): 2n² ≤ cn³ ⇒ 2 ≤ cn ⇒ c = 1 and n₀ = 2

n² = O(n²): n² ≤ cn² ⇒ c ≥ 1 ⇒ c = 1 and n₀ = 1

1000n² + 1000n = O(n²): 1000n² + 1000n ≤ 1000n² + n² = 1001n² (for n ≥ 1000) ⇒ c = 1001 and n₀ = 1000

n = O(n²): n ≤ cn² ⇒ cn ≥ 1 ⇒ c = 1 and n₀ = 1
Ω - notation

Ω(g(n)) is the set of functions with larger or same order of growth as g(n).
Formally, f(n) ∈ Ω(g(n)) if there exist a positive constant c and a positive
integer n₀ such that f(n) ≥ c·g(n) for all n ≥ n₀.
Examples
 5n2 = (n)
 c, n0 such that: 0  cn  5n
2 cn  5n2  c = 1 and n0 = 1

 100n + 5 ≠ (n2)
 c, n0 such that: 0  cn2  100n + 5
100n + 5  100n + 5n ( n  1) = 105n
cn2  105n
Since n is positive  cn – 105  0  n  105/c
 contradiction: n cannot be smaller than a constant

n = (2n), n3 = (n2), n = (logn)


Θ-notation

Θ(g(n)) is the set of functions with the same order of growth as g(n).
Formally, f(n) ∈ Θ(g(n)) if there exist positive constants c₁, c₂ and a positive
integer n₀ such that c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀.
General Plan for Analyzing the Time Efficiency of
Recursive Algorithms

 Decide on a parameter (or parameters) indicating an input’s size.


 Identify the algorithm’s basic operation.
 Check whether the number of times the basic operation is executed can vary
on different inputs of the same size; if it can, the worst-case, average-case,
and best-case efficiencies must be investigated separately.
 Set up a recurrence relation, with an appropriate initial condition, for the
number of times the basic operation is executed.
 Solve the recurrence or, at least, ascertain the order of growth of its solution.
EXAMPLE 1: Compute the factorial function F(n) = n! for an arbitrary non-negative integer
n. Since n! = 1 · . . . · (n − 1) · n = (n − 1)! · n for n ≥ 1, and 0! = 1 by definition, we can
compute F(n) = F(n − 1) · n with the following recursive algorithm.

ALGORITHM Factorial(n)
//Computes n! recursively
//Input: A nonnegative integer n
//Output: The value of n!
if n = 0 return 1
else return Factorial(n − 1) * n

Algorithm analysis

• For simplicity, we consider n itself as an indicator of this algorithm’s input size.
The basic operation of the algorithm is multiplication, whose number of executions we
denote M(n), since the function F(n) is computed according to the formula F(n) = F(n − 1) · n
for n > 0.
• The number of multiplications M(n) needed to compute it must satisfy the equality

M(n) = M(n − 1) + 1 for n > 0, with M(0) = 0,

• since M(n − 1) multiplications are spent to compute F(n − 1), and one more multiplication is needed to
multiply the result by n. Solving this recurrence gives M(n) = n.
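A minimal C sketch of this algorithm is given below (the global counter mult_count is added purely for illustration); running it confirms that M(n) = n.

#include <stdio.h>

static long mult_count = 0;    /* counts the basic operation (multiplication) */

long factorial(int n)
{
    if (n == 0)
        return 1;
    mult_count++;                       /* one multiplication per recursive step */
    return factorial(n - 1) * n;
}

int main(void)
{
    int n = 6;
    long f = factorial(n);
    printf("%d! = %ld, multiplications = %ld\n", n, f, mult_count);
    /* prints: 6! = 720, multiplications = 6, i.e. M(n) = n */
    return 0;
}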
Algorithm design

 The important aspects of algorithm design include creating an efficient
algorithm that solves a problem using minimum time and space.
 To solve a problem, different approaches can be followed. Some of them can
be efficient with respect to time consumption, whereas other approaches
may be memory efficient.
 However, one has to keep in mind that time consumption and memory usage
often cannot both be optimized simultaneously. If we require an algorithm to run
in less time, we may have to invest in more memory, and if we require an
algorithm to run with less memory, we may need more time (a classic instance of
this trade-off is sketched below).
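As a hedged illustration of this trade-off, consider computing Fibonacci numbers (a textbook example chosen here for illustration, not taken from the slides above): the plain recursive version stores almost nothing between calls but takes exponential time, while keeping a table of already-computed values costs memory proportional to n but only linear time.

#include <stdio.h>

/* Plain recursion: no table kept, but roughly 2^n time. */
long fib_slow(int n)
{
    return (n < 2) ? n : fib_slow(n - 1) + fib_slow(n - 2);
}

/* Table of previously computed values: O(n) extra memory, O(n) time. */
long fib_fast(int n)
{
    long table[100] = {0, 1};          /* assumes n < 100 for this sketch */
    for (int i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2];
    return (n < 2) ? n : table[n];
}

int main(void)
{
    printf("fib_slow(30) = %ld\n", fib_slow(30));   /* millions of recursive calls */
    printf("fib_fast(30) = %ld\n", fib_fast(30));   /* 29 additions */
    return 0;
}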
Questions??
