
Module 1

PROGRAMMING METHODOLOGIES

 Programming methodologies deal with different methods of designing programs.

 Data is the basic entity or fact that is used in a calculation or manipulation process.

 The organization or structuring of data has an impact on the efficiency of the program.
Data structure
 A data structure is the structural representation of logical relationships between elements of data.

 The selection of a particular data structure helps the programmer design more efficient programs.

 Algorithm + Data Structure = Program

 A complex problem usually cannot be divided into and programmed as a set of modules unless its solution is structured or organized.
Data structure
 The representation of a particular data structure in the memory of a computer is called a storage structure.

 Data structure = Organized data + Operations
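
As an illustration of "organized data + operations", here is a minimal C sketch (the names Stack, push and pop are illustrative, not from the slides; bounds checks are omitted for brevity):

#include <stdio.h>

#define MAX 100

/* Organized data: an array together with a top-of-stack index */
typedef struct {
    int items[MAX];
    int top;
} Stack;

/* Operations defined on that data */
void push(Stack *s, int x) { s->items[++s->top] = x; }
int  pop(Stack *s)         { return s->items[s->top--]; }

int main(void) {
    Stack s = { .top = -1 };   /* empty stack */
    push(&s, 10);
    push(&s, 20);
    printf("%d\n", pop(&s));   /* prints 20 */
    return 0;
}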


DS for good Algorithm
 An algorithm is a step-by-step solution.

 For solving a problem:
1) We have to define the problem.
2) Design an algorithm to solve that problem.

 Optimize the algorithm before delivery.

 Optimization of a program depends on the algorithm design.

 For a larger program, each part of the program is defined before programming; only then can we refine it.
Algorithm
 The refinement process involved when a problem is converted into a program is called the STEPWISE REFINEMENT method.

 Two approaches:
 Top-down approach
 Bottom-up approach
Stepwise refinement
 1. In the first stage, modeling, we try to represent the problem using an appropriate mathematical model such as a graph, tree, etc.

 2. At the next stage, the algorithm is written in a pseudo-language (or formal algorithm), that is, a mixture of programming language constructs and less formal English statements.
 The operations to be performed on the various types of data become fixed.

 3. In the final stage we choose an implementation for each abstract data type and write the procedures for the various operations on that type.
MODULAR PROGRAMMING
 The focus is entirely on writing code (functions).

 Any code may access the contents of any data structure passed to it.

 Two methods used for modular programming are known as top-down and bottom-up.

 Focus is on the end result.

 If a program has been written in modular form, it is easier to detect the source of an error and to test it in isolation than if the program were written as one function.

 If an error is discovered after the program supposedly has been fully tested, then the modules concerned can be isolated and retested by themselves.
TOP-DOWN ALGORITHM
DESIGN
 The principles of top-down design dictate that a program should be divided into a main module and its related modules.

 Each module should also be divided into sub-modules.

 The division of modules proceeds until a module consists only of elementary operations and cannot be further subdivided.
TOP-DOWN
 Top-down algorithm design is a technique for organizing and coding programs in which
 a hierarchy of modules is used, breaking the specification down into simpler and simpler pieces,
 each module having a single entry and a single exit point, and
 control is passed downward through the structure without unconditional branches to higher levels of the structure.
BOTTOM-UP ALGORITHM DESIGN
 Bottom-up algorithm design is the opposite of top-down design.

 It refers to a style of programming where an application is constructed starting with existing primitives of the programming language, gradually building more and more complicated features, until all of the application has been written.
STRUCTURED PROGRAMMING
 Structured programming is a programming paradigm aimed at improving the clarity, quality, and development time of a computer program by making extensive use of the structured control flow constructs of
 selection (if/then/else),
 repetition (while and for),
 block structures, and
 subroutines (see the sketch below).
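
A minimal C sketch (function and variable names are illustrative) using all four constructs:

#include <stdio.h>

/* Subroutine: returns the sum of the first n natural numbers */
int sum_to(int n) {
    int total = 0;
    for (int i = 1; i <= n; i++) {   /* repetition */
        total += i;                  /* block structure */
    }
    return total;
}

int main(void) {
    int n = 10;
    if (n > 0)                       /* selection */
        printf("sum = %d\n", sum_to(n));
    else
        printf("n must be positive\n");
    return 0;
}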
ANALYSIS OF ALGORITHM
 After designing an algorithm, it has to be checked for its correctness.

 The algorithm can be analyzed by tracing all step-by-step instructions and reading the algorithm for logical correctness.

 There may be more than one algorithm to solve a problem. The choice of a particular algorithm depends on the following performance analysis and measurements:

 1. Space complexity
 2. Time complexity
SPACE COMPLEXITY
 The space complexity of an algorithm or program is the amount of memory it needs to run to completion.

 Reasons for studying space complexity are:

1. If the program is to run on a multi-user system, it may be required to specify the amount of memory to be allocated to the program.
2. We may be interested to know in advance whether sufficient memory is available to run the program.
3. There may be several possible solutions with different space requirements.
4. It can be used to estimate the size of the largest problem that a program can solve.
Space Complexity
 The space needed by a program consists of the following components.
 1. Fixed space requirements:
 Space that does not depend on the input and output.
 Includes instruction space, space for simple variables, and fixed-size structured variables.
 2. Variable space requirements:
 Space needed by variables whose size depends on the instance of the problem being solved.
 When a function uses recursion: environment stack space.
 When using a dynamic array, e.g. ptr = (int*) malloc(100 * sizeof(int));
 The variable space for instance I is denoted by Sp(I). (A short sketch follows.)
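
A small C sketch contrasting the two kinds of space (names and sizes are illustrative):

#include <stdio.h>
#include <stdlib.h>

void example(int n) {
    int count = 0;                              /* fixed space: one int, independent of n */
    int *ptr = (int*) malloc(n * sizeof(int));  /* variable space: grows with the instance n */
    if (ptr == NULL)
        return;
    for (int i = 0; i < n; i++) {
        ptr[i] = i;
        count++;
    }
    printf("stored %d values\n", count);
    free(ptr);                                  /* release the variable space */
}

int main(void) {
    example(100);
    return 0;
}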
Space Complexity
 The total space requirement S(P) of any program is
 S(P) = c + Sp(I)

 c: fixed space
 Sp(I): variable space
Space Complexity
float sum(float a, float b)
{
    return a + b;
}

The function accepts only two simple variables, so it needs only fixed space.

Ssum(I) = 0
Space Complexity
float fact(int n)
{
    if (n == 0)
        return 1;
    else
        return n * fact(n - 1);
}

Here additional space is required for each recursive call:

for the parameter n, 2 bytes;
for storing the return address, 4 bytes;
6 bytes in total per call, so the variable space grows linearly with the depth of recursion.
TIME COMPLEXITY
 The time complexity of an algorithm or a program is the amount of time it needs to run to completion.

 T(P) is the sum of compile time and run time.

 Compile time is fixed; it does not depend on the instance.

 The exact time will depend on
 the implementation of the algorithm,
 the programming language,
 the optimizing capabilities of the compiler used,
 the CPU speed,
 other hardware characteristics/specifications, and so on.
TIME COMPLEXITY
 To measure the time complexity accurately, we would have to count all sorts of operations performed in an algorithm.

 By analyzing an algorithm, it is hard to come out with an exact time required.

 To find the exact time complexity, we would need to know the exact instructions executed by the hardware and the time required for each instruction. But it is rarely worth the effort.
TIME COMPLEXITY
 Our intention is to estimate the execution time of an algorithm irrespective of the computer on which it will be used.

 Here, the more practical method is to identify the key operations and count how many such operations are performed until the program completes its execution.

 A key operation in an algorithm is an operation that takes maximum time among all possible operations in the algorithm.

 The time complexity can then be expressed as a function of the number of key operations performed.
TIME COMPLEXITY
 To identify the time complexity we need to count the number of program steps in a program.

 The execution times of the statements a = 2 and a = b*y - p/j + i*r are different, but we count each as only one step.

 The only requirement is that the time required to execute each statement counted as one step be independent of the instance characteristics.

 Instead of measuring the actual time required to execute each statement in the code, we consider how many times each statement executes.
Frequency count method
float sum(float list[], int n)       cost   count   total
{                                    x      x       x
    float sum = 0;                   c1     1       c1*1
    int i;                           x      x       x
    for (i = 0; i < n; i++)          c3     n+1     c3*(n+1)
        sum += list[i];              c4     n       c4*n
    return sum;                      c5     1       c5*1
}

Total steps = 1 + (n+1) + n + 1 = 2n + 3
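
A runnable version of the function from the table (the driver values are illustrative):

#include <stdio.h>

/* Sums the n elements of list; the loop test runs n+1 times and the
   loop body n times, giving the 2n + 3 steps counted above. */
float sum(float list[], int n) {
    float sum = 0;
    int i;
    for (i = 0; i < n; i++)
        sum += list[i];
    return sum;
}

int main(void) {
    float list[] = {1.0f, 2.0f, 3.0f, 4.0f};
    printf("%f\n", sum(list, 4));   /* prints 10.000000 */
    return 0;
}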


void transpose (int **a, int n) cost count total
{ x x x
for (int i=0; i<n; i++) c1 n+1 c1*n+1
for(int j=i+1; j<n; j++) c3 n(n+1)/2 c3*n(n+1)/2
swap(a[i][j], a[j][i]) c4 n c4*n

If i=0 j=1 to n-1  n-1+1  n


i=1 j=2 to n-1 n-2+1 n-1

i=n-1 j=n-1+1
j=n to n-1  only the false step 1

1+2+3+…..n-1 + n sum of n natural numbers n(n+1)/2


void transpose (int **a, int n) cost count total
{ x x x
for (int i=0; i<n; i++) c1 n+1 c1*n+1
for(int j=i+1; j<n;j++) c2 n(n+1)/2 c2*n(n+1)/2
swap(a[i][j], a[j][i]) c3 n(n-1)/2 c3*n(n-1)/2
n2+n+1
SWAP
If i=0 j=1 to n -1  n-1
i=1 j=2 to n-1  n-2

i=n-2 j=n-2+1
j=n-1 to n-1  1 times
i=n-1 j=n to n-1  0 times
0+1+2+……n-1 sum of first n-1 number
Put n-1 in equation (n-1)(n)/2
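
The swap above is pseudocode; a runnable C version of the in-place transpose (the 3x3 matrix is illustrative):

#include <stdio.h>

#define N 3

/* In-place transpose of an n x n matrix: each pair (i, j) with j > i
   is swapped exactly once, giving n(n-1)/2 swaps in total. */
void transpose(int a[N][N], int n) {
    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
            int tmp = a[i][j];
            a[i][j] = a[j][i];
            a[j][i] = tmp;
        }
    }
}

int main(void) {
    int a[N][N] = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} };
    transpose(a, N);
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            printf("%d ", a[i][j]);
        printf("\n");
    }
    return 0;
}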
Asymptotic analysis
 Asymptotic analysis is the big idea that handles these issues in analyzing algorithms.

 In asymptotic analysis, we evaluate the performance of an algorithm in terms of input size (we don't measure the actual running time).

 We calculate how the time (or space) taken by an algorithm increases with the input size.
Three cases in analysis
 When we analyze an algorithm, the result depends on the input data;
 there are three cases:
1. Best case
2. Average case
3. Worst case
 In the best case, the amount of time a program might be expected to take on the best possible input data.
 In the average case, the amount of time a program might be expected to take on typical (or average) input data.
 In the worst case, the amount of time a program would take on the worst possible input configuration.
Three cases
 Example: linear search

The list:
8, 2, 1, 5, 3, 9, 31, 12, 7, 18

Searching for 9:
total comparisons required is 6.

Searching for 10:
total comparisons required is 10; not found.

Searching for 8:
total comparisons required is 1.
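
A runnable C sketch of this linear search (the comparison counter is added only to reproduce the counts above):

#include <stdio.h>

/* Returns the index of key in a[0..n-1], or -1 if absent;
   *comparisons receives the number of key comparisons made. */
int linear_search(const int a[], int n, int key, int *comparisons) {
    *comparisons = 0;
    for (int i = 0; i < n; i++) {
        (*comparisons)++;
        if (a[i] == key)
            return i;
    }
    return -1;
}

int main(void) {
    int list[] = {8, 2, 1, 5, 3, 9, 31, 12, 7, 18};
    int c;
    linear_search(list, 10, 9, &c);
    printf("searching 9:  %d comparisons\n", c);   /* 6 */
    linear_search(list, 10, 10, &c);
    printf("searching 10: %d comparisons\n", c);   /* 10, not found */
    linear_search(list, 10, 8, &c);
    printf("searching 8:  %d comparisons\n", c);   /* 1 */
    return 0;
}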
Best case of linear search
 If the element being searched for is present as the first element.

 The time required is constant time, O(1).

 B(n) = O(1)
Worst case of linear search
 If the element being searched for is present as the last element (or is absent).

 Time required is O(n).

 W(n) = O(n)
Average case of linear search
 It is defined as the total time required over all possible cases divided by the total number of cases.

 Very difficult to find in most cases.

 Average time = (1 + 2 + 3 + ... + n) / n

 = n(n+1) / (2n)

 Average: A(n) = (n+1)/2

Asymptotic Notations
 Asymptotic notations are languages that allow us to analyze an algorithm's running time by identifying its behavior as the input size for the algorithm increases.
 This is also known as an algorithm's growth rate.

 When it comes to analysing the complexity of any algorithm in terms of time and space, we can never provide an exact number to define the time and space required by the algorithm; instead we express it using some standard notations, known as asymptotic notations.
 When we analyse any algorithm, we generally get a formula to represent the amount of time required for execution, or the time required by the computer to run the lines of code of the algorithm.

 Suppose some algorithm has a time complexity of T(n) = n^2 + 3n + 4, which is a quadratic equation.

 For large values of n, the 3n + 4 part becomes insignificant compared to the n^2 part.

 For n = 1000, n^2 will be 1,000,000 while 3n + 4 will be 3004.

 Also, when we compare the execution times of two algorithms, the constant coefficients of higher-order terms are also neglected.
What is Asymptotic Behaviour
 The word asymptotic means approaching a value or curve arbitrarily closely.

 In the case of asymptotic notations, we ignore the constant factors and insignificant parts of an expression, to devise a better way of representing the complexities of algorithms in a single coefficient, so that comparison between algorithms can be done easily.
Example
 Expression 1: 20n^2 + 3n - 4
 Expression 2: n^3 + 100n - 2

 As per asymptotic notations, each function grows as the value of n (the input) grows, and that growth depends entirely on n^2 for Expression 1 and on n^3 for Expression 2.

 Hence, we can clearly say that the algorithm whose running time is represented by Expression 2 will grow faster than the other one, simply by analysing the highest-power term and ignoring the constants (20 in 20n^2) and the insignificant parts of the expressions (3n - 4 and 100n - 2).
 1 < log n < sqrt(n) < n < n log n < n^2 < n^3 < ... < 2^n < 3^n < ... < n^n

 Representing algorithms as functions, we can show which class they belong to.
Types of Asymptotic Notations
 We use three types of asymptotic notations to represent the growth of any algorithm:
 Big O (O) - upper bound
 Big Omega (Ω) - lower bound
 Big Theta (Θ) - tight bound
Big O
 The function f(n) = O(g(n)) if there exist positive constants c and n0 such that f(n) ≤ c g(n) for every n ≥ n0.

 Example:
 f(n) = 2n + 3
 2n + 3 ≤ 10n (c = 10, g(n) = n) and 2n + 3 ≤ 11n (c = 11, g(n) = n) both hold.
 Since 2n + 3 ≤ 2n + 3n = 5n for all n ≥ 1, we have f(n) = O(n) with c = 5, n0 = 1.
 2n + 3 ≤ 5n^2 is also true, so f(n) = O(n^2) as well.

 1 < log n < sqrt(n) < n < n log n < n^2 < n^3 < ... < 2^n < 3^n < ... < n^n
 lower bound | average bound | upper bound

 Try to write the closest (tightest) function; here it is n.

Analysis of Iterative statements

 Linear search - O(n)

for (i = 0; i < n; i++)
    if (a[i] == key)
        found

 Binary search - O(log2 n)  (a runnable C version follows)

while (start <= end)
{
    if (a[mid] == key)
        exit
    else if (a[mid] > key)
        assign end as mid-1
    else
        assign start as mid+1
}
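
A runnable C sketch of the iterative binary search above (the array must already be sorted; names and data are illustrative):

#include <stdio.h>

/* Iterative binary search: returns the index of key in the sorted
   array a[0..n-1], or -1 if it is absent. Each iteration halves the
   search range, giving O(log2 n) comparisons. */
int binary_search(const int a[], int n, int key) {
    int start = 0, end = n - 1;
    while (start <= end) {
        int mid = start + (end - start) / 2;
        if (a[mid] == key)
            return mid;           /* found */
        else if (a[mid] > key)
            end = mid - 1;        /* search left half */
        else
            start = mid + 1;      /* search right half */
    }
    return -1;                    /* not found */
}

int main(void) {
    int a[] = {1, 2, 3, 5, 7, 8, 9, 12, 18, 31};
    printf("%d\n", binary_search(a, 10, 9));   /* prints 6 */
    return 0;
}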
Analysis of Recursive statements

 Factorial - O(n)

fact(n)
{
    if (n == 0)
        return 1;
    else
        return n * fact(n - 1);
}

 Fibonacci - O(2^n)

fib(n)
{
    if (n <= 1)
        return n;
    else
        return fib(n - 1) + fib(n - 2);
}
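
A runnable C version of the two recursions (the driver values in main are illustrative):

#include <stdio.h>

/* O(n): one recursive call per level, n levels deep */
long fact(int n) {
    if (n == 0)
        return 1;
    return n * fact(n - 1);
}

/* O(2^n): each call spawns two further calls, so the
   call tree roughly doubles with every increase in n */
long fib(int n) {
    if (n <= 1)
        return n;
    return fib(n - 1) + fib(n - 2);
}

int main(void) {
    printf("fact(5) = %ld\n", fact(5));   /* 120 */
    printf("fib(10) = %ld\n", fib(10));   /* 55 */
    return 0;
}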
O-notation [Big oh]
 We use O-notation where we have only an asymptotic upper bound.
Ω-notation [Omega]
 Ω-notation is used to denote an asymptotic lower bound.
 Definition: Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ c g(n) ≤ f(n) for all n ≥ n0 }

 For all values of n to the right of n0, the value of f(n) is on or above c g(n).

 Example: 3n + 2 = Ω(n), as
 3n + 2 ≥ 3n for all n ≥ 1
Θ-notation [Theta]
 Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that 0 ≤ c1 g(n) ≤ f(n) ≤ c2 g(n) for all n ≥ n0 }

 For all values of n to the right of n0, the value of f(n) lies at or above c1 g(n) and at or below c2 g(n).

 Example: 3n + 2 = Θ(n), as
 3n + 2 ≥ 3n for all n ≥ 2 and
 3n + 2 ≤ 4n for all n ≥ 2
 So c1 = 3, c2 = 4 and n0 = 2.
o-notation [little oh]
 o-notation is used to denote an upper bound that is not asymptotically tight.

 o(g(n)) = { f(n) : for every constant c > 0, there exists a constant n0 > 0 such that 0 ≤ f(n) < c g(n) for all n ≥ n0 }

 Example: 2n = o(n^2), but 2n ≠ o(n).

 In f(n) = O(g(n)), the bound 0 ≤ f(n) ≤ c g(n) holds for some constant c > 0; but in f(n) = o(g(n)), the bound 0 ≤ f(n) < c g(n) holds for all constants c > 0.
ω-notation [little omega]
 ω-notation is used to denote a lower bound that is not asymptotically tight.

 ω(g(n)) = { f(n) : for every constant c > 0, there exists a constant n0 > 0 such that 0 ≤ c g(n) < f(n) for all n ≥ n0 }

 Example: n^2/2 = ω(n), but n^2/2 ≠ ω(n^2).


END
