
UNIT 1: FUNDAMENTALS OF ALGORITHMS

Structure
1.0 Objectives
1.1 Introduction to algorithm
1.2 Properties of algorithm
1.3 Algorithmic Notations
1.4 Design and development of an algorithm
1.5 Some simple examples
1.6 Summary
1.7 Keywords
1.8 Answers to check your progress
1.9 Unit-end exercises and answers
1.10 Suggested readings

1.0 OBJECTIVES

At the end of this unit you will be able to understand:
- Fundamentals of algorithms, along with notation.
- The various properties of an algorithm.
- How to write an algorithm or pseudo code for any problem.
- Algorithms for a variety of problems.

1.1 Introduction to algorithm

An algorithm, named for the ninth-century Persian mathematician al-Khowarizmi, is simply a set of rules used to perform some calculation, either by hand or, more usually, on a machine. Even the ancient Greeks used an algorithm, popularly known as Euclid's algorithm, for calculating the greatest common divisor (gcd) of two numbers. An algorithm is a tool for solving a given problem. Before writing a program for solving the given problem, a good programmer first designs and writes the concerned algorithm, analyses it, refines it as many times as required, and arrives at the final efficient form that works well for all valid input data and solves the problem in the shortest possible time, utilizing minimum memory space.

Definition of algorithm: An algorithm is defined as a collection of unambiguous instructions occurring in some specific sequence, and such an algorithm should produce output for a given set of input in a finite amount of time.

The basic requirement is that the statement of the problem be made very clear, because certain concepts may be clear to one person and not to another. For example, calculating the roots of a quadratic equation may be clear to people who know mathematics, but unclear to someone who does not. A good algorithm is like a sharp knife: it does exactly what it is supposed to do with a minimum amount of applied effort. Using the wrong algorithm to solve a problem is like trying to cut a steak with a screwdriver: you obtain a result, but you will have spent more effort than necessary. Any algorithm should consist of the following:
1. Input: the range of inputs for which the algorithm works perfectly.
2. Output: the algorithm should always produce correct results, and it should halt.
3. A finite sequence of instructions that transforms the given input into the desired output (Algorithm + Programming language).
Usually, an algorithm will be written in simple English-like statements along with simple mathematical expressions. The notion of an algorithm can be illustrated using Figure 1.1:

Input / Problem --> Algorithm --> Computer --> Output
(Fig 1.1: Notion of the algorithm)

Any systematic method for calculating a result can be considered an algorithm. For example, the methods that we learn in school for adding, multiplying and dividing numbers can be considered algorithms. By following the steps specified we can arrive at the result without even thinking. Even a cooking recipe can be considered an algorithm if its steps:
1. Describe precisely how to make a certain dish.
2. Describe the exact quantities to be used.
3. Detail instructions on what items are to be added next, at what time, and how long to cook.

1.2 Properties of algorithms

Each and every algorithm has to satisfy some properties. The various properties, or characteristics, of an algorithm are:
1. Precise and unambiguous (definiteness): An algorithm must be simple, precise and unambiguous, i.e. there should not be any ambiguity (doubt) in the instructions or statements specified to solve a problem. Each and every instruction used in the algorithm must be clear and unambiguous.

2. Range of inputs: The range of inputs for which the algorithm produces the desired result should be specified.
3. Maintain order: The instructions in each and every step of an algorithm are in a specified order, i.e. they will be executed in sequence (one after the other). The instructions cannot be written in random order.
4. Finite and correct: The algorithm must solve the problem in a certain finite number of steps and produce the appropriate result. The range of input for which the algorithm works perfectly should be specified.
5. Termination: Each algorithm should terminate.
6. Several algorithms may exist for solving a given problem, and the execution speed of each algorithm may be different (for example, to sort, various algorithms such as bubble sort and insertion sort can be used).
7. An algorithm can be represented in several different ways.
8. Algorithms for a given problem can be based on very different ideas (for example, to sort, several methods exist, such as bubble sort, insertion sort, radix sort etc.) and may have different execution speeds.

1.3 Algorithmic Notations

The following notations are usually used while writing any algorithm.
1. Write the word "Algorithm" followed by the main objective of the algorithm. For example:
Algorithm Area_of_circle
2. Then provide a brief description of what is achieved using the algorithm, along with the inputs to it. For example:
Description: The algorithm computes the area of a circle using the input value radius.
3. Each instruction should be a separate step, and the step number has to be provided. What is accomplished in each step has to be described briefly and enclosed within square brackets (which we call a comment). For example, to find the area of a circle, we can write:
Step 2: [Find the area of the circle] Area ← 3.142 * radius * radius
4. After all operations are over, the algorithm has to be terminated, which indicates its logical end.
For example, the last step in the algorithm will be:
Step 4: [Finished] Exit

1.4 Design and development of an algorithm

The fundamental steps in solving any given problem, which lead to the complete development of an algorithm, are as follows:

1. Statement of the problem
2. Development of a mathematical model
3. Designing of the algorithm
4. Implementation
5. Analysis of the algorithm for its time and space complexity
6. Program testing and debugging
7. Documentation

1. Statement of the problem. Before we attempt to solve a given problem, we must understand precisely the statement of the problem. There are several ways to do this. We can list all the software specification requirements, ask several questions and get the answers. This helps us understand the problem more clearly and remove any ambiguity.
2. Development of a mathematical model. Having understood the problem, the next step is to look for the mathematical model that best suits the given problem. This is a very important step in the overall solution process and should be given considerable thought; in fact, the choice of model goes a long way in the development process. We must ask: which mathematical model best suits the given problem? Is there a model that has already been used to solve a problem resembling the current one?
3. Designing of the algorithm. As we are comfortable with the specification and the model of the problem at this stage, we can move on to writing down an algorithm.
4. Implementation. In this step appropriate data structures are selected and coded in a target language. The selection of a target language is a very important sub-step for reducing the complexities involved in coding.
5. Analysis of the algorithm for its time and space complexity. We will use, in this section, a number of terms like complexity, analysis, efficiency, etc. All of these terms refer to the performance of a program. Our job does not stop once we write the algorithm and code it in, say, C, C++ or Java. We should worry about the space and timing requirements too. Why? There are several reasons for this, and so we shall start with the time complexity. In simple terms, the time complexity of a program is the amount of computer time it needs to run.

The space complexity of a program is the amount of memory needed to run the program.
6. Program testing and debugging. After implementing the algorithm in a specific language, next it is time to execute it. After executing the program, the desired output should be obtained. Testing is nothing but verification of the program for its correctness, i.e. whether the output of the program is correct or not. Using different input values, one can check whether the desired output is obtained. Any logical error can be identified by program testing. Usually debugging is part of testing; many debugging tools exist by which one can test the program for its correctness.
7. Documentation. Note that documentation is not a last step. Documentation should exist from understanding the problem until the program is tested and debugged. During the design and implementation phases the documentation is very useful. To make the design or code understandable, proper comments should be given; as far as possible a program should be self-documented, so the use of proper variable names and data structures plays a very important role in documentation. It is very difficult to read and understand another person's logic and code; documentation enables individuals to understand programs written by other people.

1.5 Some simple examples

1. Algorithm to find the GCD of two numbers (Euclid's algorithm).
ALGORITHM: gcd(m, n)
//Purpose: To find the GCD of two numbers
//Description: This algorithm computes the GCD of two non-negative and non-zero values accepted as parameters
//Input: Two non-negative and non-zero values m and n
//Output: GCD of m and n
Step 1: If n = 0, return m and stop.
Step 2: Divide m by n and assign the remainder to r.
Step 3: Assign the value of n to m and the value of r to n.
Step 4: Go to Step 1.

2. Algorithm to find the GCD of two numbers (consecutive integer checking method).

ALGORITHM: gcd(m, n)
//Purpose: To find the GCD of two numbers
//Description: This algorithm computes the GCD of two non-negative and non-zero values accepted as parameters
//Input: Two non-negative and non-zero values m and n
//Output: GCD of m and n
Step 1: [Find the minimum of m and n] r ← min(m, n)
Step 2: [Find the gcd by consecutive integer checking]
    while (1)
        if (m mod r = 0 and n mod r = 0) break
        r ← r − 1
    end while
Step 3: return r

3. Algorithm to find the GCD of two numbers (repetitive subtraction method).
ALGORITHM: gcd(m, n)
//Purpose: To find the GCD of two numbers
//Description: This algorithm computes the GCD of two non-negative and non-zero values accepted as parameters
//Input: Two non-negative and non-zero values m and n
//Output: GCD of m and n
Step 1: [If one of the two numbers is zero, return the other number as the GCD]
    if (m = 0) return n
    if (n = 0) return m
Step 2: [Repeat while m and n are different]
    while (m ≠ n)
        if (m > n) m ← m − n
        else n ← n − m
    end while
Step 3: [Finished: return the GCD as the output] return m

Note: The same problem can be solved in many ways (for example, Algorithms 1, 2 and 3).
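The three gcd methods above can be sketched in C as follows (the function names are ours; note the decrement r ← r − 1 that makes the consecutive-integer-checking loop terminate):

```c
/* C sketches of the three gcd algorithms above (function names are
   illustrative). All assume m, n >= 0 and not both zero. */

/* 1. Euclid's algorithm: repeated division. */
int gcd_euclid(int m, int n) {
    while (n != 0) {       /* Step 1: when n = 0, m is the gcd */
        int r = m % n;     /* Step 2: remainder of m divided by n */
        m = n;             /* Step 3: n -> m, r -> n, then repeat */
        n = r;
    }
    return m;
}

/* 2. Consecutive integer checking: try min(m, n), then smaller candidates.
   Requires m >= 1 and n >= 1 so that r never reaches zero. */
int gcd_checking(int m, int n) {
    int r = (m < n) ? m : n;             /* r <- min(m, n) */
    while (!(m % r == 0 && n % r == 0))
        r = r - 1;                       /* try the next smaller candidate */
    return r;
}

/* 3. Repetitive subtraction. */
int gcd_subtract(int m, int n) {
    if (m == 0) return n;
    if (n == 0) return m;
    while (m != n) {
        if (m > n) m = m - n;
        else       n = n - m;
    }
    return m;
}
```

All three functions return the same value for any valid pair of inputs, illustrating the note that one problem can be solved in many ways.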

4. Algorithm to generate prime numbers using the Sieve of Eratosthenes method (pseudo code).
ALGORITHM SIEVE_PRIME(n)
//Purpose: To generate prime numbers between 2 and n
//Description: This algorithm generates prime numbers using the sieve method
//Input: A positive integer n >= 2
//Output: Prime numbers <= n
Step 1: [Generate the list of integers from 2 to n]
    for p ← 2 to n do
        a[p] ← p
    end for
Step 2: [Eliminate the multiples of p between 2 and n]
    for p ← 2 to n do
        if (a[p] ≠ 0)
            i ← p * p
            while (i ≤ n)
                a[i] ← 0
                i ← i + p
            end while
        end if
    end for
Step 3: [Obtain the prime numbers by copying the non-zero elements]
    m ← 0
    for p ← 2 to n do
        if (a[p] ≠ 0)
            b[m] ← a[p]; m ← m + 1
        end if
    end for
Step 4: [Output the prime numbers between 2 and n]
    for i ← 0 to m−1
        write b[i]
    end for
Step 5: [Finished] Exit

5. Algorithm to find the number of digits in the binary representation of a given decimal integer.
Algorithm: Binary(n)
//Purpose: To count the number of digits in the binary representation of a given decimal integer
//Input: n, a positive decimal integer

//Output: Number of digits in the binary representation of the given positive decimal integer
Count ← 1
while (n > 1)
    Count ← Count + 1
    n ← ⌊n/2⌋
end while
return Count

Check your progress
1. What is an algorithm? Explain the notion of an algorithm.
2. What are the various properties of an algorithm?
3. Explain the procedure for generating prime numbers using the Sieve of Eratosthenes method and write the algorithm for the same.
4. Explain the steps involved in the design and development of an algorithm.

1.6 SUMMARY

An algorithm is a sequence of non-ambiguous instructions for solving a problem in a finite amount of time. An input to an algorithm specifies an instance of the problem the algorithm solves. Algorithms can be specified in a natural language or a pseudo code; they can also be implemented as computer programs. A good algorithm is usually the result of repeated efforts and rework. The same problem can often be solved by several algorithms; for example, three algorithms were given for computing the greatest common divisor of two integers: Euclid's algorithm, the consecutive integer checking algorithm, and repetitive subtraction.
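As a reference point, the SIEVE_PRIME pseudo code of example 4 can be sketched in C; the MAXN limit and the wrapper functions below are our illustrative additions, not part of the original text:

```c
/* Sieve of Eratosthenes from example 4, as a C sketch.
   MAXN is an assumed illustrative limit. */
#define MAXN 1000

/* Fills b[] (of at least MAXN slots) with all primes <= n; returns their count. */
int sieve_prime(int n, int b[]) {
    int a[MAXN + 1];
    int p, i, m = 0;
    if (n < 2 || n > MAXN) return 0;     /* outside the stated input range */
    for (p = 2; p <= n; p++)             /* Step 1: list the integers 2..n */
        a[p] = p;
    for (p = 2; p <= n; p++)             /* Step 2: strike out multiples */
        if (a[p] != 0)
            for (i = p * p; i <= n; i += p)
                a[i] = 0;
    for (p = 2; p <= n; p++)             /* Step 3: collect the survivors */
        if (a[p] != 0)
            b[m++] = a[p];
    return m;                            /* Step 4 would print b[0..m-1] */
}

/* Convenience wrappers for a quick check. */
int count_primes_upto(int n)  { int b[MAXN]; return sieve_prime(n, b); }
int largest_prime_upto(int n) { int b[MAXN]; int m = sieve_prime(n, b); return m ? b[m - 1] : 0; }
```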

1.7 KEYWORDS
Algorithm: a sequence of unambiguous instructions to solve a problem in a finite amount of time.
Time complexity: the time required to execute a program.
Space complexity: the amount of memory needed to run a program.

1.8 ANSWERS TO CHECK YOUR PROGRESS
1. See 1.1
2. See 1.2

3. See 1.5 (4th algorithm)
4. See 1.4

1.9 UNIT-END EXERCISES AND ANSWERS
1. Find gcd(31415, 14142) by applying Euclid's algorithm.
2. What does Euclid's algorithm do for a pair of numbers in which the first number is smaller than the second one? What is the largest number of times this can happen during the algorithm's execution on such an input?
3. Write an algorithm to find the gcd of two numbers using the repetitive subtraction method. Find gcd(36, 171) by repetitive subtraction.
4. Write an algorithm to find the number of digits in the binary representation of a given decimal integer. Trace it for the input 255.
Answers: SEE
1. 1.5 (1st algorithm)
2. 1.5 (1st algorithm) [Hint: find gcd(12, 24)]
3. 1.5 (3rd algorithm)
4. 1.5 (5th algorithm)

1.10 SUGGESTED READINGS
1. Introduction to the Design & Analysis of Algorithms by Anany Levitin.
2. Aho, Alfred V., "The Design and Analysis of Computer Algorithms".
3. Analysis and Design of Algorithms by A M Padma Reddy.

MODULE-1, UNIT 2: ANALYSIS OF ALGORITHM EFFICIENCY

Structure
1.0 Objectives
1.1 Introduction
1.2 Space complexity
1.3 Time complexity
1.4 Asymptotic notations
1.5 Practical complexities
1.6 Performance measurement of simple algorithms
Summary
Keywords
Answers to check your progress
Unit-end exercises and answers
Suggested readings

1.0 OBJECTIVES

At the end of this unit you will be able to understand:
- Efficiency of an algorithm.
- Space complexity.
- Time complexity.
- Performance measurement.
- Need for time complexity.
- Worst-case, best-case and average-case efficiencies.
- Asymptotic notations: Big-Oh (O), Big-Omega (Ω), Big-Theta (Θ).
- Practical complexities.
- Analysis of iterative algorithms.
- Analysis of recursive algorithms.

1.1 INTRODUCTION

Two important ways to characterize the effectiveness of an algorithm are its space complexity and time complexity. Time complexity of an algorithm concerns determining an expression for the number of steps needed as a function of the problem size. Since the step count measure is somewhat coarse, one does not aim at obtaining an exact step count; instead, one attempts only to get asymptotic bounds on the step count. Asymptotic analysis makes use of the O (Big Oh) notation. Two other notational constructs used by computer scientists in the analysis of algorithms are Θ (Big Theta) notation and Ω (Big Omega) notation.


1.2 Space complexity

The space complexity of a program is the amount of memory that may be required to run the program.
1. The primary memory of a computer is an important resource for the proper execution of a program. Without sufficient memory, either the program works slowly or may not work at all. Therefore, the exact memory requirement for a program is to be known in advance.
2. When we design a program we must see that the memory requirement is kept to a minimum, so that even computers with less memory can execute the program.
3. Nowadays operating systems take care of the efficient usage of memory, based upon the virtual memory concept and dynamic linking and loading.

1.2.1 Analysis of space complexity

The following components are important in calculating the space requirements:
Instruction space: the space required to store the machine code generated by the compiler. Generally the object code will be placed in the code segment.
Data space: the space needed for constants, static variables, intermediate variables, dynamic variables etc. This is nothing but the data segment space.
Stack space: to store return addresses, return values, etc. To store these details, a stack segment will be used.

1.2.2 How to calculate space complexity?

Before we proceed to any specific example, we must understand the importance of the size of the input, that is, n. Generally every problem will be associated with n. It may refer to:
- the number of cities in the travelling salesman problem;
- the number of elements in a sorting or searching problem;
- the number of cities in the map colouring problem;
- the number of objects in the knapsack problem.
When a problem is independent of n, the data space occupied by the algorithm/program may be considered constant. Let us start with a few simple problems of the iterative type.

EXAMPLE: 1. Finding the average of three numbers.

int main(void)
{
    int a, b, c, avg;
    scanf("%d %d %d", &a, &b, &c);
    avg = (a + b + c) / 3;
    printf("average is=%d", avg);
    return 0;
}

Program to illustrate space complexity. As a, b, c and avg are all integer variables, the space occupied by them (on a machine with 2-byte integers) is
= 4 * sizeof(int) = 4 * 2 bytes = 8 bytes.
Space occupied by the constant 3 is = 1 * 2 bytes = 2 bytes.
Hence, the total space is = 8 + 2 = 10 bytes.

1.3 Time complexity

It is the amount of time a program or algorithm takes for execution, that is, how fast an algorithm runs. Note that the time taken by a program for compilation is not included in the calculation. Normally researchers give more attention to time efficiency than to space efficiency, because handling memory problems is easier than handling time.

1.4 ASYMPTOTIC NOTATIONS

As noted in the introduction, asymptotic analysis makes use of the O (Big Oh) notation, together with Θ (Big Theta) and Ω (Big Omega). The performance evaluation of an algorithm is obtained by totaling the number of occurrences of each operation when running the algorithm. The performance of an algorithm is evaluated as a function of the input size n and is to be considered modulo a multiplicative constant.

The following notations are commonly used in performance analysis to characterize the complexity of an algorithm.

Θ-Notation (Same order)

This notation bounds a function to within constant factors. We say f(n) = Θ(g(n)) if there exist positive constants n0, c1 and c2 such that to the right of n0 the value of f(n) always lies


between c1 g(n) and c2 g(n) inclusive. In set notation, we write as follows:
Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that 0 ≤ c1 g(n) ≤ f(n) ≤ c2 g(n) for all n ≥ n0 }
We say that g(n) is an asymptotically tight bound for f(n).

Graphically, for all values of n to the right of n0, the value of f(n) lies at or above c1 g(n) and at or below c2 g(n). In other words, for all n ≥ n0, the function f(n) is equal to g(n) to within a constant factor, and we say that g(n) is an asymptotically tight bound for f(n). In set terminology, f(n) is said to be a member of the set Θ(g(n)) of functions. In other words, because Θ(g(n)) is a set, we could write f(n) ∈ Θ(g(n)) to indicate that f(n) is a member of Θ(g(n)). Instead, we write f(n) = Θ(g(n)) to express the same notion. Historically, the notation is written "f(n) = Θ(g(n))", although the idea that f(n) is equal to something called Θ(g(n)) is misleading.
Example: n²/2 − 2n = Θ(n²), with c1 = 1/4, c2 = 1/2, and n0 = 8.
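The constants in this example can be spot-checked numerically. The sketch below (our own, not from the text) verifies that n²/2 − 2n stays between c1·n² = n²/4 and c2·n² = n²/2 for every n from n0 = 8 up to a given limit:

```c
/* Numeric spot-check of the Theta example above:
   f(n) = n*n/2 - 2n lies between c1*g(n) = n*n/4 and c2*g(n) = n*n/2
   for every n from n0 = 8 up to the given limit. */
int theta_bounds_hold(int upto) {
    int n;
    for (n = 8; n <= upto; n++) {
        double f  = n * (double)n / 2.0 - 2.0 * n;
        double lo = n * (double)n / 4.0;
        double hi = n * (double)n / 2.0;
        if (f < lo || f > hi)
            return 0;   /* a bound failed */
    }
    return 1;           /* all bounds held on the sampled range */
}
```

At n = 8 the lower bound is tight: f(8) = 16 = 8²/4, which is why n0 cannot be made smaller with these constants.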

O-Notation (Upper Bound)


This notation gives an upper bound for a function to within a constant factor. We write f(n) = O(g(n)) if there are positive constants n0 and c such that to the right of n0, the value of f(n) always lies on or below c g(n). In set notation, we write as follows: for a given function g(n),
O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c g(n) for all n ≥ n0 }
We say that the function g(n) is an asymptotic upper bound for the function f(n). We use O-notation to give an upper bound on a function, to within a constant factor.

Graphically, for all values of n to the right of n0, the value of the function f(n) is on or below c g(n). We write f(n) = O(g(n)) to indicate that the function f(n) is a member of the set O(g(n)), i.e. f(n) ∈ O(g(n)). Note that f(n) = Θ(g(n)) implies f(n) = O(g(n)), since Θ-notation is a stronger notation than O-notation.
Example: 2n² = O(n³), with c = 1 and n0 = 2.

Equivalently, we may also define "f is of order g" as follows: if f(n) and g(n) are functions defined on the positive integers, then f(n) is O(g(n)) if and only if there is a c > 0 and an n0 > 0 such that |f(n)| ≤ c |g(n)| for all n ≥ n0.


Historical Note: The O-notation was introduced in 1892 by the German mathematician Paul Bachmann.

Ω-Notation (Lower Bound)

This notation gives a lower bound for a function to within a constant factor. We write f(n) = Ω(g(n)) if there are positive constants n0 and c such that to the right of n0, the value of f(n) always lies on or above c g(n). In set notation, we write as follows: for a given function g(n),
Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ c g(n) ≤ f(n) for all n ≥ n0 }
We say that the function g(n) is an asymptotic lower bound for the function f(n).

The intuition behind Ω-notation is shown above.
Example: √n = Ω(lg n), with c = 1 and n0 = 16.
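This example too can be spot-checked. At powers of two n = 2^k (with k ≥ 4, i.e. n ≥ n0 = 16), the claim √n ≥ lg n is equivalent to n ≥ (lg n)², that is, 2^k ≥ k²; the sketch below (ours) verifies this in exact integer arithmetic:

```c
/* Spot-check of the Omega example above at powers of two:
   with n = 2^k and k >= 4 (i.e. n >= n0 = 16), sqrt(n) >= lg n
   is equivalent to 2^k >= k^2, checked here without floating point. */
int omega_example_holds(void) {
    long long k;
    for (k = 4; k <= 40; k++)
        if ((1LL << k) < k * k)
            return 0;   /* inequality failed at this k */
    return 1;           /* holds for all sampled k */
}
```

At k = 4 (n = 16) the bound is tight: 2⁴ = 16 = 4², matching the choice of n0.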

1.4.1 Algorithm Analysis

The complexity of an algorithm is a function g(n) that gives an upper bound on the number of operations (or running time) performed by the algorithm when the input size is n. There are two interpretations of this upper bound.
Worst-case complexity: the running time for any input of a given size will be lower than the upper bound, except possibly for some inputs where the maximum is reached.


Average-case complexity: the running time for a given input size is the average number of operations over all problem instances of that size.
Because it is quite difficult to estimate the statistical behaviour of the input, most of the time we content ourselves with worst-case behaviour. Most of the time, the complexity g(n) is approximated by its family O(f(n)), where f(n) is one of the following functions: n (linear complexity), log n (logarithmic complexity), n^a where a ≥ 2 (polynomial complexity), a^n (exponential complexity).

1.4.2 Optimality

Once the complexity of an algorithm has been estimated, the question arises whether this algorithm is optimal. An algorithm for a given problem is optimal if its complexity reaches the lower bound over all the algorithms solving this problem. For example, any algorithm solving the intersection-of-n-segments problem will execute at least n² operations in the worst case, even if it does nothing but print the output (there can be that many intersections). This is abbreviated by saying that the problem has Ω(n²) complexity. If one finds an O(n²) algorithm that solves this problem, it will be optimal and of complexity Θ(n²).

1.5 Practical Complexities

Computational complexity theory is a branch of the theory of computation in theoretical computer science and mathematics that focuses on classifying computational problems according to their inherent difficulty. In this context, a computational problem is understood to be a task that is in principle amenable to being solved by a computer (which basically means that the problem can be stated by a set of mathematical instructions). Informally, a computational problem consists of problem instances and solutions to these problem instances. For example, primality testing is the problem of determining whether a given number is prime or not. The instances of this problem are natural numbers, and the solution to an instance is yes or no based on whether the number is prime or not.

A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do.


Closely related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, it tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kinds of problems can, in principle, be solved algorithmically.

1.5.1 Function problems

A function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem; that is, it isn't just yes or no. Notable examples include the traveling salesman problem and the integer factorization problem. It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not really the case, since function problems can be recast as decision problems. For example, the multiplication of two integers can be expressed as the set of triples (a, b, c) such that the relation a × b = c holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers.

1.5.2 Measuring the size of an instance

To measure the difficulty of solving a computational problem, one may wish to see how much time the best algorithm requires to solve the problem. However, the running time may, in general, depend on the instance. In particular, larger instances will require more time to solve.
Thus the time required to solve a problem (or the space required, or any measure of complexity) is calculated as a function of the size of the instance. This is usually taken to be the size of the input in bits. Complexity theory is interested in how algorithms scale with an increase in the input size. For instance, in the problem of finding whether a graph is connected, how much more time does it take to solve a problem for a graph with 2n vertices compared to the time taken for a graph with n vertices? If the input size is n, the time taken can be expressed as a function of n. Since the time taken on different inputs of the same size can be different, the worst-case time complexity T(n) is defined to be the maximum time taken over all inputs of size n. If T(n) is a polynomial in n, then the algorithm is said to be a polynomial time algorithm. Cobham's thesis says that a problem can be solved with a feasible amount


of resources if it admits a polynomial time algorithm.

1.6 Performance measurement of simple algorithms

1. Find the time complexity of the following algorithm.
a) Algorithm: simple
for (i = 1; i <= n*n; i++)
    for (j = 0; j < i; j++)
        sum++;

Sol:
for (i = 1; i <= n*n; i++)      -- executed n*n times
    for (j = 0; j < i; j++)     -- executed <= n*n times
        sum++;                  -- O(1)
Running time: O(n⁴)

2. Algorithm for matrix multiplication
Algorithm matmul(a[0..n−1, 0..n−1], b[0..n−1, 0..n−1])
//Input: two n-by-n matrices a and b
//Output: matrix c = ab
for i ← 0 to n−1 do
    for j ← 0 to n−1 do
        c[i, j] ← 0
        for k ← 0 to n−1 do
            c[i, j] ← c[i, j] + a[i, k] * b[k, j]
return c
The time complexity of this algorithm is given by
M(n) = Σ(i=0..n−1) Σ(j=0..n−1) Σ(k=0..n−1) 1 = n³
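A small C instance of the matmul pseudocode can make the count concrete; the innermost statement c[i][j] = c[i][j] + a[i][k] * b[k][j] (note the multiplication) executes n³ times in total. The fixed size N = 3 and the self-check helper are our illustrative choices:

```c
/* 3-by-3 instance of the matmul algorithm above.
   N = 3 is an illustrative fixed size. */
#define N 3

void matmul(const int a[N][N], const int b[N][N], int c[N][N]) {
    int i, j, k;
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            c[i][j] = 0;
            for (k = 0; k < N; k++)                    /* runs N*N*N times */
                c[i][j] = c[i][j] + a[i][k] * b[k][j];
        }
}

/* Quick self-check: multiplying by the identity leaves a matrix unchanged. */
int matmul_identity_ok(void) {
    const int id[N][N] = {{1,0,0},{0,1,0},{0,0,1}};
    const int a[N][N]  = {{1,2,3},{4,5,6},{7,8,9}};
    int c[N][N];
    int i, j;
    matmul(id, a, c);
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            if (c[i][j] != a[i][j]) return 0;
    return 1;
}
```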

4. Algorithm for element uniqueness
Algorithm: UniqueElement(a[0..n−1], n)
//Input: n, the number of elements, and a, an array consisting of n elements
//Output: 1 if all elements are distinct, 0 otherwise
for i ← 0 to n−2 do


    for j ← i+1 to n−1 do
        if (a[i] = a[j]) return 0
    end for
end for
return 1

Worst-case efficiency: T(n) ∈ O(n²).
Best-case efficiency: if a[0] = a[1], the basic operation is executed only once; therefore T(n) ∈ Θ(1).
Note: To find the time efficiency of non-recursive algorithms, make use of the formula result = upper bound − lower bound + 1 in each summation.

Check your progress
1. Explain the concept of space complexity.
2. What is meant by time complexity? Why is it required?
3. Write a note on asymptotic notations.
4. Find the time complexity of the matrix multiplication algorithm.

SUMMARY
Space complexity: the space complexity of a program is the amount of memory that may be required to run the program.
Time complexity: the time required to execute a program.
Asymptotic notations: representations of time complexity in any of the notations big-oh, big-omega, big-theta.

1.5 KEYWORDS
Basic operation: an operation which is executed the greatest number of times in the program (its logic part); usually present in the innermost loop of the algorithm/program.

The worst-case comparison count for the element-uniqueness algorithm is
Σ(i=0..n−2) Σ(j=i+1..n−1) 1 = n(n−1)/2 ≤ n²
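The element-uniqueness algorithm of section 1.6 can be sketched in C (the function name is ours):

```c
/* C sketch of the UniqueElement algorithm: returns 1 if all n elements
   of a[] are distinct, and 0 as soon as a duplicate pair is found.
   The two loops make at most n(n-1)/2 comparisons in the worst case. */
int unique_elements(const int a[], int n) {
    int i, j;
    for (i = 0; i <= n - 2; i++)
        for (j = i + 1; j <= n - 1; j++)
            if (a[i] == a[j])
                return 0;   /* duplicate found: best case hits this at once */
    return 1;               /* worst case: all pairs were checked */
}
```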


1.6 ANSWERS TO CHECK YOUR PROGRESS
1. See 1.2
2. See 1.3
3. See 1.4
4. See 1.6

1.7 UNIT-END EXERCISES AND ANSWERS
1. Find the time complexity of the algorithm for the transpose of a matrix.
2. Write a note on best case, average case and worst case in a program, with an example.
Answers: SEE
1. 1.4
2. 1.3

1.8 SUGGESTED READINGS
1. Introduction to the Design and Analysis of Algorithms by Anany Levitin.
2. Analysis and Design of Algorithms with C/C++, 3rd edition, by Prof. Nandagopalan.
3. Analysis and Design of Algorithms by Padma Reddy.
4. Even, Shimon, "Graph Algorithms", Computer Science Press.


MODULE-1, UNIT: ALGORITHMS ANALYSIS AND SOLVING RECURRENCES

Structure
1.0 Objectives
1.1 Analyzing control structures
1.2 Using a barometer
1.3 Average case analysis
1.4 Amortized analysis
1.5 Solving recurrences
1.6 Summary
1.7 Key words
1.8 Answers to check your progress
1.9 Unit-end exercises and answers
1.10 Suggested readings

2.0 OBJECTIVES

At the end of this unit you will be able to:
- Solve container loading and knapsack problems.
- Find minimum spanning trees using Prim's and Kruskal's algorithms.
- Identify the difference between a graph, a tree and a minimum spanning tree.

2.1 INTRODUCTION

An essential tool for designing an efficient and suitable algorithm is the analysis of algorithms. There is no magic formula; it is simply a matter of judgment, intuition and experience. Nevertheless, there are some basic techniques that are often useful, such as knowing how to deal with control structures and recursive equations.

1.2 Analyzing control structures

Control structures analysis: eventually, analysis of algorithms proceeds from the inside out. Determine, first, the time required by individual instructions; then combine


these times according to the control structures that combine the instructions in the program.

1.2.1 Sequencing: Let P1 and P2 be two fragments of an algorithm. They may be single instructions or complicated sub-algorithms. Let t1 and t2 be the times taken by P1 and P2; t1 and t2 may depend on various parameters, such as the instance size. The sequencing rule says that the time required to compute P1 followed by P2 is simply t1 + t2. By the maximum rule, this time is in Θ(max(t1, t2)). Despite its simplicity, applying this rule is sometimes less obvious than it may appear: it could happen that one of the parameters that control t2 depends on the result of the computation performed by P1.

1.2.2 For loops: These are the easiest loops to analyze.

for i ← 1 to m do
    P(i)

By a convention we shall adopt, m = 0 means that P(i) is not executed at all (this is not an error). P(i) could depend on i or on the instance size; of course, the easiest case is when it does not. Let t be the time required to compute P(i); then the total time required is l = mt. Usually this approach is adequate, but there is a potential pitfall: we did not consider the time for the loop control. After all, our for loop is shorthand for something like the following while loop:

i ← 1
while i <= m do
    P(i)
    i ← i + 1

In most situations it is reasonable to count the test i <= m at unit cost, and the same for the instruction i ← i + 1 and the sequencing operations implicit in the while loop. Let c be an upper bound on the time required by each of these operations. Then:

l <= c            for i ← 1
   + (m+1)c       tests i <= m
   + mt           executions of P(i)
   + mc           executions of i ← i + 1
   + mc           sequencing operations
l <= (t + 3c)m + 2c

This time is clearly bounded below by mt. If c is negligible compared to t, our previous estimate that l is roughly equal to mt is justified. The analysis of a for loop is more interesting when the time t(i) required for P(i) varies as a function of i, and possibly also of the size n. In that case,

for i ← 1 to m do P(i)

takes a time given by Σ_{i=1}^{m} t(i) (ignoring the time taken by the loop control).

Example: Algorithm for matrix multiplication


Algorithm matmul(a[0..n-1, 0..n-1], b[0..n-1, 0..n-1])
// Input: two n-by-n matrices a and b
// Output: matrix c = a·b
for i ← 0 to n-1 do
    for j ← 0 to n-1 do
        c[i,j] ← 0
        for k ← 0 to n-1 do
            c[i,j] ← c[i,j] + a[i,k] * b[k,j]
return c

The time complexity of this algorithm is given by M(n) ∈ Θ(n³):
M(n) = Σ_{i=0}^{n-1} Σ_{j=0}^{n-1} Σ_{k=0}^{n-1} 1 = n³
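The triple loop can be sketched directly in Python; `matmul` is an illustrative name, and the sketch assumes matrices stored as lists of lists:

```python
def matmul(a, b):
    """Multiply two n-by-n matrices (lists of lists).

    Three nested loops, so the basic operation (multiply-add)
    runs n^3 times: M(n) in Theta(n^3)."""
    n = len(a)
    c = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]
    return c
```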

1.2.3 Recursive calls: The analysis of recursive algorithms is straightforward up to a certain point. Simple inspection of the algorithm often gives rise to a recurrence equation that mimics the flow of control in the algorithm. General techniques for solving such equations, or for transforming them into simpler non-recursive equations, will be seen later. 1.2.4 While and repeat loops: These two types of loops are usually harder to analyze than for loops because there is no obvious a priori way to know how many times we shall have to go around the loop. The standard technique for analyzing these loops is to find a function of the variables involved whose value decreases each time around. To determine how many times the loop is repeated, however, we need to understand better how the value of this function decreases. An alternative approach to the analysis of while loops consists of treating them like recursive algorithms. Both techniques can be illustrated with the same example; the analysis of repeat loops is carried out similarly.

1.3 Using a barometer
1.4 Supplementary examples
a. Algorithm to find the sum of array elements
Algorithm sum(a, n)

{
    s := 0.0;
    for i := 1 to n do
        s := s + a[i];
    return s;
}
The problem instances for this algorithm are characterized by n, the number of elements to be summed. The space needed by n is one word, since it is of type integer. The space needed by a is the space needed by variables of type array of floating-point numbers; this is at least n words, since a must be large enough to hold the n elements to be summed. So we obtain
S_sum(n) >= (n + 3)    [n words for a[], one each for n, i and s]
Time complexity: The time T(P) taken by a program P is the sum of the compile time and the run time (execution time). The compile time does not depend on the instance characteristics; also, we may assume that a compiled program will be run several times without recompilation. The run time is denoted by tp(instance characteristics).
The number of steps any program statement is assigned depends on the kind of statement. For example:
Comments: 0 steps.
Assignment statements (which do not involve any calls to other algorithms): 1 step.
Iterative statements such as for, while and repeat-until: the step count of the control part of the statement.
1. We introduce a variable count into the program, with initial value 0. Statements to increment count by the appropriate amount are introduced into the program, so that each time a statement in the original program is executed, count is incremented by the step count of that statement.

Algorithm:
Algorithm sum(a, n)
{
    s := 0.0;

    count := count + 1;      // for s := 0.0
    for i := 1 to n do
    {
        count := count + 1;  // for the for statement
        s := s + a[i];
        count := count + 1;  // for the assignment
    }
    count := count + 1;      // for the last test of the for loop
    count := count + 1;      // for the return
    return s;
}
If count is zero to start with, then it will be 2n + 3 on termination. So each invocation of sum executes a total of 2n + 3 steps.
The second method to determine the step count of an algorithm is to build a table in which we list the total number of steps contributed by each statement. First determine the number of steps per execution (s/e) of the statement and the total number of times (i.e., the frequency) each statement is executed. By combining these two quantities, the total contribution of all statements, i.e. the step count for the entire algorithm, is obtained.

Statement                    s/e    Frequency    Total
1. Algorithm Sum(a,n)         0        -           0
2. {                          0        -           0
3.    s := 0.0;               1        1           1
4.    for i := 1 to n do      1       n+1         n+1
5.        s := s + a[i];      1        n           n
6.    return s;               1        1           1
7. }                          0        -           0
Total                                             2n+3
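The instrumented counting described above can be reproduced in Python; `sum_with_count` is a hypothetical name, and the bookkeeping mirrors the table's frequencies, so count ends at 2n + 3:

```python
def sum_with_count(a):
    """Sum the elements of a while tallying executed steps.

    Mirrors the instrumented algorithm above; returns (sum, count)
    where count = 2n + 3 for an input of n elements."""
    count = 0
    s = 0.0
    count += 1              # s := 0.0
    for x in a:
        count += 1          # loop-control step for this iteration
        s += x
        count += 1          # assignment s := s + a[i]
    count += 1              # final loop-control test
    count += 1              # return statement
    return s, count
```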

1.5 AVERAGE CASE ANALYSIS Most of the time, average-case analyses are performed under the more or less realistic assumption that all instances of any given size are equally likely. For sorting problems, it is simple to assume also that all the elements to be sorted are distinct. Suppose we have n distinct elements to sort by insertion and all n! permutations of these elements are equally likely.


Analysis

To determine the time taken on average by the algorithm, we could add the times required to sort each of the possible permutations and then divide by n! the answer thus obtained. An alternative approach, easier in this case, is to analyze directly the time required by the algorithm, reasoning probabilistically as we proceed. For any i, 2 <= i <= n, consider the subarray T[1..i]. The partial rank of T[i] is defined as the position it would occupy if the subarray were sorted. For example, the partial rank of T[4] in [3,6,2,5,1,7,4] is 3, because T[1..4] once sorted is [2,3,5,6]. Clearly the partial rank of T[i] does not depend on the order of the elements in the subarray T[1..i-1].

Best case: this analysis places constraints on the input, other than size, resulting in the fastest possible running time.
Worst case: this analysis places constraints on the input, other than size, resulting in the slowest possible running time.
Average case: this type of analysis results in the average running time over every type of input.
Complexity: complexity refers to the rate at which the required storage or time grows as a function of the problem size.
Asymptotic analysis: expressing the complexity in terms of its relationship to known functions. This type of analysis is called asymptotic analysis.
Asymptotic notation:
Big oh: the function f(n) = O(g(n)) iff there exist positive constants c and n0 such that f(n) <= c·g(n) for all n >= n0.
Omega: the function f(n) = Ω(g(n)) iff there exist positive constants c and n0 such that f(n) >= c·g(n) for all n >= n0.
Theta: the function f(n) = Θ(g(n)) iff there exist positive constants c1, c2 and n0 such that c1·g(n) <= f(n) <= c2·g(n) for all n >= n0.
1.6 Amortized analysis
In computer science, amortized analysis is a method of analyzing algorithms that considers the entire sequence of operations of the program. It allows the establishment of a worst-case bound for the performance of an algorithm irrespective of the inputs by looking at all of the operations. At the heart of the method is the idea that while certain operations may be extremely costly in resources, they cannot occur at a high enough frequency to weigh down the entire program, because the number of less costly operations will far outnumber the costly ones in the long run, "paying back" the program over a number of iterations. It is particularly useful because it guarantees worst-case performance while accounting for the entire set of operations in an algorithm. There are generally three methods for performing amortized analysis: the aggregate method, the accounting method, and the potential method.
All of these give the same answers, and the choice among them is primarily circumstantial and a matter of individual preference.

Aggregate analysis determines the upper bound T(n) on the total cost of a sequence of n operations, then calculates the average cost to be T(n) / n.

The accounting method determines the individual cost of each operation, combining its immediate execution time and its influence on the running time of future operations. Usually, many short-running operations accumulate a "debt" of unfavorable state in small increments, while rare long-running operations decrease it drastically.

The potential method is like the accounting method, but overcharges operations early to compensate for undercharges later.

As a simple example, in a specific implementation of the dynamic array, we double the size of the array each time it fills up. Because of this, array reallocation may be required, and in the worst case an insertion may require O(n) time. However, a sequence of n insertions can always be done in O(n) total time, because the remaining insertions are done in constant time. The amortized time per operation is therefore O(n)/n = O(1).
Another way to see this is to think of a sequence of n operations. There are two possible operations: a regular insertion, which requires a constant time c to perform (assume c = 1), and an array doubling, which requires O(j) time (where j < n is the size of the array at the time of the doubling). Clearly the time to perform these operations is at most the time needed to perform n regular insertions plus the cost of the array doublings that take place in the sequence. There are only as many array doublings in the sequence as there are powers of 2 between 0 and n, i.e. about lg(n) of them. Therefore the cost of a sequence of n operations is strictly less than:
n + Σ_{j=0}^{lg n} 2^j < n + 2n = 3n

The amortized time per operation is the worst-case time bound on a series of n operations divided by n. The amortized time per operation is therefore O(3n)/n = O(1).
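The doubling argument can be checked with a small simulation; `doubling_insertions` is an illustrative name, and the unit-cost model (1 per insertion, j copies when a size-j array is reallocated) is the assumption stated above:

```python
def doubling_insertions(n):
    """Simulate n appends into an array that doubles when full.

    Counts unit costs: 1 per insertion plus `size` copies whenever
    the array is reallocated. The total stays below 3n, so the
    amortized cost per append is O(1)."""
    capacity, size, cost = 1, 0, 0
    for _ in range(n):
        if size == capacity:      # full: double, copying `size` elements
            cost += size
            capacity *= 2
        size += 1
        cost += 1                 # the insertion itself
    return cost
```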

1.7 Recursion:
Recursion may have the following definitions:
- The nested repetition of an identical algorithm is recursion.
- It is a technique of defining an object/process by itself.
- Recursion is a process by which a function calls itself repeatedly until some specified condition has been satisfied.
1.7.1 When to use recursion:

Recursion can be used for repetitive computations in which each action is stated in terms of a previous result. There are two conditions that must be satisfied by any recursive procedure:
1. Each time a function calls itself it should get nearer to the solution.
2. There must be a decision criterion for stopping the process.
In deciding whether to write an algorithm in recursive or non-recursive form, it is always advisable to consider a tree structure for the problem. If the structure is simple, use the non-recursive form. If the tree appears quite bushy, with little duplication of tasks, then recursion is suitable.
The recursive algorithm for finding the factorial of a number is given below.
Algorithm: factorial-recursion
Input: n, the number whose factorial is to be found.
Output: f, the factorial of n
Method:
    if (n = 0)
        f = 1
    else
        f = factorial(n - 1) * n
    if end
algorithm ends.
The general procedure for any recursive algorithm is as follows:
1. Save the parameters, local variables and return address.
2. If the termination criterion is reached, perform the final computation and go to step 3; otherwise, perform partial computations and go to step 1 (initiate a recursive call).

3. Restore the most recently saved parameters, local variables and return address, and go to the latest return address.
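The factorial-recursion algorithm above translates directly into Python:

```python
def factorial(n):
    """Recursive factorial, following the algorithm above.

    Each call gets nearer to the stopping criterion n == 0,
    satisfying both conditions for a valid recursive procedure."""
    if n == 0:
        return 1          # termination criterion
    return factorial(n - 1) * n
```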


1.7.2 Iteration v/s recursion:
Demerits of recursive algorithms:
1. Many programming languages do not support recursion; hence, a recursive mathematical function must be implemented using iterative methods.
2. Even though mathematical functions can be easily implemented using recursion, it is always at the cost of execution time and memory space. For example, the recursion tree for generating 6 numbers of a Fibonacci series is given in fig 2.5. A Fibonacci series is of the form 0, 1, 1, 2, 3, 5, 8, 13, etc., where each number from the third onwards is the sum of the preceding two numbers. It can be noticed from fig 2.5 that f(n-2) is computed twice, f(n-3) is computed thrice and f(n-4) is computed 5 times.
3. A recursive procedure can be called from within or outside itself, and to ensure its proper functioning it has to save, in some order, the return addresses so that a return to the proper location will result when the return to a calling statement is made.
4. Recursive programs need considerably more storage and take more time.

1.7.3 Demerits of iterative methods:
Mathematical functions such as factorial and Fibonacci series generation can be implemented more easily using recursion than iteration. In iterative techniques, looping of statements is very much necessary.

Recursion is a top-down approach to problem solving; it divides the problem into pieces or selects out one key step, postponing the rest. Iteration is more of a bottom-up approach; it begins with what is known and from this constructs the solution step by step. For Fibonacci numbers, the iterative function uses time that is O(n), whereas the naive recursive function has exponential time complexity. It is always true that recursion can be replaced by iteration and stacks. It is also true that a stack can be replaced by a recursive program with no stack.
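The cost contrast can be made concrete by counting calls; the function names and the mutable counter are illustrative choices, not from the text:

```python
def fib_calls(n, counter):
    """Naive recursive Fibonacci; counter[0] tallies the number of calls,
    which grows exponentially because subproblems are recomputed."""
    counter[0] += 1
    if n < 2:
        return n
    return fib_calls(n - 1, counter) + fib_calls(n - 2, counter)

def fib_iter(n):
    """Iterative Fibonacci: O(n) time, each value computed once."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Running both for n = 10 shows the same answer but well over a hundred recursive calls against ten loop iterations.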


1.7.4 SOLVING RECURRENCES (to recur: to happen again or repeatedly)
The indispensable last step when analyzing an algorithm is often to solve a recurrence equation. With a little experience and intuition, most recurrences can be solved by intelligent guesswork. However, there exists a powerful technique that can be used to solve certain classes of recurrences almost automatically: the technique of the characteristic equation, which is the main topic of this section.

1. Intelligent guesswork: This approach generally proceeds in 4 stages.
1. Calculate the first few values of the recurrence.
2. Look for regularity.
3. Guess a suitable general form.
4. Finally, prove it by mathematical induction (perhaps constructive induction).

1) (Fibonacci) Consider the recurrence:
f_n = n                     if n = 0 or n = 1
f_n = f_{n-1} + f_{n-2}     otherwise
We rewrite the recurrence as f_n - f_{n-1} - f_{n-2} = 0. The characteristic polynomial is x² - x - 1 = 0. The roots are:
x = (-(-1) ± √((-1)² + 4)) / 2 = (1 ± √5) / 2


r1 = (1 + √5) / 2   and   r2 = (1 - √5) / 2

The general solution is f_n = C1·r1^n + C2·r2^n.
When n = 0:  f_0 = C1 + C2 = 0                 (1)
When n = 1:  f_1 = C1·r1 + C2·r2 = 1           (2)
From equation (1), C1 = -C2. Substituting C1 in equation (2):
-C2·r1 + C2·r2 = 1
C2·(r2 - r1) = 1
Substituting the values of r1 and r2: r2 - r1 = -√5, so
C2 = -1/√5   and   C1 = 1/√5
Therefore:
f_n = (1/√5) · [ ((1 + √5)/2)^n - ((1 - √5)/2)^n ]

3. Inhomogeneous recurrences:
* The solution of a linear recurrence with constant coefficients becomes more difficult when the recurrence is not homogeneous, that is, when the linear combination is not equal to zero.
* Consider the following recurrence:
a0·t_n + a1·t_{n-1} + ... + ak·t_{n-k} = b^n · p(n)
* The left-hand side is the same as before (homogeneous), but on the right-hand side we have b^n·p(n), where b is a constant and p(n) is a polynomial in n of degree d.
Example (1): Consider the recurrence
t_n - 2·t_{n-1} = 3^n        (A)
In this case, b = 3, p(n) = 1, degree = 0.
The characteristic polynomial is (x - 2)(x - 3) = 0.
The roots are r1 = 2, r2 = 3.
The general solution:
t_n = C1·r1^n + C2·r2^n
t_n = C1·2^n + C2·3^n        (1)
When n = 0:  C1 + C2 = t_0                  (2)
When n = 1:  2·C1 + 3·C2 = t_1              (3)
Substituting n = 1 in equation (A): t_1 - 2·t_0 = 3, so t_1 = 3 + 2·t_0.
Substitute t_1 in equation (3), and multiply equation (2) by 2:
2·C1 + 2·C2 = 2·t_0
2·C1 + 3·C2 = 3 + 2·t_0
Subtracting, -C2 = -3, so C2 = 3.

Substituting C2 = 3 in equation (2):

C1 + C2 = t_0
C1 + 3 = t_0
C1 = t_0 - 3
Therefore t_n = (t_0 - 3)·2^n + 3·3^n
           = max[O((t_0 - 3)·2^n), O(3·3^n)]
           = max[O(2^n), O(3^n)]    (dropping constants)
           = O(3^n)
Example 2: Solve the following recurrence relation:
x(n) = x(n-1) + 5 for n > 1, x(1) = 0
Solution: The above recurrence relation can be written as
x(n) = x(n-1) + 5    if n > 1
x(n) = 0             if n = 1
Consider the relation when n > 1:
x(n) = x(n-1) + 5                ---- (a)
Replacing n by n-1 in (a) and substituting:
x(n) = x(n-2) + 5 + 5
Replacing n by n-2 in (a) and substituting:
x(n) = x(n-3) + 5 + 5 + 5
x(n) = x(n-3) + 3·5
...
Finally,
x(n) = x(n-(n-1)) + (n-1)·5 = x(1) + (n-1)·5 = 0 + (n-1)·5
x(n) = 5(n-1)
4. Change of variables:
* It is sometimes possible to solve more complicated recurrences by making a change of variable.
* In the following example, we write T(n) for the term of the general recurrence, and t_i for the term of a new recurrence obtained from the first by a change of variable.
Example (1): Consider the recurrence
T(n) = 1                 if n = 1
T(n) = 3·T(n/2) + n      if n is a power of 2, n > 1
Reconsider the recurrence we solved by intelligent guesswork in the previous section, but only for the case when n is a power of 2.


* We replace n by 2^i. This is achieved by introducing a new recurrence t_i, defined by t_i = T(2^i).
* This transformation is useful because n/2 becomes (2^i)/2 = 2^{i-1}.
* In other words, our original recurrence, in which T(n) is defined as a function of T(n/2), gives way to one in which t_i is defined as a function of t_{i-1}, precisely the type of recurrence we have learned to solve.
t_i = T(2^i) = 3·T(2^{i-1}) + 2^i
t_i = 3·t_{i-1} + 2^i
t_i - 3·t_{i-1} = 2^i        (A)
In this case, b = 2, p(n) = 1, degree = 0.
So the characteristic equation is (x - 3)(x - 2) = 0.
The roots are r1 = 3, r2 = 2, and the general solution is
t_i = C1·3^i + C2·2^i
We use the fact that T(2^i) = t_i, and thus T(n) = t_{log2 n} when n = 2^i, to obtain
T(n) = C1·3^{log2 n} + C2·2^{log2 n}
T(n) = C1·n^{log2 3} + C2·n        [since a^{log2 n} = n^{log2 a}]
when n is a power of 2, which is sufficient to conclude that
T(n) ∈ O(n^{log2 3})    when n is a power of 2.
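As a sanity check, the recurrence and its closed form can be compared for small powers of 2. The constants C1 = 3 and C2 = -2 below are worked out from the initial conditions T(1) = 1 and T(2) = 5 (which follow from the recurrence above); the function names are illustrative:

```python
import math

def T(n):
    """The recurrence T(n) = 3T(n/2) + n for n a power of 2, with T(1) = 1."""
    if n == 1:
        return 1
    return 3 * T(n // 2) + n

def T_closed(n):
    """Closed form T(n) = C1*n^(log2 3) + C2*n.

    With T(1) = 1 and T(2) = 5 the constants come out as
    C1 = 3 and C2 = -2; rounding absorbs floating-point error."""
    return round(3 * n ** math.log2(3) - 2 * n)
```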

Check your progress
1. Explain how to analyze the different control structures of algorithms.
2. Write a note on average case analysis.
3. Write a recursive algorithm for generating the Fibonacci series, construct the recurrence relation and solve it.
4. Solve the recurrence relation x(n) = x(n-1) + 5 for n > 1, x(1) = 0.


SUMMARY:
Control structures analysis: Essentially, the analysis of algorithms proceeds from the inside out. Determine first the time required by individual instructions, then combine these times according to the control structures that combine the instructions in the program.
Recursion can be used for repetitive computations in which each action is stated in terms of a previous result.
Solving recurrences uses the following steps: calculate the first few values of the recurrence; look for regularity; guess a suitable general form; and finally prove it by mathematical induction (perhaps constructive induction).
1.7 KEYWORDS
1. Big oh, Omega and Theta: symbols of asymptotic notation.
1.8 ANSWERS TO CHECK YOUR PROGRESS
1. 1.1
2. 1.5
3. 1.7
4. 1.7
1.9 UNIT-END EXERCISES AND ANSWERS
1. Write a note on amortized analysis.
2. Solve the recurrence relation x(n) = x(n-1)·n if n > 0, where x(0) = 1.
Answers: SEE 1. 1.6 2. 1.7
1.10 SUGGESTED READINGS
1. Introduction to The Design and Analysis of Algorithms by Anany Levitin
2. Analysis and Design of Algorithms with C/C++, 3rd edition, by Prof. Nandagopalan
3. Analysis and Design of Algorithms by Padma Reddy
4. Even, Shimon, "Graph Algorithms", Computer Science Press.


MODULE-1, UNIT 4: SEARCHING AND SORTING
Structure
1.0 Objectives
1.1 Searching algorithms: linear search, binary search
1.2 Sorting: selection sort, insertion sort, bubble sort
1.3 Summary
1.4 Keywords
1.5 Answers to check your progress
1.6 Unit-end exercises and answers
1.7 Suggested readings

1.0 OBJECTIVES

At the end of this unit you will be able to:
Know how to search in different ways.
Identify which searching technique is better.
Sort in different ways, for example insertion sort and selection sort.
Measure the performance of searching and sorting techniques.

1.1 SEARCHING ALGORITHMS
Let us assume that we have a sequential file and we wish to retrieve an element matching a key k. Then we have to search the entire file from the beginning till the end to check whether an element matching k is present in the file or not. There are a number of complex searching algorithms to serve the purpose of searching. The linear search and binary search methods are relatively straightforward methods of searching.
1.1.1 Sequential search (linear search): In this method, we start the search from the beginning of the list and examine each element till the end of the list. If the desired element is found, we stop the search and return the index of that element. If the item is not found and the list is exhausted, the search returns a zero value. In the worst case the item is not found, or the search item is the last (nth) element. In both situations we must examine all n elements of the array, so the order of magnitude


or complexity of the sequential search is n, i.e. O(n). The execution time for this algorithm is proportional to n; that is, the algorithm executes in linear time.
The algorithm for sequential search is as follows.
Algorithm: sequential search
Input: A, vector of n elements; k, search element
Output: i, index of k
i = 1
while (i <= n)
{
    if (A[i] = k)
    {
        write("search successful")
        write("k is at location", i)
        exit()
    }
    else
        i = i + 1
    if end
} while end
write("search unsuccessful")
algorithm ends.
1.1.2 Binary search: Binary search is also a relatively simple method. For this method it is necessary to have the vector in alphabetical or numerically increasing order. A search for a particular item with key X resembles the search for a word in a dictionary. The approximate middle entry is located and its key value is examined. If the middle value is greater than X, then the list is chopped off at the (mid-1)th location; the list is thus reduced to half the original list, and the middle entry of the left-reduced list is examined in a similar manner. On the other hand, if the middle value is less than X, then the list is chopped off at the (mid+1)th location and the middle entry of the right-reduced list is examined. This procedure is repeated until the item is found or the search interval is exhausted.
The algorithm for binary search is as follows.
Algorithm: binary search
Input: A, sorted vector of n elements; k, search element
Output: mid, index of k
low = 1, high = n
while (low <= high)
{
    mid = (low + high) / 2
    if (k = A[mid])
    {
        write("search successful")
        write("k is at location", mid)
        exit()
    }
    else if (k < A[mid])
        high = mid - 1
    else
        low = mid + 1
} while end
write("search unsuccessful")
algorithm ends.
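Both algorithms above can be sketched in Python; the function names are illustrative, and 1-based indices are returned to match the pseudocode's convention (0 meaning "not found"):

```python
def sequential_search(A, k):
    """Linear search: scan until k is found.

    Returns a 1-based index or 0; the worst case examines
    all n elements, O(n)."""
    for i, x in enumerate(A, start=1):
        if x == k:
            return i
    return 0

def binary_search(A, k):
    """Binary search on a sorted list: halve the interval each step, O(log n).

    Returns a 1-based index or 0 if k is absent."""
    low, high = 1, len(A)
    while low <= high:
        mid = (low + high) // 2
        if A[mid - 1] == k:       # mid is 1-based, list index is 0-based
            return mid
        elif k < A[mid - 1]:
            high = mid - 1        # search the left-reduced list
        else:
            low = mid + 1         # search the right-reduced list
    return 0
```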

1.2 Sorting
Several sorting algorithms are presented here: selection sort, insertion sort and bubble sort. Sorting by insertion is among the simplest methods and does not require any additional storage.

1.2.1 SELECTION SORT

Selection sort is among the simplest of sorting techniques, and it works very well for small files. Furthermore, despite its evidently "naive" approach, selection sort has a quite important application: because each item is actually moved at most once, selection sort is a method of choice for sorting files with very large objects (records) and small keys.
Here's a step-by-step example to illustrate the selection sort algorithm using numbers:
Original array: 6 3 5 4 9 2 7
1st pass -> 2 3 5 4 9 6 7 (2 and 6 were swapped)
2nd pass -> 2 3 5 4 9 6 7 (no swap; 3 is already in place)
3rd pass -> 2 3 4 5 9 6 7 (4 and 5 were swapped)
4th pass -> 2 3 4 5 6 9 7 (6 and 9 were swapped)
5th pass -> 2 3 4 5 6 7 9 (7 and 9 were swapped)
6th pass -> 2 3 4 5 6 7 9 (no swap)


Note: There were 7 keys in the list and thus 6 passes were required. However, only 4 swaps took place.

Algorithm: Selection sort
for i ← 1 to n-1 do
    min_j ← i
    min_x ← A[i]
    for j ← i+1 to n do
        if A[j] < min_x then
            min_j ← j
            min_x ← A[j]
    A[min_j] ← A[i]
    A[i] ← min_x

The time required by the selection sort algorithm is not very sensitive to the original order of the array to be sorted: the test "if A[j] < min_x" is executed exactly the same number of times in every case. The variation in time is only due to the number of times the "then" part (i.e., min_j ← j; min_x ← A[j]) of this test is executed; this number is largest when the array is initially sorted in descending order.
Selection sort spends most of its time trying to find the minimum element in the "unsorted" part of the array. This clearly shows the similarity between selection sort and bubble sort: bubble sort "selects" the maximum remaining element at each stage, but wastes some effort imparting some order to the "unsorted" part of the array. Selection sort is quadratic in both the worst and the average case, and requires no extra memory.


For each i from 1 to n-1, there is one exchange and n-i comparisons, so there is a total of n-1 exchanges and (n-1) + (n-2) + ... + 2 + 1 = n(n-1)/2 comparisons. These observations hold no matter what the input data is. The number of times the minimum is updated (the "then" part above) could be quadratic in the worst case, but in the average case this quantity is O(n log n).
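The pseudocode above corresponds to the following Python sketch (0-based indices; `selection_sort` is an illustrative name):

```python
def selection_sort(a):
    """In-place selection sort.

    Each pass finds the minimum of the unsorted part and swaps it
    into place: at most one move per position, and always
    n(n-1)/2 comparisons regardless of the input order."""
    n = len(a)
    for i in range(n - 1):
        min_j = i
        for j in range(i + 1, n):
            if a[j] < a[min_j]:
                min_j = j
        a[i], a[min_j] = a[min_j], a[i]
    return a
```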

1.2.2 Insertion sort
If the first few objects are already sorted, an unsorted object can be inserted into the sorted set in its proper place. This is called insertion sort. The algorithm considers the elements one at a time, inserting each in its suitable place among those already considered (keeping them sorted). Insertion sort is an example of an incremental algorithm: it builds the sorted sequence one number at a time. This is perhaps the simplest example of the incremental insertion technique, where we build up a complicated structure on n items by first building it on n-1 items and then making the necessary changes to add the last item.
The given sequences are typically stored in arrays. We also refer to the numbers as keys. Along with each key may be additional information, known as satellite data. [Note that "satellite data" does not necessarily come from a satellite!]
Algorithm: Insertion sort
It works the way you might sort a hand of playing cards:
1. We start with an empty left hand [sorted array] and the cards face down on the table [unsorted array].
2. Then remove one card [key] at a time from the table [unsorted array], and insert it into the correct position in the left hand [sorted array].
3. To find the correct position for the card, we compare it with each of the cards already in the hand, from right to left.
Note that at all times, the cards held in the left hand are sorted, and these cards were originally the top cards of the pile on the table.
Pseudo code
We use a procedure INSERTION_SORT. It takes as parameters an array A[1..n] and the length n of the array. The array A is sorted in place: the numbers are rearranged within the array, with at most a constant number of them outside the array at any time.

INSERTION_SORT (A)


1. FOR j ← 2 TO length[A] DO
2.     key ← A[j]
3.     {Put A[j] into the sorted sequence A[1 .. j-1]}
4.     i ← j - 1
5.     WHILE i > 0 and A[i] > key DO
6.         A[i+1] ← A[i]
7.         i ← i - 1
8.     A[i+1] ← key
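The pseudocode translates to Python as follows (0-based indices, so the outer loop starts at j = 1; `insertion_sort` is an illustrative name):

```python
def insertion_sort(A):
    """In-place insertion sort following the pseudocode above.

    Shift larger elements one position to the right, then drop
    the key into the evacuated position."""
    for j in range(1, len(A)):
        key = A[j]
        i = j - 1
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]
            i -= 1
        A[i + 1] = key
    return A
```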

Example: The following figure (from CLRS) shows the operation of INSERTION-SORT on the array A = (5, 2, 4, 6, 1, 3). Each part shows what happens for a particular iteration with the value of j indicated; j indexes the "current card" being inserted into the hand.

Read the figure row by row. Elements to the left of A[j] that are greater than A[j] move one position to the right, and A[j] moves into the evacuated position.
Analysis
Since the running time of an algorithm on a particular input is the number of steps executed, we must define "step" independent of the machine. We say that a statement that takes c_i steps to execute and is executed n times contributes c_i·n to the total running time of the algorithm. To compute the running time T(n), we sum the products of the cost and times columns [see CLRS page 26]; that is, the running time of the algorithm is the sum of the running times of each statement executed. Letting t_j be the number of times the while-loop test (in line 5) is executed for that value of j, we have
T(n) = c1·n + c2·(n-1) + 0·(n-1) + c4·(n-1) + c5·Σ_{j=2..n} t_j + c6·Σ_{j=2..n} (t_j - 1) + c7·Σ_{j=2..n} (t_j - 1) + c8·(n-1)
Note that the value of j runs from 2 to n. We have


T(n) = c1·n + c2·(n-1) + c4·(n-1) + c5·Σ_{j=2..n} t_j + c6·Σ_{j=2..n} (t_j - 1) + c7·Σ_{j=2..n} (t_j - 1) + c8·(n-1)        Equation (1)
Best case
The best case occurs if the array is already sorted. For each j = 2, 3, ..., n, we find that A[i] is less than or equal to the key when i has its initial value of j-1; in other words, when i = j-1, we always find the key's place upon the first run of the while-loop test. Therefore t_j = 1 for j = 2, 3, ..., n, and the best-case running time can be computed using equation (1) as follows:
T(n) = c1·n + c2·(n-1) + c4·(n-1) + c5·(n-1) + c8·(n-1)
T(n) = (c1 + c2 + c4 + c5 + c8)·n - (c2 + c4 + c5 + c8)
This running time can be expressed as a·n + b for constants a and b that depend on the statement costs c_i; therefore T(n) is a linear function of n. The punch line here is that the while-loop in line 5 is executed only once for each j. This happens if the given array A is already sorted.
T(n) = a·n + b = O(n), a linear function of n.

Worst case
The worst case occurs if the array is sorted in reverse (i.e., decreasing) order. In reverse order, we always find that A[i] is greater than the key in the while-loop test, so we must compare each element A[j] with every element in the entire sorted subarray A[1 .. j-1]. Since the while-loop exits because i reaches 0, there is one additional test after the j-1 element comparisons; therefore t_j = j for j = 2, 3, ..., n, and the worst-case running time can be computed using equation (1) as follows:
T(n) = c1·n + c2·(n-1) + c4·(n-1) + c5·Σ_{j=2..n} j + c6·Σ_{j=2..n} (j-1) + c7·Σ_{j=2..n} (j-1) + c8·(n-1)
Using the summations Σ_{j=2..n} j = n(n+1)/2 - 1 and Σ_{j=2..n} (j-1) = n(n-1)/2 [see CLRS page 27], we have


T(n) = c1·n + c2·(n-1) + c4·(n-1) + c5·[n(n+1)/2 - 1] + c6·[n(n-1)/2] + c7·[n(n-1)/2] + c8·(n-1)
T(n) = (c5/2 + c6/2 + c7/2)·n² + (c1 + c2 + c4 + c5/2 - c6/2 - c7/2 + c8)·n - (c2 + c4 + c5 + c8)
This running time can be expressed as a·n² + b·n + c for constants a, b and c that again depend on the statement costs c_i; therefore T(n) is a quadratic function of n. Here the punch line is that in the worst case, line 5 is executed j times for each j. This happens if array A starts out in reverse order.
T(n) = a·n² + b·n + c = O(n²), a quadratic function of n.

The graph shows the n² complexity of the insertion sort.
Worst-case and average-case analysis
We usually concentrate on finding the worst-case running time: the longest running time for any input of size n. The reasons for this choice are as follows:

The worst-case running time gives a guaranteed upper bound on the running time for any input; it guarantees that the algorithm will never take any longer.
For some algorithms, the worst case occurs often. For example, when searching, the worst case often occurs when the item being searched for is not present, and searches for absent items may be frequent.
Why not analyze the average case? Because it's often about as bad as the worst case.

Example: Suppose that we randomly choose n numbers as the input to insertion sort.


On average, the key in A[j] is less than half the elements in A[1 .. j − 1] and greater than the other half. This implies that, on average, the while loop has to look halfway through the sorted subarray A[1 .. j − 1] to decide where to place the key. This means that tj ≈ j/2. Although the average-case running time is approximately half of the worst-case running time, it is still a quadratic function of n.
Stability
Since multiple keys with the same value are placed in the sorted array in the same order that they appear in the input array, insertion sort is stable.
Extra Memory
This algorithm does not require extra memory.

For insertion sort we say the worst-case running time is Θ(n²) and the best-case running time is Θ(n). Insertion sort uses no extra memory: it sorts in place. The running time of insertion sort depends on the original order of its input. It takes Θ(n²) time in the worst case, despite the fact that time on the order of n is sufficient for large instances in which the items are already sorted.

1.2.2 Bubble sort

Bubble sort, also known as sinking sort, is a simple sorting algorithm that works by repeatedly stepping through the list to be sorted, comparing each pair of adjacent items and swapping them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which indicates that the list is sorted. The algorithm gets its name from the way smaller elements "bubble" to the top of the list. Because it only uses comparisons to operate on elements, it is a comparison sort.
Let us take the array of numbers "5 1 4 2 8" and sort it from lowest to greatest using the bubble sort algorithm. In each step, the pair being compared is shown before and after the comparison.
First Pass:
(5 1 4 2 8) → (1 5 4 2 8), the algorithm compares the first two elements and swaps them
(1 5 4 2 8) → (1 4 5 2 8), swap since 5 > 4
(1 4 5 2 8) → (1 4 2 5 8), swap since 5 > 2
(1 4 2 5 8) → (1 4 2 5 8), since these elements are already in order (8 > 5), the algorithm does not swap them
Second Pass:
(1 4 2 5 8) → (1 4 2 5 8)

(1 4 2 5 8) → (1 2 4 5 8), swap since 4 > 2
(1 2 4 5 8) → (1 2 4 5 8)
(1 2 4 5 8) → (1 2 4 5 8)
Now the array is already sorted, but our algorithm does not know whether it is completed. The algorithm needs one whole pass without any swap to know it is sorted.
Third Pass:
(1 2 4 5 8) → (1 2 4 5 8)
(1 2 4 5 8) → (1 2 4 5 8)
(1 2 4 5 8) → (1 2 4 5 8)
(1 2 4 5 8) → (1 2 4 5 8)
Finally, the array is sorted, and the algorithm can terminate.
Algorithm bubbleSort(A : list of sortable items)
n ← length(A)
for j ← 1 to n−1 do
    for i ← 0 to n−j−1 do
        if A[i] > A[i+1] then
            swap(A[i], A[i+1])
        end if
    end for
end for
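The passes above can be reproduced directly. Here is a runnable Python sketch of the pseudocode (illustrative, not from the text); the early-exit check described earlier — stopping after a whole pass with no swaps — is added as an assumption, since the pseudocode itself always makes n−1 passes:

```python
def bubble_sort(a):
    """Bubble sort with an early exit after a pass that makes no swaps."""
    a = list(a)
    n = len(a)
    for j in range(n - 1):
        swapped = False
        for i in range(n - j - 1):
            if a[i] > a[i + 1]:               # adjacent pair in the wrong order
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:                       # one whole pass without any swap: sorted
            break
    return a

print(bubble_sort([5, 1, 4, 2, 8]))           # [1, 2, 4, 5, 8]
```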

Analysis: The number of comparisons is
T(n) = Σ(j=1 to n−1) Σ(i=0 to n−j−1) 1 = Σ(j=1 to n−1) (n − j) = (n − 1) + (n − 2) + ... + 3 + 2 + 1 = n(n − 1)/2
So, the time complexity of bubble sort is Θ(n²).

Check your progress
1. Write an algorithm for sequential search and trace it for the input {1, 9, 2, 4, 6, 8}.
2. Write algorithms for bubble sort and selection sort. Apply them to the following set of numbers: {5, 8, 3, 2, 1, 9}. Find the time complexity of these algorithms.
3. Write an algorithm for insertion sort. Apply it to the following set of numbers: {6, 8, 3, 6, 1, 9}. Find the time complexity.
1.3 SUMMARY

Sorting: the process of arranging elements in either ascending or descending order. Examples: insertion sort, bubble sort, selection sort.
Searching: the process of finding an element in a set of n elements. Examples: linear search, binary search.
1.9 KEYWORDS
Binary search: searching for an element by the divide and conquer technique.
1.10 ANSWERS TO CHECK YOUR PROGRESS
1. 1
2. 1.2.1 & 1.2.3
3. 1.2.2
1.7 UNIT-END EXERCISES AND ANSWERS
5. Apply the insertion sort algorithm to the set A L G O R I T H M to sort it in ascending order.
6. Apply the binary search algorithm to search for G in the set A B C F H Z X.
Answers: SEE
1. 1.1
2. 1.2
1.10 SUGGESTED READINGS

1. Introduction to the Design and Analysis of Algorithms by Anany Levitin
2. Analysis and Design of Algorithms with C/C++, 3rd edition, by Prof. Nandagopalan
3. Analysis and Design of Algorithms by Padma Reddy
4. Even, Shimon, "Graph Algorithms", Computer Science Press.


MODULE-3, UNIT 2: DIVIDE AND CONQUER
Structure
2.0 Objectives
1.1 Introduction
1.2 General structure of divide and conquer
1.3 Applications: finding minimum and maximum
1.4 Recurrence equations
1.6 Summary
1.7 Key words
1.8 Answers to check your progress
1.9 Unit-end exercises and answers
1.10 Suggested readings
4.0 OBJECTIVES

At the end of this unit you will be able to
Find how to apply divide and conquer.
Find the time complexity of divide and conquer algorithms.
Identify recurrence relations for an algorithm.
Know how to solve recurrence relations.

INTRODUCTION
The divide and conquer method of designing algorithms is among the best known methods of solving a problem. Now, let us see "What is the divide and conquer technique? What is the general plan using which these algorithms work?"
Definition: Divide and conquer is a top-down technique for designing algorithms that consists of dividing the problem into smaller subproblems, hoping that the solutions of the subproblems are easier to find. The solutions of all smaller problems are then combined to get a solution for the original problem.
The divide and conquer technique of solving a problem involves three steps at each level of the recursion:
Divide: The problem is divided into a number of subproblems.


Conquer: The subproblems are conquered by solving them recursively. If the subproblems are small in size, they can be solved using a straightforward method.
Combine: The solutions of the subproblems are combined to get the solution for the larger problem.

1.1.1 Master's theorem to solve the recurrence relation
The master theorem concerns recurrence relations of the form
T(n) = a·T(n/b) + f(n),   where a ≥ 1 and b > 1 are constants.

In the application to the analysis of a recursive algorithm, the constants and function take on the following significance:
n is the size of the problem.
a is the number of subproblems in the recursion.
n/b is the size of each subproblem. (Here it is assumed that all subproblems are essentially the same size.)
f(n) is the cost of the work done outside the recursive calls, which includes the cost of dividing the problem and the cost of merging the solutions to the subproblems.

It is possible to determine an asymptotic tight bound in these three cases, when f(n) = Θ(n^d):
T(n) = Θ(n^d)           if a < b^d
T(n) = Θ(n^d · log n)   if a = b^d
T(n) = Θ(n^(log_b a))   if a > b^d
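As a sanity check, the three cases can be encoded directly. The following small sketch (function name is my own, for illustration only) classifies a recurrence T(n) = a·T(n/b) + Θ(n^d):

```python
import math

def master_case(a, b, d):
    """Classify T(n) = a*T(n/b) + Theta(n^d) by the simplified master theorem."""
    if a < b ** d:
        return f"Theta(n^{d})"
    if a == b ** d:
        return f"Theta(n^{d} log n)"
    return f"Theta(n^{math.log(a, b):.3f})"   # a > b^d: Theta(n^log_b a)

print(master_case(2, 2, 1))   # merge sort, T(n) = 2T(n/2) + Theta(n): Theta(n^1 log n)
print(master_case(8, 2, 2))   # eight half-size matrix products: Theta(n^3.000)
print(master_case(7, 2, 2))   # seven half-size matrix products: Theta(n^2.807)
```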

4.2 MAXIMUM AND MINIMUM

Let us consider another simple problem that can be solved by the divide-and-conquer technique: finding the maximum and minimum items in a set of n elements. In analyzing the time complexity of this algorithm, we once again concentrate on the number of element comparisons.


More importantly, when the elements in a[1:n] are polynomials, vectors, very large numbers, or strings of characters, the cost of an element comparison is much higher than the cost of the other operations. Hence, the time is determined mainly by the total cost of the element comparisons.

Algorithm StraightMaxMin(a, n, max, min)
// Set max to the maximum and min to the minimum of a[1:n]
{
    max := min := a[1];
    for i := 2 to n do
    {
        if (a[i] > max) then max := a[i];
        if (a[i] < min) then min := a[i];
    }
}
Algorithm: Straightforward Maximum and Minimum

StraightMaxMin requires 2(n − 1) element comparisons in the best, average, and worst cases. An immediate improvement is possible by realizing that the comparison a[i] < min is necessary only when a[i] > max is false. Hence we can replace the contents of the for loop by:
    if (a[i] > max) then max := a[i];
    else if (a[i] < min) then min := a[i];
Now the best case occurs when the elements are in increasing order; the number of element comparisons is (n − 1). The worst case occurs when the elements are in decreasing order; the number of element comparisons is 2(n − 1). The average number of element comparisons is less than 2(n − 1): on the average, a[i] is greater than max half the time, and so the average number of comparisons is 3n/2 − 1.
A divide-and-conquer algorithm for this problem would proceed as follows:

Let P = (n, a[i], ..., a[j]) denote an arbitrary instance of the problem. Here n is the number of elements in the list (a[i], ..., a[j]) and we are interested in


finding the maximum and minimum of the list. If the list has more than 2 elements, P has to be divided into smaller instances. For example, we might divide P into the two instances P1 = ([n/2], a[1], ..., a[n/2]) and P2 = (n − [n/2], a[[n/2] + 1], ..., a[n]). After having divided P into two smaller subproblems, we can solve them by recursively invoking the same divide-and-conquer algorithm.

Algorithm: Recursively Finding the Maximum and Minimum Using the Divide and Conquer Technique
Algorithm MaxMin(i, j, max, min)
// a[1:n] is a global array; parameters i and j are integers, 1 <= i <= j <= n.
// The effect is to set max and min to the largest and smallest values in a[i:j], respectively.
{
    if (i = j) then max := min := a[i];
    else if (i = j − 1) then   // another case of Small(P)
    {
        if (a[i] < a[j]) then
        {
            max := a[j];
            min := a[i];
        }
        else
        {
            max := a[i];
            min := a[j];
        }
    }
    else
    {
        // If P is not small, divide P into subproblems.
        // Find where to split the set.
        mid := [(i + j)/2];
        // Solve the subproblems.
        MaxMin(i, mid, max, min);
        MaxMin(mid + 1, j, max1, min1);
        // Combine the solutions.
        if (max < max1) then max := max1;
        if (min > min1) then min := min1;
    }
}


The procedure is initially invoked by the statement MaxMin(1, n, x, y). Suppose we simulate MaxMin on the following 9 elements:

A[1:9]:  22  13  -5  -8  15  60  17  31  47
A good way of keeping track of recursive calls is to build a tree by adding a node each time a new call is made. For this algorithm, each node has four items of information: i, j, max, and min. Examining the figure, we see that the root node contains 1 and 9 as the values of i and j corresponding to the initial call to MaxMin. This execution produces two new calls to MaxMin, where i and j have the values 1, 5 and 6, 9 respectively, and thus splits the set into two subsets of approximately the same size. From the tree we can immediately see that the maximum depth of recursion is 4 (including the first call). The numbers in the upper left corner of each node represent the order in which max and min are assigned values.
Number of element comparisons: If T(n) represents this number, then the resulting recurrence relation is
T(n) = T([n/2]) + T([n/2] rounded up) + 2   for n > 2
T(n) = 1                                    for n = 2
T(n) = 0                                    for n = 1

When n is a power of 2, n = 2^k for some positive integer k, then
T(n) = 2T(n/2) + 2
     = 2(2T(n/4) + 2) + 2 = 4T(n/4) + 4 + 2
     ...
     = 2^(k−1)·T(2) + Σ(i=1 to k−1) 2^i
     = 2^(k−1) + 2^k − 2
     = n/2 + n − 2
T(n) = 3n/2 − 2
Note that 3n/2 − 2 is the best-, average-, and worst-case number of comparisons when n is a power of 2.
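The count can be confirmed empirically. Below is a Python sketch of MaxMin (illustrative, 0-based indices, names my own) that counts element comparisons:

```python
def max_min(a, i, j, counts):
    """Return (max, min) of a[i..j], accumulating element comparisons in counts[0]."""
    if i == j:                           # one element: no comparison
        return a[i], a[i]
    if i == j - 1:                       # small instance: two elements, one comparison
        counts[0] += 1
        return (a[j], a[i]) if a[i] < a[j] else (a[i], a[j])
    mid = (i + j) // 2
    max1, min1 = max_min(a, i, mid, counts)
    max2, min2 = max_min(a, mid + 1, j, counts)
    counts[0] += 2                       # combine: one max comparison, one min comparison
    return max(max1, max2), min(min1, min2)

counts = [0]
print(max_min([22, 13, -5, -8, 15, 60, 17, 31, 47], 0, 8, counts))   # (60, -8)

counts = [0]
max_min(list(range(16)), 0, 15, counts)  # n = 16, a power of 2
print(counts[0])                         # 3*16/2 - 2 = 22
```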


4.3 SOLVING RECURRENCE EQUATIONS

We solve recurrence equations often in analyzing the complexity of algorithms, circuits, and other such cases.
1.3.1 Homogeneous recurrence equations
A homogeneous recurrence equation is written as:
a0·tn + a1·tn−1 + ... + ak·tn−k = 0.
Solution technique:
Step 1: Set up the corresponding characteristic equation:
a0·x^n + a1·x^(n−1) + ... + ak·x^(n−k) = 0
x^(n−k)·[a0·x^k + a1·x^(k−1) + ... + ak] = 0
a0·x^k + a1·x^(k−1) + ... + ak = 0   [for x ≠ 0]
Step 2: Solve the characteristic equation as a polynomial equation. Say the real roots are r1, r2, ..., rk. Note that there are k roots for a k-th order polynomial equation.
Step 3: The general solution of the original recurrence equation is:
tn = Σ(i=1 to k) ci·ri^n
Step 4: Using the initial conditions (if available), solve for the coefficients in the above equation in order to find the particular solution.
Example 1: Solve the recurrence equation tn − 3tn−1 − 4tn−2 = 0, for n >= 2. {Initial conditions: t0 = 0, t1 = 1}
Characteristic equation:
x^n − 3x^(n−1) − 4x^(n−2) = 0
x^(n−2)·[x² − 3x − 4] = 0
x² − 3x − 4 = 0
x² + x − 4x − 4 = 0
x(x + 1) − 4(x + 1) = 0
(x + 1)(x − 4) = 0
Therefore the roots are x = −1, 4. So the general solution of the given recurrence equation is:
tn = c1·(−1)^n + c2·(4^n)
Use t0 = c1 + c2 = 0 and t1 = −c1 + 4c2 = 1. [Note, we need two initial conditions for two coefficients.]
Solving for c1 and c2: c1 = −(1/5), c2 = (1/5).
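The constants just found can be checked numerically. A small sketch (helper names are my own, for illustration) iterates the recurrence and compares it with c1·(−1)^n + c2·4^n:

```python
def t_recurrence(n):
    # t_n = 3*t_{n-1} + 4*t_{n-2}, with initial conditions t_0 = 0, t_1 = 1
    t = [0, 1]
    for i in range(2, n + 1):
        t.append(3 * t[i - 1] + 4 * t[i - 2])
    return t[n]

def t_closed(n):
    # c1*(-1)^n + c2*4^n with c1 = -1/5, c2 = 1/5, i.e. (4^n - (-1)^n)/5,
    # which is always an integer
    return (4 ** n - (-1) ** n) // 5

for n in range(10):
    assert t_recurrence(n) == t_closed(n)
print("closed form matches the recurrence for n = 0..9")
```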

So the particular solution is: tn = (1/5)[4^n − (−1)^n] = Θ(4^n).
1.3.2 Inhomogeneous recurrence equations
a0·tn + a1·tn−1 + ... + ak·tn−k = b^n·p(n), where b is a constant and p(n) is a polynomial in n.
Solution technique:
Step 0: Homogenize the given equation to an equivalent homogeneous recurrence equation.
Steps 1 through 3 (or 4) are the same as in the case of solving a homogeneous recurrence equation.
Example 2: tn − 2tn−1 = 3^n. [Note, this is a special case with p(n) = 1, a polynomial of 0-th order, and there is no initial condition, so we get the general solution only.]
Transform with n → n+1:  tn+1 − 2tn = 3^(n+1)   ... Eqn (1)
Multiply the original equation by 3 on both sides:  3tn − 6tn−1 = 3^(n+1)   ... Eqn (2)
Subtract Eqn (2) from Eqn (1):  tn+1 − 5tn + 6tn−1 = 0,
which is a homogeneous recurrence equation equivalent to the given inhomogeneous equation.
Characteristic equation: x² − 5x + 6 = 0, which is (x − 2)(x − 3) = 0. So the roots are x = 2, 3.
The general solution of the given recurrence equation is:
tn = c1·(2^n) + c2·(3^n) = Θ(3^n)
Homogenizing may need multiple steps.
Example 3: tn − 2tn−1 = n

So, tn+1 − 2tn = n + 1.
Subtracting the former (given) equation from the latter:  tn+1 − 3tn + 2tn−1 = 1.
This is still not a homogeneous equation. Second stage of homogenizing:
tn+2 − 3tn+1 + 2tn = 1.
Subtract once again:  tn+2 − 4tn+1 + 5tn − 2tn−1 = 0.
Now it is a homogeneous recurrence equation, and one can solve it in the usual way.
1.3.3 Solving recurrence equations using Master's theorem
A special type of recurrence equation frequently encountered in algorithm analysis is
T(n) = a·T(n/b) + c·n^i,
for some constant integer i and constants a and c. Three cases:
a = b^i: the solution is T(n) = O(n^i · log_b n);
a > b^i: the solution is T(n) = O(n^(log_b a));
a < b^i: the solution is T(n) = O(n^i).
Example: Matrix Multiplication (Strassen's algorithm)

Consider the problem of computing the product C = A·B of two n×n matrices A and B, the elements of which are given by c(i,j) = Σ(k=1 to n) a(i,k)·b(k,j).

The direct implementation of this equation results in an O(n³) running time. In this section we show that the use of a divide-and-conquer strategy results in a slightly better asymptotic running time. To implement a divide-and-conquer algorithm we must break the given problem into several subproblems that are similar to the original one. In this instance we view each of the n×n matrices as a 2×2 matrix, the elements of which are n/2 × n/2 submatrices. Thus the original matrix multiplication can be written in terms of these submatrices:


C11 = A11·B11 + A12·B21        C12 = A11·B12 + A12·B22
C21 = A21·B11 + A22·B21        C22 = A21·B12 + A22·B22
where each Cij, Aij, and Bij is an n/2 × n/2 matrix. Computing C this way requires eight n/2 × n/2 matrix products (divide) followed by four n/2 × n/2 matrix additions (conquer). Since matrix addition is an O(n²) operation, the total running time for the multiplication operation is given by the recurrence:
T(n) = 8·T(n/2) + O(n²)
Note that this is an instance of the general recurrence given above, with a = 8, b = 2, and k = 2. We can obtain the solution directly: since a > b^k, the total running time is O(n^(log₂ 8)) = O(n³). But this is no better than the original, direct algorithm!

Fortunately, it turns out that one of the eight matrix multiplications is redundant. Consider the following series of seven n/2 × n/2 matrices:
M1 = (A11 + A22)(B11 + B22)
M2 = (A21 + A22)·B11
M3 = A11·(B12 − B22)
M4 = A22·(B21 − B11)
M5 = (A11 + A12)·B22
M6 = (A21 − A11)(B11 + B12)
M7 = (A12 − A22)(B21 + B22)


Each equation above has only one multiplication. Ten additions and seven multiplications are required to compute M1 through M7. Given M1 through M7, we can compute the elements of the product matrix C as follows:
C11 = M1 + M4 − M5 + M7
C12 = M3 + M5
C21 = M2 + M4
C22 = M1 − M2 + M3 + M6
This step uses a further eight n/2 × n/2 matrix additions. Therefore, the worst-case running time is given by the recurrence:
T(n) = 7·T(n/2) + O(n²)
As above, this is an instance of the general recurrence, with a = 7, b = 2, and k = 2. Since a > b^k, by the Master theorem the total running time is O(n^(log₂ 7)) = O(n^2.81). Consequently, the running time of the divide-and-conquer matrix multiplication strategy is better (asymptotically) than the straightforward O(n³) approach.
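The seven-product scheme can be verified numerically on the 2×2 base case. Below is a small sketch (scalar entries for simplicity; the same identities hold when the entries are n/2 × n/2 submatrices — the formulas follow the standard statement of Strassen's identities):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using seven multiplications (Strassen's identities)."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[1, 2], [3, 4]]))   # [[7, 10], [15, 22]]
```

Only seven scalar multiplications appear, at the cost of eighteen additions/subtractions in total.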

Example 2: Solve the following recurrence equation: T(n) = 3T(n/2) + 1, where T(1) = 1.
Solution: It is of the form T(n) = a·T(n/b) + f(n).


Here a = 3, b = 2, and f(n) = n^d = 1, therefore d = 0. Since a > b^d, the solution is
T(n) = O(n^(log₂ 3)) = O(3^(log₂ n)).
Check your progress
1. Explain the process of divide and conquer.
2. Using the divide and conquer approach, write an algorithm for finding the minimum and maximum in an array.
3. State Master's theorem.
1.4 SUMMARY
Divide and conquer is a general algorithm design technique that solves a problem's instance by dividing it into several smaller instances (ideally of equal size), solving each of them recursively, and then combining their solutions to get a solution to the original instance of the problem. Many efficient algorithms are based on this technique, although it can be both inapplicable and inferior to simpler algorithmic solutions.
The time efficiency T(n) of many divide and conquer algorithms satisfies the equation T(n) = a·T(n/b) + f(n). The Master's theorem establishes the order of growth of this equation's solution.
Strassen's algorithm needs only seven multiplications to multiply two 2-by-2 matrices, but requires more additions than the definition-based algorithm. By exploiting the divide and conquer technique, this algorithm can multiply two n-by-n matrices with about n^2.807 multiplications.
Answers: SEE
1. 1.1
2. 1.2
3. 1.1


1.11 KEYWORDS
1. Asymptotic notation.
2. Master's theorem: solves the recurrence relations of divide and conquer of a specific form.
3. Homogeneous and inhomogeneous: forms of recurrence equations.
1.12 ANSWERS TO CHECK YOUR PROGRESS
1. 1.2
2. 1.2
3. 1.3
4. 1.4
1.7 UNIT-END EXERCISES AND ANSWERS
7. Multiply two matrices a = {1, 2, 3, 4} and b = {1, 2, 3, 4} using Strassen's algorithm.
8. Discuss the efficiency of an algorithm for finding the minimum and maximum in an array.
9. Discuss the different ways of solving recurrence equations and their time complexities.
Answers: SEE
1. 1.3
2. 1.2
3. 1.3
1.11 SUGGESTED READINGS

1. Introduction to the Design and Analysis of Algorithms by Anany Levitin
2. Analysis and Design of Algorithms with C/C++, 3rd edition, by Prof. Nandagopalan
3. Analysis and Design of Algorithms by Padma Reddy


MODULE-3, UNIT 2: SORTING
Structure
3.0 Objectives
1.1 Introduction
1.2 Merge sort
1.3 Quick sort
1.4 Binary search
1.6 Summary
1.7 Key words
1.8 Answers to check your progress
1.9 Unit-end exercises and answers
1.10 Suggested readings
5.0 OBJECTIVES

At the end of this unit you will be able to
Identify various types of sorting.
Find the time complexity of a sorting algorithm.
Distinguish the different cases of an algorithm (worst, best, average).
Identify which sorting method is better and why.
Understand binary search and its time complexity.

5.1 INTRODUCTION


In computer science, a sorting algorithm is an algorithm that puts the elements of a list in a certain order. The most-used orders are numerical order and lexicographical order. Efficient sorting is important for optimizing the use of other algorithms (such as search and merge algorithms) that require sorted lists to work correctly. More formally, the output must satisfy two conditions:
1. The output is in nondecreasing order (each element is no smaller than the previous element according to the desired total order);
2. The output is a permutation, or reordering, of the input.
The two sorting techniques that closely follow the divide and conquer method are:
Merge sort
Quick sort (partition exchange sort)
5.2 MERGE SORT

Conceptually, a merge sort works as follows:
1. If the list is of length 0 or 1, then it is already sorted. Otherwise:
2. Divide the unsorted list into two sublists of about half the size.
3. Sort each sublist recursively by re-applying the merge sort.
4. Merge the two sublists back into one sorted list.

Now, let us see "What are the steps involved in merge sort?" The various steps involved while sorting using merge sort are shown below:
Divide: Divide the given array consisting of n elements into two parts of n/2 elements each.
Conquer: Sort the left part of the array and the right part of the array recursively using merge sort.
Combine: Merge the sorted left part and the sorted right part to get a single sorted array.
The key operation in merge sort is combining the sorted left part and the sorted right part into a single sorted array. This process of merging two sorted vectors into a single sorted vector is called simple merge. The only necessary condition for this problem is that both arrays be sorted.
Now, let us see "How to design a simple merge algorithm?"
Design: Suppose we have two sorted vectors A and B with m and n elements respectively. The following procedure is used to merge these two sorted vectors:
Compare the ith item of vector A with the jth item of vector B and copy the lesser item into the kth position of the resultant vector C (with 0 as the initial value of the variables i, j, and k). The equivalent code can be written as shown below:


if (A[i] < B[j]) then
    C[k] ← A[i]           // Copy the lowest element from A to C
    k ← k + 1, i ← i + 1  // Point to the next items in C and A

( Fig 2.1 Example for merge sort)

Algorithm SimpleMerge(A, low, mid, high)
// Purpose: Merge two sorted parts of an array, where the first part runs from low
// to mid and the second from mid+1 to high.
// Input: A is sorted from index position low to mid.
//        A is sorted from index position mid+1 to high.
// Output: A is sorted from index low to high.
i ← low, j ← mid + 1, k ← low
while (i <= mid and j <= high)
    if (A[i] < A[j]) then
        C[k] ← A[i]   // Copy the lowest element from the first part of A to C
        i ← i + 1     // Point to the next item in the left part of A
        k ← k + 1     // Point to the next item in C


    else
        C[k] ← A[j]   // Copy the lowest element from the second part of A to C
        j ← j + 1     // Point to the next item in the second part of A
        k ← k + 1     // Point to the next item in C
    end if
end while
while (i <= mid)      // Copy the remaining items from the left part of A to C
    C[k] ← A[i]
    k ← k + 1, i ← i + 1
end while
while (j <= high)     // Copy the remaining items from the right part of A to C
    C[k] ← A[j]
    k ← k + 1, j ← j + 1
end while
for i ← low to high
    A[i] ← C[i]       // Copy the elements from vector C back to vector A
end for
// End of the algorithm
Once the merging is over, we can easily arrange the numbers in ascending order using merge sort. If low and high are the lower and upper limits of an array, the general procedure to sort the items using merge sort is shown below:
if (low < high)
    Divide the array into two equal parts
    Sort the left part of the array recursively
    Sort the right part of the array recursively
    Merge the left part and the right part
end if
The complete algorithm to sort the numbers using merge sort is shown below:

Algorithm MergeSort(A, low, high)
// Purpose: Sort the elements of the array between the lower bound and upper bound.
// Input: A is an unsorted vector with low and high as lower bound and upper bound.
// Output: A is a sorted vector.
if (low < high)
    mid ← (low + high)/2    // Divide the array into two equal parts


    MergeSort(A, low, mid)          // Sort the left part of the array
    MergeSort(A, mid + 1, high)     // Sort the right part of the array
    SimpleMerge(A, low, mid, high)  // Merge the left part and right part
end if
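The two algorithms above can be sketched as runnable Python (illustrative names, 0-based indices, merging into a temporary list C as in SimpleMerge):

```python
def simple_merge(a, low, mid, high):
    """Merge the sorted runs a[low..mid] and a[mid+1..high] in place."""
    c = []
    i, j = low, mid + 1
    while i <= mid and j <= high:
        if a[i] < a[j]:
            c.append(a[i]); i += 1   # copy the lesser element from the left run
        else:
            c.append(a[j]); j += 1   # copy the lesser element from the right run
    c.extend(a[i:mid + 1])           # remaining items from the left run
    c.extend(a[j:high + 1])          # remaining items from the right run
    a[low:high + 1] = c              # copy back from C to A

def merge_sort(a, low, high):
    """Sort a[low..high] in place using merge sort."""
    if low < high:
        mid = (low + high) // 2
        merge_sort(a, low, mid)          # sort the left part
        merge_sort(a, mid + 1, high)     # sort the right part
        simple_merge(a, low, mid, high)  # merge the two sorted parts

data = [38, 27, 43, 3, 9, 82, 10]
merge_sort(data, 0, len(data) - 1)
print(data)   # [3, 9, 10, 27, 38, 43, 82]
```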

Analysis: It is clear from the algorithm that the problem instance is divided into two equal parts. If the time for the merging operation is proportional to n, then the computing time for merge sort is described by the recurrence relation
T(n) = a               if n = 1, a a constant
T(n) = 2T(n/2) + cn    if n > 1, c a constant

When n is a power of 2, n = 2^k, we can solve this equation by successive substitution:
T(n) = 2(2T(n/4) + cn/2) + cn
     = 4T(n/4) + 2cn
     = 4(2T(n/8) + cn/4) + 2cn
     ...
     = 2^k·T(1) + kcn
     = an + cn log n.
It is easy to see that if 2^k < n <= 2^(k+1), then T(n) <= T(2^(k+1)). Therefore,
T(n) = O(n log n).
Advantages:
Merge sort is a stable algorithm.
It can be applied to files of any size.
Disadvantages:
The algorithm uses extra space proportional to n, so the algorithm is not in place.
It uses more memory on the stack because of recursion.
5.3 QUICK SORT

The divide-and-conquer approach can be used to arrive at an efficient sorting method different from merge sort.

In merge sort, the file a[1:n] was divided at its midpoint into subarrays which were independently sorted and later merged. In quick sort, the division into two subarrays is made so that the sorted subarrays do not need to be merged later. This is accomplished by rearranging the elements in a[1:n] such that a[i] <= a[j] for all i between 1 and m and all j between m+1 and n, for some m, 1 <= m <= n. Thus the elements in a[1:m] and a[m+1:n] can be independently sorted; no merge is needed. This rearranging is referred to as partitioning.
The function Partition of the algorithm accomplishes an in-place partitioning of the elements of a[m:p−1]. It is assumed that a[p] >= a[m] and that a[m] is the partitioning element. If m = 1 and p − 1 = n, then a[n+1] must be defined and must be greater than or equal to all elements in a[1:n]. The assumption that a[m] is the partition element is merely for convenience; other choices for the partitioning element than the first item in the set are better in practice. The function Interchange(a, i, j) exchanges a[i] with a[j].

Example. Sort {1, 12, 5, 26, 7, 14, 3, 7, 2} using quicksort.


Algorithm: Partition the array a[m:p−1] about a[m]
Algorithm Partition(a, m, p)
// Within a[m], a[m+1], ..., a[p−1] the elements are rearranged in such a manner
// that if initially t = a[m], then after completion a[q] = t for some q between
// m and p−1, a[k] <= t for m <= k < q, and a[k] >= t for q < k < p. q is returned.
// Set a[p] = infinity.
{
    v := a[m]; i := m; j := p;
    repeat
    {


        repeat
            i := i + 1;
        until (a[i] >= v);
        repeat
            j := j − 1;
        until (a[j] <= v);
        if (i < j) then Interchange(a, i, j);
    } until (i >= j);
    a[m] := a[j]; a[j] := v;
    return j;
}

Algorithm Interchange(a, i, j)
// Exchange a[i] with a[j]
{
    p := a[i];
    a[i] := a[j];
    a[j] := p;
}

Algorithm QuickSort(p, q)
// Sort the elements a[p], ..., a[q], which reside in the global array a[1:n], into
// ascending order; a[n+1] is considered to be defined and must be >= all the
// elements in a[1:n].
{
    if (p < q) then   // if there is more than one element
    {
        // Divide P into two subproblems.
        j := Partition(a, p, q + 1);
        // j is the position of the partitioning element.
        // Solve the subproblems.
        QuickSort(p, j − 1);
        QuickSort(j + 1, q);
        // There is no need for combining solutions.
    }
}
Design: Quick sort is based on the principle of divide and conquer. Quick sort works very well on large sets of data. Now, let us see "What are the steps involved in quick sort?" The various steps involved while sorting using quick sort are shown below:
Step 1: Divide
Divide the array into two subarrays:


Left part A[0] A[1] ... A[k−1] and right part A[k+1] A[k+2] ... A[n−1]

Step 2: Conquer
Sort the left part of the array A[0] A[1] ... A[k−1] recursively.
Sort the right part of the array A[k+1] A[k+2] ... A[n−1] recursively.
To simplify the design, assume that a very large value is stored at the end of the array. This is achieved by storing ∞ in a[n]. Apart from a, low, and high, the other variables that are used are:
i: the initial value of index i is low, i.e., i ← low
j: the initial value of index j is one more than high, i.e., j ← high + 1
pivot: a[low] is treated as the pivot element.
The general procedure to partition the array is shown below:
Keep incrementing the index i as long as pivot >= a[i]. This is achieved using the statement:
    do i ← i + 1 while (pivot >= a[i]);
Once the above condition fails, keep decrementing the index j as long as pivot <= a[j]. This is achieved using the statement:
    do j ← j − 1 while (pivot <= a[j]);
Once the above condition fails, if i is less than j, exchange a[i] with a[j] and repeat all the above steps as long as i is less than j.
Performance of Quick sort
The running time of quick sort depends on whether the partition is balanced or unbalanced, which in turn depends on which elements of the array are used for partitioning. A very good partition splits an array into two equal-sized arrays. A bad partition, on the other hand, splits an array into two arrays of very different sizes. The worst partition puts only one element in one array and all other elements in the other array. If the partitioning is balanced, quick sort runs asymptotically as fast as merge sort. On the other hand, if the partitioning is unbalanced, quick sort runs asymptotically as slowly as insertion sort.
Best Case
The best thing that could happen in quick sort would be that each partitioning stage divides the array exactly in half. In other words, the best case occurs when the pivot is the median of the keys in A[p .. r] every time procedure Partition is called, so that Partition always splits the array to be sorted into two equal-sized parts.
If the procedure Partition produces two regions of size n/2, the recurrence relation is then


T(n) = T(n/2) + T(n/2) + Θ(n) = 2T(n/2) + Θ(n)
And from case 2 of the Master theorem, T(n) = Θ(n lg n).

Worst-case:
Let T(n) be the worst-case time for QUICKSORT on an input of size n. We have the recurrence
T(n) = max(1 <= q <= n−1) (T(q) + T(n − q)) + Θ(n)   --------- (1)

where q runs from 1 to n−1, since the partition produces two regions, each having size at least 1. Now we guess that T(n) <= cn² for some constant c. Substituting our guess into equation (1), we get
T(n) <= max(1 <= q <= n−1) (cq² + c(n − q)²) + Θ(n)
     = c · max(1 <= q <= n−1) (q² + (n − q)²) + Θ(n)

Since the second derivative of the expression q² + (n − q)² with respect to q is positive, the expression achieves its maximum over the range 1 <= q <= n − 1 at one of the endpoints. This gives the bound
max(1 <= q <= n−1) (q² + (n − q)²) <= 1 + (n − 1)² = n² − 2(n − 1).
Continuing with our bounding of T(n), we get
T(n) <= c[n² − 2(n − 1)] + Θ(n) = cn² − 2c(n − 1) + Θ(n)
Since we can pick the constant c so that the 2c(n − 1) term dominates the Θ(n) term, we have T(n) <= cn². Thus the worst-case running time of quick sort is Θ(n²).


Average-case Analysis
If the split induced by RANDOMIZED_PARTITION puts a constant fraction of the elements on one side of the partition, then the recurrence tree has depth Θ(lg n) and Θ(n) work is performed at each of Θ(lg n) levels. This is an intuitive argument for why the average-case running time of RANDOMIZED_QUICKSORT is Θ(n lg n).
Let T(n) denote the average time required to sort an array of n elements. A call to RANDOMIZED_QUICKSORT with a 1-element array takes constant time, so we have T(1) = Θ(1). After the split, RANDOMIZED_QUICKSORT calls itself to sort two subarrays. The average time to sort an array A[1 .. q] is T(q) and the average time to sort an array A[q+1 .. n] is T(n − q). We have
T(n) = 1/n · (T(1) + T(n − 1) + Σ(q=1 to n−1) (T(q) + T(n − q))) + Θ(n)   ----- (1)
We know from the worst-case analysis that T(1) = Θ(1) and T(n − 1) = O(n²), so
T(n) = 1/n · (Θ(1) + O(n²)) + 1/n · Σ(q=1 to n−1) (T(q) + T(n − q)) + Θ(n)
     = 1/n · Σ(q=1 to n−1) (T(q) + T(n − q)) + Θ(n)   ------- (2)
     = 2/n · Σ(k=1 to n−1) T(k) + Θ(n)                ------- (3)
Solve the above recurrence using the substitution method. Assume inductively that T(n) <= an lg n + b for some constants a > 0 and b > 0, chosen large enough that an lg n + b > T(1). Then for n > 1 we have
T(n) <= 2/n · Σ(k=1 to n−1) (ak lg k + b) + Θ(n)      ------- (4)
     = 2a/n · Σ(k=1 to n−1) k lg k + 2b/n · (n − 1) + Θ(n)
Using the bound Σ(k=1 to n−1) k lg k <= (1/2)n² lg n − (1/8)n², we obtain
T(n) <= 2a/n · [(1/2)n² lg n − (1/8)n²] + 2b/n · (n − 1) + Θ(n)
     <= an lg n − (a/4)n + 2b + Θ(n)
     = an lg n + b + (Θ(n) + b − (a/4)n)
In the above expression, we see that we can certainly choose a large enough that (a/4)n dominates Θ(n) + b, giving T(n) <= an lg n + b. We conclude that QUICKSORT's average running time is O(n lg n).
Conclusion: Quick sort is an in-place sorting algorithm whose worst-case running time is Θ(n²) and expected running time is Θ(n lg n), where the constants hidden in Θ(n lg n) are small.

5.4 BINARY SEARCH

Suppose we are given a number of integers stored in an array A, and we want to locate a specific target integer K in this array. If we do not have any information on how the integers are organized in the array, we have to sequentially examine each element of the array. This is known as linear search and has a time complexity of O(n) in the worst case. However, if the elements of the array are ordered, say in ascending order, and we wish to find the position of the target integer K in the array, we need not make a sequential search over the complete array. We can make a faster search using the binary search method. The basic idea is to start by examining the middle element of the array. This leads to three possible situations: if this element matches the target K, the search terminates successfully by printing out the index of the element in the array. On the other hand, if K < A[middle], the search can be limited to the elements to the left of A[middle]; all elements to the right of middle can be ignored. If it turns out that K > A[middle], further search is limited to the elements to the right of A[middle]. If all elements are exhausted and the target is not found in the array, the method returns a special value such as -1. Here is one version of the binary search function:


Algorithm: BinarySearch(int A[], int n, int K)
{
    L = 0; R = n - 1;
    while (L <= R)
    {
        Mid = (L + R) / 2;
        if (K == A[Mid]) return Mid;
        else if (K > A[Mid]) L = Mid + 1;
        else R = Mid - 1;
    }
    return -1;
}

Analysis of binary search

Best case: The best case occurs when the item to be searched for is present in the middle of the array, so the total number of comparisons required is 1. Therefore, the time complexity of binary search in the best case is Tbest(n) = Θ(1).

Worst case: This case occurs when the key to be searched for is at the first or the last position of the array (or absent altogether). In such situations the maximum number of element comparisons is required, and the time complexity is given by

    T(n) = 1               if n = 1
    T(n) = T(n/2) + 1      otherwise

Consider t(n) = t(n/2) + 1. This recurrence relation can be solved using repeated substitution as shown below:

    t(n) = 1 + t(n/2)          (replace n by n/2)
    t(n) = 2 + t(n/2^2)        (replace n by n/2 again)
    t(n) = 3 + t(n/2^3)
    ...

In general, t(n) = i + t(n/2^i). Finally, to get the initial condition t(1), let 2^i = n, so t(n) = i + t(1), where t(1) = 0, giving t(n) = i. We have n = 2^i; taking log on both sides, i·log2 2 = log2 n, so i = log2 n. So the worst-case time complexity is Tworst(n) ∈ Θ(log2 n).

Advantages of binary search: it is a simple and very efficient searching technique.

Disadvantage of binary search: the array should be sorted.
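The pseudocode above translates directly into Python. The sketch below mirrors it line for line (the function name is illustrative), using the sorted array from the unit-end exercise as sample data:

```python
def binary_search(a, key):
    """Return the index of key in the sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == key:
            return mid
        elif key > a[mid]:
            lo = mid + 1   # search the right half
        else:
            hi = mid - 1   # search the left half
    return -1

arr = [3, 14, 27, 31, 39, 42, 55, 70, 74, 81, 85, 93, 98]
print(binary_search(arr, 55))   # 6 (best case: 55 is the middle element)
print(binary_search(arr, 100))  # -1 (not found)
```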


Check your progress
1. What is sorting?
2. Write an algorithm for merge sort and explain its working.
3. Write an algorithm for quick sort and calculate its best-case, worst-case and average-case time complexity.
4. Explain the working of binary search with an example. Write the algorithm and give its time complexity.

5.5 SUMMARY

Merge sort is a divide and conquer algorithm. It works by dividing an array into two halves, sorting them recursively, and then merging the two sorted halves to get the original array sorted. The algorithm's time efficiency is the same in all cases, i.e. Θ(n log n). Quick sort is a divide and conquer algorithm that works by partitioning its input elements according to their value relative to some pre-selected element. Quick sort is noted for its superior efficiency among n log n algorithms for sorting randomly ordered arrays, but also for its quadratic worst-case efficiency. Binary search is an O(log n) algorithm for searching in sorted arrays. It is a typical example of an application of the divide and conquer technique, because it needs to solve just one problem of half the size on each of its iterations.

ANSWERS TO CHECK YOUR PROGRESS
1. 1.1
2. 1.2
3. 1.3
4. 1.4

UNIT-END EXERCISES AND ANSWERS
10. a) What is the largest number of key comparisons made by binary search in searching for a key in the following array?
    { 3, 14, 27, 31, 39, 42, 55, 70, 74, 81, 85, 93, 98 }
    b) List all the keys of this array that will require the largest number of key comparisons when searched for by binary search.
11. Apply quick sort to the list A N A L Y S I S in alphabetical order.


12. Apply the merge sort algorithm to sort A L G O R I T H M in alphabetical order. Is merge sort a stable algorithm?

Answers: SEE 1. 1.4, 2. 1.3, 3. 1.2

5.6 SUGGESTED READINGS
1. Introduction to The Design and Analysis of Algorithms by Anany Levitin
2. Analysis and Design of Algorithms with C/C++, 3rd edition, by Prof. Nandagopalan


MODULE-3, UNIT 3: GREEDY TECHNIQUE

Structure
6.0 Objectives
6.1 Introduction
1.1.1 Concept of greedy method
6.2 Optimization Problems
1.3 Summary
1.4 Key words
1.5 Answers to check your progress
1.6 Unit-end exercises and answers
1.7 Suggested readings

6.0 OBJECTIVES

At the end of this unit you will be able to:
Find how to apply the greedy technique.
Identify whether a problem can be solved using the greedy technique.
Know how to find single-source shortest paths.
Construct a Huffman tree and generate Huffman codes.

6.1 INTRODUCTION

Greedy algorithms are simple and straightforward. They are short-sighted in their approach, in the sense that they take decisions on the basis of information at hand without worrying about the effect these decisions may have in the future. They are easy to invent, easy to implement and most of the time quite efficient. Many problems cannot be solved correctly by the greedy approach. Greedy algorithms are used to solve optimization problems.


1.1.1 Concept of greedy method

A greedy algorithm works by making the decision that seems most promising at any moment; it never reconsiders this decision, whatever situation may arise later. As an example, consider the problem of "Making Change". Coins available are:

dollars (100 cents)
quarters (25 cents)
dimes (10 cents)
nickels (5 cents)
pennies (1 cent)

Problem: Make change for a given amount using the smallest possible number of coins.

Informal algorithm:

Start with nothing.
At every stage, without passing the given amount, add the largest coin possible to the coins already chosen.

Formal algorithm: make change for n units using the least possible number of coins.

MAKE-CHANGE (n)
{
    C ← {100, 25, 10, 5, 1}    // constants
    S ← { }                    // set that holds the solution
    sum ← 0
    while sum != n
    {
        x = largest item in set C such that sum + x <= n
        if no such item then return "No solution"
        S ← S ∪ {value of x}
        sum ← sum + x
    }
    return S
}

Example: Make change for 2.89 (289 cents). Here n = 289 and the solution contains 2 dollars, 3 quarters, 1 dime and 4 pennies. The algorithm is greedy because at every stage it chooses the largest coin without worrying about the consequences. Moreover, it never changes its mind, in the sense that once a coin has been included in the solution set, it remains there.

1.1.2 Characteristics and features of problems solved by greedy algorithms
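The MAKE-CHANGE procedure above can be sketched in Python as follows (the function name is mine; the coin set is the one listed above). Since the coins are scanned in decreasing order, taking each as many times as it fits is equivalent to repeatedly picking the largest coin that does not overshoot n:

```python
def make_change(n, coins=(100, 25, 10, 5, 1)):
    """Greedy change-making: repeatedly take the largest coin that fits."""
    solution = []
    total = 0
    for c in coins:          # coins are given in decreasing order
        while total + c <= n:
            solution.append(c)
            total += c
    if total != n:
        return None          # no solution with these denominations
    return solution

print(make_change(289))  # [100, 100, 25, 25, 25, 10, 1, 1, 1, 1]
```

The output matches the worked example: 2 dollars, 3 quarters, 1 dime and 4 pennies.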

The goal is to construct the solution in an optimal way. The algorithm maintains two sets: one contains chosen items and the other contains rejected items. A greedy algorithm consists of four functions:
1. A function that checks whether a chosen set of items provides a solution.
2. A function that checks the feasibility of a set.
3. The selection function, which tells which of the candidates is the most promising.
4. An objective function, which does not appear explicitly, but gives the value of a solution.

Structure Greedy Algorithm

Initially the set of chosen items (the solution set) is empty. At each step:
    an item is added to the solution set using the selection function;
    IF the augmented set is no longer feasible, reject the item under consideration (it is never considered again);
    ELSE IF the set is still feasible, THEN keep the current item.


1.1.3 Definition of feasibility

A feasible set (of candidates) is promising if it can be extended to produce not merely a solution, but an optimal solution to the problem. In particular, the empty set is always promising (why? because an optimal solution always exists). Unlike dynamic programming, which solves the subproblems bottom-up, a greedy strategy usually progresses in a top-down fashion, making one greedy choice after another, reducing each problem to a smaller one.

Greedy-choice property

The "greedy-choice property" and "optimal substructure" are the two ingredients in a problem that lend themselves to a greedy strategy. The greedy-choice property says that a globally optimal solution can be arrived at by making a locally optimal choice.

6.2 OPTIMIZATION PROBLEMS

1.2.1 Huffman Codes

Huffman code is a technique for compressing data. Huffman's greedy algorithm looks at the occurrence of each character and represents it as a binary string in an optimal way.

Example: Suppose we have data consisting of 100,000 characters that we want to compress. The characters in the data occur with the following frequencies:

Character:   a       b       c       d       e      f
Frequency:   45,000  13,000  12,000  16,000  9,000  5,000


Consider the problem of designing a "binary character code" in which each character is represented by a unique binary string.

Fixed-length code

A fixed-length code needs 3 bits to represent six characters:

Character:   a       b       c       d       e      f
Frequency:   45,000  13,000  12,000  16,000  9,000  5,000
Codeword:    000     001     010     011     100    101

This method requires 300,000 bits to code the entire file. How do we get 300,000? The total number of characters is 45,000 + 13,000 + 12,000 + 16,000 + 9,000 + 5,000 = 100,000, and each character is assigned a 3-bit codeword, so 3 * 100,000 = 300,000 bits.

Conclusion: The fixed-length code requires 300,000 bits while the variable-length code requires 224,000 bits, a saving of approximately 25%.

Prefix codes

In a prefix code, no codeword is a prefix of another codeword. The reason prefix codes are desirable is that they simplify encoding (compression) and decoding.

Can we do better than the fixed-length code? A variable-length code can do better by giving frequent characters short codewords and infrequent characters long codewords:

Character:   a       b       c       d       e      f
Frequency:   45,000  13,000  12,000  16,000  9,000  5,000
Codeword:    0       101     100     111     1101   1100


Character 'a' occurs 45,000 times, and each occurrence is assigned a 1-bit codeword: 1 * 45,000 = 45,000 bits. Characters b, c, d occur 13,000 + 12,000 + 16,000 = 41,000 times, each assigned a 3-bit codeword: 3 * 41,000 = 123,000 bits. Characters e, f occur 9,000 + 5,000 = 14,000 times, each assigned a 4-bit codeword: 4 * 14,000 = 56,000 bits. This implies that the total is 45,000 + 123,000 + 56,000 = 224,000 bits.

Encoding: concatenate the codewords representing each character of the file.

String    Encoding
TEA       10 00 010
SEA       011 00 010
TEN       10 00 110

(The strings above use a different prefix code, over the characters T, E, A, S, N.)

Example: From the variable-length code table above, we code the 3-character file "abc" as:

a = 0, b = 101, c = 100  =>  0.101.100 = 0101100

Decoding

Since no codeword is a prefix of another, the codeword that begins an encoded file is unambiguous. To decode (translate back to the original characters), identify the initial codeword, remove it from the encoded file, and repeat. For example, with the variable-length code above, the string 001011101 parses uniquely as 0.0.101.1101, which decodes to "aabe". The decoding process can be represented by a binary tree whose leaves are characters. We interpret the binary codeword for a character as the path from the root to that character, where 0 means "go to the left child" and 1 means "go to the right child". Note that an optimal code for a file is always represented by a full binary tree.
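The greedy construction behind these codes (repeatedly merging the two least frequent nodes) can be sketched in Python. The implementation details below (a heap of tuples, a tie-breaking counter) are my own; the exact codewords produced may differ from the table above, but the total encoded length of any optimal code for these frequencies is the same 224,000 bits.

```python
import heapq

def huffman_codes(freq):
    """Build Huffman codes greedily by merging the two least frequent nodes."""
    # Heap entries: (frequency, tie_breaker, tree); a tree is a char or a (left, right) pair.
    heap = [(f, i, ch) for i, (ch, f) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")  # left edge labelled 0
            walk(tree[1], prefix + "1")  # right edge labelled 1
        else:
            codes[tree] = prefix or "0"
    walk(heap[0][2], "")
    return codes

freq = {'a': 45000, 'b': 13000, 'c': 12000, 'd': 16000, 'e': 9000, 'f': 5000}
codes = huffman_codes(freq)
total_bits = sum(freq[ch] * len(codes[ch]) for ch in freq)
print(total_bits)  # 224000, matching the variable-length code above
```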

1.2.2 Dijkstra's Algorithm (single-source shortest path algorithm)

Dijkstra's algorithm solves the single-source shortest-path problem when all edges have non-negative weights. It is a greedy algorithm. The algorithm starts at the source vertex s and grows a tree T that ultimately spans all vertices reachable from s. Vertices are added to T in order of distance: first s, then the vertex closest to s, then the next closest, and so on. The following implementation assumes that the graph G is represented by adjacency lists.

Problem statement: to find the shortest distance from a single source to different cities.

Algorithm: DIJKSTRA(G, w, s)
1. INITIALIZE-SINGLE-SOURCE(G, s)
2. S ← { }                       // S will ultimately contain the vertices with final shortest-path weights from s
3. Initialize priority queue Q, i.e., Q ← V[G]
4. while priority queue Q is not empty do
5.     u ← EXTRACT_MIN(Q)        // pull out a new vertex
6.     S ← S ∪ {u}
7.     for each vertex v in Adj[u] do    // perform relaxation for each vertex v adjacent to u
8.         RELAX(u, v, w)

Example: step-by-step operation of Dijkstra's algorithm.

Step 1. Given the initial graph G = (V, E), all nodes have infinite cost except the source node s, which has cost 0.


Step 2. First we choose the node, which is closest to the source node, s. We initialize d[s] to 0. Add it to S. Relax all nodes adjacent to source, s. Update predecessor (see red arrow in diagram below) for all nodes updated.

Step 3. Choose the closest node, x. Relax all nodes adjacent to node x. Update predecessors for nodes u, v and y (again notice red arrows in diagram below).

Step 4. Now, node y is the closest node, so add it to S. Relax node v and adjust its predecessor (red arrows remember!).


Step 5. Now we have node u that is closest. Choose this node and adjust its neighbor node v.

Step 6. Finally, add node v. The predecessor list now defines the shortest path from each node to the source node, s.


Analysis Like Prim's algorithm, Dijkstra's algorithm runs in O(|E|lg|V|) time.
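The algorithm above can be sketched in Python using a binary heap as the priority queue, which gives the O(|E| lg |V|) bound just mentioned. The graph data here is my own small example, not the figure from the text:

```python
import heapq

def dijkstra(adj, s):
    """Single-source shortest paths with non-negative edge weights.
    adj maps each vertex u to a list of (v, w) pairs; returns distances from s."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                      # stale queue entry, skip
        for v, w in adj.get(u, []):
            nd = d + w                    # relaxation step
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

graph = {'s': [('u', 10), ('x', 5)],
         'u': [('v', 1), ('x', 2)],
         'v': [('y', 4)],
         'x': [('u', 3), ('v', 9), ('y', 2)],
         'y': [('s', 7), ('v', 6)]}
print(dijkstra(graph, 's'))  # distances: s=0, u=8, v=9, x=5, y=7
```

Instead of decreasing keys in place, the sketch pushes a new entry on relaxation and discards stale entries when popped, a common simplification with `heapq`.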

Check your progress
1. Explain the greedy method of problem solving with an example.
2. Write a note on Huffman coding.
3. Write an algorithm for single-source shortest paths using the greedy technique and explain it with an example.

1.3 SUMMARY

The greedy technique suggests constructing a solution to an optimization problem through a sequence of steps, each expanding a partially constructed solution obtained so far, until a complete solution to the problem is reached. On each step, the choice made must be feasible, locally optimal, and irrevocable. Dijkstra's algorithm solves the single-source shortest path problem of finding shortest paths from a given vertex (the source) to all the other vertices of a weighted graph or digraph. Huffman code is an optimal prefix-free variable-length encoding scheme that assigns bit strings to characters based on their frequencies in a given text. This is accomplished by a greedy construction of a binary tree whose edges are labeled with 0s and 1s.

1.4 KEYWORDS

1. Greedy technique: a method (approach) of problem solving.
2. Huffman tree: a binary tree generated by the Huffman algorithm, having left child edges labeled 0 and right child edges labeled 1.

1.5 ANSWERS TO CHECK YOUR PROGRESS
1. 1.1


2. 1.2
3. 1.2

1.6 UNIT-END EXERCISES AND ANSWERS
1. a. Compare fixed-length encoding with variable-length encoding.
   b. Prove that variable-length encoding is better than fixed-length encoding.
   c. Can Huffman encoding be used for data compression? Defend your answer with an example.
2. Discuss how Dijkstra's algorithm belongs to the greedy technique, with an example.

Answers: SEE 1. 1.3, 2. 1.3

1.7 SUGGESTED READINGS

1. Introduction to The Design and Analysis of Algorithms by Anany Levitin
2. Analysis and Design of Algorithms with C/C++, 3rd edition, by Prof. Nandagopalan
3. Analysis and Design of Algorithms by Padma Reddy


MODULE-3, UNIT 3: APPLICATIONS OF GREEDY METHOD

Structure
7.0 Objectives
7.1 Introduction
7.2 Container loading problem
7.3 0/1 Knapsack problem
1.4 Minimum cost spanning tree algorithms
Summary
Key words
Answers to check your progress
Unit-end exercises and answers
Suggested readings

7.0 OBJECTIVES

At the end of this unit you will be able to:
Solve container loading and knapsack problems.
Find minimum cost spanning trees using Prim's and Kruskal's algorithms.
Identify the difference between a graph, a tree and a minimum spanning tree.

7.1 INTRODUCTION

The greedy method is the most straightforward design technique. As the name suggests, greedy algorithms are short-sighted in their approach, taking decisions on the basis of the information immediately at hand without worrying about the effect these decisions may have in the future.

DEFINITION: A problem with N inputs will have some constraints; any subset that satisfies these constraints is called a feasible solution. A feasible solution that either maximizes or minimizes a given objective function is called an optimal solution.

7.2 CONTAINER LOADING PROBLEM


The container loading problem is similar to the knapsack problem, and also to another interesting problem called the packing problem. The container loading problem is stated as follows: we have equal-size containers to be loaded onto a cargo, and in turn the cargo is to be loaded onto a ship. Each container i has a weight wi, and the cargo has a maximum capacity of C units. The objective of this problem is to load the ship with the maximum number of containers. Let xi be a variable taking the value 0 or 1: a 1 indicates that container i is to be loaded, and a 0 means it should not be. Formally, we can define the problem as:

Maximize x1 + x2 + ... + xn
subject to the constraint w1·x1 + w2·x2 + ... + wn·xn <= C

Greedy strategy: In this problem we fortunately have no profits to consider in the constraints. Since the objective is to load the maximum number of containers, the greedy strategy we use is: include the containers from lowest to highest weight (i.e., in ascending order of weights), so that we can pack more containers.

Example: Consider a container loading instance with n = 7, {w1, ..., w7} = {90, 190, 40, 80, 140, 40, 10} and C = 300.

Sol: When the containers are arranged in the ascending order of their weights, we get {10, 40, 40, 80, 90, 140, 190}, with greedy choices {1, 1, 1, 1, 1, 0, 0} in this sorted order. In terms of the original order, the solution is {1, 0, 1, 1, 0, 1, 1}. Total number of containers = 5; total weight = 260.

7.3 KNAPSACK PROBLEM

Statement: A thief robbing a store can carry a maximal weight of W in his knapsack. There are n items; the i-th item weighs wi and is worth vi dollars. Which items should the thief take?
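The greedy container-loading strategy can be sketched in Python as follows. The function name is mine, and the weight list used is the 7-element list consistent with the worked example's stated solution vector {1, 0, 1, 1, 0, 1, 1}:

```python
def load_containers(weights, capacity):
    """Greedy container loading: take containers in ascending order of weight."""
    selected = []
    total = 0
    for i, w in sorted(enumerate(weights), key=lambda p: p[1]):
        if total + w <= capacity:
            selected.append(i + 1)   # 1-based container number
            total += w
    return sorted(selected), total

containers, weight = load_containers([90, 190, 40, 80, 140, 40, 10], 300)
print(containers, weight)  # [1, 3, 4, 6, 7] 260, i.e. x = (1,0,1,1,0,1,1)
```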


There are two versions of the problem.

Fractional knapsack problem: the setup is the same, but the thief can take fractions of items, meaning that items can be broken into smaller pieces, so the thief may decide to carry only a fraction xi of item i, where 0 ≤ xi ≤ 1.
    Exhibits the greedy-choice property.
    Exhibits the optimal-substructure property.

0-1 knapsack problem: the setup is the same, but items may not be broken into smaller pieces, so the thief may decide either to take an item or to leave it (a binary choice), but may not take a fraction of an item.
    Does not exhibit the greedy-choice property.
    Only a dynamic programming algorithm exists.

1.3.1 0-1 Knapsack problem using dynamic programming

We are given n objects and a knapsack (bag) with capacity M; object i has weight wi, where i varies from 1 to n. The problem is to fill the bag with the help of the n objects so that the resulting profit is maximum. Formally, the problem can be stated as:

Maximize Σ xi·pi subject to Σ xi·wi <= M

where, in the fractional version, xi is the fraction of object i taken and lies between 0 and 1. There are many ways to fill the bag, giving many feasible solutions, among which we have to find the optimal solution. The greedy procedure below generates only one solution, which is both feasible and optimal for the fractional version. First, find the profit/weight ratio of each object and sort the objects in descending order of these ratios. Select the object with the highest p/w ratio and check whether its weight is less than the remaining capacity of the bag. If so, place the whole object and decrement the capacity of the bag by the weight of the object placed. Repeat until the capacity of the bag becomes less than the weight of the selected object; in that case, place a fraction of the object and stop.
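The ratio-based procedure just described solves the fractional knapsack problem. A Python sketch could look like the following; the function name and the test instance (a classic textbook instance, not taken from this text) are my own:

```python
def fractional_knapsack(weights, profits, capacity):
    """Greedy fractional knapsack: fill by decreasing profit/weight ratio."""
    order = sorted(range(len(weights)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    x = [0.0] * len(weights)     # fraction of each object taken
    total = 0.0
    for i in order:
        if capacity <= 0:
            break
        take = min(weights[i], capacity)   # whole object, or the fitting fraction
        x[i] = take / weights[i]
        total += profits[i] * x[i]
        capacity -= take
    return x, total

x, profit = fractional_knapsack([18, 15, 10], [25, 24, 15], 20)
print(x, profit)  # [0.0, 1.0, 0.5] 31.5
```

Ratios here are 25/18, 24/15 and 15/10, so the greedy order is object 2, object 3, object 1; the whole of object 2 and half of object 3 fill the capacity of 20.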

The most common formulation of the problem is the 0-1 knapsack problem, which restricts the number xi of copies of each kind of item to zero or one. Mathematically the 0-1 knapsack problem can be formulated as:

    maximize Σ_{i=1}^{n} vi·xi
    subject to Σ_{i=1}^{n} wi·xi <= W, with xi ∈ {0, 1}

The bounded knapsack problem restricts the number xi of copies of each kind of item to a maximum integer value ci. Mathematically the bounded knapsack problem can be formulated as:

    maximize Σ_{i=1}^{n} vi·xi
    subject to Σ_{i=1}^{n} wi·xi <= W, with xi ∈ {0, 1, ..., ci}

Algorithm: Knapsack(n, m, w, p, v)
// Input: n - number of objects; m - capacity of the knapsack;
//        w - weights of the objects; p - profits of the objects
// Output: v - table of optimal profits v[i, j] for the first i objects and capacity j
for i ← 0 to n do
    for j ← 0 to m do
        if (i = 0 or j = 0)
            v[i, j] = 0
        else if (w[i] > j)
            v[i, j] = v[i-1, j]
        else
            v[i, j] = max(v[i-1, j], v[i-1, j - w[i]] + p[i])
        end if
    end for
end for

Algorithm: ObjectSelected(n, m, w, v, x)
// Input: n - number of objects; m - capacity of the knapsack;
//        w - weights of the objects; v - the table computed above
// Output: x - information on which objects are selected and not selected
for i ← 1 to n do
    x[i] = 0
end for
i = n; j = m
while (i != 0 and j != 0)
{
    if (v[i, j] != v[i-1, j])
    {
        x[i] = 1
        j = j - w[i]
    }
    i = i - 1
}
for i ← 1 to n do
    if (x[i] = 1)
        write "object i selected"
    end if
end for

Example: Given some items, pack the knapsack to get the maximum total value. Each item has a weight and a value, and the total weight we can carry is no more than the capacity M = 5. So we must consider the weights of items as well as their values.

Item #   Weight   Value
1        2        12
2        1        10
3        3        20
4        2        15


Sol: Filling in the table gives v[4, 5] = 37, obtained by selecting items 1, 2 and 4 (total weight 2 + 1 + 2 = 5, total value 12 + 10 + 15 = 37).
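The two algorithms above, combined and run on this instance, can be sketched in Python (0-based lists internally, 1-based item numbers reported):

```python
def knapsack(weights, profits, m):
    """0/1 knapsack by dynamic programming: v[i][j] is the best profit
    achievable with the first i objects and capacity j."""
    n = len(weights)
    v = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(m + 1):
            if weights[i - 1] > j:
                v[i][j] = v[i - 1][j]
            else:
                v[i][j] = max(v[i - 1][j],
                              v[i - 1][j - weights[i - 1]] + profits[i - 1])
    # Trace back which objects were selected
    selected, j = [], m
    for i in range(n, 0, -1):
        if v[i][j] != v[i - 1][j]:
            selected.append(i)
            j -= weights[i - 1]
    return v[n][m], sorted(selected)

best, items = knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5)
print(best, items)  # 37 [1, 2, 4]
```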

1.4 MINIMUM COST SPANNING TREE ALGORITHM

A spanning tree of a graph is any tree that includes every vertex of the graph. A little more formally, a spanning tree of a graph G is a subgraph of G that is a tree and contains all the vertices of G. An edge of a spanning tree is called a branch; an edge of the graph that is not in the spanning tree is called a chord. We construct a spanning tree whenever we want to find a simple, cheap and yet efficient way to connect a set of terminals (computers, cities, factories, etc.).

Minimum spanning tree

A minimum spanning tree (MST) of a weighted graph G is a spanning tree of G whose edge weights sum to the minimum. In other words, an MST is a tree formed from a subset of the edges of a given undirected graph, with two properties: it spans the graph, i.e., it includes every vertex of the graph; and it is minimum, i.e., the total weight of all its edges is as low as possible.

Let G = (V, E) be a connected, undirected graph, where V is the set of vertices (nodes) and E is the set of edges. Each edge has a given non-negative length.

1. PRIM'S ALGORITHM

This algorithm was first proposed by Jarnik, but is typically attributed to Prim. It starts from an arbitrary vertex (root) and at each stage adds a new branch (edge) to the tree already constructed; the algorithm halts when all the vertices in the graph have been reached.

This strategy is greedy in the sense that at each step the partial spanning tree is augmented with an edge that is the smallest among all possible adjacent edges.

Example : Start from an arbitrary vertex (root). At each stage, add a new branch (edge) to the tree already constructed; the algorithm halts when all the vertices in the graph have been reached.

Algorithm Prims(E, cost, n, t)
{
    Let (k, l) be an edge of minimum cost in E;
    mincost := cost[k, l];
    t[1,1] := k; t[1,2] := l;
    for i := 1 to n do
        if (cost[i, l] < cost[i, k]) then near[i] := l;
        else near[i] := k;
    near[k] := near[l] := 0;
    for i := 2 to n-1 do
    {
        Let j be an index such that near[j] != 0 and cost[j, near[j]] is minimum;
        t[i,1] := j; t[i,2] := near[j];
        mincost := mincost + cost[j, near[j]];
        near[j] := 0;
        for k := 1 to n do
            if ((near[k] != 0) and (cost[k, near[k]] > cost[k, j])) then
                near[k] := j;
    }
    return mincost;
}

Prim's algorithm starts with a tree that includes only a minimum cost edge of G. Then edges are added to the tree one by one: the next edge (i, j) to be added is such that i is a vertex already included in the tree, j is a vertex not yet included, and the cost of (i, j), cost[i, j], is minimum among all such edges. The working of Prim's algorithm is illustrated step by step (Steps 1 to 6) in the accompanying figures.
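A compact Python sketch of Prim's strategy, using a heap of candidate edges instead of the near[] array of the pseudocode above (the graph data is my own small example, not the figure from the text):

```python
import heapq

def prim(adj, root=0):
    """Prim's MST: grow a tree from root, always adding the cheapest edge
    that reaches a new vertex. adj maps u to a list of (v, w); undirected."""
    in_tree = {root}
    edges = [(w, root, v) for v, w in adj[root]]
    heapq.heapify(edges)
    mincost, tree = 0, []
    while edges and len(in_tree) < len(adj):
        w, u, v = heapq.heappop(edges)
        if v in in_tree:
            continue                 # would close a cycle, skip
        in_tree.add(v)
        tree.append((u, v))
        mincost += w
        for x, wx in adj[v]:
            if x not in in_tree:
                heapq.heappush(edges, (wx, v, x))
    return mincost, tree

g = {0: [(1, 2), (2, 3)], 1: [(0, 2), (2, 1), (3, 4)],
     2: [(0, 3), (1, 1), (3, 5)], 3: [(1, 4), (2, 5)]}
print(prim(g))  # (7, [(0, 1), (1, 2), (1, 3)])
```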

Analysis: The algorithm spends most of its time finding the smallest edge, so its running time basically depends on how we search for this edge. A straightforward method just finds the smallest edge by searching the adjacency lists of the vertices in V; in this case each iteration costs O(m) time, yielding a total running time of O(mn).

2. KRUSKALS ALGORITHM:

In Kruskal's algorithm, the selection function chooses edges in increasing order of length, without worrying too much about their connection to previously chosen edges, except that it never forms a cycle. The result is a forest of trees that grows until all the trees in the forest (all the components) merge into a single tree. In this algorithm, a minimum cost spanning tree T is built edge by edge. Edges are considered for inclusion in T in increasing order of their cost; an edge is included in T if it does not form a cycle with the edges already in T.

Algorithm Kruskal(E, cost, n, t)
// E - set of edges in G; G has n vertices
// cost[u, v] - cost of edge (u, v); t - set of edges in the minimum cost spanning tree
// the final cost is returned
{
    for i := 1 to n do parent[i] := -1;
    i := 0; mincost := 0.0;
    while ((i < n-1) and (heap not empty)) do
    {
        Delete a minimum cost edge (u, v) from the heap;
        j := find(u);
        k := find(v);
        if (j != k) then
        {
            i := i + 1;
            t[i,1] := u; t[i,2] := v;
            mincost := mincost + cost[u, v];
            union(j, k);
        }
    }
    if (i != n-1) then write ("No spanning tree")
    else return mincost;
}

Analysis: The time complexity of this minimum cost spanning tree algorithm in the worst case is O(|E| log |E|), where E is the edge set of G.

Example: step-by-step operation of Kruskal's algorithm.

Step 1. In the graph, the edge (g, h) is shortest. Either vertex g or vertex h could be the representative; let's choose vertex g arbitrarily.

Step 2. The edge (c, i) creates the second tree. Choose vertex c as representative for second tree.


Step 3. Edge (g, f) is the next shortest edge. Add this edge and choose vertex g as representative.

Step 4. Edge (a, b) creates a third tree.

Step 5. Add edge (c, f) and merge two trees. Vertex c is chosen as the representative.

Step 6. Edge (g, i) is the next cheapest, but if we added this edge a cycle would be created: vertex c is already the representative of both.


Step 9. Instead of adding edge (h, i) add edge (a, h).

Step 10. Again, if we add edge (b, c), it would create a cycle. Add edge (d, e) instead to complete the spanning tree. In this spanning tree all trees joined and vertex c is a sole representative.
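The cycle test that drives the steps above is typically implemented with a union-find structure. The sketch below is my own (the find/union details and the test graph are not from the text); edges are scanned in increasing cost, and an edge is kept only if its endpoints lie in different trees:

```python
def kruskal(n, edges):
    """Kruskal's MST over vertices 0..n-1. edges is a list of (w, u, v)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mincost, tree = 0, []
    for w, u, v in sorted(edges):           # increasing order of cost
        ru, rv = find(u), find(v)
        if ru != rv:                        # different trees: no cycle
            parent[ru] = rv                 # merge the two trees
            tree.append((u, v))
            mincost += w
    if len(tree) != n - 1:
        return None                         # graph not connected
    return mincost, tree

edges = [(2, 0, 1), (3, 0, 2), (1, 1, 2), (4, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))  # (7, [(1, 2), (0, 1), (1, 3)])
```

On this small graph Kruskal's algorithm finds the same MST cost (7) as Prim's would, as expected.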



2. 1.2 & 1.3

1.12 SUGGESTED READINGS

1. Introduction to The Design and Analysis of Algorithms by Anany Levitin
2. Analysis and Design of Algorithms with C/C++, 3rd edition, by Prof. Nandagopalan
3. Analysis and Design of Algorithms by Padma Reddy
4. Even, Shimon, "Graph Algorithms", Computer Science Press.


MODULE-4, UNIT 1: INTRODUCTION TO GRAPHS

Structure
1.0 Objectives
1.1 Graphs as data structures
1.2 Graph representation
    Adjacency matrix
    Adjacency list
1.3 Depth First Search (DFS) traversal
1.4 Summary
1.5 Keywords
1.6 Answers to check your progress
1.7 Unit-end exercises and answers
1.8 Suggested readings

8.0 OBJECTIVES

At the end of this unit you will be able to:
Represent a graph in a computer using an adjacency matrix or an adjacency list.
Identify which method of representing a graph is better, and when.
Traverse a graph using DFS traversal and analyze its time complexity.

8.1 GRAPHS AS DATA STRUCTURES

1.1.1 Introduction to graphs

Graphs are a widely used structure in computer science and many computer applications. More than a plain data structure, graphs store and let us analyze metadata: the connections present in data. For instance, consider the cities in your country. The road network which connects them can be represented as a graph and then analyzed. We can examine whether one city can be reached from another, or find the shortest route between two cities. First of all, we introduce some definitions on graphs. Next, we show how graphs are represented inside a computer. Then you can turn to basic graph algorithms. There are two important sets of objects which specify a graph and its structure. The first set is V, called the vertex-set. In the road network example, cities are vertices.

Each vertex can be drawn as a circle with vertex's number inside.

(figure: vertices)

The next important set is E, called the edge-set. E is a subset of V x V. Simply speaking, each edge connects two vertices, including the case when a vertex is connected to itself (such an edge is called a loop). All graphs are divided into two big groups: directed and undirected graphs. The difference is that edges in directed graphs, called arcs, have a direction. These kinds of graphs have much in common with each other, but significant differences are also present. We will point out which kind of graph is considered in each particular algorithm description. An edge can be drawn as a line; if the graph is directed, each line has an arrow.

undirected graph

directed graph

Now, we present some basic graph definitions.

A sequence of vertices, such that there is an edge from each vertex to the next in the sequence, is called a path. The first vertex in the path is called the start vertex; the last vertex in the path is called the end vertex. If the start and end vertices are the same, the path is called a cycle. A path is called simple if it includes each of its vertices only once. A cycle is called simple if it includes each of its vertices, except the start (end) one, only once. Let's see examples of a path and a cycle.


path (simple)

cycle (simple)

The last definition we give here is a weighted graph. Graph is called weighted, if every edge is associated with a real number, called edge weight. For instance, in the road network example, weight of each road may be its length or minimal time needed to drive along.

weighted graph

1.2 Graph representation

There are several possible ways to represent a graph inside the computer. We will discuss two of them: adjacency matrix and adjacency list.

a) Adjacency matrix

Each cell a_ij of an adjacency matrix contains 1 if there is an edge between the i-th and j-th vertices, and 0 otherwise. Before discussing the advantages and disadvantages of this kind of representation, let us see an example.

The graph presented by example is undirected. It means that its adjacency matrix is symmetric. Indeed, in undirected graph, if there is an edge (2, 5) then there is also an edge (5, 2). This is also the reason, why there are two cells for every edge in the sample. Loops, if they are allowed in a graph, correspond to the diagonal elements of an adjacency matrix. Advantages. Adjacency matrix is very convenient to work with. Add (remove) an edge can be done in O(1) time, the same time is required to check, if there is an edge between two vertices. Also it is very simple to program and in all our graph tutorials we are going to work with this kind of representation. Disadvantages.

The adjacency matrix consumes a huge amount of memory for storing big graphs. All graphs can be divided into two categories: sparse and dense. Sparse graphs contain few edges (the number of edges is much less than the square of the number of vertices, |E| << |V|^2). On the other hand, dense graphs contain a number of edges comparable with the square of the number of vertices. The adjacency matrix is optimal for dense graphs, but for sparse ones it is superfluous. The next drawback of the adjacency matrix is that in many algorithms you need to know the edges adjacent to the current vertex. To draw out such information from the adjacency matrix you have to scan over the corresponding row, which results in O(|V|) complexity. For algorithms like DFS, or those based on it, use of the adjacency matrix results in an overall complexity of O(|V|^2), while it can be reduced to O(|V| + |E|) when using an adjacency list. The last disadvantage we want to draw your attention to is that the adjacency matrix requires huge effort for adding or removing a vertex. If a graph is used for analysis only, this does not matter, but if you want to construct a fully dynamic structure, using an adjacency matrix makes it quite slow for big graphs. To sum up, the adjacency matrix is a good solution for dense graphs, which implies having a constant number of vertices.

b) Adjacency list

This kind of graph representation is one of the alternatives to the adjacency matrix. It requires less memory and, in particular situations, can even outperform the adjacency matrix. For every vertex, an adjacency list stores a list of the vertices adjacent to it. Let us see an example.

Vertex 1: 4
Vertex 2: 4, 5
Vertex 3: 5
Vertex 4: 2, 5

(Adjacency lists of the sample graph; the figure of the graph itself is not reproduced here.)

Advantages. The adjacency list allows us to store a graph in a more compact form than the adjacency matrix, although the difference decreases as the graph becomes denser. Another advantage is that the adjacency list lets us get the list of vertices adjacent to a given vertex in O(1) time, which is a big advantage for some algorithms.

Disadvantages.

Adding or removing an edge in an adjacency list is not as easy as in an adjacency matrix. It requires, on average, O(|E| / |V|) time, which for dense graphs can add up to cubic total complexity when inserting all edges. Checking whether there is an edge between two vertices can be done in O(|E| / |V|) time when the list of adjacent vertices is unordered, or O(log2(|E| / |V|)) when it is sorted. This operation stays quite cheap.
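A minimal adjacency-list sketch in Python makes these costs concrete (the graph is the same hypothetical five-vertex example as before; a dictionary of unordered neighbour lists is one common way to realise this representation):

```python
# Adjacency-list representation of a small undirected graph.
# Each vertex maps to the (unordered) list of its neighbours.
adj = {v: [] for v in range(1, 6)}

def add_edge(u, v):
    # Appending is O(1), but the edge appears in two lists (undirected graph).
    adj[u].append(v)
    adj[v].append(u)

def has_edge(u, v):
    # Linear scan of one neighbour list: O(|E|/|V|) on average, as noted above.
    return v in adj[u]

for u, v in [(1, 4), (2, 4), (2, 5), (3, 5), (4, 5)]:
    add_edge(u, v)

print(adj[4])          # [1, 2, 5] -- the neighbours of vertex 4
print(has_edge(2, 5))  # True
```

Only 2|E| list entries are stored in total, which is why this representation wins for sparse graphs.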


The adjacency list does not allow an efficient implementation when the number of vertices must change dynamically. Adding a new vertex can be done in O(|V|), but removing a vertex results in O(|E|) complexity.

Conclusion: the adjacency list is a good solution for sparse graphs and lets us change the number of vertices more efficiently than the adjacency matrix does. But there are still better solutions for storing fully dynamic graphs.

1.3 Algorithms associated with graphs and their time complexities

1.3.1 Depth-first search (DFS) for undirected graphs

Depth-first search, or DFS, is a way to traverse a graph. By itself it only visits the vertices of the graph, but there are hundreds of graph algorithms that are based on DFS. Therefore, understanding the principles of depth-first search is quite important for moving further into graph theory. The principle of the algorithm is simple: go forward (in depth) while there is such a possibility, otherwise backtrack.

Algorithm

Algorithm DFS_Traversal(G)
//Implements a depth-first search traversal of a graph
//Input : Graph G = <V, E>
//Output: Graph G with its vertices marked with consecutive integers in the order they
//have been first encountered by the DFS traversal
Mark each vertex in V with 0 as a mark of being unvisited
Count ← 0
For each vertex v in V do
    If v is marked with 0
        dfs(v)
//end DFS_Traversal

Routine dfs(v)
//Visits recursively all unvisited vertices connected to vertex v and assigns them
//numbers in the order they are encountered, via the global variable Count
Count ← Count + 1
Mark v with Count
For each vertex w in V adjacent to v do
    If w is marked with 0
        dfs(w)
//end dfs
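The DFS traversal pseudocode above can be turned into a short runnable sketch. Here it is in Python, using the five-vertex sample graph from this unit's figures (edges 1-4, 2-4, 2-5, 3-5 and 4-5); the function name and adjacency-list encoding are illustrative choices:

```python
# Depth-first traversal that numbers vertices in the order they are
# first encountered, following the pseudocode above.
def dfs_traversal(vertices, adj):
    mark = {v: 0 for v in vertices}   # 0 = unvisited
    count = 0

    def dfs(v):
        nonlocal count
        count += 1
        mark[v] = count               # number v as it is first encountered
        for w in adj[v]:
            if mark[w] == 0:
                dfs(w)

    for v in vertices:                # outer loop covers disconnected graphs
        if mark[v] == 0:
            dfs(v)
    return mark

# Sample graph: edges {(1,4), (2,4), (2,5), (3,5), (4,5)}
adj = {1: [4], 2: [4, 5], 3: [5], 4: [1, 2, 5], 5: [2, 3, 4]}
print(dfs_traversal([1, 2, 3, 4, 5], adj))
# Vertex 1 gets number 1; then 4, 2, 5, 3 receive numbers 2..5 in turn.
```

The numbering 1, 4, 2, 5, 3 is exactly the visiting order shown in the worked example later in this section.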

In DFS, each vertex has three possible colors representing its state:

white: vertex is unvisited;
gray: vertex is in progress;
black: DFS has finished processing the vertex.

NB. For most algorithms the boolean classification unvisited/visited is quite enough, but we show the general case here.

Initially all vertices are white (unvisited). DFS starts at an arbitrary vertex and runs as follows:
1. Mark vertex u as gray (visited).
2. For each edge (u, v) where v is white, run depth-first search from v recursively.
3. Mark vertex u as black and backtrack to the parent.

Example: Traverse the graph shown below using DFS. Start from the vertex with number 1.
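The three-colour scheme can be sketched as follows (Python; the same hypothetical five-vertex graph as before — note that vertices unreachable from the start vertex would simply stay white):

```python
# DFS with white/gray/black colouring. A vertex is gray while its
# recursive call is in progress and black once it is fully processed.
WHITE, GRAY, BLACK = "white", "gray", "black"

def dfs_colors(start, adj):
    color = {v: WHITE for v in adj}
    order = []                      # vertices in the order they turn gray

    def dfs(u):
        color[u] = GRAY
        order.append(u)
        for v in adj[u]:
            if color[v] == WHITE:   # only recurse into unvisited vertices
                dfs(v)
        color[u] = BLACK            # finished: backtrack to the parent

    dfs(start)
    return color, order

adj = {1: [4], 2: [4, 5], 3: [5], 4: [1, 2, 5], 5: [2, 3, 4]}
color, order = dfs_colors(1, adj)
print(order)  # [1, 4, 2, 5, 3] -- the same order as the walkthrough below
```

Since this graph is connected, every vertex ends up black once the call returns.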

Source graph.

(The intermediate figures are not reproduced here. Starting from vertex 1, DFS marks vertex 1 gray, moves to vertex 4 and marks it gray, then to vertex 2, and then to vertex 5, marking each gray in turn.)

Mark vertex 3 as gray.


There is no way to go from vertex 3. Mark it as black and backtrack to vertex 5.

There is an edge (5, 4), but vertex 4 is gray.

There is no way to go from vertex 5. Mark it as black and backtrack to vertex 2.


There are no more edges adjacent to vertex 2. Mark it as black and backtrack to vertex 4.

There is an edge (4, 5), but vertex 5 is black.

There are no more edges adjacent to vertex 4. Mark it as black and backtrack to vertex 1.


There are no more edges adjacent to vertex 1. Mark it as black. DFS is over.

As you can see from the example, DFS does not go through all edges. The vertices and edges that depth-first search visits form a tree. This tree contains all vertices of the graph (if it is connected) and is called the spanning tree of the graph. It corresponds exactly to the recursive calls of DFS. If a graph is disconnected, DFS will not visit all of its vertices; for details, see the connected-components algorithm.

Complexity analysis

Assume the graph is connected. Depth-first search visits every vertex in the graph and checks every edge. Therefore, the complexity of DFS is O(|V| + |E|). As mentioned before, if an adjacency matrix is used for the graph representation, the edges adjacent to a vertex cannot be found efficiently, which results in O(|V|²) complexity.
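The O(|V|²) versus O(|V| + |E|) difference can be made concrete by counting the adjacency checks DFS performs under each representation. This sketch uses a hypothetical sparse graph, a simple path on 100 vertices:

```python
# Count adjacency checks performed by DFS with a matrix (a full row scan
# per vertex) versus an adjacency list (only the actual neighbours).
def dfs_matrix_checks(n, matrix):
    visited, checks = set(), 0
    def dfs(u):
        nonlocal checks
        visited.add(u)
        for v in range(n):          # O(|V|) work per vertex, regardless of degree
            checks += 1
            if matrix[u][v] and v not in visited:
                dfs(v)
    dfs(0)
    return checks                   # n * n in total: O(|V|^2)

def dfs_list_checks(adj):
    visited, checks = set(), 0
    def dfs(u):
        nonlocal checks
        visited.add(u)
        for v in adj[u]:            # only deg(u) steps per vertex
            checks += 1
            if v not in visited:
                dfs(v)
    dfs(0)
    return checks                   # 2|E| in total: O(|V| + |E|)

n = 100                             # path graph 0-1-2-...-99 (99 edges)
matrix = [[1 if abs(i - j) == 1 else 0 for j in range(n)] for i in range(n)]
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
print(dfs_matrix_checks(n, matrix), dfs_list_checks(adj))  # 10000 vs 198
```

For this sparse graph the matrix version does roughly fifty times more work, and the gap widens as |V| grows.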

Check your progress

1. Write an algorithm for DFS traversal and analyze its complexity.
2. What are the different ways of representing a graph? Explain with an example.
3. What are the advantages and disadvantages of the adjacency matrix and adjacency list methods of representing a graph?

1.4 SUMMARY

A graph can be represented in two ways, i.e. the adjacency matrix and the adjacency list method.


The adjacency matrix is a good solution for dense graphs, and the adjacency list is good for sparse graphs.
Depth-first search (DFS) is an algorithm for traversing or searching a tree or graph. One starts at the root (selecting some node as the root in the graph case) and explores as far as possible along each branch before backtracking.

1.5 KEYWORDS

Graph: an abstract representation of a set of objects where some pairs of the objects are connected by links. The interconnected objects are represented by mathematical abstractions called vertices, and the links that connect some pairs of vertices are called edges.
Digraph: a graph with directions on its edges.

1.6 ANSWERS TO CHECK YOUR PROGRESS
1. See 1.3.1
2. See 1.2
3. See 1.2

1.7 UNIT-END EXERCISES AND ANSWERS
1. Apply a DFS traversal to the graph given by the adjacency matrix (Matrix 1; the matrix figure is not reproduced here).
2. a) Draw the graph equivalent to Matrix 1. b) Represent Matrix 1 as an adjacency list.
3. Write a note on path, weighted graph, cycle and loop.
Answers: 1. See 1.3.1  2. See 1.2  3. See 1.1

1.8 SUGGESTED READINGS

1. Introduction to the Design and Analysis of Algorithms by Anany Levitin
2. Analysis and Design of Algorithms with C/C++, 3rd edition, by Prof. Nandagopalan
3. Analysis and Design of Algorithms by Padma Reddy
4. Even, Shimon, "Graph Algorithms", Computer Science Press
5. Data Structures, Algorithms and Applications in C++, 2nd edition, by Sartaj Sahni
