Design & Analysis of Algorithms (SPPU-Sem.7-Comp)

CHAPTER 1
Algorithms and Problem Solving

Syllabus : The Role of Algorithms in Computing - What are algorithms, Algorithms as technology, Evolution of Algorithms, Need of Correctness of Algorithm, Confirming correctness of Algorithm - sample examples, Iterative algorithm design issues. Problem Solving Principles : Classification of problems, Problem solving strategies, Classification of time complexities (linear, logarithmic etc.).

1.1 Algorithmic Thinking
1.2 What is an Algorithm?
    1.2.1 Characteristics of an Algorithm
          UQ. Explain the characteristics of a good algorithm. List out the problems solved by the algorithm.
    1.2.2 Algorithms as Technology
          UQ. Write a short note on the algorithm as a technology with example. (SPPU - Q. 2(b), March 19, 5 Marks)
    1.2.3 Evolution of Algorithms
1.3 Classification of Problems
1.4 Stages in Problem-Solving
1.5 Applying Different Algorithmic Strategies
1.6 Design of Algorithm
    1.6.1 Basic Steps to Design an Algorithm
    1.6.2 Writing Pseudocode of an Algorithm
    1.6.3 Pseudocode Conventions
1.7 The Correctness of an Algorithm
    1.7.1 Need for the Correctness of an Algorithm
          UQ. Why is the correctness of an algorithm important? What is the loop invariant property? Explain with example.
    1.7.2 Confirming Correctness of Algorithm - Sample Examples
          UQ. How to confirm the correctness of an algorithm? Explain with example. (SPPU - Q. 2(a), March 19, 5 Marks)
          UQ. Explain the concept of Principle of Mathematical Induction and prove the correctness of an algorithm to find factorial of a number.
          UQ. Explain the concept of PMI and prove the correctness of an algorithm to find factorial of a number. (SPPU - Q. 3(b), March 19, 5 Marks)
1.8 Iterative Algorithm Design Issues
    UQ. Explain issues related to iterative algorithm design.
    1.8.1
Iterations using the Loop Control Structure
    1.8.2 Improving the Efficiency of the Algorithms
          UQ. Explain different means of improving the efficiency of an algorithm. (SPPU - Q. 1(a), March 19, 5 Marks; Q. 2(b), Dec. 19, 6 Marks)
1.9 Classification of Time Complexities
> Chapter Ends.

(SPPU - New Syllabus w.e.f. academic year 22-23) (P7-71) Tech-Neo Publications... A SACHIN SHAH Venture

1.1 ALGORITHMIC THINKING

• Algorithms are everywhere and are involved in our lives at every moment, whether we notice them explicitly or not. To cook a delicious dish we need a correct recipe, to get a passport we need to fulfil defined procedures, to solve any numerical we complete certain computational steps, to manufacture a product it requires typical processing, to cure a disease we obey a prescription, and so on. Even to breathe, we follow certain steps unknowingly. All these terminologies, like a recipe, procedure, process, prescription, refer to an algorithm.
• Presently computers are effectively used to perform numerous tasks to improve profitability and proficiency.
• Computational thinking is necessary to make use of computers for accomplishing our variety of activities.
• Apart from the basic skills of computer usage, computer professionals should be capable of developing specific computer programs to perform numerous computer-automated tasks.
• Algorithmic thinking is required for writing computer programs.
• Algorithmic thinking is an essential analytical skill for solving any problem. It provides the strategy to get the solution to a problem.
• It is highly needed in the development of effective computer programs to solve computational problems.
• Algorithmic thinking is pertinent in all disciplines of study and not only in the field of computer science.
• Suppose you want to borrow a DAA book from the library. Then, before approaching the library, you will analyse the requirements like a valid library membership card, correct details of the book (title, author, publication, edition, etc.), and library working hours.
• Based on it you should plan a certain sequence of steps to borrow a DAA book from the library. If you fail in such algorithmic thinking, you will not get the expected results for your task: either you may forget the library membership card, or may borrow the wrong book, or miss the library working hours.
• So, to accomplish any predefined task we need to develop a strategic plan to do it. Accordingly, we have to follow certain steps in a specific order. In this example we will follow the steps below :
(1) Carry your valid library membership card.
(2) Go to the respective library.
(3) Show your library membership card to the library staff and get his/her permission to access the book records.
(4) Search for the required DAA book.
(5) If the book is available, then ask the library staff to issue the book to you.
(6) If the book is not available, then fill the requirement slip and submit it to the library staff to register your claim for that book.
• Consider another example of developing banking software. The mere syntax of programming languages, or some snippets of code from varied sources, is not sufficient to develop the correct computer programs for it. Unless you apply algorithmic thinking to design a computational model and define the correct workflow of the required banking system, you are unable to develop correctly working banking software.

1.2 WHAT IS AN ALGORITHM?

GQ. Define an algorithm. Discuss different characteristics of a good algorithm. (5 Marks)

• An algorithm is a finite set of unambiguous steps needed to be followed in a certain order to accomplish a specific task. In the case of computational algorithms, these steps refer to the instructions that contain fundamental operators like +, -, *, /, %, etc.
• Algorithms provide a precise description of the procedure to be followed in a certain order to solve a well-defined computational problem.

1.2.1 Characteristics of an Algorithm

(1) Input : An algorithm has zero or more inputs. Each instruction that contains a fundamental operator must accept zero or more inputs.
(2) Output : An algorithm produces at least one output. Every instruction that contains a fundamental operator must produce at least one output.
(3) Definiteness : All instructions in an algorithm must be unambiguous, precise, and easy to interpret. By referring to any of the instructions in an algorithm one can clearly understand what is to be done. Every fundamental operator in an instruction must be defined without any ambiguity.
(4) Finiteness : An algorithm must terminate after a finite number of steps in all test cases. Every instruction which contains a fundamental operator must be terminated within a finite amount of time. Infinite loops or recursive functions without base conditions do not possess finiteness.
(5) Effectiveness : An algorithm must be developed by using very basic, simple, and feasible operations, so that one can trace it out by using just paper and pencil.

• An algorithm describes the precise steps of a solution by means of unambiguous instructions.
• The same problem can be solved by different algorithmic strategies. Also, the same algorithm can be specified in multiple ways.
• The legitimate inputs of an algorithm should be specified.

1.2.2 Algorithms as Technology

UQ. Write a short note on the algorithm as a technology with example. (SPPU - Q. 2(b), March 19, 5 Marks)

• Computing time and memory space are limited resources, so we should use them sensibly. This is achieved by the usage of efficient algorithms that need less space and time.
• Efficiency : The same problem can be solved by different algorithms. Such algorithms exhibit radical differences in their efficiency.
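The payoff from an efficient algorithm can be made concrete with a quick calculation. The following Python sketch estimates running time as instructions executed divided by machine speed; the machine speeds, constant factors, and input size are the illustrative values of the bubble-sort vs. heap-sort comparison discussed here, not measurements:

```python
import math

def est_time_seconds(n, constant, growth, instr_per_sec):
    """Estimated running time = (constant * growth(n)) instructions / machine speed."""
    return constant * growth(n) / instr_per_sec

n = 10**6  # one million numbers to sort

# Computer 1: 10^9 instructions/second, running bubble sort (~2*n^2 instructions)
t1 = est_time_seconds(n, 2, lambda m: m**2, 10**9)

# Computer 2: 10^7 instructions/second (100x slower), running heap sort (~50*n*log2(n))
t2 = est_time_seconds(n, 50, lambda m: m * math.log2(m), 10**7)

print(t1)  # 2000.0 seconds
print(t2)  # ~99.7 seconds -- about 20x faster despite the slower machine
```

Even on a machine 100 times slower, the n log n algorithm finishes roughly 20 times sooner at this input size, which is exactly the kind of gap the comparison below illustrates.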
• Many times, these variances are much more significant than those due to the runtime environment (hardware and software).
• E.g., consider the following scenario where Computer 1 is 100 times faster than Computer 2. Two different sorting algorithms are executed on these two computers to sort 10^6 numbers.
• It shows that even on the 100 times slower computer, Computer 2 completes the sorting of 10^6 numbers 20 times faster than Computer 1. This is achieved because of the usage of an efficient algorithm, heap sort.

Computer | Speed                        | Sorting algorithm                        | Time needed to sort 10^6 numbers
Comp. 1  | 10^9 instructions per second | Bubble sort : 2n^2, where constant C1 = 2 | 2 * (10^6)^2 instructions / 10^9 instructions per second = 2000 seconds
Comp. 2  | 10^7 instructions per second | Heap sort : 50 n log n, where constant C2 = 50 | 50 * 10^6 * log2(10^6) instructions / 10^7 instructions per second ≈ 100 seconds

Algorithms and other technologies

Many other computer technologies need knowledge of algorithms, e.g. :
(i) Computer organization and architecture : high-speed hardware with superpipelining, superscalar architectures, graphics processors, parallel architectures, etc.
(ii) Networking : LAN (local area network), WAN (wide area network)
(iii) System programming : language processors, operating systems, linkers, loaders, etc.
(iv) Web designing
(v) Image processing : graphics/image/video processing
(vi) Artificial intelligence
(vii) Embedded systems
(viii) Robotics

1.2.3 Evolution of Algorithms

• An algorithm is a precisely defined, self-contained sequence of instructions needed to complete a particular job.
• The invention of zero and the decimal number system by ancient Indians gave rise to basic algorithms for number systems and arithmetic operations.
• The ancient Indian Sanskrit grammarian Panini designed data structures like the Maheshwar Sutras, describing algorithms and rules for Sanskrit phonetics, morphology, and language syntax.
• Some theories say that the Babylonian clay tablets (300 BC) were the first algorithms, used by ancient people to record their grain stock and cattle.
• Originally, algorithms were contributed to algebra, calculus, etc. by many mathematicians like Euclid and Eratosthenes.
• The term "Algorithm" is attributed to the Persian mathematician, scientist and astronomer Abu Abdullah Muhammad ibn Musa Al-Khwarizmi. His name was translated in Latin as "Algorithmi", from which the word "Algorithm" was coined.
• In 1847, the English mathematician George Boole coined binary algebra, the base of today's computing logic.
• Decades later, algorithms of the present form came into practice with Alan Turing's computing machine. In 1936, he proposed the concept of an effective procedure. This steered the evolution of structured programming.
• Later there arose the concept of the correctness of algorithms. To verify the correctness of algorithms, different proof techniques were used.
• As the field of algorithms evolved, it gave rise to the need for efficient algorithms.
• Many algorithmic strategies were proposed, like divide and conquer, decrease and conquer, greedy method, dynamic programming, backtracking, branch and bound, etc.
• Prof. D. Knuth coined the term "algorithm analysis". Many researchers studied the efficiency of algorithms by considering the time and space trade-off.
• Then the theory of complexity classes was developed, based on the tractability and intractability of problems.
• For many important intractable problems, advanced algorithms like approximation algorithms and randomized algorithms are designed.
• Algorithms are used in many applications and in many technologies.
• Currently, many streams of algorithms are developed, like genetic algorithms, online algorithms, parallel algorithms, distributed algorithms, optimization algorithms, fuzzy algorithms, etc.

1.3 CLASSIFICATION OF PROBLEMS

There are different types of problems in computation. Some of the most important problems are listed below :

• Searching problems : These problems include searching for any item or a search key in given data. E.g., retrieving information from large databases, searching for an element in a list.
• Sorting problems : These problems include rearrangement of items in given data in an ascending or descending order. E.g., arranging names in alphabetical order, ranking internet search results, ranking students as per their CGPA.
• String processing : These problems include computations on strings. A string is defined as a sequence of characters from an alphabet, i.e., letters, numbers, and special characters. E.g., string matching problems, string encoding, parsing.
• Graph problems : These problems include processing of graphs. A graph is a collection of points (nodes/vertices) and some line segments (edges) connecting them. E.g., graph traversal, graph colouring problem, minimum spanning tree (MST) problem.
• Combinatorial problems : These problems explicitly or implicitly ask to find a combinatorial object like a permutation, a combination, or a subset that satisfies the specified constraints. A desired combinatorial object may also be needed to possess some additional property such as a maximum or a minimum value. E.g., 8-queens problem, 15-puzzle problem, tiling problem.
• Geometric problems : These problems deal with geometric objects like points, lines, and polygons. Computational geometry problems have numerous applications such as computer graphics, tomography, and robotics. E.g., convex-hull problem, closest-pair problem.
• Numerical problems : These problems involve mathematical objects of continuous nature. They include multiplication problems, computing definite integrals, solving equations and systems of equations, evaluating functions, and so on. E.g., large integer multiplication, matrix chain multiplication, Gaussian elimination problem.

Considering complexity theory, the problems are broadly classified as :

• Optimization problem : It is a computational problem that determines the optimal value of a specified cost function. E.g., travelling salesman problem (TSP), optimal binary search tree (OBST) problem, vertex cover problem.
• Decision problem : It is a restricted type of computational problem that produces only two possible outputs ("yes" or "no") for each input. E.g., primality test; Hamiltonian cycle : does a given graph have any Hamiltonian cycle in it?
• Decidable problem : It is a decision problem which gets the correct "yes" or "no" answer for a given input either in polynomial time or in non-polynomial time. E.g., primality test, Hamiltonian cycle problem.
• Undecidable problem : It is a decision problem which does not get the correct "yes" or "no" answer for a given input by any algorithm. E.g., halting problem.
• Tractable problem : It can be solved in polynomial time using deterministic algorithms. E.g., binary search, merge sort.
• Intractable problem : It cannot be solved in polynomial time using deterministic algorithms. E.g., knapsack problem, graph colouring problem.

1.4 STAGES IN PROBLEM-SOLVING

UQ. State and explain different stages in problem solving.

The following stages are essential to solve any real-world problem :

(1) Identifying the problem : To get the solution to any real-world problem we must thoroughly understand the problem and its constraints described in a natural language.
(2) Designing a computational or mathematical model : It presents the abstraction of the real-world problem. It removes unnecessary and irrelevant data from the problem description and simplifies it to get a precise computational model.
(3) Data organization : The essential data to solve the problem must be organized effectively. We should select an appropriate data structure to store the necessary data.
(4) Algorithm designing : By analyzing the problem we should design a finite set of unambiguous steps to get the solution.
(5) Algorithm specification : It is the way of describing the algorithmic steps for the programmer. Generally, these steps are written in the form of pseudo-code and conveyed to the programmer.
(6) Algorithm validation : After defining these algorithmic steps, we should validate our logic. It checks whether the algorithm produces the correct output in a finite amount of time for all legal test inputs.
(7) Analysis of an algorithm : The performance of the correct algorithm is analyzed and its efficiency is checked by considering different performance metrics like usage of memory, time, etc.
(8) Proof of correctness : The correctness and the efficiency of an algorithm for all legitimate inputs are verified through mathematical proof.
(9) Implementation : By referring to the specifications of a verified algorithm, a correct computer program is written using a specific programming language technology.
(10) Testing and debugging : The computer program written for a specific algorithm is executed on a machine. It is tested for all legitimate inputs and debugged to trace the expected workflow. Performance of an algorithm is tested by experimentation results.
(11) Documentation : The details of the solved problem, its algorithm, analysis of the algorithm, proofs of the correctness of the algorithm, implementation, test cases, etc. are well documented for future applications and research.

1.5 APPLYING DIFFERENT ALGORITHMIC STRATEGIES

• The same problem can be unravelled by different algorithmic strategies. These algorithmic solutions may differ in their requirements of computing resources.
• Thus, the efficiency of an algorithm depends on the algorithmic strategy used to design it.
• Some of the popular algorithmic strategies are as below :
• Brute force method : It enumerates all possible solutions to a given problem without applying any heuristics.
• Exhaustive search : It generates all candidate solutions to a given combinatorial problem and identifies the feasible solution.
• Divide and conquer : It follows three steps as given below :
  (i) Divide : Apply a top-down approach to divide a large problem into smaller and distinct sub-problems.
  (ii) Conquer : Solve the sub-problems independently and recursively by invoking the same algorithm.
  (iii) Combine : Apply a bottom-up approach to combine the solutions of all sub-problems to get a final solution to the original problem.
• Greedy method : It builds a solution in stages. At every stage, it selects the best choice concerning the local considerations.
• Dynamic programming : It is suitable for solving optimization problems with overlapping sub-problems. It applies to problems where the optimal decision sequence cannot be generated through stepwise decisions based on locally optimal criteria.
• Backtracking : It is an algorithmic strategy that explores all solutions to a given problem and abandons them if they do not fulfil the specified constraints. It follows depth first search (DFS).
• Branch and bound : It is a state-space algorithm where an E-node remains an E-node until it is dead.
• Exotic algorithms : genetic algorithms, simulated annealing, online algorithms, parallel algorithms, distributed algorithms, optimization algorithms, fuzzy algorithms, etc.
• By analysing the characteristic nature of the problem, the appropriate algorithmic strategy is to be applied to solve it.

Some of the classic problems that can be solved by different algorithmic strategies are as follows.

Algorithmic Strategy | Classic Problems
Brute force method   | (1) Sequential search (2) Bubble sort (3) N-queens problem (4) 15-puzzle problem (5) Closest-pair problem (6) Container loading problem
Divide and conquer   | (1) Binary search (2) Merge sort (3) Quicksort (4) Large integer multiplication problem (5) Closest-pair problem (6) Finding Max-Min
Greedy algorithms    | (1) Fractional knapsack problem (2) Job scheduling problem (3) Activity selection problem (4) Minimum spanning tree problem (5) Single source shortest paths problem (6) Optimal storage on tapes problem (7) Huffman code generation
Dynamic programming  | (1) Binomial coefficients problem (2) Optimal Binary Search Tree (OBST) problem (3) 0/1 knapsack problem (4) Matrix chain multiplication problem (5) Multistage graph problem (6) All pairs shortest paths problem (7) Longest common subsequence problem (8) Travelling salesperson problem
Backtracking         | (1) N-queens problem (2) Sum of subsets problem (3) Graph coloring problem (4) Hamiltonian cycle problem (5) 0/1 knapsack problem
Branch and bound     | (1) 0/1 knapsack problem (2) 15-puzzle problem (3) Travelling salesperson problem

• Designing an efficient algorithm to solve a computational problem is a skill that needs good algorithmic thinking.
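As a concrete illustration of one problem solved under two strategies from the table above, here is a Python sketch (illustrative, not from the text) of searching done by brute force — sequential search, O(n) comparisons — and by divide and conquer — binary search on sorted data, O(log n) comparisons:

```python
def sequential_search(a, key):
    """Brute force: examine every element in turn -- O(n) comparisons."""
    for i, x in enumerate(a):
        if x == key:
            return i
    return -1

def binary_search(a, key):
    """Divide and conquer on a sorted list: halve the search range each step -- O(log n)."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == key:
            return mid
        elif a[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = list(range(0, 100, 2))  # sorted even numbers 0, 2, ..., 98
print(sequential_search(data, 40))  # 20
print(binary_search(data, 40))      # 20
print(binary_search(data, 41))      # -1 (not present)
```

Note the trade-off: binary search demands sorted input, which is the price paid for the logarithmic comparison count.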
• A good algorithm designer needs the knowledge of :
  - Mathematics, discrete mathematics, numerical methods for simulation and modeling,
  - Computer programming languages,
  - Data structures and file handling,
  - Computer organization and architecture,
  - Database management systems,
  - Systems programming.
• A good algorithm should possess five major characteristics : (1) Input, (2) Output, (3) Definiteness, (4) Finiteness and (5) Effectiveness.
• The same problem can be answered by different algorithmic strategies. These algorithmic solutions may differ in their requirements of computing resources.
• Also, the same algorithm can be specified in multiple ways.

1.6.1 Basic Steps to Design an Algorithm

(1) Understand and analyse the problem to be solved.
(2) Mention the distinctive name of the algorithm.
(3) Select the appropriate algorithmic strategy to solve the problem by analysing the characteristic nature of the problem. E.g., if a problem can be divided into independent sub-problems of the same nature, then the divide and conquer strategy can be used to solve it. If a problem has overlapping sub-problems, then dynamic programming can be used to solve it.
(4) Identify the legitimate inputs of the algorithm.
(5) Identify the expected output of the algorithm.
(6) Decide the suitable data structure to define the inputs and to present the output.
(7) Describe a finite set of well-ordered unambiguous instructions to produce the expected output.

1.6.2 Writing Pseudocode of an Algorithm

• Once the algorithm is designed, it can be specified in the form of pseudocode.

1.7.2 Confirming Correctness of Algorithm - Sample Examples

• The correctness of a statement A(n) for all integers n ≥ n₀ can be established by showing individually the truth of each of the statements A(n₀), A(n₀ + 1), A(n₀ + 2), .... Using the principle of mathematical induction (PMI) we need to establish the truth of only two statements, namely (i) A(n₀), and (ii) A(n + 1) if A(n) holds for an arbitrary integer n ≥ n₀.

The general procedure of PMI :
(1) Basis step : Prove directly that statement A(n₀) is true for some base case n₀, where n₀ is some integer. It is generally a very easy step.
(2) Induction hypothesis : Assume that A(n) holds for an arbitrary integer n ≥ n₀, and prove the implication that A(n + 1) then holds for all n ≥ n₀.
(3) Inductive step : Then, by using the principle of induction, the statement A(n) is true for all n ≥ n₀.

E.g., consider an iterative algorithm to calculate the factorial of any number n.

Proof of correctness of algorithm Iterative_fact(n) by mathematical induction :

• Let a proposition A(n) be true if for each positive number n, fact = n! = 1 * 2 * ... * n.
• Basis step : Let n = 1. When n = 1, the loop executes its first iteration, fact := 1 * 1 = 1 and i := 2. Since 1! = 1, fact = n! and i = n + 1 hold. Thus A(n₀) is true for the base case n₀ = 1.
• Induction hypothesis : Assume that A(n) is true for some n = k, i.e., fact = k! and i = k + 1 hold after k iterations of the loop. Now we prove that A(n + 1) is true.
• We need to show that for the (k + 1)th iteration, fact = (k + 1)! and i = k + 1 + 1 hold. When the loop enters the (k + 1)st iteration, fact = k! and i = k + 1 at the beginning of the loop. Inside the loop, fact := k! * (k + 1) and i := (k + 1) + 1, producing fact = (k + 1)! and i = (k + 1) + 1.
• Inductive step : Hence, by inductive reasoning, it can be claimed that A(n) holds for any n, i.e., fact = n! and i = n + 1 hold for any positive integer n.
• Now, when the algorithm terminates, i = n + 1. Hence the loop has been executed at most n times.
• Thus fact = n! is returned. This implies that the algorithm Iterative_fact(n) is correct.

1.8 ITERATIVE ALGORITHM DESIGN ISSUES

• Iterative algorithms are non-recursive by nature. In these types of algorithms, the functions do not call themselves.
• The various design issues of the iterative algorithms are as below :
(1) Iterations using the loop control structure
(2)
Improving the efficiency of the algorithms
(3) Estimation of time and space requirements
(4) Expressing the complexities using order notations
(5) Applying different algorithmic strategies

1.8.1 Iterations using the Loop Control Structure

• The iterative algorithm uses a looping control structure to implement the iterations, e.g. for, while, repeat-until.
• For any loop we specify :
(1) The initial condition that is set to be TRUE before the beginning of the loop.
(2) The invariant relation that must hold before, after, and during each iteration of the loop.
(3) The terminating condition that specifies the condition for which the loop must terminate.

i := 0;           /* initial condition */
while (i < n)     /* terminating condition : the loop ends when i = n */
{
    ...           /* invariant relation holds before and after each iteration */
    i := i + 1;
}

1.8.2 Improving the Efficiency of the Algorithms

Avoiding late termination of a loop

• Suppose the smallest and the largest elements of an array A[0 : n-1] are found by testing the two conditions (A[i] < smallest) and (A[i] > largest) independently with two if statements inside a for loop; then the number of comparisons is 2(n - 1).
• If we replace the two if statements by if-else if statements inside the for loop, then the number of comparisons will be (n - 1). This replacement makes sense because if an element A[i] is found to be smaller than the current smallest element, then there is no need to compare it again with the current largest element. This modification is reflected in the code below :

if (A[i] < smallest)
    smallest := A[i];
else if (A[i] > largest)
    largest := A[i];

• Thus, by avoiding the late termination of a loop, the performance of an algorithm can be improved.

Early detection of the expected output

• For certain input instances of a problem, the algorithm can reach the expected output condition before its regular condition of termination is reached. By identifying such conditions, the algorithm can have an early exit from a loop. It saves computation time.
• E.g., suppose we want to search for at least a single even number in an array A[0 : n-1]. Consider an inefficient code for the same as given below :

for i := 0 to n-1
{
    if ((A[i] % 2) = 0)
        write (A[i]);   /* the loop continues even after an even element is found */
}

• Here, though an even element is found prior to reaching the last element in the given array A, the loop continues till the end of the array A.
• As we want to search for at least one even element in the array A, if we find any even element before reaching the last element in A, there is no need to scan the further elements of the array.
• On the first occurrence of an even element (if any) in A, the algorithm can have early termination. The efficient code for the same can be given as below :

for i := 0 to n-1
{
    if ((A[i] % 2) = 0)
    {
        write (A[i]);
        break;   /* on the occurrence of an even element the algorithm exits early */
    }
}
if (i = n)
    write ("no even element is present in A");

• To check for early exit conditions, some additional steps need to be added to an algorithm. These steps should be added only if they are inexpensive in terms of computing time.
• There is always a trade-off of additional steps and memory space versus execution time to have an early exit of an algorithm.

1.9 CLASSIFICATION OF TIME COMPLEXITIES

Important Note : Kindly refer to subsection 2.2.5 in Chapter 2.

• Algorithmic thinking is an essential analytical skill for solving any problem.
• An algorithm is a finite set of unambiguous steps needed to be followed in a certain order to accomplish a specific task. A good algorithm possesses five characteristics : input, output, definiteness, finiteness and effectiveness.
• Some of the popular algorithmic strategies for problem solving are as below :
  - Brute force method
  - Exhaustive search
  - Divide and conquer
  - Greedy method
  - Dynamic programming

CHAPTER 2
Analysis of Algorithms and Complexity Theory

Syllabus : Analysis : Input size, best case, worst case, average case, counting dominant operators, growth rate, upper bounds, asymptotic growth, O, o, Ω, Θ and ω notations, polynomial and non-polynomial problems, deterministic and non-deterministic algorithms, P-class problems, NP-class of problems, polynomial problem reduction, NP-Complete problems - vertex cover and 3-SAT, and NP-hard problem - Hamiltonian cycle.

2.1 Analysis of Algorithms
    RQ. Give the significance of analysis of algorithms. Also, compare a priori analysis and a posteriori analysis of algorithms.
    RQ. How do we analyze and measure the time complexity of an algorithm? What are the basic components which contribute to space complexity? In what way is the asymmetry between Big-O and Big-Omega notation helpful?
    2.1.1 A Priori and a Posteriori Analysis
          RQ. Compare a priori and a posteriori analysis of algorithms.
2.2 Efficiency-Analysis Framework
    RQ. What is the framework for the analysis of algorithms? Discuss all the components.
    2.2.1 The Growth Rate Function
    2.2.2 Estimate of Time Complexity
    2.2.3 Best-case, Worst-case and Average-case Analysis
          RQ. Define best-case, worst-case and average-case efficiency. Is the average-case efficiency an average of the best-case and worst-case efficiencies?
    2.2.4 Asymptotic Efficiency
2.3 Asymptotic Notations
    UQ. Explain Big Oh (O), Omega (Ω) and Theta (Θ) notations in detail along with suitable examples.
    UQ. Define asymptotic notations. Explain their significance in analyzing algorithms. (SPPU - Q. 2(b), Aug. 17, 4 Marks)
    UQ. Explain asymptotic notations with example.
    UQ. Define asymptotic notation. What is their significance in analyzing algorithms? Explain Big Oh, Omega and Theta notations.
    UQ. Write short note on : (i) P class and NP class (ii) Big 'oh' and theta
    2.3.1 Types of Asymptotic Notations
    2.3.2 Properties of Asymptotic Notations
          UQ. List the properties of various asymptotic notations.
2.4 Computational Complexity
    2.4.1 Basic Terminologies of Computational Complexity
          UQ. Explain polynomial and non-polynomial problems.
          UQ. What are deterministic and non-deterministic algorithms? Explain.
          UQ. Write one example each of deterministic and non-deterministic algorithms for searching.
2.5 Computational Complexity Classes
    UQ. Give and explain the relationship between P, NP, NP-Complete and NP-Hard.
    UQ. Explain the following with relations to each other : (i) Polynomial algorithms (ii) Non-polynomial hard algorithms (iii) Non-polynomial complete algorithms
    UQ. What are P and NP classes? What is their relationship? Give examples of each class.
    2.5.1 The Classes : P, NP, NP-Hard, NP-Complete
2.6 Theory of Reducibility
    2.6.1 Polynomial-time Reduction
    2.6.2 Reducibility and NP-Completeness
2.7 NP-Complete Problems
    UQ. Write a short note on NP-Completeness of algorithm and NP-Hard.
    UQ. What are the steps to prove the NP-Completeness of a problem? Prove that the vertex cover problem is NP-Complete.
    UQ. What is an NP-Complete algorithm? How do we prove that an algorithm is NP-Complete? (Give an example)
    2.7.1 The 3-SAT Problem
          UQ. What is the SAT and 3-SAT problem? Prove that the 3-SAT problem is NP-Complete.
          UQ. What is the Boolean Satisfiability Problem? Explain the 3-SAT problem. Prove 3-SAT is NP-Complete.
          UQ. Explain in brief NP-Complete problem.
UQ. Prove that the SAT problem is NP-Complete.
UQ. What is the SAT and 3-SAT problem? Prove that the 3-SAT problem is NP-Complete. [SPPU - Q. 3(b), May 16, 8 Marks]
2.7.2 The Clique Problem
2.7.3 The Vertex Cover Problem
UQ. State the vertex cover problem and prove that the vertex cover problem is NP-Complete. [SPPU - Q. 3(b), Dec. 19, 8 Marks; Q. 3(a), Dec. 17, 8 Marks]
UQ. Prove that the vertex cover problem is NP-Complete. [SPPU - Q. 4(b), May 17, 8 Marks]
2.7.4 The Hamiltonian Cycle Problem
UQ. Explain the NP-Hard Hamiltonian cycle problem.
2.7.5 Reducibility Structure of NP-Complete Problems
> Chapter Ends.

Design & Analysis of Algorithms (SPPU-Sem.7-Comp) (Analysis of Algorithms & Complexity Theory)...Page No. (2-3)

• We, human beings, intuitively follow algorithmic thinking; without applying any conscious thought we do many activities mechanically or habitually.
• Most of the time, while doing such activities, we are less concerned with efficiency. For computational algorithms, however, we must consciously work on efficiency, considering the scarcity of computing resources and their costs. We should think about the wise usage of computing resources while performing any computational task, and design efficient algorithms that require fewer resources and less execution time.

2.1 ANALYSIS OF ALGORITHMS

GQ. Define analysis of an algorithm.

• Analysis of an algorithm investigates the efficiency of the algorithm with respect to computing resources. It involves the analysis of the time complexity and the space complexity of the algorithm.
• Time complexity : It is the time needed for the completion of an algorithm. To estimate the time complexity, we consider the cost of each fundamental instruction and the number of times the instruction is executed.
• Space complexity : It is the amount of memory needed for the completion of an algorithm.
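As a concrete illustration of the two metrics, here is a small Python sketch (the helper name `total` and the measurement approach are mine, not the book's): the running time of a loop can be observed with `time.perf_counter`, and the variable-part memory of an input array with `sys.getsizeof`.

```python
import sys
import time

def total(a):
    """Sum the elements of a; the loop body runs len(a) times."""
    s = 0
    for x in a:          # time grows with the input size n = len(a)
        s += x
    return s

a = list(range(100_000))
t0 = time.perf_counter()
total(a)
elapsed = time.perf_counter() - t0        # observed running time (a posteriori)
array_bytes = sys.getsizeof(a)            # memory that grows with the input size
assert elapsed >= 0.0
assert array_bytes > sys.getsizeof([])    # larger input -> more memory for the array
```

This only measures one run on one machine; the sections that follow estimate the same quantities analytically, independent of the machine.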
To estimate the memory requirement we need to focus on two parts :
(1) A fixed part : It is independent of the input size. It includes memory for instructions (code), constants, simple variables, etc.
(2) A variable part : It is dependent on the input size. It includes memory for the recursion stack, referenced variables (arrays, objects), etc.

Consider the following algorithm :

| Step No. | Step Description | Step Cost | Frequency | Total Count |
| 1 | Algorithm Product(int a[], int n) | - | - | - |
| 2 | { | - | - | - |
| 3 | prod := 1; | 1 (for assignment operation) | 1 | 1 |
| 4 | for (i := 1; i <= n; i++) | 1 (for i := 1) | 1 | 1 |
|   |   | 1 (for i <= n) | (n+1) | +(n+1) |
|   |   | 1 (for i++) | n | +n |
| 5 | prod := prod * a[i]; | 1 (for assignment and multiplication operation) | n | n |
| 6 | } | - | - | - |
| 7 | return prod; | 1 | 1 | 1 |
|   | Total time complexity |  |  | 3n + 4 |
| Space requirement : 1. Fixed part : memory to store 3 variables - prod, n and i |  |  | 3 |
| Space requirement : 2. Variable part : memory to store an array a of size n |  |  | n |
|   | Total space complexity |  |  | n + 3 |

• As computational time is more crucial than memory (currently memory is cheaper, but time is precious!), we focus on time complexity in all further discussions. The same theory applies to space complexity, too.

Ex. : Calculate the frequency counts for all statements in the following algorithm segment (a while-loop segment; only the first step, with frequency 1, and the while-loop header are legible in the source).

2.1.1 A PRIORI AND A POSTERIORI ANALYSIS

Comparison of a priori and a posteriori analysis :

| # | A priori analysis | A posteriori analysis |
| 1 | It is made before the execution of an algorithm. | It is made after the execution of an algorithm. |
| 2 | It is based on knowledge that is independent of experimental evidence. | It needs justification through experience, and hence is based on knowledge that depends on experimental evidence. |
| 3 | All the values of different evaluation metrics are estimated values. | All the values of different evaluation metrics are exact values recorded during experimentation. |
| 4 | All the values of various performance metrics are uniform values, independent of the actual input given. | All the values of various performance metrics are non-uniform values w.r.t. the actual input given during execution. |
| 5 | It is independent of the computational power of the CPU, operating system, system architecture, programming language, and other environmental aspects. | It is dependent on the computational power of the CPU, operating system, system architecture, programming language, and other environmental aspects. |
| 6 | It is also known as "performance estimation" or "non-empirical analysis". | It is also known as "performance testing", "performance measurement" or "empirical analysis". |
| 7 | It needs a strong knowledge base of mathematical principles for the analysis of algorithms. | It does not require a strong knowledge base of mathematical principles. |
| 8 | E.g. x := x + y needs estimated running time O(1), irrespective of machine specification. | E.g. x := x + y runs in, say, 0.01 ns on machine-I, 2 ns on machine-II, and 0.0005 ns on machine-III. |

2.2 EFFICIENCY-ANALYSIS FRAMEWORK

UQ. What is the framework for the analysis of algorithms? Discuss all the components. (Ref. Q. 1(a), May 13, 8 Marks)

There are four major components of the framework for efficiency analysis of an algorithm, as given below :
(1) The growth rate function : The time and space complexities are defined as functions of the input size of an algorithm.
(2) An estimate of time complexity : The time complexity is estimated by calculating the frequency count of fundamental instructions in an algorithm.
Each fundamental instruction needs one unit of time.
(3) Best-case, worst-case and average-case analysis : Though the input size remains the same, some algorithms perform differently with specific inputs. This leads to the best-case, worst-case and average-case analysis of an algorithm.
(4) Asymptotic efficiency : The efficiency of an algorithm is investigated when the input size increases without any bound.

2.2.1 The Growth Rate Function

The running time of an algorithm is a function of the input size given to the algorithm. The growth of this function with increasing input size characterizes the efficiency of the algorithm. By comparing the growth rates of the running times of algorithms, we can evaluate the relative performance of alternative solutions to a given problem.

Ex. 2.2.1 : Suppose you have algorithms with the running times listed below (assume these are exact running times). How much slower does each of these algorithms get when you (a) double the input size, (b) increase the input size by 1?
(i) 100n²  (ii) n log n  (iii) 2ⁿ  (iv) n²   (4 Marks)

Soln. :
(A) Double the input size
(i) 100n² : If n → 2n then the running time = 100(2n)² = 4 × 100n². It implies that the algorithm gets slower by a factor of 4.
(ii) n log n : If n → 2n then the running time = 2n log(2n) = 2(n log n) + 2n. It implies that the algorithm gets slower by a factor of 2, plus an additive 2n.
(iii) 2ⁿ : If n → 2n then the running time = 2²ⁿ = (2ⁿ)². It implies that the new running time is the square of the previous running time.
(iv) n² : If n → 2n then the running time = (2n)² = 4n². It implies that the algorithm gets slower by a factor of 4.

(B) Increase the input size by 1
(i) 100n² : If n → n + 1 then the running time = 100(n + 1)² = 100n² + 200n + 100. It implies that the algorithm gets slower by an additive 200n + 100.
(ii) n log n : If n → n + 1 then the running time = (n + 1) log(n + 1). It implies that the algorithm gets slower by an additive (n + 1) log(n + 1) − n log n.
(iii) 2ⁿ : If n → n + 1 then the running time = 2ⁿ⁺¹ = 2 × 2ⁿ. It implies that the algorithm gets slower by a factor of 2.
(iv) n² : If n → n + 1 then the running time = (n + 1)² = n² + 2n + 1. It implies that the algorithm gets slower by an additive 2n + 1.

Table 2.2.1 : Growth rates of some typical functions of n

| n | log₂n | n log₂n | n² | n³ | 2ⁿ |
| 10 | 3.3 | 33 | 10² | 10³ | ≈ 10³ |
| 100 | 6.6 | ≈ 660 | 10⁴ | 10⁶ | ≈ 1.3 × 10³⁰ |
| 1000 | 10 | ≈ 10⁴ | 10⁶ | 10⁹ | ≈ 10³⁰¹ |

Ex. 2.2.2 : Suppose you have algorithms with the running times listed below (assume these are exact running times). How much slower does each of these algorithms get when you (a) double the input size, (b) increase the input size by 1?
(i) n²  (ii) n³  (iii) 2ⁿ  (iv) n log n   (4 Marks)

Soln. :
(A) Double the input size
(i) n² : If n → 2n then the running time = (2n)² = 4n². It implies that the algorithm gets slower by a factor of 4.
(ii) n³ : If n → 2n then the running time = (2n)³ = 8n³. It implies that the algorithm gets slower by a factor of 8.
(iii) 2ⁿ : If n → 2n then the running time = 2²ⁿ = (2ⁿ)². It implies that the new running time is the square of the previous running time.
(iv) n log n : If n → 2n then the running time = 2n log(2n). It implies that the algorithm gets slower by a factor of 2, plus an additive 2n.

(B) Increase the input size by 1
(i) n² : If n → n + 1 then the running time = (n + 1)² = n² + 2n + 1. It implies that the algorithm gets slower by an additive 2n + 1.
(ii) n³ : If n → n + 1 then the running time = (n + 1)³ = n³ + 3n² + 3n + 1. It implies that the algorithm gets slower by an additive 3n² + 3n + 1.
(iii) 2ⁿ : If n → n + 1 then the running time = 2ⁿ⁺¹ = 2 × 2ⁿ. It implies that the algorithm gets slower by a factor of 2.
(iv) n log n : If n → n + 1 then the running time = (n + 1) log(n + 1). It implies that the algorithm gets slower by an additive log(n + 1) + n[log(n + 1) − log n].

2.2.2 Estimate of Time Complexity

As computational time is more crucial than memory (currently memory is cheaper, but time is precious!), we focus on time complexity in all further discussions. The same theory applies to space complexity, too.

Counting the dominant operators
• An algorithm describes a finite set of unambiguous steps to be followed in a certain sequence to complete a particular task. In the case of computational algorithms, these steps refer to instructions that contain fundamental operators.
• To estimate the computational time of an algorithm, we can concentrate on the dominant fundamental operations in the algorithm. These operations adequately give a notion of the time requirement of the algorithm.
• E.g., in a recursive algorithm we can focus on the frequency of recursive calls; in an iterative algorithm we can focus on the frequency of execution of the loop body.

Examples
Ex. 1 : A simple for-loop.

| Step No. | Step Description | Step Cost | Frequency | Total Count |
| 1 | Algorithm Sum1(n) | - | - | - |
| 2 | { | - | - | - |
| 3 | sum := 0; | 1 | 1 | 1 |
| 4 | for (i := 1; i <= n; i++) | 1 (for i := 1) | 1 | 1 |
|   |   | 1 (for i <= n) | (n+1) | +(n+1) |
|   |   | 1 (for i++) | n | +n |
| 5 | sum := sum + i; | 1 | n | n |
| 6 | } | - | - | - |
| 7 | return sum; | 1 | 1 | 1 |
|   | Total time complexity |  |  | 3n + 4 |
| Space requirement : Fixed part : memory to store 3 variables - sum, n and i |  |  | 3 |
|   | Total space complexity |  |  | 3 (constant) |
‘+ Foreach value of, the inner for loop on jis executed for frequene} ‘ 2 n giving total frequency count of n'. | Total Count 1 Algorithm Sum3(n) - - 7 2 { B z 5 3 sum:=0; 1 1 4 for (i:=1; iS n; i++) 1; 1 1,(for is n) (n+1); +(n+1) 1;(for i++) a +n 5 for 1,(for j:=1) 1*n; n 1xforjsn) | ((n43)/2"m; | 4(n?43ny/2 tsfforjx2) | HV" | saan 6 ‘sumi= sum4j; 1 (@+1)2)"0 “¥(n74ny2 7 1 5 : 7 8 1 - 5 5 9 return sum; 1 1 t 10 } : : Total time complexity | (3n7411n+8)/2 ‘Space requirement: 1.Fixed part: memory to siore 3 variables- sum, n, iandj. | 4 ‘Total space complexity | 4 (Constant) ‘executed for frequency = n, ‘* In this example, the outer for loop on i +) «However, for each value of i, the inner for loop on j is executed for frequency = “> as j = i +2 that follows the arithmetic progression. (SPPU-- New Syllabus w.e.t academe year 22-23) (P7-71) [Bal tech-Noo Publications... SACHIN SHAH Venture Design & Analysis of Algorithms (SPPU-Sem.7-Comp) WW 22.3 Best-case, Worst-case and Average-case Analysis “‘Detine best-case, worst-case’ and. alerageleave: “IS an average-case efficiency -is ant ‘Though the input size remains the same, the efficiency of certain algorithms varies with specific inputs. It demonstrates the three cases of efficiency analysis as described below © Best-case efficiency * It is achieved when the best-case input of size n is given to an algorithm. With such input, an algorithm Tans to its completion in the least amount of time than that with other inputs of the same size, n, * Eg, In a sequential search, if the first clement of a given input of size n is the same as the search key, then the algorithms ends with a successful search in just one comparison. ‘O(1) is the best-case efficiency of a sequential search. Gi) Worst-case efficiency + Ibis achieved when the worst-case input of size n is given to an algorithm. With such input, an algorithm runs for the longest among all inputs of the Same size n, It gives an upper bound on running time. 
* Eg, Ina sequential search ifthe last element of a given input of size n is equal to the search key, then the algorithm ends with a successful search by performing 1 comparison O(n) is the worst-case efficiency of a sequential search, (iii) Average-case efficiency ‘+ Itis recorded when neither best-case nor the worst-case but any “random” input of size n is given to an algorithm. + Ibis not calculated by averaging the worst-case and the best-case efficiencies. Sometimes coincidently the average case efficiency of an algorithms matches with the mean of its worst-case and the best-case of efficiencies. * Estimation of average-case efficiency is based on some Probabilistic assumptions and is comparatively more Complex than the best-case and worst-case analysis, * Eg, for any random input of size n, sequential search Performs O(n) comparisons. Hence its average-case efficiency is O(n). * (Analysis of Algorithms & Complexity Theor)...Page Ny y %®_ 2.2.4 Asymptotic Efficiency Unless we evaluate the performance ofan algorthn, taking a sufficiently large input size, we cannot uy, its growth rate. Hence we need to study 4) “asymptotic efficiency” of an algorithm where ig performance of an algorithm is verified against increasing input size boundlessly. Table 2.2.2 : Basic asymptotic efficiency classes Gt i "Description efficiency, class if 16 [11 Defines “constant” growth rate irrespective of input size. Defines “logarithmic” or “sublinear” growth rate achieved by reducing an input size by a constant factor on subsequent iterations of the algorithm. E.g. Binary search algorithm, 2. [logn Defines “linear” growth rate. E.g. Sequential search algorithm, . Defines “quasilinear” growth rate. E.g. Merge sort by divide and conquer strategy. . 4-|nlogn Defines “quadratic” growth rate. It is generally found in an analysis of algorithms with} % nested loops. E.g. Matrix addition, Defines “cubic” growth rate. | Itis generally found in an analysis of algorithms with3 nested loops. 
E.g. Floyd- Warshalt’s algorithm, Defines “exponential” growth rate. It characterizes (@) the algorithms that generate | all possible subsets of mi | sets. E.g. Travelling salesperson problem by | dynamic programming. 1 Defines “factorial” gro"! | ' rate. It characterizes bet algorithms that produ? permutations ofan teitem set. Eg. Travels | | salesman ae (sPPL force method. (SPPU.- New Sylabus w.e..academe year 22.29) (7-71) v & Tech-Neo Publications... SACHIN SHAH Design & Analysis of Algorithms (SPPU-Sem.7-Comp) (Analysis of Algorithms & Complexity Theory)...Page No. (2-9) 3 ASYMPTOTIC NOTATIONS ‘Omega(a) Explain Big’ OW(0), Theta (6) notations in detail along with sui Sa examples, (a), May 17,6 Marks I Define asymptotic notations. significance in analyzing algorithms. Explain Asymptotic notations with example. EDERAL | Define asymptotic notation, What is their's significance in analyzing algorithms? Explain Big oh, , ‘Omega and Theta notations. aged RECOM Explain their Ua. “Write short note on: () Pcless and NPclass : HI SPPU - Q. 4(a), Doc 19, 8 Marks) Big oh and thet © To compare two algorithms with reference to their growth rates, we use notations called “asymptotic notations”. © The asymptotic notation describes the “asymptotic efficiency” of an algorithm. Y_2.3.1 Types of Asymptotic Notations (1) Big “Ob” (©) (2) Big Omega (2) (3). Theta (@) (6) Little “On” (0) (5) Little Omega (@) (1) Big “Oh” (0) Let h(n) and k(n) are two functions. The function hn) = O(K(n)) iff there exist some positive constants € and ng such that h(n) Sc * k(n), Wn 2 Mo- Here, k(n) is having the same or higher growth rate than h(n). So it gives the upper bound on h(n). (waFig. 2.3.1(a) : Upper Bound(O-Notation) [= Example (The function Sn + 16 # O(1) as there are no positive constants ¢ and ng so that Sn + 16S¢-1Vn2n9. +The function Sn + 16 = O(n) as Sn + 16 $21 n forall nz 1.Here np=1 and = 21 «The function Sn + 16 = O(n") as Sn + 16 $7n” for all nz 2. 
Here np=2and¢=7, @ ‘The function Sn + 16 = O(n") as Sn + 16 Sn" for all 124, Here n= 4 ande=1. «The function Sn + 16 = 0(2") as Sn + 16 $2" for all nz 6. Here np =6 ande=1. «Even though Sn + 16 can be expressed as O(n), O(n"). (O(n), 02"), we must select the Least Upper Bound. Since, O(1) < O(n) $ O(n") < Om’) $02") for sufficient large value of n, Sn + 16 = O(n) is the most appropriate expression. SF Theorem 2.3.1 I h(a) = aye ay n+ agn’ +......+ qn” then h(n) = O(n"), where ag, ay, a2, and ay,> 0. » My are constants Proot * Toprove h(n) = O(n™) we check the inequality, Stag are constants and ap > 0. hin) < aptayntagn' +. Where ap, ay dy. ec h(n) < lan! Bn (SPPU - New Syllabus w.e.f academe yeat 22-28) (P7-71) ‘Tech-Neo Publiations...A SACHIN SHAH Venture Design & Analysis of Algorithms (SPPU-Sem.7-Comp) ( ™ hie) < Eby ™ iz0 m ho) ¢ Dba vnet iso h(n) = O(n"), ¥n2 1 and assuming m is constant. 2 + Thus, itis proved that if h(n) = ag# ay n+ ag n+ an” then h(n) = O(n”), where ap, ay, ay, constants and 2,;>0. (2) Big Omega (2) ‘Let h(n) and k(n) are two functions. The function h(n) = Q¢k(n)) iff there exists some positive constants and np such that h(n) 2 © * k(n), Vin np, Here k(n) is having either the same or lower growth Tate than h(n). So it defines the lower bound on h(n). me in) = 24K(n)) (143}Fig. 2.3.1(b) : Lower Bound (Q-Notation) SS Examples (The function 5n + 16 = Q(n) as 5n + 16 > n for all 21. Here ng = 1 and c= 1. [It holds for n = 0, but as per definition ng 20} (ii) The function 8n° + Sn + 16 = (1) as Bn? +504 1621 for alln2 1. Here n= 1 ande = 1, The function. 8n” + 5 + 16 = O(n) as Bn? +50 *1GEa egal nek Heteag= Lande =1, The function Bn" + Sn + 16 = Qa") a8 Bn? + Sn +162n° forall n> 1, Here no = 1 andc=1, (SPPU - New Sylabus w.e academe year 22-23) (7-71) + Even though 8n” + Sn $16 can be expeg (1), Qn) and QM"). 1) is nop Ba informative as a lower bound so it is never 4 though it is valid. 
We must select the Hii Lower Bound, Sine (1) 5 O(n) < 260° ft 8 +5n4+16= An"). © Theorem 2.3.2 soma m If h(n) = agt ayn tagn +e... + ayn” then h(n) = Qin"), where ag, Ay, Az, «+4 Ay, are constants and a,>0. : Proof * Toprove h(n) = Q(n") we check the inequality, h(n) > agtayntagn’+......+aqn™ Where ap, a1, 295... dp are constants and ap > 0. h(n) > lain! wha) > D baler’ izO m eho) 2 Z lai, vn21 i=0 ©. h(n) = O(n"), Vn2 1 and assuming mis constant, © Thus, it is proved that if h(n) = agt ay n+ ajo" +e git” then h(n) = Q(n™), where ap, ay, ap, 5 4, are constants and a,,>0. (3) Theta (8) Let h(n) and K(n) are two functions. The funeoe h(n) = © (K(n)) iff there exists some positive const 1, ¢2 and np such’that c) * k(n) < h(n) < cy * Kit Vn2no, In other words, the function h(n) = © (k(n)) iff h(o)=? (K(n)) and h(n) = © (K(n)), V n> ng. Here k(n) des both an upper and lower bound on h(n). It defies? tight bound on h(n), Wile. ed Design & Analysis of Algorithms (SPPU-Sem.7-Comp) 7 io) = etki) (wag. 23.1(6) : Tight Bound (@ -Notation) & Examples (The function Sn + 16 # (1) as 5n + 16 # O(1) though, Sn+ 16 = Q(1). Gi) The function Sn + 16 = O(n) as Sn + 16 < 2in for all, 21 and Sn +1620 for all n2 1.Here, c} = 1, ¢ = 21 and np = 1. = Theorem 2.3.3 IE N(a) = apt ay n+ ayn +e... ha gn™ then h(n) = O(n"), where ap, a1, 2, «+» My are constants and a,> 0. Proof ‘+ By the definition of an asymptotic notation ©, to prove h(n) = @(n") we must prove that h(n) = O(n™) and h(n) = 20"), © Referring to Theorem 2.3.1 we can show h(n) = O(n"). © Referring to Theorem 2.3.2 we can show h(n) = Q(n"). © As hin) = O(n") and h(n) = @ (n”) we have h(n) = © (n"). Thus, it is proved that if h(n) = ag+ a, n apn’ 4+......-+agn” then prove that h(n) = @(n", Where ap, ay, 2, . Bm are constants and a,,>0. 5 Theorem 2.3.4 If Let h(n) and k(n) be asymptotically nonnegative functions then h(n) + k(n) = ©(maximum (hin), g(n))). 
Proof :
• As h(n) and k(n) are asymptotically nonnegative functions, there exists n₀ such that h(n) ≥ 0 and k(n) ≥ 0 for all n ≥ n₀.
• Let f(n) = maximum(h(n), k(n)), i.e. f(n) = h(n) if h(n) ≥ k(n), and f(n) = k(n) if h(n) < k(n). Thus, for any n, f(n) is either h(n) or k(n).
• So we have 0 ≤ h(n) ≤ f(n) and 0 ≤ k(n) ≤ f(n) for all n ≥ n₀. Adding these two inequalities we get 0 ≤ h(n) + k(n) ≤ 2f(n), which shows that h(n) + k(n) = O(f(n)) = O(maximum(h(n), k(n))) for all n ≥ n₀, with c₂ = 2. ...(by the definition of Big-Oh)
• Similarly, for n ≥ n₀ we have h(n) + k(n) ≥ h(n) ≥ 0 and h(n) + k(n) ≥ k(n) ≥ 0, so h(n) + k(n) ≥ f(n) ≥ 0, which shows that h(n) + k(n) = Ω(f(n)) = Ω(maximum(h(n), k(n))) for all n ≥ n₀, with c₁ = 1. ...(by the definition of Big-Omega)
• As h(n) + k(n) = O(maximum(h(n), k(n))) and h(n) + k(n) = Ω(maximum(h(n), k(n))), we have h(n) + k(n) = Θ(f(n)) = Θ(maximum(h(n), k(n))) for all n ≥ n₀, with c₁ = 1 and c₂ = 2. ...(by the definition of Theta)

(4) Little 'oh' (o)
Let h(n) and k(n) be two functions. The function h(n) = o(k(n)) iff
    lim (n → ∞) h(n)/k(n) = 0.
Here the bound 0 ≤ h(n) < c · k(n) holds for every constant c > 0 and for all n ≥ n₀.

Examples
(i) The function 5n + 16 = o(n²), as lim (n → ∞) (5n + 16)/n² = 0.
(ii) The function 5n + 16 ≠ o(n), as lim (n → ∞) (5n + 16)/n = 5 ≠ 0.

(5) Little Omega (ω)
Let h(n) and k(n) be two functions. The function h(n) = ω(k(n)) iff
    lim (n → ∞) k(n)/h(n) = 0, equivalently lim (n → ∞) h(n)/k(n) = ∞.
Here the bound h(n) > c · k(n) ≥ 0 holds for every constant c > 0 and for all n ≥ n₀.

Example
(i) The function 8n² + 5n + 16 = ω(n), as lim (n → ∞) (8n² + 5n + 16)/n = ∞.
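The witness constants and limits used in the examples above can be spot-checked numerically. A small sketch (Python; the helper name is mine, and checking a finite range only suggests, never proves, an asymptotic bound):

```python
def holds_upper(h, k, c, n0, limit=10_000):
    """Check h(n) <= c*k(n) for n0 <= n < limit (a numeric spot-check only)."""
    return all(h(n) <= c * k(n) for n in range(n0, limit))

h = lambda n: 5 * n + 16
assert holds_upper(h, lambda n: n, c=21, n0=1)       # 5n+16 = O(n), c=21, n0=1
assert holds_upper(h, lambda n: n * n, c=7, n0=2)    # 5n+16 = O(n^2), c=7, n0=2
assert all(h(n) >= n for n in range(1, 10_000))      # 5n+16 = Omega(n), c=1

# Little-oh via the limit: (5n+16)/n^2 -> 0, consistent with 5n+16 = o(n^2)
ratios = [h(n) / (n * n) for n in (10, 10**3, 10**6)]
assert ratios[0] > ratios[1] > ratios[2]
assert ratios[-1] < 1e-4
```

Such checks catch mistaken witness pairs (c, n₀) quickly; the formal proofs above are still needed for the bounds to hold for all n ≥ n₀.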
v h(n) = Q(h(n)) then, k(n) + Q¢h(n)) if b)=OCK(M)) —_ h(n) =OCK(n)) and h(n) = OCK(n)) h(a) = Och(m)) |then Kin) OGM) | Kin) = OCKn)) iff k(n) = QCh(n)) => hin) = OU) 2 v X 7 7 Es. if (h(n) =A¢kM)),, [BC =QEK(H)) and _—_| h(a) = 2 (KM) k(n) Xin) => iff k(n) = O((h(n)) h(n) = 200m) 3. v v v v Es. hin) = @ck(n)) h(n) = ©(k(n)) “and [h(n = O(K(a)) h(n) = @ch(my) Jif kin) =Orniny) | K(n)'= Clen)) ‘ff k(n) = QCh(n)) and = h(n) = O(a) b(n) = 2(K(n)) iff_k(n) = O((h(n)) eS x x v v Es: if (a) = ofk(n)) h(n) = o(k(n)) A(n) = ofk(n)) n(n) # o(h(n)) then k(n) # o¢h(n)) k(n) = o(l(n)) iff k(n) = (h(n) =>_h(n) = o(((n)) Eo x x v v Eg. if hin) = ock(ny h(n) = a(k(n)) h(n) = w(k(n)) h(n) # w(h(n)) |then k(n) # w(h(n)) K(n) = «(l(n)) = bin) = ony) iff Kk (n) = o(h(n)) Design & Analysis of Algorithms (SPPU-Sem.7-Comp) (Analysis of Algorithms & Complexity Theory)...Page No. (2-13) “If h(@) = O(k(n) then a » hin) = O(k(n)) where a> 0. (n) = O(f(m) and k(n) = O(n) then (i) h(n) + k(n) = 0 (maximum (f(n), g(n))) (ii)_-hin) * k(n) = O(f(n) * gin)) tin hho) i noe Kin) > © =h(n) = o(kin)), 0= hin) = ofk(n), ie. h(n) has smaller growth rate than k(n). \(n) has larger growth rate than k(n). ¢>0 =>h(n) has the same growth rate as that of k(n). Ex, 2.3.1: Reorder from the smallest to the largest. Justify your answer. ( nlogyn, n4n4n’, 2, sqrt(n). a), 2", ogg n, loga n,n (ii) n logy n, n°, n°Mlogy n, (n= +1) Gv) nf,2°, (aeD)§, 2°70", a” Yi soin.: Letus take n = 1024 =2'° Note: Take any sufficiently large value of n. (10 Marks) @ nlogen, = 1024 log) 1024 =2"° x 10. nenen = 294242" 2° = constant sart(n) = 2" As 225 <2!x10 < 2042942” we get, 2's sqrt) Snloginsn+n'+n° w P= Oem 22 Qi, 2! x 1ogy 2° = 2! x 10 logan = Jog 2'°=10 Be Qa” As 2x 10<2.< 2 <2! 
we get, Seah logy nsnloggn Non-determintstic polynomial-time algorithms solution in polynomial time, then all NP-Class produce possible solutions to given problens in a | problems could get their solutions in polynomial time non-deterministic way and verify the correctness of ‘Thus, NP-Complete problems are the hardest problems those solutions in polynomial time. in NP-Class. : ‘© NP-Complete problems can have possible solutions + Ifsomebody tes us a solution to the problem, then we |" Guly by applying the non-deterministc algorithms and can verify it in polynomial time, Such problems are | their solutions can be verified in polynomial time, NP.Class problems. E.g., Sudoku puzzle, : Circuit-satistiability problem, —_vertex-cover oe ear in problem, clique decision problem, m-coloring + NP-Class problems are tractable or intractable. 1 problem, TSP decision problem. + As the definition of NP-Class problems applies to all | ¢ The relationship among P, NP, NP-Hard and -Class problems, P CNP; but itis an open question in NP-Complete problems (assuming P # NP) is depicted ‘computational complexity: does P = NP? in Fig. 25.1, -» All P-Class problems and NP-Complete’ problems are NP-Class problems, but its inverse is not true by assuming P # NP. © Some classic examples, of NP-Class. problems: O/1 knapsack problem, TSP, graph colouring. problem, vertex-cover problem, clique problem, 3. NP-Hard Class > A decision problem Py-is NP-Hard if each NP-Class problem is polynomially reducibie to Py (ie. for each problem P € NP, P20.P1). + Thus, it implies that an NP-Hard problem is at least as Complexity rataFig. 2.5.1 : Generally assumed rel ynship among P, hhard as the hardest NP-Class problem. NP, NP-Hard and NP-Complete problems + AILNP.Complete problems are NP-Hard problems, but all NP-Hard problems are not NP-Complete problems. | are * Though the name is NP-Hard, all problems in this class THEORY OF REDUCIBILITY do not belong to NP-Class. 
+ NP-Hard problems are decidable or undecidable. : e ar Sas IFA Sp 8 and if 8 can 2» © be solved in polynomial time then prove that A can + (@) Decidable NP-Hard problems: _vertex-cover | decision problem, clique decision problem, m- a Comment on the statement; “Two problems tt and | coloring decision problem, TSP decision problem. (©) Undecidable NP-Hard problems: Halting problem, 4. NP-Complete Class isan, has seh > A decision problem Ps is NP-Complcte if, nate on pobmom: P (@) Pj is an NP-Class problem (i.e. Py¢ NP) and Gil), Each NP-Class problem is polynomially reducible ‘Suppose a kid hus not yet learnt the procedure of toP, Ge, foreach problem P2e NP,PraPy). | "Wlplying two numbers, but he knows the procedure of (SPPU-- New Syllabus w.e. academe year 22-23) (P7-71) Tal roch-Noo Pubcatins..A SACHIN SHAH Venture
