
Jawaharlal Nehru Engineering College

Laboratory Manual

ADVANCED ALGORITHM

For

Final Year Students CSE Dept: Computer Science & Engineering (NBA Accredited)

 Author JNEC, Aurangabad


FOREWORD
It is my great pleasure to present this laboratory manual for Final Year engineering students for the subject of Advanced Algorithm, keeping in view the vast coverage required to develop analytical skills and to enable the students to design and analyze algorithms.

As a student, you may have many questions in your mind regarding the subject, and this manual is an attempt to answer them.

As you may be aware, MGM has already been awarded ISO 9000 certification, and it is our endeavour to technically equip our students by taking advantage of the procedural aspects of ISO 9000 certification.

Faculty members are also advised that covering these aspects at the initial stage itself will greatly relieve them in future, as much of the load will be taken care of by the enthusiasm and energy of the students once they are conceptually clear.

Dr. S.D.Deshmukh Principal


LABORATORY MANUAL CONTENTS
This manual is intended for the Final Year students of Computer Science and Engineering in the subject of Advanced Algorithm. It contains practical/lab sessions related to Advanced Algorithm, implemented in C, covering various aspects of the subject to enhance understanding.

As per the syllabus, along with the study of the Java language, we have made an effort to cover various aspects of Advanced Algorithm and the different techniques used to construct and understand its concepts.

Students are advised to go through this manual thoroughly rather than only the topics mentioned in the syllabus, as practical aspects are the key to understanding and conceptual visualization of the theoretical aspects covered in the books.

Good Luck for your Enjoyable Laboratory Sessions

Prof. D.S.Deshpande HOD, CSE

Ms. J.D.Pagare Lecturer, CSE Dept.


DOs and DON’Ts in Laboratory:

1. Make entry in the Log Book as soon as you enter the Laboratory.

2. All the students should sit according to their roll numbers starting from their left to right.

3. All the students are supposed to enter the terminal number in the log book.

4. Do not change the terminal on which you are working.

5. All the students are expected to have worked out at least the algorithm of the program/concept to be implemented.

6. Strictly observe the instructions given by the teacher/Lab Instructor.

Instructions for Laboratory Teachers:

1. Submission related to whatever lab work has been completed should be done during the next lab session, and immediate arrangements should be made for the printouts related to that submission on the day of the practical assignment.

2. Students should be taught to take printouts under the observation of the lab teacher.

3. The promptness of submission should be encouraged by way of marking and evaluation patterns that will benefit the sincere students.


SUBJECT INDEX
1. Program for Recursive Binary & Linear Search.
2. Program for Heap Sort.
3. Program for Merge Sort.
4. Program for Selection Sort.
5. Program for Insertion Sort.
6. Program for Quick Sort.
7. Program for FFT.
8. Study of NP-Complete theory.
9. Study of Cook's theorem.
10. Study of Sorting network.


1. Program to Perform Recursive Binary and Linear Search

Aim: Write a program to perform recursive binary and linear search.

ALGORITHM : BINARY SEARCH
Step 1: if (begin <= end) then go to Step 2, else the search is unsuccessful
Step 2: mid = (begin + end) / 2
Step 3: if (key element = a[mid]) then the search is successful, else go to Step 4
Step 4: if (k < a[mid]) then recursively call binary(begin, mid-1)   /* search the lower half of the array */
Step 5: if (k > a[mid]) then recursively call binary(mid+1, end)     /* search the upper half of the array */

THEORY: This technique is applied only if the items to be compared are in ascending or descending order. It uses the divide and conquer method.

EFFICIENCY ANALYSIS:
BEST CASE: 1. Occurs when the item to be searched is present in the middle of the array.
WORST CASE: log2 n. Occurs when the key to be searched is either at the first position or the last position.
AVERAGE CASE: log2 n.

PROGRAM FOR BINARY SEARCH :

#include <stdio.h>
#include <conio.h>

int a[20], n, k;                       /* variable declaration */
int bsearch(int begin, int end);       /* function declaration */

void main()
{
    int i, flag = 0;
    clrscr();
    printf(" \n Enter size of the array n : ");
    scanf("%d", &n);
    printf(" \n Enter elements of array in ascending order : ");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    printf(" \n Enter the key element : ");
    scanf("%d", &k);
    flag = bsearch(0, n - 1);
    if (flag == 1)
        printf(" \n Successful search , key element is present ");
    else
        printf(" \n Unsuccessful search ");
    getch();
}

int bsearch(int begin, int end)
{
    int mid;
    if (begin <= end)
    {
        mid = (begin + end) / 2;
        if (k == a[mid])
            return 1;
        if (k < a[mid])
            return bsearch(begin, mid - 1);
        if (k > a[mid])
            return bsearch(mid + 1, end);
    }
    return 0;
}

==========Input - Output=============
Enter size of the array n : 5
Enter elements of array in ascending order : 1 2 3 4 5
Enter the key element : 2
Successful search , key element is present

==========Input - Output=============
Enter size of the array n : 5
Enter elements of array in ascending order : 1 2 3 4 5
Enter the key element : 6
Unsuccessful search

LINEAR SEARCH

ALGORITHM :
Step 1: input an array with n number of elements
Step 2: input the key element
Step 3: if (key element = a[1]) then successful search, exit
Step 4: else go to Step 5
Step 5: recursively call the same algorithm for the rest n-1 elements

THEORY: It is also called SEQUENTIAL SEARCH. In this technique, the search key is compared with each item sequentially, one after another. If the search key is found, the position of the search key is returned.

EFFICIENCY ANALYSIS:
BEST CASE: SUCCESSFUL SEARCH: 1. Occurs when the key to be searched is the first element.
WORST CASE: n. Occurs if the key is either the last element or it is not present in the table.
AVERAGE CASE: n. UNSUCCESSFUL SEARCH: n. Occurs when the element to be searched is not present in the table.

PROGRAM FOR LINEAR SEARCH :

#include <stdio.h>
#include <conio.h>

int RLSearch(int a[], int low, int high, int key)
{
    if (low > high)
        return -1;
    if (key == a[low])
        return low;
    else
        return RLSearch(a, low + 1, high, key);
}

void main()
{
    int n, a[20], key, k, pos;
    clrscr();
    printf("\n Enter How many Numbers : ");
    scanf("%d", &n);
    printf("\n Enter %d Numbers : \n ", n);
    for (k = 1; k <= n; k++)
        scanf("%d", &a[k]);
    printf("\n Enter the Key to be Searched : ");
    scanf("%d", &key);
    pos = RLSearch(a, 1, n, key);
    if (pos == -1)
        printf("\n Key Not Found");
    else
        printf("\n Key %d Found in position %d", key, pos);
    getch();
}

==========Input - Output=============
Enter How many Numbers : 5
Enter 5 Numbers : 1 2 3 4 5
Enter the Key to be Searched : 2
Key 2 Found in position 2

==========Input - Output=============
Enter How many Numbers : 5
Enter 5 Numbers : 1 2 3 4 5
Enter the Key to be Searched : 6
Key Not Found

Conclusion: Thus we have performed recursive Binary and Linear search.

2. Program to Perform Heap Sort

Aim: Sort a given set of elements using the Heap Sort method.

ALGORITHM :
/* Constructs a heap from the elements of a given array */
/* Input : An array H[1..n] of orderable items */
/* Output : A heap H[1..n] */
Step 1: for i = n/2 downto 1 do Step 2
Step 2: k = i, v = H[k] and heap = false
Step 3: while not heap and 2 * k <= n do
Step 4:     j = 2 * k
Step 5:     if j < n                      // there are two children
Step 6:         if H[j] < H[j+1] then j = j + 1
Step 7:     if v >= H[j] then heap = true
Step 8:     else go to Step 9
Step 9:     H[k] = H[j]; k = j
Step 10: H[k] = v

THEORY: This sorting technique uses a heap to arrange numbers in ascending or descending order. It uses the TRANSFORM AND CONQUER method. It has 2 phases:
1. Heap creation phase: here the unsorted array is transformed into a heap.
2. Sorting phase: here the items are arranged in ascending order (if we use a max heap) or descending order (if we use a min heap).
To arrange in ascending order we use the bottom-up approach to create a max heap, and to arrange in descending order we use the top-down approach with a min heap.

EFFICIENCY ANALYSIS:
Sorting using bottom-up approach: n log2 n.
Sorting using top-down approach: n log2 n.

PROGRAM :

#include <stdio.h>
#include <conio.h>
#define max 100

void heapify();
void heapsort();
int maxdel();

int a[max], b[max];
int m, n;

void main()
{
    int i;
    clrscr();
    printf(" \n Enter array size : ");
    scanf("%d", &n);
    printf(" \n Enter elements : \n ");
    for (i = 1; i <= n; i++)
        scanf("%d", &a[i]);
    m = n;
    heapify();
    heapsort();
    printf(" \n The sorted array is : \n ");
    for (i = 1; i <= m; i++)
        printf("\n%d", b[i]);
    getch();
}

void heapify()
{
    int i, j, e;
    /* start from the middle of a and move up to 1 */
    for (i = n / 2; i >= 1; i--)
    {
        e = a[i];                       /* save root of subtree */
        j = 2 * i;                      /* left child; j+1 is right child */
        while (j <= n)
        {
            if (j < n && a[j] < a[j+1])
                j++;                    /* pick larger of children */
            if (e >= a[j])
                break;                  /* is a max heap */
            a[j/2] = a[j];
            j = j * 2;                  /* go to the next level */
        }
        a[j/2] = e;
    }
}

int maxdel()
{
    int x, e, i, j;
    if (n == 0)
        return -1;
    x = a[1];                           /* save the maximum element */
    e = a[n];                           /* get the last element */
    n--;
    /* heapify the structure again */
    i = 1;
    j = 2;
    while (j <= n)
    {
        if (j < n && a[j] < a[j+1])
            j++;                        /* pick larger of two children */
        if (e >= a[j])
            break;                      /* subtree is a heap */
        a[i] = a[j];
        i = j;
        j = j * 2;                      /* go to the next level */
    }
    a[i] = e;
    return x;
}

void heapsort()
{
    int i;
    for (i = n; i >= 1; i--)
        b[i] = maxdel();
}

=============Input - Output============
Enter array size : 5
Enter elements :
7 1 9 3 5
The sorted array is :
1 3 5 7 9

Conclusion: Thus we have performed Heap Sort.

3. Program to Perform Merge Sort

Aim: Sort a given set of elements using Merge Sort.

ALGORITHM :
Mergesort ( A[0...n-1] )
// Sorts array A[0...n-1] in nondecreasing order by recursive mergesort
// Input: An array A[0...n-1] of orderable elements
// Output: Array A[0...n-1] sorted in nondecreasing order
Step 1: if n > 1 then go through the following steps
Step 2: copy A[0...(n/2)-1] to B[0...(n/2)-1]
Step 3: copy A[(n/2)...n-1] to C[0...(n/2)-1]
Step 4: Mergesort( B[0...(n/2)-1] )
Step 5: Mergesort( C[0...(n/2)-1] )
Step 6: Merge( B, C, A )

Merge ( B[0...p-1], C[0...q-1], A[0...p+q-1] )
// Merges two sorted arrays into one sorted array
// Input: Arrays B[0...p-1] and C[0...q-1], both sorted
// Output: Sorted array A[0...p+q-1] of the elements of B and C
Step 1: i = 0
Step 2: j = 0
Step 3: k = 0
Step 4: while i < p and j < q do {
Step 5:     if B[i] <= C[j] {
Step 6:         A[k] = B[i]
Step 7:         i = i + 1 }             /* end if */
Step 8:     else {
Step 9:         A[k] = C[j]
Step 10:        j = j + 1 }             /* end if */
Step 11:    k = k + 1 }                 /* end while */
Step 12: if i = p then copy C[j...q-1] to A[k...p+q-1]
Step 13: else copy B[i...p-1] to A[k...p+q-1]

THEORY: The concept used here is divide and conquer. The steps involved are:
1. DIVIDE: divide the given array consisting of n elements into 2 parts of n/2 elements each.
2. CONQUER: sort the left part and the right part of the array recursively using merge sort.
3. COMBINE: merge the sorted left part and sorted right part to get a single sorted array.
The key operation in merge sort is combining the sorted left part and sorted right part into a single sorted array.

EFFICIENCY ANALYSIS:
Time complexity using Master theorem: n log2 n.
Time complexity using Substitution method: n log2 n.

PROGRAM :

#include <stdio.h>
#include <conio.h>

int b[20];                               /* auxiliary array used by Merge */

void Merge(int a[], int low, int mid, int high)
{
    int i, j, k;
    i = low;
    j = mid + 1;
    k = low;
    while (i <= mid && j <= high)
    {
        if (a[i] <= a[j])
            b[k++] = a[i++];
        else
            b[k++] = a[j++];
    }
    while (i <= mid)
        b[k++] = a[i++];
    while (j <= high)
        b[k++] = a[j++];
    for (k = low; k <= high; k++)
        a[k] = b[k];
}

void MergeSort(int a[], int low, int high)
{
    int mid;
    if (low >= high)
        return;
    mid = (low + high) / 2;
    MergeSort(a, low, mid);
    MergeSort(a, mid + 1, high);
    Merge(a, low, mid, high);
}

void main()
{
    int n, a[20], k;
    clrscr();
    printf("\n Enter How many Numbers : ");
    scanf("%d", &n);
    printf("\n Enter %d Numbers : \n ", n);
    for (k = 1; k <= n; k++)
        scanf("%d", &a[k]);
    MergeSort(a, 1, n);
    printf("\n Sorted Numbers are : \n ");
    for (k = 1; k <= n; k++)
        printf("%5d", a[k]);
    getch();
}

================Input - Output===============
Enter How many Numbers : 5
Enter 5 Numbers : 99 67 85 12 97
Sorted Numbers are : 12 67 85 97 99

Conclusion: Thus we have performed Merge Sort.

4. Program to Perform Selection Sort

Aim: Sort a given set of elements using Selection Sort.

ALGORITHM :
Selectionsort ( A[0...n-1] )
// The algorithm sorts a given array by selection sort
// Input: An array A[0...n-1] of orderable elements
// Output: Array A[0...n-1] sorted in ascending order
Step 1: for i = 0 to n-2 do
Step 2:     min = i
Step 3:     for j = i+1 to n-1 do
Step 4:         if A[j] < A[min] then go to Step 5
Step 5:             min = j
            // end for
Step 6:     swap A[i] and A[min]
// end for

THEORY: This uses the brute force method for sorting. In this method, obtain the first smallest number and exchange it with the element in the first position. In the second pass, obtain the second smallest number and exchange it with the second element. Even though the time complexity of selection sort is n2, selection sort requires only n-1 exchanges (each pass requires one exchange), whereas bubble sort requires (n-1)n/2 exchanges in the worst case. This property distinguishes selection sort from other algorithms.

EFFICIENCY ANALYSIS:
Time complexity: n2, both in the worst case and in the best case.

PROGRAM :

#include <stdio.h>
#include <conio.h>

int a[20], n;

void selectionsort();

void main()
{
    int i;
    clrscr();
    printf(" \n Enter size of the array : ");
    scanf("%d", &n);
    printf(" \n Enter the elements : \n ");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    selectionsort();
    printf(" \n The sorted elements are : ");
    for (i = 0; i < n; i++)
        printf("\n%d", a[i]);
    getch();
}

void selectionsort()
{
    int i, j, min, temp;
    for (i = 0; i < n - 1; i++)
    {
        min = i;
        for (j = i + 1; j < n; j++)
        {
            if (a[j] < a[min])
                min = j;
        }
        temp = a[i];
        a[i] = a[min];
        a[min] = temp;
    }
}

==============Input - Output=============
Enter size of the array : 5
Enter the elements : 7 1 9 3 5
The sorted elements are : 1 3 5 7 9

Conclusion: Thus we have performed Selection Sort.

5. Program to Perform Insertion Sort

Aim: Sort a given set of elements using the Insertion Sort method.

ALGORITHM :
Insertionsort ( A[0...n-1] )
// Input: An array A[0...n-1] of orderable elements
// Output: Array A[0...n-1] sorted in increasing order
Step 1: for i = 1 to n-1 do {
Step 2:     v = A[i]
Step 3:     j = i - 1
Step 4:     while j >= 0 and A[j] > v do {
Step 5:         A[j+1] = A[j]
Step 6:         j = j - 1 }             // end while
Step 7:     A[j+1] = v }                // end for

THEORY: This uses the decrease and conquer method to sort a list of elements. Consider an array of n elements to sort. We assume that a[i] is the item to be inserted and assign it to item. Compare the item with the elements from position (i-1) down to 0 and insert it into the appropriate place. This sorting technique is very efficient if the elements to be sorted are partially arranged in ascending order.

EFFICIENCY ANALYSIS:
BEST CASE: n. Occurs when the elements are already sorted.
WORST CASE: n2. Occurs when the elements are sorted in descending order.
AVERAGE CASE: n2.

PROGRAM :

#include <stdio.h>
#include <conio.h>

int a[20], n;

void insertionsort();

void main()
{
    int i;
    clrscr();
    printf(" \n Enter size of the array : ");
    scanf("%d", &n);
    printf(" \n Enter the elements : \n ");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    insertionsort();
    printf(" \n The sorted elements are : ");
    for (i = 0; i < n; i++)
        printf("\n%d", a[i]);
    getch();
}

void insertionsort()
{
    int i, j, min;
    for (i = 1; i < n; i++)
    {
        min = a[i];                     /* item to be inserted */
        j = i - 1;
        while (j >= 0 && a[j] > min)
        {
            a[j + 1] = a[j];
            j = j - 1;
        }
        a[j + 1] = min;
    }
}

=============Input - Output=============
Enter size of the array : 5
Enter the elements : 9 2 8 5 1
The sorted elements are : 1 2 5 8 9

Conclusion: Thus we have performed Insertion Sort.

6. Program to Perform Quick Sort

Aim: Sort a given set of elements using the Quick Sort method.

ALGORITHM :
Quicksort ( A[l...r] )
// Sorts a subarray by quicksort
// Input: A subarray A[l...r] of A[0...n-1], defined by its left and right indices l and r
// Output: The subarray A[l...r] sorted in increasing order
Step 1: if l < r then {
Step 2:     s = Partition( A[l...r] )          // s is a split position
Step 3:     Quicksort( A[l...s-1] )
Step 4:     Quicksort( A[s+1...r] ) }

Partition ( A[l...r] )
// Partitions a subarray by using its first element as a pivot
// Input: A subarray A[l...r] of A[0...n-1], defined by its left and right indices l and r (l < r)
// Output: A partition of A[l...r], with the split position returned as this function's value
Step 1: p = A[l]
Step 2: i = l
Step 3: j = r + 1
Step 4: repeat
Step 5:     repeat i = i + 1 until A[i] >= p
Step 6:     repeat j = j - 1 until A[j] <= p
Step 7:     swap( A[i], A[j] )
Step 8: until i >= j
Step 9: swap( A[i], A[j] )                     // undo last swap when i >= j
Step 10: swap( A[l], A[j] )
Step 11: return j

THEORY: This works on the divide and conquer technique. The first step in this technique is to partition the given table into 2 subtables such that the elements towards the left of the key element are less than the key element and the elements towards the right are greater than the key element. Then the left and right subtables are sorted individually and recursively. This works well on a large set of data.

EFFICIENCY ANALYSIS:
BEST CASE: n log2 n.
AVERAGE CASE: n log2 n.
WORST CASE: n2. Occurs when, at each invocation of the procedure, the current array is partitioned into 2 subarrays with one of them being empty.

PROGRAM :

#include <stdio.h>
#include <conio.h>

void Exch(int *p, int *q)
{
    int temp = *p;
    *p = *q;
    *q = temp;
}

void QuickSort(int a[], int low, int high)
{
    int i, j, key;
    if (low >= high)
        return;
    key = low;                          /* first element is the pivot */
    i = low + 1;
    j = high;
    while (i <= j)
    {
        while (i <= high && a[i] <= a[key])
            i = i + 1;
        while (a[j] > a[key])
            j = j - 1;
        if (i < j)
            Exch(&a[i], &a[j]);
    }
    Exch(&a[j], &a[key]);               /* place the pivot at its split position */
    QuickSort(a, low, j - 1);
    QuickSort(a, j + 1, high);
}

void main()
{
    int n, a[20], k;
    clrscr();
    printf("\n Enter How many Numbers : ");
    scanf("%d", &n);
    printf("\n Enter %d Numbers : \n ", n);
    for (k = 1; k <= n; k++)
        scanf("%d", &a[k]);
    QuickSort(a, 1, n);
    printf("\n Sorted Numbers are : \n ");
    for (k = 1; k <= n; k++)
        printf("%5d", a[k]);
    getch();
}

===============Input - Output=============
Enter How many Numbers : 10
Enter 10 Numbers : 4 2 1 9 8 3 5 7 10 6
Sorted Numbers are : 1 2 3 4 5 6 7 8 9 10

Conclusion: Thus we have performed Quick Sort.

7. Program to Perform Fast Fourier Transform

Aim: Write a program to perform the fast Fourier transform.

ALGORITHM:

Algorithm FFT(N, a(x), w, A)
// N = 2^m; A[0:N-1] is set to the values a(w^j), 0 <= j <= N-1.
// a(x) = a(N-1)x^(N-1) + ... + a0, and w is a primitive Nth root of unity.
// B, C, and wp are complex arrays.
{
  if N = 1 then A[0] := a0
  else
  {
    n := N/2;
    b(x) := a(N-2)x^(n-1) + ... + a2x + a0;     // even-index coefficients
    c(x) := a(N-1)x^(n-1) + ... + a3x + a1;     // odd-index coefficients
    FFT(n, b(x), w^2, B);
    FFT(n, c(x), w^2, C);
    wp[-1] := 1/w;
    for j := 0 to n-1 do
    {
      wp[j]  := w * wp[j-1];
      A[j]   := B[j] + wp[j]*C[j];
      A[j+n] := B[j] - wp[j]*C[j];
    }
  }
}

THEORY: A fast Fourier transform (FFT) is an efficient algorithm to compute the discrete Fourier transform (DFT) and its inverse. A DFT decomposes a sequence of values into components of different frequencies. This operation is useful in many fields, but computing it directly from the definition is often too slow to be practical. An FFT is a way to compute the same result more quickly: computing a DFT of N points in the obvious way, using the definition, takes O(N^2) arithmetical operations, while an FFT can compute the same result in only O(N log N) operations. The difference in speed can be substantial, especially for long data sets where N may be in the thousands or millions; in practice, the computation time can be reduced by several orders of magnitude in such cases, and the improvement is roughly proportional to N/log(N). This huge improvement made many DFT-based algorithms practical. FFTs are of great importance to a wide variety of applications, from digital signal processing and solving partial differential equations to algorithms for quick multiplication of large integers.

There are many distinct FFT algorithms involving a wide range of mathematics, from simple complex-number arithmetic to group theory and number theory. The most well known FFT algorithms depend upon the factorization of N, but (contrary to popular misconception) there are FFTs with O(N log N) complexity for all N, even for prime N. Many FFT algorithms only depend on the fact that w is an Nth primitive root of unity, and thus can be applied to analogous transforms over any finite field, such as number-theoretic transforms. Since the inverse DFT is the same as the DFT, but with the opposite sign in the exponent and a 1/N factor, any FFT algorithm can easily be adapted for it. A sample run using an 8-point data.txt file is summarised at the end of the program listing below.

PROGRAM:

/*----------------------------------------------------------------------------
 fft.c - fast Fourier transform and its inverse (both recursively)

 This file defines a C function fft that calculates an FFT recursively by
 calling another function fft_rec (also defined below).
 Usage: fft(N, x, X)
 Parameters:
   N: number of points in FFT (must equal 2^n for some integer n >= 1)
   x: pointer to N time-domain samples given in rectangular form (Re x, Im x)
   X: pointer to N frequency-domain samples calculated in rectangular form
      (Re X, Im X)
 Similarly, a function ifft with the same parameters is defined that
 calculates an inverse FFT (IFFT) recursively. Usage: ifft(N, X, x).
 Here, N and X are given, and x is calculated.
----------------------------------------------------------------------------*/
#include <stdlib.h>
#include <math.h>

/* macros */
#define TWO_PI (6.2831853071795864769252867665590057683943L)

/* function prototypes */
void fft(int N, double (*x)[2], double (*X)[2]);
void fft_rec(int N, int offset, int delta,
             double (*x)[2], double (*X)[2], double (*XX)[2]);
void ifft(int N, double (*X)[2], double (*x)[2]);

/* FFT */
void fft(int N, double (*x)[2], double (*X)[2])
{
    /* Declare a pointer to scratch space. */
    double (*XX)[2] = malloc(2 * N * sizeof(double));

    /* Calculate FFT by a recursion. */
    fft_rec(N, 0, 1, x, X, XX);

    /* Free memory. */
    free(XX);
}

/* FFT recursion */
void fft_rec(int N, int offset, int delta,
             double (*x)[2], double (*X)[2], double (*XX)[2])
{
    int N2 = N / 2;               /* half the number of points in FFT */
    int k;                        /* generic index */
    double cs, sn;                /* cosine and sine */
    int k00, k01, k10, k11;       /* indices for butterflies */
    double tmp0, tmp1;            /* temporary storage */

    if (N != 2)                   /* Perform recursive step. */
    {
        /* Calculate two (N/2)-point DFT's. */
        fft_rec(N2, offset, 2 * delta, x, XX, X);
        fft_rec(N2, offset + delta, 2 * delta, x, XX, X);

        /* Combine the two (N/2)-point DFT's into one N-point DFT. */
        for (k = 0; k < N2; k++)
        {
            k00 = offset + k * delta;       k01 = k00 + N2 * delta;
            k10 = offset + 2 * k * delta;   k11 = k10 + delta;
            cs = cos(TWO_PI * k / (double)N);
            sn = sin(TWO_PI * k / (double)N);
            tmp0 = cs * XX[k11][0] + sn * XX[k11][1];
            tmp1 = cs * XX[k11][1] - sn * XX[k11][0];
            X[k00][0] = XX[k10][0] + tmp0;
            X[k00][1] = XX[k10][1] + tmp1;
            X[k01][0] = XX[k10][0] - tmp0;
            X[k01][1] = XX[k10][1] - tmp1;
        }
    }
    else                          /* Perform 2-point DFT. */
    {
        k00 = offset;
        k01 = k00 + delta;
        X[k00][0] = x[k00][0] + x[k01][0];
        X[k00][1] = x[k00][1] + x[k01][1];
        X[k01][0] = x[k00][0] - x[k01][0];
        X[k01][1] = x[k00][1] - x[k01][1];
    }
}

/* IFFT */
void ifft(int N, double (*X)[2], double (*x)[2])
{
    int N2 = N / 2;               /* half the number of points in IFFT */
    int i;                        /* generic index */
    double tmp0, tmp1;            /* temporary storage */

    /* Calculate IFFT via reciprocity property of DFT. */
    fft(N, X, x);
    x[0][0]  = x[0][0] / N;   x[0][1]  = x[0][1] / N;
    x[N2][0] = x[N2][0] / N;  x[N2][1] = x[N2][1] / N;
    for (i = 1; i < N2; i++)
    {
        tmp0 = x[i][0] / N;          tmp1 = x[i][1] / N;
        x[i][0] = x[N - i][0] / N;   x[i][1] = x[N - i][1] / N;
        x[N - i][0] = tmp0;          x[N - i][1] = tmp1;
    }
}

/******************************************************************************
 * Demonstration program (a separate file that includes fft.c).              *
 * First, N complex-valued time-domain samples x, in rectangular form        *
 * (Re x, Im x), are read from a specified file; the 2N values are assumed   *
 * to be separated by whitespace. Next, an N-point FFT of these samples is   *
 * found by calling the function fft, yielding N complex-valued frequency-   *
 * domain samples X in rectangular form (Re X, Im X). Then an N-point IFFT   *
 * of these samples is found by calling the function ifft, thereby           *
 * recovering the original samples x. Finally, the calculated samples X are  *
 * saved to a specified file, if desired.                                     *
 ******************************************************************************/
#include <stdio.h>
#include <stdlib.h>
#include "fft.c"

int main()
{
    int i;                        /* generic index */
    char file[FILENAME_MAX];      /* name of data file */
    int N;                        /* number of points in FFT */
    double (*x)[2];               /* pointer to time-domain samples */
    double (*X)[2];               /* pointer to frequency-domain samples */
    double dummy;                 /* scratch variable */
    FILE *fp;                     /* file pointer */

    /* Get name of input file of time-domain samples x. */
    printf("Input file for time-domain samples x(n)? ");
    scanf("%s", file);

    /* Read through entire file to get number N of points in FFT. */
    if (!(fp = fopen(file, "r")))
    {
        printf(" File \'%s\' could not be opened!", file);
        exit(EXIT_FAILURE);
    }
    N = 0;
    while (fscanf(fp, "%lg%lg", &dummy, &dummy) == 2) N++;
    printf("N = %d", N);

    /* Check that N = 2^n for some integer n >= 1. */
    if (N >= 2)
    {
        i = N;
        while (i == 2 * (i / 2)) i = i / 2;   /* While i is even, factor out a 2. */
    }                                         /* For N >= 2, N = 2^n iff i = 1. */
    if (N < 2 || i != 1)
    {
        printf(", which does not equal 2^n for an integer n >= 1.");
        exit(EXIT_FAILURE);
    }

    /* Allocate time- and frequency-domain memory. */
    x = malloc(2 * N * sizeof(double));
    X = malloc(2 * N * sizeof(double));

    /* Get time-domain samples. */
    rewind(fp);
    for (i = 0; i < N; i++)
        fscanf(fp, "%lg%lg", &x[i][0], &x[i][1]);
    fclose(fp);

    /* Calculate FFT. */
    fft(N, x, X);

    /* Print time-domain samples and resulting frequency-domain samples. */
    printf("\nx(n):");
    for (i = 0; i < N; i++)
        printf("\n   n=%d: %12f %12f", i, x[i][0], x[i][1]);
    printf("\nX(k):");
    for (i = 0; i < N; i++)
        printf("\n   k=%d: %12f %12f", i, X[i][0], X[i][1]);

    /* Clear time-domain samples and calculate IFFT. */
    for (i = 0; i < N; i++)
        x[i][0] = x[i][1] = 0;
    ifft(N, X, x);

    /* Print recovered time-domain samples. */
    printf("\nx(n):");
    for (i = 0; i < N; i++)
        printf("\n   n=%d: %12f %12f", i, x[i][0], x[i][1]);

    /* Write frequency-domain samples X to a file, if desired. */
    printf("\nOutput file for frequency-domain samples X(k)?"
           "\n (if none, abort program): ");
    scanf("%s", file);
    if (!(fp = fopen(file, "w")))
    {
        printf(" File \'%s\' could not be opened!", file);
        exit(EXIT_FAILURE);
    }
    for (i = 0; i < N; i++)
        fprintf(fp, "%23.15e %23.15e\n", X[i][0], X[i][1]);
    fclose(fp);
    printf("Samples X(k) were written to file %s.", file);

    /* Free memory. */
    free(x);
    free(X);
    return 0;
}

/*============================================================================
 * Program output (example, abridged): for an 8-point data.txt file the
 * program prints N = 8, the eight time-domain samples x(n), the computed
 * frequency-domain samples X(k), and the recovered samples x(n) after the
 * IFFT (identical to the input), and finally writes X(k) to the file X.txt.
 *===========================================================================*/

Conclusion: Thus we have performed the Fast Fourier Transform.

8. Study of NP-Complete Theory

Aim: To study NP-Complete theory

I. Introduction

- A problem is said to be polynomial if there exists an algorithm that solves the problem in time T(n) = O(n^c), where c is a constant.
- Examples of polynomial problems:
  o Sorting: O(n log n) = O(n^2)
  o All-pairs shortest path: O(n^3)
  o Minimum spanning tree: O(E log E) = O(E^2)
- A problem is said to be exponential if no polynomial-time algorithm can be developed for it and if we can find an algorithm that solves it in O(n^u(n)), where u(n) goes to infinity as n goes to infinity.
- The world of computation can be subdivided into three classes:
  1. Polynomial problems (P)
  2. Exponential problems (E)
  3. Intractable (non-computable) problems (I)
- There is a very large and important class of problems that
  1. we know how to solve exponentially,
  2. we don't know how to solve polynomially, and
  3. we don't know if they can be solved polynomially at all.
  This class is a gray area between the P-class and the E-class. It will be studied in this chapter.

II. Definition of NP

- Definition 1 of NP: A problem is said to be Nondeterministically Polynomial (NP) if we can find a nondeterministic Turing machine that can solve the problem in a polynomial number of nondeterministic moves.
- We use NP to designate the class of all nondeterministically polynomial problems.
- For those who are not familiar with Turing machines, two alternative definitions of NP will be developed.
- Definition 2 of NP: A problem is said to be NP if
  1. its solution comes from a finite set of possibilities, and
  2. it takes polynomial time to verify the correctness of a candidate solution.
- Remark: It is much easier and faster to "grade" a solution than to find a solution from scratch.
- Clearly, P is a subset of NP.
- A very famous open question in Computer Science:

P = NP ?

- To give the 3rd alternative definition of NP, we introduce an imaginary, nonimplementable instruction, which we call "choose()".
- Behavior of "choose()":
  1. if a problem has a solution of N components, choose(i) magically returns the i-th component of the CORRECT solution in constant time;
  2. if a problem has no solution, choose(i) returns mere "garbage", that is, it returns an uncertain value.
- An NP algorithm is an algorithm that has 2 stages:
  1. The first stage is a guessing stage that uses choose() to find a solution to the problem.
  2. The second stage checks the correctness of the solution produced by the first stage. The time of this stage is polynomial in the input size n.
- Template for an NP algorithm:

  begin
    /* The following for-loop is the guessing stage */
    for i=1 to N do
      X[i] := choose(i);
    endfor
    /* Next is the verification stage */
    Write code that does not use "choose" and that verifies if X[1:N] is
    a correct solution to the problem.
  end

- Remark: For the algorithm above to be polynomial, the solution size N must be polynomial in n, and the verification stage must be polynomial in n.
- Definition 3 of NP: A problem is said to be NP if there exists an NP algorithm for it.
- Example of an NP problem: The Hamiltonian Cycle (HC) problem
  1. Input: A graph G
  2. Question: Does G have a Hamiltonian Cycle?
- Here is an NP algorithm for the HC problem:

  begin
    /* The following for-loop is the guessing stage */
    for i=1 to n do
      X[i] := choose(i);
    endfor

    /* Next is the verification stage */
    for i=1 to n do
      for j=i+1 to n do
        if X[i] = X[j] then return(no); endif
      endfor
    endfor
    for i=1 to n-1 do
      if (X[i],X[i+1]) is not an edge then return(no); endif
    endfor
    if (X[n],X[1]) is not an edge then return(no); endif
    return(yes);
  end

- The solution size of HC is O(n), and the time of the verification stage is O(n^2). Therefore, HC is NP.
- The k-clique problem is NP:
  1. Input: A graph G and an integer k
  2. Question: Does G have a k-clique?
- Here is an NP algorithm for the k-clique problem:

  begin
    /* The following for-loop is the guessing stage */
    for i=1 to k do
      X[i] := choose(i);
    endfor
    /* Next is the verification stage */
    for i=1 to k do
      for j=i+1 to k do
        if (X[i] = X[j] or (X[i],X[j]) is not an edge) then return(no); endif
      endfor
    endfor
    return(yes);
  end

- The solution size of the k-clique problem is O(k) = O(n), and the time of the verification stage is O(n^2). Therefore, the k-clique problem is NP.
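To see that the verification stage really is an ordinary polynomial-time computation, the following small C sketch (our own illustration, not one of the prescribed lab programs) implements the deterministic verification stage of the k-clique algorithm above: given a graph as an adjacency matrix and a candidate solution X[1..k], it checks in O(k^2) time whether the chosen vertices are distinct and pairwise adjacent.

#include <stdio.h>

#define MAXN 20

/* Deterministic verification stage of the k-clique NP algorithm:
   returns 1 ("yes") if X[1..k] lists k distinct, pairwise adjacent
   vertices of the graph given by the adjacency matrix adj, else 0 ("no"). */
int verify_clique(int adj[MAXN][MAXN], int X[], int k)
{
    int i, j;
    for (i = 1; i <= k; i++)
        for (j = i + 1; j <= k; j++)
            if (X[i] == X[j] || !adj[X[i]][X[j]])
                return 0;               /* repeated vertex or missing edge */
    return 1;
}

int main(void)
{
    /* A 4-vertex example graph in which vertices 1, 2, 3 form a triangle. */
    int adj[MAXN][MAXN] = { 0 };
    int X[] = { 0, 1, 2, 3 };           /* candidate solution, 1-indexed */

    adj[1][2] = adj[2][1] = 1;
    adj[2][3] = adj[3][2] = 1;
    adj[1][3] = adj[3][1] = 1;
    adj[3][4] = adj[4][3] = 1;

    printf("Candidate {1,2,3} is %sa 3-clique\n",
           verify_clique(adj, X, 3) ? "" : "not ");
    return 0;
}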

III. 33 . The following are additional examples of well-known yes-no problems. the k-clique problem is NP. and the output of T is IR 3. and the time of the verification stage is O(n2). The subset-sum problem: o Instance: a real array a[1:n] o Question: Can the array be partitioned into two parts that add up to the same value? The satisfiability problem (SAT): o Instance: A Boolean Expression F o Question: Is there an assignment to the variables in F so that F evaluates to 1? The Treveling Salesman Problem The original formulation: o Instance: A weighted graph G o Question: Find a minimum-weight Hamiltonian Cycle in G. and so was the yesno version of the k-clique problem. Answer(QP. then o IP: denotes an instance of P o QP: denotes the question of P o Answer(QP.IP)=Answer(QR. Therefore. The yes-no version of the HC problem was described above. Focus on Yes-No Problems       Definition: A yes-no problem consists of an instance (or input I) and a yes-no question Q. The solution size of the k-clique is O(k)=O(n).IR) Definition: We say that problem problem P reduces to problem R if there exists a transform from P to R. The algorithm T takes polynomial time 2.IP): denotes the answer to the question QP given input IP Let P and R be two yes-no problems Definition: A transform (that transforms a problem P to a problem R) is an algorithm T such that: 1. Reductions and Transforms     Notation: If P stands for a yes-no problem. The input of T is IP. The yes-no formulation: o o Instance: A weighted graph G and a real number d Question: Does G have a Hamiltonian cycle of weight <= d? IV.

V. NP-Completeness

- Definition: A problem R is NP-complete if
  1. R is NP, and
  2. Every NP problem P reduces to R.
- An equivalent but casual definition: A problem R is NP-complete if R is the "most difficult" of all NP problems.
- Theorem: Let P and R be two problems. If P reduces to R and R is polynomial, then P is polynomial.
  Proof:
  o Let T be the transform that transforms P to R. T is a polynomial-time algorithm that transforms IP to IR such that Answer(QP,IP) = Answer(QR,IR).
  o Let AR be the polynomial-time algorithm for problem R. AR takes as input IR and returns as output Answer(QR,IR).
  o Design a new algorithm AP as follows:

    Algorithm AP(input: IP)
    begin
      IR := T(IP);
      x := AR(IR);
      return x;
    end

  o Note that this algorithm AP returns the correct answer Answer(QP,IP), because x = AR(IR) = Answer(QR,IR) = Answer(QP,IP).
  o Note also that the algorithm AP takes polynomial time because both T and AR take polynomial time.
  Q.E.D.
- The intuition derived from the previous theorem is that if a problem P reduces to problem R, then R is at least as difficult as P.
- Theorem: A problem R is NP-complete if
  1. R is NP, and
  2. There exists an NP-complete problem R0 that reduces to R.
  Proof:
  o Since R is NP, it remains to show that any arbitrary NP problem P reduces to R.
  o Let P be an arbitrary NP problem.
  o Since R0 is NP-complete, it follows that P reduces to R0.
  o And since R0 reduces to R, it follows that P reduces to R (by transitivity of transforms).
  Q.E.D.

- The previous theorem amounts to a strategy for proving new problems to be NP-complete. Specifically, to prove a new problem R to be NP-complete, the following steps are sufficient:
  1. Prove R to be NP.
  2. Find an already known NP-complete problem R0, and come up with a transform that reduces R0 to R.
- For this strategy to become effective, we need at least one NP-complete problem. This is provided by Cook's Theorem below.
- Cook's Theorem: SAT is NP-complete.

VI. NP-Completeness of the k-Clique Problem

- The k-clique problem was already shown to be NP. It remains to prove that an NP-complete problem reduces to k-clique.
- Theorem: SAT reduces to the k-clique problem.
- Proof:
  o Let F be a Boolean expression. F can be put into a conjunctive normal form: F = F1F2...Fr, where every factor Fi is a sum of literals (a literal is a Boolean variable or its complement).
  o Let k = r and G = (V,E) be defined as follows:
      V = { <xi,Fj> | xi is a variable in Fj }
      E = { (<xi,Fj>, <ys,Ft>) | j != t and xi != ys' }, where ys' is the complement of ys.
  o We prove first that if F is satisfiable, then there is a k-clique.
  o Assume F is satisfiable. This means that there is an assignment that makes F equal to 1, which implies that F1 = 1, F2 = 1, ..., Fr = 1.
  o Therefore, in every factor Fi there is (at least) one variable assigned 1. Call that variable zi.
  o As a result, <z1,F1>, <z2,F2>, ..., <zk,Fk> is a k-clique in G, because they are k distinct nodes, they come from the k different factors, and each pair (<zi,Fi>, <zj,Fj>) forms an edge, since the endpoints come from different factors and zi != zj' due to the fact that they are both assigned 1.
  o We finally prove that if G has a k-clique, then F is satisfiable.
  o Assume G has a k-clique <u1,F1>, <u2,F2>, ..., <uk,Fk>, which are pairwise adjacent. These k nodes come from the k different factors, because no two nodes from the same factor can be adjacent.
  o Furthermore, no two ui and uj are complements, because the two nodes <ui,Fi> and <uj,Fj> are adjacent, and adjacent nodes have non-complement first components.
  o As a result, we can consistently assign each ui the value 1. This assignment makes each Fi equal to 1, because ui is one of the additive literals in Fi.
  o Consequently, F is equal to 1, so F is satisfiable.
  Q.E.D.
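The transform in the proof above can also be written out as a small program. The following sketch is illustrative only (the encoding of the formula and the names are our own choices): given F in conjunctive normal form, it creates one vertex per occurrence of a literal in a factor and joins two vertices exactly when they come from different factors and are not complementary, so that F is satisfiable if and only if the resulting graph G has an r-clique.

#include <stdio.h>

#define MAXV 64

/* A literal is a signed variable index: +v means xv, -v means xv' (complement). */
void print_lit(int L)
{
    if (L > 0) printf("x%d", L);
    else       printf("x%d'", -L);
}

int main(void)
{
    /* Example: F = (x1 + x2')(x1' + x3)(x2 + x3'), i.e. r = 3 factors. */
    int r = 3;
    int nlits[] = { 2, 2, 2 };
    int lit[3][2] = { { 1, -2 }, { -1, 3 }, { 2, -3 } };

    int vfac[MAXV], vlit[MAXV];     /* factor index and literal of each vertex */
    int nv = 0, f, j, u, v;

    /* One vertex <literal, factor> per literal occurrence. */
    for (f = 0; f < r; f++)
        for (j = 0; j < nlits[f]; j++)
        {
            vfac[nv] = f;
            vlit[nv] = lit[f][j];
            nv++;
        }

    /* Edge iff the endpoints come from different factors and the literals
       are not complements of each other. */
    printf("F is satisfiable iff G has a %d-clique. Edges of G:\n", r);
    for (u = 0; u < nv; u++)
        for (v = u + 1; v < nv; v++)
            if (vfac[u] != vfac[v] && vlit[u] != -vlit[v])
            {
                printf("  ( <"); print_lit(vlit[u]); printf(",F%d> , <", vfac[u] + 1);
                print_lit(vlit[v]); printf(",F%d> )\n", vfac[v] + 1);
            }
    return 0;
}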

9. Study of Cook's Theorem

Aim: To Study Cook's Theorem

Cook's Theorem

Cook's Theorem states that any NP problem can be converted to SAT in polynomial time.

In order to prove this, we require a uniform way of representing NP problems. Remember that what makes a problem NP is the existence of a polynomial-time algorithm, more specifically a Turing machine, for checking candidate certificates. What Cook did was somewhat analogous to what Turing did when he showed that the Entscheidungsproblem was equivalent to the Halting Problem: he showed how to encode as Propositional Calculus clauses both the relevant facts about the problem instance and the Turing machine which does the certificate-checking, in such a way that the resulting set of clauses is satisfiable if and only if the original problem instance is positive. Thus the problem of determining the latter is reduced to the problem of determining the former.

Assume, then, that we are given an NP decision problem D. By the definition of NP, there is a polynomial function P and a Turing machine M which, when given any instance I of D, together with a candidate certificate c, will check in time no greater than P(n), where n is the length of I, whether or not c is a certificate of I.

Let us assume that M has q states numbered 0, 1, 2, ..., q-1, and a tape alphabet a1, a2, ..., as. We shall assume that the operation of the machine is governed by the functions T, U, and D, and that the initial tape is inscribed with the problem instance on the squares 1, 2, ..., n and the putative certificate on the squares -m, ..., -2, -1. Square zero can be assumed to contain a designated separator symbol. Note that we must have m <= P(n). We shall also assume that the machine halts scanning square 0, and that the symbol in this square at that stage will be a1 if and only if the candidate certificate is a true certificate. Note that, during this process, the Turing machine head cannot move more than P(n) steps to the left or to the right of its starting point, because with a problem instance of length n the computation is completed in at most P(n) steps.

We define some atomic propositions with their intended interpretations as follows:
1. For i = 0, 1, ..., P(n) and j = 0, 1, ..., q-1, the proposition Qij says that after i computation steps, M is in state j.
2. For i = 0, 1, ..., P(n), j = -P(n), ..., P(n), and k = 1, 2, ..., s, the proposition Sijk says that after i computation steps, square j of the tape contains the symbol ak.
3. For i = 0, 1, ..., P(n) and j = -P(n), ..., P(n), the proposition Tij says that after i computation steps, the machine M is scanning square j of the tape.

Next, we define some clauses to describe the computation executed by M:
1. At each computation step, M is in at least one state. For each i = 0, ..., P(n) we have the clause Qi0 v Qi1 v ... v Qi(q-1), giving (P(n)+1)q = O(P(n)) literals altogether.
2. At each computation step, M is in at most one state. For each i = 0, ..., P(n) and each distinct pair j, k of states, we have the clause ~(Qij & Qik), giving a total of q(q-1)(P(n)+1) = O(P(n)) literals altogether.
3. At each step, each tape square contains at least one alphabet symbol. For each i = 0, ..., P(n) and -P(n) <= j <= P(n), we have the clause Sij1 v Sij2 v ... v Sijs, giving (P(n)+1)(2P(n)+1)s = O(P(n)^2) literals altogether.
4. At each step, each tape square contains at most one alphabet symbol. For each i = 0, ..., P(n), -P(n) <= j <= P(n), and each distinct pair ak, al of symbols, we have the clause ~(Sijk & Sijl), giving a total of (P(n)+1)(2P(n)+1)s(s-1) = O(P(n)^2) literals altogether.
5. At each step, the tape is scanning at least one square. For each i = 0, ..., P(n), we have the clause Ti(-P(n)) v Ti(1-P(n)) v ... v Ti(P(n)-1) v TiP(n), giving (P(n)+1)(2P(n)+1) = O(P(n)^2) literals altogether.
6. At each step, the tape is scanning at most one square. For each i = 0, ..., P(n) and each distinct pair j, k of tape squares from -P(n) to P(n), we have the clause ~(Tij & Tik), giving a total of 2P(n)(2P(n)+1)(P(n)+1) = O(P(n)^3) literals.
7. Initially, the machine is in state 1 scanning square 1. This is expressed by the two clauses Q01 and T01, giving just two literals.
8. The configuration at each step after the first is determined from the configuration at the previous step by the functions T, U, and D defining the machine M. For each i = 0, ..., P(n), -P(n) <= j <= P(n), k = 0, ..., q-1, and l = 1, ..., s, we have the clauses
      Tij & Qik & Sijl -> Q(i+1)T(k,l)
      Tij & Qik & Sijl -> S(i+1)jU(k,l)
      Tij & Qik & Sijl -> T(i+1)(j+D(k,l))
      Sijk -> Tij v S(i+1)jk
   The fourth of these clauses ensures that the contents of any tape square other than the currently scanned square remain the same (to see this, note that the given clause is equivalent to the formula Sijk & ~Tij -> S(i+1)jk). These clauses contribute a total of (12s+3)(P(n)+1)(2P(n)+1)q = O(P(n)^2) literals.
9. Initially, the string ai1 ai2 ... ain defining the problem instance I is inscribed on squares 1, 2, ..., n of the tape. This is expressed by the n clauses S01i1, S02i2, ..., S0nin, a total of n literals.
10. By the P(n)th step, the machine has reached the halt state, and is then scanning square 0, which contains the symbol a1. This is expressed by the three clauses QP(n)0, TP(n)0 and SP(n)01, giving another 3 literals.

Altogether the number of literals involved in these clauses is O(P(n)^3). (In working this out, note that q and s are constants: they depend only on the machine and do not vary with the problem instance, so they do not contribute to the growth of the number of literals with increasing problem size.) It is thus clear that the procedure for setting up these clauses, given the original machine M and the instance I of problem D, can be accomplished in polynomial time.

We must now show that we have succeeded in converting D into SAT. Suppose first that I is a positive instance of D. This means that there is a certificate c such that when M is run with inputs c and I, it will halt scanning symbol a1 on square 0. This means that there is some sequence of symbols that can be placed initially on squares -P(n), ..., -1 of the tape so that all the clauses above are satisfied. Hence those clauses constitute a positive instance of SAT. Conversely, suppose I is a negative instance of D. In that case there is no certificate for I, which means that whatever symbols are placed on squares -P(n), ..., -1 of the tape, when the computation halts the machine will not be scanning a1 on square 0. This means that the set of clauses above is not satisfiable, and hence constitutes a negative instance of SAT.

Thus from the instance I of problem D we have constructed, in polynomial time, a set of clauses which constitute a positive instance of SAT if and only if I is a positive instance of D. In other words, we have converted D into SAT in polynomial time. And since D was an arbitrary NP problem, it follows that any NP problem can be converted to SAT in polynomial time.

NP-completeness

Cook's Theorem implies that any NP problem is at most polynomially harder than SAT. This means that if we find a way of solving SAT in polynomial time, we will then be in a position to solve any NP problem in polynomial time. This would have huge practical repercussions, since many frequently encountered problems which are so far believed to be intractable are NP. This special property of SAT is called NP-completeness. A decision problem is NP-complete if it has the property that any NP problem can be converted into it in polynomial time. SAT was the first NP-complete problem to be recognised as such (the theory of NP-completeness having come into existence with the proof of Cook's Theorem), but it is by no means the only one. There are now literally thousands of problems, cropping up in many different areas of computing, which have been proved to be NP-complete.

In order to prove that an NP problem is NP-complete, all that is needed is to show that SAT can be converted into it in polynomial time. The reason for this is that the sequential composition of two polynomial-time algorithms is itself a polynomial-time algorithm, since the sum of two polynomials is itself a polynomial. Suppose SAT can be converted to problem D in polynomial time. Now take any NP problem D0. We know we can convert it into SAT in polynomial time, and we know we can convert SAT into D in polynomial time. The result of these two conversions is a polynomial-time conversion of D0 into D. Since D0 was an arbitrary NP problem, it follows that D is NP-complete.

We illustrate this by showing that the problem 3SAT is NP-complete. This problem is similar to SAT, but restricts the clauses to at most three schematic letters each:

   Given a finite set {C1, C2, ..., Cn} of clauses, each of which contains at most three schematic letters, determine whether there is an assignment of truth-values to the schematic letters appearing in the clauses which makes all the clauses true.

3SAT is obviously NP (since it is a special case of SAT, which is NP). It turns out to be straightforward to convert an arbitrary instance of SAT into an instance of 3SAT with the same satisfiability property. Take any clause written in disjunctive form as C = L1 v L2 v ... v Ln, where n > 3 and each Li is a literal. We replace this by n-2 new clauses, using n-3 new schematic letters X1, X2, ..., X(n-3), as follows:
      L1 v L2 v X1
      X1 -> L3 v X2
      X2 -> L4 v X3
      ...
      X(n-4) -> L(n-2) v X(n-3)
      X(n-3) -> L(n-1) v Ln
Call the new set of clauses C*. Any truth-assignment to the schematic letters appearing in the Li which satisfies C can be extended to the Xi so that C* is satisfied, and conversely any truth-assignment which satisfies C* also satisfies C. To prove this, suppose first that a certain truth-assignment satisfies C. Then it satisfies at least one of the literals appearing in C, say Lk. Now assign true to X1, ..., X(k-2) and false to X(k-1), ..., X(n-3). Then all the clauses in C* are satisfied: for i = 1, ..., k-2 the ith clause is satisfied because Xi is true; the (k-1)th clause is satisfied because Lk is true; and for j = k, k+1, ..., n-2 the jth clause is satisfied because X(j-1) (appearing in the antecedent) is false. Conversely, suppose we have a truth-assignment satisfying C*, so that each clause in C* is satisfied, and suppose L1, ..., L(n-2) are all false. Then it is easy to see that all the Xi must be true; in particular X(n-3) is true, so either L(n-1) or Ln is true. Thus in any event at least one of the Li is true, and hence C is true.

If we take an instance of SAT and replace all the clauses containing more than three literals by clauses containing exactly three in the way described above, we end up with an instance of 3SAT which is satisfiable if and only if the original instance is satisfiable. Moreover, the conversion can be accomplished in polynomial time. It follows that 3SAT, like SAT, is NP-complete.
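The clause-splitting construction above is mechanical enough to be carried out by a short program. The following is an illustrative sketch only (the data layout and names are our own): it takes one clause with n > 3 literals, encoded as signed variable indices, and prints the n-2 three-literal clauses produced by the construction, writing each implication Xi -> A v B in its equivalent clause form ~Xi v A v B.

#include <stdio.h>

/* A literal is a signed variable index: +v stands for the letter v,
   -v for its negation. New letters X1, X2, ... are numbered from 'next'. */
void split_clause(int lit[], int n, int next)
{
    int i;
    if (n <= 3)
        return;                         /* nothing to do for short clauses */

    /* First clause: L1 v L2 v X1 */
    printf("%d %d %d\n", lit[0], lit[1], next);

    /* Middle clauses: Xi -> L(i+2) v X(i+1), i.e. ~Xi v L(i+2) v X(i+1) */
    for (i = 1; i <= n - 4; i++)
        printf("%d %d %d\n", -(next + i - 1), lit[i + 1], next + i);

    /* Last clause: X(n-3) -> L(n-1) v Ln, i.e. ~X(n-3) v L(n-1) v Ln */
    printf("%d %d %d\n", -(next + n - 4), lit[n - 2], lit[n - 1]);
}

int main(void)
{
    /* Example: the 5-literal clause x1 v ~x2 v x3 v x4 v ~x5 becomes
       three 3-literal clauses using the new letters numbered 6 and 7. */
    int clause[] = { 1, -2, 3, 4, -5 };
    split_clause(clause, 5, 6);
    return 0;
}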

P = NP?

We have seen that a problem is NP-complete if and only if it is NP and any NP problem can be converted into it in polynomial time. (A problem satisfying the second condition is called NP-hard, so NP-complete means NP and NP-hard.) It follows from this that all NP-complete problems are mutually interconvertible in polynomial time: if D1 and D2 are both NP-complete, then D1 can be converted into D2 by virtue of the fact that D1 is NP and D2 is NP-hard, and D2 can be converted into D1 by virtue of the fact that D2 is NP and D1 is NP-hard. Thus, as far as computational complexity is concerned, all NP-complete problems agree to within some polynomial amount of difference. All NP-complete problems stand or fall together: if any one NP-complete problem can be shown to be of polynomial complexity, then by the above they all are, and if they are polynomial then all NP problems are polynomial, since the NP-complete problems are the hardest of all NP problems and the whole class NP would collapse down into the class P of problems solvable in polynomial time. If, on the other hand, any one NP-complete problem can be shown not to be solvable in polynomial time, then none of them are so solvable.

The current state of our knowledge is this: we know how to solve NP-complete problems in exponential time, but there is no NP-complete problem for which any algorithm is known that runs in less than exponential time. On the other hand, no-one has ever succeeded in proving that it is not possible to solve an NP-complete problem faster than that; this implies, of course, that no-one has proved that NP-complete problems can't be solved in polynomial time. Thus the question of whether or not there are polynomial algorithms for NP-complete problems has become known as the "P=NP?" problem. Most people who have an opinion on this believe that the answer is no, that is, that NP-complete problems are strictly harder than polynomial problems.

All this assumes, of course, the Turing model of computation, which applies to all existing computers. If we had access to unlimited parallelism, we could solve any NP problem (including, therefore, the NP-complete problems) in polynomial time. Existing computers do not provide us with such parallelism, but if the current work on Quantum Computation comes to fruition, then it may be that a new generation of computers will have exactly such abilities. If that happens, the theory of Turing-equivalent computation would suddenly become much less relevant to practical computing concerns. It would not solve the classic P=NP question, because that question concerns the properties of Turing-equivalent computation, but it might make the question only of theoretical interest.

Conclusion: Thus we have studied Cook's Theorem.

10. Study of Sorting Network

Aim: To Study Sorting Network

THEORY:

1. Computing with a circuit

We are going to design a circuit where the inputs are the numbers to be sorted. If all we had to do was copy the inputs to the outputs, the circuits would just be horizontal lines. Instead, we compare two numbers using a comparator gate, drawn as a vertical segment connecting the two wires being compared: the smaller of the two values continues on the top wire and the larger on the bottom wire.

   (figure: a comparator gate, and a complete sorting network built from such gates)

The inputs come on the wires on the left and are output on the wires on the right. The largest number is output on the bottom line.

The surprising thing is that one can generate circuits from a sorting algorithm. In fact, consider the circuit consisting of a chain of comparators between adjacent wires.

   (figure: the chain of adjacent-wire comparators)

Q: What does this circuit do?
A: This is the inner loop of insertion sort. Repeating this inner loop, we get the insertion-sort sorting network.

   (figure: the insertion-sort network, with an alternative way of drawing it)

Q: How much time does it take for this circuit to sort the n numbers?
Running time = how many time clocks we have to wait till the result stabilizes. In this case:

Lemma 1: Insertion sort requires 2n - 1 time units to sort n numbers.

2. Definitions

Definition 2.1: A comparison network is a DAG (directed acyclic graph) with n inputs and n outputs, in which each gate has two inputs and two outputs.

Definition 2.2: The depth of a wire is 0 at the input. For a gate with two inputs of depth d1 and d2, the depth of the output is 1 + max(d1, d2). The depth of a comparison network is the maximum depth of an output wire.

Definition 2.3: A sorting network is a comparison network such that for any input, the output is monotonically sorted. The size of a sorting network is the number of gates in the sorting network. The running time of a sorting network is just its depth.

3. The Zero-One Principle

The zero-one principle: If a comparison network sorts correctly all binary inputs (i.e., every number is either 0 or 1), then it sorts correctly all inputs. We of course need to prove that the zero-one principle is true.

Lemma 2: If a comparison network transforms the input sequence a = <a1, a2, ..., an> into the output sequence b = <b1, b2, ..., bn>, then for any monotonically increasing function f, the network transforms the input sequence f(a) = <f(a1), ..., f(an)> into the sequence f(b) = <f(b1), ..., f(bn)>.

Proof: Consider a single comparator with inputs x, y and outputs x' = min(x, y) and y' = max(x, y). If f(x) = f(y), then the claim trivially holds for this comparator. If f(x) < f(y), then clearly
   max(f(x), f(y)) = f(max(x, y)) and min(f(x), f(y)) = f(min(x, y)),
since f is monotonically increasing. Thus, for x < y the comparator outputs <x, y>, and on inputs <f(x), f(y)> it outputs <f(x), f(y)>; for x > y the comparator outputs <y, x>, and on inputs <f(x), f(y)> it outputs <f(y), f(x)>. This establishes the claim for one comparator; the lemma then follows by induction on the network structure.

Theorem: If a comparison network with n inputs sorts all 2^n binary strings of length n correctly, then it sorts all sequences correctly.

Proof: Assume for the sake of contradiction that it sorts incorrectly the sequence a1, ..., an. Let ai < ak be two numbers that are output in incorrect order (i.e., ak appears before ai in the output). Define the monotonically increasing function f by f(x) = 0 if x <= ai and f(x) = 1 otherwise. By the lemma above, if a wire carries the value aj when the network gets the input a1, ..., an, then for the input f(a1), ..., f(an) this wire carries the value f(aj). Thus we have a binary input f(a1), ..., f(an) for which the network outputs ...f(ak)...f(ai)..., that is, ...1...0..., so the network does not sort this binary input correctly. This is a contradiction to our assumption. QED

4. A bitonic sorting network

Definition: A bitonic sequence is a sequence which is first increasing and then decreasing, or can be circularly shifted to become so.
Example: (1, 2, 3, 4, 5, 3, 2, 1) is bitonic; (4, 5, 3, 1, 2) is bitonic; (1, 2, 1, 2) is not bitonic.

Observation: A bitonic sequence over 0, 1 is either of the form 0^i 1^j 0^k or of the form 1^i 0^j 1^k, where 0^i denotes a sequence of i zeros. Namely, such a sequence looks like 000...0111...1000...0 or its complement.

Definition: A bitonic sorter is a comparison network that sorts bitonic sequences.

Definition: A half-cleaner is a comparison network connecting line i with line i + n/2, for i = 1, ..., n/2. A Half-Cleaner[n] is a half-cleaner with n inputs. The depth of a Half-Cleaner[n] is one.

What does a half-cleaner do for a (binary) bitonic sequence?

Lemma: If the input to a half-cleaner is a binary bitonic sequence, then for the output sequence:
1. The elements in the top half are smaller than the elements in the bottom half.
2. One of the halves is clean (all 0s or all 1s), and the other is bitonic.
(In the worked example, the left/top side comes out clean and all 0, and the right/bottom side comes out bitonic.)

This suggests a simple recursive construction of Bitonic-Sorter[n]: apply a Half-Cleaner[n], and then recursively apply a Bitonic-Sorter[n/2] to each half. Opening the recursion gives log n levels of half-cleaners.

   (figure: Bitonic-Sorter[n] built from a Half-Cleaner[n] followed by two copies of Bitonic-Sorter[n/2], and the same network with the recursion opened)

Lemma: Bitonic-Sorter[n] sorts any bitonic sequence of length n, and its depth is O(log n).

Merging sequences

Q: Given two sorted sequences of length n/2, how do we merge them into a single sorted sequence?
A: Concatenate the two sequences, where the second sequence is flipped (reversed). Observation: given two sorted sequences a1 <= a2 <= ... <= an and b1 <= b2 <= ... <= bn, the sequence a1, a2, ..., an, bn, bn-1, ..., b2, b1 is bitonic. It is easy to verify that the resulting sequence is bitonic, and as such we can sort it using the Bitonic-Sorter[n].

Physically flipping half of the inputs is of course illegal in a circuit. What we do instead is take Bitonic-Sorter[n] and flip the wiring of the last n/2 entries in its first level. Formally, we let Flip-Cleaner[n] be the component obtained from Half-Cleaner[n] by this flip; the resulting network Merger[n] consists of a Flip-Cleaner[n] followed by two copies of Bitonic-Sorter[n/2], and it is equivalent to Bitonic-Sorter[n] with the first component flipped.

   (figure: Merger[n] = Flip-Cleaner[n] followed by two copies of Bitonic-Sorter[n/2])

Lemma: Merger[n] merges two sorted sequences of length n/2 each, and its depth is O(log n).

Sorting Network

Q: How do we build a sorting network?
A: Just implement merge sort using Merger[n]: Sorter[n] consists of two copies of Sorter[n/2] (one on the top half of the wires and one on the bottom half), followed by a Merger[n].

   (figure: Sorter[n] built recursively from two copies of Sorter[n/2] and a Merger[n])

Lemma: Sorter[n] is a sorting network (i.e., it sorts any n numbers) using G(n) = 2G(n/2) + O(n log n) = O(n log^2 n) gates. As for the depth, D(n) = D(n/2) + Depth(Merger[n]) = D(n/2) + O(log n), which solves to D(n) = O(log^2 n).

A small illustrative C sketch of this construction is given below.
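The following sketch is our own illustration and not one of the prescribed lab programs: it executes the comparators of Sorter[n] sequentially for n a power of two. As in the Merger[n] construction above, the two halves are sorted in opposite directions so that their concatenation is bitonic and can then be merged.

#include <stdio.h>

/* One comparator gate: order the pair (a[i], a[j]) according to dir
   (dir = 1 for increasing, dir = 0 for decreasing). */
void compare_exchange(int a[], int i, int j, int dir)
{
    if (dir == (a[i] > a[j]))
    {
        int t = a[i];
        a[i] = a[j];
        a[j] = t;
    }
}

/* Bitonic-Sorter[cnt] on a[low..low+cnt-1], assuming that range is bitonic. */
void bitonic_merge(int a[], int low, int cnt, int dir)
{
    if (cnt > 1)
    {
        int k = cnt / 2, i;
        for (i = low; i < low + k; i++)
            compare_exchange(a, i, i + k, dir);   /* the Half-Cleaner[cnt] level */
        bitonic_merge(a, low, k, dir);
        bitonic_merge(a, low + k, k, dir);
    }
}

/* Sorter[cnt]: sort a[low..low+cnt-1] into direction dir (cnt a power of 2).
   The halves are sorted in opposite directions, so their concatenation is
   bitonic and bitonic_merge plays the role of Merger[cnt]. */
void bitonic_sort(int a[], int low, int cnt, int dir)
{
    if (cnt > 1)
    {
        int k = cnt / 2;
        bitonic_sort(a, low, k, 1);               /* ascending half  */
        bitonic_sort(a, low + k, k, 0);           /* descending half */
        bitonic_merge(a, low, cnt, dir);
    }
}

int main(void)
{
    int a[8] = { 7, 3, 8, 1, 6, 2, 5, 4 };
    int i;
    bitonic_sort(a, 0, 8, 1);
    for (i = 0; i < 8; i++)
        printf("%d ", a[i]);                      /* prints 1 2 3 4 5 6 7 8 */
    printf("\n");
    return 0;
}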

Here is how Sorter[8] looks like:

   (figure: the Sorter[8] network)

Conclusion: Thus we have studied Sorting Network.

Possible Questions on Advanced Algorithm:

VIVA VOCE QUESTIONS
1) Define and state the importance of a sub-algorithm in computation and its relationship with the main algorithm.
2) Give the difference in format between an algorithm and a sub-algorithm.
3) What is the general algorithm model for any recursive procedure?
4) State recursion and its different types.
5) Explain the depth of recursion.
6) State the problems which differentiate between a recursive procedure and a non-recursive procedure.
7) What is the complexity of FFT?
8) What is meant by modular arithmetic?
9) What is the Chinese Remainder theorem?
10) What is meant by NP-Complete and NP-Hard?
11) Differentiate between NP-Complete and NP-Hard.
12) Differentiate between deterministic and non-deterministic algorithms.
13) What is meant by an algorithm?
14) What are the different techniques to solve a given problem?
15) Explain the Knapsack Problem.
16) What is meant by dynamic programming?
17) Explain the Greedy method.
18) What is meant by Time Complexity?
19) What is meant by Space Complexity?
20) What is Asymptotic Notation?

Evaluation and marking system:

Basic honesty in the evaluation and marking system is absolutely essential, and in the process the impartial nature of the evaluator is required for the examination system to become popular amongst the students. It is a primary responsibility of the teacher that the right students, who are really putting in a lot of hard work with the right kind of intelligence, are correctly awarded.

The marking patterns should be justifiable to the students without any ambiguity, and the teacher should see that students are not faced with unjust circumstances. It is a wrong approach or concept to award the students by way of easy marking to gain cheap popularity among the students, which they do not deserve.

The assessment is done according to the directives of the Principal/Vice-Principal/Dean Academics.