Submitted by:
Prashant Saxena
1814310147
Submitted to:
Mr. Bhupesh Gupta
(Assistant Professor)
GHAZIABAD (UP)
VISION
MISSION
● To promote technical proficiency by adopting effective teaching learning
processes.
● To provide environment & opportunity for students to bring out their inherent
talents for all round development.
● To promote latest technologies in Computer Science & Engineering and across
disciplines in order to serve the needs of Industry, Government, Society, and
the scientific community.
● To educate students to be Successful, Ethical and Effective problem-solvers
and Life-Long learners who will contribute positively to the society.
3. Design/development of solutions: Design solutions for complex engineering problems and design
system components or processes that meet the specified needs with appropriate consideration for
the public health and safety, and the cultural, societal, and environmental considerations.
4. Conduct investigations of complex problems: Use research-based knowledge and research methods
including design of experiments, analysis and interpretation of data, and synthesis of the information
to provide valid conclusions.
5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools including prediction and modelling to complex engineering activities with
an understanding of the limitations.
6. The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal,
health, safety, legal and cultural issues and the consequent responsibilities relevant to the
professional engineering practice.
7. Environment and sustainability: Understand the impact of the professional engineering solutions in
societal and environmental contexts, and demonstrate the knowledge of, and need for sustainable
development.
8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of
the engineering practice.
9. Individual and team work: Function effectively as an individual, and as a member or leader in diverse
teams, and in multidisciplinary settings.
10. Communication: Communicate effectively on complex engineering activities with the engineering
community and with society at large, such as, being able to comprehend and write effective reports
and design documentation, make effective presentations, and give and receive clear instructions.
11. Project management and finance: Demonstrate knowledge and understanding of the engineering and
management principles and apply these to one’s own work, as a member and leader in a team, to
manage projects and in multidisciplinary environments.
12. Life-long learning: Recognize the need for, and have the preparation and ability to engage in
independent and life-long learning in the broadest context of technological change.
PEO1: Graduates of the program will be able to apply fundamental principles of engineering
in problem solving and understand the role of computing in multiple disciplines.
PEO2: Graduates will learn to apply various computational techniques & tools for developing
solutions & projects in the real world.
PEO3: Be employed as computer science professionals beyond entry-level positions or be
making satisfactory progress in graduate programs.
PEO4: Demonstrate that they can function, communicate, collaborate and continue to learn
effectively as ethically and socially responsible computer science professionals.
EXPERIMENT 1:
COMPARE THE RUN TIME PERFORMANCE OF BUBBLE SORT, INSERTION
SORT AND SELECTION SORT.
OBJECTIVES:
● Implement Bubble Sort, Insertion Sort, and Selection Sort using any programming language
● Generate random numbers as inputs of size 100, 500, 1000, 5000, 10000, 50000, 100000
● Determine the run time of all the three algorithms on randomly generated inputs
● Plot the comparative graph of all three algorithms and give a comment.
● Justify the runtime performance with theoretical time complexities
● Give real-time applications of the algorithms
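Every experiment below repeats the same measure-and-record loop, so it helps to see the pattern once in isolation. This is a minimal sketch of such a harness (the helper name `time_sort` is illustrative, not part of the prescribed code):

```python
import random
import time

def time_sort(sort_fn, sizes, sorted_input=False):
    """Return (sizes, runtimes) for sort_fn over a list of input sizes."""
    runtimes = []
    for n in sizes:
        random.seed(n)                       # reproducible inputs per size
        if sorted_input:
            data = list(range(n))            # already-sorted input (best case)
        else:
            data = [random.randrange(1, 100000) for _ in range(n)]
        start = time.time()
        sort_fn(data)
        runtimes.append(time.time() - start)
    return sizes, runtimes

# usage: sizes, rts = time_sort(sorted, [100, 500, 1000])
```

The two lists it returns can be passed straight to matplotlib's `plt.plot(sizes, rts)`, which is exactly what the experiment code below does inline.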
INTRODUCTION TO TOPICS:
BUBBLE SORT:
Bubble sort is a simple sorting algorithm. This sorting algorithm is a comparison-based algorithm in
which each pair of adjacent elements is compared and the elements are swapped if they are not in
order. This algorithm is not suitable for large data sets as its average and worst case complexity are
of O(n²), where n is the number of items.
INSERTION SORT:
This is an in-place comparison-based sorting algorithm. Here, a sub-list is maintained which is always
sorted. For example, the lower part of an array is maintained to be sorted. An element which is to
be inserted into this sorted sub-list has to find its appropriate place and then be inserted
there. Hence the name, insertion sort.
The array is searched sequentially and unsorted items are moved and inserted into the sorted sub-
list (in the same array). This algorithm is not suitable for large data sets as its average and worst case
complexity are of O(n²), where n is the number of items.
SELECTION SORT:
Selection sort is a simple sorting algorithm. This sorting algorithm is an in-place comparison-based
algorithm in which the list is divided into two parts, the sorted part at the left end and the unsorted
part at the right end. Initially, the sorted part is empty and the unsorted part is the entire list.
The smallest element is selected from the unsorted array and swapped with the leftmost element,
and that element becomes a part of the sorted array. This process continues, moving the unsorted
array boundary one element to the right each time.
This algorithm is not suitable for large data sets as its average and worst case complexities are of
O(n²), where n is the number of items.
ALGORITHMS:
BUBBLE SORT:
begin BubbleSort(list)
   repeat
      swapped := false
      for each adjacent pair (list[i], list[i+1])
         if list[i] > list[i+1]
            swap(list[i], list[i+1])
            swapped := true
         end if
      end for
   until not swapped
   return list
end BubbleSort
SELECTION SORT:
Step 1 − Set MIN to location 0
Step 2 − Search for the minimum element in the list
Step 3 − Swap with the value at location MIN
Step 4 − Increment MIN to point to the next element
Step 5 − Repeat until the list is sorted
INSERTION SORT:
Step 1 − If it is the first element, it is already sorted.
Step 2 − Pick the next element
Step 3 − Compare with all elements in the sorted sub-list
Step 4 − Shift all the elements in the sorted sub-list that are greater than the
value to be sorted
Step 5 − Insert the value
Step 6 − Repeat until the list is sorted
CODE:
import math
import matplotlib.pyplot as plt
import time
import random
def bubbleSort(nlist):
    for itr in range(len(nlist) - 1, 0, -1):   # bug fix: step must be -1, not 1
        for i in range(itr):
            if nlist[i] > nlist[i + 1]:
                nlist[i], nlist[i + 1] = nlist[i + 1], nlist[i]

nlist = [14, 46, 27, 57, 41, 45, 21, 70]
bubbleSort(nlist)
print(nlist)
def selectionSort(a):
    for i in range(len(a)):
        min_idx = i                        # index of the smallest remaining element
        for j in range(i + 1, len(a)):
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]
    return a
def insertionSort(a):
    for i in range(0, len(a)):
        j = i - 1
        t = a[i]
        while j > -1 and t < a[j]:         # shift larger elements one step right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = t
    return a

print(insertionSort([-3, 2, 1, 4, 5, 0, 23, -7, 12, 18, 15]))
rt = []
lt = [100, 200, 500, 1000, 3000, 4000, 10000, 20000]
for i in lt:
    start = time.time()
    l = []
    random.seed(i)
    for j in range(i):
        l.append(j)
    selectionSort(l)
    end = time.time()
    print(end - start)
    rt.append(end - start)
rt1 = []
lt1 = [100, 200, 500, 1000, 3000, 4000, 10000, 20000]
for i in lt1:
    start = time.time()
    l = []
    random.seed(i)
    for j in range(i):
        l.append(j)
    insertionSort(l)
    end = time.time()
    print(end - start)
    rt1.append(end - start)
rt2 = []
lt2 = [100, 200, 500, 1000, 3000, 4000, 10000, 20000]
for i in lt2:
    start = time.time()
    l = []
    random.seed(i)
    for j in range(i):
        l.append(j)
    bubbleSort(l)
    end = time.time()
    print(end - start)
    rt2.append(end - start)
rt3 = []
lt1 = [100, 200, 500, 1000, 3000, 4000, 10000, 20000]
for i in lt1:
    start = time.time()
    l = []
    random.seed(i)
    for j in range(i):
        l.append(random.randrange(1, 100000))
    insertionSort(l)
    end = time.time()
    print(end - start)
    rt3.append(end - start)
rt4 = []
lt2 = [100, 200, 500, 1000, 3000, 4000, 10000, 20000]
for i in lt2:
    start = time.time()
    l = []
    random.seed(i)
    for j in range(i):
        l.append(random.randrange(1, 100000))
    bubbleSort(l)
    end = time.time()
    print(end - start)
    rt4.append(end - start)
rt5 = []
lt = [100, 200, 500, 1000, 3000, 4000, 10000, 20000]
for i in lt:
    start = time.time()
    l = []
    random.seed(i)
    for j in range(i):
        l.append(random.randrange(1, 100000))
    selectionSort(l)
    end = time.time()
    print(end - start)
    rt5.append(end - start)
plt.plot(lt,rt2,label='Sorted Data')
plt.plot(lt,rt4,label ='Unsorted Data')
plt.title('Bubble Sort')
plt.legend()
plt.show()
plt.plot(lt,rt3,label='Unsorted Data')
plt.plot(lt,rt1,label ='Sorted Data')
plt.title('Insertion Sort')
plt.legend()
plt.show()
plt.plot(lt,rt5,label='Unsorted Data')
plt.plot(lt,rt,label ='Sorted Data')
plt.title('Selection Sort')
plt.legend()
plt.show()
GRAPHS:
RESULT:
The above experiment demonstrates the behaviour of the underlying algorithms for different input samples.
EXPERIMENT 2:
COMPARE THE RUN TIME PERFORMANCE OF HEAP SORT, QUICK SORT AND
MERGE SORT.
OBJECTIVES:
● Implement Heap Sort, Quick Sort, and Merge Sort using any programming language
● Generate random numbers as inputs of size 100, 500, 1000, 5000, 10000, 50000, 100000
● Determine the run time of all the three algorithms on randomly generated inputs
● Plot the comparative graph of all three algorithms and give a comment.
● Justify the runtime performance with theoretical time complexities
● Give real-time applications of the algorithms
INTRODUCTION TO TOPICS:
HEAP SORT:
Heap sort is a comparison based sorting technique based on Binary Heap data structure. It is similar
to selection sort where we first find the maximum element and place the maximum element at the
end. We repeat the same process for the remaining elements.
QUICK SORT:
Quick sort is a highly efficient sorting algorithm and is based on partitioning an array of data into
smaller arrays. A large array is partitioned into two arrays one of which holds values smaller than
the specified value, say pivot, based on which the partition is made and another array holds values
greater than the pivot value.
Quicksort partitions an array and then calls itself recursively twice to sort the two resulting
subarrays. This algorithm is quite efficient for large-sized data sets: its average-case complexity is
O(n log n), although its worst-case complexity is O(n²).
MERGE SORT:
Merge sort is a sorting technique based on the divide and conquer technique. With worst-case time
complexity being O(n log n), it is one of the most respected algorithms.
Merge sort first divides the array into equal halves and then combines them in a sorted manner.
ALGORITHMS:
HEAP SORT:
[END OF LOOP]
CALL Delete_Heap(ARR,N,VAL)
SET N = N+1
[END OF LOOP]
Step 3: END
QUICK SORT:
Step 1 − Choose the highest index value as pivot
Step 2 − Take two variables to point left and right of the list excluding pivot
Step 3 − left points to the low index
Step 4 − right points to the high index
Step 5 − while the value at left is less than the pivot, move right
Step 6 − while the value at right is greater than the pivot, move left
Step 7 − if neither step 5 nor step 6 matches, swap left and right
Step 8 − if left ≥ right, the point where they met is the new pivot
MERGE SORT:
Step 1 − if there is only one element in the list, it is already sorted; return.
Step 2 − divide the list recursively into two halves until it can no longer be divided.
Step 3 − merge the smaller lists into new lists in sorted order.
CODE:
import math
import time

# warm-up timing check: one million square roots
start_time = time.time()
p = []
for i in range(1000000):
    p.append(math.sqrt(i * i))
end_time = time.time()
print(end_time - start_time)
def heapify(ar, n, i):
    rt = i                                  # index of largest among root and children
    l = 2 * i + 1
    r = 2 * i + 2
    if l < n and ar[l] > ar[rt]:
        rt = l
    if r < n and ar[r] > ar[rt]:
        rt = r
    if rt != i:
        ar[i], ar[rt] = ar[rt], ar[i]       # move the largest to the root
        heapify(ar, n, rt)                  # repair the affected subtree

def heap_sort(ar, n):
    for i in range(int(n / 2), -1, -1):     # build a max heap
        heapify(ar, n, i)
    for i in range(n - 1, 0, -1):           # repeatedly move the max to the end
        ar[0], ar[i] = ar[i], ar[0]
        heapify(ar, i, 0)
def partition(arr, low, high):
    i = low - 1              # index of smaller element
    pivot = arr[high]        # pivot
    for j in range(low, high):
        if arr[j] <= pivot:
            i = i + 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def quickSort(arr, low, high):
    if low < high:
        pi = partition(arr, low, high)
        quickSort(arr, low, pi - 1)
        quickSort(arr, pi + 1, high)
def merge_sort(values):
    if len(values) > 1:
        m = len(values) // 2
        left = merge_sort(values[:m])
        right = merge_sort(values[m:])
        values = []
        i = j = 0
        # merge the two sorted halves
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                values.append(left[i])
                i += 1
            else:
                values.append(right[j])
                j += 1
        values.extend(left[i:])
        values.extend(right[j:])
    return values
import random
import time
rt = []
lt = [100, 200, 500, 1000, 3000, 4000, 10000, 20000]
for i in lt:
    start = time.time()
    l = []
    random.seed(i)
    for j in range(i):
        l.append(random.randrange(1, 100000))
    heap_sort(l, len(l))
    end = time.time()
    print(end - start)
    rt.append(end - start)
print()
rst = []
lts = [100, 200, 500, 1000, 3000, 4000, 10000, 20000]
for i in lts:
    start = time.time()
    l = []
    random.seed(i)
    for j in range(i):
        l.append(j)
    heap_sort(l, len(l))
    end = time.time()
    print(end - start)
    rst.append(end - start)
rt1 = []
lt1 = [100, 200, 500, 1000, 3000, 4000, 10000, 20000]
for i in lt1:
    start = time.time()
    l = []
    random.seed(i)
    for j in range(i):
        l.append(random.randrange(1, 100000))
    quickSort(l, 0, len(l) - 1)
    end = time.time()
    print(end - start)
    rt1.append(end - start)
print()
rts1 = []
lts1 = [100, 200, 500]
for i in lts1:
    start = time.time()
    l = []
    random.seed(i)
    for j in range(i):
        l.append(j)
    quickSort(l, 0, len(l) - 1)
    end = time.time()
    print(end - start)
    rts1.append(end - start)
rt2 = []
lt2 = [100, 200, 500, 1000, 3000, 4000, 10000, 20000]
for i in lt2:
    start = time.time()
    l = []
    random.seed(i)
    for j in range(i):
        l.append(random.randrange(1, 100000))
    l = merge_sort(l)
    end = time.time()
    print(end - start)
    rt2.append(end - start)
print()
rst2 = []
lst2 = [100, 200, 500, 1000, 3000, 4000, 10000, 20000]
for i in lst2:
    start = time.time()
    l = []
    random.seed(i)
    for j in range(i):
        l.append(j)
    l = merge_sort(l)
    end = time.time()
    print(end - start)
    rst2.append(end - start)
plt.plot(lt,rt,label="Unsorted Data")
plt.plot(lts,rst,label="Sorted Data")
plt.title('Heap sort')
plt.xlabel("Input Size")
plt.ylabel('Runtime')
plt.legend()
plt.show()
plt.plot(lt1,rt1,label="Unsorted Data")
plt.plot(lts1,rts1,label="Sorted Data")
plt.title('Quick Sort')
plt.xlabel('Input Size')
plt.ylabel('Runtime')
plt.legend()
plt.show()
plt.plot(lt2,rt2,label='Unsorted Data')
plt.plot(lst2,rst2,label='Sorted Data')
plt.title('Merge Sort')
plt.legend()
plt.xlabel('Input Size')
plt.ylabel('Runtime')
plt.show()
GRAPHS:
RESULT:
The above experiment demonstrates the behaviour of the underlying algorithms for different input samples.
EXPERIMENT 3:
COMPARE THE RUN TIME PERFORMANCE OF COUNT SORT AND BUCKET
SORT.
OBJECTIVES:
● Implement Count Sort and Bucket Sort using any programming language
● Generate random numbers as inputs of size 100, 500, 1000, 5000, 10000, 50000, 100000
● Determine the run time of both algorithms on randomly generated inputs
● Plot the comparative graph of both algorithms and give a comment.
● Justify the runtime performance with theoretical time complexities
● Give real-time applications of the algorithms
INTRODUCTION TO TOPICS:
COUNT SORT:
Count sort is a sorting algorithm that sorts the elements of an array by counting the number of
occurrences of each unique element in the array. The count is stored in an auxiliary array and the
sorting is done by mapping the count as an index of the auxiliary array.
BUCKET SORT:
Bucket Sort is a sorting technique that sorts the elements by first dividing the elements into several
groups called buckets. The elements inside each bucket are sorted using any of the suitable sorting
algorithms or recursively calling the same algorithm.
Several buckets are created. Each bucket is filled with a specific range of elements. The elements
inside the bucket are sorted using any other algorithm. Finally, the elements of the bucket are
gathered to get the sorted array.
The process of bucket sort can be understood as a scatter-gather approach. The elements are first
scattered into buckets then the elements of buckets are sorted. Finally, the elements are gathered in
order.
ALGORITHMS:
COUNT SORT:
countingSort(array, size)
max <- find largest element in array
initialize count array with all zeros
for j <- 0 to size
find the total count of each unique element and
store the count at jth index in count array
for i <- 1 to max
find the cumulative sum and store it in count array itself
for j <- size down to 1
restore the elements to array
decrease count of each element restored by 1
BUCKET SORT:
bucketSort()
create N buckets each of which can hold a range of values
for all the buckets
initialize each bucket with 0 values
for all the buckets
put elements into buckets matching the range
for all the buckets
sort elements in each bucket
gather elements from each bucket
end bucketSort
CODE:
def count_sort(arr):
    max_element = int(max(arr))
    min_element = int(min(arr))
    range_of_elements = max_element - min_element + 1
    # Create a count array to store count of individual
    # elements and initialize count array as 0
    count_arr = [0 for _ in range(range_of_elements)]
    output_arr = [0 for _ in range(len(arr))]
    for num in arr:                            # count each value
        count_arr[num - min_element] += 1
    for i in range(1, range_of_elements):      # cumulative counts = positions
        count_arr[i] += count_arr[i - 1]
    for num in reversed(arr):                  # place elements stably
        count_arr[num - min_element] -= 1
        output_arr[count_arr[num - min_element]] = num
    return output_arr
rt = []
lt = [100, 200, 500, 1000, 3000, 4000, 10000, 20000, 50000, 100000, 200000, 400000]
for i in lt:
    start = time.time()
    l = []
    random.seed(i)
    for j in range(i):
        l.append(random.randrange(1, 100000))
    l = count_sort(l)    # bug fix: the function must actually be called
    end = time.time()
    print(end - start)
    rt.append(end - start)
rt1 = []
lt = [100, 200, 500, 1000, 3000, 4000, 10000, 20000, 50000, 100000, 200000, 400000]
for i in lt:
    start = time.time()
    l = []
    random.seed(i)
    for j in range(i):
        l.append(j)          # already-sorted input
    l = count_sort(l)
    end = time.time()
    print(end - start)
    rt1.append(end - start)
plt.plot(lt,rt,label="Unsorted Data")
plt.plot(lt,rt1,label="Sorted Data")
plt.xlabel("Input Size")
plt.title("Count Sort")
plt.ylabel('Runtime')
plt.legend()
plt.show()
def bucketSort(x):
    slot_num = 10                        # 10 slots, each slot's size is 0.1
    arr = [[] for _ in range(slot_num)]
    for j in x:
        index_b = int(slot_num * j)      # bucket index for a value in [0, 1)
        arr[min(index_b, slot_num - 1)].append(j)
    result = []
    for bucket in arr:                   # sort each bucket, then gather in order
        result.extend(sorted(bucket))
    return result
rt = []
lt = [100, 200, 500, 1000, 3000, 4000, 10000, 20000]
for i in lt:
    start = time.time()
    l = []
    random.seed(i)
    for j in range(i):
        l.append(random.random())   # bug fix: randrange(0, 1) always returns 0
    l = bucketSort(l)
    end = time.time()
    print(end - start)
    rt.append(end - start)
rt1 = []
lt = [100, 200, 500, 1000, 3000, 4000, 10000, 20000]
for i in lt:
    start = time.time()
    l = []
    z = min(1 / i, 0.99)
    k = z
    for j in range(i):
        z = min(z, .999)
        l.append(z)                 # already-sorted floats in (0, 1)
        z += k
    l = bucketSort(l)
    end = time.time()
    print(end - start)
    rt1.append(end - start)
plt.plot(lt,rt,label="Unsorted Data")
plt.plot(lt,rt1,label="Sorted Data")
plt.title("Bucket Sort")
plt.xlabel("Input Size")
plt.ylabel('Runtime')
plt.legend()
plt.show()
GRAPHS:
RESULT:
The above experiment demonstrates the behaviour of the underlying algorithms for different input samples.
EXPERIMENT 4:
IMPLEMENT ASSEMBLY LINE SCHEDULING PROBLEM USING DYNAMIC
PROGRAMMING.
INTRODUCTION TO TOPIC:
The main goal of assembly line scheduling is to find the fastest route through the assembly
lines.
There are two main assembly lines, LINE 1 and LINE 2.
Normally, once a chassis enters an assembly line, it passes through that line only. The time to go from
one station to the next within the same assembly line is negligible.
Occasionally, a special rush order comes in, and the customer wants the automobile to be
manufactured as quickly as possible. For the rush orders, the chassis still passes through the n
stations in order, but the factory manager may switch the partially-completed auto from one
assembly line to the other after any station.
ALGORITHM:
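The fastest route follows the standard dynamic-programming recurrence below, where f1[j] and f2[j] denote the fastest time to reach station j on line 1 and line 2 (a are station times, t transfer times, e entry times, x exit times, matching the carAssembly code in the CODE section):

```latex
f_1[1] = e_1 + a_{1,1}, \qquad f_2[1] = e_2 + a_{2,1}
f_1[j] = \min\!\left(f_1[j-1] + a_{1,j},\; f_2[j-1] + t_{2,j-1} + a_{1,j}\right)
f_2[j] = \min\!\left(f_2[j-1] + a_{2,j},\; f_1[j-1] + t_{1,j-1} + a_{2,j}\right)
f^{*} = \min\!\left(f_1[n] + x_1,\; f_2[n] + x_2\right)
```

Each f value depends only on the previous station, so the whole table is filled in O(n), and f* is the fastest total time through the factory.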
CODE:
def carAssembly(a, t, e, x):
    NUM_STATION = len(a[0])
    T1 = [0 for i in range(NUM_STATION)]
    T2 = [0 for i in range(NUM_STATION)]
    T1[0] = e[0] + a[0][0]    # time taken to leave first station in line 1
    T2[0] = e[1] + a[1][0]    # time taken to leave first station in line 2
    # Fill remaining stations: either stay on the same line,
    # or pay the transfer time t to switch from the other line.
    for i in range(1, NUM_STATION):
        T1[i] = min(T1[i - 1] + a[0][i], T2[i - 1] + t[1][i] + a[0][i])
        T2[i] = min(T2[i - 1] + a[1][i], T1[i - 1] + t[0][i] + a[1][i])
    # add the exit times and take the faster line
    return min(T1[NUM_STATION - 1] + x[0], T2[NUM_STATION - 1] + x[1])
e = [10, 12]
x = [18, 7]
a = [[4, 5, 3, 2],
     [2, 10, 1, 4]]
t = [[0, 7, 4, 5],
     [0, 9, 2, 8]]
print(carAssembly(a, t, e, x))    # 35 for this example
rt=[]
l=[10,100,200,500,1000,2000,10000,25000,50000,100000]
e = [10, 12]
x = [18, 7]
for i in l:
    a = []
    t = []
    a1 = []
    a2 = []
    start = time.time()
    random.seed(i)
    for j in range(i):
        a1.append(random.randrange(1, 100))
    a.append(a1)
    for j in range(i):
        a2.append(random.randrange(1, 100))
    a.append(a2)
    t1 = [0]
    for j in range(i - 1):
        t1.append(random.randrange(1, 100))
    t.append(t1)
    t2 = [0]
    for j in range(i - 1):
        t2.append(random.randrange(1, 100))
    t.append(t2)
    print(carAssembly(a, t, e, x))
    end = time.time()
    print(end - start)
    rt.append(end - start)
plt.plot(l,rt,label="AssemblyLine")
plt.legend()
plt.xlabel('Input Size')
plt.ylabel('Runtime')
plt.show()
GRAPH:
RESULT:
The above experiment demonstrates the behaviour of the underlying algorithm for different input samples.
EXPERIMENT 5:
IMPLEMENT LONGEST COMMON SUBSEQUENCE PROBLEM USING
DYNAMIC PROGRAMMING.
INTRODUCTION TO TOPIC:
If a set of sequences is given, the longest common subsequence problem is to find a common
subsequence of all the sequences that is of maximal length.
The longest common subsequence problem is a classic computer science problem, the basis of data
comparison programs such as the diff utility, and has applications in bioinformatics. It is also widely
used by revision control systems, such as SVN and Git, for reconciling multiple changes made to a
revision-controlled collection of files.
ALGORITHM:
Algorithm: LCS-Length-Table-Formulation (X, Y)
m := length(X)
n := length(Y)
for i = 1 to m do
C[i, 0] := 0
for j = 1 to n do
C[0, j] := 0
for i = 1 to m do
for j = 1 to n do
if xi = yj
C[i, j] := C[i - 1, j - 1] + 1
B[i, j] := ‘D’
else
if C[i - 1, j] ≥ C[i, j - 1]
C[i, j] := C[i - 1, j]
B[i, j] := ‘U’
else
C[i, j] := C[i, j - 1]
B[i, j] := ‘L’
return C and B
CODE:
# Dynamic Programming implementation of LCS problem
def lcs(X, Y):
    m = len(X)
    n = len(Y)
    # C[i][j] holds the LCS length of X[:i] and Y[:j]
    C = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                C[i][j] = C[i - 1][j - 1] + 1
            else:
                C[i][j] = max(C[i - 1][j], C[i][j - 1])
    return C[m][n]
import time
lt=[5,10,15,20,25,50,100,150,500,1000,2000,5000,10000]
rt=[]
import random
characters=['A','B','C','T']
for i in lt:
    start = time.time()
    random.seed(i)
    L = random.choices(characters, k=i)
    x = ""
    for j in L:
        x = x + j
    L = random.choices(characters, k=i)
    y = ""
    for j in L:
        y = y + j
    print(lcs(x, y))
    end = time.time()
    print(end - start)
    rt.append(end - start)
plt.plot(lt, rt, label="LCS runtime")
plt.title("Longest Common Subsequence")
plt.xlabel('Input Size')
plt.ylabel('Runtime')
plt.legend()
plt.show()
GRAPHS:
RESULT:
The above experiment demonstrates the behaviour of the underlying algorithm for different input samples.
EXPERIMENT 6:
IMPLEMENT FIBONACCI SEQUENCE PROBLEM USING MEMOIZATION.
INTRODUCTION TO TOPIC:
MEMOIZATION:
The first step is to write the plain recursive code. In the program below, the recursion has only one
parameter that changes its value; since only one parameter is non-constant, this method is known
as 1-D memoization.
ALGORITHM:
memo = { }
fib(n):
if n in memo: return memo[n]
else if n = 0: return 0
else if n = 1: return 1
else: f = fib(n − 1) + fib(n − 2)
memo[n] = f
return f
CODE:
def fibonacci(input_value):
    if input_value == 1:
        return 1
    elif input_value == 2:
        return 1
    elif input_value > 2:
        return fibonacci(input_value - 1) + fibonacci(input_value - 2)

tim = []
import time
for i in range(1, 40):
    start = time.time()
    print("fib({}) =".format(i), fibonacci(i))
    ed = time.time()
    tim.append(ed - start)
    print(ed - start)
cache = {}
def fibonacci_memo(i):
    if i in cache:
        return cache[i]
    if i == 1:
        value = 1
    elif i == 2:
        value = 1    # bug fix: fib(2) is 1, not 2
    elif i > 2:
        value = fibonacci_memo(i - 1) + fibonacci_memo(i - 2)
    cache[i] = value
    return value
import time
t2 = []
for i in range(1, 200):
    start = time.time()
    print("fib({}) =".format(i), fibonacci_memo(i))
    ed = time.time()
    t2.append(ed - start)
    print(ed - start)
l1 = list(range(1, 40))
l2 = list(range(1, 200))
plt.plot(l1, tim, label="naive method")
plt.plot(l2, t2, label="using memoization")
plt.title("FIBONACCI")
plt.xlabel("Input Size")
plt.ylabel("Runtime")
plt.legend()
plt.show()
GRAPH:
RESULT:
The above experiment demonstrates the behaviour of the underlying algorithms for different input samples.
EXPERIMENT 7:
IMPLEMENT KNAPSACK PROBLEM USING GREEDY ALGORITHM.
INTRODUCTION TO TOPIC:
The knapsack problem is a problem in combinatorial optimization: Given a set of items, each with a
weight and a value, determine the number of each item to include in a collection so that the total
weight is less than or equal to a given limit and the total value is as large as possible.
ALGORITHM:
Algorithm: Greedy-Fractional-Knapsack (w[1..n], p[1..n], W)
for i = 1 to n
do x[i] = 0
weight = 0
for i = 1 to n
if weight + w[i] ≤ W then
x[i] = 1
weight = weight + w[i]
else
x[i] = (W - weight) / w[i]
weight = W
break
return x
CODE:
# Python3 program to solve fractional
# Knapsack Problem
class ItemValue:
    """Item with weight, value and value/weight ratio."""
    def __init__(self, wt, val):
        self.wt = wt
        self.val = val
        self.cost = val / wt

# Greedy Approach
class FractionalKnapSack:
    @staticmethod
    def getMaxValue(wt, val, capacity):
        iVal = [ItemValue(wt[i], val[i]) for i in range(len(wt))]
        # sort items by value-to-weight ratio, highest first
        iVal.sort(key=lambda item: item.cost, reverse=True)
        totalValue = 0
        for i in iVal:
            curWt = int(i.wt)
            curVal = int(i.val)
            if capacity - curWt >= 0:
                # the whole item fits
                capacity -= curWt
                totalValue += curVal
            else:
                # take only the fraction that fits
                fraction = capacity / curWt
                totalValue += curVal * fraction
                capacity = int(capacity - (curWt * fraction))
                break
        return totalValue

# Driver Code
if __name__ == "__main__":
    wt = [10, 40, 20, 30]
    val = [60, 40, 100, 120]
    capacity = 50
    # Function call
    maxValue = FractionalKnapSack.getMaxValue(wt, val, capacity)
    print("Maximum value in Knapsack =", maxValue)    # 240.0
RESULT:
The above experiment demonstrates the behaviour of the underlying algorithm for different input samples.
EXPERIMENT 8:
IMPLEMENT SUM OF SUBSET PROBLEM USING BACKTRACKING
ALGORITHM.
INTRODUCTION TO TOPIC:
The subset sum problem is to find a subset of elements, selected from a given set, whose sum
adds up to a given number K. We assume the set contains only non-negative values and that the
input set is unique (no duplicates are present).
ALGORITHM:
subsetSum(set, subset, n, subSize, total, node, sum)
Begin
   if total = sum, then
      display the subset
      // go on to find the next subset
      subsetSum(set, subset, n, subSize-1, total-set[node], node+1, sum)
      return
   else
      for all elements i in the set, do
         subset[subSize] := set[i]
         subsetSum(set, subset, n, subSize+1, total+set[i], i+1, sum)
      done
End
CODE:
def SubsetSum(set, n, sum):
    # Base Cases
    if sum == 0:
        return True
    if n == 0 and sum != 0:
        return False
    # ignore the last element if it is > sum
    if set[n - 1] > sum:
        return SubsetSum(set, n - 1, sum)
    # else, we check the sum
    # (1) excluding the last element
    # (2) including the last element
    return SubsetSum(set, n - 1, sum) or SubsetSum(set, n - 1, sum - set[n - 1])

# main
if __name__ == "__main__":
    set = [3, 34, 4, 12, 5, 2]
    n = len(set)
    sum = 9
    print(SubsetSum(set, n, sum))    # True: 4 + 5 = 9
RESULT:
The above experiment demonstrates the behaviour of the underlying algorithm for different input samples.
EXPERIMENT 9:
IMPLEMENT KNAPSACK PROBLEM USING BRANCH AND BOUND ALGORITHM.
INTRODUCTION TO TOPIC:
Branch and bound is an algorithm design paradigm which is generally used for solving combinatorial
optimization problems. These problems are typically exponential in terms of time complexity and
may require exploring all possible permutations in the worst case. Branch and bound solves these
problems relatively quickly.
Let us consider the below 0/1 Knapsack problem to understand Branch and Bound.
Given two integer arrays val[0..n-1] and wt[0..n-1] that represent values and weights associated
with n items respectively. Find out the maximum value subset of val[] such that the sum of the
weights of this subset is smaller than or equal to Knapsack capacity W.
ALGORITHM:
○ If the current item can be inserted into the knapsack, then calculate the lower
and upper bound of the left child of the current node.
○ Update the minLB and insert the children if their upper bound is less than minLB.
CODE:
// C++ program to solve knapsack problem using
// branch and bound
#include <bits/stdc++.h>
using namespace std;
return profit_bound;
}
Node u, v;
return maxProfit;
}
return 0;
}
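The C++ listing above is truncated. As a self-contained sketch of the same branch and bound idea in Python (function names and the driver values are illustrative; the upper bound is the usual fractional-knapsack relaxation):

```python
import heapq

def knapsack_bb(wt, val, W):
    """0/1 knapsack via best-first branch and bound."""
    # Sort items by value/weight ratio so the bound can greedily fill the sack.
    items = sorted(zip(wt, val), key=lambda p: p[1] / p[0], reverse=True)
    n = len(items)

    def bound(level, weight, profit):
        # Fractional-knapsack relaxation: an upper bound on reachable profit.
        b, w = profit, weight
        for i in range(level, n):
            iw, iv = items[i]
            if w + iw <= W:
                w += iw
                b += iv
            else:
                b += (W - w) * iv / iw   # take a fraction of the next item
                break
        return b

    best = 0
    # Max-heap on the bound, simulated by negating it for heapq's min-heap.
    heap = [(-bound(0, 0, 0), 0, 0, 0)]  # (-bound, level, weight, profit)
    while heap:
        neg_b, level, weight, profit = heapq.heappop(heap)
        if -neg_b <= best or level == n:
            continue                     # prune: cannot beat the incumbent
        iw, iv = items[level]
        if weight + iw <= W:             # child 1: include item `level`
            best = max(best, profit + iv)
            heapq.heappush(heap, (-bound(level + 1, weight + iw, profit + iv),
                                  level + 1, weight + iw, profit + iv))
        # child 2: exclude item `level`
        heapq.heappush(heap, (-bound(level + 1, weight, profit),
                              level + 1, weight, profit))
    return best

# Sample instance: best integral profit is 235.
print(knapsack_bb([2, 3.14, 1.98, 5, 3], [40, 50, 100, 95, 30], 10))
```

Nodes whose fractional bound cannot beat the best profit found so far are discarded without being expanded, which is what makes branch and bound faster in practice than enumerating all 2^n subsets.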
RESULT:
The above experiment demonstrates the behaviour of the underlying algorithm for different input samples.