CS112
Discrete Structures 1
LEARNING MATERIALS
(Midterm Period)
DISCLAIMER
This learning material is used in compliance with the flexible teaching-learning approach espoused by CHED in response to the pandemic that has globally affected educational institutions. Authors and publishers of the contents are duly acknowledged. As such, the college and its faculty do not claim ownership of the sourced information. This learning material will be used solely for instructional purposes, not for commercialization.
Page 1 of 46
DISCRETE STRUCTURES 1
TABLE OF CONTENTS
CHAPTER 1
Data Structures and Basic Concepts
Learning Outcomes
• Develop an understanding of the concepts of algorithms
• Perform operations associated with sorting and priority techniques
Key Terms
• Big O notation
• Hash
• Probing
• Complexity
• Vertices
Lessons
Lesson 1: Hash Tables
Hashing is the transformation of a string of characters into a usually shorter fixed-length value or key
that represents the original string. Hashing is used to index and retrieve items in a database because it
is faster to find the item using the shorter hashed key than to find it using the original value.
Hashing is a technique that is used to uniquely identify a specific object from a group of similar objects.
Some examples of how hashing is used in our lives include:
In universities, each student is assigned a unique student number that can be used to retrieve
information about them.
In libraries, each book is assigned a unique number that can be used to determine information about the book, such as its exact position in the library or the users it has been issued to.
In both of these examples, the students and books were hashed to a unique number.
Assume that you have an object and you want to assign a key to it to make searching easy. To store the key/value pair, you can use a simple array-like data structure where keys (integers) can be used directly as an index to store values. However, in cases where the keys are large and cannot be used directly as an index, you should use hashing.
In hashing, large keys are converted into small keys by using hash functions. The values are then stored
in a data structure called hash table. The idea of hashing is to distribute entries (key/value pairs)
uniformly across an array. Each element is assigned a key (converted key). By using that key you can
access the element in O(1) time. Using the key, the algorithm (hash function) computes an index that
suggests where an entry can be found or inserted.
An element is converted into an integer by using a hash function. This integer can be used as an index to store the original element, which falls into the hash table.
The element is stored in the hash table, where it can be quickly retrieved using the hashed key.
hash = hashfunc(key)
index = hash % array_size
In this method, the hash is independent of the array size and it is then reduced to an index (a number
between 0 and array_size − 1) by using the modulo operator (%).
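As an illustrative sketch, the two lines above translate directly into C++. The summing hash function below is an assumption for demonstration, not a function prescribed by the text; any function returning a non-negative integer would do.

```cpp
#include <cassert>
#include <string>

// Illustrative hash function (an assumption): sum of the character codes.
unsigned int hashfunc(const std::string &key) {
    unsigned int hash = 0;
    for (char c : key)
        hash += static_cast<unsigned char>(c);
    return hash;
}

// Reduce the hash to an index between 0 and array_size - 1
// with the modulo operator, exactly as in the two lines above.
unsigned int indexFor(const std::string &key, unsigned int array_size) {
    return hashfunc(key) % array_size;
}
```

Note that the hash itself is independent of the table size; only the final modulo ties it to a particular array.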
Hash tables are a bit more complex. They put elements in different buckets based on hash % some value. In an ideal situation, each bucket holds very few items and there aren't many empty buckets. Once you know the key, you compute the hash and look only in the corresponding bucket.
A hash function is any function that can be used to map a data set of an arbitrary size to a data set of
a fixed size, which falls into the hash table. The values returned by a hash function are called hash
values, hash codes, hash sums, or simply hashes.
To achieve a good hashing mechanism, it is important to have a good hash function with the following basic requirements:
Easy to compute: It should be easy to compute and must not become an algorithm in itself.
Uniform distribution: It should provide a uniform distribution across the hash table and should not result in clustering.
Less collisions: Collisions occur when pairs of elements are mapped to the same hash value. These should be avoided.
Note: Irrespective of how good a hash function is, collisions are bound to occur. Therefore, to maintain
the performance of a hash table, it is important to manage collisions through various collision resolution
techniques.
Let us understand the need for a good hash function. Assume that you have to store strings in the hash
table by using the hashing technique: {"abcdef", "bcdefa", "cdefab", "defabc"}.
To compute the index for storing the strings, use a hash function that states the following:
The index for a specific string will be equal to the sum of the ASCII values of the characters modulo
599.
As 599 is a prime number, it will reduce the possibility of different strings mapping to the same index (collisions). It is recommended that you use prime numbers in case of modulo. The ASCII values of a, b, c, d, e, and f are 97, 98, 99, 100, 101, and 102 respectively. Since all the strings contain the same characters in different permutations, the sum will be 597 for every string, so each string maps to the same index (597 % 599 = 597).
The hash function will compute the same index for all the strings and the strings will be stored in the
hash table in the following format. As the index of all the strings is the same, you can create a list on
that index and insert all the strings in that list.
Here, it will take O(n) time (where n is the number of strings) to access a specific string. This shows
that the hash function is not a good hash function.
Let's try a different hash function: the index for a specific string will be equal to the sum of the ASCII values of its characters multiplied by their respective positions (order) in the string, taken modulo 2069 (a prime number).
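To see why the second function behaves better, both functions can be coded up and compared on the four strings. This is a quick sketch, not code from the original material:

```cpp
#include <cassert>
#include <string>

// First hash function from the text: sum of ASCII values modulo 599.
int hashSum(const std::string &s) {
    int sum = 0;
    for (char c : s)
        sum += c;
    return sum % 599;
}

// Second hash function from the text: sum of ASCII values weighted by
// each character's (1-based) position in the string, modulo 2069.
int hashWeighted(const std::string &s) {
    int sum = 0;
    for (int i = 0; i < (int)s.length(); ++i)
        sum += s[i] * (i + 1);
    return sum % 2069;
}
```

All four permutations collide under the first function (every one hashes to 597), while the second function assigns each of them a distinct index.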
Hash table
A hash table is a data structure that is used to store key/value pairs. It uses a hash function to compute an index into an array in which an element will be inserted or searched. By using a good hash function, hashing can work well: under reasonable assumptions, the average time required to search for an element in a hash table is O(1).
Let us consider a string S. You are required to count the frequency of all the characters in this string.
string S = "ababcd"
The simplest way to do this is to iterate over all the possible characters and count their frequency one
by one. The time complexity of this approach is O(26*N) where N is the size of the string and there are
26 possible characters.
void countFre(string S)
{
    for(char c = 'a'; c <= 'z'; ++c)
    {
        int frequency = 0;
        for(int i = 0; i < S.length(); ++i)
            if(S[i] == c)
                frequency++;
        cout << c << ' ' << frequency << endl;
    }
}
Page 7 of 46
DISCRETE STRUCTURES 1
Output
a 2
b 2
c 1
d 1
e 0
f 0
…
z 0
Let us apply hashing to this problem. Take an array frequency of size 26 and hash the 26 characters
with indices of the array by using the hash function. Then, iterate over the string and increase the value
in the frequency at the corresponding index for each character. The complexity of this approach
is O(N) where N is the size of the string.
int Frequency[26];
int hashFunc(char c)
{
    return (c - 'a');
}
void countFre(string S)
{
    for(int i = 0; i < S.length(); ++i)
    {
        int index = hashFunc(S[i]);
        Frequency[index]++;
    }
    for(int i = 0; i < 26; ++i)
        cout << (char)(i + 'a') << ' ' << Frequency[i] << endl;
}
OUTPUT
a 2
b 2
c 1
d 1
e 0
f 0
…
z 0
Separate chaining is one of the most commonly used collision resolution techniques. It is usually implemented using linked lists. In separate chaining, each slot of the hash table holds a linked list. To store an element in the hash table, you insert it into the linked list at its hashed index. If there is a collision (i.e., two different elements have the same hash value), then both elements are stored in the same linked list.
The cost of a lookup is that of scanning the entries of the selected linked list for the required key. If the
distribution of the keys is sufficiently uniform, then the average cost of a lookup depends only on the
average number of keys per linked list. For this reason, chained hash tables remain effective even when
the number of table entries (N) is much higher than the number of slots.
For separate chaining, the worst-case scenario is when all the entries are inserted into the same linked
list. The lookup procedure may have to scan all its entries, so the worst-case cost is proportional to the
number (N) of entries in the table.
In the following image, CodeMonk and Hashing both hash to the value 2. The linked list at the
index 2 can hold only one entry, therefore, the next entry (in this case Hashing) is linked (attached) to
the entry of CodeMonk.
Assumption
vector<string> hashTable[20];
int hashTableSize = 20;
Insert
void insert(string s)
{
    // Compute the index using the hash function
    int index = hashFunc(s);
    // Insert the element into the linked list at that index
    hashTable[index].push_back(s);
}
Search
void search(string s)
{
    // Compute the index by using the hash function
    int index = hashFunc(s);
    // Search the linked list at that specific index
    for(int i = 0; i < hashTable[index].size(); i++)
    {
        if(hashTable[index][i] == s)
        {
            cout << s << " is found!" << endl;
            return;
        }
    }
    cout << s << " is not found!" << endl;
}
In open addressing, all entry records are stored in the array itself, instead of in linked lists. When a new entry must be inserted, the index of the hashed value is computed and the array is examined, starting at that index. If the slot at the hashed index is unoccupied, the entry record is inserted there; otherwise, the algorithm proceeds along some probe sequence until it finds an unoccupied slot.
The probe sequence is the sequence that is followed while traversing through entries. In
different probe sequences, you can have different intervals between successive entry slots or
probes.
When searching for an entry, the array is scanned in the same sequence until either the target
element is found or an unused slot is found. This indicates that there is no such key in the table.
The name "open addressing" refers to the fact that the location or address of the item is not
determined by its hash value.
Linear probing is when the interval between successive probes is fixed (usually to 1). Let's assume that the hashed index for a particular entry is index. The probing sequence for linear probing will be:
index = index % hashTableSize
index = (index + 1) % hashTableSize
index = (index + 2) % hashTableSize
index = (index + 3) % hashTableSize
and so on…
Hash collision is resolved by open addressing with linear probing. Since CodeMonk and
Hashing are hashed to the same index i.e. 2, store Hashing at 3 as the interval between
successive probes is 1.
Assumption
string hashTable[21];
int hashTableSize = 21;
Insert
void insert(string s)
{
    // Compute the index using the hash function
    int index = hashFunc(s);
    // Search for an unused slot; if the index exceeds hashTableSize, roll back
    while(hashTable[index] != "")
        index = (index + 1) % hashTableSize;
    hashTable[index] = s;
}
Search
void search(string s)
{
    // Compute the index using the hash function
    int index = hashFunc(s);
    // Search for the element; if the index exceeds hashTableSize, roll back
    while(hashTable[index] != s and hashTable[index] != "")
        index = (index + 1) % hashTableSize;
    // Check if the element is present in the hash table
    if(hashTable[index] == s)
        cout << s << " is found!" << endl;
    else
        cout << s << " is not found!" << endl;
}
In quadratic probing, the interval between successive probes increases quadratically. Let us assume that the hashed index for an entry is index and that the slot at index is already occupied. The probe sequence will be as follows:
index = index % hashTableSize
index = (index + 1*1) % hashTableSize
index = (index + 2*2) % hashTableSize
index = (index + 3*3) % hashTableSize
and so on…
Assumption
string hashTable[21];
int hashTableSize = 21;
Insert
void insert(string s)
{
    // Compute the index using the hash function
    int index = hashFunc(s);
    // Search for an unused slot; probe quadratically, rolling back past hashTableSize
    int h = 1;
    while(hashTable[index] != "")
    {
        index = (index + h*h) % hashTableSize;
        h++;
    }
    hashTable[index] = s;
}
Search
void search(string s)
{
    // Compute the index using the hash function
    int index = hashFunc(s);
    // Search for the element; probe quadratically, rolling back past hashTableSize
    int h = 1;
    while(hashTable[index] != s and hashTable[index] != "")
    {
        index = (index + h*h) % hashTableSize;
        h++;
    }
    // Is the element present in the hash table?
    if(hashTable[index] == s)
        cout << s << " is found!" << endl;
    else
        cout << s << " is not found!" << endl;
}
In double hashing, two hash functions are used. Let us say that the hashed index for an entry record is index, computed by one hash function, and that the slot at that index is already occupied. You must start traversing in a specific probing sequence to look for an unoccupied slot. The probing sequence will be:
index = (index + 1 * indexH) % hashTableSize
index = (index + 2 * indexH) % hashTableSize
and so on…
Here, indexH is the hash value that is computed by another hash function.
Assumption
string hashTable[21];
int hashTableSize = 21;
Insert
void insert(string s)
{
    // Compute the index using the first hash function
    int index = hashFunc1(s);
    int indexH = hashFunc2(s);
    // Search for an unused slot; if the index exceeds hashTableSize, roll back
    while(hashTable[index] != "")
        index = (index + indexH) % hashTableSize;
    hashTable[index] = s;
}
Search
void search(string s)
{
    // Compute the index using the first hash function
    int index = hashFunc1(s);
    int indexH = hashFunc2(s);
    // Search for the element; if the index exceeds hashTableSize, roll back
    while(hashTable[index] != s and hashTable[index] != "")
        index = (index + indexH) % hashTableSize;
    // Is the element present in the hash table?
    if(hashTable[index] == s)
        cout << s << " is found!" << endl;
    else
        cout << s << " is not found!" << endl;
}
APPLICATIONS
• Associative arrays: Hash tables are commonly used to implement many types of in-memory
tables. They are used to implement associative arrays (arrays whose indices are arbitrary
strings or other complicated objects).
• Database indexing: Hash tables may also be used as disk-based data structures and
database indices (such as in dbm).
• Caches: Hash tables can be used to implement caches i.e. auxiliary data tables that are used
to speed up the access to data, which is primarily stored in slower media.
• Object representation: Several dynamic languages, such as Perl, Python, JavaScript, and
Ruby use hash tables to implement objects.
• Hash functions: Hash functions are used in various algorithms to make their computation faster.
_________________________________________________________________________________
Lesson 2: Sorting
Sorting algorithms take an input list, process it (i.e., perform some operations on it), and produce the sorted list.
The most common example we experience every day is sorting clothes or other items on an e-commerce website, either from lowest price to highest, by popularity, or in some other order.
Sorting refers to the operation or technique of arranging and rearranging sets of data in some specific order. A collection of records is called a list, where every record has one or more fields. A field that contains a unique value for each record is termed the key field. For example, a phone directory can be thought of as a list where each record has three fields: the 'name' of the person, the 'address' of that person, and their 'phone number'. Being unique, the phone number can work as a key to locate any record in the list.
Sorting is the operation performed to arrange the records of a table or list in some order according to some specific ordering criterion. Sorting is performed according to some key value of each record.
The records are sorted either numerically or alphanumerically. The records are then arranged in ascending or descending order depending on the value of the key. An example is sorting the list of marks obtained by the students in a subject of a class.
• Internal Sorting
• External Sorting
Internal Sorting: If all the data that is to be sorted can fit at one time in the main memory, an internal sorting method is used.
External Sorting: When the data that is to be sorted cannot be accommodated in memory at the same time and some of it has to be kept in auxiliary storage such as a hard disk, floppy disk, or magnetic tape, external sorting methods are used.
The efficiency of a sorting method is judged by considerations such as:
• The length of time spent by the programmer in writing the sorting program
• The amount of machine time necessary for running the program
• The amount of memory necessary for running the program
Various sorting techniques are analyzed under the following cases:
• Best case
• Worst case
• Average case
The result of these analyses is often a formula giving the average time required for a sort of size n. Most sort methods have time requirements that range from O(n log n) to O(n²).
2.2.1 APPLICATION
Before diving into any algorithm, it is necessary to understand its real-world applications. Quick sort provides a fast and methodical approach to sorting any list of things. Following are some of the applications where quick sort is used.
• Commercial computing: Used in various government and private organizations for the purpose of sorting various data, such as sorting accounts/profiles by name or ID, sorting transactions by time or location, and sorting files by name or date of creation.
• Numerical computations: Many efficiently developed algorithms use priority queues, and in turn sorting, to achieve accuracy in their calculations.
• Information search: Sorting algorithms aid in better searching of information, and quick sort is among the fastest ways to achieve that sorting.
Basically, quick sort is used wherever faster results are needed and in cases where there are space constraints.
2.2.2 EXPLANATION
Taking an analogical view, consider a situation where one had to sort papers bearing the names of students, by name from A-Z. One might use the following approach:
1. Select any splitting value, say L. The splitting value is also known as the pivot.
2. Divide the stack of papers into two piles, A-L and M-Z. It is not necessary that the piles be equal.
3. Repeat the above two steps with the A-L pile, splitting it into its two halves, and the M-Z pile, splitting it into its halves. The process is repeated until the piles are small enough to be sorted easily.
4. Ultimately, the smaller piles can be placed one on top of the other to produce a fully sorted and ordered set of papers.
5. The approach used here is reduction at each split to get to single-element piles.
6. At every split, the pile was divided and then the same approach was used for the smaller piles by using the method of recursion.
Technically, quick sort follows these steps:
1. Choose an element from the array as the pivot.
2. Partition the array so that elements smaller than the pivot come before it and elements greater come after it; the pivot is now in its final sorted position.
3. Recursively apply the same steps to the subarrays to the left and right of the pivot.
Consider the following array: 50, 23, 9, 18, 61, 32. We need to sort this array in the most efficient manner without using extra space (in-place sorting).
Solution
Step 1:
• Make an element the pivot: Decide on any value from the list to be the pivot. For convenience of code, we often select the rightmost index as the pivot, or select any element at random and swap it with the rightmost. Suppose we keep two values, low and high, corresponding to the first index and last index respectively.
o In our case, low is 0 and high is 5.
o The values at low and high are 50 and 32, and the value of the pivot is 32.
• Partition the array on the basis of pivot: Call for partitioning which rearranges the array in
such a way that pivot (32) comes to its actual position (of the sorted array). And to the left of
the pivot, the array has all the elements less than it, and to the right greater than it.
o In the partition function, we start from the first element and compare it with the pivot.
Since 50 is greater than 32, we don’t make any change and move on to the next
element 23.
o Compare again with the pivot. Since 23 is less than 32, we swap 50 and 23. The
array becomes 23, 50, 9, 18, 61, 32
o We move on to the next element 9 which is again less than pivot (32) thus swapping
it with 50 makes our array as 23, 9, 50, 18, 61, 32.
o Similarly, for next element 18 which is less than 32, the array becomes 23, 9, 18, 50,
61, 32. Now 61 is greater than pivot (32), hence no changes.
o Lastly, we swap our pivot with 50 so that it comes to the correct position.
Thus the pivot (32) comes at its actual position and all elements to its left are lesser, and all elements
to the right are greater than itself.
Step 2:
• The pivot (32) now splits the array into a left sublist (23, 9, 18) and a right sublist (61, 50).
Step 3:
• Repeat the steps for the left and right sublists recursively. The final array thus becomes 9, 18, 23, 32, 50, 61.
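The walkthrough above (rightmost element as pivot, partition, then recurse on both sides) can be sketched in C++ roughly as follows; the function and variable names are illustrative, not from the text:

```cpp
#include <cassert>
#include <vector>
#include <algorithm> // std::swap

// Partition around the rightmost element, as in the walkthrough:
// everything smaller than the pivot ends up to its left.
int partition(std::vector<int> &a, int low, int high) {
    int pivot = a[high];
    int i = low; // next position for an element smaller than the pivot
    for (int j = low; j < high; ++j) {
        if (a[j] < pivot) {
            std::swap(a[i], a[j]);
            ++i;
        }
    }
    std::swap(a[i], a[high]); // put the pivot in its final position
    return i;
}

void quickSort(std::vector<int> &a, int low, int high) {
    if (low < high) {
        int p = partition(a, low, high);
        quickSort(a, low, p - 1);  // left sublist
        quickSort(a, p + 1, high); // right sublist
    }
}
```

Running a single partition on 50, 23, 9, 18, 61, 32 reproduces the trace above: the array becomes 23, 9, 18, 32, 61, 50 with the pivot 32 in its final place.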
The following diagram depicts the workflow of the Quick Sort algorithm which was described above.
• Best case scenario: The best case occurs when the partitions are as evenly balanced as possible, i.e., their sizes on either side of the pivot element are equal or differ by 1.
1. Case 1: The sizes of the sublists on either side of the pivot become equal when the subarray has an odd number of elements and the pivot ends up right in the middle after partitioning. Each partition will have (n-1)/2 elements.
2. Case 2: The sizes differ by 1 when the subarray has an even number, n, of elements. One partition will have n/2 elements and the other (n/2)-1.
In either of these cases, each partition will have at most n/2 elements, and the recursion tree of the subproblem sizes gives the best-case running time of O(n log n).
• Worst case scenario: This happens when we encounter the most unbalanced partitions possible: the original call takes n time, the recursive call on n-1 elements takes (n-1) time, the recursive call on (n-2) elements takes (n-2) time, and so on. The worst case time complexity of quick sort is therefore O(n²).
The space complexity is calculated based on the space used in the recursion stack. The average case space used is of the order O(log n). The worst-case space complexity becomes O(n) when, to get a sorted list, the algorithm must make n recursive calls.
1. Divide the unsorted list into n sublists, each containing one element (a list of one element is considered sorted).
2. Repeatedly merge sublists to produce newly sorted sublists until there is only 1 sublist remaining. This will be the sorted list.
1. Iteration (1)
2. Iteration (2)
3. Iteration (3)
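A minimal recursive sketch of the split-and-merge procedure described above; the helper names are illustrative, not from the text:

```cpp
#include <cassert>
#include <vector>

// Merge two already-sorted halves a[l..m] and a[m+1..r] into one sorted run.
void merge(std::vector<int> &a, int l, int m, int r) {
    std::vector<int> tmp;
    int i = l, j = m + 1;
    while (i <= m && j <= r)
        tmp.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i <= m) tmp.push_back(a[i++]); // drain the left half
    while (j <= r) tmp.push_back(a[j++]); // drain the right half
    for (int k = l; k <= r; ++k)
        a[k] = tmp[k - l];
}

// Split the list into halves, sort each, then merge them back.
void mergeSort(std::vector<int> &a, int l, int r) {
    if (l >= r) return; // one element: already sorted
    int m = l + (r - l) / 2;
    mergeSort(a, l, m);
    mergeSort(a, m + 1, r);
    merge(a, l, m, r);
}
```

Each level of recursion halves the sublists, so the merging work totals O(n log n).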
1. The first step involves the comparison of the element in question with its adjacent element.
2. If a comparison reveals that the element in question can be inserted at a particular position, then space is created for it by shifting the other elements one position to the right and inserting the element at the suitable position.
3. The above procedure is repeated until all the elements in the array are at their apt positions.
First Iteration: Compare 25 with 17. The comparison shows 17 < 25. Hence, swap 17 and 25.
Second Iteration: Begin with the second element (25), but it was already swapped to the correct position, so we move ahead to the next element.
Now hold on to the third element (31) and compare it with the ones preceding it.
Since 31 is greater than both 25 and 17, no swapping takes place and 31 remains at its position.
The array after the second iteration looks like: 17, 25, 31, 13, 2
Third Iteration: Start the following iteration with the fourth element (13), and compare it with its preceding elements. Since 13 < 31, we swap the two.
But there still exist elements that we haven't yet compared with 13. Now the comparison takes place between 25 and 13. Since 13 < 25, we swap the two.
The last comparison for the iteration is now between 17 and 13. Since 13 < 17, we swap the two.
The array after the third iteration looks like: 13, 17, 25, 31, 2
Fourth Iteration: The last iteration calls for the comparison of the last element (2) with all the preceding elements, making the appropriate swaps between elements. Since 2 is the smallest, it moves to the front.
This gives the final array after all the iterations and swaps: 2, 13, 17, 25, 31.
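The four iterations above can be sketched as a short C++ routine; this is an illustrative implementation, not code from the text:

```cpp
#include <cassert>
#include <vector>

// Insertion sort: take each element in turn and shift the larger
// elements before it one position to the right until it fits.
void insertionSort(std::vector<int> &a) {
    for (int i = 1; i < (int)a.size(); ++i) {
        int key = a[i];
        int j = i - 1;
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j]; // shift right to create space
            --j;
        }
        a[j + 1] = key; // insert at the suitable position
    }
}
```

Running it on the example array 25, 17, 31, 13, 2 performs exactly the iterations traced above.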
APPLICATION
Bubble sort is mainly used for educational purposes, helping students understand the foundations of sorting.
It is also used to identify whether a list is already sorted. When the list is already sorted (which is the best-case scenario), the complexity of bubble sort is only O(n).
In real life, bubble sort can be visualized when people in a queue who want to stand in height-wise sorted order swap positions among themselves until everyone is standing in increasing order of height.
EXPLANATION
Algorithm: We compare adjacent elements and see if their order is wrong (i.e., a[i] > a[j] for 1 <= i < j <= size of array, if the array is to be in ascending order, and vice versa). If yes, then we swap them.
• Let us say we have an array of length n. To sort this array, we do the above step (swapping) for n - 1 passes.
• In simple terms, first the largest element moves to its extreme right place; then the second largest moves to the place before it, and so on. In the ith pass, the ith largest element is moved to its right place in the array by swapping.
• In mathematical terms, in the ith pass, at least one element from the first (n - i + 1) elements will reach its right place: the ith largest element of the array (for 1 <= i <= n - 1). In the ith pass, in the jth iteration (for 1 <= j <= n - i), we check whether a[j] > a[j + 1], and a[j] will always be greater than a[j + 1] when it is the largest element in the range [1, n - i + 1]. We then swap them, and this continues until the ith largest element is at the (n - i + 1)th position of the array.
Consider the following array: Arr = 14, 33, 27, 35, 10. We need to sort this array using the bubble sort algorithm.
FIRST PASS
• We proceed with the first and second element i.e., Arr[0] and Arr[1]. Check if 14 > 33 which is
false. So, no swapping happens, and the array remains the same.
• We proceed with the second and third element i.e., Arr[1] and Arr[2]. Check if 33 > 27 which
is true. So, we swap Arr[1] and Arr[2].
• We proceed with the third and fourth element i.e., Arr[2] and Arr[3]. Check if 33 > 35 which is
false. So, no swapping happens, and the array remains the same.
• We proceed with the fourth and fifth element i.e., Arr[3] and Arr[4]. Check if 35 > 10 which is
true. So, we swap Arr[3] and Arr[4].
This marks the end of the first pass, where the largest element (35) reaches its final (last) position.
SECOND PASS
• We proceed with the first and second elements, i.e., Arr[0] and Arr[1]. Check if 14 > 27, which is false. So, no swapping happens and the array remains the same.
• We now proceed with the second and third elements, i.e., Arr[1] and Arr[2]. Check if 27 > 33, which is false. So, no swapping happens and the array remains the same.
• We now proceed with the third and fourth elements, i.e., Arr[2] and Arr[3]. Check if 33 > 10, which is true. So, we swap Arr[2] and Arr[3].
i-th Pass:
After the ith pass, the ith largest element will be at the ith last position in the array.
n-th Pass:
After the nth pass, the nth largest element (the smallest element) will be at the nth last position (1st position) in the array, where n is the size of the array.
After doing all the passes, we can easily see that the array will be sorted.
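The passes above, together with an early-exit flag for the already-sorted best case, can be sketched as follows (an illustrative implementation, not code from the text):

```cpp
#include <cassert>
#include <vector>
#include <algorithm> // std::swap

// Bubble sort with an early-exit flag: if a full pass makes no swap,
// the array is already sorted and we can stop.
void bubbleSort(std::vector<int> &a) {
    int n = (int)a.size();
    for (int pass = 0; pass < n - 1; ++pass) {
        bool swapped = false;
        for (int j = 0; j < n - 1 - pass; ++j) {
            if (a[j] > a[j + 1]) { // adjacent pair out of order
                std::swap(a[j], a[j + 1]);
                swapped = true;
            }
        }
        if (!swapped) break; // best case: already sorted, O(n)
    }
}
```

Note that the inner loop shrinks by one element per pass, since the largest remaining element is already in place at the end.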
COMPLEXITY ANALYSIS
Best case scenario: The best-case scenario occurs when the array is already sorted. In this case, no
swapping will happen in the first iteration (The swapped variable will be false). So, when this happens,
we break from the loop after the very first iteration. Hence, time complexity in the best-case scenario
is O(n) because it has to traverse through all the elements once.
Worst case and average case scenario: In bubble sort, n-1 comparisons are done in the 1st pass, n-2 in the 2nd pass, n-3 in the 3rd pass, and so on. So, the total number of comparisons will be (n-1) + (n-2) + … + 1 = n(n-1)/2, which is O(n²).
The space complexity of the algorithm is O(1), because only a single additional memory space is required, i.e., for the temporary variable used for swapping.
In selection sort, we repeatedly find the minimum element of the unsorted subarray and swap it into place, growing the sorted subarray one element at a time. We perform the steps given below until the unsorted subarray becomes empty.
HOW IT WORKS
We will swap A[0] and A[6] then, make A[0] part of sorted subarray.
We will swap A[2] and A[4] then, make A[2] part of sorted subarray.
We will swap A[3] and A[5] then, make A[3] part of sorted subarray.
We will swap A[4] and A[7] then, make A[4] part of sorted subarray.
We will swap A[5] and A[6] then, make A[5] part of sorted subarray.
We will swap A[6] and A[7] then, make A[6] part of sorted subarray.
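The trace above swaps the minimum of the unsorted subarray to the front on each pass. A rough sketch of that procedure (names are illustrative, not from the text):

```cpp
#include <cassert>
#include <vector>
#include <algorithm> // std::swap

// Selection sort: on pass i, find the smallest element of the
// unsorted subarray a[i..n-1] and swap it into position i, so
// A[i] becomes part of the sorted subarray.
void selectionSort(std::vector<int> &a) {
    int n = (int)a.size();
    for (int i = 0; i < n - 1; ++i) {
        int minIdx = i;
        for (int j = i + 1; j < n; ++j)
            if (a[j] < a[minIdx])
                minIdx = j;
        std::swap(a[i], a[minIdx]); // A[i] joins the sorted subarray
    }
}
```

Selection sort always performs O(n²) comparisons but at most n - 1 swaps, which is why it is sometimes preferred when writes are expensive.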
Lesson 3: Priority Queues
A priority queue is a queue in which each element is served according to its priority. Generally, the value of the element itself is considered for assigning the priority.
For example, the element with the highest value is considered the highest-priority element. However, in other cases, we can assume the element with the lowest value to be the highest-priority element. In still other cases, we can set priorities according to our needs.
Page 33 of 46
DISCRETE STRUCTURES 1
There are two kinds of priority queues: a max-priority queue and a min-priority queue. In both kinds, the
priority queue stores a collection of elements and is always able to provide the most “extreme” element,
which is the only way to interact with the priority queue. For the remainder of this section, we will discuss
max-priority queues. Min-priority queues are analogous.
Hence, we will be using the heap data structure to implement the priority queue in this tutorial. A max-heap is used to implement the following operations. If you want to learn more about it, please visit max-heap and min-heap.
If there is no node,
create a newNode.
else (a node is already present)
insert the newNode at the end (last node from left to right.)
heapify the array
For Min Heap, the above algorithm is modified so that parentNode is always smaller than newNode.
remove nodeToBeDeleted
For Min Heap, the above algorithm is modified so that both childNodes are smaller than currentNode.
return rootNode
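The insert and delete operations sketched in the pseudocode above can be fleshed out on a vector-backed max-heap. This is an illustrative implementation; the function names are assumptions, not from the text:

```cpp
#include <cassert>
#include <vector>
#include <algorithm> // std::swap

// Max-heap stored in a vector: children of node i are 2i+1 and 2i+2.
// heapInsert() appends the newNode at the end and sifts it up.
void heapInsert(std::vector<int> &heap, int newNode) {
    heap.push_back(newNode); // insert at the end (last node, left to right)
    int i = (int)heap.size() - 1;
    while (i > 0 && heap[(i - 1) / 2] < heap[i]) { // parent smaller?
        std::swap(heap[(i - 1) / 2], heap[i]);     // sift up
        i = (i - 1) / 2;
    }
}

// extractMax() returns the rootNode and heapifies what remains.
// Assumes the heap is non-empty.
int extractMax(std::vector<int> &heap) {
    int root = heap[0];
    heap[0] = heap.back(); // move the last node to the root
    heap.pop_back();
    int i = 0, n = (int)heap.size();
    while (true) { // sift down: keep the larger child above
        int largest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && heap[l] > heap[largest]) largest = l;
        if (r < n && heap[r] > heap[largest]) largest = r;
        if (largest == i) break;
        std::swap(heap[i], heap[largest]);
        i = largest;
    }
    return root;
}
```

Repeated calls to extractMax return elements in decreasing order, which is exactly the max-priority-queue behaviour described above.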
A priority queue (heap) is used in:
• Dijkstra's algorithm
• implementing a stack
• load balancing and interrupt handling in an operating system
• data compression in Huffman coding
Lesson 4: Graphs
A graph consists of –
• Vertices − Interconnected objects in a graph are called vertices. Vertices are also known as
nodes.
• Edges − Edges are the links that connect the vertices.
There are two types of graphs –
• Directed graph − In a directed graph, edges have direction, i.e., edges go from one vertex to
another.
• Undirected graph − In an undirected graph, edges have no direction.
Page 37 of 46
DISCRETE STRUCTURES 1
• Vertex coloring − A way of coloring the vertices of a graph so that no two adjacent vertices
share the same color.
• Edge Coloring − It is the method of assigning a color to each edge so that no two adjacent
edges have the same color.
• Face coloring − It assigns a color to each face or region of a planar graph so that no two
faces that share a common boundary have the same color.
The concept of graph coloring is applied in preparing timetables, mobile radio frequency assignment, Sudoku, register allocation, and the coloring of maps.
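One simple way to produce a vertex coloring in which no two adjacent vertices share a color is the greedy approach sketched below. This is an illustrative sketch (it does not promise the minimum number of colors), and the representation is an assumption: vertices 0..n-1 with adjacency lists.

```cpp
#include <cassert>
#include <vector>

// Greedy vertex coloring: visit vertices in order and give each one
// the smallest color not already used by a colored neighbour.
std::vector<int> greedyColoring(const std::vector<std::vector<int>> &adj) {
    int n = (int)adj.size();
    std::vector<int> color(n, -1); // -1 marks "uncolored"
    for (int v = 0; v < n; ++v) {
        std::vector<bool> used(n, false);
        for (int u : adj[v])            // colors taken by neighbours
            if (color[u] != -1)
                used[color[u]] = true;
        int c = 0;
        while (used[c]) ++c;            // smallest free color
        color[v] = c;
    }
    return color;
}
```

On a triangle this uses three colors; on a path, only two: adjacent vertices always end up with distinct colors.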
Some possible spanning trees of the above graph are shown below –
Among all the above spanning trees, figure (d) is the minimum spanning tree. The concept of the minimum cost spanning tree is applied in the travelling salesman problem, designing electronic circuits, designing efficient networks, and designing efficient routing algorithms.
To implement the minimum cost-spanning tree, the following two methods are used –
• Prim’s Algorithm
• Kruskal’s Algorithm
Kruskal’s algorithm is a greedy algorithm, which helps us find the minimum spanning tree for a
connected weighted graph, adding increasing cost arcs at each step. It is a minimum-spanning-tree
algorithm that finds an edge of the least possible weight that connects any two trees in the forest.
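A compact sketch of Kruskal's algorithm, using a simple union-find structure to detect whether an edge would connect two vertices already in the same tree. The names and graph representation are illustrative assumptions:

```cpp
#include <cassert>
#include <vector>
#include <algorithm> // std::sort
#include <numeric>   // std::iota

// Edge of a weighted undirected graph.
struct Edge { int u, v, w; };

// Union-find: follow parent links to the root of a vertex's tree.
int findRoot(std::vector<int> &parent, int x) {
    while (parent[x] != x) x = parent[x];
    return x;
}

// Kruskal's algorithm: sort edges by increasing cost and add each
// edge that connects two different trees of the forest; return the
// total weight of the minimum spanning tree.
int kruskal(int n, std::vector<Edge> edges) {
    std::sort(edges.begin(), edges.end(),
              [](const Edge &a, const Edge &b) { return a.w < b.w; });
    std::vector<int> parent(n);
    std::iota(parent.begin(), parent.end(), 0); // each vertex is its own tree
    int total = 0;
    for (const Edge &e : edges) {
        int ru = findRoot(parent, e.u), rv = findRoot(parent, e.v);
        if (ru != rv) {        // different trees: adding e makes no cycle
            parent[ru] = rv;   // merge the two trees
            total += e.w;
        }
    }
    return total;
}
```

Sorting dominates the running time, giving O(E log E) for E edges.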
• Label the source vertex, S, with 0 and set i = 0.
• Find all unlabeled vertices adjacent to a vertex labeled i. If no such vertices exist and the destination, D, is still unlabeled, then D is not connected to S. Otherwise, label them i + 1.
• If D is labeled, then go to step 4; else increase i to i + 1 and go to step 2.
• Stop: the label of D is the length of the shortest path.
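The labeling procedure above is breadth-first search. A rough sketch, assuming an adjacency-list representation with vertices numbered 0..n-1 (the names are illustrative):

```cpp
#include <cassert>
#include <vector>
#include <queue>

// Breadth-first search implementing the labeling procedure above:
// label the source 0, its unlabeled neighbours 1, theirs 2, and so on,
// until the destination D receives a label, which is its shortest path
// length in edges. Returns -1 if D is not connected to S.
int shortestPath(const std::vector<std::vector<int>> &adj, int S, int D) {
    std::vector<int> label(adj.size(), -1); // -1 marks "unlabeled"
    std::queue<int> q;
    label[S] = 0;
    q.push(S);
    while (!q.empty()) {
        int v = q.front(); q.pop();
        if (v == D) return label[v]; // stop once D is labeled
        for (int next : adj[v]) {
            if (label[next] == -1) { // unlabeled neighbour gets label i + 1
                label[next] = label[v] + 1;
                q.push(next);
            }
        }
    }
    return -1; // D is not connected to S
}
```

Because vertices are visited in order of increasing label, the first label assigned to D is guaranteed to be the length of a shortest path.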
Basics of Hash Tables Tutorials & Notes | Data Structures | HackerEarth. Retrieved December 6, 2020, from https://www.hackerearth.com/practice/data-structures/hash-tables/basics-of-hash-tables/tutorial/
Lesson 2
Lesson 3
Lesson 4
Vijini Mallawaarachchi. (2020, August 27). 10 Graph Algorithms Visually Explained. Towards Data Science. https://towardsdatascience.com/10-graph-algorithms-visually-explained-e57faa1336f3
SAQs
Lesson 1
• Why are hash tables fast?
• What are hashing and a hash table?
• What is the purpose of hashing?
Lesson 2
• Why do we use sorting techniques?
• How many categories of sorting are there? Briefly explain each.
Lesson 3
• What are the two kinds of priority queues?
• Give 2 algorithms in which a priority queue can be utilized.
Lesson 4
References
Basics of Hash Tables Tutorials & Notes | Data Structures | HackerEarth. Retrieved December 6, 2020, from https://www.hackerearth.com/practice/data-structures/hash-tables/basics-of-hash-tables/tutorial/