
CS112
Discrete Structures 1
LEARNING MATERIALS
(Midterm Period)

DISCLAIMER

This learning material is used in compliance with the flexible teaching-learning approach espoused by
CHED in response to the pandemic that has globally affected educational institutions. Authors and
publishers of the contents are well acknowledged. As such, the college and its faculty do not claim
ownership of all sourced information. This learning material will be used solely for instructional
purposes and not for commercialization.

CatSU – College of Information and Communications Technology


TABLE OF CONTENTS

CHAPTER 1: Data Structures and Basic Concepts
    Learning Outcomes
    Key Terms
    Lessons
    Lesson 1: Hash Tables
        1.1 Basics of Hash Tables
            1.1.1 Hash Function
            1.1.2 Need for a Good Hash Function
        1.2 Collision Resolution Techniques
            1.2.1 Separate Chaining (Open Hashing)
            1.2.2 Linear Probing (Open Addressing or Closed Hashing)
            1.2.3 Quadratic Probing
            1.2.4 Double Hashing
    Lesson 2: Sorting Techniques
        2.1 Sorting Algorithms
            2.1.1 Categories of Sorting
            2.1.2 Types of Sorting Algorithms
            2.1.3 Time Complexities of Sorting Algorithms
            2.1.4 Efficiency of Sorting Techniques
        2.2 Quick Sort Algorithm
            2.2.1 Application
            2.2.2 Explanation
            2.2.3 Quick Sort Example
            2.2.4 Complexity Analysis
        2.3 Merge Sort Algorithm
            2.3.1 Top-Down Merge Sort Implementation
            2.3.2 Merging of Two Lists
            2.3.3 Bottom-Up Merge Sort Implementation
        2.4 Insertion Sort Algorithm
            2.4.1 Time Complexity Analysis
        2.5 Bubble Sort Algorithm
        2.6 Selection Sort
    Lesson 3: Priority Queues
        3.1 Difference Between Priority Queue and Normal Queue
        3.2 Implementation of Priority Queue
        3.3 Priority Queue Operations
            3.3.1 Insert an Element into the Priority Queue
            3.3.2 Deleting an Element from the Priority Queue
            3.3.3 Peeking from the Priority Queue (Find Max/Min)
            3.3.4 Extract-Max/Min from the Priority Queue
            3.3.5 Priority Queue Applications
    Lesson 4: Graph Algorithms
        4.1 What Is a Graph?
        4.2 Graph Coloring
        4.3 Chromatic Number
            4.3.1 Steps for Graph Coloring
            4.3.2 Pseudocode for Graph Coloring
        4.4 Minimal Spanning Tree
        4.5 Prim's Algorithm
            4.5.1 Steps of Prim's Algorithm
        4.6 Kruskal's Algorithm
            4.6.1 Steps of Kruskal's Algorithm
        4.7 Shortest Path Algorithm
            4.7.1 Moore's Algorithm
    Supplementary Learning Materials
    SAQs
    References


CHAPTER 1
Data Structures and Basic Concepts

Learning Outcomes
• Develop an understanding of the concepts of algorithms
• Perform operations associated with sorting and priority queue techniques.

Key Terms
• Big O notation
• Hash
• Probing
• Complexity
• Vertices

Lessons
Lesson 1: Hash Tables

1.1 Basics of Hash Tables

Hashing is the transformation of a string of characters into a usually shorter fixed-length value or key
that represents the original string. Hashing is used to index and retrieve items in a database because it
is faster to find the item using the shorter hashed key than to find it using the original value.

Hashing is a technique that is used to uniquely identify a specific object from a group of similar objects.
Some examples of how hashing is used in our lives include:

In universities, each student is assigned a unique student number that can be used to retrieve
information about them.

In libraries, each book is assigned a unique number that can be used to determine information about
the book, such as its exact position in the library or the users it has been issued to.

In both these examples the students and books were hashed to a unique number.

Assume that you have an object and you want to assign a key to it to make searching easy. To store
the key/value pair, you can use a simple array-like data structure where keys (integers) can be used
directly as an index to store values. However, in cases where the keys are large and cannot be used
directly as an index, you should use hashing.


In hashing, large keys are converted into small keys by using hash functions. The values are then stored
in a data structure called a hash table. The idea of hashing is to distribute entries (key/value pairs)
uniformly across an array. Each element is assigned a key (converted key). By using that key you can
access the element in O(1) time. Using the key, the algorithm (hash function) computes an index that
suggests where an entry can be found or inserted.

Hashing is implemented in two steps:

An element is converted into an integer by using a hash function. This integer can be used as an index
to store the original element, which falls into the hash table.

The element is stored in the hash table where it can be quickly retrieved using the hashed key.

hash = hashfunc(key)
index = hash % array_size

In this method, the hash is independent of the array size and it is then reduced to an index (a number
between 0 and array_size − 1) by using the modulo operator (%).

Hash tables are a bit more complex. They put elements in different buckets based on their hash %
some value. In an ideal situation, each bucket holds very few items and there aren't many empty
buckets. Once you know the key, you compute its hash, reduce it to a bucket index as above, and look
in that bucket.
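As a minimal C++ sketch of this two-step scheme (the multiplicative hash function and the table size
below are illustrative assumptions, not part of the original text):

#include <iostream>

const unsigned int arraySize = 10;     // hypothetical table size

// Toy hash function for integer keys (Knuth's multiplicative constant).
unsigned int hashfunc(unsigned int key) {
    return key * 2654435761u;          // wraps modulo 2^32 by unsigned arithmetic
}

int main() {
    unsigned int key = 42;
    unsigned int hash = hashfunc(key);       // step 1: hash the key
    unsigned int index = hash % arraySize;   // step 2: reduce to an index
    std::cout << "key " << key << " maps to bucket " << index << std::endl;
    return 0;
}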

1.1.1 Hash Function

A hash function is any function that can be used to map a data set of an arbitrary size to a data set of
a fixed size, which falls into the hash table. The values returned by a hash function are called hash
values, hash codes, hash sums, or simply hashes.

To achieve a good hashing mechanism, it is important to have a good hash function with the following
basic requirements:

Easy to compute: It should be easy to compute and must not become an algorithm in itself.

Uniform distribution: It should provide a uniform distribution across the hash table and should not result
in clustering.

Fewer collisions: Collisions occur when pairs of elements are mapped to the same hash value. These
should be avoided.

Note: Irrespective of how good a hash function is, collisions are bound to occur. Therefore, to maintain
the performance of a hash table, it is important to manage collisions through various collision resolution
techniques.

1.1.2 Need for a Good Hash Function

Let us understand the need for a good hash function. Assume that you have to store the strings
{"abcdef", "bcdefa", "cdefab", "defabc"} in the hash table by using the hashing technique.

To compute the index for storing the strings, use a hash function that states the following:

The index for a specific string will be equal to the sum of the ASCII values of the characters modulo
599.


As 599 is a prime number, it reduces the chance of different strings mapping to the same index
(collisions). It is recommended that you use a prime number as the modulus. The ASCII values of a, b,
c, d, e, and f are 97, 98, 99, 100, 101, and 102 respectively. Since all the strings contain the same
characters in different permutations, each has the same sum, 97 + 98 + 99 + 100 + 101 + 102 = 597,
and therefore the same index, 597 % 599 = 597.

The hash function will compute the same index for all the strings and the strings will be stored in the
hash table in the following format. As the index of all the strings is the same, you can create a list on
that index and insert all the strings in that list.

Here, it will take O(n) time (where n is the number of strings) to access a specific string. This shows
that the hash function is not a good hash function.

Let’s try a different hash function. The index for a specific string will be equal to the sum of the ASCII
values of its characters, each multiplied by its 1-based position in the string, taken modulo 2069 (a
prime number).

String    Hash function                                                Index

abcdef    (97*1 + 98*2 + 99*3 + 100*4 + 101*5 + 102*6) % 2069          38
bcdefa    (98*1 + 99*2 + 100*3 + 101*4 + 102*5 + 97*6) % 2069          23
cdefab    (99*1 + 100*2 + 101*3 + 102*4 + 97*5 + 98*6) % 2069          14
defabc    (100*1 + 101*2 + 102*3 + 97*4 + 98*5 + 99*6) % 2069          11
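A sketch of this position-weighted hash in C++ (the function name hashFunc is the one used in the
listings later in this lesson):

#include <string>

// Sum of ASCII values, each weighted by its 1-based position, modulo 2069.
int hashFunc(const std::string& s) {
    long long sum = 0;
    for (size_t i = 0; i < s.length(); ++i)
        sum += (long long)s[i] * (i + 1);   // ASCII value times 1-based position
    return (int)(sum % 2069);               // 2069 is the prime modulus
}

For example, hashFunc("abcdef") evaluates to 38, matching the first row of the table above.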


Hash table

A hash table is a data structure that is used to store key/value pairs. It uses a hash function
to compute an index into an array in which an element will be inserted or searched. By using a good
hash function, hashing can work well. Under reasonable assumptions, the average time required to
search for an element in a hash table is O(1).

Let us consider a string S. You are required to count the frequency of all the characters in this string.

string S = “ababcd”

The simplest way to do this is to iterate over all the possible characters and count their frequency one
by one. The time complexity of this approach is O(26*N) where N is the size of the string and there are
26 possible characters.

#include <iostream>
#include <string>
using namespace std;

void countFre(string S)
{
    // Check every character from 'a' to 'z' against the whole string
    for(char c = 'a'; c <= 'z'; ++c)
    {
        int frequency = 0;
        for(int i = 0; i < (int)S.length(); ++i)
            if(S[i] == c)
                frequency++;
        cout << c << ' ' << frequency << endl;
    }
}


Output

a 2
b 2
c 1
d 1
e 0
f 0
...
z 0

Let us apply hashing to this problem. Take an array Frequency of size 26 and hash the 26 characters
to the indices of the array by using the hash function. Then, iterate over the string and increase the
value in Frequency at the corresponding index for each character. The complexity of this approach
is O(N), where N is the size of the string.

int Frequency[26];

int hashFunc(char c)
{
    return (c - 'a');   // maps 'a'..'z' to indices 0..25
}

void countFre(string S)
{
    for(int i = 0; i < (int)S.length(); ++i)
    {
        int index = hashFunc(S[i]);
        Frequency[index]++;
    }
    for(int i = 0; i < 26; ++i)
        cout << (char)(i + 'a') << ' ' << Frequency[i] << endl;
}

OUTPUT

a 2
b 2
c 1
d 1
e 0
f 0
...
z 0


1.2 Collision Resolution Techniques

1.2.1 Separate Chaining (Open Hashing)

Separate chaining is one of the most commonly used collision resolution techniques. It is usually
implemented using linked lists. In separate chaining, each slot of the hash table holds a linked list. To
store an element in the hash table, you insert it into the linked list at its hashed index. If there is a
collision (i.e., two different elements have the same hash value), both elements are stored in the same
linked list.


The cost of a lookup is that of scanning the entries of the selected linked list for the required key. If the
distribution of the keys is sufficiently uniform, then the average cost of a lookup depends only on the
average number of keys per linked list. For this reason, chained hash tables remain effective even when
the number of table entries (N) is much higher than the number of slots.

For separate chaining, the worst-case scenario is when all the entries are inserted into the same linked
list. The lookup procedure may have to scan all its entries, so the worst-case cost is proportional to the
number (N) of entries in the table.

In the following image, CodeMonk and Hashing both hash to the value 2. The linked list at the
index 2 can hold only one entry, therefore, the next entry (in this case Hashing) is linked (attached) to
the entry of CodeMonk.

Implementation of hash tables with separate chaining (open hashing)

Assumption

Hash function will return an integer from 0 to 19.

vector<string> hashTable[20];
int hashTableSize = 20;

Insert

void insert(string s)
{
    // Compute the index using the hash function
    int index = hashFunc(s);
    // Insert the element in the linked list at that index
    hashTable[index].push_back(s);
}

Search

void search(string s)
{
    // Compute the index by using the hash function
    int index = hashFunc(s);
    // Search the linked list at that specific index
    for(int i = 0; i < (int)hashTable[index].size(); i++)
    {
        if(hashTable[index][i] == s)
        {
            cout << s << " is found!" << endl;
            return;
        }
    }
    cout << s << " is not found!" << endl;
}

1.2.2 Linear Probing (Open Addressing or Closed Hashing)

In open addressing, all entry records are stored in the array itself, instead of in linked lists. When
a new entry must be inserted, the index of its hashed value is computed and the array is examined,
starting at that index. If the slot at the hashed index is unoccupied, the entry record is inserted in the
slot at the hashed index; otherwise the search proceeds in some probe sequence until it finds an
unoccupied slot.

The probe sequence is the sequence that is followed while traversing through entries. In
different probe sequences, you can have different intervals between successive entry slots or
probes.

When searching for an entry, the array is scanned in the same sequence until either the target
element is found or an unused slot is found. This indicates that there is no such key in the table.
The name "open addressing" refers to the fact that the location or address of the item is not
determined by its hash value.

Linear probing is when the interval between successive probes is fixed (usually to 1). Let’s
assume that the hashed index for a particular entry is index. The probing sequence for linear
probing will be:

index = index % hashTableSize
index = (index + 1) % hashTableSize
index = (index + 2) % hashTableSize
index = (index + 3) % hashTableSize

and so on…


Hash collision is resolved by open addressing with linear probing. Since CodeMonk and
Hashing are hashed to the same index i.e. 2, store Hashing at 3 as the interval between
successive probes is 1.

IMPLEMENTATION OF HASH TABLE WITH LINEAR PROBING

Assumption

• There are no more than 20 elements in the data set.
• Hash function will return an integer from 0 to 19.
• Data set must have unique elements.

string hashTable[21];
int hashTableSize = 21;

Insert

void insert(string s)
{
    // Compute the index using the hash function
    int index = hashFunc(s);
    // Probe linearly for an unused slot; the modulo wraps the index past the end
    while(hashTable[index] != "")
        index = (index + 1) % hashTableSize;
    hashTable[index] = s;
}

Search


void search(string s)
{
    // Compute the index using the hash function
    int index = hashFunc(s);
    // Probe linearly until the element or an empty slot is found
    while(hashTable[index] != s && hashTable[index] != "")
        index = (index + 1) % hashTableSize;
    // Check if the element is present in the hash table
    if(hashTable[index] == s)
        cout << s << " is found!" << endl;
    else
        cout << s << " is not found!" << endl;
}

1.2.3 Quadratic Probing


Quadratic probing is like linear probing and the only difference is the interval between
successive probes or entry slots. Here, when the slot at a hashed index for an entry record is already
occupied, you must start traversing until you find an unoccupied slot. The interval between slots is
computed by adding the successive value of an arbitrary polynomial in the original hashed index.

Let us assume that the hashed index for an entry is index and at index there is an occupied slot. The
probe sequence will be as follows:

index = index % hashTableSize
index = (index + 1²) % hashTableSize
index = (index + 2²) % hashTableSize
index = (index + 3²) % hashTableSize

and so on…

IMPLEMENTATION OF HASH TABLE WITH QUADRATIC PROBING

Assumption

• There are no more than 20 elements in the data set.
• Hash function will return an integer from 0 to 19.
• Data set must have unique elements.

string hashTable[21];
int hashTableSize = 21;

Insert

void insert(string s)
{
    // Compute the index using the hash function
    int index = hashFunc(s);
    // Probe quadratically for an unused slot; the modulo wraps the index
    int h = 1;
    while(hashTable[index] != "")
    {
        index = (index + h*h) % hashTableSize;
        h++;
    }
    hashTable[index] = s;
}

Search

void search(string s)
{
    // Compute the index using the hash function
    int index = hashFunc(s);
    // Probe quadratically until the element or an empty slot is found
    int h = 1;
    while(hashTable[index] != s && hashTable[index] != "")
    {
        index = (index + h*h) % hashTableSize;
        h++;
    }
    // Is the element present in the hash table?
    if(hashTable[index] == s)
        cout << s << " is found!" << endl;
    else
        cout << s << " is not found!" << endl;
}

1.2.4 Double Hashing


Double hashing is similar to linear probing and the only difference is the interval between successive
probes. Here, the interval between probes is computed by using two hash functions.

Let us say that the hashed index for an entry record is an index that is computed by one hashing
function and the slot at that index is already occupied. You must start traversing in a specific probing
sequence to look for an unoccupied slot. The probing sequence will be:

index = (index + 1 * indexH) % hashTableSize
index = (index + 2 * indexH) % hashTableSize

and so on…

Here, indexH is the hash value that is computed by another hash function.


IMPLEMENTATION OF HASH TABLE WITH DOUBLE HASHING

Assumption

• There are no more than 20 elements in the data set.
• Hash functions will return an integer from 0 to 19.
• Data set must have unique elements.

string hashTable[21];
int hashTableSize = 21;

Insert

void insert(string s)
{
    // Compute the index using the first hash function
    int index = hashFunc1(s);
    // Compute the probe interval using the second hash function
    int indexH = hashFunc2(s);
    // Probe for an unused slot, stepping by indexH; the modulo wraps the index
    while(hashTable[index] != "")
        index = (index + indexH) % hashTableSize;
    hashTable[index] = s;
}

Search

void search(string s)
{
    // Compute the index using the first hash function
    int index = hashFunc1(s);
    // Compute the probe interval using the second hash function
    int indexH = hashFunc2(s);
    // Probe until the element or an empty slot is found
    while(hashTable[index] != s && hashTable[index] != "")
        index = (index + indexH) % hashTableSize;
    // Is the element present in the hash table?
    if(hashTable[index] == s)
        cout << s << " is found!" << endl;
    else
        cout << s << " is not found!" << endl;
}


APPLICATIONS

• Associative arrays: Hash tables are commonly used to implement many types of in-memory
tables. They are used to implement associative arrays (arrays whose indices are arbitrary
strings or other complicated objects).
• Database indexing: Hash tables may also be used as disk-based data structures and
database indices (such as in dbm).
• Caches: Hash tables can be used to implement caches i.e. auxiliary data tables that are used
to speed up the access to data, which is primarily stored in slower media.
• Object representation: Several dynamic languages, such as Perl, Python, JavaScript, and
Ruby, use hash tables to implement objects.
• Hash functions are used in various algorithms to make their computation faster.

_________________________________________________________________________________

Lesson 2: Sorting Techniques

2.1 SORTING ALGORITHMS


Sorting algorithms are methods of reorganizing a large number of items into some specific order, such
as highest to lowest, or vice versa, or even in some alphabetical order.

These algorithms take an input list, process it (i.e., perform some operations on it), and produce the
sorted list.

The most common example we experience every day is sorting items on an e-commerce website, either
from lowest price to highest, by popularity, or in some other order.

Sorting refers to the operation or technique of arranging and rearranging sets of data in some
specific order. A collection of records is called a list, where every record has one or more fields. A field
that contains a unique value for each record is termed the key field. For example, a phone number
directory can be thought of as a list where each record has three fields: the 'name' of the person, the
'address' of that person, and their 'phone number'. Being unique, the phone number can work as a key
to locate any record in the list.

Sorting is the operation performed to arrange the records of a table or list in some order
according to some specific ordering criterion. Sorting is performed according to some key value of each
record.

The records are sorted either numerically or alphanumerically. The records are then arranged in
ascending or descending order depending on the numerical value of the key. An example is sorting the
list of marks obtained by students in a class.

2.1.1 CATEGORIES OF SORTING


The techniques of sorting can be divided into two categories. These are:

• Internal Sorting
• External Sorting


Internal Sorting: If all the data to be sorted can be accommodated in main memory at one time, an
internal sorting method is used.

External Sorting: When the data to be sorted cannot be accommodated in memory at the same time
and some of it has to be kept in auxiliary storage such as a hard disk, floppy disk, or magnetic tape,
external sorting methods are used.

2.1.2 TYPES OF SORTING ALGORITHMS


• Quick Sort
• Bubble Sort
• Merge Sort
• Insertion Sort
• Selection Sort

2.1.3 TIME COMPLEXITIES OF SORTING ALGORITHMS


The complexity of a sorting algorithm describes its running time as a function of 'n', the number of
items to be sorted. Which sorting method is suitable for a problem depends on several considerations.
The most noteworthy of these are:

• The length of time spent by the programmer in programming a specific sorting program
• Amount of machine time necessary for running the program
• The amount of memory necessary for running the program

2.1.4 EFFICIENCY OF SORTING TECHNIQUES


To get the amount of time required to sort an array of 'n' elements by a method, the normal approach
is to analyze the method to find the number of comparisons (or exchanges) it requires. Most sorting
techniques are data sensitive, so their performance depends on the order in which the elements
appear in the input array.

Various sorting techniques are analyzed in various cases and named these cases as follows:

• Best case
• Worst case
• Average case
Hence, the result of these cases is often a formula giving the average time required for a sort of size
'n'. Most of the sort methods have time requirements that range from O(n log n) to O(n²).

Algorithm        Best          Average       Worst

Quick Sort       Ω(n log n)    Θ(n log n)    O(n²)
Bubble Sort      Ω(n)          Θ(n²)         O(n²)
Merge Sort       Ω(n log n)    Θ(n log n)    O(n log n)
Insertion Sort   Ω(n)          Θ(n²)         O(n²)
Selection Sort   Ω(n²)         Θ(n²)         O(n²)
Heap Sort        Ω(n log n)    Θ(n log n)    O(n log n)
Radix Sort       Ω(nk)         Θ(nk)         O(nk)
Bucket Sort      Ω(n+k)        Θ(n+k)        O(n²)


2.2 QUICK SORT ALGORITHM


The algorithm was developed by British computer scientist Tony Hoare in 1959. The name
"Quick Sort" comes from the fact that quick sort can sort a list of data elements significantly faster
(twice or thrice as fast) than the common simple sorting algorithms. It is one of the most efficient
sorting algorithms and is based on splitting an array into smaller ones (partitioning) and swapping
(exchanging) elements based on comparison with a selected 'pivot' element. Due to this, quick sort is
also called "partition-exchange" sort. Like merge sort, quick sort falls into the divide-and-conquer
category of problem-solving methodology.

2.2.1 APPLICATION
Before diving into any algorithm, it is helpful to understand its real-world applications. Quick sort
provides a fast and methodical approach to sorting any list of things. The following are some of the
applications where quick sort is used.

• Commercial computing: Used in various government and private organizations for sorting
various data, such as sorting accounts/profiles by name or a given ID, sorting transactions by
time or location, and sorting files by name or date of creation.
• Numerical computations: Many efficiently developed algorithms use priority queues, and in
turn sorting, to achieve accuracy in their calculations.
• Information search: Sorting algorithms aid in searching information faster, and quick sort is
among the fastest ways to sort.

Basically, quick sort is used everywhere that fast results are needed and in cases where there are
space constraints.

2.2.2 EXPLANATION
Taking an analogical view, consider a situation where one had to sort papers bearing the names of
students, by name from A-Z. One might use the following approach:

1. Select any splitting value, say L. The splitting value is also known as the pivot.
2. Divide the stack of papers into two piles, A-L and M-Z. It is not necessary that the piles be
equal.
3. Repeat the above two steps with the A-L pile, splitting it into its two halves, and with the M-Z
pile, splitting it into its halves. The process is repeated until the piles are small enough to be
sorted easily.
4. Ultimately, the smaller piles can be placed one on top of the other to produce a fully sorted
and ordered set of papers.
5. The approach used here is reduction at each split until single-element piles are reached.
6. At every split, the pile is divided and the same approach is applied to the smaller piles by the
method of recursion.
Technically, quick sort follows the steps below:
Technically, quick sort follows the below steps:

Step 1 − Make any element as pivot


Step 2 − Partition the array on the basis of pivot
Step 3 − Apply quick sort on left partition recursively
Step 4 − Apply quick sort on right partition recursively
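Below is a minimal C++ sketch of these four steps, assuming a Lomuto-style partition that takes the
rightmost element as pivot (as in the worked example that follows); the function names are illustrative:

#include <algorithm>  // std::swap

// Partition around the rightmost element; return the pivot's final index.
int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = low - 1;                      // boundary of the "less than pivot" region
    for (int j = low; j < high; ++j)
        if (arr[j] < pivot)
            std::swap(arr[++i], arr[j]);
    std::swap(arr[i + 1], arr[high]);     // place the pivot at its final position
    return i + 1;
}

void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int p = partition(arr, low, high);  // Steps 1 and 2: pivot and partition
        quickSort(arr, low, p - 1);         // Step 3: sort the left partition
        quickSort(arr, p + 1, high);        // Step 4: sort the right partition
    }
}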


2.2.3 QUICK SORT EXAMPLE


Problem Statement

Consider the following array: 50, 23, 9, 18, 61, 32. We need to sort this array in the most efficient
manner without using extra space (in-place sorting).

Solution

Step 1:

• Make any element the pivot: Decide on any value from the list to be the pivot. For convenience
of code, we often select the rightmost index as pivot, or select any element at random and
swap it with the rightmost. Keep two values, "low" and "high", corresponding to the first index
and last index respectively.
o In our case low is 0 and high is 5.
o Values at low and high are 50 and 32, and the value at the pivot is 32.

• Partition the array on the basis of pivot: Call for partitioning which rearranges the array in
such a way that pivot (32) comes to its actual position (of the sorted array). And to the left of
the pivot, the array has all the elements less than it, and to the right greater than it.
o In the partition function, we start from the first element and compare it with the pivot.
Since 50 is greater than 32, we don’t make any change and move on to the next
element 23.
o Compare again with the pivot. Since 23 is less than 32, we swap 50 and 23. The
array becomes 23, 50, 9, 18, 61, 32
o We move on to the next element 9, which is again less than the pivot (32); swapping
it with 50 makes our array 23, 9, 50, 18, 61, 32.
o Similarly, for the next element 18, which is less than 32, the array becomes 23, 9, 18,
50, 61, 32. Now 61 is greater than the pivot (32), hence no changes.
o Lastly, we swap our pivot with 50 so that it comes to the correct position.
Thus the pivot (32) comes to its actual position, all elements to its left are smaller, and all elements
to its right are greater than itself.

Step 2:

• The main array after the first step becomes


23, 9, 18, 32, 61, 50

Step 3:

• Now the list is divided into two parts:


1. Sublist before pivot element
2. Sublist after pivot element
Step 4:

• Repeat the steps for the left and right sublists recursively. The final array thus becomes
9, 18, 23, 32, 50, 61.


The following diagram depicts the workflow of the Quick Sort algorithm which was described above.

2.2.4 COMPLEXITY ANALYSIS


Time Complexity of Quick sort

• Best case scenario: The best case occurs when the partitions are as evenly balanced as
possible, i.e., their sizes on either side of the pivot element are equal or differ by 1.
1. Case 1: The sizes of the sublists on either side of the pivot become equal when the
subarray has an odd number of elements and the pivot lands right in the middle after
partitioning. Each partition then has (n-1)/2 elements.
2. Case 2: The two sublists on either side of the pivot differ in size by 1 when the subarray
has an even number, n, of elements. One partition has n/2 elements and the other
(n/2)-1.
In either of these cases, each partition has at most n/2 elements, and the tree representation of the
subproblem sizes is as below:


The best case complexity of the quick sort algorithm is O(n log n): the recursion tree has O(log n)
levels, and each level does O(n) work.

• Worst case scenario: This happens when we encounter the most unbalanced partitions
possible: the original call takes n time, the recursive call on n-1 elements takes n-1 time, the
recursive call on n-2 elements takes n-2 time, and so on. The worst case time complexity of
quick sort is therefore O(n²).

Space Complexity of Quick sort

The space complexity is calculated based on the space used in the recursion stack. The worst case
space used is O(n); the average case is of the order O(log n). The worst-case space complexity of
O(n) occurs when the algorithm hits its most unbalanced case, where n nested recursive calls are
needed to produce a sorted list.

2.3 MERGE SORT ALGORITHM


Merge sort is one of the most efficient sorting algorithms. It works on the principle of divide and
conquer: merge sort repeatedly breaks down a list into sublists until each sublist consists of a single
element, then merges those sublists in a manner that results in a sorted list.


2.3.1 TOP-DOWN MERGE SORT IMPLEMENTATION:


The top-down merge sort approach uses recursion. It starts at the top and proceeds downwards, each
recursive call asking the same question, "What is required to be done to sort the array?", and
answering, "split the array into two, make a recursive call, and merge the results", until one gets to
the bottom of the array-tree.

EXAMPLE: Let us consider an example to understand the approach better.

1. Divide the unsorted list into n sublists, each comprising 1 element (a list of 1 element is
considered sorted).

2. Repeatedly merge sublists to produce newly sorted sublists until there is only 1 sublist
remaining. This will be the sorted list.

2.3.2 MERGING OF TWO LISTS DONE AS FOLLOWS:


The first elements of both lists are compared. If sorting in ascending order, the smaller element
of the two becomes the next element of the sorted list. This procedure is repeated until both sublists
are empty and the newly combined sublist covers all the elements of both sublists.
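A compact top-down sketch in C++ (using std::vector; the function names are illustrative):

#include <vector>
using std::vector;

// Merge two already-sorted vectors into one sorted vector.
vector<int> merge(const vector<int>& left, const vector<int>& right) {
    vector<int> result;
    size_t i = 0, j = 0;
    while (i < left.size() && j < right.size()) {
        if (left[i] <= right[j]) result.push_back(left[i++]);
        else                     result.push_back(right[j++]);
    }
    while (i < left.size())  result.push_back(left[i++]);   // drain leftovers
    while (j < right.size()) result.push_back(right[j++]);
    return result;
}

vector<int> mergeSort(const vector<int>& a) {
    if (a.size() <= 1) return a;                   // a 1-element list is sorted
    size_t mid = a.size() / 2;
    vector<int> left(a.begin(), a.begin() + mid);  // split the array into two
    vector<int> right(a.begin() + mid, a.end());
    return merge(mergeSort(left), mergeSort(right));  // recurse, then merge
}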


2.3.3 BOTTOM-UP MERGE SORT IMPLEMENTATION:


The bottom-up merge sort approach uses an iterative methodology. It starts with single-element
subarrays and combines adjacent pairs, sorting each pair as it merges. The combined, sorted arrays
are again combined and sorted with each other until one single sorted array is achieved.

Example: Let us understand the concept with the following example.

1. Iteration (1)

2. Iteration (2)


3. Iteration (3)

Thus, the entire array has been sorted and merged.
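A bottom-up sketch under the same assumptions, iterating over run widths 1, 2, 4, ... instead of
recursing (std::inplace_merge does the pairwise merge):

#include <algorithm>  // std::inplace_merge, std::min
#include <vector>

void bottomUpMergeSort(std::vector<int>& a) {
    size_t n = a.size();
    // Double the width of the sorted runs each iteration: 1, 2, 4, ...
    for (size_t width = 1; width < n; width *= 2) {
        for (size_t lo = 0; lo + width < n; lo += 2 * width) {
            size_t mid = lo + width;
            size_t hi = std::min(lo + 2 * width, n);
            // Merge the two adjacent sorted runs [lo, mid) and [mid, hi)
            std::inplace_merge(a.begin() + lo, a.begin() + mid, a.begin() + hi);
        }
    }
}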

2.4 INSERTION SORT ALGORITHM


Insertion sort is the sorting mechanism where the sorted array is built one item at a time. The
array elements are compared with each other sequentially and then arranged in some particular
order. The analogy can be understood from the way we arrange a deck of cards. This sort works on
the principle of inserting an element at a particular position, hence the name insertion sort.

Insertion Sort works as follows:

1. The first step involves comparing the element in question with its adjacent element.
2. If the comparison reveals that the element in question can be inserted at a particular
position, then space is created for it by shifting the other elements one position to the right
and inserting the element at the suitable position.
3. The above procedure is repeated until every element in the array is at its proper position.

Let us now understand working with the following example:

Consider the following array: 25, 17, 31, 13, 2

First Iteration: Compare 25 with 17. The comparison shows 17 < 25. Hence swap 17 and 25.

The array now looks like: 17, 25, 31, 13, 2


Second Iteration: Begin with the second element (25), but it was already swapped to the correct
position, so we move ahead to the next element.

Now hold on to the third element (31) and compare it with the ones preceding it.

Since 31 > 25, no swapping takes place.

Also, 31 > 17, so no swapping takes place and 31 remains at its position.

The array after the second iteration looks like: 17, 25, 31, 13, 2

Third Iteration: Start the next iteration with the fourth element (13), and compare it with its
preceding elements.

Since 13 < 31, we swap the two.

The array now becomes: 17, 25, 13, 31, 2.

But there still exist elements that we haven't yet compared with 13. Now the comparison takes place
between 25 and 13. Since 13 < 25, we swap the two.

The array becomes 17, 13, 25, 31, 2.

The last comparison of the iteration is between 17 and 13. Since 13 < 17, we swap the two.

The array now becomes 13, 17, 25, 31, 2.


Fourth Iteration: The last iteration calls for comparing the last element (2) with all the preceding
elements and making the appropriate swaps.

Since 2 < 31, swap 2 and 31. The array now becomes: 13, 17, 25, 2, 31.

Compare 2 with 25, 17, and 13.

Since 2 < 25, swap 25 and 2. The array becomes: 13, 17, 2, 25, 31.

Since 2 < 17, swap 2 and 17. The array now becomes: 13, 2, 17, 25, 31.

The last comparison of the iteration is 2 with 13. Since 2 < 13, swap 2 and 13.

The array now becomes: 2, 13, 17, 25, 31.

This is the final array after all the corresponding iterations and swapping of elements.
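A short C++ sketch of the procedure above, shifting larger elements right and inserting each key at
its place (the array name is illustrative):

void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; ++i) {
        int key = arr[i];      // element to be inserted into the sorted prefix
        int j = i - 1;
        // Shift elements greater than key one position to the right
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            --j;
        }
        arr[j + 1] = key;      // insert at the suitable position
    }
}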


2.4.1 TIME COMPLEXITY ANALYSIS


Even though insertion sort is efficient, if we provide an already sorted array to the insertion
sort algorithm, it will still execute the outer for loop, thereby requiring n steps to sort an already sorted
array of n elements. This makes its best case time complexity a linear function of n.
For an unsorted array, each element may have to be compared with all the other elements,
which means n elements each compared with n others, i.e., n × n = n² comparisons. One can also
look at other sorting algorithms such as merge sort, quick sort, and selection sort and understand
their complexities.

Worst Case Time Complexity [Big-O]: O(n²)
Best Case Time Complexity [Big Omega]: Ω(n)
Average Time Complexity [Big Theta]: Θ(n²)

2.5 BUBBLE SORT ALGORITHM


Bubble sort is a simple comparison-based sorting algorithm that repeatedly goes through the
list, compares adjacent elements, and swaps them if they are in the wrong order. It is the simplest
sorting algorithm and an inefficient one at the same time. Yet it is very much worth learning, as it
represents the basic foundations of sorting.


APPLICATION

Bubble sort is mainly used for educational purposes, helping students understand the
foundations of sorting.

It can also be used to identify whether a list is already sorted. When the list is already sorted (which
is the best-case scenario), the complexity of bubble sort is only O(n).

In real life, bubble sort can be visualized when people in a queue who want to stand in a
height-wise sorted manner swap positions among themselves until everyone is standing in
increasing order of height.

EXPLANATION

Algorithm: We compare adjacent elements and see if their order is wrong (i.e., a[i] > a[j] for
1 <= i < j <= size of array, if the array is to be in ascending order, and vice versa). If yes, then swap them.

• Let us say we have an array of length n. To sort this array we do the above step (swapping)
for n - 1 passes.
• In simple terms: first, the largest element moves to its place at the extreme right; then the
second largest to the second-to-last place, and so on. In the i-th pass, the i-th largest element
reaches its right place in the array by swaps.
• In mathematical terms, in the i-th pass, at least one element from the first (n - i + 1) elements
will reach its right place. That element will be the i-th (for 1 <= i <= n - 1) largest element of the
array. Because in the i-th pass of the array, in the j-th iteration (for 1 <= j <= n - i), we check
if a[j] > a[j + 1], and a[j] will always be greater than a[j + 1] when it is the largest element in the
range [1, n - i + 1]. We then swap it. This continues until the i-th largest element is at the
(n - i + 1)-th position of the array.
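A C++ sketch of the algorithm, including the early-exit "swapped" flag referred to in the complexity
analysis later in this lesson (the names are illustrative):

#include <algorithm>  // std::swap

void bubbleSort(int arr[], int n) {
    for (int pass = 0; pass < n - 1; ++pass) {
        bool swapped = false;
        // After each pass the largest remaining element is in place,
        // so the inner loop can stop one position earlier each time.
        for (int j = 0; j < n - 1 - pass; ++j) {
            if (arr[j] > arr[j + 1]) {
                std::swap(arr[j], arr[j + 1]);
                swapped = true;
            }
        }
        if (!swapped) break;  // no swaps means already sorted: best case O(n)
    }
}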

BUBBLE SORT EXAMPLE

Consider the following array: Arr = 14, 33, 27, 35, 10. We need to sort this array using the bubble sort
algorithm.


FIRST PASS

• We proceed with the first and second element i.e., Arr[0] and Arr[1]. Check if 14 > 33 which is
false. So, no swapping happens, and the array remains the same.

• We proceed with the second and third element i.e., Arr[1] and Arr[2]. Check if 33 > 27 which
is true. So, we swap Arr[1] and Arr[2].

Thus the array becomes:

• We proceed with the third and fourth element i.e., Arr[2] and Arr[3]. Check if 33 > 35 which is
false. So, no swapping happens, and the array remains the same.

• We proceed with the fourth and fifth element i.e., Arr[3] and Arr[4]. Check if 35 > 10 which is
true. So, we swap Arr[3] and Arr[4].

Thus, after swapping the array becomes:

This marks the end of the first pass, where the largest element reaches its final (last) position.


SECOND PASS

• We proceed with the first and second element i.e., Arr[0] and Arr[1]. Check if 14 > 27 which is
false. So, no swapping happens and the array remains the same.

• We now proceed with the second and third element i.e., Arr[1] and Arr[2]. Check if 27 > 33,
which is false. So, no swapping happens and the array remains the same.

• We now proceed with the third and fourth element i.e., Arr[2] and Arr[3]. Check if 33 > 10
which is true. So, we swap Arr[2] and Arr[3].

Now, the array becomes

i-th Pass:

After the i-th pass, the i-th largest element will be at the i-th last position in the array.

n-th Pass:

After the n-th pass, the n-th largest element (the smallest element) will be at the n-th last position
(the 1st position) in the array, where 'n' is the size of the array.

After doing all the passes, we can easily see the array will be sorted.

Thus, the sorted array will look like this:


COMPLEXITY ANALYSIS

Time Complexity of Bubble sort

Best case scenario: The best-case scenario occurs when the array is already sorted. In this case, no
swapping will happen in the first iteration (The swapped variable will be false). So, when this happens,
we break from the loop after the very first iteration. Hence, time complexity in the best-case scenario
is O(n) because it has to traverse through all the elements once.

Worst case and average case scenario: In bubble sort, n-1 comparisons are done in the 1st pass,
n-2 in the 2nd pass, n-3 in the 3rd pass, and so on. So the total number of comparisons is:

• Sum = (n-1) + (n-2) + (n-3) + ..... + 3 + 2 + 1
• Sum = n(n-1)/2

Hence, the time complexity is of the order n², i.e., O(n²).

Space Complexity of Bubble sort

The space complexity of the algorithm is O(1), because only a single additional memory space is
required, i.e., for the temporary variable used in swapping.

2.6 SELECTION SORT


The idea behind this algorithm is pretty simple. We divide the array into two parts: sorted and
unsorted. The left part is the sorted subarray and the right part is the unsorted subarray. Initially, the
sorted subarray is empty and the unsorted subarray is the complete given array.

We perform the steps given below until the unsorted subarray becomes empty:

1. Pick the minimum element from the unsorted subarray.
2. Swap it with the leftmost element of the unsorted subarray.
3. The leftmost element of the unsorted subarray now becomes a part (the rightmost element)
of the sorted subarray and is no longer part of the unsorted subarray.
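A minimal C++ sketch of this loop (the names are illustrative):

#include <algorithm>  // std::swap

void selectionSort(int arr[], int n) {
    // arr[0..i-1] is the sorted part; arr[i..n-1] is the unsorted part
    for (int i = 0; i < n - 1; ++i) {
        int minIndex = i;
        for (int j = i + 1; j < n; ++j)     // find the minimum of the unsorted part
            if (arr[j] < arr[minIndex])
                minIndex = j;
        std::swap(arr[i], arr[minIndex]);   // move it to the sorted boundary
    }
}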

HOW IT WORKS

This is our initial array A = [5, 2, 6, 7, 2, 1, 0, 3]


Leftmost element of unsorted part = A[0]

Minimum element of unsorted part = A[6]

We will swap A[0] and A[6] then, make A[0] part of sorted subarray.

Leftmost element of unsorted part = A[1]

Minimum element of unsorted part = A[5]

We will swap A[1] and A[5] then, make A[1] part of sorted subarray.

Leftmost element of unsorted part = A[2]

Minimum element of unsorted part = A[4]

We will swap A[2] and A[4] then, make A[2] part of sorted subarray.

Leftmost element of unsorted part = A[3]

Minimum element of unsorted part = A[5]

We will swap A[3] and A[5] then, make A[3] part of sorted subarray.

Leftmost element of unsorted part = A[4]

Minimum element of unsorted part = A[7]

We will swap A[4] and A[7] then, make A[4] part of sorted subarray.

Leftmost element of unsorted part = A[5]

Minimum element of unsorted part = A[6]

We will swap A[5] and A[6] then, make A[5] part of sorted subarray.


Leftmost element of unsorted part = A[6]

Minimum element of unsorted part = A[7]

We will swap A[6] and A[7] then, make A[6] part of sorted subarray.

This is the final sorted array.

Lesson 3: Priority Queues


A priority queue is a special type of queue in which each element is associated with a priority and is
served according to its priority. If elements with the same priority occur, they are served according to
their order in the queue.

Generally, the value of the element itself is considered for assigning the priority.

For example, the element with the highest value is considered the highest-priority element. However,
in other cases, we can assume the element with the lowest value to be the highest-priority element.
We can also set priorities according to our needs.


3.1 DIFFERENCE BETWEEN PRIORITY QUEUE AND NORMAL QUEUE


In a queue, the first-in-first-out rule is implemented whereas, in a priority queue, the values are
removed on the basis of priority. The element with the highest priority is removed first.

3.2 IMPLEMENTATION OF PRIORITY QUEUE


Priority queue can be implemented using an array, a linked list, a heap data structure, or a binary search
tree. Among these data structures, heap data structure provides an efficient implementation of priority
queues.

There are two kinds of priority queues: a max-priority queue and a min-priority queue. In both kinds, the
priority queue stores a collection of elements and is always able to provide the most “extreme” element,
which is the only way to interact with the priority queue. For the remainder of this section, we will discuss
max-priority queues. Min-priority queues are analogous.

Hence, we will be using the heap data structure to implement the priority queue in this tutorial. A
max-heap is used in the following operations; min-heaps work analogously. If you want to learn more,
see the supplementary materials on max-heaps and min-heaps.

A comparative analysis of different implementations of priority queue is given below.

Operations           peek    insert     delete

Linked List          O(1)    O(n)       O(1)
Binary Heap          O(1)    O(log n)   O(log n)
Binary Search Tree   O(1)    O(log n)   O(log n)

3.3 PRIORITY QUEUE OPERATIONS

3.3.1 INSERT AN ELEMENT INTO THE PRIORITY QUEUE


Inserting an element into a priority queue (max-heap) is done by the following steps.

• Insert the new element at the end of the tree.


• Heapify the tree.

Algorithm for insertion of an element into priority queue (max-heap)

If there is no node,
    create a newNode.
else (a node is already present)
    insert the newNode at the end (last node from left to right)
    heapify the array

For Min Heap, the above algorithm is modified so that parentNode is always smaller than newNode.
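A C++ sketch of the max-heap insertion over an array-backed heap; the sift-up loop is the "heapify"
step for insertion (the names are illustrative):

#include <algorithm>  // std::swap
#include <vector>

// Insert value into a max-heap stored in a vector.
void heapInsert(std::vector<int>& heap, int value) {
    heap.push_back(value);                 // insert at the end
    size_t i = heap.size() - 1;
    while (i > 0) {                        // sift up while the parent is smaller
        size_t parent = (i - 1) / 2;
        if (heap[parent] >= heap[i]) break;
        std::swap(heap[parent], heap[i]);
        i = parent;
    }
}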

3.3.2 DELETING AN ELEMENT FROM THE PRIORITY QUEUE


Deleting an element from a priority queue (max-heap) is done as follows:

• Select the element to be deleted

• Swap it with the last element


• Remove the last element.

• Heapify the tree.

Algorithm for deletion of an element in the priority queue (max-heap)

If nodeToBeDeleted is the leafNode
    remove the node
Else swap nodeToBeDeleted with the lastLeafNode
    remove nodeToBeDeleted
    heapify the array

For Min Heap, the above algorithm is modified so that both childNodes are smaller than
currentNode.


3.3.3 PEEKING FROM THE PRIORITY QUEUE (FIND MAX/MIN)


The peek operation returns the maximum element from a Max Heap or the minimum element from a
Min Heap without deleting the node.

For both Max heap and Min Heap

return rootNode

3.3.4 EXTRACT-MAX/MIN FROM THE PRIORITY QUEUE


Extract-Max returns the node with maximum value after removing it from a Max Heap whereas
Extract-Min returns the node with minimum value after removing it from Min Heap.
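For instance, the C++ standard library's std::priority_queue (a max-priority queue by default)
provides exactly these operations; a brief usage sketch:

#include <iostream>
#include <queue>

int main() {
    std::priority_queue<int> pq;   // max-priority queue by default
    pq.push(3);                    // insert
    pq.push(10);
    pq.push(7);
    std::cout << pq.top() << std::endl;  // peek (find max): prints 10
    pq.pop();                            // extract-max removes 10
    std::cout << pq.top() << std::endl;  // prints 7
    return 0;
}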

3.3.5 PRIORITY QUEUE APPLICATIONS


Some of the applications of a priority queue are:

• Dijkstra's algorithm
• for implementing stack
• for load balancing and interrupt handling in an operating system
• for data compression in Huffman code

Lesson 4: Graph Algorithms

4.1 WHAT IS A GRAPH?


A graph is an abstract notation used to represent the connections between pairs of objects. Graphs
are widely applied today. They are used in economics, aeronautics, physics, biology (for analyzing
DNA), mathematics, and other areas.

A graph consists of –

• Vertices − Interconnected objects in a graph are called vertices. Vertices are also known as
nodes.
• Edges − Edges are the links that connect the vertices.
There are two types of graphs –

• Directed graph − In a directed graph, edges have direction, i.e., edges go from one vertex to
another.
• Undirected graph − In an undirected graph, edges have no direction.


4.2 GRAPH COLORING


Graph coloring is a method to assign colors to the vertices of a graph so that no two adjacent vertices
have the same color. Some graph coloring problems are –

• Vertex coloring − A way of coloring the vertices of a graph so that no two adjacent vertices
share the same color.

• Edge Coloring − It is the method of assigning a color to each edge so that no two adjacent
edges have the same color.
• Face coloring − It assigns a color to each face or region of a planar graph so that no two
faces that share a common boundary have the same color.

4.3 CHROMATIC NUMBER


Chromatic number is the minimum number of colors required to color a graph. For example, the
chromatic number of the following graph is 3

The concept of graph coloring is applied in preparing timetables, mobile radio frequency assignment,
Sudoku, register allocation, and the coloring of maps.

4.3.1 Steps for graph coloring


• Set the initial value of each processor in the n-dimensional array to 1.
• Now, to assign a particular color to a vertex, determine whether that color is already assigned
to the adjacent vertices or not.
• If a processor detects the same color in adjacent vertices, it sets its value in the array to 0.
• After making n² comparisons, if any element of the array is 1, then it is a valid coloring.


4.3.2 Pseudocode for graph coloring


begin

    create the processors P(i0, i1, ..., in-1) where 0 ≤ iv < m, 0 ≤ v < n

    status[i0, ..., in-1] = 1

    for j varies from 0 to n-1 do
    begin
        for k varies from 0 to n-1 do
        begin
            if aj,k = 1 and ij = ik then
                status[i0, ..., in-1] = 0
        end
    end

    ok = Σ status

    if ok > 0, then display valid coloring exists
    else
        display invalid coloring

end
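The pseudocode above checks a candidate coloring in parallel. As a complementary sequential sketch
(my own illustration, not the algorithm above), a simple greedy coloring assigns each vertex the
smallest color not used by any of its neighbors:

#include <vector>

// Greedy vertex coloring over an adjacency list; returns a color per vertex.
std::vector<int> greedyColoring(const std::vector<std::vector<int>>& adj) {
    int n = (int)adj.size();
    std::vector<int> color(n, -1);          // -1 means "not yet colored"
    for (int v = 0; v < n; ++v) {
        std::vector<bool> used(n, false);
        for (int u : adj[v])                // mark colors already used by neighbors
            if (color[u] != -1)
                used[color[u]] = true;
        int c = 0;
        while (used[c]) ++c;                // smallest unused color
        color[v] = c;
    }
    return color;
}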

4.4 MINIMAL SPANNING TREE


A spanning tree whose sum of edge weights (or lengths) is the least among all possible spanning
trees of graph G is known as a minimal spanning tree or minimum cost spanning tree. The following
figure shows a weighted connected graph.

Some possible spanning trees of the above graph are shown below –

(Figures (a)–(d): possible spanning trees of the weighted graph above.)

Among all the above spanning trees, figure (d) is the minimum spanning tree. The concept of minimum
cost spanning tree is applied in the travelling salesman problem, designing electronic circuits,
designing efficient networks, and designing efficient routing algorithms.

To implement the minimum cost-spanning tree, the following two methods are used –

• Prim’s Algorithm
• Kruskal’s Algorithm

4.5 PRIM'S ALGORITHM


Prim’s algorithm is a greedy algorithm, which helps us find the minimum spanning tree for a
weighted undirected graph. It selects a vertex first and finds an edge with the lowest weight incident
on that vertex.

4.5.1 STEPS OF PRIM’S ALGORITHM


1. Select any vertex, say v1, of graph G.
2. Select an edge, say e1, of G such that e1 = v1v2, v1 ≠ v2, and e1 has minimum weight
among the edges incident on v1 in graph G.
3. Now, following step 2, select the minimum weighted edge incident on the tree built so far
that does not form a cycle.
4. Continue this until n–1 edges have been chosen. Here n is the number of vertices.
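A compact O(V²) C++ sketch of Prim's algorithm over an adjacency matrix (it assumes a connected
graph with adj[u][v] = edge weight, 0 for no edge; INT_MAX and the names are illustrative):

#include <climits>
#include <vector>

// Returns the total weight of the minimum spanning tree.
int primMST(const std::vector<std::vector<int>>& adj) {
    int n = (int)adj.size();
    std::vector<int> minEdge(n, INT_MAX);  // cheapest edge linking each vertex to the tree
    std::vector<bool> inTree(n, false);
    minEdge[0] = 0;                        // start from vertex 0
    int total = 0;
    for (int k = 0; k < n; ++k) {
        int u = -1;
        for (int v = 0; v < n; ++v)        // pick the cheapest vertex not yet in the tree
            if (!inTree[v] && (u == -1 || minEdge[v] < minEdge[u]))
                u = v;
        inTree[u] = true;
        total += minEdge[u];
        for (int v = 0; v < n; ++v)        // update the cheapest edges out of u
            if (adj[u][v] != 0 && !inTree[v] && adj[u][v] < minEdge[v])
                minEdge[v] = adj[u][v];
    }
    return total;
}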


The minimum spanning tree is –

4.6 KRUSKAL'S ALGORITHM

Kruskal’s algorithm is a greedy algorithm, which helps us find the minimum spanning tree for a
connected weighted graph, adding increasing cost arcs at each step. It is a minimum-spanning-tree
algorithm that finds an edge of the least possible weight that connects any two trees in the forest.

4.6.1 STEPS OF KRUSKAL’S ALGORITHM

• Select an edge of minimum weight, say e1, of graph G such that e1 is not a loop.
• Select the next minimum weighted edge that does not form a cycle with the edges already
chosen.
• Continue this until n–1 edges have been chosen. Here n is the number of vertices.
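A C++ sketch of Kruskal's algorithm with a small union-find structure to detect cycles (the edge list
and names are illustrative; the graph is assumed connected):

#include <algorithm>
#include <numeric>   // std::iota
#include <vector>

struct Edge { int u, v, w; };

// Find the set representative, with path halving for efficiency.
int findRoot(std::vector<int>& parent, int x) {
    while (parent[x] != x) {
        parent[x] = parent[parent[x]];
        x = parent[x];
    }
    return x;
}

// Returns the total weight of the MST of a connected graph with n vertices.
int kruskalMST(int n, std::vector<Edge> edges) {
    std::sort(edges.begin(), edges.end(),
              [](const Edge& a, const Edge& b) { return a.w < b.w; });
    std::vector<int> parent(n);
    std::iota(parent.begin(), parent.end(), 0);   // each vertex is its own set
    int total = 0, chosen = 0;
    for (const Edge& e : edges) {
        int ru = findRoot(parent, e.u), rv = findRoot(parent, e.v);
        if (ru != rv) {              // accepting e does not form a cycle
            parent[ru] = rv;         // union the two trees
            total += e.w;
            if (++chosen == n - 1) break;
        }
    }
    return total;
}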


The minimum spanning tree of the above graph is –

4.7 SHORTEST PATH ALGORITHM

A shortest path algorithm is a method of finding the least-cost path from the source node (S) to the
destination node (D). Here, we will discuss Moore's algorithm, also known as the Breadth-First
Search (BFS) algorithm.

4.7.1 MOORE’S ALGORITHM


• Label the source vertex S with i, where i = 0.
• Find all unlabeled vertices adjacent to a vertex labeled i. If no such vertices exist and D is
still unlabeled, then vertex D is not connected to S. Otherwise, label them i+1.
• If D is labeled, go to step 4; else set i = i+1 and go to step 2.
• Stop: the label of D is the length of the shortest path.
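A BFS sketch in C++ implementing these steps over an adjacency list; it returns the label of D, i.e.,
the shortest number of edges from S to D, or -1 if D is not connected to S (names are illustrative):

#include <queue>
#include <vector>

int shortestPath(const std::vector<std::vector<int>>& adj, int S, int D) {
    std::vector<int> label(adj.size(), -1);  // -1 marks an unlabeled vertex
    std::queue<int> q;
    label[S] = 0;                            // step 1: label the source with 0
    q.push(S);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        if (u == D) return label[D];         // step 3: D is labeled, stop
        for (int v : adj[u])                 // step 2: label unlabeled neighbors i+1
            if (label[v] == -1) {
                label[v] = label[u] + 1;
                q.push(v);
            }
    }
    return -1;                               // D is not connected to S
}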


Supplementary Learning Materials


Lesson 1

Basics of Hash Tables Tutorials & Notes | Data Structures | HackerEarth. Retrieved December 6,
2020, from https://www.hackerearth.com/practice/data-structures/hash-tables/basics-of-hash-tables/tutorial/

Nilsson, S. (n.d.). Hash tables explained [step-by-step example]. yourbasic.org. Retrieved December
6, 2020, from https://yourbasic.org/algorithms/hash-tables-explained

Lesson 2

InterviewBit. (2020). Quicksort Algorithm - InterviewBit. https://www.interviewbit.com/tutorial/quicksort-algorithm

Sorting Techniques in Data Structures. W3schools.in. Retrieved December 6, 2020, from
https://www.w3schools.in/data-structures-tutorial/sorting-techniques/

Sorting Algorithms - GeeksforGeeks. Retrieved December 6, 2020, from
https://www.geeksforgeeks.org/sorting-algorithms/

Lesson 3

Priority Queue Data Structure. Programiz.com. Retrieved December 6, 2020, from
https://www.programiz.com/dsa/priority-queue

Maxim Aleksa. (2019). Priority Queues · Data Structures. Maximal.io.
https://datastructures.maximal.io/priority-queues/

Data Structure - Priority Queue. Tutorialspoint.com. Retrieved December 6, 2020, from
https://www.tutorialspoint.com/data_structures_algorithms/priority_queue.htm

Lesson 4

Graph Algorithm - Tutorialspoint. (2020). Tutorialspoint.com.
https://www.tutorialspoint.com/parallel_algorithm/graph_algorithm.htm

Vijini Mallawaarachchi. (2020, August 27). 10 Graph Algorithms Visually Explained - Towards Data
Science. Medium; Towards Data Science. https://towardsdatascience.com/10-graph-algorithms-visually-explained-e57faa1336f3

Nilsson, S. (2018). Introduction to graph algorithms: definitions and examples. yourbasic.org.
https://yourbasic.org/algorithms/graph/


SAQs
Lesson 1
• Why are hash tables fast?
• What are hashing and a hash table?
• What is the purpose of hashing?

Lesson 2
• Why do we use sorting techniques?
• How many categories of sorting are there? Briefly explain each.

Lesson 3
• What are the two kinds of priority queue?
• Give 2 algorithms in which a priority queue can be utilized.

Lesson 4
• What are graphs used for?
• What is a graph, and what are its types?
• What is the difference between a directed and an undirected graph?

References
Basics of Hash Tables Tutorials & Notes | Data Structures | HackerEarth. Retrieved December 6,
2020, from https://www.hackerearth.com/practice/data-structures/hash-tables/basics-of-hash-tables/tutorial/

Nilsson, S. (n.d.). Hash tables explained [step-by-step example]. yourbasic.org. Retrieved December
6, 2020, from https://yourbasic.org/algorithms/hash-tables-explained

InterviewBit. (2020). Quicksort Algorithm - InterviewBit. https://www.interviewbit.com/tutorial/quicksort-algorithm

Sorting Techniques in Data Structures. W3schools.in. Retrieved December 6, 2020, from
https://www.w3schools.in/data-structures-tutorial/sorting-techniques/

Sorting Algorithms - GeeksforGeeks. Retrieved December 6, 2020, from
https://www.geeksforgeeks.org/sorting-algorithms/

Priority Queue Data Structure. Programiz.com. Retrieved December 6, 2020, from
https://www.programiz.com/dsa/priority-queue

Maxim Aleksa. (2019). Priority Queues · Data Structures. Maximal.io.
https://datastructures.maximal.io/priority-queues/

Data Structure - Priority Queue. Tutorialspoint.com. Retrieved December 6, 2020, from
https://www.tutorialspoint.com/data_structures_algorithms/priority_queue.htm
