Data Structures and Algorithms Dept. of CSE, IIT(ISM) Dhanbad November 21, 2021 1
Outline
• Course Related Information
• Overview of Course Content
• Basic Concepts in Algorithmic Analysis
• Conclusions
Lecture Plan
Useful Resources
Text Books:
Horowitz, Sahni, and Rajasekaran, "Fundamentals of Computer Algorithms", Universities Press
Lipschutz and Pai, "Data Structures", TMH Publishing
Reference Books:
Tenenbaum, "Data Structures Using C/C++", PHI
Samanta, "Classic Data Structures", PHI
Evaluation
No Mid Term Examination
Four Quiz Tests: 52 Marks (13 Marks Each)
End Semester Examination: 48 Marks
Duration of each quiz is 30 minutes.
Questions may be of MCQ type, true/false, fill in the blanks, short answer, or numerical problems.
Quizzes will be scheduled as per the academic calendar.
Introduction
Computer Science is the study of algorithms and data.
Algorithms are at the heart of every nontrivial computer application.
There is an intimate connection between the synthesis of algorithms and the structuring of data.
Thus, data structures and algorithms should be thought of as a single unit, neither making sense without the other.
Algorithm study comprises four distinct areas:
Machines for executing algorithms: these range from pocket calculators to the largest general-purpose digital computers, and the goal is to study machine fabrication and organization so that algorithms can be carried out effectively.
Languages for describing algorithms: at one end are the languages closest to the physical machines, and at the other end are the languages designed for sophisticated problem solving.
Contd…
Foundations of algorithms: the basic questions about the possibility of designing algorithms. For instance, is a particular task accomplishable by a computing device at all? What is the minimum number of operations necessary for an algorithm that completes a given function/problem?
Analysis of algorithms: studying each step of an algorithm so that its performance as a whole, in terms of time requirements and memory storage, can be estimated.
Algorithms
Definition of algorithm: An algorithm is a finite set of instructions which, if followed, accomplishes a particular task. In addition, every algorithm must satisfy the following criteria:
Input: zero or more quantities are externally supplied.
Output: at least one quantity is produced.
Definiteness: each instruction must be clear and unambiguous.
Contd…
Finiteness: in all cases, the algorithm must terminate after a finite number of steps.
Effectiveness: each instruction must be sufficiently basic and contribute to the problem solution.
Correctness: the algorithm must produce correct output in all instances.
Difference between an algorithm and a program: a program (e.g., an operating system) need not satisfy finiteness, whereas an algorithm must terminate.
An algorithm can be described in many ways: in English, taking care that the instructions are definite; as a flowchart, in graphical form using different symbols; or in pseudocode, i.e., almost-code, presented so that the algorithm and the code have a one-to-one relationship.
Algorithm example
Fibonacci series: F(0) = 0, F(1) = 1, and F(n) = F(n-1) + F(n-2) for n ≥ 2.
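The recurrence above can be sketched in C as an iterative procedure (the function name fib and the iterative formulation are illustrative, not taken from the slide):

```c
/* Iterative computation of the n-th Fibonacci number,
   following F(0) = 0, F(1) = 1, F(n) = F(n-1) + F(n-2) for n >= 2. */
unsigned long fib(unsigned int n)
{
    unsigned long prev = 0, curr = 1;   /* F(0) and F(1) */
    if (n == 0)
        return 0;
    for (unsigned int i = 2; i <= n; i++) {
        unsigned long next = prev + curr;   /* F(i) = F(i-1) + F(i-2) */
        prev = curr;
        curr = next;
    }
    return curr;
}
```

Counting each statement as unit time, the loop body executes n - 1 times, so the running count grows linearly with n.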
Contd…
The quality of an algorithm is decided by two parameters:
How much extra space does it require beyond the input?
How much time does it require for execution on a given input of size n?
Generally, we analyze algorithms in terms of their time complexities.
In the context of time-complexity analysis, we assume that every statement takes the same, unit time to execute.
Analysis of the algorithms
Execution count for Fn
Analysis of Time Complexity
Best case: inputs are provided in such a way that the minimum time is required to process them.
Average case: the average behaviour of the algorithm is studied over various kinds of inputs. It requires an assumption about the statistical distribution of inputs.
Worst case: inputs are given in such a way that the maximum time is required to process them.
Linear Search
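The linear-search procedure named on this slide can be sketched in C as follows (the function name and signature are my own):

```c
/* Linear search: return the index of key in a[0..n-1], or -1 if absent. */
int linear_search(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;   /* best case: found at the first element, O(1) */
    return -1;          /* worst case: all n elements examined, O(n) */
}
```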
An efficient algorithm: Binary Search
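Binary search on a sorted array can be sketched in C as follows (again, the function name and signature are my own):

```c
/* Binary search on a sorted array a[0..n-1]; returns the index of key
   or -1 if absent.  Each iteration halves the search range, so at most
   about log2(n) comparisons are made: O(log n). */
int binary_search(const int a[], int n, int key)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  /* avoids overflow of (low+high)/2 */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            low = mid + 1;    /* discard the left half */
        else
            high = mid - 1;   /* discard the right half */
    }
    return -1;
}
```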
Time Complexity
T(n) = k              if n = 1
T(n) = T(n/2) + k     if n > 1

Expanding the recurrence:
T(n) = T(n/2) + k
     = T(n/4) + 2k
     = T(n/8) + 3k
     ...
     = T(n/2^r) + rk      (let n = 2^r)
     = T(1) + rk
     = k + rk,            where r = log2 n
Thus T(n) = k·log2 n + k, i.e., T(n) = O(log n).
Thank You… Any
Queries...?
Algorithms
Outline
• What is an Algorithm?
• Performance Analysis
Algorithms
Definition of algorithm: An algorithm is a finite set of instructions which, if followed, accomplishes a particular task. In addition, every algorithm must satisfy the following criteria:
Input: zero or more quantities are externally supplied.
Output: at least one quantity is produced.
Definiteness: each instruction must be clear and unambiguous.
Finiteness: in all cases, the algorithm must terminate after a finite number of steps.
Effectiveness: each instruction must be sufficiently basic and contribute to the problem solution.
Correctness: the algorithm must produce correct output in all instances.
Goals
How to devise algorithms?
Creating an algorithm is an art which may never be fully
automated.
How to validate algorithms?
It is necessary to show that it computes the correct answer
for all possible legal inputs.
How to analyze algorithms?
It refers to the task of determining how much computing
time and storage an algorithm requires.
Algorithm example
Fibonacci series: F(0) = 0, F(1) = 1, and F(n) = F(n-1) + F(n-2) for n ≥ 2.
Contd…
The quality of an algorithm is decided by two parameters:
How much extra space does it require beyond the input?
How much time does it require for execution on a given input of size n?
Generally, we analyze algorithms in terms of their time complexities.
In the context of time-complexity analysis, we assume that every statement takes the same, unit time to execute.
Analysis of the algorithms
Execution count for Fn
Linear Search
An efficient algorithm: Binary Search
A decision tree
The performance of the binary search algorithm can be
described in terms of a decision tree, which is a binary tree that
exhibits the behavior of the algorithm.
Time Complexity
T(n) = k              if n = 1
T(n) = T(n/2) + k     if n > 1

Expanding the recurrence:
T(n) = T(n/2) + k
     = T(n/4) + 2k
     = T(n/8) + 3k
     ...
     = T(n/2^r) + rk      (let n = 2^r)
     = T(1) + rk
     = k + rk,            where r = log2 n
Thus T(n) = k·log2 n + k, i.e., T(n) = O(log n).
Analysis of Algorithms
We are interested in the design of "good" data
structures and algorithms.
A data structure is a systematic way of organizing
and accessing data, and an algorithm is a step-by-
step procedure for performing some task in a finite
amount of time.
To classify some data structures and algorithms as
"good", we must have precise ways of analyzing
them.
Analyzing the efficiency of a program involves
characterizing the running time and space usage of
algorithms and data structure operations.
Contd…
In order to study the performance of an algorithm,
we have to estimate the computing time (or time
complexity) and memory requirements.
It is impossible to determine exactly how much time an algorithm takes unless we know the following:
The machine we are executing on
The machine instruction set
The time required by each machine instruction
The translations a compiler will make from source code to machine language
Contd…
Generally, the frequency count of an algorithm's steps is estimated and expressed in asymptotic notation to compute its time complexity.
This lecture introduces the common functions that are used for analyzing algorithms and some justification techniques for analyzing algorithms.
Analysis of Time Complexity
Best case: inputs are provided in such a way that the minimum time is required to process them.
Average case: the average behaviour of the algorithm is studied over various kinds of inputs. It requires an assumption about the statistical distribution of inputs.
Worst case: inputs are given in such a way that the maximum time is required to process them.
Common Functions Used in Analysis
Constant function, f(n) = c: an assignment statement, or invoking a method or function.
Logarithm function, f(n) = log n: binary search.
Linear function, f(n) = n: printing the elements of an array of size n.
N-log-N function, f(n) = n log n: merge sort, which will be discussed later.
Quadratic function, f(n) = n^2: bubble sort.
Cubic function, f(n) = n^3: matrix multiplication.
Exponential function, f(n) = b^n: Towers of Hanoi. In this function, b is a positive constant, called the base, and the argument n is the exponent.
Rate of Growth of Functions
Growth of Functions
We quantify the concept that g grows at least
as fast as f.
We only care about the behaviour for large
problems.
Even bad algorithms can be used to solve small
problems.
Big-oh notation
There are different asymptotic notations; first of all we will study big-O notation, whose definition is given below:
f(n) = O(g(n)) (read as "f of n is big-oh of g of n") if and only if there exist two constants c and n0 such that f(n) ≤ c·g(n) for all n ≥ n0, where f(n) is the computing time of some algorithm.
Saying that the computing time of an algorithm is O(g(n)) means that its execution time is no more than a constant times g(n), where n is the parameter corresponding to the input size, input and output size, or output size (generally the input size is considered).
For instance, the time complexity of the Fibonacci program is O(n).
Thus, O(n) is linear, O(n^2) is quadratic, O(n^3) is cubic, and O(2^n) is exponential.
Contd…
The two constants c and n0 in the definition of big-O notation are called witnesses to the relationship "f(n) is O(g(n))".
Choose n0.
Choose c; it may depend on your choice of n0.
Once you choose c and n0, you must prove the truth of the implication (often by induction).
Show that f(x) = x^2 + 2x + 1 is O(x^2).
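One possible pair of witnesses for this exercise (my own choice; any larger c or n0 also works):

```latex
% Claim: x^2 + 2x + 1 = O(x^2), with witnesses n_0 = 1 and c = 4.
% For all x \ge 1 we have 2x \le 2x^2 and 1 \le x^2, hence
x^2 + 2x + 1 \;\le\; x^2 + 2x^2 + x^2 \;=\; 4x^2
\qquad \text{for all } x \ge 1,
% so the definition is satisfied with c = 4 and n_0 = 1.
```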
Example
Contd…
Contd…
Thus, the big-O notation provides an upper bound on the running time; it characterizes the worst-case complexity of an algorithm.
big-omega (Ω) notation
While the O-notation gives an upper bound, the Ω-notation, on the other hand, provides a lower bound within a constant factor of the running time.
Thus, it provides the best-case complexity of an algorithm.
f(n) = Ω(g(n)) (read as "f of n is big-omega of g of n") if and only if there exist two constants c and n0 such that f(n) ≥ c·g(n) for all n ≥ n0, where f(n) is the computing time of some algorithm.
The problem of sorting by comparisons is Ω(n log n), meaning that no comparison-based sorting algorithm with time complexity asymptotically less than n log n can ever be devised.
Contd…
Theta notation
Theta notation represents both the upper and the lower bound of the running time of an algorithm.
It is used for analyzing the average-case complexity of an algorithm.
f(n) = Θ(g(n)) (read as "f of n is theta of g of n") if and only if there exist positive constants c1, c2 and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.
Contd…
Why is it called Asymptotic Analysis?
In mathematical analysis, asymptotic analysis is a method of describing limiting behavior.
As an illustration, suppose that we are interested in the properties of a function f(n) as n becomes very large.
If f(n) = n^2 + 3n, then as n becomes very large, the term 3n becomes insignificant compared to n^2.
The function f(n) is said to be "asymptotically equivalent to n^2, as n → ∞". This is often written symbolically as f(n) ~ n^2, which is read as "f(n) is asymptotic to n^2".
In asymptotic analysis, we evaluate the performance of an algorithm in terms of input size (we don't measure the actual running time).
We typically ignore small values of n, since we are usually interested in estimating how slow the program will be on large inputs.
Shortcomings of Asymptotic Analysis
Implementation complexity: algorithms with better complexity are often (much) more complicated. This can increase coding time and the constants.
Asymptotic analysis ignores small input sizes. At small input sizes, constant factors or low-order terms could dominate running time.
Worst-case versus average performance.
Guidelines for Asymptotic Analysis
Loops: The running time of a loop is, at most,
the running time of the statements inside the
loop (including tests) multiplied by the number
of iterations.
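A minimal C illustration of this rule (the function sum_first_n is my own example, not from the slide):

```c
/* A single loop: the body executes n times and does constant work
   per iteration, so the running time is O(n). */
long sum_first_n(int n)
{
    long sum = 0;
    for (int i = 1; i <= n; i++)   /* n iterations */
        sum += i;                  /* constant work each time */
    return sum;
}
```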
Contd…
Nested loops: Analyze from the inside out.
Total running time is the product of the sizes of
all the loops.
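A sketch of the nested-loop rule in C (the function count_pairs is my own illustrative example):

```c
/* Nested loops: the inner body runs n * n times in total,
   so the running time is O(n^2). */
long count_pairs(int n)
{
    long count = 0;
    for (int i = 0; i < n; i++)        /* outer loop: n iterations */
        for (int j = 0; j < n; j++)    /* inner loop: n iterations each */
            count++;                   /* executes n * n times overall */
    return count;
}
```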
Contd…
Logarithmic complexity: An algorithm is
O(logn) if it takes a constant time to cut the
problem size by a fraction (usually by 1/2).
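The halving behaviour can be sketched in C (the function halving_steps is an illustrative example of mine):

```c
/* Halving loop: the problem size is cut in half on each step,
   so the loop runs about log2(n) times, i.e. O(log n). */
int halving_steps(int n)
{
    int steps = 0;
    while (n > 1) {
        n /= 2;      /* cut the problem size by half */
        steps++;
    }
    return steps;
}
```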
Algorithm and Data Structure
An algorithm takes raw data as input and transforms it into refined data as output. Thus, computer science is the study of both algorithms and data.
For processing/transforming data, the requirements are:
A machine to hold data
Languages to describe data manipulation
Languages to describe the kinds of refined data that can be produced from raw data
A structure to represent data
There is an intimate connection between structuring data and synthesizing algorithms, i.e., they work together as a unit.
Overview of Data Structures
Algorithm and Data Structure
An algorithm takes raw data as input and transforms it into refined data as output. Thus, computer science is the study of both algorithms and data.
For processing/transforming data, the requirements are:
A machine to hold data
Languages to describe data manipulation
Languages to describe the kinds of refined data that can be produced from raw data
A structure to represent data
There is an intimate connection between structuring data and synthesizing algorithms, i.e., they work together as a unit.
Definition
A data structure is an organization of data values, the
relationships among them, and the functions to answer
particular queries on that data.
Example: Complex Number x+iy
Classification
Contd…
Linear Data Structures
Non-Linear Data Structures
Assignment-1
Problem Statement: Magic Square
A magic square is an n×n matrix of the integers 1 to n^2 such that the sum of every row, column and diagonal is the same.
Contd..
Thank You… Any
Queries...?
Introduction to Analysis of Algorithms
Outline I
1 Introduction to Data Structures
Classification of Data Structure
Data Structure Operations
2 Introduction to Algorithms
3 Complexity
4 Time Complexity
O (Big O) Notation
Ω (Big Omega) Notation
Θ (Theta) Notation
o (Small / Little o) Notation
ω (Small / Little omega) Notation
Properties of asymptotic notations
5 Space Complexity

The term information is sometimes used for data with given attributes, or, in other words, meaningful or processed data.
Introduction to Data Structures
The way that data are organized into the hierarchy of fields, records and files reflects the relationship between attributes, entities and entity sets.
A field is a single elementary unit of information representing an attribute of an entity.
A record is the collection of field values of a given entity.
A file is the collection of records of the entities in a given entity set.

Each record in a file may contain many field items, but the value in a certain field may uniquely determine the record in the file.
Records may also be classified according to length.
In fixed-length records, all the records contain the same data items, with the same amount of space assigned to each data item.
In variable-length records, file records may have different lengths.

Data may be organized in many different ways; the logical or mathematical model of a particular organization of data is called a data structure.
The organization of data into fields, records and files may not be complex enough to maintain and efficiently process certain collections of data. For this reason, data are also organized into more complex types of structures. The study of such data structures includes the following three steps:
Logical or mathematical description of the structure
Implementation of the structure on a computer
Quantitative analysis of the structure, which includes determining the amount of memory needed to store the structure and the time required to process it

Note: The second and third steps in the study of data structures depend on whether the data are stored (a) in the primary memory of the computer or (b) in a secondary storage unit. The first case deals with data structures proper; the second is called file management or database management.
Classification of Data Structure
Data structures are normally classified into two broad categories: primitive and non-primitive.

Data types: a particular kind of data item, as defined by the values it can take, the programming language used, or the operations that can be performed on it.
Primitive data structures are basic structures and are directly operated upon by machine instructions.
Primitive data structures have different representations on different computers.
Integers, floats, characters and pointers are examples of primitive data structures.

A non-primitive data type is further divided into linear and non-linear data structures.
Array: an array is a fixed-size sequenced collection of elements of the same data type.
List: an ordered set containing a variable number of elements is called a list.
File: a file is a collection of logically related information. It can be viewed as a large list of records consisting of various fields.

A data structure is said to be linear if its elements are connected in a linear fashion, logically or in sequential memory locations.
There are two ways to represent a linear data structure in memory:
Static memory allocation
Dynamic memory allocation
The possible operations on a linear data structure are: traversal, insertion, deletion, searching, sorting and merging.
Difference between Linear and Non-Linear Data Structures
Data Structure Operations
2 Introduction to Algorithms

Computer Science is the study of algorithms and data.
There is an intimate connection between the synthesis of algorithms and the structuring of data.
Thus, data structures and algorithms should be thought of as a single unit, neither making sense without the other.
Time Complexity
O (Big O) Notation
Definition: Let f(n) and g(n) be two functions from the set of natural numbers to the set of nonnegative real numbers. f(n) is said to be O(g(n)) if there exists a natural number n0 and a constant c > 0 such that

∀n ≥ n0, f(n) ≤ c·g(n).

Consequently, if lim_{n→∞} f(n)/g(n) exists, then

lim_{n→∞} f(n)/g(n) ≠ ∞ implies f(n) = O(g(n)).

Informally, this definition says that f grows no faster than some constant times g. The O-notation can also be used in equations as a simplification tool. This is helpful if we are not interested in the details of the lower order terms.
Ω (Big Omega) Notation

Definition: Let f(n) and g(n) be two functions from the set of natural numbers to the set of nonnegative real numbers. f(n) is said to be Ω(g(n)) if there exists a natural number n0 and a constant c > 0 such that f(n) ≥ c·g(n) for all n ≥ n0.

Informally, this definition says that f grows at least as fast as some constant times g. It is clear from the definition that f(n) is Ω(g(n)) if and only if g(n) is O(f(n)).

Θ (Theta) Notation

Definition: Let f(n) and g(n) be two functions from the set of natural numbers to the set of nonnegative real numbers. f(n) is said to be Θ(g(n)) if there exists a natural number n0 and two positive constants c1 and c2 such that

∀n ≥ n0, c1·g(n) ≤ f(n) ≤ c2·g(n).

Consequently, if lim_{n→∞} f(n)/g(n) exists, then

lim_{n→∞} f(n)/g(n) = c with 0 < c < ∞ implies f(n) = Θ(g(n)).
o (Small / Little o) Notation

Definition: Let f(n) and g(n) be two functions from the set of natural numbers to the set of nonnegative real numbers. f(n) is said to be o(g(n)) if for every constant c > 0 there exists a positive integer n0 such that f(n) < c·g(n) for all n ≥ n0.

Consequently, if lim_{n→∞} f(n)/g(n) exists, then

lim_{n→∞} f(n)/g(n) = 0 implies f(n) = o(g(n)).
ω (Small / Little omega) Notation

Properties of asymptotic notations
Data Structures and Algorithms Dept. of CSE, IIT(ISM) Dhanbad August 28, 2021 1
Outline
Introduction
One-Dimensional Array
Memory Allocation for an Array
Operations on Arrays
Multidimensional Arrays
Sparse Matrices
Introduction
An array is a finite, ordered collection of homogeneous data elements.
An array is finite because it contains only a limited number of elements, and ordered because all the elements are stored one by one, in contiguous locations of computer memory, in a linear fashion.
All the elements of an array are of the same data type (say, integer), and hence it is termed a collection of homogeneous elements.
Contd…
An array is particularly useful when we are dealing with a lot of variables of the same type.
For example, suppose we need to store the mathematics marks of 100 students.
To solve this problem, we either create 100 variables of int type or create an int array of size 100.
Obviously the second option is better, because keeping track of 100 different variables is a tedious task.
Dealing with an array, on the other hand, is simple and easy: all 100 values can be stored in the same array at different indexes (0 to 99).
Arrays
An array is a collection of similar data elements. These data
elements have the same data type.
An array is conceptually defined as a collection of <index,
value> pairs.
Each array element is referred to by specifying the array
name followed by one or more subscripts, with each
subscript enclosed in square brackets.
Each subscript must be expressed as a nonnegative integer.
Thus, in the n-element array x, the array elements are x[0],
x[1], x[2], …, x[n-1].
The number of subscripts determines the dimensionality of
the array. (for example x[i] refers to an element in one-
dimensional array x, and y[i][j] to an element in 2-D array y)
Declaration of Arrays
Arrays are declared using the following syntax:
data_type name_of_array[size_of_array];
float arr[10];
int mark[20];
Sample code to read input into an array:
int j, mark[20];
for (j = 0; j < 20; j++)
    scanf("%d", &mark[j]);
Array Memory Representation
The following diagram represents an integer array
that has 12 elements.
The index of the array starts with 0, so the array
having 12 elements has indexes from 0 to 11.
Memory Allocation of an Array
Operations on an Array
Retrieval of an element: Given an array index, retrieve
the corresponding value.
This can be accomplished in O(1) time. This is an important
advantage of the array relative to other structures.
Search: Given an element value, determine whether it is
present in the array.
If the array is unsorted, there is no good alternative to a
linear search that iterates through all of the elements in the
array and stops when the desired element is found.
In the worst case, this requires O(n) time.
If the array is sorted, Binary search can be used. The Binary
search only requires O(log n) time.
Contd…
Insertion at a required position: to insert an element at a given position, all of the existing elements from that position onward must be shifted one place to the right.
This requires O(n) time in the worst case.
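A sketch of this insertion in C (the function array_insert and its signature are my own; the caller must ensure the array has spare capacity):

```c
/* Insert value at position pos in a[0..(*n)-1]:
   shift a[pos..(*n)-1] one place right, then store the value.
   Shifting up to n elements makes this O(n) in the worst case. */
void array_insert(int a[], int *n, int pos, int value)
{
    for (int i = *n; i > pos; i--)
        a[i] = a[i - 1];   /* shift right, starting from the end */
    a[pos] = value;
    (*n)++;
}
```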
Contd…
Deletion at a desired position leaves a "vacant" location.
Actually, this location can never be vacant, because it refers to a word in memory which must contain some value.
Thus, if the program accesses a "vacant" location, it has no way to know that the location is vacant.
This requires O(n) time in the worst case.
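Deletion is therefore usually implemented by closing the gap, as in this sketch (the function array_delete is my own illustration):

```c
/* Delete the element at position pos by shifting a[pos+1..(*n)-1] one
   place left, so no "vacant" slot remains inside the array.  Shifting
   up to n-1 elements makes this O(n) in the worst case. */
void array_delete(int a[], int *n, int pos)
{
    for (int i = pos; i < *n - 1; i++)
        a[i] = a[i + 1];   /* shift left over the deleted slot */
    (*n)--;
}
```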
Sorted Arrays
There are several benefits to using sorted arrays, namely: searching is faster, computing order statistics (the ith smallest element) is O(1), etc.
Sorting an array may be considered a pre-processing step on the data to make subsequent queries efficient.
The idea is that we are often willing to invest some time at the beginning in setting up a data structure so that subsequent operations on it become faster.
Contd…
Some sorting algorithms, such as heap sort and
merge sort, require O(n log n) time in the worst
case,
whereas other, simpler sorting algorithms, such
as insertion sort, bubble sort and selection sort,
require O(n²) time in the worst case.
Quick sort has a worst-case time of O(n²), but
requires O(n log n) time on average.
Advantages and disadvantages of Arrays
Advantages
Reading an array element is simple and efficient; the time
complexity is O(1) in both the best and worst cases.
This is because any element can be read instantly using its index
(a base-address calculation behind the scenes) without traversing
the whole array.
Disadvantages
When using an array, we must decide its size at the
beginning, so if we do not know how many elements
we are going to store, choosing a size is difficult.
The size of the array is fixed, so if at a later point we need to
store more elements in it, that cannot be done. On the other
hand, if we store fewer elements than the declared
size, the remaining allocated memory is wasted.
Multidimensional Arrays
Row- or Column- Major Representation
Earlier representations of multidimensional arrays
mapped the location of each element of the
multidimensional array into a location of a one-
dimensional array.
Consider a two dimensional array with r rows and c
columns. The number of elements in the array n = rc.
The element in location [i][j], 0 ≤ i < r and 0 ≤ j < c, will be
mapped onto an integer in the range [0, n−1].
If this is done in row-major order — the elements of row 0
are listed in order from left to right followed by the
elements of row 1, then row 2, etc. — the mapping
function is ic + j.
If elements are listed in column-major order, the mapping
function is jr+i.
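The two mapping functions can be written directly in C (0-based indices; the function names are illustrative):

```c
/* Map the 2-D index [i][j] of an r x c array onto a 1-D offset. */

/* Row-major order: rows are laid out one after another. */
int row_major(int i, int j, int r, int c) { (void)r; return i * c + j; }

/* Column-major order: columns are laid out one after another. */
int col_major(int i, int j, int r, int c) { (void)c; return j * r + i; }
```

For a 3 x 4 array, element [1][2] lands at offset 1·4 + 2 = 6 in row-major order but at offset 2·3 + 1 = 7 in column-major order.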
Memory Allocation in 2-D Arrays (A of size m×n)
Logically, a matrix appears as two-dimensional but
physically it is stored in a linear fashion.
In order to map from the logical view to the physical
structure, we need an indexing formula.
Sparse Matrices
A matrix is sparse if a large number of its elements are
zeros.
Storing the null (zero) elements of a sparse matrix is
simply a waste of memory.
Rather than store such a matrix as a two-dimensional
array with lots of zeroes, a common strategy is to save
space by explicitly storing only the non-zero elements.
There are several ways to represent this matrix as a
one-dimensional array.
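One common one-dimensional scheme is the triplet (coordinate) representation: each non-zero element is stored as a (row, column, value) triple. The struct and function names below are our own, sketched for illustration:

```c
#include <stddef.h>

/* Triplet (coordinate) representation of a sparse matrix:
   each non-zero element is stored as a (row, col, value) triple. */
struct triplet { int row, col, value; };

/* Collect the non-zero entries of a dense r x c matrix (row-major)
   into out[]; returns the number of non-zero entries found.
   The caller must provide room for up to r*c triples. */
size_t to_triplets(const int *dense, int r, int c, struct triplet *out) {
    size_t k = 0;
    for (int i = 0; i < r; i++)
        for (int j = 0; j < c; j++)
            if (dense[i * c + j] != 0) {
                out[k].row = i;
                out[k].col = j;
                out[k].value = dense[i * c + j];
                k++;
            }
    return k;
}
```

A matrix with t non-zero entries then needs only 3t integers instead of r·c, a clear win when t is small.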
Contd…
Other sparse matrices may have an irregular or
unstructured pattern.
Contd…
Some well known sparse matrices which are symmetric
in form:
Memory Representation
Three Dimensional (3-D) Array
Reference Book
Classic Data Structures by D. Samanta
Thank You…
Any Queries...?
Outline I
1 Introduction
Arrays
2 Linear Arrays
Representation of Linear Arrays in Memory
Traversing Linear Arrays
Inserting and Deleting
Searching
Linear Search
Binary Search
Multidimensional Arrays
Matrices
Sparse Matrices
Introduction
Classification
Example
Inserting and Deleting
Let A be an array.
Insertion refers to the operation of adding another element to A;
deletion refers to removing one of the elements of A.
Inserting at the end can be done easily, provided the memory space
allocated for the array is large enough.
For insertion in the middle, on average half of the elements
must be moved downward to new locations to accommodate the new
element and keep the order of the other elements.
Similarly, deletion at the end presents no difficulty, while deletion in the
middle requires each subsequent element to be moved one location
upward to fill the array.
Here “downward” refers to locations with larger subscripts and
“upward” refers to locations with smaller subscripts.
Insertion Algorithm
INSERT(LA, N, K, ITEM)
[Inserts element ITEM into the Kth position of linear array LA with N elements.]
1. [Initialize counter.] Set J := N.
2. Repeat Steps 3 and 4 while J ≥ K:
3. [Move Jth element downward.] Set LA[J+1] := LA[J].
4. [Decrease counter.] Set J := J − 1.
[End of Step 2 loop.]
5. [Insert element.] Set LA[K] := ITEM.
6. [Reset N.] Set N := N + 1.
7. Exit.

Deletion Algorithm
DELETE(LA, N, K, ITEM)
[Deletes the Kth element from LA and assigns it to ITEM.]
1. Set ITEM := LA[K].
2. Repeat for J = K to N − 1:
[Move (J+1)st element upward.] Set LA[J] := LA[J+1].
[End of loop.]
3. [Reset the number N of elements in LA.] Set N := N − 1.
4. Exit.
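A C translation of the two algorithms might look as follows; note the 0-based indexing, so “position K” of the pseudocode becomes index k here, and the caller must ensure the array has spare capacity:

```c
/* Insert item at index k in la[0 .. *n-1], shifting later elements right. O(n). */
void insert_at(int *la, int *n, int k, int item) {
    for (int j = *n - 1; j >= k; j--)   /* move elements "downward" (larger indices) */
        la[j + 1] = la[j];
    la[k] = item;
    (*n)++;
}

/* Delete the element at index k, shifting later elements left; returns it. O(n). */
int delete_at(int *la, int *n, int k) {
    int item = la[k];
    for (int j = k; j < *n - 1; j++)    /* move elements "upward" (smaller indices) */
        la[j] = la[j + 1];
    (*n)--;
    return item;
}
```

Both loops touch up to n elements, which is the O(n) worst case noted earlier.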
Searching
Linear search (without sentinel):
1. [Initialize.] Set K := 1 and LOC := 0.
2. Repeat Steps 3 and 4 while LOC = 0 and K ≤ N.
3. If ITEM = DATA[K], then: Set LOC := K.
4. Set K := K + 1. [Increments counter.]
[End of Step 2 loop.]
5. [Successful?]
If LOC = 0, then:
Write: ITEM is not in the array DATA.
Else:
Write: LOC is the location of ITEM.
[End of If structure.]
6. Exit.

Linear search (with sentinel):
1. [Insert ITEM at end of DATA.] Set DATA[N+1] := ITEM.
2. [Initialize counter.] Set LOC := 1.
3. [Search for ITEM.]
Repeat while DATA[LOC] ≠ ITEM:
Set LOC := LOC + 1.
[End of loop.]
4. [Successful?] If LOC = N+1, then: Set LOC := 0.
5. Exit.
Complexity
Complexity: each comparison reduces the sample size by half;
hence the worst-case time complexity equals log₂ n, and binary
search is more efficient than linear search when the array is sorted.
Limitations: the binary search algorithm requires two conditions:
I The list must be sorted.
I One must have direct access to the middle element of any
sublist.
Multidimensional Arrays I
Two-dimensional arrays
A two-dimensional array A is a collection of m·n data elements
represented as A[J, K] such that 1 ≤ J ≤ m and 1 ≤ K ≤ n.

A[1, 1] A[1, 2] A[1, 3] A[1, 4]
A[2, 1] A[2, 2] A[2, 3] A[2, 4]
A[3, 1] A[3, 2] A[3, 3] A[3, 4]

A two-dimensional array with 3 rows and 4 columns.
Row-major order:
LOC(A[K1, K2, …, KN]) = Base(A) + w[(…((E1·L2 + E2)·L3 + E3)·L4 + … + EN−1)·LN + EN]
where w is the number of words per memory cell, Li is the length of the
ith dimension, and Ei = Ki − (lower bound of the ith dimension).
Matrices
Matrix multiplication algorithm
MATMUL(A, B, C, M, P, N)
Here A is an m×p matrix, B is a p×n matrix, and C = A·B is an m×n matrix.
Complexity: O(m·n·p).
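A straightforward C sketch of MATMUL using flattened row-major matrices (the function name and layout are our choices) makes the three nested loops, and hence the m·n·p cost, explicit:

```c
/* Multiply an m x p matrix A by a p x n matrix B into the m x n matrix C.
   All matrices are stored flat in row-major order.
   The three nested loops give the O(m*n*p) complexity stated above. */
void matmul(const int *a, const int *b, int *c, int m, int p, int n) {
    for (int i = 0; i < m; i++)
        for (int j = 0; j < n; j++) {
            int sum = 0;
            for (int k = 0; k < p; k++)          /* dot product of row i and column j */
                sum += a[i * p + k] * b[k * n + j];
            c[i * n + j] = sum;
        }
}
```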
Sparse matrices I
The second matrix of the figure has many zero entries; such a
matrix is called sparse.
In the second matrix, only 8 out of 36 entries are non-zero.
Matrices with a relatively high proportion of zero entries are called
sparse matrices, for example lower and upper triangular matrices.
Here one may save space by storing only the non-zero entries.
Outline I
1 Linked Lists
Representation of Linked Lists in Memory
Traversing a Linked List
Searching a Linked List
Memory Allocation
Insertion into a Linked List
Deletion from a Linked List
Header Linked Lists
Two-way Lists
Polynomial
Sparse Matrix
Introduction
The pointer of the last node contains a special value, called the
null pointer, which is an invalid address; it signals the end of
the list.
The linked list also contains a list pointer variable, called START
or NAME, which contains the address of the first node in the
list; hence there is an arrow drawn from START to the first node.
Clearly, we need only this address in START to trace through the
list.

Advantages of linked lists over arrays:
1. Dynamic size
2. Ease of insertion/deletion

Disadvantages of linked lists over arrays:
1. Random access is not allowed.
2. Extra memory space for a pointer is required with each
element of the list.
Representation of Linked Lists in Memory
First of all, LIST requires two linear arrays, INFO and LINK,
such that INFO[K] and LINK[K] contain, respectively, the information
part and the next-pointer field of a node of LIST.
LIST also requires a variable name, such as START, which contains
the location of the beginning of the list,
and a next-pointer sentinel, denoted by NULL, which indicates
the end of the list.
Example:
START = 9, so INFO[9] = N is the first character.
LINK[9] = 3, so INFO[3] = O is the second character.
LINK[3] = 6, so INFO[6] = (blank) is the third character.
LINK[6] = 11, so INFO[11] = E is the fourth character.
LINK[11] = 7, so INFO[7] = X is the fifth character.
LINK[7] = 10, so INFO[10] = I is the sixth character.
LINK[10] = 4, so INFO[4] = T is the seventh character.
LINK[4] = 0, the NULL value, so the list has ended.
In other words, NO EXIT is the character string.
Traversing a Linked List
In traversing a singly linked list, we visit every node in the list,
starting from the first node through to the last node.
Our traversing algorithm uses a pointer variable PTR which points
to the node that is currently being processed.
Algorithm (Traversing a Linked List): Let LIST be a linked list in memory. This
algorithm traverses LIST, applying an operation PROCESS to each
element of LIST. The variable PTR points to the node currently being
processed.
1. Set PTR := START. [Initializes pointer PTR.]
2. Repeat Steps 3 and 4 while PTR ≠ NULL.
3. Apply PROCESS to INFO[PTR].
4. Set PTR := LINK[PTR]. [PTR now points to the next node.]
[End of Step 2 loop.]
5. Exit.

The procedure below finds the number NUM of elements in a
linked list.
We traverse the linked list in order to count the number of elements;
hence the procedure is very similar to the traversing algorithm above.
Procedure: COUNT(INFO, LINK, START, NUM)
1. Set NUM := 0. [Initializes counter.]
2. Set PTR := START. [Initializes pointer.]
3. Repeat Steps 4 and 5 while PTR ≠ NULL.
4. Set NUM := NUM + 1. [Increases NUM by 1.]
5. Set PTR := LINK[PTR]. [Updates pointer.]
[End of Step 3 loop.]
6. Return.
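Using C pointers instead of the INFO/LINK arrays, the COUNT procedure can be sketched as follows (the struct layout matches the node type used elsewhere in these notes):

```c
#include <stddef.h>

struct node { int data; struct node *next; };

/* Count the nodes of a singly linked list: a C analogue of COUNT above.
   The for-loop plays the role of "Repeat while PTR != NULL". */
int count_nodes(const struct node *start) {
    int num = 0;
    for (const struct node *ptr = start; ptr != NULL; ptr = ptr->next)
        num++;                          /* visit each node exactly once */
    return num;
}
```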
Searching a Linked List
LIST is unsorted: we require two tests. First we check
whether we have reached the end of the list, i.e., whether
PTR = NULL; if not, we then check whether INFO[PTR] = ITEM.
1. Set PTR := START.
2. Repeat Step 3 while PTR ≠ NULL:
3. If ITEM = INFO[PTR], then:
Set LOC := PTR, and Exit.
Else:
Set PTR := LINK[PTR]. [PTR now points to the
next node.]
[End of If structure.]
[End of Step 2 loop.]
4. [Search is unsuccessful.] Set LOC := NULL.
5. Exit.

The complexity is the same as that of other linear search algorithms.
In a sorted linear array we could instead apply a binary search, whose
running time is proportional to log₂ n.
Insertion into a Linked List
Algorithms which insert nodes into linked lists come up in various
situations. We discuss three of them here.
Algorithm (inserting ITEM into a sorted linked list):
1. [Use the searching procedure to find the location of the node preceding
the insertion point of ITEM.]
Call SRCHSL(INFO, LINK, START, ITEM, LOC).
2. [Use the insertion algorithm to insert ITEM after the node with
location LOC.]
Call INSLOC(INFO, LINK, START, AVAIL, LOC, ITEM).
3. Exit.
Deletion from a Linked List
All of our deletion algorithms will return the memory space of the
deleted node N to the beginning of the AVAIL list.
Accordingly, all of our algorithms will include the following pair of
assignments, where LOC is the location of the deleted node N:
LINK[LOC] := AVAIL and then AVAIL := LOC
Suppose node N lies between nodes A and B. Observe that three
pointer fields are changed as follows:
1. The next-pointer field of node A now points to node B, where
node N previously pointed.
2. The next-pointer field of N now points to the original first node
in the free pool, where AVAIL previously pointed.
3. AVAIL now points to the deleted node N.
There are also two special cases. If the deleted node N is the first
node in the list, then START will point to node B, and
if the deleted node N is the last node in the list, then node A will
contain the NULL pointer.
Deleting the Node Following a Given Node
Algorithm: DEL(INFO, LINK, START, AVAIL, LOC, LOCP)
This algorithm deletes the node N with location LOC. LOCP is the
location of the node which precedes N or, when N is the first node,
LOCP = NULL.
The assignment LINK[LOCP] := LINK[LOC] effectively deletes the
node N when N is not the first node.
Deleting the Node with a Given ITEM of Information
Traverse the list, using a pointer variable PTR and comparing
ITEM with INFO[PTR] at each node.
While traversing, keep track of the location of the preceding node
by using a pointer variable SAVE.
SAVE and PTR are updated by the assignments SAVE := PTR
and PTR := LINK[PTR].
The traversal continues as long as INFO[PTR] ≠ ITEM.
Then PTR contains the location LOC of node N and SAVE
contains the location LOCP of the node preceding N.

Algorithm: DELETE(INFO, LINK, START, AVAIL, ITEM)
This algorithm deletes from a linked list the first node N which contains the given ITEM
of information.
1. [Use the above procedure to find the location of N and its preceding node.]
2. If LOC = NULL, then: Write: ITEM not in list, and Exit.
3. [Delete node.]
If LOCP = NULL, then:
Set START := LINK[START]. [Deletes first node.]
Else:
Set LINK[LOCP] := LINK[LOC].
[End of If structure.]
4. [Return deleted node to the AVAIL list.]
Set LINK[LOC] := AVAIL and AVAIL := LOC.
5. Exit.
Represent a Polynomial with a Linked List
Here we will assume that the linked list is ordered by the power
for the simplicity of algorithms.
Since both linked lists are ordered by power, we can use a
two-pointer method to merge the two sorted linked lists:
To generate the final linked list, we can first merge-sort the
linked list based on each node's power.
This linked list contains all the terms we need to generate the final
result.
After the sorting, the like-term nodes are grouped together.
Then, we can merge each group of like terms and get the
final multiplication result.
Consider the same sparse matrix used above in the triplet
representation.
Sparse Matrix
Dr. Arup Kumar Pal
Department of Computer Science & Engineering
Indian Institute of Technology (ISM), Dhanbad
Jharkhand-826004
E-mail: arupkrpal@iitism.ac.in
Data Structures and Algorithms Dept. of CSE, IIT(ISM) Dhanbad September 8, 2021 1
Outline
Introduction
Single Linked List
Representation
Operations
Double Linked List
Circular Linked List
Applications of Linked List
Sparse Matrix Manipulation
Polynomial Representation
Generalized Lists (Not in your syllabus)
Introduction
An array stores the data in contiguous memory
locations.
In order to occupy the adjacent space, a block
of memory that is required for the array should
be allocated before hand.
Once the memory is allocated, it cannot be
extended any more.
Deleting an element or inserting an element
may require shifting of elements.
Contd…
The linked list is an alternative to the array when a
collection of objects is to be stored.
The linked list is implemented using pointers.
Thus, an element (or node) of a linked list
contains the actual data to be stored and a
pointer to the next node.
The pointer is simply the address in memory of
the next node.
Thus, a key difference from arrays is that a linked
list does not have to be stored contiguously in
memory.
Singly Linked List
A Linear Linked List is also called singly linked list or one way
linked list.
In the singly linked list each node is divided into two parts.
Information field – contains the user information.
Next pointer field – contains the address of next node.
Another pointer variable, for example head, is used. That
contains the address of the first node in the list.
The next pointer field of the last node contains NULL value to
indicate the end of the list.
Representation
Example: list of student records
Alternative Way
Instead of statically declaring the structures n1,
n2, n3, we can dynamically allocate space for the
nodes, calling malloc() individually for every node
allocated.
This is the usual way to work with linked lists,
as the number of elements in the list is usually not
known in advance (if it were known, we could have
used arrays).
Contd…
struct node
{
int data;
struct node *next;
};
typedef struct node NODE;
//type definition making it abstract data type
NODE *start;
start=(NODE *)malloc(sizeof(NODE));
//dynamic memory allocation
Contd…
• Now, we can assign values to the respective
fields of NODE:
start->data = 10;
start->next = NULL; /* NULL, not the character '\0', marks the end of the list */
Contd…
• Any number of nodes can be created and linked
to the existing node:
start->next = (NODE *)malloc(sizeof(NODE));
start->next->data = 10; /* -> is required, since start->next is a pointer */
start->next->next = NULL;
To create and to display
void main()
{ struct node
{ int num;
struct node *ptr;
};
typedef struct node NODE;
NODE *start, *first, *temp = 0;
int count = 0;
int choice = 1;
first = 0;
while (choice)
{ start = (NODE *)malloc(sizeof(NODE));
printf("Enter the data item\n");
scanf("%d", &start-> num);
if (first != 0)
{ temp->ptr = start; temp = start; }
else
{ first = temp = start; }
printf("Do you want to continue(Type 0 or 1)?\n");
scanf("%d", &choice);
}
Contd…
temp->ptr = 0;
/* reset temp to the beginning */
temp = first;
printf("Display of the linked list is\n");
while (temp != 0)
{
printf("%d=>", temp->num);
count++;
temp = temp -> ptr;
}
printf("NULL\n");
printf("No. of nodes in the list = %d\n", count);
}
LIST is Unsorted
Algorithm : SEARCH(INFO, LINK, START, ITEM, LOC)
LIST is a linked list in memory. This algorithm finds the location
LOC of the node where ITEM first appears in LIST, or sets LOC
=NULL.
Begin:
1. Set PTR := START.
2. Repeat Step 3 while PTR ≠ NULL:
3. If ITEM = INFO[PTR], then: Set LOC := PTR, and Exit.
Else: Set PTR := LINK[PTR]. [PTR now points to the next node.]
[End of If structure.]
[End of Step 2 loop.]
4. [Search is unsuccessful.] Set LOC := NULL.
5. Exit.
End
Data Structures and Algorithms Dept. of CSE, IIT(ISM) Dhanbad September 8, 2021 18
Contd…
The complexity of this algorithm is the same as
that of the linear search algorithm in the context
of arrays.
The worst-case running time is proportional to
the number n of elements in LIST.
LIST is Sorted
Algorithm : SEARCH(INFO, LINK, START, ITEM, LOC)
LIST is a sorted list in memory. This algorithm finds the location LOC of
the node where ITEM first appears in LIST, or sets LOC= NULL.
Begin:
1. Set PTR := START.
2. Repeat Step 3 while PTR ≠ NULL:
3. If ITEM < INFO[PTR], then: Set PTR := LINK[PTR]. [PTR now points to next
node.]
Else if ITEM = INFO[PTR], then: Set LOC := PTR, and Exit. [Search is
successful.]
Else:
Set LOC := NULL, and Exit. [ITEM now exceeds INFO[PTR].]
[End of If structure.]
[End of Step 2 loop.]
4. Set LOC := NULL.
5. Exit.
End
Contd…
The complexity of this algorithm is still
proportional to the number n of elements in
LIST.
With a sorted linear array we can apply a
binary search algorithm.
A binary search algorithm cannot be applied to
a sorted linked list, since there is no way of
indexing the middle element in the list.
This property is one of the main drawbacks in
using a linked list as a data structure.
Insertion into a Linked List
Insertion at the Beginning of a List
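A minimal C sketch of insertion at the beginning, using a node type like the NODE struct shown earlier in these notes (error handling for a failed malloc is omitted for brevity):

```c
#include <stdlib.h>

struct node { int data; struct node *next; };

/* Insert a new node carrying `item` at the front of the list; returns
   the new head. O(1): no shifting, unlike insertion into an array. */
struct node *insert_front(struct node *head, int item) {
    struct node *n = malloc(sizeof *n);
    n->data = item;
    n->next = head;      /* new node points to the old first node */
    return n;            /* new node becomes the head of the list */
}
```

Only one pointer is rewritten, which is why front insertion in a linked list is constant time.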
Inserting at the End of the List
Inserting after a Given Node
Deleting a Node at the Front of the List
Deleting a Node at the End of the List
Deleting a Node at Any Position
Polynomial Representation
A polynomial can be represented as:
f(x) = aₘxᵐ + aₘ₋₁xᵐ⁻¹ + … + a₁x¹ + a₀x⁰
For example:
A(x) = 3x¹² + 5x³ + 2
Polynomial Addition
Contd…
Algorithm
Contd…
Polynomial Multiplication
Circular Linked List
Circular header lists are frequently used instead of
ordinary linked lists because many operations are much
easier to state and implement using header lists.
In circular lists the null pointer is not used; hence all
pointers contain valid addresses.
Every (ordinary) node has a predecessor, so the first
node does not require a special case.
Deleting First Node from Circular Linked List
Delete last node from Circular Linked List
Double Linked List
Each list discussed before is type of a one-way list,
where we can traverse the list in only one
direction.
A double linked list is a linear collection of data
elements, called nodes, where each node N is
divided into three parts:
An information field INFO which contains the data of N
A pointer field FORW which contains the location of the
next node in the list
A pointer field BACK which contains the location of the
preceding node in the list
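In C, such a node and an O(1) link operation can be sketched as follows; the text's field names INFO, FORW and BACK become data, next and prev here (our naming):

```c
#include <stddef.h>

/* A node of a two-way (doubly linked) list. */
struct dnode {
    int data;              /* information field (INFO) */
    struct dnode *next;    /* location of the next node (FORW) */
    struct dnode *prev;    /* location of the preceding node (BACK) */
};

/* Link node b immediately after node a in O(1), fixing both directions. */
void link_after(struct dnode *a, struct dnode *b) {
    b->prev = a;
    b->next = a->next;
    if (a->next)           /* old successor, if any, must point back to b */
        a->next->prev = b;
    a->next = b;
}
```

The extra BACK/prev pointer is what allows traversal in both directions, at the cost of one more pointer per node.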
Sparse Matrix
Generalized Lists
Suppose we want to represent the given
polynomial using linear list
Some Examples of Generalized List
Polynomial Representation
Suppose P = 3x²y
Thank You…
Any Queries...?
Outline I
1 Stacks
Stack Operations
2 Array Representation of Stacks
3 Linked Representation of Stacks
4 Applications of Stacks
5 Arithmetic Expressions; Polish Notation
Stacks I
In this way a stack is a LIFO (Last In, First Out) or, equivalently,
FILO (First In, Last Out) data structure.
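A minimal array-based stack in C illustrating the LIFO behaviour (MAXSTK and the function names are our own choices):

```c
/* Array representation of a stack; top == -1 means the stack is empty.
   push/pop return 0 on success, -1 on overflow/underflow. */
#define MAXSTK 100

struct stack { int item[MAXSTK]; int top; };   /* initialise top to -1 */

int push(struct stack *s, int x) {
    if (s->top == MAXSTK - 1) return -1;       /* overflow: stack is full */
    s->item[++s->top] = x;
    return 0;
}

int pop(struct stack *s, int *x) {
    if (s->top == -1) return -1;               /* underflow: stack is empty */
    *x = s->item[s->top--];
    return 0;
}
```

Pushing 10 then 20 and popping twice yields 20 first, then 10: last in, first out.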
Stack Operations I
Linked Representation of Stacks
A linked stack can be maintained using a one-way (singly linked)
list. The condition TOP = NULL indicates that the stack is empty.

PUSH(INFO, LINK, TOP, AVAIL, ITEM):
1. [Available space?] If AVAIL = NULL, then: Write: OVERFLOW, and Exit.
2. [Remove first node from AVAIL list.]
Set NEW := AVAIL and AVAIL := LINK[AVAIL].
3. Set INFO[NEW] := ITEM. [Copies ITEM into new node.]
4. Set LINK[NEW] := TOP. [New node points to the original top node in the stack.]
5. Set TOP := NEW. [Reset TOP to point to the new node at the top of the stack.]
6. Exit.

POP(INFO, LINK, TOP, AVAIL, ITEM):
1. [Stack has an item to be removed?]
If TOP = NULL, then: Write: UNDERFLOW, and Exit.
2. Set ITEM := INFO[TOP]. [Copies the top element of the stack into ITEM.]
3. Set TEMP := TOP and TOP := LINK[TOP].
[Remember the old value of the TOP pointer in TEMP and reset TOP to point to
the next element in the stack.]
4. [Return deleted node to the AVAIL list.]
Set LINK[TEMP] := AVAIL and AVAIL := TEMP.
5. Exit.
Applications of Stacks
Evaluation of Expressions
Operator precedence: in arithmetic expressions, operator precedence is
observed (exponentiation ↑ first, then multiplication and division, then
addition and subtraction).
An example:
I Evaluate: 2 ↑ 3 + 5 ∗ 2 ↑ 2 − 12/6
I Answer: 8 + 20 − 2 = 26
An Important Fact:
I Parentheses alter the precedence of operators.
Examples:
I (A + B) ∗ C ≠ A + (B ∗ C)
I (2 + 3) ∗ 7 = 35 while 2 + (3 ∗ 7) = 23
How does a computer evaluate arithmetic expressions? That is the
question we want to answer.

Infix Notation:
I Expressions in which the operator lies between the operands
are referred to as infix notation.
I A+B, C−D, P∗F, … are all in infix notation.
I A+(B∗C) and (A+B)∗C are distinguished by parentheses or
by applying the operator precedence discussed above.
Evaluation of a Postfix Expression: This algorithm finds the VALUE of an arithmetic expression P written in postfix notation.
1. Add a sentinel ")" at the end of P.
2. Scan P from left to right and repeat Steps 3 and 4 for each element of P until the sentinel ")" is encountered.
3. If an operand is encountered, put it on STACK.
4. If an operator ⊗ is encountered, then:
   (a) Remove the two top elements of STACK, where A is the top element and B is the next-to-top element.
   (b) Evaluate B ⊗ A.
   (c) Place the result of (b) back on STACK.
5. Set VALUE equal to the top element on STACK.
6. Exit.
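The postfix evaluation procedure can be sketched in C for the simple case of single-digit operands and the operators + − * /; the function name `evalPostfix` and the fixed-size array stack are illustrative assumptions:

```c
#include <ctype.h>

/* Evaluate a postfix expression of single-digit operands and + - * /,
   e.g. "23*5+". An array-based stack plays the role of STACK. */
int evalPostfix(const char *p) {
    int stack[64], top = -1;
    for (; *p != '\0'; p++) {
        if (isdigit((unsigned char)*p)) {
            stack[++top] = *p - '0';      /* operand: push onto STACK */
        } else {
            int a = stack[top--];         /* A: top element */
            int b = stack[top--];         /* B: next-to-top element */
            switch (*p) {                 /* evaluate B op A, push result */
            case '+': stack[++top] = b + a; break;
            case '-': stack[++top] = b - a; break;
            case '*': stack[++top] = b * a; break;
            case '/': stack[++top] = b / a; break;
            }
        }
    }
    return stack[top];                    /* VALUE: top of STACK */
}
```

For example, `evalPostfix("23*5+")` evaluates 2 ∗ 3 + 5.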
2. Scan A from right to left and repeat Steps 3 to 6 for each element of A until the STACK is empty.
   [If an operator is encountered:]
   (a) Repeatedly pop from STACK and add to B each operator (on the top of STACK) which has the same or higher precedence than the scanned operator.
   [If a parenthesis is encountered:]
   (a) Repeatedly pop from STACK and add to B each operator on the top of STACK until the matching parenthesis is encountered.
7. Exit.
Expression: ( A + B ∧ C ) * D + E ∧ 5
1. Reverse the infix expression: 5 ∧ E + D * ) C ∧ B + A (
4. Decrement P by 1 and go to Step 2 as long as there are characters left to be scanned in the expression.
6. End.
Stacks
Arithmetic Expressions; Polish Notation
Expression: + 9 * 2 6
(The prefix expression is scanned from right to left.)

Character Scanned | Stack (Front to Back) | Explanation
6                 | 6                     | 6 is an operand, push to Stack
2                 | 6 2                   | 2 is an operand, push to Stack
*                 | 12                    | * is an operator, pop 6 and 2, multiply them and push the result (12) to Stack
9                 | 12 9                  | 9 is an operand, push to Stack
+                 | 21                    | + is an operator, pop 12 and 9, add them and push the result (21) to Stack

Result: 21
Queues

Outline
1 Queues
2 Array Representation of Queues
3 Linked Representation of Queues
4 Deques
5 Priority Queues
Array Representation of Queues

Suppose we want to add an element (ITEM) to the queue when its last part is occupied (i.e., REAR = N) but space is available in the first part of the queue.
One way to do this is simply to move the entire queue to the beginning of the array, changing FRONT and REAR accordingly, and then insert ITEM as above.
This procedure may be very expensive.
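The standard cheaper alternative (not spelled out on the slide) is to treat the array as circular, so the indices wrap around instead of the elements being moved. A C sketch under that assumption; the names `qinsert`/`qdelete` and the capacity `N` are illustrative:

```c
#define N 5   /* capacity of the queue */

/* Circular array queue: indices wrap modulo N instead of shifting
   elements. front == -1 means the queue is empty. */
struct Queue {
    int data[N];
    int front, rear;
};

int qinsert(struct Queue *q, int item) {
    if ((q->front == 0 && q->rear == N - 1) ||
        (q->rear + 1) % N == q->front)
        return 0;                       /* OVERFLOW: queue is full */
    if (q->front == -1)                 /* queue was empty */
        q->front = q->rear = 0;
    else
        q->rear = (q->rear + 1) % N;    /* wrap around */
    q->data[q->rear] = item;
    return 1;
}

int qdelete(struct Queue *q, int *item) {
    if (q->front == -1)
        return 0;                       /* UNDERFLOW: queue is empty */
    *item = q->data[q->front];
    if (q->front == q->rear)            /* last element removed */
        q->front = q->rear = -1;
    else
        q->front = (q->front + 1) % N;
    return 1;
}
```

Insertion and deletion are both O(1), with no wholesale moving of the queue.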
FRONT = REAR ≠ NULL [the queue contains exactly one element]
Linked Representation of Queues

Insertion into a linked queue:
1. [Available space?] If AVAIL = NULL, then Write OVERFLOW and Exit.
2. [Remove first node from AVAIL list] Set NEW := AVAIL and AVAIL := LINK[AVAIL].
3. Set INFO[NEW] := ITEM and LINK[NEW] := NULL. [Copies ITEM into new node]
4. If FRONT = NULL, then Set FRONT := REAR := NEW [Queue was empty]; Else Set LINK[REAR] := NEW and REAR := NEW. [New node is added at the rear]
5. Exit.
Deletion from a linked queue:
1. [Queue empty?] If FRONT = NULL, then Write UNDERFLOW and Exit.
2. Set TEMP := FRONT. [If the linked queue is nonempty, remember FRONT in a temporary variable TEMP]
3. Set ITEM := INFO(TEMP). [Copies the front element into ITEM]
4. Set FRONT := LINK(TEMP). [Reset FRONT to point to the next element in the queue]
5. Set LINK(TEMP) := AVAIL and AVAIL := TEMP. [Return the deleted node TEMP to the AVAIL list]
6. Exit.
Deques
Priority Queues

A priority queue is a collection of elements such that each element has been assigned a priority, and such that the order in which elements are deleted and processed comes from the following rules:
1. An element of higher priority is processed before any element of lower priority.
2. Two elements with the same priority are processed according to the order in which they were added to the queue.
One-Way List Representation of a Priority Queue

Priority numbers operate in the usual way: the lower the priority number, the higher the priority.
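A minimal C sketch of the one-way list representation (the names `PNode`, `pq_insert`, and `pq_delete` are illustrative): the list is kept sorted by ascending priority number, so deletion is always from the front.

```c
#include <stdlib.h>

/* One-way list node: lower priority number = higher priority. */
struct PNode {
    int info;
    int prn;                    /* priority number */
    struct PNode *link;
};

/* Insert ITEM after every node with the same or a smaller priority
   number, so equal priorities keep FIFO order (rule 2 above). */
void pq_insert(struct PNode **front, int item, int prn) {
    struct PNode *n = malloc(sizeof *n);
    n->info = item;
    n->prn = prn;
    if (*front == NULL || prn < (*front)->prn) {
        n->link = *front;       /* new highest-priority element */
        *front = n;
    } else {
        struct PNode *p = *front;
        while (p->link != NULL && p->link->prn <= prn)
            p = p->link;
        n->link = p->link;
        p->link = n;
    }
}

/* Delete removes the front node: the highest-priority,
   earliest-added element (rules 1 and 2). */
int pq_delete(struct PNode **front, int *item) {
    if (*front == NULL) return 0;   /* underflow */
    struct PNode *temp = *front;
    *item = temp->info;
    *front = temp->link;
    free(temp);
    return 1;
}
```

Insertion is O(n) in the worst case (the walk down the list), while deletion is O(1).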
Data Structures and Algorithms Dept. of CSE, IIT(ISM) Dhanbad September 25, 2021 1
Outline
• Basic Terminologies
• Sorting Techniques
• Conclusions
Introduction
Sorting is one of the fundamental operations in
computer science.
Sorting is one of the most common data-processing applications.
A sorting algorithm is an algorithm that puts
elements of a list in a certain order, such as
increasing or decreasing, with numerical data, or
alphabetically, with character data.
Efficient sorting is important for optimizing the
use of other algorithms (such as search and
merge algorithms) which require input data to be
in sorted lists.
Sorting
Let A be a list of n elements A1, A2, …, An in
memory.
Sorting A refers to the operation of rearranging
the contents of A so that they are increasing in
order (numerically or lexicographically), that is, so
that A1 ≤ A2 ≤ A3 ≤ … ≤An
Since A has n elements, there are n! ways that the
contents can appear in A.
These ways correspond precisely to the n!
permutations of 1, 2,…, n. Accordingly, each
sorting algorithm must take care of these n!
possibilities.
Properties of Sorting
Stable Sort: A sorting algorithm is stable if the
relative order of elements with the same key
value is preserved by the algorithm.
Properties of Sorting
In-place Sort: Suppose an array data structure is used to store the data; then the sorting takes place within the array itself, without using any extra storage space.
In other words, an in-place sort does not require memory beyond the list itself, and hence it is a memory-efficient method.
Programmer’s view
1. How to rearrange a given set of data?
2. Which data structures are more suitable to
store data prior to their sorting?
3. How fast can the sorting be achieved?
4. How can sorting be done in memory
constraint situation?
5. How to sort various types of data?
Sorting Techniques

Internal sorting is divided into two classes: sorting by comparison and sorting by distribution.
Contd…
Sorting by Comparison: The basic operation involved in
this type of sorting technique is comparison.
A data item is compared with other items in the list of
items in order to find its place in the sorted list.
Comparison is followed by an arrangement process such as:
Insertion (The item chosen is then inserted into an
appropriate position relative to the previously sorted items)
Selection (Selected item, say smallest/largest item may be
placed in proper place)
Exchange (Here, items are interchanged)
Merge (Two or more lists are merged into an output list)
Contd…
Sorting by Distribution: In this type of sorting,
no key comparison takes place.
Distribution of items is based on the following
choices:
Radix (An item is placed in a position decided by the bases (radixes) of the components of which it is composed.)
Counting (Items are sorted based on their relative
counts)
Sorting by Exchange
Here we will discuss sorting algorithms which are based on the principle of "sorting by exchange".
The following sorts are based on this principle:
Bubble Sort
Quick Sort
Shell Sort (not discussed in this course)
Bubble Sort
Bubble sort is the simplest sorting algorithm.
This technique is simple in the sense that it is easy
to understand, easy to implement, and easy to
analyze.
It works by iterating the input array from the first
element to the last, comparing each pair of
elements and swapping them if needed.
Bubble sort continues its iterations until no more
swaps are needed.
The algorithm gets its name from the way
smallest/largest elements “bubble” to the top of
the list.
Bubble Sort
Suppose the list of numbers A[l], A[2], …, A[N] is in memory. The bubble
sort algorithm works as follows:
Step 1. Compare A[1] and A[2] and arrange them in the desired order, so that A[1] < A[2]. Then compare A[2] and A[3] and arrange them so that A[2] < A[3]. Continue until we compare A[N – 1] with A[N] and arrange them so that A[N – 1] < A[N].
   Remarks: Step 1 involves N – 1 comparisons; the largest element is "bubbled up" to the Nth position; when Step 1 is completed, A[N] will contain the largest element.
Step 2. Repeat Step 1 with one less comparison; that is, now we stop after we compare and possibly rearrange A[N – 2] and A[N – 1].
   Remarks: Step 2 involves N – 2 comparisons; when Step 2 is completed, the second largest element will occupy A[N – 1].
Step 3. Repeat Step 1 with two fewer comparisons; that is, we stop after we compare and possibly rearrange A[N – 3] and A[N – 2].
   Remarks: Step 3 involves N – 3 comparisons; when Step 3 is completed, the third largest element will occupy A[N – 2].
Contd…
After n – 1 steps, the list will be sorted.
Each of these steps is called a pass, and the bubble sort algorithm requires n – 1 passes, where n is the number of input items.
Example
Suppose the following numbers are stored in an array A: 32, 51, 27, 85, 66, 23,
13, 57
Pass 1:
At the end of this first pass, the largest number, 85, has moved to the last
position.
However, the rest of the numbers are not sorted, even though some of them
have changed their positions.
Contd…
Pass 2
At the end of Pass 2, the second largest number, 66, has moved its way
down to the next-to-last position.
Pass 7. Finally, A[1] is compared with A[2]. Since 13 < 23, no interchange takes place. Since the list has 8 elements, it is sorted after the seventh pass.
Algorithm
Algorithm 4.4: BUBBLESORT(A, N)
Here A is an array with N elements.
1. Repeat Steps 2 and 3 for K = 1 to N – 1.
2. Set PTR := 1. [Initializes pass pointer PTR.]
3. Repeat while PTR ≤ N – K: [Executes pass.]
   (a) If A[PTR] > A[PTR + 1], then:
       Interchange A[PTR] and A[PTR + 1].
       [End of If structure.]
   (b) Set PTR := PTR + 1.
   [End of inner loop.]
[End of Step 1 outer loop.]
4. Exit.

void swap(int *a, int *b)
{
    int t = *a; *a = *b; *b = t;
}

void bubbleSort(int arr[], int n)
{
    int i, j;
    for (i = 0; i < n-1; i++)
        for (j = 0; j < n-i-1; j++)
            if (arr[j] > arr[j+1])
                swap(&arr[j], &arr[j+1]);
}
Complexity of Bubble Sort Algorithm
Traditionally, the time for a sorting algorithm is measured in
terms of the number of comparisons.
There are n-1 comparisons during the first pass, which places the largest element in the last position; there are n-2 comparisons in the second pass, which places the second largest element in the next-to-last position; and so on.
Accordingly, f(n) = (n-1) + (n-2) + … + 2 + 1 = n(n-1)/2 = O(n²).
Contd…
The function above always runs in O(n²) time, even if the array is already sorted.
To address this, we can introduce a Boolean variable FLAG.
FLAG is set true if a swap of elements occurs during a pass; otherwise it remains false.
If no swapping occurs during an iteration, the value of FLAG will be false.
This means the elements are already sorted and there is no need to perform further iterations.
This reduces the execution time and helps to optimize the bubble sort.
Contd...
#include <stdbool.h>   /* for bool; swap() as defined earlier */

void bubbleSort(int arr[], int n)
{
    int i, j;
    bool FLAG;
    for (i = 0; i < n-1; i++)
    {
        FLAG = false;
        for (j = 0; j < n-i-1; j++)
        {
            if (arr[j] > arr[j+1])
            {
                swap(&arr[j], &arr[j+1]);
                FLAG = true;
            }
        }
        // If no two elements were swapped by the inner loop, then break
        if (FLAG == false) break;
    }
}
Insertion Sort
Approach
There remains only the problem of deciding how to insert A[K]
in its proper place in the sorted subarray A[1], A[2], …,A[K – 1].
This can be accomplished by comparing A[K] with A[K – 1], comparing A[K] with A[K – 2], comparing A[K] with A[K – 3], and so on, until first meeting an element A[J] such that A[J] ≤ A[K].
Then each of the elements A[K – 1], A[K – 2], …, A[J + 1] is moved forward one location, and A[K] is then inserted in the (J + 1)st position in the array.
The algorithm is simplified if there always is an element A[J]
such that A[J] ≤ A[K]; otherwise we must constantly check to
see if we are comparing A[K] with A[1].
This condition can be accomplished by introducing a sentinel
element A[0] = – ∞ (or a very small number).
Example
Suppose an array A contains 8 elements as
follows: 77, 33, 44, 11, 88, 22, 66, 55
Algorithm
Algorithm: INSERTIONSORT(A, N)
This algorithm sorts the array A with N elements.
1. Set A[0] := –∞. [Initializes sentinel element.]
2. Repeat Steps 3 to 5 for K = 2, 3, …, N:
3. Set TEMP := A[K] and PTR := K – 1.
4. Repeat while TEMP < A[PTR]:
   (a) Set A[PTR + 1] := A[PTR]. [Moves element forward.]
   (b) Set PTR := PTR – 1.
   [End of loop.]
5. Set A[PTR + 1] := TEMP. [Inserts element in proper place.]
[End of Step 2 loop.]

void insertionSort(int arr[], int n)
{
    int i, key, j;
    for (i = 1; i < n; i++) {
        key = arr[i];
        j = i - 1;
        while (j >= 0 && arr[j] > key)
        {
            arr[j + 1] = arr[j];
            j = j - 1;
        }
        arr[j + 1] = key;
    }
}
Complexity of Insertion Sort
The worst case occurs when the array A is in reverse order and the inner loop must use the maximum number, K – 1, of comparisons.
Hence f(n) = 1 + 2 + … + (n – 1) = n(n – 1)/2 = O(n²).
Selection Sort
Suppose an array A with n elements A[1], A[2],
…, A[N] is in memory.
The selection sort algorithm for sorting A works
as follows.
First find the smallest element in the list and
put it in the first position.
Then find the second smallest element in the
list and put it in the second position. And so
on.
Contd…
Pass 1: Find the location LOC of the smallest element in the list of N elements A[1], A[2], …, A[N], and then interchange A[LOC] and A[1]. Then: A[1] is sorted.
Pass 2: Find the location LOC of the smallest element in the sublist of N – 1 elements A[2], A[3], …, A[N], and then interchange A[LOC] and A[2]. Then: A[1], A[2] is sorted, since A[1] ≤ A[2].
Pass 3: Find the location LOC of the smallest element in the sublist of N – 2 elements A[3], A[4], …, A[N], and then interchange A[LOC] and A[3]. Then: A[1], A[2], A[3] is sorted, since A[2] ≤ A[3].
…
Pass N – 1: Find the location LOC of the smaller of the elements A[N – 1], A[N], and then interchange A[LOC] and A[N – 1]. Then: A[1], A[2], …, A[N] is sorted, since A[N – 1] ≤ A[N].
Thus A is sorted after N – 1 passes.
Example
Suppose an array A contains 8 elements as
follows: 77, 33, 44, 11, 88, 22, 66, 55
Algorithm
Algorithm: SELECTIONSORT(A, N)
This algorithm sorts the array A with N elements.
1. Repeat Steps 2 and 3 for K = 1, 2, …, N – 1:
2. Call MIN(A, K, N, LOC).
3. [Interchange A[K] and A[LOC].]
   Set TEMP := A[K], A[K] := A[LOC] and A[LOC] := TEMP.
[End of Step 1 loop.]
4. Exit.
________________________________________
MIN(A, K, N, LOC)
1. Set MIN := A[K] and LOC := K.
2. Repeat for J = K + 1, K + 2, …, N:
   If MIN > A[J], then: Set MIN := A[J] and LOC := J.
[End of loop.]
3. Return.

void selectionSort(int arr[], int n)
{
    int i, j, min_idx, temp;

    // One by one move the boundary of the unsorted subarray
    for (i = 0; i < n-1; i++)
    {
        // Find the minimum element in the unsorted array
        min_idx = i;
        for (j = i+1; j < n; j++)
            if (arr[j] < arr[min_idx])
                min_idx = j;

        // Swap the found minimum element with the first element
        temp = arr[min_idx];
        arr[min_idx] = arr[i];
        arr[i] = temp;
    }
}
Complexity of the Selection Sort
MIN(A, K, N, LOC) requires n – K comparisons.
That is, there are n – 1 comparisons during Pass 1 to find the smallest element, there are n – 2 comparisons during Pass 2 to find the second smallest element, and so on.
Accordingly, f(n) = (n – 1) + (n – 2) + … + 2 + 1 = n(n – 1)/2 = O(n²)
Quick Sort
In the bubble sort, consecutive items are
compared and possibly exchanged on each
pass through the list, which means that many
exchanges may be needed to move an element
to its correct position.
Quick sort is more efficient than the bubble sort because fewer exchanges are required to correctly position an element.
Approach
Quick sort is an algorithm of the divide-and-conquer
type.
Here, the problem of sorting is reduced to the problem
of sorting two smaller sets.
The reduction step of the Quick sort algorithm finds the
final position of one of the numbers.
Each iteration of the quick sort selects an element,
known as pivot, and divides the list into three groups
A partition of elements whose keys are less than the
pivot’s key, the pivot element that is placed in its
ultimately correct location in the list, and a partition of
elements greater than or equal to the pivot’s key.
Example
Suppose A is the following list of 12 numbers:
• Beginning with 22, next scan the list in the opposite direction, from
left to right, comparing each number with 44 and stopping at the first
number greater than 44. The number is 55. Interchange 44 and 55 to
obtain the list.
Contd…
The above reduction step is repeated with each sublist
containing 2 or more elements.
Since we can process only one sublist at a time, we must
be able to keep track of some sublists for future
processing.
This is accomplished by using two stacks, called LOWER
and UPPER, to temporarily “hold” such sublists.
That is, the addresses of the first and last elements of
each sublist, called its boundary values, are pushed onto
the stacks LOWER and UPPER, respectively; and the
reduction step is applied to a sublist only after its
boundary values are removed from the stacks.
Algorithm
Algorithm: QuickSort(A,N)
This algorithm sorts an array A with N elements.
1. [Initialize.] TOP := NULL.
2. [Push boundary values of A onto stacks when A has 2 or more elements.]
If N > 1, then: TOP := TOP + 1, LOWER[1] :=1, UPPER[1] := N.
3. Repeat Steps 4 to 7 while TOP ≠ NULL.
4. [Pop sublist from stacks.]
Set BEG := LOWER[TOP], END :=UPPER[TOP], TOP := TOP – 1.
5. Call QUICK(A, N, BEG, END, LOC)
6. [Push left sublist onto stacks when it has 2 or more elements.]
If BEG < LOC – 1, then:
TOP := TOP + 1, LOWER[TOP] := BEG,
UPPER[TOP] := LOC – 1. [End of If structure.]
7. [Push right sublist onto stacks when it has 2 or more elements.]
If LOC + 1 < END, then:
TOP := TOP + 1, LOWER[TOP] := LOC +1, UPPER[TOP] := END.
[End of If structure.] [End of Step 3 loop.]
8. Exit.
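For comparison, here is a recursive C sketch in which the call stack plays the role of the LOWER and UPPER stacks; `quick` is a direct translation of the partition procedure QUICK (first element as pivot, alternating right-to-left and left-to-right scans), and the function names are illustrative:

```c
/* Partition a[beg..end] around the pivot a[beg]; returns the pivot's
   final position (LOC in the textbook procedure). */
static int quick(int a[], int beg, int end) {
    int left = beg, right = end, loc = beg;
    for (;;) {
        /* Scan from right to left (Step 2). */
        while (a[loc] <= a[right] && loc != right) right--;
        if (loc == right) return loc;
        if (a[loc] > a[right]) {            /* interchange, move pivot */
            int t = a[loc]; a[loc] = a[right]; a[right] = t;
            loc = right;
        }
        /* Scan from left to right (Step 3). */
        while (a[left] <= a[loc] && left != loc) left++;
        if (loc == left) return loc;
        if (a[left] > a[loc]) {             /* interchange, move pivot */
            int t = a[loc]; a[loc] = a[left]; a[left] = t;
            loc = left;
        }
    }
}

/* Recursion replaces the explicit LOWER/UPPER stacks: each sublist
   with two or more elements is handled by a recursive call. */
void quickSort(int a[], int beg, int end) {
    if (beg < end) {
        int loc = quick(a, beg, end);
        quickSort(a, beg, loc - 1);   /* left sublist */
        quickSort(a, loc + 1, end);   /* right sublist */
    }
}
```

Calling `quickSort(a, 0, n - 1)` sorts the whole array.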
Contd…
Procedure 6.7: QUICK(A, N, BEG, END, LOC)
Parameters BEG and END contain the boundary values of the sublist of A to which this procedure applies. LOC keeps track of the position of the first element A[BEG] of the sublist during the procedure. The local variables LEFT and RIGHT will contain the boundary values of the list of elements that have not been scanned.
1. [Initialize.] Set LEFT := BEG, RIGHT := END and LOC := BEG.
2. [Scan from right to left.]
   (a) Repeat while A[LOC] ≤ A[RIGHT] and LOC ≠ RIGHT:
       RIGHT := RIGHT – 1.
       [End of loop.]
   (b) If LOC = RIGHT, then: Return.
   (c) If A[LOC] > A[RIGHT], then:
       (i) [Interchange A[LOC] and A[RIGHT].]
           TEMP := A[LOC], A[LOC] := A[RIGHT], A[RIGHT] := TEMP.
       (ii) Set LOC := RIGHT.
       (iii) Go to Step 3.
       [End of If structure.]
3. [Scan from left to right.]
   (a) Repeat while A[LEFT] ≤ A[LOC] and LEFT ≠ LOC:
       LEFT := LEFT + 1.
       [End of loop.]
   (b) If LOC = LEFT, then: Return.
   (c) If A[LEFT] > A[LOC], then:
       (i) [Interchange A[LEFT] and A[LOC].]
           TEMP := A[LOC], A[LOC] := A[LEFT], A[LEFT] := TEMP.
       (ii) Set LOC := LEFT.
       (iii) Go to Step 2.
       [End of If structure.]
Complexity of Quick Sort Algorithm
In Quick Sort, the list is partitioned into two parts.
So, the complexity will be based on:
Time to partition the given list
Time to sort left sub list
Time to sort right sub list
The best case timing analysis is possible when
the array is always partitioned in half.
Complexity of Quick Sort Algorithm

Best Case Analysis:
T(n) = cn + T(n/2) + T(n/2)
     = cn + 2T(n/2), for some constant c
Thus, T(n) = cn + 2T(n/2)
           = cn + 2[c(n/2) + 2T(n/2^2)] = 2cn + 2^2 T(n/2^2)
           = 2cn + 2^2[c(n/2^2) + 2T(n/2^3)] = 3cn + 2^3 T(n/2^3)
           …
           = kcn + 2^k T(n/2^k)   [Let 2^k = n, so k = log_2 n]
           = cn log_2 n + nT(1)
           = cn log_2 n + n
           = O(n log_2 n)
Worst Case Analysis:
T(n) = cn + T(0) + T(n – 1), for some constant c
Thus, T(n) = cn + T(n – 1)
           = cn + c(n – 1) + T(n – 2)
           = cn + c(n – 1) + c(n – 2) + T(n – 3)
           …
           = cn + c(n – 1) + c(n – 2) + … + c·1 + T(0)
           = c[n + (n – 1) + (n – 2) + … + 2 + 1]
           = c · n(n + 1)/2
           = O(n²)
Merging
Suppose A is a sorted list with r elements and B
is a sorted list with s elements.
The operation that combines the elements of A
and B into a single sorted list C with n = r + s
elements is called merging.
One simple way to merge is to place the
elements of B after the elements of A and then
use some sorting algorithm on the entire list.
This method does not take advantage of the
fact that A and B are individually sorted.
Intelligent Approach
Merging Algorithm
Algorithm: MERGING(A, R, B, S, C)
Let A and B be sorted arrays with R and S elements, respectively. This
algorithm merges A and B into an array C with N = R + S elements.
1. [Initialize.] Set NA := 1, NB := 1 and PTR:= 1.
2. [Compare.] Repeat while NA ≤ R and NB≤ S:
If A[NA] < B[NB], then:
(a) Set C[PTR] := A[NA].
(b) Set PTR :=PTR + 1 and NA := NA + 1.
Else:
(a) Set C[PTR] := B[NB].
(b) Set PTR :=PTR + 1 and NB := NB + 1.
[End of If structure.]
[End of loop.]
Contd…
3. [Assign remaining elements to C.]
If NA > R, then:
Repeat for K = 0, 1, 2, …, S – NB:
Set C[PTR + K] := B[NB + K].
[End of loop.]
Else:
Repeat for K = 0, 1, 2, …, R – NA:
Set C[PTR + K] := A[NA + K].
[End of loop.]
[End of If structure.]
4. Exit.
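The MERGING algorithm above translates directly into C with 0-based indices (the function name `merging` is illustrative):

```c
/* Merge sorted a[0..r-1] and b[0..s-1] into c[0..r+s-1]:
   repeatedly copy the smaller front element (Step 2), then append
   whatever remains of the non-exhausted array (Step 3). */
void merging(const int a[], int r, const int b[], int s, int c[]) {
    int na = 0, nb = 0, ptr = 0;        /* NA, NB, PTR of the algorithm */
    while (na < r && nb < s) {
        if (a[na] < b[nb])
            c[ptr++] = a[na++];
        else
            c[ptr++] = b[nb++];
    }
    while (na < r) c[ptr++] = a[na++];  /* A still has elements left */
    while (nb < s) c[ptr++] = b[nb++];  /* B still has elements left */
}
```

Every iteration places one element into C, so the running time is linear in r + s.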
Complexity of the Merging Algorithm
The input consists of the total number n = r + s
of elements in A and B.
Each comparison assigns an element to the
array C, which eventually has n elements.
Accordingly, the number f(n) of comparisons
cannot exceed n: f(n) ≤ n =O(n)
In other words, the merging algorithm can be
run in linear time.
Merge Sort
The merge sort applies the divide-and-conquer
strategy to sort a sequence.
First it subdivides the sequence into subsequences
of singletons.
Then it successively merges the subsequences
pairwise until a single sequence is re-formed.
Each merge preserves order, so each merged
subsequence is sorted.
When the final merge is finished, the complete
sequence is sorted.
Algorithm
MergeSort(array a, int left, int right)
{
if ( left < right )
{
mid = (left + right) / 2
MergeSort(a, left, mid)
MergeSort(a, mid+1, right)
merge(a, left, mid, right)
}
}
MergeSort() is a recursive function; left >= right is the base case, i.e., there are 0 or 1 items.
Complexity: Merge-Sort Algorithm
Contd…
So, the merge sort works by repeatedly dividing
the array in half until the pieces are singletons,
and then it merges the pieces pairwise until a
single piece remains.
The number of iterations in the first part equals
the number of times n can be halved: that is,
log2n.
The second part of the process reverses the first.
So the second part also has log2n steps.
So the entire algorithm has O(log2n) steps.
Each step compares all n elements.
So the total number of comparisons is O(n lg n).
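The same bound can be read off the divide-and-conquer recurrence, written in the same constant-c style as the quicksort best-case analysis (a sketch, assuming n is a power of 2):

```latex
T(n) = 2T(n/2) + cn
     = 2\left[2T(n/4) + c\tfrac{n}{2}\right] + cn = 4T(n/4) + 2cn
     = 8T(n/8) + 3cn = \cdots
     = 2^k T(n/2^k) + kcn \qquad [\text{let } 2^k = n,\; k = \log_2 n]
     = nT(1) + cn\log_2 n = O(n \log_2 n)
```

Unlike quicksort, the split is always exactly in half, so this bound holds in the worst case as well.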
Limitation
The main drawback of merge-sort is that it
requires an auxiliary array with n elements.
Sorting by Distribution
These sorting algorithms are radically different than the
previous discussed algorithms.
The unique characteristic of a distribution sorting algorithm is
that it does not make use of comparisons to do the sorting.
Instead, distribution sorting algorithms rely on a
priori knowledge about the universal set from which the
elements to be sorted are drawn.
For example, if we know a priori that the size of the universe
is a small, fixed constant, say m, then we can use the counting
sort algorithm.
Similarly, if we have a universe the elements of which can be
represented with a small, finite number of bits (or even digits,
letters, or symbols), then we can use the radix sorting
algorithm.
Counting Sort
The essential requirement in this sorting
algorithm is that the size of the universe from
which the elements to be sorted are drawn is a
small, fixed constant, say m.
For example, suppose that we are sorting
elements drawn from {0,1,…,m-1} i.e., the set
of integers in the interval [0,m-1].
Counting sort uses m counters. The i-th counter
keeps track of the number of occurrences of
the i-th element of the universe.
Example
int count[m];                       /* one counter per value in {0,...,m-1} */
for (int i = 0; i < m; i++)
    count[i] = 0;                   /* counters must start at zero */
for (int k = 0; k < n; k++)         /* suppose m < n */
    count[v[k]]++;

Complexity: O(n)
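A complete counting sort built on this idea (a sketch; `countingSort` is an illustrative name): after tallying, the array is rewritten by emitting each value i exactly count[i] times, in ascending order.

```c
#include <stdlib.h>

/* Counting sort for values drawn from the universe {0, ..., m-1}. */
void countingSort(int v[], int n, int m) {
    int *count = calloc(m, sizeof(int));   /* m counters, zero-initialized */
    for (int k = 0; k < n; k++)
        count[v[k]]++;                     /* tally occurrences of v[k] */
    int ptr = 0;
    for (int i = 0; i < m; i++)            /* emit i, count[i] times */
        while (count[i]-- > 0)
            v[ptr++] = i;
    free(count);
}
```

The two phases take O(n) and O(n + m) time respectively, so the whole sort is O(n + m): linear when m is a small constant, as assumed above.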
Radix Sort
The downfall of counting sort is that it may not be too
practical if the range of elements is too large.
For example, if the range of the n elements we need to
sort was from 1 to n3, then simply creating the auxiliary
array will take O(n3) time and counting sort will
asymptotically do worse than insertion sort.
Radix sort helps solve this problem by sorting the
elements digit by digit.
The idea is that we can sort integers by first sorting
them by their least significant digit (i.e. the ones digit),
then sorting the result of that sort by their next
significant digit (i.e. the tens digit), and so on until we
sort the integers by their most significant digit.
Example
Contd…
In the case of a list of names, the radix is 26 (the 26 letters of the alphabet).
Specifically, the list of names is first sorted according to
the first letter of each name.
That is, the names are arranged in 26 classes, where the
first class consists of those names that begin with “A,”
the second class consists of those names that begin with
“B,” and so on.
During the second pass, each class is alphabetized
according to the second letter of the name. And so on.
If no name contains, for example, more than 12 letters,
the names are alphabetized with at most 12 passes.
Implementation

Consider the following list. [Figures showing the list after each pass are not reproduced here.]
Algorithm
radixSort(Array)
{
    d = maximum number of digits in the largest element
    create 10 buckets, one for each digit 0-9
    for i = 1 to d
        sort the elements according to their ith-place digits using
        counting sort
}
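The pseudocode can be sketched in C as one stable counting-sort pass per decimal digit (the names `radixSort` and `countingPass` are illustrative; non-negative integers are assumed):

```c
#include <stdlib.h>
#include <string.h>

/* One stable counting-sort pass on the digit selected by 'exp'
   (exp = 1 for ones, 10 for tens, 100 for hundreds, ...). */
static void countingPass(int v[], int n, int exp) {
    int *out = malloc(n * sizeof(int));
    int count[10] = {0};
    for (int k = 0; k < n; k++)
        count[(v[k] / exp) % 10]++;        /* tally each digit value */
    for (int d = 1; d < 10; d++)
        count[d] += count[d - 1];          /* prefix sums -> end positions */
    for (int k = n - 1; k >= 0; k--)       /* backwards scan keeps it stable */
        out[--count[(v[k] / exp) % 10]] = v[k];
    memcpy(v, out, n * sizeof(int));
    free(out);
}

/* LSD radix sort: sort by ones digit, then tens, then hundreds, ... */
void radixSort(int v[], int n) {
    int max = v[0];
    for (int k = 1; k < n; k++)
        if (v[k] > max) max = v[k];
    for (int exp = 1; max / exp > 0; exp *= 10)
        countingPass(v, n, exp);           /* one pass per digit of max */
}
```

The stability of each pass is essential: it preserves the ordering established by the earlier, less significant digits.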
Complexity of Radix Sort
Suppose a list A of n items A1, A2, …, An is given.
Let d denote the radix, and suppose each item Ai is
represented by means of s of the digits: Ai = di1di2 … dis
The radix sort algorithm will require s passes
Hence the number C(n) of comparisons for the
algorithm is bounded as follows: C(n) ≤ d×s×n
Although d is independent of n, the number s does
depend on n.
In the worst case, s = n, so C(n) = O(n2).
In the best case, s = logdn, so C(n) = O(n log n).
In other words, radix sort performs well only when the
number s of digits in the representation of the Ai’s is
small.
Limitation
Another drawback of radix sort is that one may
need d×n memory locations.
Thank You…
Any Queries...?
Sorting Algorithms

Outline
1 Introduction: Sorting; Classification of Sorting; Applications of Sorting
2 Bubble Sort: Algorithm; Complexity; Modified version
3 Selection Sort: Algorithm; Complexity
4 Insertion Sort: Algorithm; Complexity
5 Merge Sort: Merging; Complexity
6 Quick Sort: Algorithm; Complexity; Randomized Quick sort
7 Count Sort: Algorithm; Complexity
8 Radix Sort: Algorithm; Complexity
Applications of Sorting
• Uniqueness testing
• Deleting duplicates
• Prioritizing events
• Frequency counting
• Reconstructing the original order
• Set intersection/union
• Finding a target pair x, y such that x + y = z
• Efficient searching
Bubble Sort

Algorithm
BUBBLE(DATA, N)
1. Repeat Steps 2 and 3 for K = 1 to N-1.
2. Set PTR := 1. [Initializes pass pointer PTR.]
3. Repeat while PTR ≤ N-K: [Executes pass.]
   (a) If DATA[PTR] > DATA[PTR+1], then:
       Interchange DATA[PTR] and DATA[PTR+1].
       [End of If structure.]
   (b) Set PTR := PTR+1.
   [End of inner loop.]
[End of Step 1 outer loop.]
4. Exit.

Complexity
Pass 1 requires n − 1 comparisons, pass 2 requires n − 2 comparisons, and so on:
f(n) = (n − 1) + (n − 2) + · · · + 2 + 1 = n(n − 1)/2 = O(n²)
Hence the complexity is of the order of n².
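The BUBBLE pseudocode translates directly into Python; this is a hedged sketch (0-based indexing replaces the slide's 1-based PTR):

```python
def bubble_sort(data):
    """Bubble sort following BUBBLE(DATA, N): pass K bubbles the largest
    remaining element to position N-K."""
    n = len(data)
    for k in range(n - 1):             # pass K = 1 .. N-1
        for ptr in range(n - 1 - k):   # compare while PTR <= N-K
            if data[ptr] > data[ptr + 1]:
                data[ptr], data[ptr + 1] = data[ptr + 1], data[ptr]
    return data
```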
Selection Sort

Suppose an array A with n elements A[1], A[2], · · · , A[N] is in memory. The selection sort algorithm for sorting A works as follows. First find the smallest element in the list and put it in the first position. Then find the second smallest element in the list and put it in the second position. And so on.

More precisely:
• Pass 1. Find the location LOC of the smallest in the list of N elements A[1], A[2], ..., A[N], and then interchange A[LOC] and A[1]. Then: A[1] is sorted.
• Pass 2. Find the location LOC of the smallest in the sublist of N − 1 elements A[2], A[3], ..., A[N], and then interchange A[LOC] and A[2]. Then: A[1], A[2] is sorted, since A[1] ≤ A[2].
• Pass 3. Find the location LOC of the smallest in the sublist of N − 2 elements A[3], A[4], ..., A[N], and then interchange A[LOC] and A[3]. Then: A[1], A[2], A[3] is sorted, since A[2] ≤ A[3].
• · · ·
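The passes above can be sketched in Python as follows (a minimal sketch; the function name is ours):

```python
def selection_sort(a):
    """Pass k finds LOC, the smallest element in a[k:], and swaps it
    into position k, so a[:k+1] is sorted after each pass."""
    n = len(a)
    for k in range(n - 1):
        loc = k
        for j in range(k + 1, n):
            if a[j] < a[loc]:
                loc = j
        a[k], a[loc] = a[loc], a[k]    # interchange A[LOC] and A[K]
    return a
```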
Selection Sort: Complexity

Performance
• Worst case complexity: O(n²)
• Best case complexity: O(n²)
• Average case complexity: O(n²)
• Worst case space complexity: O(1) auxiliary
Insertion Sort

Suppose an array A with n elements A[1], A[2], · · · , A[N] is in memory. The insertion sort algorithm scans A from A[1] to A[N], inserting each element A[K] into its proper position in the previously sorted subarray A[1], A[2], · · · , A[K − 1].

Advantages
• Simple implementation
• Efficient for small data
• Adaptive: if the input list is presorted (perhaps not completely), then insertion sort takes O(n + d) time, where d is the number of inversions. Practically it is more efficient than selection and bubble sorts, even though all of them have O(n²) worst case complexity.
• Stable: maintains the relative order of input data if the keys are the same
• In-place: it requires only a constant amount O(1) of additional memory space
• Online: insertion sort can sort the list as it receives it
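The scan-and-insert description translates into this Python sketch (names are ours):

```python
def insertion_sort(a):
    """Insert each a[k] into its proper position in the previously
    sorted prefix a[:k]."""
    for k in range(1, len(a)):
        item = a[k]
        j = k - 1
        while j >= 0 and a[j] > item:   # shift larger elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = item                 # insert into the sorted prefix
    return a
```

Because the inner while stops at the first element not greater than `item`, equal keys keep their original order, which is the stability property claimed above.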
Merge Sort: Merging

MERGE(A, R, LBA, B, S, LBB, C, LBC)
1. [Initialize.] Set NA := LBA, NB := LBB, PTR := LBC, UBA := LBA + R − 1, UBB := LBB + S − 1.
2. [Compare.] Repeat while NA ≤ UBA and NB ≤ UBB:
   If A[NA] < B[NB], then:
     (a) [Assign element from A to C.] Set C[PTR] := A[NA].
     (b) [Update pointers.] Set PTR := PTR+1 and NA := NA+1.
   Else:
     (a) [Assign element from B to C.] Set C[PTR] := B[NB].
     (b) [Update pointers.] Set PTR := PTR+1 and NB := NB+1.
   [End of If structure.]
   [End of loop.]
3. [Assign remaining elements to C.]
   If NA > UBA, then:
     Repeat for K = 0, 1, 2, · · · , UBB − NB:
       Set C[PTR+K] := B[NB+K].
     [End of loop.]
   Else:
     Repeat for K = 0, 1, 2, · · · , UBA − NA:
       Set C[PTR+K] := A[NA+K].
     [End of loop.]
   [End of If structure.]
4. Return.
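Steps 2 and 3 of MERGE correspond to this Python sketch over whole lists (the lower-bound bookkeeping of the pseudocode is dropped; the function name is ours):

```python
def merge(a, b):
    """Merge two sorted lists into one sorted list."""
    c = []
    na, nb = 0, 0
    while na < len(a) and nb < len(b):   # step 2: compare the fronts
        if a[na] < b[nb]:
            c.append(a[na]); na += 1
        else:
            c.append(b[nb]); nb += 1
    c.extend(a[na:])                     # step 3: copy the leftovers
    c.extend(b[nb:])
    return c
```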
Binary Search and Insertion Algorithm

Suppose the number r of elements in a sorted array A is much smaller than the number s of elements in a sorted array B. One can merge A with B as follows. For each element A[K] of A, use a binary search on B to find the proper location to insert A[K] into B. Each such search requires at most log s comparisons; hence this binary search and insertion algorithm to merge A and B requires at most r log s comparisons. We emphasize that this algorithm is more efficient than the usual merging algorithm only when r ≪ s, that is, when r is much less than s.

The binary search and insertion algorithm does not take into account the fact that A is sorted. Accordingly, the algorithm may be improved in two ways as follows. (Here we assume that A has 5 elements and B has 100 elements.)
Reducing the target set: Suppose after the first search we find that A[1] is to be inserted after B[16]. Then we need only use a binary search on B[17], ..., B[100] to find the proper location to insert A[2]. And so on.
Tabbing: The expected location for inserting A[1] in B is near B[20] (that is, B[s/r]), not near B[50]. Hence we first use a linear search on B[20], B[40], B[60], B[80] and B[100] to find B[K] such that A[1] ≤ B[K], and then we use a binary search on B[K - 20], B[K - 19], ..., B[K].
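The "reducing the target set" idea can be sketched with Python's `bisect` module (the function name is ours; note that `list.insert` still costs O(s) data movement per insertion, so the r log s bound counts comparisons only):

```python
import bisect

def merge_small_into_big(a, b):
    """Merge the short sorted list a into the long sorted list b using
    binary searches: about r*log(s) comparisons instead of r + s.
    Because a is sorted, each search restarts from the previous
    insertion point (the "reducing the target set" improvement)."""
    b = list(b)
    lo = 0
    for x in a:
        lo = bisect.bisect_left(b, x, lo)   # search only b[lo:]
        b.insert(lo, x)
    return b
```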
Mergesort

MERGEPASS(A, N, L, B)
The procedure merges pairs of subarrays of A and assigns them to B. The N-element array A is composed of sorted subarrays, where each subarray has L elements except possibly the last subarray, which may have fewer than L elements.
1. Set Q := INT(N/(2*L)), S := 2*L*Q and R := N − S.
2. Repeat for J = 1, 2, · · · , Q:
   (a) Set LB := 1 + (2*J − 2)*L.
   (b) Call MERGE(A, L, LB, A, L, LB+L, B, LB).
   [End of loop.]
3. [Only one subarray left?]
   If R ≤ L, then:
     Repeat for J = 1, 2, · · · , R:
       Set B[S+J] := A[S+J].
     [End of loop.]
   Else:
     Call MERGE(A, L, S+1, A, R, L+S+1, B, S+1).
   [End of If structure.]
4. Return.

MERGESORT(A, N)
This algorithm sorts the N-element array A using an auxiliary array B.
1. Set L := 1.
2. Repeat Steps 3 to 5 while L < N:
3. Call MERGEPASS(A, N, L, B).
4. Call MERGEPASS(B, N, 2*L, A).
5. Set L := 4 * L.
   [End of Step 2 loop.]
6. Exit.

Merge sort is an example of the divide-and-conquer strategy. Merging is the process of combining two sorted files to make one bigger sorted file. Selection is the process of dividing a file into two parts: the k smallest elements and the n − k largest elements. Selection and merging are opposite operations:
• selection splits a list into two lists
• merging joins two files to make one file
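The MERGEPASS/MERGESORT pair is a bottom-up merge sort: runs of length L are merged pairwise into runs of length 2L until one run remains. A Python sketch in that spirit (names are ours; slicing replaces the pseudocode's index arithmetic):

```python
def merge_sort_bottom_up(a):
    """Iterative merge sort: each pass merges adjacent runs of `length`
    elements into an auxiliary list, then doubles `length`."""
    a = list(a)
    n = len(a)
    length = 1
    while length < n:
        b = []                                # auxiliary array B
        for lb in range(0, n, 2 * length):
            left = a[lb:lb + length]
            right = a[lb + length:lb + 2 * length]
            i = j = 0                         # two-pointer merge of the pair
            while i < len(left) and j < len(right):
                if left[i] < right[j]:
                    b.append(left[i]); i += 1
                else:
                    b.append(right[j]); j += 1
            b.extend(left[i:]); b.extend(right[j:])
        a = b
        length *= 2
    return a
```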
Merge Sort: Complexity

Performance
• Worst case complexity: Θ(n log n)
• Best case complexity: Θ(n log n)
• Average case complexity: Θ(n log n)
• Worst case space complexity: Θ(n) auxiliary
Quicksort

Example: Suppose A is the following list of 12 numbers (the intermediate lists after each interchange are shown as figures on the original slides).

The reduction step of the quicksort algorithm finds the final position of one of the numbers. We use the first number, 44.

Beginning with the last number, 66, scan the list from right to left, comparing each number with 44 and stopping at the first number less than 44. The number is 22. Interchange 44 and 22 to obtain the list.

Beginning with 22, next scan the list in the opposite direction, from left to right, comparing each number with 44 and stopping at the first number greater than 44. The number is 55. Interchange 44 and 55 to obtain the list. (Observe that the numbers 22, 33 and 11 to the left of 44 are each less than 44.)

Beginning this time with 55, now scan the list in the original direction, from right to left, until meeting the first number less than 44. It is 40. Interchange 44 and 40 to obtain the list.

Beginning with 40, scan the list from left to right. The first number greater than 44 is 77. Interchange 44 and 77 to obtain the list.

Beginning with 77, scan the list from right to left seeking a number less than 44. We do not meet such a number before meeting 44. This means all numbers have been scanned and compared with 44. Furthermore, all numbers less than 44 now form the sublist of numbers to the left of 44, and all numbers greater than 44 now form the sublist of numbers to the right of 44, as shown below:
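The alternating right-to-left and left-to-right scans of the reduction step can be sketched in Python (a hedged sketch of the technique described, not the slide's exact algorithm; function names are ours):

```python
def partition(a, beg, end):
    """Reduction step: place the pivot a[beg] in its final position by
    alternately scanning from the right for a smaller element and from
    the left for a larger one, swapping the pivot each time."""
    left, right = beg, end
    while left < right:
        while left < right and a[right] >= a[left]:  # scan right-to-left
            right -= 1
        a[left], a[right] = a[right], a[left]        # pivot moves right
        while left < right and a[left] <= a[right]:  # scan left-to-right
            left += 1
        a[left], a[right] = a[right], a[left]        # pivot moves left
    return left                                      # pivot's final index

def quicksort(a, beg=0, end=None):
    """Sort a in place by recursively partitioning around a pivot."""
    if end is None:
        end = len(a) - 1
    if beg < end:
        loc = partition(a, beg, end)
        quicksort(a, beg, loc - 1)
        quicksort(a, loc + 1, end)
    return a
```

On the 12-number example beginning 44, ..., 66, the first call to `partition` reproduces exactly the interchanges with 22, 55, 40 and 77 narrated above.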
Randomized Quicksort

One way to improve randomized quicksort is to choose the pivot for partitioning more carefully than by picking a random element from the array. One common approach is to choose the pivot as the median of a set of 3 elements randomly selected from the array.
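The median-of-three random pivot selection can be sketched as follows (a minimal sketch; the function name is ours, and the chosen index would then be swapped into the pivot position before partitioning):

```python
import random

def median_of_three_pivot(a, beg, end):
    """Pick three random indices in [beg, end] and return the index
    holding the median of the three values -- a safer pivot choice
    than a single random element."""
    i, j, k = (random.randint(beg, end) for _ in range(3))
    # order the three candidate indices by their values; take the middle
    return sorted((i, j, k), key=lambda t: a[t])[1]
```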
Count Sort: Important Notes

• Counting sort is efficient if the range of the input data is not significantly greater than the number of objects to be sorted. Consider the situation where the input values lie in the range 1 to 10K and the data is 10, 5, 10K, 5K.
• It is not a comparison-based sort. Its running time complexity is O(n), with space proportional to the range of the data.
• It is often used as a subroutine of another sorting algorithm, such as radix sort.
• Counting sort uses partial hashing to count the occurrences of a data object in O(1).
• Counting sort can be extended to work for negative inputs also.
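The notes above, including the extension to negative inputs, can be sketched in Python (the function name is ours; the offset `lo` is what handles negatives):

```python
def counting_sort(a):
    """Counting sort for integers; an offset shifts negative values
    into valid count-array indices."""
    if not a:
        return []
    lo, hi = min(a), max(a)
    count = [0] * (hi - lo + 1)       # space proportional to the range
    for x in a:
        count[x - lo] += 1            # O(1) "partial hashing" per item
    out = []
    for v, c in enumerate(count):
        out.extend([v + lo] * c)
    return out
```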
Trees

Outline
1 INTRODUCTION
2 BINARY TREES
    Terminology
    Complete Binary Trees
    Strict Binary Trees
    Extended Binary Trees: 2-Trees
3 REPRESENTING BINARY TREES IN MEMORY
    Linked Representation of Binary Trees
    Sequential Representation of Binary Trees
4 PROPERTIES OF BINARY TREES
5 TRAVERSING BINARY TREES
6 TRAVERSAL ALGORITHMS USING STACKS
    Preorder Traversal
    Inorder Traversal
    Postorder Traversal
    Binary Tree Formation from its Traversals
7 HEADER NODES; THREADS
    Header Nodes
    Threads; Inorder Threading
INTRODUCTION

So far, we have been studying mainly linear types of data structures: strings, arrays, lists, stacks and queues. This part defines a nonlinear data structure called a tree. This structure is mainly used to represent data containing a hierarchical relationship between elements, e.g., records, family trees and tables of contents. First we investigate a special kind of tree, called a binary tree, which can be easily maintained in the computer.
BINARY TREES: Terminology

Consider the four binary trees. The three trees (a), (c) and (d) are similar. In particular, the trees (a) and (c) are copies since they also have the same data at corresponding nodes. The tree (b) is neither similar nor a copy of the tree (d) because, in a binary tree, we distinguish between a left successor and a right successor even when there is only one successor.
The terms descendant and ancestor have their usual meaning. That is, a node L is called a descendant of a node N (and N is called an ancestor of L) if there is a succession of children from N to L. In particular, L is called a left or right descendant of N according to whether L belongs to the left or right subtree of N.

Terminology from graph theory and horticulture is also used with a binary tree T. Specifically, the line drawn from a node N of T to a successor is called an edge, and a sequence of consecutive edges is called a path. A terminal node is called a leaf, and a path ending in a leaf is called a branch.
Binary trees T and T′ are said to be similar if they have the same structure or, in other words, if they have the same shape. The trees are said to be copies if they are similar and if they have the same contents at corresponding nodes.
Complete Binary Trees

The nodes of the complete binary tree T26 have been purposely labeled by the integers 1, 2, · · · , 26, from left to right, generation by generation. With this labeling, one can easily determine the children and parent of any node K in any complete tree Tn.

The depth Dn of the complete tree Tn with n nodes grows only logarithmically in n, so it is a relatively small number. For example, if the complete tree Tn has n = 1000000 nodes, then its depth Dn = 21.
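With the level-order labeling described, the children of node K are 2K and 2K + 1, and the parent of K is ⌊K/2⌋. A minimal Python sketch of these formulas (function names are ours):

```python
def left_child(k):
    """Left child of the node labeled k (labels start at 1)."""
    return 2 * k

def right_child(k):
    """Right child of the node labeled k."""
    return 2 * k + 1

def parent(k):
    """Parent of node k; node 1 is the root and has no parent."""
    return k // 2
```

For instance, in T26 the children of node 6 are nodes 12 and 13, and the parent of node 13 is node 6.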
Strict Binary Trees and 2-Trees

A binary tree T is said to be a 2-tree or an extended binary tree if each node N has either 0 or 2 children. Any binary tree T may be "converted" into a 2-tree by replacing each empty subtree by a new node, as pictured in Fig. (b).
Extended Binary Trees: 2-Trees

That is, an algebraic expression E involving only binary operations can be represented by a 2-tree T: each operation in E appears as an "internal" node in T whose left and right subtrees correspond to the operands of the operation, and each variable or constant in E appears as an "external" node (leaf).
PROPERTIES OF BINARY TREES

Property 1. For any non-empty binary tree, if n is the number of nodes, then the number of edges is n − 1.
Proof (by induction on n):
Basis: Let n = 1. There is only one node in the tree, so the number of edges is zero.
Induction hypothesis: Suppose the property is true for n = k, so the number of edges is k − 1.
For n = k + 1: Adding one node adds one extra edge, so the total number of edges is (k − 1) + 1 = k. Hence proved.

Property 2. For any non-empty binary tree, if n0 is the number of leaf nodes and n2 is the number of internal nodes having degree 2, then n0 = n2 + 1.
Proof: Let n1 be the total number of nodes with one child. The total number of nodes is n = n0 + n1 + n2, and the total number of edges is e = 0 × n0 + 1 × n1 + 2 × n2 = n1 + 2n2. We know that e = n − 1; hence n1 + 2n2 = (n0 + n1 + n2) − 1, which yields n0 = n2 + 1.
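The leaf-count property can be checked mechanically; this small Python sketch (names ours) counts n0 and n2 for an arbitrary tree:

```python
class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def leaves_and_full(t):
    """Return (n0, n2): the number of leaves and of degree-2 nodes."""
    if t is None:
        return (0, 0)
    l0, l2 = leaves_and_full(t.left)
    r0, r2 = leaves_and_full(t.right)
    if t.left is None and t.right is None:
        return (l0 + r0 + 1, l2 + r2)            # this node is a leaf
    two = 1 if (t.left is not None and t.right is not None) else 0
    return (l0 + r0, l2 + r2 + two)

# a tree with 3 leaves, 2 degree-2 nodes, and 1 degree-1 node
t = Node(Node(Node(), Node()), Node(None, Node()))
```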
TRAVERSING BINARY TREES

There are three standard ways of traversing a binary tree T with root R. These three algorithms, called preorder, inorder and postorder, are as follows:

Preorder
• Process the root R.
• Traverse the left subtree of R in preorder.
• Traverse the right subtree of R in preorder.

Inorder
• Traverse the left subtree of R in inorder.
• Process the root R.
• Traverse the right subtree of R in inorder.

Postorder
• Traverse the left subtree of R in postorder.
• Traverse the right subtree of R in postorder.
• Process the root R.

Example: The preorder traversal of T processes A, traverses LT and traverses RT. Again, the preorder traversal of LT processes the root B and then D and E. The preorder traversal of RT processes the root C and then F. Hence A B D E C F is the preorder traversal of T.
The inorder traversal of T traverses LT, processes A and traverses RT. Again, the inorder traversal of LT processes D, B and then E. The inorder traversal of RT processes C and then F. Hence D B E A C F is the inorder traversal of T.
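The three recursive definitions can be sketched directly in Python. This is a minimal sketch (the `Node` class and the shape of the example tree are our reconstruction from the traversal strings A B D E C F and D B E A C F):

```python
class Node:
    def __init__(self, info, left=None, right=None):
        self.info, self.left, self.right = info, left, right

def preorder(t):   # node-left-right (NLR)
    return [t.info] + preorder(t.left) + preorder(t.right) if t else []

def inorder(t):    # left-node-right (LNR)
    return inorder(t.left) + [t.info] + inorder(t.right) if t else []

def postorder(t):  # left-right-node (LRN)
    return postorder(t.left) + postorder(t.right) + [t.info] if t else []

# the example tree: A with subtrees LT (B over D, E) and RT (C with right child F)
t = Node('A', Node('B', Node('D'), Node('E')), Node('C', None, Node('F')))
```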
Let E denote the algebraic expression whose expression tree T has the following traversals:
• (Inorder) [a + (b − c)] * [(d − e)/(f + g − h)]
• (Preorder) * + a − b c / − d e − + f g h
• (Postorder) a b c − + d e − f g + h − / *

Observe that each algorithm contains the same three steps, and that the left subtree of R is always traversed before the right subtree. The difference between the algorithms is the time at which the root R is processed. Specifically, in the "pre" algorithm, the root R is processed before the subtrees are traversed; in the "in" algorithm, the root R is processed between the traversals of the subtrees; and in the "post" algorithm, the root R is processed after the subtrees are traversed. The three algorithms are sometimes called, respectively, the node-left-right (NLR) traversal, the left-node-right (LNR) traversal and the left-right-node (LRN) traversal.
Observe that each of the above traversal algorithms is recursively defined, since the algorithm involves traversing subtrees in the given order. Accordingly, we expect that a stack will be used when the algorithms are implemented on the computer.
Preorder Traversal

Algorithm: PREORD(INFO, LEFT, RIGHT, ROOT)
A binary tree T is in memory. The algorithm does a preorder traversal of T, applying an operation PROCESS to each of its nodes. An array STACK is used to temporarily hold the addresses of nodes.
1. [Initially push NULL onto STACK, and initialize PTR.]
   Set TOP := 1, STACK[1] := NULL and PTR := ROOT.
2. Repeat Steps 3 to 5 while PTR ≠ NULL:
3. Apply PROCESS to INFO[PTR].
4. [Right child?]
   If RIGHT[PTR] ≠ NULL, then: [Push on STACK.]
     Set TOP := TOP + 1, and STACK[TOP] := RIGHT[PTR].
   [End of If structure.]
5. [Left child?]
   If LEFT[PTR] ≠ NULL, then:
     Set PTR := LEFT[PTR].
   Else: [Pop from STACK.]
     Set PTR := STACK[TOP] and TOP := TOP − 1.
   [End of If structure.]
   [End of Step 2 loop.]
6. Exit.
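PREORD maps onto this Python sketch: process the current node, stash its right child for later, and follow left links until a pop is needed (the `Node` class and names are ours; `None` plays the role of the NULL sentinel):

```python
class Node:
    def __init__(self, info, left=None, right=None):
        self.info, self.left, self.right = info, left, right

def preorder_stack(root):
    """Iterative preorder mirroring PREORD."""
    out, stack, ptr = [], [None], root      # step 1: NULL sentinel
    while ptr is not None:                  # step 2
        out.append(ptr.info)                # step 3: PROCESS INFO[PTR]
        if ptr.right is not None:
            stack.append(ptr.right)         # step 4: push right child
        # step 5: go left, or pop the next pending right subtree
        ptr = ptr.left if ptr.left is not None else stack.pop()
    return out

t = Node('A', Node('B', Node('D'), Node('E')), Node('C', None, Node('F')))
```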
Outline (continued)
8 Types of Binary Tree
9 Expression Tree
10 BINARY SEARCH TREES
    Searching and Inserting in Binary Search Trees
    Complexity of the Searching Algorithm
    Application of Binary Search Trees
    Deleting in a Binary Search Tree
11 AVL SEARCH TREES
    Insertion in an AVL Search Tree
    Deletion in an AVL search tree
12 RED BLACK TREES
    Rotations
    Insertion
    Deletion
13 HEAP and HEAPSORT
    Inserting into a Heap
    Deleting the Root of a Heap
    Application to Sorting - Heapsort

Postorder Traversal
TRAVERSAL ALGORITHMS USING STACKS TRAVERSAL ALGORITHMS USING STACKS
Postorder Traversal Postorder Traversal
Trees Trees
TRAVERSAL ALGORITHMS USING STACKS TRAVERSAL ALGORITHMS USING STACKS
Postorder Traversal Postorder Traversal
The algorithm does a postorder traversal of a binary tree in memory, using an array STACK; a negative address −PTR on the stack marks a node whose right subtree is still to be traversed.
1. [Push NULL onto STACK and initialize PTR.]
   Set TOP := 1, STACK[1] := NULL and PTR := ROOT.
2. [Push left-most path onto STACK.]
   Repeat Steps 3 to 5 while PTR ≠ NULL:
3. Set TOP := TOP + 1 and STACK[TOP] := PTR. [Pushes PTR on STACK.]
4. If RIGHT[PTR] ≠ NULL, then: [Push on STACK.]
   Set TOP := TOP + 1 and STACK[TOP] := −RIGHT[PTR].
   [End of If structure.]
5. Set PTR := LEFT[PTR]. [Updates pointer PTR.]
   [End of Step 2 loop.]
6. Set PTR := STACK[TOP] and TOP := TOP − 1. [Pops node from STACK.]
7. Repeat while PTR > 0:
   (a) Apply PROCESS to INFO[PTR].
   (b) Set PTR := STACK[TOP] and TOP := TOP − 1.
   [Pops node from STACK.]
   [End of loop.]
8. If PTR < 0, then:
   (a) Set PTR := −PTR.
   (b) Go to Step 2.
   [End of If structure.]
9. Exit.
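In Python, node references are not numeric, so a boolean flag can play the role of the negative address −PTR. This is a hedged sketch of the same technique (names are ours):

```python
class Node:
    def __init__(self, info, left=None, right=None):
        self.info, self.left, self.right = info, left, right

def postorder_stack(root):
    """Iterative postorder; a (node, True) entry plays the role of the
    negative address -PTR in the slide algorithm."""
    out, stack, ptr = [], [], root
    while True:
        while ptr is not None:                   # steps 2-5: left-most path
            stack.append((ptr, False))           # the node itself
            if ptr.right is not None:
                stack.append((ptr.right, True))  # marker: right subtree pending
            ptr = ptr.left
        while stack and not stack[-1][1]:        # step 7: process plain nodes
            out.append(stack.pop()[0].info)
        if not stack:                            # step 9: done
            return out
        ptr = stack.pop()[0]                     # step 8: resume at right subtree

t = Node('A', Node('B', Node('D'), Node('E')), Node('C', None, Node('F')))
```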
HEADER NODES; THREADS: Threads; Inorder Threading
Specifically, we will replace certain null entries by special pointers which point to nodes higher in the tree. These special pointers are called threads, and binary trees with such pointers are called threaded trees. The threads in a threaded tree must be distinguished in some way from ordinary pointers.

There are many ways to thread a binary tree T, but each threading will correspond to a particular traversal of T. Also, one may choose a one-way threading or a two-way threading. Unless otherwise stated, our threading will correspond to the inorder traversal of T.
Types of Binary Tree
• Expression Tree
• Binary Search Tree
• Height Balanced Tree (AVL Tree)
• Red Black Tree
• Heap Tree
BINARY SEARCH TREES
The definition of a binary search tree given here assumes that all the node values are distinct. There is an analogous definition of a binary search tree which admits duplicates, that is, in which each node N has the following property: the value at N is greater than every value in the left subtree of N and is less than or equal to every value in the right subtree of N.
Trees Trees
BINARY SEARCH TREES BINARY SEARCH TREES
Searching and Inserting in Binary Search Trees Searching and Inserting in Binary Search Trees
1 INTRODUCTION
2 BINARY TREES
    Terminology
    Complete Binary Trees
    Strict Binary Trees
    Extended Binary Trees: 2-Trees
3 REPRESENTING BINARY TREES IN MEMORY
    Linked Representation of Binary Trees
    Sequential Representation of Binary Trees
4 PROPERTIES OF BINARY TREES
5 TRAVERSING BINARY TREES
6 TRAVERSAL ALGORITHMS USING STACKS
    Preorder Traversal
    Inorder Traversal
    Postorder Traversal
    Binary Tree Formation from its Traversals
7 HEADER NODES; THREADS
    Header Nodes
    Threads; Inorder Threading
8 Types of Binary Tree
9 Expression Tree
10 BINARY SEARCH TREES
    Searching and Inserting in Binary Search Trees
    Complexity of the Searching Algorithm
    Application of Binary Search Trees
    Deleting in a Binary Search Tree
11 AVL SEARCH TREES
    Insertion in an AVL Search Tree
    Deletion in an AVL search tree
12 RED BLACK TREES
    Rotations
    Insertion
    Deletion
13 HEAP and HEAPSORT
    Inserting into a Heap
    Deleting the Root of a Heap
    Application to Sorting - Heapsort
Trees
BINARY SEARCH TREES
Searching and Inserting in Binary Search Trees I-II
In other words, proceed from the root R down through the tree T
until finding ITEM in T or inserting ITEM as a terminal node in T .
Trees
BINARY SEARCH TREES
Searching and Inserting in Binary Search Trees III-VI
Suppose the following six numbers are inserted in order into an empty binary search tree: 40, 60, 50, 33, 55, 11.
Figure shows the six stages of the tree. We emphasize that if the six numbers were given in a different order, then the tree might be different and we might have a different depth.

The formal presentation of our search and insertion algorithm will use the following procedure, which finds the locations of a given ITEM and its parent.
The procedure traverses down the tree using the pointer PTR and the pointer SAVE for the parent node.
This procedure will also be used in the next section, on deletion.
Trees
BINARY SEARCH TREES
Searching and Inserting in Binary Search Trees VII-VIII
FIND(INFO, LEFT, RIGHT, ROOT, ITEM, LOC, PAR)
A binary search tree T is in memory and an ITEM of information is given. This procedure finds the location LOC of ITEM in T and also the location PAR of the parent of ITEM. There are three special cases:
(i) LOC = NULL and PAR = NULL indicates that the tree is empty.
(ii) LOC ≠ NULL and PAR = NULL indicates that ITEM is the root of T.
(iii) LOC = NULL and PAR ≠ NULL indicates that ITEM is not in T and can be added to T as a child of the node N with location PAR.

1. [Tree empty?]
   If ROOT = NULL, then: Set LOC := NULL and PAR := NULL, and Return.
2. [ITEM at root?]
   If ITEM = INFO[ROOT], then: Set LOC := ROOT and PAR := NULL, and Return.
3. [Initialize pointers PTR and SAVE.]
   If ITEM < INFO[ROOT], then:
       Set PTR := LEFT[ROOT] and SAVE := ROOT.
   Else:
       Set PTR := RIGHT[ROOT] and SAVE := ROOT.
   [End of If structure.]
4. Repeat Steps 5 and 6 while PTR ≠ NULL:
5. [ITEM found?]
   If ITEM = INFO[PTR], then: Set LOC := PTR and PAR := SAVE, and Return.
6. If ITEM < INFO[PTR], then:
       Set SAVE := PTR and PTR := LEFT[PTR].
   Else:
       Set SAVE := PTR and PTR := RIGHT[PTR].
   [End of If structure.]
   [End of Step 4 loop.]
7. [Search unsuccessful.] Set LOC := NULL and PAR := SAVE.
8. Exit.
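The FIND procedure can be sketched in Python with an explicit node class in place of the parallel arrays INFO, LEFT and RIGHT; the names Node, find, insert and inorder are illustrative, not from the text.

```python
# Minimal sketch of FIND plus insertion, assuming all node values are distinct.
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def find(root, item):
    """Return (LOC, PAR): the node holding item and its parent (None if absent)."""
    par, ptr = None, root
    while ptr is not None:
        if item == ptr.value:
            return ptr, par
        par = ptr                                   # SAVE := PTR
        ptr = ptr.left if item < ptr.value else ptr.right
    return None, par                                # unsuccessful; PAR is attach point

def insert(root, item):
    """Insert item if absent; returns the (possibly new) root."""
    loc, par = find(root, item)
    if loc is not None:
        return root                                 # already present
    node = Node(item)
    if par is None:
        return node                                 # tree was empty
    if item < par.value:
        par.left = node
    else:
        par.right = node
    return root

def inorder(n):
    """Inorder traversal, which lists BST keys in sorted order."""
    return [] if n is None else inorder(n.left) + [n.value] + inorder(n.right)
```

Inserting 40, 60, 50, 33, 55, 11 in order reproduces the six-stage example; an inorder traversal then yields the keys sorted.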
Trees
BINARY SEARCH TREES
Searching and Inserting in Binary Search Trees IX-X
5. Exit.
Trees
BINARY SEARCH TREES
Searching and Inserting in Binary Search Trees XI

Observe that, in Step 4, there are three possibilities:
1. the tree is empty,
2. ITEM is added as a left child, and
3. ITEM is added as a right child.
Trees
BINARY SEARCH TREES
Application of Binary Search Trees IV
Trees
BINARY SEARCH TREES
Deleting in a Binary Search Tree
node N does not have two children; and the second procedure refers to Case 3, where N does have two children.
There are many subcases which reflect the fact that N may be a left child, a right child or the root.
Also, in Case 2, N may have a left child or a right child.
Procedure B treats the case that the deleted node N has two children.
We note that the inorder successor of N can be found by moving to the right child of N and then moving repeatedly to the left until meeting a node with an empty left subtree.

1. [Initializes CHILD.]
   If LEFT[LOC] = NULL and RIGHT[LOC] = NULL, then:
       Set CHILD := NULL.
   Else if LEFT[LOC] ≠ NULL, then:
       Set CHILD := LEFT[LOC].
   Else:
       Set CHILD := RIGHT[LOC].
   [End of If structure.]
2. If PAR ≠ NULL, then:
       If LOC = LEFT[PAR], then:
           Set LEFT[PAR] := CHILD.
       Else:
           Set RIGHT[PAR] := CHILD.
       [End of If structure.]
   Else:
       Set ROOT := CHILD.
   [End of If structure.]
3. Return.
Trees
BINARY SEARCH TREES
Deleting in a Binary Search Tree XII-XIII
We can now formally state our deletion algorithm, using Procedures A and B as building blocks.

1. [Find the locations of ITEM and its parent, using Procedure FIND.]
   Call FIND(INFO, LEFT, RIGHT, ROOT, ITEM, LOC, PAR).
2. [ITEM in tree?]
   If LOC = NULL, then: Write: ITEM not in tree, and Exit.
3. [Delete node containing ITEM.]
   If RIGHT[LOC] ≠ NULL and LEFT[LOC] ≠ NULL, then:
       Call CASEB(INFO, LEFT, RIGHT, ROOT, LOC, PAR).
   Else:
       Call CASEA(INFO, LEFT, RIGHT, ROOT, LOC, PAR).
   [End of If structure.]
4. [Return deleted node to the AVAIL list.]
   Set LEFT[LOC] := AVAIL and AVAIL := LOC.
5. Exit.
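The two deletion cases can be sketched in Python with a linked node class rather than the text's array-based CASEA/CASEB procedures; the recursive delete below is an illustrative stand-in, not the text's exact code.

```python
# Sketch of BST deletion: Case A (at most one child) splices the node out;
# Case B (two children) replaces the value with the inorder successor.
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def delete(root, item):
    """Delete item from the BST rooted at root; returns the (possibly new) root."""
    if root is None:
        return None                       # item not in tree
    if item < root.value:
        root.left = delete(root.left, item)
    elif item > root.value:
        root.right = delete(root.right, item)
    else:
        if root.left is None:             # Case A: zero or one child
            return root.right
        if root.right is None:
            return root.left
        succ = root.right                 # Case B: inorder successor is the
        while succ.left is not None:      # leftmost node of the right subtree
            succ = succ.left
        root.value = succ.value
        root.right = delete(root.right, succ.value)
    return root

def inorder(n):
    return [] if n is None else inorder(n.left) + [n.value] + inorder(n.right)
```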
Trees
AVL SEARCH TREES
AVL search trees, like binary search trees, are represented using a linked representation; in addition, every node records its balance factor. The figure shows the representation of an AVL search tree; the number against each node is its balance factor.
Case 1: Unbalance occurs due to insertion in the left subtree of the left child of the pivot node.

Case 2: Unbalance occurs due to insertion in the right subtree of the right child of the pivot node.

Case 3: Unbalance occurs due to insertion in the right subtree of the left child of the pivot node.

Case 4: Unbalance occurs due to insertion in the left subtree of the right child of the pivot node.
Trees
AVL SEARCH TREES
Insertion in an AVL Search Tree
The first phase of inserting an element is the same as in a binary search tree.
After insertion, the balance factors of some nodes are affected.
Rotations are used to restore the balance of the search tree.
To perform a rotation:
  It is necessary to identify a specific node A whose balance factor BF(A) is neither 0, 1 nor -1,
  and which is the nearest such ancestor on the path from the inserted node to the root.
All other nodes on the path from the inserted node to A will have balance factors of 0, 1, or -1.

Based on the position of the inserted node with reference to A, the rotations are classified as follows:
LL rotation: the inserted node is in the left subtree of the left subtree of node A.
RR rotation: the inserted node is in the right subtree of the right subtree of node A.
LR rotation: the inserted node is in the right subtree of the left subtree of node A.
RL rotation: the inserted node is in the left subtree of the right subtree of node A.
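The four cases reduce to two single rotations plus their compositions (LR is a left rotation on A's left child followed by a right rotation on A; RL is symmetric). A sketch, with balance-factor bookkeeping omitted for brevity:

```python
# Illustrative single rotations used to rebalance around the pivot node A.
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def rotate_right(a):
    """LL case: pivot a is left-heavy; a's left child b becomes the subtree root."""
    b = a.left
    a.left = b.right          # b's right subtree moves under a
    b.right = a
    return b

def rotate_left(a):
    """RR case: pivot a is right-heavy; a's right child b becomes the subtree root."""
    b = a.right
    a.right = b.left          # b's left subtree moves under a
    b.left = a
    return b
```

For an LR rotation one would write `a.left = rotate_left(a.left)` then `a = rotate_right(a)`, and dually for RL.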
LL Rotation
The inserted node is in the left subtree of the left subtree of node A.
Construct an AVL tree by inserting the following elements in order: 64, 1, 14, 26, 13, 110, 98, 85.
Important Notes
The time complexity of an insertion operation in an AVL tree is O(height) = O(log n).
Trees
AVL SEARCH TREES
Deletion in an AVL search tree
Rotations: R0 Rotation
Deletion Example II
Trees
RED BLACK TREES
All simple paths from any node x to a descendant leaf contain the same number of black nodes; this number is the black-height of x, bh(x).
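This equal-black-height property can be checked bottom-up; a hypothetical sketch (RBNode and black_height are illustrative names; colors are strings and NIL leaves count as black):

```python
# Sketch: verify the black-height property on a small red-black tree.
class RBNode:
    def __init__(self, value, color, left=None, right=None):
        self.value, self.color = value, color
        self.left, self.right = left, right

def black_height(node):
    """Return the black-height of node if every root-to-leaf path agrees,
    otherwise None (the property is violated somewhere below)."""
    if node is None:
        return 1                              # NIL leaf contributes one black node
    lh = black_height(node.left)
    rh = black_height(node.right)
    if lh is None or rh is None or lh != rh:
        return None
    return lh + (1 if node.color == "black" else 0)
```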
Trees
RED BLACK TREES
Rotations IV-V
Trees
RED BLACK TREES
Insertion IV-V
Trees
RED BLACK TREES
Insertion: Case 1, Case 2, Case 3
Trees
RED BLACK TREES
Deletion I-II
Trees
HEAP and HEAPSORT
Inserting into a Heap
8. Return.
Trees
HEAP and HEAPSORT
Inserting into a Heap - Algorithm II

Suppose an array A with N elements is given. By repeatedly applying the procedure INSHEAP to A, that is, by executing Call INSHEAP(A, J, A[J + 1]) for J = 1, 2, ..., N − 1, we can build a heap H out of the array A.
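The build-by-repeated-insertion idea can be sketched with a 0-based Python list (the text's TREE array is 1-based); insheap and build_heap are illustrative names.

```python
# Sketch of INSHEAP as a sift-up on a 0-based list representing a max-heap.
def insheap(heap, item):
    heap.append(item)                   # place item at the end of the heap
    j = len(heap) - 1
    while j > 0:
        parent = (j - 1) // 2
        if heap[j] <= heap[parent]:
            break                       # heap property restored
        heap[j], heap[parent] = heap[parent], heap[j]
        j = parent

def build_heap(a):
    """Build a max-heap by inserting the elements of a one at a time."""
    heap = []
    for x in a:
        insheap(heap, x)
    return heap
```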
Trees
HEAP and HEAPSORT
Deleting the Root of a Heap
Example: Suppose we want to delete the root R = 95 in a given heap.

DELHEAP(TREE, N, ITEM): A heap H with N elements is stored in the array TREE. This procedure assigns the root TREE[1] of H to the variable ITEM and then reheaps the remaining elements. The variable LAST saves the value of the original last node of H. The pointers PTR, LEFT and RIGHT give the locations of LAST and its left and right children as LAST sinks in the tree.
Trees
HEAP and HEAPSORT
Deleting the Root of a Heap - Algorithm II

8. If LEFT = N and if LAST < TREE[LEFT], then: Set PTR := LEFT.
9. Set TREE[PTR] := LAST.
10. Return.
Trees
HEAP and HEAPSORT
Application to Sorting - Heapsort
HEAPSORT(A, N): An array A with N elements is given. This algorithm sorts the elements of A.
[End of Loop.]
3. Exit.
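A sketch of the two phases HEAPSORT performs, build a max-heap, then repeatedly delete the root, using 0-based indexing rather than the text's 1-based TREE array:

```python
def heapsort(a):
    """In-place heapsort sketch: build a max-heap, then repeatedly swap the
    root with the last element and sift it down over the shrinking prefix."""
    def sift_down(a, i, n):
        # Restore the heap property for the subtree rooted at i within a[:n].
        while True:
            left, right, largest = 2 * i + 1, 2 * i + 2, i
            if left < n and a[left] > a[largest]:
                largest = left
            if right < n and a[right] > a[largest]:
                largest = right
            if largest == i:
                return
            a[i], a[largest] = a[largest], a[i]
            i = largest

    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # build phase (bottom-up)
        sift_down(a, i, n)
    for end in range(n - 1, 0, -1):       # delete-root phase
        a[0], a[end] = a[end], a[0]       # move current maximum into place
        sift_down(a, 0, end)
    return a
```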
Trees
HEAP and HEAPSORT
Application to Sorting - Heapsort
Complexity of Heapsort IV
Outline
1 Introduction
    Types of Graphs
    Representation of a Graph
2 Elementary Graph Algorithms
    TRAVERSING A GRAPH
    Breadth-first search
    Depth-first search
3 Minimum Spanning Trees
    The algorithm of Kruskal
    The algorithm of Prim
4 Single-Source Shortest Paths
    The Bellman-Ford algorithm
    Dijkstra's algorithm
5 All-Pair Shortest Paths
    The Floyd-Warshall algorithm
A graph G consists of two things:
1. A set V of elements called nodes (or points or vertices)
2. A set E of edges such that each edge e in E is identified with a unique (unordered) pair [u, v] of nodes in V, denoted by e = [u, v]
We indicate the parts of a graph by writing G = (V, E).

Example: V = {1, 2, 3, 4, 5, 6} and E = {(1,4), (1,6), (2,6), (4,5), (5,6)}.
In a graph, a vertex with degree zero is called an isolated vertex; in other words, a vertex not associated with any edge is isolated. In the example graph, vertex 3 is an isolated vertex.
A vertex with degree one is called a pendant vertex of the graph G; in other words, exactly one edge is associated with that vertex. In the example graph, vertex 2 is a pendant vertex.
Path
In a graph G, an open walk is called a path if no vertex appears more than once in the walk. Hence a path is a finite alternating sequence of vertices and edges in which no vertex or edge appears more than once. The total number of edges in the path is called the length of the path. For example, V1, e1, V2, e4, V4, e5, V5, e6, V3 is a path.

Formally, a path P of length n from a node u to a node v is defined as a sequence of n + 1 nodes, P = (v0, v1, v2, ..., vn), such that u = v0; vi−1 is adjacent to vi for i = 1, 2, ..., n; and vn = v.
The path P is said to be closed if v0 = vn.
The path P is said to be simple if all the nodes are distinct, with the exception that v0 may equal vn; that is, P is simple if the nodes v0, v1, ..., vn−1 are distinct and the nodes v1, v2, ..., vn are distinct.
A cycle is a closed simple path with length 3 or more. A cycle of
length k is called a k-cycle.
Graph Algorithms
Introduction
Types of Graphs
An undirected graph (graph) is a graph in which edges have no orientation. The edge (x, y) is identical to the edge (y, x); that is, edges are not ordered pairs. The maximum number of edges possible in an undirected graph without a loop is n(n − 1)/2.

A directed graph (digraph) is a graph in which edges have orientations. The edge (x, y) is not identical to the edge (y, x).
The notions of path, simple path and cycle carry over from undirected graphs to directed graphs except that now the direction of each edge in a path (cycle) must agree with the direction of the path (cycle). A node v is said to be reachable from a node u if there is a (directed) path from u to v.

A directed graph G is said to be connected, or strongly connected, if for each pair u, v of nodes in G there is a path from u to v and there is also a path from v to u.
On the other hand, G is said to be unilaterally connected if for any pair u, v of nodes in G there is a path from u to v or a path from v to u.
Let T be any nonempty tree graph. Suppose we choose any node R in T. Then T with this designated node R is called a rooted tree and R is called its root. Recall that there is a unique simple path from the root R to any other node in T. This defines a direction to the edges in T, so the rooted tree T may be viewed as a directed graph. Furthermore, suppose we also order the successors of each node v in T. Then T is called an ordered rooted tree. Ordered rooted trees are nothing more than the general trees.

A directed graph G is said to be simple if G has no parallel edges. A simple graph G may have loops, but it cannot have more than one loop at a given node. A nondirected graph G may be viewed as a simple directed graph by assuming that each edge [u, v] in G represents two directed edges, (u, v) and (v, u).
Observe that we use the notation [u, v] to denote an unordered pair and the notation (u, v) to denote an ordered pair.
Example
indeg(D) = 1 and outdeg(D) = 2. Node C is a sink, since indeg(C) = 2 but outdeg(C) = 0. No node in G is a source.
A graph can be represented in many ways; two of them are discussed here:
Adjacency Matrix Representation
Adjacency List Representation

An adjacency matrix is a square matrix used to represent a finite graph. The elements of the matrix indicate whether pairs of vertices are adjacent or not in the graph.
Space complexity: O(|V|^2)
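A sketch of building the adjacency matrix for an undirected graph, using the running example V = {1, ..., 6}, E = {(1,4), (1,6), (2,6), (4,5), (5,6)}; adjacency_matrix is an illustrative name.

```python
def adjacency_matrix(n, edges):
    """Build an n x n adjacency matrix for an undirected graph whose
    vertices are numbered 1..n (each edge sets two symmetric entries)."""
    a = [[0] * n for _ in range(n)]
    for u, v in edges:
        a[u - 1][v - 1] = 1    # shift to 0-based row/column indices
        a[v - 1][u - 1] = 1
    return a
```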
Graph Algorithms
Introduction
Representation of a Graph
Consider the graph G in Fig. Suppose the nodes are stored in memory in a linear array DATA as follows: DATA: X, Y,
Z, W
Then we assume that the ordering of the nodes in G is as follows: n1 = X , n2 = Y , n3 = Z and n4 = W . The
adjacency matrix A of G is as follows:
Suppose there is a path from vi to vj. Then there must be a simple path from vi to vj when vi ≠ vj, or there must be a cycle from vi to vj when vi = vj. Since G has only m nodes, such a simple path must have length m − 1 or less, and such a cycle must have length m or less.

Let A be the adjacency matrix and let P = (pij) be the path matrix of a digraph G. Then pij = 1 if and only if there is a nonzero number in the ij entry of the matrix Bm = A + A^2 + A^3 + ... + A^m.
Node insertion (fragment):
4. [Insert node N in the NODE list.]
   Set NODE[NEW] := N, NEXT[NEW] := START and START := NEW.
5. Set FLAG := TRUE, and Return.

Edge insertion (fragment):
3. [OVERFLOW?] If AVAILE = NULL, then: Set FLAG := FALSE, and Return.
4. [Remove node from AVAILE list.] Set NEW := AVAILE and AVAILE := LINK[AVAILE].
5. [Insert LOCB in list of successors of A.]
   Set DEST[NEW] := LOCB, LINK[NEW] := ADJ[LOCA] and ADJ[LOCA] := NEW.
6. Set FLAG := TRUE, and Return.

FIND (fragment):
1. Set PTR := START.
2. Repeat while PTR ≠ NULL:
       If ITEM = INFO[PTR], then: Set LOC := PTR, and Return.
       Else: Set PTR := LINK[PTR].
   [End of loop.]

Finding an edge (fragment):
3. If LOCA = NULL or LOCB = NULL, then: Set LOC := NULL.
   Else: Call FIND(DEST, LINK, ADJ[LOCA], LOCB, LOC).
4. Return.

DELETE(INFO, LINK, START, AVAIL, ITEM, FLAG)
Deletes the first node in the list containing ITEM, or sets FLAG := FALSE when ITEM does not appear in the list.
1. [List empty?] If START = NULL, then: Set FLAG := FALSE, and Return.
2. [ITEM in first node?] If INFO[START] = ITEM, then:
       Set PTR := START, START := LINK[START],
       LINK[PTR] := AVAIL, AVAIL := PTR,
       FLAG := TRUE, and Return.
   [End of If structure.]
3. Set PTR := LINK[START] and SAVE := START. [Initializes pointers.]
4. Repeat Steps 5 and 6 while PTR ≠ NULL:
5. If INFO[PTR] = ITEM, then:
       Set LINK[SAVE] := LINK[PTR], LINK[PTR] := AVAIL,
       AVAIL := PTR, FLAG := TRUE, and Return.
   [End of If structure.]
6. Set SAVE := PTR and PTR := LINK[PTR]. [Updates pointers.]
   [End of Step 4 loop.]
7. Set FLAG := FALSE, and Return.

Node deletion (fragment):
3. [Delete edges ending at N.]
   (a) Set PTR := START.
   (b) Repeat while PTR ≠ NULL:
       (i) Call DELETE(DEST, LINK, ADJ[PTR], AVAILE, LOC, FLAG).
       (ii) Set PTR := NEXT[PTR].
       [End of loop.]
4. [Successor list empty?] If ADJ[LOC] = NULL, then: Go to Step 7.
5. [Find the first and last successor of N.]
   (a) Set BEG := ADJ[LOC], END := ADJ[LOC] and PTR := LINK[END].
   (b) Repeat while PTR ≠ NULL:
           Set END := PTR and PTR := LINK[PTR].
       [End of loop.]
6. [Add successor list of N to AVAILE list.]
   Set LINK[END] := AVAILE and AVAILE := BEG.
7. Call DELETE(NODE, NEXT, START, AVAILN, N, FLAG).
8. Return.
Graph Algorithms
Elementary Graph Algorithms
TRAVERSING A GRAPH
Algorithm: This algorithm executes a breadth-first search on a graph G beginning at a starting node A.
1. Initialize all nodes to the ready state (STATUS = 1).
2. Put the starting node A in QUEUE and change its status to the waiting state (STATUS = 2).
3. Repeat Steps 4 and 5 until QUEUE is empty.
4. Remove the front node N of QUEUE. Process N and change the status of N to the processed state (STATUS = 3).
5. Add to the rear of QUEUE all the neighbors of N that are in the ready state (STATUS = 1), and change their status to the waiting state (STATUS = 2).
   [End of Step 3 loop.]
6. Exit.

The above algorithm will process only those nodes which are reachable from the starting node A.
Suppose one wants to examine all the nodes in the graph G. Then the algorithm must be modified so that it begins again with another node (which we will call B) that is still in the ready state.
This node B can be obtained by traversing the list of nodes.
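The algorithm above can be sketched in Python, with STATUS values 1/2/3 mirroring ready/waiting/processed; adj and bfs are illustrative names, and adj maps each node to its neighbor list.

```python
from collections import deque

def bfs(adj, start):
    """Breadth-first search sketch; returns nodes in processing order
    (only those reachable from start)."""
    status = {v: 1 for v in adj}          # 1 = ready
    order = []
    queue = deque([start])
    status[start] = 2                     # 2 = waiting
    while queue:
        n = queue.popleft()               # remove the front node N
        status[n] = 3                     # 3 = processed
        order.append(n)
        for m in adj[n]:                  # enqueue ready-state neighbors
            if status[m] == 1:
                status[m] = 2
                queue.append(m)
    return order
```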
BFS-Example-1
BFS-Example-2
(c) Remove the front element F from QUEUE by setting FRONT := FRONT + 1, and add to QUEUE the neighbors of F as follows:
    FRONT = 3  QUEUE: A, F, C, B, D
    REAR = 5   ORIG: ∅, A, A, A, F
(d) Remove the front element C from QUEUE, and add to QUEUE the neighbors of C (which are in the ready state) as follows:
    FRONT = 4  QUEUE: A, F, C, B, D
    REAR = 5   ORIG: ∅, A, A, A, F
    Note that the neighbor F of C is not added to QUEUE, since F is not in the ready state (because F has already been added to QUEUE).
(e) Remove the front element B from QUEUE, and add to QUEUE the neighbors of B (the ones in the ready state) as follows:
    FRONT = 5  QUEUE: A, F, C, B, D, G
    REAR = 6   ORIG: ∅, A, A, A, F, B
    Note that only G is added to QUEUE, since the other neighbor, C, is not in the ready state.
(f) Remove the front element D from QUEUE, and add to QUEUE the neighbors of D (the ones in the ready state) as follows:
    FRONT = 6  QUEUE: A, F, C, B, D, G
    REAR = 6   ORIG: ∅, A, A, A, F, B
(g) Remove the front element G from QUEUE and add to QUEUE the neighbors of G (the ones in the ready state) as follows:
    FRONT = 7  QUEUE: A, F, C, B, D, G, E
    REAR = 7   ORIG: ∅, A, A, A, F, B, G
(h) Remove the front element E from QUEUE and add to QUEUE
the neighbors of E (the ones in the ready state) as follows:
FRONT = 8 QUEUE : A, F, C, B, D, G, E, J
REAR = 8 ORIG : ∅, A, A, A, F, B, G, E
We stop as soon as J is added to QUEUE, since J is our
final destination. We now backtrack from J, using the array
ORIG to find the path P. Thus
J←E←G←B←A
is the required path P.
BFS-Analysis
In breadth-first search each vertex is enqueued at most once, and hence dequeued at most once. The operations of enqueuing and dequeuing take O(1) time, and so the total time devoted to queue operations is O(V). Because the procedure scans the adjacency list of each vertex only when the vertex is dequeued, it scans each adjacency list at most once. Since the sum of the lengths of all the adjacency lists is Θ(E), the total time spent in scanning adjacency lists is O(E). The overhead for initialization is O(V), and thus the total running time of the BFS procedure is O(V + E). Thus, breadth-first search runs in time linear in the size of the adjacency-list representation of G.
Again, the above algorithm will process only those nodes which
are reachable from the starting node A. Suppose one wants to
examine all the nodes in G. Then the algorithm must be modified
so that it begins again with another node.
DFS-Example-2
Consider the graph G in Fig.(a). Suppose we want to find and
print all the nodes reachable from the node J (including J itself).
One way to do this is to use a depth-first search of G starting at
the node J.
(a) Initially, push J onto the stack as follows:
    STACK: J
(b) Pop and print the top element J, and then push onto the stack all the neighbors of J (those that are in the ready state) as follows:
    Print J
    STACK: D, K
(c) Pop and print the top element K, and then push onto the stack all the neighbors of K (those that are in the ready state) as follows:
    Print K
    STACK: D, E, G
(d) Pop and print the top element G, and then push onto the stack all the neighbors of G (those in the ready state) as follows:
    Print G
    STACK: D, E, C
    Note that only C is pushed onto the stack, since the other neighbor, E, is not in the ready state (because E has already been pushed onto the stack).
(e) Pop and print the top element C, and then push onto the stack all the neighbors of C (those in the ready state) as follows:
    Print C
    STACK: D, E, F
(f) Pop and print the top element F, and then push onto the stack all the neighbors of F (those in the ready state) as follows:
    Print F
    STACK: D, E
    Note that the only neighbor, D, of F is not pushed onto the stack, since D is not in the ready state (because D has already been pushed onto the stack).
(g) Pop and print the top element E, and push onto the stack all the neighbors of E (those in the ready state) as follows:
    Print E
    STACK: D
    (Note that none of the three neighbors of E is in the ready state.)
(h) Pop and print the top element D, and push onto the stack all the neighbors of D (those in the ready state) as follows:
    Print D
    STACK: (empty)
    The stack is now empty, so the depth-first search of G starting at J is now complete. Accordingly, the nodes which were printed,
    J, K, G, C, F, E, D
    are precisely the nodes which are reachable from J.
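The stack-based traversal worked through in this example can be sketched as follows (dfs and adj are illustrative names; adj maps each node to its neighbor list):

```python
def dfs(adj, start):
    """Depth-first search sketch matching the example: pop a node, process it,
    then push its ready-state neighbors. Returns nodes in processing order."""
    status = {v: 1 for v in adj}          # 1 = ready
    order = []
    stack = [start]
    status[start] = 2                     # 2 = waiting (on the stack)
    while stack:
        n = stack.pop()
        status[n] = 3                     # 3 = processed
        order.append(n)
        for m in adj[n]:
            if status[m] == 1:            # push only ready-state neighbors
                status[m] = 2
                stack.append(m)
    return order
```

Because neighbors are pushed in list order, the last-listed ready neighbor is popped first; the exact visiting order therefore depends on how each adjacency list is ordered.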
DFS-Analysis
In Kruskal's algorithm, the set A is a forest whose vertices are all those of the given graph. The safe edge added to A is always a least-weight edge in the graph that connects two distinct components.

Let (u, v) be an edge of least weight, and let C1 and C2 denote the two trees that it connects; (u, v) is then a light edge connecting C1 to some other tree. Kruskal's algorithm qualifies as a greedy algorithm because at each step it adds to the forest an edge of least possible weight.

We use a disjoint-set data structure to maintain several disjoint sets of elements. Each set contains the vertices in one tree of the current forest. The operation FIND-SET(u) returns a representative element from the set that contains u. Thus, we can determine whether two vertices u and v belong to the same tree by testing whether FIND-SET(u) equals FIND-SET(v). To combine trees, Kruskal's algorithm calls the UNION procedure.
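A compact sketch of Kruskal's algorithm with a simple FIND-SET/UNION (path compression only; union by rank is omitted for brevity). The function and edge format are illustrative, not from the text.

```python
def kruskal(n, edges):
    """MST sketch: edges is a list of (weight, u, v) with vertices 0..n-1;
    returns the list of edges chosen for the minimum spanning tree."""
    parent = list(range(n))               # each vertex starts in its own set

    def find(x):                          # FIND-SET with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):         # consider edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                      # u and v lie in different trees
            parent[ru] = rv               # UNION of the two trees
            mst.append((w, u, v))
    return mst
```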
The next edge can be obtained in O(log E) time if the graph has E edges.
Reconstruction of the heap takes O(E) time.
So, Kruskal's Algorithm takes O(E log E) time.
The value of E can be at most O(V²), so O(log V) and O(log E) are the same.

3 Minimum Spanning Trees
The algorithms of Kruskal
The algorithms of Prim
4 Single-Source Shortest Paths
The Bellman-Ford algorithm
Dijkstra's algorithm
5 All-Pair Shortest Paths
The Floyd-Warshall algorithm
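The FIND-SET/UNION structure described above can be sketched as follows. This is an illustration rather than the slides' pseudocode, and the edge list is hypothetical:

```python
def kruskal(n, edges):
    """Kruskal's MST: examine edges in order of increasing weight and add
    each edge that connects two distinct components, using a disjoint-set
    (union-find) structure over the n vertices 0..n-1."""
    parent = list(range(n))

    def find_set(u):                 # FIND-SET with path halving
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    mst, total = [], 0
    for w, u, v in sorted(edges):    # edges given as (weight, u, v)
        ru, rv = find_set(u), find_set(v)
        if ru != rv:                 # (u, v) joins two distinct trees
            parent[ru] = rv          # UNION
            mst.append((u, v, w))
            total += w
    return mst, total

# Small illustrative graph (not from the slides)
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
mst, total = kruskal(4, edges)
print(mst, total)
```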
After initializing the d and π values of all vertices in line 1, the algorithm makes |V| − 1 passes over the edges of the graph. Each pass is one iteration of the for loop of lines 2−4 and consists of relaxing each edge of the graph once. After making |V| − 1 passes, lines 5−8 check for a negative-weight cycle and return the appropriate boolean value. The Bellman-Ford algorithm runs in time O(VE), since the initialization in line 1 takes Θ(V) time, each of the |V| − 1 passes over the edges in lines 2−4 takes Θ(E) time, and the for loop of lines 5−7 takes O(E) time.
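The structure analyzed above — initialization, |V| − 1 relaxation passes, then a negative-cycle check — can be sketched as follows (a minimal version; the π predecessor bookkeeping is omitted, and the example graph is hypothetical):

```python
def bellman_ford(vertices, edges, s):
    """Bellman-Ford: |V|-1 passes of relaxing every edge, then one final
    pass over the edges to detect a negative-weight cycle."""
    INF = float("inf")
    d = {v: INF for v in vertices}      # initialize d values (line 1)
    d[s] = 0
    for _ in range(len(vertices) - 1):  # |V|-1 passes (lines 2-4)
        for u, v, w in edges:           # relax each edge once
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    for u, v, w in edges:               # negative-cycle check (lines 5-8)
        if d[u] + w < d[v]:
            return None, False
    return d, True

# Hypothetical graph with one negative (but non-cyclic) edge weight
vertices = ["s", "a", "b"]
edges = [("s", "a", 2), ("a", "b", -1), ("s", "b", 4)]
d, ok = bellman_ford(vertices, edges, "s")
print(d, ok)
```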
Graph Algorithms
Single-Source Shortest Paths
Dijkstra's algorithm
Line 1 initializes the d and π values in the usual way, and line 2 initializes the set S to the empty set. The algorithm maintains the invariant that Q = V − S at the start of each iteration of the while loop of lines 4−8. Line 3 initializes the min-priority queue Q to contain all the vertices in V; since S = ∅ at that time, the invariant is true after line 3. Each time through the while loop of lines 4−8, line 5 extracts a vertex u from Q = V − S and line 6 adds it to set S, thereby maintaining the invariant. (The first time through this loop, u = s.) Vertex u, therefore, has the smallest shortest-path estimate of any vertex in V − S. Then, lines 7−8 relax each edge (u, v) leaving u, thus updating the estimate v.d and the predecessor v.π if we can improve the shortest path to v found so far by going through u.
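The loop described above can be sketched compactly using Python's heapq as the min-priority queue Q. The graph below is hypothetical, and stale queue entries stand in for a DECREASE-KEY operation:

```python
import heapq

def dijkstra(graph, s):
    """Dijkstra: repeatedly extract the vertex u with the smallest
    shortest-path estimate and relax each edge (u, v) leaving u."""
    d = {v: float("inf") for v in graph}   # initialize d values
    d[s] = 0
    S = set()                              # vertices with final distances
    Q = [(0, s)]                           # min-priority queue of (d[v], v)
    while Q:
        du, u = heapq.heappop(Q)           # EXTRACT-MIN
        if u in S:                         # skip stale entries
            continue
        S.add(u)
        for v, w in graph[u]:              # relax each edge leaving u
            if du + w < d[v]:
                d[v] = du + w
                heapq.heappush(Q, (d[v], v))
    return d

# Hypothetical weighted graph: adjacency lists of (neighbor, weight)
g = {"s": [("t", 10), ("y", 5)], "t": [("x", 1)],
     "y": [("t", 3), ("x", 9)], "x": []}
print(dijkstra(g, "s"))
```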
Example-1
Consider the weighted graph G in Fig. Assume v1 = R, v2 = S, v3 = T and v4 = U. Then the weight matrix W of G is as follows:
Example-2
Initially, the distance from each node to itself is 0, and the distance between nodes a and b is x if there is an edge between nodes a and b with weight x. All other distances are infinite.
The algorithm continues like this, until all nodes have been appointed intermediate nodes. After the algorithm has finished, the array contains the minimum distances between any two nodes:
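The process described in Example-2 — appointing each node in turn as an intermediate node — can be sketched as follows. The 4-node weight matrix W below is illustrative, not the one from the figure:

```python
def floyd_warshall(dist):
    """Floyd-Warshall: for each intermediate node k, shorten dist[i][j]
    whenever the path i -> k -> j is cheaper than the current estimate."""
    n = len(dist)
    d = [row[:] for row in dist]          # work on a copy of the matrix
    for k in range(n):                    # appoint node k as intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

INF = float("inf")
# Hypothetical weight matrix: 0 on the diagonal, INF where no edge exists
W = [[0, 3, INF, 7],
     [8, 0, 2, INF],
     [5, INF, 0, 1],
     [2, INF, INF, 0]]
result = floyd_warshall(W)
```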
Example-3

Graph Algorithms
All-Pair Shortest Paths
The Floyd-Warshall algorithm
Conclusion
Outline I
1 Introduction
2 Simple recursive algorithms
3 Backtracking algorithms
Algorithm Paradigms
4 Divide and conquer algorithms
5 Dynamic programming algorithms
6 Greedy algorithms
Dynamic programming vs Greedy approach
7 Branch and bound algorithms
8 Brute force algorithms
9 Randomized algorithms
10 Prune and search
Introduction I
An algorithmic paradigm or algorithm design paradigm is a generic model or framework which underlies the design of a class of algorithms.
An algorithmic paradigm is an abstraction higher than the notion of an algorithm, just as an algorithm is an abstraction higher than a computer program.
Algorithms that use a similar problem-solving approach can be grouped together.
This classification scheme is neither exhaustive nor disjoint.
The purpose is not to be able to classify an algorithm as one type or another, but to highlight the various ways in which a problem can be attacked.
Algorithm Paradigms
Introduction
Algorithm types we will consider include:
• Simple recursive algorithms
• Backtracking algorithms
• Divide and conquer algorithms
• Dynamic programming algorithms
• Greedy algorithms
• Branch and bound algorithms
• Brute force algorithms
• Randomized algorithms
• Prune and search
Finding Path through maze:

Eight Queens Problem: Find an arrangement of 8 queens on a single chess board such that no two queens are attacking one another.
1. Place a queen on the first available square in row 1.
2. Move onto the next row, placing a queen on the first available square there (that doesn't conflict with the previously placed queens).
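The two steps above, plus undoing a placement when a row has no available square, can be sketched as a backtracking routine (a minimal illustration, not the course's code):

```python
def solve_queens(n=8):
    """Backtracking for n-queens: place a queen row by row on the first
    non-conflicting column; undo and retry when a row has no options."""
    cols = []                     # cols[r] = column of the queen in row r
    def safe(r, c):
        # no shared column, no shared diagonal with any earlier queen
        return all(c != cc and abs(c - cc) != r - rr
                   for rr, cc in enumerate(cols))
    def place(row):
        if row == n:              # all queens placed: success
            return True
        for c in range(n):        # first available square in this row
            if safe(row, c):
                cols.append(c)
                if place(row + 1):
                    return True
                cols.pop()        # backtrack
        return False
    return cols if place(0) else None

print(solve_queens(8))  # → [0, 4, 7, 5, 2, 6, 1, 3]
```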
Algorithm Paradigms
Backtracking algorithms
To color a map with no more than four colors:
color(Country n)
• If all countries have been colored (n > number of countries) return success; otherwise,
• For each color c of four colors: If country n is not adjacent to a country that has been colored c
  • Color country n with color c
  • Recursively color country n + 1
  • If successful, return success
• Return failure (if loop exits)
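The pseudocode above translates almost line for line into Python. The ring-shaped adjacency map below is hypothetical:

```python
def color_map(adjacent, n_countries, n_colors=4):
    """Backtracking map coloring following the pseudocode above: try each
    color for country n, recurse on country n+1, undo on failure."""
    coloring = {}
    def color(n):
        if n >= n_countries:                      # all countries colored
            return True
        for c in range(n_colors):
            # country n must not touch a country already colored c
            if all(coloring.get(m) != c for m in adjacent[n]):
                coloring[n] = c                   # color country n with c
                if color(n + 1):                  # recursively color n+1
                    return True
                del coloring[n]                   # backtrack
        return False                              # loop exits: failure
    return coloring if color(0) else None

# Hypothetical adjacency: four countries arranged in a ring
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(color_map(adj, 4))
```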
Fundamental of Divide & Conquer Strategy

Applications of Divide and Conquer Approach
5. Strassen's Algorithm: It is an algorithm for matrix multiplication, which is named after Volker Strassen. It has proven to be much faster than the traditional algorithm when it works on large matrices.
6. Cooley-Tukey Fast Fourier Transform (FFT) algorithm: The Fast Fourier Transform algorithm is named after J. W. Cooley and John Tukey. It follows the Divide and Conquer approach and imposes a complexity of O(n log n).
7. Karatsuba algorithm for fast multiplication: It is one of the fastest multiplication algorithms, invented by Anatoly Karatsuba in 1960 and published in 1962. It multiplies two n-digit numbers by recursively reducing the multiplication to single-digit multiplications.

Advantages of Divide and Conquer
Divide and Conquer tends to solve some famously hard problems successfully, such as the Tower of Hanoi, a mathematical puzzle. It is challenging to solve complicated problems for which you have no basic idea, but the divide and conquer approach lessens the effort, as it works by dividing the main problem into two halves and then solving them recursively. This approach is often much faster than straightforward alternatives.
It efficiently uses cache memory without occupying much space, because it solves simple subproblems within the cache memory instead of accessing the slower main memory.
It is more proficient than its counterpart, the Brute Force technique.
Dynamic programming algorithms
Dynamic programming is a technique that breaks a problem into sub-problems and saves the results for future purposes so that we do not need to compute them again. The property that subproblems are optimized to optimize the overall solution is known as the optimal substructure property. The main use of dynamic programming is to solve optimization problems, i.e., problems in which we are trying to find the minimum or the maximum solution. Dynamic programming guarantees to find the optimal solution of a problem if such a solution exists.
How does the dynamic programming approach work?
The following are the steps that dynamic programming follows:
• It breaks down the complex problem into simpler subproblems.
• It finds the optimal solution to these sub-problems.
• It stores the results of the subproblems (memoization).
• It reuses them so that the same sub-problem is not calculated more than once.
• Finally, it calculates the result of the complex problem.
The above five steps are the basic steps of dynamic programming. Dynamic programming is applicable to problems that have overlapping subproblems and optimal substructure. Here, optimal substructure means that the solution of an optimization problem can be obtained by simply combining the optimal solutions of all the subproblems.
In the case of dynamic programming, the space complexity is increased, as we store the intermediate results, but the time complexity is decreased.
When i = 2, the values 0 and 1 are added as shown below:
When i = 3, the values 1 and 1 are added as shown below:
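The additions for i = 2, 3, . . . are exactly what a memoized Fibonacci computation performs; a minimal sketch:

```python
def fib(n, memo={0: 0, 1: 1}):
    """Fibonacci with memoization: each subproblem is solved once and its
    result stored, so fib(n) costs O(n) instead of exponential time."""
    if n not in memo:
        # when i = 2 the values 0 and 1 are added, when i = 3 the
        # values 1 and 1 are added, and so on
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print([fib(i) for i in range(8)])  # → [0, 1, 1, 2, 3, 5, 8, 13]
```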
Dynamic programming vs Greedy approach
Before understanding the differences between the dynamic programming and greedy approaches, we should understand each approach separately.
Hashing
Outline I
1 Introduction
2 Hash functions
3 Collision Resolution
4 Open Addressing: Linear Probing and Modifications
5 Chaining
Hashing
Introduction
The search time of each algorithm discussed so far depends on the number n of elements in the collection S of data.
Hashing, or hash addressing, is a searching technique which is essentially independent of the number n.
In hashing, the idea is to use a hash function that converts a given key to a smaller number and to use that small number as an index in a table called a hash table.
Hashing
Hash functions
3. Folding method
• The key k is partitioned into a number of parts k1, k2, . . . , kr, where each part, except possibly the last, has the same number of digits as the required address.
• Then the parts are added together, ignoring the last carry. That is, H(k) = k1 + k2 + · · · + kr, where the leading-digit carries, if any, are ignored. Sometimes, for extra "milling", the even-numbered parts k2, k4, . . . are each reversed before the addition.

Example
Consider a company each of whose 68 employees is assigned a unique 4-digit employee number. Suppose L consists of 100 two-digit addresses: 00, 01, 02, . . . , 99. We apply the above hash functions to each of the following employee numbers: 3205, 7148, 2345.
Division method: Choose a prime number m close to 99, such as m = 97. Then
H(3205) = 4, H(7148) = 67, H(2345) = 17.
That is, dividing 3205 by 97 gives a remainder of 4, dividing 7148 by 97 gives a remainder of 67, and dividing 2345 by 97 gives a remainder of 17. In the case that the memory addresses begin with 01 rather than 00, we choose the function H(k) = k (mod m) + 1 to obtain:
H(3205) = 4 + 1 = 5,
H(7148) = 67 + 1 = 68,
H(2345) = 17 + 1 = 18.

Midsquare method: The following calculations are performed:
Observe that the fourth and fifth digits, counting from the right, are chosen for the hash address.
Hashing
Hash functions
Example IV
Folding method: Chopping the key k into two parts and adding yields the following hash addresses:
H(3205) = 32 + 05 = 37,
H(7148) = 71 + 48 = 19,
H(2345) = 23 + 45 = 68.
Observe that the leading digit 1 in H(7148) is ignored. Alternatively, one may want to reverse the second part before adding, thus producing the following hash addresses:
H(3205) = 32 + 50 = 82,
H(7148) = 71 + 84 = 55,
H(2345) = 23 + 54 = 77.
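The division and folding calculations for the employee numbers 3205, 7148, 2345 can be checked with a short sketch:

```python
def division_hash(k, m=97):
    """Division method: remainder of k modulo a prime m close to the
    number of addresses (here m = 97, close to 99)."""
    return k % m

def folding_hash(k):
    """Folding method for a 4-digit key: chop into two 2-digit parts,
    add them, and ignore any leading-digit carry."""
    k1, k2 = divmod(k, 100)
    return (k1 + k2) % 100        # drop the leading carry, if any

for key in (3205, 7148, 2345):
    print(key, division_hash(key), folding_hash(key))
# division: 4, 67, 17 — folding: 37, 19, 68 (matching the examples above)
```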
Hashing
Collision Resolution
Linear Probing I
Hashing
Open Addressing: Linear Probing and Modifications
Here λ = n/m is the load factor.
Suppose the 8 records are entered into the table T in the above order. Then the file F will appear in memory as follows:
Although Y is the only record with hash address H(k) = 5, the record is not assigned to T[5], since T[5] has already been filled by E because of a previous collision at T[4]. Similarly, Z does not appear in T[1].
The average number S of probes for a successful search follows:
The average number U of probes for an unsuccessful search follows:
The first sum adds the number of probes to find each of the 8 records, and the second sum adds the number of probes to find an empty location for each of the 11 locations.
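A minimal open-addressing table with linear probing, assuming a division-method hash (the record data from the figure is not reproduced; the keys below are illustrative):

```python
class LinearProbingTable:
    """Open addressing with linear probing: on a collision at location h,
    try h+1, h+2, ... (mod m) until an empty slot is found."""
    def __init__(self, m):
        self.m = m
        self.slots = [None] * m

    def _hash(self, key):
        return key % self.m           # division-method hash (an assumption)

    def insert(self, key):
        h = self._hash(key)
        for i in range(self.m):       # probe sequence h, h+1, h+2, ...
            j = (h + i) % self.m
            if self.slots[j] is None:
                self.slots[j] = key
                return j
        raise OverflowError("table is full")

    def search(self, key):
        h = self._hash(key)
        for i in range(self.m):
            j = (h + i) % self.m
            if self.slots[j] is None:
                return None           # empty slot reached: key absent
            if self.slots[j] == key:
                return j
        return None

t = LinearProbingTable(11)
for k in (22, 33, 44):                # all three hash to 0, so they collide
    t.insert(k)
print(t.slots[:3])                    # → [22, 33, 44]
```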
Hashing
Open Addressing: Linear Probing and Modifications
Disadvantage
Techniques
Double Hashing: Here a second hash function H′ is used for resolving a collision, as follows. Suppose a record R with key k has the hash addresses H(k) = h and H′(k) = h′ ≠ m. Then we linearly search the locations with addresses h, h + h′, h + 2h′, h + 3h′, . . . . If m is a prime number, then the above sequence will access all the locations in the table T.
Hashing
Chaining
Example
Consider again the data, where the 8 records have the following hash addresses:
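A minimal sketch of chaining, again assuming a division-method hash; each table location holds a list of all keys that hash there, so collisions never overflow the table (the key list is illustrative):

```python
def build_chains(keys, m):
    """Chaining: table location H(k) holds a linked list (here a Python
    list) of every key that hashes to it."""
    table = [[] for _ in range(m)]
    for k in keys:
        table[k % m].append(k)        # division-method hash (an assumption)
    return table

def chain_search(table, k):
    """Search only the chain at location H(k)."""
    return k in table[k % len(table)]

table = build_chains([22, 33, 44, 5, 16], 11)
print(table[0], table[5])             # → [22, 33, 44] [5, 16]
```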
Hashing
Chaining
Disadvantage