[BCS-301]
Syllabus
Introduction: Basic Terminology, Elementary Data
Organization, Built in Data Types in C. Algorithm,
Efficiency of an Algorithm, Time and Space Complexity,
Asymptotic notations: Big Oh, Big Theta and Big Omega,
Time-Space trade-off. Abstract Data Types (ADT)
Text Books
Aaron M. Tenenbaum, Yedidyah Langsam and Moshe J. Augenstein, “Data Structures Using C and C++”, PHI Learning Private Limited, Delhi, India.
Horowitz and Sahni, “Fundamentals of Data Structures”, Galgotia Publications Pvt. Ltd., Delhi, India.
Lipschutz, “Data Structures”, Schaum’s Outline Series, Tata McGraw-Hill Education (India) Pvt. Ltd.
Thareja, “Data Structure Using C”, Oxford Higher Education.
Course Outcomes [CO]
CO1: Describe how arrays, linked lists, stacks, queues, trees, and graphs are represented in memory, how algorithms use them, and their common applications.
CO2: Understand and implement linear data structures such as Stack, Queue and Priority Queue.
CO3: Analyze the computational efficiency of sorting and searching algorithms.
CO4: Implement Trees and Graphs and perform various operations on these data structures.
CO5: Identify alternative implementations of data structures with respect to their performance, to solve a real-world problem.
Introduction
Data is a collection of facts and figures or a set of values that
can be recorded.
Data Structure is a way of collecting and organizing data so that we can perform operations on it effectively.
Data Structure is a particular way of storing and organizing data in the memory of the computer so that it can be retrieved easily and utilized efficiently when required later.
The choice of a good data structure makes it possible to perform a variety of critical operations effectively.
An efficient data structure also uses minimum memory space and minimum execution time to process a task. The main goal is to reduce the space and time complexities of different tasks.
Why is Data Structure important?
Data structures are important because they are used in almost every program or software system.
Data Structures are necessary for designing efficient algorithms.
Using appropriate data structures can help programmers save a
good amount of time while performing operations such as storage,
retrieval, or processing of data.
They make manipulation of large amounts of data easier.
Data structures can help improve the performance of our code by
reducing the time and space complexity of algorithms.
Data structures can help you solve complex problems by breaking
them down into smaller, more manageable parts.
Data structures provide a way to abstract the underlying
complexity of a problem and simplify it, making it easier to
understand and solve.
Basic Types of Data Structures
As we have discussed above, anything that can store data can be called a data structure; hence Integer, Float, Boolean, Char, etc., are all data structures. They are known as Primitive Data Structures.
We also have some complex data structures, which are used to store large and connected data. Some examples of abstract data structures are:
Arrays
Linked List
Tree
Graph
Stack, Queue etc.
All these data structures allow us to perform different operations on
data. We select these data structures based on which type of
operation is required.
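For example, the contrast between a primitive type and a compound, linked structure can be sketched in C (the Node layout and helper function here are illustrative, not from the syllabus):

```c
#include <stddef.h>

/* An int is a primitive data structure: a single value.
   A linked-list node is a compound one: a value plus a link. */
struct Node {
    int data;            /* primitive value stored in the node */
    struct Node *next;   /* link to the next node, or NULL     */
};

/* Walks the list and returns the number of nodes. */
int list_length(const struct Node *head) {
    int n = 0;
    for (; head != NULL; head = head->next)
        n++;
    return n;
}
```

Primitive types hold one value each; the links between nodes are what make the compound structure able to represent connected data of arbitrary size.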
counter = 0;
for (i = 1; i <= n; i++)
{
    if (A[i] == 1)
        counter++;
    else {
        f(counter);
        counter = 0;
    }
}
Assuming each call f(counter) does work proportional to counter, the total work done by all the calls to f is bounded by the total number of times counter is incremented, which is at most n. Hence the complexity of this program fragment is Θ(n).
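Under a simple cost model (an assumption, since f is not defined in the fragment: one unit per loop iteration, plus counter units of work per call f(counter)), a runnable C sketch of the fragment shows the total work stays linear:

```c
/* Hypothetical cost model for the fragment above. */
static long ops = 0;

static void f(int counter) {
    ops += counter;   /* f assumed to do work proportional to counter */
}

/* Runs the fragment on a 0-indexed C array and returns the
   total number of charged operations. */
long count_ops(const int A[], int n) {
    ops = 0;
    int counter = 0;
    for (int i = 0; i < n; i++) {
        ops++;                 /* one unit for this iteration */
        if (A[i] == 1) {
            counter++;
        } else {
            f(counter);
            counter = 0;
        }
    }
    return ops;
}
```

Every unit charged inside f corresponds to an earlier increment of counter, so the total never exceeds 2n, which is linear, as claimed.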
Asymptotic Notations
When it comes to analyzing the complexity of any algorithm
in terms of time and space, we can never provide an exact
number to define the time required and the space required by
the algorithm, instead we express it using some standard
notations, also known as Asymptotic Notations.
When we analyse any algorithm, we generally get a formula
to represent the amount of time required for execution or the
time required by the computer to run the lines of the code,
number of memory accesses, number of comparisons,
temporary variables occupying memory space etc.
Asymptotic Notations allow us to analyze an algorithm's
running time by identifying its behavior as its input size
grows. This is also referred to as an algorithm's growth
rate.
Let us take an example: suppose some algorithm has a time complexity of T(n) = n² + 3n + 4, which is a quadratic expression. For large values of n, the 3n + 4 part becomes insignificant compared to the n² term.
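A quick numeric check (my own sketch, not part of the syllabus) shows the lower-order terms fading: the ratio T(n)/n² approaches 1 as n grows.

```c
/* Ratio of T(n) = n^2 + 3n + 4 to its dominant term n^2. */
double dominant_ratio(long n) {
    double t = (double)n * n + 3.0 * n + 4.0;
    return t / ((double)n * (double)n);
}
```

At n = 10 the ratio is 1.34, while at n = 1000 it is already below 1.01; the 3n + 4 part contributes almost nothing.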
Growth rate of functions
Logarithmic Function - log n
Linear Function - an + b
Quadratic Function - an² + bn + c
Polynomial Function - aₖnᵏ + . . . + a₂n² + a₁n + a₀, where k is some constant
Exponential Function - aⁿ, where a is some constant
GATE-CS-2021
Consider the following three functions.
f1 = 10ⁿ
f2 = n^(log n)
f3 = n^(√n)
Arrange the functions in increasing order of asymptotic growth rate:
f2 < f3 < f1
AKTU Questions
Define best case, average case and worst case for analyzing
the complexity of a program. [2022-23] [2 Marks]
Rank the following typical bounds in increasing order of
growth rate: O(log n), O(n4), O(1), O(n2 log n) [2021-22]
[2 Marks]
Define the following terms: (i) Time complexity (ii) Space
complexity (iii) Asymptotic notation (iv) Big O notation
[2019-20] [10 Marks]
What is Asymptotic Behavior?
The word Asymptotic means approaching a value or curve
arbitrarily closely (i.e., as some sort of limit is taken).
In asymptotic notation, we ignore the constant factors and the insignificant parts of an expression, to devise a better way of representing the complexity of an algorithm in a single term, so that comparison between algorithms can be done easily.
Let's take an example to understand this:
If we have two algorithms with the following expressions
representing the time required by them for execution:
Expression 1: (20n² + 3n - 4)
Expression 2: (n³ + 100n - 2)
Now, as per asymptotic notation, we only need to worry about how the function grows as the input n grows, and that depends entirely on n² for Expression 1 and on n³ for Expression 2. Hence, we can clearly say that the algorithm whose running time is represented by Expression 2 will grow faster than the other one, simply by looking at the highest power of n and ignoring the constant coefficient (20 in 20n²) and the insignificant parts of the expressions (3n - 4 and 100n - 2).
The main idea behind casting aside the less important part is to
make things manageable.
All we need to do is first analyze the algorithm to find an expression for its time requirement, and then analyze how that expression grows as the input n grows.
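The two expressions above can be compared for growing n with a small sketch (the exact crossover point is incidental):

```c
/* Running-time expressions from the example above. */
long expr1(long n) { return 20 * n * n + 3 * n - 4; }   /* 20n^2 + 3n - 4 */
long expr2(long n) { return n * n * n + 100 * n - 2; }  /* n^3 + 100n - 2 */
```

For small n (e.g., n = 10), expr2 is still the smaller of the two; by n = 100 the n³ term dominates and expr2 has pulled far ahead, exactly as the asymptotic comparison predicts.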
Types of Asymptotic Notations
We use three types of asymptotic notations to represent the
growth of any algorithm, as input increases:
Big Theta (Θ)
Big Oh(O)
Big Omega (Ω)
Upper Bounds: Big-O
This notation is known as the upper bound of the algorithm, or the worst case of an algorithm.
It tells us that the running time of the algorithm will never exceed a certain function, for any value of the input n beyond some threshold.
Consider the Linear Search algorithm, in which we traverse the array elements one by one to search for a given number.
In the worst case, starting from the front of the array, we find the element we are searching for at the end, which leads to a time complexity of n, where n is the total number of elements.
In the best case, the element we are searching for is the first element of the array, and the time complexity is constant.
So, the time complexity is O(n) in the worst case.
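A minimal C implementation of linear search, with its best and worst cases noted (a sketch; the names are illustrative):

```c
/* Returns the index of key in a[0..n-1], or -1 if it is absent.
   Best case:  key is at a[0]             -> 1 comparison, O(1).
   Worst case: key is last or not present -> n comparisons, O(n). */
int linear_search(const int a[], int n, int key) {
    for (int i = 0; i < n; i++) {
        if (a[i] == key)
            return i;
    }
    return -1;
}
```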
Let f(n) and g(n) be two nonnegative functions indicating the running time of two algorithms. We say g(n) provides an upper bound on f(n) if there exist some positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0. It is denoted as f(n) = Ο(g(n)).
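For instance (my own example, not from the text), f(n) = 3n + 2 is O(n) with c = 4 and n0 = 2, since 3n + 2 ≤ 4n for all n ≥ 2. A finite check like the one below illustrates the definition, though it cannot by itself prove the bound for all n:

```c
#include <stdbool.h>

/* Checks 0 <= f(n) <= c*g(n) for every n in [n0, limit]. */
bool upper_bound_holds(long (*f)(long), long (*g)(long),
                       long c, long n0, long limit) {
    for (long n = n0; n <= limit; n++) {
        if (f(n) < 0 || f(n) > c * g(n))
            return false;
    }
    return true;
}

long f_example(long n) { return 3 * n + 2; }   /* f(n) = 3n + 2 */
long g_example(long n) { return n; }           /* g(n) = n      */
```

Note that the same pair of functions fails the check when n0 = 1, because 3·1 + 2 = 5 > 4·1; the definition only requires the inequality to hold beyond some threshold n0.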
Lower Bounds: Omega
Big Omega notation is used to define the lower bound of an algorithm, or the best case of an algorithm.
It indicates the minimum time required by the algorithm over all input values, and therefore the best case of the algorithm.
In simple words, when we represent the time complexity of an algorithm in the form of big-Ω, we mean that the algorithm will take at least this much time to complete its execution; it can certainly take more time than this.
Let f(n) and g(n) be two nonnegative functions indicating the running time of two algorithms. We say the function g(n) is a lower bound of the function f(n) if there exist some positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0. It is denoted as f(n) = Ω(g(n)).
Tight Bounds: Theta
When we say tight bound, we mean that the Big-Θ notation pins down the range within which the actual execution time of the algorithm will lie.
For example, if for some algorithm the time complexity is represented by the expression 3n² + 5n, and we use the Big-Θ notation to represent this, then the time complexity would be Θ(n²), ignoring the constant coefficient and removing the insignificant part, which is 5n.
Here, in the example above, a complexity of Θ(n²) means that for sufficiently large n, the running time will remain between c1·n² and c2·n², where c1 and c2 are two constants, thereby tightly bounding the expression representing the growth of the algorithm.
Let f(n) and g(n) be two nonnegative functions indicating the running time of two algorithms. We say the function g(n) is a tight bound of the function f(n) if there exist some positive constants c1, c2, and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0. It is denoted as f(n) = Θ(g(n)).
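Continuing the 3n² + 5n example, the constants c1 = 3, c2 = 4 and n0 = 5 work (my choice of constants), since 3n² ≤ 3n² + 5n ≤ 4n² whenever 5n ≤ n², i.e., n ≥ 5. A finite check, illustrative rather than a proof:

```c
#include <stdbool.h>

/* Checks c1*g(n) <= f(n) <= c2*g(n) for every n in [n0, limit],
   with f(n) = 3n^2 + 5n and g(n) = n^2 from the example above. */
bool tight_bound_holds(long c1, long c2, long n0, long limit) {
    for (long n = n0; n <= limit; n++) {
        long f = 3 * n * n + 5 * n;
        long g = n * n;
        if (c1 * g > f || f > c2 * g)
            return false;
    }
    return true;
}
```

Starting the check at n0 = 1 instead fails, because at n = 1 we get f = 8 > 4·1² = 4; the threshold n0 = 5 is where the upper inequality first holds.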
ISRO CS 2015
int recursive(int n)
{
    if (n == 1)
        return 1;
    else
        return recursive(n - 1) + recursive(n - 1);
}
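Each call with n > 1 makes two further calls on n - 1, so the number of calls satisfies C(n) = 2·C(n - 1) + 1 with C(1) = 1, i.e., C(n) = 2ⁿ - 1, giving Θ(2ⁿ) running time (and a return value of 2ⁿ⁻¹). An instrumented sketch (the call counter is my addition):

```c
static long calls = 0;   /* counts every invocation of recursive() */

int recursive(int n) {
    calls++;
    if (n == 1)
        return 1;
    else
        return recursive(n - 1) + recursive(n - 1);
}
```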