
Data Structure

UNIT – 1: Introduction to Data Structure

Prepared by: Krupa Trivedi, Assistant Professor, SOCCA


What Is Data Structure

A data structure is a way to store and organize data so that it can be used efficiently. Some examples of data structures are arrays, linked lists, stacks, queues, etc.
Types Of Data Structure

• There are two types of data structures:

1. Primitive data structure
2. Non-primitive data structure
Primitive data structure
1) A primitive data structure stores data of only one type.
2) Examples of primitive data structures are integer, character, float, etc.
3) Its type name starts with a lowercase character.
4) A primitive data structure will always contain some value (it cannot be null).
5) Its size depends on the data type.

Non-primitive data structure
1) A non-primitive data structure can store data of more than one type.
2) Examples of non-primitive data structures are array, linked list, stack, and queue.
3) Its type name starts with an uppercase character.
4) A non-primitive data structure can contain a null value.
5) Its size is not fixed.
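In C-style terms the comparison above contrasts single-type primitives with container types; a rough Python sketch of the same idea (variable names are illustrative, and Python's own type system is looser than the table suggests):

```python
# Primitive values: each variable holds data of a single type.
count = 10          # integer
grade = 'A'         # character (a one-letter string in Python)
price = 99.5        # float

# Non-primitive structures: can hold data of more than one type,
# and may be empty (the analogue of a null value).
record = [10, 'A', 99.5]   # list acting as an array
empty = []                 # a structure with no contents

print(type(count).__name__, type(record).__name__, len(empty))
```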
Non-Primitive Data Structure

The non-primitive data structure is divided into two types:
1. Linear data structure
2. Non-linear data structure
Linear data structure
1) In a linear data structure, data elements are arranged in a linear (sequential) order.
2) Each element is attached to its previous and next adjacent elements.
3) A single level is involved.
4) It is easy to implement.
5) Examples: array, linked list, stack, queue.

Non-linear data structure
1) In a non-linear data structure, data elements are not in a sequence (non-ordered).
2) Data elements are attached in a hierarchical manner.
3) Multiple levels are involved.
4) It is complex to implement.
5) Examples: tree, graph.
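The contrast can be sketched with two minimal node classes (a hypothetical illustration, not from the slides): a linear structure links each element to one successor, while a non-linear structure lets an element link to several children.

```python
# Linear: each element links to one next element (single level).
class ListNode:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

# Non-linear: an element may link to several children (hierarchy).
class TreeNode:
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []

# 1 -> 2 -> 3 : a sequence on a single level
chain = ListNode(1, ListNode(2, ListNode(3)))

# 1 with children 2 and 3 : two levels
root = TreeNode(1, [TreeNode(2), TreeNode(3)])
```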
• Linear Data structure is divided into two
types:

• Static data structure: It is a type of data


structure where the size is allocated at the
compile time. Therefore, the maximum size is
fixed.
• Dynamic data structure: It is a type of data
structure where the size is allocated at the run
time. Therefore, the maximum size is flexible.
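A minimal Python sketch of the two kinds, assuming a preallocated list stands in for a C-style fixed-size array allocated at compile time:

```python
# Static: capacity decided up front; the maximum size is fixed.
static_buffer = [0] * 10       # only indices 0..9 may be used
static_buffer[0] = 42

# Dynamic: size grows at run time; the maximum size is flexible.
dynamic_list = []
for i in range(15):            # no capacity was declared in advance
    dynamic_list.append(i)

print(len(static_buffer), len(dynamic_list))   # -> 10 15
```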
Major Operations Of Data Structure
The major or the common operations that can be
performed on the data structures are:

• Searching: We can search for any element in a data


structure.
• Sorting: We can sort the elements of a data structure
either in an ascending or descending order.
• Insertion: We can also insert the new element in a data
structure.
• Updation: We can also update the element, i.e., we can
replace the element with another element.
• Deletion: We can also perform the delete operation to
remove the element from the data structure.
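The five operations above can be demonstrated on a Python list (the values are arbitrary):

```python
data = [30, 10, 50, 20]

# Searching: find the position of an element.
pos = data.index(50)           # -> 2

# Sorting: arrange the elements in ascending order.
data.sort()                    # data is now [10, 20, 30, 50]

# Insertion: add a new element.
data.append(40)                # [10, 20, 30, 50, 40]

# Updation: replace an element with another element.
data[0] = 15                   # [15, 20, 30, 50, 40]

# Deletion: remove an element from the structure.
data.remove(50)                # [15, 20, 30, 40]
```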
Advantages Of Data Structure
• Data structures help in efficient storage of data in the storage device.
• Data structures provide convenience while retrieving data from the storage device.
• Data structures provide effective and efficient processing of small as well as large amounts of data.
• Usage of a proper data structure can save the programmer a lot of processing time during operations such as storage, retrieval, or processing of data.
• Manipulation of large amounts of data can be carried out easily with a good data structure approach.
• Most well-organized data structures, such as arrays, stacks, queues, graphs, trees, and linked lists, have well-built and pre-planned approaches for operations like storage, addition, retrieval, manipulation, and deletion. While using them, the programmer can rely completely on these data structures.
• Data structure usage can also encourage reusability in the long run.
Performance analysis and
measurement
• Performance analysis helps us to select the best algorithm from multiple algorithms that solve a problem.
• When there are multiple alternative algorithms to solve a problem, we analyze them and pick the one that is best suited to our requirements.

Definition:
• Performance analysis of an algorithm is the process of making evaluative judgements about algorithms.
Generally, the performance of an algorithm depends on the following elements:

• Whether that algorithm is providing the exact solution for


the problem?
• Whether it is easy to understand?
• Whether it is easy to implement?
• How much space(memory) it requires to solve the problem?
• How much time it takes to solve the problem?
Algorithm
• An algorithm is a process or a set of rules required to
perform calculations or some other problem-solving
operations especially by a computer.
• The formal definition of an algorithm is that it contains
the finite set of instructions which are being carried in
a specific order to perform the specific task.
• It is not the complete program or code.
• It is just the solution (logic) to a problem, which can be represented as an informal description using a flowchart or pseudocode.
Characteristics of an Algorithm
• The following are the characteristics of an algorithm:

• Input: An algorithm has some input values. We can pass zero or more input values to an algorithm.
• Output: We will get one or more outputs at the end of an algorithm.
• Unambiguity: An algorithm should be unambiguous which means
that the instructions in an algorithm should be clear and simple.
• Finiteness: An algorithm should have finiteness. Here, finiteness
means that the algorithm should contain a limited number of
instructions, i.e., the instructions should be countable.
• Effectiveness: An algorithm should be effective as each instruction
in an algorithm affects the overall process.
• Language independent: An algorithm must be language-
independent so that the instructions in an algorithm can be
implemented in any of the languages with the same output.
Dataflow of an Algorithm
• Problem: A problem can be a real-world problem or any
instance from the real-world problem for which we
need to create a program or the set of instructions. The
set of instructions is known as an algorithm.
• Algorithm: An algorithm will be designed for a problem
which is a step by step procedure.
• Input: After designing an algorithm, the required and
the desired inputs are provided to the algorithm.
• Processing unit: The input will be given to the
processing unit, and the processing unit will produce
the desired output.
• Output: The output is the outcome or the result of the
program.
We need algorithms because of the following
reasons:-
• Scalability: Algorithms help us to understand scalability. When we have a big real-world problem, we need to break it down into small steps so that we can analyze it easily.
• Performance: A real-world problem is not always easily broken down into smaller steps. If the problem can be broken into smaller steps, that indicates the problem is feasible.
Example Of Algorithm
• The following are the steps required to add two
numbers entered by the user:
• Step 1: Start
• Step 2: Declare three variables a, b, and sum.
• Step 3: Enter the values of a and b.
• Step 4: Add the values of a and b and store the
result in the sum variable, i.e., sum=a+b.
• Step 5: Print sum
• Step 6: Stop
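The steps above can be sketched in Python, assuming fixed values in place of user input so that the result is reproducible:

```python
# Step 2: declare the variables (Python creates them on assignment).
a = 7
b = 5

# Step 4: add the values of a and b and store the result.
# 'total' is used instead of 'sum', which is a Python built-in.
total = a + b

# Step 5: print the result.
print(total)   # -> 12
```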
Factors of an Algorithm
• The following are the factors that we need to consider for
designing an algorithm:
• Modularity: If any problem is given and we can break that
problem into small-small modules or small-small steps,
which is a basic definition of an algorithm, it means that this
feature has been perfectly designed for the algorithm.
• Correctness: The correctness of an algorithm is defined as when the given inputs produce the desired output, which means that the algorithm has been designed correctly and its analysis has been done correctly.
• Maintainability: Here, maintainability means that the
algorithm should be designed in a very simple structured
way so that when we redefine the algorithm, no major
change will be done in the algorithm.
• Functionality: It considers various logical steps to solve the
real-world problem.
• Robustness: Robustness means how clearly an
algorithm can define our problem.
• User-friendly: If the algorithm is not user-
friendly, then the designer will not be able to
explain it to the programmer.
• Simplicity: If the algorithm is simple then it is
easy to understand.
• Extensibility: If any other algorithm designer
or programmer wants to use your algorithm
then it should be extensible.
Algorithm Analysis
The efficiency of an algorithm can be analyzed at two different stages, before implementation and after implementation, as follows:
• A priori analysis − This is defined as theoretical
analysis of an algorithm. Efficiency of
algorithm is measured by assuming that all
other factors e.g. speed of processor, are
constant and have no effect on
implementation.
• A posteriori analysis − This is defined as
empirical analysis of an algorithm. The chosen
algorithm is implemented using programming
language. Next the chosen algorithm is
executed on target computer machine. In this
analysis, actual statistics like running time and
space needed are collected.
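A posteriori analysis can be sketched in Python by implementing an algorithm and collecting its actual running time; `time.perf_counter` is used here as the timing statistic (an illustrative sketch, not a rigorous benchmark):

```python
import time

def linear_sum(n):
    """Add the integers 0..n-1 with a simple loop."""
    total = 0
    for i in range(n):
        total += i
    return total

# Empirical measurement: execute on the target machine and time it.
start = time.perf_counter()
result = linear_sum(100_000)
elapsed = time.perf_counter() - start
print(f"sum = {result}, time = {elapsed:.6f} s")
```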
• Algorithm analysis deals with the execution
or running time of the various operations
involved. The running time of an operation can be
defined as the number of computer instructions
executed per operation.
• Algorithm Complexity
• Suppose X is an algorithm and N is the size of the
input data. The time and space used by
algorithm X are the two main factors which
determine the efficiency of X.
• Time Factor − The time is calculated or measured by
counting the number of key operations such as
comparisons in sorting algorithm.
• Space Factor − The space is calculated or measured by
counting the maximum memory space required by the
algorithm.
• The complexity of an algorithm f(N) provides the
running time and/or storage space needed by the
algorithm with respect to N as the size of the input data.
• Space Complexity
• The space complexity of an algorithm represents the
amount of memory space needed by the algorithm
in its life cycle.
• Space needed by an algorithm is equal to the sum
of the following two components:-
-> Fixed part
-> Variable part
• A fixed part is the space required to store
certain data and variables (i.e. simple variables
and constants, program size, etc.) that are
independent of the size of the problem.
• A variable part is a space required by
variables, whose size is totally dependent on
the size of the problem. For example,
recursion stack space, dynamic memory
allocation etc.
• The space complexity S(p) of any algorithm p is
S(p) = A + Sp(I), where A is treated as the fixed
part and Sp(I) is treated as the variable part of
the algorithm, which depends on instance
characteristic I. Following is a simple example
that tries to explain the concept.
• Algorithm
• SUM(P, Q)
• Step 1 - START
• Step 2 - R ← P + Q + 10
• Step 3 – Stop
• Here we have three variables (P, Q, and R) and
one constant (10). Hence S(p) = 3 + 1 = 4. The actual
space depends on the data types of the given
constant and variables, and it will be multiplied
accordingly.
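The variable part of space complexity can be illustrated with recursion, where the stack space grows with the problem size (a hypothetical example, not from the slides):

```python
def depth_sum(n):
    """Sum 1..n recursively; each call adds a stack frame,
    so the variable part of the space grows with n."""
    if n == 0:
        return 0
    return n + depth_sum(n - 1)

# The fixed part (the code, the constant 0) stays the same,
# but this call uses about 100 stack frames.
print(depth_sum(100))   # -> 5050
```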
• Time Complexity
• Time Complexity of an algorithm is the
representation of the amount of time required by
the algorithm to execute to completion. Time
requirements can be denoted or defined as a
numerical function t(N), where t(N) can be
measured as the number of steps, provided each
step takes constant time.
• For example, in the case of addition of two N-bit
integers, N steps are taken. Consequently, the total
computational time is t(N) = c*N, where c is the
time consumed for the addition of two bits. Here, we
observe that t(N) grows linearly as the input size
increases.
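The linear growth of t(N) can be illustrated by counting loop steps directly, with each loop iteration standing in for one constant-time step (an illustrative sketch):

```python
def count_steps(n):
    """Count the constant-time steps taken to add the numbers 0..n-1."""
    steps = 0
    total = 0
    for i in range(n):
        total += i
        steps += 1
    return steps

# t(N) grows linearly: doubling the input doubles the step count.
print(count_steps(10), count_steps(20))   # -> 10 20
```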
Asymptotic Analysis Of Algorithm
• Asymptotic analysis of an algorithm refers to
defining the mathematical framing of its run-
time performance.
• Using asymptotic analysis, we can very well
conclude the best case, average case, and
worst case scenario of an algorithm.
• Usually, the time required by an algorithm falls
under three types −
• Best Case − Minimum time required for program
execution.
• Average Case − Average time required for
program execution.
• Worst Case − Maximum time required for
program execution.
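The three cases can be illustrated with a linear search that also counts its comparisons (an illustrative sketch): the minimum, average, and maximum comparison counts correspond to the best, average, and worst cases.

```python
def linear_search(items, target):
    """Return (index, comparisons) for a simple linear search."""
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

data = [4, 8, 15, 16, 23, 42]

# Best case: the target is the first element -- 1 comparison.
print(linear_search(data, 4))    # -> (0, 1)

# Worst case: the target is absent -- all 6 elements are compared.
print(linear_search(data, 99))   # -> (-1, 6)
```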
• Asymptotic Notations
Following are the commonly used asymptotic notations to
calculate the running time complexity of an algorithm.
1. Ο Notation
2. Ω Notation
3. θ Notation

1. Big Oh Notation, Ο
• The notation Ο(n) is the formal way to express the upper
bound of an algorithm's running time. It measures the
worst case time complexity or the longest amount of time
an algorithm can possibly take to complete.
2. Omega Notation, Ω
• The notation Ω(n) is the formal way to express
the lower bound of an algorithm's running time.
It measures the best case time complexity or the
minimum amount of time an algorithm can possibly
take to complete.
3. Theta Notation, θ
• The notation θ(n) is the formal way to express
both the lower bound and the upper bound of
an algorithm's running time.
