# Data structure: The logical or mathematical model of a particular organization of data is

called a data structure.
Data structure operations:
1. Traversing: accessing each record exactly once so that certain items in the record
may be processed.
2. Searching: finding the location of a record with a given key value.
3. Inserting: adding a new record to the structure.
4. Deleting: removing a record from the structure.
5. Sorting: arranging the records in some logical order (ascending or descending).
6. Merging: combining the records in two different sorted files into a single sorted
file.
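These six operations can be illustrated on a Python list standing in for a file of records (an illustrative sketch, not part of the original notes; the records are plain integers):

```python
records = [30, 10, 20]

# Traversing: access each record exactly once.
for r in records:
    print(r)

# Searching: find the location of a record with a given key value.
location = records.index(20)        # position of key 20

# Inserting: add a new record to the structure.
records.append(40)                  # [30, 10, 20, 40]

# Deleting: remove a record from the structure.
records.remove(10)                  # [30, 20, 40]

# Sorting: arrange the records in ascending order.
records.sort()                      # [20, 30, 40]

# Merging: combine two sorted files into a single sorted file.
other = [15, 35]
merged = sorted(records + other)
print(merged)                       # [15, 20, 30, 35, 40]
```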
Types of data structure
Linear: a data structure is said to be linear if its elements form a sequence. There are two
ways of representing such linear structures in memory. One way is to have the linear
relationship between the elements represented by means of sequential memory
locations. Ex: array.
The other way is to have the linear relationship between the elements represented by
means of pointers. Ex: linked list.
Nonlinear: a data structure is said to be nonlinear if its elements do not form a
sequence. Ex: tree, graph.
Algorithm
An algorithm is a well-defined list of steps for solving a particular problem.
Properties
1. Input: an algorithm must receive some input data supplied externally.
2. Output: an algorithm must produce at least one output as the result.
3. Finiteness: the algorithm must terminate after a finite number of steps.
4. Definiteness: the steps to be performed in the algorithm must be clear and unambiguous.
5. Effectiveness: one must be able to perform the steps in the algorithm without applying
any intelligence.
Complexity
The complexity of an algorithm is a function which gives the running time and/or space in
terms of the input data size.
Analysis of algorithms
To analyze an algorithm is to determine the amount of resources (such as time and
storage) necessary to execute it. Most algorithms are designed to work with inputs of
arbitrary length. Usually the efficiency or complexity of an algorithm is stated as a
function relating the input length to the number of steps (time complexity) or storage
locations (space complexity).
While analyzing an algorithm, the time required to execute it is determined. This time is not
measured in seconds or any such unit. Instead it represents the number of
operations that are carried out while executing the algorithm.
While comparing two algorithms it is assumed that all other things, like the speed of the
computer and the language used, are the same for both algorithms.
While analyzing iterative algorithms we need to determine how many times the loop is
executed. To analyze a recursive algorithm one needs to determine the amount of work done
for three things: breaking down the large problem into smaller pieces, getting a solution
for each piece, and combining the individual solutions to get the solution to the whole
problem.
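Counting loop executions for an iterative algorithm can be sketched directly; a minimal illustration (the function name is hypothetical) that counts how often the body of a doubly nested loop runs:

```python
def count_loop_operations(n):
    """Count how many times the innermost statement of a
    doubly nested loop executes for an input of size n."""
    operations = 0
    for i in range(n):
        for j in range(n):
            operations += 1  # one unit of work per iteration
    return operations

# The body runs n * n times, so the running time grows as n^2.
print(count_loop_operations(10))  # -> 100
```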
Cases to consider during analysis
Choosing the input to consider when analyzing an algorithm can have a significant
impact on how the algorithm will perform. Multiple input sets must be considered while
analyzing an algorithm. These include the following:
1. Best case input: this represents the input set that allows an algorithm to perform most
quickly. With this input the algorithm takes the shortest time to execute, as it causes the
algorithm to do the least amount of work.
2. Worst case input: this represents the input set that allows an algorithm to perform
most slowly. Worst case is an important analysis because it gives us an idea of
the most time an algorithm will ever take.
3. Average case input: this represents the input set that allows an algorithm to deliver
average performance. Doing average case analysis is a four-step process. These
steps are given under:
(a) Determine the number of different groups into which all possible input sets
can be divided.
(b) Determine the probability that the input will come from each of these
groups.
(c) Determine how long the algorithm will run for each of these groups.
(d) Calculate the average case time using the formula
A(n) = p1*t1 + p2*t2 + p3*t3 + ... + pm*tm
where n = size of the input, m = number of groups, pi = probability that the
input will be from group i, and ti = time that the algorithm takes for input from
group i.
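The formula above can be computed directly; a minimal sketch, assuming the group probabilities and per-group times are already known:

```python
def average_case_time(probabilities, times):
    """Average case time A(n) = p1*t1 + p2*t2 + ... + pm*tm.

    probabilities[i] is the probability the input falls in group i,
    times[i] is the running time for inputs from group i."""
    assert len(probabilities) == len(times)
    assert abs(sum(probabilities) - 1.0) < 1e-9  # probabilities must sum to 1
    return sum(p * t for p, t in zip(probabilities, times))

# Example: successful linear search over n = 4 elements, each position
# equally likely; the search takes i + 1 comparisons for position i.
print(average_case_time([0.25, 0.25, 0.25, 0.25], [1, 2, 3, 4]))  # -> 2.5
```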
Rate of growth
While doing the analysis of an algorithm, more than the exact number of operations performed
by the algorithm, it is the rate of increase in operations as the size of the problem
increases that is of more importance. This is often called the rate of growth of the
algorithm.
Asymptotic notation
O-notation
O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n)
≤ c*g(n) for all n ≥ n0 }.
g(n) is an asymptotic upper bound for f(n).
If f(n) Є O(g(n)), we write f(n) = O(g(n)).
Example: 2n^2 = O(n^3), with c = 1 and n0 = 2.
Examples of functions in O(n^2):
n^2
n^2 + n
n^2 + 1000n
1000n^2 + 1000n
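The example bound 2n^2 = O(n^3) with c = 1 and n0 = 2 can be checked numerically; a small sketch (a finite spot-check over a range of n, not a proof):

```python
def holds_upper_bound(f, g, c, n0, limit=1000):
    """Check 0 <= f(n) <= c*g(n) for all n0 <= n <= limit."""
    return all(0 <= f(n) <= c * g(n) for n in range(n0, limit + 1))

# 2n^2 = O(n^3) with c = 1 and n0 = 2:
print(holds_upper_bound(lambda n: 2 * n**2, lambda n: n**3, c=1, n0=2))  # -> True

# The bound fails at n = 1 (2*1^2 = 2 > 1^3 = 1), which is why n0 = 2:
print(holds_upper_bound(lambda n: 2 * n**2, lambda n: n**3, c=1, n0=1))  # -> False
```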
Ω-notation
Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that
0 ≤ c*g(n) ≤ f(n) for all n ≥ n0 }.
g(n) is an asymptotic lower bound for f(n).
Examples of functions in Ω(n^2):
n^2
n^2 + n
1000n^2 + 1000n
Θ-notation
Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 such that 0
≤ c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0 }.
g(n) is an asymptotically tight bound for f(n).
Example: n^2/2 − 2n = Θ(n^2), with c1 = 1/4, c2 = 1/2, and n0 = 8.
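The Θ example can be spot-checked the same way; a sketch verifying c1*n^2 ≤ n^2/2 − 2n ≤ c2*n^2 over a finite range of n (a numeric check, not a proof):

```python
def holds_tight_bound(f, g, c1, c2, n0, limit=1000):
    """Check 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n0 <= n <= limit."""
    return all(0 <= c1 * g(n) <= f(n) <= c2 * g(n)
               for n in range(n0, limit + 1))

# n^2/2 - 2n = Theta(n^2) with c1 = 1/4, c2 = 1/2, n0 = 8
# (at n = 8 the lower bound holds with equality: 16 <= 16 <= 32):
print(holds_tight_bound(lambda n: n**2 / 2 - 2 * n, lambda n: n**2,
                        c1=0.25, c2=0.5, n0=8))  # -> True
```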
Designing algorithms
There are many ways to design algorithms.
For example, insertion sort is incremental: having sorted A[1 . . j−1], place A[j]
correctly, so that A[1 . . j] is sorted.
• Divide and conquer
Another common approach.
Divide the problem into a number of subproblems.
Conquer the subproblems by solving them recursively.
Base case: if the subproblems are small enough, just solve them by brute force.
Combine the subproblem solutions to give a solution to the original problem.
An example of divide and conquer is merge sort. Sorting can be done on each segment of
the data after dividing it into segments, and the sorting of the entire data is obtained in the
combine phase by merging the segments.
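The divide, conquer, and combine steps of merge sort can be sketched as:

```python
def merge(left, right):
    """Combine: merge two sorted lists into one sorted list."""
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])   # append whichever half still has elements
    result.extend(right[j:])
    return result

def merge_sort(a):
    """Sort a list by divide and conquer."""
    if len(a) <= 1:                # base case: already sorted
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])     # divide + conquer the left half
    right = merge_sort(a[mid:])    # divide + conquer the right half
    return merge(left, right)      # combine the two sorted halves

print(merge_sort([5, 2, 4, 7, 1, 3]))  # -> [1, 2, 3, 4, 5, 7]
```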
• Dynamic programming
When a problem shows optimal substructure, meaning the optimal solution to a
problem can be constructed from optimal solutions to subproblems, and overlapping
subproblems, meaning the same subproblems are used to solve many different
problem instances, a quicker approach called dynamic programming avoids
recomputing solutions that have already been computed. For example, the shortest
path to a goal from a vertex in a weighted graph can be found by using the shortest
path to the goal from all adjacent vertices.
The main difference between dynamic programming and divide and conquer is that
subproblems are more or less independent in divide and conquer, whereas
subproblems overlap in dynamic programming.
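A standard illustration of overlapping subproblems (an added example, not from the notes above) is computing Fibonacci numbers: the naive recursion solves the same subproblems over and over, while caching each answer makes every subproblem be solved only once:

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # memoize: cache each subproblem's answer
def fib(n):
    """nth Fibonacci number. Without the cache, fib(n-1) and fib(n-2)
    would recompute the same overlapping subproblems exponentially
    many times; with it, each fib(k) is computed once."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # -> 832040
```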
• The greedy method
A greedy algorithm is similar to a dynamic programming algorithm, but the
difference is that solutions to the subproblems do not have to be known at each stage;
instead a "greedy" choice can be made of what looks best for the moment. The greedy
method extends the solution with the best possible decision (not all feasible decisions)
at an algorithmic stage, based on the current local optimum and the best decision (not
all possible decisions) made in a previous stage.
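A classic greedy example (an added illustration, not from the notes) is making change with the fewest coins by always taking the largest coin that still fits. Note the greedy choice happens to be optimal for canonical coin systems such as {25, 10, 5, 1}, but not for every coin set:

```python
def greedy_change(amount, coins=(25, 10, 5, 1)):
    """Make change greedily: at each stage take the largest coin that
    still fits, without reconsidering earlier choices."""
    result = []
    for coin in coins:          # coins must be listed in descending order
        while amount >= coin:
            result.append(coin)
            amount -= coin
    return result

print(greedy_change(63))  # -> [25, 25, 10, 1, 1, 1]
```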

Top-down and bottom-up design
Top-down and bottom-up are strategies of information processing.
A top-down approach is essentially breaking down a system to gain insight into its
compositional sub-systems. In a top-down approach an overview of the system is first
formulated, specifying but not detailing any first-level subsystems. Each subsystem is
then refined in yet greater detail, sometimes in many additional subsystem levels, until
the entire specification is reduced to base elements.
• Separating the low-level work from the higher-level abstractions leads to a
modular design.
• Modular design means development can be self-contained.
• Fewer operational errors.
• Much less time consuming (each programmer is only involved in a part of the big
project).
• Very optimized way of processing (each programmer has to apply their own
knowledge and experience to their parts (modules), so the project will become an
optimized one).
• Easy to maintain (if an error occurs in the output, it is easy to identify which
module of the entire program generated the error).
In a bottom-up approach the individual base elements of the system are first specified in
great detail. These elements are then linked together to form larger subsystems, which
are then in turn linked, sometimes in many levels, until a complete top-level system is
formed. This strategy often resembles a "seed" model, whereby the beginnings are small
but eventually grow in complexity and completeness.
Insertion sort
1. Set A[0] := -∞ (a sentinel smaller than every key).
2. Repeat steps 3 to 5 for K = 2 to N:
3. Set TEMP := A[K] and I := K-1.
4. Repeat while TEMP < A[I]:
(a) Set A[I+1] := A[I].
(b) Set I := I-1.
5. Set A[I+1] := TEMP.
6. Return.
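The steps above translate directly into Python; a sketch using 0-based indexing and an explicit index check in place of the -∞ sentinel at A[0]:

```python
def insertion_sort(a):
    """Sort list a in place, mirroring the numbered steps above."""
    for k in range(1, len(a)):         # step 2: K = 2 to N (0-based: 1..N-1)
        temp = a[k]                    # step 3: pick up the next key
        i = k - 1
        while i >= 0 and temp < a[i]:  # step 4: shift larger keys right
            a[i + 1] = a[i]            # step 4(a)
            i -= 1                     # step 4(b)
        a[i + 1] = temp                # step 5: drop the key into place
    return a

print(insertion_sort([5, 2, 4, 7, 1, 3]))  # -> [1, 2, 3, 4, 5, 7]
```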
Linear search
1.