
Jimma University

Faculty of Computing and Informatics

Chapter 1
Introduction to Data Structures and Algorithms

Jimma, Ethiopia.
Outline
 Introduction to Data Structures
 Abstract data Types
 Algorithms
 Properties of an algorithm
 Algorithm analysis concepts
 Complexity analysis
 Asymptotic Analysis

Overview of Data Structures

 A program is written in order to solve a problem. A solution to a problem


actually consists of two things:

 A way to organize the data

 Sequence of steps to solve the problem

 The way data are organized in a computer's memory is called a data structure, and the sequence of computational steps used to solve a problem is called an algorithm.

 Therefore, a program is nothing but data structures plus algorithms.

Program = Algorithm + Data structure


Cont...

 A data structure is an arrangement of data in a computer’s memory (or


sometimes on a disk).

 Data structures include arrays, linked lists, stacks, binary trees, and
hash tables.

 Algorithms manipulate the data in these structures in various ways, such


as searching for a particular data item and sorting the data.

 A data structure is the structural representation of the logical relationships between data elements.

 In other words, a data structure is a way of organizing data items by considering their relationships to each other.
Cont...
 Data structure mainly specifies the structured organization of data by providing access methods with the correct degree of associativity.
 It also affects the design of both the structural and functional aspects of a program.

 Data structures are the building blocks of a program; selecting an appropriate data structure helps the programmer design more efficient programs, which matters because the complexity and volume of the problems solved by computers are steadily increasing day by day.

 Programmers have to strive hard to solve these problems. If a problem is analyzed and divided into sub-problems, the task becomes much easier, i.e., divide, conquer, and combine.
Cont...
 Model
  defines an abstract view of the problem
  defines the properties of the problem

 Properties of the problem include

 The data which are affected and

 The operations that are involved in the problem.

 With abstraction you create a well-defined entity that can be properly handled. These
entities define the data structure of the program.
 An entity with the properties just described is called an Abstract Data Type (ADT).
Abstract Data Types (ADT)

 An ADT consists of an abstract data structure and operations. Put in


other terms, an ADT is an abstraction of a data structure.

 The ADT specifies:

 What can be stored in the Abstract Data Type

 What operations can be done on/by the Abstract Data Type.

 A data structure is a language construct that the programmer has defined


in order to implement an abstract data type.

 There are lots of formalized and standard Abstract data types such as
Stacks, Queues, Trees, etc.
Cont...

 An ADT is an abstraction of a data structure which provides only the interface to which the data structure must conform.

 The interface doesn’t give any specific details about how something
should be implemented or in what programming language.

 Abstraction is a process of classifying characteristics as relevant and


irrelevant for the particular purpose at hand and ignoring the irrelevant
ones.
 For example, if we are going to model employees of an organization:
  This ADT stores employees with their relevant attributes and discards irrelevant attributes.
  This ADT supports hiring, firing, retiring, … operations.
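A minimal C++ sketch of such an Employee ADT is given below; the class name, attributes, and operations are illustrative assumptions rather than a prescribed design:

    #include <string>
    using namespace std;

    // Illustrative Employee ADT: the interface exposes only the relevant
    // attributes and operations, while the storage details stay hidden.
    class Employee {
    public:
        Employee(const string& name, double salary)
            : name(name), salary(salary), active(false) {}

        // Operations supported by the ADT
        void hire()   { active = true; }
        void fire()   { active = false; }
        void retire() { active = false; }

        string getName()   const { return name; }
        double getSalary() const { return salary; }

    private:
        // Relevant attributes only; irrelevant details are ignored (abstraction).
        string name;
        double salary;
        bool active;
    };

    int main() {
        Employee e("Abebe", 9500.0);  // hypothetical employee for illustration
        e.hire();
        return 0;
    }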
Classification of Data Structure

 What is a data structure?
  It is basically a technique for organizing and storing different types of data items in computer memory.
  It also involves maintaining the logical relationships that exist between individual data elements.
  It can also be defined as a mathematical or logical model that describes a particular organization of data elements.

Cont...
1. Primitive data structure
2. Non-primitive data structure

Cont...
Primitive data structure:
 also known as basic data structures.
 predefined types of data, which are supported by the programming
language.
Non-Primitive data structure:
 are highly developed complex data structures.
 are not defined by the programming language, but are instead
created by the programmer.
 are responsible for organizing groups of homogeneous and heterogeneous data elements.
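For illustration (a small assumed sketch, not from the slides), the fragment below contrasts a primitive, language-defined type with a non-primitive structure defined by the programmer:

    #include <iostream>
    using namespace std;

    // Non-primitive: a linked-list node defined by the programmer,
    // built from primitive parts plus a pointer to another node.
    struct Node {
        int data;    // primitive member
        Node* next;  // the link that gives the structure its shape
    };

    int main() {
        int x = 42;                  // primitive: predefined by the language
        Node second = {7, nullptr};  // non-primitive: programmer-defined
        Node first  = {3, &second};

        cout << x << endl;
        for (Node* p = &first; p != nullptr; p = p->next)
            cout << p->data << " ";  // prints: 3 7
        cout << endl;
        return 0;
    }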

Cont...
 Non-Primitive data structure:
 Linear and non-linear data structures are the sub-classifications of non-primitive data structures.

 A linear data structure arranges the data in a sequence and follows some order.
 A non-linear data structure does not organize the data sequentially.
(Figures: classification of data structures into primitive and non-primitive, and into linear and non-linear types.)
Algorithm

 An algorithm can be thought of as a set of instructions that specifies how


to solve a particular problem.

 It is a well-defined computational procedure that takes some value or a set of values as input and produces some value or a set of values as output.

 An algorithm transforms data structures from one state to another state


in two ways:

 An algorithm may change the value held by a data structure

 An algorithm may change the data structure itself
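As a small illustrative sketch (the array and list values are assumptions, not from the slides), the first operation below changes a value held by a data structure, while the second changes the structure itself:

    #include <iostream>
    using namespace std;

    struct Node { int data; Node* next; };

    int main() {
        // 1) An algorithm may change the value held by a data structure.
        int a[3] = {5, 2, 9};
        a[1] = 7;                        // the array keeps its shape; only a value changes

        // 2) An algorithm may change the data structure itself.
        Node tail = {9, nullptr};
        Node head = {5, &tail};
        Node* inserted = new Node{7, head.next};
        head.next = inserted;            // the list grows from two nodes to three

        for (Node* p = &head; p != nullptr; p = p->next)
            cout << p->data << " ";      // prints: 5 7 9
        cout << endl;

        delete inserted;
        return 0;
    }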

Cont...

 An algorithm is a clearly specified set of simple instructions to be


followed to solve a problem.

 Once an algorithm is given for a problem and decided (somehow) to


be correct, an important step is to determine how much in the way of
resources, such as time or space, the algorithm will require.

 An algorithm that solves a problem but requires a year is hardly of any


use. Likewise, an algorithm that requires thousands of gigabytes of main
memory is not (currently) useful on most machines.

Properties of Algorithm

 Finiteness: Algorithm must complete after a finite number of steps.

 Definiteness: Each step must be clearly defined, having one and only
one interpretation. At each point in computation, one should be able to
tell exactly what happens next.

 Sequence: Each step must have a unique defined preceding and


succeeding step. The first step (start step) and last step (halt step) must
be clearly noted.

 Feasibility: It must be possible to perform each instruction.

 Correctness: It must compute the correct answer for all possible legal inputs.
Cont...

 Language Independence: It must not depend on any one programming


language.

 Completeness: It must solve the problem completely.

 Effectiveness: It must be possible to perform each step exactly and in a


finite amount of time.

 Efficiency: It must solve the problem with the least amount of computational resources, such as time and space.

 Generality: Algorithm should be valid on all possible inputs.

 Input/Output: There must be a specified number of input values, and one or more result values.
Algorithm Analysis Concepts

 The process of determining as precisely as possible how much of


various resources (such as time and memory) an algorithm consumes
when it executes.
 Refers to the process of determining the amount of computing time and
storage space required by different algorithms.
 A process of predicting the resource requirement of algorithms in a
given environment.
 To be able to choose the best algorithm for the problem at hand using
some scientific method; ways of analyzing them in terms of resource
requirement. Main resources are:
 Running Time
 Memory Usage
 Communication Bandwidth
 In order to learn more about an algorithm, we can "analyze" it.
Cont...
 But what can we analyze? We can
 determine the running time of a program as a function of its inputs;
 determine the total or maximum memory space needed for program
data;
 determine the total size of the program code;
 determine whether the program correctly computes the desired result;
 determine the complexity of the program--e.g., how easy is it to read,
understand, and modify; and,
 determine the robustness of the program--e.g., how well does it deal
with unexpected or erroneous inputs?

Cont...
 There are many factors that affect the running time of a program.
 Among these are the algorithm itself, the input data, and the
computer system used to run the program.
 The performance of a computer is determined by
the hardware:
• processor used (type and speed),
• memory available (cache and RAM), and
• disk available;
the programming language in which the algorithm is specified;
the language compiler/interpreter used; and
the computer operating system software.

Cont...

 What is the goal of analysis of algorithms?

 To compare algorithms mainly in terms of running time but also in terms of


other factors (e.g., memory requirements, programmer's effort etc.)

 What do we mean by running time analysis?

 Determine how running time increases as the size of the problem increases.

Complexity Analysis
 Is the systematic study of the cost of computation, measured either
 in time units
 in operations performed
 in the amount of storage space required
 The goal is to have a meaningful measure that permits comparison of
algorithms independent of operating platform.
 Complexity: A measure of the performance of an algorithm: An algorithm’s
performance depends on internal and external factors.

Internal factors – the algorithm's efficiency, in terms of:
• Time required to run
• Space (memory storage) required to run
External factors:
• Size of the input to the algorithm
• Speed of the computer on which it is run
• Quality of the compiler
Complexity Analysis
 Two things to consider:
 Time Complexity:

 Determine the approximate number of operations required to solve a


problem of size n.
 Space Complexity:
 Determine the approximate memory required to solve a problem of
size n.

Complexity Analysis
 Time Complexity:
 The time complexity of an algorithm or a program is the amount of time it
needs to run to completion.
 The exact time will depend on the implementation of the algorithm,
programming language, the optimizing capabilities of the compiler
used, the CPU speed, other hardware characteristics/specifications and
so on.
 To measure the time complexity accurately, we have to count all
sorts of operations performed in an algorithm.
If we know the time for each one of the primitive operations
performed in a given computer, we can easily compute the time
taken by an algorithm to complete its execution.
• This time will vary from machine to machine.

Complexity Analysis
 By analyzing an algorithm, it is hard to come up with the exact time
required.
 To find out exact time complexity, we need to know the exact
instructions executed by the hardware and the time required for the
instruction.
 The time complexity also depends on the amount of data inputted to
an algorithm.
But we can calculate the order of magnitude for the time
required.

Complexity Analysis
 SPACE COMPLEXITY
 Analysis of space complexity of an algorithm or program is the amount of
memory it needs to run to completion.
 Some of the reasons for studying space complexity are:
 If the program is to run on a multi-user system, it may be required to
specify the amount of memory to be allocated to the program.
 We may be interested to know in advance that whether sufficient
memory is available to run the program.
 There may be several possible solutions with different space
requirements.
 Can be used to estimate the size of the largest problem that a program
can solve.

Complexity Analysis
 When we analyze an algorithm, the result depends on the input data; there are three cases:

 Best case (minimum number of operations)

 Average case(average number of operations)

 Worst case(maximum number of operations)

 In the best case, the amount of time a program might be expected to take on best possible
input data.

 In the average case, the amount of time a program might be expected to take on typical (or
average) input data.

 In the worst case, the amount of time a program would take on the worst possible input
configuration.

 In order to determine the running time of an algorithm, it is possible to define three functions: Tbest(n), Tavg(n), and Tworst(n).


Complexity Analysis
Best Case Analysis
 Assumes that data are arranged in the most advantageous order.
 It also assumes the minimum input size.
 E.g.
 For sorting – the best case is if the data are arranged in the required order.
 For searching – the required item is found at the first position.
 Note: Best Case computes the lower boundary of T(n)
 It causes fewest number of executions
 Best Case (Tbest): The amount of time the algorithm takes on the smallest
possible set of inputs.

Complexity Analysis
Worst case analysis
 Assumes that data are arranged in the most disadvantageous order.
 It also assumes that the input size is infinite.
 E.g.
  For sorting – data are arranged in the opposite of the required order.
  For searching – the required item is found at the end of the list, or the item is missing.
 It computes the upper bound of T(n) and causes maximum number of
execution.
 Worst Case (Tworst): The amount of time the algorithm takes on the
worst possible set of inputs.

Complexity Analysis
Average case Analysis
 Assumes that data are found in random order.
 It also assumes random or average input size.
 E.g.
 For sorting – data are in random order.
 For searching – the required item is found at any position or missing.
 Average Case (Tavg): The amount of time the algorithm takes on an "average"
set of inputs.
 It also causes average number of execution.
 Best case and average case cannot be used to estimate (determine) the complexity of algorithms.
 Worst case is the best for determining the complexity of algorithms.
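To make the three cases concrete, here is a hedged sketch of linear search in C++ (the array contents and function name are assumptions): the best case finds the key at the first position, the worst case scans the whole array because the key is last or missing, and the average case falls in between.

    #include <iostream>
    using namespace std;

    // Linear search: returns the index of key in a[0..n-1], or -1 if it is absent.
    int linearSearch(const int a[], int n, int key) {
        for (int i = 0; i < n; i++)
            if (a[i] == key)
                return i;   // found after i+1 comparisons
        return -1;          // key missing: n comparisons were made
    }

    int main() {
        int a[] = {4, 8, 15, 16, 23, 42};
        int n = 6;

        cout << linearSearch(a, n, 4)  << endl;  // best case: found at the first position (1 comparison)
        cout << linearSearch(a, n, 23) << endl;  // typical/average case: found partway through
        cout << linearSearch(a, n, 99) << endl;  // worst case: item missing (n comparisons), returns -1
        return 0;
    }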

Complexity Analysis
 Benefits

 Provides a shorthand method to find (approximately) how much time an algorithm takes.

 It is a useful way of estimating the efficiency of an algorithm without having to measure actual run times.

 Gives the choice to select among the best algorithms available.
Complexity Analysis

 Complexity analysis involves two distinct phases:

 Algorithm Analysis: Analysis of the algorithm or data structure to produce


a function T (n) that describes the algorithm in terms of the operations
performed in order to measure the complexity of the algorithm.

 Order of Magnitude Analysis: Analysis of the function T (n) to determine


the general complexity category to which it belongs.

 There is no generally accepted set of rules for algorithm analysis. However, an


exact count of operations is commonly used.

Complexity Analysis
Analysis Rules:
1. Execution of one of the following operations takes time 1:
• Assignment Operation
• Single Input/output Operation
• Single Boolean Operations
• Single Arithmetic Operations
• Function Return
2. Running time of a selection statement (if, switch) is the time for the condition
evaluation + the maximum of the running times for the individual clauses in the
selection
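As a small worked illustration of rule 2 (an assumed example, not one of the slides' own), consider counting the time units of a simple if/else:

    #include <iostream>
    using namespace std;

    int main() {
        int x = -5, y = 0;

        // Running time by rule 2: 1 unit for the condition test plus the
        // maximum of the running times of the two branches.
        //   if-branch:   1 assignment                    = 1 unit
        //   else-branch: 1 multiplication + 1 assignment = 2 units
        // T = 1 + max(1, 2) = 3, which is O(1).
        if (x > 0)
            y = 1;
        else
            y = x * 2;

        cout << y << endl;
        return 0;
    }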

Complexity Analysis
3. Loops: Running time for a loop is equal to the running time for the statements
inside the loop * number of iterations
 Remark:
 The total running time of a statement inside a group of nested loops is
• The running time of the statements multiplied by the product of the
sizes of all the loops.
 For nested loops, analyze inside out.
 Always assume that the loop executes the maximum number of
iterations possible.
4. Running time of a function call is 1 for setup + the time for any parameter
calculations + the time required for the execution of the function body.

Example

    int count()
    {
        int k = 0;
        cout << "Enter an integer";
        cin >> n;
        for (i = 0; i < n; i++)
            k = k + 1;
        return 0;
    }

Time units to compute:
    1 for the assignment statement: int k = 0
    1 for the output statement
    1 for the input statement
    In the for loop: 1 assignment, n+1 tests, and n increments
    n loops of 2 units each, for an assignment and an addition
    1 for the return statement

T(n) = 1 + 1 + 1 + (1 + n + 1 + n) + 2n + 1 = 4n + 6 = O(n)
Example

    int total(int n)
    {
        int sum = 0;
        for (int i = 1; i <= n; i++)
            sum = sum + 1;
        return sum;
    }

Time units to compute:
    1 for the assignment statement: int sum = 0
    In the for loop: 1 assignment, n+1 tests, and n increments
    n loops of 2 units each, for an assignment and an addition
    1 for the return statement

T(n) = 1 + (1 + n + 1 + n) + 2n + 1 = 4n + 4 = O(n)
Example

    void func()
    {
        int x = 0;
        int i = 0;
        int j = 1;
        cout << "Enter an Integer value";
        cin >> n;
        while (i < n)
        {
            x++;
            i++;
        }
        while (j < n)
        {
            j++;
        }
    }

Time units to compute:
    1 for the first assignment statement: x = 0
    1 for the second assignment statement: i = 0
    1 for the third assignment statement: j = 1
    1 for the output statement
    1 for the input statement
    In the first while loop: n+1 tests, and n loops of 2 units for the two increment (addition) operations
    In the second while loop: n tests and n-1 increments

T(n) = 1 + 1 + 1 + 1 + 1 + (n + 1) + 2n + n + (n - 1) = 5n + 5 = O(n)
Example

    int sum(int n)
    {
        int partial_sum = 0;
        for (int i = 1; i <= n; i++)
            partial_sum = partial_sum + (i * i * i);
        return partial_sum;
    }

Time units to compute:
    1 for the assignment
    In the for loop: 1 assignment, n+1 tests, and n increments
    n loops of 4 units each, for an assignment, an addition, and two multiplications
    1 for the return statement

T(n) = 1 + (1 + n + 1 + n) + 4n + 1 = 6n + 4 = O(n)
Exercise
 What are the time units to compute for each of the following code fragments?

    1.  sum = 0;
        if (i == 0) {
            sum = n * (n + 1) / 2;
            cout << sum;
        } else {
            for (i = 1; i <= n; i++) {
                sum = sum + i;
            }
            cout << sum;
        }

    2.  k = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                k++;

    3.  for (i = 1; i < n; ) {
            i = i * 2;
        }
Asymptotic Analysis

 The term asymptotic means approaching a value or curve arbitrarily closely


(i.e., as some sort of limit is taken).

 Asymptotic analysis refers to computing the running time of any operation in


mathematical units of computation.

 Asymptotic analysis of an algorithm refers to defining the mathematical bound/framing of its run-time performance.

 Asymptotic Notations are languages that allow us to analyze an algorithm's


running time by identifying its behavior as the input size for the algorithm
increases. This is also known as an algorithm's growth rate.

 Using asymptotic analysis, we can very well conclude the best case, average case, and worst case scenarios of an algorithm.
Cont...

 Suppose we are considering two algorithms, A and B, for solving a given problem. Suppose we have analyzed the running times of the two algorithms and determined them to be TA(n) and TB(n), respectively, where n is a measure of the problem size.

 Then it should be a fairly simple matter to compare the two functions TA(n) and TB(n) to determine which algorithm is the better one!

 What exactly does it mean for one function, say TA(n), to be better than another function,

TB(n)?

 One possibility arises if we know the problem size a priori.

 For example, suppose the problem size is n0 and TA(n0) < TB(n0). Then clearly

algorithm A is better than algorithm B for problem size n0

 We consider the asymptotic behavior of the two functions for very large problem sizes.
Cont...
 If the size of the input is n, then f(n), a function of n, denotes the time complexity; f(n) represents the number of instructions executed for an input of size n.
 We can compare the data structure for a particular operation by comparing
their f(n) values.
 We are interested in the growth rate of f(n) with respect to n because it might be
possible that for smaller input size, one data structure may seem better than the
other but for larger input size it may not.
 This concept is applicable when comparing two algorithms as well.
 Examining the exact running time is not the best solution to calculate the time
complexity
 Measuring the actual running time is not practical at all
 The running time generally depends on the size of the input

Cont...
 Consider the two functions f(n) = n² and g(n) = n² – 3n + 2.
 Around n = 0, they look very different.
 Yet on the range n = [0, 1000], they are (relatively) indistinguishable.
 The absolute difference is large, for example,
      f(1000) = 1 000 000
      g(1000) = 997 002
   but the relative difference is very small:
      (f(1000) – g(1000)) / f(1000) ≈ 0.002998 ≈ 0.3%
   and this difference goes to zero as n → ∞.
Cont...
 To demonstrate with another example, consider f(n) = n⁶ and
   g(n) = n⁶ – 23n⁵ + 193n⁴ – 729n³ + 1206n² – 648n.
 Around n = 0, they are very different; still, around n = 1000, the relative difference is less than 3%.
 The justification for both pairs of polynomials being similar is that, in both cases, they each had the same leading term: n² in the first pair, n⁶ in the second.
 Suppose, however, that the coefficients of the leading terms were different. In that case, both functions would still exhibit the same rate of growth; however, one would always be proportionally larger.
Cont...
 Example: f(n) = 5n² + 6n + 12. Calculate the percentage of running time contributed by each term, for n = 1, 10, 100, and 1000:
      % running time of 5n² = 5n² / (5n² + 6n + 12) × 100, and similarly for 6n and 12.

         n          5n²        6n        12
         1        21.74 %   26.09 %   52.17 %
         10       87.41 %   10.49 %    2.09 %
         100      98.79 %    1.19 %    0.02 %
         1000     99.88 %    0.12 %    0.0002 %

 For large values of n, the 5n² term consumes most of the running time. Hence we can eliminate the remaining terms, as they do not contribute much to the time.
 Therefore the running time of f(n) = 5n² + 6n + 12 is approximately equal to 5n²:
      • f(n) ≈ 5n²
 Asymptotic behavior (as n gets large) is determined entirely by the leading term.
Cont...

 Asymptotic analysis is concerned with how the running time of an


algorithm increases with the size of the input in the limit, as the size of the
input increases without bound.

 There are five notations used to describe a running time function. These are:

  Big-Oh Notation (O)

  Big-Omega Notation (Ω)

  Theta Notation (Θ)

  Little-o Notation (o)

  Little-Omega Notation (ω)


Big-Oh Notation (O)
 Big-Oh notation is a way of comparing algorithms and is used for computing the complexity of algorithms, i.e., the amount of time that it takes for a computer program to run.
 It is only concerned with what happens for very large values of n.
 Therefore only the largest term in the expression (function) is needed.
 For example, if the number of operations in an algorithm is n² – n, then n is insignificant compared to n² for large values of n.
 Hence the n term is ignored. Of course, for small values of n, it may be important. However, Big-Oh is mainly concerned with large values of n.
 Talks about asymptotic upper bounds
 Used for worst-case analysis

 Formal Definition: f(n) = O(g(n)) if there exist constants c, k ∊ ℛ⁺ such that f(n) ≤ c·g(n) for all n ≥ k.
Cont…

 f(n) = 10n + 5 and g(n) = n. Show that f(n) is O(g(n)).
 To show that f(n) is O(g(n)), we must find constants c and k such that
      f(n) <= c·g(n) for all n >= k
      10n + 5 <= c·n for all n >= k   (try c = 15)
   Then we need to show that 10n + 5 <= 15n.
   Solving for n we get: 5 <= 5n, or 1 <= n.
   So f(n) = 10n + 5 <= 15·g(n) for all n >= 1 (c = 15, k = 1).
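As an optional sanity check (a small illustrative sketch, separate from the formal argument above), the inequality can be verified numerically over a range of n:

    #include <iostream>
    using namespace std;

    int main() {
        // Check 10n + 5 <= 15n for a range of n >= k, with c = 15 and k = 1.
        const int c = 15, k = 1;
        bool holds = true;
        for (int n = k; n <= 1000000; n++)
            if (10 * n + 5 > c * n) { holds = false; break; }
        cout << (holds ? "inequality holds" : "inequality fails") << endl;
        return 0;
    }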

Common growth rates, from slowest to fastest: 1 < log n < n < n log n < …
Cont...
 f(n) = 8n + 128. Show that f(n) = O(n²).
 To show that f(n) is O(g(n)), we must find constants c and k such that
      f(n) <= c·g(n) for all n >= k
      8n + 128 <= 1·n² for all n >= k   (try c = 1)
      8n + 128 <= n²  ⇒  0 <= n² − 8n − 128  ⇒  0 <= (n + 8)(n − 16)
 Since (n + 8) > 0 for all values of n >= 0, we need (n − 16) >= 0, that is n >= 16, so k = 16.
 So, for c = 1 and k = 16, f(n) <= c·g(n) for all integers n >= k. Hence, f(n) = O(n²).

Exercise
1. f(n) = (3/2)n² + (5/2)n − 3. Show that f(n) = O(n²).
2. f(n) = 3n² + 4n + 1. Show that f(n) = O(n²).
Cont...

 Demonstrating that a function f(n) is big-O of a function g(n) requires


that we find specific constants c and k for which the inequality holds
(and show that the inequality does in fact hold).

 Big-O expresses an upper bound on the growth rate of a function, for


sufficiently large values of n.

 An upper bound is the best algorithmic solution that has been found for
a problem.

Cont...
 Big-O Theorems
 Theorem 1: k is O(1)
 Theorem 2: A polynomial is O(the term containing the highest
power of n).
 Theorem 3: k·f(n) is O(f(n)); constant factors may be ignored.
 Theorem 4 (Transitivity): If f(n) is O(g(n)) and g(n) is O(h(n)), then f(n) is O(h(n)).
 Theorem 5: For any base b, logb(n) is O(log n).

Cont...
 The worst case is the best for determining the time complexity of algorithms. In other words, Big-O notation determines the upper bound of the function.
 What is the time complexity of the following program?

    void fun(int n) {
        int i, j, k;                       // constant time: O(1)
        for (i = 1; i <= n/3; i++) {       // about n/3 iterations: O(n)
            for (j = 1; j <= n; j += 4)    // about n/4 iterations: O(n)
                cout << "hello";
        }
    }

   Total time complexity: 1 + n·n (ignoring constant factors) = O(n²)
Cont...
 What is the time complexity of each of the following programs?

    // Program 1
    void fun(int n) {
        int i, j;
        for (i = 1; i <= n/3; i++) {
            for (j = 1; j <= n; j += 4) {
                cout << "Hello";
            }
        }
    }

    // Program 2
    void fun(int n) {
        int i, j, k, count = 0;
        for (i = n/2; i <= n; i++) {
            for (j = 1; j + n/2 <= n; j++) {
                for (k = 1; k <= n; k = k * 2) {
                    count++;
                }
            }
        }
    }
Big-Omega Notation (Ω)

 Just as O-notation provides an asymptotic upper bound on a function, Ω


notation provides an asymptotic lower bound.

 Formal Definition: A function f(n) is Ω( g (n)) if there exist constants c


and k ∊ ℛ+ such that f(n) >=c. g(n) for all n>=k.

 f(n)= Ω( g (n)) means that f(n) is greater than or equal to some


constant multiple of g(n) for all values of n greater than or equal to
some k.

 Example: If f(n) = n², then f(n) = Ω(n).

 In simple terms, f(n) = Ω(g(n)) means that the growth rate of f(n) is greater than or equal to that of g(n).
Cont...

 f(n) = 2n + 3. Show that f(n) = Ω(n).
      f(n) >= c·g(n)
      2n + 3 >= 1·n ; try c = 1, which holds for all n >= 1. Therefore, f(n) = Ω(n) (and also f(n) = Ω(log n)).

 f(n) = 5n² − 64n + 256. Show that f(n) = Ω(n²). To show this we need to find an integer n₀ and a constant c > 0 such that f(n) >= c·n² for all integers n >= n₀.
      f(n) = 5n² − 64n + 256 >= n², where c = 1
      4n² − 64n + 256 >= 0  ⇒  4(n − 8)² >= 0
   Since (n − 8)² >= 0 for all values of n >= 0, we conclude that n₀ = 0.
   So, for c = 1 and n₀ = 0, f(n) >= c·n² for all integers n >= n₀.
Theta Notation (Θ)

 A function f(n) belongs to the set Θ(g(n)) if there exist positive constants c1 and c2 such that it can be sandwiched between c1·g(n) and c2·g(n) for sufficiently large values of n.

 Formal Definition:

  A function f(n) is Θ(g(n)) if it is both O(g(n)) and Ω(g(n)). In other words, there exist constants c1, c2, and k > 0 such that c1·g(n) <= f(n) <= c2·g(n) for all n >= k.

  If f(n) = Θ(g(n)), then g(n) is an asymptotically tight bound for f(n). In simple terms, f(n) = Θ(g(n)) means that f(n) and g(n) have the same rate of growth.
Cont...
Show that:
 If f(n) = 2n + 1, then f(n) = Θ(n):
      c1·g(n) <= f(n) <= c2·g(n)
      c1·n <= 2n + 1 <= c2·n
      Take c1 = 1 and c2 = 3:
      n <= 2n + 1 <= 3n, where c1 = 1, c2 = 3, and n >= 1.

 Show that if f(n) = (1/2)n² − (1/2)n, then f(n) = Θ(n²).

 Show that if f(n) = 2n² + 3n + 6, then f(n) ≠ Θ(n³).

 Show that if f(n) = 3n³ + 6n + 7, then f(n) = Θ(n³).
Little-o Notation (o)
 f(n) = o(g(n)) means that for all c > 0 there exists some k > 0 such that f(n) < c·g(n) for all n >= k.
 Informally, f(n) = o(g(n)) means f(n) becomes insignificant relative to g(n) as n approaches infinity.

 Example: f(n) = 3n + 4 is o(n²).

 In simple terms, f(n) has a lower growth rate than g(n).
 For g(n) = 2n²: g(n) = o(n³) and g(n) = O(n²), but g(n) is not o(n²).
Little-Omega Notation (ω)

 Little-omega (ω) notation is to big-omega (Ω) notation as little-o notation is to Big-Oh notation.

 We use ω notation to denote a lower bound that is not asymptotically tight.

 Formal Definition: f(n) = ω(g(n)) if for all constants c > 0 there exists a constant k > 0 such that 0 <= c·g(n) < f(n) for all n >= k.

 Example: 2n² = ω(n), but it is not ω(n²).

The End !!
