
Graduate Studies Program

Term: Fall 2022/2023

Computing

Lecture 1
Introduction to Algorithms
(Draft)

These lecture notes have been compiled from different resources;
I'd like to thank all the authors who made them available.

Lecture Outline

✓ Introduction to algorithms.

✓ Analysis of algorithms.

✓ Asymptotic Notations.

Introduction to Algorithms
◼ An algorithm is a step-by-step procedure that defines a set of
instructions to be executed in a certain order to get the desired
output.

◼ An algorithm is a set of steps or operations to solve a problem by
performing calculation, data processing, and automated reasoning
tasks.

◼ An algorithm is an efficient method that can be expressed within a
finite amount of time and space.

◼ Algorithms help us to understand scalability.

◼ Performance often draws the line between what is feasible and
what is impossible.

Characteristics of Algorithm
◼ An algorithm should have the following characteristics:
◼ Definiteness: Algorithm should be clear and unambiguous. Each
of its steps (or phases), and their inputs/outputs should be clear
and must lead to only one meaning.
◼ Input: An algorithm should have 0 or more well-defined inputs.
◼ Output: An algorithm should have 1 or more well-defined
outputs, and should match the desired output.
◼ Finiteness: Algorithms must terminate after a finite number of
steps.
◼ Feasibility: Should be feasible with the available resources.
◼ Independent: An algorithm should have step-by-step directions,
which should be independent of any programming code.
◼ Uniqueness: An algorithm must have a unique name.
◼ Effectiveness: Every step must be basic enough to be carried out
exactly and in a finite length of time.

Algorithm Design
◼ The important aspects of algorithm design include:
◼ Creating an efficient algorithm that solves the problem using
minimum time and space.
◼ To solve a problem, different approaches can be followed.
Some of them can be efficient with respect to time
consumption, whereas other approaches may be memory
efficient.
◼ However, time consumption and memory usage often cannot both
be optimized simultaneously.
◼ If we require an algorithm to run in less time, we may have to
invest in more memory; if we require an algorithm to run with
less memory, we may need to allow more time.
◼ Most algorithms are designed to work with inputs of arbitrary
length.

Problem Development Steps
The following steps are involved in solving computational
problems:
1. Problem definition

2. Specification of an Algorithm
3. Designing an Algorithm
4. Checking the correctness of an Algorithm
5. Analysis of an Algorithm
6. Implementation of an Algorithm
7. Program testing
8. Documentation

Algorithm Analysis Criteria
1. Time: (the time function) the algorithm must be efficient.
2. Space: how much memory space the algorithm will consume.
3. Data transfer: relevant for Internet- or cloud-based applications.
4. Power consumption: relevant for ubiquitous systems.
5. CPU registers: how many registers the algorithm will use.

Pseudocode
◼ Pseudocode gives a high-level description of an algorithm without
the need to know the syntax of a particular programming
language.
◼ The running time can be estimated in a more general manner by
using Pseudocode to represent the algorithm as a set of
fundamental operations which can then be counted.
◼ Difference between Algorithm and Pseudocode
◼ An algorithm is a formal definition with some specific
characteristics that describes a process to perform a specific
task.
◼ Generally, the word "algorithm" can be used to describe any high-level
task in computational systems.
◼ On the other hand, pseudocode is an informal, human-readable
description of an algorithm that leaves out many of its granular
details.

How to Write an Algorithm
❑ Data types are not needed
❑ No declaration
❑ No fixed syntax

Swap (a, b)
{                 // or begin
    temp = a;     // or temp := a or temp ← a
    a = b;
    b = temp;
}                 // or end

Sample of an algorithm
Algorithm: Insertion-Sort
Input: A list L of integers of length n
Output: A sorted list L1 containing those integers present in L
Step 1: Keep a sorted list L1 which starts off empty
Step 2: Perform Step 3 for each element in the original list L
Step 3: Insert it into the correct position in the sorted list L1.
Step 4: Return the sorted list
Step 5: Stop

Sample of Pseudocode
◼ Here is pseudocode which describes how the high-level abstract
process mentioned above in the algorithm Insertion-Sort can be
described in a more concrete way.

for i ← 1 to length(A) − 1
    x ← A[i]
    j ← i
    while j > 0 and A[j-1] > x
        A[j] ← A[j-1]
        j ← j - 1
    A[j] ← x
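
◼ For concreteness, below is a minimal runnable C version of the same
procedure; the sample array in main is an illustrative assumption,
not part of the original notes.

#include <stdio.h>

/* In-place insertion sort, following the pseudocode above. */
void insertion_sort(int A[], int n)
{
    for (int i = 1; i < n; i++) {
        int x = A[i];              /* element to insert           */
        int j = i;
        while (j > 0 && A[j - 1] > x) {
            A[j] = A[j - 1];       /* shift larger elements right */
            j--;
        }
        A[j] = x;                  /* drop x into its final slot  */
    }
}

int main(void)
{
    int A[] = {5, 2, 9, 1, 6};
    int n = sizeof A / sizeof A[0];
    insertion_sort(A, n);
    for (int i = 0; i < n; i++)
        printf("%d ", A[i]);       /* prints: 1 2 5 6 9 */
    printf("\n");
    return 0;
}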

Algorithm Complexity
◼ The topic “Analysis of Algorithms” is concerned primarily with
determining the memory (space) and time requirements
(complexity) of an algorithm.
◼ The time complexity (or simply, complexity) of an algorithm is
measured as a function of the problem size.
◼ Suppose X is an algorithm and n is the size of input data, the time
and space used by the algorithm X are the two main factors,
which decide the efficiency of X.
◼ Time Factor − Time is measured by counting the number of key
operations such as comparisons in a sorting algorithm.
◼ Space Factor − Space is measured by counting the maximum
memory space required by the algorithm.
◼ The complexity of an algorithm f(n) gives the running time and/or
the storage space required by the algorithm in terms of n as the
size of input data.

Algorithm Complexity
◼ In the analysis of algorithms, it is common to estimate complexity
in the asymptotic sense, i.e., to estimate the complexity function
for arbitrarily large input.
◼ The term "analysis of algorithms" was coined by Donald
Knuth.
◼ Algorithm analysis is an important part of computational
complexity theory, which provides theoretical estimation for the
required resources of an algorithm to solve a specific
computational problem.
◼ Analysis of algorithms is the determination of the amount of time
and space resources required to execute an algorithm.
◼ Usually, the efficiency or running time of an algorithm is stated
as a function relating the input length to the number of steps,
known as time complexity, or volume of memory, known as
space complexity.

Analysis Approaches
◼ Efficiency of an algorithm can be analyzed at two different stages,
before implementation and after implementation.
◼ The common approaches of algorithm analysis are:
1. Empirical analysis:
◼ The selected algorithm is implemented using a programming language
and is then executed on a target machine.
◼ In this analysis, actual statistics like running time and space required,
are collected. So, the analysis deals with the execution or running
time of various operations involved. The running time of an operation
can be defined as the number of computer instructions executed per
operation.
2. Theoretical analysis:
◼ Efficiency of an algorithm is measured by assuming that all other
factors, for example, processor speed, are constant and have no
effect on the implementation.

Approach 1
◼ The Empirical Analysis is performed by:
◼ Write a program implementing the algorithm
◼ Run the program with inputs of varying size and composition
◼ Use a system clock (e.g., System.currentTimeMillis() in Java) to get
an accurate measure of the actual running time
◼ Plot the results
◼ Limitations of Experiments
◼ It is necessary to implement the algorithm, which may be
difficult.
◼ Results may not be indicative of the running time on other
inputs not included in the experiment.
◼ In order to compare two algorithms, the same hardware and
software environments must be used.
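
◼ A minimal sketch of such an experiment in C, assuming a hypothetical
work(n) function as the algorithm under test; clock() from <time.h>
serves as the timer.

#include <stdio.h>
#include <time.h>

/* Hypothetical workload standing in for the algorithm under test. */
long long work(long n)
{
    long long s = 0;
    for (long i = 0; i < n; i++)
        s += i;
    return s;
}

int main(void)
{
    /* Run the program with inputs of varying size and record the time. */
    for (long n = 1000000; n <= 64000000; n *= 2) {
        clock_t start = clock();
        volatile long long r = work(n);     /* volatile: keep the call */
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
        printf("n=%9ld  time=%.4f s\n", n, secs);
        (void)r;
    }
    return 0;
}

◼ Plotting n against the measured times gives the empirical growth curve;
for this linear workload, doubling n should roughly double the time.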

Approach 2
◼ The Theoretical Analysis is accomplished by:
◼ Using a high-level description of the algorithm instead of an
implementation.
◼ Characterizing running time as a function of the input size, n.
◼ Taking into account all possible inputs.
◼ Allowing us to evaluate the speed (performance) of an algorithm
independent of the hardware/software environment.

◼ Primitive Operations
◼ Basic computations performed by an algorithm.
◼ Identifiable in pseudocode.
◼ Largely independent from the programming language.
◼ Examples:
◼ Evaluating an expression
◼ Assigning a value to a variable
◼ Indexing into an array
◼ Calling a method
◼ Returning from a method
How to analyze an Algorithm
◼ Basic Level Analysis
◼ Example 1:

Swap (a,b)
{
temp = a; → 1 unit of time
a=b; → 1 unit of time
b=temp; → 1 unit of time
}
◼ Time: f(n) = 3 → O(1)
◼ Space: the used variables a, b, and temp each take 1 word,
so s(n) = 3 → O(1)
◼ For X = 5*a+6*b: at the basic/brief level of analysis this counts
as 1 unit of time; at the detailed level, 4 units of time are
required (two multiplications, one addition, one assignment).


Frequency Count Method
◼ Example 2:
Sum(A, n)
{
    s = 0;                   // → 1 unit of time
    for (i = 0; i < n; i++)  // → n+1 (the condition is evaluated n+1 times)
    {
        s = s + A[i];        // → n
    }
    return s;                // → 1
}
// Total: f(n) = 1 + (n+1) + n + 1 = 2n+3
Time function: the degree of this polynomial is one → O(n)
Space function: the used variables are A: n, n: 1, s: 1, i: 1
→ S(n) = n+3 → O(n)
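
◼ As a sanity check of this counting model, here is a small runnable
C sketch; the explicit op counter and the restructured loop are my
own instrumentation, charging one unit per event exactly as above.

#include <stdio.h>

/* Count the operations charged to Sum: 1 + (n+1) + n + 1 = 2n+3. */
long sum_with_count(const int A[], int n, long *ops)
{
    long s = 0; (*ops)++;            /* s = 0        : 1 unit    */
    for (int i = 0; ; i++) {
        (*ops)++;                    /* test i < n   : n+1 units */
        if (!(i < n)) break;
        s += A[i]; (*ops)++;         /* s = s + A[i] : n units   */
    }
    (*ops)++;                        /* return s     : 1 unit    */
    return s;
}

int main(void)
{
    int A[5] = {1, 2, 3, 4, 5};
    long ops = 0;
    long s = sum_with_count(A, 5, &ops);
    printf("sum=%ld  ops=%ld  2n+3=%d\n", s, ops, 2 * 5 + 3);
    return 0;
}

◼ For n = 5 this prints ops = 13, which matches 2n+3.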
Frequency Count Method
◼ Example 3:
Add(A, B, n)
{
    for (i = 0; i < n; i++)           // → n+1
    {
        for (j = 0; j < n; j++)       // → n*(n+1)
        {
            C[i,j] = A[i,j] + B[i,j]; // → n*n
        }
    }
}

Time function: f(n) = 2n² + 2n + 1 → O(n²)

Frequency Count Method
◼ Example 4:
Multiply(A, B, n)
{
    for (i = 0; i < n; i++) {                      // n+1
        for (j = 0; j < n; j++) {                  // n*(n+1)
            C[i,j] = 0;                            // n*n
            for (k = 0; k < n; k++) {              // n*n*(n+1)
                C[i,j] = C[i,j] + A[i,k] * B[k,j]; // n*n*n
            }
        }
    }
}

Time function: f(n) = 2n³ + 3n² + 2n + 1 → O(n³)

Space: A: n², B: n², C: n², n: 1, i: 1, j: 1, k: 1
S(n) = 3n² + 4 → O(n²)

Analysis of for Loop
◼ Example 1:
for (i=0; i<n; i++) {    // n+1
    stmt;                // n
}
Time Complexity: f(n) = 2n+1 → O(n)

for (i=n; i>0; i--) {    // n+1
    stmt;                // n
}
Time Complexity: f(n) = 2n+1 → O(n)

for (i=0; i<n; i+=20) {  // n/20+1
    stmt;                // n/20
}
Time Complexity: f(n) = 2(n/20)+1 → O(n)

Analysis of for Loop
Example 2:

for (i=0; i<n; i++)
{
    for (j=0; j<i; j++)
    {
        stmt;
    }
}

Trace (i : values of j : number of times stmt runs):
i = 0 : —          : 0 times
i = 1 : 0          : 1 time
i = 2 : 0, 1       : 2 times
i = 3 : 0, 1, 2    : 3 times
…
i = n : 0, …, n−1  : n times

Total number of times = 1 + 2 + … + n = n(n+1)/2
Time Complexity: f(n) = (n² + n)/2 → O(n²)

Analysis of for Loop

◼ Example 3:

p = 0;
for (i = 1; p <= n; i++)
{
    p = p + i;
}

i    p
1    0+1
2    1+2
3    1+2+3
4    1+2+3+4
…
k    1+2+3+…+k

The loop stops when p exceeds n, i.e., when k(k+1)/2 > n,
so k is of the order of √n.
Time Complexity: O(√n)

Analysis of for Loop
◼ Example 4:

for (i = 1; i < n; i = i*2)
{
    stmt;
}

Trace of i: 1, 1*2 = 2, 2*2 = 2², 2²*2 = 2³, …, 2ᵏ

Suppose the loop stops when i ≥ n.
Since i = 2ᵏ after k doublings:
2ᵏ ≥ n  (take 2ᵏ = n)
k = log₂ n
→ O(log₂ n)

Analysis of for Loop
The doubling loop compared with a simple counting loop:

for (i=1; i<n; i=i*2)        for (i=1; i<n; i++)
{                            {
    stmt;                        stmt;
}                            }

Left:  i = 1*2*2*2*… = 2ᵏ; the loop ends when 2ᵏ = n, → k = log₂ n
Right: i = 1+1+1+… = n; the loop runs k = n times

n = 8:  i = 1, 2, 4, 8✗      → 3 iterations; log₂ 8 = 3
n = 10: i = 1, 2, 4, 8, 16✗  → 4 iterations; log₂ 10 ≈ 3.32
So in general f(n) = ⌈log₂ n⌉
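
A small runnable C check of this ceiling formula; the loop body is
reduced to a counter, and log2 comes from <math.h> (link with -lm on
most systems).

#include <stdio.h>
#include <math.h>

int main(void)
{
    for (int n = 2; n <= 40; n += 2) {
        int count = 0;
        for (int i = 1; i < n; i = i * 2)   /* the doubling loop */
            count++;
        printf("n=%2d  iterations=%d  ceil(log2 n)=%d\n",
               n, count, (int)ceil(log2(n)));
    }
    return 0;
}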
Analysis of for Loop

for (i=1; i<n; i++)          // n+1
{
    for (j=1; j<n; j=j*2)    // n * log n
    {
        stmt;                // n * log n
    }
}

Time Complexity: f(n) = 2n log n + n + 1
                      = O(n log n)

Analysis of for-loop
p = 0;
for (i = 1; i < n; i = i*2)
{
    p++;                 // after this loop, p = log₂ n
}

for (j = 1; j < p; j = j*2)
{
    stmt;                // runs about log p times
}

The time complexity of the stmt in the second loop is:
f(n) = log p, and p = log n from the first loop,
so f(n) = O(log log n).
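
◼ A runnable C sketch that counts the inner-loop iterations for one
concrete n; the choice n = 2²⁰ is arbitrary and only for illustration.

#include <stdio.h>

int main(void)
{
    long n = 1L << 20;                  /* n = 2^20, so log2(n) = 20 */
    long p = 0;
    for (long i = 1; i < n; i = i * 2)
        p++;                            /* first loop: p = log2(n)   */

    long count = 0;
    for (long j = 1; j < p; j = j * 2)
        count++;                        /* second loop: ~log2(p)     */

    printf("n=%ld  p=%ld  inner iterations=%ld (about log2 log2 n)\n",
           n, p, count);
    return 0;
}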

Analysis of while-loop
◼ Example 1

i = 0;           // 1
while (i < n)    // n+1
{
    stmt;        // n
    i++;         // n
}

f(n) = 3n+2 → O(n)

The equivalent for loop has the same cost:

for (i = 0; i < n; i++)   // 1 + (n+1) + n
{
    stmt;                 // n
}

f(n) = 3n+2 → O(n)

Analysis of while-loop
◼ Example 2

a = 1;
while (a < b)
{
    stmt;
    a = a * 2;
}

Trace of a: 1, 1*2 = 2, 2*2 = 2², 2²*2 = 2³, …, 2ᵏ

Termination happens when a ≥ b. Since a = 2ᵏ:
2ᵏ ≥ b  (take 2ᵏ = b)
k = log₂ b
Calling b as n → O(log n)

Types/Classes of Time Functions
◼ Constant → O(1):
◼ f(n) = 2, f(n) = 5, f(n) = 5000, …
◼ Logarithmic → O(log n):
◼ f(n) = log₂ n, log₃ n
◼ Linear → O(n):
◼ f(n) = 2n+3, f(n) = 500n+700, f(n) = n/2+6
◼ Quadratic → O(n²):
◼ f(n) = 3n²+5000
◼ Cubic → O(n³):
◼ f(n) = 3n³+5n²+3n+17
◼ Exponential → O(2ⁿ):
◼ O(2ⁿ), O(3ⁿ), O(nⁿ)

Comparison of Class Functions
1 < log n < √n < n < n log n < n² < n³ < … < n¹⁰ < … < nᵏ < … < 2ⁿ < 3ⁿ < … < nⁿ

n    log₂ n    n²    2ⁿ
1    0         1     2
2    1         4     4
4    2         16    16
8    3         64    256
9    ≈3.17     81    512

◼ Examples of Time Functions

◼ for (i=0; i<n; i++)    → O(n)
◼ for (i=0; i<n; i=i+2)  → n/2 (or n/20 with i+=20) → O(n)
◼ for (i=n; i>1; i--)    → O(n)
◼ for (i=1; i<n; i=i*2)  → O(log₂ n)
◼ for (i=1; i<n; i=i*3)  → O(log₃ n)
◼ for (i=n; i>0; i=i/2)  → O(log₂ n)

Comparison of algorithms
◼ How do we compare two algorithms for solving some problem in
terms of efficiency?
◼ We need to define a number of objective measures.
1- Compare execution times?
Not good: times are specific to a particular computer !!
2- Count the number of statements executed?
Not good: the number of statements varies with the programming
language as well as the style of the individual programmer.
3- Ideal Solution
◼ Express running time as a function of the input size n
(i.e., f(n)).
◼ Compare different functions corresponding to running times.
◼ Such an analysis is independent of machine time and
programming style, etc.
Asymptotic Analysis
◼ Implementing both algorithms and comparing execution times
is often an unsatisfactory approach, for four reasons:
◼ First, there is the effort involved in programming and testing
two algorithms when at best you want to keep only one.
◼ Second, when empirically comparing two algorithms there is
always the chance that one of the programs was “better
written” than the other.
◼ Third, the choice of empirical test cases might unfairly
favor one algorithm.
◼ Fourth, we could find that even the better of the two
algorithms does not fall within our resource budget.
◼ These problems can often be avoided using asymptotic
analysis.

Asymptotic Analysis
◼ To compare two algorithms with running times f(n) and g(n), we
need a rough measure that characterizes how fast each
function grows.
◼ Comparing functions in the limit, that is, asymptotically!
(i.e., for large values of n)
◼ Asymptotic algorithm analysis, or simply asymptotic analysis.
◼ It allows us to compare the relative costs of two or more
algorithms for solving the same problem.
◼ Asymptotic analysis also gives algorithm designers a tool for
estimating whether a proposed solution is likely to meet the
resource constraints for a problem before they implement an
actual program.

Asymptotic Analysis
◼ The asymptotic behavior of a function 𝒇(𝒏) refers to the
growth of 𝒇(𝒏) as n gets large.
◼ We typically ignore small values of n, since we are usually
interested in estimating how slow the program will be on large
inputs.
◼ Time function of an algorithm is represented by 𝐓(𝐧), where n
is the input size.
◼ Different types of asymptotic notations are used to represent
the complexity of an algorithm.
◼ The following asymptotic notations are used to calculate the
running-time complexity of an algorithm.

Asymptotic Notations
◼ O: Asymptotic Upper Bound
◼ ‘O’ (Big Oh) is the most commonly used notation. A function f(n)
can be represented as the order of g(n), that is O(g(n)), if there
exist a positive integer n₀ and a positive constant c such that
f(n) ≤ c·g(n) for all n > n₀.
◼ Hence, function g(n) is an upper bound for function f(n), as g(n)
grows at least as fast as f(n).
◼ Ω: Asymptotic Lower Bound
◼ We say that f(n) = Ω(g(n)) when there exists a constant c such that
f(n) ≥ c·g(n) for all sufficiently large values of n.
◼ Here n is a positive integer. It means function g is a lower
bound for function f; after a certain value of n, f will never go
below c·g.
◼ Ɵ: Asymptotic Tight Bound
◼ We say that f(n) = Ɵ(g(n)) when there exist constants c₁ and
c₂ such that c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all sufficiently large
values of n. Here n is a positive integer.
Asymptotic Notations
◼ 1 < log n < √n < n < n log n < n² < n³ < … < n¹⁰ < … < nᵏ < … < 2ⁿ < 3ⁿ < … < nⁿ

Lower bound          average bound          upper bound

◼ E.g., let us consider a given function: f(n) = 2n+3

◼ Then we bound 2n+3 by a single term: anything that is ≥ 2n+3.
◼ For example 10n, 7n, or 100n would all do, but the simple method is
2n+3 ≤ 2n+3n = 5n.
◼ So, 2n+3 ≤ 5n; from the definition, c = 5 and g(n) = n, so
f(n) = O(g(n)), hence the complexity of f(n) is represented by
O(n).
◼ Can we write 2n+3 ≤ 2n²+3n² ≤ 5n², so that f(n) = O(n²)? Yes, we can.
◼ Also f(n) = O(2ⁿ) is right; these are all true but not useful. The
useful one is the closest function to f(n).
◼ f(n) = O(log n) is wrong, because log n is smaller and comes from
the lower-bound side.
Asymptotic Notations
◼ 1 < log n < √n < n < n² < n³ < … < n¹⁰ < … < nᵏ < … < 2ⁿ < 3ⁿ < … < nⁿ
Lower bound          average bound          upper bound

◼ Omega Notation
The function f(n) = Ω(g(n)) iff there exist positive constants
c and n₀ such that
f(n) ≥ c·g(n) for all n ≥ n₀.
E.g., f(n) = 2n+3:
2n+3 ≥ 1·n for all n ≥ 1,
→ f(n) = Ω(n).
f(n) = Ω(log n) is also true;
which one is useful? The nearest one, which is Ω(n).
But f(n) = Ω(n²) is wrong, because n² would be an upper bound.

Asymptotic Notations
◼ 1 < log n < √n < n < n² < n³ < … < n¹⁰ < … < nᵏ < … < 2ⁿ < 3ⁿ < … < nⁿ
Lower bound          average bound          upper bound

◼ Theta Notation
The function f(n) = Ө(g(n)) iff there exist positive constants c₁, c₂
and n₀ such that
c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀.
E.g., f(n) = 2n+3:
1·n ≤ 2n+3 ≤ 5n for all n ≥ 1,
→ f(n) = Ө(n).
This is the average (tight) bound of the function; f(n) belongs
exactly to class n,
and f(n) = Ө(any other class) is false.
So, if we can give a theta bound for the function, that is best;
if it is not possible, we fall back to Big-Oh or Omega.

Asymptotic Notations
◼ Example 1: find Big-Oh, Omega, and Theta notations for the
function: f(n) = 2n²+3n+4
Solution:
- Big-Oh
2n²+3n+4 ≤ 2n²+3n²+4n²
2n²+3n+4 ≤ 9n², for n ≥ 1
Here, c = 9, g(n) = n².
From the definition, f(n) = O(g(n)), so f(n) = O(n²).
- Omega
2n²+3n+4 ≥ 1·n², so f(n) = Ω(n²).
- Theta
1·n² ≤ 2n²+3n+4 ≤ 9n²
Here: c₁ = 1, c₂ = 9, g(n) = n², so f(n) = Ө(n²).
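
◼ These inequalities are easy to spot-check numerically; a minimal C
sketch follows (the range of n and the output format are my own choices).

#include <stdio.h>

/* Check c1*g(n) <= f(n) <= c2*g(n) for f(n) = 2n^2+3n+4, g(n) = n^2,
   with c1 = 1, c2 = 9, n0 = 1. */
int main(void)
{
    for (long long n = 1; n <= 1000000; n *= 10) {
        long long f = 2*n*n + 3*n + 4;
        long long g = n*n;
        printf("n=%8lld  1*g=%14lld  f=%14lld  9*g=%14lld  holds=%d\n",
               n, g, f, 9*g, (g <= f) && (f <= 9*g));
    }
    return 0;
}

◼ Every row prints holds=1, consistent with c₁ = 1, c₂ = 9, n₀ = 1.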

Asymptotic Notations
◼ Example 2: find Big-Oh, Omega, and Theta notations for the
function: f(n) = n² log n + n
◼ Solution:

1·n² log n ≤ n² log n + n ≤ 2·n² log n, for n ≥ 2.

Here:
c₁ = 1, c₂ = 2, g(n) = n² log n,

→ O(n² log n),
→ Ω(n² log n),
→ Ө(n² log n).

- Here, we may notice that n² log n is not one of the classes listed,
so we can add an n² log n class after n²:
- 1 < log n < √n < n < n log n < n² < n² log n < n³ < … < n¹⁰ < … < nᵏ < … < 2ⁿ < 3ⁿ < … < nⁿ

Graphic of Asymptotic Notations

[Figure not reproduced: graphic examples of Ω, O, and Ө.]

Common Asymptotic Notations

[Table not reproduced: the common asymptotic notations.]
Growth rate
◼ The growth rate for an algorithm is the rate at which the cost of
the algorithm grows as the size of its input grows.
◼ The figure (not reproduced in these notes) shows a graph of six
equations, each meant to describe the running time for a particular
program or algorithm.

Growth Rate
◼ The two equations 10n and 20n are graphed by straight lines.
◼ A growth rate of cn (for c any positive constant) is often referred
to as a linear growth rate or running time.
◼ This means that as the value of n grows, the running time of the
algorithm grows in the same proportion.
◼ Doubling the value of n roughly doubles the running time.
◼ An algorithm whose running-time equation has a highest-order
term containing a factor of n² is said to have a quadratic growth
rate.
◼ In the figure, the line 2n² represents a quadratic growth rate.
◼ The line labeled 2ⁿ represents an exponential growth rate.
◼ The line labeled n! is also growing exponentially.

Asymptotic Analysis
◼ Example: find Big-Oh, Omega, and Theta notations for the
function: f(n)=n!
◼ Solution:
f(n) = n! = n·(n-1)·(n-2)·…·3·2·1
→ 1·1·1·…·1 ≤ 1·2·3·…·n ≤ n·n·n·…·n
→ 1 ≤ n! ≤ nⁿ

→ O(nⁿ),
→ Ω(1),
→ No tight bound; we can specify ONLY upper and lower bounds.
- 1 < log n < √n < n < n log n < n² < n³ < … < n¹⁰ < … < nᵏ < … < 2ⁿ < 3ⁿ < … < nⁿ

Asymptotic Analysis
◼ Example: find Big-Oh, Omega, and Theta notations for the
function: f(n) = log n!
◼ Solution:
f(n) = log(n!) = log(n·(n-1)·(n-2)·…·3·2·1)
→ log(1·1·…·1) ≤ log(1·2·3·…·n) ≤ log(n·n·…·n)
→ 0 ≤ log n! ≤ log nⁿ
→ 0 ≤ log n! ≤ n log n
→ O(n log n),
→ Ω(1),
→ No tight bound; we can specify ONLY upper and lower bounds.
- 1 < log n < √n < n < n log n < n² < n³ < … < n¹⁰ < … < nᵏ < … < 2ⁿ < 3ⁿ < … < nⁿ

Comparison of functions

◼ We can compare two functions by:

◼ Applying sample values and judging the result.
◼ Applying log to both functions and judging the result.
◼ Example 1:
Is n² < n³ ?
1- Apply sample values:
n    n²    n³
2    4     8
3    9     27
→ It is clear that n² < n³
2- Apply log to both functions:
log n²  vs  log n³
→ 2 log n < 3 log n

Comparison of functions
◼ Example 2:
f(n) = n² log n,  g(n) = n (log n)¹⁰ ?
1- Apply log to both functions:
log[n² log n]  vs  log[n (log n)¹⁰]
→ log n² + log log n  vs  log n + log (log n)¹⁰
→ 2 log n + log log n  vs  log n + 10 log log n
→ f(n) > g(n), since the 2 log n term dominates

Log rules:
1. log(ab) = log a + log b
2. log(a/b) = log a − log b
3. log aᵇ = b log a
4. a^(log_c b) = b^(log_c a)
5. aᵇ = n → b = log_a n

Comparison of functions
◼ Example 3:
f(n) = n^(log n),  g(n) = 2^(√n)

Sol: Apply log to both functions:
log n^(log n)  vs  log 2^(√n)    (apply rule #3)
→ log n · log n  vs  √n · log₂ 2
→ log² n  vs  n^(1/2)
Apply log again to both sides:
log log² n  vs  log n^(1/2)
→ 2 log log n  vs  (1/2) log n
→ f(n) < g(n), since (1/2) log n grows faster than 2 log log n.

Comparison of functions
◼ Example 4:
f(n) = 3·n^(√n),  g(n) = 2^(√n log n)

Solution:
3·n^(√n)  vs  2^(log₂ n^(√n))    (apply rule #3: √n log₂ n = log₂ n^(√n))
3·n^(√n)  vs  (n^(√n))^(log₂ 2)  (apply rule #4)
→ 3·n^(√n)  vs  n^(√n)
So, it is clear that 3·n^(√n) > n^(√n),
but asymptotically they are equal,
just as 5n² is asymptotically equal to n²:
→ f(n) = g(n) (asymptotically).

True/False Asymptotic Notations
1- (n+k)ᵐ = O(nᵐ)?
Sol: e.g., (n+3)² = n²+6n+9 = O(n²) → (n+k)ᵐ = O(nᵐ) is True.
2- 2ⁿ⁺¹ = O(2ⁿ)?
Sol: 2ⁿ⁺¹ = 2·2ⁿ = O(2ⁿ); the constant coefficient is ignorable.
→ 2ⁿ⁺¹ = O(2ⁿ) is True.
3- 2²ⁿ = O(2ⁿ)?
Sol: 2²ⁿ = 4ⁿ, which is > 2ⁿ;
2ⁿ cannot be an upper bound of 4ⁿ → 2²ⁿ = O(2ⁿ) is False.
4- √(log n) = O(log log n)? Sol: apply log to both sides:
log √(log n) = (1/2) log log n  vs  log log log n.
(1/2) log log n is greater → a smaller function can't be an upper bound.
→ √(log n) = O(log log n) is False.
5- n^(log n) = O(2ⁿ)? Sol: apply log to both → log n^(log n) vs log 2ⁿ
→ log² n vs n; n is greater → n^(log n) = O(2ⁿ) is True.

Case - Time Analysis
◼ The main concern of analysis of algorithms is the required time
or performance. Generally, the following types of analysis are
performed:
◼ Worst-case:
◼ The maximum number of steps taken on any instance of size n.
◼ An absolute guarantee that the algorithm will not run longer, no
matter what the inputs are.
◼ Best-case:
◼ The minimum number of steps taken on any instance of size n.
◼ The input is the one for which the algorithm runs the fastest.
◼ Average case:
◼ The average number of steps taken over instances of size n.

Case - Time Analysis
◼ Example: Linear Search Algorithm Analysis
◼ Suppose a list A of elements is given:

Value: 6  4  10  7  11  9  6  5  14  16
Index: 0  1   2  3   4  5  6  7   8   9

We want to search for some key element, e.g., key = 9.
The search scans the list from the left-hand side; # of comparisons = 6.
Suppose we want to search for key = 30 instead; then
it will check all the elements, 30 is not found, # of comparisons = 10.
- Best case: the searched key element is at the first index.
- Best-case time: constant time → B(n) = O(1)
- Worst case: the searched key is at the last index (or absent).
- Worst-case time (the maximum time): n, so W(n) = O(n)
- Average case: an exact average may not always be available.
- Average-case time: the sum over all possible cases / the number of cases
- Average time: A(n) = (1+2+3+…+n) / n = (n(n+1)/2) / n
              = (n+1)/2
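
◼ A tiny runnable C check of this average; n = 10 is an arbitrary
illustrative choice.

#include <stdio.h>

/* Average comparisons of linear search over the n equally likely
   key positions: (1 + 2 + ... + n) / n = (n+1)/2. */
int main(void)
{
    int n = 10;
    double total = 0;
    for (int pos = 1; pos <= n; pos++)
        total += pos;   /* a key at position pos costs pos comparisons */
    printf("average = %.1f, (n+1)/2 = %.1f\n", total / n, (n + 1) / 2.0);
    return 0;
}

◼ Both values print as 5.5.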
Linear Search Asymptotic notations
◼ Notations are not tied to particular cases:
◼ Notations are used to represent bounds of functions.
◼ Cases describe the time consumed by the algorithm on particular inputs.
◼ B(n) = 1: the function is the constant 1, which belongs to the
constant class, so we can write:
B(n) = Ө(1),
and we can also write the upper and lower bounds:
B(n) = O(1),
B(n) = Ω(1).
In the worst case, W(n) = n; now the function is linear →
W(n) = O(n), W(n) = Ө(n), W(n) = Ω(n).
Best case and worst case can each use any of the notations; do not
think that the best case is for lower bounds and the worst case for
upper bounds.

Searching Algorithms
◼ Organizing and retrieving information is at the heart of most
computer applications, and searching is the most frequently
performed of all computing tasks.
◼ Linear search is a very simple search algorithm. In this type of
search, a sequential search is made over all items one by one.
◼ Every item is checked and if a match is found then that
particular item is returned, otherwise the search continues till
the end of the data collection.
procedure linear_search (list, value)
for each item in the list
if match item == value
return the item's location
end if
end for
end procedure
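
◼ A minimal runnable C version of this procedure; returning -1 for
"not found" is my own convention here, and the test array reuses the
earlier example.

#include <stdio.h>

/* Linear search: return the index of value in list[0..n-1], or -1. */
int linear_search(const int list[], int n, int value)
{
    for (int i = 0; i < n; i++)
        if (list[i] == value)
            return i;       /* match found: return its location */
    return -1;              /* reached the end: value not found */
}

int main(void)
{
    int A[] = {6, 4, 10, 7, 11, 9, 6, 5, 14, 16};
    printf("key 9  -> index %d\n", linear_search(A, 10, 9));   /* 5  */
    printf("key 30 -> index %d\n", linear_search(A, 10, 30));  /* -1 */
    return 0;
}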

Binary search
◼ Binary search is a fast search algorithm with run-time complexity
of Ο(log n).
◼ This search algorithm works on the principle of divide and
conquer.
◼ For this algorithm to work properly, the data collection should be
in the sorted form.
◼ Binary search looks for a particular item by comparing the
middle-most item of the collection.
◼ If a match occurs, then the index of the item is returned. If the
middle item is greater than the target item, then the search
continues in the sub-array to the left of the middle item.
◼ Otherwise, the item is searched for in the sub-array to the right
of the middle item.
◼ This process continues on the sub-array as well until the size of
the subarray reduces to zero.
◼ For a binary search to work, it is mandatory for the target array
to be sorted.
Binary search
◼ The following assumes a sorted array in which we need to search
for the location of the value 31 using binary search. (The array
figures are not reproduced here; the narrative assumes a 10-element
sorted array, indexed 0–9, with A[4] = 27 and A[5] = 31.)

◼ First, we determine the middle of the array using the formula:

mid = low + (high - low) / 2

◼ Here it is 0 + (9 - 0) / 2 = 4 (the integer part of 4.5). So, 4 is
the mid of the array.

◼ Now we compare the value stored at location 4 with the value
being searched for, i.e., 31.

◼ We find that the value at location 4 is 27, which is not a match.
As the target 31 is greater than 27 and we have a sorted array,
the target value must be in the upper portion of the array.

Binary search

◼ We change low to mid + 1 and find the new mid value again:
low = mid + 1;  mid = low + (high - low) / 2
◼ The new mid is 7 now. We compare the value stored at location
7 with the target value 31.

◼ The value stored at location 7 is not a match; rather, it is greater
than what we are looking for. So, the value must be in the lower
part from this location.

Binary search
◼ Hence, we calculate the mid again. This time it is 5.

◼ We compare the value stored at location 5 with the target value.
We find that it is a match.

◼ We conclude that the target value 31 is stored at location 5.

◼ Binary search halves the searchable items at each step and thus
reduces the number of comparisons to be made to a very small number.

Binary Search Pseudocode
A ← sorted array
n ← size of array
x ← value to be searched
procedure Binary_Search()
set lowerBound = 1
set upperBound = n
while x not found
if upperBound < lowerBound
EXIT: x does not exist.
set midPoint = lowerBound + ( upperBound - lowerBound ) / 2
if A[midPoint] < x
set lowerBound = midPoint + 1
if A[midPoint] > x
set upperBound = midPoint - 1
if A[midPoint] = x
EXIT: x found at location midPoint
end while
end procedure
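
◼ A runnable C rendering of this pseudocode, using 0-based indexing
instead of the 1-based indexing above; the test array is an assumed
example consistent with the earlier walkthrough.

#include <stdio.h>

/* Iterative binary search over a sorted array A[0..n-1].
   Returns the index of x, or -1 if x is not present. */
int binary_search(const int A[], int n, int x)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  /* avoids overflow of (low+high)/2 */
        if (A[mid] == x)
            return mid;                    /* x found at location mid    */
        else if (A[mid] < x)
            low = mid + 1;                 /* search the upper sub-array */
        else
            high = mid - 1;                /* search the lower sub-array */
    }
    return -1;                             /* x does not exist           */
}

int main(void)
{
    int A[] = {10, 14, 19, 26, 27, 31, 33, 35, 42, 44};
    printf("31 -> index %d\n", binary_search(A, 10, 31));  /* prints 5  */
    printf("32 -> index %d\n", binary_search(A, 10, 32));  /* prints -1 */
    return 0;
}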

