
MODULE OE

Overview

In this module we are looking at:

• Algorithm Definition

• Types of Analysis of Algorithm

• Algorithm performance

• Asymptotic functions

• Logarithms and properties

• Some common summations

Introduction

What is an Algorithm?

An algorithm, named after the ninth-century Persian mathematician Abu Ja'far Muhammad ibn Musa al-Khwarizmi, is defined in several equivalent ways:

• An algorithm is a finite step-by-step procedure to achieve a required result.

• An algorithm is a sequence of computational steps that transform the input into

the output.

• An algorithm is a sequence of operations performed on data that have to be organized in data structures.

• An algorithm is an abstraction of a program to be executed on a physical machine

(model of Computation).

• An algorithm is a finite set of precise instructions for performing a computation or for solving a problem.

• An algorithm is a well-defined computational procedure that transforms inputs into outputs, achieving the desired input-output relationship.

Characteristics

• Finiteness

• Input

• Output

• Rigorous, Unambiguous and Sufficiently Basic at each step

Applications

• Computational Biology

• Scientific Simulation

• Automated Vision/Image Processing

• Compression of Data

• Databases

• Mathematical Optimization

Example: Sorting

Input: a sequence of n numbers a1, a2, …, an.

Output: a permutation of the input sequence such that a1 ≤ a2 ≤ … ≤ an.

An algorithm is said to be correct if, for every input instance, it halts with the correct

output. We then say that a correct algorithm solves the given computational problem.

An incorrect algorithm might not halt at all on some input instances or it might halt

with an answer other than the desired one.
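The sorting problem can be made concrete with a short sketch. The function below is a standard insertion sort in Python, chosen here purely as an illustration (the module does not prescribe a particular sorting algorithm): for every input instance it halts with the sorted permutation, so it is a correct algorithm in the sense just defined.

```python
def insertion_sort(a):
    """Return a sorted permutation of the input sequence (ascending)."""
    a = list(a)                       # work on a copy; the input is unchanged
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:  # shift larger elements one slot right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                # drop the key into its place
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```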

Algorithms devised to solve the same problem often differ dramatically in their efficiency. These differences can be much more significant than differences due to hardware and software.

Analysis of Algorithm

In practice we care about many characteristics of a program besides its resource usage:

• modularity,

• correctness,

• maintainability,

• security,

• functionality,

• robustness,

• user-friendliness,

• programmer’s time,

• simplicity,


• extensibility,

• reliability, and

• scalability.

• Performance draws the line between what is feasible and what is infeasible.

• Algorithms give us a language for talking about program behavior.

• Performance can be used to “pay” for other things, such as security, features and

user-friendliness.

The efficiency of an algorithm is the amount of resources (such as time and storage) necessary to execute it. Most algorithms are designed to work with inputs of arbitrary length. Usually the efficiency or complexity of an algorithm is stated as a function relating the input length to the number of steps (time complexity) or storage locations (space complexity).

• To compare algorithms mainly in terms of running time, but also in terms of other factors (e.g., memory requirements, programmer's effort, etc.).

Of particular interest is the rate of increase in the running time (or run-time) of an algorithm as its input size (usually denoted "n") increases. Run-time efficiency is a topic of great interest in Computer Science: a program can take seconds, hours or even years to finish executing, depending on which algorithm it implements.

Types of Analysis

Worst case

• Provides an upper bound on running time

• An absolute guarantee that the algorithm would not run longer, no matter what the

inputs are

• The running time for any input of a given size is at most this bound, with the maximum attained on some worst-case input.

Best case

• Provides a lower bound on running time

• Input is the one for which the algorithm runs the fastest

Average case

• Provides a prediction about the running time


• Assumes that the input is random

• The running time for any given size input will be the average number of

operations over all problem instances for a given size.
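The three cases can be observed on a linear search, sketched in Python with an explicit comparison counter (the function and data here are illustrative, not part of the module):

```python
def linear_search_steps(a, target):
    """Scan left to right; return (index, number of comparisons made)."""
    steps = 0
    for i, x in enumerate(a):
        steps += 1
        if x == target:
            return i, steps        # found: stop early
    return -1, steps               # not found: every element was examined

data = list(range(10))                 # an input of size n = 10
print(linear_search_steps(data, 0))    # best case: target first, 1 comparison
print(linear_search_steps(data, 99))   # worst case: absent, 10 comparisons
```

For a target equally likely to be in any position, the average case is about n/2 comparisons, between the two extremes printed above.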

Because it is quite difficult to estimate the statistical behavior of the input, most of the time we content ourselves with the worst-case behavior. Most of the time, the complexity of g(n) is approximated by its family O(f(n)), where f(n) is one of the following functions: n (linear complexity), log n (logarithmic complexity), n^a where a ≥ 2 (polynomial complexity), a^n (exponential complexity).
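The gap between these families is dramatic even for modest n, as a quick Python comparison of hypothetical step counts shows:

```python
import math

# Rough step counts for each complexity family as the input size n grows.
for n in (8, 64, 1024):
    print(f"n={n:5d}  log n={math.log2(n):4.0f}  "
          f"n^2={n**2:8d}  2^n has {len(str(2**n))} digits")
```

Already at n = 1024 the exponential count has over 300 digits, while the logarithmic count is 10: this is why the family an algorithm falls into matters more than constant factors.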

Algorithm's Performance

Two important ways to characterize the effectiveness of an algorithm are its space

complexity and time complexity.

Definition: Complexity refers to the rate at which the storage or time grows as a function of the problem size.

The absolute time or storage used also depends on:

• the machine used to execute the program,

• the compiler used to construct the program, and

• other factors.

We would like to have a way of describing the inherent complexity of a program (or

piece of a program), independent of machine/compiler considerations. This means that

we must not try to describe the absolute time or storage needed. We must instead

concentrate on a "proportionality" approach, expressing the complexity in terms of its


relationship to some known function. This type of analysis is known as asymptotic

analysis.

Usually asymptotic estimates are used because different implementations of the same algorithm may differ in efficiency. However, the efficiencies of any two "reasonable" implementations of a given algorithm are related by a constant multiplicative factor, called the hidden constant.

Time complexity is commonly estimated by counting the number of elementary steps needed as a function of the problem size. Since the step-count measure is somewhat coarse, one does not aim at obtaining an exact step count. Instead, one attempts only to get asymptotic bounds on the step count.

We introduce several types of asymptotic notation which are used to compare the

performance and efficiency of algorithms.

The first criterion customarily used to determine efficiency is time: one counts the number of occurrences of each operation when running the algorithm. Since this count depends on the machine and the compiler, it should be considered modulo a multiplicative constant.

The second criterion that is customarily used to determine efficiency is concerned with

how much of the memory a given program will need for a particular task. Here we speak

of `space complexity'. For a given task, there typically are algorithms which trade time

for space, or vice versa. For example, we have seen that hash tables have a very good

time complexity at the expense of using more memory than is needed by other

algorithms. It is up to the program designer to decide which is the better trade-off for the situation at hand.

The difference between space complexity and time complexity is that space can be

reused.
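The time-for-space trade can be sketched with memoization in Python (a different example from the hash tables mentioned above, included only as an illustration): caching previously computed results spends memory to avoid recomputation.

```python
from functools import lru_cache

def fib_slow(n):
    """Exponential time, constant extra space: recomputes subproblems."""
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    """Linear time, but linear space for the cache: time bought with memory."""
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

print(fib_fast(60))   # returns instantly; fib_slow(60) is impractically slow
```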

Asymptotic notations

O - Asymptotic upper bound

Ω - Asymptotic lower bound

o - upper bound that is not asymptotically tight

ω - lower bound that is not asymptotically tight


Definition Let g(n) be a function. The set O(g(n)) is defined as

O(g(n)) = { f(n) | ∃ c > 0, ∃ n0 > 0, ∀ n ≥ n0 : 0 ≤ f(n) ≤ cg(n) }.

In other words, f(n) ∈ O(g(n)) if and only if there exist positive constants c and n0 such that for all n ≥ n0 the inequality 0 ≤ f(n) ≤ cg(n) is satisfied. We say that f(n) is Big O of g(n), or that g(n) is an asymptotic upper bound for f(n).

Example: 40n + 100 = O(n² + 10n + 300). Observe that 0 ≤ 40n + 100 ≤ n² + 10n + 300 for all n ≥ 20, as can be easily verified. Thus we may take n0 = 20 and c = 1 in the definition.

Note that in this example, any value of n0 greater than 20 will also work, and likewise

any value of c greater than 1 works. In general if there exist positive constants n0 and c

such that 0 ≤ f (n) ≤ cg(n) for all n ≥ n0 , then infinitely many such constants also exist. In

order to prove that f(n) = O(g(n)) it is not necessary to find the smallest possible n0 and c making the inequality 0 ≤ f(n) ≤ cg(n) true. It is only necessary to show that at least one such pair of constants exists.
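The witnesses c = 1 and n0 = 20 from the example can be spot-checked numerically. The sketch below tests the defining inequality on finitely many n, so it is only a sanity check, not a proof (the algebraic argument covers all n):

```python
def witness_holds(f, g, c, n0, upto=10_000):
    """Check 0 <= f(n) <= c*g(n) for every n with n0 <= n < upto.
    A finite spot-check of the Big-O definition, not a proof."""
    return all(0 <= f(n) <= c * g(n) for n in range(n0, upto))

f = lambda n: 40 * n + 100            # the example's f(n)
g = lambda n: n * n + 10 * n + 300    # the example's g(n)
print(witness_holds(f, g, c=1, n0=20))  # True
print(witness_holds(f, g, c=1, n0=1))   # False: fails for n around 15
```

Note how the choice of n0 matters: with c = 1 the inequality only starts holding at n = 20.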

Generalizing the last example, we will show that an + b = O(cn² + dn + e) for any constants a, b, c, d, e (with c > 0), and in fact p(n) = O(q(n)) whenever p(n) and q(n) are polynomials with deg(p) ≤ deg(q).


Definition Let g(n) be a function. The set Ω(g(n)) is defined as

Ω(g(n)) = { f(n) | ∃ c > 0, ∃ n0 > 0, ∀ n ≥ n0 : 0 ≤ cg(n) ≤ f(n) }.

We say f(n) is big Omega of g(n), and that g(n) is an asymptotic lower bound for f(n).

Lemma f(n) = O(g(n)) if and only if g(n) = Ω(f(n)).

Proof: If f(n) = O(g(n)) then there exist positive numbers c1, n1 such that 0 ≤ f(n) ≤ c1g(n) for all n ≥ n1. Let c2 = 1/c1 and n2 = n1. Then 0 ≤ c2 f(n) ≤ g(n) for all n ≥ n2, proving g(n) = Ω(f(n)). The converse direction is symmetric.

Definition Let g(n) be a function and define the set Θ(g(n)) = O(g(n))∩Ω(g(n)) .

Equivalently

Θ( g(n)) = { f(n) | ∃ c1 >0, ∃ c2 >0, ∃n0 >0, ∀ n ≥ n0 : 0 ≤ c1g(n) ≤ f(n) ≤ c2 g(n)}.

We write f(n) = Θ(g(n)) and say that g(n) is an asymptotically tight bound for f(n), or that f(n) and g(n) are asymptotically equivalent. Geometrically, for all sufficiently large n the graph of f(n) lies between the curves c1g(n) and c2g(n).


Exercises:

1. Prove that if c is a positive constant, then cf (n) = Θ( f (n)) .

2. Prove that f (n) = Θ(g(n)) if and only if g(n) = Θ( f (n)) .

Lemma If f (n) ≤ h(n) for all sufficiently large n, and if h(n) = O(g(n)) , then f (n)

= O(g(n)) .

Proof: The above hypotheses say that there exist positive numbers c and n1 such that

h(n) ≤ cg(n)

for all n ≥ n1 . Also there exists n2 such that 0 ≤ f (n) ≤ h(n) for all n ≥ n2 . (Recall f (n) is

assumed to be asymptotically non-negative.)

Define n0 = max(n1,n2 ), so that if n ≥ n0 we have both n ≥ n1 and n ≥ n2 . Thus n ≥ n0

implies 0 ≤ f (n) ≤ cg(n) , and therefore f (n) = O(g(n)) .

Exercise Prove that if h1(n) ≤ f(n) ≤ h2(n) for all sufficiently large n, where h1(n) = Ω(g(n)) and h2(n) = O(g(n)), then f(n) = Θ(g(n)).

Definition o(g(n)) = { f(n) | ∀ c > 0, ∃ n0 > 0, ∀ n ≥ n0 : 0 ≤ f(n) < cg(n) }. We say that g(n) is a strict asymptotic upper bound for f(n) and write f(n) = o(g(n)) as before.

Lemma f(n) = o(g(n)) if and only if lim n→∞ f(n)/g(n) = 0.

Proof: Observe that f(n) = o(g(n)) if and only if ∀ c > 0, ∃ n0 > 0, ∀ n ≥ n0 : 0 ≤ f(n)/g(n) < c, which is the very definition of the limit statement lim n→∞ f(n)/g(n) = 0.
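As a quick numeric illustration of the limit criterion, take f(n) = n and g(n) = n²; the ratio f(n)/g(n) = 1/n visibly tends to 0, so n = o(n²):

```python
# f(n) = n is o(n^2): the ratio f(n)/g(n) = 1/n tends to 0 as n grows.
for n in (10, 1000, 100000):
    print(n, n / n**2)   # prints 0.1, then 0.001, then 1e-05
```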


Also, no function belongs to both o(g(n)) and Ω(g(n)).

Thus o(g(n)) ∩ Ω(g(n)) = ∅, and therefore o(g(n)) ⊆ O(g(n)) − Θ(g(n)).

Definition: ω(g(n)) = { f(n) | ∀ c > 0, ∃ n0 > 0, ∀ n ≥ n0 : 0 ≤ cg(n) < f(n) }. Here we say that g(n) is a strict asymptotic lower bound for f(n) and write f(n) = ω(g(n)).

Exercises

2. Briefly describe the following notations used in the running-time analysis of an algorithm:

i. Θ-notation ii. Big-Oh notation iii. little-oh notation

3. Classify each of the expressions below using O-notation:

i. 2n + 100 ii. n log(n) + 10n iii. ½n² + 100n iv. n³ + 100n²
