
1. Define an algorithm
An algorithm is any well-defined computational procedure that takes
some value, or set of values, as input and produces some value, or set
of values, as output. An algorithm is thus a sequence of computational
steps that transform the input into the output.
2. Define a correct algorithm
An algorithm is said to be correct if, for every input instance, it halts
with the correct output. We say that a correct algorithm solves the
given computational problem. An incorrect algorithm might not halt at
all on some input instances, or it might halt with an incorrect answer.
3. Explain kinds of problems that are solved by algorithms
- Manufacturing and other commercial enterprises often need to allocate scarce resources in the most beneficial way.
- An oil company may wish to know where to place its wells in order to maximize its expected profit. A political candidate may want to determine where to spend money buying campaign advertising in order to maximize the chances of winning an election.
- An airline may wish to assign crews to flights in the least expensive way possible, making sure that each flight is covered and that government regulations regarding crew scheduling are met.
- An Internet service provider may wish to determine where to place additional resources in order to serve its customers more effectively.
4. Discuss briefly why algorithms are considered a technology
Of course, computers may be fast, but they are not infinitely fast. And memory may be inexpensive, but it is not free. Computing time is therefore a bounded resource, and so is space in memory. You should use these resources wisely, and algorithms that are efficient in terms of time or space will help you do so.
5. Represent the algorithm/pseudo code of insertion sort

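The pseudocode itself is not reproduced in these notes; a minimal Python rendering of the standard INSERTION-SORT procedure (the function name insertion_sort is our own) might look like:

```python
def insertion_sort(a):
    """Sort list a in place, mirroring the INSERTION-SORT pseudocode."""
    for j in range(1, len(a)):           # a[0..j-1] is already sorted
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:     # shift larger elements one slot right
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key                   # drop key into the hole left behind
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```

Note the loop invariant: before each iteration, the prefix a[0..j-1] holds the original first j elements in sorted order.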
6. Explain the meaning of "analyzing an algorithm"

Analyzing an algorithm has come to mean predicting the resources that the
algorithm requires. Occasionally, resources such as memory, communication
bandwidth, or computer hardware are of primary concern, but most often it
is computational time that we want to measure. Generally, by analyzing
several candidate algorithms for a problem, we can identify a most efficient
one. Such analysis may indicate more than one viable candidate, but we can
often discard several inferior algorithms in the process.
7. Explain what we mean by asymptotically tight bound for f(n) in Θ-notation
For a given function g(n), we write f(n) = Θ(g(n)) if there exist positive constants c1, c2 and n0 such that 0 <= c1 g(n) <= f(n) <= c2 g(n) for all n >= n0. A diagram of f(n) and g(n) gives the intuition: for all values of n at and to the right of n0, the value of f(n) lies at or above c1 g(n) and at or below c2 g(n). In other words, for all n >= n0, the function f(n) is equal to g(n) to within a constant factor. We say that g(n) is an asymptotically tight bound for f(n).

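As an illustrative check of the definition (the functions and constants below are our own example, not from the notes): f(n) = 3n^2 + 2n is Θ(n^2), witnessed by c1 = 3, c2 = 4 and n0 = 2.

```python
def theta_bound_holds(f, g, c1, c2, n0, n_max=1000):
    """Check 0 <= c1*g(n) <= f(n) <= c2*g(n) for every n0 <= n <= n_max."""
    return all(0 <= c1 * g(n) <= f(n) <= c2 * g(n)
               for n in range(n0, n_max + 1))

# f(n) = 3n^2 + 2n is Theta(n^2): 3n^2 <= f(n) always, and f(n) <= 4n^2 once n >= 2
print(theta_bound_holds(lambda n: 3*n*n + 2*n, lambda n: n*n, 3, 4, 2))  # True
```

A finite check like this is only a sanity test, of course; the definition quantifies over all n >= n0.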
8. Random-access machine (RAM) is a model of computation that assumes that algorithms are implemented as computer programs. Explain the importance of RAM in algorithm analysis.
Strictly speaking, we should precisely define the instructions of the RAM model and their costs, since analyzing an algorithm means counting these instructions. The RAM model contains instructions commonly found in real computers: arithmetic (such as add, subtract, multiply, divide, remainder, floor, ceiling), data movement (load, store, copy), and control (conditional and unconditional branch, subroutine call and return). Each such instruction takes a constant amount of time, so counting the RAM instructions an algorithm executes predicts its running time on real computers to within a constant factor; the more complex an operation, the more RAM instructions and hence the more time it uses.
9. Define the running time of an algorithm
The running time of an algorithm on a particular input is the number of primitive operations or "steps" executed by the RAM, i.e. the time spent over the whole execution. A constant amount of time is required to execute each line of pseudocode, although one line may take a different constant amount of time than another. This viewpoint is in keeping with the RAM model, and it also reflects how the pseudocode/algorithm would be implemented on most actual computers. The running time of the algorithm is then the sum of the running times for each statement executed.
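To make the definition concrete, here is a hypothetical instrumented linear search (our own example) that counts comparisons as its primitive steps; the step count is exactly the sum of steps over the statements executed:

```python
def linear_search(a, target):
    """Return (index, steps), counting one comparison per element examined."""
    steps = 0
    for i, x in enumerate(a):
        steps += 1                 # one primitive comparison executed
        if x == target:
            return i, steps        # found: steps depends on where target sits
    return -1, steps               # not found: all len(a) comparisons executed

print(linear_search([3, 1, 4], 4))  # (2, 3)
print(linear_search([3, 1, 4], 9))  # (-1, 3)
```

The same algorithm on different inputs of the same size executes different numbers of steps, which is why running time is defined per input.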
10. Define o-notation as used in the growth of a function
The asymptotic upper bound provided by O-notation may or may not be asymptotically tight. The bound 2n^2 = O(n^2) is asymptotically tight, but the bound 2n = O(n^2) is not. We use o-notation to denote an upper bound that is not asymptotically tight. We formally define o(g(n)) ("little-oh of g of n") as the set

o(g(n)) = { f(n) : for any constant c > 0, there exists a constant n0 > 0 such that 0 <= f(n) < c g(n) for all n >= n0 }.
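A numeric illustration of the definition (our own example): 2n = o(n^2), because for any c > 0 the inequality 2n < c·n^2 eventually holds, whereas 2n^2 is not o(n^2) since the ratio stays fixed at 2.

```python
def eventually_below(c, f, g, n_max=10**6):
    """Return the first n >= 1 with f(n) < c*g(n), or None if never (up to n_max)."""
    for n in range(1, n_max):
        if f(n) < c * g(n):
            return n
    return None

# 2n = o(n^2): even for a tiny c the inequality kicks in (here at n = 201)
print(eventually_below(0.01, lambda n: 2*n, lambda n: n*n))      # 201
# 2n^2 is NOT o(n^2): with c = 1, 2n^2 < n^2 never holds
print(eventually_below(1, lambda n: 2*n*n, lambda n: n*n, 1000)) # None
```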
11. Define the Order of growth of an algorithm


Complexity analysis is also a tool that allows us to explain how an algorithm behaves as the input grows larger. If we feed it a different input, how will the algorithm behave? If our algorithm takes 1 second to run for an input of size 1000, how will it behave if we double the input size? Will it run just as fast, half as fast, or four times slower? In practical programming, this is
important as it allows us to predict how our algorithm will behave when the
input data becomes larger. For example, if we've made an algorithm for a web
application that works well with 1000 users and measure its running time,
using algorithm complexity analysis we can have a pretty good idea of what
will happen once we get 2000 users instead. For algorithmic competitions,
complexity analysis gives us insight about how long our code will run for the
largest test cases that are used to test our program's correctness. So if we've
measured our program's behavior for a small input, we can get a good idea of
how it will behave for larger inputs.
12. Define the worst-case running time of an algorithm
It’s the longest running time for an algorithm to complete execution of any
input of size n.

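As a sketch of the idea (our own example), insertion sort's worst case is reverse-sorted input: counting the element shifts it performs shows 0 shifts for already-sorted input of size n, but n(n-1)/2 shifts, the longest possible, for reversed input.

```python
def insertion_sort_steps(a):
    """Count the element shifts insertion sort performs on a copy of a."""
    a = list(a)
    shifts = 0
    for j in range(1, len(a)):
        key, i = a[j], j - 1
        while i >= 0 and a[i] > key:   # each shift moves one element right
            a[i + 1] = a[i]
            shifts += 1
            i -= 1
        a[i + 1] = key
    return shifts

n = 100
print(insertion_sort_steps(range(n)))         # 0: sorted input, best case
print(insertion_sort_steps(range(n, 0, -1)))  # 4950 = n(n-1)/2: worst case
```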
13. Explain importance of studying worst-case running time of algorithms
i. The worst-case running time of an algorithm gives us an upper bound
on the running time for any input. Knowing it provides a guarantee that
the algorithm will never take any longer than that. We need not make
some educated guess about the running time and hope that it never gets
much worse beyond what has been observed.
ii. For some algorithms, the worst case occurs fairly often. For example, in
searching a database for a particular piece of information, the searching
algorithm’s worst case will often occur when the information is not
present in the database.
iii. The "average case" is often roughly as bad as the worst case.

14. Explain the considerations for us to say that an algorithm is more efficient than the other
We usually consider one algorithm to be more efficient than another if its
worst-case running time has a lower order of growth. Due to constant
factors and lower order terms, an algorithm whose running time has a
higher order of growth might take less time for small inputs than an
algorithm whose running time has a lower order of growth.

1. Define asymptotic behavior in algorithm computational complexity analysis
Dropping the programming-language factor goes along the lines of ignoring the differences between particular programming languages and compilers, and only analyzing the idea of the algorithm itself. This filter of "dropping all factors" and of "keeping the largest growing term" in complexity analysis is called asymptotic behavior.
2. Find the asymptotic behavior of the following example functions by
dropping the constant factors and by keeping the terms that grow
the fastest. Explain briefly your answers:
a) f( n ) = 5n + 12
Answer: f( n ) = n, by keeping the fastest-growing term 5n and then dropping its constant factor 5.
b) f( n ) = 109
Answer: f( n ) = 1.

d) In the question (b) above, explain whether the program has a loop.
We're dropping the multiplier in 109 * 1, but we still have to keep a 1 to indicate that this function has a non-zero value. The program doesn't have a loop, since the number of instructions it needs is just a constant, i.e. 1.
e) f( n ) = n^2 + 3n + 112
Answer: f( n ) = n^2. Here, n^2 grows larger than 3n for sufficiently large n, so we keep that term.
f) f( n ) = n^3 + 1999n + 1337
Answer: f( n ) = n^3. Even though the factor in front of n is quite large, we can still find a large enough n so that n^3 is bigger than 1999n. As we're interested in the behavior for very large values of n, we only keep n^3.
g) f( n ) = n + sqrt( n )
Answer: f( n ) = n. This is so because n grows faster than sqrt( n ) as we increase n.
3. Consider a Python program below which adds two array elements together to produce a sum which it stores in another variable: v = a[ 0 ] + a[ 1 ]. Explain briefly why it has a constant number of instructions, i.e. asymptotic behavior f( n ) = 1.
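A minimal sketch (the helper name add_first_two is our own): the statement performs the same fixed number of primitive operations, two indexed reads, one addition, one store, no matter how long the array is.

```python
def add_first_two(a):
    # Two indexed reads, one addition, one assignment: a fixed number of
    # primitive operations regardless of len(a), hence f(n) = 1.
    v = a[0] + a[1]
    return v

print(add_first_two([7, 5]))               # 12
print(add_first_two(list(range(10**6))))   # 1 -- same work despite a huge list
```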
4. Below is a C program

int i;
for ( i = 0; i < n; ++i ) {
f( n );
}
This is a program that calls a function within a loop; if we know the number of instructions the called function performs, it's easy to determine the number of instructions of the whole program. Explain briefly the number of instructions of the whole program asymptotically.
Answer: f( n ) = n^2, assuming the called function itself performs n instructions: the loop calls the function exactly n times, contributing the first factor of n, and each call contributes the second.
5. List down any four common complexity classes with their algorithm time complexity. Hint: an algorithm with Θ( n ) is of complexity n.
Θ( 1 ) algorithm is a constant-time algorithm,
Θ( n ) is linear,
Θ( n^2 ) is quadratic and
Θ( log( n ) ) is logarithmic.
6. Programs with a bigger Θ run slower than programs with a smaller Θ. Explain.
Answer: For sufficiently large inputs, a program whose running time has a larger Θ (a faster-growing complexity class, typically from more or deeper loops) must execute more operations than one with a smaller Θ, which may even be a constant-time algorithm with no loop at all; hence, asymptotically, the bigger-Θ program runs slower.
7. Define a recurrence in algorithm.
A recurrence is an equation or inequality that describes a function in terms of
its value on smaller inputs. When the subproblems are large enough to solve
recursively, we call that the recursive case. Once the subproblems become
small enough that we no longer recurse, we say that the recursion “bottoms
out” and that we have gotten down to the base case. Sometimes, in addition to
subproblems that are smaller instances of the same problem, we have to solve
subproblems that are not quite the same as the original problem. We consider
solving such subproblems as part of the combine step. Recall that in divide-and-conquer, we solve a problem recursively, applying three steps at each level of the recursion: divide, conquer and combine. Recurrences can take many forms. For example, a recursive algorithm might divide subproblems into unequal sizes, such as a 2/3, 1/3 split.
1. Explain by example a recurrence relation
Answer: It's the recursive part of a recursive definition of either a number sequence or an integer function. For example, the recursive definition of the Fibonacci sequence {f_n} = 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, … is as follows:

INITIALIZE: f_0 = 0, f_1 = 1

RECURSE: f_n = f_{n-1} + f_{n-2} for n > 1. The RECURSE part, f_n = f_{n-1} + f_{n-2} for n > 1, is the recurrence relation: an equation that expresses each term in terms of lower terms.

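The definition above can be sketched directly in Python as an iterative rendering of the INITIALIZE/RECURSE steps (the function name fib is our own):

```python
def fib(n):
    """Return f_n by iterating the recurrence f_n = f_{n-1} + f_{n-2}."""
    f_prev, f_curr = 0, 1                            # INITIALIZE: f_0 = 0, f_1 = 1
    for _ in range(n):
        f_prev, f_curr = f_curr, f_prev + f_curr     # RECURSE: one step up the sequence
    return f_prev

print([fib(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Iterating avoids the exponential blow-up a naive recursive translation of the same recurrence would have.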
2. Explain linear recurrence relations with constant coefficients.


Answer: A recurrence relation is said to be linear if a_n is a linear combination of the previous terms plus a function of n, i.e. no squares, cubes or other complicated functions of the previous a_i can occur. If in addition all the coefficients are constants, then the recurrence relation is said to have constant coefficients.

3. Which of the following are linear recurrences with constant coefficients?

1. a_n = 2a_{n-1}
2. a_n = 2a_{n-1} + 2^{n-3} - a_{n-3}
3. a_n = a_{n-1}^2
4. Partition function:

Answers
1. a_n = 2a_{n-1}: YES.
2. a_n = 2a_{n-1} + 2^{n-3} - a_{n-3}: YES (2^{n-3} is just a function of n).
3. a_n = a_{n-1}^2: NO. Squaring is not a linear operation. Similarly, a_n = a_{n-1}a_{n-2} and a_n = cos(a_{n-2}) are non-linear.
4. Partition function: NO. It is linear, but the coefficients are not constant, as C( n-1, n-1-i ) is a non-constant function of n.

4. Discuss three methods for solving recurrence problems in algorithms.
i. In the substitution method, we guess a bound and then use
mathematical induction to prove our guess correct.
ii. The recursion-tree method converts the recurrence into a tree whose
nodes represent the costs incurred at various levels of the recursion. We
use techniques for bounding summations to solve the recurrence.
iii. The master method provides bounds for recurrences of the form

T( n ) = aT( n/b ) + f( n )

where a >= 1 and b > 1 are constants and f( n ) is a given function. Such recurrences arise frequently. A recurrence of this form characterizes a divide-and-conquer algorithm that creates a subproblems, each of which is 1/b the size of the original problem, and in which the divide and combine steps together take f( n ) time.

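As a sanity check of this form (our own example, matching merge sort's recurrence with a = b = 2 and f(n) = n), evaluating T(n) = 2T(n/2) + n with T(1) = 1 for powers of two reproduces the Θ(n lg n) growth the master method predicts; here the exact closed form is n lg n + n.

```python
import math

def T(n):
    """Evaluate T(n) = 2T(n/2) + n with T(1) = 1, for n a power of two."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

# The recursion matches the closed form n*lg(n) + n at every power of two
for n in [2, 8, 64, 1024]:
    assert T(n) == n * math.log2(n) + n

print(T(1024))  # 11264 = 1024*10 + 1024
```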
5. Discuss divide-and-conquer approach in algorithm.


Many useful algorithms are recursive in structure: to solve a given problem,
they call themselves recursively one or more times to deal with closely related
sub problems. These algorithms typically follow a divide-and-conquer
approach: they break the problem into several subproblems that are similar to
the original problem but smaller in size, solve the subproblems recursively,
and then combine these solutions to create a solution to the original problem.
6. What is a greedy algorithm?
Like dynamic-programming algorithms, greedy algorithms typically apply to optimization problems in which we make a set of choices in order to arrive at an optimal solution. The idea of a greedy algorithm is to make each choice in a locally optimal manner.
1. Discuss the divide-and-conquer paradigm and an algorithm that employs the approach.
The divide-and-conquer paradigm involves three steps at each level of the
recursion:
i. Divide the problem into a number of sub problems that are smaller
instances of the same problem.
ii. Conquer the sub problems by solving them recursively. If the sub
problem sizes are small enough, however, just solve the sub problems in
a straight forward manner.
iii. Combine the solutions to the subproblems into the solution for the
original problem.
The merge sort algorithm closely follows the divide-and-conquer paradigm.
Intuitively, it operates as follows.
Divide: Divide the n-element sequence to be sorted into two subsequences of
n/2 elements each.
Conquer: Sort the two subsequences recursively using merge sort.
Combine: Merge the two sorted subsequences to produce the sorted answer.
2. Briefly illustrate and explain the lines of the merge sort pseudocode algorithm.

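The pseudocode itself is not reproduced in the notes; a minimal Python rendering of the divide/conquer/combine steps (the names merge and merge_sort are our own, and this version returns a new list rather than sorting in place with sentinels as CLRS's MERGE does) might look like:

```python
def merge(left, right):
    """Combine step: merge two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # take the smaller front element
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]    # append whichever tail remains

def merge_sort(a):
    if len(a) <= 1:                      # recursion bottoms out at length 1
        return a
    mid = len(a) // 2                    # Divide: split into two halves
    return merge(merge_sort(a[:mid]),    # Conquer: sort each half recursively
                 merge_sort(a[mid:]))    # Combine: merge the sorted halves

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]
```

Each line maps onto a step of the paradigm: the slice computes the divide, the two recursive calls are the conquer, and merge is the combine.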
3. Illustrate operations of merge sort algorithms with a diagram

4. Define the meaning of “bottoms out” in the merge sort algorithm


The recursion "bottoms out" when the sequence to be sorted has length 1, in which case there is no work to be done, since every sequence of length 1 is already in sorted order.
5. Carry out the Analysis of merge sort algorithms
We reason as follows to set up the recurrence for T (n), the worst-case
running time of merge sort on n numbers. Merge sort on just one element
takes constant time. When we have n > 1 elements, we break down the running
time as follows.
Divide: The divide step just computes the middle of the subarray, which takes
constant time. Thus, D(n)= Θ( 1 )
Conquer: We recursively solve two subproblems, each of size n/2, which contributes 2T(n/2) to the running time.
Combine: We noted that the MERGE procedure on an n-element subarray takes time Θ( n ), and so C(n) = Θ( n ).
When we add the functions D(n) and C(n) for the merge sort analysis, we are adding a function that is Θ( 1 ) and a function that is Θ( n ). This sum is a linear function of n, that is, Θ( n ). Adding it to the 2T(n/2) term from the "conquer" step gives the recurrence for the worst-case running time T(n) of merge sort:

T( n ) = Θ( 1 ) if n = 1, and
T( n ) = 2T( n/2 ) + Θ( n ) if n > 1.