
Spring 2013

Masters in Computer Application (MCA) - Semester 4


MCA4040 - ANALYSIS AND DESIGN OF ALGORITHM
(Book ID: B1248)
(60 Marks)
1. Explain time complexity and space complexity of an algorithm.
Time complexity is the amount of computer time an algorithm requires to run to
completion. In computer science, the time complexity of an algorithm quantifies the
amount of time taken by an algorithm to run as a function of the length of the string
representing the input. The time complexity of an algorithm is commonly expressed
using big O notation, which excludes coefficients and lower-order terms. When
expressed this way, the time complexity is said to be described asymptotically, i.e., as
the input size goes to infinity. For example, if the time required by an algorithm on all
inputs of size n is at most 5n³ + 3n, the asymptotic time complexity is O(n³).
Time complexity is commonly estimated by counting the number of elementary
operations performed by the algorithm, where an elementary operation takes a fixed
amount of time to perform. Thus the amount of time taken and the number of
elementary operations performed by the algorithm differ by at most a constant factor.
Since an algorithm's performance time may vary with different inputs of the same
size, one commonly uses the worst-case time complexity of an algorithm, denoted as
T(n), which is defined as the maximum amount of time taken on any input of size n.
Time complexities are classified by the nature of the function T(n). For instance, an
algorithm with T(n) = O(n) is called a linear-time algorithm, and an algorithm with
T(n) = O(2ⁿ) is said to be an exponential-time algorithm.
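To make the counting of elementary operations concrete, here is a small Python
sketch (added for illustration; the function name find_max and the choice of a
comparison as the elementary operation are assumptions, not part of the original
text):

def find_max(items):
    """Return the largest element, counting elementary operations."""
    comparisons = 0              # one comparison is our 'elementary operation'
    largest = items[0]
    for value in items[1:]:
        comparisons += 1
        if value > largest:
            largest = value
    return largest, comparisons

# For an input of size n the loop runs n - 1 times, so the operation
# count is n - 1, i.e. T(n) = O(n): a linear-time algorithm.
print(find_max([3, 1, 4, 1, 5, 9, 2, 6]))   # (9, 7)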
Space complexity of an algorithm is the amount of memory it needs to run to
completion. Some of the reasons for studying the space complexity of an algorithm
or program are:
1. If the program is to run on a multi-user system, it may be required to specify the
amount of memory to be allocated to the program.
2. We may be interested to know in advance whether sufficient memory is
available to run the program.
3. There may be several possible solutions with different space requirements.
4. Space complexity can be used to estimate the size of the largest problem that a
program can solve.
2. What do you mean by asymptotic notations? Explain three major asymptotic
notations in detail.
Asymptotic complexity is a way of expressing the main component of the cost
of an algorithm, using idealized units of computational work. Consider, for example,
the algorithm for sorting a deck of cards, which proceeds by repeatedly searching
through the deck for the lowest card. The asymptotic complexity of this algorithm is
the square of the number of cards in the deck. This quadratic behavior is the main
term in the complexity formula; it says that if you double the size of the deck, the
work is roughly quadrupled.
Three major asymptotic notations (together with the related little-o and
little-omega notations) are:
The O Notation
The O (pronounced: big-oh) is the formal method of expressing the upper
bound of an algorithm's running time. It is a measure of the longest amount of time
the algorithm could possibly take to complete. For non-negative functions f(n) and
g(n), if there exists an integer n0 and a constant c > 0 such that for all integers
n ≥ n0, f(n) ≤ cg(n), then f(n) is big-oh of g(n). This is denoted as "f(n) = O(g(n))".
Big-Omega Notation
For non-negative functions f(n) and g(n), if there exists an integer n0 and a
constant c > 0 such that for all integers n ≥ n0, f(n) ≥ cg(n), then f(n) is omega of
g(n). This is denoted as "f(n) = Ω(g(n))".
This is almost the same definition as big-oh, except that here "f(n) ≥ cg(n)"; this
makes g(n) a lower-bound function instead of an upper-bound function. It describes
the best that can happen for a given data size.
Theta Notation
For non-negative functions f(n) and g(n), f(n) is theta of g(n) if and only if
f(n) = O(g(n)) and f(n) = Ω(g(n)). This is denoted as "f(n) = Θ(g(n))".
This says that the function f(n) is bounded both from the top and from the
bottom by the same function, g(n). The theta notation is written with the symbol Θ.
Little-o Notation
For non-negative functions f(n) and g(n), f(n) is little-o of g(n) if and only if
f(n) = O(g(n)) but f(n) ≠ Θ(g(n)). This is denoted as "f(n) = o(g(n))".
Little-Omega Notation
For non-negative functions f(n) and g(n), f(n) is little-omega of g(n) if and only
if f(n) = Ω(g(n)) but f(n) ≠ Θ(g(n)). This is denoted as "f(n) = ω(g(n))".
Much like little-o is to big-oh, little-omega is the counterpart of big-omega: g(n) is a
loose lower bound of the function f(n); it bounds from the bottom, but not from the top.
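As a small worked illustration of these definitions (an added sketch; the constants
c1 = 3 and c2 = 5 are one valid choice, not the only one), the Python check below
confirms numerically that f(n) = 3n² + 2n is Θ(n²), since for n ≥ 1 it is bounded
above and below by constant multiples of n²:

# f(n) = 3n^2 + 2n is Theta(n^2): for n >= 1 it satisfies
# 3*n**2 <= f(n) <= 5*n**2, i.e. c1 = 3 and c2 = 5 work.
def f(n):
    return 3 * n**2 + 2 * n

for n in [1, 10, 100, 1000]:
    assert 3 * n**2 <= f(n) <= 5 * n**2   # both bounds hold
    print(n, f(n) / n**2)                  # ratio stays between 3 and 5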
3. Write short notes on:
a) Selection sort algorithm
b) Bubble sort algorithm.
Selection Sort
In computer science, selection sort is a sorting algorithm, specifically an
in-place comparison sort. It has O(n²) time complexity, making it inefficient on large
lists, and it generally performs worse than the similar insertion sort. Selection sort is
noted for its simplicity, and it has performance advantages over more complicated
algorithms in certain situations, particularly where auxiliary memory is limited.
The algorithm divides the input list into two parts: the sublist of items already
sorted, which is built up from left to right at the front (left) of the list, and the sublist
of items remaining to be sorted that occupy the rest of the list. Initially, the sorted
sublist is empty and the unsorted sublist is the entire input list. The algorithm
proceeds by finding the smallest (or largest, depending on sorting order) element in
the unsorted sublist, exchanging it with the leftmost unsorted element (putting it in
sorted order), and moving the sublist boundary one element to the right.
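The following minimal Python sketch implements selection sort as described above,
maintaining a sorted sublist at the front and repeatedly swapping in the smallest
remaining element (the function name is illustrative):

def selection_sort(items):
    """Sort a list in place by repeatedly selecting the smallest
    remaining element and swapping it to the front."""
    n = len(items)
    for i in range(n - 1):
        smallest = i
        for j in range(i + 1, n):          # scan the unsorted sublist
            if items[j] < items[smallest]:
                smallest = j
        items[i], items[smallest] = items[smallest], items[i]
    return items

print(selection_sort([29, 10, 14, 37, 13]))   # [10, 13, 14, 29, 37]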
Bubble Sort
Bubble sort, sometimes incorrectly referred to as sinking sort, is a simple sorting
algorithm that works by repeatedly stepping through the list to be sorted, comparing
each pair of adjacent items and swapping them if they are in the wrong order. The
pass through the list is repeated until no swaps are needed, which indicates that the
list is sorted. The algorithm gets its name from the way smaller elements "bubble" to
the top of the list. Because it only uses comparisons to operate on elements, it is a
comparison sort. Although the algorithm is simple, most other sorting algorithms
are more efficient for large lists.
Performance
Bubble sort has worst-case and average complexity both O(n²), where n is the
number of items being sorted. There exist many sorting algorithms with substantially
better worst-case or average complexity of O(n log n). Even other O(n²) sorting
algorithms, such as insertion sort, tend to have better performance than bubble sort.
Therefore, bubble sort is not a practical sorting algorithm when n is large.
The one significant advantage that bubble sort has over most other
implementations, even quicksort, though not insertion sort, is that the ability to detect
an already-sorted list is built into the algorithm. The performance of bubble sort
on an already-sorted list (the best case) is O(n).
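A minimal Python sketch of bubble sort follows, including the early-exit check that
detects an already-sorted list and yields the O(n) best case mentioned above (names
are illustrative):

def bubble_sort(items):
    """Sort a list in place; stop early once a pass makes no swaps,
    which gives the O(n) best case on already-sorted input."""
    n = len(items)
    for end in range(n - 1, 0, -1):
        swapped = False
        for i in range(end):               # compare adjacent pairs
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:                    # no swaps: list is sorted
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))        # [1, 2, 4, 5, 8]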
4. Explain depth-first search and breadth-first search algorithms.
Depth-first algorithm
Depth-first search selects a source vertex s in the graph and paints it as
"visited." Now the vertex s becomes our current vertex. Then, we traverse the graph
by considering an arbitrary edge (u, v) from the current vertex u. If the edge (u, v)
takes us to a painted vertex v, then we back down to the vertex u. On the other hand,
if edge (u, v) takes us to an unpainted vertex, then we paint the vertex v, make it our
current vertex, and repeat the above computation. Sooner or later, we will get to a
"dead end," meaning all the edges from our current vertex u take us to painted
vertices. This is a deadlock. To get out of this, we back down along the edge that
brought us here to vertex u and go back to a previously painted vertex v. We again
make the vertex v our current vertex and start repeating the above computation for
any edge that we missed earlier. If all of v's edges take us to painted vertices, then we
again back down to the vertex we came from to get to vertex v, and repeat the
computation at that vertex. Thus, we continue to back down the path that we have
traced so far until we find a vertex that has yet-unexplored edges, at which point we
take one such edge and continue the traversal.
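Below is a minimal recursive Python sketch of depth-first search, assuming (for
illustration only) that the graph is given as an adjacency-list dictionary; returning
from the recursion corresponds to "backing down" an edge:

def depth_first_search(graph, source):
    """Recursive DFS over an adjacency-list dict such as
    {'a': ['b', 'c'], ...}. Returns vertices in the order
    they are painted 'visited'."""
    visited = []

    def explore(u):
        visited.append(u)                  # paint u as visited
        for v in graph[u]:
            if v not in visited:           # only take edges to unpainted vertices
                explore(v)                 # backing down happens on return

    explore(source)
    return visited

graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(depth_first_search(graph, 'a'))      # ['a', 'b', 'd', 'c']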
Breadth-first algorithm
Breadth-first search starts at a given vertex s, which is at level 0. In the first
stage, we visit all the vertices that are at a distance of one edge away. As we visit
them, we paint them as "visited"; these vertices adjacent to the start vertex s are
placed into level 1. In the second stage, we visit all the new vertices we can reach at
a distance of two edges away from the source vertex s. These new vertices, which
are adjacent to level 1 vertices and not previously assigned to a level, are placed into
level 2, and so on. The BFS traversal terminates when every vertex has been visited.
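A minimal Python sketch of breadth-first search follows, again assuming an
adjacency-list dictionary (an illustrative assumption); it records the level, i.e. the
edge distance from s, at which each vertex is first visited:

from collections import deque

def breadth_first_search(graph, source):
    """Level-by-level BFS over an adjacency-list dict; returns a
    mapping from each reachable vertex to its level (edge distance
    from the source)."""
    level = {source: 0}                    # source is at level 0
    queue = deque([source])
    while queue:
        u = queue.popleft()                # fully explore u
        for v in graph[u]:
            if v not in level:             # discovered for the first time
                level[v] = level[u] + 1
                queue.append(v)
    return level

graph = {'s': ['a', 'b'], 'a': ['c'], 'b': ['c'], 'c': []}
print(breadth_first_search(graph, 's'))    # {'s': 0, 'a': 1, 'b': 1, 'c': 2}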
To keep track of progress, breadth-first search colors each vertex. Each vertex
of the graph is in one of three states:
1. Undiscovered;
2. Discovered but not fully explored; and
3. Fully explored.
5. Differentiate between bottom-up and top-down heap construction.
Top-down design proceeds from the abstract entity to get to the concrete design.
Bottom-up design proceeds from the concrete design to get to the abstract entity.
Top-down design is most often used in designing brand new systems, while bottom-up
design is sometimes used when one is reverse engineering a design, i.e., when one is
trying to figure out what somebody else designed in an existing system. Bottom-up
design begins the design with the lowest-level modules or subsystems, and progresses
upward to the main program, module, or subsystem. With bottom-up design, a
structure chart is necessary to determine the order of execution, and the development
of drivers is necessary to complete the bottom-up approach. Top-down design, on the
other hand, begins the design with the main or top-level module, and progresses
downward to the lowest-level modules or subsystems. Real life is sometimes a
combination of top-down design and bottom-up design. For instance, data modeling
sessions tend to be iterative, bouncing back and forth between top-down and
bottom-up modes, as the need arises.
6. Explain the two types of collision resolution in hashing.
The two types of collision resolution in hashing are:
Open Addressing
In an open addressing hashing system, if a collision occurs, alternative locations are
tried until an empty location is found. The process starts with examining the hash location
of the identifier. If it is found occupied, some other calculated slots are examined in
succession until an empty slot is found. The same process is carried out for retrieval
(a linear-probing sketch appears after the feature list below).
Features:
a) All the identifiers are stored in the hash table itself.
b) Each slot contains an identifier or is empty.
c) Open addressing requires a bigger table.
d) Three techniques are commonly used for open addressing: linear probing,
quadratic probing, and rehashing.
e) There is a possibility of the table becoming full.
f) The load factor can never exceed 1.
g) Probing is used for insertion.
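The sketch below illustrates linear probing, the simplest of the three open-addressing
techniques listed above (a minimal illustration; the table size, use of Python's built-in
hash, and the class name are assumptions, and deletion and table resizing are omitted
for brevity):

class LinearProbingTable:
    """Minimal open-addressing hash table using linear probing."""

    def __init__(self, size=11):
        self.slots = [None] * size         # identifiers live in the table itself

    def _probe(self, key):
        i = hash(key) % len(self.slots)    # home slot
        while self.slots[i] is not None and self.slots[i] != key:
            i = (i + 1) % len(self.slots)  # try the next slot on collision
        return i                           # (a full table is ignored here)

    def insert(self, key):
        self.slots[self._probe(key)] = key

    def contains(self, key):
        return self.slots[self._probe(key)] == key

t = LinearProbingTable()
t.insert("apple"); t.insert("plum")
print(t.contains("apple"), t.contains("pear"))   # True False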
Chaining
The main problem in using linear probing and its variants is that it results in a
series of comparisons, which is not a very big improvement over the linear search
method. In chaining, when an overflow occurs, the identifier X is placed in the next
vacant slot (like linear probing), but it is chained to the identifier occupying its bucket
so that X can be easily located. Chaining can be done if another field is provided in
the hash table, which stores the hash address of the next identifier having the same
hash address, i.e., its synonym. Hence identifiers can be easily located. Chaining has
three variants:
1. Chaining without replacement
2. Chaining with replacement
3. Chaining using linked lists

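As an added illustration of variant 3 above (chaining using linked lists), the following
minimal Python sketch uses Python lists to stand in for the linked chains of synonyms
(the class name and table size are illustrative assumptions):

class ChainedTable:
    """Minimal separate-chaining hash table; each bucket holds a
    list acting as the chain of identifiers that share a hash
    address (synonyms)."""

    def __init__(self, size=11):
        self.buckets = [[] for _ in range(size)]

    def insert(self, key):
        chain = self.buckets[hash(key) % len(self.buckets)]
        if key not in chain:               # synonyms share one chain
            chain.append(key)

    def contains(self, key):
        return key in self.buckets[hash(key) % len(self.buckets)]

t = ChainedTable()
t.insert("apple"); t.insert("plum")
print(t.contains("plum"), t.contains("pear"))    # True False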