3 Growth of Functions

The order of growth of the running time of an algorithm, defined in Chapter 2, gives a simple characterization of the algorithm's efficiency and also allows us to compare the relative performance of alternative algorithms. Once the input size n becomes large enough, merge sort, with its Θ(n lg n) worst-case running time, beats insertion sort, whose worst-case running time is Θ(n²). Although we can sometimes determine the exact running time of an algorithm, as we did for insertion sort in Chapter 2, the extra precision is not usually worth the effort of computing it. For large enough inputs, the multiplicative constants and lower-order terms of an exact running time are dominated by the effects of the input size itself.

When we look at input sizes large enough to make only the order of growth of the running time relevant, we are studying the asymptotic efficiency of algorithms. That is, we are concerned with how the running time of an algorithm increases with the size of the input in the limit, as the size of the input increases without bound. Usually, an algorithm that is asymptotically more efficient will be the best choice for all but very small inputs.

This chapter gives several standard methods for simplifying the asymptotic analysis of algorithms. The next section begins by defining several types of "asymptotic notation," of which we have already seen an example in Θ-notation. We then present several notational conventions used throughout this book, and finally we review the behavior of functions that commonly arise in the analysis of algorithms.

3.1 Asymptotic notation

The notations we use to describe the asymptotic running time of an algorithm are defined in terms of functions whose domains are the set of natural numbers N = {0, 1, 2, ...}. Such notations are convenient for describing the worst-case running-time function T(n), which usually is defined only on integer input sizes. We sometimes find it convenient, however, to abuse asymptotic notation in a variety of ways. For example, we might extend the notation to the domain of real numbers or, alternatively, restrict it to a subset of the natural numbers. We should make sure, however, to understand the precise meaning of the notation so that when we abuse it, we do not misuse it. This section defines the basic asymptotic notations and also introduces some common abuses.

Asymptotic notation, functions, and running times

We will use asymptotic notation primarily to describe the running times of algorithms, as when we wrote that insertion sort's worst-case running time is Θ(n²). Asymptotic notation actually applies to functions, however. Recall that we characterized insertion sort's worst-case running time as an² + bn + c, for some constants a, b, and c. By writing that insertion sort's running time is Θ(n²), we abstracted away some details of this function. Because asymptotic notation applies to functions, what we were writing as Θ(n²) was the function an² + bn + c, which in that case happened to characterize the worst-case running time of insertion sort.

In this book, the functions to which we apply asymptotic notation will usually characterize the running times of algorithms. But asymptotic notation can apply to functions that characterize some other aspect of algorithms (the amount of space they use, for example), or even to functions that have nothing whatsoever to do with algorithms.
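To see concretely why constants and lower-order terms wash out for large inputs, here is a minimal Python sketch (our own illustration, not from the text; the values of a, b, and c are made up) comparing an exact cost model an² + bn + c against its leading term alone.

```python
# Compare an exact running-time model a*n^2 + b*n + c against its
# highest-order term a*n^2. Constants are illustrative, not measured.
a, b, c = 2.0, 100.0, 1000.0

def exact_cost(n: int) -> float:
    """Exact cost model a*n^2 + b*n + c."""
    return a * n * n + b * n + c

for n in [10, 100, 1_000, 10_000, 100_000]:
    ratio = exact_cost(n) / (a * n * n)
    print(f"n = {n:>7}: exact/leading-term = {ratio:.4f}")
# The ratio tends to 1: for large n, only the order of growth matters.
```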
Even when we use asymptotic notation to apply to the running time of an algorithm, we need to understand which running time we mean. Sometimes we are interested in the worst-case running time. Often, however, we wish to characterize the running time no matter what the input. In other words, we often wish to make a blanket statement that covers all inputs, not just the worst case. We shall see asymptotic notations that are well suited to characterizing running times no matter what the input.

Θ-notation

In Chapter 2, we found that the worst-case running time of insertion sort is T(n) = Θ(n²). Let us define what this notation means. For a given function g(n), we denote by Θ(g(n)) the set of functions

Θ(g(n)) = { f(n) : there exist positive constants c₁, c₂, and n₀ such that 0 ≤ c₁g(n) ≤ f(n) ≤ c₂g(n) for all n ≥ n₀ }.¹

¹ Within set notation, a colon should be read as "such that."

[Figure 3.1 Graphic examples of the Θ, O, and Ω notations. In each part, the value of n₀ shown is the minimum possible value; any greater value would also work. (a) Θ-notation bounds a function to within constant factors: for all n at and to the right of n₀, the value of f(n) lies between c₁g(n) and c₂g(n) inclusive. (b) O-notation gives an upper bound for a function to within a constant factor: for all n at and to the right of n₀, the value of f(n) lies on or below cg(n). (c) Ω-notation gives a lower bound for a function to within a constant factor: for all n at and to the right of n₀, the value of f(n) lies on or above cg(n).]

A function f(n) belongs to the set Θ(g(n)) if there exist positive constants c₁ and c₂ such that it can be "sandwiched" between c₁g(n) and c₂g(n), for sufficiently large n. Because Θ(g(n)) is a set, we could write "f(n) ∈ Θ(g(n))" to indicate that f(n) is a member of Θ(g(n)). Instead, we will usually write "f(n) = Θ(g(n))" to express the same notion. You might be confused because we abuse equality in this way, but we shall see later in this section that doing so has its advantages.

Figure 3.1(a) gives an intuitive picture of functions f(n) and g(n), where f(n) = Θ(g(n)). For all values of n at and to the right of n₀, the value of f(n) lies at or above c₁g(n) and at or below c₂g(n). In other words, for all n ≥ n₀, the function f(n) is equal to g(n) to within a constant factor. We say that g(n) is an asymptotically tight bound for f(n).

The definition of Θ(g(n)) requires that every member f(n) ∈ Θ(g(n)) be asymptotically nonnegative, that is, that f(n) be nonnegative whenever n is sufficiently large. (An asymptotically positive function is one that is positive for all sufficiently large n.) Consequently, the function g(n) itself must be asymptotically nonnegative, or else the set Θ(g(n)) is empty. We shall therefore assume that every function used within Θ-notation is asymptotically nonnegative. This assumption holds for the other asymptotic notations defined in this chapter as well.

In Chapter 2, we introduced an informal notion of Θ-notation that amounted to throwing away lower-order terms and ignoring the leading coefficient of the highest-order term. Let us briefly justify this intuition by using the formal definition to show that ½n² − 3n = Θ(n²). To do so, we must determine positive constants c₁, c₂, and n₀ such that

c₁n² ≤ ½n² − 3n ≤ c₂n²

for all n ≥ n₀. Dividing by n² yields

c₁ ≤ ½ − 3/n ≤ c₂ .

We can make the right-hand inequality hold for any value of n ≥ 1 by choosing any constant c₂ ≥ 1/2. Likewise, we can make the left-hand inequality hold for any value of n ≥ 7 by choosing any constant c₁ ≤ 1/14. Thus, by choosing c₁ = 1/14, c₂ = 1/2, and n₀ = 7, we can verify that ½n² − 3n = Θ(n²). Certainly, other choices for the constants exist, but the important thing is that some choice exists.
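A quick numerical spot-check of those particular witnesses (our own sketch; a finite scan is a sanity check, not a proof — the proof is the inequality above):

```python
# Numerically spot-check the witnesses c1 = 1/14, c2 = 1/2, n0 = 7
# for the claim (1/2)n^2 - 3n = Theta(n^2). A finite check is not a
# proof, but it catches wrongly chosen constants quickly.
c1, c2, n0 = 1 / 14, 1 / 2, 7

def f(n: int) -> float:
    return 0.5 * n * n - 3 * n

def g(n: int) -> float:
    return float(n * n)

assert all(0 <= c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10_000))
print("sandwich holds for all n in [7, 10000)")
```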
Note that these constants depend on the function ½n² − 3n; a different function belonging to Θ(n²) would usually require different constants.

We can also use the formal definition to verify that 6n³ ≠ Θ(n²). Suppose for the purpose of contradiction that c₂ and n₀ exist such that 6n³ ≤ c₂n² for all n ≥ n₀. But then dividing by n² yields n ≤ c₂/6, which cannot possibly hold for arbitrarily large n, since c₂ is constant.

Intuitively, the lower-order terms of an asymptotically positive function can be ignored in determining asymptotically tight bounds because they are insignificant for large n. When n is large, even a tiny fraction of the highest-order term suffices to dominate the lower-order terms. Thus, setting c₁ to a value that is slightly smaller than the coefficient of the highest-order term and setting c₂ to a value that is slightly larger permits the inequalities in the definition of Θ-notation to be satisfied. The coefficient of the highest-order term can likewise be ignored, since it only changes c₁ and c₂ by a constant factor equal to the coefficient.

As an example, consider any quadratic function f(n) = an² + bn + c, where a, b, and c are constants and a > 0. Throwing away the lower-order terms and ignoring the constant yields f(n) = Θ(n²). Formally, to show the same thing, we take the constants c₁ = a/4, c₂ = 7a/4, and n₀ = 2·max(|b|/a, √(|c|/a)). You may verify that 0 ≤ c₁n² ≤ an² + bn + c ≤ c₂n² for all n ≥ n₀. In general, for any polynomial p(n) = Σᵢ₌₀ᵈ aᵢnⁱ, where the aᵢ are constants and a_d > 0, we have p(n) = Θ(nᵈ) (see Problem 3-1).

Since any constant is a degree-0 polynomial, we can express any constant function as Θ(n⁰), or Θ(1). This latter notation is a minor abuse, however, because the expression does not indicate what variable is tending to infinity.² We shall often use the notation Θ(1) to mean either a constant or a constant function with respect to some variable.

² The real problem is that our ordinary notation for functions does not distinguish functions from values. In λ-calculus, the parameters to a function are clearly specified: the function n² could be written as λn.n², or even λr.r². Adopting a more rigorous notation, however, would complicate algebraic manipulations, and so we choose to tolerate the abuse.

O-notation

The Θ-notation asymptotically bounds a function from above and below. When we have only an asymptotic upper bound, we use O-notation. For a given function g(n), we denote by O(g(n)) (pronounced "big-oh of g of n" or sometimes just "oh of g of n") the set of functions

O(g(n)) = { f(n) : there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n₀ }.

We use O-notation to give an upper bound on a function, to within a constant factor. Figure 3.1(b) shows the intuition behind O-notation. For all values n at and to the right of n₀, the value of the function f(n) is on or below cg(n).

We write f(n) = O(g(n)) to indicate that a function f(n) is a member of the set O(g(n)). Note that f(n) = Θ(g(n)) implies f(n) = O(g(n)), since Θ-notation is a stronger notion than O-notation. Written set-theoretically, we have Θ(g(n)) ⊆ O(g(n)). Thus, our proof that any quadratic function an² + bn + c, where a > 0, is in Θ(n²) also shows that any such quadratic function is in O(n²). What may be more surprising is that when a > 0, any linear function an + b is in O(n²), which is easily verified by taking c = a + |b| and n₀ = max(1, −b/a).

If you have seen O-notation before, you might find it strange that we should write, for example, n = O(n²). In the literature, we sometimes find O-notation informally describing asymptotically tight bounds, that is, what we have defined using Θ-notation. In this book, however, when we write f(n) = O(g(n)), we are merely claiming that some constant multiple of g(n) is an asymptotic upper bound on f(n), with no claim about how tight an upper bound it is. Distinguishing asymptotic upper bounds from asymptotically tight bounds is standard in the algorithms literature.
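A quick numerical spot-check of the claim above that an + b = O(n²) with the witnesses c = a + |b| and n₀ = max(1, −b/a) (our own sketch; the sample values of a and b are arbitrary):

```python
# Spot-check that a*n + b = O(n^2) using the witnesses c = a + |b|
# and n0 = max(1, -b/a). A finite scan is an illustration, not a proof;
# the sample values of a and b are arbitrary.
import math

a, b = 4.0, -12.0
c = a + abs(b)                      # c = 16
n0 = max(1, math.ceil(-b / a))      # n0 = 3

assert all(0 <= a * n + b <= c * n * n for n in range(n0, 100_000))
print(f"0 <= {a}n + {b} <= {c}n^2 holds for all n in [{n0}, 100000)")
```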
Using O-notation, we can often describe the running time of an algorithm merely by inspecting the algorithm's overall structure. For example, the doubly nested loop structure of the insertion sort algorithm from Chapter 2 immediately yields an O(n²) upper bound on the worst-case running time: the cost of each iteration of the inner loop is bounded from above by O(1) (constant), the indices i and j are both at most n, and the inner loop is executed at most once for each of the n² pairs of values for i and j.

Since O-notation describes an upper bound, when we use it to bound the worst-case running time of an algorithm, we have a bound on the running time of the algorithm on every input — the blanket statement we discussed earlier. Thus, the O(n²) bound on the worst-case running time of insertion sort also applies to its running time on every input. The Θ(n²) bound on the worst-case running time of insertion sort, however, does not imply a Θ(n²) bound on the running time of insertion sort on every input. For example, we saw in Chapter 2 that when the input is already sorted, insertion sort runs in Θ(n) time.

Technically, it is an abuse to say that the running time of insertion sort is O(n²), since for a given n, the actual running time varies, depending on the particular input of size n. When we say "the running time is O(n²)," we mean that there is a function f(n) that is O(n²) such that for any value of n, no matter what particular input of size n is chosen, the running time on that input is bounded from above by the value f(n). Equivalently, we mean that the worst-case running time is O(n²).

Ω-notation

Just as O-notation provides an asymptotic upper bound on a function, Ω-notation provides an asymptotic lower bound. For a given function g(n), we denote by Ω(g(n)) (pronounced "big-omega of g of n" or sometimes just "omega of g of n") the set of functions

Ω(g(n)) = { f(n) : there exist positive constants c and n₀ such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n₀ }.

Figure 3.1(c) shows the intuition behind Ω-notation. For all values n at or to the right of n₀, the value of f(n) is on or above cg(n).

From the definitions of the asymptotic notations we have seen thus far, it is easy to prove the following important theorem (see Exercise 3.1-5).

Theorem 3.1
For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).

As an example of the application of this theorem, our proof that an² + bn + c = Θ(n²) for any constants a, b, and c, where a > 0, immediately implies that an² + bn + c = Ω(n²) and an² + bn + c = O(n²). In practice, rather than using Theorem 3.1 to obtain asymptotic upper and lower bounds from asymptotically tight bounds, as we did for this example, we usually use it to prove asymptotically tight bounds from asymptotic upper and lower bounds.

When we say that the running time (no modifier) of an algorithm is Ω(g(n)), we mean that no matter what particular input of size n is chosen for each value of n, the running time on that input is at least a constant times g(n), for sufficiently large n. Equivalently, we are giving a lower bound on the best-case running time of an algorithm. For example, the best-case running time of insertion sort is Ω(n), which implies that the running time of insertion sort is Ω(n).
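To connect these bounds to something executable, here is a small Python sketch (our own rendering, not the book's pseudocode; the comparison-counting instrumentation is our own addition) that counts inner-loop key comparisons of insertion sort on an already-sorted input versus a reverse-sorted one — roughly n in the best case and n²/2 in the worst.

```python
# Count inner-loop comparisons of insertion sort on a sorted input
# (best case, linear) versus a reverse-sorted input (worst case,
# quadratic). A sketch for intuition, not a formal proof of the bounds.
def insertion_sort_comparisons(a: list[int]) -> int:
    a = a[:]          # don't mutate the caller's list
    comparisons = 0
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0:
            comparisons += 1          # one comparison of key vs a[i]
            if a[i] <= key:
                break
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return comparisons

n = 1000
print(insertion_sort_comparisons(list(range(n))))        # ~n: 999
print(insertion_sort_comparisons(list(range(n, 0, -1)))) # ~n^2/2: 499500
```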
The running time of insertion sort therefore belongs to both Ω(n) and O(n²), since it falls anywhere between a linear function of n and a quadratic function of n. Moreover, these bounds are asymptotically as tight as possible: for instance, the running time of insertion sort is not Ω(n²), since there exists an input for which insertion sort runs in Θ(n) time (e.g., when the input is already sorted). It is not contradictory, however, to say that the worst-case running time of insertion sort is Ω(n²), since there exists an input that causes the algorithm to take Ω(n²) time.

Asymptotic notation in equations and inequalities

We have already seen how asymptotic notation can be used within mathematical formulas. For example, in introducing O-notation, we wrote "n = O(n²)." We might also write 2n² + 3n + 1 = 2n² + Θ(n). How do we interpret such formulas?

When the asymptotic notation stands alone (that is, not within a larger formula) on the right-hand side of an equation (or inequality), as in n = O(n²), we have already defined the equal sign to mean set membership: n ∈ O(n²). In general, however, when asymptotic notation appears in a formula, we interpret it as standing for some anonymous function that we do not care to name. For example, the formula 2n² + 3n + 1 = 2n² + Θ(n) means that 2n² + 3n + 1 = 2n² + f(n), where f(n) is some function in the set Θ(n). In this case, we let f(n) = 3n + 1, which indeed is in Θ(n).

Using asymptotic notation in this manner can help eliminate inessential detail and clutter in an equation. For example, in Chapter 2 we expressed the worst-case running time of merge sort as the recurrence T(n) = 2T(n/2) + Θ(n). If we are interested only in the asymptotic behavior of T(n), there is no point in specifying all the lower-order terms exactly; they are all understood to be included in the anonymous function denoted by the term Θ(n).

The number of anonymous functions in an expression is understood to be equal to the number of times the asymptotic notation appears. For example, in the expression

Σᵢ₌₁ⁿ O(i) ,

there is only a single anonymous function (a function of i). This expression is thus not the same as O(1) + O(2) + ⋯ + O(n), which doesn't really have a clean interpretation.

In some cases, asymptotic notation appears on the left-hand side of an equation, as in

2n² + Θ(n) = Θ(n²) .

We interpret such equations using the following rule: No matter how the anonymous functions are chosen on the left of the equal sign, there is a way to choose the anonymous functions on the right of the equal sign to make the equation valid. Thus, our example means that for any function f(n) ∈ Θ(n), there is some function g(n) ∈ Θ(n²) such that 2n² + f(n) = g(n) for all n. In other words, the right-hand side of an equation provides a coarser level of detail than the left-hand side.

We can chain together a number of such relationships, as in

2n² + 3n + 1 = 2n² + Θ(n) = Θ(n²) .

We can interpret each equation separately by the rules above. The first equation says that there is some function f(n) ∈ Θ(n) such that 2n² + 3n + 1 = 2n² + f(n) for all n. The second equation says that for any function g(n) ∈ Θ(n) (such as the f(n) just mentioned), there is some function h(n) ∈ Θ(n²) such that 2n² + g(n) = h(n) for all n. Note that this interpretation implies that 2n² + 3n + 1 = Θ(n²), which is what the chaining of equations intuitively gives us.
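Before moving on, a short worked derivation (our own, following the interpretation just given, and glossing over the finitely many terms below the threshold n₀, which contribute only a constant) makes the single-anonymous-function reading of Σᵢ₌₁ⁿ O(i) concrete: with one anonymous function f(i) ∈ O(i), a single constant c bounds every term at once.

```latex
% One anonymous function f(i), one constant c, bounds the whole sum:
\sum_{i=1}^{n} O(i) \;=\; \sum_{i=1}^{n} f(i)
                    \;\le\; \sum_{i=1}^{n} c\,i
                    \;=\; c \cdot \frac{n(n+1)}{2}
                    \;=\; O(n^2).
```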
o-notation

The asymptotic upper bound provided by O-notation may or may not be asymptotically tight. The bound 2n² = O(n²) is asymptotically tight, but the bound 2n = O(n²) is not. We use o-notation to denote an upper bound that is not asymptotically tight. We formally define o(g(n)) ("little-oh of g of n") as the set

o(g(n)) = { f(n) : for any positive constant c > 0, there exists a constant n₀ > 0 such that 0 ≤ f(n) < cg(n) for all n ≥ n₀ }.

For example, 2n = o(n²), but 2n² ≠ o(n²).

The definitions of O-notation and o-notation are similar. The main difference is that in f(n) = O(g(n)), the bound 0 ≤ f(n) ≤ cg(n) holds for some constant c > 0, but in f(n) = o(g(n)), the bound 0 ≤ f(n) < cg(n) holds for all constants c > 0. Intuitively, in o-notation, the function f(n) becomes insignificant relative to g(n) as n approaches infinity; that is,

lim_{n→∞} f(n)/g(n) = 0 .

Some authors use this limit as a definition of the o-notation; the definition in this book also restricts the anonymous functions to be asymptotically nonnegative.

ω-notation

By analogy, ω-notation is to Ω-notation as o-notation is to O-notation. We use ω-notation to denote a lower bound that is not asymptotically tight. One way to define it is by

f(n) ∈ ω(g(n)) if and only if g(n) ∈ o(f(n)).

Formally, however, we define ω(g(n)) ("little-omega of g of n") as the set

ω(g(n)) = { f(n) : for any positive constant c > 0, there exists a constant n₀ > 0 such that 0 ≤ cg(n) < f(n) for all n ≥ n₀ }.

For example, n²/2 = ω(n), but n²/2 ≠ ω(n²). The relation f(n) = ω(g(n)) implies that

lim_{n→∞} f(n)/g(n) = ∞ ,

if the limit exists. That is, f(n) becomes arbitrarily large relative to g(n) as n approaches infinity.

Comparing functions

Many of the relational properties of real numbers apply to asymptotic comparisons as well. For the following, assume that f(n) and g(n) are asymptotically positive.

Transitivity:
    f(n) = Θ(g(n)) and g(n) = Θ(h(n)) imply f(n) = Θ(h(n)),
    f(n) = O(g(n)) and g(n) = O(h(n)) imply f(n) = O(h(n)),
    f(n) = Ω(g(n)) and g(n) = Ω(h(n)) imply f(n) = Ω(h(n)),
    f(n) = o(g(n)) and g(n) = o(h(n)) imply f(n) = o(h(n)),
    f(n) = ω(g(n)) and g(n) = ω(h(n)) imply f(n) = ω(h(n)).

Reflexivity:
    f(n) = Θ(f(n)),
    f(n) = O(f(n)),
    f(n) = Ω(f(n)).

Symmetry:
    f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n)).

Transpose symmetry:
    f(n) = O(g(n)) if and only if g(n) = Ω(f(n)),
    f(n) = o(g(n)) if and only if g(n) = ω(f(n)).

Because these properties hold for asymptotic notations, we can draw an analogy between the asymptotic comparison of two functions f and g and the comparison of two real numbers a and b:

    f(n) = O(g(n)) is like a ≤ b,
    f(n) = Ω(g(n)) is like a ≥ b,
    f(n) = Θ(g(n)) is like a = b,
    f(n) = o(g(n)) is like a < b,
    f(n) = ω(g(n)) is like a > b.

We say that f(n) is asymptotically smaller than g(n) if f(n) = o(g(n)), and f(n) is asymptotically larger than g(n) if f(n) = ω(g(n)).

One property of real numbers, however, does not carry over to asymptotic notation:

Trichotomy: For any two real numbers a and b, exactly one of the following must hold: a < b, a = b, or a > b.

Although any two real numbers can be compared, not all functions are asymptotically comparable. That is, for two functions f(n) and g(n), it may be the case that neither f(n) = O(g(n)) nor f(n) = Ω(g(n)) holds. For example, we cannot compare the functions n and n^(1 + sin n) using asymptotic notation, since the value of the exponent in n^(1 + sin n) oscillates between 0 and 2, taking on all values in between.

Exercises

3.1-1
Let f(n) and g(n) be asymptotically nonnegative functions. Using the basic definition of Θ-notation, prove that max(f(n), g(n)) = Θ(f(n) + g(n)).

3.1-2
Show that for any real constants a and b, where b > 0, (n + a)^b = Θ(n^b).
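As a numerical warm-up for Exercise 3.1-2 (our own sketch, an illustration rather than a proof; the particular values of a and b are arbitrary), one can watch the ratio (n + a)^b / n^b approach 1, which suggests that constants c₁ < 1 < c₂ eventually sandwich (n + a)^b between c₁n^b and c₂n^b.

```python
# Numerical sanity check (not a proof) for Exercise 3.1-2: the ratio
# (n + a)^b / n^b tends to 1, consistent with (n + a)^b = Theta(n^b).
# The values of a and b below are arbitrary illustrations.
a, b = -5.0, 2.5

for n in [10, 100, 1_000, 10_000, 100_000]:
    ratio = (n + a) ** b / n ** b
    print(f"n = {n:>7}: (n+a)^b / n^b = {ratio:.6f}")
# The printed ratios climb toward 1.000000 as n grows.
```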
