On the Modeling and Analysis of Computer Networks

LEONARD KLEINROCK, FELLOW, IEEE

In this paper we view the landscape of analytic models for computer network performance evaluation. We present the basics of network analysis, discuss the network design procedure, and introduce some of the fundamentals of network control. Finally, we present some of the issues surrounding the design and behavior of gigabit networks.

I. INTRODUCTION AND APOLOGY

Can you predict the way a network will perform? Can you do it easily, exactly, correctly? Indeed, what metrics should you use to evaluate performance? These are the kinds of questions a network analyst asks himself all the time.

The fact is, there are a number of ways in which you can go about doing network system performance analysis. In order of increasing ugliness, they are as follows:

1) Conduct a mathematical analysis which yields explicit performance expressions.
2) Conduct a mathematical analysis which yields an algorithmic or numerical evaluation procedure.
3) Write and run a simulation.
4) Build the system and then measure its performance!

Occasionally we are lucky enough to develop the first type of solution. Let us discuss this case. The fact is that we often work rather hard at trying to solve the mathematical model we develop. Indeed, we often "fall in love" with our mathematical model and assume that our careers depend upon providing an exact solution to the model generated. However, it must be recognized that our mathematical models never do an exact job of representing the actual network being analyzed. Thus we always end up with an approximation, even if our analysis is exact. Sometimes, all we can do is to provide an approximate solution to our mathematical model; this often takes cleverness on the part of the analyst. A different approach, and one which is often overlooked, is to generate a different (and possibly more approximate) mathematical model whose solution may be more tractable than that of our original model.
It is important for the analyst to recognize that this is a perfectly legitimate approach to consider. After all, the final test of our analysis is when we compare its predictions to actual measurements of the real network. Only if the predictions are within an acceptable tolerance can we claim to have done our job as analysts.

In this paper, we will concentrate on the use of such mathematical models of network performance. Of necessity we will be making approximations in a variety of ways.

Manuscript received May 13, 1993. The author is with the Computer Science Department, University of California, Los Angeles, CA 90024. IEEE Log Number 931134.

II. THE EARLY NETWORK ANALYSIS MODELS

The first models of computer networks were developed in the early 1960's [1], [2]. However, it was not until the late 1960's, when the United States Department of Defense Advanced Research Projects Agency (DARPA) funded the development of the ARPANET, that serious effort began in this area. Indeed, it was the infusion of a relatively small amount of government money and a large amount of vision in funding network research that propelled the entire field of networking and packet switching forward and produced the extensive analytical work and physical networks in which we find ourselves swimming today.

One of the first general results was an exact expression for the mean delay experienced by a message as it passed through a network. The evaluation of this delay required the introduction of an assumption (the author's Independence Assumption) without which the analysis remains intractable, and with which the analysis becomes quite straightforward. Let us begin our journey through the analytical thread with this model [1]. Consider the network shown in Fig. 1. Here we see a network in which we assume there are N nodes corresponding to the network switches and in which we assume there are M links corresponding to data channels connecting the switches.
We assume that messages arrive from a Poisson process at origin node j headed for destination node k at a rate γ_jk; that is, the variables γ_jk correspond to the entries in the network traffic matrix. Furthermore, we assume that the message lengths are exponentially distributed with mean length 1/μ bits per message. In Fig. 2 we show a detailed view of a node in the network.

0018-9219/93 $03.00 © 1993 IEEE

Fig. 2. Detailed view of a switching node in the network.

Since each channel is, in fact, a full-duplex channel, we choose to represent a channel as two simplex channels. Thus we see there are three ways in and three ways out of the node shown (for the moment, we ignore any attached hosts). Now, when a message arrives at such a node, a routing decision must somehow be made which determines over which outgoing channel the message will travel next. Once this decision is made, the message is placed on the tail of a queue of other messages waiting to be transmitted out over this channel (say channel i). In the figure we have identified a queueing system consisting of such a queue and its corresponding channel. We denote by λ_i the traffic carried on this channel (in messages per second) and we let C_i be the capacity of this channel (bits per second). In addition, we let T_i be the mean response time of this little queueing system.

Let us also define some global quantities. First, we define the total (external) traffic carried by the network as

    γ = Σ_{j,k} γ_jk    (1)

and the total (internal) network traffic carried by the channels as

    λ = Σ_{i=1}^{M} λ_i.    (2)

Moreover, if we let n̄ be the average number of hops that a message must take in its journey through the network (averaged over all origin-destination pairs), then it is easy to show that the following is true [1]:

    n̄ = λ/γ.    (3)

Lastly, and most importantly, we define T to be the mean delay of all messages (again, averaged over all origin-destination pairs). T is the mean response time of the network and is one of the most important performance variables for a network.
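As a concrete numerical check of these global quantities (total external traffic γ, total channel traffic λ, and mean hop count n̄ = λ/γ), here is a small sketch; the three-node traffic matrix and the fixed routes are invented purely for illustration:

```python
# Invented example: external traffic gamma_jk (messages/s) between node pairs,
# and the list of channels each origin-destination pair traverses.
gamma_jk = {(0, 1): 2.0, (0, 2): 1.0, (1, 2): 3.0}   # traffic matrix entries
routes = {(0, 1): [0], (0, 2): [0, 1], (1, 2): [1]}  # channels per pair

gamma = sum(gamma_jk.values())        # total external traffic

lam = [0.0, 0.0]                      # lambda_i: flow carried by each channel
for pair, rate in gamma_jk.items():
    for ch in routes[pair]:
        lam[ch] += rate

lam_total = sum(lam)                  # total internal traffic
n_bar = lam_total / gamma             # mean hop count n_bar = lambda/gamma
print(gamma, lam_total, n_bar)        # 6.0 7.0 1.1666...
```

Here n̄ = 7/6: each message crosses one or two channels, weighted by its traffic rate.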
It can easily be seen (by two applications of Little's result [3]) that

    T = Σ_{i=1}^{M} (λ_i/γ) T_i.    (4)

This is an exact equation and is an important general result for networks of all sorts.

The only difficulty with this last equation is that we have not given an explicit expression for T_i. In general, this turns out to be an insurmountable problem. It is probably true that we will never be able to give a tractable expression for T_i! This may come as a surprise to those familiar with queueing networks; indeed, it appears that we have set up a perfect Jackson open queueing network whose solution is simple and well-known [4]. The problem comes from the fact that a key assumption in queueing network theory is that the "service time" experienced by a "customer" in any node of the network is independent of all service times of all customers in all nodes. Our problem comes about since the "service time" a customer (the message) requires from the server (the channel) is simply its transmission time. Now it is clear that the time it takes to transmit a message over one channel is exactly the time it will take to transmit that same message over a different channel of the same capacity. This generates a major dependence among the successive service times seen by a message as it makes its way through the network. In fact, this also creates a dependency among the interarrival and service times seen by a message at a given node. It turns out that this causes grief beyond belief in the analysis. Indeed, the case of two nodes in tandem where the channel capacity of each of the two attached channels is identical was solved (many years after this problem was first posed in [1]) by Boxma [5], and that solution was extremely messy.

It turns out that if we change the model to one which is, in fact, more representative of data traffic, then we can give an exact solution for a restricted topology.
The modification is to assume that the message lengths are all the same (rather than the exponential assumption above) and that the topology is a tandem network; in this case, Rubin [6] was able to give an exact solution for the response time when all traffic enters at one end of the tandem and exits at the other end. Unfortunately, the analysis does not extend to the case of nontandem networks.

Let us now return to our original model using exponentially distributed message lengths. As mentioned above, extreme analytical difficulty comes from the dependence among message service and interarrival times. The Independence Assumption assumes that the length of a message is chosen independently from the exponential distribution each time it enters a switching node in the computer network! This is clearly a sweeping assumption which is patently false; however, it turns out (from measurements and simulation) that this assumption has a negligible effect on the mean message delay. AND, if we do make the assumption, then our analysis becomes trivial since we will have then reduced the system to a Jackson open queueing network model.

If we make the Independence Assumption, then the expression for T_i for the ith queueing system shown in Fig. 2 reduces to that of an M/M/1 queueing system [4] whose solution is simply

    T_i = 1/(μC_i − λ_i).    (5)

Note that μC_i is the capacity of the ith channel expressed in messages per second. When we substitute this into (4), we end up with an explicit and simple expression for the mean message delay T as follows:

    T = Σ_{i=1}^{M} (λ_i/γ) · 1/(μC_i − λ_i).    (6)

This equation is very effective for design calculations as we shall see below. However, it ignores certain realities which become important when one wishes to give a more precise prediction of network delay. For example, we have assumed K = 0 and P_i = 0 (where K = nodal processing time and P_i = propagation delay). When we come to apply this analysis to any realistic network, we must include these variables as well as other considerations.
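Under the Independence Assumption, the network delay is just a traffic-weighted sum of per-channel M/M/1 delays, and it is easy to evaluate numerically; a minimal sketch with invented flows and capacities:

```python
# Mean network delay as the traffic-weighted sum of M/M/1 channel delays:
# T = sum_i (lambda_i/gamma) * 1/(mu*C_i - lambda_i). All numbers invented.
mu = 1.0 / 1000.0          # 1/mu = 1000 bits mean message length
lam = [3.0, 4.0]           # lambda_i, messages/s carried on each channel
C = [16000.0, 16000.0]     # C_i, bits/s (so mu*C_i = 16 messages/s)
gamma = 6.0                # total external traffic, messages/s

T = sum((li / gamma) / (mu * Ci - li) for li, Ci in zip(lam, C))
print(T)                   # ~0.094 s
```

Each term blows up as λ_i approaches μC_i, which is the bottleneck behavior discussed next.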
Specifically, not only is there message traffic moving through the network, but there is also a certain amount of control traffic. The basic M/M/1 result gives the message delay of the true message traffic for a single channel; of course, this delay is composed of two quantities, namely, a waiting time on queue and a service time. The service time has an average value related to the average length of the true data traffic; the waiting time, however, is due to the interference of all other traffic in the network and is composed partly of data traffic and partly of control traffic. Therefore, it behooves us to separate these two contributions to delay and to use the appropriate parameters for each. If we let 1/μ denote the average length of a data packet and if we let 1/μ′ represent the average length of all packets, then we see that a more accurate expression for T_i is

    T_i = λ_i/(μ′C_i(μ′C_i − λ_i)) + 1/(μC_i).    (7)

Of course, if we set μ′ = μ this will reduce to (5). If we now account for the nodal processing time K and the channel propagation time P_i, we may then write down the following approximation for the average message delay:

    T = K + Σ_{i=1}^{M} (λ_i/γ) [ λ_i/(μ′C_i(μ′C_i − λ_i)) + 1/(μC_i) + P_i + K ].    (8)

The term in square brackets is just our new expression for T_i, augmented by the propagation and processing delays; the additional term K outside the sum comes from the fact that messages pass through one more node than they do channels in their travels through the network. Note that in the case K = P_i = 0 and μ′ = μ, the expression reduces to our basic expression in (6).

From this delay analysis we may predict quantitative as well as phenomenological behavior of the average message delay in networks. In particular, if we assume a relatively homogeneous set of C_i and λ_i, then as we increase the load on the network, no individual term in the sum for delay as given in (4) will dominate the summation until the flow in one channel (say channel i₀) approaches the capacity of that channel; this channel corresponds to the network bottleneck. At that point, the term T_{i₀}, and hence T, will grow rapidly.

Fig. 3. The threshold behavior of average message delay.
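A numerical sketch of this refined approximation: the waiting time uses the combined (data plus control) traffic with mean length 1/μ′, the service time uses the data length 1/μ, and each hop adds propagation P_i and processing K (plus one extra K, since messages traverse one more node than channels). All parameter values below are invented:

```python
# Refined per-hop delay: waiting time from all traffic (mean length 1/mu'),
# service time from data packets only (mean length 1/mu), plus P_i and K
# per hop, plus one extra K for the final node.
mu = 1.0 / 1000.0        # data packets: 1000 bits mean
mu_p = 1.0 / 800.0       # all packets (data + control): 800 bits mean
K = 0.001                # nodal processing time, s
lam = [3.0, 4.0]         # lambda_i, messages/s
C = [16000.0, 16000.0]   # C_i, bits/s
P = [0.005, 0.005]       # P_i, propagation delays, s
gamma = 6.0              # total external traffic, messages/s

T = K + sum(
    (li / gamma) * (li / (mu_p * Ci * (mu_p * Ci - li))  # waiting (all traffic)
                    + 1.0 / (mu * Ci)                    # data service time
                    + Pi + K)
    for li, Ci, Pi in zip(lam, C, P)
)
print(T)
```

Setting μ′ = μ and K = P_i = 0 recovers the simpler weighted M/M/1 sum.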
The expression for delay will then be dominated by this term, and T will exhibit a threshold behavior; prior to this threshold, T remains relatively constant, and as we approach the threshold, T will suddenly grow. Thus we expect an average delay in networks that has a much sharper behavior than the average M/M/1 delay.

This threshold behavior suggests a simplified deterministic (that is, fluid flow) model for delay in computer networks. The essence of the phenomenon is that delay remains essentially constant up to the critical threshold, at which point the delay grows in an unbounded fashion. Such a simplification is shown in Fig. 3. In this figure we represent the delay characteristic in terms of two basic system parameters. The first parameter, T₀, is the constant delay below the threshold.

Let us first focus on the flow control function γ(λ); that is, how much traffic γ does the network carry when it is offered an amount λ? If throughput were our only consideration, then the ideal situation would be γ = λ and we would have no lost traffic. However, since the network has a maximum capacity of γ_max, we see that the "ideal" flow control function is that shown in Fig. 9. Below an input rate of γ_max we have no loss, and beyond that point we have the minimum possible loss, namely, λ − γ_max. Also shown in the figure are other, more realistic functions. For example, if we apply no flow control at all, then the "free-flow" curve is often the one that occurs. In this case, we see that for low input rates we achieve nearly the ideal, but as the load increases, we reach a peak and then begin to fall off as more load is applied; that is, as we push harder, we get less throughput! Eventually these uncontrolled schemes can drive the system into a situation of very low throughput or even zero throughput (also known as a deadlock). Such behavior may be found in systems other than networks;

Fig. 9. The flow control function.
for example, thrashing in paged memory systems, traffic jams on highways and city streets, water flowing out of a vessel, etc. Such a degradation is usually caused by the fact that some system resource is being wasted; in our case, it often is due to unnecessary retransmissions, or unnecessarily long paths for traffic, or buffering problems, etc. An alternative is to be extremely conservative in admitting traffic, in which case we usually achieve the behavior shown in the figure where we lose lots of traffic in light load, but do well as the load increases. On the other hand, if we introduce a dynamic flow control scheme (such as the throttling schemes discussed earlier), then we often achieve the behavior as shown for that case in the figure. Here we see that we do not do badly at low loads, and certainly do continue to increase throughput as we increase load, approaching the maximum network throughput fairly rapidly.

Now let us get to the basic tradeoffs. We begin with the tradeoff between throughput γ(λ) and the mean response time T(γ(λ)). Clearly, we wish to have lots of throughput and small delay; unfortunately, these cannot both be achieved simultaneously, and so we wonder where is the correct "operating point" for the system. A quantity known as "power" (P) has been studied which combines these two performance variables into a single measure and is defined as throughput divided by delay, namely,

    P = γ(λ)/T(γ(λ)).    (14)

We wish to find that optimum value of λ which maximizes P. It is easy to show [19] that this value is such that

    dT(γ(λ))/dγ(λ) = T(γ(λ))/γ(λ).    (15)

That is, optimum power occurs when the input is such that the derivative of response time with respect to throughput is exactly equal to the value of response time divided by throughput. This situation is shown in Fig. 10, where we show an arbitrary response time versus throughput profile. In this figure, we also show that straight line out of the origin which has the smallest slope and which is tangent to the curve.
The point of tangency defines the optimum operating point (i.e., it maximizes power). We note that the slope of this straight line is exactly the inverse of the power; thus it is clear that we seek the minimum-slope line that touches the curve. Note that this result is good for any flow control function γ(λ) and for any response time function.

Fig. 10. Locating the optimum power point.

If the response time profile had a sharp knee (similar to the curve shown in Fig. 3), then the optimum operating point would have been at γ*, namely, the "knee" of the curve. This corresponds exactly to where your intuition would tell you to operate, since it provides minimum response time and maximum throughput. In more general cases the optimum power point essentially identifies the knee of the curve.

If we apply this principle of maximum power to a simple M/M/1 queueing system, we find that the optimum operating point is at ρ = 1/2 (i.e., λ_opt = μ/2). At this point, the response time is twice its minimum (the no-load delay) and the throughput is half its maximum. More importantly, N̄, the average number of customers in the system at this point, is exactly N̄ = 1. Moreover, whereas the optimum value of ρ changes for the M/G/1 queueing system as a function of the general service time, it remains the case that optimum power occurs when N̄ = 1!

Now, let us consider a simple single-node queueing system in which you have complete control over how messages (customers) enter the system; suppose you were asked to find that input control scheme which maximizes power for the system. The obvious answer is that control policy which allows exactly one message in the node at a time, and only when that message leaves would you insert the next message. Of course, we cannot control traffic in so simple a fashion in a real system. What is really interesting is that our analytic result above says that we should operate the system at a load such that on average we should have one customer in the system!
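A quick numerical confirmation of the M/M/1 power result (a sketch; μ is an arbitrary service rate): since T = 1/(μ(1 − ρ)) and throughput is ρμ, power reduces to μ²ρ(1 − ρ), which peaks at ρ = 1/2, where the mean population is one customer.

```python
# Power P = throughput / response-time for an M/M/1 queue.
# T = 1/(mu*(1-rho)), throughput = rho*mu, so P = mu**2 * rho * (1 - rho),
# which is maximized at rho = 1/2.
mu = 1.0

def power(rho):
    T = 1.0 / (mu * (1.0 - rho))   # M/M/1 mean response time at load rho
    return (rho * mu) / T

best = max((i / 1000.0 for i in range(1, 1000)), key=power)
n_bar = best / (1.0 - best)        # mean number in system at the optimum
print(best, n_bar)                 # 0.5 1.0
```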
Thus our analytic result matches our intuitive reasoning very well.

This result carries over very nicely to the case of computer networks. It states that the optimum number of messages in flight in an n-hop path through the network should equal exactly n messages (i.e., one per hop). Moreover, if we ask what should be the optimum value of the window W in an n-hop path using window flow control, then once again we find the answer is W = n.

One can also include the effect of loss in the definition of power. As shown in [20], it is still the case that the optimum operating point is exactly that point where our "deterministic reasoning" would tell us it should be. In fact, this result extends to much more general definitions of power, where the only requirement is that the power be an increasing function of throughput as well as a decreasing function of both delay and loss.

V. GIGABIT NETWORKS

As we enter the 1990's, we find that we are rapidly moving into a world of gigabit-per-second networks. We must ask ourselves if gigabits represent just another step in an evolutionary process of greater bandwidth systems, or if gigabits are really different. In the opinion of this author, gigabits are indeed different, and the reason for this difference has to do with the effect of the latency due to the speed of light.

Let us begin by examining data communication systems of various types. It turns out that there are a few key parameters of interest in any data network system. These are:

1) C = capacity of the network (megabits per second, MBPS);
2) b = number of bits in a data packet;
3) L = length of the network (miles).

It is simplest to understand these quantities if one thinks of the network simply as a communication link.
One can combine these three parameters to form a single critical system parameter, commonly denoted as a, which is defined as

    a = 5LC/b.    (16)

This parameter is the ratio of the latency of the channel (i.e., the time it takes energy to move from one end of the link to the other) to the time it takes to pump one packet into the link. It measures how many packets can be pumped into one end of the link before the first bit appears at the other end [21]. The factor 5 appearing in the equation is simply the approximate number of microseconds it takes light to move 1 mi.²

²Throughout this paper we make the simplifying assumption that light propagates over fiber optic channels at the speed at which it propagates in free space.

Now, if we calculate this ratio for some common data networks, we find the values shown in Table 1. Note the enormous range for the parameter a. At one extreme, namely local area networks, it is as small as 0.05, while at the other extreme, namely a cross-country gigabit fiber optic link, it is as large as 15 000. This is a range of nearly six orders of magnitude for this single parameter!

We see that a grows dramatically when we introduce gigabit links. So we naturally must ask ourselves if networks made out of gigabit links are different in some fundamental way from those made out of kilobit or megabit links. There are two cases of interest to consider. First, we have the case that a large number of users are each sharing a small piece of this large bandwidth. In this case, it is fairly clear that to each of them, a gigabit network looks no different than today's networks.

However, if we have a few users each sending packets and files at gigabit speeds, then we do see a change in behavior and we do run into new problems. At these speeds, a gets very large. To see the effect of this change, let us consider the following scenario. Assume we are sitting at a terminal and wish to send a 1-Mb file across the United States to some remote computer, as shown in Fig.
11. Now, if the speed of the communication channel we have available is 64 KBPS (as in, say, an X.25 packet network), then, as shown in Fig. 12, the first bit of this transmission will arrive at the East Coast computer after approximately 1000 b have been pumped into the channel. Thus we see that the channel is buffering roughly 0.001 of the message; that is, there is 10³ times as much data stored in the terminal's buffer as there is in the channel. Clearly, if we had a higher speed channel, the time to transmit our 1-Mb file could be reduced. That is, we can benefit from more bandwidth.

Thus let us now increase the speed of the channel and use a T1 channel (1.544 MBPS). In Fig. 13 we show this new configuration. Now we find that the terminal is buffering roughly 40 times as much data as is the channel. Once again, we see that we can benefit from more bandwidth.

Let us now increase the channel speed to a gigabit channel; in particular, we will assume a 1.2-GBPS link (the OC-24 SONET offering). This case is shown in Fig. 14, where we see the entire 1-Mb file as a small pulse moving down the channel. Indeed, the pulse occupies roughly only 0.05 of the channel "buffer." It is now clear that more bandwidth is of no use at all in speeding up the transmission of the file; it is the latency of the channel that dominates the time to deliver the file!

Therein lies the fundamental change that comes about with the introduction of gigabit links into nationwide networks. Specifically, we have passed from the regime (of pre-gigabit networking) in which we were bandwidth-limited, to the new regime of being latency-limited in the post-gigabit world. Things do indeed change (as we shall see below). The speed of light is the fundamental limitation for file transfer in this regime! And the speed of light is a constant of nature which we have not yet been able to change!

In the considerations above, we assumed that our file was the only traffic on the link.
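The three scenarios above, and the extremes of a quoted earlier, can be reproduced directly from (16); the channel speeds, file size, and the roughly 15-ms cross-country latency are from the text, while the LAN length and the packet sizes used for a are assumed values:

```python
# Latency parameter a = 5*L*C/b from (16): L in miles, C in Mb/s, b in bits
# (with ~5 microseconds of propagation per mile).
def a_param(L_miles, C_mbps, b_bits):
    return 5.0 * L_miles * C_mbps / b_bits

tau_s = 0.015                         # ~15-ms one-way cross-country latency
file_bits = 1.0e6                     # the 1-Mb file of Figs. 11-14

for C_bps in (64e3, 1.544e6, 1.2e9):  # 64 KBPS, T1, 1.2-GBPS OC-24
    in_flight = C_bps * tau_s         # bits stored "in the channel"
    print(C_bps, in_flight, file_bits / in_flight)
# -> ~1000x, ~40x, and ~0.05x the pipe, matching the text's three scenarios.

# The extremes quoted for a (lengths/packet sizes are assumptions):
print(a_param(0.1, 10.0, 100))            # a short LAN: a ~ 0.05
print(a_param(3000.0, 1000.0, 1000))      # cross-country gigabit: a ~ 15000
```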
Let us now consider the case of competing traffic with smaller packets. Indeed, let us now assume that we have the classical queueing model of a Poisson stream of arriving messages requesting transmission over a communication link, where each message has a length which is exponentially distributed with a mean of 128 bytes (i.e., an M/M/1 queueing system). If, as usual, we let ρ denote the system utilization factor, then ρ = λ(1024/C), where λ is the arrival rate (messages per microsecond) and C is the channel capacity (MBPS). In this situation, we know that T, the mean response time of the system (i.e., the mean time from when the message arrives at the tail of the transmit queue until the last bit of the message appears at the output of the channel, including any propagation delay), is given by

    T = (1024/C)/(1 − ρ) + τ    (17)

where τ is the propagation delay (i.e., the channel latency).

Fig. 15. Mean response time at zero propagation delay.

Fig. 16. Mean response time at 15-ms propagation delay.

Let us ask ourselves if gigabit channels actually help in reducing the mean response time. In Fig. 15, we show the mean response time (in milliseconds) versus the system load ρ for three different channel speeds. In this figure, we assume that the speed of light is infinite, and so τ = 0. The channel speeds we choose are the same as those considered above, namely 64 KBPS, 1.544 MBPS, and 1.2 GBPS. We note a significant reduction in T when we increase the speed from 64 KBPS to 1.544 MBPS; thus the faster T1 channel helps. However, note that when we go from 1.544 MBPS to 1.2 GBPS, we see almost no improvement. (The only region in which there is an improvement with gigabits is at extremely high loads, a situation to be avoided for other reasons.) As far as response time is concerned, gigabits do not help here!

One might argue that the assumption of zero propagation delay has biased our conclusions. Not so; in Fig. 16 we show the case with a 15-ms propagation delay (i.e.,
the propagation delay across the USA), and we see again that gigabits do not help.

We can sharpen our treatment of this latency-versus-bandwidth discussion as follows. Let us assume that we have an M/M/1 model as above, where the messages have an average length equal to b bits. Assume we wish to transmit these files across the United States, as in the earlier figures. Now, as can be seen from (17), there are two components making up the response time: namely, the queueing-plus-transmission delay (the first term in the equation) and the propagation delay (τ). In this paper we have been discussing the relative size of each of these, and we referred to regions of bandwidth-limited and latency-limited systems. Let us now make those concepts more precise. We choose to define a sharp boundary between these two regions. In particular, we define this boundary to be the place where the two terms in our equation are exactly equal; namely, where the propagation delay equals the queueing-plus-transmission delay. From (17) we see that this occurs when the bandwidth of the channel takes on the following critical value:

    C_crit = b/((1 − ρ)τ).    (18)

In Fig. 17, we plot this critical value of bandwidth (on a log scale) versus the system load ρ; we have drawn this plot for the case of τ = 15 ms and a message length of 1 Mb. Above this boundary, the system is latency-limited, which means that more bandwidth will have negligible effect in reducing the mean response time T. Below this boundary, the system is bandwidth-limited, which means that it can take advantage of more bandwidth to reduce T. Note, for these parameters, that the system is latency-limited over most of the load range when a gigabit channel is used; this means that for these parameters, a gigabit channel is overkill so far as reducing delay is concerned.

Fig. 17. The critical bandwidth boundary for a 1-Mb file.

We repeat this plot in Fig. 18 for a number of different message sizes.
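Both the response-time expression (17) and the critical bandwidth (18) are easy to evaluate; the sketch below uses the text's parameters (1024-bit messages for (17); a 1-Mb message, 50% load, and τ = 15 ms for (18)):

```python
# (17): T = (b/C)/(1 - rho) + tau, in microseconds when C is in bits/us (Mb/s).
def response_time_us(b_bits, C_mbps, rho, tau_us):
    return (b_bits / C_mbps) / (1.0 - rho) + tau_us

# (18): the bandwidth at which queueing-plus-transmission delay equals tau.
def critical_bandwidth_bps(b_bits, rho, tau_s):
    return b_bits / ((1.0 - rho) * tau_s)

rho, tau_us = 0.5, 15000.0               # 50% load, 15-ms cross-country latency
for C in (0.064, 1.544, 1200.0):         # 64 KBPS, T1, 1.2 GBPS (in Mb/s)
    print(C, response_time_us(1024, C, rho, tau_us))
# T1 -> gigabit barely helps: ~16.3 ms drops only to ~15.0 ms.

c_crit = critical_bandwidth_bps(1.0e6, 0.5, 0.015)
print(c_crit)   # ~133 Mb/s: above this, a 1-Mb-message link is latency-limited
```

Since 133 Mb/s is well below 1.2 Gb/s, the gigabit link at this load sits deep in the latency-limited region, as the text observes.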
Without labeling the regions, the same comments apply; namely, systems above the curve are latency-limited, and below they are bandwidth-limited. We note that gigabit channels begin to make sense for messages of size 10 Mb or more, but are not helpful for smaller file sizes. This comment about message size refers to the file size that the user application generates; the fact that ATM uses 53-byte cells has little to do with this comment.

Fig. 18. The critical bandwidth boundary for various file sizes.

Figures 17 and 18 apply to the case of a cross-country link (i.e., a propagation delay of roughly 15 ms). For other than τ = 15 ms, the critical bandwidth which defines the boundary is given by (18).

There are a number of other issues to be addressed in gigabit nets. Consider the example from the previous section, namely, a gigabit link spanning the United States. Suppose we start transmitting a file at time t = 0. Roughly 15 ms later, the first bit will appear across the country. Now suppose that the receiving process decides immediately that it cannot accept this new flow which has begun. By the time the first bit arrives, however, there are roughly 15 million bits already in the pipe heading toward the receiving process! And, by the time a stop signal reaches the source, another 15 million bits will have been launched! It does not take too much imagination to see that we have a problem here. It is basically a congestion control and flow control problem, as discussed in Section IV-B. Clearly, a closed-feedback method of flow control is too sluggish in this environment (due, once again, to latency). Some other forms of control must be incorporated. For example,
one could use rate flow control, in which the user is permitted to transmit at a maximum allowable rate.

Another issue has to do with the maximum attainable efficiency that one can obtain by taking advantage of statistical multiplexing of bursty sources in a gigabit environment. If we have a large number of small bursty sources, then statistical multiplexing takes exquisite advantage of the Law of Large Numbers [4], and allows one to drive these channels at very high efficiencies. However, if we have a small number of large sources, then the multiplexing does not usually lead to very high efficiencies. This is because statistical smoothing of a small number of sources is not sufficient to bring about the advantages of statistical multiplexing. Furthermore, if we have a large number of nonhomogeneous sources, one must calculate the effective number of such sources in order to calculate the efficiency to be expected from multiplexing [22].

VI. CONCLUSIONS

In this paper, we have taken a tour over the analytic landscape of computer networks, flying fairly high over the terrain. The purpose was to give the reader an understanding as to the approximate state of the art in performance modeling and analysis of computer networks. The reader should be aware that currently there is a great deal of effort going into network modeling and analysis. The driving force behind this effort has been the move toward broadband networks. These networks involve parameter ranges and tradeoffs that we have not seen before. The switch has become the economic and performance bottleneck of the system. Thus we are in the process of inventing new protocols and architectures to give us access to the very high bandwidths afforded us by fiber optic channels. As a result, these new protocols and architectures require modeling, analysis, and design.

We have introduced some of the basic modeling and evaluation tools for networks. We have glimpsed some of the tradeoffs that must be understood.
And most of all, we have just scratched the surface of gigabit networks; they are not yet understood. There is currently significant effort underway to develop the modeling and analytic tools to provide this understanding.

REFERENCES

[1] L. Kleinrock, Communication Nets: Stochastic Message Flow and Delay. New York: McGraw-Hill, 1964. (Reprinted by Dover Publications, 1972.)
[2] P. Baran, "On distributed communications," RAND Series Reports, Aug. 1964.
[3] J. D. C. Little, "A proof of the queueing formula L = λW," Oper. Res., vol. 9, pp. 383-387, 1961.
[4] L. Kleinrock, Queueing Systems, Vol. I: Theory. New York: Wiley, 1975.
[5] O. J. Boxma, "Analysis of models for tandem queues," Ph.D. dissertation, University of Utrecht, Utrecht, The Netherlands, 1977.
[6] I. Rubin, "Communication networks: Message path delays," IEEE Trans. Inform. Theory, vol. IT-20, 1974.
[7] L. Fratta, M. Gerla, and L. Kleinrock, "The flow deviation method: An approach to store-and-forward communication network design," Networks, vol. 3, pp. 97-133, 1973.
[8] H. Frank and I. T. Frisch, Communication, Transmission, and Transportation Networks. Reading, MA: Addison-Wesley, 1971.
[9] L. Kleinrock, Queueing Systems, Vol. II: Computer Applications. New York: Wiley, 1976.
[10] M. Gerla, "The design of store-and-forward (S/F) networks for computer communications," Ph.D. dissertation, School of Engineering and Applied Science, University of California, Los Angeles, Rep. UCLA-ENG-7319, 1973.
[11] M. Gerla and L. Kleinrock, "Flow control: A comparative survey," IEEE Trans. Commun., vol. COM-28, pp. 553-574, Apr. 1980. Also published in Computer Network Architectures and Protocols, P. Green, Ed. New York: Plenum, 1982.
[12] D. Bertsekas and R. Gallager, Data Networks. Englewood Cliffs, NJ: Prentice-Hall, 1987.
[13] S. Salamone, "…frame relay for speed and flex…," Data Commun., pp. 38-52, Apr. 1990.
[14] R. G. Gallager, "A minimum delay routing algorithm using distributed computation," IEEE Trans. Commun., vol. COM-25, pp. 73-85, Jan. 1977.
[15] D. P. Bertsekas, E. M. Gafni, and R. G. Gallager, "Second derivative algorithms for minimum delay distributed routing in networks," IEEE Trans. Commun., vol. COM-32, pp. 911-919, 1984.
[16] D. W.
Davies, "The control of congestion in packet switching networks," in Proc. 2nd ACM/IEEE Symp. on Problems in the Optimization of Data Communications Systems (Palo Alto, CA, 1971).
[17] T. G. Robertazzi, Computer Networks and Systems: Queueing Theory and Performance Evaluation. New York: Springer-Verlag, 1990.
[18] M. Ajmone Marsan, G. Balbo, and G. Conte, Performance Models of Multiprocessor Systems. Cambridge, MA: MIT Press, 1986.
[19] L. Kleinrock, "On flow control in computer networks," in Conf. Rec., Int. Conf. on Communications (Toronto, Ont., Canada, June 1978), pp. 27.2.1-27.2.5.
[20] L. Kleinrock, "Power and deterministic rules of thumb for probabilistic problems in computer communications," in Conf. Rec., Int. Conf. on Communications (Boston, MA, June 1979), pp. 43.1.1-43.1.10.
[21] L. Kleinrock, "…channel efficiency for LANs," in Local Area and Multiple Access Networks, R. L. Pickholtz, Ed. Rockville, MD: Computer Science Press, 1986.
[22]
