
Manuscript as submitted

Published with minor revisions in B.T. Tech. J. Vol. 23 No. 2 (April 2005) pp. 232-239

Bandwidth Management for the People


Understanding the QoS implications of network resource allocation
strategies with the B-Cube simulation tool

Fabrice Saffre, Cefn Hoile and Mark Shackleton

Abstract: In the context of a rapidly growing broadband market, it is critical to
understand the expectations and QoS requirements of a changing population of end users,
in order to make informed strategic decisions. Although a number of technologies are
available to fine-tune bandwidth allocation and ensure various forms of “fairness”, the
performance, cost and marketing implications of choosing one regime over another are
not always obvious to decision-makers. In this paper, we present the Broadband Broker
(B-Cube) simulation tool and demonstrate how it can be used as a strategic planning tool
to identify and evaluate the advantages and shortcomings of a wide variety of broadband
products, bandwidth management schemes or changes in usage patterns. The main
difference between B-Cube and other tools is that using it only requires a high-level
understanding of the principles governing the distribution of the limited broadband
resource.
1. Introduction
Delivering and marketing a contended broadband product such as residential DSL is not
as straightforward as the delivery and marketing of many conventional consumer
products or services.
The relationship between what the customer thinks they are buying (a broadband
“experience”) and what they are actually buying (the technical means required to deliver
that experience) is complex and multi-layered.
It is in the interests of the service provider to preserve the customer’s confidence in the
simplicity of the broadband proposition. This simple proposition is rooted in the
responsiveness and transfer rate of their broadband connection. However, behind the
scenes, various influences determine the available capacity and latency experienced by
the customer’s networked applications. Only some of these influences are under the
direct control of the network provider.
Broadband performance can be affected by the instantaneous behaviour of other network
users as well as the network provisioning, traffic policing and shaping policies selected
by the service provider. Depending on the method of provisioning, the policies and user
behaviour of other service providers who share wholesale infrastructure components may
also influence performance.
Close customer scrutiny of broadband performance is likely if commonly used
applications become more demanding of bandwidth, or more demanding applications
become more commonly used. Ironically, if customers subscribe to broadband in order to
take advantage of more bandwidth-intensive applications, this not only increases their
own performance expectations, but also impacts on the capacity available for both retail
and wholesale providers to meet their other users’ demands.
A trend towards greater availability of popular media files is suggested by emerging
technologies such as BitTorrent [1] and Cybersky-TV [2], which reduce network load for
distributors, making it less costly to make files available for download. New network
uses can have a substantial impact, as demonstrated by recent estimates that peer-to-peer
applications account for up to 60% of network traffic [3]. In addition, data exchange
models such as RSS could have an impact. These approaches encourage the automated
retrieval by client software of regularly updated content such as weblogs. Companies
such as Microsoft that serve popular RSS feeds are already experiencing the escalating
costs of serving these resources and are having to take action to control them [4].
Recent developments in malware, worms and viruses may also make an impact, with
hacked PCs acting as spam relays contributing substantially to network load [5].
Responding to these challenges, ISPs and wholesale providers need to consider both
changes to their customer offerings, and changes to their network in order to preserve the
best range of value propositions across the whole broadband marketplace.
Successful network provisioning, management and planning, as well as effective marketing, all
benefit from an accurate assessment of the way different network regimes and usage
patterns interact in the broadband network. A simulation tool – B-Cube – has been built
to examine the consequences of various “what-if” scenarios. It integrates simple models
of customer demand, as well as the network management and provisioning policies of
both ISPs and wholesale providers.
2. The Broadband Broker (B-Cube) simulation tool:
The key rationale for developing an application like B-Cube is the lack of a useable
simulation tool allowing for a quick, hassle-free evaluation of some of the many
bandwidth management options available to network strategists and policy-makers. Such
a tool does not seek to compete with extremely detailed and complex packages like
OPNET Modeler [6], which are designed to reproduce network behaviour very
accurately and at a very low level. Rather, B-Cube is meant to bridge the gap between
such “technology-oriented” tools, which require advanced training to operate and are
typically used to build heavyweight, case-specific simulations, and the more abstract
domain of resource management. With B-Cube, a high-level understanding of the
principles governing the distribution of a contended resource, i.e. bandwidth, is enough to
build scenarios and study the likely effect of changes in usage patterns and bandwidth
allocation policies (or any combination thereof) on quality of service (QoS).
The lightweight simulation engine can be explained in terms of treating bandwidth as a
“liquid”, flowing from a “reservoir” to multiple “taps”, through an unspecified number of
“pipes” of adjustable diameter. Opening/closing the taps (i.e. end users attempting to
obtain more or less bandwidth) determines the overall demand, which is propagated
“upstream” via progressive aggregation (e.g. an ISP relays the demand from all its
subscribers to the wholesale provider). At each level, the diameter of the upstream pipe
constrains throughput (e.g. a ½ Mbps end user cannot generate a demand greater than 512
kbps download speed). Once the global demand profile has been so obtained, the
resource “cascades” down from the reservoir to the taps, obeying rules that can be
different for every intermediate level (representing different management policies). The
key concept is that both the demand for, and the allocation of, the contended resource can
be manipulated and/or fine-tuned multiple times along its way up and downstream, which
provides the basis for modelling various control mechanisms (capping, prioritisation
etc.).
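To make the "reservoir and taps" picture concrete, the following minimal sketch (our illustration, not B-Cube's actual code; all class and attribute names are hypothetical) shows the two passes, demand aggregation upstream followed by the downstream cascade:

```python
# Hypothetical sketch of the two-pass "liquid" model: demand aggregates
# upstream, then the resource cascades back downstream according to a
# per-level allocation rule. Not B-Cube's actual implementation.

class Node:
    def __init__(self, capacity_kbps, children=None, local_demand_kbps=0.0):
        self.capacity = capacity_kbps          # "pipe diameter" towards upstream
        self.children = children or []         # downstream nodes (ISPs, end users)
        self.local_demand = local_demand_kbps  # non-zero only for end users ("taps")
        self.demand = 0.0                      # aggregated demand seen upstream
        self.allocation = 0.0                  # bandwidth granted by the level above

    def aggregate_demand(self):
        """Pass 1: propagate demand upstream, constrained by pipe capacity."""
        total = self.local_demand + sum(c.aggregate_demand() for c in self.children)
        self.demand = min(total, self.capacity)
        return self.demand

    def cascade(self, granted_kbps):
        """Pass 2: distribute the granted resource downstream; here pro rata to
        each child's demand, but any rule (round-robin, capping, prioritisation)
        could be substituted at any level."""
        self.allocation = min(granted_kbps, self.demand)
        total_child_demand = sum(c.demand for c in self.children)
        for c in self.children:
            share = 0.0 if total_child_demand == 0 else c.demand / total_child_demand
            c.cascade(self.allocation * share)
```

Each intermediate level can override the cascade rule with its own policy, which is precisely the flexibility exploited by the management schemes discussed below.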
B-Cube uses a combination of Monte Carlo [7] and deterministic simulation techniques.
A simulated individual’s behaviour is determined by a “user profile”, which includes
factors such as the bandwidth requirements of a class of applications (e.g. browsing vs.
peer-to-peer file sharing) as well as daily patterns of activity (e.g. “peak-time” effects).
However, in order to model fluctuations between users and from one day to the next, a
stochastic component is added. Similarly, while simple round-robin is modelled
deterministically (cycling through all queues), weighted round-robin is approximated by
performing random tests against variable thresholds, which simplifies the simulation
engine but means that the intended service ratios are only guaranteed statistically.
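As a hedged illustration of this stochastic approximation (a sketch under our own assumptions, not the exact B-Cube routine), a weighted round-robin pick can be reduced to a single random test against cumulative weight thresholds:

```python
import random

def weighted_pick(queues, weights):
    """Select one non-empty queue with probability proportional to its weight:
    one random draw is tested against cumulative thresholds, so the intended
    service ratios hold statistically rather than exactly."""
    candidates = [(q, w) for q, w in zip(queues, weights) if q]  # skip empty queues
    if not candidates:
        return None
    threshold = random.uniform(0, sum(w for _, w in candidates))
    cumulative = 0.0
    for queue, weight in candidates:
        cumulative += weight
        if threshold <= cumulative:
            return queue.pop(0)           # serve the head of the chosen queue
    return candidates[-1][0].pop(0)       # numerical safety net
```

Plain round-robin is simply the deterministic special case in which the non-empty queues are visited in turn.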
B-Cube is primarily intended as an experimentation tool for strategists who need a high-
level account of the implications of introducing/modifying broadband products or
bandwidth management schemes (or of the possible effects of large-scale sways in usage
patterns). For that reason, it is important to realise that B-Cube should never be used for
planning detailed network refits or hardware rollouts, which in most cases would require
greater accuracy. Though its light footprint suggests that B-Cube could also be useful for
rapid exploration of parameters space, with the objective of identifying regions where
more detailed modelling is required, this is very much a secondary purpose.
3. A simplified QoS measurement:
In the current prototype version of B-Cube, a very basic approximation of QoS is used to
represent average and individual “customer satisfaction”. It is simply the amount of
bandwidth received divided by the amount requested, over a sliding time window of
customisable size. Clearly this is very simplistic and we fully acknowledge that fact.
However, it should also be mentioned that the engine was specifically designed so that
software modules can be added or modified, one of these modules containing the QoS
calculation functions.
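For concreteness, the sliding-window satisfaction index described above could be computed as follows (an illustrative sketch; the class and method names are our own, not B-Cube's):

```python
from collections import deque

class SatisfactionIndex:
    """QoS proxy as described in the text: bandwidth received divided by
    bandwidth requested, accumulated over the last `window` time-steps."""
    def __init__(self, window=24):
        self.samples = deque(maxlen=window)   # (requested, received) per time-step

    def record(self, requested, received):
        self.samples.append((requested, received))

    def value(self):
        requested = sum(req for req, _ in self.samples)
        received = sum(rec for _, rec in self.samples)
        return 1.0 if requested == 0 else received / requested
```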
We are already in the process of refining this component to take application-specific
quality measurements into account. For example, some activities, like browsing, are not
very sensitive to a temporary shortage of bandwidth, whereas the same shortage would
cause severe and immediately noticeable performance degradation for others (e.g.
streaming media). Similarly, large downloads (e.g. via FTP) are better served by a low but
steady throughput than by occasional, short-lived “bursts” of connection speed. All these
factors will ultimately be used to compute a more accurate user satisfaction index.
4. “Proof of concept” simulation results:
In order to demonstrate how B-Cube can be used to address existing issues, we ran a
simple experiment in the archetypal context of bandwidth hogging by P2P file-sharing
applications. This helps identify acceptable compromises that allow for good QoS for the
end user while simultaneously protecting ISPs’ and wholesale providers’ business models.
The chosen scenario includes two broadband ISPs of similar “size” (in terms of market
share), each of them modelled by 50 of their subscribers, who are of course meant to be
a representative cross-section of their entire user base. These ISPs are competing for a
fixed amount of bandwidth sold to them by the wholesale provider on the basis of a 50:1
contention ratio. In other words, the 100 simulated 512 kbps users effectively share
1 Mbps between them. User profiles are as follows:
• ISP 1: ten “Multimedia” and 40 “Browsing” users
• ISP 2: two “P2P”, three “FTP”, five “Multimedia”, and 40 “Browsing” users
In the prototype version, user profiles are summarily defined as follows (one possible encoding is sketched after the list):
• “Browsing”: bursts of light activity, mostly during peak time
• “Multimedia”: bursts of heavy activity, mostly during peak time
• “FTP”: bursts of heavy activity, anytime
• “P2P”: sustained heavy activity (24/7)
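One possible encoding of these profiles is sketched below; the figures are illustrative placeholders and not the parameter values used in the experiments reported here:

```python
# Hypothetical profile table: each profile pairs a per-burst bandwidth demand
# with activity probabilities for peak and off-peak hours. All numbers are
# placeholders, not the values used in the published runs.
PROFILES = {
    "Browsing":   {"demand_kbps": 128, "p_active_peak": 0.6, "p_active_offpeak": 0.1},
    "Multimedia": {"demand_kbps": 512, "p_active_peak": 0.6, "p_active_offpeak": 0.1},
    "FTP":        {"demand_kbps": 512, "p_active_peak": 0.3, "p_active_offpeak": 0.3},
    "P2P":        {"demand_kbps": 512, "p_active_peak": 1.0, "p_active_offpeak": 1.0},
}
```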
4.1 Benchmark simulation:
The benchmark simulation involves no specific bandwidth management at all, meaning
that at every time-step, the two ISP queues are visited in turn until all bandwidth “units”
have been allocated (so the only possible source of asymmetry is when one queue is
effectively empty and some of the resource is left over). The exact same logic is then used
by both ISPs to distribute their share of the bandwidth to their subscribers, with identical
implications. The results are shown in Fig. 1.

Fig. 1: Benchmark scenario (B-Cube interface screenshot)

Having only “mostly peak time” subscribers, ISP 1 (top) has a clear 24-hour periodicity,
which is not the case for ISP 2 (bottom), whose “P2P” (and to a lesser extent “FTP”)
users absorb all available off-peak bandwidth (continuous black line). During peak
periods, the resource is therefore roughly evenly distributed between the two ISPs, but
overall, ISP 2 consumes a lot more bandwidth. However, average QoS is better for ISP 1
customers, because at peak time (when competition is at its fiercest) the resource is evenly
distributed between the ISPs and the “Browsing” and “Multimedia” subscribers of ISP 2
have to share their half with “P2P” users (unlike their ISP 1-affiliated counterparts).
This also leads to the awkward situation whereby the average experience of heavyweight
“P2P” users (measured over a 24-hour sliding time window) is actually better than that of
the more moderate subscribers. In short, at peak time both groups suffer equally, but only
the “P2P” type make up for it when bandwidth availability increases (i.e. during the late
night and early morning hours).
4.2 ISP “fair share”:
One possible management strategy applicable at the wholesale provider level is to base
the share of the bandwidth allocated to each ISP on the ratio between an agreed quota and
its users’ recorded activity over a predefined time window (also known as “fair share”
policy). This would probably be implemented using a weighted round-robin-like
procedure, with the weights being updated either in real time (which implies using a
sliding time window) or periodically (e.g. at the end of the time-slice over which total
bandwidth consumption is measured), depending on practical constraints.
This approach has the advantage of briefly favouring the lightweight ISP at the beginning
of the peak-time period (because its recorded activity until then will typically be
comparatively low). It also includes the option of using different quotas for different
classes of ISPs, which in turn creates an opportunity for offering “premium” packages
tailored to their customers’ usage profiles. In the present version of B-Cube, when the
“fair share” option is turned on, the weight w_i of each ISP is calculated using
expression [1]:
w_i = \max\left(1, \frac{4 k q_i^2}{4 q_i^2 + x_i^2}\right)    [1]
where k is a constant and q_i and x_i are, respectively, the agreed quota and the recorded
activity of ISP i over the chosen time window. Of course, other functions, or even
completely different weighting procedures, could easily be implemented: this one is
nothing but a simple example, and only the output value is directly relevant to the
simulation.
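Assuming expression [1] is reconstructed correctly above, the weight calculation reduces to a one-line function (a sketch with hypothetical names, not B-Cube's code):

```python
def fair_share_weight(quota, activity, k=5.0):
    """ISP weight under the "fair share" policy, per expression [1]:
    w = max(1, 4*k*q^2 / (4*q^2 + x^2)). The weight equals k when recorded
    activity is zero and decays towards the floor of 1 as activity grows
    relative to the agreed quota."""
    return max(1.0, 4.0 * k * quota**2 / (4.0 * quota**2 + activity**2))
```

For example, with k = 5 and a recorded activity of 100, a quota of 25 gives a weight of 1.0 while a quota of 50 gives 2.5, which is consistent with the gap between the “Bronze” and “Gold” curves of Fig. 2.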

[Plot: weight (k = 5) as a function of bandwidth consumed (arbitrary units), for quota = 25 (“Bronze”) and quota = 50 (“Gold”)]

Fig. 2: Example outputs of the weight calculation function

Fig. 3 shows the situation if ISP 2 agrees to buy a higher quota from the wholesale
provider (e.g. a “Gold” vs. a “Bronze” package) in order to protect QoS for its customers.
The effect is partly illustrated by the appearance of a 24-hour periodicity, because the
increased allowance for ISP 2 means that the peak-time demand from its “Browsing” and
“Multimedia” users is now added to that of the “P2P” and “FTP” types, consuming up to
75% of the total bandwidth. This of course comes at the expense of ISP 1’s average QoS,
unless the wholesale provider invests some of its increased income in additional
capacity (i.e. lowering the contention ratio).

Fig. 3: ISP “fair share” scenario (B-Cube interface screenshot)

4.3 Capping the users:


Continuing to experiment with B-Cube, it appears that ISP 2 may have an even better
option than simply going for the “Gold” quota. Instead, it could combine this upgrade
with the introduction of a cap reflecting what is considered acceptable bandwidth
consumption by an individual user. In the current version of the tool, caps apply to
relatively short periods (1, 3 or 24 hours), are only in force during peak time, and the
effect of reaching one’s cap is to be downgraded to “dial-up” connection speed until the
end of the period. B-Cube already provides the option of simulating the effect of end
users choosing different subscriptions with different caps and base modem speeds, but for
simplicity we will not discuss it here.
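The capping rule just described can be sketched as follows (one possible reading of the rule; the dial-up figure and all parameter names are our own assumptions):

```python
DIAL_UP_KBPS = 56.0   # assumed "dial-up" downgrade speed

def effective_speed(requested_kbps, consumed_in_period, cap,
                    base_speed_kbps, is_peak_time):
    """Cap enforcement as described above: only in force at peak time, and
    reaching the cap downgrades the user to dial-up speed until the end of
    the current capping period (1, 3 or 24 hours)."""
    if is_peak_time and consumed_in_period >= cap:
        return min(requested_kbps, DIAL_UP_KBPS)
    return min(requested_kbps, base_speed_kbps)
```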
As illustrated by Fig. 4, if ISP 2 opts for the combined bandwidth management strategy
(“Gold” quota + individual caps), it can increase average QoS while slightly reducing
inter-user variability, at the relatively modest cost of a small reduction of the “satisfaction
index” in the minority of heavyweight P2P users.

Fig. 4: “Gold” ISP package + capping scenario (B-Cube interface screenshot)

4.4 Dynamic priorities based on relative usage:


The last option we chose to investigate in the context of this proof-of-concept simulation
is to prioritise users on the basis of their relative activity over the duration of the sliding
time window, as opposed to a predefined cap.
The main advantage of using capping as a bandwidth consumption control strategy is that
it involves an explicit and unequivocal agreement between provider and subscriber as to
what is acceptable use and what the consequences of exceeding one’s allowance are.
There is, however, an essential limitation inherent in the inflexibility of the approach:
it does not take into account the real-time availability of the contended resource. Whenever a
heavyweight user reaches his/her cap, his/her connection speed is throttled down to the
specified level (in B-Cube, dial-up speed), even if the requested bandwidth is actually
available due to unrelated fluctuations in demand. Moreover, the “access rights” of a
lightweight and heavyweight subscriber are basically identical until the latter exceeds
his/her allowance, even if their respective activity levels have been of a different order of
magnitude and fair allocation would suggest that the “light” user be given priority.
In our proposed alternative (named “managed service” for lack of a better term), the
weight of individual users is calculated from their “position” in the bandwidth
consumption-based frequency distribution of the entire population: the higher above
average, the lower the priority (and vice versa), as per expression [2].

w_i = \max\left(1, k - \frac{k x_i}{\max(x_1, x_2, \ldots, x_n)}\right)    [2]
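Again assuming the reconstruction of expression [2] above, the per-user weight calculation can be sketched as follows (hypothetical names; the default k = 5 follows the Fig. 2 example rather than a documented value):

```python
def managed_service_weight(activity, all_activities, k=5.0):
    """Per-user priority under the "managed service" option, per expression [2]:
    w_i = max(1, k - k*x_i / max(x_1, ..., x_n)). The heaviest user in the
    window gets the floor weight of 1; an idle user gets the maximum weight k."""
    heaviest = max(all_activities)
    if heaviest == 0:
        return float(k)        # nobody has consumed anything yet
    return max(1.0, k - k * activity / heaviest)
```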
As shown in Fig. 5, this bandwidth allocation strategy not only improves average QoS
but also (perhaps more importantly) reduces inter-user variability even further, at no
apparent cost to any single category of end users. Our preferred explanation at this stage
is that this form of prioritisation consistently favours lightweight users at peak time yet
does not penalise their heavyweight counterparts by enforcing a “hard” cap when this is
unnecessary (i.e. they still have unfettered access to bandwidth whenever it is available,
including at peak time).

Fig. 5: “Silver” ISP package + “managed service” scenario (B-Cube interface screenshot)

5. Concluding remarks:
5.1 Other functionalities:
Section 4 offers only a glimpse into B-Cube’s capabilities. In the existing version, a
variety of other aspects of bandwidth management can already be explored, including but
not limited to:
• Other customer packages (1 or 2 Mbps, which can be combined with different allowances when capping is applied)
• “Managed service” option at the wholesale level (i.e. applied to ISPs themselves)
• Change in the behaviour or QoS evaluation criteria (currently limited to variable “memory span”) of existing users
• Variable size of sliding time windows (allows for testing strategies that are more or less responsive to temporary changes in individual usage patterns)
• Discrete vs. continuous priority updates
• Adjustable ISP-specific maximum ratio between highest and lowest priority
• Adjustable contention ratio at the wholesale level
In addition, the tool provides a number of “user-friendly” features, such as the possibility
of turning statistics on or off, alternating between views that plot the bandwidth consumed
by an ISP against either the available total or its individual quota, and saving or loading
scenarios.
5.2 Future developments:
As already mentioned, we are presently working on improving B-Cube by adding new
features and modifying existing elements, such as the perceived-QoS evaluation module.
However, more ambitious extensions are also under consideration for future work.
Among these is the development of a “policy builder” that would allow anyone using the
tool to specify a rule set for a completely new management strategy, whereas in the
prototype version one is still limited to choosing from a selection of predefined options
and parameter values. The ability to rapidly examine alternative strategies by defining
their high-level behaviour is complemented by a range of increasingly powerful network
policing and shaping tools. Systems from suppliers such as Packeteer, P-Cube, Sandvine,
Ellacoya and Allot [8, 9] permit intervention in network flows, using a rich portfolio of
policies to control bandwidth availability for each of a user’s individual network flows
according to the type of traffic and their usage history. Even in the absence of these more
advanced tools, it is feasible to control headline rate and priority on a per-user basis using
simpler equipment already available in the network.
Furthermore, we wish to add the possibility of customising end user behaviour, in terms
of modelling fluctuations in typical bandwidth consumption over a 24-hour period. This
would allow for much finer modelling of subscribers’ activity patterns and expectations,
including any kind of “mixed” behaviour corresponding to a category of users who
alternate between profiles and applications, with the accompanying variations in QoS
requirements.
At present, B-Cube already provides a lightweight, user-friendly and intuitive simulation
tool to anyone who needs to understand and visualise the potential effect of bandwidth
management policy changes and/or of large-scale alterations to user behaviour (e.g. after
the launch of a new online “killer app”). After the suggested extensions are implemented,
we anticipate that B-Cube will also be capable of providing an accurate quantitative
measurement of perceived QoS, in the realistic context where each subscriber uses a
range of applications with variable bandwidth requirements.
6. References:
[1] http://bitconjurer.org/BitTorrent/
[2] www.tvoon.de/ctv/
[3] http://downloads.lightreading.com/wplib/sandvine/meeting_the_Challenge_of_P2P.pdf
[4] http://news.com.com/Microsoft+flip-flop+may+signal+blog+clog/2100-1032_3-5368454.html
[5] http://www.sandvine.com/solutions/pdfs/spam_trojan_trend_analysis.pdf
[6] http://www.opnet.com/products/modeler
[7] G. S. Fishman (1996) “Monte Carlo: Concepts, Algorithms and Applications”, Springer-Verlag.
[8] http://www.lightreading.com/document.asp?doc_id=31901
[9] http://www.cabledatacomnews.com/mar03/mar03-2.html
