
SCALABLE NETWORK BASED E2E ADMISSION CONTROL USING

FUZZY ARTMAP

R. Singh*, S. Verma**


*Dept. of C.S.I.T., I.E.T., MJP Rohilkhand University, Bareilly (India)
(rsiet2002@gmail.com)
** Indian Institute of Information Technology, Allahabad (India)
(sverma@iiita.ac.in)

ABSTRACT
The traditional approach to admission control assumes that a traffic descriptor is provided
by the user or application for each flow prior to its establishment for real-time
multimedia communication. However, this approach suffers from several problems. We
therefore propose a network-based endpoint admission control system for scalable
QoS-guaranteed real-time communication services. The system is based on a sink-tree-based
resource management strategy and is particularly well suited for DiffServ-based
architectures, since it keeps signaling overhead to a minimum by performing the admission
decisions at the end points. In addition, the proposed system integrates routing with
resource reservation along the routes, and therefore achieves higher admission probability
and better link resource utilization. The approach incurs low overall admission control
overhead because much of the delay computation is done during system configuration,
so resources can effectively be pre-allocated before run time. We investigate a
number of resource-sharing approaches that allow resources to be efficiently re-allocated
at run time with minimal additional overhead. We provide simulation experiments that
illustrate the benefits of using DS-tree-based resource management for resource pre-
allocation and for routing, both with and without resource sharing.

Keywords: QoS, DiffServ, Scalability, DS-tree and RSVP.

1. INTRODUCTION

In order to make quality-of-service (QoS) guarantees, a network must exercise flow admission control. Admission decisions are based on some traffic characterization, such as effective bandwidth [7], [15] or leaky bucket descriptors [18]. The reliable and efficient transmission of real-time secure video on the Internet requires resources at the server, client, and network to be allocated in a dynamic manner. It is generally accepted that end-to-end delay guarantees in networks and distributed systems are provided by (i) appropriate allocation of resources across the network, (ii) strict admission control, and (iii) traffic monitoring and policing in the network. This rather straightforward idea has been well studied and organized in the form of the Integrated Services architecture [3]. The most serious shortcoming of the Integrated Services architecture is its lack of scalability: all mechanisms within the Integrated Services architecture rely on flow awareness. The most natural way to cope with this lack of scalability is to aggregate flows into classes of flows and then have the network manage flow classes instead of individual flows. This general approach is followed in the IETF Differentiated Services architecture [7, 12, 13]. The lack of per-flow information can negatively affect QoS provisioning, as less information is available during flow establishment and very limited policing can be done during the flow's lifetime. On the other hand, it allows for less expensive architectures for admission control: as less flow information must be maintained in the network, it is easier to centralize flow management.
When the admission control decision is made at a central location for each administrative domain by bandwidth brokers [12], the cost of flow establishment is significantly reduced, because only one entity in the domain needs to be contacted in the admission control procedure. It is unlikely that this solution is scalable, however, given its tendency to have the bandwidth broker node become a hot spot in situations with much flow establishment activity. Performing admission control at the edge of the domain can therefore further reduce the establishment overhead. We call approaches that limit admission control activity to the edge of the domain signaling-free admission control schemes. The signaling overhead of such approaches is either literally zero within the network, or it is sufficiently light so as not to cause any scalability-related problem. Depending on where the admission control decision is made, we classify signaling-free admission control schemes into two categories: host-based and network-based.
In a host-based signaling-free admission control scheme, the host makes the admission decision without invoking a signaling protocol. In a network-based signaling-free admission control scheme, the admission decision is performed by the ingress router of the network and does not require signaling when a flow arrives. So-called endpoint-based admission control mechanisms ([19, 20, 21, 24]) typically fall under what we call the host-based signaling-free category.
Independently of where the decision is made, the admission control has to have adequate and correct information about network resources at the time of the decision. Otherwise, either delay guarantees are violated or the admission probability is unacceptably low. Obviously, one way to eliminate the need for signaling within the network is to control admission by probing the network at its edge. This is called measurement-based admission control [24]. In this approach, to set up a flow, the admission control keeps sending dummy traffic at the rate of the flow it wishes to set up for a predefined probing time. After probing, the admission control decides whether to admit the flow based on how the network responded to the total traffic including the dummy packets. Literally, it requires zero signaling. However, no matter what probing technique is used, the non-negligible side effect of long flow set-up latency makes it hard to accept for real-time applications like Voice-over-IP. This is hardly acceptable because users expect the same or better quality of service than they have experienced with traditional telephony systems.
In this paper, we propose a completely different signaling-free admission control scheme, named network-based endpoint admission control, which: 1) requires the minimum possible signaling overhead, 2) provides zero latency for flow set-up, 3) has zero routing overhead, 4) has high admission probability, and therefore high resource utilization, and 5) has tight end-to-end packet delay upper bounds. This is achieved by: 1) structuring available resources off-line using sink trees that reflect user traffic requirements, and 2) at run time, limiting the admission control procedure to the ingress router only. The ingress router then keeps track of the resources available downstream up to the destination. The rest of the paper is organized as follows. Section 2 describes previous work. In Section 3, the DS-tree system is described. Section 4 presents resource-sharing strategies in the DS-tree paradigm. We analyze the end-to-end delay in Section 5 for various resource-sharing methods in the sink-tree paradigm and present simulations in Section 6. Finally, conclusions and future work are described in Section 7.

2. PREVIOUS WORK

Some work has been done on admission control of real-time applications with end-to-end delay constraints within the DiffServ architecture [15, 22, 23, 25]. The basic idea in [15] is to move per-flow information from core routers to edge routers by relying on dynamic packet state carried with each packet. The core router estimates each flow's dynamic information, such as end-to-end delay, for admission control and packet scheduling based on the dynamic packet state. The main idea of [25] is to apply the Dynamic Packet State idea proposed in [15] to scalable admission control. In contrast to these, the work in [22] significantly reduces the run-time overhead of admission control by doing some of the computation off-line, that is, during network design or configuration. In addition, the maximum achievable resource utilization in the DS model has been studied in [23], together with a heuristic route selection algorithm. While all of these approaches address the reduction of the overhead of the admission control procedure, they omit the overhead of the overall flow establishment, of which admission control is just a small part.
RSVP [2] has become the de facto signaling standard for resource reservation on the Internet, in particular within the Integrated Services architecture [5]. Recently, other resource reservation signaling protocols with reduced signaling overheads have appeared, such as YESSIR [11], Boomerang [14], and BGRP [17]. On the other hand, a series of efforts has focused on lightening RSVP itself for aggregated traffic [10, 16, 18]. These approaches still cannot support a large number of real-time applications because 1) they still incur rather long flow set-up latency, since they rely on probing, and 2) they are not able to guarantee bandwidth during the service lifetime. For supporting real-time applications in a scalable fashion, the soft reservation paradigm is not appropriate.
In this paper we propose a network-based endpoint admission control scheme that is scalable. The general idea of such a network-based admission control is presented in [26]. In that work we also introduce DS trees, discuss the problem of finding DS trees and the end-to-end delay analysis, and provide basic simulation results on admission probabilities. In this paper, we address the practicality of the DS-tree-based approach. Specifically, we discuss how to relax rigid resource pre-allocation to respond to changing flow establishment patterns while keeping signaling overhead to a minimum.

3. THE DS TREE PARADIGM

For real-time applications, resources must be allocated to a newly established flow and de-allocated only after the flow has been torn down. Since flows require resources on a sequence of nodes in the network, appropriate signaling must be in place to synchronize the admission control.
Independently of whether the signaling is centralized (as in bandwidth-broker-based approaches) or distributed (as in RSVP-style signaling, for example), the overhead in the case of high flow establishment activity is enormous. A scalable resource management approach must therefore be able to make admission decisions with high accuracy while avoiding both high message counts and centralized decision entities. Resource allocation overhead at run time can be reduced by appropriately pre-allocating resources during network re-configuration. When pre-allocating resources, four issues must be considered:
1) The allocation must reflect the expected resource usage.
2) The allocated resources must be managed so that the signaling overhead is minimized at run time. Ideally, ingress routers manage pre-allocated resources.
3) Since the pre-allocation of resources defines the routing of flows in the network, the signaling must be appropriately integrated with a routing mechanism.
4) Pre-allocation commits resources early, and so may result in low overall resource utilization due to fragmentation.
Lightweight mechanisms must be in place to change the pre-allocation in order to accept flows that could not be accepted in a system with rigidly committed resources. Allocating resources to DS trees can satisfy the first three requirements. Generally speaking (we give a more precise definition later), trees are used to aggregate connections according to their egress nodes. The root of a DS tree is the egress router, and the leaves are the ingress routers. By allocating resources so that each ingress router knows how much resource lies ahead on each path towards each egress router, admission control can be performed immediately at the entrance of the network. Since each egress node has its own DS tree, every possible pair of source and destination nodes has its own unique path in a sink tree. Consequently, whenever a flow arrives at an ingress node, the node determines the DS tree for the new flow based on the destination node. The path to the destination and the resources available on that path are then determined automatically. An admission decision can therefore be made at the very node where a flow arrives. In the following we briefly discuss how typical network operations would be performed with DS-trees.

Ingress Node Information: Each ingress node holds two types of information. One is the mapping table between the destination IP address of an input packet and the corresponding IP address and port number of the egress router. The other is the tree information corresponding to that egress router. The tree information includes the available bandwidth in the tree, the parent node in the tree, and the worst-case network delay in the tree.

Admission Control: At connection establishment, the connection initiator presents the admission controller with a connection request message, which contains the destination IP address, the required bandwidth, and the required network delay for the connection. Secondly, there should be a connection tear-down message. Finally, the input traffic is assumed to be regulated by a leaky bucket at the traffic source. Once the ingress node receives a connection request message, it looks up the corresponding egress router and port number in its routing table. From this information it determines which DS tree to use. It then decides whether to admit or reject the request based on the available bandwidth and the worst-case network delay in that tree.

Packet Forwarding: Once a connection is admitted, a label is assigned to each input packet according to the DS tree it belongs to. Each packet leaving the ingress router for its parent node therefore carries a new label. This label is used for packet forwarding in the core (internal) routers of the domain and is removed when the packet leaves the egress router (the root) for a neighboring domain. Consequently, the label (tree ID) is effective within a single domain only. In other words, the core routers do not see IP addresses; instead, they deal only with the packet label for routing.
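As an illustration of the run-time behaviour described above, the following Python sketch shows how an ingress router might store the per-tree state and make a purely local admission decision. It is only a sketch of the idea under the assumptions of this section (one DS tree per egress, leaky-bucket-regulated flows); the class and field names (DSTreeInfo, IngressRouter, and so on) are ours and not part of any protocol specification.

    from dataclasses import dataclass

    @dataclass
    class DSTreeInfo:
        tree_id: int               # label carried by packets inside the domain
        egress: str                # root (egress router) of the sink tree
        parent: str                # next hop from this ingress towards the root
        available_bw: float        # bandwidth (bps) still unreserved on the path
        worst_case_delay: float    # pre-computed worst-case network delay (s)

    class IngressRouter:
        """Admission control performed entirely at the ingress: no signaling."""

        def __init__(self, egress_map, trees):
            self.egress_map = egress_map   # destination prefix -> egress router
            self.trees = trees             # egress router -> DSTreeInfo

        def admit(self, dst_prefix, req_bw, req_delay):
            tree = self.trees[self.egress_map[dst_prefix]]
            if req_bw <= tree.available_bw and tree.worst_case_delay <= req_delay:
                tree.available_bw -= req_bw    # reserve downstream resources
                return tree.tree_id            # label stamped on every packet
            return None                        # reject the connection request

        def tear_down(self, dst_prefix, req_bw):
            self.trees[self.egress_map[dst_prefix]].available_bw += req_bw

A connection request thus costs one table lookup and a pair of comparisons at the ingress, which is the source of the zero set-up latency claimed in the introduction.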
3.1. Model For Finding A Set of DS-Trees

Once we have a set of DS-trees for a given network, we can operate them as described in the previous section. The problem of finding a set of sink trees for a given network can be formulated as a graph design problem. We assume that the network has no pure core routers: all the routers in the domain are both ingress and egress routers. We also assume that the network supports a single real-time class in addition to best-effort traffic. We define the domain network as a graph G = {V, E} with nodes (routers or switches) in V, connected by links in E. Link l in E has link capacity C_l. For an output link j of an egress router, we define a sink tree ST_j to be a tree-like subgraph of G connected to Link j, where each link l in ST_j is marked with a bandwidth allocation B_l^j, which denotes the amount of bandwidth allocated on Link l to real-time traffic of DS tree ST_j. For a set of DS trees {ST_j} to be valid, the following two constraints must be satisfied.

Link Capacity: On any link, the sum of the bandwidths allocated on the link for all DS trees should not exceed the link capacity C_l. For each link l,

    Σ_{j=1}^{N} B_l^j ≤ C_l   ………. (1)

where B_l^j is the bandwidth allocated for the j-th DS tree on link l, N is the total number of DS-trees, and C_l is the capacity of Link l.

Depth-aware Bandwidth Allocation: In a DS tree ST_j, the bandwidth allocated on any link l should be equal to or greater than the sum of the bandwidths allocated on all direct descendants of Link l in the sink tree:

    B_l^j ≥ Σ_{k ∈ Δ(l,j)} B_k^j   ………. (2)

where Δ(l, j) is the set of descendant links of Link l in DS tree ST_j. In addition, the set of DS trees must satisfy deadline requirements and bandwidth requirements at the entrance and exit of the domain. These requirements are formulated in the form of the following three constraints.

Bounded Network Delay: The delay of every simple path, from an ingress to an egress node, should be bounded by the given network delay:

    MAX {D_p1, D_p2, . . . , D_pb} ≤ D   ………. (3)

where D_pi is the worst-case delay of simple path p_i, and D is the given network delay bound. Since D_pi = d_1,2 + d_2,3 + · · · + d_j-1,j (where, for example, d_1,2 is the worst-case packet forwarding delay from node 1 to node 2), D_pi is the sum of the worst-case local delays along the simple path.

Bandwidth at Domain Entrance: For each ingress router, the amount of bandwidth requested for each path from that router to each egress router should be allocated as requested:

    r_i,j = B_i,j   ………. (4)

where r_i,j is the requested bandwidth for the path between Router i and Router j, and B_i,j is the bandwidth allocated for that path by the DS tree.

Bandwidth at Domain Exit: For each egress router, the sum of the bandwidth allocated to the direct descendant links of the DS tree (at the egress router) should not exceed the given output bandwidth for that DS tree:

    T_j ≥ Σ_{k ∈ Δ(j)} B_k^j   ………. (5)

where T_j is the given output bandwidth for DS tree j and Δ(j) is the set of descendant links of the egress router. We showed in [26] that the problem of finding a valid set of DS trees for a given network is NP-Complete in all but the most trivial cases. We also described a heuristic algorithm based on MST (Minimum Spanning Tree), which performs very well in finding valid DS trees when such DS trees exist. If it fails to find a valid DS tree, the heuristic algorithm returns a set of reduced bandwidth requirements at the network entrance and a relaxed deadline requirement so that a valid set of DS trees can be found. Unless otherwise mentioned, we use this heuristic algorithm (Fuzzy ARTMAP) in the following simulation study.
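To make the validity conditions concrete, the sketch below checks a candidate set of DS-trees against the link-capacity constraint (1) and the depth-aware bandwidth constraint (2). It is an illustrative check only, under a simplified data model of our own (per-link allocations held in dictionaries); it does not reproduce the MST/Fuzzy ARTMAP construction heuristic itself, and constraints (3)-(5) would be checked analogously against the per-path delay bounds and the entrance/exit bandwidth figures.

    def valid_allocation(trees, capacity):
        """trees: {tree_id: {link: (alloc_bw, children_links)}}
        capacity: {link: C_l}.  Returns True if constraints (1) and (2) hold."""
        # Constraint (1): per-link sum over all DS trees must fit the capacity.
        used = {link: 0.0 for link in capacity}
        for alloc in trees.values():
            for link, (bw, _children) in alloc.items():
                used[link] += bw
        if any(used[link] > capacity[link] for link in capacity):
            return False
        # Constraint (2): a link must carry at least what its descendants carry.
        for alloc in trees.values():
            for link, (bw, children) in alloc.items():
                if bw < sum(alloc[c][0] for c in children if c in alloc):
                    return False
        return True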
4. RESOURCE SHARING IN THE DS TREE

A fixed partitioning of resources and their allocation to DS trees gives rise to fragmentation, which can significantly reduce overall resource utilization when the flow population along some source paths exceeds the expected values. Re-computation of the resource allocations is very expensive in terms of computation time and of the communication overhead needed to distribute the new configuration to the ingress routers. A lightweight method is therefore needed to allow adaptive re-allocation of resources between reconfigurations. In this section we describe how to take advantage of the resource allocation structure in the sink tree to share resources within portions of a DS tree or across multiple DS trees.
The general idea behind resource sharing is illustrated in Figure 1. As seen in the figure, four nodes (Nodes 1, 2, 3, and 4) lie on a simple path in a DS tree. Node 4 is the sink (root); other branches are not drawn. Each link should be allocated resources so that each node can accommodate an equal number of real-time flows from itself to the sink. So, Node 1 should be able to accept ten real-time flows from Node 1 to Node 4. Likewise, Node 2 should be able to accept ten real-time flows from Node 2 to Node 4, and so on. Accordingly, the links have bandwidth allocations ranging from 10 to 30 flows from left to right, so that Link 1 supports ten real-time flows, Link 2 supports twenty, and so on. Now suppose that Node 1 has one real-time flow to Node 4, while Node 2 has ten real-time flows to Node 4. In this situation, Node 2 receives the 11th real-time flow set-up request. At this point, resources can be shared in three different ways.

Figure 1: An illustration of resource sharing
No Sharing: In this strategy resources are not shared. Bandwidth on each link is exclusively allocated to a pair of source and destination. In the example of Figure 1, the 11th admission request at Node 2 is denied even though the path from Node 2 to Node 4 (Link 2 and Link 3) has resources available for 9 new requests in total. We will use no sharing as the baseline for comparison with the other sharing strategies.

Path Sharing: In path sharing, resources are shared along overlapping paths of the same DS tree. In our example, the 11th admission request is admitted because the path from Node 2 to Node 4 (Link 2 and Link 3) still has resources available. As a side effect, if Node 2 has admitted 20 flows, it starves Node 1. The resource is shared only along overlapping paths of the same tree.

Tree Sharing: Assume that in our example there is no available bandwidth in the tree to accept the 11th flow at Node 2. With tree sharing, the admission control attempts to use bandwidth allocated to other sink trees on the path from Node 2 to Node 4. As long as there is such a tree, the 11th request is accepted. In this approach, resources are shared between trees that share the same set of consecutive links.

Link Sharing: In this approach, the new request is admitted as long as there are resources available on any tree along the requested path, regardless of which tree the resources originally belong to. The DS tree only determines the path for a flow, and the resources are shared among all tree edges in the domain.
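The following toy sketch contrasts the strategies on the Figure 1 example. The numbers correspond to the tree-sharing variant of the example, in which the flow's own tree is exhausted on Link 2; the data layout (per-link residual bandwidth kept per tree, and a hypothetical second tree T9) is our own simplification and only illustrates which pools of bandwidth each strategy may draw from, not a router implementation.

    # Residual bandwidth (in flows) that each DS tree still has on the two links
    # of the Node 2 -> Node 4 path of Figure 1.
    residual = {
        "Link2": {"T4": 0, "T9": 3},   # T4: the flow's own tree, exhausted here
        "Link3": {"T4": 9, "T9": 3},   # T9: a hypothetical second tree on the path
    }
    path, own_tree = ["Link2", "Link3"], "T4"

    def admissible(strategy):
        if strategy in ("no_sharing", "path_sharing"):
            # only the flow's own tree may be used (the two strategies differ in
            # which counters are decremented at set-up time, not in this test)
            return all(residual[l][own_tree] >= 1 for l in path)
        if strategy == "tree_sharing":
            # some single other tree must have room on the whole shared sub-path
            trees = set(residual[path[0]])
            return any(all(residual[l][t] >= 1 for l in path) for t in trees)
        if strategy == "link_sharing":
            # any tree may contribute, independently on every link
            return all(any(bw >= 1 for bw in residual[l].values()) for l in path)

    for s in ("no_sharing", "path_sharing", "tree_sharing", "link_sharing"):
        print(s, admissible(s))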
5. END-TO-END DELAY ANALYSIS

In this section we investigate the DS-tree's effect on the worst-case end-to-end delay in the presence of resource sharing.

No-Sharing Case: In order to calculate the worst-case end-to-end network delay, we model the network as a collection of servers [4]. Once we have an upper bound on the delay at each server, we can easily obtain the end-to-end delay by simply summing up all the worst-case local delays. For simplicity of analysis, we consider a special case in which we have only two classes of traffic: real-time traffic with priority, and non-real-time best-effort traffic, such that the real-time traffic is never delayed by non-real-time best-effort traffic. In addition, all flows in the real-time class have the same traffic characterization in terms of bandwidth and leaky bucket parameters. Finally, the input real-time traffic to the network is constrained by (β, ρ) [1]. Inside the network, routers are not flow-aware and therefore there is no traffic regulator.

Theorem 1:

    d_k = (α/ρ)(β + ρY_k) + [α(T + ρY_k) / (ρ(N − α))](α − 1)   ….. (6)

where

    Y_k = max_{path ∈ S_k} Σ_{j ∈ path} d_j   ….. (7)

In our case, a formula for the local delay d_k at Server k can be formulated as above based on [1, 4, 6, 22]. β is the burst size of the source traffic of a flow in the real-time class. S_k is the set of all possible upstream paths from Server k for flows that traverse Server k. Y_k is the maximum delay a flow has experienced before arriving at Server k. N is the number of input links to Server k. α is the portion of the link resource allocated to the real-time class traffic. Because the upper bound has been derived under the assumption that the worst-case local delay occurs with the maximum workload, there is no dynamic, run-time-dependent variable in Equation (6). If we assume that Y_k is a constant (which of course is not true, since each node has a different value of Y_k), the upper bound is a monotonically increasing function of α. This corresponds exactly to the common-sense observation that the higher the workload, the larger the end-to-end delay.

Sharing Case: In an attempt to compare the worst-case end-to-end delays for the different resource-sharing strategies, we first represent them mathematically, because the bandwidth for the real-time class is limited by α on each link. The following bandwidth notations are used in the representation of α:

    C_l,u = C_l − (B_l,1 + · · · + B_l,u−1 + B_l,u+1 + · · · + B_l,t) = C_l − Σ_{i=1, i≠u}^{t} B_l,i   ….. (8)

    B_l,u = R_l,u^1 + · · · + R_l,u^d + · · · + R_l,u^n   ….. (9)

where C_l represents the capacity of Link l. Since we consider only the two classes in this paper, any link capacity consists of two parts: one for the real-time priority class (Σ_{u=1}^{t} B_l,u) and the other for the best-effort traffic (B_best-effort). In other words, Link l carries t tree-edges, each of which has its own allocated resource, and the rest of the resource of Link l is for the best-effort traffic. C_l,u denotes the resource available for the edge of DS-tree u on Link l.
Since the real-time class has priority, C_l,u involves two types of allocations: the resource for the best-effort traffic on Link l and the resource allocated to DS-tree u on Link l. B_l,u is the resource allocated to DS-tree u on Link l. R_l,u^d is the resource requirement for the path from Node d to the sink of DS-tree u that passes through Link l. Four variants of α are now given using these notations:

    α_n = R_l,u^d / C_l,u   …… (11)

    α_p = B_l,u / C_l,u   …… (12)

    α_t = (Σ_{u=1}^{n_1} B_l,u + · · · + Σ_{u=1}^{n_δ} B_δ,u) / (C_l + · · · + C_δ)   …… (13)

    α_l = (Σ_{u=1}^{t} B_l,u) / C_l   …… (14)

The four equations above (α_n, α_p, α_t, α_l) correspond to no sharing, path sharing, tree sharing, and link sharing, respectively. These equations follow directly from the definitions of the sharing strategies in the previous section. In α_t, δ denotes the number of trees that share the set of links under consideration. By common sense, a smaller value of α gives a tighter value of the upper bound. Moreover, since we know the paths the flows will take, we can take advantage of this in calculating the end-to-end delay. However, the four equations for α reveal little about their relative order of magnitude. Because the values of α and Y_k are determined during the DS-tree construction process, and we know the final values only once the trees are constructed, they cannot be compared before the construction is completed. In a comparison of Equations (11) and (12), the inequality is obvious: α_p is never smaller than α_n, because R_l,u^d is expected to be less than or equal to B_l,u at best. The worst-case end-to-end delay obtained here is used in the heuristic algorithm for constructing DS-trees proposed in the next section.
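As a quick illustration of Equations (11)-(14), the helper below computes the four α variants for one link from the per-tree allocations. The data layout (dictionaries keyed by tree) and the treatment of the δ shared links in the tree-sharing case are our own simplifications of the notation above, meant only to show how the four ratios differ.

    def alpha_variants(C_l, B_l, R_d, shared_links):
        """C_l: capacity of link l; B_l: {tree u: B_{l,u}} on link l;
        R_d: bandwidth of the single path (Node d -> sink of tree u) of interest;
        shared_links: [(C, {tree: B}), ...] for links shared by several trees."""
        u = "u"                                   # the DS tree the flow belongs to
        C_lu = C_l - sum(b for t, b in B_l.items() if t != u)    # Eq. (8)

        alpha_n = R_d / C_lu                      # Eq. (11): no sharing
        alpha_p = B_l[u] / C_lu                   # Eq. (12): path sharing
        alpha_t = (sum(sum(bs.values()) for _, bs in shared_links)
                   / sum(c for c, _ in shared_links))            # Eq. (13)
        alpha_l = sum(B_l.values()) / C_l         # Eq. (14): link sharing
        return alpha_n, alpha_p, alpha_t, alpha_l

    # Example: a 155 Mbps link carrying two trees, 1.28 Mbps per path as in Section 7.
    print(alpha_variants(155e6, {"u": 2.56e6, "v": 1.28e6}, 1.28e6,
                         [(155e6, {"u": 2.56e6, "v": 1.28e6})]))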
6. ALGORITHM DESIGN ISSUES FOR CONSTRUCTING DS TREES

Although finding a set of DS-trees for a given network under the three constraints is NP-Complete [26], we need a practical algorithm that always produces a set of DS-trees in polynomial time, which may or may not have reduced requirements in both bandwidth and network delay. What we try to achieve with a Fuzzy ARTMAP algorithm is therefore to always produce a set of sink trees in polynomial time with the minimum network delay, with or without reduced constraints, for a given network. As for the link capacity, we consider it usually large enough to accommodate the path bandwidth demands. However, because we do not try to find an optimal path value, which might minimize the reduction in requirements, we claim neither that our heuristic algorithm produces the best possible DS-trees nor that it guarantees performance within a certain percentage of the optimum. The reduction has two cases: one for link capacity and the other for network delay.

6.1. RESOURCE ALLOCATION PROBLEM

A video stream needs adequate resources at the server, the network, and the client, subject to the constraints of the DiffServ architecture. It is processed at the source processor, transmitted over the network, and then processed at the destination. The cost of the corresponding resources, however, results in lower or higher utility for the DS trees. The algorithm always stops with a DS tree for each output port. So if the given network has t output ports, the algorithm iterates exactly t times. Since we wish to construct the DS-trees in polynomial time, we limit the execution of the relaxation loop to one pass per violation, so that the loop executions are bounded by the number of edges |E| for each DS-tree. This apportioning of the resources has to be done in real time, and the allocation time is a part of the serialization delay in developing the DS-trees. The ARTMAP [27] is elastic enough to find an existing cluster, or plastic enough to form a new QoS cluster corresponding to an SLA.

6.2. ARTMAP ARCHITECTURE

The ARTMAP neural network [28] is an architecture that can learn arbitrary mappings from digital inputs of any dimensionality to outputs of any dimensionality. The architecture applies incremental supervised learning of recognition categories and multidimensional maps in response to arbitrary sequences of analog or binary input vectors, which may represent fuzzy or crisp sets of features. The ARTMAP neural network consists of two ART modules, designated ARTa and ARTb, as well as an inter-ART module. Inputs are presented at the ARTa module, while the outputs are presented at the ARTb module. The inter-ART module includes a MAP field, whose purpose is to determine whether the mapping between the presented input and output is the desired one. The vigilance parameter constrains the maximum size of the compressed representations of the input/output patterns.
The range of the vigilance parameter is the interval [0, 1]. Small values of the vigilance parameter result in coarse clustering of the patterns, while large values lead to fine clustering. ART is a clustering algorithm that operates on vectors with discrete-valued elements. Adding a further layer of processing (the "MAP field") to ART yields a supervised clustering algorithm. During learning, ARTMAP is presented with training vectors that have been labeled according to the class to which each belongs. The ART module assigns the current training vector to a cluster.

ARTMAP Algorithm (a code sketch of this loop is given after the steps):

Step 1: Application, user, and network parameters are given;
Step 2: Out port = t; iterate for each output port;
Step 3: Construct the DS-tree for each constraint;
Step 4: If the DS-tree is not constructed then
Step 5:   if the link capacity is violated then
            reduce the bandwidth requirement and go to Step 3;
Step 6:   else if the network delay is violated then
            reduce the network delay requirement and go to Step 3;
          end;
Step 7: else the output is the DS-tree;
End;
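The sketch below renders the construction loop of Steps 1-7 in Python. It is a schematic outline only: construct_tree, violates_capacity, and violates_delay stand in for the Fuzzy-ARTMAP-guided construction and the checks against constraints (1)-(5), and the relaxation factors are placeholders rather than values prescribed by the paper.

    def build_ds_trees(output_ports, bw_req, delay_req, construct_tree,
                       violates_capacity, violates_delay, max_rounds=1):
        """One DS-tree per output port; requirements are relaxed on violation.
        max_rounds bounds the relaxation passes (the paper allows one pass per
        violation so that the work stays polynomial in the number of edges)."""
        trees = {}
        for port in output_ports:                        # Step 2: iterate ports
            bw, delay = bw_req[port], delay_req[port]
            tree = construct_tree(port, bw, delay)       # Step 3
            for _ in range(max_rounds):
                if tree is not None:
                    break
                if violates_capacity(port, bw):          # Step 5
                    bw *= 0.9                            # placeholder reduction
                elif violates_delay(port, delay):        # Step 6
                    delay *= 1.1                         # placeholder relaxation
                tree = construct_tree(port, bw, delay)   # back to Step 3
            trees[port] = tree                           # Step 7
        return trees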

If this procedure, termed match tracking, successfully assigns the training vector to a cluster with the correct class association, the weights defining that cluster are updated, making it more likely to capture that training vector in the future. If no cluster with the correct class association is found, a new cluster is created.
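For readers unfamiliar with the Fuzzy ART operations referred to above, the following minimal sketch shows the standard category choice, vigilance test, and fast-learning weight update from [28], applied to complement-coded inputs. The parameter names (choice parameter a, vigilance rho, learning rate beta) follow common Fuzzy ARTMAP usage; how a QoS request would be encoded as the input vector I is an assumption on our part, not something specified in this section.

    import numpy as np

    def fuzzy_art_step(I, weights, rho=0.8, a=0.001, beta=1.0):
        """I: complement-coded input in [0,1]^2M; weights: list of category vectors.
        Returns (category index, updated weights); creates a new category if the
        vigilance test fails for every existing one."""
        order = sorted(range(len(weights)),
                       key=lambda j: -np.minimum(I, weights[j]).sum()
                                     / (a + weights[j].sum()))        # choice T_j
        for j in order:
            match = np.minimum(I, weights[j]).sum() / I.sum()         # vigilance
            if match >= rho:
                weights[j] = beta * np.minimum(I, weights[j]) + (1 - beta) * weights[j]
                return j, weights
        weights.append(I.copy())                                      # new cluster
        return len(weights) - 1, weights

    # Example: a QoS request (bandwidth, delay) normalised to [0,1], complement-coded.
    x = np.array([0.2, 0.6]); I = np.concatenate([x, 1 - x])
    print(fuzzy_art_step(I, [])[0])   # the first request forms category 0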
7. EXPERIMENTAL EVALUATION

In this section, we evaluate by simulation the performance of the proposed network-based endpoint admission control and the guarantees that each strategy can provide, in order to evaluate the DS-tree's effect on the end-to-end delay. The performance is measured in terms of worst-case end-to-end delay, maximum possible resource allocation, admission probability, and resource utilization. Finally, we investigate how this performance is affected by the configuration of the given network. We compare our approach's performance with that of a strategy that uniformly allocates a fixed amount of bandwidth to every link of the given network (we call this strategy flat-fixed). This strategy uses Shortest Path First (SPF) routing when setting up a flow. The performance comparison is thus between two systems: one that allocates resources evenly to each link and selects paths using SPF when a flow request arrives, and one that uses the DS-tree approach for both resource allocation and path selection. The network-based endpoint admission control proposed in this paper cannot be compared directly to any host-based endpoint admission control in terms of performance, because the latter mainly focuses on the probing technique, which cannot guarantee both end-to-end delay and bandwidth for the flow's lifetime. First of all, Figure 2 shows the topology of the MCI ISP backbone network, which we use throughout the experiments. In the first set of experiments, all routers (black and gray) can act as edge routers and core routers as well. This means that flows can arrive and leave at every router. In a second set of experiments, the black nodes are edge routers and the gray nodes are core routers. We consider two classes of traffic in this simulation study: one real-time class and one non-real-time class.

Figure 2: A network example

We assume that all flows in the real-time class have a fixed packet length of 640 bits (RTP, UDP, and IP headers and 2 voice frames) [8] and a flow rate of 32 Kbps. The end-to-end delay requirement of all flows is fixed at 100 ms. Thus, the input traffic of each flow is constrained by a leaky bucket with parameters β = 640 bits and ρ = 32 Kbps. This kind of QoS requirement is similar to Voice-over-IP. All links in the simulated network have the same capacity of 155 Mbps.
For the DS-tree system we assume that the bandwidth requested for any path between a pair of ingress and egress nodes is the same. In other words, from a node's point of view, it has the same amount of bandwidth available to every egress node. In this experiment, the amount of resource for a path between any two nodes is 1.28 Mbps, which accommodates 40 real-time flows. For the flat-fixed system, roughly 10.5% of the link capacity is allocated for the real-time flows, so every link can accommodate 510 real-time flows. This number comes from the following fact: after the construction of the DS-trees, the total sum of resources allocated to each link in the DS-tree system, divided by the total number of links, gives roughly 16.3 Mbps, which corresponds to 10.5% of a 155 Mbps link.
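The figures quoted above follow from simple arithmetic, reproduced below as a sanity check (the 510-flow figure for the flat-fixed system additionally assumes that the same 16.3 Mbps per-link budget is applied uniformly).

    beta, rho = 640, 32_000          # leaky bucket: burst (bits), rate (bps)
    path_alloc = 1.28e6              # per-path allocation in the DS-tree system
    link_capacity = 155e6
    avg_link_alloc = 16.3e6          # average per-link allocation after tree construction

    print(path_alloc / rho)                      # 40.0 flows per path
    print(avg_link_alloc / link_capacity)        # ~0.105, i.e. 10.5% of link capacity
    print(avg_link_alloc / rho)                  # ~509, i.e. about 510 flows per link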
Table 1: Worst-case end-to-end delay for each resource-sharing strategy (msec.)

    Item                     Flat-Fixed   No-Sh.   Path-Sh.   Tree-Sh.   Link-Sh.
    Worst-case e2e delay     3.138        1.609    4.204      4.204      13.580

Worst-case End-to-end Delay: Table 1 shows the worst-case end-to-end delay (in msec.) for each resource-sharing strategy. As can be seen, the no-sharing strategy outperforms flat-fixed, while link sharing is significantly worse. We interpret this as follows. According to the equations for α (Equations (11) to (14)), the end-to-end delay becomes larger with increasing bandwidth allocated on a link for real-time traffic, while it gets smaller when the link capacity grows. The reason the smallest value occurs with no sharing is that R_l,u^d is much smaller than B_l,u. In the case of link sharing, however, Σ_{u=1}^{t} B_l,u may be large enough on some links to produce the worst (largest) value. In flat-fixed systems the resources are distributed evenly over the network; flat-fixed therefore produces smaller end-to-end delays than link sharing. So in terms of performance, the sink-tree paradigm, except for link sharing, provides similar or tighter worst-case end-to-end delays.

Maximum Possible Resource Allocation: We define the maximum possible resource allocation as the ratio of the total resource allocated for real-time traffic over the total capacity of all links, under the condition that the worst-case end-to-end delay reaches the worst-case end-to-end delay requirement (Table 2). In the table, UBAC stands for "Utilization Based Admission Control", which was presented in [23].

Table 2: Maximum possible resource allocation for each resource-sharing strategy

    Item     Flat-Fixed   UBAC    No-Sh.   Path-Sh.   Tree-Sh.   Link-Sh.
    Alloc.   0.33         1.609   0.57     0.49       0.49       0.39

The UBAC approach illustrates the benefit of good path selection: for each possible pair of source and destination nodes, the paths are predefined off-line, and among them the most promising one (the one that provides the smallest end-to-end delay) is selected as the runtime path. The bandwidth is allocated uniformly over all links, however. UBAC therefore provides the limit on the maximum possible bandwidth allocation for the flat-fixed system with optimal routing. It is important to note here that all of the proposed network-based endpoint admission control systems, except for the link-sharing strategy, outperform the UBAC system. As can be seen, UBAC gives about a 50% performance improvement over the flat-fixed system, whereas no-sharing provides a 70% improvement over flat-fixed. As expected, the more resources we share, the smaller the maximum possible resource allocation gets, since sharing loosens the end-to-end delay.

Admission Probability: We simulate the admission control behavior of the system by simulating flow requests and establishments at varying rates with a constant average flow lifetime. Requests for flow establishment form a Poisson process with rate λ, while flow lifetimes are exponentially distributed with an average lifetime of 180 seconds per flow. Source and destination edge routers are chosen randomly. In the flat-fixed system, the path is selected using SPF routing; in the DS-tree system the path is pre-defined by a DS-tree. Figure 3 shows the admission probabilities for the real-time class in the five cases as a function of the call arrival rate. In the figure, the legend CA1-flat stands for "call admission probability with Configuration 1 (black and gray nodes are all both edge and core nodes) in the flat-fixed system", CA1-no for "call admission probability with Configuration 1 in the no-sharing system", and so on.

Figure 3: Admission probabilities (call admission probability vs. call arrival rate)
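A minimal version of this simulation set-up is sketched below: Poisson arrivals at rate lam, exponential holding times with a 180 s mean, and a single-pool capacity check standing in for the per-strategy admission test. It reproduces only the methodology; the topology, routing, and sharing-strategy logic of the real experiments are abstracted into the capacity parameter, whose value here is purely illustrative.

    import heapq, random

    def admission_probability(lam, capacity, mean_hold=180.0, horizon=600.0):
        """Fraction of flow requests admitted under a simple concurrent-flow limit."""
        random.seed(1)
        t, in_service, departures = 0.0, 0, []
        admitted = requests = 0
        while t < horizon:
            t += random.expovariate(lam)                 # next Poisson arrival
            while departures and departures[0] <= t:     # release finished flows
                heapq.heappop(departures); in_service -= 1
            requests += 1
            if in_service < capacity:                    # admission test
                in_service += 1; admitted += 1
                heapq.heappush(departures, t + random.expovariate(1 / mean_hold))
        return admitted / requests

    for rate in (20, 75, 150):                           # calls per second
        print(rate, admission_probability(rate, capacity=12_000))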
same. The differences between the four resource-
sharing strategies in the sink-tree system are not look confusing in Figure 3, is that the admission
negligible either. At 90% chances of flow admission probability of link-sharing goes down below that of
point, link-shared strategy outperforms no-sharing by tree-sharing at around 80% chances of admission
about 10%, with a little more signaling overhead. If point. We answer this with Figure 5. As can be seen,
this signaling overhead is light enough to scale, this the number of links per accepted call (average
improvement is important. resources per accepted call, which is acronym-ed
NOL in the figure) is decreasing in all resource-
1
sharing strategies. Interestingly, the link-sharing’s
.9
NOL goes over that of tree-sharing. This must have
resulted in the higher admission probabilities for
Resource-utilization-rate

.8
tree-sharing by the same argument (resource
.7
fragmentation) mentioned above.
.6

.5
.4

Number-of-links-per-accepted-call
.3
.2

.1
0 0 20 40 60 80 100 120 140 160
call-arrival- rate

Figure 4: Link utilization

Figure 4 shows the utilization ratios for the five


cases. To be fair, the link utilization is defined by 0 20 40 60 80 100 120 140 160
the resource consumed by the flows in the network call-arrival- rate
divided by the total resources allocated. So this link
utilization is different measure of performance from Figure 5: Average amount of resource per
the maximum possible resource allocation described accepted call
earlier in this section. The former effectively
measures the efficiency of allocation, while the In fact, we do not put an emphasis on this
latter provides a limit on the maximum allowable because in most real cases, we expect the network
resources under the worst-case end-to-end delay will be administered at 90% or more chances of
constraint. By comparing Figure 4 and Figure 3, we admission points. We consider this as a network
know that the point of 90% admission probability in configuration-specific result.
no-sharing system corresponds to 75-calls-per-
second. At this call arrival rate, the flat-fixed system
1
utilizes resources only around 70%, while no-sharing
does around 85%. In other words, even no-sharing
.9
strategy’s efficiency is better than that of the flat-
fixed system by about 20% in link resource
Call-admission-probability

.8
utilization. The point is that although the total
amount of allocated resources are the same, the .7
utilization depends on how the resources are
allocated. Therefore the admission probability will .6
be better with an efficient resource allocation. The
big difference between 50% improvement in .5

admission probability and 20% improvement in .4


resource utilization comes from the fact that the 0 20 40 60 80 100 120 140 160
number of links requested by a call ranges from 1 to call-arrival- rate
the longest length of the simple path in the DS-tree.
This accommodates more calls requesting the small Figure 6: Admission probabilities with
fragment of resources rather than the calls requesting Configuration 2
longer paths (many number of links). So the number
of calls accepted gets higher. In overall, the proposed Network-configuration Effect on Resource Sharing:
network-based endpoint admission control shows To illustrate the effect, we ran the same set of
significant improvement in admission probability simulations with Configuration 2, where the core
over the flat-fixed system. One thing, which might nodes are not the edge nodes any more. That means
For this particular experiment, the black nodes in Figure 2 are the edge nodes (1, 2, 3, 4, 5, 6, 7, 8, 9, 0), while the gray nodes are core nodes (10, 11, 12, 13, 14, 15, 16, 17, 18). Figures 6, 7, and 8 show the admission probabilities, the resource utilization rates, and the average resources per accepted call, respectively.

Figure 7: Link utilization with Configuration 2 (resource utilization rate vs. call arrival rate)

Figure 8: Average amount of resource per accepted call with Configuration 2 (number of links per accepted call vs. call arrival rate)

By comparing Figure 3 and Figure 6, we observe that with Configuration 2 the admission probabilities are higher and the differences between the resource-sharing strategies are smaller. For example, link sharing shows a 10% improvement over no-sharing in Figure 3, but only a 3% improvement in Figure 6. The reason is that, because only the edge routers can be sources and destinations, the resources are less fragmented than in Configuration 1. In other words, in Figure 1, Nodes 2 and 3 are no longer edge nodes, so there are no flow requests at these nodes with Configuration 2. The benefit of sharing is therefore reduced; by the same argument, there is only a little difference in link resource utilization in Figure 7 as well.
Overall, it is clear that there is only a little difference in the admission probabilities and resource utilization among the four resource-sharing strategies with Configuration 2. We expect that with more pure core routers the difference would be even smaller. Observing these results, we can say that if the number of edge nodes is relatively small compared to the total number of nodes, resource sharing does not contribute much to admission probability or resource utilization.

8. CONCLUSIONS

In this paper, we proposed a network-based endpoint admission control which: 1) requires the minimum possible signaling overhead, and 2) provides the minimum possible latency for flow set-up, zero routing overhead, high admission probability, high resource utilization, and a tight end-to-end packet delay upper bound. This is achieved by: 1) structuring resources off-line with DS trees reflecting the user traffic requirements, and 2) at run time, referring only to the edge router (the entrance of the network) at which a flow arrives, which automatically keeps track of the resources available downstream up to the destination. This approach is better suited to the many real-time applications like Voice-over-IP than host-based endpoint control. For evaluation purposes we compared it with the flat-fixed system, where all links have the same capacity and a fixed portion of the link capacity is allocated uniformly for real-time traffic over the network, with SPF routing.
The simulation study shows that the proposed system with the Fuzzy ARTMAP algorithm provides a 50% improvement in the tightness of the worst-case end-to-end delay and, in terms of maximum possible resource allocation, a 75% improvement, both with respect to the flat-fixed system. Simulation studies of the admission probabilities show that even the no-resource-sharing strategy outperforms the flat-fixed system by up to 50%. We also showed the effect of the network configuration on resource sharing through simulations with different network configurations: there is not much benefit in resource sharing if the number of edge nodes is small compared to the total number of nodes in the given network. In the future, we will extend the DS-tree paradigm to a global scale so that very large networks will be able to support real-time applications in a scalable fashion.

REFERENCES:

[1] R. L. Cruz, "A Calculus for Network Delay, Part I and Part II," IEEE Trans. on Information Theory, Jan. 1991.
[2] L. Zhang, S. Deering, D. Estrin, S. Shenker, and D. Zappala, "RSVP: A New Resource Reservation Protocol," IEEE Network Magazine, vol. 31, no. 9, pp. 8-18, September 1993.
[3] R. Braden, D. Clark, and S. Shenker, "Integrated Services in the Internet Architecture," RFC 1633, Jun. 1994.
[4] A. Raha, S. Kamat, and W. Zhao, "Guaranteeing End-to-End Deadlines in ATM Networks," The 15th IEEE International Conference on Distributed Computing Systems, 1995.
[5] P. P. White, "RSVP and Integrated Services in the Internet: A Tutorial," IEEE Communications Magazine, May 1997.
[6] C. Li, R. Bettati, and W. Zhao, "Static Priority Scheduling for ATM Networks," IEEE Real-Time Systems Symposium (RTSS'97), San Francisco, CA, Dec. 1997.
[7] K. Nichols, V. Jacobson, and L. Zhang, "A Two-bit Differentiated Services Architecture for the Internet," Internet-Draft, Nov. 1997.
[8] T. J. Kostas, M. S. Borella, I. Sidhu, G. M. Schuster, J. Grabiec, and J. Mahler, "Real-Time Voice Over Packet-Switched Networks," IEEE Network Magazine, Jan./Feb. 1998.
[9] T. Ferrari, W. Almesberger, and J. L. Boudec, "SRP: A Scalable Resource Reservation Protocol for the Internet," In Proceedings of IWQoS, pp. 107-117, Napa, CA, May 1998.
[10] A. Terzis, L. Zhang, and E. Hahne, "Reservations for Aggregate Traffic: Experiences from an RSVP Tunnels Implementation," The 6th IEEE International Workshop on Quality of Service, Napa Valley, CA, pp. 23-25, May 1998.
[11] P. Pan and H. Schulzrinne, "YESSIR: A Simple Reservation Mechanism for the Internet," In Proceedings of NOSSDAV, Cambridge, UK, Jul. 1998.
[12] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," RFC 2475, Dec. 1998.
[13] Y. Bernet et al., "A Framework for Differentiated Services," Internet-Draft, IETF, Feb. 1999.
[14] G. Feher, K. Nemeth, and M. Maliosz, "Boomerang: A Simple Protocol for Resource Reservation in IP Networks," IEEE Real-Time Technology and Applications Symposium, June 1999.
[15] I. Stoica and H. Zhang, "Providing Guaranteed Services without Per-Flow Management," ACM SIGCOMM, Sep. 1999.
[16] L. Wang, A. Terzis, and L. Zhang, "A New Proposal for RSVP Refreshes," The 7th IEEE International Conference on Network Protocols (ICNP'99), Oct. 1999.
[17] P. Pan, E. L. Hahne, and H. Schulzrinne, "BGRP: A Tree-Based Aggregation Protocol for Inter-Domain Reservations," Bell Labs Technical Memorandum, Work Project No. MA30034004, MA30034002, File Case 20564, Dec. 1999.
[18] Y. Bernet, "The Complementary Roles of RSVP and Differentiated Services in the Full-Service QoS Network," IEEE Communications Magazine, Feb. 2000.
[19] G. Bianchi, A. Capone, and C. Petrioli, "Throughput Analysis of End-to-End Measurement-Based Admission Control in IP," In Proceedings of IEEE INFOCOM, Tel Aviv, Israel, Mar. 2000.
[20] C. Cetinkaya and E. Knightly, "Egress Admission Control," In Proceedings of IEEE INFOCOM, Tel Aviv, Israel, Mar. 2000.
[21] V. Elek, G. Karlsson, and R. Ronngren, "Admission Control Based on End-to-End Measurements," In Proceedings of IEEE INFOCOM, Tel Aviv, Israel, Mar. 2000.
[22] B. Choi, D. Xuan, C. Li, R. Bettati, and W. Zhao, "Scalable QoS Guaranteed Communication Services for Real-Time Applications," The 20th IEEE International Conference on Distributed Computing Systems, Taipei, Taiwan, pp. 180-187, April 2000.
[23] D. Xuan, C. Li, R. Bettati, J. Chen, and W. Zhao, "Utilization-Based Admission Control for Real-Time Applications," The IEEE International Conference on Parallel Processing, Canada, Aug. 2000.
[24] L. Breslau, E. W. Knightly, S. Shenker, I. Stoica, and H. Zhang, "Endpoint Admission Control: Architectural Issues and Performance," In Proceedings of ACM SIGCOMM 2000, Stockholm, Sweden, Aug.-Sep. 2000.
[25] Z. Zhang, Z. Duan, L. Gao, and Y. T. Hou, "Decoupling QoS Control from Core Routers: A Novel Bandwidth Broker Architecture for Scalable Support of Guaranteed Services," In Proceedings of ACM SIGCOMM 2000, Stockholm, Sweden, Aug.-Sep. 2000.
[26] B. Choi and R. Bettati, "Efficient Resource Management for Hard Real-Time Communication over Differentiated Services Architectures," The 7th IEEE International Conference on Real-Time Computing Systems and Applications, Cheju, Korea, Dec. 2000.
[27] R. Singh, "Study and Analysis of Some QoS Issues in the Internet," Ph.D. Thesis, Lucknow University, Lucknow, India.
[28] G. A. Carpenter, S. Grossberg, N. Markuzon, J. H. Reynolds, and D. B. Rosen, "Fuzzy ARTMAP: A Neural Network Architecture for Incremental Supervised Learning of Analog Multidimensional Maps," IEEE Transactions on Neural Networks, vol. 3, no. 5, pp. 698-712, Sept. 1992.
