
Objective FP7-ICT-2007-1-216041/D-3.2.

The Network of the Future

Project 216041

“4WARD – Architecture and Design for the Future Internet”

Virtualisation Approach: Evaluation and Integration

Date of preparation: 10-01-11 Revision: 1.0

Start date of Project: 08-01-01 Duration: 10-06-30
Project Coordinator: Henrik Abramowicz
Ericsson AB
Document: FP7-ICT-2007-1-216041-4WARD/D-3.2.0
Date: January 14, 2010 Security: Confidential
Status: Final Version: 1.0

Document Properties:
Document Number: FP7-ICT-2007-1-216041-4WARD/D-3.2.0
Document Title: Virtualisation Approach: Evaluation and Integration
Document Responsible: Zhao, Liang (UniHB)

Target Dissemination Level: PU

Status of the Document: Final
Version: 1.0

This document has been produced in the context of the 4WARD Project. The research leading to these results has
received funding from the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant
agreement n◦ 216041.
All information in this document is provided “as is” and no guarantee or warranty is given that the information is
fit for any particular purpose. The user thereof uses the information at its sole risk and liability.
For the avoidance of all doubts, the European Commission has no liability in respect of this document, which is
merely representing the authors view.



Network Virtualisation is one of the main topics of investigation of the FP7 Future Internet
project 4WARD, as an enabler for network innovation. In the first phase of the project the
roles, the architecture, the interfaces and the general life cycle of virtual networks were investi-
gated. The second phase is concentrating on focused feasibility studies and prototyping. Topics
of main interest for feasibility studies are the provisioning and embedding of virtual networks
for fixed and mobile networks, signalling and control for establishing and managing virtual
networks, virtual routers, and virtualisation of wireless systems. Among others, inter-provider
issues and scalability are being addressed. Prototyping is concentrating on the integration of
individual parts and also on integrating prototypes of other work packages of the project. These
feasibility studies and the integrated prototyping are described in this deliverable as a prelimi-
nary version of the final deliverable D3.2.1 which will be available at the end of the project.

Keywords: network virtualisation, integrated feasibility studies, prototyping, virtual network
provisioning, router virtualisation, virtualisation of wireless systems



Executive Summary
The concept of Network Virtualisation as an overall vision of virtualising complete networks that
can realize independent architectures and coexist with the current Internet is the focus of the
4WARD project in WP 3. This corresponds to the following main objective: instantiation and
dependable inter-operation of different networks on a single infrastructure in a commercial setting.
Additionally, Network Virtualisation can also be regarded as a migration strategy towards new
network architectures.
To use network virtualisation in a commercial setting, new concepts and algorithms for pro-
visioning, embedding and management need to be defined and evaluated with respect to their
usability and scalability. This deliverable gives a first evaluation of these concepts (Section 2.1).
The second focus of this deliverable is the evaluation of methods for the virtualisation of individual
network resources. Here, the focus is on router virtualisation (Section 2.2) and the virtualisation
of wireless resources (Section 2.3), including scheduling for wireless resources in general, and
specifically LTE and WiMAX as case studies. Additionally, the second phase of 4WARD is con-
centrating on integrated feasibility tests and prototyping. This applies to both the integration of
several evaluation activities within the work package, as well as joint activities together with other
4WARD work packages. From the beginning of the project, it was foreseen to carry out several
such integrated feasibility tests and prototyping.
Section 3 describes these integration tests. The first one looks at inter-provider aspects by con-
necting several sites as infrastructure providers and includes the virtual network embedding and
instantiation as a joint feasibility test. The second testbed is being used for performance evalua-
tions and additionally serves as an enhanced demonstrator platform. This platform is being used
for joint demonstrations with other work packages (joint tasks, Section 4). This includes e.g. a
demonstration of new network architectures developed by WP2 (NewAPC) (Section 4.1), and a
joint demonstration with the new concepts developed for the Network of Information (WP6 - Net-
Inf) (Section 4.3). A separate evaluation has been performed for the decentralised self-organising
management of virtual networks based on situation awareness for dynamic virtual network provi-
sioning in cooperation with WP4 (InNetMgmt) (Section 4.2).
The deliverable has been structured in such a way that only a high level view and some major
examples of the results are given in the main part, and most of the detailed results are given in the
comprehensive appendix. The results reported in this deliverable will be updated at the end of the
project as deliverable D3.2.1.



1 Introduction 1

2 Evaluation Activities 3
2.1 Provisioning, Management and Control . . . . . . . . . . . . . . . . . . . . . . . 4
2.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.1.2 Virtual Network Embedding . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.1.3 Mobility Aware Embedding . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.1.4 Virtual Network Provisioning . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1.5 Virtual Link Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.1.6 Resource Allocation Monitoring and Control . . . . . . . . . . . . . . . . 22
2.1.7 End User Attachment to Virtual Networks . . . . . . . . . . . . . . . . . . 27
2.1.8 Interdomain Aspects: Management and Control . . . . . . . . . . . . . . . 29
2.1.9 Shadow VNets Feasibility tests . . . . . . . . . . . . . . . . . . . . . . . 30
2.2 Router Virtualisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.2.2 High Performance Software Virtual Routers on Commodity Hardware . . . 35
2.2.3 Resource Allocation in Xen-based Virtual Routers . . . . . . . . . . . . . 41
2.2.4 Conclusions and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.3 Wireless Link Virtualisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.3.2 Performance Analysis of Wireless Access Network Virtualisation . . . . . 47
2.3.3 WMVF: Wireless Medium Virtualisation Framework . . . . . . . . . . . . 49
2.3.4 CVRRM Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.3.5 LTE Wireless Virtualisation . . . . . . . . . . . . . . . . . . . . . . . . . 56
2.3.6 WiMAX Virtualisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
2.3.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

3 Targeted Integrated Feasibility Tests 67

3.1 Inter-Provider VNet Testbed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.1.1 Scenario 1: VNet Provisioning . . . . . . . . . . . . . . . . . . . . . . . . 69
3.1.2 Scenario 2: VNet Management . . . . . . . . . . . . . . . . . . . . . . . . 69
3.1.3 Conclusion and Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.2 VNet Embedding and Instantiation joint feasibility test . . . . . . . . . . . . . . . 69
3.2.1 Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.3 VNet Performance and Interconnections . . . . . . . . . . . . . . . . . . . . . . . 71
3.3.1 VNet Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.3.2 Interconnection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

4 Targeted Integrated Feasibility Tests for Joint Tasks 77



4.1 Joint Testbed of WP2 and WP3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77

4.1.1 Prototyping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.2 Decentralised Self-Organising Management of Virtual Networks . . . . . . . . . . 79
4.2.1 Supporting Dynamic VNet Provisioning with Situation Awareness . . . . . 80
4.3 Joint prototyping of WP3, WP5, and WP6 . . . . . . . . . . . . . . . . . . . . . . 82
4.3.1 Prototyping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

5 Summary and Outlook 85

5.1 Summary of Initial Evaluation Results . . . . . . . . . . . . . . . . . . . . . . . . 85
5.1.1 Provisioning, Management and Control . . . . . . . . . . . . . . . . . . . 85
5.1.2 Virtualisation of Resources . . . . . . . . . . . . . . . . . . . . . . . . . . 86
5.2 Preliminary Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
5.3 Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

Appendices 89

A Provisioning, Management and Control 91

A.1 Mobility-aware Embedding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
A.1.1 Simulation Model of centralised framework . . . . . . . . . . . . . . . . . 91
A.1.2 Results from centralised framework . . . . . . . . . . . . . . . . . . . . . 92
A.1.3 Definition of Distributed Protocol Messages . . . . . . . . . . . . . . . . . 94
A.1.4 Extra Results of the MADE Protocol performance . . . . . . . . . . . . . 95

B Wireless Link Virtualisation 97

B.1 Analytical Modeling of a Single VNet Service . . . . . . . . . . . . . . . . . . . . 97
B.1.1 Service Model of a Single User . . . . . . . . . . . . . . . . . . . . . . . 99
B.2 WMVF: Wireless Medium Virtualisation Framework . . . . . . . . . . . . . . . . 104
B.2.1 WMVF Simulation Model . . . . . . . . . . . . . . . . . . . . . . . . . . 104
B.2.2 WMVF Simulation Setup . . . . . . . . . . . . . . . . . . . . . . . . . . 105
B.3 CVRRM Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
B.3.1 Evaluation Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
B.3.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
B.4 LTE Wireless Virtualisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
B.4.1 LTE Virtualisation Simulation Model . . . . . . . . . . . . . . . . . . . . 111
B.4.2 Simulation Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
B.5 WiMAX Virtualisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
B.5.1 Virtualised BTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
B.5.2 Modified ASN Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

List of Abbreviations, Acronyms, and Definitions 117

Bibliography 119


1 Introduction
4WARD work package 3 (VNet) set out to develop concepts and technologies to enable the con-
current deployment and management of multiple networks — possibly using entirely different ar-
chitectures — on a common, shared infrastructure. This is seen as a way to build a more adaptable
and evolvable Internet and to overcome the impasse [1] that currently impedes the deployment of
new and innovative network architectures and technologies in the Internet. Network virtualisation
is the main technical tool to realise this approach. Network virtualisation in 4WARD refers to the
virtualisation of entire network infrastructures, and the end-to-end provisioning and management
of complete virtual networks rather than just individual network components or parts of a network.
The technical work in WP3 encompasses two main technical areas:

• A framework for the systematic provisioning, deployment, and management of complete

end-to-end virtual networks on demand. The framework takes network virtualisation from
an individual resource view towards a complete network view. It comprises, among others,
functions for the embedding of virtual networks (i.e. the process of mapping requested
topologies to virtual nodes and links hosted in the virtualised infrastructure), provisioning of
virtual networks, link setup, various interfaces and protocols for management and control,
network description schemes, as well as debugging facilities.

• The efficient virtualisation of network resources including routers, wired and wireless links,
and potentially any other types of network resources. Efficient virtualisation is a prerequisite
for the implementation of a virtualised infrastructure for the Future Internet. In particular,
virtualised routers are a crucial resource. While a multitude of technologies for the virtuali-
sation of wired links exists (also see D-3.1.1 for more background), virtualisation of wireless
links is not well understood today. However, as wireless links are increasingly becoming in-
tegral parts of many networks, an approach aiming at comprehensive network virtualisation
must take them into account.

In contrast to some other proposals currently discussed in the networking research community,
the concepts developed in WP3 are not just aimed at supporting experimental research, but specif-
ically also at network virtualisation in a commercial setting. To this end, WP3 defines functional
provider roles that are designed to support a variety of scenarios and business models. A model
with three major roles was chosen: An Infrastructure Provider (InP) maintains virtualised physical
resources. A Virtual Network Provider (VNP) constructs virtual networks using virtual resources
made available by one or more InPs or other VNPs. Finally, a Virtual Network Operator (VNO)
operates, controls and manages the virtual networks in order to offer services. A business entity
can take on one or more of these roles, which facilitates a wide range of business strategies. How-
ever, while some preliminary business analysis of the approach has been performed in 4WARD in
cooperation with WP1, WP3 focuses on the enabling technologies. Anticipated constraints arising
in a competitive commercial environment (for example limited trust between different business
entities) are taken into account and impose a number of additional requirements on the technology.



Within the overall 4WARD project, network virtualisation serves as enabler for the deployment
and concurrent operation of new networking paradigms developed in the other technical work
packages, such as information-centric networks or new transport abstractions. Furthermore, new
network management concepts developed in 4WARD may be applied within the VNet framework.
The work plan for WP3 is structured into two main phases: First, the development of archi-
tectural concepts required to achieve the technical goals of the work package; and second, the
evaluation of these concepts in terms of feasibility and performance by means of analytical meth-
ods, simulations, experimentation, and prototyping. To address the cross-workpackage aspects
mentioned in the previous paragraph, the evaluation phase also includes joint evaluation activities
together with other 4WARD work packages.
The architecture and concepts developed in the first phase, along with background information,
motivation, and identified business aspects, were described in the previous deliverable D-3.1.1.
The present deliverable describes activities and results from the evaluation phase as far as they are
available at this point. While brief introductions are given for major concepts, the reader is referred
to D-3.1.1 for more detailed descriptions.
Since the evaluation activities are still ongoing for the remainder of the project, the results shown
in this document reflect the current state of the work and do not represent a final conclusion yet. In
particular, integration activities concerning multiple components developed in the work package,
as well as in cooperation with other 4WARD work packages, have not been completed, as this is
the focus of ongoing project work. However, a number of evaluation results for individual concepts
are available and presented in this document. A conclusion and final assessment of the feasibility,
scalability, and performance of the overall approach will be presented in the deliverable D-3.2.1 at
the end of the project.
The document is structured as follows. In Section 2, feasibility tests for a number of individual
aspects and components of the VNet architecture are described. The aspects are roughly structured
along the lines of the two main areas mentioned above: First, components of the VNet provisioning
and management framework are addressed. Next, evaluations of concepts for the virtualisation
of resources are presented. The focus is on the areas of router virtualisation and wireless link
virtualisation. There is no separate section on the evaluation of wired link virtualisation techniques,
since a multitude of such approaches already exist and WP3 mainly focused on integrating them
into its framework.
The subsequent Section 3 describes targeted integration tests. While WP3 is not developing a
single integrated prototype of all developed components, integrated tests are being performed in
two testbed groups. The first is an inter-provider testbed that is being built to show the feasibility
of the VNet architecture in a competitive inter-provider environment; this testbed is distributed
over several partner sites that take the role of infrastructure providers according to the provider
model developed by WP3. The second testbed is used to evaluate various performance aspects of
the VNet framework.
Finally, Section 4 describes evaluation activities that are being performed in cooperation with
other 4WARD work packages. The activities in this context include a joint WP2/3 testbed, a
conceptual evaluation of the application of In-Network-Management mechanisms developed in
WP4 for the VNet framework, and a joint prototype in cooperation with WP5 and WP6.
In the interest of readability, an attempt was made to streamline the main text and to keep out
details that are not strictly necessary to understand the big picture. Such details have instead been
moved to the appendices for reference by the interested reader.


2 Evaluation Activities
This section provides a description of the individual feasibility tests that exist for the different
components and aspects of the VNet framework. Before introducing the individual feasibility tests
in the remainder of this section, we will present a conceptual overview of the feasibility tests, how
they relate to each other, and how they fit into the overall VNet framework. For this purpose, we
will refer to Figure 2.1.

[Figure 2.1 is not reproduced here; it depicts the five WP3 feasibility-test areas and their
components: resource description (RDL and embedding toolset), VNet embedding (embedding
algorithms), virtualisation of resources (router and link virtualisation), management of VNets
(Out-of-VNet access, inter-provider signalling), and operation of VNets (end-user attachment,
VNet debugging), connected by change requests.]

Figure 2.1: Overview of WP3 Feasibility Tests

The overview of feasibility tests can be structured into the following five areas (cf. Figure 2.1),
which are roughly aligned with the lifecycle of virtual networks. The following enumeration gives
a brief description of each area.

• The Resource Description Framework provides a language to describe virtual network topo-
logies and to express the manifold constraints that might be imposed on these topologies. A
formal description of the VNet is the first step of the creation process; such descriptions were
drafted within WP3, and tools to ease the handling of VNet descriptions were created (refer
to section 5.1 of deliverable D-3.1.1).



• Using the previously created VNet description and information on available resources, the
VNet Embedding process can take place, which includes candidate discovery, candidate se-
lection, and candidate binding. For this purpose, numerous algorithms were developed and
evaluated. These have been reported earlier in section 5 of D-3.1.1.

• Virtualisation and reservation of physical resources enable the binding of retained candidates.
To demonstrate the viability of the overall framework, WP3 considered multiple aspects of
resource virtualisation, i.e., the virtualisation of routers, the virtualisation of wired links,
and the virtualisation of wireless links, and evaluated them with regard to certain aspects in
testbeds and in simulation environments (refer to section 4 of D-3.1.1).

• The Management of VNets is another aspect of the VNet framework that is reflected in the
feasibility tests. We are conceptually evaluating aspects like the Out-of-VNet access (see
section 3.4 of deliverable D-3.1.1) and management signalling interfaces between InPs.

• The Operation of VNets subsumes the attachment of end users as well as the debugging
and optimisation of existing VNets. These aspects are also implemented and evaluated in
feasibility tests.

The two dotted arrows in Figure 2.1 indicate operation and management cycles in the VNet frame-
work and point out that the embedding of virtual networks may be adapted during their lifetime
transparently by InPs and VNPs for optimisation purposes on the one hand and on explicit request
by VNOs on the other hand.
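To make the areas above more concrete, the following is a hypothetical sketch of a VNet request of the kind a resource description language would capture and the embedding process would consume. The actual RDL syntax is defined in D-3.1.1; all field names here are illustrative assumptions, not the project's format:

```python
# Hypothetical VNet request description: a topology of virtual nodes
# and links with resource constraints. Field names are illustrative
# only; the real RDL syntax is specified in deliverable D-3.1.1.
vnet_request = {
    "vnet_id": "example-vnet-1",
    "nodes": [
        {"id": "v1", "cpu": 2.0, "role": "router"},
        {"id": "v2", "cpu": 1.0, "role": "end-host"},
    ],
    "links": [
        {"src": "v1", "dst": "v2", "bandwidth_mbps": 100, "max_delay_ms": 20},
    ],
}

def validate(request):
    """Check that every link endpoint refers to a declared virtual node."""
    node_ids = {n["id"] for n in request["nodes"]}
    return all(l["src"] in node_ids and l["dst"] in node_ids
               for l in request["links"])
```

A description of this shape is the input to the discovery and embedding steps discussed in the following sections; validation of referential consistency, as sketched here, would be a natural first check before any matching takes place.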
In the following sections, the individual feasibility tests and results will be presented. For some
of them, more detailed material is offered to the interested reader in the corresponding appendices
of this document.

2.1 Provisioning, Management and Control

2.1.1 Introduction
This section provides performance evaluation and feasibility tests for the provisioning, manage-
ment and control of virtual networks. The evaluation concerns mainly the embedding algorithms
while the feasibility tests cover the overall virtualisation framework components. Individual feasi-
bility tests have been conducted and reported. The conducted and envisaged integrated feasibility
tests are the subject of dedicated sections on integrated and joint tests.
The structure of the document largely follows the overall provisioning and management
process as depicted in Figure 2.2. The provisioning process starts with the expression of VNet
requests in the form of graphs by users (service providers or VNet Operators) corresponding to
Step 1.
The VNet provisioning process continues with the following additional key phases: (a) VNet
discovery (resource matching and VNet request splitting) (b) VNet embedding (resource selection)
(c) VNet Instantiation (resource allocation and binding):

• Resource Discovery, Matching and Request Splitting, achieved by the VNet Provider, is
based on similarity relationships between VNet request descriptions (specified by users) and
substrate resource descriptions (offered by InPs). The matching process relies on a resource



Figure 2.2: VNet Provisioning and Management

description framework, conceptual clustering techniques and similarity based matching al-
gorithms addressed in D3.1.1. Using the matching, VNet Providers split the requested virtual
network graph across multiple Infrastructure Providers (Step 2).

• VNet Embedding (Step 3 and 4), achieved by the Infrastructure Provider, consists in select-
ing for each requested virtual node and link the best substrate resources identified during the
matching phase. The VNet Embedding section reports the implementation and feasibility
tests conducted for the initial VNet embedding process.

• VNet Instantiation (Step 5), executed by the InP, consists in reserving and allocating the
selected substrate resources to set up the VNet. This is reflected in dedicated sections on
VNet instantiation and VNet link setup.
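The three phases above can be summarised as a pipeline from matching and splitting, via embedding, to instantiation. The following is a minimal, hypothetical illustration of that flow; all function and field names are assumptions and the real components are far more elaborate (see D-3.1.1):

```python
def provision_vnet(request, infrastructure_providers):
    """Illustrative provisioning pipeline: split -> embed -> instantiate.
    All names are hypothetical; the actual 4WARD matching uses resource
    descriptions and similarity-based algorithms, not this greedy rule."""
    # Step 2 (VNP): matching and request splitting across InPs.
    # Here, trivially: assign each node to the InP with the most free capacity.
    partitions = {}
    for node in request["nodes"]:
        inp = max(infrastructure_providers, key=lambda p: p["free_capacity"])
        partitions.setdefault(inp["name"], []).append(node)
        inp["free_capacity"] -= node["cpu"]
    # Steps 3-4 (InP): each InP embeds its partition, i.e. selects
    # substrate resources for the requested virtual nodes.
    embeddings = {name: [(n["id"], f"substrate-{i}") for i, n in enumerate(nodes)]
                  for name, nodes in partitions.items()}
    # Step 5 (InP): instantiation -- reserve and allocate the selection.
    return {"partitions": partitions, "embeddings": embeddings,
            "state": "instantiated"}
```

The point of the sketch is the division of responsibility: splitting is a VNP decision across providers, while embedding and instantiation happen inside each InP on its own substrate.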

When the VNet is in operation (Step 6), it must be maintained, controlled and managed so all
established and active contracts are respected. In the operation mode, dynamic provisioning (in
the sense of adaptation, dynamic configuration and resource allocation), monitoring, control and
management have to be ensured. The document reflects these functions and steps by adopting a
similar organisation as listed below:

• Dynamic Embedding: As running VNets are subject to dynamic variations due to failures
(or resource degradation requiring replacement in the case of fixed nodes) or mobility (of
users as well as resources), two subsections describe the dynamic embedding algorithms
developed by the project. An adaptive VNet embedding algorithm (provided in the VNet
Embedding Section) and a Mobility-aware embedding algorithm (presented in the Mobility-
aware Embedding Section) are developed and the performance evaluation and feasibility
tests conducted to assess these algorithms are reported.



• Management of VNet (Step 7): A control and signalling framework is required to achieve
the virtual network instantiation and the final setup. The virtual networks can finally be
activated, monitored and managed. The sections dedicated to VNet Link Setup, Resource
Allocation Monitoring and Control, and Interdomain Aspects: Management and Control have
been organised according to these phases. They report the results of implementation and
feasibility tests on each of these components.

• Operation of VNet: Two sections are dedicated to running VNets. The section End User
Attachment to Virtual Networks provides a feasibility test concerning automated and secure
attachment of end users to VNets. More general adaptation, replacement, maintenance and
management functions are handled through the establishment of shadow virtual networks
that address more significant changes. The section Shadow VNets Feasibility Tests, which
focuses on the optimisation and debugging of running VNets, covers these aspects and concludes
the provisioning, management and control part of this document.

The ensuing sections concern the virtualisation technologies themselves and report the results
of feasibility tests related to node, link and wireless virtualisation. These describe the key enablers
and components for virtualisation. Scheduled joint and integrated feasibility tests are the subject
of section 3.

2.1.2 Virtual Network Embedding

VNet Embedding (or mapping) consists in assigning VNet requests (VNet nodes and links form-
ing a target network topology) to a specific set of virtual nodes and virtual paths extracted from
substrate resources. Optimal VNet embedding, which aims at maximising the number of provisioned
virtual networks, is known to be an NP-hard problem [2]. VNet embedding has for this reason
often been tackled using heuristic algorithms assigning VNets to substrate resources using greedy
algorithms in [3, 4, 5], customised algorithms in [4], iterative mapping processes in [6] and coor-
dinated node and link mapping in [7]. Since the underlying physical network can change due to
node failures, migration, mobility and traffic variations, adaptive embedding is required in addition
to initial VNet establishment. Two embedding algorithms in [8] ensure such VNet embeddings. A
distributed initial embedding algorithm is first used to create and allocate virtual networks. Once
virtual networks have been instantiated and activated, a distributed fault-tolerant embedding algo-
rithm maintains virtual network topologies according to established contracts. These algorithms
have been implemented and evaluated, and are the subject of the feasibility tests reported in this
document.

Components
This section provides the hardware and software components used in our experimentation to im-
plement and evaluate the proposed embedding algorithms.

Hardware The distributed VNet embedding algorithms are implemented and tested over the
GRID5000 experimental platform [9] that can emulate a real substrate network. GRID5000 is
a French national experimental facility of about 5000 processors shared among 9 clusters, lo-
cated in various French regions and interconnected by a dedicated fiber-optic network of 10Gb/s.
GRID5000 allows on-demand and automatic in-depth reconfiguration of all the processors. Users



can reserve a pool of resources for time slots of a few hours. Within this pool, the users can deploy,
install, launch, and execute their own operating system images and protocol stacks. In our case,
the embedding algorithms were deployed on these GRID5000 machines and ran as a distributed
multi-agent cooperative system ensuring dynamic selection of physical resources to compose vir-
tual networks.

Development framework and tools The implementation of the distributed embedding algo-
rithms relies on a Multi-Agent based approach. Autonomous agents are deployed in the GRID5000
machines to emulate the virtual nodes and to handle the distributed embedding algorithms. The
Java Agent Development Framework (JADE) [10] is used to implement the autonomous agents.
The deployed agents in GRID5000 nodes exchange messages and cooperate to execute the dis-
tributed algorithms. A declarative Agent Communication Language (ACL) [11] is used to define
and specify the interactions and messages between the agents. The GT-ITM tool [12] is used to
randomly generate topologies for both VNet requests and substrate networks as a proof of concept.

Multi-Agent based Embedding Algorithms

To create, maintain and adapt virtual networks, the Multi-Agent based algorithm is used to ac-
complish distributed negotiation and synchronisation between the substrate or virtual nodes. The
Multi-Agent based embedding framework is composed of autonomous agents integrated in sub-
strate nodes. These autonomous agents communicate, collaborate and interact with each other to
plan collective selection of resources for embedding and maintaining VNets. These agents are
used to realize an initial embedding algorithm (for the initial virtual network creation) as well as
a fault-tolerant algorithm for adaptive embedding (to handle dynamic variations once the virtual
network has been activated and to maintain topologies).

Distributed initial embedding algorithm In the initial embedding, VNet requests need to
be first decomposed into sets of elementary star (or hub-and-spoke) clusters. The hub-and-spoke
clusters are composed of a central node (i.e hub) to which multiple adjacent nodes (i.e. spokes) are
connected. Spokes may also represent the hubs of other clusters. The mapping of a VNet topology
to a substrate network is realised by assigning sequentially its hub-and-spoke clusters. The VNet
embedding algorithm used for selecting and assigning the hub-and-spoke clusters to the substrate is
distributed. Each substrate node designated as “root node” (i.e. the node with maximum available
resources) will be responsible for selecting and mapping one cluster to the substrate. The root
node determines the set of substrate nodes able to support the spoke nodes based on shortest path
or multi-commodity flow algorithms. The root nodes communicate, collaborate and interact with
each other to plan collective VNet embedding decisions and to accomplish distributed localised
VNet embedding. The proposed distributed algorithm can be viewed as a cooperative task executed
jointly by all root nodes via message exchange.
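As an illustrative sketch of the two steps above (not the project's actual agent code: a single process stands in for the cooperating root nodes, resource capacities are reduced to a single CPU value, and the shortest-path/multi-commodity flow link mapping is omitted; all names are ours):

```python
def decompose_into_stars(vnet_adj):
    """Decompose a VNet topology (adjacency dict) into hub-and-spoke clusters:
    repeatedly pick the node touching the most uncovered edges as a hub; its
    uncovered neighbours become spokes (spokes may later be hubs themselves)."""
    uncovered = {(a, b) for a in vnet_adj for b in vnet_adj[a] if a < b}
    clusters = []
    while uncovered:
        degree = {}
        for a, b in uncovered:
            degree[a] = degree.get(a, 0) + 1
            degree[b] = degree.get(b, 0) + 1
        hub = max(degree, key=degree.get)
        spokes = [b if a == hub else a for a, b in uncovered if hub in (a, b)]
        uncovered -= {(min(hub, s), max(hub, s)) for s in spokes}
        clusters.append((hub, spokes))
    return clusters

def embed_cluster(cluster, substrate_cpu, demand):
    """Greedily map one hub-and-spoke cluster onto the substrate: the node
    with the maximum available CPU acts as root and hosts the hub; spokes go
    to the next-best distinct substrate nodes with sufficient capacity."""
    hub, spokes = cluster
    ranked = sorted(substrate_cpu, key=substrate_cpu.get, reverse=True)
    mapping = {}
    for vnode in [hub] + spokes:
        for snode in ranked:
            if snode not in mapping.values() and substrate_cpu[snode] >= demand[vnode]:
                mapping[vnode] = snode
                break
    return mapping
```

In the real algorithm each cluster is handled by a different root node and the mapping decisions are coordinated via message exchange; the sketch only shows the selection logic.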

Distributed fault-tolerant embedding algorithm A distributed fault-tolerant embedding

algorithm has also been implemented and evaluated. The goal of this algorithm is to maintain
active VNet topologies by selecting new nodes and links to handle node failures or inability of
nodes to keep fulfilling their contracts. The algorithm relies on monitoring and failure detection
mechanisms and uses the Multi-Agent framework to ensure distributed fault-tolerant embedding.



The substrate agents exchange messages and cooperate to plan collective reselection decisions.
When a substrate node has to be replaced or fails, the distributed fault-tolerant embedding algo-
rithm (running in the substrate nodes) selects an alternative one. If a substrate node, supporting a
spoke node, fails then the root node selects an alternative substrate node to maintain the topology.
If a root node, supporting a hub node, fails then a substrate node supporting a spoke node will
be selected as the root node (of the hub and spoke cluster). The initial embedding algorithm is
executed to update the link mapping using a shortest path algorithm (for unsplittable paths) or a
multi-commodity flow algorithm (for splittable paths).
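The re-selection rules above can be sketched as follows (a sequential simplification of the distributed agent behaviour; the data layout and names are illustrative, and the message exchange and link re-mapping steps are omitted):

```python
def repair_cluster(cluster, failed, substrate_cpu, used):
    """Re-select substrate nodes after a failure: a failed spoke host is
    replaced by the free substrate node with the most available resources;
    if the root host itself fails, one of the spoke hosts is promoted to
    root and a fresh substrate node takes over the vacated spoke."""
    free = [n for n in substrate_cpu if n not in used and n != failed]
    replacement = max(free, key=substrate_cpu.get)  # most available resources
    if failed in cluster["spokes"]:
        cluster["spokes"] = [replacement if s == failed else s
                             for s in cluster["spokes"]]
    elif failed == cluster["root"]:
        new_root = cluster["spokes"][0]             # promote a spoke host
        cluster["root"] = new_root
        cluster["spokes"] = [replacement if s == new_root else s
                             for s in cluster["spokes"]]
    used.discard(failed)
    used.add(replacement)
    return cluster
```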
The scenarios used for assessment of the initial and adaptive algorithms are now described.

Scenario 1: Initial VNet Embedding

Testbed Setup In scenario 1, the initial embedding consists of mapping a VNet request to a substrate network. The GT-ITM tool is used to randomly generate a VNet request with 25 virtual nodes (each pair of virtual nodes is randomly connected with probability 0.5). The Central Processing Unit (CPU) capacities of the VNet nodes are chosen uniformly in the range from 0 to 20 processing power units. The required bandwidths of the VNet links are drawn from a continuous random variable uniformly distributed between 0 and 50 bandwidth units. These choices are arbitrary, since the main objective is to evaluate the embedding algorithms.
The GRID5000 platform is used to emulate the substrate network. Several random substrate
topologies with different sizes (from a few up to 100 nodes) are generated over the GRID5000 platform
using the GT-ITM tool. The available CPU and bandwidth of substrate components (nodes and
links) are uniformly distributed between 50 and 100. Two kinds of substrate topology are distin-
guished: full mesh topology (substrate connectivity is 100%) and partial mesh topology (average
substrate connectivity is 50%).
The initial embedding algorithm performance is evaluated in terms of time delay and number of
messages required to map a given VNet request to a substrate.
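The request generation described above can be reproduced in a few lines (a sketch only; GT-ITM is a separate tool, so this merely mirrors the stated parameters: 25 nodes, connection probability 0.5, CPU uniform in [0, 20], link bandwidth uniform in [0, 50]; the function name is ours):

```python
import random

def random_vnet_request(n_nodes=25, p_link=0.5, cpu_max=20, bw_max=50, seed=None):
    """Generate a random VNet request mirroring the Scenario 1 parameters:
    per-node CPU demands and per-link bandwidth demands."""
    rng = random.Random(seed)
    cpu = {v: rng.uniform(0, cpu_max) for v in range(n_nodes)}
    links = {(a, b): rng.uniform(0, bw_max)
             for a in range(n_nodes) for b in range(a + 1, n_nodes)
             if rng.random() < p_link}
    return cpu, links
```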

Figure 2.3: The time delay taken by the embedding algorithm to assign one VNet request in both
full (FMS) and partial (PMS) mesh substrate topologies: centralised vs distributed



Results Figure 2.3 depicts the time delay taken by the distributed initial embedding algorithm
to map one VNet topology (with 25 virtual nodes) for full (FMS) and partial (PMS) mesh substrate
topologies. The performance results are compared to those achieved by a centralised VNet embed-
ding algorithm. The time delays required to map a VNet in a centralised manner (upper curves) are higher than the time delays needed to map a VNet in a distributed fashion (lower curves).
The additional delay, in the case of the centralised approach, is due to the time delay needed for
a central coordinator to gather all information about the substrate links. This delay depends on
the substrate topology (full versus partial mesh topology). This is different from the distributed
approach where each substrate node already maintains all parameters (e.g. capacity and weight) of
the links directly connected to its network interfaces. The results show approximately a factor of
1.5 and 2.5 improvement in time delay for partial and full mesh topologies respectively, when the
number of substrate nodes is in the range from 25 to 100.

Figure 2.4: Number of messages used by the embedding algorithm to map one VNet request: centralised vs distributed

As illustrated in Figure 2.4, the number of messages exchanged to map a VNet, in both the centralised and the distributed case, corroborates the delay results shown in Figure 2.3.
Figure 2.5 depicts the time delay taken by the main algorithm to map multiple VNet topologies simultaneously to a full mesh substrate, in both the centralised and the distributed embedding case. The time delay required to map 10 VNet requests (each with 25 virtual nodes) in a centralised manner (upper curve) is higher than the time delay needed to map the same 10 VNet requests in a distributed manner (lower curve). The decentralised VNet embedding achieves high-speed parallel processing of several VNet requests. A significant improvement of up to a factor of 14 in time delay is obtained when the number of substrate nodes increases from 25 to 100.

Figure 2.5: The time delay taken by the embedding algorithm to assign ten VNet requests: centralised vs distributed

Scenario 2: Adaptive VNet Embedding

Testbed Setup The objective of this scenario is to emulate a substrate node failure as a special case of dynamic variation to which VNet embedding must react while VNets are active and running. The adaptive embedding algorithms used to handle mobility-induced variations are addressed
and evaluated in a companion and separate section.
The VNet is first allocated and instantiated in the GRID5000 platform and is activated. An agent
running the distributed fault-tolerant embedding algorithm is deployed in each selected GRID5000
node. To emulate a failure, a GRID5000 node is disconnected deliberately. This enables evalua-
tion of the fault-tolerant embedding algorithm performance in terms of time delay and number of
messages required to select an alternative GRID5000 node to maintain the VNet.

Figure 2.6: The time delay taken by the fault-tolerant embedding algorithm to repair a node failure:
centralised vs distributed

Results Figure 2.6 depicts the time delay needed to repair, i.e. reassign, a VNet when a change occurs in the substrate (e.g. a node failure). The time delay required by the distributed fault-tolerant embedding algorithm to localize the affected substrate node and select a new one (lower curve) is smaller than the time delay needed by a centralised fault-tolerant embedding algorithm to react to local failures (upper curve). The distributed algorithm provides up to a factor of 10 improvement in time delay when the number of substrate nodes increases from 25 to 100.

2.1.3 Mobility Aware Embedding

The main intention of this work is to take into account mobility of resources in the physical sub-
strate and evaluate the performance of the associated embedding mechanism. The main objectives
are to maximize the flexibility (maintaining actual mappings whenever implied physical nodes
are moving) and efficiency (map as many VNets as possible) of the embedding algorithm and to
identify specific criteria for dealing with mobile nodes within the mapping algorithm (e.g., preferring the shortest path, to avoid increasing the number of nodes that may move and break the route, or updating the physical substrate status more often, so as to have up-to-date information about node locations). The first version of this Mobility Aware Embedding was developed in C, taking a previous implementation as a basis [13]. All effort was focused on the adaptation
of the embedding process to solve problems induced by mobility (Node and Link Re-mapping
algorithms). The simulation results highlighted the success ratio of the embedding process over scenarios with varying degrees of node mobility. Conclusions showed that
Path Splitting and Migration techniques help to improve the index of Completed VNet Requests
from the VNet Provider’s perspective, also in mobile substrates. In the second phase of this work
the C-based development was migrated to a NS2 simulation environment (implementation in C++
language). The main issue here was to apply a distributed approach where all nodes are at the same
level and VNet requests can arrive at every node in the substrate. For an extensive description of
this approach, please see [14].

Simulation Model

In the first place, a centralised approach to the mobility-aware embedding procedure was developed, based on previous work presented in [15]. It was assumed that a central super node is present in the substrate and can manage all information associated with the rest of the nodes in the physical network.
A detailed explanation of the architecture and the simulation model can be found in A.1.1. At a
second stage, a distributed environment was desired, so it was necessary to integrate the whole embedding logic within every node belonging to the physical substrate, and to implement an embedding protocol to allow information exchange among nodes. This whole second phase
has been implemented within the NS2 simulation environment, where some modifications in the
implementation of wireless nodes were needed. To see the protocol message definition, please go
to A.1.3.

Link Remapping Algorithm A substrate link, which may belong to several virtual links, breaks. Both substrate nodes attached to the broken link send Mobility Error Messages towards the opposite edge of their mapped virtual link(s). Since the error is propagated to both edges, in order to avoid a "race" between the two virtual nodes trying to repair the virtual link, only the virtual node with the lower ID sends a new Request message indicating the repairing purpose. The opposite edge enters a repair state and, if a Mapping Message does not arrive after a while, the virtual link is considered broken and the physical node releases the whole VNet. Released VNets are marked as 'stopped'.



Node Remapping Algorithm When a physical node hosting a location-specific virtual node moves outside the requested cell, it needs to find another physical node in the correct cell (the one with the highest amount of available resources) and ask it to join the affected VNet (i.e. to re-allocate the lost virtual node) through a Reallocate Message. The moved substrate node also sends a Mobility Error Message in order to release the previous virtual links and let its previous virtual neighbours know that they must enter a repair state.

Link Migration Procedure This technique consists of re-allocating the currently mapped virtual links to a new set of resources, with the goal of reducing the cost (the sum of the BW allocated in all substrate paths hosting the virtual link) of embedding each virtual link. Link migration is only applied to virtual links of long-lived VNets:
• If lifetime(VNet) >200s then 1 migration is made in the middle (1/2 of the lifetime)
• If lifetime(VNet) >400s then 3 migrations are carried out at 1/4, 1/2 and 3/4 of the lifetime
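The lifetime-based schedule above can be sketched as (the function name is ours; the 200 s/400 s thresholds follow the rules just stated):

```python
def migration_times(lifetime: float) -> list:
    """Return the points in time (seconds from VNet activation) at which
    link migration is attempted, following the lifetime thresholds above."""
    if lifetime > 400:      # long-lived VNet: migrate at 1/4, 1/2 and 3/4
        return [lifetime / 4, lifetime / 2, 3 * lifetime / 4]
    if lifetime > 200:      # medium-lived VNet: a single migration halfway
        return [lifetime / 2]
    return []               # short-lived VNet: no migration

# Example: a VNet living 600 s is migrated at 150 s, 300 s and 450 s.
```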
Migration is applied without interruption of the VNet operation. Extra BW resources are reserved for both the current and the candidate substrate path until a decision is taken. When the substrate node decides whether or not the new path is worthwhile, it sends a Migration Release Message to the path with the higher cost, in order to release those resources.

Scenarios
All simulations, results and conclusions regarding the centralised implementation of this work can
be found in [16]. In this document, the more recent, distributed implementation, the Mobility Aware Distributed Embedding (MADE) Protocol, is explained in detail.
The initial definition of our scenarios is based on [15] and is also presented in [16]. In this case all scenarios are generated with N=40 wireless nodes, placed randomly in a bi-dimensional square area. The area covers a 500x500m grid with 2x3 cells. Each node starts with 100 units of CPU capacity and 100% of its BW. Since wireless WiFi nodes have been implemented in NS2, the initial BW of each node is the available throughput defined for WiFi in NS2. The same applies to the radio coverage of the nodes, which is around 120m (defined by the WiFi implementation in NS2).

Simulation Setup The evaluation of the Mobility-Aware Distributed Embedding Protocol has
been carried out using the NS2 environment, running extensive sets of configurations. The wireless
technology chosen for the simulations has been WiFi. The time considered from when the application level sends a packet until the data is transmitted by the antenna is 10 ms, due to the ARP and RTS message exchanges. Background application data among virtual nodes has not been simulated, since MADE protocol messages are assumed to have pre-emptive priority. All simulated scenarios are square
areas divided into grids, where wireless nodes are placed randomly at a first stage. Taking into
account that NS2 considers radio coverage of 120 m for WiFi, we run our tests in a 500 x 500 m
map, so that each node will have around seven neighbours on average. We aim at finding certain
relations between the performance of our MADE protocol and the grade of movement present in
the substrate. For this purpose, we run several simulations for the same scenario, but varying the
ratio of mobile nodes. The speed of a node is given by a random value between 1 (slow) and 4
(high speed). Our mobility pattern consists of stochastic linear movements of nodes from their current locations to different random locations on the map. Short and long distance movements are
mixed, and 5 is the maximum number of movements for a single node during a single simulation.
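A movement schedule of this kind could be generated as follows (a sketch under the stated parameters: 40 nodes, a 500x500 m area, speeds between 1 and 4, at most 5 movements per node; the function and its fields are our own illustration, not NS2 code):

```python
import random

def generate_movements(n_nodes=40, area=500.0, mobile_ratio=0.66,
                       max_moves=5, seed=None):
    """Generate stochastic linear movements: each mobile node moves up to
    max_moves times from its current position to a random point on the map,
    at a random speed between 1 (slow) and 4 (high speed)."""
    rng = random.Random(seed)
    mobile = rng.sample(range(n_nodes), int(n_nodes * mobile_ratio))
    moves = []
    for node in mobile:
        for _ in range(rng.randint(1, max_moves)):
            dest = (rng.uniform(0, area), rng.uniform(0, area))
            speed = rng.uniform(1, 4)
            moves.append((node, dest, speed))
    return moves
```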



Results For a detailed overview of the results from the centralised implementation, see A.1.2.
In this section we present the main results in order to evaluate the proposed distributed embedding protocol and show the benefits of implementing mobility management in the virtualisation of mobile substrates. Results are also presented for different combinations of splitting ratio and with/without migration. Figure 2.7 - left - shows the number of VNet requests successfully served (completed) depending on the splitting ratio applied (% of VNet requests that allow split links). Each line in the graph corresponds to a certain % of mobile nodes (0%, 33%, 66% and 100%), with (Mig) or without applying the migration process described. All simulations were run with 30% of the VNet requests asking for a specific location of nodes.

Figure 2.7: Number of completed VNet Requests over: the splitting ratio (left) and the ratio of
Loc-aware VNet Requests (right)

From these results, it can be easily derived that both the splitting and migration techniques
improve the performance of the embedding in terms of completed (successfully served) VNet Requests. The efficiency increase is largest in the case of 0% mobility, which is the most favourable situation since no dynamicity is involved, but even with 100% mobile nodes we can observe that the splitting ratio makes the number of completed VNet Requests rise. Nevertheless, the improvement from splitting decreases as mobility increases, since splitting creates a higher number of substrate paths and hence a higher probability of broken links. The number of completed VNets decreases as the mobility grade increases, which was expected. The number of broken links increases with mobility, and therefore so does the number of link re-mapping procedures. Sometimes a link may not be repaired, forcing the release of the request (rejected) or a new mapping attempt after a random time. Migration makes it possible to optimise the embedding, reducing the number of substrate paths and hops per virtual link. This way, the number of VNet Requests to re-map decreases, improving the performance of the embedding. Figure 2.7 - right - presents the same analysis (number of completed VNet Requests) but depending on the % of requests asking for specific locations for nodes. Simulations were made with 66% mobility. Location constraints decrease the number of VNet Requests served. The main reason for the reduction shown in the graph is that the set of valid nodes to map a location-aware virtual node decreases as the number of cells increases. Finally, the smaller the size of a cell, the higher the number of reallocations needed for the same mobility ratio.
Figure 2.8 - left - shows delay times involved in the distributed embedding, depending on the



Figure 2.8: Embedding delay over number of demanded virtual nodes (left) and Migration balance
over splitting ratio (right)

number of virtual nodes requested by the VNet Request. In the graph, only the delays caused by the exchange of MADE protocol messages are represented. As expected, the elapsed time to
complete the mapping of a VNet Request increases with the number of nodes required. We can
observe that the average time for the largest VNet Request is less than 2 seconds, though, so we can
conclude that the response times of the embedding protocol proposed do not limit the performance
of virtualisation. Figure 2.8 - right - represents the balance obtained from the migration process for two scenarios (0% and 100% mobility), with and without migration, where different splitting ratios were applied. The migration balance presented is the difference between the cost of embedding the completed VNets and the income demanded (in terms of virtual resources) by the whole set of completed requests. If all VNets could be completed and embedded without hops in the substrate paths, the income and the cost would be the same. Since the optimum embedding is reached by mapping virtual links onto multiple substrate paths and multiple hops, the cost of allocating a VNet Request will normally be higher than its income. We define the Balance as the difference between the overall cost and the overall income (over all completed VNets in the simulation). Observing Figure 2.8 - right - we can state that migration techniques reduce the overall cost of embedding VNets, so that revenue is increased. In the static case, the benefit (in terms of balance) is extremely good, although the difference is not as great in the 100% mobility case. This is because mobility increases repairing and re-mapping, situations in which a sort of link migration is already performed.
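Under the cost and income definitions above, the balance can be computed as in this small sketch (names and data layout are ours; cost is the BW summed over every hop of every substrate path hosting a virtual link, income is the BW originally requested):

```python
def link_cost(substrate_paths):
    """Cost of one embedded virtual link: BW reserved on every hop of every
    substrate path hosting it (splitting may use several paths)."""
    return sum(bw * hops for bw, hops in substrate_paths)

def balance(vnets):
    """Overall cost minus overall income over all completed VNets.
    Each VNet is a list of virtual links; each virtual link is
    (requested_bw, [(allocated_bw, hop_count), ...])."""
    cost = sum(link_cost(paths) for vnet in vnets for _, paths in vnet)
    income = sum(req_bw for vnet in vnets for req_bw, _ in vnet)
    return cost - income

# A virtual link of 10 BW units split over two 2-hop substrate paths
# costs 5*2 + 5*2 = 20, so its balance contribution is 20 - 10 = 10.
```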
Extra results regarding delays and overhead analysis of the MADE protocol can be found in A.1.4.

Conclusion
From a large set of tests, we have derived a number of promising results regarding performance.
Path splitting and migration techniques have been successfully incorporated into the proposed
MADE protocol, and their benefits have been demonstrated for mobile substrates as well. The mobility management procedures (link and node re-mapping algorithms) have been shown to achieve high ratios of repairing and re-mapping during operation time, without introducing unaffordable



delays. Evaluation results have also been presented. A distributed approach is always suspected
of introducing high overhead in the network and lacking scalability. In our analysis, monitoring
of the delay times and message overhead did not reveal major problems. We expect to continue
this analysis with further validation tools, in order to demonstrate the scalability of the proposed
solution. Also, as part of our future work, we intend to incorporate an exhaustive analysis of the
time scales present in the embedding. For that purpose, we plan to develop simulations where
virtual application traffic is present, and the pre-emptive priority of the MADE protocol could be
tested in a more complex environment.

2.1.4 Virtual Network Provisioning

We explore virtual network provisioning, and generally the space of network virtualisation, using a
prototype implementation which runs on a medium-scale experimental infrastructure. Our imple-
mentation is consistent with the Network Virtualisation Architecture; the VNet Operator, the VNet Provider and two Infrastructure Providers (InPs) reside in separate physical nodes, allowing the provisioning of, and management access to, fully-operational VNets. The management node within each InP is responsible
for the coordination of VNet provisioning on behalf of the InP. Figure 2.9 gives an overview of the
prototype implementation, showing the supported functionality and interactions between the ac-
tors; further details can be found in [17, 18]. Our primary goal is to demonstrate that already today
we have all the necessary ingredients needed to create a paradigm shift towards full network vir-
tualisation. In particular, we use our prototype in order to: (i) show that VNets can be provisioned
in short time, even when they span multiple InPs, (ii) explore the scalability of our architecture
during VNet provisioning, and (iii) investigate the tussles between architectural decisions, such as
information disclosure against information hiding or centralisation of control against delegation of

Figure 2.9: Prototype Overview


Hardware
The prototype is implemented on the Heterogeneous Experimental Network (HEN) [19], which
includes more than 110 computers connected by a single non-blocking, constant-latency Gigabit
Ethernet switch. We mainly use Dell PowerEdge 2950 systems with two Intel quad-core CPUs,
8GB of DDR2 667MHz memory and 8 or 12 Gigabit ports.

Software
The prototype (see Figure 3.3), which is implemented in Python, synthesizes existing node and
link virtualisation technologies to allow the provisioning of VNets on top of shared substrates. We
used Xen 3.2.1, Linux and the Click modular router package [20] (version 1.6 but with
patches eliminating SMP-based locking issues) with a polling driver for packet forwarding. We
relied on Xen’s paravirtualisation for hosting virtual machines, since it provides adequate levels of
isolation and high performance [21]. VNet Operator, VNet Provider, the InP management node
and the InP nodes interface via remote procedure calls which are implemented using XML-RPC.
We use an XML schema for the description of resources with separate specifications for nodes and
links. During VNet embedding, for node and link assignment we obtain the required CPU load
and link bandwidth measurements using loadavg and iperf, respectively.
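Since the prototype's measurement code is not listed here, the CPU-load side of this step can be sketched as follows (a hedged illustration for a Linux substrate node; the function names are ours and the iperf-based bandwidth measurement is omitted):

```python
def parse_loadavg(text):
    """Parse the contents of /proc/loadavg into the 1-, 5- and 15-minute
    load averages, as used as the CPU-load input to node assignment."""
    one, five, fifteen = text.split()[:3]
    return float(one), float(five), float(fifteen)

def node_cpu_load(path="/proc/loadavg"):
    """Read the current load averages of a Linux substrate node."""
    with open(path) as f:
        return parse_loadavg(f.read())
```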
The substrate topology is constructed off-line by configuring VLANs in the HEN switch. This
process is automated via a switch-daemon which receives VLAN requests and configures the
switch accordingly. For the inter-connection of the virtual nodes, we currently set up IP-in-IP
tunnels using Click encapsulation and decapsulation kernel modules, which are configured and in-
stalled on-the-fly. Substrate nodes that forward packets consolidate all Click forwarding paths onto
a common domain (i.e., Dom0) avoiding costly context switches; hence, the achievable packet for-
warding rates are very high [21]. Note that each virtual node creation/configuration request (within
each InP) is handled by a separate thread, speeding up VNet embedding and instantiation. Similarly, in the presence of multiple InPs, separate threads allow VNet provisioning to proceed in parallel across the InPs.

Experimental Results

We explore VNet provisioning with a single InP and with multiple InPs, separately. The single-InP study uncovers the efficiency of VNet provisioning without the implications of InP selection and VNet splitting among InPs. Furthermore, we investigate the efficiency of centralised control with a varying number of InP nodes. The multiple-InP study sheds light on the VNet Provider role, and particularly on how it should interact with the participating InPs during VNet provisioning. In addition, we show how VNet provisioning scales in the presence of multiple InPs.

Single InP We consider two experimental scenarios, as shown in Figure 2.10. In both scenarios,
we use the depicted physical topology composed of 10 substrate nodes. First, we measure the time
required to provision the VNet of Figure 2.10(a), including VNet assignment (i.e., resource discov-
ery and VNet embedding), the instantiation of virtual nodes, setting up the tunnels and establishing
management access to the virtual nodes. Table 2.1 provides the corresponding measurements for
VNet provisioning and assignment. In particular, VNet assignment includes: (i) node assignment,
where the requested virtual nodes are mapped to the substrate nodes (preferably to different nodes)
based on virtual machine specifications (e.g., location, number of physical interfaces, etc.) and a



combination of node stress and CPU load, and (ii) link assignment, where each requested virtual
link is subsequently mapped to a substrate path based on the shortest-path algorithm.

(a) Scenario 1 (b) Scenario 2

Figure 2.10: Experimental scenarios with a single InP

Table 2.1: VNet Provisioning Time (sec) – Scenario 1

min avg max stddev
VNet Provisioning 3.13 3.23 3.32 0.07
VNet Assignment 0.31 0.33 0.34 0.01

Our results show that a VNet can be provisioned rapidly, with most time being spent within
the InP, especially for virtual machine creation and configuration. More precisely, it takes 3.23
seconds on average across 20 runs with a small standard deviation of 0.07 to provision the specific
virtual network. In addition, the VNet assignment is concluded within just 330 msec on average.
In order to show how VNet provisioning scales, we measure the time required to provision VNets
with an increasing number of nodes/links (see Figure 2.10(b)). Figure 2.11(a) shows that VNet
provisioning scales linearly as the requested virtual networks become larger. The increase in provisioning time for VNets with more than 10 nodes/links occurs because some substrate nodes are bound to host more than one virtual machine – a procedure which induces further delays.
In our design and implementation, we rely on centralised control within the InP (i.e., a single
management entity). In order to show its efficiency, we initiate VNet requests with the topology
depicted in Figure 2.10(a), varying the number of substrate nodes from 5 to 50. Figure 2.12
shows that VNet assignment scales linearly with the number of physical nodes. Hence, for large
substrate topologies, one either has to increase the number of management nodes as the InP scales
or establish a network-wide authoritative configuration manager which subsequently delegates the
instantiation of the individual nodes across multiple configurators.



(a) single InP (b) multiple InPs

Figure 2.11: VNet provisioning scalability

Multiple InPs First, we study two resource discovery scenarios among multiple InPs. An InP
typically will not expose detailed resource information; however, it can advertise the services it
provides including some basic resource information. This allows the VNP to retrieve such infor-
mation from all participating InPs and eventually create a resource/service discovery framework
which facilitates VNet provisioning and reduces the exchange of resource information, as we see
later on. Alternatively, we consider the case where the InPs are unwilling to expose any resource
information; therefore, the VNP has to negotiate with them using resource queries. Similarly to our
single-InP experimental study, we run the experimental scenario of Figure 2.13 with the requested
VNets split between two InPs. Figure 2.14 illustrates the number of messages exchanged for each
resource discovery scenario vs. the number of virtual nodes/links. Resource advertising involves
interactions between the VNP and each InP management node, which are solely dependent on the number of participating InPs. In contrast, negotiation via resource queries results in a notable communication overhead. Relying on resource advertising, we explore the scalability of VNet provisioning with two InPs. Figure 2.11(b) clearly shows the strong scalability properties of VNet provisioning; we anticipate similar scalability levels with more InPs.

Conclusion
Prototyping the VNet provisioning framework uncovered which technological ingredients are nec-
essary for its implementation and how they have to be combined to provision and operate VNets.
We used the prototype implementation to show that the primary components of the proposed Net-
work Virtualisation Architecture are technically feasible, allowing the fast provisioning of oper-
ational VNets. We also demonstrated experimentally that VNet provisioning scales linearly with
larger VNets for given substrate topologies.

2.1.5 Virtual Link Setup

This feasibility test will demonstrate the setup of QoS-supporting virtual links for the creation of
virtual networks. Such virtual links are created along a substrate path between the two physical
nodes hosting the virtual nodes and possibly across some intermediate substrate nodes. A Virtual



Figure 2.12: VNet assignment with a diverse number of substrate nodes

Figure 2.13: Experimental scenarios with multiple InPs

Link Setup Protocol allocates the necessary resources along the aforementioned substrate path and
connects its ends to the virtual node interfaces. Based on an implementation of the Next Steps in
Signaling (NSIS) Protocol [22], we will extend the QoS NSLP (NSIS Signaling Layer Protocol),
which runs on top of the General Internet Signaling Transport Protocol (GIST). Figure 2.15 pro-
vides a coarse overview of the NSIS signaling protocol architecture. QoS NSLP is a path-coupled
resource reservation protocol, i.e., it performs admission control and allocates resources for QoS-
based IP forwarding. The extension of QoS NSLP carries the necessary address information that
is required for setting up a virtual link between virtual nodes. This Virtual Link Setup Protocol is
used to interconnect virtual networks hosted on top of IP, or to interconnect virtual networks partially hosted on fully virtualised network domains over today's Internet. The chosen approach
combines two logically separate protocols into one, in order to minimize signaling overhead and
latency for virtual link setup procedures.
For the setup of virtual links, the following steps need to be performed:



Figure 2.14: Resource discovery among 2 InPs

1. InPs hosting VNodes need to acquire the substrate address of the opposite end of the virtual
link, e.g., via the involved VNP. This means that in order to construct the virtual link between
InP1 and InP2, each of them needs to get hold of the respective substrate address on the
opposite side. InP1 needs to know that it has to build the virtual link to substrate node A and
InP2 needs to acquire substrate address B.

2. The virtual nodes should already be instantiated when the virtual link setup is initiated,
because the signaling messages request the setup of a virtual link between already existing
nodes. This requires synchronisation between both parties.

3. The path-coupled signaling approach between substrate nodes ensures that a feasible sub-
strate path exists between the substrate nodes that host the virtual nodes.

4. Resources are reserved along the substrate path via the corresponding RMFs in order to
provide QoS guarantees to virtual links if required.

5. Signaling must reach the control plane of the opposite substrate node in order to install
any state required to connect the virtual link to the correct interface of the virtual node.
The necessary information, i.e., the two (local virtual link end and remote virtual link end)
tuples (VNet-ID, VNode-ID, VIf-ID), is carried within the signalling messages to
the involved RMFs.

6. The final step consists of the involved RMFs actually installing the state required to connect
the substrate tunnel end (e.g., incoming demultiplexed flow) to the virtual link end (e.g.,
network interface of the virtual node) using the now available information, and bringing
up the virtual link.
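The six steps above can be sketched as one orchestration routine over an in-memory substrate (all helper behaviour is simulated and the names are illustrative, not part of any real API):

```python
def setup_virtual_link(vnet_id, local, remote, instantiated, qos=None):
    """Sketch of the virtual link setup steps between two InPs.

    `local` and `remote` are (substrate_addr, vnode_id, vif_id) tuples;
    `instantiated` is the set of virtual nodes that already exist.
    """
    log = []
    # Step 1: acquire the substrate address of the opposite link end
    # (in the real system this is learned, e.g., via the involved VNP).
    log.append(f"peer address acquired: {remote[0]}")
    # Step 2: both virtual nodes must already be instantiated.
    if local[1] not in instantiated or remote[1] not in instantiated:
        raise RuntimeError("virtual nodes not yet instantiated")
    # Steps 3+4: path-coupled signaling finds a feasible substrate path
    # and reserves resources along it via the corresponding RMFs.
    log.append(f"path reserved {local[0]} -> {remote[0]} qos={qos}")
    # Step 5: carry both (VNet-ID, VNode-ID, VIf-ID) tuples to the far RMF.
    log.append(f"vlsp delivered: ({vnet_id},{local[1]},{local[2]}) "
               f"<-> ({vnet_id},{remote[1]},{remote[2]})")
    # Step 6: RMFs attach the tunnel ends to the virtual interfaces.
    log.append("virtual link up")
    return log

steps = setup_virtual_link("vnet-7", ("A", "vn1", "if0"), ("B", "vn2", "if0"),
                           instantiated={"vn1", "vn2"},
                           qos={"bandwidth_mbps": 20})
print("\n".join(steps))
```

The early `RuntimeError` reflects the synchronisation requirement of step 2: signaling only makes sense once both virtual nodes exist.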



Figure 2.15: Overview of the NSIS Protocol Architecture

Implementation Overview

For the setup of virtual links, we extend the NSIS QoS NSLP resource reservation signaling pro-
tocol with an additional, optional object. We create a new NSLP Object, the Virtual Link Setup
Protocol (VLSP) Object, which may be added to a RESERVE/RESPONSE message and carries
the following additional information to the other link end:
• VNet-ID
• Virtual Node IDs of the source and destination nodes
• Virtual Interface IDs of the source and destination nodes
• Virtual Link ID (optional)
The addressing information carried in the VLSP object enables the endpoints of the virtual link
to connect the substrate link ends to the virtual link ends correctly. Since the VLSP information
only needs to be interpreted by the virtual link ends, intermediate nodes can safely ignore the new
additional object and pass it on unmodified. This can be done in a backwards compatible fashion
by using the NSLP object extensibility flags. The Path-Coupled Message Routing Information
describes the addressing information of the outer tunnel flow, i.e., the substrate tunnel.

Scenario: Virtual Link Setup at VNet Creation

For this scenario, we will consider two virtual nodes that are interconnected by a substrate path
spanning two or more substrate links. We will perform the required signaling and install any state
required for the virtual links from virtual interface to virtual interface by triggering the involved
RMFs as needed. Admission control and resource reservation for the virtual link are performed.
Cross traffic outside the virtual link will not affect the traffic inside the virtual link if guaranteed
QoS was requested, which is required for concurrent operation of VNets in a reliable way.



Figure 2.16: Signaling Overview of Virtual Link Setup

Outlook
The implementation of the virtual link setup protocol based on NSIS is still under active develop-
ment. A specific aspect that will be shown is the coupling of a QoS resource reservation protocol
with piggybacked information for virtual link setup, which provides an important building block
to dynamically create virtual networks.

2.1.6 Resource Allocation Monitoring and Control

The focus of this section is to study and demonstrate provisioning, management and control of
virtual networks using a small-scale network virtualisation testbed. In the envisioned network
virtualisation environment, the infrastructure provider is responsible for managing and controlling
physical network resources. Virtual networks are established as a result of a VNet Provider's explicit
request (following the 4WARD business model) or through the network management console. Whenever
a request to establish or modify a virtual network is received, the network resource controller,
based on specific resource utilisation policies, should decide whether or not the request can be
accepted and, if it can, how to map the virtual resources into physical resources. The fundamental
objectives of these feasibility tests are to:

• Demonstrate a full-blown network virtualisation scenario and the decoupling of network infrastructure and virtual resource control;

• Demonstrate advantages and potential applications of network virtualisation for operators, particularly network infrastructure providers;

• Demonstrate the automatic provisioning of virtual networks over a physical infrastructure;



• Demonstrate a solution for fully automated physical resource management and control.

The implementation of basic functionalities has been divided into two phases, to be reported in
D2.3.0 and D2.3.1, respectively. The objectives achieved in phase 1 are as follows:

• Virtual Network creation: Creates a new virtual network, based on a specification in XML;

• Virtual Network deletion: Removes a virtual network and releases all associated resources;

• Resource discovery: Discovers the topology of the physical substrate and identifies the com-
plete set of virtualisable resources;

• Monitor virtual resources: Provides overall information about all VNets that share the same
substrate network. Provides the current status of the resources allocated to a specific VNet,
uniquely identified by VNetID: virtual machines, virtual links (network path), storage ca-
pacity, link capacity.

• Monitor physical resources: Provides information about the physical resources:

– Nodes: static parameters (CPU, OS, RAM, storage, capacity [in terms of Virtual Ma-
chines]) and dynamic parameters (occupancy [# VMs that can still be accepted], avail-
able storage, available memory);
– Links: static parameters (link technology, capacity in Mbit/s) and available bandwidth per physical link.

Components
Hardware In its current version, the network virtualisation testbed is composed of six rack-mounted computers, with the following characteristics:

• 1 with an Intel Core 2 duo, 4 GB DDR2 of memory and megabit interfaces;

• 2 with an Intel Core 2 duo, 4 GB DDR2 of memory and gigabit interfaces;

• 1 with an Intel Core 2 duo, 8 GB DDR2 of memory and gigabit interfaces;

• 2 with an Intel quad core, 8 GB of DDR3 and gigabit interfaces.

Software The software used is as follows:

• Fedora 8 i386 (Linux OS for the physical and virtual machines);

• Xen hypervisor version 3.1;

• Bridge utils;

• 802.1Q VLAN implementation for Linux;

• Quagga Routing Suite.


Software Architecture

Figure 2.17 depicts the architecture of the Virtual Network Manager framework. The Network Manager is re-
sponsible for collecting information about the substrate resources and storing them in the database.
It is also responsible for interacting with the VNP and with the administrator of the substrate resources.

Figure 2.17: Modular description of the Virtual Network Manager Framework

Scenario
Testbed Setup Figure 2.18 represents the configuration of the physical and virtual networks
used as the basis for the phase 1 results, shown in this section. The functions provided by the
Resource Allocation Monitoring and Control Framework are as follows:

• a) List Virtual Networks

• b) Show Virtual Network

• c) Show Substrate Network

• d) Show Substrate Node

• e) Create Virtual Network

• f) Delete Virtual Network
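Option e) consumes an XML specification of the requested virtual network. The schema is not reproduced in this section; a hypothetical specification and the corresponding parsing logic might look as follows (the element and attribute names are illustrative assumptions):

```python
import xml.etree.ElementTree as ET

# Hypothetical VNet specification; the actual schema used by the
# testbed is not reproduced in this document.
SPEC = """
<vnet id="vnet-3">
  <vnode id="vn1" cpu="1" ram_mb="256" storage_gb="2"/>
  <vnode id="vn2" cpu="1" ram_mb="256" storage_gb="2"/>
  <vlink src="vn1" dst="vn2" capacity_mbps="100"/>
</vnet>
"""

def parse_vnet(xml_text):
    """Extract the VNet ID, virtual nodes, and virtual links from the spec."""
    root = ET.fromstring(xml_text)
    nodes = [n.attrib["id"] for n in root.findall("vnode")]
    links = [(l.attrib["src"], l.attrib["dst"], int(l.attrib["capacity_mbps"]))
             for l in root.findall("vlink")]
    return root.attrib["id"], nodes, links

vnet_id, nodes, links = parse_vnet(SPEC)
print(vnet_id, nodes, links)  # vnet-3 ['vn1', 'vn2'] [('vn1', 'vn2', 100)]
```

A Virtual Network creation request would then map each parsed `vnode` to a Xen virtual machine and each `vlink` to a VLAN-based virtual link, subject to the resource utilisation policies described above.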

Results The figures below are printouts of the implemented command line interface. Option
a) gives the identification of all virtual networks that are accommodated in the substrate and their
size, as depicted in Figure 2.19. In option b), the user is prompted to insert the identification of the
virtual network that he wants to view, and the output will be the virtual network characteristics, as



Figure 2.18: Virtual Networks Use case

shown in Figure 2.20. Option c) provides static and dynamic parameters of the substrate resources,
as presented in Figure 2.21. Option d) prompts the user to insert the substrate node identification
and the output will be the node characteristics and its virtual nodes, as demonstrated in Figure 2.22.
Finally, options e) and f) can be used to create and delete a virtual network, respectively.

Figure 2.19: Output of ‘List Virtual Networks’ function

Conclusion
Basic resource control functions of a network virtualisation environment have been implemented
and demonstrated. The objectives for the next phase are to enhance the system capabilities and
introduce new features, namely admission control and virtual resource mapping. Next phase results
will be available in the second version of deliverable D3.2, due at the end of the project.



Figure 2.20: Output of ‘Show Virtual Network’ function

Figure 2.21: Output of ‘Show Substrate Network’ function

Figure 2.22: Output of ‘Show Substrate Node’ function



2.1.7 End User Attachment to Virtual Networks

The attachment of end users to virtual networks is a crucial component for the deployment and
success of any network virtualisation framework. It has to provide a high degree of usability
and has to allow for a fully automated, secure attachment of end users to their preferred virtual
networks. On the other hand, the end user attachment has to provide a high degree of flexibility in
order to enable dealing with more sophisticated scenarios.
This feasibility test will demonstrate some of the envisioned end user attachment scenarios. Our
setup will allow an end user to connect a device to an Infrastructure Provider and to automatically
be connected to a set of predefined VNets. An extension of this scenario will allow end users to
proactively request further VNets they want to attach to after the initial preferred VNet attachment.
On the one hand, end user attachment should be as flexible as possible in order to allow non-
technical tussles [23] introduced by, for example, business rivalry, to play out in the real world. On
the other hand, an open and standardised way for getting access to VNets is needed.
We will base our feasibility test of the end user attachment on the following assumptions:

• End users need to attach to multiple VNets concurrently, using the same substrate network
access. This is required as end users might want to use various services that are each realised
in a separate, optimised, concurrently running VNet (e.g., a video streaming service and a
banking service).

• VNets may employ their own network architectures with, e.g., own addressing and routing
schemes, in order to optimally support the services running inside the VNet.

• VNet elements as well as end users may be migrated or be mobile, respectively.

• VNet services and contents may be restricted to closed user groups and therefore require
authentication and authorisation mechanisms.

• Services and content provided inside a VNet may require payment by end users. Therefore, it
is necessary to allow establishment of a trust relationship between the parties and to perform
the required accounting and charging operations.

Overview
We first introduce the components involved in the end user attachment process and then give an
overview of the process itself, using Figure 2.23 as reference.

End User End user nodes are able to physically attach to the network, e.g., by plugging in an
ethernet cable, dialing up via modem, or connecting to a wireless network. An end user
attaching via ethernet is assumed in our scenario.

Network Access Server Network access servers are rather light-weight devices that aggre-
gate end users and are able to directly contact AAA infrastructure in the local domain. An
802.1X-capable ethernet switch is used for this purpose.

Home AAA infrastructure The AAA infrastructure of the end user’s home domain, which is
ultimately capable of verifying the user’s identity and of attesting it towards other providers.
We will use a Radius server with a MySQL backend as AAA infrastructure.



Roaming AAA infrastructure A similar setup (Radius + MySQL) as for the Home AAA in-
frastructure is also used for the Roaming AAA infrastructure.

Two default VNets The user is by default attached to two virtual networks. This information is
stored in the user’s Home AAA infrastructure.

One on-demand VNet The end user later decides to attach to a further VNet. For this
purpose a signaling channel is required.

VNet AAA infrastructure If required, the VNet itself needs to provide AAA infrastructure in
order to identify legitimate users.

Tunnel construction The link to actually attach the end user is created by setting up a tunnel.
L2TPv3 [24] and 802.1Q seem to match our requirements for this scenario.

Figure 2.23: End User Attachment Process Overview

The following required steps are shown in Figure 2.23:

1. The End User connects to a substrate access network of some InP.
2. The End User authenticates to his local and to his home network via the Network Access
Server and the AAA infrastructure of the local InP and its home InP.
3. After successful authentication, the End User may request access to a set of VNets.
4. Access to a specific VNet may require further authentication and access control.
5. After successful authentication of the End User, a virtual link between a virtual node and the
End User is constructed, i.e., a suitable VNet access point must be determined
and a viable substrate path must be found or set up.

Scenarios
We will evaluate the end user attachment concept using the following scenarios:



Scenario 1: Automated End User Attachment to Preferred VNets For this feasibility
test, we will have the end user contact its Home AAA infrastructure where information about the
user’s preferred VNets is stored. Those preferred VNets will then be contacted and a virtual link
to the current location of the end user will be constructed.
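This lookup-and-attach flow can be sketched as follows (the dictionary stands in for the Radius/MySQL backend; all identifiers and the helper's name are illustrative):

```python
# Stand-in for the Home AAA backend (Radius + MySQL in our setup).
HOME_AAA = {
    "alice": {"password": "secret",
              "preferred_vnets": ["vnet-video", "vnet-bank"]},
}

def attach_preferred(user, password, current_location):
    """Authenticate a user and attach them to their preferred VNets."""
    record = HOME_AAA.get(user)
    if record is None or record["password"] != password:
        raise PermissionError("authentication failed")
    attachments = []
    for vnet in record["preferred_vnets"]:
        # In the real system, a virtual link from a suitable VNet access
        # point to the user's current location would be signaled here.
        attachments.append((vnet, current_location))
    return attachments

print(attach_preferred("alice", "secret", "inp1-access-3"))
```

Note that the preferred-VNet list lives only in the Home AAA infrastructure, so the same flow works unchanged when the user attaches through a roaming InP.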

Scenario 2: End User initiated attachment to VNets For this scenario, the end user is
already attached to some VNets and now intends to attach to a further VNet. For this purpose, a
signaling channel is required, which enables the end user to contact his Home AAA infrastructure
or the VNet AAA infrastructure directly. This scenario makes assumptions about the trust relation-
ships between the involved parties and usable charging models, which might be discussed in more
detail in the appendix.

Conclusion
The attachment of end users to virtual networks is required for successful introduction of network
virtualisation on a large scale. This feasibility test will show the conceptual functioning of our
envisioned end user attachment process.

2.1.8 Interdomain Aspects: Management and Control

Introduction: the Internet Model
The interconnection of two operators is always a tricky business. In traditional networking envi-
ronments, like IP networks, the routing protocol allows the implementation of policies as a way to
control the information exchange between operators. These policies implement filtering techniques
which can:

1. Completely hide parts of an operator’s network to other operators

2. Change the precedence or significance of parts of a peer’s network

These policies not only impact the way a certain operator views the peer’s network, but also
change the view the operator’s clients have of it. In the case of the Internet, the classical model
includes two peering types:

1. Customer-provider relationship, which operationally translates into:

• Send all traffic directed to customers first
• Send traffic to providers as a last resort

2. Peer-to-peer relationship, which operationally translates into:

• Favor peers over providers when deciding where to send the traffic to
• Inform your peers about your clients and receive information about your peers’ clients

This reflects the economic principles of the Internet:

1. Send traffic to clients, because that’s what you charge them for



2. Establish peer-to-peer peering agreements to reduce the amount of money you pay to your providers

3. Use providers to reach the parts of the Internet you cannot reach via your peer-to-peer peerings

Interprovider VNet Interface

Figure 2.24 shows the principle of the Interdomain VNet Interface proposed for experimentation
in the scope of the 4WARD project. As shown in the figure, the main component is a resource
filter which allows the operator to select which resources he wants a peer to have visibility to. As
a proof of concept, the resource interface will control all kinds of network components present in
the resource database.
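A minimal sketch of such a per-peer resource filter follows; the resource fields and the policy format are assumptions for illustration, not the project's actual schema:

```python
# Entries from the local resource database (fields illustrative).
RESOURCES = [
    {"id": "r1", "type": "router-slice", "region": "core"},
    {"id": "r2", "type": "link-slice",   "region": "core"},
    {"id": "r3", "type": "router-slice", "region": "edge"},
]

# Per-peer policy: which resource types and regions a peer may see.
POLICIES = {
    "peerA": {"types": {"router-slice"}, "regions": {"edge", "core"}},
    "peerB": {"types": {"router-slice", "link-slice"}, "regions": {"edge"}},
}

def exported_view(peer):
    """Return only the resources the policy exposes to this peer;
    everything else stays hidden behind the interprovider interface."""
    pol = POLICIES[peer]
    return [r["id"] for r in RESOURCES
            if r["type"] in pol["types"] and r["region"] in pol["regions"]]

print(exported_view("peerA"))  # ['r1', 'r3']  (router slices only)
print(exported_view("peerB"))  # ['r3']        (edge resources only)
```

The open question raised above maps directly onto the policy sets: whether exposing only `router-slice` entries suffices for interprovider provisioning, or whether `link-slice` entries (and thus internal topology) must also appear in a peer's view.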

Figure 2.24: An Interprovider Interface proposal

The objective of this proof of concept is to find out what resources need to be exposed from
an interprovider provisioning point of view and which resources can be implicitly reserved by the
local resource management. During the proof of concept, one important insight to be gained is
whether it is enough to expose “router slices” to a peer in order to provide him with reachability to
his clients or whether “link slices” need also to be exposed. If the latter is the case, it is important
to gain some understanding of the level of exposure, since it translates directly into the level of
knowledge of the internal topology of his own network that an operator is going to be forced to
expose to potential peering partners.

2.1.9 Shadow VNets Feasibility tests

Debugging and troubleshooting networks is not a job for the faint of heart, especially when networks
span multiple geographic and administrative domains and provide many services with
differing and sometimes outright contradictory requirements.



Predicting the potentially global implications of configuration changes or software updates in

such a context is often difficult, and simulations or testbeds provide only limited insight, as the
modeling abstractions necessary for simulations limit their accuracy and cost factors constrain
feasible testbed sizes. Configuration changes or upgrades are therefore often tested on a smaller
scale and applied in trial-and-error fashion during (often nightly) low-load intervals. As a result,
these methods often do not suffice to catch real-life network problems that stem from complex
interactions of many parties, especially if they only occur sporadically.
In this feasibility test, we utilize the emerging concept of network virtualisation to tackle net-
work troubleshooting problems in a novel fashion.
We demonstrate how an operator can diagnose and troubleshoot a problem in his network with
the help of Shadow VNets¹. A Shadow VNet safely replicates not just a VNet but also its input.
This offers network operators a new range of capabilities with low overhead, namely to safely eval-
uate and test new configurations or software components, at the scale of their production network,
under real user traffic. As the new setup exists in parallel with the old one they can potentially be
swapped in a near-atomic fashion. This enables the operator to switch only once he is convinced
that the new setup is operational and offers the expected benefits.
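The duplicate-then-swap idea can be sketched as follows (illustrative only; in the real system, duplication and switchover happen in the substrate data plane, not in application code):

```python
class ShadowedService:
    """Sketch of running a production VNet and its shadow in parallel:
    ingress traffic is duplicated to both, only the production output is
    delivered to users, and the roles can be swapped near-atomically."""

    def __init__(self, production, shadow):
        self.production = production
        self.shadow = shadow

    def handle(self, packet):
        out_prod = self.production(packet)  # serves the users
        _ = self.shadow(packet)             # evaluated, output discarded
        return out_prod

    def swap(self):
        # Near-atomic switchover once the shadow setup is trusted.
        self.production, self.shadow = self.shadow, self.production

old_cfg = lambda pkt: ("best-effort", pkt)  # current configuration
new_cfg = lambda pkt: ("qos", pkt)          # candidate configuration
svc = ShadowedService(old_cfg, new_cfg)
print(svc.handle("voip-1"))  # ('best-effort', 'voip-1')  old config serves users
svc.swap()
print(svc.handle("voip-2"))  # ('qos', 'voip-2')          new config now live
```

Because the shadow processes the same input as production, the operator can compare both outputs before calling `swap`, which is exactly the evaluation performed in the experiment below.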
Running both networks in isolation requires sufficient substrate resources. As we expect many
VNets to be running in parallel on a substrate, each carrying but a fraction of the total traffic, we
expect that Shadow VNets can be run for a subset of the production VNets without overloading
the substrate. If resources are scarce, production VNet traffic can be prioritised over Shadow VNet
traffic to ensure the production service is not affected.

Experiment scenario

To evaluate the benefit of a Shadow VNet we choose the following scenario: a VNet Operator that
offers both VoIP and Internet access across a best-effort VNet considers moving to a setup with
service differentiation to offer better quality of service to its VoIP traffic. This move is motivated
by customer complaints about their VoIP quality during certain times of the day.
For our experiment, a VoIP call and background traffic of varying intensity is routed through a
virtualised substrate (Figure 2.25). The substrate network consists of three nodes. We now instan-
tiate two parallel VNets, VNet A and B, each with a maximum bandwidth of 20 Mbit/s throughout
the experiment, enforced by traffic shaping on Node 2. Moreover, we set up an additional virtual
network for monitoring. On entry to the VNet, traffic is duplicated to both VNets A and B and
forwarded within each via Node 2 to node 3 using separate virtual links (VLANs). On exit, when
leaving Node 3, only output from one VNet is sent to the receivers. In addition, the monitoring
VNet “Moni” receives a copy of the VoIP traffic from both VNets.

Evaluation
We now experimentally evaluate the feasibility of the scenario outlined above.
For our evaluation, we measure at two points in the experiment: Moni records data for both
VNets on exit of the VNet, while the Receiver records the quality as experienced by the user.
We record the percentage of dropped packets on the VoIP call as a rough quality indicator, and

¹ The name is inspired by the results of Alimi et al. [25], who implement shadow configurations to improve the
safety and smoothness of configuration updates.



Figure 2.25: Shadow VNets experiment setup

calculate the MoS value as defined by the E-model, an ITU-T standard for measuring the
transmission quality of voice calls [26]².
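For reference, the E-model's non-linear mapping from the R-factor to MoS, as standardised in ITU-T G.107, can be reproduced directly (a sketch of the mapping only, not of the full impairment calculation):

```python
def r_to_mos(r: float) -> float:
    """ITU-T G.107 mapping from the transmission rating R to MoS."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    # Non-linear mapping on the psycho-acoustic R scale.
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

print(round(r_to_mos(93.2), 2))  # 4.41
```

An R-factor around 93 thus maps to a MoS of roughly 4.4, consistent with the "very satisfied" scores observed for the unloaded VNet in this experiment.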
For the VoIP traffic we use an open-source SIP-based VoIP client with the G.711 codec, which
generates traffic at a constant rate of 80 kbit/s (net bit rate of 64 kbit/s). Each VoIP RTP packet
contains 20 ms of voice and has a payload of 160 bytes.
A pool of servers is used to generate the background traffic, using Harpoon [27], with properties
that are consistent with those observed in the Internet. To account for different intensities of the
background traffic at different times of day, we use two load levels, L and H, corresponding to
20–25% and 60–86% average link utilisation, respectively. All traffic sources are located on the
left in Figure 2.25.
The experiment phases are summarised in Table 2.2.

Table 2.2: Experiment phases

Phase              1   2    3    4    5    6
Production VNet    A   A    A    A    B    B
Active VNets       A   A    A&B  A&B  A&B  B
QoS enabled        -   -    -    B    B    B
Internet traffic   L   L/H  H    H    H    H

The experiment is conducted in six phases with a length of five minutes each. Figure 2.26 shows
the rate of the background traffic (shaded area) averaged over 10s (scale on right axis) across time.
In addition, Figure 2.26 shows the number of dropped packets across time, again using 10s bins
² The E-model states that the various impairments contributing to the overall perception of voice quality (e.g., drops,
delay, jitter) are additive when converted to the appropriate psycho-acoustic scale (the R-factor). The R-factor is then
translated via a non-linear mapping to the Mean Opinion Score (MoS), a quality metric for voice. MoS values
range from 1.0 (not recommended) to 5.0 (very satisfied). Values above 4.0 indicate satisfied users; values below
4.0/3.6/3.1 indicate that some/many/nearly all users are dissatisfied.



(scale on left axis). Drop rates for VNet A are depicted as blue plus signs, VNet B as red crosses,
and the values measured at the receiver as green diamonds. Table 2.2 summarizes the configuration
of each phase.

Figure 2.26: VoIP packet drops (left axis) and background traffic throughput (right axis) across time

In phase 1, background traffic is running at low intensity. In the middle of phase 2 the intensity
of the Internet traffic is switched to high. This causes a problem in VoIP quality as measured by the
MoS value, see Figure 2.27. The perceived quality drops from a MoS score of 4.34, which corre-
sponds to a “very satisfied” service level, to 4.16, which corresponds to a level of “satisfied”.

Figure 2.27: MoS results per phase

As such, the VNet Operator asks to instantiate a Shadow VNet at the beginning of phase 3.
This means that all packets are now duplicated at Node 1 and are routed in both VNets A and
B. However, the end user for VoIP service is still getting service through VNet A. This allows the



operator to assess the impact of the degradation and to do root cause analysis in VNet B. Indeed, the
quality of the call decreases further in our experiment. In our case the operator decides to prioritize
VoIP traffic to counter the bad performance. He enables QoS at the start of phase 4. This reduces
the loss rates within VNet B significantly, and the MoS value increases again to 4.38. Meanwhile, due
to severe congestion, the VoIP MoS score within VNet A drops to 1.45, which corresponds to
“not recommended”.
At the start of phase 5 the operator switches his production VNet from VNet A to VNet B.
Therefore, the user is now getting the good performance provided by VNet B. With phase 6 the
operator deactivates VNet A.
Already this very simple scenario shows how an operator can benefit from Shadow VNets, e.g.,
to smoothly upgrade his network configuration to remedy a network performance problem.

Summary and future work

In this experiment, we study a novel approach that leverages the capabilities of network virtualisa-
tion to add to our network troubleshooting capabilities, especially for large production networks.
Shadow VNets enable operators to upgrade configurations and software in an operationally safe
way and with transaction semantics while exposing the new system and configuration to real user
behavior. As such the system can be tested before putting it into the wild.
The experiences with our prototype implementation underline the feasibility of the approach,
especially if it is used on a virtualisation platform that offers good isolation. They also hint at the power
of these new troubleshooting tools.
In the future we plan further experiments using hardware virtualisation enablers (e.g., OpenFlow,
multi-queue NICs), as these promise proper isolation, and we will explore the scalability limits. More-
over, we plan to integrate Monitoring VNets and Shadow VNets into one of the emerging VNet
architecture platforms.



2.2 Router Virtualisation

2.2.1 Introduction
Extending virtualisation to network resources, and to routers in particular, results in significant
benefits. A single virtual router platform can provide independent routing for multiple networks in
a manner that permits independent management of those networks. Virtual routers allow separate
administration within a single box in a natural manner; they also enable many business models
that are currently difficult. For example, consider physical infrastructures where virtual routers
are directly connected by VLAN or MPLS sliced physical links. This allows an entire network to
be fully virtualised; whole new network architectures can be rolled out without risk of damage to
existing network services. Given the difficulties faced in changing the current Internet, it seems
likely that router and network virtualisation could be key enablers for Internet innovation.
PC hardware has moved on significantly in the last few years, and performance numbers from
five years ago are now largely irrelevant. Multi-core CPUs, along with recent advances in memory
and buses, render PC hardware a strong candidate for building flexible and high-performance soft-
ware routers. The functionality provided by software routers combined with recent virtualisation
technologies allows multiple instances of routers to run concurrently on a single box while offer-
ing highly configurable forwarding planes and custom routing protocols. Some recent PC-based
virtual router prototypes (e.g., [28]) have been proposed, but none of them exploit recent advances
in commodity hardware such as multi-core CPUs or network interface cards with hardware multi-queueing.
We hereby assess our virtual router platform, which leverages multi-core CPUs and hardware multi-
queueing to consolidate the virtual data planes onto a common forwarding domain and enable
highly configurable forwarding planes for advanced programmability. We evaluate all the sup-
ported virtual forwarding scenarios, uncovering the benefits as well as possible shortcomings in
terms of performance and isolation. We emphasize the flexibility of our platform, especially
when advanced security and isolation properties are required. We also investigate the tussles be-
tween different software architectures that distribute packet processing among CPU cores.
In addition, we optimize CPU allocation among virtual routers hosted in separate guest domains
to prioritize delay-sensitive flows and, therefore, achieve low delays. We assume that all flows
handled by a particular virtual router require the same service, which implies different priorities
among virtual routers (and subsequently among the corresponding guest domains). In this evaluation,
we use I/O channels for inter-domain communication (instead of direct-mapped interfaces),
which limits the achievable forwarding rates due to the I/O virtualisation overhead. However, even
with this limitation, we achieve notable gains (i.e., low delay) with our improved CPU allocation.
Our implementations rely on paravirtualisation systems (i.e., Xen [29]); however, as we discuss
later on, hybrid solutions (i.e., Xen and OpenVZ) are also possible.
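The priority-driven CPU allocation described above can be illustrated as a proportional-share computation among guest domains (a sketch under assumed weights; the actual mechanism operates through the hypervisor's scheduler, not application code):

```python
def cpu_shares(domains, total_cpu=100.0):
    """Split CPU capacity among guest domains hosting virtual routers,
    weighting delay-sensitive routers more heavily. `domains` maps a
    domain name to its priority weight (higher = more delay-sensitive)."""
    total_weight = sum(domains.values())
    return {d: total_cpu * w / total_weight for d, w in domains.items()}

# A delay-sensitive VoIP router gets three times the weight of a
# best-effort router (the weights themselves are illustrative).
shares = cpu_shares({"vr-voip": 3, "vr-besteffort": 1})
print(shares)  # {'vr-voip': 75.0, 'vr-besteffort': 25.0}
```

Since all flows of a given virtual router require the same service, a single per-domain weight suffices to express the priority of every flow that domain handles.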

2.2.2 High Performance Software Virtual Routers on Commodity Hardware

As shown in [21], the main performance bottleneck of PC servers is memory access, through a
combination of memory latency and memory/front-side bus overload. While technologies such
as Non-Uniform Memory Access (NUMA) with their multiple memory controllers are poised to
alleviate the problem, memory latency issues are likely to persist for some time, and remain the



principal performance-limiting factor for software routers. In order for software routers to achieve
high performance it is therefore crucial to reduce memory accesses to a minimum. In commodity
PCs, this can be accomplished by making sure that packets, as well as much of the data structures
needed to process them, stay in cache memory as they travel from an input to an output interface.
This renders a cache hierarchy (i.e., the set of multi-level caches present in a CPU) as the basic
hardware unit of consideration when implementing software routers.

Hardware
The system under test has two Intel Nehalem processors. These are 2.8GHz Xeon 5560, quad-core
CPUs, with each core having a 32KB L1 data cache, a 32 KB L1 instruction cache and a 256KB
L2 cache. In addition, each processor has an 8MB L3 cache shared among all of its cores. Further,
Nehalems have an integrated memory controller and so are an implementation of NUMA, since
accessing their local memory is relatively cheaper than accessing that of another processor on the
system. Finally, the processor has several QuickPath Interconnect interface links (QPI - Intel’s
replacement for the front-side bus) linking it to any other processors and I/O hub on the system.
In terms of network interfaces, our system has two dual-port 10G cards (model 82598EB) with
a PCIe connection. Among other features, each of these cards supports hardware multi-queueing,
effectively splitting the card into a set of interfaces (64 for receiving and 32 for transmitting). In
addition, the cards support two mechanisms: Receiver Side Scaling (RSS), which load balances
packets across the set of available hardware queues; and Virtual Machine Device Queues (VMDq),
which splits packets across these queues based on the virtual machines residing on the system.

Software
For software, we rely on Xen [29] for virtualisation and also use Click [20], in kernel-mode on a
Linux system, to undertake all of the packet processing, since it provides not only a flexible
platform but also yields high performance. Click configurations consist of a set of modules (known
as elements) connected in a data-flow style graph. Elements represent packet processing functions,
including queueing. We call a forwarding path the set of Click elements that any packet traverses
from input to output. Hence, a Click configuration contains several FPs, while a single Click
element can belong to several FPs simultaneously.
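Click configurations are written in Click's own configuration language; purely as a language-neutral illustration of the element-graph/FP relationship (the element names below are hypothetical, not taken from our configurations), the decomposition of a graph into FPs can be sketched as path enumeration:

```python
# Sketch: a Click-style element graph as an adjacency list (hypothetical names).
# A forwarding path (FP) is any input-to-output chain of elements; a single
# element (here "Classifier") may belong to several FPs at once.
graph = {
    "FromDevice(eth0)": ["Classifier"],
    "Classifier": ["Queue1", "Queue2"],
    "Queue1": ["ToDevice(eth1)"],
    "Queue2": ["ToDevice(eth2)"],
}

def forwarding_paths(graph, node, path=None):
    """Enumerate all element chains from `node` down to a sink element."""
    path = (path or []) + [node]
    successors = graph.get(node, [])
    if not successors:          # sink element: one complete FP
        return [path]
    fps = []
    for nxt in successors:
        fps.extend(forwarding_paths(graph, nxt, path))
    return fps

fps = forwarding_paths(graph, "FromDevice(eth0)")
# Both FPs share the Classifier element, as described in the text.
```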
For all our experiments, we assign a single Click thread to each CPU core (in order to minimise
interference with the Linux scheduler scheduling the threads [30]) and then assign the various
Click tasks under consideration to these threads as needed. We also always use Click in polling
mode – i.e., no interrupt is used at any time to signal events to the Click threads. Finally, we use
short packets (64 bytes) in all our experiments in order to measure the performance of the system
under stress.

Virtual Forwarding Scenarios

As shown in Figure 2.28, our virtual router platform [31, 32] comprises the following basic com-
ponents: (i) a management domain for the management of guest domains, (ii) an isolated driver
domain (IDD) for aggregated packet forwarding, and (iii) a number of guest domains for host-
ing control planes (one per virtual router), and optionally FPs when increased isolation and safety
properties are required. (The terms guest domain and DomU are used here interchangeably.)

Figure 2.28: Virtual Router Platform Overview

The IDD hosts the merged FPs and provides the ability to control and
configure the individual FPs to their respective guest domains. The merging process enables the
consolidation of all FPs within the IDD, allowing a large number of virtual routers to share com-
mon network interfaces. Figure 2.28 depicts this configuration with FP1 running in the IDD.
Figure 2.29(a) shows the performance achieved with packet forwarding within a common for-
warding domain (IDD). In this experiment, we use 6 uni-directional flows achieving packet for-
warding rates close to 7 Mpps by avoiding costly per-packet hypervisor domain context switches.
Isolation appears to be an issue with this forwarding scenario, as the virtualised data planes are
consolidated in the same guest domain and therefore, they are not isolated from each other. How-
ever, we enforce resource isolation by strict CPU core allocation, as we explain later on.
Furthermore, we evaluate two additional packet forwarding configurations supported by our
platform: (i) splitting a FP between an IDD and a separate guest domain while using local I/O
channels for inter-domain communication (e.g., FP2 and FP2’ in Figure 2.28); and (ii) mapping
interfaces directly into guest domains so that each FP resides in a separate guest domain (e.g., FP3
in Figure 2.28). For configuration (i), we measure the packet forwarding rates of 2 FPs split
between the IDD and a separate DomU (Figure 2.29(b)). We use 4 cores in total, with each FP assigned to a
separate core. While each FP within IDD forwards packets at 1.2 Mpps, the FPs residing in each
DomU yield very limited performance (around 90 Kpps). The reason for such poor performance
is that in order to transfer packets between Dom0 and DomU, Xen uses I/O channels, requiring
costly hypervisor domain switches. [33] shows that 30-40% of the execution time for a network
transmit or receive operation is spent in Xen's hypervisor. Despite its performance limitation, this
forwarding scenario can be used to safely run FPs that include untrusted Click elements without
compromising the performance and safety of other virtual routers’ FPs.
In order to evaluate the performance of configuration (ii), we directly map 5 pairs of interfaces to
5 DomUs. Figure 2.29(c) shows the aggregated forwarding rate which exceeds 7 Mpps. In contrast
to the previous forwarding scenario, the packets are directly transferred to each DomU’s memory
via DMA, resulting in high performance. A potential shortcoming of this configuration might be
that each domain needs its own interfaces and, at first sight, the number of virtual data planes
would be limited by the number of physical interfaces on the system. However, with the advent
of hardware multi-queuing on network interface cards (NICs), such as VMDq or RSS, physical
interfaces can be shared among guest domains.
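As a rough sketch of this sharing (not the 82598EB's actual programming interface; all names and sizes are illustrative), MAC-based VMDq filtering combined with RSS hashing amounts to a two-step queue lookup:

```python
# Sketch of two-step hardware queue selection: a VMDq index derived from the
# destination MAC, then an RSS-style hash choosing a queue within that guest's
# slice. Values are illustrative, not the NIC's real programming model.
RSS_QUEUES_PER_GUEST = 4

vmdq_table = {            # destination MAC -> VMDq index (one per guest domain)
    "00:16:3e:00:00:01": 0,
    "00:16:3e:00:00:02": 1,
}

def select_queue(dst_mac, flow_tuple):
    vmdq_index = vmdq_table[dst_mac]
    rss_hash = hash(flow_tuple) % RSS_QUEUES_PER_GUEST
    # Two-step hierarchical queue space rather than a single flat space.
    return vmdq_index * RSS_QUEUES_PER_GUEST + rss_hash

q = select_queue("00:16:3e:00:00:02", ("10.0.0.1", "10.0.0.2", 5000, 80))
# q falls within guest 1's slice of the queue space (queues 4..7).
```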
Figure 2.29: Performance with each of the virtual forwarding scenarios. (a) Packet forwarding
within a common forwarding domain (IDD); (b) splitting each FP into a separate domain;
(c) packet forwarding in 5 DomUs with direct-mapped interfaces.

More precisely, the VMDq extensions of the NICs enable up to 16 virtual interfaces to be
configured on the NIC by way of assigning a MAC address to a hardware queue, or VMDq index.
If we were to rely on VMDq-based input filtering alone, each virtual router would essentially see
single-queue virtual interfaces and opportunities for parallelising their forwarding paths would be
very limited. Fortunately, the NIC can combine VMDq and RSS to split a virtual interface into
several hardware queues. The details of this are card-dependent, but the general principle is that
the available hardware queues are selected as part of a two-step hierarchical space as opposed to a
single flat space (when only either VMDq or RSS is used alone). With the ability to use hardware
multi-queueing on virtual interfaces, we can parallelize virtual routers efficiently, as we show in
the following section, and allocate resources to them in a very flexible manner.

Forwarding Path Architecture

In order to maximize the performance of PC-based virtual routers, the CPU cores and the cache
memory hierarchies need to be carefully exploited. As a result, the aim when implementing a
virtual router should be to keep a packet as deep as possible inside a cache hierarchy (i.e., close to
the cores) while distributing the packet processing over as many spare cores as possible within the
same cache hierarchy. This ensures that processing is not CPU-limited, since multiple cores are in
use, while reducing expensive accesses to main memory.

Figure 2.30: Forwarding path architecture. MPD stands for Multiple Poll Device and TD for ToDevice.

Since it is not possible to know in advance to which output interfaces packets have to be
switched, our proposed forwarding path (Figure 2.30) architecture decomposes the router’s graph
organisation into a series of forwarding trees. Each of these forwarding trees is associated with
an input interface and represents all of the possible forwarding paths followed by packets enter-
ing through that interface. The advantage of using a forwarding tree is that its elements can all
be allocated to the same cache hierarchy, thus confining packets to this hierarchy and reducing
main memory accesses. Furthermore, by using hardware multi-queueing and classification based
on VMDq, which is available in recent NICs, each interface (both input and output) is split into
as many hardware queues as there are virtual routers (VRs). This organisation ensures that ev-
ery interface is shared (i.e. virtualised) among the VRs and every VR can access the interfaces
in parallel. Of course, this also ensures that multiple cores can access the interfaces in parallel,
providing significantly improved performance. However, being able to access an interface simul-
taneously still does not spare the system the costly overhead of a context switch when an
empty hardware queue is accessed. To mitigate this overhead, we extend Click's original PollDe-
vice element, so that multiple input queues (from multiple input ports) can be assigned to a single
element. This reduces the probability that no packets will be processed by an element when it
gets scheduled. Experiments with 4 input interfaces and 4 FPs, each one on a separate core, reveal
throughput rates of 6.9 Mpps per interface – a notable improvement compared to the achievable
6.25 Mpps without this extension.

Output Processing Using hardware multi-queueing on the output ports results in improved
performance as well, due to the true parallelisation given by the independent access to multiple
queues. Figure 2.31 illustrates these performance gains with an increasing number of FPs (each
one assigned to a separate core) and a single output interface, in contrast to an output packet process-
ing architecture where multiple cores have to obtain exclusive access to a single output interface
(i.e., without hardware multi-queueing). Such exclusive access can be achieved by low-level locking
to coordinate access to the interface from several elements, which induces a significant performance
penalty. Without hardware multi-queueing, we particularly observe a dramatic loss of performance
as soon as the 5th FP is added. This is because, up until that point, the lock used for synchronisa-
tion was only accessed from CPU1, and thus its value stayed fresh in the L3 cache of that CPU.
But with the addition of the 5th FP, FPs are now running on both CPUs, and this causes the cached
value of the lock to be dirtied whenever a TD on the other CPU changes its value, forcing a fresh
value to be reloaded (from either the L3 in the other CPU or main memory), slowing all FPs down.

Figure 2.31: Performance with output processing

In the presence of hardware multi-queueing, it is worth noting that once a packet has been
placed in a hardware queue, it is out of the control of the router. As a result, if the NIC does not
support certain features such as traffic management or advanced scheduling policies, these must
be handled in software by coordinating the output tasks involved. This, in turn, brings back the
need for synchronisation primitives (e.g., locks) for exclusive access to shared data structures (e.g.,
token buckets).
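As an illustration of the coordination this implies (a sketch with illustrative parameters, not our platform's code), a token bucket shared by several output tasks needs a lock around its state:

```python
import threading
import time

class TokenBucket:
    """Token bucket shared by several output tasks. The lock serialises
    access to the shared token count, mirroring the coordination required
    when the NIC itself offers no traffic management."""
    def __init__(self, rate, burst):
        self.rate = float(rate)        # tokens (packets) added per second
        self.tokens = float(burst)     # current budget
        self.burst = float(burst)      # maximum budget
        self.stamp = time.monotonic()
        self.lock = threading.Lock()

    def try_send(self, cost=1.0):
        with self.lock:                # exclusive access to shared state
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.stamp) * self.rate)
            self.stamp = now
            if self.tokens >= cost:
                self.tokens -= cost
                return True
            return False

bucket = TokenBucket(rate=1000, burst=10)
sent = sum(bucket.try_send() for _ in range(20))   # roughly the burst of 10 passes
```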

Hardware Queuing Limitation Hardware multi-queuing is obviously an excellent solution
to many of the problems encountered when building software routers. However, one open
question is whether there is an overhead cost associated with it. To answer this question, we
run the following experiment. First, we assign a single PD (PollDevice) to a core and poll the interface.
We then create an increasing number of hardware queues on the interface, polled by an equally
increasing number of PDs on the same core. The input traffic to the interface is, in all cases,
such that the interface is saturated. Figure 2.32(a) shows the ratios of the throughput obtained in
the multi-queue cases to the throughput obtained when polling the interface without multi-queueing.
With 4 queues polled by a single core, the performance hit is about 30%.



Figure 2.32: Observations on the forwarding path architecture. (a) Throughput increase with an
increasing number of hardware queues; (b) impact of burst size on throughput.

The results show that the performance takes an important hit as queues are added and that
hardware multi-queueing is therefore not free. We observe a near-linear degradation as the number
of queues increases up to 8 queues; interestingly, the performance then holds for further queues.
The NIC we are using has 8 packet buffers, across which the hardware queues are allocated, and
we speculate that the performance behaviour observed is mostly caused by a hardware overhead
on the card. A similar increased overhead pattern is also observed for multiple output hardware queues.

Poll Burst Size One final consideration is the poll burst size. As mentioned earlier, a PollDe-
vice can be configured so that whenever it gets scheduled it tries to poll n packets from a hardware
queue, or if less than n packets are available, it polls them all; this mechanism is called burst size
in Click. Figure 2.32(b) shows throughput results for different burst sizes. As expected, increas-
ing the burst size yields sizable improvements, since it reduces the number of times a PollDevice
element gets scheduled per packet processed. However, at some point the performance levels off,
since packets arrive more slowly than they can be polled, while longer bursts result in less respon-
siveness. A burst size of 32 offers a good compromise, and this is why we limited our experiments
to that maximum value.
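The amortisation effect of the burst size can be illustrated with a toy model (the numbers are hypothetical, not measurements):

```python
def schedules_needed(backlog, burst_size):
    """Toy model: how many times a PollDevice-like element must be scheduled
    to drain `backlog` packets, polling up to `burst_size` packets (or all
    that are available) per schedule."""
    schedules = 0
    while backlog > 0:
        polled = min(burst_size, backlog)
        backlog -= polled
        schedules += 1
    return schedules

# Amortisation: draining a backlog of 1024 packets takes 1024 schedules with
# burst 1, but only 32 schedules with burst 32 -- far fewer schedules per
# packet processed, at the cost of coarser responsiveness.
s1 = schedules_needed(1024, 1)
s32 = schedules_needed(1024, 32)
```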

2.2.3 Resource Allocation in Xen-based Virtual Routers

In the case of commodity-hardware-based virtual routers, performance may be hardware-limited
(CPU- or memory-limited). We aim to allocate the different virtual machines enough resources to
forward packets optimally. Since forwarding is achieved in the guest machines through the I/O
channel mechanism, the driver domain is deeply involved in the forwarding operation, since it is
the only domain that communicates directly with the device; allocating enough resources to it
thus becomes even more critical. Furthermore, delay-sensitive flows need to be forwarded as soon as
possible, so we need to prioritize the corresponding virtual router by servicing its packets first,
so that they experience the least latency.
When the system is CPU-limited, scheduling becomes crucial. The Credit scheduler is the
default Xen CPU scheduler. It initially allocates credits to runnable virtual machines according


Document: FP7-ICT-2007-1-216041-4WARD/D-3.2.0
Date: January 14, 2010 Security: Confidential
Status: Final Version: 1.0

to their weights. Weights are relative to each other. To allow the driver domain stay as long as
possible under its fair share, we have to assign it an optimal weight compared to other virtual
machines. In our work we consider the case where different virtual routers are dedicated to flows
of different types and constraints. Thus, if we aim to prioritize the virtual router forwarding
delay-sensitive flows, we have to allocate it more CPU cycles than the others, but not more than the
driver domain; this is achieved by assigning it the optimal weight. Moreover, since the packets of
delay-sensitive flows need to experience the least possible latency, virtual routers forwarding
such flows need to have their packets classified and served first in the driver domain.
Hence, by assigning the different virtual machines the necessary resources, classifying the packets,
and servicing them in a Weighted Round Robin (WRR) manner, we achieve the highest overall
throughput and the least packet latency for delay-sensitive packets. We call enhanced a system in
which the optimal weights are applied, packets are classified, and delay-sensitive ones have
the highest priority in the Round Robin scheduler.

Components
For our experiments, we use the same software as described in Section 2.2.2. However, since
our goal is optimal scheduling when CPU cycles are the performance-limiting factor, we have
chosen to use only two CPU cores.

Scenarios
Testbed Setup Here, we focus on the case where forwarding is achieved by the virtual ma-
chines, which communicate with the driver domain through the I/O event channel (FP2 in Fig-
ure 2.28). We evaluated the forwarding performance of the driver domain, of two concurrent virtual
routers, and of four concurrent virtual routers. In all these cases forwarding was performed in the
guest machines using I/O channels. We conducted tests for two packet sizes: 64 bytes and 1500 bytes.

Figure 2.33: System throughput (with different Credit weights)

Results We denote by Wi the weight assigned to domain i; 256 is the default weight assigned
by the Xen scheduler. We change the weights assigned to the guest machines as well as to the driver
domain. Figure 2.33 depicts the throughput achieved by VR1. We notice that assigning Dom0
twice the weight of the guest domains results in a slight increase in the throughput of each
running virtual machine. As expected, the improvement is even better when Dom0's weight is four
times that of the guest machines. (The two guest machines achieve the same throughput
when they are assigned the same weight.) Nevertheless, when Dom0 gets a much higher weight, it
monopolizes the CPU and gets all the packets arriving on the network device. Since the virtual
domains then do not access the CPU enough, they are unable to process the received packets,
which results in higher packet loss and in a decrease in their throughput (not represented in the
figure).
Figure 2.34: Achieved throughput with the packet classification mechanism

Figure 2.35: Packet latency with the packet classification mechanism

By only adapting the weights of the different virtual machines we obtain a slight overall throughput
improvement, which may be insufficient for flows with strict constraints. Below, we evaluate the
impact of the packet classification mechanism that we proposed.
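A minimal sketch of the proposed mechanism follows; the class names, the 2:1 weights and the DSCP-based rule are illustrative assumptions, not the actual driver-domain code:

```python
from collections import deque

# Sketch of driver-domain packet classification plus weighted round-robin
# service. Weights of 2:1 give the delay-sensitive class roughly two thirds
# of the service slots; all names and values are illustrative.
queues = {"delay_sensitive": deque(), "best_effort": deque()}
weights = {"delay_sensitive": 2, "best_effort": 1}

def classify(pkt):
    # Hypothetical rule: DSCP EF (46) marks delay-sensitive traffic.
    return "delay_sensitive" if pkt.get("dscp") == 46 else "best_effort"

def enqueue(pkt):
    queues[classify(pkt)].append(pkt)

def wrr_service():
    """One WRR cycle: serve each class up to its weight, yielding packets
    in the order they would be switched to the guest domains."""
    served = []
    for cls, w in weights.items():
        for _ in range(w):
            if queues[cls]:
                served.append(queues[cls].popleft())
    return served

for p in [{"dscp": 46, "id": 1}, {"dscp": 0, "id": 2}, {"dscp": 46, "id": 3}]:
    enqueue(p)
order = [p["id"] for p in wrr_service()]    # delay-sensitive packets come first
```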
Figure 2.34 shows how VR1 was privileged by the enhanced system (supporting optimal CPU
scheduling, packet classification and Weighted Round Robin packet switching). Its maximum
achievable throughput reaches 78 Kpps (kilo-packets per second), after having been limited to 40
Kpps with native Xen and to 49 Kpps after configuring the Credit scheduler. We also notice
that the rest of the guest domains maintain a throughput as high as that achieved with native Xen.
As far as latency is concerned, Figure 2.35 shows that packets forwarded by VR1 experience a
latency kept well below that of their peers forwarded by VR2. After being classified in the driver
domain, delay-sensitive packets are scheduled by the Weighted Round Robin scheduler
with a priority of 67%.

2.2.4 Conclusions and Outlook

The spare computing capacity (e.g., CPU cycles) available in commodity hardware facilitates the provision of high perfor-
mance virtual router platforms. Our experimentation with current system virtualisation approaches
shows that the performance of virtual routers is extremely sensitive to various system issues. Our
conclusion is that currently the best performance is achieved by virtualising forwarding engines
within a single forwarder domain. Doing so yields aggregate performance close to that realised
without virtualisation. However, I/O performance for guest domains is receiving research attention
[34], so in the future, a forwarder domain may no longer be necessary.
In order to exploit the substantial computational capacity of recent multi-core server architec-
tures for virtual routers, a suitable forwarding architecture is needed that is capable of high-speed
packet forwarding. We have shown the crucial role hardware multi-queueing plays in enabling
high degrees of parallelism, flexibility and performance.
To overcome the processing resource limitations and the virtualisation overhead incurred by the
Xen I/O architecture, we first optimised the CPU allocation. The driver domain is allocated
the right amount of CPU cycles to receive the maximum number of packets that can be
processed afterwards by the different virtual routers. Additionally, we proposed that the driver
domain should classify the packets according to their types and then switch them to the target
virtual routers according to their priorities. This allows virtual routers dedicated to delay-sensitive
flows to forward their packets with acceptable throughput and delay.
So far we have only considered a virtual router platform built using a single virtualisation sys-
tem. One possible hybrid solution would be to use Xen for the forwarding plane and OpenVZ for
the control plane, which is possible by taking advantage of hardware virtualisation to prevent the
two clashing. Besides letting us run non-Linux OSes, Xen has the advantage of allowing us to use
Click for the forwarding plane without modifications. OpenVZ’s architecture, on the other hand,
naturally lends itself to running multiple control planes concurrently because it only runs a single
instance of the kernel, enabling the guest control planes to be hidden from one another without
requiring software modifications to the control software.



2.3 Wireless Link Virtualisation

2.3.1 Introduction
While virtualisation of servers, routers, wire line links, end nodes and hosts has been extensively
studied in the literature [20][35][36][37][38][39], the wireless part has not yet received major con-
sideration within today's research community, even though wireless links play an increasingly
important role in current and future networks. Virtualisation of the wireless link com-
bines many different aspects of the virtualisation problem. "Virtual Radio" is a framework
introduced by 4WARD to deal with the wireless virtualisation aspects. The framework is
proposed for configurable radio networks; further details can be found in [40]. Wireless commu-
nications cover a wide range of technologies, types of nodes, communication protocols,
and so on. The 'wireless' concept also implies derived aspects such as mobility and dynamicity within
the network. Virtualising wireless links is therefore a loose concept that demands specification
to become a feasible task, breaking the problem down into several smaller areas of
actuation. This is the approach taken by WP3 when dealing with the prototyping activities of such
a challenge. Efforts have been divided into several aspects of the virtualisation of the wireless part,
and each aspect has been extensively covered in one of the subsequent sections. Main subareas are
briefly described in the following paragraphs:

• Scheduling: scheduling is closely tied to the sharing of resources, since such techniques aim at
an efficient way of distributing the use of a certain resource among several participants. This
is, for example, the case for the wireless medium, which is normally shared among several
nodes/users competing for access to the network. In this direction, two different
simulation tests have been developed:
– Section 2.3.2 presents a general model for the dynamic behaviour of a single VNet
service. Efficient scheduling of virtual links over shared last mile access networks has
significant impacts on the quality of service for users and channel resource utilisation.
In a system where a single wireless base station with a single antenna transmits delay
tolerant data to multiple virtual networks, when the partial Channel State Information
(CSI) is available, the optimal scheduling strategy to maximize spectrum efficiency
is to transmit to a single user with the best channel quality in each scheduling epoch
(i.e., time slot) [41, 42]. This can be considered as an opportunistic service discipline.
Opportunistic scheduling with Adaptive Modulation and Coding (AMC) schemes is
widely proposed for modern wireless systems [43, 44]. It is demonstrated that a finite-
state Markov process can adequately be used for this purpose. This conclusion is sup-
ported by the presented results of extensive simulations.
– Section 2.3.3 presents a Matlab based implementation aimed at the analysis of perfor-
mance for a variable number of VNOs (with several types of traffic) accessing the same
wireless medium through the same wireless interface, according to a TDMA basis. The
core algorithm of the WMVF (Wireless Medium Virtualisation Framework) is based on
the WRR (Weighted Round Robin) scheduling mechanism, modified with some spe-
cific rules and enhanced to become an adaptive technique. Adaptation enhances the
performance of the main principle (TDMA), since the access node can measure (in
principle) the usage conditions taken by the different VNOs and adapt the assignment
of their Time Slots (TS) in the WRR accordingly. The idea is to seek the maximum
usage while maintaining the best possible QoS conditions for each traffic type. A second
phase of this work consists of migrating the adaptive algorithms from Matlab to
the NS-2 simulation environment (C++ based). In this way, conclusions from Matlab
can be tested in the interaction of several wireless nodes (a possible WiMAX scenario)
and with different types of transmissions (e.g., video streaming).

• The Cooperative VNet Radio Resource Management (CVRRM) is devoted to managing the
radio resources among VNets in a heterogeneous environment. Its main objective is the
analysis of a set of different virtualised wireless technologies, instead of looking into the
virtualisation of a particular technology. A first evaluation has been made of the CVRRM
mechanisms proposed in D.3.1.1 [45], in particular the VNet Radio Resource Control
(VRRC). In Section 2.3.4 a network model is presented in order to derive reference
values for evaluation metrics. The wireless access technologies involved in the study are
TDMA/FDMA, CDMA, OFDM, and OFDMA, as they cover most of the current wireless
systems (GSM/GPRS, UMTS, WiFi, and Long Term Evolution (LTE)). Initial simulations
under static traffic conditions are presented.

• Case studies show practical implementation tests and are more specific; in this section we
have included two feasibility tests:

– LTE: the sharing and assignment of wireless resources at a base station needs to be
fair; fairness in wireless networks can be defined in different ways: in terms of the
spectrum used, the power used, a combination of the two, or even the QoS delivered
to end users. In the context of the work in Section 2.3.5, we limit the first investigations
to a single base station and consider fairness to be given by a fair sharing of spectrum
in terms of bandwidth. The challenge considered here is thus how to schedule
users of different Virtual Networks ensuring they are granted their share of bandwidth.
The main focus of the work is to study the effects and gains of introducing network
virtualisation into the LTE system. This is done through virtualising the air interface
of LTE (i.e. virtualising the eNodeB), where a hypervisor is added to virtualise and
schedule the air interface resource to be shared among the different virtual networks.
– WiMAX: another study addresses the challenges of virtualisation of a commercial Mo-
bile WiMAX base station (NEC) and its integration as part of an open virtualised ex-
perimental framework. Different issues are addressed ranging from ensuring radio iso-
lation to network layout of system components. Specifically the contributions of this
study are:
∗ It lays down guidelines for virtualising and integrating a mobile WiMAX radio as
part of a wireless testbed. Modifications required for ensuring frame switching
based purely on layer 2 are described as a part of this prototyping effort.
∗ Experimental case studies are used to show that such an architecture is capable of
supporting multiple network stacks simultaneously.
∗ Controlled experiments show that sufficient radio isolation can be achieved in a
shared mode of operation so that scientific conclusions can be drawn with suffi-
cient accuracy.
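The opportunistic (max-SNR) discipline referred to in the scheduling item above can be sketched as follows; the exponential SNR draws model Rayleigh fading, and all values are synthetic:

```python
import random

random.seed(1)

def opportunistic_schedule(n_users, n_slots, mean_snr=1.0):
    """Each slot, transmit to the user with the best instantaneous channel
    quality (max-SNR rule). SNRs are drawn i.i.d. exponential, as for
    Rayleigh fading with partial CSI; parameters are illustrative."""
    share = [0] * n_users
    for _ in range(n_slots):
        snr = [random.expovariate(1.0 / mean_snr) for _ in range(n_users)]
        share[snr.index(max(snr))] += 1
    return [s / n_slots for s in share]

shares = opportunistic_schedule(n_users=4, n_slots=10000)
# With statistically identical channels each user is served ~25% of the slots.
```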



2.3.2 Performance Analysis of Wireless Access Network

We consider sharing a single base station among multiple VNets with opportunistic scheduling. In
this work we focus on a scenario where there is strong correlation among the channel quality of
different users belonging to the same VNet. A quasi static Rayleigh flat fading model is assumed
for the wireless channels, where the SINR value at a mobile station is a random variable that
remains constant for the entire duration of a time slot. We consider a scenario where VNets have
statistically identical and ergodic channels during the time period of the analysis. During this
period shadowing and path loss are considered to be constant. It is widely accepted that such
channels can be modeled by a finite-state Markov process [46], as shown in Figure 2.36. This
model approximates the dynamics of the random fluctuations of the wireless channels due to fast
fading. In this model, the range of the received SINR by a user is divided into multiple sections.
When the signal power is below ζi and above ζi−1, the channel is considered to be in state Si. The
transition probability from state Si to state Sj in the next time slot is denoted by pi,j. The algorithms
for the computation of transition probabilities are given in [46]. It is also shown there that transitions
among nonadjacent states can be neglected. Thus, there are only transitions among adjacent states,
as shown in Figure 2.36.

Figure 2.36: Finite-state Markov model for fading channels

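A sketch of this construction, using the standard level-crossing-rate approximations for Rayleigh fading (cf. [46]), is given below; the 5 dB mean SINR is an illustrative assumption, and only adjacent-state transitions are retained, as in Figure 2.36:

```python
import math

def fsmc_transition_matrix(thresholds_db, mean_snr_db, fm, ts):
    """Finite-state Markov channel for Rayleigh fading (exponential SINR),
    per the level-crossing-rate construction of [46]. thresholds_db are
    the interior SINR partition levels; fm is the maximum Doppler shift
    and ts the time slot length."""
    rho = 10 ** (mean_snr_db / 10.0)
    g = sorted(10 ** (t / 10.0) for t in thresholds_db)     # linear thresholds
    bounds = [0.0] + g + [float("inf")]
    K = len(bounds) - 1                                     # number of states
    # steady-state probability of each state (exponential SINR distribution)
    pi = [math.exp(-bounds[k] / rho) - math.exp(-bounds[k + 1] / rho)
          for k in range(K)]
    def lcr(gamma):   # level crossing rate at level gamma
        return math.sqrt(2 * math.pi * gamma / rho) * fm * math.exp(-gamma / rho)
    P = [[0.0] * K for _ in range(K)]
    for k in range(K):
        if k < K - 1:
            P[k][k + 1] = min(1.0, lcr(bounds[k + 1]) * ts / pi[k])
        if k > 0:
            P[k][k - 1] = min(1.0, lcr(bounds[k]) * ts / pi[k])
        P[k][k] = 1.0 - sum(P[k])                            # stay probability
    return P

# zeta = [10, 5, 0] dB, fm = 10 Hz, ts = 1.25 ms as in the text;
# the 5 dB mean SINR is an assumed value for illustration.
P = fsmc_transition_matrix([10, 5, 0], mean_snr_db=5.0, fm=10.0, ts=1.25e-3)
```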
For analytical modeling, the scheduling problem with multiple users is simplified into a schedul-
ing problem where a tagged VNet competes with a single super user. The super user combines the
competing capabilities of all users, except the tagged VNet. First, the instantaneous achievable rate
of the super user is modeled by a finite-state Markov process. Next, the reduced problem is solved
to obtain the service model of a single user. The details of the derivations are given in the Appendix.
We perform Monte Carlo simulations to verify the accuracy of the proposed analytical model.
The simulation scenario includes a single base station with an arbitrary number of VNets. For the
fading channel simulator, we consider flat fading at fc = 1900 MHz; the maximum Doppler shift is
10 Hz, and the time slot duration is 1.25 ms. We compare the analytically computed transition
matrix, Pm , with the corresponding simulation result, Ps , for different system settings. Unlike
scalar values, matrices of arbitrary dimensions cannot be compared directly. The



Figure 2.37: Modeling error vs. the speed of fading process (fm is the maximum Doppler shift and
ts is the time slot length)

average normalised norm of the rows of the error matrix, i.e., Pm − Ps , is used to represent the
modeling error as follows.
e_m = \frac{1}{m} \sum_{i=1}^{m} \sqrt{ \frac{ \sum_{j=1}^{m} \left[ P_s(i,j) - P_m(i,j) \right]^2 }{ \sum_{j=1}^{m} P_s(i,j)^2 } } ,    (2.1)

where m is the dimension of the matrices.
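The error metric (2.1) is straightforward to compute from the two transition matrices. The sketch below assumes `Ps` and `Pm` are plain row-major lists of lists of equal dimension:

```python
import math

def modeling_error(Ps, Pm):
    """Average normalised row norm of the error matrix Pm - Ps, eq. (2.1):
    for each row, the Euclidean norm of the error is normalised by the
    Euclidean norm of the corresponding row of Ps, then averaged over rows."""
    m = len(Ps)
    total = 0.0
    for i in range(m):
        num = sum((Ps[i][j] - Pm[i][j]) ** 2 for j in range(m))
        den = sum(Ps[i][j] ** 2 for j in range(m))
        total += math.sqrt(num / den)
    return total / m
```

For identical matrices the error is zero; a uniform perturbation of 0.1 on a 2x2 stochastic matrix gives an error of about 0.17.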

(a) Modeling error vs. the resolution of the model (b) Modeling error vs. the number of users
(number of states)

Figure 2.38: Modeling error

A summary of the comparisons is shown in Figures 2.37 and 2.38(a). The modeling error, defined
by (2.1), against the normalised Doppler frequency shift is shown in Figure 2.37 for ζ = [10, 5, 0]
dB (SINR partition levels in Figure 2.36) and N = 10. The normalised Doppler frequency shift



represents the speed of channel fading. This figure shows that the modeling error of a 5-state
Markov process for the per-user service is below 2% over a wide range of channel fading speeds.
The modeling error increases with the speed of the fading process due to the decreasing accuracy
of the underlying channel model. Indeed, a first-order Markov model becomes less accurate
for fast fading channels. Figure 2.38(a) shows the per-user service model error versus the resolu-
tion of the model, i.e., the number of states in the resulting Markov process. The number of users
for this simulation is N = 10, and the maximum Doppler shift is 8.8 Hz, which corresponds to the
fading speed of a pedestrian channel. Our observation is that the modeling error for individual non-
zero components of the transition probability matrix slightly increases as we increase the number
of states. However, the percentage of error decreases as the number of non-zero components de-
creases. Figure 2.38(b) shows the modeling error against the number of users. The initial increase
and the later decrease in modeling error with increasing number of users can be observed in this
figure. The accuracy of the individual components of the computed probability transition matrix
decreases when the number of users increases. However, when the number of users increases, the
number of zero components of the matrix increases, as well. This effect decreases the modeling
error according to (2.1).

2.3.3 WMVF: Wireless Medium Virtualisation Framework

The main goal of this testbed was to develop a framework, based on a TDMA scheduling technique,
to assess the feasibility of virtualising a wireless access point while maintaining the QoS requirements
of, for instance, Voice over IP (VoIP) applications, or any kind of real-time traffic in general. The
intention is to analyse the efficiency of TDMA scheduling looking for the maximum usage of the
wireless medium while fully satisfying the VNet Operator (VNO) requests. The WMVF has been
developed in the form of an event-driven environment, where discrete time instants play the role of
triggering a TDMA-virtualised access node. The number of VNOs present, the number and type of
traffic patterns to be generated, the minimum Time Slot (TS) to be guaranteed for each VNO during
the simulation time, the equivalent bandwidth (throughput) estimated for each channel, and some
other parameters can be configured and specified for the different scenarios and simulations. Ex-
periments are designed in order to assess the necessary tradeoff between virtualisation (maximum
usage of the resource) and the QoS demanded by services.

WMVF Simulation Model

This work is completely based on simulation, so its components are the basic constructs of the
software design. It mainly has two differentiated parts, one implemented using Matlab, and the
other one implemented within NS2 (C++ based).
Regarding the Matlab framework design, the simulation starts when traffic packets are generated
and start arriving at the queues. Each VNO has a different queue assigned in the access node. The
server attends packets from the queues following a Weighted Round Robin (WRR) basis. The
duration of each VNO’s TS is adapted periodically, but it can never reach a value below an agreed
minimum (for instance, one VNO can demand a minimum by contract and this would be fixed, even
though the system detects a mis-usage of the time assigned to that VNO). An adaptive algorithm
has been incorporated in order to dynamically re-assign the duration of each VNO’s TS. This
adaptation is driven by the relative comparison of the average queue sizes. If one VNO
(queue) is accumulating many more packets than the others, it means that its traffic is suffering from



higher delays, which normally decreases the QoS for those end-users. The adaptation tries to
increase the weight assigned to this VNO in the scheduler, if possible. A detailed explanation of
the implemented software modules can be found in B.2.1.
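The adaptive re-assignment described above can be sketched as follows. This is a hypothetical reading of the mechanism, not the actual WMVF code: each VNO keeps its contractual minimum slot, and the remaining frame time is redistributed in proportion to the average queue sizes.

```python
def adapt_timeslots(queue_sizes, ts_min, frame):
    """Re-assign each VNO's time-slot duration within a fixed frame.
    queue_sizes : average queue size per VNO (backlog indicator)
    ts_min      : contractual minimum slot duration per VNO
    frame       : total frame length to be shared
    Each VNO always keeps its agreed minimum; the spare frame time is
    distributed in proportion to the backlogs, so a VNO accumulating
    more packets gets a larger slot (i.e., a larger WRR weight)."""
    spare = frame - sum(ts_min)
    assert spare >= 0, "frame must cover all contractual minima"
    total_q = sum(queue_sizes)
    if total_q == 0:
        # Nothing queued: split the spare time evenly.
        return [m + spare / len(ts_min) for m in ts_min]
    return [m + spare * q / total_q for m, q in zip(ts_min, queue_sizes)]
```

For example, with backlogs `[10, 30]`, minima of 1.0 each and a 10.0 ms frame, the slots become 3.0 and 7.0 ms: the congested VNO receives the larger share while both stay above their minimum.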
The NS2 implementation is ongoing work. The simulation model, scenarios and results will be
presented in following documents. From a high-level perspective, the move to NS2 stems from
the need to test this adaptive scheduling in the communication among several wireless nodes
(not only a single access point, as in Matlab). Efforts have been put into two main aspects:
(1) expanding the scheduling implementation to several wireless nodes communicating among
themselves (each node implements the WMVF so as to manage its own virtualised queues and to
route traffic from different VNOs), and (2) testing the WMVF in terms of guaranteed QoS when
the critical traffic consists of several video streams from different VNOs to several wireless users.
First implementations with NS2 are available.

Scenarios
For the experiments in Matlab, we have considered the scenarios summarised in Figure 2.39. All
throughput estimates, as well as the background traffic patterns, refer to WiFi. The first six
scenarios (also named symmetric scenarios) use the same traffic patterns per VNO with three
possible combinations: only one VNO (no virtualisation), two, or three VNOs running on the
same wireless node. All VoIP traffic models are configured with three full-duplex simultaneous
connections with a random duration of around 100 s. The system is configured so that the WiFi
background traffic finishes when the last packet of VoIP traffic is sent. Scenarios 1 to 6 have been
simulated with the G723.1 and G711 VoIP codecs and with 6 and 10 background traffic flows.

Figure 2.39: Simulation Scenarios and their Utilisation factor

The symmetric scenarios use the same traffic patterns while changing the number of VNOs, to
analyse the degradation of delay and jitter as the number of VNOs increases. Simulations with
different VoIP codecs allow us to point out the effect on the quality indicators when the number of
voice calls is the same but the required bandwidth differs. Experiments 7 and 8, on the other hand,
analyse the asymmetry of the traffic in different VNOs running on the same node, and compare the
situation to the equivalent scenarios with symmetric traffic, i.e., scenarios 4 (with G711) and 5
(with G723.1), respectively.



All scenarios in Figure 2.39 have been tested both with and without adaptation in the scheduler.
In the case of adaptive experiments, adaptation has been based both on the utilisation per VNO,
and the size of the VNO-queues. Results of both are summarised in the following paragraphs.

Simulation Setup The focus has been put on measuring the tradeoff (pros and cons) of
virtualisation: increasing the number of VNOs using the access node, while trying to maintain
the QoS required for delay-sensitive services. In this sense, the utilisation factor of the shared
resource (access node) has been monitored, especially in those situations where the mean delay
experienced by VoIP packets increased. The jitter of VoIP traffic has also been measured, in
order to verify that the QoS parameters stay within the limits required for a quality application. In NS2
simulations, where video streaming is introduced at the application level, we plan to monitor the
delay and jitter, as well as the quality perception of a common user, through an ad-hoc visualisation
tool. In order to quantify the aforementioned aspects we have particularised the wireless medium
virtualisation problem for scenarios where the wireless technology is 802.11b. For details about
parameter configuration in the traffic patterns used, see B.2.2.

Figure 2.40: Mean Delay and Jitter for all the scenarios

Results Figure 2.39 includes the normalised utilisation of the wireless medium in the last col-
umn, and Figure 2.40 shows the mean delay and jitter for the symmetric scenarios. The six sym-
metric scenarios show a progressive increase of the utilisation with maximum values around 40
and 70 per cent for G723.1 and G711 respectively. Figure 2.40 presents the cost of a degree of
utilisation in terms of mean delay and jitter. The situation with one operator (no virtualisation at
all) shows a very low delay and system (wireless interface) utilisation, as expected, because the
traffic load on the whole wireless node is low. Although the quality for that operator is very good,
the goal of virtualisation is to share the wireless interface while maintaining acceptable quality
metrics. For the rest of the symmetric scenarios, the mean delay and jitter increase as the traffic
and the number of VNOs increase. The G711 codec generates bigger packets (200 bytes) at a
higher packet rate than G723.1, which increases the delay even more. As the number of



WiFi background traffic increases, both VoIP and background traffic delay increase. This is normal
because of the FIFO nature of the VNO-Queues. Future versions of the WMVF will implement
traffic priorities in the VNO-Queues to avoid this effect.
To present a couple of comparisons between the non-adaptive and the adaptive algorithms,
Figure 2.41 shows the mean delay experienced by VoIP and WiFi TCP traffic packets in scenarios
5 and 6 from Figure 2.39. Both scenarios have been run with both VoIP codecs (G711 and G723.1),
so four cases are presented in Figure 2.41.

Figure 2.41: Mean Delay for each traffic type in scenarios 5 and 6

In this comparison, we can state that the adaptation mechanisms clearly reduce the mean delay
for all types of traffic. For G723.1 VoIP packets the improvement is less noticeable, since the
delays experienced with the non-adaptive version were already quite low. For TCP traffic the
decrease in the average delay is much more visible, and so it is for G711 VoIP traffic, whose
demand is higher (in terms of packet size, but also in the bandwidth demanded by flows).

2.3.4 CVRRM Evaluation

Model assumptions
In order to define a model to evaluate the Cooperative VNet Radio Resource Management (CVRRM)
mechanisms proposed in D.3.1.1 [45], and in particular the VNet Radio Resource Control (VRRC)
function, some assumptions are made: uniform coverage by all the wireless systems under analysis,
and the absence of a specific requirement from the VNet Operator related to the wireless technology
involved. It is considered that VNet Operators do not care about the specific wireless technology
in use, as long as the contractual requirements are ensured. Moreover, it is assumed that end-user
nodes are mobile and multi-homed, i.e., capable of supporting several radio interfaces, so that
they can connect to any available network. Concerning the wireless access technologies involved,
one considers TDMA/FDMA, CDMA, OFDM, and OFDMA, as they cover most of the current
wireless systems (GSM/GPRS, UMTS, WiFi, and LTE), which from now on are considered as
examples of such access technologies. Although the multiple access definition of each wireless
technology is different, a level of abstraction is added, enabling a common approach to managing
all radio resources. It is considered that each wireless link is generically



composed of channels, which vary in number and capacity according to the wireless technology
involved. However, the characteristics of each technology are taken into account, in order to
emphasise the specific factors that influence channel capacity. The main feature considered here is
the channel data rate. In this first evaluation, VNets are classified according to their contractual
characteristics. For simplicity, only two VNet types are considered:

• Best Effort (BE), i.e., a VNet that has no data rate requirements;

• Guaranteed (GRT), with stringent data rate requirements.

In the BE case, channels may be transferred (borrowed) to fulfil the total data rate required by
the GRT VNet, if no other channels are available.

Strategies and algorithms

CVRRM functions must interact with a Monitoring Entity (ME) (e.g., the In-Network Management
resource monitoring in the context of the 4WARD project [47]), which provides real-time
measurements, such as the quantity and quality of available resources, neighbouring resources,
and failure detection. Furthermore, it is assumed that an ME instance exists in the physical node,
providing global resource monitoring information, and in each virtual node, collecting its own
monitoring information. The monitoring of the whole VNet is done through the association of the
MEs instantiated in each of its VNodes, constituting an aggregated system. It is assumed that the
ME, mainly its real-time monitoring part, monitors the wireless medium and the node, and that it
can provide the inputs to the cost function computation, in order to evaluate virtual networks and
resources.
The main objectives of VRRC are to cope with the initial VNet requirements and to optimise the
usage of radio resources. In order to reach them, VRRC uses monitoring information, comparing
the actual capacity with the contractual one, and then deciding on the (re)allocation of radio
resources to a given VNet. The VRRC algorithm is driven by this main goal, i.e., by the possible
changes in the capacity/availability of radio resources that affect VNet requirements, e.g., data
rate, delay, jitter, packet loss, and error rates. The selection of additional radio resources is done
according to the cost of neighbour resources, obtained from the cost function computation.
Depending on resource utilisation and on the VNets' characteristics, VRRC may also decide on
the migration or adaptation of the amount of resources allocated to the VNets, in order to optimise
radio resource usage. The strategies used by VRRC are related to the VNet requirements, and are
reflected by the Key Performance Indicator weights in the cost function computation for resource
evaluation.

Evaluation Metrics

In order to evaluate the performance of the VRRC algorithm, a set of output parameters was
identified. These parameters, the VNet Operator satisfaction level, the out of contract ratio, the
VLink utilisation and the physical link utilisation, are key indicators allowing a proper validation
of the proposed model. The equations for the computation of each parameter are given in
Appendix B.3.1.
The VNet Operator satisfaction level, SVNO, captures the situations in which the VNet Operator
requests to use the allocated capacity, according to the contract established with the VNet Provider,
but this capacity is not available: SVNO accounts for the effective decrease in the amount of
contracted resources perceived by the VNet Operator. The analysis of this parameter should be
made at the VLink and at the VNet levels. Indirectly, it allows evaluating the network's capability
to react to radio/channel



impairments that reduce the availability of the virtual resources. It can be used to monitor the
virtual links, in order to detect contract violations.
The out of contract ratio is defined as the fraction of the total sampling period during which
the VNet's contracted capacity is not available. It is a global metric that is independent of the
service level experienced by the VNet Operator, since under low VNet usage a capacity reduction
may not be perceived. The out of contract ratio can be computed for a VLink and for a VNet. The
objective of this metric is to evaluate the network's reaction to radio/channel impairments that
reduce the availability of the virtual resources below the VNet's contracted requirements. The
adaptation of the scanning time to fast or slow variability of the medium is another target to
achieve.
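A minimal sketch of how the out of contract ratio could be computed from sampled capacity measurements (the sampling itself and the per-VLink/per-VNet aggregation are left out; the function name is illustrative):

```python
def out_of_contract_ratio(available, contracted):
    """Fraction of sampling instants at which the capacity available to the
    VNet (or VLink) falls below the contracted capacity."""
    violations = sum(1 for a in available if a < contracted)
    return violations / len(available)
```

For example, with samples `[5, 3, 5, 5]` against a contracted capacity of 4, one sample out of four violates the contract, giving a ratio of 0.25.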
VLink utilisation is defined as the ratio of the number of occupied channels over the total number
of allocated channels. This global metric should be applied to the individual VLinks, in order to
evaluate the possibility of introducing balancing mechanisms to optimise the physical link
utilisation, which is defined as the ratio between the total number of occupied channels and the
total number of physical channels. This parameter should be calculated over all wireless physical
resources, namely Base Stations (BSs).

Scenarios for evaluation

In order to have reference values for each of the evaluation metrics defined in the previous sub-
section, different scenarios were identified, derived from a reference scenario. This reference sce-
nario consists of a cluster of BSs, from several Radio Access Technologies (RATs) under analysis,
deployed on a circular area. A uniform coverage by all RATs is considered, therefore, allowing
vertical handover in heterogeneous networks. Two VNets for data traffic are considered, a BE
and a GRT one. In both cases, virtual resources are assumed to be instantiated in all the physical
resources within the cluster. The reference scenario can be expanded into several sets of scenarios,
according to the input parameter to be analysed, namely, number of BSs per RAT, cluster per-
formance, GRT/BE VNet ratio, and service mix. In this study, just the cluster performance was
varied. Concerning the number of BSs per cluster, one considers a reference scenario in which
the cluster is served by 2 TDMA, 5 CDMA, 8 OFDM and 6 OFDMA BSs. A main parameter for
data rate variability is the Signal to Interference plus Noise Ratio (SINR); thus, in order to have
reference and bound values, three performance cases are considered: Reference, Good and Poor.
The Reference case corresponds to the mean data rate of each wireless technology, while in the
Good one the maximum data rate is considered, and in the Poor case the minimum data rate is
obtained. The total cluster data rate, i.e., the capacity of the cluster to process the offered
traffic, is calculated through the sum of the individual BS reference data rates for each RAT. In
order to obtain the maximum data rate for each VNet in the cluster, the ratio between both VNets
should be established. Considering that all physical resources are allocated for the two VNets, the
VNets’ capacity is obtained as being proportional to the defined ratio over the total capacity. A
relation of two to one between BE and GRT VNet is considered, meaning the BE VNet gets twice
the allocated data rate of the GRT one. The main reason for this is the mean typical data rate of
the services considered in the BE VNet, which is higher than the one for the GRT VNet. Although
other relationships could be considered, this one is used as an example. Table 2.3 presents the
cluster data rate, per VNet, for the defined set of performance scenarios. It is assumed that the
maximum required data rate for the GRT VNet is the value reached in Good, and the minimum
guaranteed data rate is the one obtained for Reference. The BE VNet has the minimum data rate
requirement, i.e., the one obtained in the Poor. The number of users in the GRT VNet is twice



Performance scenario    Cluster data rate [Gbit/s]
                        BE VNet    GRT VNet

Good                    4.68       2.34
Reference               2.28       1.14
Poor                    0.99       0.49

Table 2.3: Cluster Data Rate
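The two-to-one BE/GRT split behind Table 2.3 can be reproduced with a few lines. The totals used below are simply the sums of the two columns of the table; the function itself is an illustrative sketch, not 4WARD code.

```python
def split_capacity(total, be_share=2, grt_share=1):
    """Split the total cluster data rate between the BE and GRT VNets
    according to the agreed ratio (two to one by default), so the BE VNet
    gets twice the allocated data rate of the GRT one."""
    s = be_share + grt_share
    return total * be_share / s, total * grt_share / s

# Totals implied by Table 2.3 [Gbit/s]: Good, Reference, Poor.
for total in (7.02, 3.42, 1.48):
    be, grt = split_capacity(total)
    print(f"total={total:.2f}  BE={be:.2f}  GRT={grt:.2f}")
```

Rounding to two decimals reproduces the BE/GRT pairs of the table, e.g. 4.68 and 2.34 Gbit/s for the Good case.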

the one in the BE VNet. For the reference scenario characterisation, it is important to quantify
the amount of traffic offered to the VNets, which is related to a typical data rate computed over all
active end-users in the network during that period of time; hence, it is considered from now on
as the mean data rate per active end-user. A set of different services provided by each VNet is
considered, according to their type. The GRT VNet runs services with stringent requirements, and
the BE VNet provides other services, which do not have such stringent ones, Table 2.4. Tele-working/
Interactive gaming has been included in the set of services, since it is expected to have a great impact
in the future [48]. The offered data rate per active end-user was calculated based on the mix of
services presented in Table 2.4, using the mean value of the typical data rate interval. The
obtained values for the offered data rate per active end-user in each VNet are 15.7 and 2.9 Mbit/s,
for the BE and GRT VNets, respectively.
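The per-user offered data rate can be approximated from the service mix in Table 2.4 by weighting the midpoint of each typical data rate interval with its penetration. The assignment of services to VNets below is our reading of the table (real-time services in GRT, elastic ones in BE), and using interval midpoints is an assumption, so the result only approximates the 15.7 and 2.9 Mbit/s quoted above.

```python
# (service, typical data rate interval [Mbit/s], penetration [%]),
# values taken from Table 2.4; the GRT/BE assignment is our assumption.
GRT_MIX = [("VoIP", (0.032, 0.064), 57),
           ("Video Streaming", (5, 5), 28),
           ("Tele-working/Interactive gaming", (1, 20), 15)]
BE_MIX = [("FTP", (1, 50), 2),
          ("P2P", (1, 50), 56),
          ("Web", (0.064, 2), 29),
          ("Chat", (0.064, 0.512), 8),
          ("Email", (0.064, 0.512), 5)]

def offered_rate(mix):
    """Penetration-weighted mean of the interval midpoints [Mbit/s]."""
    return sum((lo + hi) / 2 * pen / 100 for _, (lo, hi), pen in mix)
```

With these assumptions the GRT mix evaluates to roughly 3.0 Mbit/s and the BE mix to roughly 15.1 Mbit/s per active end-user, close to (but not exactly) the values reported in the deliverable, which may reflect a slightly different choice of representative rates.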

Services Typical data rate [Mbit/s] Service penetration [perc.]

VoIP [0.032, 0.064] 57
Video Streaming 5 28
Tele-working/Interactive gaming [1, 20] 15
FTP [1, 50] 2
P2P [1, 50] 56
Web [0.064, 2] 29
Chat [0.064, 0.512] 8
Email [0.064, 0.512] 5

Table 2.4: Service Penetration per VNet (Based on [48] and [49])

Results
General considerations After the identification of a set of relevant scenarios, reference values
for the evaluation metrics defined in the previous subsection have been computed. The results
obtained from these scenarios are based on static values (dynamic ones, obtained from simulations,
are left for further work). However, they should allow obtaining typical and bound values for each
of the parameters, by varying some of the input variables. Hence, a relative comparison with the
values that will be obtained after the introduction of the CVRRM algorithms can be achieved.
Only a subset of these results is presented in this main part, the others being in Appendix B.3.

Out of contract ratio Since this is a static analysis, based on the traffic offered to the VNets,
one needs to consider a light traffic situation, corresponding to a reduced number of active users, to



estimate bounds for the out of contract ratio, rtnav . The limits obtained for rtnav , in terms of the
number of active users, are similar to the ones found for SVNO, presented in Appendix B.3.
Results in Figure 2.42 illustrate the rtnav variation for 100 and 194 users offering traffic to the BE
VNet (i.e., 200 and 388 users in the GRT VNet). Under a severe degradation of radio interface
conditions, depicted by Poor, the percentage of time during which the contracted capacity is not
available increases with the introduction of more users. Figure 2.42(b) presents the limit case in
Reference without GRT VNet contract violation. In Poor, the GRT VNet runs out of contract all
the time. In this case, in order to guarantee the stringent requirements of this type of VNet,
mechanisms to compensate for the capacity reduction must be introduced. Although the same
phenomenon occurs for the BE VNet when the number of users is near its limit (145 users), it is
not so critical, due to the elastic characteristics of the traffic in this type of VNet and the
possibility of it being delayed.

Figure 2.42: rtnav for the different performance scenarios.

2.3.5 LTE Wireless Virtualisation

In order to virtualise the LTE air interface, the eNodeB has to be virtualised, since the eNodeB
is the entity that is responsible for accessing the radio channel and scheduling the air interface
resources. Virtualising the eNodeB is partly similar to node virtualisation, for which a number of
solutions exist, where the physical resources of the machine (such as CPU cycles, memory, I/O
devices, etc.) are shared between multiple instances of virtual operating systems. XEN [35], for
example, is a well-known PC virtualisation solution that calls the entity responsible for scheduling
the physical resources a "Hypervisor". Following the same principle, our proposal adds a
hypervisor to the LTE eNodeB, as shown in Figure 2.43.
The LTE hypervisor is responsible for virtualising the eNodeB into a number of virtual eNodeBs
(each used by a different operator); the physical resources are scheduled among the different
virtual instances by the hypervisor (similar to XEN). In addition, the LTE hypervisor is also
responsible for scheduling the air interface resources (that is, the OFDMA sub-carriers) between
the different virtual eNodeBs. LTE uses OFDMA in the downlink, which means that the
frequency band is divided into a number of sub-bands, each with a carrier frequency. The air
interface resources that the hypervisor schedules are actually the Physical Radio Resource Blocks



Figure 2.43: Virtualised LTE eNodeB protocol stack

(PRBs); this is the smallest unit that the LTE MAC scheduler can allocate to a user. A PRB consists
of 12 sub-carriers in the frequency domain and 7 OFDM symbols in the time domain.
Scheduling the PRBs between the different virtual eNodeBs actually means splitting the frequency
spectrum between the eNodeBs of the different operators. The hypervisor can make use
of a priori knowledge (e.g. user channel conditions, virtual operator contract, load, etc.) to
schedule the PRBs. OFDMA scheduling has been studied extensively in the literature, but what is
new here is that the frequency spectrum has to be scheduled among the different operators. This
is even more challenging because of the additional degree of freedom that has been added. A
number of possibilities and degrees of freedom exist here, where the scheduling could be based
upon different criteria: bandwidth, data rates, power, interference, pre-defined contract, channel
conditions, traffic load, or a combination of these. In the end the hypervisor has to convert these
criteria into a number of PRBs to be scheduled for each operator, but the challenge is to make sure
that the PRB allocation is fair and enables the operators to satisfy their requirements. This
means that some mechanisms/contract guidelines have to be defined to guarantee the resources to
the operators. Also important here is the time frame within which the hypervisor operates in
order to guarantee the pre-defined requirements. To show the spectrum of possibilities, we
concentrate here on two different types of scheduler algorithms, a static and a dynamic one. In
the static algorithm the spectrum is divided between the different virtual operators beforehand, and
each operator gets his operating spectrum and keeps it for the whole time. This is similar to
today's networks, where each operator has his own frequency spectrum and no other operator is
allowed to use it. In the dynamic algorithm, the resources are allocated to the different
operators at runtime, and the amount and allocation can change over time depending on the
operators' traffic load. The latter algorithm is an example of what can be gained by applying
network virtualisation to wireless mobile communication systems, where the operators not only
share the physical infrastructure but also the frequency spectrum, which in turn leads to better
resource utilisation.

LTE Virtualisation Simulation Model

The LTE simulation model used in this project was developed using the simulation tool OPNET
[50]. OPNET is a discrete event simulation engine that uses hierarchical modelling and object-
oriented programming. It supports an integrated GUI for debugging and analysis. The LTE simula-
tion model is designed and implemented following the 3GPP specifications, where most of the LTE



functionalities and protocols have been implemented, as can be seen in Figure B.7. The simulation
model details can be found in B.4.1.

Simulation Configurations

The main goal is to show the effects and advantages of using network virtualisation in mobile
communication systems. Specifically we investigate the additional benefits that can be gained
if the mobile operators actually share the air interface resources (i.e. frequency band), which
is possible by virtualising the mobile system infrastructure. From that, the simulation scenarios
investigated are divided into two different setups: A static hypervisor configuration and a dynamic
hypervisor (load based) configuration.
Besides the hypervisor scheduler algorithm, the rest of the configuration is exactly the same for
both scenarios and is shown in Table B.2. In order to show the effect of wireless virtualisation,
a peak traffic model has been used for only 300 seconds, to emulate a sudden peak in the load of
an operator. The point in the simulation time at which this peak traffic model is applied to each
operator's users is configured as shown in Table B.1. This means that in each cell one operator
will have a peak load, during which he requires more resources to support his users, whereas the
other two virtual operators experience a low traffic load and require fewer air interface resources.

Simulation Results

Figure 2.44: Cell1 per virtual operator (VNO) allocated bandwidth (MHz)

As discussed earlier, two scenarios, each with a different hypervisor algorithm, are compared
against each other. The first scenario, "Static", is similar to today's mobile network operator
setups, apart from sharing the infrastructure: three VNOs operate in the same region, each using
his own frequency band; these bands are pre-allocated at the beginning of the simulations. Since
we have 99 PRBs in total, each operator gets one third of the PRBs. The second scenario,
"Dynamic", can be seen as the futuristic approach, in which the three VNOs share the frequency
band dynamically depending on their traffic load: in each time interval (chosen to be one second,
as shown in Table B.1), the hypervisor estimates from the previous interval what the



traffic load of each operator is and what channel conditions the operator's users experience, and
then assigns the PRBs among the VNOs based on these calculations.
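The per-interval allocation step described above can be sketched as a simple proportional split of the 99 PRBs according to each operator's measured load. This is an illustrative simplification (the function name and the proportional-share rule are assumptions), not the hypervisor algorithm actually simulated:

```python
def allocate_prbs(loads, total_prbs=99, min_share=1):
    """Split the PRBs among virtual operators proportionally to the
    traffic load measured in the previous interval (simplified sketch)."""
    total_load = sum(loads.values())
    if total_load == 0:
        # No traffic anywhere: fall back to an equal (static-like) split.
        equal = total_prbs // len(loads)
        return {vno: equal for vno in loads}
    alloc = {vno: max(min_share, int(total_prbs * load / total_load))
             for vno, load in loads.items()}
    # Hand PRBs lost to integer rounding to the most loaded operator.
    leftover = total_prbs - sum(alloc.values())
    busiest = max(loads, key=loads.get)
    alloc[busiest] += leftover
    return alloc

# Example: operator 1 experiences a traffic peak.
print(allocate_prbs({"VNO1": 6.0, "VNO2": 1.0, "VNO3": 1.0}))
```

In contrast, the "Static" scenario corresponds to always returning the equal split, regardless of load.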
Figure 2.44 shows the bandwidth that the hypervisor has allocated to each of the virtual oper-
ators. It can be noticed that the allocated bandwidth changes with time depending on the load
and the users' channel conditions of each operator. In the first 300 seconds of the simulation run
time, operator 1 is allocated a much higher bandwidth than the other two operators. This is due to
the scenario configuration shown in Table B.1, where operator 1 has four additional users with
video applications, causing an increase in that operator's traffic.

Figure 2.45: Cell1 virtual operator 1 per user air interface throughput (kbps)

The average per-user air interface throughput for operator 1 users in cell 1 can be seen in
Figure 2.45. The left-hand figure shows the air interface throughput for the VoIP users, whereas
the right-hand figure shows the video users' throughput. These figures show that the dynamic
scenario achieves, as expected, a better throughput than the static case. The reason is that
additional resources are used in the dynamic case, whereas in the static case these resources were
allocated to the other operators but remained unused.

Figure 2.46: Cell1 virtual operator 1 per user end-to-end application delay (ms)

The average user application end-to-end delays can be seen in Figure 2.46. It can be noticed
that the static scenario suffers from higher application delays as compared to the dynamic case



especially for the video users, since they have a higher MAC scheduler priority. The reason is the
limited air interface resources.

Figure 2.47: Cell1 virtual operator 1 probability of end-to-end delay exceeding 300 ms

In order to check the performance of the VoIP application, the probability that the end-to-end
delay of a VoIP packet exceeds 300 ms (the QoS limit for VoIP packets) is shown in Figure 2.47.
It can be seen that the static case has a higher probability of not satisfying the QoS threshold.
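The QoS check just described amounts to computing an empirical exceedance probability over the recorded per-packet delays. A minimal sketch (the function name and the sample delays are hypothetical):

```python
def qos_violation_probability(delays_ms, threshold_ms=300.0):
    """Fraction of packets whose end-to-end delay exceeds the QoS limit."""
    if not delays_ms:
        return 0.0
    violations = sum(1 for d in delays_ms if d > threshold_ms)
    return violations / len(delays_ms)

# Example with hypothetical per-packet VoIP delays (ms).
print(qos_violation_probability([120, 250, 310, 90, 480]))  # -> 0.4
```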
In conclusion, the results show that additional advantages can be gained from applying the
virtualisation principle to the LTE system, especially when it comes to sharing the air interface
(i.e. spectrum), where both the overall system and the individual users' performance improve.

2.3.6 WiMAX Virtualisation

Open networking testbeds such as ORBIT [51] and PlanetLab [52] have gained widespread
popularity in the research community because they permit the evaluation of new protocols and
features in a controlled environment. In order to support long-duration experiments and improve
the scalability and efficiency of these testbeds, different network virtualisation approaches have
been proposed. Virtualisation in this case deals with the task of splitting the entire testbed,
including the network resources (radio or wired), management hardware, and experiment consoles,
in such a way that every user/slice has the illusion of having the entire testbed to itself.
Virtualisation of the testbed also allows for integration in the GENI [53] framework, which
envisions the presence of an open heterogeneous experimental substrate.
A standard WiMAX system typically consists of three important components: 1) the Basestation
Transceiver System (BTS), 2) the Access Service Network Gateway (ASNGW or ASN), and 3) the
Content Service Network Gateway (CSN). (There are other components, such as the AAA
authentication server, but these are not essential for the virtualisation work presented here.)
The BTS, shown in Figure 2.48, is the main component of the WiMAX system and consists of
the network interface and the radio which communicates with the clients. Among other



[Diagram: control and GRE interfaces (gre1a, gre1b, ..., greMa, greMb) feed the 802.16e Common
Packet Sublayer (CPS); per-service-class queues (UGS, ertPS, rtPS, nrtPS, BE), each with its own
scheduler (RR, WRR, etc.), feed a priority-based scheduler and the WiMAX frame scheduler.]

Figure 2.48: WiMAX Basestation Transceiver System

things, the BTS is capable of controlling RF features such as the transmission frequency, power,
rate, symbol ratios, and retransmission mechanisms, as well as other client management function-
ality. The BTS interacts with the wired world through the base station controller (ASNGW). The
ASNGW is used to route traffic appropriately from the wired interface(s) to individual service
flows on WiMAX clients. In Profile A WiMAX systems, the ASNGW is also responsible for
performing all radio resource management functions.

Virtualisation Architecture

Figure 2.49: Generic architecture for the proposed WiMAX deployment



A broad overview of our system design is shown in Figure 2.49. The BTS hardware from the
original NEC WiMAX system is used in an unmodified state. To support virtualisation research
and simultaneous experimentation in a virtualised mode, we modified the ASNGW to interface
with the rest of the system. To allow an independent control setup for individual slices, we
provision virtual machines that act as virtual basestations, or vBTSs. These vBTSs act as points of
presence in the virtualised framework for the experimenters. All frame switching from the vBTS
to the wireless clients is modified to work purely at layer 2. To enable this, we use features such as
VLANs, layer-2 tunnels, and frame routing through the Click modular router. Our architecture
also supports a remote vBTS mode of operation, where the slice POP runs remotely and tunnels
over the Internet to a local vBTS. More details on the switching used for the internal data path, as
well as on other aspects of the WiMAX basestation virtualisation and deployment, are given in
[54]. The virtualised BTS architecture and the modified ASN design are described in more detail
in appendices B.5.1 and B.5.2, respectively.

WiMAX Deployment and Experimentation

Baseline Performance To measure the coverage with the basestation and outdoor antenna,
we performed an experiment where a client is driven around the WINLAB tech center campus in
a car. Using a GPS receiver and a tracing tool, we measured the RSSI at a frequency of 1 Hz.
Results obtained from the experiment are shown in Figure 2.50.
An experiment was performed to study the baseline performance of the WiMAX basestation
transceiver at multiple locations. We consider three positions for the clients: 1) the control room
(0.01 miles), 2) Loc1 (0.06 miles), and 3) Loc2 (0.14 miles). Each of these locations is marked in
Figure 2.51. When the client is placed in the control room, it is very close to the basestation; the
observed CINR is 29 dB and the RSSI is -51 dB. Similarly, at locations Loc1 and Loc2, the CINR
values are 27 and 24 dB respectively, and the RSSI values are -67 and -72 dB respectively. To
evaluate the performance, we measure the UDP throughput at each of the locations for different
frame sizes and the MCS used to reach the clients.
Measurements obtained from the baseline experiment are shown in Figure 2.52. As seen in the
results, the UDP throughput performance is best when the client is in the control room, which is

Figure 2.50: Orbit WiMAX Deployment




[Map: control room, Loc 1 (CINR 27, RSSI -67 dB) and Loc 2 (CINR 24, RSSI -72 dB) marked
around the basestation.]

Figure 2.51: Orbit WiMAX Baseline Topology

closest to the transmitter. In all cases, we observe that the auto-rate scheme used at the basestation
matches the performance achieved by static rates at all three locations. This serves as a baseline
test to validate the performance of the auto-rate algorithm on the BTS.

(a) Control Room

(b) Location-1 (c) Location-2

Figure 2.52: Baseline Throughput Measurements.

Slice Isolation Experiment This experiment emulates mobility in a femtocell deployment
and is used to illustrate the need for slice scheduling (isolation). We consider two slices which
send traffic from vBTS1 and vBTS2 to their corresponding clients. The client for the flow from



Figure 2.53: Slice Isolation Experiment Throughput

vBTS1 (slice1) is stationary, and the client for the flow from vBTS2 (slice2) is mobile. The sta-
tionary client is located such that it has a CINR greater than 30, which allows the basestation to
send traffic comfortably at 64-QAM 5/6. An experimenter walks with the mobile client so that for
a portion of the path the link degrades significantly. Each slice is configured to saturate the link to
its client with UDP traffic.
The observed downlink throughput for both clients without any shaping is shown in Fig-
ure 2.53. We observe that as the mobile client reaches areas where the RSSI drops below a
certain threshold, the rate adaptation scheme at the basestation selects a more robust modulation
and coding scheme (MCS). However, in the process, the link to the mobile client ends up con-
suming much more radio resource at the basestation, which affects the performance of the sta-
tionary client. Thus we observe that while the BS scheduler is capable of providing QoS, it does
not ensure radio resource fairness across links. To alleviate this problem, we propose and imple-
ment the virtual network traffic shaping (VNTS) technique [55], which adaptively controls slice
throughput. Results from the experiment repeated with VNTS are also shown in Figure 2.53.
Even as the channel for the mobile client deteriorates, the VNTS mechanism appropriately limits
the basestation utilisation for the mobile client (slice), thereby providing fairness to the stationary
client (red line).
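The intent behind VNTS can be illustrated with a simplified airtime-based cap: a slice's permissible throughput shrinks with its current MCS rate, so its share of radio resource stays constant even when a client falls back to a robust MCS. This is an assumption-laden sketch, not the implementation from [55]:

```python
def vnts_rate_cap(mcs_rate_mbps, airtime_share):
    """Cap a slice's offered throughput so that it never consumes more
    than its fair airtime share, regardless of the current MCS.
    mcs_rate_mbps: link rate achievable at the current modulation/coding.
    airtime_share: fraction of frame time granted to this slice."""
    return mcs_rate_mbps * airtime_share

# Two slices with equal (50%) airtime shares: when the mobile client
# falls back to a robust MCS, its cap shrinks instead of it consuming
# extra frame time and starving the stationary client's slice.
print(vnts_rate_cap(20.0, 0.5))  # stationary client at a high MCS
print(vnts_rate_cap(4.0, 0.5))   # mobile client at a robust MCS
```

The key design point is that fairness is enforced in units of airtime, not bits per second.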

2.3.7 Conclusions
Initially, we analysed the performance of link virtualisation for individual VNets in wireless ac-
cess networks. An opportunistic scheduling scheme is considered for the isolation of different
VNets sharing a single base station. We showed that the dynamics of the service received by a
single VNet can be approximated by an m-state Markov process. Monte Carlo simulation results
demonstrated the accuracy of the proposed analytical model. The proposed model can be used to
implement an effective admission control mechanism for a single base station in order to provide
the required QoS for VNets and improve spectrum utilisation.
When the use of wireless resources in a given RAT rises, a degradation of the quality of service
is usually observed. The designer has to take into account the permitted degradation and the type
of additional VNOs and traffic patterns that a specific wireless resource can support. The



maximum degree of utilisation achievable is the one that allows the maximum number of traffic
connections and VNOs while complying with the quality limits. Regardless of the type of traffic
pattern and the number of connections per VNO, no other VNO using the same wireless resource
is affected. This condition is fundamental, because it grants isolation between VNOs.
Adaptation is a mechanism designed to share the time slots of the Round Robin scheduler in a
fairer way. Results show that the algorithm improves the mean delay experienced by all types of
traffic services for the same scenario configuration and traffic generation.
The CVRRM model aims to minimise the QoS degradation inherent to the radio interface by
globally evaluating the impact of a heterogeneous network environment on the virtualisation ap-
proach. The presented CVRRM model evaluates the proposed Radio Resource Control algorithm
against the VNet requirements. This model is based on a cluster with VNets composed of hetero-
geneous networks, assuming that all RATs cover the area considered in the scenario. Results
suggest that radio channel variability is a challenge for the deployment of virtual networks with
guaranteed requirements; e.g., the VNet Operator service level can rise from zero to its maximum
when moving from a Poor to a Good performance scenario. Hence, a fast reaction of the network
in reallocating resources to assure the proper deployment of Virtual Networks becomes a critical
point. The obtained physical link utilisation allows us to conclude that, in order to optimise
wireless link utilisation, balancing mechanisms in the physical links should also include the
evaluation of VLink utilisation, since it affects the physical link utilisation.
A particular case of future wireless virtualisation is LTE. It was initially expected that better
system performance could be achieved by dynamic spectrum sharing, and the simulation results
confirmed this expectation. Based on the instantaneous traffic load, each virtual operator can
request more resources (PRBs) from the other virtual operators when they still have spare re-
sources. In this way, the overall resource utilisation is enhanced, offering better performance to
both the network and the end-users. Although the simulation results are quite scenario-specific,
the basic findings are representative and show the advantages that can be achieved by applying
network virtualisation to the LTE system. In addition, in some cases, especially in rural areas
with low population density and traffic, dynamic spectrum sharing is a much better choice than
today's static spectrum allocation.
Finally, the WiMAX virtualisation results show that when a WiMAX BS's radio resources are
consumed by mobile users (in one virtual slice), they have an impact on static ones (in other
slices). This undesirable effect can be minimised by introducing the VNTS technique, which
adaptively controls slice throughput. This control is performed by limiting slice resources,
providing fairness to the other slices, namely the static ones.


3 Targeted Integrated Feasibility Tests
3.1 Inter-Provider VNet Testbed
The overall goals of this feasibility test are to integrate several feasibility tests and to show the prac-
tical feasibility of the VNet Architecture in a competitive inter-provider environment. Our testbed
is distributed over three locations: Berlin/Germany, London/UK, and Karlsruhe/Germany. Each
site acts as an InP and offers virtualisation services to the remaining parties.
Each of the participants (T-Labs, ULANC, UKA) also takes the role of a VNP and thereby enables
VNOs to request whole virtual networks from them. The required resources are then acquired by
the VNPs from the contracted InPs, restricted only by their mutual contractual relationships with
the InPs and the InPs' policies. Figure 3.1 sketches an exemplary business relationship between
the involved players. Whereas the UKA and ULANC VNP roles may only construct VNets from
their own and one other InP’s infrastructure, the T-Labs VNP, in this example, may request virtual
resources from any InP in this scenario.

Figure 3.1: Logical Testbed Overview

The feasibility tests that will be integrated here are:

• VNet Provisioning which subsumes Embedding Algorithms
• VNet Management via the Out-of-VNet / Console interface
• Virtual Link Setup



With the integration of these feasibility tests, we demonstrate that our VNet Framework, i.e., its
roles, interfaces, and processes, enables the dynamic and flexible provisioning and management
of virtual networks. For all these feasibility tests, we rely on a resource description language
which enables a detailed specification of VNets. The description language is used for requesting
VNets, optionally advertising physical resources (by InPs), and maintaining databases of physical
and virtual resources within InPs and VNPs, respectively.

(a) Provisioning (b) Console

Figure 3.2: VNet provisioning (a) and console architecture (b).

Our testbed environment contains the following kinds of nodes:

Substrate Nodes Physical nodes that might be employed to host virtual nodes. A substrate
node offers a management interface to the InP that can be accessed by InP management
nodes in order to, e.g., request the creation of virtual nodes on the specific substrate node or
to initiate the migration of a virtual resource. This interface can also be employed to acquire
monitoring information from the substrate node, e.g., to ease resource discovery.
InP Management Nodes InP management nodes receive VNP requests and, depending on the
contractual relationship and conforming to the InP's policies, process them, which might,
e.g., result in determining the embedding of a newly requested virtual node in the InP's
substrate. Besides, InP management nodes realise local decisions, e.g., by triggering the
instantiation of a virtual node on a substrate node as determined by the embedding process
and creating the corresponding virtual links. Another functionality of InP management
nodes is the provisioning of proxied console access to virtual resources for the VNO, as
depicted in Figure 3.2(b).
VNP Management Nodes VNP management nodes receive requests for VNets from VNOs and
determine how the VNO’s needs can be satisfied with the available contractual relationships
with InPs. Based on the result of this process, the VNP Management Node subsequently
contacts the chosen InPs for resource negotiation.
VNO Management Nodes A VNO management node provides the VNO with a homogeneous
view of the virtual network and allows it to adapt the VNet to current needs by, e.g.,



passing VNet descriptions to VNPs via the provisioning interface (cf. Figure 3.2(a)), as well
as to manage an existing VNet via the Console / Out-of-VNet access interface, as shown in
Figure 3.2(b).

3.1.1 Scenario 1: VNet Provisioning

VNet provisioning synthesizes existing virtualisation technologies coupled with sophisticated em-
bedding algorithms, allowing the provisioning of operational VNets at short timescales. This pro-
cedure involves several interactions between the VNP, the management nodes within participating
InPs and, of course, the substrate nodes. Upon the admission of a VNet request, a VNet with the
given specifications is provisioned on-the-fly on top of one or multiple InPs.
In principle, VNet provisioning includes: (i) resource discovery, (ii) VNet embedding, (iii) vir-
tual machine creation and configuration, and (iv) virtual link setup. Each InP may optionally ad-
vertise its available resources to the VNP, forming a service discovery framework with information
from all participating InPs. This allows the VNP to split each incoming VNet request among con-
tracted InPs. Subsequently, the VNet embedding within each InP takes place, which is responsible
for mapping its corresponding VNet part to the substrate network.
Upon VNet embedding, the InP management node signals individual requests to the assigned
substrate nodes. Each substrate node handles requests within its management domain and creates
the necessary virtual machines as guest domains.
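The provisioning steps (i)-(iv) can be sketched end to end with toy stand-ins for the InP and substrate interfaces. All class, method, and node names here are hypothetical illustrations, not the prototype's actual API, and virtual link setup is reduced to a placeholder:

```python
class InP:
    """Toy infrastructure provider used to illustrate the workflow."""
    def __init__(self, name, free_nodes):
        self.name, self.free_nodes = name, free_nodes

    def advertise(self):                      # (i) resource discovery
        return self.free_nodes

    def embed(self, vnodes):                  # (ii) VNet embedding
        return dict(zip(vnodes, self.free_nodes))

    def instantiate(self, mapping):           # (iii) VM creation
        return [f"vm:{v}@{s}" for v, s in mapping.items()]

def provision(vnodes, inps):
    """Provision a VNet following steps (i)-(iv); the request is naively
    'split' by picking the InP advertising the most free nodes."""
    inp = max(inps, key=lambda i: len(i.advertise()))
    mapping = inp.embed(vnodes)
    vms = inp.instantiate(mapping)
    links = ["tunnel-setup-pending"]          # (iv) virtual link setup
    return vms, links

vms, _ = provision(["a", "b"], [InP("inp1", ["n1", "n2", "n3"])])
print(vms)  # -> ['vm:a@n1', 'vm:b@n2']
```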

3.1.2 Scenario 2: VNet Management

The Scenario VNet Management assumes that a VNet has already been created as described in the
previous section. This VNet is then managed by the VNO via the Out-of-VNet Access / Console
access (see Figure 3.2(b)). The InPs may independently perform maintenance and optimisation of
their infrastructure.

3.1.3 Conclusion and Outlook

While the overall integration is not yet fully implemented, part of this prototype has already been
realised and published [56]. This prototype will be extended to integrate the feasibility tests men-
tioned above and to demonstrate the feasibility of their integrated operation.

3.2 VNet Embedding and Instantiation joint feasibility

The objective of this test is to integrate the VNet embedding algorithms (developed by GET-INT)
and the VNet instantiation (implemented by ULANC) in a common testbed to conduct a joint
feasibility test over an experimental facility (Heterogeneous Experimental Network – HEN) in a
realistic setting and at the appropriate scale.
The distributed embedding algorithms presented in subsection 2.1.2 were developed for large-
scale networks. The algorithms used in the VNet instantiation feasibility test deal with small-scale
tests, since the main objective was to validate the instantiation process. Combining VNet embed-
ding and instantiation into a joint feasibility test using the HEN platform enables validation of the



two frameworks at large scale. Since the HEN platform has been designed to use centralised
management nodes to ensure interactions between VNPs and physical infrastructure providers for
selection and binding, the planned joint feasibility test will be accomplished using centralised,
exact embedding algorithms. The latter will assign VNet requests to HEN, as evaluated in sub-
section 2.1.4, with the aim of making efficient use of HEN resources and accepting as many VNet
requests as possible.
To evaluate the case of multiple infrastructure providers, the HEN platform is split into multiple
logical clusters, each simulating an independent substrate network. The VNP relies on VNet Re-
quest Splitting algorithms (implemented by GET-INT) to efficiently split the VNet graph among
InPs, which are responsible for embedding their VNet sub-graph onto the corresponding HEN
cluster. Management nodes in each InP execute a centralised selection algorithm (also proposed
by GET-INT) to find a solution for their sub-graph. These sub-graphs are instantiated using the
VNet provisioning framework implemented by ULANC on the HEN platform.

3.2.1 Components
This section describes the hardware and software components of our testbed.

Hardware
The testbed is set up on the Heterogeneous Experimental Network (HEN), which includes more
than 110 computers connected together by a single non-blocking, constant-latency Gigabit Ethernet
switch. We mainly use Dell PowerEdge 2950 systems with two Intel quad-core CPUs, 8 GB of
DDR2 667 MHz memory, and 8 or 12 Gigabit ports.

Software
The integrated prototype is mainly implemented in Python and synthesizes existing node and link
virtualisation technologies to allow the provisioning of VNets on top of shared substrates. We used
Xen 3.2.1, Linux, and the Click modular router package [20] (version 1.6, but with patches
eliminating SMP-based locking issues) with a polling driver for packet forwarding. We relied on
Xen's paravirtualisation for hosting virtual machines, since it provides adequate levels of isolation
and high performance [21]. The VNet Operator, the VNet Provider, the InP management node and
the InP nodes interface via remote procedure calls, which are implemented using XML-RPC.
The substrate topologies are constructed off-line by configuring VLANs in the HEN switch.
This process is automated via a switch-daemon which receives VLAN requests and configures
the switch accordingly. For the inter-connection of the virtual nodes, we currently set up IP-
in-IP tunnels (other tunneling technologies may be used as well) using Click encapsulation and
decapsulation kernel modules, which are configured and installed on-the-fly. Each virtual node
creation/configuration request (within each InP) is handled by a separate thread, speeding up VNet
embedding and instantiation. Similarly, in the presence of multiple InPs, separate threads among
them allow VNet provisioning to proceed in parallel.
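The per-request threading described above can be sketched as follows, with a short sleep standing in for the remote (XML-RPC) creation request to a substrate node; names are illustrative, not the prototype's actual API:

```python
import threading
import time

def create_virtual_node(name, results, lock):
    """Stand-in for a per-node creation request to a substrate node."""
    time.sleep(0.01)            # emulate remote setup latency
    with lock:
        results.append(name)

def provision_parallel(node_names):
    """Handle each virtual node creation request in its own thread."""
    results, lock = [], threading.Lock()
    threads = [threading.Thread(target=create_virtual_node,
                                args=(n, results, lock))
               for n in node_names]
    for t in threads:
        t.start()
    for t in threads:           # wait for all creations to finish
        t.join()
    return results

print(sorted(provision_parallel(["v1", "v2", "v3"])))
```

Because the per-node work is dominated by remote I/O, threads overlap the setup latencies and shorten the total provisioning time roughly to that of the slowest single node.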
Both the VNet Request Splitting algorithm and the exact embedding algorithm are developed
in Java (JDK 1.4) with standard libraries. These two algorithms rely on the simplex algorithm
and branch-and-bound methods for solving linear programming models and providing exact and
optimal request splitting and embedding solutions. VNet embedding requires CPU load and link
bandwidth measurements, which are obtained using Linux's loadavg and iperf, respectively. For



the description of the resources, both for the substrate and the virtual networks, we rely on a rich
XML schema with separate specifications for nodes and links. Figure 3.3 provides an overview
of the integrated prototype, illustrating the basic components, which are in accordance with the
Network Virtualisation Architecture.

Figure 3.3: Integrated Prototype Overview
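A node/link resource description in the spirit of the XML schema mentioned above might be generated as follows; the element and attribute names are assumptions chosen for illustration, not the actual 4WARD schema:

```python
import xml.etree.ElementTree as ET

def build_vnet_request(nodes, links):
    """Build a toy VNet request with separate node and link sections.
    nodes: iterable of (name, cpu_share); links: (src, dst, bandwidth)."""
    req = ET.Element("vnet")
    nodes_el = ET.SubElement(req, "nodes")
    for name, cpu in nodes:
        ET.SubElement(nodes_el, "node", name=name, cpu=str(cpu))
    links_el = ET.SubElement(req, "links")
    for src, dst, bw in links:
        ET.SubElement(links_el, "link", src=src, dst=dst,
                      bandwidth=str(bw))
    return ET.tostring(req, encoding="unicode")

print(build_vnet_request([("v1", 0.5), ("v2", 0.25)],
                         [("v1", "v2", 100)]))
```

The same structure can serve all three uses named above: VNet requests, InP resource advertisements, and the resource databases kept by InPs and VNPs.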

3.3 VNet Performance and Interconnections

VNets are versatile networks created out of virtualised resources for different purposes. The con-
cepts of VNets have been extensively elaborated in this and previous deliverables. The purpose of
this section is to evaluate how VNets perform on commercially available current networking
infrastructure. The subsections individually look at the different tests that were performed and the
tests that are planned.
Section 3.3.1 details the initial evaluations, done and planned, to check the performance of
VNets by creating a number of simultaneous VNets and loading the VNets with data. Section 3.3.2
details the planned performance measurements when interconnecting the different VNet domains
based on identified scenarios.

3.3.1 VNet Performance

The advantages of VNets have been explained and discussed extensively in this and previous
deliverables. Though VNets are conceptually attractive, they must be evaluated in terms of how
they perform in physical networks. The following metrics are identified for evaluation in VNets.

• Performance of the network and the end-user applications

• Setup time performance

• Performance with QoS



A couple of scenarios are identified that focus on evaluating the above metrics. These scenarios
look at VNets that are built on commercially available commodity hardware. The scenarios are:

• Setup of multiple VNets that are equally capable in terms of the resources allocated to each

• Setup of multiple VNets with varying QoS guarantees for certain VNets

Components
The VNet management (VNM) test-bed is a suite of software modules developed to build VNets
on the fly. The environment consists of agents that reside in the different physical resources; they
control the resource to create and remove (manage) virtual resources. These agents are controlled
by a manager. The managers accept commands from infrastructure providers to manage the phys-
ical resources. The manager holds a resource repository that it updates continuously based on the
current status of the VNM environment. The repositories can be located in a distributed manner or
centrally.

Figure 3.4: VNet Management Environment

Figure 3.4 shows an example VNet environment with three physical resources. Each resource has
a VNet Agent running in it to control the resource, i.e., to create, remove, or modify virtual re-
sources associated with the specific resource. In this example, one of the physical resources also
holds the VNet Manager that controls all the VNet Agents and interacts with the repository. The
infrastructure providers interact with the VNet Manager to control the whole VNet environment.
Controlling the VNet environment through the Manager is done either interactively or from the
embedding component.
The VNet Repository holds information related to the current state of the VNet environment. It
holds information related to each resource and the links between them. These include physical as
well as virtual resources.



Hardware Item   Operating System and Software                   Networking Components
Server          64-bit machine, 8 GB RAM, Debian Linux based    1 VLAN-capable network card
                XENified kernel, Packet Generator server
Router          64-bit machine, 8 GB RAM, Debian Linux based    2 VLAN-capable network cards
                XENified kernel
Switch          Gigabit switch                                  VLAN-capable on all ports
Client(s)       64-bit machine, 8 GB RAM, Debian Linux based    WLAN network card
                XENified kernel, Packet Generator client

Table 3.1: Hardware Details

The VNet Manager interacts with a VNet embedding component to automatically create virtual
networks. This component includes embedding algorithms to efficiently assign VNet requests to
substrate resources. The aim is to achieve load balancing among the substrate nodes and links.
The embedding algorithm first assigns VNet nodes to the substrate nodes with the maximum
available resources. Next, the algorithm assigns the VNet links to substrate paths (relying on,
e.g., a shortest path algorithm).
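The two embedding stages just described, greedy node placement on the least-loaded substrate nodes followed by link mapping via shortest paths, can be sketched as follows. This is an illustrative simplification under assumed data structures, not the actual embedding component:

```python
from collections import deque

def embed_vnet(vnode_demands, substrate_cpu):
    """Greedy node assignment: each virtual node is placed on the
    substrate node with the most remaining CPU (load balancing)."""
    cpu = dict(substrate_cpu)
    mapping = {}
    for vnode, demand in vnode_demands.items():
        best = max(cpu, key=cpu.get)
        if cpu[best] < demand:
            raise RuntimeError(f"cannot embed {vnode}")
        mapping[vnode] = best
        cpu[best] -= demand
    return mapping

def shortest_path(adjacency, src, dst):
    """BFS shortest path for mapping a virtual link onto substrate links."""
    prev, seen, queue = {}, {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for neigh in adjacency[node]:
            if neigh not in seen:
                seen.add(neigh)
                prev[neigh] = node
                queue.append(neigh)
    return None                 # no substrate path available

nodes = embed_vnet({"v1": 0.5, "v2": 0.5}, {"s1": 1.0, "s2": 0.9})
print(nodes)                    # -> {'v1': 's1', 'v2': 's2'}
print(shortest_path({"s1": ["s3"], "s3": ["s1", "s2"], "s2": ["s3"]},
                    nodes["v1"], nodes["v2"]))  # -> ['s1', 's3', 's2']
```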
Hardware and Software

The following hardware setup is used to evaluate the performance of the different VNets based on
the scenarios. A set of commodity hardware is used as depicted in Figure 3.5.

Figure 3.5: Hardware Setup

The virtualisation is achieved using XEN and VLAN capabilities. A client is used to receive the
streams generated in the different VNets. Table 3.1 shows the hardware configurations.

Results

The test-bed setup has been tested at an initial level by creating a number of VNets that
stream data to clients. Figure 3.6 shows how the VNets are created.



Figure 3.6: VNet Creation

A total of 9 VNets are created in this experiment, and a UDP stream is generated by the server
in each VNet. The server in every VNet generates a 1 Mbit/s stream to the client. The links of
the VNets are Gigabit links. The performance is shown in Figure 3.7. Each VNet is brought up
at roughly 100-second intervals, but towards the end of the test this interval increases due to the
delays in bringing up virtual resources.

Figure 3.7: Performance with a 1 Mbits/sec offered user throughput (UDP)

The results show that as the number of VNets increases, each VNet experiences performance
degradation. The packet losses increase as the physical hardware attempts to distribute its re-
sources among the different VNets. The conclusion that can be drawn from these results is that
the performance of the VNets is directly related to the capabilities of the physical hardware



used. That means that more powerful hardware (e.g., in terms of the number of CPUs) used for
virtualisation will result in improved performance of the VNets.
As indicated above, the evaluations of the different scenarios are ongoing. A complete set of
results and the related evaluation of the performance of VNets will be presented in the next
deliverable.

3.3.2 Interconnection
One of the main problems of the current Internet is that it was designed as a flat infrastructure and,
therefore, domain interconnection has come as an add-on. When the current IP network virtualisa-
tion solutions were designed, this error was repeated. The result is that MPLS network intercon-
nection is not an easy task, and end-to-end signalling over MPLS networks is still in the making.
This cannot happen in the Future Internet, with virtualisation as a built-in feature: provider rela-
tions and interconnection have to be designed as a basic feature provided by the infrastructure,
and this feature should be available from the very beginning.
Therefore, Telefónica I+D (TID) is working on an Interdomain GUI to test different strategies
for resource publication and negotiation. An experimentation facility for domain/provider inter-
connection as a proof of concept for different solutions will be installed at the premises of TID.
The main objective of this proof of concept is to test a new Interdomain communication protocol specifically designed for Interdomain resource publication. For the sake of simplicity, it does not use the Resource Description Language (RDL) developed within WP3; the modular nature of the proof-of-concept GUI allows for a future integration of the RDL.

Proof of concept setup

The setup will consist of four PCs running the virtualisation software provided by the University of Bremen. The four PCs will be configured to run routing “slices”. Two domains will be established
(2+2 PCs) each controlled by one instance of the Interdomain Virtualisation GUI. Both instances
of the Interdomain GUI will communicate, enabling an exchange of available resources and nego-
tiation for resource allocation in the foreign domain.
The hardware for the testbed is composed of 4 standard commercial PC platforms running a
Debian based Linux distribution, the XEN hypervisor and the control module from the University
of Bremen. A control PC will be running two independent and intercommunicating instances of
the Interdomain GUI, which will show how resources are negotiated at the Interdomain level on
the peering. It will control internal as well as external resource allocation requests.
The Interdomain GUI is programmed using Java 1.5/1.6 with standard libraries. A compatible
Sun Java SDK (at least the JRE) is the only software requirement for the Interdomain GUI. The PCs
deployed in the testbed conform with the requirements of the University of Bremen virtualisation software.
Figure 3.8 shows the software architecture intended for the proof of concept. The main assumption of the architecture is the availability of modules in the Interdomain GUI which control the communication:
1. between the two domains involved
2. between the controlling GUI and the PCs in the testbed.
The main features which will be the subject of the proof of concept are:



Figure 3.8: Feasibility test setup in TID’s premises

1. the interconnection between Infrastructure Providers, mainly the treatment of requests coming from peers as opposed to requests generated by the provider's own infrastructure

2. the resource publication strategies between Infrastructure Providers (InPs)

The objective of this scenario is to demonstrate the steps in establishing a multidomain Virtual
Network (VNet).

Setup
Domains A and B publish the availability of enough resources to provide for two VNets, as shown in Figure 3.8. Domain A requests VNetA and domain B requests VNetB to be published.
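The publish/negotiate exchange between the two domains can be sketched as follows. The Domain class and its two-message protocol are hypothetical simplifications of the Interdomain GUI behaviour described here, not its actual implementation.

```python
# Toy sketch of interdomain resource publication and negotiation:
# each domain announces its free resources to the peer and rejects
# allocation requests it cannot satisfy. All names are hypothetical.

class Domain:
    def __init__(self, name, free_nodes):
        self.name = name
        self.free_nodes = free_nodes
        self.peers = {}                  # peer name -> last published count

    def publish_to(self, peer):
        # Resource publication: announce how many nodes are available.
        peer.peers[self.name] = self.free_nodes

    def request_from(self, peer, nodes_needed):
        # Negotiation: the peer accepts only if it has enough free nodes.
        if peer.free_nodes >= nodes_needed:
            peer.free_nodes -= nodes_needed
            return True
        return False

a, b = Domain("A", free_nodes=2), Domain("B", free_nodes=2)
a.publish_to(b); b.publish_to(a)
print(a.request_from(b, 1))   # accepted: B still has capacity
print(b.request_from(a, 3))   # rejected: A cannot host 3 extra nodes
```

The rejected request mirrors the second part of the proof of concept, where a multidomain VNet creation is aborted because the foreign domain's resources are exhausted.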
The second part of the proof of concept will demonstrate how a multidomain VNet creation process can be aborted if resources are not available. Domain A establishes some extra VNets which consume all the resources local to Domain A. Domain B then requests an additional VNet which needs resources from Domain A. In the setup phase, this request is rejected by Domain A.

Conclusion
This proof of concept will deliver a basic interdomain GUI for demonstration and test purposes.
This GUI is the base for further experiments in which the resource availability publication strate-
gies can be varied to show their impact on the Interprovider Virtualisation business. Once the
interdomain GUI is available, it will also be possible to check whether the RDL developed within
WP3 needs modifications to accommodate Infrastructure Providers’ needs.


4 Targeted Integrated Feasibility Tests
for Joint Tasks

4.1 Joint Testbed of WP2 and WP3

From the overall architecture point of view, WP3 develops the concepts of how to virtualise the physical resources belonging to the infrastructure provider(s) and compose them into proper “empty” virtual networks according to the requirements of the virtual network operator (such as QoS, topology and type of access), with the virtual network provider acting as a broker to coordinate the deployment of VNets. The term “empty” indicates that the initial virtual networks do not necessarily have protocol stacks running inside by default, because this is VNet operator-specific; the co-existing VNets could also have totally different architectures in terms of protocol stacks, naming and addressing schemes etc. WP2 then has the target to develop an architecture framework from which different network architectures can be built, and the output, i.e. the network architectures, will be put into the empty VNets.
A joint effort has been made to harmonise the main features of Network Virtualisation (WP3) with those of the Architectural Framework (WP2). For that specific purpose, it is intended to integrate a Node Architecture implementation, which is one of the proposals from WP2, into a
VNet and Node-Arch based testbed. The Node Architecture defines the network architecture from a node point of view, where one node can contain multiple Netlets running in parallel. Netlets can be considered as containers that provide a certain networking service built from atomic units called Function Blocks. They consist of the functionalities that are needed in order to provide end-to-end services; the collection of these functionalities leads to the definition of the protocols provided by a Netlet. The Netlets thus contain protocols and therefore provide the medium needed to carry information. A Function Block is the smallest unit of specific function, e.g. a CRC check. Also, the definition of interfaces to the VNets is important: the Network Access (NA) is the interface to the actual network.
More information about the Node Architecture can be found in [57].
Interoperability is one of the most important aspects of the new architecture framework of the Future Internet, as it enables the interconnection and inter-operation among networks. The Folding Point concept is developed by WP3 to interconnect VNets. A Folding Point is a facility standing between two virtual networks, realised as links, nodes or a combination of them. The task of WP3 is to find a proper location for the Folding Point and to instantiate an “empty” one as the container for the corresponding interconnection functionalities. In the Node Architecture, the interoperability between different networks is handled by the Interop-Netlet, which is the composition of the required functionalities, such as address mapping, protocol translation, peering, DTN scheduling etc. In this sense, the Interop-Netlet can be placed in the Folding Point under the Node Architecture concept.



Figure 4.1: Logical view of the Node Architecture in a virtual node

4.1.1 Prototyping
In order to test the combination of Network Virtualisation and the Node Architecture, the structure illustrated in Figure 4.1 will be realised. On one physical node, multiple virtual nodes run side by side, each of which might belong to a different virtual network. In one of the virtual nodes, the Node Architecture can be loaded, where the Network Access (NA) interface of the Node Architecture will face the virtualised underlying network rather than the physical one. A special realisation of the Node Architecture in a virtual node will be the Interop-Netlet in the Folding Point; the difference is that there will be two network interfaces, i.e., it is router-like. Joint prototyping therefore includes the following characteristics:
• WP2 takes care of anything inside a VNet (architecture, protocol format, multiplexing etc.)

• WP3 sets up a virtual connectivity to an existing VNet and takes care of multiplexing multi-
ple VNets and substrate traffic

• The interface between WP2 and WP3 is the Network Access (NA)

• The Node Architecture will run in the virtualised environment with carefully designed Net-
work Access interfaces for the virtual resources

• Running applications, e.g., video streaming from and to two Node-Arch-capable nodes via the VNet
• Interconnection of two VNets running Node-Arch via Folding Point plus Interop-Netlet
In order to achieve these objectives, the scenario of the testbed is an extension of Figure 3.5 and Figure 3.6, and the hardware setup is the same as in Table 3.1. The difference here is that the Node Architecture should be loaded into the virtual nodes, in particular the server node and the client node.
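A minimal sketch of the address-mapping functionality an Interop-Netlet might run inside a Folding Point is given below; the mapping table, the address formats and the packet representation are invented purely for illustration and do not reflect an actual Node Architecture API.

```python
# Hypothetical Interop-Netlet address mapping at a Folding Point:
# packets crossing from VNet A into VNet B get their addresses
# translated between the two (entirely fictitious) naming schemes.

ADDRESS_MAP = {            # VNet A address -> VNet B address (assumed)
    "a.client": "10.2.0.1",
    "a.server": "10.2.0.2",
}

def interop_forward(packet):
    """Translate src/dst addresses when crossing the Folding Point;
    unknown addresses pass through unchanged."""
    return {
        "src": ADDRESS_MAP.get(packet["src"], packet["src"]),
        "dst": ADDRESS_MAP.get(packet["dst"], packet["dst"]),
        "payload": packet["payload"],
    }

pkt = {"src": "a.client", "dst": "a.server", "payload": b"hello"}
print(interop_forward(pkt)["dst"])   # 10.2.0.2
```

In the real design, such a mapping would be one Function Block among several (protocol translation, peering, DTN scheduling) composed into the Interop-Netlet.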



In order to have a user-friendly interface, the GUI from the WP3 demo will also be extended to illustrate the attachment of end users to the virtual network and the presence of the Node Architecture in the virtual nodes. First successful demonstrations of the joint testbed have been carried out.

4.2 Decentralised Self-Organising Management of Virtual Networks

For the high-level model of self-organisation for Virtual Networks, see Deliverable D3.1.1; here we describe the design of the algorithm. The self-organising model is based on a parallel and distributed algorithm that uses local information during decision-making. This algorithm is executed by the self-organising control loop inside the virtual manager module of each substrate node. The local information is retrieved from the measurement points by the monitoring control loop. Afterwards, the self-organising and the monitoring control loops exchange the measured information in order to verify whether a re-organisation of virtual elements is required. Such a re-organisation is triggered by the detection of an overloaded substrate link, or by an overload of the storage and CPU resources on that node. Note, however, that the storage and CPU loads are less important for deciding which action to take; they act mainly as a constraint (we cannot move a virtual node to another substrate node when that one is overloaded).

Figure 4.2: Self-organising algorithm

Traffic can be identified from two perspectives: that of the virtual pipe (traffic passing through the node without using CPU and storage capacity), and that of the virtual node that is the source generating the traffic. Analysis of traffic from the virtual pipe perspective is called the receiving candidate perspective, while analysis considering virtual nodes is called the moving candidate perspective. Since a substrate node may host both virtual nodes and virtual pipes, it is necessary to execute both perspectives inside the same control loop. In this way, the cut-through traffic associated either with virtual nodes or with virtual pipes can be properly identified and self-organising actions activated. An example of the high-level execution of the self-organising algorithm is presented in Figure 4.2. This figure illustrates the stages of the self-organising algorithm from the perspective of receiving and moving candidates. The self-organising control loops depicted in Figure 4.2 are part of two substrate nodes.
Five stages characterise the self-organising algorithm. In the first stage, the capacities of the substrate links associated with a substrate node are locally analysed from both perspectives of the self-organising algorithm. The analysis starts with the identification of overloaded substrate links. Then, the traffic associated with each of these overloaded links is investigated in the light of the receiving and moving candidate perspectives. Heuristics are used to identify the cut-through traffic associated with each perspective. The output of the first stage is a list of virtual nodes that should be received or moved by the substrate node executing the self-organising algorithm.
In the second stage, if the list of receiving and moving candidates is not empty, the receiving candidate perspective adopts an active behaviour while the other perspective adopts a passive behaviour. The receiving candidate perspective sends a message to its substrate neighbour requesting to receive a virtual node that it assumes is deployed on that neighbour. Meanwhile, the moving candidate perspective waits for a message from a substrate neighbour requesting to receive a virtual node belonging to its moving candidate list.
The third stage is characterised by the decision to move a virtual node. Whenever the substrate node executing the moving candidate perspective receives the request message, it verifies whether there is a match between the virtual node requested and its internal list of virtual nodes to be moved. If there is a successful match, the moving candidate perspective calculates the costs of re-organising the virtual elements. If this re-organisation has a favourable cost-benefit relationship, the request to move the virtual node is accepted.
The fourth stage is the announcement of the decision to move a virtual node and the reservation of resources to host this virtual element at the substrate node whose request was accepted. Finally, in the fifth stage, the virtual node is moved; during the moving process, all packets associated with the moving virtual node are buffered, to be replayed after the move completes.
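The five stages can be condensed into the following sketch. All data structures (Link, SubstrateNode) and the cost model are hypothetical stand-ins for the heuristics described above; message exchange is modelled as a direct function call.

```python
# Condensed sketch of the five-stage self-organising loop:
# detect overloaded links, request a hand-over, match and cost-check,
# then migrate the virtual node. All types are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Link:
    load: float
    capacity: float
    cut_through: list = field(default_factory=list)  # virtual nodes behind this traffic

@dataclass
class SubstrateNode:
    name: str
    links: list = field(default_factory=list)
    hosted: list = field(default_factory=list)       # virtual nodes hosted here
    cpu_free: int = 1

    def move_cost(self, vnode):
        # Hypothetical cost model: moving is cheap when CPU headroom exists.
        return 0.5 if self.cpu_free >= 1 else 2.0

def self_organise(active, passive, cost_threshold=1.0):
    # Stage 1: identify overloaded links and the virtual nodes behind their traffic.
    receiving = [v for l in active.links if l.load > l.capacity for v in l.cut_through]
    for wanted in receiving:
        # Stage 2: active side requests the hand-over (modelled as a direct call).
        # Stage 3: passive side matches the request and checks the cost.
        if wanted in passive.hosted and passive.move_cost(wanted) <= cost_threshold:
            # Stage 4: announce acceptance / reserve; Stage 5: migrate
            # (in the real design, packets are buffered and replayed).
            passive.hosted.remove(wanted)
            active.hosted.append(wanted)
            return wanted
    return None

a = SubstrateNode("A", links=[Link(load=1.2, capacity=1.0, cut_through=["v1"])])
b = SubstrateNode("B", hosted=["v1"])
print(self_organise(a, b))   # v1 migrates from B to A
```

Note that only local link-load information drives the decision, matching the distributed character of the algorithm; the CPU load appears solely as a constraint in the cost check.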

4.2.1 Supporting Dynamic VNet Provisioning with Situation Awareness

The VNet and Infrastructure Providers construct a requested virtual network [45]. When a virtual network request is expressed (by the VNP), an initial provisioning process is used to select resources to compose the requested virtual network. This selection can be based on static or slowly changing information about the network situation (of the substrate), as it is seen by the infrastructure provider. Once a virtual network is in operation, it and all its resources must be maintained and managed; dynamic provisioning (in the sense of adaptation, dynamic configuration, resource allocation and management) has to be ensured. This includes mechanisms in two areas:
• Sustained availability of offered resources by the infrastructure provider internally (the VNP
uses requested resources transparently) and

• Information about actual QoS, major changes or failures from the infrastructure provider,
even with indication about upcoming problems (e.g., by alarm thresholds and triggers).
These operations are part of the dynamic VNet provisioning, which includes embedding (or
mapping - the efficient selection of substrate resources), resource configuration and control.



An important issue for delivering monitoring information about lower-level resources is the level of abstraction of this information. When requesting resources from an infrastructure provider, a virtual network provider must be able to choose the level of detail (e.g., QoS) in the request. The less specific the request, the more options the infrastructure provider has to map the request to physical infrastructure, and the fewer monitoring resources are needed to supervise these resources. It is also a security-related issue: infrastructure providers might not want to disclose certain details about their network resources to external parties. In any case, the description of resources might lead to a monitoring process with the same level of abstraction. The INM (In-Network Management) uses two
approaches for management information storage and access: The situation awareness framework
and the network of information (NetInf) [58]. The approaches reside on different levels: NetInf
is a middleware, while the situation awareness framework is placed below on the level of network
functions. Both concepts can interwork by aggregation of information towards more abstract and
long-lived information. Via the situation awareness framework, information such as physical and virtual resource descriptions can be offered by network nodes or other entities that want to share it with others. To be able to exchange information via the framework, a key concept of this approach is
the usage of directories (placed in domains based on the INM), where entities register themselves
with the type of information they want to offer and with the location where this information can
be received from (acting as provider). Upon a request of a node that wants to receive information
about a certain resource (acting as consumer) the directories then act as a lookup service providing
the location where this information can be retrieved. Afterwards the nodes (provider and con-
sumer) can directly communicate with each other and exchange the resource information either in
a synchronous or asynchronous way.
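The directory-based provider/consumer interaction described above can be sketched as follows. The Directory class, its method names and the information-type strings are hypothetical; the actual framework's API is not specified in this deliverable.

```python
# Minimal sketch of the situation awareness directory: providers register
# the type of information they offer plus the location it can be fetched
# from; consumers resolve a type to a location and then talk to the
# provider directly. All identifiers are invented for illustration.

class Directory:
    def __init__(self):
        self._entries = {}                       # info type -> provider locations

    def register(self, info_type, location):     # provider side
        self._entries.setdefault(info_type, []).append(location)

    def lookup(self, info_type):                 # consumer side (lookup service)
        return self._entries.get(info_type, [])

directory = Directory()
directory.register("virtual.cpu.load", "node-17:9090")
print(directory.lookup("virtual.cpu.load"))      # ['node-17:9090']
```

After the lookup, provider and consumer communicate directly (synchronously or asynchronously), so the directory stays out of the data path.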
The situation awareness framework is based conceptually on the paradigm of aggregating infor-
mation. The aggregation process, from directly monitored data towards a description of a situation
of an entity, hides unwanted details about underlying structures. This approach supports a final
decision process, because the pre-processing of information is already part of the decision (or
influences it). This concept also supports handling of information at the required level of abstraction in interactions between VNet and infrastructure provider. For debuggability, responsibility and accountability, the architecture of a substrate node [45] needs to be extended by a configurable monitoring probe (see below). The probe is located in the multiplexing function block, which is controlled by the substrate node control. For security reasons, only the infrastructure provider is permitted to access this component and is allowed to set up virtual nodes and virtual links. By considering the addressing (based on a VNet-ID) and the mapping to packet flows via the multiplexing function block, this allows access to packet flows (also from remote). Monitoring functionalities may include packet counts or statistical information based on packet sampling, possibly after defining and applying additional packet filters.
As described above, the situation awareness framework is well-suited for dynamic operation
with short-term information (e.g., monitoring data) and initial aggregation of this information.
It can be used inside an infrastructure provider and towards the VNet Provider. For the latter
interaction the mapping of the situation awareness framework and existing standards is discussed
here. The situation awareness framework is based on IPFIX [59] for the transport of context
and aggregated/situation information. The IPFIX protocol has been developed for the purpose of
exporting IP packet flow information from IP devices such as routers or measurement stations to
mediation, accounting, and network management systems. This export includes attributes derived
from the IP packet headers and attributes known only to the exporter (e.g., ingress and egress ports,
network prefix). Packet sampling (PSAMP) [60] extends the information model of IPFIX with the ability to report per-packet information, including parts of the payload. Furthermore, it defines packet selection methods such as filtering and sampling. The configuration of monitoring probes
is related to two aspects: The transport and the description of configuration information. For
transport and management of IPFIX/PSAMP [61] configurations, the NETCONF protocol can be
used [62]. The NETCONF protocol [63] provides mechanisms to install, manipulate, and delete the configuration of network devices. All operations are realised as simple Remote Procedure Calls (RPCs). The configuration is expected to be based on XML encoding. Recent work aims to use YANG [64] as the data modelling language to write configurations for the NETCONF protocol.
YANG was developed as a human readable and easy to learn description of network component
configurations. For monitoring probes, the configuration needs to include expressions to address
interfaces, to filter packet flows, to describe monitoring and aggregation processes, to identify the
destination of monitored data (e.g. IPFIX collector) and more. The resource description plays an
essential role in the interaction between infrastructure and VNet provider. While static resource
descriptions can be directly retrieved from INM (via the situation awareness framework or NetInf),
the supervision of operational resources by monitoring probes needs an additional processing step.
The description of requested resources needs to be translated into appropriate configurations for
monitoring probes. This step has to be done after candidate discovery and matching, because
the monitoring process is closely related to the (finally selected) resource from the infrastructure.
This relation also allows running the monitoring process at the appropriate level of abstraction,
for efficiency and security reasons. Nevertheless, the availability of monitoring resources can be
an additional selection criterion for resources. Finally, the monitored resources will be available
as information within the situation awareness framework. Note that NETCONF supports multiple
configuration datastores, as they are used during the VNet creation (instantiation) and operational phases.
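As a rough illustration of such a probe configuration, the following builds a NETCONF `<edit-config>` RPC targeting the candidate datastore. The NETCONF framing uses the standard base namespace; the probe-specific elements (monitoring-probe, flow-filter, collector) are hypothetical placeholders for a YANG-modelled configuration, not an actual 4WARD schema.

```python
# Sketch of a NETCONF <edit-config> RPC carrying a monitoring-probe
# configuration. The framing (rpc/edit-config/target) follows the
# NETCONF base namespace; all probe element names are assumptions.
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"
rpc = ET.Element(f"{{{NC}}}rpc", attrib={"message-id": "101"})
edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
target = ET.SubElement(edit, f"{{{NC}}}target")
ET.SubElement(target, f"{{{NC}}}candidate")   # edit the candidate datastore first
config = ET.SubElement(edit, f"{{{NC}}}config")
probe = ET.SubElement(config, "monitoring-probe")      # hypothetical element
ET.SubElement(probe, "interface").text = "veth0"       # probe attachment point
ET.SubElement(probe, "flow-filter").text = "vnet-id 42"
ET.SubElement(probe, "collector").text = "ipfix://10.0.0.5:4739"
print(ET.tostring(rpc, encoding="unicode"))
```

Editing the candidate datastore and committing later matches NETCONF's multiple-datastore model mentioned above, which maps naturally onto the VNet instantiation and operational phases.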

4.3 Joint prototyping of WP3, WP5, and WP6

Virtualisation (VNet), networking of information (NetInf) and forwarding and multiplexing (For-
MUX) paradigms developed in 4WARD can be combined to offer additional flexibility and elas-
ticity in the overall architecture of future networks.
The most obvious and logical approach consists of using virtualisation and virtual network creation and instantiation as underlying resource services for NetInf and ForMUX. NetInf and ForMUX would act in this case as clients or users of virtualisation and virtual network provisioning, control and management. NetInf and ForMUX would request the provisioning of virtual resources, either empty containers with bootstrapping capabilities or containers already embedding a baseline architecture. VNet provisioning would search, find and allocate resources through VNet Providers (or brokers). The brokers would rely on infrastructure providers, who select appropriate or matching atomic (elementary) physical resources to compose the virtual resources requested by the providers on behalf of the users, in this case NetInf and ForMUX.
NetInf and ForMUX would ask for empty or preconfigured containers to build the underlying virtual networks where they would deploy their node architectures: the NetInf Node for NetInf and the Node Compartment (Node CT) for ForMUX. These two clients would then deploy their entities, processes, protocols, stacks and architectures on the reserved execution environments (or virtual resources). VNet would provide the tools and the control and signalling mechanisms to enable deployment and instantiation of the dedicated slices (or virtual networks), followed by configuration and execution of their deployed functions and processes. Interfaces, APIs and handles will be provided
to access, instantiate and run the slices and the architectures deployed on these slices.
Just like NetInf and ForMUX act as clients of virtual networks provisioning and management,
VNet can be the client of NetInf and ForMUX. VNet would benefit from services offered by
NetInf and ForMUX. NetInf could handle virtual and physical resources as objects and help in
searching and finding resources and assist to some extent in connecting objects by pointing or
referring to locators of these resources. ForMUX could set up connectivity between the resources
via the establishment of GPs. As ForMUX can be viewed also as a link virtualisation with inherent
link diversity (that could even embed network coding capabilities), ForMUX can provide link
virtualisation on demand and set up dedicated GPs for VNet as well as dedicated NetInf GPs as a
matter of fact. A more elaborate combination is to achieve joint virtualisation of physical resources
using VNet and ForMUX principles within the same process.

4.3.1 Prototyping
In order to test the combination of network virtualisation with networking of information and
ForMUX, a number of physical resources (personal computers and servers with embedded XEN
virtualisation technologies) with preconfigured slices will be made available. The software images
and virtual machines from NetInf and ForMUX can be deployed dynamically on the selected and
reserved resources (or dedicated slices from physical infrastructures) to set up the desired NetInf
or ForMUX nodes (NetInf Node stack and Node CTs respectively).
Joint prototyping will include consequently the following characteristics:

• WP5 (ForMUX) and WP6 (NetInf) deploy, handle and manage all components deployed in the VNet (architecture, protocols, formats, multiplexing, routing, etc.). They prepare their virtual machines and protocol stacks in the form of software images to be deployed on dedicated slices (set up on demand by VNet).

• WP3 ensures provisioning of the required virtual resources and sets up the virtual network;
virtual nodes and links resulting from the combination or composition of preconfigured phys-
ical resources serving as elementary building blocks;

• WP3 or VNet can offer virtual resource migration within the physical infrastructures and
peering services between VNets. VNet would also ensure maintenance of associated storage
and caching, of connectivity and of topology during dynamic changes induced by mobility
or required by optimisation, adaptation and management;

• The Node Architectures (NetInf Node and Node CT) will run in the virtualised environment
with carefully designed network access interfaces for the virtual resources;

• Deploying and running applications on NetInf and ForMUX capable nodes

• Interconnection of two running virtual networks using ForMUX principles; especially using
GPs to set up intra VNet and inter VNets networking. A folding point may also be envisaged
to navigate or map between name spaces;

• Evaluate the performance and optimize the overall joint WP3/WP5/WP6 designs.


5 Summary and Outlook
4WARD work package 3 is currently in the second major phase of the project, focussing on the
evaluation of concepts that have been developed and documented during the first phase. This de-
liverable provided an intermediate report on these activities. Section 2 described individual evalu-
ations of aspects and components of the VNet architecture, and provided a number of initial results
from these evaluations as far as they are available at this point. Additionally, integrated feasibility
tests that are currently being developed and implemented have been described in Section 3, along
with planned joint prototyping and evaluation with other 4WARD work packages (Section 4).
The following section provides a brief summary of the above mentioned initial results. It is
followed by preliminary conclusions and an outlook on upcoming activities.

5.1 Summary of Initial Evaluation Results

5.1.1 Provisioning, Management and Control
The VNet framework comprises a number of algorithms, protocols and interfaces to facilitate the
provisioning and management of end-to-end virtual networks on demand.
A key element of the provisioning process is the problem of embedding virtual networks in a
physical substrate, the main challenge in the 4WARD context being scalability. Distributed em-
bedding algorithms for virtual networks were implemented and tested at scale using the GRID5000
experimental platform in France. The algorithms were found to perform favourably in comparison
to known centralised approaches. In particular, they are able to map a VNet with shorter delays
than a centralised approach (due to additional time needed for information gathering in the latter
case). The tests were performed both for initial embedding of new VNets, and adaptive embedding
of existing VNets, e.g. in response to failing network resources. In the latter case, it was found
that the distributed algorithm can improve the delay by up to a factor of ten when the number of
substrate nodes is large.
Promising results were also achieved in the area of mobility-aware embedding, i.e. the map-
ping of virtual networks in the presence of mobile resources in the physical substrate. The main
challenge here is to maintain suitable mappings and thus a coherent virtual network when mobile
resources are moving. Simulations of the algorithms demonstrated the benefits of path splitting and
migration techniques for mobile substrates, and showed that high ratios of repairing and remapping
can be achieved without suffering unacceptable delays.
An implementation of major components of the VNet provisioning framework on the HEN
testbed at the University of Lancaster helped to identify the specific technological ingredients and
how they have to be combined to provision and operate VNets, and showed that the primary com-
ponents of the proposed architecture are technically feasible. The implementation followed the
VNet provider model, including implementations of VNet Operators, VNet Providers, and Infrastructure Providers. The experiments showed that VNet provisioning using the model developed in the project works rapidly and scales linearly with increasing VNet sizes for a given substrate.

Predominantly for use by an Infrastructure Provider, a Virtual Network Manager has been partly
implemented. Functionalities of the prototype currently comprise the discovery of substrate re-
sources and topology, creation and deletion of pre-specified VNets in the substrate, and the enu-
meration of instantiated VNets and substrate resources. Work is ongoing to enhance the capabil-
ities and implement new features, specifically admission control and the integration of automatic
virtual resource mapping.
A virtual link setup protocol based on the NSIS protocol has been developed. The purpose of this
protocol is to interconnect VNets across Infrastructure Providers while taking QoS requirements
into account. The approach chosen is to extend the NSIS QoS NSLP resource reservation sig-
nalling protocol by adding a Virtual Link Setup Protocol object. This extension will enable the use
of the QoS resource reservation protocol in the VNet context by “piggybacking” the information
required by the virtual link endpoints to connect to the proper substrate links.
Beyond the creation of the interconnection links, inter-provider relations in a commercial envi-
ronment create a number of technical challenges. 4WARD must enable the implementation and
enforcement of policies that control the exchange of information between providers. In particular,
commercial operators will typically want to limit visibility into their infrastructure and resources
for other providers. To experiment with these constraints, an inter-domain interface including a re-
source filter has been implemented and will be used to further identify the amount of information
that needs to be exposed while still allowing the creation of cross-provider VNets.
Finally, a novel approach for testing and debugging networks that leverages the capabilities of
network virtualisation has been investigated experimentally. Shadow VNets enable operators to
reconfigure and upgrade their networks in a safe way before putting them into production use, while
exposing the new configuration to real user behavior rather than just artificially generated test
traffic. The experience with the prototype showed the feasibility of the approach and indicates that
network virtualisation can be a powerful tool for network debugging.

5.1.2 Virtualisation of Resources

The second major technical area of WP3 is the efficient virtualisation of resources. The project
focused specifically on virtual routers and techniques for the virtualisation of wireless systems.
Various design alternatives for programmable virtual routers based on commodity hardware
were explored and comprehensive performance tests carried out. The experiments showed that the
performance varies greatly with different designs. Our conclusion is that currently the best perfor-
mance is achieved by virtualising forwarding engines within a single domain of the virtualisation
platform. This yields aggregate performance close to that realised in comparable routers without
virtualisation. However this may change in the future as I/O performance in guest domains of
the virtualisation platforms is being optimised. Hardware with multi-queueing support can greatly
enhance flexibility and performance.
Further optimisations can be achieved by advanced allocation of CPU cycles between the driver
and guest domains of the virtual router platform. Additionally, classification and switching of
packets according to their priority within the driver domain was tested. Experiments using the XEN
virtualisation platform showed that the combined approach can remove performance bottlenecks
and improve throughput and delay.
Wireless virtualisation is a comparatively new field. The project studied and evaluated general



concepts of wireless virtualisation. Additionally, two existing wireless systems, LTE and WiMAX,
were used in case studies to evaluate a number of specific wireless virtualisation aspects using
simulations and experiments.
The performance of link virtualisation for wireless access networks was evaluated. An oppor-
tunistic scheduling scheme was proposed, and an analytical model of the received service devel-
oped. The accuracy of the model was verified using simulations. The proposed model can be used
to implement an effective admission control mechanism for a single base station in order to provide
the required QoS for VNets and improve spectrum utilisation.
The CVRRM scheme for radio resource management, as described in deliverable D-3.1.1, has
been evaluated analytically. Results suggest that the radio channel variability is a significant chal-
lenge for the deployment of virtual networks with guaranteed requirements. It became clear that
the capability to rapidly reallocate resources is critical to maintain coherent virtual networks under
such conditions. The physical link utilisation observed using the analytical model suggests that the
utilisation of the virtual links should be taken into account when designing balancing mechanisms
on the physical level to optimise physical link utilisation.
As a case study with an existing wireless system, an OPNET simulation model for LTE was
developed to investigate the effects of virtualising the mobile system infrastructure, specifically
the LTE eNodeB. The results showed that system performance and spectrum utilisation can be
improved by using virtualisation to enable dynamic spectrum sharing between operators based on
current demand (as opposed to static spectrum allocations).
In a second case study, an actual WiMAX basestation connected to the ORBIT testbed at Rut-
gers WINLAB was used to study virtualisation and slice isolation techniques experimentally. By
modifying the basestation controller, the provisioning of virtual machines (with layer-2 connectiv-
ity to the basestation) acting as virtual basestations was facilitated. Experiments with this setup
showed that the existing QoS scheduler of the system cannot ensure fairness across links. For example, a
mobile client moving into an area with degraded connectivity will unduly affect the performance
seen by stationary clients connected to the same physical basestation, thus breaking the isolation
between the virtual basestations. To alleviate this problem, a traffic shaping technique has been
implemented that is able to provide fairness to the stationary clients by adaptively controlling the
offered load to each of the virtual basestations.

5.2 Preliminary Conclusions

In the area of the VNet provisioning and management framework, the essential algorithms and
functions have been evaluated individually. The results so far suggest that the provisioning of
virtual networks at large scale is in principle feasible using the developed methods and provider
model. However, a final assessment will only be possible once integrated feasibility tests have been
performed that combine the components. In particular, the performance and degree of automation
that can be achieved when provisioning end-to-end VNets across several providers, taking the lim-
ited trust and information exchange between providers in a competitive environment into account,
is still an open question.
As far as the virtualisation of individual resources is concerned, the conclusiveness of our results
varies greatly between fixed and wireless resource types. The existing body of work and our
exploration of design options for programmable virtual routers based on commodity hardware have
provided a solid understanding of the performance, system issues, and limitations of such components.



Virtualisation of wired links is also well understood and in wide use today. However, virtualisation
of wireless resources is much less mature, and many aspects are not yet well understood. These
include, among others, the questions of how to provide isolation between slices in a wireless system,
and how to design a provisioning and resource management system that is able to deal with
the highly dynamic properties of many wireless links in large-scale virtual networks. While our
analytical and experimental results provide some hints, major technical challenges remain.

5.3 Outlook
The ongoing work until the end of the project will focus on “putting the pieces together” by means
of the targeted integrated feasibility tests described in Section 3. In particular, these are expected
to provide a better understanding of the inter-provider issues and specific performance results
in a more realistic setting. Additionally, we intend to test the combined approaches against the
integration scenario that was briefly outlined in Appendix B of deliverable D-3.1.1 to help assess
the coherence of the architecture.
To evaluate the functioning of our concepts within the overall 4WARD approach, integrated
evaluations in cooperation with other technical work packages will be carried out. Joint proto-
types are currently being implemented together with WP2 and WP5/6, and the use of the situation
awareness framework developed by WP4 in the VNet provisioning framework will be analysed by
the joint task TC34.
The results of these activities will be documented in the final deliverable D-3.2.1.




A Provisioning, Management and
A.1 Mobility-aware Embedding
A.1.1 Simulation Model of centralised framework
The centralised version of the Mobility-aware Embedding Framework has been developed in the C
language; Figure A.1 shows the overall mechanism of the software architecture.

Figure A.1: Block diagram of the centralised embedding procedure

The simulator operation is divided into time windows, within which the different algorithms are
applied in the following sequence:
• Previously mapped VNet requests that have already been completed are released.
• The mobility management functions are run: the location update procedure and the repair of
VNet requests affected by lost links or nodes (dynamicity).



• The VNet requests arriving inside that time window are served in decreasing order of the
amount of bandwidth (BW) they require. Node Mapping and Link Mapping run sequentially
per VNet request. VNet requests that do not succeed are stored in a VNet request queue and
retried after a certain interval (with incrementally increasing periods). If a VNet request still
cannot be mapped after 3 attempts, it is definitively discarded (rejected).

• If permitted by the user configuration, migration is applied periodically, every 10 completed
and released VNet requests; nodes and links are migrated only if migration reduces the overall
cost of the request. Otherwise the procedure finishes.
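The bandwidth-ordered service and bounded retry policy described above can be sketched as follows. This is a minimal Python illustration (the actual simulator is written in C), and the names `serve_window` and `try_map` are ours, not the simulator's.

```python
MAX_ATTEMPTS = 3  # a request is definitively rejected after 3 failed mappings

def serve_window(arrivals, retry_queue, try_map):
    """Serve the VNet requests of one time window.

    arrivals, retry_queue: lists of dicts with 'bw' and 'attempts' keys.
    try_map: callable(request) -> bool performing node + link mapping.
    Returns (mapped, new_retry_queue, rejected).
    """
    mapped, retries, rejected = [], [], []
    # Requests are served in decreasing order of requested bandwidth (BW).
    for vnr in sorted(arrivals + retry_queue, key=lambda v: v["bw"], reverse=True):
        if try_map(vnr):
            mapped.append(vnr)
            continue
        vnr["attempts"] += 1
        if vnr["attempts"] < MAX_ATTEMPTS:
            retries.append(vnr)      # retried in a later time window
        else:
            rejected.append(vnr)     # definitively discarded
    return mapped, retries, rejected
```

The mobility-repair and periodic-migration steps would precede and follow this call inside each time window.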

A.1.2 Results from centralised framework

Simulation Setup The initial definition of our scenarios is based on [15], to which some specific
mobility-related capabilities have been added, such as the cell index (a subdivision of the scenario
area), the location parameter, and random mobility patterns for nodes. All scenarios are generated
with N=100 wireless nodes placed randomly in a two-dimensional square area. Each node starts
with 100 units of CPU capacity and 100% of the available BW.
In order to configure a coherent set of simulations and to compare results among them, Figure A.2
gathers the relevant configuration values for the main parameters used in the simulations covered
in our analysis.

Figure A.2: Configuration parameters for scenario simulation

Figure A.3: Completed VNRs for each scenario (left) and as a function of the grade of mobility



Results On the left side of Figure A.3 we can see the number of completed VNet requests as a
function of the splitting ratio for the scenarios defined in Figure A.2, with (Mig) and without (NMig)
migration. All simulations were run with a medium grade of mobility (50% of nodes are mobile).
On the right side of Figure A.3 we have the number of completed VNet requests for all the possible
mobility grades (Static -St- to Full Motion -FM-) and splitting ratios, but specifically for scenario A.

1. “The benefits of using Path Splitting are extensible to mobile substrates” The main conclusion
   we can extract from the results in Figure A.3 (left) is that Path Splitting is a beneficial
   mapping technique also for wireless mobile substrates, as it helps the VNet Provider to
   maximise the number of allocated VNet requests, and hence the revenue obtained. A series
   of considerations can be extracted from the results:

   • A low ratio between the map size and the radio coverage (Figure A.3 -left-, scenarios B
     and D) renders Path Splitting useless because there are not enough links per node.
     A few VNet requests exhaust the Physical Substrate since, with very few physical
     links, they all get fully occupied quickly and there is no margin for Splitting to gain
     any benefit.

   • The smaller the cell size, the fewer nodes per cell, which makes it more difficult to map
     a specific node for location-aware VNet requests. This is one reason why the number of
     completed VNet requests decreases, for example in the results of scenario C (1.56 nodes
     per cell on average) in Figure A.3 (left).
2. “The number of completed VNet requests does not vary much with the grade of mobility”
   As expected when analysing a single scenario (A), the largest number of completed VNet
   requests was reached for the static case: without mobility, VNet requests are not interrupted
   and stopped. We can also see in Figure A.3 (right) that the number of completed VNet
   requests does not change much as the mobility grade increases; the curves are quite close on
   the scale. That is because the Repairing-Ratio (explained next) does not decrease as the grade
   of mobility is incremented.

Figure A.4: Resource saving (left) and Repairing-Ratio (right) as functions of mobility grade for
Scenario A

3. “The Repairing-Ratio increases with the grade of mobility” The Repairing-Ratio is defined
   by equation (A.1), and it improves as the grade of mobility increases for a given


   scenario (see Figure A.4 -right-). “Stopped VNet requests” are all requests that need to be
   interrupted due to mobility issues; “Repaired VNet requests” are those affected requests that can
   be repaired by the Remapping Algorithm; “Remapped VNet requests” are those requests that
   could not be repaired, but that are completely remapped in the following time windows. In the
   following expression we denote a VNet request as ‘VNR’.

\[
\text{Repairing Ratio} = \frac{(\text{Remapped VNRs}/\text{Stopped VNRs}) \cdot \text{Repaired VNRs}}{\text{Completed VNRs}} \tag{A.1}
\]

   Increasing mobility for a scenario makes the number of “Remapped VNRs” increase (more
   attempts are needed to maintain a VNet request), but the number of “Stopped VNRs” also goes
   up, so the proportion remains (roughly) constant. What increases is the number of “Repaired
   VNRs”, and that is the main reason why the Repairing-Ratio grows with mobility.
   The percentage of VNet requests allowing splitting is also a key factor, shown in Figure A.4
   (right). More split paths imply more links affected by mobility, so the number of “Repaired
   VNRs” will increase, and the Repairing-Ratio grows with the splitting ratio.

4. “Migration improves the saving of resources on mobile substrates” Migration reduces the
   cost of allocating VNet requests on the substrate, as displayed in Figure A.4 (left). The
   Resource saving shows the average difference in cost between the case where migration is
   applied and the one where it is not (for scenario A). This Resource saving is obtained as the
   total mapping cost divided by the average mapping duration (Av(MD)) and by the number
   of completed VNet requests, as in the following expression:

\[
\text{Cost} = \frac{\sum_{\text{all V links}} \sum_{\text{all V paths}} hops(V\,path) \cdot bw(V\,link)}{Av(MD) \cdot \text{Completed VNRs}} \tag{A.2}
\]

   The splitting ratio increases this saving because the number of hops per path significantly
   increases the allocation costs; reallocations therefore have a higher margin for cost reduction.
   The grade of mobility also affects the Resource saving (Figure A.4 -left-), negatively in this
   case: the higher the mobility, the higher the cost of the same VNet request becomes; migration
   can reduce costs, but not as significantly.
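The two evaluation metrics of equations (A.1) and (A.2) can be computed directly. The sketch below is our own illustration; the function names are hypothetical and not part of the simulator.

```python
def repairing_ratio(remapped, stopped, repaired, completed):
    """Equation (A.1): (Remapped VNRs / Stopped VNRs) * Repaired VNRs,
    normalised by the number of Completed VNRs."""
    return (remapped / stopped) * repaired / completed

def mapping_cost(vpaths, avg_mapping_duration, completed):
    """Equation (A.2): each virtual path contributes hops(Vpath) * bw(Vlink)
    for the virtual link it (partially) carries; the total is divided by
    Av(MD) times the number of completed VNet requests."""
    total = sum(hops * bw for hops, bw in vpaths)
    return total / (avg_mapping_duration * completed)
```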

A.1.3 Definition of Distributed Protocol Messages

In order to fulfil all the requirements specified within the Mobility-Aware Distributed Embedding
(MADE) problem, a new protocol has been defined. The message specification is described as follows:
• Hello Message: used to exchange routing and available-resource information among substrate
nodes. ‘Hellos’ are broadcast every two seconds.

• Mapping Message: sent by a node after allocating a new virtual node. The whole substrate
is informed about which physical node is hosting which virtual node and the remaining resources.



• Request Message: used sequentially among substrate nodes involved in the same VNet. It
includes the whole VNet request (ongoing embedding), the next virtual node to be mapped,
and a list of already allocated pairs like (physical node - virtual node).

• Error Message: sent backwards (to previous substrate nodes involved in the same VNet) if
a virtual node cannot be hosted in the current substrate node. The intention is to release
the pre-reserved resources and to start a new embedding attempt from the beginning after a
random time period.

• Mobility Error Message: when a substrate link is broken, the affected physical node will
send mobility error messages towards both edges of the hosted (and lost) virtual link. This
way the intermediate physical nodes will release the affected resources. If the virtual link was
split, the error message is spread over the several sub-paths.

• Reallocate Message: a substrate node that has changed its cell location, and was hosting a
virtual node requiring a specific location, will send a Reallocate Message to another substrate
node in the correct position. The Reallocate Message includes the information necessary to
join the affected VNet.

• Migration Message: exchanged among substrate nodes belonging to the same virtual link, in
order to estimate costs and decide whether it is worth migrating the whole link.

• Migration Release Message: sent in response to a Migration Message to release the
substrate path marked with the higher cost.
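The message set above can be summarised in code. The sketch below is a hypothetical model of the message structures (the field names are ours), not the actual MADE wire format.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Optional

class MsgType(Enum):
    HELLO = auto()              # routing and available-resource info, broadcast every 2 s
    MAPPING = auto()            # announces which physical node hosts which virtual node
    REQUEST = auto()            # passed sequentially along nodes of the same VNet
    ERROR = auto()              # sent backwards to release pre-reserved resources
    MOBILITY_ERROR = auto()     # broken substrate link: release towards both edges
    REALLOCATE = auto()         # move a location-constrained virtual node
    MIGRATION = auto()          # estimate the cost of migrating a whole virtual link
    MIGRATION_RELEASE = auto()  # release the substrate path marked with higher cost

@dataclass
class MadeMessage:
    mtype: MsgType
    sender: str
    # A REQUEST carries the whole VNet request (ongoing embedding), the
    # next virtual node to be mapped, and the list of already allocated
    # (physical node, virtual node) pairs.
    vnet_request: Optional[dict] = None
    next_virtual_node: Optional[str] = None
    mapped_pairs: list = field(default_factory=list)
```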

A.1.4 Extra Results of the MADE Protocol performance

Figure A.5: Delay specifically associated with the protocol messages exchanged (average and standard deviation)

In Figure A.5 we include the average time needed (and the standard deviation) to complete a
successful repair (due to mobility), a reallocation (re-mapping of the whole VNet request), and the
migration of a virtual link when splitting is permitted, which means that the virtual link may be
mapped onto separate paths, each with several hops. Repairing times depend heavily on the
threshold for considering a link as broken (3 consecutive lost ‘Hellos’ takes around 2 s). This is to
make sure the broken link is noticed by both edges and the Mobility Error Message has reached
both virtual nodes. The time needed to complete a migration is low, since the substrate nodes will
try to map their virtual neighbours onto physical neighbours (propagation times are short). The
high value of the deviation is due to the migration of virtual links mapped over several hops
(higher propagation times).
In Figure A.6 we have summarised some performance values regarding the mobility manage-
ment procedures, in terms of overhead (number of protocol messages involved). Logically, the



Figure A.6: Correction ratio and protocol overhead according to the mobility ratio of the Physical Substrate

overhead increases as the mobility ratio of the scenario is incremented. Nevertheless, the Correction
Ratio is roughly maintained, which is a very good result, since the protocol can repair almost
the same ratio of mobility problems even for a 100% mobile Physical Substrate.


B Wireless Link Virtualisation
B.1 Analytical Modeling of a Single VNet Service
Let N be the number of VNets and X(t) be the channel state of a tagged VNet in time slot t. Denote
by Xi (t), i = 1, . . . , N − 1, the state of the channel from the base station to the competing
user i in time slot t. The tagged VNet wins the competition for transmission in a time slot if
X(t) > max(Xi (t), . . . , XN −1 (t)) or when a tie is randomly broken in its favor. This competition
policy suggests that the tagged VNet virtually competes with a super user whose channel state is
given by Z = max(Xi , . . . , XN −1 ). Thus, we can simplify the analysis by replacing all competing
users with a super user whose channel model can be computed from those of the N − 1 competing
users as follows.
We develop an iterative algorithm to obtain the channel model of the super user by gradual
combination of the channel models of the N − 1 competing users. First, we develop an algorithm
to combine the channel models of two users. Let pi,j and qi,j be the probability of transition from
state Si to state Sj for users 1 and 2, respectively. Thus,
\[
p_{i,j} = \Pr\{X_1(t) = S_j \mid X_1(t-1) = S_i\}, \qquad
q_{i,j} = \Pr\{X_2(t) = S_j \mid X_2(t-1) = S_i\}. \tag{B.1}
\]
Denote by Z(t) the channel-state of the combined super node, given by Z(t) = max(X1 (t), X2 (t)).
Let δi,j be the state transition probability of Z(t), defined as δi,j = P r{Z(t) = Sj |Z(t − 1) = Si }.
To simplify the expressions, we define the following events:
\[
S_i(t) : \{Z(t) = S_i\}, \qquad
S_{i,j}(t) : \{X_1(t) = S_i,\ X_2(t) = S_j\}. \tag{B.2}
\]
Since Z(t) is the maximum of two random processes, X1 (t) and X2 (t), it will less frequently
be in lower states. For very small values of Pr{Si (t − 1)}, e.g., less than 0.1% of the average state
probability of staying in a typical state, we can safely eliminate state Si and reduce the number
of states. Alternatively, we can assume that the process will move to the next higher state with a
probability close to 1, i.e.,
\[
\delta_{i,j} = \begin{cases} 1, & \text{if } j = i+1, \\ 0, & \text{otherwise,} \end{cases} \tag{B.3}
\]
\[
\delta_{j,i} = 0, \quad \text{if } j < i. \tag{B.4}
\]
For significant values of Pr{Si (t − 1)}, larger than 1% of the average value of the probability
of being in any state,
\[
\delta_{i,j} = \Pr\{S_j(t) \mid S_i(t-1)\}
= \Pr\left\{ \left[ \bigcup_{l=1}^{j-1} \left( S_{j,l}(t) \cup S_{l,j}(t) \right) \right] \cup S_{j,j}(t) \,\middle|\, S_i(t-1) \right\}.
\]



Since Sj,l(t), Sl,j(t), and Sj,j(t) are mutually exclusive events,
\[
\delta_{i,j} = \sum_{l=1}^{j-1} \left[ \mu(i,j,l) + \mu(i,l,j) \right] + \mu(i,j,j), \tag{B.5}
\]
where
\[
\mu(i,m,n) = \Pr\{S_{m,n}(t) \mid S_i(t-1)\}
= \frac{\Pr\{S_i(t-1) \mid S_{m,n}(t)\}\, \Pr\{S_{m,n}(t)\}}{\Pr\{S_i(t-1)\}},
\]
\[
\Pr\{S_i(t-1) \mid S_{m,n}(t)\} =
\Pr\left\{ \left[ \bigcup_{k=1}^{i-1} \left( S_{i,k}(t-1) \cup S_{k,i}(t-1) \right) \right] \cup S_{i,i}(t-1) \,\middle|\, S_{m,n}(t) \right\},
\]
\[
\Pr\{S_i(t-1)\} =
\Pr\left\{ \left[ \bigcup_{k=1}^{i-1} \left( S_{i,k}(t-1) \cup S_{k,i}(t-1) \right) \right] \cup S_{i,i}(t-1) \right\}. \tag{B.6}
\]

Since Si,k(t−1), Sk,i(t−1), and Si,i(t−1) are mutually exclusive events,
\[
\mu(i,m,n) = \frac{T_1 + T_2}{T_3 + T_4}\, \Pr\{S_{m,n}(t)\}, \tag{B.7}
\]
\[
T_1 = \sum_{k=1}^{i-1} \left[ \Pr\{S_{i,k}(t-1) \mid S_{m,n}(t)\} + \Pr\{S_{k,i}(t-1) \mid S_{m,n}(t)\} \right],
\]
\[
T_2 = \Pr\{S_{i,i}(t-1) \mid S_{m,n}(t)\},
\]
\[
T_3 = \sum_{k=1}^{i-1} \left[ \Pr\{S_{i,k}(t-1)\} + \Pr\{S_{k,i}(t-1)\} \right],
\]
\[
T_4 = \Pr\{S_{i,i}(t-1)\}. \tag{B.8}
\]
Using properties of conditional probabilities,
\[
\Pr\{S_{i,k}(t-1) \mid S_{m,n}(t)\}
= \frac{\Pr\{S_{m,n}(t) \mid S_{i,k}(t-1)\}\, \Pr\{S_{i,k}(t-1)\}}{\Pr\{S_{m,n}(t)\}}
= \frac{p_{i,m}\, q_{k,n}\, \pi_{i,k}}{\Pr\{S_{m,n}(t)\}}, \tag{B.9}
\]
where
\[
\pi_{i,k} = \Pr\{S_{i,k}(t)\} = \Pr\{X_1(t) = S_i\}\, \Pr\{X_2(t) = S_k\}. \tag{B.10}
\]




Hence,
\[
\mu(i,m,n) = \frac{p_{i,m}\, q_{i,n}\, \pi_{i,i} + \sum_{k=1}^{i-1} \left( p_{i,m}\, q_{k,n}\, \pi_{i,k} + p_{k,m}\, q_{i,n}\, \pi_{k,i} \right)}{\pi_{i,i} + \sum_{k=1}^{i-1} \left( \pi_{i,k} + \pi_{k,i} \right)}. \tag{B.11}
\]
We can extend the procedure for computing the channel model of a super user to an arbitrary
number of users by repeating the above algorithm until all N − 1 competing users are considered.
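As an illustration, the pairwise combination step of equations (B.5) and (B.11) can be implemented directly. The sketch below is our own code, assuming two independent Markov chains with stationary distributions π1 and π2; applied iteratively, it folds in all N − 1 competing users. The small-probability state elimination of (B.3)-(B.4) is omitted here.

```python
import numpy as np

def combine_channels(P, Q, pi1, pi2):
    """Combine two Markov channel models into the super-user model
    Z(t) = max(X1(t), X2(t)) using equations (B.5) and (B.11).

    P, Q     : m x m transition matrices of users 1 and 2.
    pi1, pi2 : their stationary state distributions.
    Returns the m x m transition matrix delta of Z(t).
    """
    m = len(P)
    pi = np.outer(pi1, pi2)          # pi[i, k] = Pr{X1 = Si} Pr{X2 = Sk}

    def mu(i, mm, nn):
        # (B.11): Pr{S_{mm,nn}(t) | Z(t-1) = Si}
        num = P[i, mm] * Q[i, nn] * pi[i, i]
        num += sum(P[i, mm] * Q[k, nn] * pi[i, k] +
                   P[k, mm] * Q[i, nn] * pi[k, i] for k in range(i))
        den = pi[i, i] + sum(pi[i, k] + pi[k, i] for k in range(i))
        return num / den

    delta = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            # (B.5): sum over all joint states whose maximum is Sj
            delta[i, j] = mu(i, j, j) + sum(mu(i, j, l) + mu(i, l, j)
                                            for l in range(j))
    return delta
```

Each row of the resulting δ sums to one, since the joint events (B.2) partition the state space.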

B.1.1 Service Model of a Single User

Next, the problem is reduced to a scheduling problem with two users, where the tagged VNet
competes with a single super user. The reduced scenario can be used to obtain the service model
of the tagged VNet. Let X(t) and Z(t) represent the channel states of the tagged VNet and the
equivalent super user, respectively. Denote by C(t) the service model of the tagged VNet. C(t)
can be modeled by an (m + 1)-state Markov process, where m is the number of states of the
fading channel. An extra state of C(t), denoted by S0, indicates the non-scheduled state, in which the
tagged VNet does not win the competition for transmission in time slot t.
First, we define the following notation:
\[
\gamma_{i,j} = \Pr\{C(t) = S_j \mid C(t-1) = S_i\},
\]
\[
S_{i,j}(t) : \{X(t) = S_i,\ Z(t) = S_j\}, \qquad
\sigma_{i,j} = \Pr\{S_{i,j}(t)\},
\]
\[
p_{i,j} = \Pr\{X(t) = S_j \mid X(t-1) = S_i\}, \qquad
\delta_{i,j} = \Pr\{Z(t) = S_j \mid Z(t-1) = S_i\}. \tag{B.12}
\]

To take into account the tie-breaking policy, we denote the event that the tagged VNet wins a
tie in state Si by Ei and the corresponding probability by εi. We consider a tie-breaking policy
that randomly selects one of the users with the highest achievable rate, giving all users an equal
chance of winning a tie. For this policy,
\[
\varepsilon_i = \sum_{k=1}^{N-1} \frac{1}{k+1}\, \Pr\{k \text{ users in state } S_i\}, \tag{B.13}
\]
\[
\Pr\{k \text{ users in state } S_i\} = \binom{N-1}{k} \pi_i^k (1 - \pi_i)^{N-1-k}, \tag{B.14}
\]
where πi is the probability that the channel state of a single user is in state Si. Plugging (B.14) into (B.13),
\[
\varepsilon_i = \sum_{k=1}^{N-1} \frac{1}{k+1} \binom{N-1}{k} \pi_i^k (1 - \pi_i)^{N-1-k}. \tag{B.15}
\]
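Equation (B.15) is straightforward to evaluate numerically. The sketch below is our own illustration; the closed form is our simplification, obtained from the binomial identity Σ_{k≥0} (1/(k+1)) C(N−1,k) x^k (1−x)^{N−1−k} = [1 − (1−x)^N]/(Nx).

```python
from math import comb

def tie_win_prob(N, pi_i):
    """Equation (B.15): probability eps_i that the tagged VNet wins a
    tie in state Si against N - 1 competing users."""
    return sum(comb(N - 1, k) * pi_i**k * (1 - pi_i)**(N - 1 - k) / (k + 1)
               for k in range(1, N))

def tie_win_prob_closed(N, pi_i):
    # Closed form of (B.15) via the binomial identity; the k = 0 term
    # of the full sum, (1 - pi_i)^(N-1), is subtracted.
    return (1 - (1 - pi_i)**N) / (N * pi_i) - (1 - pi_i)**(N - 1)
```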
Next, we compute γi,j. For non-significant values of Pr{C(t − 1) = Si}, i.e., less than 1% of
the average probability of being in any state,
\[
\gamma_{i,j} = \begin{cases} 1, & \text{if } j = 0, \\ 0, & \text{otherwise.} \end{cases} \tag{B.16}
\]



For significant values of Pr{C(t−1) = Si}, we break down the problem into three separate cases:
i, j ≠ 0; i = 0, j ≠ 0; and i ≠ 0, j = 0. The case i = 0, j = 0 follows from the basic property
of a transition probability matrix, namely that each row sums to one.
For i, j ≠ 0, from the definition of γi,j in (B.12),
\[
\gamma_{i,j} = \Pr\left\{ \left[ S_{j,j}(t) \cap E_j \right] \cup \left[ \bigcup_{k=1}^{j-1} S_{j,k}(t) \right] \,\middle|\, C(t-1) = S_i \right\}. \tag{B.17}
\]
The right-hand side of (B.17) is the union of mutually exclusive events. Thus,
\[
\gamma_{i,j} = \underbrace{\Pr\{S_{j,j}(t) \cap E_j \mid C(t-1) = S_i\}}_{\text{Term A}}
+ \sum_{k=1}^{j-1} \underbrace{\Pr\{S_{j,k}(t) \mid C(t-1) = S_i\}}_{\text{Term B}}. \tag{B.18}
\]
Term A in (B.18) is given by
\[
A = \Pr\left\{ S_{j,j}(t) \cap E_j \,\middle|\, \left[ S_{i,i}(t-1) \cap E_i \right] \cup \left[ \bigcup_{l=1}^{i-1} S_{i,l}(t-1) \right] \right\}
= \frac{A_1 + A_2}{A_3}, \tag{B.19}
\]
\[
A_1 = \sum_{l=1}^{i-1} \Pr\{S_{j,j}(t) \cap E_j \mid S_{i,l}(t-1)\}\, \Pr\{S_{i,l}(t-1)\},
\]
\[
A_2 = \Pr\{S_{j,j}(t) \cap E_j \mid S_{i,i}(t-1) \cap E_i\}\, \Pr\{S_{i,i}(t-1) \cap E_i\},
\]
\[
A_3 = \Pr\left\{ \left[ S_{i,i}(t-1) \cap E_i \right] \cup \left[ \bigcup_{l=1}^{i-1} S_{i,l}(t-1) \right] \right\}. \tag{B.20}
\]
Given that
\[
\Pr\{S_{j,j}(t) \cap E_j \mid S_{i,l}(t-1)\}
= \Pr\{E_j \mid S_{j,j}(t) \cap S_{i,l}(t-1)\}\, \frac{\Pr\{S_{j,j}(t) \cap S_{i,l}(t-1)\}}{\Pr\{S_{i,l}(t-1)\}}
= \varepsilon_j\, p_{i,j}\, \delta_{l,j}, \tag{B.21}
\]
\[
\Pr\{S_{j,j}(t) \cap E_j \mid S_{i,i}(t-1) \cap E_i\}
= \frac{\Pr\{E_j \cap E_i \mid S_{j,j}(t) \cap S_{i,i}(t-1)\}\, \Pr\{S_{j,j}(t) \cap S_{i,i}(t-1)\}}{\Pr\{S_{i,i}(t-1) \cap E_i\}}
= \varepsilon_j\, p_{i,j}\, \delta_{i,j}, \tag{B.22}
\]



using (B.20)-(B.22) we can rewrite (B.19) as
\[
A = \frac{\varepsilon_j\, p_{i,j}\, \delta_{i,j}\, \varepsilon_i\, \sigma_{i,i} + \sum_{l=1}^{i-1} \varepsilon_j\, p_{i,j}\, \delta_{l,j}\, \sigma_{i,l}}{\varepsilon_i\, \sigma_{i,i} + \sum_{l=1}^{i-1} \sigma_{i,l}}. \tag{B.23}
\]
Term B in (B.18) can be expanded as
\[
B = \Pr\left\{ S_{j,k}(t) \,\middle|\, \left[ \bigcup_{l=1}^{i-1} S_{i,l}(t-1) \right] \cup \left[ S_{i,i}(t-1) \cap E_i \right] \right\}
= \frac{B_1 + B_2}{B_3}, \tag{B.24}
\]
\[
B_1 = \sum_{l=1}^{i-1} \Pr\{S_{j,k}(t) \mid S_{i,l}(t-1)\}\, \Pr\{S_{i,l}(t-1)\}
= \sum_{l=1}^{i-1} p_{i,j}\, \delta_{l,k}\, \sigma_{i,l},
\]
\[
B_2 = \Pr\{S_{j,k}(t) \mid S_{i,i}(t-1) \cap E_i\}\, \Pr\{S_{i,i}(t-1) \cap E_i\}
= p_{i,j}\, \delta_{i,k}\, \varepsilon_i\, \sigma_{i,i},
\]
\[
B_3 = \varepsilon_i\, \sigma_{i,i} + \sum_{l=1}^{i-1} \sigma_{i,l}. \tag{B.25}
\]
Hence,
\[
B = \frac{p_{i,j}\, \delta_{i,k}\, \varepsilon_i\, \sigma_{i,i} + \sum_{l=1}^{i-1} p_{i,j}\, \delta_{l,k}\, \sigma_{i,l}}{\varepsilon_i\, \sigma_{i,i} + \sum_{l=1}^{i-1} \sigma_{i,l}}. \tag{B.26}
\]
For i = 0 and j ≠ 0, referring to (B.12),
\[
\gamma_{0,j} = \Pr\{C(t) = S_j \mid C(t-1) = S_0\}. \tag{B.27}
\]
For non-significant values of Pr{C(t − 1) = S0},
\[
\gamma_{0,j} = \begin{cases} 1, & \text{if } j = 0, \\ 0, & \text{otherwise.} \end{cases} \tag{B.28}
\]
For significant values of Pr{C(t − 1) = S0},
\[
\gamma_{0,j} = \Pr\left\{ \left[ \bigcup_{k=1}^{j-1} S_{j,k}(t) \right] \cup \left[ S_{j,j}(t) \cap E_j \right] \,\middle|\, C(t-1) = S_0 \right\}
= \sum_{k=1}^{j-1} \underbrace{\Pr\{S_{j,k}(t) \mid C(t-1) = S_0\}}_{\text{Term D}}
+ \underbrace{\Pr\{S_{j,j}(t) \cap E_j \mid C(t-1) = S_0\}}_{\text{Term F}}. \tag{B.29}
\]



Term D in (B.29) is given by
\[
D = \Pr\left\{ S_{j,k}(t) \,\middle|\, \left[ \bigcup_{l=1}^{m} \bigcup_{n=1}^{l-1} S_{n,l}(t-1) \right] \cup \left[ \bigcup_{l=1}^{m} \left( S_{l,l}(t-1) \cap \bar{E}_l \right) \right] \right\}
= \frac{D_1 + D_2}{D_3}, \tag{B.30}
\]
\[
D_1 = \sum_{l=1}^{m} \sum_{n=1}^{l-1} \Pr\{S_{j,k}(t) \mid S_{n,l}(t-1)\}\, \Pr\{S_{n,l}(t-1)\}
= \sum_{l=1}^{m} \sum_{n=1}^{l-1} p_{n,j}\, \delta_{l,k}\, \sigma_{n,l},
\]
\[
D_2 = \sum_{l=1}^{m} \Pr\{S_{j,k}(t) \mid S_{l,l}(t-1) \cap \bar{E}_l\}\, \Pr\{S_{l,l}(t-1) \cap \bar{E}_l\}
= \sum_{l=1}^{m} p_{l,j}\, \delta_{l,k}\, (1 - \varepsilon_l)\, \sigma_{l,l},
\]
\[
D_3 = \Pr\left\{ \left[ \bigcup_{l=1}^{m} \bigcup_{n=1}^{l-1} S_{n,l}(t-1) \right] \cup \left[ \bigcup_{l=1}^{m} \left( S_{l,l}(t-1) \cap \bar{E}_l \right) \right] \right\}
= \sum_{l=1}^{m} \sum_{n=1}^{l-1} \sigma_{n,l} + \sum_{l=1}^{m} (1 - \varepsilon_l)\, \sigma_{l,l}. \tag{B.31}
\]
Hence,
\[
D = \frac{\sum_{l=1}^{m} \sum_{n=1}^{l-1} p_{n,j}\, \delta_{l,k}\, \sigma_{n,l} + \sum_{l=1}^{m} p_{l,j}\, \delta_{l,k}\, (1 - \varepsilon_l)\, \sigma_{l,l}}{\sum_{l=1}^{m} \sum_{n=1}^{l-1} \sigma_{n,l} + \sum_{l=1}^{m} (1 - \varepsilon_l)\, \sigma_{l,l}}. \tag{B.32}
\]

Term F in (B.29) can be written as
\[
F = \Pr\left\{ S_{j,j}(t) \cap E_j \,\middle|\, \left[ \bigcup_{l=1}^{m} \bigcup_{n=1}^{l-1} S_{n,l}(t-1) \right] \cup \left[ \bigcup_{l=1}^{m} \left( S_{l,l}(t-1) \cap \bar{E}_l \right) \right] \right\}
= \frac{F_1 + F_2}{F_3}, \tag{B.33}
\]



\[
F_1 = \sum_{l=1}^{m} \sum_{n=1}^{l-1} \Pr\{S_{j,j}(t) \cap E_j \mid S_{n,l}(t-1)\}\, \Pr\{S_{n,l}(t-1)\}
= \sum_{l=1}^{m} \sum_{n=1}^{l-1} p_{n,j}\, \delta_{l,j}\, \sigma_{n,l}\, \varepsilon_j,
\]
\[
F_2 = \sum_{l=1}^{m} \Pr\{S_{j,j}(t) \cap E_j \mid S_{l,l}(t-1) \cap \bar{E}_l\}\, \Pr\{S_{l,l}(t-1) \cap \bar{E}_l\}
= \sum_{l=1}^{m} p_{l,j}\, \delta_{l,j}\, \varepsilon_j\, (1 - \varepsilon_l)\, \sigma_{l,l},
\]
\[
F_3 = D_3. \tag{B.34}
\]
Hence,
\[
F = \frac{\sum_{l=1}^{m} \sum_{n=1}^{l-1} p_{n,j}\, \delta_{l,j}\, \sigma_{n,l}\, \varepsilon_j + \sum_{l=1}^{m} p_{l,j}\, \delta_{l,j}\, \varepsilon_j\, (1 - \varepsilon_l)\, \sigma_{l,l}}{\sum_{l=1}^{m} \sum_{n=1}^{l-1} \sigma_{n,l} + \sum_{l=1}^{m} (1 - \varepsilon_l)\, \sigma_{l,l}}. \tag{B.35}
\]
For i ≠ 0 and j = 0, referring to the basic property of a transition probability matrix, i.e., that the sum
of each row is equal to 1, we have \( \gamma_{i,0} = 1 - \sum_{j=1}^{m} \gamma_{i,j} \).



B.2 WMVF: Wireless Medium Virtualisation Framework

B.2.1 WMVF Simulation Model
Figure B.1 describes the Matlab framework, representing the main parts of the environment. From
left to right, the whole system starts running when traffic packets arrive at the VNO-Queues.

Figure B.1: Modular description of the Wireless Medium Virtualisation Framework

From Figure B.1 we can derive a detailed description of the main modules of the framework:
• Traffic Generator (TF): it creates the network traffic based on traffic models that basically
generate arrival instants and duration intervals for all packets of each type of traffic. Traffic
models are implemented following empirical recommendations and statistical approxima-
tion of parameters to generate certain service patterns [65][66]. For example, a Poisson
distribution with appropriate parameter values is a well known (and extensively used) model
in Traffic Engineering to generate voice traffic arrivals. In a similar way, different traffic
models have been defined to simulate the generation of certain traffic types such as: VoIP,
HTTP, FTP, etc. The TF includes an ON-OFF model (based on Weibull distributions) [67]
to generate traffic arrivals in bursts. The traffic obtained with these bursts is more realistic
(constant or exponential models are optimistic compared to reality). In order to avoid
unrealistic overflow at the beginning of the simulation, the starting instant of each traffic
source is chosen randomly inside a certain time window.
• VNO-Queues: an indefinite number of queues can be configured in the WMVF. Each queue
is associated with a different VNO. Each VNO can offer different types of services, and the


queue associated with that VNO stores its packets regardless of the traffic type. The key concept
here is the one-to-one correspondence between queues and VNOs. All the queues receive
the incoming packets generated by the TF and store them until they are requested by the
TDMA Scheduler, which is in charge of emulating the virtualisation process implemented
by a node to handle traffic from different VNOs. The VNO-Queues considered in
the current version of the WMVF are finite FIFO arrays, although future versions plan to
support different types of queues that take priorities into account. The limited queue size is
important because it makes packet losses possible and realistic.

• TDMA Scheduler: all the VNO-Queues are managed by the TDMA scheduler, which is in
charge of allowing a specific VNO-Queue to serve packets to be transmitted. The TDMA
Scheduler follows a Weighted Round Robin (WRR) discipline, i.e., it sequentially assigns
a time slot per VNO, these time slots being of equal (RR) or different (WRR) duration.
The WRR behaviour can be refined by using an adaptive algorithm, which is why we call
it the Adaptive WRR Scheduler (AWRRS). This dynamic mechanism has two key
functions within the simulation environment:
– Collect partial measures of the performance of the scheduler (in terms of resource usage,
packet delay, packet losses, queue sizes, etc.)
– Continually re-adjust the assignment of WRR weights: the algorithm includes a decision
mechanism based on the partial measures of the network. Depending on certain values,
the scheduler can re-adapt the time sharing among VNOs in order to find a better
(more refined) weight assignment.
Adaptation tries to increase the weight assigned to the most congested VNO in the scheduler,
if possible. To increase one weight it is necessary to decrease the others, so this will not always
be possible. A minimum weight is defined for each VNO (for instance, by contract or agreement
with the Infrastructure Provider), so it is impossible to reduce a VNO's weight below its
guaranteed minimum.
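The adaptation step can be sketched as follows. This is our own minimal illustration of the weight re-assignment idea (the actual WMVF is implemented in Matlab, and the name `adapt_weights` is hypothetical).

```python
def adapt_weights(weights, min_weights, congestion, step=1):
    """One AWRRS adaptation round: shift `step` units of WRR weight from
    the least congested VNO to the most congested one, while never
    pushing any VNO below its guaranteed minimum weight.

    weights, min_weights, congestion: lists indexed by VNO.
    Returns the (possibly unchanged) new weight list.
    """
    new = list(weights)
    most = max(range(len(new)), key=lambda v: congestion[v])
    # Donor candidates: VNOs whose weight stays above the minimum after giving.
    donors = [v for v in range(len(new))
              if v != most and new[v] - step >= min_weights[v]]
    if not donors:
        return new                  # no spare weight: keep the assignment
    least = min(donors, key=lambda v: congestion[v])
    new[least] -= step
    new[most] += step
    return new
```

Note that the total weight (the full TDMA frame) is preserved by construction: every unit given to the congested VNO is taken from another one.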

B.2.2 WMVF Simulation Setup

The most relevant parameters regarding the traffic patterns introduced in Matlab simulations are
presented in Figure B.2.
Apart from these traffic pattern models, which can be changed or adapted as needed, certain
parameters allow the user to configure a specific scenario to be tested:

• Number of VNOs: the number of operators offering services to their users through the
sharing of the same wireless medium (same access node)

• Number of traffic users for each traffic type: number of connections modeled in the access
network

• VoIP codec to be modeled: the implemented options are G711 and G723.1, but others could
be added

• Size of the VNO queues: number of packets that can be enqueued for each VNO before
overflowing (i.e., before traffic packets start being discarded)


Document: FP7-ICT-2007-1-216041-4WARD/D-3.2.0
Date: January 14, 2010 Security: Confidential
Status: Final Version: 1.0

Figure B.2: Traffic Generation Parameters

• Minimum TS assigned (by contract) to each VNO

• Adaptive Scheduler: the system can be configured to use adaptation or not, and different
adaptation criteria can be chosen (utilisation per VNO or queue size per VNO)

• Duration of the test/simulation: expressed in terms of the volume of traffic generated (not in
terms of time), since the simulator follows an event-driven approach

• Throughput of the simulated wireless medium: for 802.11b, we have estimated the
Theoretical Maximum Throughput (TMT) for VoIP connections at around 2.5 Mbps, according
to [68]. For TCP applications over 802.11b, the TMT is around 5.7 Mbps [69].



B.3 CVRRM Evaluation

B.3.1 Evaluation Metrics
The VNet Operator satisfaction level, S_VNO, quantifies the situation in which the VNet Operator
requests to use the allocated capacity, according to the contract established with the VNet Provider,
and this capacity is not available:

S_VNO = 1 − (R_VL^in − R_VL^act) / R_VL^in    (B.36)

• R_VL^in: data rate offered to the VLink,

• R_VL^act: actual VLink data rate.

This equation is only applied when the data rate offered to the VLink is above the actual one, and
below the minimum contracted; otherwise, the subtracted term is zero and S_VNO takes its
maximum value.
The out of contract ratio is computed by:

r_t = T^nav / T    (B.37)

where T is the total observation time and T^nav is the accumulated time during which the
contracted minimum cannot be served, i.e., while R_VL^min − [(N_ch − N_ch^o) · R_ch^max + R_VL^in] > 0, with:

• N_ch: total number of channels,

• R_ch^max: maximum channel data rate in a cluster of resources,

• N_ch^o: number of occupied channels in the cluster, i.e.,

N_ch^o = ⌈R_VL^in / R_ch^act⌉    (B.38)

• R_ch^act: actual channel data rate.
The out of contract ratio can be computed for a VLink and for a VNet.
VLink utilisation is defined as the ratio of the number of occupied channels over the total number
of allocated channels:

η_VL = N_ch^o / N_ch^VL    (B.39)

• N_ch^VL: number of virtual link allocated channels.
The physical link utilisation is defined as the ratio between the total number of occupied channels
and the total number of physical channels:

η_PL = (Σ_{i=1}^{N_VL} N_ch,i^o + N_ch^nV) / N_ch^PL    (B.40)


• N_ch^PL: number of physical channels,

• N_VL: number of VLinks created in the physical resource,

• N_ch,i^o: number of occupied channels of VLink i,

• N_ch^nV: number of occupied non-virtualised channels.

This parameter should be calculated on all wireless physical resources, namely, Base Stations.
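As an illustration, these metrics can be sketched in a few lines (hypothetical Python; the formulas follow our reading of Eqs. (B.36)-(B.40), and the S_VNO fallback value reflects the maximum-satisfaction behaviour reported for lightly loaded VNets in Section B.3.2.1):

```python
import math

def vno_satisfaction(r_in, r_act, r_min):
    """S_VNO (our reading of Eq. (B.36)): penalise the relative shortfall
    when the offered rate exceeds the actual rate while staying below the
    contracted minimum; otherwise the operator is fully satisfied."""
    if r_act < r_in < r_min:
        return 1.0 - (r_in - r_act) / r_in
    return 1.0  # assumption: full satisfaction when capacity suffices

def occupied_channels(r_in, r_ch_act):
    """N_ch^o: channels needed to carry the offered VLink data rate."""
    return math.ceil(r_in / r_ch_act)

def vlink_utilisation(n_occupied, n_allocated):
    """eta_VL = N_ch^o / N_ch^VL (Eq. (B.39))."""
    return n_occupied / n_allocated

def plink_utilisation(occupied_per_vlink, n_non_virt, n_physical):
    """eta_PL: all occupied channels, virtualised plus non-virtualised,
    over the physical total (Eq. (B.40))."""
    return (sum(occupied_per_vlink) + n_non_virt) / n_physical
```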

B.3.2 Results
B.3.2.1 VNet Operator satisfaction level

In a first step, the behaviour of S_VNO with increasing offered traffic was studied, in order
to determine the maximum number of active users for which S_VNO keeps its maximum value. A
first evaluation was done for the reference performance and reference cluster scenario. From the
results, one concludes that, up to approximately 150 active users, corresponding to 2.40 Gbit/s of
total generated data rate, the BE VNet Operator maintains its maximum value (S_VNO = 1). For the
GRT VNet, the maximum number of users the VNet Operator can serve simultaneously with a
maximum satisfaction level is about 400, corresponding to 1.18 Gbit/s of offered data rate. In order
to evaluate the impact of different performance scenarios, S_VNO was obtained for 150 and 200
users in the BE VNet, and for 300 and 400 users in the GRT VNet. In Figure B.3, results are depicted
as a function of the performance scenarios. As one can observe, there is a strong dependence of

Figure B.3: SV N O for the different performance scenarios.

S_VNO on the performance scenario, reflecting the radio interface conditions. It can be seen that, for
the same number of active users in the VNet, S_VNO can vary from its maximum value in Good to
zero in Poor. In the latter, the VNet Operator is not using all the contracted capacity; still, it cannot
allocate more capacity to its users due to a severe degradation of the radio interface.



B.3.2.2 Virtual Link and Physical Link utilisation

The main difference between the theoretical reference values of VLink utilisation, η_VL, and
physical link utilisation, η_PL, lies in the number of channels that belongs to each one. While the
physical link corresponds to the cluster's total number of channels, the virtual link has only a
fraction of the physical link's channels. Figure B.4 shows that the utilisation of the VLink, η_VL,
reaches its maximum, in the Reference scenario, for 150 and 400 active users in the BE and GRT
VNets, respectively. It can also be observed that the utilisation ratio difference between Poor and
Good increases from 0.25 (25 active users in the GRT VNet) to 0.75 (200 active users in the GRT
VNet) as the number of active users increases. In fact, when more users are using the VNets, the
deterioration of the radio interface is more critical, since the reduction is perceived by all users.
The maximum difference is detected when half of the maximum number of active users is in the
network. According to Figure B.5, the maximum value of η_PL in Reference is obtained

Figure B.4: ηV L for the Reference performance scenario.

for approximately 550 active users in total in both VNets, considering that the number of active users
in each VNet increases proportionally. However, if the traffic does not increase uniformly in both

Figure B.5: ηP L for the Reference performance scenario.



VNets, Figure B.6, η_PL can be lower or higher for the same total number of users, according to the
individual VLink utilisations. If the traffic in the BE VNet is kept low and only the users offering
traffic to the GRT VNet increase to its maximum, η_PL increases more slowly, Figure B.6(a); instead, if the
number of active users in the GRT VNet is low, and the one in the BE VNet increases, the increase in η_PL
is much more visible, Figure B.6(b). It can be concluded that the evaluation of physical link utilisation

Figure B.6: ηP L for independent Nu increase in BE and GRT VNets.

should be done in conjunction with the individual VLink utilisations, allowing decisions to be taken
in order to optimise physical link utilisation. Concerning the variation of the physical link
utilisation from Poor to Good, the comments made on η_VL remain valid, as expected.



B.4 LTE Wireless Virtualisation

B.4.1 LTE Virtualisation Simulation Model
Figure B.7 shows the LTE simulation model implemented using OPNET as well as the imple-
mented functionalities and protocol stacks.

Figure B.7: LTE simulation model overview

The focus of this work was not on node or link virtualisation, but rather on air interface
virtualisation and on how to schedule the air interface resources among the different virtual
operators. No node/link virtualisation was simulated; instead, perfect node/link virtualisation was
assumed. Figure B.8 shows an example scenario; it can be seen that there is an additional
entity between the virtual eNodeBs, the "Hypervisor". This entity is responsible for scheduling
the air interface resources (frequency spectrum, or PRBs) among the different virtual eNodeBs.
The Hypervisor also has direct access to the MAC layer of each LTE virtual eNodeB, to
collect the relevant information on which to base the scheduling, such as the channel conditions
of each operator's users and the operator traffic load. In our implementation, two versions of the
hypervisor exist, based on the previously discussed hypervisor types. One is the static version, where
the hypervisor allocates the PRBs among the different virtual operators just once, at the beginning
of the simulation: each virtual eNodeB gets exactly the same number of PRBs and keeps it,
regardless of whether it is actually being used or not. The second version of the hypervisor is a
dynamic one, where the PRBs are allocated to the different virtual operators dynamically, at equal
time intervals. The number of allocated PRBs depends on the load that each operator experienced
during the last time interval. In this way, each operator only gets its required share of the PRBs
and no resources are wasted.
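The dynamic hypervisor's allocation step can be illustrated as follows (hypothetical Python sketch; the actual model is implemented in OPNET, and the rounding scheme here is our own choice):

```python
def allocate_prbs(total_prbs, loads):
    """Split PRBs among virtual operators proportionally to the load each
    reported over the last interval (dynamic hypervisor sketch). Falls back
    to an equal split, as in the static hypervisor, when all are idle."""
    n = len(loads)
    total_load = sum(loads)
    if total_load == 0:
        base, rem = divmod(total_prbs, n)
        return [base + (1 if i < rem else 0) for i in range(n)]
    shares = [load / total_load * total_prbs for load in loads]
    alloc = [int(s) for s in shares]
    # hand out PRBs lost to rounding, largest fractional remainder first
    for i in sorted(range(n), key=lambda i: shares[i] - alloc[i], reverse=True):
        if sum(alloc) == total_prbs:
            break
        alloc[i] += 1
    return alloc
```

With the 99 PRBs of the simulated 20 MHz cell, an idle system yields the static-style equal split of 33 PRBs per operator, while unequal loads skew the allocation toward the busier operators without ever exceeding the PRB budget.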



Figure B.8: Virtualised LTE simulation model (example with 3 Virtual Operators)

B.4.2 Simulation Parameters

Cell1 Cell2 Cell3

Virtual operator 1 0 - 300 sec 600 - 900 sec 300 - 600 sec
Virtual operator 2 300 - 600 sec 0 - 300 sec 600 - 900 sec
Virtual operator 3 600 - 900 sec 300 - 600 sec 0 - 300 sec

Table B.1: Peak traffic setup for each virtual operator



Parameter Assumption
Number of virtual operators 3 virtual operators (red, blue and green in Figure B.8)
Number of virtual eNodeBs 3 eNodeBs (one per virtual operator)
eNodeB coverage area Circular with radius = 375 meters
Cells = 3, each with 120 degree
Total Number of PRBs 99 (which corresponds to about 20 MHz)
Reuse factor = 3
Mobility model Random Way Point (RWP)
Users are initially distributed uniformly within the cell
Users speed 5 km/h
Number of active users Per virtual operator (VO1-VO3):
cell1: 10 VOIP + 4 Video
cell2: 10 VOIP + 2 Video
cell3: 10 VOIP + 1 Video
Path loss model 128.1 + 37.6 log10(R) dB, R in km [70]
Slow Fading model Log normal distributed
Mean value = zero
Standard deviation = 8 dB
Correlation distance = 50 meters
Fast Fading model Jake’s model
CQI reporting Ideal
Modulation schemes QPSK, 16 QAM, 64 QAM
Link-to-System level interface Effective Exponential SINR mapping (EESIM) [71]
Downlink Low traffic model Voice Over Internet Protocol (VOIP)
Silence length = neg. exponential with 3 sec mean
Talk Spurt length = neg. exponential with 3 sec mean
Encoder scheme = GSM EFR
Call duration = 10 sec
Inter-repetition time = uniformly distributed between
5 and 10 sec throughout the whole simulation
Downlink Peak traffic model Video conferencing application
Incoming stream inter arrival time = Const (0.01 sec)
Outgoing stream inter arrival time = Const (0.01 sec)
Incoming stream frame size = Const (80 Bytes)
Outgoing stream frame size = Const (80 Bytes)
Duration = Const (300 sec)

The active periods depend on the virtual operator (see Table B.1)

Hypervisor resolution 1 sec (this is only for the dynamic scenario)
Simulation run time 1000 sec

Table B.2: Simulation scenario configurations



B.5 WiMAX Virtualisation

B.5.1 Virtualised BTS


Figure B.9: Proposed architecture for the vBTS substrate. VLANs are named as ethX.vly , where
X is the number of the physical interface and y denotes the VLAN id on that interface.

The vBTS substrate fulfills the purpose of emulating a fully functional WiMAX Basestation
Transceiver (BTS) for every slice user, while possibly supporting a smaller (but fixed) radio capacity
than the original BTS itself. The vBTS runs on a local machine and communicates over layer-2
links, through a customised base station controller, with the physical basestation transceiver (the
vBTS functionality can also be supported remotely, using either an L2 link or tunneling over the
Internet). As part of the vBTS, one of the ethernet interfaces, vnet1, is made available for
communication with the WiMAX clients. From a slice user's perspective, every packet destined to
the wireless clients should be sent over this local interface in the vBTS environment, and responses
from the clients are similarly received through that interface. The slice can use its own custom
network stack over this interface, as part of the virtual machine, to communicate with the wireless
clients.

For running the virtual machines, we use Kernel Virtual Machines (KVM) [72], which are based
on the QEMU [73] emulator. This is a full virtualisation technique; it relies on the CPU
architecture having virtualisation extensions (e.g., Intel VT or AMD-V) and supports virtual
networking, which allows the creation of virtual interfaces.

A generalised design of the vBTS substrate is shown in Figure B.9. Each of the VMs is
designed to have three virtual interfaces. The first virtual interface, veth0, is responsible for
providing connectivity from the VM to the outside world. This interface is directly routable
from the physical eth0 interface on the vBTS host machine. The second interface on the virtual
machine is veth1, which is used by the slice user to reach the WiMAX clients via the BTS. Each
of these veth1 interfaces is tunneled to an individual VLAN on the eth1 interface of the vBTS host
machine. Using VLANs allows for the easy separation of slice traffic at layer 2. The use of VLANs



also helps in making the system design scalable, since it allows the system designer to span the
vBTS substrate over multiple physical machines connected over LANs. VLANs also allow the
system designer to allocate multiple VM instances to a slice, possibly on different physical hosts,
and eventually to connect them using a single VLAN. The third interface on each of the VMs is
veth2, a control interface connected via the eth2 interface on the node. This interface
allows experimenters to orchestrate their experiments, collect results, and eventually provides a
means of remote monitoring using the vBTS management services. This interface is also the means
by which every slice interacts with the system administrator for changes in experiment parameters,
such as the addition of clients or changes in slice allocations.
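The per-slice plumbing described above can be summarised in a small sketch (hypothetical Python; the interface-to-device mapping follows the text and the ethX.vly naming convention of Figure B.9):

```python
def vbts_interfaces(slice_id):
    """Map the three virtual NICs of a slice's VM to the host-side
    devices they attach to (sketch; names follow the ethX.vly
    convention, where y is the VLAN id of the slice)."""
    return {
        "veth0": "eth0",                # routable access to the outside world
        "veth1": f"eth1.vl{slice_id}",  # per-slice VLAN towards WiMAX clients
        "veth2": "eth2",                # control/monitoring interface
    }
```

For example, slice 3's client-facing traffic would traverse the host VLAN device eth1.vl3, keeping it separated at layer 2 from every other slice.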

vBTS Aggregate Manager The vBTS aggregate manager is responsible for providing the control
API that is used by individual slices for experiment configuration. The aggregate manager also
performs other functions, such as monitoring the virtual machines of each slice and the local
resource usage by individual slices. The functionality of the aggregate manager is provided through
a customised version of the OMF [74] grid service. Most of the control and monitoring features
provided by the OMF grid service are web-based, thus providing a standardised interface, which can
also be accessed remotely.

B.5.2 Modified ASN Design

The ASN substrate shown in Figure B.10 is responsible for acting as a transparent gateway
between the vBTSs and the actual air interface. Since all packet switching has to happen at layer 2,
we removed the IP routing from the conventional ASN-GW setup and use Click [75, 76] for frame
redirection. All packet classification is based on slice identifiers, VLAN ids, and MAC addresses.
The management architecture on the vBTS substrate sends client MAC information for every slice,
as per individual requests, to the ASN aggregate manager. Typically, this information allows the
ASN to bridge traffic from the individual VLAN devices (on eth0) to the corresponding gre tunnels
(on eth1) ending in the BTS, which transmits information to and receives information from the
wireless clients.

Slice Isolation Engine An important part of the forwarding engine running on the ASN
substrate is the slice isolation engine. This is essentially a shaping mechanism that limits slice
traffic, irrespective of the clients and service classes used, such that the fraction of radio resources
used by each slice is as per the slice allocation policies. It is a closed-loop control mechanism that
constantly estimates the performance of clients belonging to individual slices, and uses this
information to shape downlink traffic.
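One control step of such a closed loop might look like this (hypothetical Python sketch; the real engine runs inside Click, and the proportional-gain update and parameter names are our own simplification):

```python
def update_rate_caps(alloc_shares, measured_shares, rate_caps,
                     gain=0.25, floor_bps=64_000):
    """One step of a slice isolation sketch: compare each slice's measured
    fraction of radio resource with its allocated fraction and nudge its
    downlink shaping cap (bits/s) proportionally to the error."""
    new_caps = []
    for policy, used, cap in zip(alloc_shares, measured_shares, rate_caps):
        error = policy - used          # > 0: slice is under its allocation
        cap = cap * (1.0 + gain * error / max(policy, 1e-9))
        new_caps.append(max(cap, floor_bps))  # never starve a slice entirely
    return new_caps
```

A slice observed to consume more than its allocated share thus has its downlink cap tightened on the next interval, and one consuming less has it relaxed, regardless of how many clients or service classes the slice carries.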

RF Aggregate Manager The RF aggregate manager is a user-space daemon that is responsible
for maintaining the client-MAC-to-gre-tunnel mapping and controlling the Click modular router, as
well as implementing the closed control loop for traffic shaping/scheduling. It is also based on the
OMF framework and has an integrated data collection service that logs experiment metrics such as
observed throughputs, client RSSI, retries, and a host of other features.

Although the gre tunnels themselves are layer-3 devices, we do not use any layer-3 routing, and all
forwarding is at layer 2. Standard five-tuple classifiers (Source IP, Destination IP, Source Port,
Destination Port, TOS) can be used in addition to our switching mechanism to redirect client traffic
to the correct gre tunnels, representing different service classes.




Figure B.10: WiMAX Base Station Controller


List of Abbreviations, Acronyms, and Definitions
AAA Authentication, Authorisation, Accounting

NSIS Next Steps In Signaling

NSLP NSIS Signaling Layer Protocol

GIST General Internet Signaling Transport

TLS Transport Layer Security

DTLS Datagram Transport Layer Security

SCTP Stream Control Transmission Protocol

DCCP Datagram Congestion Control Protocol

RMF Resource Management Function

3GPP 3rd Generation Partnership Project

CPU Central Processing Unit

CVRRM Cooperative Virtual Radio Resource Management

eNodeB evolved NodeB

FP Forwarding Path

GSM Global System for Mobile Communications

IP Internet Protocol

InP Infrastructure Provider

LTE Long Term Evolution

MAC Media Access Control

NA Network Access

OFDMA Orthogonal Frequency Division Multiple Access

PRB Physical Radio Resource Block



QoS Quality-of-Service

RF Radio Frequency

RWP Random Way Point

UMTS Universal Mobile Telecommunications System

VNet Virtual Network

VNO Virtual Network Operator

VNP Virtual Network Provider

VoIP Voice Over Internet Protocol

WLAN Wireless Local Area Network

BE Best Effort

BS Base Station

CDMA Code Division Multiple Access

FDMA Frequency Division Multiple Access

GPRS General Packet Radio System

GRT Guaranteed

INM In-Network Management

ME Monitoring Entity

OFDM Orthogonal Frequency Division Multiplexing

RAT Radio Access Technology

SINR Signal to Interference and Noise Ratio

TDMA Time Division Multiple Access

VRRC VNet requirements Radio Resource Control

WiFi Wireless Fidelity


References

[1] Scott Shenker, Larry Peterson, and Jon Turner. Overcoming the internet impasse through
virtualization. In HotNets-III: Proceedings of the 3rd ACM conference on Hot Topics in
Networks. ACM, 2004.

[2] D. Andersen. Theoretical approaches to node assignment. Unpublished Manuscript, April 2002.

[3] Y. Zhu and M. Ammar. Algorithms for assigning substrate network resources to virtual
network components. INFOCOM, 2006.

[4] M. Yu, Y. Yi, J. Rexford, and M. Chiang. Rethinking virtual network embedding: Substrate
support for path splitting and migration. In ACM SIGCOMM Computer Communication
Review, April 2008.

[5] R. Ricci et al. A solver for the network testbed mapping problem. In ACM Computer
Communication Review, January 2003.

[6] J. Lu and J. Turner. Efficient mapping of virtual networks onto a shared substrate. In
TR.WUCSE-2006-35, Washington University, 2006.

[7] N. M. Chowdhury et al. Virtual network embedding with coordinated node and link mapping.
INFOCOM, 2009.

[8] I. Houidi, W. Louati, and D. Zeghlache. A distributed virtual network mapping algorithm.
ICC, May 2008.

[9] Jade (java agent development framework) website.

[10] Grid 5000 website.

[11] Foundation for intelligent physical agents.

[12] Modeling topology of large internetworks.


[13] Virtual network embedding simulator. minianyu/embed.tar.gz.

[14] G. Hernando, S. Perez, and J.M. Cabero. Mobility-aware distributed embedding (made) of
virtual networks. In submitted to ICT Mobile Summit, 2010.

[15] M. Yu, Y. Yi, J. Rexford, and M. Chiang. Rethinking virtual network embedding: Substrate
support for path splitting and migration. ACM SIGCOMM, April 2008.


Document: FP7-ICT-2007-1-216041-4WARD/D-3.2.0
Date: January 14, 2010 Security: Confidential
Status: Final Version: 1.0

[16] G. Hernando, S. Perez, and J.M. Cabero. Analysis of path splitting and migration in the
virtualisation of mobile substrates. In ICT Mobile Summit, June 2009.
[17] Panagiotis Papadimitriou, Olaf Maennel, Adam Greenhalgh, Anja Feldmann, and Laurent
Mathy. Implementing network virtualization for a future internet. In Proceedings of ITC
Specialist Seminar on Network Virtualization, Hoi An, Vietnam, May 2009.
[18] Gregor Schaffrath, Christoph Werle, Panagiotis Papadimitriou, Anja Feldmann, Ronald
Bless, Adam Greenhalgh, Andi Wundsam, Mario Kind, Olaf Maennel, and Laurent Mathy.
Network virtualization architecture: Proposal and initial prototype. In Proceedings of ACM
SIGCOMM VISA, Barcelona, Spain, August 2009.
[19] Heterogeneous experimental network.
[20] E. Kohler, R. Morris, B. Chen, J. Jannotti, and M. F. Kaashoek. The click modular router.
In ACM Transactions on Computer Systems, vol. 18, no. 3, pages 263–297. ACM, 2000.
[21] Norbert Egi, Adam Greenhalgh, Mark Handley, Mickael Hoerdt, Felipe Huici, and Laurent
Mathy. Towards high performance virtual routers on commodity hardware. In Proceedings
of ACM CoNEXT 2008, Madrid, Spain, December 2008.
[22] R. Hancock, G. Karagiannis, J. Loughney, and S. Van den Bosch. Next Steps in Signaling
(NSIS): Framework. RFC 4080 (Informational), June 2005.
[23] David D. Clark, John Wroclawski, Karen R. Sollins, and Robert Braden. Tussle in cy-
berspace: defining tomorrow’s internet. In SIGCOMM ’02: Proceedings of the 2002 con-
ference on Applications, technologies, architectures, and protocols for computer communi-
cations, pages 347–356. ACM, 2002.
[24] J. Lau, M. Townsley, and I. Goyret. Layer Two Tunneling Protocol - Version 3 (L2TPv3).
RFC 3931 (Proposed Standard), March 2005. Updated by RFC 5641.
[25] Richard Alimi, Ye Wang, and Y. Richard Yang. Shadow configuration as a network manage-
ment primitive. In Proc. ACM SIGCOMM, 2008.
[26] The E-model, a Computational Model for Use in Transmission Planning, ITU-T Rec. G.107,
[27] J. Sommers and P. Barford. Self-configuring network traffic generation. In Proc. ACM IMC,
[28] Eric Keller and Evan Green. Virtualizing the data plane through source code merging. In
Proceedings of PRESTO'08, Seattle, USA, August 2008.
[29] P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, and
A. Warfield. Xen and the art of virtualization. In 19th ACM Symposium on Operating Systems
Principles. ACM Press, October 2003.
[30] Norbert Egi, Adam Greenhalgh, Mark Handley, Mickael Hoerdt, Felipe Huici, and Laurent
Mathy. Fairness issues in software virtual routers. In Proceedings of PRESTO’08, Seattle,
USA, August 2008.


Document: FP7-ICT-2007-1-216041-4WARD/D-3.2.0
Date: January 14, 2010 Security: Confidential
Status: Final Version: 1.0

[31] Norbert Egi, Adam Greenhalgh, Mickael Hoerdt, Felipe Huici, Panagiotis Papadimitriou,
Mark Handley, and Laurent Mathy. A platform for high performance and flexible virtual
routers on commodity hardware. In ACM SIGCOMM 2009, Poster, Barcelona, Spain, August

[32] Norbert Egi, Adam Greenhalgh, Mark Handley, Mickael Hoerdt, Felipe Huici, Laurent
Mathy, and Panagiotis Papadimitriou. A platform for high performance and flexible virtual
routers on commodity hardware. SIGCOMM Comput. Commun. Rev., 40(1), 2010.

[33] Aravind Menon, Jose Renato Santos, Yoshio Turner, G. John Janakiraman, and Willy
Zwaenepoel. Diagnosing performance overheads in the xen virtual machine environment. In
Proceedings of the First International Conference on Virtual Execution Environments, Chicago,
Illinois, USA, June 2005.

[34] Jose Renato Santos, Yoshio Turner, John Janakiraman, and Ian Pratt. Bridging the gap
between software and hardware techniques for i/o virtualization. In Proceedings of the
USENIX'08 Annual Technical Conference, Boston, Massachusetts, USA, June 2008.

[35] David E. Williams and Juan Garcia. Virtualization with Xen: Including XenEnterprise,
XenServer, and XenExpress. Syngress Publishing, Inc., ISBN 9781597491679, May 2007.

[36] S. Bhatia, M. Motiwala, W. Muhlbauer, V. Valancius, A. Bavier, N. Feamster, L. Peterson,
and J. Rexford. Hosting virtual networks on commodity hardware. Tech. Rep. GT-CS-07-10,
Georgia Tech University, January 2008.

[37] Vrout.

[38] Vmware server.

[39] Cisco vn-link: Virtualization-aware networking, white paper.


[40] J. Sachs and S. Baucke. Virtual radio: A framework for configurable radio networks. In
International Wireless Internet Conference (WICON 2008), Maui, USA, November 2008.

[41] R. Knopp and P. A. Humblet. Information capacity and power control in single cell multiuser
communications. In Proc. of the IEEE Int. Conf. on Comm., pages 331–335, June 1995.

[42] D. N. C. Tse. Optimal power allocation over parallel gaussian channels. In Proc. of Int. Symp.
Inf. Theory, page 27, June 1997.

[43] M. Mehrjoo, M. Dianati, X. Shen, and K. Naik. Opportunistic fair scheduling for the down-
link of ieee 802.16 wireless metropolitan area networks. In Proc. of the 3rd Int. conf. on
QoS in heterogeneous wired/wireless networks (QShine), Waterloo, Ontario, Canada, August

[44] H. J. Bang, T. Ekman, and D. Gesbert. Channel predictive proportional fair scheduling. IEEE
Trans. on Wireless Commu., 7(2):482–487, February 2008.



[45] 4WARD Consortium. Virtualisation approach: Concept. Technical report, ICT-4WARD
project, Deliverable D3.1.1, Sep. 2009.

[46] Q. Liu, S. Zhou, and G. B. Giannakis. Queuing with adaptive modulation and coding over
wireless links: Cross-layer analysis and design. IEEE Trans. on Wireless Comm., 4(3):1142–
1153, May 2005.

[47] 4WARD Consortium. In-network management concept. Technical report, ICT-4WARD
project, Deliverable D4.2, Mar. 2009.

[48] M. Werner, P. Jesus, C. Silva, S. Kyriazakos, G. Karetsos, Y. Zhu, W. Warzanskyj,
I. Berberana, and P. Karamolegkos. Scenarios for evolution to winner ii - version 1.0. Technical
report, IST-WINNER II Project, Deliverable D6.11.3, Nov. 2007.

[49] S. Duarte. Analysis of technologies for long term evolution in umts. Master’s thesis, Techni-
cal University of Lisbon, Lisbon, Portugal, 2008.

[50] Opnet website.

[51] D. Raychaudhuri, M. Ott, and I. Seskar. Orbit radio grid testbed for evaluation of next-
generation wireless network protocols. In Proceedings of IEEE TRIDENTCOM, pages 308–
309, Washington, USA, 2005.

[52] Larry Peterson, Steve Muir, Timothy Roscoe, and Aaron Klingaman. PlanetLab Architecture:
An Overview. Technical Report PDN–06–031, PlanetLab Consortium, May 2006.

[53] GENI. http : //

[54] G. Bhanage, R. Mahindra, I. Seskar, D. Raychaudhuri, and S. Rangarajan. Architecture for an
open virtualized WiMAX basestation. WINLAB Technical Report No. 352, November

[55] Gautam Bhanage, Ronak Daya, Ivan Seskar, and Dipankar Raychaudhuri. Vnts: A virtual
network traffic shaper for air time fairness in 802.16e systems. Submitted to IEEE International
Conference on Communications (ICC), 2010.

[56] Gregor Schaffrath, Christoph Werle, Panagiotis Papadimitriou, Anja Feldmann, Roland
Bless, Adam Greenhalgh, Andreas Wundsam, Mario Kind, Olaf Maennel, and Laurent
Mathy. Network virtualization architecture: proposal and initial prototype. In VISA ’09:
Proceedings of the 1st ACM workshop on Virtualized infrastructure systems and architec-
tures, pages 63–72, New York, NY, USA, 2009. ACM.

[57] Lars Völker, Denis Martin, Ibtissam El Khayat, Christoph Werle, and Martina Zitterbart. A
node architecture for 1000 future networks. ICC, June 2009.

[58] 4WARD Consortium. In-network management design. Technical report, ICT-4WARD
project, Deliverable D4.3, Dec. 2009.

[59] Specification of the IP Flow Information Export (IPFIX) Protocol for the Exchange of IP
Traffic Flow Information, January 2008.



[60] A Framework for Packet Selection and Reporting, March 2009.

[61] Tanja Zseby. Ipfix/psamp: Future standards for ip measurements. In First IEEE LCN Work-
shop on Network Measurements (WNM2006), November 2006.

[62] G. Muenz, A. Antony, F. Dressler, and G.Carle. Using netconf for configuring manage-
ment probes. In IEEE/IFIP Network Operations & Management Symposium(IEEE/IFIP
NOMS2006), 2006.

[63] NETCONF Configuration Protocol, December 2006.

[64] Yang central.

[65] G. Anastasi, E. De Stefano, and L. Lenzini. Qos provided by the ieee 802.11 wireless lan to
advanced data applications: a simulation analysis. Wireless Networks, 6:90–108, 2000.

[66] ITU-T Rec. Artificial conversational speech, 1993.

[67] Chen-Nee Chuah and Randy H. Katz. Characterizing packet audio streams from internet
multimedia applications. IEEE International Conference on Communications, April 2002.

[68] Maximum throughput calculations for 802.11b wlan.


[69] When is 54 not equal to 54 a look at 802.11a, b, and g throughput. http:


[70] M. Anas, F. D. Calabrese, P. E. Mogensen, C. Rosa, and K. I. Pedersen. Performance evalua-
tion of received signal strength based hard handover for utran lte. In Vehicular Technology
Conference, VTC2007-Spring, IEEE 65th, 2007.

[71] Erik Westman. Calibration and evaluation of the exponential effective sinr mapping (eesm)
in 802.16. In Master Thesis, Stockholm, Sweden, 2006.

[72] Linux kernel virtual machines. http : //www.linux −

[73] Qemu, open source processor emulator. http : //

[74] Omf framework. http : //

[75] Eddie Kohler, Robert Morris, Benjie Chen, John Jannotti, and M. Frans Kaashoek. The click
modular router. ACM Trans. Comput. Syst., 18(3):263–297, 2000.

[76] Click modular router. http : //