
FUTURE COMMUNICATION ARCHITECTURE FOR MOBILE CLOUD SERVICES

Acronym: MobileCloud Networking


Project No: 318109
Integrated Project
FP7-ICT-2011-8
Duration: 2012/11/01-2015/09/30

D3.2 Infrastructure Management Foundations – Components First Release

Type: Prototype

Deliverable No: 3.2

Workpackage: WP3

Leading partner: INTEL

Author(s): Thijs Metsch (Editor), List of Authors overleaf.

Dissemination level: RE

Status: Draft

Date: 29 April 2014

Version: 1.0

Copyright © MobileCloud Networking Consortium 2012 - 2015


List of Authors

• Monica Branco (INOV)
• Giuseppe Carella (FhG)
• Luis Cordeiro (ONE)
• Marius Corici (FhG)
• Desislava Dimitrova (UBERN)
• Andy Edmonds (ZHAW)
• Alex Georgiev (CS)
• Andre Gomes (UBERN/ONE)
• Peter Gray (CS)
• Luigi Grossi (TI)
• Atoosa Hatefi (Orange)
• Sina Khatibi (INOV)
• Jakub Kocur (TUB)
• Giada Landi (NXW)
• Cláudio Marques (ONE)
• Thijs Metsch (INTEL)
• Julius Mueller (TUB)
• David Palma (ONE)
• Dominique Pichon (Orange)
• Bruno Sendas (PTIN)
• João Soares (PTIN)
• Bruno Sousa (ONE)
• Lucio Studer Ferreira (INOV)

List of reviewers:
• Santiago Ruiz (STT)
• Florian Antonescu (SAP)
• Paolo M. Comi (ITALTEL)



Versioning and contribution history

Version | Description | Contributors
0.1 | Initial draft | INTEL
0.2 | Collection of material | TI, PTIN, NEC, INTEL, CS, NXW, ONE, UTWENTE, TUB, INOV, UBERN, ZHAW, Fraunhofer, Orange
0.3 | First complete draft | INTEL
0.4 | Version for peer review | INTEL
0.5 | Changes based on peer review | TI, PTIN, NEC, INTEL, CS, NXW, ONE, UTWENTE, TUB, INOV, UBERN, ZHAW, Fraunhofer, Orange
0.6 | Version for GA review | INTEL
0.7 | Final editing after peer review | INTEL
1.0 | Final version ready for submission | INTEL



Executive Summary
This deliverable presents the first prototype implementations of the infrastructure foundations. Those foundations were defined as architectural artefacts in the previous deliverable (D3.1 2013), so this deliverable can be seen as a continuation of the work presented there.
The infrastructural foundation of the MobileCloud Networking project can be roughly split into five parts, which correspond to the related Tasks in the work package: networking at data-centre level (for the interconnectivity of the service instance components) in Task 3.1, the performance/profiling of the virtualized compute, storage and network resources in Task 3.2, the monitoring of these resources in Task 3.3, how to make use of the same resources in Task 3.4 and, finally, how access to the Radio Access Network is provided in Task 3.5.
These parts form the foundations on which the services from the other work packages can build. Hence, as a first step, all components of the foundation need to be offered with basic functionality so that other services can move onto the infrastructure foundations as described in D3.1.
The components delivered in M18 therefore do not yet expose their full functionality, but provide all the basic components needed to enable the other services. The most important functions/features and evaluations provided as of M18 are:
• Basic support for Load Balancing and the Domain Name Service, as well as research and evaluation work on intra data centre connectivity.
• First performance tests have been carried out in an automated fashion, using the RANaaS as example.
• Generic monitoring capabilities have been implemented which allow for instrumentation of the infrastructure as well as of first services.
• A Cloud Controller has been implemented which integrates all the different parts and acts as the abstraction layer for the services building on top of the infrastructure foundations.
• Basic traffic injections, performance tests and designs for the RANaaS have been carried out to verify the architecture of the RANaaS. This will also enable the first steps towards an end-to-end demonstration.
With these first prototypes in hand, the partners in WP3 will continue working on the foundations in close collaboration with the other services from the different work packages, and thereby eventually deliver the fully featured infrastructural foundations of the MobileCloud Networking project.



Table of Contents
1 INTRODUCTION ..................................................................................................................................... 10
2 COMPONENTS OF WP3......................................................................................................................... 11
2.1 DOMAIN NAME SYSTEM-AS-A-SERVICE .................................................................................................. 11
2.1.1 Definition and Scope ..................................................................................................................... 11
2.1.2 High-level design .......................................................................................................................... 11
2.1.3 Low-level design............................................................................................................................ 12
2.1.4 Documentation of the code ............................................................................................................ 12
2.1.5 Third parties and open source software ........................................................................................ 13
2.1.6 Installation, Configuration, Instantiation...................................................................................... 13
2.1.7 Roadmap ....................................................................................................................................... 13
2.1.8 Research works and algorithms .................................................................................................... 13
2.1.9 Conclusions and Future work ....................................................................................................... 14
2.2 LOAD BALANCER-AS-A-SERVICE ............................................................................................................ 14
2.2.1 Definition and Scope ..................................................................................................................... 14
2.2.2 High-level design .......................................................................................................................... 15
2.2.3 Low-level design............................................................................................................................ 15
2.2.4 Documentation of the code ............................................................................................................ 15
2.2.5 Third parties and open source software ........................................................................................ 15
2.2.6 Installation, Configuration, Instantiation...................................................................................... 16
2.2.7 Roadmap ....................................................................................................................................... 16
2.2.8 Conclusions and Future work ....................................................................................................... 16
2.3 INTRA DATACENTRE CONNECTIVITY ....................................................................................................... 16
2.3.1 Definition and Scope ..................................................................................................................... 16
2.3.2 High-level design .......................................................................................................................... 16
2.3.3 Low-level design............................................................................................................................ 17
2.3.4 Documentation of the code ............................................................................................................ 19
2.3.5 Third parties and open source software ........................................................................................ 20
2.3.6 Installation, Configuration, Instantiation...................................................................................... 21
2.3.7 Roadmap ....................................................................................................................................... 21
2.3.8 Research works and algorithms .................................................................................................... 21
2.3.9 Conclusions and Future work ....................................................................................................... 23
2.4 RE-DIRECTION AS A SERVICE .................................................................................................................. 23
2.4.1 Definition and Scope ..................................................................................................................... 23
2.4.2 High Level Design ......................................................................................................................... 23
2.4.3 Low Level Design .......................................................................................................................... 27
2.4.4 Documentation of code.................................................................................................................. 30
2.4.5 Third parties and open source software ........................................................................................ 30
2.4.6 Installation, Configuration, Instantiation...................................................................................... 31
2.4.7 Roadmap ....................................................................................................................................... 31
2.4.8 Research Work & algorithms ........................................................................................................ 31
2.4.9 Conclusions and Future work ....................................................................................................... 31
2.5 PERFORMANCE ........................................................................................................................................ 31
2.5.1 Definition and Scope ..................................................................................................................... 31
2.5.2 High-level design .......................................................................................................................... 32
2.5.3 Low-level design............................................................................................................................ 35
2.5.4 Documentation of the code ............................................................................................................ 37
2.5.5 Third parties and open source software ........................................................................................ 37
2.5.6 Installation, Configuration, Instantiation...................................................................................... 37
2.5.7 Roadmap ....................................................................................................................................... 38
2.5.8 Research works and algorithms .................................................................................................... 39
2.5.9 Conclusions and Future work ....................................................................................................... 39
2.6 MONITORING-AS-A-SERVICE ................................................................................................................... 40
2.6.1 Definition and Scope ..................................................................................................................... 40
2.6.2 High-level design .......................................................................................................................... 40
2.6.3 Low-level design............................................................................................................................ 44



2.6.4 Documentation of the code ............................................................................................................ 48
2.6.5 Third parties and open source software ........................................................................................ 49
2.6.6 Installation, Configuration, Instantiation...................................................................................... 49
2.6.7 Roadmap ....................................................................................................................................... 50
2.6.8 Research works and algorithms .................................................................................................... 50
2.6.9 Conclusions and Future work ....................................................................................................... 52
2.7 ANALYTICS-AS-A-SERVICE...................................................................................................................... 52
2.8 CLOUD CONTROLLER .............................................................................................................................. 53
2.8.1 Definition and Scope ..................................................................................................................... 53
2.8.2 High-level design .......................................................................................................................... 53
2.8.3 Low-level design............................................................................................................................ 54
2.8.4 Documentation of the code ............................................................................................................ 57
2.8.5 Third parties and open source software ........................................................................................ 57
2.8.6 Installation, Configuration, Instantiation...................................................................................... 58
2.8.7 Roadmap ....................................................................................................................................... 59
2.8.8 Research works and algorithms .................................................................................................... 59
2.8.9 Conclusions and Future work ....................................................................................................... 62
2.9 SERVICE GRAPH EDITOR ......................................................................................................................... 62
2.9.1 Definition and Scope ..................................................................................................................... 62
2.9.2 High-level design .......................................................................................................................... 63
2.9.3 Low-level design............................................................................................................................ 64
2.9.4 Documentation of the code ............................................................................................................ 64
2.9.5 Third parties and open source software ........................................................................................ 65
2.9.6 Installation, Configuration, Instantiation...................................................................................... 65
2.9.7 Roadmap ....................................................................................................................................... 65
2.9.8 Conclusions and Future work ....................................................................................................... 65
2.10 DATABASE-AS-A-SERVICE .................................................................................................................. 65
2.10.1 Definition and Scope ................................................................................................................ 66
2.10.2 High-level design ...................................................................................................................... 66
2.10.3 Low-level design ....................................................................................................................... 66
2.10.4 Documentation of the code ....................................................................................................... 66
2.10.5 Third parties and open source software ................................................................................... 66
2.10.6 Installation, Configuration, Instantiation ................................................................................. 67
2.10.7 Roadmap................................................................................................................................... 67
2.10.8 Conclusions and Future work................................................................................................... 67
2.11 RADIO ACCESS NETWORK-AS-A-SERVICE .......................................................................................... 67
2.11.1 Definition and Scope ................................................................................................................ 67
2.11.2 High-level design ...................................................................................................................... 67
2.11.3 Low-level design ....................................................................................................................... 68
2.11.4 Documentation of the code ....................................................................................................... 72
2.11.5 Third parties and open source software ................................................................................... 73
2.11.6 Installation, Configuration, Instantiation ................................................................................. 74
2.11.7 Roadmap................................................................................................................................... 75
2.11.8 Research works and algorithms ............................................................................................... 76
2.11.9 Conclusions and Future work................................................................................................... 78
3 SERVICE ENABLEMENT ...................................................................................................................... 80
3.1 GENERIC SERVICE ORCHESTRATOR ......................................................................................................... 80
3.1.1 Definition and Scope ..................................................................................................................... 80
3.1.2 High-level design .......................................................................................................................... 80
3.1.3 Low-level design............................................................................................................................ 80
3.1.4 Documentation of the code ............................................................................................................ 82
3.1.5 Third parties and open source software ........................................................................................ 82
3.1.6 Installation, Configuration, Instantiation...................................................................................... 82
3.1.7 Roadmap ....................................................................................................................................... 82
3.1.8 Conclusions and Future work ....................................................................................................... 83
3.2 GENERIC SERVICE MANAGER .................................................................................................................. 83
3.2.1 Definition and Scope ..................................................................................................................... 83



3.2.2 High-level design .......................................................................................................................... 83
3.2.3 Low-level design............................................................................................................................ 83
3.2.4 Documentation of the code ............................................................................................................ 86
3.2.5 Third parties and open source software ........................................................................................ 86
3.2.6 Installation, Configuration, Instantiation...................................................................................... 86
3.2.7 Roadmap ....................................................................................................................................... 87
3.2.8 Conclusions and Future work ....................................................................................................... 87
4 CONCLUSIONS ........................................................................................................................................ 88
5 TERMINOLOGY ...................................................................................................................................... 89



Table of Figures
Figure 1 DNSaaS FMC diagram ........................................................................................................................... 11
Figure 2 DNSaaS UML class diagram .................................................................................................................. 12
Figure 3 Cloud Service Provider - Network Architecture ..................................................................................... 17
Figure 4 OpenDaylight Hydrogen – Virtualization Edition Architecture (Linux Foundation 2013) .................... 18
Figure 5 Architecture Overview of the running prototype .................................................................................... 19
Figure 6: MCN Architecture with Generic OpenStack and Neutron Settings ....................................................... 24
Figure 2: Configurations & Policy Engine with OpenFlow in Provider Domain ................................................. 25
Figure 8: Disaster Recovery Traffic across different paths ................................................................................... 26
Figure 4: General SDN with Control plane service .............................................................................................. 27
Figure 5: FMC Diagram for a generic SDN Controller ........................................................................................ 28
Figure 6: Service Template Graphs with SDN/OpenFlow .................................................................................... 29
Figure 6 Performance testing: Overall workflow .................................................................................................. 33
Figure 7 STG and ITG mapping ........................................................................................................................... 35
Figure 8 Mapping SIC to IaaS .............................................................................................................................. 36
Figure 9 End-2-End MCN Service Deployment with MaaS ................................................................................ 41
Figure 10 Zoom in - Deployment of MaaS ........................................................................................................... 41
Figure 11 Zoom-in - MCN Service Deployment with MaaS ................................................................................ 42
Figure 12 MaaS interfaces .................................................................................................................................... 42
Figure 13 FMC diagram of the ZPC architecture .................................................................................................. 43
Figure 14 ZPC life-cycle ....................................................................................................................................... 44
Figure 15 Hierarchical Zabbix Schema ................................................................................................................. 45
Figure 16 The Zabbix-Ceilometer Schema used by ZCP ...................................................................................... 45
Figure 17 ZCP UML class diagram ...................................................................................................................... 46
Figure 18 Flow diagram ........................................................................................................................................ 52
Figure 19 High level overview of CC ................................................................................................................... 53
Figure 20 Service Development Kit UML class diagram ..................................................................................... 55
Figure 21 Sample SO UML class diagram (hidden details) .................................................................................. 55
Figure 22 Screenshot of integrate SO/SDK & CC ................................................................................................ 56
Figure 23 CC Northbound Interface UML class diagram ..................................................................................... 57
Figure 24 Example of network topology with QoS parameters ........................................................................... 62
Figure 25 StgEditor screenshot ............................................................................................................................. 63
Figure 26 StgEditor information flow ................................................................................................................... 64
Figure 27 Architecture reference model for M18 prototype ................................................................................. 68
Figure 28 Use-case diagram of RANaaS for M18 ................................................................................................ 69
Figure 29 eNodeB architecture ............................................................................................................................. 70
Figure 30 Interfaces between User Generator, eNodeB and SGW ....................................................................... 70
Figure 31 User generator and eNodeB .................................................................................................................. 71
Figure 32 Configuration management via EMS Agent ......................................................................................... 72
Figure 33 eNodeB configuration ........................................................................................................................... 75
Figure 34 Configuration of User generator ........................................................................................................... 75
Figure 35 SO UML class diagram ........................................................................................................................ 81
Figure 36 Structure of a SO bundle ....................................................................................................................... 81
Figure 37 SM UML class diagram ........................................................................................................................ 84
Figure 38 Sample SM ........................................................................................................................................... 85
Figure 39 SO bundle structure .............................................................................................................................. 86



Table of Tables
Table 1 Components & Tasks ............................................................................................................................... 10
Table 2 DNSaaS documentation ........................................................................................................................... 13
Table 3 DNSaaS dependencies ............................................................................................................................. 13
Table 4 LBaaS documentation .............................................................................................................................. 15
Table 5 LBaaS dependencies ................................................................................................................................ 15
Table 6 Intra-DC network documentation ............................................................................................................ 19
Table 7 Intra-DC connectivity dependencies ........................................................................................................ 20
Table 8 QoS parameters ........................................................................................................................................ 22
Table 9 Re-Direction service documentation ........................................................................................................ 30
Table 10 Re-Direction service dependencies ........................................................................................................ 30
Table 11 Metrics - Resources list .......................................................................................................................... 36
Table 12 Performance documentation ................................................................................................................... 37
Table 13 Performance dependencies ..................................................................................................................... 37
Table 14 MaaS documentation.............................................................................................................................. 49
Table 15 MaaS dependencies ................................................................................................................................ 49
Table 16 CC documentation.................................................................................................................................. 57
Table 17 CC dependencies .................................................................................................................................... 58
Table 18 StgEditor documentation ........................................................................................................................ 65
Table 19 StgEditor dependencies .......................................................................................................................... 65
Table 20 Database-as-a-Service documentation ................................................................................................... 66
Table 21 Database-as-a-Service dependencies...................................................................................................... 66
Table 22 API for the eNB Management ................................................................................................................ 72
Table 23 RANaaS documentation ......................................................................................................................... 73
Table 24 RANaaS dependencies ........................................................................................................................... 74
Table 25 SO documentation .................................................................................................................................. 82
Table 26 SO dependencies .................................................................................................................................... 82
Table 27 SM documentation ................................................................................................................................. 86
Table 28 SM dependencies ................................................................................................................................... 86



1 Introduction
This document gives an introduction to the software components of the Infrastructure Management Foundations. The work presented here shows the first prototype implementations; with these, the other work packages should be able to use the infrastructural foundations. The focus for this deliverable was therefore on enabling the virtualization of the services delivered out of work packages 3, 4 and 5.
All components of WP3 are presented in this document in a uniform way. Some describe new developments, others describe evaluations of existing technologies which might be enhanced in the future. Table 1 gives an overview of the components that are part of this deliverable and of the Tasks in WP3 that worked on them.
Table 1 Components & Tasks
Service Component | Delivering Task
DNS-as-a-Service | Task 3.1
Load-Balancer-as-a-Service | Task 3.1
Intra Datacentre connectivity | Task 3.1
Re-Direction as a Service | Task 3.1
Performance | Task 3.2
Monitoring-as-a-Service | Task 3.3
Analytics-as-a-Service | Task 3.2/Task 3.3
Cloud Controller | Task 3.4
STG editor | Task 3.4
Database-as-a-Service | Task 3.4
Radio Access Network-as-a-Service | Task 3.5
Generic Service Orchestrator | WP3
Generic Service Manager | WP3

In general, the technologies demonstrated here were evaluated, implemented and enhanced using an Agile methodology; WP3 uses a Scrum-based process in which Sprints are organized. Please note that some services – such as the Load Balancer – are built upon existing technologies, and the work carried out is mostly about evaluating those technologies. Other services build upon existing technologies and have been enhanced in the past Sprints – such as the DNS service. Others represent new developments, such as the work done on the CC. Each section details what are new developments and what are enhancements. In addition, some services – such as the Analytics service, which will be enhanced over the next Sprints – are not part of this M18 prototype deliverable. Wherever possible, the solutions presented in this deliverable were also deployed on the testbeds to verify their functioning.



2 Components of WP3
Each of the following sections addresses one of the components delivered out of WP3. All these
components were developed with a focus on getting the initial service workloads virtualized.

2.1 Domain Name System-as-a-Service


The following sub-sections detail the sub-components of Domain Name Service-as-a-Service
(DNSaaS). The work presented here builds upon existing technologies and was delivered out of Task
3.1.

2.1.1 Definition and Scope


The Domain Name Service-as-a-Service (DNSaaS) provides the signalling and the management to enable DNS operations for different tenants. DNSaaS supports the end-to-end provisioning and life-cycle management needs of DNS services. Such provisioning includes the instantiation of machines to hold the DNS backends (i.e., PowerDNS servers replying to DNS mapping requests) and the DNSaaS API, which enables the maintenance of the DNS information of a tenant.

2.1.2 High-level design


As first defined in (D3.1 2013), the diagram in Figure 1 depicts a high-level overview of the main components of DNSaaS.

Figure 1 DNSaaS FMC diagram


The high-level components of DNSaaS have been updated to include the required modules to interact
with the MCN architecture. In particular, the DNSaaS AAA component has been added to allow
authentication of requests. Moreover, the DNSaaS client replaces the old configurator to avoid ambiguities with internal configuration classes of DNSaaS, previously introduced in deliverable (D2.2 2013).

2.1.3 Low-level design


DNSaaS is implemented in Python and uses the MCN SDK. The class structure is depicted in the UML diagram shown in Figure 2. The components marked in grey, such as MaaS and AAA, are not part of DNSaaS but are included to show the services related to DNSaaS.

Figure 2 DNSaaS UML class diagram


MCN DNSaaS is composed of three main packages:
• ServiceOrchestrator, which includes all the classes for managing DNSaaS components, such as instantiation and scaling operations. These classes are defined in the MCN SDK.
• DNSaaSClient, which includes the classes to interact with DNSaaS; a minimal usage sketch is given below.
• DNSaaS, which includes all the classes that provide the DNSaaS functionality, such as the creation of records. This package also includes interfaces to the Designate component, as well as classes to interact with Keystone and MaaS.
More information can be found in the documentation section of the DNSaaS.
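Since the DNSaaSClient package wraps the tenant-facing API, a minimal sketch of what such an interaction could look like is given here. It is not the MCN DNSaaSClient itself: it uses the python-designateclient v1 bindings listed as a dependency in Table 3, and the endpoint, token and record values are placeholders.

from designateclient.v1 import Client
from designateclient.v1.domains import Domain
from designateclient.v1.records import Record

# Endpoint and token are placeholders; in MCN the token would be obtained
# from Keystone (the AAA component) on behalf of the tenant.
client = Client(endpoint="http://dnsaas.example.org:9001/v1",
                token="KEYSTONE-TOKEN")

# Create a zone owned by the tenant ...
domain = client.domains.create(
    Domain(name="service.mcn.example.org.", email="admin@example.org"))

# ... and add an A record for one of the tenant's service instance components.
client.records.create(
    domain.id,
    Record(name="lb.service.mcn.example.org.", type="A", data="192.0.2.10"))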

2.1.4 Documentation of the code


Table 2 shows the available documentation of the DNSaaS.



Table 2 DNSaaS documentation
Sub-Component | Reference | Documentation
DNSaaS | https://git.mobile-cloud-networking.eu/networking/dnsaas | Documentation of the different classes can be found in the doc folder. Information can also be found in the README.md file.

2.1.5 Third parties and open source software


DNSaaS uses the following third-party software packages, described in Table 3.
Table 3 DNSaaS dependencies
Name | Description | Reference | License
Runtime:
Keystone | AAA solution | https://github.com/openstack/keystone | Apache 2.0
Designate | DNSaaS solution | https://github.com/stackforge/designate | Apache 2.0
Development:
Designate-client | Client for the Designate API | https://launchpad.net/python-designateclient | Apache 2.0

2.1.6 Installation, Configuration, Instantiation


DNSaaS requires the installation and configuration of different components (e.g., MySQL, PowerDNS), which can be performed as described in a dedicated wiki page (ONESource 2014). In addition, the service template provided to the Cloud Controller (CC) can also be used to create DNSaaS instances.

2.1.7 Roadmap
The following sprints, as defined in (DNSAAS 2014), will focus on delivering the following features up to M27:
• Implement a framework for DNSaaS performance metrics.
• Implementation of scaling algorithms for vertical and horizontal scaling.
• Definition of templating mechanisms with OpenStack Heat.
• Implement support for Name Authority Pointer (NAPTR) records.
• Add support for Designate API V2.
• Implement Load Balancing for DNS.

2.1.8 Research works and algorithms


Scaling decisions, both vertical and horizontal, need to rely on the values of certain criteria and their respective thresholds, for instance whether the queries per second drop to 500 or less, or whether query latency is higher than 10 s. Algorithms that detect that a threshold has been exceeded propose scaling actions according to the size of the excess.
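To make the threshold logic concrete, the following is a minimal sketch of such a decision function. The 500 queries-per-second and 10 s thresholds are taken from the text; the function name, the interpretation of which threshold triggers scaling out versus scaling in, and the metric source are assumptions for illustration only.

def scaling_decision(queries_per_second, query_latency_s,
                     min_qps=500, max_latency_s=10.0):
    """Return an (action, factor) tuple based on how far a threshold is exceeded."""
    if query_latency_s > max_latency_s:
        # Latency above the acceptable bound: propose scaling out,
        # proportionally to the excess.
        return "out", query_latency_s / max_latency_s
    if queries_per_second <= min_qps:
        # Load dropped to the lower threshold or below: candidate for scaling in.
        return "in", min_qps / max(queries_per_second, 1)
    return None, 1.0

# Example with metrics as they could be retrieved from MaaS for a DNS backend.
action, factor = scaling_decision(queries_per_second=320, query_latency_s=2.5)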
One of the initial research works, compiled in a scientific article (Sousa 2014), was devoted to assessing how different configurations impact the performance of Designate. The performance evaluation of Designate included the following criteria:
• Processing time – the time taken by DNSaaS to process a certain request (in ms).
• Queries throughput – the number of DNS queries supported by the DNS backend (in qps).
• Queries lost – the number of DNS queries not replied to by the DNS backend, or lost.
• Queries latency – the time taken by the DNS backend to reply to DNS requests (in ms).
• CPU and RAM usage – how efficiently CPU and RAM resources are used.
The evaluation considered the performance of Designate with different server configurations, to assess how a certain configuration impacts the performance of Designate. Taking the example of RAM and number of CPUs, two main configurations were used for the servers hosting the Designate functionality:
• enhanced: with 2 virtual CPUs and 6 GB of RAM
• normal: with 1 CPU and 1 GB of RAM.
Results demonstrate that the processing time is lower with the enhanced server, which also supports a higher number of concurrent requests. Moreover, the results allowed profiling the performance of Designate under different sets of configurations (i.e., the number of queries supported given a certain amount of memory and CPU).

2.1.9 Conclusions and Future work


The work described in the previous sections highlights the initial steps to enable a basic Domain Name Service-as-a-Service (DNSaaS). Effort has been put into measuring the initial performance of DNSaaS (Designate) in order to obtain an initial profile of Designate's performance.
The following sprints will integrate the test results into a framework collecting DNSaaS performance metrics, which are used as input for the scaling algorithms. Other features will extend the number and type of DNS records supported and add advanced mechanisms, such as load balancing.

2.2 Load Balancer-as-a-Service


The following sub-sections detail the sub-components of Load Balancer-as-a-Service (LBaaS). The
work presented here builds upon existing technologies and was delivered out of Task 3.1.

2.2.1 Definition and Scope


LBaaS is provided at M18 as available in OpenStack Neutron. Neutron provides an API for controlling the LB service instance that allows incoming HTTP and TCP traffic to be distributed onto a pool of equivalent servers. While HTTP load balancing distributes each incoming HTTP request to a properly selected server, TCP load balancing distributes complete TCP connections, independently of the upper application layer protocol. When a TCP connection is set up, the Load Balancer chooses a server and then forwards back and forth to that same server all subsequent TCP packets pertaining to the same connection. The latter is adopted, for example, to scale SMTP servers.
As already mentioned in (D3.1 2013), as no specific innovation content is involved, LBaaS is provided in MCN as is in OpenStack Neutron. At the time this report is being written, no need for any specific developments has been recognized. MCN applications and services needing an HTTP/TCP load balancing service can instantiate a Neutron LBaaS instance through an appropriate Heat Orchestration Template (HOT) or by directly invoking Neutron API commands through the API network.
It should also be recalled that load balancing is a very generic term indicating a technical solution to support system scalability, where a pivot element, named Load Balancer, distributes the traffic over a number of equivalent servers. Where the involved protocols are widely used, like HTTP and/or TCP, a commodity solution such as the one depicted here is applicable. If, as in many MCN services, the load balancing has to be achieved by inspecting application-level data, then there is no commodity solution and each solution has to be developed separately to cope with the specific requirements.
For this reason the following paragraphs briefly describe how LBaaS is supported in MCN and therefore its OpenStack Neutron implementation.

2.2.2 High-level design


The architecture is given by the OpenStack Neutron project. Documentation of the architecture can be
found here:
https://wiki.openstack.org/wiki/Neutron

2.2.3 Low-level design


More detailed low level architecture artefacts for the Load Balancer in OpenStack Neutron can be found
here:
https://wiki.openstack.org/wiki/Neutron/LBaaS/Architecture

2.2.4 Documentation of the code


Table 4 shows the available documentation of the LBaaS.
Table 4 LBaaS documentation
Sub-Component | Reference | Documentation
LBaaS | https://wiki.openstack.org/wiki/Neutron/LBaaS/API | Contains the API documentation of the LBaaS.

2.2.5 Third parties and open source software


LBaaS uses the following third-party software packages described in Table 5.
Table 5 LBaaS dependencies
Name | Description | Reference | License
Runtime:
OpenStack Neutron | OpenStack Networking | https://wiki.openstack.org/wiki/Neutron | Apache 2.0

2.2.6 Installation, Configuration, Instantiation


LBaaS requires some installation and/or configuration to be done on both the Controller Node and the Network Node, as described in https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun. On the Controller Node LBaaS has to be enabled, while on the Network Node the LBaaS agent, which executes the commands issued by Neutron, and HAProxy need to be installed. LBaaS instances can also be created by means of a service template provided to the CC; an example of direct use of the Neutron API is sketched below.
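As an illustration of direct invocation of the Neutron API, the sketch below creates an HTTP pool, its members and a VIP through python-neutronclient, using the LBaaS v1 resources described above. Credentials, the subnet ID and the member addresses are placeholders, and the exact client version available in a given deployment may differ.

from neutronclient.v2_0 import client

neutron = client.Client(username="mcn", password="secret", tenant_name="mcn",
                        auth_url="http://keystone.example.org:5000/v2.0")

# Pool of equivalent HTTP servers, balanced round-robin.
pool = neutron.create_pool({"pool": {"name": "web-pool",
                                     "protocol": "HTTP",
                                     "lb_method": "ROUND_ROBIN",
                                     "subnet_id": "SUBNET-ID"}})["pool"]

# Register the back-end servers as members of the pool.
for address in ("10.0.0.11", "10.0.0.12"):
    neutron.create_member({"member": {"pool_id": pool["id"],
                                      "address": address,
                                      "protocol_port": 80}})

# The VIP is the single entry point on which the HAProxy instance listens.
neutron.create_vip({"vip": {"name": "web-vip",
                            "protocol": "HTTP",
                            "protocol_port": 80,
                            "pool_id": pool["id"],
                            "subnet_id": "SUBNET-ID"}})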

2.2.7 Roadmap
Currently no further activities are planned for the next sprints. Input from services on requirements for the LB will, however, be accounted for where possible.

2.2.8 Conclusions and Future work


The LBaaS from OpenStack offers the basic functionality needed by most services in MCN. Task 3.1 has evaluated and tested the available LBaaS solution. No immediate further developments are planned.

2.3 Intra Datacentre Connectivity


The following sub-sections describe the components which enable the Cloud Service Provider (CSP) to establish Intra Data Centre (DC) connectivity. The work presented here builds upon and leverages existing technologies and was delivered out of Task 3.1.

2.3.1 Definition and Scope


MCN considers the CSP network to be OpenFlow based, in other words a network based on OpenFlow Switches (OFSs) that interconnects multiple servers, which in turn are also OpenFlow enabled (e.g. via Open vSwitch).
The fundamental assumption in a CSP domain is the existence of an IaaS (and NaaS) layer, e.g. OpenStack, that is used as a basis for deploying services. From the networking perspective, the basic building block is the tenant-based virtual network that is mapped to the physical infrastructure in a CSP domain. Furthermore, the support of QoS and chaining functionalities in the network is a must for MCN services, and to a broader extent for the great majority of virtual network functions (VNFs) as defined in (ETSI 2013).
This section details the current network implementation of a CSP from an intra-DC perspective, taking into account the above-mentioned factors.

2.3.2 High-level design


Herein we briefly recapitulate the CSP network architecture for one DC presented in (D3.1 2013). The architecture depicted in Figure 3 is composed of the following main components:
• Frontend – component that exposes network connectivity services to external entities and allows the provisioning and management of those services. This component is the entry point to the Network Management System. It is important to note that the way services are expressed is closely related to the Infrastructure Template Graph (ITG) definition, which can be found in (D3.1 2013).
• Network Management System – framework that ensures the life-cycle of all network connectivity services within the CSP's domain. The Network Management System is part of the CSP's Cloud Management System, the system responsible for managing the entire CSP structure, in order to allow a consistent configuration and re-configuration of the entire CSP environment.
• OpenFlow Control Adaptor – component responsible for the translation between technology-independent commands sent from the Network Management System and the technology-dependent commands expected at the northbound interface of the OpenFlow Controller (OFC).
• OpenFlow Controller – component able to modify the behaviour of the networking resources via the OpenFlow protocol. The control of the CSP's network relies on this element and is independent of the specific network topology.
• OpenFlow Enabled Resources – a set of resources that support the OpenFlow protocol. These resources must be OpenFlow switches (whether hardware or software).

Figure 3 Cloud Service Provider - Network Architecture

2.3.3 Low-level design


In (D3.1 2013), candidate software tools were identified for each network architecture component. Although the choices were clear from the beginning for some components (OpenStack Neutron), they were not for others (OpenFlow Control Adaptor and OpenFlow Controller). This section presents and clarifies the software choices adopted in MCN for each network architecture component of the CSP, with a special focus on the OpenFlow/SDN Controller.



For the Network Frontend and the Network Management System, the OpenStack Neutron API and OpenStack Neutron ended up being the obvious choices. Note that the use of OCCI as (another) Network Frontend is not excluded; however, its support is currently not a main target.
On the other hand, the choice of the OpenFlow Controller (and implicitly of the OpenFlow Control Adaptor) was not obvious at the time. Trema, Ryu and Floodlight were tested and identified as the main candidates for the Controller role (D3.1 2013). In the meanwhile, however, the OpenDaylight project (Linux Foundation 2013) gained considerable momentum and its first official release appeared (February 2014). One can say that the momentum around OpenDaylight today allows it to be regarded as the "OpenStack of SDN controllers". In this sense, MCN evaluated the possibility of adopting OpenDaylight. This evaluation led to the definitive choice of OpenDaylight as the main SDN/OpenFlow Controller within the MCN project. The Virtualization Edition of the OpenDaylight Hydrogen release (see Figure 4) already includes basic functionality in support of the OpenStack integration through a Neutron plugin. The MCN plan is to rely on this architecture, extending the OpenDaylight internal components and APIs, as well as the OpenDaylight Neutron plugin (see Table 7 for the detailed list of software modules), in support of the new MCN features in terms of QoS and monitoring. Potential extensions for Service Function Chaining (SFC) will also be investigated during the next periods.

Figure 4 OpenDaylight Hydrogen – Virtualization Edition Architecture (Linux Foundation 2013)


In summary, the CSP's intra-DC connectivity relies on a set of open source software packages further detailed in the following section. Figure 5 provides an overview of the architecture of the current running prototype. OpenStack Neutron is the platform of the CSP that exposes (Neutron API) and manages (Neutron) the available network services and interacts with the remaining platform components (e.g. OpenStack Nova for compute resources). Since Neutron's main objective is not to directly enforce network services in the infrastructure, it relies on plug-ins to interact with resources directly or with other middleware platforms, in our case with OpenDaylight (Neutron ODL Plug-in). In other words, Neutron exposes and handles service requests, but at a certain point it forwards them to OpenDaylight to effectively enforce them. To be able to handle Neutron service requests, OpenDaylight has an application (Neutron Application – OVSDB) and a corresponding API (OpenDaylight API) for the integration with Neutron, which abstracts Neutron from the complexity of managing the network infrastructure. Within OpenDaylight, there is a component named Service Abstraction Layer (SAL) that exposes device services to applications and other modules irrespective of the underlying protocol used to communicate with the devices. The OpenDaylight Neutron application relies on this component to interact with the devices.

[Figure content: Neutron API → Neutron → Neutron ODL Plug-in → OpenDaylight Neutron API → Neutron Application (OVSDB) → Service Abstraction Layer (SAL) → OVSDB/OpenFlow → OpenFlow Devices]
Figure 5 Architecture Overview of the running prototype
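The request flow described above (Neutron API → ODL plug-in → OpenDaylight → Open vSwitch) can be illustrated with a small sketch: a tenant network is created through the Neutron API and then looked up on OpenDaylight's Neutron northbound interface. The ODL URL path, port and admin/admin credentials are assumptions based on Hydrogen defaults, not a verified MCN configuration.

import requests
from neutronclient.v2_0 import client

neutron = client.Client(username="mcn", password="secret", tenant_name="mcn",
                        auth_url="http://keystone.example.org:5000/v2.0")

# 1. Create a tenant network through Neutron; the Neutron ODL plug-in forwards
#    the request to OpenDaylight, which enforces it on the Open vSwitch instances.
net = neutron.create_network({"network": {"name": "sic-net"}})["network"]
neutron.create_subnet({"subnet": {"network_id": net["id"],
                                  "ip_version": 4,
                                  "cidr": "10.10.0.0/24"}})

# 2. Check that the network is visible on OpenDaylight's Neutron northbound API
#    (URL path and credentials are assumed Hydrogen defaults).
resp = requests.get(
    "http://odl-controller.example.org:8080/controller/nb/v2/neutron/networks",
    auth=("admin", "admin"))
print(resp.json())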

2.3.4 Documentation of the code


Table 6 shows the available documentation of the CSP’s Intra-DC connectivity.
Table 6 Intra-DC network documentation
Sub-Component | Reference | Documentation
OpenStack Neutron API | http://docs.openstack.org/api/openstack-network/2.0/content/ | Contains the API documentation of the OpenStack Networking service (Neutron).
OpenStack Neutron | https://wiki.openstack.org/wiki/Neutron | Contains documentation of the OpenStack Networking service (Neutron).
OpenDaylight Plugin for OpenStack Networking | https://blueprints.launchpad.net/neutron/+spec/ml2-opendaylight-mechanism-driver | Contains documentation of the OpenStack Networking plugin for OpenDaylight.
OpenDayLight | https://wiki.opendaylight.org ; https://wiki.opendaylight.org/view/OpenDaylight_Controller:Neutron_Interface ; https://wiki.opendaylight.org/view/OVSDB_Integration:Design | Contains documentation of OpenDaylight (and its integration with OpenStack).
Open vSwitch | http://openvswitch.org/support/ | Contains documentation of Open vSwitch.

2.3.5 Third parties and open source software


Table 7 shows the third-party software packages:
Table 7 Intra-DC connectivity dependencies
Name | Description | Reference | License
OpenStack Neutron API | Network Management Frontend | https://wiki.openstack.org/wiki/Neutron | Apache 2.0
OpenStack Neutron | Network Management System | https://wiki.openstack.org/wiki/Neutron | Apache 2.0
OpenDaylight Plugin for OpenStack Networking | OpenFlow Control Adaptor | https://blueprints.launchpad.net/neutron/+spec/ml2-opendaylight-mechanism-driver | Apache 2.0
OpenDayLight | OpenFlow Controller | http://www.opendaylight.org/ | Eclipse Public License v1.0 (EPL)
Open vSwitch | OpenFlow enabled resources | http://openvswitch.org/ | Apache 2.0

Note that official support for OpenDaylight in OpenStack was only introduced with the first OpenStack Icehouse release (17 April 2014).



2.3.6 Installation, Configuration, Instantiation
The complete installation and configuration of the setup presented above can be accomplished by following the guidelines provided in (Linux Foundation 2014).

2.3.7 Roadmap
The following sprints, as defined in (TNET 2014), will focus on the following topics with respect to intra-DC connectivity:
• Support monitoring functionalities for network services
  o Provide network monitoring (as described in (D3.1 2013 p. 1)) from OpenDaylight – the main goal is to contribute to the OpenDaylight project.
• Provide QoS features for network services
  o Support of QoS on OpenStack Neutron (e.g. dedicated bandwidth between two virtual machines) – the main goal is to foster and contribute to the official support.
  o Support of QoS on the OpenDaylight Neutron application – the main goal is to foster and contribute to the official support.
• Definition of models for SFC
  o Support of chaining on OpenStack Neutron – the main goal is to foster and contribute to the official support.
  o Support of chaining on the OpenDaylight Neutron application – the main goal is to foster and contribute to the official support.
The progress of these sprints is expected to be reported in the upcoming deliverable, in month 27.

2.3.8 Research works and algorithms


This section presents initial work on two main networking gaps within OpenStack that are of extreme
importance for a solid infrastructure foundation in the MCN and NFV context, namely support of QoS
in network services and service chaining.

2.3.8.1 QoS support for CSP network services


In the latest official release of OpenStack (Icehouse), QoS parameters for network resources are not yet
implemented and integrated. A preliminary draft for modelling QoS within Neutron framework can be
found at (Neutron 2014).
This draft proposal is based on the specification of a new Neutron resource (qos) that describe a QoS
parameter. In the current version the following types of QoS parameters are supported:
 Differentiated services code point (DSCP), for marking packets with a DSCP mark
 Rate-limit, to throttle a port or all ports attached to a network based on a bandwidth value
The new QoS resource can be applied to a network (meaning that the traffic related to all hosts belonging
to that network will be managed according to the selected policy) or to an interface (meaning that only
the traffic on that interface will be managed according to the selected policy).
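Purely for illustration (the field names below are assumptions and may well differ from the actual draft API in (Neutron 2014)), such a rate-limit policy applied to a network could be expressed as the body of a request to the Neutron API:

# Hypothetical request body for the draft qos resource; field names are illustrative only.
qos_request = {
    "qos": {
        "name": "bronze-rate-limit",
        "type": "ratelimit",              # alternatively "dscp"
        "policy": {"kbps": 2048},         # throttle all ports attached to the network
        "network_id": "<network-uuid>"    # or a port identifier to limit a single interface
    }
}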



This solution is limited and many points are still left open, for example how to manage parameters like latency or jitter (related to the traffic between two VMs), how to deal with packet classification and marking, etc. However, this modelling is a good starting point that can be extended to include further
parameters and specifications.
In particular, we propose to extend the solution mentioned above by defining a more structured QoS
resource in Neutron (OS::Neutron::qos) that identifies a group of QoS parameters
(OS::Neutron::qos_param) and the related traffic classifiers (OS::Neutron::classifier). The latter is an
optional object that can be applied to limit the enforcement of a given QoS parameter to a selected subset
of traffic.
In some cases, the classifier parameter can identify the relationship with a given port resource (i.e. a
network interface): this is useful for parameters like delay or jitter that may be referred only to a specific
pair of VMs' ports. However, these types of parameters can also simply be applied to a network resource: in this case they will refer to all possible pairs of ports attached to the network itself.
Table 8 shows the possible types of QoS parameters that can be defined.
Table 8 QoS parameters
QoS parameter type | Description | Applicable to resource | Notes
Rate-limit | Maximum bandwidth of traffic allowed when traffic shaping is applied. | Port, Network | Applicable to both routers' and hosts' ports.
DSCP | DSCP value for marking packets | Port, Network (meaningful only in combination with classifier) | Applicable to both routers' and hosts' ports.
Minimum Reserved Bandwidth | Minimum value of bandwidth that is reserved for that traffic flow. | Port, Network |
Delay | Maximum acceptable latency (one direction) between two hosts. | Port (requires a classifier to indicate the destination port), Network |
Jitter | Maximum acceptable jitter between two hosts. | Port (requires a classifier to indicate the destination port), Network |

It should be noted that while the rate-limit and DSCP parameters correspond to real configurations that can be directly translated to the configuration of the physical devices, this is not valid for parameters like
delay and jitter. However, their indication on the user side can lead to other types of resource allocation
or configuration decisions at the orchestration level (e.g. VMs to be interconnected with low delay can



be allocated in the same server, or in the same DC; low values of jitter can be achieved through proper
configuration of queues and traffic priorities in the DC routers, etc.). The minimum reserved bandwidth
may be supported through specific DC router configuration, or dedicated network technologies (e.g.
sub-wavelength optical technologies) and, in case of inter-DC interconnections, through Bandwidth on
Demand service.
An extract of a service template that specifies network QoS parameters following the proposed model
is shown in section 2.8.8.
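As a minimal sketch of the proposed model (property names are illustrative assumptions; the resource type names are those introduced above and the full template extract is given in section 2.8.8), a QoS resource grouping a delay parameter with its classifier and a rate limit could be structured as follows:

# Sketch of the proposed Neutron QoS model expressed as a Python structure;
# the property names are assumptions made for illustration.
qos_resource = {
    "type": "OS::Neutron::qos",
    "properties": {
        "qos_params": [
            {"type": "OS::Neutron::qos_param",
             "parameter": "delay", "value": "10ms",
             # delay/jitter require a classifier pointing at the destination port
             "classifier": {"type": "OS::Neutron::classifier",
                            "port_id": "<destination-port-uuid>"}},
            {"type": "OS::Neutron::qos_param",
             "parameter": "rate-limit", "value": "5Mbit/s"}
        ]
    }
}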

2.3.8.2 CSP support for Service Function Chaining (SFC)


The term SFC is today widely used within the NFV context (also referred to as the Virtual Network Function (VNF) Forwarding Graph within the ETSI NFV ISG). It is considered a key functionality in this context and is therefore expected to become available in OpenStack. In this sense, MCN, as a reference project in the NFV scope, is collaborating on the definition of a chaining model proposal for OpenStack Neutron.
This proposal has already been presented and discussed with some of the OpenStack Neutron core
developers. The proposal is currently being refined and its submission as an official blueprint or
integration in another blueprint is under evaluation.

2.3.9 Conclusions and Future work


The work described above provides details on the initial network support for MCN services. This stage comprised a careful and lengthy assessment of the most suitable (and prominent) SDN controller. Also, special focus was given to the networking gaps identified in OpenStack, namely
in terms of QoS and SFC support.
The next sprints will continue to foster and contribute to the closing of the identified network gaps.

2.4 Re-Direction as a Service


The following sections describe the work on the Re-Direction service, which enables features such as network scaling and the Follow-Me Cloud in the context of SDN/OpenFlow. This work has been
contributed via Task 3.1.

2.4.1 Definition and Scope


The basic goal of the service is to correctly forward traffic to an appropriate resource by exploiting the
network programmability offered by SDN/OpenFlow Controllers for a given programmable network
operational environment. The concept will be materialized by a special control application (Re-Direction) running within an SDN environment that sets up the necessary state within the forwarding plane. The running state, as well as any update for changing the forwarding behaviour, should be configurable by exposing a North Bound Interface (NBI) for the service. Primarily, the service focuses on two use cases: network scaling (e.g. offloading) of the EPC components (within a datacentre) and Follow-Me Cloud (across datacentres).

2.4.2 High Level Design


The following sections outline the high level design description for the Re-Direction service that is
defined within the context of the SDN/OpenFlow, OpenStack and overall MCN architecture.



The generic SDN/OpenFlow architecture within OpenStack/Neutron and MCN architecture is shown in
Figure 6. The architecture could be classified into three distinct layers.

Figure 6: MCN Architecture with Generic OpenStack and Neutron Settings (EPC is shown as an example service instance)


The upper layer consists of the overall MCN architecture that has been defined in (D2.2 2013). However,
without loss of generality, an instance of EPCaaS is shown in Figure 6 along with the Service Manager (SM) and Service Orchestrator (SO) that have been coded against the Cloud Controller (CC) SDK defined in (D3.1 2013). The middle layer consists of the "Neutron API" ("OpenStack Neutron" n.d.) that is used for manipulating the tenant-based virtual networks within the OpenStack environment. The bottom layer consists of a generic SDN controller for managing OpenFlow-based network resources. Since Neutron follows a generic plugin approach, an SDN controller can be integrated into the Neutron environment (NEC 2013). The Re-Direction service can be envisioned as a control plane application within the overall architecture of Figure 6.
The Open Networking Foundation (ONF) (ONF 2014a) proposes a generic SDN architecture that can be divided into distinct parts: a southbound interface, a control layer for providing any network service, an application layer, and on top a northbound interface for manipulating the state of the underlying network based on specific application requirements. For example, the southbound OpenFlow
driver may support different versions of the OpenFlow protocol (“OpenFlow Switch Specifications,
version 1.4.0” 2014). It is important to note that the newer OpenFlow wire protocol version quite
significantly modifies the OpenFlow packet processing pipeline for supporting advanced flow level
control functionality. However, different controller frameworks such as Trema, Ryu, Floodlight, among others (lately also the OpenDaylight developments, the NEC VTN model, etc.), that have been outlined in (D3.1 2013) could be used for developing and testing the network control applications for supporting a specific application type. Normally, the above-mentioned
frameworks offer a generic set of applications (e.g. topology discovery, L2 bridge, statistics etc.) for
discovering network nodes and setting up the necessary connectivity between the end points. However,
for supporting a specific service, e.g. the gateways of the EPC (Evolved Packet Core), i.e. the serving and PDN gateways (3GPP 2013), the generic SDN controller needs to be augmented with specific control applications. For instance, the VMs need to
support the GTP tunnels between the endpoints that could be implemented in the VM or in a physical
switch. Such configurations need to be passed from the orchestrator to the underlying infrastructure and
should be supported both at the virtual and physical infrastructure level.
Similarly, for network resource scaling, the SDN controller needs to expose the resource usage levels, e.g. OpenFlow statistics, of the underlying OpenFlow-based infrastructure. Once the resource demands of the current service instance increase, resources need to be added to the infrastructure for the given service. The resource constraint needs to be signalled from the OpenFlow-based network to the monitoring service as defined in (D3.1 2013). The orchestrator may pull this information from the monitoring service to enforce a specific policy for network scaling. However, once additional resources are added, the control application needs to set up the appropriate OpenFlow rules for correctly forwarding the traffic to the newly added resources. This functionality needs to be supported within the OpenStack
and orchestration architecture all along the networking stack.
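A minimal sketch of such a scaling loop is given below; the monitoring, orchestrator and controller clients are hypothetical placeholders for the respective MCN interfaces, not existing APIs.

# Illustrative scaling loop: poll the monitoring service and, when a threshold
# is crossed, add a resource and install redirecting flow rules (sketch only).
def scale_out_if_needed(monitoring, orchestrator, controller, threshold=0.8):
    load = monitoring.get_metric("port.utilisation", resource="epc-gw")
    if load > threshold:
        new_instance = orchestrator.add_resource("epc-gw")
        controller.install_flow(match={"dst": "epc-gw-vip"},
                                action={"output": new_instance.port})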

Figure 7: Configurations & Policy Engine with OpenFlow in Provider Domain


For cross-domain or cross-datacentre traffic, the fundamental requirement is that network resources at some pre-defined granularity are visible to the orchestrator entity. However, for such scenarios the orchestrator entity should also be aware of how the resources between geographically distributed sites, or Points of Presence (PoPs), are arranged. That opens up quite a number of details of the underlying transport network, where network resources should be configurable on the fly for enabling SDN-WAN (Wide Area Network) functionality. From the Task 3.1 perspective, it is assumed that a business relationship exists between the Cloud Service Provider and the Network Service Provider. Figure 7 shows a simplified
architecture of customer sites (CE = Customer Edge), the provider domain (PE = Provider Edge, PCE
= Path Computation Element) and the possibility of introducing the “OpenFlow Controller clients”



(OFC) (ONF 2014b) on top of the virtualized infrastructure. However, this contribution offers flexibility for controlling traffic between different sites based on SDN principles that span within and across the network provider domain.

Figure 8: Disaster Recovery Traffic across different paths


Figure 8 outlines a deployment possibility where Cloud Service Provider resources are spread across
different geographical sites. However, the topological visibility that the network provider offers to its clients (the CSP) depends on the business relationship between the CSP and the network provider. For instance, it is assumed that site-4 in Figure 8 is experiencing an overload situation and certain services have to be shifted to site-3 to fulfil the SLA (Service Level Agreement) requirements. In such cases, traffic from site-4 has to be replicated to site-3. The corresponding traffic flows can be routed
along different paths in the provider domain based on OpenFlow controller functionality. However, the
required flow semantics along with the necessary network configurations based on the underlying
technology offered by the network provider need to be conveyed by the global orchestrator entity.
Further, given the fine-granular flow-level control over the traffic, inter-site traffic could be classified as follows:
 Based on the resource location
 Based on Traffic class
 Based on traffic type e.g. disaster recovery, backup, replication etc.
 Based on path characteristics e.g. load, available bandwidth etc.
 Based on resource optimizations within or across sites/datacenters e.g. load balancing
 On the fly provisioning of resources across domains
Further, for any specific network configuration within network elements, there is ongoing work on the ONF OpenFlow Configuration protocol (OF-Config, e.g. version 1.2) (ONF 2014c) for enforcing certain configurations
within the OpenFlow switches. Such resource configurations may include tunnel end points, parameters



for queues associated with specific switch ports (physical/logical), metering objects, event framework
etc. Most importantly, the configuration protocol supports the OpenFlow wire protocol for enabling the
SDN based network management and operations.

2.4.3 Low Level Design


Figure 9 shows the low level design details of the SDN/OpenFlow controller. The lower layer consists
of the generic OpenFlow driver that supports a specific protocol 1.x version that is compatible with the
OpenFlow resources.

Figure 9: General SDN with Control plane service


The middle layer consists of a number of components; any controller implementation provides the necessary event/messaging layer that offers developers the interfaces for communicating with applications that are running under the SDN controller. For Re-Direction as a Service, the necessary generic components/applications are shown along with their dependency relationships. The basic building block is the network topology component: whenever a new network resource is added to the network, this component discovers the new node and updates the current topological graph instance. Similarly, the statistics module needs the data plane view of the various points of interest in the physical infrastructure. These could be ports, flows, aggregations of statistics along the set of switches in a path, etc. The points of interest are derived from the topology application/module. Similarly, there is a WAN application that needs to control the edge as well as the required configuration that should be made for the customer edges in the data centre site. The overall service control plane can be divided into two parts: the generic control plane is shown on the left side of Figure 9 and the service-specific part is shown on the right-hand side. The important distinction to make here is that specific services need specific control plane functionality. The upper layer consists of a set of API calls that can be extended for exposing the control plane state and for updating the state of the data plane. Eventually, the potential clients for consuming such state are next-generation SDN-based OSS/BSS systems.
Figure 10 shows the FMC diagram for a generic SDN/OpenFlow controller that has already been described above. Given that Re-Direction should be a generic service instance, the possible entities



involved in the system are split into generic and service-specific control applications for a specific MCN service. The OpenFlow 1.x driver runs at the bottom and the control services can make requests to the underlying OpenFlow resources.

Figure 10: FMC Diagram for a generic SDN Controller


The fundamental building block of the system consists of the topology discovery service (application) that determines how the underlying OpenFlow resources are connected in the system (Trema 2014). Similarly, the nodes can be further classified on the basis of type. For instance, the controller needs to distinguish the OpenFlow edge that regulates the traffic in and out of the datacenter site. Once the interconnection is known to the controller, the statistics module can make use of the resource usage levels associated with each network element (e.g. ports, queues, etc.).
The QoS module could be used for applying any service specific constraint on the underlying topology.
The Re-Direction control service module takes into account the required information from the existing
applications for redirecting traffic based on traffic offload for EPC components and Follow Me Cloud
use cases.



Figure 11: Service Template Graphs with SDN/OpenFlow
Figure 11 shows the low level design details for supporting the two use cases of Traffic Offload and
Follow-Me-Cloud within the functionality of the re-direction as a service. The overall service graph
topology is shown at top in Figure 11. It is assumed that some optimization functionality within the
Service Orchestrator (or SDN application etc.) entity splits the graph into a number of segments that
should be interconnected together for delivering correct service behaviour. The Networking component
should have the necessary configuration that determines how these segments should be interconnected.
Further, two flows are shown in Figure 11: the green flow shows the Network Scaling (offloading) use case, whereas the red flow shows the Follow-Me Cloud use case, where initially traffic needs to be directed to the edge and from there to the selected datacentre site location.
Primarily, the underlying resources are OpenFlow enabled; therefore any modification to the traffic forwarding rules is based on OpenFlow Flow Modification messages. For configurations, the OF-Config protocol could be used. However, the "when" and the "type" of the flow modifications requested from the controller application depend on the specific service implementation. Further, additional functionality is needed in the forwarding plane, such as supporting the GTP tunnel endpoints. The controller should therefore have an abstract mechanism for exposing this functionality to any client that may use REST or any other interface for modifying the forwarding behaviour, offering at least the following operations:
 CRUD operations for managing tunnel end points, e.g. GTP tunnels and the necessary encapsulation and de-capsulation
 Functions for forwarding traffic to ports that need GTP functionality
 Operations for managing forwarding traffic rules in the OpenFlow edge switch
 Re-direction operations for directing tenant traffic to a specific edge within the datacenter e.g.
based on tenant (data path identifier, flow identifier, encap/decap, priority, etc.)



 Re-direction operation for forwarding traffic from the edge switch to the next data center with
necessary encapsulation etc.
Primarily, these abstract function calls would modify the state of the Re-Direction control application running inside a specific controller environment. For that, an information model is needed that outlines the necessary state maintained by the control application. This needs to be further refined to outline the details of the APIs.
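Purely as a first sketch of the operations listed above (method names and parameters are assumptions that will have to be refined together with the information model), the north-bound interface could be structured as follows:

# Sketch only: abstract north-bound operations of the Re-Direction service.
from abc import ABC, abstractmethod

class RedirectionNBI(ABC):

    @abstractmethod
    def create_tunnel_endpoint(self, tenant_id, teid, encapsulation="gtp"):
        """CRUD entry point for managing tunnel end points (e.g. GTP)."""

    @abstractmethod
    def set_edge_rule(self, datapath_id, match, action, priority=100):
        """Manage forwarding rules in the OpenFlow edge switch."""

    @abstractmethod
    def redirect_tenant(self, tenant_id, target_edge, flow_id=None):
        """Direct tenant traffic to a specific edge within the datacentre."""

    @abstractmethod
    def redirect_to_remote_dc(self, flow_id, remote_dc, encapsulation=None):
        """Forward traffic from the edge switch to the next data centre."""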

2.4.4 Documentation of code


For code documentation, please refer to Table 9, which contains references to components that could potentially be used for the outlined service. However, this is still work in progress; therefore the list as well as the references may be updated further.
Table 9 Re-Direction service documentation
Component | Reference | Documentation
OpenStack Neutron | https://wiki.openstack.org/wiki/Neutron | Includes the details of Neutron, extensions, API details etc.
OpenDaylight | https://wiki.opendaylight.org/view/Main_Page | Main wiki page for OpenDaylight activities, including OpenStack etc.
NOX | http://www.noxrepo.org/ | Documentation for NOX with installation, usage etc.
OpenFlow Specifications | https://www.opennetworking.org/images/stories/downloads/sdn-resources/onf-specifications/openflow/openflow-spec-v1.4.0.pdf | Contains protocol level details of messages between the controller and network resources
OpenFlow resources | Software (http://openvswitch.org/); hardware (dependent on vendor) | Contains details for configuration, installation, supported OpenFlow specification version etc.

2.4.5 Third parties and open source software


The documentation of the various components is available online and is referenced in Table 10. However, this is work in progress; therefore the list as well as the references may be updated further.
Table 10 Re-Direction service dependencies
Component | Reference | Documentation
OpenStack Neutron | https://wiki.openstack.org/wiki/Neutron | Includes the details of Neutron, extensions, API details etc.
OpenDaylight | https://wiki.opendaylight.org/view/Main_Page | Main wiki page for OpenDaylight activities, including OpenStack etc.
NOX | http://www.noxrepo.org/ | Documentation for NOX with installation, usage etc.

2.4.6 Installation, Configuration, Instantiation


Each component mentioned under "Third parties and open source software" provides its own installation and configuration instructions as well as the required dependencies for using the desired component.

2.4.7 Roadmap
Primarily, the service should support the two use cases already outlined in subsection 2.4.1. For the coming months, up to M27, the sprints will therefore focus on:
 Traffic offloading using the Re-direction service and EPC
 Implementation of the Follow-Me Cloud.

2.4.8 Research Work & algorithms


The control plane integration for the OpenFlow controller could be explored in the service provider
domain (Azodolmolky S et al. 2011). Certain resource optimizations within and across data centre sites
could be explored along with possible interactions with the service orchestrator entity.

2.4.9 Conclusions and Future work


From the networking perspective, it is assumed that the orchestrator entity maps the disjoint segments within or across the data center sites. A possible future work direction is to automate the complete operation of network provisioning, configuration, interconnection etc., and later an autonomous way of performing network-based optimizations for traffic management across data center sites for a cloud service provider.
Similarly, the role of OpenFlow controller along with the control plane integration could be explored in
the provider domain (Azodolmolky S et al. 2011).

2.5 Performance
The following sections describe work done on the topic of Performance in the Infrastructure
Foundations. The work described has been carried out by Task 3.2.

2.5.1 Definition and Scope


Since the completion and delivery of D3.1, Task 3.2 has been concentrating on supporting Service
Owners as their respective Services are developed and as the specific performance requirements become
more obvious. In (D3.1 2013) we outlined how we performed a series of preliminary performance tests
for atomic services to help understand and develop the infrastructural foundations.
In order for Task 3.2 to be able to do performance optimisations, it is a hard requirement that all Service
Owners begin to think about how they will map known performance profiles of hardware based services
to virtualised instances of such services. Since the writing of D3.1, we have been investigating various
ways to approach the issue of performance and how we can help Service owners by developing a
common performance testing strategy that places the least amount of overhead on Service Owners. In
the sections that follow, we will outline the approach we have agreed to take, the suggested workflow
and the tools we recommend to use.



A comprehensive testing methodology was outlined and presented to Service Owners in order to support
the planning, execution and analysis of performance testing of both individual SICs (unit testing), as
well as Composite Services, made up of core and support SICs (integration / end-to-end testing).
To describe the workflow, firstly we revisit the research we documented in D3.1 on the USE method
and explain how this methodology can form an integral part of the process. We then look at how we can
use tools such as Jenkins, an open source continuous integration tool, to automate performance testing.
In the research section, we document the progress made by Task 3.5 in performance testing the
OpenAirInterface and describe how this work has been used to validate the testing strategy we have
devised.

2.5.2 High-level design


A high-level overview of the SIC / Composite Service testing methodology can be viewed in the diagram
in Figure 12.



[Figure 12 summarises the workflow in six stages: identification of performance metrics and QoS of SICs and the Composite Service through (I) unit testing and (II) integration testing, based on block diagrams of the SIC-to-atomic-service edges and of the STG and on service-relevant metrics selected from the USE table; (III) determination of tests for the identified parameters, covering edge cases and resource blocks; (IV) generation and coding of the tests using available tools (dstat, Zabbix, nmon, vmstat, Ceilometer, profilers); (V) execution of the tests, run periodically on slave VMs configured for automation (Jenkins, cron); and (VI) processing of the test data (scripts, Logstash, nmon2rrd), graphing (spreadsheets, Kibana) and analytics to identify when SIC or inter-service QoS has been breached.]
Figure 12 Performance testing: Overall workflow


This workflow can be described as follows:
Firstly, the identification of performance metrics and QoS requirements of SICs and each MCN Composite Service has to be conducted. At this stage, this can be broken down into two subtasks:
I. Unit testing
II. Integration testing of services within a STG
Once this is done, work can progress on to determining the tests to be conducted.



III. Based on the metrics to be tested for in step 1, both edge cases and typical SIC usage cases
should be identified. The latter can refer to technical documentation and component
specifications of traditional telecoms infrastructure.
IV. Having identified the test case scenarios, tests could be written programmatically to target the
virtualised services using their APIs. Alternative test strategies to execute the test case scenarios
could involve using profilers inherent to the SIC framework, as is the case with OAI. This will
give performance metrics as to the services themselves. Additional monitoring tools should be
used to log infrastructural performance, i.e. VM CPU, RAM, network metrics.
V. Once the tests are written, be it coded in a programming language or as a script, Service Owners are encouraged to use automation tools to facilitate quantitative test runs, regularly spaced over time. This will allow real-world performance analysis to be conducted, indicative
of the variable load cloud infrastructure can be subject to over time. A more detailed workflow
of the facilitation can be seen in Figure 12. Task 3.2 strongly encourages the adoption of tools
such as Jenkins for the facilitation of this functionality.
VI. Once the tests are conducted, this step is concerned with processing the output generated by the
metric recording tools, in preparation for further analysis.
a. Bash scripts can be employed, automated with Jenkins jobs in order to process the raw
output data of the tools in step 5, grouping it for example by day or week. Alternative
tools include Logstash.
b. Once processed, the data can be graphed. Graphing tools include spreadsheet applications, nmon2rrd for nmon, Kibana or the graphing functions inherent to the monitoring tools, where applicable (e.g. Zabbix).
c. The data can optionally be run through an analytics engine, where rules can be set up to programmatically identify e.g. QoS breaches from the gathered data. Spreadsheet applications and Kibana are examples of tools offering this functionality; a minimal example of such a rule is sketched after this list.
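As a minimal example of such a rule, assuming the monitoring output has already been processed into per-interval CPU utilisation samples and an arbitrary 80% limit, a breach check could be as simple as:

# Flag intervals in which a hypothetical 80% CPU utilisation limit was breached.
def find_qos_breaches(samples, limit=80.0):
    """samples: iterable of (timestamp, cpu_percent) tuples."""
    return [(ts, cpu) for ts, cpu in samples if cpu > limit]

breaches = find_qos_breaches([("2014-04-01T10:00", 72.5),
                              ("2014-04-01T10:01", 91.2)])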
To date, MCN has progressed sufficiently to begin performance testing individual SICs, with integration
and end-to-end testing to follow once unit testing of SICs has been conducted.
The first step of the performance testing methodology as outlined by Task 3.2 involves identifying what
atomic resources each SIC will consume. The edges connecting each SIC to the infrastructure can then
be mapped to a USE method block diagram, facilitating the identification and extraction of metrics and
latency limits to be measured for.
This principle can be further developed when integration testing is due to commence, by taking the STG
and identifying inter-SIC latency limits, for example between EPC and IMS. This extends the
expectations SICs have of the atomic services, i.e. infrastructure.
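As a simple illustration of such a mapping (the resources and example metrics below are generic assumptions, not a prescribed list; the service-specific metrics come from Table 11), the edges of the block diagram can be turned into a per-resource USE checklist:

# Example USE-method checklist derived from the SIC-to-infrastructure edges.
use_checklist = {
    "cpu":     {"utilization": "busy %",     "saturation": "run-queue length", "errors": "n/a"},
    "memory":  {"utilization": "used/total", "saturation": "swap/paging rate", "errors": "OOM events"},
    "network": {"utilization": "throughput", "saturation": "dropped packets",  "errors": "rx/tx errors"},
    "disk":    {"utilization": "busy %",     "saturation": "I/O queue length", "errors": "I/O errors"},
}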

2.5.2.1 High-level design of the automation tool chosen – Jenkins


Jenkins’ intended use is as a Continuous Integration server - it downloads code from a repository,
resolves dependencies, builds the code, tests it, and then deploys it.
While it is normally used for building and deploying software, it can easily be used for more interesting
purposes. Jenkins’ modular and flexible nature makes it suitable for a wide variety of use cases outside
of compiling and testing code.



Jenkins has the advantage of centralization, per-run logging, IM and/or email notification, time trending,
and providing a self-contained workspace. It can be used to boost productivity by automating repetitive
tasks using a consistent and easy to use GUI, providing an audit trail of each run, as well as access to
the output of the run.
Jenkins is suitable for tasks that can easily be scripted, are highly repetitive, and produce data that becomes outdated quickly or needs to be collected regularly over prolonged periods of time.
This makes Jenkins perfect for the purposes of providing a common platform for the running of
performance tests.
A working Jenkins master instance has been provided by CloudSigma for use by all MCN partners in order to facilitate performance testing and, of course, software development.

2.5.3 Low-level design


SICs in the context of a Composite Service can be thought of as separate units of functionality and, as such, are suitable for the adoption of unit testing strategies.
For the purpose of performance testing SICs, Service Owners have been asked as a first step to consider
the resource usage requirements of their respective Service. To do this, it can be helpful to try and determine the known edge cases and different classes of 'resource blocks', and hence workload scenarios and auto-scaling triggers, in legacy infrastructure. For example, users loading the RAN beyond a given threshold, resulting in a carrier increase.
It is essential at this stage to determine the latency limits and throughput requirements of SICs, as these
can be seen as a fundamental pre-requisite for capacity planning, auto scaling strategies, inter-service
QoS and for establishing SLA guarantees. Where appropriate, technical documentation and component
specifications of traditional telecoms infrastructure should be used to identify these QoS requirements.
Using the SIC diagram in Figure 13, a Service Owner, in this case RANaaS, creates a block diagram
illustrating the interactions the Service has with the atomic services.

Figure 13 STG and ITG mapping


By focusing on the edges connecting the Service with the atomic services as can be seen in Figure 14,
and identifying the appropriate metrics to be measured by referencing the mappings in Table 11, the
Service Owners are then able to clarify the QoS requirements of their respective service, and can begin
thinking about suitable tools / approaches to extract these metrics.



Figure 14 Mapping SIC to IaaS
Table 11 Metrics - Resources list

Having identified the test case scenarios, tests could be written programmatically to target the virtualised
services using their APIs. Alternative test strategies to execute the test case scenarios could involve
using profilers inherent to the SIC framework, as is the case with OAI. This will give performance
metrics as to the services themselves. Additional monitoring tools should be used to log infrastructural
performance, i.e. VM CPU, RAM, network metrics.
An example of a Jenkins Job to run performance profiling on OAI using nmon to monitor system load is:
cd ~
# Start nmon data capture: 30 samples at 3-second intervals, written to a timestamped file
nmon -F ./results/nmon_rslt/dl_stats_$(date '+%Y%m%d_%H%M%S').nmon -s 3 -c 30
cd /home/openair/openair4G/openair1/SIMULATION/LTE_PHY
# Run the OAI downlink simulator/profiler and store its output in a timestamped file
./dlsim -P -a -D -B25 -m16 -n3000 -s30 > \
    /home/openair/results/oai/phy_dl_5_m16_s30_$(date '+%Y%m%d_%H%M%S')

This starts nmon, a VM performance monitoring tool, followed by dlsim, a performance profiler for OAI. The combination of both tools allows both the performance of the VM and the performance of OAI itself to be monitored and logged.



2.5.4 Documentation of the code
Table 12 shows the documentation of the components used in MCN.
Table 12 Performance documentation
Sub-Component | Reference | Documentation
Jenkins Distributed Builds guide | https://wiki.jenkins-ci.org/display/JENKINS/Distributed+builds | Contains a thorough guide on using Jenkins in a distributed set-up
Performance testing in MCN | http://git.mobile-cloud-networking.eu/performance/service-testing/ | Contains documentation on how to set up Jenkins for performance testing.

2.5.5 Third parties and open source software


Table 13 shows the used third-party software packages:
Table 13 Performance dependencies

Name | Description | Reference | License
Development
Jenkins | CI server | http://jenkins-ci.org | MIT

2.5.6 Installation, Configuration, Instantiation


The installation of Jenkins will not be covered here, but rather, the steps necessary to install a Jenkins
slave client on a VM, thus enabling Service Owners to control automation on their VMs remotely via a
central GUI interface.
Firstly, Java must be installed on the VM to be turned into a Jenkins Slave. On a Debian machine, this is achieved in four steps. In a terminal window, type the following:
 sudo apt-get install python-software-properties
 sudo add-apt-repository ppa:webupd8team/java
 sudo apt-get update
 sudo apt-get install oracle-java7-installer
Secondly, a directory needs to be created where the Jenkins slave is placed. Typically, this is done in /var/jenkins. The folder where the Jenkins slave will be deployed needs its ownership changed to match the SSH user that will be connecting via Jenkins.
To accomplish this, the following commands should be run;



 sudo mkdir /var/jenkins
 sudo chown -R <<username used in Jenkins SSH session>> /var/jenkins
Now that the VM is ready to accept a Jenkins slave, configuration can proceed on the Jenkins GUI. To
configure a new Slave Node, go to the Manage Jenkins option and choose Manage Nodes. The steps are
then:
1. Click New Node.
2. Select Dumb Slave and give it a name (symbolic, doesn't need to be the domain name).
3. Click OK and proceed to the configuration page.
4. Fill in the # of executors with the number of simultaneous processes you want to run on that node.
Usually, you do not want to allocate more than the number of allocated CPU cores. Set it to 1.
5. Fill in the Remote FS Root with the full path to where you want Jenkins to store its working files.
/var/jenkins/ is typically a good choice.
6. Optionally add Labels if you are using those in your system (not necessary in many cases, but your
Projects may require nodes to have certain labels to be in the pool of executors for specific jobs).
7. Put the fully-qualified domain name in the Host field under Launch Method, leaving it set at Launch
slave agents on Unix machines via SSH.
8. When the new node has been configured, click ‘save’ and Jenkins will proceed to install the Jenkins
slave client on the VM.
Jenkins Slave Nodes are Java client processes that connect back to a master Jenkins instance over the
Java Network Launch Protocol (JNLP).
A Slave instance can be used to run tasks from a Master Jenkins instance on one or more remote
machines providing an easy to use and flexible distributed system which can lend itself to a wide variety
of tasks.
Within MCN, in the context of performance testing, we propose Jenkins be used as a central hub where
test ‘Jobs’ are set up and used to automate the benchmarking of virtualised services on cloud VMs of
different parameters. A single test can be run in parallel on numerous machines automatically.

2.5.7 Roadmap
The steps outlined in the Performance Testing methodology diagram have been transformed to JIRA
‘swimlanes’ within a Task 3.2 project. This offers Service Owners a project management framework that provides progress visibility to all partners.
The most relevant upcoming topics are summarized as follows:

 Seeing as very few service owners have conducted performance testing of their SICs, these are
still marked as being outstanding issues and are due to be completed in upcoming sprints.
 Performance optimisation work will follow once performance testing of SICs and Composite
Services has been conducted.

More details on the roadmap of Task 3.2 can be found in (TPERF 2014).



2.5.8 Research works and algorithms
The RANaaS Service Owners adopted the proposed performance testing methodology to a large extent and have come up with the following results.
For the development of RANaaS, and moving the eNodeB functionality to the cloud in particular, Task 3.5 has conducted experiments on the performance of RAN components when deployed on GPP infrastructure provided by Task 3.2 partners. The goal was to establish boundaries on the variation of the processing time taken by the RAN components, which can provide valuable feedback to the design process of a cloud-based eNodeB.
In order to deliver close-to-reality values, the OpenAirInterface (OAI) emulator of the LTE radio access was used. The emulator offers a realistic representation of the radio and network stack of the eNodeB as well as the UE to allow for experimentation. OAI additionally offers profiling tools that gather information on the processing time taken by each functionality at the PHY and higher layers. Here, results of the profiling at the PHY level will be given.
So far, initial control tests were run manually; for test automation, Jenkins was provided by Task 3.2 to allow the periodic remote running of hardware monitoring tools and the OAI profiling tools. The intention is to collect multi-day measurements spanning working days as well as
weekends in order to provide feedback to Task 3.5 on the variation of the processing time for the eNodeB
functionality.
Target: Establish the impact of shared physical infrastructure on the computational performance of
virtual machines, on which RAN components are running. The final goal is to determine the degree of
resulting fluctuations in the processing time for the eNodeB software components.
Approach: Conduct control tests with RAN components running on VMs, with different VM configurations and at different time instances. The eNodeB components are realized in software by the
OpenAirInterface emulator. Real processing at the PHY layer is used.
Although in a realistic system the UE functionality will not run in the cloud but on the users' devices, we have used the opportunity to gather information not only on the eNodeB performance but also on the UE performance. Since our purpose is to establish the fluctuation in processing time, these tests serve the
purpose perfectly. More details on the findings have been published in an internal report.

2.5.9 Conclusions and Future work


Test automation, both in the running of performance tests as well as in the processing of monitoring data and the subsequent analytics, becomes increasingly important as the integration of multiple SICs' monitoring data and its inherent interdependence increases the complexity of extracting analytics from such a Composite Service.
Integration testing may thus prove to be easier to execute once MaaS / AaaS are better developed. During
the development of the performance testing methodology, Task 3.2 has identified key areas of focus and
potential problem areas which should be considered and developed upon during the development of the
Monitoring and Analytics services.
The OAI emulator represents realistically and in great detail the processing taking place within an eNB
component, including processing of the PHY layer. Although no real radio channel has been tested, the
OAI platform can be used to gather valuable information on the actual performance of an eNB on a GPP
platform, such as the ones offered by public clouds. By tuning the radio parameters of the emulator and



changing the parameters of the VM infrastructure, the RAN research group can gain valuable insight
into the virtualised RAN challenges. Experimentation with actual eNB equipment as provided by Orange
will not benefit such profiling much, even if being 'real world', since the hardware/software is proprietary
and no software transition to VMs is possible.

2.6 Monitoring-as-a-Service
The following sub-sections describe the components which enable the Monitoring capabilities. The work presented here represents new developments and was delivered by Task 3.3.

2.6.1 Definition and Scope


This subsection describes the first version of software components delivered for a MCN testbed
environment and first integrations out of Task 3.3. A first milestone implementation of Monitoring-as-
a-Service (MaaS) is described according to the specification of (D3.1 2013). The two monitoring life
cycle phases of Deployment and Provisioning have been covered as part of M18.
In particular, the first basic set of functionalities required for working Service Manager (SM), Service Orchestrator (SO) and Cloud Controller (CC)/SDK features has been implemented.
The integration of Ceilometer [Ceilometer] into Zabbix [Zabbix], first conceptually and later using a
prototypical approach has been covered. The definition and implementation of MaaS reference points
has also been outlined.
A test-VM containing a monitoring system for MCN partners and other MCN Services has been realized
and exposed. An abstract monitoring agent definition and examples have been provided to other MCN
service owners.

2.6.2 High-level design


This section outlines the high level design of the two MCN life cycle phases deployment and
provisioning, which have been addressed.

2.6.2.1 Architectural Overview of MaaS and other MCN Services


MaaS as a supporting service is used by other services for retrieving monitoring metrics from the system.
In order to be integrated and connected, MaaS should be deployed first in the sequence of services.



Figure 15 End-2-End MCN Service Deployment with MaaS
Figure 15 depicts the overall provisioning of MaaS following the hierarchical model consisting of different levels of MCN Services. In the example, the top-level MCN Service requests any MCN Service and Monitoring. The presented sequence diagram is split into two parts: MaaS and any MCN Service. First, MaaS is deployed, returning an endpoint, which is then forwarded to other MCN services during provisioning.

Figure 16 Zoom in - Deployment of MaaS


Figure 16 depicts the deployment phase of MaaS. An external trigger to the SM requests the deployment of the MaaS-SO at the CC. The CC deploys and provisions the MaaS-SO given a predefined service template. Once up and running, the MaaS-SO signals a YAML (YAML Ain't Markup Language) template to the CC for the MaaS deployment. After successful instantiation, the MaaS endpoint is signalled back to the initial requestor. This endpoint identifies MaaS and enables other MCN Services to access MaaS.



Figure 17 Zoom-in - MCN Service Deployment with MaaS
Figure 17 depicts the second deployment phase, in which the MCN Service is deployed. The provisioning of the MCN Service will include the MaaS endpoint provided after the successful MaaS deployment.

2.6.2.2 Interfaces, reference points and APIs


The different types of interfaces exposed by MaaS are shown in Figure 18.

Figure 18 MaaS interfaces


Three main types of interfaces have been defined:
• MaaS configuration interface: the interface used to deploy and provision the MaaS through the
interaction with the MaaS SO, SM and the CC.
• MaaS Northbound operational interface: the interface used at runtime by the MCN services that
need to collect monitoring information.



• MaaS Southbound operational interface: the interface used at runtime to collect the monitoring
information from the agents associated with the different services to be monitored. At the moment, an
agent towards Ceilometer has been defined to collect information about IaaS, as described in section
2.6.2.3.

2.6.2.3 Ceilometer to Zabbix Integration


The goal of providing Monitoring-as-a-Service (MaaS) in heterogeneous cloud infrastructures led to the integration of the Ceilometer monitoring system (Ceilometer 2013), which stands as the default monitoring system within the OpenStack cloud framework, with the Zabbix distributed monitoring solution used in the MobileCloud Networking MaaS.
The proposed mechanism for this integration relies on the communication of a newly developed module,
the Zabbix-Ceilometer Proxy (ZCP), which takes advantage of already existing information and events
generated by OpenStack Modules. Figure 19 depicts the overall architecture of the created module and
its relationship with both Monitoring and Cloud related modules.

Figure 19 FMC diagram of the ZCP architecture


In order to provide a seamless consolidation between the cloud and monitoring environments, the ZCP initiates its process by acquiring the necessary authorisation token from OpenStack's Keystone, and is then able to correctly access and communicate with the remaining cloud modules. The high-level life-cycle is depicted in Figure 20, showing the described authorisation request and the further subscription to cloud-related infrastructure change events. One possible change in the
cloud infrastructure is the addition of a new tenant (change issued by Keystone), as exemplified in the
same figure. The RabbitMQ module from OpenStack is the entity responsible for publishing all these
changes. The ZCP will process these changes and then proceed to create the required host-groups and
hosts through the Zabbix API. This allows us to correctly match all the existing monitoring information,
collected by Ceilometer, to the respective hosts created in Zabbix. Prior to the host’s creation, a periodic
request for metrics is issued towards Ceilometer and mirrored in the MaaS default solution, Zabbix.
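A condensed sketch of this flow is shown below, assuming the Keystone v2.0 and Ceilometer v2 REST APIs; the URLs are deployment-specific placeholders, and the RabbitMQ event subscription as well as the Zabbix host creation are omitted for brevity.

# Condensed ZCP-style flow: obtain a Keystone token, then query Ceilometer for
# the samples of one instance (to be mirrored into Zabbix, see section 2.6.3.2.3).
import requests

def keystone_token(auth_url, tenant, user, password):
    body = {"auth": {"tenantName": tenant,
                     "passwordCredentials": {"username": user, "password": password}}}
    return requests.post(auth_url + "/v2.0/tokens",
                         json=body).json()["access"]["token"]["id"]

def cpu_util_samples(ceilometer_url, token, resource_id):
    query = {"q.field": "resource_id", "q.op": "eq", "q.value": resource_id}
    return requests.get(ceilometer_url + "/v2/meters/cpu_util", params=query,
                        headers={"X-Auth-Token": token}).json()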



Figure 20 ZCP life-cycle
Further details about the Zabbix-Ceilometer Proxy module are presented in the low-level description,
including the class diagram along with all the required schemas and a list of the considered OpenStack-
specific resources/metrics.

2.6.3 Low-level design


This section outlines implementation and software design specific information on realizing the
deployment, provisioning as well as the connectivity among related instances.

2.6.3.1 Ceilometer to Zabbix Integration Implementation Details


The previously presented high-level architecture was important to lay the grounds of a transparent and
seamless integration of OpenStack’s Ceilometer with Zabbix. This approach does not require any
changes or updates to Ceilometer, nor to Zabbix. However, the mapping of the existing information
between the two entities must be considered. In order to do so, the existing Ceilometer resources are
mapped into Zabbix’s Host Groups, Hosts, Applications and Items. This was achieved by creating a



tailored Zabbix schema for Ceilometer. The organisation of generic Zabbix schema is presented in
Figure 21.

Figure 21 Hierarchical Zabbix Schema


This hierarchical schema organization supported by Zabbix provides Host Groups, which are used for the access control of hosts assigned to different user groups, similarly to tenants. The notion of Hosts corresponds to the devices to be monitored, such as servers, workstations, switches and others. Since many metrics or resources are likely to be monitored, each belonging to certain groups, they can be grouped into logical classes named Applications. These metrics are known in Zabbix as Items.
The specific mapping that was defined for ZCP is defined by the Zabbix-Ceilometer Schema (ZCS) and
is depicted in Figure 22, matching the corresponding Ceilometer notation with Zabbix’s notation for
each component of the architecture.

Figure 22 The Zabbix-Ceilometer Schema used by ZCP


The Host Group entity is mapped to existing tenants in Ceilometer, while each existing instance is
mapped onto a Host. Following what was previously explained, Applications are used to aggregate
meters of the same type, now known as Items, such as meters related to the CPU utilization of each
instance.
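In code, this mapping boils down to a simple translation table, sketched here for one instance (the meter names are examples):

# Sketch of the ZCS translation applied by ZCP for a single Ceilometer sample.
zcs_mapping = {
    "tenant_id":   "Zabbix Host Group",   # one host group per OpenStack tenant
    "resource_id": "Zabbix Host",         # one host per instance
    "meter group": "Zabbix Application",  # e.g. all cpu-related meters
    "meter":       "Zabbix Item"          # e.g. cpu_util
}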
As previously mentioned, the integration of the Zabbix-Ceilometer Proxy both with OpenStack and
Zabbix involves different modules with distinct purposes. Figure 23 represents the implemented
interactions with these modules. Referring back to the ZCP life cycle, it is possible to see that the
necessary methods for keeping all the information up to date have been defined.



Figure 23 ZCP UML class diagram
Ceilometer supports many different metrics and is responsible for monitoring several components of OpenStack's infrastructure. All this information varies in complexity and purpose. Currently, the proposed resources to be supported by ZCP, regarding OpenStack's Networking (information from the SDN controller), are detailed in the documentation of the source code. Metrics that are marked for possible support may be added later but are not considered relevant at the moment.
The metrics proposed are to be retrieved from Ceilometer and forwarded by the Zabbix-Ceilometer
Proxy towards Zabbix and associated with the respective Hosts. Since the actual instance monitoring
was considered a priority compared with OpenStack's internal modules, the current implementation of ZCP supports only instance-related metrics. Nonetheless, all the other resources can easily be added as well.



2.6.3.2 Monitoring Agent at MaaS registration
The monitoring concepts used in MaaS support MCN Service owners in defining and developing their individual monitoring agents for extracting specific metrics.

2.6.3.2.1 Connectivity of the VM instance with Zabbix


Installation of the agent
To create the monitoring connection between the VM instance and the Zabbix server, the agent needs to be installed either through the package management system or built from source. The agent is either started automatically after installation (e.g. Ubuntu) or has to be started manually (see 'Start/Stop'). As soon as it is started, the agent will try to register with the configured server (see 'Auto-register with Server and Active Monitoring') and/or wait for a polling request from the configured server (see 'Discovery by Server and Passive Monitoring').
Auto-register with Server and Active Monitoring
The preferred method of connectivity is via auto-registration. In this scenario, the server is listening on port 10051 (default) for incoming agent registration requests. If such a request is received, a predefined auto-register action will be executed, which may include adding the host to the monitoring process, linking it into host-groups and so on.
When using active auto-registration and agent pushing, the agent establishes a TCP connection to the
server and requests a list of items to be measured. The server responds with a list of items. The agent
processes the list and starts to periodically collect the data. The connection is closed after that. On
publish, the agent establishes the TCP connection again and sends a list of corresponding values. The
server processes the values, saves them into the database and sends the processing status back to the
agent. The connection is closed right after.
For this method, every agent must know where the server is located (network IP) and an auto-registration
action has to be defined on the server side.
Discovery by Server and Passive Monitoring
Another way to get the monitoring information is via the listening agent. In this scenario, the agent is listening on port 10050 (default) for incoming server requests. For the requests to take place, the node has to be discovered. Those discoveries are defined IP-range scans, configurable in the server interface. On discovery, a predefined action will be executed, similar to the one used in 'Auto-register with Server and Active Monitoring'.
When using passive polling, the server establishes a TCP connection to the agent and sends a list of
requested item values. The agent responds with those values (if available). The server processes the data
and saves it into the database. The connection is closed immediately after.
For this method, the server must know in what IP-range(s) the agent-nodes are placed and the discovery
feature has to be configured on the server side.

2.6.3.2.2 Instantiation and configuration


Start/Stop
To start or stop the Zabbix agent, the corresponding init scripts should be used. With Ubuntu’s upstart
init-daemon, it is placed under /etc/init.d/zabbix-agent/. It supports start/stop/restart. To manually start



the agent from the command line, run zabbix_agentd. See man zabbix_agentd for more information on startup
parameters.
Once started, the agent will proceed to run as stated in 'Auto-register with Server and Active Monitoring' or 'Discovery by Server and Passive Monitoring', depending on the configuration.
Configuration
The following changes in the default agent configuration file (found at /etc/zabbix/zabbix_agentd.conf)
are mandatory:
"Server=" address should be changed from localhost to the wanted server address(es) for passive polling.
This can be a list of comma-separated addresses. Only these servers are allowed to do passive metering
with the configured agent.
"ActiveServer=" address should be changed from localhost to the wanted server address(es) for active
pushing. This can be a list of comma separated addresses. Only these servers are allowed to do active
metering with the configured agent.
"Hostname=" should be commented out so the agent can register itself using the native hostname of the
VM. A custom name could be defined here as well. Most of the time you want the automatic hostname
extraction to take care of this.
Note that the agent has to be restarted for changes to take place.
Custom user parameters can be configured in a plain text file in "/etc/zabbix/zabbix_agent.d/". They will be usable after the next agent restart and can be used as keys in server-side defined items.

2.6.3.2.3 Data extraction


Frontend
Data extraction through the PHP frontend works rather intuitively. The simplest way would be using the "Graphs" or "Latest Data" section. However, it is also possible to define one's own, more complex graphs, maps and screens. This way of extraction is more of a WYSIWYG approach, so it is not really usable for automation, but it is nice to have for a simplified overview.
JSON-API
The Zabbix API is a web-based API shipped as part of the web frontend. The frontend supports the
use of remote HTTP requests to call the API. It is based on JSON-RPC, so all commands sent have to
be JSON encoded. This way of data extraction and modification is suited for use in third-party
software.
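As an illustration, the following minimal Python sketch logs in to the JSON-RPC API and lists the monitored hosts; the URL and the credentials are placeholders for the actual MaaS deployment.

import json
import urllib.request

API_URL = "http://zabbix.example.org/zabbix/api_jsonrpc.php"

def call(method, params, auth=None, req_id=1):
    payload = {"jsonrpc": "2.0", "method": method, "params": params,
               "auth": auth, "id": req_id}
    req = urllib.request.Request(API_URL, data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json-rpc"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

# Log in once, then list the monitored hosts.
token = call("user.login", {"user": "Admin", "password": "zabbix"})
for host in call("host.get", {"output": ["host"]}, auth=token):
    print(host["host"])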
Notifications
There are a lot of different notification types (e-mail, SMS, instant messenger or custom alert scripts) that can be
triggered by defined events and conditions. This ranges from mails/SMS to locally/remotely executed
scripts and SSH execution.

2.6.4 Documentation of the code


In order to correctly support the code developed for the Common Monitoring Management System a
GIT repository was created to maintain sources and documentation.
https://git.mobile-cloud-networking.eu/monitoring/MaaS



The documentation was structured in such a way that all developers can easily generate the
corresponding documentation of their code. For this, pydoc and Sphinx were used for the code
developed in Python, while Javadoc was used for the Java programming language.
Table 14 shows where the source code can be found and how documentation can be accessed:
Table 14 MaaS documentation

Sub-Component | Reference | Documentation
MaaS | https://git.mobile-cloud-networking.eu/MaaS | See README.md file.

2.6.5 Third parties and open source software


Table 15 shows the used third-party software packages:
Table 15 MaaS dependencies

Name | Description | Reference | License
Development
Urllib2 | URL library | http://urllib3.readthedocs.org/ | MIT
ConfigParser | Configuration Parser | http://docs.python.org/py3k/library/configparser.html | MIT
SocketIO | Socket package | https://github.com/invisibleroads/socketIO-client | MIT
Threading2 | Threading library | http://github.com/rfk/threading2 | MIT
Pika | RabbitMQ library | https://pika.readthedocs.org | MPL v1.1 and GPL v2.0 or newer

The Common Monitoring Management System (CMMS) is based on Zabbix, an open-source
monitoring software released under the GPL license. For the ZCP the software libraries listed in Table 15 are required.

2.6.6 Installation, Configuration, Instantiation


Please find all installation and configuration information for the source code under the following git
repository:



https://git.mobile-cloud-networking.eu/monitoring/MaaS

2.6.7 Roadmap
All tasks covered in Task 3.3 have been transformed into JIRA ‘issues’ in order to follow the SCRUM
methodology as closely as possible (TMAAS 2014). A project entitled TMAAS has been instantiated,
which is visible to the whole MCN consortium.
All issues are managed in monthly sprints. Backlogs store future work items for upcoming sprints.
The most relevant upcoming topics are summarized as follows:
• Integration of the monitoring adapters of other reporting MCN services into MaaS, e.g.
EPCaaS, IMSaaS and RANaaS.
• Integration of MaaS with other consuming MCN services requesting monitoring data, such
as AaaS, RCBaaS and SLAaaS.
• Completing the MCN life cycle phases for MaaS.
• Validation and tests of the monitoring system.

2.6.8 Research works and algorithms


This section presents additional research work which has not been presented in previous sections. The
integration of Ceilometer into Zabbix has been discussed in the MCN consortium and two main
approaches have been identified. The final approach has been presented in section 4.3; a second
promising approach has also been designed, which is outlined in the following.

2.6.8.1 Ceilometer into Zabbix Integration Approach


Monitoring and metering are essential parts of providing scalable and reliable services within Mobile
Cloud environments. While many third-party monitoring solutions already exist, OpenStack brings its
own system, Ceilometer. Even though it is highly integrated with its own environment, it still lacks essential
features (e.g. tested and powerful alarms/actions) and differs in some ways from other mainstream
monitoring solutions (in this example: Zabbix). However, some OpenStack based cloud platform
solutions might want to continue with Ceilometer as an OpenStack built-in monitoring solution. The
challenge is to integrate Ceilometer properly into Zabbix, to get the best of both worlds.
This section points out one possible way to achieve a seamless integration of Ceilometer into Zabbix.
In particular, Ceilometer’s REST API is used to access Ceilometer’s data and the Zabbix agent’s user
parameters are used to make it natively usable by the Zabbix server.

2.6.8.1.1 Main Differences in Ceilometer/Zabbix Setup, Regarding Data Gathering


In Ceilometer, the nova-node component is monitored by an agent, which meters the instances via
information provided by the hypervisor in use. Data from multiple sources (agents, API pushes, event
bus) is gathered by the collector and then added to the database. This database can only be accessed
through the API provided by the ceilometer-API server.
In Zabbix, instances and physical nodes are monitored by agents running directly on the node. Server
and agent communicate to exchange the meters to be monitored and the metering data created.
Data from single sources, the agents, gets directly pushed or pulled into the Zabbix server and its
database. Other sources (e.g. SNMP support) are also possible but independent from this approach.



Data can be queried through the API and is accessible via the web frontend.

2.6.8.1.2 Integration Approach


The goal is to integrate data collected by Ceilometer directly into Zabbix; in other words, to provide
compatibility for already implemented custom meters or OpenStack-related data, which is more easily
accessible through Ceilometer than through Zabbix.
This task should be automated so that it is transparent to the metering consumer and well integrated.
The idea is to pipe Ceilometer metrics directly into the Zabbix monitoring process. This is achieved by
defining custom user parameters for the related Zabbix agent, which are in fact request calls to the
Ceilometer API. The received metrics are then pushed/pulled to the Zabbix server just like native
parameters.
These user parameters can be created after deployment, “on the fly”, although this requires a restart
of the agent.
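As an illustration, a script behind such a user parameter could look like the following minimal Python sketch: it obtains a Keystone token, fetches one meter statistic from the Ceilometer v2 API and prints it so that the Zabbix agent can report it as the item value. The endpoints and credentials are placeholders and error handling is omitted.

import json
import sys
import urllib.request

KEYSTONE = "http://controller.example.org:5000/v2.0/tokens"
CEILOMETER = "http://controller.example.org:8777/v2"

def keystone_token(user, password, tenant):
    body = {"auth": {"tenantName": tenant,
                     "passwordCredentials": {"username": user, "password": password}}}
    req = urllib.request.Request(KEYSTONE, data=json.dumps(body).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["access"]["token"]["id"]

def meter_statistic(meter, token, field="avg"):
    req = urllib.request.Request("%s/meters/%s/statistics" % (CEILOMETER, meter),
                                 headers={"X-Auth-Token": token})
    with urllib.request.urlopen(req) as resp:
        stats = json.loads(resp.read())
    return stats[-1][field] if stats else "unsupported"

if __name__ == "__main__":
    tok = keystone_token("admin", "secret", "admin")
    print(meter_statistic(sys.argv[1] if len(sys.argv) > 1 else "cpu_util", tok))

On the agent side, a user parameter along the lines of UserParameter=ceilometer.stat[*],/usr/local/bin/ceilometer_stat.py $1 (path and key name chosen here purely for illustration) would then expose the script’s output under a key that server-side defined items can reference.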

2.6.8.1.3 Benefits of this Solution


This approach is rather straightforward. It only uses the Zabbix and Ceilometer APIs and a script, which
adds the required user parameters on deploy and registers them in Zabbix. Direct measurements on top of
Ceilometer (without passing through Zabbix), for very fast notification of a subscriber, are still
possible.

2.6.8.1.4 Problems Identified


The most obvious problem with this approach is the overhead it creates. Using two monitoring systems
and two databases leads to some degree of overlap in features.
Consistent authentication is also problematic. Both Zabbix and Ceilometer use token-based
authentication. While Ceilometer uses the existing OpenStack Keystone authentication system, Zabbix
brings its own. Access to both systems is needed, so two authentications have to be performed and
the credentials may differ.
A consistent nomenclature is a significant feature of well-integrated systems. Using the same IDs and
names for nodes, tenants and users in both systems is highly recommended. This is not the default
behaviour though and may be hard to achieve consistently.



2.6.8.1.5 Flow Diagrams for the Integration Approach

Figure 24 Flow diagram


Figure 24 shows the general flow:
1. Ceilometer’s collector gathers the metrics (via the compute agent, the event bus, etc.) and publishes
them into the database.
2. The user parameter in the Zabbix agent configuration is in fact a call to the Ceilometer REST
API, requesting the wanted metric. A Keystone token has to be acquired for authentication purposes.
3. The requested metric gets published via the Zabbix agent to the Zabbix server, and is
processed/evaluated there.

2.6.9 Conclusions and Future work


The Monitoring-as-a-Service (MaaS) supporting service provides distributed collection of infrastructure monitoring
data and exposes it towards other MCN services. As part of the first release, a basic prototype
has been realized which follows the MCN service life cycle deployment and provisioning model.
Zabbix (Zabbix 2013) has been selected as a well-established open source monitoring tool, as an
outcome of D3.1 in M12. A generic Monitoring Adapter has been specified and examples have been
provided for other MCN service owners to implement specific monitoring adapters. Two approaches for
integrating Ceilometer into Zabbix have been specified, of which one has been implemented
as part of the first prototype. Future work will further elaborate the first release of MaaS and improve
the life cycle stages for MaaS. Analytics-as-a-Service (AaaS) is expected to be tightly integrated into
MaaS.

2.7 Analytics-as-a-Service
The Analytics service is not part of the M18 prototype. Future work will include the detailed
architecture, a prototype implementation and first algorithms to analyse data. This data (and the corresponding traces) should be
collected from the services which are being cloud-enabled on the MCN architecture in the M18 deliverables. Hence an earlier starting point for the development of the analytics service was not planned. Also
note that the Analytics service was not envisioned during the writing of the (DoW 2012).

2.8 Cloud Controller


The following sub-sections describe the sub-components which make up the Cloud Controller (CC).
Most of the work consists of new developments contributed out of Task 3.4.

2.8.1 Definition and Scope


The CC provides the signalling and management interfaces to enable the control planes. These will be
used by the instances of SM and SO. The Cloud Controller will support the SO's end-to-end provisioning
and life cycle management needs of services in Mobile Cloud Networking. It will provide both atomic
and support services required for realising those SO needs. The main MCN architectural entity that
interacts most with the CC is the SO, which is responsible for service instance creation (including
orchestration).
The CC is a logical entity consisting of multiple sub-components that abstract underlying technology
choices. It is copyright © 2013-2015 by Intel Performance Learning Solutions Ltd, Intel Corporation
and licensed under an Apache 2.0 license.

2.8.2 High-level design


The diagram in Figure 25 shows a high-level overview of the components provided in M18. The SOs
developed by the services are deployed using the Northbound RESTful API (which is based on the
OCCI standards described in (Nyren et al. 2011)). Once the SO instances are running, they make use of
the Service Development Kit (SDK) to interact with the modules of the CC. The information model used to
call the SDK and the sub-modules of the CC is defined by a service template. This template consists of
the Service Template Graph (STG) and Infrastructure Template Graph (ITG) as defined in (D3.1 2013).

Figure 25 High level overview of CC


Three main development activities have been carried out over the last sprints:
 As of M18 the Northbound API is based on the OCCI standard and is implemented as a basic
service interface.



 A SDK which supports basic functionalities for deploying and provisioning services was
realized.
 And for easy development the CC can be automatically set up & provisioned in a set of Virtual
Machines using the Vagrant tool (Vagrant 2014).
Those three main sub-components will be highlighted in the next sections. The focus for this deliverable was to
get a basic CC up and running which supports the deployment and provisioning of basic services.

2.8.3 Low-level design


For each part of the CC some more details are provided in the next sections. The SDK and API are
implemented using the Python programming language. Hence the class diagrams only show the
classes, not necessarily all functions which are not object-oriented. The service templates can
either be presented in AWS CloudFormation (AWS 2013) or Heat Orchestration Template (HOT) (Alex
Henneveld 2013) format.
For interactions between the components please refer to the sequence diagrams described in (D3.1 2013,
p. 102).
All code has been tested; the lowest test coverage of any sub-component is currently 99%. The same
coding standards as those for OpenStack apply: test coverage is mandatory and the coding style is defined and
enforced by tools as defined in (van Rossum et al. 2001).

2.8.3.1 Development environment


The development environment consists of three Virtual Machines (VMs). One VM runs
OpenShift Origin for hosting the SO instances. The other two VMs contain a minimalistic OpenStack
installation with support for IaaS, SDN and basic storage enabled. All three VMs are defined in a
Vagrantfile and are provisioned on the fly using Puppet configuration files. The advantage of this
approach is that the very same software deployment configuration that runs on the developer’s
machine can also be placed on the actual testbeds.

2.8.3.2 Service Development Kit


The following UML diagram in Figure 26 shows the basic structure of the Service Development Kit.
The Deployer is used to deploy services using OpenStack Heat. To abstract from the underlying
technologies an adapter pattern has been chosen. The same principle applies to the Authentication
service; here OpenStack Keystone is used and abstracted from. The services implementation in the SDK
enables access to all services available in the Cloud Controller. It is currently realized through
OpenStack Keystone and represents the Design Module of the CC.



Figure 26 Service Development Kit UML class diagram
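The adapter pattern can be illustrated with the following minimal Python sketch; the class and method names are purely illustrative and do not reflect the actual mcn_cc_sdk API.

# Illustrative sketch of the adapter pattern used by the SDK's Deployer; the
# class and method names are hypothetical, not the actual mcn_cc_sdk API.
from heatclient import client as heat_client

class Deployer(object):
    """Technology-agnostic deployer interface used by SO instances."""
    def deploy(self, template, name):
        raise NotImplementedError

class HeatDeployer(Deployer):
    """Adapter that maps the generic deploy call onto OpenStack Heat."""
    def __init__(self, endpoint, token):
        self.heat = heat_client.Client('1', endpoint, token=token)

    def deploy(self, template, name):
        # Creates a stack from an ITG/HOT template and returns its identifier.
        stack = self.heat.stacks.create(stack_name=name, template=template)
        return stack['stack']['id']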

2.8.3.2.1 Sample SO
To verify the overall architecture and the integration of the SDK with the development environment shown
above, a sample SO was developed (for more details on SOs see section 3.1). It takes a simple service
described in the AWS CloudFormation template language and deploys it using the SDK. The SO itself
is deployed through the Northbound API. The sample SO exposes a simple OCCI-like interface itself,
which can be accessed by the SM. It will be used to trigger the deployment, provisioning and disposal
operations. The UML class diagram in Figure 27 shows the sample SO.

Figure 27 Sample SO UML class diagram (hidden details)
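In the same illustrative spirit, a sample SO skeleton could look like the following minimal Python sketch; the names are hypothetical and the real sample SO in the repository may differ.

# Illustrative skeleton of a sample SO; it exposes methods which the SM-facing
# interface can call to trigger deployment, state retrieval and disposal.
class SampleServiceOrchestrator(object):
    def __init__(self, deployer, template):
        self.deployer = deployer      # e.g. a deployer like the one sketched above
        self.template = template      # AWS CloudFormation / HOT template text
        self.stack_id = None

    def deploy(self):
        self.stack_id = self.deployer.deploy(self.template, name='sample-so')

    def state(self):
        return {'stack_id': self.stack_id}

    def dispose(self):
        # Disposal would delete the stack again; omitted in this sketch.
        self.stack_id = None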


The interface of the sample SO is implemented in the Application class. See the screenshot in Figure
28, which shows the Northbound API of the CC in the larger Firefox window, the SO instance interface in
the smaller Firefox window (which shows the stack id) and the listing of the stack on top of OpenStack in the
lower SSH window (note the stack id from the SO).



Figure 28 Screenshot of integrated SO/SDK & CC
All this integration work has been done on top of the development environment described in section
2.8.6.1. Deployment was done using an emulated SM which uses the Northbound API to
instantiate the sample SO. Once instantiated, only authentication tokens need to be given to the SO through its
own interface. Then deployment & provisioning as well as disposal and retrieval of the state can be
triggered as shown in the screenshot above.

2.8.3.3 Northbound API


The northbound API is implemented using the pyssf OCCI implementation (see section 2.8.5 for
references). Hence the central piece is the implementation of a registry which is passed to pyssf’s OCCI
implementation. The following UML diagram in Figure 29 shows a high-level class diagram.



Figure 29 CC Northbound Interface UML class diagram
Backends for dealing with application types (AppBackend) and their interconnects (ServiceLink) are
set up by the registry and connected to the OCCI core model in the registry. Two classes of the OCCI
core model are the Application and Resource templates.

2.8.4 Documentation of the code


Table 16 shows where the source code can be found and how documentation can be accessed:
Table 16 CC documentation

Sub-Component | Reference | Documentation
Development environment | https://git.mobile-cloud-networking.eu/cloudcontroller/mcn_cc | Documentation can be found in the README.md file.
Service Development Kit | https://git.mobile-cloud-networking.eu/cloudcontroller/mcn_cc_sdk | Documentation can be found in the README.md file. Documentation of the samples and the sample SO can be found in the README.md in the respective sub-directories. API documentation can be created by running “make html” in the doc sub-directory.
Northbound API | https://git.mobile-cloud-networking.eu/cloudcontroller/mcn_cc_api | Detailed documentation can be created by running “make html” in the doc sub-directory.

2.8.5 Third parties and open source software


Table 17 shows the used third-party software packages:



Table 17 CC dependencies

Name | Description | Reference | License
Runtime
OpenStack | IaaS solution | http://www.openstack.org | Apache 2.0
OpenShift Origin | PaaS solution | http://openshift.github.io/ | Apache 2.0
pyssf | Provides an OCCI compatible RESTful interface | http://github.com/tmetsch/pyssf | LGPL
Development
Vagrant | Provides easy to configure, reproducible, and portable development environments | http://vagrantup.com/ | MIT
mox | Mock object framework for Python testing | https://code.google.com/p/pymox/ | Apache 2.0

2.8.6 Installation, Configuration, Instantiation


The following sub-sections deal with the installation, configuration procedures for each sub-component.

2.8.6.1 Development environment


Clone the SCM repository and run
$ git submodule init
$ git submodule update
$ vagrant up

to bring up the development environment Virtual Machines.


More detailed installation instructions can be found in the SCM repository of the Cloud Controller
(see the previous section).

2.8.6.2 Software Development Kit


Not applicable as SO instances will make use of this sub-component directly. It is installed once a SO
instance is created.

2.8.6.3 Northbound API


Configuration can be done in the file etc/defaults.cfg. Once done, the service can be brought up by
running runme.py.



2.8.7 Roadmap
The following sprints up to the next deliverable (in M27) as defined in (TCLOUD 2014) will focus on:
 Supporting runtime operations of large-scale services.
 In depth support and management of interconnected resources which are exposed by Software
Defined Networking principles.
 Integration with other services like the Monitoring and SLA service.
 Advancing the Service Development Kit with more features to make the orchestration of
large-scale services as simple as possible.

2.8.8 Research works and algorithms

2.8.8.1 Description of network Quality of Service in service templates


An extract of a template that specifies network QoS parameters following the proposed model is shown
in the following listing; the new structures are the OS::Neutron::qos, OS::Neutron::qos_param and OS::Neutron::classifier resources.
heat_template_version: 2014-03-25
resources:
  test_network_1:
    type: OS::Neutron::Net
    properties:
      name: test_network_1
      qos_id: { get_resource: qos_1 }
  qos_1:
    type: OS::Neutron::qos
    properties:
      qos_parameter: { get_resource: qos_p1 }
      qos_parameter: { get_resource: qos_p3 }
  qos_2:
    type: OS::Neutron::qos
    properties:
      qos_parameter: { get_resource: qos_p2 }
  qos_p1:
    type: OS::Neutron::qos_param
    properties:
      type: rate_limit
      policy: 1024 kbps
      classifier: { get_resource: classifier_c2 }
  qos_p2:
    type: OS::Neutron::qos_param
    properties:
      type: delay
      policy: 2 ms
      classifier: { get_resource: classifier_c1 }
  qos_p3:
    type: OS::Neutron::qos_param
    properties:
      type: delay
      policy: 4 ms
  classifier_c1:
    type: OS::Neutron::classifier
    properties:
      type: destinationIf
      policy: { get_resource: host2_port }
  classifier_c2:
    type: OS::Neutron::classifier
    properties:
      type: L3_protocol
      policy: udp
  test_subnet_1:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: test_network_1 }
      name: test_subnet_1
      cidr: 5.0.0.0/24
      gateway_ip: 5.0.0.1
      allocation_pools:
        - start: 5.0.0.100
          end: 5.0.0.200
  host1:
    type: OS::Nova::Server
    properties:
      name: host1
      image: cirros-0.3.1-x86_64-uec
      flavor: m1.nano
      networks:
        - port: { get_resource: host1_port }
  host2:
    type: OS::Nova::Server
    properties:
      name: host2
      image: cirros-0.3.1-x86_64-uec
      flavor: m1.nano
      networks:
        - port: { get_resource: host2_port }
  host3:
    type: OS::Nova::Server
    properties:
      name: host3
      image: cirros-0.3.1-x86_64-uec
      flavor: m1.nano
      networks:
        - port: { get_resource: host3_port }
  host1_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: test_network_1 }
      fixed_ips:
        - subnet_id: { get_resource: test_subnet_1 }
      qos_id: { get_resource: qos_2 }
  host2_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: test_network_1 }
      fixed_ips:
        - subnet_id: { get_resource: test_subnet_1 }
  host3_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: test_network_1 }
      fixed_ips:
        - subnet_id: { get_resource: test_subnet_1 }

In this example we have three hosts (host1, host2, host3) that are connected to a single network, each of
them with a network interface (host1_port, host2_port, host3_port). At the network level, we have
defined a QoS resource qos_1, so that the maximum acceptable delay between any ports attached to the
network is 4 ms (see the qos_p3 resource) and the UDP traffic is limited to 1024 kbps on every port (see
the qos_p1 resource and the associated classifier classifier_c2).
On the other hand, since we need a more restrictive value for the maximum delay between host1 and
host2, we have defined an additional QoS resource qos_2, which refers to the QoS parameter resource
qos_p2. Here the classifier classifier_c1 specifies that the QoS requirement needs to be enforced only
for the subset of traffic that is destined to the network interface of host2. Finally, the QoS resource qos_2
is attached to host1_port. The resulting topology is shown in Figure 30.



Figure 30 Example of network topology with QoS parameters
The sample values included in the classifier resources of the previous example are related to a specific
port resource or to the value of a field in the IP packet header (in this case the protocol field). This can
be generalized to any value (or combination) of the fields in L2, L3 and L4 headers.

2.8.9 Conclusions and Future work


The work described above covers the initial steps of enabling basic SOs to deploy and provision
services. With the help of tests and sample implementations of the Service Orchestrator the basic
functionalities could be verified. Considerable effort has also been put into integrating all the
separate components into the overall logical entity of the CC.
The following sprints will focus on enhancing the deployment and provisioning phases
based on input from the services. Next to this, future work will be done on runtime management and
on the management of networks through SDN.

2.9 Service Graph Editor


The Service Graph Editor – or StgEditor – is being developed out of Task 3.4 to help Service
Owners/Developers easily generate templates which are compatible with the
information model the CC uses, as defined in (D3.1 2013).

2.9.1 Definition and Scope


The StgEditor is a desktop tool that enables the graphical editing of MCN Service Template Graphs and
aims at automating the generation of the corresponding Infrastructure Template Graphs. From an
architectural standpoint the StgEditor is part of the SM and is used by the MCN-SP to manipulate and
customize STGs. Nevertheless it is service independent and is for this reason considered a common
component.
The StgEditor provides a GUI for the manipulation of a graph that represents a STG, where each node
and edge can be clicked to inspect and possibly modify its parameters. It eventually builds an ITG
template file to be handled by the SO for the deployment, provisioning and run-time phases. Though it operates
at SM level it is actually a purely infrastructure-related tool.



2.9.2 High-level design
The StgEditor provides a GUI tool for the composition and customization of STGs and the
corresponding ITGs.
The adopted graphical model for the STG is a conventional graph where the nodes represent Service
Instance Components –SICs– and the edges represent interfaces. The StgEditor shows a palette of
template nodes and template edges that are selected by the user and then dragged onto a project
whiteboard to become node and edge instances, as shown in Figure 31.

Figure 31 StgEditor screenshot


The palette appears on the left hand side of the window and is made of a tab for the collection of the
available SIC templates and a tab for the Interface templates. The user selects a template by clicking on
it and can then drag an instance onto the work area on the right. The edge endpoints can be stretched
to connect one node to another. Each node and edge is marked with a name that mnemonically shows
the role of the SIC or interface. The nodes are also distinguished by the icon that is used to graphically
represent them. In the following the term node will be used interchangeably with the SIC it represents,
and the term edge with the related interface.
Each SIC and Interface in the STG project can be double clicked to open configuration pop-ups. The
configuration pop-up is made of a set of tabs where the first one holds some general common
information, including the name of the instance. The second tab holds a number of parameters that can
be edited to customize the instance. The third one holds the definition of the exposed Interface End
Points.
Once the editing of the STG is completed, the user can save the STG project in a file for further editing
at a later time. When the editing is finished, the user can select the Generate ITG template command
from a menu and save the result in a HOT template file.
The means for uploading the ITG template file onto the SO have not yet been defined.



2.9.3 Low-level design
The StgEditor is being implemented as a desktop application written in Java with a few open source
libraries. The architecture is that of a typical Java Swing application, i.e. event driven, where events are
generated by a variety of sources, either related to graphical operations (e.g. drag and drop) or to graph
related operations (e.g. Graph Node Created).
When the StgEditor is launched the palette definition files for SICs and Interfaces are read and decoded.
The SIC templates and the Interface templates are defined in JSON files stored in a configurable
directory. At the end of the start-up procedure the actual palette for STG editing is displayed at the left
hand side of the StgEditor window. At the time this deliverable is edited, two types of definition files are
implemented: one for the STGNode, i.e. a SIC template, and the other for the STGEdge.
Each STGNode template file defines a ClassName to identify the SIC, a textual description, the path to
the file of the icon image, the path to a file containing the definition of the ITG resources associated
with the SIC, and the definition of the endpoints of the interfaces to be terminated by the SIC.
The STGEdge template file is quite similar to the STGNode one. The differences are the lack of an icon file, as
edges are drawn as lines, and the presence of just two endpoints, as each edge always connects up to two
STGNodes.
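As an illustration, a STGNode palette definition could look like the following sketch; the field names and values are assumptions based on the description above, not the exact schema used by the StgEditor.

# Hypothetical STGNode palette definition, serialized as JSON for illustration.
import json

sample_stg_node = {
    "ClassName": "DNSaaS",
    "Description": "Domain Name System service instance component",
    "Icon": "icons/dnsaas.png",
    "ItgResources": "snippets/dnsaas_hot_snippet.yaml",
    "Endpoints": [
        {"name": "mgmt", "protocol": "tcp", "port": 8080}
    ]
}

print(json.dumps(sample_stg_node, indent=2))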
Figure 32 shows how the different components of the model are combined to yield the expected result.
The uppermost level shows two SICs interconnected by an interface. This abstract model is represented
in the StgEditor GUI by means of a graph having one node (StgNode) for each SIC and an edge
(StgEdge) for the interface. Each node and edge is characterized by a number of parameters specific to
each SIC and Interface, and by a snippet of HOT code that describes the relevant ITG components. The
StgEditor user, e.g. a MCN Service Designer, can edit these parameters and assign values to fit the
specific application. Once the STG editing has been completed, the StgEditor uses the configured parameters
to customize the HOT snippets. All the customized snippets are then combined together to produce the
integrated HOT template file that can be passed to the Service Orchestrator and enter its service life
cycle.

Figure 32 StgEditor information flow
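The template generation step can be illustrated with the following minimal Python sketch, in which each node’s HOT snippet is customized with the configured parameters and the results are merged into a single template; the names and placeholders are purely illustrative.

# Sketch of combining parameterized HOT snippets into one ITG template.
from string import Template

def build_itg(nodes):
    """nodes: list of (snippet_text, parameter_dict) pairs from the STG."""
    resources = []
    for snippet, params in nodes:
        resources.append(Template(snippet).safe_substitute(params))
    return "heat_template_version: 2014-03-25\nresources:\n" + "\n".join(resources)

snippet = "  ${name}:\n    type: OS::Nova::Server\n    properties:\n      flavor: ${flavor}\n"
print(build_itg([(snippet, {"name": "dns_sic", "flavor": "m1.small"})]))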

2.9.4 Documentation of the code


Table 18 shows where the source code can be found and how documentation can be accessed:



Table 18 StgEditor documentation

Sub-Component | Reference | Documentation
STG editor framework | https://github.com/jgraph/jgraphx/tree/master/docs | Documentation can be found within the “api” and “manual” directories
STG Editor User Manual | https://git.mobile-cloud-networking.eu/cloudcontroller/stg_editor/UserManual_v0.doc | User Manual for the Stg Editor (living document)
Stg Editor prototype | https://git.mobile-cloud-networking.eu/cloudcontroller/stg_editor/Windows | How to run the Stg Editor prototype

2.9.5 Third parties and open source software


Table 19 shows the third-party dependencies of the StgEditor.
Table 19 StgEditor dependencies
Architecture component | Software Name | Reference | License
STG editor framework | jgraphx | https://github.com/jgraph/jgraphx | BSD
Yaml coding/decoding | SnakeYaml | http://www.snakeyaml.org | Apache 2.0
JSON coding/decoding | Json-lib | http://json-lib.sourceforge.net | Apache 2.0

2.9.6 Installation, Configuration, Instantiation


The StgEditor is provided as a jar file plus a number of jar libraries. A configuration file is available for
customizing the location for the palette definition files, icon images and HOT snippets, and other
possible customizations. Before launching the StgEditor these files should be properly populated.

2.9.7 Roadmap
At M18 a first working prototype will be demonstrated, capable of generating actual HOT templates out
of SIC-based graphs. At M21 the StgEditor is expected to be completed with a number of examples.

2.9.8 Conclusions and Future work


The StgEditor is ongoing work that aims at exploiting GUI techniques to ease the complexities of
MCN service instantiation. Future work will focus on exercising the implementation by coding the
STG nodes and edges needed to represent the MCN services being developed.

2.10 Database-as-a-Service
A Database-as-a-Service offers storage capabilities to SO and SIC instances. This service is built on
existing technologies and delivered out of Task 3.4.



2.10.1 Definition and Scope
SO instances as well as SICs should have access to on-demand storage. This feature is delivered through
a Database-as-a-Service.

2.10.2 High-level design


SO instances can access storage in order to be fault tolerant and highly available themselves. For this the
Northbound API of the CC allows for attaching storage solutions such as MongoDB, PostgreSQL and
MySQL.
SICs can access storage through the Database-as-a-Service offered through OpenStack Trove (Trove
2014). The databases themselves are launched in Virtual Machines and can then be accessed through
their native interfaces. Supported databases include Percona, MySQL, MongoDB, Cassandra, Redis,
CouchDB, Memcache and VoltDB.

2.10.3 Low-level design


For OpenStack Trove’s Design manuals please refer to:
https://wiki.openstack.org/wiki/TroveArchitecture
For OpenShift Origin’s Design manuals please refer to:
http://openshift.github.io/documentation/oo_cartridge_guide.html#mariadb

2.10.4 Documentation of the code


Table 20 shows where the source code can be found and how documentation can be accessed:
Table 20 Database-as-a-Service documentation

Sub-Component | Reference | Documentation
OpenStack Trove | http://docs.openstack.org/developer/trove/ | https://wiki.openstack.org/wiki/Trove
OpenShift Cartridges | https://www.openshift.com/developers/technologies | For example the MongoDB documentation can be found at https://www.openshift.com/developers/mongodb

2.10.5 Third parties and open source software


Table 21 shows the used third-party software packages:
Table 21 Database-as-a-Service dependencies

Name | Description | Reference | License
Runtime
OpenStack | IaaS solution | http://www.openstack.org | Apache 2.0
OpenShift Origin | PaaS solution | http://openshift.github.io/ | Apache 2.0

2.10.6 Installation, Configuration, Instantiation


OpenShift Origin comes with support for databases such as MongoDB and PostgreSQL out of the box.
They only need to be configured in the respective environments. For example, MongoDB has been
configured for the MCN CC development environment and is available out-of-the-box. Instances of the
database can be attached to SO instances using the Northbound API of the CC.
OpenStack Trove is part of the OpenStack cloud stack and is therefore tightly integrated. Through the
Trove API it can readily be consumed as a Database-as-a-Service. When requesting a new database instance
the reference endpoint is returned, including the authentication tokens required to access the data.
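As an illustration, requesting a database instance through the Trove API could look like the following minimal Python sketch using python-troveclient; the endpoint, credentials, flavor id and volume size are placeholders, and the exact client signature may vary between releases.

# Hypothetical values throughout; error handling omitted.
from troveclient.v1 import client

trove = client.Client('admin', 'secret', project_id='mcn',
                      auth_url='http://controller.example.org:5000/v2.0')

instance = trove.instances.create('so-state-db', flavor_id='2',
                                  volume={'size': 1},
                                  databases=[{'name': 'so_state'}])
print(instance.id)  # the reference used to look up the endpoint later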

2.10.7 Roadmap
The only open item is to test the integration of OpenStack Trove with OpenStack Heat to support full
orchestration; this will be completed once OpenStack Icehouse is released.

2.10.8 Conclusions and Future work


This concludes the initial development and configuration of storage solutions for both SO instances and
Service Instance Components. The work on integrating OpenStack Heat with Trove by the OpenStack
community will be closely monitored and tested. No immediate further work is planned.

2.11 Radio Access Network-as-a-Service


The following sections cover the work on the Radio Access Network conducted in Task 3.5.

2.11.1 Definition and Scope


The task consists of constructing the design elements for a system that can be used to manage the Radio Access
Network (RAN) for an organization that specializes in providing on-demand RAN to customers, either
directly to Enterprise End Users (EEU), e.g., a Mobile Network Operator (MNO), or via intermediate
actors, called Mobile Cloud Networking Service Providers (MCNSP) in the MCN terminology. The organization
offers RAN to its customers with a given pricing, in a variety of geographical areas, for a certain duration,
with target traffic loads to be supported, with specific Radio Access Technologies (RAT) and with
certain guarantees specified by a Service Level Agreement (SLA).

2.11.2 High-level design


Figure 33 shows the high-level system functional view for the current release. The current implementation
aims at a basic end-to-end configuration using the MCN framework. For this reason
we chose to implement a scenario with a single stakeholder providing both the Radio Access Network and
the core network. As a result, a single SO instance controls both the LTE RAN and the EPC. Please also note
that the base station implements the S1 interface with the core network and the interface with the traffic
generators. The next RANaaS release will integrate a dedicated RANaaS SM and an emulated base station
implementing the radio layers using the OpenAirInterface open source project.



Figure 33 Architecture reference model for M18 prototype (the Service Orchestrator and Cloud Controller with configuration storage manage the RANaaS instance, which consists of a per-RAT traffic generator, an L3 eNB with Base Band Unit and agent, a DNS and the MME/S-GW; legend: A = Agent, BBU = Base Band Unit, DNS = Domain Name System, eNB = evolved NodeB, L3 eNB = Layer 3 eNB, MME = Mobility Management Entity, RAT = Radio Access Technology, S-GW = Serving Gateway)


To complete the demo for the M18 prototype, OpenEPC’s eNodeB was virtualized and used together with the
OpenEPC solution. Next to this, performance analyses have been conducted using OpenAirInterface.

2.11.3 Low-level design


The current release of the RANaaS focuses mainly around two user stories as also shown in Figure 34:
 As an EPC and RAN Provider I am able to deploy and subsequently provision an end to end
mobile network made up of simplified LTE base stations and an Evolved Packet Core in order
to be able to test end to end connection between traffic generators and IP service end points
(e.g., a web server).
 As an EPC and RAN Provider I am able to inject traffic into a deployed RAN made up of
simplified LTE base stations in order to be able to test end to end connectivity and test
algorithms, e.g., network function placement.



Figure 34 Use-case diagram of RANaaS for M18 (the RAN + EPC Provider designs, deploys, provisions and manages the RAN + EPC and manages the User Generator, supported by a file editor; deployment and provisioning include managing service requests through the Service Orchestrator and authentication through the Cloud Controller)

2.11.3.1 OpenEPC’s eNodeB


The eNodeB which is part of OpenEPC is a key component for the M18 prototype release. The eNodeB
design consists of separate elements with multiple functionalities that have to cooperate within one entity.
Figure 35 depicts a general overview of the construction of the eNodeB emulator and its main
components.
Besides the protocols enabling standard communication with other network elements, the most important
role in the eNodeB emulator model is played by the highest logical block, called "eNodeB". It contains
the majority of the component's logic, which has to fulfil multiple different tasks ensuring proper operation
of the node.



Figure 35 eNodeB architecture (main blocks: enodeb, addressing, S1AP, X2AP, NAS, SCTP, GTP routing, routing_gtpu, routing_raw, mysql and console)


Figure 36 shows the interfaces between the eNodeB, the traffic generator and the OpenEPC instance.

Figure 36 Interfaces between User Generator, eNodeB and SGW (the emulated LTE-Uu interface between the User Generator and the eNodeB emulator carries Application/IP/MAC/Ethernet, while the S1-U interface towards the SGW carries the user traffic encapsulated in GTP-U over UDP/IP/MAC/Ethernet)


With this setup it is possible to inject traffic through a User Generator via the eNodeB to the SGW of
the EPC. The User Generator works on a per “virtual” user basis. For each virtual user the following phases
can be identified:
 Attachment
 Traffic generation
 Detachment
The message exchange during all of these procedures is presented in Figure 37. The communication
between the User Generator and the eNodeB is based on simple text commands sent via a regular TCP/IP
connection. The commands are received and processed by an agent listening on the specific
IP address and TCP port configured by the SO.

Figure 37 User generator and eNodeB
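As an illustration, issuing such a text command from a test client could look like the following minimal Python sketch; the command string, address and port are hypothetical, as the exact command set is configured by the SO and not listed here.

import socket

def send_command(host, port, command):
    # Open a plain TCP connection, send one text command and read the reply.
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(command.encode("ascii") + b"\n")
        return sock.recv(4096).decode("ascii", errors="replace")

print(send_command("10.0.0.42", 6000, "attach_users 10"))  # hypothetical command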


In order to provide dynamic configuration management for cloudified network functions, a novel small-size
Element Management System (EMS) Agent was developed, which resides in each instance of a
cloudified network function. To enable dynamic configuration of the specific network function,
the particular EMS Agent controls the instance’s active state. The EMS Agent receives commands from
an external site (the Service Orchestrator), usually over the management network.



The communication between the SO and the eNodeB configuration is realized via a REST based API, as shown
in Figure 38. Each command sent to the REST API corresponds to a script located directly on the network
function’s instance; these will be called service adapter scripts in the following sections.

Figure 38 Configuration management via EMS Agent (the Service Orchestrator sends REST calls to an HTTP server on the instance, which configures the generic OpenEPC L3 eNB binary)


Although these service adapter scripts are individual to each service, they are organized in a
common way. Some of the scripts are related only to configuration information regarding the particular
network function unit. Other scripts are executed whenever dependencies among network functions
exist.
Table 22 API for the eNB Management
Name | Function
preinit | Taking care of basic configuration (like network configuration) related *not* to the service functionality provided through this service instance but especially to the running VM.
install | Installing the component itself, applying options etc.; anything which is not dependent on other service instances or services.
<foreign-service>-relation-joined | Taking care of all things that need to be done to establish a connection between dependent components.
<foreign-service>-relation-departed | Removing the established connection.
start | Starting a service instance.
stop | Stopping a service instance.

Please note that whenever a relation between network function units is established (joined) or dismantled
(departed), the corresponding scripts are executed on each side of the created/deleted relation.
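The EMS Agent idea can be illustrated with the following minimal Python sketch, which maps incoming REST commands onto service adapter scripts; the paths, port and HTTP framing are illustrative and do not reflect the actual agent implementation.

import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical mapping of REST commands to service adapter scripts on the VM.
SCRIPTS = {cmd: "/opt/ems/scripts/%s.sh" % cmd
           for cmd in ("preinit", "install", "start", "stop",
                       "mme-relation-joined", "mme-relation-departed")}

class EmsHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        command = self.path.strip("/")
        if command not in SCRIPTS:
            self.send_error(404, "unknown command")
            return
        # Run the corresponding script and report its outcome to the caller.
        result = subprocess.run([SCRIPTS[command]], capture_output=True)
        self.send_response(200 if result.returncode == 0 else 500)
        self.end_headers()
        self.wfile.write(result.stdout)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), EmsHandler).serve_forever()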

2.11.4 Documentation of the code


Table 23 shows the documentation of the RANaaS.



Table 23 RANaaS documentation

Sub-Component | Reference | Documentation
OpenEPC | http://www.openepc.net | Licensees can retrieve the documentation of OpenEPC.

2.11.5 Third parties and open source software

2.11.5.1 OpenAirInterface
OpenAirInterface (Eurecom 2013) is an open-source hardware/software development platform
developed by Eurecom as an emulator for the LTE RAN (Nikaein 2012). It combines simulation or
emulation of the physical layer with emulation of the MAC and higher layers, but there are also versions
allowing work with actual PHY layer equipment. Currently, Eurecom works on adding an EPC to the
emulator, which shall still be released in 2014.
It provides several emulators with corresponding profiling tools: dlsim and ulsim implement only the
physical layer processing; oaisim implements the complete stack, generating a given number of UEs
and an eNB and providing their IP addresses, enabling traffic to be injected using traffic source generators – such as
OTG, IPERF or D-ITG – that is processed entirely as on real equipment, the processing executing all layers
of the UE and eNB protocol stack.
OAI has been installed on several machines: in particular on machines of CloudSigma, a public cloud provider
of virtual machines (VMs) backed by a shared physical infrastructure, and on a VM at the University
of Bern, where the requirements towards the physical infrastructure are lower and the server has higher
specs. OAI is used to profile, for specific allocated radio resources and traffic/service usage, the
computation resources needed by the various BBU components (PHY cell, UP, CP) to satisfy the
LTE 3GPP requirements in terms of latency.
Several limitations and errors have been identified which do not yet allow the use of all of OAI's stated
capabilities. In particular, it does not support multi-core processing (although it is multi-threaded),
strongly limiting its performance. The operation with traffic sources also presents several errors that
require improvements of OAI. These faults are expected to be solved in the mid-term, in order to run
the intended evaluations and to be able to present final conclusions on the feasibility of eNBs running in
the cloud.
Several improvements have been made to the code, and submitted as contributions to the open source
community, in order to enable the profiling of processing resources.
The OpenAir Scenario Descriptor (OSD) is a configuration dataset composed of four main parts,
which represent the basic description of an Open Air Interface (OAI) emulation platform. It is part of the OAI
emulation methodology for describing scenarios using the XML format. This allows repeatable and
controlled experimentation to be executed, without having to run simulations on the command line and
set parameters manually. As the parameter set was rather limited, more parameters required for
the experimentation work were defined, implemented and contributed back to the OAI community.
The OpenAirInterface Traffic Generator (OTG) is a tool used for the generation of realistic application
traffic for the performance evaluation of emerging networking architectures. It accounts for
conventional traffic but also for the traffic characteristics of applications such as M2M and online
gaming. Hence, OTG is capable of generating mixed human and machine type traffic patterns.
Table 24 shows the used third-party software packages:
Table 24 RANaaS dependencies

Name | Description | Reference | License
Development
OpenEPC | Includes the eNodeB | http://www.openepc.net | Fraunhofer proprietary
OpenAirInterface | OAI | http://www.openairinterface.org | GPL

2.11.6 Installation, Configuration, Instantiation


The following sections briefly describe the configurations needed for the RANaaS prototype of M18.

2.11.6.1 OpenEPC’s eNodeB


Figure 39 describes how the eNodeB service unit is instantiated and configured. It assumes that in the
initial state a VM holding the OpenEPC eNodeB binary, the eNodeB configuration scripts and
additionally the EMS is provided internally.
When the eNodeB service unit is instantiated, the only running component expected is the REST based
API.



Figure 39 eNodeB configuration (the Orchestrator issues the REST commands preinit, install, dns-relation-joined, mme-relation-joined, user-generator-relation-joined and start towards the eNB, which then performs the 3GPP standard S1AP S1 Setup Request/Response signalling with the MME)


An overview of execution order for starting a User Generator instance is shown in Figure 40.

Figure 40 Configuration of User Generator (the Orchestrator issues the REST commands preinit, install, enodeb-relation-joined and start towards the User Generator)


After the User Generator and eNodeB services have been successfully configured, the required relations have been created
between the VMs and the elements are operational, the procedure of attaching multiple virtual users to
the base station can be initiated.
Further information on the installation and usage of the eNB and the user generator can be found in
section 2.11.5.
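As an illustration, the configuration sequences of Figure 39 and Figure 40 could be driven from the SO with plain REST calls along the lines of the following Python sketch; the host names, port and command names follow the EMS Agent sketch above and are not the actual implementation.

import urllib.request

def ems(host, command, port=8080):
    # POST one command to the EMS Agent on the given VM and return the status.
    req = urllib.request.Request("http://%s:%d/%s" % (host, port, command),
                                 data=b"", method="POST")
    return urllib.request.urlopen(req).status

for cmd in ("preinit", "install", "dns-relation-joined",
            "mme-relation-joined", "user-generator-relation-joined", "start"):
    ems("enb.mgmt.example.org", cmd)       # hypothetical eNB management address

for cmd in ("preinit", "install", "enodeb-relation-joined", "start"):
    ems("ugen.mgmt.example.org", cmd)      # hypothetical User Generator address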

2.11.7 Roadmap
As also detailed in (TRAN 2014) the following sprints will focus for M21 on:
 Integrating the MCN framework with a RANaaS Service Manager and Service Orchestrator,



 Implementing the RANaaS service lifecycle using the OpenAirInterface emulator as the base
station
 Further work on traffic generation with Open Air Interface
 Integration of Monitoring
For M27 the sprints will be dedicated to:
 the integration with other MCN support services, e.g., RCB (Rating Charging and Billing), SLA
(Service Level Agreement) and
 Work on the business aspects of the RAN service management.

2.11.8 Research works and algorithms


Task 3.5 is working on multiple research topics in parallel as detailed in the next sections.

2.11.8.1 Performance analysis of eNodeB for porting to the cloud


A range of tests to determine the computational needs of RANaaS, concerning the execution of the RAN
functionality on cloud-based infrastructure, is described in the internal report (Dimitrova 2014). A
central focal point is the requirements set towards the physical infrastructure by the strict
processing delay budget of the RAN. The conducted tests are organized in four categories:
1. Test group A: Tests aiming to determine the dependency of the processing time for the PHY
layer functionality – depends on the configuration (and load) of the radio interface.
2. Test group B: Tests aiming to determine the dependency of the processing time for the MAC
and higher layers functionality – depends on the number of end users.
3. Test group C: Tests aiming to determine the statistical boundaries of the processing time given
execution in a (public) cloud.
4. Test group D: Tests aiming to determine the dependency of the processing time on the
configuration of the hardware platform.
The current OAI profiling of the PHY layer does not support multi-threaded operation and thus did not
allow us to test the impact of the number of cores on the processing of the PHY layer. It is worth noting,
however, that not all PHY operations can be run in parallel, because they are sequential in
nature, i.e., each operation depends on the output of the previous one. Therefore, multiple cores will
only help where the signal processing can be done in parallel. In order to investigate the potential
gains of multi-core platforms we have kept an open communication channel with Eurecom to investigate
the feasibility of making the code multi-threaded, and in parallel we will look into the details of the signal
processing chain to identify which PHY processing steps could be parallelized, if any.
Profiling of the higher layer stack has a bigger chance to benefit from multi-threading, and thus from increasing
the number of cores. This is due to the fact that processing of the higher layers is done in parallel for the
different users and thus allows parallelism of the processing. To test the impact of the number of cores on
the processing of higher layers, the profiling software (and the implementation of the eNB emulation)
should support multi-threading. This is an open issue with the current version of OAI and is currently
under investigation in collaboration with Eurecom in the scope of open source contributions.



Given that part of the eNodeB processing may not qualify for optimization due to the sequential execution
of the functionality, an alternative to improve the performance is to use CPUs with higher speeds.
Although we conducted initial control tests, which gave promising results, the impact of the CPU speed
could not be tested thoroughly due to infrastructure limitations. The physical infrastructure of our
partner CloudSigma is based on CPUs with equal parameters and does not allow the definition of a scenario with
increasing CPU speed. As CloudSigma are in the process of upgrading their infrastructure, we
target a repetition of the tests on new, faster processors. Alternatives to this physical infrastructure are being
looked into as well.
To allow for a more comprehensive evaluation of the processing needs of the RAN components,
monitoring tools were deployed on the machines where OAI is running. The monitoring schedule was
set to periodically trigger execution of the OAI profiling and in parallel run the monitoring tools
collecting information on the CPU, RAM and other hardware resources used. The schedule is set to span
both working days and weekends to also allow the evaluation of fluctuations in the processing times
caused by the sharing of the physical hardware infrastructure.

2.11.8.2 Fronthaul Solutions


For building a fronthaul solution it is mandatory to take into account three interdependent requirement
types: technical aspects, business aspects and regulation constraints. Detailed fronthaul requirements
and available technical solutions are partially described in (D3.1 2013). Based on these requirements,
detailed in the internal report (Pizzinat 2013), four work paths have been identified:
1. Single fiber optical distribution network (ODN): as a first step this could be realized by means
of Coarse Wavelength Division Multiplexing-like bidirectional transceivers. Then, this could
be done on a dedicated ODN or a shared ODN (Fiber to the …).
2. Low cost colorless transceivers. Colorless transceivers could be used to suppress the inventory
issue.
3. Greenfield or brownfield with coexistence: this item refers to NGPON2 scenarios with a
coexistence element that allows NGPON2 coexistence with previous GPON or XGPON
generations on the existing ODN.
4. Without management capability or with PtP Encapsulation Method (PEM) to fit the required
fronthaul requirements and provide O&M: the first option consists of fronthaul in
wavelength overlay over the ODN, the second option consists of using NGPON2 interfaces for
CPRI transport over the NGPON2 frame.

2.11.8.3 Virtualization of Radio Resources


One of the key characteristics of RAN-as-a-Service (RANaaS) is the capability to provide multi-tenancy.
These tenants, which are Virtual Networks, are served elastically, on demand and simultaneously
over the same physical infrastructure according to their Service Level Agreements (SLAs). In addition,
it is desired to share/virtualize the limited radio resources (i.e., spectrum) while providing isolation,
ease of use (i.e., network element abstraction) and multi-RAT (Radio Access Technology) support.
The objective of the virtualisation of radio resources is to realise virtual wireless links;
in other words, to share the limited radio resources (e.g., spectrum) among the EEUs while providing
them RAN instance isolation, ease of use (i.e., network element abstraction) and multi-RAT support.
Virtual Radio Resource Management (VRRM) is a statistical decision
problem, in which decisions have to be taken under uncertainty.
Within the internal report (Kocur 2013) statistical models have been set up, evaluated and first results
achieved.

2.11.8.4 Radio and Cloud Resources Management


C-RAN and RANaaS enable the elastic allocation of radio resources on demand. The objective is to
design a Radio and Cloud Resources Manager (RCRM) that satisfies, on demand, the users of a Mobile
Network Operator (MNO) dynamically requesting services. In order to support a given offered traffic
load (which varies both geographically and temporally), the requested radio and cloud (processing and
storage) resources will be adequately configured, considering the available fibre resources between
RRHs and DCs.
A key aspect is the quantification and profiling of the relationship between the processing needs at the
BBU and the supported radio resource usage. This processing must be decomposed into load-independent
and load-dependent components, which can be scaled according to needs. This may prove
valuable when deciding how to deploy and split the functional components of a BBU on Virtual
Machines (VMs). Various traffic scenarios are to be studied, as described in (Ferreira et al. 2013a),
considering both geographic and temporal variations. Upper and lower processing boundaries will be
established, and analytic expressions to model the system behaviour are being proposed and worked on in
the internal report (Ferreira et al. 2013b).

2.11.9 Conclusions and Future work


This section shows the first step towards the design of a system that offers RAN-as-a-Service on demand,
in a variety of geographical areas and for a specific duration of time, to support target traffic
loads with given RATs, with a given pricing and with specific SLA guarantees. It is supported by a service manager
and service orchestrator that enable a large variety of use cases.
RANaaS is designed to explore the existing RANs in terms of service offer (covered geographical area,
technologies, supported traffic profiles and SLAs). It enables customers to add, delete or modify RAN
items, paying per usage.
A high-level design is proposed, identifying the RANaaS system actors and use cases, such as managing the
RANaaS catalogue, the customer's RAN, and orchestration use cases. They evidence the flexibility and
potential of the RANaaS concept.
An API has been developed enabling the deployment and management of RAN-as-a-Service. In the current state, it is
able to deploy and manage a set of Layer-3 eNBs according to the proposed design. It is based on the
OpenEPC software, and the service orchestrator that controls the eNBs has been developed. As future work,
the RANaaS application will include a dedicated SM and SO with LTE base stations to enable scenarios
with radio constraints. This will enable scenarios where the associated radio resources shall be
dimensioned according to the expected traffic load and the needed radio resources, in order to satisfy
specific SLAs.
A variety of open source software is used within this activity. OpenAirInterface (OAI) is a software-based LTE eNodeB, running on CloudSigma machines. It has been used initially to profile the needed computation resources for various traffic and radio resource usages. It shall be used in a second stage to replace the already developed Layer-3 eNB in order to emulate a realistic eNB, implementing the entire eNB stack. To support it, open source code for traffic generation has been used from OAI, the so-called OTG (OpenAirInterface Traffic Generator), as well as IPERF and D-ITG (Distributed Internet Traffic Generator), which are currently used for specific purposes. The goal is to be able to inject realistic traffic, either from realistic traffic generators or from real end-user equipment running real applications.
Several research topics are addressed around this activity. Using OAI, a study was presented of the variation of the processing time and its dependence on the physical infrastructure. It shows that RAN processing in the cloud should be done with care, as the processing time presents large fluctuations due to the sharing of physical hardware infrastructures, which makes it difficult to give processing guarantees. The research study shall be extended to various scenarios of resource usage, and solutions to these issues will be researched. Limitations of OAI in terms of multi-core support, as well as several needed improvements still to be implemented, prevent final conclusions at the current stage on the viability of supporting well-performing eNBs in the cloud.
The design of a Radio and Cloud Resources Manager (RCRM) is proposed. To support a given offered traffic load (which varies both geographically and temporally), the requested radio and cloud resources shall be adequately configured. Several radio resources can be dynamically allocated, such as the set of active micro- and macro-cell RRHs, the available RATs, and the number of frequency carriers per RRH and the associated bandwidth. On the other hand, the processing and storage needs of BBUs vary dynamically with the load of the associated RRH. Computing resources supporting the instantiation of BBUs may be scaled up or down (increasing or decreasing processing and storage capacity) according to the needs, which requires support for the seamless migration of BBUs from one DC to another. As future work, its implementation shall be carried out and its performance evaluated.
To support the RRH-BBU fronthaul, several solutions are evaluated in terms of technical requirements, costs and technical options on fibre, with four long-term options being presented, covering a single-fibre network, low-cost colourless transceivers, greenfield or brownfield deployment with coexistence, and operation without management capability. An example taken from a deployment in Brittany highlights antenna and central office locations and the interconnecting links. This shall be used as a reference scenario to be evaluated at a further stage.
The concept of virtual radio resources is also proposed, together with a model for the management of these resources. The virtualisation of radio resources solution aggregates and manages all available resources. Virtual Network Operators (VNOs) do not have to deal with physical radio resources at all; instead, they ask for wireless connectivity in the form of capacity per service. The services of the VNOs are provided by the required virtual resources, based on the contract and the defined SLA (Service Level Agreement). The virtualisation approach leads to a more efficient and flexible V-RAN. The details of the proposed model for the management of virtual radio resources have been described, including its relation to the physical resource managers, the estimation of network capacity, and the allocation of data rate to each service of each VNO. A practical heterogeneous cellular network is considered as a case study. The initial numeric results indicate an increase in resource usage efficiency of up to 6%. These results show how the virtual radio resource management allocates capacity to the various services of different VNOs when they have different SLAs and priorities.
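As a purely illustrative companion to the management model summarised above, the following Python sketch allocates an estimated network capacity to the services of several VNOs according to SLA-derived weights. The weights, capacities and the simple proportional allocation rule are assumptions for illustration, not the model or the numeric results of the study.

    # Illustrative sketch of SLA/priority-based capacity allocation among VNOs.
    # Weights, demands and capacities are arbitrary example values.

    def allocate_capacity(total_capacity_mbps, requests):
        """Allocate capacity proportionally to demand weighted by SLA priority.

        requests -- list of (vno, service, demand_mbps, sla_weight) tuples.
        A refined model would serve guaranteed services first and redistribute
        any leftover; here a single weighted proportional pass is used.
        """
        weighted_demand = sum(d * w for (_, _, d, w) in requests)
        allocations = {}
        for vno, service, demand, weight in requests:
            share = total_capacity_mbps * (demand * weight) / weighted_demand
            # A service never gets more than it asked for.
            allocations[(vno, service)] = min(demand, share)
        return allocations

    if __name__ == "__main__":
        reqs = [("VNO-A", "voice", 50, 3.0),        # high-priority SLA
                ("VNO-A", "video", 200, 2.0),
                ("VNO-B", "best-effort", 400, 1.0)]
        for key, mbps in allocate_capacity(500, reqs).items():
            print(key, round(mbps, 1), "Mbps")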



3 Service enablement
The following two sections describe the generic Service Orchestrator (SO) and Service Manager (SM). Although not originally intended to be delivered by WP3, they were developed for integration and testing purposes for M18.

3.1 Generic Service Orchestrator


The SO is defined in (D2.2 2013). It is, again, a domain-specific component and hence in the hands of the service owners to implement for their services. However, to test the integration of the foundations defined in WP3, a first generic implementation is provided by WP3.

3.1.1 Definition and Scope


The service orchestrator is the component in the MCN architecture that is responsible for the creation of a tenant's service instance. For each tenant's service instance request, a service orchestrator is instantiated and managed by the service manager.
To aid the management of the SOs instantiated by the service manager, each SO has a lightweight interface for the sole use of the service manager. This interface is currently a very early prototype and its definition is still being iterated on.

3.1.2 High-level design


As defined in (D2.2 2013, p. 30), the SO contains two functional blocks: “The Decision block (SOD) is responsible for interacting with “external” entities, e.g. Support Services or SM and take decisions on the run-time management of the SICs […] The Execution block (SOE) is responsible for enforcing the decisions towards the CC”.

3.1.3 Low-level design


The UML class diagram in Figure 41 shows the SO. To implement a service orchestrator, a developer needs to implement the ServiceOrchestratorExecution and ServiceOrchestratorDecision interfaces. The classes that implement these interfaces then provide the means to create an instance of the relevant service.

Figure 41 SO UML class diagram
The Application class is the entry point and represents the SO's interface towards the SM.
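A minimal sketch of how these interfaces might be implemented is given below. The method names and bodies beyond the interface and class names shown in Figure 41 are assumptions; the actual contract is the one defined by the sample SO referenced in section 3.1.4.

    # Minimal, illustrative service orchestrator skeleton (not the actual
    # sample_so code). Method names other than those in Figure 41 are assumed.

    class ServiceOrchestratorExecution(object):
        """Enforces decisions towards the Cloud Controller (SOE)."""

        def deploy(self):
            # Would hand the service/infrastructure template over to the CC.
            raise NotImplementedError

        def provision(self):
            # Would configure the deployed service instance components (SICs).
            raise NotImplementedError

        def dispose(self):
            # Would release all resources of the service instance.
            raise NotImplementedError

    class ServiceOrchestratorDecision(object):
        """Interacts with external entities and takes run-time decisions (SOD)."""

        def __init__(self, execution):
            self.execution = execution

        def start(self):
            # A trivial policy: deploy and provision immediately.
            self.execution.deploy()
            self.execution.provision()

    class MyServiceOrchestratorExecution(ServiceOrchestratorExecution):
        def deploy(self):
            print("deploying service instance via the CC...")

        def provision(self):
            print("provisioning SICs...")

        def dispose(self):
            print("disposing of the service instance...")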

3.1.3.1 SOs and SO Bundles


Service orchestrators are deployed within a service manager as code (Python in the prototype case) and supporting files. For now, no particular structure of the bundle is mandated. Figure 42 shows a possible structure.

Figure 42 Structure of a SO bundle


The service manager knows where to find this service orchestrator bundle as it will read the location
from the service manager configuration file. Within the bundle the following details might be stored:
 Any kind of data needed to perform the management of Service Instances. This can include the
service template, configuration files or whatever else is needed.
 The logic for the deployment, provisioning, runtime management and disposal of services, in the form of Python code (so.py).

 Configuration files which tell the CC to install dependencies such as the SDK, and which set the environment variables needed to use the right design module endpoint (support directory).

3.1.4 Documentation of the code


Table 25 shows where the source code can be found and how documentation can be accessed:
Table 25 SO documentation

Sub-Component | Reference | Documentation
SO | https://git.mobile-cloud-networking.eu/cloudcontroller/mcn_cc_sdk/tree/master/misc/sample_so | See README.md file.
Deployment code of sample SO | https://git.mobile-cloud-networking.eu/cloudcontroller/mcn_cc_sdk/tree/master/misc/cc_deploy | See README.md file.

3.1.5 Third parties and open source software


Table 26 shows the third-party software packages used:
Table 26 SO dependencies

Name | Description | Reference | License
Development
pyssf | OCCI implementation | http://github.com/tmetsch/pyssf | LGPL

3.1.6 Installation, Configuration, Instantiation


The sample service orchestrator can be run by following the steps in the README.md file of cc_deploy, as referenced in section 3.1.4. It basically deploys the sample SO once the dependencies have been installed and the right environment variables (the endpoint of the design module) have been set.

3.1.7 Roadmap
The following sprint, up to M21, will cover some basic changes to the generic sample SO. The Application class will be extracted and put into the SDK. This will generalise the interface of SO instances. This interface will be built upon the OCCI specification. No other actions are planned.



3.1.8 Conclusions and Future work
The SO presented here was used to test the overall integration of all components. It will be used for automated testing in the future. Fully fledged implementations of the SOs will be carried out in the respective work packages (WP4, WP5).

3.2 Generic Service Manager


The SM is defined in (D2.2 2013) and represents one of the key components of the overall architecture. Because SMs and SOs are domain specific, they should be implemented by the Service Owners of higher-level services such as EPC (WP4) and IMS (WP5). However, to ensure an integrated environment, WP3 has provided a first implementation for integration purposes.

3.2.1 Definition and Scope


The service manager is the first point of entry for EEUs. At the service manager the EEU can, through the MCN service lifecycle, request service instances. The service manager provides the EEU with the simple operations of creating, deleting, updating and describing service instances.

3.2.2 High-level design


In order to aid integration and interoperation, the SM exposes and offers the management of service types and instances through the OCCI specification. In this particular case, the core OCCI specification is used to allow service providers to represent their service offer as an OCCI type, better known as an OCCI Kind. Importantly, adopting OCCI provides the means to discover, through the SM, which service types are offered by the service provider.
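To illustrate what this means for an EEU, the sketch below shows how the offered service types could be discovered and a service instance requested over OCCI's HTTP rendering. Host, port and resource path are placeholders; the category corresponds to the example service type identifier used in section 3.2.3.

    # Illustrative OCCI/HTTP interaction with a service manager.
    # Host, port and resource path are placeholders (assumptions).
    import http.client

    SM_HOST, SM_PORT = "localhost", 8888  # assumed values

    conn = http.client.HTTPConnection(SM_HOST, SM_PORT)

    # 1) Discover the service types offered by this SM (OCCI query interface).
    conn.request("GET", "/-/", headers={"Accept": "text/occi"})
    print(conn.getresponse().read())

    # 2) Request a new service instance of the example 'epc' service type.
    category = ('epc; scheme="http://schemas.mobile-cloud-networking.eu/occi/sm#"; '
                'class="kind"')
    conn.request("POST", "/epc/", headers={"Category": category,
                                           "Content-Type": "text/occi"})
    resp = conn.getresponse()
    print(resp.status, resp.getheader("Location"))  # URI of the new instance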

3.2.3 Low-level design


The UML class diagram in Figure 43 shows a brief overview of a generic SM.



Figure 43 SM UML class diagram
To implement a service manager, a developer simply needs to define their service as an OCCI Kind, as shown below. Once done, the service manager is ready to be executed and to serve requests from EEUs. Figure 44 shows an example of a service manager implementation (a small illustrative sketch also follows the metadata list below). This is the only code that a developer needs to implement to have a basic (without AAA or SLA support) service manager ready for operation.



Figure 44 Sample SM
In the code, a service type is defined. A service type is what gives the service its signature. Service types
are implemented as OCCI Kinds and have a set of metadata that describe them. The key metadata points
to note are:
 identifier: the identifier is in fact a combination of the first two parameters. When combined this
provides a unique identifier for the type. In the example code above, the identifier would be:
http://schemas.mobile-cloud-networking.eu/occi/sm#epc
 title: this provides some human readable text briefly describing the service.
 attributes: these are a set of attribute names that can either be read-only (immutable) or
read/write (mutable) by the EEU. The details of the exact semantics are covered in the OCCI
core specification.
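The following plain-Python stand-in summarises the metadata a developer supplies when defining such a service type. The actual implementation uses the OCCI Kind class provided by pyssf; the concrete title and attribute names below are illustrative assumptions.

    # Plain-Python stand-in for the metadata of a service type definition.
    # The real implementation uses pyssf's OCCI Kind class; title and attribute
    # names here are assumed example values.

    epc_service_type = {
        # identifier = scheme + term, here:
        #   http://schemas.mobile-cloud-networking.eu/occi/sm#epc
        "scheme": "http://schemas.mobile-cloud-networking.eu/occi/sm#",
        "term": "epc",
        # human readable text briefly describing the service
        "title": "Example EPC service type",
        # attribute names visible to the EEU: "immutable" = read-only,
        # "mutable" = read/write (semantics as defined in the OCCI core spec)
        "attributes": {
            "mcn.example.endpoint": "immutable",
            "mcn.example.state": "immutable",
        },
    }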

3.2.3.1 SM and Service Bundles


Having a service manager running and operating is not enough to create service instances. The key aspect here is the service orchestrator. As described in (D2.2 2013), it is the service orchestrator that is responsible for the creation of the EEUs' service instances. Also described in D2.3 was the concept of the service orchestrator bundle. This is supported in the service manager. Currently, the service orchestrator is deployed along with a service manager. It sits within a directory named "bundle" relative to the service manager implementation ("demo_service_manager.py" in Figure 45).



Figure 45 SO bundle structure
The service manager knows where to find this service orchestrator bundle as it will read the location
from the service manager configuration file. For more details on the configuration of the service manager
please see the README.md in the service manager code repository.
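For orientation, one possible on-disk layout, combining the artefacts mentioned in this section and in section 3.1.3.1, could look as follows (names other than those mentioned in the text are assumptions):

    demo_service_manager.py      # the SM implementation (Figure 44)
    etc/
        sm.cfg                   # SM configuration, see section 3.2.6
    bundle/                      # the service orchestrator bundle (Figure 45)
        so.py                    # deploy/provision/runtime/dispose logic
        data/                    # service templates, configuration files, ...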

3.2.4 Documentation of the code


Table 27 shows where the source code can be found and how documentation can be accessed:
Table 27 SM documentation

Sub-Component | Reference | Documentation
SM | https://git.mobile-cloud-networking.eu/cloudcontroller/mcn_sm/tree/initial_sm_impl | See README.md file.

3.2.5 Third parties and open source software


Table 28 shows the third-party software packages used:
Table 28 SM dependencies

Name | Description | Reference | License
Development
pyssf | OCCI implementation | http://github.com/tmetsch/pyssf | LGPL

3.2.6 Installation, Configuration, Instantiation


All configuration of the service manager is carried out through etc/sm.cfg. There are three sections to this configuration file:
 general: this configuration section is used by the code under the mcn.sm namespace.
o port: the port number on which the service manager listens.
 service_manager: this configuration section is related to the service manager that the developer implements.
o bundle_location: the location of the service orchestrator bundle. Currently only file path locations are supported.
 cloud_controller: this configuration section is related to the configuration of the cloud controller's APIs.
o nb_api: the URL of the north-bound API of the CloudController.
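A minimal illustrative sm.cfg, using the section and option names listed above but placeholder values, could look as follows; it is shown here being parsed with Python's configparser.

    # Illustrative only: section and option names are taken from the list
    # above, the values are placeholders.
    from configparser import ConfigParser

    SAMPLE_SM_CFG = """
    [general]
    port = 8888

    [service_manager]
    bundle_location = /opt/mcn/bundle

    [cloud_controller]
    nb_api = http://cloudcontroller.example.org:8888/
    """

    config = ConfigParser()
    config.read_string(SAMPLE_SM_CFG)
    print(config.getint("general", "port"))
    print(config.get("service_manager", "bundle_location"))
    print(config.get("cloud_controller", "nb_api"))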

3.2.7 Roadmap
The upcoming sprints, as defined in (SM 2014), will focus on the following user stories and deliver them by M27 and M30:
 Integration of support services that support both the technical and the business service manager
 Separation of the SM into BSM and TSM components
 Implementation of the BSM-to-BSM components to support inter-SM communication
 Implementation of asynchronous request processing to improve the perceived responsiveness of the SM
 Extension of the administration capabilities of the SM (e.g. remote uploads of service bundles)

3.2.8 Conclusions and Future work


Although this first implementation of the SM has been offered by work package 3 for now, it should be turned over to the work packages with more domain knowledge. Still, the implementation presented here ensured that work package 3 could verify that the “foundations” are integrated and functional.



4 Conclusions
As the title of the deliverable suggests, the first prototypes of all components delivered by WP3 were shown, except for the analytics service, which is planned for future releases. This is an important milestone for the further work, as from now on the basic infrastructural foundations are available to all other services within the project.
Each of the previous sections has its own conclusions and outlook on future work. For the overall work done in WP3, the authors want to emphasise that more work will be carried out to integrate the different components. Future steps will therefore also focus on integrating the components even more tightly using the CC and the corresponding Service Development Kit. These two parts are the key architectural artefacts which bind all the different services and components together.
Verification of the components has been done using installations in the testbed, unit testing and running systems. This is a key outcome, as it allows verifying up-and-running code developed both within the project and by external communities. With that, WP3 has also left the more theoretical work on architectures and moved into the verification of the previous work items.
Also note that the work done here might influence, or already has influenced, external communities. This includes contributions to projects such as OpenStack, as well as contributions to and influence on standards such as OCCI, which is widely used in the foundations of MCN.



5 Terminology
AAA – Authentication, Authorisation, Accounting
AaaS – Analytics-as-a-Service
API – Application Programming Interface
BBU – Base Band Unit
CC – Cloud Controller
DNS – Domain Name Service
DNSaaS – Domain Name Service-as-a-Service
E2E – End to End
EEU – Enterprise End User
EMS – Element Management System
EPC – Evolved Packet Core
HTTP – Hypertext Transfer Protocol
ITG – Infrastructure Template Graph
LB – Load Balancing
LBaaS – Load Balancing-as-a-Service
MaaS – Monitoring-as-a-Service
MCNSP – Mobile Cloud Networking Service Provider
MNO – Mobile Network Operator
OAI – OpenAirInterface
OCCI – Open Cloud Computing Interface
ODN – Optical Distribution Network
RAN – Radio Access Network
RANaaS – Radio Access Network-as-a-Service
RANP – Radio Access Network Provider
RAT – Radio Access Technique
SCM – Source Code Management
SI – Service Instance
SIC – Service Instance Component
SLA – Service Level Agreement
SLAaaS – Service Level Agreement-as-a-Service



SM – Service Manager
SO – Service Orchestrator
STG – Service Template Graph
TCP – Transmission Control Protocol
UML – Unified Modelling Language
VRRM – Virtual Radio Resource Management



References
3GPP. (2013) LTE; Evolved Universal Terrestrial Radio Access (E-UTRA) and Evolved Universal Terrestrial Radio Access Network (E-UTRAN); Overall description; Stage 2 (3GPP TS 36.300 version 11.5.0 Release 11), http://www.etsi.org/deliver/etsi_ts/136300_136399/136300/11.05.00_60/ts_136300v110500p.pdf

Henneveld, A. (2013) CAMP, TOSCA, and HEAT, http://de.slideshare.net/alexheneveld/2013-04specscamptoscaheatbrooklyn

AWS. (2013) AWS CloudFormation, http://aws.amazon.com/en/cloudformation/

Azodolmolky, S., Nejabati, R., Escalona, E., Jayakumar, R., Efstathiou, N., and Simeonidou, D. (2011) Integrated OpenFlow-GMPLS Control Plane: An Overlay Model for Software Defined Packet over Optical Networks, Presented at ECOC, pp. 1–3

Ceilometer. (2013) Project Ceilometer OpenStack, https://wiki.openstack.org/wiki/Ceilometer

D2.2. (2013) Overall Architecture Definition, MobileCloud Networking Project

D3.1. (2013) Infrastructure Management Foundations – Specifications & Design for MobileCloud framework, MobileCloud Networking Project

Dimitrova, D. (2014) Performance analysis of eNodeB for porting to the cloud, MobileCloud Networking, https://svn.mobile-cloud-networking.eu/svn/mcn/WP3/T3.5_WirelessCloud/Deliverables/D3.2/MCN-WP3-UBern-D3.2_performance.docx

DNSAAS. (2014) DNSaaS Jira - Issues & Roadmap, https://jira.mobile-cloud-networking.eu/browse/DNSAAS

DoW. (2012) Description of Work, Mobile-Cloud Networking

ETSI. (2013) ETSI GS NFV 002: Network Functions Virtualisation (NFV): Architectural Framework, http://www.etsi.org/deliver/etsi_gs/NFV/001_099/002/01.01.01_60/gs_NFV002v010101p.pdf

Eurecom. (2013) Open Air Interface (OAI), www.openairinterface.org/

Ferreira, L., Branco, M., and Correia, L. M. (2013a) Traffic Generation, MobileCloud Networking, https://svn.mobile-cloud-networking.eu/svn/mcn/WP3/T3.5_WirelessCloud/Deliverables/D3.2/MCN-WP3-INOV-091-01-Traffic_Generation_for_D3.2.docx

Ferreira, L., Branco, M., and Correia, L. M. (2013b) Radio and Cloud Resources Management, MobileCloud Networking, https://svn.mobile-cloud-networking.eu/svn/mcn/WP3/T3.5_WirelessCloud/Deliverables/D3.2/MCN-WP3-INOV-090-02-RCRM_for_D3.2.docx

Kocur, J. (2013) Installation, configuration and instantiation of eNB, MobileCloud Networking, https://svn.mobile-cloud-networking.eu/svn/mcn/WP3/T3.5_WirelessCloud/Deliverables/D3.2/MCN-WP3-TUB-installation-eNB.docx

Linux Foundation. (2013) OpenDaylight - An Open Source Community and Meritocracy for Software Defined Networking, Linux Foundation, http://www.opendaylight.org/publications/opendaylight-open-source-community-and-meritocracy-software-defined-networking

Linux Foundation. (2014) OVSDB OpenStack Guide, https://wiki.opendaylight.org/view/OVSDB:OVSDB_OpenStack_Guide

NEC. (2013) NEC Neutron Plugin, https://wiki.openstack.org/wiki/Neutron/NEC_OpenFlow_Plugin

Neutron. (2014) OpenStack Neutron QoS, https://wiki.openstack.org/wiki/Neutron/QoS

Nikaein, N. (2012) OAI emulation platform, Eurecom, http://svn.eurecom.fr/openair4G/trunk/targets/DOCS/oaiemu.doc

Nyren, R., Edmonds, A., Papaspyrou, A., and Metsch, T. (2011) Open Cloud Computing Interface - Core, Open Grid Forum, http://ogf.org/documents/GFD.183.pdf

ONESource. (2014) DNSaaS installation instructions, https://wiki.mobile-cloud-networking.eu/wiki/DNSaaS_Implementation

ONF. (2014a) ONF Software Defined Networking, https://www.opennetworking.org/sdn-resources/sdn-definition

ONF. (2014b) ONF Optical Transport WG, https://www.opennetworking.org/working-groups/optical-transport

ONF. (2014c) ONF Configuration and Management, https://www.opennetworking.org/working-groups/configuration-management

OpenFlow Switch Specifications, version 1.4.0. (2014) ONF Foundation, https://www.opennetworking.org/images/stories/downloads/sdn-resources/onf-specifications/openflow/openflow-spec-v1.4.0.pdf

OpenStack Neutron. (n.d.) https://wiki.openstack.org/wiki/Neutron

Pizzinat, A. (2013) Fronthaul Solutions, MobileCloud Networking

Van Rossum, G., Warsaw, B., and Coghlan, N. (2001) Style Guide for Python Code, Python Software Foundation, http://legacy.python.org/dev/peps/pep-0008/

SM. (2014) Service Manager Jira - Issues & Roadmap, https://jira.mobile-cloud-networking.eu/browse/SM

Sousa, B. (2014) Towards a High Performance DNSaaS Deployment, Presented at the 6th International Conference on Mobile Networks and Management

TCLOUD. (2014) Cloud Controller Jira - Issues & Roadmap, https://jira.mobile-cloud-networking.eu/browse/TCLOUD

TMAAS. (2014) MaaS Jira - Issues & Roadmap, https://jira.mobile-cloud-networking.eu/browse/TMAAS

TNET. (2014) Intra DC connectivity Jira - Issues & Roadmap, https://jira.mobile-cloud-networking.eu/browse/TNET

TPERF. (2014) Performance Jira - Issues & Roadmap, https://jira.mobile-cloud-networking.eu/browse/TPERF

TRAN. (2014) RAN Jira - Issues & Roadmap, https://jira.mobile-cloud-networking.eu/browse/TRAN

Trema. (2014) Sliceable switch tutorial, https://github.com/trema/apps/wiki/sliceable_switch_tutorial

Trove. (2014) OpenStack Trove, https://wiki.openstack.org/wiki/Trove

Vagrant. (2014) Vagrant, http://www.vagrantup.com/

Zabbix. (2013) Project Zabbix, http://www.zabbix.com/

