
Contributions for the Standardisation of a SDN Northbound

Interface for Load Balancing Applications

Diogo Manuel Caldeira Coutinho

Thesis to obtain the Master of Science Degree in

Telecommunications and Informatics Engineering

Supervisor: Prof. Fernando Henrique Côrte-Real Mira da Silva

Examination Committee

Chairperson: Prof. Paulo Jorge Pires Ferreira


Supervisor: Prof. Fernando Henrique Côrte-Real Mira da Silva
Member of the Committee: Prof. Paulo Luís Serras Lobato Correia

October 2017
Acknowledgments

I would like to thank my parents for their encouragement and caring over all these years. I would
also like to thank my girlfriend, Beatriz, for being an inspiration, support and best friend throughout all of
these years.
I would also like to acknowledge my dissertation supervisor Prof. Fernando Mira da Silva for his
insight, support and sharing of knowledge that has made this Thesis possible and has made me grow
as a person. Special acknowledgment goes to my father for providing the cloud server used to evaluate this
work.
Last but not least, thank you to all my friends and colleagues who helped me in the making of this thesis by
bouncing ideas around and providing distractions when necessary, and to the Floodlight community forum users
who provided some enlightenment regarding the controller's functionalities and possibilities.
To each and every one of you – Thank you.

Abstract

Software-Defined Networking (SDN) is an emerging networking paradigm that aims to help deal with
the inherent complexity of today's networks. The main concept behind SDN
is the separation of the control plane from the data plane, centralizing the control plane of the network
devices in a server or SDN controller.
OpenFlow emerged as the first widely adopted protocol for the communication between the central-
ized control plane and the data plane, known as the Southbound Interface (SBI). Its success has led
SDN to the spotlight. The Northbound Interface (NBI) is the Application Programming Interface (API)
for the communication between the control plane and the application layer. This interface provides an
abstraction to network application developers, making it possible to implement the desired functionalities
without concerns regarding the low-level details of the underlying infrastructure. This has resulted in a
more efficient, higher-level network application development process. But, unlike the SBI, there is not
yet an accepted open standard for the NBI, which makes SDN applications lose interoperability, leading
to a fragmented framework. Implementations of the NBI for specific domains will help define a
broader standard, covering a wider range of domains, that will facilitate the wide adoption of SDN.
In this thesis, we discuss the standardisation of the NBI and extend the Floodlight controller to provide
a relevant NBI for load balancing applications. We developed programs to access this interface and
evaluated the system in a typical data center network topology in a real environment.

Keywords

Software-Defined Networking, OpenFlow, Northbound Interface, Load Balance

Resumo

Software-Defined Networking (SDN) é um novo paradigma que está a surgir nas redes informáticas.
Tem como objetivo contribuir para lutar contra a complexidade inerente das redes tradicionais. O con-
ceito principal do SDN é a separação do plano de controlo do plano de dados, centralizando o plano de
controlo de todos os dispositivos da rede num servidor ou controlador SDN.
OpenFlow surgiu como o primeiro protocolo amplamente adotado para a comunicação entre o plano
de controlo e o plano de dados, conhecido como Southbound Interface (SBI). O seu sucesso elevou
SDN para uma grande notoriedade. A Northbound Interface (NBI) é a Interface de Programação de
Aplicação (IPA) que define a comunicação entre o plano de controlo e a camada aplicacional. Esta
interface proporciona uma abstração para os criadores de aplicações de rede, para que haja a possibil-
idade de implementar as funcionalidades desejadas, sem preocupações com os detalhes da infraestru-
tura subjacente. Contudo, oposto à SBI, ainda não existe uma interface aberta e normalizada para
a definição da NBI, o que leva as aplicações de SDN a perder interoperabilidade e a uma framework
fragmentada. O desenvolvimento de interfaces para domínios específicos vai ajudar na definição de
uma normalização mais ampla, levando a uma maior adoção ao SDN.
Nesta tese vamos analisar a normalização da NBI e aprofundar o controlador Floodlight, de forma a
tornar a NBI relevante para aplicações de distribuição de carga. Desenvolvemos interfaces para aceder
à NBI e avaliámos o sistema numa topologia típica de um centro de dados.

Palavras Chave

Software-Defined Networking, OpenFlow, Northbound Interface, Balanceamento de carga

Contents

1 Introduction 1
1.1 Motivation and Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Objectives and Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Document Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

2 State of the art 5


2.1 Network Openness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 Software-Defined Networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2.1 OpenFlow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2.2 Northbound Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.3 Features and Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3 Network Functions Virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.4 Development Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.4.1 Mininet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.4.2 Floodlight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.4.3 Open vSwitch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

3 System Architecture 25
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.2 Load Balancing Principles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.3 System Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.3.1 Floodlight Internal Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.3.2 Floodlight Northbound Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

4 Implementation 35
4.1 Load Balancer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.1.1 Client to Server Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.1.2 Statistics Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

4.1.3 Load Balancing Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.1.4 Health Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.1.5 Pool Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.2 Northbound Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.2.1 REST API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.2.2 Command-Line Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.2.3 Graphical User Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.3 Mininet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

5 System Evaluation 49
5.1 Test Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
5.1.1 Test Bed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
5.2 Evaluation Results and Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.2.1 Load Balancer WRR Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.2.2 Load Balancer Statistics Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
5.2.3 Load Balancer TLS handler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
5.2.4 Load Balancer Health Monitoring Algorithm . . . . . . . . . . . . . . . . . . . . . . 55
5.2.5 Load Balancer Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
5.2.6 Northbound Interface Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

6 Conclusions and Future Work 65


6.1 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
6.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

A Load Balancer API Documentation 74

List of Figures

2.1 Overview of the SDN architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9


2.2 Main components of an OpenFlow switch. . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Main components of a flow entry in a forwarding element flow table. . . . . . . . . . . . . 13

3.1 Common load balancer architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28


3.2 High-level architecture of the system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.3 Simplified data structures of the load balancer module. . . . . . . . . . . . . . . . . . . . . 30

4.1 Flow chart of the forwarding algorithm of the load balancer module. . . . . . . . . . . . . . 37
4.2 Flow chart regarding the statistics collection module. . . . . . . . . . . . . . . . . . . . . . 39
4.3 Health monitoring system flow chart of the load balancer module. . . . . . . . . . . . . . . 42
4.4 Flow chart regarding the implementation of pool statistics. . . . . . . . . . . . . . . . . . . 43
4.5 Screen capture of the load balancer management web interface. . . . . . . . . . . . . . . 47

5.1 Load balancer statistics algorithm tests with 8 members and 8 clients. . . . . . . . . . . . 53
5.2 Load balancer statistics algorithm tests with 8 members and 16 clients. . . . . . . . . . . 53
5.3 Floodlight control plane performance test results with cbench. . . . . . . . . . . . . . . . . 56
5.4 Floodlight load balancer and forwarding modules packet-in and packet-out counters after
achieving full network connectivity, for three different tree topologies. . . . . . . . . . . . . 59
5.5 Floodlight statistics and link discovery packet-in and packet-out counters over a period of
15 seconds, for three different tree topologies. . . . . . . . . . . . . . . . . . . . . . . . . . 60

List of Tables

2.1 The different features of OpenFlow versions and reason for the implementation. . . . . . . 11

3.1 Operations of the API for the management of load balancers. . . . . . . . . . . . . . . . . 34

5.1 Comparison of the theoretical probabilities and results of the weighted round-robin algo-
rithm for picking a member. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
5.2 Average bandwidth of a pool running the statistics algorithm, with 4 members handling
320 total requests from 16 concurrent clients. . . . . . . . . . . . . . . . . . . . . . . . . . 54
5.3 Control plane message lengths of OpenFlow version 1.3.0, for a tree network topology
with 64 hosts and 21 switches. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
5.4 Calculations of the total weight of the control plane messages used by the Floodlight
components. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.5 Comparison of the total bandwidth consumption over 2.38 seconds of the Floodlight mod-
ules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.6 Northbound Interface performance test with bench-rest, executing 100 000 HTTP Get
requests on all the members of a pool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
5.7 Northbound Interface performance test with bench-rest, executing 100 000 HTTP Get
requests on a member. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
5.8 Northbound Interface performance test with bench-rest, creating new members with HTTP
Put followed by HTTP Get requests on the members. . . . . . . . . . . . . . . . . . . . . . 64

Acronyms

APs Access Points

APIs Application Programming Interfaces

ARP Address Resolution Protocol

BDDP Broadcast Domain Discovery Protocol

BGP Border Gateway Protocol

CA Certificate Authority

CAPEX Capital Expenditure

CLI Command-Line Interface

CPU Central Processing Unit

CRUD Create, Read, Update and Delete

DNS Domain Name Server

DoS Denial of Service

ETSI European Telecommunication Standards Institute

ForCES Forwarding and Control Element Separation

FIFO First In First Out

GPL General Public License

GUI Graphical User Interface

HTML Hypertext Markup Language

HTTP Hypertext Transfer Protocol

HTTPS Hypertext Transfer Protocol Secure

ICMP Internet Control Message Protocol

IDS Intrusion Detection System

IETF Internet Engineering Task Force

IMAP Internet Message Access Protocol

IP Internet Protocol

ISG Industry Specification Group

IT Information Technology

JKS Java KeyStore

JSON Javascript Object Notation

LF Linux Foundation

LLDP Link Layer Discovery Protocol

MAC Media Access Control

MPLS Multi Protocol Label Switching

NAT Network Address Translation

NBI Northbound Interface

NBI-WG Northbound Interface Working Group

NETCONF Network Configuration Protocol

NFV Network Functions Virtualization

NOS Network Operating System

OCP Open Compute Project

OF OpenFlow

OF-CONFIG OpenFlow Management and Configuration Protocol

ONF Open Networking Foundation

ONI Open Networking Initiative

OPEX Operation Expenditure

OS Operating System

OTT Over-The-Top

OVS Open vSwitch

OVSDB Open vSwitch Database

OXM OpenFlow Extensible Match

PoC Proof-of-Concept

POF Protocol-Oblivious Forwarding

POP Post Office Protocol

PoP Point of Presence

QoS Quality of Service

REST Representational State Transfer

SBI Southbound Interface

SDN Software-Defined Networking

SMTP Simple Mail Transfer Protocol

SNMP Simple Network Management Protocol

TCP Transmission Control Protocol

TLS Transport Layer Security

TSPs Telecommunication Service Providers

TTL Time-to-Live

UDP User Datagram Protocol

UML Unified Modeling Language

URI Uniform Resource Identifier

VLAN Virtual Local Area Network

VNF Virtualized Network Function

VM Virtual Machine

WRR Weighted Round-Robin

1 Introduction

Contents
1.1 Motivation and Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1.2 Objectives and Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1.3 Document Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

In traditional networks it is not possible to develop network functionalities, like load balancers, without
knowing the low-level details of the forwarding devices, such as switches, routers and Access Points
(APs). This is due to the conventional architecture of these elements, usually featuring a tightly coupled
data plane and control plane. This leads to long development cycles of network applications and extra
complexity in their management.
Software-Defined Networking (SDN) is a relatively new paradigm that decouples the data plane from
the control plane, which enables the latter to be centrally manageable. It is possible to have control
over the devices in the entire network of an organization. This is done by the so-called SDN controller.
The controller is programmable and provides abstraction for the applications running on the underlying
infrastructure. It provides two Application Programming Interfaces (APIs). The Southbound Interface
(SBI) links the controller to the data plane. The Northbound Interface (NBI) enables external applications
to manage the controller. OpenFlow [1], created at Stanford University, was one of the first protocols for
the SBI, and today it is considered the industry de facto standard. The NBI provides a high-level API
to the SDN applications. This makes it possible to implement complex functions in the network, without
knowing the low-level specification of the network infrastructure.
However, there is no standard accepted by the industry for the definition of the NBI. The earlier concern
was to have a well-defined SBI. Now, SDN has a defined SBI, but it is lacking a standard for the NBI.
This has led to each SDN controller implementing its own NBI, losing interoperability and, therefore, wasting
time and resources in porting applications between different SDN controllers. Defining standards for
SDN is the focus of the Open Networking Foundation (ONF)1, an organization whose goal is to popularize
SDN through the development of open standards. It created a group focusing on defining APIs for the
NBI, the Northbound Interface Working Group (NBI-WG) [2], which is working to develop a domain-agnostic
interface as well as domain-specific APIs. This group aims to establish standardisation across
the wide range of application use cases that current networks demand.
Parallel to SDN, there was the introduction of Network Functions Virtualization (NFV) [3], a concept
that decouples the network functions, such as Network Address Translation (NAT) and Intrusion Detec-
tion System (IDS), from the underlying hardware appliances. As it is a relevant emerging paradigm in
networks, for which SDN may be an important enabler, we will briefly discuss its role in this work.
In this work, we added new load balancing features to the Floodlight OpenFlow controller, in order to
bring it closer to established load balancing principles. We created a Representational State Transfer (REST)
interface to access the newly implemented features and developed external applications to manage the
Floodlight load balancer through this interface. The system is evaluated using a data center network architecture,
with Floodlight in the cloud and a Virtual Machine (VM) running a tree network topology connected to
Floodlight. During the evaluation of this system, we test the major components that give

1 https://www.opennetworking.org/

Floodlight the ability to load balance client requests, as well as other essential load balancing features. Furthermore, we
show that the load balancer correctly distributes traffic across servers, estimate its control plane
bandwidth consumption and measure the REST interface performance in requests per second.

1.1 Motivation and Scope

Without the proper standardisation of the protocols and API for the NBI, the current implementations
of SDN controllers are providing different solutions to the same problem. This leads to an awkward
paradigm where SDN features make it simpler to develop network applications, but the development has to be
repeated for each of the different controllers available; otherwise, the application is coupled to a single controller. This
means that network application developers, in order to deploy their applications on multiple controllers,
need to learn controller-specific languages and design patterns, even though these applications do not have any
controller-dependent features.

1.2 Objectives and Contributions

The goal of this work is to discuss and make contributions to the definition of the NBI. As the NBI
has a broad scope of use cases, we focus mainly on load balancing applications. The implementation
follows the ONF guidelines in order to help the key SDN players agree upon an NBI standard for load
balancing applications and start discussing interfaces for a wider range of use cases.
In this thesis, we have contributed a definition of the network management operations for load balancing
applications that can be used by external programs through the NBI. We developed a
REST interface and provided useful documentation regarding the specification of this API. Moreover, we
contributed with new load balancing features to the Floodlight SDN controller, in order to have a Proof-of-
Concept (PoC) implementation and to evaluate the interface in a real environment. Some components
of the software developed in the scope of this thesis, such as two new load balancing algorithms, two
switch statistics collection methods and a pool statistics retrieval function, were submitted and integrated
in the trunk of the Floodlight controller. A few more were submitted and are currently pending final
approval. Furthermore, we developed two client applications to facilitate the access to the management
operations of the NBI.

1.3 Document Structure

The remainder of this document is organized as follows: Chapter 2 gives an insight into the current
tendencies in the networking area, an overview of the work being developed in SDN that is relevant
to this thesis, and the important SDN architectural specifications. Chapter 3 discusses the overall system
architecture, giving details on how it was built. Then, in chapter 4, we address the multiple components
that we have developed to give functionality to the load balancer and its API. In chapter 5, we evaluate
the most important aspects of our solution and analyze the test results. Finally, in chapter 6, we conclude
the thesis and discuss the system limitations and future work.

2 State of the art
Contents
2.1 Network Openness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2.2 Software-Defined Networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

2.3 Network Functions Virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

2.4 Development Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

2.5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

This chapter discusses the state of the art of SDN, with emphasis on the aspects important
to this work, and provides some background on openness initiatives in the computer industry. In
section 2.1, the history of openness in the industry is addressed. In section 2.2, SDN features are described and
its architectural components are analyzed. Then, in section 2.3, NFV, its characteristics and relationship
with SDN are explained. In section 2.4, the tools used to develop the contributions of this work are
addressed. Finally, in section 2.5, we discuss the main conclusions of this chapter.

2.1 Network Openness

Traditionally, the internal layers of switches, routers and APs are tightly coupled and closed by hardware
vendors. What this means is that each node has its own physical control plane and data plane. This architecture
comes with closed software running on the node and is therefore limited to vendor-implemented features.
Due to these characteristics, innovation in networking features is limited by the lack of open
contributions from different vendors and the research community. This leads to more network appliances
being added to fulfil functions that regular devices are not capable of handling. Given this scenario, network
complexity is rising and management is increasingly difficult. An open approach would contribute
to innovation, stopping the prioritization of vendor interests over technological development.
The concept of openness is not new to the computer industry. To guarantee a fair use of open
software, several licenses for distributing it were designed, as, for example, the GNU General Public
License (GPL)1 , created in 1989, and the Apache2 license. The Linux Operating System (OS) is dis-
tributed under the GPL, and Android, a very popular mobile OS based on the Linux kernel, is under the
Apache license. These technologies have been using open strategies to increase the pace of innovation
and improvement through community collaboration.
Up to now, networking industry openness has been limited to standard protocols, such as Network Con-
figuration Protocol (NETCONF) and Border Gateway Protocol (BGP). These are enough to provide
interoperability between devices and services at a high level. But they are not enough to solve the more
general management problems. It is necessary to have open-source software and hardware to leverage
community research and development. The Linux Foundation (LF)3 is leading open-source develop-
ment projects across many fields of networking, keeping the ideals of Linux. Other concepts pursue the
same goal, like NFV [3]. This concept was first introduced in 2012 by a consortium of Telecommuni-
cation companies, such as AT&T, Verizon, Orange and China Mobile. NFV will be further discussed in
section 2.3.
Following the same approach, some initiatives like the Open Compute Project (OCP)4 are starting
1 https://www.gnu.org/licenses/old-licenses/gpl-1.0.html
2 https://www.apache.org/licenses/
3 https://www.linuxfoundation.org/
4 http://www.opencompute.org/

open source hardware development with the goal of providing more choices for the consumers and more
efficient hardware for Information Technology (IT) infrastructures. Another interesting project, founded in
2014, is the Open Networking Initiative (ONI), started by Dell with the objective of disaggregating network
switching software and hardware and enabling consumers to freely choose what software to run on
top of a switch, thus allowing anyone to develop their own switching software. The ONI is composed
of many IT and networking companies such as Big Switch Networks, Cumulus Networks, Juniper, HP
and VMware. In 2016, Microsoft started Project Olympus5 with the OCP community, showing that open-source
projects are producing results that address problems networks are currently facing.
Based on these projects, the tendency of the industry seems to be moving towards open-source development
of hardware and software. Open projects also allow collaboration from the community, enhancing
the chances for progress in the area and thus increasing consumer choice.

2.2 Software-Defined Networking

SDN is an architecture where the data plane is decoupled from the control plane and the control
functions of the network devices are put into a centralized external node, the controller. This gives the
possibility to directly program the controller with network services, such as management and security,
while gaining the ability to centrally manage the network. Before SDN, this concept was already specified
by the Forwarding and Control Element Separation (ForCES) [4] framework, defined by the Internet
Engineering Task Force (IETF) in 2004. It was from ForCES that SDN emerged, however with a few
differences that led to SDN gaining popularity. An important difference between ForCES and SDN is that
the former does not require the control plane functions to be externally centralized [5], i.e., a network
element can physically have the control and data planes, as in the traditional networks, therefore losing
network-wide visibility.
SDN is composed of three layers: the infrastructure layer, the control layer and the application layer,
as seen in Figure 2.1. The infrastructure layer is where the forwarding elements of the network operate.
In contrast to traditional networks, these elements only perform forwarding operations. There is an
implicit abstraction in which all switches behave in the same way: receiving a packet, applying some kind of
decision algorithm and forwarding or discarding the packet. At this level of abstraction, a network
element is called a white box switch: commodity hardware that relies on the controller to configure
the instructions necessary to forward packets. Henceforth, we refer to these forwarding elements
simply as switches, because that is mainly how they are addressed in the industry.
The communication between the infrastructure layer and the control layer is defined as the SBI.
5 https://azure.microsoft.com/en-us/blog/microsoft-reimagines-open-source-cloud-hardware/

The goal of this interface is to allow the use of vendor-agnostic forwarding devices in the data plane.
There are multiple protocols that can define the SBI, such as ForCES protocol [6], NETCONF [7],
OpenFlow (OF) [1] and Protocol-Oblivious Forwarding (POF) [8]. The ForCES protocol was the first
to be developed, but it was never widely adopted. NETCONF is an already standardized protocol to
configure network devices. Although it is not completely suited for SDN environments, it is often extended
to fit the needs of SDN. OpenFlow is the industry-adopted standard for the SBI. Most of the
current SDN controllers support this protocol. We will further discuss OpenFlow in section 2.2.1. POF
improves the infrastructure layer by making the switches protocol-agnostic, i.e., switches do not have to
know the protocol being used in the communication with the control layer. However, this concept still requires
more research and, for it to be widely adopted, it is necessary to have forwarding
elements that support POF. This issue is critical, as vendors would have to replace their current
investment in OpenFlow devices with POF-optimized elements.

The control layer is where the controller resides. Compared to a traditional network, SDN has no
separation of the management plane from the control plane. This layer is responsible for the control as
well as providing the programmatic interface for the applications, because it is between the application
layer and the infrastructure layer. It acts as the abstraction layer for the applications to access
the underlying infrastructure, in the same way as an OS does for the applications in a computer. The
controller is software-based and is often named the Network Operating System (NOS). As seen
before, OpenFlow is the most used SBI, leading to the development of controllers and switches
conforming with it. One of the first OpenFlow controllers developed was NOX [9] (that introduced the
term NOS as it is now used). Today there are plenty of more sophisticated controllers developed for
SDN open to the community, such as Floodlight [10] and OpenDaylight 6 , implemented in Java. These
controllers were developed based on Beacon [11], which had the goal to be developer-friendly and
assure high performance. Ryu7 and POX8 are other open-source controllers available, implemented in
Python. All the previously mentioned controllers have OpenFlow as the SBI, although OpenDaylight also
supports NETCONF and Open vSwitch Database (OVSDB) [12]. Ryu also supports OVSDB, a
protocol complementary to OpenFlow that allows direct configuration of Open vSwitch (OVS),
which we will discuss in section 2.4.3.

Due to the NOS, it is possible to develop applications with much less complexity, thus increasing
the rate at which it is possible to create new features and closing the gap between the time needed to deploy
network functions and the time needed to deploy software applications. The controller is responsible for handling
all the communications that pass through the network. The configurations to implement in the white box
switches are received from the applications, containing the rules on how to forward their traffic.

6 https://www.opendaylight.org/
7 https://osrg.github.io/ryu/
8 https://github.com/noxrepo/pox

Figure 2.1: Overview of the SDN architecture.

The communication between the controller and the applications is defined by an NBI, usually implemented
following a REST architecture [13]. This interface is constructed according to different SDN use
cases and application-specific requirements; therefore, there is no global standard defining it to date.
So, generally, each controller has its own protocol for the communication between the control layer and the
application layer.
The application layer represents the programs that communicate with the control layer through the
NBI. These programs can leverage the network-wide visibility of SDN controllers to understand the state
of the network and dynamically adapt to any changes. They can implement the control-logic deemed
necessary to program the network elements with the desired behavior. This is optimally done without the
applications knowing the details of the controller or the switches in the infrastructure layer. This means
that the NBIs should be independent of the SDN controllers. The scope of these programs is very broad,
for instance, firewalls, routing algorithms and load balancers are examples of what can be used. These
network applications lead to very different types of requirements. This is one of the reasons why it is
hard to define a standard for the NBI.

2.2.1 OpenFlow

The SBI is a crucial part of the SDN architecture. Its specification is playing a major role in the
widespread adoption of this new paradigm. OpenFlow [1] was developed in 2008 at Stanford University,
with the intention of being supported by vendors' switches. This would enable researchers to test their
experiments in real environments, without exposing the switches' internal specifications. In 2011, the ONF
was founded, a consortium composed of Deutsche Telekom, Facebook, Google, Microsoft, Verizon and
Yahoo!, with the goal of promoting the adoption of SDN and OpenFlow. OpenFlow is an ongoing project that is
progressively updated. As of today, the latest publicly available OpenFlow version is 1.5.1 [14]. SDN
applications, controllers and switches are slowly catching up with the protocol capabilities and supporting
the newest versions. Notwithstanding, we believe we have reached a point where the development of
these products and research in the area is more important than new extensions to the protocol.

The different major features implemented in each version are represented in table 2.1, based on [15].
We can see from versions 1.0 to 1.1, the major features added were the ability to have multiple flow
tables, the addition of the Group Table and full VLAN and MPLS support. From versions 1.1 to 1.2,
the addition of OpenFlow Extensible Match (OXM) and the possibility to have multiple controllers in the
same network. From versions 1.2 to 1.3, the major features added were the meter table and the table
miss entry. From versions 1.3 to 1.4, the addition of the synchronized table and bundle. From versions
1.4 to 1.5, the most important features added were the possibility to process traffic through the output
port and the scheduled bundle. These features will be further explained throughout this section. The
changes from 1.5.0 to 1.5.1 are not presented in the table, because they are two minor adjustments to
the meter feature and some clarifications in the documentation. OpenFlow refers to the specifications
of the forwarding element in an SDN infrastructure layer, designated as an OpenFlow switch, and also the
protocol to manage this switch from a remote controller. However, the specification does not detail how
to configure an OpenFlow switch. For instance, it assumes the switch has been configured with the
controller’s Internet Protocol (IP) address. This is part of another protocol, the OpenFlow Management
and Configuration Protocol (OF-CONFIG) [16], which will be briefly discussed in section 2.4.

There are two different types of OpenFlow-compliant switches, the OpenFlow-only, which only sup-
ports OpenFlow operation for processing packets, and the OpenFlow-hybrid, which supports OpenFlow
and Ethernet switching operations. The hybrid switches should differentiate between the traffic to be
processed conventionally and the traffic to be forwarded using the OpenFlow pipeline.

The main OpenFlow switch elements can be seen in figure 2.2. A switch is composed of the OpenFlow
secure channel, the group table and the pipeline. The pipeline is composed of one or more flow tables.
The basic entity is a flow. A packet is classified as being part of a certain flow based on matching fields
in a flow table. Each flow table contains flow entries. A flow entry is composed of match fields to
classify packets, counters that make it possible to collect statistics regarding flow-level traffic, and actions to
apply to the packets that have been matched. These components are presented in figure 2.3.

Versions     Major Feature             Reason for implementation
1.0 - 1.1    Multiple flow tables      Avoid flow entry exhaustion
             Group Table               Enable applying action sets to groups of flows
             VLAN and MPLS Support     Enhance OpenFlow compatibility
1.1 - 1.2    OXM Match                 Extend matching flexibility
             Multiple Controllers      High Availability and Scalability
1.2 - 1.3    Meter Table               Add QoS and DiffServ capability
             Table Miss Entry          Provide flexibility
1.3 - 1.4    Synchronized Table        Enhance table scalability
             Bundle                    Enhance switch synchronization
1.4 - 1.5    Egress Table              Enable processing to be done using the output port
             Scheduled Bundle          Further enhance switch synchronization

Table 2.1: The different features of OpenFlow versions and reason for the implementation.

The matching fields in the specification include the switch input port, Transmission Control Protocol
(TCP) source/destination port and User Datagram Protocol (UDP) source/destination port. These fields are
required by OpenFlow. Others are optional, such as the Virtual Local Area Network (VLAN) Id, Internet
Control Message Protocol (ICMP) type, or Multi Protocol Label Switching (MPLS) label. As of version 1.5.0
of OpenFlow, it is also possible to match a flow according to the TCP flags. This is useful for
a load balancing application to determine when a TCP session is starting and ending. It is possible for
a packet to match multiple fields, increasing the granularity for classifying packets. There is also the
possibility to add new matching criteria through OXM, thus increasing the flexibility of the protocol.
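To make the flow classification concrete, the following Python sketch models a simplified flow entry with a few of the match fields mentioned above and checks a packet against it. The structure and field names are illustrative only; they do not correspond to any particular controller API or to the binary OXM encoding used on the wire.

    # Illustrative model of a flow entry: match fields, counters and an action list.
    flow_entry = {
        "priority": 100,
        "match": {"in_port": 1, "eth_type": 0x0800, "ip_proto": 6, "tcp_dst": 80},
        "counters": {"packets": 0, "bytes": 0},
        "actions": [("output", 2)],
    }

    def matches(packet, entry):
        # A packet belongs to the flow when every field present in the entry
        # equals the corresponding value carried by the packet.
        return all(packet.get(field) == value for field, value in entry["match"].items())

    packet = {"in_port": 1, "eth_type": 0x0800, "ip_proto": 6,
              "tcp_src": 52000, "tcp_dst": 80, "length": 60}

    if matches(packet, flow_entry):
        flow_entry["counters"]["packets"] += 1
        flow_entry["counters"]["bytes"] += packet["length"]
        print("apply actions:", flow_entry["actions"])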
The pipeline is ordered to maintain flow tables with a higher priority first and the lowest priority last.
This way it is possible for a packet to be dropped sooner in the pipeline, saving resources in the switch.
If a packet is matched with a field, then there is an action to be performed on that packet. When a
packet has not been matched to any field, it is, by default, dropped. This behavior can be changed using
OF-CONFIG protocol to configure another way to deal with unmatched packets. It is also possible for
the controller to create a table-miss entry in the flow table. This entry has all match fields omitted and is
set with the lowest priority. From this entry a packet can be discarded, forwarded to another subsequent
flow table or forwarded to the controller. It is possible to synchronize tables, which enables the controller
to perform alterations to a flow table, reflecting the same changes on other tables that are synchronized
with it. To improve the flow table modifications, bundles can be used to atomically change flow tables
in multiple forwarding elements. This means that either all modifications are done to the flow tables or,
in case of a failure, the operation is rolled back. Furthermore, these bundles were improved in the last
version of OpenFlow to scheduled bundles, which include an execution time, so that the switches know
when they are expected to commit the bundle.

Figure 2.2: Main components of an OpenFlow switch.
OpenFlow switches require support for the actions Output, Group and Drop. The Output action
forwards a packet to a specified port number. The Group action processes the packet according to a
specified group rule. The Drop action drops the packet. Other optional actions can be performed, such
as push/pop MPLS label, push/pop VLAN header, decrement packet Time-to-Live (TTL) and Set-Queue,
which determines the queue in which the packet shall be placed. The actions are applied at the end of the
pipeline by an Action Set. An Action Set is empty by default, but it can be specified by a flow entry. In
the Action Set, a specified action can only be performed once and the order of the set of actions to be
performed is according to the OpenFlow configuration. Nevertheless, there is another way to perform
actions. They can be performed immediately after a packet has been matched, by using a List of Actions.
These are carried out in the order in which they were specified and are applied to packets cumulatively. For
instance, if the list of actions specifies two Push VLAN header actions, then two headers are added to
the packet.
OpenFlow allows to group flows that are going to have the same processing by using a Group Table.
This table enables the switch to apply actions on a set of flows. These actions are called Action Buckets
and can be seen as an ordered list of multiple Action Sets. Action Buckets are associated with a specific
group of ports whose state (up or down) can be evaluated. These Group Tables allow for extra functionalities,
namely, they enable load balancing by giving the possibility to forward to a determined port, designated
from a set of ports that are available. They enable multicasting or broadcasting by cloning packets to all
ports specified in the Action Buckets. They enable a failover mechanism, which works by processing
the packets with the first Action Bucket that is live.

Figure 2.3: Main components of a flow entry in a forwarding element flow table.
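A minimal Python sketch of the two group behaviours just described, written purely to illustrate the semantics rather than any OpenFlow encoding: a "select" group picks one live action bucket (optionally weighted), which is how load balancing over a set of ports is achieved, while a fast-failover group always uses the first bucket whose watched port is still up.

    import random

    # Each action bucket forwards to one port and watches that port's liveness.
    buckets = [
        {"output_port": 1, "weight": 2, "live": True},
        {"output_port": 2, "weight": 1, "live": True},
        {"output_port": 3, "weight": 1, "live": False},  # e.g. link currently down
    ]

    def select_bucket(buckets):
        # "select" group: weighted random choice among live buckets (load balancing).
        live = [b for b in buckets if b["live"]]
        return random.choices(live, weights=[b["weight"] for b in live], k=1)[0]

    def failover_bucket(buckets):
        # Fast-failover group: first bucket in order whose watched port is live.
        return next(b for b in buckets if b["live"])

    print("select group forwards to port", select_bucket(buckets)["output_port"])
    print("failover group forwards to port", failover_bucket(buckets)["output_port"])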
The OpenFlow specification for granting Quality of Service (QoS) is through the Meter Table. In this
table each entry defines per-flow meters that measure and limit the rate of one or more associated flow
entries. Each meter can be combined with per-port queues for implementing differentiated services
in the network. OpenFlow maintains counters that can be used to collect statistics by the controller.
There can be multiple counters, for instance, counters to track received/transmitted packets in each
port, duration (seconds) in each flow entry or number of active entries per flow table. Also, the controller
can query the switch to know which are the optional counters that can be used.
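As an example of how these counters can be consumed from the application layer, the snippet below asks Floodlight for the flow statistics of every connected switch using the Python requests library. The controller address and the endpoint path are assumptions taken from the Floodlight REST API documentation and may differ between controller versions.

    import requests

    CONTROLLER = "http://localhost:8080"  # assumed address of the Floodlight REST API

    # Assumed endpoint returning per-switch flow statistics that the controller
    # gathers from the OpenFlow flow counters (packets, bytes, duration).
    response = requests.get(f"{CONTROLLER}/wm/core/switch/all/flow/json", timeout=5)
    response.raise_for_status()

    for switch_dpid, flow_stats in response.json().items():
        print(switch_dpid, flow_stats)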
Another main component of the switch is the communication channel interface. It is through this channel that
the controller manages the OpenFlow switches. This channel can be secured using Transport Layer
Security (TLS), but it is not mandatory, as it can run solely using TCP. The protocol provides three types
of messages to be exchanged: controller-to-switch, asynchronous and symmetric. Controller-to-switch
messages are initiated by the controller to configure the switch, to request the abilities and features of a
switch, to request statistics and the current state of a switch, to populate the flow tables of the switch
with forwarding rules, among other management functionalities. Often, these messages do not require
any response from the switch. The asynchronous messages are sent from the switch to the controller

and are used to inform the controller about events in the network and changes in the switch state. The
packet-in message is an example of an asynchronous type message. It is used when the switch does not
find a match for a packet in the flow entries of a table, or when a matching table-miss entry instructs it to send
the packet to the controller. The symmetric type of messages are used for session setup and are sent without
being solicited by either the switch or the controller. Hello
messages are exchanged in the initial connection and Echo Request/Reply are swapped to verify the
connection between the elements. Error messages, from either of the components to notify problems,
and messages to test future OpenFlow features are also defined in this category of messages.
The switch starts the connection with the controller if there is a connection Uniform Resource Iden-
tifier (URI) available in the configuration. It must indicate the protocol, the IP address and the port for
the communication. The default port for the communication between controller and switch is 6653, so if
there is no configured connection URI in the switch, it must accept TCP or TLS incoming connections on
the default port or on a specified port. After the initial connection, the controller can start configuring the
flow rules in the switch. There are two methods for an SDN controller to implement the flow instructions
(to add, update or delete a flow entry) in the forwarding elements: the proactive and the reactive methods.
The proactive method is used when the controller knows, before the packets arrive at the switch, what
instructions to give the switches on how to handle them. The reactive method consists
of waiting for packets to reach the network. Then, the first packet of every flow must be forwarded to
the controller and only after this, the controller will indicate the appropriate rules on how to forward the
incoming packet. This means that a packet-in message will be sent to the controller and it will have to
be processed, increasing the latency of the network. By having to send this message for every new flow,
there is a possible threat to the scalability of the network. However, there is the possibility for a hybrid
approach, a combination of the reactive and proactive roles.
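A proactive installation can be sketched with Floodlight's Static Flow Pusher, which lets an application push a rule before any matching traffic reaches the switch. The URI, the example DPID and the field names below are assumptions based on the Floodlight documentation and should be checked against the controller version in use.

    import requests

    CONTROLLER = "http://localhost:8080"  # assumed address of the Floodlight REST API

    # Proactively install a rule that forwards HTTP traffic (TCP port 80) out of port 2.
    flow_rule = {
        "switch": "00:00:00:00:00:00:00:01",  # example DPID of the target switch
        "name": "http-to-port-2",
        "priority": "32768",
        "eth_type": "0x0800",   # IPv4
        "ip_proto": "0x06",     # TCP
        "tcp_dst": "80",
        "active": "true",
        "actions": "output=2",
    }

    response = requests.post(f"{CONTROLLER}/wm/staticflowpusher/json", json=flow_rule, timeout=5)
    print(response.status_code, response.text)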

2.2.2 Northbound Interface

As seen before, the NBI is the API that enables an application to program the network, via the
network services provided by the controller. This controller will translate the application’s instructions
for the network elements to a language that the switches will understand (which is done through the
OpenFlow protocol). The importance of the NBI is that it simplifies the operation of the control plane, by
providing a high-level API between the application layer and the controller. This makes the development
of network functionalities easier.
To continue improving the adoption of SDN, the ONF created a Working Group to develop prototypes
for the NBI, but not necessarily standardize it. The Northbound Interfaces Working Group [2] has the goal
of building an interface capable of dealing with different levels of abstraction and across a wide range of
domains. But before finding the broad solution, it is looking to build use case specific interfaces. Although
this group was first created in 2013, to this day it has not yet publicly released an NBI standardisation draft.

Separately, there are other implementations of the NBI that have been developed by the SDN com-
munity. These works will help the industry find the appropriate requirements to be defined, in order to ob-
tain the right interface to be proposed as a standard. The SDN controllers Ryu and Floodlight implement
a REST-like API for the NBI. REST architectural design is very popular, because it has the necessary
functionalities required by an NBI. But both Ryu's and Floodlight's NBIs do not fully respect the fundamental
designs defined for REST in [13]. This problem is addressed in [17], where a NBI is proposed following
the principles for REST APIs. Otherwise, the NBI may not be able to be extensible, scalable and inter-
operable. Furthermore, two violations of the REST principles are identified in the Floodlight controller:
The first violation is exposing the media type in a URI, such as wm/firewall/rules/json, which limits the
client's and the server's ability to evolve independently. Alternatively, the servers should instruct clients on
how to construct URIs, by defining those instructions within media types and link relations. The second
violation is exposing a fixed set of URIs, losing the controller’s ability to change URIs and reallocate
the resources. The REST API must provide an entry URI; from there, all transition possibilities shall be
given to the clients by the server. This concept is part of the hypertext driven approach. Then, the pa-
per recommends some modifications on the API and presents a framework for designing RESTful NBIs
following a hypertext driven approach. The NBI is tested using a generalized SDN controller. Results
showed that there is a trade-off regarding the performance of the NBI and its scalability and extensi-
bility. Following the same approach, RAPTOR [18] addresses the violations of REST in the Floodlight and
Ryu controllers and creates an interface for translating application requirements into these controllers'
NBIs. This approach can be useful for networks with multiple SDN controllers, easing the management
of the network. But it is fairly limited, as the interaction with the user requires an external application
and the interface is dependent on the controllers. This means that if their implementations change,
RAPTOR's has to change accordingly.
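To make the first violation concrete, the snippet below fetches firewall rules from the fixed Floodlight URI mentioned above. The client constructs the URI itself and the media type ("json") is embedded in the path, so any relocation of the resource or change of representation on the controller side breaks the client; a hypertext-driven API would instead hand out the URIs at run time. The controller address is an assumption.

    import requests

    CONTROLLER = "http://localhost:8080"  # assumed address of the Floodlight REST API

    # Fixed, client-constructed URI with the media type embedded in the path.
    # If the controller moves the resource or changes representations, this call breaks.
    rules = requests.get(f"{CONTROLLER}/wm/firewall/rules/json", timeout=5).json()
    for rule in rules:
        print(rule)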

Another way to implement the NBI is used in the ONOS [19] SDN controller. It provides the appli-
cations a different way to specify their desires through intents. An intent is a policy-based request or
connectivity requirement, simplified so that the details of how the service will be performed are abstracted
from the application. This is possible due to the mapping used to translate between the simple consumer
terms, presented in the intent, and the specific detailed terms, needed for the server to interpret and
perform the operation requested in the intent. The ONF has defined the guidelines for implementing a
NBI following this approach [20]. Considering these guidelines and ONOS NBI intent framework, [21]
proposes a framework designed to leverage the intent-based architecture to provide support to a wide
range of applications. The goal is to allow a developer to use the framework to create new services
and/or use already existing services that satisfy the requirements for its application.
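As a rough illustration of the intent abstraction (not the actual ONOS data model), an application could state a connectivity requirement in consumer terms and leave the mapping to concrete flow rules entirely to the controller, as sketched below; the field names are invented for the example.

    # Consumer-level intent: no ports, DPIDs or match fields are exposed to the application.
    intent = {
        "type": "host-to-host-connectivity",
        "source": "hostA",
        "destination": "hostB",
        "constraints": {"min_bandwidth_mbps": 10},
    }

    def compile_intent(intent):
        # Placeholder for the controller-side mapping: a real intent framework would
        # resolve the hosts, compute a path and install the corresponding flow entries
        # on every switch along that path, re-compiling if the topology changes.
        print("compiling intent:", intent)

    compile_intent(intent)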

Independently of the way the NBI is implemented, there are issues related to the security of this

interface. In [22], requirements such as confidentiality, integrity, authenticity and accountability are raised
in the context of the NBI. A REST-like API is proposed with added security features to ensure that the
security requirements are respected in the interface. It is proposed that the SDN controller is associated
with a Certificate Authority (CA). Then, the applications would have to trust this CA, as well as have an
application certificate signed by a CA. This CA would represent the vendor of the application, enabling
the controller to know which applications are to be trusted or not. From there it is possible to give certain
permissions to the application, considering the level of trust in the application vendor.
An important requirement of the NBI is that it should be user-friendly: the APIs should be provided to
the users with useful documentation and a web Graphical User Interface (GUI).

2.2.3 Features and Limitations

Considering the aspects related to the architecture previously mentioned, there are some SDN fea-
tures that should be explained in greater detail. These features were highlighted, because they represent
a differentiation from traditional networks and are a response to the problems related to them. Some
limitations regarding this new paradigm of networking are also addressed.

• Features

Network Programmability: The added network programmability is a major advantage of this ar-
chitecture. It allows network functionalities to be deployed and tested at a much faster rate than in traditional
networks, thus leading to improved research and development in the area. The number of new applications
being developed in all the different network categories is proof of this advantage of SDN.
Examples of recently developed traffic engineering applications are [23], [24] and [25]. The first two are
load balancers that leverage SDN features and highlight how important programmability is for innovation
in network application development. The last consists of a routing algorithm
that uses the MAC address as a label to forward packets, in order to reduce the size of the forwarding
tables.
Centralized Control: The SDN controller is the logical center of the network, which means that all the
network elements in the infrastructure layer can be controlled and managed by this entity. It is possible
to have full knowledge of the network state, which makes monitoring and management applications much
simpler to operate. Therefore, these applications have the optimal conditions to manage the traffic.
Centralized control can also result in savings in CAPEX and a reduction in OPEX. This advantage is due
to savings in new network elements that would otherwise have to be added if the architecture were that of a
conventional network.
Dynamic Flow Control: The controller is able to install routing decision-making logic in the data
plane. In OpenFlow these take the form of flow rules, which can be installed dynamically. This feature provides
load balancing applications with the ability to adapt to changes in traffic requirements. There are examples
of dynamic load balancers developed to leverage SDN, such as [26], in which the matching fields of the
flow entries are used to partition the clients' traffic and then split it across different servers.
Collecting Statistics: The communication between the controller and the white box switches is not
unidirectional. With OpenFlow, the controller can request statistics regarding the flows from the switches
and collect them to report back to the applications. This allows the controller to retain knowledge of
the network state. We have seen in section 2.2.1 how OpenFlow handles statistics collection
through the usage of counters.
Security: SDN brings advantages regarding network security. Using the features already men-
tioned, it is possible to create new security applications. In [27], the benefits to network security that
these features can provide are highlighted. For example, with centralized control it is possible to have
coordinated monitoring and management of the entire network. In traditional networks this would be
harder, as it would be necessary to monitor each subnet and gather information from them.

• Limitations

Single Point of Failure: As stated before, the control plane is decoupled from the data plane and
all the communication in the network converges to the SDN controller. This means that all the traffic
between the application layer and the infrastructure layer is handled by the controller. This characteristic
creates a single point of failure in the network. If the controller fails, then there will not be
connectivity between the network devices and the application layer. Thus, network users are prone to
lose connectivity, because switches do not have the necessary instructions to forward new flows and
possibly discard packets. However, there are several proposals that overcome this limitation, as it will
be discussed next.
Scalability: Considering this architecture, one controller cannot provide enough scalability when
dealing with a large number of network devices. Since the start of SDN controller design, this has been
a great concern. Beacon [11] was implemented with this problem in mind and was built to
assure high performance in the control plane. Controllers are evaluated by multiple criteria regarding this
limitation. A terminology for benchmarking SDN controllers is proposed by the IETF Network Working
Group in [28]. The packet-in rate (packets per second) that a controller is able to process, the average
time it takes to process these packets, the maximum number of control sessions the controller can
maintain and the capacity of the forwarding table are examples of important criteria to consider when
measuring scalability in SDN.
To deal with the scalability and the single point of failure limitations in SDN, there has been a de-
velopment in the industry regarding a distributed SDN architecture. SDN controllers have the ability
to communicate with other controllers using the Eastbound/Westbound API. This interface is used by
controllers managing the same network domain to exchange information, for example in the case of
controller failure or for controller resilience. This concept and its characteristics are not in the scope of
this work, but they are addressed in other recent works. For instance, [29] proposes an elastic SDN
controller cluster to deal with these limitations. Multiple instances of the controller manage different
parts of the network to achieve the desired scalability. Also, [30] validates that SDN is a viable option for
networks that require high availability and reliability, such as data centers. It shows that it is possible to
replace a failing controller without network disruption by using load balancing techniques.
Quality of Service: SDN networks are more likely to have higher latency than traditional networks
during the initial setup of flows. Depending on the configuration, the switches can forward the first packet
of a flow to the controller, therefore adding an extra hop through the controller. The controller then has
to decide what to do with the flow and return the forwarding instructions to the switch. This issue is
aggravated as the distance from the controller to the switches increases. For applications where
latency is decisive, this is far from ideal. Some works regarding QoS and SDN have been developed [31],
but more is expected regarding QoS specifications in OpenFlow.
Network Resilience: SDN features can be leveraged to increase security. But SDN can also be exploited
and used to take control of an entire network. If a controller is compromised, then the whole network
is subject to the intentions of the attacker. By taking command of the controller, the defenses of the
network can be managed by the aggressor. This would make the intrusion very hard to detect, and
the recovery would have to be done manually.
Lack of Standards: For enterprises, it is important that SDN has established standards in order for them
to feel comfortable adopting this new paradigm. This can only be achieved through cooperation among
standards developing organizations and the community responsible for leading the SDN industry. Fur-
thermore, it is possible for anyone interested in SDN to give contributions and help shape the SDN
standardisation. Contributions include developing applications for networks, using open-source SDN
controllers, testing and extending SDN related interfaces that are open to the public.
Note that some of these limitations result from the relative immaturity of SDN. Therefore, they are
expected to be solved as more research is done. Also, there are already interesting works
regarding distributed SDN.

2.3 Network Functions Virtualization

There is a growing complexity in Telecommunication Service Providers (TSPs) networks, result-
ing from the increase in network elements necessary to meet the requirements of today's communi-
cation networks. This has led to an increase in Operation Expenditure (OPEX) and a decrease in
profitability for these companies. The cost of maintaining the number of proprietary devices existing in the
networks is increasing. For each new service that a network operator wants to deploy, it has to invest
in more hardware to accommodate its services. The aforementioned problems, the advantages re-
lated to investments in the Cloud and the increase of Over-The-Top (OTT) content led network
operators to rethink their operation, in order to decrease costs.
What NFV [3] proposes is the virtualization of the network elements, by separating the network
functions from the hardware appliances they are coupled with, and then porting these functions into one
or more VMs. This brings benefits to TSPs, such as reduced equipment costs, energy savings and a
faster deployment of network services. Further benefits of NFV include that a Virtualized Network
Function (VNF) can run on open standard servers, which allows different applications, users and
tenants to share resources on the same platform. Also, VNFs can be physically deployed in arbitrary lo-
cations, for instance, in a data center, a Point of Presence (PoP) or wherever the network operators
deem optimal for a determined network function. This opens the possibility for services to be rapidly
scaled up or down, in accordance with the necessities of the network operators. Ultimately, this cuts
expenses for TSPs, as they are not dependent on any specific hardware vendor. With NFV
the network industry is more open to innovation, research and development, as the constraints
imposed by the previous coupling of hardware appliances and network functions are removed.
NFV is still a new concept; to fulfill the needs of this paradigm, many developments have
to be made in the industry. In [32], the technical requirements for implementing VNFs are identified as
performance, manageability, reliability, stability and security. It is necessary to have high performance
VNFs that are portable between different servers and managed by different hypervisors (the managers of the
guest OSs in a machine), without this being a costly procedure for the service providers. The functions that
fit the NFV concept must be automated, so that NFV has the ability to scale. It must be ensured
that the VNFs are well managed and secure from possible malicious intent. There must be a guarantee
that the hardware and software for NFV can deliver performance at least as good as
what the proprietary hardware appliances provide.
The complications presented above need to be solved with cooperation between the IT and the net-
work industries as well as standards developing organizations. This would make it possible to get the
most out of the benefits provided by NFV. The European Telecommunication Standards Institute (ETSI) is
working towards this goal, with the creation of the ETSI Industry Specification Group (ISG)9 for NFV.
Since 2012, this ISG has published detailed specifications for the NFV architec-
tural framework [33] and definitions of use cases of interest for NFV [34]. Other implementations were
developed as PoCs10 , although they are not publicly available yet.
TSPs have been forced to look for ways to change their business model, in order to save Capital
Expenditure (CAPEX) and reduce OPEX. NFV is the proposed solution, but it is still in its early develop-
ments. To accelerate the growth of this new paradigm, there is a joint effort from TSPs and ETSI,
contributing with PoC implementations and standard proposals.
9 http://www.etsi.org/nfv
10 http://www.etsi.org/technologies-clusters/technologies/nfv/nfv-poc

It is clear by now that SDN and NFV are different concepts, and each can be adopted independently.
Notwithstanding, we see that SDN and NFV have common objectives. Although ETSI originally pro-
posed a NFV architecture without explicitly mentioning SDN, the ONF has published a solution involving
the collaboration of both concepts [35]. The usage of commodity open switches in SDN is aligned with
the vision of NFV to eliminate proprietary hardware devices, replacing them with industry standard
servers, switches and storage. Both concepts enable the industry to have more potential for inno-
vation, by adhering to open standards, developing open-source projects where the community is
invited to give feedback and creating the opportunity for organizations to implement new services
and business models. Besides sharing common goals, there is an added value in combining the two
concepts: these benefits could lead to greater flexibility in networks and simplicity in the delivery and
management of applications over the network.

In [36], there is an attempt to demonstrate the deployment of VNFs and the adaptability of the network
using SDN. With NFV there is a need for complex forwarding rules, because VNFs are deployed in
multiple VMs, in order to obtain scalability. This leads to dynamic traffic steering that must be provided
by SDN. Through the NBI, an application or user can communicate to the SDN controller changes in the
state of the network and, according to them, redirect traffic to the VMs hosting the appropriate VNFs for
that particular scenario. As a PoC, the paper uses the Ericsson Cloud Lab11 open test environment
for SDN and NFV experiments, with focus on VNF chaining for QoS enforcement. Results showed that
telecommunication services using NFV can be dynamically adjusted according to changes in the
network state. Also, there is a performance gain from using traffic steering, but there is a trade-off due
to the performance degradation caused by the limited resources in the cloud environment used for the
experiment.

A different approach to NFV and SDN cooperation is defined in [37], where three different types of
NFV architectures are explained. The first, a SDN-agnostic NFV architecture, is as originally
presented by ETSI: VNFs are software boxes running on commodity servers and there is no mention of
SDN. The second, a SDN-aware NFV architecture, is as presented by the ONF,
with SDN being able to automatically manage the network and dynamically configure the network nodes.
The third, a SDN-enabled NFV architecture, is a new concept where a SDN switch implements
part of the VNF. This approach leads to better resource usage and efficient data processing, as there
are situations where it is not necessary to steer traffic to the VM where the VNF is hosted. Instead,
the switch will perform the necessary operations. This brings up two challenges: the infrastructure
layer has to support two roles, VNF operations and forwarding operations, and the VNF must be split
into separate roles, one implemented in the switch (network related) and another in the VM
(computation related).

11 https://www.ericsson.com/news/1923781

Given all of this, the paper tests these architectures using the flow-based access control application
FlowNAC [38]. Results showed that network utilization is lower and VNF performance is improved when
comparing data processing in the SDN-enabled NFV architecture against the other two architectures.
This is because the first node is able to drop traffic that does not meet the access requirements, and
authorized traffic does not have to be processed twice, to and from the VM.
The state of NFV and SDN as a combined architecture is in its early developments; there is an
effort from ETSI, the ONF and telecommunication companies to accelerate the growth of both
these new paradigms, by developing standards and PoC implementations. These concepts can be
used independently, but the benefits of joining them are not overlooked by the telecommunication and IT
industries. It is widely accepted that SDN can be a powerful enabler for further NFV developments by
providing virtualization and programming features.

2.4 Development Tools

In this section, we introduce some relevant tools that contributed to the development of SDN. Most
of the works discussed in this thesis use a common set of tools, which will be discussed in
the following sections.

2.4.1 Mininet

Mininet [39] is a network emulator capable of creating virtual networks with hundreds of hosts and
switches on a single computer. It is based on Linux process virtualization to run the nodes in the OS
kernel. Recent Linux kernels support network namespaces, a lightweight virtualization
feature, and Mininet takes advantage of it. This means that each process can have a separate network
interface, routing table and Address Resolution Protocol (ARP) table. Then, it is possible to have links
(virtual Ethernet pairs) between hosts and switches (processes), in order to build a virtual network.
The differentiating factor from other network emulators is that Mininet provides support for SDN. It
supports OpenFlow compliant switches, for instance, the OVS, which will be discussed in section 2.4.3.
With Mininet it is possible to run, test and debug SDN environments with a remote controller. It allows
setting an arbitrary custom topology. It also supports dynamically changing the state (up or down) of
the nodes in the topology. When compared to testing networks on real systems, Mininet provides quicker
reconfiguration and easier topology changes, at no cost to the user.
Notwithstanding, Mininet has some limitations: as it depends on the Linux kernel, it is unable to
emulate non-Linux-compatible OpenFlow devices. Also, it cannot deliver realistic performance for a
large scale system.

2.4.2 Floodlight

Floodlight [10] is a Java-based OpenFlow controller. It was developed by Big Switch Networks and
used in their commercial switches. It was then opened to the community under an Apache license, where it
is actively developed. Floodlight is compatible with hardware and virtual switches, such as the OVS.

The latest stable version of Floodlight is v1.212 . It has a defined REST-like API for the NBI, although
only one application uses this interface. Other applications are built in as Java modules, be-
cause they need higher bandwidth in their communication with the controller. These modules provide their
resources to the applications through the NBI. This interface is defined using restlet13, an open-source
framework to develop RESTful interfaces for the Java programming language. The goal is to abstract
the low-level details of the web server, making it easier for developers to provide resources to the
applications. The Floodlight controller is composed of multiple modules. It provides function-
alities that can be of interest to SDN applications. Each module is defined in a Java package. In order
for Floodlight to load and register a new module, it has to be registered as a resource and appended to
the configuration file. To achieve full functionality, the modules are connected to one another according
to per-module dependencies defined by the developers. Another important aspect is that modules
have to implement listeners for the OpenFlow messages. For instance, if a module wants to receive the
OpenFlow packet-in message, it has to add a listener and give it an identifier. This is done because
there can be multiple listeners for the same message, and the message will be handled by all of them,
consecutively.
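As an illustration of this mechanism, the following is a minimal sketch of a packet-in listener, assuming
the interfaces exposed by the Floodlight v1.2 code base (IOFMessageListener and the OpenFlowJ-Loxigen
message types); the class name and the ordering constraint shown are illustrative, not a reproduction of
any existing module.

import net.floodlightcontroller.core.FloodlightContext;
import net.floodlightcontroller.core.IOFMessageListener;
import net.floodlightcontroller.core.IOFSwitch;
import org.projectfloodlight.openflow.protocol.OFMessage;
import org.projectfloodlight.openflow.protocol.OFType;

// Hypothetical listener that inspects packet-in messages before the forwarding module.
public class ExamplePacketInListener implements IOFMessageListener {

    @Override
    public String getName() {
        return "examplelistener"; // identifier used when ordering listeners
    }

    @Override
    public boolean isCallbackOrderingPrereq(OFType type, String name) {
        return false; // no other module needs to process the message before this one
    }

    @Override
    public boolean isCallbackOrderingPostreq(OFType type, String name) {
        // ask to be called before the forwarding module, as the load balancer does
        return type.equals(OFType.PACKET_IN) && name.equals("forwarding");
    }

    @Override
    public Command receive(IOFSwitch sw, OFMessage msg, FloodlightContext cntx) {
        // inspect or handle the packet-in here; CONTINUE passes it to the next listener
        return Command.CONTINUE;
    }
}

Registration of such a listener would typically happen in a module's startUp method, for example through
floodlightProviderService.addOFMessageListener(OFType.PACKET_IN, new ExamplePacketInListener()).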

Floodlight v1.2 has full support for OpenFlow versions 1.0 and 1.3, as well as experimental support
for OpenFlow versions 1.1, 1.2, and 1.4. To support multiple versions and to be able to
adapt to future releases, Floodlight uses Loxigen. Loxigen exposes the OpenFlow protocol through an
API. This API results in the OpenFlowJ-Loxigen Java library, used by Floodlight to abstract the low-level
details of OpenFlow, while providing a high-level API for developers. As seen in section 2.2.1, Open-
Flow does not specify how to configure an OpenFlow switch. To accomplish this,
Floodlight uses OF-CONFIG [16]. This protocol allows an entity to remotely configure the switches.
For instance, it allows one to determine whether the communication with the switches
will use TCP or TLS.

Floodlight was one of the first open-source SDN controllers to have a defined NBI, through which an external
application could manage the network. Therefore, it is a good opportunity to make use of it, because it
is possible to contribute to it by extending existing modules and implementing new ones. Then, users
are able to experiment with the proposed contributions and give feedback regarding the new functionalities.

12 https://github.com/floodlight/floodlight/tree/v1.2
13 https://restlet.com/

2.4.3 Open vSwitch

OVS is an open-source virtual switch under the Apache license. It is built to forward traffic between
different VMs on the same host and between VMs and the physical network. OVS provides support for
multiple protocols, including OpenFlow. As it is software in active development, the latest stable version
(2.5.0) does not yet fully support OpenFlow versions 1.4 and 1.5. OVS is portable and independent of
the platform and includes OVSDB client and server implementations.
As stated in section 2.2, OVSDB is a management protocol to directly configure an OVS. This protocol
enables the configuration of the behavior of the OVS through a database. The database holds the infor-
mation for an instance of the virtual switch. The management operations on the OVS are complementary
to the OpenFlow protocol, but OVSDB operations are on a longer time-scale. For instance, OVSDB is
used to create, modify and delete OpenFlow data paths, of which there may be many in a single OVS
instance. It is used to configure the set of controllers to which an OpenFlow data path should connect
or to collect statistics from the virtual switch. However, OVS is not limited to this single management
protocol.
OF-CONFIG is another protocol to configure the OVS. This protocol is broader than OVSDB, as
it can be used for any OpenFlow-compliant switch. Its goal and operation are closely related to OVSDB.
OF-CONFIG extends NETCONF and uses it as the transport protocol, while OVSDB uses JavaScript
Object Notation (JSON). OF-CONFIG can instantiate an OpenFlow data path in the OVS and assign
resources, such as ports and queues, to be configured. For example, OpenFlow has three queue
parameters that can be configured: min-rate, max-rate and experimenter. OF-CONFIG provides the
means for this configuration to happen.

2.5 Discussion

In this chapter, we discussed the benefits of an open approach to hardware and software devel-
opment. We showed that community collaboration is a step towards innovation in networking. This is
demonstrated by the increasing number of recently developed projects which follow this approach. We
briefly discussed how NFV can impact the future of the network industry, by reducing the OPEX and
CAPEX of operators and facilitating the management of networks when enabled by SDN. We discussed
the state of the art of SDN and presented the most relevant tools for the development of this thesis.
We selected these tools because they fit the requirements needed to build our so-
lution and enable a thorough evaluation of the system. Furthermore, we are able to contribute
to Floodlight, which will expose our definition of a NBI to the public.

3 System Architecture
Contents
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.2 Load Balancing Principles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.3 System Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

3.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

In this chapter, the architectures of the Floodlight load balancer and the NBI are discussed. In
section 3.1, we provide an introduction to the developed system. In section 3.2, the load balancing
principles are addressed. Then, in section 3.3, Floodlight modules and NBI data structures are explained
from a high-level perspective. Finally, in section 3.4, we present the closing remarks of this chapter.

3.1 Introduction

The goal of this project is to make contributions to the definition of the NBI. This interface has a
wide scope of network application usages. We are going to consider the load balancing use case and
define a REST API with functions that are essential to the management of this kind of application.
The definition of this interface can help to find the requirements and operations that are necessary to
contribute to a load balancing NBI standard. Furthermore, it is important to have definitions of specific
domain interfaces, which will help uncover a broader definition for the complete NBI.
With this goal, we have to understand the load balancing functions that can be performed in a SDN
environment, and implement these operations in a controller. We considered the available open-source SDN
controllers; Floodlight was chosen due to having a NBI and a collaborative approach, with the
possibility given to the community to directly contribute to it. We modified the load balancer module
existing in this controller. This module had some limitations. It addressed ICMP, TCP and UDP traffic
and had a simple round-robin algorithm to distribute client requests, with no concern for traffic volume.
Also, there was no health monitoring system implemented to check the availability of the members. Con-
sidering these issues, we have extended the Floodlight load balancer module. We added new features
to the project, such as a health monitoring system, two new load balancing algorithms (the statistics
algorithm and the Weighted Round-Robin (WRR) algorithm), the capability to handle TLS traffic and new
statistics collection algorithms, and we improved and expanded the load balancer NBI. The management
operations exposed by the defined load balancer REST API are going to be addressed in section 3.3.2.
We defined these operations in order to allow an external application to have control over the functions
that are essential to the management of a load balancer. We developed a Command-Line Interface (CLI)
and a web GUI to provide easier access to this interface.
Our system’s core is composed of Floodlight and Mininet. The concepts and features here mentioned
are addressed over the next two chapters, revealing the details of their architecture and implementation.

3.2 Load Balancing Principles

Before we dive into the specifics of our system architecture, we will explain the fundamentals of a
load balancer [40]. Starting by introducing the load balancing terminology, we have the concepts of

virtual server, server or host, member and pool. The virtual server is a proxy for the actual physical or
virtual machine. The virtual server has a virtual IP; it is to this address that the clients will send their
requests. The server or host is the actual physical or virtual machine to which the requests are distributed
from the virtual server. The member is an entity that represents a server or host. The member can also
represent more than a host, i.e., it can contain the port associated with a service in that server. This allows
for greater granularity when load balancing the traffic, as it can consider the services included in a host.
A pool is a cluster of members that can share a property. The pool is usually associated with a virtual
server, so that requests sent to this virtual server will be distributed to the members of a specific pool.

In figure 3.1, we can see an example of how a load balancer works. Typically, the load balancer is
placed between the client and the servers. A client will send the request to the known virtual IP, then the
virtual server will choose the pool to which the request shall be delivered. This decision can be made
according to different policies, for instance, there can be a pool only comprised of Hypertext Transfer
Protocol (HTTP) members and another pool solely formed by Domain Name Server (DNS) members.
Therefore, the virtual server can pick a pool to distribute the request based on its characteristics. Then, a
member of the chosen pool will answer the client’s request. To make the process completely transparent
for the client, the IP of the member that responded will be replaced by the virtual IP. In the example
presented in figure 3.1, we have a load balancer with four servers, two pools and five members. It
is possible for a server to have members from different pools, as they can have multiple services and
distinct services can be grouped in separate pools.

The goal of a load balancer is to minimize the load on the servers. There are various algorithms
that define which member of a pool should be picked to serve an incoming request. For example,
simple round-robin between the members of a pool would make each host answer the same number
of requests, but this does not consider the size of each request. However, there are other algorithms
that can get information from the servers, for instance, memory, Central Processing Unit (CPU) and
bandwidth utilization, to better allocate the resources of the network. Also, each pool can have a different
algorithm, to ensure that relevant algorithms are applied to the pool's attributes.

Another important aspect for the load balancer to consider is network availability. There should
never be a situation where a pool chooses a member which is unavailable. To avoid this problem, health
monitors are implemented to make sure that members which are unavailable are removed from the algorithm
options.

In traditional networks, the load balancer, as presented in figure 3.1, is usually a physical machine
running the logic necessary to accomplish the previously discussed load balancing functions. But, if we
consider these load balancing principles in a SDN environment, there is no need for the load balancer
server and the additional middleware. The logic to incorporate load balance in the network can be coded
in the SDN controller. Moreover, the OpenFlow protocol addressed in section 2.2.1 is able to provide the

Figure 3.1: Common load balancer architecture.

necessary tools to have a fully functional load balancer. The aforementioned principles, such as client
transparency, load distribution algorithms and health monitors, can also be implemented resorting to the
OpenFlow protocol.

3.3 System Overview

Our project focuses on data center networks, as it is an environment where SDN is gaining popularity,
as seen in the previous chapter. In data centers, the most common topology is the tree topology [41].
We will be using it to deploy our solution and evaluate it.
The system's high-level architecture is represented in figure 3.2. Traffic will be generated from clients
to the network, represented in green. The network, in a tree topology, will receive the incoming traffic
at the root OVS A. Then, the switches B and C will forward the traffic to the servers according to load
balancing rules defined by the SDN controller. The controller gets the instructions necessary to create the
load balancing activity through the NBI.
We defined the load balancer NBI as an interface based on the principles previously mentioned. It
is easily extensible, so in case there is a need for more advanced load balancers, the changes can be
incorporated into the NBI. This interface is developed according to the fundamental principles of REST
web applications, as it provides the service with independence from platforms and programming languages.
However, Floodlight's NBI has some violations of these principles, as mentioned before in section 2.2.2.

Figure 3.2: High-level architecture of the system.

Knowing this, we have created an interface that fixes the first violation of the REST principles in Floodlight.
The load balancer REST API no longer exposes the media type in a URI. However, the second
violation corresponds to exposing a fixed set of URIs. This has not been addressed, as it showed
a trade-off between performance and scalability. Furthermore, the load balancer API with a fixed
set of URIs stays consistent with the other modules' APIs, which also have an invariable set of URIs
defining their interface.

As seen in figure 3.2, the forwarding elements used in the network are connected to Floodlight. The
controller serves as the control plane for these elements. The switch used for the data plane of our
solution is the OVS. We have addressed the features and functionalities of the OVS in section 2.4.3.
This switch is compatible with both Floodlight and Mininet.

We have presented a general architecture and some concepts of our system. Now, we will go into
more detail regarding the solution’s architecture. We will present the data structures of the Floodlight
load balancer module, as well as other modules that we used. Furthermore, we address some of the
characteristics of the Floodlight NBI.

Figure 3.3: Simplified data structures of the load balancer module.

3.3.1 Floodlight Internal Modules

Floodlight is built from multiple application modules. The core services that make use of the OpenFlow
protocol to build an OpenFlow controller can interact with these application modules through a Java
API. The application modules could be placed outside of the controller and communicate through the
NBI. However, due to the high volume of communication with the controller needed by these applications, they are
compiled with it.
The load balancer, statistics and static flow entry pusher modules are part of that type of applica-
tion. The load balancer is the principal component of our project. It is responsible for distributing net-
work traffic across the available servers, thus increasing availability and responsiveness. This module
was designed to be compatible with the OpenStack Neutron load balancer as a service1 . In section 3.2,
we see that the load balancing principles have a set of concepts. These concepts are represented
in the data structure of the load balancer module, presented in figure 3.3. It is a Unified Modeling
Language (UML) diagram of this module, corresponding to the five most important classes and their
relationships. Below, we describe the components of this UML, followed by a simplified code sketch of
these classes:

1 https://wiki.openstack.org/wiki/Neutron/LBaaS

• LBVip: This is the class that represents the virtual server and IP of the load balancer. An instance
of this class is identified by a unique attribute id. The address attribute is the IP that the clients are
able to see and to which they send their requests. This class has a list of the pools that are served
through the LBVip address. The protocol attribute is used to set the type of traffic which the LBVip
will be responsible for distributing. The proxyMac attribute is the Media Access Control (MAC)
address of the load balancer. This attribute is constant for every instance and it is used to respond
to ARP requests.

• LBMember: This class is the representation of a member, according to the load balancing princi-
ples. The attribute id is a unique identifier for an instance. A member is composed of an address
and a port, which are attributes of this class. Furthermore, a member is associated with a pool and
a LBVip. The LBMember has the MAC address of the physical (or virtual) server that it is repre-
senting, saved in the macString attribute. The status attribute is used to determine the availability
of a member.

• LBMonitor: One important aspect of load balancing is the health monitor system. This class
represents one monitor that is responsible for evaluating the availability of the members of a pool.
Each entity of this class is identified by a unique attribute id. The type attribute is the description
of the method the monitor will use to query the availability of the member. The delay attribute is
the period of time between queries to the member. The LBMonitor class is responsible for the
status change in a member. If a member is deemed inactive because it could not answer to the
monitor, then its status will be changed accordingly. One LBMonitor can only be associated with
one LBPool.

• LBPool: This class represents a cluster or pool of members. It is identified by the attribute id. This
attribute must be unique among other instances of this class. The lbMethod is the algorithm used
by the pool to pick the members associated with it. The available algorithms will be presented and
discussed in the next chapter. Every LBPool must have a vipId, this corresponds to the attribute id
of a LBVip. The members and monitors that are connected to a pool will be saved in two different
lists. These lists contain the ids of the corresponding LBMember and LBMonitor. The LBPool also
has a link to PoolStats, which is described below.

• PoolStats: To better understand the state of the network, information about the created pools is
provided. This information corresponds to the bytes that a certain pool has received, the
bytes that it has transmitted, the total flows that have been created by it and the currently active
flows. This intelligence can be used to plan modifications to the network or to diagnose possible
problems.
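
The following is a simplified sketch of these data structures, assuming the attribute names shown in
figure 3.3; types and visibility are reduced for illustration and do not reproduce the actual Floodlight
classes.

import java.util.ArrayList;
import java.util.List;

// Simplified sketch of the load balancer data structures (illustrative only).
class LBVip {
    String id;                // unique identifier of the virtual server
    String address;           // virtual IP seen by the clients
    byte protocol;            // type of traffic handled (e.g., TCP, UDP, ICMP, TLS)
    String proxyMac;          // static MAC address used to answer ARP requests
    List<String> pools = new ArrayList<>(); // ids of the pools served by this VIP
}

class LBPool {
    String id;                 // unique identifier of the pool
    String vipId;              // id of the LBVip this pool belongs to
    byte lbMethod;             // load balancing algorithm used by the pool
    List<String> members = new ArrayList<>();  // ids of associated LBMember instances
    List<String> monitors = new ArrayList<>(); // ids of associated LBMonitor instances
    PoolStats poolStats;       // traffic statistics of this pool
}

class LBMember {
    String id;                 // unique identifier of the member
    String address;            // IP address of the server hosting the service
    int port;                  // port of the service represented by this member
    String macString;          // MAC address of the (physical or virtual) server
    String poolId;             // pool this member belongs to
    String vipId;              // VIP this member serves
    int status;                // availability as set by the health monitoring system
}

class LBMonitor {
    String id;                 // unique identifier of the monitor
    String type;               // method used to query member availability
    int delay;                 // period between queries to the members
    String poolId;             // the single pool this monitor is associated with
}

class PoolStats {
    long bytesIn;              // bytes received by the pool
    long bytesOut;             // bytes transmitted by the pool
    int totalFlows;            // total flows created for the pool
    int activeFlows;           // currently active flows
}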

There are still other important data structures in the load balancer module. All the LBVip, LBPool,
LBMember and LBMonitor instances in the network are saved in hash maps. A hash map, as the name suggests,
is a data structure that maps keys to values and uses a hash function to provide access to its elements.
It has the attribute id of the classes as the key and the instance of the class as the value. This allows
all the components in the network to be retrieved and accessed by their unique identifiers. These hash maps can
grow to a large number of elements, particularly the members map. To avoid scalability and
performance issues, operations that use these data structures avoid iterating through the
hash maps. They are mostly used for Create, Read, Update and Delete (CRUD) operations. The expected time
complexity of these operations in a Java hash map is constant, which means that the number of stored
elements does not affect the access performance of CRUD operations, regardless of how large the maps grow.
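As a minimal, generic sketch of this id-keyed access pattern (names are illustrative), each CRUD operation
below runs in expected constant time and requires no iteration over the stored elements:

import java.util.HashMap;
import java.util.Map;

/** Generic id-to-instance store mirroring the controller's hash maps (illustrative). */
public class EntityStore<T> {
    private final Map<String, T> entities = new HashMap<>();

    public void create(String id, T entity) { entities.put(id, entity); }   // Create
    public T read(String id)                { return entities.get(id); }    // Read
    public void update(String id, T entity) { entities.put(id, entity); }   // Update
    public void delete(String id)           { entities.remove(id); }        // Delete
}

For example, the members map would correspond to an EntityStore<LBMember> keyed by the member id.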
Another important and useful module for our project is the statistics module. It is responsible for the
collection of statistics regarding the forwarding elements. For each statistics message in the OpenFlow
protocol, it is given the necessary inputs to queue that message onto the switches. Then, when a particular
statistics request is made, a number of threads equal to the number of available switches in the network
is created to deliver these messages. The responses are saved in structures created according to each type
of message. All the other application modules have access to this information through the interface
provided by the statistics module. This is how the load balancer is able to power the
load balancing algorithms, the pool statistics and other services that will be further discussed in the
next chapter.
The static flow pusher module is another important piece that builds the load balancer. This module
is responsible for the creation and removal of a set of static flows that are installed on the forwarding
elements. This module provides an interface to enable other modules to add static flows to a switch,
delete a static flow and get a list of all flows from a switch. The load balancer module is able to create the
necessary static flows through this interface. When a connection to the LBVip is successfully
established, the routes between the member and the client are set through the static flow pusher service.
These are the internal Floodlight modules whose synergy is used to build our solution. Other
modules are used as well, but with a lesser degree of relevance.

3.3.2 Floodlight Northbound Interface

Floodlight's NBI is well documented and it is the recommended way to utilize the features of this con-
troller. The internal modules have to create the resources that they want to see exposed in this interface.
For instance, the load balancer module will expose the LBVip resources in order to allow external ap-
plications to execute CRUD operations in that data structure. Similarly, the statistics module can be
enabled or disabled and the statistics collected in the forwarding elements are accessed through the

NBI. This way it is possible to control the Floodlight internal modules from an external point
of view. Due to the nature of the information exposed in the NBI, it would not be considered secure
to have potentially malicious users access the contents of the resources. To avoid this problem, the Floodlight
NBI can be accessed safely using the TLS protocol. However, access to this interface through plain HTTP,
without any security protocol, is not forbidden. The data itself can be protected using
TLS, but Floodlight can also restrict access to the NBI if deemed necessary. It is possible to create an
access control list or authentication process via the creation of a Java KeyStore (JKS). This is a type of
authentication that does not require the user to have a password to access the interface. The JKS
works by whitelisting the trusted users that are permitted to access the NBI. The JKS has a
management tool - Keytool - that can be utilized to add users' public keys to the JKS, thus giving them
permission to access the NBI on a secure port.
Floodlight provides a web GUI exposed through the NBI, so that it is easier for users to manage
the controller. This web interface is important as it can be used as a debugging tool and offers all the
necessary functions to manage a complex network. We also developed a Python CLI application with
the load balancing management functions. Having a GUI as well as a CLI to manage the load balancer
is in itself a test of this NBI and overall favorable to Floodlight.
Now that we have presented some important concepts of the NBI, we will describe the exposed op-
erations of the load balancer API. To access the NBI and the resources defined there, a user simply has
to address the controller IP and port together with the URI of the desired operation. The API supports
requests and responses in the JSON data format. Furthermore, this interface returns standard HTTP
error messages in case there are failures when processing a request. Moreover, some operations re-
quire the input of arguments in order to execute their functions. These parameters have some constraints;
if the inputs do not respect the constraints, the API will return an error. The details concerning the
load balancer REST API are addressed in appendix A and the operations are summarized in table 3.1,
together with the statistics related operations with which we have further extended the Floodlight NBI.
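As an illustration of how an external application uses this interface, the following sketch creates a VIP
through the NBI, assuming a controller reachable at localhost:8080 and the /quantum/v1.0/vips/ endpoint
of table 3.1; the JSON field names and values shown are illustrative, and the exact schema is given in
appendix A.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class CreateVipExample {
    public static void main(String[] args) throws Exception {
        // controller address and NBI endpoint (assumed defaults)
        URL url = new URL("http://localhost:8080/quantum/v1.0/vips/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        // illustrative JSON body: id, name, protocol and virtual address of the VIP
        String body = "{\"id\":\"1\",\"name\":\"vip1\",\"protocol\":\"tcp\",\"address\":\"10.0.0.100\",\"port\":\"80\"}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // a response code in the 200 range indicates the VIP was created
        System.out.println("HTTP response code: " + conn.getResponseCode());
        conn.disconnect();
    }
}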

3.4 Discussion

In this chapter, we have seen the general architecture of the system implemented in this thesis.
The main focus of the solution is the implementation of load balancing functionalities using the features
available in a SDN environment. To achieve this, we have selected an open-source SDN controller that
fits our needs. Floodlight allows us to contribute to its source code and provides an extensible NBI. We
introduced Floodlight's NBI, describing its most important features and how they can be accessed by an
external program. Finally, we presented the definition of the NBI for load balancing applications.

Entity     | URI                                                | HTTP Verb | Arguments                                          | Description
VIP        | /quantum/v1.0/vips/                                | POST, PUT | -                                                  | Create a VIP
VIP        | /quantum/v1.0/vips/"vipId"                         | DELETE    | vipId: ID of the VIP                               | Delete a VIP
VIP        | /quantum/v1.0/vips/"vipId"                         | GET       | vipId: ID of the VIP                               | List the VIP
VIP        | /quantum/v1.0/vips/                                | GET       | -                                                  | List all the VIPs
Pool       | /quantum/v1.0/pools/                               | POST, PUT | -                                                  | Create a Pool
Pool       | /quantum/v1.0/pools/"poolId"                       | DELETE    | poolId: ID of the Pool                             | Delete a Pool
Pool       | /quantum/v1.0/pools/"poolId"                       | GET       | poolId: ID of the Pool                             | List a Pool
Pool       | /quantum/v1.0/pools/                               | GET       | -                                                  | List all the Pools
Pool       | /quantum/v1.0/pools/"poolId"/members               | GET       | poolId: ID of the Pool                             | List Members by Pool
Pool       | /quantum/v1.0/pools/"poolId"/health_monitors       | POST, PUT | poolId: ID of the Pool                             | Associate Monitor to Pool
Pool       | /quantum/v1.0/pools/"poolId"/health_monitors       | DELETE    | poolId: ID of the Pool                             | Dissociate Monitor with Pool
Pool       | /quantum/v1.0/pools/"poolId"/health_monitors       | GET       | poolId: ID of the Pool                             | List Pools with Monitors
Pool       | /quantum/v1.0/pools/"poolId"/stats                 | GET       | poolId: ID of the Pool                             | List statistics of Pool
Pool       | /quantum/v1.0/pools/"poolId"/members/"memberId"    | GET       | poolId: ID of the Pool, memberId: ID of the Member | Prioritize a member of a Pool
Member     | /quantum/v1.0/members/                             | POST, PUT | -                                                  | Create a Member
Member     | /quantum/v1.0/members/"memberId"/"weight"          | POST, PUT | memberId: ID of the Member, weight: weight of the Member | Set the weight of a Member
Member     | /quantum/v1.0/members/"memberId"                   | DELETE    | memberId: ID of the Member                         | Delete a Member
Member     | /quantum/v1.0/members/"memberId"                   | GET       | memberId: ID of the Member                         | List a Member
Member     | /quantum/v1.0/members/                             | GET       | -                                                  | List all the Members
Monitor    | /quantum/v1.0/health_monitors/"status"             | POST, PUT | status: "enable" or "disable"                      | Enable or disable health monitoring
Monitor    | /quantum/v1.0/health_monitors/monitors/"period"    | POST, PUT | period: period to check members health             | Change period of monitoring
Monitor    | /quantum/v1.0/health_monitors/                     | POST, PUT | -                                                  | Create a Monitor
Monitor    | /quantum/v1.0/health_monitors/"monitorId"          | DELETE    | monitorId: ID of the Monitor                       | Delete a Monitor
Monitor    | /quantum/v1.0/health_monitors/"monitorId"          | GET       | monitorId: ID of the Monitor                       | List a Monitor
Monitor    | /quantum/v1.0/health_monitors/                     | GET       | -                                                  | List all the Monitors
Statistics | /wm/statistics/config/"status"/                    | POST, PUT | status: "enable" or "disable"                      | Enable or disable statistics collection
Statistics | /wm/statistics/config/flow/"period"                | POST, PUT | period: period for collection of flow stats        | Change period of flow stats collection
Statistics | /wm/statistics/config/port/"period"                | POST, PUT | period: period for collection of port stats        | Change period of port stats collection
Statistics | /wm/statistics/flow/"dpid"/                        | GET       | dpid: ID of the switch                             | Collect flow statistics
Statistics | /wm/statistics/bandwidth/"dpid"/"port"/            | GET       | dpid: ID of the switch, port: port of the switch   | Collect bandwidth statistics
Statistics | /wm/statistics/portdesc/"dpid"/"port"/             | GET       | dpid: ID of the switch, port: port of the switch   | Collect description of port

Table 3.1: Operations of the API for the management of load balancers.

4 Implementation
Contents
4.1 Load Balancer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

4.2 Northbound Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

4.3 Mininet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

4.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

In this chapter, we will focus on the details of the implementation of the load balancer and the NBI.
In section 4.1, we present the algorithms and flow charts that represent the fundamental functions of
the load balancer module in Floodlight. In section 4.2, we address the details of the NBI implementation
and the applications developed to access it. Then, in section 4.3, we provide detail of how the system
is completed with Mininet. Finally, in section 4.4, we discuss the contents and make the final remarks
regarding this chapter.

4.1 Load Balancer

As stated in chapter 2 section 2.4.2, Floodlight modules are created as Java packages. Therefore,
the load balancer is a Java package defined, along with others, in the controller. The modules usually
define an interface that is common to Floodlight, so that modules can provide their service to others.
The load balancer uses this to provide the necessary functionalities. For instance, it uses the statistics
module to get information from the switches that is used to power the decision algorithms.
Below, we present the implementation of the most important elements that comprise the load bal-
ancer module: the forwarding capabilities needed to set up the communication between client and server;
the statistics collection used to give context to the decisions of the load balancer; the load
balancing algorithms that determine which server will satisfy the client's requests; the pool statistics,
which provide useful information to manage the network and debug possible
problems; and, finally, the health monitoring system.

4.1.1 Client to Server Communication

This feature was already implemented in Floodlight; however, we explain it here to better under-
stand how the load balancer operates. Moreover, we have contributed auxiliary functions to
improve the source code of this component, as well as other features, such as the TLS handler. In
figure 4.1, we see the high-level flow chart of the forwarding algorithm of the load balancer. Floodlight
modules that want to process a packet-in message have to define the corresponding listener. Then,
these messages are processed in order, module by module. The load balancer is set to process the
packet-in before other modules, such as the forwarding module. This way, if a packet-in does not have
the load balancer virtual IP address as the destination, the load balancer ignores it and the message
continues through the Floodlight processing pipeline, specifically to the forwarding module.
Considering a state where the flow tables of the switches are empty, if a request is sent to the LBVip
address, a packet-in must be sent to Floodlight as the forwarding elements do not know how to process
this request. The first thing the load balancer does is get some details of the incoming packet. It retrieves
the IP address and port to which the packet is directed. Then, it deals with the ARP requests. The load

Figure 4.1: Flow chart of the forwarding algorithm of the load balancer module.

balancer responds to ARP requests by pushing a packet-out containing an ARP reply to the forwarding
element that sent the request. The reply carries the proxyMac attribute of the LBVip that is addressed
in the packet-in, which is a static MAC address common to every instance of this class.

Then, the load balancer handles only IPv4 traffic. It starts by filtering any TLS protocol messages. It
compares the target port to a list of known TLS protocol ports. If the packet is destined to port 443, 993, 995
or 465, which correspond to the Hypertext Transfer Protocol Secure (HTTPS), Internet Message Access
Protocol (IMAP), Post Office Protocol (POP) and Simple Mail Transfer Protocol (SMTP) TLS protocol
ports, the packet is redirected to a LBVip with the protocol attribute set to TLS. If the
packet is not destined to any of those ports, there is no need to redirect it to a LBVip associated
with the TLS protocol. It is forwarded to a pool of the LBVip whose address attribute equals the destination
IP address of the packet-in message. After deciding the LBVip to forward the request to, the load balancer
picks a LBPool associated with it. Then, the LBPool chooses a member, which is the final server that
responds to the request. The algorithms used to make these decisions are discussed further ahead in
this chapter.

From here, it is necessary to create bidirectional routes between the client and the member that was
picked. The load balancer uses another module's services to get the available devices in the network. It
is able to map the IP addresses of the client and the member to the corresponding MAC addresses. The
devices in the network store all the IP addresses that are associated with them. This way, it is possible to set
the macString attribute of LBMember with the corresponding server MAC address. Knowing the devices
of the client and the member, we create bidirectional routes between them. Using the data retrieved from
the packet-in, we create matches and actions for the flow tables of the forwarding elements in the route.
We set the matches with the IP address of the client, the port of the switch used to forward the message
and the IP protocol. For the actions on the matched packets that are sent from the client to the LBVip,
the switches change the destination MAC and IP addresses to those of the LBMember. For the actions
on the matched packets that are sent from the LBMember to the client, the switches change the source
MAC address of the packet to that of the LBVip and the source IP to the original destination address
of the packet-in. The LBVip IP address is not set as the source IP, because there is the possibility of
redirection to another LBVip, in the case of a TLS request. The above actions are necessary to have
transparent packet redirection for the clients. This means that the client is not aware that the request
can be answered by a server other than the original destination.
Furthermore, the load balancer uses the static flow pusher module in order to have these routes
established permanently. This module enables operations to add flow table entries to and remove them from the
switches, and pushes that information to them through OpenFlow flow-mod messages. The flow table entries are created
with no timeout, so they will last until they are manually removed from the switches. This guaran-
tees connection persistence, as the packets of the same client will match the same fields and the
forwarding elements will route them to the same member without the need for a packet-in message.
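To give an idea of what such a flow entry looks like when built inside a Floodlight module, the following
is a minimal sketch in the OpenFlowJ-Loxigen builder style used by Floodlight modules; the addresses,
port numbers and priority are illustrative, the handling of set-field actions may differ between OpenFlow
versions, and the actual load balancer installs its entries through the static flow pusher service rather
than writing them directly.

import java.util.ArrayList;
import java.util.List;

import net.floodlightcontroller.core.IOFSwitch;
import org.projectfloodlight.openflow.protocol.OFFactory;
import org.projectfloodlight.openflow.protocol.OFFlowAdd;
import org.projectfloodlight.openflow.protocol.action.OFAction;
import org.projectfloodlight.openflow.protocol.match.Match;
import org.projectfloodlight.openflow.protocol.match.MatchField;
import org.projectfloodlight.openflow.types.EthType;
import org.projectfloodlight.openflow.types.IPv4Address;
import org.projectfloodlight.openflow.types.MacAddress;
import org.projectfloodlight.openflow.types.OFPort;

public class FlowEntrySketch {

    /** Pushes a client-to-member rewrite rule to the given switch (illustrative values). */
    static void pushClientToMemberFlow(IOFSwitch sw) {
        OFFactory factory = sw.getOFFactory();

        // match IPv4 packets from the client arriving on switch port 1
        Match match = factory.buildMatch()
                .setExact(MatchField.ETH_TYPE, EthType.IPv4)
                .setExact(MatchField.IN_PORT, OFPort.of(1))
                .setExact(MatchField.IPV4_SRC, IPv4Address.of("10.0.0.1"))
                .build();

        // rewrite the destination to the chosen member and forward out of port 2
        List<OFAction> actions = new ArrayList<>();
        actions.add(factory.actions().buildSetField()
                .setField(factory.oxms().buildEthDst()
                        .setValue(MacAddress.of("00:00:00:00:00:02")).build())
                .build());
        actions.add(factory.actions().buildSetField()
                .setField(factory.oxms().buildIpv4Dst()
                        .setValue(IPv4Address.of("10.0.0.2")).build())
                .build());
        actions.add(factory.actions().buildOutput()
                .setPort(OFPort.of(2))
                .setMaxLen(0xFFffFFff)
                .build());

        // no timeouts, so the entry persists until it is explicitly removed
        OFFlowAdd flowAdd = factory.buildFlowAdd()
                .setMatch(match)
                .setActions(actions)
                .setIdleTimeout(0)
                .setHardTimeout(0)
                .setPriority(32768)
                .build();

        sw.write(flowAdd);
    }
}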

4.1.2 Statistics Collection

In figure 4.2, we present the statistics collection algorithm as a flow chart. To start the statistics
collection, an application has to send a message to the NBI or enable it directly in the Floodlight
default properties file. However, it is the NBI function that has the final say regarding the state of
statistics collection. To send the message towards the switches, it is necessary to create a number
of threads equal to the number of forwarding elements in the network. Then, each thread will query one
switch in order to receive the requested statistics. A timeout is created to guarantee the information is
retrieved in a reasonable time frame. If a thread is unable to complete its retrieval before the time expires,
the switch statistics of the failing thread will not be considered in the reply.
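A minimal sketch of this per-switch fan-out, assuming plain java.util.concurrent primitives rather than
the controller's own thread handling; the query itself is reduced to a placeholder and the names are
illustrative.

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.*;

public class StatsCollectorSketch {

    /** Queries every switch in parallel and drops replies that fail or miss the timeout. */
    static Map<String, String> collect(List<String> switchIds, long timeoutMs) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(switchIds.size());
        Map<String, Future<String>> pending = new HashMap<>();
        for (String dpid : switchIds) {
            Callable<String> task = () -> querySwitch(dpid); // one thread per switch
            pending.put(dpid, pool.submit(task));
        }
        Map<String, String> replies = new HashMap<>();
        for (Map.Entry<String, Future<String>> e : pending.entrySet()) {
            try {
                replies.put(e.getKey(), e.getValue().get(timeoutMs, TimeUnit.MILLISECONDS));
            } catch (ExecutionException | TimeoutException ex) {
                // a switch that fails or exceeds the timeout is simply left out of the reply
            }
        }
        pool.shutdownNow();
        return replies;
    }

    /** Placeholder for the actual OpenFlow statistics request to a switch. */
    static String querySwitch(String dpid) {
        return "stats-for-" + dpid;
    }
}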
After enabling statistics collection, Floodlight will query the forwarding elements regarding three types
of statistic information:

• Port Stats Message: The statistics returned by this message are used to calculate the bandwidth
of a switch port. The port to get the statistics from can be specified to collect it individually. This

Figure 4.2: Flow chart regarding the statistics collection module.

message returns, among other information, the received and transmitted bits and the received and
transmitted packets of an interface. Because these counters refer to the time since the port came
up, we have to retrieve two of these messages before being able to calculate the bandwidth of
a port. It would be ideal to have the switch report the bandwidth of a port directly through the
message, but this is not the case. Therefore, the bandwidth is calculated from the difference between
the current and the most recent statistics reports, over the interval between their collection (a sketch
of this calculation is shown after this list). The information retrieved by this message is used by the
load balancing algorithm for choosing a member. This message was already implemented in Floodlight.

• Port Description Message: This message is used to collect information regarding the description
of a switch port. It returns the hardware address, the features supported by the port, the current
features, the configuration and the port status (enabled or disabled). This information is then used
by the health monitoring algorithm to decide the availability of a member.

• Flow Stats Message: The information delivered to the controller by this message enables it to
understand the state of the configurations of the flow tables in the forwarding elements. The flows
to get information from can be filtered by the match field, as well as the output port of the actions
field and the table identifier. It is also possible to get all the individual flows without filtering. This
message returns useful parameters such as the number of bytes and packets matched by a flow entry
and the total number of flow entries in a flow table. This information is taken into account in the
pool statistics that will be explained in section 4.1.5.
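
A minimal sketch of the bandwidth calculation referred to above, assuming two consecutive port-stats
samples holding cumulative byte counters and the timestamps at which they were collected; field and
method names are illustrative.

public class PortBandwidthSketch {

    /** Cumulative counters from one port-stats reply, plus the collection time. */
    static class PortSample {
        long rxBytes;
        long txBytes;
        long timestampMs;

        PortSample(long rxBytes, long txBytes, long timestampMs) {
            this.rxBytes = rxBytes;
            this.txBytes = txBytes;
            this.timestampMs = timestampMs;
        }
    }

    /** Receive bandwidth in bits per second between two consecutive samples of the same port. */
    static double rxBitsPerSecond(PortSample previous, PortSample current) {
        double intervalSeconds = (current.timestampMs - previous.timestampMs) / 1000.0;
        if (intervalSeconds <= 0) {
            return 0; // samples too close together (or clock skew): no estimate
        }
        // counters are cumulative since the port came up, so only the difference matters
        long deltaBytes = current.rxBytes - previous.rxBytes;
        return (deltaBytes * 8) / intervalSeconds;
    }
}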

In the statistics module, these messages are set to be transmitted every ten seconds, by default.
However, this can be changed through the NBI and the default configuration file in Floodlight. The
network overhead of having these messages sent periodically is evaluated in the next
chapter.
This module has an interface available to other Floodlight modules, so that they can use its resources.
The load balancer makes use of this interface to get the statistics information necessary to power the
load balancing algorithms. The statistics module implements three different hash maps, which refer
to the three messages explained above. The hash maps store the important contents of the reply
messages in their values. The hash maps regarding the port related messages have the switch-port
combination as their keys. In the case of the flow stats message, the key that identifies a reply message's
content is the pair of flow match field and switch identifier. By accessing these structures it is simple to
share the statistics information with other modules, as they only need to know the key to the maps.

4.1.3 Load Balancing Algorithms

In this section, we address in detail the algorithms used to distribute the client requests to the different
available servers. Firstly, the algorithm used to pick a member from a pool only takes into account the
statistics collected from the switches of the network. Unlike other load balancing algorithms
that access the resources in a server and collect information regarding its state, this algorithm does not
query the server directly. Instead, it relies on the OpenFlow protocol to get context from the forwarding
elements to make a decision. This could be done using another management protocol, such as the
Simple Network Management Protocol (SNMP), but with the OpenFlow protocol we are capable of
getting the necessary context information to build a reliable load balancer.
Floodlight had one algorithm available to perform the distribution of client requests. It was a simple
round-robin algorithm, in which the pool simply iterates through the members list and picks the next
element until the end, then, it goes back to the start of the list of members and continues iterating.
To further improve the load balancer, we have developed two more load balancing algorithms. The
first one is the WRR algorithm. In this algorithm, a weight is given to each member and the algorithm
will pick a member according to the weight associated with it. The greater the weight of a member, the
higher the chances of it being chosen. With WRR, the pool is given all the members' weights. Then,
it randomly generates a number between one and the sum of the weights of the members in
that pool. It proceeds to iterate through the members' weights in ascending order while summing them.
When this running sum reaches the random number generated, the member currently in iteration
is picked. This algorithm ensures that the members with higher weights will be picked more often
than the ones with lower weights.
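The following is a minimal sketch of this weighted selection, assuming each candidate member carries a
positive integer weight; the class and field names are illustrative and simplified from the module's data
structures.

import java.util.Comparator;
import java.util.List;
import java.util.Random;

public class WeightedRoundRobinSketch {

    static class Candidate {
        final String memberId;
        final int weight;

        Candidate(String memberId, int weight) {
            this.memberId = memberId;
            this.weight = weight;
        }
    }

    private static final Random RANDOM = new Random();

    /** Picks a member id with probability proportional to its weight. */
    static String pick(List<Candidate> candidates) {
        if (candidates.isEmpty()) {
            return null; // nothing to pick from
        }
        // iterate in ascending order of weight, as described above
        candidates.sort(Comparator.comparingInt(c -> c.weight));

        int totalWeight = candidates.stream().mapToInt(c -> c.weight).sum();
        int target = RANDOM.nextInt(totalWeight) + 1; // random number in [1, totalWeight]

        int runningSum = 0;
        for (Candidate c : candidates) {
            runningSum += c.weight;
            if (runningSum >= target) {
                return c.memberId; // running sum reached the target: pick this member
            }
        }
        return candidates.get(candidates.size() - 1).memberId; // defensive fallback
    }
}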
The second algorithm implemented is based on the bandwidth of the port connected to a member.
Using the statistics collection described previously, we are able to retrieve the bandwidth of the ports
connected to the members. According to the observed bandwidth, the algorithm will choose the member
which has the least consumed bandwidth. Due to the way statistics collection works, there is a period
during which the bandwidth information remains constant. To avoid picking the same member over this period
of time, a list of previously chosen members is kept and the algorithm will not take into account the members
in it. The capacity of this list depends on the number of members in the pool and it works as a First In
First Out (FIFO) queue: elements are inserted at the beginning and pushed to the right, until the list
reaches its maximum capacity and drops the last element.
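A minimal sketch of this selection, assuming bandwidth values are already available per member and
using a bounded FIFO of recent picks; the names and the use of a plain map are illustrative.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Map;

public class LeastBandwidthSketch {

    private final Deque<String> recentlyPicked = new ArrayDeque<>();
    private final int fifoCapacity;

    LeastBandwidthSketch(int fifoCapacity) {
        this.fifoCapacity = fifoCapacity;
    }

    /** Picks the member with the lowest consumed bandwidth that was not recently chosen. */
    String pick(Map<String, Double> bandwidthByMember) {
        String best = null;
        double bestBandwidth = Double.MAX_VALUE;
        for (Map.Entry<String, Double> e : bandwidthByMember.entrySet()) {
            if (recentlyPicked.contains(e.getKey())) {
                continue; // skip members chosen while the statistics have not refreshed
            }
            if (e.getValue() < bestBandwidth) {
                best = e.getKey();
                bestBandwidth = e.getValue();
            }
        }
        if (best != null) {
            recentlyPicked.addFirst(best);           // remember this pick
            if (recentlyPicked.size() > fifoCapacity) {
                recentlyPicked.removeLast();         // FIFO: drop the oldest entry
            }
        }
        return best;
    }
}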
The presented algorithms have some limitations to address. The first algorithm
does not consider the conditions of the members and is used only to blindly balance the load across
the servers. The second algorithm is useful to give a bias towards certain members, but the weights
are set according to user preference and therefore do not consider the members' context. The third algo-
rithm gets indirect context from the members and is able to automatically select the best option.
Considering the three available algorithms, the purpose of a load balancer is met, although to different
degrees of satisfaction. As each pool can implement a different algorithm, there is a network
management opportunity to use them according to the network needs.

4.1.4 Health Monitoring

We have implemented a health monitoring system, which is an important component of any load
balancer. It is necessary to ensure that no unavailable member in the network can be selected by
the load balancing algorithms. In figure 4.3, the flow chart of the functionalities of the health monitoring
system is presented. Firstly, the health monitoring system relies on the statistics module being enabled.
It is through the context given by these statistics that the system is able to monitor the members. As
seen before, the statistics can be enabled through the NBI and through the default configuration file in
Floodlight. The health monitoring system can be activated through this interface as well. If this system
is activated, but the statistics are disabled, then it will automatically enable the statistics module in order
to offer a better service.
The statistics module sends the port description message to the forwarding elements, and the health monitoring system then uses the collected statistics to determine if a member is available. The port description message carries information about the state of the port. The health monitoring system maps the members to the connected switch ports and checks if the associated port is up or down. If the port is down, the member status attribute is set to minus one. This is a very simple way to check the availability of a member, as there is no chance that the member will answer requests if its switch port is down. Nevertheless, if the port is active, there is still the possibility that the member is unavailable. A member represents a service in a server, which means the member can

Figure 4.3: Health monitoring system flow chart of the load balancer module.

still be unavailable if that particular service is shut down in the server. To check the communication state of the server, the health monitoring system sends an ICMP request to it, in order to assess its connectivity. The reply to this ICMP request sent by the health monitors is redirected to the controller as a packet-in message, because the switches have no flow information on how to forward the ICMP reply and therefore forward it to the controller by default. If Floodlight receives a response to this request, the member is deemed healthy and its status is set to one. If there is no response from the server, the status of the member is set to minus one.
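A simplified sketch of the two checks performed by the health monitoring system is shown below. The port-state lookup and the ICMP probe are abstracted into hypothetical helper methods, since in the real module they come from the statistics collection and from packet-in handling in Floodlight.

```java
// Sketch of the health monitoring decision: a member is only marked healthy
// (status = 1) if its switch port is up and it replied to the ICMP probe;
// otherwise its status is set to -1. The helper methods are placeholders.
class HealthMonitorSketch {

    int evaluateMember(String memberId) {
        // 1) Port description statistics: is the switch port of the member up?
        if (!isSwitchPortUp(memberId)) {
            return -1;
        }
        // 2) ICMP echo request sent as a packet-out; the reply reaches the
        //    controller as a packet-in because no flow matches it.
        if (!receivedIcmpReply(memberId)) {
            return -1;
        }
        return 1;
    }

    // Placeholder: in the real module this comes from the OpenFlow port
    // description reply collected by the statistics module.
    boolean isSwitchPortUp(String memberId) { return true; }

    // Placeholder: in the real module this is set when the controller sees
    // the member's ICMP reply arrive as a packet-in.
    boolean receivedIcmpReply(String memberId) { return true; }
}
```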

The status of the members is considered by the load balancing algorithms. If the health monitors are enabled and a pool is associated with a monitor, then the members of that pool need to have their status set to one. If the status of a member of that pool is not set to one, it will not be accounted for in

Figure 4.4: Flow chart regarding the implementation of pool statistics.

the available members list. This ensures that the members deemed unavailable by the health monitoring system will not be picked by the load balancing algorithms. If this system is disabled or there are no monitors associated with the pool, the algorithms ignore the status of the pool members.
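A minimal sketch of that filtering step follows; the member representation is illustrative and the real LBMember class has more attributes.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative member record with the status attribute described above.
class MemberEntry {
    String id;
    int status; // 1 = healthy, -1 = unavailable
}

// Sketch: build the list of members that the picking algorithms may use.
// If health monitoring is disabled or the pool has no monitor, the status
// field is ignored and every member is eligible.
class AvailableMembersFilter {
    static List<MemberEntry> available(List<MemberEntry> members, boolean monitored) {
        if (!monitored) {
            return members;
        }
        List<MemberEntry> healthy = new ArrayList<>();
        for (MemberEntry m : members) {
            if (m.status == 1) {
                healthy.add(m);
            }
        }
        return healthy;
    }
}
```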
Considering this system, we see that there is room for improvement. The member's service is not directly probed by the health monitors; instead, its availability is inferred from the switch port state and the general connectivity of the server. The first check is certain to correctly judge availability, but a successful ICMP reply does not guarantee that another service in the server is accessible. A health monitoring message could be built considering the service provided by each member. Still, it is satisfactory to have this system as it is, and an evaluation of its correctness is done in chapter 5.

4.1.5 Pool Statistics

In figure 4.4, we show the flow diagram of the main events triggered to implement statistics related to the load balancer pools. We developed this feature and exposed it through the NBI, so that an external application can manage the load balancer with greater context regarding the network traffic. It can be useful to determine how to associate pools to members and to debug possible network problems.
The pool statistics can be requested from the NBI. When this happens, the load balancer module gets the flow stats message values from the statistics module interface. Since the request is made asynchronously, the function that handles the gathering of the statistics runs periodically. After receiving the message values, Floodlight has to filter the flows by pool, in order to aggregate the statistics. When the flows are created through the client to server communication process, they are saved in a hash map. This map uses the match field of a flow together with the switch identifier as the key, which identifies a specific flow inside a switch. The values of the map are the LBVip id attribute, so it is possible to determine which flows were created by each LBVip. This way, the pool statistics function can know which pools are responsible for the flows, since there is an association between LBPool and LBVip. Comparing the flows available in the retrieved flow stats message with the flows in the hash map created in the client to server communication process, it is possible to extract the necessary information regarding each flow. The parameters used are the bytes received, bytes transmitted and total active flows in the forwarding elements. Other variables could have been used, such as the packets received and transmitted, but they would be redundant and less revealing of the pool conditions than the byte counters. The active flows in a pool can be used, for instance, by external security applications to limit the maximum allowed flows in a pool to prevent Denial of Service (DoS) attacks.
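The aggregation step can be sketched as follows: each flow reported in the flow-stats reply is looked up in the map built when the flow was installed, resolved to a pool, and its counters are added to that pool's totals. The types used here (FlowStatsEntry, PoolStats, the map keys) are illustrative simplifications of the actual Floodlight structures.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative flow statistics entry: the switch and match it belongs to,
// plus the byte counters reported by the OFPMP Flow reply.
class FlowStatsEntry {
    String switchId;
    String match;
    long bytesReceived;
    long bytesTransmitted;
}

// Aggregated per-pool statistics exposed through the NBI.
class PoolStats {
    long bytesIn;
    long bytesOut;
    int activeFlows;
}

class PoolStatsAggregator {
    // Key: switchId + match (identifies a flow in a switch); value: LBVip id.
    private final Map<String, String> flowToVip = new HashMap<>();
    // Association between an LBVip and the LBPool that serves it.
    private final Map<String, String> vipToPool = new HashMap<>();

    Map<String, PoolStats> aggregate(List<FlowStatsEntry> replies) {
        Map<String, PoolStats> perPool = new HashMap<>();
        for (FlowStatsEntry entry : replies) {
            String vipId = flowToVip.get(entry.switchId + "|" + entry.match);
            if (vipId == null) {
                continue; // flow was not installed by the load balancer
            }
            String poolId = vipToPool.get(vipId);
            PoolStats stats = perPool.computeIfAbsent(poolId, k -> new PoolStats());
            stats.bytesIn += entry.bytesReceived;
            stats.bytesOut += entry.bytesTransmitted;
            stats.activeFlows++;
        }
        return perPool;
    }
}
```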

4.2 Northbound Interface

In this section, we address the implementation of the Floodlight NBI for load balancing applications. Firstly, we address how this interface was developed in Floodlight, through restlet1, a framework for the creation of REST services on the Java platform. The development of a CLI application able to manage the load balancer through the NBI is presented. Then, the implementation of the GUI is addressed. The operations of the API are given in table 3.1. Looking at this table, we can see that it is organized by the load balancer entity in question. Each URI gives access to the resources corresponding to that entity. The column HTTP Verb refers to the methods that designate an action over the resources. Some of the actions require the user to provide an argument in the URI. This argument is generally the identifier of the resource that the user wants to access or create. In chapter 3, we introduced this table and explained some concepts of the NBI that are useful from a user perspective; here we further address the details of the implementation of this interface, from a developer perspective.

4.2.1 REST API

Floodlight has a module responsible for the REST server that powers the API, using restlet as the underlying structure to abstract the details of processing HTTP requests for Floodlight. It is responsible for the creation of the server, which listens to the HTTP requests and serves them. It is where authentication is configured and the port used to access the API is set. In the Floodlight default configurations
1 https://restlet.com/

file, there are parameters that control the configuration of the API. It is possible to determine whether access is going to be done using HTTPS or HTTP, and to choose the ports on which the server will listen for each of these protocols. It is also possible to define an access control list for this interface by using a JKS. The configuration variables of the KeyStore, such as its path and password, are set in the Floodlight default configurations file. The REST server module is responsible for starting the server and providing an interface, so that other modules are able to map their resources using this server. Through this interface, the other Floodlight modules can create their own APIs in a simple way. Firstly, a module has to implement the REST server interface. Secondly, it creates the necessary URIs and defines the resources to expose. Finally, it registers the completed resources and URIs with the REST server interface.
A URI must be created for each resource, in order to be able to access it. Each module that wants its resources exposed has to implement the REST server interface and associate URIs with its resources. These resources are classes inside the module that extend Server Resource. This enables a module to define the functions to execute when an HTTP command is called on a resource. The API only deals with the JSON data format. Using it, the load balancer is able to retrieve request information and respond to the clients. When an HTTP request is received, the API parses it and maps the JSON fields to the attributes of the load balancer entities. The load balancer module defines resources for its principal entities. These entities are stored in hash maps, kept in the load balancer module main class. The functions that alter these maps are defined by an interface, which is used by the Server Resource classes to directly map the clients' JSON data into the entities in the maps. Moreover, when there is a failure regarding a client request, the server can throw an exception that is mapped to the standard HTTP status messages.
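As an illustration of the pattern described above, a new resource could be exposed through restlet roughly as follows. This is a minimal, hypothetical example: the actual Floodlight resource classes, the URIs and the interface used to reach the load balancer hash maps differ in their details.

```java
import org.restlet.Context;
import org.restlet.resource.Get;
import org.restlet.resource.ServerResource;
import org.restlet.routing.Router;

// A minimal restlet resource: handles HTTP GET on the URI it is attached to
// and returns a JSON string. Real Floodlight resources delegate to the load
// balancer module interface to read or modify the entity hash maps.
class PoolsResource extends ServerResource {
    @Get("json")
    public String retrieve() {
        // Hypothetical payload; the real implementation serializes the LBPool map.
        return "{\"pools\": []}";
    }
}

// URIs are attached to resource classes on a Router, which is then registered
// with the Floodlight REST server so that the resources become reachable.
class LoadBalancerRoutes {
    Router buildRouter(Context context) {
        Router router = new Router(context);
        router.attach("/pools/json", PoolsResource.class); // illustrative URI
        return router;
    }
}
```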
We have provided the operations that can be used to manage the load balancer through the NBI in table 3.1. The CRUD operations of the elements of the load balancer module, for instance LBVip and LBMember, are functions whose purpose is to add, update and remove these entities from the hash maps where they are stored. The user can set an identifier, used as the argument of the function and as the key to the hash map, to determine which objects are going to be created in the load balancer. The detailed API documentation can be found in appendix A and in the wiki of the git repository of this project2. This documentation has all the details regarding the functionality of the API, as well as the constraints regarding the input arguments for the operations.

4.2.2 Command-Line Interface

It is important to have examples of how to use an API. A user should perform the management actions
necessary in the easiest way possible. With this goal in mind, we developed a python application that is
2 https://github.com/OCoutinho/floodlight/wiki

able to use the NBI of Floodlight to manage the load balancer module. The application is used through a CLI and a defined set of commands that are associated with the operations in table 3.1.
The python application relies on curl3, a command-line tool that transfers data between devices and supports HTTP, to send the commands to the REST API. Through this application, the user can perform the load balancing management operations available in Floodlight. Each operation has its own command and the parameters needed to invoke the corresponding REST interface operation. The user simply needs python and a CLI to write commands and be able to manage the load balancer. The documentation concerning the details of these commands, as well as examples on how to perform the operations, is available in the application file4.
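The same kind of request can also be issued from any HTTP client. The sketch below, written in Java for consistency with the other code examples in this chapter, performs a GET against a pool resource; the host, port and URI path shown are placeholders, and the concrete values must follow table 3.1 and the API documentation in appendix A.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal NBI client sketch: issues an HTTP GET to the Floodlight REST API.
// The address and path below are illustrative placeholders only.
public class NbiClientExample {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://127.0.0.1:8080/quantum/v1.0/pools/json"); // illustrative
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // JSON description of the pools
            }
        } finally {
            conn.disconnect();
        }
    }
}
```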

4.2.3 Graphical User Interface

Floodlight has a web GUI available to manage the controller and get useful information for debugging
the network. Through this interface it is possible to manage services provided by Floodlight, such as
the firewall module and the static flow pusher module. We added the load balancing services from the
developed module to this interface using Hypertext Markup Language (HTML) and Javascript.
This interface is an alternative to the CLI enabled by python. It does not require the user to have python or a CLI to manage the load balancer. It can be accessed through a web browser and it is made to be more user-friendly. In figure 4.5, part of the interface is presented. The theme was created to be consistent with the other management interfaces, such as the firewall interface. Here, the user does not need to write commands, only to set parameters in the appropriate fields and press buttons to use the operations of the API.

4.3 Mininet

In this section, we address the importance of Mininet to this project. We have already presented some concepts of what Mininet does in chapter 2; now we go into more detail regarding our usage of this tool.
Mininet is responsible for emulating the network that we use to test and implement our solution. It provides support to emulate SDN systems and it is compatible with OVS. With it, we are able to easily define a tree network topology with arbitrary depth and fanout, test the connectivity in the network and access the OVS switches to retrieve information relevant to testing functionalities of our solution. On the Mininet hosts, we are able to use iperf5, a network tool to generate and measure TCP
3 https://curl.haxx.se/
4 https://github.com/OCoutinho/floodlight/blob/lbstats-monitors/apps/loadbalancer/load_balancer.py
5 https://iperf.fr/

Figure 4.5: Screen capture of the load balancer management web interface.

and UDP traffic in a network. We can also set up apache servers and clients in the hosts, turning hosts into web servers and sending HTTPS requests to test the load balancer TLS handler.
As seen, Mininet is used to complete the system with switches and hosts, and to provide the ability to test the functionalities of the load balancer over this network.

4.4 Discussion

In this section, we briefly discuss the topics addressed in this chapter. The details of the implementation of the load balancing system and the exposure of its resources through the NBI were explained. We presented the functionalities of our solution and how it was implemented, particularly through the usage of the features provided by a SDN environment and the OpenFlow protocol. We discussed the major components of the load balancer module and the algorithms used to enable them. We also addressed how the NBI works and how we developed applications to access it. Finally, we explained how the system can be emulated and tested using Mininet.
The functionalities described here are tested in the next chapter, in order to evaluate the statements provided in this chapter and to show results regarding the management of a network using our solution.

5 System Evaluation

Contents
5.1 Test Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

5.2 Evaluation Results and Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

5.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

In this chapter, we address the evaluation of the system implemented in this work. We start by describing the test environment, in which we use Floodlight and Mininet, in section 5.1. Then, in section 5.2, the results regarding the most important features and the performance of the system are analyzed. Finally, in section 5.3, we discuss and summarize the results shown throughout this chapter.

5.1 Test Environment

The goal of this evaluation is to assess the performance of the system under the real conditions that the solution was created to face. In our case, we want to recreate a SDN data center that is controlled by Floodlight. To test the solution in a more realistic environment, Floodlight is placed on a remote machine, while the network is emulated in a VM run on another machine. These conditions are set to obtain test results that resemble a real networking scenario.
The machines used do not have the required specifications to realistically emulate the size of a data center network. Given this fact, the tests focus on the evaluation of the correctness of the system and the performance of the REST API and Floodlight.
In the next section, we present the specifications of the machines used in this test environment as
well as any relevant software incorporated in these machines.

5.1.1 Test Bed

Our test bed consists of two machines, one running the Floodlight OpenFlow controller and the other
running Mininet with the emulated network. The specifications of the machines are as follows:
Controller machine:

• OS: Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-32-generic x86_64).

• RAM: 6 GB DIMM DRAM.

• CPU: Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz.

• OpenFlow Controller: Floodlight.

• Network Bandwidth: 20 MB/s.

Network VM:

• OS: Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-32-generic x86_64).

• RAM: 6 GB SODIMM DDR3.

• CPU: Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz.

• Network Emulator: Mininet version 2.2.1.

• OpenFlow Switches: OVS version 2.3.90.

• Network Bandwidth: 400 MB/s.

Other relevant tools used in the system evaluation include apache2, iperf and links1 .

5.2 Evaluation Results and Analysis

In this section, we present and analyze the results obtained throughout the system evaluation. The main focus of our evaluation is the load balancer module implemented in Floodlight. The new components and algorithms added to it have refined the load balancer and increased its complexity. To make sure the system works as intended, we have defined tests to evaluate the new features of the load balancer. The features evaluated in this section are the load balancing algorithms, the TLS handling system, the health monitoring system, the overall performance of Floodlight with the load balancer and the performance of the REST API.

5.2.1 Load Balancer WRR Algorithm

We test the WRR algorithm in order to measure how often each member of a pool implementing this algorithm is used. It is expected that a higher weight member is used more often than a lower weight member. The probability of a member being picked to serve a client, according to this algorithm, is expressed through equation 5.1: the probability of a member being chosen equals the weight of the member divided by the sum of all the weights of the members in the pool.

P(\text{member picked}) = \frac{\text{weight of member}}{\text{sum of weights of pool members}} \qquad (5.1)

To test this algorithm, we built a scenario with four members of different weights in a pool running this algorithm. We executed one hundred TCP requests, using iperf, to this pool and counted how many times each member was chosen by the algorithm. The experiment was repeated five times to achieve a reasonable sample size. The averages of the experiment results are shown in table 5.1.
As seen, the test results are very close to the expected probabilities. This algorithm ensures a bias towards members with higher weight, while still managing to serve every member in a pool.
1 http://links.twibright.com/

Member ID Weight Theoretical Probability Practical Probability

1 10 41.6% 42.4%

2 8 33.3% 31.8%

3 4 16.6% 16.0%

4 2 8.3% 9.8%

Table 5.1: Comparison of the theoretical probabilities and results of the weighted round-robin algorithm for picking
a member.
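As a concrete check of equation 5.1 against table 5.1, the theoretical probability of member 1 can be worked out directly from the weights:

P(\text{member 1 picked}) = \frac{10}{10 + 8 + 4 + 2} = \frac{10}{24} \approx 41.6\%

which is close to the 42.4% observed across the five repetitions of one hundred requests.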

5.2.2 Load Balancer Statistics Algorithm

Regarding this algorithm, we want to test whether the distribution of the load among the members is efficient, which is the primary goal of a load balancer. This algorithm works with the statistics collection module to obtain the bandwidth usage of the members of a pool and picks the member with the least consumed bandwidth. With this in mind, we created a test scenario with a pool of eight members running this algorithm. Then, we register the maximum and minimum bandwidth achieved by the members while the requests are being processed. We set twenty TCP connections per client, each sending 512 kilobits to the pool members over a five second period, using iperf. The following scenarios were used in this test:
were used in this test:

• 8 members and 8 clients with statistics collection period set to 10 seconds.

• 8 members and 8 clients with statistics collection period set to 5 seconds.

• 8 members and 16 clients with statistics collection period set to 10 seconds.

• 8 members and 16 clients with statistics collection period set to 5 seconds.

• 4 members and 16 clients with statistics collection period set to 10 seconds.

• 4 members and 16 clients with statistics collection period set to 5 seconds.

The results for the first two scenarios are presented in figure 5.1 and those for the next two in figure 5.2. Regarding these four cases, we see a difference in the maximum bandwidth achieved by a member: with sixteen clients it is higher than with eight clients. The minimum bandwidth across the four figures is mostly zero, which means that at least one member of the pool is not responding to any request. This is not optimal, as the load should be spread across all the pool members, but it can be explained by the insufficient number of clients to reach the point where a member can no longer respond to a request

(a) Statistics period set to 10 seconds. (b) Statistics period set to 5 seconds.

Figure 5.1: Load balancer statistics algorithm tests with 8 members and 8 clients.

(a) Statistics period set to 10 seconds. (b) Statistics period set to 5 seconds.

Figure 5.2: Load balancer statistics algorithm tests with 8 members and 16 clients.

before receiving another. This happens in this experiment with both eight and sixteen clients, however only for a short period of time and with residual bandwidth values. We can spot the consequences of having statistics collection periods of ten and five seconds by comparing the left side graphs with the right side graphs of both figures. The bandwidth values are constant for longer periods of time in the case of statistics collection every ten seconds, because it takes more time to update the values and the algorithm relies on older information. However, this is not a limitation of the algorithm, as the bandwidth allocation is still balanced, as shown in table 5.2. This is due to the previously chosen member list used by the algorithm, described in the preceding chapter. It is able to keep the average bandwidth of the members fairly even while processing the requests. Considering the average bandwidth of the members in the same statistics collection period, we see that the values are close. Even comparing across collection periods, the values remain alike. This indicates that the algorithm is distributing the load across the servers uniformly.

Statistics Collection Period Member Id Average Bandwidth (bits/s)

1 10 057

10 seconds 2 91 971

3 80 031

4 95 529

1 12 391

5 seconds 2 82 541

3 83 428

4 92 447

Table 5.2: Average bandwidth of a pool running the statistics algorithm, with 4 members handling 320 total requests
from 16 concurrent clients.

5.2.3 Load Balancer TLS handler

Handling TLS traffic is another feature of the load balancer. The goal of this test is to evaluate the correctness of this function. To evaluate this feature, we set up a network with two virtual IPs and two pools, one of which is responsible for handling TLS traffic. Using Mininet, we set up an apache2 web server with a self-signed certificate in the hosts of the TLS pool. It is then possible to recreate the conditions of a TLS request entering the network and having it handled by the pool set with the TLS protocol. With links, Mininet hosts are able to connect to the web server through HTTPS. Then, we send requests to both virtual IPs and see which pool answers the requests.

The results of this experiment showed that the TLS pool is always used when a packet of this protocol is handled by the load balancer, regardless of the virtual IP address used by the client. With wireshark2, a network protocol analyzer, we are able to see to which host the request is being redirected. The load balancer will redirect the packets to the TLS pool, as long as the port used by the protocol is mapped in the controller. The ports mapped in the controller were presented in section 4.1. This can be regarded as a limitation of this feature, since the TLS protocol is also used by other applications on different ports. However, the services covered by the ports mapped in the controller are services widely used in combination with the TLS protocol.

2 https://www.wireshark.org/

5.2.4 Load Balancer Health Monitoring Algorithm

The health monitoring algorithm can be evaluated in a simple way. The scenario used to test this feature is a network with one pool and one health monitor associated with this pool. The network condition of the pool members is evaluated by the monitors in two ways:
Firstly, the monitors request the status of the switch port to which each member is connected, to determine the member health. This status can be changed using Mininet, so that we can evaluate whether the monitors are reading this information correctly.
Secondly, if the members are deemed healthy in the first stage, an ICMP echo request is sent to them. The controller then checks which members have replied to it, and that determines their status. With iptables, a tool used to configure firewall rules in the Linux OS, it is possible to have a server stop responding to ICMP requests and force the health monitors to deem it inactive. We are able to control the status of a server during the test of the health monitors, which is enough to determine if the function is working as expected.
In this test scenario, the state of the switch ports connected to pool members is changed with Mininet commands, and iptables is used to create and delete firewall rules that block or allow ICMP replies from the server. The results of the experiment have been successful, as the health monitors are able to determine the status of a member correctly. If a member has its switch port inactive or is not answering the ICMP request, it is set as inactive. If a member has its switch port active and is answering the health monitor ICMP request, it is set as active by the health monitors. The members do not have a fixed deadline to answer the health monitors. If the controller receives the response at any given time, the member is deemed active. If there is no response from the member, it is set as inactive until the answer arrives at the controller. The statuses are updated when the controller sends the requests. This period is set to ten seconds by default, but can be changed through the NBI.
This feature is not completely fail-proof. As said before, the members represent a particular service in the host, which means that a member can be inactive and yet the host is still able to respond to the ICMP request.

5.2.5 Load Balancer Performance

Controller performance has been a topic of discussion since SDN gained popularity [42]. In this test, we focus on the bandwidth consumption of the Floodlight elements. This allows us to compare the load balancer with other Floodlight modules in terms of control plane communications. Firstly, we need to know the response time of Floodlight and the throughput of this kind of communication. Then, we can calculate the load on this channel and assess the bandwidth used. Note that these are measurements of the control plane efficiency of Floodlight, which is the base for the load balancer functionality.

(a) Throughput. (b) Latency.

Figure 5.3: Floodlight control plane performance test results with cbench.

The Floodlight control plane performance can be measured using cbench3, a benchmarking tool for SDN controllers. It is able to emulate switch sessions with the controller and measure the throughput and latency of flow modification messages. The OpenFlow version compatible with cbench is 1.0, although we believe the results are still relevant. The tests were performed over three different tree network topologies. The results of the Floodlight performance benchmark tests are shown in figure 5.3.
We can see that the throughput increases with the number of switches while the latency decreases. This is an indication that the threshold for optimal Floodlight performance has not yet been reached with this number of switches; beyond that limit, the latency is expected to increase.
However, these measurements do not reflect the weight of our contributions on the performance of Floodlight. To get a better sense of how the load balancer affects the performance, we further test the controller by measuring the packets-out sent by the load balancer and statistics elements and comparing them with other Floodlight modules that have a similar way of communicating with the forwarding elements. In figures 5.4 and 5.5, we can see the comparison between the packet counters used in the communication with the switches by different Floodlight constituents. The number of control plane packets that reach the network depends on its conditions, but the logic of the modules can be explained to better understand how we obtained these results.
The load balancer handles packet-in messages with reason ”no match”, sent because the forwarding elements do not know how to forward a given packet. For each of these messages received, it creates a set number of packets-out. These packets-out are used to create flow table entries in the forwarding elements. In the worst case scenario, for every packet-in received, the load balancer will send twice the

3 https://github.com/mininet/oflops/tree/master/cbench

maximum number of hops in packets-out. This is due to the creation of bidirectional routes and because these routes are installed in all the necessary switches. The load balancer also sends packets-out in order to query the state of the members through ICMP requests. These requests are sent one per member every ten seconds, by default. Furthermore, the load balancer handles ARP messages directed at the virtual IPs, which increases the number of packets handled by this module. The number of ARP messages exchanged depends on the number of virtual IPs used in the network; for this experiment we consider only one.

The statistics module is set to handle three different types of statistic information. The number
of packets-in and packets-out is related to the type of statistic messages involved. For each type of
message, the module sends the request to every forwarding element. Therefore, the number of packets-
out sent by this constituent is three times the number of switches in the network. The number of packets-
in received by this module is equal to the number of packets-out sent, because each forwarding element
responds to the packet-out with a single packet-in.

The link discovery module is responsible for sending the Link Layer Discovery Protocol (LLDP) and
Broadcast Domain Discovery Protocol (BDDP) packets to the switches, as well as handling the re-
sponses. A BDDP packet is simply a broadcasted LLDP packet. These protocols are used to discover
links and hosts in the network. Every fifteen seconds, packets-out are sent to all the ports of active
switches in the network. The number of packets-in handled by the link discovery module is related to the
number of links between forwarding elements in the network. This module can serve as a reference for
comparing the number of packets handled, because its behavior is similar to the statistics module.

The forwarding module is responsible for handling traffic directed at the network, but only when the IP address used in the communication is not one of the virtual IPs created for the load balancer. In our scenario, this never happens, as the clients should only obtain the virtual IPs given by the load balancer. However, it is useful to know the amount of control plane packets this module is using, because its behavior is similar to that of the load balancer. We are going to compare the number of packets these two modules use in the communication with the forwarding elements.

To achieve the results presented, we executed Mininet with three different tree topologies: one with depth of two and fanout of four; one with depth of two and fanout of nine; one with depth of three and fanout of four. In this scenario, the connectivity between all the hosts of the network is established by inserting the necessary flows in the forwarding elements, through the load balancer and forwarding modules. This is done to have these modules communicate with the switches of the network, increasing their control plane packet counters. Achieving full connectivity in a network - bidirectional routes between every pair of hosts - represents the peak amount of communication between the controller and the switches. This happens because, once the flows are set, the switches should not need to send more packets-in
regarding future communications between the hosts.

Given the results in figure 5.4, we can observe a remarkable difference in total control plane messages between the forwarding module counters and the load balancer module counters. The main factor for the difference between the packets-in of the load balancer and the forwarding modules is that the former installs bidirectional flows along the complete path between one host and another, which leads to fewer packets-in transmitted from the forwarding elements to the load balancer. The latter only installs flows in one direction, which means that, to achieve full connectivity in a SDN environment, more packets-in are sent to the controller. However, the numbers of packets-out sent by these modules are very close, as the number of flows installed is equal for the same topology. The load balancer has to send and receive a few more packets due to the health monitoring system and the ARP handlers. We can observe that the forwarding module sends as many packets-out as packets-in received and that the number of control plane packets exchanged increases with the number of switches and hosts in the network.

In figure 5.5, we can see that the difference between the statistics module and the link discovery module is not significant. It is still noticeable, with the latter handling a few dozen more total control plane packets in all of the tested topologies. Regarding the counter numbers for the statistics module, we see that the packets sent and received are equal to three times the number of switches in the network. The link discovery module counter values are directly related to the number of links between the switches and hosts. In a tree topology with depth of two, the number of packets-out sent by this module is equal to the packets-in times the number of switches in the network.

The interpretation of the results is not complete without further analysis of the size of the packets traveling through the communication channel. The load balancer packet-in message is due to the forwarding elements not having a matching rule in the flow table to forward an incoming packet; its length varies depending on the transport protocol used. The corresponding packet-out message sent to insert the flow table entries is Flow Mod. This message carries the match fields and instructions to set on the forwarding elements. The statistics module handles three different types of messages: OF Multipart Port Stats, OF Multipart Port Desc and OFPMP Flow. The replies to these messages have variable lengths. The port related messages have lengths depending on the number of ports per switch. The flow message depends on the number of flows and their configuration in the forwarding elements. The link discovery module handles LLDP messages, which have constant length but are sent depending on the number of switch ports in the network.

The scenario used to run this experiment consists of a tree network topology with depth of three and fanout of four. This builds into a network with sixty-four hosts, eighty-four links and twenty-one switches. The OpenFlow version used is 1.3.0. However, the statistics messages are compatible with the other versions of OpenFlow up to version 1.5.0. This means that the length of the statistics messages should

(a) depth=2, fanout=4. (b) depth=2, fanout=9.

(c) depth=3, fanout=4.

Figure 5.4: Floodlight load balancer and forwarding modules packet-in and packet-out counters after achieving full
network connectivity, for three different tree topologies.

not change dramatically.


Following the test scenario described, we created a table with the length of each message, presented in table 5.3. With the exception of the OFPMP Flow Reply message, these messages were captured using wireshark in a live environment to obtain their lengths. This particular message has a variable size, which depends on the number of flows and their configuration in the switches, as stated before. Throughout the test, we counted the total flows added during the connectivity allocation between hosts and observed 17 856 flows in total, as seen in figure 5.4 (c). By distributing the flows among the switches, assuming they are installed evenly, we get 851 (17 856 total flows divided by 21 switches) flows per switch. As observed with wireshark, each OFPMP Flow Reply message carries 64 bytes per flow, so this message will have a total of 54 464 (851 times 64) bytes. The Packet-In No Match and Flow Mod message lengths are set to the worst values observed in this test.

(a) depth=2, fanout=4. (b) depth=2, fanout=9.

(c) depth=3, fanout=4.

Figure 5.5: Floodlight statistics and link discovery packet-in and packet-out counters over a period of 15 seconds,
for three different tree topologies.

Considering the message length measurements presented in table 5.3, we can calculate an approximation of the load generated by the modules. In table 5.4, we calculate the amount of kilobytes used in the control plane communication for each module. The message weights column represents the load used, calculated by multiplying the length of each message by the number of such messages and summing the different parts. For instance, the load balancer module handles 17 856 Flow Mod messages, 2016 Packet-In No Match messages, 1 Packet-Out ARP message, 1 Packet-In ARP message, 64 Packet-In ICMP messages and 64 Packet-Out ICMP messages. The same logic is applied to the rest of the modules in order to calculate their weights. This enables us to compare which modules have a bigger weight in the communication with the forwarding elements. We determined that using the load balancer instead of the forwarding module saves resources in the Floodlight control plane. Even if we add the total weights of both the statistics module and the load balancer, this load is just 408.03 kilobytes more than half

Messages Bytes

Flow Mod 300

Packet-Out ARP 198

Packet-Out ICMP 150

Packet-Out LLDP 183

OF Multipart Port Desc Request 84

OF Multipart Port Stats Request 92

OFPMP Flow Request 124

Packet-In No Match 500

Packet-In ARP 152

Packet-In ICMP 152

Packet-In LLDP 185

OF Multipart Port Desc Reply 468

OF Multipart Port Stats Reply 756

OFPMP Flow Reply 54 464

Table 5.3: Control plane message lengths of OpenFlow version 1.3.0, for a tree network topology with 64 hosts and
21 switches.

of the total weight of the forwarding module.
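As an example of how the values in table 5.4 are obtained, the total weight of the load balancer module follows directly from its message counts and the lengths in table 5.3 (the conversion to kilobytes divides by 1024):

17\,856 \times 300 + 2\,016 \times 500 + 198 + 152 + 64 \times 152 + 64 \times 150 = 6\,384\,478 \text{ bytes} \approx 6\,234.84 \text{ kilobytes}

Likewise, adding the statistics module weight gives 6 234.84 + 1 148.19 = 7 383.03 kilobytes, which exceeds half of the forwarding module weight (13 950.00 / 2 = 6 975.00 kilobytes) by the 408.03 kilobytes mentioned above.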


Given the throughput of the Floodlight control plane presented in figure 5.3 and the weight of the communications handled by the Floodlight components addressed in table 5.4, we are able to calculate an approximate bandwidth used by the modules. In table 5.5, we show the bandwidth values for each module. The speed at which the Floodlight control plane can process packets, for this topology, is 23 569 responses per second, as seen in figure 5.3. If we consider that the four Floodlight elements are functioning in the same period of time, we sum the number of packets that are going to be used in the control plane communication channel. This amounts to the sum of the values in figures 5.4 and 5.5 for the tree topology with depth of three and fanout of four. Considering the throughput and the total packets to be processed, we calculated the time it takes Floodlight to process these packets. Furthermore, with the weight of the communications of each module, we are able to assess the bandwidths used in the control plane. With these results, we reach the conclusion that the load balancer uses around 45% of the bandwidth of the forwarding module. Furthermore, as the statistics module is a core element of the load balancer solution, even if we include its bandwidth, the total is still approximately 53% of the bandwidth of the forwarding element in Floodlight.
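The bandwidth figures and the percentages quoted above follow from the 2.38 second processing time shown in table 5.5:

\frac{6\,234.84}{2.38} \approx 2\,619.68 \text{ kilobytes/s}, \qquad \frac{2\,619.68}{5\,861.34} \approx 45\%, \qquad \frac{2\,619.68 + 482.43}{5\,861.34} \approx 53\%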

Modules Message Weights (bytes) Total Weight (kilobytes)

Load Balancer 17 856 x 300 + 2016 x 500 + 198 + 152 + 64 x 152 + 64 x 150 6 234.84

Forwarding 17 856 x 300 + 17 856 x 500 13 950.00

Statistics (84 + 92 + 124) x 21 + (468 + 756 + 54 464) x 21 1 148.19

Link Discovery 168 x 183 + 40 x 185 37.25

Table 5.4: Calculations of the total weight of the control plane messages used by the Floodlight components.

Modules Message Weights (kilobytes) Duration (s) Total Bandwidth (kilobytes/s)

Load Balancer 6 234.84 2.38 2 619.68

Forwarding 13 950.00 2.38 5 861.34

Statistics 1 148.19 2.38 482.43

Link Discovery 37.25 2.38 15.65

Table 5.5: Comparison of the total bandwidth consumption over 2.38 seconds of the Floodlight modules.

Regarding this evaluation, we can conclude that the load balancer makes a more efficient use of control plane resources than the forwarding module, even when allied with the statistics module. Notwithstanding, the statistics module contributes more load than its reference element, the link discovery module, which has a similar periodic control plane usage pattern. This is mainly due to the greater length of the messages used to collect statistics, particularly the OFPMP Flow Reply message.

5.2.6 Northbound Interface Performance

The NBI is a crucial component of this work, as we want to make this interface ready for real environments with the possibility of intense communication between it and external applications. This test ensures the NBI performance is up to standards by testing whether it is able to cope with a high amount of requests. To make this test possible, we used bench-rest4. This tool is used to benchmark the performance of REST interfaces by sending high amounts of concurrent requests and outputting vital metrics about the state of the interface when submitted to this pressure.
The scenario used in this test consists of Floodlight running in the cloud machine and the VM running a network connected to Floodlight, while sending 100 000 HTTP Get requests to the NBI, asking for the information about the members of a certain pool. Using bench-rest, we are able to set multiple threads to execute these requests and measure the interface performance in requests per second. The
4 https://github.com/jeffbski/bench-rest

results of this experiment are shown in table 5.6. We observe that, as the number of members associated with a pool increases, the interface performance decreases. This is due to the high amount of information that the controller needs to send through the network as the member information grows.

Number of Requests Number of Pool Members Requests/second

100 000 100 298.12

100 000 1 000 144.29

100 000 10 000 23.34

Table 5.6: Northbound Interface performance test with bench-rest, executing 100 000 HTTP Get requests on all the
members of a pool.

However, we ran another test in which the requests to the interface ask for information about a single member, instead of a report on all the members of a certain pool. The results of this test are presented in table 5.7, where we see that the requests per second are relatively constant. This leads us to believe that the average performance of this interface is around 1 236.97 requests per second. The difference between this test and the previous one is the amount of computation Floodlight has to perform and the amount of data to be transferred to the clients. Furthermore, this test is closer to the conditions this interface would face in real environments, as most of the operations in the NBI definition are directed at a single resource.

Number of Requests Number of Pool Members Requests/second

100 000 100 1 070.84

100 000 1 000 1 434.45

100 000 10 000 1 205.61

Table 5.7: Northbound Interface performance test with bench-rest, executing 100 000 HTTP Get requests on a
member.

In order to further analyze the performance of this interface, we test it with two consecutive HTTP commands. In this test, we dispatch an HTTP Put message to create an LBMember and then execute an HTTP Get message on that same element. The results of these operations are shown in table 5.8. It is noticeable that with 200 and 2 000 concurrent requests the interface is not being stressed, which is apparent because the requests per second still increase as the number of requests increases. However, if we look at the bottom three lines of the table, we see that there was a decrease in performance, which
makes us conclude that the maximum performance has been reached and that the system is under pressure starting from 200 000 concurrent requests. So, we can deduce that this interface is able to deal with 506.33 requests per second in a stressed environment. With bench-rest, we are able to determine the errors given by the interface when attempting to answer the requests, and we have seen that there were no errors while running this test.

Number of Requests Requests/second Errors

200 343.18 0

2 000 309.84 0

20 000 592.88 0

200 000 525.47 0

2 000 000 506.33 0

Table 5.8: Northbound Interface performance test with bench-rest, creating new members with HTTP Put followed
by HTTP Get requests on the members.

5.3 Discussion

In this section, we discuss the system evaluation performed in this chapter. We have explained the different components of the evaluation and presented an analysis of the results.
Regarding the load balancer, the analysis of the correctness of the multiple aspects that enable the Floodlight load balancer module to execute its functions was positive, in particular the member picking algorithms, the TLS handler and the health monitors. The load balancer is deemed to perform better, in terms of control plane load usage and bandwidth, than the existing forwarding element in Floodlight. To perform the same task, the load balancer uses approximately 53% of the resources of the forwarding module. The calculations presented throughout the load balancer performance test take into account the control plane network bandwidth and the machine specifications presented in the test environment. We also assume that the throughput determined using cbench, which floods the control plane with Flow Mod messages, is representative of the speed at which the controller responds to the other OpenFlow messages used by the evaluated modules.
Regarding the NBI, we evaluated its performance and saw that it is dependent on the implementation of the resources in the load balancer. We also reached a value for the performance of the interface when dealing with difficult environment conditions (2 million concurrent requests): the NBI is able to respond to 506.33 requests per second.

6 Conclusions and Future Work

Contents
6.1 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

6.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

6.1 Conclusions

SDN was first introduced to address issues related to traditional networks, such as the increasing management complexity. This complexity is due to the strong interconnection between control and data planes in traditional network devices. Thus, these devices are difficult to program and each network service is associated with different hardware. SDN simplified the network devices by removing the control computation from them and integrating it in a central SDN controller. As seen before, a centralized control brings benefits to the management of networks. There is an improvement in the research and development of applications, due to the network programmability provided by the controller and the NBI. This interface is usually implemented as a REST API. There are also some limitations to the architecture, such as scalability and the lack of standards. But companies leading the SDN movement are working to establish proper standards and documentation to increase the adoption of this new paradigm. Also, the community has presented relevant works to solve the limitations linked to this new architecture. It is fair to say that the benefits of SDN outweigh the limitations.
Related to the beginning of an openness approach in the network industry, NFV emerged to address the issues of TSPs. Both NFV and SDN have similar goals. These concepts acknowledge that openness leads the industry to more innovation, and they complement each other in ways that TSPs can leverage to build new business models and provide better services for future networks.
Taking this research into account, we developed a REST API for load balancing applications, intended to contribute to the definition of a standard. This API provides external applications with all the functions necessary to manage a load balancer, according to the load balancing principles. We contributed directly to the source code of Floodlight with new features, such as two new load balancing algorithms, two switch statistics collection methods and the pool statistics retrieval. Other features are pending approval by the Floodlight team, including the TLS handler and the health monitoring system. Thus, we improved this controller, making it relevant for load balancing applications in the context of data center networks. Furthermore, we implemented two different interfaces through which users have access to the NBI operations, simplifying the usage of the Floodlight load balancing management functions.
In order to have a system ready to be deployed in a data center, we performed an in-depth evaluation of the most important components of our solution. We used a test environment with conditions that are close to those expected in a data center. We reached the conclusion that the new Floodlight load balancing features work as intended. The TLS handlers redirect the requests appropriately. The health monitoring system identifies disconnected members. The balancing algorithms distribute the load evenly across members. Furthermore, the performance of the controller is not degraded by these functions; it actually uses 53% of the bandwidth used by the forwarding component, due to the way the load balancer installs the flows in the forwarding elements. Regarding the REST API, we concluded, after performing stress tests with 100 000 concurrent requests, that it is suited to answer on
average 1 236.97 requests per second, and 506.33 requests per second in an extreme environment. It is our hope that this thesis can help determine the functions essential for the management of load balancers, to be defined in a standard. With this work, it is easier to identify and incorporate these functions into a future standard.

6.2 Future Work

Although we see our solution as fitting to start the discussion about a NBI standard, it comes with some limitations. In this section, we propose a few ways to continue this work and address these limitations.
The load balancing algorithms work as intended, however they could be improved if server context information were available to refine the distribution decision. This could be achieved by incorporating the NETCONF protocol, or a similar one, in the Floodlight controller.
Another aspect of our solution is that the flows installed by the load balancer in the forwarding elements have an infinite timeout. This is to ensure TCP session persistence between server and client. However, it will lead to the exhaustion of switch resources in the long term. More research should be done concerning the optimal timeout to use when dealing with TCP session establishment and termination, and the result should then be applied to flow installation.
The CLI and web GUI interfaces developed to access the resources of Floodlight through the NBI were not user tested. A possible direction to explore is user testing, thus increasing the quality and user-friendliness of these interfaces. To further complete the range of applications available to access the NBI, it could also be interesting to develop an interface for mobile environments.

Bibliography

[1] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker,


and J. Turner, “Openflow: Enabling innovation in campus networks,” ACM SIGCOMM Computer
Communication Review, vol. 38, no. 2, pp. 69–74, April 2008.

[2] D. L. Sarwar Raza, “Northbound interfaces working group (nbi-wg) charter,” Tech. Rep., June
2013, working for a standardized NorthBound Interface definition, Accessed: 2016-11-20. [Online].
Available: https://www.opennetworking.org/images/stories/downloads/working-groups/charter-nbi.
pdf

[3] European Telecommunications Standards Institute, “Network functions virtualisation - an


introduction, benefits, enablers, challenges and call for action,” Tech. Rep., October 2012,
Accessed: 2016-11-26. [Online]. Available: https://portal.etsi.org/nfv/nfv white paper.pdf

[4] L. Yang, R. Dantu, T. Anderson, and R. Gopal, “Forwarding and control element separation (forces)
framework,” Internet Requests for Comments, RFC Editor, RFC 3746, April 2004, Accessed:
2016-11-12. [Online]. Available: https://tools.ietf.org/rfc/rfc3746.txt

[5] S. Hares, “Analysis of comparisons between openflow and forces,” Working Draft, IETF
Secretariat, Internet-Draft draft-hares-forces-vs-openflow-00, July 2012, Accessed: 2016-11-13.
[Online]. Available: http://www.ietf.org/internet-drafts/draft-hares-forces-vs-openflow-00.txt

[6] A. Doria, J. H. Salim, R. Haas, H. Khosravi, W. Wang, L. Dong, R. Gopal, and J. Halpern,
“Forwarding and control element separation (forces) protocol specification,” Internet Requests for
Comments, RFC Editor, RFC 5810, March 2010, Accessed: 2016-11-12. [Online]. Available:
http://www.rfc-editor.org/rfc/rfc5810.txt

[7] R. Enns, M. Bjorklund, J. Schoenwaelder, and A. Bierman, “Network configuration protocol


(netconf),” Internet Requests for Comments, RFC Editor, RFC 6241, June 2011, Accessed:
2016-11-16. [Online]. Available: http://www.rfc-editor.org/rfc/rfc6241.txt

[8] H. Song, “Protocol-oblivious forwarding: Unleash the power of sdn through a future-proof forwarding
plane,” in HotSDN ’13 Proceedings of the second ACM SIGCOMM workshop on Hot topics in
software defined networking. ACM New York, NY, USA ©2013, August 2013, pp. 127–132.

[9] N. Gude, T. Koponen, J. Pettit, B. Pfaff, M. Casado, N. McKeown, and S. Shenker, “Nox: Towards an
operating system for networks,” ACM SIGCOMM Computer Communication Review, vol. 38, no. 3,
pp. 105–110, July 2008.

[10] Big Switch Networks. (2012) Floodlight openflow controller. Accessed: 2016-12-15. [Online].
Available: https://github.com/floodlight/floodlight

[11] D. Erickson, “The beacon openflow controller,” in HotSDN ’13 Proceedings of the second ACM
SIGCOMM workshop on Hot topics in software defined networking. ACM New York, NY, USA
©2013, August 2013, pp. 13–18.

[12] B. Pfaff and B. Davie, “The open vswitch database management protocol,” Internet Requests for
Comments, RFC Editor, RFC 7047, December 2013, Accessed: 2016-11-16. [Online]. Available:
http://www.rfc-editor.org/rfc/rfc7047.txt

[13] R. T. Fielding, “Architectural styles and the design of network-based software architectures,” Ph.D.
dissertation, University of California, Irvine, 2000.

[14] Open Networking Foundation, “Openflow switch specification,” Tech. Rep., March 2015, Accessed:
2017-08-22. [Online]. Available: https://www.opennetworking.org/wp-content/uploads/2014/10/
openflow-switch-v1.5.1.pdf

[15] C. Ching-Hao and Y.-D. Lin, “Openflow version roadmap,” Department of Computer Science, Chiao
Tung National University, Tech. Rep., September 2015, Accessed: 2017-09-24. [Online]. Available:
http://speed.cis.nctu.edu.tw/∼ydlin/miscpub/indep frank.pdf

[16] ONF, “Openflow management and configuration protocol,” Tech. Rep., February 2014.
[Online]. Available: https://www.opennetworking.org/images/stories/downloads/sdn-resources/
onf-specifications/openflow-config/of-config-1.2.pdf

[17] W. Zhou, L. Li, M. Luo, and W. Chou, “Rest api design patterns for sdn northbound api,” in WAINA
’14 Proceedings of the 2014 28th International Conference on Advanced Information Networking
and Applications Workshops. IEEE Computer Society Washington, DC, USA ©2014, May 2014.

[18] S. Rivera, Z. Fei, and J. Griffioen, “Raptor: A rest api translator for openflow controllers,” in Com-
puter Communications Workshops (INFOCOM WKSHPS), 2016 IEEE Conference. IEEE, April
2016.

[19] Open Networking LAB, “Introducing onos - a sdn network operating system for service
providers,” Tech. Rep., April 2015, Accessed: 2016-11-28. [Online]. Available: http:
//onosproject.org/wp-content/uploads/2014/11/Whitepaper-ONOS-final.pdf

[20] C. Janz, “Intent nbi – definition and principles,” Tech. Rep., October 2016, Accessed: 2016-11-
28. [Online]. Available: https://www.opennetworking.org/images/stories/downloads/sdn-resources/
technical-reports/TR-523 Intent Definition Principles.pdf

[21] M. Pham and D. B. Hoang, “Sdn applications - the intent-based northbound interface realisation for
extended applications,” in NetSoft Conference and Workshops (NetSoft), 2016. IEEE, June 2016.

[22] C. Banse and S. Rangarajan, “A secure northbound interface for sdn applications,” in TRUSTCOM
’15 Proceedings of the 2015 IEEE Trustcom/BigDataSE/ISPA - Volume 01. IEEE Computer Society
Washington, DC, USA ©2015, August 2015, pp. 834–839.

[23] J. S. Sukhveer Kaur, Krishan Kumar, “Round-robin based load balancing in software defined net-
working,” in 2nd International Conference on “Computing for Sustainable Global Development.
IEEE, May 2015.

[24] A. Y. Kenji Hikichi, Toshio Soumiya, “Dynamic application load balancing in distributed sdn con-
troller,” in Network Operations and Management Symposium (APNOMS), 2016 18th Asia-Pacific.
IEEE, October 2016.

[25] A. Schwabe and H. Karl, “Using mac addresses as efficient routing labels in data centers,” in
HotSDN ’14 Proceedings of the third workshop on Hot topics in software defined networking. ACM
New York, NY, USA ©2014, September 2014, pp. 115–120.

[26] R. Wang, D. Butnariu, and J. Rexford, “Openflow-based server load balancing gone wild,” in Hot-
ICE’11 Proceedings of the 11th USENIX conference on Hot topics in management of internet,
cloud, and enterprise networks and services. USENIX Association Berkeley, CA, USA ©2011,
March 2011, pp. 12–12.

[27] S. Shin, L. Xu, S. Hong, and G. Gu, “Enhancing network security through software defined net-
working (sdn),” in Computer Communication and Networks (ICCCN), 2016 25th International Con-
ference. European Council for Modeling and Simulation, August 2016.

[28] B. Vengainathan, A. Basil, M. Tassinari, V. Manral, and S. Banks, “Terminology for


benchmarking sdn controller performance,” Working Draft, IETF Secretariat, Internet-Draft
draft-ietf-bmwg-sdn-controller-benchmark-term-02, July 2016, Accessed: 2016-11-26. [Online].
Available: http://www.ietf.org/internet-drafts/draft-ietf-bmwg-sdn-controller-benchmark-term-02.txt

71
[29] F. F. Azevedo, “A scalable architecture for openflow sdn controllers,” Master’s thesis, Instituto Supe-
rior Tecnico, Av. Prof. Dr. Cavaco Silva, 2744-016 Porto Salvo, October 2015.

[30] J. Zhang, P. Song, Y. Liu, and D. Qian, “Online replacement of distributed controllers in software
defined networks,” in 21st International Conference on Parallel and Distributed Systems (ICPADS).
IEEE, December 2015.

[31] T. Y. Cheng, M. Wang, and X. Jia, “Qos-guaranteed controller placement in sdn,” in Global Com-
munications Conference (GLOBECOM), 2015. IEEE, February 2015.

[32] L. J. Bo Han, Vijay Gopalakrishnan and S. Lee, “Network function virtualization: Challenges and
opportunities for innovations,” IEEE Communications Magazine, vol. 53, no. 2, pp. 90–97, February
2015.

[33] ETSI Industry Specification Group for NFV, “Network functions virtualization - architectural
framework,” ETSI ISG, Tech. Rep., October 2013, ETSI GS NFV 002 V1.1.1, Accessed:
2016-12-13. [Online]. Available: http://www.etsi.org/deliver/etsi gs/nfv/001 099/002/01.01.01 60/
gs nfv002v010101p.pdf

[34] ETSI ISG for NFV, “Network functions virtualization - use cases,” ETSI ISG, Tech. Rep.,
October 2013, ETSI GS NFV 001 V1.1.1, Accessed: 2016-12-13. [Online]. Available:
http://www.etsi.org/deliver/etsi gs/nfv/001 099/001/01.01.01 60/gs nfv001v010101p.pdf

[35] Open Networking Foundation, “Openflow-enabled sdn and network functions virtualization,” Tech.
Rep., February 2014, Accessed: 2016-12-03. [Online]. Available: https://www.opennetworking.org/
images/stories/downloads/sdn-resources/solution-briefs/sb-sdn-nvf-solution.pdf

[36] F. Callegati, W. Cerroni, C. Contoli, R. Cardone, M. Nocentini, and A. Manzalini, “Sdn for dynamic
nfv deployment,” IEEE Communications Magazine, vol. 54, no. 10, pp. 89–95, October 2016.

[37] J. Matias, J. Garay, N. Toledo, J. Unzilla, and E. Jacob, “Toward an sdn-enabled nfv architecture,”
IEEE Communications Magazine, vol. 53, no. 4, pp. 187–193, April 2015.

[38] J. Matias, J. Garay, A. Mendiola, N. Toledo, and E. Jacob, “Flownac: Flow-based network access
control,” in EWSDN ’14 Proceedings of the 2014 Third European Workshop on Software Defined
Networks. IEEE Computer Society Washington, DC, USA ©2014, September 2014.

[39] B. H. Bob Lantz and N. McKeown, “A network in a laptop: Rapid prototyping for software-defined
networks,” in Proceeding Hotnets-IX Proceedings of the 9th ACM SIGCOMM Workshop on Hot
Topics in Networks. ACM New York, NY, USA ©2010, October 2010.

72
[40] F5 Networks, “Load balancing 101: Nuts and bolts,” Tech. Rep., May 2017, Accessed: 2017-7-15.
[Online]. Available: https://f5.com/resources/white-papers/load-balancing-101-nuts-and-bolts

[41] K. Bilal, S. U. Khan, J. Kolodziej, L. Zhang, K. Hayat, S. A. Madani, N. Min-Allah, L. Wang, and
D. Chen, “A comparative study of data center network architectures,” in ECMS 2012 Proceedings
edited by: K. G. Troitzsch, M. Moehring, U. Lotzmann, 2012, pp. 526–532.

[42] R. Sherwood, M. Casado, Y. Ganjali, S. Gorbunov, and A. Tootoonchian, “On controller perfor-
mance in software-defined networks,” in Hot-ICE’12 Proceedings of the 2nd USENIX conference
on Hot Topics in Management of Internet, Cloud, and Enterprise Networks and Services. USENIX
Association Berkeley, CA, USA ©2012, April 2012.

73
Appendix A: Load Balancer API Documentation
• VIPs
• Operation: Create VIP.

• HTTP Verb: POST or PUT.

• URI: /quantum/v1.0/vips/

• Success: Returns the created VIP, showing its attributes in JSON format.

• Error Code: 400 (Bad Request).

The client can specify the following attributes in order to successfully create a VIP:

• id: A string identifying the VIP.

• name: The name of the VIP.

• protocol: The protocol for the pool associated with this VIP. Can be either TCP, UDP, ICMP or
TLS.

• address: The VIP IP address to be publicly available to the clients.

• port: The port associated with the VIP.
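
As an illustrative sketch of the Create VIP operation, the following Python snippet uses the requests library; the controller address (localhost:8080, the Floodlight default REST port) and the concrete attribute values are assumptions for illustration only.

import requests

BASE = "http://localhost:8080/quantum/v1.0"  # assumed controller address (Floodlight default REST port)

# Attribute names follow the list above; the values are purely illustrative.
vip = {"id": "1", "name": "web-vip", "protocol": "TCP", "address": "10.0.0.100", "port": "80"}

resp = requests.post(BASE + "/vips/", json=vip)
print(resp.status_code, resp.json())  # the created VIP in JSON on success, 400 on a bad request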

• Operation: Update a VIP.

• HTTP Verb: POST or PUT.

• URI: /quantum/v1.0/vips/"vip_id"

• Parameter: vip_id - Identifier of the VIP.

• Success: Returns the updated VIP, showing its attributes in JSON format.

• Error Code: 400 (Bad Request).

The client can specify the following attributes in order to successfully update a VIP:

• id: A string identifying the VIP.

• name: The name of the VIP.

• protocol: The protocol for the pool associated with this VIP. Can be either TCP, UDP, ICMP or
TLS.

• address: The VIP IP address to be publicly available to the clients.

• port: The port associated with the VIP.

If these values are not specified, they are filled with default values. However, if there is an error in the request, the server responds with an HTTP error message with code 400 (Bad Request).

• Operation: Delete a VIP.

• HTTP Verb: DELETE.

• URI: /quantum/v1.0/vips/"vip_id"

• Parameter: vip_id - Identifier of the VIP.

• Success Code: 200 (Success)

• Error Code: 404 (Not Found).

• Operation: Get a VIP.

• HTTP Verb: GET.

• URI: /quantum/v1.0/vips/"vip_id"

• Parameter: vip_id - Identifier of the VIP.

• Success: Returns the VIP requested, showing its attributes in JSON format.

• Error Code: None.

• Operation: Get list of VIPs.

• HTTP Verb: GET.

• URI: /quantum/v1.0/vips/

• Success: Returns a list containing all the created VIPs, showing their attributes in JSON format.

• Error Code: None.
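
A minimal retrieval sketch, under the same assumed controller address: a single VIP and the full VIP list are both obtained with plain GET requests.

import requests

BASE = "http://localhost:8080/quantum/v1.0"  # assumed controller address

one_vip = requests.get(BASE + "/vips/1").json()   # attributes of the VIP with id "1"
all_vips = requests.get(BASE + "/vips/").json()   # list of every created VIP
print(one_vip, all_vips)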

• Pools
• Operation: Create Pool.
• HTTP Verb: POST or PUT.

• URI: /quantum/v1.0/pools/

• Success: Returns the created Pool, showing its attributes in JSON format.

• Error Code: 400 (Bad Request).

The client can specify the following attributes in order to successfully create a Pool:

• id: A string identifying the Pool.

• name: The name of the Pool.

• protocol: The protocol used by this Pool. Can be either TCP, UDP, ICMP or TLS.

• vip_id: The identifier of the VIP to be associated with this Pool.

• lbMethod: The algorithm to be used to distribute traffic between the servers. Can be either WRR,
RR or Statistics.
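
A hedged sketch of creating a Pool bound to an existing VIP and balanced with WRR; again, the controller address and the attribute values are illustrative assumptions.

import requests

BASE = "http://localhost:8080/quantum/v1.0"  # assumed controller address

# Pool attached to the VIP with id "1", using Weighted Round-Robin; values are illustrative.
pool = {"id": "1", "name": "web-pool", "protocol": "TCP", "vip_id": "1", "lbMethod": "WRR"}

resp = requests.post(BASE + "/pools/", json=pool)
print(resp.status_code, resp.json())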

• Operation: Update a Pool.


• HTTP Verb: POST or PUT.

• URI: /quantum/v1.0/pools/"pool_id"

• Parameter: pool_id - Identifier of the Pool.

• Success: Returns the updated Pool, showing its attributes in JSON format.

• Error Code: 400 (Bad Request).

The client can specify the following attributes in order to successfully update a Pool:

• id: A string identifying the Pool.

• name: The name of the Pool.

• protocol: The protocol used by this Pool. Can be either TCP, UDP, ICMP or TLS.

• vip_id: The identifier of the VIP to be associated with this Pool.

• lbMethod: The algorithm to be used to distribute traffic between the servers. Can be either WRR,
RR or Statistics.

If these values are not specified, they are filled with default values. However, if there is an error in the request, the server responds with an HTTP error message with code 400 (Bad Request).

• Operation: Delete a Pool.


• HTTP Verb: DELETE.

• URI: /quantum/v1.0/pools/"pool_id"

• Parameter: pool_id - Identifier of the Pool.

• Success Code: 200 (Success)

• Error Code: 404 (Not Found).

• Operation: Get a Pool.


• HTTP Verb: GET.

• URI: /quantum/v1.0/pools/"pool_id"

• Parameter: pool_id - Identifier of the Pool.

• Success: Returns the Pool requested, showing its attributes and members associated with it, in
JSON format.

• Error Code: None.

• Operation: Get list of Pools.


• HTTP Verb: GET.

• URI: /quantum/v1.0/pools/

• Success: Returns a list containing all the created Pools, showing their attributes in JSON format.

• Error Code: None.

• Operation: List Members by Pool.


• HTTP Verb: GET.

• URI: /quantum/v1.0/pools/"pool_id"/members

• Parameter: pool_id - Identifier of the Pool.

• Success: Returns a list containing all the Members associated with the Pool identified by pool_id, showing their attributes in JSON format.

• Error Code: None.

• Operation: Associate Monitor with a Pool.


• HTTP Verb: POST or PUT.

• URI: /quantum/v1.0/pools/"pool_id"/health_monitors

• Parameter: pool_id - Identifier of the Pool.

• Success: Returns the updated Monitor, showing its attributes in JSON format.

• Error Code: 400 (Bad Request).

The client can specify the following attributes in order to successfully associate a Monitor with the Pool:

• id: A string identifying the Monitor.

• address: The IP address of the Monitor.

• type: The protocol used to query the members associated with the Pool being monitored by this element. Only ICMP is available.

If the Pool identified by pool_id already has a Monitor associated with it, another Monitor cannot be associated with that Pool.
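
A possible way of associating a Monitor with a Pool, assuming the same controller address and that the JSON keys match the attribute names listed above.

import requests

BASE = "http://localhost:8080/quantum/v1.0"  # assumed controller address

# Hypothetical ICMP Monitor to be attached to the Pool with id "1".
monitor = {"id": "1", "address": "10.0.0.200", "type": "ICMP"}

resp = requests.post(BASE + "/pools/1/health_monitors", json=monitor)
print(resp.status_code)  # 200 on success, 400 on a bad request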

• Operation: Dissociate Monitor from a Pool.


• HTTP Verb: DELETE.

• URI: /quantum/v1.0/pools/"pool_id"/health_monitors/"monitor_id"

• Parameters: pool_id - Identifier of the Pool, monitor_id - Identifier of the Monitor.

• Success code: 200 (Success)

• Error Code: 404 (Not Found).

• Operation: List Pools with Monitors.


• HTTP Verb: GET.

• URI: /quantum/v1.0/pools/"pool_id"/health_monitors/

• Parameter: pool_id - Identifier of the Pool.

• Success: Returns a list with the Pools that are associated with a Monitor.

• Error Code: None.

• Operation: List Pool Statistics.


• HTTP Verb: GET.

• URI: /quantum/v1.0/pools/"pool_id"/stats

• Parameter: pool_id - Identifier of the Pool.

• Success: Returns a list containing the transmitted and received bytes and the active flows of the Pool identified by pool_id.

• Error Code: None.
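
Retrieving the statistics of a Pool is a simple GET; the sketch below assumes the controller address used in the previous examples.

import requests

BASE = "http://localhost:8080/quantum/v1.0"  # assumed controller address

stats = requests.get(BASE + "/pools/1/stats").json()
print(stats)  # transmitted/received bytes and active flows of Pool "1"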

• Operation: Prioritize a Member of a Pool.


• HTTP Verb: POST or PUT.

• URI: /quantum/v1.0/pools/"pool_id"/members/"member_id"

• Parameters: pool_id - Identifier of the Pool, member_id - Identifier of the Member.

• Success Code: 200 (Success)

• Error Code: 404 (Not Found).

• Description: Selects a Pool and a Member, increases the weight of that Member and adjusts the weights of the other Members of the Pool, biasing the distribution towards this Member and increasing its chances of being selected by the WRR algorithm.
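
A sketch of prioritising a Member of a Pool; since all the information travels in the URI, no request body is needed (controller address assumed as before).

import requests

BASE = "http://localhost:8080/quantum/v1.0"  # assumed controller address

# Bias the WRR selection of Pool "1" towards Member "2"; no request body is required.
resp = requests.post(BASE + "/pools/1/members/2")
print(resp.status_code)  # 200 on success, 404 if the Pool or the Member does not exist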

• Members
• Operation: Create Member.
• HTTP Verb: POST or PUT.

• URI: /quantum/v1.0/members/

• Success: Returns the created Member, showing its attributes in JSON format.

• Error Code: 400 (Bad Request).

The client can specify the following attributes in order to successfully create a Member:

• id: A string identifying the Member.

• address: The IP address of the Member.

• port: The switch port of this service.

• pool_id: The identifier of the Pool to be associated with this Member.
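
A sketch of registering a back-end server as a Member of an existing Pool, with illustrative values and the same assumed controller address.

import requests

BASE = "http://localhost:8080/quantum/v1.0"  # assumed controller address

# Back-end server joining the Pool with id "1"; the Pool must already exist.
member = {"id": "2", "address": "10.0.0.11", "port": "80", "pool_id": "1"}

resp = requests.post(BASE + "/members/", json=member)
print(resp.status_code, resp.json())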

• Operation: Update a Member.


• HTTP Verb: POST or PUT.

• URI: /quantum/v1.0/members/"member_id"

• Parameter: member_id - Identifier of the Member.

• Success: Returns the updated Member, showing its attributes in JSON format.

• Error Code: 400 (Bad Request).

The client can specify the following attributes in order to successfully update a Member:

• id: A string identifying the Member.

• address: The IP address of the Member.

• port: The switch port of this service.

• pool_id: The identifier of the Pool to be associated with this Member.

If the Pool identified by pool_id does not exist, the Member is not created and an error is sent to the client. If multiple Members are created with the same address and pool_id attributes, the load balancer is not able to distinguish them and therefore cannot apply its features to those Members appropriately.

• Operation: Set Member Weight.

• HTTP Verb: POST or PUT.

• URI: /quantum/v1.0/members/"member_id"/"weight_value"

• Parameters: member_id - Identifier of the Member, weight_value - Integer value between 1 and 10.

• Success Code: 200 (Success)

• Error Codes: 404 (Not Found), 400 (Bad Request)

If the weight value does not satisfy these constraints, an error message with code 400 (Bad Request) is sent to the client. If the Member does not exist, the server responds with an error message with code 404 (Not Found).
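
Setting a Member weight only requires the identifier and the value in the URI, as in the following sketch (controller address assumed as before).

import requests

BASE = "http://localhost:8080/quantum/v1.0"  # assumed controller address

# Give Member "2" a weight of 5 (the weight must be an integer between 1 and 10).
resp = requests.put(BASE + "/members/2/5")
print(resp.status_code)  # 200 on success, 400 for an invalid weight, 404 for an unknown Member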

• Operation: Delete a Member.


• HTTP Verb: DELETE.

• URI: /quantum/v1.0/members/"member_id"

• Parameter: member_id - Identifier of the Member.

• Success Code: 200 (Success)

• Error Code: 404 (Not Found).

• Operation: Get a Member.


• HTTP Verb: GET.

• URI: /quantum/v1.0/members/"member_id"

• Parameter: member_id - Identifier of the Member.

• Success: Returns the Member requested, showing its attributes, in JSON format.

• Error Code: None.

• Operation: Get list of Members.


• HTTP Verb: GET.

• URI: /quantum/v1.0/members/

• Success: Returns a list containing all the created Members, showing their attributes, in JSON
format.

• Error Code: None.

• Monitors
• Operation: Enable/Disable Monitors.
• HTTP Verb: POST or PUT.

• URI: /quantum/v1.0/health_monitors/"status"

• Parameter: status - "Enable" or "Disable", according to whether the user wants monitoring enabled or disabled.

• Success: 200 (Success)

• Error Code: 409 (Conflict). If it is not possible to cancel or start monitoring, then an error is sent to
the client.
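
Monitoring can be switched on or off with a single request, as sketched below under the same assumptions.

import requests

BASE = "http://localhost:8080/quantum/v1.0"  # assumed controller address

resp = requests.post(BASE + "/health_monitors/Enable")  # use "Disable" to stop monitoring
print(resp.status_code)  # 200 on success, 409 if monitoring cannot be started or stopped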

• Operation: Change Period of Monitors.


• HTTP Verb: POST or PUT.

• URI: /quantum/v1.0/health_monitors/monitors/"period"

• Parameter: period - Integer representing the number of seconds that Monitors wait between queries to the Members.

• Success: 200 (Success)

• Error Code: 400 (Bad Request). If the parameter ”period” is badly formatted.
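
A sketch of changing the monitoring period to a hypothetical value of 10 seconds, under the same assumptions.

import requests

BASE = "http://localhost:8080/quantum/v1.0"  # assumed controller address

# Ask the Monitors to query the Members every 10 seconds (illustrative value).
resp = requests.post(BASE + "/health_monitors/monitors/10")
print(resp.status_code)  # 200 on success, 400 if the period is badly formatted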

• Operation: Create Monitor.


• HTTP Verb: POST or PUT.

• URI: /quantum/v1.0/health_monitors/

• Success: Returns the created Monitor, showing its attributes in JSON format.

• Error Code: 400 (Bad Request).

The client can specify the following attributes in order to successfully create a Monitor:

• id: A string identifying the Monitor.

• address: The IP address of the Monitor.

• type: The protocol used to query the members associated with the Pool being monitored by this element. Only ICMP is available.
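
A sketch of creating a standalone Monitor, with illustrative values and the same assumed controller address; it can later be associated with a Pool through the Pools endpoint.

import requests

BASE = "http://localhost:8080/quantum/v1.0"  # assumed controller address

# Standalone ICMP Monitor with illustrative values; it can later be associated with a Pool.
monitor = {"id": "2", "address": "10.0.0.201", "type": "ICMP"}

resp = requests.post(BASE + "/health_monitors/", json=monitor)
print(resp.status_code, resp.json())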

• Operation: Delete a Monitor.

• HTTP Verb: DELETE.

• URI: /quantum/v1.0/health_monitors/"monitor_id"

• Parameter: monitor_id - Identifier of the Monitor.

• Success Code: 200 (Success)

• Error Code: 404 (Not Found).

• Operation: Get a Monitor.


• HTTP Verb: GET.

• URI: /quantum/v1.0/health_monitors/"monitor_id"

• Parameter: monitor_id - Identifier of the Monitor.

• Success: Returns the Monitor requested, showing its attributes, in JSON format.

• Error Code: None.

• Operation: Get list of Monitors.


• HTTP Verb: GET.

• URI: /quantum/v1.0/health_monitors/

• Success: Returns a list containing all the created Monitors, showing their attributes, in JSON
format.

• Error Code: None.
