
UNIVERSITY OF GHANA

COLLEGE OF BASIC & APPLIED SCIENCE

COMPARISON OF THREE OPENFLOW SDN CONTROLLERS - OPENDAYLIGHT,

FLOODLIGHT, AND HPE VAN

DAWUNI RASHID GODSON

(10805844)

THIS THESIS IS SUBMITTED TO THE UNIVERSITY OF GHANA, LEGON IN PARTIAL

FULFILMENT OF THE REQUIREMENT FOR THE AWARD OF MSC COMPUTER

SCIENCE

OCTOBER, 2020

DECLARATION

I hereby declare that this thesis is my own original research work undertaken at the

Department of Computer Science, University of Ghana, Legon under the guidance of

my thesis supervisor, except for other people’s work, which has been duly cited and

acknowledged.

STUDENT

Name: Dawuni, Rashid Godson

Signature: ..........................................................

Date: ..........................................................

SUPERVISOR

Name: Dr. Jamal-Deen Abdulai

Signature: ..........................................................

Date: ..........................................................

CO-SUPERVISOR

Name: Prof. Ferdinand A. Katsriku

Signature: ..........................................................

Date: ..........................................................

DEDICATION

To my mother

ACKNOWLEDGEMENT

I appreciate my supervisors Dr. Jamal-Deen Abdulai and Prof. Ferdinand A. Katsriku for their

tutelage, support, and guidance during my Master's Programme.

I also thank the lecturers in the Department of Computer Science for teaching me and impacting

my life positively.

ABSTRACT

Software Defined Networking (SDN) is an innovative way of programming the age-old traditional

networks. Traditional networking devices have the control and forwarding planes bundled

together. SDN enables independent evolution of the planes by separating them. The Control plane

is pushed to a Controller while packet forwarding resides in the switches and routers.

The Implementation of a Software Defined Network relies significantly on the SDN Controller.

The Controller acts as the brain of the network where decisions regarding routes and packet

forwarding are made. The Controller also has a detailed visibility of the data plane devices. The

capabilities of these devices are made available through Controller Southbound Interfaces (SBIs)

such as the Network Configuration Protocol (NETCONF), the Simple Network Management Protocol

(SNMP), as well as OpenFlow. Applications for Security, Quality of Service, Traffic Engineering

among others are written and deployed to these data plane devices through the Northbound

Interface (NBI) of the Controller.

There has been much research done on designing and implementing SDN Controllers. The

implemented SDN Controllers are either opensource or vendor specific. The Controllers are further

divided into Centralized and Distributed Controllers.

This thesis focuses on comparing the implementation and performance metrics of three SDN

Controllers for particular network topologies. The Controllers examined in this thesis include:

OpenDayLight, an opensource, distributed Controller; Floodlight, an opensource, centralized

Controller; and HPE VAN, a proprietary Controller.

TABLE OF CONTENTS

DECLARATION ............................................................................................................................ ii
DEDICATION ............................................................................................................................... iii
ACKNOWLEDGEMENT ............................................................................................................. iv
ABSTRACT.................................................................................................................................... v
TABLE OF CONTENTS ............................................................................................................... vi
LIST OF FIGURES ....................................................................................................................... ix
LIST OF TABLES ......................................................................................................................... xi
LIST OF ABBREVIATIONS ....................................................................................................... xii
CHAPTER ONE ............................................................................................................................. 1
1 INTRODUCTION ................................................................................................................... 1
1.1 Background of the study .................................................................................................. 1
1.2 Statement of the problem ................................................................................................. 7
1.3 Aim ................................................................................................................................... 9
1.4 Justification of the study ................................................................................................ 10
1.5 Significance of the study ................................................................................................ 10
1.6 Scope of the study .......................................................................................................... 11
1.7 Limitations of the study.................................................................................................. 11
1.8 Organization of the study ............................................................................................... 12
CHAPTER TWO .......................................................................................................................... 13
2 LITERATURE REVIEW ...................................................................................................... 13
2.1 Introduction .................................................................................................................... 13
2.2 History of SDN Controllers ........................................................................................... 13
2.2.1 Active Networking .................................................................................................. 14
2.2.2 Control and Data Plane Separation ......................................................................... 15
2.2.3 OpenFlow Protocol and Network Operating Systems ............................................ 22
2.3 SDN Architecture ........................................................................................................... 29
2.3.1 Infrastructure Layer ................................................................................................ 30
2.3.2 Control Plane .......................................................................................................... 31
2.3.3 Application Layer ................................................................................................... 34
2.4 SDN Controllers ............................................................................................................. 34
2.4.1 Centralized Controllers ........................................................................................... 35

2.4.2 Distributed Controllers............................................................................................ 36
2.5 Feature Based Comparison of SDN Controllers ........................................................... 36
2.6 Related Works ................................................................................................................ 38
2.7 Chapter Summary ........................................................................................................... 40
CHAPTER THREE ...................................................................................................................... 41
3 RESEARCH METHODOLOGY .......................................................................................... 41
3.1 Introduction .................................................................................................................... 41
3.2 Research Method ............................................................................................................ 41
3.3 Proposed Controllers ...................................................................................................... 41
3.3.1 OpenDayLight......................................................................................................... 42
3.3.2 Floodlight ................................................................................................................ 43
3.3.3 HPE VAN ............................................................................................................... 45
3.4 Emulation Environment ................................................................................................. 46
3.4.1 VMware ESXi 6.7 ................................................................................................... 47
3.4.2 Ubuntu..................................................................................................................... 48
3.4.3 Open vSwitch .......................................................................................................... 48
3.4.4 Mininet .................................................................................................................... 48
3.5 Emulation Scenarios ....................................................................................................... 49
3.5.1 Single Topology ...................................................................................................... 49
3.5.2 Linear Topology...................................................................................................... 50
3.5.3 Tree Topology ......................................................................................................... 50
3.6 Performance Metrics ...................................................................................................... 50
3.6.1 Topology Discovery................................................................................................ 51
3.6.2 Throughput .............................................................................................................. 51
3.6.3 Round Trip Time..................................................................................................... 51
3.7 Tools for Performance metrics ....................................................................................... 52
3.7.1 Iperf ......................................................................................................................... 52
3.7.2 Ping ......................................................................................................................... 53
3.8 Chapter Summary ........................................................................................................... 53
4 DESIGN AND IMPLEMENTATION .................................................................................. 54
4.1 Introduction .................................................................................................................... 54
4.2 SDN Deployment Approaches ....................................................................................... 55

4.3 Justification of Implementation Options ........................................................................ 55
4.4 Testbed Setup ................................................................................................................. 56
4.4.1 VMware ESXi 6.7 ................................................................................................... 57
4.4.2 Ubuntu 18.04.2........................................................................................................ 59
4.4.3 Mininet .................................................................................................................... 59
4.5 SDN Controllers ............................................................................................................. 60
4.5.1 OpenDayLight......................................................................................................... 60
4.5.2 Floodlight ................................................................................................................ 61
4.5.3 HPE VAN ............................................................................................................... 62
4.6 SDN Topologies ............................................................................................................. 62
4.6.1 Single Topology ...................................................................................................... 62
4.6.2 Linear Topology...................................................................................................... 64
4.6.3 Tree Topology ......................................................................................................... 66
4.7 Chapter Summary ........................................................................................................... 67
CHAPTER FIVE .......................................................................................................................... 68
RESEARCH EVALUATION ....................................................................................................... 68
5 Introduction ........................................................................................................................... 68
5.1 Topology Discovery ....................................................................................................... 68
5.2 Round Trip Time ............................................................................................................ 76
5.3 Throughput ..................................................................................................................... 86
5.4 Chapter Summary ........................................................................................................... 88
CHAPTER SIX ............................................................................................................................. 91
6 CONCLUSION AND FUTURE WORK .............................................................................. 91
6.1 Introduction .................................................................................................................... 91
6.2 Summary of the Study .................................................................................................... 91
6.3 Summary of Findings ..................................................................................................... 94
6.4 Suggestions for Further Work ........................................................................................ 97
References ..................................................................................................................................... 99

LIST OF FIGURES

Figure 1: Traditional Network Device versus SDN Device ........................................................... 3

Figure 2: Selected evolution of SDN with supporting technologies............................................. 14

Figure 3: Diagram of an OpenFlow Enabled Network ................................................................. 23

Figure 4: OpenFlow Switch Components ..................................................................................... 24

Figure 5: Components of Flow Entries in a Flow Table ............................................................... 24

Figure 6: Packet flow through a processing pipeline .................................................................... 26

Figure 7: Diagram of the SDN Architecture ................................................................................. 30

Figure 8: Test Server Mounted in Rack ........................................................................................ 57

Figure 9: Virtual Servers ............................................................................................................... 58

Figure 10: Properties of a VM ...................................................................................................... 58

Figure 11: Ubuntu OS Updated .................................................................................................... 59

Figure 12: Mininet Installation ..................................................................................................... 59

Figure 13: Mininet Network ......................................................................................................... 60

Figure 14: OpenDayLight Installation .......................................................................................... 61

Figure 15: OpenDayLight Features Installation ........................................................................... 61

Figure 16: Floodlight Installation ................................................................................................. 61

Figure 17: Network Configuration of HPE VAN ......................................................................... 62

Figure 18: Mininet Single Topology............................................................................................. 63

Figure 19: Single Topology Displayed by OpenDayLight ........................................................... 64

Figure 20: Mininet Linear Topology ............................................................................................ 65

Figure 21: Linear Topology Displayed by Floodlight .................................................................. 65

Figure 22: Mininet Tree Topology ............................................................................................... 66

Figure 23: Tree Topology Displayed by HPE VAN ..................................................................... 66

Figure 24: ODL with 8 hosts and 1 switch .................................................................... 68

Figure 25: ODL with 64 hosts and 1 switch .................................................................. 69

Figure 26: ODL with 128 hosts and 1 switch ............................................................................... 69

Figure 27: Floodlight with 8 hosts and 1 switch ............................................................. 70

Figure 28: Floodlight with 64 hosts and 1 switch ........................................................... 70

Figure 29: Floodlight with 128 hosts and 1 switch ......................................................... 71

Figure 30: HPE VAN with 8 hosts and 1 switch ........................................................... 71

Figure 31: OpenDayLight with Linear topology of 8 switches and 8 hosts ................................. 72

Figure 32: Floodlight with the Linear topology of 8 switches and 8 hosts................................... 73

Figure 33: HPE VAN with Linear topology of 8 switches ........................................................... 73

Figure 34: OpenDayLight with tree topology of the depth of 2 and fanout of 2 .......................... 74

Figure 35: OpenDayLight with tree topology of the depth of 2 and fanout of 2 .......................... 74

Figure 36: OpenDayLight with tree topology of the depth of 2 and fanout of 2 .......................... 75

Figure 37: Minimum Round-Trip Time for Single Topology ...................................................... 77

Figure 38: Average Round-Trip Time for Single Topology ......................................................... 78

Figure 39: Maximum Round-Trip Time ....................................................................................... 79

Figure 40: Minimum Round-Trip Time for Linear Topology ...................................................... 80

Figure 41: Average Round-Trip Time for Linear Topology ........................................................ 81

Figure 42: Maximum Round-Trip Time for Linear Topology ..................................................... 82

Figure 43: Average Round-Trip Time for Tree Topology with Depth of 1 ................................. 84

Figure 44: Average Round-Trip Time for Tree Topology with Depth of 2 ................................. 86

Figure 45: Throughput in Single Topology .................................................................................. 87

LIST OF TABLES

Table 1: Feature Comparison of SDN Controllers ....................................................................... 36

Table 2: Minimum Round-Trip Time for Single Topology.......................................................... 76

Table 3: Average Round-Trip Time for Single Topology ............................................................ 78

Table 4: Maximum Round-Trip Time for Single Topology ......................................................... 79

Table 5: Minimum Round-Trip Time for Linear Topology ......................................................... 80

Table 6: Average Round-Trip Time for Linear Topology ............................................................ 81

Table 7: Maximum Round-Trip Time for Linear Topology ......................................................... 82

Table 8: Minimum Round-Trip Time for Tree Topology with Depth of 1 .................................. 83

Table 9: Average Round-Trip Time for Tree Topology with Depth of 1 ..................................... 83

Table 10: Maximum Round-Trip Time for Tree Topology with Depth of 1................................ 83

Table 11: Minimum Round-Trip Time for Tree Topology with Depth of 2 ................................ 84

Table 12: Average Round-Trip Time for Tree Topology with Depth of 2 ................................... 85

Table 13: Maximum Round-Trip Time for Tree Topology with Depth of 2................................ 85

Table 14: Throughput in Single Topology.................................................................................... 87

Table 15: Summary of Single Topology RTT Results ................................................................. 89

Table 16: Summary of Linear Topology RTT Results ................................................................. 89

Table 17: Summary of Tree Topology RTT Results .................................................................... 90

LIST OF ABBREVIATIONS

Application Programming Interface - API

Forwarding and Control Element Separation - ForCES

Graphical User Interface - GUI

Hewlett Packard Enterprise - HPE

Integrated Development Environment - IDE

Model-Driven Service Abstraction Layer - MD-SAL

Network Interface Cards - NIC

Network Functions Virtualization - NFV

OpenDayLight - ODL

OpenFlow Discovery Protocol - OFDP

Open Service Gateway Interface - OSGi

Representational State Transfer - REST

Round Trip Time - RTT

Secure Architecture for Networked Enterprise - SANE

Software Defined Networking - SDN

Virtual Application Networks - VAN

CHAPTER ONE

1 INTRODUCTION

1.1 Background of the study

The internet is used by billions of people across the world daily. This project, which started in 1969

as a research project in academia, is today the world’s largest computer network. Vint Cerf and

Bob Kahn, who are widely recognized as the fathers of the internet, worked on its standardization

and invented the Internet Protocol Suite (Transmission Control Protocol (TCP) and the Internet

Protocol (IP)). The Internet Protocol Suite replaced the Network Control Program (NCP), which was

the first host-to-host protocol. NCP was a simplex protocol that used two port addresses: one

to send data and the other to receive data. NCP also allowed different networks in

the Advanced Research Projects Agency Network (ARPANET) to route traffic through the

Interface Message Processor (IMP).

Networking technologies quickly sprang up afterward. An example was the introduction of

Ethernet technology at Xerox PARC by Bob Metcalfe. Ethernet is the key technology behind

Local Area Networks (LAN), Metropolitan Area Networks (MAN) and Wide Area Networks

(WAN). Networking at this period become full-blown and it became difficult to keep track of the

Internet Protocol Addresses that were assigned to computers. This led to the development of the

Domain Name System (DNS) to map the IP Addresses to their respective hostnames. Routing was

done using a single routing algorithm across all networks on the internet. This was quickly scaled

up with the introduction of the Interior Gateway Protocol (IGP), implemented in each

autonomous system, and these autonomous systems were interconnected with another routing

algorithm called the Exterior Gateway Protocol (EGP).

The early success of innovation in computer networking has stagnated in recent times. The

networks of today have higher overhead, are more difficult to troubleshoot, and do not

offer flexibility for experimentation. The routing protocols described above are still being used

with little improvements over the years. Routers and switches are shipped with vendor-specific

operating systems such as Cisco IOS; the network administrator is thus stuck with what these

vendors have to offer, which is mostly inadequate. Data Centers now deploy virtualization

technologies to share resources, reduce cost, and support testing (Bizanis, Kuipers, & Member, 2016),

(Rehman, Aguiar, & Barraca, 2019). These technologies include; Application, Data, Desktop,

Network, Server, and Storage virtualization. The closest to a networking improvement is the

Network Functions Virtualization (NFV) where the functions of firewalls, Unified Threat

Management (UTM) appliances, routers, switches, and other networking devices are virtualized

and installed on a physical server. Networks implemented by these virtualized network devices are

called overlay networks. The traffic generated by the overlay network is pushed to the physical

network through the Network Interface Cards (NIC) of the physical server (hosting the virtualized

network devices).

These virtualization technologies rely on the underlying network infrastructure (underlay

network) to carry traffic through the internet. These overlay networks leave bare the problems the

traditional (underlay) network presents in terms of scalability, service setup, and configuration.

Software-Defined Networking is a novel networking paradigm that expands the capabilities of

existing networks. The shift that SDN introduces is the separation of control and forwarding layers

of networking devices. The Open Networking Foundation (ONF) is the lead organization that

drives innovation in network infrastructure and carrier services. SDN is based on three principles

as outlined by ONF (ONF, 2014) as follows:

i. separation of control and data planes

ii. centralization of control

iii. an abstraction of network resources and state and exposing such resources to external

applications

Figure 1: Traditional Network Device versus SDN Device

The SDN as opposed to traditional networks as shown in Fig.1 offers several benefits which

include;

• Network Programmability: The policies governing the network are directly programmable

since control functions are located in a controller that has open APIs to automation tools and

application-deployment tools such as Chef, Ansible, Jenkins, and CFEngine. Software

Development Kits (SDKs) such as the Java Development Kit (JDK) and YANG Development

Kit (YDK) are useful in developing custom applications for the network (a brief sketch of

querying such an open controller API appears after this list).

• Centralization of Management: The intelligence of the network is located at the controller and

the global view of all devices is shown in the controller topology. Applications and network

policy engines view the entire network as a single logical switch which makes policy

deployment easy.

• Content Delivery: SDN allows traffic engineering leading to the implementation and

automation of Quality of Services (QoS) for Voice over IP (VOIP), video, and audio

transmissions. The network automatically shifts resources from less congested applications to

applications that are operating at their peak, thereby providing delay and bandwidth guarantees.

Resource re-allocator (re-routing module) is an app in the management plane that reallocates

flows through OpenFlow to adjust to network congestion and periodically optimize the use of

resources (Tajiki, Akbari, Shojafar, & Mokari, 2017).

• Reduced capital expenditures and Hardware savings: SDN advocates for openness to both

control and data plane logic in networking equipment; this permits network administrators to

purchase devices without concern about compatibility with other devices.
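
As a brief illustration of the open northbound APIs mentioned under Network Programmability above, the sketch below queries a controller's REST interface for the OpenFlow switches it manages. It is a minimal, hedged example rather than part of this thesis's experiments: it assumes a Floodlight controller running locally on its default REST port (8080) and Floodlight's documented switch-listing endpoint; the address, port, and field names would differ for other controllers or versions.

```python
# Minimal sketch of querying an SDN controller's northbound REST API.
# Assumes a local Floodlight instance on its default REST port 8080 (placeholder address).
import requests

CONTROLLER = "http://127.0.0.1:8080"

# Floodlight's switch-listing endpoint; other controllers expose different URLs.
resp = requests.get(f"{CONTROLLER}/wm/core/controller/switches/json", timeout=5)
resp.raise_for_status()

for switch in resp.json():
    # Field names vary between Floodlight versions ("switchDPID" vs "dpid").
    print("Connected switch:", switch.get("switchDPID") or switch.get("dpid"))
```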

There have been several approaches to implementing SDN; the most noteworthy is the NOX SDN

Controller project (Gude et al., 2008) developed by Nicira Networks. NOX became the first

SDN Controller and was installed on a physical server to have a network-wide view of

connected devices. Network devices connected to the NOX Controller through an OpenFlow

switch. The OpenFlow protocol (Mckeown, Anderson, Peterson, Rexford, & Shenker, 2008)

was developed to carry instructions to the connected devices through the OpenFlow switch.

The NOX Controller thus served as a bridge between applications written in higher

programming languages such as Java and the data plane devices. The key challenges that

emerged from the implementation of SDN were scalability, reliability, and security.

Organizations aiming to deploy SDN have to take these challenges into consideration.

Organizations such as Arista Networks have implemented SDN in their “Software-Driven Cloud

Networking (SDCN)” which integrates the principles of cloud computing like self-service

provisioning, automation, and scaling in terms of performance and operational cost. SDCN uses

“Arista Extensible Operating System (EOS)” to provide network virtualization, high availability,

simplified architectures, custom programmability, rapid integration with a wide range of third-

party applications, and cost efficiency to their clients (Arista, 2020).

Ubiquiti Networks has employed the principles of SDN to develop a Network Management

Controller called the Unifi SDN Controller which runs on Apache Tomcat web server. The

controller is free to download for network administrators using Ubiquiti products with support on

Windows, Mac, and Linux and mobile app support for iOS and Android. The controller is bundled

with the UniFi-Discover tool for finding and managing UniFi devices on the local network. The

architecture is built for scalability to support a large number of Ubiquiti devices, comprising

Localized UniFi Controllers, Access Points, Fiber/Ethernet switches, and Internal Gateways. The

Controller uses the proprietary CPE WAN Management Protocol (CWMP) for auto-

configuration, software or firmware upgrade, device status, performance management, and

diagnostics. A cluster of UniFi SDN Controllers can be managed at a central point or under a single

dashboard using the UniFi Hybrid Cloud. The communication between the remote controllers is

done through Web Real-Time Communication (WebRTC) for end-to-end access with the added

advantage of maintaining a connection in a dynamically changing public IP environment or through

a NAT/double-NAT firewall.

Cisco, the world leader in network equipment vending, has adjusted to the SDN trend by rolling

out next-generation routers and switches. Cisco Nexus 9000 Series Switches support OpenFlow,

an opensource SBI, and the Cisco One Platform Kit (onePK), a Cisco proprietary SBI, to connect to the

Cisco Open SDN Controller or any Controller of the network administrator's choice. Cisco ASR 9000

Series Aggregation Services Routers are SDN compatible with support for stateful and stateless

PCEP, Border Gateway Protocol Link-State (BGP-LS), OpenFlow, and NETCONF/YANG

(Roberts et al., 2014). Application agility in the data center is addressed with the introduction of

the Cisco Application-Centric Infrastructure (ACI). Cisco ACI uses the Application Policy

Infrastructure Controller (APIC) as a central management center. Cisco SDN coupled with

Kubernetes enables automated deployment, scaling, and management of containerized

applications. There is also complete network visibility and threat protection, made possible

through the integration of third-party applications for advanced security, load balancing, and

monitoring. Examples of third-party solutions the network administrator can deploy in ACI are

SourceFire (for network security), Embrane (for VPNs, firewalls, load balancers), and F5 (for

application delivery network).

Small to medium-sized organizations that do not have the capacity, expertise, or capital of the

above-mentioned companies can still manage their networks using SDN. Physical data plane devices that

support SDN are currently expensive; as such, VMware has introduced VMware NSX to bring

network visibility, flexibility, agility, and scalability to networks. NSX Manager offers central

management of SDN networks while providing a REST API for creating, configuring, and

monitoring NSX components. The control plane is an appliance based on the controller cluster, while

the data plane is made up of the NSX vSphere Distributed Switch with kernel modules (VXLAN,

Distributed Logical Router and Firewall) and the NSX Edge Services Gateway (Gustavo A. A.

Santana, 2013).

1.2 Statement of the problem

Network management involves the collection of data about the capabilities of network devices,

analysis of the data to detect faults, to measure performance, and to maintain Quality of Service.

Network management activities fall into five functional areas, namely Fault, Configuration,

Accounting, Performance, and Security Management (FCAPS) (Nuangjamnong, Maj, & Veal, 2008).

The network is managed by having a central server called Network Management System (NMS)

and agents of the NMS deployed in end devices to be monitored. The end devices store their data

in a Management Information Base (MIB) (McCloghrie, 2013) and use the agent program to

communicate with the server for the management functions i.e. FCAPS. The agent program relays

the device state, network traffic, and notifications to the NMS through supported protocols such

as the Simple Network Management Protocol (SNMP) or Common Management Information

Protocol (CMIP) (Ren & Li, 2010). Network traffic gathered in this form is used to build the

network topology, statistics are analyzed by operators to ascertain how best the network and device

performance could be improved and this is a daunting task (Gupta & Feamster, 2016). Currently

deployed NMSs in a traditional network environment include HP OpenView, CiscoWorks,

SolarWinds Network Performance Monitoring tool, Nagios, Cacti, Argus, Zabbix, etc. These

network management systems for traditional networking are complex to implement and have to be

integrated with different products to support the five functional areas of network management.

Device configuration has to be done individually since none of the NMSs has complete control of

the device's control plane, which is usually configured through the Command Line Interface (CLI) (Dubey, 2016).

Network management is much easier with SDN because all control functions of the devices are

located in the SDN Controller, and network control frameworks such as Procera help network

operators manage the network better (Kim & Feamster, 2013).
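
To make the polling model described above concrete, the sketch below performs a single SNMP GET of the standard sysDescr object (OID 1.3.6.1.2.1.1.1.0) with the pysnmp library, much as an NMS would poll an agent's MIB. It is an illustrative example only; the target address (192.0.2.1) and the "public" community string are placeholders rather than values used in this study.

```python
# Illustrative SNMP poll of a managed device's MIB (sysDescr.0), as an NMS would do.
# The target address and community string below are placeholders.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData('public', mpModel=0),               # SNMPv1 community string
    UdpTransportTarget(('192.0.2.1', 161)),           # managed device address and port
    ContextData(),
    ObjectType(ObjectIdentity('1.3.6.1.2.1.1.1.0')),  # sysDescr.0
))

if error_indication:
    print("Polling failed:", error_indication)
else:
    for name, value in var_binds:
        print(name, "=", value)
```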

In a traditional network, scalability at layer 2 is achieved through the use of Virtual Local Area

Network (VLAN). Broadcast domains are segmented for security and simpler management while

reducing device purchase cost (Al-khaffaf, 2018). This Ethernet technology uses the IEEE 802.1Q

standard to scale up to 4094 different networks (the 12-bit VLAN ID yields 2^12 = 4096 values,

of which 0 and 4095 are reserved), which is inadequate for Data Centers with Cloud

Computing capabilities and Multi-Tenant Architecture (Kwon, Lee, Hahni, & Perrig, 2020).

Existing VLANs use the Spanning Tree Protocol (STP) with little or no support for the newer

Shortest Path Bridging (SPB) (Avaya, 2011) to converge the network topology, thereby blocking

some of the ports (redundant paths) to prevent traffic loops; therefore, only half of the available

paths are used. Transparent Interconnection of Lots of Links (TRILL) (Salam, Kumar, & Eastlake,

2015), which supports multipathing at the data link layer, enabling Fibre Channel over Ethernet (FCoE) and

improved latency, is yet to be adopted in current layer 2 devices. Furthermore, traditional layer 2

devices do not support newer protocols such as the Virtual Extensible Local Area Network

(VXLAN), Network Virtualization using Generic Routing Encapsulation (NVGRE) and Generic

Network Virtualization Encapsulation (GENEVE) to scale networks (Yun, 2013).

The closed (Proprietary) nature of network devices means there is research stagnation in

networking and real-world experiments are difficult to perform on existing large-scale production

networks. For example, in a network, it is difficult to develop and install a new routing protocol to

route traffic throughout the network. Existing protocols that allow routing in a network include

Open Shortest Path First (OSPF), Enhanced Interior Gateway Routing Protocol (EIGRP), and

Border Gateway Protocol (BGP), which are prone to routing table issues. Protocols such as the Path

Computation Element Communication Protocol (PCEP) in traditional networking give scope for

route manipulation. In PCEP, Path Computation Clients (PCC) request path computations from

Path Computation Elements (PCE). PCE is a network application that calculates sophisticated

paths through a network by applying computational constraints in real-time.

Enterprise networks that need new features and functions added to these devices have to contact the

device vendors, which in turn consult with their software developers and chipset manufacturers;

the process can take several years. These new features and functionalities could

have been easily added by the in-house software developers or network administrators of the

enterprise networks if the vendors had exposed the devices' Application Programming Interface

(API). The preferred option would be for equipment vendors to sell only the data plane devices to

clients without an embedded control plane; in that case, packet processing could be defined with

match-action tables for switches, routers, load balancers, etc. through OpenFlow or any selected

Southbound Interface of choice.

1.3 Aim

This thesis will compare the performance of different SDN controllers. In a typical traditional

network environment, the control and packet-forwarding planes are bundled together; those planes

will be segregated in this study.

This study will focus on the following:

• Protocols used by applications in the management plane to control data planes will be

identified and focus will be given to the Representational State Transfer (REST) API.

• In the Control plane, the different types and categories of SDN Controllers will be identified.

• Protocols used by the controller to communicate with the data plane will be identified. The

OpenFlow will be examined in detail.

• OpenDayLight, Floodlight and HPE VAN Controllers will be installed on separate servers, and the

same network traffic will be simulated to these controllers. Analysis of how the controllers

handle the same traffic will be done and compared on topology display, round-trip time, and

throughput (a minimal emulation sketch appears after this list).

• In the data plane, SDN-enabled network devices, both physical and virtual, will be identified,

and a survey made of devices rolled out by device manufacturers such as Cisco, HP, Huawei, etc.
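
As a forward pointer to the methodology detailed in Chapters Three and Four, the sketch below shows, under assumed defaults, how an emulated single-switch topology can be attached to an already-running remote controller and exercised with the ping- and iperf-based measurements that underlie the round-trip time and throughput comparisons. The controller IP address and OpenFlow port are placeholders; each of OpenDayLight, Floodlight, and HPE VAN would be substituted in turn.

```python
# Minimal Mininet sketch (assumed defaults, not the exact scripts used in this thesis):
# build a single-switch topology, attach it to a remote SDN controller, and run the
# ping/iperf tests behind the RTT and throughput comparisons.
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topo import SingleSwitchTopo

# Eight hosts on one switch, mirroring the smallest single-topology scenario.
net = Mininet(topo=SingleSwitchTopo(k=8), controller=None)

# Placeholder address/port where OpenDayLight, Floodlight, or HPE VAN would listen.
net.addController('c0', controller=RemoteController, ip='192.168.1.10', port=6653)

net.start()
net.pingAll()   # reachability / round-trip time check between all host pairs
net.iperf()     # TCP throughput between the first and last hosts
net.stop()
```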

1.4 Justification of the study

At the time of this research, there was limited literature on the topic. Various companies, mostly

network vendors, cloud services providers, and network security organizations had white papers

on their websites about how they have deployed SDN in their respective products. The success

and critical evaluation of such deployments are not readily available to the research community.

For instance, in a large Wide Area Network (WAN), there is the controller placement problem,

where a network engineer does not know how many SDN Controllers are needed to support the network

and where they should be located in the network. The documentation of SDN Controllers is

difficult to understand and cannot be easily followed by a beginner to implement SDN. This

research will add to the growing body of knowledge of SDN deployment and evaluation of SDN

Controllers.

1.5 Significance of the study

The outcome of this research will simplify the concept and deployment of SDN, especially for

network engineers in small to medium-size organizations who are skeptical and uncertain about

SDN implementation but are seeking to improve their network infrastructure and save cost. The

study will make the selection of an SDN controller for network administrators easier since

selection would be based on the performance of each controller for different scenarios. The steady

growth of internet usage demands innovation in the control and data planes of networking

equipment for Service Providers. This study demonstrates how such innovation can solve network

monitoring, configuration, management, and guaranteed delivery of service to clients. Future

researchers can also refer to this study for further academic work on SDN.

1.6 Scope of the study

The research examines the historical and academic background of SDN, the architectural

difference between traditional networking devices and SDN-enabled devices. The research then

focuses extensively on SDN Controllers, the different types, categories, and classification of SDN

Controllers. Three SDN Controllers are selected, one centralized controller (Floodlight), a

distributed controller (OpenDayLight), and a proprietary controller (HPE VAN) to be implemented

as a centralized control mechanism for OpenFlow-enabled data plane devices. Analysis of how

the selected controllers handle the flows and topology of the simulated network would also be

done in this thesis.

1.7 Limitations of the study

SDN is a relatively new technology; as such, existing networking devices cannot be used for its

implementation. Nonetheless, existing technologies and network devices were tweaked to

implement SDN in this study. Server virtualization technology (VMware ESXi 6.7) was used to

support Virtual Machines (VMs) for the SDN Controllers, and Network Function Virtualization

(NFV) was used for the switches. The capabilities of the deployed systems were limited by the inherent

limitations of the physical server. Additionally, the physical server hard disk, RAM, and CPU

could not support the deployment of a large SDN infrastructure.

1.8 Organization of the study

Chapter One introduces the concept of SDN and why it is the future of networking. The problem

and aim of this study are also outlined in Chapter One. Chapter Two presents a literature review

in which the history of SDN, SDN architecture, types of SDN Controllers, comparison of SDN

Controllers, and the different types of protocols used in SDN are discussed in depth. Chapter

Three examines the methodology and various strategies used in solving the research aims. Chapter

Four focuses on the Design and Implementation of the selected SDN Controllers and how the data

plane devices will be managed. Chapter Five evaluates the research findings and analysis resulting

from the SDN Controller simulations. Chapter Six ends the thesis with a justification of the

research conclusions and suggestions for future work.

CHAPTER TWO

2 LITERATURE REVIEW

2.1 Introduction

This chapter examines the theoretical framework with regard to SDN. A literature review will be

conducted in the following selected areas:

• History of SDN Controllers

• SDN Architecture

• SDN Controllers

• Some previous research works conducted in comparing SDN Controllers and research gaps.

2.2 History of SDN Controllers

The term “Software-Defined Networking” was first used in an article (Greene, 2009) to describe

the OpenFlow protocol developed by Nick McKeown and his peers at Stanford University.

The history and development of SDN can be split into three stages where each stage has different

objectives and researchers contributing to its development (Feamster, Rexford, & Zegura, 2014).

These stages include;

• Active Networks

• Control and Data Plane separation

• OpenFlow Protocol and Network Operating Systems

Figure 2: Selected evolution of SDN with supporting technologies

2.2.1 Active Networking

Active networking is one of the early developments leading to SDN. The idea for an active network

was to have a programmable interface in the form of a network API that could be used to expose

the network node’s resources (Kreutz et al., 2015). The role of the network was extended from

transmitting packets to acting as a computation engine; in this case, packets can be modified, stored, or

redirected during transmission (Bhattacharjee, Calvert, & Zegura, 1997).

Two programming models were considered, namely the capsule model and the programmable

routers/switches model (Nunes, Mendonca, Nguyen, Obraczka, & Turletti, 2014).

The capsule model is based on the Active Node Transport System (ANTS) (Wetherall, Guttag, &

Tennenhouse, 1998), which works with three techniques, namely mobile code, load on request, and

caching. It allows dynamic deployment of new protocols to network nodes without the need for

physical nodes and running protocols to be synchronized. The concepts in its architectural design

include the following: network services are customized by capsules (a replacement for

traditional IP packets), routing functions are performed by Active nodes, which process capsules

while maintaining the soft-store, and finally routines are distributed dynamically to nodes (Achir,

Fonseca, & Doudane, 2000), (Goransson & Black, 2014).

The programmable routers/switches model consists of a programmable element which performs a

routing or switching function (Munir, 1997). In this way, existing routers can have additional

processing capabilities by adding a computation engine at router ports (Wolf & Turner, 2001).

Active networking provided three intellectual contributions to the development of SDN. These

include;

• The concept of programmable functions to lower hindrance to innovation.

• The architectural framework of active networks was a precursor to network virtualization. This

architecture had a shared operating system called NodeOS, which controlled shared resources,

an Execution Environment (EE) in the form of a virtual machine for processing packets, and

Active Applications (AA) hosted within the EE (Calvert, 1999), (AN Group, 1998).

• The concept of orchestrating middleboxes into a unified platform, which laid the foundation for

the development of Network Function Virtualization (NFV).

2.2.2 Control and Data Plane Separation

Control and Data plane separation was a reaction to the growth in backbone networks: network

equipment vendors deployed packet forwarding logic in hardware, separate from the control-plane

software.

Since Control and Data planes were now separated, there had to be a means for communication

between the two planes, and two innovations were developed for such purpose (Feamster et al.,

2014). These innovations were Open Interfaces and Logically Centralized Control of the network.

i. Open Interfaces

The Forwarding and Control Element Separation (ForCES) protocol (Yang, L., R. Dantu, T.

Anderson, 2004) was proposed by the Internet Engineering Task Force (IETF) as a standard for

communication between the planes.

There are two types of Network Elements (NE) in ForCES;

The Control Element (CE) is located at the control plane in the form of routing and signaling

protocols that provide control functionality.

The Forwarding Element (FE) is located at the forwarding (data) plane. These elements are either

Application-Specific Integrated Circuit (ASIC) or network processor devices responsible for

performing the operations on data packets.

ii. Logically Centralized Network Control

Centralized Control of the network in Internet Service Providers (ISPs) for programmability and

network-wide visibility was a driving factor for Control and Data plane separation.

Intelligent Route Service Control Point (IRSCP) (Van Der Merwe et al., 2006) enabled route

computation and selection to be performed outside routers by external network intelligence. IRSCP

is logically centralized and separated from the routers, and has control over the selection of routes in

a Multiprotocol Label Switching (MPLS) network. IRSCP can also be used for connectivity

management tasks such as blackholing of DDoS traffic, Virtual Private Network (VPN) gateway

selection, load balancing across multiple egress points, as well as moving traffic away from a router

going into planned maintenance.

The architecture of IRSCP comprises traditional network elements, routers, route-reflectors, and

the IRSCP with its associated functions. IRSCP uses Internal BGP (iBGP) to receive routes from

routers, computes the best routes, and sends the selected routes back to the routers.

Performing flexible route control for multiple administrative domains in MPLS or Generalized

Multi-Protocol Label Switching (GMPLS) networks relied on the Path Computation Element (PCE)-

Based Architecture (Farrel, Vasseur, & Ash, 2006). A Path Computation Element (PCE) is located

within the network or at a remote destination and computes network paths based on a network graph. Traffic

Engineering Database (TED) stores information about the resources and topology of the network

domain while a Path Computation Client (PCC) is a client application that requires path

computation from a PCE. Path computation in this architecture can be performed in inter-layer,

intra-domain, and inter-domain. A domain in this context refers to a group of network elements

falling under the same path computation responsibility or address management. These domains

include IGP (OSPF, EIGRP) areas, Autonomous Systems (AS), or a collection of ASs under an

ISP.

Route Control Platform (RCP) (Caesar, Caldwell, Feamster, & Rexford, 2005) was proposed in

2004 with support from telecom giant AT&T to solve the routing loop, convergence, and traffic

engineering problems associated with BGP and iBGP.

The RCP proposal was to have Autonomous Systems (AS) rely on an RCP Server to handle

interdomain routing. The iBGP routers would simply forward packets and outsource their routing

functions to the RCP server, which would make all computations and update the routing tables of the

iBGP routers (Feamster, Rexford, Balakrishnan, & Shaikh, 2004), (Caesar et al., 2005).
The SoftRouter Architecture (Lakshman, Nandagopal, Ramjee, Sabnani, & Woo, 2004) allowed

Control and forwarding plane devices to be dynamically associated with each other.

The Control plane functionalities are executed in servers called Control Elements (CEs) that are

multiple hops away from the Forwarding Elements (FEs). These elements interact with each other

using two protocols namely Dynamic Binding Protocol (Dyna-BIND) and ForCES (Ramjee et al.,

2006).

The Dynamic Binding Protocol (A. F. Ahmed & Lakshman, 2015) performs three tasks, namely

discovery, association, and operation. Discovery enables Forwarding Elements to broadcast their

existence and discover Control Elements in a SoftRouter network using the spanning tree protocol if

Ethernet services are supported. If the elements are heterogeneous, then a hop-by-hop or source-routed

routing layer over IP is used to route the packets of the discovery protocol.

Each FE is associated with a primary CE and a backup CE at the planning stage of the network by

the administrator. The association is done taking into consideration the load managed by the CE,

location, or distance and reliability of network links between the FE and CE.

Failures and errors in a SoftRouter network are detected and repaired using heartbeat

messages sent by the FE to the CE. When the path to a CE is no longer valid, the FE switches over

to a backup CE on its list.

Equipment vendors did not readily accept separating the Control and Data planes, so a clean-slate

architecture was explored to combat this inertia. The clean-slate approach was to redesign

networks (the internet) from scratch, offering improved performance and abstractions while

avoiding the complexity of existing systems using a set of new core principles (Girod et al., 2006),

(Feldmann, 2007).

To achieve this objective, the United States (US) National Science Foundation

(NSF) launched a “100x100 Clean Slate Project” in 2003 to stimulate innovative thinking in

conjunction with multiple research institutions and universities (Denieffe, Kavanagh, & Okello,

2016). There were five areas of importance identified for innovation, namely network

architecture, security, heterogeneous applications, heterogeneous physical-layer technologies as

well as economics and policy.

The project led to the development and adoption of the 4D Architecture, SANE, and Ethane as

discussed below.

The key principles of network-level objectives, network-wide views, and direct control of

networks caused the 4D architecture to restructure the network control plane. Network control

plane functionalities were organized into four components by the 4D architecture consisting of the

data, discovery, dissemination, and decision planes (Greenberg et al., 2005).

The Data plane provides services like Ethernet packet forwarding, IPv4, or IPv6 based on the state

of the decision plane. The flow-scheduling weights, forwarding table or forwarding information

base (FIB), queue-management parameters, network address translation mappings, and packet

filters are examples of the data plane state. The Data plane also has support for collecting

measurement data (Varghese & Estan, 2004).

The discovery plane's role is to discover the network's physical components (interfaces present on

the devices, their capacities, and connections to other devices) and represent them logically by

creating their logical identifiers. The persistence and scope of these identifiers are defined, and the

discovery plane also sets them to be automatically discovered and manages their relationships with

each other.

The dissemination plane serves as a bridge between the data plane and decision plane by providing

logical channels to relay control information.

The decision plane is the executive center in charge of network control such as load balancing,

network reachability, security, access control as well as the configuration of interfaces. The

operation of the decision plane is in real-time and it has a network-wide view of the topology,

traffic flowing through the network, and the capabilities of physical components (routers and

switches). The decision plane is made of multiple servers called decision elements seen as a single

entity at the data plane (which receives commands from the decision plane through the

dissemination plane). A system implementation such as Tesseract (Yan et al., 2007) can then be used

to control how the network should behave.

Another achievement of the clean-slate project was for enterprise network security in the form of

a Protection Architecture. The Secure Architecture for the Networked Enterprise (SANE) (Casado

et al., 2006) offers a single protection layer to control an enterprise network's connectivity. SANE

design goals were to support policies that do not depend on network topology or devices being

used, enforce those policies even at the link layer, and obfuscate network resources and services

from attackers. Policies are defined and executed from a central component instead of being

administered from several components or services like routers, switches, firewalls, authentication

services ( Kerberos and Active Directory), and Domain Name System.

A Domain Controller (DC) is a physical server, replicated at different locations, that acts as the brain

of a SANE network. The DC authenticates hosts and users, advertises services available on the

network, and grants access to those services.

The DC offers three key functionalities including; Authentication Service, Network Service

Directory (NSD), and Protection Layer Controller (Casado et al., 2006).

The Authentication Service maintains an encrypted channel to authenticate users, switches, and

hosts.

The Network Service Directory is a substitute for the Domain Name System in a SANE network. When system principals (users, groups) need to access a service, a lookup is made in the NSD (servers publish their services under unique names). The NSD holds an access control list (ACL) for each service that specifies the permissions granted to system principals; if a system principal appears on that ACL, it is granted access to the service.

The Protection Layer Controller manages SANE network connectivity by granting and revoking capabilities (routes from clients to servers). Capabilities are encrypted, and mechanisms such as anonymous socket connections and onion routing (Goldschlag, Reed, & Syverson, 1996) are used to limit the network's vulnerability and to hide its topology.

Ethane (Casado et al., 2007), a clean-slate network architecture, followed the lead of the 4D Architecture and SANE in designing a centralized control architecture for network administration. An Ethane network consists of Controllers, Ethane switches, and hosts. Ethane controls the network by allowing communication between end-hosts only with explicit permission. This is achieved through a centralized Controller that holds the global network policy used to determine what happens to packets. Each Ethane switch contains a flow table specifying how and where packets should be forwarded in the network. When a packet arrives at a switch, the flow table is consulted to determine what to do with it; if no flow (instruction) is found, the packet is forwarded to the Controller. The switches connect to the Controller over a secure channel.

The operational deployment of Ethane at Stanford University inspired the development of the original OpenFlow.

2.2.3 OpenFlow Protocol and Network Operating Systems

The Open Networking Foundation (ONF) was formed in 2011 to lead network transformation, taking over from where the clean-slate project stopped. ONF is an operator-led consortium with partners including Comcast, Turk Telekom, China Unicom, Google, AT&T, and Deutsche Telekom (Jammal, Singh, Shami, Asal, & Li, 2014). The organization offers open-source solutions for operators and is the de facto standards authority for SDN. The OpenFlow specification, its information model, and the functionalities of its components were first published in 2009, after OpenFlow was first implemented at Stanford University.

OpenFlow is focused on the Ethernet switch, which has an internal flow table with a standard interface for adding or removing flow entries. It was designed to enable researchers to run experimental protocols in a network. To achieve this goal, network vendors were encouraged to incorporate OpenFlow into their switching devices, which would subsequently be deployed in college campus wiring closets and backbone networks (Mckeown et al., 2008).

The OpenFlow protocol is implemented in a switch called an OpenFlow Switch. The idea for this switch is rooted in the fact that most vendors already equip their devices with flow tables, albeit running at different line-rates. OpenFlow exploits this common feature and creates a uniform way of programming flow tables across different vendors' devices (Mckeown et al., 2008).

A network that supports OpenFlow is based on three concepts: (1) one or more OpenFlow-enabled switches that act as the data plane devices; (2) a control plane managed by one or more remote OpenFlow-enabled Controllers; and (3) a secure control channel that connects the OpenFlow switches to the OpenFlow Controller (Braun & Menth, 2014), (W. Li, Meng, & Kwok, 2016), (Alsaeedi, Mohamad, & Al-Roubaiey, 2019).

Figure 3: Diagram of an OpenFlow Enabled Network

An OpenFlow switch uses the OpenFlow protocol to communicate with an external controller. The switch contains one or more flow tables and a group table. OpenFlow switches are classified into two types: OpenFlow-only switches, which support only OpenFlow pipeline operations, and OpenFlow-hybrid switches, which support both OpenFlow processing and traditional Ethernet switching functionality (Jammal et al., 2014).

Figure 4: OpenFlow Switch Components

2.2.3.1 Components of Flow Entries

A Flow Table contains flow entries that detail how packets are matched and processed. Flow refers

to packets that follow the same pattern.

Figure 5: Components of Flow Entries in a Flow Table

Flow entries contain different components, namely Match Fields, Priority, Counters, Instructions, Timeouts, Cookie, and Flags (Y. Li, Zhang, Taheri, & Li, 2018).

i. Match Fields comprise the ingress port, sections of the packet headers, and other pipeline fields such as metadata from previous steps. Matching verifies whether particular field values comply with a set of constraints referred to as a match. A match can be an exact match, where a particular value must equal the field; a bitwise match, where particular bit values in the field are matched; or a wildcard match, where there is no constraint on the matching field. There are also other types of matching that cannot be expressed directly and require indirect methods, among them set matches, range matches, inequality matches, and conjunctive matches (Pfaff, 2019).

ii. Priority determines how flow entries are sorted and which flow entry is processed before another; the flow entry with the highest priority takes precedence.

iii. Counters are updated upon successfully matching packets.

iv. Instructions modify the action set linked to the matched packet or change how it proceeds through the forwarding pipeline.

v. Timeouts specify how long the switch caches a flow entry. A timeout can be either an idle or a hard timeout. An idle timeout is the time in seconds after which a flow entry is removed from the flow table if it has not matched any packet, while a hard timeout is the maximum time in seconds after which a flow entry is removed from the flow table regardless of whether it has matched packets.

vi. A cookie is an opaque value chosen by the external Controller and used to filter flow entries in flow statistics, flow modification, and flow deletion requests. Cookies are not used when processing packets.

vii. Flags are used to change the management of flow entries.

Flow Tables are managed by the external Controller that can add, modify, or delete flow entries

reactively or proactively. Reactive Flow Entries are created by the controller after it dynamically

discovers devices on the network. The flow tables of OpenFlow switches in the network are

updated to establish end-to-end connectivity between the discovered devices. Proactive Flow

Entries are created by the Controller before the devices are connected to the network or before

they generate network traffic (Open Networking Foundation, 2015).
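To make the proactive case concrete, the short Python sketch below installs a flow entry into an Open vSwitch bridge before any matching traffic exists, by shelling out to the standard ovs-ofctl utility. The bridge name s1, the ports, and the destination address are illustrative assumptions rather than values taken from this study.

    import subprocess

    def add_proactive_flow(bridge="s1"):
        # Install a flow entry before any matching traffic arrives (proactive mode).
        # The match fields and output port below are purely illustrative.
        flow = ("priority=100,"                  # higher-priority entries match first
                "idle_timeout=30,"               # removed after 30 s without a match
                "hard_timeout=300,"              # removed after 300 s regardless of matches
                "in_port=1,ip,nw_dst=10.0.0.2,"  # match fields
                "actions=output:2")              # instruction: forward out of port 2
        subprocess.run(["ovs-ofctl", "-O", "OpenFlow13", "add-flow", bridge, flow],
                       check=True)

    def dump_flows(bridge="s1"):
        # Print the flow table so the installed entry and its counters can be inspected.
        subprocess.run(["ovs-ofctl", "-O", "OpenFlow13", "dump-flows", bridge], check=True)

    if __name__ == "__main__":
        add_proactive_flow()
        dump_flows()

A reactive entry, by contrast, would only be written after the controller has processed a Packet-In for the first unmatched packet.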

2.2.3.2 OpenFlow Pipeline Processing

An OpenFlow switch contains at least one flow table, and possibly several flow tables that form a pipeline. OpenFlow pipeline processing specifies how packets interact with the flow entries in these tables, allowing a packet to be directed through the pipeline until a matching rule is found, or forwarded to the Controller on a table-miss (Wu, Jiang, & Yang, 2016).

Figure 6: Packet flow through a processing pipeline

When a host in the network wants to communicate with another host, it sends packets to the switch ingress port. Initially these packets are not matched because there is no flow entry for them; this is termed a table-miss. In this case, the switch forwards the packets to the Controller as a Packet-In message. The Packet-In message encapsulates the original packets from the host and is referenced by the Controller using a Buffer ID; the Controller then decides what happens to the packet. The Controller can send a Packet-Out message instructing the switch what to do with the packet (for example, to forward it out of a specific port or range of ports). Alternatively, the Controller can send a Flow Modification (Flow-Mod) message instructing the switch to install a new flow into its flow table(s). After this initial process, when further packets from hosts arrive, they traverse the flow tables from the first (table 0) through to table n to find matches that trigger the instruction set to execute.

When a packet first arrives, a metadata set consisting of an Action List, an Action Set, or both is created for it. Actions are the operations to perform on the packet, such as dropping it, forwarding it to a particular port, or modifying its header. Action Lists and Action Sets differ in their time of execution: actions in a list are executed immediately after leaving the current flow table, while actions in a set are accumulated and executed together after the packet has been processed by all the flow tables (van Asten, van Adrichem, & Kuipers, 2014). An action bucket is a collection of actions selected as a bundle for packet processing; action buckets are contained in a group (group table), which determines the mechanism for choosing which action bucket is applied to a packet (Izard, 2018).
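To illustrate the table-miss workflow from the controller's side, the sketch below uses the Python-based Ryu controller (mentioned earlier in this chapter, though not one of the three controllers evaluated in this thesis). It installs a table-miss entry that sends unmatched packets to the controller and answers each resulting Packet-In with a simple flooding Packet-Out; the class and variable names are illustrative.

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class TableMissDemo(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def on_switch_features(self, ev):
            dp = ev.msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser
            # Table-miss entry: match everything at priority 0 and send it to the controller.
            match = parser.OFPMatch()
            actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER, ofp.OFPCML_NO_BUFFER)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                          match=match, instructions=inst))

        @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
        def on_packet_in(self, ev):
            msg = ev.msg
            dp = msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser
            in_port = msg.match['in_port']
            # Trivial decision: flood the unmatched packet. A real application would
            # learn addresses and push a Flow-Mod so later packets match in the switch.
            actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
            data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
            dp.send_msg(parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                            in_port=in_port, actions=actions, data=data))

Run under ryu-manager with Mininet switches pointed at the controller, this reproduces the Packet-In / Packet-Out / Flow-Mod exchange described above.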

2.2.3.3 Communication Between OpenFlow Switch and Controller

A requirement for an OpenFlow network is the presence of a secure channel that ensures connectivity between OpenFlow switches and external OpenFlow Controllers. An OpenFlow connection carries messages between switches and Controllers; a message is either a control command, a request, a reply, or a status event. The interface that connects the switch to the controller is called an OpenFlow channel. This channel allows the controller to send control messages to the switch for operational purposes such as event notification, switch configuration, and management. The channel uses Transport Layer Security (TLS) to encrypt communication, thereby mitigating security risks. Where a switch is managed by more than one controller, a separate OpenFlow channel is set up for each, and the aggregation of these channels is called the Control Channel (Z. Li, Li, Zhao, & Xiong, 2014), (Y. Li et al., 2018), (Liyanage, Ylianttila, & Gurtov, 2014).

The OpenFlow protocol defines three types of messages, with their subtypes, exchanged between the controller and the switch. All messages include a common header that encapsulates the protocol version, message type, message length (in bytes), and transaction ID (XID). The three types are controller-to-switch, asynchronous, and symmetric messages.

Controller-to-switch messages are used to manage the switch directly or to inspect its state. This message type is initiated by the controller and may not necessarily require a response from the switch. The subtypes of controller-to-switch messages include Features, Configuration, Modify-State, Read-State, Packet-out, Barrier, and Role-Request.

Features are requested by the controller to obtain the capabilities of a switch; the switch is required to respond to this request. Using Configuration messages, the configuration parameters of the switch can be set and queried by the controller. Modify-State messages from the controller manage the state of a switch by adding, deleting, or changing flow or group table entries, or by setting switch port priorities. Read-State messages gather information about the switch's current statistics and configuration for the controller. Packet-out messages enable the controller to specify the action to apply to a packet and the switch port out of which the packet is sent. Barrier messages are sent to verify the completion of previous operations; the switch responds with a Barrier reply once those operations have completed. Role-Request messages set or query the role of the controller in the OpenFlow channel (Siamak Azodolmolky, 2013).

Asynchronous messages are initiated by the switch to update the controller about network events and changes in the switch state. There are four subtypes of asynchronous messages, namely Packet-in, Flow-Removed, Port-status, and Error messages. Packet-in messages occur when there is a table-miss in the switch flow table; the switch forwards the unmatched packets to the controller for processing. Flow-Removed messages update the controller about flows that have been flushed from the switch flow table due to either an idle or a hard timeout. Port-status messages update the controller about changes to the switch ports, and Error messages are sent when the switch encounters an error.

Symmetric messages are initiated without solicitation by either the switch or the controller; they include Hello, Echo, and Vendor messages. Hello messages are exchanged when the switch and controller connect. Echo messages (Echo request/reply) are used to verify the liveness of a controller-switch connection (acting as heartbeats) or to measure latency and throughput. Vendor messages, also known as Experimenter messages, allow switches to offer additional functionality for future revisions of OpenFlow (Siamak Azodolmolky, 2013).
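All of these message types share the fixed eight-byte OpenFlow header described above, carrying the protocol version, message type, message length, and transaction ID in network byte order. The short Python sketch below decodes such a header; the sample bytes are fabricated purely for illustration.

    import struct

    def parse_openflow_header(raw):
        # Decode the common 8-byte OpenFlow header: version, type, length (bytes), xid.
        version, msg_type, length, xid = struct.unpack('!BBHI', raw[:8])
        return {'version': version, 'type': msg_type, 'length': length, 'xid': xid}

    # Example: an OpenFlow 1.3 Hello message (version 0x04, type 0, length 8, xid 42).
    sample = struct.pack('!BBHI', 0x04, 0, 8, 42)
    print(parse_openflow_header(sample))   # {'version': 4, 'type': 0, 'length': 8, 'xid': 42}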

2.3 SDN Architecture

The SDN architecture details how a network or computing system can be implemented using a combination of open, software-based technologies and inexpensive commodity networking hardware that facilitates the separation of the SDN control and data planes of the networking stack (sdx central, 2017). At a high level, the architecture defines the reference points and interfaces to the SDN controller and allows the controller to manage a variety of data plane resources (Open Networking Foundation, 2014).

Figure 7: Diagram of the SDN Architecture

The SDN architecture, as shown in Figure 7, is divided into the Application Layer, the Control Layer, and the Infrastructure Layer.

2.3.1 Infrastructure Layer

The network infrastructure at the data plane is made up of networking elements such as switches and routers. These devices forward network traffic based on forwarding rules set by the control plane (Bakhshi, 2017). This plane differs from the traditional data plane in how the control logic is implemented: the control logic is located at a remote controller. Communication between the data plane and the controller is established using open, standard interfaces called Southbound Interfaces, such as OpenFlow.

SDN as a technology is well integrated into virtualized networks and data centers, so software-based data plane devices are in high demand. Examples of software switches that support SDN include Open vSwitch (Pfaff et al., 2009) from the open-source community, XorPlus from Pica8, ofsoftswitch13 (Bonelli, Procissi, Sanvito, & Bifulco, 2017) from Ericsson, Switch Light from Big Switch, and Pantou for the OpenWRT wireless environment from Stanford University. Software routers for SDN include contrail-vrouter from Juniper Networks, OpenFlowClick developed by Yogesh Mundada, and the ZXR10 V6000 vRouter from ZTE.

There are also hardware switches and routers that support SDN. Hardware switches include; Arista

7150 series, BlackDiamond X8, NoviSwitch 1248, RackSwitch G8264, Pica8 3920 and Plexxi

Switch. Hardware routers include; Huawei CX600 series, Brocade MLX series and Cisco ASR

9000 Series.

2.3.2 Control Plane

The control plane is a logically centralized plane where the network intelligence is located; it has a global view of the networks under its administration and can reconfigure them dynamically (Cox et al., 2017). This plane is composed of the Northbound Interface, the Southbound Interface, and the Network Operating System (SDN Controller). Popular SDN Controllers include Floodlight, Trema, Maestro, ONOS, POX, Onix, NOX, and Ryu. Controllers are responsible for base network service functionalities such as topology management, statistics management, host tracking, switch management, traffic redirection, link discovery, routing, message filtering, and performance monitoring.

2.3.2.1 Northbound Interface

The Northbound Interface (API) is used to push code (custom network applications) developed by programmers for business needs through the control plane down to the data plane (Raju, 2018). This interface can be used to customize network control with popular programming languages such as Java, Python, or Ruby. This is possible because the Northbound API exposes the network abstraction data model as well as the functionality of the control plane for use by network applications (Tuncer, Charalambides, Tangari, & Pavlou, 2018). Northbound APIs fall into three categories, namely Representational State Transfer (REST) APIs, network (domain-specific) programming languages, and SDN-Controller-specific APIs (Bernini & Caba, 2015).

REST APIs (Zhou, Li, Luo, & Chou, 2014), (Bakhshi, 2017) follow a client-server architecture in which the server (the control plane) is stateless and the client (the application plane program) keeps track of its own session state. RESTful APIs use the existing HTTP methods GET, POST, PUT, and DELETE. GET (read) is used to retrieve a resource, for example the status of a switch flow table; PUT (insert) to modify or update a resource; POST (write) to create a new one, such as a new flow table entry; and DELETE (remove) to erase it.
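As a concrete illustration, the Python sketch below issues a REST GET against a Floodlight controller to list the switches it manages. The port (8080) and endpoint path follow Floodlight's commonly documented defaults, and the controller address matches the Floodlight virtual machine used later in this study; both should be read as assumptions rather than a definitive API reference.

    import requests

    CONTROLLER = "http://10.77.66.62:8080"   # Floodlight VM address, used as an example

    def list_switches():
        # GET (read): retrieve the switches currently connected to the controller.
        resp = requests.get(CONTROLLER + "/wm/core/controller/switches/json", timeout=5)
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        for switch in list_switches():
            # Each entry is a JSON object describing one datapath (DPID, address, and so on).
            print(switch)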

Another Northbound API implementation is programming languages, which are classified into low-level, API-based, and domain-specific programming languages (Trois, Del Fabro, de Bona, & Martinello, 2016). Low-level programming allows a general-purpose language (GPL) such as Java, C, Python, or a shell script to program network devices directly through the Control-Data-Plane Interface (CDPI). API-based programming uses the controller API to program network behavior (Agg, Johanyák, & Szilveszter, 2016). Domain-specific programming languages offer higher-level abstractions on top of the controller APIs; these languages include Frenetic (Foster et al., 2010), Procera (Voellmy, Kim, & Feamster, 2012), Nettle (Voellmy & Hudak, 2011), Pyretic (Reich, Monsanto, Foster, Rexford, & Walker, 2013), NetKAT (Anderson et al., 2014), and NetCore (Monsanto, Foster, Harrison, & Walker, 2012).

Many SDN Controllers have also defined their own Northbound APIs customized to suit specific needs. OpenDayLight and Floodlight use customized RESTful Northbound APIs for rule programming, query handling, and network state reporting (Chua, 2012).

2.3.2.2 Southbound Interface

The Southbound Interface enables communication between the control plane and the data plane, ensuring that end devices have the appropriate configuration and flow entries. OpenFlow (Kreutz et al., 2015) is the dominant Southbound API and is supported on a wide variety of networking equipment. Other Southbound APIs are either OpenFlow-dependent, such as OVSDB, POF, OpenState, HAL, and OF-Config, or OpenFlow-independent, such as NETCONF, OpFlex, and ForCES.

• Open vSwitch Database Management Protocol (OVSDB) (Pfaff & Davie, 2013) is designed

as an OpenFlow configuration protocol to manage Open vSwitch implementations.

• Protocol-Oblivious Forwarding (POF) (Song, 2013) is a proposal from Huawei Technologies

to support protocol-independent data planes by using a generic flow instruction set (FIS).

• OpenState extends OpenFlow match-actions by using an Extended Finite State Machine (XFSM). The XFSM enables OpenState to perform several stateful tasks inside forwarding devices while avoiding control plane communication overhead.

• Hardware Abstraction Layer (HAL) (Ogrodowczyk et al., 2014) provides OpenFlow support for legacy network devices.

• OpenFlow Management and Configuration Protocol (OF-Config) (Čejka & Krejčí, 2016) focuses on the functions necessary to configure an OpenFlow datapath that is controlled using the OpenFlow protocol. OF-Config is used to configure ports (i.e. to shut a port down or bring it up) and to set up tunneling such as VXLAN or IP-in-GRE (Bansal, Bailey, Dietz, & Shaikh, 2013).

2.3.3 Application Layer

The application plane contains applications that have exclusive control of a group of network resources exposed through the Northbound API of one or more SDN controllers. Different applications can work collaboratively, or override each other, to achieve specific network goals set by a network administrator. An SDN application can also perform the role of an SDN controller under certain defined scenarios (Open Networking Foundation, 2014).

Network vendors have taken advantage of the usefulness of the application plane and developed

customized applications for their products. Brocade has developed a Flow Optimizer, Virtual

router and Network advisor (Arora, 2017). Hewlett Packard Enterprise (HPE) has gone a step

further by having SDN App store where network administrators can download applications to

manage their SDN network. Examples of these applications are HPE Network Optimizer, Network

visualizer, TechM server load balancer, Network protector, TechM smart flow steering and NEC

UNC for HP SDN VAN Controller (Hewlett Packard Enterprise, 2016).

2.4 SDN Controllers

An SDN Controller is an operating system for networks, with the aim of providing abstractions and a common API to developers. It also provides essential services and functionalities such as device discovery, topology information, and distribution of network configuration (Kreutz et al., 2015). SDN Controllers are located in the control plane of the SDN architecture and forward policies from the application layer to the network elements in the data plane.

There are several SDN controllers that can be installed on either a physical or a virtual server. Examples are Beacon, Maestro, Trema, NOX, Floodlight, Contrail, Ryu, etc. (Benzekki, El Fergougui, & Elbelrhiti Elalaoui, 2016), (D. & Gray, 2013). These controllers are classified as either centralized controllers, where one control entity manages an entire network, or distributed controllers, where the network is partitioned into different areas for management (Blial, Ben Mamoun, & Benaini, 2016), (Paliwal, Shrimankar, & Tembhurne, 2018).

2.4.1 Centralized Controllers

Centralized controllers use a single control plane to manage an entire network. This category of controllers is either physically centralized or logically centralized. Physically centralized controllers are installed on a single physical server tasked with managing an entire network. The advantages of a physically centralized controller are simplicity and ease of management, since there is only one controller; this type of controller also uses multithreading techniques. However, scalability, especially in enterprise networks that span different geographical areas, is not feasible with this approach (Abuarqoub, 2020). Examples of physically centralized controllers are Beacon, Floodlight, Rosemary, POX, Maestro, Ryu, NOX-MT, and Meridian.

A logically centralized controller uses multiple physical servers for its operation; each controller instance is assigned a dedicated task on the network but replicates a shared network state through a shared centralized data store (Tadros, Mokhtar, & Rizk, 2018). The architecture of logically centralized control hides the layout of the physical servers: they are presented to the data plane as one entity (the control plane) (Blial et al., 2016). Examples of this type of controller are Ryu, Onix, SMaRtLight, HyperFlow, OpenDayLight, POX, ONOS, and Espresso.

2.4.2 Distributed Controllers

Distributed controllers function as a distributed control plane to manage a network; the network is divided into multiple domains, and each domain is managed by its own controller (Chaudhry, Bulut, & Yuksel, 2019), (H. G. Ahmed & Ramalakshmi, 2018). There are two variants of distributed controllers: flat and hierarchical designs. A flat design divides a network into different domains, with each domain assigned its own controller; controllers in a flat design use an east-west interface to communicate with each other in order to maintain a global view of the network. A hierarchical design employs a two-layer controller concept: first, a domain controller that handles switches and runs applications in its local domain, and second, a root controller that maintains the global network view while also managing the domain controllers (Hu, Guo, Baker, & Lan, 2017). Examples of distributed controllers are OpenDayLight, HPE VAN, Onix, HyperFlow, Kandoo, and DISCO.

2.5 Feature Based Comparison of SDN Controllers

SDN Controllers have different properties and features that make them better suited to one deployment than another. The controllers are compared on the basis of architecture, the programming language used to develop them, the supported northbound and southbound interfaces, and the controller developers or partnerships.

Table 1: Feature Comparison of SDN Controllers

Controller | Architecture | Language | Northbound API | Southbound API | Partners
Beacon | Physically Centralized | Java | Ad-hoc | OpenFlow 1.0 | Stanford
NOX | Physically Centralized | C++ | Ad-hoc | OpenFlow 1.0 | Nicira
Floodlight | Physically Centralized | Java | REST, Java RPC, Quantum | OpenFlow 1.0, 1.3 | BigSwitch
Ryu | Physically Centralized | Python | REST | OpenFlow 1.0, 1.5 | NTT, OSRG
Maestro | Physically Centralized | Java | Ad-hoc | OpenFlow 1.0 | Rice University
POX | Physically and Logically Centralized | Python | Ad-hoc | OpenFlow 1.0 | Nicira
OpenDayLight | Logically Distributed, Flat Architecture | Java | REST, XMPP, RESTCONF, NETCONF | OpenFlow 1.0, 1.3 | Linux Foundation
HPE VAN | Logically Distributed, Flat Architecture | Java | RESTful API | OpenFlow 1.0 | HP
ONOS | Logically Distributed, Flat Architecture | Java | RESTful API | OpenFlow 1.0 | Linux Foundation
ONIX | Logically Distributed, Flat Architecture | Python, C | NVP NBAPI | OpenFlow 1.0 | Nicira
HyperFlow | Logically Distributed, Flat Architecture | C++ | ------------- | OpenFlow 1.0 | University of Toronto
Kandoo | Logically Distributed, Hierarchical Architecture | Python, C, C++ | ------------- | OpenFlow 1.0 | University of Toronto

2.6 Related Works

This section gives a review of relevant past studies on comparing SDN controllers. The most common metrics used in these papers to evaluate controllers are throughput, latency, and round-trip time.

The authors of (Shah, Faiz, Farooq, Shafi, & Mehdi, 2013) examined four OpenFlow controllers, namely Maestro, NOX, Floodlight, and Beacon, all of which are open-source controllers. They were compared on architectural design, packet batching, task batching, and multi-core support. Static switch partitioning and static batching were used in the architectural designs of Floodlight, NOX-MT, and Beacon, whereas Maestro used adaptive batching and a shared queue. The performance evaluation showed Beacon to have the best scalability when varying the number of switches from 16 to 64. NOX-MT had the second-best scalability, followed by Maestro, with Floodlight being the least scalable controller.

The work done in (Khondoker, Zaalouk, Marx, & Bayarou, 2014) evaluated five controllers, namely OpenDayLight, POX, Trema, Ryu, and Floodlight. The authors used the Analytic Hierarchy Process (AHP) to determine how a controller should be chosen, since selecting a suitable controller is a Multi-Criteria Decision Making (MCDM) problem in which several different properties of a controller matter to operators. The bases of comparison in this study were support for virtual switching, Transport Layer Security (TLS), REST API, OpenFlow, the availability of a Graphical User Interface (GUI), and support for OpenStack networking. The results showed that Ryu had the best priority vector value (0.287), while Floodlight and OpenDayLight had priority vector values of 0.275 and 0.268 respectively. Trema and POX had lower priority vector values of 0.211 and 0.146 respectively.

The performance of seven selected controllers (Floodlight, NOX, Beacon, POX, Maestro, Ryu, and Mul) was evaluated in (Shalimov, Zuikov, Zimarina, Pashkov, & Smeliansky, 2013). The performance metrics of latency, throughput, and scalability were evaluated using Cbench, while hcprobe was used to conduct reliability and security tests. The average throughput with different numbers of hosts, switches, and threads showed that Beacon had the maximum throughput. POX was second, but its throughput dropped significantly when the number of switches reached 256, and Ryu had the lowest throughput. The Mul and Beacon controllers demonstrated the smallest latency, while the Python-based controller POX had the largest. For controller reliability, all the controllers coped with the test load except Mul and Maestro. Five tests were performed for security measurements: invalid OpenFlow version, incorrect message length, incorrect OpenFlow message type, malformed port status, and malformed Packet-In message. Ryu produced the best results.

Simulation and emulation of SDN were carried out in (Al-Somaidai, 2014) with four different platforms: NS-3, Mininet, Trema, and EstiNet. The research also discussed different switch software and tools, including a comparison among the different versions of OpenFlow. The controllers used in that study were Floodlight, OpenDayLight, NOX, Mul, POX, Beacon, and Ryu. The study observed that OpenDayLight and Floodlight were flexible and had good documentation.

2.7 Chapter Summary

In this chapter, the history of SDN controllers was discussed, starting from the days of Active Networking through to the development of the OpenFlow standard. The OpenFlow protocol uses a pipeline of flow tables to process matching criteria specified by a controller; any packet that does not meet the criteria results in a table-miss and is sent to the controller, which determines its fate.

The SDN architecture was also discussed and its three layers were identified as the infrastructure (data) plane, the control plane, and the application plane. Network elements such as routers and switches that support SDN were listed as data plane devices. The role of the control plane was discussed and its components identified as the Network Operating System (SDN controller), the northbound interface, and the southbound interface. The application plane hosts a variety of specific network applications developed either by network vendors or by in-house development teams.

This chapter further surveyed the literature on SDN controllers and named some popular controllers such as OpenDayLight, Floodlight, HPE VAN, Maestro, and Beacon, among others. These controllers fall into different categories: they are either centralized or distributed controllers. A centralized controller uses one single physical server to serve the network. A controller could also be logically centralized, where a set of distributed physical servers acts as one entity to serve a network. Distributed controllers, further divided into flat and hierarchical architectures, are installed on multiple physical servers with each controller given a set of management responsibilities for certain segments of a network. A feature-based comparison of controllers was also presented.

Finally, the research analyzed related works in implementing and comparing SDN controller performance. In the next chapter, the methods used to conduct this research work are discussed.
CHAPTER THREE

3 RESEARCH METHODOLOGY

3.1 Introduction

This chapter discusses the research methodology used and why the selected method was best suited for this study. Research methodology describes the systematic steps taken to resolve a research problem, in this case the performance differences among SDN controllers. The methodology can be scientific, interpretive, or, more recently, design science research. Scientific research includes laboratory experiments, field experiments, surveys, and simulation or emulation research.

3.2 Research Method

SDN is a fairly new technology; therefore, examining its implementation and controller performance would require a huge investment in network equipment that supports OpenFlow. Due to this limitation, this study used emulation to create nodes and connect them to a controller. Through emulation, performance data such as controller throughput, latency, and scalability were collected for analysis and discussion. Moreover, the emulation for each performance metric was iterated five times and an average derived. Details of the emulation software, the selected controllers, and the tools used to measure controller performance are provided below.

3.3 Proposed Controllers

Three SDN controllers were selected for this study: OpenDayLight, Floodlight, and HPE VAN. A technical overview of each controller is provided below.

3.3.1 OpenDayLight

OpenDayLight (ODL) is a modular open-source controller used to automate and customize networks of any scale and size. As an open-source project, ODL is driven by the collaborative effort of network equipment vendors, researchers, and user organizations to continually improve its features. It is written in Java and can therefore be deployed on any operating system (Windows, Linux, etc.) and commodity hardware that supports Java. For this reason it is the most commonly deployed SDN controller platform and is integrated into the core of open-source frameworks like OpenStack. The Open Network Automation Platform (ONAP) and the Open Platform for NFV (OPNFV) use it as a service for network automation, orchestration, and management.

ODL has had several versions since the first release, Hydrogen, in February 2014. This study used the Lithium-SR4 release because it is stable and has good documentation. Like other versions of ODL, Lithium is presented as a modular platform allowing modules to reuse services and interfaces. The core of this modular implementation is the Model-Driven Service Abstraction Layer (MD-SAL). MD-SAL presents the underlying network elements and applications as models and objects and processes their interactions within the SAL. The role played by MD-SAL in user interaction with network elements makes ODL act as a Model-View-Control platform. This internal representation is divided into three interconnected parts, namely Model, View, and Control. The data model used is YANG (accessed through remote procedure calls), which describes nodes and how they interact. Views of resources are displayed using the northbound interface, REST API, or RESTCONF. Control is implemented using Java code to handle notifications and data changes.
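As an illustration of this Model-View-Control arrangement, the sketch below queries the controller's operational topology over RESTCONF with Python. The URL, port 8181, and the admin/admin credentials follow commonly documented OpenDayLight defaults and are assumptions, not a record of the exact configuration used in this study.

    import requests
    from requests.auth import HTTPBasicAuth

    ODL = "http://10.77.66.61:8181"          # OpenDayLight VM address used in this testbed
    AUTH = HTTPBasicAuth("admin", "admin")   # default credentials (assumption)

    def get_operational_topology():
        # Fetch the operational network topology exposed by MD-SAL through RESTCONF.
        url = ODL + "/restconf/operational/network-topology:network-topology"
        resp = requests.get(url, auth=AUTH, headers={"Accept": "application/json"}, timeout=10)
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        topology = get_operational_topology()
        for topo in topology.get("network-topology", {}).get("topology", []):
            print(topo.get("topology-id"), "nodes:", len(topo.get("node", [])))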

OpenDayLight Lithium relied on the following technologies before it was successfully set up for

use.

i. Maven: a software tool for managing and building Java projects. ODL Lithium uses Maven

for scripting bundle dependencies and to specify bundles to start.

ii. Java: a programming language used to develop ODL features and applications.

iii. Open Service Gateway Interface (OSGi): is a Java framework used in the back-end of

OpenDayLight allowing bundles and JAR packages to load dynamically and binding modules

for information exchange.

iv. Karaf: is an OSGi based runtime used to load different modules. The controller is started by

running Karaf. Karaf was also used to install relevant features.

The emulation of ODL Lithium required some features to be installed. These were:

i. DLUX: a web-based user interface for OpenDayLight. This web interface presents an MD-

SAL flow viewer for viewing OpenFlow entries in the network. It also has a network topology

visualizer that displays graphically how network elements are connected. There is also a

toolbox and YANG model to respond to queries while visualizing the YANG tree.

ii. OpenFlow plugin: This provided support for connecting ODL to OpenFlow-enabled network

devices for discovery and control. This feature has support for OpenFlow versions 1.0 and

1.3.

iii. Layer2 Switch: This feature provided layer2 (Ethernet) forwarding in emulated OpenFlow

switches and tracked connected hosts.

3.3.2 Floodlight

Floodlight is an open-source SDN controller under the Apache license, written in Java, with its foundation built on the Beacon SDN controller. It is supported by Big Switch Networks, and its architectural design is modeled after the Big Network Controller (BNC), a commercial controller from the same company. Commercial applications developed for Floodlight by programmers can be certified and qualified to work with BNC.

Floodlight has a modular design. The modules used in this study were the topology module, the Web Graphical User Interface (Web GUI), the learning switch, the device management module, and link discovery. The topology module has a Topology Service that computes network topologies from link information learned through the Link Discovery Service. The Topology Service keeps track of OpenFlow islands, an island being a group of connected OpenFlow switches managed by a single instance of Floodlight. The Web GUI allows users to view the connected switches, devices, and hosts on the network, the flows installed in switch flow tables, controller state information, and the overall network topology. The Learning Switch module detects new devices by learning their MAC addresses; it also identifies input and output switches when the controller detects a new flow. The Device Management module uses Packet-In requests to detect and track devices and to specify a destination device for a new flow. It then classifies devices according to an entity classifier, which by default uses the MAC address and VLAN. The Link Discovery Service discovers and maintains the status of links in a network managed by Floodlight.

Floodlight has had six releases since the first stable version (v0.85) came out in 2012, including versions 0.85, 0.90, 0.91, 1.1, and 1.2. The version of Floodlight used in this study was 1.1 because it was the last stable version. Floodlight v1.1 depended on the following technologies: the Java Runtime Environment, Apache Ant, the Python development package, the Eclipse Integrated Development Environment (IDE), OpenFlow, and the REST API.

• Java Development Kit (JDK8) was needed since Floodlight is a java-based platform and had

to run on standard JDK tools. The controller engine uses a Java library produced by Loxigen

also referred to as OpenFlowJ-Loxi.

• Apache Ant, a Java library as well as a command-line tool was used to build floodlight files

as targets.

• Python Development Package was installed to resolve floodlight dependencies issues.

• Eclipse IDE was installed to resolve dependencies for the controller.

• OpenFlow version 1.3, a southbound interface was used to connect to the OpenFlow network.

• REST API, a northbound interface, was used to query the controller for information to be

displayed on the Web GUI.

3.3.3 HPE VAN

The HPE Virtual Application Networks (VAN) SDN Controller is a Java-based proprietary controller marketed by Hewlett Packard Enterprise. The controller is supported and maintained by Aruba Networks, a subsidiary of HPE; because of the support and feature improvements received from Aruba Networks, it is often referred to as the Aruba SDN controller.

The controller was first released in 2012 followed by an SDN Developer Kit (SDK) and SDN App

Store in 2013. The Developer Kit equips developers with the necessary tools for creating, testing,

and validating SDN applications. By using this toolkit, developers can leverage HP’s SDN

infrastructure with its full complement of support services. HP certifies applications developed

with SDK and deploys them onto the SDN App store. HP SDN App Store can be used by customers

for searching, purchasing, and directly downloading SDN applications onto their deployment of

HPE VAN controller.

The controller has core network service applications installed as modules. This study used the following modules: OpenFlow Link Discovery, OpenFlow Node Discovery, Topology Manager, Topology Viewer, Client Mapper Service, and the Web GUI. OpenFlow Link Discovery implements the "com.hp.sdn.supplier.LinkSuppliersBroker" interface and uses the Link Supplier Service and Link Service APIs to create and maintain link information for the data paths registered with the controller. It also pushes flow-mods to capture discovery packets, injects discovery packets into ports on data paths, and discovers links by listening for PACKET_IN messages. OpenFlow Node Discovery creates and maintains node information for data paths; it pushes flow-mods to devices so that ARP or DHCP packets are copied to the controller. The Topology Manager and Topology Viewer are used for data collection and for displaying the OpenFlow network setup. The Client Mapper Service holds information on the network clients' host MAC addresses, IP addresses, and connected data paths and ports; it also contains relevant information about device locations on the network. The GUI uses the REST API to fetch data from the network and presents it in a readable form. The Web GUI displays the topology of discovered switches and hosts, data flow details, alerts, and logs; it also permits adding, enabling, and removing applications.

The last stable version of the HPE VAN controller, 2.8.8, was used in this study. The controller depended on the following technologies: Java, OSGi (the Equinox framework and Virgo stack), and Apache Cassandra.

3.4 Emulation Environment

This research required an OpenFlow network to be set up. The first option available was to purchase physical OpenFlow-supported switches and servers. The second, more cost-effective option was to purchase a physical server with good specifications (hard disk, RAM, processor, etc.) and virtualize the entire OpenFlow network setup. A hypervisor (VMware ESXi 6.7) was installed on a physical server (Dell PowerEdge R730, CPU: 6 x Intel(R) Xeon(R) E5-2620 @ 2.0 GHz, RAM: 48 GB, HDD: 2 TB, NIC: 4 ports). After the hypervisor was installed, four Linux (Ubuntu 18) virtual machines were created to host the three SDN controllers and an emulator (Mininet) to generate OpenFlow traffic. Details of these technologies are given below.

3.4.1 VMware ESXi 6.7

VMware ESXi is a type-1, enterprise-class hypervisor product from VMware for deploying and managing virtual computers. As a type-1 hypervisor, it is installed directly onto a hardware server (bare metal) and has its own kernel and other vital OS components.

The hypervisor offered these functionalities relevant to the study: virtual machine management, storage management, and networking services. Virtual machine management allowed a new virtual machine to be created and given a unique name, storage location, memory allocation, network adapters, and USB controllers. Storage management displays the storage devices attached to the hypervisor and their capacities, with the option to add more storage. It also displays the datastores and provides a mechanism for browsing the files that make up a virtual machine, such as the vmdk file (which can be copied to back up the VM elsewhere). The networking service was used to determine how the Network Interface Cards (NICs) of the physical server were to be assigned. Three of the four network cards were used: the first card served as a management interface for administering the hypervisor, and the other two were designated as the public and private interfaces. Each VM was given both a public and a private interface; the public interface connected the VM to the internet to download the packages needed for the study, while the private interface acted as a Local Area Network for the emulation.

ESXi has several versions, but version 6.7 was used in this study because it offers an easy-to-use GUI presented in a web browser. Previous versions were more complex to manage since a vSphere Client had to be installed on a dedicated management computer. The version used in this study offered flexibility because the hypervisor could be reached from any computer with a web browser.

3.4.2 Ubuntu

Ubuntu is a free and open-source, Debian-based Linux distribution. There are three officially released editions: Server, Desktop, and Core. The Desktop edition of Ubuntu 18.04.2 Long Term Support (LTS) was used since it provides a graphical interface that makes configuring network interfaces easy. In addition, the SDN controllers use web-based GUIs for management, so it was appropriate to use a desktop edition to access the controller GUIs.

3.4.3 Open vSwitch

Open vSwitch (OVS) is an enterprise-level, open-source, multilayer virtual switch platform. What makes Open vSwitch suitable as a software switch for research and enterprise environments is its support for a wide range of features such as standard 802.1Q VLANs, Spanning Tree Protocol (STP), NetFlow, sFlow(R), Quality of Service (QoS) configuration, and OpenFlow 1.0 and above. Version 2.9.5 of Open vSwitch was used for this research because of its support for OpenFlow and its ease of integration with Mininet. The switch was used to connect the nodes generated by Mininet to the SDN controllers.

3.4.4 Mininet

Mininet is a software package installed on an operating system such as Linux (Ubuntu) to provide a development environment and virtual testbed for network experimentation. It can generate networks that run real code with a real Linux kernel, networking stack, and standard Linux network applications. Mininet was selected for this research because it enables SDN designs to be rapidly prototyped and complex network topologies to be tested without a physical network. A Mininet network consists of hosts that can run any Linux command or application, OpenFlow-enabled switches (the default being Open vSwitch), and links that serve as the ports and interfaces connecting nodes to the switches. Mininet version 2.3.0d6 was used, which has better performance and bug fixes.
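Besides the mn command-line front end used in the next section, Mininet exposes a Python API. The sketch below builds a small single-switch network attached to a remote controller; the controller address matches the OpenDayLight virtual machine above, while the host count and switch protocol setting are illustrative.

    from functools import partial
    from mininet.net import Mininet
    from mininet.node import RemoteController, OVSSwitch
    from mininet.topo import SingleSwitchTopo
    from mininet.log import setLogLevel

    def run():
        # One OpenFlow 1.3 switch with four hosts, managed by a remote SDN controller.
        net = Mininet(topo=SingleSwitchTopo(k=4),
                      switch=partial(OVSSwitch, protocols='OpenFlow13'),
                      controller=partial(RemoteController, ip='10.77.66.61', port=6633))
        net.start()
        net.pingAll()   # basic connectivity check once the controller installs flows
        net.stop()

    if __name__ == '__main__':
        setLogLevel('info')
        run()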

3.5 Emulation Scenarios

The performance of the three controllers was measured by creating different network topologies with Mininet and connecting those networks to each controller. The topologies used to measure each controller's performance were the single, linear, and tree topologies, described below.

3.5.1 Single Topology

Single topology in Mininet uses one OpenFlow-enabled switch and defining the number of hosts

needed in that network. These hosts are connected to the switched and in this study, the switch has

a connection to an external SDN controller.

The number of hosts on the switch is increased by a multiple of two from the previous value until

the hosts reach 128. The performance metrics of each controller is evaluated as the number of

hosts increases.

The command used to generate a single topology with 4 hosts is "sudo mn --topo single,4 --mac -

-controller=remote, ip=10.77.66.61, port 6633 --switch ovsk, protocols=OpenFlow13". Mininet

uses sudo mn to generate a network, --topo represents the network topology which is single, the

number of hosts is 4, --mac gives the hosts MAC addresses, controller represents the SDN

controller the switch should connect to which is located at a “remote” destination with an IP of

10.77.66.61 and connection port of 6633, the type of switch is OpenvSwitch (ovsk) that uses the

OpenFlow protocol version 1.3 for connection.
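The host-doubling procedure described above could be scripted with Mininet's Python API roughly as follows. This is only a sketch: the controller address repeats the testbed value used elsewhere in this chapter, and the measurement step is reduced to a pingAll connectivity test for brevity.

    from functools import partial
    from mininet.net import Mininet
    from mininet.node import RemoteController, OVSSwitch
    from mininet.topo import SingleSwitchTopo

    def single_topology_sweep():
        # Double the host count 2, 4, 8, ..., 128 on a single switch, as in Section 3.5.1.
        for hosts in [2, 4, 8, 16, 32, 64, 128]:
            net = Mininet(topo=SingleSwitchTopo(k=hosts),
                          switch=partial(OVSSwitch, protocols='OpenFlow13'),
                          controller=partial(RemoteController, ip='10.77.66.61', port=6633))
            net.start()
            loss = net.pingAll()   # placeholder for the actual performance measurements
            print('hosts=%d packet loss=%s%%' % (hosts, loss))
            net.stop()

    if __name__ == '__main__':
        single_topology_sweep()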

3.5.2 Linear Topology

Linear topology has switches and hosts connected in line with each host connecting a particular

switch. The switches are also connected and a connection is established to a remote controller. In

this topology, the number of hosts is the same, one host per switch but the switches are instantiated

from two switches and increased with a multiple of two till they reached 64 switches. Performance

metrics of generated traffic for each controller was collected for analysis.

The command used to generate a linear topology with two switches and a host per switch was "

sudo mn --topo linear, 2 --mac --controller=remote, ip=10.77.66.61,port=6633 --switch ovsk,

protocols=OpenFlow13"

3.5.3 Tree Topology

Tree topology arranges OpenFlow-enabled switches and hosts in the form of a tree structure. It

uses multiple branches and each branch has a connection of multiple switches and hosts. Tree

topology command syntax defined depth and fanout. Depth presents the number of switches levels

while and fanout is the number of output ports available for connecting hosts or switches.

The command used to generate tree topology in Mininet for a depth of 1 and fanout of 1 is “sudo

mn –topo tree, depth=1, fanout=1 --mac --controller=remote, ip=10.77.66.61,port=6633 --switch

ovsk, protocols=OpenFlow13". The performance metrics of each controller for a depth of 1 with

fanout of 1,2 and 3 as well as a depth of 2 with fanout of 1,2 and 3 were taken for analysis.

3.6 Performance Metrics

The performance of an SDN controller is the overall quality of service it can provide for the networks under its administration. Many different metrics are used to measure a controller's performance; the metrics selected for this study were topology discovery, throughput, latency, and round-trip time.

3.6.1 Topology Discovery

Topology discovery is the ability of the controllers to detect and determine the topology of the

networks generated by Mininet. Each controller should be able to determine the topology type (

single, linear, and tree), connected switches, and nodes with their respective IP and MAC

addresses.

3.6.2 Throughput

Throughput is the number of flow setup requests a controller can handle. Two nodes on the network were selected to act as a TCP server and a TCP client. Data was sent from the client, running the Iperf command with port 5001, across the network managed by the controller to a server also running the Iperf command on port 5002. This was repeated for transfer durations of 10, 20, 30, 40, and 50 seconds. The average throughput was calculated as the amount of data transferred over the time taken for delivery.

3.6.3 Round Trip Time

Round Trip Time (RTT) in this study is the total time for a packet to be sent from a host, processed across the network managed by the SDN controller, and passed to the destination host, plus the time for the reply from the destination host to reach the sending host. Any delay indicates the time taken by the controller to locate each host in the network and appropriately update the OpenFlow switch flow tables; it was therefore expected that the first packet would record the highest RTT. Recordings in milliseconds were taken for the minimum (min), average (avg), and maximum (max) round-trip times for each controller under each topology and for varying numbers of switches and nodes.

3.7 Tools for Performance metrics

There are several tools for evaluating SDN controller performance; noteworthy among them are Cbench, hcprobe, OFCBenchmark, and OFCProbe. All of them were tried but failed to meet the expectations of this study. In previous studies of SDN controller performance, the majority used the Controller Benchmarker (Cbench) to generate packet-in messages for new flows. Cbench emulates switches, connects them to a controller, fakes packet-in messages, and observes how flow-mods are sent back to the switches. It was initially used in this study but was replaced because it could not fulfill all the requirements of the study, although it took significant time away from the project.

The performance metrics of throughput and round-trip time were measured using standard networking tools, namely Iperf and the ping command. They were selected because of their ease of use and reliability in giving accurate results, as opposed to the aforementioned benchmarkers.

3.7.1 Iperf

Iperf is available for Windows, Linux, and other platforms. It performs traffic performance evaluation for both TCP and UDP and works with unicast and multicast transmission. It generates reports for various statistical measurements such as datagram loss, throughput, and latency, and these reports can be directed to an output file for analysis. Iperf operates as a command-line performance tool, but a Java-based GUI version called Jperf is also available.

The Iperf3 package was installed on the Mininet virtual machine and made available to the emulated hosts used for measurement. The Iperf command used on the client with IP address 10.0.0.1 was "iperf -c 10.0.0.17 -p 5001 -t 20", where -c denotes the client, 10.0.0.17 is the address of the server the client should connect to, -p is the port used for the connection (5001), and -t is the duration of the transfer in seconds (here 20). The Iperf command on the server side was "iperf -s -i 1", where -s denotes the server and -i sets the reporting interval, here one report every second.
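When the test is driven from a Mininet script rather than the interactive CLI, the same client/server measurement can be run programmatically. The sketch below assumes net is an already started Mininet network and that hosts named h1 and h17 exist, mirroring the commands above.

    def measure_throughput(net, client_name='h1', server_name='h17', seconds=20):
        # Run iperf between two Mininet hosts and return the client-side report text.
        client, server = net.get(client_name, server_name)
        server.cmd('iperf -s -i 1 > /tmp/iperf_server.txt &')    # background TCP server
        report = client.cmd('iperf -c %s -p 5001 -t %d' % (server.IP(), seconds))
        server.cmd('kill %iperf')                                # stop the background server
        return report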

3.7.2 Ping

Ping is a network utility used to test the reachability of a host. It measures the round-trip time of packets from a sending host to a destination host: the sender transmits Internet Control Message Protocol (ICMP) echo request packets to a target host on the network and waits for echo replies. Ping uses this process to report packet loss and errors, and then summarizes the results as round-trip times with the minimum, average, maximum, and mean deviation. The ping command used was "h1 ping -c 10 h17", where h1 denotes host 1, -c sets the number of ICMP packets to be sent (10), and h17 is the destination host. The reply from h17 to h1 contained the relevant round-trip time information, which was recorded for five different iterations and averaged for analysis.
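Because the ping summary line has a fixed format on Linux, the recorded round-trip times can also be extracted automatically when the test is scripted. The small sketch below parses the "rtt min/avg/max/mdev" line; the commented usage at the end assumes a running Mininet network object net.

    import re

    def parse_rtt(ping_output):
        # Extract min/avg/max/mdev (in ms) from the summary line of Linux ping output.
        m = re.search(r'rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms',
                      ping_output)
        if not m:
            return None
        return dict(zip(('min', 'avg', 'max', 'mdev'), map(float, m.groups())))

    # Example use inside a Mininet script:
    #   h1, h17 = net.get('h1', 'h17')
    #   print(parse_rtt(h1.cmd('ping -c 10 %s' % h17.IP())))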

3.8 Chapter Summary

In this chapter, the scientific research approach adopted to achieve the aim of this study was described. Emulation methods were used to set up the environment needed to test the controllers and collect data for analysis. The architectural designs of the three SDN controllers, namely OpenDayLight, Floodlight, and HPE VAN, were discussed, the modules of each controller used in the study were identified and described, and the supporting technologies that make these controllers function were explained.

The chapter gave an overview of the research test environment and its components, such as VMware ESXi 6.7, the hypervisor used to host the virtual machines; Ubuntu 18.04.2, which was used to download and install the SDN controller packages; Open vSwitch, which acted as the OpenFlow-enabled switch; and Mininet, used to generate the OpenFlow network. The emulation scenarios were explained: a single topology that uses only one OpenFlow-enabled switch with several hosts, a linear topology that has several OpenFlow-enabled switches with one host attached to each, and a tree topology that arranges switches in a hierarchical order using depth and fanout.

The performance metrics used to compare the controllers were outlined and explained. The first is the ability of a controller to detect and determine the topology of a network presented to it by Mininet: a good controller should determine whether the topology is single, linear, or tree, and the number of switches and nodes connected to it. Throughput, which determines the amount of data transferred from one host to another on a network managed by a controller, was explained, and round-trip time, which measures the time it takes for packets to travel from one host to another and back, was also described. Finally, the measuring tools used to record these metrics, Iperf and ping, were discussed.

CHAPTER FOUR

4 DESIGN AND IMPLEMENTATION

4.1 Introduction

This chapter presents a brief overview of SDN deployment approaches and a justification for the chosen deployment approach and selected controllers. It also discusses the testbed needed for deployment and the installation procedure for each component of the testbed. The implementation steps for the three selected SDN controllers are outlined. There is also an emulation of three network topologies and a demonstration of how each controller handles them. Traffic generated between hosts for each topology is demonstrated, along with the time taken for hosts to respond to each other's echo messages.

4.2 SDN Deployment Approaches

SDN, as a new technology that segregates the control and data planes of networking devices, has implementation challenges. Foremost among these is the acquisition of devices that support SDN, which are expensive to procure. The roadmap to deploying SDN has been simplified into three different approaches, namely switch-based networks, overlay networks, and hybrid networks that combine the first two. Switch-based networks are the original SDN deployment solution and use only SDN-compliant networking devices; with this deployment, an SDN controller installs flow entries into OpenFlow-enabled switches. Overlay SDN deployment reuses existing physical network infrastructure to create a virtualized environment in which SDN networks can be built. Hybrid deployment combines both the switch-based and overlay network approaches.

4.3 Justification of Implementation Options

The project was deployed as an overlay network. It used existing network infrastructure consisting of a traditional switch, router, and server. This approach was chosen because physical OpenFlow devices were expensive; it enabled SDN testing through the use of a virtual OpenFlow switch (Open vSwitch) and virtualized SDN controllers.

The selected controllers (Floodlight, OpenDayLight, and HPE VAN) were chosen over other controllers for the following reasons.

• The controllers had to be developed using the same programming language. All three controllers were developed with Java. Java was chosen because it is a platform-independent language; as such, SDN controllers developed with it can be deployed on any operating system. Furthermore, Java could easily be learned in order to compile the different modules of the controllers.

• The performance comparison had to cover both centralized and distributed controllers. Floodlight is centralized, while OpenDayLight and HPE VAN are distributed controllers. Floodlight was chosen because it was the only centralized controller developed in Java with easily accessible documentation; moreover, it supports a REST API as a northbound interface. OpenDayLight was chosen because it is the most widely used distributed open-source controller. HPE VAN was chosen because it is a proprietary distributed controller with features similar to those of OpenDayLight.

• The features and performance comparison had to be conducted for open-source and proprietary

controllers. HPE VAN is a proprietary controller while Floodlight and OpenDayLight are

open-source controllers.

4.4 Testbed Setup

The testbed used a single physical server with the following specification:

• Model: Dell PowerEdge R730

• Processor: 6 x Intel(R) Xeon(R) CPU E5-2620@2.0GHz

• RAM: 48GB

• HDD: 2TB (3 x 1.2TB SAS 10K configured with RAID 5)

• Embedded NIC: 4 x 1GbE

Figure 8: Test Server Mounted in Rack

The testbed also included the virtual machines (VMs) listed below. These VMs were created on top of the physical server using hypervisor software (VMware ESXi 6.7).

• Mininet - Ubuntu 18 - 8GB RAM, 20GB HDD, 2-core CPU, IP: 10.77.66.54

• OpenDayLight - Ubuntu 18 - 8GB RAM, 16GB HDD, 2-core CPU, IP: 10.77.66.61

• Floodlight - Ubuntu 18 - 8GB RAM, 16GB HDD, 2-core CPU, IP: 10.77.66.62

• HPE VAN - Ubuntu 18 - 8GB RAM, 16GB HDD, 2-core CPU, IP: 10.77.66.63

4.4.1 VMware ESXi 6.7

An account was created at https://my.vmware.com for a 60-day trial version. The hypervisor

image (iso) was downloaded and a bootable USB drive created with it using PowerISO. The

operating system was then installed on the physical server and assigned an IP address. The management console was accessible through a web browser by entering this IP address.

Figure 9: Virtual Servers

Figure 10: Properties of a VM

4.4.2 Ubuntu 18.04.2

Ubuntu 18.04.2 was downloaded from "http://old-releases.ubuntu.com/releases/18.04.2/ubuntu-18.04.2-desktop-amd64.iso" and installed.

Figure 11: Ubuntu OS Updated

4.4.3 Mininet

The Mininet software package was downloaded from https://github.com/mininet/mininet,

installed, and tested as shown in the figure below.

Figure 12: Mininet Installation

Figure 13: Mininet Network

4.5 SDN Controllers

This section demonstrates how the controllers were downloaded, installed, and operated.

4.5.1 OpenDayLight

A zip file of the OpenDayLight package distribution-karaf-0.3.4-Lithium-SR4 was downloaded from "https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.3.4-Lithium-SR4/distribution-karaf-0.3.4-Lithium-SR4.zip". It was extracted and started with Karaf as shown in Figure 14, and the OpenDayLight features necessary to manage OpenFlow networks were then installed as shown in Figure 15 below.

Figure 14: OpenDayLight Installation

Figure 15: OpenDayLight Features Installation

4.5.2 Floodlight

Floodlight was downloaded from "https://codeload.github.com/floodlight/floodlight/zip/master". The submodules were updated and Ant was used to build the modules as shown in Figure 16 below.

Figure 16: Floodlight Installation

4.5.3 HPE VAN

A virtual machine with a minimal installation containing the HPE VAN controller was downloaded from "https://media.arubanetworks.com/sdn-apps/hpe-van-sdn-ctlr-2.8.8-ova.zip". Since the controller came pre-installed, only the network interface had to be configured, as shown in Figure 17 below.

Figure 17: Network Configuration of HPE VAN

4.6 SDN Topologies

This section demonstrates how Mininet created networks for the controllers to manage. The

networks created are single, linear, and tree topologies.
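Each of these networks can be generated either from the Mininet CLI or from Mininet's Python API. The following minimal sketch shows one way the three topology types could be created and handed to a remote controller; the controller address, the OpenFlow port 6633, and the topology sizes are illustrative assumptions rather than the exact commands used in this study.

#!/usr/bin/env python
"""Minimal sketch of launching the three topology types used in this study with
Mininet's Python API. Controller IP and topology sizes are illustrative."""
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topo import SingleSwitchTopo, LinearTopo
from mininet.topolib import TreeTopo
from mininet.cli import CLI

CONTROLLER_IP = '10.77.66.61'   # e.g. the OpenDayLight VM in this testbed

TOPOLOGIES = {
    'single': SingleSwitchTopo(k=20),        # one switch, 20 hosts
    'linear': LinearTopo(k=8, n=1),          # 8 switches, 1 host per switch
    'tree':   TreeTopo(depth=2, fanout=2),   # switches arranged by depth and fanout
}

def run(name):
    # Hand the emulated network to the remote SDN controller for management.
    net = Mininet(topo=TOPOLOGIES[name],
                  controller=lambda cname: RemoteController(cname, ip=CONTROLLER_IP, port=6633))
    net.start()
    CLI(net)   # interact, e.g. "pingall" or "h1 ping -c 10 h8"
    net.stop()

if __name__ == '__main__':
    run('single')

Equivalent CLI invocations would use Mininet's --topo single,20, --topo linear,8, and --topo tree,depth=2,fanout=2 options together with --controller=remote.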

4.6.1 Single Topology

Single topology has one switch with a different number of hosts. A sample test network with 20

hosts created in Mininet is shown in figures18, and figure19 below.

Figure 18: Mininet Single Topology

Figure 19: Single Topology Displayed by OpenDayLight

4.6.2 Linear Topology

Linear topology has two or more switches, each with one host attached. A test network in Mininet is shown in Figure 20, and the topology as displayed by Floodlight is shown in Figure 21 below.

Figure 20: Mininet Linear Topology

Figure 21: Linear Topology Displayed by Floodlight

4.6.3 Tree Topology

Tree topology uses depth and fanout to arrange switches hierarchically. Below is a test network in Mininet and the topology as discovered by the HPE VAN controller.

Figure 22: Mininet Tree Topology

Figure 23: Tree Topology Displayed by HPE VAN

4.7 Chapter Summary

This chapter has examined and justified the overlay deployment approach to SDN. The rationale for choosing the selected controllers was explained, as well as the implementation steps needed to make them work. Test networks were emulated in Mininet and connected to the implemented controllers for management.

CHAPTER FIVE

RESEARCH EVALUATION

5 Introduction

This chapter presents the results of the experimental evaluation. The results are discussed based on the performance metrics outlined in Chapter 3: topology discovery, round-trip time, and throughput.

5.1 Topology Discovery

The controllers are compared on their ability to provide a near real-time and accurate display of the network topology, a process called Topology Discovery. Accurately detecting and displaying the network topology plays an essential role in other internal functionalities of the controller; it is used for host tracking, network configuration, route planning, and attack detection, among others. The figures below show how the three controllers display and handle the different networks.

Figure 24: ODL with 8 hosts and 1 switch

Figure 25: ODL with 64 hosts and 1 switch

Figure 26: ODL with 128 hosts and 1 switch

Figure 27: Floodlight with 8 hosts and 1 switch

Figure 28: Floodlight with 64 hosts and 1 switch

Figure 29: Floodlight with 128 hosts and 1 switch

Figure 30: HPE VAN with 8 hosts and 1 switch

Figure 31: OpenDayLight with linear topology of 8 switches and 8 hosts

Figure 32: Floodlight with linear topology of 8 switches and 8 hosts

Figure 33: HPE VAN with linear topology of 8 switches

Figure 34: OpenDayLight with tree topology of depth 2 and fanout 2

Figure 35: Floodlight with tree topology of depth 2 and fanout 2

Figure 36: HPE VAN with tree topology of depth 2 and fanout 2

Single topology tests connected a minimum of 8 hosts and a maximum of 128 hosts to one switch, with intermediate sizes of 16, 32, and 64 hosts. Linear topologies had a minimum of 2 switches and a maximum of 32 switches, each with one end host connected. Tree topology had a minimum depth of 1 and a maximum depth of 2. The three controllers identified the different network topologies and displayed them as shown in the figures above. Each controller had its own topology GUI that provided:

• the discovered switches and end nodes

• the ports discovered on a switch

• a path between end nodes

• the end nodes' MAC or IP addresses

• a view of active flow rules, with tools for adding, editing, and deleting flows

The controllers use the same OpenFlow Discovery Protocol (OFDP) to discover the different types of topologies. OpenDayLight provided all topology features and could detect new networks generated from Mininet without restarting its modules; its display of the single topology is shown in Figures 24, 25, and 26, while the linear and tree topologies are shown in Figure 31 and Figure 34 respectively. Floodlight detected and displayed all topology features but had to be restarted and its modules reloaded to clear previously cached networks; its display of the single topology is shown in Figures 27, 28, and 29, while the linear and tree topologies are shown in Figure 32 and Figure 35 respectively. HPE VAN's display of the single topology is shown in Figure 30, while the linear and tree topologies are shown in Figure 33 and Figure 36 respectively. HPE VAN only showed the number of switches and end nodes, without a visual diagram of how each host is connected in the network, and it did not provide the MAC or IP addresses of end hosts or the paths between them. This was because the license required for those topology features had expired. OpenDayLight has the best display and labeling of switches and hosts, using conventional networking icons and colors; Floodlight displays both switches and hosts in a single color, so a technical eye is needed to differentiate between them; and HPE VAN only displays switches.
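Beyond the GUI, the discovered topology can also be read programmatically through each controller's northbound REST interface. The sketch below is illustrative only: it assumes Floodlight's documented /wm/core/controller/switches/json and /wm/topology/links/json endpoints on the default REST port 8080, and OpenDayLight's RESTCONF network-topology resource on port 8181 with the default admin credentials. Endpoint paths, ports, and credentials should be verified against the installed controller versions.

#!/usr/bin/env python
"""Minimal sketch of querying the discovered topology over the controllers'
northbound REST APIs. Addresses, ports, credentials, and endpoint paths are
assumptions to be checked against the installed controller versions."""
import requests

FLOODLIGHT = 'http://10.77.66.62:8080'
OPENDAYLIGHT = 'http://10.77.66.61:8181'

def floodlight_topology():
    # Switches known to Floodlight and the inter-switch links discovered via LLDP/OFDP.
    switches = requests.get(FLOODLIGHT + '/wm/core/controller/switches/json').json()
    links = requests.get(FLOODLIGHT + '/wm/topology/links/json').json()
    return switches, links

def opendaylight_topology():
    # RESTCONF operational view of OpenDayLight's network-topology data store.
    resp = requests.get(
        OPENDAYLIGHT + '/restconf/operational/network-topology:network-topology',
        auth=('admin', 'admin'))
    return resp.json()

if __name__ == '__main__':
    switches, links = floodlight_topology()
    print('Floodlight reports %d switches and %d links' % (len(switches), len(links)))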

5.2 Round Trip Time

The round-trip time for packets sent from one host to another in a Mininet-generated network managed by each controller is compared. The number of packets sent is set to 10, and the minimum, average, and maximum round-trip times for these packets are recorded for each controller's network and compared below.

Table 2: Minimum Round-Trip Time for Single Topology

No of Hosts OpenDayLight Floodlight HPE VAN

8 0.0864 0.0858 0.086

16 0.0904 0.0852 0.0875

32 0.0868 0.0842 0.089

64 0.0832 0.0826 0.0858

128 0.083 0.0836 0.0814

Average (ms) 0.08596 0.08428 0.08594

Figure 37: Minimum Round-Trip Time for Single Topology

From Figure 37, derived from Table 2 (the minimum RTT for single topology), it is observed that Floodlight has the lowest minimum RTT. Its RTT decreases with an increasing number of hosts but starts to increase once the number of hosts reaches 128. HPE VAN has the second-best minimum RTT, which increases slightly up to 32 hosts but declines as the number of hosts grows beyond 32. OpenDayLight has the worst minimum RTT, which peaks at 16 hosts, declines steadily up to 64 hosts, and then remains nearly constant up to 128 hosts. Averaging across all host counts puts Floodlight first with 0.08428 ms, HPE VAN second with 0.08594 ms, and OpenDayLight last with 0.08596 ms.

Table 3: Average Round-Trip Time for Single Topology

No of Hosts OpenDayLight Floodlight HPE VAN

8 0.165 0.9496 0.1712

16 0.165 0.9736 0.1572

32 0.1808 0.96 0.1626

64 0.1736 1.0306 0.1702

128 0.2256 1.1946 0.1778

Average (ms) 0.182 1.02168 0.1678

Figure 38: Average Round-Trip Time for Single Topology

From Figure 38, obtained from Table 3 (the average RTT for single topology), Floodlight has the worst average RTT, and it increases with the number of hosts. HPE VAN and OpenDayLight have similar performance: their RTT remains roughly constant regardless of the number of hosts added to the network, except for a slight increase for OpenDayLight at 128 hosts. Averaging across all host counts shows HPE VAN first, with echo responses within 0.1678 ms, OpenDayLight second with 0.182 ms, and Floodlight last with 1.02168 ms. This outcome suggests that HPE VAN and OpenDayLight should be the preferred SDN controllers for networks with a single topology.

Table 4: Maximum Round-Trip Time for Single Topology

No of Hosts OpenDayLight Floodlight HPE VAN

8 0.6464 8.1378 0.761

16 0.6482 8.3234 0.7312

32 0.8064 8.2096 0.7352

64 0.710 9.1506 0.8166

128 0.897 10.6018 0.9086

Average(ms) 0.7416 8.88464 0.79052

Figure 39: Maximum Round-Trip Time for Single Topology

From Figure 39, obtained from Table 4 (the maximum RTT for single topology), Floodlight has the highest maximum RTT, and it increases with the number of hosts. HPE VAN and OpenDayLight have similar RTT values that remain fairly steady despite an increase in the number of hosts. Floodlight has an average maximum RTT of 8.88464 ms across all host counts, while HPE VAN and OpenDayLight have values of 0.79052 ms and 0.7416 ms respectively. A higher RTT value indicates undesirable performance; as such, Floodlight is the worst of the three controllers.

Table 5: Minimum Round-Trip Time for Linear Topology

No of Switches OpenDayLight Floodlight HPE VAN

2 0.0908 0.0896 0.0898

4 0.1368 0.102 0.0914

8 0.1656 0.1144 0.1198

16 0.2228 0.1522 0.1516

Average (ms) 0.154 0.11455 0.11315

Figure 40: Minimum Round-Trip Time for Linear Topology

From Figure 40, obtained from Table 5 (the minimum RTT for linear topology), it is observed that the RTT for all the controllers increases with the number of switches. However, HPE VAN and Floodlight have similar performance, with average values of 0.11315 ms and 0.11455 ms respectively, while OpenDayLight has the highest (worst) minimum RTT value of 0.154 ms.

Table 6: Average Round-Trip Time for Linear Topology

No of Switches OpenDayLight Floodlight HPE VAN

2 0.1832 1.3746 0.1776

4 0.256 2.2308 0.191

8 0.412 2.419 0.4388

16 0.8278 3.5974 0.722

Average (ms) 0.41975 2.40545 0.38235

Figure 41: Average Round-Trip Time for Linear Topology

From Figure 41, obtained from Table 6 (the average RTT for linear topology), the RTT increases with the number of switches for all controllers. Floodlight has the highest average RTT, 2.40545 ms across all switch counts; OpenDayLight is second with a value of 0.41975 ms, while HPE VAN has the lowest value of 0.38235 ms. Since a higher RTT value indicates poorer performance, this implies that Floodlight is not well suited to the linear topology, while OpenDayLight and HPE VAN are better choices for it.

Table 7: Maximum Round-Trip Time for Linear Topology

No of Switches OpenDayLight Floodlight HPE VAN

2 0.7448 12.3302 0.8092

4 1.1184 20.7722 0.9304

8 2.377 22.479 3.1558

16 5.9012 33.8804 5.6718

Average (ms) 2.53535 22.36545 2.6418

Figure 42: Maximum Round-Trip Time for Linear Topology

From Figure 42, obtained from Table 7 (the maximum RTT for linear topology), the RTT of all the controllers increases with the number of switches. Floodlight has the highest maximum RTT of 22.36545 ms, while HPE VAN and OpenDayLight have lower values of 2.6418 ms and 2.53535 ms respectively. Therefore, HPE VAN and OpenDayLight perform better than Floodlight.

Table 8: Minimum Round-Trip Time for Tree Topology with Depth of 1

Fanout OpenDayLight Floodlight HPE VAN

2 0.0846 0.088 0.083

3 0.0842 0.086 0.0782

Average (ms) 0.0844 0.087 0.0806

Table 9: Average Round-Trip Time for Tree Topology with Depth of 1

Fanout OpenDayLight Floodlight HPE VAN

2 0.1396 0.832 0.1394

3 0.1652 1.3218 0.14

Average (ms) 0.1524 1.0769 0.1397

Table 10: Maximum Round-Trip Time for Tree Topology with Depth of 1

Fanout OpenDayLight Floodlight HPE VAN

2 0.5084 7.016 0.4936

3 0.6148 11.8424 0.58

Average (ms) 0.5616 9.4292 0.5368

Figure 43: Average Round-Trip Time for Tree Topology with Depth of 1

From Figure 43, obtained from Table 9 (the average round-trip time for a tree topology with a depth of 1), the RTT for HPE VAN and OpenDayLight remains almost constant for fanouts of 2 and 3, giving them the better performance with average values of 0.1397 ms and 0.1524 ms respectively. Floodlight has the highest average RTT of 1.0769 ms and thus the poorest performance. The maximum RTT values in Table 10 and the minimum RTT values in Table 8 follow a similar pattern, with OpenDayLight and HPE VAN performing better than Floodlight.

Table 11: Minimum Round-Trip Time for Tree Topology with Depth of 2

Fanout OpenDayLight Floodlight HPE VAN

2 0.1484 0.1042 0.1024

3 0.1484 0.97 0.109

Average (ms) 0.1484 0.5371 0.1057

Table 12: Average Round-Trip Time for Tree Topology with Depth of 2

Fanout OpenDayLight Floodlight HPE VAN

2 0.3318 1.288 0.2532

3 0.3002 1.3948 0.2606

Average (ms) 0.316 1.3414 0.2569

Table 13: Maximum Round-Trip Time for Tree Topology with Depth of 2

Fanout OpenDayLight Floodlight HPE VAN

2 1.3264 9.6228 1.4802

3 1.4518 12.4632 1.5208

Average (ms) 1.3891 11.043 1.5005

Figure 44: Average Round-Trip Time for Tree Topology with Depth of 2

From Figure 44, obtained from Table 12 (the average round-trip time for a tree topology with a depth of 2), HPE VAN has the lowest average RTT of 0.2569 ms, and OpenDayLight is second with a value of 0.316 ms. Floodlight has the highest value of 1.3414 ms. The same pattern is observed in the minimum values in Table 11 and the maximum values in Table 13, where HPE VAN and OpenDayLight perform better than Floodlight.

5.3 Throughput

The throughput of each controller was measured by transferring data from one host to another host for 10 seconds. The Iperf measurement tool was used to take five recordings for each scenario of connected hosts (from 8 hosts to 128 hosts), and the average in Gbits/sec was calculated and used as shown in Table 14 below.
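As an illustration of how such a measurement could be scripted, the following minimal sketch uses Mininet's built-in iperf helper on a single-switch topology with an assumed controller address; it is not the exact procedure used in this study.

#!/usr/bin/env python
"""Minimal sketch of a 10-second TCP throughput measurement between two Mininet
hosts using iperf. Topology size, controller address, and host choice are assumptions."""
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topo import SingleSwitchTopo

def measure_throughput(controller_ip='10.77.66.61', hosts=8, seconds=10):
    net = Mininet(topo=SingleSwitchTopo(k=hosts),
                  controller=lambda name: RemoteController(name, ip=controller_ip, port=6633))
    net.start()
    h1, h2 = net.get('h1', 'h2')
    # Mininet's iperf helper starts an iperf server on the second host and a
    # client on the first, returning the reported bandwidths as strings.
    server_bw, client_bw = net.iperf((h1, h2), seconds=seconds)
    net.stop()
    return server_bw, client_bw

if __name__ == '__main__':
    print(measure_throughput())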

Table 14: Throughput in Single Topology

No of Hosts OpenDayLight Floodlight HPE VAN

8 14.6 14.74 14.54

16 14.375 14.66 14.28

32 14.22 14.32 14.82

64 14.15 14.06 14.14

128 14.08 12.94 14.18

Average (Gbits/sec) 14.285 14.144 14.392

Figure 45: Throughput in Single Topology

From Figure 45, derived from Table 14 (throughput in the single topology), Floodlight starts with the highest throughput at 8 and 16 hosts but declines steeply as the number of hosts increases. OpenDayLight starts with the second-highest throughput at 8 to 16 hosts and declines gradually as the number of hosts increases. HPE VAN has the lowest throughput at 8 to 16 hosts but peaks at 32 hosts, declines at 64 hosts, and then remains roughly constant as the number of hosts grows. The average throughput values put HPE VAN first with 14.392 Gbits/sec, followed by OpenDayLight with 14.285 Gbits/sec, while Floodlight has the lowest throughput of 14.144 Gbits/sec.

5.4 Chapter Summary

This chapter has evaluated the performance of the three SDN controllers in terms of topology

discovery, round-trip time, and throughput.

It was observed in this chapter that all three SDN controllers use the same OpenFlow Discovery Protocol (OFDP) to discover nodes and links and construct the network topology. They also provide a GUI to display the discovered switches and end nodes, the ports discovered on a switch, paths between end nodes, the end nodes' MAC or IP addresses, and a view of active flow rules with tools for adding, editing, and deleting flows. The difference among the controllers in this category was that HPE VAN, being proprietary, had to be licensed to offer some of these features, while OpenDayLight and Floodlight did not require licensing.

The round-trip time results in this chapter were divided into three categories, namely minimum, average, and maximum RTT. The table below summarizes the results.

Table 15: Summary of Single Topology RTT Results

RTT OpenDayLight Floodlight HPE VAN

Minimum (Min) 0.08596 0.08428 0.08594

Average (Ave) 0.182 1.02168 0.1678

Maximum (Max) 0.7416 8.88464 0.79052

Average of Min, Ave, Max (ms) 0.33652 3.3302 0.34809

The summary of RTT values in Table 15 above shows that OpenDayLight and HPE VAN, with overall RTT values of 0.33652 ms and 0.34809 ms respectively, are the better controllers for the single topology, while Floodlight is the worst controller with an RTT value of 3.3302 ms.

Table 16: Summary of Linear Topology RTT Results

RTT OpenDayLight Floodlight HPE VAN

Minimum (Min) 0.14765 0.11455 0.11315

Average (Ave) 0.41975 2.40545 0.38235

Maximum (Max) 2.53535 22.36545 2.6418

Average of Min, Ave, Max (ms) 1.03425 8.29515 1.04577

The linear topology RTT values summarized in Table 16 show that OpenDayLight, with a value of 1.03425 ms, and HPE VAN, with a value of 1.04577 ms, are the best controllers for the linear topology, while Floodlight is the worst-performing controller with a value of 8.29515 ms.

Table 17: Summary of Tree Topology RTT Results

RTT OpenDayLight Floodlight HPE VAN

Minimum (Min) Depth=1 0.0844 0.087 0.0806

Minimum (Min) Depth=2 0.1484 0.5371 0.1057

Average (Ave) Depth=1 0.1524 1.0769 0.1397

Average (Ave) Depth=2 0.316 1.3414 0.2569

Maximum (Max) Depth=1 0.5616 9.4292 0.5368

Maximum (Max) Depth=2 1.3891 11.043 1.5005

Average of Min, Ave, Max (ms) 0.44198 3.9191 0.4367

HPE VAN, with an RTT value of 0.4367 ms, and OpenDayLight, with a value of 0.44198 ms, from Table 17 are the better controllers for handling a tree topology, while Floodlight, with a value of 3.9191 ms, is the worst controller for this topology.

Finally, in this chapter it was deduced from the data that HPE VAN has the best throughput, followed by OpenDayLight, with only a marginal difference between them. Floodlight produced better throughput when the number of hosts was small, but its throughput decreased significantly as the number of hosts increased; therefore, it came out as the worst controller in the throughput category.

CHAPTER SIX

6 CONCLUSION AND FUTURE WORK

6.1 Introduction

This chapter presents the conclusion to the thesis by highlighting its relevant aspects. A summary of the study and its findings is presented, along with an appraisal of how the findings satisfy the research aims outlined in chapter one. Finally, ideas and suggestions for future research that expand on this project are outlined.

6.2 Summary of the Study

In chapter one, the evolution of the internet that started in the 1960s was seen to have stagnated in recent years, and a new approach to networking was needed. Traditional networks had complex designs; therefore, identifying and resolving network issues proved a daunting task. Moreover, the protocols

used in such networks had little improvement over the years and did not support the

experimentation of new features. Furthermore, network equipment vendors restricted innovation

by locking the inner operation of their devices through proprietary operating systems and closed

APIs. SDN was presented as a solution to these networking issues whereby the control and data

planes of network devices are separated from each other. The separation of the planes, it was argued in this chapter, would enable network programmability. In this way, automation tools and

development kits would be used to create custom network applications for business needs. Also,

SDN would relieve the burden of network administrators by presenting the entire network

intelligence in a single controller that would be responsible for pushing policies throughout the

network. The justification, significance, scope, and limitations of this project were also listed in

this chapter.

In chapter 2, a literature review was conducted on the history of SDN, the architecture of SDN,

SDN controllers, and research previously conducted in comparing SDN Controllers. In this

chapter, SDN was found to have three stages in its history, namely Active Networking, Control-Data Plane separation, and the OpenFlow Protocol and Network Operating Systems. Active

Networking provided a programmable interface for network resources to be accessed leading to

innovation in Network Function Virtualization. In Control-Data Plane separation, packet

forwarding logic (data plane) was implemented directly into the hardware and had to be linked to

the control plane using an open interface called ForCES. Finally, the OpenFlow Protocol and

Network Operating Systems phase of SDN history migrated to the use of an OpenFlow-enabled

switch that has a flow-table. This flow-table can be modified by a Network Operating System

called a controller. The available literature on the SDN architecture divides it into the application, control, and data planes. The application plane hosts the network-centric programs a network administrator would use to manage the network under their jurisdiction. The control plane is

regarded as the brain of an SDN that links network devices to the application plane. SDN

controllers are located in this plane. The data plane has network elements that support the

OpenFlow protocol. This chapter also identified and classified the different types of SDN

controllers into Centralized and Distributed Controllers. Centralized controllers manage a network from a single, centralized control plane, while distributed controllers can manage a network from a distributed control plane. The variation in features of the controllers was compared in terms of

their architecture, programming language, supported interfaces, and industry partners. The chapter

concluded with previous comparisons between SDN controllers.

In chapter three, the research methodology, which followed the scientific approach, was explained, as well as the emulation method. The chapter further gave a detailed technical overview of the three selected SDN controllers and the technologies they depend on for their operation. The emulation environment used as a testbed was explained, detailing its specifications. The network topologies (single, linear, and tree) were explained with details of the number of switches and hosts used in each scenario. Topology discovery, round-trip time, and throughput were described as the performance metrics used to determine which controller is preferred for each network topology. The Iperf and ping commands were the tools used to measure these performance metrics.

In chapter four, the SDN deployment approaches were explained, and the overlay SDN deployment approach was chosen for this study since it leveraged existing hardware. The chapter justified the selection of the three SDN controllers (OpenDayLight, Floodlight, and HPE VAN): they all had to be developed using the same programming language. Furthermore, the comparison had to pit open-source controllers (OpenDayLight and Floodlight) against a proprietary controller (HPE VAN), and a centralized controller (Floodlight) against distributed controllers (OpenDayLight and HPE VAN). The testbed was set up and the relevant software packages for the controllers were downloaded and installed from official websites and repositories.

In chapter five, different network topologies were generated in Mininet and presented to the controllers. A comparison of how each controller handled and displayed the topologies was made. The results of round-trip time measurements, taken with the ping command from one host to another, were compared among the controllers. Finally, the throughput values for the single topology with varying numbers of hosts were also compared among the three controllers.

6.3 Summary of Findings

This thesis had the aim of comparing three different SDN controllers given different networking

scenarios. To accomplish this, the thesis focused on several objectives. Firstly, the SDN architecture had to be examined to identify its various layers. This was to be followed by identifying the different categories of SDN controllers and the SDN-enabled network elements in the data plane. Another

objective was to find out which interfaces are used by the controllers to communicate with the

application and data planes. Finally, HPE VAN, Floodlight, and OpenDayLight were to be

installed on different virtual machines. They would then be compared against each other by managing

the same network at different times. The performance metrics would be topology display, round-

trip time, and throughput.

The study accomplished the research aims outlined above. The SDN architecture, controllers, and

data plane devices were explained in detail in chapter two. The controllers used a REST API as the northbound interface to communicate with the application plane, while OpenFlow was used as the southbound interface for communication with data plane devices. The three controllers were successfully installed in chapter four to handle network traffic. The performance analysis was conducted in chapter five and is summarized below.

To discover the network topology, the controllers use the same mechanism. They start the topology discovery process with node discovery using the OpenFlow protocol and link discovery using the OpenFlow Discovery Protocol. The OpenFlow switch reports its features and ports to the controller using Features Request and Reply messages. The controllers then regularly send Link Layer Discovery Protocol packets through the switch to learn which nodes and links are connected.

To establish communication between hosts, the controllers had to install flows into the switches, and this is where the differences in performance among the controllers emerged. When packets are sent from one host (h1) to another host (h7), an Address Resolution Protocol (ARP) request is sent to the OpenFlow switch. The switch provides h1 with the Media Access Control (MAC) address of h7 for data transfer. However, when the switch does not know the MAC address, it sends a PACKET-IN message to the controller. The controller replies with a PACKET-OUT message instructing the switch to flood the request out of all of its ports except the one on which the ARP request arrived. The host (h7) responds with its MAC address, which is forwarded to the controller. The corresponding forwarding rule is then sent back to the switch as a FLOW-MOD message and installed in the switch's flow table, so subsequent requests are not sent to the controller.
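The PACKET-IN/FLOW-MOD cycle just described is the reactive behaviour observed in the experiments. Flows can also be installed proactively through a controller's northbound API, which is the approach suggested later in this chapter for static networks. The following minimal sketch assumes Floodlight's Static Flow Pusher REST module, commonly exposed at /wm/staticflowpusher/json on port 8080 (the path differs in older releases), and uses an illustrative switch DPID and port numbers; field names and the endpoint should be verified against the installed Floodlight version.

#!/usr/bin/env python
"""Minimal sketch of pushing a proactive flow entry via Floodlight's Static Flow
Pusher REST module. Controller address, endpoint path, switch DPID, and port
numbers are illustrative assumptions."""
import json
import requests

FLOODLIGHT = 'http://10.77.66.62:8080'
STATIC_FLOW_URL = FLOODLIGHT + '/wm/staticflowpusher/json'   # older releases use a different path

def push_flow():
    # Forward traffic arriving on port 1 of the assumed switch out of port 2.
    flow = {
        'switch': '00:00:00:00:00:00:00:01',   # assumed DPID of the Open vSwitch instance
        'name': 'h1-to-h7',
        'priority': '32768',
        'in_port': '1',
        'active': 'true',
        'actions': 'output=2',
    }
    resp = requests.post(STATIC_FLOW_URL, data=json.dumps(flow))
    return resp.status_code, resp.text

if __name__ == '__main__':
    print(push_flow())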

As can be seen from the round-trip values in Table 4 (the maximum RTT of the single topology) in chapter 5, the controllers had to install flow entries for each host in the network into the OpenFlow switch flow table before communication could occur. Floodlight took on average 8.88464 ms to install flows enabling communication between hosts, HPE VAN took 0.79052 ms, and OpenDayLight took 0.7416 ms. Due to this delay in flow entry installation, Floodlight was the worst-performing of the three controllers for all the topologies. However, after the initial hurdle of flow entry installation, Floodlight had the best minimum RTT for both the single and linear topologies. HPE VAN and OpenDayLight had the best overall round-trip time performance because their flow entries were installed much faster than Floodlight's.

In the throughput category (Table 14 of chapter 5), Floodlight once again had the worst overall performance, while OpenDayLight and HPE VAN performed better. When the number of hosts was small (8 and 16 hosts), Floodlight showed better throughput, but its performance worsened as the number of hosts increased. OpenDayLight and HPE VAN performed below Floodlight at these small host counts but outperformed it as the number of hosts grew. It could be argued from the literature review in chapter two that, since Floodlight is a centralized controller, it has a scalability issue, hence the worst performance. Furthermore, the literature review indicated that distributed controllers can be scaled and provide redundancy in the event of hardware or network failure; HPE VAN and OpenDayLight benefited from these advantages of distributed controllers, hence their improved performance. It can be concluded from the results that Floodlight is the worst-performing controller while HPE VAN and OpenDayLight

have better performance. But in a direct contest between OpenDayLight and HPE VAN, there is no winner, since the difference in performance between them is marginal. This stems from the fact that they both use OSGi technology and the underlying principle of their operation is the same, as explained in chapter three; however, OpenDayLight is an open-source project while HPE VAN is a proprietary controller.

Finally, the study offers clues as to which scenarios would benefit from choosing either a centralized controller (Floodlight) or a distributed controller (HPE VAN or OpenDayLight). The observation made about Floodlight is that it gives the best minimum RTT values, but it also has the highest maximum RTT values, which leads to poor overall performance. Nonetheless, if the flow entry installation time can be avoided, a good environment in which to use Floodlight is a static network where hosts are not added or removed dynamically. In such an environment, a proactive flow entry mechanism can be used to preinstall flows in the OpenFlow switch, thereby taking advantage of Floodlight's ability to respond with the minimum round-trip time. HPE VAN and OpenDayLight can be used in any OpenFlow-enabled network environment. However, OpenDayLight, as an open-source controller, can be used to reduce SDN investment instead of HPE VAN, which is a proprietary controller requiring expensive licensing.

6.4 Suggestions for Further Work

In this study, the performance of Floodlight, OpenDayLight, and HPE VAN has been evaluated, with insight into which scenarios best suit each controller. This study can be expanded by future researchers using the following recommendations.

• Improvement on Floodlight controller

The Floodlight controller's performance can be improved by looking at ways in which packet forwarding could be achieved faster. An algorithm to reduce the flow entry installation time could also be investigated.

• Comparison of Controllers with different programming languages

The SDN controllers compared in this study were all Java-based; as such, their performance differences could not be attributed to the implementation language. Controllers developed with different programming languages could be compared to ascertain the advantages a controller derives from a particular language. Controllers could be selected from this sample pool: NOX, HyperFlow, Kandoo (C++); POX, ONIX, B4, Ravana, Ryu (Python); and ONOS, DISCO, DevoFlow, Beacon (Java).

• Comparison of Controllers using different metrics:

The performance metrics used in this study were topology discovery, throughput, and round-

trip time. Studies could be conducted with different metrics or parameters, such as controller security, by simulating an attack on a controller's access list. Controller reliability is another metric to be investigated: since the control and data planes are detached, a controller failure could collapse the network. Finally, consistency among distributed controllers has not received much attention, and this metric should be investigated, especially for large networks spanning different geographical areas.

• Experimentation with proprietary controllers:

SDN controller comparison studies have largely relied on open-source controllers, except for this study, where a proprietary controller (HPE VAN) was used. Researchers are encouraged to take advantage of the free trial versions of proprietary controllers to test them against open-source controllers.
