Comparison of Three OpenFlow SDN Controllers

Godson Dawuni (10805844)
University of Ghana

October 2020
DECLARATION
I hereby declare that this thesis is my own original research work undertaken at the University of Ghana under the supervision of my thesis supervisor, except for other people's work, which has been duly cited and acknowledged.
STUDENT
Signature: ..........................................................
Date: ..........................................................
SUPERVISOR
Signature: ..........................................................
Date: ..........................................................
CO-SUPERVISOR
Signature: ..........................................................
Date: ..........................................................
DEDICATION
To my mother
ACKNOWLEDGEMENT
I appreciate my supervisors Dr. Jamal-Deen Abdulai and Prof. Ferdinand A. Katsriku for their guidance and support throughout this research. I also thank the lecturers in the Department of Computer Science for both teaching and impacting my life positively.
ABSTRACT
Software Defined Networking (SDN) is an innovative way of programming the age-old traditional
networks. Traditional networking devices have the control and forwarding planes bundled
together. SDN enables independent evolution of the planes by separating them. The Control plane
is pushed to a Controller while packet forwarding resides in the switches and routers.
The Implementation of a Software Defined Network relies significantly on the SDN Controller.
The Controller acts as the brain of the network where decisions regarding routes and packet
forwarding are made. The Controller also has detailed visibility of the data plane devices. The capabilities of these devices are made available through the Controller Southbound Interface (SBI) via protocols such as the Simple Network Management Protocol (SNMP) as well as OpenFlow. Applications for Security, Quality of Service, and Traffic Engineering, among others, are written and deployed to these data plane devices through the Northbound Interface (NBI).
There has been much research done on designing and implementing SDN Controllers. The implemented SDN Controllers are either open source or vendor specific. The Controllers are further classified as centralized or distributed.
This thesis focuses on comparing the implementation and performance metrics of three SDN Controllers for particular network topologies. The Controllers examined in this thesis are OpenDayLight, Floodlight, and HPE VAN.
TABLE OF CONTENTS
DECLARATION ............................................................................................................................ ii
DEDICATION ............................................................................................................................... iii
ACKNOWLEDGEMENT ............................................................................................................. iv
ABSTRACT.................................................................................................................................... v
TABLE OF CONTENTS ............................................................................................................... vi
LIST OF FIGURES ....................................................................................................................... ix
LIST OF TABLES ......................................................................................................................... xi
LIST OF ABBREVIATIONS ....................................................................................................... xii
CHAPTER ONE ............................................................................................................................. 1
1 INTRODUCTION ................................................................................................................... 1
1.1 Background of the study .................................................................................................. 1
1.2 Statement of the problem ................................................................................................. 7
1.3 Aim ................................................................................................................................... 9
1.4 Justification of the study ................................................................................................ 10
1.5 Significance of the study ................................................................................................ 10
1.6 Scope of the study .......................................................................................................... 11
1.7 Limitations of the study.................................................................................................. 11
1.8 Organization of the study ............................................................................................... 12
CHAPTER TWO .......................................................................................................................... 13
2 LITERATURE REVIEW ...................................................................................................... 13
2.1 Introduction .................................................................................................................... 13
2.2 History of SDN Controllers ........................................................................................... 13
2.2.1 Active Networking .................................................................................................. 14
2.2.2 Control and Data Plane Separation ......................................................................... 15
2.2.3 OpenFlow Protocol and Network Operating Systems ............................................ 22
2.3 SDN Architecture ........................................................................................................... 29
2.3.1 Infrastructure Layer ................................................................................................ 30
2.3.2 Control Plane .......................................................................................................... 31
2.3.3 Application Layer ................................................................................................... 34
2.4 SDN Controllers ............................................................................................................. 34
2.4.1 Centralized Controllers ........................................................................................... 35
2.4.2 Distributed Controllers............................................................................................ 36
2.5 Feature Based Comparison of SDN Controllers ........................................................... 36
2.6 Related Works ................................................................................................................ 38
2.7 Chapter Summary ........................................................................................................... 40
CHAPTER THREE ...................................................................................................................... 41
3 RESEARCH METHODOLOGY .......................................................................................... 41
3.1 Introduction .................................................................................................................... 41
3.2 Research Method ............................................................................................................ 41
3.3 Proposed Controllers ...................................................................................................... 41
3.3.1 OpenDayLight......................................................................................................... 42
3.3.2 Floodlight ................................................................................................................ 43
3.3.3 HPE VAN ............................................................................................................... 45
3.4 Emulation Environment ................................................................................................. 46
3.4.1 VMware ESXi 6.7 ................................................................................................... 47
3.4.2 Ubuntu..................................................................................................................... 48
3.4.3 Open vSwitch .......................................................................................................... 48
3.4.4 Mininet .................................................................................................................... 48
3.5 Emulation Scenarios ....................................................................................................... 49
3.5.1 Single Topology ...................................................................................................... 49
3.5.2 Linear Topology...................................................................................................... 50
3.5.3 Tree Topology ......................................................................................................... 50
3.6 Performance Metrics ...................................................................................................... 50
3.6.1 Topology Discovery................................................................................................ 51
3.6.2 Throughput .............................................................................................................. 51
3.6.3 Round Trip Time..................................................................................................... 51
3.7 Tools for Performance metrics ....................................................................................... 52
3.7.1 Iperf ......................................................................................................................... 52
3.7.2 Ping ......................................................................................................................... 53
3.8 Chapter Summary ........................................................................................................... 53
CHAPTER FOUR ........................................................................................................................ 54
4 DESIGN AND IMPLEMENTATION .................................................................................. 54
4.1 Introduction .................................................................................................................... 54
4.2 SDN Deployment Approaches ....................................................................................... 55
4.3 Justification of Implementation Options ........................................................................ 55
4.4 Testbed Setup ................................................................................................................. 56
4.4.1 VMware ESXi 6.7 ................................................................................................... 57
4.4.2 Ubuntu 18.04.2........................................................................................................ 59
4.4.3 Mininet .................................................................................................................... 59
4.5 SDN Controllers ............................................................................................................. 60
4.5.1 OpenDayLight......................................................................................................... 60
4.5.2 Floodlight ................................................................................................................ 61
4.5.3 HPE VAN ............................................................................................................... 62
4.6 SDN Topologies ............................................................................................................. 62
4.6.1 Single Topology ...................................................................................................... 62
4.6.2 Linear Topology...................................................................................................... 64
4.6.3 Tree Topology ......................................................................................................... 66
4.7 Chapter Summary ........................................................................................................... 67
CHAPTER FIVE .......................................................................................................................... 68
RESEARCH EVALUATION ....................................................................................................... 68
5 Introduction ........................................................................................................................... 68
5.1 Topology Discovery ....................................................................................................... 68
5.2 Round Trip Time ............................................................................................................ 76
5.3 Throughput ..................................................................................................................... 86
5.4 Chapter Summary ........................................................................................................... 88
CHAPTER SIX ............................................................................................................................. 91
6 CONCLUSION AND FUTURE WORK .............................................................................. 91
6.1 Introduction .................................................................................................................... 91
6.2 Summary of the Study .................................................................................................... 91
6.3 Summary of Findings ..................................................................................................... 94
6.4 Suggestions for Further Work ........................................................................................ 97
References ..................................................................................................................................... 99
LIST OF FIGURES
Figure 19: Single Topology Displayed by OpenDayLight ........................................................... 64
Figure 31: OpenDayLight with Linear topology of 8 switches and 8 hosts ................................. 72
Figure 32: Floodlight with the Linear topology of 8 switches and 8 hosts................................... 73
Figure 34: OpenDayLight with tree topology of the depth of 2 and fanout of 2 .......................... 74
Figure 35: Floodlight with tree topology of the depth of 2 and fanout of 2 .................................. 74
Figure 36: HPE VAN with tree topology of the depth of 2 and fanout of 2 .................................. 75
Figure 42: Maximum Round-Trip Time for Linear Topology ..................................................... 82
Figure 43: Average Round-Trip Time for Tree Topology with Depth of 1 ................................. 84
Figure 44: Average Round-Trip Time for Tree Topology with Depth of 2 ................................. 86
LIST OF TABLES
Table 8: Minimum Round-Trip Time for Tree Topology with Depth of 1 .................................. 83
Table 9: Average Round-Trip Time for Tree Topology with Depth of 1 ..................................... 83
Table 10: Maximum Round-Trip Time for Tree Topology with Depth of 1................................ 83
Table 11: Minimum Round-Trip Time for Tree Topology with Depth of 2 ................................ 84
Table 12: Average Round-Trip Time for Tree Topology with Depth of 2 ................................... 85
Table 13: Maximum Round-Trip Time for Tree Topology with Depth of 2................................ 85
LIST OF ABBREVIATIONS
OpenDayLight - ODL
Software Defined Networking - SDN
CHAPTER ONE
1 INTRODUCTION
1.1 Background of the study
The internet is used by billions of people across the world daily. This project, which started in 1969 as a research project in academia, is today the world's largest computer network. Vint Cerf and Bob Kahn, who are widely recognized as the fathers of the internet, worked on its standardization and invented the Internet Protocol Suite (the Transmission Control Protocol (TCP) and the Internet Protocol (IP)). The Internet Protocol Suite replaced the Network Control Program (NCP), which was
the first Host-To-Host protocol. NCP used a simplex protocol by selecting two port addresses, one address to send data and the other port to receive data. NCP also allowed different networks in the Advanced Research Projects Agency Network (ARPANET) to route traffic through the network. Another milestone of this era was the invention of Ethernet technology at Xerox PARC by Bob Metcalfe. Ethernet is the key technology behind
Local Area Networks (LAN), Metropolitan Area Networks (MAN) and Wide Area Networks
(WAN). Networking in this period became widespread, and it became difficult to keep track of the Internet Protocol addresses that were assigned to computers. This led to the development of the Domain Name System (DNS) to map the IP addresses to their respective hostnames. Routing was initially done using a single routing algorithm across all networks on the internet. This quickly proved unscalable; each network was therefore organized as an autonomous system running its own routing protocol, and these autonomous systems were interconnected with another routing protocol for inter-domain routing.
The early success of innovation in computer networking has stagnated in recent times. The
networks of today have higher overhead, they are more difficult to troubleshoot and they do not
offer flexibility for experimentation. The routing protocols described above are still being used with little improvement over the years. Routers and switches are shipped with vendor-specific operating systems such as the Cisco IOS; the network administrator is thus stuck with what these vendors have to offer, which is mostly inadequate. Data Centers now deploy virtualization technologies to share resources, reduce cost, and facilitate testing (Bizanis & Kuipers, 2016), (Rehman, Aguiar, & Barraca, 2019). These technologies include Application, Data, Desktop,
Network, Server, and Storage virtualization. The closest to a networking improvement is the
Network Functions Virtualization (NFV) where the functions of firewalls, Unified Threat
Management (UTM) appliances, routers, switches, and other networking devices are virtualized
and installed on a physical server. Networks implemented by these virtualized network devices are
called overlay networks. The traffic generated by the overlay network is pushed to the physical
network through the Network Interface Cards (NIC) of the physical server (hosting the virtualized
network devices).
Overlay networks thus rely on the traditional (underlay network) to carry traffic through the internet. These overlay networks leave bare the problems the traditional (underlay) network presents in terms of scalability, service setup, and configuration.
Software Defined Networking (SDN) proposes a fundamentally different way of building and managing existing networks. The shift that SDN introduces is the separation of the control and forwarding layers of networking devices. The Open Networking Foundation (ONF) is the lead organization that
drives innovation in network infrastructure and carrier services. SDN is based on three principles:
i. the decoupling of the control and data planes;
ii. logically centralized control of the network; and
iii. an abstraction of network resources and state and exposing such resources to external applications.
SDN, as opposed to traditional networks as shown in Fig. 1, offers several benefits, which include:
• Network Programmability: The policies governing the network are directly programmable since control functions are located in a controller that has open APIs to automation tools and scripts. Software Development Kits (SDK) such as the Java Development Kit (JDK) and YANG Development Kit (YDK) are useful in developing custom applications for the network.
• Centralization of Management: The intelligence of the network is located at the controller and
the global view of all devices is shown in the controller topology. Applications and network
policy engines view the entire network as a single logical switch which makes policy
deployment easy.
• Content Delivery: SDN allows traffic engineering leading to the implementation and
automation of Quality of Services (QoS) for Voice over IP (VOIP), video, and audio
transmissions. The network automatically shifts resources from less congested applications to
applications that are operating at their peak thereby guaranteeing delay and bandwidth.
A resource re-allocator (re-routing module) is an app in the management plane that reallocates flows through OpenFlow to adjust to network congestion and periodically optimize the use of network resources.
• Reduced capital expenditures and Hardware savings: SDN advocates for openness to both control and data plane logic in networking equipment; this permits network administrators to deploy commodity hardware and avoid vendor lock-in.
There have been several approaches to implementing SDN; most noteworthy is the NOX SDN Controller project (Gude et al., 2008) developed by Nicira Networks. NOX became the first
SDN Controller and was installed on a physical server to have a networkwide view of
connected devices. Network devices connected to the NOX Controller through an OpenFlow
switch. The OpenFlow protocol (Mckeown, Anderson, Peterson, Rexford, & Shenker, 2008)
was developed to carry instructions to the connected devices through the OpenFlow switch.
The NOX Controller thus served as a bridge between applications written in higher
programming languages such as Java and the data plane devices. The key challenges that
emerged from the implementation of SDN were scalability, reliability, and security. Organizations aiming to deploy SDN have to take these challenges into consideration.
Organizations such as Arista Networks have implemented SDN in their “Software-Driven Cloud
Networking (SDCN)” which integrates the principles of cloud computing like self-service
provisioning, automation, and scaling in terms of performance and operational cost. SDCN uses
“Arista Extensible Operating System (EOS)” to provide network virtualization, high availability,
simplified architectures, custom programmability, and rapid integration with a wide range of third-party applications.
Ubiquiti Networks has employed the principles of SDN to develop a Network Management
Controller called the Unifi SDN Controller which runs on Apache Tomcat web server. The
controller is free to download for network administrators using Ubiquiti products with support on
Windows, Mac, and Linux and mobile app support for iOS and Android. The controller is bundled
with the UniFi-Discover tool for finding and managing UniFi devices on the local network. The
architecture is built for scalability to support a large number of Ubiquiti devices comprising of
Localized UniFi Controllers, Access Points, Fiber/Ethernet switches, and Internal Gateways. The
Controller uses the proprietary CPE WAN Management Protocol (CWMP) for auto-configuration and diagnostics. A cluster of Unifi SDN Controllers can be managed at a central point or under a single dashboard using the UniFi Hybrid Cloud. The communication between the remote controllers is done through Web Real-Time Communication (WebRTC) for end-to-end access with the added benefit of encrypted communication.
Cisco, the world leader in network equipment vending has adjusted to the SDN trend by rolling
out next-generation routers and switches. The Cisco Nexus 9000 Series Switches support OpenFlow, an opensource SBI, and the Cisco One Platform Kit (onePK), a Cisco proprietary SBI, to connect to the Cisco Open SDN Controller or any Controller of the network administrator's choice. The Cisco ASR 9000 Series Aggregation Services Routers are SDN compatible with support for stateful and stateless path computation (Roberts et al., 2014). Application agility in the data center is addressed with the introduction of
the Cisco Application-Centric Infrastructure (ACI). Cisco ACI uses the Application Policy
Infrastructure Controller (APIC) as a central management center. Cisco SDN, coupled with ACI, simplifies the deployment and management of applications. There is also complete network visibility and threat protection; this is possible through the integration of third-party applications for advanced security, load balancing, and monitoring. Examples of third-party solutions the network administrator can deploy in ACI are SourceFire (for network security), Embrane (for VPNs, firewalls, load balancers), and F5 (for application delivery and load balancing).
Small to medium-sized organizations that do not have the capacity, expertise, or capital of the
above-mentioned companies can still manage their networks using SDN. Physical data plane devices that support SDN are currently expensive; as such, VMware has introduced the VMware NSX to bring network visibility, flexibility, agility, and scalability to networks. NSX Manager offers central management of SDN networks while providing a REST API for the creation, configuration, and monitoring of NSX components. The control plane is an appliance based on the controller cluster, while the data plane is made of the NSX vSphere Distributed Switch with kernel modules (VXLAN,
Distributed Logical Router and Firewall) and the NSX Edge Services Gateway (Gustavo A. A. Santana, 2013).
1.2 Statement of the problem
Network management involves the collection of data about the capabilities of network devices,
analysis of the data to detect faults, to measure performance, and to maintain Quality of Service.
Network management activities are in five functioning areas namely; Fault, Configuration,
Performance, Account, and Security Management (FCAPS) (Nuangjamnong, Maj, & Veal, 2008).
The network is managed by having a central server called Network Management System (NMS)
and agents of the NMS deployed in end devices to be monitored. The end devices store their data
in a Management Information Base (MIB) (McCloghrie, 2013) and use the agent program to
communicate with the server for the management functions i.e. FCAPS. The agent program relays
the device state, network traffic, and notifications to the NMS through supported protocols such as the Simple Network Management Protocol (SNMP) and the Common Management Information Protocol (CMIP) (Ren & Li, 2010). Network traffic gathered in this form is used to build the
network topology, statistics are analyzed by operators to ascertain how best the network and device
performance could be improved, and this is a daunting task (Gupta & Feamster, 2016). Popular network management systems currently in use include the SolarWinds Network Performance Monitoring tool, Nagios, Cacti, Argus, Zabbix, etc. These network management systems for traditional networking are complex to implement and have to be integrated with different products to support the five functional areas of network management. Device configuration has to be done individually, since none of the NMS has complete control of the device's control plane, which is usually configured through the Command Line Interface (CLI) (Dubey, 2016).
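As context for the protocols just mentioned, the following is a minimal sketch of the kind of per-device polling an NMS performs against a device's MIB, written with Python's pysnmp library (an assumption; the thesis does not prescribe a toolkit). The community string 'public' and the address 192.0.2.1 are placeholders.

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# SNMP GET of sysDescr.0 from a device's MIB (address and community are placeholders).
error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData('public', mpModel=1),      # SNMPv2c
           UdpTransportTarget(('192.0.2.1', 161)),  # device IP and SNMP port
           ContextData(),
           ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0))))

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print(f'{name} = {value}')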
Network management is much easier with SDN because all control functions of the devices are located in the SDN Controller, and network control frameworks such as Procera help network administrators express management policies at a high level.
In a traditional network, scalability at layer 2 is achieved through the use of Virtual Local Area
Network (VLAN). Broadcast domains are segmented for security and simpler management while
reducing device purchase cost (Al-khaffaf, 2018). This Ethernet technology uses the IEEE 802.1Q
standard to scale up to 4094 different networks which are inadequate for Data Centers with Cloud
Computing capabilities and Multi-Tenant Architecture (Kwon, Lee, Hahni, & Perrig, 2020).
Existing VLANs use the Spanning Tree Protocol (STP), with little or no support for the newer Shortest Path Bridging (SPB) (Avaya, 2011), to converge the network topology. STP blocks some of the ports (redundant paths) to prevent traffic loops; therefore, only half of the available paths are used. Transparent Interconnection of Lots of Links (TRILL) (Salam, Kumar, & Eastlake, 2015), which supports multipathing at the data link layer, enables Fibre Channel over Ethernet (FCoE), and improves latency, is yet to be adopted in current layer 2 devices. Furthermore, traditional layer 2 devices do not support newer protocols such as the Virtual Extensible Local Area Network (VXLAN), Network Virtualization using Generic Routing Encapsulation (NVGRE), and Generic Network Virtualization Encapsulation (GENEVE).
The closed (Proprietary) nature of network devices means there is research stagnation in
networking and real-world experiments are difficult to perform on existing large-scale production
networks. For example, in a network, it is difficult to develop and install a new routing protocol to
route traffic throughout the network. Existing protocols that allow routing in a network include
Open Shortest Path First (OSPF), Enhanced Interior Gateway Routing Protocol (EIGRP), and
Border Gateway Protocol (BGP), which are prone to routing table issues. Protocols such as the Path Computation Element Communication Protocol (PCEP) in traditional networking give scope for route manipulation. In PCEP, Path Computation Clients (PCC) request path computations from Path Computation Elements (PCE). A PCE is a network application that calculates sophisticated paths through the network based on operational constraints.
Enterprise networks that need new features and functions added to these devices have to contact the device vendors, which in turn consult with their software developers and chipset manufacturers; the process can take several years. These new features and functionalities could easily have been added by the in-house software developers or network administrators of the enterprise networks if the vendors had exposed the devices' Application Programming Interfaces (API). The preferred option would be for equipment vendors to sell only the data plane devices to clients without an embedded control plane; in that case, the packet processing could be defined with match-action tables for switches, routers, load balancers, etc. through OpenFlow or any selected Southbound Interface.
1.3 Aim
This thesis will compare the performance of different SDN controllers. In a typical traditional network environment, the control and packet-forwarding planes are bundled together; SDN decouples those planes, and this study examines each plane as follows:
• Protocols used by applications in the management plane to control data planes will be
identified and focus will be given to the Representational State Transfer (REST) API.
• In the Control plane, the different types and categories of SDN Controllers will be identified.
• Protocols used by the controller to communicate with the data plane will be identified. The focus will be given to the OpenFlow protocol.
• OpenDayLight, Floodlight, and HPE VAN Controllers will be installed on separate servers, and the same network traffic will be simulated to these controllers. Analysis of how the controllers handle the same traffic will be done and compared on topology display, round-trip time, and throughput (a measurement sketch follows this list).
• In the data plane, SDN-enabled network devices, both physical and virtual, will be identified, and a survey of devices rolled out by device manufacturers such as Cisco, HP, Huawei, etc. will be conducted.
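The round-trip time comparison above can be automated. The listing below is a minimal sketch, assuming a Linux host with the standard iputils ping utility and Python 3; the summary-line format it parses is specific to iputils, and the target address 10.0.0.2 is a hypothetical host in the emulated topology.

import re
import subprocess

def measure_rtt(host: str, count: int = 10) -> dict:
    """Run ping and return min/avg/max round-trip times in milliseconds."""
    out = subprocess.run(['ping', '-c', str(count), host],
                         capture_output=True, text=True, check=True).stdout
    # iputils summary line: "rtt min/avg/max/mdev = 0.045/0.061/0.112/0.020 ms"
    m = re.search(r'= ([\d.]+)/([\d.]+)/([\d.]+)/', out)
    return {'min': float(m.group(1)), 'avg': float(m.group(2)), 'max': float(m.group(3))}

if __name__ == '__main__':
    print(measure_rtt('10.0.0.2'))  # hypothetical host address used for illustration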
1.4 Justification of the study
At the time of this research, there was limited literature on the topic. Various companies, mostly
network vendors, cloud services providers, and network security organizations had white papers
on their websites about how they have deployed SDN in their respective products. The success
and critical evaluation of such deployments are not readily available to the research community.
For instance, in a large Wide Area Network (WAN), there is the controller placement problem, where a network engineer does not know how many SDN Controllers are needed to support the network and where they should be located in the network. The documentation of SDN Controllers is
difficult to understand and cannot be easily followed by a beginner to implement SDN. This
research will add to the growing body of knowledge of SDN deployment and evaluation of SDN
Controllers.
1.5 Significance of the study
The outcome of this research will simplify the concept and deployment of SDN, especially for
network engineers in small to medium-size organizations who are skeptical and uncertain about
SDN implementation but are seeking to improve and save cost in the network infrastructure. The
study will make the selection of an SDN controller for network administrators easier since
selection would be based on the performance of each controller for different scenarios. The steady
growth of internet usage demands innovation in the control and data planes of networking
equipment for Service Providers. This study demonstrates how such innovation can solve network problems. Future researchers can also refer to this study for further academic work on SDN.
1.6 Scope of the study
The research examines the historical and academic background of SDN, the architectural
difference between traditional networking devices and SDN-enabled devices. The research then
focuses extensively on SDN Controllers, the different types, categories, and classification of SDN
Controllers. Three SDN Controllers are selected: one centralized controller (Floodlight), a distributed controller (OpenDayLight), and a vendor-specific controller (HPE VAN), each deployed as a centralized control mechanism for OpenFlow-enabled data plane devices. Analysis of how the selected controllers handle the flows and topology of the simulated network would also be presented.
1.7 Limitations of the study
SDN is a relatively new technology; as such, existing networking devices could not be used for its implementation, and emulation was used to implement SDN in this study. Server virtualization technology (VMware ESXi 6.7) was used to support Virtual Machines (VMs) for the SDN Controllers, and Network Function Virtualization (NFV) was used to provide the switches. The capabilities of the deployed systems were limited by the inherent
limitations of the physical server. Additionally, the physical server hard disk, RAM, and CPU capacities constrained the scale of the network topologies that could be emulated.
1.8 Organization of the study
Chapter One introduces the concept of SDN and why it is the future of networking. The problem
and aim of this study are also outlined in Chapter One. Chapter Two presents a literature review
in which the history of SDN, SDN architecture, types of SDN Controllers, comparison of SDN
Controllers, and the different types of protocols used in SDN are discussed in depth. Chapter
Three examines the methodology and various strategies used in solving the research aims. Chapter
Four focuses on the Design and Implementation of the selected SDN Controllers and how the data
plane devices will be managed. Chapter Five evaluates the research findings and analysis resulting
from the SDN Controller simulations. Chapter Six ends the thesis with a summary of the study, a summary of findings, and suggestions for further work.
CHAPTER TWO
2 LITERATURE REVIEW
2.1 Introduction
This chapter examines the theoretical framework with regard to SDN. A literature review will be conducted on the following:
• The history of SDN Controllers
• SDN Architecture
• SDN Controllers
• Some previous research works conducted in comparing SDN Controllers and research gaps.
2.2 History of SDN Controllers
The term “Software-Defined Networking” was first used in an article (Greene, 2009) to describe the OpenFlow protocol by Nick McKeown and his peers at Stanford University.
The history and development of SDN can be split into three stages, where each stage has different objectives and researchers contributing to its development (Feamster, Rexford, & Zegura, 2014):
• Active Networking
• Control and Data Plane Separation
• OpenFlow Protocol and Network Operating Systems
Figure 2: Selected evolution of SDN with supporting technologies
2.2.1 Active Networking
Active networking is one of the early developments leading to SDN. The idea for an active network was to have a programmable interface in the form of a network API that could be used to expose the network node's resources (Kreutz et al., 2015). The role of the network was extended from transmitting packets to acting as a computation engine; in this case, packets can be modified, stored, or redirected. Two programming models were considered, namely the capsule model and the programmable router/switch model.
The capsule model is based on Active Node Transport System (ANTS) (Wetherall, Guttag, &
Tennenhouse, 1998) which works with three techniques namely mobile-code, load on request, and
caching. It allows dynamic deployment of new protocols to network nodes without the need for
physical nodes and running protocols to be synchronized. The concepts in its architectural design
include the following: network services are customized by capsules (which is a replacement for
traditional IP packets), routing functions are performed by Active nodes which process capsules
while maintaining the soft-store, and finally routines are distributed dynamically to nodes (Achir et al.).
In the programmable router/switch model, code is deployed to network nodes through out-of-band mechanisms, leaving the basic routing or switching function unchanged (Munir, 1997). In this way, existing routers can have additional processing capabilities by adding a computation engine at router ports (Wolf & Turner, 2001).
Active networking provided three intellectual contributions to the development of SDN. These
include:
• The architectural framework of active networks was a precursor to network virtualization. This
architecture had a shared operating system called NodeOS which controlled shared resources,
Execution Environment (EE) in the form of a virtual machine for processing packets, and Active Applications (AA) hosted within the EE (Calvert, 1999) (AN Group, 1998).
• The concept of orchestrating middleboxes into a unified platform that laid the foundation for Network Functions Virtualization (NFV).
2.2.2 Control and Data Plane Separation
Control and Data plane separation was a reaction to the growth in backbone networks and the increasing complexity of network management software.
Since Control and Data planes were now separated, there had to be a means for communication
between the two planes, and two innovations were developed for such purpose (Feamster et al.,
2014). These innovations were Open Interfaces and Logically Centralized Control of the network.
i. Open Interfaces
The Forwarding and Control Element Separation (ForCES) protocol (Yang, L., R. Dantu, T. Anderson, 2004) was proposed by the Internet Engineering Task Force (IETF) as a standard for communication between the control and forwarding planes of a network element.
The Control Element (CE) is located at the control plane in the form of routing and signaling protocols. The Forwarding Element (FE) is located at the forwarding (data) plane. These elements are either co-located in the same device or physically separated.
ii. Logically Centralized Control
Centralized Control of the network in Internet Service Providers (ISP) for programmability and network-wide visibility was a driving factor for Control and Data plane separation.
Intelligent Route Service Control Point (IRSCP) (Van Der Merwe et al., 2006) enabled route
computation and selection to be performed outside routers by external network intelligence. IRSCP
is logically centralized and separated from the routers, and has control over the selection of routes in a Multiprotocol Label Switching (MPLS) network. IRSCP can also be used for connectivity
management tasks such as blackholing of DDoS traffic, Virtual Private Network ( VPN ) gateway
selection, load balancing across multiple egress points, as well as moving traffic away from routers scheduled for maintenance.
The architecture of IRSCP comprises traditional network elements, routers, route-reflectors, and IRSCP with its associated functions. IRSCP uses Internal BGP (iBGP) to receive routes from routers, computes the best routes, and sends these selected routes back to the routers.
Performing flexible route control for multiple administrative domains in an MPLS or Generalized
Multi-Protocol Label Switching (GMPLS) networks relied on Path Computation Element (PCE)-
Based Architecture (Farrel, Vasseur, & Ash, 2006). A Path Computation Element (PCE) is an entity, located within the network or at a remote destination, that computes network paths based on a network graph. A Traffic Engineering Database (TED) stores information about the resources and topology of the network domain, while a Path Computation Client (PCC) is a client application that requires path computation from a PCE. Path computation in this architecture can be performed inter-layer, intra-domain, and inter-domain. A domain in this context refers to a group of network elements within a common sphere of path computation responsibility or address management. These domains include IGP (OSPF, EIGRP) areas, Autonomous Systems (AS), or a collection of AS under an ISP.
Route Control Platform (RCP) (Caesar, Caldwell, Feamster, & Rexford, 2005) was proposed in
2004 with support from telecom giant AT&T to solve routing loops, convergence, and traffic engineering problems in interdomain routing.
The RCP proposal was to have Autonomous Systems (AS) rely on an RCP Server to handle
interdomain routing. The iBGP routers will simply forward packets and outsource their routing
functions to the RCP server, which will make all computations and update the routing tables of the
iBGP routers (Feamster, Rexford, Balakrishnan, & Shaikh, 2004), (Caesar et al., 2005).
The SoftRouter Architecture (Lakshman, Nandagopal, Ramjee, Sabnani, & Woo, 2004) allowed
Control and forwarding plane devices to be dynamically associated with each other.
The Control plane functionalities are executed in servers called Control Elements (CEs) that are
multiple hops away from the Forwarding Elements (FEs). These elements interact with each other
using two protocols namely Dynamic Binding Protocol (Dyna-BIND) and ForCES (Ramjee et al.,
2006).
A Dynamic Binding Protocol (A. F. Ahmed & Lakshman, 2015) performs three tasks namely,
discovery, association, and operation. Discovery enables Forwarding Elements to broadcast their existence and discover Control Elements in a SoftRouter network, using the spanning tree protocol if Ethernet services are supported. If the elements are heterogeneous, then a hop-by-hop or source-routed routing layer over IP is used to route the packets of the discovery protocol.
Each FE is associated with a primary CE and a backup CE at the planning stage of the network by
the administrator. The association is done taking into consideration the load managed by the CE,
location, or distance and reliability of network links between the FE and CE.
Failures and errors in a SoftRouter network are detected and repaired by using heartbeat messages sent by the FE to the CE. When the path to the CE is no longer valid, the FE switches over to its backup CE.
Equipment vendors did not readily accept separating the Control and Data plane, so a clean-slate architecture was explored to combat this inertia. The clean-slate approach was to redesign
networks (the internet) from scratch, offering improved performance and abstractions while
avoiding the complexity of existing systems using a set of new core principles (Girod et al., 2006),
(Feldmann, 2007).
To achieve this objective, the United States (US) National Science Foundation (NSF) launched the “100x100 Clean Slate Project” in 2003 to stimulate innovative thinking in conjunction with multiple research institutions and universities (Denieffe, Kavanagh, & Okello, 2016). Five areas of importance were identified for innovation. The project led to the development and adoption of the 4D Architecture, SANE, and Ethane, as discussed below.
The key principles of network-level objectives, network-wide views, and direct control of
networks caused the 4D architecture to restructure the network control plane. Network control
plane functionalities were organized into four components by the 4D architecture, consisting of the decision, dissemination, discovery, and data planes.
The Data plane provides services like Ethernet packet forwarding, IPv4, or IPv6 based on the state
of the decision plane. The flow-scheduling weights, forwarding table or forwarding information
base (FIB), queue-management parameters, network address translation mappings, and packet
filters are examples of the data plane state. The Data plane also has support for collecting traffic measurements.
The discovery plane's role is to discover the network's physical components (interfaces present on
the devices, their capacities, and connection to other devices) and represent them logically by
creating their logical identifiers. The discovery plane defines the persistence and scope of these identifiers, sets them to be automatically discovered, and manages their relationships with each other.
The dissemination plane serves as a bridge between the data plane and the decision plane by providing a robust communication channel between them.
The decision plane is the executive center in charge of network control such as load balancing,
network reachability, security, access control as well as the configuration of interfaces. The
operation of the decision plane is in real-time and it has a network-wide view of the topology,
traffic flowing through the network, and the capabilities of physical components (routers and
switches). The decision plane is made of multiple servers called decision elements seen as a single
entity at the data plane (which receives commands from the decision plane through the
dissemination plane). A system implementation such as Tesseract (Yan et al., 2007) can then be used to realize direct network control under the 4D architecture.
Another achievement of the clean-slate project was for enterprise network security in the form of a Protection Architecture. The Secure Architecture for the Networked Enterprise (SANE) (Casado
et al., 2006) offers a single protection layer to control an enterprise network connectivity. SANE
design goals were to support policies that do not depend on network topology or devices being
used, enforce those policies even at the link layer, and obfuscate network resources and services
from attackers. Policies are defined and executed from a central component instead of being
administered from several components or services like routers, switches, firewalls, and authentication servers.
A Domain Controller (DC) is a physical server, replicated at different locations, that acts as the brain
in a SANE network. The DC authenticates hosts and users and advertises the services available on the network.
The DC offers three key functionalities: the Authentication Service, the Network Service Directory, and the Protection Layer Controller.
The Authentication Service maintains an encrypted channel to authenticate users, switches, and
hosts.
The Network Service Directory is a substitute for the Domain Name System in a SANE network.
When system principals (users, groups) need to access a service, a service lookup is made in the
NSD ( servers use a unique name to publish their services ). The NSD has an Access List (ACL)
for each service that specifies the permissions granted to system principals. If a system principal
is whitelisted in that ACL, then permission will be granted to access those services.
The Protection Layer Controller manages SANE network connectivity by granting and revoking
capabilities (routes from clients to servers). The capabilities are encrypted in anonymous socket connections, using onion routing (Goldschlag, Reed, & Syverson, 1996) to limit a network's visibility to attackers.
Ethane (Casado et al., 2007), a clean-slate network architecture, followed the lead of the 4D
Architecture and SANE to design a centralized control architecture for network administration. An
Ethane network consists of the Controllers, Ethane Switches, and hosts. Ethane has control over
the network by allowing communication between end-hosts through explicit permission. This is
achieved through the use of a centralized Controller which has the global network policy used in
determining what happens to packets. Ethane Switches contain a flow table that specifies how and where packets should be sent in the network. When a packet arrives at the switch, the flow table is consulted to determine what to do with the packet. If a flow (instruction) is not found,
the packet is forwarded to the Controller. The switches connect to the Controller using a secure
channel.
The operational deployment of Ethane at Stanford University inspired the development of the original OpenFlow.
2.2.3 OpenFlow Protocol and Network Operating Systems
The Open Networking Foundation (ONF) was formed in 2011 to be a leader in network
transformation, taking over from where the clean-slate project stopped. ONF is an operator-led
consortium with a partnership from Comcast, Turk Telekom, China Unicom, Google, AT&T, and
Deutsche Telekom (Jammal, Singh, Shami, Asal, & Li, 2014). This organization offers open
source solutions for operators and is the de facto standards authority for SDN. The OpenFlow specification, its information model, and the functionalities of its components were first published in the OpenFlow whitepaper (Mckeown et al., 2008).
OpenFlow is focused on the Ethernet switch, which has an internal flow-table with a standard
interface for adding or removing flow entries. It was designed to enable experimental protocols to
be run by researchers in a network. To achieve this goal, network vendors were encouraged to
incorporate OpenFlow into their switching devices, which would successively be deployed in
college campus wiring closets and backbone networks (Mckeown et al., 2008).
The OpenFlow protocol is implemented in a switch called OpenFlow Switch. The idea for this
switch is rooted in the fact that most vendors have equipped their devices to use flow-tables
running at different line-rates. OpenFlow exploits this diversity and creates uniformity in how flow tables are programmed across vendors.
A network that supports OpenFlow has its basis on three concepts: (1) an OpenFlow enabled switch or switches that act as the data plane devices; (2) a control plane managed by a remote OpenFlow enabled Controller or Controllers; and (3) a secure control channel that connects the OpenFlow switches
to the OpenFlow Controller (Braun & Menth, 2014), (W. Li, Meng, & Kwok, 2016), (Alsaeedi et al.). An OpenFlow switch uses the OpenFlow protocol to communicate with an external controller. The switch contains at least one or more flow tables and a group table. OpenFlow switches are classified into two types: OpenFlow-Only switches that support only OpenFlow pipeline operations, and OpenFlow-hybrid switches that process both OpenFlow pipeline operations and traditional Ethernet switching operations.
Figure 4: OpenFlow Switch Components
A Flow Table contains flow entries that detail how packets are matched and processed. A flow refers to a sequence of packets that share common header field values.
Flow entries contain different components namely; Match Field, Priority, Counters, Instructions,
Timeouts, Cookie, and Flags (Y. Li, Zhang, Taheri, & Li, 2018).
i. Match Fields comprise the ingress port, sections of packet headers, as well as other pipeline fields such as metadata from previous steps. Matching is used to verify whether particular field values comply with a set of constraints referred to as a match. A match can either be an Exact match, where a particular value must match in the field (value = field); a Bitwise match, where particular bit values are matched in the field; or a Wildcard match, where there is no constraint on the matching field. There are also other types of matching which cannot be directly expressed, so indirect methods are used. Some of these matches are the Set match, Range match, and Inequality match.
ii. Priority determines how flow entries are sorted and which flow entry has to be processed before the other; the flow entry with the highest priority is given precedence.
iii. Counters are updated when packets are matched and are used to gather statistics about flows.
iv. Instructions can modify the action set linked to the matched packet or direct it to another flow table in the pipeline.
v. Timeouts specify how long the switch caches a flow entry. Timeouts could either be an Idle or
Hard Timeout. Idle Timeout refers to the time in seconds after which a flow entry is removed from the flow table because it has not been matched, while Hard Timeout is the maximum time in seconds after which a flow entry is removed from the flow table regardless of its matching status.
vi. A cookie is an opaque data value randomly chosen by the external Controller to filter flow entries in flow statistics, flow modification, and flow deletion requests. Cookies are not used for processing packets.
vii. Flags alter the way flow entries are managed; for example, a flag can instruct the switch to send a Flow-Removed message when the entry is deleted.
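To make these components concrete, the following is a minimal sketch of a controller application that installs a single flow entry, written against the open-source Ryu controller framework (one of the controllers named in Section 2.3.2) using OpenFlow 1.3. The port numbers, destination MAC address, timeout values, and cookie are illustrative assumptions, not values prescribed by the specification.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class FlowEntryExample(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def install_flow(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        # Match fields: exact match on the ingress port and destination MAC.
        match = parser.OFPMatch(in_port=1, eth_dst='00:00:00:00:00:02')
        # Instruction: apply an action that forwards matching packets out of port 2.
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
        mod = parser.OFPFlowMod(datapath=datapath,
                                priority=100,      # higher-priority entries match first
                                match=match,
                                instructions=inst,
                                idle_timeout=30,   # removed after 30 s without a match
                                hard_timeout=300,  # removed after 300 s regardless
                                cookie=0x1234)     # opaque value for filtering entries later
        datapath.send_msg(mod)

Because the entry is pushed as soon as the switch connects, before any matching traffic arrives, this is an example of the proactive flow installation discussed next.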
Flow Tables are managed by the external Controller that can add, modify, or delete flow entries
reactively or proactively. Reactive Flow Entries are created by the controller after it dynamically
discovers devices on the network. The flow tables of OpenFlow switches in the network are
updated to establish end-to-end connectivity between the discovered devices. Proactive Flow
Entries are created by the Controller before the devices are connected to the network or before the corresponding traffic arrives.
OpenFlow switches contain at least one Flow Table or several Flow Tables to form a pipeline. The
OpenFlow pipeline processing specifies how packets interact with flows in Flow Tables,
permitting packets to be directed through that pipeline until a match rule is found or, failing a match, forwarded to the Controller.
When a host in the network wants to communicate with another host, it sends packets to the switch ingress port. These packets are initially not matched because there is no flow entry for them; this is termed a table-miss. In this case, the switch forwards those packets to the Controller as a Packet-In
message. The Packet-In message encapsulates the original packets from the hosts and is referenced
using a Buffer Id by the Controller which then decides what happens to the packet. The Controller
can send a Packet Out message instructing the switch what to do with the packet ( i.e. the switch
should forward the packets out of a specific port or port range). Alternatively, the Controller can
send a Flow Modification ( Flow-MOD) message instructing the switch to install a new flow into
its Flow Table(s). After this initial process, when packets from hosts arrive, they traverse from the first Flow Table (table 0) through to table n to find matches that trigger the instruction set to execute.
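The table-miss sequence just described can be sketched as a minimal Ryu (OpenFlow 1.3) Packet-In handler. Flooding the packet is only one illustrative choice of Packet-Out action; a real application would typically also learn addresses and install a Flow-Mod.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class PacketInExample(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg                    # the Packet-In message from the switch
        datapath = msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        in_port = msg.match['in_port']  # ingress port of the unmatched packet
        # Packet-Out: instruct the switch to flood the (possibly buffered) packet.
        actions = [parser.OFPActionOutput(ofproto.OFPP_FLOOD)]
        data = msg.data if msg.buffer_id == ofproto.OFP_NO_BUFFER else None
        out = parser.OFPPacketOut(datapath=datapath,
                                  buffer_id=msg.buffer_id,  # Buffer Id referencing the packet
                                  in_port=in_port,
                                  actions=actions,
                                  data=data)
        datapath.send_msg(out)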
When a packet first arrives, a metadata set consisting of an Action List, an Action Set, or both is created for it. Actions are the operations to perform on the packet, such as dropping the packet, forwarding
the packet to a particular port, or modifying the packet header. Action Lists and Action Sets differ in their time of execution: actions in a List are executed directly after the packet leaves the current Flow Table, while actions in a Set are accumulated and executed cumulatively after the packet has been processed in all the Flow Tables (van Asten, van Adrichem, & Kuipers, 2014). An action bucket is a collection of actions
to be selected as a bundle for packet processing, and these action buckets are contained in a group (Group Table), which determines the mechanism for choosing which action bucket is to be applied to incoming packets.
A requirement for an OpenFlow network is the presence of a secure channel that ensures reliable communication between the switches and the Controller. This connection carries messages between switches and Controllers; a message is either a control
command, request, reply, or a status event. The interface that connects the switch to the controller
is called an OpenFlow channel, this channel allows the controller to send control messages to the
switch for operational purposes such as event notification, switch configuration, and management.
This channel uses Transport Layer Security (TLS) to encrypt communication thereby mitigating
security risks. In a situation where a switch is managed by more than one controller, a separate OpenFlow channel would be set up for each, and an aggregation of all these channels is called the Control
Channel (Z. Li, Li, Zhao, & Xiong, 2014), (Y. Li et al., 2018), (Liyanage, Ylianttila, & Gurtov,
2014).
The OpenFlow protocol defines three types of messages, each with subtypes, exchanged between the controller and switch. All messages include a common header that encapsulates the protocol version, message type, message length (in bytes), and transaction ID (XID). The three message types are Controller-to-switch, Asynchronous, and Symmetric messages.
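As an illustration of this common header, the sketch below unpacks the 8-byte, big-endian header (version, type, length, XID) defined by the OpenFlow specification; the sample bytes are a hand-built header for a Hello message (type 0) from an OpenFlow 1.3 (version 0x04) peer and are assumed purely for demonstration.

import struct

def parse_openflow_header(raw: bytes) -> dict:
    # The header is 8 bytes: version (1), type (1), length (2), transaction ID (4).
    version, msg_type, length, xid = struct.unpack('!BBHI', raw[:8])
    return {'version': version, 'type': msg_type, 'length': length, 'xid': xid}

hdr = parse_openflow_header(bytes([0x04, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x01]))
print(hdr)  # {'version': 4, 'type': 0, 'length': 8, 'xid': 1}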
Controller-to-switch messages are used to manage the switch directly or to inspect the state of the
switch. This message type is initiated by the controller and may not necessarily require a response
from the switch. The subtypes of Controller-to-switch messages include Features, Configuration, Modify-State, Read-State, Packet-out, Barrier, and Role-Request messages.
Features are requested by the controller to get the capabilities of a switch; the switch is required to
respond to this request. Using Configuration messages, the configuration parameters of the switch
can be set and queried by the controller. Modify-State messages from the controller manage the state of a switch by adding, deleting, or changing flow or group table entries, or by setting switch port priorities. Read-State messages gather information about the switch's current statistics and configuration for the controller. Packet-out messages enable the controller to specify the action to
apply to a packet and which port of the switch the packets are sent out from. Barrier messages are sent to verify the completion of previous operations; the switch responds with a Barrier reply upon the completion of those operations. Role-Request messages set or query the role of the controller in a deployment where the switch is connected to multiple controllers.
Asynchronous messages are initiated and used by the switch to update the controller of network
events and changes that have occurred in the switch state. There are four subtypes of Asynchronous messages, namely Packet-in, Flow-Removed, Port-status, and Error messages. Packet-in messages occur when there is a table-miss in the switch flow table; the switch forwards the unmatched packets to the controller for processing. Flow-Removed messages are sent to update the controller about flows that have been flushed from the switch flow table due to either an idle or hard timeout. Port-status messages update the controller of changes in the switch ports, and Error messages are sent when the switch encounters a problem.
Symmetric messages are initiated without solicitation from either the switch or the controller; these include Hello, Echo, and Vendor messages. Hello messages are exchanged when the switch and controller first connect. Echo messages (Echo request/reply messages) are used to verify the liveness of the connection and can also measure its latency or bandwidth. Vendor messages, also known as Experimenter messages, allow switches to offer additional vendor-specific functionality.
2.3 SDN Architecture
The SDN architecture details how a network or computing system can be implemented using a combination of software and hardware that facilitates the separation of the SDN control and data planes of the networking stack
(sdx central, 2017). The architecture serves at a high level as the reference points and interfaces to
the SDN controller. It also allows the controller to manage a variety of data plane resources (Open
The SDN architecture as show in fig is divided into Application Layer, Control Layer and
Infrastructure Layer.
2.3.1 Infrastructure Layer (Data Plane)
The network infrastructure at the data plane is made up of networking elements such as switches and routers. These devices forward network traffic based on forwarding rules set by the control plane (Bakhshi, 2017). This plane differs from the traditional data plane in how control logic is implemented: the control logic is located at a remote controller. Communication between the data plane and the controller is established using open and standard interfaces called Southbound Interfaces, such as OpenFlow.
SDN as a technology is well integrated into virtualized networks and data centers; as such, software-based data plane devices are in high demand. Examples of software-based switches that support SDN include Open vSwitch (Pfaff et al., 2009) from the open-source community, XorPlus from Pica8, ofsoftswitch13 (Bonelli, Procissi, Sanvito, & Bifulco, 2017) from Ericsson, Switch Light from Big Switch, Pantou for the OpenWRT wireless environment from Stanford University, OpenFlowClick developed by Yogesh Mundada, and the ZXR10 V6000 vRouter from ZTE.
There are also hardware switches and routers that support SDN. Hardware switches include the Arista 7150 series, BlackDiamond X8, NoviSwitch 1248, RackSwitch G8264, Pica8 3920, and Plexxi Switch. Hardware routers include the Huawei CX600 series, Brocade MLX series, and Cisco ASR 9000 series.
2.3.2 Control Plane
The control plane is a logically centralized plane where network intelligence is located; it has a global view of the networks under its administration and can reconfigure them dynamically (Cox et al., 2017). This plane is composed of the Northbound Interface, the Southbound Interface, and the Network Operating System (SDN Controller). Popular SDN Controllers include Floodlight, Trema, Maestro, ONOS, Pox, Onix, Nox, and Ryu. Controllers are responsible for base network service functionalities such as traffic redirection, link discovery, routing, message filtering, and performance monitoring.
2.3.2.1 Northbound Interface
The Northbound Interface or API is used to push code (custom network applications) developed by programmers for business needs through the control plane to the data plane (Raju, 2018). This interface can be used to customize network control with popular programming languages like Java, Python, or Ruby. This is possible because the Northbound API can expose the network abstraction data model as well as the functionality in the control plane for use by network applications (Tuncer, Charalambides, Tangari, & Pavlou, 2018). Northbound APIs fall into three categories, namely Representational State Transfer (REST) APIs, network (domain-specific) programming languages, and controller-specific (ad-hoc) APIs.
A REST API (Zhou, Li, Luo, & Chou, 2014), (Bakhshi, 2017) follows a client-server architecture in which the server (control plane) is stateless and the client (application plane program) keeps track of its session state. RESTful APIs use existing HTTP methods such as GET, POST, PUT, and DELETE. GET (read) is used to retrieve a resource, for example the status of a switch flow table; PUT (insert) to modify or update a resource; POST (write) to create a new resource such as a flow table entry; and DELETE (remove) to delete a resource.
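A minimal Python sketch of such a REST interaction, assuming a Floodlight-style controller listening on localhost port 8080 and its switch-listing endpoint (both of which vary by controller and release), is:

    import requests

    BASE = "http://127.0.0.1:8080"  # assumed controller address and REST port

    # GET (read): retrieve the switches currently known to the controller.
    resp = requests.get(BASE + "/wm/core/controller/switches/json")
    resp.raise_for_status()
    for switch in resp.json():
        # Field names vary between releases; fall back to the raw record.
        print(switch.get("switchDPID", switch))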
Another Northbound API implementation is programming languages, which are classified into low-level, API-based, and domain-specific programming languages (Trois, Del Fabro, de Bona, & Martinello, 2016). Low-level approaches use Java, C, Python, or shell scripts to directly program network devices through the Control-Data-Plane Interface (CDPI). API-based programming uses the controller API to program network behavior (Agg, Johanyák, & Szilveszter, 2016). Domain-specific programming languages offer higher-level abstractions through the controller APIs. These languages include Frenetic (Foster et al., 2010), Procera (Voellmy, Kim, & Feamster, 2012), Nettle (Voellmy & Hudak, 2011), Pyretic (Reich, Monsanto, Foster, Rexford, & Walker, 2013), and NetKAT (Anderson et al., 2014), among others.
Many SDN Controllers have also defined their own Northbound API customized to suit specific needs. OpenDayLight and Floodlight, for example, use a customized RESTful Northbound API for rule management.
2.3.2.2 Southbound Interface
The Southbound Interface enables communication between the control plane and the data plane, ensuring that end devices have the appropriate configuration and flow entries. OpenFlow (Kreutz et al., 2015) is the dominant Southbound API, supported on a wide variety of networking equipment. Other Southbound APIs are either OpenFlow dependent, such as OVSDB, POF, OpenState, HAL, and OF-Config, or OpenFlow independent, such as NETCONF and OpFlex.
• Open vSwitch Database Management Protocol (OVSDB) (Pfaff & Davie, 2013) is a management protocol used to configure Open vSwitch instances, such as their bridges, ports, and tunnels.
• Protocol Oblivious Forwarding (POF) is designed to support protocol-independent data planes by using a generic flow instruction set (FIS).
• OpenState extends OpenFlow with eXtensible Finite State Machines (XFSM). XFSM enables OpenState to perform several stateful tasks inside forwarding devices without controller involvement.
• Hardware Abstraction Layer (HAL) (Ogrodowczyk et al., 2014) provides OpenFlow support for legacy and non-OpenFlow-capable hardware devices.
• OpenFlow Management and Configuration Protocol (OF-Config) (Čejka & Krejčí, 2016) focuses on the functions necessary to configure an OpenFlow data path in support of the OpenFlow protocol. OF-Config is used to configure ports (i.e., port up or down), set up tunnels such as VXLAN, and manage queues.
2.3.3 Application Plane
The application plane contains applications which have exclusive control of a group of network resources retrieved through the Northbound API of one or more SDN controllers. Different applications can work collaboratively or override each other to achieve specific network goals set by a network administrator. An SDN application can also perform the role of an SDN controller in certain deployments.
Network vendors have taken advantage of the usefulness of the application plane and developed
customized applications for their products. Brocade has developed a Flow Optimizer, Virtual
router and Network advisor (Arora, 2017). Hewlett Packard Enterprise (HPE) has gone a step
further by having SDN App store where network administrators can download applications to
manage their SDN network. Examples of these applications are HPE Network Optimizer, Network Visualizer, TechM Server Load Balancer, Network Protector, TechM Smart Flow Steering, and applications from NEC.
2.4 SDN Controllers
An SDN Controller is an operating system for networks, with the aim of providing abstractions and a common API to developers. It also provides essential services and functionalities such as device discovery, topology information, and distribution of network configuration (Kreutz et al., 2015). SDN Controllers are located in the control plane of the SDN architecture and forward policies from the application plane down to the data plane devices.
There are several SDN controllers that can be installed on either a physical or virtual
server. Examples are; Beacon, Maestro, Trema, NOX, Floodlight, Contrail, Ryu, etc. (Benzekki,
El Fergougui, & Elbelrhiti Elalaoui, 2016), (D. & Gray, 2013). These controllers are classified as
either centralized controller where one control entity manages an entire network or a distributed
controller where the network is partitioned into different areas for management (Blial, Ben
2.4.1 Centralized Controllers
Centralized controllers use a single control plane to manage an entire network. Physically centralized controllers in this category are installed on a single physical server and tasked with managing an entire network. The advantage of a physically centralized controller is simplicity of management, since there is only one controller. Also, this type of controller uses multithreading techniques to improve performance. However, scalability, especially in enterprise networks that span different geographical areas, is not feasible with this approach.
A logically centralized controller uses multiple physical servers for its operation; each controller is assigned a dedicated task on the network but replicates a shared network state through a shared centralized data store (Tadros, Mokhtar, & Rizk, 2018). The architecture of logically centralized control hides the layout of the physical servers; they are presented to the data plane as one entity (control plane) (Blial et al., 2016). Examples of this type of controller are Ryu and Onix.
2.4.2 Distributed Controllers
Distributed controllers function as a distributed control plane to manage a network, however, the
network is divided into multiple domains, each domain is then managed by its own controller
(Chaudhry, Bulut, & Yuksel, 2019), (H. G. Ahmed & Ramalakshmi, 2018). There are two variants of distributed controllers: flat and hierarchical designs. The flat design divides a network into different domains, with each domain assigned its own controller. Controllers using the flat design use an east-west interface to communicate with each other in order to maintain a global view of the network. The hierarchical design employs a two-layer controller concept: first, a domain controller to handle switches and run applications in its local domain, and second, a root controller that maintains the global network view while also managing the domain controllers (Hu, Guo, Baker, & Lan, 2017). Examples of distributed controllers are OpenDayLight, HPE VAN, Onix, HyperFlow, Kandoo, and DISCO.
2.5 Comparison of SDN Controller Features
SDN Controllers have different properties and features which make them more suited to one deployment over another. The controllers are compared on the basis of their architecture, the programming language used to develop them, the supported northbound and southbound interfaces, and the developer or partnership behind them.
Table 1: Feature comparison of selected SDN controllers

Controller | Architecture | Language | Northbound API | Southbound API | Developer
NOX | Physically Centralized | C++ | Ad-hoc | OpenFlow 1.0 | Nicira
Kandoo | Logically Distributed (Hierarchical) | C, C++, Python | ------------- | OpenFlow 1.0 | University of Toronto
2.6 Review of Related Studies
This section gives a review of relevant past studies on comparing SDN controllers. The most common metrics used in these papers to evaluate controllers are throughput, latency, and round-trip time.
The authors examined four OpenFlow controllers in (Shah, Faiz, Farooq, Shafi, & Mehdi, 2013),
namely, Maestro, NOX, Floodlight and Beacon, all of them were opensource controllers. They
were compared on architectural design, packet batching, task batching and multi-core support.
Static switch partitioning and static batching were used in the architectural design of Floodlight,
NOX-MT and Beacon but Maestro used adaptive batching and shared-queue. The evaluation of
performance showed Beacon to have better results in terms of scalability when varying the number
of switches from 16 to 64 switches. NOX-MT had the second-best scalability, followed by Maestro.
The work done in (Khondoker, Zaalouk, Marx, & Bayarou, 2014) evaluated five controllers,
namely, OpenDayLight, POX, Trema, Ryu and Floodlight. The authors used Analytic Hierarchy
Process (AHP) to determine how a controller should be chosen, because selecting a suitable controller is not trivial: different properties of a controller are of importance to different operators. The bases of comparison in this study were support for virtual switching, Transport Layer Security (TLS), REST API, OpenFlow, availability of a Graphical User Interface (GUI), and support for OpenStack networking. The study results showed Ryu had the best priority vector value (0.287); Floodlight and OpenDayLight had priority vector values of 0.275 and 0.268 respectively. Trema and POX had lower priority vector values.
The performance of seven selected controllers: Floodlight, NOX, Beacon, POX, Maestro, Ryu, and Mul was evaluated in (Shalimov, Zuikov, Zimarina, Pashkov, & Smeliansky, 2013). The
performance metrics of latency, throughput and scalability were evaluated using Cbench while
hcprobe was used to conduct reliability and security tests. The average throughput with different
number of hosts, different number of switches and different number of threads showed Beacon
had the maximum throughput. POX was second but its throughput dropped significantly when the
number of switches reached 256, and Ryu had the least throughput. The smallest latency was demonstrated by the Mul and Beacon controllers, while the Python-based controller POX had the largest latency. For controller reliability, all the controllers coped with the test load except Mul and Maestro. Five tests were performed for security measurements, including invalid OpenFlow version, incorrect message length, incorrect OpenFlow message type, malformed port status, and malformed Packet-In messages. Ryu came out with the best results.
Simulation and emulation of SDN was done in (Al-Somaidai, 2014) with four different platforms.
These platforms were; NS-3,Mininet, Trema and EstiNet. The research also discussed different
switch software and tools including a comparison among the different versions of OpenFlow.
Controllers used in this study were Floodlight, OpenDayLight, NOX, Mul, POX, Beacon, and Ryu.
The study observed that OpenDayLight and Floodlight were flexible and had good documentation.
2.7 Chapter Summary
In this chapter, the history of SDN controllers was discussed, starting from the days of Active
Networking through to the development of the OpenFlow standard. The OpenFlow protocol uses a pipeline of flow tables to process matching criteria specified by a controller; any packet that does not meet the criteria (a table-miss) is sent to the controller to determine its fate.
The SDN architecture was also discussed and its three layers were identified as infrastructure (data)
plane, control plane and application plane. Network elements such as routers and switches that
support SDN were listed as data plane devices. The role of control plane was discussed and its
components listed as; Network Operating System (SDN controller), northbound interface and
southbound interface. The application plane hosted a variety of specific network applications developed by vendors and network administrators.
This chapter further surveyed literature on SDN controllers and named some popular controllers such as OpenDayLight, Floodlight, HPE VAN, Maestro, and Beacon, among others. These controllers were classified as either centralized or distributed controllers. A centralized controller had one single physical server to serve the network. Furthermore, a centralized controller could also be logically centralized, where a set of distributed physical servers act as one entity to serve a network. Distributed controllers, further divided into flat and hierarchical architectures, were installed on multiple physical servers with each controller managing its own network domain.
Finally, the research analyzed related works in implementing and comparing SDN controller
performance. In the next chapter, the methods used to conduct this research work are discussed.
CHAPTER THREE
3 RESEARCH METHODOLOGY
3.1 Introduction
This chapter discusses the research methodology used and why the selected method was best suited
for this study. Research methodology describes the systematic steps taken to resolve a research problem, in this case, performance differences among SDN controllers. The methodology can be scientific, interpretive, or, more recently, design science research. Scientific research includes laboratory experiments, field experiments, and simulations.
3.2 Research Approach
SDN is a fairly new technology; therefore, examining its implementation and controller performance would require a huge investment in network equipment that supports OpenFlow. Due to this limitation, this study used emulation to create nodes and connect them to the selected controllers, after which throughput, latency, and scalability data were collected for analysis and discussion. Moreover, the emulation for each performance metric was iterated five times and an average derived.
software, selected controllers, and tools used to measure controller performance are provided
below.
3.3 Selected SDN Controllers
Three SDN controllers were selected for this study: OpenDayLight, Floodlight, and HPE VAN.
3.3.1 OpenDayLight
OpenDayLight (ODL) is a modular open-source controller used to automate and customize networks
of any scale and size. As an open-source project, ODL is driven by the collaborative effort of
network equipment vendors, researchers, and user organizations to continually improve its
features. It is written in Java and thus can be deployed on any operating system (Windows, Linux, etc.) and on commodity hardware that supports Java. Due to this, it is the most commonly deployed SDN controller platform and is integrated into the core of open-source frameworks like OpenStack; the Open Network Automation Platform (ONAP) and the Open Platform for NFV (OPNFV) use it as a service.
ODL has had several versions since releasing the first version, Hydrogen, in February 2014. This study used the Lithium-SR4 release because it is stable and well documented. The
Lithium version as in other versions of ODL is presented as a modular platform allowing modules
to reuse services and interfaces. The core of this modular implementation is Model-Driven Service
Abstraction Layer (MD-SAL). MD-SAL presents underlying network elements and applications
as models and objects, processing their interactions within the SAL. The role played by MD-SAL
in user interaction with network elements makes ODL act as a Model-View-Control platform.
The internal representation is divided into three interconnected parts, namely Model, View, and Control. The data model is expressed in YANG (accessed through remote procedure calls), which describes nodes and how they interact. Views of resources are displayed using the northbound interface, REST API, or RESTCONF. Control is implemented using Java code to handle requests and events.
OpenDayLight Lithium relied on the following technologies before it was successfully set up for use:
i. Maven: a software tool for managing and building Java projects. ODL Lithium uses Maven to build its bundles and resolve dependencies.
ii. Java: a programming language used to develop ODL features and applications.
iii. Open Service Gateway Interface (OSGi): a Java framework used in the back-end of OpenDayLight, allowing bundles and JAR packages to be loaded dynamically and binding modules together at runtime.
iv. Karaf: an OSGi-based runtime used to load the different modules. The controller is started by running the Karaf executable.
The emulation of ODL Lithium required the following features to be installed (a sketch of the installation commands follows the list):
i. DLUX: a web-based user interface for OpenDayLight. This web interface presents an MD-
SAL flow viewer for viewing OpenFlow entries in the network. It also has a network topology
visualizer that displays graphically how network elements are connected. There is also a
toolbox and YANG model to respond to queries while visualizing the YANG tree.
ii. OpenFlow plugin: This provided support for connecting ODL to OpenFlow-enabled network
devices for discovery and control. This feature has support for OpenFlow versions 1.0 and
1.3.
iii. Layer2 Switch: This feature provided layer2 (Ethernet) forwarding in emulated OpenFlow networks.
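A sketch of installing such features from the Karaf console is shown below; the feature names are indicative of Lithium-era releases and should be verified with feature:list, since they differ between ODL versions.

    opendaylight-user@root> feature:install odl-dlux-core
    opendaylight-user@root> feature:install odl-openflowplugin-flow-services-ui
    opendaylight-user@root> feature:install odl-l2switch-switch-ui
    opendaylight-user@root> feature:list -i    # confirm the installed features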
3.3.2 Floodlight
Floodlight is an open-source SDN controller under the Apache license, written in Java, with its foundation built on the Beacon SDN controller. It is supported by Big Switch Networks, and its architectural design is modeled after the Big Network Controller (BNC), a commercial controller from the company. Applications developed for Floodlight by programmers can therefore also run on the commercial BNC.
Floodlight has a modular design with different modules. The modules that were used in this study
were; topology module, Web Graphical User Interface (Web GUI), learning switch, device
management module, Link Discovery. The topology module has a topology Service to compute
network topologies from link information learned through Link Discovery Service. The topology
Service keeps an OpenFlow island which is a group of connected OpenFlow switches managed by
a single instance of Floodlight. The Web GUI allows users to view connected switches, devices, and hosts on the network; flows installed in switch flow tables; controller state information; and the
overall network topology. The Learning Switch module detects new devices by learning about
their MAC addresses. It also identifies input and output switches when the controller detects a new
flow. The Device Management module uses PacketIn requests to detect devices, track the devices,
and specify a destination device for a new flow. It then classifies the devices according to an entity
classifier which by default uses MAC address and VLAN. The Link Discovery service discovers and maintains the status of links in the OpenFlow network.
Floodlight has had a total of six versions since the first stable version (v0.85) was released in 2012; these versions are 0.85, 0.90, 0.91, 1.0, 1.1, and 1.2. The version of Floodlight used in this study was version 1.1 because it was the last stable version. Floodlight v1.1 depended on these technologies: Java Runtime Environment, Apache Ant, Python Development Package, and the Eclipse Integrated Development Environment.
• Java Development Kit (JDK8) was needed since Floodlight is a Java-based platform and had to run on standard JDK tools. The controller engine uses a Java OpenFlow library produced by LoxiGen.
• Apache Ant, a Java library as well as a command-line tool was used to build floodlight files
as targets.
• OpenFlow version 1.3, a southbound interface was used to connect to the OpenFlow network.
• REST API, a northbound interface, was used to query the controller for information to be displayed.
3.3.3 HPE VAN
The HPE Virtual Application Networks (VAN) SDN Controller is a Java-based proprietary controller
marketed by the Hewlett Packard Enterprise Company. This controller is supported and maintained
by Aruba Networks, a subsidiary of HPE. Because of the support and feature improvement
received from Aruba Networks, this controller is often referred to as the Aruba SDN controller.
The controller was first released in 2012 followed by an SDN Developer Kit (SDK) and SDN App
Store in 2013. The Developer Kit equips developers with the necessary tools for creating, testing,
and validating SDN applications. By using this toolkit, developers can leverage HP’s SDN
infrastructure with its full complement of support services. HP certifies applications developed
with SDK and deploys them onto the SDN App store. HP SDN App Store can be used by customers
for searching, purchasing, and directly downloading SDN applications onto their deployment of
The controller has core network service applications installed as modules. This study used the
following modules; OpenFlow Link Discovery, OpenFlow Node Discovery, Topology Manager,
Topology Viewer, Client Mapper Service, and Web GUI. OpenFlow Link Discovery implements
" com.hp.sdn.supplier.LinkSuppliersBroker " interface while using Link Supplier Service and Link
45
Service APIs for creating and maintaining links information for data paths registered by the
controller. It also pushes flow-mods to get discovery packets, injecting discovery packets to ports
on data paths, and discovers links by listening to PACKET_IN messages. OpenFlow Node
Discovery creates and maintains node information for data paths, it pushes flow-mods to devices
to copy ARP or DHCP packets sent to the controller. Topology Manager and Topology Viewer are used for collecting data on and displaying the OpenFlow network setup. The Client Mapper Service
contains information on the network client’s host MAC addresses, IP address, and connected data
path and port. It also contains relevant information about device location on the network. The GUI
uses the REST API to fetch data from the network and passes it on in a readable form displayed
by the GUI. The Web GUI displays the topology of discovered switches and hosts, data flow details,
alerts, and logs. It also permits adding, enabling, and removing applications.
The last stable version 2.8.8 of the HPE VAN controller was used in this study. The controller
depended on the following technologies; Java, OSGi (Equinox framework and Virgo stack ), and
Apache Cassandra.
3.4 Emulation Environment
This research required an OpenFlow network to be set up; the first option available was to purchase physical OpenFlow-supported switches and servers. The second option, which was cost-effective, was to purchase a physical server with good specifications (hard disk, RAM, processor, etc.) and virtualize the entire OpenFlow network setup. A hypervisor (VMware ESXi 6.7) was installed on a physical server (Dell PowerEdge R730; CPU: 6 x Intel(R) Xeon(R) E5-2620 @ 2.0GHz; RAM: 48GB; HDD: 2TB; NIC: 4 ports). After the hypervisor was installed, four
Linux (Ubuntu 18) virtual machines were created to host the three SDN controllers and an
emulator (Mininet) to generate OpenFlow traffic. Details of the technologies are found below.
3.4.1 VMware ESXi 6.7
VMware ESXi is a type-1 and enterprise-class hypervisor product from VMware for deploying
and managing virtual computers. As a type-1 hypervisor, it is installed directly onto a hardware
server (bare metal) and has its own kernel and other vital OS components.
The hypervisor offered these functionalities relevant to the study; Virtual machines management,
storage management, and networking services. Virtual machine management allowed a new virtual
machine to be created, given a unique name, storage location, memory allocation, network
adapters, and USB controllers. Storage management displays the storage devices attached to the hypervisor, their capacities, and the option to add more storage. It also displays the datastores and
provides a mechanism to browse through the files that make up a virtual machine such as the vmdk
file (which can be copied to back up the VM elsewhere). The networking service was used to determine how the Network Interface Cards (NICs) of the physical server were to be assigned. Three out of the four network cards were used. The first card was used as a management interface for administering the hypervisor; the other two cards were designated as public and private interfaces. Each VM was given both a public and a private interface: public interfaces connected the VMs to the internet to download necessary packages for the study, while private interfaces acted as a Local Area Network (LAN) linking the VMs to one another.
The ESXi has several versions but version 6.7 was used in this study because it offered an easy to
use GUI presented in a web browser. Previous versions were complex to manage since a vSphere
Client was required to be installed on a dedicated management computer. But the version used in
this study offered flexibility since the hypervisor could be reached from any computer that has a
web browser.
3.4.2 Ubuntu
Ubuntu is a free and open-source Debian Linux distribution. There are three officially released
versions; Server, Desktop, and Core. The Desktop version of Ubuntu 18.04.2 Long Term Support
(LTS) was used since it provided a graphical interface making configuration of network interface
easy. Also, the SDN controllers used web-based GUIs for management, so it was appropriate to use the Desktop version.
3.4.3 Open vSwitch
Open vSwitch (OVS) is an enterprise-level open-source multilayer virtual switch platform. What makes Open vSwitch suitable as a software switch for research and enterprise environments is its
support for a wide range of features like Standard 802.1Q VLAN, Spanning Tree Protocol (STP),
NetFlow, sFlow(R), QoS (Quality of Service) configuration, and OpenFlow 1.0 and above. The
version of Open vSwitch used was version 2.9.5 and was chosen for this research because of its
support for OpenFlow and its ease of integration with Mininet. The switch was used to connect nodes in the emulated networks; a sketch of typical OVS configuration commands is shown below.
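For illustration, a sketch of typical OVS commands for attaching a bridge to a remote controller and inspecting its flows follows; Mininet performs the equivalent configuration automatically, and the controller address mirrors the one used in this study.

    sudo ovs-vsctl add-br br0                                  # create an OVS bridge
    sudo ovs-vsctl set bridge br0 protocols=OpenFlow13         # restrict to OpenFlow 1.3
    sudo ovs-vsctl set-controller br0 tcp:10.77.66.61:6633     # point at the remote controller
    sudo ovs-vsctl show                                        # confirm the controller target
    sudo ovs-ofctl -O OpenFlow13 dump-flows br0                # inspect installed flow entries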
3.4.4 Mininet
Mininet is a software package installed on an operating system such as Linux (Ubuntu) to provide
a development environment and virtual testbed for network experimentation. It can generate
networks that run real code with real Linux kernel, networking stack, and standard Linux network
applications. Mininet was selected as the method to conduct this research because it enables SDN designs to be rapidly prototyped and complex network topologies to be tested without a physical network. A Mininet network consists of hosts that can run any Linux command or application; switches that are OpenFlow-enabled (the default being Open vSwitch); and links that serve as ports and interfaces to connect nodes to the switches. The version of Mininet used was 2.3.0d6, which offered improved support for recent OpenFlow versions.
3.5 Network Topologies
The performance of the three controllers was measured by creating different network topologies using Mininet and connecting those networks to each controller. The topologies used to measure each controller's performance were single, linear, and tree topologies, each described below.
3.5.1 Single Topology
A single topology in Mininet uses one OpenFlow-enabled switch and a configurable number of hosts. These hosts are connected to the switch, and in this study the switch has a connection to a remote controller. The number of hosts on the switch is doubled from the previous value until the hosts reach 128. The performance metrics of each controller are evaluated as the number of hosts increases.
The command used to generate a single topology with 4 hosts is "sudo mn --topo single,4 --mac --controller=remote,ip=10.77.66.61,port=6633 --switch ovsk,protocols=OpenFlow13". The command uses sudo mn to generate a network; --topo sets the network topology, which is single with 4 hosts; --mac gives the hosts simple MAC addresses; --controller points the switch to the SDN controller, located at a "remote" destination with an IP of 10.77.66.61 and connection port 6633; and the type of switch is Open vSwitch (ovsk), which uses the OpenFlow 1.3 protocol.
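The same network can also be scripted with Mininet's Python API, as in the minimal sketch below; the controller IP and port mirror the values above, and autoSetMacs plays the role of the --mac flag.

    from mininet.net import Mininet
    from mininet.node import RemoteController, OVSSwitch
    from mininet.topo import SingleSwitchTopo

    topo = SingleSwitchTopo(k=4)  # one OpenFlow switch, four hosts
    net = Mininet(topo=topo,
                  switch=OVSSwitch,
                  controller=lambda name: RemoteController(
                      name, ip="10.77.66.61", port=6633),
                  autoSetMacs=True)
    net.start()
    net.pingAll()   # verify connectivity through the remote controller
    net.stop()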
3.5.2 Linear Topology
A linear topology has switches and hosts connected in a line, with each host connected to a particular switch. The switches are also connected to one another, and a connection is established to a remote controller. In this topology the number of hosts per switch is the same (one host per switch), but the switches are instantiated from two switches and doubled until they reach 64 switches. Performance metrics of the generated traffic for each controller were collected for analysis.
The command used to generate a linear topology with two switches and a host per switch was "sudo mn --topo linear,2 --mac --controller=remote,ip=10.77.66.61,port=6633 --switch ovsk,protocols=OpenFlow13".
3.5.3 Tree Topology
A tree topology arranges OpenFlow-enabled switches and hosts in the form of a tree structure. It uses multiple branches, and each branch has a connection of multiple switches and hosts. The tree topology command syntax defines a depth and a fanout: depth represents the number of switch levels, while fanout is the number of output ports available for connecting hosts or switches. For example, a depth of 2 with a fanout of 3 yields 4 switches and 9 hosts.
The command used to generate a tree topology in Mininet for a depth of 1 and fanout of 1 is "sudo mn --topo tree,depth=1,fanout=1 --mac --controller=remote,ip=10.77.66.61,port=6633 --switch ovsk,protocols=OpenFlow13". The performance metrics of each controller for a depth of 1 with fanouts of 1, 2, and 3, as well as a depth of 2 with fanouts of 1, 2, and 3, were taken for analysis.
3.6 Performance Metrics
The performance of an SDN controller is the overall quality of service it can provide for networks under its administration. There are many different metrics used to measure a controller's performance. The selected metrics in this study were topology discovery, throughput, latency, and round-trip time.
3.6.1 Topology Discovery
Topology discovery is the ability of the controllers to detect and determine the topology of the
networks generated by Mininet. Each controller should be able to determine the topology type (single, linear, or tree), the connected switches, and the nodes with their respective IP and MAC addresses.
3.6.2 Throughput
Throughput is often defined as the number of flow setup requests a controller can handle; in this study it was measured as the rate of data transfer between hosts managed by the controller. Two nodes on the network were selected to act as a TCP server and a TCP client. Data was sent from the client, running the Iperf command with port 5001, through the controller-managed network to a server also running the Iperf command on the same port. This was repeated for durations of 10, 20, 30, 40, and 50 seconds. The average throughput was calculated as the transferred data over the time taken for delivery.
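Written out, the average throughput computation described above is

    \text{Throughput (Gbit/s)} = \frac{8 \times B}{t \times 10^{9}}

where B is the number of bytes transferred, t is the transfer time in seconds, and the factor 8 converts bytes to bits.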
3.6.3 Round-Trip Time
Round-Trip Time (RTT) in this study is the total time taken for a packet to be sent from a host through the controller-managed network, processed, and passed to the destination host, plus the reply from the destination host back to the sending host. Any delay indicates the time taken by the controller to locate where each host sits in the network and appropriately update the OpenFlow switch
flow table. It was expected that the first packet would record the highest RTT. Recordings in
milliseconds were taken for the minimum (min), average (avg), and maximum (max) Round-Trip Time of each controller across the topologies, with varying switch and node numbers.
3.7 Tools for Performance Metrics
There are several tools for evaluating SDN controller performance; noteworthy among them are CBench, Hcprobe, OFCBenchmark, and OFCProbe. They were all tried but failed to meet the study's expectations. In previous studies of SDN controller performance, the majority used Controller Benchmarker (CBench) to generate packet-in messages for new flows. CBench emulates its own switches, connects them to a controller, fakes packet-in messages, and observes how flow-mods are sent back to the switches. It was initially used in this study but was replaced because it could not fulfill all the requirements of the study, though it took significant time off the project.
The performance metrics of throughput and round-trip time were measured using standard
networking tools. These were Iperf and Ping command. They were selected because of their ease
of use and reliability in giving accurate results as opposed to the aforementioned benchmarkers.
3.7.1 Iperf
Iperf is available for Windows, Linux, etc. It performs traffic performance evaluation for both TCP
and UDP and works in unicast and multicast transmission. It generates reports for various
statistical measurements like datagram loss, throughput, and latency and these reports can be
directed to an output file for analysis. Iperf operates as a command-line performance tool but a
GUI interface version developed with java called Jperf is also available.
The Iperf3 package was installed on the Mininet virtual machine and exposed to the emulated hosts to be used for measurement. The Iperf command used for the client with IP address 10.0.0.1 was "iperf -c 10.0.0.17 -p 5001 -t 20", where -c denotes client, the IP address 10.0.0.17 refers to the server the client should connect to, -p is the port used for the connection (5001), and -t is the duration of the transfer in seconds, which was 20 seconds. The Iperf command at the server side was "iperf -s -i 1", where -s denoted server and -i was the reporting interval, which was 1 second. A sketch of scripting this measurement from Mininet is shown below.
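Continuing the Mininet Python sketch from section 3.5.1 (and assuming its net object), the same measurement can be scripted as follows, with h1 as the Iperf server and h2 as the client:

    h1, h2 = net.get("h1", "h2")
    h1.cmd("iperf -s -i 1 > /tmp/iperf_server.log &")        # start the TCP server
    result = h2.cmd("iperf -c %s -p 5001 -t 20" % h1.IP())   # 20-second transfer
    print(result)          # includes the transferred volume and average bandwidth
    h1.cmd("kill %iperf")  # stop the background server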
3.7.2 Ping
Ping is a network utility software used to test the reachability of a host. It measures the round-trip
time of packets from a sending host to a destination host. A host sends echo request packets using
The Internet Control Message Protocol (ICMP) to target host on the network and waits for an echo
reply. Ping uses this process to report packet loss and errors then summarize these reports as round-
trip time with the minimum, maximum, average, and standard deviation of the average. The ping
command used was "h1 ping -c 10 h17", where h1 denoted host 1, -c denoted the number of ICMP packets to be sent (10), and h17 was the destination host. The reply from h17 to h1 contained the relevant round-trip time information, which was recorded for five different iterations and an average found for analysis; a sketch of extracting these values is shown below.
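A minimal Python sketch for extracting the summary values from ping's output, so that the five iterations can be averaged, follows; the sample line mirrors the summary format printed by Linux ping.

    import re

    def parse_rtt(ping_output: str):
        # Matches the summary line: "rtt min/avg/max/mdev = a/b/c/d ms"
        m = re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", ping_output)
        if m is None:
            return None
        rtt_min, rtt_avg, rtt_max, rtt_mdev = map(float, m.groups())
        return {"min": rtt_min, "avg": rtt_avg, "max": rtt_max, "mdev": rtt_mdev}

    sample = "rtt min/avg/max/mdev = 0.074/0.182/0.741/0.120 ms"
    print(parse_rtt(sample))  # {'min': 0.074, 'avg': 0.182, 'max': 0.741, 'mdev': 0.12}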
3.8 Chapter Summary
In this chapter, the scientific research approach adopted to achieve the aim of this study was described. Further, emulation methods were used to set up the necessary environment to test and collect data for analysis. The architectural designs of the three SDN controllers, namely OpenDayLight, Floodlight, and HPE VAN, were discussed. The modules of each controller used in the study were also identified and discussed. Finally, the supporting technologies that make these controllers work were identified.
The chapter gave an overview of the research test environment and its components, such as VMware ESXi 6.7, the hypervisor used to host the virtual machines; Ubuntu 18.04.2, which was used to download and install the SDN controller packages; Open vSwitch, which acted as an OpenFlow-enabled switch; and Mininet, used to generate the OpenFlow networks. The emulation scenarios were explained: a single topology that used only one OpenFlow-enabled switch with several hosts, a linear topology that had several OpenFlow-enabled switches with only one host attached to each, and a tree topology that arranged switches in a hierarchical order using depth and fanout.
The performance metrics used to measure how the controllers compare to each other were outlined
and explained. These were the ability of a controller to detect and determine the topology of a
network presented to it by Mininet for managing. A good controller should determine if the
topology is single, linear, or tree and the number of switches and nodes connected to it. The
throughput to determine the amount of data transfer from one host to another host managed by a
controller was explained. Also, round-trip time determined the time it took for packets to travel from one host to another host and back. Finally, the measuring tools, Iperf and Ping, used to record these performance metrics were described.
CHAPTER FOUR
4.1 Introduction
This chapter presents a brief overview of SDN deployment approaches and a justification for the chosen deployment approach and selected controllers. It also discusses the testbed needed for deployment and the installation procedure for each component of the testbed. The implementation steps of the three selected SDN controllers are outlined. There is also an emulation of three network topologies and a demonstration of how each controller handles them. Generated traffic between hosts for each topology is demonstrated, and the time taken for hosts to respond to each other is recorded.
4.2 SDN Deployment Approaches
SDN, as a new technology that segregates the control and data planes of networking devices, has
implementation challenges. Foremost of this is the acquisition of devices that support SDN, which
are expensive to procure. A roadmap to deploying SDN has been simplified into three different
approaches, namely, switch-based networks, overlay networks, and hybrid networks that combine
the first two. Switch-based networks are the original deployment solution of SDN by using only
SDN-compliant networking devices. With this deployment, an SDN controller installs flow entries directly into physical SDN switches. Overlay networks, in contrast, use software-based virtual switches running on top of existing infrastructure, and these virtual switches are used to create SDN networks. Hybrid deployment combines both the switch-based and overlay network approaches.
The project was deployed in an overlay network. It used an existing network infrastructure
consisting of a traditional switch, router, and server. It was the approach chosen because physical
OpenFlow devices were expensive. This approach enabled SDN testing through the use of virtual switches and virtual machines.
4.3 Justification of Selected Controllers
The selected controllers (Floodlight, OpenDayLight, and HPE VAN) were chosen over other controllers for the following reasons:
• The controllers had to be developed using the same programming language. All three
controllers were developed with Java. The choice of Java was because it is a platform-
independent language; as such, SDN Controllers developed with it can be deployed on any
operating system. Furthermore, Java could be easily learned to compile different modules of
the controllers.
• The performance comparison had to be done for both centralized and distributed controllers.
Floodlight is centralized while OpenDayLight and HPE VAN are distributed controllers.
Floodlight was chosen because it was the only centralized controller developed with Java among the candidates. OpenDayLight was chosen because it is the most widely used distributed open-source controller. HPE VAN was chosen because it is a proprietary distributed controller.
• The features and performance comparison had to be conducted for open-source and proprietary
controllers. HPE VAN is a proprietary controller while Floodlight and OpenDayLight are
open-source controllers.
4.4 Testbed Setup
The testbed setup had a single physical server (Dell PowerEdge R730) with the following specification:
• CPU: 6 x Intel(R) Xeon(R) E5-2620 @ 2.0GHz
• RAM: 48GB
• HDD: 2TB
• NIC: 4 Ports
Figure 8: Test Server Mounted in Rack
The testbed also had the Virtual Machines (VMs) listed below, created on top of the hypervisor to host the three SDN controllers and Mininet:
• OpenDayLight – Ubuntu 18
• Floodlight – Ubuntu 18
• HPE VAN – Ubuntu 18 - 8GB RAM, 16GB HDD, 2 CORE CPU, IP: 10.77.66.63
• Mininet – Ubuntu 18
4.4.1 VMware ESXi 6.7
An account was created at https://my.vmware.com for a 60-day trial version. The hypervisor image (iso) was downloaded and a bootable USB drive created from it using PowerISO. The operating system was then installed on the physical server and given an IP. The management interface was used to administer the hypervisor through a web browser.
Figure 9: Virtual Servers
4.4.2 Ubuntu 18.04.2
4.4.3 Mininet
Figure 13: Mininet Network
4.5 Controller Implementation
This section demonstrates how the controllers were downloaded, installed, and operated.
4.5.1 OpenDayLight
The ODL Lithium-SR4 distribution was downloaded from "https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.3.4-Lithium-SR4/distribution-karaf-0.3.4-Lithium-SR4.zip".
It was installed and started with Karaf as shown in Figure 14, and the OpenDayLight features necessary for the study (DLUX, the OpenFlow plugin, and Layer2 Switch) were then installed.
Figure 14: OpenDayLight Installation
4.5.2 Floodlight
The Floodlight v1.1 source was obtained from its official repository; the submodules were updated and Apache Ant was used to build the modules as shown in Figure 16 below.
4.5.3 HPE VAN
A virtual machine with a minimal installation containing the HPE VAN controller was downloaded pre-installed; as such, only the network interface had to be configured, as shown in Figure 17 below.
4.6 Emulated Network Topologies
This section demonstrates how Mininet created networks for the controllers to manage. The topologies generated were single, linear, and tree.
4.6.1 Single Topology
A single topology has one switch with a varying number of hosts. A sample test network with 20 hosts is shown below.
Figure 18: Mininet Single Topology
Figure 19: Single Topology Displayed by OpenDayLight
4.6.2 Linear Topology
A linear topology has two or more switches with one host each. A test network in Mininet is shown below.
Figure 20: Mininet Linear Topology
4.6.3 Tree Topology
A tree topology uses depth and fanout to arrange networks by hierarchy. Below is a network in Mininet with a depth of 2 and a fanout of 2.
4.7 Chapter Summary
This chapter has examined and justified the overlay deployment approach to SDN. The rationale
of choosing the selected controllers was explained as well as the implementation steps needed to
make them work. Test networks were emulated in Mininet and connected to the implemented
controllers to manage.
CHAPTER FIVE
RESEARCH EVALUATION
5 Introduction
This chapter presents the results of the experimental evaluation; the results are discussed based on the performance metrics outlined and discussed in Chapter 3. These metrics were topology discovery, round-trip time, and throughput.
5.1 Topology Discovery
The controllers are compared on their ability to provide a near real-time and accurate display of
the network topology in a process called Topology Discovery. Accurately detecting and displaying
the network topology plays an essential role in other internal functionalities of the controller. It is
used for host tracking, network configuration, planning of routes, attack detection, among others.
The figures below show how the three controllers display and handle different networks.
Figure 25: ODL with 64hosts and 1 switch
Figure 27: Floodlight with 8host and 1 switch
Figure 29: Floodlight with 128host and 1 switch
Figure 31: OpenDayLight with Linear topology of 8 switches and 8 hosts
Figure 32: Floodlight with the Linear topology of 8 switches and 8 hosts
Figure 34: OpenDayLight with tree topology of the depth of 2 and fanout of 2
Figure 35: Floodlight with tree topology of the depth of 2 and fanout of 2
Figure 36: HPE VAN with tree topology of the depth of 2 and fanout of 2
The single topology had a minimum of 8 hosts connected, an average of 64 hosts, and a maximum of 128 hosts. The linear topologies had a minimum of 2 switches and a maximum of 32 switches, each with an end host connected. The tree topology had a minimum depth of 1 and a maximum of 2. The three controllers identified the different network topologies and displayed them as shown in the figures above. Each controller had a unique topology GUI that provided:
• a view of discovered switches and end nodes
• the ports discovered on a switch
• the paths between end nodes
• end-node MAC or IP addresses
• a view of active flow rules, with tools for adding, editing, and deleting flows
The controllers use the same OpenFlow Discovery Protocol (OFDP) to discover the different types
of topologies. OpenDayLight provided all topology features and could detect new networks
generated from Mininet without restarting its modules. OpenDayLight display for single topology
is shown in figures 24, 25, and 26, while linear and tree topologies are shown in figure 31 and figure
34 respectively. Floodlight detected and displayed all topology features but had to be restarted and
its modules reloaded to clear previous networks that are cached. Floodlight display for single
topology is shown in figures 27, 28, and 29, while linear and tree topologies are shown in figure 32 and figure 35 respectively. HPE VAN display for single topology is shown in figure 30, while linear and tree topologies are shown in figure 33 and figure 36 respectively. HPE VAN only showed
the number of switches and end nodes without a visual diagram of how each host is connected in
the network. It did not provide the MAC or IP addresses of end hosts or paths between the hosts.
This was because the license required for those topology features had expired. OpenDayLight has a better display and labeling of switches and hosts, using conventional networking icons and colors; Floodlight displays both switches and hosts in a monotone color, so a technical eye is needed to tell them apart.
5.2 Round-Trip Time
The round-trip time for packets sent by one host to another in a Mininet-generated network managed by each controller is compared here. The number of packets sent by hosts was set to 10 packets, and the minimum, average, and maximum round-trip times of these packets for each controller and network are presented below.
Figure 37: Minimum Round-Trip Time for Single Topology
From figure 37, derived from table 2 (the minimum RTT for single topology), it is observed that Floodlight has the lowest RTT. The RTT decreases with an increasing number of hosts but starts to increase after the number of hosts reaches a threshold of 128. HPE VAN has the second-best minimum RTT, which increases steadily for up to 32 hosts but declines as the number of hosts grows beyond 32. OpenDayLight has the worst minimum RTT, which peaks at 16 hosts and steadily declines until 64 hosts, after which the RTT remains constant beyond 128 hosts. The average of the minimum RTT across the accumulated host counts puts Floodlight first with 0.08428ms, followed by HPE VAN and OpenDayLight.
Table 3: Average Round-Trip Time for Single Topology
From figure 38, obtained from table 3 (the average RTT for single topology), Floodlight has the worst RTT, which increases in correspondence with an increasing number of hosts. HPE VAN and OpenDayLight have similar performance; the RTT remains constant regardless of the number of hosts added to the network, except for OpenDayLight, which had a slight increase in RTT at 128 hosts. The average RTT across the accumulated host counts shows HPE VAN is first with echo responses within 0.1678ms, OpenDayLight second with 0.182ms, and Floodlight last with 1.02168ms.
This outcome of the RTT suggests that HPE VAN and OpenDayLight should be the preferred controllers for single topology networks.
From figure 39, obtained from table 4 (the maximum RTT for single topology), Floodlight has the highest maximum RTT, which increases with the number of hosts. HPE VAN and OpenDayLight have similar RTT values that remain steady despite an increase in the number of hosts. Floodlight has an average maximum RTT of 8.88464ms for an accumulated number of hosts, while HPE VAN and OpenDayLight have values of 0.79052ms and 0.7416ms respectively. A higher RTT value indicates that the controller has undesirable performance characteristics; as such, HPE VAN and OpenDayLight are preferable.
From figure 40, obtained from table 5 (the minimum RTT for linear topology), it is observed that the RTT for all the controllers increases with the number of hosts. However, HPE VAN and Floodlight have similar performance, with RTT values of 0.11315ms and 0.11455ms respectively.
From figure 41, obtained from table 6 (the average RTT for linear topology), the RTT increases as the number of hosts increases for all controllers. Floodlight has the highest average RTT value, 2.40545ms for an accumulated number of hosts; OpenDayLight is second with a value of 0.41975ms, while HPE VAN has the lowest value of 0.38235ms. Since a higher RTT value indicates poor performance, this implies that Floodlight is not suitable for a linear topology, while HPE VAN and OpenDayLight are.
From figure 42, obtained from table 7 (the maximum RTT for linear topology), the RTT in all the controllers increases directly with the number of hosts. Floodlight has the highest RTT of 22.36545ms, while HPE VAN and OpenDayLight have lower RTT values of 2.6418ms and
2.53535ms respectively. Therefore, HPE VAN and OpenDayLight have better performance than
Floodlight.
Table 10: Maximum Round-Trip Time for Tree Topology with Depth of 1
Figure 43: Average Round-Trip Time for Tree Topology with Depth of 1
From figure 43, obtained from table 9 (the average round-trip time for tree topology with a depth of 1), the RTT for HPE VAN and OpenDayLight remains constant for fanouts of 2 and 3; they thus have the better performance, with values of 0.5368ms and 0.1524ms respectively. Floodlight has the highest RTT value of 9.4292ms and thus has poor performance. The maximum RTT values in table 10 and the minimum RTT values in table 8 follow a similar pattern, where OpenDayLight and HPE VAN have the better values.
Table 11: Minimum Round-Trip Time for Tree Topology with Depth of 2
Table 12: Average Round-Trip Time for Tree Topology with Depth of 2
Table 13: Maximum Round-Trip Time for Tree Topology with Depth of 2
Figure 44: Average Round-Trip Time for Tree Topology with Depth of 2
From figure 44, obtained from table 12 (the average round-trip time for tree topology with a depth of 2), HPE VAN has the lowest RTT value of 0.2569ms, and OpenDayLight is second with a value of 0.316ms. Floodlight has the highest RTT value of 1.3414ms. The same pattern is observed in the minimum values in table 11 and the maximum values in table 13, where HPE VAN and OpenDayLight outperform Floodlight.
5.3 Throughput
The throughput of each controller was measured by transferring data from one host to another within 10 seconds. The Iperf measurement tool was used to take five different recordings for each scenario of connected hosts (from 8 hosts to 128 hosts), and an average in Gbits/sec was calculated and recorded in table 14 below.
Table 14: Throughput in Single Topology
From figure 45, derived from table 14 (the throughput in single topology), Floodlight starts with the highest throughput for both 8 and 16 hosts but steeply declines as the number of hosts increases. OpenDayLight starts as the controller with the second-highest throughput for 8 to 16 hosts and declines gradually as the number of hosts increases. HPE VAN has the lowest throughput for 8 to 16 hosts but peaks at 32 hosts and declines at 64 hosts, after which its throughput remains constant for an increasing number of hosts. The average throughput values put HPE VAN as the best controller with 14.392Gbits/sec, followed by OpenDayLight with 14.285Gbits/sec, while Floodlight has the lowest average throughput.
5.4 Chapter Summary
This chapter has evaluated the performance of the three SDN controllers in terms of topology discovery, round-trip time, and throughput.
It was observed in this chapter that all three SDN controllers use the same OpenFlow Discovery Protocol (OFDP) to discover nodes and links to construct the network topology. They also provide a GUI to display discovered switches and end nodes, the ports discovered on a switch, the paths between end nodes, end-node MAC or IP addresses, and a view of active flow rules with tools for adding, editing, and deleting flows. The difference among the controllers in this category was that HPE VAN had to be licensed to offer some of these features because it is proprietary, while OpenDayLight and Floodlight, being open source, offered them freely.
The round-trip time, as shown in this chapter, was divided into three categories, namely minimum, average, and maximum RTT. The table below summarizes the results.
Table 15: Summary of Single Topology RTT Results
The summary of RTT values in table 15 above shows that OpenDayLight and HPE VAN, with RTT values of 0.33652ms and 0.34809ms respectively, are the better controllers for single topology networks.
Linear topology RTT values from the summarized table 16 show that OpenDayLight, with a value of 1.03425ms, and HPE VAN, with a value of 1.04577ms, are the best controllers for linear topology networks.
Table 17: Summary of Tree Topology RTT Results
HPE VAN, with an RTT value of 0.4367ms, and OpenDayLight, with a value of 0.44198ms, from table 17 are the better controllers for handling a tree topology, while Floodlight, with a much higher value, is the least suitable.
Finally, in this chapter, it was deduced from the data that HPE VAN has the best average throughput value, followed closely by OpenDayLight. Floodlight produced better throughput when the number of hosts was small but decreased significantly as the hosts increased; it therefore came out as the worst controller in the throughput performance category.
CHAPTER SIX
6.1 Introduction
This chapter presents the conclusion to the thesis by highlighting its relevant aspects. A summary
of the study and findings are presented. It also gives an appraisal of how the findings satisfy the
research aims outlined in chapter one. Finally, ideas and suggestions for future research are presented.
6.2 Summary of the Study
In chapter one, the evolution of the internet, which started in the 1960s, was seen to have stagnated in recent times, and a new approach was needed in networking. Traditional networks had complex designs, so identifying and resolving network issues proved a daunting task. Moreover, the protocols used in such networks had seen little improvement over the years and did not support the changing needs of modern applications. Network vendors compounded the problem by locking the inner operation of their devices behind proprietary operating systems and closed APIs. SDN was presented as a solution to these networking issues, whereby the control and data
planes of network devices are separated from each other. The separation of the planes as it was
argued in this chapter would enable network programmability. In this way, automation tools and
development kits would be used to create custom network applications for business needs. Also,
SDN would relieve the burden of network administrators by presenting the entire network
intelligence in a single controller that would be responsible for pushing policies throughout the
network. The justification, significance, scope, and limitations of this project were also listed in
this chapter.
In chapter 2, a literature review was conducted on the history of SDN, the architecture of SDN,
SDN controllers, and research previously conducted in comparing SDN Controllers. In this
chapter, SDN was found to have three stages in its history, namely Active Networking, Control-Data Plane separation, and the OpenFlow Protocol and Network Operating Systems. Active Networking introduced programmable functions into the network. In the Control-Data Plane separation stage, forwarding logic (data plane) was implemented directly in the hardware and linked to the control plane using an open interface called ForCES. Finally, the OpenFlow Protocol and
Network Operating Systems phase of SDN history migrated to the use of an OpenFlow-enabled
switch that has a flow-table. This flow-table can be modified by a Network Operating System
called a controller. The SDN architecture is divided into the application, control, and data planes. The application plane has network-centric programs a network
administrator would use to manage the network under his jurisdiction. The control plane is
regarded as the brain of an SDN that links network devices to the application plane. SDN
controllers are located in this plane. The data plane has network elements that support the
OpenFlow protocol. This chapter also identified and classified the different types of SDN controllers as centralized or distributed: centralized controllers manage a network from one centralized plane, while distributed controllers could manage a network from a distributed control plane. The variation in features of the controllers was compared in terms of
their architecture, programming language, supported interfaces, and industry partners. The chapter
In chapter three, the research methodology which was the scientific approach was explained as
well as the emulation method. The chapter further gave a detailed technical overview of the three
selected SDN controllers and the technologies they depend on for their operations. The emulation
environment that was used as a testbed was explained detailing their specifications. The network
topologies, single, linear, and tree topologies were explained with details of the number of
switches, and hosts that are used in each scenario. Topology discovery, round-trip time, and
throughput were described as the performance metrics used to determine which controller was
preferred for each network topology. Iperf and ping command were the tools used to measure these
performance metrics.
In chapter four, the deployment approaches of SDN were explained and the overlay SDN
deployment approach was chosen for this study since it leveraged on existing hardware. The
chapter gave justification to the selection of the three SDN controllers ( OpenDayLight, Floodlight,
and HPE VAN); they all had to be developed using the same programming language. Furthermore, the comparison had to cover open-source controllers (OpenDayLight and Floodlight) against a proprietary controller (HPE VAN), and a centralized controller (Floodlight) against distributed controllers (OpenDayLight and HPE VAN). The testbed was set up and the relevant software
packages for the controllers downloaded and installed from official websites and repositories.
In chapter five, different network topologies were generated in Mininet and presented to the
controllers. A comparison of how each controller handled and displayed the topologies was done. The results of round-trip time measurements using the ping command from one host to another were compared among the controllers. Finally, the throughput values for the single topology with varying numbers of hosts were compared.
6.3 Summary of Findings
This thesis had the aim of comparing three different SDN controllers given different networking
scenarios. To accomplish that, the thesis pursued several objectives. Firstly, the SDN architecture had to be examined to identify its various layers. This was to be followed by identifying the different categories of SDN controllers and the SDN-enabled network elements in the data plane. Another objective was to find out which interfaces are used by the controllers to communicate with the application and data planes. Finally, HPE VAN, Floodlight, and OpenDayLight were to be installed on different virtual machines. They would then be compared against each other by managing the same network at different times. The performance metrics would be topology display, round-trip time, and throughput.
The study accomplished the research aims outlined above. The SDN architecture, controllers, and
data plane devices were explained in detail in chapter two. The controllers used a REST API as the northbound interface to communicate with the application plane, while OpenFlow was used as the southbound interface for communication with data plane devices. The three controllers were successfully installed in chapter four to handle network traffic. The performance analysis was presented in chapter five.
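As an illustration of the northbound interface, the sketch below queries a Floodlight controller's REST API for the switches it manages. The controller address is an assumption, and the endpoint and field name follow the Floodlight v1.x documentation; the other two controllers expose analogous REST resources at different paths.

    import json
    import urllib.request

    # List the OpenFlow switches known to a Floodlight controller through
    # its REST northbound interface (default port 8080 assumed).
    URL = "http://127.0.0.1:8080/wm/core/controller/switches/json"
    switches = json.loads(urllib.request.urlopen(URL).read())
    for sw in switches:
        # The 'switchDPID' field name follows Floodlight v1.x responses.
        print(sw.get("switchDPID", sw))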
To discover the network topology, the controllers used the same mechanism. They start the
topology discovery process through node discovery using the OpenFlow protocol and link discovery using the OpenFlow Discovery Protocol. The OpenFlow switch sends its features and ports to the controller using Features Request and Reply messages. The controllers then send Link Layer Discovery Protocol (LLDP) packets out of every switch port via Packet-Out messages; the LLDP packets returned to the controller as Packet-In messages reveal the links between switches.
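The following is a minimal sketch of this link-discovery step, written for the Ryu controller (Python) purely as an illustration; none of the three studied controllers is implemented this way, and the TLV contents shown are assumptions.

    # Sketch: emit an LLDP frame on one switch port through an OpenFlow 1.3
    # Packet-Out, as the OpenFlow Discovery Protocol does.
    from ryu.lib.packet import packet, ethernet, lldp
    from ryu.ofproto import ether

    def send_lldp(datapath, port_no, hw_addr):
        ofp, parser = datapath.ofproto, datapath.ofproto_parser
        pkt = packet.Packet()
        pkt.add_protocol(ethernet.ethernet(src=hw_addr,
                                           dst=lldp.LLDP_MAC_NEAREST_BRIDGE,
                                           ethertype=ether.ETH_TYPE_LLDP))
        pkt.add_protocol(lldp.lldp(tlvs=[
            lldp.ChassisID(subtype=lldp.ChassisID.SUB_LOCALLY_ASSIGNED,
                           chassis_id=str(datapath.id).encode()),  # switch DPID
            lldp.PortID(subtype=lldp.PortID.SUB_PORT_COMPONENT,
                        port_id=str(port_no).encode()),            # egress port
            lldp.TTL(ttl=120),
            lldp.End()]))
        pkt.serialize()
        datapath.send_msg(parser.OFPPacketOut(
            datapath=datapath, buffer_id=ofp.OFP_NO_BUFFER,
            in_port=ofp.OFPP_CONTROLLER,
            actions=[parser.OFPActionOutput(port_no)], data=pkt.data))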
To establish communication between hosts, the controllers had to install flows to the switches and
this is where the differences in performance among the controllers emerged. When packets are
sent from one host (h1) to another host (h7), an Address Resolution Protocol (ARP) request is sent
to the OpenFlow switch. The switch provides h1 the Media Access Control (MAC) address of h7
for data transfer. However, when the switch does not know the MAC address, it sends a PACKET-
IN message to the controller. The controllers reply with a PACKET-OUT message to instruct the
switch to flood all of its ports except the port on which the ARP request arrived. The host (h7) responds with its MAC address, which is forwarded to the controller. This ARP reply is sent back to h1, after which the controller installs the corresponding flow entries in the switch.
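The flooding behaviour described above can be illustrated with a minimal Ryu (Python) packet-in handler; this is a sketch of the mechanism, not the actual code of Floodlight, OpenDayLight, or HPE VAN.

    # Sketch: reply to a table-miss PACKET-IN by telling the switch to flood
    # the frame out of every port except the one it arrived on (OpenFlow 1.3).
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class FloodOnMiss(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
        def packet_in_handler(self, ev):
            msg = ev.msg
            dp = msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser
            in_port = msg.match['in_port']
            # OFPP_FLOOD already excludes the ingress port, matching the
            # behaviour described in the text.
            actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
            data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
            dp.send_msg(parser.OFPPacketOut(datapath=dp,
                                            buffer_id=msg.buffer_id,
                                            in_port=in_port, actions=actions,
                                            data=data))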
As can be seen from the round-trip values in table 4 (the maximum RTT of the single topology) in chapter 5, the controllers had to install flow entries for each host in the network into the OpenFlow switch flow table before communication could occur. Floodlight took on average 8.88464 ms to install flows to enable communication between hosts, HPE VAN took 0.79052 ms, and OpenDayLight took 0.7416 ms. Due to this delay in flow entry installation, Floodlight was the worst-performing controller of the three for all the topologies. However, after the initial hurdle of flow entry installation, Floodlight had the best minimum RTT for both the single and linear topologies. HPE VAN and OpenDayLight had the best overall round-trip performance because their flow entries were installed much faster.
In the throughput category in table 14 of chapter 5, Floodlight once again had the worst
performance overall, while OpenDayLight and HPE VAN exhibited better performance. When the number of hosts was less than 64, Floodlight showed better throughput, but its performance worsened as the number of hosts increased. OpenDayLight and HPE VAN performed below Floodlight when the number of hosts was less than 64, but they outperformed Floodlight as the number of hosts grew.
of hosts. It could be argued from the literature review in chapter two that since Floodlight is a
centralized controller, it has a scalability issue, hence the worst performance at scale. Furthermore, the literature review indicated that distributed controllers can be scaled and provide redundancy in the event of hardware or network failure; HPE VAN and OpenDayLight benefited from these advantages of distributed controllers, hence their improved performance. It can be concluded from the results that Floodlight is the worst-performing controller, while HPE VAN and OpenDayLight perform better. In a direct contest between OpenDayLight and HPE VAN, however, there is no winner, since the difference in performance between them is marginal. This stems from the fact that they both use OSGi technology and the underlying principle of their operation is the same, as explained in chapter three; the main difference is that OpenDayLight is an open-source project while HPE VAN
is a proprietary controller.
Finally, the study offers clues as to which scenario would benefit from choosing either a centralized or a distributed controller. An observation made about Floodlight is that it gives the best minimum RTT values, except that it also has the maximum RTT value, which leads to poor overall performance. Nonetheless, if the flow entry installation delay could be reduced, a good network environment in which to use Floodlight is a static network where hosts are rarely added or removed. In this environment, a proactive flow entry mechanism is used to preinstall flows in the OpenFlow switch, thereby taking advantage of Floodlight's ability to respond with the minimum round-trip time. Since HPE VAN and OpenDayLight deliver comparable performance, OpenDayLight, an open-source controller, can be used to reduce SDN investment instead of the proprietary HPE VAN.
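As an illustration of such a proactive mechanism, the sketch below pushes a static flow entry through Floodlight's Static Flow Pusher REST module. The controller address, switch DPID, and port numbers are assumptions, and the URL path differs across Floodlight versions (older releases use /wm/staticflowentrypusher/json).

    import json
    import urllib.request

    # Proactively preinstall a flow: traffic arriving on port 1 of the switch
    # is forwarded straight out of port 7, so no PACKET-IN round trip occurs.
    flow = {
        "switch":   "00:00:00:00:00:00:00:01",  # assumed DPID of the switch
        "name":     "h1-to-h7",                 # arbitrary flow entry name
        "priority": "32768",
        "in_port":  "1",
        "active":   "true",
        "actions":  "output=7",
    }
    req = urllib.request.Request(
        "http://127.0.0.1:8080/wm/staticflowpusher/json",
        data=json.dumps(flow).encode(),
        headers={"Content-Type": "application/json"})
    print(urllib.request.urlopen(req).read().decode())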
6.4 Suggestions for Further Work
In this study, the performance of Floodlight, OpenDayLight, and HPE VAN has been evaluated
with insight into which scenario is best suited for each controller. This study can be expanded by optimizing the flow entry installation process so that packet forwarding could be achieved faster. An algorithm to reduce the flow entry installation time would be a valuable contribution.
The SDN controllers compared in this study were all Java-based controllers; as such, their performance partly derives from that particular language. Controllers can be selected across programming languages from this sample pool: NOX, HyperFlow, Kandoo (C++); POX, ONIX, B4, Ravana, Ryu (Python).
The performance metrics used in this study were topology discovery, throughput, and round-
trip time. Studies could be conducted for different metrics or parameters, with controller reliability being one such metric to be investigated: since the control and data planes are detached, it follows that a controller failure could collapse the network. Finally, controller consistency for distributed controllers has not received much attention, and this metric should be investigated, especially as distributed deployments grow in size.
SDN controller comparison studies largely depend on open-source controllers, except for this study, where a proprietary controller (HPE VAN) was used. Researchers are encouraged
to take advantage of the free trial versions of proprietary controllers to test them against open-
source controllers.