
MASTER THESIS

TITLE: Research on path establishment methods performance in SDN-based networks

MASTER DEGREE: Master of Science in Telecommunication Engineering & Management

AUTHOR: David José Quiroz Martiña

DIRECTOR: Cristina Cervelló-Pastor

DATE: November 16th, 2015



Overview

This project assesses selected path establishment methods adapted to SDN.
Several studies and experiments were performed, including the installation of a
Segment Routing and a Proactive Forwarding test scenario, the observation of
their interoperability with the OpenFlow protocol, their behavior in traffic
handling, their performance under network events, and their effects on the SDN
controller's response time. The main goal is to investigate the influence of path
establishment methods on SDN controller performance, which may or may not
affect the ability to sustain traffic during controlled and unexpected changes in
the network.
CONTENT
INTRODUCTION
CHAPTER 1. SOFTWARE DEFINED NETWORKING AND OPENFLOW
1.1 Software Defined Networking (SDN)
1.1.1 Application layer
1.1.2 Control layer
1.1.3 Infrastructure layer
1.2 OpenFlow
1.2.1 Flow tables
1.2.2 Group tables
1.2.3 Meter tables
1.2.4 OpenFlow channel
1.2.5 OpenFlow switch
1.2.6 OpenFlow versions
CHAPTER 2. PATH ESTABLISHMENT METHODS
2.1 Routing fundamentals
2.1.1 Shortest path routing
2.1.2 Multipath routing
2.1.3 Source routing
2.2 Network performance standards
2.3 Proactive Forwarding (PF)
2.4 Multiprotocol Label Switching (MPLS)
2.4.1 MPLS architecture
2.4.2 MPLS topology
2.5 Segment Routing (SR)
2.5.1 Segment list
2.5.2 Segment types
2.5.3 Path computation algorithms
CHAPTER 3. TEST TOPOLOGY SETUP
3.1 Control layer involved elements
3.1.1 SDN controller selection
3.1.2 Controller's hardware
3.1.3 Controller's software
3.2 Infrastructure layer involved elements
3.2.1 Software selection
3.2.2 Hardware selection
3.3 Proactive Forwarding Setup
3.4 Segment Routing Setup
3.5 Measurement tools
3.5.1 iPerf
3.5.2 Multi-Generator (MGEN)
3.5.3 Wireshark
CHAPTER 4. OPERATION IN ONOS CONTROLLERS
4.1 ONOS architecture
4.1.1 ONOS subsystems
4.2 Intent forwarding subsystem
4.2.1 Intent subsystem
4.2.2 Topology subsystem
4.2.3 Intent forwarding operation
4.3 Segment Routing subsystem
4.3.1 Path computation
4.3.2 Group handling
4.3.3 Segment Routing operation
4.4 OpenFlow pipeline utilization
4.4.1 Intent Forwarding pipeline
4.4.2 Segment Routing pipeline
CHAPTER 5. EXPERIMENTATION AND RESULTS
5.1 Experimentation process
5.1.1 Network performance measurement
5.1.2 Response time measurement in front of network events
5.1.3 Static path installation time
5.1.4 Switch packet forwarding delay measurement
5.1.5 Balanced path load scenario
5.2 Measurement results
5.2.1 Test topology traffic performance
5.2.2 Response to network events
5.2.3 Response to static path installation
5.2.4 Switch packet forwarding delay
5.2.5 Response to network events with balanced path load
5.3 Results observations
5.3.1 Packet switching
5.3.2 OpenFlow messaging and method's operation influence
5.3.3 Jitter comparison
CONCLUSIONS
From the point of view of the test topology
Future work
REFERENCES
ACRONYMS
APPENDIX A: OPENFLOW PIPELINE PROCESSING
APPENDIX B: NETWORK TOPOLOGY SCRIPTS
APPENDIX C: ONOS INTENT FRAMEWORK
APPENDIX D: SR SUBSYSTEM COMPONENTS
I. Segment Routing Application
II. Configuration Manager
III. Segment Routing Driver
OF 1.3 Group Handler
Group Recovery Handler
OF Message Pusher
Driver API
APPENDIX E: SOFTWARE INSTALLATION
I. ONOS Blackbird installation
Oracle Java 8 JDK installation
Apache Karaf 3.0.2 and Maven 3.2.3 installation
Blackbird download and building
II. ONOS SPRING-OPEN installation
CLI installation
III. Mininet installation
IV. CPqD switch installation
APPENDIX F: OPERATION EXAMPLES
I. Controller initialization
OpenFlow messages
Flow tables (using a 2 host and 3 routers topology)
II. Link behavior on Segment Routing
Multipath links
Link failures
III. Label sequencing and forwarding operation
APPENDIX G: STATISTIC PROCEDURE
I. Average and standard deviation
II. Confidence Interval (CI)
APPENDIX H: MEASUREMENT TABLES
I. Steady state conditions
II. Switch packet forwarding
III. Static path installation
IV. Network events
V. Balanced path load
APPENDIX I: JITEL 2015 ARTICLE

TABLE OF FIGURES
Figure 1.1 - Software Defined Networking Architecture
Figure 1.2 - OpenFlow Architecture
Figure 1.3 - OpenFlow Table
Figure 1.4 - OpenFlow version 1.3 flow table
Figure 1.5 - OpenFlow group table
Figure 1.6 - OpenFlow meter table
Figure 1.7 - Components of an OpenFlow switch
Figure 1.8 - OpenFlow pipeline
Figure 2.1 - Proactive forwarding model
Figure 2.2 - MPLS architecture
Figure 2.3 - MPLS model
Figure 3.1 - General test network topology
Figure 3.2 - Proactive Forwarding test topology
Figure 3.3 - Segment Routing test topology
Figure 4.1 - ONOS architecture
Figure 4.2 - ONOS subsystem structure
Figure 4.3 - Intent compilation process within the subsystem
Figure 4.4 - Topology subsystem
Figure 4.5 - Segment Routing subsystem
Figure 4.6 - Proactive forwarding pipeline utilization
Figure 4.7 - Segment Routing pipeline utilization
Figure 5.1 - Traffic generators and Wireshark probes location
Figure 5.2 - Controller's response time frame
Figure 5.3 - Segment Routing processing time frame
Figure 5.4 - Segment Routing path installation time frame
Figure 5.5 - Proactive Forwarding path load topology
Figure 5.6 - Counters summary CLI output
Figure 5.7 - Test results comparison
Figure 5.8 - OpenFlow messaging
Figure 5.9 - Flows allocation during static path configuration
Figure 5.10 - Path establishment after failures
Figure 5.11 - Initial conditions VS Network events
LIST OF TABLES
Table 1.1 - OpenFlow message description
Table 1.2 - OpenFlow versions support description
Table 2.1 - Network performance parameters
Table 3.1 - SDN controller selection criteria
Table 3.2 - ONOS software requirements
Table 3.3 - Infrastructure layer compatibility
Table 5.1 - Test topology traffic performance
Table 5.2 - Network event performance
Table 5.3 - Static path installation response
Table 5.4 - Proactive Forwarding performance with balanced path load

INTRODUCTION
Software Defined Networking (SDN) has gained wide acceptance in the
networking community. As a result, several developments and advances have
been made to introduce new functions and applications that bring more maturity
and specialized operations to the system. Some developments include the
adaptation of layer 3 (L3) protocols, such as IGPs and BGP, to support routing
methods in the SDN environment. Since the principle of SDN is centralized
control of the network, every operation triggered by these protocols introduces
processing load on the controller and a response time that may or may not have
an impact on the network.

Keeping track of performance while adapting a new path method to SDN can
help developers establish a margin that balances the time required to address
the processing needs of the path method against keeping the controller's
response time low.

The main objective of this research is to conduct a conceptual analysis of path
establishment methods that may have a practical application over SDN. The
intent is to observe how the working principle of path establishment methods
can influence the performance of an SDN controller in terms of response time.
For this reason, a comparative study of two path establishment methods was
made: the method under observation (Segment Routing), and a reference
method currently used in SDN (Proactive Forwarding).

One example of an L3 path establishment method being adapted to SDN is
Segment Routing. This method constructs an end-to-end path based on a
sequence of node and port codes called segments. At the first node, a packet
is guided by this sequence of segments. Once the packet passes through each
segment, the related segment ID (SID) is popped from the sequence. This
behavior is maintained until the packet arrives at the edge node that connects
to the destination.

This research conducts a series of measurements that help determine the
approximate response time that Segment Routing can introduce into an SDN
controller during normal operation and during network events. The response
times are compared with the performance of the Proactive Forwarding method.
This mechanism is an L2 path establishment procedure, traditionally used in SDN
to construct paths according to traffic destination addresses, set up by flows
installed on all switches relevant to the path. The response times of this method
are used as a reference to visualize any additional response time introduced by
Segment Routing with respect to the controller's default state, where Proactive
Forwarding is supported.

The second objective of the investigation is to observe the interaction of the path
establishment methods with OpenFlow. Both path methods are established by
the controller using the OpenFlow protocol as their interface between the control
plane and data plane. The actions triggered by both methods are exchanged
between the controller and the network devices through OpenFlow messages.

OpenFlow was the first protocol used in SDN as the communication interface
between the control plane and the forwarding plane found in network devices
such as switches and routers. It is widely used in most current SDN controllers
to address the dynamic control of network resources according to the needs of
today's applications. It uses TCP sessions to carry the instructions given by the
SDN controller and programs the flow tables found in the network devices,
setting up an OpenFlow pipeline where the forwarding process is driven by the
entries found in the flow tables.

This research will show that the actions triggered by each path method
determine how the OpenFlow pipeline is used on the network devices during
packet forwarding.

The remainder of this work is organized as follows:

 Chapter 1 describes the fundamental concepts of SDN and OpenFlow, with
more detailed information on the latter, to give the reader a wider view of the
operation of OpenFlow.

 Chapter 2 describes the fundamental concepts of routing, the types of routing,
the path establishment methods under study, and the parameters used to
observe performance and routing metrics.

 Chapter 3 describes the network scenario, including the test scenarios and
the specifications of the involved elements, for which comparisons between
several options were made.

 Chapter 4 describes the specific operation of Proactive Forwarding and
Segment Routing on the selected SDN controller, whose architecture is also
explained.

 Chapter 5 describes the experimentation process, the results obtained, and
the observations gathered from both methods.

 Finally, the conclusions of the research are given, plus some future work that
may follow from this research.

CHAPTER 1. SOFTWARE DEFINED NETWORKING AND OPENFLOW
Software Defined Networking (SDN) is a solution built on the idea of providing a
centralized view and management of the network infrastructure, capable of
making traffic decisions depending on the needs of applications and services.
In order to make the various network devices react according to traffic behavior,
a set of protocols was developed to standardize an interface between the
centralized element and the network devices. This chapter gives the necessary
fundamentals on the SDN and OpenFlow architectures to understand the
experimentation performed later in this research.

1.1 Software Defined Networking (SDN)

This technology separates the control plane from the data plane of each piece
of network equipment (Fig. 1.1), centralizing the network intelligence that
handles traffic control, switching, routing, and Quality of Service (QoS) to
achieve more efficient control of the network. To comply with Service Level
Agreements (SLAs), each application communicates with the control plane
through API (Application Programming Interface) interconnections so that the
control plane can execute the packet forwarding decisions, traffic handling, and
QoS policies that best suit the service requirements in real time.

Figure 1.1 - Software Defined Networking Architecture (image retrieved from [1])

1.1.1 Application layer

The application layer represents network applications that interact with the
control layer to communicate their requirements for network resources and
exercise specific changes in the network, achieving the desired network behavior
and QoS for a certain type of traffic. The application layer communicates with the
control layer using application interfaces such as a NorthBound Interface (NBI)
or a Java API to carry out synchronous and asynchronous messages. These can
provide wide flexibility, since different types of traffic can be handled separately
by different network applications, using the network infrastructure as a shared
resource. For example, network control traffic such as LLDP (Link Layer
Discovery Protocol) messages is treated by a separate network application from
normal data traffic such as ICMP (Internet Control Message Protocol) messages.
The path establishment methods examined in this document are network
applications functioning at this layer.

1.1.2 Control layer

The control layer, also known as the Network Operating System (NOS) or SDN
controller, is an intermediary between the infrastructure layer and the application
layer. It gives applications an abstraction of the network topology, an inventory of
the features supported by each network device, network statistics, host tracking
information, and packet information. Applications can make decisions
accordingly; the control layer then translates those decisions into instructions
downloaded to the infrastructure layer and executes the appropriate network
settings for the current traffic.

The control layer is conceived to be logically centralized, meaning that the NOS
can operate as a cluster of physical servers that share the workload of the
system.

1.1.3 Infrastructure layer

The infrastructure layer comprises the data plane of the SDN architecture,
represented by forwarding devices such as switches and routers. They forward
packets according to the instructions received from the control layer, such as
dropping a packet or sending it out through an output interface. The forwarding
devices maintain communications with the control layer to keep the system
updated on their network status, such as active ports, adjacent neighbors,
supported features, counter information, and event notifications.

Logically, each forwarding device is viewed as a Datapath and identified by a
Datapath ID (DPID). This information is used by the SDN controller to address
specific instructions, such as forwarding actions, to certain devices, or to identify
each of the network devices in the topology abstraction.

The communication channel between the infrastructure layer and the control
layer is referred to as the SouthBound Interface, which is characterized by
several communication protocols that carry out the interaction between the SDN
controller and the Datapaths. The most common SouthBound protocol is
OpenFlow, accepted by most vendors since it has been in constant development
for SouthBound communications and network control.

1.2 OpenFlow
OpenFlow is a set of protocols and an API used in SDN to address the
separation of the control plane and data plane, providing a standardized protocol
between the SDN controller and the network devices for SouthBound
communication and forwarding instantiation, and providing network
programmability from a centralized view via an extensible API. As the
communication interface between the control layer and the infrastructure layer, it
is widely used in most current SDN controllers to address the dynamic control
of network resources according to the needs of today's traffic. Fig. 1.2 displays
the key components of the OpenFlow model.

Figure 1.2 - OpenFlow Architecture (image based on [2])

From the control plane, the OpenFlow API delivers network programmability, in
which a set of instructions is prepared to perform a forwarding abstraction using
the flow tables of the network elements, making it possible to handle traffic
according to the computation of network applications. The OpenFlow protocol
then establishes a TCP-based communication between the control plane and
the data plane to carry the OpenFlow messages to the network elements.


The OpenFlow protocol is divided into two parts: the wire protocol, and the
configuration and management protocol. The wire protocol is the most
referenced during the experimentation of this project; it is responsible for
establishing control sessions, defining message structures for exchanging flow
modifications, collecting statistics, and defining the fundamental functions of a
switch (ports and tables) [2]. On the other hand, the configuration and
management protocol (OF-Config) allocates physical ports to a controller and
defines high availability and the actions to take on controller connection failure [2].

1.2.1 Flow tables

A flow table is a data structure that resides in the data plane of an OpenFlow
switch and is essential for performing matching operations on packets being
treated by the forwarding device. The common structure of the flow table is
presented in Fig. 1.3.

Figure 1.3 - OpenFlow Table (image based on [2] and [3])

The match fields of the packets are based on the TCP/IP model; each header
field can contain information including physical interfaces, MAC addresses,
VLAN information, IP addressing, and transport port information. Once the
packet has matched one of the match fields, the switch executes an instruction
that performs a list of actions defined for the matched header field. The counters
field contains a set of counters that allows the system to generate statistical
information based on table, flow, and port data.

Within the instructions field, one or more actions can be associated with a flow
entry, some of which are mandatory and others optional to complete the packet
forwarding operation. One of the actions is the Output action, where three types
of ports are used: Physical, Logical, and Reserved. These ports can be used as
ingress, egress, or bidirectional ports.

 The Physical port is a hardware interface of an (OpenFlow-compliant) switch,
usually an Ethernet interface.
 The Logical port is a virtual interface of an OpenFlow switch, and is also an
Ethernet interface.
 The Reserved port is a logical interface set for specific forwarding
operations, and it may be associated with one or more logical and physical ports.
This is the description of each of the reserved ports:
o ALL: forwards packets to all interfaces, including the input interface.
o CONTROLLER: forwards packets to the controller using the
OpenFlow channel.
o LOCAL: forwards packets to the local switch networking stack (i.e.
the loopback address).
o TABLE: forwards packets to the next flow table when needed. This
port is used during the pipeline processing explained later in this
chapter.
o IN_PORT: forwards packets through the input interface.
o NORMAL: processes the packet as a traditional Ethernet switch
(traditional L2, VLAN, and L3 processing).
o FLOOD: forwards packets to all switch interfaces except the input
interface. This port is commonly used with ARP requests and LLDP
messages.

The other actions are applied depending on the circumstances. In most cases, a
list of actions is executed when the Apply-Actions instruction is set in the
Instructions field. A list of actions can also be introduced into an array called the
Action Set, used to maintain a sequence of actions defined for a specific packet
during its processing throughout the OpenFlow pipeline. Here is the complete list
of instructions that can be found in the Instructions field:

 Write-Actions action(s)
 Apply-Actions action(s)
 Clear-Actions
 Meter meter_id
 Write-Metadata
 Goto-Table next-table_id

The structure of the flow table was maintained in later versions of OpenFlow
until version 1.3, where four additional fields were added to the flow table (see
Fig. 1.4). These fields are: Priority, Timeouts, Cookie, and Flags.

Flow Table (OF 1.3): Match Fields | Priority | Counters | Instructions | Timeouts | Cookie | Flags

Figure 1.4 - OpenFlow version 1.3 flow table

 The Priority field identifies which flow entry among those with the same match
fields takes precedence. An example of this situation is when a packet has
different paths to the same destination, in which case the shortest path is
commonly assigned the higher priority (see the lookup sketch after this list).

 The Timeouts field indicates the maximum lifetime or idle time before the
flow entry is expired by the switch. This prevents the switch from keeping large
amounts of flow entries in memory, which would require levels of memory and
processing resources that only robust and expensive hardware can provide.

 The Cookie field is a data value chosen by the controller to filter the flow
entries affected by flow statistics (counters), flow modification, and flow deletion
requests. This field is not used during packet processing, but to filter the flow
entries in the table.

 The Flags field alters how flow entries are managed, triggering specific
OpenFlow messages to the controller that notify the action taken on a particular
flow entry.
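
To make the table lookup concrete, the following minimal Python sketch models match fields, the priority rule, and the table-miss entry described above. It is illustrative only: the dictionary-based packet and the simplified field names are assumptions of the example, not the OpenFlow wire format.

    from dataclasses import dataclass

    @dataclass
    class FlowEntry:
        match: dict        # e.g. {"eth_dst": "00:00:00:00:00:02"}; absent fields are wildcards
        priority: int      # among overlapping matches, the highest priority wins
        instructions: list

    def lookup(flow_table, packet):
        # Keep the entries whose match fields all equal the packet's header fields.
        hits = [e for e in flow_table
                if all(packet.get(k) == v for k, v in e.match.items())]
        return max(hits, key=lambda e: e.priority, default=None)

    table0 = [
        FlowEntry({"eth_dst": "00:00:00:00:00:02"}, 10, ["Apply-Actions: output:2"]),
        FlowEntry({}, 0, ["Apply-Actions: output:CONTROLLER"]),  # table-miss entry
    ]
    pkt = {"in_port": 1, "eth_dst": "00:00:00:00:00:02"}
    print(lookup(table0, pkt).instructions)  # -> ['Apply-Actions: output:2']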

1.2.2 Group tables

Supported since OpenFlow version 1.1, group tables are a means to support
packet replication, that is, to support the forwarding of one packet to different
ports for different purposes. Normally, the FLOOD action in flow tables allows
the forwarding of one packet to multiple ports, but it is limited to emulating the
behavior of only certain protocols such as LLDP. Group tables, on the other
hand, allow more flexible port grouping for different actions, such as
assembling a number of ports into an output port set to support multicasting,
multipath, and fast failover.

As in the flow tables, the group table contains group entries (see Fig. 1.5) to
match the incoming packet to the appropriate forwarding operation. Each group
entry is identified by a group identifier given as an integer value. Several counters
are used to maintain statistics of the packets treated by the group entry, similar
to the counters field found in the flow tables. The forwarding instructions are
handled by Action Buckets, through which a packet can be forwarded through a
single port or multiple ports (one or more Action Buckets within a group entry), or
to another group entry (this action is a key component of one of the path
establishment methods described later in this research).

Figure 1.5 - OpenFlow group table

There are four types of group entries, of which two are required and the rest
optional:

 The Indirect type executes the single defined bucket in a group. This allows
multiple flow or group entries to point to a common group identifier. With this
group type, an OpenFlow switch can emulate L3 behavior such as pointing to a
next hop for IP forwarding.

 The All type executes all buckets present within a group. The packet
is cloned for each bucket and sent to the output interface related to each
bucket. With this group type, an OpenFlow switch can emulate multicast and
broadcast forwarding.

 The Select type executes one bucket of the set of buckets within a group. The
bucket is selected by a switch-computed selection algorithm such as a hash
function or simple round robin, rotating the use of each bucket for incoming
packets to achieve equal load sharing on the output interfaces related to the
buckets. With this group type, an OpenFlow switch can emulate ECMP
operations to load-balance the traffic among its interfaces (see the sketch after
this list).

 The Fast Failover type executes the first live bucket within a group. The
liveness of a bucket depends on the state of the interface and/or group entry
associated with the bucket. When an interface fails for some reason, the bucket
goes down, and the group entry looks for the next live bucket to send the
traffic. With this group type, an OpenFlow switch can emulate fast failover to
backup links without consulting the SDN controller.
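
The bucket selection of the Select and Fast Failover types can be sketched in Python as follows. This is a conceptual sketch under simplified assumptions (a real switch performs this in the datapath and the hashing scheme is vendor-specific): buckets are plain dictionaries and liveness is a boolean flag.

    import zlib

    def select_bucket(buckets, packet):
        # Select type: hash the flow so that all packets of one flow take the same
        # output port, while different flows spread across the live buckets.
        key = "%s>%s" % (packet["ip_src"], packet["ip_dst"])
        live = [b for b in buckets if b["port_up"]]
        return live[zlib.crc32(key.encode()) % len(live)]

    def failover_bucket(buckets):
        # Fast Failover type: the first live bucket wins, with no controller round trip.
        return next((b for b in buckets if b["port_up"]), None)

    buckets = [{"port": 1, "port_up": True}, {"port": 2, "port_up": True}]
    pkt = {"ip_src": "10.0.0.1", "ip_dst": "10.0.0.2"}
    print(select_bucket(buckets, pkt)["port"])   # a stable choice for this flow
    buckets[0]["port_up"] = False                # simulate a failure on port 1
    print(failover_bucket(buckets)["port"])      # -> 2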

1.2.3 Meter tables

Since OpenFlow 1.3, the protocol has introduced the capability of implementing
several QoS operations, either simple ones such as rate limiting or more complex
ones such as DiffServ (Differentiated Services). The meter table consists of a set
of meter entries (see Fig. 1.6) that define meters on a per-flow basis. They
measure the rate of the packets associated with a specified meter, enabling rate
control on the packets. The meters are associated with flow entries: each flow
entry can specify a meter within its instructions field to execute rate control on
incoming packets.

Figure 1.6 - OpenFlow meter table

The meter identifier is a 32-bit integer value that uniquely identifies the meter
entry; the counters are the same as those used in flow and group tables and are
updated for every processed packet; and the meter bands are the set of
instructions that specify the rate of the band and the way to process the packet.

The meter bands are subdivided into the following fields:

 The Band Type field, which defines how packets are processed. There are
two band types available and both are optional:
o Drop: discards the packet based on a rate-limiter band.
o Dscp remark: increases the drop precedence of the DSCP field in the IP
header of the packet. It may be used to define a DiffServ policer.

 The Rate field, which selects the meter band and specifies the lowest rate
at which the band applies.

 The Burst field, which specifies the granularity of the meter band.

 The Counters field, which holds the same type of counters as the main table,
updated when the meter band processes a packet.

 The Type Specific Arguments field, which allocates the optional arguments
of some band types (a token-bucket sketch of the Drop band follows this list).
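
As an illustration of the Drop band, the sketch below implements rate limiting with a token bucket. OpenFlow does not mandate a particular metering algorithm, so this is only one plausible realization, and the unit conversions for the Rate and Burst fields are assumptions of the example.

    import time

    class DropMeterBand:
        def __init__(self, rate_kbps, burst_kbits):
            self.rate = rate_kbps * 1000 / 8.0        # refill speed, bytes per second
            self.capacity = burst_kbits * 1000 / 8.0  # bucket depth, bytes
            self.tokens = self.capacity
            self.last = time.monotonic()

        def allow(self, pkt_len):
            # Refill the bucket for the elapsed time, then try to spend tokens.
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if pkt_len <= self.tokens:
                self.tokens -= pkt_len
                return True   # within the configured rate: forward the packet
            return False      # band rate exceeded: drop the packet

    band = DropMeterBand(rate_kbps=64, burst_kbits=96)
    print(band.allow(1500))   # True while the traffic stays inside the burst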

1.2.4 OpenFlow channel

According to the OpenFlow 1.3 specification [3], the OF channel represents a
logical interconnection between OpenFlow switches and an OpenFlow controller.
By means of this interconnection, the controller configures and manages the
switch for network configuration and packet forwarding, and receives events from
the switches to identify network state and failures. A switch may support one or
multiple OpenFlow channels for shared management between several controllers
(i.e. when a switch is managed by a controller cluster).

The communication is handled through a messaging structure defined by the
OpenFlow protocol for SouthBound interconnection. It is normally encrypted
using TLS, but it can also run directly over TCP without encryption.

1.2.4.1 OpenFlow messages

There are three message types supported by OpenFlow: “controller-to-switch”,
“asynchronous”, and “symmetric” (see Table 1.1 for a detailed description of
each message type).

 Controller-to-switch messages are initiated by the controller, and some of
them may require a response from the switch. They monitor and manage the
behavior of the switch.

 Asynchronous messages are initiated by the switch to send update
messages to the controller, containing information regarding network events and
changes in the switch state.

 Symmetric messages can be initiated by either the controller or the switch,
and are sent without being requested. They are usually used for OpenFlow
device discovery and keep-alive notifications.

Table 1.1 - OpenFlow message description

1.2.5 OpenFlow switch

The OpenFlow switch is a logical switch comprised of one or more flow
tables, group tables, meter tables, and one or more OpenFlow channels to
interact with several controllers (see Fig. 1.7). The switch maintains
communications with the controller, and the controller handles the switch
operation using the OpenFlow protocol. The controller can add, modify, or delete
flow entries in the flow tables in response to traffic behavior, either reactively (in
response to packets) or proactively (in anticipation of packets).

Figure 1.7 - Components of an OpenFlow switch (image retrieved from [3])

As explained previously, each flow table is comprised of flow entries that
incoming packets will match according to the network information they carry, but
the table lookup has to be done in a proper sequence (most packets may need
to be processed by more than one flow entry, located in separate flow tables).
Therefore, OpenFlow establishes a linear sequence of flow tables known as the
OpenFlow pipeline (see Fig. 1.8), in which packets pass through different tables
according to the match fields and the instructions found in the first matching flow
entry.

Figure 1.8 - OpenFlow pipeline (image retrieved from [3])

The flow tables are numbered sequentially starting at “0”, and incoming
packets are first processed by this flow table. Then, depending on the outcome
of the instructions in the matching flow entry of the first flow table, the packet can
be forwarded to the next table (Table 1) and so on. Packets can only go forward
through the pipeline, never backwards, because the table processing the packet
can only send it to a table of higher number than its own. On the other hand, a
packet may be processed by only one table and then be forwarded to an output
port, sent to the controller, or dropped without passing through the rest of the
pipeline (depending on the instructions found in the matched flow entry).

During its processing through the pipeline, a packet carries metadata (a register
value used to pass additional information between tables) and the action set,
which can be modified when the packet is processed by a table using the
Write-Actions or Clear-Actions instructions in the Instructions field. At the final
table, the actions within the action set are executed (detailed diagrams of the
pipeline and matching process can be found in Appendix A). The order of action
execution within the action set is the following (a small sketch follows the list):

1. Copy TTL inwards
2. Pop (apply all tag pops to the packet)
3. Push-MPLS (MPLS tag)
4. Push-PBB (PBB tag)
5. Push-VLAN (VLAN tag)
6. Copy TTL outwards
7. Decrement TTL
8. Set (apply all Set-Field actions)
9. QoS (apply all QoS actions)
10. Group (apply the actions of the related group bucket in the order specified
by the list)
11. Output (forward the packet to the port specified by the Output action)
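
The sketch below shows this rule in miniature: the action set accumulates unordered while the packet traverses the tables, and only at the last table is it executed in the fixed order above. The action names are simplified identifiers of this example, not OpenFlow constants.

    ACTION_ORDER = ["copy_ttl_in", "pop_tags", "push_mpls", "push_pbb", "push_vlan",
                    "copy_ttl_out", "dec_ttl", "set_field", "qos", "group", "output"]

    def execute_action_set(action_set):
        # Regardless of the order in which the tables wrote the actions,
        # execution follows the fixed list above.
        for action in sorted(action_set, key=ACTION_ORDER.index):
            print("executing", action)

    execute_action_set({"output", "dec_ttl", "push_mpls"})
    # executing push_mpls
    # executing dec_ttl
    # executing output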

1.2.6 OpenFlow versions

To summarize the capabilities of the OpenFlow protocol, Table 1.2 illustrates the
key features supported by the current versions of OpenFlow.

Table 1.2 - OpenFlow versions support description

The support of group tables, MPLS actions such as label manipulation, and the
priority field of the flow table make it possible for an OpenFlow switch to
emulate integrated platform behaviors such as an MPLS LSR (necessary for
adapting Segment Routing to SDN, see Chapter 4).

CHAPTER 2. PATH ESTABLISHMENT METHODS


This research uses path establishment methods as a general term to describe
L2 and L3 routing techniques. Although routing traditionally implies L3 routing
protocols, in SDN the term also includes L2 bridging, since both achieve a
common goal: establishing an end-to-end path in a network using path
computation algorithms. The difference is that L3 methods are IP-based routing,
while L2 methods are MAC-based bridging. This chapter introduces some
fundamentals about routing and types of routing, the current standards in
network performance, and the working principles of Proactive Forwarding,
MPLS, and Segment Routing.

2.1 Routing fundamentals

Routing can be defined as “the act of moving information across an internetwork
from a source to a destination” (Cisco Systems Inc., 1998). An ideal routing
process has to address certain goals whose result represents the most efficient
use of the network. Some of these goals include the ability to choose optimal
paths, to compute paths that make efficient use of the bandwidth, and to exhibit
fast convergence after network changes. There are several types of routing that
try to approach these conditions, such as shortest path routing, multipath
routing, and source routing.

2.1.1 Shortest path routing

Shortest path routing defines the best routes according to the cost of the links
that form each path. The cost of each link in the network is determined by
measurement standards called metrics, which usually are latency, link
bandwidth, hop count in the path, and sometimes the operational expenditure of
each link (e.g. the use of rented link platforms such as radio bridges). These
metrics are combined by a cost function that yields the total cost to be assigned
to each link.

With the link costs assigned, shortest path protocols can use different routing
algorithms to compute the best path between a pair of nodes in the network,
picking a random path if there is more than one best path of equal cost between
the pair of nodes. Some of the most referenced routing algorithms are Dijkstra's
and Bellman-Ford's, used by Link-State and Distance-Vector based protocols,
respectively. A sketch of Dijkstra's algorithm follows.
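
As a reference, here is a compact Python version of Dijkstra's algorithm on a toy three-node topology; the link costs are arbitrary example values.

    import heapq

    def dijkstra(graph, src):
        # graph maps node -> {neighbor: link cost}; returns least costs and predecessors.
        dist, prev = {src: 0}, {}
        heap = [(0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                      # stale heap entry, already improved
            for v, cost in graph[u].items():
                nd = d + cost
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        return dist, prev

    topo = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 2}, "C": {"A": 4, "B": 2}}
    dist, prev = dijkstra(topo, "A")
    print(dist["C"], prev["C"])   # -> 3 B  (A-B-C beats the direct A-C link)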

2.1.2 Multipath routing

Multipath routing defines multiple alternative paths through the network over
which traffic can be forwarded to reach a common destination, increasing the
available bandwidth and improving fault tolerance and security. Investigations
suggest that the benefits of multipath routing lie in improved end-to-end
reliability, avoidance of congested paths, and adaptation to application
performance requirements [5]. One of several strategies used in multipath
routing is Equal-Cost Multi-Path (ECMP).

2.1.2.1 Equal-cost multi-path (ECMP)

ECMP delivers multipath routing over several paths when they offer the same
cost to reach the destination. In this scenario, there is no preferable path to
follow, since each of them offers the same performance; moreover, the multiple
paths to choose from are most probably the best paths computed by shortest
path routing. Therefore, what ECMP does is share the traffic flows among the
available links using either random or round-robin (cyclic order of links)
selection. This allows the network to load-balance the traffic through sections
where redundant links, or completely redundant paths, are available, achieving
fault tolerance, high availability, and increased throughput.

ECMP is generally used as an additional feature of several routing protocols
such as Open Shortest Path First (OSPF) and Intermediate System-to-
Intermediate System (IS-IS), which are shortest-path based routing protocols.

2.1.3 Source routing

Source routing is a type of routing that allows a source network device to process
and define a partial or complete route through the network for packet forwarding,
instead of the route being processed by every intermediate device in the network.
The entire path of each packet is known when it is injected into the network, the
routing information being added to the packet header to avoid local routing
decisions at each hop [6].

In basic terms, source routing operates on a list of intermediate devices and
links placed in the packet header at the source forwarding device, where the list
represents an ordered sequence of hops the packet will traverse until it reaches
the destination. Within this sequence or route, source routing allows the
concurrent use of multiple paths in the network, the sender device being capable
of choosing different routes on a per-packet basis. Some investigations point out
that for SDN environments, source routing can be an alternative method for
packet forwarding to traditional OpenFlow instantiation [7].
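
A minimal sketch of this idea in Python: the full route is written into the packet header at the source as an ordered list of output ports, and each node simply pops the head of the list. The node names and the port map are hypothetical toy data.

    # topology_ports[node][port] -> neighbor reached through that output port
    topology_ports = {"S1": {2: "S2"}, "S2": {3: "S3"}, "S3": {1: "H2"}}

    def forward_source_routed(packet):
        # Each hop consumes one entry of the in-header route; intermediate
        # nodes keep no routing state of their own.
        visited = []
        while packet["route"]:
            out_port = packet["route"].pop(0)
            packet["at"] = topology_ports[packet["at"]][out_port]
            visited.append(packet["at"])
        return visited

    pkt = {"at": "S1", "route": [2, 3, 1]}   # route computed entirely at the source
    print(forward_source_routed(pkt))        # -> ['S2', 'S3', 'H2']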

2.2 Network performance standards


From the point of view of the customer, network performance can be somewhat
subjective, since it is a measurement of the service quality perceived by him/her
(better known as Quality of Experience, or QoE). From this perspective, network
performance parameters have been studied to find the values that separate
optimal from degraded communications in the customer's perception. These
parameters are used to determine the QoS levels to be assigned to each type of
traffic, and as the reference to follow in service level agreements (SLAs) between
network service providers and customers. The main parameters used to measure
network performance are: Bandwidth, Throughput, Delay, Jitter, and Packet Loss.

 The Bandwidth in computer networks is the available bitrate or channel
capacity of a communications link expressed in bits per second (bps). It can
also refer to the transmitted information capacity in bits per second.

 The Throughput in computer networks is the transmitted or consumed
bandwidth in the network expressed in bits per second. When traffic
transmitted at line-rate speed is measured, the maximum bandwidth
received over an end-to-end communication is the throughput that the
network delivers for that communication.

 The Delay is the time interval between the transmission and the reception of
a packet. There are two types of delay in computer networks:
o One-way delay: the time taken by a packet to traverse the network from
a source to a destination. It is difficult to measure, since both end points
(source and destination) have to be time-synchronized through a
reference clock, whose required accuracy can be expensive and hard to
achieve. This delay is more accurate for measuring the response of the
network to situations such as link failures, or for monitoring that the
network is complying with the stipulated SLA.
o Round Trip Time (RTT): the time taken by a packet to traverse the
network from a source to a destination and return back to the source.
RTT is most commonly used since it is easier to measure from most end
devices, but it is less accurate for perceiving the response of a network
to failures. Note that one-way delay is not RTT/2, since the outbound
and return delays may not be the same.

 The Jitter is the variation of the delay perceived in an end-to-end
communication. From the point of view of the transmission, packets traversing
the network at a fixed bitrate denote a periodic transmission, and jitter
represents a variation of that periodicity. This parameter is undesirable in
computer networks, since it can degrade sensitive traffic such as voice and
video, which depend greatly on real-time communication (a small computation
example follows this list).

 The Packet Loss, as the name states, is the number of packets lost during
transmission over the network, either through transmission errors or through
the network's inability to handle the traffic in terms of bandwidth and
throughput.
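
As a simple numeric example of jitter, the snippet below computes it as the mean absolute variation between the one-way delays of consecutive packets. This is one common definition (RFC 3550, for instance, uses a smoothed interarrival estimator instead), and the delay values are invented for the example.

    delays_ms = [20.0, 21.0, 19.0, 20.0]   # one-way delays of consecutive packets

    # Mean absolute difference between consecutive delays.
    jitter = sum(abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])) / (len(delays_ms) - 1)
    print("jitter = %.2f ms" % jitter)     # -> jitter = 1.33 ms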

All kinds of traffic can traverse the network, each with its own network
performance demands to guarantee the desired QoS and SLA. To test that the
network covers all these demands, it is better to focus on the demands of the
most critical type of traffic: voice. Voice communication is the most sensitive
traffic, because all voice packets have to be sent in real time over the network.
If there is considerable delay or jitter, or if there is not enough guaranteed
bandwidth, the voice communication degrades easily, and gaps in the sound or
interruptions are perceived. According to several vendors and ITU
recommendations, the following table presents the desired parameters to
ensure proper network performance for Voice over IP (VoIP) communications.

Table 2.1 - Network performance parameters

Network performance for VoIP
One-way Delay: ≤ 150 ms
Packet Loss: ≤ 1%
Jitter: ≤ 30 ms
Guaranteed Bandwidth: 21 Kbps to 320 Kbps

The one-way delay limit is specified by the ITU-T G.114 recommendation [8]
and is acceptable for most user applications. Even delays of up to 400 ms can
be accepted for several types of traffic, but network providers have to be aware
of possible impacts on transmission quality. There is no direct consensus about
a standard limit for jitter, but several vendors such as Cisco Systems Inc. [9]
have carried out testing, and their results suggest that voice communications
tend to start degrading at jitter values greater than 30 ms. As for guaranteed
bandwidth, it varies for each kind of communication, and providers have to
guarantee that the physical link can cover all desired traffic.

For the purposes of this research, the network performance values for VoIP
communications are used as a reference to compare the network performance
measurements of the path establishment methods described in Chapter 5.

2.3 Proactive Forwarding (PF)

Proactive forwarding is defined in this research as a global term to describe path
establishment methods that use OpenFlow proactive instantiation, defining flow
rules in L2 switches in anticipation of traffic and constructing end-to-end paths
in an SDN network. In most SDN implementations, proactive forwarding is a
shortest path routing type in which Dijkstra-based algorithms are used for path
computation.

The basic function of proactive forwarding methods is to establish end-to-end
paths using the source and destination MAC addresses, defining flow entries to
be added to the flow tables of the switches for pipeline processing. Only the
switches involved in the path receive this update on their flow tables, defining
forwarding actions according to the match fields of the flow entries. The SDN
controller starts this process on receiving the first packet; then, in anticipation of
the rest of the traffic, the controller downloads all the necessary flows to the
involved switches to establish the path (see Fig. 2.1).

Figure 2.1 - Proactive forwarding model

A switch receiving the first packet does not know the route to follow to a
destination. When this happens, the switch attaches the packet to an OpenFlow
message (OFPT_PACKET_IN) and sends it to the controller to consult the route
to take for this packet and subsequent packets with the same destination. Using
the MAC address information stored in the packet header, the SDN controller
computes the shortest path (least-cost path) to the destination based on metrics
such as hop count, link bandwidth, and delay.

After the path computation, the SDN controller defines the necessary flow
instructions and sends them to the switches involved in the path, in parallel with
an OFPT_PACKET_OUT message to the first switch in response to the
OFPT_PACKET_IN message, defining the forwarding action for the first packet.
With the flow entries downloaded into the switches' flow tables, the end-to-end
path is established, allowing subsequent packets with the same source and
destination address information to be forwarded through the network without
consulting the SDN controller. The sequence is sketched below.
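
The exchange can be condensed into the following Python sketch. The controller object and its helpers (compute_shortest_path, send_flow_mod, send_packet_out) are mocked, hypothetical stand-ins for controller internals rather than the API of any real controller; only the message sequence mirrors Fig. 2.1.

    class ToyController:
        def compute_shortest_path(self, src_mac, dst_mac):
            # Pretend a Dijkstra run returned these (switch, output port) hops.
            return [("s1", 2), ("s2", 3), ("s3", 1)]

        def send_flow_mod(self, sw, match, out_port):
            print("OFPT_FLOW_MOD to %s: match=%s -> output:%d" % (sw, match, out_port))

        def send_packet_out(self, sw, out_port):
            print("OFPT_PACKET_OUT to %s: forward first packet on port %d" % (sw, out_port))

        def on_packet_in(self, sw, packet):
            # One consultation for the first packet; flows for the whole path
            # are then pushed in anticipation of the subsequent packets.
            path = self.compute_shortest_path(packet["eth_src"], packet["eth_dst"])
            match = {"eth_src": packet["eth_src"], "eth_dst": packet["eth_dst"]}
            for hop_sw, out_port in path:
                self.send_flow_mod(hop_sw, match, out_port)
            self.send_packet_out(sw, path[0][1])

    ToyController().on_packet_in(
        "s1", {"eth_src": "00:00:00:00:00:01", "eth_dst": "00:00:00:00:00:02"})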

Unlike OpenFlow reactive instantiation, in which flow entries have an expiration
time for unused paths, proactive instantiation maintains the flow entries, and
hence preserves the path indefinitely until there is a change in the physical path.
Examples of practical applications of proactive forwarding include the L2 Switch
application and the OpenStack network abstraction in OpenDaylight controllers,
the Intent forwarding application in ONOS controllers, and the NOX proactive
mode in NOX controllers.

2.4 Multiprotocol Label Switching (MPLS)


MPLS is an architecture that defines a mechanism to perform label switching, combining L2 packet forwarding and L3 routing [11]. It assigns labels to packets for their transport across packet-based or cell-based networks. Label swapping is the forwarding mechanism used throughout the network, in which the data carries a short, fixed-length label that instructs switching nodes how to process and forward the packets along the path.

2.4.1 MPLS architecture

The architecture is a distributed system, where each node within the MPLS domain is divided into two main components: the data plane and the control plane. The data plane maintains a label-forwarding database (the FIB and LFIB tables) to perform packet forwarding based on the labels carried by the packets. The control plane creates and maintains label-forwarding information (the LIB table) among a group of interconnected nodes. Fig. 2.2 illustrates the basic architecture of an MPLS node.

[Figure: an MPLS node split into a control plane (IP routing protocols, IP routing table, LIB, and MPLS IP routing control, exchanging routing information and label bindings with other nodes) and a data plane (FIB for incoming/outgoing IP packets, LFIB for incoming/outgoing labeled packets).]

Figure 2.2 - MPLS architecture (image based on [11])

Every MPLS node runs one or more IP routing protocols to exchange IP routing information with the other nodes, making every node an IP router in the control plane. This exchange allows the nodes to identify the networks attached to each node and to determine where to send packets through the use of labels. The nodes exchange label information, storing the mappings between the labels assigned by the local node and the labels received from its neighbors in a Label Information Base (LIB). These mappings are distributed within the MPLS domain through the use of the Label Distribution Protocol (LDP).


During packet forwarding, not all the labels within the LIB table are needed. For this reason, a table in the data plane (the LFIB table) is used, which pulls out only the necessary label mapping information from the LIB table and performs the packet forwarding operation for the traffic of the currently established path. The IP routing table is used together with the MPLS IP routing control process to build the FIB table, which is an IP forwarding table augmented with labeling information in order to add labels to ingress packets or remove labels from egress packets (a functionality used in edge nodes), and to send them either outside or inside the MPLS domain.

2.4.2 MPLS topology

The control and data plane operation of each MPLS node together make up the traffic forwarding operation of the entire MPLS domain. Fig. 2.3 illustrates the MPLS model, its forwarding operation, and its topology.

[Figure: an IP packet enters the MPLS domain at an Edge-LSR (PUSH), traverses intermediate LSRs that SWAP labels along the LSP, and leaves through the egress Edge-LSR, with the POP performed at the penultimate hop (PHP).]
Figure 2.3 - MPLS model

In the MPLS domain, all nodes are designated Label Switch Routers (LSRs), and the edge nodes are commonly identified as Edge-LSRs. The difference in operation is that Edge-LSRs prepend labels to or remove labels from the IP packets by using the FIB table for IP- and label-based forwarding. In this manner, the Edge-LSR serves as an intermediary between the MPLS domain and other routing domains such as IGP (Interior Gateway Protocol) and EGP (Exterior Gateway Protocol) domains. The rest of the LSRs within the MPLS domain execute only label-based forwarding operations such as the SWAP action.

Ingress IP packets get a label attached by the Edge-LSR using the PUSH action. The Edge-LSR then sends the packet to the next hop according to the label information, and the subsequent hops (LSRs) replace the label information through the SWAP action at every hop the packet traverses. Finally, depending on the configuration of the nodes, the POP action (removal of the label) can be performed either by the Edge-LSR or by the hop before it, in which case the POP action is known as Penultimate Hop Popping (PHP). The entire forwarding process according to the label information establishes an end-to-end path within the MPLS domain referred to as a Label Switched Path (LSP). Several LSPs can be established by means of path computation algorithms or manual configuration, and they can be used for different purposes including VPN routing, traffic engineering, and QoS.
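
These label operations can be illustrated with a small, self-contained Python simulation (purely conceptual; the table contents and label values are invented for the example): the ingress Edge-LSR pushes a label taken from its FIB, core LSRs swap labels according to their LFIBs, and the penultimate hop pops the label (PHP).

    # Toy LSP walk: PUSH at the ingress Edge-LSR, SWAP at core LSRs, and
    # POP at the penultimate hop (PHP). Tables and labels are invented.
    def traverse_lsp(dst_prefix, lsp):
        fib = {'10.1.1.0/24': 17}                      # ingress FIB: prefix -> label
        lfib = {'LSR-A': {17: 22}, 'LSR-B': {22: 39}}  # core LFIBs: in -> out label

        label = fib[dst_prefix]                        # PUSH at the ingress Edge-LSR
        print('ingress Edge-LSR PUSH ->', label)
        for node in lsp[:-1]:                          # every hop before the egress edge
            if node in lfib:
                label = lfib[node][label]              # SWAP at a core LSR
                print(node, 'SWAP ->', label)
            else:
                label = None                           # PHP: POP at the penultimate hop
                print(node, 'POP (PHP), forwards a plain IP packet')
        print(lsp[-1], 'routes the packet by a plain IP lookup')

    traverse_lsp('10.1.1.0/24', ['LSR-A', 'LSR-B', 'LSR-C', 'Edge-LSR-2'])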

2.5 Segment Routing (SR)


Segment Routing is a new source routing method being developed by the IETF to be supported by several routing protocols in use today. In principle, an intermediate device or SR node guides the packet through an ordered sequence of instructions called segments [12]. A segment can be any topological or service-based instruction, and the packet is treated according to the sequence found in its header (see Fig. 2.4). Segment Routing can be applied to both MPLS and IPv6 architectures, requiring only small changes to their forwarding planes.

[Figure: the ingress SR node (101) pushes a segment list onto the packet; node 102 executes its segments and forwards the packet through link 1007, nodes 106 and 105 simply forward it, and node 104 pops the final segment toward the destination.]
Figure 2.4 - Segment Routing model

As described for the general functionality of source routing, segment routing constructs the end-to-end path based on a list of segments, each identified by an integer value called the segment identifier (SID), which also identifies the network element (SR node, group of nodes, SR domain, link, or set of links) that is going to apply the segment instruction. Each segment is executed by the SR node on any incoming packet. Following the model above (Fig. 2.4), the ingress SR node introduces the list of segments into the incoming packet. The list denotes 3 segments or instructions that are going to be executed by nodes 102 and 104, where the order of execution goes from top to bottom. Node 102 receives the packet, executes and pulls out the first and second instructions from the list, and sends the packet through link 1007; nodes 106 and 105 forward the packet to node 104 without executing the remaining instruction; and node 104 pulls the final segment stored in the stack and forwards the packet to the desired destination.

A segment involves actions such as forwarding a packet toward a shortest-path destination, through an interface, or to an application/service instance. An SID can be an MPLS label, an index value in the MPLS label space, or an IPv6 address, depending on the data plane that segment routing uses.

2.5.1 Segment list

A segment list is an ordered list of SIDs that encodes the topological and service-based source route of the packet. Depending on the data plane used, it can be an MPLS label stack or a list of IPv6 addresses. At the top of the list lies the active segment, which is the instruction that must be executed by the receiving SR node to process the packet. During the packet's transit along the path, the following actions can be executed on the segment list: Push, Next, Continue, and Pop (a short sketch follows the list).

 The Push action is executed by an SR node to introduce a segment into the segment list.

 The Next action is used when an active segment is completed, defining the next segment in the list as the new active segment.

 The Continue action is used when the current active segment is not yet completed, and maintains its state as the active segment. One example is when an SR node receives a packet none of whose segments are to be executed by that node, so it forwards the packet toward the node that is supposed to execute the active segment (i.e. nodes 106 and 105 of Fig. 2.4).

 The Pop action is used at the edge node to remove the final segment from the segment list and send the packet to its final destination. In the MPLS data plane, the node previous to the edge node can perform this action (PHP).
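
As announced above, the following conceptual Python sketch (not an implementation from [12]; node IDs loosely follow Fig. 2.4) walks a packet's segment list with the active segment at the head of the list:

    # Conceptual walk of a segment list; the active segment is the list head.
    def process_at_node(node, segment_list):
        # Next/Pop: consume every consecutive segment owned by this node.
        while segment_list and segment_list[0]['owner'] == node:
            done = segment_list.pop(0)
            print(node, 'executes segment', done['sid'])
        # Continue: not this node's segment, forward toward its owner.
        if segment_list:
            print(node, 'continues toward the owner of', segment_list[0]['sid'])

    # Push: the ingress node prepends the segment list onto the packet.
    segments = [{'sid': 102,  'owner': 102},   # node segment: reach node 102
                {'sid': 1007, 'owner': 102},   # adjacency segment: exit via link 1007
                {'sid': 104,  'owner': 104}]   # node segment: deliver at node 104
    for node in [102, 106, 105, 104]:
        process_at_node(node, segments)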

2.5.1.1 SR Tunnel

An SR tunnel is a segment list specified with abstract constraints (such as delay or priority) pushed onto a packet to define a route, which can be used for traffic engineering, OAM, or FRR purposes. SR tunnels can be configured manually by the operator.

2.5.2 Segment types

Segments can be classified into two categories: local and global segments.

 Local segments are those originated by an individual node and supported only by that node. Examples are segments related to links directly connected to the SR node.
 Global segments are those whose related instructions are supported by all SR-capable nodes in the domain [12]. They can be used to identify a group of nodes that can handle a specific instruction, relating their local SIDs to a global SID.

Besides the local and global categories, segments can also be classified into two types: IGP segments and BGP peering segments. BGP peering segments are out of the scope of this research, so they are not discussed. IGP segments are those advertised by an SR node for its attached prefixes and adjacencies within a link-state IGP domain; their classification can be observed in Fig. 2.5:

Figure 2.5 - IGP segments classification

IGP-Prefix segments are global segments attached to an IGP prefix, such as a summarized network address advertised within the IGP domain. IGP-Anycast and IGP-Node segments are the two types of these segments: Anycast segments identify a group of nodes by an Anycast SID, and Node segments identify a single node using a Node-SID (usually related to the node's loopback address).

IGP-Adjacency segments are local segments attached to a unidirectional adjacency or a set of unidirectional adjacencies, and they are identified using an Adj-SID (adjacency segment identifier). The adjacency is formed by the interconnection of a local node and its neighbor, and it can be advertised throughout the SR domain.

When a packet following the segment list finds an Adj-SID of a specific node, the action that the node will take is to forward the packet through the adjacency (link interface) assigned to the Adj-SID. If there is more than one adjacency assigned to the Adj-SID (a set of link interfaces), the node will load-balance the traffic across the set of adjacencies. This gives two options when encoding an Adj-SID: to reference the use of protection in the adjacency (IP-FRR or MPLS-FRR), and to identify an adjacency as a local segment.

2.5.3 Path computation algorithms

There are two routing algorithms defined for segment routing: Shortest Path and Strict Shortest Path.

 Shortest Path is the default algorithm, in which a packet is forwarded along the path computed by the well-known ECMP-aware SPF algorithm. It has the flexibility that an intermediate device can implement a policy-based forwarding action that overrides the SPF decision.

 Strict Shortest Path works the same as the default Shortest Path algorithm, but instructs each intermediate device to ignore any local policy-based forwarding action that would override the SPF decision.

A Prefix-SID advertisement includes a set of flags and an algorithm field, which associates a given Prefix-SID (Anycast or Node) with either of the routing algorithms. In this way, an ingress node gathers all node and adjacency information from the Prefix-SID advertisements, and the routing algorithm constructs the end-to-end path for any traffic entering the SR domain.

CHAPTER 3. TEST TOPOLOGY SETUP


The general idea for measuring the behavior of the path establishment methods in the face of network events is to provide a scenario where an end-to-end communication is available through multiple paths. A virtual network topology was built to meet these conditions, and it is presented in Fig. 3.1. The network topology is contained within 2 physical computers: one of them runs an SDN controller, and the other the infrastructure layer. The forwarding plane comprises 10 L2 virtual switches interconnected with the SDN controller using one TCP port per switch-controller connection over a single physical Ethernet link, 2 virtual hosts (h1 and h2) between which the end-to-end communication takes place, and 12 virtual Ethernet links interconnecting the switches.

[Figure: PC 1 runs the SDN controller, connected by an Ethernet link to PC 2, which hosts the ring of switches S1-S10 (with cross links) and the hosts h1 and h2; the switch-controller connections are carried as TCP sessions over the single physical link.]

Figure 3.1 - General test network topology

The layout of the links allows the SDN controller to compute 8 possible paths between hosts h1 and h2, as listed below.

1) S1, S2, S3, S4, S5, S6
2) S1, S10, S9, S8, S7, S6
3) S1, S2, S3, S9, S8, S7, S6
4) S1, S2, S3, S4, S8, S7, S6
5) S1, S10, S9, S3, S4, S5, S6
6) S1, S10, S9, S8, S4, S5, S6
7) S1, S2, S3, S9, S8, S4, S5, S6
8) S1, S10, S9, S3, S4, S8, S7, S6

This chapter describes the main hardware used to build this topology, the measurement tools used for the experimentation, the SDN controllers selected for the configuration of the path establishment methods, and their setup over the network topology.

3.1 Control layer involved elements


The main purpose of using two computers is to isolate the processing load of the SDN controller on dedicated hardware. This makes it possible to measure the controller's response to the network without interference from other major processes external to the normal operation of the computer. The same applies to the infrastructure layer, where one computer can dedicate its hardware resources to data plane processing, ensuring that the network performance in terms of packet forwarding is as realistic as possible. Accordingly, an SDN controller is selected first, and then the hardware needed to support its processing demands.

3.1.1 SDN controller selection

Since Segment Routing is a relatively new method, the main idea in this project was to find a complete adaptation of Segment Routing to SDN. Several controllers and their project developments were examined, leaving two options for the SDN controller: OpenDaylight and ONOS.

 OpenDaylight is a modular and multiprotocol controller infrastructure built for SDN deployments on multi-vendor networks. It has a major development effort through a community sponsored by several vendors such as Cisco, Brocade, HP, Huawei, VMware, and Oracle.

 ONOS, or Open Network Operating System, is a modular and distributed core controller infrastructure targeted at service providers and mission-critical networks, with the mission of ensuring high availability, scale-out, and performance for the service provider's network. It has been developed by the Open Networking Lab (ON.Lab) in coordination with service providers like AT&T, and network vendors like Ericsson, Cisco and Huawei.

Since both controllers are fully capable of managing the Proactive Forwarding method, the selection criterion was which controller's way of handling Segment Routing allows more flexibility in implementing the test topology and the measurements made in this research. Table 3.1 displays a comparison of both platforms in terms of Segment Routing capabilities, hardware requirements, and compatibility with the test scenario:

Table 3.1 - SDN controller selection criteria

Based on the characteristics observed, the SDN controller selected for the experimentation was the ONOS controller. Several factors were taken into account, the main reasons for the selection being the following:

 The Segment Routing version of ONOS is designed to work over MPLS; previous knowledge of this architecture gave a better understanding of the working principle of Segment Routing and its operation in an SDN network.

 The ONOS controller gives more flexibility to build the infrastructure layer, since it is capable of configuring OpenFlow switches to emulate L3 routing devices. At the time of selection, there were no OpenFlow routing devices available to work with OpenDaylight.

 The Segment Routing version of ONOS can install end-to-end paths both
dynamically and manually, which gives more flexibility in the experimentation,
since it is not necessary to manually configure several routes during failure
recovery scenarios.

 The ONOS controller requires less hardware resources than OpenDaylight.

Knowing that ONOS was the desired platform for the experimentation process,
the final versions selected for Segment Routing and Proactive Forwarding
methods were the following:

 Segment Routing: ONOS version SPRING-OPEN

 Proactive Forwarding: ONOS version Blackbird 1.1.0rc2

Since the SPRING-OPEN version is an experimental operating system used only for the adaptation of Segment Routing, it does not support Proactive Forwarding. For this reason, a different version of ONOS (a public release version) is used to test Proactive Forwarding. Given that the purpose of the investigation is a conceptual analysis of the methods' working principles, testing both methods under exactly the same platform is not considered mandatory; nevertheless, both methods are tested under the same platform type, the ONOS architecture. This gives an idea of what the effects of Segment Routing in ONOS controllers could be compared with Proactive Forwarding.

3.1.2 Controller’s hardware

Both ONOS versions require the same hardware resources to ensure minimally acceptable operation. For this purpose, a high-resource computer, intended for high processing loads and available at the UPC campus, was selected to host both versions. The computer's specifications are the following:

 3.4GHz Intel® Core™ i7-3770 processor
 16GB RAM
 64-bit Ubuntu Server 14.04.01 operating system
 1Gbps Ethernet NIC
 IEEE 802.11n Wireless NIC

This computer ensures that the normal operation of the ONOS controllers, and hence the measurement of the controller's time response, is not affected by limitations in hardware resources. Both versions are installed on this computer, but they are initialized one at a time to guarantee this performance.

3.1.3 Controller’s software

Table 3.2 shows the software requirements for each of the selected ONOS versions:

Table 3.2 - ONOS software requirements

ONOS version          Requirements
Blackbird 1.1.0rc2    1) Apache Karaf 3.0.2
                      2) Java 8 JDK
                      3) Apache Maven 3.2.3
                      4) Git
                      5) Bash
SPRING-OPEN           1) Apache Zookeeper 3.4.6
                      2) OpenJDK 7
                      3) Git

 Apache Karaf is a platform that provides a lightweight container for components and services through an OSGi runtime environment.7 This platform is used to run the CLI console that manages the controller, to provision and install the controller's applications, and to define the controller's dynamic configuration using properties files.

 Apache Zookeeper "is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services" (Apache Software Foundation, 2010).8 In the case of SDN, Zookeeper can be used to manage the switch-controller mastership, including the detection of and reaction to instance failures [14].

 The Java JDK and OpenJDK are Java Development Kits (JDKs) that provide a development environment for the creation and deployment of Java-based applications and components. Among other functions, this kit executes the .jar files that run the controller's applications for service deployment.

 Apache Maven is a project management and comprehension tool used to manage project builds.9 In this case, Maven is used to build the source code of the controller's operating system.

 Git and Bash are used, respectively, to download the latest software projects and to create packages of the controller's operating system.

3.2 Infrastructure layer involved elements


The infrastructure layer is chosen based on compatibility with the selected ONOS controller versions. The forwarding devices must be capable of handling traffic according to the path establishment method used in the SDN environment. Additionally, the layer must provide enough performance to approximate realistic network deployments and must not interfere with the measurements made in this research. In this sense, there were several software and hardware considerations.

3.2.1 Software selection

To emulate the infrastructure layer, it was necessary to find a software solution that allows the implementation of virtual switches with OpenFlow capabilities and establishes an interface with the SDN controller through the hardware and the physical interconnection. The solution needed to perform the packet treatment and transmission process in the virtual network the same way as it should be done in a physical topology.

7 Apache Software Foundation. Karaf. http://karaf.apache.org/
8 Apache Software Foundation. (2010). Apache Zookeeper. https://zookeeper.apache.org/
9 Apache Software Foundation. (2015). Apache Maven. https://maven.apache.org/

Therefore, two existing solutions were considered for these purposes: the OpenWrt image and Mininet.

 OpenWrt is a Linux distribution for embedded devices [15]. It allows customizing a hardware device from any vendor through the use of packages, achieving any application desired for the device. In essence, it is a framework to develop a Linux-based firmware for a given device to perform specific tasks. In this case, OpenWrt was meant to implement an OpenFlow switch firmware on the available hardware (MikroTik routing devices) so that these routers could perform as OpenFlow switches.

 Mininet, as stated on its official site, "creates a realistic virtual network, running real kernel, switch and application code, on a single machine (VM, cloud or native)" (Mininet, 2015)10. It works directly with OpenFlow, allowing interaction with SDN controllers from all vendors and the inclusion of several versions of OpenFlow virtual switches. It comes with installable packages for the NOX, POX, and RYU SDN controllers, a default kernel-space and user-space Open vSwitch with OpenFlow 1.0, and the CPqD user-space virtual switch with OpenFlow 1.3.

Table 3.3 compares the compatibility of OpenWrt and Mininet with the test topology and the selected SDN controller:

Table 3.3 - Infrastructure layer compatibility

Compatibility                  OpenWrt                              Mininet
Available OpenFlow switch      1) Open vSwitch OF 1.3 (user space)  1) Open vSwitch OF 1.3 (user/kernel space)
at time of selection           2) CPqD OF 1.3 (user space)          2) CPqD OF 1.3 (user space)
ONOS interoperability          YES                                  YES
Proactive Forwarding support   YES                                  YES
Segment Routing support        NO                                   YES
Test scenario requirements     1) 10 physical network devices       1) 1 physical computer
                                  (switches or routers)             2) single UTP cable
                               2) additional devices for the
                                  control network and monitoring
                               3) UTP cabling for interconnections
                               4) physical hosts

10 Mininet. (2015). Mininet: An Instant Virtual Network on your Laptop (or other PC). http://mininet.org/

Although OpenWrt would have provided a more realistic infrastructure layer, using physical hosts and devices, it does not support the use of Segment Routing, which is one of the path establishment methods meant to be tested on the network topology. At the time of selection, the versions of Open vSwitch supported by OpenWrt and Mininet did not support a feature called group chaining [17], an optional feature of OpenFlow 1.3 that uses a group entry to point a packet to a secondary group, and even to a third group or an output port. This action is necessary to carry out the segment sequencing on the IP packet (only CPqD supported this feature).

The CPqD version in OpenWrt did not support Segment Routing either, although the reason is not clear, since the Segment Routing driver supports this virtual switch through OpenFlow 1.3 [18]. It is suspected that the hardware of the MikroTik router limits the ability of the controller to use the full OpenFlow pipeline that enables the necessary flow tables and group tables, but no documentation on the MikroTik routers was found to support this assumption. What was observed is that during the interconnection of the MikroTik routers with the SPRING-OPEN controller, the controller was unable to establish the IP, MPLS, and ACL flow tables, or the group tables.

Given this considerable limitation with Segment Routing, it was decided to use Mininet to emulate the infrastructure layer on a single machine. This setup limits the topology in terms of bandwidth, since the supported virtual switch (the CPqD switch) works at user-space level, and each switch in the topology shares hardware resources for traffic processing. Nevertheless, at lower bandwidths the network is expected to perform with high throughput and no packet loss.

3.2.2 Hardware selection

There is no clear indication of what the minimum hardware requirements for Mininet should be. However, a small topology based on 10 switches, with traffic generated by only 2 hosts, does not place a demanding processing load on the hardware. Normally, if a virtual machine is used to host the Mininet application, the default hardware settings are a single-core processor and 1024MB of RAM, according to the recommended downloadable virtual machine image for Mininet [19].

Based on this initial reference, a computer with similar hardware resources was available to host the Mininet application. The specifications of the selected computer are the following:

 3.4GHz Intel® Pentium® D dual-core processor
 1GB RAM
 64-bit Ubuntu 14.04.01
 1Gbps Ethernet NIC

3.3 Proactive Forwarding Setup


Following the elements involved in the control and infrastructure layers, the topology used to test the Proactive Forwarding method is illustrated in Fig. 3.2:

[Figure: PC 1 runs ONOS Blackbird 1.1.0rc2 at 10.60.1.1:6633 (interface em1); PC 2 (interface eth2) hosts switches S1-S10 running the CPqD OF 1.3.4 user-space switch, with h1 (10.0.0.1/24) attached to S1 and h2 (10.0.0.2/24) attached to S6.]
Figure 3.2 - Proactive Forwarding test topology

This topology is configured in Mininet to work within a single subnet, due to its L2-based routing. A Python script is used to construct the topology in Mininet (see TopoTestPF.py in Appendix B; a simplified sketch is shown below), in which the switches are set to point to the controller's default TCP port 6633 to exchange OpenFlow messages with it, and to use the user-space switch (CPqD). The bandwidth of the switch interfaces depends on the limitations of the test network, whose bandwidth capacity is tested during the experimentation process in Chapter 5.
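
As a complement to the script referenced above, the following simplified sketch (illustrative; the real TopoTestPF.py is in Appendix B, and the exact link and port details here are assumptions) shows how a comparable topology can be built with Mininet's Python API, using the CPqD user-space switch and a remote ONOS controller:

    # Simplified sketch of the Proactive Forwarding test topology (Fig. 3.2).
    # Assumes Mininet with the CPqD user-space switch installed.
    from mininet.net import Mininet
    from mininet.node import RemoteController, UserSwitch
    from mininet.cli import CLI

    def build():
        net = Mininet(controller=None, switch=UserSwitch, autoSetMacs=True)
        net.addController('c0', controller=RemoteController,
                          ip='10.60.1.1', port=6633)          # ONOS on PC 1
        s = {i: net.addSwitch('s%d' % i) for i in range(1, 11)}
        h1 = net.addHost('h1', ip='10.0.0.1/24')
        h2 = net.addHost('h2', ip='10.0.0.2/24')
        # Ring S1..S10 plus the two cross links S3-S9 and S4-S8 (12 links).
        pairs = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8),
                 (8, 9), (9, 10), (10, 1), (3, 9), (4, 8)]
        for a, b in pairs:
            net.addLink(s[a], s[b])
        net.addLink(h1, s[1])                                  # h1 attaches to S1
        net.addLink(h2, s[6])                                  # h2 attaches to S6
        net.start()
        CLI(net)
        net.stop()

    if __name__ == '__main__':
        build()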

3.4 Segment Routing Setup


The topology used to test the Segment Routing method is presented in Fig. 3.3:

[Figure: PC 1 runs ONOS SPRING-OPEN at 10.60.1.1:6633; PC 2 hosts switches S1-S10 (CPqD OF 1.3.4, user space), each with a /32 loopback (172.10.0.1-172.10.0.10) and an SID (101-110); h1 (10.0.0.5/24, MAC 00:00:00:00:00:01) hangs off S1 with gateway 10.0.0.1/24, and h2 (10.1.1.5/24, MAC 00:00:00:00:00:02) off S6 with gateway 10.1.1.1/24.]
Figure 3.3 - Segment Routing test topology


34 Research on path establishment methods performance on SDN-based networks

In this case, the configuration of the topology is the same, with small differences in the Python script (see TopoTestSR.py in Appendix B): the hosts' IP addresses are configured manually instead of being automatically assigned. This places the two hosts in different subnets, which the ONOS Segment Routing prototype requires to work properly.

Another significant difference is that the controller plays a role in the construction of the network topology. It downloads a configuration to the CPqD switches that programs the OpenFlow pipeline, enabling routing functions on each of them. In other words, it sets up the switches to emulate SR nodes performing Segment Routing operations, so that different subnets can be configured on the switch interfaces, and SID labels and a loopback IP address can be assigned to each switch.

The switches depend on the controller's network abstraction to emulate SR nodes, because the IP addressing and SID information is stored in that abstraction. The routing information (IP and MPLS) is stored on the switches in specific flow tables initially programmed by the controller. The IP/MAC addressing and SID labeling are preconfigured on the controller through a configuration file (see toptst-ctrl-SR.conf in Appendix B), in which each switch is identified by the DPID assigned to it by the Python script.

3.5 Measurement tools


For this research, accuracy of measurement is not a mandatory objective. Nevertheless, the measurements should be precise enough to reveal the general behavior and tendencies of the path establishment methods with respect to the time response of the SDN controller. The following tools were therefore selected for the sampling and measurement of the SDN controller's time response: iPerf, MGEN, and Wireshark.

3.5.1 iPerf

iPerf is a tool for live measurements on IP networks, whose main purpose is to determine the maximum achievable throughput of an IP network.11 It can also be used for other measurements, such as jitter and packet loss. It can be customized to generate UDP and TCP packets of different MTUs, at a specified rate and transmission interval.

This tool acquires its statistics from the number of packets transmitted within the transmission time and interval. It comprises two elements, a client and a server. The client generates the traffic toward the server according to the desired packet specifications and determines an average transmitted bandwidth, while the server receives the generated traffic, computes the statistics, and sends the results back to the client.

11 iPerf. iPerf – The network bandwidth measurement tool. https://iperf.fr/

iPerf works at the application level, which means that the measured throughput reflects not only the performance of the links but also the packet processing through the TCP/IP stack. Additionally, iPerf runs at the user-space level of the Linux OS, working at the top of the system architecture and using system calls to access resources.
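
As an illustration of how iPerf can be invoked for the traffic profile used in Chapter 5 (20-second streams of 1400-byte UDP datagrams), the commands below are indicative; exact flags can differ slightly between iPerf versions:

    iperf -s -u -i 1                            (server, on h2)
    iperf -c 10.0.0.2 -u -b 1M -l 1400 -t 20    (client, on h1)

Here -u selects UDP, -b sets the target bandwidth, -l the datagram size, and -t the transmission time; the server side reports the throughput, jitter, and packet loss statistics.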

3.5.2 Multi-Generator (MGEN)

MGEN is an open-source tool that generates real-time traffic patterns in an IP network.12 It can be customized to generate TCP and UDP traffic at different bandwidths and intervals, producing constant-rate, burst, or Poisson-distributed traffic. Like iPerf, it consists of a client and a server: the client generates traffic based on a script with the desired parameters, and the server receives and logs the traffic for later analysis, such as throughput, delay, and packet loss statistics.
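
For comparison, an MGEN event script for a similar constant-rate UDP flow might look as follows (the values are illustrative; 89.3 messages per second of 1400 bytes is roughly 1 Mbps):

    # sender script, run on h1
    0.0 ON 1 UDP DST 10.0.0.2/5001 PERIODIC [89.3 1400]
    20.0 OFF 1

    # receiver script, run on h2
    0.0 LISTEN UDP 5001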

3.5.3 Wireshark

Wireshark is a network packet analyzer that captures packets on the network and displays their TCP/IP layer information in as much detail as possible, allowing their content to be examined for purposes such as network troubleshooting, security examination, protocol debugging, and network protocol learning.13 This tool can capture live packets from a network interface (physical or virtual), display the packet data with detailed protocol information, and save all packet captures for further study and statistics. The captured data can be analyzed under different criteria; the information can be filtered by timestamps, TCP/UDP ports, protocols, TCP sessions, and more.

Since Wireshark works in user space, it relies on the pcap library to capture packets at a lower level. This allows packets to be captured with a more precise timestamp, since there is no additional delay caused by the internal communication between the user-space and kernel-space levels.

For this research, the Wireshark dissector provided by a Mininet package is used; it enables the OpenFlow filter to capture OpenFlow messages and observe their message format in detail, including flow entries and group entries.

12 U.S. Navy. Multi-Generator (MGEN). U.S. Navy Networks and Communication Systems Branch: http://www.nrl.navy.mil/itd/ncs/products/mgen
13 Lamping, U., Sharpe, R. & Warnicke, E. (2014). Wireshark User's Guide. https://www.wireshark.org/docs/wsug_html/

CHAPTER 4. OPERATION IN ONOS CONTROLLERS


Part of observing the methods' working principles is understanding how their implementation works on specific controllers. The general internal process in the controller, the traffic treatment throughout the network, and the way the paths are configured are key points for observing their behavior with respect to the controller's time response and for identifying tendencies in this measurement. This chapter describes how the path establishment methods are implemented in the ONOS controller architecture, and their operation within an SDN network, based on preliminary tests made on their test topologies and on official ONOS documentation.

4.1 ONOS architecture

The ONOS architecture, like that of any other SDN controller, is based on the general SDN architecture presented in Chapter 1. ONOS comprises specific subsystems with northbound, southbound, and core components that make it possible for ONOS to deliver different services across the network. Fig. 4.1 displays the basic ONOS architecture, initially introduced with ONOS Avocet (the first public release).

Figure 4.1 - ONOS architecture (image retrieved from [20])

As stated on the official ONOS project site, this architecture "is strictly segmented into a protocol-agnostic system core tier, and a protocol-aware providers tier" (ON.Lab, 2015). The ONOS core is responsible for the network topology abstraction, built from the constant tracking of network state information, and for distributing this information to the applications either synchronously via query or asynchronously via listener callbacks. Additionally, the core is responsible for maintaining synchronization with other controllers in a cluster through its cluster peers, synchronizing the network state, the list of managed devices, and the current state of links and hosts.

The protocol-aware providers interact with the infrastructure layer using control and configuration protocols, and interpret the network state for the core. This tier also applies changes to the infrastructure layer according to the core's indications, using protocol-specific means.

Southbound APIs interconnect both tiers, while northbound APIs interconnect the core with the network applications. The infrastructure layer connects to the providers tier through southbound protocols such as OpenFlow, through which a pipeline can be programmed into each network device, flow entries can be downloaded to configure the network, and OpenFlow messages can be exchanged to carry out the actions and updates of the network state.

4.1.1 ONOS subsystems

An ONOS subsystem is a structure that describes the use of the ONOS architecture by applications and services; in other words, it describes how each application uses the ONOS architecture (Segment Routing and Intent forwarding are examples of ONOS subsystems). Fig. 4.2 illustrates the general structure of an ONOS subsystem.

Figure 4.2 - ONOS subsystem structure (image retrieved from [20])

This diagram shows the interaction between the architecture layers for each application. The provider component receives and configures the network state through the southbound protocols, then exchanges network information with the manager component: devices can be registered with the core using the ProviderRegistry, device and port information can be supplied using the ProviderService, and the core can give the provider component instructions to be executed on the infrastructure layer.

The manager component interacts with the application component through northbound APIs, in which the Listener and the Service are used to exchange asynchronous notifications, and the AdminService is used to perform administrative actions such as setting mastership of a device or removing decommissioned devices, among other things depending on the application. Within the core, the Store is responsible for synchronizing and indexing information with cluster peers.

4.2 Intent forwarding subsystem


The Intent forwarding method is the ONOS version of Proactive Forwarding, in which the controller provisions an end-to-end path in anticipation of incoming traffic. In ONOS, this method works under one main subsystem (the Intent subsystem) and uses other subsystems as support (e.g. the Topology subsystem).

4.2.1 Intent subsystem

The Intent subsystem is a set of abstractions for conveying high-level intents for the treatment of selected network traffic, allowing applications to express their traffic needs and conditioning the network accordingly. The name comes from the word intention, as in "state your intentions": applications state their intention to condition the network through the use of policy-based directives. Intents can alter the network behavior based on network resources, constraints, specific criteria, and instructions [21].

The Intent application submits an Intent to the ONOS core, which accepts its specifications and translates them, via Intent compilation, into actionable operations on the network environment. These actions are executed by an intent installation process that changes the network environment in terms of tunnel links being provisioned, flow entries being installed on a switch, or optical lambdas (wavelengths) being reserved [21]. A more detailed diagram of the Intent compilation process can be found in Appendix C. Fig. 4.3 shows the interaction between the components of the Intent subsystem.

Figure 4.3 - Intent compilation process within the subsystem (image retrieved from [22])

Fig. 4.3 shows at which component of the subsystem each stage of the intent process is carried out. The application submits or withdraws an Intent; the core compiles and installs the intents, performing additional tasks such as distributing the intents along a cluster of controllers (if any); and the provider component translates the installed actions into southbound instructions to be delivered to the infrastructure layer.

4.2.2 Topology subsystem

The Topology subsystem carries the network topology model definitions, where the network abstraction of the infrastructure layer takes place. It reads constant updates from the infrastructure layer, creates a topology graph of the current network state, and manages the topology inventory to be synchronized with other controllers in a cluster. It also determines the costs associated with the links, and performs path computation during network initialization, on network events, and upon request, using the current topology snapshot (see Fig. 4.4).


[Figure: Topology subsystem structure, with a Topology Listener and a Path/Topology Service in the application component; a topology store, topology provider service, and topology provider registry in the manager component; and a topology provider on top of the protocols in the provider component.]
Figure 4.4 - Topology subsystem (image based on Fig. 4.2 and [23])

Intent forwarding relies on this subsystem to keep the Intent subsystem updated on the current network topology through the Topology Listener, and to perform the end-to-end path computation through the Path Service. When a network event occurs and affects current paths, the Intent subsystem can react based on the information received by the Topology Listener, deciding either to submit new Intents or to withdraw the affected ones. Then, during the Intent compilation process, the Path Service is consulted for the best available path for the new end-to-end path.

4.2.2.1 Path service computation

The Path Service uses a utility subsystem, the Graph subsystem, to access the path algorithm used for path computation. The algorithm used is Dijkstra's shortest-path algorithm, which calculates not just one but all shortest paths from source to destination. The default metric is hop count, but it can be customized with a self-designed link-weight function (the default is used during the experimentation process).
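
This "all shortest paths" behavior can be reproduced in a few lines of Python (networkx is used here purely for illustration; ONOS implements its own graph utilities):

    import networkx as nx

    # Toy graph with two equal-cost (2-hop) branches between S1 and S9.
    G = nx.Graph([('S1', 'S2'), ('S2', 'S9'), ('S1', 'S10'), ('S10', 'S9')])
    # Returns every least-cost path under the default hop-count metric,
    # not just one of them.
    print(list(nx.all_shortest_paths(G, 'S1', 'S9')))
    # -> the two 2-hop paths, S1-S2-S9 and S1-S10-S9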

In the case of Intent forwarding, this process is executed during Intent compilation, where the Intent subsystem consults the Path Service to determine the shortest path between 2 clients based on the current network state information. The Intent is then installed with the new path information and translated into southbound instructions, setting the network devices to establish the end-to-end path according to the algorithm's computation.


4.2.3 Intent forwarding operation

There are several types of Intents in ONOS planned for Proactive Forwarding: Host-to-Host, Point-to-Point, Point-to-Multipoint, MPLS, and optical connectivity Intents. Current ONOS controllers do not yet fully support all of these Intents, so the setup of the test topology uses the most common ones, the Host-to-Host and Point-to-Point Intents. As their names state, these Intents establish a path between 2 hosts or 2 interfaces, respectively, using the shortest path available.

Basically, there are two ways of using Intent forwarding: dynamic and static configuration. Both work under the application onos-app-ifwd, which can be installed before ONOS initialization or during ONOS operation, and which represents the application component of the Intent subsystem. The installation procedure for ONOS Blackbird and its Intent forwarding app can be found in Appendix E.

4.2.3.1 Dynamic configuration

The following steps describe the process of dynamic path configuration with Intent forwarding:

1. The controller expects from the infrastructure layer either an OFPT_PACKET_IN message (for incoming traffic) or an OFPT_PORT_STATUS message (for switch or link failures).
2. The onos-app-ifwd application submits a Host-to-Host intent to the manager component.
3. The manager component compiles the intent. Within this process the manager component consults the Path Service of the Topology subsystem to acquire the shortest path information.
4. The manager component installs the compiled intent as actionable instructions and sends them to the provider component.
5. The provider component translates the instructions into flow entries.
6. The flow entries are downloaded to the relevant switches using OpenFlow proactive instantiation through OFPT_FLOW_MOD messages.
7. With the flow entries on the switches, the end-to-end path is established.

4.2.3.2 Static configuration

There are 2 ways to statically configure paths using Intent forwarding: through a specific command, or through an application called push-test-intents used for testing operations [34], which is used during the experimentation process in this research. Both methods are executable from the controller's CLI (Command Line Interface). The following steps describe the process of static configuration (a worked example follows the list):

1. Either of these commands is executed on the controller's CLI:

a. add-host-intent <src mac> <dst mac>
b. push-test-intents <src mac>/<port> <dst mac>/<port> <Nº intents>

2. The onos-app-ifwd application submits a Host-to-Host intent (for command a) or a Point-to-Point intent (for command b) to the manager component.

3. Steps 3 to 7 of the dynamic configuration are executed.
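
Instantiated with the host MAC addresses of the test topology, these commands would look roughly as follows (indicative only; the exact host-ID and connect-point syntax varies between ONOS releases):

    add-host-intent 00:00:00:00:00:01 00:00:00:00:00:02
    push-test-intents 00:00:00:00:00:01/1 00:00:00:00:00:02/1 1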

4.3 Segment Routing subsystem


The Segment Routing subsystem is the prototype implementation installed on the SPRING-OPEN version of ONOS. It comprises additional services that help keep track of the network topology and handle packets of specific traffic such as ICMP and ARP. Fig. 4.5 illustrates the subsystem structure.

Figure 4.5 - Segment Routing subsystem (image retrieved from [24])


As in Proactive Forwarding, this subsystem comprises 3 components within the ONOS architecture. First, the Segment Routing application, which constructs a route based on default metrics or administrative policies, and handles packets of specific traffic. Second, the core component, which handles the network configuration and interprets the information received from the driver component. And third, the Segment-Routing Driver component, which programs the OpenFlow pipeline on the infrastructure layer, managing the group and flow entries to be pushed to the network devices. A more detailed description can be found in Appendix D.

This subsystem is designed to work using the MPLS data plane. The segment sequence is allocated in the MPLS stack, where the MPLS labels are the SIDs identifying nodes and adjacencies. The packets are treated at each node according to the label found on the top of the stack until they reach the Bottom of Stack (BoS), with Penultimate Hop Popping (PHP) executed on the node before the edge node connected to the destination network. The MPLS and IP routing tables are prepopulated on each switch during controller initialization, based on the current state of the network topology.

4.3.1 Path computation

Within the Segment Routing application there is a service called the Segment Routing Manager (see Appendix D), which is responsible for the path computation carried out in Segment Routing. It uses the ECMP-aware SPF algorithm, based on Dijkstra, to compute the shortest paths among the nodes of the SDN network.

Once all path computations are done, the Segment Routing Manager populates all routing rules into the IP and MPLS tables of the switches. This means that all possible shortest ECMP paths are established from the controller's initialization, and all incoming traffic to a specific destination is guided along the multiple paths without consulting the controller first (as long as the destination IP address is known in the IP routing table).

4.3.2 Group handling

Before the IP and MPLS tables are populated, group entries are configured by the Segment-Routing Driver with the action buckets related to packet forwarding operations, PUSH and POP labeling, and ECMP load balancing. In this implementation of Segment Routing, Indirect and Select groups are used for these purposes. The following scenarios, reviewed on the test topology, illustrate this further.

4.3.2.1 Packet forwarding

When packets are processed by either an IP or an MPLS table, most of the entries of these flow tables are associated with a group entry for forwarding actions. Within the associated group entry there are action buckets that can execute actions such as forwarding a packet to an output port, pushing an MPLS label and then sending a packet to an output port, or popping a label before sending a packet out. If only one label needs to be pushed and there are no redundant links around, these actions are handled by Select groups, which also set the destination MAC address of the next hop and decrement the MPLS TTL counter. An example of these actions can be seen in Appendix F.

4.3.2.2 Label stacking

When multiple labels need to be pushed onto an incoming packet, a combination of Select and/or Indirect groups is used to execute a feature called group chaining. The action bucket of the first group entry matched by the packet pushes an MPLS label and forwards the packet to another group entry, which can carry the same type of action bucket, continuing the process until the last matched group entry forwards the packet to an output port (see Appendix F for a practical example).

Along the path, each node the packet traverses carries out the actions matched with the label found on the top of the stack, extracting the label with the POP action contained in the appropriate flow entry of the MPLS table and sending the packet to a group entry whose only action is forwarding to an output port for the next hop. This process is repeated along the way until the packet reaches the bottom of the stack.

4.3.2.3 ECMP groups

When a neighbor node is reachable through more than one link or adjacency, the Segment-Routing Driver configures Select groups (or ECMP groups) that assign one action bucket per link under a single group. Each action bucket executes indirect group actions such as MPLS labeling, output port forwarding, or next-group forwarding. The particular feature here is that when traffic is forwarded to such a group, the action buckets within the group distribute the traffic along the multiple links assigned to the group (see Appendix F for a practical example).

By default, Segment Routing distributes the traffic in a round-robin fashion across the action buckets. Additionally, different links pointing to different neighbors can be included under the same ECMP group by configuring Adj-SIDs, identifying the set of links as one adjacency.

4.3.3 Segment Routing operation

As observed for Intent forwarding, a path in Segment Routing can be configured either dynamically or statically. There is no specific documentation on the exact path configuration process within the Segment Routing subsystem, but general steps were identified based on the initial testing made on Segment Routing.

4.3.3.1 Dynamic configuration

1. After network initialization (pipeline configuration, and SID and IP address assignment), or after receiving an OFPT_PORT_STATUS message (in case of switch or link failures), the Segment Routing application uses the Topology Listener to gather an update of the current network state.
2. The Default Routing Handler uses the Segment Routing Manager for the ECMP shortest path computation. The Default Routing Handler constructs the path and sends the information across the core.
3. The core sends the routing information to the Segment-Routing Driver, where it is translated into instructions.
4. The Default Group Handler (the Group Recovery Handler in failure cases) creates or edits the group entries for the necessary forwarding action on each switch.
5. The OF Flow Pusher sends the group entries to the infrastructure layer through OFPT_GROUP_MOD messages.
6. The OF Flow Pusher sends the flow entries that populate the IP and MPLS tables through OFPT_FLOW_MOD messages.

4.3.3.2 Static configuration

Manual configuration of paths is done from the controller's CLI. The model is based on the configuration of tunnels and policies. A tunnel defines the segment sequence, or MPLS label stack, that identifies the end-to-end path. A policy defines a specific traffic (source/destination addresses), the tunnel that the traffic will use, and the priority that the tunnel will have for that traffic. The following steps represent the static path establishment process:

1. After network initialization and the initial dynamic configuration (prepopulated IP and MPLS tables), the following commands are executed on the CLI for tunnel configuration:
a. tunnel <tunnel name>
b. node <SID 1>
c. node <SID 2>
d. node <SID n> exit (instructs the controller to execute the tunnel configuration)
2. The Policy Routing Handler identifies the switches tagged by the SIDs that are members of the tunnel, and identifies the adjacencies between them. Then, it sends forwarding instructions to the core to be passed on to the Segment-Routing Driver.
3. The Group Handler creates the necessary groups that will establish the segment sequence.
4. The OF Flow Pusher sends the group entries to the edge switches through OFPT_GROUP_MOD messages.
5. The following commands are executed on the CLI for the policy configuration:
a. policy <policy name> policy-type tunnel-flow
b. flow-entry ip <src IP address/mask> <dst IP address/mask>
c. tunnel <tunnel name>
d. priority <priority number>
e. exit (instructs the controller to execute the policy configuration)

6. The Policy Routing Handler identifies the tunnel assigned to the policy and prepares policy instructions based on the configured parameters, sending them across the core to the Segment-Routing Driver.
7. The OF Flow Pusher sends the necessary flow entries to the edge switches using OFPT_FLOW_MOD messages to populate the ACL table with the policies.

The flow entries in the ACL table override any default instruction contained in the IP and MPLS tables, ultimately forcing the desired traffic to follow the static path rather than the dynamically computed one. The tunnels and policies are unidirectional, which means that a pair of tunnels and policies must be configured to establish a bidirectional path. A worked example based on the test topology follows.
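
As a worked instance of the command templates above, using the SIDs and host subnets of the test topology (Fig. 3.3), a unidirectional tunnel forcing the bottom path and its associated policy could be configured as follows (names and values are illustrative):

    tunnel T-BOTTOM
    node 110
    node 108
    node 106
    exit
    policy P-H1-H2 policy-type tunnel-flow
    flow-entry ip 10.0.0.0/24 10.1.1.0/24
    tunnel T-BOTTOM
    priority 100
    exit

A mirror tunnel and policy would be needed for the return traffic, since tunnels and policies are unidirectional.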

4.4 OpenFlow pipeline utilization


Both path establishment methods require the traffic to be processed by the OpenFlow pipeline through different sequences of tables. Generally, with Intent forwarding the traffic is processed only by flow tables, while in Segment Routing the traffic is processed by flow tables and group tables.

4.4.1 Intent Forwarding pipeline

Intent forwarding only uses a flow table with match fields based on MAC address information (see Fig. 4.6), which is enough for Intent forwarding to execute L2-based routing decisions. There is no detailed information about the exact number of flow tables that can be used with Intent forwarding. However, since this application is intended in the long run to establish routes based not only on shortest-path decisions but also on policies such as QoS and traffic criteria, the use of an OpenFlow pipeline with more than one flow table is suggested. In the specific case of the experimentation made in this research, Intent forwarding only used one flow table (the MAC table).

[Figure: packet in -> MAC flow table (table 0) -> flow tables 1..N -> apply action sets -> packet out.]

Figure 4.6 - Proactive forwarding pipeline utilization

4.4.2 Segment Routing pipeline

Segment Routing uses several flow tables to perform L3-based decisions (the IP routing, MPLS, and ACL tables), and a group table to execute actions regarding MPLS labeling, multiple-path forwarding, and regular IP packet forwarding. Fig. 4.7 summarizes the OpenFlow pipeline utilization in Segment Routing.

[Figure: packet in -> VLAN flow table [0] -> MAC flow table [10] -> IP routing table [20] or MPLS forwarding table [30] -> ACL table [50] -> group table action buckets (push/pop MPLS, MPLS TTL, output port, output group) -> packet out.]

Figure 4.7 - Segment Routing pipeline utilization19

The tables are identified with specific ID numbers, and packets are processed
in incremental table-ID order. Since this pipeline carries L3 routing
decisions, it opens the door for inter-VLAN routing, where the VLAN flow table
can be used to tag or untag IP packets with an 802.1Q label (in this
experimentation, VLAN tagging is not used). The MAC flow table identifies IP
packets and MPLS labeled packets, and forwards each packet to either the IP or
the MPLS table accordingly.

Edge switches tend to make more use of the IP routing table, so they can
forward packets outside or inside the SR domain according to IP address
criteria, whereas the MPLS table is used more in intermediate switches to
forward packets based on the MPLS label they carry. The ACL table is only used
when tunnels and policies are configured; otherwise, the default entry for this
table bypasses packets directly to the group table.

19 Diagram based on the OpenFlow pipeline displayed in [24]

CHAPTER 5. EXPERIMENTATION AND RESULTS


The working principle of a path establishment method can introduce different
response times to the SDN controller, which may or may not have an
impact on an SDN network. This chapter describes the experimentation
process applied to measure the response time of Proactive Forwarding and
Segment Routing under different cases, in which the observations are
intended to identify general behaviors and tendencies in their response-time
performance.

5.1 Experimentation process

Four main tests were performed on both path establishment methods: network
performance, response time in front of network events, static path installation
time, and switch packet forwarding delay. For these tests, the measurement
tools described in Chapter 3 were used at specific locations of the network
topology. Fig. 5.1 illustrates the general measurement setup over the network
test topology.

[Figure: 10-switch ring topology (S1-S10) with h1 (iPerf/Mgen client) attached
to S1 and h2 (iPerf/Mgen server) attached to S6; Wireshark probes on the SDN
controller interfaces em1 (toward the switches) and wlan0 (toward a remote
computer), and on switch S2 interfaces Eth1/Eth2]

Figure 5.1 - Traffic generators and Wireshark probes location

In general terms, the traffic generators send periodic traffic of 1400-byte
UDP datagrams in intervals of 20 seconds between both hosts while performing
all measurements. On the iPerf server, statistics of jitter, packet loss
percentage, and throughput are captured. Wireshark is used for several
purposes, including traffic monitoring as a way to identify the proper links
on which to emulate failures (links S3-S4 and S8-S9), OpenFlow message
capturing between the ONOS controller and the switches (interface em1 of the
controller), and incoming and outgoing packet capturing on one of the switches
(S2 interfaces) as a way to measure the packet forwarding delay. Wireshark
also captures packets coming from a remote computer, on which the CLI commands
for static route configuration are executed.
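
A representative pair of iPerf invocations consistent with these parameters
(the exact flags and destination address are illustrative assumptions; the
address corresponds to h2 in the Proactive Forwarding topology) would be:

  h2$ iperf -s -u -i 1
  h1$ iperf -c 10.0.0.2 -u -l 1400 -b 100k -t 20

Here -u selects UDP, -l sets the 1400-byte datagram size, -b the offered
bitrate, and -t the 20-second interval, while the server side (-s) reports
jitter, packet loss, and throughput.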

For each measurement, 100 samples are taken in order to obtain proper
estimates of the average value, the sample standard deviation, and the mean
confidence interval at 95% confidence. For the confidence intervals, the
samples are plotted in a Normal Quantile-Quantile plot to verify that they
follow a normal distribution, ensuring that the measurement is not affected.
More information on this procedure can be found in Appendix G.
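
As a minimal sketch of this computation (not the exact script used in this
work), the average, sample standard deviation, and 95% confidence interval of
the mean can be obtained as follows, assuming normally distributed samples:

  import numpy as np
  from scipy import stats

  def summarize(samples, confidence=0.95):
      """Mean, sample std deviation, and t-based CI of the mean."""
      x = np.asarray(samples, dtype=float)
      n = x.size
      mean = x.mean()
      s = x.std(ddof=1)                       # sample standard deviation
      t = stats.t.ppf((1 + confidence) / 2, df=n - 1)
      half = t * s / np.sqrt(n)               # CI half-width
      return mean, s, (mean - half, mean + half)

  # Hypothetical response-time samples (ms):
  print(summarize([40.1, 41.3, 39.8, 40.7, 41.0]))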

5.1.1 Network performance measurement

The network performance is measured under steady state conditions, meaning
that the network topology does not experience any network event that could
change the network state during the test. Moreover, the test is performed
after the network has stabilized its initial traffic forwarding, in which the
paths are already pre-established. The goal is to observe how the network
topology behaves in terms of traffic forwarding without the influence of
external factors, allowing the limits of the topology to be identified and
ensuring proper testing using stable bitrate values.

Traffic is measured at five main bitrates: 100 Kbps, 1 Mbps, 10 Mbps, 100
Mbps, and 1 Gbps. The results to observe are the receiving bitrate and the
packet loss percentage. Using the most stable bitrate (0% packet loss and
end-to-end sustained bitrate), the round trip time and jitter are also measured.

5.1.2 Response time measurement in front of network events

The response time is measured under network event conditions, such as link
failures during packet transmission. The link failures are emulated by shutting
down specific links of the network (S3-S4 and S8-S9) in the Mininet console
while the traffic is being forwarded through those links (all possible paths in
the topology use either one or both links).

The Wireshark tool is used to measure the time it takes the controller to
reroute the paths and redirect the traffic. OpenFlow messages between the SDN
controller and the switches are captured, following [14].

The time frame is measured between the instant when the network event is
detected by the SDN controller (OFPT_PORT_STATUS message) and the last
flow message (OFPT_FLOW_MOD) sent by the SDN controller (see Fig. 5.2).
This is the time period in which the controller starts and finalizes the
process of path redirection, and hence the response time in front of network
events. The processing time of the switches to detect the failure and send the
port status messages is not taken into account. Thus, the measurement only
focuses on the performance of the SDN controller when reacting to network events.
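
As an illustrative sketch of how this time frame can be extracted from the em1
capture with tshark (assuming Wireshark's OpenFlow 1.3 dissector, where
openflow_v4.type values 12 and 14 correspond to OFPT_PORT_STATUS and
OFPT_FLOW_MOD):

  $ tshark -r event-capture.pcapng \
           -Y 'openflow_v4.type == 12 || openflow_v4.type == 14' \
           -T fields -e frame.time_epoch -e openflow_v4.type

The response time is the difference between the timestamp of the first type-12
frame and that of the last type-14 frame; the processing time described below
uses the first type-14 frame instead.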

The response time includes the processing time of the controller (Intent
processing and path computation). This processing time frame is measured
between the instant when the network event is detected by the SDN controller
(OFPT_PORT_STATUS message) and the first flow message
(OFPT_FLOW_MOD) sent by the SDN controller (see Fig. 5.2).

Figure 5.2 - Controller's response time frame

5.1.2.1 Segment Routing processing time

In the case of Segment Routing, the ONOS project documentation is not specific
about the processing time frame, but this research suggests that part of the
processing performed by the controller, like path computation, is executed
during the group entries transmission to the switches (see Fig. 5.3). Before
this transmission, the controller uses the Group Recovery Handler service to
determine the action buckets related to the affected links, and instructs the
switches to empty those buckets in a group through OFPT_GROUP_MOD
messages. During this process, other tasks like path instructions cannot yet be
created, since they need updated group information to attach as actions to flow
entries.

Figure 5.3 - Segment Routing processing time frame



5.1.3 Static path installation time

Static paths are measured under steady state conditions. The time that elapses
between the command execution on the CLI and the last OFPT_FLOW_MOD
message sent by the controller is the controller's response time to program and
install a static path in the network. This is done in order to observe the
effects of the controller's response during a manual path installation.

The Wireshark tool probes the network in the same disposition as in the
previous section. The only difference is that on the computer hosting the SDN
controller, Wireshark probes not only the interface connecting to the virtual
network (em1), but also a secondary interface where it can capture instructions
sent by a remote computer (wlan0), as previously seen in Fig. 5.1. This remote
computer is used to initiate a remote CLI session with the controller and to
send the execution of the path installation. Wireshark captures the execution
timestamp and compares it against the subsequent OpenFlow message timestamps,
allowing the time frame to be measured under a common time reference.
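
An illustrative sketch of this dual-interface capture (interface names taken
from Fig. 5.1; identifying the remote CLI frames by their SSH traffic is an
assumption, since the session transport depends on the controller build):

  $ tshark -i em1 -i wlan0 -w install-capture.pcapng
  $ tshark -r install-capture.pcapng \
           -Y 'ssh || openflow_v4.type == 14 || openflow_v4.type == 15' \
           -T fields -e frame.time_epoch -e frame.protocols

The installation time is then the difference between the timestamp of the
command frame captured on wlan0 and that of the last OFPT_FLOW_MOD frame on em1.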

5.1.3.1 Segment Routing path installation time

Since the path configuration in the CLI for this method comprises tunnel and
policy configuration, there are two separate executions sent to the controller,
each of which triggers a set of instructions sent by the controller to the
infrastructure layer (see Fig. 5.4). For this reason, it is assumed that the
response time of the path installation is the addition of both process time
frames (Eq. 5.1).

Figure 5.4 - Segment Routing path installation time frame

RT = TCT + PCT (5.1)

In Eq. 5.1, RT is the response time, TCT the tunnel configuration time, and PCT
the policy configuration time.

5.1.4 Switch packet forwarding delay measurement

In this case, time is measured under steady state conditions. The delay
observed for an IP packet to be forwarded by a switch in the network
corresponds to the time the IP packet takes between entering an input switch
port and being forwarded to an output switch port. This is done in order to
observe the effects of how the path methods use the OpenFlow pipeline of the
switches. In this case, the Wireshark tool is used on switch S2, chosen
randomly for this test, to probe its interfaces S2-eth1 as the input port and
S2-eth2 as the output port.

5.1.5 Balanced path load scenario

At this point, we define an additional test scenario with the purpose of
testing both path establishment methods under the same path load (number of
routes). Since Segment Routing uses SIDs that are mapped to loopback IP
addresses on each node, this automatically creates possible paths related to
those addresses (including the hosts' default gateways). This makes the
network event testing depend only on the working principle of each routing
method, and not on an exclusive route-by-route comparison.

In order to emulate the same path load as Segment Routing, the Intent
Forwarding test topology is extended with additional hosts on each node (as if
they were the loopback and gateway addresses of Segment Routing). Fig. 5.5
shows the new Intent Forwarding topology for this test; a sketch of the
corresponding Mininet change follows the figure.

[Figure: the same 10-switch topology controlled by ONOS Blackbird 1.1.0-rc2 at
10.60.1.1:6633 through interface em1, built with CPqD OF 1.3.4 switches (user
space) interconnected over TCP, with hosts h1 (10.0.0.1/24) and h2
(10.0.0.2/24) plus the additional per-switch hosts]

Figure 5.5 - Proactive Forwarding path load topology
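
A minimal sketch of this change, following the structure of TopoTestPF.py in
Appendix B (the extra host names and the loop are illustrative; in the thesis
topology the total reached 14 hosts, the edge switches holding the additional
gateway-like hosts):

  # Inside SRTopo.__init__, after the switches are created: attach one
  # extra host per switch to emulate the per-node loopback/gateway
  # addresses that Segment Routing creates automatically.
  switches = [s1, s2, s3, s4, s5, s6, s7, s8, s9, s10]
  for i, sw in enumerate(switches, start=1):
      extra = self.addHost('hx%d' % i)   # hypothetical host name
      self.addLink(extra, sw)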



5.2 Measurement results


Both the Proactive Forwarding and Segment Routing methods have demonstrated
their capacity to sustain traffic during network events and manual
configuration of paths, despite their differences in path processing times.
The following subsections illustrate the results for each path mechanism.

5.2.1 Test topology traffic performance

The average results observed in the test are illustrated in Table 5.1. At
rates of 100 Mbps and above there is huge packet loss detected on the network,
and at the rates of 1 Mbps and 10 Mbps, despite not presenting packet losses,
the throughput is not maintained throughout the network; this means that the
subsequent tests have to be made below 1 Mbps. This limitation is introduced
by the virtual switch not working in kernel space, so it cannot take full
advantage of the hardware to sustain higher bandwidths.

Table 5.1 - Test topology traffic performance


            Proactive Forwarding         Segment Routing
Bitrate     Packet Loss  Throughput      Packet Loss  Throughput
100 Kbps    0%           100 Kbps        0%           100 Kbps
1 Mbps      0%           412 Kbps        0%           1 Mbps
10 Mbps     0%           2.24 Mbps       1.60%        10 Mbps
100 Mbps    78%          3.31 Mbps       90%          9.9 Mbps
1 Gbps      96%          2.89 Mbps       98%          9.24 Mbps

Using the 100 Kbps rate, the jitter and round trip time (RTT) are measured to
keep track of the initial conditions of the network before the subsequent
tests. Using Proactive Forwarding, the average round trip time observed was
1.54 ms, with an average jitter of 43 µs; in Segment Routing, the average
round trip time observed was 2.734 ms, with an average jitter of 55.9 µs. Both
jitter values stay in very low ranges (below 1 ms), which does not impact the
traffic performance of the network.

5.2.2 Response to network events

Since the path computation is dynamic, the SDN controller did not always
compute the same initial path on the network. In all cases of Proactive
Forwarding, it resolved the shortest paths (paths 1 or 2 as described in
Chapter 3). The controller always recalculates the shortest path after a
failure, which in all cases was also path 1 or 2 as described in Chapter 3. In
the case of Segment Routing, both paths are initially used through ECMP groups
installed on the edge switches. After a failure, the controller stays with
either one of the paths (depending on the link failure location).

Table 5.2 displays the minimum, maximum, and average response time that
the controller took to process the path redirection and to completely install
it into the network. It also shows the sample standard deviation (σ) and the
95% Confidence Interval (CI).

Table 5.2 - Network event performance


Proactive Forwarding
Description   Processing Time       Response Time
Minimum       22.513 ms             28.554 ms
Maximum       55.978 ms             64.292 ms
Average       25.226 ms             40.753 ms
Sample σ      4.89 ms               6.178 ms
95% CI        24.25 ms - 26.2 ms    39.52 ms - 41.97 ms

Segment Routing
Description   Processing Time          Response Time
Minimum       112.232 ms               197.89 ms
Maximum       264.361 ms               435.843 ms
Average       122.399 ms               265.326 ms
Sample σ      20.814 ms                56.432 ms
95% CI        118.27 ms - 126.53 ms    254.13 ms - 276.52 ms

5.2.2.1 Jitter measurement

In all the measurements, minimal traffic interruption was observed in both
methods. In Proactive Forwarding, there was 0.0012% average packet loss
between hosts h1 and h2, with 95% confidence that the mean jitter stays in the
range between 59.2 µs and 70.9 µs. In Segment Routing, the average packet loss
was 0.0034%, with 95% confidence that the mean jitter is maintained within the
range between 52.6 µs and 61.3 µs.

5.2.3 Response to static path installation

For the static path installation measurement, a point-to-point intent is
manually configured on the controller using the ONOS built-in app
push-test-intent. 100 intents were pushed to measure 100 path installations;
the app execution was made from a remote computer, and its timestamp captured
by Wireshark as described earlier in Fig. 5.1. Table 5.3 illustrates the
minimum, maximum, and average path installation response time, as well as the
sample standard deviation (σ) and the 95% Confidence Interval (CI).

Table 5.3 - Static path installation response


              Proactive Forwarding      Segment Routing
Description   Path installation time    Path installation time
Minimum       27.125 ms                 5.371 ms
Maximum       77.752 ms                 16.729 ms
Average       52.29 ms                  11.452 ms
Sample σ      13.679 ms                 2.488 ms
95% CI        49.57 ms - 55 ms          10.959 ms - 11.946 ms

Recall that in Segment Routing, the path installation time is considered as
the sum of the tunnel configuration and policy configuration times described
in Eq. 5.1, which were measured separately to construct the path installation
time. For detailed sample information tables, consult Appendix H.

5.2.4 Switch packet forwarding delay

For Proactive Forwarding, packet switching through the measured switch (S2)
had an average delay of 186.49 µs, with 95% confidence that the mean delay is
within the range between 178.358 µs and 194.621 µs. In Segment Routing, the
average packet forwarding delay observed on the switch was 290.2 µs, with 95%
confidence that the mean delay is within the range between 282.796 µs and
297.603 µs. A complete table of the measured samples can be found in Appendix H.

5.2.5 Response to network events with balanced path load

Maintaining the Intent Forwarding topology with 14 hosts proved difficult for
the ONOS controller. Normally, according to the number of hosts, switches, and
links, the controller submits a specific number of intents, installing a
specific number of flow entries. In this case, every time the controller
recovered the failed link to start the test all over again, it installed more
flow entries than before, increasing the main flow registry stored in the
controller. Fig. 5.6 displays the CLI output of the number of flows and
intents for every time the test is executed.

Figure 5.6 - Counters summary CLI output



The number of flows increased up to a certain point, where the number of
intents started to increase slightly, and these values were maintained during
further testing. This behavior caused the routes to be restored with a
considerable amount of delay, affecting the traffic performance with maximum
packet losses and jitter of 51% and 140 ms respectively. This happened even
though the CPU took a maximum load of 68% and the RAM a load of 17%, which in
theory should have handled the rerouting without affecting the traffic.

The situation resulted in an average controller response time of 9 s, and a
processing time of 8.7 s. Nevertheless, at the beginning of the measurement,
with the controller rebooted, the initial measurements were considerably
lower, with jitter values of less than 1 ms and no packet losses (more
realistic results). This suggests that the first measurements made after every
controller reboot reflect the real behavior expected from the controller with
the specified path load. The rest of the measurements are most probably a
result of stability issues of this ONOS version at high processing loads, and
not of Intent Forwarding itself.

When the number of flows starts to increase, the measurements degrade
considerably. For this reason, out of the 100 samples measured on the
controller, only 20 samples corresponded to the expected behavior of the
controller. Both tables of these samples can be found in Appendix H, where
each of them follows a normal distribution tendency, identifying them as
separate behaviors and not random effects.

Table 5.4 displays the minimum, maximum, and average response time, the
sample standard deviation (σ), and the 95% Confidence Interval (CI) of the 20
samples measured.

Table 5.4 - Proactive Forwarding performance with balanced path load


Proactive Forwarding
Description   Processing Time          Response Time
Minimum       24.87 ms                 83.419 ms
Maximum       45.01 ms                 227.025 ms
Average       35.007 ms                154.983 ms
Sample σ      7.414 ms                 40.457 ms
95% CI        31.537 ms - 38.477 ms    136.048 ms - 173.917 ms

5.3 Results observations

The measurements performed are reviewed and analyzed in Fig. 5.7, comparing
the results of both mechanisms working on the network topology previously
described.

[Figure: bar chart on a logarithmic time axis (0.10 ms to 1000.00 ms)
comparing Segment Routing vs Proactive Forwarding averages:
  Switch Forwarding Delay:                  0.29 ms vs 0.186 ms
  Manual Path Installation Time:            11.45 ms vs 52.29 ms
  Network Event Response Time:              265.33 ms vs 40.75 ms
  Response Time with Balanced Path Load:    265.33 ms vs 154.98 ms
  Controller Processing Time:               122.40 ms vs 25.23 ms
  Processing Time with Balanced Path Load:  122.40 ms vs 35.00 ms]

Figure 5.7 - Test results comparison

5.3.1 Packet switching

The switch forwarding delay shows the effects of the OpenFlow pipeline usage
introduced by both mechanisms. On one hand, in Proactive Forwarding, each
ingress packet starts a lookup for an entry in the first flow table (MAC table
0). It can go to a next flow table, be forwarded to an egress port, or be sent
to the controller, depending on the actions found in the matched flow entry.
In the case of our topology, only one flow table was created, with several
flow entries. This means that an ingress packet only had to be processed by
one table before being forwarded.

On the other hand, Segment Routing uses a series of flow tables plus the group
table, as described in Chapter 4. In this case, at least 4 tables are used:
the IP routing table, the MPLS table, the Access List (ACL) table, and the
group table, which is used to apply actions through the action buckets within
each group.

This is an interesting observation, since this processing can influence the
global traffic delay in the network. In this case, an average round trip time
(RTT) of 1.54 ms was observed for Proactive Forwarding, and 2.734 ms for
Segment Routing. This does not represent a great difference, since it is
measured in a virtualized network, but the difference could increase in
realistic implementations like WAN networks.

5.3.2 OpenFlow messaging and method’s operation influence

Segment Routing tends to have a higher response time than Proactive
Forwarding, except on static path configuration events. This tendency is
mainly due to the number of OpenFlow messages transmitted and the operation of
each method in ONOS controllers.

5.3.2.1 OpenFlow messaging

The number of messages transmitted by the controller is a factor to consider.
During network events, Segment Routing had to use both OFPT_GROUP_MOD
and OFPT_FLOW_MOD messages (see Fig. 5.8). The number of
OFPT_FLOW_MOD messages is greater due to their use in modifying flow
entries in more than one flow table (the MPLS and IP tables). This caused the
exchange of a total of 742 OpenFlow messages, including port status and barrier
messages. Meanwhile, Proactive Forwarding only used a set of
OFPT_FLOW_MOD messages to modify one flow table (the MAC table),
exchanging 206 OpenFlow messages, including port status and barrier messages.

[Figure: in Segment Routing, OFPT_GROUP_MOD messages modify the group table
while OFPT_FLOW_MOD messages modify the IP and MPLS tables; in Proactive
Forwarding, OFPT_FLOW_MOD messages modify only the MAC table]

Figure 5.8 - OpenFlow messaging

During static path configuration, Segment Routing takes less time to
statically install paths, due to the OpenFlow messages transmitted from the
controller to the switches. While in Proactive Forwarding the controller sends
OFPT_FLOW_MOD and pairs of OFPT_BARRIER_[REQUEST|REPLY] messages,
Segment Routing only had to send a few OFPT_GROUP_MOD and
OFPT_FLOW_MOD messages to the edge switches to establish a path across
the network. In most cases, Proactive Forwarding exchanged a total of 54
OpenFlow messages, while Segment Routing exchanged only 7. In the case of
Segment Routing static paths, the OFPT_FLOW_MOD messages introduce new
entries into the ACL table, which override the actions of both the MPLS and IP
table entries related to the specific traffic.

5.3.2.2 Methods operation

The amount of OpenFlow messages transmitted is directly related to the
operation of the path establishment methods in ONOS controllers. This is
noticeable during static path configuration events, where the working
principle of Segment Routing is used to establish the tunnel (segment
sequence) on the network. Recall that source routing techniques like Segment
Routing establish the path from the initial node (hence the name "source
routing"), in which the segment sequence is introduced into the IP packet at
the ingress of the SR domain.

Translated to the SDN environment, this means that, in Segment Routing, the
controller only needs to send OpenFlow messages (OFPT_GROUP_MOD and
OFPT_FLOW_MOD) to the edge switches in order to establish the path, while
in Proactive Forwarding the controller needs to send OpenFlow messages
(OFPT_FLOW_MOD) to all switches related to the configured path (see Figure
5.9).

[Figure: SR flows are installed only at the edge switches of the static path,
while PF flows are installed on every switch along the configured path]

Figure 5.9 - Flows allocation during static path configuration

During dynamic configuration, the behavior of Proactive Forwarding is the
same. However, in Segment Routing, the application used by the ONOS controller
only allocates one segment (the destination segment) on the MPLS label stack
of the ingress traffic. Thus, it does not automatically allocate several
labels to create a specific tunnel. This is because the routing information
within the MPLS and IP tables allows the packet to be routed through the
default shortest path, without the need for other labels to guide the packet
towards the destination.

During network events, the working principle of Segment Routing does not help
much to maintain a lower response time. This has to do with the dynamic
allocation of flow and group entries to all tables of the switches. A path may
be established from an edge switch, but this path depends on the routing
information allocated in the tables of each switch. So, when a network failure
occurs, the controller needs to edit the necessary flow and group entries on
all switches whose related paths are affected by the failure. The controller
re-computes all possible paths (including the ones related to the IP loopback
addresses and default gateways) on the changed topology, and edits the flow
and group tables accordingly. On the other hand, Proactive Forwarding only
re-computes the affected route, plus any other route that can carry control
traffic like LLDP messages, and edits the MAC tables accordingly (see Fig. 5.10).

[Figure: after a failure, PF re-computes only the affected routes, while SR
re-computes all possible paths on the changed topology]

Figure 5.10 - Path establishment after failures

Even though the test was also made with a Proactive Forwarding path load
matching the default paths configured by Segment Routing, Segment Routing
still presents a higher response time, due to the number of flow and group
tables to update during a single network failure.

5.3.3 Jitter comparison

There is not much to say about the jitter variation during network events
compared with the one observed under steady state conditions. Both
measurements show no considerable difference, maintaining their average values
below 1 ms (see Fig. 5.11) and showing no traffic interruption. This leads to
the conclusion that, for this experimentation in particular, there is no
impact on the traffic performance during network events.

[Figure: average jitter at 100 Kbps, steady state vs network events:
Proactive Forwarding 0.044 ms vs 0.065 ms; Segment Routing 0.056 ms vs
0.057 ms, all values well below 1 ms]

Figure 5.11 - Initial conditions VS Network events



CONCLUSIONS
Path establishment methods can induce response time on an SDN controller
based on their working principle and their interaction with the OpenFlow
protocol. In the case of the studied methods (Proactive Forwarding and Segment
Routing), the induced response time is more significant during network
failures, due to the increased modification of OpenFlow tables in most of the
switches to reestablish the paths. From this point, there are several
observations:

1. Despite the fact that Segment Routing establishes the paths from the edge
nodes, it cannot take full advantage of this behavior during network failures,
since the paths depend on the routing information stored dynamically in the IP
and MPLS tables.

2. In both methods, the controller sends several sets of repeated
OFPT_FLOW_MOD and OFPT_GROUP_MOD messages to a switch,
carrying the same instructions. This increases the number of messages that
the controller sends to the switches, and hence the response time of a
single operation.

3. The overall traffic performance can be maintained in Segment Routing as
long as the topology permits the establishment of multiple independent paths
to execute FRR in case of failures. However, behind the scenes, the
controller's response time increases, due to the re-computation of all
possible paths affected by the failure.

4. With a balanced path load, the tendency between Segment Routing and
Proactive Forwarding is expected to stay the same. Segment Routing continues
to induce the controller to modify more OpenFlow tables than Proactive
Forwarding, even when both carry the same path load. Recall that Segment
Routing is designed for L3 routing, and will always need to handle L3 routing
information.

Static path configuration is where Segment Routing takes full advantage of its
working principle. In this situation, the controller only needs to send
OpenFlow messages to the edge nodes to establish the path. This case is
interesting, because business applications in the Application layer can use
Segment Routing to establish a policy-based tunnel without introducing
significant processing demand on the controller. This particular advantage is
the main reason for other investigations on the subject, which suggest Source
Routing (not specifically Segment Routing) as a possible alternative for SDN
packet forwarding [7].

On another matter, it has been seen how the usage of the OpenFlow pipeline can
introduce additional delay in switch packet forwarding. Segment Routing, in
this case, presented a higher packet forwarding delay, related to the number
of OpenFlow tables processing every ingress IP packet. Although in the virtual
network this did not have an impact on the traffic performance, it is an
interesting observation to take into account for realistic networks, where the
SR nodes are geographically separated, and the global traffic delay comes to
play an important role.

For path establishment methods, every additional response time introduced to
the controller for a single operation like path redirection is a factor to
take into account in matters like scalability and costs (OPEX and CAPEX). If
the processing demand of the controller is rapidly increased by these
operations (plus other operations regularly executed by the controller), there
would be a need to extend and maintain more hardware resources on the
controller, depending on the extent of the infrastructure layer (number of
network devices and hosts).

Maintaining a low response time on the SDN controller must be part of the
effort to achieve performance efficiency: performance that can guarantee an
efficient use of resources in terms of hardware and energy. Moreover, the
adaptation of path establishment methods to SDN allows the use of lightweight
network devices that reduce energy consumption, since they do not execute
routing protocols or maintain constant routing updates, as normally happens in
distributed systems.

From the point of view of the test topology

The observed results are dependent on the hardware used. This means that with
different hardware, the response time values can change considerably for both
path establishment methods. Nevertheless, based on their working principle and
their operation in SDN, the same tendency can be expected in these results, as
long as the same implementations of these path mechanisms are used.

Recall that the Segment Routing implementation used for this experimentation
is a prototype designed to work with the MPLS data plane. According to the
IETF, there can be other types of implementations of Segment Routing, such as
extensions for OSPF, ISIS, and BGP, or an instantiation over IPv6, in which
the SIDs are represented by IPv6 addresses [12]. A test scenario using either
of these types in SDN could change the tendency of the results.

Future work
This research offers several areas for further investigation and development. Key
questions surfaced as a consequence of the experimentation:

1. Can the forwarding actions of Segment Routing be carried out by flow tables?

According to the OpenFlow specification [3], and as described earlier in
Chapter 1, actions like MPLS push/pop and TTL decrement can be carried out by
the Action Set at the end of the pipeline processing, which can be written
from a flow table. This may suggest a way of not using the group tables,
reducing the pipeline processing and the OpenFlow messages towards the
infrastructure layer.

2. Can the repeated sets of OpenFlow messages be reduced in both path
establishment methods?

There is no specific ONOS documentation explaining why the controller sends
these repeated sets of instructions to the infrastructure layer, but further
investigation can be carried out in order to analyze the possibility of
reducing these sets of messages without compromising the operation of the path
establishment methods.

3. Can the Segment Routing application be enhanced to support controller
cluster implementations?

Being an ONOS subsystem, it offers the possibility to add a store service for
synchronization among servers in a cluster, or among other instances of the
controller. Further investigation would be needed about what information to
synchronize, and how extensive the software development would be, in order for
the cluster to run the Segment Routing application as one entity towards the
infrastructure layer.

4. How would the controller scale with increasing traffic and an increasing
number of switches?

This investigation can depart from the study of workloads on SDN controllers
by using benchmark tools like Cbench, although these would need to be
developed further to support current versions of OpenFlow. For Intent
Forwarding, ONOS offers a series of tests involving the push-test-intent
application over a controller cluster to observe the performance [34]. This
study could lead to an estimation of the scalability of the SDN controller
using both path establishment methods.

REFERENCES
[1]. DeCusatis, C. (2012). ODIN Volume 3: Software Defined Networking and
Open Flow. Retrieved from IBM: http://www-01.ibm.com/common/ssi/cgi-
bin/ssialias?subtype=WH&infotype=SA&appname=STGE_QC_QC_USEN
&htmlfid=QCW03021USEN&attachment=QCW03021USEN.PDF
[2]. Nadeau, T. D., & Gray, K. (2013). SDN: Software Defined Networks. In T.
D. Nadeau, & K. Gray, SDN: Software Defined Networks (pp. 47-69).
Sebastopol: O'Reilly Media Inc.
[3]. OpenFlow Switch Specification Version 1.3.4. (2014). Retrieved from Open
Networking Foundation:
https://www.opennetworking.org/images/stories/downloads/sdn-resources/onf-specifications/openflow/openflow-spec-v1.3.0.pdf
[4]. Cisco Systems Inc. (1998). In K. Downes, M. Ford, K. H. Lew, S. Spanier,
& T. Stevenson, Internetworking Technologies Handbook (pp. 63-75).
Indianapolis: Macmillan Technical Publishing.
[5]. He, J., & Rexford, J. (2008). Towards Internet-wide Multipath Routing.
Retrieved from Princeton University:
https://www.cs.princeton.edu/~jrex/papers/multipath08.pdf
[6]. Geoffray, P., & Hoefler, T. (2008). Adaptive Routing Strategies for Modern
High Performance Networks. 16th Annual IEEE Symposium on High
Performance Interconnects (pp. 165-172). Stanford, USA: IEEE Computer
Society. Retrieved from ETH Zürich:
http://htor.inf.ethz.ch/publications/img/mx_routing-geoffray.pdf
[7]. Soliman, M., Nandy, B., Lambadaris, I., & Ashwood-Smith, P. (2012).
Source Routed Forwarding with Software Defined Control, Considerations
and Implications. 8th International Conference on emerging Networking
EXperiments and Technologies (CoNEXT). Nice. Retrieved from Sigcomm:
http://conferences.sigcomm.org/conext/2012/eproceedings/student/p43.pdf
[8]. International Telecommunication Union. (2003). ITU-T Recommendation
G.114: One-way transmission time. Retrieved from ITU:
http://handle.itu.int/11.1002/1000/6254-en?locatt=format:pdf&auth
[9]. Szigeti, T., & Hattingh, C. (2004). Quality of Service Design Overview.
Retrieved from Cisco Press:
http://www.ciscopress.com/articles/article.asp?p=357102
[10]. Cisco Systems Inc. (2006). Understanding Delay in Packet Voice Networks.
Retrieved from Cisco Systems:
http://www.cisco.com/c/en/us/support/docs/voice/voice-quality/5125-delay-
details.html
[11]. PepeInjak, I., & Guichard, J. (2001). MPLS and VPN Architectures.
Indianapolis: Cisco Press.

[12]. Filsfils, C., Previdi, S., Decraene, B., Litkowski, S., & Shakir, R. (2015).
Segment Routing Architecture: draft-ietf-spring-segment-routing-05.
Retrieved from Internet Engineering Task Force (IETF):
https://tools.ietf.org/pdf/draft-ietf-spring-segment-routing-05.pdf
[13]. Apache Software Foundation. (2010). Apache ZooKeeper™. Retrieved
from Apache website: https://zookeeper.apache.org/
[14]. Berde, P., Gerola, M., Hart, J., Higuchi, Y., Kobayashi, M., Koide, T., . . .
Parulkar, G. (n.d.). ONOS: Towards an Open, Distributed SDN OS.
Retrieved from Stanford University:
http://www-cs-students.stanford.edu/~rlantz/papers/onos-hotsdn.pdf
[15]. OpenWrt. (2015). About OpenWrt. Retrieved from OpenWrt:
http://wiki.openwrt.org/about/start
[16]. Mininet. (2015). Mininet An Instant Virtual Network on your Laptop (or other
PC). Retrieved from Mininet organization: http://mininet.org/
[17]. Das, S. (2014). Installation Guide. Retrieved from Onosproject Wiki:
https://wiki.onosproject.org/display/ONOS/Installation+Guide
[18]. Das, S. (2014). Software Architecture. Retrieved from Onosproject:
https://wiki.onosproject.org/display/ONOS/Software+Architecture
[19]. GitHub. (2015). Mininet VM Images. Retrieved from GitHub:
http://downloads.mininet.org/mininet-2.2.0-150106-ubuntu-14.04-server-
amd64.zip
[20]. ON.Lab. (2015). ONOS Java API (1.0.1). Retrieved from ONOS Project
Wiki: http://api.onosproject.org/1.0.1/
[21]. Koshibe, A. (2015). Intent Framework. Retrieved from Onosproject Wiki:
https://wiki.onosproject.org/display/ONOS/Intent+Framework
[22]. ON.Lab. (2015). Package org.onosproject.net.intent. Retrieved from ONOS
Project Wiki: http://api.onosproject.org/1.1.0/
[23]. ON.Lab. (2015). Package org.onosproject.net.topology. Retrieved from
ONOS Project Wiki: http://api.onosproject.org/1.1.0/
[24]. Das, S. (2014). Software Architecture. Retrieved from Onosproject:
https://wiki.onosproject.org/display/ONOS/Software+Architecture
[25]. OpenDaylight Foundation. (2015). OpenDaylight Controller:MD-SAL:L2
Switch. Retrieved from OpenDaylight Wiki:
https://wiki.opendaylight.org/view/OpenDaylight_Controller:MD-
SAL:L2_Switch
[26]. Stallings, W. (2014). Data and Computer Communications (pp. 592-593,
607, 618-624). Pearson Education Inc.
[27]. Al-Shabibi, A. (2015). Basic ONOS Tutorial. Retrieved from Onosproject
Wiki: https://wiki.onosproject.org/display/ONOS10/Basic+ONOS+Tutorial
[28]. Apache Software Foundation. (2010). Apache ZooKeeper™. Retrieved
from Apache website: https://zookeeper.apache.org/
[29]. ON.LAB. (2014). Introducing ONOS - a SDN network operating system for
Service Providers. Retrieved from OnosProject Organization:
http://onosproject.org/wp-content/uploads/2014/11/Whitepaper-ONOS-
final.pdf

[30]. OpenDaylight Foundation. (2015). BGP LS PCEP:PCEP Use Cases.


Retrieved from OpenDaylight Wiki:
https://wiki.opendaylight.org/view/BGP_LS_PCEP:PCEP_Use_Cases#Ho
w_to_configure_and_use_PCEP_segment_routing
[31]. OpenDaylight Foundation. (2015). BGP LS PCEP:Programmer Guide.
Retrieved from OpenDaylight Wiki:
https://wiki.opendaylight.org/view/BGP_LS_PCEP:Programmer_Guide#P
CEP
[32]. OpenDaylight Foundation. (2015). OpenDaylight Controller:MD-SAL:L2
Switch. Retrieved from OpenDaylight Wiki:
https://wiki.opendaylight.org/view/OpenDaylight_Controller:MD-
SAL:L2_Switch
[33]. Zhang, S. (2015). Experiment A&B Plan - Topology (Switch, Link) Event
Latency. Retrieved from ONOS Project Wiki:
https://wiki.onosproject.org/pages/viewpage.action?pageId=3441825
[34]. Zhang, S. (2015). Experiment C Plan - Intent Install/Remove/Re-route
Latency. Retrieved from
https://wiki.onosproject.org/pages/viewpage.action?pageId=3441828
[35]. Quiroz, D., & Cervelló-Pastor, C. (2015). Influence of the path establishment
method on an SDN controller time response. Jornadas de Ingeniería
Telemática (JITEL) 2015. Palma de Mallorca, Spain.

ACRONYMS

• ACL: Access List
• ADJ-SID: Adjacency Segment ID
• API: Application Programmable Interface
• ARP: Address Resolution Protocol
• BGP: Border Gateway Protocol
• CAPEX: Capital Expenditure
• CI: Confidence Interval
• CLI: Command Line Interface
• CPU: Central Processing Unit
• DPID: Datapath Identifier
• DSCP: Differentiated Services Code Point
• ECMP: Equal Cost Multipath
• EGP: Exterior Gateway Protocol
• FIB: Forwarding Information Base
• FRR: Fast Reroute
• GUI: Graphical User Interface
• ICMP: Internet Control Message Protocol
• IETF: Internet Engineering Task Force
• IGP: Interior Gateway Protocol
• IP: Internet Protocol
• IPFRR: IP Fast Reroute
• ISIS: Intermediate System-to-Intermediate System
• ITU: International Telecommunication Union
• JDK: Java Development Kit
• LDP: Label Distribution Protocol
• LFIB: Label Forwarding Information Base
• LIB: Label Information Base
• LLDP: Link Layer Discovery Protocol
• LSP: Label Switched Path
• LSR: Label Switching Router
• MAC: Media Access Control
• MGEN: Multi-Generator
• MPLS: Multiprotocol Label Switching
• NBI: Northbound Interface
• NIC: Network Interface Card
• NODE-SID: Node Segment ID
• NOS: Network Operating System
• OAM: Operations, Administration and Maintenance
• ONOS: Open Network Operating System
• OPEX: Operational Expenditures
• OS: Operating System
• OSGi: Open Services Gateway initiative
• OSPF: Open Shortest Path First
• PBB: Provider Backbone Bridge
• PCEP: Path Computation Element Protocol
• PF: Proactive Forwarding
• PHP: Penultimate Hop Popping
• QoE: Quality of Experience
• QoS: Quality of Service
• RAM: Random Access Memory
• RTT: Round-Trip Time
• SDN: Software Defined Networking
• SID: Segment Identifier
• SLA: Service Level Agreement
• SPF: Shortest Path First
• SPRING: Source Packet Routing In Networking working group
• SR: Segment Routing
• TCP: Transmission Control Protocol
• TLS: Transport Layer Security
• TTL: Time To Live
• UDP: User Datagram Protocol
• VLAN: Virtual Local Area Network
• VM: Virtual Machine
• VoIP: Voice over IP
• VPN: Virtual Private Network
• WAN: Wide Area Network

APPENDIX A: OPENFLOW PIPELINE PROCESSING


The following diagram is a complete scheme, extracted from the OpenFlow 1.3.4
specification, explaining the methodology of the pipeline processing:

Annex 1 - OpenFlow pipeline processing20

When IP packets are processed by a flow table, the packet header is compared
with the match fields of the flow entries. The flow entry with the highest
match priority is used to apply the flow actions to the packet. Three main
instructions are applied to the packet: packet modification, in which match
fields are updated through the Apply-Actions instruction in order to prepare
the packet (if needed) to match the next flow table; Action Set update, which
updates the action list within the Action Set to be executed at the end of the
pipeline processing; and metadata update, which carries control information
between flow tables. Annex 2 shows the flow diagram of the matching process.

20 Diagram retrieved from [3]

Annex 2 - Match process flow diagram21

21 Diagram retrieved from [3]
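
As a minimal illustration of the matching step described above (a sketch under
assumed simplifications, not ONOS or CPqD code: one packet represented as a
dict, and no table-miss or metadata handling):

  from dataclasses import dataclass, field
  from typing import Callable, Optional

  @dataclass
  class FlowEntry:
      priority: int
      match: Callable[[dict], bool]          # predicate over packet fields
      apply_actions: Callable[[dict], dict] = lambda pkt: pkt
      write_actions: list = field(default_factory=list)
      goto_table: Optional[int] = None       # None ends the pipeline

  def process_table(entries, packet, action_set):
      """Process one flow table; return the next table id or None."""
      matches = [e for e in entries if e.match(packet)]
      if not matches:
          return None                        # table-miss handling omitted
      entry = max(matches, key=lambda e: e.priority)
      packet.update(entry.apply_actions(packet))   # packet modification
      action_set.extend(entry.write_actions)       # executed at pipeline end
      return entry.goto_table

  # Example: a single entry that sends IP packets to table 20.
  table0 = [FlowEntry(priority=10,
                      match=lambda p: p.get('ethertype') == 'ip',
                      goto_table=20)]
  print(process_table(table0, {'ethertype': 'ip'}, []))   # -> 20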

APPENDIX B: NETWORK TOPOLOGY SCRIPTS


The following codes represent the scripts used to configure the Proactive
Forwarding and Segment Routing topologies in Mininet:

Code 1 - Proactive Forwarding script. TopoTestPF.py


"""

10 switch, 2 hosts, 14 links for Proactive Forwarding

"""

from mininet.topo import Topo

class SRTopo( Topo ):

def __init__( self ):


"Create custom topo."

# Initialize topology
Topo.__init__( self )

# Add hosts and switches


host1 = self.addHost( 'h1' )
host2 = self.addHost( 'h2' )

s1 = self.addSwitch('s1')
s2 = self.addSwitch('s2')
s3 = self.addSwitch('s3')
s4 = self.addSwitch('s4')
s5 = self.addSwitch('s5')
s6 = self.addSwitch('s6')
s7 = self.addSwitch('s7')
s8 = self.addSwitch('s8')
s9 = self.addSwitch('s9')
s10 = self.addSwitch('s10')

# Add links for hosts


self.addLink( host1, s1)
self.addLink( host2, s6)

# Add links between switches


self.addLink(s1, s2)
self.addLink(s2, s3)
self.addLink(s3, s4)
self.addLink(s4, s5)
self.addLink(s5, s6)
self.addLink(s6, s7)
self.addLink(s7, s8)
self.addLink(s8, s9)
self.addLink(s9, s10)
self.addLink(s10, s1)
self.addLink(s3, s9)
self.addLink(s4, s8)

topos = { 'mytopo': ( lambda: SRTopo() ) }



Code 2 - Segment Routing script. TopoTestSR.py


"""

10 switch, 2 hosts, 14 links topology for Segment Routing

"""

from mininet.topo import Topo

class SRTopo( Topo ):

def __init__( self ):


"Create custom topo."

# Initialize topology
Topo.__init__( self )

# Add hosts and switches


host1 = self.addHost( 'h1', ip="10.0.0.5/24", defaultRoute="via 10.0.0.1" )
host2 = self.addHost( 'h2', ip="10.1.1.5/24", defaultRoute="via 10.1.1.1" )

s1 = self.addSwitch('s1')
s2 = self.addSwitch('s2')
s3 = self.addSwitch('s3')
s4 = self.addSwitch('s4')
s5 = self.addSwitch('s5')
s6 = self.addSwitch('s6')
s7 = self.addSwitch('s7')
s8 = self.addSwitch('s8')
s9 = self.addSwitch('s9')
s10 = self.addSwitch('s10')

# Add links for hosts


self.addLink( host1, s1)
self.addLink( host2, s6)

# Add links between switches


self.addLink(s1, s2)
self.addLink(s2, s3)
self.addLink(s3, s4)
self.addLink(s4, s5)
self.addLink(s5, s6)
self.addLink(s6, s7)
self.addLink(s7, s8)
self.addLink(s8, s9)
self.addLink(s9, s10)
self.addLink(s10, s1)
self.addLink(s3, s9)
self.addLink(s4, s8)

topos = { 'mytopo': ( lambda: SRTopo() ) }



The following code displays the configuration file used in the SPRING-OPEN
controller to emulate the CPqD switches as SR nodes:

Code 3 - ONOS SPRING-OPEN configuration file. toptst-ctrl-sr.py


{
"comment": " 10 router 2 hosts",
"restrictSwitches": true,
"restrictLinks": true,

"switchConfig":
[
{ "nodeDpid": "00:01", "name": "R1", "type": "Router_SR", "allowed": true,
"params": { "routerIp": "172.10.0.1/32",
"routerMac": "00:00:01:01:01:80",
"nodeSid": 101,
"isEdgeRouter" : true,
"adjacencySids":[
{"adjSid":11111, "ports": [ 2 ,3 ] }
],
"subnets": [
{ "portNo": 1, "subnetIp": "10.0.0.1/24" }
]
}
},

{ "nodeDpid": "00:02", "name": "R2", "type": "Router_SR", "allowed": true,


"params": { "routerIp": "172.10.0.2/32",
"routerMac": "00:00:02:02:02:80",
"nodeSid": 102,
"isEdgeRouter" : false
}
},

{ "nodeDpid": "00:03", "name": "R3", "type": "Router_SR", "allowed": true,


"params": { "routerIp": "172.10.0.3/32",
"routerMac": "00:00:03:03:03:80",
"nodeSid": 103,
"isEdgeRouter" : false,
"adjacencySids":[
{"adjSid":33333, "ports": [ 2 ,3 ] },
{"adjSid":22222, "ports": [ 1 ,3 ] }
]
}
},

{ "nodeDpid": "00:04", "name": "R4", "type": "Router_SR", "allowed": true,


"params": { "routerIp": "172.10.0.4/32",
"routerMac": "00:00:04:04:04:80",
"nodeSid": 104,
"isEdgeRouter" : false,
"adjacencySids":[
{"adjSid":44444, "ports": [ 1 ,3 ] },
{"adjSid":55555, "ports": [ 2 ,3 ] }
]
}
},

{ "nodeDpid": "00:04", "name": "R4", "type": "Router_SR", "allowed": true,


"params": { "routerIp": "172.10.0.4/32",
"routerMac": "00:00:04:04:04:80",
"nodeSid": 104,
"isEdgeRouter" : false,
"adjacencySids":[
{"adjSid":44444, "ports": [ 1 ,3 ] },
{"adjSid":55555, "ports": [ 2 ,3 ] }
]
}
},

{ "nodeDpid": "00:05", "name": "R5", "type": "Router_SR", "allowed": true,


"params": { "routerIp": "172.10.0.5/32",
"routerMac": "00:00:05:05:05:80",
"nodeSid": 105,
"isEdgeRouter" : false
}
},

{ "nodeDpid": "00:06", "name": "R6", "type": "Router_SR", "allowed": true,


"params": { "routerIp": "172.10.0.6/32",
"routerMac": "00:00:06:06:06:80",
"nodeSid": 106,
"isEdgeRouter" : true,
"adjacencySids":[
{"adjSid":66666, "ports": [ 2 ,3 ] }
],
"subnets": [
{ "portNo": 1, "subnetIp": "10.1.1.1/24" }
]
}
},

{ "nodeDpid": "00:07", "name": "R7", "type": "Router_SR", "allowed": true,


"params": { "routerIp": "172.10.0.7/32",
"routerMac": "00:00:07:07:07:80",
"nodeSid": 107,
"isEdgeRouter" : false
}
},

{ "nodeDpid": "00:08", "name": "R8", "type": "Router_SR", "allowed": true,


"params": { "routerIp": "172.10.0.8/32",
"routerMac": "00:00:08:08:08:80",
"nodeSid": 108,
"isEdgeRouter" : false,
"adjacencySids":[
{"adjSid":88888, "ports": [ 2 ,3 ] },
{"adjSid":77777, "ports": [ 1 ,3 ] }
]
}
},

{ "nodeDpid": "00:09", "name": "R9", "type": "Router_SR", "allowed": true,


"params": { "routerIp": "172.10.0.9/32",
"routerMac": "00:00:09:09:09:80",
"nodeSid": 109,
"isEdgeRouter" : false,
"adjacencySids":[
{"adjSid":99999, "ports": [ 1 ,3 ] },
{"adjSid":10101, "ports": [ 2 ,3 ] }
]
}
},

{ "nodeDpid": "00:0a", "name": "R10", "type": "Router_SR", "allowed": true,


"params": { "routerIp": "172.10.0.10/32",
"routerMac": "00:00:10:10:10:80",
"nodeSid": 110,
"isEdgeRouter" : false
}
}

],

"linkConfig":[

{ "type": "pktLink", "allowed": true,


"nodeDpid1": "01", "nodeDpid2": "02",
"params": { "port1": 2, "port2": 1 }
},

{ "type": "pktLink", "allowed": true,


"nodeDpid1": "02", "nodeDpid2": "03",
"params": { "port1": 2, "port2": 1 }
},

{ "type": "pktLink", "allowed": true,


"nodeDpid1": "03", "nodeDpid2": "04",
"params": { "port1": 2, "port2": 1 }
},

{ "type": "pktLink", "allowed": true,


"nodeDpid1": "03", "nodeDpid2": "09",
"params": { "port1": 3, "port2": 3 }
},

{ "type": "pktLink", "allowed": true,


"nodeDpid1": "04", "nodeDpid2": "08",
"params": { "port1": 3, "port2": 3 }
},

{ "type": "pktLink", "allowed": true,


"nodeDpid1": "04", "nodeDpid2": "05",
"params": { "port1": 2, "port2": 1 }
},

{ "type": "pktLink", "allowed": true,


"nodeDpid1": "05", "nodeDpid2": "06",
"params": { "port1": 2, "port2": 2 }
},

{ "type": "pktLink", "allowed": true,


"nodeDpid1": "06", "nodeDpid2": "07",
"params": { "port1": 3, "port2": 1 }
},

{ "type": "pktLink", "allowed": true,


"nodeDpid1": "07", "nodeDpid2": "08",
"params": { "port1": 2, "port2": 1 }
},

{ "type": "pktLink", "allowed": true,


"nodeDpid1": "08", "nodeDpid2": "09",
"params": { "port1": 2, "port2": 1 }
},

{ "type": "pktLink", "allowed": true,


"nodeDpid1": "09", "nodeDpid2": "0a",
"params": { "port1": 2, "port2": 1 }
},

{ "type": "pktLink", "allowed": true,


"nodeDpid1": "0a", "nodeDpid2": "01",
"params": { "port1": 2, "port2": 3 }
}

]
}

APPENDIX C: ONOS INTENT FRAMEWORK


The following flow diagram displays the Intent compilation process:

Annex 3 - Intent compilation process22

The Application layer sends a submission for an install request. This request
is compiled by the control layer, where the process can succeed or fail. If
the compilation is successful, the Intent becomes an installable instruction
that the Southbound interface can translate into flow entries, downloading
them into the infrastructure layer.

Installed intents can be withdrawn with a withdraw request executed from the
CLI, or when there is a network event that directly affects the flow related
to the installed intent. This action removes the intent from the controller,
and in consequence, the controller removes the related flow entries from the
infrastructure layer.

In other cases of network events, the intent can be recompiled and enter a
failed state, from which the intent can go back to the compiling state once
the network has recovered from the failure, reinstalling the intent related to
the affected flow.

22 Diagram retrieved from [21]

APPENDIX D: SR SUBSYSTEM COMPONENTS

I. Segment Routing Application

Annex 4 displays the components of the Segment Routing application:

Annex 4 - Segment Routing application components23

• Segment Routing Manager: computes the shortest ECMP paths and populates
all routing instructions into the IP and MPLS tables. It also handles all
packets within the application, and forwards them to the appropriate handlers
according to the packet type.

• ARP Handler: handles ARP request and response packets. If the ARP request
is sent to any SR node, the controller generates and sends out the ARP
response to the corresponding hosts.

• ICMP Handler: handles ICMP requests to the routers. The controller
generates the ICMP responses and sends them out to the corresponding hosts.

• Generic IP Handler: handles any IP packet. If the destination of the IP
packet is a host within the subnets of the routers, or a router address, it
sets the forwarding instruction on the router and sends out the packet to the
corresponding host.

• Segment Routing Policy: creates the policy and sets the policy instructions
in the ACL tables. If it is a tunnel policy, it also creates the tunnel for
that policy. According to the ONOS documentation, the Load Balancing policy is
not implemented yet.

23 Diagram retrieved from [24]

II. Configuration Manager


Part of the core layer, this component handles the initial network
configuration, where the switches are assigned SIDs and IP addresses, using a
network configuration service and a filtering service. Annex 5 displays the
diagram of the Configuration Manager.

Annex 5 - Configuration Manager diagram24

• Configuration Service: handles device configuration. Parameters like
router IP and MAC addresses, Node-SIDs, subnets, and filtering policies are
configured on the network devices. The Topology Publisher, Driver Manager,
and Segment Routing Driver modules use the services provided by this
service while constructing the global network view.

• Filtering Service: provides the logic to decide which discovered network
device will be configured by the Configuration Service, based on a filtering
policy that is set up in the configuration file (i.e. toptst-ctrl-sr.conf
displayed in Appendix B).

III. Segment Routing Driver


Annex 6 displays the components of the Segment Routing Driver:

Annex 6 - Segment Routing Driver


24 Diagram retrieved from [24]

OF 1.3 Group Handler25

At startup, this component pre-populates the OF 1.3 groups in all the Segment
Routers. There are two types of groups that the Segment Routing driver creates
in the switches: Indirect and Select groups.

The main advantage of Indirect groups is that, when many routes (IP dst
prefixes) have a next hop that requires the router to send the packet out of
the same port, with or without the same label, it is easier to envelop that
port and/or label in an indirect group. Note that these groups are only
created on ports that are connected to other routers (and not to an L2
domain); thus the Dst-MAC is always known to the controller, since it is the
router-MAC of another router in the Segment Routing domain. Also note that
this group does NOT push or set VLAN tags, as these groups are meant to be
used between routers within the Segment Routing cloud; since the Segment
Routing cloud does not use VLANs, such actions are not needed. This group
contains a single bucket, with the following possible actions:

• Set Dst MAC address (next hop router-mac address)
• Set Src MAC address (this router's router-mac address)
• At the ingress router:
  o Push MPLS header with ethtype as IP and mpls label
  o Copy TTL out
  o Decrement MPLS TTL
• Output to port (connected to another router)

Select groups have one or more buckets that each point to the actions of an
Indirect group, or point to another group (in case of group chaining). Note
that this group definition does not distinguish between hashes made on IPv4
packets and hashes made on packets with an MPLS label stack; it is understood
that the switch makes the best hash decision possible with the given
information.

• By default, all ports connected to the same neighbor router are part of the
same ECMP group.
• In addition, ECMP groups are created for all possible combinations of
neighbor routers.

Group Recovery Handler26

This component of the Segment Routing driver handles network element failures
and ONOS controller failures.

25 Content retrieved from [24]
26 Content retrieved from [24]

 Controller failures: When an ONOS instance restarts and switches connect back to that controller, the Segment Routing driver performs an audit of the existing groups in the switch so that it pushes only the missing groups to the switch.

 Network element failures: When a port of a switch fails, this component determines all the group buckets the failed port is part of, and performs an OF "GroupMod.MODIFY" operation on all such groups to remove those buckets. As a result, there may be some empty groups (groups with no buckets) left in the switch. Similarly, when the port is up again, this component determines all the impacted groups and performs an OF "GroupMod.MODIFY" operation on all such groups to re-add the buckets with the recovered ports (a sketch of this step follows).
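A minimal sketch of this bucket-removal step, assuming the Ryu OpenFlow 1.3 library instead of the ONOS driver; the group ID and the surviving bucket list are hypothetical and would in practice come from an audit of the groups that reference the failed port:

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls


class GroupRecoverySketch(app_manager.RyuApp):

    @set_ev_cls(ofp_event.EventOFPPortStatus, MAIN_DISPATCHER)
    def on_port_status(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        port = ev.msg.desc.port_no

        # Hypothetical audit result: the buckets of group 10 that do not use
        # the failed port. A real driver tracks this per group and per port.
        surviving = [parser.OFPBucket(weight=1,
                                      actions=[parser.OFPActionGroup(2)])]

        # GroupMod.MODIFY rewrites the bucket list; the group may be left
        # empty until the port comes back up, when buckets are re-added.
        dp.send_msg(parser.OFPGroupMod(dp, ofp.OFPGC_MODIFY, ofp.OFPGT_SELECT,
                                       10, surviving))
        self.logger.info('port %s changed; group 10 buckets updated', port)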

OF Message Pusher

This component builds OpenFlow messages from the match-action operation entry objects provided by the higher layers, and sends them down to the network devices.

Driver API

This API provides the following functions to the upper layer:

 createGroup: builds an individual group or a sequence of groups, and returns the topmost group ID to the upper layers.
 removeGroup: removes the groups associated with a given ID and group sequence (if any).
 pushFlow: executes the OF Message Pusher component.



APPENDIX E: SOFTWARE INSTALLATION

I. ONOS Blackbird installation

Oracle Java 8 JDK installation

 $ sudo apt-get install software-properties-common -y
 $ sudo add-apt-repository ppa:webupd8team/java -y
 $ sudo apt-get update
 $ sudo apt-get install oracle-java8-installer oracle-java8-set-default -y
o It will prompt a license agreement
o Accept the license agreement

Apache Karaf 3.0.2 and Maven 3.2.3 installation

Create two directories called Downloads and Applications in the home directory (in this case /home/quirozd). The Downloads directory is used to save the tar files of Apache Maven and Karaf downloaded from the Internet, and the Applications directory is used to save the content extracted from the tar files.

 $ sudo mkdir Downloads Applications


 $ cd Downloads
 $ sudo wget http://download.nextag.com/apache/karaf/3.0.2/apache-karaf-3.0.2.tar.gz
 $ sudo wget https://archive.apache.org/dist/maven/maven-3/3.2.3/binaries/apache-maven-3.2.3-bin.tar.gz
 Verify that both tar files are downloaded
o $ ls
 $ sudo tar -zxvf apache-karaf-3.0.2.tar.gz -C ../Applications/
 $ sudo tar -zxvf apache-maven-3.2.3-bin.tar.gz -C ../Applications/
 Verify that the content of both tar files has been extracted into the Applications directory.
o $ cd ~/Applications
o $ ls

Annex 7 - Apache Karaf and Maven verification



 Give the necessary permissions to the apache-karaf-3.0.2 directory so the system can access its subdirectories during operation.
o $ chown -R username:username apache-karaf-3.0.2
 Verify that the permissions have been established
o $ ls -l apache-karaf-3.0.2

Annex 8 - Apache Karaf permissions verification

Blackbird download and building

Clone the ONOS system from the ONOS project repository into the home directory (in this case /home/quirozd):

 $ git clone https://gerrit.onosproject.org/onos
 Enter ~/onos and verify the available releases of ONOS
o $ git tag

Annex 9 - onos available versions



ONOS Blackbird corresponds to the 1.1.0 release; in this case the latest version of this release is used (1.1.0-rc2):

 $ git checkout -b 1.1.0-rc2 1.1.0-rc2
 Verify that the ONOS version is correct by checking the file ~/onos/pom.xml

Annex 10 - ONOS version verification

 $ cd ..
Before building Blackbird, it is necessary to set up the Ubuntu environment variables that add the paths the system will use to build and access the controller. Instead of adding them manually, the file ~/onos/tools/dev/bash_profile is a script that adds them automatically.

 Verify the file ~/onos/tools/dev/bash_profile to ensure that the correct versions of Apache Karaf and Apache Maven will be called.

Annex 11 - Apache Karaf and Maven version verification

 Edit the file ~/.bashrc to add the following lines at the end
o . ~/onos/tools/dev/bash_profile

o export ONOS_USER=VM’s username


 Save and exit the file
 $ source ~/.bashrc
This will permanently add the necessary variables and paths to the Ubuntu environment, and it will create the aliases needed to run the ONOS build commands with Apache Maven.

 Verify that the environment variables are set with the necessary paths
o $ env

Annex 12 - Environment variables verification

Build the ONOS controller using Apache Maven:

 $ cd ~/onos
 $ mvn clean install
 It may give compilation errors; if it does, rerun mvn clean install until the build is successful.

Annex 13 - Successful system building

Edit the file ~/Applications/apache-karaf-3.0.2/etc/org.apache.karaf.features.cfg in order to initialize the controller with the desired features and applications.

 Add the following lines to the file
o On featuresRepositories:
 mvn:org.onosproject/onos-features/1.1.0-rc2/xml/features
o On featuresBoot:
 webconsole,onos-api,onos-core-trivial,onos-cli,onos-rest,onos-gui,onos-openflow,onos-app-fwd,onos-app-proxyarp,onos-app-mobility
 Save and exit the file
 $ cd ~ (to go back to the home directory)
 Start the ONOS console through Apache Karaf
o $ karaf clean (when initializing for the first time, so that the features added to the configuration file take effect)
o $ karaf (for a regular console start)

Annex 14 - Apache Karaf console

This is in fact the ONOS console, which uses the Karaf CLI for the controller's administration. At this point the controller can be managed with all its features installed. To change the look of this console to the one provided by ONOS, copy the following file:

 $ sudo cp $ONOS_ROOT/tools/package/branding/target/onos-branding-1.1.0-rc2.jar $KARAF_ROOT/lib/
 Start the CLI again
o $ karaf clean

Annex 15 - ONOS console

 Verify that all features and applications stated in the Karaf configuration file (~/Applications/apache-karaf-3.0.2/etc/org.apache.karaf.features.cfg) are successfully installed
o onos> list
 At this point ONOS Blackbird is successfully installed in the virtual machine

II. ONOS SPRING-OPEN installation


 Download the SPRING-OPEN version of the ONOS controller source code, and install the Open Java Development Kit 7.
o $ sudo apt-get install openjdk-7-jdk openjdk-7-doc openjdk-7-jre-lib
o $ git clone https://gerrit.onosproject.org/spring-open

 Download and extract Apache Zookeeper 3.4.6.


o $ wget http://apache.arvixe.com/zookeeper/stable/
o $ tar xzf zookeeper-3.4.6.tar.gz

 Run the controller setup


o $ cd ~/spring-open
o $ ./onos.sh setup

 Compile the controller code


o $ mvn clean
o $ mvn compile

 Before running the controller, the configuration file must be referenced in the file ~/spring-open/conf/onos.properties
o At the end of the last line, include the .conf file (toptst-ctrl-sr.conf)

Annex 16 - onos.properties file

 To run the controller


o $ ./onos.sh start
 To stop the controller
o $ ./onos.sh stop

CLI installation

 Download a basic functioning build environment plus a few build-time dependencies.
o $ sudo apt-get install unzip python-dev python-virtualenv build-essential

 Download the CLI source code


o $ git clone https://gerrit.onosproject.org/spring-open-cli

 Build the source code


o $ cd spring-open-cli
o $ ./setup.sh

 To run the CLI, make sure you have the latest code. From the spring-open-cli folder:
o $ git pull
o $ source ./workspace/ve/bin/activate
o $ make start-sdncon
o $ cd cli/
o $ sudo ./cli.py

Annex 17 - ONOS SPRING-OPEN console

By default the CLI tries to connect to the controller on localhost and expects the
controller to be listening on port 8080 (127.0.0.1:8080). To make the CLI connect
to a controller on a different host, use:

 $ sudo ./cli.py --controller <ip-addr-of-controller-host>:<port>

III. Mininet installation


To install natively from source, first you need to get the source code:

 $ git clone git://github.com/mininet/mininet



 Check the available versions
o $ cd mininet
o $ git tag # list available versions

Annex 18 - List of available Mininet versions

 Select the desired version
o $ git checkout -b 2.2.1 2.2.1
o $ cd ..
 Once you have the source tree, install Mininet
o $ mininet/util/install.sh -a # install all packages, including the Wireshark dissector
 Verify that the installation is successful
o $ sudo mn --test pingall

IV. CPqD switch installation

Having Mininet installed in the system, run the following script (it will download
the CPqD software switch and try to install it).

 $ mininet/util/install.sh -3f

If the compilation step fails, go to https://wiki.onosproject.org/display/ONOS/CPqD+1.3+switch+on+recent+Ubuntu+versions to fix the issue. Once the compilation is successful, go to the next step.

Once the switch compiles, the installation is not done yet. There are some bugs in the switch which ON.Lab has fixed, so the code needs to be patched with these fixes. To start with, it is necessary to go back to the specific commit on which the patch applies. In the ofsoftswitch13 folder, enter:

 $ git checkout -b cpqd-spring 36738aeb3501f66fb382e7b59138c88e8843b19c

Execute “git log” to verify the commit

Annex 19 - CPqD commit verification

 Download the patch
o $ wget https://wiki.onosproject.org/download/attachments/2130895/patchfile-cpqd
 Apply the patch
o $ patch -p0 < patchfile-cpqd

 Execute “git status” to verify the patch

Annex 20 - Patch verification

 Finally, compile again


o $ make
o $ sudo make install

 To use Mininet with CPqD and start the desired topology, execute:
 $ sudo mn --custom mininet/custom/file.py --topo mytopo --switch user --controller=remote,ip=<IP address>,port=6633 # where file.py is the Python script for the topology (TopoTestPF.py or TopoTestSR.py); a minimal sketch of such a file follows
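The custom topology file passed with --custom is a short Python script. The following is a hypothetical minimal example (the actual TopoTestPF.py and TopoTestSR.py scripts define the thesis topologies); it exposes a 2-host, 3-switch triangle under the name mytopo:

from mininet.topo import Topo


class MyTopo(Topo):
    "Two hosts behind three switches connected in a triangle."

    def build(self):
        h1, h2 = self.addHost('h1'), self.addHost('h2')
        s1, s2, s3 = [self.addSwitch('s%d' % i) for i in (1, 2, 3)]
        self.addLink(h1, s1)
        self.addLink(h2, s2)
        # Triangle between the switches provides an alternate path.
        self.addLink(s1, s2)
        self.addLink(s1, s3)
        self.addLink(s2, s3)


# Exposes the topology under the name used with '--topo mytopo'.
topos = {'mytopo': (lambda: MyTopo())}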

APPENDIX F: OPERATION EXAMPLES

I. Controller initialization

OpenFlow messages

 OFPT_HELLO messages are exchanged between the controller and the nodes to discover each other and state their OpenFlow version.
 The controller sends an OFPT_FEATURES_REQUEST and an OFPT_STATS_REQUEST to the nodes to learn their parameters and descriptions (Dpid, ports, port descriptions, etc.), to which the nodes reply with an OFPT_FEATURES_REPLY and an OFPT_STATS_REPLY.
 The controller sends an OFPT_SET_CONFIG to the nodes to set configuration parameters on them.
 The controller sends a role request to the nodes to see in which role they see the controller, to which they answer with OFPCR_ROLE_EQUAL. Once it has learned that, the controller sends an OFPCR_ROLE_MASTER to the nodes to inform them that its role is Master, and they acknowledge the change by sending the same message back to the controller (a sketch of this exchange follows).
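As a sketch of this role exchange, the snippet below, assuming the Ryu OpenFlow 1.3 library rather than ONOS, claims the Master role after the features handshake and logs the switch's acknowledgement:

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER, set_ev_cls


class RoleSketch(app_manager.RyuApp):

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def claim_master(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # generation_id orders competing role claims; 0 is enough for a first claim.
        dp.send_msg(parser.OFPRoleRequest(dp, ofp.OFPCR_ROLE_MASTER, 0))

    @set_ev_cls(ofp_event.EventOFPRoleReply, MAIN_DISPATCHER)
    def role_confirmed(self, ev):
        # The switch echoes the granted role back, acknowledging the change.
        self.logger.info('role reply: role=%d', ev.msg.role)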

Flow tables (using a 2-host, 3-router topology)

During the controller's initialization, the router configuration (including SIDs, IP addresses, FIB tables, and routing tables) is downloaded to each virtual switch generated by Mininet. Once initialized, the controller at first does not have a list of the connected hosts, because it is not yet aware of their connectivity. The following figures show the CLI output obtained from the controller after the topology is initialized.

Annex 21 - Table of switches/routers and hosts

Annex 22 - IP routing table

Annex 23 - MPLS table

Annex 24 - Group table



Observations:

 The routing table depends on a series of instructions to forward packets at the layer 2 and layer 1 levels, based on a group configuration that determines the path each packet will follow.
 Each group represents an ECMP group, in which a set of links can be defined with an instruction destined for each of them. These instructions determine when packet labeling is necessary for the traffic going out through a certain port.
 The MPLS table shows the FIB saved in each device, and the labels are related to the ECMP groups depending on the location of the router whose SID equals the label. Neighbor routers are marked as POP in order to perform penultimate-hop popping on packets heading to those devices.

II. Link behavior on Segment Routing

For this test, the following topology was used:

Annex 25 - 2-host, 3-router topology

In this topology, the behavior of redundant links and how the network responds during a link failure are tested.

Multipath links

Observations:

 The redundant links that interconnect the same two devices are set under the same ECMP group.
 The redundant links load-balance the traffic passing through them using a round-robin method, where one packet passes through one link, the next through the other, and so on.
The ONOS CLI makes it possible to see the packets passing through each of those links via a series of counters displayed in the group table. For each passing packet, the counter is incremented (see Annex 26).

Annex 26 - Group table status during ping operations

 Notice that at the beginning the table shows 11 packets passed through both links. After the first 3 pings, the packets pass through both links in round robin (first packet through link 1, second packet through link 2, and third packet through link 1 again). Finally, after one last ping, the counter of the next link increments, leaving the counters at 13 packets passed through both links.

Link failures

In the event of a link failure, the controller changes the router's group table to adapt it to the links available for all possible routes.

Annex 27 - Group table change during link failures

 In this case on R1, groups 2 and 3, which were related to the failed links, are emptied of any forwarding instruction, and only groups 1 and 4 remain operational. All traffic that was using groups 2 and 3 now uses groups 1 and 4 to take alternate routes.
 Once the failed links are recovered, the traffic by default does not come back to the old links; it keeps using the alternate routes.

Annex 28 - Group table after link recovery

 Notice that after the link recovery, when performing test pings between hosts, the traffic still prefers group 1 to forward the packets.

III. Label sequencing and forwarding operation


To observe this operation, a bidirectional tunnel/policy was implemented (see Annex 29).

Annex 29 - Tunnel and policy tables

Depending on the nature of the path, one or more label stacks can be set automatically by the controller (also called segment stitching), in which case they are separated by brackets.

This is the display of the ACL table and the groups on router R9 (recall that the policy configuration is translated into ACL entries on the routers):

Annex 30 - Group chaining and forwarding action example

Observations

 Notice that a new entry has been added to the ACL list of the router, calling group 58 for the packets that comply with the policy.
 Group 58 pushes the MPLS label 101 and calls group 57, which pushes label 102 onto the stack for all outbound packets treated by the policy.
 Finally, group 58 forwards the packet to output port 4 (the output interface).

APPENDIX G: STATISTIC PROCEDURE


For the measurements taken in each test, besides the maximum, minimum, and average values, the sample standard deviation and the confidence interval at a 95% confidence level were also observed. To obtain these values, the following procedure was followed:

I. Average and standard deviation

For n measurements, the average value is calculated with the following equation:

$\bar{X} = \frac{\sum_{i=1}^{n} X_i}{n}$   (a)

where $\bar{X}$ is the average value and $X_i$ is a particular sample of the measurement.

The sample standard deviation (σ) is obtained by calculating the variance (σ²) from the measured samples and the average value:

$\sigma^2 = \frac{\sum_{i=1}^{n} (X_i - \bar{X})^2}{n - 1}$   (b)

II. Confidence Interval (CI)


It is advisable to calculate the confidence interval only when the samples tend to approximate a normal distribution. Otherwise, sudden variations in the measurements can shift the CI into ranges that do not necessarily reflect the tendency of the system. So, in order to verify that the measurements tend to be normally distributed, the samples are compared with the standard normal distribution variables (Z distribution)27.

The Z distribution is used to visualize the expected values of the samples distributed along the normal curve. These expected values are compared with the samples (Xi) in a graph called the Quantile-Quantile plot28, where the Z variables are represented on the horizontal axis and the samples on the vertical axis. If the resulting trend line in the graph approaches a linear trend, then it is reasonable to consider that the measurements tend to a normal distribution.

To determine the Z variables, it is first necessary to determine each of the quantiles for a given number of samples n. These quantiles are determined by the following expression:
$\frac{i - 0.5}{n}$   (c)

where i is the ith ordered value of the samples (ordered from smallest to largest). From these quantiles, the Z variables can be determined using either a standard normal table or computer software. In this case, Microsoft Excel offers a function called NORM.S.INV(), which gives the inverse of the standard normal cumulative distribution (the Z variables).

27 JBstatistics (2013), Standardizing Normally Distributed Random Variables, retrieved from YouTube: https://www.youtube.com/watch?v=4R8xm19DmPM
28 JBstatistics (2013), Normal Quantile-Quantile Plots, retrieved from YouTube: https://www.youtube.com/watch?v=X9_ISJ0YpGw

Once it is confirmed that the samples tend to be normally distributed, the Confidence Interval is calculated. The CI in this case is used to determine how close the average value $\bar{X}$ is to the mean µ for a given number of samples. To do this, a margin of error is determined, and the CI is expressed by:

$CI = \bar{X} \pm \text{Margin of Error}^{29}$   (d)

This expression is the most commonly used with samples that are normally distributed. Mathematical arguments are used to determine the appropriate margin of error; the following expression is used when the standard deviation is not known30:

$\text{Margin of Error} = t_{\alpha/2} \times \frac{\sigma}{\sqrt{n}}$   (e)

where $t_{\alpha/2}$ is the t distribution with n−1 degrees of freedom31. The standard deviation is said to be not known because the standard deviation obtained is an estimated value from the samples (the sample standard deviation), and the expression $\frac{\sigma}{\sqrt{n}}$ is sometimes referred to as the standard error of the average (SE($\bar{X}$)).

The confidence level of 95% is set by the expression:

$(1 - \alpha) \times 100\%$   (f)

in which α is the area left out of the confidence level within the t distribution; it is set to 0.05 for a confidence level of 95%.

The t distribution is then obtained through a t table or software. In this case, Microsoft Excel offers the function TINV(α,DF), which determines the t variables according to the probability α and the degrees of freedom DF. Finally, the margin of error is obtained, and hence the Confidence Interval, by this final expression:

$CI = \bar{X} \pm t_{\alpha/2} \times \frac{\sigma}{\sqrt{n}}$   (g)

29 JBstatistics (2013), Introduction to Confidence Intervals, retrieved from YouTube: https://www.youtube.com/watch?v=27iSnzss2wM
30 JBstatistics (2013), Confidence Intervals for One Mean: Sigma Not Known (t Method), retrieved from YouTube: https://www.youtube.com/watch?v=bFefxSE5bmo
31 JBstatistics (2013), Introduction to the t Distribution (non-technical), retrieved from YouTube: https://www.youtube.com/watch?v=Uv6nGIgZMVw
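The same procedure can be reproduced outside of Excel. Below is a minimal sketch in Python with NumPy/SciPy, where norm.ppf() plays the role of NORM.S.INV() and t.ppf(1 - α/2, n - 1) that of TINV(α,DF); the five sample values are just the first jitter measurements from Annex Table 1, used for illustration:

import numpy as np
from scipy.stats import norm, t

# First five SR jitter samples from Annex Table 1, in microseconds (illustrative).
samples = np.array([20.0, 26.0, 28.0, 28.0, 30.0])
n = len(samples)

x_bar = samples.mean()          # average value, equation (a)
sigma = samples.std(ddof=1)     # sample standard deviation, from equation (b)

# Quantiles (i - 0.5)/n and their Z variables for the Quantile-Quantile plot,
# equation (c): plot z on the horizontal axis against the sorted samples.
quantiles = (np.arange(1, n + 1) - 0.5) / n
z = norm.ppf(quantiles)
ordered = np.sort(samples)

# 95% confidence interval for the mean, sigma not known (t method), equation (g).
alpha = 0.05
t_crit = t.ppf(1 - alpha / 2, df=n - 1)
margin = t_crit * sigma / np.sqrt(n)   # equation (e)
ci_low, ci_high = x_bar - margin, x_bar + margin
print(x_bar, sigma, (ci_low, ci_high))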

APPENDIX H: MEASUREMENT TABLES

I. Steady state conditions

Annex Table 1 – Segment Routing Jitter measurements


Segment Routing

Samples (i) Jitter (ms) Jitter Xi (μs) Sample mean X (µs) (Xi-X)² P((i-0.5)/n) Z Distrb
1 0.02 20 55.95 1292.4025 0.005 -2.576
2 0.026 26 55.95 897.0025 0.015 -2.170
3 0.028 28 55.95 781.2025 0.025 -1.960
4 0.028 28 55.95 781.2025 0.035 -1.812
5 0.03 30 55.95 673.4025 0.045 -1.695
6 0.03 30 55.95 673.4025 0.055 -1.598
7 0.031 31 55.95 622.5025 0.065 -1.514
8 0.031 31 55.95 622.5025 0.075 -1.440
9 0.031 31 55.95 622.5025 0.085 -1.372
10 0.032 32 55.95 573.6025 0.095 -1.311
11 0.032 32 55.95 573.6025 0.105 -1.254
12 0.033 33 55.95 526.7025 0.115 -1.200
13 0.033 33 55.95 526.7025 0.125 -1.150
14 0.035 35 55.95 438.9025 0.135 -1.103
15 0.035 35 55.95 438.9025 0.145 -1.058
16 0.035 35 55.95 438.9025 0.155 -1.015
17 0.035 35 55.95 438.9025 0.165 -0.974
18 0.036 36 55.95 398.0025 0.175 -0.935
19 0.036 36 55.95 398.0025 0.185 -0.896
20 0.036 36 55.95 398.0025 0.195 -0.860
21 0.037 37 55.95 359.1025 0.205 -0.824
22 0.037 37 55.95 359.1025 0.215 -0.789
23 0.037 37 55.95 359.1025 0.225 -0.755
24 0.039 39 55.95 287.3025 0.235 -0.722
25 0.039 39 55.95 287.3025 0.245 -0.690
26 0.04 40 55.95 254.4025 0.255 -0.659
27 0.04 40 55.95 254.4025 0.265 -0.628
28 0.041 41 55.95 223.5025 0.275 -0.598
29 0.041 41 55.95 223.5025 0.285 -0.568
30 0.044 44 55.95 142.8025 0.295 -0.539
31 0.044 44 55.95 142.8025 0.305 -0.510
32 0.045 45 55.95 119.9025 0.315 -0.482
33 0.045 45 55.95 119.9025 0.325 -0.454
34 0.045 45 55.95 119.9025 0.335 -0.426
35 0.045 45 55.95 119.9025 0.345 -0.399

36 0.045 45 55.95 119.9025 0.355 -0.372


37 0.047 47 55.95 80.1025 0.365 -0.345
38 0.047 47 55.95 80.1025 0.375 -0.319
39 0.047 47 55.95 80.1025 0.385 -0.292
40 0.047 47 55.95 80.1025 0.395 -0.266
41 0.047 47 55.95 80.1025 0.405 -0.240
42 0.048 48 55.95 63.2025 0.415 -0.215
43 0.048 48 55.95 63.2025 0.425 -0.189
44 0.049 49 55.95 48.3025 0.435 -0.164
45 0.05 50 55.95 35.4025 0.445 -0.138
46 0.05 50 55.95 35.4025 0.455 -0.113
47 0.051 51 55.95 24.5025 0.465 -0.088
48 0.052 52 55.95 15.6025 0.475 -0.063
49 0.052 52 55.95 15.6025 0.485 -0.038
50 0.052 52 55.95 15.6025 0.495 -0.013
51 0.053 53 55.95 8.7025 0.505 0.013
52 0.054 54 55.95 3.8025 0.515 0.038
53 0.054 54 55.95 3.8025 0.525 0.063
54 0.055 55 55.95 0.9025 0.535 0.088
55 0.055 55 55.95 0.9025 0.545 0.113
56 0.056 56 55.95 0.0025 0.555 0.138
57 0.056 56 55.95 0.0025 0.565 0.164
58 0.057 57 55.95 1.1025 0.575 0.189
59 0.057 57 55.95 1.1025 0.585 0.215
60 0.058 58 55.95 4.2025 0.595 0.240
61 0.059 59 55.95 9.3025 0.605 0.266
62 0.059 59 55.95 9.3025 0.615 0.292
63 0.06 60 55.95 16.4025 0.625 0.319
64 0.061 61 55.95 25.5025 0.635 0.345
65 0.062 62 55.95 36.6025 0.645 0.372
66 0.063 63 55.95 49.7025 0.655 0.399
67 0.063 63 55.95 49.7025 0.665 0.426
68 0.064 64 55.95 64.8025 0.675 0.454
69 0.064 64 55.95 64.8025 0.685 0.482
70 0.065 65 55.95 81.9025 0.695 0.510
71 0.065 65 55.95 81.9025 0.705 0.539
72 0.066 66 55.95 101.0025 0.715 0.568
73 0.066 66 55.95 101.0025 0.725 0.598
74 0.066 66 55.95 101.0025 0.735 0.628
75 0.068 68 55.95 145.2025 0.745 0.659
76 0.072 72 55.95 257.6025 0.755 0.690
77 0.072 72 55.95 257.6025 0.765 0.722
78 0.073 73 55.95 290.7025 0.775 0.755
79 0.074 74 55.95 325.8025 0.785 0.789
80 0.075 75 55.95 362.9025 0.795 0.824

81 0.076 76 55.95 402.0025 0.805 0.860


82 0.076 76 55.95 402.0025 0.815 0.896
83 0.076 76 55.95 402.0025 0.825 0.935
84 0.076 76 55.95 402.0025 0.835 0.974
85 0.077 77 55.95 443.1025 0.845 1.015
86 0.077 77 55.95 443.1025 0.855 1.058
87 0.078 78 55.95 486.2025 0.865 1.103
88 0.078 78 55.95 486.2025 0.875 1.150
89 0.078 78 55.95 486.2025 0.885 1.200
90 0.082 82 55.95 678.6025 0.895 1.254
91 0.084 84 55.95 786.8025 0.905 1.311
92 0.089 89 55.95 1092.3025 0.915 1.372
93 0.091 91 55.95 1228.5025 0.925 1.440
94 0.095 95 55.95 1524.9025 0.935 1.514
95 0.095 95 55.95 1524.9025 0.945 1.598
96 0.097 97 55.95 1685.1025 0.955 1.695
97 0.097 97 55.95 1685.1025 0.965 1.812
98 0.1 100 55.95 1940.4025 0.975 1.960
99 0.108 108 55.95 2709.2025 0.985 2.170
100 0.109 109 55.95 2814.3025 0.995 2.576

Annex Table 2 – SR jitter confidence interval
Sum (Xi-X)² 40406.75
Variance 408.1489899
Smp Std Dev 20.20269759
conf Level 0.95
α 0.05
DF 99
tα 1.984216952
Err Margin 4.008653503
ConfInt high 59.9586535
ConfInt low 51.9413465

Annex Table 3 - Segment Routing network performance

         Minimum  Average  Maximum  Std Dev
RTT (ms) 2.535    2.734    2.951    0.103

Bw (Mbps)  Rx Bw (Mbps)  Loss %
0.1        0.1           0
1          1             0
10         10            1.6
100        9.9           90
1000       9.24          98

Annex 31 - SR Jitter Quantile-Quantile Plot (ordered jitter samples in μs versus standard normal quantiles)



Annex Table 4 - Proactive Forwarding jitter measurements


Intent Forwarding

Samples (i) Jitter (ms) Jitter Xi (μs) Sample mean X (µs) (Xi-X)² P((i-0.5)/n) Z Distrb
1 0.016 16 43.94 780.6436 0.005 -2.576
2 0.016 16 43.94 780.6436 0.015 -2.170
3 0.017 17 43.94 725.7636 0.025 -1.960
4 0.017 17 43.94 725.7636 0.035 -1.812
5 0.018 18 43.94 672.8836 0.045 -1.695
6 0.019 19 43.94 622.0036 0.055 -1.598
7 0.019 19 43.94 622.0036 0.065 -1.514
8 0.019 19 43.94 622.0036 0.075 -1.440
9 0.019 19 43.94 622.0036 0.085 -1.372
10 0.019 19 43.94 622.0036 0.095 -1.311
11 0.019 19 43.94 622.0036 0.105 -1.254
12 0.02 20 43.94 573.1236 0.115 -1.200
13 0.02 20 43.94 573.1236 0.125 -1.150
14 0.021 21 43.94 526.2436 0.135 -1.103
15 0.021 21 43.94 526.2436 0.145 -1.058
16 0.021 21 43.94 526.2436 0.155 -1.015
17 0.021 21 43.94 526.2436 0.165 -0.974
18 0.021 21 43.94 526.2436 0.175 -0.935
19 0.022 22 43.94 481.3636 0.185 -0.896
20 0.022 22 43.94 481.3636 0.195 -0.860
21 0.022 22 43.94 481.3636 0.205 -0.824
22 0.022 22 43.94 481.3636 0.215 -0.789
23 0.022 22 43.94 481.3636 0.225 -0.755
24 0.023 23 43.94 438.4836 0.235 -0.722
25 0.023 23 43.94 438.4836 0.245 -0.690
26 0.023 23 43.94 438.4836 0.255 -0.659
27 0.023 23 43.94 438.4836 0.265 -0.628
28 0.024 24 43.94 397.6036 0.275 -0.598
29 0.024 24 43.94 397.6036 0.285 -0.568
30 0.024 24 43.94 397.6036 0.295 -0.539
31 0.025 25 43.94 358.7236 0.305 -0.510
32 0.025 25 43.94 358.7236 0.315 -0.482
33 0.026 26 43.94 321.8436 0.325 -0.454
34 0.026 26 43.94 321.8436 0.335 -0.426
35 0.026 26 43.94 321.8436 0.345 -0.399
36 0.027 27 43.94 286.9636 0.355 -0.372
37 0.027 27 43.94 286.9636 0.365 -0.345
38 0.027 27 43.94 286.9636 0.375 -0.319
39 0.027 27 43.94 286.9636 0.385 -0.292
40 0.028 28 43.94 254.0836 0.395 -0.266

41 0.028 28 43.94 254.0836 0.405 -0.240


42 0.028 28 43.94 254.0836 0.415 -0.215
43 0.029 29 43.94 223.2036 0.425 -0.189
44 0.029 29 43.94 223.2036 0.435 -0.164
45 0.029 29 43.94 223.2036 0.445 -0.138
46 0.029 29 43.94 223.2036 0.455 -0.113
47 0.029 29 43.94 223.2036 0.465 -0.088
48 0.03 30 43.94 194.3236 0.475 -0.063
49 0.031 31 43.94 167.4436 0.485 -0.038
50 0.032 32 43.94 142.5636 0.495 -0.013
51 0.032 32 43.94 142.5636 0.505 0.013
52 0.032 32 43.94 142.5636 0.515 0.038
53 0.033 33 43.94 119.6836 0.525 0.063
54 0.033 33 43.94 119.6836 0.535 0.088
55 0.034 34 43.94 98.8036 0.545 0.113
56 0.034 34 43.94 98.8036 0.555 0.138
57 0.034 34 43.94 98.8036 0.565 0.164
58 0.034 34 43.94 98.8036 0.575 0.189
59 0.035 35 43.94 79.9236 0.585 0.215
60 0.036 36 43.94 63.0436 0.595 0.240
61 0.036 36 43.94 63.0436 0.605 0.266
62 0.036 36 43.94 63.0436 0.615 0.292
63 0.036 36 43.94 63.0436 0.625 0.319
64 0.037 37 43.94 48.1636 0.635 0.345
65 0.038 38 43.94 35.2836 0.645 0.372
66 0.038 38 43.94 35.2836 0.655 0.399
67 0.038 38 43.94 35.2836 0.665 0.426
68 0.039 39 43.94 24.4036 0.675 0.454
69 0.04 40 43.94 15.5236 0.685 0.482
70 0.041 41 43.94 8.6436 0.695 0.510
71 0.041 41 43.94 8.6436 0.705 0.539
72 0.041 41 43.94 8.6436 0.715 0.568
73 0.041 41 43.94 8.6436 0.725 0.598
74 0.046 46 43.94 4.2436 0.735 0.628
75 0.047 47 43.94 9.3636 0.745 0.659
76 0.047 47 43.94 9.3636 0.755 0.690
77 0.047 47 43.94 9.3636 0.765 0.722
78 0.047 47 43.94 9.3636 0.775 0.755
79 0.047 47 43.94 9.3636 0.785 0.789
80 0.047 47 43.94 9.3636 0.795 0.824
81 0.051 51 43.94 49.8436 0.805 0.860
82 0.056 56 43.94 145.4436 0.815 0.896
83 0.057 57 43.94 170.5636 0.825 0.935
84 0.07 70 43.94 679.1236 0.835 0.974
85 0.072 72 43.94 787.3636 0.845 1.015

86 0.073 73 43.94 844.4836 0.855 1.058


87 0.073 73 43.94 844.4836 0.865 1.103
88 0.075 75 43.94 964.7236 0.875 1.150
89 0.077 77 43.94 1092.9636 0.885 1.200
90 0.079 79 43.94 1229.2036 0.895 1.254
91 0.08 80 43.94 1300.3236 0.905 1.311
92 0.08 80 43.94 1300.3236 0.915 1.372
93 0.085 85 43.94 1685.9236 0.925 1.440
94 0.093 93 43.94 2406.8836 0.935 1.514
95 0.103 103 43.94 3488.0836 0.945 1.598
96 0.117 117 43.94 5337.7636 0.955 1.695
97 0.151 151 43.94 11461.8436 0.965 1.812
98 0.16 160 43.94 13469.9236 0.975 1.960
99 0.232 232 43.94 35366.5636 0.985 2.170
100 0.268 268 43.94 50202.8836 0.995 2.576

Annex Table 5 - PF jitter confidence interval
Sum (Xi-X)² 156131.64
Variance 1577.087273
Smp Std Dev 39.71255812
conf Level 0.95
α 0.05
DF 99
tα 1.984216952
Err Margin 7.879833102
ConfInt high 51.8198331
ConfInt low 36.0601669

Annex Table 6 - Proactive Forwarding network performance

         Minimum  Average  Maximum  Std Dev
RTT (ms) 1.384    1.548    1.839    0.091

Bw (Mbps)  Rx Bw (Mbps)  Loss %
0.1        0.1           0
1          0.412         0
10         2.24          0
100        3.31          78
1000       2.89          96

Annex 32 - PF Jitter Quantile-Quantile Plot (ordered jitter samples in μs versus standard normal quantiles)



II. Switch packet forwarding

Annex Table 7 - SR switch packet forwarding measurements


Segment Routing

samples (i) Input Pkt Timestamp (s) Output Pkt Timestamp (s) Forwarding time Xi (μs) samp mean X (μs) (Xi-X)² P((i-0.5)/n) Z Distrb
1 0.495735 0.495968 233.000 290.2 3271.84 0.005 -2.576
2 15.495766 15.496007 241.000 290.2 2420.64 0.015 -2.170
3 14.495768 14.496011 243.000 290.2 2227.84 0.025 -1.960
4 15.745736 15.74598 244.000 290.2 2134.44 0.035 -1.812
5 15.995764 15.996011 247.000 290.2 1866.24 0.045 -1.695
6 6.745761 6.746011 250.000 290.2 1616.04 0.055 -1.598
7 7.745743 7.745996 253.000 290.2 1383.84 0.065 -1.514
8 7.24574 7.245994 254.000 290.2 1310.44 0.075 -1.440
9 14.99577 14.996024 254.000 290.2 1310.44 0.085 -1.372
10 10.245743 10.245998 255.000 290.2 1239.04 0.095 -1.311
11 6.245741 6.245998 257.000 290.2 1102.24 0.105 -1.254
12 8.745743 8.746 257.000 290.2 1102.24 0.115 -1.200
13 2.745743 2.746002 259.000 290.2 973.44 0.125 -1.150
14 8.24574 8.245999 259.000 290.2 973.44 0.135 -1.103
15 18.24574 18.245999 259.000 290.2 973.44 0.145 -1.058
16 10.745743 10.746002 259.000 290.2 973.44 0.155 -1.015
17 4.74575 4.746011 261.000 290.2 852.64 0.165 -0.974
18 9.745742 9.746003 261.000 290.2 852.64 0.175 -0.935
19 24.495729 24.495991 262.000 290.2 795.24 0.185 -0.896
20 3.245744 3.246006 262.000 290.2 795.24 0.195 -0.860
21 3.745742 3.746004 262.000 290.2 795.24 0.205 -0.824
22 23.49573 23.495992 262.000 290.2 795.24 0.215 -0.789
23 9.24574 9.246003 263.000 290.2 739.84 0.225 -0.755
24 21.495732 21.495995 263.000 290.2 739.84 0.235 -0.722
25 0.245733 0.245998 265.000 290.2 635.04 0.245 -0.690
26 2.245754 2.246019 265.000 290.2 635.04 0.255 -0.659
27 1.245737 1.246003 266.000 290.2 585.64 0.265 -0.628
28 23.245742 23.246008 266.000 290.2 585.64 0.275 -0.598
29 22.49575 22.496017 267.000 290.2 538.24 0.285 -0.568
30 5.24575 5.246017 267.000 290.2 538.24 0.295 -0.539
31 15.245745 15.246012 267.000 290.2 538.24 0.305 -0.510
32 23.995736 23.996003 267.000 290.2 538.24 0.315 -0.482
33 12.245745 12.246014 269.000 290.2 449.44 0.325 -0.454
34 19.245734 19.246003 269.000 290.2 449.44 0.335 -0.426
35 11.745737 11.746008 271.000 290.2 368.64 0.345 -0.399
36 14.245741 14.246012 271.000 290.2 368.64 0.355 -0.372
37 20.245735 20.246007 272.000 290.2 331.24 0.365 -0.345
38 7.495859 7.496131 272.000 290.2 331.24 0.375 -0.319

39 17.745734 17.746006 272.000 290.2 331.24 0.385 -0.292


40 21.99573 21.996002 272.000 290.2 331.24 0.395 -0.266
41 13.245747 13.24602 273.000 290.2 295.84 0.405 -0.240
42 20.495727 20.496 273.000 290.2 295.84 0.415 -0.215
43 6.495854 6.496127 273.000 290.2 295.84 0.425 -0.189
44 16.745761 16.746035 274.000 290.2 262.44 0.435 -0.164
45 17.245739 17.246013 274.000 290.2 262.44 0.445 -0.138
46 14.745744 14.746019 275.000 290.2 231.04 0.455 -0.113
47 11.245748 11.246025 277.000 290.2 174.24 0.465 -0.088
48 13.745741 13.746018 277.000 290.2 174.24 0.475 -0.063
49 12.745747 12.746024 277.000 290.2 174.24 0.485 -0.038
50 20.745751 20.746028 277.000 290.2 174.24 0.495 -0.013
51 20.995736 20.996013 277.000 290.2 174.24 0.505 0.013
52 21.745731 21.746008 277.000 290.2 174.24 0.515 0.038
53 22.745732 22.746009 277.000 290.2 174.24 0.525 0.063
54 4.245779 4.246057 278.000 290.2 148.84 0.535 0.088
55 18.495816 18.496095 279.000 290.2 125.44 0.545 0.113
56 6.995858 6.996138 280.000 290.2 104.04 0.555 0.138
57 17.49582 17.4961 280.000 290.2 104.04 0.565 0.164
58 19.745737 19.746017 280.000 290.2 104.04 0.575 0.189
59 16.245736 16.246017 281.000 290.2 84.64 0.585 0.215
60 1.74575 1.746031 281.000 290.2 84.64 0.595 0.240
61 5.74574 5.746023 283.000 290.2 51.84 0.605 0.266
62 16.995825 16.996115 290.000 290.2 0.04 0.615 0.292
63 17.995822 17.996114 292.000 290.2 3.23999999 0.625 0.319
64 19.995816 19.996108 292.000 290.2 3.23999999 0.635 0.345
65 21.245737 21.246029 292.000 290.2 3.24000001 0.645 0.372
66 19.495812 19.496106 294.000 290.2 14.44 0.655 0.399
67 0.745829 0.746124 295.000 290.2 23.04 0.665 0.426
68 22.245638 22.245938 300.000 290.2 96.04 0.675 0.454
69 9.495885 9.496185 300.000 290.2 96.04 0.685 0.482
70 5.995868 5.99617 302.000 290.2 139.24 0.695 0.510
71 1.495842 1.496151 309.000 290.2 353.44 0.705 0.539
72 4.995835 4.996147 312.000 290.2 475.24 0.715 0.568
73 5.495855 5.496168 313.000 290.2 519.84 0.725 0.598
74 12.495825 12.496139 314.000 290.2 566.44 0.735 0.628
75 2.995861 2.996175 314.000 290.2 566.44 0.745 0.659
76 1.995851 1.996166 315.000 290.2 615.04 0.755 0.690
77 2.495847 2.496162 315.000 290.2 615.04 0.765 0.722
78 13.495826 13.496141 315.000 290.2 615.04 0.775 0.755
79 13.995824 13.99614 316.000 290.2 665.64 0.785 0.789
80 10.996865 10.997182 317.000 290.2 718.24 0.795 0.824
81 9.995879 9.996197 318.000 290.2 772.84 0.805 0.860
82 12.995837 12.996155 318.000 290.2 772.84 0.815 0.896
83 0.995848 0.996167 319.000 290.2 829.44 0.825 0.935

84 24.745741 24.74606 319.000 290.2 829.44 0.835 0.974


85 23.745745 23.746066 321.000 290.2 948.64 0.845 1.015
86 24.245749 24.246072 323.000 290.2 1075.84 0.855 1.058
87 16.495771 16.496095 324.000 290.2 1142.44 0.865 1.103
88 18.995838 18.996167 329.000 290.2 1505.44 0.875 1.150
89 22.995747 22.996083 336.000 290.2 2097.64 0.885 1.200
90 10.496189 10.496526 337.000 290.2 2190.24 0.895 1.254
91 3.495947 3.496289 342.000 290.2 2683.24 0.905 1.311
92 18.745744 18.74609 346.000 290.2 3113.64 0.915 1.372
93 11.995894 11.996252 358.000 290.2 4596.84 0.925 1.440
94 7.995871 7.996237 366.000 290.2 5745.64 0.935 1.514
95 8.995895 8.996261 366.000 290.2 5745.64 0.945 1.598
96 8.495871 8.496238 367.000 290.2 5898.24 0.955 1.695
97 11.496647 11.497018 371.000 290.2 6528.64 0.965 1.812
98 4.495839 4.496217 378.000 290.2 7708.84 0.975 1.960
99 0 0.000409 409.000 290.2 14113.44 0.985 2.170
100 3.995847 3.996282 435.000 290.2 20967.04 0.995 2.576

Annex Table 8 - SR packet forwarding confidence interval
Sum (Xi-X)² 137826
Variance 1392.181818
Smp Std Dev 37.31195275
conf Level 0.95
α 0.05
DF 99
tα 1.984216952
Err Margin 7.403500915
ConfInt high 297.6035009
ConfInt low 282.7964991

Annex 33 - SR packet forwarding Quantile-Quantile plot

Annex Table 9 - PF switch packet forwarding measurements


Intent Forwarding

samples (i) Input Pkt Timestamp (s) Output Pkt Timestamp (s) Forwarding time Xi (μs) samp mean X (μs) (Xi-X)² P((i-0.5)/n) Z Distrb
1 13.47017 13.470315 145 186.49 1721.4201 0.005 -2.576
2 14.470173 14.470319 146 186.49 1639.4401 0.015 -2.170
3 15.470173 15.47032 147 186.49 1559.4601 0.025 -1.960
4 12.970181 12.970329 148 186.49 1481.4801 0.035 -1.812
5 10.095177 10.095326 149 186.49 1405.5001 0.045 -1.695

6 12.095179 12.095328 149 186.49 1405.5001 0.055 -1.598


7 12.470182 12.470331 149 186.49 1405.5001 0.065 -1.514
8 13.970177 13.970326 149 186.49 1405.5001 0.075 -1.440
9 14.720176 14.720325 149 186.49 1405.5001 0.085 -1.372
10 16.220174 16.220323 149 186.49 1405.5001 0.095 -1.311
11 9.970179 9.970329 150 186.49 1331.5201 0.105 -1.254
12 11.970181 11.970331 150 186.49 1331.5201 0.115 -1.200
13 6.470183 6.470333 150 186.49 1331.5201 0.125 -1.150
14 11.47018 11.47033 150 186.49 1331.5201 0.135 -1.103
15 16.095163 16.095313 150 186.49 1331.5201 0.145 -1.058
16 11.095178 11.095329 151 186.49 1259.5401 0.155 -1.015
17 12.720176 12.720327 151 186.49 1259.5401 0.165 -0.974
18 10.970181 10.970332 151 186.49 1259.5401 0.175 -0.935
19 13.720177 13.720328 151 186.49 1259.5401 0.185 -0.896
20 14.095181 14.095332 151 186.49 1259.5401 0.195 -0.860
21 15.09518 15.095331 151 186.49 1259.5401 0.205 -0.824
22 15.220175 15.220326 151 186.49 1259.5401 0.215 -0.789
23 8.970184 8.970336 152 186.49 1189.5601 0.225 -0.755
24 10.470178 10.47033 152 186.49 1189.5601 0.235 -0.722
25 13.095182 13.095334 152 186.49 1189.5601 0.245 -0.690
26 14.220179 14.220331 152 186.49 1189.5601 0.255 -0.659
27 15.720176 15.720328 152 186.49 1189.5601 0.265 -0.628
28 9.47018 9.470332 152 186.49 1189.5601 0.275 -0.598
29 13.220188 13.220341 153 186.49 1121.5801 0.285 -0.568
30 14.595185 14.595338 153 186.49 1121.5801 0.295 -0.539
31 7.470181 7.470334 153 186.49 1121.5801 0.305 -0.510
32 12.595185 12.595339 154 186.49 1055.6001 0.315 -0.482
33 7.97019 7.970344 154 186.49 1055.6001 0.325 -0.454
34 12.220184 12.220339 155 186.49 991.6201 0.335 -0.426
35 6.970183 6.970338 155 186.49 991.6201 0.345 -0.399
36 10.595183 10.595339 156 186.49 929.6401 0.355 -0.372
37 13.595183 13.595339 156 186.49 929.6401 0.365 -0.345
38 9.345213 9.34537 157 186.49 869.6601 0.375 -0.319
39 8.470191 8.470349 158 186.49 811.6801 0.385 -0.292
40 9.595186 9.595344 158 186.49 811.6801 0.395 -0.266
41 12.345194 12.345355 161 186.49 649.7401 0.405 -0.240
42 15.595189 15.59535 161 186.49 649.7401 0.415 -0.215
43 11.595218 11.595381 163 186.49 551.7801 0.425 -0.189
44 14.345222 14.345387 165 186.49 461.8201 0.435 -0.164
45 13.34522 13.34539 170 186.49 271.9201 0.445 -0.138
46 6.220206 6.220383 177 186.49 90.0601 0.455 -0.113
47 9.845252 9.845433 181 186.49 30.1401 0.465 -0.088
48 12.845252 12.845433 181 186.49 30.1401 0.475 -0.063
49 11.845258 11.845439 181 186.49 30.1401 0.485 -0.038
50 4.845173 4.845355 182 186.49 20.1601 0.495 -0.013

51 5.470172 5.470355 183 186.49 12.1801 0.505 0.013


52 10.84526 10.845443 183 186.49 12.1801 0.515 0.038
53 11.34526 11.345443 183 186.49 12.1801 0.525 0.063
54 13.84526 13.845443 183 186.49 12.1801 0.535 0.088
55 4.470165 4.470349 184 186.49 6.2001 0.545 0.113
56 5.345178 5.345362 184 186.49 6.2001 0.555 0.138
57 5.845178 5.845362 184 186.49 6.2001 0.565 0.164
58 5.970177 5.970361 184 186.49 6.2001 0.575 0.189
59 4.970165 4.97035 185 186.49 2.2201 0.585 0.215
60 14.970252 14.970438 186 186.49 0.2401 0.595 0.240
61 10.345268 10.345455 187 186.49 0.2601 0.605 0.266
62 4.345174 4.345361 187 186.49 0.2601 0.615 0.292
63 16.470257 16.470446 189 186.49 6.30009999 0.625 0.319
64 15.97026 15.970449 189 186.49 6.3001 0.635 0.345
65 14.84525 14.845442 192 186.49 30.3601 0.645 0.372
66 6.095252 6.095447 195 186.49 72.4201 0.655 0.399
67 5.09528 5.095477 197 186.49 110.4601 0.665 0.426
68 15.845254 15.845452 198 186.49 132.4801 0.675 0.454
69 16.345254 16.345453 199 186.49 156.5001 0.685 0.482
70 15.345244 15.345443 199 186.49 156.5001 0.695 0.510
71 5.595255 5.595455 200 186.49 182.5201 0.705 0.539
72 5.22018 5.220382 202 186.49 240.5601 0.715 0.568
73 9.095175 9.095377 202 186.49 240.5601 0.725 0.598
74 8.095179 8.095385 206 186.49 380.6401 0.735 0.628
75 8.595181 8.595388 207 186.49 420.6601 0.745 0.659
76 7.345182 7.345391 209 186.49 506.7001 0.755 0.690
77 6.845181 6.845393 212 186.49 650.7601 0.765 0.722
78 4.220174 4.220388 214 186.49 756.8001 0.775 0.755
79 5.720269 5.720484 215 186.49 812.8201 0.785 0.789
80 9.220258 9.220474 216 186.49 870.8401 0.795 0.824
81 10.220256 10.220476 220 186.49 1122.9201 0.805 0.860
82 11.220254 11.220475 221 186.49 1190.9401 0.815 0.896
83 10.720257 10.720479 222 186.49 1260.9601 0.825 0.935
84 4.72018 4.720402 222 186.49 1260.9601 0.835 0.974
85 8.220265 8.220489 224 186.49 1407.0001 0.845 1.015
86 8.72026 8.720484 224 186.49 1407.0001 0.855 1.058
87 11.72026 11.720489 229 186.49 1807.1001 0.865 1.103
88 6.720277 6.720507 230 186.49 1893.1201 0.875 1.150
89 4.0952 4.095432 232 186.49 2071.1601 0.885 1.200
90 9.720262 9.720495 233 186.49 2163.1801 0.895 1.254
91 7.220272 7.220506 234 186.49 2257.2001 0.905 1.311
92 7.720277 7.720513 236 186.49 2451.2401 0.915 1.372
93 4.595212 4.59545 238 186.49 2653.2801 0.925 1.440
94 6.595181 6.595442 261 186.49 5551.7401 0.935 1.514
95 7.095182 7.095448 266 186.49 6321.8401 0.945 1.598

96 7.595182 7.595451 269 186.49 6807.9001 0.955 1.695


97 7.845182 7.845453 271 186.49 7141.9401 0.965 1.812
98 6.345178 6.345484 306 186.49 14282.6401 0.975 1.960
99 8.845253 8.845585 332 186.49 21173.1601 0.985 2.170
100 8.345253 8.345595 342 186.49 24183.3601 0.995 2.576

Annex Table 10 - PF packet forwarding confidence interval
Sum (Xi-X)² 166262.99
Variance 1679.424141
Smp Std Dev 40.98077771
conf Level 0.95
α 0.05
DF 99
tα 1.984216952
Err Margin 8.131475381
ConfInt high 194.6214754
ConfInt low 178.3585246

Annex 34 - PF packet forwarding Quantile-Quantile plot

III. Static path installation

Annex Table 11 - SR static path measurements


Segment Routing

samples (i)  Tunnel Exec timestamp (s)  Last GroupMod (s)  Tunnel install (ms)  Policy Exec timestamp (s)  Last FlowMod (s)  Policy install (ms)  Path install Xi (ms)  samp mean X (ms)  (Xi-X)²
1 234.959165 234.962585 3.42 235.044867 235.046818 1.951 5.371 11.45278 36.98804797
2 49.56627 49.571118 4.848 49.6636 49.666065 2.465 7.313 11.45278 17.13777845
3 234.763761 234.769267 5.506 234.846927 234.848891 1.964 7.47 11.45278 15.86253653
4 186.440217 186.445327 5.11 186.533055 186.535578 2.523 7.633 11.45278 14.59071925
5 265.272682 265.277006 4.324 265.361302 265.364678 3.376 7.7 11.45278 14.08335773
6 110.665718 110.670641 4.923 110.761451 110.764538 3.087 8.01 11.45278 11.85273413
7 139.730896 139.735645 4.749 139.960788 139.96408 3.292 8.041 11.45278 11.64024277
8 234.540089 234.546049 5.96 234.635031 234.637304 2.273 8.233 11.45278 10.36698325
9 110.735203 110.739609 4.406 110.829845 110.833864 4.019 8.425 11.45278 9.167451728
10 265.056555 265.062316 5.761 265.147385 265.150101 2.716 8.477 11.45278 8.855266608
11 205.858082 205.863396 5.314 205.953362 205.956563 3.201 8.515 11.45278 8.630551328
12 175.675611 175.681098 5.487 175.768492 175.771545 3.053 8.54 11.45278 8.484287329
13 56.311962 56.317261 5.299 56.402553 56.405938 3.385 8.684 11.45278 7.666142688
14 139.388733 139.393636 4.903 139.617403 139.621241 3.838 8.741 11.45278 7.353750768
15 265.47667 265.482741 6.071 265.566472 265.569262 2.79 8.861 11.45278 6.717323569
16 45.732142 45.738171 6.029 45.845438 45.848303 2.865 8.894 11.45278 6.547355088
17 175.465048 175.471327 6.279 175.560884 175.563554 2.67 8.949 11.45278 6.268914288
18 282.000766 282.005501 4.735 282.115277 282.1195 4.223 8.958 11.45278 6.223927248
19 247.330713 247.335796 5.083 247.442461 247.44635 3.889 8.972 11.45278 6.154269409
20 109.92837 109.933786 5.416 110.134708 110.138313 3.605 9.021 11.45278 5.913553968
21 56.509842 56.516045 6.203 56.605299 56.60817 2.871 9.074 11.45278 5.658594288
22 77.745368 77.750239 4.871 77.832323 77.836653 4.33 9.201 11.45278 5.070513168
23 175.903967 175.910061 6.094 176.008314 176.011463 3.149 9.243 11.45278 4.883127648
24 110.45395 110.460269 6.319 110.552794 110.55582 3.026 9.345 11.45278 4.442736528
25 205.451325 205.458684 7.359 205.545968 205.547967 1.999 9.358 11.45278 4.388103248
26 132.310192 132.316996 6.804 132.40859 132.411168 2.578 9.382 11.45278 4.288129808
27 77.942107 77.947775 5.668 78.032449 78.036166 3.717 9.385 11.45278 4.275714128
28 75.339968 75.346561 6.593 75.461858 75.464702 2.844 9.437 11.45278 4.063369008
29 176.123535 176.129892 6.357 176.210254 176.213342 3.088 9.445 11.45278 4.031180528
30 110.230062 110.235065 5.003 110.373722 110.378193 4.471 9.474 11.45278 3.915570288
31 213.012401 213.019416 7.015 213.130574 213.133286 2.712 9.727 11.45278 2.978316608
32 47.224732 47.230849 6.117 47.36651 47.370131 3.621 9.738 11.45278 2.940470448
33 142.394727 142.401198 6.471 142.496999 142.500408 3.409 9.88 11.45278 2.473636928
34 46.981398 46.987789 6.391 47.065919 47.069482 3.563 9.954 11.45278 2.246341488
35 16.384034 16.390645 6.611 16.479096 16.482478 3.382 9.993 11.45278 2.130957648
36 234.328944 234.335138 6.194 234.431488 234.435374 3.886 10.08 11.45278 1.884524928
37 212.784158 212.791807 7.649 212.904919 212.907395 2.476 10.125 11.45278 1.762999728
38 76.824819 76.831465 6.646 76.917246 76.920734 3.488 10.134 11.45278 1.739180688
39 264.780268 264.786896 6.628 264.926476 264.930063 3.587 10.215 11.45278 1.532099328
40 142.808989 142.815465 6.476 142.898409 142.902255 3.846 10.322 11.45278 1.278663408
41 142.607705 142.614256 6.551 142.698284 142.702212 3.928 10.479 11.45278 0.948247488
42 46.775942 46.783113 7.171 46.879015 46.882398 3.383 10.554 11.45278 0.807805488
43 77.472528 77.478996 6.468 77.645481 77.649675 4.194 10.662 11.45278 0.625333008
44 132.516414 132.524382 7.968 132.613758 132.616553 2.795 10.763 11.45278 0.475796448
45 87.529692 87.537206 7.514 87.624753 87.628098 3.345 10.859 11.45278 0.352574688
46 16.614891 16.623639 8.748 16.709557 16.711789 2.232 10.98 11.45278 0.223520928
47 49.129326 49.135753 6.427 49.239011 49.243577 4.566 10.993 11.45278 0.211397648
48 75.759592 75.767405 7.813 75.857753 75.861125 3.372 11.185 11.45278 0.071706128
49 76.05871 76.066983 8.273 76.171348 76.174382 3.034 11.307 11.45278 0.021251808
50 156.002048 156.009044 6.996 156.088614 156.092928 4.314 11.31 11.45278 0.020386128
51 103.741394 103.749691 8.297 103.825946 103.82899 3.044 11.341 11.45278 0.012494768
52 46.559687 46.567802 8.115 46.661902 46.665307 3.405 11.52 11.45278 0.004518528
53 205.650465 205.658698 8.233 205.755697 205.759082 3.385 11.618 11.45278 0.027297648
54 103.295164 103.303805 8.641 103.412333 103.415457 3.124 11.765 11.45278 0.097481328
55 186.033125 186.040564 7.439 186.127509 186.131847 4.338 11.777 11.45278 0.105118608
56 110.245028 110.253722 8.694 110.343936 110.347226 3.29 11.984 11.45278 0.282194688
57 142.174174 142.182825 8.651 142.275404 142.278805 3.401 12.052 11.45278 0.359064608
58 205.226429 205.235549 9.12 205.343591 205.346592 3.001 12.121 11.45278 0.446517968
59 186.228788 186.236507 7.719 186.325522 186.329993 4.471 12.19 11.45278 0.543493328
60 213.479698 213.487762 8.064 213.597861 213.602008 4.147 12.211 11.45278 0.574897568
61 76.628984 76.636754 7.77 76.722337 76.726864 4.527 12.297 11.45278 0.712707408
62 282.214442 282.222738 8.296 282.360658 282.364834 4.176 12.472 11.45278 1.038809408
63 126.95684 126.966651 9.811 127.059438 127.062103 2.665 12.476 11.45278 1.046979168
64 75.049629 75.056993 7.364 75.164312 75.169545 5.233 12.597 11.45278 1.309239408
65 246.662457 246.671415 8.958 246.777554 246.781197 3.643 12.601 11.45278 1.318409168
66 16.806171 16.815522 9.351 16.897521 16.900772 3.251 12.602 11.45278 1.320706608
67 246.893017 246.901966 8.949 246.982223 246.98596 3.737 12.686 11.45278 1.520831568
68 76.283172 76.292359 9.187 76.47232 76.475913 3.593 12.78 11.45278 1.761512928
69 139.01082 139.019688 8.868 139.247526 139.251439 3.913 12.781 11.45278 1.764168368
70 247.099983 247.10899 9.007 247.212784 247.216629 3.845 12.852 11.45278 1.957816608
71 16.169197 16.178052 8.855 16.274508 16.278616 4.108 12.963 11.45278 2.280764448
72 213.24977 213.258804 9.034 213.37328 213.37722 3.94 12.974 11.45278 2.314110288
73 103.944411 103.953462 9.051 104.040612 104.044599 3.987 13.038 11.45278 2.512922448
74 110.490843 110.500019 9.176 110.591392 110.595306 3.914 13.09 11.45278 2.680489328
75 13.970937 13.981191 10.254 14.081283 14.084497 3.214 13.468 11.45278 4.061111648
76 282.490134 282.498726 8.592 282.609048 282.613953 4.905 13.497 11.45278 4.178835408
77 18.746742 18.756389 9.647 18.837486 18.841357 3.871 13.518 11.45278 4.265133648
78 126.736846 126.747473 10.627 126.838595 126.841533 2.938 13.565 11.45278 4.461473328
79 127.17054 127.178863 8.323 127.274968 127.280287 5.319 13.642 11.45278 4.792684208
80 75.572525 75.58354 11.015 75.659753 75.662467 2.714 13.729 11.45278 5.181177488
81 86.871903 86.87994 8.037 86.97877 86.984521 5.751 13.788 11.45278 5.453252448
82 87.091709 87.102667 10.958 87.190626 87.19347 2.844 13.802 11.45278 5.518834608
83 46.002182 46.01318 10.998 46.170999 46.173867 2.868 13.866 11.45278 5.823630768
84 48.890945 48.900434 9.489 49.019043 49.023471 4.428 13.917 11.45278 6.072380208
85 87.320339 87.33122 10.881 87.422452 87.425569 3.117 13.998 11.45278 6.478144848
86 156.415761 156.425227 9.466 156.51474 156.519347 4.607 14.073 11.45278 6.865552849
87 185.813571 185.823577 10.006 185.92037 185.924464 4.094 14.1 11.45278 7.007773728
88 14.491869 14.501237 9.368 14.631419 14.636294 4.875 14.243 11.45278 7.785327648
89 282.728387 282.739161 10.774 282.858022 282.861591 3.569 14.343 11.45278 8.353371648
90 103.532492 103.542876 10.384 103.634678 103.638774 4.096 14.48 11.45278 9.164060928
91 45.511119 45.522167 11.048 45.621888 45.625778 3.89 14.938 11.45278 12.14675845
92 49.360822 49.370109 9.287 49.453311 49.459217 5.906 15.193 11.45278 13.98924565
93 15.347196 15.35957 12.374 15.4418 15.445009 3.209 15.583 11.45278 17.05871725
94 77.121751 77.132568 10.817 77.235321 77.240251 4.93 15.747 11.45278 18.44032541
95 46.271061 46.280514 9.453 46.366215 46.372546 6.331 15.784 11.45278 18.75946669
96 138.76031 138.77074 10.43 138.895118 138.900642 5.524 15.954 11.45278 20.26098149
97 14.214105 14.225264 11.159 14.321829 14.326852 5.023 16.182 11.45278 22.36552181
98 15.126656 15.13767 11.014 15.237764 15.243168 5.404 16.418 11.45278 24.65340965
99 55.880187 55.890679 10.492 55.980613 55.986642 6.029 16.521 11.45278 25.68685397
100 111.051019 111.060644 9.625 111.16545 111.172554 7.104 16.729 11.45278 27.83849749

Annex Table 12 - SR static path confidence interval
Sum (Xi-X)² 612.9025132
Variance 6.190934476
Smp Std Dev 2.488158853
conf Level 0.95
α 0.05
DF 99
tα 1.984216952
Err Margin 0.493704697
ConfInt high 11.9464847
ConfInt low 10.9590753

Annex 35 - SR static path Quantile-Quantile plot

Annex Table 13 - PF static path measurements


Intent Forwarding

Samples (i) Exec timestamp (s) 1st FlowMod (s) Proc time (ms) Last FlowMod (s) Path install Xi (ms) samp mean X (ms) (Xi-X)²
1 24.148332 24.163759 15.427 24.175457 27.125 52.29085 633.3200062
2 21.843188 21.861017 17.829 21.871478 28.29 52.29085 576.0408007
3 14.572622 14.589202 16.58 14.601089 28.467 52.29085 567.5758288
4 25.642074 25.659869 17.795 25.670946 28.872 52.29085 548.4425353
5 31.31051 31.329418 18.908 31.340736 30.226 52.29085 486.8576055
6 48.023155 48.053729 30.574 48.053942 30.787 52.29085 462.4155648
7 28.150048 28.170378 20.33 28.181348 31.3 52.29085 440.6157837
8 49.441095 49.46061 19.515 49.473489 32.394 52.29085 395.8846399
9 4.459075 4.479099 20.024 4.49147 32.395 52.29085 395.8448472
10 42.778728 42.799222 20.494 42.811529 32.801 52.29085 379.854253
11 34.503008 34.523884 20.876 34.535939 32.931 52.29085 374.803792
12 10.109707 10.131365 21.658 10.143578 33.871 52.29085 339.290874
13 8.863993 8.887068 23.075 8.899461 35.468 52.29085 283.0082821
14 41.410991 41.435602 24.611 41.447319 36.328 52.29085 254.8125801
15 11.726545 11.753689 27.144 11.763427 36.882 52.29085 237.4326583
16 29.00057 29.025426 24.856 29.037611 37.041 52.29085 232.557925
17 12.261572 12.286487 24.915 12.29917 37.598 52.29085 215.8798411
18 40.520457 40.54559 25.133 40.558113 37.656 52.29085 214.1788345
19 9.45717 9.481951 24.781 9.495581 38.411 52.29085 192.650236
20 46.268747 46.294899 26.152 46.307499 38.752 52.29085 183.3004593
21 48.74245 48.769086 26.636 48.781863 39.413 52.29085 165.8390206
22 46.022767 46.051438 28.671 46.062605 39.838 52.29085 155.0734731
23 47.279705 47.307633 27.928 47.319921 40.216 52.29085 145.8020025
24 43.719792 43.747842 28.05 43.760192 40.4 52.29085 141.3923137
25 10.91182 10.939656 27.836 10.952426 40.606 52.29085 136.5357195
26 32.394519 32.42244 27.921 32.435138 40.619 52.29085 136.2320824
27 45.917917 45.946304 28.387 45.958576 40.659 52.29085 135.2999344
28 18.829373 18.858428 29.055 18.870436 41.063 52.29085 126.0646156
29 50.174375 50.203697 29.322 50.215818 41.443 52.29085 117.6758496
30 33.661911 33.690911 29 33.703983 42.072 52.29085 104.4248953
31 45.023439 45.053948 30.509 45.066261 42.822 52.29085 89.65912032
32 21.120018 21.151065 31.047 21.163337 43.319 52.29085 80.49409242
33 33.601103 33.633669 32.566 33.644688 43.585 52.29085 75.79182422
34 26.191221 26.221973 30.752 26.23488 43.659 52.29085 74.50883442
35 14.269304 14.301065 31.761 14.313265 43.961 52.29085 69.38640102
36 30.242004 30.273794 31.79 30.286096 44.092 52.29085 67.22114132
37 43.859681 43.892612 32.931 43.904317 44.636 52.29085 58.59672852
38 36.952977 36.985497 32.52 36.997707 44.73 52.29085 57.16645272
39 5.087257 5.121704 34.447 5.133389 46.132 52.29085 37.93143332

40 41.408834 41.443151 34.317 41.455174 46.34 52.29085 35.41261572


41 18.426974 18.463734 36.76 18.475229 48.255 52.29085 16.28808522
42 36.637561 36.686574 49.013 36.686783 49.222 52.29085 9.417840322
43 68.267326 68.304444 37.118 68.316555 49.229 52.29085 9.374925423
44 15.521297 15.558427 37.13 15.571852 50.555 52.29085 3.013175223
45 52.086845 52.125798 38.953 52.137679 50.834 52.29085 2.122411922
46 24.484503 24.524543 40.04 24.535682 51.179 52.29085 1.236210422
47 27.822185 27.862948 40.763 27.873939 51.754 52.29085 0.288207923
48 38.251253 38.290918 39.665 38.303319 52.066 52.29085 0.050557522
49 27.273368 27.313454 40.086 27.325607 52.239 52.29085 0.002688422
50 52.467208 52.508331 41.123 52.520299 53.091 52.29085 0.640240023
51 71.71873 71.761435 42.705 71.771985 53.255 52.29085 0.929585223
52 6.317345 6.370374 53.029 6.370609 53.264 52.29085 0.947020922
53 34.834294 34.875472 41.178 34.888004 53.71 52.29085 2.013986723
54 6.639194 6.681649 42.455 6.693405 54.211 52.29085 3.686976023
55 38.988262 39.033305 45.043 39.043091 54.829 52.29085 6.442205422
56 21.167891 21.211214 43.323 21.223355 55.464 52.29085 10.06888092
57 30.29223 30.334014 41.784 30.347712 55.482 52.29085 10.18343832
58 15.155011 15.198099 43.088 15.210524 55.513 52.29085 10.38225062
59 19.605867 19.649554 43.687 19.662165 56.298 52.29085 16.05725112
60 61.700109 61.742596 42.487 61.756418 56.309 52.29085 16.14552942
61 45.320266 45.36254 42.274 45.376727 56.461 52.29085 17.39015102
62 18.900727 18.946259 45.532 18.958549 57.822 52.29085 30.59362032
63 41.693091 41.738484 45.393 41.751031 57.94 52.29085 31.91289572
64 40.558182 40.60375 45.568 40.61643 58.248 52.29085 35.48763612
65 43.794121 43.840138 46.017 43.852378 58.257 52.29085 35.59494582
66 23.407276 23.45478 47.504 23.467593 60.317 52.29085 64.41908382
67 11.897949 11.948786 50.837 11.959083 61.134 52.29085 78.20130192
68 32.21933 32.270073 50.743 32.28075 61.42 52.29085 83.34137972
69 19.653043 19.714853 61.81 19.714977 61.934 52.29085 92.99034192
70 14.791717 14.841314 49.597 14.853822 62.105 52.29085 96.31754022
71 27.528824 27.578381 49.557 27.591234 62.41 52.29085 102.3971967
72 21.266677 21.31903 52.353 21.329961 63.284 52.29085 120.8493469
73 22.152338 22.203644 51.306 22.216132 63.794 52.29085 132.3224599
74 9.77813 9.842508 64.378 9.842698 64.568 52.29085 150.7284121
75 64.846197 64.898213 52.016 64.910973 64.776 52.29085 155.8789705
76 12.990792 13.044337 53.545 13.055628 64.836 52.29085 157.3807885
77 34.578643 34.631658 53.015 34.643567 64.924 52.29085 159.5964789
78 37.175912 37.230867 54.955 37.241957 66.045 52.29085 189.1766422
79 48.494846 48.549223 54.377 48.561199 66.353 52.29085 197.7440626
80 35.833128 35.887285 54.157 35.899663 66.535 52.29085 202.8958092
81 24.453909 24.508188 54.279 24.520606 66.697 52.29085 207.5371578
82 17.140693 17.195457 54.764 17.208128 67.435 52.29085 229.3452792
83 58.431037 58.486686 55.649 58.498592 67.555 52.29085 232.9942752
84 29.909677 29.966321 56.644 29.977999 68.322 52.29085 256.9977703
85 29.861664 29.918291 56.627 29.930017 68.353 52.29085 257.9926626


86 7.633285 7.689635 56.35 7.701918 68.633 52.29085 267.0658666
87 55.173106 55.230897 57.791 55.242726 69.62 52.29085 300.2994397
88 16.547131 16.604245 57.114 16.616803 69.672 52.29085 302.1043753
89 39.231334 39.289853 58.519 39.301074 69.74 52.29085 304.4728357
90 47.892406 47.949856 57.45 47.962261 69.855 52.29085 308.4993652
91 16.685175 16.742663 57.488 16.755182 70.007 52.29085 313.8619708
92 17.39471 17.452004 57.294 17.465073 70.363 52.29085 326.6026056
93 39.324909 39.383556 58.647 39.395575 70.666 52.29085 337.6461375
94 7.968298 8.027919 59.621 8.039866 71.568 52.29085 371.6085121
95 26.108382 26.166861 58.479 26.180169 71.787 52.29085 380.0998648
96 12.325165 12.385269 60.104 12.397673 72.508 52.29085 408.7331541
97 23.554778 23.616339 61.561 23.627732 72.954 52.29085 426.9657679
98 32.416407 32.477438 61.031 32.490021 73.614 52.29085 454.6767259
99 36.884485 36.946364 61.879 36.959347 74.862 52.29085 509.4568123
100 4.870121 4.93507 64.949 4.947873 77.752 52.29085 648.2701593

Annex Table 14 - PF static path confidence interval

Sum (Xi-X)²   18525.01717
Variance      187.1213855
Smp Std Dev   13.67923191
conf Level    0.95
α             0.05
DF            99
tα            1.984216952
Err Margin    2.714256383
ConfInt high  55.00510638
ConfInt low   49.57659362

[Annex 36 - PF static path Quantile-Quantile plot: figure omitted (response times 0-100 ms vs. standard normal quantiles, -3 to 3)]
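All confidence-interval tables in this appendix follow the same Student-t construction, X ± tα/2,n-1 · s/√n. The following minimal Python sketch (assuming NumPy and SciPy are available; these library names are not part of the thesis toolchain) reproduces the rows above from the raw per-sample measurements:

```python
import numpy as np
from scipy import stats

def confidence_interval(samples_ms, conf_level=0.95):
    """Reproduce the annex confidence-interval rows from raw samples."""
    x = np.asarray(samples_ms, dtype=float)
    n = x.size
    mean = x.mean()                               # sample mean X
    sum_sq = ((x - mean) ** 2).sum()              # Sum (Xi-X)^2
    variance = sum_sq / (n - 1)                   # sample variance
    std_dev = variance ** 0.5                     # Smp Std Dev
    alpha = 1.0 - conf_level                      # alpha
    t_crit = stats.t.ppf(1.0 - alpha / 2, n - 1)  # t value for DF = n-1
    margin = t_crit * std_dev / n ** 0.5          # Err Margin
    return mean - margin, mean + margin           # ConfInt low / high
```

For Annex Table 14 the values check out: with n = 100, 1.984216952 × 13.67923191 / 10 ≈ 2.714 ms, giving the interval [49.577, 55.005] around the 52.29085 ms sample mean.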

IV. Network events

Annex Table 15 - SR network events measurements


Segment Routing
Sample means: Proc Time X = 122.39923 ms; Resp Time X = 265.32677 ms

(i)  PortStatus msg (s)  1st FlowMod (s)  Last FlowMod (s)  Proc Time Xi (ms)  Proc (Xi-X)²  Resp Time Xi (ms)  Resp (Xi-X)²
1 1640.09343 1640.207794 1640.29132 114.364 64.56492115 197.89 4547.717948
2 1348.967713 1349.081956 1349.16641 114.243 66.52408781 198.697 4439.52625
3 1691.911786 1692.027011 1692.113528 115.225 51.46957609 201.742 4043.022976
4 1475.429678 1475.543253 1475.631707 113.575 77.86703509 202.029 4006.607687
5 1563.904181 1564.018221 1564.107008 114.04 69.87672619 202.827 3906.22125
6 626.103994 626.219236 626.309429 115.242 51.22594127 205.435 3587.024114
7 1193.191158 1193.303896 1193.397148 112.738 93.33936512 205.99 3520.852274
8 906.264552 906.380518 906.472679 115.966 41.38644823 208.127 3271.813688
9 1606.170439 1606.283659 1606.381552 113.22 84.25826339 211.113 2939.132858
10 980.859162 980.97162 981.07063 112.458 98.82805391 211.468 2900.767106
11 849.63055 849.747079 849.842947 116.529 34.45960025 212.397 2801.560552
12 1781.111886 1781.228358 1781.325431 116.472 35.13205547 213.545 2681.351704
13 1115.780351 1115.89311 1115.995194 112.759 92.93403446 214.843 2548.611033
14 243.347856 243.466256 243.562728 118.4 15.99384059 214.872 2545.683816
15 2050.272411 2050.387555 2050.487968 115.144 52.63836235 215.557 2477.030006
16 1072.239501 1072.354005 1072.455453 114.504 62.33465675 215.952 2437.867913
17 580.904817 581.019765 581.120916 114.948 55.52082851 216.099 2423.373339
18 1239.600889 1239.716884 1239.816997 115.995 41.01416189 216.108 2422.48732
19 331.913807 332.03092 332.13237 117.113 27.94422761 218.563 2186.850185
20 1430.120266 1430.241293 1430.338921 121.027 1.883015173 218.655 2178.254115
21 108.91199 109.029896 109.132249 117.906 20.18911583 220.259 2031.103893
22 1873.278882 1873.393389 1873.500474 114.507 62.28729437 221.592 1912.730107
23 722.819027 722.935423 723.040733 116.396 36.03877043 221.706 1902.771575
24 1693.435754 1693.549421 1693.658049 113.667 76.25184078 222.295 1851.733229
25 2008.541319 2008.655386 2008.763869 114.067 69.42605677 222.55 1829.852052
26 1782.16118 1782.277628 1782.384387 116.448 35.41713851 223.207 1774.075025
27 487.593127 487.711229 487.816356 118.102 18.46618567 223.229 1772.222239
28 1397.829209 1397.944988 1398.052566 115.779 43.82744525 223.357 1761.461594
29 1349.118506 1349.233603 1349.342184 115.097 53.32256297 223.678 1734.620043
30 937.378155 937.491959 937.602111 113.804 73.87797875 223.956 1711.54061
31 1087.404971 1087.520097 1087.628997 115.126 52.89987463 224.026 1705.753603
32 1921.234816 1921.357954 1921.459628 123.138 0.545781113 224.812 1641.446588
33 463.558079 463.689425 463.784909 131.346 80.04469343 226.83 1482.0013
34 20.309845 20.42575 20.538412 115.905 42.17502329 228.567 1351.28069
35 860.286124 860.401071 860.515269 114.947 55.53573197 229.145 1309.12048
36 767.841435 767.95982 768.075984 118.385 16.11404249 234.549 947.2711262
37 946.326928 946.471735 946.562211 144.807 502.1081564 235.283 902.6281158
38 1303.841882 1303.958461 1304.079472 116.579 33.87507725 237.59 769.32841
39 198.945676 199.064199 199.184691 118.523 15.02515901 239.015 692.3092405
40 898.900964 899.048081 899.143006 147.117 610.9681538 242.042 542.180514
41 428.147398 428.262995 428.389742 115.597 46.27033297 242.344 528.2077169
42 1964.705782 1964.819852 1964.948579 114.07 69.37607239 242.797 507.5905362
43 1592.269359 1592.383413 1592.512591 114.054 69.64286375 243.232 488.1788614
44 287.148594 287.263441 287.39302 114.847 57.03617797 244.426 436.8421866
45 533.638022 533.753275 533.882758 115.253 51.06860321 244.736 423.9798092
46 1387.822425 1387.936885 1388.067165 114.46 63.03137299 244.74 423.815099
47 508.555224 508.672562 508.800408 117.338 25.61604911 245.184 405.7311833
48 1522.014217 1522.129002 1522.259408 114.785 57.97649849 245.191 405.4492335
49 1648.0264 1648.140023 1648.271742 113.623 77.02221301 245.342 399.391032
50 804.019129 804.131865 804.265009 112.736 93.37801403 245.88 378.1768634
51 1735.434437 1735.547275 1735.683738 112.838 91.41711911 249.301 256.8253041
52 64.559371 64.6749 64.80873 115.529 47.20006025 249.359 254.9696788
53 813.544592 813.659711 813.794531 115.119 53.00174885 249.939 236.7834656
54 2093.547664 2093.663858 2093.79889 116.194 38.50487936 251.226 198.8317146
55 155.226452 155.346665 155.477767 120.213 4.779601613 251.315 196.3296985
56 1024.007558 1024.124052 1024.259072 116.494 34.87174135 251.514 190.7926151
57 671.265852 671.382819 671.517707 116.967 29.50912277 251.855 181.4885869
58 1448.993288 1449.110053 1449.247156 116.765 31.74454769 253.868 131.3034099
59 1736.896987 1737.012418 1737.1512 115.431 48.55622933 254.213 123.5158836
60 379.63542 379.751003 379.896467 115.583 46.46099141 261.047 18.31643125
61 64.172286 64.286831 64.433657 114.545 61.68892889 261.371 15.64811629
62 1497.966421 1498.091646 1498.230222 125.225 7.984976093 263.801 2.327974093
63 1035.294347 1035.432506 1035.559072 138.159 248.3703505 264.725 0.362127133
64 284.045545 284.161716 284.311757 116.171 38.79084893 266.212 0.783632153
65 251.040458 251.156076 251.309376 115.618 45.98508031 268.918 12.89693291
66 182.299955 182.415488 182.569057 115.533 47.14511441 269.102 14.25236155
67 990.886987 991.000898 991.158302 113.911 72.05004853 271.315 35.85889853
68 149.845202 150.000643 150.11658 155.441 1091.758565 271.378 36.61738451
69 445.777457 445.892273 446.050496 114.816 57.50537723 273.039 59.47849157
70 1160.485423 1160.600748 1160.759283 115.325 50.04473009 273.86 72.81601423
71 199.066109 199.196314 199.340963 130.205 60.93004529 274.854 90.76811147
72 14.09145 14.23164 14.371445 140.19 316.5114972 279.995 215.1569713
73 61.392076 61.509003 61.680708 116.927 29.94530117 288.632 543.1337454
74 330.13365 330.329672 330.424036 196.022 5420.312262 290.386 627.9650082
75 108.858259 108.973163 109.148839 114.904 56.17847275 290.58 637.7256254
76 525.722842 525.839167 526.015848 116.325 36.89627009 293.006 766.1397734
77 1256.846285 1256.962517 1257.142717 116.232 38.03472587 296.432 967.5353333
78 1542.095298 1542.21851 1542.393758 123.212 0.660595073 298.46 1097.81093
79 210.306113 210.421787 210.608802 115.674 45.22871855 302.689 1395.936231
80 107.104908 107.227048 107.407736 122.14 0.067200193 302.828 1406.342252
81 205.277349 205.494245 205.58253 216.896 8929.63954 305.181 1588.359649
82 556.759998 556.893715 557.069717 133.717 128.0919178 309.719 1970.670084
83 37.202187 37.344628 37.526748 142.441 401.6725447 324.561 3508.694004
84 257.617179 257.729964 257.94281 112.785 92.43341849 325.631 3636.600156
85 380.92484 381.042496 381.253069 117.656 22.49823083 328.229 3956.690539
86 159.473839 159.58889 159.803743 115.051 53.99648413 329.904 4170.218634
87 499.337931 499.475939 499.669169 138.008 243.6337009 331.238 4344.29024
88 1212.456005 1212.601677 1212.793005 145.672 541.6218235 337 5137.051899
89 400.76951 400.882628 401.121772 113.118 86.14123031 352.262 7557.734215
90 369.397249 369.509481 369.750592 112.232 103.3725659 353.343 7746.856743
91 228.902261 229.019389 229.257197 117.128 27.78586571 354.936 8029.814101
92 474.564843 474.695302 474.929351 130.459 64.95989245 364.508 9836.916384
93 1285.685736 1285.800906 1286.059591 115.17 52.26176639 373.855 11778.37671
94 164.693132 164.827361 165.070604 134.229 139.9434583 377.472 12576.55261
95 300.89542 301.020396 301.275552 124.976 6.639743633 380.132 13180.24084
96 153.079647 153.344008 153.470862 264.361 20153.14414 391.215 15847.84645
97 330.373389 330.495219 330.773624 121.83 0.324022793 400.235 18200.23052
98 1827.111915 1827.228413 1827.530265 116.498 34.82451551 418.35 23416.10892
99 352.118729 352.236107 352.539103 117.378 25.21275071 420.374 24039.64353
100 378.601675 378.721914 379.037518 120.239 4.666593653 435.843 29075.78469
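The Proc Time and Resp Time columns above are derived directly from the three captured timestamps: processing time is the gap between the OFPT_PORT_STATUS message and the first OFPT_FLOW_MOD, and response time the gap to the last OFPT_FLOW_MOD. A minimal sketch of the conversion (timestamps in seconds, as listed in the table):

```python
# How the Proc Time and Resp Time columns are derived from the three
# captured timestamps (all in seconds, as listed in the table).
def event_times_ms(port_status_s, first_flowmod_s, last_flowmod_s):
    proc_ms = (first_flowmod_s - port_status_s) * 1000.0  # Proc Time Xi
    resp_ms = (last_flowmod_s - port_status_s) * 1000.0   # Resp Time Xi
    return proc_ms, resp_ms

# Sample 1 above: event_times_ms(1640.09343, 1640.207794, 1640.29132)
# -> (114.364, 197.89), matching the table.
```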
Annex Table 16 - SR network events response confidence interval

Sum (Xi-X)²   315275.9
Variance      3184.605
Smp Std Dev   56.43231
conf Level    0.95
α             0.05
DF            99
tα            1.984217
Err Margin    11.19739
Resp Time (ms):
ConfInt high  276.5242
ConfInt low   254.1294

Annex Table 17 - SR network events processing confidence interval

Sum (Xi-X)²   42893.01
Variance      433.2627
Smp Std Dev   20.81496
conf Level    0.95
α             0.05
DF            99
tα            1.984217
Err Margin    4.13014
Proc Time (ms):
ConfInt high  126.5294
ConfInt low   118.2691

[Annex 37 - SR network events Quantile-Quantile plot: figure omitted (response times 0-500 ms vs. standard normal quantiles, -3 to 3)]

Annex Table 18 - PF network events measurements


Intent Forwarding
Sample means: Proc Time X = 25.22613 ms; Resp Time X = 40.75343 ms

(i)  PortStatus msg (s)  1st FlowMod (s)  Last FlowMod (s)  Proc Time Xi (ms)  Proc (Xi-X)²  Resp Time Xi (ms)  Resp (Xi-X)²
1 817.977607 818.001395 818.006161 23.788 2.068217897 28.554 148.8260923
2 387.371081 387.394326 387.407174 23.245 3.924876077 36.093 21.71960778
3 287.882959 287.90683 287.91914 23.871 1.836377317 36.181 20.9071161
4 190.059282 190.082237 190.095482 22.955 5.158031477 36.2 20.73372476
5 319.09123 319.114385 319.12745 23.155 4.289579477 36.22 20.55198756
6 234.357335 234.380308 234.393692 22.973 5.076594797 36.357 19.32859675
7 622.594274 622.617888 622.630709 23.614 2.598963137 36.435 18.64883767
8 736.110123 736.13506 736.146642 24.937 0.083596157 36.519 17.93039742
9 510.226935 510.251142 510.263478 24.207 1.038625957 36.543 17.72772078
10 237.841826 237.865645 237.878438 23.819 1.980014837 36.612 17.15144244
11 23.58896 23.612715 23.625603 23.755 2.164223477 36.643 16.89563478
12 57.503059 57.526921 57.53972 23.862 1.860850657 36.661 16.7479833
13 469.311839 469.336307 469.348533 24.468 0.574761097 36.694 16.47897193
14 647.014355 647.037843 647.051095 23.488 3.021095897 36.74 16.10762036
15 844.298811 844.322361 844.335649 23.55 2.809411777 36.838 15.33059208
16 331.209085 331.232907 331.245971 23.822 1.971581057 36.886 14.95701481
17 752.604026 752.628098 752.641032 24.072 1.332016057 37.006 14.0432316
18 88.555757 88.580159 88.592773 24.402 0.679190257 37.016 13.968383
19 270.531082 270.554457 270.568234 23.375 3.426682277 37.152 12.97029804
20 629.00927 629.033193 629.046449 23.923 1.698147797 37.179 12.77654982
21 693.736267 693.760297 693.773446 24.03 1.430726977 37.179 12.77654982
22 596.094906 596.118498 596.132089 23.592 2.670380857 37.183 12.74797039
23 557.794442 557.818705 557.831657 24.263 0.927619397 37.215 12.52048687
24 495.780706 495.804711 495.818054 24.005 1.491158477 37.348 11.59695348
25 87.892372 87.917393 87.92977 25.021 0.042078317 37.398 11.25891048
26 642.581632 642.606003 642.619093 24.371 0.731247317 37.461 10.8400953
27 654.630847 654.65336 654.668316 22.513 7.361074397 37.469 10.78748042
28 603.785544 603.811436 603.823025 25.892 0.443382857 37.481 10.7087981
29 349.136794 349.16044 349.17439 23.646 2.496810817 37.596 9.969364205
30 22.760499 22.784196 22.798154 23.697 2.338238557 37.655 9.600268465
31 324.540695 324.5662 324.578432 25.505 0.077768477 37.737 9.098849945
32 166.599214 166.623315 166.636962 24.101 1.265917517 37.748 9.032609485
33 487.544906 487.568359 487.582722 23.453 3.143989997 37.816 8.628495005
34 390.483998 390.507722 390.521845 23.724 2.256394537 37.847 8.447335345
35 396.98152 397.005544 397.019385 24.024 1.445116537 37.865 8.343027865
36 245.229443 245.253614 245.267322 24.171 1.113299317 37.879 8.262347825
37 619.248926 619.272236 619.286879 23.31 3.671554177 37.953 7.842408185
38 119.812706 119.836152 119.850771 23.446 3.168862817 38.065 7.227655865
39 526.746883 526.773567 526.784971 26.684 2.125384937 38.088 7.104517085
40 163.375675 163.400256 163.413834 24.581 0.416192717 38.159 6.731067025
41 655.39286 655.416749 655.43106 23.889 1.787916637 38.2 6.520004765
42 612.996045 613.020164 613.034255 24.119 1.225736837 38.21 6.469036165
43 435.191381 435.215287 435.229598 23.906 1.742743217 38.217 6.433477145
44 358.196183 358.221391 358.234418 25.208 0.000328697 38.235 6.342489665
45 804.648015 804.672239 804.686433 24.224 1.004264537 38.418 5.454233285
46 201.46008 201.483576 201.498501 23.496 2.993349817 38.421 5.440229705
47 561.470964 561.495679 561.50949 24.715 0.261253877 38.526 4.961444405
48 558.125027 558.148422 558.163579 23.395 3.353037077 38.552 4.846294045
49 461.200728 461.22482 461.239304 24.092 1.286250857 38.576 4.741201405
50 581.677497 581.700756 581.716201 23.259 3.869600437 38.704 4.200163325
51 524.663459 524.686699 524.702238 23.24 3.944712377 38.779 3.898373825
52 694.140261 694.164837 694.179124 24.576 0.422669017 38.863 3.573725585
53 784.801925 784.826225 784.840804 24.3 0.857716777 38.879 3.513487825
54 325.771419 325.795374 325.810356 23.955 1.615771477 38.937 3.299417945
55 167.90855 167.932068 167.947522 23.518 2.917708097 38.972 3.173492845
56 154.007951 154.031905 154.047051 23.954 1.618314737 39.1 2.733830765
57 133.252961 133.277099 133.292205 24.138 1.184026897 39.244 2.278378925
58 774.525252 774.55014 774.564533 24.888 0.114331897 39.281 2.168050105
59 215.294349 215.331813 215.33367 37.464 149.7654621 39.321 2.051855705
60 55.673163 55.696278 55.712591 23.115 4.456869877 39.428 1.756764685
61 551.379859 551.403794 551.419328 23.935 1.667016677 39.469 1.649760425
62 429.958111 429.982845 429.997647 24.734 0.242191937 39.536 1.482135805
63 236.32414 236.347002 236.363743 22.862 5.589110657 39.603 1.323489185
64 363.325198 363.349491 363.364861 24.293 0.870731597 39.663 1.189037585
65 205.29525 205.319015 205.335082 23.765 2.134900877 39.832 0.849033245
66 184.204565 184.242043 184.244415 37.478 150.1083185 39.85 0.816185765
67 204.791998 204.815696 204.831984 23.698 2.335181297 39.986 0.588948805
68 287.266274 287.291513 287.306593 25.239 0.000165637 40.319 0.188729425
69 786.909858 786.932998 786.950222 23.14 4.351938377 40.364 0.151655725
70 677.933414 677.957459 677.973829 24.045 1.395068077 40.415 0.114534865
71 424.498429 424.521901 424.538885 23.472 3.076972057 40.456 0.088464605
72 24.30298 24.326561 24.343535 23.581 2.706452717 40.555 0.039374465
73 521.46991 521.49244 521.510494 22.53 7.269116977 40.584 0.028706525
74 93.135945 93.15979 93.176756 23.845 1.907520077 40.811 0.003314305
75 292.706182 292.743932 292.747157 37.75 156.8473198 40.975 0.049093265
76 591.373851 591.39759 591.41505 23.739 2.211555637 41.199 0.198532625
77 119.977471 120.001576 120.018889 24.105 1.256932477 41.418 0.441653285
78 359.128899 359.154159 359.170572 25.26 0.001147177 41.673 0.845608985
79 686.511333 686.535162 686.553312 23.829 1.951972237 41.979 1.502021825
80 753.222629 753.246732 753.264782 24.103 1.261420997 42.153 1.958796185
81 90.230038 90.253529 90.272778 23.491 3.010676117 42.74 3.946460365
82 122.388951 122.416028 122.432479 27.077 3.425719757 43.528 7.698238685
83 454.768408 454.792305 454.812542 23.897 1.766586557 44.134 11.42825352
84 76.261335 76.302084 76.305474 40.749 240.959493 44.139 11.46208422
85 567.414307 567.438287 567.458472 23.98 1.552839977 44.165 11.63880987
86 661.768738 661.793657 661.813402 24.919 0.094328837 44.664 15.29255773
87 85.805629 85.830596 85.850447 24.967 0.067148357 44.818 16.52072928
88 854.885796 854.90928 854.931256 23.484 3.035016937 45.46 22.15180116
89 272.174899 272.198304 272.220781 23.405 3.316514477 45.882 26.30223024
90 685.016939 685.040499 685.065037 23.56 2.775989177 48.098 53.94270848
91 135.267678 135.312831 135.315992 45.153 397.080148 48.314 57.16221872
92 151.997034 152.020674 152.046491 23.64 2.515808377 49.457 75.75213074
93 490.602932 490.626546 490.653023 23.614 2.598963137 50.091 87.19021351
94 718.492488 718.515864 718.543106 23.376 3.422981017 50.618 97.30974128
95 401.81422 401.838831 401.869098 24.611 0.378384917 54.878 199.5034777
96 27.655682 27.71166 27.715065 55.978 945.6775085 59.383 347.0608784
97 423.884072 423.908475 423.945067 24.403 0.677542997 60.995 409.7211561
98 306.445034 306.468547 306.507169 23.513 2.934814397 62.135 457.1715357
99 250.614212 250.650917 250.67839 36.705 131.7644565 64.178 548.7104797
100 62.173783 62.19771 62.238075 23.927 1.687738757 64.292 554.0642776
Annex Table 19 - PF network events response confidence interval

Sum (Xi-X)²   3778.722
Variance      38.16891
Smp Std Dev   6.178099
conf Level    0.95
α             0.05
DF            99
tα            1.984217
Err Margin    1.225869
Resp. Time (ms):
ConfInt high  41.9793
ConfInt low   39.52756

Annex Table 20 - PF network events processing confidence interval

Sum (Xi-X)²   2366.56
Variance      23.90464
Smp Std Dev   4.889238
conf Level    0.95
α             0.05
DF            99
tα            1.984217
Err Margin    0.970131
Proc. Time (ms):
ConfInt high  26.19626
ConfInt low   24.256

[Annex 38 - PF network events Quantile-Quantile plot: figure omitted (response times 0-80 ms vs. standard normal quantiles, -3 to 3)]

Annex Table 21 - SR network events jitter measurements


Segment Routing

Samples (i) Jitter (ms) Jitter (μs) Pkt loss % Sample mean (µs) (Xi-X)² P((i-0.5)/n) Z Distrb
1 0.037 37 0% 57.01 400.4001 0.005 -2.576
2 0.037 37 0.56% 57.01 400.4001 0.015 -2.170
3 0.037 37 0% 57.01 400.4001 0.025 -1.960
4 0.037 37 0.56% 57.01 400.4001 0.035 -1.812
5 0.037 37 0% 57.01 400.4001 0.045 -1.695
6 0.037 37 0.56% 57.01 400.4001 0.055 -1.598
7 0.037 37 0.56% 57.01 400.4001 0.065 -1.514
8 0.037 37 0.56% 57.01 400.4001 0.075 -1.440
9 0.037 37 0.56% 57.01 400.4001 0.085 -1.372
10 0.037 37 0.56% 57.01 400.4001 0.095 -1.311
11 0.037 37 0% 57.01 400.4001 0.105 -1.254
12 0.037 37 0% 57.01 400.4001 0.115 -1.200
13 0.037 37 0.56% 57.01 400.4001 0.125 -1.150


14 0.037 37 0% 57.01 400.4001 0.135 -1.103
15 0.037 37 0% 57.01 400.4001 0.145 -1.058
16 0.037 37 0.56% 57.01 400.4001 0.155 -1.015
17 0.037 37 0% 57.01 400.4001 0.165 -0.974
18 0.037 37 0.56% 57.01 400.4001 0.175 -0.935
19 0.037 37 0% 57.01 400.4001 0.185 -0.896
20 0.037 37 0.56% 57.01 400.4001 0.195 -0.860
21 0.037 37 0.56% 57.01 400.4001 0.205 -0.824
22 0.038 38 0% 57.01 361.3801 0.215 -0.789
23 0.044 44 0% 57.01 169.2601 0.225 -0.755
24 0.045 45 1.10% 57.01 144.2401 0.235 -0.722
25 0.047 47 0.56% 57.01 100.2001 0.245 -0.690
26 0.049 49 0.56% 57.01 64.1601 0.255 -0.659
27 0.049 49 0.56% 57.01 64.1601 0.265 -0.628
28 0.05 50 0.56% 57.01 49.1401 0.275 -0.598
29 0.051 51 0% 57.01 36.1201 0.285 -0.568
30 0.051 51 0% 57.01 36.1201 0.295 -0.539
31 0.051 51 0.56% 57.01 36.1201 0.305 -0.510
32 0.051 51 0.56% 57.01 36.1201 0.315 -0.482
33 0.051 51 0.56% 57.01 36.1201 0.325 -0.454
34 0.051 51 0% 57.01 36.1201 0.335 -0.426
35 0.051 51 0% 57.01 36.1201 0.345 -0.399
36 0.051 51 0% 57.01 36.1201 0.355 -0.372
37 0.051 51 0.56% 57.01 36.1201 0.365 -0.345
38 0.051 51 0% 57.01 36.1201 0.375 -0.319
39 0.051 51 0.56% 57.01 36.1201 0.385 -0.292
40 0.051 51 0% 57.01 36.1201 0.395 -0.266
41 0.051 51 0% 57.01 36.1201 0.405 -0.240
42 0.051 51 0.56% 57.01 36.1201 0.415 -0.215
43 0.051 51 0.56% 57.01 36.1201 0.425 -0.189
44 0.051 51 0% 57.01 36.1201 0.435 -0.164
45 0.051 51 0.56% 57.01 36.1201 0.445 -0.138
46 0.051 51 0% 57.01 36.1201 0.455 -0.113
47 0.051 51 0.56% 57.01 36.1201 0.465 -0.088
48 0.051 51 0.56% 57.01 36.1201 0.475 -0.063
49 0.052 52 1.10% 57.01 25.1001 0.485 -0.038
50 0.052 52 0% 57.01 25.1001 0.495 -0.013
51 0.053 53 0.56% 57.01 16.0801 0.505 0.013
52 0.053 53 0% 57.01 16.0801 0.515 0.038
53 0.054 54 0.56% 57.01 9.0601 0.525 0.063
54 0.054 54 0.56% 57.01 9.0601 0.535 0.088
55 0.054 54 0.56% 57.01 9.0601 0.545 0.113
56 0.054 54 0.56% 57.01 9.0601 0.555 0.138
57 0.054 54 0% 57.01 9.0601 0.565 0.164
58 0.054 54 0% 57.01 9.0601 0.575 0.189


59 0.054 54 0.56% 57.01 9.0601 0.585 0.215
60 0.054 54 0.56% 57.01 9.0601 0.595 0.240
61 0.054 54 0% 57.01 9.0601 0.605 0.266
62 0.054 54 0% 57.01 9.0601 0.615 0.292
63 0.054 54 0% 57.01 9.0601 0.625 0.319
64 0.054 54 0.56% 57.01 9.0601 0.635 0.345
65 0.054 54 0.56% 57.01 9.0601 0.645 0.372
66 0.054 54 0.56% 57.01 9.0601 0.655 0.399
67 0.054 54 0.56% 57.01 9.0601 0.665 0.426
68 0.054 54 0.56% 57.01 9.0601 0.675 0.454
69 0.054 54 0.56% 57.01 9.0601 0.685 0.482
70 0.054 54 0.56% 57.01 9.0601 0.695 0.510
71 0.054 54 0% 57.01 9.0601 0.705 0.539
72 0.054 54 0.56% 57.01 9.0601 0.715 0.568
73 0.057 57 0.56% 57.01 1E-04 0.725 0.598
74 0.057 57 0% 57.01 1E-04 0.735 0.628
75 0.058 58 0% 57.01 0.9801 0.745 0.659
76 0.059 59 0% 57.01 3.9601 0.755 0.690
77 0.06 60 0.56% 57.01 8.9401 0.765 0.722
78 0.061 61 0.56% 57.01 15.9201 0.775 0.755
79 0.062 62 0% 57.01 24.9001 0.785 0.789
80 0.062 62 0.56% 57.01 24.9001 0.795 0.824
81 0.063 63 0% 57.01 35.8801 0.805 0.860
82 0.066 66 0.56% 57.01 80.8201 0.815 0.896
83 0.067 67 0.56% 57.01 99.8001 0.825 0.935
84 0.069 69 0% 57.01 143.7601 0.835 0.974
85 0.069 69 0% 57.01 143.7601 0.845 1.015
86 0.07 70 0% 57.01 168.7401 0.855 1.058
87 0.07 70 0% 57.01 168.7401 0.865 1.103
88 0.071 71 0.56% 57.01 195.7201 0.875 1.150
89 0.075 75 0.56% 57.01 323.6401 0.885 1.200
90 0.079 79 0.56% 57.01 483.5601 0.895 1.254
91 0.079 79 0% 57.01 483.5601 0.905 1.311
92 0.08 80 0% 57.01 528.5401 0.915 1.372
93 0.09 90 0% 57.01 1088.3401 0.925 1.440
94 0.098 98 0% 57.01 1680.1801 0.935 1.514
95 0.105 105 1.10% 57.01 2303.0401 0.945 1.598
96 0.116 116 0.56% 57.01 3479.8201 0.955 1.695
97 0.117 117 0.56% 57.01 3598.8001 0.965 1.812
98 0.138 138 1.10% 57.01 6559.3801 0.975 1.960
99 0.146 146 0.56% 57.01 7919.2201 0.985 2.170
100 0.148 148 0% 57.01 8279.1801 0.995 2.576
loss mean 0.003408
Annex Table 22 - SR network events jitter confidence interval

Sum (Xi-X)²   48190.99
Variance      486.7777
Smp Std Dev   22.06304
conf Level    0.95
α             0.05
DF            99
tα            1.984217
Err Margin    4.377786
ConfInt high  61.38779
ConfInt low   52.63221

[Annex 39 - SR network event jitter Quantile-Quantile plot: figure omitted (jitter 0-200 µs vs. standard normal quantiles, -3 to 3)]
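The last two columns of the jitter tables are the plotting positions used for these Quantile-Quantile plots: sorted sample i out of n is paired with the standard normal quantile Φ⁻¹((i - 0.5)/n). A minimal sketch (again assuming NumPy and SciPy, which are not part of the thesis toolchain):

```python
import numpy as np
from scipy import stats

def qq_points(samples):
    """Pair each sorted sample with its standard normal quantile."""
    x = np.sort(np.asarray(samples, dtype=float))
    n = x.size
    p = (np.arange(1, n + 1) - 0.5) / n   # the P((i-0.5)/n) column
    z = stats.norm.ppf(p)                 # the Z Distrb column
    return z, x                           # plot x against z

# For n = 100: i = 1 gives p = 0.005 and z ≈ -2.576, as in the tables.
```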

Annex Table 23 - PF network event jitter measurements


Proactive Forwarding

Samples (i) Jitter (ms) Jitter (μs) Pkt loss % Sample mean (µs) (Xi-X)² P((i-0.5)/n) Z Distrb
1 0.025 25 0% 65.09 1607.2081 0.005 -2.576
2 0.028 28 0% 65.09 1375.6681 0.015 -2.170
3 0.029 29 0.56% 65.09 1302.4881 0.025 -1.960
4 0.03 30 0% 65.09 1231.3081 0.035 -1.812
5 0.032 32 0% 65.09 1094.9481 0.045 -1.695
6 0.034 34 0.56% 65.09 966.5881 0.055 -1.598
7 0.036 36 0% 65.09 846.2281 0.065 -1.514
8 0.036 36 0% 65.09 846.2281 0.075 -1.440
9 0.037 37 0.56% 65.09 789.0481 0.085 -1.372
10 0.038 38 0% 65.09 733.8681 0.095 -1.311
11 0.039 39 0.56% 65.09 680.6881 0.105 -1.254
12 0.04 40 0% 65.09 629.5081 0.115 -1.200
13 0.04 40 0% 65.09 629.5081 0.125 -1.150
14 0.041 41 0% 65.09 580.3281 0.135 -1.103
15 0.041 41 0% 65.09 580.3281 0.145 -1.058
16 0.043 43 0.44% 65.09 487.9681 0.155 -1.015
17 0.043 43 0% 65.09 487.9681 0.165 -0.974
18 0.044 44 0% 65.09 444.7881 0.175 -0.935
19 0.044 44 0% 65.09 444.7881 0.185 -0.896
20 0.045 45 0% 65.09 403.6081 0.195 -0.860
21 0.045 45 0% 65.09 403.6081 0.205 -0.824
22 0.046 46 0% 65.09 364.4281 0.215 -0.789
23 0.046 46 0% 65.09 364.4281 0.225 -0.755
24 0.046 46 0% 65.09 364.4281 0.235 -0.722
25 0.048 48 0% 65.09 292.0681 0.245 -0.690
26 0.048 48 0.56% 65.09 292.0681 0.255 -0.659


27 0.048 48 0.56% 65.09 292.0681 0.265 -0.628
28 0.048 48 0.56% 65.09 292.0681 0.275 -0.598
29 0.049 49 0% 65.09 258.8881 0.285 -0.568
30 0.049 49 0.56% 65.09 258.8881 0.295 -0.539
31 0.049 49 0.56% 65.09 258.8881 0.305 -0.510
32 0.049 49 0% 65.09 258.8881 0.315 -0.482
33 0.05 50 0.56% 65.09 227.7081 0.325 -0.454
34 0.051 51 0% 65.09 198.5281 0.335 -0.426
35 0.052 52 0.56% 65.09 171.3481 0.345 -0.399
36 0.053 53 0.44% 65.09 146.1681 0.355 -0.372
37 0.053 53 0% 65.09 146.1681 0.365 -0.345
38 0.053 53 0% 65.09 146.1681 0.375 -0.319
39 0.053 53 0% 65.09 146.1681 0.385 -0.292
40 0.055 55 0.56% 65.09 101.8081 0.395 -0.266
41 0.055 55 0% 65.09 101.8081 0.405 -0.240
42 0.055 55 0.56% 65.09 101.8081 0.415 -0.215
43 0.056 56 0% 65.09 82.6281 0.425 -0.189
44 0.056 56 0% 65.09 82.6281 0.435 -0.164
45 0.057 57 0% 65.09 65.4481 0.445 -0.138
46 0.057 57 0% 65.09 65.4481 0.455 -0.113
47 0.058 58 0% 65.09 50.2681 0.465 -0.088
48 0.058 58 0.56% 65.09 50.2681 0.475 -0.063
49 0.059 59 0% 65.09 37.0881 0.485 -0.038
50 0.059 59 0% 65.09 37.0881 0.495 -0.013
51 0.059 59 0% 65.09 37.0881 0.505 0.013
52 0.059 59 0% 65.09 37.0881 0.515 0.038
53 0.06 60 0% 65.09 25.9081 0.525 0.063
54 0.061 61 0% 65.09 16.7281 0.535 0.088
55 0.061 61 0% 65.09 16.7281 0.545 0.113
56 0.062 62 0.56% 65.09 9.5481 0.555 0.138
57 0.062 62 0% 65.09 9.5481 0.565 0.164
58 0.062 62 0.56% 65.09 9.5481 0.575 0.189
59 0.062 62 0% 65.09 9.5481 0.585 0.215
60 0.064 64 0% 65.09 1.1881 0.595 0.240
61 0.064 64 0% 65.09 1.1881 0.605 0.266
62 0.064 64 0% 65.09 1.1881 0.615 0.292
63 0.064 64 0% 65.09 1.1881 0.625 0.319
64 0.065 65 0% 65.09 0.0081 0.635 0.345
65 0.066 66 0% 65.09 0.8281 0.645 0.372
66 0.067 67 0% 65.09 3.6481 0.655 0.399
67 0.067 67 0.56% 65.09 3.6481 0.665 0.426
68 0.067 67 0% 65.09 3.6481 0.675 0.454
69 0.068 68 0% 65.09 8.4681 0.685 0.482
70 0.069 69 0% 65.09 15.2881 0.695 0.510
71 0.069 69 0% 65.09 15.2881 0.705 0.539


72 0.069 69 0% 65.09 15.2881 0.715 0.568
73 0.07 70 0% 65.09 24.1081 0.725 0.598
74 0.071 71 0% 65.09 34.9281 0.735 0.628
75 0.072 72 0% 65.09 47.7481 0.745 0.659
76 0.072 72 0.56% 65.09 47.7481 0.755 0.690
77 0.072 72 0% 65.09 47.7481 0.765 0.722
78 0.074 74 0% 65.09 79.3881 0.775 0.755
79 0.077 77 0% 65.09 141.8481 0.785 0.789
80 0.079 79 0% 65.09 193.4881 0.795 0.824
81 0.079 79 0% 65.09 193.4881 0.805 0.860
82 0.08 80 0% 65.09 222.3081 0.815 0.896
83 0.084 84 0% 65.09 357.5881 0.825 0.935
84 0.086 86 0% 65.09 437.2281 0.835 0.974
85 0.088 88 0% 65.09 524.8681 0.845 1.015
86 0.089 89 0% 65.09 571.6881 0.855 1.058
87 0.089 89 0% 65.09 571.6881 0.865 1.103
88 0.093 93 0% 65.09 778.9681 0.875 1.150
89 0.096 96 0% 65.09 955.4281 0.885 1.200
90 0.097 97 0% 65.09 1018.2481 0.895 1.254
91 0.097 97 0% 65.09 1018.2481 0.905 1.311
92 0.098 98 0% 65.09 1083.0681 0.915 1.372
93 0.1 100 0% 65.09 1218.7081 0.925 1.440
94 0.106 106 0% 65.09 1673.6281 0.935 1.514
95 0.127 127 0% 65.09 3832.8481 0.945 1.598
96 0.13 130 0% 65.09 4213.3081 0.955 1.695
97 0.141 141 0.56% 65.09 5762.3281 0.965 1.812
98 0.145 145 0% 65.09 6385.6081 0.975 1.960
99 0.153 153 0.56% 65.09 7728.1681 0.985 2.170
100 0.227 227 0% 65.09 26214.8481 0.995 2.576
loss mean 0.001208
Annex Table 24 - PF network event jitter confidence interval

Sum (Xi-X)²   87444.19
Variance      883.2746
Smp Std Dev   29.71994
conf Level    0.95
α             0.05
DF            99
tα            1.984217
Err Margin    5.89708
ConfInt high  70.98708
ConfInt low   59.19292

[Annex 40 - PF network event jitter Quantile-Quantile plot: figure omitted (jitter 0-250 µs vs. standard normal quantiles, -3 to 3)]

V. Balanced path load

Annex Table 25 - balanced path load complete measurements


Intent Forwarding
Sample means: Proc Time X = 8758.96552 ms; Resp Time X = 9035.32265 ms

(i)  PortStatus msg (s)  1st FlowMod (s)  Last FlowMod (s)  Proc Time Xi (ms)  Proc (Xi-X)²  Resp Time Xi (ms)  Resp (Xi-X)²
1 68.64359 68.688048 68.727009 44.458 75942641.32 83.419 80136578.96
2 158.657913 158.685314 158.766068 27.401 76240218.97 108.155 79694322.25
3 119.769416 119.794286 119.880099 24.87 76284424.55 110.683 79649192.88
4 23.86486 23.898269 23.979446 33.409 76135336.58 114.586 79579542.38
5 88.811707 88.856717 88.931849 45.01 75933020.8 120.142 79480446.02
6 40.807778 40.850641 40.931166 42.863 75970443.14 123.388 79422579.21
7 30.964458 30.989351 31.098811 24.893 76284022.78 134.353 79227260.71
8 181.599137 181.63955 181.735917 40.413 76013158.04 136.78 79184061.29
9 171.5137 171.540548 171.653621 26.848 76249876.38 139.921 79128170.51
10 53.901922 53.945499 54.046684 43.577 75957997.05 144.762 79042068.67
11 42.763704 42.802531 42.908684 38.827 76040815.81 144.98 79038192.43
12 101.213456 101.241118 101.384124 27.662 76235661.16 170.668 78582102.06
13 121.288023 121.324124 121.461147 36.101 76088365.43 173.124 78538564.91
14 38.007782 38.04863 38.183524 40.848 76005573.09 175.742 78492169.29
15 136.442418 136.470561 136.625154 28.143 76227261.88 182.736 78368290.4
16 36.505008 36.546904 36.688602 41.896 75987301.02 183.594 78353100.09
17 70.433778 70.463094 70.626327 29.316 76206780.74 192.549 78194645.83
18 153.379498 153.408534 153.593015 29.036 76211669.42 213.517 77824254.93
19 59.24318 59.286863 59.462717 43.683 75956149.4 219.537 77718076.63
20 119.96259 119.993483 120.189615 30.893 76179249.91 227.025 77586107.49
21 354.349303 363.727901 363.894115 9378.598 383944.4103 9544.812 259579.3978
22 708.997384 718.42481 718.598586 9427.426 446839.4133 9601.202 320219.4388
23 461.916137 471.424017 471.564331 9507.88 560872.8984 9648.194 375611.2917
24 957.682249 967.224864 967.339832 9542.615 614106.5075 9657.583 387207.9432
25 1131.38759 1140.924843 1141.099834 9537.253 605731.4015 9712.244 458222.5141
26 604.13754 613.826524 613.968875 9688.984 864934.3731 9831.335 633635.6614
27 771.589876 781.325782 781.479004 9735.906 954412.7015 9889.128 728983.5757
28 396.351194 406.128756 406.247353 9777.562 1037538.789 9896.159 741039.2215
29 287.825655 297.638359 297.744242 9812.704 1110364.784 9918.587 780155.912
30 272.432052 282.22803 282.397017 9795.978 1075394.884 9964.965 864234.8989
31 812.640924 822.527246 822.659338 9886.322 1270932.633 10018.414 966468.6024
32 1041.965926 1051.923515 1052.074539 9957.589 1436698.247 10108.613 1151952.175
33 619.556992 629.523143 629.666079 9966.151 1457296.783 10109.087 1152969.879
34 796.243102 806.223143 806.357133 9980.041 1491025.328 10114.031 1163611.704
35 725.624884 735.726224 735.849448 10101.34 1801969.245 10224.564 1414294.989
36 284.347118 294.52326 294.618445 10176.142 2008389.175 10271.327 1527706.753
37 287.642109 297.824216 297.964164 10182.107 2025331.672 10322.055 1655680.141
38 148.104126 158.324241 158.468039 10220.115 2134957.803 10363.913 1765152.318
39 1077.060438 1087.330622 1087.458985 10270.184 2283781.294 10398.547 1858380.628
40 684.101578 694.429239 694.560458 10327.661 2460805.509 10458.88 2026515.529
41 640.607334 651.026034 651.114843 10418.7 2754718.544 10507.509 2167332.649
42 361.568578 372.027541 372.101341 10458.963 2889991.432 10532.763 2242327.602
43 714.948148 725.426579 725.501035 10478.431 2956561.537 10552.887 2303001.556
44 1065.233192 1075.725451 1075.868479 10492.259 3004306.288 10635.287 2559885.921
45 1554.9188 1565.525571 1565.598738 10606.771 3414385.092 10679.938 2704759.649
46 1638.570819 1649.225253 1649.301284 10654.434 3592800.759 10730.465 2873507.587
47 1421.496148 1432.226164 1432.30775 10730.016 3885039.995 10811.602 3155168.329
48 1337.782676 1348.528024 1348.612993 10745.348 3945715.357 10830.317 3222004.717
49 1346.277327 1357.027027 1357.147393 10749.7 3963023.77 10870.066 3366283.16
50 104.225654 114.927636 115.116889 10701.982 3775313.042 10891.235 3444410.651
51 1715.772983 1726.726364 1726.82296 10953.381 4815459.299 11049.977 4058832.15
52 426.325454 437.227976 437.387801 10902.522 4594834.383 11062.347 4108827.715
53 193.72523 204.625735 204.808483 10900.505 4586191.344 11083.253 4194018.718
54 463.320162 474.329766 474.411771 11009.604 5065373.568 11091.609 4228313.553
55 878.712796 889.72464 889.832593 11011.844 5075461.446 11119.797 4345033.316
56 532.543529 543.526678 543.672805 10983.149 4946992.153 11129.276 4384640.632
57 705.303287 716.321973 716.448637 11018.686 5106336.648 11145.35 4452215.418
58 202.865339 213.926779 214.012471 11061.44 5301388.731 11147.132 4459738.731
59 543.851939 554.928888 555.041176 11076.949 5373047.414 11189.237 4639347.027
60 190.660545 201.727523 201.871738 11066.978 5326921.608 11211.193 4734411.78
61 1256.015856 1267.127369 1267.27112 11111.513 5534479.646 11255.264 4928139.597
62 379.451573 390.626881 390.72786 11175.308 5838710.981 11276.287 5021921.218
63 1337.477294 1348.626951 1348.753941 11149.657 5715405.753 11276.647 5023534.842
64 527.503052 538.627724 538.781069 11124.672 5596567.15 11278.017 5029677.948
65 1005.524558 1016.828199 1016.919116 11303.641 6475373.299 11394.558 5565991.437
66 781.535411 792.828165 792.971049 11292.754 6420084.061 11435.638 5761513.779
67 193.301909 196.222659 204.791965 2920.75 34084760.46 11490.056 6025715.82
68 604.769216 616.126295 616.296114 11357.079 6750193.655 11526.898 6207947.725
69 518.71729 530.127486 530.282988 11410.196 7029023.058 11565.698 6402799.412
70 946.536509 958.026882 958.18837 11490.373 7460586.822 11651.861 6846272.937
71 437.968118 449.527422 449.671887 11559.304 7841895.603 11703.769 7120605.923
72 803.692645 815.329306 815.409584 11636.661 8281131.276 11716.939 7191066.249
73 515.438167 527.028784 527.164702 11590.617 8018250.104 11726.535 7242623.913
74 858.679032 870.326259 870.474904 11647.227 8342054.377 11795.872 7620632.714
75 1159.861012 1171.023498 1171.715428 11162.486 5776910.698 11854.416 7947287.316
76 244.804953 256.54053 256.68207 11735.577 8860215.903 11877.117 8075795.128
77 763.238339 775.025602 775.140181 11787.263 9170585.627 11901.842 8216933.184
78 682.647373 694.428368 694.553126 11780.995 9132662.178 11905.753 8239370.394
79 449.368807 461.129315 461.29932 11760.508 9009257.259 11930.513 8382127.163
80 911.986633 923.829059 923.92671 11842.426 9507728.532 11940.077 8437597.834
81 307.478389 319.331017 319.431839 11852.628 9570747.54 11953.45 8515467.231
82 1004.12017 1016.026916 1016.15681 11906.746 9908521.95 12036.64 9007905.835
83 1082.358923 1094.326205 1094.39687 11967.282 10293294.64 12037.947 9015752.987
84 940.86169 952.726302 952.90347 11864.612 9645040.059 12041.78 9038785.797
85 865.422603 877.331653 877.466961 11909.05 9923032.231 12044.358 9054293.738
86 259.399519 271.335421 271.468061 11935.902 10092925.4 12068.542 9200419.625
87 203.010929 214.9255 215.086603 11914.571 9957845.945 12075.674 9243736.331
88 363.23873 375.125728 375.324028 11886.998 9784587.196 12085.298 9302349.636
89 357.169117 369.12623 369.265583 11957.113 10228147.3 12096.466 9370598.609
90 938.952545 950.929795 951.061041 11977.25 10357354.99 12108.496 9444394.439
91 363.161895 375.128807 375.271129 11966.912 10290920.62 12109.234 9448930.988
92 1249.659617 1261.725686 1261.801196 12066.069 10936933.43 12141.579 9648828.512
93 409.701165 421.821937 421.939701 12120.772 11301742.81 12238.536 10260575.77
94 1175.27468 1187.326517 1187.523353 12051.837 10843002.58 12248.673 10325620.47
95 542.864805 555.128239 555.247702 12263.434 12281299.33 12382.897 11206254.03
96 282.230745 294.528182 294.66786 12297.437 12520780.41 12437.115 11572191.19
97 871.395497 883.721024 883.841697 12325.527 12720360.79 12446.2 11634084.3
98 1575.003455 1587.324622 1587.449804 12321.167 12689279.38 12446.349 11635100.76
99 592.253773 604.626785 604.756723 12373.012 13061331.96 12502.95 12024439.44
100 378.520468 390.625327 396.622251 12104.859 11195003.18 18101.783 82200703.28

[Annex 41 - Balanced path load complete samples Quantile-Quantile plot: figure omitted (response times, y-axis -5000 to 20000 ms, vs. standard normal quantiles, -3 to 3)]

Notice that the figure shows two separate behaviors, each with a normal-distribution tendency. Out of the 100 measured samples, 20 exhibit a more realistic response time; these samples are considered to correspond to the expected operation of Proactive Forwarding, rather than to an instability of the ONOS version.
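As an illustration, the two populations can be separated with a simple threshold on the response time. A minimal sketch, assuming the samples are available as a list in milliseconds; the 1000 ms cutoff is illustrative, since the fast cluster sits below ~300 ms and the slow one around 9,000-18,000 ms, so any value in between works:

```python
# Split the bimodal response-time samples into the two observed clusters.
CUTOFF_MS = 1000.0  # illustrative threshold between the two behaviors

def split_clusters(resp_times_ms):
    fast = [x for x in resp_times_ms if x < CUTOFF_MS]   # expected PF operation
    slow = [x for x in resp_times_ms if x >= CUTOFF_MS]  # anomalous samples
    return fast, slow
```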
Annex Table 26 - Balanced path 20 samples measurement


Intent Forwarding
Sample means: Proc Time X = 35.00735 ms; Resp Time X = 154.98305 ms

(i)  PortStatus msg (s)  1st FlowMod (s)  Last FlowMod (s)  Proc Time Xi (ms)  Proc (Xi-X)²  Resp Time Xi (ms)  Resp (Xi-X)²
1 68.64359 68.688048 68.727009 44.458 89.31478542 83.419 5121.413252
2 158.657913 158.685314 158.766068 27.401 57.85656032 108.155 2192.866267
3 119.769416 119.794286 119.880099 24.87 102.765865 110.683 1962.49443
4 23.86486 23.898269 23.979446 33.409 2.554722723 114.586 1631.921649
5 88.811707 88.856717 88.931849 45.01 100.053007 120.142 1213.898765
6 40.807778 40.850641 40.931166 42.863 61.71123692 123.388 998.2471845
7 30.964458 30.989351 31.098811 24.893 102.3000759 134.353 425.598963
8 181.599137 181.63955 181.735917 40.413 29.22105192 136.78 331.3510293
9 171.5137 171.540548 171.653621 26.848 66.57499242 139.921 226.8653502
10 53.901922 53.945499 54.046684 43.577 73.43890112 144.762 104.4698631
11 42.763704 42.802531 42.908684 38.827 14.58972612 144.98 100.0610093
12 101.213456 101.241118 101.384124 27.662 53.95416662 170.668 246.0176565
13 121.288023 121.324124 121.461147 36.101 1.196070323 173.124 329.0940669
14 38.007782 38.04863 38.183524 40.848 34.11319242 175.742 430.9340051
15 136.442418 136.470561 136.625154 28.143 47.11930092 182.736 770.2262337
16 36.505008 36.546904 36.688602 41.896 47.45349882 183.594 818.5864599
17 70.433778 70.463094 70.626327 29.316 32.39146482 192.549 1411.200599
18 153.379498 153.408534 153.593015 29.036 35.65702082 213.517 3426.223303
19 59.24318 59.286863 59.462717 43.683 75.26690292 219.537 4167.212461
20 119.96259 119.993483 120.189615 30.893 16.92787592 227.025 5190.04256
Annex Table 27 - 20 sample based response confidence interval

Sum (Xi-X)²   31098.72511
Variance      1636.775006
Smp Std Dev   40.45707609
conf Level    0.95
α             0.05
DF            19
tα            2.093024054
Err Margin    18.93449445
Resp. Time (ms):
ConfInt high  173.9175445
ConfInt low   136.0485555

Annex Table 28 - 20 sample based processing confidence interval

Sum (Xi-X)²   1044.460419
Variance      54.97160098
Smp Std Dev   7.414283578
conf Level    0.95
α             0.05
DF            19
tα            2.093024054
Err Margin    3.469991528
Proc. Time (ms):
ConfInt high  38.47734153
ConfInt low   31.53735847

[Annex 42 - 20 sample based Quantile-Quantile plot: figure omitted (balanced path load, 20 PF samples, 0-250 ms vs. standard normal quantiles, -2.5 to 2)]


APPENDIX I: JITEL 2015 ARTICLE


Influence of the path establishment method on an
SDN controller time response
David José Quiroz Martiña, Cristina Cervelló-Pastor
Department of Network Engineering,
Universitat Politècnica de Catalunya,
Esteve Terradas, 7, 08860, Castelldefels, Spain.
david.quiroz@estudiant.upc.edu, cristina@entel.upc.edu

Abstract—This paper aims at the assessment of a layer 3 path establishment method, to explore the time response added to a Software Defined Networking (SDN) controller during network events and some additional operations, measured against an SDN path methodology used as a reference. The experience learned in this experimentation is presented as an alternative to monitor the processing time that new path establishment methods can bring to an SDN environment.

Index Terms—Software Defined Networking (SDN), controllers, control layer.

I. INTRODUCTION

Software Defined Networking (SDN) has acquired great acceptance in the networking community. Thus, several developments and advances have been made to introduce new functions and applications that can give more maturity and specialized operations to the system. Some developments include the adaptation of layer 3 (L3) protocols, such as IGP and BGP, to support routing methods in the SDN environment. Since the principle of SDN is a centralized control of the network, every operation triggered by these protocols implies processing introduced to the controller and a response time that may or may not have an impact on the network.

Keeping track of the response time during the adaptation of a new path method to SDN can help developers set up a margin, to fulfill a balance between the time required to address the processing needs of the path method and maintaining a low response time from the controller.

One example of an L3 path establishment method that is being adapted to SDN is Segment Routing. This method constructs an end-to-end path based on a sequence of nodes and port codes called segments [1]. On the first node, a packet is guided by this sequence of segments. Once the packet passes through each segment, its related segment id (SID) is pulled out of the sequence. This behaviour is maintained until the packet arrives at the edge node that connects to the destination.

This article conducts a series of measurements that help to determine the approximate response time that Segment Routing can introduce into an SDN controller during normal operations and during network events. The response times are compared with the performance of the Proactive Forwarding method. This mechanism is an L2 path establishment procedure, traditionally used in SDN to construct paths according to traffic destination addresses, and set up by flows installed on all switches relevant to the path. The response times of this method are used as a reference to visualize the additional time response introduced by Segment Routing with respect to the controller's default state, where Proactive Forwarding is supported.

Both path methods are established by the controller into the network using the OpenFlow protocol as their interface between the control plane and data plane. The actions triggered by both methods are exchanged between the controller and the network devices through OpenFlow messages.

OpenFlow has been the first protocol used in SDN as the communication interface between the control plane and the forwarding plane, found in network devices such as switches and routers. It is widely used in most of the current SDN controllers to address the dynamic control of network resources according to the needs of today's applications [2]. It uses TCP sessions to carry out instructions given by the SDN controller, and programs the flow tables found in different network devices, to set an OpenFlow pipeline where the forwarding process is carried out by entries found in the flow tables.

This paper will show that, according to the method applied, the actions triggered by both path methods determine how the OpenFlow pipeline is used on the network devices during packet forwarding.

The remainder of this paper is organized as follows. Section II describes the network scenario, including the test scenarios and the elements specifications. Section III illustrates the testing process of each path mechanism and their results. Section IV is devoted to the analysis of the experimental results obtained in the previous section. Finally, Section V presents the conclusions and future work.
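As a toy illustration of the segment-list behaviour just described (a sketch for intuition only, not the implementation evaluated in this work), each segment that the packet traverses pops its own segment id from the head of the list:

```python
def forward_by_segments(segment_list):
    """Toy model: a packet guided by an ordered list of segment IDs.
    Each traversed segment removes (pops) its SID from the sequence;
    the packet is delivered once the list is exhausted."""
    segs = list(segment_list)   # remaining sequence carried by the packet
    visited = []
    while segs:
        sid = segs.pop(0)       # SID of the segment just traversed
        visited.append(sid)     # stand-in for forwarding across it
    return visited              # path followed to the edge node

# Example with three hypothetical SIDs:
# forward_by_segments([101, 104, 106]) -> [101, 104, 106]
```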
traffic is transmitted from one host to the other, using traffic
SDN Controller generators that allows to send traffic in different bitrates using
UDP datagrams.
Interface em1
Traffic is measured in five main bandwidths: 100 Kbps,
1Mbps, 10Mbps, 100 Mbps, and 1 Gbps, using UDP data-
S2 S3 Eth2
S4 S5 grams of 1400 Bytes. The results to observe are the receiving
Eth2 Eth2
Eth1 Eth1 Eth1 Eth2
Eth1 Eth3 Eth3 bitrate and packet loss percentage. Using the more stable
S1 S6
Eth1
Eth2 Eth2 Eth1 bandwidth (0% packet loss and end to end sustained bitrate),
Eth3 Eth3
the round trip time and jitter are also measured.
S10 S9 Eth3 S8 Eth3 S7 Eth1
Eth2 Eth1
Eth2
Eth1
Eth2
Eth1
Eth2
2) The Response time: This time is measured under net-
h1 h2
work event conditions, where a constant rate traffic is gener-
ated from one host to the other, and a network event such as
link failures occur during transmission. The link failures are
Fig. 1. Test Network Topology. emulated by shutting down specific links of the network in the
Mininet console while the traffic is being forwarded through
those links.
the switches. The transmission queues of these switches have
The Wireshark tool is used to measure the time that takes
a default length of 1000, and their bandwidth are dependent
the controller to reroute the paths and redirect the traffic.
of the limitations of the virtual switch. The links disposition
OpenFlow messages exchange between the SDN controller
allows the SDN controller for the computation of 8 possible
and the switches are captured based on [3]. The time frame
paths between hosts h1 and h2 as listed below.
is measured between the instant where the network event
1) S1, S2, S3, S4, S5, S6 is detected by the SDN controller (OFPT PORT STATUS
2) S1, S10, S9, S8, S7, S6 message), and the last flow message (OFPT FLOW MOD)
3) S1, S2, S3, S9, S8, S7, S6 sent by the SDN controller. This is the time period in which
4) S1, S2, S3, S4, S8, S7, S6 the controller starts and finalize the process of path redirection,
5) S1, S10, S9, S3, S4, S5, S6 and hence, the response time in front of network events. The
6) S1, S10, S9, S8, S4, S5, S6 processing time of the switches to detect the failure, and send
7) S1, S2, S3, S9, S8, S4, S5, S6 the port status messages are not taken into account. Thus, the
8) S1, S10, S9, S3, S4, S8, S7, S6 measurement only focuses on the performance of the SDN
The elements involved in building this scenario comprise 2 physical computers, one hosting the SDN controller and the other hosting the virtual network topology, both meeting the minimum requirements to run the respective systems. The network topology is built using the Mininet software with the CPqD user-space switch running OpenFlow 1.3. The SDN controller is an Open Network Operating System (ONOS) controller configured to use the described path establishment methods [3].

A. Test scenarios

Four test scenarios were used to evaluate the performance of the Proactive Forwarding and Segment Routing path methods: 1) Network performance, 2) Response time in front of network events, 3) Static path installation time, and 4) Switch forwarding delay.
1) Network performance: It is measured under steady-state conditions, meaning that the network topology does not experience any network event that can change the network state during the test. Moreover, the test is performed after the network has stabilized its initial traffic forwarding, during which the controller is consulted for forwarding decisions and paths are installed in the process, adding delay to the initial traffic. The goal is to observe how the network topology behaves in terms of traffic forwarding without the influence of external factors, allowing the limits of the topology to be identified and ensuring proper testing with stable bitrate values. Traffic is transmitted from one host to the other, using traffic generators that allow sending traffic at different bitrates using UDP datagrams.

Traffic is measured at five main bandwidths: 100 Kbps, 1 Mbps, 10 Mbps, 100 Mbps, and 1 Gbps, using UDP datagrams of 1400 bytes. The results to observe are the received bitrate and the packet loss percentage. Using the most stable bandwidth (0% packet loss and a sustained end-to-end bitrate), the round-trip time and jitter are also measured.
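As an illustration of this procedure, a 1-minute UDP run at one of the listed rates with Iperf 2.0.5 (the generator used in Section III) could be launched as follows; the exact invocations are assumptions, with h2 (10.0.0.2) acting as the receiver:

    h2$ iperf -u -s -i 1
    h1$ iperf -u -c 10.0.0.2 -b 1M -l 1400 -t 60 -i 1

Here -b sets the target bitrate, -l the 1400-byte datagram size, and -i the 1-second report interval used for the measurements.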
2) Response time: This time is measured under network event conditions, where constant-rate traffic is generated from one host to the other, and a network event such as a link failure occurs during transmission. The link failures are emulated by shutting down specific links of the network in the Mininet console while the traffic is being forwarded through those links.

The Wireshark tool is used to measure the time that it takes the controller to reroute the paths and redirect the traffic. The OpenFlow message exchange between the SDN controller and the switches is captured based on [3]. The time frame is measured between the instant at which the network event is detected by the SDN controller (OFPT_PORT_STATUS message) and the last flow message (OFPT_FLOW_MOD) sent by the SDN controller. This is the time period in which the controller starts and finalizes the process of path redirection, and hence, the response time in front of network events. The processing time of the switches to detect the failure and send the port status messages is not taken into account. Thus, the measurement only focuses on the performance of the SDN controller reacting in front of network events. Throughout this test, the Wireshark tool is also used to constantly monitor the traffic coming through the network, in order to identify the correct link on which to emulate the failure (more used in Segment Routing than in Proactive Forwarding, where the traffic can be monitored through the controller's Graphical User Interface).

The Wireshark tool is installed on both machines (the computers hosting the virtual network and the SDN controller) to perform the described measurements. For the capture of OpenFlow messages, Wireshark probes the SDN controller's interface with the switches (interface em1 in Fig. 1). Moreover, for the capture of the UDP traffic transmitted by host h1, Wireshark probes the links S3-S4 and S8-S9 through the corresponding interfaces (S3-eth2 and S9-eth1). No matter what path is calculated by the controller, the traffic will pass through either one or both links in all cases.
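The following sketch, not the original tooling, shows how such a window can be extracted after the fact from a capture taken on interface em1, once a failure has been forced from the Mininet console (e.g., link s3 s4 down). It assumes Wireshark's OpenFlow 1.3 dissector, openflow_v4, in which message type 12 is OFPT_PORT_STATUS and type 14 is OFPT_FLOW_MOD, and that the capture is trimmed to a single failure event:

    #!/usr/bin/env python
    # Hypothetical post-processing sketch: failure-recovery window from a pcap.
    import subprocess

    def recovery_times(pcap):
        out = subprocess.check_output([
            'tshark', '-r', pcap,
            '-Y', 'openflow_v4.type == 12 || openflow_v4.type == 14',
            '-T', 'fields', '-e', 'frame.time_epoch', '-e', 'openflow_v4.type'])
        events = []
        for line in out.decode().splitlines():
            ts, mtype = line.split('\t')
            # A frame can carry several OpenFlow messages; take the first type.
            events.append((float(ts), int(mtype.split(',')[0])))
        t_event = next(ts for ts, mt in events if mt == 12)  # first PORT_STATUS
        flow_mods = [ts for ts, mt in events if mt == 14]
        return ((min(flow_mods) - t_event) * 1e3,   # processing time, ms
                (max(flow_mods) - t_event) * 1e3)   # overall response time, ms

    print('processing %.3f ms, response %.3f ms'
          % recovery_times('em1_failure.pcap'))

The two returned values correspond to the "Until First Flow Mod" and "Until Last Flow Mod" columns reported later in Tables II and V.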
3) Static path installation: Static paths are those that can be manually programmed from the Command Line Interface (CLI), and they are measured under steady-state conditions. The time between the command execution on the CLI and the last OFPT_FLOW_MOD message sent by the controller is the controller's time response to program and install a static path in the network. This is done in order to observe the effects of the controller response during a manual path installation.

The Wireshark tool probes the network in the same disposition as described in the previous section. The only difference is that on the computer hosting the SDN controller, Wireshark probes not only the interface connecting to the virtual network (em1), but also a secondary interface where it can capture instructions sent by a remote computer (wlan0), as illustrated in Fig. 2. This remote computer is used to initiate a remote CLI session with the controller and send the execution of the path installation. Wireshark can capture the execution timestamp and compare it against the subsequent OpenFlow message timestamps, allowing the time frame to be measured under a common time reference.

Fig. 2. Wireshark Probe Location.

4) Switch packet forwarding delay: This time is measured under steady-state conditions. The delay observed for an IP packet to be forwarded by a switch in the network corresponds to the time that the IP packet takes between entering an input switch port and being forwarded to an output switch port. This is done in order to observe the effects of how the path methods use the OpenFlow pipeline of the switches. In this case, the Wireshark tool is used on switch S2, chosen randomly for this test, to probe its interfaces S2-eth1 as the input port and S2-eth2 as the output port.
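A minimal post-processing sketch of this measurement (file names are illustrative, and matching datagrams by the IP identification field is an assumption) pairs the same UDP packets seen on the two ports and averages the timestamp differences:

    #!/usr/bin/env python
    # Hypothetical sketch: per-switch forwarding delay from two captures.
    import subprocess

    def timestamps(pcap):
        out = subprocess.check_output([
            'tshark', '-r', pcap, '-Y', 'udp',
            '-T', 'fields', '-e', 'ip.id', '-e', 'frame.time_epoch'])
        stamps = {}
        for line in out.decode().splitlines():
            ip_id, ts = line.split('\t')
            stamps.setdefault(ip_id, float(ts))   # keep first occurrence
        return stamps

    ingress = timestamps('s2-eth1.pcap')   # input port of S2
    egress = timestamps('s2-eth2.pcap')    # output port of S2
    delays = [(egress[i] - ingress[i]) * 1e6   # microseconds
              for i in ingress if i in egress]
    print('average forwarding delay: %.2f us' % (sum(delays) / len(delays)))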
B. Elements specifications

The specifications of the involved elements in the network scenario are the following:
• Computer hosting the SDN controller
  – 3.4 GHz Intel(R) Core(TM) i7-3770 processor
  – 16 GB RAM memory
  – 64-bit Ubuntu 14.04.01
• Computer hosting the network topology
  – 3.4 GHz Intel(R) Pentium(R) D dual-core processor
  – 1 GB RAM memory
  – 64-bit Ubuntu 14.04.01
• ONOS controller versions
  – Blackbird release 1.1.0rc2
  – Spring-Open
• Mininet release 2.2.0
• CPqD software switch release 1.3.0
  – User-space switch
  – OpenFlow 1.3
• Measurement tools
  – Wireshark version 1.10.6
  – Iperf version 2.0.5
III. EVALUATION OF THE PATH METHODS

Both the Proactive Forwarding and Segment Routing mechanisms have demonstrated their capacity to sustain traffic during network events and manual configuration of paths, despite their differences in path processing times. The following subsections illustrate the testing process of each path mechanism and its results.

A. Proactive Forwarding Testing

Using the network topology, Proactive Forwarding is applied using the intent forwarding application of the Blackbird version of the ONOS controller [4]. Since Proactive Forwarding is used as a layer 2 routing mechanism, the IP addresses of the virtual hosts (h1 and h2) are configured under the same subnet, as shown in Fig. 3.

Fig. 3. Proactive Forwarding Topology. (The ONOS controller listens at 10.60.1.1:6633; h1 is 10.0.0.1/24 and h2 is 10.0.0.2/24.)

For the network performance test, Iperf was used to generate traffic comprised of UDP datagrams within IP packets, using a constant bitrate and measuring in intervals of 1 second during a 1-minute test. The average results observed in the test are illustrated in Table I.

From 100 Mbps upward, huge packet loss is detected on the network, and at the rates of 1 Mbps and 10 Mbps, despite not presenting packet losses, the bandwidth is not maintained throughout the network, which means that the subsequent tests have to be made at rates under 1 Mbps. This is a limitation introduced by the virtual switch: by not working in kernel space, it cannot take full advantage of the hardware to sustain higher bandwidths.

Using the rate of 100 Kbps, the values of jitter and round-trip time (RTT) are measured to keep track of the initial conditions of the network before the subsequent tests. The average round-trip time observed was 1.54 ms, with an average jitter of 0.043 ms.
TABLE I
NETWORK PERFORMANCE USING PROACTIVE FORWARDING.

Bitrate     Packet Loss   Received Bitrate
100 Kbps    0%            100 Kbps
1 Mbps      0%            412 Kbps
10 Mbps     0%            2.24 Mbps
100 Mbps    78%           3.31 Mbps
1 Gbps      96%           2.89 Mbps

TABLE III
PATH INSTALLATION TIME USING PROACTIVE FORWARDING.

Description   Until Last Flow Mod
Minimum       27.125 ms
Maximum       77.752 ms
Average       52.29 ms
Sample σ      13.679 ms
95% CI        49.57 ms - 55 ms

For the response time testing, Iperf generates constant-rate traffic, measuring in intervals of 1 second during a time frame of 20 seconds in which the link failure is emulated. A total of 100 measurements were made for the network response, executing failures in specific links through which the traffic was passing. Since the path computation is dynamic, the SDN controller did not always compute the same initial path on the network, but in most cases it resolved the shortest paths (1 or 2, as described in Section II). The emulated failures were on links S3-S4 and S8-S9, since the traffic will always pass through either one or both of these links in every possible path. The controller always recalculates the shortest path after a failure, which in most cases was also path 1 or 2 described in Section II. Table II shows the minimum, maximum, and average time of response that the controller took to process the path redirection and to completely install it into the network. It also shows the sample standard deviation (σ) and the 95% Confidence Interval (CI).

A time frame is measured between the first port status message received by the controller and the first OFPT_FLOW_MOD message sent by the controller to the switches. This is the approximated time that the controller takes to process the information received about the change that occurred in the network topology [3]. The time between the first port status message and the last OFPT_FLOW_MOD message is the overall time that the controller takes to process the topology change information and completely install all the necessary flows to redirect the traffic to the new path [3]. In all the measurements made, there was a minimal traffic interruption observed (0.0012% average packet loss) between hosts h1 and h2, with a 95% confidence that the mean jitter remains in the range between 0.0592 ms and 0.0709 ms.

In other observations made during the Proactive Forwarding test, the paths created were not always multiple paths. In other words, the path computed by the controller in many cases did not take advantage of other possible paths available in the network to load balance the traffic among the links.

TABLE II
RESPONSE TIME USING PROACTIVE FORWARDING.

Description   Until First Flow Mod    Until Last Flow Mod
Minimum       22.513 ms               28.554 ms
Maximum       55.978 ms               64.292 ms
Average       25.226 ms               40.753 ms
Sample σ      4.89 ms                 6.178 ms
95% CI        24.25 ms - 26.2 ms      39.52 ms - 41.97 ms
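These intervals are consistent with the standard normal approximation for a 95% confidence interval over the n = 100 samples, CI = mean ± 1.96 σ/√n: for example, for the Until First Flow Mod column, 25.226 ± 1.96 · 4.89/√100 ≈ 25.23 ± 0.96 ms, i.e. the reported 24.25 ms - 26.2 ms.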
For the static path installation measurement, a point-to-point intent is manually configured on the controller using the ONOS built-in app "push-test-intent" [5]. 100 intents were emulated to measure 100 path installations, where the app execution was made from a remote computer and its timestamp was captured by Wireshark, as described earlier in Fig. 2. The time frame between the app execution and the last OFPT_FLOW_MOD message sent by the controller to the switches is the time response that the controller takes to process the intent submission and install the static path in the network. Table III illustrates the minimum, maximum, and average path installation response time, as well as the sample standard deviation (σ) and the 95% Confidence Interval (CI).

Packet forwarding through the switches had an average delay of 186.49 µs, in which the mean delay is within the range of 178.358 µs to 194.621 µs, with a confidence level of 95%.

B. Segment Routing Testing

The implementation of this method is based on the experimentation made by the ONOS project [6]. In this project, Segment Routing is used as an extension of MPLS into an SDN environment. The same ONOS version utilized for that experimentation (Spring-Open) is also used to conduct the series of tests described in the preceding section. The ONOS controller loads a configuration onto the CPqD switches to emulate routing capabilities on them. Each switch is assigned a loopback IP address and a Segment Routing ID (SID) to identify it as a Segment Routing node, according to [7]. Since Segment Routing is an L3 mechanism, each host is configured under a different subnet. Fig. 4 illustrates the logical network topology configured by the controller on top of the Mininet network topology. In this topology, the flows downloaded to the switches are MPLS and IP table entries that are used by each switch to compute the path between hosts h1 and h2. In all the cases, the initial routes computed were paths 1 and 2 described in Section II, and both were used during traffic forwarding. The forwarding actions are determined by group entries in the group table. Here, group chaining is used to push the SIDs of the hops involved in the path onto the MPLS label stack, in order to construct the segment sequence that will route the packet through the network.
Fig. 4. Segment Routing Topology. (ONOS controller at 10.60.1.1:6633; each switch Sn is assigned the loopback address 172.10.0.n/32 and SID 10n, with n = 1..10; h1 is 10.0.0.5/24 at S1 and h2 is 10.1.1.5/24 at S6.)

As was done with Proactive Forwarding, the network performance of the Segment Routing topology was tested. The results are presented in Table IV.

Once again, the network topology limits the transmitted bitrate, causing huge packet loss at 100 Mbps and above. The traffic is still stable at 1 Mbps, but since the subsequent tests are meant to be compared with those of Proactive Forwarding, the bitrate selected for the experimentation is 100 Kbps, the same bitrate used for stable traffic in Proactive Forwarding. Using this bitrate, the average round-trip time observed was 2.734 ms, with an average jitter of 0.0559 ms.

Table V shows the results of the measurements made on the response time in front of network events, in which the emulated failures were made on links S3-S4 and S8-S9, as was done with Proactive Forwarding: no matter the path initially calculated, the traffic would pass through the described links. In the initial state of each test, Segment Routing always computed 2 different paths. Thus, it can load balance the traffic using a round-robin method within the group action buckets [8].

The time that the controller takes from processing the information received in the port status message until the transmission of the last OFPT_FLOW_MOD message is the overall time response in front of network events. During the failure recovery, the traffic presented a minimal interruption, with an average packet loss of 0.0034% and a 95% confidence that the mean jitter is maintained within the range between 0.0526 ms and 0.0613 ms. The processing time is divided into 2 stages: group recovery and path computation. Group recovery is handled by a module of the Segment Routing application called the Group Recovery Handler, as described in [9]. The buckets within the groups affected by the link failure are instructed to be deleted from the group by using OFPT_GROUP_MOD messages. The time frame between the port status message and the first OFPT_GROUP_MOD message is the processing time of the group recovery, as illustrated in Fig. 5. The instant at which the path computation starts is not clear, but the path computation time lies somewhere within the time frame between the first OFPT_GROUP_MOD message and the first OFPT_FLOW_MOD message. For this paper, the overall processing time is assumed to be the time frame between the port status message and the first OFPT_FLOW_MOD message. Table V presents the processing and response times of Segment Routing in front of network events.

TABLE IV
NETWORK PERFORMANCE USING SEGMENT ROUTING.

Bitrate     Packet Loss   Received Bitrate
100 Kbps    0%            100 Kbps
1 Mbps      0%            1 Mbps
10 Mbps     1.6%          10 Mbps
100 Mbps    90%           9.9 Mbps
1 Gbps      98%           9.24 Mbps

TABLE V
RESPONSE TIME USING SEGMENT ROUTING.

Description   Until First Flow Mod       Until Last Flow Mod
Minimum       112.232 ms                 197.89 ms
Maximum       264.361 ms                 435.843 ms
Average       122.399 ms                 265.326 ms
Sample σ      20.814 ms                  56.432 ms
95% CI        118.27 ms - 126.53 ms      254.13 ms - 276.52 ms
Fig. 5. Segment Routing link recovery process time frame.

In Segment Routing, the path installation is configured through the implementation of tunnels and policies, which define a certain path across the network by introducing the Segment IDs of the related nodes into the MPLS label stack [10]. In this manner, at each hop the destination is set according to the label found at the top of the stack, and the packet is pushed out at each hop until it reaches the edge node that is connected to the destination network. In this implementation of Segment Routing brought by the ONOS project, the configuration of tunnels represents the arrangement of SIDs, which are the group numbers to be joined into a group chain. When the tunnel configuration is executed from the console, the controller processes the creation of the necessary groups and adds them to the switches' group tables by using OFPT_GROUP_MOD messages. On the other hand, the configuration of policies represents the information of the source and destination addresses, the destination group that executes the policy, and the priority that the path will have. When the policies are executed from the console, the controller processes the information into Access Lists (ACL) that are downloaded to the switches by using OFPT_FLOW_MOD messages. Fig. 6 displays the time frame of these processes.

Since the tunnel and policy configurations are separate executions, the measurement cannot be taken as the complete time frame from the first execution to the last OFPT_FLOW_MOD message, because it would introduce additional delay that is not relevant to the processing time of the controller. In other words, the measurement of the overall time response for the static path installation is

RT = TCT + PCT,    (1)

where RT is the response time of the path installation, TCT is the tunnel configuration time, and PCT the policy configuration time.
Table VI illustrates the times measured for the manually installed paths. The measurements present a sample standard deviation (σ) of 2.488 ms, with a 95% confidence that the mean response time is maintained within the range between 10.959 ms and 11.946 ms. Compared with the network event test, the response time is lower, due to the operation that takes place in the process: it only comprises the installation of an Access List (ACL) entry in the Access List table (ACL table) and the insertion of additional groups in the group table, without modifying the entire group table on the related switches.

TABLE VI
PATH INSTALLATION TIME USING SEGMENT ROUTING.

Description   Tunnel      Policy      Path Installation
Minimum       3.42 ms     1.951 ms    5.371 ms
Maximum       9.625 ms    7.104 ms    16.729 ms
Average       7.761 ms    3.69 ms     11.452 ms
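As a consistency check, the averages in Table VI satisfy (1) up to rounding: 7.761 ms (tunnel) + 3.69 ms (policy) = 11.451 ms ≈ 11.452 ms (path installation).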
The average packet forwarding delay observed on the switch was 290.2 µs, with a sample standard deviation of 37.311 µs and a 95% confidence that the mean delay is within the range between 282.796 µs and 297.603 µs. Compared to the test made on Proactive Forwarding, Segment Routing introduces more delay in the switch for packet forwarding.
Fig. 6. Segment Routing static path installation time frame.

IV. RESULTS ANALYSIS

The measurements performed are reviewed and analysed in Fig. 7, comparing the results of both mechanisms working on the network topology previously described.

Fig. 7. Test Results Comparison. (Average values on a logarithmic time axis, Segment Routing vs. Proactive Forwarding: controller processing time 122.40 ms vs. 25.23 ms; network event response time 265.33 ms vs. 40.75 ms; manual path installation time 11.45 ms vs. 52.29 ms; switch forwarding delay 0.29 ms vs. 0.186 ms.)

There is a clear difference in time response, in which Segment Routing presents more delay in most of the measurements. The switch forwarding delay shows the effects of the OpenFlow pipeline usage introduced by both mechanisms.

On one hand, in Proactive Forwarding the pipeline processing used is the standard OpenFlow method, as described in [8]. In this procedure, each ingress packet starts a lookup for an entry in the first flow table (table 0). It can go to a next flow table, be forwarded to an egress port, or be sent to the controller, depending on the action set found in the matched flow entry. In the case of our topology, only one flow table was created, with several flow entries.
This means that the ingress packet only had to be processed by one table before being forwarded. On the other hand, Segment Routing uses a series of flow tables plus the group table, as described in [9]. In this case, at least 4 tables are used: the IP address routing table, the MPLS table, the Access List table (ACL table), and the group table, which is used to apply the action sets using the action buckets within each group. This last table is necessary for pushing the SIDs onto the MPLS label stack using the group chaining function. Fig. 8 shows a summary of the OpenFlow pipeline usage by both path establishment methods.

Segment Routing takes a lower time to statically install paths due to the OpenFlow messages transmitted from the controller to the switches. While in Proactive Forwarding the controller sends OFPT_FLOW_MOD and pairs of OFPT_BARRIER_[REQUEST|REPLY] messages to install the necessary flows into all related switches, Segment Routing has to send only a few OFPT_GROUP_MOD and OFPT_FLOW_MOD messages to specific switches to establish a path across the network. In most cases, Proactive Forwarding exchanges a total of 54 OpenFlow messages, while Segment Routing exchanges only 7 OpenFlow messages. In the case of Segment Routing static paths, the OFPT_FLOW_MOD messages introduce new entries into the ACL table, which override the instruction set of both the MPLS and IP table entries related to specific traffic, according to [9].
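Message counts like these can be reproduced from the em1 captures with a short script; the following is a sketch under the same dissector assumptions as before (openflow_v4 is Wireshark's OpenFlow 1.3 dissector; the file name and the set of type codes shown are illustrative):

    #!/usr/bin/env python
    # Hypothetical sketch: tally OpenFlow message types in one experiment.
    import subprocess
    from collections import Counter

    NAMES = {12: 'PORT_STATUS', 14: 'FLOW_MOD', 15: 'GROUP_MOD',
             20: 'BARRIER_REQUEST', 21: 'BARRIER_REPLY'}

    out = subprocess.check_output([
        'tshark', '-r', 'em1_experiment.pcap', '-Y', 'openflow_v4',
        '-T', 'fields', '-e', 'openflow_v4.type'])
    counts = Counter()
    for line in out.decode().splitlines():
        for mtype in line.split(','):      # frames may carry several messages
            if mtype:
                counts[int(mtype)] += 1
    for mtype, n in sorted(counts.items()):
        print('%-16s %d' % (NAMES.get(mtype, 'type %d' % mtype), n))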
The opposite situation is observed for the response time in front of network events. Segment Routing has to use both OFPT_GROUP_MOD and OFPT_FLOW_MOD messages, and the number of messages is greater due to their use in modifying both MPLS and IP table entries (742 OpenFlow messages, including port status and barrier messages). Meanwhile, Proactive Forwarding only uses a set of OFPT_FLOW_MOD messages (206 OpenFlow messages, including port status and barrier messages).

Nevertheless, Segment Routing takes advantage of the available paths in the network. During the experimentation, Segment Routing always used 2 different paths to load balance the traffic across the network. The action buckets within the related groups processed the packets to forward them to the egress ports related to each path using a round-robin method. On the other hand, Proactive Forwarding was not always able to compute more than one path to load balance the traffic. In many cases, Proactive Forwarding only computed one path, leaving the rest of the available paths unused.

In overall results, compared with the performance of Proactive Forwarding, Segment Routing introduces an additional response time of approximately 225 ms in front of network events and 104 µs in terms of switch packet forwarding, while in static path installation Segment Routing improves its response time by approximately 41 ms. Following the use case of Segment Routing described in this article, this study can be used to monitor the performance of the controller. Moreover, it can be suitable as a starting point to determine what to improve in Segment Routing to obtain a more efficient controller response time. For example, according to the observations, one of the main factors introducing more response time is the number of OpenFlow messages exchanged between the controller and the switches. In this case, there are several OFPT_FLOW_MOD messages to change or add flow entries of the IP, MPLS, and ACL tables, and OFPT_GROUP_MOD messages to edit an entry of the group table, in which several sets of these messages carried repeated instructions to a single switch. This is probably an area to start looking for improvements, in a way that minimizes the number of Flow and Group messages without affecting the working principle of Segment Routing.

During the experimentation on failure recovery, the jitter observed was slightly higher compared with the initial testing at steady-state conditions. Since all observations were very low (in the order of µs), the time difference is not considered significant. In other words, during network events the traffic does not suffer a considerable increase in jitter.
Fig. 8. Proactive Forwarding and Segment Routing O.F. pipeline usage. (Proactive Forwarding: packets traverse the standard flow tables 0 to N, applying the action sets. Segment Routing: packets traverse the VLAN table [0], MAC table [10], IP routing table [20] and MPLS forwarding table [30], plus the ACL table, with the forwarding actions applied through group action buckets.)

V. CONCLUSIONS

The results obtained from the entire experimentation are not definitive; they depend on several factors, such as the hardware used for the controller and the switches. Nevertheless, in most cases the same pattern and behaviour observed in this paper can be expected. The study is more focused on the conceptual functionality of both path methods and on how their working principle can affect the time response of the controller. For instance, it can be expected that in most cases Segment Routing tends to introduce more response time to the controller in scenarios like failure recovery and packet forwarding, while in the scenario of static path configuration the response time is better. Let us also remember that the implementation of Segment Routing used for this study is still experimental, and several improvements are to be expected in future releases of ONOS.

From the point of view of the network, the main limitation lies in the virtual switch. The CPqD switch works at the user-space level, which limits the maximum bandwidth at which the traffic can be forwarded throughout the network. Using a kernel-space switch would have been more realistic, but the implementation used for Segment Routing would not have supported it [11].
Path establishment methods introduce delay to an SDN controller in handling operations related to topology changes and path configurations, plus an added delay to OpenFlow switches in packet handling. The awareness of these parameters can be used as a guide for performance testing, in order to keep track of the time response introduced and to maintain the desired performance according to the needs established by operators and standards.

The set of tests presented in this paper can be used as a starting point to observe the performance of path establishment methods and their behavior in an SDN environment, and to identify initial needs for improvement that open the door for further investigation of the tested technologies.
ACKNOWLEDGMENTS

This work has been supported by the Ministerio de Economía y Competitividad of the Spanish Government under project TEC2013-47960-C4-1-P.
REFERENCES

[1] D. C. Frost, B. F. Stewart and C. Filsfils. United States of America Patent US2014/0098675, 2014.
[2] "OpenFlow," [Online]. Available on May 18th, 2015: https://www.opennetworking.org/sdn-resources/openflow.
[3] P. Berde, M. Gerola, J. Hart, Y. Higuchi, M. Kobayashi, T. Koide, B. Lantz, B. O'Connor, P. Radoslavov, W. Snow and G. Parulkar, "ONOS: Towards an Open, Distributed SDN OS," [Online]. Available on May 18th, 2015: http://www-cs-students.stanford.edu/~rlantz/papers/onos-hotsdn.pdf.
[4] A. Koshibe, "Intent Framework," 2015. [Online]. Available on May 18th, 2015: https://wiki.onosproject.org/display/ONOS/Intent+Framework.
[5] S. Zhang and C. Franke, "Experiment C Plan - Intent Install/Remove/Reroute Latency," 2015. [Online]. Available on Jul 10th, 2015: https://wiki.onosproject.org/pages/viewpage.action?pageId=3441828.
[6] S. Das, "Project Description," 2014. [Online]. Available on May 18th, 2015: https://wiki.onosproject.org/display/ONOS/Project+Description.
[7] S. Das, "Configuring ONOS (spring-open)," [Online]. Available on May 18th, 2015: https://wiki.onosproject.org/pages/viewpage.action?pageId=2130918.
[8] "OpenFlow Switch Specification Version 1.3.4," 2014. [Online]. Available on May 18th, 2015: https://www.opennetworking.org/images/stories/downloads/sdn-resources/onf-specifications/openflow/openflow-spec-v1.3.0.pdf.
[9] S. Das, "Software Architecture," 2014. [Online]. Available on May 18th, 2015: https://wiki.onosproject.org/display/ONOS/Software+Architecture.
[10] S. Das, "Using the CLI," 2014. [Online]. Available on May 18th, 2015: https://wiki.onosproject.org/display/ONOS/Using+the+CLI.
[11] S. Das, "Installation Guide," 2014. [Online]. Available on Jul 10th, 2015: https://wiki.onosproject.org/display/ONOS/Installation+Guide.
