
MASTER THESIS

TITLE: µTSN-CP: A Microservices-based Control Plane for Time Sensitive Networking

MASTER'S DEGREE: Master's degree in Applied Telecommunications and Engineering Management (MASTEAM)

AUTHOR: Gabriel David Orozco Urrutia

ADVISORS: David Remondo
          Anna Agusti Torra

DATE: July 8, 2022



Abstract

Time-Sensitive Networking (TSN) is a group of IEEE 802.1 standards that aim at providing
deterministic communications over IEEE Ethernet. The main characteristics of TSN are
low bounded latency and very high reliability, which comply with the strict requirements
of industrial and automotive applications. In this context, the allocation of time slots,
the configuration of paths and the assignment of Gate Control Lists (GCLs) to contending
TSN streams is often a laborious task. Software-Defined Networking (SDN) and the IEEE
802.1Qcc standard provide the basis to design a TSN control plane that faces these
challenges. However, current SDN/TSN control plane solutions are monolithic applications
designed to run on dedicated servers and require specific protocols for synchronization
among them; these SDN controllers do not provide the flexibility required to scale when
facing an increasing number of service requests. In this work, we present µTSN-CP, a
microservices-based control plane architecture for TSN/SDN that provides superior
scalability in situations with highly dynamic service demands. We evaluate our µTSN-CP
solution against a monolithic solution in terms of CPU usage, RAM usage, latency, and
percentage of successfully allocated TSN streams.

To my family and loved ones
who always find ways to show me
that my only limit is my own will
posteris lvmen moritvrvs edat
CONTENTS

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

CHAPTER 1. Background . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1.1. Microservice Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1.2. Software Defined Networks . . . . . . . . . . . . . . . . . . . . . . . . . . 4

CHAPTER 2. Related works . . . . . . . . . . . . . . . . . . . . . . . . . 7

CHAPTER 3. Architecture design . . . . . . . . . . . . . . . . . . . . . . 9

3.1. Docker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

3.2. Message Broker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

3.3. Jetconf Microservice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

3.4. Topology Discovery Microservice . . . . . . . . . . . . . . . . . . . . . . . 14

3.5. Preprocessing Microservice . . . . . . . . . . . . . . . . . . . . . . . . . . 15

3.6. ILP Calculator Microservice . . . . . . . . . . . . . . . . . . . . . . . . . . 17


3.6.1. ILP Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.6.2. Implementation details . . . . . . . . . . . . . . . . . . . . . . . . . 20

3.7. Scheduler Postprocessing . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

3.8. Vlan Configurator Microservice . . . . . . . . . . . . . . . . . . . . . . . . 21

3.9. SDN Controller Microservice . . . . . . . . . . . . . . . . . . . . . . . . . . 21

3.10. TSN Network, the surroundings of the CNC . . . . . . . . . . . . . . . . 21

3.11. CNC & CUC UNI Interface interaction . . . . . . . . . . . . . . . . . . . 21

Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

APPENDIX A. Pyang structure of custom Yang model . . . . . . . . 33


APPENDIX B. How to execute the code . . . . . . . . . . . . . . . . . . 37

B.1. Docker and Docker compose Installations . . . . . . . . . . . . . . . . . . 37


B.1.1. Installation over Linux . . . . . . . . . . . . . . . . . . . . . . . . . 37
B.1.2. Installation over OSx and Windows . . . . . . . . . . . . . . . . . . 38
B.1.3. How to execute the code under Docker compose . . . . . . . . . . . 39

APPENDIX C. About Docker Images . . . . . . . . . . . . . . . . . . . . 43


LIST OF FIGURES

3.1 Motivational example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9


3.2 Overview of the SDN-TSN prototype . . . . . . . . . . . . . . . . . . . . . . . 10
3.3 Microservices architecture of the CP functions. . . . . . . . . . . . . . . . . . 11
3.4 Hardware vs OS Virtualizations . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.5 Message Queue with RabbitMQ as broker . . . . . . . . . . . . . . . . . . . . 13
3.6 Overview of Jetconf Microservice . . . . . . . . . . . . . . . . . . . . . . . . 14
3.7 Overview of Topology Discovery Microservice . . . . . . . . . . . . . . . . . . 15
3.8 Overview of Preprocessing Microservice . . . . . . . . . . . . . . . . . . . . . 16
3.9 Overview of ILP Calculator Microservice . . . . . . . . . . . . . . . . . . . . . 17
3.10 First Scheduler problem . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.11 Second Scheduler problem . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.12 Third Scheduler problem . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.13 Overview of Scheduler Post Processing Microservice . . . . . . . . . . . . 25

B.1 Docker Desktop Graphical interface . . . . . . . . . . . . . . . . . . . . . . . 38


B.2 Docker compose up command output . . . . . . . . . . . . . . . . . . . . . . 39
B.3 Docker compose ps command output . . . . . . . . . . . . . . . . . . . . . . 41
B.4 Accessing two container with docker-compose exec command . . . . . . . . . 42

C.1 Some of the images available in the Dockerhub repository . . . . . . . . . . . 43


LIST OF TABLES

1.1 Some of the TSN Devices available in the literature . . . . . . . . . . . . . . . 6

3.1 Different Mathematical Problem solvers . . . . . . . . . . . . . . . . . . . . . 21


INTRODUCTION

Time Sensitive Networking (TSN) refers to a set of IEEE 802.1 standards aimed at pro-
viding deterministic, low-latency and highly reliable communications over the existing Eth-
ernet. TSN enables applications with different time-critical levels to share transmission
resources in Ethernet while meeting their latency, bandwidth and reliability requirements
[12]. One of the main standards of TSN is the IEEE 802.1Qbv, which defines the Time-
Aware Shaper (TAS). The TAS establishes Gate Control Lists (GCLs) for each outgoing
port of the TSN switches to control which traffic classes can be transmitted at different time
intervals. This feature ensures that traffic classes can access the transmission medium in
a time-triggered manner, preventing non-critical traffic classes from invading the time slots
assigned to time-critical traffic classes and thereby achieving bounded end-to-end latency
for these [24]. It is essential to highlight that TAS requires precise time synchronization
among all the nodes of a TSN domain (i.e., end stations and TSN switches); this synchro-
nization is achieved with the Precision Time Protocol (PTP) as defined in the IEEE 802.1AS
standard.
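As an illustration of how a GCL drives the gates of an egress port, the sketch below (with invented durations and traffic classes, not taken from any standard configuration) checks whether a given traffic class may transmit at a given instant of the repeating cycle:

```python
# A minimal, illustrative Gate Control List for one egress port: each entry
# holds a duration (microseconds here) and the set of traffic classes whose
# gates are open during that interval. All values are invented.
GCL = [
    (50, {7}),         # 50 us reserved for the time-critical class 7
    (200, {0, 1, 2}),  # 200 us for best-effort classes
]

CYCLE = sum(duration for duration, _ in GCL)  # total cycle length

def gate_open(traffic_class, t):
    """Return True if `traffic_class` may transmit at time t (us);
    the GCL repeats every CYCLE microseconds."""
    t = t % CYCLE
    for duration, open_classes in GCL:
        if t < duration:
            return traffic_class in open_classes
        t -= duration
    return False
```

The time-triggered behavior is visible in the fact that class 0 is blocked during the slot of class 7 even if class 0 has frames queued.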
The assignment of time slots requires the TAS to know the network topology and the re-
quirements of the different data streams, which are demanded by end stations via the
User/Network Interface (UNI). The IEEE 802.1Qcc standard defines three architecture
models for getting the topology and user/network requirements and for configuring the
switches of the underlying network accordingly: the fully distributed model, the centralized
network/distributed user model, and the fully centralized model. This work focuses on the
fully centralized approach because it resembles an SDN architecture, since it removes the
control logic from the network devices and allocates it in two elements called Centralized
Network Configuration (CNC) and Centralized User Configuration (CUC).
Even though the IEEE 802.1Qcc standard defines the basis for implementing the fully
centralized model, it just includes general guidelines and does not provide concrete spec-
ifications. In the literature, several works contain designs of a centralized control plane
for TSN [12, 23, 16, 27]; however, these solutions follow non-scalable monolithic architec-
tures, which are unable to allocate additional resources to specific tasks as needed. Such
TSN control plane (CP) implementations waste a significant amount of computational re-
sources and increase the time invested to calculate schedules (i.e., the length and position
of time slots assigned to different GCLs at the outgoing ports of the TSN switches along
network paths).
The concept of microservices is a promising approach to deal with the scalability issues
of monolithic TSN CPs. Microservices are an architectural model that divides a monolithic
application into different components, each of them with a specific functionality [11]. Since
these components are smaller than the whole monolithic application, it is easier to add
or remove microservice instances [21], allowing to allocate resources to demanding tasks
more efficiently. In the literature, some works use microservices for SDN controllers: in
[19] the authors create a cloud-native SDN controller for transport based on the concept of
microservices; the work in [29] explores the use of microservices for a CP for open optical
networks. However, to the best of our knowledge, there is no work exploring the use of
microservices for the design of a TSN CP.
In this work, we propose µTSN-CP, a microservice-based SDN CP architecture for TSN
that aims to reduce the time needed to retrieve network topology and stream requirements,
1
2 µTSN-CP: A Microservices-based Control Plane for Time Sensitive Networking

calculate valid schedules and configure switches accordingly, by optimizing the associated
resource usage. Considering the atomic functionalities of the CP, µTSN-CP decomposes the
CNC and CUC elements into microservices. Moreover, we implement a prototype of the
µTSN-CP elements using Python and JavaScript, and execute it on a hybrid cloud running
on Amazon Web Services (AWS) using Kubernetes. We analyze the performance of the
µTSN-CP prototype by comparing it with a TSN-CP built with a monolithic architecture. In
this comparison, we analyze the computational resource usage (i.e., CPU and RAM) and
the time spent to calculate new schedules when the TSN network topology changes or
new service demands arrive from end stations.
The main contributions presented in this work are:

• A microservice-based TSN CP architecture design to transfer data with mixed time-criticality over Ethernet.

• A microservice-based TSN CP prototype that follows this design.

• The demonstration that our architecture provides increased scalability over a mono-
lithic implementation of the solution.
CHAPTER 1. BACKGROUND

1.1. Microservice Architecture

Microservices is an architectural model where a monolithic application is divided into mul-
tiple independent components, each with a specific function [28]. This functional division
makes it faster and easier to create and delete microservices, allowing resources to be
allocated to particular microservices. Such targeted allocation of resources leads to a more
efficient response to incoming traffic, as we can assign resources specifically to the appli-
cations that consume most of the computational resources.
For the proper functioning of microservices, a group of elements is necessary that together
make up the MicroService Architecture (MSA). Even though there is no standard for
microservices, it is possible to find in the literature some of the basic characteristics that
make up the MSA.
To understand the characteristics of MSA available in the literature, we classify them into
two groups: the Microservices Peculiarities and the MSA Elements. The Microservices
Peculiarities are the properties that need to be followed to partition a monolithic
application into an MSA. In this group we have:

1. The functional decomposition taking into account the business model [8]. This de-
composition indicates that each microservice must provide something of value to the
users or the other microservices while maintaining an adequate size. Microservices
that bundle many functions return to monolithic design, while very small microser-
vices separate tightly linked functions, adding management complexity [15].

2. Each microservice is independent from the others; in consequence, between mi-
croservices we need a common communication protocol such as HTTP [26] or, as
we are doing in this project, a message broker such as RabbitMQ.

3. Each microservice has its own database in order to achieve data isolation [26, 9]. A
centralized database implies losing the independence of microservices.

The second group of MSA features are the MSA Elements, which ensure the proper func-
tioning of microservices providing security, reliability and management. These items are:

• Load Balancer that distributes network traffic when a microservice is scaled across
multiple instances to enhance the capabilities of the architecture [9].

• API Gateway is an interface responsible for preventing the establishment of direct


communication between users and microservices, hiding the internal structure of the
architecture [22].

• Service Discovery stores and provides the addresses of the microservices in order
to establish effective communication between the elements of the architecture [18,
20, 13].

• Circuit Breaker is in charge of responding to failures when a microservice is not


working properly [22, 20, 13].

• Orchestrator [26, 18, 20, 13] manages the resources allocated to microservices
(i.e., creates, deletes, replicates, and updates microservice instances), which allows
resources to be scaled up and released.

1.2. Software Defined Networks

Software-Defined Networking (SDN) separates the two planes of a network: the data plane
and the control plane [7, 25]. The data plane is the part of the network that directly carries
user traffic; the control plane, on the other hand, decides how packets are forwarded from
one part of the network to another. This decoupling means that network administrators can
use software to schedule and control the entire network from a single control point instead
of on a device-by-device basis. As in Network Function Virtualization (NFV), in SDN the
software is decoupled from the hardware [7].
A typical SDN architecture is divided in three layers:

1. Applications, which communicate requests for resources or information about the


network as a whole.

2. Controllers, which use information from applications to decide how to route a data
packet.

3. Network devices, which receive information from the controller about where to move
data.

Depending on the characteristics of the network architecture, those elements can be found
in different physical locations [25].
Virtual or physical network devices are the ones that actually move data across the net-
work. Virtual switches, which can be embedded in software or hardware, in some cases
take over the responsibilities of physical switches and consolidate their functions into a sin-
gle smart switch. The switch checks the integrity of both data packets and virtual machine
destinations and forwards the packets.
There are different ways of implementing SDN. Some approaches use the OpenFlow
protocol [17], proposed to standardize the communication between the SDN controller
and the network devices in an SDN architecture. It was originally proposed to enable
researchers to test new ideas in a production environment. OpenFlow provides a speci-
fication to migrate the control logic from a switch into the controller, and it also defines a
protocol for the communication between the controller and the switches.
On the other hand, there is also the option of using YANG (Yet Another Next Generation)
models and their associated protocols, Netconf and Restconf [10]. YANG is a data modeling
language used to model configuration data, state data, and administrative actions for
networks; YANG 1.1 was published as RFC 7950 in August 2016. YANG's main charac-
teristics are its easy readability, hierarchical configuration data models, and reusable types
and groupings [14]. It is linked to Netconf and Restconf because both protocols were
designed to provide mechanisms to install, manipulate, and delete configurations of network
devices using YANG models. However, they differ in the underlying protocol they use:
Netconf runs over SSH, whereas Restconf follows a RESTful API approach over HTTP.

In µTSN-CP we decided to use the combination of YANG models, Netconf and Restconf,
as they are the best approach to match the requirement of easily configuring the
parameters of a TSN network programmatically. Moreover, there are already YANG models
that define the configuration parameters of the TSN interfaces and the communication be-
tween the CNC and the CUC. As we will see in further sections, Netconf is used to
configure the MTSN devices from SoC-e directly from an SDN controller, and Restconf
is used to send the configurations to the SDN controller through a RESTful API over HTTP
and to communicate the CNC and the CUC over the UNI interface.

Product                   Company    Features                                  Role
------------------------  ---------  ----------------------------------------  -------------
netX 90N                  Hilscher   Time synchronization and an interrupt-    End station
                                     based event trigger, which allows it
                                     to be used as a TSN end station
Trustnode                 InnoRoute  Support of 802.1AS, 802.1Qbv, 802.1Qci    TSN switch
                                     and a Netconf server
SJA1105TEL                NXP        Support of 802.1AS                        TSN switch
Industrial Ethernet 4000  Cisco      802.1AS and 802.1Qbv, according to        TSN switch
                                     the datasheet
TSN-G5008                 Moxa       The manufacturer specifies that this      TSN switch
                                     device will have TSN capabilities
                                     in the future
TSN machine switch        B&R        All of the standards: 802.1AS-2019,       TSN switch
                                     802.1Q, 802.1Qbv, 802.1Qav, 802.1Qcc,
                                     802.1Qbu, 802.1Qci, 802.1CB
TSN Starter Package       TTTech     802.1AS, 802.1Qbv, 802.1Qbu, 802.1Qcc,    TSN switch
                                     802.1CB
PCIE-0400-TSN             Kontron    802.1AS (timing and synchronization),     End station
                                     802.1Qbv (traffic scheduling),
                                     802.1Qbu (frame preemption), 802.1Qcc
                                     (Stream Reservation Protocol
                                     enhancements), 802.1CB
RELY-TSN-PCIe             Relyum     IEEE 802.1AS, 802.1Qbv, 802.1Qav,         End station
                                     802.1Qcc, 802.1CB, 802.1Qci, 802.1Qbu
                                     and 802.3br
MFN 100                   TTTech     Not specified                             Edge computer

Table 1.1: Some of the TSN devices available in the literature


CHAPTER 2. RELATED WORKS

Several works have addressed the design of a TSN CP based on the fully centralized
model of the IEEE 802.1Qcc standard. The authors of [23] offer an SDN CP for FPGA
TSN networks that includes a time-sensitive management protocol and a time-sensitive
switching model. However, that contribution resorts to a protocol for the management
of the underlying network that is different from the one described in the IEEE 802.1Qcc
standard (which uses Restconf and Netconf). Additionally, all the elements of the TSN
CP are merged into a monolithic element, which limits the scalability and flexibility of the
architecture. In the work [16], the authors leverage SDN and Object Linking and Em-
bedding for Process Control Unified Architecture (OPC-UA) to build an SDN/TSN CP. In
the implementation, they propose four elements (User Registration, Service Registration,
Stream Management Component and an OpenDaylight SDN controller), where each ele-
ment is running on a different computer. Even though this solution shows a certain degree
of platform independence and is not completely monolithic, the authors do not exploit all
the potential advantages of a microservice architecture. Also, the Stream Management
Component in particular performs several tasks that could be split further into separate
microservices.
On the other hand, the concept of microservices is used in the literature for the design
of SDN controllers. The authors of [19] propose µABNO (Application-based Network Op-
erations); this SDN architecture is based on microservices and achieves auto-scalability
while enabling cloud-scale traffic load management. They use Kubernetes to orchestrate
the containers that execute the microservices and a cloud-native architecture running on
the Adrenaline Testbed platform. In [29], authors build an SDN controller for Open Op-
tical Networks based on microservices. They rely on a microservices architecture with
Docker containers and Kubernetes to enable platform-as-a-service network control, with
automated and on-demand deployment of SDN controllers or applications, and on-the-fly
upgrades or swap of the software components.
In summary, there is no work in the literature that explores the use of microservices in the
design of a TSN CP: the existing approaches present monolithic designs that fail to achieve
good scalability and flexibility to accommodate varying traffic demands, and waste
significant computational resources. In contrast, our µTSN-CP architecture addresses this
issue by distributing the main CNC and CUC functionalities described in IEEE 802.1Qcc
among microservices. This approach allows the system to dynamically adapt the resources
allocated to just the necessary functions upon changing traffic demands.

CHAPTER 3. ARCHITECTURE DESIGN

We can see the general structure of the implemented TSN SDN prototype in Fig. 3.2. The
CUC is in charge of collecting the characteristics of the data streams requested by the
end stations (which include the end station identifiers, the message size and frequency,
and the end-to-end latency requirements for each stream) and passing on this information
to the CNC. Within the CNC, the TSN Controller receives the information about the re-
quested data streams from the CUC via the UNI by means of RESTCONF. The scheduler
determines a configuration of the switches that complies with the requirements of the data
streams according to the network topology information received from the SDN Controller
through RESTCONF. The SDN Controller distributes the configuration commands to the
switches via NETCONF and YANG models, while it also transfers the topology information
obtained from LLDP to the TSN Controller via RESTCONF.

Figure 3.1: Motivational example

Regarding the microservice architecture, Figure 3.3 illustrates the microservices created
for mapping the CP functions, as well as the communication tools and interfaces between
them. Jetconf gathers the requested data stream information from the CUC, receives input
on the already allocated streams from Southconf, and sends all of this to Preprocessing.
This block combines the information with the topology information received from the Topol-
ogy Discovery function, and sends it to the scheduler (ILP calculator). The configuration
resulting from the scheduler is sent to Southconf, which forwards it to the VLAN Config-
urator and the Opendaylight SDN controller. The configuration of the TSN switches will
be in accordance with the VLAN related information received via HTTP and the schedule
received via NETCONF.


Figure 3.2: Overview of the SDN-TSN prototype

3.1. Docker

To understand what Docker is, it is first necessary to understand two wider concepts:
Operating System virtualization (OS virtualization) and hypervisor virtualization. At a
glance, virtualization is the process of isolating and abstracting the resources of a computer
system to generate concurrent subsystems that run in parallel over the same hardware.
Each of these Virtual Machines (VMs) runs with a specific allocation of resources (i.e., CPU,
memory, disk) from the host hardware, isolated from the rest of the resources. The purpose
of virtualization is to consolidate multiple smaller servers onto a single large server,
reducing hosting costs and enhancing the efficiency of resource usage.
In hypervisor virtualization the hardware acts as the host of the virtual machines; the
hypervisor can run directly over the hardware or can be installed on top of the host OS. In
both cases the VMs are abstracted into virtual environments, completely isolated from the
other environments and the host OS; this includes the OS, which can differ from that of the
host, and the kernel. Some examples of this virtualization technique are VirtualBox,
VMware Workstation and Microsoft Hyper-V.
On the other hand, OS virtualization occurs on top of the OS. This means that the software
installed on the OS uses the same kernel, facilitating instances that are isolated from one
another.

Figure 3.3: Microservices architecture of the CP functions.

In OS virtualization, the instances are called containers. Each container has access to a
certain part of the computational resources and has no knowledge of the rest of the
available resources in the host machine. Figure 3.4 depicts the major differences between
hardware virtualization and OS virtualization. The leading representative of OS
virtualization is Docker, which is predominant in the IT industry, even though there are
other alternatives such as Linux Containers or containerd.
The reason behind the usage of Docker in a microservices architecture is the simplification
of the delivery and management of microservices. Containers are lighter than VMs, since
they run directly over the host kernel and one of the design patterns is to put only the
necessary software inside the container. The fact that containers are lighter also means
that they are created faster and more easily than VMs, improving the horizontal scalability
of the system (i.e., the possibility of adding extra replicas of an instance to the system)
and the failure recovery times.

Figure 3.4: Hardware vs OS Virtualizations
In µTSN-CP all of the microservices run over Docker containers; we did two different
deployments, one with Docker Compose and another over Kubernetes. The usage of
Docker allowed us to use different versions and distributions of Linux to execute each
microservice, as well as to isolate them both in the development process and in operations
maintenance.
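As an illustration of the Docker Compose deployment, a fragment like the one below would bring up the broker together with one microservice. The service names, image tags and environment variable are hypothetical stand-ins, not the actual Compose files of the prototype:

```yaml
version: "3.8"
services:
  rabbitmq:                      # message broker shared by all microservices
    image: rabbitmq:3-management
    ports:
      - "5672:5672"              # AMQP port used by the Pika clients
  preprocessing:                 # hypothetical microservice image name
    image: utsn-cp/preprocessing:latest
    depends_on:
      - rabbitmq
    environment:
      - AMQP_HOST=rabbitmq       # hypothetical variable naming the broker host
```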

3.2. Message Broker

To communicate microservices with one another, we decided to use a message broker,
an inter-application communication technology that helps to build a common integration
mechanism to exchange information independently of the language the applications were
written in or the platform they run on. In µTSN-CP we decided to use RabbitMQ as the
message broker because it provides a message queuing service that allows asynchronous
processing of messages. When using message queuing, one part of the application pushes
messages to a queue in the message broker and the consumers of those messages pull
them; processing is asynchronous in the sense that messages can stay in the queue
without being immediately processed.
Each of the microservices of µTSN-CP pulls/pushes messages to the appropriate queue.
As the skeleton of the application is written in Python, we used Pika ADD REFERENCE, a
pure-Python implementation of the AMQP 0-9-1 protocol for message queuing. The
software module to push/pull is the same and is included as a library in each of the
microservices that needs it. Besides, RabbitMQ also runs as a microservice in the cluster,
and it can be scaled horizontally.
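The push/pull pattern described above can be illustrated in a few lines of Python. The real prototype uses Pika against a RabbitMQ broker; the in-process `queue.Queue` below is only a stand-in that shows the asynchronous producer/consumer behavior, with invented queue names and messages:

```python
import queue
import threading

# Stand-in for the RabbitMQ queues: producers push JSON messages and a
# consumer drains them at its own pace (i.e., asynchronously).
broker = {"topology": queue.Queue(), "schedule": queue.Queue()}

def push(queue_name, message):
    """Publish a message to the named queue."""
    broker[queue_name].put(message)

def consume(queue_name, handler, count):
    """Pull `count` messages and hand each one to `handler`."""
    for _ in range(count):
        handler(broker[queue_name].get())  # blocks until a message arrives

received = []
worker = threading.Thread(target=consume, args=("topology", received.append, 2))
worker.start()
push("topology", '{"node": "sw1"}')  # messages may wait in the queue
push("topology", '{"node": "sw2"}')
worker.join()
```

Because the consumer blocks on `get()`, messages are processed whenever the consumer is ready, which is exactly the decoupling the broker provides between microservices.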

Figure 3.5: Message Queue with RabbitMQ as broker

3.3. Jetconf Microservice

The Jetconf microservice is the point of contact between the CUC and the CNC, imple-
menting the UNI interface with the YANG model defined in the IEEE 802.1Qcc standard
ADD REFERENCE. This microservice handles the communication of the user requirements
to the CNC internal microservices. As input, this microservice receives a JSON payload
through a REST API using RESTCONF; such payload must match the parameter definitions
in the YANG module. Some parameters included in the payload are the number of streams,
as well as the period, maximum latency, size, and smallest and highest transmission offsets
of each stream. With such requirements, we decided to use Jetconf ADD REFERENCE, an
implementation of the RESTCONF protocol written in Python 3, as the base for the
microservice.
As output, this microservice communicates back to the CUC the answer to the request
payload, including the feasibility of the communication (true or false) as well as the offset
that each talker should follow to achieve the communication. Additionally, it sends the
requested information in a JSON payload through the appropriate RabbitMQ queue to the
Preprocessing microservice. In terms of security, Jetconf provides HTTP/2 over TLS with
certificate-based authentication of the clients; such certificates should be shared with the
client (CUC) in order to grant the communication appropriately. Figure 3.6 depicts the
general operation of the Jetconf microservice.
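As an illustration of the kind of validation performed on the incoming payload, the sketch below checks that every requested stream carries the parameters listed above. The field names are hypothetical stand-ins, not the actual leaves of the 802.1Qcc YANG module:

```python
import json

# Hypothetical field names; the real payload must match the YANG module.
REQUIRED_FIELDS = {"stream-id", "period-us", "max-latency-us",
                   "frame-size-bytes", "earliest-offset-us", "latest-offset-us"}

def validate_stream_request(payload_json):
    """Return (ok, problems): problems lists (stream index, missing fields)
    for every stream that lacks a parameter the scheduler needs."""
    request = json.loads(payload_json)
    problems = []
    for i, stream in enumerate(request.get("streams", [])):
        missing = REQUIRED_FIELDS - stream.keys()
        if missing:
            problems.append((i, sorted(missing)))
    return (not problems, problems)
```

A request that fails this check would be answered with feasibility `false` before ever reaching the scheduler.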

Figure 3.6: Overview of Jetconf Microservice

3.4. Topology Discovery Microservice

The Topology Discovery microservice is in charge of getting the topology of the TSN
network and providing it to the other microservices as a network matrix. For this task, the
microservice uses LLDP ADD REFERENCE, a vendor-neutral data-link layer protocol used
for advertising the identity, capabilities and neighbors of a device. In a nutshell, the
Topology Discovery microservice generates a list of all the available devices in the network
by accessing every device through SSH and retrieving its LLDP information. To program
this microservice we used a combination of Bash scripts and Python 3, specifically the
Paramiko ADD REFERENCE library, which is an implementation of the SSHv2 protocol.
Figure 3.7 shows the general architecture of this microservice and its parts.
These are the steps followed by the microservice from the moment we start the communi-
cation to the moment the information is retrieved from the devices and passed to the
Preprocessing microservice:

1. A signal to activate the microservice comes from the Preprocessing microservice to
the internal topology generation module.

2. The topology generator module activates the retrieval of the topology information
using the Paramiko module. It uses Paramiko to send an lldpcli show neighbors
command (which prints the LLDP neighbor information a device has collected) to
every device on the network. The raw information is collected and stored as files in
the microservice.

3. Topology generator module gets all of the files, cleans the information and translates
it into a Network Topology Matrix.

4. Once the network information is ready, the Topology Discovery microservice gener-
ates a JSON message in the RabbitMQ queue and sends it to the Preprocessing
microservice.

Figure 3.7: Overview of Topology Discovery Microservice

import time
import paramiko

for ip in addresses:
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(ip, username='soc-e', password='soc-e')
    # Retrieve the LLDP neighbor table from the device
    ssh_stdin, ssh_stdout, ssh_stderr = \
        ssh.exec_command('sudo lldpcli show neighbors')
    time.sleep(1)  # give the command time to produce its output
    data = ssh_stdout.readlines()
    # Store the raw output as a per-device file for later parsing
    with open('devices/topology_' + ip + '.txt', 'a') as f:
        for line in data:
            f.write(str(line) + '\n')
    ssh.close()
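As an illustration, step 3 above (turning the cleaned per-device neighbor lists into a Network Topology Matrix) could be sketched as follows. This is not the thesis code: the dictionary input and function name are hypothetical, and the parsing of the raw lldpcli output is reduced to a neighbor-name list per device.

```python
# Illustrative sketch: building an adjacency (Network Topology) matrix
# from per-device LLDP neighbor names. Input format is an assumption.
def build_topology_matrix(neighbors):
    """neighbors: dict mapping device name -> list of neighbor names."""
    devices = sorted(neighbors)
    index = {name: i for i, name in enumerate(devices)}
    # matrix[i][j] == 1 when device i sees device j as an LLDP neighbor
    matrix = [[0] * len(devices) for _ in devices]
    for dev, seen in neighbors.items():
        for peer in seen:
            if peer in index:
                matrix[index[dev]][index[peer]] = 1
    return devices, matrix
```

For a three-switch line topology, build_topology_matrix({'sw1': ['sw2'], 'sw2': ['sw1', 'sw3'], 'sw3': ['sw2']}) yields the expected symmetric adjacency matrix.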

3.5. Preprocessing Microservice

This microservice is the junction point between the data provided by the Jetconf microservice and the Topology Discovery microservice. Figure 3.8 shows the input values that this microservice receives from the previous ones; those values are retrieved from the respective RabbitMQ queues. Two modules implement the functionalities of the microservice. The first is the Dijkstra module, which is in charge of implementing the Dijkstra algorithm to obtain the shortest possible path from origin to destination for each route in the schedule. It is important to highlight that in our µTSN-CP the module that calculates the paths from origins to destinations and the module that calculates the schedule live in different microservices, as we will see in further sections. The reason behind this design criterion is to reduce the number of variables involved in the mathematical formulation of the schedule, and hence to reduce the complexity and the computation time needed to find a solution. Furthermore, this design pattern makes it easy to adapt the microservice to use different path-calculation algorithms for the streams (e.g., the Floyd-Warshall and Johnson's algorithms) ADD REFERENCE.
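A minimal sketch of what the Dijkstra module computes, assuming the adjacency-matrix representation produced by the Topology Discovery microservice and unit link costs; the function name and interfaces are illustrative, not the module's actual API:

```python
import heapq

# Hedged sketch of the Dijkstra module's task: shortest path between
# two devices over the Network Matrix (1 = link present, 0 = none).
def shortest_path(matrix, src, dst):
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry
        for v, linked in enumerate(matrix[u]):
            if linked and d + 1 < dist.get(v, float('inf')):
                dist[v] = d + 1
                prev[v] = u
                heapq.heappush(heap, (d + 1, v))
    # walk the predecessor chain back from destination to origin
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]
```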

Figure 3.8: Overview of Preprocessing Microservice

The other primary functionality of the Preprocessing microservice is to prepare the input parameters for the principal microservice of the architecture, the ILP Calculator. This preprocessing job is necessary to reduce the number of tasks assigned to the ILP Calculator. As the figure shows, the output parameters of the microservice correspond to parameterized values resulting from the Dijkstra module and the previous microservices. Besides, keeping these specific functionalities in this microservice makes it possible to replace the ILP Calculator with microservices implementing different approaches, simply by following a black-box design criterion that respects the same inputs and outputs. Finally, as in the previous microservices, the results are communicated to the ILP microservice through a RabbitMQ queue.

3.6. ILP Calculator Microservice

The ILP Calculator microservice takes its name from its use of ILP (Integer Linear Programming), a mathematical optimization and feasibility method in which a problem is described by a set of constraints (represented as linear inequalities) and a linear objective function. The variables used in the definition of the linear constraints create a solution space containing the set of possible solutions that respect the constraint inequalities. The larger the number of variables and constraints, the more complex the solution space and the harder it is to obtain a solution. In our µTSN-CP all of the characteristics of the TSN TAS are included in the mathematical model. We adapted the ILP model described in [24] and implemented it in Python 3 using Pyomo, an open-source optimization modeling language. In this section we delve into the characteristics of the TAS as we explore the ILP problem formulation.

Figure 3.9: Overview of ILP Calculator Microservice

3.6.1. ILP Model

The first point to consider is the need to reduce two quantities: the total number of excess queues, denoted from now on as K, and the total extra end-to-end latency caused by interfering flows, denoted as Λ. K is mathematically defined as the summation of all the extra queues (Ka,b − 1) over all of the used TSN ports of the devices in the network, [va, vb] ∈ E. On the other hand, Λ is defined as the summation, over all streams in the scheduling problem (si ∈ S), of the difference between the end-to-end latency of each flow (λi) and the latency the flow would have if no other stream interrupted its transmission (λ̄i). Consequently, the optimization function is as presented in 3.1; the weights of the constants C1 and C2 determine the prioritisation of reducing one parameter over the other.
C1 Σ_{[va,vb]∈E} (Ka,b − 1) + C2 Σ_{si∈S} (λi − λ̄i)    (3.1)

Constraint 3.2 states that the number of queues used on a given link, ka,b, is lower-bounded by the queue assigned to each stream on that link, si^[va,vb].ρ. Equation 3.3 defines λi as the total time used for transmitting the stream si, from the moment the first frame is transmitted on the first link of the path until the last frame is fully received at its destination; the superscripts s and t denote the first and last links of the path of si.

ka,b ≥ si^[va,vb].ρ    ∀ si ∈ S    (3.2)

λi = fi,ki^t.φ + fi,ki^t.L − fi,1^s.φ    ∀ si ∈ S    (3.3)
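Numerically, equation 3.3 amounts to a simple difference of frame offsets; the following hedged sketch shows the computation (the dictionary fields 'phi' and 'L' are illustrative names, not the thesis data model):

```python
# Sketch of equation (3.3): the end-to-end latency of a stream is the
# offset plus length of its last frame on the last link of the path,
# minus the offset of its first frame on the first link.
def stream_latency(first_frame_first_link, last_frame_last_link):
    return (last_frame_last_link['phi'] + last_frame_last_link['L']
            - first_frame_first_link['phi'])
```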

Constraint 3.4 ensures that the latency of each flow, λi, is upper-bounded by its deadline si.D; consequently, in the results provided by the model, all of the streams arrive within their deadlines. Equation 3.5 ensures that every stream is fully scheduled within its period si.T. Equation 3.6 guarantees that every frame traversing a link [va, vb] on its path to the destination starts its transmission only after the same frame has been fully received on the previous link of the path, [vx, va], even if the worst-case synchronization error δ is present.

λi ≤ si.D    ∀ si ∈ S    (3.4)

fi,m^[va,vb].φ ≤ si.T − fi,m^[va,vb].L    ∀ fi,m^[va,vb] ∈ F    (3.5)

fi,m^[va,vb].φ ≥ fi,m^[vx,va].φ + fi,m^[vx,va].L + δ    ∀ fi,m^[va,vb], fi,m^[vx,va] ∈ F    (3.6)

Equation 3.7 enforces that each link can transmit only one frame at a time and that the frames of a stream have to be transmitted in order. Equation 3.8 introduces the hyperperiod, defined as the Least Common Multiple of the periods of the streams in the schedule, hpi,j = lcm(si.T, sj.T). The sets of repetitions of flow si and flow sj within the hyperperiod are denoted as A and B, and their possible values as α and β respectively. It is essential to highlight that in the actual Python implementation each stream in the schedule has its own set of variables representing its repetitions within the hyperperiod.

fi,m^[va,vb].φ + fi,m^[va,vb].L ≤ fi,n^[va,vb].φ    ∀ fi,m^[va,vb], fi,n^[va,vb] ∈ F², m < n    (3.7)

A = {0, 1, ..., hpi,j/si.T − 1},    B = {0, 1, ..., hpi,j/sj.T − 1}    (3.8)
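Equation 3.8 can be computed directly; a small sketch (the function name is illustrative, not the thesis code):

```python
from math import lcm

# Sketch of equation (3.8): the sets A and B of stream repetitions
# within the hyperperiod hp_{i,j} = lcm(s_i.T, s_j.T).
def repetition_sets(T_i, T_j):
    hp = lcm(T_i, T_j)
    return list(range(hp // T_i)), list(range(hp // T_j))
```

For example, periods of 250 and 500 µs give a 500 µs hyperperiod with two repetitions of the first stream and one of the second.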

Since these repetitions can overlap in time on a shared link, constraints 3.9 and 3.10 prevent this from happening by stating that fi,m^[va,vb] must finish before fj,n^[va,vb] starts, for all of the possible repetitions of both streams within the hyperperiod (A and B). Equations 3.9 and 3.10 represent opposite cases: if one of them is fulfilled, the other should not be. The binary variable σ ∈ {0, 1} models this disjunction, and M is a large constant used to deactivate one of the two inequalities. It is important to remark that in the Python implementation σ is an N-dimensional array, with one dimension per set of repetitions of each stream.

α.si.T + fi,m^[va,vb].φ + fi,m^[va,vb].L ≤ β.sj.T + fj,n^[va,vb].φ + M.σ    (3.9)

β.sj.T + fj,n^[va,vb].φ + fj,n^[va,vb].L ≤ α.si.T + fi,m^[va,vb].φ + M.(1 − σ)    (3.10)

∀ fi,m^[va,vb], fj,n^[va,vb] ∈ F², ∀α ∈ A, ∀β ∈ B    (3.11)
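A hedged numeric illustration of what constraints 3.9–3.11 guarantee: for every pair of repetitions (α, β) within the hyperperiod, one of the two orderings ("i finishes before j starts" or the reverse) must hold on the shared link; in the ILP the binary variable σ selects which one is enforced. This checker is not the thesis code; it only verifies a candidate pair of frame offsets:

```python
from math import lcm

# Check that two frames never overlap on a shared link for any pair of
# hyperperiod repetitions, i.e. the disjunction of (3.9) and (3.10).
def overlap_free(phi_i, L_i, T_i, phi_j, L_j, T_j):
    hp = lcm(T_i, T_j)
    for alpha in range(hp // T_i):          # the set A
        for beta in range(hp // T_j):       # the set B
            start_i = alpha * T_i + phi_i
            start_j = beta * T_j + phi_j
            # one of the two orderings must hold for this (alpha, beta)
            if not (start_i + L_i <= start_j or start_j + L_j <= start_i):
                return False
    return True
```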

The determinism of the TAS is achieved by allowing only one flow to be present in a queue at any moment in time: no frame of any flow can enter a queue while a frame of another flow is queued. This behaviour is achieved with constraints 3.12 and 3.13 for the cases where two streams share the same egress port of a switch. ω is an auxiliary variable representing the disjunction between 3.12 and 3.13, meaning that only one of the two constraints has to be fulfilled for each pair of streams that share the same link.

α.si.T + fi,m^[va,vb].φ ≤ β.sj.T + fj,n^[va,vb].φ + M(ω + εi,j^[va,vb] + εj,i^[va,vb])    (3.12)

β.sj.T + fj,n^[va,vb].φ ≤ α.si.T + fi,m^[vx,va].φ + M(1 − ω + εi,j^[va,vb] + εj,i^[va,vb])    (3.13)

∀ fi,m^[vx,va], fi,m^[va,vb], fj,n^[vx,va], fj,n^[va,vb] ∈ F⁴, ∀α ∈ A, ∀β ∈ B    (3.14)

The variable εi,j^[va,vb], defined in 3.15, captures whether or not two streams are assigned to the same queue. The variable equals 1 only if si^[va,vb] is assigned to a queue with an index strictly less than that of sj^[va,vb]. This behaviour of εi,j^[va,vb] is enforced by equations 3.16 and 3.17.

εi,j^[va,vb] = 1 if si^[va,vb].ρ < sj^[va,vb].ρ, 0 otherwise    (3.15)

sj^[va,vb].ρ − si^[va,vb].ρ − M(εi,j^[va,vb] − 1) ≥ 1    (3.16)

sj^[va,vb].ρ − si^[va,vb].ρ − M.εi,j^[va,vb] ≤ 0    (3.17)

∀ si^[va,vb], sj^[va,vb] ∈ S²    (3.18)
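The big-M mechanics of 3.16 and 3.17 can be checked numerically; this hedged sketch (not the thesis code) shows that, for a large enough M, only the ε value prescribed by 3.15 remains feasible:

```python
# Numeric check of (3.15)-(3.17): the two big-M inequalities leave
# exactly one feasible value for epsilon, matching the indicator.
def epsilon_feasible(rho_i, rho_j, eps, M=1000):
    c16 = rho_j - rho_i - M * (eps - 1) >= 1   # constraint (3.16)
    c17 = rho_j - rho_i - M * eps <= 0         # constraint (3.17)
    return c16 and c17
```

With rho_i = 2 and rho_j = 5 only eps = 1 is feasible; with the roles reversed, or with equal queue assignments, only eps = 0 is.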

Finally, equations 3.19 and 3.20 account for the cases in which two streams si, sj that use the same egress port of a TSN switch arrive at different ingress ports. In that case there can be a synchronization error between the internal clocks of the two ports, generating a disruption in the schedule. Consequently, the maximum synchronization error δ is included in the equations to prevent this error from happening.

α.si.T + fi,m^[va,vb].φ + δ ≤ β.sj.T + fj,n^[vy,va].φ + M(ω + εi,j^[va,vb] + εj,i^[va,vb])    (3.19)

β.sj.T + fj,n^[va,vb].φ + δ ≤ α.si.T + fi,m^[vx,va].φ + M(1 − ω + εi,j^[va,vb] + εj,i^[va,vb])    (3.20)

∀ fi,m^[vx,va], fi,m^[va,vb], fj,n^[vy,va], fj,n^[va,vb] ∈ F⁴, ∀α ∈ A, ∀β ∈ B    (3.21)

3.6.2. Implementation details

An ILP program has two pieces: the model itself, which in our case we implemented with Pyomo, and the mathematical solver, which has to be installed in the same Docker container as the model. There is a vast number of solvers available, some of them Open Source. Table 3.1 depicts some of the most used ILP solvers, including a small description to differentiate between them and the type of license necessary for their usage.

Listing 3.1: How to change the problem solver

# opt = SolverFactory('gurobi', solver_io="python")
opt = SolverFactory('glpk')
self.instance = self.model.create_instance()
self.results = opt.solve(self.instance)
self.instance.solutions.load_from(self.results)

It is important to remark that the solver is independent of the Pyomo code, and switching from one solver to another is as easy as changing a single line of code (Listing 3.1 shows how to do it). However, the installation process does differ between solvers: some are available as Open Source GitHub repositories, others as PIP packages for Python, and others have to be downloaded with the Conda [3] package manager for Python. In our scenarios we analyzed GLPK and Gurobi; as we will remark in further sections, Gurobi dramatically outperforms GLPK in terms of processing time. However, we decided to use GLPK in the final implementation of the ILP microservice because it is Open Source, whereas Gurobi requires the acquisition of a special license to be used inside Docker containers, given the ephemeral nature of microservices and containers (i.e., a container can be replaced by another one when it presents an error).
Regarding the architecture of the microservice itself, several small pieces work together to perform the full task; the two most important are the ILP module, described above, and the Solution Visualizer. The Solution Visualizer is a module, also written in Python, whose principal task is to facilitate the exploration of the solution that the ILP provides for the current scheduling problem. This module takes as reference what is described in the motivation scenario, adding the network topology and information from the different modules.

Solver Name   License Type                        Description
AMPL          Free reduced version, pay license   Focus on maintainability; integrates a common language for analysis and debugging [1]
PIco          Open Source, available in pip       Allows you to enter an optimization problem as a high-level model, with painless support for vector and matrix variables and multidimensional algebra [6]
CBC           Open Source                         An open-source mixed-integer program (MIP) solver written in C++. CBC is intended to be used primarily as a callable library to create customized branch-and-cut solvers [2]
GLPK          Open Source                         A set of routines written in ANSI C and organized in the form of a callable library, intended for solving large-scale linear programming (LP) and mixed-integer programming (MIP) problems [4]
Gurobi        Student license, pay license        According to their website, Gurobi claims to be the fastest and most powerful mathematical programming solver available for your linear problems [5]

Table 3.1: Different Mathematical Problem solvers

3.7. Scheduler Postprocessing

3.8. Vlan Configurator Microservice

3.9. SDN Controller Microservice

3.10. TSN Network, the surrounding of the CNC

3.11. CNC & CUC UNI Interface interaction



Figure 3.10: First Scheduler problem



Figure 3.11: Second Scheduler problem



Figure 3.12: Third Scheduler problem



Figure 3.13: Overview of Scheduler Post Processing Microservice


CONCLUSIONS

Write the conclusions of the project here.

BIBLIOGRAPHY

[1] "AMPL," Jun 2022. [Online]. Available: https://ampl.com/

[2] "CBC," Jun 2022. [Online]. Available: https://www.coin-or.org/Cbc/

[3] "Conda," Jun 2022. [Online]. Available: https://docs.conda.io/en/latest/

[4] "GLPK," Jun 2022. [Online]. Available: https://www.gnu.org/software/glpk/

[5] "Gurobi," Jun 2022. [Online]. Available: https://www.gurobi.com/

[6] "Pico," Jun 2022. [Online]. Available: https://www.swmath.org/software/2252

[7] K. Benzekki, A. El Fergougui, and A. Elbelrhiti Elalaoui, "Software-defined networking (SDN): a survey," Security and Communication Networks, vol. 9, no. 18, pp. 5803–5833, 2016.

[8] A. Boubendir, E. Bertin, and N. Simoni, "A VNF-as-a-service design through micro-services disassembling the IMS," in Conference on Innovations in Clouds, Internet and Networks. IEEE, 2017, pp. 203–210.

[9] A. Bucchiarone, N. Dragoni, S. Dustdar, S. T. Larsen, and M. Mazzara, "From monolithic to microservices: An experience report from the banking domain," IEEE Software, vol. 35, no. 3, pp. 50–55, May 2018.

[10] B. Claise, J. Clarke, and J. Lindblad, Network Programmability with YANG: The Structure of Network Automation with YANG, NETCONF, RESTCONF, and gNMI. Addison-Wesley Professional, 2019.

[11] D. Gallipeau and S. Kudrle, "Microservices: Building blocks to new workflows and virtualization," SMPTE Motion Imaging Journal, 2018.

[12] T. Gerhard, T. Kobzan, I. Blöcher, and M. Hendel, "Software-defined flow reservation: Configuring IEEE 802.1Q time-sensitive networks by the use of software-defined networking," in 2019 24th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA). IEEE, 2019.

[13] F. Gutierrez, Spring Boot Messaging. Apress, 2017.

[14] M. Jethanandani, "YANG, NETCONF, RESTCONF: What is this all about and how is it used for multi-layer networks," in Optical Fiber Communication Conference. Optica Publishing Group, 2017, pp. W1D–1.

[15] S. Klock, J. M. E. M. van der Werf, J. P. Guelen, and S. Jansen, "Workload-based clustering of coherent feature sets in microservice architectures," in International Conference on Software Architecture. IEEE, 2017.

[16] T. Kobzan, I. Blöcher, M. Hendel, S. Althoff, A. Gerhard, S. Schriegel, and J. Jasperneite, "Configuration solution for TSN-based industrial networks utilizing SDN and OPC UA," in 2020 25th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA). IEEE, 2020.

[17] A. Lara, A. Kolasani, and B. Ramamurthy, "Network innovation using OpenFlow: A survey," IEEE Communications Surveys & Tutorials, vol. 16, no. 1, pp. 493–512, 2013.

[18] X. Larrucea, I. Santamaria, R. Colomo-Palacios, and C. Ebert, "Microservices," IEEE Software, vol. 35, no. 3, pp. 96–100, May 2018.

[19] C. Manso, R. Vilalta, R. Casellas, R. Martínez, and R. Muñoz, "Cloud-native SDN controller based on micro-services for transport networks," in 2020 6th IEEE Conference on Network Softwarization (NetSoft). IEEE, 2020.

[20] F. Montesi and J. Weber, "Circuit breakers, discovery, and API gateways in microservices," CoRR, 2016.

[21] F. Moradi, C. Flinta, A. Johnsson, and C. Meirosu, "ConMon: An automated container-based network performance monitoring system," in Symposium on Integrated Network and Service Management. IFIP/IEEE, 2017.

[22] S. Newman, Building Microservices. O'Reilly Media, 2015.

[23] W. Quan, W. Fu, J. Yan, and Z. Sun, "OpenTSN: an open-source project for time-sensitive networking system development," CCF Transactions on Networking, 2020.

[24] M. L. Raagaard and P. Pop, "Optimization algorithms for the scheduling of IEEE 802.1 time-sensitive networking (TSN)," Tech. Univ. Denmark, Lyngby, Denmark, Tech. Rep., 2017.

[25] S. Rowshanrad, S. Namvarasl, V. Abdi, M. Hajizadeh, and M. Keshtgary, "A survey on SDN, the future of networking," Journal of Advanced Computer Science & Technology, vol. 3, no. 2, pp. 232–248, 2014.

[26] J. Rufino, M. Alam, J. Ferreira, A. Rehman, and K. F. Tsang, "Orchestration of containerized microservices for IoT using Docker," in International Conference on Industrial Technology (ICIT). IEEE, 2017, pp. 1532–1536.

[27] M.-T. Thi, S. B. H. Said, and M. Boc, "SDN-based management solution for time synchronization in TSN networks," in 2020 25th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA). IEEE, 2020.

[28] J. Thönes, "Microservices," IEEE Software, pp. 116–116, 2015.

[29] Q. P. Van, H. Tran-Quang, D. Verchere, P. Layec, H.-T. Thieu, and D. Zeghlache, "Demonstration of container-based microservices SDN control platform for open optical networks," in 2019 Optical Fiber Communications Conference and Exhibition (OFC). IEEE, 2019.
APPENDICES
APPENDIX A. PYANG STRUCTURE OF CUSTOM
YANG MODEL

This appendix includes the YANG tree used for implementing the communication between the CUC and the CNC. This tree is a modification of the original ieee802-dot1q-tsn-types module provided by the IEEE. To build the tree we used Pyang, an easy-to-install Python library available in pip. The full code is in the GitHub repository.

Listing A.1: ieee802-dot1q-tsn-types-upc-version@2018-02-15.yang

module: ieee802-dot1q-tsn-types-upc-version

+--rw tsn-uni
+--rw stream-list* [stream-id]
| +--rw stream-id   stream-id-type
| +--rw request
| | +--rw talker
| | | +--rw stream-rank
| | | | +--rw rank?   uint8
| | | +--rw end-station-interfaces* [mac-address interface-name]
| | | | +--rw mac-address   string
| | | | +--rw interface-name   string
| | | +--rw data-frame-specification* [index]
| | | | +--rw index   uint8
| | | | +--rw (field)?
| | | | +--:(ieee802-mac-addresses)
| | | | | +--rw ieee802-mac-addresses
| | | | | +--rw destination-mac-address?   string
| | | | | +--rw source-mac-address?   string
| | | | +--:(ieee802-vlan-tag)
| | | | | +--rw ieee802-vlan-tag
| | | | | +--rw priority-code-point?   uint8
| | | | | +--rw vlan-id?   uint16
| | | | +--:(ipv4-tuple)
| | | | | +--rw ipv4-tuple
| | | | | +--rw source-ip-address?   inet:ipv4-address
| | | | | +--rw destination-ip-address?   inet:ipv4-address
| | | | | +--rw dscp?   uint8
| | | | | +--rw protocol?   uint16
| | | | | +--rw source-port?   uint16
| | | | | +--rw destination-port?   uint16
| | | | +--:(ipv6-tuple)
| | | | +--rw ipv6-tuple
| | | | +--rw source-ip-address?   inet:ipv6-address
| | | | +--rw destination-ip-address?   inet:ipv6-address
| | | | +--rw dscp?   uint8
| | | | +--rw protocol?   uint16
| | | | +--rw source-port?   uint16
| | | | +--rw destination-port?   uint16
| | | +--rw traffic-specification
| | | | +--rw interval
| | | | | +--rw numerator?   uint32
| | | | | +--rw denominator?   uint32
| | | | +--rw max-frames-per-interval?   uint16
| | | | +--rw max-frame-size?   uint16
| | | | +--rw transmission-selection?   uint8
| | | | +--rw time-aware!
| | | | +--rw earliest-transmit-offset?   uint32
| | | | +--rw latest-transmit-offset?   uint32
| | | | +--rw jitter?   uint32
| | | +--rw user-to-network-requirements
| | | | +--rw num-seamless-trees?   uint8
| | | | +--rw max-latency?   uint32
| | | +--rw interface-capabilities
| | | +--rw vlan-tag-capable?   boolean
| | | +--rw cb-stream-iden-type-list*   uint32
| | | +--rw cb-sequence-type-list*   uint32
| | +--rw listeners-list* [index]
| | | +--rw index   uint16
| | | +--rw end-station-interfaces* [mac-address interface-name]
| | | | +--rw mac-address   string
| | | | +--rw interface-name   string
| | | +--rw user-to-network-requirements
| | | | +--rw num-seamless-trees?   uint8
| | | | +--rw max-latency?   uint32
| | | +--rw interface-capabilities
| | | +--rw vlan-tag-capable?   boolean
| | | +--rw cb-stream-iden-type-list*   uint32
| | | +--rw cb-sequence-type-list*   uint32
| | +---x compute-request
| +--ro configuration
| +--ro status-info
| | +--ro talker-status?   enumeration
| | +--ro listener-status?   enumeration
| | +--ro failure-code?   uint8
| +--ro failed-interfaces* [mac-address interface-name]
| | +--ro mac-address   string
| | +--ro interface-name   string
| +--ro talker
| | +--ro accumulated-latency?   uint32
| | +--ro interface-configuration
| | +--ro interface-list* [mac-address interface-name]
| | +--ro mac-address   string
| | +--ro interface-name   string
| | +--ro config-list* [index]
| | +--ro index   uint8
| | +--ro (config-value)?
| | +--:(ieee802-mac-addresses)
| | | +--ro ieee802-mac-addresses
| | | +--ro destination-mac-address?   string
| | | +--ro source-mac-address?   string
| | +--:(ieee802-vlan-tag)
| | | +--ro ieee802-vlan-tag
| | | +--ro priority-code-point?   uint8
| | | +--ro vlan-id?   uint16
| | +--:(ipv4-tuple)
| | | +--ro ipv4-tuple
| | | +--ro source-ip-address?   inet:ipv4-address
| | | +--ro destination-ip-address?   inet:ipv4-address
| | | +--ro dscp?   uint8
| | | +--ro protocol?   uint16
| | | +--ro source-port?   uint16
| | | +--ro destination-port?   uint16
| | +--:(ipv6-tuple)
| | | +--ro ipv6-tuple
| | | +--ro source-ip-address?   inet:ipv6-address
| | | +--ro destination-ip-address?   inet:ipv6-address
| | | +--ro dscp?   uint8
| | | +--ro protocol?   uint16
| | | +--ro source-port?   uint16
| | | +--ro destination-port?   uint16
| | +--:(time-aware-offset)
| | +--ro time-aware-offset?   uint32
| +--ro listener-list* [index]
| | +--ro index   uint16
| | +--ro accumulated-latency?   uint32
| | +--ro interface-configuration
| | +--ro interface-list* [mac-address interface-name]
| | +--ro mac-address   string
| | +--ro interface-name   string
| | +--ro config-list* [index]
| | +--ro index   uint8
| | +--ro (config-value)?
| | +--:(ieee802-mac-addresses)
| | | +--ro ieee802-mac-addresses
| | | +--ro destination-mac-address?   string
| | | +--ro source-mac-address?   string
| | +--:(ieee802-vlan-tag)
| | | +--ro ieee802-vlan-tag
| | | +--ro priority-code-point?   uint8
| | | +--ro vlan-id?   uint16
| | +--:(ipv4-tuple)
| | | +--ro ipv4-tuple
| | | +--ro source-ip-address?   inet:ipv4-address
| | | +--ro destination-ip-address?   inet:ipv4-address
| | | +--ro dscp?   uint8
| | | +--ro protocol?   uint16
| | | +--ro source-port?   uint16
| | | +--ro destination-port?   uint16
| | +--:(ipv6-tuple)
| | | +--ro ipv6-tuple
| | | +--ro source-ip-address?   inet:ipv6-address
| | | +--ro destination-ip-address?   inet:ipv6-address
| | | +--ro dscp?   uint8
| | | +--ro protocol?   uint16
| | | +--ro source-port?   uint16
| | | +--ro destination-port?   uint16
| | +--:(time-aware-offset)
| | +--ro time-aware-offset?   uint32
| +---x deploy-configuration
| +---x undeploy-configuration
| +---x delete-configuration
+---x compute-all-configuration
+---x deploy-all-configuration
+---x undeploy-all-configuration
+---x delete-all-configuration
APPENDIX B. HOW TO EXECUTE THE CODE

This annex includes the necessary commands and recommendations to execute the code with docker-compose.

B.1. Docker and Docker compose Installations

First it is necessary to install Docker and docker-compose. Depending on the host operating system, you can follow this series of steps. All of these commands have to be executed in the Command Line Interface.

B.1.1. Installation over Linux

First make sure that your system repositories are up to date:

sudo apt update
sudo apt upgrade

Then it is necessary to have the prerequisites (i.e., curl, apt-transport-https, ca-certificates, software-properties-common) installed:

sudo apt-get install curl \
    apt-transport-https \
    ca-certificates \
    software-properties-common

Next, add the necessary Docker repositories so that the package can be downloaded directly with the Linux package manager. The first step is to add the GPG key:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
    sudo apt-key add -

Then add the Docker repository:

sudo add-apt-repository "deb [arch=amd64] \
    https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) stable"
Then update the repositories again:

sudo apt update

It is necessary to install from the correct repository, so we use this command to check that this is the case:

apt-cache policy docker-ce

The output should be as follows:

docker-ce:
  Installed: (none)
  Candidate: 16.04.1~ce~4-0~ubuntu
  Version table:
     16.04.1~ce~4-0~ubuntu 500
        500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages

Install Docker with apt packet manager:

sudo apt install docker-ce

Check that everything is configured and docker is running with the following commands:

sudo systemctl status docker
docker --version

With those commands, Docker should be up and running on the local machine.

B.1.2. Installation over OSx and Windows

To install Docker on Mac and Windows it is a better idea to use Docker Desktop; the installation is straightforward from the Docker website:
https://docs.docker.com/desktop/
Once installed you will have a GUI similar to the one presented in Figure B.1. Moreover, all of the CLI commands can be used as in the Linux distribution.

Figure B.1: Docker Desktop Graphical interface


B.1.3. How to execute the code under Docker compose

To execute the code we use a set of Docker commands. In the file structure of the repository we have to move to the proper folder and execute the appropriate docker command. Assuming the code is in the home directory, it is as follows:

cd TSN-CNC-CUC-UPC/CNC/Microservices
docker-compose up

This command reads the docker-compose.yml file and looks for the configuration of the Docker infrastructure to deploy; the output of the command should be as depicted in Figure B.2.

Figure B.2: Docker compose up command output

The file docker-compose.yml contains the definition of each of the microservices. For instance, Listing B.1 shows the part of the file that corresponds to the Preprocessing microservice. The dockerfile entry gives the path to the Dockerfile (i.e., a file that defines the steps followed to deploy the container); volumes creates a mirrored volume between a path on the host machine and the Docker container, which is extremely useful for deploying and testing new code within the container. Additionally, depends_on indicates that none of the microservices should be executed until the rabbitmq-microservice is ready. The rest of the file is available in the code repository.

Listing B.1: Code snippet of the docker-compose Preprocessing microservice definition

preprocessing-microservice:
  build:
    context: .
    dockerfile: Preprocessing_microservice/Dockerfile
  volumes:
    - ./Preprocessing_microservice:/preprocessing
  stdin_open: true
  tty: true
  depends_on:
    - rabbitmq-microservice

We can check that the containers are up and running with the docker-compose ps command from the CLI; Figure B.3 shows the expected output. This command lists the running containers (as defined in the docker-compose.yml file), the command they are executing, their state, and the ports they are listening on.
The open ports are the way microservices communicate with each other and the way the outside world accesses them. For instance, the container that corresponds to the Jetconf microservice has port 8443 open for receiving HTTPS traffic from the outside world. This port is necessary since Jetconf exposes the RESTCONF API on it for the UNI interface with the CUC.
Additionally, it is possible to get inside the Command Line Interface of a Docker container if something needs to be done there. For that we can use a command with the following structure:

docker-compose exec <name of the microservice> bash

Figure B.4 depicts the usage of the command for accessing to two of the docker containers
of the Jetconf and ILP Calculator Microservices.
Finally, to bring down the Docker containers and turn off the system it is necessary to use the docker-compose down command.
Even though far more Docker commands were used while implementing µTSN-CP, the information provided in this annex is enough to get the system to work.
Figure B.3: Docker compose ps command output
Figure B.4: Accessing two container with docker-compose exec command
APPENDIX C. ABOUT DOCKER IMAGES

The work is not only reflected in the code repository on GitHub; another essential component has its own repository: the Docker images. Each of these software pieces is a building block of the architecture. Even though a Docker image by itself does not contain any code, each microservice needs a particular environment in which to run. For instance, the ILP microservice needs the Pyomo library and GLPK to be installed on the system to run correctly. The images are kept in a repository so that they can be downloaded every time the system runs, and they are selected for use in the Dockerfile or in docker-compose.yml. The repository can be hosted in many places, such as the Docker Container Registry, the Amazon Container Registry, or Dockerhub. In our case we decided to use Dockerhub directly as the image repository. Figure C.1 depicts some of the created images. Such images can be accessed in the following repository:
https://hub.docker.com/repository/docker/gabrielorozcoupc.

Figure C.1: Some of the images available in the Dockerhub repository

