
Hybrid Network Slicing: Composing Network Slices based on VNFs and CNFs Network Services

Pol Alemany∗, Ricard Vilalta∗, Felipe Vicens†, Ignacio Dominguez Gómez†, Ramon Casellas∗,
Ricardo Martínez∗, Sonia Castro†, Josep Martrat†, Raul Muñoz∗
∗ CTTC, Castelldefels, Spain, Email: palemany@cttc.es
† Atos, Madrid, Spain, Email: felipe.vicens@atos.net

Abstract—This paper presents the creation of Network Slices composed of Network Services that are based either on Cloud-native Network Functions or on Virtual Network Functions. Such a slice is referred to as a Hybrid Network Slice. In addition to describing this idea, the paper demonstrates the benefits of Hybrid Network Slices. The idea has been validated in an immersive-media pilot: a Hybrid Network Slice deployment and a classical Network Slice deployment have been compared in order to validate the benefits of the hybrid case.

I. INTRODUCTION

Nowadays, network operators look towards the usage of Network Function Virtualization Orchestrators (NFV-O) in order to apply Network Function Virtualization (NFV) on their networks, with the objectives of reducing network hardware space, power consumption and maintenance costs, among others. NFV uses Virtual Network Functions (VNFs), which might be interconnected with other VNFs so that Network Services (NSs) are composed and deployed over the network. In order to gain flexibility, efficiency and agility to address customer needs, the Network Slicing [1] concept was added by creating a set of Network Slice Subnets in which each subnet might be composed of one or more interconnected NSs.

Despite the existence of multiple hypervisor technologies, such as Kernel-based Virtual Machines (KVM), to create VNFs, one of the current trends is to move towards container technology, and so VNFs are being substituted by what are called Cloud-native Network Functions (CNFs), which are container-based. While the main difference between VNFs and their predecessors (called Physical Network Functions, PNFs) was the evolution from hardware to software applications, the difference between VNFs and CNFs is the evolution from monolithic software to microservice-based applications. The latter implies a set of advantages such as agile methodologies, short update cycles and more control over the applications.

The authors in [2] presented the usage of CNF-based NSs in a smart manufacturing scenario. This paper aims to follow the already started path of using CNF technology in another 5G scenario: an immersive-media (IMM) pilot. Furthermore, while in [2] the whole scenario uses two individual NSs, the main novelty this paper presents is the use of Network Slices (Slices) combining VNF-based and CNF-based NSs, referred to as Hybrid Slices. In addition, this paper also presents the necessary architecture to support the life-cycle of Hybrid Slices, their benefits and the experimental results obtained.

This paper is structured as follows: the second section describes the state of the art; the third one presents the Hybrid Slice architecture and how it is supported within the NFV [3] context and architecture; the fourth section explains the experimental demonstration through one of the use cases belonging to the immersive-media pilot developed in [4]; the fifth section describes the evaluation and its results, guiding the reader towards the last section; finally, the sixth section presents the conclusions.

978-1-7281-5684-2/20/$31.00 © 2020 IEEE

II. STATE OF THE ART

The following section introduces NFV Orchestrators (NFV-O) and Slice Managers, presents the current Virtual Machine (VM)-based and container-based solutions to deploy VNFs and CNFs and, finally, explains how each NFV-O manages the creation of these elements.

A. NFV Orchestrators and Slice Managers

SONATA-NFV [5] is an NFV-O developed during the last years under the supervision of the EUC 5GTANGO project [6]. This NFV-O is a set of tools aiming to design and develop VNFs/CNFs, NSs and Slices, to test and validate these NFV objects and, finally, to manage and deploy them in production environments.

Open Network Operation Platform (ONAP) [7] and Open Source MANO (OSM) [8] are two open-source NFV-Os under the guidance of The Linux Foundation and ETSI, with the aim of creating an NFV-O accessible to any actor involved in the network virtualization world.

B. Using KVM-based virtualization

A VNF running inside a VM can be associated with a set of resources, namely storage, memory, CPU and networking, typically found in a cloud environment. For an operator for whom the key point is the performance of its resources, there are additional parameters that improve performance, such as huge pages, SR-IOV and CPU pinning, among others. Another necessary parameter to consider is the deployment time of the service that the telecommunications operator will offer to its clients.

OpenStack acts as a Virtual Infrastructure Manager (VIM), which is the software in charge of managing the physical resources to create the VMs hosting the VNFs. It performs this duty thanks to algorithms that allocate the configuration described in a manifest file and generate all the necessary VMs and the automatic connections among the VMs to properly create the Network Service.
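As a rough illustration of this descriptor-to-manifest step, the sketch below converts a simplified VNF descriptor into a HEAT-like template with one server resource per virtual deployment unit. The field names are simplified for the example and do not reproduce the exact SONATA-NFV or HEAT schemas.

```python
# Illustrative sketch (not the actual SONATA-NFV wrapper): turn a
# simplified VNF descriptor into a HEAT-like template with one
# OS::Nova::Server resource per virtual deployment unit (VDU).
def vnfd_to_heat(vnfd: dict) -> dict:
    resources = {}
    for vdu in vnfd.get("virtual_deployment_units", []):
        resources[vdu["id"]] = {
            "type": "OS::Nova::Server",
            "properties": {
                "name": f"{vnfd['name']}-{vdu['id']}",
                "image": vdu["vm_image"],
                "flavor": vdu["resource_requirements"]["flavor"],
            },
        }
    return {"heat_template_version": "2018-08-31", "resources": resources}

vnfd = {
    "name": "cms-vnf",
    "virtual_deployment_units": [
        {"id": "vdu01", "vm_image": "ubuntu-18.04",
         "resource_requirements": {"flavor": "m1.small"}},
    ],
}
template = vnfd_to_heat(vnfd)
```

The VIM then only has to materialise the resources listed in the rendered template, which is what makes the deployment reproducible from the descriptor alone.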
SONATA-NFV [5] has an OpenStack wrapper in order to translate VNF descriptors into HEAT [9] templates to deploy the VNFs over OpenStack. ONAP's VNF orchestration is done with the Infrastructure Adaptation, which makes use of the Multi-VIM Cloud project to support a large ecosystem of VIMs, including OpenStack, Kubernetes, commercial VIMs and public clouds. Additionally, in OSM the VNFs are orchestrated using the Resource Orchestrator (RO) module for OpenStack, VMware Integrated OpenStack (VIO), VMware vCloud Director (VCD) and public clouds such as Azure and Amazon Web Services (AWS), in combination with the Network Service to VNF Communication (N2VC) module for Kubernetes Descriptor Unit (KDU) support.

C. Extending NFV with Cloud-based Network Functions

New application architectures move from hardware monolithic applications (PNFs), through software monolithic applications (VNFs), to finally reach software microservice-based applications (CNFs) [10]. To address this requirement, container-based deployments have entered the ecosystem, allowing vendors to produce applications in a flexible, efficient and easy way.

In the IT world, the container manager Kubernetes is gaining presence over all its competitors, as it is a graduated project of the Cloud Native Computing Foundation. The project is stable enough to be ready for production.

SONATA-NFV [5] implemented a Kubernetes wrapper to cover the deployment of the CNFs by converting the VNF descriptor (VNFd) into Kubernetes objects, allowing the VNF to be deployed as a CNF over a Kubernetes cluster. Meanwhile, on the ONAP side, the management is done through the usage of Cloudify, and on the OSM side, CNF orchestration will be available in future releases.

III. SUPPORTING HYBRID NETWORK SLICES IN AN NFV-O

Fig. 1 presents the architecture to manage Hybrid Slices. At the top of Fig. 1 lies the Network Slice Manager (Slice Mngr), controlling all Slice life-cycle activities and in constant relationship with the NFV-O, as described in [3] (subsection 4.2.3.2.2). The NFV-O has all the necessary NFV object descriptors (VNFs, CNFs and NSs) and the information about the associated Virtual Infrastructure Managers (VIMs), such as OpenStack or Kubernetes, where the multiple NFV elements composing a Network Service, which will be part of a Slice, are deployed.

Furthermore, as different VIMs are being used, it is necessary to manage NSs placed in different parts of the network and to create the virtual network connections among them, while keeping the isolation that Slices require. To solve this VIM interconnection aspect, the NFV-O might be associated with a WAN Infrastructure Manager (WIM). A WIM is in charge of creating the network connections between network elements in order to generate the data flows between NSs placed in the multiple VIMs distributed in the network.

Fig. 1: Hybrid Slices architecture.

The basic features that a Slice Mngr must have are: a) management of the life-cycle of the NSs belonging to a Slice; b) NS composition, by creating the Virtual Links (VLs) interconnecting the NS instances; and c) sharing NSs among Slices that have those NSs in common, in order to be resource efficient. Apart from these features, a Slice might have its NSs distributed in different VIMs due to the following reasons: service performance (edge or cloud networks), resource type (VNFs and CNFs) or resource placement (lack of resources in a VIM), which oblige the Slice to be deployed in a multi-VIM context. Because of this possibility and the architecture in Fig. 1, the Slice Mngr must also include: a) multi-VIM deployment support, by exchanging information with the WIM; and b) multi-VIM selection according to the type of the NSs being instantiated; CNFs must be deployed in container-based VIMs (Kubernetes) and VNFs in KVM-based VIMs (OpenStack).

A. Data Objects

In this section, we introduce the necessary data objects, considering first CNFs and then Hybrid Slices. In order to detail them, we have selected the SONATA-NFV data models. SONATA-NFV has been selected to develop the Hybrid Slice concept as it covers the whole VNF/CNF, NS and Slice life-cycle, from designing, testing and validating the descriptors until they are ready to be used in production. The main novelty this paper introduces within the SONATA-NFV context is the use of two different technologies (VNFs and CNFs) within a single Slice and the fact of making the whole process absolutely transparent to the final user. Further information about the different SONATA-NFV components can be found in [11].

Similar to VNFs/CNFs and NSs, Slices are managed using two data objects: descriptors and records. Descriptors are a static definition of an object, with characteristics like the name or a description, including a list of the VNFs/CNFs or NSs composing that object, etc. A record is the information of an instantiation based on a descriptor. Slice descriptors are called Network Slice Templates (NSTs) [12] and, aside from the basic information, they contain a list of Network Service Descriptors (NSDs), which in turn include a list of VNF/CNF descriptors. A record belonging to a Slice, called a Network Slice Instance (NSI) [13], contains the deployment information of an NST with the instantiated data related to the NSs and their CNF/VNF instantiations.
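The descriptor/record split can be sketched as follows: a static template on one side, and a per-deployment record derived from it on the other. The field names below are simplified stand-ins, not the exact NST/NSIR schema keys.

```python
import uuid

# Illustrative sketch with simplified fields (not the exact SONATA-NFV
# NST/NSI schemas): a static Network Slice Template (descriptor) and the
# record created each time it is instantiated.
nst = {
    "name": "slice-event",
    "nsd-list": ["ma-ns", "mse-transcode-ns", "mse-cache-ns"],
}

def instantiate_nst(nst: dict, vim_per_ns: dict) -> dict:
    """Create a Network Slice Instance (record) from a template."""
    return {
        "id": str(uuid.uuid4()),
        "nst-ref": nst["name"],
        "nsr-list": [
            {"nsd-ref": nsd, "vim": vim_per_ns[nsd], "status": "INSTANTIATING"}
            for nsd in nst["nsd-list"]
        ],
        "status": "INSTANTIATING",
    }

# One template, many records: each user request yields a fresh NSI.
nsi = instantiate_nst(nst, {n: "kubernetes-vim" for n in nst["nsd-list"]})
```

Note how the record, unlike the descriptor, carries run-time state (per-NS status and target VIM), which is exactly the information the Slice Mngr needs during deployment.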
Due to the difference in information between VNF and CNF objects, and in order to create Hybrid Slices, the descriptor and record schemas managed by the selected NFV-O were adapted:

1) VNF/CNF Descriptors and Records: In order for VNFs and CNFs to work together, and since VNF descriptors were already in use, the approach was to keep backward compatibility with the VNFD schema and to have the lowest possible impact on the NFV-O internal components that refer to the VNFD. The solution was to keep using the VNFD schema [14] and create a new element in it. At the same level as the "virtual_deployment_units" (VDUs) schema element, a new one named "cloudnative_deployment_units" (CDUs) was added to include all the information related to CNFs. Thanks to this design decision, the creation process of a CDU is very similar to that of a VDU: in fact, it uses the id key to refer to the CDU and the image key to set the Docker container that correctly deploys the cloud-native deployment unit. As defined in the schema, "cloudnative_deployment_units" is a list, which means that multiple CDUs are allowed inside the descriptor.

Furthermore, a particular type of connection point named service endpoint was created to connect the CDUs to the external world. This was done to model the Kubernetes load balancer. The distinctive characteristic of this type of connection point is that it requires a port to be set in order to create the connections inside Kubernetes, from the service layer object to the deployments object. Since the Kubernetes service layer is a load balancer, the connectivity type is "E-TREE", because it is point-to-multipoint.

Additionally, a parameters section was added to the VNFD schema in order to influence the deployment of the CDU by sending the environment variables for the Kubernetes ConfigMap. Moreover, the creation of volumes to share data among the Pods is also possible through the VNFD schema.

The VNF records [15] were also extended with the section "cloudnative_deployment_units", where the MANO stores the run-time information of the CNFs, like "load_balancer_ip", "cdu_reference" and "cdu_instances", among others.

2) NS Descriptors and Records: The NS descriptor [16] and record [17] data objects were not modified, because the creation of the data objects for CNFs and Slices was the main objective. The CNFs and Slices are presented in the previous and in the next sections, respectively.

3) Slice Descriptors and Records: When the NSTs and NSIs were created inside the SONATA-NFV Slice Mngr [18], they only had three sections: basic information (i.e. id, name, author, etc.), the NSs or so-called subnets list (nsr-list) composing the Slice, and the networks or Virtual Link Descriptors (VLDs) list (vldr-list), which links the different subnets.

As the whole Network Slicing feature evolved, so did both data objects. While on the NST side the modifications are barely remarkable (i.e. parameters renamed or a new value for better management), the important evolutions were implemented on the NSI side. Fig. 2 shows the first level of the latest NSI structure with the latest updates: instantiation parameters (instantiation_params) allow the user to define some characteristics of the NSI (i.e. VIM or SLA selection per NS), and the WAN Infrastructure Manager (WIM) information (wim-connections) allows the user to manage a Slice deployed in different VIMs by exchanging the necessary requests with the WIM.

Fig. 2: Network Slice Instance Record.

B. SONATA-NFV extensions

All descriptors and records presented in the previous section are used and created during the deployment of a Slice. Fig. 3 shows the whole process, which starts when the OSS/BSS requests the instantiation of a Slice. Then, the Slice Mngr begins the process by creating an NSI based on the NST and other information, such as the VIM where each NS is instantiated. Once the NSI record is ready, the Slice Mngr requests the NS instantiations from the NFV-O, as this one manages all the VNF/CNF instances composing each NS. While the NSs are being instantiated, the Slice Mngr keeps checking each NS status until all of them are instantiated. The last step consists of requesting from the WIM the creation of the necessary data flow connections to properly link all the VIMs among them, and so have the Hybrid Slice ready to be used.

Fig. 3: Hybrid Slices Creation Requests Flow.
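The slice creation flow above (create the NSI, request the NS instantiations, poll their status, then ask the WIM for the inter-VIM links) can be sketched with a mock orchestrator. The real SONATA-NFV components interact over REST APIs; the class and function names here are illustrative only.

```python
# Minimal sketch of the slice creation flow, with a mock NFV-O standing in
# for the real orchestrator (names are illustrative, not SONATA-NFV APIs).
class MockNfvo:
    def __init__(self):
        self._polls = {}

    def instantiate_ns(self, nsd_ref: str, vim: str) -> str:
        self._polls[nsd_ref] = 2          # pretend 2 polls until ready
        return nsd_ref

    def ns_status(self, ns_id: str) -> str:
        if self._polls[ns_id] > 0:
            self._polls[ns_id] -= 1
            return "INSTANTIATING"
        return "READY"

def deploy_slice(nst, vim_per_ns, nfvo, wim_connect):
    # 1) create the NSI record from the NST
    nsi = {"nst-ref": nst["name"], "nsr-list": [], "status": "INSTANTIATING"}
    # 2) request every NS instantiation from the NFV-O
    for nsd in nst["nsd-list"]:
        nfvo.instantiate_ns(nsd, vim_per_ns[nsd])
    # 3) poll each NS status until all are instantiated
    for nsd in nst["nsd-list"]:
        while nfvo.ns_status(nsd) != "READY":
            pass
        nsi["nsr-list"].append({"nsd-ref": nsd, "status": "READY"})
    # 4) ask the WIM for the inter-VIM data-flow connections
    wim_connect(sorted(set(vim_per_ns.values())))
    nsi["status"] = "INSTANTIATED"
    return nsi

nst = {"name": "slice-event", "nsd-list": ["ma-ns", "mse-cache-ns"]}
nsi = deploy_slice(nst, {"ma-ns": "k8s", "mse-cache-ns": "k8s"},
                   MockNfvo(), wim_connect=lambda vims: None)
```

The key design point mirrored here is the separation of duties: the Slice Mngr only tracks NS status and delegates the actual VNF/CNF instantiation to the NFV-O and the connectivity to the WIM.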
IV. EXPERIMENTAL DEMONSTRATION

The use case presented in this paper shows an IMM experience in which a user receives multiple information flows in parallel. While watching a video-streaming event, users have the possibility to log in and access their social media accounts, like Facebook or Twitter, while the main video stream runs in parallel or in the background.

This IMM experience belongs to the pilot presented in [4], [19], and among the different use cases described there, this document presents the one using Hybrid Slices. With the objective of combining different technologies, such as VMs and containers, this pilot aims to overcome each technology's weaknesses and use the other's strengths: the lightness of containers together with the safeness of virtual machines.

The IMM application works using two NSTs, as Fig. 4 presents: the Slice CMS (green square), composed of a shared Content Management System NS (CMS-NS), and the Slice Event (red square), composed of the shared CMS-NS and three more NSs: Media Aggregator (MA-NS), Media Streaming Engine Transcode (MSE transcode-NS) and Media Streaming Engine Cache (MSE cache-NS).

In order to make the IMM scenario work, the process follows these steps: a) the Slice CMS (with a shared VNF-based CMS-NS) is deployed and waits for any user request to watch one of its available events; b) when a user wants to watch an event, a new Slice Event is requested and its NSs deployed. As the deployed CMS-NS in the Slice CMS is shared, the Slice Event is instantiated by only deploying the missing NSs (CNF-based), which makes it Hybrid. For each new user request, a new Slice Event is created. In order to interconnect each new Slice Event with the Slice CMS, it is necessary to pass instantiation parameters to the Slice Event, so that its CNF-based NSs point towards the shared VNF-based CMS-NS from the Slice CMS, letting data and control traffic flow among them.

A detailed description of each one of the network services (Fig. 4) follows:

1) Content Management System Network Service: Used in both NSTs (Slice CMS and Slice Event), this NS is composed of three VNFs; the CMS-VNF has the central role, while the other two VNFs contain the two services with the social network applications to be watched in parallel to the main video streaming. The main characteristics of each VNF are: a) CMS-VNF: this component is the entry point of the immersive streaming service. It allows new cameras to be registered for the different events, contains a relation of the contents available in the service in the different Slices (each one related to a sport event) and also manages the connection of the users to the social network VNFs. b) Twitter-VNF: it uses the official Twitter API to handle the user's login, retrieve tweets and publish the messages of the different users of the service. c) YouTube-VNF: similar to the Twitter-VNF, it acts using the YouTube API.

2) Media Aggregator Network Service: Part of the Slice Event in Fig. 4, it acts as a proxy receiving the Real-Time Messaging Protocol (RTMP) video flows from the cameras and redirects those videos to the right Media Streaming Engine inside the same Slice.

3) Media Streaming Engine-Transcode Network Service: This NS is also a component of the Slice Event, and it is in charge of processing the video stream (RTMP) received from the Media Aggregator; with ffmpeg, the video is transcoded into different variants to support different bitrates (Adaptive Bitrate Streaming support). At this point, the video is segmented into different video chunks (HTTP Live Streaming protocol) and saved into a persistent volume.

4) Media Streaming Engine-Cache Network Service: The last one composing the Slice Event, it contains a web server that serves the segments and the video playlists stored in the volume.

Fig. 4: Immersive-Media Hybrid Slices Architecture.

V. EVALUATION AND RESULTS

When a service owner offers its service to a final user, this second actor will have different requirements, such as low latency and a specific Quality of Service (QoS), among others. Most of them must be fulfilled at run-time, but there are other requirements that, even though they might not seem crucial in order to use the service, must be reached in order to accomplish the characteristics of an NFV/SDN architecture. The evaluation done here is focused on the service deployment time necessary to have the whole Slice and its NSs ready to be used once a user has requested it. The test compares the times between the two possible cases: a Hybrid Slice having NSs based on both CNFs and VNFs versus an equivalent VNF-based Slice.

Following the design presented in Fig. 4, the test procedure consists in comparing the instantiation and termination times between the cases, and it was organised in two steps. First, the Slice CMS was instantiated and used as the starting point for both cases; its deployment time was not taken into account in the final comparison. The second step was to instantiate the Slice Event. This second NST was the one to be compared, with the MA, MSE-Transcode and MSE-Cache NSs based either on VNFs or on CNFs.
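The per-case timing distributions reported below can be summarised with an empirical CDF over the measured deployment times. The sketch below shows the computation; the sample values are synthetic, not the paper's measurements.

```python
# Sketch of how an instantiation-time CDF can be computed from measured
# deployment times; the sample values are synthetic placeholders.
def empirical_cdf(samples):
    """Return a function F with F(x) = P(T <= x) over the samples."""
    ordered = sorted(samples)
    n = len(ordered)
    def cdf(x):
        return sum(1 for t in ordered if t <= x) / n
    return cdf

hybrid_times = [82.0, 85.5, 87.0, 88.5, 90.0, 91.5]   # seconds (synthetic)
cdf = empirical_cdf(hybrid_times)
p_under_100 = cdf(100)   # fraction of runs ready in under 100 s
```

Reading a probability off the CDF at a fixed deadline (e.g. 100 s) is exactly the comparison made between the hybrid and VNF-based cases in the figures.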
The second NST was the interesting one: the timer started counting when its instantiation request reached the Slice Mngr, and it finished the moment the whole Slice was instantiated and the status of the created NSI became "INSTANTIATED". The time for the termination procedure was obtained following the same idea: once the status of the NSI to be terminated became "TERMINATED", the timer stopped.

Figures 5 and 6 show the Cumulative Distribution Function (CDF) with the probability that an instantiation or termination process lasts less than X seconds. Comparing the two figures, it is possible to confirm that the instantiation of a Hybrid Slice has a 90% probability of being ready in less than 100 s, while a VNF-based Slice has an 80% probability of requiring more than 600 s to be fully ready.

Fig. 5: Hybrid Net. Slice Instantiation/Termination CDF.

Fig. 6: VNF-based Net. Slice Instantiation/Termination CDF.

By checking the mean values to instantiate or terminate each case, it is possible to validate how much faster the Hybrid Slice instantiation process is, with a mean value of 87.5 s (4.5 s standard deviation) with respect to the other case, which requires 707.1 s (258.556 s standard deviation), around 8.1 times higher. Similarly, in the termination procedure, the hybrid case, with 27.7 s (10.11 s standard deviation), is faster than the 60.1 s (11.99 s standard deviation) required in the VNF-based case, around 2.17 times.

VI. CONCLUSIONS

Having presented the previous results, and based on the idea that technology is continuously evolving, it is fair to conclude that the best evolution path for the NFV and SDN fields is to follow the idea of combining different technologies and using them within the right context in order to get the best performance possible, like the combination of VNFs (VMs/OpenStack) and CNFs (Kubernetes).

This paper has presented a novel architecture for Hybrid Network Slicing, which introduces a variety of network services that may contain VNFs or CNFs. It is in this sense that we have evaluated deployment times, and it can be concluded that Hybrid Slices are faster to instantiate, while still providing the requested QoS per Slice.

ACKNOWLEDGMENT

Work partially funded by the EC through the 5GPPP 5GTANGO (761493) project and the Spanish AURORAS (RTI2018-099178) project.

REFERENCES

[1] 3GPP, "TR 28.801: Study on management and orchestration of network slicing for next generation network," 2018.
[2] S. Schneider, M. Peuster, et al., ""Producing Cloud-Native": Smart Manufacturing Use Cases on Kubernetes," IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), Demo Track, 2019.
[3] ETSI, "Network Functions Virtualisation (NFV) Release 3; Evolution and Ecosystem; Report on Network Slicing Support with ETSI NFV Architecture Framework," 2017.
[4] G. Xilouris, L. Roullet, et al., "D7.1 Evaluation strategy and testbed setup report," EUC 5GTANGO Project, 2017.
[5] SONATA-NFV official site. https://www.sonata-nfv.eu/
[6] 5GTANGO official site. https://5gtango.eu/
[7] ONAP official site. https://www.onap.org/
[8] OSM official site. https://osm.etsi.org/
[9] HEAT module, OpenStack official site. https://wiki.openstack.org/wiki/Heat
[10] ETSI, "Network Functions Virtualisation (NFV) Release 3; Architecture; Report on the Enhancements of the NFV architecture towards "Cloud-native" and "PaaS"," 2019.
[11] 5GTANGO deliverables section, 5GTANGO official site. https://5gtango.eu/project-outcomes/deliverables.html
[12] Network Slice Template Schema, SONATA-NFV official GitHub repository. https://github.com/alemanyp/tng-schema/blob/master/slice-descriptor/nst-schema.yml
[13] Network Slice Instance Schema, SONATA-NFV official GitHub repository. https://github.com/alemanyp/tng-schema/blob/master/slice-record/nsir-schema.yml
[14] Virtual Network Function Descriptor, SONATA-NFV official GitHub repository. https://github.com/sonata-nfv/tng-schema/blob/master/function-descriptor/vnfd-schema.yml
[15] Virtual Network Function Record, SONATA-NFV official GitHub repository. https://github.com/sonata-nfv/tng-schema/blob/master/function-record/vnfr-schema.yml
[16] Network Service Descriptor, SONATA-NFV official GitHub repository. https://github.com/sonata-nfv/tng-schema/blob/master/service-descriptor/nsd-schema.yml
[17] Network Service Record, SONATA-NFV official GitHub repository. https://github.com/sonata-nfv/tng-schema/blob/master/service-record/nsr-schema.yml
[18] R. Vilalta, P. Alemany, et al., "Zero-Touch Network Slicing Through Multi-Domain Transport Networks," 20th International Conference on Transparent Optical Networks (ICTON), 2018.
[19] D. Behnke, M. Müller, et al., "D7.2 Implementation of pilots and first evaluation," EUC 5GTANGO Project, 2019.
