Abstract—Network virtualization and network slicing are key features of the fifth generation (5G) of mobile networks to overcome the challenge of increasingly diverging network requirements emerging from new use-cases like IoT, autonomous driving and Industry 4.0. In particular, low latency network slices require deployment of their services and applications, including Network Functions (NFs), close to the user, i.e., at the edge of the mobile network. Since the users of those services might be widely distributed and mobile, multiple instances of the same application are required to be available on numerous distributed edge clouds. This paper tackles the problem of Network Slice Embedding (NSE) with edge computing. Based on an Integer Linear Program (ILP) formulation of the NSE problem, the optimal set of network slices, in terms of network slice revenue and cost, is determined. The required network slice applications, functions and services are allocated nearly optimally on the 5G end-to-end mobile network infrastructure. The presented solution also provides the optimal number of application instances and their optimal deployment locations on the edge clouds, even for multiple User Equipment (UE) connectivity scenarios. Evaluation shows that the proposed holistic approach for NSE and multiple edge cloud allocation is feasible and efficient for small and medium sized problem instances.

Index Terms—5G, Network Slice, Virtual Network Embedding, End-to-End, Latency, Edge Computing, Resource Allocation, Low Latency, Integer Linear Programming

978-1-7281-4973-8/20/$31.00 © 2020 IEEE

I. INTRODUCTION

Various 5G use-cases are based on high performance, reliable mobile data connections with a low end-to-end latency. The 5G standardization committees ETSI NFV and 3GPP agree on network slicing as a key feature to overcome these challenges. Network slices are logical end-to-end networks sharing a common mobile network infrastructure. Network slicing is a means to comply with privacy requirements and to isolate services with different resource requirements on the mobile network. [1] [2]

In addition to that, edge computing is seen as a key feature of 5G to enable local communication, with a very low latency and high throughput, while at the same time improving data security and reducing transmission cost. It provides computation power and data storage close to the mobile users, within the 5G Radio Access Network (RAN), at the so-called edge of the mobile network. Edge clouds are typically deployed on 5G gNodeBs, that means macro base stations or multi-Radio Access Technology cell aggregation sites, instead of centralized core nodes. This leverages location based services, like the communication between co-located devices in car-to-x and smart factory applications as well as, for instance, augmented reality services. By using edge computing, the latency and the volume of the transferred data are reduced. Hence, the available bandwidth can be used more efficiently. Additionally, edge clouds can be used for computation offloading, enabling services that cannot be performed by most mobile devices, e.g., Artificial Intelligence (AI) based services. Furthermore, a better Quality of Experience (QoE) and a lower battery consumption of the User Equipments (UEs) can be provided by edge computing. However, edge clouds come at high Operational Expenditures (OPEX). Therefore, edge cloud resources are limited, while the number of applications using them is constantly increasing. [3]

A feasible deployment of one or several Network Slice Instances (NSIs) on a common physical network infrastructure is referred to as the Network Slice Embedding (NSE) problem in this work. The physical network infrastructure is also called the substrate in the following. The NSE problem is a special case of the Virtual Network Embedding (VNE) problem. The VNE problem is a well-researched NP-hard problem, but it does not consider all resources and capabilities required in an end-to-end mixed wired and wireless mobile NSE problem, in particular strong end-to-end latency, availability and reliability requirements as well as numerous resource demands, like throughput, on the RAN, transport and core network communication links as well as computation power and memory requirements on the servers. In addition to that, allocating edge cloud resources is not considered in conventional VNE solutions. Deploying a service close to its users on an edge
cloud often means that the same service has to be provided at different locations, i.e., for one application in an NSI several instances might have to be deployed in the substrate network. Optimizing the NSE such that the most beneficial NSIs and as many NSIs as possible can be deployed and meet their Service Level Agreements (SLAs), while concurrently targeting OPEX reduction, is, to the best of the authors' knowledge, currently an unsolved problem in the literature. In this paper, a formal, mathematical model of the NSE problem leveraging edge computing is provided. This model can be solved nearly optimally with an out-of-the-box Integer Linear Program (ILP) solver.

This paper is structured as follows: In section II an overview of related work and the prior art is given. Section III presents the formal model of the NSE problem. Subsequently, a simple example and an evaluation of the feasibility and runtime of the proposed approach are given in section IV. The paper is concluded with a summary and an outlook on future work in section V.

II. RELATED WORK

The NSE problem is a special case of the well-researched VNE problem. The VNE problem belongs to the class of NP-hard problems, first proved in [4], see also [5]. Nevertheless, there are various algorithms and heuristics for the VNE problem. The challenges of NSE in conjunction with edge computing in 5G and beyond are currently under intense discussion. One of the most recent publications is the paper of Sanguanpuak et al. [6]. The authors model the scenarios of network slice allocation to multiple Mobile Network Operators (MNOs). The allocation problem is solved using the generic Markov Chain Monte Carlo method; it targets minimal infrastructure cost and considers latency constraints. With the greedy fractional knapsack algorithm, the optimal mapping solution is calculated. The presented algorithm focuses on determining the best network slice to MNO resource mapping for co-located resources. In contrast, Xian et al. present a Mixed Integer Nonlinear Problem formulation of resource allocation with edge computing in [7]. The proposed optimization algorithm takes the link and server capacities as well as latency restrictions into account. Sequential Fixing (SF) is used to efficiently compute near-optimal solutions for the mobile edge resource allocation problem. In contrast to the publications [6] and [7], this paper provides an end-to-end NSE solution with distributed UEs using a common network slice service. The number of required instances of a specific application within a network slice is determined automatically, depending on the topology of the network, the end-to-end latency and throughput as well as the CPU and memory requirements of the respective application. The approaches mentioned above focus on choosing the optimal edge cloud for a predefined application instance associated with a specific UE group at a specific location. However, they do not provide an automated optimization of Multiple Application Instantiation (MAI) for geographically distributed UEs in consideration of latency requirements.

The authors of the paper in [8] propose a method of allocating Virtual Network Functions (VNFs) in slicing-enabled 5G networks using an edge computing infrastructure. For this purpose, the required number of instances of the VNFs is determined. In comparison to this work, the presented approach is not integrated with solving the NSE problem.

Further approaches focus on the allocation of computation tasks on distributed clouds during runtime. For example, Alicherry et al. [9] present an efficient approximation algorithm for allocating computation tasks on a distributed cloud with the objective of minimizing communication cost and latency. Hao et al. [10] provide online heuristics for the resource allocation problem for geographically diversified cloud servers. Both papers provide algorithms for solving the NSE problem, taking latency and throughput restrictions as well as the network topology into account. However, they do not tackle the challenges of edge computing in conjunction with the NSE problem. Akhatar et al. [11] present an ILP-based solution for virtual function placement and traffic steering in 5G networks under the assumption that every service instance is deployed exactly once. In contrast to that, the algorithm presented in this paper optimizes the number of application instances and their placement in the physical network. To the best of the authors' knowledge, none of the approaches in the literature is tailored to end-to-end NSE leveraging MAI and edge computing.

However, in the field of VNE and NSE without explicitly dealing with the challenges of edge computing, further publications can be found. For instance, Richart et al. [12] provide an overview of mobile NSE solutions, and the paper of Vassilaras et al. [13] summarizes the algorithmic challenges of efficient NSE. Fischer et al. [14] published a survey on VNE targeted towards wired communication networks. Also, Riggio et al. [15] and Tsompanidis et al. [16] deal with the challenges of VNE in wireless networks. Despotovic et al. [17] present a scalable near-optimal solution for the VNE problem. However, its applicability to NSE is limited, since the model does not consider latency and is not capable of many-to-one virtual to physical node mappings.

Further publications on specific runtime aspects of NSE can be found in the literature, e.g., Zhang et al. [18] focus on the runtime optimization of throughput resources in the context of NSE. Jiang et al. [19] present a UE admission control to avoid network overloading at runtime, and Wang et al. [20] present a resource price balancing method considering dynamic offer and demand. In contrast, this paper focuses on NSE for end-to-end mixed wired and wireless networks in the preparation phase.

III. PROBLEM FORMALIZATION

In this section a formal mathematical specification of the NSE problem with respect to MAI is presented.

A. Definitions

The NSE model uses the following graph theoretical definitions from [21]. An undirected graph G = (V, E) is
defined by a set of n ∈ ℕ vertices V = {v_1, v_2, . . . , v_n} and a set of m ∈ ℕ edges E = {e_1, e_2, . . . , e_m}. Every edge e_k for k = 1, . . . , m has two ends, v_i ∈ V and v_j ∈ V for i, j = 1, . . . , n, which can be denoted as e_k := {v_i, v_j} or as e_k := v_i v_j. Note that v_i v_j = v_j v_i in undirected graphs.

A path P = (V, E) in the network is defined as a subgraph of an undirected graph G, P ⊆ G. Paths consist of a set of successive edges. Their length is defined as the number of contained edges. The set of paths within an undirected graph G sharing the same end nodes v_i ∈ V and v_j ∈ V is defined as P_{v_i v_j}, which is equal to P_{v_j v_i}.

In 5G mobile networks a variety of services are deployed on a shared mobile network infrastructure. This includes the RAN, transport and core network as well as edge, aggregation and central cloud servers. Fig. 1 illustrates a model of the end-to-end mobile network infrastructure with edge computing.

Fig. 1. End-to-End Mobile Network Infrastructure Model

The following model is based on our previous work on NSE without MAI in [22]. Also, selected principles from [17] are reused. The mobile network infrastructure, also called the substrate, is defined as a network graph N = (U, C, E), i.e., as an undirected graph G = (V, E) with V := U ∪ C. The vertices of the network graph are of two different types, UE groups u_v ∈ U and cloud nodes c_w ∈ C (which include edge clouds as well as central and aggregation clouds). Due to the end-to-end mobile network topology, communication links e_j ∈ E in the network can be defined between a UE group and a cloud node or between two cloud nodes, E ⊆ {u_i c_j, c_v c_w} with v ≠ w. The same applies to communication paths P in N: P_r ∈ P can connect a UE group with a cloud node or two cloud nodes, but not two UE groups.

n ∈ ℕ NSI requests shall be embedded into the mobile network infrastructure. They are modeled as virtual network graphs N_k = (U_k, A_k, L_k) for k = 1, . . . , n with U_k ⊆ U, that means the UE groups are already embedded in the physical network by definition. To compare the importance of the NSIs relative to each other, for instance, the utility of the NSIs or their revenue for the infrastructure provider, weights ω_k are assigned to them. A_k is the set of application nodes of the k-th NSI, with a^k_m ∈ A_k defined as the m-th application node of N_k. To guarantee NSI isolation, NSIs never share application instances. Even if they require the same type of application, distinct instances of the same application are defined for the NSIs. The virtual communication links connecting UE groups and application nodes are defined as l^k_i ∈ L_k for the k-th NSI. Also, in an NSI no direct UE group to UE group connections are considered. A communication connection, like a phone call, between two UE groups is modeled as two connections, one for each UE group towards a common application. Without loss of generality, every l^k_i can be written as l^k_i = {u_v, a^k_m} if it is a UE group to application connection or as l^k_i = {a^k_q, a^k_m} if it is an application to application connection for distinct applications with q ≠ m.

Embedding an NSI into the mobile network infrastructure means to map each application node a^k_m on at least one cloud node c_w such that each virtual link l^k_i can be mapped on a suitable path in the substrate and all resource requirements and network capabilities are fulfilled. The resources and capabilities considered in this work are restricted to the communication link throughput and latency as well as the node computation power and memory. However, this list can easily be extended by additional resources and parameters. For the mobile substrate network, the relevant resources and parameters on the nodes are the computation power D^s_w and the memory capacity M^s_w of the cloud server nodes c_w ∈ C. The communication links are characterized by their available throughput T^s_j and their maximum latency L^s_j. The available throughput is defined as the maximum possible throughput on the wired or wireless communication link e_j ∈ E of the network infrastructure. To keep the model simple, uplink and downlink are not distinguished. For RAN connections, an expected Channel Quality Index (CQI) has to be defined in advance in order to determine the maximum throughput based on the frequency bandwidth. For simplicity, the maximum latency is modeled as an upper bound for the data transmission time if the link is on full load, but not overloaded.

In contrast, the NSIs define the throughput requirements T^k_i and the maximum allowed latency L^k_i for each link l^k_i ∈ L_k. Virtual nodes require specified computation and memory capacities, denoted as D^k_m and M^k_m for the k-th NSI. Virtual links and nodes can only be mapped on network paths and substrate nodes which fulfill their requirements.
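The substrate definitions above, in particular the path sets P_{v_i v_j} and the per-link latency parameters L^s_j, can be sketched in plain Python. The topology and all numbers below are invented for illustration; only the semantics (cycle-free paths between two end nodes, path latency as the sum of the traversed links' latencies) follow the definitions:

```python
# Illustrative substrate: node names and per-link parameters (available
# throughput T^s_j, maximum latency L^s_j) are invented for this sketch.
LINKS = {
    ("ue1", "edge1"): {"T_s": 100.0, "L_s": 2.0},    # RAN link
    ("edge1", "agg1"): {"T_s": 1000.0, "L_s": 3.0},
    ("agg1", "core1"): {"T_s": 10000.0, "L_s": 5.0},
    ("edge1", "core1"): {"T_s": 1000.0, "L_s": 9.0},
}

def neighbors(v):
    """Yield all nodes adjacent to v in the undirected substrate graph."""
    for (a, b) in LINKS:
        if a == v:
            yield b
        elif b == v:
            yield a

def link(u, v):
    """Look up an undirected link (u v = v u)."""
    return LINKS.get((u, v)) or LINKS[(v, u)]

def simple_paths(src, dst, path=None):
    """Enumerate P_{src,dst}: all cycle-free paths between the end nodes."""
    path = [src] if path is None else path
    if src == dst:
        yield list(path)
        return
    for nxt in neighbors(src):
        if nxt not in path:
            yield from simple_paths(nxt, dst, path + [nxt])

def path_latency(path):
    """Sum of the link latencies L^s_j along a path (upper latency bound)."""
    return sum(link(u, v)["L_s"] for u, v in zip(path, path[1:]))

for p in simple_paths("ue1", "core1"):
    print(p, path_latency(p))
```

For the toy topology this enumerates the two paths from the UE group to the central cloud (via the aggregation site, or directly from the edge node) together with their accumulated latencies, i.e., exactly the per-path quantities that constraint (2) later bounds.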
B. NSE Model

We define the following embedding variables:

y_k := { 1 if N_k is embedded into N; 0 otherwise }

a2c^k_{mw} := { 1 if a^k_m is mapped on c_w; 0 otherwise }

l2p^k_{ir} := { 1 if l^k_i is mapped on P_r; 0 otherwise }

For better readability, an additional path to edge mapping, which is constant for the mobile network substrate, is defined:

p2e_{rj} := { 1 if e_j is used in P_r; 0 otherwise }

Together with the l2p mapping variables, the l2e mapping can be derived without creating any additional variables:

l2e^k_{ij} := Σ_r (l2p^k_{ir} · p2e_{rj})

To maximize the revenue minus the cost of embedding as many beneficial NSIs as possible, the following revenue and cost objective function is used in this paper:

max  ρ_1 · (Σ_k ω_k · y_k) / (Σ_k ω_k)
   − ρ_2 · (Σ_{k,m,w} a2c^k_{mw} · D^k_m) / (Σ_w D^s_w)
   − ρ_3 · (Σ_{k,m,w} a2c^k_{mw} · M^k_m) / (Σ_w M^s_w)
   − ρ_4 · (Σ_{k,i,j} l2e^k_{ij} · T^k_i) / (Σ_j T^s_j)        (1)

It maximizes the weighted, normalized revenue of all embedded NSIs, represented by the NSI weights ω_k, minus the weighted cost, represented by the overall utilization percentage of the CPU, memory and throughput capacities. The weights ρ_1, ρ_2, ρ_3 and ρ_4 can be used if the absolute revenue and costs are unknown and have to be estimated.

The objective function is maximized under the following capability, resource, mapping and graph constraints.

Capability constraints:
The sum of the latencies of all links along the mapped paths must not exceed the allowed virtual link latency:

Σ_j l2e^k_{ij} · L^s_j ≤ l2p^k_{ir} · L^k_i,   ∀k, i, P_r ∈ P        (2)

Resource constraints:
The sum of the required throughputs T^k_i of all links l^k_i mapped to an edge e_j must not exceed the available throughput T^s_j on this edge:

Σ_{k,i} l2e^k_{ij} · T^k_i ≤ T^s_j,   ∀j        (3)

Also, the sum of the required CPU resources D^k_m of all applications a^k_m allocated on a cloud node c_w must not exceed the available CPU D^s_w on c_w:

Σ_{k,m} a2c^k_{mw} · D^k_m ≤ D^s_w,   ∀w        (4)

Analogously, the sum of the required memory resources M^k_m of all applications a^k_m allocated on a cloud node c_w must not exceed the available memory M^s_w on c_w:

Σ_{k,m} a2c^k_{mw} · M^k_m ≤ M^s_w,   ∀w        (5)

Graph constraints:
Every application has to be instantiated at least once if the NSI is embedded into the substrate network:

Σ_w a2c^k_{mw} ≥ y_k,   ∀k, m        (6)

All communication links connecting two applications must be mapped suitably:

Σ_{P_r ∈ P_{c_v c_w} = P_{c_w c_v}} l2p^k_{ir} ≥ a2c^k_{mw},   ∀k, i, w if l^k_i = {a^k_m, a^k_b} or l^k_i = {a^k_b, a^k_m}        (7)

If an NSI application a^k_m is mapped on a cloud node c_w in the substrate network, then every link connected to a^k_m in the NSI specification must be mapped on at least one path P_r with P_r ∈ P_{c_v c_w} = P_{c_w c_v}, that means a path which has c_w as one of its end-nodes.

The same is achieved for the UE group to application links with:

Σ_{P_r ∈ P_{u_v c_w} = P_{c_w u_v}} l2p^k_{ir} ≥ y_k,   ∀k, i if l^k_i = {u_v, a^k_b}        (8)

Also, all nodes adjacent to a link must be mapped suitably:

a2c^k_{mv} + a2c^k_{mw} ≥ l2p^k_{ir},   ∀k, i, r with l^k_i = {a^k_m, a^k_q} and P_r ∈ P_{c_v c_w} = P_{c_w c_v}        (9)

a2c^k_{qv} + a2c^k_{qw} ≥ l2p^k_{ir},   ∀k, i, r with l^k_i = {a^k_m, a^k_q} and P_r ∈ P_{c_v c_w} = P_{c_w c_v}        (10)

The inequalities (9) and (10) concern the link mapping: if l^k_i = {a^k_m, a^k_q} and l^k_i is mapped on P_r ∈ P_{c_v c_w} = P_{c_w c_v} with the cloud nodes c_v and c_w as its end-nodes, then a^k_m must be mapped on c_v or c_w, and a^k_q must likewise be mapped on c_v or c_w. Note that inequalities (9) and (10) do not exclude that both ends a^k_m and a^k_q of the virtual link are mapped on the same physical end-node of P_r, that means, both could be mapped on c_w or both on c_v. However, this is prevented by inequality (7).

a2c^k_{mw} ≥ l2p^k_{ir},   ∀k, i, r with l^k_i = {u_v, a^k_m} and P_r ∈ P_{u_v c_w} = P_{c_w u_v}        (11)

If l^k_i = {u_v, a^k_m} and l^k_i is mapped on P_r ∈ P_{u_v c_w} = P_{c_w u_v} with the end nodes u_v and c_w, then a^k_m must be mapped on c_w.
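To make the interplay of objective (1) with constraints (4) and (6) concrete, the following is a minimal brute-force sketch on an invented two-node instance with a single NSI and a single application. It is an illustration of the model's logic only, not a scalable solver: all numbers are made up, the objective is restricted to the revenue and CPU-cost terms, and in practice an ILP solver would replace the exhaustive enumeration.

```python
from itertools import product

# Toy instance (all values invented for illustration): one NSI with one
# application a^1_1, two candidate cloud nodes c_1 and c_2.
omega = 1.0          # NSI weight omega_k (revenue)
D_k = 3.0            # CPU demand D^k_m of the application
D_s = [4.0, 2.0]     # CPU capacities D^s_w of the cloud nodes
rho1, rho2 = 1.0, 0.1

def feasible(y, a2c):
    """Check constraint (6) and the per-node CPU constraint (4)."""
    if sum(a2c) < y:                       # (6): at least one instance if embedded
        return False
    return all(a2c[w] * D_k <= D_s[w]      # (4): CPU capacity of each node
               for w in range(len(D_s)))

def objective(y, a2c):
    """Objective (1), reduced to the rho_1 revenue and rho_2 CPU-cost terms."""
    revenue = rho1 * (omega * y) / omega
    cpu_cost = rho2 * sum(a2c[w] * D_k for w in range(len(D_s))) / sum(D_s)
    return revenue - cpu_cost

# Enumerate all binary assignments of y_k and a2c^k_mw and keep the best
# feasible one (an ILP solver does this implicitly, without enumeration).
best = max(
    ((y, a2c) for y in (0, 1) for a2c in product((0, 1), repeat=2)
     if feasible(y, a2c)),
    key=lambda s: objective(*s),
)
print(best, objective(*best))
```

On this instance the optimum embeds the NSI with exactly one application instance, placed on the larger node c_1: the second node's capacity (2.0) cannot host the demand (3.0), and a redundant second instance would only add CPU cost without revenue.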