

A Formal Framework of Resource Management for
VNFaaS in Cloud
A H M Jakaria and Mohammad Ashiqur Rahman
Department of Computer Science, Tennessee Tech University, Cookeville, USA
Emails: ajakaria42@students.tntech.edu, marahman@tntech.edu

Abstract—Modern computer networks heavily depend on expensive and proprietary hardware deployed at fixed locations. Network functions virtualization (NFV), one of the fastest emerging topics in networking, reduces the limitations of this vendor-specific hardware by introducing flexibility into the network architecture and elasticity into the deployment of innovative network functions. Service providers are offering virtual network functions as a service (VNFaaS), where consumers can use softwarized network applications running on a cloud infrastructure. NFV allows a flexible and dynamic implementation of virtual network functions in virtual machines deployed on commercial off-the-shelf (COTS) servers in various locations, as well as in the core cloud infrastructure. However, allocating resources to these virtual machines is a large combinatorial problem and requires a timely solution with respect to various requirements. In this work, we propose VNFSynth, an automated synthesis framework, to solve this problem. VNFSynth models the resource specifications, incoming packet processing requirements, bandwidth constraints, etc., with respect to the physical network, existing resources, and VNF properties, and determines the VM network architecture. It uses satisfiability modulo theories (SMT) to model this synthesis problem. The evaluation results demonstrate the scalability and usability of the solution.

Index Terms—VNFaaS; NFV architecture; formal modeling; topology synthesis

Fig. 1. A typical network for VNFaaS in cloud located in various locations (adapted from [2]). (The figure shows corporate customer branches hosting virtualized customer premises equipment (vCPE) that connect through NFV infrastructure points of presence (NFVI-PoPs) and an IP backbone to a central DB and server.)

I. INTRODUCTION

Computer networks today are composed of many proprietary hardware appliances of a per-feature nature. Upgrading or adding new network functions typically forces the integration of more of these hardware devices, which takes time and imposes high costs. Such appliances cannot satisfy the automation, scalability, and robustness requirements of today's network operations. Traditional approaches to use cases such as threat detection are limited by the restricted computation capacity and inflexibility of the network functions involved, which reside in dedicated hardware such as firewalls and routers.

NFV is a technology where network functions are implemented and deployed as virtual machines (VMs), in the form of software that runs on commodity hardware environments providing cloud computing capabilities. It is a form of cloud that offers an alternative way to design, deploy, and manage networking services. Because the VMs run on general-purpose hardware systems, NFV not only provides the benefit of elasticity but also reduces cost by running on low-cost commodity platforms, such as x86- or ARM-based servers, instead of specialized and dedicated hardware. The use of NFV opens a new opportunity for enterprises, as well as small businesses, to find low-cost solutions to new requirements of complex modern computing. NFV offers a less complex network architecture, reduced power usage, lower OpEx, lower CapEx, and a low time-to-market for launching new functionalities [1]. It allows new security applications to be tested easily and improves the flexibility of assigning virtual network functions (VNFs) to hardware. The concept of VNF hosting justified by CapEx reductions is the NFV equivalent of infrastructure as a service (IaaS) in the cloud. Virtual network functions as a service (VNFaaS), on the other hand, is the equivalent of software as a service (SaaS), in the sense that consumers use software applications running in a cloud infrastructure. NFV consolidates many network equipment types onto industry-standard servers, switches, and storage [2]. These can be located in a variety of NFV infrastructure points of presence (NFVI-PoPs), including cloud data centers, network nodes, and end-user premises. Fig. 1 shows an example where a corporate customer has NFVI-PoPs in various locations hosting VNFs as virtualized customer premises equipment (vCPE).

Despite being high-volume servers, commodity servers have a fixed amount of resources, and utilizing the available resources efficiently is a challenge. The physical properties of the servers, such as memory and CPU, determine the capabilities of the VNFs running on the VMs within these servers [3]. Given the network of servers, their capabilities, and the required properties of the VNFs, determining the number and locations of the VMs to be deployed is a bin packing problem. Some works in the literature try to solve it using heuristic algorithms, while few works provide a formal model of such a network architecture [4]. To the best of our knowledge, most existing solutions fail to solve this problem in a timely, responsive manner.
In this work, we present a novel tool, VNFSynth, which solves this problem using formal verification. VNFSynth is an automated framework for synthesizing virtual network configurations and placements of VMs using constraint satisfaction checking. It takes a network topology, VNF properties, and physical resources as inputs and formulates the virtual architecture design synthesis problem. The problem is solved by encoding the model into SMT. The major contributions of the paper are:

1) A formal model of the resources and network topology that implements NFV.
2) A quickly responsive solution to the problem of allocating resources to the deployed VMs.
3) Implementation and a thorough evaluation of the automatic synthesis tool.

The rest of this paper is organized as follows: Section II presents the background, while Section III discusses the proposed framework. We describe the formal model in Section IV. The implementation and a case study are discussed in Section V, while the evaluation results are presented in Section VI. Finally, we conclude the paper in Section VII.

II. BACKGROUND AND RESEARCH OBJECTIVE

This section briefly overviews the relationship between NFV technology and the cloud, related work, and our objective of using formal verification to solve the resource optimization problems associated with NFV.

A. Network Functions Virtualization (NFV) Cloud

As discussed in the work of the ETSI NFV industry specification group [5], NFV is composed of three key elements: the network functions virtualization infrastructure (NFVI), virtual network functions (VNFs), and NFV management and orchestration (NFV MANO). NFVI comprises the COTS hardware and the virtualization of the computing, storage, and network resources. The abstraction is achieved through a hypervisor-based virtualization layer, which decouples the virtual resources from the underlying physical resources. A VNF is a virtualized functional block within a network infrastructure that has well-defined external interfaces and functional behavior. Virtualized residential gateways, virtualized firewalls, and virtualized load balancers are good examples of VNFs; these can be realized through VMs. NFV MANO performs the orchestration and lifecycle management of NFVI resources and VNFs. It is in charge of the configuration of the VNFs and of the infrastructure that implements these functions. It covers three functional blocks: the NFV orchestrator, VNF managers, and the virtualized infrastructure manager. NFV MANO interacts with the operations and business support systems (OSS/BSS) landscape, which allows NFV to be integrated into an existing network-wide management environment.

NFV is essentially a form of the cloud. An NFV cloud is a data center and network built to host, deploy, and service VNFs using a cloud network [6]; this has also gained popularity as 'CloudNFV' [7]. The main idea of NFV is to replace dedicated network hardware appliances, such as routers, firewalls, and wide area network (WAN) services, with software that runs on commercial off-the-shelf (COTS) servers. Operators or service providers can install NFV servers in the data center and then extend their VNFs and services to the customer using software. By utilizing the features of an NFV cloud, service providers can roll out new services and VNFs in software rather than in specialized hardware networks, in a more agile and flexible way. Customers access the VNFs, which are essentially software applications, via cloud software and web provisioning.

B. Related Work

Several works in the literature discuss the NFV architecture within a cloud environment. Vilalta et al. present a detailed overview of the SDN/NFV services that are offered on top of a cloud computing platform [8]. They propose a generic architecture for SDN/NFV services deployed over multi-domain transport networks and distributed data centers. Battula evaluates architectural framework approaches for a scalable compute node with NFV and SDN to address various data center challenges in the form of network security function virtualization (NSFV) over an OpenFlow infrastructure [9]. In [10], Raho et al. analyze the performance of ARM-based containers and hypervisors in NFV and cloud computing. Some practical challenges of maximizing energy efficiency for virtual content delivery network (vCDN) workloads in the context of an NFV and cloud architectural framework have been examined by Krishnaswamy et al. in [11].

When it comes to resource management in the NFV cloud, Fayaz et al. [4] proposed a flexible and elastic DDoS defense system, Bohatei, that shows the benefits of software-defined networking (SDN) [12] and NFV in the context of DDoS defense. It uses NFV capabilities to elastically alter the required scale (e.g., 10 Gbps vs. 100 Gbps attacks) and type (e.g., SYN proxy vs. DNS reflector defense) of DDoS defense realized by defense functions running on VMs. The work focuses on an ISP-centric deployment model, where an ISP offers DDoS-defense-as-a-service to its customers by deploying multiple data centers, each of which has commodity hardware servers to run standard VNFs. The authors formulated the resource management problem as a constrained optimization via an integer linear program (ILP). However, the ILP approach takes several hours to produce a solution, which is long enough for an adversary to overwhelm the system. As a result, they use a hierarchical decomposition of the resource optimization problem into two stages with the help of two greedy algorithms. Younge et al. devised a power-aware VM scheduling algorithm that yields energy-efficient resource management for cloud computing environments [13]. Beloglazov et al. propose efficient heuristics for dynamic adaptation of VM allocation at runtime by applying live migration according to the current utilization of resources in virtualized cloud data centers; they focus on minimizing energy consumption [14].

VNGuard [15] is a framework for effective provisioning and management of virtual firewalls to keep virtual networks (VNs) safe.
Leveraging the features of NFV and SDN, it defines a high-level firewall policy language, finds optimal virtual firewall placements, and adapts virtual firewalls to VN changes. The framework proposes an ILP-based approach to find an optimal virtual firewall placement that fulfills resource and performance constraints. However, the solution addresses only virtual firewalls and cannot cope with more sophisticated types of VNFs.

C. Research Challenge and Our Objective

The NFV MANO, which governs the management and orchestration of all resources in the cloud data center, serves as a standard for the NFV architecture. However, analysts and critics have pointed out that additional orchestration and management functions are needed to determine how the clouds are built [6]. Extra orchestration, management, and operations software is needed to set up VNF services, monitor them, and repair problems. Lifecycle service orchestration (LSO) is a new layer of software that can carry out these important management and orchestration tasks and integrate with legacy operations support systems (OSSes). Software such as OpenStack provides some ability to manage VMs and their network connectivity, but it cannot aggregate statistics, monitor load balance, or automatically adjust the number of instances of a particular process up or down [16], [17].

A service provider needs an efficient way of assigning its available compute and network resources to the virtual system. Specifically, it needs to decide how many VMs of each type to run on each available server so that incoming traffic is handled properly. The challenge is how to accomplish this efficiently: the task amounts to a large combinatorial NP-hard problem, and many state-of-the-art solvers may take hours to solve problems of this kind. Our objective is to introduce a tool that can solve it within a sustainable period of time, providing results that allow the network administrator to reconfigure the system without much delay.

III. VNFSYNTH FRAMEWORK

Fig. 2. The framework architecture of VNFSynth. (The figure shows the inputs, i.e., the network topology, the VM properties and requirements, the incoming packet rate, and the memory and CPU specifications, feeding the VM type and placement model and the resource model; their conjunction forms the constraint model, which the SMT solver uses to synthesize the NFV architecture.)

Fig. 3. An example of an NFV network topology that deploys diverse types of VNFs. (The figure shows a VNF layer of type 1, type 2, and type 3 VMs handling connectivity to and from the Internet, placed on a physical layer of servers hosting a web server and a DB.)

VNFSynth follows a top-down architecture design automation approach instead of the traditional bottom-up approach. The major features of VNFSynth are as follows:

• It formally models the network topology, the required capability (i.e., processing of incoming traffic) of the VMs, and the constraints on VMs and resources.
• It formalizes the NFV design synthesis problem as the determination of the deployment decisions for the VMs and of their placements and types, such that the given requirements/constraints are satisfied.
• It encodes the synthesis problem into SMT logics and provides a feasible solution using an SMT solver.

The VNFSynth framework is shown in Fig. 2. It takes the following as inputs: (i) the packet processing requirements, (ii) the network topology, including the bandwidths of the links, which together constitute the VM type and placement model, and (iii) the server resource configuration constraints, which make up the resource model. The tool takes its input from a user-provided input file.

Many system components are exercised by a packet processing system, and it is important to identify their individual contributions to the performance of the overall system [18], [19]. The memory and CPU cores of the COTS servers are the main resources that we account for. A pipeline-based system that processes only a fixed number of bytes (basically the header) of each packet and forwards this data to the following stage in each clock cycle achieves a packet throughput corresponding to the clock rate [20] of the CPU it is assigned. Fig. 3 shows an example of different types of VNFs deployed on servers located in the cloud. The VNFs are responsible for processing the incoming traffic according to the requirements and forwarding it to the next level. In this example, 'type 1' is a simple load balancer, 'type 2' is in charge of scrutinizing the legitimacy of each packet, and 'type 3' is a simple virtual switch. The VNFs can be launched dynamically according to the intensity of the incoming packet rate.

There are, obviously, other resources that are limited in supply when it comes to processing incoming traffic from clients or the Internet; I/O bus bandwidth is a good example. However, it is straightforward to extend the framework to take such additional inputs into account as well.

VNFSynth formalizes the NFV architecture design synthesis problem as the conjunction of all of the resource, connectivity, and packet processing requirement constraints. It solves the overall 'constraint model' to determine the placement and types of the VMs. In essence, it is a part of the virtualized infrastructure manager.
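To make the inputs of Fig. 2 concrete, the following is a minimal Python sketch of how they might be represented before being turned into constraints. The paper does not publish VNFSynth's code, so every class and field name here is an illustrative assumption, not the tool's actual data model.

```python
# Illustrative sketch only: the paper does not publish VNFSynth's code, so all
# class and field names here are assumptions chosen to mirror the inputs of Fig. 2.
from dataclasses import dataclass
from typing import Dict, List, Set

@dataclass
class Server:             # one COTS server (resource model input)
    sid: int
    memory_gb: int        # memory in GB
    cpu_cores: int        # CPU in number of cores
    location: int         # NFVI-PoP / geographical location ID

@dataclass
class Link:               # one physical link of the topology input
    src: int
    dst: int
    bandwidth_gbps: int

@dataclass
class VnfType:            # per-type VM properties and requirements
    type_id: int
    min_memory_gb: int
    min_cpu_cores: int
    alpha: float          # weight of memory vs. CPU in the packet-rate function

@dataclass
class SynthesisInput:     # everything the constraint model is built from
    servers: List[Server]
    routers: List[int]
    links: List[Link]
    vnf_types: List[VnfType]
    incoming_rate_gbps: int
    allowed_locations: Dict[int, Set[int]]  # type_id -> permitted location IDs
```

A structure like this would be populated by parsing the plain-text input file whose format is shown later in Table II.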
IV. VIRTUAL TOPOLOGY SYNTHESIS MODEL

This section discusses the formal modeling of the requirements and the constraints of the NFV architecture. Table I lists the variables used in the model. Note that no multiplication of two parameters is performed in the paper without an explicit multiplication sign.

TABLE I
NOTATION TABLE

IsVmDeployed i,j : Is VM j of commodity server i deployed?
VmType i,j : Type of VM j of commodity server i.
VmMem i,j : Memory of VM j of commodity server i.
VmCpu i,j : CPU of VM j of commodity server i.
VmPR i,j : Packet processing rate of VM j of server i.
ServMem i : Memory of commodity server i.
ServCpu i : CPU of commodity server i.
Reachable i,j,k,l : Is VM j on server i reachable from VM l on server k?
VBw i,j,k,l : Required virtual bandwidth between VM j on server i and VM l on server k.
PBw i,k,z : Required physical bandwidth of the z-th link on the path from server i to server k.

A. Preliminary

The incoming traffic to a production server is processed by several VNFs running on some commodity servers. There are various types of VNFs, which have different processing tasks and different resource requirements. From the ingress point (router), the traffic is sent to these virtual components.

The challenge is how to deploy the VMs on the network of commodity servers so that they can handle the large amount of incoming traffic, given limited resources. The servers have limited memory and CPU, based on which they can deploy a limited number of VMs. The packet processing rates of these VMs also depend on their virtual memory and CPU. In addition, the communication between any two VMs depends on the physical bandwidth between their host machines. The deployed VMs should be able to process the incoming traffic, which may be of a large volume in times of cyberattacks.

We model the topology synthesis as a constraint satisfaction problem. First, we formalize the requirements and constraints. Then, we solve the problem using an SMT solver. The solver tells us how many VMs should be deployed, as well as the locations of all the types of VMs. That is, the solution contains the information of which VM should be deployed on which server. We measure memory in GB and CPU in number of cores. We assume that each VNF is contained in a single VM.

VNFSynth models the network topology as a graph. The network model is defined as ⟨N, L⟩, where:

• N defines a finite set of network nodes, including the servers and some routers. Thus, N is the union of two sets, S and R, where S denotes a finite set of servers and R denotes a finite set of routers. Each node is identified by an ID (e.g., an IP address). A server contains zero or more VMs.
• L ⊆ N × N is a finite set of links, which define the interconnections between the network servers and routers. We define a path between two servers by the tuple ⟨x, y⟩, where each path can have one or more links. The z-th link in the path between x and y is expressed by ⟨x, y, z⟩. The virtual link between a VM j on server i and a VM l on server k is denoted by the quadruple ⟨i, j, k, l⟩.
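The ⟨N, L⟩ model above maps naturally onto a small graph structure. The sketch below illustrates one possible Python representation of the node sets, links, and per-path link indexing; the class and attribute names are assumptions for illustration, not taken from the VNFSynth implementation.

```python
# Illustrative sketch of the <N, L> network model; identifiers are assumptions,
# not taken from the VNFSynth implementation.
from typing import Dict, List, Set, Tuple

class NetworkModel:
    def __init__(self, servers: Set[int], routers: Set[int],
                 links: Set[Tuple[int, int]]):
        self.S = servers                  # finite set of server IDs
        self.R = routers                  # finite set of router IDs
        self.N = servers | routers        # N = S union R
        self.L = links                    # L is a subset of N x N
        # paths[(x, y)] is the ordered list of links on the path from x to y,
        # so paths[(x, y)][z] plays the role of the tuple <x, y, z> in the model.
        self.paths: Dict[Tuple[int, int], List[Tuple[int, int]]] = {}

    def add_path(self, x: int, y: int, hops: List[Tuple[int, int]]) -> None:
        # Every hop must be a physical link of the topology (in either direction).
        assert all(h in self.L or (h[1], h[0]) in self.L for h in hops)
        self.paths[(x, y)] = hops
```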
B. Constraints on Resources

The overall idea of NFV is to implement network functions on commodity servers. Besides being a lot less expensive than dedicated vendor-specific hardware, these servers often have a limited amount of resources in terms of memory, CPU, etc. The VMs installed on these servers share the limited memory and CPU. Our model assumes that no VM is deployed across more than one server.

Let VmMem i,j be the memory of VM j running on server i, and VmCpu i,j be the CPU of that VM. The sum of the memories of all the deployed VMs on a server should not exceed the physical memory of that server that is available for the VMs. The same goes for CPU: the total CPU of the VMs running on a server should be less than or equal to the available CPU of the server itself.

∀i ∈ S : Σ_j VmMem i,j ≤ ServMem i    (1)

∀i ∈ S : Σ_j VmCpu i,j ≤ ServCpu i    (2)

VNF Resources: There can be a variety of VNFs deployed in an NFV cloud according to customer requirements. For example, an enterprise customer may need a traffic load balancer to handle the traffic in times of a DDoS attack. They may also require a type that works as a virtual filter or a firewall that can drop any unwanted traffic headed towards a web server. Whatever the type of a VNF is, the VM that contains it must be allocated enough memory and CPU so that it has the best possible packet processing capability. The packet processing rate of a VM depends on the memory and CPU of the VM. If VmPR i,j refers to the packet processing rate of a VM, we can express it as a function of memory and CPU, where VmMem i,j and VmCpu i,j are the memory and CPU of the VM, respectively:

VmPR i,j = f(VmMem i,j, VmCpu i,j)

For simplicity, we express this equation as stated below, where T is the set of all the types and αt is a constant that determines the ratio (e.g., 20%) of the impact of memory and CPU on the packet processing rate for a particular type.

∀t ∈ T : (VmType i,j = t) → VmPR i,j = αt × VmMem i,j + (1 − αt) × VmCpu i,j    (3)

If a VNF is deployed on a VM, the VM needs to have memory and CPU greater than a minimum value. MIN_VM_MEMORY and MIN_VM_CPU refer to these minimum values, respectively. This ensures that the packet processing rate depends both on memory and on CPU.

IsVmDeployed i,j → VmMem i,j ≥ MIN_VM_MEMORY    (4)

IsVmDeployed i,j → VmCpu i,j ≥ MIN_VM_CPU    (5)

The combined packet processing rate of all the VMs of each type should be no less than the incoming packet rate at the ingress point. If VmPR i,j is the processing rate of the j-th VM of type t located on server i, then the following holds:

∀t ∈ T : (VmType i,j = t) → Σ_{i,j} VmPR i,j ≥ IncomingPacketRate    (6)
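The paper encodes these constraints with Z3 [22] but does not list the encoding itself. The sketch below is therefore only a hedged illustration of how constraints (1)-(6) might look in Z3's Python API, with an assumed variable layout, assumed names, and small example values.

```python
# Hedged sketch of encoding constraints (1)-(6) with Z3's Python API; the paper
# uses Z3 but does not list code, so the variable layout and names are assumptions.
from z3 import Int, Bool, Real, If, Implies, And, Sum, Solver

NUM_SERVERS, MAX_VMS, NUM_TYPES = 3, 2, 2
serv_mem = [32, 32, 24]          # GB per server (small example values)
serv_cpu = [5, 5, 5]             # cores per server
alpha = [0.2, 0.5]               # alpha_t per VNF type
MIN_VM_MEMORY, MIN_VM_CPU = 2, 1
INCOMING_RATE = 10               # example incoming packet rate

s = Solver()
deployed = [[Bool(f"dep_{i}_{j}") for j in range(MAX_VMS)] for i in range(NUM_SERVERS)]
vm_type  = [[Int(f"type_{i}_{j}") for j in range(MAX_VMS)] for i in range(NUM_SERVERS)]
vm_mem   = [[Int(f"mem_{i}_{j}")  for j in range(MAX_VMS)] for i in range(NUM_SERVERS)]
vm_cpu   = [[Int(f"cpu_{i}_{j}")  for j in range(MAX_VMS)] for i in range(NUM_SERVERS)]
vm_pr    = [[Real(f"pr_{i}_{j}")  for j in range(MAX_VMS)] for i in range(NUM_SERVERS)]

for i in range(NUM_SERVERS):
    # (1), (2): deployed VMs must fit within the server's memory and CPU.
    s.add(Sum([If(deployed[i][j], vm_mem[i][j], 0) for j in range(MAX_VMS)]) <= serv_mem[i])
    s.add(Sum([If(deployed[i][j], vm_cpu[i][j], 0) for j in range(MAX_VMS)]) <= serv_cpu[i])
    for j in range(MAX_VMS):
        # (4), (5): minimum memory and CPU for any deployed VM.
        s.add(Implies(deployed[i][j], vm_mem[i][j] >= MIN_VM_MEMORY))
        s.add(Implies(deployed[i][j], vm_cpu[i][j] >= MIN_VM_CPU))
        for t in range(NUM_TYPES):
            # (3): packet processing rate as a weighted sum of memory and CPU.
            s.add(Implies(And(deployed[i][j], vm_type[i][j] == t),
                          vm_pr[i][j] == alpha[t] * vm_mem[i][j] + (1 - alpha[t]) * vm_cpu[i][j]))

# (6): per type, the combined processing rate must cover the incoming rate.
for t in range(NUM_TYPES):
    s.add(Sum([If(And(deployed[i][j], vm_type[i][j] == t), vm_pr[i][j], 0)
               for i in range(NUM_SERVERS) for j in range(MAX_VMS)]) >= INCOMING_RATE)
```

The use of If(...) inside the sums is one way of restricting the summation to deployed VMs (and, for constraint 6, to VMs of the given type); the actual VNFSynth encoding may differ.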
C. Constraints on VNF Location

Sometimes there are certain service level agreements (SLAs) with some customers about the placement of the VNFs in the cloud. As the servers of a cloud service provider can be located in various locations or NFVI-PoPs, regulations may restrict the provider from placing certain types of VNFs on certain servers. For example, an enterprise client may wish to keep its data and traffic within a comfortable geographical boundary. If Sl denotes the set of servers located in geographical location l and L is the total number of locations, then:

S = ∪_{l=1..L} Sl

If Lt is the set of all permissible locations and L′t is the set of all forbidden locations for a VNF of type t, then the VMs should be deployed only on servers located in benign locations:

∀i ∀j : (i ∈ Sl) ∧ (l ∈ L′t) ∧ (VmType i,j = t) → ¬IsVmDeployed i,j    (7)

In an NFV environment, the single point of failure problem has already been discussed in the literature. These issues can be software-based (e.g., MANO) or hardware-based. If a single server contains all required VNFs, the overall system becomes vulnerable to hardware failure. We provision a reasonable distribution of VMs among all available servers to avoid such a failure: the number of VMs on a server should usually be below a threshold fraction of the maximum possible VMs on that particular server. If δ is the threshold percentage defined by the provider and MAX_NUM_VM is the maximum possible number of VMs on a server i, then:

∀i : Σ_j IsVmDeployed i,j ≤ δ × MAX_NUM_VM    (8)
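Continuing the earlier sketch (and reusing its assumed names s, deployed, vm_type, NUM_SERVERS, MAX_VMS, and NUM_TYPES), the location and failure-distribution constraints (7) and (8) could be encoded along these lines; the forbidden-location map, δ, and all identifiers are again assumptions.

```python
# Hedged continuation of the earlier Z3 sketch for constraints (7) and (8);
# 's', 'deployed', 'vm_type', NUM_SERVERS, MAX_VMS, NUM_TYPES are assumed from above.
from z3 import Implies, Not, Sum, If

server_location = [1, 1, 2]          # location ID of each server (example values)
forbidden = {0: {2}, 1: set()}       # type t -> set of forbidden location IDs (L't)
DELTA = 0.8                          # delta: allowed fraction of MAX_NUM_VM per server
MAX_NUM_VM = MAX_VMS

for i in range(NUM_SERVERS):
    for j in range(MAX_VMS):
        for t in range(NUM_TYPES):
            # (7): a VM of type t must not be deployed on a server in a forbidden location.
            if server_location[i] in forbidden[t]:
                s.add(Implies(vm_type[i][j] == t, Not(deployed[i][j])))
    # (8): keep the number of VMs per server below delta * MAX_NUM_VM.
    s.add(Sum([If(deployed[i][j], 1, 0) for j in range(MAX_VMS)])
          <= int(DELTA * MAX_NUM_VM))
```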
D. Constraints on Network Bandwidth

We take the overall network topology of the system as an input to our solver. That is, we know how the servers are connected to the ingress point, as well as to each other. The bandwidth of each link in the topology is also provided. It is required that the packet processing rates of the VMs do not exceed the bandwidth of the physical links; otherwise it would be impossible for the VMs to communicate with each other.

We denote the reachability between two VMs running on two servers by Reachable i,j,k,l, where VM j is running on server i and VM l is running on server k. Each deployed VM should be reachable from the others.

IsVmDeployed i,j ∧ IsVmDeployed k,l → Reachable i,j,k,l    (9)

VBw i,j,k,l is the required virtual bandwidth between two deployed VMs. Because there is two-way communication between the VMs, the required bandwidth between two VMs should be at least the sum of the packet processing rates of the communicating VMs. Consequently, if two VMs do not need to communicate, the virtual bandwidth between them should be zero.

Reachable i,j,k,l → VBw i,j,k,l ≥ VmPR i,j + VmPR k,l    (10)

¬Reachable i,j,k,l → VBw i,j,k,l = 0    (11)

The bandwidth of each physical link on the path between the servers hosting the communicating VMs should be sufficiently high so that the VMs on them can talk to each other. Let Z be the set of all links on the path from one server to another, and let PBw i,k,z be the required physical bandwidth of the z-th link on the path from server i to server k. The physical bandwidth of each link should be no less than the total virtual bandwidth required by the communicating VMs that use that link. We model the constraint as follows:

∀z ∈ Z, ∀i,k : Σ_{j,l} VBw i,j,k,l ≤ PBw i,k,z    (12)

V. PROTOTYPE IMPLEMENTATION AND A CASE STUDY

The main objective of our configuration synthesis problem is to synthesize the virtual network topology for an implementation of NFV by satisfying various customer requirements as well as the NFV provider's business constraints. Thus, the synthesis problem is formalized as the satisfaction of the conjunction of all the constraints in Equations 1 through 12.

A. SMT Encoding, Query Formulation, and Solving the Model

We implement our model by encoding the system configuration and business constraints into SMT logics [21]. As the synthesis time principally depends on the model of the problem, any SMT solver can be used for the implementation. We use Z3, an efficient SMT solver [22]. We mainly use two types of terms: boolean and integer. We use boolean terms for encoding the boolean configuration parameters and decision variables, e.g., whether a VM is deployed. The remaining parameters are modeled as integer terms.

The solver checks the verification constraints and provides a satisfiable (SAT) result if all the constraints are satisfied. The SAT result provides a SAT instance, which represents the value assignments to the parameters of the model. According to our objective, we require the assignments to the following variables: (i) the decision variable referring to whether a VM is deployed, IsVmDeployed i,j, i.e., the placement of the VMs, and (ii) the type of each deployed VM, VmType i,j. A 'true' value of IsVmDeployed i,j means that VM j on server i is deployed, while an integer value of VmType i,j indicates that VM j on server i is of the type corresponding to that integer. The VM placements and their types are printed in a text file.
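To illustrate the solving step described above, the sketch below (again reusing the assumed s, deployed, and vm_type from the earlier sketches) shows how the conjunction of constraints could be checked with Z3 and how a SAT model would be read back into placements and types and written to a text file; it is an illustration of the workflow, not VNFSynth's actual code.

```python
# Hedged sketch of solving the constraint conjunction and reading back the
# placements, reusing the assumed 's', 'deployed', and 'vm_type' from above.
from z3 import sat, is_true

result = s.check()                      # returns sat, unsat, or unknown
if result == sat:
    m = s.model()
    placements = []
    for i in range(NUM_SERVERS):
        for j in range(MAX_VMS):
            if is_true(m.eval(deployed[i][j], model_completion=True)):
                t = m.eval(vm_type[i][j], model_completion=True).as_long()
                placements.append((i, j, t))
    # Mirror VNFSynth's output: write the VM placements and types to a text file.
    with open("placements.txt", "w") as out:
        for i, j, t in placements:
            out.write(f"server={i} vm={j} type={t}\n")
else:
    print("UNSAT: no feasible deployment under the given constraints")
```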
Fig. 4. (a) The physical network topology of the COTS servers and (b) the physical network topology of the servers and the VMs implemented on them. (Both panels show the numbered COTS servers, routers 51-54, the incoming traffic ingress, and a web server.)

TABLE II
INPUT TO THE EXAMPLE

# Number of NFVI-PoPs
5
# Preferred locations
1235
# Number of VNF types
2
# Number of physical/COTS servers
50
# Memory (GB), CPU, and location of COTS servers
32 5 1
32 5 1
24 5 2
..
# Number of routers
4
# Router IDs: 51 52 53 54
# Number of links in the network
10
# Network topology: source, destination, and link bandwidths (Gbps)
1 51 500
2 51 250
3 51 500
..
51 52 1000
..
# Incoming traffic rate (Gbps)
140

B. Dynamic Update of NFV Topology

Network traffic patterns and intensity are very flexible and can change very frequently. These changes require the VNFaaS solutions to be more flexible and properly reconfigured, so that the web or DB servers, as well as the client machines, receive the same protection during and after the changes. VNFSynth is capable of providing a solution within a sustainable period of time when these changes occur. However, it is not always desirable to furnish a completely new solution whenever the traffic pattern varies, because the cost involved in shutting down some VMs and setting them up in new locations might degrade the quality of service. The VNFs are generally associated with some flows of the traffic, and changing their positions requires a lot of handling by the NFV orchestrator.

VNFSynth can take a previously found solution as input and generate a new output based on it. In essence, in the case of a relatively small change in the traffic pattern, the deployment and location of the existing VMs do not change: either additional VNFs are installed on available servers, or some idle VNFs are put to sleep, while all the constraints remain satisfied. When the traffic intensity change is above a certain threshold, we need to deploy everything from scratch. This threshold value can be determined by the user of the tool; it is an indicator of how much change the user wants to tolerate without reimplementing the whole network topology.

When the amount of incoming packets decreases, it is possible to free up some VMs. We run a load balancing algorithm which checks the load on the VMs, i.e., the number of flows handled by each VM. If the load of a VM is below a certain threshold, its flows are migrated to some nearby VMs and the lightly loaded VM is completely freed. It can be put to sleep, and the available resources (ServMem i and ServCpu i) of the corresponding server are updated for the next run of VNFSynth with the resources (VmMem i,j and VmCpu i,j) that were allocated to and used by that VM. Consequently, servers left with no VMs can be turned off to save a considerable amount of energy, which is a key feature of green cloud computing.

In the case of an increase in the incoming packet rate, we often need to add some more VMs to the system. If the increase is below a certain threshold (e.g., 20%), we keep the existing solution. That means the existing 'true' values of the placement variables (IsVmDeployed i,j) and the VM types (VmType i,j) are kept unchanged. The decision variables that were 'false' in the previous solution are kept open; these are assigned new values by VNFSynth so that the conjunction of Equations 1 through 12 remains satisfiable. In some cases, the new VMs are installed on servers that already have some VMs running, while in other cases, they are deployed on new servers that were unused so far. If a new server is unavailable when needed, VNFSynth returns an UNSAT result.

When the traffic pattern or intensity changes by more than the threshold (e.g., 20%), we run VNFSynth from scratch and find a completely new solution. In the case of an increase, some existing VMs need to be moved to other places, with the associated network traffic being moved alongside. Some 'loss-free' and 'order-preserving' algorithms have been discussed in the literature for migrating VMs. However, these algorithms rely heavily on traffic buffering, which might have negative effects on the performance of the virtual appliance mechanisms [23]. We leave these problems for future work, as they are out of the scope of this paper.
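The update policy described above can be summarized in a few lines of code. The following is a hedged, self-contained sketch under assumed names: base_constraints stands for a function that builds the constraint conjunction of Equations 1-12 for a given traffic rate, prev_solution for the previously synthesized placement, and the 20% threshold is the example value used in the text.

```python
# Hedged sketch of the dynamic update policy; all names and the threshold
# handling are illustrative assumptions, not VNFSynth's actual implementation.
from z3 import Solver, sat

THRESHOLD = 0.20   # e.g., 20% change in incoming traffic intensity

def synthesize(base_constraints, rate):
    # Placeholder for a full VNFSynth run: build all constraints and solve anew.
    s = Solver()
    s.add(*base_constraints(rate))
    return s.model() if s.check() == sat else None

def update_topology(base_constraints, prev_solution, old_rate, new_rate):
    """Reuse the previous placement when the traffic change is small; otherwise re-solve."""
    change = abs(new_rate - old_rate) / old_rate
    if change > THRESHOLD:
        return synthesize(base_constraints, new_rate)      # deploy from scratch
    s = Solver()
    s.add(*base_constraints(new_rate))
    # Pin every VM that was deployed before: only previously 'false' decision
    # variables stay open for the solver to assign.
    for (dep_var, type_var, was_deployed, old_type) in prev_solution:
        if was_deployed:
            s.add(dep_var, type_var == old_type)
    if s.check() == sat:
        return s.model()
    return None    # corresponds to VNFSynth's UNSAT result (no spare server available)
```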
Fig. 5. (a) Number of required VMs w.r.t. incoming traffic rate, (b) memory utilization by all deployed VMs w.r.t. incoming traffic rate, and (c) number of deployed VMs w.r.t. number of COTS servers.

C. A Case Study

Fig. 4(a) shows a small network for which an optimal security design will be synthesized based on the given input file as shown in Table II. We consider 50 COTS servers in a provider's cloud network that are connected to each other. The memory and CPU of these servers are provided in GB and number of cores, respectively. The connectivity, the number of routers that connect these servers, and the bandwidths of all the links are also provided in the input file. The virtual bandwidth between any two communicating VMs must comply with the physical bandwidth of the links between the host servers. In this example, we consider 140 Gbps of traffic arriving at the ingress point. It is worth mentioning that the number of ingress points may be more than one.

For this example, VNFSynth gives a SAT result, which provides the deployed VM types along with their placements in the servers. Fig. 4(b) shows the placements of the network functions. In this example, a total of 10 VMs of 2 different types are deployed. They are installed on servers 1, 3, 15, and so on. It is worth mentioning that VNFSynth provides not only the number of VMs, but also the memory, CPU, and packet processing rate of each VM.

If we increase the incoming traffic to 150 Gbps, an increase below our threshold value (20%), we observe that the existing placements of the VMs remain the same, except that one new VM of 'type 2' is deployed on a new server (server 5), although the existing servers had more resources available to accommodate the new VM. This is because of the bandwidth constraints associated with the already used servers. We may recall that the physical bandwidth of the links must be greater than or equal to the virtual bandwidth between VMs.

VI. EVALUATION

We ran our experiments on different synthetic network topologies with different arbitrary connectivity and configurations of 25-125 COTS servers. The memory and CPU cores of the servers were taken randomly in the ranges of 16-48 GB and 2-7 cores, respectively. VNFSynth was run on a machine running the Windows 10 OS, equipped with an Intel Core i5 processor and 12 GB of memory.

A. Analysis of the Relationships Among Incoming Traffic, Deployed VMs, and Resource Constraints

In this analysis, we ran a number of experiments on similar network topologies and configurations of a cloud data center. We gradually increase the incoming traffic rate starting from 50 Gbps and observe the number of deployed VMs, which is demonstrated in Fig. 5(a) for three different types of VNFs having different purposes and requirements. The number of VMs increases slowly with increasing traffic. At certain points, it is required to add more VMs. For example, for the 'type 1' VNF, as the incoming rate moves beyond 110 Gbps, the number of VMs for this type increases from 1 to 2; it remains the same up to 150 Gbps. It can be observed that the number of VMs is greater for 'type 2' VNFs. The reason is that this type of VNF is more complex and needs more resources.

Fig. 5(b) shows the relationship between the incoming traffic rate and the memory utilization of all the utilized servers. Utilization is the ratio of the total memory of all deployed VMs to the total memory of the servers on which they reside. As the traffic rate increases, the memory utilization remains almost constant. As long as there are servers available, the full amount of memory of a server is not put to use. This helps to keep the servers less loaded, and also helps to avoid single points of failure, as discussed in Section IV. For a given traffic rate, the memory utilization with 100 servers is slightly higher than with 75 servers.

We also observe the number of deployed VMs with respect to the total number of available servers for a certain amount of incoming traffic (80 and 130 Gbps) in Fig. 5(c). As the number of servers increases, the number of VMs remains almost the same. But at certain points, e.g., for 100 servers and 80 Gbps of traffic, the number of VMs increases. This is due to the bandwidth constraints. VNFSynth tries to find a solution utilizing all the prospective VMs; as there are more servers, there are more candidates for deployed VMs. It is better not to use up the whole bandwidth of a server, which helps to avoid possible bottlenecks. For the same number of servers, 130 Gbps of traffic requires more VMs than 80 Gbps.

B. Performance Analysis

The scalability of our proposed model is evaluated by analyzing the time required to synthesize the configurations while varying the problem size. The synthesis time includes the model generation time and the constraint verification time. However, the model generation time is negligible compared to the verification time. The synthesis time of the NFV topology needs to be low enough for a network administrator to reconfigure the system when there is a change in the traffic rate.
To the best of our knowledge, no other work deals with the same problem of resource management of NFV in a cloud environment with automated synthesis. Therefore, we did not compare the efficiency of our tool with other works.

Fig. 6. (a) The model synthesis time w.r.t. incoming traffic rate (Gbps), (b) the model synthesis time w.r.t. number of COTS servers, and (c) time w.r.t. variety of VNFs in the system.

Impact of Incoming Traffic Rate: Fig. 6(a) shows the model synthesis time with respect to the incoming traffic rate. We consider two different scenarios with 75 and 100 servers. We observe that the analysis time increases with the incoming traffic rate. According to the model, the combined packet processing rate of the VMs of each type must supersede the incoming traffic, and this rate depends on the memory and CPU allocated to the VMs. As the incoming traffic increases, the constraints become stricter and harder to solve. In the case of 100 servers, the required time is greater than with 75 servers.

Impact of Number of Servers: In Fig. 6(b), we analyze the model synthesis time by changing the number of servers while keeping the incoming traffic rate constant. In this analysis, we observe the time requirements for two different scenarios: 80 Gbps and 130 Gbps of traffic. We observe that the execution time increases significantly with the number of servers. As the number of servers goes up, the possibility of deploying more VMs increases. The problem size depends on the number of possible flows between the VMs, and the number of resource constraints also increases with the number of servers. The increase in model size requires the verification of more constraints, and usually more search (i.e., a longer time) is required to find a solution.

Impact of VNF Types: In Fig. 6(c), the time required for different numbers of VNF types is shown. When there is more variety of VNFs, VNFSynth takes more time to find a solution. It is obvious that when there are more types of VNFs, the number of constraints in the model also goes up. Hence, the time increases almost linearly with the VNF variety.

VII. CONCLUSION

VNFSynth is an auxiliary tool for the recent networking trend, NFV in the cloud. We address practical challenges in allocating commodity server resources to virtual machines that implement virtual network functions. VNFSynth models the customer requirements and resource constraints and formalizes the NFV architecture synthesis problem. Finally, it solves the problem using an efficient SMT solver, which results in the placements and types of the virtual machines. We evaluated VNFSynth on different synthetic test networks by varying the number of servers, their configurations, and the packet processing requirements. We found that the tool can generate a feasible result within a sustainable period of time.

REFERENCES

[1] Network functions virtualization - network operators perspective. http://tinyurl.com/jhhctwv.
[2] Network Functions Virtualization (NFV); Use Cases. http://www.etsi.org/deliver/etsi_gs/nfv/001_099/001/01.01.01_60/gs_nfv001v010101p.pdf.
[3] N. Egi et al. Understanding the packet processing capability of multi-core servers. Intel Technical Report, 2009.
[4] S. K. Fayaz et al. Bohatei: Flexible and elastic DDoS defense. In 24th USENIX Security Symposium, pages 817-832. USENIX, 2015.
[5] The ETSI NFV ISG homepage. http://www.etsi.org/technologies-clusters/technologies/nfv.
[6] What's in an NFV Cloud? Why Cloud Service Providers Care? https://www.sdxcentral.com/nfv/definitions/nfv-cloud/.
[7] CloudNFV. http://www.cloudnfv.com/.
[8] R. Vilalta et al. The SDN/NFV cloud computing platform and transport network of the ADRENALINE testbed. In Network Softwarization (NetSoft), 2015 1st IEEE Conference on, pages 1-5. IEEE, 2015.
[9] L. R. Battula. Network security function virtualization (NSFV) towards cloud computing with NFV over OpenFlow infrastructure: Challenges and novel approaches. In Advances in Computing, Communications and Informatics (ICACCI), 2014 International Conference on, pages 1622-1628. IEEE, 2014.
[10] M. Raho et al. KVM, Xen and Docker: A performance analysis for ARM based NFV and cloud computing. In Information, Electronic and Electrical Engineering, IEEE 3rd Workshop on Advances in, pages 1-8. IEEE, 2015.
[11] D. Krishnaswamy et al. An open NFV and cloud architectural framework for managing application virality behaviour. In Consumer Communications and Networking Conference (CCNC), 2015 12th Annual IEEE, pages 746-754. IEEE, 2015.
[12] B. Nunes et al. A survey of software-defined networking: Past, present, and future of programmable networks. IEEE Communications Surveys & Tutorials, 16(3):1617-1634, 2014.
[13] A. J. Younge et al. Efficient resource management for cloud computing environments. In Green Computing Conference, 2010 International, pages 357-364. IEEE, 2010.
[14] A. Beloglazov and R. Buyya. Energy efficient resource management in virtualized cloud data centers. In Proceedings of the 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing, pages 826-831. IEEE, 2010.
[15] J. Deng et al. VNGuard: An NFV/SDN combination framework for provisioning and managing virtual firewalls. In NFV and SDN, IEEE Conference on, pages 107-114. IEEE, 2015.
[16] NFV in the Cloud: It's Complicated. http://tinyurl.com/z7osv65.
[17] OpenStack Open Source Cloud Computing Software. https://www.openstack.org/software/.
[18] L. Rizzo, L. Deri, and A. Cardigliano. 10 Gbit/s line rate packet processing using commodity hardware: Survey and new proposals, 2012.
[19] L. Rizzo, G. Lettieri, and V. Maffione. Speeding up packet I/O in virtual machines. In Proceedings of the 9th ACM/IEEE Symposium on Architectures for Networking and Communications Systems, pages 47-58. IEEE Press, 2013.
[20] S. Hauger et al. Packet processing at 100 Gbps and beyond - challenges and perspectives. In ITG Symposium on Photonic Networks, pages 1-10. VDE, 2009.
[21] L. de Moura and N. Bjørner. Satisfiability modulo theories: An appetizer. In Brazilian Symposium on Formal Methods, 2009.
[22] L. de Moura and N. Bjørner. Z3: An efficient SMT solver. In Conference on Tools and Algorithms for the Construction and Analysis of Systems, 2008.
[23] A. Gember-Jacobson et al. OpenNF: Enabling innovation in network function control. ACM SIGCOMM Computer Communication Review, 44(4):163-174, 2014.