Virtualized Software-Defined
Networks and Services
For a complete listing of titles in the
Artech House Communications and Network Engineering Series,
turn to the back of this book.
Virtualized Software-Defined
Networks and Services
Qiang Duan
Mehmet Toy
Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the U.S. Library of Congress.
All rights reserved. Printed and bound in the United States of America. No part of this book
may be reproduced or utilized in any form or by any means, electronic or mechanical, including
photocopying, recording, or by any information storage and retrieval system, without permission
in writing from the publisher.
All terms mentioned in this book that are known to be trademarks or service marks have been
appropriately capitalized. Artech House cannot attest to the accuracy of this information. Use of
a term in this book should not be regarded as affecting the validity of any trademark or service
mark.
10 9 8 7 6 5 4 3 2 1
Contents
Preface xi
1.1 Introduction 1
2 Software-Defined Networking 25
2.1 Introduction 25
2.9 Conclusion 90
References 91
3 Virtualization in Networking 95
3.1 Introduction 95
References 231
Index 307
Preface
Two important recent innovations in networking technologies are software-
defined networking (SDN) and virtualization in networks. The latter includes
the network virtualization (NV) vision and the network function virtualization
(NFV) architecture. SDN and NFV offer promising approaches to overcoming inflexibilities in current network architecture, greatly enhancing service provisioning and improving resource utilization.
SDN and NV/NFV are independent networking paradigms. Realization
of network virtualization does not require SDN and vice versa. On the other
hand, they share many common objectives and complement each other. SDN
and NFV share key enabling technologies with cloud computing, such as re-
source virtualization and automation in service provisioning. They also facili-
tate unification of network and cloud applications as observed in cloud services.
The key idea of SDN lies in decoupling network control and manage-
ment functionalities from data forwarding operations to enable a centralized
control platform for supporting network programmability. The components of
SDN architecture include a data plane consisting of network resources for data
forwarding, a control plane comprising SDN controller(s) providing central-
ized control of network resources, and SDN applications that program net-
work operations through a controller. Consolidation of control functions to a
centralized controller in SDN may greatly simplify network operations while
allowing applications to program network behaviors for supporting diverse ser-
vice requirements. Therefore, SDN promises simplified and enhanced network
control, flexible and efficient network management, and improved network
service performance.
forum (MEF) and discuss applications of SDN and NFV technologies for
building cloud services.
We would like to thank the coauthors, who worked diligently to make this
book a valuable reference; the editors of Artech House, who provided valuable
comments to improve the book; and Molly Klemarczyk and Stephen Solomon
of Artech House, who helped greatly throughout the contract and publication
process.
1
Introduction and Overview
Q. Duan and M. Toy
1.1 Introduction
The past few decades have witnessed rapid advancement of computer network-
ing from local area networks to the global Internet that provides a data com-
munication platform needed by virtually any contemporary computing ap-
plication. The recent advances and wide adoption of cloud computing make
computer networks an indispensable element of today’s information infrastruc-
ture. The stunning success of computer networking, on the other hand, has also
brought in challenges to networking technologies from various aspects that call
for more innovations in this field.
Computer networking technologies are facing the requirements of differ-
ent stakeholders, including network operators, service providers, application
developers, and end users. These requirements are often correlated but distinct
and might even conflict with each other. For example, end users may want to
have the highest possible level of quality of service (QoS) provided by networks,
while network operators prefer to minimize resource usage and energy con-
sumption in network infrastructures for providing QoS. Applications deployed
upon the networking platform have highly diverse requirements on the services
that networks must provide, thus demanding sophisticated technologies for di-
versifying networks and enhancing service flexibility. On the other hand, the
desire for reducing network capital and operational costs calls for technologies
that can simplify network control/management functions and data forwarding
operations.
Traditional network designs lack sufficient ability to meet the wide spec-
trum of requirements due to the ossification of the current IP-based network
architecture. Such ossification mainly comes from two aspects: (a) the integration of control and forwarding functions, which causes complex and inflexible network control and management; and (b) the tight coupling between service functions and network infrastructures, which limits the network's ability to support agile service evolution. The research community has been exploring various approaches
and has made exciting progress for addressing the aforementioned challenges
to computer networking. Two important recent innovations in networking
technologies are software-defined networking (SDN) and virtualization in
networks. The latter includes the network virtualization (NV) vision and the
network function virtualization (NFV) architecture. SDN and NV/NFV offer
promising approaches to overcoming the ossification in traditional network ar-
chitecture, thus greatly enhancing network capabilities of service provisioning.
SDN and NV/NFV are independent networking paradigms. Realization
of virtualization in networks does not require SDN and vice versa. On the
other hand, they share many common objectives and follow similar technical
ideas and principles and thus may greatly complement each other. Evolutions
of both SDN and network virtualization have shown strong synergy between
them. These two emerging networking paradigms are expected to be integrated
into a unified software-defined and virtualization-based network architecture,
which allows network designs to fully exploit the advantages of both SDN and
NV/NFV. In addition, SDN and NV/NFV share some key enabling technolo-
gies with cloud computing, such as resource virtualization and automation in
service provisioning. Therefore, integration of SDN and NV/NFV may greatly
facilitate unification of network and cloud services, which enables convergence
between networking and cloud computing that may significantly enhance per-
formance and flexibility of cloud service provisioning.
A holistic view of software-defined and virtualization-based networking
in future networks for supporting unified network-cloud services would be very
beneficial to both researchers and practitioners in the field of computer net-
working. The main objective of this book is to reflect a vision of virtualized
software-defined networking and its impacts on service provisioning in future
networks.
The key idea of SDN lies in decoupling network control and management
functionalities from data forwarding operations to enable a centralized control
platform for supporting network programmability. Key components of the
SDN architecture include a data plane consisting of network resources for data
forwarding, a control plane providing logically centralized control of network
resources, and SDN applications for programming network behaviors through
a controller. Consolidation of control functions to a centralized controller in
SDN may greatly simplify network operations while allowing applications to
A fundamental idea for the SDN paradigm lies in the notion of resource
abstraction, which is an important capability to support network programma-
bility. The centralized SDN controller provides a global abstract view of the
underlying network resources, upon which SDN applications may program
network behaviors.
SDN architecture, as shown in Figure 1.2, consists of three planes: the
data plane, control plane, and application plane; and two interfaces: the in-
terface between data and control planes (D-CPI) and the interface between
application and control planes (A-CPI).
The data plane comprises network resources for performing data forward-
ing and processing operations. Network elements on the data plane are simply
packet forwarding and/or processing engines without complex control logic
to make autonomous decisions. The D-CPI, which is also referred to as the
southbound interface (SBI), allows data plane elements to expose the capabili-
ties and states of their resources to the control plane and enables the controller
to instruct network elements in their operations. The control plane presents a global view of the data plane infrastructure to SDN applications and provides a centralized control platform through which applications may define the operations
to be performed by data plane elements. The A-CPI, which is also called the
can easily reconfigure data plane devices, thus greatly facilitating deployment of
new services as well as evolution of existing services.
Networking devices on the SDN data plane, often called SDN switches,
typically comprise a packet processing engine, an abstraction layer, and a con-
troller interface. OpenFlow currently is the de facto D-CPI standard for con-
trolling SDN switches. An OpenFlow switch consists of a set of ingress ports
and output ports, an OpenFlow pipeline that contains one or multiple flow ta-
bles, and a secure channel for communicating with one or multiple OpenFlow
controllers. OpenFlow specification defines both the communication protocol
between controller and switch and the procedure for managing flow tables in
switches.
The SDN control plane is responsible for enabling an abstraction of the
data plane resources and providing an API for programing network behaviors.
Key functions of this plane include handling two types of objects: objects re-
lated to network control, including policies imposed by SDN applications and
rules controlling data plane operations; and objects related to network monitor-
ing, which take the form of local and global network states. An SDN controller
comprises a module for realizing D-CPI, a module for data plane abstraction,
and a module for implementing A-CPI. The D-CPI module implements a
southbound protocol for communicating with data plane devices. The abstrac-
tion module constructs a global abstract view of the data plane network that
is used by SDN applications to program network operations. Main functions
performed by this module include user/network device management, network
topology management, network traffic monitoring, and flow management. The
A-CPI module provides northbound APIs for SDN applications to access the
controller.
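To make this three-module structure concrete, the minimal Python sketch below outlines a controller skeleton with a southbound (D-CPI) module, an abstraction module that maintains a global topology view, and a northbound (A-CPI) module. All class and method names are illustrative assumptions, not the API of any particular controller.

    # Minimal sketch of an SDN controller skeleton (illustrative names only).

    class SouthboundModule:
        """D-CPI module: speaks a southbound protocol (e.g., OpenFlow) to switches."""
        def __init__(self):
            self.switches = {}                      # datapath id -> connection handle

        def register_switch(self, dpid, conn):
            self.switches[dpid] = conn

        def install_rule(self, dpid, match, actions):
            # A real controller would encode and send a flow-mod message here.
            self.switches[dpid].send({"match": match, "actions": actions})


    class AbstractionModule:
        """Builds a global abstract view of the data plane network."""
        def __init__(self):
            self.topology = {}                      # dpid -> set of neighbor dpids

        def update_link(self, a, b):
            self.topology.setdefault(a, set()).add(b)
            self.topology.setdefault(b, set()).add(a)

        def global_view(self):
            return self.topology


    class NorthboundModule:
        """A-CPI module: exposes the network view and control actions to applications."""
        def __init__(self, southbound, abstraction):
            self.sb, self.ab = southbound, abstraction

        def get_topology(self):
            return self.ab.global_view()

        def program_flow(self, dpid, match, actions):
            self.sb.install_rule(dpid, match, actions)

An application written against the northbound module never touches the southbound protocol directly, which is precisely the separation the controller architecture is meant to provide.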
Control performance is a key factor that impacts performance of the en-
tire SDN network. Various technologies have been developed for enhancing
SDN control performance, including parallel and batch processing designs of
controller software, distributed controller deployment, and hierarchical con-
trol structure. Multidomain SDN control is a challenging problem that has
attracted research attention. Representative efforts made in this area include
the SDNi protocol for message exchanging across SDN domains, distributed
multidomain SDN control structure, and hierarchical orchestration for inter-
domain SDN control.
Control applications can be regarded as the “brain” in the SDN architec-
ture that makes decisions on network policies and programs network behaviors
to fulfill the policies. SDN applications interact with controllers through A-
CPI to obtain network state information and request certain control actions
be taken by the controllers. SDN applications can be classified as either proac-
tive or reactive. Proactive applications make decisions regarding traffic steering
for some predetermined flows and then request the controller to preinstall the
action rules at switches for handling those flows. Proactive applications can be
implemented using either language APIs or RESTful APIs. Reactive applica-
tions typically work with reactive flow management functions at the controller
to handle network events. Reactive applications are often written in the pro-
gramming language of the controller and leverage the language API to interact
with the controller.
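As a sketch of the proactive pattern, the snippet below preinstalls a forwarding rule for a predetermined flow through a RESTful northbound API. The endpoint URL, JSON schema, and field names are hypothetical placeholders rather than the interface of any specific controller.

    import requests

    CONTROLLER = "http://controller.example.com:8181"   # hypothetical northbound endpoint

    def preinstall_flow(dpid, src_ip, dst_ip, out_port):
        """Proactively push a flow rule so the switch never has to consult the controller."""
        rule = {
            "switch": dpid,
            "priority": 100,
            "match": {"eth_type": 0x0800, "ipv4_src": src_ip, "ipv4_dst": dst_ip},
            "actions": [{"type": "OUTPUT", "port": out_port}],
        }
        resp = requests.post(f"{CONTROLLER}/flows", json=rule, timeout=5)
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        preinstall_flow(dpid="00:00:00:00:00:00:00:01",
                        src_ip="10.0.0.1", dst_ip="10.0.0.2", out_port=2)

A reactive application, by contrast, would register a packet-in handler in the controller's own programming language and decide the rule only after the first packet of a flow arrives.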
Researchers have noticed that there are some issues associated with the
current SDN approach that may prevent network designers from fully exploiting the advantages promised by this new networking paradigm. A root reason for such a limitation lies in the unnecessarily tight coupling between architecture and in-
frastructure in the current SDN design, which constrains evolution of network
services to what the current network infrastructure can support. Various efforts
have been made by the research community for overcoming this barrier in order
to release the full power of SDN. Two representative proposals toward this di-
rection are the software-defined internet architecture (SDIA) and the protocol
independent layer (PI-layer). SDIA separates the network edge and core for
both packet forwarding and network control, thus decoupling network archi-
tecture from infrastructure on both data plane and control plane. The PI-layer
leverages protocol-oblivious forwarding (POF) and the programming protocol-in-
dependent packet processing (P4) language to enable a fully programmable data
plane in SDN that may support various network protocols for meeting diverse
service requirements.
Figure 1.3 Main functional roles in an NV environment: InP, VNP, and VNO.
and designed, and a VNF instance (VNFI) is the runtime instantiation of the
VNF. A VNF may comprise one or multiple VNF components (VNFCs), and
each VNFC can be instantiated as a VNFC instance (VNFCI). When a VNF
is composed of a group of VNFCs, the internal interfaces between them do not
need to be exposed and standardized.
The MANO component comprises three key functional blocks: virtu-
alized infrastructure manager (VIM), VNF manager (VNFM), and NFV or-
chestrator (NFVO), which are in charge of the management and orchestration
functions, respectively, for infrastructure, virtual functions, and network ser-
vices. The VIM in an infrastructure domain is responsible for managing NFVI
compute, storage, and network resources in the domain. A VIM may be special-
ized in managing a certain type of infrastructure resources (e.g., compute-only,
storage-only, or networking-only) or may provide federated management across
multiple types of resources. VNFM is responsible for the lifecycle management
of VNF instances. Each VNF instance must have an associated VNFM, while
a VNFM may be assigned to manage either a single VNF instance or a group
of VNF instances. The NFVO functional block is responsible for resource or-
chestration that coordinates NFVI resources across multiple VIMs and service
orchestration that manages lifecycles of network services.
A key requirement for realizing the benefits of NFV is to implement VNFs
as software instances running on commercial off-the-shelf (COTS) servers.
How to design COTS server-based NFV implementations to support realistic
network loads and achieve performance comparable to hardware-based network
devices has become an important research topic. Data plane I/O operations
form a bottleneck for implementing high-performance NFV on COTS serv-
ers. Various I/O acceleration techniques have been developed, among which
single root I/O virtualization (SR-IOV) and Intel Data Plane Development Kit
(DPDK) are representative solutions for I/O virtualization improvement. Ex-
amples of recent research efforts for further improving the performance of NFV
implementations include the NetVM platform [8] and the ClickOS system [9].
The service-oriented architecture (SOA), which forms the basis of the
successful cloud service model, offers an approach to facilitating realization of
virtualization in networks. Applying SOA in networking enables the network-
as-a-service (NaaS) paradigm that has been adopted in the NFV architecture
for service provisioning. Representative NaaS-based service models include
network function virtualization infrastructure-as-a-service (NFVIaaS), virtual
network function-as-a-service (VNFaaS), and virtual network platform-as-a-
service (VNPaaS), which have been identified by NFV-ISG as main NFV use
cases [10]. The centralized and programmable control enabled by SDN pro-
vides an effective platform for supporting NaaS-based virtualization. Virtual-
ization, SOA, and SDN, together offer a promising approach to unifying the
area has indicated that the forwarding and control element separation (ForCES)
specification offers a promising basis for developing an abstraction model and
the associated control protocol for supporting SDN in an NFV environment.
NFV in principle is applicable to any network function on both data
plane and control plane. Many network control and management functions
(e.g., routing, path computation, traffic engineering, and load balancing) are
good candidates for being realized as VNFs. On the other hand, such functions
often benefit from the availability of a centralized network controller, and
therefore are typically realized as applications running on top of an SDN con-
troller. Combination of NFV and SDN technologies enables virtual network
control functions to be implemented over an SDN control platform, which
allows network designs to exploit the advantages of both NFV and SDN.
In order to further improve performance of SDN-supported NFV, re-
search proposals have been made to explore the possibility of exploiting data
plane capabilities in SDN to implement some VNF functionalities. The main
idea of the proposed approach is to keep simple data processing in the SDN data plane as much as possible and to forward traffic to VNF servers only when more complex processing is necessary.
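A minimal illustration of this idea, assuming a hypothetical rule-building helper, keeps ordinary traffic on the switch fast path and steers only flows that need deep processing toward the port facing the VNF server.

    def build_steering_rules(vnf_port, fast_path_port):
        """Sketch: simple traffic stays in the data plane; selected traffic goes to a VNF.

        Returns abstract match/action rules; translating them into actual flow-mod
        messages is left to the controller's southbound module.
        """
        return [
            # Only traffic that needs complex inspection (here: inbound HTTP) is
            # detoured through the VNF server.
            {"priority": 200,
             "match": {"eth_type": 0x0800, "ip_proto": 6, "tcp_dst": 80},
             "actions": [{"type": "OUTPUT", "port": vnf_port}]},
            # Everything else is forwarded directly by the switch.
            {"priority": 10,
             "match": {},
             "actions": [{"type": "OUTPUT", "port": fast_path_port}]},
        ]

    print(build_steering_rules(vnf_port=5, fast_path_port=1))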
The key principles of SDN and NFV both lie in abstraction but focus
on different aspects of the network architecture. Two types of abstraction have
been deployed in general networking through the designs of layers and planes
in network architecture, respectively. Both layer and plane enable abstraction of
network resources, but in different dimensions. Both SDN and NFV principles
are based on abstraction but with emphasis on the plane and layer dimensions,
respectively. These two abstraction dimensions are orthogonal (i.e., network
architecture may have abstraction on one dimension but not on the other).
Therefore, SDN and NFV in principle are independent—NFV may be real-
ized with or without SDN and vice versa. On the other hand, the challenging
requirements for service provisioning in future networks demand abstraction
on both dimensions in order to fully exploit their advantages.
A software-defined network virtualization (SDNV) architectural frame-
work has been proposed to provide a holistic vision about the relationship
between key principles of SDN and NFV and how they may be combined
in a unified network architecture [1]. The SDNV framework integrates both
the layer- and plane-dimension abstractions and provides useful guidelines for
synthesizing the research efforts from various aspects toward enabling unified
software-defined virtualization in networking.
Research on integrating SDN and NFV is still at an early stage and many
technical issues must be fully addressed before the vision of software-defined
network virtualization may be realized. Therefore, this field offers a wide
spectrum of open topics for future research and opportunities for technology
innovation.
The key actors of a cloud service (Figure 1.5) are the cloud service user; the cloud service provider (cSP), which is responsible for providing an end-to-end cloud service to the cloud service user; cloud carrier(s), which provide connectivity between the user and the cloud application; and cloud provider(s), which provide the cloud applications. The cSP may or may not own cloud carrier (cC) and cloud provider (cP) facilities, but provides a single bill to the cloud service user. A cSP can be private or public. There could be cases in which private and public cSPs collectively provide a cloud service to a user.
A user interfaces to a cSP's facilities via a standard interface called the cloud service user interface (cSUI), which is a demarcation point between the cSP and the cloud consumer. Through this interface, the consumer establishes a connection, a cloud service connection (cSC), with a cloud provider (cP) entity providing the application, where the cP entity can be a virtual machine (VM) with a cloud service interface (cSI) or a physical resource such as storage with a cSUI. In addition, a cSC can be between two cloud provider entities or between two cloud consumers.
When a cSC is between a cloud user and a cP physical or virtual resource,
the cSC is established between two cloud service connection termination points
(cSCTPs) residing at the user interface (i.e., cSUI) and the cP interface (i.e.,
cSUI or cSI).
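As a rough data-model sketch of these concepts, the dataclasses below capture a cSC terminated by two cSCTPs that reside at a cSUI and a cSI. The field names and structure are chosen purely for illustration and are not defined by any standard.

    from dataclasses import dataclass

    @dataclass
    class Interface:
        """A demarcation/interface point such as a cSUI, cSI, or cCcPI."""
        kind: str          # e.g., "cSUI", "cSI", "cCcPI"
        owner: str         # e.g., "consumer", "cSP", "cP", "cC"

    @dataclass
    class CSCTP:
        """Cloud service connection termination point residing at an interface."""
        interface: Interface

    @dataclass
    class CSC:
        """Cloud service connection between two termination points."""
        a_end: CSCTP
        z_end: CSCTP

    # A cSC between a cloud user (at its cSUI) and a cP virtual machine (at a cSI).
    user_side = CSCTP(Interface(kind="cSUI", owner="consumer"))
    provider_side = CSCTP(Interface(kind="cSI", owner="cP"))
    connection = CSC(a_end=user_side, z_end=provider_side)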
The cSP may own the cP and cloud carrier (cC) facilities. When the cP
and the cC are two independent entities belonging to two different operators,
the standards interface between them is called cloud carrier cloud provider in-
terface (cCcPI). In this case, a cSC for cloud services can be terminated at either
cCcPI or cSI.
It is also possible for two or more cSPs to be involved in providing a
cloud service to a cloud consumer where two cSPs interface to each other via a
standard interface called cloud service provider cloud service provider interface
(cSPcSPI). Since one of the cSPs needs to interface to the end user, coordinate
resources, and provide a bill, the cSP that does not interface to the end user is
called cloud service operator (cSO).
Software as a service (SaaS), platform as a service (PaaS), infrastructure as
a service (IaaS), network as a service (NaaS), security as a service (SECaaS), and
communication as a service (CaaS) are among the well-known cloud services.
SaaS is an application running on a cloud infrastructure where the con-
sumer does not manage or control the underlying cloud infrastructure, includ-
ing network, servers, operating systems, storage, or even individual applica-
tion capabilities, with the possible exception of limited user-specific application
configuration settings. SaaS examples include Gmail from Google, Microsoft
“live” offerings, and salesforce.com.
PaaS allows the consumer to deploy onto the cloud infrastructure consumer-created or acquired applications built using programming languages and tools supported by the provider. The consumer does not manage or control the underlying
cloud infrastructure including network, servers, operating systems, or storage,
but has control over the deployed applications and possibly application hosting
environment configurations.
IaaS provisions processing, storage, networks, and other fundamental
computing resources where the consumer is able to deploy and run arbitrary
software. The software can include operating systems and applications. The
consumer does not manage or control the underlying cloud infrastructure, but
has control over operating systems, storage, deployed applications, and possibly
limited control of selected networking components, such as firewalls.
Network as a service (NaaS) delivers assured, dynamic connectivity ser-
vices via virtual, or physical and virtual, service endpoints orchestrated over
multiple operators’ networks. Such services will enable users, applications, and
systems to create, modify, suspend/resume, and terminate connectivity services
through standardized application programming interfaces (APIs). These ser-
vices are assured from both performance and security perspectives.
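The kind of standardized API implied here might look like the following sketch, in which a connectivity service can be created, modified, suspended, resumed, and terminated. The endpoint paths, payload fields, and host name are hypothetical assumptions, not a published NaaS API.

    import requests

    NAAS_API = "https://naas.example.net/api/v1"     # hypothetical NaaS endpoint

    def create_connectivity(endpoints, bandwidth_mbps, latency_ms):
        payload = {"endpoints": endpoints,
                   "assurance": {"bandwidth_mbps": bandwidth_mbps,
                                 "max_latency_ms": latency_ms}}
        return requests.post(f"{NAAS_API}/connectivity-services",
                             json=payload, timeout=10).json()

    def modify_bandwidth(service_id, bandwidth_mbps):
        return requests.patch(f"{NAAS_API}/connectivity-services/{service_id}",
                              json={"bandwidth_mbps": bandwidth_mbps}, timeout=10).json()

    def suspend(service_id):
        return requests.post(f"{NAAS_API}/connectivity-services/{service_id}/suspend",
                             timeout=10).json()

    def terminate(service_id):
        requests.delete(f"{NAAS_API}/connectivity-services/{service_id}", timeout=10)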
Security services such as connectivity security, application security, or
content security are referred to as security as a service (SECaaS). With SECaaS, a consumer does not manage or control the underlying security transport negotiation, encryption, detection algorithms, threat intelligence, or network
inspection, but has control over the selection of security solutions and scope
with respect to their data and network.
Real-time services such as Virtual PBX, voice and video conferencing sys-
tems, collaboration systems, and call centers are considered communication as
a service (CaaS).
For ETSI NFV, VNF represents an instance of a functional block respon-
sible for a specific treatment of received packets. An end point represents an external interface of a VNF instance and is always associated with a VNF. Each
VNF can have an associated physical/virtual interface, MAC, IP address, or a
higher-layer application such as HTTP.
Two major enablers of NFV are industry-standard servers and technolo-
gies developed for cloud computing. Recent developments of cloud computing,
such as various hypervisors, OpenStack, and Open vSwitch, also make NFV
achievable in reality. For example, the cloud management and orchestration
schemes enable the automatic instantiation and migration of VMs running spe-
cific network services. Network infrastructure will become more fluid when
deploying VNFs.
ETSI NFV divides the network into infrastructure and virtual network layers, and defines a virtual interface, Vn-Nf, between them. These layers are the same as those for NaaS. NFV also identifies a VM interface as (Vn-Nf)/VM or Vn-Nf-VM,
an interface to hardware as Vi-Ha, SWA (software architecture)-1 interface be-
tween various network functions within the same or different network service,
SWA-5 interface between the infrastructure (NFVI) and the VNF, and a container
interface between host functional block (HFB) and virtualization functional
block (VFB).
ETSI NFV architecture does not define necessary interfaces between a
network and its user, between service providers, or between a cloud provider
and cloud carrier. Furthermore, it does not have connection and connection
termination concepts. However, it is possible to divide attributes of these cloud
services components as virtual and infrastructure categories. cSUI and cloud
service connection termination point (cSCTP) have virtual and infrastructure
components, while cSC and cSI have only virtual components.
This approach is applied to carrier Ethernet and IP services. Service chain-
ing for Ethernet private line (EPL), Access E-Line, and IP VPN are given as
examples.
In a network supporting cloud services, there can be virtualized, nonvir-
tualized, and legacy components. All of the network, applications, and service
components need to be managed together.
Life cycle services operations (LSO) functionalities for cloud services
may be summarized as follows:
These functionalities for cloud services are being worked on in the industry.
References
[1] Duan, Q., N. Ansari, and M. Toy, “Software-Defined Network Virtualization: An Ar-
chitectural Framework for Integrating SDN and NFV for Service Provisioning in Future
Networks,” IEEE Network Magazine, Vol. 30, No. 5, Sept. 2016, pp. 10–16.
[2] Open Networking Foundation, “Software-Defined Networking: The New Norm of Net-
works,” white paper, April 2012.
[3] Open Network Foundation, “ONF TR-521: SDN Architecture,” Issue 1.1, 2016.
[4] Feamster, N., L. Gao, and J. Rexford, “How to Lease the Internet in Your Spare Time,”
ACM SIGCOMM Computer Communication Review, Vol. 37, No. 1, Jan. 2007, pp. 61–64.
[5] Turner, J., and D. E. Taylor, “Diversifying the Internet,” Proceedings of the 2005 IEEE
Global Telecommunications Conference (GLOBECOM’05), Dec. 2005.
[6] ETSI NFV-ISG, “Network Functions Virtualization: An Introduction, Benefits, Enablers,
Challenges, and Call for Action,” Proceedings of SDN and OpenFlow World Congress, Oct.
2012.
[7] ETSI NFV-ISG, “NFV 002: Network Function Virtualization (NFV)—Architectural
Framework v1.2.1,” Dec. 2014.
[8] Hwang, J., K. K. Ramakrishnan, and T. Wood, “NetVM: High Performance and Flexible
Networking Using Virtualization and Commodity Platforms,” IEEE Transactions on Net-
work and Service Management, Vol. 12, No. 1, March 2015, pp. 34–47.
[9] Martins, J., M. Ahmed, C. Raiciu, V. Olteanu, N. Honda, et al., “ClickOS and the Art
of Network Function Virtualization,” Proceedings of the 11th USENIX Symposium on Net-
worked Systems Design and Implementations, April 2014, pp. 459–472.
[10] ETSI NFV-ISG, “Network Function Virtualization: Use Cases v1.1.1,” Oct. 2013.
[11] Csazar, A., W. John, M. Kind, C. Meirosu, G. Pongracz, et al., “Unifying Cloud and
Carrier Network,” Proceedings of the 2013 IEEE/ACM International Conference on Utility
and Cloud Computing (UCC2013), Dec. 2013.
[12] Peterson, L., “CORD: Central Office Re-Architected as a Datacenter,” IEEE Software
Defined Networks white paper, November 2015.
[13] Blenk, A., A. Basta, M. Reisslein, and W. Kellerer, “Survey on Network Virtualization
Hypervisors for Software-Defined Networking,” IEEE Communications Surveys and
Tutorials, Vol. 18, No. 1, 2016, pp. 655–685.
[14] Sherwood, R., G. Gibb, K.-K. Yap, G. Appenzeller, M. Casado, et al., “FlowVisor: A
Network Virtualization Layer,” OpenFlow Switch Consortium Technical Report, 2009.
[15] Drutskoy, D., E. Keller, and J. Rexford, “Scalable Network Virtualization in Software-
Defined Networks,” IEEE Internet Computing Magazine, Vol. 17, No. 2, Feb. 2013, pp.
20–27.
[16] ETSI NFV ISG, “NFV EVE-005: Report on SDN Usage in NFV Architectural Framework
v1.1.1,” Dec. 2015.
[17] Toy, M., “OCC 1.0 Reference Architecture,” Dec. 2014.
2
Software-Defined Networking
Q. Duan and M. Toy
2.1 Introduction
The rapid emergence of a wide spectrum of network-based computing ap-
plications with highly diverse requirements brings in many new challenges to
networking technologies. Such applications (e.g., mobile applications, social
networks, cloud services, and big data analysis) expect more bandwidth, dy-
namic network control, efficient network management, and more agile service
evolution. The growing popularity of multimedia applications and increasing
demand for big data analytics require higher speed network connections. The
huge number of mobile devices and rapid development of mobile and social
network applications demand ubiquitous data communications. In addition,
cloud computing has added higher expectation on the performance, flexibil-
ity, and agility of computer networking. Reliable and efficient access to the
computing and storage resources in the cloud via a network is critical to high-
performance cloud service provisioning.
Various networking technologies have been developed and a huge amount
of investment has been made to enhance network capability and expand net-
work infrastructure for meeting the aforementioned demands. However, intro-
duction of new networking technologies and expansion of network infrastruc-
ture have significantly increased the complexity and cost of network operation
and service provisioning. Network devices have become increasingly complex,
and thus more expensive, mainly due to the sophisticated control intelligence
added inside each device. Network control and management are also becoming
complex and inflexible. Network operators often need to configure individual
sion of programmable networks, the technology did not receive wide adoption
at the time it was developed.
Subsequent research efforts toward overcoming network ossification made
in the early to mid-2000s mainly focused on enhancing routing and configura-
tion management. Research progress made during this time period embraced
the ideas of separated control and forwarding functions and logically central-
ized control entity. For example, the forwarding and control element separation (ForCES) framework released by the IETF splits the functions of networking de-
vices into forwarding and controlling elements with an open interface between
them. Routing control platform (RCP) replaces BGP interdomain routing
with centralized routing control to reduce the complexity of fully distributed
path computation. Path computation element (PCE) architecture separates
computation of label switched paths from actual packet forwarding operation
in MPLS networks. Unfortunately, when the aforementioned proposals were
made, dominant equipment vendors did not have strong incentive to adopt
standards for control-forwarding separation. Therefore, although industry pro-
totypes and standardization efforts made some progress, such technologies have
not been widely adopted in network device designs yet.
Despite the limited adoption in industry, researchers kept broadening the
vision of control and data plane separation when exploring clean-slate design
of Internet architecture. The 4D approach introduced in [2] advocates a new
network architecture that comprises four planes—a data plane for processing
packets based on configurable rules, a discovery plane for collecting network
topology and state information, a dissemination plane for installing packet-pro-
cessing rules, and a decision plane consisting of logically centralized controllers
for making decisions regarding network operations. This approach has been
applied to new applications beyond routing control in some research projects.
In particular, the Ethane project created a logically centralized flow-level solu-
tion for access control in enterprise networks, where a separated controller in-
stalls rules generated based on high-level security policies into the flow tables
at switches.
In 2008 a research group at Stanford University published their research
on OpenFlow [3], which is widely regarded as the first instance of software-defined networking. OpenFlow embraces the principle of separated data and con-
trol planes and logically centralized controller. An OpenFlow switch has one or
more tables of packet handling rules. Each rule matches a subset of traffic and
performs certain actions on the traffic that matches the rule. The rules are in-
stalled by a separated controller into all switches. OpenFlow attracted extensive
attention from the research community and received wide adoption in industry
soon after its publication, especially as compared with its intellectual predeces-
sors. OpenFlow stimulated a wave of innovations in networking technologies
to enable a new networking paradigm with separated data and control planes
that application developers can simplify their program logic without the need
for detailed knowledge of the underlying network resources and technologies.
SDN is expected to provide abstractions from the following three aspects:
forwarding abstraction, distribution abstraction, and specification abstraction.
The forwarding abstraction should allow any forwarding behavior required by
the network controller (and applications) while hiding details of the underlying
data plane operations. An SDN controller acts as a driver to data plane switches
to support this abstraction. The distribution abstraction shields network control and management functions from the distributed resource states, thus transforming distributed control problems into logically centralized problems. SDN
controllers realize such an abstraction by collecting state information about
data plane devices to form a global network view. The specification abstraction
should allow a network application to express the desired network behaviors
without being responsible for implementing those behaviors by itself. Network
programmability provided by SDN controller allows the abstract configura-
tions expressed by network applications to be mapped to physical configura-
tions of data plane devices, thus supporting the specification abstraction [6].
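The specification abstraction can be illustrated with a toy translation step: an application states an abstract reachability intent over the global view, and the controller expands it into per-switch forwarding rules along a computed path. The topology, intent format, and rule format below are invented for illustration only.

    # Toy illustration: map an abstract intent onto per-switch rules along a path.

    TOPOLOGY = {"s1": ["s2"], "s2": ["s1", "s3"], "s3": ["s2"]}   # global view (invented)
    PORTS = {("s1", "s2"): 2, ("s2", "s3"): 3, ("s3", "host_b"): 1}

    def shortest_path(topology, src, dst):
        """Breadth-first search over the controller's global network view."""
        frontier, seen = [[src]], {src}
        while frontier:
            path = frontier.pop(0)
            if path[-1] == dst:
                return path
            for nxt in topology.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None

    def compile_intent(intent):
        """Specification abstraction: the application never writes device rules itself."""
        path = shortest_path(TOPOLOGY, intent["ingress"], intent["egress"])
        rules = []
        for i, switch in enumerate(path):
            nxt = path[i + 1] if i + 1 < len(path) else "host_b"
            rules.append({"switch": switch,
                          "match": {"ipv4_dst": intent["dst_ip"]},
                          "actions": [{"output": PORTS[(switch, nxt)]}]})
        return rules

    print(compile_intent({"ingress": "s1", "egress": "s3", "dst_ip": "10.0.0.2"}))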
through decoupling data and control planes. Specifically, SDN offers a simple
programmable control platform rather than making networking devices more
complex for supporting programmability, as in the case of active networking.
Moreover, SDN adopts separation of control and data planes as a core part of
network architectural design, which not only enables a simpler programmable
environment, but also provides greater freedom for defining network behaviors
to meet the highly diverse service requirements in future networks.
Unique features of the SDN paradigm bring in some advantages over
traditional networking technologies in various ways [7, 8]. SDN may signifi-
cantly simplify data plane network elements. The main complexity of IP rout-
ers comes from the control intelligence, such as routing protocols and path
computation functions, which must be implemented on each individual router.
Separation of data and control planes in SDN allows devices on the data plane
to simply perform packet forwarding operations by following rules installed by
a controller. Removing the complex control functions from individual devices
and consolidating the control logic on a dedicated controller simplify network-
ing devices, thus reducing their costs.
SDN may also greatly simplify network control. Traditional IP network
control relies on distributed routing protocols that require participation and
collaboration of numerous routers. The highly distributed control mechanism
in IP-based networks, although seeming to be a good way to guarantee net-
work resilience, results in very complex and relatively static control architec-
ture. SDN enables one single controller to monitor and control all networking
devices on the data plane, which transforms the complex distributed control
problems in IP networks into simpler control decisions made on a central point
with a global network view.
SDN also enhances network management. Due to the heterogeneity in
network devices and configuration interfaces, current network management
typically involves a certain level of manual processing. With current network
design, automatic and dynamic reconfiguration of a network remains a big
challenge. In SDN, a unified control plane oversees all kinds of networking
devices, including switches, routers, network address translators, firewalls, and
load balancers, among others, and is thus able to manage networking devices
via a single standard interface through software programming. Therefore, the
entire network can be programmatically configured and dynamically optimized
based on service requirements and network status.
SDN offers a global approach to optimizing network performance. One
of the key objectives of network operation is to maximize utilization of net-
work resources for meeting service performance requirements. Currently avail-
able approaches to network optimization are based on local information, which
may lead to suboptimal performance or conflicting operations. SDN offers
an opportunity to improve network performance globally. SDN allows for
centralized control with a global network view, thus making many challenging
performance optimization problems manageable with properly designed cen-
tralized algorithms. The logically centralized control platform of SDN enables
a higher degree of automation in network operations and service delivery, and
therefore may also improve network resource availability and utilization.
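As a toy example of what a global view enables, the sketch below picks, among candidate paths, the one that minimizes the maximum link utilization, a decision a purely local, per-router mechanism cannot make. The link capacities and loads are invented for illustration.

    # Toy global optimization: choose the candidate path with the lowest bottleneck
    # utilization, using link loads known only to a centralized controller.

    LINK_CAPACITY = {("s1", "s2"): 10_000, ("s2", "s4"): 10_000,
                     ("s1", "s3"): 10_000, ("s3", "s4"): 10_000}   # Mb/s (invented)
    LINK_LOAD = {("s1", "s2"): 7_000, ("s2", "s4"): 6_500,
                 ("s1", "s3"): 2_000, ("s3", "s4"): 3_000}         # Mb/s (invented)

    def bottleneck_utilization(path, extra_mbps):
        links = list(zip(path, path[1:]))
        return max((LINK_LOAD[l] + extra_mbps) / LINK_CAPACITY[l] for l in links)

    def best_path(candidates, demand_mbps):
        """Pick the path whose most-loaded link stays least utilized."""
        return min(candidates, key=lambda p: bottleneck_utilization(p, demand_mbps))

    candidates = [["s1", "s2", "s4"], ["s1", "s3", "s4"]]
    print(best_path(candidates, demand_mbps=1_000))    # -> ['s1', 's3', 's4']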
SDN provides better support for new service development and deploy-
ment. Network programmability provided by SDN allows various upper layer
applications and services to be developed and deployed without being con-
strained by any specific technology employed in the underlying network infra-
structure. The centralized SDN controller supports network operating systems
that can easily reconfigure data plane devices, thus greatly facilitating deploy-
ment of new services and evolution of current services. SDN allows network
customization for supporting network services with different requirements
through programming network operations (e.g., dynamic enforcement of a
set of policies for meeting various service requirements). SDN can also reduce
the response time of business requests to network providers, increase customer
satisfaction, and shorten investment payback time through automation of net-
work operations.
SDN encourages innovations by providing a programmable network plat-
form to implement, experiment, and deploy new network architectures, tech-
nologies, applications, and services. SDN facilitates realization of multitenant
virtual networks, each of which may implement customized network architec-
ture, addressing scheme, and routing protocol. The separation of data and con-
trol planes allows developments in data forwarding technologies and network
control mechanisms to follow their respective innovation paths.
general architectural framework for SDN and then introduce the architectures
developed by ONF, ITU-T, and IRTF.
concept of layer is used in the context of a layering model (e.g., layer-3 packets are encapsulated into layer-2 frames and then transmitted by layer-1
media). A key feature of layering lies in that a higher layer strictly relies on
the services provided by the lower layer in order to perform its own functions.
However, planes in the SDN architecture, while needing to cooperate with each
other, do not have such strict dependence relationship between them. For ex-
ample, an SDN controller in the control plane is typically hosted on a dedicated
server whose operations do not rely on data plane devices. Although the terms plane and layer are often used interchangeably in some SDN-related documents,
we believe it is beneficial for the readers to be aware of the difference between
the two concepts.
up the network elements and assigning resources to the respective SDN con-
troller. On the controller plane, management functions are needed for config-
uring SDN controllers, defining the scope of control given to each application,
and monitoring system performance. Application plane management typically
configures the contracts and service level agreements that are to be enforced by
the control plane. In addition, there are across-plane management functions
(e.g., configuration of the security associations that allow distributed functions
to safely intercommunicate) [9].
One of the main objectives of SDN is to enhance network capability of
service provisioning for meeting users’ requirements. In order to clearly pres-
ent the relationship among the key components in the SDN architecture from
a service perspective, ONF defines a basic service model for SDN in [5], as
shown in Figure 2.3.
In this model, a service consumer (end user) of an SDN network plays
the roles of both service requestor and resource user of the network. As a service
requestor, the end user exchanges information with an SDN controller through
a management-control session. As a resource user, it consumes the capabilities
provided by SDN data plane resources for packet forwarding and processing.
The service consumer controls its services via A-CPI with the controller by
invoking actions on a set of virtual resources that it perceives to be its own.
The SDN controller is responsible for virtualizing physical resources in the data
plane and exposing virtual resources to the service consumer. The controller
may also orchestrate virtual resources in order to provision the service required
by the end user. Therefore, the SDN controller plays a role of service provider
to end users.
Figure 2.4 ITU-T SDN architecture [10]. (Source: Recommendation ITU-T Y.3300: Framework of Software-Defined Networking.)
plane. The forwarding plane is basically equivalent to the data plane in the gen-
eral SDN architecture, except that management-related functions of network
devices are modeled by a separated operational plane. The operational plane
is responsible for monitoring and managing the operational states of network
devices (e.g., device activity state, the number of available ports, and status of
each port).
The control plane in the IRTF SDN architecture is responsible for mak-
ing decisions on how packets should be forwarded and pushing such decisions
to network devices for execution. Management plane is for monitoring, con-
figuring, and maintaining network devices and making decisions regarding the
states of a network device. The control plane focuses mostly on the forwarding
plane, while the management plane mainly interacts with the operational plane
in network devices. The separation between control and management leads to
split southbound interfaces, respectively, between the control and forwarding
planes and between the management and operational planes.
Distinction between control and management made in the IRTF SDN
architecture is to reflect the different characteristics of control and manage-
ment functionalities. Control and management operate on different timescales, that is, how fast and frequently the respective function is required to react to or manipulate
network operations. In general, control has much shorter time scales, roughly
in the range between milliseconds and seconds; while management has longer
time scales, such as minutes, hours, or even days. Control typically maintains
ephemeral states that have limited lifespan, while management often handles
The OpenFlow specification has evolved for a number of years. The non-
profit Internet organization openflow.org was created in 2008 soon after the
seminal research paper on OpenFlow [3] was published. The first release of
OpenFlow specification, version 1.0.0, appeared at the end of 2009. In March
2011, the ONF was formed for accelerating the delivery and commercialization
of SDN. ONF has OpenFlow as the core of its vision of SDN and has become
the responsible entity for evolving OpenFlow specification.
The architecture of a generic OpenFlow switch is shown in Figure 2.8. As
defined in the OpenFlow switch specification [13], the main components of an OpenFlow switch include a set of ingress ports and output ports, an OpenFlow
pipeline that contains one or multiple flow tables, and a secure channel for
communicating with one or multiple OpenFlow controllers. As in any packet
switch, the core function of an OpenFlow switch is to take packets that arrive
at ingress ports and forward them to their destined output ports. A unique as-
pect of an OpenFlow switch is embodied in the OpenFlow pipeline, which performs the packet matching function.
For each received packet, the OpenFlow switch will first identify the flow
that this packet belongs to and then execute the processing instructions speci-
fied for the flow. Searching for the matching flow of each received packet and
determining the actions that should be taken for the packet is the core respon-
sibility of the OpenFlow pipeline. The pipeline contains one or multiple flow
tables. Each entry in a flow table contains match fields and a set of instructions.
Matching starts at the first flow table and may continue to additional flow
tables of the pipeline. Flow entries match packets in priority order, with the first
matching entry in each table being used. If a matching entry is found, the in-
structions in the flow entry are executed. Instructions associated with each flow entry either contain the actions for packet processing or specify modification
messages is used mainly for initialization after the OpenFlow channel has been
established or for regularly checking the channel status.
Table 2.1
OpenFlow Flow Table Entry Fields
Match Fields | Priority | Counters | Instruction Set | Timeouts | Cookies | Flags
Table 2.2
Flow Match Fields Defined in OpenFlow Specification Version 1.5.1
Switch input port | ARP source IPv4 address
Switch physical input port | ARP target IPv4 address
Metadata passed between tables | ARP source hardware address
Ethernet destination address | ARP target hardware address
Ethernet source address | IPv6 source address
Ethernet frame type | IPv6 destination address
VLAN id | IPv6 flow label
VLAN priority | ICMPv6 type
IP DSCP (6 bits in ToS field) | ICMPv6 code
IP ECN (2 bits in ToS field) | Target address for ND
IP protocol | Source link-layer for ND
IPv4 source address | Target link-layer for ND
IPv4 destination address | MPLS label
TCP source port | MPLS TC
TCP destination port | MPLS BoS bit
UDP source port | PBB I-SID
UDP destination port | Logical port metadata
SCTP source port | IPv6 extension header pseudo-field
SCTP destination port | PBB UCA header field
ICMP type | TCP flags
ICMP code | Output port from action set metadata
ARP opcode | Packet type value
date of the action set associated with the packet, and adjustment in the
pipeline process sequence for the packet.
• timeouts: to specify the maximum amount of time or idle time before
the flow is expired by the switch.
• cookie: opaque data value chosen by the controller, which may be used
by the controller to filter flow entries affected by flow statistics, flow
modification, and flow deletion requests.
• flags: used to alter the way flow entries are managed.
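Putting the fields of Table 2.1 together, a flow entry can be modeled roughly as the following structure; the field names mirror the specification, while the types are simplified for illustration.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class FlowEntry:
        """Simplified model of an OpenFlow flow table entry (see Table 2.1)."""
        match_fields: Dict[str, object]          # header/pipeline fields to match
        priority: int                            # matching precedence within the table
        instructions: List[str]                  # e.g., apply-actions, write-actions, goto-table
        counters: Dict[str, int] = field(default_factory=lambda: {"packets": 0, "bytes": 0})
        idle_timeout: int = 0                    # seconds of inactivity before expiry (0 = none)
        hard_timeout: int = 0                    # absolute lifetime before expiry (0 = none)
        cookie: int = 0                          # opaque value chosen by the controller
        flags: int = 0                           # modifies how the entry is managed

    # A table-miss entry: matches everything (no match fields) at the lowest priority.
    table_miss = FlowEntry(match_fields={}, priority=0,
                           instructions=["output:CONTROLLER"])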
Table 2.2 lists the flow match fields defined in OpenFlow specification
version 1.5.1 [13]. A packet matches a flow entry if all the match fields of the
flow entry match the corresponding header fields and pipeline fields from the packet. The flow entry match fields may be wildcarded using a bit mask, meaning that any value that matches the unmasked bits in the packet's corresponding field will be a match. The OpenFlow extensible match (OXM) descrip-
tor specified by the OpenFlow protocol offers a generic and extensible packet-
matching capability. OXM defines a set of type-length-value pairs that can de-
scribe virtually any of the packet header fields that an OpenFlow switch would
need to use for matching.
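A sketch of how masked matching works, ignoring the OXM wire encoding, is shown below: each match field carries an optional bit mask, and a packet field matches when it agrees with the entry's value on all unmasked bits. The field names and values are illustrative.

    def field_matches(entry_value, mask, packet_value):
        """Masked comparison: only bits set in the mask must agree.

        A mask of None means an exact match on the whole field; an all-zero mask
        would match anything (a full wildcard).
        """
        if mask is None:
            return entry_value == packet_value
        return (entry_value & mask) == (packet_value & mask)

    def entry_matches(match_fields, packet_fields):
        """A packet matches when every match field of the entry matches; fields
        omitted from the entry are treated as wildcards."""
        for name, (value, mask) in match_fields.items():
            if name not in packet_fields:          # packet lacks the field entirely
                return False
            if not field_matches(value, mask, packet_fields[name]):
                return False
        return True

    # Example: match any source address in 10.0.0.0/24 going to TCP port 80.
    entry = {"ipv4_src": (0x0A000000, 0xFFFFFF00),    # 10.0.0.0/24
             "tcp_dst": (80, None)}
    packet = {"ipv4_src": 0x0A00002A, "tcp_dst": 80, "eth_type": 0x0800}
    print(entry_matches(entry, packet))               # -> True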
When processed by a flow table, the packet is matched against the flow
entries of the table. In order to search for a matched entry, packet header fields
are extracted and packet pipeline fields are retrieved. Depending on the packet
type, various packet header fields can be used for table lookup, such as Ethernet
source/destination addresses or IPv4 source/destination addresses. In addition
to packet headers, matching can also be performed against the ingress port,
the metadata field, and other pipeline fields. Figure 2.10 shows the 12-tuple of
header fields that are used in the packet matching process in a flow table.
The flow tables in an OpenFlow pipeline are numbered starting at 0 in
the order they can be traversed by packets. Pipeline processing for each packet
starts with ingress processing to match the packet against flow entries of flow
table-0. Other ingress flow tables may be used depending on the matching
outcome of the first flow table. If the matched flow entry in this table has a
Goto-Table-n instruction, where n is a table number, then the processing of this
packet will be transferred to table-n as the next step.
If the outcome of ingress processing is to forward the packet to an output
port, the OpenFlow switch will start performing egress processing in the con-
text of that output port. If no valid egress table is configured as the first egress
table, the packet will be processed by the output port and in most cases sent
out from the port. If a valid flow table is configured as the first egress table, the
packet must be matched against flow entries in that flow table, and other egress
Figure 2.10 Packet header fields used for matching packet in a flow table.
flow tables may be used depending on the matching outcome from that flow
table.
The matching and instruction execution procedure at a flow table is illus-
trated in Figure 2.11. If a flow entry is matched, the associated instruction set
of that entry is executed. Execution of the instructions may transfer the packet
to another flow table that has a greater table number (i.e., pipeline processing
can only go forward and not backward). An action set is associated with each
packet. This set is empty by default for a packet starting OpenFlow pipeline
processing. During the process, each matched flow entry for the packet may
modify the action set of the packet by using a Write-Action instruction or a
Clear-Action instruction. The action set is carried between flow tables. Pipeline
processing stops when the instruction set of a matched flow entry does not con-
tain a Goto-Table instruction. Then the actions in the action set of the packet
are executed.
OpenFlow specification requires the actions in an action set to be applied
in the order specified here, regardless of the order that they were added to the
set.
1. copy TTL inwards: apply copy TTL inward actions to the packet;
2. pop: apply all tag pop actions to the packet;
3. push-MPLS: apply MPLS tag push action to the packet;
4. push-PBB: apply PBB tag push action to the packet;
5. push-VLAN: apply VLAN tag push action to the packet;
6. copy TTL outwards: apply copy TTL outwards action to the packet;
7. decrement TTL: apply decrement TTL action to the packet;
8. set: apply all set-field actions to the packet;
9. qos: apply all QoS actions, such as set queue, to the packet;
10. group: if a group action is specified, apply the actions of the relevant group bucket(s) to the packet;
11. output: if no group action is specified, forward the packet on the port specified by the output action.
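The ordering rule can be illustrated by sorting the accumulated actions by a fixed precedence before executing them. The Python sketch below is a simplified illustration in which the action names and dictionary layout are placeholders rather than OpenFlow API objects; the precedence list mirrors the ordering given above.

# Precedence of action types in an action set, independent of insertion order.
ACTION_ORDER = [
    "copy_ttl_in", "pop", "push_mpls", "push_pbb", "push_vlan",
    "copy_ttl_out", "dec_ttl", "set_field", "qos", "group", "output",
]
RANK = {name: i for i, name in enumerate(ACTION_ORDER)}

def execute_action_set(action_set):
    """Execute accumulated actions in the spec-defined order, regardless of
    the order in which Write-Action instructions added them."""
    for action in sorted(action_set, key=lambda a: RANK[a["type"]]):
        print("executing", action["type"], action.get("args", {}))

# Actions were written in arbitrary order by matched flow entries ...
execute_action_set([
    {"type": "output", "args": {"port": 3}},
    {"type": "dec_ttl"},
    {"type": "push_vlan", "args": {"vid": 100}},
])
# ... but they run as: push_vlan, dec_ttl, output.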
Figure 2.11 Flow entry matching and instruction execution in a flow table [13].
If a packet does not match any flow entry in a flow table, this is called a
table-miss. The behavior of pipeline processing on a table-miss depends on the
table configuration. OpenFlow requires every flow table to have a table-miss flow
entry, which specifies how to process packets unmatched by other flow entries
in the flow table. Typical processes specified in a table-miss flow entry include
sending packets to the controller, dropping packets, or directing packets to a
subsequent table. The table-miss flow entry is identified by its match fields and
its priority. It wildcards all match fields (i.e., all fields are omitted) and has the
lowest priority.
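Putting table lookup, the Goto-Table instruction, and the table-miss entry together, pipeline processing can be sketched roughly as follows. This is a deliberately simplified Python model: real switches match on specific header fields and priorities as defined by the specification, not on the ad hoc predicates used here.

class FlowEntry:
    def __init__(self, priority, match_fn, actions=None, goto_table=None,
                 send_to_controller=False):
        self.priority = priority
        self.match_fn = match_fn           # callable(packet) -> bool
        self.actions = actions or []       # actions written to the action set
        self.goto_table = goto_table       # next table number, or None
        self.send_to_controller = send_to_controller

def lookup(table, packet):
    """Return the highest-priority matching entry, or None (table-miss)."""
    candidates = [e for e in table if e.match_fn(packet)]
    return max(candidates, key=lambda e: e.priority) if candidates else None

def pipeline(tables, packet):
    """Process a packet through numbered flow tables starting at table 0."""
    action_set, table_id = [], 0
    while table_id is not None:
        entry = lookup(tables[table_id], packet)
        if entry is None:                  # no table-miss entry configured: drop
            return []
        if entry.send_to_controller:       # e.g., the table-miss flow entry
            return ["packet_in to controller"]
        action_set.extend(entry.actions)
        table_id = entry.goto_table        # None stops pipeline processing
    return action_set                      # actions executed when the pipeline ends

tables = {
    0: [FlowEntry(10, lambda p: p["eth_type"] == 0x0800, goto_table=1),
        FlowEntry(0, lambda p: True, send_to_controller=True)],   # table-miss
    1: [FlowEntry(10, lambda p: True, actions=["output:2"])],
}
print(pipeline(tables, {"eth_type": 0x0800}))   # ['output:2']
print(pipeline(tables, {"eth_type": 0x0806}))   # ['packet_in to controller']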
Figure 2.12 shows a flow chart presented in [13] that illustrates the Open-
Flow pipeline processing for transferring a packet through an OpenFlow switch.
Figure 2.12 A flow chart for OpenFlow pipeline processing. (Source: ONF TS-25 OpenFlow
Switch Specification.)
Figure 2.13 Two processing directions in the SDN control plane and the associated objects.
functions for collecting and synthesizing network status, managing network to-
pology information, and presenting a global network view and event informa-
tion to the application layer. The downward processing direction is to translate
application requests, which typically specify policies for network operations,
into action rules for packet processing in the data plane. Main functions in this
direction include generating, installing, and updating action rules into flow ta-
bles at data plane devices, ensuring validity and consistency of the action rules,
and maintaining a database of the flows being managed. Two types of objects
are used by an SDN controller. One type is used for network controlling in the
downward direction, including policies imposed by the application layer and
action rules for packet processing. The other type is used for network monitoring
in the upward direction, in the form of local and global network
topology and status information.
It is worth noting that an SDN controller cooperates with the applica-
tions running on the controller for implementing network policies regarding
routing, forwarding, load balancing, and the like. A controller often comes with
its own set of common application modules, such as a learning switch, a router,
a firewall, and a load balancer. Such modules are SDN applications but are
often bundled with the controller, similar to the utility programs
bundled with an operating system.
The architecture of a generic SDN controller is depicted in Figure 2.14.
As shown in the figure, an SDN controller comprises a module for realizing
D-CPI, a module for data plane abstraction, and a module for implement-
ing A-CPI. The D-CPI module implements SDN southbound protocol(s) for
communicating with data plane devices. Such protocols include OpenFlow
protocol and its alternatives such as ForCES protocol. The abstraction module
constructs a global abstract view of the data plane network based on which ap-
plications may program network operations. Main functions performed in this
module include user/network device discovery, network device management,
map that gives the costs of the connections shown on the network map. ALTO
allows the operator to specify policies and rules to configure the generation of
abstract network topologies from the physical topology of the underlying net-
work. This configurability of topology management in ALTO is very desirable
to support diverse SDN applications that may need different types of abstract
topologies of the data plane network. Also, the configurable topology manage-
ment makes ALTO naturally embrace multitenant virtual networks sharing a
common data plane substrate.
Although current applications of the topology collection and representa-
tion service provided by ALTO seem to be mainly in content delivery networks
(CDN), researchers are actively promoting its broader adoption in SDN net-
works [15]. More recently, the IETF started an effort called the interface to the
routing system (I2RS) [16] toward developing a standard A-CPI protocol for SDN, in
which standardization of generalized network topology management is identi-
fied as an important element. A key feature of I2RS topology management is its
ability to collect topology data from diverse sources, including device moni-
toring, routing protocols, and other sources. I2RS topology management also
normalizes the collected topology data and transforms them into a standardized
format that is portable across SDN applications.
The development of topology management technologies for SDN con-
trol, from LLDP to BGP-LS to ALTO and I2RS, follows a similar evolution
track of the entire SDN ecosystem, from proprietary systems to service-oriented
open systems with standard southbound and northbound interfaces.
2.6.2.2 Network Traffic Monitoring
In addition to topology management, which presents a relatively static view
of the data plane infrastructure, the SDN control plane also performs traffic
monitoring that reflects the dynamic aspect of network status. Traffic monitor-
ing function collects statistics of traffic load and packet forwarding actions in
the network (e.g., the duration time, packet number, data size, and bandwidth
share of a flow). Typically, individual data plane devices collect and store local
traffic statistics in their own storage, which then can be either retrieved by a
controller in a pull mode or proactively reported to a controller in a push mode.
In the pull mode, a controller collects the statistics of a set of flows that
match some specification from chosen devices. In this way, the controller may
limit the communication overheads introduced by traffic monitoring but may
not be able to provide timely responses to events occurring in the data plane.
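A pull-mode collector can be sketched as a periodic polling loop. In the Python sketch below, get_flow_stats stands in for whatever statistics-request primitive the southbound protocol provides (in OpenFlow, a flow statistics request); the function name, its parameters, and the returned fields are assumptions for illustration only.

import time

def poll_flow_stats(switches, match_spec, get_flow_stats, interval=10.0, rounds=3):
    """Pull-mode monitoring: periodically request statistics for flows that
    match `match_spec` from a chosen set of switches."""
    history = []
    for _ in range(rounds):
        snapshot = {sw: get_flow_stats(sw, match_spec) for sw in switches}
        history.append(snapshot)
        time.sleep(interval)   # the controller chooses the polling interval,
                               # trading overhead against freshness
    return history

# Usage with a stub in place of a real southbound statistics request:
fake_stats = lambda sw, spec: {"packets": 100, "bytes": 64000, "duration_s": 30}
print(poll_flow_stats(["s1", "s2"], {"ipv4_dst": "10.0.0.0/8"},
                      fake_stats, interval=0.01, rounds=2))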
In the push mode, statistics are sent by data plane devices to the controller
either periodically or triggered by some events (e.g., a flow counter reaches a
preset threshold). This model allows the controller to obtain real-time moni-
toring of network status but at the cost of more switch-controller communi-
cation overheads. The two monitoring modes have different characteristics in
to indicate which rule set should be applied to process the packet. After the
controller updates the rule set for the flow, all the packets that enter the network
after the rule update will be stamped with the new version number at
the ingress switch and will be processed using the updated rule set at all
switches. In this way, no more packets will be processed using the original rule
set after a certain time period, after which the original rule set can be removed. Before
that, both the original and updated rule sets will be kept in all the switches that
are involved in forwarding packets for this flow.
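The version-stamping scheme described above can be sketched as follows; the data structures and names are illustrative only and are not taken from a particular controller.

class VersionedSwitch:
    """Keeps both the original and the updated rule set until the old version
    is retired, and processes each packet with the rule set matching the
    version number stamped on it at the ingress switch."""

    def __init__(self):
        self.rule_sets = {}            # version number -> list of rules

    def install(self, version, rules):
        self.rule_sets[version] = rules

    def retire(self, version):
        # Safe once no in-flight packet still carries this version number.
        self.rule_sets.pop(version, None)

    def process(self, packet):
        rules = self.rule_sets[packet["version"]]
        return [r for r in rules if r["match"](packet)]

sw = VersionedSwitch()
sw.install(1, [{"match": lambda p: True, "action": "output:1"}])
sw.install(2, [{"match": lambda p: True, "action": "output:2"}])
old_pkt = {"version": 1}     # entered the network before the update
new_pkt = {"version": 2}     # stamped with the new version at the ingress switch
print(sw.process(old_pkt)[0]["action"], sw.process(new_pkt)[0]["action"])
sw.retire(1)                 # remove the original rule set after the grace period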
in Java supports multithreading for achieving high performance and linear per-
formance scaling. Since most of the current open source SDN controllers were
forked from the original Beacon source code, its multithreading scheme for im-
proving performance has significant influence on the designs of contemporary
SDN controllers. Maestro is also a Java-based controller that exploits the mul-
tithread mechanism together with additional optimization techniques, includ-
ing I/O batching and core-thread binding for enhancing delay and through-
put performance [21]. Another representative design for enhancing controller
performance using parallelism is NOX-MT [22]. NOX-MT is a multithread
extension of the single thread C++ implementation of NOX controller. NOX-
MT also uses optimization techniques including I/O batching for minimiz-
ing I/O overhead and the Boost asynchronous I/O (ASIO) library for simplifying
multithread operations. Benchmark testing results reported in [22] for different
controllers, including Beacon, NOX, Maestro, and NOX-MT, show perfor-
mance advantages of NOX-MT in terms of both response time and throughput
for handling data plane requests.
the order of events published by the same controller, and be resilient against
network partitioning. The distributed event propagation system in HyperFlow
is implemented based on WheelFS [24].
ONIX is a distributed SDN control platform where one or multiple con-
troller instances may run on one or more clustered servers [25]. Figure 2.16 de-
picts the ONIX platform structure. As a control platform, ONIX is responsible
for providing SDN applications with programmatic access to network states
for controlling data plane operations. Controllers in the ONIX platform oper-
ate on a global view of the network. Network state information is stored in a
network information base (NIB) at each controller, which is responsible for
disseminating network states to its peer controllers.
Different SDN applications may have different requirements on the
scalability, update frequency, and consistency of the network states that they
flow table entries on switches), it delegates the requests to the respective local
controllers.
Similar to HyperFlow, communications between the root and local con-
trollers in Kandoo are based on a messaging system for event propagation. The
root controller can subscribe to specific events in local controllers using the
messaging channel. However, unlike the flat structure of HyperFlow, the two-
layer structure of Kandoo only allows event messaging between local and root
controllers and no direct communication between local controllers.
2.6.3.3 SDN Controller Placement
Although distributed controller deployment offers an approach to enhancing
performance, scalability, and reliability of the SDN control plane, it also intro-
duces extra complexity due to the required intercontroller communications and
collaboration. Therefore, an SDN control plane framework must be carefully
designed in order to achieve an optimal balance between network performance
and control complexity. Specifically, for a given network topology, network de-
signers need to determine how many controllers are needed and where they
should be deployed. This design choice is often referred to as the controller
placement problem. The answers to these two questions influence every aspect
of the control plane, from state distribution to fault tolerance to performance
metrics, and therefore have a significant impact on the performance and cost of a
control plane design.
SDN controller placement is a complex problem that has not been fully
solved yet. Its complexity comes from multiple aspects. First, the optimization
objectives for controller placement vary in different networking scenarios. For
example, in wide-area networks the “best” controller placement is typically ex-
pected to minimize the state propagation delay between controllers; while in
data center and enterprise networks the objective could become maximizing
fault tolerance or load balancing. Second, different control structures may have
different requirements on intercontroller collaboration; thus, their performance
needs to be evaluated differently. For example, a flat distributed controller de-
ployment such as HyperFlow requires consistency of the full network topology
to be kept across all controllers, while a hierarchical control structure such as
Kandoo allows local controllers to only maintain information about a part of
the entire network topology.
A comparative study on various proposed controller placement solutions
for wide area networks is reported in [27]. The authors chose switch-to-con-
troller latency as the performance metric for their evaluations, since such la-
tency imposes fundamental limits on reaction delay, fault discovery, and event
propagation efficiency that can be achieved by a controller placement scheme.
As expected, their findings show that there is no general controller placement
rule applicable to every network. Rather, the effectiveness of various solutions
depends on multiple factors such as network topology and operators’ require-
ments. More surprisingly, results obtained in [27] indicate that a single control-
ler location can be sufficient to meet the reaction-time requirements in many
existing networking scenarios. Of course, single controller deployment still has
issues related to reliability and resilience.
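For a latency-driven formulation like the one evaluated in [27], candidate placements can be compared by their worst-case and average switch-to-controller latency. The brute-force Python sketch below assumes a precomputed shortest-path latency matrix and is only practical for small topologies; it is an illustration of the metric, not the method used in [27].

from itertools import combinations

def placement_cost(latency, controllers):
    """Each switch attaches to its nearest controller; return (worst, average)."""
    per_switch = [min(latency[s][c] for c in controllers) for s in range(len(latency))]
    return max(per_switch), sum(per_switch) / len(per_switch)

def best_placement(latency, k):
    """Exhaustively pick k controller locations minimizing worst-case latency."""
    nodes = range(len(latency))
    return min(combinations(nodes, k), key=lambda cs: placement_cost(latency, cs))

# Toy 4-node topology: latency[i][j] is the shortest-path delay between nodes.
latency = [[0, 1, 2, 3],
           [1, 0, 1, 2],
           [2, 1, 0, 1],
           [3, 2, 1, 0]]
print(best_placement(latency, 1))   # a single, roughly central location
print(best_placement(latency, 2))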
interface in the current SDN architecture makes it more difficult to cope with
the heterogeneity in SDN domain controllers. Another challenge to interdo-
main SDN control lies in sharing network states among the controllers in dif-
ferent domains. Aggregation and abstraction of network topologies and states
for individual domains are necessary due to both scalability and security rea-
sons. Therefore, a standard information model that provides an appropriate
level of state abstraction is needed.
Multidomain control is a challenging problem that has attracted research
attention of the SDN community. Exciting progress has been made in this area,
although the problem still requires more thorough investigation.
An early effort made for enabling interdomain SDN control is a message
exchange protocol for SDN across multiple domains called SDNi, which was
proposed by IRTF as an Internet Draft at the end of 2012. In this draft, an
SDN domain is defined as a portion of a network infrastructure determined by
network operators. Each domain has a (logically centralized) SDN controller
that controls multiple SDN-enabled networking devices and maintains a global
view of the portion of the network covered by the domain. Inside each SDN
domain, its controller defines domain-specific policies for monitoring network
states and programming network operations. Such policies may not be made
public; therefore, a domain controller does not know the existence of such poli-
cies in other domains. Two SDN domains are adjacent if there exists physical
network link(s) between them [28].
SDNi was proposed as a protocol for interfacing SDN domains for ex-
changing control-related information across multiple SDN domains and co-
ordinating the functions performed by different domain controllers. More
specifically, main responsibilities of SDNi include the following two aspects:
(a) exchange the reachability information required by interdomain routing
across SDN domains, and (b) coordinate the functions of domain controllers
for establishing end-to-end flow paths traversing multiple SDN domains. The
following types of messages are defined in the SDNi protocol: messages for
reachability information update, messages for flow setup/tear-down/update
request, and messages for capability update request, including both network
related capabilities such as bandwidth and system/software capabilities inside
the domain [28].
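Purely for illustration, the SDNi message types listed above could be represented as simple data classes along the following lines; the class and field names are assumptions, since [28] does not fix a concrete encoding.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ReachabilityUpdate:
    """Prefixes reachable through the advertising SDN domain."""
    domain_id: str
    prefixes: List[str]

@dataclass
class FlowSetupRequest:
    """Request to set up (or tear down / update) a cross-domain flow segment."""
    flow_id: str
    src_domain: str
    dst_domain: str
    qos: Dict[str, float] = field(default_factory=dict)   # e.g., {"bandwidth_mbps": 100}

@dataclass
class CapabilityUpdate:
    """Network-related and system/software capabilities of a domain."""
    domain_id: str
    capabilities: Dict[str, str]

msg = FlowSetupRequest("f42", "domain-1", "domain-n", {"bandwidth_mbps": 50})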
The SDNi proposal suggests using extensions of BGP and SIP over SCTP
to exchange information for implementing the SDNi protocol. However, the
hop-by-hop nature of BGP forces interdomain routing to be performed in a decentralized
manner without global knowledge of end-to-end routes. The scalability of
SIP is also an issue, especially in multidomain networking environments. In
addition, SDNi mainly focuses on information exchanging between domain
controllers without a clearly defined mechanism for coordination between con-
trol functions for service orchestration. Therefore, how to realize the proposed
tions, which has been employed for loosely coupled communications between
different components in OpenStack.
Each DISCO controller has a group of agents for interdomain network
control. The reachability agent advertises the presence of hosts in domains to
make them reachable from other domains. The connectivity agent shares with all
other domains the presence of peering links with neighboring domains.
The monitoring agent periodically sends information about available bandwidth
and latency between all the pairs of peering points. The path computation agent
uses the connectivity information to make local routing decisions. The reserva-
tion agent is responsible for requesting interdomain flow management from
neighboring domains to set up flow paths traversing other domains.
Another approach to achieving interdomain SDN control is to use the hi-
erarchical control structure together with service orchestration, as proposed in
[30]. In its simplest form, a hierarchical control/orchestration structure consists
of two levels, as shown in Figure 2.19. The lower level comprises a group of
controllers, one for each network domain, for performing SDN control func-
tions within the domain scopes. The higher level has a main controller that
serves as an orchestrator to coordinate multiple domain controllers for provi-
sioning end-to-end services across domains.
The main controller essentially composes the network services provided
by individual domains into end-to-end network services. For example, in the
network shown in Figure 2.19, when the main controller receives a request for
establishing a flow path from a source node S in domain-1 to a destination node
D in domain-n transiting through domain-2, the main controller translates
this request into requests to the controllers in all the involved domains—the
domains 1, 2, and n. The controller in each of these domains then sets up a seg-
ment of the flow path within the respective domain. Finally, the main controller
establishes the interdomain links (links between domains 1 and 2 and between
domains 2 and n) to assemble the path segments in these domains together to
form an end-to-end flow path from node S to node D.
Figure 2.19 A hierarchical control and orchestration architecture for multidomain SDN.
Interfaces between domain controllers and the main controller play an
important role in the hierarchical orchestration structure for interdomain SDN
control. The main controller interacts with individual domain controllers via
the northbound interface supported by the domain controllers. Considering
heterogeneity in the SDN controllers across autonomous domains, the interface
between the main controller and domain controllers needs to provide a layer of
abstraction that hides domain specific control functions. On the other hand,
this interface should also expose the domain control capabilities as services that
can be accessed and orchestrated by the main controller. As suggested in [30], a
RESTful web service interface is a good choice for meeting these requirements.
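As a sketch of such a RESTful interaction, the main controller might request a path segment from a domain controller along the following lines; the endpoint URL and the JSON fields are hypothetical, since [30] does not define a concrete API.

import json
from urllib import request

def request_path_segment(domain_ctrl_url, flow_id, ingress, egress):
    """Ask a domain controller (hypothetical REST endpoint) to set up the
    intradomain segment of an end-to-end flow path."""
    body = json.dumps({"flowId": flow_id, "ingress": ingress, "egress": egress}).encode()
    req = request.Request(domain_ctrl_url + "/segments", data=body,
                          headers={"Content-Type": "application/json"}, method="POST")
    with request.urlopen(req) as resp:
        return json.loads(resp.read())   # descriptor of the created path segment

# The main controller would issue one such request per involved domain
# (domain-1, domain-2, ..., domain-n) and then stitch the returned segments
# together across the interdomain links.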
naturally lend themselves to the proactive mode are usually for some sort of
preconfiguration of the network topology or presetting for traffic engineering
(e.g., applications of source routing and multipath routing).
Proactive applications can be implemented using either language APIs
(e.g., a Java or Python API) or RESTful APIs. The RESTful APIs pro-
vided by an SDN controller and called by applications can operate at different
levels. They could be low-level APIs that allow applications to directly program
flow table entries at switches. But more typically, RESTful APIs offer a high-
level data plane abstraction so that SDN applications can program network
behaviors based on an abstract view of the underlying network infrastructure.
Proactive applications typically have less frequent interaction with the SDN
controller and relatively loose requirements on the communication delay over the A-
CPI. Therefore, such applications have the flexibility to be deployed on servers
that are separated from the controller hosting platform. Virtually all contempo-
rary SDN controllers support RESTful northbound API for communications
with proactive applications hosted on different servers.
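For example, a proactive application hosted on a separate server could program the network through a controller's RESTful API at either level, roughly as sketched below; the controller address, URLs, and payload formats are hypothetical stand-ins, since every controller defines its own northbound schema.

import json
from urllib import request

CONTROLLER = "http://controller.example:8181"   # hypothetical controller address

def put_json(url, payload):
    """Send a JSON document to the controller's (hypothetical) RESTful API."""
    req = request.Request(url, data=json.dumps(payload).encode(),
                          headers={"Content-Type": "application/json"}, method="PUT")
    with request.urlopen(req) as resp:
        return resp.status

def preinstall_flow_entry():
    # Low-level style: program one flow entry on one switch directly.
    return put_json(CONTROLLER + "/flows/switch-1/table-0/flow-1",
                    {"priority": 100, "match": {"ipv4_dst": "10.0.0.5/32"},
                     "actions": [{"output": 2}]})

def request_host_to_host_path():
    # High-level style: express the goal against an abstract network view and
    # let the controller compute and install the individual flow entries.
    return put_json(CONTROLLER + "/intents/host-to-host",
                    {"src": "10.0.0.1", "dst": "10.0.0.5",
                     "constraint": {"max_latency_ms": 20}})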
Reactive applications typically work with reactive flow management func-
tions of the SDN controller to handle network events. The most common
events are flow table-misses, triggered by packets for which the switches find no
matching flow entry. These packets are encapsulated in packet_in messages and
forwarded to the controller and thence to an application. The application ex-
amines the packets and makes decisions regarding how to process the packets.
The outcomes are often setting up new flow table entries at switches that the
new flows are expected to traverse.
Reactive applications typically have frequent interaction with controllers.
Standard RESTful APIs may not have all the capabilities required for the com-
munications between a reactive application and an SDN controller. For exam-
ple, reactive applications often need to be asynchronously notified of incoming
packets forwarded to the controller by switches. It is not straightforward to
implement this kind of asynchronous notification using a basic RESTful API.
Therefore, reactive applications are often written in the programming language
of the controller and leverage the language API to interact with the controller.
Consequently, reactive applications tend to be tightly coupled with a controller
and are typically hosted on the same server as the controller.
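The shape of such a reactive application can be sketched as a learning-switch-style packet_in handler. The switch object below is a stub invented for illustration; it does not correspond to any specific controller's language API.

class StubSwitch:
    """Minimal stand-in for a switch handle exposed by a controller's language API."""
    def __init__(self, name): self.name = name
    def flood(self, pkt, in_port): print(f"{self.name}: flood (in_port={in_port})")
    def install_flow(self, match, actions, priority):
        print(f"{self.name}: install_flow {match} -> {actions} (prio {priority})")
    def send_packet(self, pkt, port): print(f"{self.name}: send out port {port}")

mac_to_port = {}

def on_packet_in(switch, in_port, eth_src, eth_dst, raw_packet):
    """Reactive handler for table-miss packet_in events (learning-switch style)."""
    mac_to_port[(switch.name, eth_src)] = in_port        # learn the source location
    out_port = mac_to_port.get((switch.name, eth_dst))
    if out_port is None:
        switch.flood(raw_packet, in_port)                # unknown destination: flood
    else:
        # Install a flow entry so later packets of the flow stay in the data plane.
        switch.install_flow({"eth_dst": eth_dst}, [{"output": out_port}], priority=10)
        switch.send_packet(raw_packet, out_port)

s1 = StubSwitch("s1")
on_packet_in(s1, 1, "aa:aa", "bb:bb", b"...")   # bb:bb unknown yet -> flood
on_packet_in(s1, 2, "bb:bb", "aa:aa", b"...")   # aa:aa learned at port 1 -> install flow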
are tightly coupled in that a network domain can run a certain architecture only
if the architecture is explicitly supported by the network infrastructure of that
domain. Such tight coupling implies that any significant architectural change
requires major upgrade of network infrastructure, which is often very expensive
and time consuming [38].
SDN offers a promising approach for meeting the service provisioning re-
quirements for future networks. The key principle of separated data and control
planes and logically centralized controllers in SDN may significantly enhance
network capabilities for service provisioning. However, the current SDN tech-
nologies still have limitations that may prevent network designers from fully ex-
ploiting potentials of this emerging networking paradigm for supporting future
network services. The fundamental issue of architecture and infrastructure cou-
pling still exists in the current SDN architecture. For example, adopting an alter-
native network architecture requires the SDN data plane to be able to perform
fully general packet matching and forwarding actions, which is not supported
yet by currently available SDN D-CPI protocols (e.g., OpenFlow).
A network architecture essentially provides three types of interfaces: host-
network interface, operator-network interface, and packet-switch interface.
The host-network interface allows the hosts (actually users of the hosts, includ-
ing upper layer applications running on the hosts) to inform the network of
their service requirements (e.g., using packet headers to specify the destinations
of data transfer). The operator-network interface is used by the network op-
erators to specify their requirements regarding network operations (e.g., traffic
engineering, virtualization, tunneling, and so on). The packet-switch interface
determines how a packet is identified and thus processed by a switch (e.g., some
packet header fields are used as an index to look up flow tables at the switch for
determining the appropriate actions to be taken for the packet) [39].
In the original IP-based Internet design, the host-network interface and
packet-switch interface are essentially identical; both rely on information car-
ried in IP packet headers. Therefore, each router checks packet header fields to
interpret service requirements (e.g., the destinations for data delivery) as well as
determine appropriate forwarding action (e.g., the next-hop router interface for
packet forwarding). There is no explicit operator-network interface provided by
the IP-based Internet design.
Evolution of networking technologies has led to label-based packet
switching mechanisms, with MPLS as the representative example. MPLS dis-
tinguishes the host-network interface and packet-switch interface by decou-
pling packet labels used for data transportation from the host protocol used for
specifying service requirements. However, MPLS does not provide a general
operator-network interface.
Lack of standard operator-network interface causes complex and inflexi-
ble control and management when networks grow into a large scale and attempt
The SDIA framework does not specify any particular implementation ap-
proach for either the network edge or the core; instead, the framework enables
various implementation technologies to be developed and deployed indepen-
dently in these two network components. On the other hand, software-based
forwarding is suggested for edge switches (i.e., performing packet forwarding
and processing functions at edge switches using software running on commod-
ity processors). Interestingly, such an implementation suggestion is aligned with
the concept of network function virtualization (NFV), which advocates real-
izing network functions as software instances hosted on commodity servers.
Since all architectural dependencies in SDIA reside at the edge, they can be
easily modified if the edge switches and edge controller are implemented based
on software. Therefore, the SDIA essentially transforms architectural evolution
from a hardware problem to a software problem [38]. In addition, employing
NFV in the network edge will enable the protocols supporting alternative net-
work architectures to be realized as virtual network functions (VNFs) that can
be deployed and upgraded easily. This offers a promising approach to support-
ing multitenant SDNs with alternative architectures for meeting diverse service
requirements by sharing a common network core infrastructure that may be
implemented with various transportation technologies.
Consider end-to-end delivery of packets from a host X in domain A to a host Y in domain B; this end-to-end service can be decomposed into the following tasks:
• Interdomain task: this is the high-level task for forwarding packets from
domain A to domain B, which may require traversing one or multiple
intermediate domains.
• Intradomain transit task: a domain must be able to forward packets
from an ingress peering domain to an egress peering domain to support
the interdomain task.
• Intradomain delivery task: domain A must forward packets from host
X to the edge, where the interdomain task takes over; domain B must
forward packets from the edge of this domain to host Y. In addition,
when hosts X and Y reside in the same domain, that domain must forward
packets directly from X to Y (host-to-host delivery).
A key objective of the SDIA proposal is to make the realization of all the
required tasks, and thus of the entire end-to-end service, independent of
the implementation technologies employed in any network domain. Follow-
ing the architectural principle of SDIA, the network of each domain is sepa-
rated into an edge and a core. The core of a network domain may use any
internal forwarding and control mechanisms that the domain chooses, rang-
ing from SDN to conventional intradomain routing/forwarding protocols, as
long as they support intradomain tasks, including edge-to-edge, host-to-edge,
and host-to-host packet forwarding. The interdomain task will be implemented
through collaboration among the edge switches of different domains.
A key to the interdomain task is to compute interdomain routes and then
provide instructions to each domain in terms of domain-level packet forward-
ing. An important aspect of the SDIA framework is to suggest a strict separation
between interdomain and intradomain addressing, which will be handled by
the network edge and core, respectively, and each domain may choose its own
internal addressing scheme. Interdomain addressing may be based on some
form of domain identifier. The entire interdomain task should be realized by
just leveraging the domain identifiers, without requiring knowledge of any
intradomain address.
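The strict separation between interdomain and intradomain addressing can be sketched as follows, with edge switches forwarding on domain identifiers only and the destination domain resolving its own internal address; all identifiers and table contents are illustrative.

def edge_forward(local_domain, interdomain_next_hop, local_hosts, packet):
    """Edge-switch logic in an SDIA-style design."""
    dst_domain = packet["dst_domain"]
    if dst_domain == local_domain:
        # Intradomain delivery task: resolve the host with the domain's own
        # internal addressing scheme; domain identifiers are no longer needed.
        return ("deliver-internally", local_hosts[packet["dst_host"]])
    # Interdomain task: forward based on the destination domain identifier only.
    return ("forward-to-peer-domain", interdomain_next_hop[dst_domain])

# Edge switch of domain A: interdomain routes computed by the edge controllers.
print(edge_forward("A", {"B": "C", "D": "B"}, {},
                   {"dst_domain": "B", "dst_host": "hostY"}))
# Edge switch of domain B: uses its own internal labels for final delivery.
print(edge_forward("B", {}, {"hostY": "internal-label-42"},
                   {"dst_domain": "B", "dst_host": "hostY"}))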
An SDIA-based end-to-end network service delivery system is illustrated
in Figure 2.22. In such a service delivery system, only edge switches and edge
controllers need to understand interdomain protocols, including the interdo-
main addressing scheme. The network core of each domain is only responsible
for providing the intradomain tasks by using its own choice of forwarding,
routing, and addressing schemes. Each domain has one (logically centralized)
SDN controller (i.e., the edge controller) that controls all the edge switches of
the domain. The edge controller participates in interdomain route computation
then requests the network core to provide the required intra-domain tasks for
supporting interdomain service delivery.
An important aspect of SDIA-based end-to-end service provisioning is
that the only components involved in the interdomain task are the edge con-
trollers and edge switches. This design has some profound implications on ar-
chitectural evolution of SDN for supporting future network services. Evolution
of interdomain routing (e.g., changing from the current BGP to a new rout-
ing protocol) only requires changing software in the edge controllers and edge
switches to implement the new routing protocol. Therefore, deployment of
new interdomain protocols for meeting new service requirements is simplified
[39]. When network edges are implemented with NFV technologies, functions
for various interdomain protocols can be easily deployed as VNFs. This allows
multiple interdomain protocols, and more fundamentally alternative network
architectures, to coexist in the network. In addition, VNFs for various protocols
may be loaded onto and unloaded from different edge switches on demand.
Such on-demand deployment of network protocols may significantly enhance
network capability of supporting adaptive and elastic services, which is a key
expectation of future network service provisioning.
current protocols or adopting new protocol standards, let alone applying user-
defined protocols for meeting diverse service requirements. As a consequence,
expanding the capability of OpenFlow to support additional networking tech-
nologies leads to continual modification of the specification and makes Open-
Flow protocol more and more complicated over time. However, it still offers
little support for clean slate solutions expected in future networks.
Another challenge that OpenFlow is facing comes from its limited sup-
port of stateful network processing that can be performed on data plane devices.
Current OpenFlow standard lacks the capability of actively monitoring flow
states and programming switch operations without the involvement of a con-
troller. Relying on the controller for tracking and managing all network states
not only causes scalability and performance issues, but also limits the agility of
network infrastructure for dynamic service provisioning.
SDN control and management on D-CPI can be divided into two dis-
tinct stages: datapath configuration and run-time control. The configuration
stage defines the packet processing functions that can be provided by a switch;
while the run-time control stage uses these functions to control traffic flows
through the switch. Currently, OpenFlow mainly focuses on the run-time con-
trol stage, which controls switch operations for packet forwarding by managing
flow tables in switches. An OpenFlow switch can only recognize a set of pre-
determined packet header fields and perform predefined actions for processing
packets. The protocol(s) that can be supported by a switch (thus the packet
header formats that the switch can recognize and process) are typically prede-
termined by the switch implementation and cannot be easily configured by a
controller.
Until recently, the development of the OpenFlow specification has fol-
lowed a reactive rather than proactive path, continuously adding new pro-
tocol features in new versions. The SDN research community has since realized
that a different approach to evolving the D-CPI standard is needed for meeting
the requirements of future network services. The new approach should sup-
port a fully programmable SDN data plane for performing protocol-oblivious
packet processing with supported protocols that can be easily configured by a
controller. That is, the new approach should allow both datapath configuration
and run-time control of SDN switches to be programmable through an SDN
controller.
protocols with an abstract model for packet forwarding engine, which provides
an abstraction of data plane functionalities.
The position of PI-layer in the SDN architecture is shown in Figure 2.23.
This layer is between the control plane and data plane, therefore playing the
role of D-CPI. Unlike current OpenFlow-based D-CPI protocol that focuses
on run-time control of data plane switches, PI-layer allows SDN applications
and network controllers to perform datapath configuration and express much
more flexible packet processing functionalities. Specifically, the PI-layer is pro-
posed for achieving the following three goals [40]:
Protocol independence: the PI-layer should allow an SDN controller to
program data plane devices without being tied to specific network protocols
(and thus packet formats). The controller should be able to configure (a) a
packet parser for extracting header fields with particular names and types, and
(b) a collection of typed match-action tables that process these headers.
Target independence: the PI-layer should enable SDN controllers and ap-
plications to program data plane devices without knowing specifics of the de-
vices, thus allowing switches with heterogeneous implementations to coexist in
the data plane.
Reconfigurability: the PI-layer should allow an SDN controller to recon-
figure the packet parsing and processing functions that are performed by data
plane devices in the field.
As summarized in [40], these goals lead to the following three guiding
principles for the PI-layer.
packet headers, and datapath actions for processing packets. The PI-
layer should be based on a protocol-oblivious primitive instruction set
that can be used for programming datapath processing to support both
existing and new protocols.
• The PI-layer should help create an SDN software development eco-
system. The PI-layer should allow a programmer to define behaviors of
SDN switches using a high-level language and then use a compiler to au-
tomatically generate switch configurations for various switch platforms.
• The PI-layer should support the existing OpenFlow specification. The pro-
posed PI-layer enables a general D-CPI that may lead to some new data-
path protocols that are different from the existing OpenFlow; however, the
PI-layer should be designed to be backward compatible to allow effective
evolution of the existing OpenFlow protocol.
Figure 2.24 An abstract model for packet forwarding in SDN switch [40].
and reported in [42], where the instructions are grouped and summarized as
follows.
Editing Instructions
These kinds of instructions are used to edit packet data or packet metadata.
Editing packet data is an important function of the packet forwarding engine
that is required by almost all datapath protocols. SET_FIELD, ADD_FIELD,
and DEL_FIELD are three of the most useful instructions in this group. By us-
ing these three instructions, a control program can define a customized packet
field for forwarding processing. The remaining editing instructions, such as
ALG, INC_FIELD, DEC_FIELD, CALCULATE_CHECKSUM, and some
other logical operation instructions, all perform some kind of calculation on
the packet data.
Forwarding Instructions
The instructions in this group are used for packet forwarding. The whole for-
warding process for one packet in an SDN switch might contain multiple stag-
es, each with a different type of flow table according to its functionality (e.g.,
a layer-3 parse table and a layer-3 encapsulate table). When the processing in a flow
table is complete, the GOTO_TABLE instruction can be used to transfer the pack-
et data to the next flow table. The COUNTER instruction can track the number
of packets that have already been processed. The OUTPUT instruction sends the
packet out of the switch through the specified port. MOVE_PACKET_OFF-
SET and SET_PACKET_OFFSET can be used to specify a header field in a
particular location in the packet. For example, when SET_PACKET_OFFSET
instruction sets the packet offset to be 112 bits, it specifies the start position of
IPv4 header field in a normal Ethernet frame.
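Because POF identifies fields by an {offset, length} pair rather than by protocol-specific names, field references reduce to offset arithmetic. The sketch below models a packet as a byte array together with simplified SET_FIELD/DEC_FIELD-style edits; the function signatures are illustrative and do not follow the actual POF instruction encoding.

ETH_HEADER_BITS = 14 * 8        # 112 bits: an untagged Ethernet header, so
                                # SET_PACKET_OFFSET 112 points at the IPv4 header

def set_field(packet: bytearray, offset_bits: int, length_bits: int, value: int):
    """SET_FIELD-style edit: overwrite `length_bits` starting at `offset_bits`
    (byte-aligned fields only, for brevity)."""
    start, nbytes = offset_bits // 8, length_bits // 8
    packet[start:start + nbytes] = value.to_bytes(nbytes, "big")

def get_field(packet: bytes, offset_bits: int, length_bits: int) -> int:
    start, nbytes = offset_bits // 8, length_bits // 8
    return int.from_bytes(packet[start:start + nbytes], "big")

pkt = bytearray(64)
offset = ETH_HEADER_BITS                       # packet offset set to 112 bits
ipv4_ttl_offset = offset + 8 * 8               # TTL is the 9th byte of the IPv4 header
set_field(pkt, ipv4_ttl_offset, 8, 64)                                   # write TTL = 64
set_field(pkt, ipv4_ttl_offset, 8, get_field(pkt, ipv4_ttl_offset, 8) - 1)  # DEC_FIELD-style edit
print(get_field(pkt, ipv4_ttl_offset, 8))      # 63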
POF FIS acts on controllers, D-CPI, and data plane devices; therefore,
it is independent of A-CPI. An SDN controller can provide various types of
A-CPI to users or applications. The controller is responsible for translating the
high-level policies specified by applications into POF instructions and loading
them to data plane devices. A-CPI independence provides the flexibility and
diversity required by SDN for supporting future network services. POF FIS
is not designed for any specific service or application. Various services can be
implemented through different combinations of the same set of instructions
and every instruction in the FIS can be used in realization of various services.
switch whose behaviors are defined by a published standard, one can specify a
datapath model for the switch based on the standard. The packet header for-
mat, parsing rules, actions, and flow table processing pipeline can all be pre-
defined and supported by standard libraries.
With an interactive approach, all aspects of the configuration and run-
time control of data plane devices, including defining packet format, flow table
match rules, packet processing actions, and so on, are provided through the PI-
layer. This approach allows network operators to adjust switch datapath model
to support various protocols; therefore, it is most applicable to networking sce-
narios where configurability for dynamic network operations plays a key role in
service provisioning.
The customized approach allows network architecture and associated
switch datapath models to be developed and tested before being deployed in
a production network. Datapath models for SDN switches can be customized
to particular network services by a program that is written in P4 language and
compiled into the configuration files loaded on the switches. This approach
may require a controlled upgrading procedure for making any change to the
datapath configuration in a production network.
In order to achieve a balance between an abstract programming model for
supporting various datapath protocols and performance optimization tailored
for specific data plane device platforms, the compiling procedure that trans-
forms control programs written in a generic P4 language to the executable codes
running on data plane devices can be split into two stages. The first stage is
platform independent, in which the compiler parses a P4 language program
and transforms the program into an intermediate representation based on the
POF FIS. The second stage is platform specific in order to take advantage of
the technologies developed in various data plane devices for enhancing datapath
performance; for example, mapping header specifications to flexible parse en-
gines and mapping match-action tables to platform-specific pipeline processes
and memory blocks with varying capabilities (e.g., DRAM, SRAM, TCAM).
The PI-layer enables protocols to be specified by programs written in P4
language (e.g., one program for IPv4 and other programs for GRE, VLAN,
and so on). It is desirable to organize the programs in libraries and allow switch
developers to combine protocols they need from several libraries, which may
support code reuse and facilitate rapid prototyping of SDN networks. For this
to be possible, the libraries need to be “composable” in the sense that each pro-
tocol library is self-contained and may be assembled with others without modi-
fication. In addition, control applications have been developed and optimized
for some particular types of data plane devices for achieving the best possible
performance. In order to reuse the available platform-specific applications, such
programs may be precompiled and provided in a library by any third party and
used by the compiling process [40].
2.9 Conclusion
SDN is a significant innovation in networking technologies for addressing
some of the fundamental challenges to current networks. The main principles
of SDN lie in decoupling network control and management functionalities
from data forwarding operations to enable a centralized control platform that
supports network programmability. SDN is expected to significantly simplify
both network control/management software and packet forwarding devices as
well as greatly enhance network capabilities of service provisioning.
Although SDN architecture has been developed by multiple standardiza-
tion organizations, including ONF, ITU-T, and IETF, all the proposed archi-
tectural frameworks share a three-plane structure that comprises the data plane,
control plane, and application plane. The data plane simply performs packet
forwarding operations by following the action rules installed by the control
plane through the southbound interface. The control plane enables a layer of
abstraction for data plane network based on which SDN applications can make
decisions on network policies. The control plane also provides northbound
APIs through which SDN applications may program network behaviors.
Networking devices on the SDN data plane, often called SDN switches,
typically comprise a packet processing engine, an abstraction layer, and a con-
troller interface. OpenFlow currently is the de facto standard for controlling
SDN switches. The core element of an OpenFlow-enabled switch is the Open-
Flow pipeline that processes each incoming packet through one or multiple
flow tables to determine the actions to be performed on the packet. OpenFlow
specification defines a communication protocol between the controller and
switch as well as the procedure for managing flow tables in SDN switches.
The control plane is the core component in the SDN architecture for
achieving decoupling between data forwarding and network control. An SDN
controller bridges the data and application planes by enabling data plane ab-
straction and providing an API for programming network behaviors. Key func-
tions of an SDN controller include topology management, traffic monitoring,
and flow management. Control performance is a key factor that impacts perfor-
mance of the entire SDN network. Various technologies have been developed
for enhancing SDN control performance, including multithread controller
software, distributed controller deployment, and hierarchical control structure.
Multidomain SDN control is a challenging problem that has attracted research
attention and is still open for further study.
References
[1] Feamster, N., J. Rexford, and E. Zegura, “The Road to SDN,” ACM Queue, Vol. 11, No.
12, December 2013, pp. 1–12.
[2] Greenberg, A., G. Hjalmtysson, D. A. Maltz, A. Myers, J. Rexford, et al., “A Clean Slate
4D Approach to Network Control and Management,” ACM SIGCOMM Computer Com-
munication Review, Vol. 35, No. 5, October, 2005, pp. 41–54.
[3] McKeown, N., T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, et al., “Open-
Flow: Enabling Innovation in Campus Networks,” ACM SIGCOMM Computer Commu-
nication Review, Vol. 38, No. 2, April 2008, pp. 69–74.
[4] Open Networking Foundation, “Software-Defined Networking: the New Norm of Net-
works,” white paper, April 2012.
[5] Open Network Foundation, “ONF TR-521: SDN Architecture,” Issue 1.1, 2016.
[6] Shenker, S., M. Casado, T. Koponen, and N. McKeown, “The Future of Networking,
and the Past of Protocols,” Open Networking Summit, 2011.
[23] Tootoonchian, T., and Y. Ganjali, “HyperFlow: A Distributed Control Plane for
OpenFlow,” Proceedings of the 2010 Internet Network Management Workshop/Workshop on
Research on Enterprise Networking (INM/WREN’10), April 2010.
[24] Stribling, J., Y. Sovran, I. Zhang, X. Pretzer, J. Li, et al., “Flexible Wide-Area Storage
for Distributed Systems with WheelFS,” Proceedings of the 6th USENIX Symposium on
Networked Systems Design and Implementation (NSDI’09), April 2009, pp. 43–58.
[25] Koponen, T., M. Casado, N. Gude, J. Stribling, L. Poutievski, et al., “ONIX: A
Distributed Control Platform for Large-Scale Production Networks,” Proceedings of the
9th USENIX Conference on Operating Systems Design and Implementation (OSDI’10),
October 2010.
[26] Yeganeh, S. H., and Y. Ganjali, “Kandoo: A Framework for Efficient and Scalable
Offloading of Control Applications,” Proceedings of the 2012 ACM Workshop on Hot Topics
in Software Defined Networks (HotSDN’12), August 2012, pp. 19–24.
[27] Heller, B., R. Sherwood, and N. McKeown, “The Controller Placement Problem,”
Proceedings of the 2012 ACM Workshop on Hot Topics in Software Defined Networks
(HotSDN’12), August, 2012.
[28] IRTF Internet-Draft, “SDNi: A Message Exchange Protocol for Software-Defined
Networks (SDNs) Across Multiple Domains,” June 2012.
[29] Phemius, K., M. Bouet, and J. Leguay, “DISCO: Distributed Multi-Domain SDN
Controller,” Proceedings of the 2014 IEEE Network Operation and Management Symposium
(NOMS’14), May 2014.
[30] Figueira, N., and R. Krishnam, “SDN Multi-Domain Orchestration and Control:
Challenges and Innovative Future Directions,” Proceedings of the 2015 International
Conference on Computing, Networking and Communications (ICNC’15), Feb. 2015, pp.
406–412.
[31] Hinrichs, T. L., N. S. Gude, M. Casado, J. C. Mitchell, and S. Shenker, “Practical
Declarative Network Management,” Proceedings of the 1st ACM Workshop on Research on
Enterprise Networking (WREN’09), 2009.
[32] Foster, N., A. Guha, M. Reitblatt, A. Story, M. Freedman, et al., “Languages for Software-
Defined Networks,” IEEE Communications Magazine, Vol. 51, No. 2, Feb. 2012, pp.
128–134.
[33] Monsanto, C., N. Foster, R. Harrison, and D. Walker, “A Compiler and Run-Time System
for Network Programming Languages,” Proceedings of the 39th Annual ACM Symposium
on Principles of Programming Languages (POPL’12), 2012, pp. 217–230.
[34] Al-Shaer, E., and S. Al-Haj, “FlowChecker: Configuration Analysis and Verification of
Federated OpenFlow Infrastructure,” Proceedings of the 3rd ACM Workshop on Hot Topics
in Software Defined Networks (HotSDN’12), 2012, pp. 121–126.
[35] Peresini, P., and M. Canini, “Is Your OpenFlow Application Correct?” Proceedings of the
ACM CoNEXT Student Workshop (CoNEXT’11), 2011.
[36] Fielding, R. T., “Architectural Styles and the Design of Network-based Software
Architectures,” Ph.D Dissertation University of California, Irvine, 2000.
[37] Toy, M., “Cable Networks, Services and Management,” Hoboken, NJ: IEEE/J.Wiley,
2015.
[38] Raghavan, B., T. Koponen, A. Ghodsi, M. Casado, S. Ratnasamy, et al., “Software-
Defined Internet Architecture: Decoupling Architecture from Infrastructure,” Proceedings
of the 11th ACM Workshop on Hot Topics on Networks (Hotnets’12), October 2012, pp.
43–48.
[39] Casado, M., T. Koponen, S. Shenker, and A. Tootoonchian, “Fabric: A Retrospective on
Evolving SDN,” Proceedings of the 2012 ACM SIGCOMM Workshop on Hot Topics in
Software Defined Networking (HotSDN’12), January 2012, pp. 85–90.
[40] Open Networking Foundation ONF TR-505, “OF-PI: A Protocol Independent Layer,”
Version 1.1, September 2014.
[41] Song, H., “Protocol-Oblivious Forwarding: Unleash the Power of SDN Through a
Future-Proof Forwarding Plane,” Proceedings of the 2013 ACM SIGCOMM Workshop on
Hot Topics in Software Defined Networking (HotSDN’13), August 2013, pp. 127–132.
[42] Yu, J., X. Wang, J. Song, Y. Zheng, and H. Song, “Forwarding Programming in Protocol-
Oblivious Instruction Set,” Proceedings of the 2014 IEEE International Conference on
Network Protocols (ICNP’14), October 2014, pp. 577–582.
[43] Bosshart, P., D. Daly, G. Gibb, M. Izzard, N. McKeown, et al., “Programming Protocol-
Independent Packet Processors,” ACM Computer Communication Review, Vol. 44, No. 3,
July 2014, pp. 88–95.
[44] Open Networking Foundation, “The Forwarding Abstraction Working Group (FAWG)
Charter,” April 2013.
3
Virtualization in Networking
Q. Duan, Y. Wang, A. Bernstein, and M. Toy
3.1 Introduction
Virtualization in computing often refers to the act of separating software from
the underlying hardware for creating virtual instances of computing resources.
In general, virtualization focuses on decoupling the functions or services that
a system provides from the implementations that the system employs to sup-
port such functions or services. Virtualization technologies have been widely
employed in various computing areas, especially the recently emerged cloud
computing. Success of virtualization in computing has inspired its adoption
in the field of networking. Applying virtualization in networks leads to the
notions of network virtualization (NV) and network function virtualization
(NFV), which are expected to have significant impacts on networking and ser-
vice provisioning. In this chapter, we focus our discussion on virtualization-
based networking and network service provisioning.
Supporting a wide spectrum of services with highly diverse requirements
based on networks that are implemented with heterogeneous technologies has
become a major challenge to research and development of networking technol-
ogies. The current IP-based network architecture cannot meet the requirements
of future network services due to its ossification. In order to tackle this challeng-
ing problem, researchers have proposed a vision of network design that adopts
virtualization as a key attribute in future network architecture. Such an archi-
tectural vision for future networks is typically referred to as network virtualiza-
tion (NV). Essentially, NV advocates decoupling network services from the un-
derlying network infrastructures, thus allowing alternative network architecture
Table 3.1
Representative Applications of Virtualization in Computing
Server: virtualization of computing, storage, and I/O resources
Networking device: virtualization of NICs, switches, and links
Service: virtualization of software, platform, and infrastructure
sors, utilizing a standard host OS, may support a broader range of hardware
configurations on hosting servers.
Various hypervisors have been developed and deployed. Representative
hypervisors that have been widely adopted in industry include Xen [1], KVM
[2], and VMware ESX/ESXi [3]. Main features and the supported guest OSs for
these hypervisors are listed in Table 3.2.
Container-based virtualization, also called operating system-level virtu-
alization, is another approach to server virtualization. In this approach, the
operating system kernel runs on server hardware and allows multiple isolated
user-space instances installed on top of it. Such isolated instances are called
containers, which may look and feel like real servers from the point of view of
their owners and users.
Unlike hypervisor-based server virtualization where each VM runs a com-
plete guest OS, container-based virtualization allows all the virtual instances
(containers) to share a common operating system. Therefore, container-based
virtualization removes the overheads associated with VM guest OS and im-
proves virtualization performance due to its lightweight implementation. How-
ever, the container-based virtualization approach requires each virtual instance
on a host server to use the same operating system that the host is running;
therefore, it limits the flexibility that can be achieved by server virtualization.
In addition to standard computing resources such as CPU and storage,
server virtualization also includes virtualization of some networking devices.
The multiple VMs on the same server share the NIC of the server and expect
their communication sessions to be isolated from each other. Consequently,
the physical NIC of the server needs to be virtualized to a set of virtual NIC
instances, one for each VM. A virtual NIC is a software emulation of a physical
Table 3.2
Representative Hypervisors and Their Features

Xen
Features: works for IA-32, X86-64, and ARM instruction sets; contains a privileged domain called dom0, which is the only virtual machine that has direct access to the hardware.
Guest OS support: supports most Unix-like operating systems as well as Microsoft Windows systems; dom0 is generally a version of Linux or BSD.

KVM
Features: a Linux-kernel hypervisor; needs hardware virtualization extension (e.g., Intel VT, AMD-V) support.
Guest OS support: supports a variety of guest OSs, including many versions of Linux, BSD, Solaris, and Windows.

VMware ESX/ESXi
Features: works for i386 (before version 4.0) and X86-64 platforms; can run directly on bare metal hardware; has vital OS components self-included; supports live migration of virtual machines.
Guest OS support: supports Windows, Linux, Unix, and Macintosh.
NIC and can have its own network identities (IP and MAC addresses). NIC
virtualization is typically a function provided by the hypervisor.
In traditional networks, NICs are interconnected through switches and
transmission links to form a layer 2 network or subnet. Network switches can
also be virtualized to bridge multiple VMs by linking the virtual NICs of these
VMs. The virtual switch function can also be provided inside a hypervisor to
connect multiple VMs that are managed by the hypervisor on the same physi-
cal machine. Virtualization of network links allows creation of logical links that
connect VMs. The key function of virtual links lies in bandwidth allocation for
individual channels between VMs. Virtual links may be realized in different
forms (e.g., wavelength channels in optical networks and label-switched paths in
MPLS networks).
Virtualization technologies, when applied to computing and networking
systems, bring in some desirable features that may significantly enhance system
capabilities and performance. These benefits make virtualization a key enabler
of state-of-the-art computing systems, especially cloud computing systems.
A key feature of virtualization is the abstraction of physical resources that
decouples services/application software from infrastructure hardware, which
allows these two aspects of information systems to evolve freely along their
own paths. Virtualization enables software applications to be developed and
deployed without being constrained by the implementations of their hosting
platforms, thus facilitating innovations in applications and services. Hardware-
independent applications enabled by virtualization lead to greater flexibility
for supporting elastic service provisioning through on-demand resource alloca-
tion and configuration. For instance, a user can easily request adjustment in
the amount of allocated resources in response to workload fluctuations. The
software nature of applications supports fast configuration/reconfiguration of
virtual instances to provide quick responses to user demands.
Resource sharing and consolidation is another aspect of the benefits
brought in by virtualization. By allowing multitenant virtual instances to share
common physical infrastructure, virtualization may greatly improve resource
utilization and enhance system flexibility as well. Hardware independence of
virtual instances allows them to migrate across hosting platforms, which fa-
cilitates resource consolidation for enhancing system utilization. Virtualization
also provides isolation among the virtual instances to create an illusion that
each tenant has the full ownership of the hosting platform. Each VM may have
its own guest OS and applications for meeting the service requirements of dif-
ferent users, therefore allowing a common infrastructure substrate to support a
wide range of applications with diverse requirements.
Some of the key features and benefits of virtualization-based computing
are summarized in Table 3.3.
Table 3.3
Features and Benefits of Virtualization-Based Computing

Resource abstraction: virtualization decouples software applications and services
from hardware infrastructure to enable hardware-independent application/service
development and deployment.

Resource sharing: virtualization allows multiple tenants to share a common
infrastructure platform, achieving more efficient and flexible usage of physical
resources.

VM isolation: virtualization provides isolation among virtual instances, which
enables independent tenants sharing a common substrate to support applications
with diverse requirements.

Elastic resource provisioning: virtualization enables dynamic resource allocation
that supports elastic service provisioning for meeting user demands.

Agile system management: virtualization supports fast deployment, configuration,
and management of virtual instances in response to user requests.
link spans over a path in the network infrastructure and includes a portion of
the physical resources along the path. The virtualization layer is responsible for
mapping virtual nodes and links to physical resources and managing partition
of infrastructure to guarantee isolation between VNs. VNs can be set up and
torn down dynamically in response to users’ service requests [9].
In general, the NV architecture has the following features:
It is worth noting that the VNs enabled by the NV architecture and the
VPNs realized in the current Internet, although seemingly similar, actually
have fundamental differences. VPNs simply provide connectivity between edge
sites over a single ISP backbone, while the NV architecture gives VN operators
direct control over the protocols and services that run on their VNs. In addition,
VNs in the NV architecture may span across multiple infrastructure domains,
while VPNs are typically constrained within individual ISP infrastructures.
establish and operate virtual networks (VNs) by leasing resources from InPs to
offer end-to-end services. Figure 3.3 shows an NV environment with split roles
of InP and SP.
InPs are responsible for deploying and managing the underlying physi-
cal infrastructure resources. They offer their resources through programmable
interfaces to different SPs. Multiple independent InPs may exist in an NV envi-
ronment, and the InPs distinguish themselves through factors such as the qual-
ity of their resources and the manageability that they provide for utilizing their
resources. According to the 4WARD model, the InPs must fulfill the following
requirements [10]:
SPs lease resources from one or multiple InPs to create and deploy VNs
for providing services to end users. An SP may also provide network services to
Figure 3.3 NV environment with split roles of infrastructure and service providers.
other SPs by acting as a virtual InP that partitions and leases its virtual resources
to other SPs. The role of SPs might be further split to virtual network providers
(VNPs) and virtual network operator (VNOs).
The main job of a VNP is to construct VNs for meeting the requests from
VNOs, while VNOs are responsible for the actual operation of the constructed
VNs for service provisioning to end users. VNPs play a mediation role between
the InPs and the VNOs. An analogy for the function of a VNP is that of a travel
agency that has expertise in traveling methods and knowledge about available
routes, and thus can make the travel plans and book the flights and hotels for
customers based on their requirements. However, after the customers decide
on their trips, they mostly interact with the on-site service providers (InPs in
a VN environment). The following lists of functions for VNP and VNO were
summarized in [10].
The main responsibilities of a VNP include the following:
In the direction from InP(s) to a VNP, this layer presents the availability and
capability information about infrastructure resources to the VNP in an abstract
format. In the direction from the VNP to InP(s), the virtualization layer han-
dles requests for creating VNs and allocates appropriate physical infrastructure
resources for meeting the requirements of each VN. The function in the first di-
rection is often referred to as resource description and discovery (RDD), while
the process in the second direction is typically called resource allocation or vir-
tual network embedding (VNE).
VNE scenario where two VNs, each with three nodes, are embedded in a physi-
cal infrastructure network with five physical nodes.
In addition to meeting the basic functionality and capability constraints,
VNE often needs to achieve some other objectives in order to create VNs for
meeting various requirements specified by both service providers and
infrastructure providers. Some typical VNE objectives are discussed in the fol-
lowing paragraphs.
• Meeting the QoS requirements for VNs: VN requests are often specified
by VNOs according to a set of QoS constraints defined for meeting cer-
tain service requirements. For example, a VN for real-time multimedia
content delivery services may require high bandwidth on virtual links,
high switching throughput and CPU capacity on virtual nodes, and low
delay and delay variation for end-to-end packet forwarding through
the VN. These QoS constraints must be satisfied by the VNE process
through appropriate allocation of physical resources.
• Maximizing the benefits of InPs: This objective is directly related to
maximizing the utilization of infrastructure resources to embed as many
VNs as possible (i.e., achieving a high acceptance ratio for VN requests). To
achieve this objective, the VNE process attempts to minimize the amount of
resources used for hosting each VN and prefers to host the VNs that bring in more
revenue when infrastructure resources are not sufficient for accepting all
VN requests. Since energy consumption is also an important part of InP
operation cost, it is also desirable for VNE to consolidate the physical
resources hosting VNs so that the idle part of the infrastructure may
be put in a sleep mode to save energy.
Figure 3.6 VNE by solving the VNM and VLM subproblems in sequence.
Figure 3.7 Iterative coordination with feedback between VNM and VLM.
Figure 3.8 An illustrative example of VNE using the integrated network flow model.
InPs. In such cases, the VNP needs to decompose the VN request into multiple
subrequests and assign each subrequest to the most appropriate infrastructure
domain. Inside each domain, the InP employs a VNE algorithm to embed a
part of VN specified by a subrequest. Then all the VN parts are connected to-
gether using external links among infrastructure domains to form the complete
VN. Such inter-InP coordination makes multidomain VNE a very chal-
lenging problem that remains open for further research.
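Returning to the single-domain case, the two-stage approach of Figures 3.6 and 3.7 (virtual node mapping followed by virtual link mapping) can be illustrated with the following simplified Python sketch. It places each virtual node on the substrate node with the most residual CPU and then routes each virtual link over a shortest substrate path with enough residual bandwidth. The substrate and VN request data are invented for illustration, and published VNE algorithms add admission control, coordination between the two stages, and revenue and cost optimization.

```python
# Minimal greedy VNE sketch: node mapping (VNM) by residual CPU, then link
# mapping (VLM) over shortest substrate paths with sufficient bandwidth.
from collections import deque

substrate_cpu = {"A": 10, "B": 8, "C": 6, "D": 4, "E": 9}
substrate_bw = {("A", "B"): 10, ("B", "C"): 8, ("C", "D"): 10,
                ("D", "E"): 6, ("E", "A"): 10, ("B", "E"): 5}

def neighbors(n):
    for (u, v) in substrate_bw:
        if u == n: yield v
        if v == n: yield u

def bw(u, v):
    return substrate_bw.get((u, v), substrate_bw.get((v, u), 0))

def shortest_path(src, dst, min_bw):
    """BFS path using only substrate links with enough residual bandwidth."""
    prev, q = {src: None}, deque([src])
    while q:
        n = q.popleft()
        if n == dst:
            path, cur = [], dst
            while cur is not None:
                path.append(cur); cur = prev[cur]
            return list(reversed(path))
        for m in neighbors(n):
            if m not in prev and bw(n, m) >= min_bw:
                prev[m] = n; q.append(m)
    return None

# VN request: virtual nodes with CPU demands, virtual links with bandwidth demands
vn_nodes = {"a": 3, "b": 2, "c": 2}
vn_links = {("a", "b"): 4, ("b", "c"): 3}

# 1) Node mapping: place each virtual node on the substrate node with most residual CPU
node_map = {}
for vnode, cpu in sorted(vn_nodes.items(), key=lambda kv: -kv[1]):
    host = max((n for n in substrate_cpu
                if n not in node_map.values() and substrate_cpu[n] >= cpu),
               key=lambda n: substrate_cpu[n])
    node_map[vnode] = host
    substrate_cpu[host] -= cpu

# 2) Link mapping: route each virtual link over a feasible substrate path
link_map = {}
for (u, v), demand in vn_links.items():
    path = shortest_path(node_map[u], node_map[v], demand)
    if path is None:
        raise RuntimeError("VN request rejected: no feasible path")
    for s, t in zip(path, path[1:]):
        key = (s, t) if (s, t) in substrate_bw else (t, s)
        substrate_bw[key] -= demand
    link_map[(u, v)] = path

print("node mapping:", node_map)
print("link mapping:", link_map)
```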
sufficient external and internal network probing, an attacker can discover the
locations as well as attributes of potential target hosts and/or VMs. For ex-
ample, the attacker can keep sending virtual network requests until its virtual
node (the attacker VM) is mapped on the same host as the target virtual node
(the victim VM).
When an attacker achieves node-level coresidence with the target net-
work, as shown in Figure 3.9, the attacker can take advantage of vulnerabilities
existing in the VM management software (hypervisor), which might lead to
penetration of the physical node. Since the physical node has the ultimate privilege, it
could in turn manipulate any other coresident target VMs. The attacker can
exploit the target VM even without penetrating the host by using side-channel
attacks. Coresidency generally indicates sharing of physical hardware, which
can serve as a common medium or side channel between the VMs. As a result,
activities unique to the victim VM can be “listened to” and analyzed by the at-
tacker VM. For instance, the attacker VM could uncover some sensitive infor-
mation about the victim VM after a sufficiently long period of monitoring and
analyzing the victim VM’s computational load in shared memory.
On the network level, attackers may exploit coresidency of VNs to launch
attacks from their VNs to the victim VNs. For instance, an attacker may em-
ploy denial of service (DoS) attacks to overwhelm the victim VN. As shown in
Figure 3.10, the attacker VN and the victim VN share a physical link with 4-Gbps
bandwidth. If the attacker VN purposely transmits at the maximum rate of 4
Gbps, then the shared link will be saturated and become unavailable to the
victim VN, thus leading to unavailable service or performance degradation in
the victim VN. The attacker may achieve this objective by specifying a lower
link rate in his VN request but actually transmitting data at a much higher rate
on that virtual link after the VN is deployed. Since the attacker VN resides in
the same physical network (e.g., a data center network) as the victim VN, it is
difficult to prevent such attacks using firewalls or intrusion detection systems
that are typically deployed at the boundary of a physical network.
In order to cope with security threats that exploit the coresidence feature
of multitenant VNs at either the node level or the network level, it is very
important for the virtualization layer to guarantee isolation between VNs. The
virtualization layer should provide both logical isolation of distinct VNs (e.g.,
separate address spaces) and resource isolation to ensure that tenants cannot
interfere with each other. In addition, the virtualization layer should also make
implementation details of physical network infrastructures transparent to VNs,
which will prevent attackers from exploiting vulnerabilities in the underlying
hardware to launch attacks against coresident VNs, such as DoS attacks that
saturate physical links.
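One way the virtualization layer can enforce the resource isolation described above is to police every virtual link against the rate reserved in its VN request, so that a tenant transmitting faster than it requested cannot saturate the shared physical link. The following token-bucket sketch illustrates the idea with invented numbers; in practice such policing is implemented in the hypervisor vSwitch or NIC hardware rather than in application code.

```python
# Simplified token-bucket policer: each virtual link may only consume the
# bandwidth reserved in its VN request, so a misbehaving tenant cannot
# saturate the shared physical link. All numbers are illustrative.
class TokenBucket:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps          # reserved rate from the VN request
        self.burst = burst_bits
        self.tokens = burst_bits
        self.last = 0.0

    def allow(self, now, packet_bits):
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True                # forward the packet
        return False                   # drop or queue: tenant exceeded its reservation

# Attacker VN reserved 1 Gbps but tries to push 4 Gbps during the first second.
policer = TokenBucket(rate_bps=1e9, burst_bits=1e6)
sent = dropped = 0
for i in range(4000):                  # 4000 x 1-Mbit packets within one second
    if policer.allow(now=i / 4000.0, packet_bits=1e6):
        sent += 1
    else:
        dropped += 1
print(f"forwarded ~{sent} Mbit, policed {dropped} Mbit")
```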
3.4.3.2 Virtual Network Survivability
With network virtualization gaining momentum, VN survivability has become
an issue that attracts considerable attention. Survivable VNs need to be able to
cope with various types of network faults, such as single node/link failure, mul-
tiple node/link failure, and regional failures. Survivability has been extensively
studied in traditional networks, but some new challenges have been brought in
by virtualization in networking. An important question raised in the context of
network virtualization is how to ensure that the mapping of a virtual network
can survive under network failures. Answering this question leads to the so-
called survivable VNE that includes survivability as an objective for the VNE
process.
We use an example of single node failure to illustrate the survivable VNE
problem. As shown in Figure 3.11, when the physical node hosting the virtual
node a fails, from the VN viewpoint, the virtual node a, virtual link a–b, and
virtual link a–c all become unavailable; therefore, the virtual topology of the VN
is broken. To avoid this problem, a survivable VNE scheme needs to preplan a
replacement mapping to ensure that the same virtual network topology can still
be provided when a physical node fails. One approach to achieving preplanned
mapping for survivable VNE is to augment a virtual network with inherent
protection, as proposed in [23]. To protect single node failure, for instance, one
can add an extra virtual node into the VN topology to serve as a backup node.
This idea is illustrated in Figure 3.12. For the VN shown on the left, one can
add an extra virtual node d connecting to all the other virtual nodes, thus form-
ing an augmented VN shown on the right. After mapping the augmented VN,
any single node failure will not affect the virtual network topology as the node
d can replace the failed node. In addition to single node failures, survivability
of VNs under other types of failures such as single link failures, multiple link
failures, and regional failures, could also be enhanced by using the augmented
VN approach, as discussed in [23].
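The augmentation step of Figure 3.12 can be sketched as follows: a backup virtual node is added and connected to every original virtual node, and after a substrate node failure the backup simply takes over the role of the failed virtual node. The functions below are a simplified illustration of the idea behind the scheme in [23], not its actual algorithm.

```python
# Sketch of augmenting a VN with one backup node for single-node-failure
# protection (cf. Figure 3.12). Topology handling is deliberately simplified.

def augment_vn(nodes, links, backup="d"):
    """Return an augmented topology with a backup node linked to all others."""
    aug_nodes = nodes + [backup]
    aug_links = links + [(backup, n) for n in nodes]
    return aug_nodes, aug_links

def recover(nodes, links, failed, backup="d"):
    """Rebuild the original virtual topology with the backup replacing the failed node."""
    rename = lambda n: backup if n == failed else n
    new_nodes = [rename(n) for n in nodes]
    new_links = [(rename(u), rename(v)) for (u, v) in links]
    return new_nodes, new_links

vn_nodes = ["a", "b", "c"]
vn_links = [("a", "b"), ("a", "c"), ("b", "c")]

aug_nodes, aug_links = augment_vn(vn_nodes, vn_links)
print("augmented:", aug_nodes, aug_links)

# Substrate node hosting virtual node "a" fails; the backup takes over its role.
print("after failure of a:", recover(vn_nodes, vn_links, failed="a"))
```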
static transfer function, its dynamic state, and the inputs that the NF receives
from its interfaces [26].
The objective of NFV is to separate software that defines the network
function (the VNF) from the virtualized network infrastructure (NFVI) that
executes the VNF. Therefore, VNFs and the NFVI should be specified separate-
ly. Virtualization of NE is illustrated by the example shown in Figure 3.14, in
which the upper part of the figure shows how traditional NEs are connected for
providing a network service, while the lower part of the figure shows the situa-
tion where NFs have been virtualized and implemented as VNFs executing on
host functions in the NFVI. As pointed out in [26], virtualization of network
functions has resulted in some significant changes in the following aspects:
These distinctions indicate that unlike a functional block that may ex-
ist autonomously, a VNF depends on the host function for its existence. The
VNF will be interrupted or terminated if its host function is interrupted or
terminated. Such existence dependence between a VNF and its host function is
reflected by the container interface.
The relationship between the VNF and its host function can be described
from the following two aspects: (a) the VNF is a configuration of the host
function, and (b) the VNF is an abstract view of the host function when the
host function is configured by the VNF. Therefore, when a host function is
configured with a VNF, it shows the external behaviors for implementing the
VNF specification. On the other hand, the VNF is an abstract view of the host
function. Therefore, the NFV architecture is defined using (a) host functions
with their offered container interfaces and associated infrastructure interfaces,
and (b) VNFs with their virtualized interfaces and the container interfaces that
they use [26].
Figure 3.15 Network functions, forwarding graph, and network service [25].
VNF instance on the infrastructure is not visible from the end-to-end service
perspective. This enables the VNF instances of a VNF-FG to be implemented
on different physical resources as long as the overall end-to-end service perfor-
mance and other policy constraints are met.
Figure 3.16 also depicts the case of a nested VNF-FG. In the example
service, VNF-2, specified by the high-level FG (VNF-FG1), is decomposed
into three components that are realized by three VNFs (i.e., VNF-2A, VNF-
2B, and VNF-2C). A lower-level FG (VNF-FG2) defines how these three
VNFs collaborate to realize functions of VNF-2. The three VNF components
of VNF-2 may be instantiated as three VMs that are hosted on either the same
or different physical servers.
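The nesting of forwarding graphs can be made concrete with a small data-structure sketch. In the following Python fragment, VNF-FG1, VNF-FG2, and the VNF-2 components follow the naming used above, while the surrounding VNFs (VNF-1 and VNF-3) and the data model itself are invented for illustration and do not correspond to the ETSI information model.

```python
# Hypothetical representation of a nested VNF forwarding graph:
# VNF-FG1 is the end-to-end graph, and VNF-2 is itself realized by a
# lower-level graph VNF-FG2 made of three VNF components.
vnf_fg2 = {"name": "VNF-FG2", "chain": ["VNF-2A", "VNF-2B", "VNF-2C"]}

vnf_fg1 = {
    "name": "VNF-FG1",
    "chain": ["VNF-1", vnf_fg2, "VNF-3"],    # VNF-2 decomposed into VNF-FG2
}

def flatten(fg):
    """Expand nested forwarding graphs into the ordered list of VNF instances."""
    for hop in fg["chain"]:
        if isinstance(hop, dict):            # nested forwarding graph
            yield from flatten(hop)
        else:
            yield hop

print(list(flatten(vnf_fg1)))
# ['VNF-1', 'VNF-2A', 'VNF-2B', 'VNF-2C', 'VNF-3']
```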
The new networking capabilities enabled by NFV are expected to bring in
some significant differences in the way network services are provisioned. Some
key differences are summarized as follows [25]:
Another key concept behind NFV is microservices: the idea that a func-
tion needs to do one thing and do it well as opposed to creating large function
monoliths. Various microservice network functions can be composed together
through NFV MANO and presented externally as a single complex service. A
simple example of such a configuration can be a firewall for providing security
services and a router providing tunnel termination, implemented as two sepa-
rate VNFs but orchestrated to work together. The firewall VNF does not have
to directly deal with the specific tunneling technology used (e.g., GRE, L2TP,
IPSEC) and the router VNF does not have to implement firewall functions. In
addition to binding functions together, service orchestration allows us to pass
context from one function to the other. In the firewall/router example, we can
tag different tunnels with different service chain tags so that even though the
tunnel header is removed, significant context passed through the tunnel (e.g.,
subscriber identification) can still be passed along to the firewall function.
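The firewall/router example can be sketched as a two-element service chain in which the router VNF terminates the tunnel and passes subscriber context to the firewall VNF. The function and field names below are purely illustrative; a real deployment would compose the VNFs through NFV MANO rather than direct function calls.

```python
# Sketch of orchestrating two microservice VNFs (a tunnel-terminating router and
# a firewall) into one service chain, with per-flow context passed between them.

def router_vnf(packet):
    """Terminate the tunnel and tag the packet with context for downstream VNFs."""
    inner = dict(packet)
    inner.pop("tunnel", None)                       # strip the tunnel header
    inner["context"] = {"subscriber_id": packet["tunnel"]["subscriber_id"],
                        "service_chain_tag": "residential-basic"}
    return inner

def firewall_vnf(packet):
    """Apply policy using the context supplied by the router, not the tunnel itself."""
    allowed = packet["context"]["subscriber_id"] != "blocked-user"
    return packet if allowed else None

service_chain = [router_vnf, firewall_vnf]

pkt = {"tunnel": {"type": "GRE", "subscriber_id": "alice"}, "payload": b"data"}
for vnf in service_chain:
    pkt = vnf(pkt)
    if pkt is None:
        break
print(pkt)
```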
single geographical location where a number of NFVI nodes are sited is called
an NFVI point of presence (NFVI-PoP), which could be as large as a data center
or as small as a single network device [26].
In order to manage system complexity and enhance system scalability,
the NFVI component in the NFV architecture is further partitioned into three
functional domains: the compute domain, the hypervisor domain, and the in-
frastructure network domain, as shown in Figure 3.17. According to the current
technology and industry structure, it is already the case that compute, hypervi-
sor, and infrastructure networking technologies are largely separated with suf-
ficient standards for supporting the interactions between these domains.
multicore CPU with high I/O bandwidth, smart Ethernet NICs for load shar-
ing and TCP offloading, and polling packets directly to VM memory [26].
3.6.1.2 Hypervisor Domain
The hypervisor domain of NFVI manages compute domain resources for sup-
porting the VMs running VNF software. Essentially, the hypervisor domain
implements the virtualization layer between the physical and virtual computing
(including storage) resources in NFVI. Therefore, a hypervisor domain is re-
sponsible for providing all the required capabilities for infrastructure virtualiza-
tion, including abstraction of physical resources, coordination across VMs for
resource sharing, and isolation between VMs. A popular open source hypervisor
implementation is KVM, and various packaged versions of KVM have been
integrated into commercial products.
A special challenge to the hypervisor domain in NFVI is to achieve the
high performance expected by many NFV applications, which means allowing
the VMs hosting VNF instances to run as fast as possible. Current and emerg-
ing server hardware technologies offer some features that may greatly improve
VM performance, including multicore processors supporting parallel threads
of execution, system-on-chip processors, specific CPU enhancement for VM
memory allocation and direct I/O access, and PCIe bus enhancements (e.g.,
SR-IOV). Some specific approaches that may be employed in the hypervisor
domain for enhancing NFV VM performance include exclusive allocation of
whole CPU cores to VMs, direct memory mapped drivers for inter-VM com-
munications and for VMs to directly access physical NICs, and implementing
vSwitch as a high-performance VM [26].
Figure 3.18 depicts the NFV hypervisor architecture presented by ETSI
NFV-ISG.
Figure 3.19 Container interfaces provided by NFVI and its constituent domains.
demand instantiation of the VNF for service provisioning. In [27], the main
information elements in a VNFD are grouped as follows.
Figure 3.21 shows the example provided in [27] for illustrating the rela-
tionship between a VNFD and the associated VNF. This example shows a VNF
instance that is made up of four VNFC instances that are of three different
types: A, B, and C. Each VNFC type has its own requirements on the operating
system and the execution environment. These virtual resources and their inter-
connectivity requirements are described in a VNFD. Besides resource require-
ments, a VNFD also contains references to VNF binaries, scripts, configuration
data, and so on, which are needed by the VNFM to configure the VNF prop-
erly. The requirements for NFVI resources (e.g., connectivity requirements,
bandwidth, latency) are also included in the VNFD (but not shown in Figure
3.21).
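The kind of information carried by a VNFD can be sketched informally as nested key-value data. The field names below are loosely modeled on the groupings in [27] and on the example of Figure 3.21 (four VNFC instances of three types), but they are illustrative rather than the normative ETSI schema.

```python
# Illustrative (non-normative) sketch of the information a VNFD carries:
# per-VNFC resource requirements, internal connectivity, and references to
# the artifacts the VNFM needs to configure and deploy the VNF.
vnfd = {
    "vnf_id": "example-vnf",
    "vendor": "example-vendor",
    "version": "1.0",
    "vnfc_descriptors": [
        {"type": "A", "count": 2, "vcpu": 4, "memory_gb": 8, "guest_os": "linux"},
        {"type": "B", "count": 1, "vcpu": 2, "memory_gb": 4, "guest_os": "linux"},
        {"type": "C", "count": 1, "vcpu": 8, "memory_gb": 16, "guest_os": "linux"},
    ],
    "internal_virtual_links": [
        {"name": "vl-ab", "connects": ["A", "B"], "bandwidth_mbps": 1000},
        {"name": "vl-bc", "connects": ["B", "C"], "bandwidth_mbps": 500,
         "max_latency_ms": 2},
    ],
    "artifacts": {
        "images": ["vnfc-a.qcow2", "vnfc-b.qcow2", "vnfc-c.qcow2"],
        "lifecycle_scripts": ["day0.sh", "day1.py"],
    },
}

# A VNFM-side check might verify the aggregate resources requested:
total_vcpu = sum(c["count"] * c["vcpu"] for c in vnfd["vnfc_descriptors"])
print("total vCPUs requested:", total_vcpu)
```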
3.6.2.2 VNF Lifecycle Management
Physical appliances have a life cycle: they get unpacked, placed in a location,
powered up, connected to the network, and go through various stages of con-
figurations. The instantiation process of a VNF is the equivalent of unpack-
ing, installing, and powering up a network device. The configuration steps are
similar to those used in physical appliances, typically referred to as day 0/1/2
configuration as outlined next:
Day 0/1/2 configurations are persistent and often stored in a local da-
tabase of the VM. There are many transient states for session-based signaling,
such as voice calls, video sessions, and so on, that are typically not stored in
persistent memory. Note that many of these transient states can be driven from
legacy management systems as long as the virtual appliance appears to have the
same management interface as the legacy physical appliance.
A large array of protocols may be used for the different configuration
phases: for example, DHCP and various authentication protocols for day-0
configuration, file downloads for day-1 configuration, and NETCONF/YANG
for day-2 configuration.
Other lifecycle events include decommissioning a VM, upgrading the
software of a VM, recovering from a failure condition, and moving a VM to a
different location. These events can all be managed by the NFV MANO
component, as described in the next subsection.
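The lifecycle stages discussed above can be pictured as a simple state machine. The sketch below maps the day 0/1/2 phases to the protocol examples given in the text; the class, states, and configuration values are invented for illustration.

```python
# Illustrative VNF lifecycle sketch: instantiation followed by day 0/1/2
# configuration phases, then normal operation until termination.

class VnfInstance:
    def __init__(self, name):
        self.name = name
        self.state = "INSTANTIATED"        # VM created: the equivalent of power-up
        self.config = {}

    def day0(self):
        # Day-0: bootstrap identity and management connectivity (e.g., via DHCP)
        self.config["mgmt_ip"] = "198.51.100.10"
        self.state = "BOOTSTRAPPED"

    def day1(self):
        # Day-1: load the initial service configuration (e.g., a downloaded file)
        self.config["base_config"] = "golden-config-v1"
        self.state = "CONFIGURED"

    def day2(self, key, value):
        # Day-2: ongoing operational changes (e.g., pushed via NETCONF/YANG)
        self.config[key] = value
        self.state = "IN_SERVICE"

    def terminate(self):
        self.state = "TERMINATED"          # decommissioning handled by MANO

vnf = VnfInstance("vFirewall-01")
vnf.day0(); vnf.day1(); vnf.day2("acl", ["permit tcp any any eq 443"])
print(vnf.state, vnf.config)
```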
network services. The two responsibilities are kept within one functional block
in the current MANO framework mainly for simplicity reasons. It is worth
noting that a key idea of network virtualization is to decouple the function-
alities focusing on service provisioning from the resources in the underlying
infrastructures. Therefore, separating the orchestration of network services and
orchestration of infrastructure resources into two independent entities that
interact with each other through a standard abstract interface will make the
NFV MANO component design align better with the principle of network
virtualization.
The following two lists of capabilities are defined in [28], respectively, for
network service orchestration and resource orchestration.
NFVO capabilities for network service orchestration include:
Both NFVO and VNFM can query the VNF catalogue for finding and retriev-
ing VNFDs to support different operations.
The NFV instances repository holds information about all VNF instances and
network service instances. VNF instances and NS instances are represented by
VNF records and NS records, respectively. Those records are updated during
the lifecycles of the respective instances to reflect the changes caused by execu-
tion of lifecycle management functions for VNF and network service instances.
The NFVI resources repository holds information about available, reserved,
and allocated NFVI resources as abstracted by the VIM across infrastructure
domains, thus providing information useful for resource reservation, allocation,
and monitoring purposes.
Figure 3.23 is a diagram provided by NFV-ISG for illustrating how the
various MANO information elements are organized in catalogs and repositories.
hypervisor and the destination VM of the packet. The descriptor contains infor-
mation about the memory location where the packet is stored. The descriptor
can also be used for specifying the actions to be performed for packet forward-
ing or transmitting (e.g., forwarding the packet to another VM or transmitting
the packet through an NIC). A VNF can ask the NetVM core to forward a
packet to a different VM or transmit the packet to an NIC after the VNF com-
pletes its processing on the packet [29].
Since direct access to shared memory region plays a key role in NetVM
for achieving zero-copy data delivery, ensuring consistency of shared data be-
comes an important issue. Locks are typically used as a consistency protection
mechanism in memory sharing; however, even an uncontested lock introduces
delay that may degrade the high-performance packet processing that NetVM
attempts to offer. Fortunately, analysis given in [29] indicates that the com-
munication structure used in NetVM can be implemented safely without any
lock. Since a packet descriptor will be held only by either the hypervisor or a
single VM at any time, packet data in the shared memory region will never see
concurrent access.
Another issue that has been considered in NetVM implementation is
nonuniform memory access (NUMA) awareness. Modern servers often have
multiple CPU sockets connecting to different banks of memory. This may re-
sult in variable memory access time depending on memory location relative to
a processor, especially when a thread accesses data that is spread across multiple
memory modules. To avoid NUMA costs, NetVM uses one memory page re-
gion per CPU socket and ensures that a packet stored in one region is only
processed by the CPU cores on that socket.
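The zero-copy handoff in NetVM can be illustrated with a toy model in which packet payloads stay in one shared buffer while only small descriptors move between the hypervisor and VM queues; because a descriptor is owned by exactly one party at a time, no locking is required. The classes below are a conceptual sketch rather than NetVM code.

```python
# Toy model of NetVM-style zero-copy forwarding: the packet stays in shared
# memory, only a small descriptor (buffer index + action) moves between queues,
# and each descriptor has exactly one owner at any time, so no locks are needed.
from collections import deque

shared_memory = [None] * 1024              # conceptual huge-page packet buffer

class Descriptor:
    def __init__(self, slot, action="deliver", target=None):
        self.slot = slot                   # where the packet lives in shared memory
        self.action = action               # e.g., "deliver", "forward", "transmit"
        self.target = target               # next VM, or NIC name for transmission

hypervisor_rx = deque()                    # descriptors produced on packet arrival
vm_queues = {"vFW": deque(), "vRouter": deque()}

def nic_receive(slot, payload):
    shared_memory[slot] = payload          # written once; never copied afterwards
    hypervisor_rx.append(Descriptor(slot, "deliver", "vFW"))

def hypervisor_dispatch():
    while hypervisor_rx:
        desc = hypervisor_rx.popleft()     # ownership moves to the target VM
        vm_queues[desc.target].append(desc)

def vm_process(name, next_hop):
    queue = vm_queues[name]
    while queue:
        desc = queue.popleft()
        # The VNF reads/modifies shared_memory[desc.slot] in place, then asks the
        # NetVM core to forward the same buffer to the next VM or out of a NIC.
        if next_hop in vm_queues:
            desc.action, desc.target = "forward", next_hop
            vm_queues[next_hop].append(desc)
        else:
            print(f"transmit slot {desc.slot} "
                  f"({shared_memory[desc.slot]!r}) via {next_hop}")

nic_receive(0, b"pkt-0")
hypervisor_dispatch()
vm_process("vFW", "vRouter")
vm_process("vRouter", "eth0")
```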
NFVI devices, VNF applications, MANO software, among others) and assures
interoperability among them.
Open platform for NFV (OPNFV) is a Linux Foundation collaborative
project that intends to provide an open source NFV implementation platform
for testing and validating NFV solutions. The objective of OPNFV is to pro-
mote interoperable NFV solutions and stimulate the open source communities
to create software and hardware for NFV implementations based on common
industry requirements.
The overall design of the OPNFV platform is modular and allows for
extensions and innovations beyond community components. Such a design pro-
vides users with choices to obtain additional value from proprietary or special-
ized components. The OPNFV architecture that is currently under develop-
ment follows the NFV architectural framework specified by ETSI NFV-ISG.
The OPNFV project initially started on the NFV infrastructure layer compris-
ing the NFVI and VIM components, and focused on the ways these compo-
nents interact and the interfaces between them. These interfaces include VNF
interfaces to virtual infrastructure, interfaces used by applications to execute on
virtual infrastructure, interfaces between the virtual infrastructure and VIM,
and interfaces between the VIM and the VNFM/NFVO.
The technical overview published by the OPNFV project (www.opnfv.org/software/technical-overview) identifies the following list of functionalities
as the main use cases of the OPNFV platform:
A key part of OPNFV is the Pharos Community Lab project and the bare
metal lab infrastructure hosted by the Linux Foundation. Pharos is a test lab for
developing and testing NFV solutions. The lab is geographically and technically
diverse to allow NFV technologies to be developed based on various hardware
environments.
NIaaS plays the same role in network virtualization as IaaS does in a cloud
environment, but focuses on networking capabilities instead of general com-
putational resources in the infrastructure. Similarly, NaaS in service-oriented
network virtualization is comparable to SaaS in the cloud service model.
It is worth noting that from an end user’s perspective, services provided
by the current IP-based Internet can be regarded as NSs, although they might
not have all the features of cloud services, such as elastic on-demand service
delivery. For the current TCP/IP protocol stack, the interface between the ap-
plication layer and the TCP layer (e.g., socket interface) provides a service in-
terface. However, although this interface is standardized, it lacks the necessary
abstraction of the network platform for decoupling the functions and imple-
mentations of network services. That is, service access methods used by the user
are dependent on the service implementation. For example, the socket API needs
to be revised if the transport layer protocol is changed or replaced by a new protocol.
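This coupling is visible directly in the socket API: an application must name the transport protocol when it creates a socket, so replacing TCP with another transport requires changing application code. The short example below uses only the standard Python socket module.

```python
# The standard socket API exposes the transport protocol to the application:
# choosing TCP vs. UDP changes both the socket creation call and the
# send/receive pattern, so the service interface is not decoupled from its
# implementation.
import socket

# TCP client: connection-oriented byte stream
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# tcp.connect(("example.com", 80)); tcp.sendall(b"...")

# UDP client: connectionless datagrams -- a different API surface
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# udp.sendto(b"...", ("example.com", 53))

tcp.close(); udp.close()
```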
Figure 3.30 NaaS-based virtualization for unification of network and cloud services [35].
The infrastructure layer (IL) consists of all the resources in compute and
network infrastructures and the control functions in the infrastructure domains.
Each infrastructure domain has a logically centralized domain controller, which
virtualizes and controls the infrastructure resources in the domain through local
agents. Local resource agents operate physical resources by following instruc-
tions from the corresponding controllers. For example, an OpenFlow control-
ler controls network infrastructure resources through the OpenFlow agents in
OpenFlow switches; the OpenStack Nova component controls compute infrastruc-
ture resources through the Nova compute interface. By exploiting suitable virtual-
ization technologies, the IL supports creation of virtual instances of network
and compute functions out of physical resources.
The orchestration layer (OL) comprises two major components: the re-
source orchestrator (RO) and the controller adaptor (CA). The CA provides
domain-wide resource abstraction and virtualization for the various types of
resources in different infrastructure domains. The CA works with the domain
controllers in the IL to maintain a global view of resources and capabilities for
each domain. The CA presents the abstract view of each domain to the RO.
The RO harmonizes the virtualized domain resources to form a global virtual
resource view of the entire infrastructure layer. RO also presents this global view
of virtual resources to the service layer.
The service layer (SL) performs service life cycle management and or-
chestration over virtual resources for supporting unified network-cloud servic-
es. The SL is responsible for defining and managing the logics of consumable
services, establishing programmable interfaces to service users, and providing
interfaces to OSS/BSS systems for service management. This layer also creates
service abstraction as needed toward different users and realizes necessary adap-
tation according to the abstraction.
The management component in the UNIFY architecture comprises
management functions for both physical infrastructure resources and virtual
network/compute functions. This component includes network function in-
formation base (NF-IB) used by the RO for network resource orchestration.
The network function system in the UNIFY architecture comprises records of
instantiated network functions in the system, including data and control/man-
agement plane components and the corresponding forwarding overlays.
The UNIFY architecture creates a unified platform for rapid and flex-
ible service creation and delivery through joint virtualization and orchestra-
tion of networks and compute resources. It is worth noting that although the
design shares some similarity with the ETSI NFV architecture, it takes a different
architectural approach by generalizing the SDN principles in a network virtu-
alization environment. This approach attempts to enable multilevel recursive
resource control of network functions with split data and control plane func-
tions, thus bringing in the benefits of SDN technologies to address control and
management challenges in network virtualization [44].
More information about the UNIFY project can be found at https://www.fp7-unify.eu.
over both the overlay infrastructure and the underlay fabric allows them to be
orchestrated together to deliver the features and services required in a central
office.
The fabric in CORD is organized in a leaf-spine topology optimized for
traffic flowing east to west; namely, traffic between the access network connect-
ing users to a central office and the upstream links connecting the central office
to the operator’s backbone network. This design assures that high I/O rates can
be supported. In addition, by connecting I/O servers directly to the fabric, the
CORD architecture does not rely on routers to connect to the data center. In
CORD, routing functions are realized as VNF instances hosted on data center
servers.
The CORD software architecture is depicted in Figure 3.33. As sum-
marized in [45], the reference implementation of CORD software architecture
leverages technologies from the following open projects:
• OpenStack: the management suite that provides the core IaaS capability
in a data center and is responsible for creating and provisioning VMs
and VNs;
• ONOS: an SDN network operating system that hosts a collection of
control applications for implementing services and embedding VNs in
CORD fabric network;
• Side by side: some servers are managed as containers and the others as
VMs with OpenStack. Virtual functions implemented based on con-
tainers and VMs can be orchestrated to offer network services.
• Docker on top of OpenStack: since containers are nothing but a way
to fragment Linux processes into well-isolated “VM-like” instances, it is
possible to use OpenStack to create a base Linux instance in a VM and
then use Docker to overlay containers on top of it.
Not all virtualized functions can run in containers, because any kernel
change made by an NFV implementation (e.g., for performance improvements or
simply because the original code used a nonstandard Linux distro) cannot be
accommodated in a container. Therefore, it makes sense to have both options
available. However, in some use cases with very high scale (e.g., a virtual CPE),
container-based virtualization has advantages due to its lightweight implementation.
3.9 Conclusion
In this chapter, we discussed application of virtualization in networking for ad-
dressing some of the fundamental challenges to current networking technolo-
gies for meeting the requirements of future network services.
We first introduced the vision of network virtualization, which adopts
virtualization as a key architectural attribute in network designs. Network vir-
tualization allows multiple independent virtual networks to share a common
infrastructure substrate and alternative network architectures to be deployed
in individual virtual networks. Network virtualization splits the role of tradi-
tional Internet service providers into multiple independent entities, including
infrastructure providers, virtual network providers, virtual network operators,
and service providers. By decoupling network functions for service provisioning
from the infrastructures for data transportation and processing, network virtu-
alization enables independent evolution of service functions and infrastructure
technologies. A key to realizing network virtualization lies in creating virtual
networks for meeting service requirements while achieving various optimiza-
tion objectives. In this chapter, we presented the general architecture of net-
work virtualization, described the main functional roles in the architecture, and
discussed the benefits brought in by the architecture. We also reviewed repre-
sentative technologies for creating virtual networks, mainly in two areas—dis-
covering available infrastructure resources and embedding virtual networks into
physical network infrastructures.
The second part of this chapter covers network function virtualization
(NFV). NFV leverages standard IT virtualization technologies to implement
network functions as software instances that are consolidated on industry stan-
dard servers and storages. Essentially, NFV embraces the general architectural
vision introduced by network virtualization and provides specific architecture
and related mechanisms for realizing this vision. In this chapter, we present-
ed the NFV architectural framework proposed by ETSI and discussed some
basic principles for virtualizing network functions. Then we described the
key components in the NFV architecture, including the NFV infrastructure,
VNF software architecture, and the management and orchestration compo-
nent. Achieving high-performance VNFs on standard server platforms is
key to realizing NFV. Therefore, we particularly discussed some representative
technologies for implementing high-performance virtual functions, including
DPDK, NetVM, ClickOS, and OpenNFV, in this chapter.
The last part of this chapter is focused on virtualization-based network
service provisioning. We introduced the service-oriented architecture (SOA)
that has been widely adopted in web and cloud service models and discussed its
applications in networking. The SOA may facilitate realization of virtualization
in networking through the network-as-a-service (NaaS) paradigm. NaaS has
been employed in the NFV environment in the form of NFVIaaS, VNFaaS,
and VNPaaS. The centralized and programmable control enabled by SDN pro-
vides an effective platform for supporting NaaS-based virtualization. Virtualiza-
tion, SOA, and SDN together offer a promising approach to unifying network
and cloud services, which is expected to have a significant impact on future
service provisioning. We briefly reviewed two research projects, UNIFY and
CORD, to reflect the state of the art on research toward network-cloud service
unification.
References
[1] Xen, http://www.xenproject.org. Last accessed in July 2016.
[2] Kernel Virtual Machine, http://www.linux-kvm.org. Last accessed in July 2016.
[3] VMware, http://www.vmware.com. Last accessed in July 2016.
[4] Turner, J., and D. E. Taylor, “Diversifying the Internet,” Proceedings of the 2005 IEEE
Global Telecommunications Conference (GLOBECOM’05), Dec. 2005.
[5] Anderson, T., L. Peterson, S. Shenker, and J. Turner, “Overcoming the Internet Impasse
Through Virtualization,” IEEE Computer Magazine, Vol. 38, No. 4, April 2005, pp.
34–41.
[6] Feamster, N., L. Gao, and J. Rexford, “How to Lease the Internet in Your Spare Time,”
ACM SIGCOMM Computer Communication Review, Vol. 37, No. 1, January 2007, pp.
61–64.
[7] GENI Planning Group, “GENI Design Principles,” IEEE Computer Magazine, Vol. 39,
No. 9, Sept. 2006, pp. 102–105.
[8] Szegedi, P., J. F. Riera, J. A. Garcia-Espin, M. Hidell, P. Sjodin, et al., “Enabling Future
Internet Research: the FEDERICA Case,” IEEE Communications Magazine, Vol. 49, No.
7, July 2011, pp. 54–61.
[9] Chowdhury, N. M. M. K., and R. Boutaba, “Network Virtualization: State of the Art and
Research Challenges,” IEEE Communications Magazine, Vol. 47, No. 7, July 2009, pp.
20–26.
[10] Baucke, S., and C. Gorg, “Virtualization Approach: Concept,” 4WARD Project
Deliverable 3.1.1, September 2009.
[11] Belbekkouche, A., M. Hassan, and A. Karmouch, “Resource Discovery and Allocation in
Network Virtualization,” IEEE Communications Surveys & Tutorials, Vol. 14, No. 4, 2012,
pp. 1114–1125.
[12] Ham, J., P. Grosso, R. Pol, A. Toonk, and C. Laat, “Using the Network Description
Language in Optical Networks,” Proceedings of the 10th IFIP/IEEE International Symposium
on Integrated Network Management, May 2007, pp. 199–205.
[13] Campi, A., and F. Callegati, “Network Resource Description Language,” Proceedings of the
2009 IEEE Global Communication Conference (GLOBECOM’09), Dec. 2009.
[14] Abosi, C. E., R. Nejabati, and D. Simeonidou, “A Novel Service Composition Mechanism
for the Future Optical Internet,” Journal of Optical Communications and Networking, Vol.
1, No. 2, 2009, pp. A106–A120.
[15] Duan, Q., “Network Service Description and Discovery for High-Performance Ubiquitous
and Pervasive Grids,” ACM Transactions on Autonomous and Adaptive Systems, Vol. 6, No.
1, 2011.
[16] Koslovski, G. P., P. V.-B. Primet, and A. S. Charao, “VXDL: Virtual Resources and
Interconnection Networks Description Language,” Proceedings of the 2nd International
Conference on Networks for Grid Applications, Oct. 2008.
[17] Houidi, I., W. Louati, D. Zeghlache, and S. Baucke, “Virtual Resource Description and
Clustering for Virtual Network Discovery,” Proceedings of the 2009 IEEE International
Conference on Communications (ICC09), June 2009.
[18] Fischer, A., J. F. Botero, M. T. Beck, H. de Meer, and X. Hesselbach, “Virtual Network
Embedding: A Survey,” IEEE Communications Surveys & Tutorials, Vol. 15, No. 4, 2013,
pp. 1888–1906.
[19] Yu, M., Y. Yi, J. Rexford, and M. Chiang, “Rethinking Virtual Network Embedding:
Substrate Support for Path Splitting and Migration,” ACM SIGCOMM Computer
Communication Review, Vol. 38, No. 2, April 2008, pp. 17–29.
[20] Hu, Q., Y. Wang, and X. Cao, “Virtual Network Embedding: An Optimal Decomposition
Approach,” Proceedings of the 23rd IEEE International Conference on Computer
Communications and Networks (ICCCN2014), August 2014.
[21] Chowdhury, N. K., M. R. Rahman, and R. Boutaba, “Virtual Network Embedding with
Coordinated Node and Link Mapping,” Proceedings of 2009 IEEE International Conference
on Computer Communications (INFOCOM2009), April 2009, pp. 783–791.
[22] Wang, Y., Q. Hu, and X. Cao, “A Branch-and-Price Framework for Optimal Virtual
Network Embedding,” Elsevier Journal of Computer Networks, Vol. 94, No. 1, Jan. 2016,
pp. 318–326.
[23] Hu, Q., Y. Wang, and X. Cao, “Survivable Network Virtualization for Single Facility
Node Failure: A Network Flow Perspective,” Elsevier Journal of Optical Switching and
Networking, Vol. 10, No. 4, April 2013, pp. 406–415.
[24] ETSI NFV-ISG, “Network Functions Virtualization: An Introduction, Benefits, Enablers,
Challenges and Call for Action,” Proceedings of SDN and OpenFlow World Congress, Oct.
2012.
[25] ETSI NFV-ISG, “NFV 002: Network Function Virtualization (NFV)—Architectural
Framework v1.2.1,” December 2014.
[44] Szabo, R., B. Sonkoly, and M. Kind, “UNIFY: Unifying Cloud and Carrier Networks—
Deliverable 2.3 Final Architecture,” November 2014.
[45] Peterson, L., “CORD: Central Office Re-Architected as a Datacenter,” IEEE Software
Defined Networks, white paper, Nov. 2015.
[46] Das, S., A. Al-Shabibi, J. Hart, C. Chan, F. Castro, et al., “CORD Fabric, Overlay
Optimization, and Service Composition,” Open Networking Lab CORD design notes,
March 2016.
4
Integrating SDN and NFV in Future
Networks
Qiang Duan
4.1 Introduction
Software-defined networking (SDN) and network virtualization (NV) are two
major innovations in the field of networking. SDN separates control and man-
agement functionalities from data forwarding operations to enable a central-
ized and programmable network control platform. NV introduces a layer of
abstraction for the underlying infrastructure upon which virtual networks with
alternative architectures may be constructed to meet diverse service require-
ments. Embracing the vision of network virtualization, the network function
virtualization (NFV) architecture has been proposed to leverage virtualization
technologies to transfer network functions from hardware appliances to soft-
ware applications.
Although SDN and NFV were initially developed as two independent
networking paradigms, evolution of both technologies has shown strong syn-
ergy between them. SDN and NFV share some common goals and similar tech-
nical ideas and therefore complement each other. Integrating SDN and NFV in
future networking may trigger innovative network designs that fully exploit the
advantages of both paradigms. Therefore, the relationship between SDN and
NFV and how these two paradigms may be combined in future networks have
become an important research topic that attracts extensive attention from both
industry and academia.
The past few years have witnessed exciting progress in SDN technologies and
their applications in various networking scenarios. On the other hand, research-
ers have noticed some issues of the current SDN approach that may limit its
ability to fully support future network services. Currently SDN lacks the flex-
ibility to support multiple alternative network architectures that may be needed
for meeting diverse service requirements. For meeting such requirements, SDN
switches (e.g., OpenFlow-enabled switches) on the data plane must be pre-
pared to support fully general flow matching and packet forwarding actions,
which introduces significant cost and complexity in switches, thus compro-
mising the promise of simplified data plane devices made by SDN. In addi-
tion, service evolution leads to increased generality in flow matching and
packet forwarding operations, and this additional generality must be present
on every switch in current SDN design [1]. On the control plane, the current
SDN architecture lacks standard northbound APIs for network programming
and effective mechanisms for coordinating heterogeneous network controllers.
Lack of interoperability between SDN controllers prevents applications from
functioning seamlessly across multiple network domains for end-to-end service
provisioning.
A root cause of the limitations that prevent current SDN designs from achieving
their full potential for service provisioning is the tight coupling between network
architecture and infrastructure on both data and control planes [2]. Separa-
tion between data and control planes alone in the current SDN architecture is
not sufficient to overcome this obstacle. Another dimension of abstraction to
decouple service functions and network infrastructures is needed in order to
unlock SDN’s full potential. Therefore, applying the network virtualization no-
tion and the NFV architecture into SDN may further enhance SDN capability
of flexible service provisioning to meet the challenging requirements of future
networking and cloud computing.
On the other hand, many technical challenges must be addressed before
realizing the NFV paradigm. Much more sophisticated control and manage-
ment mechanisms for both virtual and physical network resources are required
by the highly dynamic networking environment enabled by NFV, in which pro-
grammatic network control is indispensable. Some of the networking challeng-
es that the NFV architecture is facing match the design goals of SDN. There-
fore, employing the SDN principle—decoupling control intelligence from the
controlled resources to enable a logically centralized programmable control/
management plane—in the NFV architecture may greatly facilitate realization
of NFV. Many desirable network features expected for an NFV environment
can be built based on SDN capabilities (e.g., dynamic control of network in-
frastructures, automatic management of network functions and services, elastic
and fine-grained scalability adapted to service demands, seamless mobility of
network functions, and efficient multitenancy support).
The research and industry communities of both SDN and NFV share a
common vision about the synergistic nature between SDN and NFV. Although
the goals of NFV can be achieved using non-SDN mechanisms, separation of
network control and data forwarding functions in SDN may greatly facilitate
NFV development and enhance its performance. On the other hand, NFV is
able to support SDN by providing the infrastructure upon which the SDN
software can be running. When SDN controllers and applications are realized
as VNF instances, they can be composed with other VNFs into service chains
through orchestration, which allows SDN to benefit from the flexibility, elastic-
ity, and reliability brought in by NFV. SDN and NFV should be coordinated
to achieve their overall business objectives. It is expected that SDN and NFV
will become less distinguishable as independent topics and merge into a unified
software-defined and virtualization-based networking paradigm.
Although a vision of combining SDN and NFV has been widely accepted
in the networking community, researchers have different ideas of realizing this
vision. Integrating SDN and NFV into a unified network architecture in order to
maximize the benefits of both paradigms is not straightforward due to the wide
variety of intertwined network elements involved and the complex interaction
among them. Currently, SDN and NFV are still being studied and standardized
without sufficient synergy. There is an urgent need for a holistic architectural
framework in which SDN and NFV principles may be naturally combined.
In this chapter, we review the recent technical developments toward in-
tegrating the principles of SDN and NFV in future networks for meeting the
challenging requirements of service provisioning. In Section 4.2, we discuss
research progress on virtualization of SDN control platform that enables mult-
itenant virtual software-defined networks. In Section 4.3, we review technolo-
gies that employ SDN-based network control in NFV infrastructure for provid-
ing connectivity services that support VNF orchestration and service chaining.
SDN-based network control and management have also been applied to virtual
network functions as well as physical infrastructure resources in the entire NFV
architecture. Some technologies for combining SDN with the NFV architecture
are discussed in Section 4.4. In Section 4.5, we examine the key principles of
SDN and NFV and show their relationship in a two-dimensional abstraction
model. In this section, we also present an architectural framework that provides
a holistic vision of integrating the SDN and NFV principles in a unified net-
work architecture and then discuss some challenges and research topics toward
integration of SDN and NFV in future networks by following this framework.
Separation of control and data planes in SDN is not equivalent to the decou-
pling between virtual networks and physical infrastructures introduced by net-
work virtualization. Although each VN is realized with virtual resources, it is a
complete network that comprises functions of both data forwarding and con-
trol/management. The underlying network infrastructure also contains its own
control and management as well as data forwarding functionalities. Therefore,
in the SDN architecture the data plane comprises both virtual and physical re-
sources, and the control plane consists of functions for controlling both virtual
network functions and physical network infrastructures.
Although SDN brings in benefits such as simplified data plane devices,
flexible network control, efficient management, and improved network service
performance, the lack of explicit distinction between virtual service functions
and physical infrastructures in the SDN architecture causes unnecessary cou-
pling between network architecture and the underlying network infrastructure,
which hinders network designs to fully exploit SDN potential for supporting
future network services. It is worth noting that some supposed benefits of SDN
(e.g., cost reduction through sharing physical resources or dynamic network
configuration in multitenant environments) actually come from network vir-
tualization. Although SDN facilitates network virtualization and thus makes
realization of such features easier, it is important to recognize that SDN alone
does not directly provide these benefits [3].
On the other hand, the SDN architecture does facilitate realizing the no-
tion of network virtualization. A centralized SDN controller provides a global
view of an entire network domain, thus offering a perfect platform upon which
a network virtualization layer may be realized. As a hypervisor plays a key role
in computing virtualization, the network virtualization layer acts as a “network
hypervisor” to provide an abstract view of physical network infrastructures,
manage lifecycles of VNs, map virtual network nodes and links to physical
resources, and translate the communications between virtual and physical net-
works. All these key functions of a network virtualization layer may be signifi-
cantly simplified by the centralized and programmable SDN control platform,
as compared to network virtualization implemented based on the distributed
control in traditional networks.
Although network virtualization and SDN are in principle independent
of each other, there exists symbiosis between these two networking paradigms.
SDN and network virtualization may be related and complement each other
in various ways. For example, SDN can be employed as an enabling technol-
ogy for network virtualization. Virtualization can be applied in SDN to realize
multitenant virtual SDNs. In addition, network virtualization provides an ap-
proach for evaluating and testing SDNs.
Virtualization of SDN allows network designs to leverage the combined
benefits of software-defined networking and network virtualization. A general
from vSDN tenant controllers to appropriate controlling messages for the cor-
responding network devices and vice versa. Therefore, a virtualization layer for
SDN network can be thought of having a network hypervisor sitting upon an
SDN control platform for the network infrastructure.
The virtualization layer provides an abstract view of the physical network
infrastructure, which typically includes three types of attributes: network topol-
ogy, node resources, and link resources. Different levels of abstraction may be
applied to physical network topology. The lowest level of abstraction represents
the physical nodes and links in an identical virtual topology (i.e., the virtual
topology is a transparent 1-to-1 mapping of the physical topology). The highest
level of abstraction represents the entire physical network as a single virtual node
with ingress and egress ports. Generally, there is a range of levels of abstrac-
tion between the lowest and highest levels. In addition, network virtualization
should allow independent addressing schemes used in virtual and physical net-
works; therefore, mapping between virtual and physical network addresses is
another important aspect of infrastructure abstraction provided by the network
hypervisor [4].
Isolation provided by the network hypervisor between vSDNs should be
applicable to both the control plane and the data plane. Control plane iso-
lation allows each vSDN controller to have the impression of controlling its
own network without interference from other vSDN controllers. Data plane
isolation requires allocation of a sufficient amount of data plane resources, in-
cluding switch capacities and link bandwidth, to each vSDN for meeting its
service requirements. Switch resources include capacities of packet forwarding
and processing. Flow-based SDN switches typically use TCAM for storing flow
tables and matching rules; therefore, specific amounts of TCAM capacity at
each switch should be assigned to vSDNs to provide proper isolation between
them. The network hypervisor should also assign a specific amount of physical
link bandwidth to each vSDN. As SDN switches follow a match-action pattern,
the rules for flows from different vSDNs must be uniquely identified in order
to guarantee that forwarding decisions of multitenant vSDNs do not conflict
with each other. The hypervisor should be able to associate a specific set of traf-
fic flows to virtual networks so that one set of traffic can be clearly isolated from
another [4].
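To make these isolation requirements concrete, the following minimal Python sketch models how a network hypervisor might account for per-vSDN flow table (TCAM) shares and reserved link bandwidth; the class and attribute names are illustrative assumptions, not taken from FlowVisor or any particular hypervisor.

from dataclasses import dataclass, field

@dataclass
class SwitchShare:
    tcam_limit: int          # flow-table entries reserved for this vSDN
    tcam_used: int = 0

@dataclass
class VSdnSlice:
    name: str
    link_bw: dict = field(default_factory=dict)     # link id -> Mb/s reserved
    switches: dict = field(default_factory=dict)    # switch id -> SwitchShare

    def can_install_rule(self, switch_id: str) -> bool:
        """Data plane isolation: reject rules beyond the reserved TCAM share."""
        share = self.switches[switch_id]
        return share.tcam_used < share.tcam_limit

    def install_rule(self, switch_id: str) -> None:
        if not self.can_install_rule(switch_id):
            raise RuntimeError("flow table share exhausted for " + self.name)
        self.switches[switch_id].tcam_used += 1

# Usage: a hypervisor would consult the slice before relaying a flow rule.
tenant_a = VSdnSlice("tenant-a",
                     link_bw={"s1-s2": 100},
                     switches={"s1": SwitchShare(tcam_limit=2)})
tenant_a.install_rule("s1")
tenant_a.install_rule("s1")
print(tenant_a.can_install_rule("s1"))   # False: share exhausted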
The controller of each vSDN may run on a dedicated host server that is
typically located in the tenant network operation center. In principle, it can be
deployed using any currently available SDN controller implementation, such
as OpenDaylight, Floodlight, and ONOS. However, this type of static deploy-
ment of tenant controllers may limit the full benefits brought in by network vir-
tualization in SDN. Whenever a new vSDN is created, a tenant controller for
the vSDN is required to be installed on a server, and the connectivity between
the network hypervisor and the server must be configured. A more desirable
physical network data plane that allows FlowVisor to partition the network
infrastructure into “slices.” Each slice of infrastructure is assigned to a virtual
network and is controlled by a tenant OpenFlow controller to support a set of
flows. FlowVisor uses OpenFlow protocol to communicate with both tenant
controllers and data plane switches, thus allowing any OpenFlow-based SDN
controller to be used in vSDN without modification [5]. The architecture of
FlowVisor system for SDN virtualization is depicted in Figure 4.3.
FlowVisor acts as a transparent proxy between OpenFlow-enabled switch-
es in the data plane and the tenant OpenFlow controllers of vSDNs. All Open-
Flow messages, both from switches to tenant controllers and vice versa, are sent
through FlowVisor, which makes sure that a tenant controller can only commu-
nicate with an OpenFlow switch that is allocated to the corresponding vSDN.
For a message generated from a tenant controller, FlowVisor ensures that the
message acts only on switches assigned to the vSDN controlled by the control-
ler. For a message from a switch, FlowVisor examines the message content to
determine which tenant controller(s) the message should be forwarded to and
assures that each tenant controller only receives messages relevant to their own
slices of network infrastructure [5].
FlowVisor introduces the term flowspace as a basis for achieving isola-
tion between vSDNs. The set of flows forwarded by a vSDN can be thought of
as constituting a predefined subspace of the entire space of possible packet head-
ers, which is called the flowspace in FlowVisor. FlowVisor allocates each vSDN
its own flowspace and ensures that the flowspaces of distinct vSDNs do not
overlap. Given a packet header, FlowVisor can decide which flowspace contains
the packet and therefore which vSDN it belongs to. FlowVisor limits the ten-
ant controllers of vSDNs to operate only on their own flowspaces in order to
prevent interference between tenant controllers.
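A flowspace can be pictured as a set of header-field predicates. The sketch below, which uses hypothetical field names and values rather than FlowVisor's actual implementation, shows how a hypervisor could map a packet header to the vSDN whose flowspace contains it and check that two flowspaces do not overlap.

from ipaddress import ip_address, ip_network

# A flowspace here is a set of header-field predicates; all must match.
flowspaces = {
    "tenant-a": {"ip_dst": ip_network("10.1.0.0/16"), "vlan": 100},
    "tenant-b": {"ip_dst": ip_network("10.2.0.0/16"), "vlan": 200},
}

def owning_vsdn(header: dict):
    """Return the vSDN whose flowspace contains this packet header, if any."""
    for tenant, space in flowspaces.items():
        if (ip_address(header["ip_dst"]) in space["ip_dst"]
                and header["vlan"] == space["vlan"]):
            return tenant
    return None

def spaces_overlap(a: dict, b: dict) -> bool:
    """Two flowspaces overlap only if every header dimension overlaps."""
    return a["ip_dst"].overlaps(b["ip_dst"]) and a["vlan"] == b["vlan"]

print(owning_vsdn({"ip_dst": "10.1.3.7", "vlan": 100}))                # tenant-a
print(spaces_overlap(flowspaces["tenant-a"], flowspaces["tenant-b"]))  # False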
For achieving topology isolation between vSDNs, FlowVisor examines
and edits OpenFlow messages to only report states of the physical network re-
sources, including switches, ports, and network links, that are part of a vSDN
to the respective tenant controller. To enforce bandwidth isolation, FlowVisor
maps the packets of a given virtual network to a prescribed VLAN priority
code point (PCP). The 3-bit VLAN PCP allows for mapping all traffic to eight
distinct classes with different priority levels. In order to provide isolation be-
tween flow table entries of multitenant vSDNs, FlowVisor keeps track of the
number of flow entries inserted by a tenant controller to each switch. If a tenant
controller exceeds a prescribed limit of flow entries at a switch, then FlowVi-
sor replies with a message indicating that the flow table of the switch is full. In
order to isolate the SB interfaces of individual vSDNs, FlowVisor rewrites the
OpenFlow transaction identifiers to ensure that different vSDNs utilize distinct
transaction identifiers. Similarly, controller buffer access and status messages are
modified by FlowVisor to create isolated OpenFlow control slices [4].
FlowVisor is the first network hypervisor reported in the literature for
virtualizing the SDN control platform. Although it successfully demonstrated realiza-
tion of multitenant virtual networks based on the SDN architecture, FlowVisor
has some limitations that need to be addressed in order to fully realize the virtu-
alization notion in SDN. Since the work on FlowVisor was published in 2009,
researchers inspired by the idea of an SDN-based network hypervisor have
made progress in extending FlowVisor in various aspects.
The mechanisms employed by FlowVisor for providing bandwidth isola-
tion and switch capacity allocation were not inherent to network hypervisor
design, but rather short-term solutions to deal with the existing hardware ab-
straction. More advanced bandwidth allocation and scheduling mechanisms
have been developed to enhance bandwidth and CPU capacity isolation in
SDN virtualization. For example, an enhanced FlowVisor [6] is implemented
as an extension to the NOX SDN controller to use VLAN PCP for achieving
flow-based bandwidth guarantees. Admission control is also employed by the
enhanced FlowVisor to further protect the resources allocated to vSDNs. Upon
receiving a request for creating a new virtual network, the admission control
function checks if sufficient node and link capacities are available in the physi-
cal network infrastructure to support the new virtual network. If the
residual resources are not sufficient, the request for creating a new virtual net-
work will be rejected.
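The admission check described above can be sketched as follows; the capacity model (residual node capacity and link bandwidth) is a simplification assumed for illustration and does not reproduce the enhanced FlowVisor's actual data structures.

# Residual capacities of the physical substrate (illustrative numbers).
residual_node_cpu = {"s1": 4.0, "s2": 2.0}          # abstract processing units
residual_link_bw  = {("s1", "s2"): 400.0}           # Mb/s

def admit_vsdn(node_demand: dict, link_demand: dict) -> bool:
    """Accept a new vSDN only if every demanded resource is still available."""
    if any(residual_node_cpu.get(n, 0.0) < d for n, d in node_demand.items()):
        return False
    if any(residual_link_bw.get(l, 0.0) < d for l, d in link_demand.items()):
        return False
    # Commit the reservation once the request is admitted.
    for n, d in node_demand.items():
        residual_node_cpu[n] -= d
    for l, d in link_demand.items():
        residual_link_bw[l] -= d
    return True

print(admit_vsdn({"s1": 1.0}, {("s1", "s2"): 300.0}))   # True
print(admit_vsdn({"s2": 1.0}, {("s1", "s2"): 300.0}))   # False: bandwidth exhausted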
When a tenant application invokes an API call to the NOX controller, FlowN
intercepts the call and translates it to instructions for physical switches. When a
tenant application requests the controller to install a flow match-action rule in
a switch, FlowN maps the virtual switch to the corresponding physical switch,
checks that the tenant has not exceeded its share of flow table on that switch,
and then installs the rule on that switch. In order to provide resource isola-
tion between the tenant applications, FlowN allocates each tenant a prereserved
amount of flowspace and flow table memory on each switch, and assigns one
processing thread per container [8].
To differentiate the traffic and rules for different tenant networks in
FlowN, edge switches encapsulate each incoming packet with a protocol-ag-
nostic extra header (e.g., VLAN ID) to identify the tenant network to which this
packet belongs. The extra headers are determined by the virtualization control-
ler. Mapping from virtual to physical addresses occurs when a tenant control
application modifies the flow table. Mapping from physical to virtual addresses
must be performed when a switch sends a message to the controller, which then
forwards the packets to the right tenant control application.
Mapping between virtual topology and physical network is another aspect
of a virtualization layer that may degrade performance of SDN. FlowN lever-
ages advances in database technologies to overcome this potential bottleneck.
Both topology descriptions and assignments to physical resources can be repre-
sented by the relational model of a database. Each virtual topology consists of a
number of nodes, interfaces, and links that can be uniquely identified by some
keys. A physical network topology can also be represented in a similar fashion
by using a database model.
FlowN uses two tables to store the mapping relation between virtual and
physical topologies. One table stores the node assignments from virtual net-
work nodes to physical switches. The other table stores the path assignment
from virtual links to a set of physical links. Then, mapping between virtual
and physical topologies can be achieved through database query operations.
FlowN employs a master-slave database organization for addressing the scal-
ability challenge. The state of the master database is replicated among multiple
slave databases. Using the replicated database, the FlowN virtualization layer
can be distributed among multiple physical servers, each of which is colocated
with a replica of the database. Each physical server is connected with a subset
of physical switches and running the control applications for a subset of tenant
networks [8].
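The two mapping tables translate naturally into relational-style keyed queries. The sketch below uses in-memory dictionaries in place of the replicated database that FlowN actually relies on; the table contents and names are illustrative.

# Table 1: virtual node -> physical switch assignment.
node_map = {("tenant-a", "v1"): "s1", ("tenant-a", "v2"): "s3"}

# Table 2: virtual link -> ordered list of physical links (a path).
path_map = {("tenant-a", ("v1", "v2")): [("s1", "s2"), ("s2", "s3")]}

def physical_switch(tenant: str, vnode: str) -> str:
    """Virtual-to-physical node lookup, i.e., a keyed query on table 1."""
    return node_map[(tenant, vnode)]

def physical_path(tenant: str, vlink: tuple) -> list:
    """Virtual-to-physical link lookup, i.e., a keyed query on table 2."""
    return path_map[(tenant, vlink)]

def virtual_node(tenant: str, switch: str):
    """Reverse (physical-to-virtual) lookup used when a switch sends a message up."""
    for (t, vnode), s in node_map.items():
        if t == tenant and s == switch:
            return vnode
    return None

print(physical_path("tenant-a", ("v1", "v2")))   # [('s1', 's2'), ('s2', 's3')]
print(virtual_node("tenant-a", "s3"))            # v2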
Compared with FlowVisor, which only “slices” physical network resourc-
es, FlowN enables a complete abstraction of the physical network by providing
virtual topologies and virtual address spaces to tenant networks. An advantage
of this abstraction is to make physical resource management transparent to
virtual tenant networks (e.g., virtual nodes can be transparently migrated on
all network domains, and, on the other hand, allows communications between
the orchestrator and domain controllers and also supports collaboration be-
tween different domains.
The control orchestration protocol (COP) provides an interface protocol
between the orchestration layer and diverse SDN domain controllers. COP
unifies the orchestration functionalities in a single protocol and provides a com-
mon NB API so that heterogeneous SDN controllers can be orchestrated using
a common protocol. COP is composed of three basic functions—call, topology,
and path computation services. The call service is defined as the provisioning
interface. A call object must describe the type of service that is requested or
served by it and specify the endpoints between which the service is provided.
The call object also contains the list of effective connections made into the data
plane to support the service call. A connection object is used for a single net-
work domain scope and should include the path across the network topology of
the domain. The topology service provides a common and homogeneous defi-
nition of the network topology information maintained by different domain
controllers. A topology object consists of a set of nodes and edges that form
a tree structure. A node object must contain a list of ports and the associated
switching capabilities. An edge object is defined as the connection link between
two endpoints. The path computation service provides an interface to request
and return a path object, which contains the information about the route be-
tween two endpoints [10].
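The call, topology, and path objects described above can be approximated with simple data classes; the field names below follow the description loosely and are illustrative assumptions rather than the normative COP schema.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Connection:            # scoped to a single network domain
    domain: str
    path: List[str]          # route across the domain topology

@dataclass
class Call:                  # the provisioning interface object
    service_type: str
    a_end: str
    z_end: str
    connections: List[Connection] = field(default_factory=list)

@dataclass
class Node:
    node_id: str
    ports: List[str]
    switching_caps: List[str]

@dataclass
class Topology:              # set of nodes and edges exposed by a controller
    nodes: List[Node]
    edges: List[tuple]       # (node_id, port, node_id, port)

call = Call("connectivity", a_end="client-1", z_end="client-2")
call.connections.append(Connection("domain-A", ["n1", "n2", "n3"]))
print(call)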
An SDN network orchestration system has been developed based on the
COP protocol and the IETF application-based network operations (ABNO)
framework. In this system, each domain controller supports a standard REST-
ful NB interface to communicate with the network orchestrator using the COP
protocol. The orchestrator builds an abstract multidomain topology based on
one of the two aggregation mechanisms: virtual node aggregation or abstract
link aggregation. The virtual node aggregation abstracts internal connectivity
by representing each domain as a virtual node. The border nodes of each do-
main are seen as ports of a virtual node and are connected with other virtual
nodes through interdomain links. For abstract link aggregation, the internal
connectivity of a network domain can be dynamically mapped to a mesh of vir-
tual links. Each domain controller computes a path between the border nodes
of the domain and exposes the virtual links and border nodes to the orchestra-
tor. Path computation for providing end-to-end connectivity is performed in
two stages. First, the orchestrator calculates a path through the abstract multi-
domain topology and performs the domain sequence selection by identifying
the domains and border nodes involved in the path. Then the controllers of all
selected domains for this path perform path computation in parallel to find the
internal connections in their respective domains between the pairs of border
nodes identified by the orchestrator [11].
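The two-stage computation can be sketched as follows: the orchestrator runs a shortest-path search over the abstract topology of border nodes to fix the domain sequence, and each selected domain then expands its own border-to-border segment (simulated here by a per-domain callback). The topology data and function names are invented for illustration.

import heapq

def shortest_path(graph: dict, src: str, dst: str):
    """Plain Dijkstra over an adjacency dict {node: {neighbor: cost}}."""
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None

# Stage 1: abstract multidomain topology of border nodes and virtual links.
abstract = {"A.b1": {"B.b1": 1}, "B.b1": {"B.b2": 1}, "B.b2": {"C.b1": 1}, "C.b1": {}}
domain_sequence = shortest_path(abstract, "A.b1", "C.b1")

# Stage 2: each domain controller expands its own border-to-border segment.
def expand_in_domain(domain: str, ingress: str, egress: str):
    return [ingress, f"{domain}.core", egress]       # placeholder internal path

print(domain_sequence)                     # ['A.b1', 'B.b1', 'B.b2', 'C.b1']
print(expand_in_domain("B", "B.b1", "B.b2"))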
Figure 4.8 Usage of SDN in NFVI to provide connectivity for service function chaining.
acts as a control application running upon the SDN controller and programs
the underlying network infrastructure using NB API of the controller to build
network paths between VNF instances. For end-to-end service provisioning in
large-scale networks, such as the Internet, the compute and network infrastruc-
tures in an NFV environment may belong to multiple administration domains.
For example, VNFs are hosted on servers located in multiple data centers that
are interconnected through WANs. In such cases, there will be multiple SDN
controllers involved for providing network connectivity between VNF instanc-
es, typically including at least a controller for each data center network and a
(logical) controller for each WAN. Therefore, the NFV orchestrator needs to
interact with all these SDN controllers to provide the network connectivity
required by end-to-end services.
If the data centers and WANs fall into one single trust domain (e.g., be-
long to the same service provider), the interactions between the orchestrator
and SDN controllers may basically follow the same mechanism as in the single-
domain case, except with a one-to-many structure. If the data cen-
ters and WANs are in different trust domains (e.g., owned and operated by dif-
ferent infrastructure providers), then the interface between the orchestrator and
SDN controllers typically requires a higher-level abstraction of the underlying
network infrastructures. Due to the autonomous ownership of infrastructures,
providers’ policies may forbid exposure of internal details of their network
infrastructures. In such cases, the IaaS model offers a useful mechanism for
addressing the challenge to achieve effective interactions between the NFV
orchestrator and SDN controllers without exposing network infrastructure
details. Following the IaaS model, each SDN controller may encapsulate the
network infrastructure that is under its centralized control as a service—es-
sentially network infrastructure-as-a-service (NIaaS)—and expose an abstract
view of the network infrastructure with a service description. Each SDN con-
troller provides an on-demand connectivity service explicitly requested by the
orchestrator, which then composes the connectivity services from different in-
frastructure domains to realize service function chaining for end-to-end service
provisioning.
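Under the NIaaS model, each domain controller exposes only an abstract connectivity service. The sketch below shows an orchestrator composing per-domain connectivity requests into an end-to-end chain; the service interface is hypothetical and merely stands in for whatever NB API a given controller exposes.

class DomainConnectivityService:
    """Abstract NIaaS view: no internal topology is exposed, only endpoints."""
    def __init__(self, name: str, endpoints: set):
        self.name, self.endpoints = name, endpoints

    def request_connection(self, src: str, dst: str) -> dict:
        assert src in self.endpoints and dst in self.endpoints
        return {"domain": self.name, "src": src, "dst": dst, "status": "up"}

def compose_chain(domains: list, hops: list) -> list:
    """Ask each domain, in order, for the segment between consecutive hops."""
    return [d.request_connection(a, b)
            for d, (a, b) in zip(domains, zip(hops, hops[1:]))]

dc1 = DomainConnectivityService("dc1", {"vnf-fw", "gw1"})
wan = DomainConnectivityService("wan", {"gw1", "gw2"})
dc2 = DomainConnectivityService("dc2", {"gw2", "vnf-nat"})
print(compose_chain([dc1, wan, dc2], ["vnf-fw", "gw1", "gw2", "vnf-nat"]))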
The NIaaS model offers a very flexible approach to abstracting, exposing,
selecting, composing, and utilizing networking resources in virtual computing
environments and thus may greatly facilitate networking for NFV. Latest devel-
opments in cloud operating systems embrace the notion of NIaaS. For example,
in OpenStack, a widely adopted cloud operating system for both public and
private clouds, the networking module Neutron, which focuses on delivering
networking-as-a-service, has replaced the original networking API Quantum.
State-of-the-art SDN control platforms also support the NIaaS model. For
example, the OpenDaylight framework provides a network service platform
that supports RESTful APIs and OSGi service interfaces.
Figure 4.9 SDN Inline service and forwarding (StEERING) system architecture.
instance into the destination MAC address field of the packet. Then the packet
is forwarded through the transport network toward the next VNF instance in
the service chain. Such a forwarding procedure is repeated until the packet is
returned to the transport network by the last VNF instance. Then the MAC
address of the egress router is written into the destination MAC address field of
the packet, which allows the packet to be forwarded by the transport network
to the egress router [13].
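The destination MAC rewriting procedure can be illustrated with a few lines of Python; the chain description and addresses below are made up for the example.

service_chain = ["02:00:00:00:00:01",   # first VNF instance in the chain
                 "02:00:00:00:00:02"]   # second VNF instance in the chain
egress_router_mac = "02:00:00:00:ff:ff"

def next_destination(current_hop: int) -> str:
    """After each VNF returns the packet, rewrite dst MAC to the next hop."""
    if current_hop < len(service_chain):
        return service_chain[current_hop]
    return egress_router_mac            # chain finished: head for the egress router

packet = {"dst_mac": None, "payload": b"..."}
for hop in range(len(service_chain) + 1):
    packet["dst_mac"] = next_destination(hop)
    print("forward to", packet["dst_mac"])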
In order to support the large number of service instances with diverse
requirements of traffic steering in a large-scale network, design of the StEER-
ING architecture paid special attention to scalability for network control. In
order to reduce the amount of state required at each OpenFlow switch, StEER-
ING employs the multiple table feature supported by the OpenFlow specification
since version 1.1 to transform the flat policy space into a multidimensional
space. The multitable mechanism separates the packet matching process into mul-
tiple steps and results in linear scaling of each table. In order to facilitate mul-
titable organization and fast rule matching across multiple tables, StEERING
uses metadata to communicate intermediate results among different tables and
associated actions. The set of service functions applied to a flow is defined as
a metadata type called the service set of the flow. Such metadata enables every
table to operate on the service set independently, thus simplifying integration
of different types of policies. In addition, the StEERING architecture pushes
complex forwarding operations, such as flow classification and packet head-
er rewriting, to the boundary of the transport network, therefore simplifying
packet forwarding inside the transport network [12].
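The multitable idea can be mimicked with ordinary Python functions, one per table, passing a metadata “service set” between stages; this is a conceptual sketch only and does not reproduce the actual StEERING pipeline or OpenFlow table layout.

# Table 1: classify the flow once and record its service set as metadata.
def classify(packet: dict) -> dict:
    if packet["tcp_dst"] == 80:
        return {"service_set": {"dpi", "cache"}}
    return {"service_set": set()}

# Table 2: per-service steering decisions read only the metadata.
def steer(metadata: dict) -> list:
    order = []
    if "dpi" in metadata["service_set"]:
        order.append("dpi-instance-1")
    if "cache" in metadata["service_set"]:
        order.append("cache-instance-3")
    return order

packet = {"ip_dst": "198.51.100.9", "tcp_dst": 80}
metadata = classify(packet)          # step 1: one match on packet fields
print(steer(metadata))               # step 2: decisions driven by the service set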
The other module in StEERING control unit is the service placement
module, which performs an algorithm that periodically determines the best lo-
cations of the VNF instances for all services. This module may obtain topology
and state information of the transport network from the OpenFlow controller;
therefore, it is able to perform a network-aware placement algorithm to find the
best locations for the VNF instances in a service chain for minimizing the delay
for service traffic to traverse all required VNF instances.
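A toy version of such a placement algorithm is shown below: it enumerates candidate hosting locations for a two-function chain and picks the combination that minimizes the total path delay. The topology and delay figures are invented for illustration and are not the StEERING placement algorithm itself.

from itertools import product

# Pairwise path delays (ms) between the ingress, candidate servers, and egress.
delay = {("in", "srvA"): 2, ("in", "srvB"): 5,
         ("srvA", "srvB"): 1, ("srvB", "srvA"): 1,
         ("srvA", "srvA"): 0, ("srvB", "srvB"): 0,
         ("srvA", "out"): 4, ("srvB", "out"): 2}

candidates = ["srvA", "srvB"]         # servers able to host a VNF instance

def chain_delay(placement: tuple) -> int:
    """Total delay from ingress through both VNF locations to the egress."""
    hops = ["in", *placement, "out"]
    return sum(delay[(a, b)] for a, b in zip(hops, hops[1:]))

best = min(product(candidates, repeat=2), key=chain_delay)
print(best, chain_delay(best))        # lowest-delay locations for VNF1 and VNF2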
With SDN-based network control for traffic steering, VNF instances in
principle may be deployed at any location in the transport network. Therefore,
the NFV orchestrator together with VIM in the NFV MANO component may
determine VNF placement based on the function requirements and compute/
storage infrastructure states and then request that the network domain in NFVI
provide the required connectivity. In such cases, control functions for the com-
puting resources and networking resources are independent, which typically
leads to relatively simple implementations. However, separated control of the
computing and networking domains in NFVI may lead to suboptimal resource
utilization for service provisioning. Network-aware VNF placement strate-
gies consider the topology and resource availability of the underlying network
Figure 4.10 The CONCERT architecture using SDN to support NFV in radio access networks.
industry and academia explored SDN and NFV technologies to address the
challenges to MPC in mobile networks.
ONF has chartered the Wireless and Mobile Working Group (WMWG)
to foster adoption of OpenFlow-based SDN technologies in wireless mobile
networks. The main research objectives of WMWG include studying mobile
network architecture leveraging OpenFlow-based SDN, simplifying interaction
between wireless and fixed networks, and proposing extensions to OpenFlow
protocol for supporting MPC. The ETSI NFV community is also interested in
applying NFV in mobile packet core and has identified virtualized MPC as a
typical use case of NFV. In an NFV-based deployment environment, the key
MPC function entities such as SGW, PGW, MME, and their combinations
may be implemented as software instances running on VMs.
A representative effort of employing SDN technologies to support the ap-
plication of NFV for enhancing flexibility, programmability, and efficiency of
mobile network control is the SDN-based MPC architecture proposed in [18].
This architecture, as shown in Figure 4.11, comprises an OpenFlow-enabled
network and a logically centralized control plane. The control plane contains
MME, combined SGW and PGW, and HSS entities running as a group of
VNFs hosted in the operator’s data centers. The GPRS tunneling protocol (GTP) is used to
connect eNodeBs (eNBs) to ingress switches of the OpenFlow network, which
is responsible for transmitting data to the data center. The NFV domain SDN
controller is responsible for providing network links inside a data center to
provide connectivity between virtual function entities hosted in the data center.
Those virtual functions will be attached as endpoints to the OpenFlow net-
work. The E2E connectivity domain SDN controller in this architecture man-
ages the data plane of the OpenFlow network to provide connections between
data centers.
In an operator data center, MPC control function entities, such as the
combined control function of SGW and PGW (S/PGW-C), can be imple-
mented as a VNF software instance running on top of the NFVI. The S/PGW-
C VNF interacts with the E2E connectivity domain SDN controller via an NB
interface in order to set up network connections between eNBs. For instance,
the S/PGW-C VNF sends the UE bearer GTP TEIDs to the controller, which
translates these IDs into OpenFlow messages to control SDN switches in the
transport network [18].
such as ADSL routers and set-top boxes, which typically provide connectivity to
home terminals (e.g., smart phones, laptops, and networked TVs). This part of
access network is a critical segment on the end-to-end service delivery path that
can become a bottleneck in terms of transmission delay and data throughput.
The current wireline access network architecture is facing many challenges
for meeting the requirements of future network services. Deploying CPE devic-
es that can perform various network functions in a large number of residential
networks may incur significantly high capital cost. Complex configuration for
the wide variety of CPE devices also leads to high operation cost to the service
providers. In addition, the existing residential network architecture typically lacks
the necessary flexibility required for dynamic deployment of new services.
The NFV paradigm has been applied together with SDN technologies in
wireline access networks to address these challenges. An example is the virtual-
ized residential gateway (vRGW) framework presented in [19]. Rather than
deploying CPE devices with complex network functions (e.g., IP routing, NAT,
and firewall), the vRGW framework only keeps simple layers 1 and 2 functions
on the CPE devices in local and access networks. Network layer and upper-layer
functions of CPE devices are decoupled from layer 1/2 protocols and virtual-
ized as software VNF instances called vRGWs. These vRGWs are deployed in
data centers located in the carrier’s edge network. Each CPE device has a cor-
responding and dedicated vRGW instance that handles the network layer (and
upper layer) functions for that customer’s traffic. Virtualizing and consolidating
complex network functions to data centers in the edge network may greatly
reduce the carrier’s investment in CPE equipment, simplify device configuration
and network management, and enhance network flexibility for supporting ser-
vice evolution. vRGWs can be hosted in different locations in the edge network
for improving network performance. For example, considering network delay
performance, a service provider may deploy vRGWs in a metropolitan network
that is closer to end users.
The network connectivity between user CPE devices and the correspond-
ing vRGW instances is a key element in the vRGW framework. SDN technolo-
gies may be applied in wireline access networks for enabling service providers
to provision vRGWs in a flexible, scalable, and fine-grained manner. As shown in
Figure 4.12, the SDN controller of the access network sets up network connec-
tions between user CPE devices and vRGW servers, and also between vRGW
servers and the carrier’s core network. For upstream (from users to the network)
traffic, the controller configures SDN switches in the access network to forward
packets from users’ home devices to their vRGWs and then forward the pro-
cessed data from vRGWs into the core network to reach their final destinations.
Similarly, for downstream (from the network to users), packets destined for an
end user are forwarded to the corresponding vRGW in a data center first and
then forwarded to the user’s home device [19].
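The upstream and downstream forwarding logic amounts to installing a few symmetric rules per customer. The sketch below expresses them as abstract match/action entries rather than any particular controller's API; the port and host names are hypothetical.

def vrgw_rules(cpe_port: str, vrgw_host: str, core_port: str) -> list:
    """Abstract per-subscriber rules: CPE traffic via its vRGW to the core, and back."""
    upstream = [
        {"match": {"in_port": cpe_port},  "action": {"output": vrgw_host}},
        {"match": {"in_port": vrgw_host, "direction": "up"},
         "action": {"output": core_port}},
    ]
    downstream = [
        {"match": {"in_port": core_port, "subscriber": cpe_port},
         "action": {"output": vrgw_host}},
        {"match": {"in_port": vrgw_host, "direction": "down"},
         "action": {"output": cpe_port}},
    ]
    return upstream + downstream

for rule in vrgw_rules("cpe-17", "vrgw-17@dc-edge", "core-uplink"):
    print(rule)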
Figure 4.13 Possible locations of SDN resources, controllers, and applications in the NFV
architectural framework.
The first two cases are for physical SDN resources, respectively, in the
form of hardware-based switches/routers in network infrastructure and soft-
ware-based switches/routers implemented on the compute infrastructure of
NFVI. Cases c and d are for SDN data plane functions realized as virtual func-
tions in the tenant domain of the NFV architecture.
Possible locations of SDN controllers in the NFV architecture include the
following:
be realized as VNFs (case 4 for controller and case iii for application); therefore,
various flexible interconnections among virtualized SDN controllers and con-
trol applications may be enabled through NFV orchestration.
A key principle of virtualization in networking is to decouple the service
provisioning–oriented functionalities from the underlying network and com-
pute infrastructures. From such a virtualization perspective, the NFV architec-
ture can be viewed as comprising two layers—the infrastructure layer and the
service tenant layer. Based on such a layering structure of NFV, there are two
types of connectivity services in the NFV architecture. The first type of con-
nectivity services is provided by the NFVI to enable communications among
VNFs. Clearly, SDN plays a key role in meeting the requirement of elasticity
and virtualization for network infrastructure in order to provide such connec-
tivity services. This is the role of the infrastructure controller—the SDN control-
ler in the infrastructure layer. The second type of connectivity services is for
supporting network services provided at the service tenant layer and has to deal
with the operation and management of VNFs. Applying the SDN principle at
the service tenant layer equips the upper part of the NFV
architecture with a centralized controller, which can be referred to as the ten-
ant controller. The set of control actions of the tenant controller are related to
semantics of service functions and virtual tenant networks for service provision-
ing [21]. Figure 4.14 illustrates the two-layer controller structure in the NFV
architecture.
The SDN tenant controller, which itself might be realized as a VNF, in-
structs different deployed VNFs to perform packet processing functions.
The SDN infrastructure controller is responsible for setting up the required
network connections for supporting communication among those VNFs. Sepa-
rated control for the infrastructure and tenant layers in the NFV architecture
allows the infrastructure controller to provide the supporting underlay through
the virtualization layer, while the tenant controller provides an overlay com-
posed of tenant VNFs. A key objective of network virtualization is to enable
multiple virtual tenant networks, possibly with alternative network architectures
and protocols, to share a common infrastructure substrate. Therefore, multiple
independent tenant controllers may exist, one for each virtual tenant network
constructed upon NFVI through VNF management and orchestration.
Despite their different nature, the infrastructure controller and tenant
controller have to be coordinated in a consistent way to perform the expected
control actions and to dynamically adapt to changes in service conditions. For
example, instantiation of a new VNF performed by the tenant controller on the
service tenant layer must be supported by the infrastructure controller by al-
locating the corresponding network and compute infrastructure resources. The
two types of controllers might interact directly with each other through a new
reference point or indirectly via the current MANO stack in the NFV archi-
tecture. Coordination between the two types of controllers does not necessarily
imply direct control of one controller by the other. The MANO stack could
provide the tenant controller with abstract information about the infrastructure
layer and allow some degree of interaction in both directions. On the other
hand, using the MANO stack would require some extensions to the MANO re-
lated interfaces currently defined in the NFV architecture and could violate the
decoupling between MANO and network service semantics [21]. The specific
mechanism for coordination between the tenant and infrastructure controllers
in the NFV architecture is still an open topic for future research.
both physical and virtual resources. For example, the OpenFlow standard pro-
vides a fairly low-level abstraction of the data plane through flow tables in switches.
Traffic flows are identified in OpenFlow using addressing schemes of physi-
cal networks, such as MAC addresses, IP addresses, and port numbers, thus
lacking the ability to handle independent addressing schemes in virtual tenant
networks. Higher-level resource abstraction models and interface protocols for
supporting SDN control in the entire NFV architecture, including both virtual
functions and infrastructure resources, are still an open area for further study.
Recent research progress in this area has indicated that the forwarding and con-
trol element separation (ForCES) specification offers a promising basis for de-
veloping an abstraction model and associated control protocol for supporting
SDN in an NFV environment [22].
The ForCES specification developed by IETF was one of the original pro-
posals recommending decoupling of packet forwarding and network control.
The idea of ForCES is to provide simple hardware-based forwarding entities in
networking devices and software-based control elements. The ForCES frame-
work comprises two main types of elements—forwarding elements (FEs) that
perform packet forwarding operations and control elements (CEs) that con-
trol the operations of FEs. In addition, the framework also defines two helper
elements—the FE manager and CE manager—that assist the bootstrapping
phase. ForCES defines an object-oriented model realized by an XML schema
to abstract FE resources. A basic building block of the model is an object class
called logical functional block (LFB) that performs well-defined functions such
as receiving, processing, modifying, and forwarding packets. Multiple LFB in-
stances can be interconnected in a directed graph to form a service. In order to
allow CEs to control the operations of LFBs, each LFB class defines input and
output ports, operational parameters visible to a CE, capabilities advertised to
a CE, and events to which a CE can subscribe. The ForCES model supports
definition of new LFBs, each with its own customized set of parameters.
In the ForCES framework, the abstract model is complemented with a
protocol to enable interactions between CEs and FEs. An advantage of this
protocol is that it is model agnostic and thus can control and configure any FE
that is abstracted with the ForCES model. ForCES protocol provides all the
necessary functions for supporting robust and efficient control of the underly-
ing resources. In addition, the ForCES protocol comes with a concise
yet complete set of commands including SET, GET, DELETE, REPORT, and
REDIRECT [23].
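The LFB abstraction and the small command set can be mirrored by a simple object model. The toy LFB below (a rate limiter) and its command dispatcher are illustrative only; they do not reproduce the ForCES XML schema or protocol encoding, and the class and parameter names are assumptions.

class RateLimiterLFB:
    """A toy LFB: input/output ports, CE-visible parameters, and capabilities."""
    input_ports = ["in0"]
    output_ports = ["out0"]

    def __init__(self):
        self.params = {"rate_mbps": 100}          # operational parameters
        self.capabilities = {"max_rate_mbps": 1000}
        self.subscribers = []                      # CEs subscribed to events

    def handle(self, command: str, key: str = None, value=None):
        """Subset of ForCES-style commands, modeled as a simple dispatcher."""
        if command == "GET":
            return self.params[key]
        if command == "SET":
            self.params[key] = value
        elif command == "DELETE":
            self.params.pop(key, None)
        elif command == "REPORT":
            return dict(self.params)
        return None

lfb = RateLimiterLFB()
lfb.handle("SET", "rate_mbps", 250)       # a CE reconfigures the forwarding element
print(lfb.handle("GET", "rate_mbps"))     # 250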
The natural extensibility and expressibility of the ForCES abstract model and
protocol make ForCES a viable candidate for realizing the interface between
SDN controllers and data plane resources in an NFV environment. A proof-
of-concept (PoC) prototype for evaluating applicability of ForCES to support
SDN-enabled NFV has been presented in the ETSI report on SDN usage in
network intelligence resides. This layer is able to create middleware that allows
end users to consume the enabled services using the web service interface of
each resource. All interlayer communications in the OpenNaaS framework are
performed through either the OSGi service interface or RESTful web service
interface [24].
The OpenNaaS framework may be applied to provide resource abstrac-
tion for supporting SDN-enabled control in the NFV architecture. Both physi-
cal network infrastructure and virtual network functions in NFV can be repre-
sented as resources in the OpenNaaS framework, which then can be accessed
and configured by an SDN controller through an OSGi or RESTful service
interface. In addition, the OpenNaaS platform provides a set of common func-
tions for controlling and managing resources, which can be inherited by a re-
source once it is defined. Therefore, OpenNaaS may greatly facilitate SDN-
enabled resource control and management in an NFV environment.
required information (e.g., IP version, the source and destination addresses, and
the arrival switch port); it then triggers a routing request to the VRF module.
Once the controller receives a routing function response back from the VRF
module, it sends a packet-out message to the switch and configures the cor-
responding flow table entries in the switches involved in packet forwarding for
the flow.
The VRF module receives routing requests from OpenFlow control-
lers, performs path computation to find a feasible path for each request,
and then responds to the requests with the routing results. The VRF module in
this system is implemented as a resource in the OpenNaaS framework. The
module maintains a global view of the routing states using a set of capabilities
and an associated model provided by the OpenNaaS framework. The model
follows a routing table-like structure but contains two tables, respectively, for
IPv4 and IPv6. The basic capabilities defined in VRF resource provide path
finding functions for specific input parameters and for management features
such as inserting or deleting a given route and retrieving information corre-
sponding to the whole routing table [25].
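The VRF resource's table structure and basic capabilities translate naturally into a small class. The longest-prefix lookup below stands in for the path-finding capability and is an illustrative sketch, not the OpenNaaS implementation.

from ipaddress import ip_address, ip_network

class VRFResource:
    """Routing-table-like model with separate IPv4 and IPv6 tables."""
    def __init__(self):
        self.tables = {4: {}, 6: {}}        # prefix -> next hop

    def insert_route(self, prefix: str, next_hop: str):
        net = ip_network(prefix)
        self.tables[net.version][net] = next_hop

    def delete_route(self, prefix: str):
        net = ip_network(prefix)
        self.tables[net.version].pop(net, None)

    def find_path(self, destination: str):
        """Longest-prefix match over the table of the matching IP version."""
        addr = ip_address(destination)
        matches = [(net, hop) for net, hop in self.tables[addr.version].items()
                   if addr in net]
        return max(matches, key=lambda m: m[0].prefixlen)[1] if matches else None

vrf = VRFResource()
vrf.insert_route("10.0.0.0/8", "switch-2")
vrf.insert_route("10.1.0.0/16", "switch-5")
print(vrf.find_path("10.1.2.3"))   # switch-5: the more specific route wins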
The communication protocol between OpenFlow controllers and the
VRF modules is based on the RESTful interface provided by the OpenNaaS
framework. The VRF resource model and capabilities can be accessed through
this interface. This RESTful interface can be called by an OpenFlow controller
in order to obtain feasible route information from the VRF module. Each con-
troller is connected to the VRF module through a VPN link. Controllers and
switches are connected using a secure channel that follows OpenFlow protocol.
The VRF implementation over an OpenFlow network infrastructure en-
ables separation of tenant-oriented control, such as the virtual routing func-
tion, from infrastructure-oriented control performed at OpenFlow controllers.
counters and timers). Therefore, it is possible for SDN data plane devices (e.g.,
OpenFlow-enabled switches) to perform some relatively simple data processing
functions. The main idea of the proposed approach to using SDN for support-
ing VNF functionalities is to keep simple data processing in SDN data plane as
much as possible and only forward data traffic to VNF servers for more com-
plex processing when it is necessary.
This proposed approach to combining SDN and NFV technologies has
an impact on the way a VNF is designed. It requires decoupling between two
components of a VNF design: the stateful function component that needs to be
performed at a server with more computing capacity and the stateless function
component that can be processed on SDN data plane. The stateful component
is still implemented as a VNF running on virtual compute resources. The state-
less component makes use of SDN data plane devices to perform traffic process-
ing efficiently. The interface between these two components must be clearly
defined on both data plane and control plane. The control plane interface is
used for configuring and updating the behaviors of the stateless data path pro-
cessing component. The data plane interface is used when some portion of the
traffic needs stateful processing and thus must be redirected to a server where
the stateful function component is hosted [26].
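This division of labor can be pictured as follows: a stateless matcher runs in the data plane and only redirects unknown flows to the stateful component, which keeps per-flow state and programs new forwarding rules back through the control-plane interface. All names in this sketch are illustrative assumptions.

class StatelessDataPath:
    """Runs on the switch: match against installed rules, keeps no flow state."""
    def __init__(self, stateful):
        self.rules = {}                              # flow key -> action
        self.stateful = stateful

    def process(self, flow_key: str):
        if flow_key in self.rules:
            return self.rules[flow_key]              # fast path, stateless
        return self.stateful.process(flow_key, self) # redirect for stateful work

class StatefulFunction:
    """Runs on a server: keeps per-flow state (e.g., an authentication session)."""
    def __init__(self):
        self.sessions = {}

    def process(self, flow_key: str, datapath):
        self.sessions[flow_key] = "authenticated"
        datapath.rules[flow_key] = "forward"         # control-plane interface: push rule
        return "forward"

dp = StatelessDataPath(StatefulFunction())
print(dp.process("user-42"))    # first packet: handled by the stateful server
print(dp.process("user-42"))    # later packets: handled entirely in the data plane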
The flow-based network access control (FlowNAC) system is a represen-
tative example of combining SDN and NFV technologies by following the
aforementioned approach. The architecture of the FlowNAC system is shown
in Figure 4.18. In this system, the FlowNAC function design is separated into
two blocks: the authentication and authorization (AA) block, which keeps the
state of the currently executed AA process; and the access control enforcing
(ACE) block, which performs access control without requiring any state infor-
mation. The AA block relies on computing resources for complex stateful data
Figure 4.18 Combining SDN and NFV technologies for flow-based network access control.
Figure 4.19 A two-dimensional model for layer and plane abstraction in network architec-
ture [27].
lack a clear abstraction on the layer dimension. For example, signaling systems
(e.g., SS7) for network control are logically separated from the transport net-
works that are under their control. In the intelligent network architecture, the
service control function and service management function elements are sepa-
rated from the service switching function element that is responsible for data
transmission and switching. On the other hand, the IP-based Internet archi-
tecture shows a clear layer-dimension abstraction through the TCP/IP layering
model but lacks explicitly defined abstraction in the plane dimension. Packet
forwarding, routing, and network management functions are all specified in the
same set of IP protocols. Wide adoption of IP-based architecture has caused the
layer-dimension abstraction to dominate in current network designs.
Rapid development of the wide spectrum of Internet services calls for
much more flexible network control and management, thus bringing in chal-
lenges to the tightly coupled control and forwarding functions in the current
Internet architecture. SDN essentially enables the plane-dimension abstraction
by separating data forwarding and network control/management. Although the
TCP/IP stack provides layer-dimension abstraction, the interfaces between lay-
ers are not defined flexibly enough to meet the requirement of future network
services. A key obstacle lies in the unnecessary coupling between service-ori-
ented functions and transport-oriented infrastructure, which prevents network
designs from fully exploiting the benefits of layer-dimension abstraction. The
network virtualization notion advocates decoupling service provisioning from
network infrastructure, and the NFV architecture attempts to leverage standard
IT technologies to realize such decoupling through simple yet flexible abstrac-
tion of underlying hardware infrastructure.
Although the TCP/IP layer stack is used in Figure 4.19 to show the con-
cept of layer-dimension abstraction, the two-dimensional abstraction model is
applicable to network architecture with alternative layers. The vertical decou-
pling highlighted between the network interface layer and the Internet layer in
this figure is also only for illustration. In fact, position of virtualization in the
layer dimension is a design option for virtualization-based network architec-
ture. Similarly, control and management can be considered either as one plane
or two separated planes in the plane dimension.
From the layer-plane abstraction model, we can see that the key prin-
ciples of SDN and NFV are both based on abstraction, but with emphasis on
the plane and layer dimensions, respectively. These two abstraction dimensions
are orthogonal; that is, network architecture may have abstraction on one di-
mension but not on the other. Therefore, SDN and NFV in principle are in-
dependent—NFV may be realized with or without SDN and vice versa. On
the other hand, the challenging requirements for service provisioning in fu-
ture networks demand abstraction on both dimensions in order to fully exploit
their advantages. Therefore, integrating the software-defined principle and the
Figure 4.20 A framework for integrating SDN and NFV principles [27].
most of the existing approaches focus on various aspects of the key principles
of SDN and NFV. In order to provide a high-level holistic vision of integrat-
ing SDN and NFV principles in unified network architecture, an architectural
framework called software-defined network virtualization (SDNV) was recently
proposed in [27]. This framework may offer useful guidelines for synthesizing
research and industry development efforts made from various aspects toward
the common objective of integrating SDN and NFV for service provisioning
in future networks.
Figure 4.21 depicts the SDNV framework, which follows the two-dimen-
sional model shown in Figure 4.19. The layer dimension of the framework
comprises the infrastructure layer, virtualization layer, and service layer. The
plane dimension of the framework consists of the data plane and the control/
management plane.
The infrastructure layer in the SDNV framework comprises physical re-
sources of network and compute infrastructures. This layer may consist of mul-
tiple autonomous domains owned and operated by different InPs. The virtual-
ization layer realizes infrastructure abstraction and provides mapping between
physical and virtual resources. All functionalities for network service provision-
ing reside on the service layer. This layer utilizes the virtual resources provided
by the virtualization layer to realize virtual service functions (VSFs), which in-
clude both virtual network functions (VNFs) and virtual compute functions
(VCFs). The service layer is responsible for constructing virtual networks (VNs)
by discovering and orchestrating appropriate VSFs.
The virtualization layer decouples the control/management functions
for service provisioning from the functions for infrastructure controlling and
provides a standard interface through which service-oriented control/manage-
ment functions may interact with infrastructure controllers. Such decoupling
on the control/management plane enables differentiation between control/
management functions related to service provisioning and those associated with
transport infrastructures and thus allows them to be provided, maintained, and
developed independently following their own evolutionary paths.
In the SDNV framework, the data plane and control/management plane
are separated on both the infrastructure layer and the service layer. The con-
trol/management plane on the infrastructure layer consists of controllers for
network and compute infrastructures. Heterogeneous SDN controllers may be
applied in different infrastructure domains, which are referred to as infrastruc-
ture domain controllers (IDCs). The control/management plane on the service
layer is responsible for life cycle management of VSFs and VNs, including con-
struction, instantiation, maintenance, and termination of VSFs/VNs. VNs are
constructed by composing appropriate VSFs for meeting service requirements.
Each VN has its own controller (called VNC) that controls all the data plane
VSFs involved in this VN, just as an SDN controller controls all data plane
devices in a physical network.
It is worth noting that although control and management are contained
in the same plane in the SDNV framework, these two types of functionalities
may be separated in network designs and implementations. Control and man-
agement functions focus on different stages in the entire life cycles of VSFs and
VNs. Management functions are responsible for creating, deploying, scaling,
migrating, and terminating VSFs and VNs. Control functions mainly focus
on maintaining the expected operation behaviors of VSFs/VNs during the ac-
tive stage of their lifecycles (e.g., routing, scheduling, and signaling functions
for packet forwarding). Management actions usually have a longer time scale
compared to control events, while control functions typically have stricter re-
quirements in terms of processing capacity and latency. Therefore, in some net-
work designs, control and management functionalities are separated into two
separate planes with a standard interface in between.
Integration of the key principles of SDN and NFV in the SDNV frame-
work requires interfaces on the layer and plane dimensions, respectively. On
the layer dimension, the virtualization layer provides an important interface for
decoupling the infrastructure layer and the service layer. The interface between
applications and the service layer allows users to access and configure network
services. On the plane dimension, the key interface is between the data and
control/management planes in the SDNV framework.
programmable VNC that controls all the data plane VSFs involved in this VN
for service provisioning.
The SDNV framework naturally supports multiprovider service scenarios
in which diverse virtual networks are created upon a physical substrate con-
sisting of heterogeneous network and compute infrastructures in multiple do-
mains. Therefore, SDNV embraces the trend of unified network-cloud service
provisioning. VSFs in SDNV may provide service functions virtualized from
networking systems (VNFs) as well as from cloud resources (VCFs). End-to-
end services delivered by VNs through orchestrating VNFs and VCFs are es-
sentially composite network-cloud services. Such a converged service ecosystem
may introduce new functional roles, such as suppliers of VSFs and providers
of composite network-cloud services, and trigger innovations in new service
development.
The objective of the SDNV framework is not to replace the current SDN
and NFV architecture but to provide an architectural framework showing how
these two paradigms may be integrated together for future networking. On
the other hand, SDNV does not simply put the current architectures of SDN and
NFV together but combines the key insights of both paradigms into unified
network architecture and show how SDN and NFV may cooperate inside such
architecture. This framework provides useful guidelines to synthesize research
from various aspects toward the common objective of integrating SDN and
NFV for supporting service provisioning in future networks.
Figure 4.22 An SDNV-based platform for interdomain network service delivery [27].
The VNF of the SDN controller for each infrastructure domain registers all the
VNF service components in the domain to the VNF/VN management mod-
ule on the service layer. During the registration process, the domain controller
VNF publishes a description about the infrastructure services provided by its
domain at the VNF/VN management module.
The procedure for creating a virtual network through VNF orchestration
in the SDP is illustrated in Figure 4.23. Upon receiving a service request, the
service orchestration module works with the VNF/VN management module
to select and compose the appropriate VNFs to form a forwarding graph that
meets the service requirement. Then, the VNF/VN management module in-
stantiates a VN to realize this forwarding graph. The controller of this VN is
also realized through composition of a set of controller VNFs, each of which
virtualizes the controller in an infrastructure domain utilized by this VN. In this
way, the VN controller essentially orchestrates the VNFs hosted by SDN con-
trollers in heterogeneous domains to control end-to-end service delivery. With
such a service platform, the uniform abstraction provided by the virtualization
layer makes heterogeneous network domains transparent to service manage-
ment, which may greatly facilitate interdomain service delivery in SDN.
Multiple virtual networks may be constructed upon this platform for
meeting the diverse service requirements of different end users. Each of the
virtual tenant networks has its own forwarding graph realized by a set of VNFs,
and all the VNFs in a virtual network are controlled by a single VNC. All the
virtual networks share the service orchestration module and VNF/VN man-
agement module in the SDP framework, which are responsible for creating,
instantiating, scaling, and terminating virtual networks.
4.6 Conclusion
In this chapter, we discussed the relationship between software-defined net-
working and network virtualization. Although SDN and NFV are two innova-
tive networking paradigms that were initially developed independently, they
share many common goals and follow some similar technical principles for
achieving such goals. Evolution of both paradigms has shown that SDN and
NFV are synergistic and complementary to each other. Therefore, integrating
SDN and NFV into unified architecture for future networking to fully exploit
the advantages of both paradigms has formed an active research area that at-
tracts interest from both academia and industry.
Encouraging progress has been made toward combining SDN and NFV
in future networks. Both hypervisor- and container-based virtualization tech-
nologies have been employed to enable network virtualization in SDN, which
allows multitenant virtual networks to be constructed upon a shared SDN infrastructure.
References
[1] Casado, M., T. Koponen, S. Shenker, and A. Tootoonchian, “Fabric: A Retrospective on Evolving SDN,” Proceedings of the ACM 2012 Workshop on Hot Topics in Software-Defined Networking (HotSDN’12), August 2012.
[2] Raghavan, B., T. Koponen, A. Ghodsi, M. Casado, S. Ratnasamy, et al., “Software-De-
fined Internet Architecture: Decoupling Architecture from Infrastructure,” Proceedings of
the 11th ACM Workshop on Hot Topics on Networks (Hotnets’12), October 2012, pp. 43–48.
[3] Feamster, N., J. Rexford, and E. Zegura, “The Road to SDN,” ACM Queue, Vol. 11, No. 12, Dec. 2013, pp. 1–12.
[4] Blenk, A., A. Basta, M. Reisslein, and W. Kellerer, “Survey on Network Virtualization Hy-
pervisors for Software Defined Networking,” IEEE Communications Surveys and Tutorials,
Vol. 18, No. 1, 2016, pp. 655–685.
[5] Sherwood, R., G. Gibb, K.-K. Yap, G. Appenzeller, M. Casado, et al., “FlowVisor: A
Network Virtualization Layer,” OpenFlow Switch Consortium Technical Report, 2009.
[6] Min, S., S. Kim, J. Lee, and B. Kim, “Implementation of an OpenFlow Network Virtual-
ization for Multi-Controller Environment,” Proceedings of the 2012 International Confer-
ence on Advanced Communication Technologies (ICACT2012), Feb. 2012.
[7] Salvadori, E., R. D. Corin, A. Broglio, and M. Gerola, “Generalizing Virtual Network
Topologies in OpenFlow-Based Networks,” Proceedings of the 2011 IEEE Global Com-
munication Conference (GLOBECOM2011), Dec. 2011.
[8] Drutskoy, D., E. Keller and J. Rexford, “Scalable Network Virtualization in Software-
Defined Networks,” IEEE Internet Computing Magazine, Vol. 17, No. 2, Feb. 2013, pp.
20–27.
[9] Huang, S., J. Griffioen, and K. L. Calvert, “Network Hypervisors: Enhancing SDN In-
frastructure,” Elsevier Computer Communications Journal, Vol. 46, No. 6, June 2014, pp.
87–96.
[10] Munoz, R., R. Vilalta, R. Casellas, R. Martinez, T. Szykowiec, et al., “Integrated SDN/
NFV Management and Orchestration Architecture for Dynamic Deployment of Virtual
SDN Control Interfaces for Virtual Tenant Networks,” Journal of Optical Communication
Networks, Vol. 7, No. 11, Nov. 2015, pp. B62–B70.
[11] Vilalta, R., R. Muñoz, R. Casellas, R. Martinez, F. Francois, et al., “Network Virtualization
Controller for Abstraction and Control of OpenFlow-Enabled Multi-Tenant Multi-
technology Transport Networks,” Proceedings of the 2015 Optical Fiber Communication
Conference (OFC2015), March 2015.
[12] Zhang, Y., N. Beheshti, L. Beliveau, G. Lefebvre, R. Manghirmalani, et al., “StEERING:
A Software-Defined Networking for Inline Service Chaining,” Proceedings of the 21st IEEE
International Conference on Network Protocols (ICNP2013), Oct. 2013.
[13] Blendi, J., J. Buckert, N. Leymann, G. Schyguda, and D. Hausheer, “Software-Defined
Network Service Chaining,” Proceedings of the 2014 Third European Workshop on Software-
Defined Networks (EWSDN2014), Sept. 2014, pp. 109–114.
[14] Gember-Jacobson, A., R. Viswanathan, C. Parkash, R. Grandl, J. Khalid, et al., “OpenNF:
Enabling Innovation in Network Function Control,” Proceedings of the 2014 Conference of
ACM Special Interest Group on Data Communication (SIGCOMM2014), August 2014.
[15] China Mobile, “C-RAN: A Road Toward Green Radio Access Network,” white paper,
2011.
[16] Liu, J., T. Zhao, S. Zhou, Y. Cheng, and Z. Niu, “CONCERT: A Cloud-Based Architecture
for Next Generation Cellular Systems,” IEEE Wireless Communications Magazine, Vol. 21,
No. 6, June 2014, pp. 14–22.
[17] Nguyen, V-G., T-X. Do, and Y. Kim, “SDN and Virtualization-Based LTE Mobile Network
Architecture: A Comprehensive Survey,” Springer Wireless Personal Communications
Journal, Vol. 86, No. 3, Feb. 2016, pp. 1401–1438.
5
Virtualized Network Services
Mehmet Toy
5.1 Introduction
In recent years, the types of user devices and applications for cloud services have grown rapidly. High-speed personal devices such as phones, laptops, and tablets, and high-definition (HD) IP video and HD IPTV applications are driving huge bandwidth demand in networks. Applications such as storage networking, video streaming, collaborative computing, online gaming, and video sharing are driving bandwidth demand in networks as well as demand on the resources of the data centers connected to these networks. Users prefer services that are on demand, scalable, survivable, and secure, with usage-based billing. The concepts of cloud computing, cloud networking, and cloud services are expected to help service providers meet these demands, quickly create services, and utilize their resources effectively.
Cloud computing technologies are emerging as infrastructure services for provisioning computing and storage resources on demand in a simple and uniform way. This involves multiprovider and multidomain resources and integration with legacy services and infrastructures. Current cloud technology development is targeted at developing intercloud models, architectures, and integration tools that could allow integrating cloud-based infrastructure services into existing enterprise and campus infrastructures. These developments also provide a common, interoperable environment for moving existing infrastructures and infrastructure services to a virtualized cloud environment.
Cloud-based virtualization allows for easy upgrade and migration of enterprise applications, including entire IT infrastructure segments, which brings
1. OCC merged with MEF. Therefore, all cloud services specifications are under MEF.
Table 5.1
Standards Organizations for Cloud
ARTS: Association for Retail Technology Standards
ATIS: Alliance for Telecommunications Industry Standards
CADF: Cloud Auditing Data Federation Working Group
CCIF: Cloud Computing Interoperability Forum
CSA: Cloud Security Alliance
CSCC: Cloud Standards Customer Council
CSC: Cloud Standards Coordination
DMTF: Distributed Management Task Force
ETSI: European Telecommunications Standards Institute
GICTF: Global InterCloud Technology Forum
IEEE Intercloud Working Group (IEEE P2302)
IETF: Internet Engineering Task Force
ISO: International Organization for Standardization
ISO/IEC JTC 1/SC 38 Cloud Computing and Distributed Platforms
itSMF: IT Service Management Forum
ITU-T Focus Group on Cloud Computing (FG-Cloud)
ITU-T SG13: Future networks including cloud computing, mobile and next-generation networks
NIST: National Institute of Standards and Technology
Cloud Computing Target Business Use Cases Working Group
Cloud Computing Reference Architecture and Taxonomy Working Group
Cloud Computing Standards Roadmap Working Group
Cloud Computing SAJACC Working Group
Cloud Computing Security Working Group
Working Definition of Cloud Computing
Standards Acceleration to Jumpstart Adoption of Cloud Computing (SAJACC)
JCA: Joint Coordination Activity on Cloud Computing
ODCA: Open Data Center Alliance
OGF: Open Grid Forum
SNIA: Storage Network Industry Association
TC CLOUD
TM Forum: Telecommunications Management Forum
OASIS: Organization for the Advancement of Structured Information Standards
OASIS Cloud-Specific or Extended Technical Committees (TC)
OASIS Cloud Application Management for Platforms (CAMP) TC
OASIS Identity in the Cloud (IDCloud) TC
OASIS Symptoms Automation Framework (SAF) TC
OASIS Topology and Orchestration Specification for Cloud Applications (TOSCA) TC
OASIS Cloud Authorization (CloudAuthZ) TC
OASIS Public Administration Cloud Requirements (PACR) TC
OCC: Open Cloud Connect
OCC: Open Cloud Consortium
OCCI Working Group: Open Cloud Computing Interface Working Group
The cSP may own the cP and cloud carrier (cC) facilities (Figure 5.3).
When the cP and the cC are two independent entities belonging to two differ-
ent operators as depicted in Figure 5.3, the standards interface between them
is called cloud carrier cloud provider interface (cCcPI). In this case, a cSC for
cloud services can be terminated at either cCcPI or cSI.
It is also possible for two or more cSPs to be involved in providing a cloud
service to a cloud consumer as depicted in Figure 5.4, where two cSPs interface
to each other via a standards interface called cloud service provider cloud service
provider interface (cSPcSPI). In this scenario, only one of the cSPs needs to
interface to the end user, coordinate resources, and provide a bill. The cSP that
does not interface to the end user is called cloud service operator (cSO).
The cSPs may employ a gateway, called a cloud service gateway (cSGW), to connect to each other (Figure 5.7). The cSGW might provide connection multiplexing, among other features required by the cSPcSPI.
A cSP can be private or public. There could be cases where private and public cSPs collectively provide a cloud service to a user, as depicted in Figure 5.8.
Figure 5.2 cSUI functionalities distributed between customer edge (CE) and cSP as cSUI-C and cSUI-P.
The cloud services architectures described here are the basis for interoperability among vendors and service providers for cloud services and applications. They are also expected to be the basis for a cloud service exchange gateway between well-known cloud service providers. Further details of the architecture can be found in [1, 10].
Figure 5.3 Virtual resources (i.e., VMs) and physical resources (i.e., computing and storage resources), that belong to one operator, providing cloud applications.
In this service, it is possible that just the computing applications together with
computing resources are based on cloud resources and everything else is not. A
user may use noncloud-based NaaS or cloud-based NaaS to access cloud com-
puting applications. The cSP coordinates all resources acting as the single point
of contact and provides a bill to the cloud user.
Software as a service (SaaS), platform as a service (PaaS), and infrastruc-
ture as a service (IaaS) are among the well-known cloud services in the industry.
Figure 5.7 Two cloud service providers collectively providing cloud services.
In fact, they are cloud applications provided by well-known cloud providers such as Amazon.
SaaS is an application running on a cloud infrastructure where the con-
sumer does not manage or control the underlying cloud infrastructure includ-
ing network, servers, operating systems, storage, or even individual applica-
tion capabilities, with the possible exception of limited user-specific application
configuration settings. SaaS examples include Gmail from Google, Microsoft
“live” offerings, and salesforce.com.
PaaS is the deployment onto the cloud infrastructure of consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying
cloud infrastructure including network, servers, operating systems, or storage
but has control over the deployed applications and possibly application hosting
environment configurations.
PaaS provides the capability to build or deploy applications on top of
IaaS. Typically, a cloud computing provider offers multiple application compo-
nents that align with specific development models and programming tools. For
the most part, PaaS offerings are built upon either a Microsoft-based stack (i.e.,
Windows, .NET, IIS, SQL Server, and so on) or an open source–based stack
(i.e., the “LAMP” stack containing Linux, Apache, MySQL, and PHP).
IaaS provisions processing, storage, networks, and other fundamental computing resources on which the consumer is able to deploy and run arbitrary
software. The software can include operating systems and applications. The
consumer does not manage or control the underlying cloud infrastructure, but
Figure 5.8 Private and public cSPs.
has control over operating systems, storage, deployed applications, and possibly
limited control of selected networking components with firewalls.
This is the most basic cloud application model, aligning the on-demand
resources of the cloud with tactical IT needs. IaaS is similar to managed services
offerings such as hosting services. The primary difference is that cloud resources
are virtual rather than physical and can be consumed on an as-needed basis.
Enterprise consumers pay for virtual machines (VMs), storage capacity, and
network bandwidth for a variable amount of time rather than servers, storage
arrays, and switches/routers on a contractual basis. IaaS prices are based upon
IaaS resource consumption and the duration of use.
Cloud computing services can be deployed in a number of ways depending upon factors such as security requirements, IT skills, and network access [20].
The OCC grouped services under network as a service (NaaS), IaaS, PaaS,
SaaS, communications as a service (CaaS), and security as a service (SECaaS)
for now. There is no hierarchy in these service offerings.
There is no consensus among various standards developing organizations
(SDOs) and cloud service providers regarding which application belongs to
which service category. For example:
The cSP negotiates the contract and monitors its realization in real time.
The monitoring encompasses the SLO contract definition, the SLO nego-
tiation, the SLO monitoring, and the SLO enforcement. The contract may
include price reductions and discounts that are applied when a cSP fails to meet the desired service parameters or does not fulfill an agreement. Resource usage may be tracked to align it with the billing rules agreed upon in the SLOs.
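As a concrete illustration of such price reductions, the short Python function below computes a hypothetical service credit from a measured availability; the thresholds and discount tiers are invented for the example and would in practice come from the negotiated SLOs.

```python
# Hypothetical SLO credit calculation; thresholds and tiers are invented
# for illustration and would come from the negotiated SLOs in practice.
def monthly_credit(measured_availability, monthly_charge):
    """Return the service credit owed when availability misses the SLO."""
    slo_tiers = [          # (minimum availability, credit as fraction of charge)
        (0.999, 0.00),     # SLO met: no credit
        (0.995, 0.10),
        (0.990, 0.25),
        (0.000, 0.50),
    ]
    for threshold, credit in slo_tiers:
        if measured_availability >= threshold:
            return monthly_charge * credit

print(monthly_credit(0.9932, 1000.0))   # -> 250.0 credit on a $1,000 charge
```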
The cSP provides a set of security services and mechanisms (e.g., IP address
filtering, firewall, message integrity and confidentiality, private key encryption,
dynamic session key encryption, user authentication, and service certification)
to protect cloud services data and their operating environment from unauthor-
ized use, policy/operation violation, and intrusion. In addition, the cSP may
offer the following:
5.4.1 NaaS
Network as a service (NaaS) delivers assured, dynamic connectivity services via
virtual or physical and virtual service endpoints orchestrated over multiple op-
erators’ networks. Such services will enable users, applications, and systems to
create, modify, suspend/resume, and terminate connectivity services through
standardized application programming interfaces (APIs). These services are as-
sured from both performance and security perspectives.
NaaS is expected to support on-demand network configuration, secure and QoS-guaranteed connectivity, and compatibility with heterogeneous networks. It is the responsibility of the NaaS provider, the cSP, to maintain and manage the network resources. It is possible that the cSP does not own the network resources but only provides coordination. NaaS offers the network as a utility.
Possible NaaS services are:
NaaS can provide methods for users to provision cSP resources in a cloud
virtual network that the user defines. The users have complete control over their
virtual networking environments, including selection of user owned IP address
ranges, creation of subnets, and configuration of route tables and network gate-
ways. This would enable users to create a VPN connection between the users’
corporate datacenter and their cloud virtual network and leverage the cSP as an
extension of the corporate datacenter. In the context of DR, users can use this
virtual network to extend their existing network topology to the cloud.
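As one possible illustration of this kind of user-driven provisioning, the sketch below uses the openstacksdk Python client to create a tenant-defined network, subnet, and router; the cloud name, resource names, and CIDR are placeholders, and a given cSP's NaaS API may expose these operations differently.

```python
# One possible way to provision a user-defined cloud virtual network with
# the openstacksdk client; "mycloud" and all names/CIDRs are placeholders,
# and a cSP's NaaS API may expose this differently.
import openstack

conn = openstack.connect(cloud="mycloud")

# User-selected address range and subnet for the virtual network.
net = conn.network.create_network(name="tenant-vnet")
subnet = conn.network.create_subnet(
    network_id=net.id, name="tenant-subnet",
    ip_version=4, cidr="10.10.0.0/24",
)

# Route table / gateway configuration is modeled here as a router on which
# a VPN connection to the corporate datacenter could terminate.
router = conn.network.create_router(name="tenant-router")
conn.network.add_interface_to_router(router, subnet_id=subnet.id)
```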
5.4.2 IaaS
The capability provided to the consumer [1] via IaaS is to provision process-
ing, storage, networks, and other fundamental computing resources where the
consumer is able to deploy and run arbitrary software, which can include oper-
ating systems and applications. The consumer does not manage or control the
underlying cloud infrastructure but has control over operating systems, storage,
deployed applications, and possibly limited control of select networking com-
ponents (e.g., host firewalls).
The IaaS cloud provider (cP) configures, deploys, and maintains computing, storage, and networking resources for the user. The IaaS cP also provides the capability for users to use and monitor computing, storage, and networking resources so that they are able to deploy and run arbitrary software.
A customer portal could be provided to access the infrastructure. An API
is needed to reduce human intervention for system management and total cost
of operation.
5.4.2.1 Cloud Computing
Cloud computing is the ability to provision computing and storage resources on demand, specifically storage and virtual servers that IT can access on demand. IT can create virtual datacenters from commodity servers, enabling IT to stitch together memory, I/O, storage, and computational capacity as a virtualized resource pool available over the network.
Servers are the key elements of cloud computing. They can be:
• Bare metal servers with single processor, dual processors, or quad proces-
sors;
• Mass storage servers storing large amounts of data in solid state disks,
hard disks, optical disks, or tapes;
• Virtual servers deployed on multitenant or single-tenant hosts as local
or SAN storage. Portable storage can be added. Payment could be by
the hour or month. Integration and migration between bare metal and
• Memory to provide rapid access to data such as file caches, object cach-
es, in-memory databases, and RAM disks.
• Message queues to provide temporary durable storage for data sent asyn-
chronously between computer systems or application components.
• Storage area networks (SAN), which are block devices (virtual disk logi-
cal unit numbers) on dedicated SANs providing the highest level of disk
performance and durability for both business-critical file data and da-
tabase storage. They can be used like physical hard drives, typically by
formatting them with the file system of user choice and using the file
I/O interface provided by the instance operating system.
• Direct-attached storage (DAS), which are local hard disk drives or arrays
residing in each server, providing higher performance than a SAN but
lower durability for temporary and persistent files, database storage, and
operating system (OS) boot storage than a SAN.
• Network attached storage (NAS) providing a file-level interface to stor-
age that can be shared across multiple systems. NAS tends to be slower
than either SAN or DAS.
• Databases such as a traditional SQL relational database, a NoSQL non-
relational database, or a data warehouse where the underlying database
storage typically resides on SAN or DAS devices, or in some cases in
memory.
• Backup and archive for data retained for backup and archival purposes,
which are typically stored on nondisk media such as tapes or optical me-
dia, often stored offsite in remote, secure locations for disaster recovery.
There could be a limit on single archive and total amount of data in
gigabytes, terabytes, or petabytes.
• Durable2 reduced availability (DRA) storage buckets [21] can be intro-
duced to have lower costs and lower availability, but are designed to have
the same durability as simple storage buckets.
DRA storage is appropriate for applications that are particularly cost sen-
sitive or for which some unavailability is acceptable. For example:
• Data backup where high durability is critical, but the highest availability
is not required;
• Batch jobs to recover from unavailable data (e.g., by keeping track of
the last object that was processed and resuming from that point upon
restarting).
Cloud storage allows users to enable DRA at the bucket level. User can
specify DRA storage at the time of bucket creation.
2. Durability measures the length of a product’s life. When the product can be repaired, estimat-
ing durability is more complicated. The item will be used until it is no longer economical to
operate it. This happens when the repair rate and the associated costs increase significantly.
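As a minimal sketch of the bucket-level DRA selection described above, the snippet below assumes Google Cloud Storage's Python client and its legacy DURABLE_REDUCED_AVAILABILITY storage class; the bucket name and location are placeholders, and other providers expose reduced-availability tiers differently.

```python
# A minimal sketch, assuming the google-cloud-storage client and its legacy
# DURABLE_REDUCED_AVAILABILITY class; bucket name and location are placeholders.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("example-dra-backups")
bucket.storage_class = "DURABLE_REDUCED_AVAILABILITY"  # chosen at creation time
client.create_bucket(bucket, location="US")
```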
• Basic:
• Preconfigured database software;
• Managed by customer;
• Full administrative access.
• Managed:
• Basic management by cP;
• Automated backup;
• Point-in-time recovery available;
• Administrative access.
• Premium managed:
• Managed offering previous services;
• Optional data guard or active data guard;
• Pluggable database utility services;
• Flexible upgrade options.
• Recovery time objective (RTO), which is the duration of time and the
service level to which a business process must be restored after a disaster
(or disruption) to avoid unacceptable consequences associated with a
break in business continuity.
• Recovery point objective (RPO), which describes the acceptable amount of data loss measured in time. For example, if the RPO is 1 hour and a disaster occurs at noon, the recovered system must contain all data up to at least 11:00 AM; at most the last hour of data may be lost (a short numerical check follows).
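The check below makes the RPO/RTO arithmetic concrete using the example above (disaster at noon, RPO of 1 hour); the timestamps and targets are invented for illustration.

```python
# Illustrative check of a backup schedule against RPO/RTO targets; the
# numbers match the example in the text (disaster at noon, RPO of 1 hour).
from datetime import datetime, timedelta

rpo = timedelta(hours=1)                      # acceptable data loss
rto = timedelta(hours=4)                      # acceptable restoration time

disaster = datetime(2017, 5, 1, 12, 0)        # disaster occurs at noon
last_backup = datetime(2017, 5, 1, 11, 0)     # most recent recovery point
service_restored = datetime(2017, 5, 1, 15, 30)

data_loss = disaster - last_backup            # 1:00:00 -> RPO just met
downtime = service_restored - disaster        # 3:30:00 -> within RTO

print("RPO met:", data_loss <= rpo, "| RTO met:", downtime <= rto)
```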
In the preparation phase of DR, data migration and durable storage need
to be considered. When reacting to a disaster, it is important to either quickly
commission compute resources to run the user system in the cloud provider
domain or to orchestrate the failover to already running resources in cloud
provider domain.
The cloud user can choose the most appropriate location for the selected
disaster recovery site, in addition to the site where the user system is fully de-
ployed. A cP may have multiple regions where the selected recovery site can be
chosen to be different.
Possible architectures for DR are given in Figures 5.11, 5.12, and 5.13. When the server in zone 1 fails (a small decision sketch follows this list):
• If the backup is 1:1 (i.e., an active and standby configuration) and a VM is already available in zone 2, only the application is moved from zone 1 to zone 2.
Figure 5.13 Cloud application access protection via two different cPs.
• If the backup is 1+1 (i.e., an active and active configuration) and the application in zone 2 is current, then traffic is switched from the active connection cSC1 to the backup connection cSC2.
• If only a backup server is available in zone 2, the VM along with its applications can be moved to zone 2. In this case, TCP/IP is likely to be used in moving the VMs. The distance (i.e., propagation time) between zone 1 and zone 2 and the rate of connectivity between the zones must be such that these factors are not the dominant contributors to TCP timeouts. Therefore, VM moves between zones connected with high-speed transport are more likely.
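The sketch below captures the three failover cases just described as simple decision logic; the function name, arguments, and returned actions are hypothetical and meant only to summarize the text.

```python
# Hypothetical failover logic for the three DR cases described above;
# the function and the zone/connection identifiers are illustrative only.

def fail_over(backup_mode, standby_vm_ready, standby_app_current):
    if backup_mode == "1:1" and standby_vm_ready:
        # Active/standby: a VM already exists in zone 2,
        # so only the application is moved from zone 1.
        return "move application to zone 2"
    if backup_mode == "1+1" and standby_app_current:
        # Active/active: the zone-2 application is current,
        # so traffic is switched from cSC1 to cSC2.
        return "switch from connection cSC1 to cSC2"
    # Only a backup server exists in zone 2: move the VM and its
    # applications (viable only over a high-speed, low-latency link).
    return "move VM and applications to zone 2"


print(fail_over("1+1", standby_vm_ready=True, standby_app_current=True))
```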
5.4.3 SECaaS
Security services such as connectivity security, application security, or content security can be provided by a cSP to cloud consumers. Such services are referred to as security as a service (SECaaS).
Security around data storage services must allow the consumer fine-grained control of network access control lists (ACLs) for modification and accessibility of data stored at the cSP. Additional security is provided by audit tracking of data access or modification, along with data leakage technologies applied to the network access between cloud users and the cSP.
Network traffic over a cSC must be protected against attack and intrusion vectors. The cSP can provide traffic inspection and intrusion/attack blocking via a combination of traditional firewall/security appliances and virtual security solutions provided by NFV. Both content inspection and packet inspection technologies should be utilized to provide high security.
The cSUI allows the consumer to tailor the security offerings for their
intended use of cSP services. For example, a SaaS provider with a CDN would
focus security on intrusion and attack vectors while an email service may focus
on antispam technologies.
The cSP may provide the service where security events and responses are
utilized to gather threat intelligence and react in a manner to protect the con-
sumer services. Should an attack or intrusion be detected, an automatic re-
sponse to isolate the attack vector or continue to provide the service through
alternate infrastructure can be taken.
SECaaS may provide network security functions through cSC set up for
delivery of security functions by the cSP, regardless of whether the consumer
traffic would normally access the cSP. Selection of routing or tunneling tech-
nologies to establish the cSC and security services is performed at cSUI.
5.4.4 PaaS
By platform as a service (PaaS) [1], the capability provided to the consumer is to
deploy onto the cloud infrastructure consumer-created or acquired applications
created using programming languages and tools supported by a cP. The con-
sumer does not manage or control the underlying cloud infrastructure including
network, servers, operating systems, or storage, but has control over the deployed
applications and possibly application hosting environment configurations.
PaaS can be a stand-alone development environment that does not in-
clude technical, licensing, or financial dependencies on specific SaaS applica-
tions or web services. These development environments are intended to provide
a generalized development environment.
PaaS can be application delivery-only environments that do not include
development, debugging, and test capabilities as part of the service, though they
may be supplied offline. The services provided generally focus on security and
on-demand scalability.
PaaS can be an open platform as a service that does not include hosting
as such; rather, it provides open source software to allow a PaaS provider to run
applications. For example, AppScale allows a user to deploy some applications
written for Google App Engine to their own servers, providing data-store ac-
cess from a standard SQL or NoSQL database. Similarly, mobile PaaS (mPaaS), a term introduced by the Yankee Group, targets mobile users. Some open platforms let the
developer use any programming language, any database, any operating system,
any server, and so on to deploy their applications.
With PaaS, a scalable and high-performing network can be formed. As a
fully managed application platform for running and consolidating software ap-
plications and databases in the cloud, PaaS includes the following:
Since business changes are unpredictable, users need a way to quickly mod-
ify applications in response. A web-based platform as a service portal can help to:
5.4.5 SaaS
The capability provided to the consumer via SaaS [1] is to use the cloud pro-
vider’s applications running on a cloud infrastructure. The applications are ac-
cessible from various client devices through a thin client interface such as a web
browser (e.g., web-based email). The consumer does not manage or control the
underlying cloud infrastructure including network, servers, operating systems,
storage, or even individual application capabilities, with the possible exception
of limited user-specific application configuration settings.
Software is installed on demand via a customer portal, and licensed and billed monthly. Open-source and enterprise 32- and 64-bit operating system
software options from various vendors are available. Here are a few examples of
vendors and operating systems that could be installed:
• Microsoft;
• RedHat;
• CentOS;
• Debian;
• FreeBSD;
• Ubuntu;
• Vyatta Network;
• Cloud Linux;
• Parallels;
• cPanel;
• Server virtualization software such as VMWare ESX and ESXi, Citrix
Xenserver, Citrix CloudPlatform, Parallels Virtuozzo, Microsoft Hyper-
V;
• Security software such as McAfee Total Protection, McAfee Anti-Virus,
Microsoft Windows Firewall, McAfee Host Intrusion Protection, Nim-
soft Monitoring, APF Software Firewall;
• Database software such as Microsoft SQL Server (2000, 2005, 2008,
2012), MySQL, Cloudera Hadoop, MongoDB, Basho Riak;
• Control panel software such as cPanel/WHM with Fantastico, RVSkin
and Softaculous, Parallels Plesk Panel.
5.4.5.1 CDN
In cloud content delivery network (CDN) service, user content is distributed
to a network of edge servers. Users can access the content from a server near
them, ensuring faster load times. Large objects are delivered to many users with
sustained high data transfer rates. And if user traffic fluctuates, the service auto-
matically adjusts as demand increases or decreases.
User content can be placed onto cloud object storage, and the CDN is then enabled for that content. A user then visits the CDN site and requests files from the nearest edge server. The edge server delivers a local, cached copy or pulls
one from cloud object storage. The object’s time-to-live (TTL) will expire at
intervals the user defines, such as 24 hours. If the TTL has expired when the
next request is made, the file is again retrieved from cloud object storage. The
content is cached once again by the edge servers and the TTL restarts.
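The toy Python model below mimics the edge-server behavior described above: a cached copy is served until its TTL expires, after which the object is pulled again from cloud object storage and the TTL restarts. The 24-hour TTL and the origin-fetch stub are placeholders.

```python
# Toy model of the edge-server caching behavior described above; the 24-hour
# TTL and the origin-fetch function are placeholders.
import time

TTL_SECONDS = 24 * 3600
_cache = {}          # object name -> (content, expiry timestamp)

def fetch_from_object_storage(name):
    return f"<content of {name}>"     # stand-in for a pull from cloud storage

def edge_get(name):
    content, expires = _cache.get(name, (None, 0.0))
    if time.time() >= expires:                      # TTL expired or not cached
        content = fetch_from_object_storage(name)   # pull from origin
        _cache[name] = (content, time.time() + TTL_SECONDS)  # restart TTL
    return content

print(edge_get("video/intro.mp4"))
```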
5.4.6 CaaS
Real-time services such as virtual PBX, voice and video conferencing systems,
collaboration systems, and call centers can be considered as communication as
a service (CaaS). CaaS capabilities can be:
• Mobile application support allowing free download for both iOS and
Android platforms;
• Professional voice recording service for user greetings and other mes-
sages recorded by an industry-leading voice talent;
• Bring your own device (BYOD) capabilities;
• SLAs including quality of service and availability such as next business
day replacement of phones for equipment maintenance of virtual PBX
service;
• Dynamic security policy including authentication, media encryption,
and access control;
• Scalability.
VNFs can be categorized as active or passive. Active VNFs are part of the main path of a packet: they may drop packets or forward them (e.g., a firewall), and they can actually change packets (e.g., an IPsec VPN server). Passive VNFs are considered to be outside the main path of the chain; these functions mainly inspect packets (e.g., a monitoring system or deep packet inspection). In practice, one can think of a passive VNF as a physical device connected to a hub through a single network interface configured in promiscuous mode; traffic is duplicated when it has to reach a passive function.
In short, passive functions can rely on packet characteristics because packets are not modified, while active functions must be integrated at the service level because ingress and egress packets can differ (e.g., VPN). If a chain contains active functions that change packets, the classification may differ after traffic passes one of these functions.
The VNF forwarding graph simplifies service chain provisioning by allowing service chains to be created, modified, and removed quickly and inexpensively. On one hand, we can compose several VNFs together to reduce management complexity. On the other hand, we can decompose a VNF into smaller functional blocks for reusability and faster response time. However, we note that the actual carrier-grade deployment of VNF instances should be transparent to end-to-end services.
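A toy service chain in Python can make the active/passive distinction concrete: active functions sit on the main path and may drop or modify packets, while passive functions only observe a duplicated copy. The specific functions (firewall, VPN encryption, monitor) and the packet model are illustrative only.

```python
# Toy service chain illustrating active vs. passive VNFs; the packet model
# and the specific functions are illustrative only.

def firewall(packet):              # active: may drop the packet
    return None if packet.get("port") == 23 else packet

def vpn_encrypt(packet):           # active: modifies the packet
    return dict(packet, payload="<encrypted>")

def monitor(packet):               # passive: inspects a copy, never alters it
    print("observed:", packet["src"], "->", packet["dst"])

def run_chain(packet, active_chain, passive_taps):
    for tap in passive_taps:       # passive functions see duplicated traffic
        tap(dict(packet))
    for vnf in active_chain:       # active functions sit on the main path
        packet = vnf(packet)
        if packet is None:         # dropped
            return None
    return packet

pkt = {"src": "10.0.0.1", "dst": "10.0.0.2", "port": 443, "payload": "hello"}
print(run_chain(pkt, [firewall, vpn_encrypt], [monitor]))
```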
NFV introduces separation of software from hardware and flexible de-
ployment of network functions. This separation enables the software to evolve
independent from the hardware and vice versa. NFV can automatically deploy
network function software on a pool of hardware resources that may run differ-
ent functions at different times in different data centers. Network operators can
scale the NFV performance dynamically and on a grow-as-you-need basis with
fine granularity control based on the current network conditions.
Two major enablers of NFV are industry-standard servers and technolo-
gies developed for cloud computing. A common feature of industry-standard
servers is that their high volume makes it easy to find interchangeable com-
ponents inside them at a competitive price, compared to network appliances
based on bespoke application-specific integrated circuits (ASICs). Using these
general-purpose servers can also reduce the number of different hardware ar-
chitectures in operators’ networks and prolong the life cycle of hardware when
technologies evolve (e.g., running different software versions on the same plat-
form). Recent developments of cloud computing, such as various hypervisors,
OpenStack, and Open vSwitch, also make NFV achievable in reality. For ex-
ample, the cloud management and orchestration schemes enable the automatic
instantiation and migration of VMs running specific network services.
The recent effort from the telecommunications industry has been centered
on the software virtualization and its management. However, it is challenging
network. E-line and E-LAN services of the Metro Ethernet Forum (MEF) are being considered as examples of (Vn-Nf)/VN.
NFV also identifies a VM interface [2] as (Vn-Nf)/VM or Vn-Nf-VM (Figure 5.15), which is considered an equivalent of cSI.
NFV identifies SWA-1 interface [5] as depicted in Figure 5.18 to enable
communications between various network functions within the same or differ-
ent network service. They may represent data and/or control plane interfaces
of the network functions. SWA-1 is considered an equivalent of the virtual component of the MEF user network interface (UNI). We consider SWA-1 an equivalent of cSI.
NFV also identifies the SWA-5 interface, which is an abstraction of all subinterfaces between the NFV infrastructure (NFVI) and the VNF, including VNF interswitch connectivity services such as E-LAN and E-line [5], as depicted in Figure 5.19.
NFV divides functional blocks as host functional block (HFB) and virtu-
alization functional block (VFB) [8, 9] as depicted in Figure 5.20. The interface
between HFB and VFB is called container interface, which is the virtual interface
between two containers. This interface can also be considered an equivalent of cSI.
The mapping between OCC and ETSI NFV architectural constructs is given in Figures 5.21, 5.22, and 5.23 and Table 5.2 [7]. Cloud user and bare metal server interfaces to NaaS are depicted in Figures 5.21 and 5.22 using NFV constructs.
Figure 5.15 VM interface.
NFV identified an interface to hardware [4] as Vi-Ha and an interface to the bare metal operating system (OS) as depicted in Figures 5.16 and 5.17. This interface can be a subset of cSUI or cCcPI or cSPcSPI as described in Section 5.3.
Figure 5.22 Bare metal server interface NaaS architecture with NFV constructs and NaaS.
Table 5.2
Mapping Between OCC and NFV Constructs
Architectural Construct  NFV Construct  OCC Construct
User interface  (Vi-Ha)+(Vn-Nf)/VN  cSUI
VM interface  (Vn-Nf)/VM  cSI
Container interface  Container interface  cSI
SWA-1  Software architecture-1  cSI
Cloud carrier-cloud provider interface  —  cCcPI
Cloud service provider-cloud service provider interface  —  cSPcSPI
Connection between users or between a user and VM or between VMs  VNF forwarding graph  cSC
Connection termination point  —  cSCTP
Figure 5.24 VNFs and infrastructure for cSC and cSCTP when cSC is between two cSUIs.
Figure 5.25 VNF and infrastructure components of cSC when cSC is between cSUI and cSI.
From Table 5.3, we can identify the basic and enhanced virtual capabilities of the UNI, as well as the infrastructure capabilities of the UNI. A possible implementation of the UNI is shown in Table 5.4.
Table 5.3
UNI Service Attributes and Parameter Values for All Service Types
UNI Service Attribute [13]  Component of VNF or Infrastructure or Both  Categories of VNF and Infrastructure
UNI ID  VNF  VNFUNI-Prov
Physical layer  Infrastructure  INFUNI-Prov
Synchronous mode  Both  INFUNI-sync +VNFUNI-sync
Number of links  Both  INFUNI-prot +VNFUNI-prot
UNI resiliency  Both  INFUNI-prot +VNFUNI-prot
Service frame format  Infrastructure  INFUNI-Prov
UNI maximum service frame size  Both  INFUNI-Prov +VNFUNI-Prov
Service multiplexing  Both  INFUNI-Prov +VNFUNI-Prov
CE-VLAN ID for untagged and priority tagged service frames  VNF  VNFUNI-Prov
CE-VLAN ID/EVC map  VNF  VNFUNI-Prov
Maximum number of EVCs  Both  INFUNI-Prov +VNFUNI-Prov
Bundling  VNF  VNFUNI-Prov
All to one bundling  VNF  VNFUNI-Prov
Token share  Both  INFUNI-tsh +VNFUNI-tsh
Envelopes  VNF  VNFUNI-env +INFUNI-env
Ingress bandwidth profile per UNI  VNF  VNFUNI-Prov +INFUNI-Prov
Egress bandwidth profile per UNI  VNF  VNFUNI-Prov +INFUNI-Prov
Link OAM  Both  VNFUNI-loam +INFUNI-loam
UNI MEG  Both  VNFUNI-soam +INFUNI-soam
E-LMI  Both  VNFUNI-elmi +INFUNI-elmi
UNI L2CP address set  VNF  VNFUNI-Prov
UNI L2CP peering  VNF  VNFUNI-Prov
Test probes for ITU Y.1564 testing and RFC 6349 TCP testing  Both  INFUNI-test +VNFUNI-test
Table 5.4
UNI Configurations
UNI Functionalities  VNF and INF Components Required (i.e., SFC Components)
Basic UNI provisioning VNFUNI-Prov +INFUNI-Prov
Basic UNI provisioning + link OAM VNFUNI-Prov +INFUNI-Prov + VNFUNI-loam +INFUNI-loam
Basic UNI provisioning + link Protection VNFUNI-Prov +INFUNI-Prov + VNFUNI-prot +INFUNI-prot
Basic UNI provisioning + token sharing VNFUNI-Prov +INFUNI-Prov + VNFUNI-tsh +INFUNI-tsh
Basic UNI provisioning + envelopes VNFUNI-Prov +INFUNI-Prov + VNFUNI-env +INFUNI-env
Basic UNI provisioning + service OAM VNFUNI-Prov +INFUNI-Prov + VNFUNI-soam +INFUNI-soam
Basic UNI provisioning + ELMI VNFUNI-Prov +INFUNI-Prov + VNFUNI-elmi +INFUNI-elmi
Basic UNI provisioning + service OAM + link OAM  VNFUNI-Prov +INFUNI-Prov + VNFUNI-soam +INFUNI-soam + VNFUNI-loam +INFUNI-loam
In Table 5.6, we have mapped additional EVC attributes to VNF and INF
categories of an EVC.
From the previous tables, we can identify virtual capabilities and infra-
structure capabilities of EVC. Possible implementation of EVC can be as shown
in Table 5.7.
Table 5.5
VNFs and Infrastructure Components of EVC per UNI Service Attributes and
Parameter Values for All Service Types
EVC per UNI Service Attribute [13]  Categories of VNF and Infrastructure
UNI EVC ID  VNFEVC-Prov
Class of service identifier for data service frame  VNFEVC-Prov
Class of service identifier for L2CP service frame  VNFEVC-Prov
Class of service identifier for SOAM service frame  VNFEVC-soam
Color identifier for service frame  VNFEVC-Prov
Egress equivalence class identifier for data service frames  VNFEVC-eqc
Egress equivalence class identifier for L2CP service frames  VNFEVC-eqc
Egress equivalence class identifier for SOAM service frames  VNFEVC-soam
Ingress bandwidth profile per EVC  VNFEVC-Prov +INFEVC-Prov
Egress bandwidth profile per EVC  VNFEVC-Prov +INFEVC-Prov
Ingress bandwidth profile per class of service identifier  VNFEVC-Prov
Egress bandwidth profile per egress equivalence class  VNFEVC-eqc
Source MAC address limit  VNFEVC-Prov
Test MEG  VNFEVC-soam +INFEVC-soam
Subscriber MEG MIP  VNFEVC-soam +INFEVC-soam
Table 5.6
Categorization of EVC Attributes as VNF and Infrastructure
Component
EVC Service Attribute [13]  Categories of VNF and Infrastructure
EVC type  VNFEVC-Prov
EVC ID  VNFEVC-Prov
UNI List  VNFEVC-Prov
Maximum number of UNIs  VNFEVC-Prov +INFEVC-Prov
Unicast service frame delivery  VNFEVC-Prov
Multicast service frame delivery  VNFEVC-Prov
Broadcast service frame delivery  VNFEVC-Prov
CE-VLAN ID preservation  VNFEVC-Prov
CE-VLAN CoS preservation  VNFEVC-Prov
EVC performance  VNFEVC-Prov +INFEVC-Prov
EVC maximum service frame size  VNFEVC-Prov
Table 5.7
VNFs and Infrastructure Components of EVC
EVC Functionalities  VNF and INF Components Required (i.e., SFC Components)
Basic EVC provisioning  VNFEVC-Prov +INFEVC-Prov
Basic EVC provisioning + service OAM  VNFEVC-Prov +INFEVC-Prov + VNFEVC-soam +INFEVC-soam
The main orchestrator needs to talk to the controller associated with UNIs
and NFV orchestrator associated with both UNIs and EVC. Let’s assume that
both UNIs belong to one vendor and are in the same subnetwork (domain);
therefore, the same controller can configure both UNIs. Per request from the
main orchestrator, the controller configures INFUNI-Prov for both UNIs. The
VNFs, VNFUNI-Prov for both UNIs, can be configured independently from
INFUNI-Prov. For EPL provisioning, the flows (i.e., service chains) are depicted in Figures 5.28, 5.29, and 5.30.
The provisioning components in Figures 5.28, 5.29, and 5.30 are quite different from the objects defined by MEF [15]. We believe that what we define here constitutes a layer below the provisioning layer formed of the service layer and resource layer objects defined by MEF.
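The sketch below outlines this basic EPL provisioning flow in Python. The orchestrator and controller classes are hypothetical; only the component names (INF/VNF UNI-Prov and EVC-Prov) follow Tables 5.4 and 5.7, and the real flows of Figures 5.28 through 5.30 contain more detail.

```python
# Hypothetical sketch of the basic EPL provisioning flow described above;
# the classes are illustrative, while the component names follow Tables 5.4 and 5.7.

class DomainController:
    def configure_inf(self, uni):
        print(f"controller: configure INF(UNI-Prov) at {uni}")

class NfvOrchestrator:
    def configure_vnf(self, element, component):
        print(f"NFV orchestrator: configure VNF({component}) for {element}")

class MainOrchestrator:
    def __init__(self, controller, nfv):
        self.controller, self.nfv = controller, nfv

    def provision_epl(self, uni_a, uni_b):
        for uni in (uni_a, uni_b):                        # infrastructure first
            self.controller.configure_inf(uni)
        for uni in (uni_a, uni_b):                        # then the UNI VNFs,
            self.nfv.configure_vnf(uni, "UNI-Prov")       # independently of INF
        self.nfv.configure_vnf(f"{uni_a}<->{uni_b}", "EVC-Prov")   # then the EVC

MainOrchestrator(DomainController(), NfvOrchestrator()).provision_epl("UNI-1", "UNI-2")
```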
Additional OVC service attributes for Access E-Line are given in Tables
5.10 and 5.11.
Service chaining for access E-line service is depicted in Figure 5.31.
INFs and VNFs and service chaining for remaining carrier Ethernet ser-
vices such as access E-LAN, transit E-line, and transit E-LAN services can be
similarly identified.
Figure 5.30 Provisioning of basic EPL with link OAM, OAM, and SOAM capabilities.
Table 5.8
OVC End Point Per UNI Service Attributes
OVC End Point per UNI Service Attribute [14]  Categories of VNF and Infrastructure
UNI OVC identifier  VNFOVC-Prov
OVC end point map  VNFOVC-Prov
Class of service identifiers  VNFOVC-Prov
Ingress bandwidth profile per OVC end point  INFOVC-Prov +VNFOVC-Prov
Ingress bandwidth profile per class of service identifier  INFOVC-Prov +VNFOVC-Prov
Egress bandwidth profile per OVC end point  INFOVC-Prov +VNFOVC-Prov
Egress bandwidth profile per class of service identifier  INFOVC-Prov +VNFOVC-Prov
Maintenance end point (MEP) list  VNFOVC-Prov
Table 5.9
OVC End Point per ENNI Service Attributes
OVC End Point per ENNI Service Attribute [14]  Categories of VNF and Infrastructure
OVC end point identifier  VNFOVC-Prov
Trunk identifiers  VNFOVC-Prov
Class of service identifier for ENNI frames  VNFOVC-Prov
Ingress bandwidth profile per OVC end point  INFOVC-Prov +VNFOVC-Prov
Ingress bandwidth profile per class of service identifier  INFOVC-Prov +VNFOVC-Prov
Egress bandwidth profile per OVC end point  INFOVC-Prov +VNFOVC-Prov
Egress bandwidth profile per class of service identifier  INFOVC-Prov +VNFOVC-Prov
Maintenance end point (MEP) list  VNFOVC-Prov
Maintenance intermediate point (MIP)  VNFOVC-Prov
Table 5.10
OVC Services Attributes
OVC Service Attribute [14] Categories of VNF and Infrastructure
OVC identifier  VNFOVC-Prov
OVC type  VNFOVC-Prov
OVC end point list  VNFOVC-Prov
Maximum number of UNI OVC end points  VNFOVC-Prov
Maximum number of ENNI OVC end points  VNFOVC-Prov
OVC MTU size  INFOVC-Prov +VNFOVC-Prov
CE-VLAN ID preservation  VNFOVC-Prov
CE-VLAN CoS preservation  VNFOVC-Prov
S-VLAN ID preservation  VNFOVC-Prov
S-VLAN CoS preservation  VNFOVC-Prov
Color forwarding  VNFOVC-Prov
Service level specification  INFOVC-Prov +VNFOVC-Prov
Unicast frame delivery  VNFOVC-Prov
Multicast frame delivery  VNFOVC-Prov
Broadcast frame delivery  VNFOVC-Prov
OVC available MEG level  VNFOVC-Prov
Table 5.11
OVC End Point per UNI Service Attributes
OVC End Point per UNI Service Attribute  Categories of VNF and Infrastructure
UNI OVC identifier  VNFOVC-Prov
OVC end point map  VNFOVC-Prov
Class of service identifiers  VNFOVC-Prov
Ingress bandwidth profile per OVC end point  INFOVC-Prov +VNFOVC-Prov
Ingress bandwidth profile per class of service identifier  VNFOVC-Prov
Egress bandwidth profile per OVC end point  INFOVC-Prov +VNFOVC-Prov
Egress bandwidth profile per class of service identifier  VNFOVC-Prov
Maintenance end point (MEP) list  VNFOVC-Prov
Subscriber MEG maintenance intermediate point (MIP)  VNFOVC-Prov
that have two endpoints identified by the tunnel source and tunnel destination
addresses at each endpoint.
Figure 5.33 shows the encapsulation process of a GRE packet as it traverses the router and enters the tunnel interface.
Configuring a GRE tunnel involves creating a tunnel interface, which is a
logical interface, and configuring the tunnel endpoints for the tunnel interface.
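As one concrete (nonvendor) illustration, the snippet below creates a GRE tunnel interface and configures its endpoints on a Linux host using standard iproute2 commands driven from Python; the addresses and prefixes are placeholders, and router-vendor CLIs differ.

```python
# One way to create a GRE tunnel interface on a Linux host with iproute2,
# driven from Python (requires root privileges); addresses are placeholders.
import subprocess

def sh(cmd):
    subprocess.run(cmd.split(), check=True)

sh("ip tunnel add gre1 mode gre local 192.0.2.1 remote 198.51.100.1 ttl 64")
sh("ip addr add 10.255.0.1/30 dev gre1")   # address the logical tunnel interface
sh("ip link set gre1 up")
sh("ip route add 10.20.0.0/16 dev gre1")   # send site-to-site traffic into the tunnel
```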
Table 5.12
IP VPN Interface Attributes
IP VPN Interface Attributes [16]  Component of INF  Component of VNF  Main Functions
Physical interface (i.e., rate, MAC address, and so on)  √  —  INFVPNI-prov + VNFVPNI-prov
MTU  √  —  INFVPNI-prov
Table 5.13
IP VPN End Point Attributes
IP VPN End Point Attributes  Component of INF  Component of VNF  Main Functions
Tunnel EP ID  —  √  VNFVPNE-prov
Table 5.14
IP VPN Connection
IP VPN Connection Attributes  Component of INF  Component of VNF  Main Functions
Tunnel ID  —  √  VNFVPNE-prov
VPN ID  —  √  VNFVPNC-prov
Connection type  —  √  VNFVPNC-prov
SLA  √  √  INFVPNC-prov + VNFVPNC-prov
MTU  √  √  INFVPNC-prov + VNFVPNC-prov
Administrative state  —  √  VNFVPNC-prov
Operational state  √  √  INFVPNC-prov + VNFVPNC-prov
Connection duration  —  √  VNFVPNC-prov
Connection start time  —  √  VNFVPNC-prov
Table 5.15
IP VPN SOAM
IP VPN SOAM Functionality/Attribute [24]  Component of VNF or Infrastructure or Both  Categories of VNF and Infrastructure
IP VPN connection MEG  Both  INFVPN-soam +VNFVPN-soam
IP loopback  —  —
Table 5.16
VNFs and Infrastructure Components of IP VPN
IP VPN Functionalities  VNF and INF Components Required (i.e., SFC Components)
Basic IP VPN provisioning  VNFVPNI-prov +INFVPNI-prov + VNFVPNE-prov +INFVPNE-prov + VNFVPNC-prov +INFVPNC-prov
Basic IP VPN provisioning + service OAM  VNFVPNI-prov +INFVPNI-prov + VNFVPNE-prov +INFVPNE-prov + VNFVPNC-prov +INFVPNC-prov + VNFVPN-soam +INFVPN-soam
Figure 5.35 Cloud services management with a cloud orchestrator, SDN controllers, NFV orchestrator, and NMS/EMS.
onto hypervisors and manages their network connectivity. It also analyzes the root cause of performance issues and collects information about infrastructure faults for capacity planning and optimization.
Cloud services may be provided by multiple service operators where each
cloud service operator (cSO) may have its own cloud orchestrator. The end-
to-end coordination between cloud orchestrators can be provided by a cloud
orchestrator owned by the cSP as depicted in Figure 5.36.
LSO functionalities defined by MEF for carrier Ethernet network (CEN),
as depicted in Figure 5.37, are [11]:
Although LSO for cloud services has not been worked out yet in the industry, these generic steps apply to cloud services as well.
Order fulfillment and service control deal with the orchestration of provi-
sioning related activities involved in the fulfillment of a customer order or of a
Figure 5.36 Management of cloud services provided by multiple cloud service operators.
service control request, including the tracking and reporting of the provisioning
progress. The process can be broken down into multiple functional orchestra-
tion areas:
• The LSO ecosystem may manage unit-level testing within the infrastructure and element management levels, therefore abstracted from LSO, or may be
orchestrated from LSO with testing requests, via APIs, to systems ca-
pable of conducting and reporting on unit tests.
• The LSO ecosystem needs to orchestrate and control end-to-end service
test, and issues testing requests, via APIs, to systems capable of conduct-
ing and reporting on unit tests.
• The LSO ecosystem needs to orchestrate customer acceptance testing.
• The LSO ecosystem needs to support alarm surveillance, detect errors
and faults, and correlation to services.
The LSO ecosystem provides authentication for all interactions. The LSO
ecosystem may provide role-based access control for users. It supports encryp-
tion across interfaces and the associated key management capabilities. The LSO
ecosystem orchestrates filtering controls for connectivity services and maintains
administrative and trust domains and relationships.
The LSO ecosystem supports the fusion and analysis of information
among management and control functionality across management domains.
The LSO ecosystem assembles a relevant and complete operational picture of
the end to end services, service components, and the supporting network in-
frastructure, both physical and virtual. It ensures that information is visible,
accessible, and understandable when needed and where needed to accelerate
decision-making. The LSO ecosystem also supports prediction and trending of
service growth and resource demand as compared to available resource capacity.
The LSO ecosystem may provide rules-based coordination and automa-
tion of management processes across administrative domains supporting ef-
fective configuration, assurance, and control of services and their supporting
service components:
• LSO may support service related policies that encode rules that describe
the design and dynamic behavior of the services.
• LSO may support service objective based policies that implement sets of
rules with event triggered conditions and associated actions.
• LSO may adjust the behavior of services and service resources, including
bandwidth, traffic priority, and traffic admission controls through poli-
cies, allowing connectivity services to adapt rapidly to dynamic condi-
tions.
• Within the LSO ecosystem, user/party and service policies may be used
to control and bound the objects, parameters, value ranges, and states
that are allowed to be created, modified, or deleted.
5.11 NFV and SDN for Unified Network and Cloud Service
Provisioning
Modern servers benefit from abstractions of operating systems, programming
languages, file systems, and virtual memory. As a result, servers are highly vir-
tualized, capable of supporting tens to hundreds of virtual machines (VMs) per
physical server, each of which can be dynamically created, moved to a different
host, modified, or deleted in a few minutes. Server virtualization resulted in
significant reductions in capital and operating expense, reduction in the physi-
cal footprint of devices, lower energy consumption, faster provisioning times,
and higher utilization.
However, networks for data centers, cloud computing, and services have not yet evolved their own set of fundamental abstractions. Conventional data
networks can require five days or more to provision the necessary service chains
within a data center, and weeks or longer to reprovision service between data
centers. Current industry trends such as dynamic workloads, mobile comput-
ing, multitenant cloud computing, warehouse-scale data centers, and big data
analytics have led to a need for much richer network functionality. Abstractions
for networks with SDN are expected to bring benefits similar to those derived
from server virtualization.
With SDN, management and control functions can be moved into soft-
ware running on a server cluster, known as a network controller. Centralized
management through cloud middleware such as the OpenStack Quantum in-
terface and virtualized layer 2–3 network capabilities through network function
virtualization (NFV) are being worked on in the industry. Network resources are
expected to be programmable, automated, and eventually optimized, resulting
in a workload or application aware network.
plane to provision the infrastructure resources where VNFs run (VMs, virtual
networks, and so on). The orchestrator has a REST interface that exposes the
ability to create and delete VNFs as well as to chain them.
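The calls below illustrate what such a REST interface might look like from a client's perspective; the endpoint paths, payload fields, and base URL are hypothetical and are not taken from any specific orchestrator product.

```python
# Hypothetical illustration of REST calls an orchestrator might expose for
# creating and chaining VNFs; the endpoints and payload fields are invented.
import requests

BASE = "http://orchestrator.example.net:8080/api/v1"

fw = requests.post(f"{BASE}/vnfs", json={"type": "firewall", "dc": "dc-1"}).json()
lb = requests.post(f"{BASE}/vnfs", json={"type": "load-balancer", "dc": "dc-2"}).json()

# Chain the two VNFs; the WAN controller provides the inter-DC connectivity.
requests.post(f"{BASE}/chains", json={"vnfs": [fw["id"], lb["id"]]})

# Tear-down.
requests.delete(f"{BASE}/vnfs/{fw['id']}")
```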
The VIM plane includes the components for management of infrastruc-
ture resources. It includes cloud DC controllers (one per DC) and a WAN
controller that is able to establish inter-DC connectivity services.
From a networking perspective, OpenStack allows the creation and man-
agement of networks (L2 network segments) and ports (attachment points for
devices connecting to networks, such as virtual network interface cards or
vNICs, in VMs). The OpenStack community introduced new Neutron net-
work service types: L3 routing, firewall as a service (FWaaS), load balancer as a
service (LBaaS), and VPN as a service (VPNaaS).
With the orchestration and composition of SFs in mind, it is easy to
identify the need to fill a gap in OpenStack: steering traffic between OpenStack
elements (e.g., VMs, routers). We envision a new OpenStack service abstraction
that extends and relies on current OpenStack networking features, allowing
traffic steering between Neutron ports according to classification criteria. New
entities are introduced into the OpenStack Neutron data model: port steering
and classifier. Both entities have a set of common OpenStack data model at-
tributes (i.e., id, name, description, and tenant_id). Port steering adds to this
common set a list of ports (ports attribute) and a list of classifiers (classifiers
attribute). The former lists the sets of ports that must be targeted for classifica-
tion and then steered. The classifier entity adds the following attributes: type,
protocol, port_min, port_max, src_ip, and dst_ip.
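Because this abstraction is only envisioned here, there is no concrete API to quote; the sketch below simply illustrates what resource bodies carrying the attributes just listed might look like, with every value (tenant, port UUIDs, addresses) invented for the example.

    # Illustrative resource bodies for the envisioned Neutron extension; the
    # attribute names follow the data model above, while every value (tenant,
    # port UUIDs, addresses) is invented for the example.
    import uuid

    classifier = {
        "id": str(uuid.uuid4()),
        "name": "web-traffic",
        "description": "Match inbound HTTP flows",
        "tenant_id": "d3adbeef",
        "type": "ip",
        "protocol": "tcp",
        "port_min": 80,
        "port_max": 80,
        "src_ip": "0.0.0.0/0",
        "dst_ip": "10.10.0.0/24",
    }

    port_steering = {
        "id": str(uuid.uuid4()),
        "name": "steer-web-to-fw",
        "description": "Steer matching traffic through the firewall VNF port",
        "tenant_id": "d3adbeef",
        "ports": ["<neutron-port-uuid-1>", "<neutron-port-uuid-2>"],
        "classifiers": [classifier["id"]],
    }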
OpenDaylight has a module that integrates with OpenStack Neutron for
the enforcement of services in the infrastructure. This module can be extended
to support and enforce the previously mentioned OpenStack traffic steering
feature; such an implementation relies on OpenFlow and the Open vSwitch
database management protocol (OVSDB) for the management of network
resources.
The WAN controller is responsible for managing the operator network,
and it exposes connectivity services to the orchestrator. In this context, WAN
services are used to support VNFs. Point-to-point and multipoint connections
with guaranteed network QoS are provided. These are exposed through a ser-
vice interface that, similar to cloud IaaS interfaces, is technology agnostic. The
mechanisms for automatically establishing connectivity services across different
locations are detailed in [6].
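A technology-agnostic service interface of this kind might be exercised as in the sketch below; the controller URL, resource path, and QoS field names are assumptions made for illustration and do not correspond to any specific controller's API.

    # Illustrative request for a point-to-point inter-DC connection with QoS;
    # the controller URL, resource path, and field names are assumptions.
    import requests

    WAN_CONTROLLER = "http://wan-controller.example.net:9090"  # hypothetical

    connection = {
        "type": "point-to-point",
        "endpoints": ["dc-east/gw1", "dc-west/gw1"],          # inter-DC attachment points
        "qos": {"bandwidth_mbps": 500, "max_latency_ms": 20},
    }
    resp = requests.post(f"{WAN_CONTROLLER}/connectivity-services", json=connection)
    resp.raise_for_status()
    print("service id:", resp.json()["id"])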
Reference [18] describes the configuration of virtual CPE (vCPE) with Ju-
niper's Contrail orchestrator. Juniper's cCPE Selfcare application enables users
to configure Contrail-based virtual CPE services, which are hosted in the user's
cloud computing environment. Customer network administrators can then enable
these virtual CPE services through the cCPE Selfcare portal on a self-serve basis.
Contrail, which works within open cloud orchestration systems such as Open-
Stack and CloudStack, provides orchestration and management of networking
functions, such as a virtual firewall, in a VM instead of on physical hardware. The
vCPE services rely on preconfigured routing instances and interfaces on
MX Series routers, which the cCPE Selfcare portal identifies and can modify.
The vCPE services can be configured in the Selfcare portal, which passes
the authentication credentials and virtual service definition properties to Con-
trail and OpenStack to create the virtual service. The cCPE Selfcare application
communicates with Contrail over the Contrail northbound RESTful APIs.
The Selfcare portal acts as an SDN orchestrator that enables MX routers to route
selected traffic to virtual services managed by the Contrail controller. A user defines
Contrail-based virtual services as parameterized service templates and VM im-
ages in Contrail, which are instantiated by the Contrail controller and OpenStack.
The cCPE customers can then enable these virtual services on a self-serve basis
in the cCPE Selfcare portal.
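The call pattern between a portal and the controller can be pictured as in the sketch below. The endpoint path, credentials, and payload fields are assumptions made for illustration; they echo Contrail's service-template and service-instance objects but are not the literal Contrail northbound API.

    # Sketch of the portal-to-controller call pattern; the URL, authentication
    # scheme, and payload fields are illustrative assumptions, not the literal
    # Contrail northbound API.
    import requests

    CONTRAIL_API = "http://contrail-controller.example.net:8082"  # hypothetical
    AUTH = ("portal-user", "portal-password")                     # supplied by the portal

    def instantiate_virtual_service(template_name, parameters):
        """Create a service instance from a parameterized service template."""
        body = {"service-instance": {
            "name": f"{template_name}-instance",
            "template": template_name,   # e.g., a vFirewall template with a VM image
            "parameters": parameters,    # per-customer properties entered in the portal
        }}
        resp = requests.post(f"{CONTRAIL_API}/service-instances", json=body, auth=AUTH)
        resp.raise_for_status()
        return resp.json()

    instantiate_virtual_service("vFirewall",
                                {"mgmt_network": "cust-mgmt", "left": "cust-lan"})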
Contrail, by combining a controller with virtual routers on virtualized
servers, enables the chaining of virtual services provided by applications run-
ning on VMs. Contrail, together with OpenStack, automates the addition of
new features and virtual services based on VMs for customers who have IP or
VPN connectivity based on MX edge routers. The BGP protocol announces the
routes with SDN targets so that all routers in the VPN can provide connectiv-
ity between the VPN sites. The VMs are dynamically created by Contrail using
OpenStack.
The cCPE Selfcare application, along with Contrail, can dynamically provi-
sion services, replacing traditional router-based services running on edge rout-
ers, such as a DHCP server or static firewall, with cloud services provided by
VMs in a cloud-based environment, such as an external DHCP server.
5.12 Conclusion
In this chapter, the novel cloud services architectures defined by the OCC are
described, consisting of actors for cloud services, standard interfaces between
the actors, and standard connections and connection termination points associated
with cloud users and applications. The network functions virtualization (NFV)
architecture of ETSI NFV is summarized, and the components of the cloud
services architectures and the NFV architecture are mapped to each other. An
implementation approach providing substantial flexibility to cloud service
providers is described, along with a method for implementing cloud services
architectures with virtualized components. Based on this approach, implementation
details of carrier Ethernet services and IP VPN services are given.
References
[1] Toy, M., "OCC 1.0 Reference Architecture," December 2014.
[2] Draft ETSI GS NFV-INF V0.3.1 (2014-05), “Network Functions Virtualisation; Infra-
structure Architecture; Architecture of the Hypervisor Domain.”
[3] DGS NFV-INF 003 V0.34 (2014-11-18), "Network Functions Virtualisation; Part 1: Infra-
structure Architecture; Sub-Part 3: Architecture of Compute Domain."
[4] Draft ETSI GS NFV-INF 001 V0.3.12 (2014-11), “Network Functions Virtualisation;
Infrastructure Overview.”
[5] Draft ETSI GS NFV-SWA 001 v0.2.4 (2014-11), “Network Functions Virtualisation;
Virtual Network Functions Architecture.”
[6] ETSI GS NFV-MAN 001 V1.1.1 (2014-12), "Network Functions Virtualisation (NFV);
Management and Orchestration."
[7] Toy, M., “Cloud Services Architectures with SDN and NFV Constructs,” OCC Draft,
July 2015.
[8] GS NFV-INF 007 V0.3.1 (2013-11-15), "Network Function Virtualisation Infrastructure
Architecture: Interfaces and Abstractions."
[9] ETSI GS NFV-INF 007 V1.1.1 (2014-10), "Network Functions Virtualisation (NFV);
Infrastructure; Methodology to Describe Interfaces and Abstractions."
[10] Toy, M., "Cloud Services Architectures," Procedia Computer Science, Elsevier,
November 2015.
[11] MEF, “Lifecycle Service Orchestration: Reference Architecture and Framework,” Feb
2016.
[12] Cannistra, R., B. Carle, M. Johnson, J. Kapadia, Z. Meath, et al., “Enabling Autonomic
Provisioning in SDN Cloud Networks with NFV Service Chaining,” Proceedings of Optical
Fiber Communications Conference, San Francisco, CA, March 2014.
[13] MEF 6.2, “EVC Ethernet Services Definitions Phase 3,” August 2014.
[14] MEF 51, “OVC Services Definitions,” August 2015.
[15] MEF 7.2, “Carrier Ethernet Management Information Model,” April 2013.
[16] RFC 4087, "IP Tunnel MIB," June 2005.
[17] Hwang, J., K. K. Ramakrishnan, and T. Wood, "NetVM: High Performance and Flexible
Networking Using Virtualization on Commodity Platforms," IEEE Transactions on
Network and Service Management, Vol. 12, No. 1, March 2015, pp. 34–47.
https://www.usenix.org/system/files/conference/nsdi14/nsdi14-paper-hwang.pdf.
[18] Juniper Technical Document, “Understanding SDN Provisioning and Cloud CPE Selfcare
Application for MX Series Routers,” http://www.juniper.net/techpubs/en_US/junos-
space-apps/ccpe1.1/topics/ccpe-selfcare-sdn-temp-book.pdf. Last accessed in August
2016.
[19] MEF 50, “Carrier Ethernet Service Lifecycle Process Model,” December 2014.
[20] National Institute of Standards and Technologies (NIST) Special Publication 500-291,
NIST Cloud Computing Roadmap, July 2013.
[21] https://developers.google.com/storage/docs/durable-reduced-availability.
[22] Wang, G., and T. S. Eugene Ng, “The Impact of Virtualization on Network Performance
of Amazon EC2 Data Center,” Proceedings of IEEE INFOCOM 2010, Piscataway, N.J.,
2010.
[23] Martins, J., et al., "Enabling Fast, Dynamic Network Processing with ClickOS,"
Proceedings of the ACM SIGCOMM Workshop on Hot Topics in Software Defined Networking
(HotSDN), 2013.
[24] RFC 4176, "Framework for Layer 3 Virtual Private Networks (L3VPN) Operations and
Management," 2005.
About the Authors
Dr. Qiang Duan is an associate professor of information sciences and technolo-
gy at the Pennsylvania State University, Abington College. His general research
interests are in data communications and computer networking. Currently, his
active research areas include the next generation Internet architecture, network
virtualization and NFV, network-as-a-service, software-defined networking,
and cloud computing. He has published more than eighty journal articles and
conference papers and authored six book chapters. Dr. Duan serves as an
associate editor or area editor on the editorial boards of multiple international
research journals and has served on the technical program committees of numer-
ous international conferences, including GLOBECOM, ICC, ICCCN, AINA,
and WCNC. Dr. Duan received his Ph.D. degree in electrical engineering from
the University of Mississippi. He holds a B.S. degree in electrical and computer
engineering and an M.S. degree in telecommunications and electronic systems.
Mehmet Toy received his Ph.D. degree in electrical and computer engi-
neering from Stevens Institute of Technology, Hoboken, NJ. He is currently a
distinguished member of technical staff in Verizon Communications and in-
volved in SDN, NFV, and cloud architectures and standards. Prior to his cur-
rent position, Dr. Toy held technical and management positions in well-known
companies and startups including Comcast, Intel, Verizon Wireless, Fujitsu
Network Communications, AT&T Bell Labs, and Lucent Technologies. He
also served as a tenure-track faculty member and adjunct professor at various
universities, including Stevens Institute of Technology, New Jersey Institute of
Technology, Worcester Polytechnic Institute, and the University of North Caro-
lina at Charlotte.
Dr. Toy contributed to research and development of cloud, SDN and
virtualization-based commercial networks and services, carrier Ethernet, IP
Index
Abstraction
    packet forwarding, 79–83
    resource, 5, 29–30, 100–1, 207–10
    SDN virtualization, 100–1, 175
    types of, 17
    two-dimensional, 217–20
Access control enforcing (ACE), 215–16
Access control list (ACL), 260
Access E-line, 278–79
Advanced FlowVisor, 179
Advanced message queuing protocol (AMQP), 66–67
Amazon EC2, 266
Application-based network operation (ABNO), 187
Application-control plane interface (A-CPI), 5, 6, 7, 34
Application-layer traffic optimization (ALTO), 55–56, 111, 154
Application-specific integrated circuit (ASIC), 265
Application plane, 5, 6
Asynchronous message, 46
Authentication and authorization (AA), 215–16, 294
Base station (BS), 196
Beacon open source controller, 59–60
Border gateway protocol—link state (BGP-LS), 55–56
Broadband Forum (BBF), 202
Carrier Ethernet services, 271–79, 289
cCPE Selfcare, 301
Central Office Rearchitected as Data Center (CORD), 160–63
ClickOS, 146–47, 267
Cloud infrastructure service (CIS), 156–62
Cloud services
    applications, 18–22
    architectures, 239–42
    CaaS, 263–64
    characteristics of, 236, 242–50
    conclusions on, 301–2
    devices, 18–22
    IaaS, 252–59
    NaaS, 251–52
    network functionalities, 299–301
    network unification and, 156–62, 297–301
    NFV components, 269–71
    life cycle services, 285–97
    PaaS, 261
    protocol stacks, 242
    SaaS, 262–63
    service delivery, 236–37
    SECaaS, 259–60
    standards, 239
    technology emergence, 2, 18–22, 119, 235–38
    virtualization and, 264–69
Commercial off-the-shelf (COTS) server, 13, 141–43