Virtualized Software-Defined
Networks and Services

For a complete listing of titles in the
Artech House Communications and Network Engineering Series,
turn to the back of this book.

Virtualized Software-Defined
Networks and Services
Qiang Duan
Mehmet Toy

Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the U.S. Library of Congress.

British Library Cataloguing in Publication Data


A catalogue record for this book is available from the British Library.

Cover design by John Gomes

ISBN 13: 978-1-63081-130-3

© 2017 ARTECH HOUSE


685 Canton Street
Norwood, MA 02062

All rights reserved. Printed and bound in the United States of America. No part of this book
may be reproduced or utilized in any form or by any means, electronic or mechanical, including
photocopying, recording, or by any information storage and retrieval system, without permission
in writing from the publisher.
  All terms mentioned in this book that are known to be trademarks or service marks have been
appropriately capitalized. Artech House cannot attest to the accuracy of this information. Use of
a term in this book should not be regarded as affecting the validity of any trademark or service
mark.

10 9 8 7 6 5 4 3 2 1

Contents

Preface xi

1 Introduction and Overview 1

1.1 Introduction 1

1.2 Software-Defined Networking 4

1.3 Virtualization in Networking 8

1.4 Integrating SDN and NFV in Future Networks 14

1.5 Virtualized Network Services 18


References 22

2 Software-Defined Networking 25

2.1 Introduction 25

2.2 SDN Concept and Principles 28

2.3 Features and Advantages 30

2.4 SDN Architecture 32


2.4.1 General Architecture 33
2.4.2 ONF Architecture 35


2.4.3 ITU-T Architecture 37


2.4.4 IRTF Architecture 38
2.4.5 Cooperating Layered Architecture for SDN 40

2.5 SDN Data Plane and Southbound Interface 42


2.5.1 Key Components in an SDN Switch 42
2.5.2 OpenFlow Switch Structure 44
2.5.3 OpenFlow Pipeline Processing 47

2.6 SDN Control and Applications 51


2.6.1 SDN Controller Architecture 51
2.6.2 SDN Controller Functions 54
2.6.3 Enhancing SDN Control Performance 59
2.6.4 Multidomain SDN Control 64
2.6.5 SDN Control Applications 68
2.6.6 RESTful Northbound API 70

2.7 Software-Defined Internet Architecture for Network Service Provisioning 72

2.7.1 Challenges to Current SDN Architecture for Service Provisioning 72
2.7.2 Software-Defined Internet Architecture 74
2.7.3 End-to-End Service Provisioning in Software-Defined Internet Architecture 76

2.8 Protocol Independent Layer in SDN for Future Network Service Provisioning 78

2.8.1 Limitation of the Current OpenFlow-Based SDN Data Plane 78
2.8.2 Protocol-Independent Layer and Abstract Model for Packet Forwarding 79
2.8.3 Protocol Oblivious Packet Forwarding 83
2.8.4 Programming Protocol-Independent Packet Processing 85
2.8.5 An Ecosystem for SDN Data Plane Programming 87
2.8.6 ONF Forwarding Abstractions Working Group 89

2.9 Conclusion 90
References 91

3 Virtualization in Networking 95

3.1 Introduction 95


3.2 Virtualization in Computing 97

3.3 Network Virtualization 101


3.3.1 Network Virtualization Architecture 103
3.3.2 Functional Roles in Network Virtualization 104
3.3.3 Virtual Network Lifecycle Management 107
3.3.4 Benefits of Network Virtualization for Service Provisioning 108

3.4 Technologies for Creating Virtual Networks 108


3.4.1 Resource Description and Discovery for Network Virtualization 109
3.4.2 Virtual Network Embedding 111
3.4.3 Virtual Network Security and Survivability 115

3.5 Network Function Virtualization 118


3.5.1 NFV Architectural Framework 121
3.5.2 Principle for Virtualizing Network Functions 124
3.5.3 Network Services in NFV 126

3.6 Key Components of NFV Architecture 128


3.6.1 NFV Infrastructure 128
3.6.2 Virtual Network Functions 131
3.6.3 NFV Management and Orchestration 136

3.7 NFV Implementation and Performance 141


3.7.1 Challenges to High-Performance NFV 141
3.7.2 Data Plane I/O Virtualization 143
3.7.3 NFV Implementation Example—NetVM 145
3.7.4 NFV Implementation Example—ClickOS 146
3.7.5 Open NFV Platform 147
3.7.6 NFV Implementation Portability and Reliability 149

3.8 Virtualization-Based Network Service Provisioning 150


3.8.1 Service-Oriented Architecture 150
3.8.2 Service-Oriented Network Virtualization 151
3.8.3 Network-as-a-Service (NaaS) in NFV 152
3.8.4 Software-Defined Control for NaaS in NFV 154
3.8.5 Virtualization-Based Unification of Network and Cloud Services 156

3.9 Conclusion 163


References 164


4 Integrating SDN and NFV in Future Networks 169

4.1 Introduction 169

4.2 Virtualization in Software-Defined Network 172


4.2.1 SDN Virtualization 172
4.2.2 Hypervisor-Based Virtualization for SDN 176
4.2.3 Container-Based Virtualization for SDN 179
4.2.4 Virtualization of Multidomain SDN 182
4.2.5 Orchestration-Based Virtualization for Multidomain SDN 185

4.3 Software-Defined Networking in NFV Infrastructure 188


4.3.1 Using SDN in the NFV Infrastructure 188
4.3.2 SDN-Based Network Control for NFV Service Function Chaining 192
4.3.3 SDN for Supporting NFV in Radio Access Network 196
4.3.4 SDN for Supporting NFV in Mobile Packet Core 198
4.3.5 SDN for Supporting NFV in Wireline Access Network 199

4.4 Combining SDN and NFV for Service Provisioning 203


4.4.1 Software-Defined Network Control in the NFV Architecture 203
4.4.2 Resource Abstraction for SDN Control in the NFV Architecture 207
4.4.3 Network-as-a-Service for Supporting SDN Control in NFV 210
4.4.4 Routing Function Virtualization over an OpenFlow Infrastructure 212
4.4.5 Extended SDN Architecture for Supporting VNF Functionalities 214

4.5 Integration of SDN and NFV in Unified Network Architecture 217

4.5.1 Two-Dimensional Abstraction Model for Integrating SDN and NFV 217
4.5.2 Software-Defined Network Virtualization (SDNV) Architectural Framework 220
4.5.3 SDNV-Based Service Delivery Platform 224
4.5.4 Challenges to SDN-NFV Integration and Opportunities for Future Research 227

4.6 Conclusion 230


References 231

5 Virtualized Network Services 235

5.1 Introduction 235

5.2 Cloud Standards 239

5.3 Cloud Services Architectures 239


5.3.1 Protocol Stacks and Interfaces 242

5.4 Cloud Services 242


5.4.1 NaaS 251
5.4.2 IaaS 252
5.4.3 SECaaS 259
5.4.4 PaaS 261
5.4.5 SaaS 262
5.4.6 CaaS 263

5.5 Virtualization and Cloud 264

5.6 Virtualized Cloud Services Architectures 267

5.7 Basic NFV Components of Cloud Services Architecture 269

5.8 Virtualized Carrier Ethernet Services 271


5.8.1 Components of Virtualized Carrier Ethernet Services 272
5.8.2 Service Chaining for EPL 276
5.8.3 Access E-Line and Its Service Chaining 278

5.9 Virtualized IP VPN Services 279

5.10 Life Cycle Services Operations (LSO) for Cloud Services 285

5.11 NFV and SDN for Unified Network and Cloud Service Provisioning 297

5.12 Conclusion 301


References 302

About the Authors 305

Index 307

Preface
Two important recent innovations in networking technologies are software-
defined networking (SDN) and virtualization in networks. The latter includes
the network virtualization (NV) vision and the network function virtualization
(NFV) architecture. SDN and NFV offer promising approaches to overcoming
inflexibilities in current network architecture, greatly enhancing service provi-
sioning and improving resource utilization.
SDN and NV/NFV are independent networking paradigms. Realization
of network virtualization does not require SDN and vice versa. On the other
hand, they share many common objectives and complement each other. SDN
and NFV share key enabling technologies with cloud computing, such as re-
source virtualization and automation in service provisioning. They also facili-
tate unification of network and cloud applications as observed in cloud services.
The key idea of SDN lies in decoupling network control and manage-
ment functionalities from data forwarding operations to enable a centralized
control platform for supporting network programmability. The components of
SDN architecture include a data plane consisting of network resources for data
forwarding, a control plane comprising SDN controller(s) providing central-
ized control of network resources, and SDN applications that program net-
work operations through a controller. Consolidation of control functions to a
centralized controller in SDN may greatly simplify network operations while
allowing applications to program network behaviors for supporting diverse ser-
vice requirements. Therefore, SDN promises simplified and enhanced network
control, flexible and efficient network management, and improved network
service performance.

Network virtualization introduces a vision of applying virtualization in
networking to enable an abstraction of network infrastructure upon which vir-
tual networks with alternative architectures and protocols may be constructed
for meeting diverse service requirements. The NFV architecture leverages stan-
dard IT virtualization technologies to transfer network functions from hard-
ware appliances to software applications. Essentially, NFV embraces the NV
vision and provides specific mechanisms to decouple service functions from
network infrastructures for realizing virtualization in networks. The NFV ar-
chitecture comprises virtualized infrastructures, virtual network functions, and
management and orchestration functions. Benefits introduced by NFV include
simplified service development, more flexible service delivery, and reduced net-
work capital and operational costs.
With significant advancement in SDN technology development, research-
ers have also realized that the current SDN approach has some issues that may
limit its ability to fully support future network services. For example, the SDN
data plane needs to perform fully general flow matching for meeting evolving
service requirements, which may significantly increase the complexity and cost
of SDN devices. A root reason for such limitation is the tight coupling between
network architecture and infrastructure on both data and control planes. Sepa-
ration between data and control planes alone in the current SDN architecture is
not sufficient to overcome this obstacle. Applying virtualization in SDN, which
enables virtual networks with alternative architectures to share common infra-
structure, may further enhance SDN capability of flexible service provisioning.
On the other hand, many technical challenges must be addressed for re-
alizing virtualization in networks. Management and orchestration have been
identified as key components in the NFV architecture. Much more sophisticat-
ed control and management mechanisms for both virtual and physical resources
are required by the highly dynamic networking environment enabled by NFV,
in which programmatic network control is indispensable. Therefore, applica-
tion of the SDN principle, which is decoupling control intelligence from the
controlled resources to enable a centralized programmable control platform,
may greatly facilitate realization of NFV.
In this book, we attempt to describe software-defined and virtualization-
based networking technologies and how they may be combined in a unified
network architecture for facilitating network and cloud service provisioning.
We discuss principles, features, architectures, key technologies, and application
examples of SDN. We also provide an overview of the network virtualization
vision, present the NFV architecture and its implementation technologies, and
discuss virtualization-based service provisioning in future networks. The state
of the art of research on combining SDN and NFV and possible topics for fu-
ture research in this direction are reviewed. Finally, we describe the novel cloud
services architectures defined by Open Cloud Connect (OCC)/Metro Ethernet
Forum (MEF) and discuss applications of SDN and NFV technologies for
building cloud services.
We would like to thank the coauthors, who worked diligently to make this
book a valuable reference; the editors of Artech House, who provided valuable
comments to improve the book; and Molly Klemarczyk and Stephen Solomon
of Artech House, who helped greatly throughout the contract and publication
process.

1
Introduction and Overview
Q. Duan and M. Toy

1.1  Introduction
The past few decades have witnessed rapid advancement of computer network-
ing from local area networks to the global Internet that provides a data com-
munication platform needed by virtually any contemporary computing ap-
plication. The recent advances and wide adoption of cloud computing make
computer networks an indispensable element of today’s information infrastruc-
ture. The stunning success of computer networking, on the other hand, has also
brought in challenges to networking technologies from various aspects that call
for more innovations in this field.
Computer networking technologies are facing the requirements of differ-
ent stakeholders, including network operators, service providers, application
developers, and end users. These requirements are often correlated but distinct
and might even conflict with each other. For example, end users may want to
have the highest possible level of quality of service (QoS) provided by networks,
while network operators prefer to minimize resource usage and energy con-
sumption in network infrastructures for providing QoS. Applications deployed
upon the networking platform have highly diverse requirements on the services
that networks must provide, thus demanding sophisticated technologies for di-
versifying networks and enhancing service flexibility. On the other hand, the
desire for reducing network capital and operational costs calls for technologies
that can simplify network control/management functions and data forwarding
operations.


Traditional network designs lack sufficient ability to meet the wide spec-
trum of requirements due to the ossification of the current IP-based network
architecture. Such ossification mainly comes from two aspects: (a) integration
between control and forwarding functions that causes complex and inflexible
network control and management; and (b) tight coupling between service func-
tions and network infrastructures that limits network ability for agile service
evolution. The research community has been exploring various approaches
and has made exciting progress for addressing the aforementioned challenges
to computer networking. Two important recent innovations in networking
technologies are software-defined networking (SDN) and virtualization in
networks. The latter includes the network virtualization (NV) vision and the
network function virtualization (NFV) architecture. SDN and NV/NFV offer
promising approaches to overcoming the ossification in traditional network ar-
chitecture, thus greatly enhancing network capabilities of service provisioning.
SDN and NV/NFV are independent networking paradigms. Realization
of virtualization in networks does not require SDN and vice versa. On the
other hand, they share many common objectives and follow similar technical
ideas and principles and thus may greatly complement each other. Evolutions
of both SDN and network virtualization have shown strong synergy between
them. These two emerging networking paradigms are expected to be integrated
into a unified software-defined and virtualization-based network architecture,
which allows network designs to fully exploit the advantages of both SDN and
NV/NFV. In addition, SDN and NV/NFV share some key enabling technolo-
gies with cloud computing, such as resource virtualization and automation in
service provisioning. Therefore, integration of SDN and NV/NFV may greatly
facilitate unification of network and cloud services, which enables convergence
between networking and cloud computing that may significantly enhance per-
formance and flexibility of cloud service provisioning.
A holistic view of software-defined and virtualization-based networking
in future networks for supporting unified network-cloud services would be very
beneficial to both researchers and practitioners in the field of computer net-
working. The main objective of this book is to reflect a vision of virtualized
software-defined networking and its impacts on service provisioning in future
networks.
The key idea of SDN lies in decoupling network control and management
functionalities from data forwarding operations to enable a centralized control
platform for supporting network programmability. Key components of the
SDN architecture include a data plane consisting of network resources for data
forwarding, a control plane providing logically centralized control of network
resources, and SDN applications for programming network behaviors through
a controller. Consolidation of control functions to a centralized controller in
SDN may greatly simplify network operations while allowing applications to
program network behaviors for supporting diverse service requirements. There-
fore, SDN promises simplified and enhanced network control, flexible and ef-
ficient network management, and improved network service performance.
NV introduces a vision of applying virtualization in networking to en-
able an abstraction of network infrastructure upon which virtual networks with
alternative architectures and protocols may be constructed for meeting diverse
service requirements. The NFV architecture leverages standard IT virtualiza-
tion technologies to transfer network functions from hardware appliances to
software applications. Essentially, NFV embraces the NV vision and provides
specific mechanisms to decouple service functions from network infrastruc-
tures for realizing virtualization in networks. The NFV architecture comprises
virtualized infrastructures, virtual network functions, and management/or-
chestration functions. Benefits introduced by NFV include simplified service
deployment, more flexible service delivery, and reduced network capital and
operational costs.
With significant advancement in SDN technology development, research-
ers have also realized that the current SDN approach has some issues that may
limit its ability to fully support future network services. For example, the SDN
data plane needs to perform fully general flow matching for meeting evolving
service requirements, which may significantly increase the complexity and cost
of SDN devices. A root reason for such limitation is the tight coupling between
network architecture and infrastructure on both data and control planes. Sepa-
ration between data and control planes alone in the current SDN architecture
is not sufficient to overcome this obstacle. Applying virtualization in SDN,
which enables virtual SDN networks to share common physical network infra-
structures, may further enhance SDN capability of flexible service provisioning.
On the other hand, many technical challenges must be addressed for real-
izing virtualization in networks. Management and orchestration has been iden-
tified as a key component in the NFV architecture. Much more sophisticated
control and management mechanisms for both virtual and physical resources
are required by the highly dynamic networking environment enabled by NFV,
in which programmatic network control is indispensable. Therefore, applica-
tion of the SDN principle—decoupling control intelligence from the con-
trolled resources to enable a centralized programmable control platform—may
greatly facilitate realization of NFV.
Encouraging research progress has been made toward combining SDN
and NFV in future networks. Both hypervisor- and container-based virtualiza-
tion technologies have been employed to enable SDN virtualization that allows
multitenant virtual networks to share a common SDN infrastructure. SDN
technologies have been applied in NFV infrastructure to provide network con-
nectivity for supporting service orchestration. SDN also offers a general control
and management approach that is applicable to virtual network functions as
well as physical infrastructure resources. Therefore, SDN-based control mecha-
nisms may be applied to the entire NFV architecture, including both the infra-
structure layer and the service tenant layer. An architectural framework (Figure
1.1) has been proposed for integrating SDN and NFV by combining separation
between data and control planes with decoupling between service functions and
network infrastructures.
In this book, we describe the software-defined and virtualization-based
networking technologies, discuss how they may be combined in a unified net-
work architecture for facilitating network service provisioning, and present the
latest development in network/cloud service architectures.
This chapter provides a high-level overview for each of the chapters in-
cluded in the rest of the book; namely, chapters for software-defined network-
ing, virtualization in networking, integrating SDN and NFV, and virtualized
network/cloud services.

1.2  Software-Defined Networking


Originally, the term software-defined networking was used to refer to a network
architecture where the packet forwarding operations performed at networking
devices are controlled by a separate controller. With the development of SDN
technologies, the networking community has broadened the notion of SDN

Figure 1.1  Architectural framework for software-defined network virtualization [1].


and tends to refer to anything that involves software-based network control as
being SDN. The Open Networking Foundation (ONF) has provided an ex-
plicit and well-received definition of SDN as follows: “Software-Defined Net-
working (SDN) is an emerging network architecture where network control is
decoupled from forwarding and is directly programmable” [2].
A key objective of SDN is to provide open interfaces that enable the
development of software that can define the data forwarding and processing
operations performed by a set of network resources on traffic flows. In order
to achieve the objective, SDN separates the control functions from data for-
warding operations and relocates network control to a dedicated element called
the SDN controller. The SDN controller provides a platform through which SDN
application software may define the behaviors of network resources for data
forwarding and processing.
SDN is based on the following architectural principles [3]:

• Decoupling network control and management from data forwarding and processing;
• Logically centralized network control and management;
• Programmability through open interfaces for supporting diverse network services.

A fundamental idea for the SDN paradigm lies in the notion of resource
abstraction, which is an important capability to support network programma-
bility. The centralized SDN controller provides a global abstract view of the
underlying network resources, upon which SDN applications may program
network behaviors.
SDN architecture, as shown in Figure 1.2, consists of three planes: the
data plane, control plane, and application plane; and two interfaces: the in-
terface between data and control planes (D-CPI) and the interface between
application and control planes (A-CPI).
The data plane comprises network resources for performing data forward-
ing and processing operations. Network elements on the data plane are simply
packet forwarding and/or processing engines without complex control logic
to make autonomous decisions. The D-CPI, which is also referred to as the
southbound interface (SBI), allows data plane elements to expose the capabili-
ties and states of their resources to the control plane and enables the controller
to instruct network elements in their operations. The control plane presents a
global view of data plane infrastructure to SDN applications and provides a cen-
tralized control platform through which applications may define the operations
to be performed by data plane elements. The A-CPI, which is also called the

Figure 1.2  A general architectural framework for SDN.

northbound interface (NBI), provides a standard API that allows applications
to program the underlying network infrastructure.
The ONF, the International Telecommunication Union–Telecommuni-
cation Standardization Sector (ITU-T), and the SDN Research Group (SD-
NRG) in the Internet Research Task Force (IRTF) have all proposed SDN architectures.
SDN adopts separation of control and data planes as a core part of net-
work architectural design. With this design, network control can be done
separately on the control plane while data plane elements can be externally
controlled without onboard intelligence. The decoupling of data and control
planes offers not only a simpler programmable environment, but also greater
freedom for software to define network behaviors.
Separation of data and control planes in SDN allows data plane devices
to simply perform packet forwarding operations by following rules installed by
a controller. Removing the complex control functions from individual devices
and consolidating the control logic on a dedicated controller greatly simplify
networking devices, thus reducing their costs. SDN enables one logically cen-
tralized controller that transforms the complex distributed control problems in
IP-based networks to simpler control decisions made on a central point. The
control plane also makes the entire network programmatically configurable.
Therefore, SDN may significantly simplify and enhance network control and
management. In addition, the global network view provided by the centralized
control plane makes the challenging problems of network performance opti-
mization manageable in SDN with properly designed centralized algorithms.
SDN provides better support for new service development and deploy-
ment. Network programmability supported by the SDN control platform al-
lows various upper layer applications and services to be developed and deployed
without being limited by the technologies employed in the underlying network
infrastructure. The SDN controller supports network operating systems that
can easily reconfigure data plane devices, thus greatly facilitating deployment of
new services as well as evolution of existing services.
Networking devices on the SDN data plane, often called SDN switches,
typically comprise a packet processing engine, an abstraction layer, and a con-
troller interface. OpenFlow is currently the de facto D-CPI standard for con-
trolling SDN switches. An OpenFlow switch consists of a set of ingress ports
and output ports, an OpenFlow pipeline that contains one or multiple flow ta-
bles, and a secure channel for communicating with one or multiple OpenFlow
controllers. OpenFlow specification defines both the communication protocol
between controller and switch and the procedure for managing flow tables in
switches.
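To make the controller-to-switch interaction concrete, the following is a minimal sketch of proactively installing one flow entry with the Ryu OpenFlow controller framework (assumed to be installed); the match fields, output port, and priority are illustrative assumptions rather than an example taken from this book.

```python
# Minimal Ryu application that proactively installs one OpenFlow 1.3 flow
# entry on every switch that connects. Match fields, output port, and
# priority are illustrative assumptions.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class StaticForwarder(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match IPv4 traffic to a hypothetical host and send it out port 2.
        match = parser.OFPMatch(eth_type=0x0800, ipv4_dst='10.0.0.2')
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]

        # The entry is stored in the switch's flow table; subsequent packets
        # are forwarded by the switch without consulting the controller again.
        flow_mod = parser.OFPFlowMod(datapath=datapath, priority=10,
                                     match=match, instructions=inst)
        datapath.send_msg(flow_mod)
```
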
The SDN control plane is responsible for enabling an abstraction of the
data plane resources and providing an API for programming network behaviors.
Key functions of this plane include handling two types of objects: objects re-
lated to network control, including policies imposed by SDN applications and
rules controlling data plane operations; and objects related to network monitor-
ing that are in the form of local and global network states. An SDN controller
comprises a module for realizing D-CPI, a module for data plane abstraction,
and a module for implementing A-CPI. The D-CPI module implements a
southbound protocol for communicating with data plane devices. The abstrac-
tion module constructs a global abstract view of the data plane network that
is used by SDN applications to program network operations. Main functions
performed by this module include user/network device management, network
topology management, network traffic monitoring, and flow management. The
A-CPI module provides northbound APIs for SDN applications to access the
controller.
Control performance is a key factor that impacts performance of the en-
tire SDN network. Various technologies have been developed for enhancing
SDN control performance, including parallel and batch processing designs of
controller software, distributed controller deployment, and hierarchical con-
trol structure. Multidomain SDN control is a challenging problem that has
attracted research attention. Representative efforts made in this area include
the SDNi protocol for message exchanging across SDN domains, distributed
multidomain SDN control structure, and hierarchical orchestration for inter-
domain SDN control.
Control applications can be regarded as the “brain” in the SDN architec-
ture that makes decisions on network policies and programs network behaviors
to fulfill the policies. SDN applications interact with controllers through A-
CPI to obtain network state information and request certain control actions
be taken by the controllers. SDN applications can be classified as either proac-
tive or reactive. Proactive applications make decisions regarding traffic steering
for some predetermined flows and then request the controller to preinstall the
action rules at switches for handling those flows. Proactive applications can be
implemented using either language APIs or RESTful APIs. Reactive applica-
tions typically work with reactive flow management functions at the controller
to handle network events. Reactive applications are often written in the pro-
gramming language of the controller and leverage the language API to interact
with the controller.
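To illustrate the proactive model through a RESTful northbound API, the sketch below pushes one forwarding rule to a controller. The controller address, the /flows endpoint, and the JSON schema are hypothetical placeholders; real controllers such as ONOS or OpenDaylight each define their own northbound API.

```python
# Sketch of a proactive SDN application pushing one forwarding rule through a
# RESTful northbound API. URL, endpoint, and JSON schema are hypothetical.
import requests

CONTROLLER = "http://127.0.0.1:8181"          # assumed controller address

rule = {
    "switch": "of:0000000000000001",          # target data plane device
    "priority": 100,
    "match": {"eth_type": "0x0800", "ipv4_dst": "10.0.0.2/32"},
    "actions": [{"type": "OUTPUT", "port": 2}],
}

# Install the rule before any matching traffic arrives (the proactive model).
response = requests.post(f"{CONTROLLER}/flows", json=rule, timeout=5)
response.raise_for_status()
print("Flow rule accepted with status", response.status_code)
```
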
Researchers have noticed that there are some issues associated with the
current SDN approach that may prevent network designers from fully exploiting
the advantages promised by this new networking paradigm. A root reason for such
limitation lies in the unnecessary tight coupling between architecture and in-
frastructure in the current SDN design, which constrains evolution of network
services to what the current network infrastructure can support. Various efforts
have been made by the research community for overcoming this barrier in order
to release the full power of SDN. Two representative proposals toward this di-
rection are the software-defined internet architecture (SDIA) and the protocol
independent layer (PI-layer). SDIA separates the network edge and core for
both packet forwarding and network control, thus decoupling network archi-
tecture from infrastructure on both data plane and control plane. The PI-layer
leverages protocol oblivious forwarding (POF) and programming protocol-in-
dependent packet processing (P4) language to enable a fully programmable data
plane in SDN that may support various network protocols for meeting diverse
service requirements.

1.3  Virtualization in Networking


Ossification of the conventional IP-based network architecture limits network
capabilities to effectively support future network services. Researchers have real-
ized that a key to overcoming the deficiency of the traditional network architecture
and breaking the impasse to network innovations lies in decoupling the net-
work functions for service provisioning from the network infrastructures for data
forwarding and processing [4]. Virtualization offers a promising approach to
achieving the decoupling required for future network architecture. Applying
virtualization in network architecture leads to a new vision of network design
that is broadly referred to as network virtualization (NV).
Virtualization in general describes the separation of a service or appli-
cation from the underlying physical delivery of that service or application.
Virtualization abstracts physical resources as virtual instances and enables in-
dependent multitenant access to the resources. Some representative applica-
tion scenarios of virtualization in computing include virtualization of servers,
networking devices, and services. At the server level, virtualization abstracts
platform hardware, such as processor, memory, and I/O interfaces, into virtual
resources for hosting various application instances. A hypervisor is the software
that provides abstraction of server hardware resources and determines how such
resources should be virtualized, allocated, and presented to virtual machines
(VMs). A VM is a software emulation of a physical machine. Each VM can run
its own operating system (Guest OS) and various applications upon the Guest
OS. Each VM is isolated from other VMs and behaves as if it is running on an
individual physical machine.
Virtualization technologies have already been employed in networking,
for example in the form of virtual local area networks (VLANs) and virtual
private networks (VPNs). Virtualization has also been explored as a means to
construct experimental platforms for conducting networking research. Howev-
er, none of the traditional approaches of applying virtualization in networking
is capable of fully addressing the fundamental deficiency of IP-based network
architecture. This is mainly because these research efforts are based on a
conventional vision of network architecture, which expects a general-purpose
network architecture to provide a suitable platform for supporting all existing
and emerging services.
The pluralist view of network architecture proposed in [5] advocates co-
existence of alternative network architectures upon shared network infrastruc-
ture. The pluralist architectural view allows network designers to fully exploit
the power of virtualization to address some of the fundamental challenges to
networking. In network designs with the pluralist view, virtualization is a key ar-
chitectural attribute that supports multitenant networks with alternative archi-
tectures and protocols customized for meeting diverse service requirements.
The key objective of NV is to enable a networking environment that
allows multiple heterogeneous virtual networks (VNs) to share a common net-
work infrastructure. Each VN may have its own network architecture, including
packet format, addressing scheme, forwarding mechanism, and routing pro-
tocols designed for provisioning various network services, both existing and
emerging, for meeting diverse application requirements.
NV decouples service provisioning functionalities from network infra-
structures, therefore splitting the traditional role of Internet service provid-
ers (ISPs) into two independent entities: infrastructure providers (InPs) that
manage the physical infrastructures, and service providers (SPs) that establish
and operate VNs to offer end-to-end services. The role of SPs may be further
divided into virtual network providers (VNPs) and virtual network operators
(VNOs). A VNP constructs VNs for meeting the requests from VNOs, while
VNOs are responsible for operating the VNs and offering network services to
end users (Figure 1.3).
NV brings in some advantages that may significantly enhance network
capabilities for supporting future services. NV allows multiple VNs for meet-
ing diverse service requirements to cohabit on a shared physical infrastructure

Figure 1.3  Main functional roles in an NV environment: InP, VNP, and VNO.

substrate, which may enhance flexibility and manageability of future network
services. The separated roles of InP and SP enabled by NV provide SPs with
an abstract view of the physical infrastructure and allow InPs to obtain service-
oblivious freedom to implement their infrastructures, which enables indepen-
dent evolutions of both roles. NV also simplifies network and service manage-
ment by “outsourcing” the responsibility of physical network devices to InPs
and allowing SPs to run multiple simple VNs in parallel for meeting different
service requirements.
NV may also greatly facilitate end-to-end service provisioning across
autonomous network domains. Interdomain QoS provisioning has been a
long-standing challenge to the current Internet due to its requirement of col-
laboration across network domains with heterogeneous QoS mechanisms and
policies. NV enables a single SP to obtain holistic control over the entire end-
to-end service delivery path across network infrastructures that may belong to
different domains, which offers a promising approach to achieving end-to-end
QoS guarantee.
Key enabling technologies for NV center on creating virtual networks
upon a shared network substrate for meeting diverse service requirements while
achieving various optimization objectives. Such technologies may be roughly
categorized into two areas: resource description and discovery (RDD) and vir-
tual network embedding (VNE). The main goal of RDD is to provide a VNP
with sufficient information about the infrastructure resources offered by InPs
so that the VNP may choose the appropriate InP (or a set of InPs) for hosting
each VN. VNE is responsible for allocating the physical resources in network
infrastructure to host VNs. VNE has two main aspects: virtual node mapping
(VNM), which allocates the physical node for hosting each virtual node, and
virtual link mapping (VLM), which assigns each virtual link onto a physical
path.
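As a minimal sketch of the VNM step under simplified assumptions (CPU is the only node resource and the policy is greedy best-fit), the Python function below places virtual nodes onto substrate nodes; the VLM step, which would route each virtual link over a physical path, is omitted, and the data structures are illustrative only.

```python
# Greedy virtual node mapping (VNM) sketch: place virtual nodes, largest CPU
# demand first, onto the substrate node with the most residual CPU.

def greedy_node_mapping(substrate_cpu, virtual_cpu):
    """substrate_cpu: {node: free CPU}; virtual_cpu: {virtual node: demand}."""
    remaining = dict(substrate_cpu)
    mapping = {}
    for vnode, demand in sorted(virtual_cpu.items(), key=lambda kv: -kv[1]):
        host = max(remaining, key=remaining.get)   # node with most residual CPU
        if remaining[host] < demand:
            return None                            # embedding request rejected
        mapping[vnode] = host
        remaining[host] -= demand
    return mapping

# Example: embed a three-node virtual network onto a four-node substrate.
print(greedy_node_mapping({"A": 8, "B": 6, "C": 4, "D": 10},
                          {"v1": 5, "v2": 4, "v3": 3}))
```
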

With virtualization, outsourcing computation, storage, and network re-
sources to a third party gives rise to inherent vulnerabilities in network security.
A particular security issue in network virtualization comes from the coresidency
of multitenant VNs on the shared infrastructure substrate. Such coresidency ex-
ists on the node level (multiple virtual nodes hosted by the same physical node)
and network level (multitenant VNs embedded into the same infrastructure
substrate). Possible security threats exist on both levels of coresidency.
In order to cope with security threats that exploit the multitenant feature
of network virtualization at either node level or network level, it is very impor-
tant for the virtualization layer to guarantee isolation between the coresident
tenants. In addition, the virtualization layer should also make implementation
details of physical network infrastructure transparent to VNs, which prevents
attackers from exploiting the vulnerabilities in the underlying hardware for
launching attacks to coresident VNs.
Survivability of virtual networks is also an important issue. Survivable
VNE needs to cope with various network faults such as single
node/link failures, multiple node/link failures, and regional failures.
The latest innovation in networking that employs virtualization technol-
ogies for enhancing network operations and service provisioning is network
function virtualization (NFV). The recent success of cloud computing as an ap-
proach to scalable, flexible, and cost-effective service provisioning has inspired
telecommunication service providers (TSPs) to explore virtualization-based
cloud technologies for improving network services. Toward this direction, some
major TSPs formed an industry specification group in ETSI (NFV-ISG) and
proposed the notion of NFV. NFV aims to address some fundamental chal-
lenges to networking by leveraging standard IT virtualization technologies to
consolidate many network equipment types onto industry standard servers,
switches, and storage [6].
The NFV paradigm fully embraces the NV vision—applying virtual-
ization in networks for decoupling service functions and physical infrastruc-
tures—and proposes a specific architecture and related mechanisms for realiz-
ing this vision. On the other hand, NFV and NV focus on different scopes and
granularity levels of virtualization in networking. NV focuses on network-level
virtualization to enable multiple virtual networks with alternative network ar-
chitectures for meeting diverse service requirements. NFV focuses on virtualiza-
tion of individual network functions and provides end-to-end services through
orchestration of virtual network functions.
The ETSI NFV architectural framework comprises NFV infrastructure
(NFVI), virtual network functions (VNFs), and NFV management and
orchestration (MANO), as shown in Figure 1.4. The NFVI consists of all in-
frastructure resources, including both hardware and software components, that

Figure 1.4  ETSI NFV architectural framework [7].

build up the environment in which VNFs may be deployed, managed, and
executed. VNF is the software implementation of a network function that is
capable of running over the NFVI. The MANO component is responsible for
management and orchestration of physical and virtual resources in NFVI and
lifecycle management of VNFs. The MANO consists of virtualized infrastruc-
ture manager (VIM), VNF manager (VNFM), and NFV orchestrator (NFVO)
[7].
NFVI is partitioned into three functional domains: the compute domain,
the hypervisor domain, and the infrastructure network domain. The role of the
compute domain is to provide computational and storage resources that are
needed for hosting individual VNFs. The hypervisor domain mediates resourc-
es of the compute domain to the VMs running VNF software. The infrastruc-
ture network domain inside NFVI is responsible for providing the required
connectivity to support communications among VNFs for end-to-end service
provisioning. An important requirement for the infrastructure network domain
in NFVI is to make VNF software and network infrastructure independent and
transparent to each other. Therefore, the network domain has a virtualization
layer between the virtual networks and physical network resources.
NFV-ISG has developed a software architecture of VNF that identifies
the relevant software architectural patterns that can be leveraged for decoupling
software from hardware in NFV. As defined in the VNF software architecture,
a VNF is an abstract entity that allows network function software to be defined
and designed, and a VNF instance (VNFI) is the runtime instantiation of the
VNF. A VNF may comprise one or multiple VNF components (VNFCs), and
each VNFC can be instantiated as a VNFC instance (VNFCI). When a VNF
is composed of a group of VNFCs, the internal interfaces between them do not
need to be exposed and standardized.
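The relationship between a VNF, its VNFCs, and their runtime instances can be pictured with a few Python data structures; the sketch below is illustrative only and does not follow the ETSI information model.

```python
# Illustrative data structures (not the ETSI information model) for the
# VNF/VNFC relationship: a VNF template composed of VNFCs, and a runtime
# VNF instance (VNFI) holding one VNFC instance (VNFCI) per component.
from dataclasses import dataclass, field
from typing import List
import uuid


@dataclass
class VNFC:
    name: str          # e.g., "fw-dataplane"
    image: str         # VM or container image implementing the component


@dataclass
class VNFInstance:
    vnf_name: str
    vnfci_ids: List[str]


@dataclass
class VNF:
    name: str
    components: List[VNFC] = field(default_factory=list)

    def instantiate(self) -> VNFInstance:
        # Each VNFC is instantiated as one VNFC instance (VNFCI).
        vnfcis = [f"{c.name}-{uuid.uuid4().hex[:8]}" for c in self.components]
        return VNFInstance(vnf_name=self.name, vnfci_ids=vnfcis)


firewall = VNF("virtual-firewall",
               [VNFC("fw-dataplane", "fw-img"), VNFC("fw-mgmt", "mgmt-img")])
print(firewall.instantiate())
```
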
The MANO component comprises three key functional blocks: virtu-
alized infrastructure manager (VIM), VNF manager (VNFM), and NFV or-
chestrator (NFVO), which are in charge of the management and orchestration
functions, respectively, for infrastructure, virtual functions, and network ser-
vices. The VIM in an infrastructure domain is responsible for managing NFVI
compute, storage, and network resources in the domain. A VIM may be special-
ized in managing a certain type of infrastructure resources (e.g., compute-only,
storage-only, or networking-only) or may provide federated management across
multiple types of resources. VNFM is responsible for the lifecycle management
of VNF instances. Each VNF instance must have an associated VNFM, while
a VNFM may be assigned to manage either a single VNF instance or a group
of VNF instances. The NFVO functional block is responsible for resource or-
chestration that coordinates NFVI resources across multiple VIMs and service
orchestration that manages lifecycles of network services.
A key requirement for realizing the benefits of NFV is to implement VNFs
as software instances running on commercial off-the-shelf (COTS) servers.
How to design COTS server-based NFV implementations to support realistic
network loads and achieve performance comparable to hardware-based network
devices has become an important research topic. Data plane I/O operations
form a bottleneck for implementing high-performance NFV on COTS serv-
ers. Various I/O acceleration techniques have been developed, among which
single root I/O virtualization (SR-IOV) and Intel Data Plane Development Kit
(DPDK) are representative solutions for I/O virtualization improvement. Ex-
amples of recent research efforts for further improving the performance of NFV
implementations include the NetVM platform [8] and the ClickOS system [9].
The service-oriented architecture (SOA), which forms the basis of the
successful cloud service model, offers an approach to facilitating realization of
virtualization in networks. Applying SOA in networking enables the network-
as-a-service (NaaS) paradigm that has been adopted in the NFV architecture
for service provisioning. Representative NaaS-based service models include
network function virtualization infrastructure-as-a-service (NFVIaaS), virtual
network function-as-a-service (VNFaaS), and virtual network platform-as-a-
service (VNPaaS), which have been identified by NFV-ISG as main NFV use
cases [10]. The centralized and programmable control enabled by SDN pro-
vides an effective platform for supporting NaaS-based virtualization. Virtual-
ization, SOA, and SDN together offer a promising approach to unifying the
provisioning of network services and cloud services. Two representative research
projects that reflect the state of the art on network-cloud service unification are
EU FP7 project Unifying Cloud and Carrier Network (UNIFY) [11] and the
Open Networking Lab–AT&T joint project Central Office Re-Architected as
Data Center (CORD) [12].

1.4  Integrating SDN and NFV in Future Networks


Although SDN and NFV were initially developed as two independent net-
working paradigms, evolution of both technologies has shown strong synergy
between them. Applying the network virtualization notion and the NFV archi-
tecture into SDN may further enhance SDN capability of flexible service provi-
sioning. On the other hand, SDN can be employed as an enabling technology
for network virtualization and facilitate realization of the NFV architecture.
Integrating SDN and NFV in future networking may trigger innovative net-
work designs that fully exploit the advantages of both paradigms. Therefore,
the relationship between SDN and NFV and how these two paradigms may
be combined in future networks have become an important research topic that
attracts extensive attention from both industry and academia.
Virtualization of SDN allows network designers to leverage the combined
benefits of software-defined networking and network virtualization. In an SDN
virtualization environment, the physical network infrastructure is controlled
by a logically centralized controller that is separated from data plane devices.
Each virtual network has its own tenant controller that controls all nodes and
links in the virtual network. In this sense, each tenant network is a virtual SDN
(vSDN) with a centralized controller that can be programmed by applications
via a northbound interface. The virtualization layer translates the southbound
control messages from vSDN tenant controllers to appropriate controlling mes-
sages for the corresponding network devices and vice versa. Therefore, the vir-
tualization layer in SDN can be thought of as having a network hypervisor sitting
upon an SDN control platform for the network infrastructure [13].
In hypervisor-based SDN virtualization, the SDN control platform pro-
vides an abstraction of data plane resources through which a network hyper-
visor can control network infrastructure resources to create multitenant vir-
tual networks. FlowVisor is a seminal network hypervisor developed based on
OpenFlow for virtualizing SDN networks. FlowVisor partitions the network
infrastructure into slices, each of which is assigned to a virtual network and
controlled by a tenant OpenFlow controller. FlowVisor acts as a transparent
proxy between OpenFlow-enabled switches and the tenant OpenFlow control-
lers of vSDNs [14].
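The slicing idea can be illustrated with a small conceptual sketch: the hypervisor examines header fields carried in a control message and dispatches it to the tenant controller whose flowspace matches. The VLAN-based policy and field names below are assumptions for illustration, not FlowVisor's actual interface.

```python
# Conceptual sketch of flowspace-based slicing in the spirit of FlowVisor:
# control messages are dispatched to the tenant controller that owns the
# matching slice. A VLAN-based policy is assumed purely for illustration.

FLOWSPACE = {
    # VLAN ID -> address of the tenant (vSDN) controller owning that slice
    100: "tcp://tenant-a-controller:6653",
    200: "tcp://tenant-b-controller:6653",
}

def dispatch_packet_in(header_fields):
    """Return the tenant controller that should receive this packet-in event."""
    vlan = header_fields.get("vlan_vid")
    return FLOWSPACE.get(vlan)      # None => no slice owns this traffic

print(dispatch_packet_in({"vlan_vid": 100, "ipv4_dst": "10.0.0.5"}))
```
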

Network hypervisors have the advantage of allowing vSDNs to have in-
dependent tenant controllers but introduce overheads that may limit scalability
of SDN virtualization. Container-based virtualization offers an approach to
addressing this challenge. FlowN is a representative container-based approach
for SDN virtualization. Rather than running a separate controller for each ten-
ant network, FlowN provides a common NOX-based SDN controller that is
shared by multiple tenant networks. Each tenant network has its own control
application (not a complete instance of controller) that runs on top of its vir-
tual topology and address space. A control application consists of handlers that
respond to network events by sending commands to the underlying data plane
switches [15].
Both hypervisor-based and container-based methods for SDN virtualiza-
tion focus on virtualization of a single network domain with an assumption of
a single controller for the entire network infrastructure. Multidomain network-
ing brings in new challenges to SDN virtualization because the assumption of
a global controller is no longer valid in such networking scenarios. In order to
face such challenges, the virtualization layer of multidomain SDN must provide
a higher-level abstraction that masks the fact that the underlying infrastructure
actually consists of multiple domains with heterogeneous implementations. A
possible approach to multidomain SDN virtualization is to include a set of
domain handlers at the lower half of the virtualization layer, one for each do-
main controller. An alternative approach is to realize a multidomain network
hypervisor upon an orchestration layer, which acts as a parent controller that
coordinates a set of domain controllers to handle interdomain networking.
NFV creates a very dynamic networking environment that calls for much
more sophisticated control and management mechanisms than what can be
offered by traditional networking technologies. Many of the networking chal-
lenges that NFV is facing match the design goals of SDN. Applying SDN in
the network domain of NFVI enables complex network topologies to be readily
built to support automated service orchestration. An SDN controller can work
with the NFV orchestrator to control network resources as well as monitor net-
work states. Therefore, many desirable networking features expected by NFV
can be built based on SDN capabilities.
In principle, the NFV architecture specifies a general network infrastruc-
ture domain in NFVI without requiring any particular networking technolo-
gies to be used. A key objective of network virtualization is to allow the service
functions, including the connectivity services provided by the NFVI network
domain, to be decoupled from the technologies used for realizing them. There-
fore, NFV does not require usage of SDN in its network infrastructure. On the
other hand, the logically centralized control platform enabled by SDN offers
an effective approach that may greatly facilitate fulfillment of the virtualization
principle. For example, a centralized SDN controller enables a standard abstract
interface between SDN controller and NFV orchestrator. Such an abstract in-
terface supports encapsulation of detailed network implementations in the form of
an infrastructure service provided to the NFV orchestrator, which allows the
orchestrator to utilize the underlying network resources through the infrastruc-
ture-as-a-service (IaaS) paradigm.
The NFV architecture enables flexible network function deployment and
orchestration for supporting dynamic service function chaining (SFC). The
programmable control platform of SDN allows simple and effective control for
flexible traffic steering in the NFVI network domain to support automated SFC in
the NFV environment. Virtualization has also been employed in various parts
of wireless mobile networks for enhancing network flexibility and improving
service performance. For example, in radio access network (RAN), baseband
data processing functions can be separated from radio signal processing, re-
alized as VNF instances, and consolidated onto servers in data centers. The
servers hosting the baseband processing VNFs may be located at various sites
in the network for improving network scalability. Similarly, various functional
entities in mobile packet core (MPC), such as service gateway (SGW), packet
data network gateway (PGW), and mobility management entity (MME), may
be virtualized as VNFs and hosted on data centers. SDN-enabled networking
has been employed in NFVI to provide flexible network connections required
among these VNF instances.
The concept of resource in SDN is generic. Both virtual functions and
physical infrastructures in NFV can be regarded as resources and thus can be
controlled by an SDN controller. Therefore, in the NFV architecture SDN-
based technologies may be applied in the infrastructure domain to control phys-
ical resources, or in the tenant domain to control virtual resources, or in both
domains to provide centralized control of both virtual and physical resources.
Multiple options of mapping SDN key elements into the NFV architec-
tural framework are discussed in a technical report on usage of SDN in NFV
published by ETSI NFV-ISG [16]. Key elements of SDN considered in this re-
port are SDN resources, SDN controllers, and SDN control applications. The
generality and flexibility of the SDN principle allow a wide variety of possible
locations of using SDN resources, controllers, and applications in the NFV
architecture.
Application of SDN on both the infrastructure layer and service tenant
layer in the NFV architecture calls for a more general abstraction model that is
applicable to both physical and virtual resources. The currently available SDN
SB protocols (e.g., OpenFlow) may not meet the requirement. Higher-level re-
source abstraction models and protocols for supporting SDN control in the en-
tire NFV architecture are still an open area for research. Recent progress in this

www.EngineeringBooksPdf.com
Introduction and Overview 17

area has indicated that the forwarding and control element separation (ForCES)
specification offers a promising basis for developing an abstraction model and
the associated control protocol for supporting SDN in an NFV environment.
NFV in principle is applicable to any network function on both data
plane and control plane. Many network control and management functions
(e.g., routing, path computation, traffic engineering, and load balancing) are
good candidates for being realized as VNFs. On the other hand, such functions
often benefit from the availability of a centralized network controller, and
therefore are typically realized as applications running on top of an SDN con-
troller. Combination of NFV and SDN technologies enables virtual network
control functions to be implemented over an SDN control platform, which
allows network designs to exploit the advantages of both NFV and SDN.
In order to further improve performance of SDN-supported NFV, re-
search proposals have been made to explore the possibility of exploiting data
plane capabilities in SDN to implement some VNF functionalities. The main
idea of the proposed approach to using SDN for supporting VNFs is to keep
simple data processing in the SDN data plane as much as possible and only forward
data traffic to VNF servers for more complex processing when it is necessary.
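A minimal sketch of this selective-offload idea, written as two OpenFlow-style rules represented as Python dictionaries, is shown below; port numbers, priorities, and match fields are hypothetical.

```python
# Sketch of selective offloading: a low-priority rule keeps ordinary traffic on
# the fast path inside the switch, while a higher-priority rule steers only the
# traffic needing deeper inspection to the port where a VNF server is attached.

FAST_PATH_PORT = 1        # normal next hop handled entirely in the data plane
VNF_PORT = 7              # port connecting to the VNF server

rules = [
    {"priority": 10, "match": {}, "actions": [{"output": FAST_PATH_PORT}]},
    {"priority": 100, "match": {"ipv4_dst": "10.1.0.0/16"},
     "actions": [{"output": VNF_PORT}]},
]

# Higher-priority rules are matched first, so only the selected subnet detours
# through the VNF; everything else stays in the switch's fast path.
for rule in sorted(rules, key=lambda r: -r["priority"]):
    print(rule)
```
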
The key principles of SDN and NFV both lie in abstraction but focus
on different aspects of the network architecture. Two types of abstraction have
been deployed in general networking through the designs of layers and planes
in network architecture, respectively. Both layer and plane enable abstraction of
network resources, but in different dimensions. Both SDN and NFV principles
are based on abstraction but with emphasis on the plane and layer dimensions,
respectively. These two abstraction dimensions are orthogonal (i.e., network
architecture may have abstraction on one dimension but not on the other).
Therefore, SDN and NFV in principle are independent—NFV may be real-
ized with or without SDN and vice versa. On the other hand, the challenging
requirements for service provisioning in future networks demand abstraction
on both dimensions in order to fully exploit their advantages.
A software-defined network virtualization (SDNV) architectural frame-
work has been proposed to provide a holistic vision about the relationship
between key principles of SDN and NFV and how they may be combined
in a unified network architecture [1]. The SDNV framework integrates both
the layer- and plane-dimension abstractions and provides useful guidelines for
synthesizing the research efforts from various aspects toward enabling unified
software-defined virtualization in networking.
Research on integrating SDN and NFV is still at an early stage and many
technical issues must be fully addressed before the vision of software-defined
network virtualization may be realized. Therefore, this field offers a wide
spectrum of open topics for future research and opportunities for technology
innovation.

1.5  Virtualized Network Services


In recent years, types of user devices and applications for cloud services have
grown rapidly. The users prefer services that are on-demand, scalable, surviv-
able, and secure with usage-based billing. The concepts of cloud computing,
cloud networking, and cloud services are expected to help service providers meet
these demands, quickly create the services, and utilize their resources effectively.
Cloud computing technologies are emerging as infrastructure services for
provisioning computing and storage resources on demand in a simple and uni-
form way. Cloud-based virtualization allows for easy upgrade and migration of enterprise applications, including whole IT infrastructure segments, which brings significant cost savings compared to traditional infrastructure development and management that require a considerable amount of manual work.
In 2010, NIST launched its cloud computing program to support the federal government’s effort to incorporate cloud computing as a replacement for traditional information system and application models where appropriate. The
characteristics of cloud computing are:

• On-demand provisioning of computing capabilities;


• Broad network access for various devices such as mobile phones includ-
ing smart phones, laptops, tablets, and PDAs;
• Pooled computing resources to serve multiple consumers using a multi-
tenant model, with different physical and virtual resources dynamically
assigned and reassigned according to consumer demand;
• Rapid and elastic provisioning of capabilities;
• Automatic control and optimization of resource usage by leveraging a metering capability at some level of abstraction appropriate to the type of service, such as storage, processing, bandwidth, and active user accounts.

The network acts as the foundation for cloud computing. It needs to
become a virtual service supporting mobile users, innovative applications, and
protocols; and providing network visibility at a granular level. In order to reduce
the cost of network operations and network elements, SDN and virtualization
techniques have been widely explored by network operators. Combining cloud,
SDN, and virtualization techniques is necessary to achieve substantial optimi-
zation in networks and services.
Cloud service architectures are defined by the Open Cloud Connect (OCC) and MEF organizations [17].1 The commercial cloud services include not only applica-
tions but also end-to-end networking paths between the users and applications.

1. OCC and MEF have merged recently.

The key actors of a cloud service (Figure 1.5) are cloud service user; cloud
service provider (cSP), which is responsible for providing an end-to-end cloud
service to cloud service user; cloud carrier(s), who provide connectivity between
the user and cloud application; and cloud provider(s), who provide cloud ap-
plications. The cSP may or may not own cloud carrier (cC) and cloud provider
(cP) facilities, but provides a single bill to the cloud service user. A cSP can be
private or public. There could be cases that private and public cSPs collectively
provide a cloud service to a user.
A user interfaces to cSP facilities via a standard interface called the cloud
service user interface (cSUI), which is a demarcation point between the cSP
and the cloud consumer. From this interface, the consumer establishes a con-
nection, cloud service connection (cSC), with a cloud provider (cP) entity pro-
viding the application where the cP entity can be a virtual machine (VM) with
cloud service interface (cSI) or a physical resource such as storage with a cSUI.
In addition, a cSC can be between two cloud provider entities or between two
cloud consumers.
When a cSC is between a cloud user and a cP physical or virtual resource,
the cSC is established between two cloud service connection termination points
(cSCTPs) residing at the user interface (i.e., cSUI) and the cP interface (i.e.,
cSUI or cSI).
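To make these relationships concrete, the following Python sketch models a cSC as an association between two cSCTPs, each anchored at a cSUI or cSI. The class and field names are illustrative assumptions and are not taken from the OCC or MEF specifications.

    from dataclasses import dataclass

    CSUI = "cSUI"   # cloud service user interface
    CSI = "cSI"     # cloud service interface, e.g., on a VM

    @dataclass
    class CSCTP:
        """Cloud service connection termination point anchored at an interface."""
        owner: str        # e.g., "cloud user" or "cloud provider VM"
        interface: str    # CSUI or CSI

    @dataclass
    class CSC:
        """Cloud service connection between two termination points."""
        a_end: CSCTP
        z_end: CSCTP

    # Example: a connection from a user's cSUI to a VM exposing a cSI.
    csc = CSC(a_end=CSCTP(owner="cloud user", interface=CSUI),
              z_end=CSCTP(owner="cloud provider VM", interface=CSI))
    print(csc)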

Figure 1.5  Cloud service actors.

The cSP may own the cP and cloud carrier (cC) facilities. When the cP
and the cC are two independent entities belonging to two different operators,
the standards interface between them is called cloud carrier cloud provider in-
terface (cCcPI). In this case, a cSC for cloud services can be terminated at either
cCcPI or cSI.
It is also possible for two or more cSPs to be involved in providing a
cloud service to a cloud consumer where two cSPs interface to each other via a
standard interface called cloud service provider cloud service provider interface
(cSPcSPI). Since one of the cSPs needs to interface to the end user, coordinate
resources, and provide a bill, the cSP that does not interface to the end user is
called cloud service operator (cSO).
Software as a service (SaaS), platform as a service (PaaS), infrastructure as
a service (IaaS), network as a service (NaaS), security as a service (SECaaS), and
communication as a service (CaaS) are among the well-known cloud services.
SaaS is an application running on a cloud infrastructure where the con-
sumer does not manage or control the underlying cloud infrastructure, includ-
ing network, servers, operating systems, storage, or even individual applica-
tion capabilities, with the possible exception of limited user-specific application
configuration settings. SaaS examples include Gmail from Google, Microsoft
“live” offerings, and salesforce.com.
PaaS is the deployment onto the cloud infrastructure of consumer-created or acquired applications developed using programming languages and tools supported by the provider. The consumer does not manage or control the underlying
cloud infrastructure including network, servers, operating systems, or storage,
but has control over the deployed applications and possibly application hosting
environment configurations.
IaaS provisions processing, storage, networks, and other fundamental computing resources on which the consumer is able to deploy and run arbitrary software. The software can include operating systems and applications. The
consumer does not manage or control the underlying cloud infrastructure, but
has control over operating systems, storage, deployed applications, and possibly
limited control of selected networking components (e.g., host firewalls).
Network as a service (NaaS) delivers assured, dynamic connectivity ser-
vices via virtual, or physical and virtual service endpoints orchestrated over
multiple operators’ networks. Such services will enable users, applications, and
systems to create, modify, suspend/resume, and terminate connectivity services
through standardized application programming interfaces (APIs). These ser-
vices are assured from both performance and security perspectives.
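A NaaS interface of this kind might be exercised along the lines of the following Python sketch, in which a connectivity service is created, suspended, and terminated through RESTful calls. The base URL, resource paths, and payload fields are hypothetical and are shown only to illustrate the lifecycle operations named above.

    import requests

    BASE = "https://naas.example.net/api/v1"       # hypothetical NaaS endpoint
    HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

    def create_connectivity(src_ep: str, dst_ep: str, bandwidth_mbps: int) -> str:
        """Request a new connectivity service between two service endpoints."""
        body = {"source": src_ep, "destination": dst_ep,
                "bandwidth_mbps": bandwidth_mbps, "assurance": "gold"}
        resp = requests.post(f"{BASE}/connectivity-services", json=body, headers=HEADERS)
        resp.raise_for_status()
        return resp.json()["service_id"]

    def suspend_connectivity(service_id: str) -> None:
        requests.post(f"{BASE}/connectivity-services/{service_id}/suspend",
                      headers=HEADERS).raise_for_status()

    def terminate_connectivity(service_id: str) -> None:
        requests.delete(f"{BASE}/connectivity-services/{service_id}",
                        headers=HEADERS).raise_for_status()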
Security services such as connectivity security, application security, or
content security are referred to as security as a service (SECaaS). With SECaaS,
a consumer does not manage or control the underlying security transport ne-
gotiation, encryption, detection algorithms, threat intelligence or network
inspection, but has control over the selection of security solutions and scope
with respect to their data and network.
Real-time services such as Virtual PBX, voice and video conferencing sys-
tems, collaboration systems, and call centers are considered communication as
a service (CaaS).
For ETSI NFV, VNF represents an instance of a functional block respon-
sible for a specific treatment of received packets. End point represents an exter-
nal interface of one VNF instance that is always associated with a VNF. Each
VNF can have an associated physical/virtual interface, MAC, IP address, or a
higher-layer application protocol such as HTTP.
Two major enablers of NFV are industry-standard servers and technolo-
gies developed for cloud computing. Recent developments of cloud computing,
such as various hypervisors, OpenStack, and Open vSwitch, also make NFV
achievable in reality. For example, the cloud management and orchestration
schemes enable the automatic instantiation and migration of VMs running spe-
cific network services. Network infrastructure will become more fluid when
deploying VNFs.
ETSI NFV divides the network into infrastructure and virtual network layers, and defines a virtual interface, Vn-Nf, between them. These layers are the same as those
for NaaS. NFV also identifies a VM interface as (Vn-Nf )/VM or Vn-Nf-VM,
an interface to hardware as Vi-Ha, SWA (software architecture)-1 interface be-
tween various network functions within the same or different network service,
the SWA-5 interface between the infrastructure (NFVI) and the VNF, and a container
interface between host functional block (HFB) and virtualization functional
block (VFB).
ETSI NFV architecture does not define necessary interfaces between a
network and its user, between service providers, or between a cloud provider
and cloud carrier. Furthermore, it does not have connection and connection
termination concepts. However, it is possible to divide attributes of these cloud
services components as virtual and infrastructure categories. cSUI and cloud
service connection termination point (cSCTP) have virtual and infrastructure
components, while cSC and cSI have only virtual components.
This approach is applied to carrier Ethernet and IP services. Service chain-
ing for Ethernet private line (EPL), Access E-Line, and IP VPN are given as
examples.
In a network supporting cloud services, there can be virtualized, nonvir-
tualized, and legacy components. All of the network, applications, and service
components need to be managed together.
Lifecycle service orchestration (LSO) functionalities for cloud services may be summarized as follows:

• Market analysis and product strategy;
• Service and resource design;
• Launch products;
• Marketing fulfillment response;
• Sale proposal and feasibility;
• Capture customer order;
• Service configuration and activation;
• End-to-end service testing;
• Service problem management;
• Service quality management;
• Billing and revenue management;
• Terminate customer relationship.

These functionalities for cloud services are being worked on in the industry.

References
[1] Duan, Q., N. Ansari, and M. Toy, “Software-Defined Network Virtualization: An Ar-
chitectural Framework for Integrating SDN and NFV for Service Provisioning in Future
Networks,” IEEE Network Magazine, Vol. 30, No. 5, Sept. 2016, pp. 10–16.
[2] Open Networking Foundation, “Software-Defined Networking: The New Norm of Net-
works,” white paper, April 2012.
[3] Open Network Foundation, “ONF TR-521: SDN Architecture,” Issue 1.1, 2016.
[4] Feamster, N., L. Gao, and J. Rexford, “How to Lease the Internet in Your Spare Time,”
ACM SIGCOM Computer Communication Review, Vol. 37, No. 1, Jan. 2007, pp. 61–64.
[5] Turner, J., and D. E. Taylor, “Diversifying the Internet,” Proceedings of the 2005 IEEE
Global Telecommunications Conference (GLOBECOM’05), Dec. 2005.
[6] ETSI NFV-ISG, “Network Functions Virtualization: An Introduction, Benefits, Enablers,
Challenges, and Call for Action,” Proceedings of SDN and OpenFlow World Congress, Oct.
2012.
[7] ETSI NFV-ISG, “NFV 002: Network Function Virtualization (NFV)—Architectural
Framework v1.2.1,” Dec. 2014.
[8] Hwang, J., K. K. Ramakrishnan, and T. Wood, “NetVM: High Performance and Flexible
Networking Using Virtualization and Commodity Platforms,” IEEE Transactions on Net-
work and Service Management, Vol. 12, No. 1, March 2015, pp. 34–47.
[9] Martins, J., M. Ahmed, C. Raiciu, V. Olteanu, M. Honda, et al., “ClickOS and the Art
of Network Function Virtualization,” Proceedings of the 11th USENIX Symposium on Net-
worked Systems Design and Implementations, April 2014, pp. 459–472.
[10] ETSI NFV-ISG, “Network Function Virtualization: Use Cases v1.1.1,” Oct. 2013.
[11] Csaszar, A., W. John, M. Kind, C. Meirosu, G. Pongracz, et al., “Unifying Cloud and
Carrier Network,” Proceedings of the 2013 IEEE/ACM International Conference on Utility
and Cloud Computing (UCC2013), Dec. 2013.
[12] Peterson, L., “CORD: Central Office Re-Architected as a Datacenter,” IEEE Software
Defined Networks white paper, November 2015.
[13] Blenk, A., A. Basta, M. Reisslein, and W. Kellerer, “Survey on Network Virtualization
Hypervisors for Software-defined Networking,” IEEE Communications Surveys and
Tutorials, Vol. 18, No. 1, 2016, pp. 655–685.
[14] Sherwood, R., G. Gibb, K.-K. Yap, G. Appenzeller, M. Casado, et al., “FlowVisor: A
Network Virtualization Layer,” OpenFlow Switch Consortium Technical Report, 2009.
[15] Drutskoy, D., E. Keller, and J. Rexford, “Scalable Network Virtualization in Software-
Defined Networks,” IEEE Internet Computing Magazine, Vol. 17, No. 2, Feb. 2013, pp.
20–27.
[16] ETSI NFV ISG, “NFV EVE-005: Report on SDN Usage in NFV Architectural Framework
v1.1.1,” Dec. 2015.
[17] Toy, M., “OCC 1.0 Reference Architecture,” Dec. 2014.

2
Software-Defined Networking
Q. Duan and M. Toy

2.1  Introduction
The rapid emergence of a wide spectrum of network-based computing ap-
plications with highly diverse requirements brings in many new challenges to
networking technologies. Such applications (e.g., mobile applications, social
networks, cloud services, and big data analysis) expect more bandwidth, dy-
namic network control, efficient network management, and more agile service
evolution. The growing popularity of multimedia applications and increasing
demand for big data analytics require higher speed network connections. The
huge number of mobile devices and rapid development of mobile and social
network applications demand ubiquitous data communications. In addition,
cloud computing has added higher expectation on the performance, flexibil-
ity, and agility of computer networking. Reliable and efficient access to the
computing and storage resources in the cloud via a network is critical to high-
performance cloud service provisioning.
Various networking technologies have been developed and a huge amount
of investment has been made to enhance network capability and expand net-
work infrastructure for meeting the aforementioned demands. However, intro-
duction of new networking technologies and expansion of network infrastruc-
ture have significantly increased the complexity and cost of network operation
and service provisioning. Network devices have become increasingly complex,
and thus more expensive, mainly due to the sophisticated control intelligence
added inside each device. Network control and management are also becoming
complex and inflexible. Network operators often need to configure individual
devices using low-level and often vendor-specific commands in order to express
desired network-level policies. Traditional network protocols tend to be defined
in isolation for solving specific problems without the benefit of fundamental
coherence, which also results in additional network complexity.
The overwhelming complexity of control and management of current
networks significantly limits their capabilities of meeting future service require-
ments. Due to the complexity, today’s networks are relatively static as network
operators seek to minimize the risk of service disruption. The static nature of
network operation cannot support the dynamic service provisioning expected
by user applications in future networks. The complexity of today’s networks
also makes it very difficult to apply and enforce consistent network-wide poli-
cies across heterogeneous network devices.
The current network architecture also lacks sufficient agility for support-
ing future services and applications. Automatic configuration and dynamic re-
sponse mechanisms are very limited in current networks, which makes it vir-
tually impossible for dynamic and elastic network service provisioning. Data
forwarding operation and network control/management functionality in the
IP-based network architecture are bundled together and implemented inside
individual routers distributed across the network. Due to the ossification caused
by the integration of data forwarding and network control in current networks,
a new routing protocol can take years to be fully developed, evaluated, and
deployed. Clean-slate approaches with any significant change to the current
Internet architecture are not feasible in practice.
The networking research community has realized the aforementioned
challenges and made efforts from various aspects in order to overcome limita-
tions of the current network architecture. Actually, enhancing network control
and management abilities in order to meet diverse service requirements has
been an important goal of network research since the Internet started becoming
a platform for supporting a wide variety of network-based computing applica-
tions [1].
An early research effort toward this direction is active networking pro-
posed in the mid-1990s. Active networking attempted to expose the resources (e.g.,
switching, processing, and storage capacities) on individual network nodes via a
programming interface and construct customized network functions for subsets
of packets forwarded through the nodes. Active networking followed the idea
of programmable networking that attempts to control network behaviors in a
real-time manner using software. The active networking community pursued
two programming models: the capsule model where the codes to be executed
at network nodes were carried in data packets and the programmable router/
switch model where the control programs were established on network nodes
through out-of-band mechanisms. Although active networking articulated a vi-
sion of programmable networks, the technology did not receive wide adoption
at the time it was developed.
Subsequent research efforts toward overcoming network ossification made
in the early to mid-2000s mainly focused on enhancing routing and configura-
tion management. Research progress made during this time period embraced
the ideas of separated control and forwarding functions and logically central-
ized control entity. For example, forwarding and control element separation
(ForCES) framework released by IETF splits the functions of networking de-
vices into forwarding and controlling elements with an open interface between
them. Routing control platform (RCP) replaces BGP interdomain routing
with centralized routing control to reduce the complexity of fully distributed
path computation. Path computation element (PCE) architecture separates
computation of label switched paths from actual packet forwarding operation
in MPLS networks. Unfortunately, when the aforementioned proposals were
made, dominant equipment vendors did not have strong incentive to adopt
standards for control-forwarding separation. Therefore, although industry pro-
totypes and standardization efforts made some progress, such technologies have
not been widely adopted in network device designs yet.
Despite the limited adoption in industry, researchers kept broadening the
vision of control and data plane separation when exploring clean-slate design
of Internet architecture. The 4D approach introduced in [2] advocates a new
network architecture that comprises four planes—a data plane for processing
packets based on configurable rules, a discovery plane for collecting network
topology and state information, a dissemination plane for installing packet-pro-
cessing rules, and a decision plane consisting of logically centralized controllers
for making decisions regarding network operations. This approach has been
applied to new applications beyond routing control in some research projects.
In particular, the Ethane project created a logically centralized flow-level solu-
tion for access control in enterprise networks, where a separated controller in-
stalls rules generated based on high-level security policies into the flow tables
at switches.
In 2008 a research group at Stanford University published their research
on OpenFlow [3], which is widely regarded as the first instance of software-de-
fined networking. OpenFlow embraces the principle of separated data and con-
trol planes and logically centralized controller. An OpenFlow switch has one or
more tables of packet handling rules. Each rule matches a subset of traffic and
performs certain actions on the traffic that matches the rule. The rules are in-
stalled by a separated controller into all switches. OpenFlow attracted extensive
attention from the research community and received wide adoption in industry
soon after its publication, especially as compared with its intellectual predeces-
sors. OpenFlow stimulated a wave of innovations in networking technologies
to enable a new networking paradigm with separated data and control planes
and a centralized programmable control platform. This emerging networking
paradigm is referred to as software-defined networking (SDN).

2.2  SDN Concept and Principles


The term SDN was originally created based on the idea and work of OpenFlow
to refer to a network architecture where the packet forwarding states in net-
working devices are managed by a separated controller. With the development
of SDN technologies, the networking community has broadened the notion
of SDN and tends to refer to anything that involves software-based network
control as being SDN. The Open Networking Foundation (ONF) has provided
an explicit and well-received definition of SDN as follows: “Software-defined
networking (SDN) is an emerging network architecture where network control
is decoupled from forwarding and is directly programmable” [4].
A key objective of SDN is to provide open interfaces that enable the de-
velopment of software that can define the data forwarding and processing op-
erations performed by a set of network resources on traffic flows. In order to
achieve the objective, SDN separates the control functions and data forwarding
functions, and relocates network control to a dedicated element called an SDN
controller. The SDN controller provides an approach to controlling and man-
aging network resources through software that are typically called SDN applica-
tions. Therefore, key components of the SDN paradigm can be organized into
three groups: the data plane, control plane, and application plane.
The data plane comprises distributed network resources that perform
functions of data transport and processing. Network elements on the data plane
expose their capabilities and resource states to the control plane via a standard
interface. The behaviors of data plane resources are directly controlled through
this interface. The SDN controller manages distributed network resource states
and provides a global abstract view of the data plane to the application plane via
another standardized interface. The SDN applications specify their networking
requirements to the controller and define operations of the abstracted network
resources through this interface. The SDN controller translates applications’
requirements to low-level control instructions that may be performed by the
network elements on the data plane.
The SDN concept is based on the following architectural principles [5]:

• Decoupling network control and management from data forwarding
and processing: The purpose of this principle is to permit independent
development and deployment of network control/management func-
tionalities and data forwarding/processing capabilities. Decoupling
between control/management and data forwarding/processing makes
logically centralized control possible. Decoupling also allows for separate
optimization and lifecycle management for data plane technologies and
network control mechanisms. An important consequence of the decou-
pling principle is the separation of concerns introduced between defini-
tion of network policies, their implementations in network devices, and
the forwarding actions performed on traffic. The separation of concerns
plays a key role in enabling the desired network flexibility, breaking net-
working problems into tractable pieces, simplifying network control and
management, and facilitating network evolution and innovation.
• Logically centralized network control: The centralized control principle
allows network resources to be utilized more efficiently when viewed
from a global perspective. A centralized SDN controller can abstract
the distributed states of data plane resources to form a global network
view, upon which control applications can program the underlying net-
work infrastructure. A centralized controller may orchestrate resources
that span multiple network elements and thereby offer better abstraction
than if it could only abstract subsets of individual elements. It is worth
noting that the logically centralized SDN control plane might have a
distributed physical implementation in order to meet the challenges of
scalability, reliability, performance, and security in large scale networks.
• Programmability of network services: Network programmability per-
mits application software to program data plane operations through the
SDN controller. The centralized control platform in SDN enables a pro-
grammable API to the controller. Through this API, applications may
exchange information with an SDN controller in order to specify their
service requests and achieve agile control of service states provisioned by
network resources on the data plane. The programmable API to SDN
controller allows applications to express their desired network services
but leave the service realization and real-time resource optimization to
the SDN controller. The network programmability principle decouples
service provisioning from specific data plane operations, thus allowing
SDN applications to be developed independently with the underlying
network infrastructure.

A fundamental idea of the SDN paradigm lies in resource abstraction.
Abstraction is an essential method of research in computer science and informa-
tion technology that has already been employed in many computer architecture
and system designs. Resource abstraction is a key capability for supporting net-
work programmability. Information and data models are means to provide an
abstracted view of the underlying network resources to SDN applications, so
that application developers can simplify their program logic without the need
for detailed knowledge of the underlying network resources and technologies.
SDN is expected to provide abstractions from the following three aspects:
forwarding abstraction, distribution abstraction, and specification abstraction.
The forwarding abstraction should allow any forwarding behavior required by
the network controller (and applications) while hiding details of the underlying
data plane operations. An SDN controller acts as a driver to data plane switches
to support this abstraction. The distribution abstraction shields network control
and management functions from the distributed resource states, thus trans-
forming distributed control problems to logically centralized problems. SDN
controllers realize such an abstraction by collecting state information about
data plane devices to form a global network view. The specification abstraction
should allow a network application to express the desired network behaviors
without being responsible for implementing those behaviors by itself. Network
programmability provided by the SDN controller allows the abstract configura-
tions expressed by network applications to be mapped to physical configura-
tions of data plane devices, thus supporting the specification abstraction [6].

2.3  Features and Advantages


As discussed in Section 2.1, the notion of software-defined networking was
eventually formed after years of technology evolution toward enabling software-
based programmable networks. However, SDN has some defining features that
give it advantages over its predecessors, which allow SDN to bring in special
benefits for future networking and service provisioning.
Although software has been widely employed for controlling networks for
decades, the programmatic control platform in SDN enables software to define
network behaviors rather than merely controlling network operations. Defin-
ing network behaviors means constructing network topologies and specifying
the data forwarding and processing operations to be performed for achieving
certain objectives. Unlike traditional software-based network control, which is
often limited to configuring some aspects of existing functionalities, SDN al-
lows software applications to define new functions that the data plane elements
may perform. In addition, the software ecosystem in SDN, which has been
greatly expanded beyond the traditional network control software, supports a
wide variety of applications interacting with the control plane to perform distinct
functions and/or act on behalf of different tenants.
Separated data and control planes and logically centralized network con-
trol are two key features of SDN. Although similar features can be found in some
previous research proposals such as active networking, ForCES, and PCE, the
uniqueness of the SDN paradigm lies in its network programmability provided
through decoupling data and control planes. Specifically, SDN offers a simple
programmable control platform rather than making networking devices more
complex for supporting programmability, as in the case of active networking.
Moreover, SDN adopts separation of control and data planes as a core part of
network architectural design, which not only enables a simpler programmable
environment, but also provides greater freedom for defining network behaviors
to meet the highly diverse service requirements in future networks.
Unique features of the SDN paradigm bring in some advantages over
traditional networking technologies in various ways [7, 8]. SDN may signifi-
cantly simplify data plane network elements. The main complexity of IP rout-
ers comes from the control intelligence, such as routing protocols and path
computation functions, which must be implemented on each individual router.
Separation of data and control planes in SDN allows devices on the data plane
to simply perform packet forwarding operations by following rules installed by
a controller. Removing the complex control functions from individual devices
and consolidating the control logic on a dedicated controller simplify network-
ing devices, thus reducing their costs.
SDN may also greatly simplify network control. Traditional IP network
control relies on distributed routing protocols that require participation and
collaboration of numerous routers. The highly distributed control mechanism
in IP-based networks, although seeming to be a good way to guarantee net-
work resilience, results in very complex and relatively static control architec-
ture. SDN enables one single controller to monitor and control all networking
devices on the data plane, which transforms the complex distributed control
problems in IP networks into simpler control decisions made on a central point
with a global network view.
SDN also enhances network management. Due to the heterogeneity in
network devices and configuration interfaces, current network management
typically involves a certain level of manual processing. With current network
design, automatic and dynamic reconfiguration of a network remains a big
challenge. In SDN, a unified control plane oversees all kinds of networking
devices, including switches, routers, network address translators, firewalls, and
load balancers, among others, and is thus able to manage networking devices
via a single standard interface through software programming. Therefore, the
entire network can be programmatically configured and dynamically optimized
based on service requirements and network status.
SDN offers a global approach to optimizing network performance. One
of the key objectives of network operation is to maximize utilization of net-
work resources for meeting service performance requirements. Currently avail-
able approaches to network optimization are based on local information, which
may lead to suboptimal performance or conflicting operations. SDN offers
an opportunity to improve network performance globally. SDN allows for
centralized control with a global network view, thus making many challenging
performance optimization problems manageable with properly designed cen-
tralized algorithms. The logically centralized control platform of SDN enables
a higher degree of automation in network operations and service delivery, and
therefore may also improve network resource availability and utilization.
SDN provides better support for new service development and deploy-
ment. Network programmability provided by SDN allows various upper layer
applications and services to be developed and deployed without being con-
strained by any specific technology employed in the underlying network infra-
structure. The centralized SDN controller supports network operating systems
that can easily reconfigure data plane devices, thus greatly facilitating deploy-
ment of new services and evolution of current services. SDN allows network
customization for supporting network services with different requirements
through programming network operations (e.g., dynamic enforcement of a
set of policies for meeting various service requirements). SDN can also reduce
the response time of business requests to network providers, increase customer
satisfaction, and shorten investment payback time through automation of net-
work operations.
SDN encourages innovations by providing a programmable network plat-
form to implement, experiment, and deploy new network architectures, tech-
nologies, applications, and services. SDN facilitates realization of multitenant
virtual networks, each of which may implement customized network architec-
ture, addressing scheme, and routing protocol. The separation of data and con-
trol planes allows developments in data forwarding technologies and network
control mechanisms to follow their respective innovation paths.

2.4  SDN Architecture


In general, network architecture partitions a complex networking system into
modular parts and specifies the key functions of each part with the interface
among the parts. A good network architecture design is expected to be consis-
tent, useful, and open. Consistent architecture should not reveal any contradic-
tions when viewed from various perspectives. Useful architecture may facilitate
transforming concepts into realization. Open architecture should allow exten-
sion to the architecture in previously unforeseen directions [5].
SDN architecture specifies the main SDN components, key functions
of each component, and the interfaces for interactions between components.
SDN architecture forms the foundation of and provides guidelines for SDN
technical development in all aspects. Therefore, all major networking-related
standardization organizations, such as ONF, ITU-T, and IRTF, have developed
their specifications of SDN architecture. In this section, we will first give a
general architectural framework for SDN and then introduce the architectures
developed by ONF, ITU-T, and IRTF.

2.4.1  General Architecture


Following the principles of software-defined networking, SDN architecture
should decouple the network control functionalities from traffic forwarding
and processing operations and enable a logically centralized control platform
for supporting network programmability. Therefore, a general architectural
framework for SDN, as shown in Figure 2.1, consists of three planes—the data
plane, control plane, and application plane—and the two interfaces—the in-
terface between data and control planes and the interface between application
and control planes.
The data plane in the SDN architecture comprises network resources for
performing data forwarding and processing operations, which could be either
physical or virtual resources in network elements. A network element can be
considered as a container for resources or can be a resource by itself. The main
difference between traditional network elements and the elements in SDN
data plane lies in that the latter are simple packet forwarding and/or processing
engines without complex control logic to make autonomous decisions. SDN
data plane elements simply take actions by following the rules set up by the
controller(s) on the control plane.
The data-control plane interface (D-CPI) between the data and control
planes, which is also referred to as the southbound interface (SBI), is a key
to realizing the principle of decoupling between data forwarding and network
control. The protocols on D-CPI allow data plane elements to expose the ca-
pabilities and states of their resources to the control plane and enable con-
trollers to instruct network elements for their operations by configuring rules
in the elements. It is crucial for this interface to be standardized for ensuring

Figure 2.1  A general architectural framework for SDN.

configuration and communication compatibility and interoperability among
different data and control plane devices (i.e., D-CPI standard should allow con-
trollers to dynamically program heterogeneous network elements).
The control plane is a core component in the SDN architecture that
bridges the application plane and the data plane. Key functions of this plane
include handling two types of objects that are, respectively, related to network
control and monitoring. Objects for network control include the policies im-
posed by SDN applications and the rules controlling the operations performed
at data plane devices. Objects related to network monitoring are in the format
of local and global network states. Therefore, the general logical structure of
this plane comprises two counter-directional processing flows. In the downward
direction, the control plane translates the policies expressed by applications into
rules and then installs the rules on data plane devices. In the upward direction,
the control plane synthesizes the network states that it collects from data plane
devices to form a global network view and presents the network view to the ap-
plication plane for decision making.
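As a rough illustration, the following Python sketch captures this two-way structure in skeletal form: a downward method translates an application policy into per-switch rules, while an upward path collects device state reports and synthesizes a global view. The policy and state formats are invented for illustration; an actual controller would rely on its own information models.

    from typing import Dict, List

    class ControllerCore:
        """Skeleton of the two counter-directional control-plane flows."""

        def __init__(self) -> None:
            self.device_states: Dict[str, dict] = {}   # raw state reported per switch

        # Downward flow: translate an application-level policy into per-switch rules.
        def translate_policy(self, policy: dict) -> Dict[str, List[dict]]:
            rules: Dict[str, List[dict]] = {}
            for hop in policy["path"]:                  # e.g., {"switch": "s1", "out_port": 2}
                rules.setdefault(hop["switch"], []).append(
                    {"match": policy["match"], "action": f"output:{hop['out_port']}"})
            return rules

        # Upward flow: absorb per-device state and synthesize a global network view.
        def report_state(self, switch_id: str, state: dict) -> None:
            self.device_states[switch_id] = state

        def global_view(self) -> dict:
            links = [l for s in self.device_states.values() for l in s.get("links", [])]
            return {"switches": sorted(self.device_states), "links": links}

    core = ControllerCore()
    core.report_state("s1", {"links": [("s1", "s2")]})
    core.report_state("s2", {"links": [("s2", "s1")]})
    print(core.translate_policy({"match": {"ip_dst": "10.0.0.2"},
                                 "path": [{"switch": "s1", "out_port": 2},
                                          {"switch": "s2", "out_port": 1}]}))
    print(core.global_view())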
The application-control plane interface (A-CPI), which is also called the
northbound interface (NBI), is between the control plane and the application
plane. This interface is a key to achieving network programmability in the SDN
architecture. The interface provides a standard API that allows applications to
program the underlying network infrastructure. A-CPI provides applications
with the global network view synthesized by the control plane and presents
the policies specified by SDN applications to the control plane. As the D-CPI
enables an abstraction of individual networking devices to SDN controllers, the
A-CPI provides an abstraction of the entire network domain to SDN applica-
tions. Therefore, it is essential to have a well-defined open and standard A-CPI
between the application and control planes.
The application plane in the SDN architecture comprises a set of applica-
tions that implement control and management logic to make decisions on net-
work operations for meeting various service requirements. These applications
can be seen as “network brains” that issue policies to the control plane, which
then translates the policies to rules installed on data plane devices for control-
ling behaviors of the devices. The application plane in the SDN architecture
should not be confused with the application layer defined in a network layering
model (e.g., the TCP/IP layer model or OSI layer model). SDN applications
typically perform the control and management functions that belong to layers
2 and 3 of the TCP/IP or OSI model. In contrast, applications in a network
layering model play the role of end users of a network, which only consume the
services provisioned by the network without participating in network control
or management.
It is worth noting that we refer to the major groups of components in the
SDN architecture as planes in order to avoid confusion with the term layer. The
concept of layer is used in the context of a layering model (e.g., layer-3 pack-
ets are encapsulated into layer-2 frameworks and then transmitted by layer-1
media). A key feature of layering lies in that a higher layer strictly relies on
the services provided by the lower layer in order to perform its own functions.
However, planes in the SDN architecture, while needing to cooperate with each
other, do not have such strict dependence relationship between them. For ex-
ample, an SDN controller in the control plane is typically hosted on a dedicated
server whose operations do not rely on data plane devices. Although the terms
plane and layer are often used interchangeably in some SDN-related documents,
we believe it is beneficial for the readers to be aware of the difference between
the two concepts.

2.4.2  ONF Architecture


ONF presented its specification of SDN architecture in [5, 9]. The architec-
ture, as shown in Figure 2.2, comprises the data plane, controller plane, and
application plane, and the D-CPI and A-CPI, which are equivalent to the corre-
sponding planes and interfaces in the general architectural framework shown in
Figure 2.1. Compared to the general architecture, the SDN architecture given
by ONF has an additional component for management functions.
Although many traditional management functions may be bypassed by
the direct A-CPI, some essential management functions still cannot be real-
ized via A-CPI. Therefore, a dedicated component for management functions
is included in the ONF SDN architecture. As shown in Figure 2.2, the man-
agement component is across all planes because each plane has its own specific
management functions. Data plane management is required for initially setting

Figure 2.2  ONF SDN architecture [9].

up the network elements and assigning resources to the respective SDN con-
troller. On the controller plane, management functions are needed for config-
uring SDN controllers, defining the scope of control given to each application,
and monitoring system performance. Application plane management typically
configures the contracts and service level agreements that are to be enforced by
the control plane. In addition, there are across-plane management functions
(e.g., configuration of the security associations that allow distributed functions
to safely intercommunicate) [9].
One of the main objectives of SDN is to enhance network capability of
service provisioning for meeting users’ requirements. In order to clearly pres-
ent the relationship among the key components in the SDN architecture from
a service perspective, ONF defines a basic service model for SDN in [5], as
shown in Figure 2.3.
In this model, a service consumer (end user) of an SDN network plays
the roles of both service requestor and resource user of the network. As a service
requestor, the end user exchanges information with an SDN controller through
a management-control session. As a resource user, it consumes the capabilities
provided by SDN data plane resources for packet forwarding and processing.
The service consumer controls its services via A-CPI with the controller by
invoking actions on a set of virtual resources that it perceives to be its own.
The SDN controller is responsible for virtualizing physical resources in the data
plane and exposing virtual resources to the service consumer. The controller
may also orchestrate virtual resources in order to provision the service required
by the end user. Therefore, the SDN controller plays a role of service provider
to end users.

Figure 2.3  Basic service model for SDN [5].

A service consumer requests services from an SDN controller and achieves
its data transfer and processing objectives by using the corresponding resources
provided by the SDN data plane. While the service requestor role represents
setup of services by an end user, the resource user role represents the service
consumer’s usage of the corresponding resources to satisfy its service needs. Usually
this is accomplished by data exchanges in the data plane. Therefore, the data
plane plays a role of resource provider to end users in the SDN service model.
The concepts of services and resources in this SDN service model are
intentionally made unbounded by ONF. In general, a resource in this model
refers to anything that can be used to deliver a service. Any SDN service is built
upon some set of resources, whose functions and interfaces are configured to
meet the particular need for delivering the service. Resources may be physical
or virtual, active or passive, and, in many cases, may be created, scaled, or de-
leted by the service provider in response to the service requests made by service
consumers.
An SDN controller exposes services to end users via A-CPI while con-
suming underlying resources through the D-CPI. The SDN controller satisfies
users’ requests by virtualizing and orchestrating resources. Orchestration is to
select the resources provided by multiple network elements and coordinate the
usage of the resources to satisfy service requirements. As the network environ-
ment and user demands change, the SDN controller is responsible for con-
tinuously updating network and service states toward a policy-based optimal
configuration, where the available resources and optimization criteria are also
subject to change.

2.4.3  ITU-T Architecture


ITU-T has also developed a high-level architecture for SDN as shown in Fig-
ure 2.4. The ITU-T SDN architecture consists of resource layer, SDN control
layer, and application layer. The resource, controller, and application layers in
the ITU-T SDN architecture are essentially equivalent to the data, control, and
application planes, respectively, in the general SDN architecture. In addition,
ITU-T also includes a component of multilayer management in its SDN archi-
tecture as ONF does. The multilayer management component provides cross-
layer functions for managing the functionalities in the application, control, and
resource layers [10].
The control layer in the ITU-T SDN architecture is split into three sub-
layers: the bottom sublayer enables an abstraction of the network resources for
data transport and processing; the top sublayer supports communications be-
tween the controller and SDN applications; and the middle sublayer performs
SDN orchestration. An SDN controller is expected to coordinate a number
of interrelated resources, often distributed across a number of subordinate

Figure 2.4  ITU-T SDN architecture [10]. (Source: Recommendation ITU-T Y.3300: Framework of software-defined networking.)

platforms. This is commonly called SDN orchestration, which is essentially the
ability to program network behaviors to coordinate the required resources for
meeting application and service requirements. An orchestrator itself sometimes
is considered to be an SDN controller that coordinates the functions provided
by multiple lower-level controllers with reduced control scopes.

2.4.4  IRTF Architecture


The SDN Research Group (SDNRG) in Internet Research Task Force (IRTF)
has also developed an SDN architecture as shown in Figure 2.5. This SDN
architecture follows a similar overall structure as the general architecture with
three main planes: the forwarding/operational plane, control/management
plane, and application plane. However, the IRTF SDN architecture has some
special features [11].
A special feature of this SDN architecture is that it makes a distinction
between control and management functions in the context of SDN, which leads
to separated control and management planes and separated forwarding and op-
erational planes.
The forwarding plane in this architecture is responsible for handling
packets in the data path based on the instructions received from the control

Figure 2.5  IRTF SDN architecture [11].

plane. The forwarding plane is basically equivalent to the data plane in the gen-
eral SDN architecture, except that management-related functions of network
devices are modeled by a separated operational plane. The operational plane
is responsible for monitoring and managing the operational states of network
devices (e.g., device activity state, the number of available ports, and status of
each port).
The control plane in the IRTF SDN architecture is responsible for mak-
ing decisions on how packets should be forwarded and pushing such decisions
to network devices for execution. Management plane is for monitoring, con-
figuring, and maintaining network devices and making decisions regarding the
states of a network device. The control plane focuses mostly on the forwarding
plane, while the management plane mainly interacts with the operational plane
in network devices. The separation between control and management leads to
split southbound interfaces, respectively, between the control and forwarding
planes and between the management and operational planes.
The distinction between control and management made in the IRTF SDN architecture reflects the different characteristics of control and management functionalities. Control and management have different timescales—how
fast and frequent the respective function is required to react to or manipulate
network operations. In general, control has much shorter time scales, roughly
in the range between milliseconds and seconds; while management has longer
time scales, such as minutes, hours, or even days. Control typically maintains
ephemeral states that have limited lifespan, while management often handles
persistent states that have extended lifespan. In addition, traditionally control-
related functions have been executed locally on network devices and distributed
in nature, while management is usually executed in a centralized manner, re-
motely from the device. However, with the advent of SDN, this distinction is
no longer so clear cut [11].
Another special feature of the IRTF SDN architecture lies in its emphasis
on multilevel abstractions. This architecture includes two main abstraction lay-
ers: the device abstraction layer (DAL) and the service abstraction layer (SAL).
The DAL abstracts the network device resources on the forwarding and
operational planes. Variations of DAL may abstract either both forwarding and
operational planes or one of the two planes. DAL may interact with either the
control or management plane or both the planes. DAL may be expressed by one
or more abstraction models. Examples of forwarding-plane abstraction models
are ForCES, OpenFlow, and YANG model; examples of operational-plane ab-
straction model include ForCES, YANG model, and SNMP MIBs.
The SAL provides service abstraction that can be used by SDN applica-
tions and other services through the service interface between the application
plane and control/management planes. Service interfaces can take many forms
pertaining to their specific requirements. Example approaches to realizing the
service interface include RESTful APIs, open protocols such as NETCONF,
and remote procedure call (RPC).
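As an example of the RESTful style, the Python sketch below reads the abstracted network view exposed through such a service interface and then submits a high-level forwarding intent. The controller address, URL paths, and payload fields are hypothetical; controllers such as OpenDaylight and ONOS offer comparable RESTful APIs but with their own resource models and paths.

    import requests

    CONTROLLER = "https://sdn-controller.example.net:8443"   # hypothetical address
    AUTH = ("admin", "admin")                                 # placeholder credentials

    # Read the abstracted network view offered through the service interface.
    topology = requests.get(f"{CONTROLLER}/service/topology", auth=AUTH).json()
    print(f"known switches: {len(topology.get('switches', []))}")

    # Express a high-level intent; the control/management plane is responsible
    # for mapping it onto device-level configuration.
    intent = {"src_host": "10.0.0.1", "dst_host": "10.0.0.2", "action": "allow"}
    requests.post(f"{CONTROLLER}/service/intents", json=intent, auth=AUTH).raise_for_status()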

2.4.5  Cooperating Layered Architecture for SDN


The capability of supporting a wide spectrum of diverse services independently
of heterogeneous transport technologies is a key requirement for future network
architecture. This requirement calls for decoupling between the service-related
functionalities and transport-oriented capabilities, which needs abstraction of
network infrastructure to hide the specificities of data transport technologies
while offering a common set of transport capabilities.
However, the separation of data forwarding and network control/manage-
ment in the current SDN architecture does not sufficiently support decoupling
between service-related and transport-oriented functionalities. The logically
centralized SDN controller is responsible for performing various control func-
tions with different purposes, including both service-oriented and transport-
oriented functions. This approach may cause a number of issues, including
unclear responsibilities between actors involved in service delivery, complex
reuse of functions for service provisioning, closed and monolithic control archi-
tecture, and blurred business boundaries among service providers. As a conse-
quence, the current SDN architecture lacks a clear separation between service
provisioning and data transportation [12].

The IRTF SDN Research Group proposed the cooperating layered architecture for
SDN (CLAS) to address the aforementioned issues. The idea behind this archi-
tectural proposal is to differentiate the control functions associated with data
transport from those related to service provisioning in such a way that they can
be provided and maintained independently and can follow their own evolution
paths. As shown in Figure 2.6, the CLAS architecture comprises two separated
strata: the service stratum and transport stratum. The functions on each stratum
are further grouped into the control, management, and resource planes [12].
The transport stratum in this architecture comprises the functions focused
on data transfer between communication end points (e.g., between end-user
devices or two service gateways). The control plane on this stratum controls
network resources to build end-to-end communication paths and assure data
transportation is appropriately set up for each service. The management plane
on this stratum implements management functions on network resources, like
fault monitoring and performance measuring.
The service stratum in CLAS contains the functions related to service pro-
visioning. The resource plane on this stratum consists of the resources involved
in service delivery, such as computing capacities and storage space. The control
plane is in charge of controlling and configuring those resources. In addition,
this plane also interacts with the control plane on the transport stratum for re-
questing transport capabilities for a given service. The management plane per-
forms management actions on the service-related resources and interacts with
the management plane on the transport stratum.

Figure 2.6  Cooperating layered architecture for SDN [12].

Despite the separation of transport-oriented and service-related func-
tionalities, close cooperation between the service and transport strata is also
an important aspect of the CLAS architecture. This requires effective coopera-
tion between the control and management planes residing, respectively, on the
service and transport strata. The service-oriented control/management planes
need to easily access transport capabilities through well-defined APIs. On the
other hand, the transport-oriented control/management planes should be able
to properly capture the service requirements specified by the service stratum.

2.5  SDN Data Plane and Southbound Interface


The SDN data plane comprises network elements for packet forwarding and processing, which are typically called SDN switches, although they often perform more functions, such as firewalling, access control, and load balancing, in addition to the regular packet forwarding function of a switch. OpenFlow is currently the
de facto standard of the southbound interface for controlling SDN switches;
therefore, we focus our discussion about SDN data plane on OpenFlow switch-
es and their communications with an OpenFlow controller.

2.5.1  Key Components in an SDN Switch


The high-level structure for an SDN switch, as shown in Figure 2.7, consists of
three key components—a controller interface, an abstraction layer, and a packet
processing engine.
Figure 2.7  A high-level structure for an SDN switch.

The controller interface maintains a secure channel and implements a D-CPI protocol (e.g., OpenFlow) for supporting communications between the switch and an SDN controller. Essentially, the controller interface provides an
API that allows a controller to control the operations performed at the switch
by installing and updating packet match-action rules in the switch. The con-
troller interface also allows the controller to collect device state information
from the switch.
The abstraction layer is between the controller interface and the packet
processing engine. This layer provides an abstract view of the packet process-
ing engine, upon which the controller can program switch operations without
knowing implementation details of the processing engine. The abstraction layer
maintains one or multiple flow tables to store the packet match-action rules
received from the controller. A flow table contains a list of flow entries, and
each entry consists of the match fields and a set of instructions with some other
elements. A flow is essentially a stream of packets transferred from a source to
a destination. Each flow entry in a flow table specifies the rule for identifying a
flow and the actions that should be performed to all packets in that flow.
The packet processing engine in an SDN switch performs the actions of
forwarding and processing packets. This component comprises a set of ingress
ports, a set of egress ports, and a switching fabric that interconnects the ingress
ports to egress ports. For each incoming packet received at an ingress port,
the processing engine will look up the flow table(s) in the abstraction layer to
identify the flow entry for this packet. The instructions stored in the matched
flow entry specify the actions that the processing engine should perform on
the packet. Typical actions include forwarding the packet to an egress port (or
a particular queue at an egress port), forwarding the packet to the controller,
modifying part of the packet, and dropping the packet. If no match is found in
the flow table(s) for a packet, then the packet could be forwarded to the con-
troller for further processing or be discarded by the switch.
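The lookup behavior described above can be summarized with a minimal, self-contained Python sketch. The flow entries, field names, and action strings are illustrative only; real switches implement the same logic with specialized data structures and hardware.

```python
# A minimal, self-contained sketch of the match-action lookup described above.
# It models a single flow table; real switches use optimized hardware/software
# structures (e.g., TCAMs) rather than a Python list.

flow_table = [  # each entry: (priority, match fields, actions)
    (200, {"ip_dst": "10.0.0.5", "tcp_dst": 80}, ["output:3"]),
    (100, {"ip_dst": "10.0.0.5"},                ["output:2"]),
    (0,   {},                                    ["send_to_controller"]),  # table-miss entry
]

def lookup(packet):
    """Return the actions of the highest-priority matching entry."""
    for priority, match, actions in sorted(flow_table, key=lambda e: -e[0]):
        if all(packet.get(field) == value for field, value in match.items()):
            return actions
    return ["drop"]  # reached only if no table-miss entry is installed

pkt = {"ip_src": "10.0.0.9", "ip_dst": "10.0.0.5", "tcp_dst": 80}
print(lookup(pkt))   # ['output:3']
```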
Some traditional technologies of packet switching (e.g., the hardware and
software for implementing a switching fabric that transfers packets from ingress
ports to their destined egress ports with high throughput and short delay) still
play a crucial role in an SDN switch, especially in the packet processing en-
gine. Unlike conventional networking devices, which also run routing protocols to decide how to forward packets, SDN switches have their decision-making functions removed and relocated to a controller. As a result, an SDN switch is sim-
ply responsible for processing packets by following the action rules imposed by
a controller as well as collecting and reporting network status to the controller.
SDN switches may be realized in various forms. Software-based imple-
mentations employ software running on a commodity server with standard
general-purpose CPU and operating system (e.g., Linux). Hardware-based im-
plementations use specialized hardware, such as content-addressable memories
(CAMs), ternary content-addressable memories (TCAMs), and network pro-
cessors. Software-based and hardware-based implementations each have their
own advantages and limitations.
Implementing SDN switches with software offers a simple means of cre-
ating SDN devices, because the flow tables, flow entries, and match fields in-
volved are easily mapped to general software data structures. Software-based
SDN switch design has become mature and two widely recognized reference
implementations are Open vSwitch (OVS) from Nicira and Indigo from Big
Switch. However, software-based SDN switches are likely to be slower and less
efficient than hardware-based implementations, since they do not benefit from
any hardware acceleration. Consequently, software-based SDN switch imple-
mentations might not be feasible for networking devices that must run at very high speeds.
Hardware-based implementations of SDN switches may run much faster
than their software counterparts, and thus are more applicable to performance-
sensitive environments such as core networks in data centers and the backbone
of carrier networks. On the other hand, realizing flexible flow table lookup and rule matching operations in hardware introduces some challenges to SDN
switch design. For example, although hardware will handle the flow table look-
up much faster, hardware tables have limitations on the number of flow entries
that they can hold at any time. In addition, some actions such as packet modi-
fication may be limited or even unavailable if handled in hardware. Therefore, many SDN switch designs combine software- and hardware-based technologies to fully exploit their advantages and overcome their respective limitations.
It is worth noting here that hardware technologies are still playing an
important role in SDN implementations, although this paradigm is called
software-defined networking. It is always necessary to use specialized hardware
technologies in order to realize high-performance packet switching. What the word software in the term SDN really means is that the underlying network infrastructure is open and programmable by upper-layer software, not that the entire network is implemented purely in software.

2.5.2  OpenFlow Switch Structure


OpenFlow is a de facto standard for D-CPI in the SDN architecture. Open-
Flow specification defines both the communication protocol between the
SDN controller and switches and the procedure for managing flow tables in
SDN switches. When an SDN controller and the controlled switches follow
the OpenFlow specification, they are referred to as OpenFlow controller and
OpenFlow switches, respectively. Although alternative D-CPI protocols do ex-
ist, OpenFlow is believed to be the only nonproprietary general-purpose proto-
col for programming SDN switches.
The OpenFlow specification has evolved for a number of years. The non-
profit Internet organization openflow.org was created in 2008 soon after the
seminal research paper on OpenFlow [3] was published. The first release of
OpenFlow specification, version 1.0.0, appeared at the end of 2009. In March
2011, the ONF was formed for accelerating the delivery and commercialization
of SDN. ONF has OpenFlow as the core of its vision of SDN and has become
the responsible entity for evolving OpenFlow specification.
The architecture of a generic OpenFlow switch is shown in Figure 2.8. As
defined in the OpenFlow switch specification [13], the main components of an OpenFlow switch include a set of ingress and output ports, an OpenFlow
pipeline that contains one or multiple flow tables, and a secure channel for
communicating with one or multiple OpenFlow controllers. As in any packet
switch, the core function of an OpenFlow switch is to take packets that arrive
at ingress ports and forward them to their destined output ports. A unique aspect of an OpenFlow switch is its OpenFlow pipeline processing of the packet matching function.
For each received packet, the OpenFlow switch will first identify the flow
that this packet belongs to and then execute the processing instructions speci-
fied for the flow. Searching for the matching flow of each received packet and
determining the actions that should be taken for the packet is the core respon-
sibility of the OpenFlow pipeline. The pipeline contains one or multiple flow
tables. Each entry in a flow table contains match fields and a set of instructions.
Matching starts at the first flow table and may continue to additional flow
tables of the pipeline. Flow entries match packets in priority order, with the first
matching entry in each table being used. If a matching entry is found, the in-
structions in the flow entry are executed. Instructions associated with each flow
entry either contain the actions for packet processing or specify modification of the pipeline processing. Actions included in instructions may be packet forwarding, packet modification, or group processing. Pipeline processing instructions allow packets to be sent to subsequent tables for further processing and allow information to be communicated between tables in the form of metadata. More detailed description of OpenFlow pipeline processing will be given in the next subsection.

Figure 2.8  Architecture of a generic OpenFlow switch [13].
OpenFlow specification defines ports in an OpenFlow switch as an ab-
straction of network interfaces for passing packets between the switch and the
rest of the network. The set of OpenFlow ports in a switch may not be identical
to the set of network interfaces provided by the switch hardware; some inter-
faces may be disabled for OpenFlow and the OpenFlow switch may define ad-
ditional ports. An OpenFlow switch must support three types of ports: physical
ports, logical ports, and reserved ports. Physical ports are switch-defined ports that
correspond to hardware interfaces of the switch. Logical ports are higher-level
abstractions that do not correspond directly to switch interfaces but may be
mapped to various physical ports. The reserved ports defined by OpenFlow are
for generic forwarding actions such as sending packets to the controller, broad-
casting packets, or forwarding packets using non-OpenFlow methods such as
“normal” switch processing.
The OpenFlow channel realizes D-CPI that connects each OpenFlow
switch to an OpenFlow controller. Through this interface, the controller con-
figures and manages the switch, receives events from the switch, and sends packets out through the switch. Typically, an OpenFlow controller manages multiple
OpenFlow channels, each one to a different switch. An OpenFlow switch may
support a single channel to one controller or multiple channels that allow mul-
tiple OpenFlow controllers to share management of the switch, typically for
enhancing network reliability. An OpenFlow channel is usually encrypted us-
ing transport layer security (TLS) protocol, but may run directly on TCP. The
OpenFlow specification requires each OpenFlow switch to have the ability to
initiate a connection to a controller for creating an OpenFlow channel.
The OpenFlow protocol defines the message exchange procedure between
a controller and a switch through the OpenFlow channel. OpenFlow supports
three types of messages: controller-to-switch messages, asynchronous messages,
and symmetric messages. Controller-to-switch messages are initiated by the con-
troller and may or may not require a response from the switch. This type of
message is used by the controller to directly monitor and manage switch states.
Asynchronous messages are initiated by a switch without solicitation from the
controller. This type of message is used by the switch to notify the controller
about occurrences of network events and changes in switch states. Received
packets at the switch may be forwarded to the controller using asynchronous
messages for further processing. Symmetric messages may be sent by either a
switch or a controller without having been solicited by the other. This type of
message is used mainly for initialization after the OpenFlow channel has been
established or for regularly checking the channel status.
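For illustration, the following sketch groups a few well-known OpenFlow message types into the three categories just described; the message names follow the OpenFlow specification, while the grouping code itself is only an illustrative aid.

```python
# Illustrative grouping of common OpenFlow message types into the three
# categories described above (message names follow the OpenFlow specification).
from enum import Enum

class MsgCategory(Enum):
    CONTROLLER_TO_SWITCH = 1   # initiated by the controller
    ASYNCHRONOUS = 2           # initiated by the switch
    SYMMETRIC = 3              # initiated by either side

MESSAGE_CATEGORY = {
    "FLOW_MOD":     MsgCategory.CONTROLLER_TO_SWITCH,  # install/update flow entries
    "PACKET_OUT":   MsgCategory.CONTROLLER_TO_SWITCH,  # inject a packet into the data plane
    "PACKET_IN":    MsgCategory.ASYNCHRONOUS,          # packet forwarded to the controller
    "FLOW_REMOVED": MsgCategory.ASYNCHRONOUS,          # flow entry expired or deleted
    "PORT_STATUS":  MsgCategory.ASYNCHRONOUS,          # link/port state change
    "HELLO":        MsgCategory.SYMMETRIC,             # channel initialization
    "ECHO_REQUEST": MsgCategory.SYMMETRIC,             # channel liveness check
}

print(MESSAGE_CATEGORY["PACKET_IN"].name)
```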

2.5.3  OpenFlow Pipeline Processing


The OpenFlow pipeline contains one or more flow tables. The pipeline pro-
cessing defines how packets interact with those flow tables. Figure 2.9 shows
a diagram from [13] that depicts the high-level sequence for a packet to flow
through the processing pipeline. As shown in the figure, pipeline processing
happens in two stages: ingress processing and egress processing. An OpenFlow
switch is required to have at least one ingress flow table and can optionally have
more flow tables for both ingress and egress processing. An OpenFlow switch may have only a single flow table; in this case, pipeline processing is simplified to a single table lookup process.
Each flow table comprises a list of flow entries. The main components
of a flow table entry defined in the OpenFlow specification are listed next and
shown in Table 2.1.

• match fields: to be used as the criteria to determine whether an incom-
ing packet matches this entry. Detailed contents of match fields defined
by the current OpenFlow specification are listed in Table 2.2.
• priority: to specify the matching precedence of the flow entry; a higher
priority entry in the table will be matched before a lower priority entry.
• counters: to track statistics relative to the flow that matches this entry.
• instructions: to be executed when a packet matches this entry; execution of the instructions may result in modification of the packet content, update of the action set associated with the packet, and adjustment in the pipeline process sequence for the packet.
• timeouts: to specify the maximum amount of time or idle time before the flow is expired by the switch.
• cookie: opaque data value chosen by the controller, which may be used by the controller to filter flow entries affected by flow statistics, flow modification, and flow deletion requests.
• flags: used to alter the way flow entries are managed.

Figure 2.9  Packet processing flow through the OpenFlow pipeline.

Table 2.1
OpenFlow Flow Table Entry Fields
match fields | priority | counters | instruction set | timeouts | cookie | flags
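As an illustration only, the flow entry fields summarized in Table 2.1 can be modeled with a simple data structure, such as the following Python sketch; it does not reflect the OpenFlow wire format.

```python
# A simple data-structure sketch of the flow entry fields in Table 2.1; it is
# only illustrative and does not follow the OpenFlow wire format.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FlowEntry:
    match_fields: Dict[str, object]        # e.g., {"eth_type": 0x0800, "ipv4_dst": "10.0.0.5"}
    priority: int                          # higher value is matched first
    instructions: List[str]                # e.g., ["write_actions:output:2", "goto_table:3"]
    counters: Dict[str, int] = field(default_factory=lambda: {"packets": 0, "bytes": 0})
    idle_timeout: int = 0                  # seconds of inactivity before expiry (0 = none)
    hard_timeout: int = 0                  # absolute lifetime in seconds (0 = none)
    cookie: int = 0                        # opaque value chosen by the controller
    flags: int = 0                         # modifies how the entry is managed

table_miss = FlowEntry(match_fields={}, priority=0,
                       instructions=["apply_actions:send_to_controller"])
print(table_miss)
```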

Table 2.2 lists the flow match fields defined in OpenFlow specification
version 1.5.1 [13]. A packet matches a flow entry if all the match fields of the flow entry match the corresponding header fields and pipeline fields from the packet. The flow entry match fields may be wildcarded using a bit mask, meaning that any value matching the unmasked bits of the corresponding packet field is considered a match. The OpenFlow extensible match (OXM) descriptor specified by the OpenFlow protocol offers a generic and extensible packet-matching capability. OXM defines a set of type-length-value pairs that can describe virtually any of the packet header fields that an OpenFlow switch would need to use for matching.

Table 2.2
Flow Match Fields Defined in OpenFlow Specification Version 1.5.1
Switch input port | ARP source IPv4 address
Switch physical input port | ARP target IPv4 address
Metadata passed between tables | ARP source hardware address
Ethernet destination address | ARP target hardware address
Ethernet source address | IPv6 source address
Ethernet frame type | IPv6 destination address
VLAN id | IPv6 flow label
VLAN priority | ICMPv6 type
IP DSCP (6 bits in ToS field) | ICMPv6 code
IP ECN (2 bits in ToS field) | Target address for ND
IP protocol | Source link-layer for ND
IPv4 source address | Target link-layer for ND
IPv4 destination address | MPLS label
TCP source port | MPLS TC
TCP destination port | MPLS BoS bit
UDP source port | PBB I-SID
UDP destination port | Logical port metadata
SCTP source port | IPv6 extension header pseudo-field
SCTP destination port | PBB UCA header field
ICMP type | TCP flags
ICMP code | Output port from action set metadata
ARP opcode | Packet type value
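The bit-mask wildcarding described above can be illustrated with a short sketch; the example uses an IPv4 prefix match, with all values chosen arbitrarily.

```python
# A minimal sketch of bit-masked (wildcarded) matching: only the bits set in
# the mask are compared, as in masked OXM match fields.
import ipaddress

def masked_match(packet_value: int, entry_value: int, mask: int) -> bool:
    """True if packet_value equals entry_value on all unmasked (mask = 1) bits."""
    return (packet_value & mask) == (entry_value & mask)

# Example: match any IPv4 destination inside 10.0.0.0/24.
entry_ip = int(ipaddress.IPv4Address("10.0.0.0"))
mask_24  = int(ipaddress.IPv4Address("255.255.255.0"))

print(masked_match(int(ipaddress.IPv4Address("10.0.0.42")), entry_ip, mask_24))  # True
print(masked_match(int(ipaddress.IPv4Address("10.0.1.42")), entry_ip, mask_24))  # False
```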
When processed by a flow table, the packet is matched against the flow
entries of the table. In order to search for a matched entry, packet header fields
are extracted and packet pipeline fields are retrieved. Depending on the packet
type, various packet header fields can be used for table lookup, such as Ethernet
source/destination addresses or IPv4 source/destination addresses. In addition
to packet headers, matching can also be performed against the ingress port,
the metadata field, and other pipeline fields. Figure 2.10 shows the 12-tuple of
header fields that are used in the packet matching process in a flow table.
The flow tables in an OpenFlow pipeline are numbered starting at 0 in
the order they can be traversed by packets. Pipeline processing for each packet
starts with ingress processing to match the packet against flow entries of flow
table-0. Other ingress flow tables may be used depending on the matching
outcome of the first flow table. If the matched flow entry in this table has a
Goto-Table-n instruction, where n is a table number, then the processing of this
packet will be transferred to table-n as the next step.
If the outcome of ingress processing is to forward the packet to an output
port, the OpenFlow switch will start performing egress processing in the con-
text of that output port. If no valid egress table is configured as the first egress
table, the packet will be processed by the output port and in most cases sent
out from the port. If a valid flow table is configured as the first egress table, the
packet must be matched against flow entries in that flow table, and other egress flow tables may be used depending on the matching outcome from that flow table.

Figure 2.10  Packet header fields used for matching packet in a flow table.
The matching and instruction execution procedure at a flow table is illus-
trated in Figure 2.11. If a flow entry is matched, the associated instruction set
of that entry is executed. Execution of the instructions may transfer the packet
to another flow table that has a greater table number (i.e., pipeline processing
can only go forward and not backward). An action set is associated with each
packet. This set is empty by default for a packet starting OpenFlow pipeline
processing. During the process, each matched flow entry for the packet may
modify the action set of the packet by using a Write-Action instruction or a
Clear-Action instruction. The action set is carried between flow tables. Pipeline
processing stops when the instruction set of a matched flow entry does not con-
tain a Goto-Table instruction. Then the actions in the action set of the packet
are executed.
Figure 2.11  Flow entry matching and instruction execution in a flow table [13].

OpenFlow specification requires the actions in an action set to be applied in the order specified here, regardless of the order that they were added to the set.

1. copy TTL inwards: apply copy TTL inward actions to the packet;
2. pop: apply all tag pop actions to the packet;
3. push-MPLS: apply MPLS tag push action to the packet;
4. push-PBB: apply PBB tag push action to the packet;
5. push-VLAN: apply VLAN tag push action to the packet;
6. copy TTL outwards: apply copy TTL outwards action to the packet;
7. decrement TTL: apply decrement TTL action to the packet;
8. set: apply all set-field actions to the packet;
9. qos: apply all QoS actions, such as meter and set queue to the packet;
10. group: if a group action is specified, apply the actions of the relevant
group bucket(s) in the order specified by this list;
11. output: if no group action is specified, forward the packet on the port
specified by the output action.
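A minimal sketch of this fixed execution order is shown below; the action names and arguments are simplified placeholders rather than actual OpenFlow structures.

```python
# Illustrative sketch of applying an action set in the fixed order required by
# the specification, regardless of the order in which actions were written.
ACTION_ORDER = [
    "copy_ttl_in", "pop", "push_mpls", "push_pbb", "push_vlan",
    "copy_ttl_out", "dec_ttl", "set_field", "qos", "group", "output",
]

def execute_action_set(action_set: dict, packet: dict) -> None:
    """action_set maps an action type to its argument (at most one per type)."""
    for action in ACTION_ORDER:
        if action in action_set:
            print(f"applying {action}({action_set[action]}) to packet {packet['id']}")

# Actions were written in an arbitrary order by different flow entries ...
actions = {"output": 3, "set_field": {"ipv4_dst": "10.0.0.7"}, "dec_ttl": None}
# ... but they are executed in the specified order: dec_ttl, set_field, output.
execute_action_set(actions, {"id": 42})
```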

If a packet does not match any flow entry in a flow table, this is called a
table-miss. The behavior of pipeline processing on a table-miss depends on the ta-
ble configuration. OpenFlow requires every flow table to have a table-miss flow
entry, which specifies how to process packets unmatched by other flow entries
in the flow table. Typical processes specified in a table-miss flow entry include
sending packets to the controller, dropping packets, or directing packets to a
subsequent table. The table-miss flow entry is identified by its match fields and
its priority. It wildcards all match fields (i.e., all fields are omitted) and has the
lowest priority.
Figure 2.12 shows a flow chart presented in [13] that illustrates the Open-
Flow pipeline processing for transferring a packet through an OpenFlow switch.

2.6  SDN Control and Applications


2.6.1  SDN Controller Architecture
The SDN controller that bridges the data and application planes is a core com-
ponent in the SDN architecture for achieving separation between network con-
trol/management functions and data forwarding operations. The responsibili-
ties of an SDN controller include two main aspects: (a) enabling an abstraction
of the data plane resources, and (b) providing an API for programming network
behaviors. The data plane abstraction provides a global view of the underlying
network infrastructure upon which SDN applications can make decisions on
network operations. The network programming API allows SDN applications
to control behaviors of data plane devices by specifying policies with high-level
languages without considering implementation details in the data plane.
The control plane in the SDN architecture plays a role similar to that of an operating system in a computer, which enables an abstraction layer on top of
hardware resources and provides APIs for programming hardware operations.
Therefore, the SDN control plane is often referred to as a network operating
system (NOS) for SDN networks.
Figure 2.12  A flow chart for OpenFlow pipeline processing. (Source: ONF TS-25 OpenFlow Switch Specification.)

Figure 2.13  Two processing directions in the SDN control plane and the associated objects.

As shown in Figure 2.13, key functionalities of SDN control plane can be categorized into two processing directions. The upward direction includes
functions for collecting and synthesizing network status, managing network to-
pology information, and presenting a global network view and event informa-
tion to the application layer. The downward processing direction is to translate
application requests, which typically specify policies for network operations,
into action rules for packet processing in the data plane. Main functions in this
direction include generating, installing, and updating action rules into flow ta-
bles at data plane devices, ensuring validity and consistency of the action rules,
and maintaining a database of the flows being managed. Two types of objects
are used by an SDN controller. One type is used for network control in the downward direction, including policies imposed by the application layer and action rules for packet processing. The other type is related to network moni-
toring used in the upward direction in the form of local and global network
topology and status.
It is worth noting that an SDN controller cooperates with the applica-
tions running on the controller for implementing network policies regarding
routing, forwarding, load balancing, and the like. A controller often comes with
its own set of common application modules, such as a learning switch, a router,
a firewall, and a load balancer. Such modules are SDN applications but are
often bundled with the controller, similar to the utility programs
bundled with an operating system.
The architecture of a generic SDN controller is depicted in Figure 2.14.
As shown in the figure, an SDN controller comprises a module for realizing
D-CPI, a module for data plane abstraction, and a module for implement-
ing A-CPI. The D-CPI module implements SDN southbound protocol(s) for
communicating with data plane devices. Such protocols include OpenFlow
protocol and its alternatives such as ForCES protocol. The abstraction module
constructs a global abstract view of the data plane network based on which ap-
plications may program network operations. Main functions performed in this
module include user/network device discovery, network device management, network topology management, network traffic monitoring, and flow management.

Figure 2.14  Architecture of a generic SDN controller.

The A-CPI module provides a northbound interface between the
controller and SDN applications. Currently, there is no A-CPI standard comparable to OpenFlow for D-CPI, partially due to the diversity of SDN applications
and their requirements on A-CPI. Existing controller implementations define
their own northbound APIs for SDN applications to access the controllers. The
lack of a standard northbound API limits application portability across controller
platforms; therefore, it is considered a deficiency in current SDN. Some organi-
zations are developing proposals for standardizing A-CPI so that SDN applica-
tions can be developed and deployed independently without being constrained
to specific controller implementations.

2.6.2  SDN Controller Functions


2.6.2.1  Network Topology Management
Managing network topology information and presenting it to applications is a
key aspect of SDN control. Generally speaking, topology management includes
discovering availability of network devices and checking the status of network
connections between the devices.
Although traditional routing protocols are capable of exchanging network
topology information, it is not reasonable to require all SDN applications to be
involved in routing protocols for obtaining topology information. In addition,
the formats in which the topology information is organized by routing proto-
cols are mainly optimized for path computation but not necessarily for other
network control and management functions that may be performed by SDN
applications. Therefore, it is the responsibility of the controller to make net-
work topology information available to SDN applications in a suitable format.
Some of the early SDN controllers and OpenFlow switches originally
lacked the important functionality of network topology discovery. In order to
solve this problem, the link layer discovery protocol (LLDP) was enabled on
switch ports by default. LLDP is a protocol that allows data plane devices to
advertise and discover identity and capability information in a layer 2 network.
As OpenFlow switch ports advertise their discovery information using LLDP,
such information is forwarded to the SDN controller using the packet_in mes-
sage. The controller collects and synthesizes such information to form a global
view of the network topology.
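The following self-contained sketch illustrates how a controller could assemble a global topology view from LLDP reports carried in packet_in messages; the switch names, ports, and report format are made up for the example.

```python
# A self-contained sketch of how a controller could assemble a global topology
# from LLDP advertisements reported by switches via packet_in messages.
from collections import defaultdict

# Each report: the switch/port that received an LLDP frame, and the neighbor
# switch/port advertised inside the frame (values here are made up).
lldp_reports = [
    {"switch": "s1", "port": 1, "neighbor": "s2", "neighbor_port": 3},
    {"switch": "s2", "port": 3, "neighbor": "s1", "neighbor_port": 1},
    {"switch": "s2", "port": 4, "neighbor": "s3", "neighbor_port": 1},
]

topology = defaultdict(set)   # switch -> set of (local port, neighbor, neighbor port)
for r in lldp_reports:
    topology[r["switch"]].add((r["port"], r["neighbor"], r["neighbor_port"]))

for switch, links in sorted(topology.items()):
    print(switch, sorted(links))
```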
Although using LLDP on SDN switches provided a solution to topology
discovery at the early stage of SDN deployment, this approach has some issues
that limit its application in SDN networks. LLDP is based on Ethernet proto-
cols. With SDN adoption extending to various networking scenarios, including
wide area carrier networks, it is not reasonable to assume that all data plane
devices support Ethernet and LLDP. In addition, the LLDP topology informa-
tion is localized to just neighboring layer 2 switches, thus lacking the necessary
scalability for topology management in large-scale networks.
Another technology that can be leveraged in SDN control plane for more
scalable topology management is BGP-LS (link state). BGP-LS is an extension
to BGP that allows it to carry link state information. The link state informa-
tion can be acquired from the traffic engineering databases in various network
domains using the IGP employed in the domains. Such information from mul-
tiple domains can then be aggregated to form a global topology for the entire
network infrastructure. BGP-LS was specifically designed to leverage properties
of BGP that give it better scaling characteristics, including TCP-based flow
control and use of route reflector(s). Therefore, BGP-LS offers a more scalable
choice for managing multidomain topology information in large-scale SDN
networks.
An important aspect of topology management in SDN control plane is
to represent topology information in a standard format and make it available
to applications that do not have direct interaction with routing protocols. One
of the efforts toward this direction is the application-layer traffic optimization
(ALTO) network service [14], which can be exposed to SDN applications via a
RESTful web service interface. The information that ALTO provides is based
on abstract or logical topology maps of a network, which comprises two parts—
a network map that shows the connectivity among network nodes and a cost
map that gives the costs of the connections shown on the network map. ALTO
allows the operator to specify policies and rules to configure the generation of
abstract network topologies from the physical topology of the underlying net-
work. This configurability of topology management in ALTO is very desirable
to support diverse SDN applications that may need different types of abstract
topologies of the data plane network. Also, the configurable topology manage-
ment makes ALTO naturally embrace multitenant virtual networks sharing a
common data plane substrate.
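As a rough illustration, the two ALTO abstractions can be represented as simple maps; the PID names and cost values below are invented for the example and do not follow the exact ALTO protocol encoding.

```python
# Illustrative sketch of the two ALTO abstractions described above: a network
# map grouping endpoints into abstract nodes (PIDs) and a cost map between
# them. The PID names and cost values are made up for the example.
network_map = {
    "pid-datacenter": ["10.1.0.0/16"],
    "pid-campus":     ["10.2.0.0/16", "10.3.0.0/24"],
    "pid-branch":     ["192.0.2.0/24"],
}

cost_map = {   # abstract routing cost between PIDs (lower is "closer")
    ("pid-campus", "pid-datacenter"): 1,
    ("pid-campus", "pid-branch"):     5,
    ("pid-branch", "pid-datacenter"): 10,
}

def cost(src_pid: str, dst_pid: str) -> int:
    """Look up the cost in either direction; 0 for the same PID."""
    if src_pid == dst_pid:
        return 0
    return cost_map.get((src_pid, dst_pid), cost_map.get((dst_pid, src_pid), 999))

print(cost("pid-branch", "pid-datacenter"))   # 10
```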
Although current applications of the topology collection and representa-
tion service provided by ALTO seem to be mainly in content delivery networks
(CDN), researchers are actively promoting its broader adoption in SDN net-
works [15]. More recently, IETF started an effort called interface to routing
system (I2RS) [16] toward developing a standard A-CPI protocol for SDN, in
which standardization of generalized network topology management is identi-
fied as an important element. A key feature of I2RS topology management is its
ability of collecting topology data from diverse sources, including device moni-
toring, routing protocols, and other sources. I2RS topology management also
normalizes the collected topology data and transforms them into a standardized
format that is portable across SDN applications.
The development of topology management technologies for SDN con-
trol, from LLDP to BGP-LS to ALTO and I2RS, follows a similar evolution
track of the entire SDN ecosystem, from proprietary systems to service-oriented
open systems with standard southbound and northbound interfaces.
2.6.2.2  Network Traffic Monitoring
In addition to topology management, which presents a relatively static view
of the data plane infrastructure, the SDN control plane also performs traffic
monitoring that reflects the dynamic aspect of network status. The traffic monitoring function collects statistics of traffic load and packet forwarding actions in
the network (e.g., the duration time, packet number, data size, and bandwidth
share of a flow). Typically, individual data plane devices collect and store local
traffic statistics in their own storage, which then can be either retrieved by a
controller in a pull mode or proactively reported to a controller in a push mode.
In the pull mode, a controller collects the statistics of a set of flows that
match some specification from chosen devices. In this way, the controller may
limit the communication overheads introduced by traffic monitoring but may
not be able to provide a timely response to events occurring in the data plane.
In the push mode, statistics are sent by data plane devices to the controller
either periodically or triggered by some events (e.g., a flow counter reaches a
preset threshold). This mode allows the controller to obtain real-time moni-
toring of network status but at the cost of more switch-controller communi-
cation overheads. The two monitoring modes have different characteristics in
measurement overhead and accuracy. A key design objective of SDN control
is to find an optimal point that achieves adequate monitoring accuracy while
maintaining low measurement overhead.
The traffic matrix (TM), which describes the volume of traffic flowing between all pairs of sources and destinations in a network, is commonly used for network traffic monitoring. In an SDN environment, a controller can retrieve the flow
counters from switches (in a pull mode) to construct a TM for the data plane
network. The appropriate query strategy that determines a set of switches from
which the controller collects counter data plays a key role in achieving a bal-
ance between measurement accuracy and query loads. The OpenTM scheme
developed in [17] uses selective query strategies with various query distributions
along the flow path. Research results reported in [17] indicate that non-
uniform distribution strategies that tend to query switches closer to the destina-
tion with a higher probability typically have better performance compared to
the schemes that query switches uniformly distributed along the flow path.
A representative SDN monitoring scheme using the push mode is the
FlowSense developed in [18]. This approach leverages the OpenFlow control
messages sent from switches to the controller for estimating network status in-
cluding traffic load and link utilization. Specifically, the packet_in message sent
to the controller upon arrival of the first packet in a flow and the flow_re-
moved message triggered by expiration of a flow entry are used by FlowSense to
compute utilization of links between switches.
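A simplified sketch of this push-mode estimation is shown below; it only captures the basic idea of attributing a finished flow's byte count and duration to a link, and the event format and numbers are made up.

```python
# A simplified sketch of push-mode utilization estimation in the spirit of
# FlowSense: when a flow_removed message reports a flow's byte count and
# duration, the controller attributes that traffic to the flow's ingress link.
from collections import defaultdict

flow_removed_events = [   # made-up example events
    {"link": ("s1", "s2"), "byte_count": 125_000_000, "duration_sec": 10.0},
    {"link": ("s1", "s2"), "byte_count":  25_000_000, "duration_sec": 10.0},
    {"link": ("s2", "s3"), "byte_count":  50_000_000, "duration_sec":  5.0},
]

link_util_mbps = defaultdict(float)
for ev in flow_removed_events:
    mbps = ev["byte_count"] * 8 / ev["duration_sec"] / 1e6
    link_util_mbps[ev["link"]] += mbps

for link, mbps in link_util_mbps.items():
    print(link, f"{mbps:.1f} Mb/s (averaged over the flows' lifetimes)")
```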

2.6.2.3  Flow Management


The flow management functionality of an SDN controller is responsible for
establishing and maintaining all flow paths in the network. Specifically, the
controller generates the packet processing rules for each flow to realize the poli-
cies specified by SDN applications and then installs the rules in the flow tables
at appropriate network devices for operations. The controller may update flow
table entries at network devices either for modifying network configuration or
for dynamic network control. The procedures for installing and updating flow
table entries are defined by the D-CPI protocol (e.g., the OpenFlow protocol).
Flow management has two modes: proactive and reactive. In the proac-
tive mode, the SDN controller establishes a flow path before the network starts
forwarding the first packet for the flow. This is done by the controller through
preinstalling the packet processing rules for this flow at all the switches that the
flow is expected to traverse. In the reactive mode, the flow table entry and its
associated rules are set up by the controller for a new flow only when a switch
receives the first packet of the flow and thus cannot find a table match for the
packet. The table-miss event at a switch for the first packet of a new flow trig-
gers communications on D-CPI to forward the packet to the controller. The
controller, upon receiving this packet, will work with relevant applications (e.g.,
path computing application) to determine the appropriate rules for processing
the traffic of the new flow, and then installs the rules in all the switches involved
in forwarding packets for this flow. The flow entries installed by the reactive
mode will expire after a predefined timeout and should be wiped out from the
table.
Proactive flow management allows a switch to respond to the first packet of a new flow immediately without contacting a controller, thereby avoiding
the extra delay for processing new flows caused by switch-controller communi-
cations. However, due to the limited storage space available in SDN switches, it
is not feasible to preinstall the rules for all possible flows. Therefore, proactive
flow management is typically used for the preconfigured flows in the network.
Although the reactive mode of flow management suffers a round-trip delay
between the controller and switches, it has the flexibility to make decisions for
individual flows based on their service requirements and the current network
status and thus may offer better performance for dynamic network service pro-
visioning to meet diverse user requirements.
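The reactive mode can be sketched from the controller's perspective as follows; the path computation and rule installation helpers are placeholders standing in for real controller services.

```python
# A minimal sketch of reactive flow management from the controller's point of
# view: a table-miss generates a packet_in, the controller computes a path, and
# match-action rules are installed on every switch along that path. The helper
# functions passed in (compute_path, install_rule) are placeholders.
def handle_packet_in(packet, compute_path, install_rule):
    path = compute_path(packet["src"], packet["dst"])      # e.g., ["s1", "s4", "s7"]
    match = {"ip_src": packet["src"], "ip_dst": packet["dst"]}
    for hop, next_hop in zip(path, path[1:] + [None]):
        out_port = 1 if next_hop else 0                     # placeholder port selection
        install_rule(hop, match, actions=[f"output:{out_port}"], idle_timeout=30)
    return path

# Toy stand-ins for the path computation application and the D-CPI rule installer.
path = handle_packet_in(
    packet={"src": "10.0.0.1", "dst": "10.0.0.9"},
    compute_path=lambda s, d: ["s1", "s4", "s7"],
    install_rule=lambda sw, match, actions, idle_timeout: print("FLOW_MOD", sw, match, actions),
)
print("flow path:", path)
```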
Another design choice regarding flow management is about the flow
granularity. SDN allows a flow to be identified and matched by using multiple
packet header fields, as illustrated in Figure 2.10. Therefore, flow-based net-
work control can be performed on various granularity levels. Finer-grained flow
management can differentiate micro flows with diverse traffic load characteris-
tics and QoS requirements, thus offering more flexibility for network control.
However, finer flow granularity requires a larger number of flow entries to be
maintained in data plane devices and thus may not be feasible in large-scale
networks. In contrast, coarse-grained flow management aggregates traffic into
macro flows, thus reducing the number of flow entries that need to be managed
by the controller and stored at switches. Therefore, coarser flow granularity may
gain better scalability at the cost of control flexibility.
Flow management in the SDN control plane is also responsible for updat-
ing the packet processing rules stored in flow tables to achieve dynamic control
on network operations (e.g., dynamic load balancing, traffic redirection after VM
migration, and recovery after network failure). Consistency must be preserved
during rule updating in order to ensure proper network operations. Rule consis-
tency may be achieved at two levels: strict consistency and eventual consistency.
Strict consistency ensures that all packets in a flow are processed by following
either the original rule or the updated rule at all switches that the flow traverses.
Eventual consistency only ensures that the later packets in a flow will eventually be processed following the updated rule, and it allows the earlier packets of the flow to still be processed using the original rule during the update procedure.
A possible mechanism for achieving strict rule consistency is to use rule
versioning with rule timeout. For example, the SDN rule update scheme de-
veloped in [19] stamps each packet with a version number at its ingress switch
to indicate which rule set should be applied to process the packet. After the
controller updates the rule set for the flow, all the packets that enter the network
after rule updating will be stamped with a new version number for the updated rule
set at the ingress switch and will be processed using the updated rule set at all
switches. In this way, no more packets will be processed using the original rule set after a certain time period, after which the original rule set will be removed. Before
that, both the original and updated rule sets will be kept in all the switches that
are involved in forwarding packets for this flow.
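The version-stamping idea can be sketched as follows; this is a conceptual illustration rather than the exact mechanism defined in [19].

```python
# A sketch of version stamping for consistent rule updates: packets are stamped
# with a configuration version at the ingress switch, and switches keep both
# rule sets until the old version can no longer appear in the network.
class VersionedRules:
    def __init__(self, rules, version=1):
        self.rule_sets = {version: rules}      # version -> packet processing rules
        self.ingress_version = version         # version stamped on new packets

    def update(self, new_rules):
        """Install a new rule set and start stamping packets with its version."""
        new_version = self.ingress_version + 1
        self.rule_sets[new_version] = new_rules
        self.ingress_version = new_version     # old set stays until it times out

    def expire_old_versions(self):
        """Called after no in-flight packet can still carry an old version."""
        self.rule_sets = {self.ingress_version: self.rule_sets[self.ingress_version]}

    def process(self, packet_version):
        return self.rule_sets[packet_version]  # strict consistency: one set per packet

switch = VersionedRules(rules=["output:1"])
switch.update(["output:2"])
print(switch.process(1), switch.process(2))    # old and new packets use their own set
switch.expire_old_versions()
```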

2.6.3  Enhancing SDN Control Performance


Due to the crucial role that the control plane plays in the SDN architecture,
control performance is a key factor that impacts the performance of an SDN network. The logically centralized SDN controller carries the load of controlling
the entire network, thus becoming a potential performance bottleneck that may
limit network scalability and reliability. The main questions that one may ask
about SDN control performance are (a) how fast a controller can respond to a data plane request for initializing a new flow and (b) how many data plane requests a controller can handle per second. An early study showed that a pop-
ular SDN controller (NOX) can handle 30K flow initialization events per sec-
ond while maintaining a sub-10ms flow installation time [20]. However, more
recent measurements of some large-scale SDN deployment environments indi-
cate that a controller performing at this level is far from sufficient. Therefore,
technologies for enhancing SDN control performance have been an important
topic for research since the very beginning of SDN development. Various ap-
proaches to high-performance SDN control have been explored from different
aspects. We briefly review some representative technologies for enhancing SDN
control performance in this subsection.
2.6.3.1  Enhancing Processing Capability of SDN Controller
An SDN controller is essentially a piece of software running on a hosting plat-
form, typically a high-volume server. Therefore, controller performance may be
enhanced from both hardware and software aspects. From the hardware aspect,
the hosting platform for an SDN controller should use an available server with
the highest possible computing capabilities, including CPU capacity, cache/
memory and disk space, and I/O throughput. The recent development in high-
volume server technologies has made the hardware platform a less serious con-
straint for SDN controller performance compared to the complex software that
actually performs the control functions.
Parallelism and batch processing are typical software optimization tech-
niques that have been employed for designing high-performance SDN control-
lers. For example, Beacon, a pioneering open source SDN controller implemented in Java, supports multithreading for achieving high performance and linear per-
formance scaling. Since most of the current open source SDN controllers were
forked from the original Beacon source code, its multithreading scheme for im-
proving performance has significant influence on the designs of contemporary
SDN controllers. Maestro is also a Java-based controller that exploits the mul-
tithread mechanism together with additional optimization techniques, includ-
ing I/O batching and core-thread binding for enhancing delay and through-
put performance [21]. Another representative design for enhancing controller
performance using parallelism is NOX-MT [22]. NOX-MT is a multithread
extension of the single-threaded C++ implementation of the NOX controller. NOX-MT also uses optimization techniques including I/O batching for minimizing I/O overhead and the Boost asynchronous I/O (ASIO) library for simplifying
multithread operations. Benchmark testing results reported in [22] for different
controllers, including Beacon, NOX, Maestro, and NOX-MT, show perfor-
mance advantages of NOX-MT in terms of both response time and throughput
for handling data plane requests.

2.6.3.2  Distributed Deployment of SDN Controller


Although various hardware and software technologies have been developed for
improving performance of SDN controllers, network designs with a single con-
troller still suffer from scalability and reliability issues. To tackle this challenge, re-
searchers have explored architectural solutions that employ multiple controllers
deployed according to specific configurations. Typical multicontroller deploy-
ment configurations include logically centralized distributed controllers, load
partitioning among multiple controllers, and hierarchical control structures.
The control framework HyperFlow proposed in [23] allows network
operators to deploy any number of OpenFlow controllers in their SDN networks while keeping the control plane logically centralized. The high-level structure
of HyperFlow is shown in Figure 2.15. An SDN network with HyperFlow-
based control is composed of a set of OpenFlow switches and multiple NOX
controllers. All the controllers share a consistent network-wide view and each
controller runs as if it is controlling the whole network. All controllers also run
the same set of control applications, including an instance of the HyperFlow
application and an event propagation module for intercontroller communica-
tions. Each switch is connected to the best controller in its proximity. Each
controller directly manages the switches connected to it and indirectly manages
other switches through communications with other controllers.
In order to ensure a consistent global network view across all controllers,
the HyperFlow application running on top of the NOX controller selectively
pushes events of network state changes to other controllers using a publish/sub-
scribe messaging system. To propagate network state events among controllers,
the messaging system must provide persistent storage of published events, keep the order of events published by the same controller, and be resilient against network partitioning. The distributed event propagation system in HyperFlow is implemented based on WheelFS [24].

Figure 2.15  HyperFlow SDN control framework.
ONIX is a distributed SDN control platform where one or multiple con-
troller instances may run on one or more clustered servers [25]. Figure 2.16 de-
picts the ONIX platform structure. As a control platform, ONIX is responsible
for providing SDN applications with programmatic access to network states
for controlling data plane operations. Controllers in the ONIX platform oper-
ate on a global view of the network. Network state information is stored in a
network information base (NIB) at each controller, which is responsible for
disseminating network states to its peer controllers.
Figure 2.16  ONIX distributed SDN control platform.

Different SDN applications may have different requirements on the scalability, update frequency, and consistency of the network states that they need. As a general platform for various control applications, ONIX provides
two separate mechanisms for distributing network state updates between the
NIBs stored at different controllers. One mechanism is based on a replicated
transactional database, which is designed to meet durability and strong
consistency requirements. The other mechanism is a memory-based one-hop
distributed hash table (DHT) for more volatile states that are updated frequent-
ly but more tolerant to inconsistency. ONIX allows application developers to
explicitly choose their preferred mechanism for any given state in the NIB.
The ONIX platform may be deployed with at least two types of configu-
rations. In the first configuration, ONIX instances are horizontally distributed
and each instance controls a partition of the network. A single controller only
keeps a subset of the NIB and controls a group of switches. In the second con-
figuration, controllers are organized in a hierarchical structure with two levels.
The lower level consists of a cluster of ONIX instances that are aggregated to
appear as a single element to the ONIX instance on the higher level. For ex-
ample, in a large campus network, each building has an ONIX controller that
exposes all the network devices in the building as an aggregated node to a global
ONIX controller that performs campus-wide control functions [25].
A hierarchical structure for deploying multiple controllers offers another
solution to enhance performance and scalability of the SDN control plane.
The basic idea of hierarchical control structure is to distinguish local control
functions that only need a partial network view from global control functions
that require the network-wide states. A hierarchical SDN control structure can
offload heavy communication and processing tasks to highly replicable local
controllers and thus may significantly improve SDN control performance.
For example, in the Kandoo framework, SDN controllers are organized in
a two-layer structure. The lower layer comprises a group of controllers that take
local control actions without knowledge of the network-wide states. The up-
per layer is a logically centralized controller (root controller) that maintains the
network-wide states and performs global control functions [26]. The structure
of the Kandoo framework is shown in Figure 2.17.
In the Kandoo framework, each switch is controlled only by one local
controller, and each local controller can control a subset of switches on the data
plane. Local controllers handle most of the frequent events near data paths, thereby releasing the root controller from the large amount of control load
that can be processed locally. For example, detection of “elephant” flows, which
needs to continuously query each switch to check if a flow has enough data for
forwarding, can be done by local controllers. The root controller is responsible
only for network-wide control functions (e.g., establishing the end-to-end path
for a flow traversing multiple switches controlled by different local controllers).
When the root controller needs to program data plane switches (e.g., installing flow table entries on switches), it delegates the requests to the respective local controllers.

Figure 2.17  Kandoo hierarchical SDN control architecture.
Similar to HyperFlow, communications between the root and local con-
trollers in Kandoo are based on a messaging system for event propagation. The
root controller can subscribe to specific events in local controllers using the
messaging channel. However, unlike the flat structure of HyperFlow, the two-
layer structure of Kandoo only allows event messaging between local and root
controllers and no direct communication between local controllers.
2.6.3.3  SDN Controller Placement
Although distributed controller deployment offers an approach to enhancing
performance, scalability, and reliability of the SDN control plane, it also intro-
duces extra complexity due to the required intercontroller communications and
collaboration. Therefore, an SDN control plane framework must be carefully
designed in order to achieve an optimal balance between network performance
and control complexity. Specifically, for a given network topology, network de-
signers need to determine how many controllers are needed and where they
should be deployed. Such a design choice is often referred to as the controller placement problem. The answers to these two questions influence every aspect of the control plane, from state distribution to fault tolerance to performance metrics, and therefore have a significant impact on the performance and cost of a
control plane design.
SDN controller placement is a complex problem that has not been fully
solved yet. Its complexity comes from multiple aspects. First, the optimization
objectives for controller placement vary in different networking scenarios. For
example, in wide-area networks the “best” controller placement is typically ex-
pected to minimize the state propagation delay between controllers; while in
data center and enterprise networks the objective could become maximizing
fault tolerance or load balancing. Second, different control structures may have
different requirements on intercontroller collaboration; thus, their performance
needs to be evaluated differently. For example, a flat distributed controller de-
ployment such as HyperFlow requires consistency of the full network topology
to be kept across all controllers, while a hierarchical control structure such as
Kandoo allows local controllers to only maintain information about a part of
the entire network topology.
A comparative study on various proposed controller placement solutions
for wide area networks is reported in [27]. The authors chose switch-to-con-
troller latency as the performance metric for their evaluations, since such la-
tency imposes fundamental limits on reaction delay, fault discovery, and event
propagation efficiency that can be achieved by a controller placement scheme.
As expected, their findings show that there is no general controller placement
rule applicable to every network. Rather, the effectiveness of various solutions
depends on multiple factors such as network topology and operators’ require-
ments. More surprisingly, results obtained in [27] indicate that a single control-
ler location can be sufficient to meet the reaction-time requirements in many
existing networking scenarios. Of course, single controller deployment still has
issues related to reliability and resilience.
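As a small illustration of evaluating candidate placements by switch-to-controller latency, the following sketch computes the average and worst-case shortest-path delay for each possible single-controller location in a made-up topology.

```python
# Illustrative evaluation of single-controller placements by switch-to-controller
# latency; the topology and link delays below are invented for the example.
import heapq

links = {("A", "B"): 5, ("B", "C"): 4, ("C", "D"): 6, ("A", "D"): 20}  # delay in ms
graph = {}
for (u, v), d in links.items():
    graph.setdefault(u, {})[v] = d
    graph.setdefault(v, {})[u] = d

def delays_from(src):
    """Dijkstra's algorithm: shortest-path delay from src to every node."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

for candidate in sorted(graph):
    dist = delays_from(candidate)
    avg = sum(dist[n] for n in graph) / len(graph)
    print(f"controller at {candidate}: average {avg:.1f} ms, worst case {max(dist.values())} ms")
```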

2.6.4  Multidomain SDN Control


A key principle of SDN is to offer a single (logically) centralized control plat-
form that has a global view of the entire network. With the rapid development
of SDN technologies, this new networking paradigm is expected to be adopted
in wide area carrier networks in addition to the campus and data center net-
works where SDN was initially applied. Wide area networks typically comprise
multiple autonomous systems that may be managed by different operators;
therefore, interdomain networking is often mandatory for end-to-end network
service provisioning.
Interdomain networking brings in some new challenges to SDN control.
First, single controller deployment is not feasible due to both the large network
scale and administrative partition between domains. In addition, even though
all domains in the network adopt SDN, each of them should be allowed to
choose its own SDN controller implementation and deployment. Therefore,
multidomain SDN networks need the capability of collaboration among het-
erogeneous SDN controllers. The distributed controller deployments we dis-
cussed previously, such as HyperFlow and ONIX, assume homogeneity among
controllers and thus may not meet the multidomain networking requirements.
Also, network applications may need to work with multiple heterogeneous con-
trollers in different domains in order to program end-to-end flow paths for
service provisioning. The lack of a standard A-CPI and controller-to-controller
interface in the current SDN architecture makes it more difficult to cope with
the heterogeneity in SDN domain controllers. Another challenge to interdo-
main SDN control lies in sharing network states among the controllers in dif-
ferent domains. Aggregation and abstraction of network topologies and states
for individual domains are necessary due to both scalability and security rea-
sons. Therefore, a standard information model that provides an appropriate
level of state abstraction is needed.
Multidomain control is a challenging problem that has attracted research
attention of the SDN community. Exciting progress has been made in this area,
although the problem still requires more thorough investigation.
An early effort made for enabling interdomain SDN control is a message
exchange protocol for SDN across multiple domains called SDNi, which was
proposed by IRTF as an Internet Draft at the end of 2012. In this draft, an
SDN domain is defined as a portion of a network infrastructure determined by
network operators. Each domain has a (logically centralized) SDN controller
that controls multiple SDN-enabled networking devices and maintains a global
view of the portion of the network covered by the domain. Inside each SDN
domain, its controller defines domain-specific policies for monitoring network
states and programming network operations. Such policies may not be made
public; therefore, a domain controller does not know the existence of such poli-
cies in other domains. Two SDN domains are adjacent if there exists physical
network link(s) between them [28].
SDNi was proposed as a protocol for interfacing SDN domains for ex-
changing control-related information across multiple SDN domains and co-
ordinating the functions performed by different domain controllers. More
specifically, main responsibilities of SDNi include the following two aspects:
(a) exchange the reachability information required by interdomain routing
across SDN domains, and (b) coordinate the functions of domain controllers
for establishing end-to-end flow paths traversing multiple SDN domains. The
following types of messages are defined in the SDNi protocol: messages for
reachability information update, messages for flow setup/tear-down/update
request, and messages for capability update request, including both network
related capabilities such as bandwidth and system/software capabilities inside
the domain [28].
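To make these message categories concrete, the sketch below renders them as simple Python data structures. The field names are illustrative assumptions for this example and are not taken from the SDNi draft itself.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ReachabilityUpdate:
    """Reachability information exchanged for interdomain routing."""
    origin_domain: str
    prefixes: List[str]

@dataclass
class FlowSetupRequest:
    """Request to set up, tear down, or update an interdomain flow path."""
    flow_id: str
    ingress_domain: str
    egress_domain: str
    operation: str = "setup"            # or "teardown", "update"
    qos: Dict[str, float] = field(default_factory=dict)

@dataclass
class CapabilityUpdate:
    """Network-related and system/software capabilities of a domain."""
    domain: str
    available_bandwidth_mbps: float
    software_capabilities: List[str] = field(default_factory=list)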
The SDNi proposal suggests using extension of BGP and SIP over SCTP
to exchange information for implementing the SDNi protocol. However, the
hop-by-hop nature of BGP requires interdomain routing in a decentralized
manner without global knowledge of end-to-end routes. The scalability of
SIP is also an issue, especially in multidomain networking environments. In
addition, SDNi mainly focuses on information exchanging between domain
controllers without a clearly defined mechanism for coordination between con-
trol functions for service orchestration. Therefore, how to realize the proposed SDNi protocol to actually achieve its objective—enabling end-to-end service provisioning across multiple domains in large scale SDN networks—is still an
open issue.
More recently, a distributed SDN control plane called DISCO has been
proposed in [29] to cope with the distributed and heterogeneous nature of
multidomain SDN control to deliver end-to-end network services. Figure 2.18
depicts the overall architecture of DISCO. Each network domain has a DISCO
controller that performs SDN control functions within the domain. A DISCO
controller communicates with its neighboring domain controllers to exchange
aggregated network-wide state information. The intercontroller communica-
tion system in DISCO consists of two types of modules: (a) a messenger that
discovers neighboring controllers and maintains a distributed communication
channel between neighboring controllers, and (b) a set of agents that use the
channel to perform interdomain network control functions.
The messenger module implements a control channel between neighbor-
ing controllers to exchange network state information and request actions from
other controllers (e.g., set up flow table entries on switches in different do-
mains). The usual communication patterns used by protocols such as BGP and
RSVP are supported by this channel. The messenger module in DISCO is im-
plemented based on advanced message queuing protocol (AMQP). AMQP is
an open standard application layer protocol for message-oriented communications, which has been employed for loosely coupled communications between different components in OpenStack.

Figure 2.18  DISCO: distributed multidomain SDN control architecture.
Each DISCO controller has a group of agents for interdomain network
control. The reachability agent advertises the presence of hosts in domains to
make them reachable from other domains. The connectivity agent shares with all other domains the presence of peering links with neighboring domains.
The monitoring agent periodically sends information about available bandwidth
and latency between all the pairs of peering points. The path computation agent
uses the connectivity information to make local routing decisions. The reserva-
tion agent is responsible for requesting neighboring domains for interdomain
flow management to set up flow paths traversing other domains.
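As a rough illustration of how such an agent might use an AMQP-based messenger, the following sketch publishes a monitoring update to a neighboring controller with the pika client library; the queue name and the message format are assumptions made for this example, not part of the DISCO design.

import json
import pika

# Connect to the message broker used for the interdomain control channel.
connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="neighbor-controller.example.net"))
channel = connection.channel()
channel.queue_declare(queue="disco.monitoring")

# A monitoring-agent update about one peering link (illustrative fields).
update = {
    "agent": "monitoring",
    "peering_link": ["domain-A", "domain-B"],
    "available_bandwidth_mbps": 800,
    "latency_ms": 12,
}
channel.basic_publish(exchange="", routing_key="disco.monitoring",
                      body=json.dumps(update))
connection.close()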
Another approach to achieving interdomain SDN control is to use the hi-
erarchical control structure together with service orchestration, as proposed in
[30]. In its simplest form, a hierarchical control/orchestration structure consists
of two levels, as shown in Figure 2.19. The lower level comprises a group of
controllers, one for each network domain, for performing SDN control func-
tions within the domain scopes. The higher level has a main controller that
serves as an orchestrator to coordinate multiple domain controllers for provi-
sioning end-to-end services across domains.
The main controller essentially composes the network services provided
by individual domains into end-to-end network services. For example, in the
network shown in Figure 2.19, when the main controller receives a request for
establishing a flow path from a source node S in domain-1 to a destination node
D in domain-n transiting through domain-2, the main controller translates
this request into requests to the controllers in all the involved domains—the
domains 1, 2, and n. Then the controller in each of these domains sets up a seg-
ment of the flow path within the respective domain. Then the main controller establishes the interdomain links (links between domains 1 and 2 and between domains 2 and n) to assemble the path segments in these domains together to form an end-to-end flow path from node S to node D.

Figure 2.19  A hierarchical control and orchestration architecture for multidomain SDN.
Interfaces between domain controllers and the main controller play an
important role in the hierarchical orchestration structure for interdomain SDN
control. The main controller interacts with individual domain controllers via
the northbound interface supported by the domain controllers. Considering
heterogeneity in the SDN controllers across autonomous domains, the interface
between the main controller and domain controllers needs to provide a layer of
abstraction that hides domain specific control functions. On the other hand,
this interface should also expose the domain control capabilities as services that
can be accessed and orchestrated by the main controller. As suggested in [30], a
RESTful web service interface is a good choice for meeting these requirements.
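The sketch below illustrates, under assumed RESTful endpoints and payloads that are not defined in [30], how a main controller could translate an end-to-end path request into per-domain segment requests issued to the domain controllers.

import requests

# Hypothetical northbound endpoints of the domain controllers.
DOMAIN_CONTROLLERS = {
    "domain-1": "http://ctrl1.example.net:8181",
    "domain-2": "http://ctrl2.example.net:8181",
    "domain-n": "http://ctrln.example.net:8181",
}

def setup_end_to_end_path(src, dst, domain_sequence):
    """Ask each involved domain controller to set up its segment of the path."""
    segment_ids = []
    for domain in domain_sequence:
        resp = requests.post(f"{DOMAIN_CONTROLLERS[domain]}/api/segments",
                             json={"src": src, "dst": dst}, timeout=5)
        resp.raise_for_status()
        segment_ids.append(resp.json()["segment_id"])
    # The main controller would then stitch the segments together across the
    # interdomain links (not shown here).
    return segment_ids

# Example for the scenario in Figure 2.19:
# setup_end_to_end_path("S", "D", ["domain-1", "domain-2", "domain-n"])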

2.6.5  SDN Control Applications


Control applications can be regarded as the “brain” in the SDN architecture
that makes decisions on network policies and programs network behaviors to
fulfill the policies. SDN control applications interact with controllers through
A-CPI to obtain network state information and request certain control actions
to be taken by the controllers.
As we discussed before, there is no standard A-CPI yet for the interac-
tions between controllers and control applications. This is partially due to the
fact that the northbound interface is more a set of APIs than a protocol expos-
ing the functionalities and data models of the controller. Programming using
these APIs allows SDN applications to define network behaviors through the
platform provided by the controllers. Northbound APIs also enable an orches-
tration system such as OpenStack Neutron to manage network services inside
a cloud environment.
2.6.5.1  Languages for Programming SDN Applications
Development of SDN control applications typically requires programming lan-
guages for specifying network policies and compiling tools for translating the
policies to actions rules, which are then installed by an SDN controller onto
network devices. SDN programming languages may have different levels of
abstraction. Although low-level programming enables developers to deal with
network operation details, high-level programming languages facilitate devel-
opers to fully exploit the advantages of SDN provided through the centralized
control platform with network resource abstraction.
Flow-based management language (FML) is a declarative language that al-
lows expressive description of network connectivity policies in SDN networks.
FML can describe fine granular operations on unidirectional network flows and support expressive constraints on traffic, bandwidth, and so on. A FML policy file consists of a set of declarative statements and may also include some
external references to, for instance, SQL queries. The order irrelevance feature
of FML greatly facilitates combination of multiple independent policies. In
order to cope with conflicts that could be caused by policy combination, FML
provides a conflict resolution mechanism as a layer on top of the core semantics.
FML is written in C++ and Python and uses the operators defined in the NOX
controller [31].
Frenetic is another programming language for SDN applications. Fre-
netic provides a run-time system that translates high-level policies and queries
into low-level flow rules. The high-level abstraction provided by Frenetic is an
SQL-like declarative network query language together with a functional reac-
tive network policy management library. The query language provides means
for accessing network states and classifying and aggregating traffic statistics.
The policy management library provides high-level packet processing operators
that manipulate packets as discrete streams. The library also handles the details
of installing and uninstalling switch level rules [32]. NetCore is a successor of
Frenetic that enriches the policy management library of the latter. NetCore
also adds algorithms for compiling policies and managing interactions between
controllers and switches. NetCore defines a core calculus for manipulating two
components: predicates that match sets of packets and policies that specify des-
tination locations for forwarding these packets [33].
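The following Python sketch mimics the predicate/policy split that NetCore formalizes; it is an illustration of the idea only, not actual FML, Frenetic, or NetCore syntax.

def dst_port(p):                       # predicate: match on TCP destination port
    return lambda pkt: pkt.get("tcp_dst") == p

def src_net(prefix):                   # predicate: match on source address prefix
    return lambda pkt: pkt.get("ipv4_src", "").startswith(prefix)

def forward(port):                     # policy: forward matched packets to a port
    return {"output": port}

# A policy is an ordered list of (predicate, action) pairs.
policy = [
    (dst_port(80), forward(2)),        # web traffic goes to port 2
    (src_net("10.1."), forward(3)),    # traffic from 10.1.0.0/16 goes to port 3
]

def evaluate(pkt):
    """Return the action of the first matching rule, or drop the packet."""
    for predicate, action in policy:
        if predicate(pkt):
            return action
    return {"drop": True}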
Since multiple independent applications may specify policies that are to
be realized by an SDN controller, policy conflict checking and rule consisten-
cy validation is an important aspect of SDN application development. Model
checking is a widely used approach to verifying correctness in software devel-
opment, which has been adopted for policy and rule validation in SDN pro-
gramming. For example, FlowChecker proposed in [34] uses binary decision
diagrams to encode network configurations and models the global behaviors of
a network in a single state machine for conducting “what-if ” analysis to verify
policy and rule consistency. Another example is the “no bugs in control execu-
tion” (NICE) system developed in [35], which takes an SDN application, net-
work topology, and correctness properties as inputs for performing state space
search to check potential property violation.
2.6.5.2  General Design of SDN Applications
In general, SDN application designs can be classified as either in the proactive
mode or in the reactive mode.
Proactive SDN applications make decisions regarding traffic steering for
some predetermined flows based on network topology and state information,
and then request the control plane to perform proactive flow management to
preinstall the action rules at switches for handling those flows. Applications that naturally lend themselves to the proactive mode are usually for some sort of
preconfiguration of the network topology or presetting for traffic engineering
(e.g., applications of source routing and multipath routing).
Proactive applications can be implemented using either language APIs
(e.g., Java API or Python API) or using RESTful APIs. The RESTful APIs pro-
vided by an SDN controller and called by applications can operate at different
levels. They could be low-level APIs that allow applications to directly program
flow table entries at switches. But more typically, RESTful APIs offer a high-
level data plane abstraction so that SDN applications can program network
behaviors based on an abstract view of the underlying network infrastructure.
Proactive applications typically have less frequent interaction with the SDN
controller and relatively loose requirement for the communication delay on A-
CPI. Therefore, such applications have the flexibility to be deployed at servers
that are separated from the controller hosting platform. Virtually all contempo-
rary SDN controllers support RESTful northbound API for communications
with proactive applications hosted on different servers.
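As an example of the proactive style, the sketch below pushes a precomputed flow rule to a controller over a RESTful northbound API using the requests library. The address, resource path, and JSON schema are placeholders, since each controller (OpenDaylight, ONOS, Ryu, and so on) defines its own REST resources.

import requests

CONTROLLER = "http://controller.example.net:8181"    # placeholder address

flow_rule = {
    "switch": "openflow:1",                # hypothetical switch identifier
    "priority": 100,
    "match": {"ipv4_dst": "10.0.2.0/24"},
    "action": {"output_port": 3},
}

resp = requests.post(f"{CONTROLLER}/api/flows",       # hypothetical endpoint
                     json=flow_rule,
                     auth=("admin", "admin"),
                     timeout=5)
resp.raise_for_status()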
Reactive applications typically work with reactive flow management func-
tions of the SDN controller to handle network events. The most common
events are flow table misses, triggered by packets for which the switches find no flow table match. These packets are encapsulated in packet_in messages and
forwarded to the controller and thence to an application. The application ex-
amines the packets and makes decisions regarding how to process the packets.
The outcomes are often setting up new flow table entries at switches that the
new flows are expected to traverse.
Reactive applications typically have frequent interaction with controllers.
Standard RESTful APIs may not have all the capabilities required for the com-
munications between a reactive application and an SDN controller. For exam-
ple, reactive applications often need to be asynchronously notified of incoming
packets forwarded to the controller by switches. It is not straightforward to
implement this kind of asynchronous notification using a basic RESTful API.
Therefore, reactive applications are often written in the programming language
of the controller and leverage the language API to interact with the controller.
Consequently, reactive applications tend to be tightly coupled with a controller
and are typically hosted on the same server as the controller.
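A minimal sketch of this reactive pattern is shown below, written against the Ryu controller framework (chosen here only as an illustration): the application subscribes to packet_in events, examines the table-miss packet, and installs a flow entry so that later packets of the same flow are handled by the switch itself.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class ReactiveApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg                     # the packet reported by the switch
        dp = msg.datapath                # the switch that had the table miss
        ofproto = dp.ofproto
        parser = dp.ofproto_parser
        in_port = msg.match['in_port']

        # Application-specific decision: here we simply flood, and install a
        # low-priority entry so later packets on this port are not punted.
        actions = [parser.OFPActionOutput(ofproto.OFPP_FLOOD)]
        match = parser.OFPMatch(in_port=in_port)
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=1,
                                      match=match, instructions=inst))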

2.6.6  RESTful Northbound API


As we have discussed in previous subsections, although A-CPI still lacks a stan-
dardized protocol defining the interactions between SDN applications and
controllers, as OpenFlow does for D-CPI, RESTful interface has been widely
accepted as the mainstream API that allows the applications to program SDN
controllers.

REST stands for representational state transfer, which is an architectural style that consists of a coordinated set of components, connectors, and
data elements within a distributed hypermedia system. REST focuses on the
component roles and interactions between data elements rather than specific
implementations for components or data elements. REST architectural style
describes a set of constraints that RESTful system designs should follow [36].
A northbound API for SDN programming is expected to adhere to this set of constraints and therefore may be realized by following the REST architectural style.
These constraints are listed as follows.

• Client-server: The client-server relationship must exist to maximize the portability of server-side functions to other platforms. SDN applications
would be the clients and the controller would be the server.
• Stateless: All states are kept on the client side and the server does not re-
tain any record of client states, resulting in a much more efficient design
for SDN controller.
• Caching: To improve performance and scalability, the client maintains
a local copy of information that is commonly used. This decreases the
number of times that SDN applications would have to query an SDN
controller; thus mitigating the overheads on A-CPI, especially when the
application and controller are hosted on different servers.
• Layered system: A RESTful API needs to be built in a way that a client
interacts with its neighbor which could be a server, load-balancer, and
so on, and doesn’t need to see “beyond” that neighbor. By providing a
RESTful northbound API, SDN applications do not have to understand
southbound protocols such as OpenFlow, ForCES, NETConfig, and so
on.
• Uniform interface: Independent from the information retrieved, the
method by which it is presented is always consistent. For instance, a
RESTful API function may return a value from a database.
• Code-on-demand: This is actually an optional constraint of REST to
transmit working code inside an API call. This constraint allows devel-
opers to transmit the necessary code to run on the server side when none
of an API’s functions meets the development requirement.

A purpose of these constraints is to maximize the usefulness of an API to provide services to a large number and variety of clients, which in the case of
SDN is likely to be business logic applications or cloud orchestration such as
OpenStack. Figure 2.20 shows the location of RESTful northbound API in the
SDN architecture.

Figure 2.20  RESTful northbound API in the SDN architecture [37].

2.7  Software-Defined Internet Architecture for Network Service Provisioning
2.7.1  Challenges to Current SDN Architecture for Service Provisioning
One of the key objectives of SDN is to enhance network capability of ser-
vice provisioning in future networks. The challenges to future network service
provisioning mainly come from two aspects—diversity in the network services
required by higher layer applications and heterogeneity in networking tech-
nologies employed in the underlying network infrastructure. Therefore, a key
requirement for future network architecture is to be able to provision a wide
spectrum of services upon infrastructures implemented with various network-
ing technologies. Such a requirement calls for the capability of architectural
evolution in future networks; that is, the future network should be able to easily
adopt new architectures required by new services without being constrained by
the specific technologies employed in the underlying network infrastructure.
However, the current network designs suffer ossification caused by tight
coupling between architecture and infrastructure, thus lacking the capabil-
ity for fully supporting architectural evolution. In this context, network ar-
chitecture represents globally agreed regulations that determine how data are
transferred through the network (e.g., the IP protocol). Infrastructure refers
to the physical equipment used to build networks, such as switches, routers,
and transmission media. In current networks, architecture and infrastructure are tightly coupled in that a network domain can run a certain architecture only
if the architecture is explicitly supported by the network infrastructure of that
domain. Such tight coupling implies that any significant architectural change
requires major upgrade of network infrastructure, which is often very expensive
and time consuming [38].
SDN offers a promising approach for meeting the service provisioning re-
quirements for future networks. The key principle of separated data and control
planes and logically centralized controllers in SDN may significantly enhance
network capabilities for service provisioning. However, the current SDN tech-
nologies still have limitations that may prevent network designers from fully ex-
ploiting potentials of this emerging networking paradigm for supporting future
network services. The fundamental issue of architecture and infrastructure cou-
pling still exists in the current SDN architecture. For example, adopting alter-
native network architecture requires the SDN data plane to be able to perform
fully general packet matching and forwarding actions, which is not supported
yet by currently available SDN D-CPI protocols (e.g., OpenFlow).
A network architecture essentially provides three types of interfaces: host-
network interface, operator-network interface, and packet-switch interface.
The host-network interface allows the hosts (actually users of the hosts, includ-
ing upper layer applications running on the hosts) to inform the network of
their service requirements (e.g., using packet headers to specify the destinations
of data transfer). The operator-network interface is used by the network op-
erators to specify their requirements regarding network operations (e.g., traffic
engineering, virtualization, tunneling, and so on). The packet-switch interface
determines how a packet is identified and thus processed by a switch (e.g., some
packet header fields are used as an index to look up flow tables at the switch for
determining the appropriate actions to be taken for the packet) [39].
In the original IP-based Internet design, the host-network interface and
packet-switch interface are essentially identical; both rely on information car-
ried in IP packet headers. Therefore, each router checks packet header fields to
interpret service requirements (e.g., the destinations for data delivery) as well as
determine appropriate forwarding action (e.g., the next-hop router interface for
packet forwarding). There is no explicit operator-network interface provided by
the IP-based Internet design.
Evolution of networking technologies has led to label-based packet
switching mechanisms, with MPLS as the representative example. MPLS dis-
tinguishes the host-network interface and packet-switch interface by decou-
pling packet labels used for data transportation from the host protocol used for
specifying service requirements. However, MPLS does not provide a general
operator-network interface.
Lack of standard operator-network interface causes complex and inflexi-
ble control and management when networks grow into a large scale and attempt to support more diverse applications. SDN essentially enables an interface for the operators to specify their requirements on network behaviors through a
centralized controller that is separated from the data plane. However, there is no
explicit distinction between the host-network interface and the packet-switch
interface in current SDN design. Much like the original Internet design, each
SDN switch must examine the original packet headers generated by the source
hosts in order to make forwarding decisions. Therefore, users’ expression of
service requirements and switches’ capabilities of packet forwarding are closely
related to each other, which essentially leads to the issue of tight coupling be-
tween network architecture and data transportation infrastructure [39].

2.7.2  Software-Defined Internet Architecture


The issue of coupled architecture and infrastructure in current SDN design has
been recognized by the research community. In order to address this issue, a
new SDN architectural framework called software-defined internet architecture
(SDIA) is proposed in [38]. This architectural proposal is centered on the idea
of separating the SDN architecture into two parts: the network edge and the
network core. As shown in Figure 2.21, a network with SDIA comprises three
types of components: hosts that act as sources and destinations of packets, the
network core whose primary purpose is for packet transfer, and the network
edge comprising switches that serve as both ingress and egress elements of
the network core.
A key feature of SDIA lies in the separation of network edge and core
for both packet forwarding and control (i.e., the architecture decouples the
functionalities of network edge and core on both the data plane and the control
plane). Please note that such decoupling advocated by this SDIA framework
is not achievable directly from the separation of data and control planes in the current SDN architecture. It is interesting to note that separation of both forwarding and control functions of the network edge and core in the SDIA framework essentially embraces the notion of network virtualization—decoupling service-related functionalities from data transport-oriented infrastructures, on which we will elaborate in Chapter 3.

Figure 2.21  Separation of network edge and core in SDIA [39].
In SDIA, the network edge and core are controlled by separated control-
lers because the edge and core functions focus on solving different problems.
The core is responsible for packet transport, while the edge is responsible for
providing complex services required by network users. Separating the control
planes for edge and core allows them each to evolve separately, focusing on the
specifics of their own problems. A network core should be able to support any
type of edge design including addressing schemes and policies. On the other
hand, an edge design should be able to work with any network core regardless
of its internal implementation.
Essentially, the SDN controller for network edge provides the operator-
network interface. The ingress edge switches, along with the edge controller,
handle the host-network interface. The switches in network core are where
packet-switch interface is realized for identifying and forwarding packets.
Through separation of edge and core in SDIA, the host-network interface for
specifying service requirements and the packet-switch interface for forwarding
packets are distinguished and decoupled. Therefore, SDIA allows alternative
architecture for meeting various service requirements to be deployed in network
edge independently with the transport technologies employed in the network
core.
In order for the network core to remain decoupled from the edge it should
provide a minimal set of forwarding primitives without exposing any detail
of its internal forwarding mechanisms. There are multiple aspects of techni-
cal requirements for achieving this objective. It is particularly important that
the addressing scheme for the edge should not be used in the core for making
packet forwarding decisions. The network edge may use the current IP ad-
dress or adopt new address schemes of newly developed protocols. The network
core may choose any scheme for identifying packets that is appropriate to the
infrastructure implementation in the core. MPLS-like label-based switching is
a possible approach to supporting a general forwarding model in the network
core. With this approach, the ingress edge switches map the addresses in packet
headers (e.g., IP addresses) to some labels associated to the packets and attach
the labels to packets. However, it is worth noting that the SDIA framework in
principle is not concerned with specifics of the network core implementation,
thus allowing the core to evolve independently of the edge and permitting multiple forwarding models to exist simultaneously [39].

The SDIA framework does not specify any particular implementation ap-
proach for either the network edge or the core; instead, the framework enables
various implementation technologies to be developed and deployed indepen-
dently in these two network components. On the other hand, software-based
forwarding is suggested for edge switches (i.e., performing packet forwarding
and processing functions at edge switches using software running on commod-
ity processors). Interestingly, such an implementation suggestion is aligned with
the concept of network function virtualization (NFV), which advocates real-
izing network functions as software instances hosted on commodity servers.
Since all architectural dependencies in SDIA reside at the edge, they can be
easily modified if the edge switches and edge controller are implemented based
on software. Therefore, the SDIA essentially transforms architectural evolution
from a hardware problem to a software problem [38]. In addition, employing
NFV in the network edge will enable the protocols supporting alternative net-
work architectures to be realized as virtual network functions (VNFs) that can
be deployed and upgraded easily. This offers a promising approach to support-
ing multitenant SDNs with alternative architectures for meeting diverse service
requirements by sharing a common network core infrastructure that may be
implemented with various transportation technologies.

2.7.3  End-to-End Service Provisioning in Software-Defined Internet Architecture


End-to-end service provisioning in the Internet typically requires collabora-
tion of multiple autonomous network domains. The SDIA framework provides
network designers with a top-down perspective for designing new networking
technologies for supporting interdomain end-to-end service provisioning in fu-
ture networks. Such a top-down perspective allows network designers to first
look at how to decompose Internet services into well-defined tasks, and then
consider how to implement each task in a modular fashion. As an example, consider a case of data delivery from host X in domain A to host Y in domain
B, the end-to-end connectivity service between X and Y may be decomposed to
the following tasks [38]:

• Interdomain task: this is the high-level task for forwarding packets from
domain A to domain B, which may require traversing one or multiple
intermediate domains.
• Intradomain transit task: a domain must be able to forward packets
from an ingress peering domain to an egress peering domain to support
the interdomain task.
• Intradomain delivery task: domain A must forward packets from host
X to the edge, where the interdomain task takes over; domain B must
forward packets from the edge of this domain to host Y. In addition, a domain should be able to deliver packets between any pair of hosts within the domain.
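A sketch of this decomposition, using only domain identifiers at the interdomain level, is given below; the names and the simple breadth-first route computation are illustrative assumptions, not part of the SDIA proposal.

from collections import deque

def interdomain_route(src_domain, dst_domain, domain_graph):
    """Interdomain task: compute a domain-level path (e.g., ['A', 'T', 'B'])
    by breadth-first search over the graph of peering domains."""
    frontier, seen = deque([[src_domain]]), {src_domain}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst_domain:
            return path
        for nxt in domain_graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

def decompose(src_host, dst_host, src_domain, dst_domain, domain_graph):
    """Map an end-to-end delivery onto the three task types listed above."""
    path = interdomain_route(src_domain, dst_domain, domain_graph)
    if path is None:
        return None, []
    tasks = [("intradomain-delivery", src_domain, src_host, "edge")]
    for transit in path[1:-1]:
        tasks.append(("intradomain-transit", transit))
    tasks.append(("intradomain-delivery", dst_domain, "edge", dst_host))
    return path, tasks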

A key objective of the SDIA proposal is to make the realization of all the required tasks, and thus the entire end-to-end service, independent of the implementation technologies employed in any network domain. Follow-
ing the architectural principle of SDIA, the network of each domain is sepa-
rated into an edge and a core. The core of a network domain may use any
internal forwarding and control mechanisms that the domain chooses, rang-
ing from SDN to conventional intradomain routing/forwarding protocols, as
long as they support intradomain tasks, including edge-to-edge, host-to-edge,
and host-to-host packet forwarding. The interdomain task will be implemented
through collaboration among the edge switches of different domains.
A key to the interdomain task is to compute interdomain routes and then
provide instructions to each domain in terms of domain-level packet forward-
ing. An important aspect of the SDIA framework is to suggest a strict separation
between interdomain and intradomain addressing, which will be handled by
the network edge and core, respectively, and each domain may choose its own
internal addressing scheme. An interdomain addressing may be based on some
form of domain identifiers. The entire interdomain task should be realized by
just leveraging the domain identifiers without the need of knowledge for any
intradomain address.
An SDIA-based end-to-end network service delivery system is illustrated
in Figure 2.22. In such a service delivery system, only edge switches and edge
controllers need to understand interdomain protocols, including the interdo-
main addressing scheme. The network core of each domain is only responsible
for providing the intradomain tasks by using its own choice of forwarding,
routing, and addressing schemes. Each domain has one (logically centralized) SDN controller (i.e., the edge controller) that controls all the edge switches of the domain. The edge controller participates in interdomain route computation and then requests the network core to provide the required intradomain tasks for supporting interdomain service delivery.

Figure 2.22  End-to-end network service delivery in SDIA.
An important aspect of SDIA-based end-to-end service provisioning is
that the only components involved in the interdomain task are the edge con-
trollers and edge switches. This design has some profound implications on ar-
chitectural evolution of SDN for supporting future network services. Evolution
of interdomain routing (e.g., changing from the current BGP to a new rout-
ing protocol) only requires changing software in the edge controllers and edge
switches to implement the new routing protocol. Therefore, deployment of
new interdomain protocols for meeting new service requirements is simplified
[39]. When network edges are implemented with NFV technologies, functions
for various interdomain protocols can be easily deployed as VNFs. This allows
multiple interdomain protocols, and more fundamentally alternative network
architectures, to coexist in the network. In addition, VNFs for various protocols
may be loaded onto and unloaded from different edge switches on demand.
Such on-demand deployment of network protocols may significantly enhance
network capability of supporting adaptive and elastic services, which is a key
expectation of future network service provisioning.

2.8  Protocol Independent Layer in SDN for Future Network Service Provisioning
2.8.1  Limitation of the Current OpenFlow-Based SDN Data Plane
In order to meet diverse service requirements, the data transport infrastructure
in the future networks must be able to support various networking protocols,
including currently existing protocols as well as protocols that may be devel-
oped in future. This calls for a flexible and programmable forwarding plane that
is oblivious to any particular network protocol, thus enabling fully decoupled
network control and data forwarding functionalities. The currently available
SDN D-CPI standard (e.g., the OpenFlow specification), while separating data
and control planes, has not fully achieved protocol-oblivious packet forwarding
yet.
The OpenFlow specification, the current de facto standard of D-CPI,
plays a key role in enabling control and forwarding separation in the SDN ar-
chitecture. In OpenFlow, controllers and switches communicate at the protocol
semantics level, which requires switches to understand specific packet formats
defined by the protocol in order to extract search keys and execute the packet
processing pipeline. Such protocol-awareness assumption about data plane de-
vices significantly limits OpenFlow capability of supporting new features in current protocols or adopting new protocol standards, let alone applying user-
defined protocols for meeting diverse service requirements. As a consequence,
expanding the capability of OpenFlow to support additional networking tech-
nologies leads to continual modification of the specification and makes Open-
Flow protocol more and more complicated over time. However, it still offers
little support for clean slate solutions expected in future networks.
Another challenge that OpenFlow is facing comes from its limited sup-
port of stateful network processing that can be performed on data plane devices.
Current OpenFlow standard lacks the capability of actively monitoring flow
states and programming switch operations without the involvement of a con-
troller. Relying on the controller for tracking and managing all network states
not only causes scalability and performance issues, but also limits the agility of
network infrastructure for dynamic service provisioning.
SDN control and management on D-CPI can be divided into two dis-
tinct stages: datapath configuration and run-time control. The configuration
stage defines the packet processing functions that can be provided by a switch;
while the run-time control stage uses these functions to control traffic flows
through the switch. Currently, OpenFlow mainly focuses on the run-time con-
trol stage, which controls switch operations for packet forwarding by managing
flow tables in switches. An OpenFlow switch can only recognize a set of pre-
determined packet header fields and perform predefined actions for processing
packets. The protocol(s) that can be supported by a switch (thus the packet
header formats that the switch can recognize and process) are typically prede-
termined by the switch implementation and cannot be easily configured by a
controller.
Until recently, the development of the OpenFlow specification has been fol-
lowing a reactive rather than proactive path by continuously adding new pro-
tocol features in new versions. Recently, the SDN research community realized
that a different approach to evolving the D-CPI standard is needed for meeting
the requirements of future network services. The new approach should sup-
port a fully programmable SDN data plane for performing protocol-oblivious
packet processing with supported protocols that can be easily configured by a
controller. That is, the new approach should allow both datapath configuration
and run-time control of SDN switches to be programmable through an SDN
controller.

2.8.2  Protocol-Independent Layer and Abstract Model for Packet Forwarding


In order to achieve full decoupling between packet forwarding and network
control for supporting future network service provisioning, ONF recently pre-
sented a proposal for protocol-independent layer (PI-layer) in a technical report
[40]. The proposed PI-layer aims at expressing both existing and new datapath protocols with an abstract model for packet forwarding engine, which provides
an abstraction of data plane functionalities.
The position of PI-layer in the SDN architecture is shown in Figure 2.23.
This layer is between the control plane and data plane, therefore playing the
role of D-CPI. Unlike current OpenFlow-based D-CPI protocol that focuses
on run-time control of data plane switches, PI-layer allows SDN applications
and network controllers to perform datapath configuration and express much
more flexible packet processing functionalities. Specifically, the PI-layer is pro-
posed for achieving the following three goals [40]:
Protocol independence: the PI-layer should allow an SDN controller to
program data plane devices without being tied to specific network protocols
(and thus packet formats). The controller should be able to configure (a) a
packet parser for extracting header fields with particular names and types, and
(b) a collection of typed match-action tables that process these headers.
Target independence: the PI-layer should enable SDN controllers and ap-
plications to program data plane devices without knowing specifics of the de-
vices, thus allowing switches with heterogeneous implementations to coexist in
the data plane.
Reconfigurability: the PI-layer should allow an SDN controller to recon-
figure the packet parsing and processing functions that are performed by data
plane devices in the field.
As summarized in [40], these goals lead to the following three guiding
principles for the PI-layer.

• The PI-layer should be protocol neutral. It should provide a language for defining packet header formats, the parsing rules for interpreting packet headers, and datapath actions for processing packets. The PI-layer should be based on a protocol-oblivious primitive instruction set that can be used for programming datapath processing to support both existing and new protocols.
• The PI-layer should help create an SDN software development ecosystem. The PI-layer should allow a programmer to define behaviors of SDN switches using a high-level language and then use a compiler to automatically generate switch configurations for various switch platforms.
• The PI-layer should support the existing OpenFlow specification. The proposed PI-layer enables a general D-CPI that may lead to new datapath protocols different from the existing OpenFlow; however, the PI-layer should be designed to be backward compatible to allow effective evolution of the existing OpenFlow protocol.

Figure 2.23  Protocol independent layer in the SDN architecture.

A key to a successful design of the PI-layer is to make the interface between data and control planes work at the right level of abstraction. In particular, the
interface should not be tied to any particular datapath protocol or switch imple-
mentation. Instead, it should allow control programs to be easily mapped to any
target device while supporting specific optimization to fully exploit the device-
specific capability. Toward enabling an appropriate level of data plane abstrac-
tion for supporting the PI-layer, an abstract model for packet forwarding in an
SDN switch has been proposed together with the PI-layer in [40].
The abstract model for packet forwarding engine, as shown in Figure 2.24,
is based on flow match-action tables containing rules for packet processing. In
this model, a switch forwards packets via a programmable parser followed by multiple stages of match-action table processing, arranged in a certain series. For each received packet, the switch first parses the packet header to obtain the initial header contents. The parser recognizes and extracts some fields from the packet header, thus defining the protocols that can be supported by the switch. The model makes no assumption about the meaning of protocol headers, only that the parsed representation defines a collection of fields based on which the match-action process is performed. Packet processing proceeds through a sequence of match-action table stages, each of which may take actions for accessing packet data and may perform additional packet parsing. Finally, the packet is assigned to a queue and scheduled for being sent out from an output port.

Figure 2.24  An abstract model for packet forwarding in SDN switch [40].
Compared to OpenFlow, the proposed abstract model of packet forward-
ing makes generalization in the following aspects. First, OpenFlow assumes a
fixed parser for a specific set of supported protocols, whereas the abstract model
has a programmable parser that enables the switch to process new packet head-
ers defined by new protocols. Second, OpenFlow assumes that match-action
stages are in series, whereas the abstract model allows them to be either in par-
allel or in series. Third, the abstract model assumes that actions are composed
from protocol-independent primitives, thus allowing the switch to perform
more general packet processing functions.
Programming an abstract forwarding engine through the PI-layer consists
of two types of operations: configuration and run-time control. Configuration
operations program the parser, set the order of match-action stages, and specify
the header fields processed by each stage. Configuration for a switch can be
described by a program written in some sort of high-level language for switch
programming, which is then compiled to low-level protocol-oblivious instruc-
tions that can be loaded and executed in the switch. After the configuration is in
place, run-time control operations can add, remove, or modify table entries in
the switch and pass packets between the switch and the controller. Essentially,
configuration defines the protocols that an SDN switch supports and how the
switch may process packets; while run-time control determines the policy ap-
plied to packets at any given time after the switch starts operating with the
configuration.
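The toy model below illustrates this two-stage view: a configuration step fixes which header fields the parser extracts and which match-action tables exist, and run-time control then populates the tables. It is only a sketch of the abstraction, not an implementation of the abstract forwarding engine.

class AbstractSwitch:
    def __init__(self):
        self.parser_fields = []        # header fields the parser extracts
        self.tables = {}               # table name -> list of (match, action)

    # --- configuration stage: defines the protocols the switch supports ---
    def configure(self, parser_fields, table_names):
        self.parser_fields = list(parser_fields)
        self.tables = {name: [] for name in table_names}

    # --- run-time control stage: policy applied to packets ---
    def add_entry(self, table, match, action):
        """match is a dict of field values; action is a callable on headers."""
        self.tables[table].append((match, action))

    def process(self, packet):
        headers = {f: packet.get(f) for f in self.parser_fields}   # parsing
        for entries in self.tables.values():                       # table stages
            for match, action in entries:
                if all(headers.get(k) == v for k, v in match.items()):
                    return action(headers)
        return "drop"

# Example use: configure an L2 stage, then add one forwarding entry.
# sw = AbstractSwitch()
# sw.configure(["eth_dst"], ["l2"])
# sw.add_entry("l2", {"eth_dst": "aa:bb:cc:dd:ee:ff"}, lambda h: "output:2")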
This abstract model for packet forwarding engine provides an abstraction
of SDN data plane devices upon which the controller can program the devices
to implement various network protocols for meeting diverse service require-
ments. An analogy can be made between the abstraction provided by this model
for SDN packet forwarding and the abstraction provided by the PC hardware
architecture to run various software programs. The application-agnostic nature
of PC architecture allows almost any application program to utilize the general
CPU and other hardware resources through a standard instruction set. An op-
erating system abstracts hardware resources and presents a high-level function
interface to applications, which allows hardware to perform basic instructions without knowing any application specific information. The PI-layer aims at providing similar abstraction for the SDN data plane in order to support vari-
ous network protocols without being constrained by data plane implementa-
tion technologies.
Realization of the PI-layer proposal requires enabling technologies in two
aspects: (a) protocol-oblivious packet forwarding (POF), and (b) programming
protocol-independent packet processing (P4), which will be discussed in the
next two subsections.

2.8.3  Protocol Oblivious Packet Forwarding


POF technology offers a promising approach to making packet forwarding in
SDN switches totally protocol oblivious through a standard flow instruction set
(FIS) [41]. With POF, SDN switches have no need to understand the packet
format. All a switch needs to do is to extract and assemble the search keys from
packet headers, look up flow tables, and then execute the associated instruc-
tions, which are executable codes written in FIS or compiled from FIS. As a
result, SDN switches can easily support any new feature added to existing pro-
tocols as well as new protocols developed in future for supporting new services.
Although POF follows a similar packet process pipeline as the current
OpenFlow standard, it introduces some significant changes. In current Open-
Flow, the search key assembly is conducted by explicitly specifying the tar-
get header fields (e.g., IPv4 destination address or Ethernet source address),
which requires switches to understand the packet format in order to extract
the required header bits. In contrast, POF allows packet parsing operation to
be programmed by the controller through a sequence of generic key assembly
and table lookup instructions. A search key is simply defined by one or more
{offset, length} tuples, where offset indicates the skipped bits from the current
cursor within the packet and length indicates the number of bits that should be
included in the key. Similarly, POF makes packet forwarding and processing
operations protocol agnostic. Instead of having actions like push MPLS label or
decrement IP TTL as in current OpenFlow, which requires semantic interpreta-
tion of a particular protocol, POF instructions employ {offset, length} tuples
to locate the packet or metadata that need to be manipulated (e.g., inserting,
deleting, and rewriting some packet header fields [41]).
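The following sketch shows what protocol-oblivious key assembly amounts to: a search key is built purely from {offset, length} tuples over the raw packet bits, so the switch needs no knowledge of the protocol that produced those bits.

def extract_bits(packet: bytes, offset: int, length: int) -> int:
    """Return `length` bits starting `offset` bits into the packet."""
    as_int = int.from_bytes(packet, "big")
    shift = len(packet) * 8 - offset - length
    return (as_int >> shift) & ((1 << length) - 1)

def assemble_key(packet: bytes, tuples):
    """Concatenate the fields selected by a list of (offset, length) tuples."""
    key = 0
    for offset, length in tuples:
        key = (key << length) | extract_bits(packet, offset, length)
    return key

# Example: in an untagged Ethernet/IPv4 frame the IPv4 destination address
# starts 240 bits into the frame; to the switch it is just another tuple.
# key = assemble_key(frame, [(240, 32)])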
Realization of POF requires a generic FIS that is carefully designed to be
platform independent and sufficient to compose any network service from the
control plane. Each instruction in FIS represents a primitive in order to allow
efficient hardware implementation. Constructing network services with FIS is
like writing application programs in an assembly language. This level of com-
plexity can be hidden by using a high-level P4 language and function libraries
that can be compiled into machine code. A FIS for POF has been developed and reported in [42], where the instructions are grouped and summarized as
follows.
Editing Instructions
These kinds of instructions are used to edit packet data or packet metadata.
Editing packet data is an important function of the packet forwarding engine
that is required by almost all datapath protocols. SET_FIELD, ADD_FIELD,
and DEL_FIELD are three of the most useful instructions in this group. By us-
ing these three instructions, a control program can define a customized packet
field for forwarding processing. The remaining editing instructions, such as
ALG, INC_FIELD, DEC_FIELD, CALCULATE_CHECKSUM, and some
other logical operation instructions, all perform some kinds of calculating of
the packet data.
Forwarding Instructions
The instructions in this group are used for packet forwarding. The whole for-
warding process for one packet in an SDN switch might contain multiple stages, each of which has different types of flow tables according to the functionality (e.g.,
layer-3 parse table and layer-3 encapsulate table). When the processing in a flow
table is complete, GOTO_TABLE instruction can be used to transfer the pack-
et data to the next flow table. COUNTER instruction can track the number
of packets that have already been processed. OUTPUT instruction sends the
packet out of the switch through the specified port. MOVE_PACKET_OFF-
SET and SET_PACKET_OFFSET can be used to specify a header field in a
particular location in the packet. For example, when SET_PACKET_OFFSET
instruction sets the packet offset to be 112 bits, it specifies the start position of
IPv4 header field in a normal Ethernet frame.

Flow Entry Instructions
This group of instructions is included in FIS to allow an SDN switch to make
decisions on updating some flow table entries by itself (instead of merely fol-
lowing commands issued by a controller). SET_TABLE_ENTRY instruction
sets the parameter and match information of flow entries. ADD_TABLE_EN-
TRY and DEL_TABLE_ENTRY instructions are respectively for inserting a
new entry into a flow table and deleting an existing entry from a flow table.
With these instructions, an SDN switch is able to automatically add, delete,
and modify flow table entries when some conditions are met. These instructions allow SDN switches to define autonomous datapath functions and
enable delegation of a variety of control capabilities to switches while maintain-
ing the SDN principle of separation between control and data planes.
More detailed information about this FIS design may be found online at
http://www.poforwarding.org.

POF FIS acts on controllers, D-CPI, and data plane devices; therefore,
it is independent of A-CPI. An SDN controller can provide various types of
A-CPI to users or applications. The controller is responsible for translating the
high-level policies specified by applications into POF instructions and loading
them to data plane devices. A-CPI independence provides the flexibility and
diversity required by SDN for supporting future network services. POF FIS
is not designed for any specific service or application. Various services can be
implemented through different combinations of the same set of instructions
and every instruction in the FIS can be used in realization of various services.

2.8.4  Programming Protocol-Independent Packet Processing


SDN control applications need to be converted to low-level instructions that
are executable on data plane devices. It would be handy to have some high-
level language to program the devices. Please note that such languages discussed
here are for device-level programming rather than writing control application
programs. A high-level device programming language provides another layer of
abstraction for data plane infrastructure that allows developers to focus on de-
scribing high-level switch functions rather than conducting low-level flow table
manipulation with specific knowledge of particular switch implementations.
Programming languages for SDN control have attracted researchers’ in-
terest, and progress in such languages has been reported in the literature. How-
ever, many of them mainly focus on network-wide policy deployment on data
plane devices built with predetermined functions and protocols. Realization of
the proposed PI-layer requires a high-level language for programming protocol-
independent packet processing (P4) with some important new features. For
example, the language needs to specify how the parser is programmed for han-
dling general packet formats for supporting various protocols. This requires the
language to provide a mechanism for declaring types of packet headers. The
language should also be able to express the processing actions to be performed
on packet headers (e.g., defining an imperative control flow using the declared
header types and a primitive set of actions to describe header field processing).
In addition, such a language must allow the programmer to express, either ex-
plicitly or implicitly, any dependence relationship between header fields.
Recently, a P4 programming language was developed for meeting these
requirements. This language contains definitions of the following key compo-
nents [43]:

• Header: a header definition declares the sequence and structure of a series of packet header fields and also specifies the widths of fields and
constraints on field values;

• Parser: a parser definition specifies how to identify header fields and valid header field sequences within packets;
• Table: the language defines the fields on which a table entry may match
and the actions to be executed for each table entry matching;
• Action: the language supports construction of complex packet process-
ing functions using simple protocol-independent primitive actions and
makes the actions available with match-action tables;
• Control Program: a control program determines the order in which
match-action tables are applied to a packet and describes the control
flow between match-action table stages.
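For illustration only, the sketch below renders these components as Python data structures; it is not P4 syntax, but it mirrors the roles of headers, parser, tables, actions, and the control program.

# Header types: field names and widths in bits.
ethernet = {"fields": [("dst", 48), ("src", 48), ("ethertype", 16)]}
ipv4 = {"fields": [("version_ihl", 8), ("dscp_ecn", 8), ("length", 16),
                   ("other", 64), ("src", 32), ("dst", 32)]}

# Parser: which header is extracted first and what may follow it.
parser = {
    "start": ("ethernet", {0x0800: "ipv4"}),   # ethertype 0x0800 -> parse ipv4
    "ipv4": ("ipv4", {}),
}

def set_egress(metadata, port):                # primitive action
    metadata["egress_port"] = port

# Match-action table: what it matches on and which actions it may invoke.
ipv4_lpm = {
    "match": [("ipv4.dst", "lpm")],
    "actions": [set_egress],
}

# Control program: the order in which tables are applied to a packet.
control = ["ipv4_lpm"]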

Dependence between header fields and thus between match-action tables is a core part of control flow description for packet processing, which determines
what tables must be processed in sequence and what tables may be processed in
parallel. For example, sequential execution is required for an IP routing table
and an ARP table due to the data dependency between them. Dependence can
be identified by analyzing table dependence graphs (TDGs), which describe the
field inputs, actions, and control flow between tables. However, programmers
often think of packet processing algorithms using imperative constructs rather
than graphic representations such as TDGs. Therefore, the P4 language uses a
two-step compilation process. The language first allows programmers to express
packet processing programs using an imperative language representing the con-
trol flow; then a compiler translates the representations to TDGs to facilitate
dependency analysis [43].
A main design goal of P4 language is to be platform independent in order
to offer a generic method for programming data plane devices with various
implementations for supporting the protocols required by diverse network ser-
vices. On the other hand, different packet forwarding technologies are expected
to coexist in SDN data plane, at least in the near future. Each of them may have
some special features such as customized hardware-acceleration mechanisms
for facilitating hardware resource provisioning. A general P4 language without
considering any specific feature of the target data plane platform could bring in
significant challenges to the compiler, which may lead to poor performance or
even failure for translating high-level programs to low-level instructions that are
to be performed on switch hardware.
A possible approach to addressing such challenges is to have some pre-
processor directives for the P4 language, which guide the compiling process by
considering the special features of the target data plane platform. In this way,
a platform independent compiler compiles a control program written in the general P4 language by calling some platform-optimized libraries. The compiling process first generates an intermediate representation and then platform-
specific executable codes that are loaded onto data plane devices.

2.8.5  An Ecosystem for SDN Data Plane Programming


An ecosystem of SDN data plane programming may be formed based on the
PI-layer. Such an ecosystem includes languages (e.g., the P4 language and POF-
FIS) for configuring the datapath protocols supported by switches, libraries of
reusable switch configuration modules for specific datapath protocols, a run-
time API for controlling a configured switch, and a D-CPI protocol for provid-
ing communications between an SDN controller and data plane devices [40].
Many different approaches may be taken for programming an SDN
switch. These approaches are derived from programmers’ choices and driven
partially by the capabilities of the target platforms. As summarized in [40],
representative approaches of SDN data plane programming include standard-
based approach, interactive approach, and customized approach, which are il-
lustrated in Figure 2.25.

Figure 2.25  SDN programming approaches [40].

The standard-based approach is for controlling data plane devices that
have been preconfigured for realizing standard datapath protocols. For an SDN
switch whose behaviors are defined by a published standard, one can specify a
datapath model for the switch based on the standard. The packet header for-
mat, parsing rules, actions, and flow table processing pipeline can all be pre-
defined and supported by standard libraries.
With an interactive approach, all aspects of the configuration and run-
time control of data plane devices, including defining packet format, flow table
match rules, packet processing actions, and so on, are provided through the PI-
layer. This approach allows network operators to adjust switch datapath model
to support various protocols; therefore, it is most applicable to networking sce-
narios where configurability for dynamic network operations plays a key role in
service provisioning.
The customized approach allows network architecture and associated
switch datapath models to be developed and tested before being deployed in
a production network. Datapath models for SDN switches can be customized
to particular network services by a program that is written in P4 language and
compiled into the configuration files loaded on the switches. This approach
may require a controlled upgrading procedure for making any change to the
datapath configuration in a production network.
In order to achieve a balance between an abstract programming model for
supporting various datapath protocols and performance optimization tailored
for specific data plane device platforms, the compiling procedure that trans-
forms control programs written in a generic P4 language to the executable codes
running on data plane devices can be split into two stages. The first stage is
platform independent, in which the compiler parses a P4 language program
and transforms the program into an intermediate representation based on the
POF FIS. The second stage is platform specific in order to take advantage of
the technologies developed in various data plane devices for enhancing datapath
performance; for example, mapping header specifications to flexible parse en-
gines and mapping match-action tables to platform-specific pipeline processes
and memory blocks with varying capabilities (e.g., DRAM, SRAM, TCAM).
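
The skeleton below sketches this two-stage flow in Python; the function names, the toy intermediate representation, and the target names are purely illustrative assumptions and do not correspond to any actual P4 or POF compiler.

    def frontend(p4_source):
        # Platform-independent stage: parse the program and emit a generic
        # intermediate representation (a flat list of instructions here,
        # standing in for a POF FIS-style IR).
        return [{"op": "generic", "text": line.strip()}
                for line in p4_source.splitlines() if line.strip()]

    def backend(ir, target):
        # Platform-specific stage: map generic instructions onto the resources
        # of a concrete device (e.g., TCAM entries versus software tables).
        table_memory = {"hw_switch": "tcam", "npu": "sram"}.get(target, "dram")
        return [{"memory": table_memory, "instr": i} for i in ir]

    ir = frontend("table ipv4_lpm { reads { ipv4.dstAddr: lpm; } }")
    executable = backend(ir, target="hw_switch")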
The PI-layer enables protocols to be specified by programs written in P4
language (e.g., one program for IPv4 and other programs for GRE, VLAN,
and so on). It is desirable to organize the programs in libraries and allow switch
developers to combine protocols they need from several libraries, which may
support code reuse and facilitate rapid prototyping of SDN networks. For this
to be possible, the libraries need to be “composable” in the sense that each pro-
tocol library is self-contained and may be assembled with others without modi-
fication. In addition, control applications have been developed and optimized
for some particular types of data plane devices for achieving the best possible
performance. In order to reuse the available platform-specific applications, such
programs may be precompiled and provided in a library by any third party and
used by the compiling process [40].
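
The fragment below sketches, in Python, what such composability could look like: each protocol module is self-contained, and a switch profile is assembled by selecting modules from one or more libraries. All module contents are invented for illustration.

    # Self-contained protocol modules, as they might be shipped in a library.
    ipv4 = {"name": "ipv4", "fields": [("version", 4), ("ttl", 8), ("dstAddr", 32)]}
    vlan = {"name": "vlan", "fields": [("pcp", 3), ("vid", 12), ("etherType", 16)]}
    gre = {"name": "gre", "fields": [("flags", 16), ("protocolType", 16)]}

    def assemble_datapath(*modules):
        # Because modules do not reference one another, they can be combined
        # without modification into a single switch configuration.
        return {"protocols": [m["name"] for m in modules],
                "headers": {m["name"]: m["fields"] for m in modules}}

    l3_profile = assemble_datapath(ipv4, vlan)      # a basic L2/L3 profile
    tunnel_profile = assemble_datapath(ipv4, gre)   # a GRE tunneling profile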


2.8.6  ONF Forwarding Abstractions Working Group


It is worth noting that the work done by the forwarding abstractions working
group (FAWG) in ONF is also relevant to enhancing SDN data plane flexibility
and programmability for better support of new network protocols required by
future network services, which essentially shares a similar objective with the
proposal of PI-layer with POF and P4 programming.
The initial incentive for forming FAWG in ONF was the slow adoption
of OpenFlow specifications in network device development. Several efforts have
been undertaken to analyze the OpenFlow standard in the context of switch
implementation. These efforts have identified some issues that may prevent
rapid adoption of OpenFlow, among which is the lack of an effective approach
for specifying switch-level behaviors before run-time. Analysis results indicated
that the current OpenFlow standard provides a framework in which controllers
request switch operations at run-time. This framework focuses on specification
of low-level actions that switches perform rather than description of high-level
switch behaviors required for supporting network services.
In order to address this challenge and promote adoption of OpenFlow
in SDN switch development, ONF formed FAWG with an objective to en-
able prerun-time description of switch-level behavioral abstraction. Describing
switch-level behaviors in advance of run-time allows switch developers to obtain
information about which details of low-level actions are relevant to the required
switch-level operations. FAWG plans to first focus on developing a means for
sharing well-known table type patterns (TTPs) in advance of run-time for eas-
ing the mapping from behavioral descriptions expressed in match-action tables
to existing hardware implementations. A TTP consists of the linkages between
tables, the types of tables, a set of the parameterized table properties for each ta-
ble, the legal actions for modifying tables, and the metadata that can be passed
between each table pair [44].
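
For illustration only, the fragment below sketches a drastically simplified, TTP-like description in Python; real ONF TTPs are richer JSON documents, but the fields shown mirror the components listed above.

    simple_ttp = {
        "name": "L2-L3-ACL",                      # a hypothetical pattern name
        "tables": [
            {"name": "vlan_table", "type": "exact",
             "match_fields": ["in_port", "vlan_vid"],
             "legal_actions": ["write_metadata", "goto_table"]},
            {"name": "ipv4_table", "type": "lpm",
             "match_fields": ["ipv4_dst"],
             "legal_actions": ["set_field", "output", "drop"]},
        ],
        "table_links": [
            # linkage between a table pair and the metadata passed along it
            {"from": "vlan_table", "to": "ipv4_table", "metadata": ["vrf_id"]},
        ],
    }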
Unlike the top-down strategy that the PI-layer proposal follows for
achieving a protocol-independent data plane, the FAWG takes a bottom-up ap-
proach for describing an abstract profile of data plane devices. TTPs developed
by FAWG provide switch vendors with an approach for describing the profile
of features and pipeline structure that an SDN switch supports. TTPs can rep-
resent the capabilities of a predefined datapath, while the PI-layer with P4 lan-
guage and POF-FIS defines the behaviors of a switch and configures its datapath
setting. A TTP can be used to specify the OpenFlow controllable capabilities
of a switch, while a P4 program can configure a switch to either support some
existing OpenFlow versions or define a collection of existing and new protocols.
Both FAWG TTPs and the PI-layer proposal represent datapath configuration
but approach it from different perspectives. Both approaches should be able to
achieve equivalent run-time behaviors at a switch; that is, a controller would
not detect any run-time difference between a fixed-function switch conforming
to a specific TTP and a programmable switch configured by a P4 program to
provide the same behavior [40].

2.9  Conclusion
SDN is a significant innovation in networking technologies for addressing
some of the fundamental challenges to current networks. The main principles
of SDN lie in decoupling network control and management functionalities
from data forwarding operations to enable a centralized control platform that
supports network programmability. SDN is expected to significantly simplify
both network control/management software and packet forwarding devices as
well as greatly enhance network capabilities of service provisioning.
Although SDN architecture has been developed by multiple standardiza-
tion organizations, including ONF, ITU-T, and IETF, all the proposed archi-
tectural frameworks share a three-plane structure that comprises the data plane,
control plane, and application plane. The data plane simply performs packet
forwarding operations by following the action rules installed by the control
plane through the southbound interface. The control plane enables a layer of
abstraction for data plane network based on which SDN applications can make
decisions on network policies. The control plane also provides northbound
APIs through which SDN applications may program network behaviors.
Networking devices on the SDN data plane, often called SDN switches,
typically comprise a packet processing engine, an abstraction layer, and a con-
troller interface. OpenFlow currently is the de facto standard for controlling
SDN switches. The core element of an OpenFlow-enabled switch is the Open-
Flow pipeline that processes each incoming packet through one or multiple
flow tables to determine the actions to be performed on the packet. OpenFlow
specification defines a communication protocol between the controller and
switch as well as the procedure for managing flow tables in SDN switches.
The control plane is the core component in the SDN architecture for
achieving decoupling between data forwarding and network control. An SDN
controller bridges the data and application planes by enabling data plane ab-
straction and providing an API for programing network behaviors. Key func-
tions of an SDN controller include topology management, traffic monitoring,
and flow management. Control performance is a key factor that impacts perfor-
mance of the entire SDN network. Various technologies have been developed
for enhancing SDN control performance, including multithread controller
software, distributed controller deployment, and hierarchical control structure.
Multidomain SDN control is a challenging problem that has attracted research
attention and is still open for further study.

SDN applications make decisions on policies for defining network behav-
iors and program network operations to fulfill the policies. Various high-level
languages have been proposed for programming SDN applications. SDN ap-
plication designs in general can be categorized as in either the proactive mode
or the reactive mode. Application programs interact with an SDN controller via
A-CPI, which currently still lacks a standardized protocol. Various northbound
APIs have been developed, among which RESTful API has been widely ac-
cepted for programming SDN networks.
SDN has become an active innovation area in the field of networking.
Exciting progress has been made in SDN technology development, including
SDN switches, control platform, and application programming. On the other
hand, researchers have noticed that there are some issues associated with the
current SDN approach that may prevent network designers from fully exploiting the
advantages promised by this new networking paradigm. A root reason for such
limitation lies in the unnecessary tight-coupling between architecture and in-
frastructure in the current SDN design, which constrains evolution of network
services to what the current network infrastructure can support. Various efforts
have been made by the research community for overcoming this barrier in order
to release the full power of SDN. Two representative proposals toward this di-
rection are the software-defined Internet architecture (SDIA), which advocates
separation between the core and edge parts of network architecture, and the
PI-layer, which leverages protocol oblivious forwarding (POF) and program-
ming protocol-independent packet processing (P4) language to enable a fully
programmable data plane in SDN.

References
[1] Feamster, N., J. Rexford, and E. Zegura, “The Road to SDN,” ACM Queue, Vol. 11, No.
12, December 2013, pp. 1–12.
[2] Greenberg, A., G. Hjalmtysson, D. A. Maltz, A. Myers, J. Rexford, et al., “A Clean Slate
4D Approach to Network Control and Management,” ACM SIGCOMM Computer Com-
munication Review, Vol. 35, No. 5, October, 2005, pp. 41–54.
[3] McKeown, N., T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, et al., “Open-
Flow: Enabling Innovation in Campus Networks,” ACM SIGCOMM Computer Commu-
nication Review, Vol. 38, No. 2, April 2008, pp. 69–74.
[4] Open Networking Foundation, “Software-Defined Networking: the New Norm of Net-
works,” white paper, April 2012.
[5] Open Network Foundation, “ONF TR-521: SDN Architecture,” Issue 1.1, 2016.
[6] Shenker, S., M. Casado, T. Koponen, and N. McKeown, “The Future of Networking,
and the Past of Protocols,” Open Networking Summit, 2011.

[7] Kreutz, D., F. M. V. Ramos, P. Esteves Verissimo, C. E. Rothenberg, S. Azodolmolky, et al.,
“Software-Defined Networking: A Comprehensive Survey,” Proceedings of the IEEE, Vol.
103, No. 1, January 2015, pp. 14–76.
[8] Xia, W., Y. Wen, C. H. Foh, D. Niyato, and H. Xie, “A Survey on Software-Defined Net-
working,” IEEE Communications Surveys & Tutorials, Vol. 17, No. 1, January 2015, pp.
27–51.
[9] Open Networking Foundation, “ONF TR-504: SDN Architecture Overview,” November
2014.
[10] ITU-T, “Recommendation Y.3300: Framework of Software-Defined Networking,” June
2014.
[11] Internet Research Task Force, “RFC7426 Software-Defined Networking (SDN): Layers
and Architecture Terminology,” January 2015.
[12] Contreras, L. M., C. J. Bernardos, D. Lopez, M. Boucadair, and P. Iovanna, “IRTF SDNRG
Internet-Draft: Cooperating Layered Architecture for SDN,” March 2016.
[13] Open Networking Foundation, “ONF TS-025: OpenFlow Switch Specification,” Version
1.5.1, March 2015.
[14] IETF “RFC 7285: Application-Layer Traffic Optimization (ALTO) Protocol,” September
2014.
[15] Gurbani, V. K., M. Scharf, T. V. Lakshman, and V. Hilt, “Abstracting Network State
in Software Defined Networks for Rendezvous Services,” Proceedings of 2012 IEEE
International Conference on Communications (ICC’12), June 2012, pp. 6627–6632.
[16] Hares, S., and R. White, “Software-Defined Networks and the Interface to Routing
System (I2RS),” IEEE Internet Computing, Vol. 17, No. 4, July 2013, pp. 84–88.
[17] Tootoonchian, A., M. Ghobadi, and Y. Ganjali, “OpenTM: Traffic Matrix Estimator for
OpenFlow Networks,” Passive and Active Measurement, Springer, 2010, pp. 201–210.
[18] Yu, C., C. Lumezanu, Y. Zhang, V. Singh, G. Jiang, et al., “FlowSense: Monitoring
Network Utilization with Zero Measurement Cost,” Passive and Active Measurement,
Springer, 2010, pp. 31–41.
[19] Reitblatt, M., N. Foster, J. Rexford, and D. Walker, “Consistent Updates for Software-
Defined Networks: Change You Can Believe In,” Proceedings of the 10th ACM Workshop
on Hot Topics in Networks, Nov. 2011, pp. 7:1–7:6.
[20] Tavakoli, A., M. Casado, T. Koponen, and S. Shenker, “Applying NOX to the Data
Center,” Proceedings of the 8th ACM Workshop on Hot Topics in Networks (HotNets-VIII),
Oct. 2009.
[21] Cai, Z., “Maestro: Achieving Scalability and Coordination in Centralized Network
Control Plane,” Ph.D. Dissertation, Rice University, 2011.
[22] Tootoonchian, A., S. Gorbunov, Y. Ganjali, M. Casado, and R. Sherwood, “On Controller
Performance in Software-Defined Networks,” Proceedings of the 2nd USENIX Workshop on
Hot Topics in Management of Internet, Cloud, and Enterprise Networks and Services (Hot-
ICE’12), April 2012.


[23] Tootoonchian, A., and Y. Ganjali, “HyperFlow: A Distributed Control Plane for
OpenFlow,” Proceedings of the 2010 Internet Network Management Workshop/Workshop on
Research on Enterprise Networking (INM/WREN’10), April 2010.
[24] Stribling, J., Y. Sovran, I. Zhang, X. Pretzer, J. Li, et al., “Flexible Wide-Area Storage
for Distributed Systems with WheelFS,” Proceedings of the 6th USENIX Symposium on
Networked Systems Design and Implementation (NSDI’09), April 2009, pp. 43–58.
[25] Koponen, T., M. Casado, N. Gude, J. Stribling, L. Poutievski, et al., “ONIX: A
Distributed Control Platform for Large-Scale Production Networks,” Proceedings of the
9th USENIX Conference on Operating Systems Design and Implementation (OSDI’10),
October 2010.
[26] Yeganeh, S. H., and Y. Ganjali, “Kandoo: A Framework for Efficient and Scalable
Offloading of Control Applications,” Proceedings of the 2012 ACM Workshop on Hot Topics
in Software Defined Networks (HotSDN’12), August 2012, pp. 19–24.
[27] Heller, B., R. Sherwood, and N. McKeown, “The Controller Placement Problem,”
Proceedings of the 2012 ACM Workshop on Hot Topics in Software Defined Networks
(HotSDN’12), August, 2012.
[28] IRTF Internet-Draft, “SDNi: A Message Exchange Protocol for Software-Defined
Networks (SDNs) Across Multiple Domains,” June 2012.
[29] Phemius, K., M. Bouet, and J. Leguay, “DISCO: Distributed Multi-Domain SDN
Controller,” Proceedings of the 2014 IEEE Network Operation and Management Symposium
(NOMS’14), May 2014.
[30] Figueira, N., and R. Krishnam, “SDN Multi-Domain Orchestration and Control:
Challenges and Innovative Future Directions,” Proceedings of the 2015 International
Conference on Computing, Networking and Communications (ICNC’15), Feb. 2015, pp.
406–412.
[31] Hinrichs, T. L., N. S. Gude, M. Casado, J. C. Mitchell, and S. Shenker, “Practical
Declarative Network Management,” Proceedings of the 1st ACM Workshop on Research on
Enterprise Networking (WREN’09), 2009.
[32] Foster, N., A. Guha, M. Reitblatt, A. Story, M. Freedman, et al., “Languages for Software-
Defined Networks,” IEEE Communications Magazine, Vol. 51, No. 2, Feb. 2012, pp.
128–134.
[33] Monsanto, C., N. Foster, R. Harrison, and D. Walker, “A Compiler and Run-Time System
for Network Programming Languages,” Proceedings of the 39th Annual ACM Symposium
on Principles of Programming Languages (POPL’12), 2012, pp. 217–230.
[34] Al-Shaer, E., and S. Al-Haj, “FlowChecker: Configuration Analysis and Verification of
Federated OpenFlow Infrastructure,” Proceedings of the 3rd ACM Workshop on Hot Topics
in Software Defined Networks (HotSDN’12), 2012, pp. 121–126.
[35] Peresini, P., and M. Canini, “Is Your OpenFlow Application Correct?” Proceedings of the
ACM CoNEXT Student Workshop (CoNEXT’11), 2011.
[36] Fielding, R. T., “Architectural Styles and the Design of Network-based Software
Architectures,” Ph.D. Dissertation, University of California, Irvine, 2000.


[37] Toy, M., “Cable Networks, Services and Management,” Hoboken, NJ: IEEE/J.Wiley,
2015.
[38] Raghavan, B., T. Koponen, A. Ghodsi, M. Casado, S. Ratnasamy, et al., “Software-
Defined Internet Architecture: Decoupling Architecture from Infrastructure,” Proceedings
of the 11th ACM Workshop on Hot Topics on Networks (Hotnets’12), October 2012, pp.
43–48.
[39] Casado, M., T. Koponen, S. Shenker, and A. Tootoonchian, “Fabric: A Retrospective on
Evolving SDN,” Proceedings of the 2012 ACM SIGCOMM Workshop on Hot Topics in
Software Defined Networking (HotSDN’12), January 2012, pp. 85–90.
[40] Open Networking Foundation ONF TR-505, “OF-PI: A Protocol Independent Layer,”
Version 1.1, September 2014.
[41] Song, H., “Protocol-Oblivious Forwarding: Unleash the Power of SDN Through a
Future-Proof Forwarding Plane,” Proceedings of the 2013 ACM SIGCOMM Workshop on
Hot Topics in Software Defined Networking (HotSDN’13), August 2013, pp. 127–132.
[42] Yu, J., X. Wang, J. Song, Y. Zheng, and H. Song, “Forwarding Programming in Protocol-
Oblivious Instruction Set,” Proceedings of the 2014 IEEE International Conference on
Network Protocols (ICNP’14), October 2014, pp. 577–582.
[43] Bosshart, P., D. Daly, G. Gibb, M. Izzard, N. McKeown, et al., “Programming Protocol-
Independent Packet Processors,” ACM Computer Communication Review, Vol. 44, No. 3,
July 2014, pp. 88–95.
[44] Open Networking Foundation, “The Forwarding Abstraction Working Group (FAWG)
Charter,” April 2013.

3
Virtualization in Networking
Q. Duan, Y. Wang, A. Bernstein, and M. Toy

3.1  Introduction
Virtualization in computing often refers to the act of separating software from
the underlying hardware for creating virtual instances of computing resources.
In general, virtualization focuses on decoupling the functions or services that
a system provides from the implementations that the system employs to sup-
port such functions or services. Virtualization technologies have been widely
employed in various computing areas, especially the recently emerged cloud
computing. Success of virtualization in computing has inspired its adoption
in the field of networking. Applying virtualization in networks leads to the
notions of network virtualization (NV) and network function virtualization
(NFV), which are expected to have significant impacts on networking and ser-
vices provisioning. In this chapter, we focus our discussion on virtualization-
based networking and network service provisioning.
Supporting a wide spectrum of services with highly diverse requirements
based on networks that are implemented with heterogeneous technologies has
become a major challenge to research and development of networking technol-
ogies. The current IP-based network architecture cannot meet the requirements
of future network services due to its ossification. In order to tackle this challeng-
ing problem, researchers have proposed a vision of network design that adopts
virtualization as a key attribute in future network architecture. Such an archi-
tectural vision for future networks is typically referred to as network virtualiza-
tion (NV). Essentially, NV advocates decoupling network services from the un-
derlying network infrastructures, thus allowing alternative network architectures
and protocols to be deployed in independent virtual networks (VNs) sharing an
infrastructure substrate for meeting diverse service requirements.
A more recent innovation toward virtualization-based networking is net-
work function virtualization (NFV), which was initially proposed by ETSI in
2012 and has attracted extensive interest from the networking community since
then. The main objective of NFV is to address some of the fundamental chal-
lenges to networking by leveraging standard IT virtualization technologies. The
basic idea of NFV lies in separating network functions from physical infra-
structures by running virtual functions implemented by software upon stan-
dard server hardware. ETSI NFV Industry Specification Group (NFV-ISG) has
published a set of documents that provide guidelines for realizing the NFV no-
tion, including the requirements, architectural framework, infrastructure and
software architecture, and various use cases for NFV. Exciting progress toward
implementing high-performance NFV has been made by researchers from both
industry and academia.
The NFV paradigm fully embraces the NV vision—applying virtualiza-
tion in networks for decoupling service functions and physical infrastructures—
and proposes a specific architecture and related mechanisms for realizing this
vision. On the other hand, NFV and NV focus on different scopes and granu-
larity levels of virtualization in networking. NV focuses on network-level virtu-
alization to enable multiple virtual networks with alternative network architec-
tures for meeting diverse service requirements. NFV focuses on virtualization of
individual network functions and provides end-to-end services through orches-
tration of virtual network functions. In addition, from a historical perspective
the NV vision was first proposed by academic researchers in the mid-2000s,
whereas NFV development was started mainly by the telecom and networking
industry communities much later around 2012. Therefore, in this chapter we
present NV and NFV as two concepts in different sections but also discuss how
they are closely related.
Decoupling network functions/services from the underlying network in-
frastructures splits the traditional role of network service providers to multiple
independent entities, including infrastructure providers, virtual network func-
tion providers, and virtual network providers and operators. Therefore, effective
and flexible interactions among these entities play a crucial role in virtualiza-
tion-based networks for service provisioning. The service-oriented architecture
(SOA), which provides the basis for the successful cloud service model, offers an
approach for meeting this requirement. Applying SOA in networking enables
the network-as-a-service (NaaS) paradigm that may greatly facilitate network
virtualization. Virtualization and SOA, when employed in both networking
and cloud computing, offer a promising approach to enabling unification of
network and cloud services, which is expected to significantly enhance future
service performance.

In this chapter, we first give an overview of the virtualization concept
and its application in computing. Then we introduce the vision of NV, discuss
its impact on future networking, and review some key technologies for creat-
ing virtual networks. In the second half of this chapter, we focus on NFV and
present the NFV architectural framework, key components in the NFV archi-
tecture, and technologies for implementing high-performance NFV. At the end
of this chapter, we discuss virtualization-based network service provisioning,
especially the NaaS paradigm in NFV and unification of network and cloud
services.

3.2  Virtualization in Computing


Virtualization is one of the most significant technologies that have greatly im-
pacted the IT industry development in the past decade. With roots that can
be traced back to the age of mainframes, virtualization recently regained its
popularity in computing and has been widely employed in various application
scenarios, especially in data centers and cloud infrastructures.
In general, virtualization broadly describes the separation of a service or
application from the underlying physical delivery of that service or applica-
tion. Essentially, virtualization provides a layer of abstraction between physi-
cal resources (including computing, storage, and networking hardware) and
the applications running on them. Virtualization abstracts physical resources as
virtual instances and enables independent multitenant access to a shared infra-
structure substrate.
Virtualization has been widely applied throughout the IT industry and
often has different focuses in various application scenarios. Some representative
application scenarios of virtualization in computing include virtualization of
servers, networking devices, and services, as listed in Table 3.1. At the server
level, virtualization abstracts platform hardware, such as processor, memory and
hard drive, and I/O interfaces, into virtual resources for hosting various applica-
tion instances. Virtualization may also be applied to networking devices, such
as network interface cards (NICs), switches, and transmission links to form
virtual networks (e.g., VLANs). In addition, resources for service provisioning,
including software applications, development and/or programming platforms,
and physical infrastructures all can be virtualized. Since server virtualization is
the most representative use case of virtualization in computing, we use it in the
rest of this section as the context for discussing virtualization technologies and
their features.
A key objective of server virtualization is to enable multiple independent
virtual machines (VMs) to share the physical resources of a server.

Table 3.1
Representative Applications of Virtualization in Computing
  Server: virtualization of computing, storage, and I/O resources
  Networking device: virtualization of NICs, switches, and links
  Service: virtualization of software, platform, and infrastructure

Figure 3.1  General architecture of server virtualization.

Figure 3.1 depicts the general architecture of server virtualization, in which a
VM manager, often referred to as a hypervisor, manages a group of VMs sharing
server hardware. A VM is a software emulation of a physical machine. Each VM can run
its own operating system, called the guest OS, and various applications upon
the guest OS. Each VM is isolated from other VMs and behaves as if it is run-
ning on an individual physical machine. VMs are essentially isolated from one
another in the same way that two physical machines would be on the same
network. The guest OS on a VM has no knowledge of other VMs running on
the same physical machine.
The hypervisor is the software that implements the core functions of virtu-
alization. The hypervisor provides an abstraction of server hardware resources
and determines how such resources should be virtualized, allocated, and pre-
sented to VMs. As shown in Figure 3.1, there are two types of hypervisors. A
type I hypervisor runs directly on top of bare metal hardware without using any
operating system. A type II hypervisor operates upon the operating system of
the host server (host OS). Type I hypervisors, with direct access to hardware re-
sources, are typically more efficient than type II hypervisors and achieve greater
scalability, robustness, and performance. On the other hand, type II hypervi-
sors, utilizing a standard host OS, may support a broader range of hardware
configurations on hosting servers.
Various hypervisors have been developed and deployed. Representative
hypervisors that have been widely adopted in industry include Xen [1], KVM
[2], and VMware ESX/ESXi [3]. Main features and the supported guest OSs for
these hypervisors are listed in Table 3.2.
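
As a small illustration of how such a hypervisor can be driven programmatically, the sketch below uses the libvirt Python bindings against a local KVM/QEMU host; libvirt is only one possible management API and is our assumption here rather than something prescribed by the text.

    import libvirt

    # Connect (read-only) to the local hypervisor and list the guests it manages.
    conn = libvirt.openReadOnly("qemu:///system")
    print("hypervisor type:", conn.getType())
    for dom in conn.listAllDomains():
        state, _reason = dom.state()
        running = (state == libvirt.VIR_DOMAIN_RUNNING)
        print(dom.name(), "running" if running else "not running")
    conn.close()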
Container-based virtualization, also called operating system-level virtu-
alization, is another approach to server virtualization. In this approach, the
operating system kernel runs on server hardware and allows multiple isolated
user-space instances to be installed on top of it. Such isolated instances are called
containers, which may look and feel like real servers from the point of view of
their owners and users.
Unlike hypervisor-based server virtualization where each VM runs a com-
plete guest OS, container-based virtualization allows all the virtual instances
(containers) to share a common operating system. Therefore, container-based
virtualization removes the overheads associated with VM guest OS and im-
proves virtualization performance due to its lightweight implementation. How-
ever, the container-based virtualization approach requires each virtual instance
on a host server to use the same operating system that the host is running;
therefore, it limits the flexibility that can be achieved by server virtualization.
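
For comparison, the sketch below starts a container with the Docker SDK for Python; Docker is used here purely as an example of container-based virtualization and is an assumption on our part, not a tool the chapter prescribes. The container reports the host's kernel, illustrating that all containers share the host operating system.

    import docker

    client = docker.from_env()
    # Run a throwaway container and capture its output; unlike a VM, it has no
    # guest OS kernel of its own.
    output = client.containers.run("alpine:latest", "uname -r", remove=True)
    print("kernel seen inside the container:", output.decode().strip())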
In addition to standard computing resources such as CPU and storage,
server virtualization also includes virtualization of some networking devices.
The multiple VMs on the same server share the NIC of the server and expect
their communication sessions to be isolated from each other. Consequently,
the physical NIC of the server needs to be virtualized to a set of virtual NIC
instances, one for each VM. A virtual NIC is a software emulation of a physical
NIC and can have its own network identifiers (IP and MAC addresses). NIC
virtualization is typically a function provided by the hypervisor.

Table 3.2
Representative Hypervisors and Their Features
  Xen
    Features: works for the IA-32, x86-64, and ARM instruction sets; contains a
    privileged domain called dom0, which is the only virtual machine that has
    direct access to the hardware.
    Guest OS support: supports most Unix-like operating systems as well as
    Microsoft Windows; dom0 is generally a version of Linux or BSD.
  KVM
    Features: a Linux-kernel hypervisor; needs hardware virtualization extension
    (e.g., Intel VT, AMD-V) support.
    Guest OS support: supports a variety of guest OSs, including many versions
    of Linux, BSD, Solaris, and Windows.
  VMware ESX/ESXi
    Features: works for the i386 (before version 4.0) and x86-64 platforms; can
    run directly on bare metal hardware; has vital OS components self-included;
    supports live migration of virtual machines.
    Guest OS support: supports Windows, Linux, Unix, and Macintosh.

In traditional networks, NICs are interconnected through switches and
transmission links to form a layer 2 network or subnet. Network switches can
also be virtualized to bridge multiple VMs by linking the virtual NICs of these
VMs. The virtual switch function can also be provided inside a hypervisor to
connect multiple VMs that are managed by the hypervisor on the same physi-
cal machine. Virtualization of network links allows creation of logical links that
connect VMs. The key function of virtual links lies in bandwidth allocation for
individual channels between VMs. Virtual links may be realized in different
forms (e.g., wavelength channels in optical networks and label-switch paths in
MPLS networks).
Virtualization technologies, when applied to computing and networking
systems, bring in some desirable features that may significantly enhance system
capabilities and performance. These benefits make virtualization a key enabler
of the state of art computing systems, especially cloud computing systems.
A key feature of virtualization is the abstraction of physical resources that
decouples services/application software from infrastructure hardware, which
allows these two aspects of information systems to evolve freely along their
own paths. Virtualization enables software applications to be developed and
deployed without being constrained by the implementations of their hosting
platforms, thus facilitating innovations in applications and services. Hardware-
independent applications enabled by virtualization lead to greater flexibility
for supporting elastic service provisioning through on-demand resource alloca-
tion and configuration. For instance, a user can easily request adjustment in
the amount of allocated resources in response to work load fluctuation. The
software nature of applications supports fast configuration/reconfiguration of
virtual instances to provide quick responses to user demands.
Resource sharing and consolidation is another aspect of the benefits
brought in by virtualization. By allowing multitenant virtual instances to share
common physical infrastructure, virtualization may greatly improve resource
utilization and enhance system flexibility as well. Hardware independence of
virtual instances allows them to migrate across hosting platforms, which fa-
cilitates resource consolidation for enhancing system utilization. Virtualization
also provides isolation among the virtual instances to create an illusion that
each tenant has the full ownership of the hosting platform. Each VM may have
its own guest OS and applications for meeting the service requirements of dif-
ferent users, therefore allowing a common infrastructure substrate to support a
wide range of applications with diverse requirements.
Some of the key features and benefits of virtualization-based computing
are summarized in Table 3.3.

Table 3.3
Features and Benefits of Virtualization-Based Computing
  Resource abstraction: virtualization decouples software applications and
  services from hardware infrastructure to enable hardware-independent
  application/service development and deployment.
  Resource sharing: virtualization allows multiple tenants to share a common
  infrastructure platform for achieving more efficient and flexible usage of
  physical resources.
  VM isolation: virtualization provides isolation among virtual instances, which
  enables independent tenants sharing a common substrate to support
  applications with diverse requirements.
  Elastic resource provisioning: virtualization enables dynamic resource
  allocation that supports elastic service provisioning for meeting user demands.
  Agile system management: virtualization supports fast deployment,
  configuration, and management of the virtual instances in response to user
  requests.

3.3  Network Virtualization


The main challenges to service provisioning in future networks come from two
aspects: (a) the wide spectrum of services that must be provisioned for meeting
highly diverse application requirements, and (b) the various heterogeneous net-
working technologies that may be employed in different autonomous network
domains for implementing the services.
However, ossification of the current IP-based network architecture,
caused partially by the interplay of IP end-to-end design and the vested interests
of competing stakeholders, limits the network capabilities of effectively sup-
porting future network services. The end-to-end design principle of IP requires
global agreement and coordination to deploy any fundamental change in net-
work architecture. The significant capital investment in the Internet infrastruc-
ture and competing interests of its major stakeholders have created a barrier to
introducing disruptive technologies in the Internet [4]. As a result, although
the research community has attempted to take a “clean slate” approach to de-
veloping new network architectures and protocols for facing the challenges of
future networking, deployments of innovative technologies are basically lim-
ited to overlay-based approaches with little ability to introduce fundamental
changes in network core architecture. As pointed out by Anderson et al. in [5],
the existing overlay approaches cannot provide an effective deployment path
for disruptive networking technologies mainly due to two reasons: First, they
are mostly used to deploy incremental solutions to specific problems without a
holistic view of the interactions between coexisting overlays. Second, overlays
are often designed and deployed in the application layer on top of IP, and thus
cannot support radically different network architecture.

Researchers have realized that a key to overcoming ossification of the cur-
rent Internet architecture and breaking the impasse to network innovations lies
in decoupling the network functions for service provisioning and the network
infrastructures for data transportation and processing [6]. Such decoupling al-
lows alternative network architectures and new protocols to be developed and
deployed for meeting various service requirements without being constrained
by infrastructure implementations. The notion of virtualization, which essen-
tially separates functions from their hosting platforms, offers a promising ap-
proach to achieving such decoupling required for future network architecture.
Applying virtualization in network architecture leads to a new vision of net-
work design that is broadly referred to as network virtualization (NV).
Virtualization technologies have already been employed in networking
in various scenarios. For example, a virtual local area network (VLAN) forms
a single broadcast domain by logically interconnecting a group of hosts, and a
virtual private network (VPN) enables private connections among multiple
sites using secured tunnels over public networks. With the popularity of serv-
er virtualization, the demand for communication between VMs on the same
physical server leads to virtual switches (e.g., Open vSwitch) that are typically
implemented as part of a hypervisor. Virtualization has also been explored as
a means to construct experimental platforms for conducting network research.
For example, virtualization technologies are employed in some large-scale re-
search projects, including PlanetLab [5], GENI [7], and FEDERICA [8], to
construct virtual networks as testbeds for evaluating proposals of new network
architectures and protocols.
However, none of the aforementioned approaches of virtualization in
networking is capable of fully addressing the deficiency of the IP-based Internet
architecture for future networking and service provisioning. This is mainly be-
cause these research efforts are based on a conventional view on network
architecture called the purist view [4]. The purist view expects a general-purpose
network architecture that provides a suitable platform for the set of all existing
and emerging applications. Such a platform (e.g., the IP protocol) forms a “nar-
row waist” in network architecture that is between all applications and various
transportation technologies. If this narrow waist is a single end-to-end packet
delivery platform, then any major modification of that platform requires uni-
versal agreement and coordination among all involved stakeholders, which is
very difficult—if not impossible—in today’s Internet.
In contrast, a pluralist view on network architecture, as proposed in [4],
advocates coexistence of alternative network architectures upon shared network
infrastructure, which provides a means to overcome ossification of the current
Internet and introduces additional design freedom for future network archi-
tecture. The pluralist view allows network designers to fully exploit the power
of virtualization to address some of the fundamental challenges to networking.

In network designs with the pluralist view, virtualization is a key architectural
attribute that supports independent virtual networks with alternative architec-
tures and protocols to coexist and share a common infrastructure substrate. In
this chapter, the term NV refers to application of virtualization in networking
with the pluralist view.
Therefore, the key objective of NV is to enable a networking environ-
ment that allows multiple independent virtual networks (VNs) to share a com-
mon network infrastructure. Each VN may have its own network architecture,
including packet format, addressing scheme, forwarding mechanism, routing
protocols, and so on, designed to provision various network services for meet-
ing diverse application requirements. VNs are implemented on top of an infra-
structure substrate comprising physical network resoruces. The narrow waist
of virtualization-based network architecture will be a thin layer of network vir-
tualization that provides automatic mechanisms for mapping VNs to physical
infrastructure resources.

3.3.1  Network Virtualization Architecture


Figure 3.2 depicts general architecture for network virtualization. The layer of
network infrastructure comprises physical network resources including nodes
and links. The links may be fibers, copper lines, or wireless connections, and
nodes are networking devices such as routers, switches, and servers. The physi-
cal network infrastructure is abstracted by the layer of virtualization as a virtual-
ized infrastructure substrate consisting of virtual resources, upon which VNs
are created and operated. A VN is a collection of virtual nodes that are inter-
connected through virtual links to form a virtual topology. Multiple VNs may
be constructed with the virtual resources provisioned by the virtualization layer.
Each virtual node is hosted on a particular physical node, whereas a virtual
link spans over a path in the network infrastructure and includes a portion of
the physical resources along the path. The virtualization layer is responsible for
mapping virtual nodes and links to physical resources and managing partition
of the infrastructure to guarantee isolation between VNs. VNs can be set up and
torn down dynamically in response to users’ service requests [9].

Figure 3.2  Network virtualization architecture.
In general, the NV architecture has the following features:

• Abstraction: the virtualization layer provides abstraction of physical in-
frastructure resources upon which VNs may be built independently of
the implementation technologies employed in infrastructure.
• Resource sharing: physical infrastructure resources can be partitioned
and utilized by multiple VNs, which allows a common infrastructure
substrate to support diverse service requirements with higher resource
utilization.
• Isolation: isolation between VNs is guaranteed by the virtualization
layer to allow multitenant VNs to be deployed in the same network
environment.
• Heterogeneity: heterogeneous networking technologies (e.g., optical or
wireless networking) may be used in the infrastructure substrate, and
different virtual network technologies may be employed in individual
VNs.
• Manageability: the NV architecture supports VN lifecycle management,
including creation, instantiation, updating, scaling, and termination of
VNs, as well as allowing VN operators to customize their VNs, includ-
ing topologies, configuration, and policies.

It is worth noting that the VNs enabled by the NV architecture and the
VPNs realized in the current Internet, although seeming to be similar, actually
have a fundamental difference. VPNs simply provide connectivity between edge
sites over a single ISP backbone, while the NV architecture gives VN operators
direct control over the protocols and services that run on their VNs. In addition,
VNs in the NV architecture may span across multiple infrastructure domains,
while VPNs are typically constrained within individual ISP infrastructures.

3.3.2  Functional Roles in Network Virtualization


The NV architecture decouples service provisioning-related functionalities
from network infrastructures, therefore splitting the traditional role of Internet
service providers (ISPs) into two independent entities: infrastructure providers
(InPs) who manage the physical infrastructure, and service providers (SPs) who
establish and operate virtual networks (VNs) by leasing resources from InPs to
offer end-to-end services. Figure 3.3 shows an NV environment with split roles
of InP and SP.
InPs are responsible for deploying and managing the underlying physi-
cal infrastructure resources. They offer their resources through programmable
interfaces to different SPs. Multiple independent InPs may exist in an NV envi-
ronment, and the InPs distinguish themselves through factors such as the qual-
ity of their resources and the manageability that they provide for utilizing their
resources. According to the 4WARD model, the InPs must fulfill the following
requirements [10]:

• Virtualize their physical resources and provide isolation between virtual
resource instances in order to support multitenant VNs;
• Provide an interface that allows SPs to lease virtual resources;
• Offer control interfaces to the virtual resources through which SPs may
instantiate and manage VNs for meeting service requirements;
• Monitor and manage their infrastructure resources and perform dynam-
ic resource allocation/reallocation in response to VN requests.

SPs lease resources from one or multiple InPs to create and deploy VNs
for providing services to end users. An SP may also provide network services to
other SPs by acting as a virtual InP that partitions and leases its virtual resources
to other SPs. The role of SPs might be further split into virtual network providers
(VNPs) and virtual network operators (VNOs).

Figure 3.3  NV environment with split roles of infrastructure and service providers.
The main job of a VNP is to construct VNs for meeting the requests from
VNOs, while VNOs are responsible for the actual operations of the constructed
VNs for service provisioning to end users. A VNP plays a mediation role between
the InPs and the VNOs. An analogy for the function of a VNP is that of a travel
agency that has expertise in traveling methods and knowledge about available
routes, and thus can make the travel plans and book the flights and hotels for
customers based on their requirements. However, after the customers decide
on their trips, they mostly interact with the on-site service providers (InPs in
a VN environment). The following lists of functions for VNP and VNO were
summarized in [10].
The main responsibilities of a VNP include the following:

• Provide an interface for VNOs that allows them to request creation,
modification, or termination of VNs;
• Request the virtual resources from one or multiple InPs and assemble
the obtained virtual resources to construct VNs;
• Hand over control of VNs to VNOs;
• Perform migration of virtual network nodes and links transparently to
VNOs.

A VNO should be able to fulfill the following tasks:

• Request a VNP for creating, modifying, or terminating a VN for meet-
ing service requirements;
• Access the virtual resources allocated to the VN that it is operating;
• Establish a functioning VN, including installation and instantiation of
network architecture, protocols, and configuration in the VN;
• Monitor and manage the VN, including reconfiguration of the VN in
response to user’s demands;
• Grant access to the VN to its customers.

A VN customer (VNC) is an entity that consumes the services provided
by the VN. Depending on the specific use case, the VNC may be of different
natures (e.g., an end user or a content service provider). A cloud service provid-

www.EngineeringBooksPdf.com
Virtualization in Networking 107

er can be a VNC that leverages a VN to interconnect data centers and provide


connectivity to it end users.
The main functional roles in a NV environment are illustrated in Figure
3.4.

3.3.3  Virtual Network Lifecycle Management


According to the 4WARD model, the VN lifecycle management process can be
divided into four basic phases: VN design, VN construction, VN instantiation,
and VN operation. All functional roles in an NV environment, including InP,
VNP, and VNO, are involved in the process. A VNO first describes the require-
ments for a VN, including the virtual topology with expected capabilities, QoS
requirement, and related policies and constraints such as geographical restric-
tion. This VN request is submitted by the VNO to a VNP. Upon receiving the
request, the VNP discovers available virtual resources that match the required
VN topology. The VNP then contacts one or multiple InPs to make prereserva-
tion of physical resources for supporting the required virtual nodes and links. If
multiple InPs must be involved, the VNP needs to determine how to partition
the described VN topology and embed each part to the appropriate InP infra-
structure. Upon receiving a request from the VNP, each involved InP allocates
physical resources in its network infrastructure for supporting the virtual nodes
and links of the requested VN. At this point, the VN is set up and ready to be
operated by a VNO. Then the VNP hands over the control for the just estab-
lished VN to a VNO, which now can configure the virtual resources in this VN
and install any customized network protocol for the VN. After the instantiation
stage, this VN is activated and starts providing services to end users. The VNO
is responsible for all run-time operations of the VN (e.g., scaling up or down of
VN capacity, modification of QoS requirement, attaching new end users, and
tearing down the VN) [10].
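
To make the phases above more concrete, the following Python sketch shows the kind of request a VNO might submit and an outline of the VNP-side handling; all field names and the InP interface (can_host, prereserve, allocate) are hypothetical and not taken from the 4WARD specification.

    # A hypothetical VN request, as a VNO might describe it to a VNP.
    vn_request = {
        "virtual_nodes": [
            {"id": "vA", "cpu_cores": 4, "region": "EU"},
            {"id": "vB", "cpu_cores": 2, "region": "US-East"},
        ],
        "virtual_links": [
            {"ends": ("vA", "vB"), "bandwidth_mbps": 500, "max_delay_ms": 40},
        ],
        "policies": {"isolation": "strict", "lifetime_hours": 72},
    }

    def build_vn(request, infrastructure_providers):
        # VNP-side outline: discover a suitable InP, prereserve physical
        # resources, then return the allocated VN for the VNO to instantiate.
        for inp in infrastructure_providers:
            if inp.can_host(request):
                reservation = inp.prereserve(request)
                return inp.allocate(reservation)
        raise RuntimeError("no single InP can host the VN; topology partitioning needed")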

Figure 3.4  Main functional roles in a network virtualization environment.


3.3.4  Benefits of Network Virtualization for Service Provisioning


Network virtualization, through applying virtualization in networking with a
pluralist architectural view, brings all the benefits of virtualization that we
discussed in the previous section into network systems (e.g., efficient network
resource utilization, flexible resource management, and agile system configura-
tion). In this subsection, we particularly discuss the significant impact of net-
work virtualization on service provisioning in networks.
NV allows multiple virtual networks with alternative network architec-
tures and protocols for meeting diverse service requirements to cohabit on a
shared physical infrastructure substrate, which may significantly enhance flex-
ibility and manageability of future network services. A virtualization-based
Internet architecture offers a rich environment for innovations in networking
technologies that may stimulate development and deployment of a wide spec-
trum of new network services.
Another key feature introduced by NV to future network service pro-
visioning is the decoupling of service provisioning from network infra-
structures. This decoupling, on one hand, provides SPs (including VNPs and
VNOs) with the ability to build VNs upon an abstract view of physical infra-
structure, and, on the other hand, allows InPs to obtain service-oblivious free-
dom to implement their infrastructures. Therefore, NV enables independent
evolutions of service functions and infrastructure technologies. NV also simpli-
fies network and service management by “outsourcing” the responsibility of
physical network devices to InPs and allowing SPs to run multiple simple VNs
in parallel for meeting different service requirements.
NV may also greatly facilitate end-to-end service provisioning across
autonomous network domains. Interdomain QoS provisioning has been a
long-standing challenge to the current Internet due to its requirement of col-
laboration across network domains with heterogeneous QoS mechanisms and
policies. NV enables a single SP to obtain holistic control over the entire end-
to-end service delivery path across network infrastructures that may belong to
different InPs, and thus may eventually achieve end-to-end QoS guarantee.

3.4  Technologies for Creating Virtual Networks


The main objective of network virtualization is to create independent VNs that
are free to deploy the required network architecture and protocols for meeting
service requirements. In order to achieve this objective, the network virtualiza-
tion layer in the NV architecture provides abstraction of the underlying net-
work infrastructure and maintains the mapping from VNs to physical network
resources. More specifically, the virtualization layer serves as a middleware be-
tween VNPs and InPs by performing functions in the following two directions.

www.EngineeringBooksPdf.com
Virtualization in Networking 109

In the direction from InP(s) to a VNP, this layer presents the availability and
capability information about infrastructure resources to the VNP in an abstract
format. In the direction from the VNP to InP(s), the virtualization layer han-
dles requests for creating VNs and allocates appropriate physical infrastructure
resources for meeting the requirement of each VN. The function in the first di-
rection is often referred to as resource description and discovery (RDD), while
the process in the second direction is typically called resource allocation or vir-
tual network embedding (VNE).

3.4.1  Resource Description and Discovery for Network Virtualization


The main goal of RDD in an NV environment is to provide a VNP with suffi-
cient information about the infrastructure resources offered by InPs so that the
VNP may choose the appropriate InP (or a set of InPs) for hosting each VN.
This process is done by cooperation between the VNP and InPs, with the for-
mer inquiring and analyzing the resource information for making InP selection
and the latter providing the needed information either proactively or reactively
in response to VNP inquiries [11].
In general, information about infrastructure resources includes two main
aspects—network topology that shows how network nodes are interconnected
through links, and network capability information such as available processing
capacity at each node and transmission capacity on each link. Both types of
information are dynamic for typical large-scale networks; therefore, the RDD
process requires continuous interactions between InPs and the VNP. In addi-
tion, information of available network capability apparently may change much
more frequently compared to that of network topology. Real-time information
update to reflect the current network capability status would cause overwhelm-
ing communication overheads between the VNP and InPs and thus might not
be feasible in typical networking environments.
A challenge to RDD in NV is to achieve a balance between the exposition
and abstraction (aggregation) of network resource information. The VNP needs
to have enough information about both topology and capability of physical
networks to make optimal selection of hosting InPs for VNs. The more details
the VNP knows about infrastructure resources, the better discovery and selec-
tion it may perform. On the other hand, aggregation of resource information
is desirable for multiple reasons. First, exposing detailed infrastructure status
to the VNP in large-scale networks may limit network scalability due to the
communication overheads. Also, due to the separated roles of VNP and InP,
the latter tends to avoid exposing detailed information about its infrastructure
implementation. Therefore, the virtualization layer is expected to provide an
appropriate level of resource abstraction for meeting VN requirements without
exposing detailed information of the infrastructure substrate.

In order to address this challenge, researchers have applied some semantic web techniques for network resource description and discovery. For example,
network description language [12] is an ontology designed based on resource
description framework (RDF) for describing network elements and topologies.
For facilitating resource abstraction, network resource description language
(NRDL) [13], which is also based on RDF, focuses on describing the inter-
actions among network elements rather than individual network objects like
switches and links.
In order to describe network capability together with topology informa-
tion, the resource abstraction approach proposed in [14] first abstracts network
topology as a full-mesh representation consisting only of service end nodes,
then it associates the network connectivity between each pair of end nodes with
QoS metrics such as bandwidth and delay. A capability matrix was developed in
[15] for describing network capabilities using a capability profile for each pair
of ingress and egress nodes. The profile is defined based on the service curve
concept in network calculus theory, thus offering a general service capability
description that is agnostic to infrastructure implementations.
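
A minimal Python sketch of this full-mesh abstraction is given below: every ordered pair of service end nodes is associated with a QoS profile that a VNP could query when selecting an InP. The node names, metric fields, and threshold values are hypothetical and do not reproduce the capability-profile formats defined in [14, 15].

from itertools import permutations

# Abstracted topology: only the service end nodes are kept, fully meshed.
end_nodes = ["A", "B", "C"]

# Capability matrix: one QoS profile per ordered (ingress, egress) pair.
capability_matrix = {
    (src, dst): {"bandwidth_mbps": 1000, "delay_ms": 5}
    for src, dst in permutations(end_nodes, 2)
}

def supports(pair, required_bw_mbps, max_delay_ms):
    """Check whether an ingress-egress pair can satisfy a VN requirement."""
    profile = capability_matrix[pair]
    return (profile["bandwidth_mbps"] >= required_bw_mbps
            and profile["delay_ms"] <= max_delay_ms)

print(supports(("A", "C"), required_bw_mbps=500, max_delay_ms=10))  # True
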
Description and discovery of virtualized resources are also important for
constructing virtual networks. Virtual execution infrastructure description lan-
guage (VXDL) [16] is a language developed for describing virtualized infra-
structure resources focusing on computing resources interconnected by net-
work links. However, some features important to network virtualization are not
yet sufficiently supported by this language (e.g., no separated entity for network
infrastructure description required by InPs for advertising their resource infor-
mation to a VNP).
For better supporting network virtualization, the resource description and
discovery framework developed in the 4WARD project provides a schema for
describing virtual resource properties and their relationship [17]. In this frame-
work, the basic building element for virtual resource description is network
element, which can be a node, an interface, a link, or a path. Each element has
both functional and nonfunctional attributes. Functional attributes describe
the functionalities supported by the element, such as the element type, ele-
ment execution environment, and supported protocols and operating systems.
Nonfunctional attributes describe capability and performance parameters of
the element. Functional attributes are stored in external repositories that can
be accessed by the VNP for discovering the available resources for meeting VN
requirements. Nonfunctional attributes are updated and stored in local reposi-
tories by individual InPs to avoid extra communication overhead between the
VNP and InPs. This framework also organizes resource information in a tree
structure for accelerating the resource discovery process.
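
The flavor of this element-centric schema can be sketched in Python as follows, with functional attributes that would be advertised to the VNP and nonfunctional attributes kept in the InP's local repository. The class and field names are illustrative and are not the actual 4WARD schema.

from dataclasses import dataclass, field

@dataclass
class NetworkElement:
    element_id: str
    element_type: str                       # "node", "interface", "link", or "path"
    functional: dict = field(default_factory=dict)      # e.g., supported protocols
    nonfunctional: dict = field(default_factory=dict)   # e.g., capacity, utilization

router = NetworkElement(
    element_id="inp1/node7",
    element_type="node",
    functional={"protocols": ["IPv4", "IPv6", "MPLS"], "exec_env": "x86_64"},
    nonfunctional={"cpu_cores_free": 12, "switching_gbps": 40},
)

# Only the functional part would be published to the external repository
# queried by the VNP; the nonfunctional part stays with the InP.
advertised_view = {router.element_id: router.functional}
print(advertised_view)
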
Despite its importance, resource description and discovery have received
relatively less research attention compared to resource allocation for embedding

virtual networks. As we will present in the next subsection, a wide variety of algorithms have been developed for VNE, but most of them just assume that
the information about physical infrastructure resource has been made avail-
able. Therefore, network resource description and discovery is still an important
open issue that deserves more thorough study.
The recent development of SDN with centralized control that manages a
global and abstract view of an entire network domain may greatly facilitate real-
izing network virtualization. As we discussed in the previous chapter, network
topology management, as one of the core functions of an SDN controller, is
responsible for collecting network states (including both topology and capa-
bility information) and presenting such information in an abstract format to
SDN applications. Therefore, technologies for network topology management
in the SDN control plane, such as the application layer traffic optimization
(ALTO) and interface to routing systems (I2RS) specifications, can be lever-
aged to realize resource description and discovery for network virtualization.
We will discuss more about how SDN and NFV may complement each other
in Chapter 4.

3.4.2  Virtual Network Embedding


The goal of VNE is to allocate the physical resources in network infrastruc-
ture to host VNs. Each VN comprises a set of virtual nodes and virtual links
that form a virtual topology. Therefore, VNE has two main aspects: (a) virtual
node mapping (VNM) that allocates the physical nodes for hosting each virtual
node, and (b) virtual link mapping (VLM) that maps each virtual link onto a physical path connecting the physical nodes that host the two virtual nodes joined by that link [18].
The infrastructure abstraction enabled by network virtualization allows
any physical resource to host virtual resources of the same type. Typically, a
physical resource is partitioned to host multiple instances of virtual resources.
For example, a virtual node in principle can be hosted by any available physi-
cal node and a single physical node may host several virtual nodes. Physical
resources can be combined for hosting a virtual resource. This is often the case
for virtual link mapping, where a virtual link spans a path consisting of multiple
physical links in a network infrastructure.
Typically, there are some constraints that must be considered during
VNE. Most obviously, the physical resource allocated for a virtual resource must
support the required functionalities and provide sufficient capabilities needed
by the virtual resource. For example, the CPU capacity and packet switching
throughput required by a virtual node cannot be more than the CPU power
and switching throughput provided by its hosting physical node. Similarly, the
capacity of a virtual link cannot be more than the maximum transmission rate
of the physical path upon which it is mapped. Figure 3.5 depicts an exemplary

Figure 3.5  An exemplary scenario of virtual network embedding.

VNE scenario where two VNs, each with three nodes, are embedded in a physi-
cal infrastructure network with five physical nodes.
In addition to meeting the basic functionality and capability constraints,
VNE often needs to achieve some other objectives in order to create VNs for
meeting various requirements specified by both service providers as well as by
infrastructure providers. Some typical VNE objectives are discussed in the fol-
lowing paragraphs.

• Meeting the QoS requirements for VNs: VN requests are often specified
by VNOs according to a set of QoS constraints defined for meeting cer-
tain service requirements. For example, a VN for real-time multimedia
content delivery services may require high bandwidth on virtual links,
high switching throughput and CPU capacity on virtual nodes, and low
delay and delay variation for end-to-end packet forwarding through
the VN. These QoS constraints must be satisfied by the VNE process
through appropriate allocation of physical resource.
• Maximizing the benefits of InPs: This objective is directly related to
maximizing the utilization of infrastructure resources to embed as many VNs as possible (i.e., a high acceptance ratio for VN requests). To achieve this objective, the VNE process attempts to minimize the amount of re-
sources for hosting each VN and prefers to host VNs that bring in more

revenue when infrastructure resources are not sufficient for accepting all
VN requests. Since energy consumption is also an important part of InP
operation cost, it is also desirable for VNE to consolidate the physical
resources for hosting VNs so that the idle part of the infrastructure may
be put in a sleep mode for saving energy.

Other possible objectives for VNE include meeting VN requirements


for reliability, resilience, and security. In general, the virtualization layer, as a
middleware between VNOs as consumers of virtualized infrastructure and InPs
as providers of infrastructure resources, has to balance the benefits of these two
sides, which often leads to conflicting objectives for VNE. Therefore, VNE is a
complex optimization problem that has been shown by researchers to be NP-hard.
The VNE problem has been extensively studied in recent years, and various
algorithms have been developed for solving this challenging problem. Some
representative approaches are briefly discussed here and a more comprehensive
survey on VNE technologies can be found in [18].
Since VNE needs to map both virtual nodes and virtual links onto physi-
cal infrastructure resources, it can be viewed as comprising two subproblems,
respectively, for VNM and VLM. A possible strategy for solving the VNE prob-
lem is to solve each subproblem in an isolated and independent way. In this
case, the VNM problem must be solved first because the results of VNM pro-
vide inputs to the VLM problem; that is, after determining the hosting node for
each virtual node, physical paths then can be searched and allocated to provide
connections between virtual nodes for realizing the VN topology. The idea of
this strategy is shown in Figure 3.6. An example of this strategy is the algorithm
proposed in [19]. The algorithm first solves the VNM problem by choosing
a set of eligible physical nodes for each virtual node and mapping the virtual
node based on the amount of available capacity on the physical nodes. The goal
is to assign virtual nodes with stronger demands to physical nodes with larger
capacities. After completing VNM, the VLM problem can be solved by either
finding the shortest single path between the pair of hosting nodes for each virtual link or
providing a multipath routing solution for each virtual link.
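
The following Python sketch illustrates this two-stage strategy in a highly simplified form: virtual nodes are greedily mapped to the physical nodes with the largest residual CPU capacity, and each virtual link is then mapped onto a hop-count shortest path with sufficient bandwidth. The topology, capacities, and tie-breaking rules are hypothetical, and refinements used in real algorithms such as [19] (revenue-aware ranking, bandwidth bookkeeping, multipath splitting) are omitted.

from collections import deque

# Residual CPU capacity of substrate nodes and bandwidth (Mbps) of substrate links.
phys_cpu = {"P1": 10, "P2": 8, "P3": 6, "P4": 4}
phys_bw = {("P1", "P2"): 1000, ("P2", "P3"): 1000,
           ("P3", "P4"): 500, ("P1", "P4"): 500}

# VN request: CPU demand per virtual node and bandwidth demand per virtual link.
virt_cpu = {"a": 5, "b": 3, "c": 2}
virt_links = {("a", "b"): 300, ("b", "c"): 200}

def link_bw(u, v):
    return phys_bw.get((u, v), phys_bw.get((v, u), 0))

def neighbors(node):
    return {x for edge in phys_bw for x in edge if node in edge and x != node}

def shortest_feasible_path(src, dst, bw):
    """Hop-count shortest path using only substrate links with enough bandwidth."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in seen and link_bw(path[-1], nxt) >= bw:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Stage 1 (VNM): map the most demanding virtual node first onto the substrate
# node with the largest residual CPU (one virtual node per substrate node here).
node_map = {}
for vnode in sorted(virt_cpu, key=virt_cpu.get, reverse=True):
    candidates = [p for p in phys_cpu
                  if p not in node_map.values() and phys_cpu[p] >= virt_cpu[vnode]]
    if not candidates:
        raise RuntimeError("VN request rejected: no feasible node mapping")
    host = max(candidates, key=phys_cpu.get)
    node_map[vnode] = host
    phys_cpu[host] -= virt_cpu[vnode]

# Stage 2 (VLM): shortest feasible substrate path between the hosts of each
# virtual link (bandwidth is checked but, for brevity, not decremented).
link_map = {}
for (va, vb), bw in virt_links.items():
    path = shortest_feasible_path(node_map[va], node_map[vb], bw)
    if path is None:
        raise RuntimeError("VN request rejected: no feasible link mapping")
    link_map[(va, vb)] = path

print(node_map)   # {'a': 'P1', 'b': 'P2', 'c': 'P3'}
print(link_map)   # {('a', 'b'): ['P1', 'P2'], ('b', 'c'): ['P2', 'P3']}

In practice, link bandwidth would be decremented as paths are allocated and rejected requests would trigger backtracking or re-ranking, but the skeleton above captures the sequential VNM-then-VLM structure discussed in the text.
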

Figure 3.6  VNE by solving the VNM and VLM subproblems in sequence.

Although solving the VNM and VLM subproblems in a sequential order simplifies the VNE process, this strategy may restrict the solution space and
degrade the overall performance of VNE. Coordination between node map-
ping and link mapping during VNE process is desirable for achieving higher
performance. Figure 3.7 illustrates the idea of a possible approach to coordinat-
ing VNM and VLM processes. The node mapping and link mapping processes
are performed iteratively. The temporary solution of each process obtained after
each iteration provides feedback to the other process for searching a better solu-
tion (if possible) at the next iteration. Such iterative coordinated feedback leads
to enhanced temporary solutions for both VNM and VLM until the optimal-
ity is achieved. As an example, the VNE algorithm proposed in [20] takes this
iterative VNM-VLM coordination scheme to obtain optimal VN embedding
solutions.
Another possible approach to coordinating the node and link mapping
is to combine these two aspects into an integrated network flow problem for
obtaining the optimal VNE solution. This strategy was first proposed in [21]
and then adopted in other works (e.g., the algorithm developed in [22]). The
main idea of this approach is illustrated by the example shown in Figure 3.8.
Given a virtual network and a physical substrate network, this approach cre-
ates an integrated auxiliary graph by connecting each virtual node to candidate
substrate nodes (that can potentially host the virtual node) via auxiliary edges.
Consequently, the VNE process is reduced to the multicommodity flow prob-
lem on the auxiliary graph, and then node mapping and link mapping can be
determined in a coordinated manner.
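
The sketch below illustrates only the auxiliary-graph construction behind this integrated approach: a meta-node is created for each virtual node and attached by auxiliary edges to its candidate substrate hosts. Solving the resulting multicommodity flow problem (typically as a linear or mixed-integer program, as in [21, 22]) is beyond this sketch, and all node names and candidate sets are illustrative.

# Substrate links and, for each virtual node, the substrate nodes that could
# host it (e.g., nodes with sufficient CPU within an allowed location radius).
substrate_edges = [("P1", "P2"), ("P2", "P3"), ("P1", "P3")]
candidates = {"a": ["P1", "P2"], "b": ["P2", "P3"]}

def build_auxiliary_graph():
    """Return the edge list of the auxiliary graph: substrate edges plus one
    meta-node per virtual node linked to its candidate hosts."""
    edges = list(substrate_edges)
    for vnode, hosts in candidates.items():
        meta = "meta(%s)" % vnode
        for host in hosts:
            edges.append((meta, host))   # auxiliary edge
    return edges

aux_graph = build_auxiliary_graph()
# Each virtual link (a, b) then becomes a commodity from meta(a) to meta(b);
# the flow solution fixes node mapping and link mapping jointly.
print(aux_graph)
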
It is worth noting that all the VNE algorithms that we discussed are appli-
cable only to single InP scenarios. However, in realistic network virtualization
environments, it is possible that none of the network infrastructures provided
by individual InPs can meet all the requirements for a VN request; therefore,
the VN needs to span across the infrastructure domains operated by multiple

Figure 3.7  Iterative coordination with feedback between VNM and VLM.

Figure 3.8  An illustrative example of VNE using the integrated network flow model.

InPs. In such cases, the VNP needs to decompose the VN request into multiple
subrequests and assign each subrequest to the most appropriate infrastructure
domain. Inside each domain, the InP employs a VNE algorithm to embed a
part of VN specified by a subrequest. Then all the VN parts are connected to-
gether using external links among infrastructure domains to form the complete
VN. Such inter-InP coordination makes the multidomain VNE a very chal-
lenging problem that currently is still open for further research.

3.4.3  Virtual Network Security and Survivability


3.4.3.1  Virtual Network Security
Applying virtualization in networking brings in new security challenges that
call for new network security technologies. In this section, we discuss some
implications of network virtualization on network security.
Given the essence of virtualization, outsourcing computation, storage,
and network resources to a third party (e.g., the InPs in an NV environment)
gives rise to inherent vulnerabilities in various aspects of network security, in-
cluding network confidentiality, integrity, and availability. A particular security
issue in network virtualization comes from the coresidency of multitenant VNs
on the shared infrastructure substrate. Such coresidency exists at two levels. On
the node level, multiple virtual node instances are hosted by the same physical
node, which is essentially equivalent to multiple VMs sharing a hosting server.
On the network level, multitenant VNs are embedded into the same infrastruc-
ture substrate. Possible security threats exist on both levels of coresidency.
On the node level, coresident VMs typically have internal IP addresses
in the same range and small round-trip time for packet transmissions. With

sufficient external and internal network probing, an attacker can discover the
locations as well as attributes of potential target hosts and/or VMs. For ex-
ample, the attacker can keep sending virtual network requests until its virtual
node (the attacker VM) is mapped on the same host as the target virtual node
(the victim VM).
When an attacker achieves node-level coresidence with the target net-
work, as shown in Figure 3.9, the attacker can take advantage of vulnerabilities
existing in the VM management software (hypervisor), which might lead to
penetration of the physical node. Since the physical host holds the ultimate privilege, compromising it in turn allows the attacker to manipulate any other coresident target VMs. The attacker can
exploit the target VM even without penetrating the host by using side-channel
attacks. Coresidency generally indicates sharing of physical hardware, which
can serve as a common medium or side channel between the VMs. As a result,
activities unique to the victim VM can be “listened to” and analyzed by the at-
tacker VM. For instance, the attacker VM could uncover some sensitive infor-
mation about the victim VM after a sufficiently long period of monitoring and
analyzing the victim VM’s computational load in shared memory.
On the network level, attackers may exploit coresidency of VNs to launch
attacks from their VNs to the victim VNs. For instance, an attacker may em-
ploy denial of service (DOS) attacks to overwhelm the victim VN. As shown in
Figure 3.10, the attacker VN and victim VN share a physical link with 4-Gbps
bandwidth. If the attacker VN purposely transmits at the maximum rate of 4 Gbps, then the shared link will be saturated and become unavailable to the
victim VN, thus leading to unavailable service or performance degradation in
the victim VN. The attacker may achieve this objective by specifying a lower
link rate in his VN request but actually transmitting data at a much higher rate
on that virtual link after the VN is deployed. Since the attacker VN resides in

Figure 3.9  Security attack exploiting node-level coresidence.

Figure 3.10  An example of DOS attack by exploiting network-level coresidency.

the same physical network (e.g., a data center network) as the victim VN, it is
difficult to prevent such attacks using firewalls or intrusion detection systems
that are typically deployed at the boundary of a physical network.
In order to cope with security threats that exploit the coresidency of multitenant VNs at either the node level or the network level, it is very important for the virtualization layer to guarantee isolation between VNs. The
virtualization layer should provide both logical isolation of distinct VNs (e.g.,
separated address space) and resource isolation to ensure the tenants cannot
interfere with each other. In addition, the virtualization layer should also make im-
plementation details of physical network infrastructures transparent to VNs,
which will prevent attackers from exploiting the vulnerabilities in the underly-
ing hardware for launching attacks on coresident VNs, such as DOS attacks that
saturate physical links.
3.4.3.2  Virtual Network Survivability
With network virtualization gaining momentum, VN survivability has become
an issue that attracts considerable attention. Survivable VNs need to be able to
cope with various types of network faults, such as single node/link failure, mul-
tiple node/link failure, and regional failures. Survivability has been extensively
studied in traditional networks, but some new challenges have been brought in
by virtualization in networking. An important question raised in the context of
network virtualization is how to ensure that the mapping of a virtual network
can survive under network failures. Answering this question leads to the so-
called survivable VNE that includes survivability as an objective for the VNE
process.
We use an example of single node failure to illustrate the survivable VNE
problem. As shown in Figure 3.11, when the physical node hosting the virtual
node a fails, from the VN viewpoint, the virtual node a, virtual link a–b, and

Figure 3.11  An example case of single node failure in a virtual network.

virtual link a–c all become unavailable; therefore, the virtual topology of the VN
is broken. To avoid this problem, a survivable VNE scheme needs to preplan a
replacement mapping to ensure that the same virtual network topology can still
be provided when a physical node fails. One approach to achieving preplanned
mapping for survivable VNE is to augment a virtual network with inherent
protection, as proposed in [23]. To protect against a single node failure, for instance, one
can add an extra virtual node into the VN topology to serve as a backup node.
This idea is illustrated in Figure 3.12. For the VN shown on the left, one can
add an extra virtual node d connecting to all the other virtual nodes, thus form-
ing an augmented VN shown on the right. After mapping the augmented VN,
any single node failure will not affect the virtual network topology as the node
d can replace the failed node. In addition to single node failures, survivability
of VNs under other types of failures such as single link failures, multiple link
failures, and regional failures, could also be enhanced by using the augmented
VN approach, as discussed in [23].
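
A minimal sketch of this augmentation step is shown below: given a virtual topology represented as an adjacency map, one backup node is added and connected to every existing virtual node before the augmented VN is handed to the embedding algorithm. The node names are illustrative and the sketch covers only the single-node-failure case.

def augment_for_single_node_failure(virtual_topology, backup="d"):
    """virtual_topology: dict mapping each virtual node to its set of neighbors."""
    augmented = {n: set(neigh) for n, neigh in virtual_topology.items()}
    augmented[backup] = set(virtual_topology)      # backup connects to all nodes
    for node in virtual_topology:
        augmented[node].add(backup)
    return augmented

vn = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
print(augment_for_single_node_failure(vn))
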

3.5  Network Function Virtualization


The latest innovation in networking that employs virtualization technologies
for enhancing network operation and service provisioning is network function
virtualization (NFV). Development of NFV is motivated by the lack of suffi-
cient capability in current networks for easily supporting service evolution and
innovations. Launching new services in today’s networks often requires deploy-
ment of additional physical devices, which are designed to implement particu-
lar functions and cannot be easily reconfigured for supporting other operations.
However, lifecycles of hardware-based network appliances become shorter as

Figure 3.12  An example of augmented virtual network for enhancing survivability.

technology development accelerates, which requires much of the long service


deployment cycles to be repeated with little revenue benefit for network op-
erators. The static approach for service management in current networks leads
to high capital and operational costs, increasing energy consumption, lower
resource utilization, and longer time to market, which all significantly limit
network capability of supporting future network services.
Recently, the success of cloud computing as a proven approach to scalable,
flexible, and cost-effective service provisioning inspired telecommunication ser-
vice providers (TSPs) to explore cloud technologies, which are centered on vir-
tualization, for enhancing network services. Toward this direction, some major
TSPs formed an industry specification group in ETSI (NFV-ISG) and pro-
posed the notion of NFV. NFV aims to address some fundamental challenges to
networking by leveraging standard IT virtualization technologies to consolidate
many network equipment types onto industry standard servers, switches, and
storage [24]. Essentially, NFV is to enable network functions to be run as cloud
applications and allow network services to be provisioned in a similar manner to
cloud services. Therefore, “cloudification” of networking is where NFV shines.
A key mechanism for realizing NFV is the ability to implement network
functions on general-purpose computing platforms. In a way the IT industry has come full circle: had this book been written before the 1990s, the reader
would live in a world where most networking devices were implemented on
generic computers. A typical router implementation was nothing but a work-
station with multiple interface cards. The late 1980s saw the first wave of dedi-
cated network appliances. The network appliances were essentially workstations
with multiple network cards, but packaged in a smaller and dedicated form as
well as running an operating system optimized for packet processing. With the
explosion of the Internet in the 1990s a whole range of devices dedicated to
network processing started appearing—from custom ASICs with little or no
programmability to network processors (NPUs) that were CPUs augmented
with specific network processing hardware assists.
The common wisdom in those days was that general purpose CPUs were
too far off the price/performance/power curve to be seriously considered for
packet processing. However, this view started being challenged in recent years.
While no one disputes that a custom built ASIC or NPU should be faster than
a CPU that is not designed ground up for packet processing, the success of
cloud computing demonstrated that speed/cost/performance should not be the
only metric by which a solution is measured. In the long run it is the OPEX
that dominates costs, and OPEX reduction is a key benefit. This means that
capabilities such as feature velocity, dynamic scaling, and in-production testing
are all important factors, which may be even more important than one-time
CAPEX. The following paragraphs outline these benefits in more detail.

• Dynamic scaling: Traditional networking devices scale up—as new scale requirements emerge, better devices have to be built. In contrast, NFV
enables “scale out” network functions—instead of waiting for a new
version of hardware to address the scaling requirement an operator can
rack-and-stack more generic servers to achieve the scaling goals.
• Feature velocity: Since many existing networking devices are developed
on specialized embedded systems and optimized for particular network
applications, many of the software tools that are available to other soft-
ware developers (e.g., web developers) cannot be used for developing
network applications. This may result in a longer development cycle
and slower test/feature velocity. NFV may address this issue by allowing
network functions to be developed and deployed as software instances
running on standard servers.
• In-production testing: Traditional networking devices tend to be mono-
lithic when it comes to handling users. There is hardware modularity in
the form of line cards and some equipment even has the ability to up-
grade individual software functions. What those devices generally lack
is the ability to upgrade software to a small subset of users. However,
this is exactly the capability needed for in-production testing (e.g., test a
new configuration or software version to a limited number of users and
propagate it to a larger set of users if proven successful). It is much easier
to do in-production testing in a virtualized environment by implement-
ing network functions on VMs.

On the other hand, performance is not a capability to be ignored by NFV.


If the cost/performance/power is prohibitive, a system cannot be built in the
first place. The cost/performance/power tradeoff has to be judged for every virtualized
function separately. Some make sense and some don’t. The good news is that as
NFV becomes a requirement for general-purpose processors, there will be more
hardware-level support in chipset design for assisting packet processing. This in
turn would make more NFV implementations hit the right cost/performance/
power curve.
NFV can also improve management of complexity through decoupling software-based network functions from hardware-based infrastruc-
tures. Separated development and evolution of network functions and infra-
structures may greatly reduce the dependency between them, which is one of
the main sources of network complexity.
In summary, NFV could potentially offer the following benefits [24]:

• Reduced equipment costs and power consumption through consolidating equipment and exploiting the economies of scale of the IT industry.

• Increased speed of time to market by minimizing network operators’


innovation cycle. NFV should enable network operators to significantly
reduce the time needed for service development and deployment.
• Availability of multiversion and multitenancy network appliances,
which enables use of a single platform for different applications and
services. This allows network operators to share resources across services
and different customer bases.
• Targeted service introduction based on geography or customer sets is
possible. Services can be rapidly scaled up or down as required.
• Enabling a wide variety of ecosystems and encouraging openness. NFV
opens the virtual appliance market to pure software entrants, small play-
ers, and academia, thus encouraging more innovations for new services.

3.5.1  NFV Architectural Framework


The ETSI NFV-ISG has published an architectural framework that identifies
the main functional blocks required for realizing the NFV paradigm and the
interfaces between such blocks. This framework, as shown in Figure 3.13, com-
prises three key components—NFV infrastructure (NFVI), virtualized network functions (VNFs), and NFV management and orchestration (MANO)
[25].
The NFVI consists of all infrastructure resources, including both hard-
ware and software components, that build up the environment in which VNFs
may be deployed, managed, and executed. The physical hardware resources in
NFVI include computing, storage, and network systems. Although ETSI-NFV
does not elaborate on the implementations of physical resources, a general as-
sumption is that NFVI may be a combination of commodity servers and legacy
equipment where the commodity hardware piece will grow over time.
The virtualization layer in NFVI abstracts hardware infrastructure to vir-
tual resources. The virtualization layer is responsible for: (a) abstracting and
partitioning physical resources; (b) enabling VNF software to utilize the under-
lying virtualized infrastructure; and (c) provisioning the virtual resources required by VNFs for supporting their execution. The NFV architectural framework does
not restrict itself to any specific implementation for the virtualization layer, but
expects the virtualization layer to have standard features and open interfaces
toward both VNFs and hardware. The use of hypervisors is one of the present
typical solutions for VNF deployment. For example, Linux kernel-based virtual
machine (KVM) is employed in many NFVI implementations for realizing a
virtualization layer.
VNF is the software implementation of a network function that is capable
of running over the NFVI. A VNF corresponds to a network node in legacy

Figure 3.13  ETSI NFV architectural framework [25].

networks, but is delivered as software without hardware dependency. A VNF


can be composed of multiple internal components. One VNF may be hosted
on a single VM or be deployed over multiple VMs, where each VM hosts a
single component of the VNF. A VNF may be accompanied by an element
management (EM) module that performs the typical management functional-
ity for one or multiple VNFs.
The MANO component in the NFV architectural framework is respon-
sible for management and orchestration of physical and virtual resources and
lifecycle management for VNFs. The MANO component comprises three key
elements: virtualized infrastructure manager (VIM), VNF manager (VNFM),
and NFV orchestrator (NFVO).
The VIM manages the virtual resources in NFVI, including creation, mi-
gration, monitoring, and deletion of VMs. A VNFM is responsible for VNF
life cycle management (e.g., instantiation, update, scaling, and termination of
VNFs). A VNFM may manage one or multiple VNFs. The key difference be-
tween the VIM and the VNFM is that the VIM provides a set of APIs to control
the life cycles of VMs, while the VNFM controls those APIs for managing the
VNFs running on the VMs. An example implementation is the OpenStack
Tacker (both a VNFM and an orchestrator) and the OpenStack Heat templates
that are passed to the VNFM to define service configuration.
The NFVO is in charge of orchestration and management of NFV infra-
structure resources and virtual functions for realizing network services. One can
view the orchestrator as a manager of a distributed transaction: it takes a service
specification (most likely coming from the OSS/BSS user input) and performs
a series of steps to realize it. If any of the steps fails, it is the orchestrator that has
to roll back the transaction to a known state for recovery.
NFV-ISG also defines some reference points in the NFV architectural
framework. In general, a reference point in architecture defines an external view
that a functional block of the architecture exposes. A reference point is typically
realized as a standard interface in a system design. Reference points between the
key components of the NFV architectural framework include:

• Vn-Nf: between VNFs and NFVI;


• Nf-Vi: between VIM and NFVI;
• Ve-Vnfm: between VNF and VNFM;
• Os-Ma: between OSS/BSS and MANO;
• Reference points between elements inside key components include:
• Vl-Ha: between the virtualization layer and hardware resources inside
NFVI;
• Vi-Vnfm: between VIM and VNFM inside MANO;

• Or-Vnfm: between orchestrator and VNFM inside MANO;


• Or-Vi: between orchestrator and VIM.

Comparison of the ETSI NFV architectural framework with the general NV architecture shows that they have many similarities. Both archi-
tectures are based on a virtualization layer on top of the infrastructure layer to
provide virtualized resources, which then can be utilized and shared by multiple
tenants. Both NFV and NV essentially share the same principle—applying vir-
tualization in networking. NV offers an architectural vision for virtualization-
based network design that is embraced by the NFV architecture proposed by
ETSI NFV-ISG.
On the other hand, we can also see some differences from comparison
between the NV and NFV architectures. The NV architecture is more general
and aims at highlighting the virtualization principle in networking instead of
specifying any particular mechanism for realizing the principle. Any implemen-
tation technology, either based on proprietary appliances or using commodity
servers, as long as it supports the network virtualization layer for infrastructure
abstraction, in principle can be used in the NV architecture. The NFV architec-
ture is more specific and provides functional components and reference points
for virtualizing infrastructures as well as managing virtualized resources and
functions. Regarding infrastructure implementation, NFV particularly advo-
cates standard IT virtualization technologies on top of commodity servers and
storage devices.
The NV architecture emphasizes network-level virtualization and creation
of multitenant virtual networks for providing services to end users. In contrast,
the NFV architecture focuses on virtualization of individual network func-
tions instead of the entire network. Service provisioning in NFV is achieved
by MANO through orchestration of VNFs (possibly together with physical network functions).
The orchestrator in the NFV architecture is able to select and compose ap-
propriate VNFs to form a forwarding graph, which is equivalent to creating a
VN from a perspective of service provisioning. The finer granularity enabled by
NFV with virtualization of individual network functions offers the flexibility
of partial virtualization of networks; that is, virtual and physical network func-
tions coexist and collaborate in the same network. Such flexibility may greatly
facilitate smooth evolution toward the purely virtualization-based networking
envisioned by NV.

3.5.2  Principle for Virtualizing Network Functions


Network function (NF) is a functional block in a network that has well-defined
behaviors and external interfaces, and may perform autonomous operations.
The behavior of an NF is determined jointly by multiple factors, including its

static transfer function, its dynamic state, and the inputs that the NF receives
from its interfaces [26].
The objective of NFV is to separate software that defines the network
function (the VNF) from the virtualized network infrastructure (NFVI) that
executes the VNF. Therefore, VNFs and the NFVI should be specified separate-
ly. Virtualization of NFs is illustrated by the example shown in Figure 3.14, in which the upper part of the figure shows how traditional NFs are connected for
providing a network service; while the lower part of the figure shows the situa-
tion where NFs have been virtualized and implemented as VNFs executing on
host functions in the NFVI. As pointed out in [26], virtualization of network
functions has resulted in some significant changes in the following aspects:

• NFV divides an NF into a VNF and a host function and introduces a


new container interface between the VNF and its host function;
• NFV divides the interface between NFs into a virtualized interface and
an infrastructure interface.

The following two important distinctions should be made between the


descriptions of standard network functions and VNFs:

• The virtualized network function (VNF) is not a functional block independent of its host function.
• The container interface is not an interface between functional blocks
equivalent to other interfaces.

Figure 3.14  Virtualization of network functions.

These distinctions indicate that unlike a functional block that may ex-
ist autonomously, a VNF depends on the host function for its existence. The
VNF will be interrupted or terminated if its host function is interrupted or
terminated. Such existence dependence between a VNF and its host function is
reflected by the container interface.
The relationship between the VNF and its host function can be described
from the following two aspects: (a) the VNF is a configuration of the host
function, and (b) the VNF is an abstract view of the host function when the
host function is configured by the VNF. Therefore, when a host function is
configured with a VNF, it shows the external behaviors for implementing the
VNF specification. On the other hand, the VNF is an abstract view of the host
function. Therefore, the NFV architecture is defined using (a) host functions
with their offered container interfaces and associated infrastructure interfaces,
and (b) VNFs with their virtualized interfaces and the container interfaces that
they use [26].

3.5.3  Network Services in NFV


NFV will have a significant impact on how network services may be provi-
sioned. From an architectural perspective, a network service can be viewed as a
forwarding graph (FG) comprising NFs that are interconnected through net-
work infrastructure. In general, an FG is an ordered graph that specifies a set of
NFs involved in a network service and the relationship between these NFs. A
simple example of FG is a service chain (SC) in which all NFs are performed in
sequence. The behaviors of a network service are jointly determined by the be-
haviors of its constituent NFs and their dependency described by the FG. The
end points and NFs of the network service are represented as nodes in the FG,
which are connected by logical links that could be unidirectional, bidirectional,
multicast, or broadcast [25].
An example network service constructed with NFs following an FG is
shown in Figure 3.15, where NFs 1, 2, and 3 are connected in sequence through
logical links to provide an end-to-end service between the end points A and B.
In this example, the end point A and NF1 are in infrastructure domain 1, NF2
is implemented in infrastructure domain 2, while NF3 and end point B are
hosted in infrastructure domain 3.
In an NFV environment, network functions are virtualized and real-
ized as software instances (VNFs); therefore, an end-to-end network service is
defined by VNF forwarding graph (VNF-FG), in which the nodes and links
are, respectively, VNFs and virtual network connections. As shown in Figure
3.16, VNF-1, VNF-2, and VNF-3 together provide the end-to-end network
service. The virtualization layer in NFVI provides abstraction of infrastructure
resources upon which VNFs are realized. The exact physical deployment of a

Figure 3.15  Network functions, forwarding graph, and network service [25].

Figure 3.16  End-to-end network service in an NFV environment [25].

VNF instance on the infrastructure is not visible from the end-to-end service
perspective. This enables the VNF instances of a VNF-FG to be implemented
on different physical resources as long as the overall end-to-end service perfor-
mance and other policy constraints are met.
Figure 3.16 also depicts the case of a nested VNF-FG. In the example
service, VNF-2, specified by the high-level FG (VNF-FG1), is decomposed
into three components that are realized by three VNFs (i.e., VNF-2A, VNF-
2B, and VNF-2C). A lower-level FG (VNF-FG2) defines how these three
VNFs collaborate to realize functions of VNF-2. The three VNF components
of VNF-2 may be instantiated as three VMs that are hosted on either the same
or different physical servers.
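
A simple way to picture such a nested forwarding graph is sketched below in Python: a top-level chain (VNF-FG1) references VNF-2 as a single node, and a nested graph (VNF-FG2) expands it into its three components before deployment. The dictionary layout is purely illustrative and is not an ETSI-defined data model.

vnf_fg1 = {
    "endpoints": ["A", "B"],
    "chain": ["VNF-1", "VNF-2", "VNF-3"],
}

vnf_fg2 = {  # nested FG refining VNF-2
    "parent": "VNF-2",
    "chain": ["VNF-2A", "VNF-2B", "VNF-2C"],
}

def expand(fg, nested):
    """Replace a composite VNF in the top-level chain by its components."""
    chain = []
    for vnf in fg["chain"]:
        chain.extend(nested["chain"] if vnf == nested["parent"] else [vnf])
    return chain

print(expand(vnf_fg1, vnf_fg2))
# ['VNF-1', 'VNF-2A', 'VNF-2B', 'VNF-2C', 'VNF-3']
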
The new networking capabilities enabled by NFV are expected to bring in
some significant differences in the way of network service provisioning. Some
key differences are summarized as follows [25]:

• Decoupling software from hardware: NFV enables the software service


functions and network infrastructure hardware to progress separately
following their respective evolution paths.
• Flexible network function deployment: NFV allows automatic instanti-
ation of different network functions that perform on standard hardware
platform, which helps network operators to quickly deploy new services
over the same physical infrastructure.
• Dynamic operation: NFV provides greater flexibility to scale network
function capability and performance in a more dynamic way and with
finer granularity, which supports elastic on-demand service provisioning
required in future networks.

Another key concept behind NFV is microservices: the idea that a func-
tion needs to do one thing and do it well as opposed to creating large function
monoliths. Various microservice network functions can be composed together
through NFV MANO and presented externally as a single complex service. A
simple example of such a configuration can be a firewall for providing security
services and a router providing tunnel termination, implemented as two sepa-
rate VNFs but orchestrated to work together. The firewall VNF does not have
to directly deal with the specific tunneling technology used (e.g., GRE, L2TP,
IPSEC) and the router VNF does not have to implement firewall functions. In
addition to binding functions together, service orchestration allows us to pass
context from one function to the other. In the firewall/router example, we can
tag different tunnels with different service chain tags so that even though the
tunnel header is removed, significant context passed through the tunnel (e.g.,
subscriber identification) can still be passed along to the firewall function.
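
The following toy Python sketch illustrates this composition pattern: a router function terminates the tunnel and attaches a service-chain tag carrying the subscriber context, and a firewall function filters on that tag without knowing anything about the tunneling technology. All function names, fields, and values are hypothetical.

def router_vnf(packet):
    """Terminate the tunnel and expose its context as a service-chain tag."""
    inner = dict(packet["payload"])
    inner["chain_tag"] = {"subscriber": packet["tunnel"]["subscriber_id"]}
    return inner

def firewall_vnf(packet, blocked_subscribers):
    """Filter on the tag passed along the chain, not on tunnel internals."""
    if packet["chain_tag"]["subscriber"] in blocked_subscribers:
        return None  # dropped
    return packet

tunneled = {"tunnel": {"type": "GRE", "subscriber_id": "sub-42"},
            "payload": {"dst": "10.0.0.5", "dport": 443}}

service_chain = [router_vnf, lambda p: firewall_vnf(p, {"sub-13"})]
pkt = tunneled
for vnf in service_chain:
    pkt = vnf(pkt)
    if pkt is None:
        break
print(pkt)
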

3.6  Key Components of NFV Architecture


In this section, we describe three key components in the NFV architecture: the
virtualized infrastructure for NFV, software architecture for VNFs, and NFV
management and orchestration.

3.6.1  NFV Infrastructure


NFVI comprises all the hardware and software components that form an envi-
ronment in which VNFs are deployed and executed. The physical devices that
are deployed and managed as single entities for performing NFVI functions are
referred to as NFVI nodes. NFVI nodes may be distributed in various locations in
order to provide the locality and performance expected by various use cases. A

single geographical location where a number of NFVI nodes are sited is called
an NFVI point of presence (NFVI-PoP), which could be as large as a data center
or as small as a single network device [26].
In order to manage system complexity and enhance system scalability,
the NFVI component in the NFV architecture is further partitioned into three
functional domains: the compute domain, the hypervisor domain, and the in-
frastructure network domain, as shown in Figure 3.17. According to the current
technology and industry structure, it is already the case that compute, hypervi-
sor, and infrastructure networking technologies are largely separated with suf-
ficient standards for supporting the interactions between these domains.

3.6.1.1  Compute Domain


As defined in [26], the compute domain is to provide the compute and stor-
age infrastructure resources that are needed for hosting VNFs. The compute
domain also provides an interface to the infrastructure network domain, but
itself does not support network connectivity between VNFs. Cloud comput-
ing technologies will be widely leveraged in this domain, especially virtualizing
server hardware by means of hypervisors and connecting VMs and NICs using
virtual switches.
The main elements of compute domain include CPUs that execute VNFs
software, NICs that provide physical connections with the infrastructure net-
work domain, and various storage devices. The use of industry standard serv-
ers and storage devices in NFVI is a key factor in the economic case for NFV,
which is expected to reduce equipment cost and power consumption through
resource sharing and consolidation. An industry standard server is a server built
with standardized IT components. With rapid development in IT technologies,
the high-performance packet processing required by communication intensive
network functions can be supported by standard servers. Possible technolo-
gies for enhancing server performance of packet processing include high-speed

Figure 3.17  Compute, hypervisor, and infrastructure network domains in NFVI.

multicore CPU with high I/O bandwidth, smart Ethernet NICs for load shar-
ing and TCP offloading, and polling packets directly to VM memory [26].
3.6.1.2  Hypervisor Domain
The hypervisor domain of NFVI manages compute domain resources for sup-
porting the VMs running VNF software. Essentially, the hypervisor domain
implements the virtualization layer between the physical and virtual computing
(including storage) resources in NFVI. Therefore, a hypervisor domain is re-
sponsible for providing all the required capabilities for infrastructure virtualiza-
tion, including abstraction of physical resources, coordination across VMs for
resource sharing, and isolation between VMs. A popular open source hypervisor
implementation is KVM, and various packaging of KVM has been integrated
into commercial products.
A special challenge to the hypervisor domain in NFVI is to achieve the
high performance expected by many NFV applications, which means allowing
the VMs hosting VNF instances to run as fast as possible. Current and emerg-
ing server hardware technologies offer some features that may greatly improve
VM performance, including multicore processors supporting parallel threads
of execution, system-on-chip processors, specific CPU enhancement for VM
memory allocation and direct I/O access, and PCIe bus enhancements (e.g.,
SR-IOV). Some specific approaches that may be employed in the hypervisor
domain for enhancing NFV VM performance include exclusive allocation of
whole CPU cores to VMs, direct memory mapped drivers for inter-VM com-
munications and for VMs to directly access physical NICs, and implementing
vSwitch as a high-performance VM [26].
Figure 3.18 depicts the NFV hypervisor architecture presented by ETSI
NFV-ISG.

3.6.1.3  Infrastructure Network Domain


The infrastructure network domain inside NFVI is responsible for providing
the required connectivity to support communications among VNFs for end-to-
end service provisioning. Specifically, this domain provides the communication
channels between different VNFs, between the VNFCs of a distributed VNF,
and between VNFs and their orchestration and management entities. In ad-
dition, this domain also provides means of interconnection with legacy (not
virtualization-based) carrier networks [26].
The infrastructure network domain needs to have all essential network-
ing mechanisms available in order to provide the connectivity services. Such
mechanisms include an addressing scheme for the infrastructure network, a
routing process that can relate infrastructure addresses to routes through the
infrastructure network topology, and effective bandwidth allocation that guar-
antees the required communication performance.

Figure 3.18  Hypervisor architecture in NFVI [26].

An important requirement for the infrastructure network domain in


NFVI is to make VNF software and network infrastructure independent and
transparent to each other. Therefore, the network domain has a virtualization
layer between the virtual networks and physical network resources. Network
hardware is abstracted by the virtualization layer to realize virtual paths, which
provide the required connectivity between different VNF instances or between
the VMs that are hosting the same VNF.
The NFVI forms the hosting platform that provides the container inter-
face for deploying VNFs. The NFVI container interface is composed of the VM
container interface provided by the hypervisor domain and the VN container
interface provided by the infrastructure network domain. The former is the pri-
mary hosting interface on which VNF instances run and the latter is the inter-
face for accessing the virtual connections between VNF instances. Figure 3.19
depicts the container interfaces provided by NFVI and its constituent domains.

3.6.2  Virtual Network Functions


3.6.2.1  VNF Software Architecture
A core element of NFV is virtualization of network functions that previously are
implemented as proprietary hardware appliances. Therefore, understanding the
transition from hardware-based NFs to software-based VNF implementations
has been identified as an important study area by ETSI NFV-ISG. In order to
provide guidelines for realizing VNFs, the NFV-ISG has developed a software
architecture of VNF (referred to as VNF architecture). The VNF architecture
identifies the most common and relevant software architectural patterns that
can be leveraged for decoupling software from hardware in NFV. The main ob-

Figure 3.19  Container interfaces provided by NFVI and its constituent domains.

jective of this architecture is to address functional requirements for virtualizing


network functions in the form of software components deployed upon NFVI
and the requirements for deploying network services involving VNFs [27].
The VNF architecture is shown in Figure 3.20. A VNF may comprise
one or multiple VNF components (VNFCs). A VNFC is an internal compo-
nent of a VNF providing a subset of that VNF’s functionality. NFVI provides
the container interface for hosting each VNFC. A VNFC may implement a
single network entity. When a VNF is composed of a group of VNFCs, the in-
ternal interfaces between them do not need to be exposed and standardized. For
example, a packet gateway entity in the LTE network may be composed with
other network entities like the mobility management entity and serving gateway
to form an “EPC VNF.” The interfaces between internal entities of the “EPC
VNF” do not need to be strictly compatible to 3GPP standards, but the exter-
nal interfaces of the entire VNF should follow relevant standards.

Figure 3.20  Software architecture for VNF [27].

As defined by NFV-ISG in this VNF software architecture, a VNF is an


abstract entity that allows the software to be defined and designed, and a VNF
instance (VNFI) is the runtime instantiation of the VNF. A VNFC is a compo-
nent of a VNF, and VNFC instances (VNFCIs) are the executing constituents
that make up a VNF instance. VNF manager creates one or more VNFCIs
in order to instantiate the virtual network function defined by a VNF. These
VNFCIs together provide the functionality of the VNF and expose the external
interfaces of that VNF.
Five types of interfaces related to the VNF architecture have been iden-
tified in [27]. They are referred to as SWA-i, i=1, 2, 3, 4, and 5, as shown in
Figure 3.20.

• SWA-1 interfaces enable communications between various network


functions within the same service or between different network services,
including communications between two VNFs, between a VNF and a
nonvirtualized network function, or between a VNF and an end point.
A VNF may have multiple SWA-1 interfaces.
• SWA-2 interfaces are internal interfaces for communications between
the VNFCs inside a VNF. They are not visible to users of the VNF and
are often vendor specific.
• SWA-3 interfaces are between VNFs and the MANO component, spe-
cifically VNF manager in MANO, and used to perform life cycle man-
agement of one or more VNFs.
• SWA-4 interfaces are used by the EMs to communicate with their as-
sociated VNFs for performing run-time management of the VNFs.
• SWA-5 interfaces are between each VNF and the underlying NFVI to
provide the access to a virtualized slice of the NFVI resources allocated
to the VNF. Each VNF relies on potentially different sets of infrastruc-
ture resources, including compute, storage, and networking resources.
Multiple SWA-5 subinterfaces are defined for accessing different types
of resources in NFVI. The SWA-5 subinterfaces include interfaces to
generic compute, network, and storage resources and interfaces to spe-
cialized functions.

The VNF architecture also defines an information template called VNF


descriptor (VNFD) for specifying the deployment configuration and opera-
tional behavior of a VNF. VNFD describes information in two categories: (a)
the state and environment for deploying a VNF, and (b) the required functions
for operating and managing a VNF. Such information is used to support on-

demand instantiation of the VNF for service provisioning. In [27], the main
information elements in a VNFD are grouped as follows.

• VNF identification data: include data for uniquely identifying vendor/provider, version, type, and description of the VNF (which enables in-
teroperability between VNFs from different providers).
• VNF specific data: include specific VNF configuration data, interde-
pendencies and connectivity requirements of VNFCs, VNF lifecycle
workflow scripts, deployment flavors such as the number of instances of
each VNFC type and the version to be instantiated, and other deploy-
ment constraints.
• VNFC data: include type and identification of the VNFC, specific
VNFC configuration data and scripts, deployment constraints, and vir-
tualization container files/images references, which include the possibil-
ity to define packages of VNFC binaries plus operating system, empty
operating system, and/or empty virtualization container (i.e., unloaded
operating system).
• Virtualized resource requirements, including requirements for compute
resources, storage resources, and network resources.

Figure 3.21 shows the example provided in [27] for illustrating the rela-
tionship between a VNFD and the associated VNF. This example shows a VNF
instance that is made up of four VNFC instances that are of three different
types: A, B, and C. Each VNFC type has its own requirements on the operating
system and the execution environment. These virtual resources and their inter-
connectivity requirements are described in a VNFD. Besides resource require-
ments, a VNFD also contains references to VNF binaries, scripts, configuration
data, and so on, which are needed by the VNFM to configure the VNF prop-
erly. The requirements for NFVI resources (e.g., connectivity requirements,
bandwidth, latency) are also included in the VNFD (but not shown in Figure
3.21).
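
To make the grouping of VNFD information elements more concrete, the sketch below captures a condensed descriptor for a VNF with three VNFC types as a Python dictionary. The field names and values are illustrative only and do not follow the exact ETSI VNFD schema.

vnfd = {
    "vnf_identification": {
        "vendor": "ExampleVendor", "version": "1.2", "type": "vEPC",
    },
    "vnf_specific": {
        "deployment_flavors": [{"name": "small",
                                "instances": {"A": 1, "B": 2, "C": 1}}],
        "lifecycle_scripts": {"instantiate": "scripts/start.sh"},
    },
    "vnfcs": [
        {"id": "A", "image": "images/vnfc-a.qcow2",
         "requirements": {"vcpu": 4, "memory_gb": 8, "storage_gb": 40}},
        {"id": "B", "image": "images/vnfc-b.qcow2",
         "requirements": {"vcpu": 2, "memory_gb": 4, "storage_gb": 20}},
        {"id": "C", "image": "images/vnfc-c.qcow2",
         "requirements": {"vcpu": 2, "memory_gb": 4, "storage_gb": 20}},
    ],
    "virtual_links": [{"from": "A", "to": "B",
                       "bandwidth_mbps": 500, "latency_ms": 2}],
}

# The VNFM would read such a descriptor to decide how many VNFC instances
# to create and which NFVI resources to request for each of them.
total_vcpu = sum(c["requirements"]["vcpu"] for c in vnfd["vnfcs"])
print(total_vcpu)
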
3.6.2.2  VNF Lifecycle Management
Physical appliances have a life cycle: they get unpacked, placed in a location,
powered up, connected to the network, and go through various stages of con-
figurations. The instantiation process of a VNF is the equivalent of unpack-
ing, installing, and powering up a network device. The configuration steps are
similar to those used in physical appliances, typically referred to as day 0/1/2
configuration as outlined next:

Figure 3.21  An example of a VNFD and its associated VNF.

• Day-0 configuration: network devices need a dedicated address to attach them to management systems for follow-up operations. The first step in
onboarding a device, virtual or physical, is the creation of an IP address
for management (e.g., assigning a management port via DHCP). For
VNF, the day-0 configuration can be specified in VNFD and used in the
VM creation process.
• Day-1 configuration: Before a device can provide any function, each
one of its data interfaces has to be assigned an address, either L2 or
L3 (e.g., assigning IP addresses for all interfaces of a router). For VNF
instantiation, the day-1 configuration can also be part of the VM cre-
ation process.
• Day-2 configuration: Following the basic setup a network device can
start performing its functions. The setup of a pseudo-wire for a customer
is an example of day-2 configuration.

Day 0/1/2 configurations are persistent and often stored in a local da-
tabase of the VM. There are many transient states for session-based signaling,
such as voice calls, video sessions, and so on, that are typically not stored in
persistent memory. Note that many of these transient states can be driven from
legacy management systems as long as the virtual appliance appears to have the
same management interface as the legacy physical appliance.
A large array of protocols may be used for the different phases of configuration. For example, DHCP and various authentication protocols may be used for day-0 configuration, file downloads for day-1 configuration, and NETCONF/YANG for day-2 configuration.
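As a small illustration of a day-2 change, the Python sketch below pushes a configuration fragment to a running VNF over NETCONF using the ncclient library. The management address, credentials, interface name, and YANG payload are placeholders rather than values tied to any particular VNF; the point is only to show the mechanics of a day-2 operation once day-0 connectivity exists.

```python
# Minimal sketch of a day-2 configuration push over NETCONF, assuming the
# ncclient library and a VNF exposing a NETCONF agent at the (hypothetical)
# management address assigned during day-0 configuration.
from ncclient import manager

DAY2_CONFIG = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>ge-0/0/1</name>
      <description>customer pseudo-wire attachment</description>
    </interface>
  </interfaces>
</config>
"""

with manager.connect(host="192.0.2.10",      # day-0 management address (example)
                     port=830,
                     username="admin",
                     password="admin",
                     hostkey_verify=False) as m:
    m.edit_config(target="running", config=DAY2_CONFIG)
```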
Other life cycle events include decommissioning a VM, software upgrad-
ing for a VM, recovery from a failure condition, and moving a VM to a differ-
ent location. The events all can be managed by the NFV MANO component as
described in the next subsection.

3.6.3  NFV Management and Orchestration


The new virtual network functions added by NFV into networks call for more
sophisticated capabilities for managing and orchestrating network resources,
functions, and services, which are the main responsibilities of the MANO
component in the NFV architecture. By decoupling virtual network function
software from infrastructure hardware, NFV exposes a new set of entities in
networks, including VNFs, NFVI, and network services (NSs). The virtual re-
sources provided by NFVI are leveraged by VNFs, which are then orchestrated
by following a VNF-FG to form end-to-end NSs. The management and or-
chestration functions in NFV can be grouped in a hierarchy with three layers:
management and orchestration for NFVI, for VNFs, and for NSs, from the
bottom up. Therefore, the MANO component comprises three key functional
blocks: virtualized infrastructure manager (VIM), VNF manager (VNFM), and
NFV orchestrator (NFVO), which are in charge of the management and or-
chestration functions, respectively, for the infrastructure, virtual functions, and
network services [28]. Figure 3.22 shows the MANO architectural framework
defined by ETSI NFV-ISG.

3.6.3.1  Virtualized Infrastructure Manager


The VIM in an NFVI infrastructure domain is responsible for managing the
compute, storage, and network resources within the domain. A VIM may be
specialized in managing a certain type of infrastructure resources (e.g., com-
pute-only, storage-only, or networking-only) or provides federated manage-
ment across multiple types of resources.
The following VIM functionalities are defined in [28]:


Figure 3.22  NFV MANO architectural framework [28].

• Orchestrating the allocation, upgrade, release, and reclamation of NFVI resources, and managing the association between virtual and physical
resources in NFVI. VIM keeps an inventory of the mapping between
virtual and physical resources.
• Supporting management of VNF-FGs, including creating, querying,
updating, and deleting VNF-FGs, by creating and maintaining virtual
networks and performing the required network traffic control functions.
• Managing related information of NFVI hardware resources (e.g., com-
pute, storage, network devices) and software resources (e.g., hypervi-
sors), and discovering the capabilities and features of such resources.
• Managing virtualized resource capacity and forwarding information re-
lated to NFVI resource capacity and usage.
• Managing (adding, deleting, updating, querying, copying) software im-
ages as requested by other MANO functional blocks (e.g., NFVO).
• Collecting performance and fault information of NFVI hardware and
software resources, and forwarding performance measurement results
and fault/event information related to virtual resources.
• Managing catalogs of virtualized resources that can be consumed from
NFVI.


A VIM exposes interfaces to VNFM and NFVO, which are defined by the Vi-Vnfm and Or-Vi reference points, respectively, as shown in Figure 3.22.
VIM also has an interface for supporting communications with a variety of
hypervisors and network controllers in NFVI in order to accomplish resource
management functions. This interface is defined by the Nf-Vi reference point.
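In practice, OpenStack is a widely used VIM implementation. As a rough, hedged sketch (not tied to any particular MANO stack), a VNFM could request a compute resource from such a VIM through the openstacksdk client library as follows; the image, flavor, and network names are placeholders.

```python
# Sketch of a VIM interaction, assuming OpenStack as the VIM and the
# openstacksdk client library; image, flavor, and network names are
# illustrative placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")        # credentials come from clouds.yaml

image = conn.compute.find_image("vnfc-a-image")
flavor = conn.compute.find_flavor("m1.small")
mgmt_net = conn.network.find_network("vnf-mgmt-net")

# Ask the VIM to allocate a VM that will host one VNFC instance.
server = conn.compute.create_server(name="vnfc-a-1",
                                    image_id=image.id,
                                    flavor_id=flavor.id,
                                    networks=[{"uuid": mgmt_net.id}])
server = conn.compute.wait_for_server(server)
print(server.status)                             # e.g., ACTIVE once allocated
```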

3.6.3.2  Virtual Network Function Manager


Virtual network function manager (VNFM) is responsible for lifecycle manage-
ment of VNF instances. Each VNF instance must have an associated VNFM,
while a VNFM may manage either a single or multiple VNF instances. VNFM
uses the information about deployment and operational behaviors of a VNF de-
scribed in its VNFD to create instances of the VNF and to manage the lifecycles
of these instances. VNFM performs its management on a VNF instance by
maintaining the virtualized resources that support the VNF without interfering
with the functions performed by the VNF [28].
As summarized in [28], VNF lifecycle management includes the following main functions; a simple code sketch illustrating these functions is given after the list:

• VNF instantiation: create a VNF instance and perform the required configuration specified by the VNF deployment template (VNFD);
• VNF scaling: increase or reduce the capacity of a VNF instance;
• VNF updating and/or upgrading: support changes in VNF software
and/or configuration;
• VNF termination: release the NFVI resources associated with the VNF
instance and return them to the NFVI resource pool;
• VNF information management: collect performance measurement re-
sults and fault/event information related to the VNF instance.
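The following toy VNFM skeleton pictures how the lifecycle functions above can map onto code. The Vim class is a stand-in for a real VIM client (such as the openstacksdk calls sketched earlier), the VNFD argument is assumed to look like the earlier VNFD sketch, and all names are illustrative rather than part of any standard interface.

```python
# A toy VNFM skeleton; everything here is illustrative, not a MANO API.
class Vim:
    """Stand-in for a real VIM client."""
    def allocate_vm(self, name, requirements):
        print(f"VIM: allocating {name} with {requirements}")
        return {"name": name, "requirements": requirements}

    def release_vm(self, vm):
        print(f"VIM: releasing {vm['name']}")


class SimpleVnfm:
    def __init__(self, vim):
        self.vim = vim
        self.instances = {}          # vnf_instance_id -> list of allocated VMs

    def instantiate(self, instance_id, vnfd, flavor="small"):
        """VNF instantiation: allocate one VM per VNFC instance in the flavor."""
        flavor_spec = next(f for f in vnfd["vnf_specific"]["deployment_flavors"]
                           if f["name"] == flavor)
        vms = []
        for vnfc in vnfd["vnfcs"]:
            count = flavor_spec["vnfc_instances"].get(vnfc["id"], 0)
            for i in range(count):
                vms.append(self.vim.allocate_vm(f"{instance_id}-{vnfc['id']}-{i}",
                                                vnfc["requirements"]))
        self.instances[instance_id] = vms

    def scale_out(self, instance_id, vnfc):
        """VNF scaling: add one more instance of the given VNFC."""
        n = len(self.instances[instance_id])
        self.instances[instance_id].append(
            self.vim.allocate_vm(f"{instance_id}-{vnfc['id']}-{n}",
                                 vnfc["requirements"]))

    def terminate(self, instance_id):
        """VNF termination: release all NFVI resources of the instance."""
        for vm in self.instances.pop(instance_id):
            self.vim.release_vm(vm)
```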

A VNFM is supposed to expose its functions to other MANO blocks (e.g., NFVO), as services in an open and abstract manner. The interface be-
tween VNFM and NFVO is defined by the Or-Vnfm reference point. VNFM
leverages the services provided by VIMs through an interface defined by the
Vi-Vnfm reference point for managing the infrastructure resources allocated to
VNF instances. In addition, a VNFM interacts with the VNF instance(s) that
it is managing via the Ve-Vnfm-vnf reference point and communicates with the
element manager of the VNF instances via the Ve-Vnfm-em reference point.
3.6.3.3  NFV Orchestrator
The NFVO functional block in the NFV MANO component has two main
responsibilities: (a) resource orchestration that orchestrates the NFVI resources
across multiple VIMs, and (b) service orchestration that manages lifecycles of
network services. The two responsibilities are kept within one functional block
in the current MANO framework mainly for simplicity reasons. It is worth
noting that a key idea of network virtualization is to decouple the function-
alities focusing on service provisioning from the resources in the underlying
infrastructures. Therefore, separating the orchestration of network services and
orchestration of infrastructure resources into two independent entities that
interact with each other through a standard abstract interface will make the
NFV MANO component design align better with the principle of network
virtualization.
The following two lists of capabilities are defined in [28], respectively, for network service orchestration and resource orchestration; a brief sketch of how the two roles fit together is given after the lists.
NFVO capabilities for network service orchestration include:

• Onboarding network service (i.e., register a network service in the catalog and ensure that all the templates describing the service are onboarded);
• Instantiating network service (i.e., create a network service using the
network service onboarding mechanisms);
• Scaling network service (i.e., adjust the capacity of a network service);
• Updating network service (i.e., make changes in network service con-
figuration such as changing inter-VNF connectivity or the constituent
VNF instances);
• Creating, deleting, querying, and updating VNF-FGs associated to a
network service;
• Terminating network service (i.e., request the termination of constitu-
ent VNF instances of a network service and the release of NFVI re-
sources associated to the service).

NFVO responsibilities for resource orchestration include the following:

• Validation and authorization of the requests from VNF manager(s) for NFVI resource allocation;
• NFVI resource management across infrastructure domains, including
distribution, reservation, and allocation of NFVI resources to network
service instances and VNF instances;
• Supporting management of the relationship between VNF instances and
the NFVI resources allocated to those VNF instances by using NFVI re-
source repository and information received from VIMs;
• Policy management and enforcement for network service instances and VNF instances;
• Collecting information about usage of NFVI resources by VNF
instances.
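The toy sketch below shows how the service-orchestration and resource-orchestration roles could fit together when instantiating a network service. Descriptor field names are illustrative, and the vnfm and vim arguments are assumed to behave like the earlier VNFM/VIM sketches (with an additional create_virtual_link() method on the VIM side); none of this reflects a standardized interface.

```python
# Toy NFVO routine: instantiate a network service from an NSD by delegating
# VNF lifecycle work to a VNFM and virtual-link setup to a VIM, and record the
# result as a simple NS record (NSR). All field names are illustrative.
def instantiate_network_service(nsd, vnf_catalog, vnfm, vim):
    nsr = {"ns_id": nsd["id"], "vnf_instances": [], "virtual_links": []}

    # Service orchestration: one VNF instance per constituent VNFD reference.
    for i, vnfd_id in enumerate(nsd["constituent_vnfds"]):
        vnfd = vnf_catalog[vnfd_id]              # VNFD lookup in the VNF catalog
        instance_id = f"{nsd['id']}-vnf{i}"
        vnfm.instantiate(instance_id, vnfd)      # delegated to the VNFM
        nsr["vnf_instances"].append(instance_id)

    # Resource orchestration: virtual links described by the VLDs of the NSD.
    for vld in nsd["virtual_link_descriptors"]:
        nsr["virtual_links"].append(vim.create_virtual_link(vld))

    return nsr                                   # kept in the NFV instances repository
```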

3.6.3.4  MANO Information Model


In order to provide common information models required for network service
operation and management, NFV-ISG defines some information elements and
their organization in the MANO specification [28]. The information required
by MANO includes two main categories: (a) information contained in descrip-
tors, which are deployment templates that provide relatively static information
used for deploying VNFs and NSs, and (b) information residing in records,
which contain more dynamic run-time data representing VNF and NS instanc-
es; such data is maintained throughout the lifetime of the instances. The in-
formation elements and their organization defined in [28] are listed as follows.

• Network service descriptor (NSD) is a deployment template for an NS that references all other information elements describing the constituent components of the NS. In general, four types of information elements are needed for describing an NS; namely, the descriptors for VNF, virtual link (VL), VNF FG, and physical network functions.
• VNF descriptor (VNFD) describes the deployment and operation be-
havior requirements for a VNF. The information provided by VNFD is
primarily used by VNFM for VNF instantiation and lifecycle manage-
ment and may also be used by the NFVO to manage and orchestrate NS
and NFVI resources.
• A VNF FG descriptor (VNFFGD) describes the topology of an NS by
referencing the VNFs involved in the NS and the VLs connecting these
VNFs. A virtual link descriptor (VLD) describes the resource require-
ments needed for a virtual link, which could be used for connecting
VNFs, physical functions, and endpoints of a network service.
• VNF and NS instantiations will create records for representing the new-
ly created instances. Such records include network service record (NSR),
VNF record (VNFR), virtual link record (VLR), and VNF FG record
(VNFFGR).

The NS catalog is a repository of all the onboarded network services that supports creating and managing NS deployment templates, including NSDs, VLDs, and VNFFGDs. The VNF catalog represents the repository of all the onboarded VNF packages and supports creating and managing VNF packages.


Both NFVO and VNFM can query the VNF catalog to find and retrieve VNFDs to support different operations.
The NFV instances repository holds information of all VNF instances and network service instances. VNF instances and NS instances are represented by VNF records and NS records, respectively. Those records are updated during
the lifecycles of the respective instances to reflect the changes caused by execu-
tion of lifecycle management functions for VNF and network service instances.
NFVI resource repository holds information about available, reserved,
and allocated NFVI resources as abstracted by the VIM across infrastructure
domains, thus providing information useful for resource reservation, allocation,
and monitoring purposes.
Figure 3.23 is a diagram provided by NFV-ISG for illustrating how the
various MANO information elements are organized in catalogs and repositories.

3.7  NFV Implementation and Performance


3.7.1  Challenges to High-Performance NFV
A key requirement for realizing the benefits of NFV is to implement VNFs as
software instances running on commercial off-the-shelf (COTS) servers. The
NFV approach should be applicable to both data plane and control plane func-
tions in both fixed and mobile networks, where some network functions require
high-performance packet processing with short delay and high throughput.
Figure 3.23  Information elements and their organization in MANO [28].

Therefore, how to design COTS server-based NFV implementations to support realistic network loads and achieve performance comparable to hardware-based appliances has become an important research topic.
In general, the requirements for high-performance NFV implementations
that support the deployment of various VNFs include the following aspects:

• High throughput and low latency to handle packet processing operations at the line rates required in backbone and/or datacenter networks;
• Flexibility to run different types of VNFs developed by different ven-
dors and configured by either network operators or third parties;
• Support for orchestration of various VNF instances for flexible service pro-
visioning;
• Isolation of compute, storage, and network resources that allow multiple
tenants to share common hardware platform;
• Scalability to easily scale up or down the system capacity in response to
changes in traffic load.

As reviewed in Section 3.2, various technologies have been developed for server virtualization and could be employed to implement NFV on COTS
servers. Typical server virtualization technologies are either hypervisor based or
container based. Although containers are lightweight and efficient, they force
all VNF instances to run on top of the same operating system, thus limiting
the system flexibility. In contrast, hypervisor-based technologies provide the flexibility needed for multitenant VNFs at the cost of performance, mainly due to the overheads introduced by the hypervisor.
A general structure of a hypervisor-based virtualization platform for net-
work functions is shown in Figure 3.24. In this system, packets arrive at serv-
er NICs and then are copied into the hypervisor. In the hypervisor, a virtual
switch performs layer 2 processing (or more complex functions based on packet
header information) to determine which VM each packet should be delivered
to, then notifies the virtual NIC to forward the packet to the destined VM. The
memory page containing the packet is then either copied or granted to the guest
OS of the VM, and finally the packet data is copied to a user space application
running on the VM for performing the network functions.
Two main factors in such a virtualized platform prevent COTS servers
from implementing high-performance network functions. First, interrupts are
typically used to notify an operating system about packet arrivals since network
packets arrive at unpredictable time. However, interrupt handling can be ex-
pensive because the long pipeline, speculative execution, and multilevel mem-
ory systems used in modern processors all cause penalty for interrupts in terms
of CPU cycles. Therefore, a high packet arrival rate may make the throughput of such systems drop significantly.

Figure 3.24  A generic structure of virtualized platform for packet processing.

Second, existing operating systems typically handle network operations by first reading the incoming packets into kernel
space and then copying the data to user space for processing. The extra packet
copying increases processing delay and limits processing throughput. Network
I/O in virtualized setting may cause even greater overheads due to the addi-
tional data copying from the hypervisor to guest OS. Chaining the service com-
ponents hosted by different VMs on the same server may require data copying
between VM guest OSs, thus making the situation even worse [29].

3.7.2  Data Plane I/O Virtualization


As we discussed in the previous section, data plane I/O operations form a bot-
tleneck for implementing high-performance NFV on COTS servers. Various
I/O acceleration techniques have been developed, among which single root I/O
virtualization (SR-IOV) and Intel data plane development kit (DPDK) are rep-
resentative solutions for I/O virtualization improvement.
SR-IOV works with I/O hardware built based on PCIe technology. SR-
IOV divides the physical functions of a PCIe device into multiple lighter-
weight virtual functions, maps each virtual function to a PCI express requestor
ID, and assigns each virtual function to a VM. The virtual functions appear
to be hardware devices to the VM. All packets are moved directly between a
guest OS and a virtual function through an independent memory space using
interrupt and DMA. In this way, SR-IOV bypasses the software switch in the
hypervisor and thus can improve I/O throughput, CPU utilization, and latency
performance.


Intel data plane development kit (DPDK) provides a programming framework for x86 processors to enable faster development of high-speed data plane
functions. DPDK architecture is shown in Figure 3.25. The main mechanism
employed by DPDK to reduce data plane I/O overheads in a virtualization
environment is to allow user space applications to directly poll data from NICs.
DPDK provides data plane libraries and optimized poll-mode NIC drivers in
Linux user space. The data plane libraries support queue management, buf-
fer management, and packet flow classification. The environment abstraction
layer (EAL) in DPDK hides the environmental specifics from the libraries and
applications and provides a standard programming interface for applications
and libraries to access available hardware resources and the operating system.
Once the EAL is setup for a specific hardware/software environment, develop-
ers can create their applications by linking the set of libraries for the environ-
ment. DPDK supports SR-IOV to allow multiple VMs to access the NIC.
Although SR-IOV and DPDK-based virtualization technologies enable
high throughput user space VNFs, they still have some deficiencies that need
to be addressed in order to meet the requirements for high-performance NFV
implementation. For example, data transfer throughput to and from a VM
achieved by DPDK’s direct DMA is typically much lower than throughput of
native I/O. DPDK working with SR-IOV only allows packet switching among
VMs to be performed based on layer 2 address. In addition, DPDK still lacks a
complete framework with some important abilities that are necessary for con-
structing complex network services (e.g., efficient communications between
direct-chained VMs).

Figure 3.25  Intel DPDK architecture.


3.7.3  NFV Implementation Example—NetVM


The NetVM platform represents recent research progress toward high-performance NFV.
NetVM is based on DPDK’s high-throughput packet processing capabilities
with additional abstraction for supporting flexible creation and orchestration
of VNFs [29]. A key goal of NetVM is to enable efficient data delivery to and
between VMs in order to support chaining VNFs into network services, which
is critical to service provisioning in an NFV environment. Figure 3.26 depicts
the NetVM architecture. NetVM implementation is based on Linux KVM hy-
pervisor. The NetVM Core component manages VMs and delivers packet data
to them. Individual VMs perform the packet processing operations required by
the network function realized by the VM.
A key mechanism employed in NetVM to achieve its design goal is zero-
copy packet delivery between NICs and VMs and between different VMs. Net-
VM uses two communication channels for packet delivery. The first channel,
shown by solid lines in Figure 3.26, is a small memory region organized as ring
buffers and shared between the hypervisor and individual VMs for transmitting
packet descriptors. The second communication channel, shown by dashed lines
in Figure 3.26, is a memory region shared between a group of VMs that allows
VNFs to directly access packet data. With the two channels, NetVM requires
only simple packet descriptors to be copied to enable inter-VM data communi-
cations via shared memory without actual data copying.
NetVM Core runs as a DPDK-enabled application in the host user space
and reads packets from NICs to the shared memory region using DMA.

Figure 3.26  NetVM implementation of virtual network functions [29].

NetVM inserts a descriptor for each packet in the ring buffer shared between the hypervisor and the destination VM of the packet. The descriptor contains infor-
mation about the memory location where the packet is stored. The descriptor
can also be used for specifying the actions to be performed for packet forward-
ing or transmitting (e.g., forwarding the packet to another VM or transmitting
the packet through an NIC). A VNF can ask the NetVM core to forward a
packet to a different VM or transmit the packet to an NIC after the VNF com-
pletes its processing on the packet [29].
Since direct access to shared memory region plays a key role in NetVM
for achieving zero-copy data delivery, ensuring consistency of shared data be-
comes an important issue. Locks are typically used as a consistency protection
mechanism in memory sharing; however, even an uncontested lock introduces
delay that may degrade the high-performance packet processing that NetVM
attempts to offer. Fortunately, analysis given in [29] indicates that the com-
munication structure used in NetVM can be implemented safely without any
lock. Since a packet descriptor will be held only by either the hypervisor or a
single VM at any time, packet data in the shared memory region will never see
concurrent access.
Another issue that has been considered in NetVM implementation is
nonuniform memory access (NUMA) awareness. Modern servers often have
multiple CPU sockets connecting to different banks of memory. This may re-
sult in variable memory access time depending on memory location relative to
a processor, especially when a thread accesses data that is spread across multiple
memory modules. To avoid NUMA costs, NetVM uses one memory page re-
gion per CPU socket and ensures that a packet stored in one region is only
processed by the CPU cores on that socket.

3.7.4  NFV Implementation Example—ClickOS


Another representative implementation of high-performance NFV is ClickOS
published in [30]. Similar to NetVM, the design goal of ClickOS is to provide
a high-performance and flexible virtualization platform for deploying various
network functions. Unlike NetVM, ClickOS is built based on Xen, another
popular Linux hypervisor, and leverages the Click modular router software [31].
The hypervisor virtualization layer between server hardware and network
function software is the main performance bottleneck for NFV implementa-
tions. To minimize virtualization overheads, paravirtualization is preferable to
full virtualization because paravirtualization makes minor changes to the guest
OS. The developers of ClickOS chose to use Xen due to its support for para-
virtualization. With paravirtualization Xen allows a guest OS to access different
types of NICs through a single hardware-agnostic driver connected to the driver
domain. In addition, Xen comes with MiniOS, which is a tiny operating system
that provides the required support for running virtualized network functions without the unnecessary functionalities included in a conventional operating system. Therefore, MiniOS is chosen as the basis for ClickOS VMs. ClickOS
leverages the Click modular router software as the programming abstraction for
VNFs because Click makes it easy to reuse the significant amount of common
functionality across a wide variety of network functions.
ClickOS architecture is shown in Figure 3.27. The general structure of
Xen is split into a privileged virtual machine or domain called dom0 and a set
of guest or user domains comprising users’ VMs. Xen includes a driver domain
that hosts the device drivers, and in most cases dom0 acts as the driver domain.
Xen has a split-driver model, in which the front-end part of a driver runs in
the VM guest OS while the back-end part runs in the driver domain. Com-
munications between the front- and back-end drivers happen through shared
memory and a common ring-based API. Following this model, the dom0 con-
tains a netback driver and each VM implements a netfront driver. Each ClickOS
VM consists of the Click modular router software running on top of MiniOS,
which implements all the basic functionality needed to run as a Xen VM guest
OS [30].

3.7.5  Open NFV Platform


NFV has attracted extensive interest from both networking and IT communi-
ties, including network operators, service providers, server vendors, and ap-
plication developers. Various technologies have been developed in both aca-
demia and industry for achieving high-performance NFV implementations.
Figure 3.27  ClickOS implementation of virtual network functions [30].

An important aspect of NFV technology development is to provide an open platform that facilitates integration of the wide variety of components (e.g., NFVI devices, VNF applications, MANO software, among others) and assures
interoperability among them.
Open platform for NFV (OPNFV) is a Linux Foundation collaborative
project that intends to provide an open source NFV implementation platform
for testing and validating NFV solutions. The objective of OPNFV is to pro-
mote interoperable NFV solutions and stimulate the open source communities
to create software and hardware for NFV implementations based on common
industry requirements.
The overall design of the OPNFV platform is modular and allows for
extensions and innovations beyond community components. Such a design provides users with choices to obtain additional value from proprietary or special-
ized components. The OPNFV architecture that is currently under develop-
ment follows the NFV architectural framework specified by ETSI NFV-ISG.
The OPNFV project initially started on the NFV infrastructure layer compris-
ing the NFVI and VIM components, and focused on the ways these compo-
nents interact and the interfaces between them. These interfaces include VNF
interfaces to virtual infrastructure, interfaces used by applications to execute on
virtual infrastructure, interfaces between the virtual infrastructure and VIM,
and interfaces between the VIM and VNFM/VNFO.
The technical overview published by the OPNFV project (www.opnfv.org/software/technical-overview) identifies the following list of functionalities as the main use cases of the OPNFV platform:

• Life-cycle management of VNFs;


• Specifying and interconnecting VNFs and VNFCs;
• Dynamically instantiating new VNF instances or decommissioning
VNF instances;
• Detecting faults and failure in NFVI, VIM, and other components of
the infrastructure layer;
• Enabling NFVI-as-a-service for hosting VNF instances on the infra-
structure layer;
• Sourcing/sinking traffic from/to a physical network function to/from a
virtual network function.

A key part of OPNFV is the Pharos Community Lab project and the bare
metal lab infrastructure hosted by the Linux Foundation. Pharos is a test lab for
developing and testing NFV solutions. The lab is geographically and technically
diverse to allow NFV technologies to be developed based on various hardware
environments.


3.7.6  NFV Implementation Portability and Reliability


The decoupling between network functions and their hosting platform enabled
by NFV should simplify the interaction between various players in the network
ecosystem. Hardware vendors could be totally unaware of the VNFs that would
be deployed and operated on their devices in the future. On the other hand, VNF
providers should be able to provide reliable performance estimation for dif-
ferent hardware configurations in order to meet expected service requirements.
Performance and portability issues of NFV implementations are being actively
studied within ETSI NFV-ISG and the obtained results have been published in
a technical report [32]. This document provides a comprehensive description
of the information that is required to (a) specify hardware requirements of a
network function for a given performance target, and (b) describe the hardware
elements available in a server.
The information models defined by NFV-ISG for achieving these goals
include compute host descriptor (CHD) and VM descriptor (VMD). A com-
pute host (CH) is the whole entity composed of the hardware platform and a
hypervisor running on it. A CHD provides an information template for de-
scribing the capabilities and currently available resources offered by a CH for
deploying VM images. There will be one CHD for each CH. VMD is a tem-
plate that declares the NFV resources to be required by the VNF’s VMs from
the NFVI. In order to distinguish the demanded resources from the applied
configuration for a VM, another descriptor called VM configuration is defined
to describe the final configuration to be applied when deploying a VM on a
specific CH. It is expected to have a VM configuration for each deployed VM
instance. In [32], NFV-ISG defined a list of VM requirements that should be
included in the VMD template and a list of hardware capabilities that should
be included in the CHD in order to achieve high-performance VNFs while as-
suring portability between VNFs.
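A minimal sketch of how this VMD/CHD information could be used is a placement check that verifies whether a compute host described by a CHD can satisfy a VM's declared requirements. The field names below are illustrative and do not reproduce the NFV-ISG templates.

```python
# Toy feasibility check between a VM descriptor (VMD) and a compute host
# descriptor (CHD); field names are illustrative, not the NFV-ISG templates.
def can_host(chd, vmd):
    """Return True if the host's available capacity and capabilities cover
    the VM's declared requirements."""
    capacity_ok = all(chd["available"].get(k, 0) >= v
                      for k, v in vmd["requirements"].items())
    features_ok = set(vmd.get("required_features", [])) <= set(chd.get("features", []))
    return capacity_ok and features_ok

chd = {"available": {"vcpus": 16, "memory_mb": 65536, "storage_gb": 500},
       "features": ["sr-iov", "dpdk", "numa-pinning"]}
vmd = {"requirements": {"vcpus": 4, "memory_mb": 8192, "storage_gb": 40},
       "required_features": ["sr-iov", "numa-pinning"]}

print(can_host(chd, vmd))   # -> True for this example
```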
In addition to performance such as packet processing throughput and
delay, service reliability and availability are also important aspects of NFV
implementation. The unique challenges and opportunities in ensuring service availability and maintaining resiliency in an NFV environment have also been studied by NFV-ISG, and the findings are brought together in [33]. This doc-
ument describes the resiliency problem and discusses the principles, require-
ments, and deployment and engineering guidelines relating to NFV resiliency.
It also analyzes some relevant use cases and provides guidelines for future work
to bring software engineering best practices to design resilient NFV-based net-
work systems.


3.8  Virtualization-Based Network Service Provisioning


3.8.1  Service-Oriented Architecture
Service-oriented architecture (SOA) provides a set of architectural principles
for system designs in which all functions are encapsulated as independent ser-
vices with interfaces that can be called in specified sequences to form business
processes [34]. The concept of service in SOA is defined as a system module
that is self contained (i.e., the service maintains its own states) and platform
independent (i.e., interface to a service is independent of the implementation
of the service). Services can be described, published, located, orchestrated, and
programmed through standard interfaces and messaging protocols. All services
in SOA are independent of each other and service operations are perceived as
opaque by its external components, which neither know nor care how services
perform their functions. The technologies providing the functionality of a ser-
vice are hidden behind the service interface.
A key feature of SOA is the loosely coupled interaction among hetero-
geneous system modules in the architecture. The term coupling indicates the
degree of dependency any two systems have on each other. In loosely coupled
interactions, systems need not know how their partners behave or are imple-
mented, which allows systems to connect and interact more freely. Therefore,
loose coupling of heterogeneous systems provides a level of flexibility and in-
teroperability that cannot be matched using traditional approaches for building
highly integrated, cross-platform, interdomain communication environments.
Other features of SOA include reusable services, formal contract among ser-
vices, service abstraction, service autonomy, service discoverability, and service
composability [35].
SOA provides effective architectural principles for heterogeneous system
integration, which may facilitate realizing virtualization by abstracting system
resources and functionalities as services and providing loose-coupling interfaces
among services. SOA has been adopted as the main model for cloud service pro-
visioning through the infrastructure-as-a-service (IaaS), platform-as-a-service
(PaaS), and software-as-a-service (SaaS) paradigms.
The SOA principles have also been applied in the field of telecommuni-
cations and computer networking in order to enhance network capability and
flexibility for service provisioning [36]. Early efforts toward this direction may
be traced back to the service-independent platform that intelligent network
(IN) attempted to build in the 1980s, the telecom APIs such as Parlay and Java
APIs for integrated networks (JAIN) developed in the 1990s, and the Parlay
X and service delivery platform (SDP) developed by 3GPP in the 2000s. The
IEEE next generation service overlay network (NGSON) is a more recent ex-
ample of applying SOA in networking for service provisioning.


The architectural principles of SOA may be realized with different approaches, including web services and representational state transfer (REST)
technologies. A web service provides an interface for describing a collection of
operations that can be accessed using standardized XML messages. A web ser-
vice can be described using a standard XML schema called service description,
which should cover the information that users need to interact with the service.
Web service interface hides the internal details of a service, thus allowing users
to utilize the service functions without knowledge of service implementation.
REST offers an alternative to standard web service technologies for implement-
ing SOA. The focus of REST is on the resources exposed by services. Each re-
source is identified by a URI represented by a certain MIME type (such as XML
or JSON) and accessed and controlled using POST, GET, PUT, or DELETE
http methods. The technologies that follow the REST design style for realizing
SOA principles are also called RESTful web services.
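To make the REST interaction style concrete, the short Python sketch below manipulates a hypothetical "virtual network" resource with the standard HTTP methods; the base URL and JSON fields are placeholders and do not belong to any particular product's API.

```python
# Sketch of RESTful resource manipulation with the Python requests library.
# The endpoint and payload are hypothetical; only the HTTP method usage
# illustrates the REST style described above.
import requests

BASE = "http://service.example.com/api/v1/virtual-networks"

# Create a resource (POST), then read, update, and delete it.
created = requests.post(BASE, json={"name": "vn-1", "bandwidth_mbps": 100}).json()
vn_url = f"{BASE}/{created['id']}"

current = requests.get(vn_url).json()                          # GET: read the resource
requests.put(vn_url, json={**current, "bandwidth_mbps": 200})  # PUT: replace/update
requests.delete(vn_url)                                        # DELETE: remove it
```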

3.8.2  Service-Oriented Network Virtualization


Some of the unique features of SOA make it a promising approach to address-
ing some key challenges that NV and NFV are facing, and thus may serve as
a key enabling technology for service provisioning in the virtualization-based
networks.
A key requirement for network virtualization is abstraction of network
infrastructure through which infrastructure resources can be transparently uti-
lized by VNs. In order to create multitenant VNs for meeting diverse service re-
quirements, an SP (or a VNP) needs to discover available resources in network
infrastructures that may belong to multiple InPs, and then select and orches-
trate the appropriate resources to form VNs. Therefore, resource abstraction
and encapsulation, together with mechanisms for flexible and effective interac-
tion and collaboration among InPs, SPs (may including VNPs and VNOs), and
service consumers (e.g., applications running on VNs) play a crucial role in an
NV environment.
SOA offers a promising approach to facilitating network virtualization.
A layered structure for service-oriented network virtualization is shown in Fig-
ure 3.28. Following the SOA principle, the resources and capabilities of an
infrastructure domain can be encapsulated into network infrastructure services
(NISs) provided by the InP to SPs through a network infrastructure-as-a-service
(NIaaS) model. An SP can leverage the NIaaS of a single InP or compose the
NIaaSs provided by multiple InPs to provide end-to-end network services (NSs)
to end users. An end user, often an application running on a VN, utilizes the
underlying networking platform by consuming the NS offered by an SP. Such a
network service model based on virtualization and SOA is essentially a network-
as-a-service (NaaS) paradigm [35].


Figure 3.28  A layered structure for service-oriented network virtualization.

NIaaS plays the same role in network virtualization as IaaS does in a cloud
environment, but focuses on networking capabilities instead of general com-
putational resources in the infrastructure. Similarly, NaaS in service-oriented
network virtualization is comparable to SaaS in the cloud service model.
It is worth noting that from an end user’s perspective, services provided
by the current IP-based Internet can be regarded as NSs, although they might
not have all the features of cloud services, such as elastic on-demand service
delivery. For the current TCP/IP protocol stack, the interface between the ap-
plication layer and the TCP layer (e.g., socket interface) provides a service in-
terface. However, although this interface is standardized, it lacks the necessary
abstraction of the network platform for decoupling the functions and imple-
mentations of network services. That is, service access methods used by the user
are dependent on service implementation. For example, socket API needs to be
revised if the transport layer protocol is changed or replaced by a new protocol.

3.8.3  Network-as-a-Service (NaaS) in NFV


The NaaS paradigm has also been employed in the NFV architecture for ser-
vice provisioning in virtualization-based networks. Representative NaaS-based
service models of NFV include network function virtualization infrastructure-
as-a-service (NFVIaaS), virtual network function-as-a-service (VNFaaS), and
virtual network platform-as-a-service (VNPaaS), which have been identified by
NFV-ISG as main NFV use cases. The descriptions of these service models
given in [37] are summarized next.


3.8.3.1  NFV Infrastructure as a Service (NFVIaaS)


The NFV infrastructure comprises computing resources (including processing
capacity, storage space, and hypervisors) as well as network resources, which of-
fers an environment in which VNFs can execute. Following the SOA principles,
NFVI may provide computing capabilities to VNFs through the IaaS model,
which can be referred to as compute infrastructure as a service (CIaaS). In addi-
tion, NFVI may provide networking capability through the NIaaS model and
combines NIaaS with CIaaS to form NFVI-as-a-service (NFVIaaS).
The NFVIaaS model can be used by a network operator for deploying
VNFs upon its own NFVI for providing network services, which essentially
follows the private cloud deployment mode. In order to meet network perfor-
mance and/or regulatory requirements, it may be desirable for a service provider
to be able to run VNF instances inside an NFV infrastructure operated by a
different service provider. NFVIaaS provides a perfect approach to accomplish-
ing this objective. In such use cases, NFVIaaS allows an NFVI to be offered as
a public cloud infrastructure.
NFVIaaS also naturally supports resources pooling that enhances infra-
structure utilization and multitenancy that allows multiple VNFs to be de-
ployed upon shared infrastructure. In addition, NFVIaaS provides federated
computing and networking capabilities as general infrastructure services, which
offers a unified platform for deploying both virtual network functions and
cloud applications. Therefore, NFVIaaS may serve as a key enabler for unifica-
tion of network and cloud services, which will be discussed in more detail in
later sections.

3.8.3.2  VNF as a Service (VNFaaS)


With rapid adoption of cloud services, many enterprises have migrated appli-
cations to either enterprise data centers or public clouds. This trend calls for
new designs for enterprise networks (e.g., virtualization of enterprise CPEs—
access routers—in an operator’s network). Instead of asking enterprises to build
network infrastructure for deploying some new functions, the service provider
may be able to provide the network functions as measured services that may be
utilized by enterprises for their networks. For example, a service provider could
operate a VNF instance for implementing CPE for an enterprise and poten-
tially another VNF instance for performing the control functions of the CPE
for improving its scalability.
Making the VNF functionality available as a service, which is referred
to as VNF as a service (VNFaaS), is comparable to the cloud computing op-
tion of SaaS. In the virtualized enterprise CPE example, the enterprise is the
consumer of the virtual CPE service. A service consumer can manage his VNF
services from a configuration perspective but cannot control the underlying in-
frastructure hosting the VNF instance. The service provider can scale the NFVI resources allocated to the VNF instance in response to changes in the demand for a VNF service.
3.8.3.3  Virtual Network Platform as a Service (VNPaaS)
In order to increase the flexibility of resource sharing and facilitate VNPs to
setup VNs, providers of network infrastructures can make available a suite of
infrastructure and applications as a platform on which VNs customized for dif-
ferent application requirements can be easily deployed. NFV supports such use
cases by providing a toolkit of NFVI with some VNFs as a platform for creating
VNs and offers the platform as a service; that is virtual network platform as a
service (VNPaaS).
The main difference between VNPaaS and VNFaaS lies in the scale of
service and the scope of control provided to the consumers for the service. VN-
FaaS focuses on offering individual VNF instances as services, while VNPaaS
provides a larger-scale service typically for creating virtual networks. VNFaaS
limits service consumers to only configure the VNFs made available by the
service provider, whereas VNPaaS typically provides the capability for service
consumers to introduce their own VNF instance as well.

3.8.4  Software-Defined Control for NaaS in NFV


A key to realizing the NaaS paradigm is an open and abstract service interface
that exposes the network service capabilities while hiding service implementa-
tion details. The centralized and programmable control plane enabled by SDN
may greatly facilitate NaaS by supporting such a service interface through its
northbound (NB) API.
In current development of SDN controllers and control applications,
REST APIs have become a prevalent choice for the NB API [38]. For example,
Floodlight [39], a popular open source SDN controller, provides a built-in vir-
tual network module that exposes a REST API through OpenStack Neutron.
Meridian, implemented as a module inside Floodlight, provides a REST API
for managing virtual networks [40]. The application layer traffic optimization
(ALTO) protocol also supports a REST NB API between an SDN controller
and the control applications running upon it [41].
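As one concrete instance of such a REST NB API, the sketch below pushes a static flow entry to a Floodlight controller. The request format follows the Floodlight Static Flow Pusher documentation, but the endpoint path and field names may differ across controller versions, so the example should be treated as illustrative rather than authoritative; the controller address, datapath ID, and ports are placeholders.

```python
# Sketch: installing a flow entry through Floodlight's Static Flow Pusher REST API.
# Verify the endpoint and field names against the Floodlight release actually used.
import requests

CONTROLLER = "http://127.0.0.1:8080"
flow = {
    "switch": "00:00:00:00:00:00:00:01",   # datapath ID of the target switch
    "name": "forward-h1-to-h2",
    "priority": "32768",
    "in_port": "1",
    "active": "true",
    "actions": "output=2",
}
resp = requests.post(f"{CONTROLLER}/wm/staticflowpusher/json", json=flow)
print(resp.status_code, resp.text)
```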
OpenStack [42] is a popular open source software package for building
IaaS-based clouds. OpenStack provides REST APIs to control and manage the
distributed resources in a cloud infrastructure. Neutron is the component in
OpenStack providing networking services. Neutron API consists of three types
of resources: network, port, and subnet. A network resource represents a virtual
network with ports to which VMs can attach. A subnet resource in a network
represents an IP address block allocated for the ports. A port resource receives
an IP address when it joins a subnet.
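As a sketch of the three Neutron resource types just described, the following uses the openstacksdk client, one common way to reach the Neutron REST API. Resource names and the address block are placeholders, and error handling is omitted.

```python
# Sketch: create a network, a subnet in it, and a port on it via openstacksdk.
import openstack

conn = openstack.connect(cloud="mycloud")       # credentials come from clouds.yaml

network = conn.network.create_network(name="tenant-net-1")
subnet = conn.network.create_subnet(network_id=network.id,
                                    name="tenant-subnet-1",
                                    ip_version=4,
                                    cidr="192.168.10.0/24")
port = conn.network.create_port(network_id=network.id, name="vm-port-1")
print(port.fixed_ips)   # the port receives an IP address from the subnet
```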


Neutron employs a plugin-based architecture that allows different network infrastructure domains implemented with heterogeneous networking
technologies to be integrated into OpenStack. Neutron API transforms requests
from upper layer applications into calls to the responsible plugins that interact
with the infrastructure elements (e.g., switches and routers). In an SDN envi-
ronment, the plugins interact with the associated SDN controllers in network
infrastructure domains via the NB interface of the controller. For example, the
Floodlight plugin named RESTProxy allows Neutron API to interact with a
Floodlight SDN controller through its REST NB API.
An architectural framework of SDN-based NFVIaaS is shown in Figure
3.29. In this framework, the NFVI comprises multiple autonomous infrastruc-
ture network domains, each of which is controlled by an SDN controller that
supports a REST NB interface. Therefore, the networking capabilities of each
infrastructure network domain can be exposed as network infrastructure ser-
vices (NISs) through the REST NB interface on the SDN controller of this
domain. The service orchestration function provided by NFV MANO can or-
chestrate the NISs to form end-to-end network services across multiple do-
mains. Similarly, the compute and storage resources in NFVI can be abstracted

Figure 3.29  An architectural framework for SDN-based NFVIaaS.

www.EngineeringBooksPdf.com
156 Virtualized Software-Defined Networks and Services

as compute infrastructure services (CISs), for instance, using OpenStack com-


ponents such as Nova.
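To illustrate the role of the REST NB interfaces in this framework, the sketch below shows how an orchestrator might compose an end-to-end service by requesting a connectivity segment from each domain controller in turn. The controller endpoints, URL path, and request fields are hypothetical; only the orchestration pattern is the point.

```python
# Sketch: composing NISs across domains by calling each domain controller's
# (hypothetical) REST NB API in sequence. Endpoints and payload fields are
# placeholders used only to illustrate the orchestration pattern.
import requests

def provision_end_to_end(segments_spec, bandwidth_mbps):
    """segments_spec: list of (controller_url, ingress, egress) tuples, one per domain."""
    segment_ids = []
    for controller, ingress, egress in segments_spec:
        resp = requests.post(f"{controller}/nis/v1/connectivity",
                             json={"ingress": ingress, "egress": egress,
                                   "bandwidth_mbps": bandwidth_mbps})
        resp.raise_for_status()
        segment_ids.append(resp.json()["segment_id"])
    return segment_ids   # per-domain segments stitched into one end-to-end service

# Example (hypothetical controllers and endpoints):
# provision_end_to_end([("http://ctrl-a.example.net:8181", "ep-1", "gw-ab"),
#                       ("http://ctrl-b.example.net:8181", "gw-ab", "ep-2")], 500)
```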

3.8.5  Virtualization-Based Unification of Network and Cloud Services


Virtualization serves as the key technology for both cloud computing and future
networking, and therefore is expected to bridge the gap between these two fields
to enable a trend toward unification of cloud and network services. The SOA
forms the basis for the service models of both cloud computing and network
virtualization, thus offering a promising approach to unifying the provisioning
of network and cloud services. The NaaS paradigm, including NFVIaaS, VN-
FaaS, and VNPaaS models, allows virtualized network functions to be offered
as SOA-compliant network services that can be composed with cloud services
through service orchestration.
Figure 3.30 depicts an architectural framework for network-cloud service
unification via NaaS-based network virtualization. In this framework, resources
and functions provided by network and compute infrastructures are virtualized
and exposed as network infrastructure services (NISs) and cloud infrastruc-
ture services (CISs), respectively. Therefore, this framework provides a uniform
mechanism for virtualizing and orchestrating networking and computing re-
sources for service provisioning. Service-oriented virtualization enables a ho-
listic view of both networking and computing resources that allows federated
management, control, and optimization of resources across the networking and computing domains. Unification of network and cloud services may significantly enhance service flexibility, improve service performance, and expand the spectrum of services that can be offered by future networks [35].

Figure 3.30  NaaS-based virtualization for unification of network and cloud services [35].
Network-cloud service unification has attracted attention from the com-
munities of academic research and industry development. In the rest of this
section, we briefly discuss two representative research projects related to this
direction.
3.8.5.1  Unifying Cloud and Carrier Network
In recent years, telecom providers started employing NFV to enhance flexibil-
ity and reduce costs for network service provisioning. Many telecom providers
have also built their own data center facilities that comprise virtualized comput-
ing and storage resources for offering cloud services. Although networking and
cloud computing have been two active fields of research, there still lacks inte-
gration between the vast networking infrastructures and data center facilities
of telecom providers. Unifying the cloud and carrier networks to form a telco
cloud in which network and cloud services can be created and delivered upon a
uniform platform may bring in significant benefits to telecom service providers.
Motivated by this telco cloud vision, the EU FP7 integrated project unify-
ing cloud and carrier network (UNIFY) pursues full network and service vir-
tualization to enable rich and flexible services and operational efficiency. In
this project, the researchers aim at developing and evaluating technologies for
creating, orchestrating, and delivering composite network-cloud services in a
telco cloud environment, which spans from home/enterprise networks through
aggregation and core networks up to data centers. In order to achieve this ob-
jective, the UNIFY architecture combines abstraction of compute, storage, and
network resources to realize logically centralized, automated, and recursive re-
source orchestration across networking and computing domains with heteroge-
neous implementation technologies [43].
More specifically, the UNIFY project identifies the following aspects as
main research topics [44]:

• Develop an automated, dynamic service creation platform leveraging SDN technologies;
• Create a service abstraction model and a service creation language en-
abling dynamic and automatic deployment of networking and comput-
ing function components across the unified infrastructure;
• Develop an orchestrator with novel optimization mechanisms to ensure
optimal placement of elementary service components across the unified
infrastructure;
• Research new management technologies and operation schemes for supporting dynamic and agile service development and provisioning;
• Evaluate the applicability of a universal network node based on com-
modity hardware to support both network functions and cloud comput-
ing functions.

Unified virtualization and programmability on both network and cloud resources plays a crucial role in realizing the vision of UNIFY. Therefore, a key
research objective of this project is to derive a generic programming and opti-
mization framework that may support a variety of services and service chains
with integrated management of networking and computing resources. Research
results obtained from the UNIFY project indicate that such a platform requires
network and service virtualization as well as interfaces for resource abstraction.
Network virtualization can be achieved through the orchestrator and SDN net-
work controllers providing service-oriented northbound interfaces.
A three-layered architecture as shown in Figure 3.31 has been developed
in UNIFY. The architecture comprises the infrastructure layer, orchestration
layer, and service layer. This architecture also includes a management compo-
nent and a network function system.

Figure 3.31  UNIFY Architecture [44].


The infrastructure layer (IL) consists of all the resources in compute and
network infrastructures and the control functions in the infrastructure domains.
Each infrastructure domain has a logically centralized domain controller, which
virtualizes and controls the infrastructure resources in the domain through local
agents. Local resource agents operate physical resources by following instruc-
tions from the corresponding controllers. For example, an OpenFlow control-
ler controls network infrastructure resources through the OpenFlow agents in
OpenFlow switches; OpenStack Nova component controls compute infrastruc-
ture resources through Nova compute interface. By exploiting suitable virtual-
ization technologies, the IL supports creation of virtual instances of network
and compute functions out of physical resources.
The orchestration layer (OL) comprises two major components: the re-
source orchestrator (RO) and the controller adaptor (CA). The CA provides
domain-wide resource abstraction and virtualization for the various types of
resources in different infrastructure domains. The CA works with the domain
controllers in the IL to maintain a global view of resources and capabilities for
each domain. The CA presents the abstract view of each domain to the RO.
The RO harmonizes the virtualized domain resources to form a global virtual
resource view of the entire infrastructure layer. RO also presents this global view
of virtual resources to the service layer.
The service layer (SL) performs service life cycle management and or-
chestration over virtual resources for supporting unified network-cloud servic-
es. The SL is responsible for defining and managing the logics of consumable
services, establishing programmable interfaces to service users, and providing
interfaces to OSS/BSS systems for service management. This layer also creates
service abstraction as needed toward different users and realizes necessary adap-
tation according to the abstraction.
The management component in the UNIFY architecture comprises
management functions for both physical infrastructure resources and virtual
network/compute functions. This component includes network function in-
formation base (NF-IB) used by the RO for network resource orchestration.
The network function system in the UNIFY architecture comprises records of
instantiated network functions in the system, including data and control/man-
agement plane components and the corresponding forwarding overlays.
The UNIFY architecture creates a unified platform for rapid and flex-
ible service creation and delivery through joint virtualization and orchestra-
tion of networks and compute resources. It is worth noting that although the
design shares some similarity with ETSI NFV architecture, it takes a different
architectural approach by generalizing the SDN principles in a network virtu-
alization environment. This approach attempts to enable multilevel recursive
resource control of network functions with split data and control plane func-
tions, thus bringing in the benefits of SDN technologies to address control and
management challenges in network virtualization [44].
More information about the UNIFY project can be found at https://www.fp7-unify.eu.

3.8.5.2  Central Office Rearchitected as Data Center


Network operators are exploring new technologies that allow them to benefit
from the cloud service model (e.g., constructing shared infrastructure using
commodity servers to benefit from the economics of scale and applying virtual-
ization and orchestration to support agile service provisioning). Such economic
and agility features are especially needed at network edge in telco central offices,
where a variety of proprietary network devices are assembled over time with
little coherence. Motivated by this demand, Open Networking Lab (ONL) and
AT&T have started a collaborative project Central Office Rearchitected as Data
Center (CORD) to develop a new architecture for telco central office.
The goal of CORD is not only to replace the purpose-built network ap-
pliances with more agile software-based implementations, but also to make the
central office as an integral part of a telco cloud strategy for offering more
valuable unified network-cloud services to end users [45]. Therefore, CORD
architecture must be general and flexible enough to support a wide variety of
services. Three use cases identified by the CORD project are NFV implementa-
tions in residential, mobile, and enterprise networks.
The basic approach taken by the CORD project to achieve its design
objectives is to integrate SDN, NFV, and cloud computing technologies. As
shown in Figure 3.32, the high-level system architecture developed in the
CORD project comprises an SDN-based fabric network as the core component
that connects a set of servers and the ONOS controller cluster [46].
The fabric network is the core of the CORD architecture that intercon-
nects all other components in the architecture. In general, the fabric in data
center designs is an interconnection network providing nonblocking connectiv-
ity among servers. The fabric is typically divided into two levels of hierarchy—
leaves connecting to individual servers and spines that connect the leaves. The
term top of rack (ToR) switch is commonly used instead of the term leaf, but
note that ToR refers to a specific implementation (a switch that is located at the
top of a rack of servers), whereas the term leaf is an architecture term.
The fabric in CORD architecture is a pure SDN-based network built
with bare metal hardware and open source switch software. The fabric imple-
ments layer 2 switching at leaf level ToR switches and provides layer 3 rout-
ing across racks using ECMP hashing and MPLS segment routing. The fabric
network itself has an overlay/underlay structure. The overlay network is also
SDN-based and uses software switches (e.g., open vSwitch with DPDK) with
a custom-designed pipeline for service chaining. The common SDN control over both the overlay infrastructure and the underlay fabric allows them to be orchestrated together to deliver the features and services required in a central office.

Figure 3.32  CORD system architecture [46].
The fabric in CORD is organized in a leaf-spine topology to optimize for
traffic flowing east to west; namely, traffic between the access network connect-
ing users to a central office and the upstream links connecting the central office
to the operator’s backbone network. This design assures that high I/O rates can
be supported. In addition, by connecting I/O servers directly to the fabric, the
CORD architecture does not rely on routers to connect to the data center. In
CORD, routing functions are realized as VNF instances hosted on data center
servers.
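As a concrete illustration of the hashing step, the short Python sketch below picks a spine uplink for a flow from its 5-tuple, so that packets of one flow always follow the same path while different flows are spread across the equal-cost uplinks. The switch names and addresses are hypothetical, and CORD's fabric performs this hashing in switch hardware rather than in software.

import hashlib

def ecmp_next_hop(five_tuple, uplinks):
    # Hash the flow's 5-tuple and map it onto one of the equal-cost spine
    # uplinks; the same flow always selects the same uplink.
    key = "|".join(str(field) for field in five_tuple).encode()
    digest = hashlib.sha256(key).digest()
    return uplinks[int.from_bytes(digest[:4], "big") % len(uplinks)]

# Hypothetical leaf switch with two spine uplinks.
spine_uplinks = ["spine-1", "spine-2"]
flow = ("10.6.1.5", "10.6.2.9", 6, 49152, 80)  # src IP, dst IP, proto, sport, dport
print(ecmp_next_hop(flow, spine_uplinks))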
The CORD software architecture is depicted in Figure 3.33. As sum-
marized in [45], the reference implementation of CORD software architecture
leverages technologies from the following open projects:

• OpenStack: the management suite that provides the core IaaS capability
in a data center and is responsible for creating and provisioning VMs
and VNs;
• ONOS: an SDN network operating system that hosts a collection of
control applications for implementing services and embedding VNs in
CORD fabric network;
• XOS: a framework for service orchestration that unifies the infrastructure services provided by OpenStack, control services provided by ONOS, and data plane or cloud services running in OpenStack-provided VMs;
• Docker: provides a container-based means to deploy and interconnect services. The software of OpenStack, ONOS, and XOS may be instantiated in Docker containers.

Figure 3.33  CORD software architecture.

Containers offer another approach to realizing server virtualization in addition to the “standard” hypervisor-VM virtualization mechanism. In CORD
software architecture, containers and VMs managed by OpenStack can coexist
and collaborate in the following two ways:

• Side by side: some servers are managed as containers and the others as
VMs with OpenStack. Virtual functions implemented based on con-
tainers and VMs can be orchestrated to offer network services.
• Docker on top of OpenStack: since containers are nothing but a way to fragment Linux processes into well-isolated “VM-like” instances, it is possible to use OpenStack to create a base Linux instance in a VM and then use Docker to overlay containers on top of it, as sketched after this list.
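A minimal sketch of the Docker-on-top-of-OpenStack pattern is given below, using the openstacksdk and the Docker SDK for Python. The cloud profile, image, flavor, and network IDs, the SSH address, and the container image are placeholder assumptions, and error handling is omitted; the sketch only illustrates the two-step workflow, not a production deployment.

import openstack   # openstacksdk
import docker      # Docker SDK for Python

# Step 1: use OpenStack to boot a base Linux VM (all IDs are placeholders).
conn = openstack.connect(cloud="my-cloud")
vm = conn.compute.create_server(
    name="container-host",
    image_id="UBUNTU_IMAGE_ID",
    flavor_id="FLAVOR_ID",
    networks=[{"uuid": "MGMT_NET_ID"}],
)
vm = conn.compute.wait_for_server(vm)

# Step 2: point a Docker client at the VM and overlay containers on top of it
# (assumes the VM is reachable over SSH and runs the Docker engine; the ONOS
# image name is only an example).
client = docker.DockerClient(base_url="ssh://ubuntu@203.0.113.10")
vnf = client.containers.run("onosproject/onos", detach=True, name="onos-vnf")
print(vnf.name, vnf.status)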

Not all VNFs can run in containers, because any change that an NFV implementation makes to the kernel (e.g., for performance improvements or simply because the original code uses a nonstandard Linux distro) cannot be accommodated in a container. Therefore, it makes sense to have both options available. However, in
some use cases with very high scale (e.g., a virtual CPE), container-based virtu-
alization has advantages due to its lightweight implementation.

More information about CORD can be found at http://opencord.org.

3.9  Conclusion
In this chapter, we discussed the application of virtualization in networking to address some of the fundamental challenges that current networking technologies face in meeting the requirements of future network services.
We first introduced the vision of network virtualization, which adopts
virtualization as a key architectural attribute in network designs. Network vir-
tualization allows multiple independent virtual networks to share a common
infrastructure substrate and alternative network architectures to be deployed
in individual virtual networks. Network virtualization splits the role of tradi-
tional Internet service providers into multiple independent entities, including
infrastructure providers, virtual network providers, virtual network operators,
and service providers. By decoupling network functions for service provisioning
from the infrastructures for data transportation and processing, network virtu-
alization enables independent evolution of service functions and infrastructure
technologies. A key to realizing network virtualization lies in creating virtual
networks for meeting service requirements while achieving various optimiza-
tion objectives. In this chapter, we presented the general architecture of net-
work virtualization, described the main functional roles in the architecture, and
discussed the benefits brought in by the architecture. We also reviewed repre-
sentative technologies for creating virtual networks, mainly in two areas—dis-
covering available infrastructure resources and embedding virtual networks into
physical network infrastructures.
The second part of this chapter covers network function virtualization
(NFV). NFV leverages standard IT virtualization technologies to implement
network functions as software instances that are consolidated on industry stan-
dard servers and storages. Essentially, NFV embraces the general architectural
vision introduced by network virtualization and provides specific architecture
and related mechanisms for realizing this vision. In this chapter, we present-
ed the NFV architectural framework proposed by ETSI and discussed some
basic principles for virtualizing network functions. Then we described the
key components in the NFV architecture, including the NFV infrastructure,
VNF software architecture, and the management and orchestration compo-
nent. Achieving high-performance VNFs on standard server platforms is a
key to realizing NFV. Therefore, we particularly discussed some representative
technologies for implementing high-performance virtual functions, including
DPDK, NetVM, ClickOS, and OpenNFV, in this chapter.
The last part of this chapter is focused on virtualization-based network
service provisioning. We introduced the service-oriented architecture (SOA) that has been widely adopted in web and cloud service models and discussed its
applications in networking. The SOA may facilitate realization of virtualization
in networking through the network-as-a-service (NaaS) paradigm. NaaS has
been employed in the NFV environment in the form of NFVIaaS, VNFaaS,
and VNPaaS. The centralized and programmable control enabled by SDN pro-
vides an effective platform for supporting NaaS-based virtualization. Virtualiza-
tion, SOA, and SDN together offer a promising approach to unifying network
and cloud services, which is expected to have a significant impact on future
service provisioning. We briefly reviewed two research projects, UNIFY and
CORD, to reflect the state of the art on research toward network-cloud service
unification.

References
[1] Xen, http://www.xenproject.org. Last accessed in July 2016.
[2] Kernel Virtual Machine, http://www.linux-kvm.org. Last accessed in July 2016.
[3] VMware, http://www.vmware.com. Last accessed in July 2016.
[4] Turner, J., and D. E. Taylor, “Diversifying the Internet,” Proceedings of the 2005 IEEE Global Telecommunications Conference (GLOBECOM’05), Dec. 2005.
[5] Anderson, T., L. Peterson, S. Shenker, and J. Turner, “Overcoming the Internet Impasse Through Virtualization,” IEEE Computer Magazine, Vol. 38, No. 4, April 2005, pp.
34–41.
[6] Feamster, N., L. Gao, and J. Rexford, “How to Lease the Internet in Your Spare Time,”
ACM SIGCOM Computer Communication Review, Vol. 37, No. 1, January 2007, pp.
61–64.
[7] GENI Planning Group, “GENI Design Principles,” IEEE Computer Magazine, Vol. 39,
No. 9, Sept. 2007, pp. 102–105.
[8] Szegedi, P., J. F. Riera, J. A. Garcia-Espin, M. Hidell, P. Sjodin, et al., “Enabling Future
Internet Research: the FEDERICA Case,” IEEE Communications Magazine, Vol. 49, No.
7, July 2011, pp. 54–61.
[9] Chowdhury, N. M. M. K., and R. Boutaba, “Network Virtualization: State of the Art and
Research Challenges,” IEEE Communications Magazine, Vol. 47, No. 7, July 2009, pp.
20–26.
[10] Baucke, S., and C. Gorg, “Virtualization Approach: Concept,” 4WARD Project
Deliverable 3.1.1, September 2009.
[11] Belbekkouche, A., M. Hassan, and A. Karmouch, “Resource Discovery and Allocation in
Network Virtualization,” IEEE Communications Surveys & Tutorials, Vol. 14, No. 4, 2015,
pp. 1114–1125.

[12] Ham, J., P. Grosso, R. Pol, A. Toonk, and C. Laat, “Using the Network Description
Language in Optical Networks,” Proceedings of the 10th IFIP/IEEE International Symposium
on Integrated Network Management, May 2007, pp. 199–205.
[13] Campi, A., and F. Callegati, “Network Resource Description Language,” Proceedings of the
2009 IEEE Global Communication Conference (GLOBECOM’09), Dec. 2009.
[14] Abosi, C. E., R. Nejabati, and D. Simeonidou, “A Novel Service Composition Mechanism
for the Future Optical Internet,” Journal of Optical Communications and Networking, Vol.
1, No. 2, 2009, pp. A106–A120.
[15] Duan, Q., “Network Service Description and Discovery for High-Performance Ubiquitous
and Pervasive Grids,” ACM Transactions on Autonomous and Adaptive Systems, Vol. 6, No.
1, 2011.
[16] Koslovski, G. P., P. V.-B. Primet, and A. S. Charao, “VXDL: Virtual Resources and
Interconnection Networks Description Language,” Proceedings of the 2nd International
Conference on Networks for Grid Applications, Oct. 2008.
[17] Houidi, I., W. Louati, D. Zeghlache, and S. Baucke, “Virtual Resource Description and
Clustering for Virtual Network Discovery,” Proceedings of the 2009 IEEE International
Conference on Communications (ICC09), June 2009.
[18] Fischer, A., J. F. Botero, M. T. Beck, H. de Meer, and X. Hesselbach, “Virtual Network
Embedding: A Survey,” IEEE Communications Surveys & Tutorials, Vol. 15, No. 4, 2013,
pp. 1888–1906.
[19] Yu, M., Y. Yi, J. Rexford, and M. Chiang, “Rethinking Virtual Network Embedding:
Substrate Support for Path Splitting and Migration,” ACM SIGCOM Computer
Communication Review, Vol. 38, No. 2, April 2008, pp. 17–29.
[20] Hu, Q., Y. Wang, and X. Cao, “Virtual Network Embedding: An Optimal Decomposition
Approach,” Proceedings of the 23rd IEEE International Conference on Computer
Communications and Networks (ICCCN2014), August 2014.
[21] Chowdhury, N. K., M. R. Rahman, and R. Boutaba, “Virtual Network Embedding with
Coordinated Node and Link Mapping,” Proceedings of 2009 IEEE International Conference
on Computer Communications (INFOCOM2009), April 2009, pp. 783–791.
[22] Wang, Y., Q. Hu, and X. Cao, “A Branch-and-Price Framework for Optimal Virtual
Network Embedding,” Elsevier Journal of Computer Networks, Vol. 94, No. 1, Jan. 2016,
pp. 318–326.
[23] Hu, Q., Y. Wang, and X. Cao, “Survivable Network Virtualization for Single Facility
Node Failure: A Network Flow Perspective,” Elsevier Journal of Optical Switching and
Networking, Vol. 10, No. 4, April 2013, pp. 406–415.
[24] ETSI NFV-ISG, “Network Functions Virtualization: An Introduction, Benefits, Enablers,
Challenges and Call for Action,” Proceedings of SDN and OpenFlow World Congress, Oct.
2012.
[25] ETSI NFV-ISG, “NFV 002: Network Function Virtualization (NFV)—Architectural
Framework v1.2.1,” December 2014.

[26] ETSI NFV-ISG, “NFV-INF 001: Network Function Virtualization (NFV)—Infrastructure Overview v1.1.1,” January 2015.
[27] ETSI NFV-ISG, “NFV-SWA 001: Network Function Virtualization—Virtual Network
Functions Architecture v1.1.1,” December 2014.
[28] ETSI NFV-ISG, “NFV-MAN 001: Network Function Virtualization—Management and
Orchestration v1.1.1,” December 2014.
[29] Hwang, J., K. K. Ramakrishnan, and T. Wood, “NetVM: High Performance and Flexible
Networking Using Virtualization and Commodity Platforms,” IEEE Transactions on
Network and Service Management, Vol. 12, No. 1, March 2015, pp. 34–47.
[30] Martins, J., M. Ahmed, C. Raiciu, V. Olteanu, N. Honda, et al., “ClickOS and the Art
of Network Function Virtualization,” Proceedings of the 11th USENIX Symposium on
Networked Systems Design and Implementations, April 2014, pp. 459–472.
[31] Morris, R., E. Kohler, J. Jannotti and M. Frans Kaashoek, “The Click Modular Router,”
ACM Transactions on Computer Systems, Vol. 18, No. 3, August 2000, pp. 263–297.
[32] ETSI NFV-ISG, “Network Function Virtualization: NFV Performance and Portability
Best Practice v1.1.2,” December 2014.
[33] ETSI NFV-ISG, “Network Function Virtualization: Report on Models and Features for
End-to-End Reliability v. 1.1.1,” April 2016.
[34] Erl, T., Service-Oriented Architecture: Concepts, Technology, and Design, Upper Saddle, NJ:
Prentice Hall, 2005.
[35] Duan, Q., Y. Yan, and A. V. Vasilakos, “A Survey on Service-Oriented Network
Virtualization toward Convergence of Networking and Cloud Computing,” IEEE
Transactions on Network and Service Management, Vol. 9, No. 4, Dec. 2012, pp. 372–392.
[36] Magedanz, T., N. Blum, and S. Dutkowski, “Evolution of SOA Concepts in
Telecommunications,” IEEE Computer Magazine, Vol. 39, No. 11, Nov. 2007, pp. 46–50.
[37] ETSI NFV-ISG, “Network Function Virtualization: Use Cases v1.1.1,” October 2013.
[38] Li, L., W. Chou, W. Zhou, and M. Luo, “Design Patterns and Extensibility for REST API
for Networking Applications,” IEEE Transactions on Network and Service Management,
Vol. 13, No. 1, January 2016, pp. 154–167.
[39] Floodlight, http://www.projectfloodlight.org/floodlight. Last accessed in July 2016.
[40] Banikazemi, M., D. Olshfski, A. Shaikh, J. Tracey, and G. Wang, “Meridian: An SDN
Platform for Cloud Network Services,” IEEE Communications Magazine, Vol. 51, No. 2,
Feb. 2013, pp. 120–127.
[41] IETF, “RFC 7285: Application Layer Traffic Optimization (ALTO) Protocol,” September
2014.
[42] Rhoton, J., J. de Clercq, and F. Novak, OpenStack Cloud Computing, Recursive Press,
2014.
[43] Csaszar, A., W. John, M. Kind, C. Meirosu, G. Pongracz, et al., “Unifying Cloud and
Carrier Network,” Proceedings of the 2013 IEEE/ACM International Conference on Utility
and Cloud Computing (UCC2013), Dec. 2013.

[44] Szabo, R., B. Sonkoly, and M. Kind, “UNIFY: Unifying Cloud and Carrier Networks—
Deliverable 2.3 Final Architecture,” November 2014.
[45] Peterson, L., “CORD: Central Office Re-Architected as a Datacenter,” IEEE Software
Defined Networks, white paper, Nov. 2015.
[46] Das, S., A. Al-Shabibi, J. Hart, C. Chan, F. Castro, et al., “CORD Fabric, Overlay
Optimization, and Service Composition,” Open Networking Lab CORD design notes,
March 2016.

4
Integrating SDN and NFV in Future
Networks
Qiang Duan

4.1  Introduction
Software-defined networking (SDN) and network virtualization (NV) are two
major innovations in the field of networking. SDN separates control and man-
agement functionalities from data forwarding operations to enable a central-
ized and programmable network control platform. NV introduces a layer of
abstraction for the underlying infrastructure upon which virtual networks with
alternative architectures may be constructed to meet diverse service require-
ments. Embracing the vision of network virtualization, the network function
virtualization (NFV) architecture has been proposed to leverage virtualization
technologies to transfer network functions from hardware appliances to soft-
ware applications.
Although SDN and NFV were initially developed as two independent
networking paradigms, evolution of both technologies has shown strong syn-
ergy between them. SDN and NFV share some common goals and similar tech-
nical ideas and therefore complement each other. Integrating SDN and NFV in
future networking may trigger innovative network designs that fully exploit the
advantages of both paradigms. Therefore, the relationship between SDN and NFV and how these two paradigms may be combined in future networks have
become an important research topic that attracts extensive attention from both
industry and academia.
The past few years witnessed exciting progress in SDN technologies and
their applications in various networking scenarios. On the other hand, research-
ers have noticed some issues of the current SDN approach that may limit its
ability to fully support future network services. Currently SDN lacks the flex-
ibility to support multiple alternative network architectures that may be needed
for meeting diverse service requirements. For meeting such requirements, SDN
switches (e.g., OpenFlow-enabled switches) on the data plane must be pre-
pared to support fully general flow matching and packet forwarding actions,
which introduces significant cost and complexity in switches, thus compro-
mising the promise of simplified data plane devices made by SDN. In addi-
tion, service evolution keeps increasing the generality required in flow matching and packet forwarding operations, and this additional generality must be present on every switch in the current SDN design [1]. On the control plane, the current
SDN architecture lacks standard northbound APIs for network programming
and effective mechanisms for coordinating heterogeneous network controllers.
Lack of interoperability between SDN controllers prevents applications from
functioning seamlessly across multiple network domains for end-to-end service
provisioning.
A root cause of the inability of current SDN designs to achieve their full potential for service provisioning is the tight coupling between network architecture and infrastructure on both data and control planes [2]. Separa-
tion between data and control planes alone in the current SDN architecture is
not sufficient to overcome this obstacle. Another dimension of abstraction to
decouple service functions and network infrastructures is needed in order to
unlock SDN full potential. Therefore, applying the network virtualization no-
tion and the NFV architecture into SDN may further enhance SDN capability
of flexible service provisioning to meet the challenging requirements of future
networking and cloud computing.
On the other hand, many technical challenges must be addressed before
realizing the NFV paradigm. Much more sophisticated control and manage-
ment mechanisms for both virtual and physical network resources are required
by the highly dynamic networking environment enabled by NFV, in which pro-
grammatic network control is indispensable. Some of the networking challeng-
es that the NFV architecture is facing match the design goals of SDN. There-
fore, employing the SDN principle—decoupling control intelligence from the
controlled resources to enable a logically centralized programmable control/
management plane—in the NFV architecture may greatly facilitate realization
of NFV. Many desirable network features expected for an NFV environment can be built based on SDN capabilities (e.g., dynamic control of network in-
frastructures, automatic management of network functions and services, elastic
and fine-grained scalability adapted to service demands, seamless mobility of
network functions, and efficient multitenancy support).
The research and industry communities of both SDN and NFV share a
common vision about the synergistic nature between SDN and NFV. Although
the goals of NFV can be achieved using non-SDN mechanisms, separation of
network control and data forwarding functions in SDN may greatly facilitate
NFV development and enhance its performance. On the other hand, NFV is
able to support SDN by providing the infrastructure upon which the SDN
software can be running. When SDN controllers and applications are realized
as VNF instances, they can be composed with other VNFs into service chains
through orchestration, which allows SDN to benefit from the flexibility, elastic-
ity, and reliability brought in by NFV. SDN and NFV should be coordinated
to achieve their overall business objectives. It is expected that SDN and NFV
will become less distinguishable as independent topics and merge into a unified
software-defined and virtualization-based networking paradigm.
Although a vision of combining SDN and NFV has been widely accepted
in the networking community, researchers have different ideas of realizing this
vision. Integrating SDN and NFV into unified network architecture in order to
maximize the benefits of both paradigms is not straightforward due to the wide
variety of intertwined network elements involved and the complex interaction
among them. Currently, SDN and NFV are still being studied and standardized
without sufficient synergy. There is an urgent need for a holistic architectural
framework in which SDN and NFV principles may be naturally combined.
In this chapter, we review the recent technical developments toward in-
tegrating the principles of SDN and NFV in future networks for meeting the
challenging requirements of service provisioning. In Section 4.2, we discuss
research progress on virtualization of SDN control platform that enables mult-
itenant virtual software-defined networks. In Section 4.3, we review technolo-
gies that employ SDN-based network control in NFV infrastructure for provid-
ing connectivity services that support VNF orchestration and service chaining.
SDN-based network control and management have also been applied to virtual
network functions as well as physical infrastructure resources in the entire NFV
architecture. Some technologies for combining SDN with the NFV architecture
are discussed in Section 4.4. In Section 4.5, we examine the key principles of
SDN and NFV and show their relationship in a two-dimensional abstraction
model. In this section, we also present an architectural framework that provides
a holistic vision about integrating the SDN and NFV principles in unified net-
work architecture and then discuss some challenges and research topics toward
integration of SDN and NFV in future networks by following this framework.


4.2  Virtualization in Software-Defined Network


4.2.1  SDN Virtualization
As discussed in Chapter 3, the notion of virtualization has been widely ap-
plied in the area of computing and become a key enabler for modern data cen-
ters to provision cloud services. Virtualization introduces a layer of abstraction
for physical computational resources, such as CPU, memory, and I/O devices.
Upon such an abstraction layer, multiple independent virtual machines (VMs)
can be created, and each can run its own operating system to support various
applications for meeting different user requirements. Essentially, the abstrac-
tion of physical resources enabled by virtualization decouples operating systems
and application software from the underlying hardware, thus allowing software
applications to be developed and deployed without knowledge of specific hard-
ware implementation details. Such decoupling greatly enhances the ability of
computing systems to provision various services for meeting diverse user re-
quirements. Virtualization allows multiple VMs to share a common hardware
platform while providing each VM a slice of hardware resources. The VMs can
be dynamically deployed and easily migrate across hosting servers. Therefore,
virtualization also significantly improves the flexibility of computing systems
to offer elastic on-demand services and increases the utilization of hardware
resources in the systems.
Benefits of virtualization have been proved by years of engineering prac-
tice, especially through development of the cloud infrastructures. Virtualization
has also been propounded as a key architectural attribute of future networking
to address some of the fundamental challenges faced by the next generation
networks. Network virtualization enables abstraction of the network infrastruc-
tures with heterogeneous implementations, which allows various virtual net-
works (VNs) to be created upon a common infrastructure platform. As VMs
may run their own operating systems and different application programs, VNs
created in a network virtualization environment may have alternative network
architectures, including data forwarding mechanisms as well as network control
and management protocols, to provision different network services for meeting
diverse user requirements. All VNs share a common network substrate, while
each VN is allocated with its own share of infrastructure resources. VNs can
be dynamically deployed and migrated across physical network infrastructures.
Therefore, network virtualization may improve utilization of network resources
and offer a promising approach to enabling flexible network services with a
similar service model as cloud computing.
Software-defined networking was originally developed as a network
paradigm that is independent of network virtualization. The key principle of
software-defined networking lies in decoupling network control functionalities
from data forwarding operations through abstraction of the network resources.
Separation of control and data planes in SDN is not equivalent to the decou-
pling between virtual networks and physical infrastructures introduced by net-
work virtualization. Although each VN is realized with virtual resources, it is a
complete network that comprises functions of both data forwarding and con-
trol/management. The underlying network infrastructure also contains its own
control and management as well as data forwarding functionalities. Therefore,
in the SDN architecture the data plane comprises both virtual and physical re-
sources, and the control plane consists of functions for controlling both virtual
network functions and physical network infrastructures.
Although SDN brings in benefits such as simplified data plane devices,
flexible network control, efficient management, and improved network service
performance, the lack of explicit distinction between virtual service functions
and physical infrastructures in the SDN architecture causes unnecessary cou-
pling between network architecture and the underlying network infrastructure,
which hinders network designs from fully exploiting the potential of SDN for supporting
future network services. It is worth noting that some supposed benefits of SDN
(e.g., cost reduction through sharing physical resources or dynamic network
configuration in multitenant environments) actually come from network vir-
tualization. Although SDN facilitates network virtualization and thus makes
realization of such features easier, it is important to recognize that SDN alone
does not directly provide these benefits [3].
On the other hand, the SDN architecture does facilitate realizing the no-
tion of network virtualization. A centralized SDN controller provides a global
view of an entire network domain, thus offering a perfect platform upon which
a network virtualization layer may be realized. As a hypervisor plays a key role
in computing virtualization, the network virtualization layer acts as a “network
hypervisor” to provide an abstract view of physical network infrastructures,
manage lifecycles of VNs, map virtual network nodes and links to physical
resources, and translate the communications between virtual and physical net-
works. All these key functions of a network virtualization layer may be signifi-
cantly simplified by the centralized and programmable SDN control platform,
as compared to network virtualization implemented based on the distributed
control in traditional networks.
Although network virtualization and SDN are in principle independent
of each other, there exists symbiosis between these two networking paradigms.
SDN and network virtualization may be related and complement each other
in various ways. For example, SDN can be employed as an enabling technol-
ogy for network virtualization. Virtualization can be applied in SDN to realize
multitenant virtual SDNs. In addition, network virtualization provides an ap-
proach for evaluating and testing SDNs.
Virtualization of SDN allows network designs to leverage the combined
benefits of software-defined networking and network virtualization. A general architectural framework for network virtualization in SDN networks is shown
in Figure 4.1. A key component of the architecture is the virtualization lay-
er between physical network infrastructure and virtual tenant networks. The
main responsibilities of this layer include (a) providing abstraction of physical
network resources to decouple virtual networks from physical infrastructure;
(b) maintaining the mapping relationship between virtual network nodes and
links to physical network resources; (c) keeping isolation between individual
virtual networks for allowing each tenant network to independently control its
own share of network infrastructure without interference from other tenant
networks.
In an SDN virtualization environment, the physical network infrastruc-
ture is controlled by a logically centralized controller that is separated from data
plane devices. The virtualization layer may play the role of an SDN controller
to interact with all data plane devices through the southbound (SB) interface
of the SDN architecture (e.g., using OpenFlow or ForCES protocol). Similarly,
each virtual network has its own tenant controller that controls all nodes and
links in the virtual network. In this sense, each tenant network is a virtual
SDN (vSDN) with a centralized controller that can be programmed by ap-
plications via a northbound (NB) interface. From the perspective of a vSDN
controller, it is directly controlling the virtual network topology provided by
the virtualization layer. Therefore, the tenant controller of each vSDN may be
realized as a regular SDN controller, such as NOX or Floodlight, and interacts
with the virtualization layer using a standard SB interface. The virtualization
layer is essentially an extended SDN controller that translates the SB messages from vSDN tenant controllers to appropriate control messages for the corresponding network devices and vice versa. Therefore, the virtualization layer for an SDN network can be thought of as a network hypervisor sitting upon an SDN control platform for the network infrastructure.

Figure 4.1  General architecture for virtualization in SDN.
The virtualization layer provides an abstract view of the physical network
infrastructure, which typically includes three types of attributes: network topol-
ogy, node resources, and link resources. Different levels of abstraction may be
applied to physical network topology. The lowest level of abstraction represents
the physical nodes and links in an identical virtual topology (i.e., the virtual
topology is a transparent 1-to-1 mapping of the physical topology). The highest
level of abstraction represents the entire physical network as a single virtual node
with ingress and egress ports. Generally, there is a range of levels of abstrac-
tion between the lowest and highest levels. In addition, network virtualization
should allow independent addressing schemes used in virtual and physical net-
works; therefore, mapping between virtual and physical network addresses is
another important aspect of infrastructure abstraction provided by the network
hypervisor [4].
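As an illustration of the two extremes, the hypothetical Python helpers below build a tenant-facing view of a small physical domain either as a transparent 1-to-1 copy or as a single "big switch" that exposes only ingress and egress ports; real network hypervisors implement far richer abstractions between these extremes.

def identity_view(phys_nodes, phys_links):
    # Lowest abstraction level: expose the physical topology 1-to-1.
    return {"nodes": list(phys_nodes), "links": list(phys_links)}

def big_switch_view(edge_ports):
    # Highest abstraction level: the whole domain appears as one virtual
    # node, and only its ingress/egress ports remain visible to the tenant.
    return {"nodes": ["virtual-switch-1"], "links": [], "ports": list(edge_ports)}

nodes = ["s1", "s2", "s3"]
links = [("s1", "s2"), ("s2", "s3")]
print(identity_view(nodes, links))
print(big_switch_view(edge_ports=["s1:eth0", "s3:eth2"]))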
Isolation provided by the network hypervisor between vSDNs should be
applicable to both the control plane and the data plane. Control plane iso-
lation allows each vSDN controller to have the impression of controlling its
own network without interference from other vSDN controllers. Data plane
isolation requires allocation of a sufficient amount of data plane resources, in-
cluding switch capacities and link bandwidth, to each vSDN for meeting their
service requirements. Switch resources include capacities of packet forwarding
and processing. Flow-based SDN switches typically use TCAM for storing flow
tables and matching rules; therefore, specific amounts of TCAM capacity at
each switch should be assigned to vSDNs to provide proper isolation between
them. The network hypervisor should also assign a specific amount of physical
link bandwidth to each vSDN. As SDN switches follow a match-action pattern,
the rules for flows from different vSDNs must be uniquely identified in order
to guarantee that forwarding decisions of multitenant vSDNs do not conflict
with each other. The hypervisor should be able to associate a specific set of traf-
fic flows to virtual networks so that one set of traffic can be clearly isolated from
another [4].
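The bookkeeping behind such isolation can be sketched as follows; the class, the quota values, and the enforcement policy are purely illustrative and are not taken from any particular hypervisor implementation.

class SliceQuota:
    # Per-vSDN share of data plane resources (illustrative only).
    def __init__(self, max_flow_entries, link_bw_mbps):
        self.max_flow_entries = max_flow_entries  # TCAM entries per switch
        self.link_bw_mbps = link_bw_mbps          # bandwidth share per link
        self.used_entries = {}                    # switch_id -> entries in use

    def can_install_rule(self, switch_id):
        return self.used_entries.get(switch_id, 0) < self.max_flow_entries

    def install_rule(self, switch_id):
        if not self.can_install_rule(switch_id):
            raise RuntimeError("flow table quota exceeded for this vSDN")
        self.used_entries[switch_id] = self.used_entries.get(switch_id, 0) + 1

quota = SliceQuota(max_flow_entries=1000, link_bw_mbps=200)
quota.install_rule("switch-7")
print(quota.can_install_rule("switch-7"))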
The controller of each vSDN may run on a dedicated host server that is
typically located in the tenant network operation center. In principle, it can be
deployed using any currently available SDN controller implementation, such
as OpenDaylight, Floodlight, and ONOS. However, this type of static deploy-
ment of tenant controllers may limit the full benefits brought in by network vir-
tualization in SDN. Whenever a new vSDN is created, a tenant controller for
the vSDN is required to be installed on a server, and the connectivity between
the network hypervisor and the server must be configured. A more desirable deployment approach is to virtualize the tenant SDN control functions as
VNFs by leveraging the NFV architecture and to host the controller VNFs in a
cloud data center. Such NFV-based deployment enables independent controller
VNFs to be dynamically instantiated whenever a new vSDN is created and thus
can easily scale up or down the controller capacity for adaptively meeting the
service demand of virtual networks. This approach also allows tenant controller
migration that may enhance network flexibility and service performance. On
the other hand, such NFV-based controller deployment requires SDN control-
lers to be implemented as software images on a virtualization platform, which
brings in new challenges to SDN controller development.
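The sketch below illustrates such dynamic instantiation with the Docker SDK for Python: one containerized controller instance is launched per new vSDN and torn down when the vSDN is deleted. The controller image name and port mapping are assumptions made for illustration, not a prescribed deployment.

import docker

client = docker.from_env()

def instantiate_tenant_controller(vsdn_id):
    # Spin up an SDN controller as a containerized VNF for a new vSDN;
    # "sdn-controller:latest" is a placeholder image name.
    return client.containers.run(
        "sdn-controller:latest",
        name=f"vsdn-{vsdn_id}-controller",
        detach=True,
        ports={"6653/tcp": None},  # OpenFlow SB port; host port picked by Docker
    )

def teardown_tenant_controller(container):
    container.stop()
    container.remove()

controller = instantiate_tenant_controller("tenant-42")
print(controller.name)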

4.2.2  Hypervisor-Based Virtualization for SDN


In order to implement the virtualization layer in SDN, researchers have taken
similar technical approaches as for realizing virtualization in computing sys-
tems. As discussed in Chapter 3, hypervisors play a key role in server virtual-
ization; therefore, the hypervisor-based approach has also been taken to realize
network virtualization in SDN. As shown in Figure 4.2, a hypervisor is between
hardware resources and multitenant operating systems. The instruction set pro-
vides an abstraction of the underlying hardware, through which a hypervisor
controls hardware resources. Similarly, in hypervisor-based SDN virtualization,
the SDN control platform provides abstraction of data plane resources through
which a network hypervisor can control network infrastructure resources to cre-
ate multitenant virtual networks.
Figure 4.2  Hypervisor-based virtualization for SDN.

FlowVisor is a seminal network hypervisor developed based on OpenFlow for virtualizing SDN networks. OpenFlow provides an abstract view of the physical network data plane that allows FlowVisor to partition the network
infrastructure into “slices.” Each slice of infrastructure is assigned to a virtual
network and is controlled by a tenant OpenFlow controller to support a set of
flows. FlowVisor uses the OpenFlow protocol to communicate with both tenant controllers and data plane switches, thus allowing any OpenFlow-based SDN controller to be used in a vSDN without modification [5]. The architecture of the FlowVisor system for SDN virtualization is depicted in Figure 4.3.
FlowVisor acts as a transparent proxy between OpenFlow-enabled switch-
es in the data plane and the tenant OpenFlow controllers of vSDNs. All Open-
Flow messages, both from switches to tenant controllers and vice versa, are sent
through FlowVisor, which makes sure that a tenant controller can only commu-
nicate with an OpenFlow-switch that is allocated to the corresponding vSDN.
For a message generated from a tenant controller, FlowVisor ensures that the
message acts only on switches assigned to the vSDN controlled by the control-
ler. For a message from a switch, FlowVisor examines the message content to
determine which tenant controller(s) the message should be forwarded to and
assures that each tenant controller only receives messages relevant to their own
slices of network infrastructure [5].
Figure 4.3  FlowVisor architecture for SDN virtualization [5].

FlowVisor introduces the term flowspace as a basis for achieving isolation between vSDNs. The set of flows forwarded by a vSDN can be thought of as constituting a predefined subspace of the entire space of possible packet headers, which is called the flowspace in FlowVisor. FlowVisor allocates each vSDN
its own flowspace and ensures that the flowspaces of distinct vSDNs do not
overlap. Given a packet header, FlowVisor can decide which flowspace contains
the packet and therefore which vSDN it belongs to. FlowVisor limits the ten-
ant controllers of vSDNs to operate only on their own flowspaces in order to
prevent interference between tenant controllers.
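The flowspace idea can be illustrated with a small Python sketch in which each vSDN's flowspace is a set of per-field constraints and a packet header is matched against them. The field names and values are illustrative; FlowVisor's actual matching logic and supported header fields differ.

# Each vSDN owns a region of the packet header space, expressed here as
# per-field constraints (None means wildcard); flowspaces are assumed disjoint.
flowspaces = {
    "vsdn-A": {"vlan_id": 100, "ip_proto": None, "tcp_dst": None},
    "vsdn-B": {"vlan_id": 200, "ip_proto": 6, "tcp_dst": 80},
}

def owning_vsdn(header):
    # Return the vSDN whose flowspace contains this packet header, if any.
    for vsdn, space in flowspaces.items():
        if all(value is None or header.get(field) == value
               for field, value in space.items()):
            return vsdn
    return None

packet = {"vlan_id": 200, "ip_proto": 6, "tcp_dst": 80}
print(owning_vsdn(packet))  # -> vsdn-B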
For achieving topology isolation between vSDNs, FlowVisor examines
and edits OpenFlow messages to only report states of the physical network re-
sources, including switches, ports, and network links, that are part of a vSDN
to the respective tenant controller. To enforce bandwidth isolation, FlowVisor
maps the packets of a given virtual network to a prescribed VLAN priority
code point (PCP). The 3-bit VLAN PCP allows for mapping all traffic to eight
distinct classes with different priority levels. In order to provide isolation be-
tween flow table entries of multitenant vSDNs, FlowVisor keeps track of the
number of flow entries inserted by a tenant controller to each switch. If a tenant
controller exceeds a prescribed limit of flow entries at a switch, then FlowVi-
sor replies with a message indicating that the flow table of the switch is full. In
order to isolate the SB interfaces of individual vSDNs, FlowVisor rewrites the
OpenFlow transaction identifiers to ensure that different vSDNs utilize distinct
transaction identifiers. Similarly, controller buffer access and status messages are
modified by FlowVisor to create isolated OpenFlow control slices [4].
FlowVisor is the first network hypervisor reported in the literature for
virtualizing the SDN control platform. Although it successfully demonstrated realization of multitenant virtual networks based on the SDN architecture, FlowVisor
has some limitations that need to be addressed in order to fully realize the virtu-
alization notion in SDN. Since the work on FlowVisor was published in 2009,
researchers who are inspired by the idea of SDN-based network hypervisor have
made progress in extending FlowVisor in various aspects.
The mechanisms employed by FlowVisor for providing bandwidth isola-
tion and switch capacity allocation were not inherent to network hypervisor
design, but rather short-term solutions to deal with the existing hardware ab-
straction. More advanced bandwidth allocation and scheduling mechanisms
have been developed to enhance bandwidth and CPU capacity isolation in
SDN virtualization. For example, an enhanced FlowVisor [6] is implemented
as an extension to the NOX SDN controller to use VLAN PCP for achieving
flow-based bandwidth guarantees. Admission control is also employed by the
enhanced FlowVisor to further protect the resources allocated to vSDNs. Upon
receiving a request for creating a new virtual network, the admission control
function checks if sufficient node and link capacities are available in the physi-
cal network infrastructure to support the new virtual network. In case that the
residual resources are not sufficient, the request for creating a new virtual net-
work will be rejected.
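A minimal sketch of such an admission control check is given below; the capacity model (per-node CPU and per-link bandwidth) and the numbers are illustrative assumptions rather than the enhanced FlowVisor's actual algorithm.

def admit_virtual_network(request, residual_node_cpu, residual_link_bw):
    # Accept a virtual network request only if every requested node and
    # link demand fits into the residual physical capacity.
    for node, cpu_demand in request["nodes"].items():
        if residual_node_cpu.get(node, 0) < cpu_demand:
            return False
    for link, bw_demand in request["links"].items():
        if residual_link_bw.get(link, 0) < bw_demand:
            return False
    return True

request = {"nodes": {"s1": 10, "s2": 5}, "links": {("s1", "s2"): 100}}
print(admit_virtual_network(request,
                            residual_node_cpu={"s1": 40, "s2": 3},
                            residual_link_bw={("s1", "s2"): 500}))  # False: s2 lacks CPU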

FlowVisor focuses on slicing physical resources for vSDNs and isolating tenant controllers from each other. However, FlowVisor lacks sufficient support
for abstraction of the underlying network infrastructures. FlowVisor presents
1-to-1 mapped subsets of physical topology to tenant controllers. The slicing of
the underlying topology provided by FlowVisor makes the mapping between
physical and virtual topologies visible to tenant controllers and therefore can-
not provide a high-level abstract view of the physical topology. Abstraction of
network infrastructure is a key aspect of network virtualization in order to en-
able decoupling between virtual network functions and physical infrastructure
resources. Extensions to FlowVisor for addressing this limitation have been
reported in the literature. For example, Advanced FlowVisor (AdVisor) [7]
enhances FlowVisor by introducing an improved abstraction mechanism that
hides physical switches in virtual topologies. AdVisor can show only the end-
points of a virtual path to tenant controllers. When an OpenFlow switch sends
a message, AdVisor checks whether this message is from an endpoint of a virtual
path. If so, the message is forwarded to the tenant controller. Otherwise, AdVi-
sor will handle the processing of this message and perform the necessary control
functions in response to the message independently from the tenant controller.
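The forwarding decision that AdVisor makes for switch events can be sketched as follows; the path-endpoint table and the callbacks are hypothetical placeholders used only to show the control flow.

# For each vSDN, the physical switches acting as endpoints of its virtual
# paths; intermediate switches remain hidden from the tenant controller.
path_endpoints = {"vsdn-A": {"s1", "s4"}, "vsdn-B": {"s2", "s5"}}

def handle_switch_event(vsdn, switch_id, event, tenant_controller, hypervisor):
    # Forward the event to the tenant controller only if it comes from an
    # endpoint of one of the tenant's virtual paths; otherwise handle it
    # locally in the hypervisor, as AdVisor does for hidden switches.
    if switch_id in path_endpoints.get(vsdn, set()):
        tenant_controller(event)
    else:
        hypervisor(event)

handle_switch_event("vsdn-A", "s3", "packet_in",
                    tenant_controller=lambda e: print("to tenant:", e),
                    hypervisor=lambda e: print("handled locally:", e))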

4.2.3  Container-Based Virtualization for SDN


Virtualization in SDN, although greatly enhancing network flexibility for sup-
porting diverse applications and services, also introduces extra overhead in
network control and management, thus bringing in a new challenge to SDN
performance. The centralized controller in SDN, even without virtualization,
needs to interact with all data plane devices and maintain a global view of the
entire data plane—therefore forming a potential performance bottleneck that
limits network scalability. With virtualization, the network hypervisor must
handle all interactions between tenant controllers and physical network devices,
which makes the scalability issue of SDN even more serious. As the number of
virtual networks and physical devices increases, the virtualization layer becomes
a limiting factor of network performance in SDN.
Network hypervisors (e.g., FlowVisor and its extensions) have the advan-
tage of allowing vSDNs to have independent tenant controllers. However, a
hypervisor-based approach also introduces overheads that may limit scalability
of the virtualization layer. Instantiating a complete SDN controller for each
virtual tenant network consumes extra memory space and CPU capacity. Al-
though the overhead for supporting a single tenant controller may not be signif-
icant, the cumulative overhead caused by a large number of coexisting vSDNs
may degrade network performance.
Container-based solutions for virtualization offer a promising approach
to addressing these challenges and have been applied to enhance scalability of SDN virtualization. Container-based virtualization, also called operating sys-
tem-level virtualization, is a virtualization method where the kernel of an oper-
ating system allows for multiple isolated user-space instances. Such instances,
often called containers, have independent name spaces and resource scheduling
schemes. Each container may look and feel like a real server from the point
of view of its owners and users. Compared to hypervisor-based virtualization,
where each VM emulates hardware and runs a guest operating system, contain-
er-based virtualization shares a common operating system among user applica-
tions and therefore is more efficient with lower overhead.
FlowN is a representative container-based approach for SDN virtualiza-
tion. The system architecture of the FlowN virtualization layer is shown in Figure
4.4. Rather than running a separate controller for each tenant network, FlowN
provides a common NOX-based SDN controller that is shared by multiple ten-
ant networks. Each tenant network has its own control application (not a com-
plete instance of controller) that runs on top of its virtual topology and address
space. A control application consists of handlers that respond to network events
by sending commands to the underlying data plane switches [8].
FlowN essentially is a modified NOX controller that, on the one hand,
handles NB API calls from various tenant control applications, and, on the oth-
er hand, controls data plane switches using the OpenFlow protocol. FlowN runs its own event handlers that call tenant-specific event handlers and map API calls
to the NOX controller between physical and virtual networks. When a packet
is forwarded by a switch to the controller, the FlowN packet-in event handler
identifies the appropriate tenant application and invokes its packet-in handler.

Figure 4.4  FlowN architecture for SDN virtualization.

When a tenant application invokes an API call to the NOX controller, FlowN
intercepts the call and translates it to instructions for physical switches. When a
tenant application requests the controller to install a flow match-action rule in
a switch, FlowN maps the virtual switch to the corresponding physical switch,
checks that the tenant has not exceeded its share of flow table on that switch,
and then installs the rule on that switch. In order to provide resource isola-
tion between the tenant applications, FlowN allocates each tenant a prereserved
amount of flowspace and flow table memory on each switch, and assigns one
processing thread per container [8].
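The following Python sketch illustrates this dispatch-and-translate pattern in the spirit of FlowN: the VLAN tag identifies the tenant, the tenant's own packet-in handler is invoked, and a flow install is mapped to the physical switch subject to the tenant's flow table quota. All names, tables, and limits are illustrative assumptions.

tenant_by_vlan = {100: "tenant-A", 200: "tenant-B"}
tenant_handlers = {}                                      # tenant -> packet-in callback
virt_to_phys_switch = {("tenant-A", "v1"): "s7", ("tenant-B", "v1"): "s9"}
flow_quota = {"tenant-A": 500, "tenant-B": 500}
flows_used = {}

def on_packet_in(vlan_id, packet):
    # Identify the tenant from the VLAN tag and invoke its own handler.
    tenant = tenant_by_vlan[vlan_id]
    tenant_handlers[tenant](packet)

def install_flow(tenant, virt_switch, match, actions, send_flow_mod):
    # Map the virtual switch to its physical switch and enforce the quota.
    phys = virt_to_phys_switch[(tenant, virt_switch)]
    used = flows_used.get((tenant, phys), 0)
    if used >= flow_quota[tenant]:
        raise RuntimeError("tenant exceeded its flow table share")
    flows_used[(tenant, phys)] = used + 1
    send_flow_mod(phys, match, actions)   # emit the actual OpenFlow message

tenant_handlers["tenant-A"] = lambda pkt: print("tenant-A got", pkt)
on_packet_in(100, {"eth_dst": "00:00:00:00:00:02"})
install_flow("tenant-A", "v1", match={"vlan_id": 100}, actions=["output:2"],
             send_flow_mod=lambda sw, m, a: print("flow_mod to", sw, m, a))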
To differentiate the traffic and rules for different tenant networks in
FlowN, edge switches encapsulate each incoming packet with a protocol-ag-
nostic extra header (e.g., VLAN id) to identify the tenant network to which this
packet belongs. The extra headers are determined by the virtualization control-
ler. Mapping from virtual to physical addresses occurs when a tenant control
application modifies the flow table. Mapping from physical to virtual addresses
must be performed when a switch sends a message to the controller, which then
forwards the packets to the right tenant control application.
Mapping between virtual topology and physical network is another aspect
of a virtualization layer that may degrade performance of SDN. FlowN lever-
ages advances in database technologies to overcome this potential bottleneck.
Both topology descriptions and assignments to physical resources can be repre-
sented by the relational model of a database. Each virtual topology consists of a
number of nodes, interfaces, and links that can be uniquely identified by some
keys. A physical network topology can also be represented in a similar fashion
by using a database model.
FlowN uses two tables to store the mapping relation between virtual and
physical topologies. One table stores the node assignments from virtual net-
work nodes to physical switches. The other table stores the path assignment
from virtual links to a set of physical links. Then, mapping between virtual
and physical topologies can be achieved through database query operations.
FlowN employs a master-slave database organization for addressing the scal-
ability challenge. The state of the master database is replicated among multiple
slave databases. Using the replicated database, the FlowN virtualization layer
can be distributed among multiple physical servers, each of which is colocated
with a replica of the database. Each physical server is connected with a subset
of physical switches and running the control applications for a subset of tenant
networks [8].
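The relational representation can be sketched with two tables and a query, for example using SQLite; the schema below is an illustrative assumption rather than FlowN's actual database design.

import sqlite3

# Two illustrative mapping tables: virtual nodes to physical switches, and
# virtual links to the ordered physical links that realize them.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE node_map (tenant TEXT, vnode TEXT, pswitch TEXT);
    CREATE TABLE path_map (tenant TEXT, vlink TEXT, hop INTEGER, plink TEXT);
""")
db.executemany("INSERT INTO node_map VALUES (?, ?, ?)",
               [("tenant-A", "v1", "s3"), ("tenant-A", "v2", "s8")])
db.executemany("INSERT INTO path_map VALUES (?, ?, ?, ?)",
               [("tenant-A", "v1-v2", 1, "s3-s5"),
                ("tenant-A", "v1-v2", 2, "s5-s8")])

# Virtual-to-physical mapping then becomes a simple query.
rows = db.execute("SELECT plink FROM path_map WHERE tenant=? AND vlink=? "
                  "ORDER BY hop", ("tenant-A", "v1-v2")).fetchall()
print([r[0] for r in rows])  # ['s3-s5', 's5-s8']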
Compared with FlowVisor, which only “slices” physical network resourc-
es, FlowN enables a complete abstraction of the physical network by providing
virtual topologies and virtual address spaces to tenant networks. An advantage
of this abstraction is to make physical resource management transparent to
virtual tenant networks (e.g., virtual nodes can be transparently migrated on the physical network without interfering with tenant network operations). On the
other hand, since FlowN is an operating system-level container-based virtual-
ization method, each tenant network runs a control application upon a com-
mon NOX-based SDN controller. Therefore, FlowN limits the control of all
tenant networks to the capabilities of the shared controller. The design strategy
of FlowN to sacrifice tenant network controllability for improving network
performance reflects the tradeoff between the flexibility achieved by virtual-
ization for supporting diverse service requirements and the performance cost
introduced by network virtualization.

4.2.4  Virtualization of Multidomain SDN


The virtualization technologies for SDN discussed in the previous subsections,
including both hypervisor-based approaches such as FlowVisor and container-
based methods such as FlowN, focus on virtualization of a single network do-
main. They all assume the existence of a single network control platform that
is able to obtain a global view of the entire network topology and control all
data plane devices in the network infrastructure. This might be a reasonable
assumption for some networking scenarios where SDN was initially deployed
(e.g., data center networks and enterprise networks that are typically operated
by a single administration entity). However, with the application of SDN into
a broader spectrum of networks, networking across autonomous SDN domains
often becomes necessary for service provisioning. For example, with adoption of
SDN in carrier networks, Internet service provisioning may require end-to-end
networking through multiple network domains controlled by different SDN
controllers. Also, the emerging notion of cloud federation expects communica-
tions among cloud infrastructures, where each may have an independent data
center network with its own SDN controller.
Multidomain networking scenarios bring in new challenges to realizing
virtualization in SDN. The assumption of a single control platform with a glob-
al view of the entire network infrastructure is no longer valid. Currently, there
are various SDN controllers that follow different protocols and interface stan-
dards. Although OpenFlow is considered the most popular standard for the SB
interfaces of SDN controllers, alternative protocols such as IETF ForCES do
exist. Therefore, it is possible that the SDN controllers in multiple autonomous
network domains implement different SB interface protocols and thus cannot control the switches in other domains. The lack of standard NB
APIs for SDN controllers also causes difficulty in service provisioning across au-
tonomous network domains with heterogeneous controllers. Creating a virtual
tenant network begins with discovering and then gaining access to network in-
frastructure resources. The currently available APIs for discovering, allocating,
and accessing resources may vary across the controllers in different domains. As a result, constructing an end-to-end virtual network requires the virtualization
layer to interact with a variety of different APIs offered by the various network
domains that make up the resulting vSDN.
In order to face these challenges, the virtualization layer of multidomain
SDN must provide a higher-level abstraction of the underlying network infra-
structures. Specifically, the network virtualization layer should provide a single
common platform upon which vSDN controllers can control and leverage
physical network infrastructure resources. The common platform should mask
the fact that the underlying infrastructure actually consists of multiple network
domains with heterogeneous implementations, including SDN controllers that
may have different NB APIs and SB protocols.
The key to realizing such a general network virtualization layer across het-
erogeneous infrastructure domains is to handle the interactions with the diverse
SDN controllers in various domains to make them transparent to upper-layer
control/management applications. An approach to achieving such transparency
is to include a set of domain handlers at the lower half of the virtualization
layer, one for each domain controller. Each handler talks with the correspond-
ing SDN domain controller using the API provided by the controller to collect
network state information from the domain and deliver control and manage-
ment requests to the domain. The upper half of the virtualization layer is es-
sentially a network hypervisor that synergizes the topology and resource infor-
mation collected from all network domains into one global abstract view of the
entire underlying network infrastructure. The network hypervisor offers a set
of high-level APIs to tenant controllers for configuring physical resources that
are allocated to their respective virtual networks. The network hypervisor maps
the requests from tenant controllers to specific domains and invokes the corre-
sponding handlers to translate the requests to API calls to the domain control-
lers. In this way, the virtualization layer glues resources reserved from multiple
domains and coordinates control of various domains to form individual virtual
tenant networks. Figure 4.5 depicts the general architecture for handler-based vir-
tualization for multidomain SDN.
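The handler pattern can be sketched as follows; the handler interface, the example handler, and the request format are hypothetical and are only meant to show how the hypervisor maps tenant requests onto per-domain handlers.

from abc import ABC, abstractmethod

class DomainHandler(ABC):
    # One handler per infrastructure domain; it hides the domain
    # controller's specific NB API from the network hypervisor.
    @abstractmethod
    def discover_resources(self): ...
    @abstractmethod
    def reserve(self, resource_spec): ...

class ExampleDomainHandler(DomainHandler):
    # Hypothetical handler for one domain controller.
    def discover_resources(self):
        return {"domain": "metro-A", "switches": ["s1", "s2"]}
    def reserve(self, resource_spec):
        print("translating request into domain controller API calls:", resource_spec)

class NetworkHypervisor:
    def __init__(self, handlers):
        self.handlers = handlers              # domain name -> DomainHandler

    def global_view(self):
        return [h.discover_resources() for h in self.handlers.values()]

    def build_vsdn(self, per_domain_specs):
        for domain, spec in per_domain_specs.items():
            self.handlers[domain].reserve(spec)

hv = NetworkHypervisor({"metro-A": ExampleDomainHandler()})
print(hv.global_view())
hv.build_vsdn({"metro-A": {"switches": ["s1"], "bandwidth_mbps": 100}})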
A representative multidomain SDN virtualization system that employs
the handler-based approach is the HyperNet system. In HyperNet, the network
hypervisor provides APIs that hide the details of network domain implementa-
tions and make them transparent to tenant networks. The network hypervisor
works with a set of domain handlers for executing tenant networks, obtaining
resources from multiple domains, connecting resources together to form the
topology required by each tenant network, loading necessary software and/or
configuration files on each node of the topology, and then monitoring and
adapting the topology over time as network states change [9].
Figure 4.5  Handler-based virtualization for multidomain SDN.

The HyperNet virtualization system allows multiple network hypervisors operated by different entities to coexist. Such entities essentially play the role of virtual network providers in a network virtualization environment. All network
hypervisors are supposed to provide a standard set of APIs to all tenant net-
works but differ in the business relationship that they form with the providers
of network infrastructures. A network hypervisor acts as a broker between the
service providers who want to create virtual networks for service provisioning
and the infrastructure providers who offer the physical resources in their net-
work domains to support virtual networks.
A prototype implementation of the HyperNet architecture has been de-
veloped based on ProtoGENI. ProtoGENI is one of the GENI control frame-
works that fully support the GENI aggregate manager (AM) API. In GENI,
network resources are owned and managed by an aggregate. Access to resources
is achieved through the AM API. Currently the GENI AM API provides func-
tions to discover GENI resources; reserve, renew, and delete a slice of GENI
resources; check GENI resource status; and create or tear down a GENI slice.
There exist multiple aggregates in the ProtoGENI, each with its own AM and
managed network resources. Essentially, each aggregate is an autonomous net-
work domain including an AM as the controller.
The HyperNet prototype implementation, as shown in Figure 4.6, in-
cludes a set of GENI AM handlers in the bottom part of the virtualization layer.
These handlers talk to different GENI AMs via the GENI AM API to discover,
monitor, and manage GENI resources. The top part of the virtualization layer
provides a set of APIs for tenant network management, router management,
topology management, and end system management. API calls are invoked by
tenant networks using XML-RPC. The network hypervisor is comprised of
three components: the location manager (LM), topology server/routing serv-
er (TS/RS), and information base (IB). The LM fetches location information
about each domain from the domain controllers and saves it in the IB. The TS
maintains up-to-date topology information about a network domain. The RS
makes use of the topology information stored in IB to perform path computa-
tion for accomplishing tasks such as routing between virtual network nodes [9].

4.2.5  Orchestration-Based Virtualization for Multidomain SDN


The virtualization layer of multidomain SDN needs to mask the difference in
the NB APIs offered by the SDN controllers in various infrastructure domains.
One approach to achieving this objective, as discussed in the previous subsec-
tion, is to implement a set of handlers in the virtualization layer, one for each
domain controller. However, due to the lack of a standard NB interface and
the diversity of SDN controllers that may be deployed in network domains, the
sublayer of domain handlers may introduce extra complexity for implement-
ing network virtualization and may also limit flexibility and scalability of the
virtualization layer.
An alternative approach for multidomain SDN virtualization is to realize
a multidomain network hypervisor upon an orchestration platform of SDN
domain controllers. The architecture of orchestration-based SDN virtualization
is shown in Figure 4.7. In this architecture, network infrastructure consists of
multiple domains controlled by their respective SDN controllers. The network
orchestration layer is between the multidomain network hypervisor and the
SDN domain controllers. The orchestration layer acts as a parent controller that
coordinates a set of domain controllers to handle interdomain networking for
end-to-end service provisioning.

Figure 4.7  Orchestration-based virtualization for multidomain SDN.

The orchestration layer works at a high (and more abstract) level and
focuses on interdomain aspects of connectivity across different domains. Essen-
tially, this layer offers a unified network operating system that allows composi-
tion of the connectivity services provided by individual infrastructure domains
into end-to-end network services, regardless of the specific control technologies
employed in each domain. The orchestration layer maintains a global view of
the entire network infrastructure across all network domains. Based on such a
global topology, the network hypervisor creates virtual tenant networks across
multiple physical network domains and provides an abstract view of each vir-
tual network to the corresponding tenant controller. The network hypervisor
also works with the orchestrator to translate control messages between tenant
controllers and the domain controllers that are involved in controlling the in-
frastructure resources of the corresponding virtual networks.
A key aspect of multidomain network orchestration lies in the standard
interface between the network orchestrator and the domain controllers. In or-
der to mask the diverse NB APIs offered by various SDN controllers in a multi-
domain environment, a network orchestration interface should define a generic
control functional model for connectivity provisioning, topology dissemina-
tion, and path computation. Such an interface should also provide a standard
protocol for supporting communications between the network orchestrator and
all domain controllers. Essentially, the orchestration interface needs an effective
mechanism that, on the one hand, encapsulates the implementation details of
all network domains, and, on the other hand, allows communications between
the orchestrator and domain controllers and also supports collaboration be-
tween different domains.
The control orchestration protocol (COP) provides an interface protocol
between the orchestration layer and diverse SDN domain controllers. COP
unifies the orchestration functionalities in a single protocol and provides a com-
mon NB API so that heterogeneous SDN controllers can be orchestrated using
a common protocol. COP is composed of three basic functions—call, topology,
and path computation services. The call service is defined as the provisioning
interface. A call object must describe the type of service that is requested or
served by it and specify the endpoints between which the service is provided.
The call object also contains the list of effective connections made into the data
plane to support the service call. A connection object is used for a single net-
work domain scope and should include the path across the network topology of
the domain. The topology service provides a common and homogeneous defi-
nition of the network topology information maintained by different domain
controllers. A topology object consists of a set of nodes and edges that form
a tree structure. A node object must contain a list of ports and the associated
switching capabilities. An edge object is defined as the connection link between
two endpoints. The path computation service provides an interface to request
and return a path object, which contains the information about the route be-
tween two endpoints [10].
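
The three COP services can be pictured with the following illustrative objects, written here as Python dictionaries. The field names are assumptions made for this sketch and do not reproduce the normative COP data model described in [10].

    # Illustrative (non-normative) objects for the three COP services: call,
    # topology, and path computation.  All field names are assumptions made
    # for this sketch.

    call_request = {
        "call_id": "call-001",
        "service_type": "ethernet-connectivity",        # type of requested service
        "endpoints": {"src": "domainA:port1", "dst": "domainC:port7"},
        "connections": [                                 # one per traversed domain
            {"domain": "domainA", "path": ["swA1", "swA3"]},
            {"domain": "domainC", "path": ["swC2", "swC5"]},
        ],
    }

    topology_object = {
        "nodes": [
            {"id": "swA1", "ports": ["p1", "p2"], "switching": "packet"},
            {"id": "swA3", "ports": ["p1", "p2"], "switching": "packet"},
        ],
        "edges": [
            {"src": "swA1:p2", "dst": "swA3:p1"},        # link between two endpoints
        ],
    }

    path_request = {"src": "domainA:port1", "dst": "domainC:port7",
                    "constraints": {"bandwidth_mbps": 1000}}
    path_reply = {"route": ["swA1", "swA3", "swC2", "swC5"], "cost": 4}

    print(call_request["service_type"], "->", path_reply["route"])
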
An SDN network orchestration system has been developed based on the
COP protocol and the IETF application-based network operation (ABNO)
framework. In this system, each domain controller supports a standard REST-
ful NB interface to communicate with the network orchestrator using the COP
protocol. The orchestrator builds an abstract multidomain topology based on
one of the two aggregation mechanisms: virtual node aggregation or abstract
link aggregation. The virtual node aggregation abstracts internal connectivity
by representing each domain as a virtual node. The border nodes of each do-
main are seen as ports of a virtual node and are connected with other virtual
nodes through interdomain links. For abstract link aggregation, the internal
connectivity of a network domain can be dynamically mapped to a mesh of vir-
tual links. Each domain controller computes a path between the border nodes
of the domain and exposes the virtual links and border nodes to the orchestra-
tor. Path computation for providing end-to-end connectivity is performed in
two stages. First, the orchestrator calculates a path through the abstract multi-
domain topology and performs the domain sequence selection by identifying
the domains and border nodes involved in the path. Then the controllers of all
selected domains for this path perform path computation in parallel to find the
internal connections in their respective domains between the pairs of border
nodes identified by the orchestrator [11].
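
A minimal sketch of this two-stage computation is given below, assuming a toy abstract topology of three domains; in a real deployment the per-domain paths would be computed by the domain controllers themselves and returned over the COP interface.

    # Sketch of two-stage end-to-end path computation.  The orchestrator first
    # selects a domain sequence over the abstract (virtual-node) topology, then
    # each selected domain expands its own segment between border nodes.

    from collections import deque

    interdomain_links = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}   # abstract topology

    def domain_sequence(src_domain, dst_domain):
        """Stage 1: pick the domain sequence over the virtual-node topology."""
        queue, seen = deque([[src_domain]]), {src_domain}
        while queue:
            seq = queue.popleft()
            if seq[-1] == dst_domain:
                return seq
            for nxt in interdomain_links[seq[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(seq + [nxt])
        return None

    def intradomain_path(domain, ingress, egress):
        """Stage 2: placeholder for the path a domain controller would compute."""
        return [f"{domain}:{ingress}", f"{domain}:core", f"{domain}:{egress}"]

    if __name__ == "__main__":
        sequence = domain_sequence("A", "C")               # e.g., ['A', 'B', 'C']
        end_to_end = []
        for i, domain in enumerate(sequence):
            ingress = "src" if i == 0 else "border-in"
            egress = "dst" if i == len(sequence) - 1 else "border-out"
            end_to_end += intradomain_path(domain, ingress, egress)
        print("domain sequence:", sequence)
        print("end-to-end path:", end_to_end)
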

4.3  Software-Defined Networking in NFV Infrastructure


4.3.1  Using SDN in the NFV Infrastructure
As we discussed in Chapter 3, NFV infrastructure (NFVI) comprises network
infrastructure as well as computing and storage hardware, which are virtual-
ized to form the virtual network, computing, and storage resources. Upon the
NFVI, VNFs can be created, deployed, and orchestrated to offer various end-
to-end services through service function chaining. The key responsibility of
network infrastructure in NFVI is to provide connectivity among VNFs to sup-
port the provisioning of various network services.
NFV, driven by customers’ needs for elastic on-demand services and op-
erators’ needs for flexible network and service management, creates a very dy-
namic networking environment. There are various scenarios of service function
chaining supported by NFV. Some network services are predefined by a VNF
forwarding graph (VNF-FG). When creating an instance of such a network
service, VNFs are chained together based on the prespecified VNF-FG to pro-
vide static service delivery. For more dynamic and flexible service provisioning,
VNFs involved in a service and the connectivity between them will be deter-
mined at the time of service instantiation, based on a variety of factors such as
service and/or customer policies and the states of network, compute, and stor-
age resources. NFV also supports adaptive service function chaining, in which
connectivity between VNFs may be modified during run-time based on the
processing results provided by one or multiple VNFs. VNF instances may mi-
grate across different hosting servers for various reasons, such as service scale-up
or scale-down or load balancing, therefore requiring dynamic reconfiguration of
network infrastructure in order to provide the required connectivity. In addi-
tion, NFVI must be able to support multitenant service chains that share the
common network and compute infrastructures and effectively control traffic
steering between the VNFs involved in these service chains. Traffic flows must
be isolated not only among different network services but also between VNFs
that compose the services.
Therefore, networking for supporting NFV is very challenging and calls
for much more sophisticated control and management mechanisms than what
can be offered by traditional networking technologies. SDN offers a promising
approach to facing the challenges of dynamic networking brought in by NFV.
Since the initial proposal of the NFV concept, ETSI NFV ISG has envisioned
that SDN-based networking technologies could be very beneficial to NFV de-
velopment. After years of development in both SDN and NFV, the synergistic
and complementary relationship between these two emerging paradigms be-
comes even clearer.
Some of the networking challenges that NFV is facing, such as dy-
namic control and configuration of network infrastructure and automated
management of network services, match the design goals of SDN. Applying
SDN in the network infrastructure of NFVI enables complex network topolo-
gies to be readily built to support automated service orchestration. An SDN
controller can work with NFV orchestrator to control network resources as
well as monitor network states. Therefore, many desirable networking features
expected by NFV can be built based on SDN capabilities.
SDN provides the necessary flexibility of network control/management
to support dynamic service provisioning in an NFV environment. For example,
if a VNF migrates from one server to another, the SDN controller can easily
reroute the traffic flows to the new hosting server by updating the configura-
tions of data plane switches. If an additional instance is created to increase the
processing capacity of a VNF, the SDN controller may set up a new network
path to the server hosting the new instance and also balance the traffic flows
between the two servers. In addition, SDN also facilitates multitenant service
provisioning in NFV. The SDN virtualization technologies discussed in Section
4.2 may be employed to create multiple virtual networks sharing a common
network infrastructure. Each virtual network may have an independent tenant
SDN controller, and thus may realize its own network control/management
policies for meeting specific service requirements.
In principle, the NFV architecture specifies a general network infrastruc-
ture domain in NFVI without requiring any particular networking technolo-
gies to be used. A key objective of network virtualization is to allow the service
functions, including the connectivity services provided by the NFVI network
domain, to be decoupled from the technologies used for realizing them. There-
fore, NFV does not require usage of SDN in its network infrastructure. On the
other hand, the logically centralized control platform enabled by SDN offers
an effective approach that may greatly facilitate fulfillment of the virtualiza-
tion principle. For example, a centralized SDN controller enables a standard
abstract interface between SDN controller and NFV orchestrator. Such an ab-
stract interface supports encapsulation of detailed network implementations in
the form of an infrastructure service provided to the NFV orchestrator, which
allows the orchestrator to utilize the underlying network resources through the
infrastructure-as-a-service (IaaS) paradigm.
An illustrative example of using SDN-enabled network in NFVI to pro-
vide connectivity for VNF service chaining is shown in Figure 4.8. In this ex-
ample, a network service is defined by a FG comprising three VNFs in se-
quence. NFV orchestration constructs a chain of three VNFs for instantiating
the network service. In this service chain, output from VNF1 is forwarded to
VNF2 and then the result of VNF2 is forwarded to VNF3 for further process-
ing to complete the service. Therefore, virtual network links are needed to pro-
vide connectivity from VNF1 to VNF2 and from VNF2 to VNF3.

Figure 4.8  Usage of SDN in NFVI to provide connectivity for service function chaining.

In order to deploy the network service on physical infrastructures, NFV
orchestrator works with VNF manager and virtual infrastructure manager
(VIM) to deploy the VNFs on compute infrastructures. In this example, the
three VNFs are hosted on servers S1, S2, and S3, respectively. The NFV or-
chestrator also interacts with the SDN controller in the network infrastructure
for establishing physical connections that realize virtual links between VNFs.
After deploying VNFs at the data center servers, the orchestrator provides the
locations of these VNF instances (e.g., IP addresses of the hosting servers) and
requests the SDN controller to set up physical network paths in the network
infrastructure to provide the required connectivity. Interactions between NFV
orchestrator and the control plane of network infrastructure may be imple-
mented through the NB APIs of the SDN controller. Then the SDN controller
uses an SB protocol to configure data plane switches to realize the requested vir-
tual links in physical network infrastructure. For OpenFlow-enabled switches,
flow table entries will be configured by the SDN controller to steer traffic flows
between the hosting servers of VNF instances. As shown in Figure 4.8, the vir-
tual link1 from VNF1 to VNF2 is embedded in a physical path S1-SW1-SW3-
SW4-S2 and the virtual link 2 from VNF2 to VNF3 is realized on the physical
path S2-SW4-SW6-SW7-S3.
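
The interaction just described can be sketched as follows. The controller NB endpoint, payload fields, and placement data are hypothetical placeholders rather than the API of any particular SDN controller or orchestrator.

    # Sketch of the interaction described above: after placing VNFs on servers,
    # the NFV orchestrator asks the SDN controller (through its NB API) to set
    # up the virtual links of the forwarding graph.  The REST endpoint and the
    # payload format are hypothetical, not those of a specific controller.

    import json
    import urllib.request

    CONTROLLER_NB_API = "http://sdn-controller.example:8181/connectivity"  # assumed

    vnf_placement = {"VNF1": "10.0.1.10",   # server S1
                     "VNF2": "10.0.2.10",   # server S2
                     "VNF3": "10.0.3.10"}   # server S3

    forwarding_graph = [("VNF1", "VNF2"), ("VNF2", "VNF3")]

    def request_virtual_link(src_ip, dst_ip, link_id):
        body = json.dumps({"link_id": link_id, "src": src_ip, "dst": dst_ip,
                           "qos": {"bandwidth_mbps": 500}}).encode()
        req = urllib.request.Request(CONTROLLER_NB_API, data=body,
                                     headers={"Content-Type": "application/json"},
                                     method="POST")
        # urllib.request.urlopen(req)   # uncomment when a real controller is present
        print(f"requested {link_id}: {src_ip} -> {dst_ip}")
        return req

    for i, (src_vnf, dst_vnf) in enumerate(forwarding_graph, start=1):
        request_virtual_link(vnf_placement[src_vnf], vnf_placement[dst_vnf],
                             link_id=f"vlink{i}")
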
This example shows that network service provisioning in an NFV envi-
ronment requires cooperation between computing and networking resources.
This is achieved by the NFV orchestrator through interacting with both the
VIM and the SDN controller. If both the compute and network infrastructures
involved in service provisioning belong to the same administration domain
(e.g., in the same data center or enterprise network), the NFV orchestrator
acts as a control application running upon the SDN controller and programs
the underlying network infrastructure using NB API of the controller to build
network paths between VNF instances. For end-to-end service provisioning in
large-scale networks, such as the Internet, the compute and network infrastruc-
tures in an NFV environment may belong to multiple administration domains.
For example, VNFs are hosted on servers located in multiple data centers that
are interconnected through WANs. In such cases, there will be multiple SDN
controllers involved for providing network connectivity between VNF instanc-
es, typically including at least a controller for each data center network and a
(logical) controller for each WAN. Therefore, the NFV orchestrator needs to
interact with all these SDN controllers to provide the network connectivity
required by end-to-end services.
If the data centers and WANs fall into one single trust domain (e.g., be-
long to the same service provider), the interactions between the orchestrator
and SDN controllers may basically follow the same mechanism as the case of
single domain network, except having a one-to-many structure. If the data cen-
ters and WANs are in different trust domains (e.g., owned and operated by dif-
ferent infrastructure providers), then the interface between the orchestrator and
SDN controllers typically requires a higher-level abstraction of the underlying
network infrastructures. Due to the autonomous ownership of infrastructures,
providers' policies may forbid exposure of the internal details of their network
infrastructures. In such cases, the IaaS model offers a useful mechanism for
addressing the challenge to achieve effective interactions between the NFV
orchestrator and SDN controllers without exposing network infrastructure
details. Following the IaaS model, each SDN controller may encapsulate the
network infrastructure that is under its centralized control as a service—es-
sentially network infrastructure-as-a-service (NIaaS)—and expose an abstract
view of the network infrastructure with a service description. Each SDN con-
troller provides an on-demand connectivity service explicitly requested by the
orchestrator, which then composes the connectivity services from different in-
frastructure domains to realize service function chaining for end-to-end service
provisioning.
The NIaaS model offers a very flexible approach to abstracting, exposing,
selecting, composing, and utilizing networking resources in virtual computing
environments and thus may greatly facilitate networking for NFV. Latest devel-
opments in cloud operating systems embrace the notion of NIaaS. For example,
in OpenStack, a widely adopted cloud operating system for both public and
private clouds, the latest networking module Neutron that focuses on deliver-
ing networking-as-a-service has replaced the original networking API Quan-
tum. State-of-the-art SDN control platforms also support the NIaaS
model. For example, the OpenDaylight framework provides a network service
platform that supports RESTful APIs and OSGi service interfaces.

4.3.2  SDN-Based Network Control for NFV Service Function Chaining


Service function chaining (SFC) is an emerging service deployment concept
that promises increased flexibility and efficiency. Recently, SFC has received
considerable attention in the standardization and research communities. How-
ever, the ossification of the current network architecture makes automated and
dynamic SFC difficult. In the traditional IP-based network architecture, the
whole network is purpose-built and optimized for a small set of static services.
Although such a service model has advantages in terms of service performance
guarantees, it is particularly inflexible in new service development and deploy-
ment; thus, it is insufficient to support future network service evolution. The
NFV architecture enables flexible network function deployment and orchestra-
tion for supporting dynamic SFC. The programmable control platform of SDN
allows simple and effective control for flexible traffic steering in infrastructure
networks, which is a key enabler for automated SFC in the NFV environment.
SDN-based network control for supporting NFV SFC must meet some
challenging requirements. A large number of service chains may coexist in fu-
ture networks for supporting a wide variety of users and applications. There-
fore, scalability of network control is a challenging issue that must be addressed.
The diverse service requirements of different users, applications, and operators
may introduce a wide spectrum of policies and rules for traffic steering that
must be performed by the SDN controller simultaneously. In addition, SDN
control should achieve efficient network resource utilization for packet forward-
ing. The controller must steer traffic through VNF instances in a prespecified
sequence and also avoid any unnecessary packet forwarding. Also, certain poli-
cies could require traffic to be selectively steered away (bypassed) from specific
VNF instances.
SDN inline services and forwarding (StEERING) is an OpenFlow-based
network control architecture proposed to achieve scalable and flexible network
control for SFC [12]. As shown in Figure 4.9, the StEERING architecture es-
sentially provides an NFV infrastructure that consists of computing facilities
for hosting VNF instances and an OpenFlow-enabled network for data trans-
portation. OpenFlow switches are interposed between VNF hosting servers
and the rest of the transport network, which comprises Ethernet switches and
IP routers. That is, NFV compute infrastructure is connected to the transport
network by OpenFlow switches. The central control unit in the StEERING
architecture is composed of two modules—an OpenFlow control module and
a service placement module. The OpenFlow module controls and monitors
OpenFlow switches in the transport network and also provides a single interface
to the NFV orchestrator, which specifies traffic steering policies to provide the
required connectivity for chaining VNF instances.

Figure 4.9  SDN Inline service and forwarding (StEERING) system architecture.

The OpenFlow protocol is used to perform a two-step process in StEER-
ING for traffic steering. The first step classifies incoming packets and assigns
them to a service path based on predefined policies. The second step is to for-
ward packets to the next VNF instance based on its current position along the
assigned service path.
On the path that a traffic flow traverses, the ingress router connects the
source user device to the OpenFlow-enabled transport network and the egress
router connects the transport network to the destination user device. Layer 3 ad-
dress is used at edge routers for identifying users and mapping packets to ser-
vices. Packet forwarding inside the transport network is based on layer 2 address
and employs the layer 2 header rewriting feature supported by many existing
OpenFlow switches. Reference points for forwarding are the MAC addresses
and ports of both ingress and egress routers. At the first OpenFlow switch that
the traffic flow traverses in the transport network, an OpenFlow rule matches
the IP source address of the incoming packet from the ingress router, and then
the MAC address of the first VNF instance in the service chain is written into
the destination MAC address field of the packet. The packet is then forward-
ed through the transport network based on this rewritten destination MAC
address until it reaches the OpenFlow switch that is connected to the server
hosting the destined VNF instance. This switch rewrites the destination MAC
address of the packet to be the one required by the VNF instance and then
forwards the packet to the server. After the packet is processed by the VNF
instance, it leaves the hosting server and enters the transport network again.
The OpenFlow switch identifies the packet's associated service instance from its
incoming port and writes the MAC address of the next VNF
instance into the destination MAC address field of the packet. Then the packet
is forwarded through the transport network toward the next VNF instance in
the service chain. Such a forwarding procedure is repeated until the packet is
returned to the transport network by the last VNF instance. Then the MAC
address of the egress router is written into the destination MAC address field of
the packet, which allows the packet to be forwarded by the transport network
to the egress router [13].
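
The two-step classification and MAC-rewrite steering described above can be sketched as follows; the rules are shown as plain Python dictionaries rather than actual OpenFlow messages, and the MAC addresses, port names, and chain composition are illustrative assumptions.

    # Simplified illustration of the two-step StEERING forwarding described above.
    # A real implementation would install these as OpenFlow flow entries via the
    # controller's SB interface.

    SERVICE_CHAIN = ["vnf1", "vnf2", "vnf3"]          # VNF instances in the chain
    MAC = {"vnf1": "00:00:00:00:00:01",
           "vnf2": "00:00:00:00:00:02",
           "vnf3": "00:00:00:00:00:03",
           "egress": "00:00:00:00:00:ff"}

    def classification_rule(user_ip):
        """Step 1 at the first switch: map the user's flow to the service path."""
        return {"match": {"ipv4_src": user_ip},
                "actions": [{"set_dst_mac": MAC[SERVICE_CHAIN[0]]}, "forward"]}

    def steering_rule(after_vnf):
        """Step 2: when a packet returns from a VNF (identified by its in_port),
        rewrite the destination MAC to the next hop of the chain."""
        idx = SERVICE_CHAIN.index(after_vnf)
        next_hop = SERVICE_CHAIN[idx + 1] if idx + 1 < len(SERVICE_CHAIN) else "egress"
        return {"match": {"in_port": f"port-from-{after_vnf}"},
                "actions": [{"set_dst_mac": MAC[next_hop]}, "forward"]}

    if __name__ == "__main__":
        print(classification_rule("192.0.2.10"))
        for vnf in SERVICE_CHAIN:
            print(steering_rule(vnf))
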
In order to support the large number of service instances with diverse
requirements of traffic steering in a large-scale network, design of the StEER-
ING architecture paid special attention to scalability of network control. In
order to reduce the amount of state required at each OpenFlow switch, StEER-
ING employs the multiple-table feature supported by the OpenFlow specification
since its version 1.1 to transform the flat policy space into a multidimensional
space. The multitable mechanism separates the packet matching process into mul-
tiple steps and results in linear scaling of each table. In order to facilitate mul-
titable organization and fast rule matching across multiple tables, StEERING
uses metadata to communicate intermediate results among different tables and
associated actions. The set of service functions applied to a flow is defined as
a metadata type called the service set of the flow. Such metadata enables every
table to operate on the service set independently, thus simplifying integration
of different types of policies. In addition, the StEERING architecture pushes
complex forwarding operations, such as flow classification and packet head-
er rewriting, to the boundary of the transport network, therefore simplifying
packet forwarding inside the transport network [12].
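
The role of metadata as a per-flow service set can be illustrated with the following sketch, in which the service set is a simple bitmask passed between two table-processing functions; the bit assignments and the table split are assumptions made for illustration, not the exact StEERING table layout.

    # Sketch of the multitable idea: the "service set" of a flow is carried as
    # metadata (here a bitmask) between tables, so each table can operate on it
    # independently.

    SERVICE_BITS = {"firewall": 0b001, "dpi": 0b010, "nat": 0b100}

    def table0_classify(pkt):
        """Assign the service set for this flow and pass it on as metadata."""
        return SERVICE_BITS["firewall"] | SERVICE_BITS["dpi"]

    def table1_next_service(metadata):
        """Pick the next required service from the metadata and clear its bit."""
        for name, bit in SERVICE_BITS.items():
            if metadata & bit:
                return name, metadata & ~bit
        return "egress", metadata

    if __name__ == "__main__":
        md = table0_classify({"src": "192.0.2.10"})
        while True:
            nxt, md = table1_next_service(md)
            print("steer to:", nxt)
            if nxt == "egress":
                break
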
The other module in the StEERING control unit is the service placement
module, which performs an algorithm that periodically determines the best lo-
cations of the VNF instances for all services. This module may obtain topology
and state information of the transport network from the OpenFlow controller;
therefore, it is able to perform a network-aware placement algorithm to find the
best locations for the VNF instances in a service chain for minimizing the delay
for service traffic to traverse all required VNF instances.
With SDN-based network control for traffic steering, VNF instances in
principle may be deployed at any location in the transport network. Therefore,
the NFV orchestrator together with VIM in the NFV MANO component may
determine VNF placement based on the function requirements and compute/
storage infrastructure states and then request that the network domain in NFVI
provide the required connectivity. In such cases, control functions for the com-
puting resources and networking resources are independent, which typically
leads to relatively simple implementations. However, separated control of the
computing and networking domains in NFVI may lead to suboptimal resource
utilization for service provisioning. Network-aware VNF placement strate-
gies consider the topology and resource availability of the underlying network

www.EngineeringBooksPdf.com
Integrating SDN and NFV in Future Networks 195

infrastructure and jointly optimizes VNF placement and network routing;


therefore, it may significantly enhance network resource utilization and im-
prove service performance such as delay and throughput. On the other hand,
such federated resource management across compute and network infrastruc-
tures in NFVI requires more sophisticated control algorithms, thus increasing
system complexity.
It is worth noting that the dynamic, elastic, on-demand service environ-
ment enabled by NFV particularly requires coordinated control and manage-
ment of the states of VNFs as well as states of network connections. To see why
this is important, consider a scenario where an intrusion detection system (IDS)
VNF is overloaded and must be scaled up to meet the service performance re-
quirement. With NFV supported by SDN-enabled network infrastructure, the
VNF manager can easily deploy a new instance of the IDS VNF and request the
SDN controller in NFVI network domain to reroute some in-progress flows to
the new instance. However, attacks may go undetected during the scale-up pro-
cedure. This is because some necessary IDS state information may be unavail-
able yet when the rerouted flows arrive at the new instance. The SDN controller
can wait for the existing flows to terminate and only reroute new flows, but this
introduces extra delay in the scale-up process of the already overloaded VNF
and thus may further degrade service performance. Therefore, in order to avoid
compromising flexibility and performance of the IDS function, it is desirable to
quickly and safely move the internal IDS states for some flows from the original
VNF instance to the newly deployed instance and update network forwarding
state alongside. Similar needs arise in the context of other applications that rely
on dynamic reallocation of VNF capabilities [14].
This example scenario illustrates that coordination between the manage-
ment of computing resources in NFVI, which focuses on lifecycles of VNFs
and virtual services, and the management of network infrastructure, which pro-
vides connectivity between VNFs for service chaining, is particularly impor-
tant in order to support dynamic and elastic service provisioning in an NFV
environment.
A basic issue that must be addressed to face this challenge is related to
rerouting in-progress traffic flows when some VNF instances are being moved.
Packets may arrive at the source instance after the relevant VNF states have
been moved or at the destination instance before the VNF state transfer finish-
es. In addition, such coordination of VNF reallocation in the compute domain
and traffic rerouting in the network domain needs to be efficient to reduce the
associated overhead as much as possible. Currently effective and efficient mech-
anisms for federated management of network and compute infrastructures in
an NFV environment are still open for research. Some encouraging progress
toward this direction has been reported in the literature (e.g., the OpenNF
control system proposed in [14]).
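
The required ordering of state transfer and rerouting can be illustrated with the sketch below. It is not the OpenNF API; the data structures and the buffering step are simplified assumptions intended only to show why the VNF state must reach the new instance before the forwarding update takes effect.

    # Illustration (not the OpenNF API) of coordinating VNF state transfer with
    # network reroute when scaling out an IDS VNF: per-flow state is copied to
    # the new instance before the forwarding rule for that flow is updated, and
    # packets arriving in between are buffered so no flow is processed without
    # its state.

    old_instance_state = {"flow-42": {"pkts_seen": 1200, "alerts": 0}}
    new_instance_state = {}
    forwarding_table = {"flow-42": "old-instance"}
    buffered_packets = [{"flow": "flow-42", "seq": 1201}]   # arrived during the move

    def move_flow(flow_id):
        # 1. Transfer the internal VNF state of the flow to the new instance.
        new_instance_state[flow_id] = old_instance_state.pop(flow_id)
        # 2. Update the network forwarding state so the flow reaches the new instance.
        forwarding_table[flow_id] = "new-instance"
        # 3. Replay any packets buffered while the state was in flight.
        while buffered_packets:
            print("replay", buffered_packets.pop(0), "to new-instance")

    if __name__ == "__main__":
        move_flow("flow-42")
        print(forwarding_table, new_instance_state)
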

4.3.3  SDN for Supporting NFV in Radio Access Network


With the increasing number of mobile devices and rapid development of mo-
bile computing technologies, wireless mobile networks are becoming a signifi-
cant component of future networks, where a variety of network functions may
be deployed in order to provision a wide spectrum of mobile network services.
The LTE network architecture comprises two main parts—the radio ac-
cess network (RAN) and the evolved packet core (EPC). In RAN, user equip-
ment (UE) is connected to a base station (BS) via a radio interface; and the BS
is connected to a service gateway (SGW) in EPC through a mobile backhaul
network. Key functions of a BS in RAN include two main aspects—radio sig-
nal processing and baseband data processing. In conventional RANs, these two
aspects of network functions are tightly integrated on BSs that are often pro-
prietary devices optimized for specific tasks. Such hardware-based implementa-
tion of network functions limits the flexibility of RANs for supporting network
service evolution. In addition, distributing baseband processing to all BSs leads
to low utilization of the computational resources in RANs.
Virtualization has been applied in RAN of LTE to enhance network
flexibility and resource utilization. C-RAN is a representative technology for
virtualization-based RAN, which has attracted extensive interest from both in-
dustry and academia since it was initially proposed by China Mobile in 2011
[15]. In C-RAN, radio signal processing function and baseband data processing
function are decoupled and performed, respectively, by the radio remote head
(RRH) entity and the baseband unit (BBU) entity. Only RRH that provides a
simple radio interface is located at each BS. The BBUs of all BSs are virtual-
ized as software applications and consolidated onto a centralized pool of com-
putational resources in a data center. Each BS is connected to the data center
through high bandwidth front-haul network links. RRH at each BS digitizes
baseband communication signals and passes them to the central computing
pool for baseband data processing. The data to and from a particular RRH are
processed by a virtual base station (vBS), which is a VM running the BBU soft-
ware at the data center. Essentially, C-RAN virtualizes the baseband processing
functions, which used to be implemented as proprietary hardware at BSs, to be
software hosted by a data center; therefore, it represents a use case of NFV in
the RAN part of mobile networks.
In the C-RAN architecture, all baseband processing tasks are consolidated
into the BBU pool, which requires baseband data to be continuously exchanged
between RRHs and the BBU pool through the front haul network. The ex-
tra bandwidth requirement on the front haul network introduced by C-RAN
may hinder its application in networks with limited front haul capacities. The
central pool of BBUs is likely to be far away from some RRHs; therefore, the
extra delay caused by data transmissions between them makes it difficult to
use C-RAN for supporting latency-sensitive applications. The converged cloud
and cellular systems (CONCERT) architecture has been proposed to address
such challenges for further enhancing performance and flexibility of virtualiza-
tion-based RANs. Instead of consolidating all vBSs in a single data center loca-
tion, CONCERT allows various computational resources located at different
sites to be orchestrated for realizing virtual mobile network and cloud services.
Software-defined networking is employed in the CONCERT architecture to
provide flexible network connections among the VNFs at different locations to
facilitate realizing various mobile services [16].
The proposed CONCERT architecture is shown in Figure 4.10. It con-
tains a virtualized infrastructure upon which software-defined services can be
constructed. The infrastructure is divided to two planes: a physical data plane
and a separated control plane entity called conductor. The data plane consists of
heterogeneous physical resources, including radio interface equipment (RIE),
SDN switches, and various computational resources. RIE performs conversion
between radio signals and baseband data. Baseband processing functions are
handled by computational resources that may be scattered at different locations
in the virtual infrastructure. Local servers may be located with RIE to handle
processing tasks with strict latency requirements. Regional servers can be placed
in multiple sites to consolidate tasks from small regions. A central server with
large computing capacity can process a great number of tasks collected from
the entire network domain. RIE and various servers are interconnected through
an SDN-enabled transport network. The data plane of the transport network
comprises SDN switches controlled by the conductor.

Figure 4.10  The CONCERT architecture using SDN to support NFV in radio access networks.

The conductor plays the role of a centralized SDN controller. It coor-
dinates physical data plane resources using an SDN SB interface and presents
them as virtual resources on the NB interface. The SB control functions include
radio interface management, transport network management, and location-
aware computing management. On the NB interface, the conductor virtualizes
physical resources to form a virtual infrastructure that comprises virtual com-
puting, networking, and radio resources. Upon such a virtual infrastructure,
various mobile communication services (e.g., virtual BS) and cloud services can
be constructed by the conductor via chaining virtual resources. The conductor
dynamically coordinates data plane physical resources such as servers, switches,
and radio interface equipment in response to changes in service requirements
[16].
Like C-RAN, CONCERT applies virtualization in RAN to enhance
service performance of mobile networks. Compared to C-RAN, CONCERT
further exploits the advantages brought in by virtualization to enable more flex-
ible service provisioning using heterogeneous physical resources. CONCERT
also employs a software-defined control mechanism to decouple the data and
control planes in the network infrastructure. The conductor in the CONCERT
architecture is a logically centralized control entity that plays the roles of both a
VIM and an SDN controller in the NFV infrastructure.

4.3.4  SDN for Supporting NFV in Mobile Packet Core


Mobile packet core (MPC) is a core part of mobile networks. Key function
entities of MPC include service gateway (SGW), packet data network gate-
way (PGW), mobility management entity (MME), and home subscriber server
(HSS). Each BS is connected to an SGW through a mobile backhaul network.
An SGW serves a group of BSs and acts as the local mobility anchor point for
inter-BS handover. The SGW is connected to a PGW, which provides an inter-
face with an external network. The MME is the main control entity responsible
for maintaining mobility management states for UEs and setting up connec-
tions to carry user traffic. The HSS is a central database where user profiles are
stored [17].
Key MPC functions in conventional mobile networks are often realized
based on customized hardware that requires static deployment, provisioning,
and configuration. In addition, the current mobile network function enti-
ties also suffer from complex management and inflexible configuration and are
prone to vendor lock-in. As a consequence, the network architecture does not
inherently offer sufficient flexibility, dynamicity, or on-demand capability ex-
pected by future mobile network services [18]. Recently, researchers from both
industry and academia have explored SDN and NFV technologies to address the
challenges to MPC in mobile networks.
ONF has chartered the Wireless and Mobile Working Group (WMWG)
to foster adoption of OpenFlow-based SDN technologies in wireless mobile
networks. The main research objectives of WMWG include studying mobile
network architecture leveraging OpenFlow-based SDN, simplifying interaction
between wireless and fixed networks, and proposing extensions to OpenFlow
protocol for supporting MPC. The ETSI NFV community is also interested in
applying NFV in mobile packet core and has identified virtualized MPC as a
typical use case of NFV. In an NFV-based deployment environment, the key
MPC function entities such as SGW, PGW, MME, and their combinations
may be implemented as software instances running on VMs.
A representative effort of employing SDN technologies for supporting ap-
plication of NFV for enhancing flexibility, programmability, and efficiency of
mobile network control is the SDN-based MPC architecture proposed in [18].
This architecture, as shown in Figure 4.11, comprises an OpenFlow-enabled
network and a logically centralized control plane. The control plane contains
MME, combined SGW and PGW, and HSS entities running as a group of
VNFs hosted in the operator's data centers. GPRS tunneling protocol (GTP) is used to
connect eNodeBs (eNBs) to ingress switches of the OpenFlow network, which
is responsible for transmitting data to the data center. The NFV domain SDN
controller is responsible for providing network links inside a data center to
provide connectivity between virtual function entities hosted in the data center.
Those virtual functions will be attached as endpoints to the OpenFlow net-
work. The E2E connectivity domain SDN controller in this architecture man-
ages the data plane of the OpenFlow network to provide connections between
data centers.
In an operator data center, MPC control function entities, such as the
combined control function of SGW and PGW (S/PGW-C), can be imple-
mented as a VNF software instance running on top of the NFVI. The S/PGW-
C VNF interacts with the E2E connectivity domain SDN controller via an NB
interface in order to set up network connections between eNBs. For instance,
the S/PGW-C VNF sends the UE bearer GTP TEIDs to the controller, which
translates these IDs into OpenFlow messages to control SDN switches in the
transport network [18].
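
The TEID-to-flow-rule translation mentioned above can be sketched as follows. The rule format is schematic (standard OpenFlow has no native GTP TEID match field, so real deployments rely on protocol extensions or encapsulation-aware switches), and all identifiers are illustrative.

    # Schematic of the interaction described above: the S/PGW-C VNF hands the
    # UE bearer GTP TEIDs to the E2E connectivity SDN controller, which
    # translates them into steering rules for the transport switches.

    bearers = [
        {"ue": "ue-001", "teid_uplink": 0x1001, "enb": "enb1", "gw": "spgw-u1"},
        {"ue": "ue-002", "teid_uplink": 0x1002, "enb": "enb2", "gw": "spgw-u1"},
    ]

    def bearer_to_flow_rules(bearer):
        """Translate one bearer into schematic uplink/downlink steering rules."""
        uplink = {"match": {"gtp_teid": bearer["teid_uplink"],
                            "in_port": f"port-to-{bearer['enb']}"},
                  "actions": [f"output:port-to-{bearer['gw']}"]}
        downlink = {"match": {"in_port": f"port-to-{bearer['gw']}",
                              "dst_ue": bearer["ue"]},
                    "actions": [f"output:port-to-{bearer['enb']}"]}
        return [uplink, downlink]

    if __name__ == "__main__":
        for b in bearers:
            for rule in bearer_to_flow_rules(b):
                print("install:", rule)
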

Figure 4.11  Software-defined mobile packet core architecture.

4.3.5  SDN for Supporting NFV in Wireline Access Network

In today's network architecture, service providers typically need to deploy a
large number of customer-premise equipment (CPE) devices in residential networks,
which form an important part of the wireline access network. For example,
residential subscribers' home networks usually consist of multiple CPE devices,
such as ADSL routers and set-top boxes, which typically provide connectivity to
home terminals (e.g., smart phones, laptops, and networked TVs). This part of
the access network is a critical segment on the end-to-end service delivery path that
can become a bottleneck in terms of transmission delay and data throughput.
The current wireline access network architecture is facing many challenges
for meeting the requirements of future network services. Deploying CPE devic-
es that can perform various network functions in a large number of residential
networks may incur significantly high capital cost. Complex configuration for
the wide variety of CPE devices also leads to high operational cost for the service
providers. In addition, the existing residential network architecture typically lacks
the necessary flexibility required for dynamic deployment of new services.
The NFV paradigm has been applied together with SDN technologies in
wireline access networks to address these challenges. An example is the virtual-
ized residential gateway (vRGW) framework presented in [19]. Rather than
deploying CPE devices with complex network functions (e.g., IP routing, NAT,
and firewall), the vRGW framework only keeps simple layers 1 and 2 functions
on the CPE devices in local and access networks. Network layer and upper-layer
functions of CPE devices are decoupled from layer 1/2 protocols and virtual-
ized as software VNF instances called vRGWs. These vRGWs are deployed in
data centers located in the carrier’s edge network. Each CPE device has a cor-
responding and dedicated vRGW instance that handles the network layer (and
upper layer) functions for that customer’s traffic. Virtualizing and consolidating
complex network functions to data centers in the edge network may greatly
reduce carrier’s investment on CPE equipment, simplify device configuration
and network management, and enhance network flexibility for supporting ser-
vice evolution. vRGWs can be hosted in different locations in the edge network
for improving network performance. For example, considering network delay
performance, a service provider may deploy vRGWs in a metropolitan network
that is closer to end users.
The network connectivity between user CPE devices and the correspond-
ing vRGW instances is a key element in the vRGW framework. SDN technolo-
gies may be applied in wireline access networks for enabling service providers
to provision vRGWs in a flexible, scalable, and fine-grain manner. As shown in
Figure 4.12, the SDN controller of the access network sets up network connec-
tions between user CPE devices and vRGW servers, and also between vRGW
servers and the carrier’s core network. For upstream (from users to the network)
traffic, the controller configures SDN switches in the access network to forward
packets from users’ home devices to their vRGWs and then forward the pro-
cessed data from vRGWs into the core network to reach their final destinations.
Similarly, for downstream (from the network to users), packets destined for an
end user are forwarded to the corresponding vRGW in a data center first and
then forwarded to the user’s home device [19].
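
A per-subscriber steering table of this kind might be sketched as follows; the identifiers and the rule representation are illustrative, and in practice the upstream and downstream rules would be installed at different switches along the access-network path.

    # Sketch of the per-subscriber steering described above: each CPE device is
    # bound to a dedicated vRGW instance, and the access-network SDN controller
    # installs upstream and downstream rules accordingly.

    cpe_to_vrgw = {"cpe-100": "vrgw-100@dc-edge1",
                   "cpe-200": "vrgw-200@dc-edge1"}

    def upstream_rule(cpe):
        """User -> network: send the subscriber's traffic to its vRGW first."""
        return {"match": {"in_port": f"access-port-{cpe}"},
                "actions": [f"forward-to:{cpe_to_vrgw[cpe]}"]}

    def downstream_rule(cpe):
        """Network -> user: traffic for the subscriber is sent via its vRGW."""
        return {"match": {"dst_subscriber": cpe},
                "actions": [f"forward-to:{cpe_to_vrgw[cpe]}"]}

    if __name__ == "__main__":
        for cpe in cpe_to_vrgw:
            print(upstream_rule(cpe))
            print(downstream_rule(cpe))
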

Figure 4.12  SDN supported virtual residential gateway framework.

From an NFV architectural perspective, the SDN-enabled wireline ac-
cess network between user devices and vRGWs forms the network infrastructure
in an NFVI. The NFVI also comprises the datacenter infrastructure hosting
vRGWs and the related VIM element. The vRGW manager controls instantia-
tion, deployment, and possible migration of vRGW instances and requests the
SDN controller to set up and/or reconfigure network connections. Therefore,
from an SDN architectural perspective, the vRGW manager plays the role of a
control application running on top of the network operating system provided
by the SDN controller and interacts with the controller via an NB API.
Application of SDN and NFV in wireline access networks has started
receiving more attention from standard organizations. For example, ETSI NFV
ISG described two NFV use cases related to home and access networks—vir-
tualization of the home environment and fixed access network function virtu-
alization. Broadband Forum (BBF) is an industry consortium for developing
multiservice broadband networking specifications. A main goal of BBF is to ad-
dress interoperability, architecture, and management issues in broadband net-
works to enable home, business, and converged broadband services. Recently,
BBF engaged in the virtualization area to assess the impact of cloud, SDN,
and NFV on multiservice broadband access networks. The relevant works from
BBF include cloud intelligent broadband network, SDN in telecommunication
broadband networks, flexible service chaining, and network function virtualiza-
tion in multiservice broadband networks. More information about BBF-related
work can be found on its website, www.broadband-forum.org.

4.4  Combining SDN and NFV for Service Provisioning


4.4.1  Software-Defined Network Control in the NFV Architecture
In the last section, we discussed some representative technologies that employ
SDN-enabled networking in the NFV infrastructure (NFVI), particularly in
the network domain of NFVI. The main objective of such technologies is to
provide the required physical network connectivity between VNF hosting serv-
ers for supporting NFV orchestration and service function chaining. Applica-
tions of the SDN paradigm in these technologies are mainly limited to physical
network infrastructure in the NFV architecture. Actually, the key principle of
SDN—separating the controlling intelligence from the controlled resources to
enable a logically centralized and programmable control plane—offers a general
approach to flexible network control and management. Therefore, software-
defined networking may be applied in the entire NFV architecture rather than
being limited within NFVI.
The concept of resource in SDN is generic; in principle anything that
can contribute to any kind of service is in scope. Virtual network functions
as well as physical infrastructures in the NFV architecture can all be regarded
as resources and therefore can be controlled and managed by following the
SDN principle. For example, a VNF can be viewed by an SDN controller as a
node in an abstract network graph with known connectivity points and control-
lable transfer functions [20]. Therefore, in principle an SDN controller may be
used in the NFV architecture to control virtual network resources/functions as
well as physical infrastructure resources. That is, within the NFV architecture,
SDN-based technologies may be applied in the infrastructure domain to con-
trol physical resources, or in the tenant domain to control virtual resources,
or in both domains to provide unified centralized control of both virtual and
physical resources.
This more general concept of resources in the SDN control principle
allows development of a more holistic vision about the relationship between
SDN and NFV, especially SDN usage in the NFV architecture. Multiple op-
tions of mapping SDN key elements into the NFV architectural framework
have been discussed in a technical report on usage of SDN in NFV published
by ETSI NFV-ISG [21]. Key elements of SDN considered in this report are
SDN resources, SDN controllers, and SDN applications. The generality and
flexibility of the SDN principle allow a wide variety of possible locations of
using SDN resources, controllers, and applications in the NFV architecture.
Some of the mapping locations presented in [21] are shown in Figure 4.13 and
summarized next.

Figure 4.13  Possible locations of SDN resources, controllers, and applications in the NFV
architectural framework.

Possible locations of SDN resources in the NFV architecture include:

• Case a: physical SDN-enabled switch or router in the network domain
of NFVI;
• Case b: software-based SDN switch implemented on a server in the
compute domain of NFVI;
• Case c: virtual SDN switch or router realized as virtual network resourc-
es upon the virtualization layer;
• Case d: SDN-enabled switch or router realized as a VNF.

The first two cases are for physical SDN resources, respectively, in the
form of hardware-based switches/routers in network infrastructure and soft-
tions in the tenant domain of the NFV architecture.
Possible locations of SDN controllers in the NFV architecture include the
following:

• Case 1: SDN controller for controlling a physical network in the NFVI,
where the controller is not implemented as a VNF;
• Case 2: SDN controller for controlling a virtual network in the NFVI,
where the controller is not implemented as a VNF;
• Case 3: SDN controller functionality merged with the VIM functional-
ity, which may control either physical or virtual network resources;
• Case 4: SDN controller, controlling either a virtual or a physical net-
work, is virtualized as a VNF or part of a VNF;
• Case 5: SDN control functionality for a virtual network merged with
VNF management functions;
• Case 6: SDN controller as part of the operation support system (OSS).

Possible locations of SDN applications in the NFV architecture include
the following:

• Case i: an application might be implemented as a physical network ele-
ment on a physical network appliance;
• Case ii: the VIM might be an application interfacing an SDN control-
ler in the NFVI (e.g., OpenStack Neutron as a VIM interfacing with an
SDN controller in NFVI);
• Case iii: an application might be virtualized as a VNF interfacing an
SDN controller that is either virtualized or not;
• Case iv: an application might be an element manager (EM) interfac-
ing with an SDN controller to collect some metrics or configure some
parameters of a VNF;
• Case v: an application might be part of the NFV orchestrator or VNF
manager interfacing an SDN controller for managing virtual networks/
services;
• Case vi: an application might be part of the OSS/BSS.

From an SDN architectural perspective, both controllers and control ap-
plications belong to the control/management plane that is separated from data
forwarding functions; therefore, they are often located together in system de-
signs in order to facilitate the interactions between them. For example, case 1 of
controller location and case i of application location can be peered; that is, both
a controller and the applications running on top of it are implemented on a
physical network appliance in NFVI. Case 3 of controller location and case ii of
application location may also be peered together. In this scenario, both the con-
troller and applications are merged with the VIM that is responsible for manag-
ing NFV infrastructure. In addition, both SDN controller and application can

www.EngineeringBooksPdf.com
206 Virtualized Software-Defined Networks and Services

be realized as VNFs (case 4 for controller and case iii for application); therefore,
various flexible interconnections among virtualized SDN controllers and con-
trol applications may be enabled through NFV orchestration.
A key principle of virtualization in networking is to decouple the service
provisioning–oriented functionalities from the underlying network and com-
pute infrastructures. From such a virtualization perspective, the NFV architec-
ture can be viewed as comprising two layers—the infrastructure layer and the
service tenant layer. Based on such a layering structure of NFV, there are two
types of connectivity services in the NFV architecture. The first type of con-
nectivity services is provided by the NFVI to enable communications among
VNFs. Clearly, SDN plays a key role in meeting the requirement of elasticity
and virtualization for network infrastructure in order to provide such connec-
tivity services. This is the role of the infrastructure controller—the SDN control-
ler in the infrastructure layer. The second type of connectivity services is for
supporting network services provided at the service tenant layer and has to deal
with the operation and management of VNFs. Applying the SDN principle at
the service tenant layer provides a concordance of the upper part of the NFV
architecture with a centralized controller, which can be referred to as the ten-
ant controller. The set of control actions of the tenant controller is related to
semantics of service functions and virtual tenant networks for service provision-
ing [21]. Figure 4.14 illustrates the two-layer controller structure in the NFV
architecture.

Figure 4.14  Two-layer SDN controllers in the NFV architecture [21].

The SDN tenant controller, which itself might be realized as a VNF, in-
structs different deployed VNFs for performing packet processing functions.
The SDN infrastructure controller is responsible for setting up the required
network connections for supporting communication among those VNFs. Sepa-
rated control for the infrastructure and tenant layers in the NFV architecture
allows the infrastructure controller to provide the supporting underlay through
the virtualization layer, while the tenant controller provides an overlay com-
posed of tenant VNFs. A key objective of network virtualization is to enable
multiple virtual tenant networks, possibly with alternative network architectures
and protocols, to share a common infrastructure substrate. Therefore, multiple
independent tenant controllers may exist, one for each virtual tenant network
constructed upon NFVI through VNF management and orchestration.
Despite their different nature, the infrastructure controller and tenant
controller have to be coordinated in a consistent way to perform the expected
control actions and to dynamically adapt to changes in service conditions. For
example, instantiation of a new VNF performed by the tenant controller on the
service tenant layer must be supported by the infrastructure controller by al-
locating the corresponding network and compute infrastructure resources. The
two types of controllers might interact directly with each other through a new
reference point or indirectly via the current MANO stack in the NFV archi-
tecture. Coordination between the two types of controllers does not necessarily
imply direct control of one controller by the other. The MANO stack could
provide the tenant controller with abstract information about the infrastructure
layer and allow some degree of interaction in both directions. On the other
hand, using the MANO stack would require some extensions to the MANO re-
lated interfaces currently defined in the NFV architecture and could violate the
decoupling between MANO and network service semantics [21]. The specific
mechanism for coordination between the tenant and infrastructure controllers
in the NFV architecture is still an open topic for future research.

4.4.2  Resource Abstraction for SDN Control in the NFV Architecture


The key aspect of the SDN architecture for enabling decoupling between the
control and data planes lies in abstraction of data plane resources and an inter-
face between the two planes for supporting the resource abstraction. Therefore,
application of SDN on both the physical infrastructure layer and virtual service
tenant layer in the NFV architecture calls for a more general data plane abstrac-
tion model that is applicable to both physical and virtual resources. The inter-
face for supporting general data plane abstraction is between the controlled data
plane resources and their centralized controllers, which essentially plays the role
of the SB interface in the SDN architecture. However, the currently available
SDN SB protocols may not meet the requirement of a general abstraction for
both physical and virtual resources. For example, the OpenFlow standard pro-
vides a fairly low-level abstraction of data plane through flow tables in switches.
Traffic flows are identified in OpenFlow using addressing schemes of physi-
cal networks such as the MAC address, IP addresses, and port numbers, thus
lacking the ability to handle independent addressing schemes in virtual tenant
networks. Higher-level resource abstraction models and interface protocols for
supporting SDN control in the entire NFV architecture, including both virtual
functions and infrastructure resources, are still an open area for further study.
Recent research progress in this area has indicated that the forwarding and con-
trol element separation (ForCES) specification offers a promising basis for de-
veloping an abstraction model and associated control protocol for supporting
SDN in an NFV environment [22].
The ForCES specification developed by IETF was one of the original pro-
posals recommending decoupling of packet forwarding and network control.
The idea of ForCES is to provide simple hardware-based forwarding entities in
networking devices and software-based control elements. The ForCES frame-
work comprises two main types of elements—forwarding elements (FEs) that
perform packet forwarding operations and control elements (CEs) that con-
trol the operations of FEs. In addition, the framework also defines two helper
elements—the FE manager and CE manager—that assist the bootstrapping
phase. ForCES defines an object-oriented model realized by an XML schema
to abstract FE resources. A basic building block of the model is an object class
called logical functional block (LFB) that performs well-defined functions such
as receiving, processing, modifying, and forwarding packets. Multiple LFB in-
stances can be interconnected in a directed graph to form a service. In order to
allow CEs to control the operations of LFBs, each LFB class defines input and
output ports, operational parameters visible to a CE, capabilities advertised to
a CE, and events to which a CE can subscribe. The ForCES model supports
definition of new LFBs, each with its own customized set of parameters.
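As a rough illustration of the kind of information an LFB class definition carries (the actual ForCES model is expressed as an XML schema; the Python rendering and the example LFB names below are purely hypothetical):

```python
# Illustrative only: the real ForCES FE model is defined as an XML schema; this
# sketch merely mirrors the kind of information an LFB class definition carries.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class LFBClass:
    name: str
    input_ports: List[str]
    output_ports: List[str]
    parameters: Dict[str, str] = field(default_factory=dict)   # visible to a CE
    capabilities: List[str] = field(default_factory=list)      # advertised to a CE
    events: List[str] = field(default_factory=list)            # a CE may subscribe


# Two hypothetical LFB classes ...
classifier = LFBClass(
    name="FlowClassifier",
    input_ports=["in"],
    output_ports=["matched", "unmatched"],
    parameters={"ruleTable": "array of match rules"},
    capabilities=["maxRules"],
    events=["ruleTableFull"],
)
forwarder = LFBClass(name="Forwarder", input_ports=["in"], output_ports=["out"])

# ... interconnected in a directed graph to form a simple service.
service_graph = {("FlowClassifier", "matched"): ("Forwarder", "in")}
```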
In the ForCES framework, the abstract model is complemented with a
protocol to enable interactions between CEs and FEs. An advantage of this
protocol is that it is model agnostic and thus can control and configure any FE
that is abstracted with the ForCES model. ForCES protocol provides all the
necessary functions for supporting robust and efficient control of the underly-
ing resources. At the same time, the ForCES protocol comes with a concise
yet complete set of commands including SET, GET, DELETE, REPORT, and
REDIRECT [23].
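A toy exchange between a CE and an FE, showing only the command semantics (the real protocol is a binary protocol carried over a transport mapping layer; the component paths and rule format here are invented for illustration):

```python
# Toy CE/FE exchange illustrating ForCES command semantics only; not the wire protocol.

class ForwardingElement:
    def __init__(self):
        self._state = {"FlowClassifier.ruleTable": []}

    def handle(self, command, path, value=None):
        if command == "SET":
            self._state[path] = value
            return "ACK"
        if command == "GET":
            return self._state.get(path)
        if command == "DELETE":
            return self._state.pop(path, None)
        raise ValueError(f"unsupported command: {command}")


class ControlElement:
    def __init__(self, fe):
        self._fe = fe

    def set_component(self, path, value):
        return self._fe.handle("SET", path, value)

    def get_component(self, path):
        return self._fe.handle("GET", path)


ce = ControlElement(ForwardingElement())
ce.set_component("FlowClassifier.ruleTable", [{"dst": "10.0.0.0/24", "out": "matched"}])
print(ce.get_component("FlowClassifier.ruleTable"))
```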
The natural extensibility and expressibility of ForCES abstract model and
protocol make ForCES a viable candidate for realizing the interface between
SDN controllers and data plane resources in an NFV environment. A proof-
of-concept (PoC) prototype for evaluating applicability of ForCES to support
SDN-enabled NFV has been presented in the ETSI report on SDN usage in
NFV architecture [21], which demonstrates usage of SDN-based control in
an NFV environment in the context of LTE Evolved Packet Core (EPC). The
architecture of the PoC system is shown in Figure 4.15.
In this PoC system, the data and signaling parts of SGW and PGW func-
tions of LTE network are split into a data plane and a control plane. The data
plane function elements, referred to as S/PGW-D, are implemented as VNFs
and responsible for tunnel encapsulation/decapsulation and analytics collec-
tion. The control plane elements, S/PGW-C, handle tunnel endpoints setup
and process signaling following 3GPP standard. ForCES specification is em-
ployed in the system to realize the interface between the data plane VNFs and
the control and management functions in NFV MANO. Various networking
LFBs are implemented for providing connectivity between the instantiated net-
work functions in this system. The networking LFBs include a bridge LFB that
interconnects VNF instances within the network infrastructure and a port LFB
that specifies the connectivity between VNFs in virtual networks [23].
SDN-based control is applied in this PoC system in two specific cases. The
first case is to control and manage the networking and computing resources in
NFV infrastructure. An SDN LFB is implemented as part of the virtualization

Figure 4.15  SDN-enabled NFV architecture with ForCES-based control/management.

layer in this system to control connections in the network infrastructure. There
is also a hypervisor LFB in the virtualization layer for managing compute and
storage resources to instantiate virtual environments. Both the hypervisor and
SDN LFBs are controlled and managed by the ForCES infrastructure manager
application located at the VIM. The second case of applying SDN-based control
in this PoC system is to achieve centralized control of data plane VNFs. PGW-
D and SGW-D elements are implemented as VNFs and modeled as ForCES
LFBs. These LFBs are controlled through the ForCES protocol by a CE—the
FE management (FEM) application located at the VNF manager. These two
cases of applying SDN-based control in this PoC system are, respectively, on
the infrastructure layer and the service tenant layer. The ForCES protocol is
employed on both layers for controlling physical infrastructure resources as well
as virtual network functions, thus demonstrating the applicability of ForCES
specification to support applying SDN in the entire NFV architecture.

4.4.3  Network-as-a-Service for Supporting SDN Control in NFV


As discussed in the previous subsection, a key enabler for SDN-based control
in an NFV environment is the abstraction of both physical and virtual network
resources. In addition to the IETF ForCES specification, the service-oriented
architecture (SOA) and the infrastructure-as-a-service (IaaS) paradigm, which
have been widely applied for web and cloud services provisioning, also offer a
promising approach to achieving the objective of general resource abstraction.
SOA has proven to be an effective architectural principle for inte-
gration of heterogeneous autonomous systems. A service defined in SOA is a
module that is self-contained (i.e., the module maintains its own states) and
platform independent (i.e., interface to the module is independent of its imple-
mentation platform). Services can be described, discovered, orchestrated, and
accessed through abstract interfaces and messaging protocols. Essentially, SOA
enables abstraction of computing resources in the form of services and provides a
flexible interaction mechanism among services. The IaaS paradigm embraces
the SOA principle by provisioning computing resources in cloud infrastruc-
ture as services to users. IaaS abstracts users from internal details of computing
infrastructure and allows users to program and utilize infrastructure resources
through abstract service interfaces.
Recent application of SOA and IaaS in networking has introduced the
network-as-a-service (NaaS) paradigm, which allows abstraction of network-
ing resources and functionalities as SOA-compliant services that can be pro-
grammed and utilized through abstract interfaces. The NaaS paradigm provides
an approach to achieving high-level abstraction of network infrastructure and
enables flexible orchestration of heterogeneous networking systems for service
provisioning. In addition, NaaS allows networking resources to be abstracted
as network services through the same mechanism for abstracting computing
resources as infrastructure services. Therefore, NaaS and IaaS together in-
troduce a unified abstraction platform for both networking and computing
infrastructures.
NaaS has attracted extensive research interest and various technologies
have been developed to realize this emerging paradigm. Among them, Open-
NaaS is an open-source framework that has been employed in SDN and NFV
environments for managing different resources in network infrastructures [24].
The OpenNaaS framework was created within the EU-sponsored FP7 Man-
tychore project to enable operators to deploy innovative NaaS offerings. It is
developed by an open source software community and is released with dual
LGPL/ASF licensing.
The OpenNaaS architecture, as shown in Figure 4.16, comprises three
main layers—the platform layer, the resource layer, and network intelligence
layer. The platform layer is the common layer of the framework that provides
common tools for managing different infrastructure resources. All the reus-
able software building blocks providing basic functionalities of resource man-
agement are located in this layer. The resource layer provides abstraction of
different resources in the network infrastructure. In NaaS, a resource is a logical
representation of a manageable unit, which can be either a physical device or
a virtual network function. Each resource holds a list of capabilities, which are
the actions that can be performed using the resource. Each resource also con-
tains a model that describes the necessary information to allow OpenNaaS to
work with the resource. The top layer of the OpenNaaS framework is where the

Figure 4.16  OpenNaaS architecture.

network intelligence resides. This layer is able to create middleware that allows
end users to consume the enabled services using the web service interface of
each resource. All interlayer communications in the OpenNaaS framework are
performed through either the OSGi service interface or RESTful web service
interface [24].
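The sketch below is not the actual OpenNaaS API (OpenNaaS itself is a Java/OSGi framework); it only mimics, with hypothetical names, the resource, model, and capability concepts just described and the way a capability would be invoked through the framework's service interface:

```python
# Illustrative only: classes and the REST path below are hypothetical and merely
# mirror the OpenNaaS resource/capability/model concepts.

class Capability:
    """An action that can be performed using a resource."""
    def __init__(self, name, action):
        self.name = name
        self._action = action

    def execute(self, **kwargs):
        return self._action(**kwargs)


class Resource:
    """Logical representation of a manageable unit (physical device or VNF)."""
    def __init__(self, name, model):
        self.name = name
        self.model = model            # information needed to work with the unit
        self.capabilities = {}

    def add_capability(self, capability):
        self.capabilities[capability.name] = capability


# A router (physical or virtualized) exposed as a NaaS resource.
router = Resource("edge-router-1", model={"vendor": "generic", "ports": 4})
router.add_capability(Capability("set_interface_ip",
                                 lambda iface, ip: f"{iface} <- {ip}"))

# An SDN controller or the service layer would invoke the capability through the
# framework's service interface, e.g. a hypothetical REST call such as
#   PUT /opennaas/resources/edge-router-1/capabilities/set_interface_ip
# Here we simply call it directly:
print(router.capabilities["set_interface_ip"].execute(iface="eth0", ip="10.0.0.1/24"))
```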
The OpenNaaS framework may be applied to provide resource abstrac-
tion for supporting SDN-enabled control in the NFV architecture. Both physi-
cal network infrastructure and virtual network functions in NFV can be repre-
sented as resources in the OpenNaaS framework, which then can be accessed
and configured by an SDN controller through an OSGi or RESTful service
interface. In addition, the OpenNaaS platform provides a set of common func-
tions for controlling and managing resources, which can be inherited by a re-
source once it is defined. Therefore, OpenNaaS may greatly facilitate SDN-
enabled resource control and management in an NFV environment.

4.4.4  Routing Function Virtualization over an OpenFlow Infrastructure


NFV in principle is applicable to any network function on both data plane and
control plane. Many network control and management functions (e.g., routing,
path computation, traffic engineering, and load balancing) are good candidates
for being realized as VNFs to enhance network flexibility. On the other hand,
such functions often benefit from the availability of a centralized network
controller, which may provide a global view of network topology and configure
all data plane devices. Therefore, these functions are typically realized as control
applications running on top of an SDN controller in the SDN architecture.
Combination of NFV and SDN technologies enables virtual network control
functions to be implemented over an SDN control platform, which allows net-
work designs to exploit the advantages of both NFV and SDN. Virtualization
of network functions may greatly enhance the flexibility, scalability, and effi-
ciency of network control and management. SDN-supported VNF implemen-
tations may improve performance of the virtualized network control functions.
The SDN-supported NFV approach has been applied to routing func-
tions. For example, virtualized routing function (VRF) has been implemented
over an OpenFlow infrastructure [25]. The high-level architecture of Open-
Flow-based VRF, as depicted in Figure 4.17, is composed of three key com-
ponents—controllers with an extended OpenFlow control module, the VRF
module, and the communication protocol between OpenFlow controllers and
the VRF module. This VRF system is implemented based on the OpenNaaS
framework.
The OpenFlow control module in this system supports all standard func-
tions required by OpenFlow specification. When a controller receives a packet-
in message from a switch, this module first processes the packet header to parse
Figure 4.17  Routing function virtualization over an OpenFlow network infrastructure.

required information (e.g., IP version, the source and destination addresses, and
the arrival switch port); it then triggers a routing request to the VRF module.
Once the controller receives a routing function response back from the VRF
module, it sends a packet-out message to the switch and configures the cor-
responding flow table entries in the switches involved in packet forwarding for
the flow.
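The following sketch (hypothetical method names, not tied to a particular OpenFlow controller framework) traces that sequence: parse the packet-in, query the VRF module, then send the packet-out and install the flow entries; the real system would issue the routing request over the OpenNaaS RESTful interface rather than the local stub used here:

```python
# Illustrative sketch only: class and method names are hypothetical and do not match
# a particular OpenFlow controller framework.

class Switch:
    def __init__(self, dpid):
        self.dpid = dpid

    def packet_out(self, packet, out_port):
        print(f"packet-out on {self.dpid}: {packet['dst_ip']} -> port {out_port}")

    def install_flow(self, match, out_port):
        print(f"flow-mod on {self.dpid}: match {match} -> output {out_port}")


def query_vrf(route_request):
    # In the real system this would be an HTTP call to the VRF module's RESTful
    # interface; a canned response keeps the sketch runnable on its own.
    return {"path": ["s1", "s3"], "out_ports": {"s1": 2, "s3": 1}}


def handle_packet_in(switches, ingress, packet):
    # 1. Parse the fields needed for a routing decision.
    route_request = {"ip_version": packet["ip_version"], "src": packet["src_ip"],
                     "dst": packet["dst_ip"], "in_port": packet["in_port"]}
    # 2. Trigger a routing request to the VRF module.
    route = query_vrf(route_request)
    # 3. Send a packet-out and configure flow entries on all switches along the path.
    switches[ingress].packet_out(packet, route["out_ports"][ingress])
    for dpid in route["path"]:
        switches[dpid].install_flow({"dst_ip": packet["dst_ip"]}, route["out_ports"][dpid])


switches = {"s1": Switch("s1"), "s3": Switch("s3")}
handle_packet_in(switches, "s1",
                 {"ip_version": 4, "src_ip": "10.0.0.1", "dst_ip": "10.0.1.5", "in_port": 1})
```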
The VRF module receives routing requests from different OpenFlow control-
lers, performs path computation to find a feasible path for each routing request,
and then responds to the requests with the routing results. The VRF module in
this system is implemented as a resource in the OpenNaaS framework. The
module maintains a global view of the routing states using a set of capabilities
and an associated model provided by the OpenNaaS framework. The model
follows a routing table-like structure but contains two tables, respectively, for
IPv4 and IPv6. The basic capabilities defined in VRF resource provide path
finding functions for specific input parameters and for management features
such as inserting or deleting a given route and retrieving information corre-
sponding to the whole routing table [25].
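A toy stand-in for the VRF resource side (hypothetical, not the OpenNaaS model) that complements the controller sketch above, keeping separate IPv4 and IPv6 tables and exposing route insertion, deletion, and longest-prefix path lookup as its basic capabilities:

```python
# Illustrative only: a toy stand-in for the VRF resource described above.

import ipaddress


class VirtualRoutingFunction:
    def __init__(self):
        self.tables = {4: {}, 6: {}}          # prefix -> routing result

    def insert_route(self, prefix, result):
        net = ipaddress.ip_network(prefix)
        self.tables[net.version][net] = result

    def delete_route(self, prefix):
        net = ipaddress.ip_network(prefix)
        self.tables[net.version].pop(net, None)

    def find_path(self, dst_ip):
        addr = ipaddress.ip_address(dst_ip)
        # Longest-prefix match over the table for the destination's IP version.
        candidates = [(net, res) for net, res in self.tables[addr.version].items()
                      if addr in net]
        return max(candidates, key=lambda item: item[0].prefixlen)[1] if candidates else None


vrf = VirtualRoutingFunction()
vrf.insert_route("10.0.1.0/24", {"path": ["s1", "s3"], "out_port": 2})
print(vrf.find_path("10.0.1.5"))
```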
The communication protocol between OpenFlow controllers and the
VRF modules is based on the RESTful interface provided by the OpenNaaS
framework. The VRF resource model and capabilities can be accessed through
this interface. This RESTful interface can be called by an OpenFlow controller
in order to obtain feasible route information from the VRF module. Each con-
troller is connected to the VRF module through a VPN link. Controllers and
switches are connected using a secure channel that follows OpenFlow protocol.
The VRF implementation over an OpenFlow network infrastructure en-
ables separation of tenant-oriented control, such as the virtual routing func-
tion, from infrastructure-oriented control performed at OpenFlow controllers.

The VRF module is essentially a VNF instance that can be deployed on a VM
and migrated to different locations if needed. Multiple VRF instances may be
deployed at different locations to enhance scalability and reliability of routing
function. One VRF module can handle routing requests from multiple Open-
Flow controllers and therefore may easily support interdomain routing. In addi-
tion, this architecture allows multiple VRF modules to realize different routing
strategies and policies for supporting multitenant virtual networks that share a
common SDN controlled network infrastructure.

4.4.5  Extended SDN Architecture for Supporting VNF Functionalities


Another perspective about combining SDN and NFV is based on the function-
al roles that these two paradigms typically play in unified network architecture.
According to their respective initial design goals, NFV focuses on virtualizing
data processing-oriented network functions and deploying them on commod-
ity compute facilities; while SDN focuses on achieving more flexible control
for data forwarding operations performed on simplified network devices. The
difference in design goals is reflected in NFVI architecture specified by ETSI,
where computing infrastructure (including storage hardware) and network in-
frastructure are considered two separate domains. VNFs for data processing
are often assumed to be deployed only on computing resources, while the net-
work infrastructure, which may be SDN controlled, only provides connectivity
among VNFs for supporting service orchestration and function chaining. This
is the scenario of applying SDN within NFVI network infrastructure, as dis-
cussed in Section 4.3.
Such a strategy for using SDN in an NFV environment requires redirect-
ing all traffic that needs to be processed by a VNF to a server hosting the VNF
instance, which might lower network resource utilization and degrade service
performance. Although various technologies have been developed to improve
performance of VMs on commodity servers for data processing, as we discussed
in Chapter 3, hardware-based networking devices typically achieve better per-
formance if they are appropriately utilized for processing the packets forwarded
through them. Some complex network functions (e.g., intrusion detection) re-
quire continuous analysis of network states that are only available on the data
plane, an analysis that is more efficiently performed by a networking device on the
packet forwarding path.
In order to further improve performance of SDN-supported NFV, re-
search proposals have been made to explore the possibility of exploiting data
plane capabilities in SDN to implement some VNF functionalities. Current
data plane implementations in SDN are mostly stateless, since there is very
limited state associated with flow table entries. However, some lightweight
states can be kept in data plane devices (e.g., using flow-level or queue-level
counters and timers). Therefore, it is possible for SDN data plane devices (e.g.,
OpenFlow-enabled switches) to perform some relatively simple data processing
functions. The main idea of the proposed approach to using SDN for support-
ing VNF functionalities is to keep simple data processing in SDN data plane as
much as possible and only forward data traffic to VNF servers for more com-
plex processing when it is necessary.
This proposed approach to combining SDN and NFV technologies has
an impact on the way a VNF is designed. It requires decoupling between two
components of a VNF design: the stateful function component that needs to be
performed at a server with more computing capacity and the stateless function
component that can be processed on SDN data plane. The stateful component
is still implemented as a VNF running on virtual compute resources. The state-
less component makes use of SDN data plane devices to perform traffic process-
ing efficiently. The interface between these two components must be clearly
defined on both data plane and control plane. The control plane interface is
used for configuring and updating the behaviors of the stateless data path pro-
cessing component. The data plane interface is used when some portion of the
traffic needs stateful processing and thus must be redirected to a server where
the stateful function component is hosted [26].
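A rough sketch of such a split (all names hypothetical): the stateful component runs as a VNF and uses the control-plane interface to program stateless rules into the data path, while the data-plane interface is simply a rule that redirects state-requiring traffic to the server hosting the stateful component:

```python
# Rough sketch (hypothetical names): one network function split into a stateful
# component hosted on a server and a stateless rule set kept on an SDN switch.

class StatelessDataPath:
    """Stands in for the SDN data plane part: only simple match/action rules."""

    def __init__(self):
        self.rules = []

    def install(self, match, action):     # control-plane interface of the split VNF
        self.rules.append((match, action))


class StatefulComponent:
    """Runs as a VNF on a server and keeps the per-flow state of the function."""

    def __init__(self, datapath, server_port):
        self.datapath = datapath
        self.sessions = {}
        # Data-plane interface: traffic needing stateful handling is redirected
        # to the server hosting this component.
        datapath.install(match={"needs_state": True}, action={"redirect": server_port})

    def process(self, flow_id):
        self.sessions[flow_id] = "processed"
        # Once the stateful decision is made, program a stateless rule for the flow.
        self.datapath.install(match={"flow": flow_id}, action={"forward": "normal"})


dp = StatelessDataPath()
vnf = StatefulComponent(dp, server_port=7)
vnf.process("flow-42")
print(dp.rules)
```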
The flow-based network access control (FlowNAC) system is a represen-
tative example of combining SDN and NFV technologies by following the
aforementioned approach. The architecture of the FlowNAC system is shown
in Figure 4.18. In this system, the FlowNAC function design is separated into
two blocks: the authentication and authorization (AA) block, which keeps the
state of the currently executed AA process; and the access control enforcing
(ACE) block, which performs access control without requiring any state infor-
mation. The AA block relies on computing resources for complex stateful data

Figure 4.18  Combining SDN and NFV technologies for flow-based network access control.

processing and therefore is realized as a VNF running on a server. The ACE
block can be implemented on an SDN switch for achieving better performance.
Traffic can be classified into three categories in the FlowNAC system: a-
type traffic for authentication and authorization that must be processed by the
AA block; b-type data traffic for authorized services that must be granted access
to the network; and c-type data traffic for nonauthorized services that must be
denied access to the network. Separation of stateful and stateless functions in
the FlowNAC system allows redirecting each type of traffic to the appropriate
resources. Only a-type traffic is sent to the VNF server for AA processing, and
the results of AA processing are used to configure the ACE block for enforcing
network access control. Therefore, b-type and c-type traffic does not need to be
redirected to the server and is handled directly at the access switch where the
ACE block is implemented [26].
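A toy illustration (not the actual FlowNAC implementation) of how the three traffic categories could map onto rules installed at the access switch, with the b-type entries derived from the AA block's authorization results:

```python
# Toy illustration only, not FlowNAC code: the three traffic categories map onto
# access-switch rules that redirect, forward, or drop traffic.

AA_SERVER_PORT = 10          # hypothetical port toward the server hosting the AA VNF
UPLINK_PORT = 1


def ace_rules(authorized_services):
    """Build the stateless ACE rule set from the AA block's authorization results."""
    rules = [
        # a-type: authentication/authorization traffic goes to the AA VNF server.
        ({"traffic": "auth"}, {"output": AA_SERVER_PORT}),
    ]
    # b-type: data traffic of authorized services is granted access to the network.
    for svc in authorized_services:
        rules.append(({"dst_ip": svc["ip"], "dst_port": svc["port"]},
                      {"output": UPLINK_PORT}))
    # c-type: everything else (nonauthorized services) is denied access.
    rules.append(({}, {"drop": True}))
    return rules


for match, action in ace_rules([{"ip": "192.0.2.10", "port": 443}]):
    print(match, "->", action)
```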
In the FlowNAC system, the AA block, realized as a VNF, interacts with
an SDN controller through an NB interface to configure the ACE block on an
SDN switch for enforcing network access control policies. In this sense, the AA
VNF acts as an SDN application running upon a controller. Therefore, from an
SDN perspective, virtualization is adopted in this FlowNAC system for realiz-
ing control applications such as the AA block, while from an NFV perspective,
SDN-enabled control is adopted for performing network functions (enforcing
access control policies) in addition to providing connectivity services.
Combination of SDN and NFV technologies in the FlowNAC system
brings in some advantages in terms of network flexibility, efficiency, and perfor-
mance. Separation between the stateless ACE block and the stateful AA block
allows intensive data processing for enforcing access control to be performed on
the data path (without traffic redirection) by hardware-based network devices,
therefore improving both the delay and throughput performance. In addition,
such separation makes it easier to meet the different levels of scalability require-
ments for the AA and ACE blocks. As AA traffic is expected to be less demand-
ing, the computing capacity for the AA block may scale slower compared to the
capacity required by the ACE block for real-time traffic processing.
Extending SDN data plane capability to support VNF functionality also
brings in some challenges. On one hand, the VNF design needs to be split
into two components that are deployed separately on compute and network
infrastructures; on the other hand, network infrastructure must support data
processing for performing network functions as well as packet forwarding for
traffic steering. Therefore, network services must be carefully analyzed to de-
termine whether the performance gain outweighs the effort involved in the
redesign process. Possible overheads caused by splitting one VNF into two
separated components need to be considered during the analysis. Splitting net-
work functions into two components also adds complexity to VNF placement,
which requires VNF manager and NFV orchestrator to consider the availability
of both computing capacities and networking resources during service deploy-
ment. In addition, dual use of the network infrastructure for data processing as
well as packet forwarding requires SDN switches to guarantee isolation among
the processing of different network functions and isolation among control mes-
sages for different functions as well [26].

4.5  Integration of SDN and NFV in Unified Network Architecture


As we have discussed in the previous sections of this chapter, SDN and NFV
are highly complementary to each other and may be combined in a variety of
forms. On the other hand, SDN and NFV are two independent networking
paradigms, and each has its own fundamental principles. A clear holistic vision
about the relationship between the key SDN and NFV principles and how
they can be naturally integrated in unified network architecture will be very
beneficial to network designers for exploiting the advantages of both paradigms
in future networks.

4.5.1  Two-Dimensional Abstraction Model for Integrating SDN and NFV


The key principles of SDN and NFV both lie in abstraction; however, they
focus on different aspects of network architecture. As shown by the two-di-
mensional model depicted in Figure 4.19, two types of abstraction have been
deployed in general networking, respectively, through the designs of layers and
planes in network architecture. Both layer and plane enable abstraction of net-
working resources and functionalities, but in different dimensions.

Figure 4.19  A two-dimensional model for layer and plane abstraction in network architec-
ture [27].

Abstraction provided by layers in network architecture is in the vertical di-
mension, which is presented as a stack of layers in both the OSI layering model
and the TCP/IP layering model. Layer-dimension abstraction starts with the
underlying physical hardware and then adds a sequence of layers, each provid-
ing a higher (more abstract) level of service. A key property of layering design is
that each layer provides a set of services to the higher layer sitting upon it, and
the functions of the higher layer rely on the services provided by the lower layer.
Therefore, all layers form a stack of upward service provisioning and together
offer services to applications on top of the stack.
Abstraction provided by planes, on the other hand, is shown in the model
with a horizontal layout. This is to reflect the collaborative relationship be-
tween planes—functions performed on one plane do not necessarily rely on
the functions of another plane; therefore, there is no notion of higher plane or
lower plane in network architecture. Instead, each plane focuses on a particular
aspect of the entire network system, such as data transport, network control,
and system management. Each plane may comprise multiple layers from physi-
cal hardware to application software, and collaborates with other planes for
network service provisioning.
It is worth noting that distinction between the concepts of layer and plane
has not been clearly reflected in the majority of literature relevant to SDN and
NFV. Instead, many technical documents, especially SDN-related documents,
often use these two terms interchangeably. For example, in the SDN architec-
ture defined by ONF [28], the data plane is also called infrastructure layer.
The control and application planes are also called control and application lay-
ers, respectively. The high-level SDN architecture described in ITU-T Y.3300
comprises three layers—the resource layer, SDN control layer, and application
layer [29]. In the SDN layer architecture proposed by IRTF [30], different
architectural components are called either layer or plane without clear descrip-
tion of the difference between them. For example, there is a device and resource
abstraction layer upon the forwarding and operational plane. There is a control
abstraction layer inside the control plane and a management abstraction layer is
defined as a part of the management plane.
We feel that such interchangeable usage of the concepts of layer and plane
does not help clarification of the architectural principles behind SDN and
NFV. Therefore, we particularly distinguish the layer and plane concepts in the
two-dimensional model. The layer and plane reflect two orthogonal dimensions
of abstraction that the NFV and SDN principles respectively emphasize.
The abstractions in these two dimensions have been embraced by tele-
communication and computer networks, respectively. The network architecture
of conventional circuit switching–based telecommunication systems focuses on
the plane-dimension abstraction. Designs of telecom networks typically show
clear logical separation between the data, control, and management planes but
lack a clear abstraction on the layer dimension. For example, signaling systems
(e.g., SS7) for network control are logically separated from the transport net-
works that are under their control. In the intelligent network architecture, the
service control function and service management function elements are sepa-
rated from the service switching function element that is responsible for data
transmission and switching. On the other hand, the IP-based Internet archi-
tecture shows a clear layer-dimension abstraction through the TCP/IP layering
model but lacks explicitly defined abstraction in the plane-dimension. Packet
forwarding, routing, and network management functions are all specified in the
same set of IP protocols. Wide adoption of IP-based architecture has caused the
layer-dimension abstraction to dominate in current network designs.
Rapid development of the wide spectrum of Internet services calls for
much more flexible network control and management, thus bringing in chal-
lenges to the tightly coupled control and forwarding functions in the current
Internet architecture. SDN essentially enables the plane-dimension abstraction
by separating data forwarding and network control/management. Although the
TCP/IP stack provides layer-dimension abstraction, the interfaces between lay-
ers are not defined flexibly enough to meet the requirement of future network
services. A key obstacle lies in the unnecessary coupling between service-ori-
ented functions and transport-oriented infrastructure, which prevents network
designs from fully exploiting the benefits of layer-dimension abstraction. The
network virtualization notion advocates decoupling service provisioning from
network infrastructure, and the NFV architecture attempts to leverage standard
IT technologies to realize such decoupling through simple yet flexible abstrac-
tion of underlying hardware infrastructure.
Although the TCP/IP layer stack is used in Figure 4.19 to show the con-
cept of layer-dimension abstraction, the two-dimensional abstraction model is
applicable to network architecture with alternative layers. The vertical decou-
pling highlighted between the network interface layer and the Internet layer in
this figure is also only for illustration. In fact, position of virtualization in the
layer dimension is a design option for virtualization-based network architec-
ture. Similarly, control and management can be considered either as one plane
or two separated planes in the plane dimension.
From the layer-plane abstraction model, we can see that the key prin-
ciples of SDN and NFV both are based on abstraction but with emphasis on
the plane and layer dimensions, respectively. These two abstraction dimensions
are orthogonal; that is, network architecture may have abstraction on one di-
mension but not on the other. Therefore, SDN and NFV in principle are in-
dependent—NFV may be realized with or without SDN and vice versa. On
the other hand, the challenging requirements for service provisioning in fu-
ture networks demand abstraction on both dimensions in order to fully exploit
their advantages. Therefore, integrating the software-defined principle and the
virtualization notion leads to a unified framework with key components in four
quadrants and the interfaces for decoupling between them, as shown in Figure
4.20. Such a unified framework contains two layers—the infrastructure layer
and service layer—and two planes—the data plane and control/management
plane. Quadrants I and II are data and control/management plane components
on the infrastructure layer, while quadrants III and IV are data and control/
management plane components on the service layer.
The technologies for combining software-defined networking and net-
work virtualization that we discussed in previous sections may be mapped into
this quadrant framework. Virtualization of SDN controllers presented in Sec-
tion 4.2 enables decoupling between the control of network infrastructure (e.g.,
SDN-enabled switches) and the control for virtual tenant networks for service
provisioning. Therefore, such technologies focus on the layer-dimensional ab-
straction on the control plane (between the quadrants II and IV). Technologies
for using SDN in NFVI discussed in Section 4.3 employ SDN-based control
mechanisms to enhance network infrastructure in the NFV architecture, there-
fore emphasizing the plane-dimension abstraction on the infrastructure layer
(between the quadrants I and II). The general resource abstraction models and
protocols discussed in Section 4.4 enable SDN-based control for virtual func-
tions as well as physical resources in an NFV environment, which essentially in-
troduce decoupling between the data and control planes on both infrastructure
layer and service layer. The emphasis of such technologies is on the interfaces
between quadrants I, III and quadrants II, IV in the quadrant framework.

4.5.2  Software-Defined Network Virtualization (SDNV) Architectural Framework


As we discussed in previous sections, researchers have taken various technical
approaches to combine SDN and NFV technologies in order to fully exploit the
advantages of both networking paradigms. As discussed in Section 4.5.1,

Figure 4.20  A framework for integrating SDN and NFV principles [27].

most of the existing approaches focus on various aspects of the key principles
of SDN and NFV. In order to provide a high-level holistic vision of integrat-
ing SDN and NFV principles in unified network architecture, an architectural
framework called software-defined network virtualization (SDNV) was recently
proposed in [27]. This framework may offer useful guidelines for synthesizing
research and industry development efforts made from various aspects toward
the common objective of integrating SDN and NFV for service provisioning
in future networks.
Figure 4.21 depicts the SDNV framework, which follows the two-dimen-
sional model shown in Figure 4.19. The layer dimension of the framework
comprises the infrastructure layer, virtualization layer, and service layer. The
plane dimension of the framework consists of the data plane and the control/
management plane.
The infrastructure layer in the SDNV framework comprises physical re-
sources of network and compute infrastructures. This layer may consist of mul-
tiple autonomous domains owned and operated by different InPs. The virtual-
ization layer realizes infrastructure abstraction and provides mapping between
physical and virtual resources. All functionalities for network service provision-
ing reside on the service layer. This layer utilizes the virtual resources provided
by the virtualization layer to realize virtual service functions (VSFs), which in-
clude both virtual network functions (VNFs) and virtual compute functions

Figure 4.21  Architectural framework for software-defined network virtualization [27].

(VCFs). The service layer is responsible for constructing virtual networks (VNs)
by discovering and orchestrating appropriate VSFs.
The virtualization layer decouples the control/management functions
for service provisioning from the functions for infrastructure controlling and
provides a standard interface through which service-oriented control/manage-
ment functions may interact with infrastructure controllers. Such decoupling
on the control/management plane enables differentiation between control/
management functions related to service provisioning and those associated with
transport infrastructures and thus allows them to be provided, maintained, and
developed independently following their own evolutionary paths.
In the SDNV framework, the data plane and control/management plane
are separated on both the infrastructure layer and the service layer. The con-
trol/management plane on the infrastructure layer consists of controllers for
network and compute infrastructures. Heterogeneous SDN controllers may be
applied in different infrastructure domains, which are referred to as infrastruc-
ture domain controllers (IDCs). The control/management plane on the service
layer is responsible for life cycle management of VSFs and VNs, including con-
struction, instantiation, maintenance, and termination of VSFs/VNs. VNs are
constructed by composing appropriate VSFs for meeting service requirements.
Each VN has its own controller (called VNC) that controls all the data plane
VSFs involved in this VN, just as an SDN controller controls all data plane
devices in a physical network.
It is worth noting that although control and management are contained
in the same plane in the SDNV framework, these two types of functionalities
may be separated in network designs and implementations. Control and man-
agement functions focus on different stages in the entire life cycles of VSFs and
VNs. Management functions are responsible for creating, deploying, scaling,
migrating, and terminating VSFs and VNs. Control functions mainly focus
on maintaining the expected operation behaviors of VSFs/VNs during the ac-
tive stage of their lifecycles (e.g., routing, scheduling, and signaling functions
for packet forwarding). Management actions usually have a longer time scale
compared to control events, while control functions typically have stricter re-
quirements in terms of processing capacity and latency. Therefore, in some net-
work designs control and management functionalities are separated into two
separate planes with a standard interface in between.
Integration of the key principles of SDN and NFV in the SDNV frame-
work requires interfaces on the layer and plane dimensions, respectively. On
the layer dimension, the virtualization layer provides an important interface for
decoupling the infrastructure layer and the service layer. The interface between
applications and the service layer allows users to access and configure network
services. On the plane dimension, the key interface is between the data and
control/management planes in the SDNV framework.

The interface provided by the virtualization layer enables a high-level ab-
straction of underlying network and compute infrastructures, including both
data plane capabilities and control/management functionalities. This interface
decouples the logical topologies, addressing schemes, and routing mechanisms
of virtual networks from those of physical infrastructures while maintaining the
mapping between virtual and physical objects. In addition, this virtualization
interface should provide isolation between virtual objects to allow multitenant
VNs to share a common infrastructure substrate.
The interface between user applications and service control/management
functions plays a similar role as the NB API in the SDN architecture but is for
programming VN behaviors. Therefore, this interface is called virtual NB in-
terface. This interface offers service abstraction through which user applications
may access and configure network services. This interface should support isola-
tion among different VNCs in order to provide independent programmability
for individual VNs.
The interface between the data plane and the control/management plane
in the SDNV framework realizes the plane-dimension abstraction. This in-
terface decouples control/management functionalities from the controlled re-
sources, including both physical infrastructure resources and virtual network
functions, therefore playing a similar role as the SB interface in the SDN archi-
tecture. Separation between the service and infrastructure layers in the SDNV
framework splits the SB interface into two subinterfaces. The SB interface on the
infrastructure layer provides interactions between IDCs and the physical net-
work/compute resources under their control, and therefore is called physical SB
(P-SB) interface. The SB interface on the service layer allows each VNC to con-
trol the data plane VSFs in its VN following the centralized control principle of
SDN and is therefore called virtual SB (V-SB) interface. SDNV allows multiple
independent P-SB interfaces for meeting requirements of different domains
coexisting in the infrastructure layer. Similarly, VNs customized for various ser-
vices may adopt different V-SB interface protocols.
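A minimal sketch of the mapping role that sits between the two SB interfaces (the rule format, names, and addresses are invented for illustration): the virtualization layer translates a V-SB rule expressed in a VN's own addressing into a P-SB rule that an infrastructure domain controller can apply to a physical switch:

```python
# Minimal sketch (hypothetical names and rule format): the virtualization layer keeps
# the mapping between virtual objects used on the V-SB side and physical objects on
# the P-SB side.

class VirtualizationLayer:
    def __init__(self, node_map, address_map):
        self.node_map = node_map          # virtual node -> (domain, physical switch)
        self.address_map = address_map    # virtual address -> physical address

    def translate(self, v_rule):
        """Translate one V-SB rule issued by a VN controller into a P-SB rule that an
        infrastructure domain controller (IDC) can push to a physical switch."""
        domain, switch = self.node_map[v_rule["vnode"]]
        return {
            "domain": domain,
            "switch": switch,
            "match": {"dst": self.address_map[v_rule["match"]["vdst"]]},
            "action": v_rule["action"],
        }


vl = VirtualizationLayer(node_map={"vA": ("domain-1", "sw-7")},
                         address_map={"vip-10": "10.1.7.10"})
v_rule = {"vnode": "vA", "match": {"vdst": "vip-10"}, "action": {"output": 3}}
print(vl.translate(v_rule))   # rule handed to the IDC of domain-1 over its P-SB interface
```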
The SDNV framework combines the notion of network virtualization—
decoupling service functions from underlying infrastructures—with the core
principle of SDN—separating data and control/management planes—and can
thus fully exploit the advantages of both paradigms. The layer-dimension ab-
straction introduced by the virtualization layer allows lifecycles of VSFs and
VNs to be independent from those of physical infrastructures, thus enabling
rapid innovations both above and below the virtualization layer. The plane-
dimension abstraction in the SDNV framework separates data forwarding and
control/management functions on both the infrastructure layer and the service
layer. Such abstraction on the infrastructure layer supports logically centralized
programmable control for each infrastructure domain. Similarly, decoupling
data and control planes on the service layer allows each VN to have a central
programmable VNC that controls all the data plane VSFs involved in this VN
for service provisioning.
The SDNV framework naturally supports multiprovider service scenarios
in which diverse virtual networks are created upon a physical substrate con-
sisting of heterogeneous network and compute infrastructures in multiple do-
mains. Therefore, SDNV embraces the trend of unified network-cloud service
provisioning. VSFs in SDNV may provide service functions virtualized from
networking systems (VNFs) as well as from cloud resources (VCFs). End-to-
end services delivered by VNs through orchestrating VNFs and VCFs are es-
sentially composite network-cloud services. Such a converged service ecosystem
may introduce new functional roles, such as suppliers of VSFs and providers
of composite network-cloud services, and trigger innovations in new service
development.
The objective of the SDNV framework is not to replace the current SDN
and NFV architecture but to provide an architectural framework showing how
these two paradigms may be integrated together for future networking. On
the other hand, the aim is not to simply put the current architectures of SDN and
NFV together but to combine the key insights of both paradigms into unified
network architecture and show how SDN and NFV may cooperate inside such
architecture. This framework provides useful guidelines to synthesize research
from various aspects toward the common objective of integrating SDN and
NFV for supporting service provisioning in future networks.

4.5.3  SDNV-Based Service Delivery Platform


In this subsection, we present an end-to-end service delivery platform designed
by following the SDNV framework, as an illustrative use case of the guidelines
that the framework provides for network design.
End-to-end service delivery across autonomous network domains has
been a challenging networking problem, even in an SDN environment. An
SDNV-based service delivery platform (SDP) for SDN, as shown in Figure
4.22, offers a promising approach to addressing this challenging problem. Fol-
lowing the SDNV framework, the SDP comprises an infrastructure layer, a
virtualization layer, and a service layer. The infrastructure layer consists of mul-
tiple domains, each with an SDN controller that is separated from data plane
devices in the domain. The virtualization layer provides abstraction of both the
data plane devices and the SDN controller in each infrastructure domain, upon
which VNFs performing data and control plane functions are realized. These
VNFs can be encapsulated as SOA-compliant service components through the
VNF-as-a-service (VNFaaS) model. Each VNF service component implements
an abstract interface (e.g., a RESTful web service interface) to support loosely
coupled interaction with control and management modules on the service layer.

Figure 4.22  An SDNV-based platform for interdomain network service delivery [27].

The VNF of the SDN controller for each infrastructure domain registers all the
VNF service components in the domain to the VNF/VN management mod-
ule on the service layer. During the registration process, the domain controller
VNF publishes a description about the infrastructure services provided by its
domain at the VNF/VN management module.
The procedure for creating a virtual network through VNF orchestration
in the SDP is illustrated in Figure 4.23. Upon receiving a service request, the
service orchestration module works with the VNF/VN management module
to select and compose the appropriate VNFs to form a forwarding graph that
meets the service requirement. Then, the VNF/VN management module in-
stantiates a VN to realize this forwarding graph. The controller of this VN is
also realized through composition of a set of controller VNFs, each of which
virtualizes the controller in an infrastructure domain utilized by this VN. In this
way, the VN controller essentially orchestrates the VNFs hosted by SDN con-
trollers in heterogeneous domains to control end-to-end service delivery. With
such a service platform, the uniform abstraction provided by the virtualization
layer makes heterogeneous network domains transparent to service manage-
ment, which may greatly facilitate interdomain service delivery in SDN.
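The following Python sketch (hypothetical names and data structures, not part of the SDNV specification) walks through the registration and orchestration steps just described: domain controller VNFs register their VNF service components, and the orchestration logic composes them into a forwarding graph whose VN controller is assembled from the controller VNFs of the domains involved:

```python
# Hypothetical sketch of the registration/orchestration steps described above; the
# names and data structures are illustrative only.

class VnfVnManager:
    """Service-layer VNF/VN management module: keeps a catalog of registered VNFs."""

    def __init__(self):
        self.catalog = {}                       # vnf name -> (domain, description)

    def register(self, domain, vnf_name, description):
        self.catalog[vnf_name] = (domain, description)

    def select(self, required_functions):
        # Pick one registered VNF per required function (simplistic matching).
        return [(name, self.catalog[name][0]) for name in required_functions
                if name in self.catalog]


def orchestrate(manager, service_request):
    # Compose the selected VNFs into an ordered forwarding graph, then "instantiate"
    # a VN whose controller is assembled from the per-domain controller VNFs involved.
    chain = manager.select(service_request["functions"])
    return {"forwarding_graph": [name for name, _ in chain],
            "vn_controller": sorted({domain for _, domain in chain})}


mgr = VnfVnManager()
mgr.register("domain-A", "vFirewall", "stateful packet filtering")
mgr.register("domain-B", "vLoadBalancer", "HTTP load balancing")
print(orchestrate(mgr, {"functions": ["vFirewall", "vLoadBalancer"]}))
```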
Multiple virtual networks may be constructed upon this platform for
meeting the diverse service requirements of different end users. Each of the
virtual tenant networks has its own forwarding graph realized by a set of VNFs,
and all the VNFs in a virtual network are controlled by a single VNC. All the
virtual networks share the service orchestration module and VNF/VN man-
agement module in the SDP framework, which are responsible for creating,
instantiating, scaling, and terminating virtual networks.

Figure 4.23  Procedure of VN construction and control in the SDNV-based SDP.

With the rapid development of cloud services and emergence of federated
cloud computing, unification of network and cloud services is expected to play
a significant role in the future service environment. In such an environment,
the services requested by end users will be composite network-cloud services
that must be realized upon infrastructures across multiple network domains and
cloud data centers. The SDNV-based SDP also facilitates unification of net-
work and cloud service provisioning. The infrastructure layer of the SDP may
be extended to include compute and storage infrastructures located in mul-
tiple cloud data centers. Through the virtualization layer, compute and storage
infrastructures can be virtualized as virtual compute/storage resources, upon
which VSFs can be deployed and abstracted as service components. Therefore,
the virtualization layer offers a unified mechanism for abstracting and virtual-
izing compute, storage, and network infrastructure resources into VNF ser-
vice components. The orchestration and management modules on the service
layer of the SDP can compose the service components of both virtual networks
and virtual compute functionalities to offer end-to-end services, thus achieving
composite network-cloud service provisioning to the end users.

4.5.4  Challenges to SDN-NFV Integration and Opportunities for Future Research


In this subsection, we briefly discuss some technical challenges to integrating
SDN and NFV following the SDNV framework and identify some possible
topics for future research.
4.5.4.1  Virtualization of Network and Compute Infrastructure
The SDNV framework indicates that infrastructure virtualization on both data
and control/management planes plays a significant role in achieving the layer-
dimension abstraction for SDN-NFV integration. Infrastructure virtualization
is being extensively studied; however, current research on virtualization mainly
focuses on either data plane resources or control plane functions but lacks a
coordinated view of general virtualization on both planes. Virtualization on
the control/management plane should provide decoupling of the control/
management functions related to virtual networks for service provisioning from
the functions dedicated to infrastructure control and management. How to
coordinate virtualization of control/management functions with virtualization
of data plane infrastructure is a topic that needs more thorough study.
The SDNV framework also calls for unified abstraction of heterogeneous
infrastructures (e.g., network, compute, and storage) through a standard plat-
form for supporting composite services across networking and computing do-
mains. XML-based specification languages offer a promising approach to pro-
viding standard interfaces. However, whether such interfaces should be highly
descriptive or whether simpler RESTful interfaces would be more appropriate
should be further examined. In addition, infrastructure information must be aggregated
to provide a scalable global abstract view, while service layer control/manage-
ment relies on precise infrastructure information to create VNs for meeting
service requirements. Therefore, finding an appropriate degree of state aggrega-
tion that balances abstraction and precision of logical infrastructure view is also
a challenging issue that should be further investigated.
Another key aspect of the virtualization layer in the SDNV framework is
to instantiate VSFs and VNs on a shared infrastructure substrate through map-
ping virtual functions and networks to physical infrastructure resources. As we
discussed in Chapter 3, resource allocation for network virtualization (i.e., VN
embedding) is a challenging problem that has not been fully solved yet. Inte-
gration of SDN and NFV brings in some new requirements to VN embedding
that make the problem even more challenging. First, VNs that comprise the
virtual network and compute functions need to be embedded into heteroge-
neous infrastructures (e.g., networks as well as cloud data centers). This requires
federated control and management of network, compute, and storage resources
across autonomous infrastructure domains at the Internet scale, which is still an
open issue for future research. Also, current work on VN embedding mainly
focus on resource allocation on the data plane. SDN and NFV integration calls
for more study on distinction and coordination between embedding of data
plane objects and their respective control/management functions. In addition,
the coexistence of multiple VNCs for multitenant VNs requires effective mechanisms
to guarantee isolation between the control functions for different VNs embed-
ded in a shared common substrate. Dynamic elastic VN embedding for sup-
porting service scale-up or down and comigration of VNFs and VCFs are also
challenging issues that need more thorough study.

4.5.4.2  Control and Management of Virtual Service Functions and Virtual Networks
Integrating SDN with network virtualization leads to decoupling data and con-
trol/management planes on both infrastructure and service layers. This calls for
separated interfaces for controlling physical infrastructure resources and virtual
service functions, respectively. This type of interface on the infrastructure layer
is the physical SB interface in each infrastructure domain, which has been rela-
tively well studied in the context of SDN (e.g., OpenFlow and ForCES pro-
tocols). However, control/management interface on the service layer between
virtual networks and their controllers (i.e., the virtual SB interface) has received
little research attention and deserves more investigation in the future. Appropri-
ate models for abstracting virtual resources and service functions are required
by this interface. Also, such an interface should isolate the control/manage-
ment for different individual VNs to support multiple VNs with customized
protocols. In addition, elastic service provisioning requires flexible mechanisms
for scaling VN control capacity up or down and dynamically deploying and
migrating VN controllers. These are all open problems for future research.
Constructing VNs for meeting user requirements is a core function ex-
pected for future networking, which may be greatly facilitated by integration
of SDN and NFV following the SDNV framework. In this framework, the
control/management plane on the service layer selects and composes appro-
priate data plane VSFs to form VNs for meeting service requirements. Effec-
tive and efficient approaches to describing VSF attributes, publishing available
VSFs, and discovering/orchestrating the optimal set of VSFs for meeting service
requirements are all important topics that need thorough study. Service dis-
covery and composition in web and cloud service areas have been extensively
studied and may offer some useful techniques for VSF orchestration to con-
struct VNs. For example, centralized broker-based orchestration schemes and
distributed policy-based choreograph mechanisms are both possible approaches
to addressing this challenging problem. However, cloud service composition
research mainly focused on computing services instead of networking services;
therefore, further investigation on VSF composition in the SDNV context, es-
pecially composition of VNFs and VCFs across networking and computing
domains, offers an interesting topic for future research.
4.5.4.3  Quality of Service Guarantee in Software-Defined Virtual Networks
Integrated software-defined and virtualization-based networking brings in new
challenges to service quality assurance. How to implement software-based vir-
tual network functions to achieve a level of service quality comparable to what
hardware devices can provide is an important problem that has not been fully
solved. Although various approaches have been proposed for achieving high-
performance SDN and NFV, as we discussed in Chapter 2 and Chapter 3,
respectively, most of the existing technologies are developed without a holistic
vision of integrated SDN and NFV; therefore, how to apply these technologies
for QoS guarantee in software-defined virtual networks is still an open problem
that needs more thorough investigation.
Integration of SDN and NFV introduces more diverse functional roles,
such as infrastructure providers, VSF suppliers, VN operators, and composite
network-cloud service providers, in future networks. These functional roles in
the new service ecosystem might have conflicting interests and demands, but
must cooperate for meeting performance requirements of various services. The
trend toward network-cloud service unification particularly calls for new ap-
proaches to providing end-to-end QoS guarantee across networking and cloud
computing domains.
In addition, dynamic deployment of virtual network functions and servic-
es brings in new challenges to service performance evaluation. Traditional mod-
eling and analysis methods for evaluating network service performance (e.g.,
techniques developed based on queueing theory) often make certain assump-
tions about service implementations, such as the particular queuing mechanism
and service time distribution of the hosting platform. However, in software-
defined virtualization-based networking, decoupling between service functions
and the underlying infrastructure implies that virtual functions and services
may be deployed on various hosting platforms as they are instantiated. Also,
virtual functions may be migrated to different platforms during their lifespans.
Therefore, new modeling and performance analysis approaches must be devel-
oped to cope with the resource abstraction and dynamic service deployment in
future networks, which offers an important topic for future research.
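As a simple, hypothetical illustration of how sensitive such estimates are to these assumptions, the short Python sketch below compares the mean queueing delay predicted by the classical M/M/1 model (exponential service times) with that of the M/D/1 model (deterministic service times) for the same arrival and service rates; the numbers are illustrative only.

# Illustrative only: queueing-based estimates depend on the assumed
# service-time distribution of the hosting platform.
arrival_rate = 800.0      # packets per second
service_rate = 1000.0     # packets per second
rho = arrival_rate / service_rate                      # utilization

wq_mm1 = rho / (service_rate - arrival_rate)           # M/M/1 mean wait in queue
wq_md1 = rho / (2.0 * (service_rate - arrival_rate))   # M/D/1 mean wait in queue

print(f"utilization: {rho:.2f}")
print(f"M/M/1 mean queueing delay: {wq_mm1 * 1000:.2f} ms")
print(f"M/D/1 mean queueing delay: {wq_md1 * 1000:.2f} ms")

The same virtual function may behave closer to one model or the other depending on the platform it is instantiated on or migrated to, which is exactly the kind of deployment-time uncertainty that new analysis approaches need to capture.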

4.5.4.4  Energy-Aware Network Design


Building environmentally friendly network infrastructure by reducing energy
consumption is a very important aspect of future network design. Network
resource virtualization together with flexible SDN control and management
provides great potential to achieve energy-efficient networking; however, this advantage has yet to be fully exploited. A challenge to energy-aware SDN-NFV integration lies in the variety of intertwined network elements that must be considered, including both infrastructures and service functions on
both data and control/management planes. For example, VSF/VN embedding
in network and compute infrastructures should minimize energy consumption
while meeting service quality requirements. Energy-aware VSF composition
needs to achieve optimal balance among energy consumption, resource utiliza-
tion, and service performance. Therefore, applying the holistic view of SDN-
NFV integration provided by the SDNV framework to facilitate energy-aware
future network design will be a very interesting topic for future research.

4.6  Conclusion
In this chapter, we discussed the relationship between software-defined net-
working and network virtualization. Although SDN and NFV are two innova-
tive networking paradigms that were initially developed independently, they
share many common goals and follow some similar technical principles for
achieving such goals. Evolution of both paradigms has shown that SDN and
NFV are synergistic and complementary to each other. Therefore, integrating
SDN and NFV into a unified architecture for future networking to fully exploit
the advantages of both paradigms has formed an active research area that at-
tracts interest from both academia and industry.
Encouraging progress has been made toward combining SDN and NFV
in future networks. Both hypervisor- and container-based virtualization tech-
nologies have been employed to enable network virtualization in SDN, which
allows multitenant virtual networks to be constructed upon a shared SDN

infrastructure substrate. Network hypervisor and network orchestration have
been proposed to address the challenge of network virtualization in multi-
domain SDN. SDN has been applied in the NFV infrastructure to provide net-
work connectivity for supporting service function chaining and orchestration in
an NFV environment. Network systems that employ SDN-based control have
been developed to support NFV in various networking scenarios, including
radio access networks, mobile packet cores, and wireline residential networks.
The principle of SDN offers a general control and management approach that
is applicable to virtual network functions as well as physical infrastructure re-
sources. Therefore, SDN can be applied to the entire NFV architecture, includ-
ing both the infrastructure layer and the service tenant layer. Researchers have
also explored exploiting SDN data plane capabilities to support virtual network
functions, such as virtual routing function and access control function.
Research efforts have been made from various aspects toward the com-
mon objective of combining SDN and NFV in future networks. In order to
provide a holistic vision about the relationship between the key principles of
SDN and NFV and how they may be integrated in a unified network architecture, we present a two-dimensional abstraction model and the software-defined
network virtualization (SDNV) framework at the end of this chapter. The two-
dimensional model shows that key principles of SDN and NFV are both based
on abstraction but emphasize the plane and layer dimensions, respectively. The
SDNV framework integrates both the layer- and plane-dimension abstrac-
tions and provides useful guidelines for synthesizing the research efforts from
various aspects toward enabling unified software-defined virtualization-based
networking.
Research on integrating SDN and NFV is still at an early stage, and
many technical issues must be fully addressed before the vision of software-
defined network virtualization may be realized. Therefore, this field offers a
wide spectrum of open topics for future research and opportunities for technol-
ogy innovation.

References
[1] Casado, M., T. Koponen, S. Shenker, and A. Tootoonchian, “Fabric: A Retrospective
on Evolving SDN,” Proceedings of ACM 2012 Workshop on Hot Topics in Software-Defined
Networking (HotSDN’12), August 2012.
[2] Raghavan, B., T. Koponen, A. Ghodsi, M. Casado, S. Ratnasamy, et al., “Software-De-
fined Internet Architecture: Decoupling Architecture from Infrastructure,” Proceedings of
the 11th ACM Workshop on Hot Topics on Networks (Hotnets’12), October 2012, pp. 43–48.
[3] Feamster, N., J. Rexford, and E. Zegura, “The Road to SDN,” ACM Queue, Vol. 11, No.
12, Dec. 2013, pp. 1–12.

[4] Blenk, A., A. Basta, M. Reisslein, and W. Kellerer, “Survey on Network Virtualization Hy-
pervisors for Software Defined Networking,” IEEE Communications Surveys and Tutorials,
Vol. 18, No. 1, 2016, pp. 655–685.
[5] Sherwood, R., G. Gibb, K.-K. Yap, G. Appenzeller, M. Casado, et al., “FlowVisor: A
Network Virtualization Layer,” OpenFlow Switch Consortium Technical Report, 2009.
[6] Min, S., S. Kim, J. Lee, and B. Kim, “Implementation of an OpenFlow Network Virtual-
ization for Multi-Controller Environment,” Proceedings of the 2012 International Confer-
ence on Advanced Communication Technologies (ICACT2012), Feb. 2012.
[7] Salvadori, E., R. D. Corin, A. Broglio, and M. Gerola, “Generalizing Virtual Network
Topologies in OpenFlow-Based Networks,” Proceedings of the 2011 IEEE Global Com-
munication Conference (GLOBECOM2011), Dec. 2011.
[8] Drutskoy, D., E. Keller and J. Rexford, “Scalable Network Virtualization in Software-
Defined Networks,” IEEE Internet Computing Magazine, Vol. 17, No. 2, Feb. 2013, pp.
20–27.
[9] Huang, S., J. Griffioen, and K. L. Calvert, “Network Hypervisors: Enhancing SDN In-
frastructure,” Elsevier Computer Communications Journal, Vol. 46, No. 6, June 2014, pp.
87–96.
[10] Munoz, R., R. Vilalta, R. Casellas, R. Martinez, T. Szykowiec, et al., “Integrated SDN/
NFV Management and Orchestration Architecture for Dynamic Deployment of Virtual
SDN Control Interfaces for Virtual Tenant Networks,” Journal of Optical Communication
Networks, Vol. 7, No. 11, Nov. 2015, pp. B62–B70.
[11] Vilalta, R., R. Muñoz, R. Casellas, R. Martinez, F. Francois, et al., “Network Virtualization
Controller for Abstraction and Control of OpenFlow-Enabled Multi-Tenant Multi-
technology Transport Networks,” Proceedings of the 2015 Optical Fiber Communication
Conference (OFC2015), March 2015.
[12] Zhang, Y., N. Beheshti, L. Beliveau, G. Lefebvre, R. Manghirmalani, et al., “StEERING:
A Software-Defined Networking for Inline Service Chaining,” Proceedings of the 21st IEEE
International Conference on Network Protocols (ICNP2013), Oct. 2013.
[13] Blendi, J., J. Buckert, N. Leymann, G. Schyguda, and D. Hausheer, “Software-Defined
Network Service Chaining,” Proceedings of the 2014 Third European Workshop on Software-
Defined Networks (EWSDN2014), Sept. 2014, pp. 109–114.
[14] Gember-Jacobson, A., R. Viswanathan, C. Parkash, R. Grandl, J. Khalid, et al., “OpenNF:
Enabling Innovation in Network Function Control,” Proceedings of the 2014 Conference of
ACM Special Interest Group on Data Communication (SIGCOMM2014), August 2014.
[15] China Mobile, “C-RAN: A Road Toward Green Radio Access Network,” white paper,
2011.
[16] Liu, J., T. Zhao, S. Zhou, Y. Cheng, and Z. Niu, “CONCERT: A Cloud-Based Architecture
for Next Generation Cellular Systems,” IEEE Wireless Communications Magazine, Vol. 21,
No. 6, June 2014, pp. 14–22.
[17] Nguyen, V-G., T-X. Do, and Y. Kim, “SDN and Virtualization-Based LTE Mobile Network
Architecture: A Comprehensive Survey,” Springer Wireless Personal Communications
Journal, Vol. 86, No. 3, Feb. 2016, pp. 1401–1438.

[18] Sama, M. R., L. M. Contreras, J. Kaippallimalil, I. Akiyoshi, H. Qian, et al., “Software-
Defined Control of the Virtualized Mobile Packet Core,” IEEE Communications Magazine,
Vol. 53, No. 2, Feb. 2015, pp. 107–115.
[19] Xie, H., Y. Li, J. Wang, D. Lopez, T. Tsou, et al., “vRGW: Towards Network Function
Virtualization Enabled by Software Defined Networking,” Proceedings of the 21st IEEE
International Conference on Network Protocols (ICNP2013), Oct. 2013.
[20] Open Networking Foundation, “TR-518: Relationship of SDN and NFV,” Issue 1,
October 2015.
[21] ETSI NFV ISG, “NFV EVE-005: Report on SDN Usage in NFV Architectural Framework
v1.1.1,” Dec. 2015.
[22] Haleplidis, E., J. H. Salim, S. Denazis, and O. Koufopavlou, “Toward a Network
Abstraction Model for SDN,” Journal of Network and System Management, Vol. 23, No. 2,
Feb. 2015, pp. 309–327.
[23] Haleplidis, E., S. Denazis, O. Koufopavlou, D. Lopez, D. Joachimpillai, et al., “ForCES
Applicability to SDN-Enabled NFV,” Proceedings of the 2014 Third European Workshop on
Software-Defined Networks (EWSDN2014), Sept. 2014, pp. 43–48.
[24] Riera, J. R., E. Escalona, J. Batalle, J. A. Garcia-Espin, and S. Figuerola, “Management of
Virtual Infrastructures Through OpenNaaS,” Proceedings of the 2013 IEEE International
Conference on Smart Communications in Network Technologies, June 2013.
[25] Batalle, J., J. F. Riera, E. Escalona, and J. A. Garcia-Espin, “On the Implementation of
NFV over an OpenFlow Infrastructure: Routing Function Virtualization,” Proceedings of
the 2013 IEEE Conference on SDN for Future Networks and Services (SDN4FNS), Nov.
2013.
[26] Matias, J., J. Garay, N. Toledo, J. Unzilla, and E. Jacob, “Toward an SDN-Enabled NFV
Architecture,” IEEE Communications Magazine, Vol. 53, No. 4, April 2015, pp. 187–193.
[27] Duan, Q., N. Ansari, M. Toy, “Software-Defined Network Virtualization: An Architectural
Framework for Integrating SDN and NFV for Service Provisioning in Future Networks,”
IEEE Network Magazine, Vol. 30, No. 5, Sept. 2016, pp. 10–16.
[28] Open Networking Foundation (ONF), “SDN Architecture,” Issue 1, June 2014.
[29] ITU-T, “Y.3300 Framework of Software-Defined Networking,” June 2014.
[30] Internet Research Task Force (IRTF), “RFC7426 Software-Defined Networking (SDN):
Layers and Architecture Terminologies,” January 2015.

5
Virtualized Network Services
Mehmet Toy

5.1  Introduction
In recent years, types of user devices and applications for cloud services have
grown rapidly. High-speed personal devices such as phones, laptops and tab-
lets, and high-definition (HD) IP video and HD IPTV applications are driving
huge bandwidth demand in networks. Applications such as storage network-
ing, video streaming, collaborative computing, and online gaming and video
sharing are driving bandwidth demand in networks as well as resources of data
centers connected with these networks. The users prefer services that are on
demand, scalable, survivable, and secure with usage-based billing. The concepts
of cloud computing, cloud networking and cloud services are expected to help
service providers to meet these demands, quickly create the services and utilize
their resources effectively.
Cloud computing technologies are emerging as infrastructure services
for provisioning computing and storage resources on demand in a simple and
uniform way. Multiprovider and multidomain resources and integration with
the legacy services and infrastructures are involved. Current cloud technology
development is targeted to developing intercloud models, architectures, and in-
tegration tools that could allow integrating cloud-based infrastructure services
into existing enterprise and campus infrastructures. These developments also
provide a common/interoperable environment for moving existing infrastruc-
tures and infrastructure services to virtualized cloud environment.
Cloud-based virtualization allows for easy upgrade and migration of en-
terprise applications, including entire IT infrastructure segments, which brings significant cost savings compared to traditional infrastructure development and management, which require a substantial amount of manual work.
Cloud-based applications operate as regular applications, in particular using web services platforms for service and application integration; however, their composition and integration into a distributed cloud-based infrastructure require a number of functionalities and services that can be jointly defined as an intercloud architecture.
In 2010, NIST launched a cloud computing program to support the federal government effort to incorporate cloud computing as a replacement for traditional information system and application models where appropriate.
Cloud computing [1] is a model for enabling convenient, on-demand
network access to a shared pool of configurable computing resources (e.g., net-
works, servers, storage, applications, and services) that can be rapidly provi-
sioned and released with minimal management effort or service provider inter-
action. The characteristics of cloud computing are:

• On-demand provisioning of computing capabilities;


• Broad network access for various devices including smart phones, lap-
tops, tablets, and PDAs;
• Pooled computing resources to serve multiple consumers using a mult-
itenant model, with different physical and virtual resources dynamically
assigned and reassigned according to consumer demand;
• Rapid and elastic provisioning of capabilities;
• Automatic control and optimization of resource usage by leveraging a
metering capability at some level of abstraction appropriate to the type
of service such as storage, processing, bandwidth, and active user ac-
counts.

From an enterprise IT perspective, the overwhelming benefit of cloud computing is flexible on-demand access to IT resources without the usual pur-
chasing, deployment, and management overhead. With cloud computing, tera-
bytes of storage can be available instantly.
The network acts as the foundation for cloud computing. It needs to
become a virtual service supporting mobile users, innovative applications, and
protocols and needs to provide network visibility at a granular level.
Three service delivery models are defined:

• Cloud software as a service (SaaS) in which the capability provided to


the consumer is to use the provider’s applications running on a cloud
infrastructure. In this model, an entire business or set of IT applications

runs in the cloud. Enterprise consumers outsource the entire underlying


technology infrastructure to a SaaS provider and thus have no responsi-
bility or management oversight for SaaS-based IT components.
• Cloud platform as a service (PaaS) in which the capability provided
to the consumer is to deploy onto the cloud infrastructure consumer-
created or -acquired applications created using programming languages
and tools supported by the provider. The consumer does not manage or
control the underlying cloud infrastructure including network, servers,
operating systems, or storage but has control over the deployed applica-
tions and possibly application hosting environment configurations. PaaS
provides the capability to build or deploy applications on top of IaaS.
• Cloud infrastructure as a service (IaaS) in which the capability provided
to the consumer is to provision processing, storage, networks, and other
fundamental computing resources where the consumer is able to deploy
and run arbitrary software. The software can include operating systems
and applications.

Similar to the cloud services models, cloud computing can be deployed in


a number of ways. The IT industry has outlined four cloud computing deploy-
ment models that are described by NIST [20]:

• Private cloud: The cloud infrastructure is operated solely for an organi-


zation. It may be managed by the organization or a third party and may
exist on premise or off premise. The cloud infrastructure is operated
within a single organization. The resources and services provided by an
internal IT department or external cloud computing provider are con-
sumed by internal groups.
• Community cloud: The cloud infrastructure is shared by several organi-
zations and supports a specific community that has shared concerns such
as security requirements, policy, and compliance considerations. It may
be managed by the organizations or a third party and may exist on prem-
ise or off premise. A community cloud is a superset of a private cloud.
The cloud supports the needs of several or an extended community of
organizations. Again, community clouds can be built and operated by
members of the community or third party providers.
• Public cloud: The cloud infrastructure is made available to the general
public or a large industry group and is owned by an organization selling
cloud services. The cloud infrastructure and services are available to the
general public. Examples of public clouds include Amazon Elastic Com-

pute Cloud (EC2), Google App Engine, Microsoft Azure, or Terremark


Cloud Computing services.
• Hybrid cloud: The cloud infrastructure is a composition of two or more
clouds (private, community, or public) that remain unique entities but
are bound together by standardized or proprietary technology that en-
ables data and application portability such as load-balancing between
clouds.

Cloud computing architectures are being massively deployed in data centers since they offer great flexibility, scalability, and cost effectiveness. These advantages are important to financial firms on Wall Street, which have imple-
mented cloud computing to help them innovate and compete more success-
fully. Fortune 500 enterprises are also using cloud computing to quickly scale
application capacity in response to changing business conditions.
Smaller businesses are turning to cloud computing to accomplish more
work with limited budgets, using virtualization to squeeze out optimal effi-
ciency from their server and network investments.
In this chapter, novel cloud services architectures defined by Open Cloud Connect (OCC)1 [1] are described, consisting of actors for cloud services, standards interfaces between the actors, and standards connections and connection termination points associated with cloud users and applications. Network functions
virtualization (NFV) architecture of ETSI NFV is summarized. A mapping
between components of cloud services architectures and NFV architectures is
proposed.
Implementations of cloud services architectures with virtualized compo-
nents may be accomplished in various ways. An implementation approach pro-
viding substantial flexibility to cloud service providers is described.
In order to reduce the cost of network operations and network elements,
software-defined networking (SDN) and virtualization techniques have been
widely explored by network operators. Combining cloud, SDN, and virtualization techniques is necessary to achieve substantial optimization in networks
and services. In implementing a virtual network, it is not clear which functions
should be virtualized. A method for implementing cloud services architectures
with virtualized components is proposed.
Management of cloud services with virtualized and nonvirtualized components is challenging. A high-level management architecture for cloud services employing software-defined networking (SDN) controllers and NFV management entities is described.

1. OCC merged with MEF. Therefore, all cloud services specifications are under MEF.

5.2  Cloud Standards


Cloud standards are being defined by well-known standards organizations such
as ITU, IEEE, IETF, ETSI, NIST, and TM Forum, as well as by industry groups such
as DMTF, Cloud Security Council, and OCC. The organizations working on
cloud computing, infrastructure, and services are listed in Table 5.1.

5.3  Cloud Services Architectures


Cloud services architectures are defined by Open Cloud Connect [1]. The
cloud services for business applications are defined slightly differently from those
briefly described in Section 5.1. The commercial cloud services include not
only applications and their supporting infrastructure, but also networking path
end to end between the user and application.
The key actors of cloud services are depicted in Figure 5.1 [1], where a
cloud service provider (cSP) is responsible for providing an end-to-end cloud
service to a cloud consumer (i.e., cloud service user) using cloud carrier(s) and
cloud provider(s). The cSP may or may not own cloud carrier (cC) and cloud
provider (cP) facilities, but provides a single bill to the cloud service user.
A user can be an enterprise with multiple users sharing the same cSUI,
where CE may represent a gateway device. The CE could be a physical equip-
ment, a virtual machine (VM), a collection of VMs with a virtual switch, a con-
tainer, or a collection of containers with a virtual switch. Individual functional
elements in a CE may be either entirely in the user domain or may be entirely
in the cSP domain (and managed by the cSP).
A user interfaces to cSP facilities via a standards interface called the cloud
service user interface (cSUI) (Figure 5.2), which is a demarcation point between
the cSP and the cloud consumer.
From this interface, the consumer establishes a connection, cloud service
connection (cSC), with a cloud provider (cP) entity providing the application
(Figure 5.3), where the cP entity can be a virtual machine (VM) with cloud ser-
vice interface (cSI) or a physical resource such as storage with a cSUI. In addi-
tion, a cSC can be between two cloud provider entities (Figure 5.4) or between
two cloud consumers (Figure 5.5).
The cSI can represent an interface of a vNIC, an interface of a virtual
switch, an interface of a container, or an interface of a virtualized network func-
tion (VNF) (Figure 5.6), in addition to an interface of a VM.
When a cSC is between a cloud user and a cP physical or virtual resource,
the cSC is established between two cloud service connection termination points
(cSCTPs) residing at the user interface (i.e., cSUI) and the cP interface (i.e.,
cSUI or cSI).
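To make these relationships concrete, the following Python sketch gives a hypothetical data model (illustrative only; the class and field names are not defined by OCC/MEF) in which a cSC is a pairing of two cSCTPs, each residing at a cSUI or cSI:

from dataclasses import dataclass

# Hypothetical data model for illustration only; not an OCC/MEF-defined schema.
@dataclass
class Interface:
    kind: str          # e.g., "cSUI" or "cSI"
    owner: str         # e.g., "cloud consumer", "cP", "cSP"

@dataclass
class CSCTP:
    interface: Interface   # a cSCTP resides at a cSUI or cSI
    identifier: str

@dataclass
class CSC:
    a_end: CSCTP
    z_end: CSCTP

user_side = CSCTP(Interface("cSUI", "cloud consumer"), "csctp-user-1")
vm_side = CSCTP(Interface("cSI", "cP"), "csctp-vm-1")
csc = CSC(a_end=user_side, z_end=vm_side)
print(csc)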

Table 5.1
Standards Organizations for Cloud
ARTS: Association for Retail Technology Standards
ATIS: Alliance for Telecommunications Industry Standards
CADF: Cloud Auditing Data Federation Working Group
CCIF: Cloud Computing Interoperability Forum
CSA: Cloud Security Alliance
CSCC: Cloud Standards Customer Council
CSC: Cloud Standards Coordination
DMTF: Distributed Management Task Force
ETSI: European Telecommunications Standards Institute
GICTF: Global InterCloud Technology Forum
IEEE Intercloud Working Group (IEEE P2302)
IETF: Internet Engineering Task Force
ISO: International Organization for Standardization
ISO/IEC JTC 1/SC 38 Cloud Computing and Distributed Platforms
itSMF-IT Service Management Forum
ITU-T Focus Group on Cloud Computing (FG-Cloud)
ITU-T SG13: Future networks including cloud computing, mobile and next-generation networks
NIST: National Institute of Standards and Technology
Cloud Computing Target Business Use Cases Working Group
Cloud Computing Reference Architecture and Taxonomy Working Group
Cloud Computing Standards Roadmap Working Group
Cloud Computing SAJACC Working Group
Cloud Computing Security Working Group
Working Definition of Cloud Computing
Standards Acceleration to Jumpstart Adoption of Cloud Computing (SAJACC)
JCA: Joint Coordination Activity on Cloud Computing
ODCA: Open Data Center Alliance
OGF: Open Grid Forum
SNIA: Storage Network Industry Association
TC CLOUD
TM Forum: Telecommunications Management Forum
OASIS Organization for the Advancement of Structured Information Standards
OASIS Cloud-Specific or Extended Technical Committees (TC)
OASIS Cloud Application Management for Platforms (CAMP) TC
OASIS Identity in the Cloud (IDCloud) TC
OASIS Symptoms Automation Framework (SAF) TC
OASIS Topology and Orchestration Specification for Cloud Applications (TOSCA) TC
OASIS Cloud Authorization (CloudAuthZ) TC
OASIS Public Administration Cloud Requirements (PACR) TC
OCC: Open Cloud Connect
OCC: Open Cloud Consortium
OCCI Working Group: Open Cloud Computing Interface Working Group

OGF: Open Grid Forum
OMG: Object Management Group
OpenStack
Storage Networking Industry Association (SNIA)
Telemanagement (TM) Forum

Figure 5.1  Cloud service actors.

The cSP may own the cP and cloud carrier (cC) facilities (Figure 5.3).
When the cP and the cC are two independent entities belonging to two differ-
ent operators as depicted in Figure 5.3, the standards interface between them
is called cloud carrier cloud provider interface (cCcPI). In this case, a cSC for
cloud services can be terminated at either cCcPI or cSI.
It is also possible for two or more cSPs to be involved in providing a cloud
service to a cloud consumer as depicted in Figure 5.4, where two cSPs interface
to each other via a standards interface called cloud service provider cloud service
provider interface (cSPcSPI). In this scenario, only one of the cSPs needs to
interface to the end user, coordinate resources, and provide a bill. The cSP that
does not interface to the end user is called cloud service operator (cSO).
The cSPs may employ a gateway, called a cloud service gateway (cSGW), to connect to each other (Figure 5.7). The cSGW might provide connection multiplexing, among other features required by the cSPcSPI.
A cSP can be private or public. There could be cases in which private and public cSPs collectively provide a cloud service to a user, as depicted in Figure 5.8.

Figure 5.2  cSUI functionalities distributed between customer edge (CE) and cSP as cSUI-C and cSUI-P.

The cloud services architectures described here are the basis for interoperability among vendors and service providers for cloud services and applications. They are also expected to be the basis for a cloud service exchange gateway between well-known cloud service providers. Further details of the architecture can be
found in [1, 10].

5.3.1  Protocol Stacks and Interfaces


The previous section identified interfaces between user and cSP, between cSPs,
between cP and cC, and between network as a service (NaaS) and cloud service
application supporting entity. The protocol stack at each interface that can be
supported is depicted in Figure 5.9. Each of the protocol layers may be further decomposed into its data, control, and management plane components.
cSUI may support a protocol stack of only layer 1 or up to layer 7. cSPc-
SPI and cCcPI may support a protocol stack of layer 1 or up to layer 3. Given
TCP is used by routing protocols such as BGP, cCcPI and cSPcSPI might go up
to L4. On the other hand, cSI may support a protocol stack of only L2 or up
to layer 7 without layer 1.
The protocol stack for each interface is designed to support various net-
work interfaces and applications.
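As a hedged illustration of the layer ranges just described, the following Python mapping (an assumption-laden sketch, not a normative definition) records which protocol layers each external interface may support:

# Illustrative mapping of external interfaces to the protocol layers
# they may support, following the ranges described in the text.
supported_layers = {
    "cSUI":    range(1, 8),   # layer 1 up to layer 7
    "cSPcSPI": range(1, 5),   # layer 1 up to layer 3, possibly L4 (e.g., TCP for BGP)
    "cCcPI":   range(1, 5),
    "cSI":     range(2, 8),   # layer 2 up to layer 7, without layer 1
}

for interface, layers in supported_layers.items():
    print(f"{interface}: L{layers.start}-L{layers.stop - 1}")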

5.4  Cloud Services


So far we have described entities, the interfaces between them, and connections and their termination points for the transportation of cloud services. This section
describes cloud services and their possible attributes.
A cloud service may include just cloud resources or cloud and noncloud
resources. For example, a cloud service can include entities such as applications
based on cloud resources, but the access network to the cloud applications may
be based on noncloud resources. However, they form a cloud service end to end.

Figure 5.3  Virtual resources (i.e., VMs) and physical resources (i.e., computing and storage resources) that belong to one operator, providing cloud applications.

Figure 5.4  cSC between two cloud provider entities

Figure 5.5  Network connectivity cloud service provided by NaaS.

For example, in Figure 5.3, computing applications, computing resources,


and virtual networks depicted collectively can form a cloud computing service.

Figure 5.6  cSI is an interface to virtual switch, vNIC, or a VNF.

In this service, it is possible that just the computing applications together with
computing resources are based on cloud resources and everything else is not. A
user may use noncloud-based NaaS or cloud-based NaaS to access cloud com-
puting applications. The cSP coordinates all resources acting as the single point
of contact and provides a bill to the cloud user.
Software as a service (SaaS), platform as a service (PaaS), and infrastruc-
ture as a service (IaaS) are among the well-known cloud services in the industry.

Figure 5.7  Two cloud service providers collectively providing cloud services.

In fact they are the cloud applications provided by well-known cloud providers
such as Amazon.
SaaS is an application running on a cloud infrastructure where the con-
sumer does not manage or control the underlying cloud infrastructure includ-
ing network, servers, operating systems, storage, or even individual applica-
tion capabilities, with the possible exception of limited user-specific application
configuration settings. SaaS examples include Gmail from Google, Microsoft
“live” offerings, and salesforce.com.
PaaS is deploying onto the cloud infrastructure consumer-created or -ac-
quired applications created using programming languages and tools supported
by the provider. The consumer does not manage or control the underlying
cloud infrastructure including network, servers, operating systems, or storage
but has control over the deployed applications and possibly application hosting
environment configurations.
PaaS provides the capability to build or deploy applications on top of
IaaS. Typically, a cloud computing provider offers multiple application compo-
nents that align with specific development models and programming tools. For
the most part, PaaS offerings are built upon either a Microsoft-based stack (i.e.,
Windows, .NET, IIS, SQL Server, and so on) or an open source–based stack
(i.e., the “LAMP” stack containing Linux, Apache, MySQL, and PHP).
IaaS provides the capability to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary
software. The software can include operating systems and applications. The
consumer does not manage or control the underlying cloud infrastructure, but

Figure 5.8  Private and public cSPs.

Figure 5.9  Protocol stacks that can be supported at external interfaces.

has control over operating systems, storage, deployed applications, and possibly
limited control of selected networking components such as firewalls.
This is the most basic cloud application model, aligning the on-demand
resources of the cloud with tactical IT needs. IaaS is similar to managed services
offerings such as hosting services. The primary difference is that cloud resources
are virtual rather than physical and can be consumed on an as-needed basis.
Enterprise consumers pay for virtual machines (VMs), storage capacity, and
network bandwidth for a variable amount of time rather than servers, storage
arrays, and switches/routers on a contractual basis. IaaS prices are based upon
IaaS resource consumption and the duration of use.
Cloud computing services can be deployed in a number of ways depend-
ing upon factors like security requirements, IT skills, and network access, as described by NIST [20].
The OCC grouped services under network as a service (NaaS), IaaS, PaaS,
SaaS, communications as a service (CaaS), and security as a service (SECaaS)
for now. There is no hierarchy in these service offerings.
There is no consensus among various standards developing organizations
(SDOs) and cloud service providers regarding which application belongs to
which service category. For example:

• Server, desktop, database, and VLAN can be considered as part of IaaS;


• Development environment and test environment can be categorized as
PaaS;
• Business, consumer, network, and communication applications can be
categorized as SaaS;
• Virtual PBX, audio and video conferencing, and telepresence can be
categorized as CaaS.

The characteristics and parameters of the cloud resources can be:

• Type of resources (CPU, memory, hard disk space, bandwidth);


• Amount of resources;
• Nature of the resources (dedicated, shared);
• Timing of resources (scheduled or on demand);
• Duration of resources.

The cSP negotiates the contract and monitors its realization in real time.
The monitoring encompasses the service-level objective (SLO) contract definition, the SLO negotiation, the SLO monitoring, and the SLO enforcement. The contract may

include price reductions and discounts that are applied when a cSP fails to meet
the desired service parameters or does not fulfill an agreement. The resource usage may be tracked to align it with the billing rules agreed in the SLOs.
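As a hypothetical sketch of how such a contract might be represented for monitoring and billing (the field names and values below are illustrative assumptions, not defined by any standards body):

# Hypothetical SLO/resource descriptor used by a cSP for monitoring and billing.
slo_contract = {
    "resources": {
        "type": ["CPU", "memory", "bandwidth"],
        "amount": {"vcpus": 8, "memory_gb": 32, "bandwidth_mbps": 500},
        "nature": "shared",          # dedicated or shared
        "timing": "on-demand",       # scheduled or on demand
        "duration_hours": 720,
    },
    "objectives": {"availability_pct": 99.9, "max_latency_ms": 20},
    "billing": {"model": "pay-as-you-go", "discount_pct_on_violation": 10},
}

def assess(measured_availability_pct, contract):
    """Apply the agreed discount if the availability objective is missed."""
    target = contract["objectives"]["availability_pct"]
    if measured_availability_pct < target:
        return contract["billing"]["discount_pct_on_violation"]
    return 0

print(assess(99.7, slo_contract))   # -> 10 (discount applied)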
The cSP provides a set of security services and mechanisms (e.g., IP address
filtering, firewall, message integrity and confidentiality, private key encryption,
dynamic session key encryption, user authentication, and service certification)
to protect cloud services data and their operating environment from unauthor-
ized use, policy/operation violation, and intrusion. In addition, the cSP may
offer the following:

• Committed or pay-as-you-go billing options;


• Optional virtual machine management support;
• Self-provisioning of server images and storage resources;
• Multiple access methods for controlling user resources;
• Built-in security and redundancy;
• Virtualized infrastructure round-the-clock monitoring (24x7x365).

As depicted in Figure 5.10, it is possible to build cloud services in a hi-


erarchical fashion starting with NaaS where each builds on the previous and
provides services for the next in the hierarchy. The hierarchy from the bottom
to the top would be NaaS, PaaS, IaaS, SaaS, CaaS, and SECaaS.

Figure 5.10  Possible hierarchy for building cloud services.

5.4.1  NaaS
Network as a service (NaaS) delivers assured, dynamic connectivity services via
virtual or physical and virtual service endpoints orchestrated over multiple op-
erators’ networks. Such services will enable users, applications, and systems to
create, modify, suspend/resume, and terminate connectivity services through
standardized application programming interfaces (APIs). These services are as-
sured from both performance and security perspectives.
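A hedged sketch of what such a standardized API interaction could look like is given below; the endpoint URL, resource name, and payload fields are hypothetical assumptions for illustration, not an actual MEF/OCC-defined API.

import json
from urllib import request

# Hypothetical NaaS endpoint and payload; real APIs and field names will differ.
NAAS_URL = "https://naas.example.net/api/connectivity-services"

payload = {
    "name": "branch-to-dc-1",
    "endpoints": ["cSUI-branch-17", "cSI-vm-42"],   # virtual or physical service endpoints
    "bandwidth_mbps": 100,
    "qos_class": "assured",
    "action": "create",          # create | modify | suspend | resume | terminate
}

req = request.Request(
    NAAS_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# response = request.urlopen(req)   # would submit the request to the NaaS provider
print(json.dumps(payload, indent=2))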
NaaS is expected to support on-demand network configuration, secure and QoS-guaranteed connectivity, and compatibility with heterogeneous networks. It is the responsibility of the NaaS provider, the cSP, to maintain and manage the network resources. It is possible that the cSP may not own the NaaS resources but provides coordination. NaaS offers the network as a utility.
Possible NaaS services are:

• Load balancing among servers in the same location or over a geographi-


cal region consisting of multiple locations where servers are added and
removed in real time. Load balancers can be with or without fail-over
protection and automatic fallback. Dynamic load balancing automati-
cally distributes incoming application traffic across multiple cSP service
instances. As users can preallocate dynamic IP addresses, they can preal-
locate a dynamic load balancer so that its DNS name is already known,
which can simplify the execution of protection.
• Domain registration services such as registering or transferring a domain
name, full domain name system (DNS) control, geographically redun-
dant DNS, managed DNS.
• Hardware and software solutions to serve as routers, firewalls, VPN de-
vices, and load balancers.
• IPv4 and IPv6 capable dual stack.
• Network link upgrade.
• Public or private bandwidth.
• Security.

NaaS needs to be a highly available and scalable DNS web service. IP addresses can be dynamic, in the sense that they are static IP addresses designed for
dynamic cloud computing. Dynamic IP addresses enable users to mask instance
or zone failures by programmatically remapping user public IP addresses to
instances in a user account in a particular region. For disaster recovery (DR), a
user can also preallocate some IP addresses for the most critical systems, so that
their IP addresses are already known before disaster strikes.

NaaS can provide methods for users to provision cSP resources in a cloud
virtual network that the user defines. The users have complete control over their
virtual networking environments, including selection of user owned IP address
ranges, creation of subnets, and configuration of route tables and network gate-
ways. This would enable users to create a VPN connection between the users’
corporate datacenter and their cloud virtual network and leverage the cSP as an
extension of the corporate datacenter. In the context of DR, users can use this
virtual network to extend their existing network topology to the cloud.
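A hypothetical example of such a user-defined cloud virtual network description (the schema below is purely illustrative):

# Illustrative description of a consumer-defined cloud virtual network,
# including address range, subnets, routes, and a VPN link to the corporate site.
virtual_network = {
    "cidr": "10.20.0.0/16",                      # user-owned address range
    "subnets": [
        {"name": "web", "cidr": "10.20.1.0/24"},
        {"name": "db",  "cidr": "10.20.2.0/24"},
    ],
    "route_table": [
        {"destination": "0.0.0.0/0", "next_hop": "internet-gateway"},
        {"destination": "192.168.0.0/16", "next_hop": "vpn-gateway"},
    ],
    "vpn_connection": {"peer": "corporate-datacenter", "type": "ipsec"},
}

for subnet in virtual_network["subnets"]:
    print(f"subnet {subnet['name']}: {subnet['cidr']}")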

5.4.2  IaaS
The capability provided to the consumer [1] via IaaS is to provision process-
ing, storage, networks, and other fundamental computing resources where the
consumer is able to deploy and run arbitrary software, which can include oper-
ating systems and applications. The consumer does not manage or control the
underlying cloud infrastructure but has control over operating systems, storage,
deployed applications, and possibly limited control of select networking com-
ponents (e.g., host firewalls).
The IaaS cloud provider (cP) configures, deploys, and maintains computing, storage, and networking resources for the user. Also, the IaaS cP provides the ca-
pability for users to use and monitor computing, storage, and networking re-
sources so that they are able to deploy and run arbitrary software.
A customer portal could be provided to access the infrastructure. An API
is needed to reduce human intervention for system management and total cost
of operation.
5.4.2.1  Cloud Computing
Cloud computing means being able to provision computing and storage resources
on demand, specifically storage and virtual servers that IT can access on de-
mand. IT can create virtual datacenters from commodity servers, enabling IT to
stitch together memory, I/O, storage, and computational capacity as a virtual-
ized resource pool available over the network.
Servers are the key elements of cloud computing. They can be:

• Bare metal servers with single processor, dual processors, or quad proces-
sors;
• Mass storage servers storing large amounts of data in solid state disks,
hard disks, optical disks, or tapes;
• Virtual servers deployed on multitenant or single-tenant hosts as local
or SAN storage. Portable storage can be added. Payment could be by
the hour or month. Integration and migration between bare metal and

virtual can be performed. Users can customize their server configuration


of computing cores, RAM, and storage on host servers.

Hardware selection and upgrade are common features of cloud computing. Monthly RAM upgrades, local disk upgrades, drives such as SCSI and SATA hard drives and solid state drives (SSDs), hardware controllers, and redundant power supplies are among them.
The core of cloud computing services is flexible compute, storage, and
network capacity, which can be adjusted up or down based on user demand.
Within minutes, a user can create computing instances, which are virtual ma-
chines over which the user has complete control. In the context of DR, this
ability to rapidly create virtual machines that a user can control is critical.
Machine images (MIs) can be preconfigured with operating systems and
some application stacks. Users can also configure their own MIs. For disaster
recovery, users should have their MIs configured and identified so that the MIs
can be launched as part of the recovery procedure. Such MIs should be pre-
configured with the operating system of choice plus appropriate pieces of the
application stack.
Availability zones are distinct locations that are engineered to be insulated
from failures in other availability zones and provide inexpensive, low latency
network connectivity to other availability zones in the same region. By launch-
ing instances in separate availability zones, users can protect their applications
from the failure of a single location. Regions consist of one or more availability
zones.
The VM import feature enables users to import virtual machine images
from their existing environment to cloud provider instances.
Compute as a service may provide quick, secure access to virtual infrastructure,
servers, and storage without the costs, time, and installation requirements of
adding physical hardware. Unlimited computing capacity can be offered while
a user provides and manages the operating system, database, and application.
To manage the service, a user can choose either graphical user interface (GUI)
or application programming interfaces (APIs). The service may include portal
interface and API, built-in security features, and choice of operating system
templates such as Windows or Linux.
Each customer may be limited to a number of VMs (e.g., 100 VMs),
where VMs may be grouped into one or more virtual data centers (VDCs), each
with an individual firewall policy.
Once a user provisions computing resources, the user can scale infrastruc-
ture on demand by adding more resources where and when needed. When the
flood of activity is over, the user can reduce capacity using a web portal.
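As a hedged illustration of this elasticity, the following sketch shows threshold-based scaling logic; the thresholds and the helper function are hypothetical and would in practice be replaced by calls to the cSP's portal or API.

# Hypothetical autoscaling sketch: adjust the number of VM instances
# up or down based on a simple CPU utilization threshold.
def desired_instances(current, avg_cpu_pct, minimum=2, maximum=100):
    if avg_cpu_pct > 75 and current < maximum:
        return current + 1          # scale out during the flood of activity
    if avg_cpu_pct < 25 and current > minimum:
        return current - 1          # scale in when demand subsides
    return current

instances = 4
for cpu in (80, 85, 60, 20, 15):
    instances = desired_instances(instances, cpu)
    print(f"cpu={cpu}% -> {instances} instances")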

Video applications may have variable volume or demand additional pro-


visions for security and reliability. A user can go online and turn up server
capacity for its video generation software in minutes on demand.
5.4.2.2  Storage Services
Storage services can be any of the following:

• A simple storage service providing highly durable storage infrastructure


designed for mission-critical and primary data storage. Objects are re-
dundantly stored on multiple devices across multiple facilities within a
region.
• A dynamic block store service (DBS) [21] providing the ability to create
point-in-time snapshots of data volumes. Such snapshots can be used as
the starting point for new DBS volumes and to protect data for long-
term durability. Once a volume is created, it can then be attached to a
running service instance. DBS volumes provide off-instance storage that
persists independently from the life of an instance.
• An import/export service for moving of large amounts of data into and
out of a cP using portable storage devices for transport. The cP transfers
user data directly onto and off of storage devices by using NaaS. For data
sets of significant size, import/export could be often faster than Internet
transfer and more cost effective than upgrading connectivity. Users can
use import/export to migrate data into and out of buckets or into DBS
snapshots.
• A cP may employ a storage gateway enabling seamless migration of data
between cloud storage and on-premises applications. The storage gate-
way stores volume data locally in the user’s infrastructure and in cP. This
enables existing on-premises applications to seamlessly store data in the
cost-effective, secure, and durable storage infrastructure while preserving
low-latency access to this data.

The storage options can be:

• Memory to provide rapid access to data such as file caches, object cach-
es, in-memory databases, and RAM disks.
• Message queues to provide temporary durable storage for data sent asyn-
chronously between computer systems or application components.
• Storage area networks (SAN), which are block devices (virtual disk logi-
cal unit numbers) on dedicated SANs providing the highest level of disk

performance and durability for both business-critical file data and da-
tabase storage. They can be used like physical hard drives, typically by
formatting them with the file system of user choice and using the file
I/O interface provided by the instance operating system.
• Direct-attached storage (DAS), which are local hard disk drives or arrays residing in each server, providing higher performance but lower durability than a SAN for temporary and persistent files, database storage, and operating system (OS) boot storage.
• Network attached storage (NAS) providing a file-level interface to stor-
age that can be shared across multiple systems. NAS tends to be slower
than either SAN or DAS.
• Databases such as a traditional SQL relational database, a NoSQL non-
relational database, or a data warehouse where the underlying database
storage typically resides on SAN or DAS devices, or in some cases in
memory.
• Backup and archive for data retained for backup and archival purposes,
which are typically stored on nondisk media such as tapes or optical me-
dia, often stored offsite in remote, secure locations for disaster recovery.
There could be a limit on single archive and total amount of data in
gigabytes, terabytes, or petabytes.
• Durable2 reduced availability (DRA) storage buckets [21] can be intro-
duced to have lower costs and lower availability, but are designed to have
the same durability as simple storage buckets.

DRA storage is appropriate for applications that are particularly cost sen-
sitive or for which some unavailability is acceptable. For example:

• Data backup where high durability is critical, but the highest availability
is not required;
• Batch jobs to recover from unavailable data (e.g., by keeping track of
the last object that was processed and resuming from that point upon
restarting).

Cloud storage allows users to enable DRA at the bucket level. Users can
specify DRA storage at the time of bucket creation.

2. Durability measures the length of a product’s life. When the product can be repaired, estimat-
ing durability is more complicated. The item will be used until it is no longer economical to
operate it. This happens when the repair rate and the associated costs increase significantly.

If a user wants to move data from a simple storage bucket to a durable reduced availability storage bucket, the user needs to download the data from the simple
storage bucket to his/her computer and then upload it to the durable reduced
availability bucket.
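A hypothetical sketch of these two operations, creating a DRA bucket and then moving an object into it by downloading and re-uploading, is shown below; the client class and its methods are illustrative stand-ins, not a real provider SDK.

# Hypothetical sketch: create a DRA bucket and move an object from a
# standard (simple storage) bucket to it by downloading, then uploading.
class StorageClient:
    def __init__(self):
        self.buckets = {}

    def create_bucket(self, name, storage_class="STANDARD"):
        self.buckets[name] = {"class": storage_class, "objects": {}}

    def download(self, bucket, key):
        return self.buckets[bucket]["objects"][key]

    def upload(self, bucket, key, data):
        self.buckets[bucket]["objects"][key] = data

client = StorageClient()
client.create_bucket("backups-dra", storage_class="DURABLE_REDUCED_AVAILABILITY")
client.create_bucket("primary")
client.upload("primary", "report.csv", b"...data...")

# Move: download to the user's machine, then upload to the DRA bucket.
data = client.download("primary", "report.csv")
client.upload("backups-dra", "report.csv", data)
print(client.buckets["backups-dra"]["class"])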
A cP can provide a highly durable storage infrastructure designed for mis-
sion-critical and primary data storage where objects are redundantly stored on
multiple devices across multiple facilities within a region.
5.4.2.3  Databases
A database service can set up, operate, and scale a relational database (RDS) in the cloud. RDS can be used either in the preparation phase for di-
saster recovery to hold critical data in a running database already, and/or in the
recovery phase to run the production database.
A simple database can be a highly available, flexible, nonrelational data
store that offloads the work of database administration. It can also be used in
the preparation and the recovery phase of disaster recovery. Users can also install
and run their choice of database software on cP and can choose from a variety
of leading database systems.
Deployment automation, post-startup software installation/configuration
processes, and tools can be used in the cP domain. This can be helpful in the
recovery phase to create the required set of resources in an automated fashion.
Database cloud services are dedicated database instances with cP database software. Users may have full administrative access via secure shell (SSH), structured query language (SQL) developer, Datapump, SQL*Plus, and
other tools. The database could be a simple database with no SQL*Net access or
administrative control. The choice of storage can be from gigabytes to terabytes.
Network access using any type of network connectivity, including SQL*Net and other drivers to access user dedicated instances, is possible. RESTful web services can be used for data access. A software development environment such as Oracle Application Express (APEX) may run on an Oracle database.
The database cloud services may be categorized as basic, managed, or
premium:

• Basic:
• Preconfigured database software;
• Managed by customer;
• Full administrative access.
• Managed:
• Basic management by cP;
• Automated backup;
• Point-in-time recovery available;

• Administrative access.
• Premium managed:
• Managed offering previous services;
• Optional data guard or active data guard;
• Pluggable database utility services;
• Flexible upgrade options.

The basic service level is customer managed. Managed and premium


are managed by the cP providing full customer access. Resources are dynamic
such that the user can add or remove compute resources, memory, or storage
as needed.
Lifecycle management can also be provided by flexible control of data-
bases for production or test cloning, plus simple storage management on virtual
machine instances.
The security for database services may have its own unique set of security
rules.
5.4.2.4  Disaster Recovery (DR)
Disaster recovery is recovering from a failure that has a negative impact on
business continuity or finances. This could be hardware or software failure,
a network outage, a power outage, physical damage to a building like fire or
flooding, human error, or some other significant disaster.
Two parameters are important for DR services:

• Recovery time objective (RTO), which is the duration of time and the
service level to which a business process must be restored after a disaster
(or disruption) to avoid unacceptable consequences associated with a
break in business continuity.
• Recovery point objective (RPO), which describes the acceptable amount of data loss measured in time. For example, if the RPO were 1 hour and the disaster occurred at noon, the recovered system would contain all data up to at least 11:00 AM, so at most 1 hour of data would be lost.
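A small worked example of checking these two objectives against hypothetical timestamps (the values are illustrative only):

from datetime import datetime, timedelta

# Illustrative RPO/RTO check with hypothetical timestamps.
rpo = timedelta(hours=1)            # acceptable data loss window
rto = timedelta(hours=4)            # acceptable restoration time

disaster_time = datetime(2017, 5, 1, 12, 0)    # disaster strikes at noon
last_backup   = datetime(2017, 5, 1, 11, 15)   # most recent recovery point
restored_time = datetime(2017, 5, 1, 15, 30)   # service restored

data_loss = disaster_time - last_backup         # 45 minutes of data lost
downtime  = restored_time - disaster_time       # 3.5 hours of downtime

print("RPO met:", data_loss <= rpo)    # True: 45 min <= 1 hour
print("RTO met:", downtime <= rto)     # True: 3.5 h <= 4 h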

In the preparation phase of DR, data migration and durable storage need
to be considered. When reacting to a disaster, it is important to either quickly
commission compute resources to run the user system in the cloud provider
domain or to orchestrate the failover to already running resources in cloud
provider domain.
The cloud user can choose the most appropriate location for the selected
disaster recovery site, in addition to the site where the user system is fully de-

ployed. A cP may have multiple regions where the selected recovery site can be
chosen to be different.
Possible architectures for DR are given in Figures 5.11, 5.12, and 5.13. When a server in zone 1 fails:

• If the backup is 1:1 (i.e., active and standby configuration) and VM is
already available in zone 2, only application is moved to zone 2 from
zone 1.

Figure 5.11  Protection coordination via orchestrators of cP and cC.

Figure 5.12  Protection coordination via the cSP orchestrator.

Figure 5.13  Cloud application access protection via two different cPs.

• If the backup is 1+1 (i.e., active and active configuration) and the application in zone 2 is current, then traffic will be switched from the active connection cSC1 to the backup connection cSC2.
• If only a backup server is available in zone 2, the VM along with its applications can be moved to zone 2. In this case, TCP/IP is likely to be used in moving VMs. The distance (i.e., propagation time) between zone 1 and zone 2 and the rate of connectivity between the zones must be such that these factors will not be the dominant contributor to TCP timeouts. Therefore, VM moves between zones connected with high-speed transport are more likely.

In order to support the configurations in Figures 5.11, 5.12, and 5.13 in a timely manner, the cloud carrier and cloud provider orchestrators must be in coordination.
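The decision logic implied by the three cases above can be sketched as follows (a hypothetical orchestrator fragment, not a defined interface):

# Hypothetical orchestrator logic reflecting the three recovery cases above.
def recover(backup_mode, standby_vm_ready, standby_app_current):
    if backup_mode == "1:1" and standby_vm_ready:
        return "move the application to the standby VM in zone 2"
    if backup_mode == "1+1" and standby_app_current:
        return "switch traffic from active cSC1 to backup cSC2"
    return "move the VM and its applications to zone 2 over high-speed transport"

print(recover("1:1", standby_vm_ready=True, standby_app_current=False))
print(recover("1+1", standby_vm_ready=True, standby_app_current=True))
print(recover("none", standby_vm_ready=False, standby_app_current=False))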

5.4.3  SECaaS
Security services such as connectivity security, application security, or content
security, can be provided by a cSP to cloud consumers. Such services are referred to as security as a service (SECaaS).

With security as a service (SECaaS), a consumer does not manage or


control the underlying security transport negotiation, encryption, detection al-
gorithms, threat intelligence, or network inspection, but has control over the
selection of security solutions and scope with respect to their data and network.
SECaaS can be any of the following:

• Security of storage services with managed authorized access and custom-


ized data leakage prevention technologies;
• NaaS security provided through network traffic data inspection and fil-
tering, DDoS, and other intrusion attack vector protection;
• Threat intelligence where attack vectors are detected and propagated
through cSP for mitigation;
• Traffic cleaning, where consumer network traffic that would not nor-
mally utilize the cSP is routed expressly for SECaaS.

Security around data storage services must allow consumer fine control
of network access control list (ACL) for modification and accessibility of data
stored in cSP. Additional security is provided by audit tracking of data access
or modification, along with data leakage technologies applied to the network
access between cloud users and cSP.
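As a hedged illustration, an ACL of this kind together with audit tracking might be represented as follows (the structure and field names are assumptions for illustration only):

# Hypothetical ACL for consumer-controlled access to data stored in the cSP,
# with audit tracking of access and modification events.
acl = {
    "bucket": "customer-records",
    "rules": [
        {"principal": "user:alice", "actions": ["read", "write"]},
        {"principal": "group:auditors", "actions": ["read"]},
    ],
    "default": "deny",
}

audit_log = []

def authorize(principal, action):
    for rule in acl["rules"]:
        if rule["principal"] == principal and action in rule["actions"]:
            audit_log.append((principal, action, "allowed"))
            return True
    audit_log.append((principal, action, "denied"))
    return False

print(authorize("user:alice", "write"))       # True
print(authorize("group:auditors", "write"))   # False
print(audit_log)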
Network traffic over a cSC is subject to protection from attack and intrusion vectors. The cSP can provide traffic inspection and intrusion/attack blocking via a combination of traditional firewall/security appliances alongside virtual security solutions provided by NFV. Both content inspection and
packet inspection technologies should be utilized to provide high security.
The cSUI allows the consumer to tailor the security offerings for their
intended use of cSP services. For example, a SaaS provider with a CDN would
focus security on intrusion and attack vectors while an email service may focus
on antispam technologies.
The cSP may provide the service where security events and responses are
utilized to gather threat intelligence and react in a manner to protect the con-
sumer services. Should an attack or intrusion be detected, an automatic re-
sponse to isolate the attack vector or continue to provide the service through
alternate infrastructure can be taken.
SECaaS may provide network security functions through cSC set up for
delivery of security functions by the cSP, regardless of whether the consumer
traffic would normally access the cSP. Selection of routing or tunneling tech-
nologies to establish the cSC and security services is performed at cSUI.

5.4.4  PaaS
By platform as a service (PaaS) [1], the capability provided to the consumer is to
deploy onto the cloud infrastructure consumer-created or acquired applications
created using programming languages and tools supported by a cP. The con-
sumer does not manage or control the underlying cloud infrastructure including
network, servers, operating systems, or storage, but has control over the deployed
applications and possibly application hosting environment configurations.
PaaS can be a stand-alone development environment that does not in-
clude technical, licensing, or financial dependencies on specific SaaS applica-
tions or web services. These development environments are intended to provide
a generalized development environment.
PaaS can be application delivery-only environments that do not include
development, debugging, and test capabilities as part of the service, though they
may be supplied offline. The services provided generally focus on security and
on-demand scalability.
PaaS can be an open platform as a service that does not include hosting
as such; rather, it provides open source software to allow a PaaS provider to run
applications. For example, AppScale allows a user to deploy some applications
written for Google App Engine to their own servers, providing data-store ac-
cess from a standard SQL or NoSQL database. Similarly, mobile PaaS (mPaaS), a term coined by the Yankee Group, targets mobile users. Some open platforms let the
developer use any programming language, any database, any operating system,
any server, and so on to deploy their applications.
With PaaS, a scalable and high-performing network can be formed. As a
fully managed application platform for running and consolidating software ap-
plications and databases in the cloud, PaaS includes the following:

• A virtualized, scalable infrastructure of application and database servers;


• Performance, reliability, and security of the network;
• Network, server, and storage infrastructure management;
• 24x7x365 infrastructure monitoring and support;
• Built-in redundancy and security of data centers.

Since business changes are unpredictable, users need a way to quickly mod-
ify applications in response. A web-based platform as a service portal can help to:

• Access and manage user application environment from nearly anywhere;


• Quickly adapt forms and fields within the application template;
• View activity reports to identify improvement areas.


5.4.5  SaaS
The capability provided to the consumer via SaaS [1] is to use the cloud pro-
vider’s applications running on a cloud infrastructure. The applications are ac-
cessible from various client devices through a thin client interface such as a web
browser (e.g., web-based email). The consumer does not manage or control the
underlying cloud infrastructure including network, servers, operating systems,
storage, or even individual application capabilities, with the possible exception
of limited user-specific application configuration settings.
Software is installed on demand via a customer portal, and licensed and
billed monthly. Open-source and enterprise 32- and 64-bit operating system
software options from various vendors are available. Here are a few examples of
vendors and operating systems that could be installed:

• Microsoft;
• RedHat;
• CentOS;
• Debian;
• FreeBSD;
• Ubuntu;
• Vyatta Network;
• Cloud Linux;
• Parallels;
• cPanel;
• Server virtualization software such as VMWare ESX and ESXi, Citrix
Xenserver, Citrix CloudPlatform, Parallels Virtuozzo, Microsoft Hyper-
V;
• Security software such as McAfee Total Protection, McAfee Anti-Virus,
Microsoft Windows Firewall, McAfee Host Intrusion Protection, Nim-
soft Monitoring, APF Software Firewall;
• Database software such as Microsoft SQL Server (2000, 2005, 2008,
2012), MySQL, Cloudera Hadoop, MongoDB, Basho Riak;
• Control panel software such as cPanel/WHM with Fantastico, RVSkin
and Softaculous, Parallels Plesk Panel.


5.4.5.1  CDN
In cloud content delivery network (CDN) service, user content is distributed
to a network of edge servers. Users can access the content from a server near
them, ensuring faster load times. Large objects are delivered to many users with
sustained high data transfer rates. And if user traffic fluctuates, the service auto-
matically adjusts as demand increases or decreases.
User content can be placed onto cloud object storage and then CDN
enables the content. The user then visits a CDN site and requests files from
the nearest edge server. The edge server delivers a local, cached copy or pulls
one from cloud object storage. The object’s time-to-live (TTL) will expire at
intervals the user defines, such as 24 hours. If the TTL has expired when the
next request is made, the file is again retrieved from cloud object storage. The
content is cached once again by the edge servers and the TTL restarts.
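The edge-server behavior just described can be summarized in a short sketch; the TTL value, URLs, and origin-fetch helper below are illustrative assumptions rather than the interface of any specific CDN.

# Minimal sketch of the edge-cache behavior described above: serve a cached
# copy while its TTL is valid, otherwise re-fetch from cloud object storage.

import time

TTL_SECONDS = 24 * 3600          # user-defined time-to-live, e.g., 24 hours
cache = {}                        # url -> (content, fetch_time)

def fetch_from_object_storage(url):
    """Placeholder for a pull from the origin (cloud object storage)."""
    return f"<content of {url}>"

def serve(url):
    entry = cache.get(url)
    if entry is not None:
        content, fetched_at = entry
        if time.time() - fetched_at < TTL_SECONDS:
            return content                     # cache hit, TTL still valid
    # TTL expired or object not cached yet: pull from origin and restart TTL
    content = fetch_from_object_storage(url)
    cache[url] = (content, time.time())
    return content

print(serve("cdn.example.com/video/intro.mp4"))  # first request: pulled from origin
print(serve("cdn.example.com/video/intro.mp4"))  # second request: served from cache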

5.4.6  CaaS
Real-time services such as virtual PBX, voice and video conferencing systems,
collaboration systems, and call centers can be considered as communication as
a service (CaaS). CaaS capabilities can be:

• Business voice continuity, avoiding missing a call even when disaster strikes;
• Unlimited inbound, local, and domestic long distance;
• Fixed mobile convergence that removes the distinctions between fixed
and mobile networks, providing a superior experience to customers by
creating seamless services using a combination of fixed broadband and
local access wireless technologies to meet their needs in homes, offices,
other buildings, and on the go;
• Voicemail in user inbox or on user smartphone;
• Integrated business communications where a user makes calls from a
desk or mobile phone and the call appears as originating from the user’s
office number;
• Easy call management and feature editing through Microsoft Outlook,
Internet Explorer, or Firefox;
• Fully managed and hosted;
• Point-to-point or multipoint video calling;
• Point-to-point or multipoint voice calling;
• Point-to-point or multipoint voice and video conferencing;


• Mobile application support allowing free download for both iOS and
Android platforms;
• Professional voice recording service for user greetings and other mes-
sages recorded by an industry-leading voice talent;
• Bring your own device (BYOD) capabilities;
• SLAs including quality of service and availability such as next business
day replacement of phones for equipment maintenance of virtual PBX
service;
• Dynamic security policy including authentication, media encryption,
and access control;
• Scalability.

5.5  Virtualization and Cloud


Virtualization provides the opportunity for a flexible software design. Existing
networking services are supported by diverse network functions that are con-
nected in a static way. NFV enables additional dynamic schemes to create and
manage network functions.
A virtualized network function (VNF) represents an instance of a functional block responsible for a specific treatment of received packets. An end point represents an external interface of a VNF instance and is always associated with a VNF. Each VNF can be associated with a physical/virtual interface, a MAC or IP address, or a higher-layer application such as HTTP.
From an infrastructure perspective, the resources considered to realize a
service function are as follows:

• Compute resources such as virtual or physical machines and disk images;
• Compute flavors such as CPU, memory, and root disk;
• Block storage formed by additional disks, a network interface, and a network segment;
• A link between two ports from different compute instances, which has an associated link flavor with dedicated bandwidth, delay, and jitter, among the infrastructure resources to be considered for virtualization.

A VNF can be associated with multiple compute instances, while each compute instance has a single image and a single flavor, and can have multiple ports and block storages. The network QoS is represented by link and link flavor.
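A minimal sketch of this resource model, with names of our own choosing rather than the formal ETSI NFV information model, might look as follows: each compute instance carries exactly one image and one flavor and may expose several ports and block-storage volumes, while the network QoS lives in the link flavor.

# Illustrative data model for the infrastructure resources listed above.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Flavor:
    vcpus: int
    memory_mb: int
    root_disk_gb: int

@dataclass
class LinkFlavor:
    bandwidth_mbps: int
    delay_ms: float
    jitter_ms: float

@dataclass
class Port:
    name: str
    mac: str

@dataclass
class ComputeInstance:
    image: str                       # single disk image
    flavor: Flavor                   # single compute flavor
    ports: List[Port] = field(default_factory=list)
    block_storage_gb: List[int] = field(default_factory=list)

@dataclass
class Link:
    src_port: Port
    dst_port: Port
    flavor: LinkFlavor               # carries the network QoS

@dataclass
class VNF:
    name: str
    instances: List[ComputeInstance]  # a VNF may span several compute instances

fw = VNF("virtual-firewall",
         [ComputeInstance(image="fw-image-v1",
                          flavor=Flavor(vcpus=4, memory_mb=8192, root_disk_gb=40),
                          ports=[Port("eth0", "52:54:00:aa:bb:01"),
                                 Port("eth1", "52:54:00:aa:bb:02")],
                          block_storage_gb=[100])])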


VNFs can be categorized as active or passive. Active VNFs are part of the main course of a packet: they may drop packets or forward them, such as a firewall, or they can actually change packets, such as an IPSec VPN server. Passive VNFs are considered to be out of the main course of the chain; these functions mainly inspect packets (e.g., a monitoring system or a deep packet inspection). In practice, one can think of a passive VNF as a physical device connected to a hub through a single network interface configured in promiscuous mode; traffic is duplicated when it has to reach a passive function.
In short, passive functions can rely on packet characteristics as packets
are not modified, while active functions must be integrated at a service level
because ingress and egress packets can be different (e.g., VPN). If a VNF has
active functions that change packets, the classification may differ when passing
one of these functions.
VNF forwarding graph simplifies the service chain provisioning by quick-
ly and inexpensively creating, modifying, and removing service chains. On one
hand, we can compose several VNFs together to reduce management com-
plexity. On the other hand, we can decompose a VNF into smaller functional
blocks for reusability and faster response time. However, we note that the actual
carrier-grade deployment of VNF instances should be transparent to end-to-
end services.
NFV introduces separation of software from hardware and flexible de-
ployment of network functions. This separation enables the software to evolve
independent from the hardware and vice versa. NFV can automatically deploy
network function software on a pool of hardware resources that may run differ-
ent functions at different times in different data centers. Network operators can
scale the NFV performance dynamically and on a grow-as-you-need basis with
fine granularity control based on the current network conditions.
Two major enablers of NFV are industry-standard servers and technolo-
gies developed for cloud computing. A common feature of industry-standard
servers is that their high volume makes it easy to find interchangeable com-
ponents inside them at a competitive price, compared to network appliances
based on bespoke application-specific integrated circuits (ASICs). Using these
general-purpose servers can also reduce the number of different hardware ar-
chitectures in operators’ networks and prolong the life cycle of hardware when
technologies evolve (e.g., running different software versions on the same plat-
form). Recent developments of cloud computing, such as various hypervisors,
OpenStack, and Open vSwitch, also make NFV achievable in reality. For ex-
ample, the cloud management and orchestration schemes enable the automatic
instantiation and migration of VMs running specific network services.
The recent effort from the telecommunications industry has been centered
on the software virtualization and its management. However, it is challenging


to offer guaranteed network performance for virtual appliances. Wang and Ng [22] measured the end-to-end networking performance of the Amazon EC2
cloud service. They found that the sharing of processors may lead to very un-
stable TCP/UDP throughput, fluctuating between 0 and 1 Gbps at the tens of
milliseconds time granularity, and the delay variations among Amazon EC2 in-
stances can be 100 times larger than most propagation delays, which are smaller
than 0.2 ms, even when the network is not heavily loaded. The unstable net-
working characteristics caused by virtualization can obviously affect the perfor-
mance and deployment of virtual appliances.
It may be possible to leverage Linux NAPI and Intel’s DPDK to improve
the network performance of VNFs. NAPI is a modification of the packet pro-
cessing framework in Linux device drivers, aiming to improve the performance
of high-speed networking. NAPI disables some interrupts when the network traffic load is high and switches to polling the devices instead, thus avoiding frequent interrupts that all convey the same message: there are lots of packets to process. When the kernel is overwhelmed, the packets that cannot be handled
in time are simply dropped in the device queues (i.e., overwritten in the incom-
ing buffer).
Intel’s DPDK is another software-based acceleration for high-speed net-
working applications that also uses polling to avoid the overhead of interrupt
processing.
Recent work by Hwang et al. [17] extends the DPDK libraries to provide
low latency and high throughput networking in virtualized environments.
VNFs should be located where they will be used most effectively and
least expensively. For example, network functions offered by middle-boxes usu-
ally depend on the network topology, and these boxes are placed on the direct
path between two endpoints. When virtualizing these functions and moving
their software implementations into data centers, data traffic may go through
indirect paths, causing a potential delay of packets. Therefore, the placement of
VMs that carry VNFs is crucial to the performance of offered services. For these
services, it would be advantageous and efficient to run some network functions
at the edge of the network.
Placement problems usually involve optimization through linear programming, integer programming, or a mix of both, which works on a snapshot of the network and may take a long time to solve an instance. Given the dynamic nature of user traffic, designing online approximation algorithms for these optimization problems is challenging.
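As an illustration of what such a snapshot-based formulation looks like, the toy integer program below (sketched with the PuLP library, assumed to be available) places each VNF on exactly one node subject to CPU capacity while minimizing a placement cost; real formulations additionally capture latency, bandwidth, and path constraints.

# Toy integer-programming formulation of VNF placement using PuLP.

from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

vnfs = {"firewall": 4, "dpi": 8, "nat": 2}          # CPU demand per VNF
nodes = {"edge1": 8, "edge2": 8, "dc1": 32}         # CPU capacity per node
cost = {("firewall", "dc1"): 3, ("dpi", "dc1"): 1, ("nat", "dc1"): 3}
cost.update({(v, n): 2 for v in vnfs for n in nodes if (v, n) not in cost})

x = {(v, n): LpVariable(f"x_{v}_{n}", cat=LpBinary) for v in vnfs for n in nodes}

prob = LpProblem("vnf_placement", LpMinimize)
prob += lpSum(cost[v, n] * x[v, n] for v in vnfs for n in nodes)   # objective

for v in vnfs:                       # each VNF is placed on exactly one node
    prob += lpSum(x[v, n] for n in nodes) == 1
for n, cap in nodes.items():         # node CPU capacity constraint
    prob += lpSum(vnfs[v] * x[v, n] for v in vnfs) <= cap

prob.solve()
placement = {v: n for v in vnfs for n in nodes if x[v, n].value() == 1}
print(placement)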
Network infrastructure will become more fluid when deploying VNFs. To
consolidate VNFs running in VMs based on traffic demand, network operators
need to instantiate and migrate virtual appliances dynamically and efficiently.
The native solution of running VNFs in Linux or other commodity OS VMs
has a slow instantiation time (several seconds) and a relatively large memory


footprint. The carrier-grade deployment of VNFs requires a lightweight VM implementation. For instance, Martins et al. [23] recently proposed ClickOS,
a tiny Xen-based VM to facilitate NFV. ClickOS can be instantiated within
around 30 ms and requires about 5 MB memory when running. However, op-
timizing the performance of this type of lightweight simplified VM, especially
during wide-area migration, is still an open research issue.
For example, when we employ virtual routers, it is challenging to keep
the packet forwarding uninterrupted and minimize the migration disruptions
while at the same time guarantee the stringent throughput and latency require-
ments and other service level agreements.
Furthermore, when deploying network functions in software at different
locations, troubleshooting and fault isolation become harder.
A set of associated VNFs may define an ordered sequence of functions (path). This is called service function chaining (SFC). For the identification of appropriate actions, such as packet forwarding with a specific IP or MAC address, packets are classified based on a policy. Classification can occur at each VNF of the SFC independent from the previous VNFs. In such cases, multiple classification policy entries should be allowed in an SFC system.
Classification can take place only at the initial redirection points to an
SFC, if upon this classification packets are tagged. After that, packets can be
steered to the SFC and routed along it according to the embedded tags.
Classification can be performed not only at the redirection points but also
at each hop of the SFC. In this case, packets are not tagged and are subject to
classification and steering at each SFC hop.
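The tag-at-ingress variant can be sketched as follows; the policy fields, tags, and chain names are illustrative only.

# Simple sketch of the tag-at-ingress classification model described above:
# a packet is classified once against a policy table, a chain tag is embedded,
# and subsequent hops steer purely on that tag.

SFC_POLICIES = [
    # (match function, chain tag)
    (lambda pkt: pkt.get("dst_port") == 80,  "web-chain"),     # firewall -> DPI -> LB
    (lambda pkt: pkt.get("proto") == "sip",  "voice-chain"),   # SBC -> firewall
]

CHAINS = {
    "web-chain":   ["firewall", "dpi", "load-balancer"],
    "voice-chain": ["sbc", "firewall"],
}

def classify_and_tag(pkt):
    """Initial redirection point: first matching policy wins."""
    for match, tag in SFC_POLICIES:
        if match(pkt):
            pkt["sfc_tag"] = tag
            break
    return pkt

def steer(pkt):
    """Later hops forward along the chain identified by the embedded tag."""
    return CHAINS.get(pkt.get("sfc_tag"), [])

pkt = classify_and_tag({"dst_port": 80, "proto": "tcp"})
print(steer(pkt))    # ['firewall', 'dpi', 'load-balancer']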

5.6  Virtualized Cloud Services Architectures


SDN, cloud, and virtualization are three complementary concepts. SDN is a
networking technology that decouples the control plane from the underlying
data plane and consolidates the control functions into a logically centralized
controller. SDN can support NFV to enhance its performance, facilitate its
operation, and simplify the compatibility with legacy deployments. However,
the virtualization and deployment of network functions do not rely on SDN
technologies, and vice versa; NFV on its own already allows scaling offered services up and down rapidly as required.
Given the benefits of virtualization, how can we implement cloud archi-
tectures using virtualization constructs? In this section, we will try to answer
this question.
ETSI NFV [2, 3] divides the network into two layers, network hardware (or
infrastructure) and virtual network (or virtual network function) as depicted in
Figure 5.14. (Vn-Nf )/VN interface is identified as the virtual interface for the


Figure 5.14  Network layering and interface of NFV.

network. E-line and E-LAN services of Metro Ethernet Forum (MEF) are being
considered as examples of (Vn-Nf )/VN.
NFV also identifies a VM interface [2] as (Vn-Nf )/VM or Vn-Nf-VM
(Figure 5.15), which is considered an equivalent of cSI.
NFV identifies the SWA-1 interface [5], as depicted in Figure 5.18, to enable
communications between various network functions within the same or differ-
ent network service. They may represent data and/or control plane interfaces
of the network functions. SWA-1 is considered an equivalent of virtual compo-
nent of MEF user network interface (UNI). We consider SWA-1 an equivalent
of cSI.
NFV also identifies SWA-5 interfaces, which are an abstraction of all subin-
terfaces between the NFV infrastructure (NFVI) and the VNF, including VNF
interswitch connectivity services such as E-LAN and E-line [5], as depicted in
Figure 5.19.
NFV divides functional blocks into host functional block (HFB) and virtu-
alization functional block (VFB) [8, 9] as depicted in Figure 5.20. The interface
between HFB and VFB is called container interface, which is the virtual interface
between two containers. This interface can be also considered an equivalent of cSI.
The mapping between OCC and ETSI NFV architectural constructs is given in Figure 5.21, Figure 5.22, Figure 5.23, and Table 5.2 [7]. Cloud user and bare metal server interfaces to NaaS are depicted in Figures 5.21 and 5.22 using NFV constructs.

Figure 5.15  VM interface.

NFV identifies an interface to hardware [4] as Vi-Ha and an interface to the bare metal operating system (OS) as depicted in Figures 5.16 and 5.17. This interface can be a subset of cSUI or cCcPI or cSPcSPI as described in Section 5.3.


Figure 5.16  Bare metal server interface [2].

Figure 5.17  Bare metal server interface [4].

Figure 5.18  SWA-1 interface.

5.7  Basic NFV Components of Cloud Services Architecture


In order to employ the virtualization techniques in building cloud services’ ar-
chitectural components, we need to identify the functions/attributes of each
architectural construct that can be built as a VNF.


Figure 5.19  SWA-5 interface.

Figure 5.20  Container interface.

Figure 5.21  Cloud user interface.

NFV architectures do not define the necessary interfaces between a network and its user, between service providers, or between a cloud provider and cloud
carrier. Furthermore, they do not have connection and connection termination
concepts. However, it is possible to divide attributes of these cloud services
components into virtual and infrastructure categories. Clearly this categorization
depends on various implementation factors. Here, we describe VNF and infra-
structure components of a cSC and its associated constructs.


Figure 5.22  Bare metal server interface NaaS architecture with NFV constructs and NaaS.

Figure 5.23  Cloud user access to VM over NaaS.

VNF and infrastructure components of a point-to-point cSC and its cSCTPs in support of cloud services are depicted in Figure 5.24 and Figure
5.25. cSUI and cSCTP have virtual and infrastructure components, while cSC
and cSI have only virtual components. Reference [7] provides a detailed break-
down of the virtualized and infrastructure components.

5.8  Virtualized Carrier Ethernet Services


Similar to cloud services, carrier Ethernet services can be virtualized. External
interfaces (i.e., UNI and ENNI) and connection termination points (i.e., EVC


Table 5.2
Mapping Between OCC and NFV Constructs

Architectural Construct | NFV Construct | OCC Construct
User interface | (Vi-Ha) + (Vn-Nf)/VN | cSUI
VM interface | (Vn-Nf)/VM | cSI
Container interface | Container interface | cSI
SWA-1 | Software architecture-1 | cSI
Cloud carrier-cloud provider interface | — | cCcPI
Cloud service provider-cloud service provider interface | — | cSPcSPI
Connection between users, between a user and VM, or between VMs | VNF forwarding graph | cSC
Connection termination point | — | cSCTP

Figure 5.24  VNFs and infrastructure for cSC and cSCTP when cSC is between two cSUIs.

at UNI or EVC termination point, OVC termination point) can be divided into virtualized and infrastructure components.
Figure 5.26 depicts virtualized and infrastructure components of an Eth-
ernet private line (EPL) service provided by one operator.
Figure 5.27 depicts virtualized and infrastructure components of an EPL
crossing multiple operators.

5.8.1  Components of Virtualized Carrier Ethernet Services


In the previous section, we divided the components of carrier Ethernet services into virtualized and infrastructure components. How do we determine what func-
tion is virtual and what function is infrastructure? Clearly this categorization


Figure 5.25  VNF and infrastructure components of cSC when cSC is between cSUI and cSI.

Figure 5.26  Virtualized components of an EVC between two UNIs.

depends on the implementation. In Table 5.3, we have categorized UNI attributes as follows:

1. VNFUNI-Prov is a virtual function consisting of attributes that can be configured and supported by software.


Figure 5.27  Virtualized components of an EPL crossing ENNI.

2. INFUNI-Prov is an infrastructure function consisting of attributes that can be configured and supported by physical hardware.
3. VNFUNI-prot is a virtual function consisting of UNI protection attributes that can be configured and supported by software.
4. INFUNI-prot is an infrastructure function consisting of UNI protection attributes that can be configured and supported by hardware.
5. VNFUNI-loam is a virtual function consisting of UNI link OAM attributes that can be configured and supported by software.
6. INFUNI-loam is an infrastructure function consisting of UNI link OAM attributes that can be configured and supported by hardware.
7. VNFUNI-soam is a virtual function consisting of UNI service OAM attributes that can be configured and supported by software.
8. INFUNI-soam is an infrastructure function consisting of UNI service OAM attributes that can be configured and supported by hardware.
9. VNFUNI-sync is a virtual function consisting of UNI synchronization attributes that can be configured and supported by software.
10. INFUNI-sync is an infrastructure function consisting of UNI synchronization attributes that can be configured and supported by hardware.
11. VNFUNI-tsh is a virtual function consisting of UNI token sharing attributes that can be configured and supported by software.
12. INFUNI-tsh is an infrastructure function consisting of UNI token sharing attributes that can be configured and supported by hardware.


13. VNFUNI-env is a virtual function consisting of UNI envelope attributes for multiple bandwidth profile flows that can be configured and supported by software.
14. INFUNI-env is an infrastructure function consisting of UNI envelope attributes for multiple bandwidth profile flows that can be configured and supported by hardware.

From Table 5.3, we can identify basic and enhanced virtual capabilities of UNI, and infrastructure capabilities of UNI. Possible implementations of UNI are shown in Table 5.4.

Table 5.3
UNI Service Attributes and Parameter Values for All Service Types

UNI Service Attribute [13] | Component of VNF or Infrastructure or Both | Categories of VNF and Infrastructure
UNI ID | VNF | VNFUNI-Prov
Physical layer | Infrastructure | INFUNI-Prov
Synchronous mode | Both | INFUNI-sync + VNFUNI-sync
Number of links | Both | INFUNI-prot + VNFUNI-prot
UNI resiliency | Both | INFUNI-prot + VNFUNI-prot
Service frame format | Infrastructure | INFUNI-Prov
UNI maximum service frame size | Both | INFUNI-Prov + VNFUNI-Prov
Service multiplexing | Both | INFUNI-Prov + VNFUNI-Prov
CE-VLAN ID for untagged and priority tagged service frames | VNF | VNFUNI-Prov
CE-VLAN ID/EVC map | VNF | VNFUNI-Prov
Maximum number of EVCs | Both | INFUNI-Prov + VNFUNI-Prov
Bundling | VNF | VNFUNI-Prov
All to one bundling | VNF | VNFUNI-Prov
Token share | Both | INFUNI-tsh + VNFUNI-tsh
Envelopes | VNF | VNFUNI-env + INFUNI-env
Ingress bandwidth profile per UNI | VNF | VNFUNI-Prov + INFUNI-Prov
Egress bandwidth profile per UNI | VNF | VNFUNI-Prov + INFUNI-Prov
Link OAM | Both | VNFUNI-loam + INFUNI-loam
UNI MEG | Both | VNFUNI-soam + INFUNI-soam
E-LMI | Both | VNFUNI-elmi + INFUNI-elmi
UNI L2CP address set | VNF | VNFUNI-Prov
UNI L2CP peering | VNF | VNFUNI-Prov
Test probes for ITU Y.1564 testing, RFC 6349 TCP testing | Both | INFUNI-test + VNFUNI-test


Table 5.4
UNI Configurations

UNI Functionalities | VNF and INF Components Required (i.e., SFC Components)
Basic UNI provisioning | VNFUNI-Prov + INFUNI-Prov
Basic UNI provisioning + link OAM | VNFUNI-Prov + INFUNI-Prov + VNFUNI-loam + INFUNI-loam
Basic UNI provisioning + link protection | VNFUNI-Prov + INFUNI-Prov + VNFUNI-prot + INFUNI-prot
Basic UNI provisioning + token sharing | VNFUNI-Prov + INFUNI-Prov + VNFUNI-tsh + INFUNI-tsh
Basic UNI provisioning + envelopes | VNFUNI-Prov + INFUNI-Prov + VNFUNI-env + INFUNI-env
Basic UNI provisioning + service OAM | VNFUNI-Prov + INFUNI-Prov + VNFUNI-soam + INFUNI-soam
Basic UNI provisioning + ELMI | VNFUNI-Prov + INFUNI-Prov + VNFUNI-elmi + INFUNI-elmi
Basic UNI provisioning + service OAM + link OAM | VNFUNI-Prov + INFUNI-Prov + VNFUNI-soam + INFUNI-soam + VNFUNI-loam + INFUNI-loam
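The configurations in Table 5.4 lend themselves to simple programmatic composition. The sketch below, which uses flattened names such as VNF-UNI-Prov in place of the subscripted notation, assembles the SFC component list for a requested set of UNI features; it is an illustration, not an implementation of any MEF-defined object model.

# Illustrative composition of the UNI configurations in Table 5.4 from their
# VNF and INF building blocks.

BASIC_UNI = ["VNF-UNI-Prov", "INF-UNI-Prov"]

OPTIONAL_UNI_FEATURES = {
    "link-oam":        ["VNF-UNI-loam", "INF-UNI-loam"],
    "link-protection": ["VNF-UNI-prot", "INF-UNI-prot"],
    "token-sharing":   ["VNF-UNI-tsh", "INF-UNI-tsh"],
    "envelopes":       ["VNF-UNI-env", "INF-UNI-env"],
    "service-oam":     ["VNF-UNI-soam", "INF-UNI-soam"],
    "elmi":            ["VNF-UNI-elmi", "INF-UNI-elmi"],
}

def uni_service_chain(features):
    """Return the ordered SFC component list for a UNI with the given features."""
    chain = list(BASIC_UNI)
    for f in features:
        chain.extend(OPTIONAL_UNI_FEATURES[f])
    return chain

# "Basic UNI provisioning + service OAM + link OAM" from Table 5.4:
print(uni_service_chain(["service-oam", "link-oam"]))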

In Table 5.5, we have categorized the attributes of “EVC per UNI” as follows:

1. VNFEVC-Prov is a virtual function consisting of attributes that can be configured and supported by software.
2. INFEVC-Prov is an infrastructure function consisting of attributes that can be configured and supported by physical hardware.
3. VNFEVC-soam is a virtual function consisting of UNI service OAM attributes that can be configured and supported by software.
4. VNFEVC-eqc is a virtual function consisting of EVC equivalence attributes for multiple bandwidth profile flows that can be configured and supported by software.
5. INFUNI-env is an infrastructure function consisting of UNI envelope attributes for multiple bandwidth profile flows that can be configured and supported by hardware.

In Table 5.6, we have mapped additional EVC attributes to the VNF and INF categories of an EVC.
From the previous tables, we can identify virtual capabilities and infrastructure capabilities of EVC. Possible implementations of EVC are shown in Table 5.7.

5.8.2  Service Chaining for EPL


Let’s assume that a user requests an EPL service from product catalog (i.e., EPL
from E-line category as depicted in Figure 5.28) between UNI1 and UNI2.


Table 5.5
VNFs and Infrastructure Components of EVC per UNI Service Attributes and Parameter Values for All Service Types

EVC per UNI Service Attribute [13] | Categories of VNF and Infrastructure
UNI EVC ID | VNFEVC-Prov
Class of service identifier for data service frame | VNFEVC-Prov
Class of service identifier for L2CP service frame | VNFEVC-Prov
Class of service identifier for SOAM service frame | VNFEVC-soam
Color identifier for service frame | VNFEVC-Prov
Egress equivalence class identifier for data service frames | VNFEVC-eqc
Egress equivalence class identifier for L2CP service frames | VNFEVC-eqc
Egress equivalence class identifier for SOAM service frames | VNFEVC-soam
Ingress bandwidth profile per EVC | VNFEVC-Prov + INFEVC-Prov
Egress bandwidth profile per EVC | VNFEVC-Prov + INFEVC-Prov
Ingress bandwidth profile per class of service identifier | VNFEVC-Prov
Egress bandwidth profile per egress equivalence class | VNFEVC-eqc
Source MAC address limit | VNFEVC-Prov
Test MEG | VNFEVC-soam + INFEVC-soam
Subscriber MEG MIP | VNFEVC-soam + INFEVC-soam

Table 5.6
Categorization of EVC Attributes as VNF and Infrastructure Components

EVC Service Attribute [13] | Categories of VNF and Infrastructure
EVC type | VNFEVC-Prov
EVC ID | VNFEVC-Prov
UNI list | VNFEVC-Prov
Maximum number of UNIs | VNFEVC-Prov + INFEVC-Prov
Unicast service frame delivery | VNFEVC-Prov
Multicast service frame delivery | VNFEVC-Prov
Broadcast service frame delivery | VNFEVC-Prov
CE-VLAN ID preservation | VNFEVC-Prov
CE-VLAN CoS preservation | VNFEVC-Prov
EVC performance | VNFEVC-Prov + INFEVC-Prov
EVC maximum service frame size | VNFEVC-Prov


Table 5.7
VNFs and Infrastructure Components of EVC

EVC Functionalities | VNF and INF Components Required (i.e., SFC Components)
Basic EVC provisioning | VNFEVC-Prov + INFEVC-Prov
Basic EVC provisioning + service OAM | VNFEVC-Prov + INFEVC-Prov + VNFEVC-soam + INFEVC-soam
Basic EVC provisioning + equivalence class support | VNFEVC-Prov + INFEVC-Prov + VNFEVC-eqc

The main orchestrator needs to talk to the controller associated with the UNIs and to the NFV orchestrator associated with both UNIs and the EVC. Let’s assume that both UNIs belong to one vendor and are in the same subnetwork (domain); therefore, the same controller can configure both UNIs. Per request from the main orchestrator, the controller configures INFUNI-Prov for both UNIs. The VNFs, VNFUNI-Prov for both UNIs, can be configured independently from INFUNI-Prov. For an EPL provisioning, the flows (i.e., service chains) are depicted in Figures 5.28, 5.29, and 5.30.
The provisioning components in Figures 5.28, 5.29, and 5.30 are quite different from the objects defined in MEF [15]. We believe that what we define here constitutes a layer below a provisioning layer formed of the service layer and resource layer objects defined by MEF.
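The flow described above can be outlined in code. In the hedged sketch below, the controller and orchestrator classes are stand-ins for whatever domain controller and NFV orchestrator are actually deployed, and the component names mirror the tables in this section rather than any vendor API.

# Hedged sketch of the EPL provisioning flow described above.

class SdnController:
    """Configures infrastructure (INF) components in its subnetwork/domain."""
    def configure_inf(self, endpoint, component):
        print(f"[controller] configure {component} on {endpoint}")

class NfvOrchestrator:
    """Instantiates VNF components, independently of the INF configuration."""
    def instantiate_vnf(self, endpoint, component):
        print(f"[nfv-orch]   instantiate {component} for {endpoint}")

def provision_epl(controller, nfv_orch, uni_a, uni_b):
    # Step 1: per the main orchestrator's request, configure INF-UNI-Prov on both UNIs
    for uni in (uni_a, uni_b):
        controller.configure_inf(uni, "INF-UNI-Prov")
    # Step 2: instantiate the VNF components for both UNIs and the EVC between them
    for uni in (uni_a, uni_b):
        nfv_orch.instantiate_vnf(uni, "VNF-UNI-Prov")
    nfv_orch.instantiate_vnf(f"{uni_a}<->{uni_b}", "VNF-EVC-Prov")

provision_epl(SdnController(), NfvOrchestrator(), "UNI1", "UNI2")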

5.8.3  Access E-Line and Its Service Chaining


Similarly, virtual and infrastructure components of the operator virtual connection (OVC) termination point and ENNI can be defined. The OVC service attributes listed in Tables 5.8 and 5.9 are valid for access E-line [14].

1. VNFOVC-Prov is an OVC virtual function consisting of attributes that can be configured and supported by software.
2. INFOVC-Prov is an OVC infrastructure function consisting of attributes that can be configured and supported by physical hardware.
3. VNFENNI-Prov is an ENNI virtual function consisting of attributes that can be configured and supported by software.
4. INFENNI-Prov is an ENNI infrastructure function consisting of attributes that can be configured and supported by physical hardware.

Additional OVC service attributes for Access E-Line are given in Tables
5.10 and 5.11.
Service chaining for access E-line service is depicted in Figure 5.31.


Figure 5.28  Basic EPL provisioning.

Figure 5.29  Provisioning of basic EPL with link OAM capabilities.

INFs and VNFs and service chaining for remaining carrier Ethernet ser-
vices such as access E-LAN, transit E-line, and transit E-LAN services can be
similarly identified.

5.9  Virtualized IP VPN Services


We model VPN as a set of interfaces, connections, and their termination points
as depicted in Figure 5.32.


Figure 5.30  Provisioning of basic EPL with link OAM, OAM, and SOAM capabilities.

Table 5.8
OVC End Point Per UNI Service Attributes

OVC End Point per UNI Service Attribute [14] | Categories of VNF and Infrastructure
UNI OVC identifier | VNFOVC-Prov
OVC end point map | VNFOVC-Prov
Class of service identifiers | VNFOVC-Prov
Ingress bandwidth profile per OVC end point | INFOVC-Prov + VNFOVC-Prov
Ingress bandwidth profile per class of service identifier | INFOVC-Prov + VNFOVC-Prov
Egress bandwidth profile per OVC end point | INFOVC-Prov + VNFOVC-Prov
Egress bandwidth profile per class of service identifier | INFOVC-Prov + VNFOVC-Prov
Maintenance end point (MEP) list | VNFOVC-Prov
Subscriber MEG MIP | VNFOVC-Prov

We use GRE tunnel-based IP VPN as an example. Tunneling provides a mechanism to transport packets of one protocol within another protocol. The protocol that is carried is called the passenger protocol, and the protocol that is used for carrying the passenger protocol is called the transport protocol.


Table 5.9
OVC End Point per ENNI Service Attributes

OVC End Point per ENNI Service Attribute [14] | Categories of VNF and Infrastructure
OVC end point identifier | VNFOVC-Prov
Trunk identifiers | VNFOVC-Prov
Class of service identifier for ENNI frames | VNFOVC-Prov
Ingress bandwidth profile per OVC end point | INFOVC-Prov + VNFOVC-Prov
Ingress bandwidth profile per class of service identifier | INFOVC-Prov + VNFOVC-Prov
Egress bandwidth profile per OVC end point | INFOVC-Prov + VNFOVC-Prov
Egress bandwidth profile per class of service identifier | INFOVC-Prov + VNFOVC-Prov
Maintenance end point (MEP) list | VNFOVC-Prov
Maintenance intermediate point (MIP) | VNFOVC-Prov

Table 5.10
OVC Service Attributes

OVC Service Attribute [14] | Categories of VNF and Infrastructure
OVC identifier | VNFOVC-Prov
OVC type | VNFOVC-Prov
OVC end point list | VNFOVC-Prov
Maximum number of UNI OVC end points | VNFOVC-Prov
Maximum number of ENNI OVC end points | VNFOVC-Prov
OVC MTU size | INFOVC-Prov + VNFOVC-Prov
CE-VLAN ID preservation | VNFOVC-Prov
CE-VLAN CoS preservation | VNFOVC-Prov
S-VLAN ID preservation | VNFOVC-Prov
S-VLAN CoS preservation | VNFOVC-Prov
Color forwarding | VNFOVC-Prov
Service level specification | INFOVC-Prov + VNFOVC-Prov
Unicast frame delivery | VNFOVC-Prov
Multicast frame delivery | VNFOVC-Prov
Broadcast frame delivery | VNFOVC-Prov
OVC available MEG level | VNFOVC-Prov

Generic routing encapsulation (GRE) is one of the available tunneling mechanisms which uses IP as the transport protocol and can be used for carrying many different passenger protocols. The tunnels behave as virtual point-to-point links


Table 5.11
OVC End Point per UNI Service Attributes

OVC End Point per UNI Service Attribute | Categories of VNF and Infrastructure
UNI OVC identifier | VNFOVC-Prov
OVC end point map | VNFOVC-Prov
Class of service identifiers | VNFOVC-Prov
Ingress bandwidth profile per OVC end point | INFOVC-Prov + VNFOVC-Prov
Ingress bandwidth profile per class of service identifier | VNFOVC-Prov
Egress bandwidth profile per OVC end point | INFOVC-Prov + VNFOVC-Prov
Egress bandwidth profile per class of service identifier | VNFOVC-Prov
Maintenance end point (MEP) list | VNFOVC-Prov
Subscriber MEG maintenance intermediate point (MIP) | VNFOVC-Prov

Figure 5.31  Service chaining for access E-line service.

that have two endpoints identified by the tunnel source and tunnel destination
addresses at each endpoint.
Figure 5.33 shows the encapsulation process of a GRE packet as it traverses the router and enters the tunnel interface.
Configuring a GRE tunnel involves creating a tunnel interface, which is a
logical interface, and configuring the tunnel endpoints for the tunnel interface.
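On a Linux-based (virtual) router, these steps map onto a handful of iproute2 commands, as in the sketch below; the interface name and the addresses are documentation examples, and a production script would add error handling and routing configuration.

# Example of creating a GRE tunnel interface on a Linux-based (virtual) router
# by shelling out to iproute2; this is a sketch of the steps described above,
# not a production tool.

import subprocess

def run(cmd):
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

def create_gre_tunnel(name, local_ip, remote_ip, tunnel_addr):
    # Create the logical tunnel interface with its source/destination endpoints
    run(f"ip tunnel add {name} mode gre local {local_ip} remote {remote_ip} ttl 255")
    run(f"ip addr add {tunnel_addr} dev {name}")   # address of the logical interface
    run(f"ip link set {name} up")

# Tunnel source/destination identify the virtual point-to-point link endpoints.
create_gre_tunnel("gre1", local_ip="203.0.113.1", remote_ip="198.51.100.2",
                  tunnel_addr="10.0.0.1/30")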


Figure 5.32  IP VPN model.

Figure 5.33  GRE encapsulation process.

Attributes of the interface, connection, and connection termination point for a GRE tunnel-based VPN are given next as an example.
Similar to carrier Ethernet services, virtual and infrastructure components
of IP VPN termination point, IP interface, and IP VPN connection can be
defined. Some of these attributes and their categorizations are the author’s rec-
ommendations to the industry and may change as the standards for IP services
evolve:

1. VNFVPNI-Prov is a VPN interface virtual function consisting of attributes that can be configured and supported by software.
2. INFVPNI-Prov is a VPN interface infrastructure function consisting of attributes that can be configured and supported by physical hardware.
3. VNFVPNI-oam is a VPN interface OAM virtual function consisting of attributes that can be configured and supported by software.
4. INFVPNI-oam is a VPN OAM infrastructure function consisting of attributes that can be configured and supported by physical hardware.


5. VNFVPNI-loam is a VPN interface link OAM virtual function consisting of attributes that can be configured and supported by software.
6. INFVPNI-loam is a VPN interface link OAM infrastructure function consisting of attributes that can be configured and supported by physical hardware.

From Tables 5.12–5.15, we can identify virtual capabilities and infrastructure capabilities of IP VPN. Possible implementations of IP VPN are given

Table 5.12
IP VPN Interface Attributes

IP VPN Interface Attributes [16] | Component of INF | Component of VNF | Main Functions
Physical interface (i.e., rate, MAC address, and so on) | √ | — | INFVPNI-prov + VNFVPNI-prov
MTU | √ | — | INFVPNI-prov
Port ID | — | √ | INFVPNI-prov + VNFVPNI-prov
Link OAM | √ | √ | INFVPNI-loam + VNFVPNI-loam
Routing protocol attributes | — | √ | VNFVPNI-prov
Maximum number of VPN sessions supported | √ | √ | INFVPNI-prov + VNFVPNI-prov
IP ECMP attributes | — | √ | VNFVPNI-ecmp
LAG/LACP attributes | √ | √ | INFVPNI-lag + VNFVPNI-lag
IPSec attributes | — | √ | VNFVPNI-IPSec
NAT-T attributes | — | √ | VNFVPNI-IPSec
Firewall server attributes | — | √ | VNFVPNI-fw
DHCP server attributes | — | √ | VNFVPNI-dhcp
DNS server attributes | — | √ | VNFVPNI-dns
TOD server attributes | — | √ | VNFVPNI-tod
TFTP server attributes | — | √ | VNFVPNI-tftp
Port loopback | √ | √ | INFVPNI-oam + VNFVPNI-oam
Dying gasp | √ | √ | INFVPNI-oam + VNFVPNI-oam
Link trace | √ | √ | INFVPNI-oam + VNFVPNI-oam


Table 5.13
IP VPN End Point Attributes

IP VPN End Point Attributes | Component of INF | Component of VNF | Main Functions
Tunnel EP ID | — | √ | VNFVPNE-prov
Tunnel address type | — | √ | VNFVPNE-prov
IP address of local end point | — | √ | VNFVPNE-prov
IP address of remote end point | — | √ | INFVPNE-prov + VNFVPNE-prov
Encapsulation method | — | √ | INFVPNE-prov + VNFVPNE-prov
IPv4 TOS or IPv6 TC | √ | √ | INFVPNE-prov + VNFVPNE-prov
IPv6 flow label | — | √ | INFVPNE-prov + VNFVPNE-prov
IPv6 hop limit | — | √ | INFVPNE-prov + VNFVPNE-prov
VPN ID | — | √ | INFVPNE-prov + VNFVPNE-prov
OAM: tunnel security protocol | √ | √ | INFVPNE-oam + VNFVPNE-oam
OAM: tunnel security protocol attributes—IPSec attributes | √ | √ | INFVPNE-oam + VNFVPNE-oam
OAM: alarms, keep-alive messages | √ | √ | INFVPNE-oam + VNFVPNE-oam
OAM: measurements and thresholds | √ | √ | INFVPNE-oam + VNFVPNE-oam
OAM: operational state | √ | √ | INFVPNE-oam + VNFVPNE-oam
OAM: administrative state | — | √ | VNFVPNE-oam

in Table 5.16. Service chaining to provision an IP VPN service is depicted in Figure 5.34.

5.10  Life Cycle Services Operations (LSO) for Cloud Services


In a network supporting cloud services, there can be virtualized, nonvirtualized,
and legacy components. All of the network, applications, and service compo-
nents need to be managed together.
Figure 5.35 depicts a possible cloud services management architecture to manage cloud and noncloud components of cloud services. VNFs are managed by an NFV orchestrator [7]. Nonvirtualized components of the network are managed by software-defined network (SDN) controllers. The legacy components are managed by a network management system/element

Table 5.14
IP VPN Connection

IP VPN Connection Attributes | Component of INF | Component of VNF | Main Functions
Tunnel ID | — | √ | VNFVPNE-prov
VPN ID | — | √ | VNFVPNC-prov
Connection type | — | √ | VNFVPNC-prov
SLA | √ | √ | INFVPNC-prov + VNFVPNC-prov
MTU | √ | √ | INFVPNC-prov + VNFVPNC-prov
Administrative state | — | √ | VNFVPNC-prov
Operational state | √ | √ | INFVPNC-prov + VNFVPNC-prov
Connection duration | — | √ | VNFVPNC-prov
Connection start time | — | √ | VNFVPNC-prov

Table 5.15
IP VPN SOAM

IP VPN SOAM Functionality/Attribute [24]* | Component of VNF or Infrastructure or Both | Categories of VNF and Infrastructure
IP VPN connection MEG | Both | INFVPN-soam + VNFVPN-soam
ETH-AIS and ETH-RDI | Both | INFVPN-soam + VNFVPN-soam
Keep-alive messaging | — | —
IP loopback | — | —
One-way or round trip delay | Both | INFVPN-soam + VNFVPN-soam
Availability | Both | INFVPN-soam + VNFVPN-soam
Test probes for RFC 6349 TCP testing | Both | INFEVC-soam + VNFEVC-soam
*These attributes can be part of IP VPN end point attributes.

management system (NMS/EMS). The cloud orchestrator coordinates and maintains all virtual, nonvirtual, and legacy resources to accommodate services driven by various applications.


Table 5.16
VNFs and Infrastructure Components of IP VPN

IP VPN Functionalities | VNF and INF Components Required (i.e., SFC Components)
Basic IP VPN provisioning | VNFVPNI-prov + INFVPNI-prov + VNFVPNE-prov + INFVPNE-prov + VNFVPNC-prov + INFVPNC-prov
Basic IP VPN provisioning + service OAM | VNFVPNI-prov + INFVPNI-prov + VNFVPNE-prov + INFVPNE-prov + VNFVPNC-prov + INFVPNC-prov + VNFVPN-soam + INFVPN-soam

Figure 5.34  Basic IP VPN service chaining.

The orchestrator is responsible for the management and orchestration of software resources and the virtualized hardware infrastructure to realize networking services. The VNF manager is in charge of the instantiation, scaling,
working services. The VNF manager is in charge of the instantiation, scaling,
termination, and update events during the life cycle of a VNF, and supports
zero-touch automation. The virtualization layer abstracts the physical resources
and anchors the VNFs to the virtualized infrastructure. It ensures that the VNF
life cycle is independent of the underlying hardware platforms by offering stan-
dardized interfaces.
This type of functionality is typically provided in the form of virtual ma-
chines (VMs) and their hypervisors or containers. The virtualized infrastructure
manager is used to virtualize and manage the configurable compute, network,
and storage resources, and control their interaction with VNFs. It allocates VMs

Figure 5.35  Cloud services management with a cloud orchestrator, SDN controllers, NFV orchestrator, and NMS/EMS.

onto hypervisors and manages their network connectivity. It also analyzes the root cause of performance issues and collects information about infrastructure faults for capacity planning and optimization.
Cloud services may be provided by multiple service operators where each
cloud service operator (cSO) may have its own cloud orchestrator. The end-
to-end coordination between cloud orchestrators can be provided by a cloud
orchestrator owned by the cSP as depicted in Figure 5.36.
LSO functionalities defined by MEF for carrier Ethernet network (CEN),
as depicted in Figure 5.37, are [11]:

• Market analysis and product strategy;


• Service and resource design;
• Launch products;
• Marketing fulfillment response;
• Sale proposal and feasibility;
• Capture customer order;
• Service configuration and activation;
• End-to-end service testing;
• Service problem management;
• Service quality management;
• Billing and revenue management;
• Terminate customer relationship.

Although the LSO for cloud services is not yet worked out in the industry, these generic steps apply to cloud services as well.
Order fulfillment and service control deal with the orchestration of provi-
sioning related activities involved in the fulfillment of a customer order or of a

Figure 5.36  Management of cloud services provided by multiple cloud service operators.


Figure 5.37  Product and service operations lifecycle stages.

service control request, including the tracking and reporting of the provisioning
progress. The process can be broken down into multiple functional orchestra-
tion areas:

• Order fulfillment orchestration;


• Service control orchestration;
• Service configuration orchestration;
• Service delivery orchestration;
• Preservice testing orchestration.


Order fulfillment orchestration is triggered from an issued customer order, generally originating from a business application such as a customer rela-
tionship management (CRM) system or order entry system. Its responsibilities
include the following:

• Scheduling, assigning, and coordinating customer provisioning related activities;
• Generating the respective service order(s) or service creation/modifica-
tion/move/deletion request(s) based on specific customer orders;
• Escalating status of customer orders in accordance with local policy;
• Undertaking necessary tracking of the execution process;
• Adding additional information to an existing customer order under ex-
ecution;
• Modifying information in an existing customer order under execution;
• Modifying the customer order status;
• Canceling a customer order when the initiating sales request is canceled;
• Monitoring the jeopardy status of customer orders, and escalating cus-
tomer orders as necessary;
• Indicating completion of a customer order by modifying the customer
order status.
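A minimal sketch of the order status tracking implied by this list is shown below; the states, transitions, and order identifier are illustrative and are not taken from the MEF LSO specifications.

# Illustrative customer-order status tracking (states and transitions assumed).

ALLOWED_TRANSITIONS = {
    "received":    {"in-progress", "cancelled"},
    "in-progress": {"jeopardy", "completed", "cancelled"},
    "jeopardy":    {"in-progress", "escalated", "cancelled"},
    "escalated":   {"in-progress", "cancelled"},
}

class CustomerOrder:
    def __init__(self, order_id):
        self.order_id = order_id
        self.status = "received"
        self.history = [("received", "order captured")]

    def set_status(self, new_status, note=""):
        """Modify the order status, rejecting transitions the local policy disallows."""
        if new_status not in ALLOWED_TRANSITIONS.get(self.status, set()):
            raise ValueError(f"{self.status} -> {new_status} not allowed")
        self.status = new_status
        self.history.append((new_status, note))

order = CustomerOrder("ORD-0042")                       # hypothetical order ID
order.set_status("in-progress", "service orders generated")
order.set_status("jeopardy", "access provider delay")
order.set_status("in-progress", "jeopardy cleared")
order.set_status("completed", "customer order completed")
print(order.status, order.history)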

Service control orchestration functions are scheduling, assigning, and coordinating service control provisioning related activities. They can be listed as follows:

• Undertaking necessary tracking of the execution process of service control requests;
• Adding additional information to an existing service control request un-
der execution;
• Modifying information in an existing service control request under ex-
ecution;
• Modifying the service control request status;
• Canceling a service control request when the initiating request is can-
celed;
• Monitoring the jeopardy status of service control requests, and escalat-
ing service control requests as necessary;


• Instantiating, when appropriate, an event for the billing system to capture the policy-constrained change;
• Indicating completion of a service control request.

Service configuration orchestration functions are as follows:

• Verifying whether specific service designs sought by customers are feasible;
• Allocating the appropriate specific service parameters to support service
orders, control requests, or requests from other processes;
• Reserving specific service parameters (if required by the business rules)
for a given period of time until the initiating customer order is con-
firmed, or until the reservation period expires (if applicable);
• Configuring specific services, as appropriate;
• Recovery of specific services;
• Updating of the service inventory database to reflect that the specific
service has been allocated, modified, or recovered;
• Assigning and tracking service component provisioning activities;
• Managing service provisioning jeopardy conditions;
• Reporting progress on service orders or control requests.

Service delivery orchestration encompasses the orchestrated activation or configuration of specific service components:

• Via one or multiple domain controllers;


• Via diverse human-involved methods or work management systems,
where automation is not possible and if applicable (e.g., installing physi-
cal fiber optics to a building for one of the customer sites, or shipping
and connecting customer premises equipment [CPE]);
• Via access provider product orders for off-net service components.

The LSO ecosystem needs to orchestrate end-to-end network connectivity testing, but flexibility for real-time staggered testing by site is required:

• The LSO ecosystem may manage unit level testing within infrastructure
and element management levels, therefore abstract to LSO, or may be


orchestrated from LSO with testing requests, via APIs, to systems ca-
pable of conducting and reporting on unit tests.
• The LSO ecosystem needs to orchestrate and control end-to-end service
test, and issues testing requests, via APIs, to systems capable of conduct-
ing and reporting on unit tests.
• The LSO ecosystem needs to orchestrate customer acceptance testing.
• The LSO ecosystem needs to support alarm surveillance, detect errors and faults, and correlate them to services.

• The LSO ecosystem performs fault verification, isolation, and testing.


• The LSO ecosystem reports correlated alarms, performance events,
trouble reports, and so on, including the potential root cause of a trouble
and identified impact on services.
• The LSO ecosystem controls filtering of problem-related notifications.
• The LSO ecosystem provides problem-related information allowing the
status of problem resolution to be tracked.
• The LSO ecosystem orchestrates connectivity service fault recovery.

The LSO ecosystem collects service performance–related information across the domain. The LSO ecosystem gathers customer-perceived quality feedback within the LSO ecosystem, and service quality is analyzed by comparing the service performance metrics with the service quality objectives described in the SLS. The LSO ecosystem allows the definition of thresholds related to service quality objectives. The LSO ecosystem provides the results of the service quality analysis as well as information about known events that may impact the overall service quality (e.g., maintenance events, congestion, relevant known troubles, demand peaks, and so on). The LSO ecosystem performs capacity analysis and traffic engineering. The LSO ecosystem performs service quality improvement. LSO orchestrates the management of aggregate traffic flows through the network based on projected demands. LSO allows the definition of end-to-end SLA enforcement, assurance, and resolution policies associated with the product offering.
The LSO ecosystem supports the reporting of the usage of service capabilities and associated resources. LSO assembles service component usage data to compose an end-to-end view of service usage. LSO captures control-based service events such as changes in bandwidth. LSO may generate ex-
ception reports to describe where service resources have been used beyond the
commitments as described in the SLS. The LSO ecosystem may include billing
management capabilities [19].


The LSO ecosystem provides authentication for all interactions. The LSO
ecosystem may provide role-based access control for users. It supports encryp-
tion across interfaces and the associated key management capabilities. The LSO
ecosystem orchestrates filtering controls for connectivity services and maintains
administrative and trust domains and relationships.
The LSO ecosystem supports the fusion and analysis of information
among management and control functionality across management domains.
The LSO ecosystem assembles a relevant and complete operational picture of
the end-to-end services, service components, and the supporting network in-
frastructure, both physical and virtual. It ensures that information is visible,
accessible, and understandable when needed and where needed to accelerate
decision-making. The LSO ecosystem also supports prediction and trending of
service growth and resource demand as compared to available resource capacity.
The LSO ecosystem may provide rules-based coordination and automa-
tion of management processes across administrative domains supporting ef-
fective configuration, assurance, and control of services and their supporting
service components:

• LSO may support service related policies that encode rules that describe
the design and dynamic behavior of the services.
• LSO may support service objective based policies that implement sets of
rules with event triggered conditions and associated actions.
• LSO may adjust the behavior of services and service resources, including
bandwidth, traffic priority, and traffic admission controls through poli-
cies, allowing connectivity services to adapt rapidly to dynamic condi-
tions.
• Within the LSO ecosystem, user/party and service policies may be used
to control and bound the objects, parameters, value ranges, and states
that are allowed to be created, modified, or deleted.

The LSO ecosystem provides capabilities for the customer/partner to browse the product catalog:

• The LSO ecosystem provides capabilities for the customer/partner to browse and query commercial asset inventory.
• The LSO ecosystem provides capabilities for the customer/partner to
develop, place, and track orders.
• The LSO ecosystem provides capabilities for the customer/partner to
modify service instance, including rules guiding the dynamic service
characteristics.


• The LSO ecosystem provides capabilities for the customer/partner to


view service performance and fault information.
• The LSO ecosystem provides capabilities for the customer/partner to
place and track trouble reports.
• The LSO ecosystem provides capabilities for the customer/partner to
view usage and billing information.

The LSO reference architecture is depicted in Figure 5.38. Descriptions of the components and interfaces are as follows:

• Service portal (SvP): supporting interactions with the customer or cloud application coordinator to request, modify, manage, control, and terminate their services.
nate their services.
• Business applications (BUS): the provider functionality supporting
business management layer functionality (e.g., product catalog, order-
ing, billing, relationship management, and so on).
• Service orchestration functionality: the set of service management layer
functionality supporting an agile framework for streamlining and auto-
mating the service lifecycle in a sustainable fashion for coordinated man-

Figure 5.38  LSO management reference architecture [11].


agement supporting fulfillment, control, performance, assurance, usage, security, analytics, and policy capabilities encompassing all network do-
mains that require coordinated end-to-end management and control to
deliver connectivity services.
• Infrastructure control and management (ICM): the set of functionality
providing domain specific resource management layer capabilities in-
cluding configuration, control, and supervision of the network infra-
structure.
• Element control and management (ECM): the set of functionality sup-
porting element management layer management capabilities for groups
of individual network elements.
• Service gateway (SGW): supporting interactions with the partner ser-
vice provider (e.g., access provider) request, modify, manage, control,
and terminate aspects of the connectivity services provided by the part-
ner service provider.
• Cantata (SvP:BUS): the management interface reference point that provides a customer (including enterprise customers) with capabilities via the service portal to support the operations interactions (e.g., ordering, billing, trouble management, and so on) for a portion of the service provider service capabilities related to the customer’s services (e.g., customer service management interface).
• Allegro (SvP:LSO): the management interface reference point that allows customer supervision, via the service portal, of the LSO service capabilities under its purview through interactions with the LSO orchestrator.
• Legato (BUS:LSO): the management interface reference point between the business applications and LSO needed to allow management and operations interactions supporting LSO services.
• Sonata (BUS:BUS): the management interface reference point supporting the management and operations interactions (e.g., ordering, billing, trouble management, and so on) between two network providers (e.g., service provider and partner domain), analogous to the TMN X reference point.
• Interlude (LSO:LSO): the management interface reference point that provides for the supervision of a portion of LSO services within the partner domain that are coordinated by a service provider LSO within the bounds and policies defined for the service.
• Presto (LSO:ICM): the resource management interface reference point needed to manage the network infrastructure, including network view management functions.
• Animato (AC:SvP): the management interface reference point between the application coordinator and the service portal. Animato allows cloud application coordination, via the service portal, of the LSO service capabilities under its purview.
• Adagio (ICM:ECM): the element management interface reference point needed to manage the network resources, including element view management functions.
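As a quick recap of the reference points just listed, the sketch below captures them as a simple lookup table keyed by the component pairs they connect. Python is used purely for illustration; the mapping itself comes directly from the descriptions above, while the helper function is a hypothetical convenience.

# Interface reference points of the LSO reference architecture,
# expressed as (from_component, to_component) pairs taken from the list above.
LSO_REFERENCE_POINTS = {
    "Cantata":   ("SvP", "BUS"),   # customer operations interactions (ordering, billing, trouble)
    "Allegro":   ("SvP", "LSO"),   # customer supervision of LSO service capabilities
    "Legato":    ("BUS", "LSO"),   # business applications to service orchestration
    "Sonata":    ("BUS", "BUS"),   # provider-to-partner operations interactions
    "Interlude": ("LSO", "LSO"),   # supervision of partner-domain LSO services
    "Presto":    ("LSO", "ICM"),   # resource management of the network infrastructure
    "Animato":   ("AC",  "SvP"),   # cloud application coordinator to the service portal
    "Adagio":    ("ICM", "ECM"),   # element-level management of network resources
}

def reference_point(producer, consumer):
    """Return the name of the reference point connecting two components, if one is defined."""
    for name, endpoints in LSO_REFERENCE_POINTS.items():
        if endpoints == (producer, consumer):
            return name
    return None

# Example: reference_point("LSO", "ICM") returns "Presto".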

5.11  NFV and SDN for Unified Network and Cloud Service
Provisioning
Modern servers benefit from abstractions of operating systems, programming
languages, file systems, and virtual memory. As a result, servers are highly vir-
tualized, capable of supporting tens to hundreds of virtual machines (VMs) per
physical server, each of which can be dynamically created, moved to a different
host, modified, or deleted in a few minutes. Server virtualization has resulted in significant reductions in capital and operating expenses, a smaller physical footprint of devices, lower energy consumption, faster provisioning times, and higher utilization.
However, networks for data centers, cloud computing, and services have not yet evolved their own set of fundamental abstractions. Conventional data networks can require five days or more to provision the necessary service chains within a data center, and weeks or longer to reprovision service between data centers. Current industry trends such as dynamic workloads, mobile computing, multitenant cloud computing, warehouse-scale data centers, and big data analytics have led to a need for much richer network functionality. Abstractions for networks with SDN are expected to bring benefits similar to those derived from server virtualization.
With SDN, management and control functions can be moved into software running on a server cluster, known as a network controller. Centralized management through cloud middleware such as the OpenStack Quantum (now Neutron) interface and virtualized layer 2–3 network capabilities through network function virtualization (NFV) are being worked on in the industry. Network resources are expected to be programmable, automated, and eventually optimized, resulting in a workload- or application-aware network.
The telecom sector is actively exploring the opportunities offered by the cloud. In a cloud environment, communication endpoints are user devices and virtual machines (VMs) that can be hosted in different physical locations according to varying conditions. Compared to traditional networking environments, network capacity requirements are no longer static, but are likely to change as the associated computing and storage resources expand and shrink. This poses a whole new set of challenges to the network, which now jointly includes the data center (DC) and the wide area network (WAN) segments. To provide assured levels of performance to cloud services, cloud and telco services need to be provisioned, managed, controlled, and monitored in an integrated way.
In [12], experimental results are presented from an SDN/NFV testbed with an automated, dynamically provisioned, 125-km optical WAN. Live VM migration for NFV video serving is demonstrated, along with Layer 0–3 orchestration using OpenDaylight, OpenFlow, and IBM Distributed Overlay Virtual Ethernet (DOVE). The testbed consists of three data centers connected by a 125-km ring of single-mode optical fiber. All of the data center switches and WDM equipment are orchestrated through an open source–based controller running OpenFlow 1.0.
Automatic triggering of live VM migration (uninterrupted operation of the functions in the VM) when server utilization exceeds 75 percent is exercised. The SDN controller automatically provisions all the switches in the source and target data centers, as well as wavelengths on the optical network (subject to available physical resources and workload priority levels).
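As a rough illustration of this control loop, the following sketch shows how such a utilization-triggered migration might be wired together. The 75 percent threshold and the provision/revert steps come from the testbed description above; all object and method names (host, scheduler, controller, and their helpers) are hypothetical placeholders rather than an actual API.

UTILIZATION_THRESHOLD = 0.75  # trigger point used in the testbed described above

def check_and_migrate(host, scheduler, controller):
    """Hypothetical control loop: live-migrate a VM off an overloaded server."""
    if host.cpu_utilization() <= UTILIZATION_THRESHOLD:
        return  # nothing to do while the server stays below the threshold
    vm = scheduler.select_vm_to_move(host)        # pick a candidate VM (assumed helper)
    target = scheduler.select_target_host(vm)     # pick a destination with spare capacity
    # Provision switches in the source and target data centers plus optical wavelengths,
    # subject to available physical resources and workload priority, as in the testbed.
    path = controller.provision_path(host.dc, target.dc, priority=vm.priority)
    vm.live_migrate(target)                       # functions keep running during the move
    controller.revert_to_original_state(path)     # restore network devices once migration completes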
End-to-end network provisioning can be accomplished in about a minute. Once migration is complete, the SDN controller reverts all network devices to their original states. Fixed wavelengths in most metropolitan area optical networks are currently underutilized yet costly, since they must be provisioned for peak network capacity.
The VM migration time is a function of the VM memory size, M, the page dirty rate per second, W, and the network bandwidth in pages per second, R. Applications such as VMware perform live migration (uninterrupted operation of the functions in the VM) using a variant of the stop-and-copy technique with a premigration phase: the VM is periodically suspended for a stop time, S, while its active memory page contents, execution state, and architectural configuration are iteratively transferred to another physical host, where it is reinstalled. The migration time, T, is given by T = M/(R – W(T – S)/T) [12].
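Because T appears on both sides, the expression is an implicit equation that can be solved by simple fixed-point iteration. The sketch below does exactly that; the formula is the one given above, while the numeric values in the example are purely illustrative assumptions (a 4-GiB VM with 4-KiB pages, roughly a 10-Gb/s link, and sample dirty-rate and stop-time figures), not measurements from [12].

def migration_time(M, R, W, S, iterations=50):
    """Iteratively solve T = M / (R - W*(T - S)/T) for the live-migration time T (seconds).

    M: VM memory size in pages; R: bandwidth in pages/second;
    W: page dirty rate in pages/second; S: stop time in seconds.
    """
    T = M / R  # initial guess: transfer time if no pages were dirtied
    for _ in range(iterations):
        effective_rate = R - W * (T - S) / T
        if effective_rate <= 0:
            raise ValueError("dirty rate too high relative to bandwidth; migration does not converge")
        T = M / effective_rate
    return T

# Illustrative numbers only: 4 GiB of 4-KiB pages, ~305,000 pages/s of throughput,
# 50,000 dirtied pages per second, and a 0.3-s stop time give roughly a 4-s migration.
print(migration_time(M=1_048_576, R=305_000, W=50_000, S=0.3))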
Today, the establishment, management, and composition of service functions (SFs) such as routers and firewalls follow a rigid, static, and time-consuming process. For example, resource overprovisioning is usually necessary to cope with estimated peak demand, and a fault in a single function can disrupt an entire network, imposing the need for faster disaster recovery methods. As virtualization technologies reach maturity and are able to provide carrier-grade performance and reliability, it becomes feasible to consolidate multiple network equipment types, traditionally running on specialized hardware platforms, onto industry-standard hardware, which minimizes costs, reduces time to market, and facilitates open innovation.
Cloud computing, combined with software defined networking (SDN)
and network function virtualization (NFV), promises to make SF management
processes much more agile. Cloud computing represents a paradigm for infor-
mation technology (IT) services that can now be delivered in an on-demand
and self-service manner. SDN brings new capabilities in terms of network auto-
mation, programmability, and agility that facilitate integration with the cloud.
On the other hand, NFV accelerates the innovation of networks and services,
allowing new operational approaches, novel services, faster service deployment
(shorter time to market), increased service assurance, and stronger security.
An SF is a functional block responsible for a specific treatment of received packets and has well-defined external interfaces [3]. An SF can be embedded in a virtual instance or directly in a physical element (the usual situation until recently). Virtualized SFs offer the opportunity to compose and organize them dynamically. Service function chaining (SFC) is loosely defined as “an ordered set of service functions that must be applied to packets and/or frames selected as a result of classification” [3]. It requires the placement of SFs and the adaptation of the traffic-forwarding policies of the underlying network to steer packets through an ordered chain of service components. However, the lack of automatic configuration and customization capabilities increases the operational complexity.
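To make the SFC notion concrete, the following minimal sketch represents a chain as a classifier plus an ordered list of service functions, echoing the definition quoted above. The data structure and field names are illustrative assumptions, not a standard data model.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Classifier:
    """Selects which packets/frames enter the chain (fields are illustrative)."""
    protocol: str = "tcp"
    dst_port: int = 80
    src_ip: str = "0.0.0.0/0"

@dataclass
class ServiceFunctionChain:
    """An ordered set of service functions applied to classified traffic."""
    name: str
    classifier: Classifier
    functions: List[str] = field(default_factory=list)  # SF names, in traversal order

# Example: steer classified web traffic through a firewall, then a NAT, then a video optimizer.
web_chain = ServiceFunctionChain(
    name="web-optimization",
    classifier=Classifier(protocol="tcp", dst_port=80),
    functions=["firewall", "nat", "video-optimizer"],
)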
The most relevant functionalities of cloud based virtualized networks are
as follows:

• Automated deployment, configuration, and life cycle management (in-


stantiation, configuration, update, scale up/down, termination, and so
on) of SFs.
• Exposure of functionalities such as service deployment and provision-
ing, service monitoring and reconfiguration, and service teardown.
• Federated management and optimization of WAN and cloud resources
for accommodating VNFs. The federated management and optimiza-
tion of WAN and cloud resources gives the platform a broad and dis-
tributed scope. It allows the establishment of end-to-end services over a
distributed physical infrastructure.
• Support of SFCs.

The orchestrator is responsible for the automated provisioning, management, and monitoring of VNFs over the virtual infrastructure. It exposes the ability to create and delete VNFs, as well as the ability to chain them into SFCs. It relies on the VIM plane to provision the infrastructure resources where VNFs run (VMs, virtual networks, and so on). The orchestrator has a REST interface that exposes the ability to create and delete VNFs as well as to chain them.
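A hedged sketch of how a client might drive such a REST interface is shown below. The endpoint URL, resource paths, and payload fields are all hypothetical placeholders chosen for illustration; the actual interface of any given orchestrator will differ.

import requests

ORCH = "https://orchestrator.example.net/api/v1"   # hypothetical orchestrator endpoint

# Create a VNF instance; behind the scenes the orchestrator asks the VIM plane
# for the VMs and virtual networks the VNF needs.
vnf = requests.post(f"{ORCH}/vnfs",
                    json={"name": "vfw-1", "image": "firewall-vnf"}).json()

# Chain the new VNF behind an existing virtual router to form an SFC.
requests.post(f"{ORCH}/chains",
              json={"name": "edge-chain", "vnfs": ["vrouter-1", vnf["id"]]})

# Delete the VNF when it is no longer needed.
requests.delete(f"{ORCH}/vnfs/{vnf['id']}")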
The VIM plane includes the components for management of infrastruc-
ture resources. It includes cloud DC controllers (one per DC) and a WAN
controller that is able to establish inter-DC connectivity services.
From a networking perspective, OpenStack allows the creation and man-
agement of networks (L2 network segments) and ports (attachment points for
devices connecting to networks, such as virtual network interface cards or
vNICs, in VMs). The OpenStack community introduced new Neutron net-
work service types: L3 routing, firewall as a service (FWaaS), load balancer as a
service (LBaaS), and VPN as a service (VPNaaS).
With the orchestration and composition of SFs in mind, it is easy to
identify the need to fill a gap in OpenStack: steering traffic between OpenStack
elements (e.g., VMs, routers). We envision a new OpenStack service abstraction
that extends and relies on current OpenStack networking features, allowing
traffic steering between Neutron ports according to classification criteria. New
entities are introduced into the OpenStack Neutron data model: port steering
and classifier. Both entities have a set of common OpenStack data model at-
tributes (i.e., id, name, description, and tenant_id). Port steering adds to this
common set a list of ports (ports attribute) and a list of classifiers (classifiers
attribute). The former lists the sets of ports that must be targeted for classifica-
tion and then steered. The classifier entity adds the following attributes: type,
protocol, port_min, port_max, src_ip and dst_ip.
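The following sketch instantiates the two proposed entities with sample values. The attribute names are the ones listed above; the concrete values (IDs, tenant, port references, and addresses) are made-up examples.

# Illustrative instances of the proposed Neutron traffic-steering entities.
classifier = {
    "id": "c1", "name": "web-traffic", "description": "classify HTTP flows",
    "tenant_id": "tenant-a",
    "type": "l4", "protocol": "tcp",
    "port_min": 80, "port_max": 80,
    "src_ip": "10.0.0.0/24", "dst_ip": "0.0.0.0/0",
}

port_steering = {
    "id": "ps1", "name": "steer-through-vfw",
    "description": "send matching traffic through the firewall VM",
    "tenant_id": "tenant-a",
    "ports": ["port-vm1", "port-vfw-in", "port-vfw-out"],  # Neutron ports to classify and steer
    "classifiers": [classifier["id"]],
}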
OpenDaylight has a module that integrates with OpenStack Neutron for the enforcement of services in the infrastructure. This module can be extended to support and enforce the OpenStack traffic steering feature described above; the implementation relies on OpenFlow and the Open vSwitch database management protocol (OVSDB) for the management of network resources.
The WAN controller is responsible for managing the operator network,
and it exposes connectivity services to the orchestrator. In this context, WAN
services are used to support VNFs. Point-to-point and multipoint connections
with guaranteed network QoS are provided. These are exposed through a ser-
vice interface that, similar to cloud IaaS interfaces, is technology agnostic. The
details and mechanisms to manage the automatic establishment of connectivity
services across different locations are detailed in [6].
Reference [18] describes the configuration of virtual CPE (vCPE) with Juniper’s Contrail orchestrator. Juniper’s cCPE Selfcare application enables users to configure Contrail-based virtual CPE services, which are hosted in the user’s cloud computing environment. Customer network administrators can then enable these virtual CPE services through the cCPE Selfcare portal on a self-serve basis. Contrail, which works within open cloud orchestration systems such as OpenStack and CloudStack, provides orchestration and management of networking functions, such as a virtual firewall, in a VM instead of physical hardware. The vCPE services rely on the preconfigured routing instances and interfaces on MX Series routers, which the cCPE Selfcare portal identifies and can modify.
The vCPE services can be configured in the Selfcare portal, which passes the authentication credentials and virtual service definition properties to Contrail and OpenStack to create the virtual service. The cCPE Selfcare application communicates with Contrail over the Contrail northbound RESTful APIs. The Selfcare portal acts as an SDN orchestrator that enables MX routers to route selected traffic to virtual services managed by the Contrail controller. A user defines Contrail-based virtual services as parameterized service templates and VM images in Contrail that are instantiated by the Contrail controller and OpenStack. The cCPE customers can then enable these virtual services on a self-serve basis in the cCPE Selfcare portal.
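A rough sketch of what such a portal-to-orchestrator interaction could look like is given below. The endpoint, resource path, header, and payload fields are hypothetical illustrations of a northbound REST call, not the actual Contrail API.

import requests

CONTRAIL_API = "https://contrail.example.net:8082"   # hypothetical northbound endpoint

def enable_virtual_service(auth_token, template_name, parameters):
    """Hedged sketch: pass credentials and service-definition properties so the
    orchestrator instantiates a parameterized service template as a VM-backed service."""
    headers = {"X-Auth-Token": auth_token}            # credentials obtained during portal login
    payload = {"service_template": template_name,     # e.g., a virtual firewall template
               "parameters": parameters}              # customer-specific properties from the portal
    resp = requests.post(f"{CONTRAIL_API}/service-instances",
                         json=payload, headers=headers)
    resp.raise_for_status()
    return resp.json()                                # the created virtual service instance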
Contrail, by combining a controller and virtual routers on virtualized servers, enables the chaining of virtual services provided by applications running on VMs. Contrail, together with OpenStack, automates the addition of new features and virtual services based on VMs for customers who have IP or VPN connectivity based on MX edge routers. The BGP protocol announces the routes with SDN targets so that all routers in the VPN can provide connectivity between the VPN sites. The VMs are dynamically created by Contrail using OpenStack.
The cCPE Selfcare application, along with Contrail, can dynamically provision services and replace traditional router-based services running on edge routers (such as a DHCP server or a static firewall) with cloud services provided by VMs in a cloud-based environment (such as an external DHCP server).

5.12  Conclusion
In this chapter, the novel cloud services architectures defined by OCC, consisting of actors for cloud services, standard interfaces between the actors, and standard connection and connection termination points associated with cloud users and applications, are described. The network functions virtualization (NFV) architecture of ETSI NFV is summarized. Components of the cloud services architectures and NFV architectures are mapped. An implementation approach providing substantial flexibility to cloud service providers is described. A method for implementing cloud services architectures with virtualized components is given. Based on this approach, implementation details of carrier Ethernet services and IP VPN services are given.
Management of cloud services with virtualized and nonvirtualized components is challenging. A high-level management architecture and the provisioning of vCPE services using SDN technology are described.
We expect mature products and further implementation details of systems and services using cloud, virtualization, and SDN technologies to become available in the near future.

References
[1] Toy, M., “OCC 1.0 Reference Architecture,” December 2014.
[2] Draft ETSI GS NFV-INF V0.3.1 (2014-05), “Network Functions Virtualisation; Infrastructure Architecture; Architecture of the Hypervisor Domain.”
[3] DGS NFV-INF003 V0.34 (2014-11-18), “Network Functions Virtualisation; Part 1: Infrastructure Architecture; Sub-Part 3: Architecture of Compute Domain.”
[4] Draft ETSI GS NFV-INF 001 V0.3.12 (2014-11), “Network Functions Virtualisation; Infrastructure Overview.”
[5] Draft ETSI GS NFV-SWA 001 V0.2.4 (2014-11), “Network Functions Virtualisation; Virtual Network Functions Architecture.”
[6] ETSI GS NFV-MAN 001 V1.1.1 (2014-12), “Network Functions Virtualisation (NFV); Management and Orchestration.”
[7] Toy, M., “Cloud Services Architectures with SDN and NFV Constructs,” OCC Draft, July 2015.
[8] GS NFV-INF 007 V0.3.1 (2013-11-15), “Network Function Virtualisation Infrastructure Architecture: Interfaces and Abstractions.”
[9] ETSI GS NFV-INF 007 V1.1.1 (2014-10), “Network Functions Virtualisation (NFV); Infrastructure; Methodology to Describe Interfaces and Abstractions.”
[10] Toy, M., “Cloud Services Architectures,” Procedia Computer Science, 00 (2015) 000–000, Elsevier, November 2015.
[11] MEF, “Lifecycle Service Orchestration: Reference Architecture and Framework,” February 2016.
[12] Cannistra, R., B. Carle, M. Johnson, J. Kapadia, Z. Meath, et al., “Enabling Autonomic Provisioning in SDN Cloud Networks with NFV Service Chaining,” Proceedings of Optical Fiber Communications Conference, San Francisco, CA, March 2014.
[13] MEF 6.2, “EVC Ethernet Services Definitions Phase 3,” August 2014.
[14] MEF 51, “OVC Services Definitions,” August 2015.
[15] MEF 7.2, “Carrier Ethernet Management Information Model,” April 2013.
[16] RFC 4087, “IP Tunnel MIB,” June 2005.
[17] Hwang, J., K. K. Ramakrishnan, and T. Wood, “NetVM: High Performance and Flexible Networking Using Virtualization on Commodity Platforms,” IEEE Transactions on Network and Service Management, Vol. 12, No. 1, March 2015, pp. 34–47. https://www.usenix.org/system/files/conference/nsdi14/nsdi14-paper-hwang.pdf.
[18] Juniper Technical Document, “Understanding SDN Provisioning and Cloud CPE Selfcare Application for MX Series Routers,” http://www.juniper.net/techpubs/en_US/junos-space-apps/ccpe1.1/topics/ccpe-selfcare-sdn-temp-book.pdf. Last accessed August 2016.
[19] MEF 50, “Carrier Ethernet Service Lifecycle Process Model,” December 2014.
[20] National Institute of Standards and Technology (NIST) Special Publication 500-291, NIST Cloud Computing Roadmap, July 2013.
[21] https://developers.google.com/storage/docs/durable-reduced-availability.
[22] Wang, G., and T. S. Eugene Ng, “The Impact of Virtualization on Network Performance of Amazon EC2 Data Center,” Proceedings of IEEE INFOCOM 2010, Piscataway, NJ, 2010.
[23] Martins, J., et al., “Enabling Fast, Dynamic Network Processing with ClickOS,” ACM SIGCOMM.
[24] RFC 4176, “Framework for Layer 3 Virtual Private Networks (L3VPN) Operations and Management,” 2005.

About the Authors
Dr. Qiang Duan is an associate professor of information sciences and technolo-
gy at the Pennsylvania State University, Abington College. His general research
interest is about data communications and computer networking. Currently his
active research areas include the next generation Internet architecture, network
virtualization and NFV, network-as-a-service, software-defined networking,
and cloud computing. He has published more than eighty journal articles and
conference papers and authored six book chapters. Dr. Duan is serving on the
editorial board as an associate editor or area editor for multiple international re-
search journals and has served on the technical program committees for numer-
ous international conferences, including GLOBECOM, ICC, ICCCN, AINA,
and WCNC. Dr. Duan received his Ph.D. degree in electrical engineering from
the University of Mississippi. He holds a B.S. degree in electrical and computer
engineering and an M.S. degree in telecommunications and electronic systems.
Mehmet Toy received his Ph.D. degree in electrical and computer engineering from Stevens Institute of Technology, Hoboken, NJ. He is currently a distinguished member of technical staff at Verizon Communications and is involved in SDN, NFV, and cloud architectures and standards. Prior to his current position, Dr. Toy held technical and management positions in well-known companies and startups including Comcast, Intel, Verizon Wireless, Fujitsu Network Communications, AT&T Bell Labs, and Lucent Technologies. He also served as a tenure-track faculty member and adjunct professor at various universities, including Stevens Institute of Technology, New Jersey Institute of Technology, Worcester Polytechnic Institute, and the University of North Carolina at Charlotte.
Dr. Toy contributed to the research and development of cloud, SDN, and virtualization-based commercial networks and services, carrier Ethernet, IP multimedia subsystem (IMS), optical, IP/MPLS, wireless, and ATM technologies. He holds a patent, has four pending patent applications, and has published numerous articles, six books, and a video tutorial in these areas and in signal processing.
Two of his books are being used as college textbooks, and one of them has been
translated into Turkish. Dr. Toy served on the Open Cloud Connect (OCC)
Board and IEEE Network Magazine Editorial Board, as well as a guest editor
of IEEE Communications Magazine, and in various capacities at the IEEE-USA
and IEEE ComSoc. He received various awards from Comcast, AT&T Bell
Labs, and IEEE-USA. He is a Senior Member of IEEE and chairs the IEEE
ComSoc Cable Networks and Services subcommittee.

Index
Abstraction Carrier Ethernet services, 271–79, 289
packet forwarding, 79–83 cCPE Selfcare, 301
resource, 5, 29–30, 100–1, 207–10 Central Office Rearchitected as Data Center
SDN virtualization, 100–1, 175 (CORD), 160–63
types of, 17 ClickOS, 146–47, 267
two-dimensional, 217–20 Cloud infrastructure service (CIS), 156–62
Access control enforcing (ACE), 215–16 Cloud services
Access control list (ACL), 260 applications, 18–22
Access E-line, 278–79 architectures, 239–42
Advanced FlowVisor, 179 CaaS, 263–64
Advanced message queuing protocol characteristics of, 236, 242–50
(AMQP), 66–67 conclusions on, 301–2
Amazon EC2, 266 devices, 18–22
Application-based network operation IaaS, 252–59
(ABNO), 187 NaaS, 251–52
Application-control plane interface (A-CPI), network functionalities, 299–301
5, 6, 7, 34 network unification and, 156–62,
Application-layer traffic optimization 297–301
(ALTO), 55–56, 111, 154 NFV components, 269–71
Application-specific integrated circuit life cycle services, 285–97
(ASIC), 265 PaaS, 261
Application plane, 5, 6 protocol stacks, 242
Asynchronous message, 46 SaaS, 262–63
Authentication and authorization (AA), service delivery, 236–37
215–16, 294 SECaaS, 259–60
standards, 239
technology emergence, 2, 18–22, 119,
Base station (BS), 196 235–38
Beacon open source controller, 59–60 virtualization and, 264–69
Border gateway patrol—link state (BGP-LS), Commercial off-the-shelf (COST) server, 13,
55–56 141–43
Broadband Forum (BBF), 202


Communication as a service (CaaS), 20, 21, decoupling, 33–34


249, 263–64 flow management, 57
Community cloud, 237 limitations, 78–79
Compute domain, 129–30 Decision plane, 27
Compute host (CH), 149 Decoupling
Compute host descriptor (CHD), 149 network control, 28–29
Connectivity agent, 67 service provisioning, 108, 170, 206
Container-based virtualization, 3 Device abstraction layer (DAL), 40
network integration and, 15 Direct-attached storage (DAS), 255
as server virtualization, 99 DISCO, 66–68
for SDN, 179–82 Disaster recovery (DR), 257–59
Content-addressable memory (CAM), 43 Discovery plane, 27
Content delivery network (CDN), 56, 263 Dissemination plane, 27
Contrail, 300–1 Distributed controller deployment, 60–63
Control orchestration protocol (COP), 187 Distributed hash table (DHT), 62
Control plane Docker, 162
overview, 3, 5, 6 Dynamic block store (DBS), 254
integrating, 170 Dynamic scaling, 120
performance, 7–8
SDN, 33, 39–40, 90, 205–7
Controller adapter (CA), 159 Editing instructions, 84
Controller-to-switch message, 46 Element management system (EMS),
Converged cloud and cellular system 285–86
(CONCERT), 196–98 End-to-end service, 10, 12, 76–78, 108,
Cooperating layer architecture, 40–42 126, 298
Coupling, 150 SDNV-based, 224–27
C-RAN, 196–98 Energy-aware network, 230
Customer-premise equipment (CPE), 199, Ethernet services, 21, 271–79
201 ETSI NFV, 21, 119, 121–24
Customer relationship management (CRM), cloud services, 267–69
291 NFV implementation, 149, 188
mapping SDN, 203
wireline access networks, 202
Data plane virtual network function, 131
programming, 87–88 Evolved packet core (EPC), 196
4D approach, 27 Extended SDN architecture, 214–17
functions, 28, 33
integration, 170
I/O operations, 13, 143–44 Feature velocity, 120
OpenFlow pipeline, 47–51 Floodlight, 174, 175
OpenFlow switch, 44–47 Flow entry instructions, 84
overview, 3, 5, 6 Flow granularity, 58
SDN switch, 52–44, 173 Flow instruction set (FIS), 83
term usage, 218 Flow management, 57–59
Data Plane Development Kit (DPDK), 13, Flow matching, 3
143–45, 266 Flow-based management language (FML),
Database service, 25657 68–69
Data-control plane interface (D-CPI), 5, 6, 7 Flow-based network access control
(FlowNAC), 215–16


Flow-based SDN switch, 175 Infrastructure network domain, NFVI,


FlowChecker, 69 130–31
FlowN, 15, 180–82 Infrastructure provider (InP), 104–5
Flowspace, 177–78 In-production testing, 120
FlowVisor, 14, 176–79, 181 Intrusion detection system (IDS), 195
Forwarding Integration, SDN and NFV, 14–17,
IETF framework, 27 169–71. See also Unified network
IRTF framework, 38–39 architecture
Forwarding abstractions working group Interface to routing system (I2RS), 56
(FAWG), 89–90 Internet Engineering Task Force (IETF),
Forwarding and control element separation 27, 56
(ForCES), 208–10 International Telecommunication
Forwarding graph (FG), 126 Union—Telecommunication
Forwarding instructions, 84 Standardization Sector (ITU-T),
Frenetic, 69 6, 37–38
Internet protocol (IP) services, 21, 73
Internet Research Task Force (IRTF), 6
Generic routing encapsulation (GRE), SDN architecture, 38–40
280–83 cooperating layer architecture, 41–42
Guest operating system (Guest OS), 9 Internet service provider (ISP), 104

Home subscriber server (HSS), 198 Kandoo controller, 62–63


Host functional block (HFB), 21
Hybrid cloud, 238
HyperFlow, 60–61, 63 Label-based packet switching, 73, 75
HyperNet, 183–85 Layer concept, 34–35, 218–19
Hypervisor, 3, 9 Life cycle services operations (LSO), 21–22,
controller placement, 64 285–97
multidomain SDN, 183 Link layer discovery protocol (LLDP), 55
network integration and, 14–15 Linux NAPI, 266
NFVI domain, 130 Location manager (LM), 185
Hypervisor-based SDN virtualization, Logical functional block (LFB), 208–10
98–99, 173, 176–79, 180 Logical port, 46
Logically centralized network control, 29,
62, 65
Information base (IB), 185
Infrastructure as a service (IaaS), 20, 189,
237, 246, 249 Maestro controller, 60
cloud computing, 252–54 Management and orchestration (MANO),
databases, 256–57 11, 12, 13
disaster recovery, 257–59 NFV framework, 123, 136–41
storage services, 254–56 SDN traffic steering, 194–95
Infrastructure controller, 206–7 SDN/NFV integration, 207
Infrastructure domain controller (IDC), 222, Management plane, 39–40
223 Microservices, 128
Infrastructure layer (IL), 159, 206 Mobile packet core (MPC), 16, 198–99, 200


Mobility management entity (MME), 16, hypervisor domain, 130


198–99, 200 infrastructure, 121, 128–31
Monitoring agent, 67 resource repository, 141
Multidomain SDN SDN in, 188–202, 203– 212
control application, 64–68 Network information base (NIB), 61
orchestration-based virtualization, Network infrastructure service (NIS), 155,
185–88 156–62
virtualization, 15, 182–88 Network infrastructure as a service (NIaaS),
Multilevel abstraction, 40 151–52, 191
Multiprotocol label switching (MPLS), 73, Network management system (NMS),
75 285–86
Network operating system (NOS), 51
Network programmability principle, SDN,
NETCONE, 40 29
NetCore, 69 Network resource description language
NetVM, 145–46 (NRDL), 110
Network as a service (NaaS), 13–14, 20, 96 Network service descriptor (NSD), 140
in NFV, 151, 152–54 Network service provisioning
in SDN, 154–56 combing, 203–17
SDN control in NFV, 210–12 end-to-end, 76–78
virtualized, 249, 251–52 SDN, 72–74
Network function (NF), 124–26 SDIA, 74–76
Network function information base (NF-IB), virtualization-based, 108, 152–56
159 Network topology management, 54–56
Network function virtualization (NFV), 2–4 Network traffic monitoring, 56–57
architecture, 121–24 Network virtualization (NV), 2, 3
challenges, 141–43, 170–71 challenges, 101–3
ClickOS, 146–47 conclusions on, 163–64
cloud services, 18–22 functional roles, 104–7
data plane, 143–44 lifecycle management, 107
integration of, 14–17, 169–71 network architecture, 103–4
infrastructure, 128–31 overview, 8–14, 95–97
innovation, 118–21 service provisioning, 108
MANO, 136–41 Network-attached storage (NAS), 255
network function, 124–26 NFV orchestrator (NFVO), 12, 13, 123,
network services, 126–28 138–40
NetVM, 145–46 NIST, 18, 236, 237–238, 239
open platform, 147–48 No bugs in control execution (NICE), 69
overview, 8–14 Nonuniform memory access (NUMA), 146
portability, 149 Northbound interface, 6. See also
resource repository, 141 Application-control plane interface
service function chaining, 192–95 (A-CPI)
SDIA framework, 76 NOX-based controller, 60, 174, 180
unified network, 297–302
VNF software, 131–36
Network function virtualization ONIX, 61–62
infrastructure (NFVI), 11–13, ONOS, 161, 175
15–16, 21 Open cloud connect (OCC), 238, 239–42
compute domain, 129–30 Open Networking Foundation (ONF)


FAWG, 89–90 packet processing, 85–87, 91


SDN architecture, 35–37 Protocol oblivious forwarding (POF), 8, 82,
SDN definition, 5, 6, 28 83–85, 91
WMWG, 199 Protocol stack, 242, 248
Open platform for NFV (OPNFV), 147–48 ProtoGENI, 184–85
Open vSwitch, 265, 300 Public cloud, 237–38
OpenDaylight, 175, 300 Pull mode, 56–57
OpenFlow, 7 Purist view, 102
abstract forwarding and, 82 Push mode, 56–57
data plane, 78–79, 89
FlowN, 180
FlowVisor, 14, 176–79 Quality of service, 229–30
integration, 300
pipeline processing, 47–51
resource abstraction, 208 Radio access network (RAN), 16, 196–98
routing function virtualization, 212–14 Radio interface equipment (RIE), 197
service function chaining, 192–94 Reachability agent, 67
Stanford University research, 27–28 Reactive flow management, 57, 58, 70
switch structure, 44–47 Reactive SDN application, 70
OpenNaaS, 211–12, 213 Relational database (RDS), 256
OpenStack, 154–56, 161, 191, 265, 297, Remote procedure call (RPC), 40
300 Representational state transfer (REST), 71,
Orchestration layer (OL), 159 151, 154–55. See also RESTful
Orchestration-based virtualization, 185–88 API
Reservation agent, 67
Reserved port, 46
P4 language, 85–87, 88, 91 Resource abstraction, 5, 29–30, 100–1,
Packet data network gateway (PGW), 16, 207–10
198–99, 200 Resource description and discovery (RDD),
Packet forwarding 10, 109–11
abstract model, 79–83 Resource description framework (RDF), 110
protocol oblivious, 82, 83–85 Resource orchestrator (RO), 159
Path computation agent, 67 RESTful API, 40, 68, 70–72, 212, 213
Path computation element (PCE), 27 Routing control platform (RCP), 27
Physical port, 46 Routing function virtualization over
Physical SB (P-SB) interface, 223 OpenFlow, 212–14
Planes concept, 34–35, 218–19 Rule versioning, 58–59
Platform as a service (PaaS), 20, 237, 246,
249, 261
Pluralist view, 9, 102–3 SB interface, 223
Priority code point (PCP), 178 Security, 115–18
Private cloud, 237 Security as a service (SECaaS), 20, 249,
Proactive flow management, 57–58 259–60
Proactive SDN application, 69–70 Service, 210
Protocol independent layer (PI–layer), 8 Service abstraction layer (SAL), 40
data plane limitations, 78–79 Service chain (SC), 126, 276–79
data plane programming, 87–88 Service control orchestration, 291–92
ONF forwarding, 89–90 Service delivery orchestration, 292
packet forwarding, 79–85 Service function (SF), 298–99


Service function chaining (SFC), 16, features, 30–32


192–95, 267 integration, 14–17, 169–71, 297–302
Service gateway (SGW), 196, 198–99 introduction to, 2–4, 4–8, 25–28
Service layer (SL), 159 multidomain control, 64–68
Service provider (SP), 104–6 using in NFV, 188–212
Service tenant layer, 206 ONF architecture, 35–37
Service provisioning. See Network service RESTful interface, 70–71
provisioning quality of service, 229–30
Service-oriented architecture (SOA), 13, virtualization in, 172–87
150–52, 210 See also SDN switch
SDN control applications Software-defined network virtualization
programming languages, 68–69 (SDNV), 17, 220–24, 225, 231
general design, 69–70, 91 Southbound interface (SBI), 5. See also
SDN controller, 30 Data-control plane interface
applications, 68–70 (D-CPI)
architecture, 51–54 Storage services, 254–56
distributed deployment, 60–63 Survivability, 117–18
functions, 54–59 SWA interface, 133, 190
multidomain control, 64–68 Symmetric message, 46–47
performance, 59–64
placement, 63–64
SDN inline services and forwarding Table dependence graph (TDG), 86
(StEERING), 192–94 Table type pattern (TTP), 89–90
SDNi protocol, 65–66 Table-miss, 51, 57
SDN Research Group (SD-NRG), 6, 38–40 Telecommunication service provider (TSP),
SDN switch 119
OpenFlow standard, 7, 44–47 Tenant controller, 175–76, 180–82, 206–7
key components, 42–44, 90 Ternary content-addressable memory
pipeline processing, 47–51 (TCAM), 43
Single root I/O virtualization (SR–IOV), Time-to-live (TTL), 263
143–44 Topology server/routing server (TS/RS), 185
Software as a service (SaaS), 20, 236–37, Traffic matric (TM), 57
246, 249, 262–63 Transport layer security (TLS), 46
Software-defined internet architecture Two-dimensional abstraction, 217–20, 231
(SDIA), 8
architectural framework, 74–76
end-to-end service provisioning, 76–78 Unified network architecture
service provisioning challenges, 72–74 challenges, 227–30
Software-defined networking (SDN) cloud services, 297–301
architecture, 32–35, 37–40 SDNV, 220–27
concepts, 28–30 two-dimensional abstraction, 217–20,
conclusions on, 90–91 231
control applications, 68–70, 91 UNIFY project, 157–60
controller architecture, 51–54
controller functions, 54–59
controller performance, 59–64 VNF as a service (VNFaaS), 153–54,
cooperating layer architecture, 40–42 224–27
data plane, 42–51 VNF forwarding graph (VNF-FG), 188–89
VNF manager (VNFM), 12, 13, 123


Virtual compute function (VCF), 221–24 Virtualization-based network service


Virtual CPE (vCPE), 300–1 provisioning
Virtual execution infrastructure description network as a service, 152–56
language (VXDL), 110 network-cloud unification, 156–62
Virtual infrastructure manager (VIM), 190 service-oriented architecture, 150–52
Virtual link mapping (VLM), 111, 113–14 software-defined control, 154–56
Virtual local network (VLAN), 102 virtual network platform, 154
Virtual machine (VM) Virtualized carrier Ethernet services, 271–79
life cycle services, 287, 289 Virtualized cloud services
multiple independent, 172 architectures, 267–69
unified network, 297, 298 NFV components, 269–71
Virtual NB interface, 223 Virtualization network function (VNF),
Virtual network embedding (VNE), 111–15 11–13, 264–67
Virtual network function (VNF), 131–36, Virtualized infrastructure manager (VIM),
221–24 12, 13, 123, 136–38
Virtual network function manager (VNFM), Virtualized IP VPN, 279–85
38 Virtualized residential gateway (vRGW),
Virtual network mapping (VNM), 111, 113, 201–2
114 Virtualized routing function (VRF), 212–14
Virtual network operator (VNO), 106 Virtual link mapping (VLM), 10
Virtual network platform as a service Virtual local area network (VLAN), 9
(VNPaaS), 154 Virtual machine (VM), 9, 97–101
Virtual network provider (VNP), 105 Virtual network (VN), 9
Virtual network technologies lifestyle management, 107
control and management, 228–29 services overview, 18–22
embedding, 111–15 Virtual network embedding (VNE), 10
objective, 108–109 Virtual network operator (VNO), 9
resource description/discovery, 109–11 Virtual network provider (VNP), 9, 10
security, 115–17 Virtual node mapping (VNM), 10
survivability, 117–18 Virtual private network (VPN), 9, 279–85
Virtual NIC, 99–100 VN customer (VNC), 106–7
Virtual private network (VPN), 102 VNF component (VNFC), 13, 132
Virtual SB (V-SB), 223 VNF descriptor (VNFD), 133–3, 140
Virtual SDN (vSDN), 174–79 VNF forwarding graph, 265
Virtual service function (VSF), 221–24, VNF FG descriptor (VNFFGD), 140
228–29 VNF instance (VNFI), 13, 133, 140
Virtualization VNFC instance (VNFCI), 133
in computing, 97–101, 227–28
in networking, 95–97, 227–28.
in software-defined network, 172–87 Wide area network (WAN), 298, 300
See also Network function Wireline access network, 199, 201–2
virtualization (NFV); Network
virtualization (NV); Virtual
network technologies; XML-based language, 227–28
Virtualization-based network XML–RPC, 184
service provisioning XOS, 162
