NFV - Introduction
• NFV Introduction
• NFV Architecture
Service provision within the telecommunications industry has traditionally been based
on network operators deploying physical proprietary devices and equipment for each
function that is part of a given service.
In addition, service components have strict chaining and/or ordering that must be
reflected in the network topology and in the location of service elements.
These, coupled with requirements for high quality, stability and stringent protocol
adherence, have led to long product cycles, very low service agility and heavy
dependence on specialized hardware.
However, the requirements by users for more diverse and new (short-lived) services
with high data rates continue to increase.
Continuous Investment
This not only requires advanced and rapidly changing skills for the technicians
operating and managing this equipment, but also dense deployments of network
equipment such as base stations.
Even with these high customer demands, the resulting increase in capital and
operational costs cannot be translated into higher subscription fees: TSPs have
learned that, given the high competition both among themselves and from services
provided over-the-top on their data channels, increasing prices only leads to
customer churn.
Therefore, TSPs have been forced to find ways of building more dynamic and service-
aware networks with the objective of reducing product cycles, operating & capital
expenses and improving service agility.
Network Function Virtualization (NFV) has been proposed as a way to address these
challenges by leveraging virtualization technology to offer a new way to design, deploy
and manage networking services.
The main idea of NFV is the decoupling of physical network equipment from the
functions that run on them.
This means that a network function - such as a firewall - can be dispatched to a TSP as
an instance of plain software.
This allows for the consolidation of many network equipment types onto high volume
servers, switches and storage, which could be located in data centers, distributed
network nodes and at end user premises.
It provides the tools to create an ecosystem of partners who create new services that may
include contextual based communication, such as in the medical or transportation industry
where real time communications are combined with access to various data and services.
vPCRF - vPolicy and Charging Rules Function (PCRF) - It provides the ability to
authorize and offer personalized services, increasing service usage and strengthening
customer relationships through better usability. The PCRF is key to securing network
behavior when users access data services: authorizing the services, assigning the
QoS of the data session and the charging to be applied. It performs a business-critical
role in the monetization and differentiation of an operator's mobile and converged
broadband commercial packages.
vDSC – vDiameter Signaling Controller - Diameter signaling is used for Policy Control,
Subscriber Registration, Charging & Roaming Procedures in EPC and IMS. Diameter
Signaling Controller (DSC) is the key network component to secure and centralize Diameter
communication. DSC is a product that supports standard IETF/3GPP Diameter
functionalities.
vIMS - vIP Multimedia Subsystem (IMS) is a core network solution that delivers rich real-
time communication services for both consumer and business users over any access
network and for any device types, including smartphones, tablets, wearables, laptops and
fixed phones. Examples of communication services are HD voice (VoLTE), Wi-Fi calling,
enriched messaging, enriched calling with pre-call info, video calling, HD video
conferencing and web communication.
© Nex-G Exuberant Solutions Pvt. Ltd.
vMME – vMobility Management Entity helps an operator scale its network's capacity,
flexibility and manageability. With the virtual MME, the operator's network can be adjusted
according to capacity needs. A flexible and manageable network means that software
upgrades and expansions can be rolled out quickly. This enables operators to launch new
services faster and more efficiently, with shorter lead times and lower capex and opex.
vUDC - vUser Data Consolidation - The UDC solution eliminates the complexity of managing
subscriptions by consolidating subscriber information from the different network silos into one
common repository, creating a Unified Profile. This allows for a simpler and more scalable
network topology, greater efficiency in managing the network database resources, and
more flexibility to introduce new services.
vSBG - vSession Border Gateway. The SBG, integrated with a Session Border Controller, is a
product for virtualized communication networks that guarantees security and interoperability on
the border between IMS and other networks, for both signaling and media. The Session
Border Controller provides one commercial offering for Voice over LTE (VoLTE), Voice over
WiFi (VoWiFi), Video over LTE (ViLTE), Rich Communication Services (RCS), interconnect,
fixed VoIP and web communication solutions.
vEMA – vEricsson Multi Activation is integrated with the Service Activation platform for
service provisioning on IMS.
vEPC - Virtual Evolved Packet Core (vEPC) is a framework for virtualizing the functions
required to converge voice and data on 4G Long-Term Evolution (LTE) networks. vEPC
moves the core network's individual components that traditionally run on dedicated
hardware to software that operates on low-cost commercial off-the-shelf (COTS) servers.
The VNFs may then be relocated and instantiated at different network locations (e.g.,
aimed at introduction of a service targeting customers in a given geographical location)
without necessarily requiring the purchase and installation of new hardware.
NFV promises TSPs more flexibility to further open up their network capabilities and
services to users and other services, and the ability to deploy or support new network
services faster and cheaper so as to realize better service agility.
To achieve these benefits, NFV introduces a number of differences in the way network
service provisioning is realized in comparison to current practice.
This allows separate development timelines and maintenance for software and
hardware.
This helps network operators deploy new network services faster over the same
physical platform.
• Dynamic Scaling - The decoupling of the functionality of the network function into
instantiable software components provides greater flexibility to scale the actual VNF
performance in a more dynamic way and with finer granularity, for instance,
according to the actual traffic for which the network operator needs to provision
capacity.
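The scaling decision itself can be sketched in a few lines. The helper below is purely illustrative (the per-instance capacity and the 20% headroom are assumed parameters, not values from any NFV specification): it computes how many VNF instances a given offered load would require.

```python
import math

def required_instances(offered_load_mbps: float, per_instance_mbps: float,
                       headroom: float = 0.2) -> int:
    """Illustrative autoscaling rule: instances needed for the current load,
    keeping a safety headroom above the measured traffic."""
    if offered_load_mbps <= 0:
        return 1  # keep a minimum footprint so the service stays reachable
    return max(1, math.ceil(offered_load_mbps * (1 + headroom) / per_instance_mbps))
```

For example, 1000 Mbps of traffic against a hypothetical 400 Mbps per-instance capacity yields three instances; a real orchestrator would feed measured traffic into a rule of this shape.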
The general concept of decoupling NFs from dedicated hardware does not necessarily
require virtualization of resources.
This means that TSPs could still purchase or develop software (NFs) and run it on
physical machines.
The difference is that these NFs would have to be able to run on commodity servers.
However, the gains (such as flexibility, dynamic resource scaling, energy efficiency)
anticipated from running these functions on virtualized resources are very strong selling
points of NFV.
The concept and collaborative work on NFV was born in October 2012 when a number
of the world’s leading TSPs jointly authored a white paper calling for industrial and
research action.
In November 2012 seven of these operators (AT&T, BT, Deutsche Telekom, Orange,
Telecom Italia, Telefonica and Verizon) selected the European Telecommunications
Standards Institute (ETSI) to be the home of the Industry Specification Group for NFV
(ETSI ISG NFV).
Specifications:-
http://www.etsi.org/technologies-clusters/technologies/nfv
Six years and over 100 publications later, the ISG NFV community has evolved through
several phases, its publications have moved from pre-standardization studies to detailed
specifications (see Release 2 and Release 3) and the early Proof of Concepts (PoCs)
efforts have evolved and led to interoperability events (Plugtests).
This large community (300+ companies including 38 of the world's major service
providers) is still working intensely to develop the required standards for NFV as well as
sharing their experiences of NFV implementation and testing.
The last set of NFV standards was released in August 2018 (Version 3.1.1).
ISG NFV, like any other ETSI Industry Specification Group is open to ETSI members
and non-members alike, with different conditions depending on ETSI membership
status.
ETSI’s aim is to produce requirements and potential specifications that TSPs and
equipment vendors can adapt for their individual environments, and which may be
developed by an appropriate standards development organization (SDO).
However, since standards bodies such as the 3GPP are in liaison with ETSI, these
proposals can be expected to be generally accepted and adopted as standards.
3GPP’s Telecom Management working group (SA5) is also studying the management of
virtualized 3GPP network functions.
NFV Examples
ETSI has proposed a number of use cases for NFV. In this subsection, we will
explain how NFV may be applied to Customer Premises Equipment (CPE), and to an
Evolved Packet Core (EPC) network.
In Figures 1 and 2, we use an example of a CPE to illustrate the economies of scale that
may be achieved by NFV.
For example, if the functions are part of a service chain, it may be required to perform
firewall functions before NAT.
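This ordering constraint can be sketched as ordinary function composition. In the toy Python below, a packet is a plain dict and the policy values (`BLOCKED_PORTS`, `PUBLIC_IP`) are made up for illustration; real service chains are realized in the data plane, not in userspace Python.

```python
BLOCKED_PORTS = {23}          # hypothetical policy: block telnet
PUBLIC_IP = "203.0.113.1"     # hypothetical NAT address

def firewall(pkt):
    # Drop packets addressed to a blocked destination port.
    return None if pkt["dport"] in BLOCKED_PORTS else pkt

def nat(pkt):
    # Rewrite the private source address to the public one.
    return {**pkt, "src": PUBLIC_IP}

def run_chain(pkt, chain):
    # Apply the chained functions in order; None means the packet was dropped.
    for fn in chain:
        pkt = fn(pkt)
        if pkt is None:
            return None
    return pkt
```

Running `[firewall, nat]` in that order ensures blocked traffic is dropped before any address rewriting happens; reversing the chain would NAT packets that should never leave.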
With such an implementation, if there is a need to make changes to the CPE, say, by
adding, removing or updating a function, it may be necessary for a technician from the
ISP to individually talk to or go to each of the customers.
This is not only expensive (operationally) for the ISPs, but also for the customers.
This makes the changes described above easier since, for example, updating the DHCP
for all customers would only involve changes at the ISP. In the same way, adding
another function such as parental controls for all or a subset of customers can be done
at once.
In addition to saving on operational costs for the ISP, this potentially leads to cheaper
CPEs if considered on a large scale.
Virtualizing the EPC is another example of NFV that has attracted a lot of attention from
industry.
The EPC is the core network for Long Term Evolution (LTE) as specified by 3GPP.
On the left side of Fig. 3, we show a basic architecture of LTE without NFV.
The User Equipment (UE) is connected to the EPC over the LTE access network (E-
UTRAN).
The evolved NodeB (eNodeB) is the base station for LTE radio.
The EPC is made up of four NFs: Serving Gateway (S-GW), Packet Data Network (PDN)
Gateway (PGW), Mobility Management Entity (MME), and Policy and Charging Rules
Function (PCRF).
It is also connected to external networks, which may include the IP Multimedia Core
Network Subsystem (IMS).
In the current EPC, all its functions are based on proprietary equipment.
Therefore, even minor changes to a given function may require a replacement of the
equipment.
The same applies to cases when the capacity of the equipment has to be changed.
On the right side of Fig. 3, we show the same architecture in which the EPC is
virtualized. In this case, either all functions in the EPC, or only a few of them are
transferred to a shared (cloud) infrastructure.
Virtualizing the EPC could potentially lead to better flexibility and dynamic scaling, and
hence allow TSPs to respond easily and cheaply to changes in market conditions.
For example, as represented by the number of servers allocated to each function in Fig.
3, there might be a need to increase user plane resources without affecting the control
plane.
In this case, VNFs such as a virtual MME may scale independently according to their
specific resource requirements.
In the same way, VNFs dealing with the data plane might require a different number of
resources than those dealing with signaling only.
Finally, it also allows for easier software upgrades on the EPC network functions, which
would hence allow for faster launch of innovative services.
Virtual EPS
According to ETSI, the NFV Architecture is composed of three key elements: Network
Function Virtualization Infrastructure (NFVI), VNFs and NFV MANO.
We represent them graphically in Fig. 4. In this section, these elements are defined.
The NFVI is the combination of both hardware and software resources which make up
the environment in which VNFs are deployed.
Virtual resources are abstractions of the computing, storage and network resources.
In a data center environment, the computing and storage resources may be represented
in terms of one or more Virtual Machines (VMs), while virtual networks are made up of
virtual links and nodes.
A virtual node is a software component with either hosting or routing functionality, for
example an operating system encapsulated in a VM.
A NF is a functional block within a network infrastructure that has well defined external
interfaces and well-defined functional behaviour.
Examples of NFs are elements in a home network, e.g. Residential Gateway (RGW);
and conventional network functions, e.g. DHCP servers, firewalls, etc.
A single VNF may be composed of multiple internal components, and hence it could be
deployed over multiple VMs, in which case each VM hosts a single component of the
VNF.
In the case of NFV, the NFs that make up the service are virtualized and deployed on
virtual resources such as VMs. However, from the users' perspective, the services -
whether based on functions running on dedicated equipment or on VMs - should have the
same performance.
The number, type and ordering of VNFs that make it up are determined by the service’s
functional and behavioral specification.
Therefore, the behaviour of the service is dependent on that of the constituent VNFs.
According to the ETSI’s MANO framework, NFV MANO provides the functionality
required for the provisioning of VNFs, and the related operations, such as the
configuration of the VNFs and the infrastructure these functions run on.
It also includes databases that are used to store the information and data models which
define both deployment as well as lifecycle properties of functions, services, and
resources.
In addition the framework defines interfaces that can be used for communications
between the different components of the NFV MANO, as well as coordination with
traditional network management systems such as Operations Support System (OSS)
and Business Support Systems (BSS) so as to allow for management of both VNFs as
well as functions running on legacy equipment.
The NFV architectural framework identifies functional blocks and the main reference
points between such blocks.
Some of these are already present in current deployments, whilst others might be
necessary additions in order to support the virtualisation process and consequent
operation.
The next figure shows the NFV architectural framework, depicting the functional blocks
and reference points in the NFV framework.
The main (named) reference points and execution reference points are shown by solid
lines and are in the scope of NFV.
The dotted reference points are available in present deployments but might need
extensions for handling network function virtualisation.
However, the dotted reference points are not the main focus of NFV at present.
The architectural framework shown focuses on the functionalities necessary for the
virtualisation and the consequent operation of an operator's network.
It does not specify which network functions should be virtualised, as that is solely a
decision of the owner of the network.
The functional behaviour and the external operational interfaces of a Physical Network
Function (PNF) and a VNF are expected to be the same.
For example, one VNF can be deployed over multiple VMs, where each VM hosts a
single component of the VNF.
The Element Management performs the typical management functionality for one or
several VNFs.
NFV Infrastructure
The NFV Infrastructure is the totality of all hardware and software components which
build up the environment in which VNFs are deployed, managed and executed.
The NFV Infrastructure can span across several locations, i.e. places where NFVI-PoPs
are operated.
The network providing connectivity between these locations is regarded to be part of the
NFV Infrastructure.
From the VNF's perspective, the virtualisation layer and the hardware resources look
like a single entity providing the VNF with desired virtualised resources.
Hardware Resources
In NFV, the physical hardware resources include computing, storage and network that
provide processing, storage and connectivity to VNFs through the virtualisation layer
(e.g. hypervisor).
Computing and storage resources are commonly pooled. Network resources comprise
switching functions (e.g. routers) and wired or wireless links.
• NFVI-PoP network: the network that interconnects the computing and storage
resources contained in an NFVI-PoP.
The virtualisation layer abstracts the hardware resources and decouples the VNF
software from the underlying hardware, thus ensuring a hardware independent lifecycle
for the VNFs.
Role Of Hypervisor
The virtualisation layer in the middle ensures VNFs are decoupled from hardware
resources and therefore, the software can be deployed on different physical hardware
resources.
Typically, this type of functionality is provided for computing and storage resources in
the form of hypervisors and VMs.
The NFV architectural framework does not restrict itself to using any specific
virtualisation layer solution.
NFV expects to use virtualisation layers with standard features and open execution
reference points towards VNFs and hardware (computation, network and storage).
VMs may have direct access to hardware resources (e.g. network interface cards) for
better performance.
In NFV, VMs shall always provide standard ways of abstracting hardware resources
without restricting their instantiation to, or dependence on, specific hardware
components.
The use of hypervisors is one of the present typical solutions for the deployment of
VNFs.
Other solutions to realize VNFs may include software running on top of a non-virtualised
server by means of an operating system (OS), e.g. when hypervisor support is not
available, or VNFs implemented as an application that can run on virtualised
infrastructure or on bare metal.
Several techniques allow this, including network abstraction layers that isolate resources
via virtual networks and network overlays, including Virtual Local Area Network (VLAN),
Virtual Private LAN Service (VPLS), Virtual Extensible Local Area Network (VxLAN),
Network Virtualisation using Generic Routing Encapsulation (NVGRE), etc.
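To make one of these overlays concrete: VXLAN prepends an 8-byte header (defined in RFC 7348) carrying a 24-bit Virtual Network Identifier (VNI) to each encapsulated Ethernet frame. The sketch below packs that header in Python; real deployments rely on kernel or switch datapaths rather than userspace packing.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348:
    1 byte of flags (I bit set), 3 reserved bytes,
    a 3-byte VNI, and 1 trailing reserved byte."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # I flag: the VNI field is valid
    # '!B3xI' = flags byte, 3 pad bytes, then VNI shifted into the top 24 bits
    return struct.pack("!B3xI", flags, vni << 8)
```

The resulting header is carried over UDP (destination port 4789), which is what lets the virtual network topology be independent of the physical one.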
Other possible forms of virtualisation of the transport network include centralizing the
control plane of the transport network and separating it from the forwarding plane, and
isolating the transport medium, e.g. in optical wavelengths, etc.
According to the list of hardware resources specified in the architecture, the Virtualised
Infrastructure Manager performs:
• Operations, for:
• Visibility into and management of the NFV infrastructure.
• Root cause analysis of performance issues from the NFV infrastructure
perspective.
• Collection of infrastructure fault information.
• Collection of information for capacity planning, monitoring, and optimization.
NFV Orchestrator
VNF Manager(s)
A VNF Manager is responsible for VNF lifecycle management (e.g. instantiation, update,
query, scaling, termination).
Multiple VNF Managers may be deployed; a VNF Manager may be deployed for each
VNF, or a VNF Manager may serve multiple VNFs.
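These lifecycle operations can be pictured with a toy manager class. The state names and record structure below are illustrative only, not taken from the ETSI specifications:

```python
class VnfManager:
    """Illustrative VNF lifecycle manager: instantiate, query, scale, terminate."""

    def __init__(self):
        self._vnfs = {}  # vnf_id -> lifecycle record

    def instantiate(self, vnf_id):
        # Create a new VNF with a single running instance.
        self._vnfs[vnf_id] = {"state": "INSTANTIATED", "instances": 1}

    def scale(self, vnf_id, delta):
        # Grow or shrink the instance count, never below one.
        rec = self._vnfs[vnf_id]
        rec["instances"] = max(1, rec["instances"] + delta)

    def query(self, vnf_id):
        # Report the current lifecycle record, or None if unknown.
        return self._vnfs.get(vnf_id)

    def terminate(self, vnf_id):
        self._vnfs[vnf_id]["state"] = "TERMINATED"
```

A deployment with one manager per VNF would run many such objects; a shared manager would hold many entries in one.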
This data-set provides information regarding the VNF deployment template, VNF
Forwarding Graph, service-related information, and NFV infrastructure information
models.
Reference Points
This reference point interfaces the virtualisation layer to hardware resources to create
an execution environment for VNFs, and collect relevant hardware resource state
information for managing the VNFs without being dependent on any hardware platform.
This reference point represents the execution environment provided by the NFVI to the
VNF. It does not assume any specific control protocol. It is in the scope of NFV in order
to guarantee hardware independent lifecycle, performance and portability requirements
of the VNF.
This layer abstracts and logically partitions physical hardware resources and anchors
between the VNF and the underlying virtualised infrastructure.
The primary tools to realize the virtualisation layer would be the hypervisors.
On top of such a virtualisation layer, the primary means of VNF deployment would be
instantiating it in one or more VMs.
Therefore, the virtualisation layer shall provide open and standard interfaces towards
the hardware resources as well as the VNF deployment container, e.g. VMs, in order to
ensure independence among the hardware resources, the virtualisation layer and the
VNF instances.
Network functions are well defined; hence so are both their functional behaviour and
their external interfaces.
A VNF can be decomposed into smaller functional modules for scalability, reusability,
and/or faster response.
Alternatively, multiple VNFs can be composed together to reduce management and VNF
Forwarding Graph complexity.
• VNF decomposition, as the process whereby a higher-level VNF is split into a set of
lower-level VNFs. The NFV ISG shall provide guidelines in determining how VNF
decomposition should be standardized.
We can take the Evolved Packet Core (EPC), Serving Gateway (SGW) and Packet Data
Network Gateway (PGW) NFs to illustrate the above.
Both GW types can be provided as VNFs in their own right with the applicable properties
of VNFs as described in the present document.
It is also conceivable that vendors could implement the SGW and the PGW
in combination, thereby creating a new combined VNF "SGW-PGW".
A vendor that initially implements this combined VNF could also create independent
VNFs for the SGW and the PGW.
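The composition can be sketched as follows: the combined VNF exposes a single entry point and hides its components. The class names and the string-based "processing" are purely illustrative.

```python
class SGW:
    """Stand-in for a Serving Gateway component."""
    def handle(self, pkt):
        return f"sgw({pkt})"

class PGW:
    """Stand-in for a PDN Gateway component."""
    def handle(self, pkt):
        return f"pgw({pkt})"

class SgwPgwCombined:
    """Composed 'SGW-PGW' VNF: internal components are not
    individually visible or manageable from the outside."""
    def __init__(self):
        self._sgw, self._pgw = SGW(), PGW()

    def handle(self, pkt):
        # Traffic traverses the SGW first, then the PGW.
        return self._pgw.handle(self._sgw.handle(pkt))
```

Decomposing the combined VNF back into separate `SGW` and `PGW` VNFs would re-expose two management interfaces where the composed form showed one.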
Different management needs may arise when VNFs are composed or decomposed out
of other VNFs.
This differs from the case where VNF Forwarding Graphs are constructed and each VNF
is individually manageable: in a composed VNF, individual VNF management interfaces
may not be visible, whereas in a decomposed VNF more management interfaces may be
created.
Such VNF components can only exist within the context of their "parent" VNF.
The decoupling of a VNF from the underlying hardware resources presents new
management challenges.
Such decoupling also presents challenges in determining faults and correlating them for
a successful recovery over the network.
While designing the NFV Management and Orchestration, such challenges need to be
addressed.
In order to perform its task, the NFV Management and Orchestration should work with
existing management systems such as OSS/BSS, hardware resource management
system, CMS used as a Virtualised Infrastructure Manager, etc. and augment their
ability in managing virtualisation-specific issues.
Performance
Performance and scalability are important since the implementation of a VNF may have
a per-instance capacity that is less than that of a corresponding physical network
function on dedicated hardware.
Therefore, methods are needed to split the workload across many distributed/clustered
VNF instances.
Reliability
NFV should provide reliability as high as that provided with equivalent non-virtualised
legacy network functions, but with improved cost efficiency.
To address this issue, functions can be organized into classes or categories that have
similar reliability requirements.
Security
Physical network functions assume a tight coupling of the NF software and hardware
which, in most cases, is provided together by a single vendor.
In the NFV scenario, multiple vendors are expected to be involved in the delivery and
setup of different NFV elements (e.g. hardware resources, virtualisation layer, VNF,
virtualised infrastructure manager, etc.).
As a result, due to the virtualisation process, new security issues need to be addressed.
• The use of hypervisors may introduce additional security vulnerabilities, even though
hypervisor-based virtualisation is the state of the art.
To ensure that the right hypervisor is being executed calls for authenticating the
hypervisor at the boot time through secure boot mechanisms.
• The usage of shared storage and shared networking may also add additional
dimensions of vulnerability.
• The execution of diverse VNFs over the NFV infrastructure can also create additional
security issues, in particular if VNFs are not properly isolated from others.