Building The Network of The Future: Getting Smarter, Faster, and More Flexible With A Software Centric Approach 1st Edition John Donovan
Building the Network
of the Future
Getting Smarter, Faster, and More Flexible
with a Software Centric Approach
by
John Donovan and Krish Prabhu
Trademarks or registered trademarks of others referenced in this book are used for informational purposes only.
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been
made to publish reliable data and information, but the author and publisher cannot assume responsibility for the
validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the
copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to
publish in this form has not been obtained. If any copyright material has not been acknowledged, please write and let
us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or
utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including pho-
tocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission
from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://
www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA
01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users.
For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been
arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for
identification and explanation without intent to infringe.
Foreword
It’s easy to forget just how thoroughly mobile connectivity has changed our relationship with
technology. Any new piece of hardware is judged not just for the processor, memory, or camera
it contains but for how well it connects. Wi-Fi, fiber, 4G—and soon 5G—have become the new
benchmarks by which we measure technological progress.
In many ways, this era of high-speed connectivity is a lot like the dawn of the railroads or the
superhighways in the nineteenth and twentieth centuries, respectively. Like those massive infra-
structure projects, connectivity is redefining society. And it is bringing capabilities and knowledge
to all facets of society, enabling a new era of collaboration, innovation, and entertainment.
Yet most people take networks for granted because so much of what happens on them is hidden
from view. When we use our phones or tablets or stream video to our connected cars, we expect
the experience to be simple and seamless. The last thing we should care about is how they are con-
necting or onto what networks. And all the while we continue to place ever-heavier loads on these
networks, which now include 4K video and virtual reality.
Keeping up with these demand curves is a monumental task. In fact, knowing how to manage,
secure, transport, and analyze these oceans of data might be the single greatest logistical challenge
of the twenty-first century.
Needless to say, this is a relatively new phenomenon. It used to be that the biggest test for any
telephone network was the ability to handle the explosion of calls on Mother’s Day. Now, any day
can be the busiest for a network—when an entire season of a popular show is released or when the
latest smartphone OS upgrade goes live. A network today must always be prepared.
This paradigm shift requires a new kind of network, one that can respond and adapt in
near-real time to constant and unpredictable changes in demand while also being able to detect and
deflect all manner of cyber threats.
The old network model completely breaks down in the face of this. That’s why software-defined
networking (SDN) and network function virtualization (NFV) are moving from concept to imple-
mentation so rapidly. This is the network of the future: open-sourced, future-proofed, highly secure,
and flexible enough to scale up to meet any demand.
This book lays out much of what we have learned at AT&T about SDN and NFV. Some of the
smartest network experts in the industry have drawn a map to help you navigate this journey. Their
goal is not to predict the future but to help you design and build a network that will be ready for
whatever that future holds. Because if there’s one thing the last decade has taught us, it’s that net-
work demand will always exceed expectations. This book will help you get ready.
Randall Stephenson
Chairman and Chief Executive Officer, AT&T
Acknowledgments
The authors would like to acknowledge the support of, and contributions by, Mark
Austin, Rich Bennett, Chris Chase, John DeCastra, Ken Duell, Toby Ford, Brian Freeman, Andre
Fuetsch, Kenichi Futamura, John Gibbons, Mazin Gilbert, Paul Greendyk, Carolyn Johnson, Hank
Kafka, Rita Marty, John Medamana, Kathleen Meier-Hellstern, Han Nguyen, Steven Nurenberg,
John Paggi, Anisa Parikh, Chris Parsons, Paul Reeser, Brian Rexroad, Chris Rice, Jenifer Robertson,
Michael Satterlee, Raj Savoor, Irene Shannon, Tom Siracusa, Greg Stiegler, Satyendra Tripathi, and
Jennifer Yates.
The authors would also like to acknowledge the contributions of Tom Restaino and Gretchen
Venditto in the editing of the book.
All royalties will be donated to the AT&T Foundation and will be used to support STEM
education.
Authors
John Donovan is chief strategy officer and group president of AT&T Technology and Operations
(ATO). Previously, Donovan was executive vice president of product, sales, marketing, and operations at
VeriSign Inc., a technology company that provides Internet infrastructure services. Prior to that,
he was chairman and CEO of inCode Telecom Group Inc. Mr. Donovan was also a partner with
Deloitte Consulting, where he was the Americas Industry Practice director for telecom.
Krish Prabhu retired as president of AT&T Labs and chief technology officer for AT&T, where he
was responsible for the company’s global technology direction, network architecture and evolution,
and service and product design. Krish has an extensive background in technology innovation from
his time at AT&T Bell Laboratories, Alcatel, Morgenthaler Ventures, and Tellabs. He also served as
COO of Alcatel, and CEO of Tellabs.
1 The Need for Change
John Donovan and Krish Prabhu
Although the early history of electrical telecommunication systems dates back to the 1830s, the
dawn of modern telecommunications is generally viewed as Alexander Graham Bell’s invention of
the telephone in 1876. For the first 100 years, the telecom network was primarily “fixed”—the users
were restricted to being stationary (other than some limited mobility offered by cordless phones)
in locations that were connected to the telecom infrastructure through fixed lines. The advent of
mobile telephony in the 1980s made it possible to move around during a call, eventually ushering in
the concept of “personal communication services,” allowing each user to conduct calls on a
personal handset, whenever and wherever desired. To facilitate this, the fixed
network was augmented with special mobility equipment (towers with radio antennas, databases
that tracked the movement of users, etc.) that interoperated with the existing infrastructure.
The invention of the web browser brought the Internet into the mainstream in the mid-1990s. Like
mobile networks, the Internet builds on top of the fixed telecom network infrastructure, augmenting
it with special equipment (modems, routers, servers, etc.). The development of the Internet has not
only made it possible to bring communication services to billions of people, but has also dramatically
transformed the business and social worlds. Today, the widespread use of the Internet by
people all over the world and from all walks of life has made it possible for Internet Protocol (IP) to
be not only the data network protocol of choice, but also the technology over which almost all forms
of future networks are built, including the traditional public telephone network.
At a high level, a telecom network can be thought of as comprising two main components: a core
network with geographically distributed shared facilities interconnected through information trans-
port systems, and an access network comprising dedicated links (copper, fiber, or radio) connecting
individual users (or their premises) to the core network. The utility provided by the infrastructure is
largely dependent on availability of transport capacity. The Communications Act of 1934 called for
“rapid, efficient, Nation-wide, and world-wide wire and radio communication service with adequate
facilities at reasonable charges” to “all the people of the United States.” This principle of universal
service helped make telephone service ubiquitous. Large capital investments were made to deploy
transport facilities to connect customers who needed telephone service, across wide geographic areas.
The deployed transport capacity was far greater than what was needed for basic voice service, and as
new technology solutions developed (digital subscriber lines, optical fiber, etc.), it became clear that the
network was capable of doing much more than providing voice services. The infrastructure that
supported ubiquitous voice service evolved to support data services and provide broadband access to the
Internet. Today, according to the Federal Communications Commission (FCC), high-speed (broadband)
Internet is an essential communication technology and ought to be as ubiquitous as voice. The
quality of political, economic, and social life is so dependent on the telecom network that it is rightly
deemed as a nation’s critical infrastructure.
The history of mobile telephony and the Internet has been one of continuous evolution and
ever-increasing adoption. The pace of innovation has been exponential, with an explosion of
service offerings matched by an explosion of new applications. Whereas the 1G mobility systems first
introduced in the 1970s reached a peak subscriber base of 20 million voice customers, today’s
4G systems serve a global base of nearly 2 billion smartphone customers.1
“The transformative power of smartphones comes from their size and connectivity.”2 Size makes
it easy to carry them around; connectivity means smartphones can not only connect people to one
another, but also can deliver the full power of online capabilities and experiences, spawning new busi-
ness models as with Uber and Airbnb. In addition, the fact that there are 2 billion smartphones spread
out all over the world, and each one of them can “serve as a digital census-taker,” makes it possible
to get real-time views of what people like and dislike, at a very granular level. But the power of this
framework is limited by the ability to interconnect devices and web sites at the necessary speeds,
and to manage the latency requirements of near-instantaneous services. This is made
possible by a transport layer with adequate bandwidth and throughput between any two end points.
Transport technology has continually evolved from the early local access and long-distance
transport networks. Fundamental advances in the copper, radio, and fiber technologies have pro-
vided cheaper, more durable, and simpler to manage transport. The traffic being transported has
evolved from analog to digital (time division multiplexed) to IP. Layered protocols have enabled
many different services to share a common transport infrastructure. The hub-and-spoke pattern of
fixed-bandwidth circuits and relatively long-duration switched sessions (an architecture that grew with
the growth of voice services) is no longer dominant but is now one of many different traffic
patterns driven by new applications such as video distribution, Internet access, and machine-to-
machine communication, with mobility being an essential part of the offering. This evolution has
now made it possible for the global telecom network to be the foundation for not just voice, data,
and video communication among people, but also for a rich variety of social and economic activity
transacted by billions of people all over the world. The next thing on the horizon is the Internet of
Things (IoT)—driven by the ability to connect and control tens of billions of remote devices, facili-
tating the next round of productivity gains as operations across multiple industries get streamlined.
Traditionally, the users of carrier networks were fairly static and predictable, as was the traffic
associated with them. As a result, the underlying architecture that carriers relied on at the time was,
more often than not, based on fixed, special-purpose networking hardware. Because telecom
operators organizationally separated their IT and network groups, the growth in traffic
and the variety of services that needed to be supported created an environment in which
the ability to respond to business needs was increasingly restricted.
The networks of today support billions of application-heavy, photo-sharing, video-streaming
mobile devices and emerging IoT applications, all of which require on-demand scaling, high resil-
iency, and the capability to make on-the-fly service modifications.
Network connectivity is at the core of every innovation these days, from cars to phones to video
and more. Connectivity and network usage are exploding—data traffic on AT&T’s mobile network
alone grew by nearly 150,000% between 2007 and 2014. This trend will continue, with some esti-
mates showing wireless data traffic growing by a factor of 10 by 2020. Video comprises about 60%
of AT&T’s total network traffic—roughly 114 petabytes per day as of this writing. To put that into
perspective, 114 petabytes equals about 130 million hours of HD video. As it stands right now, send-
ing a minute of video takes about 4 megabytes of data. In comparison, sending a minute of virtual
reality (VR) video takes hundreds of megabytes. With VR becoming increasingly popular, having
an easy and convenient way to access this data-hungry content is essential. At the same time, IoT
is also generating mountains of data that have to be stored and analyzed. By 2020, there will be an
estimated 20–50 billion connected devices.
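The video figures quoted above can be sanity-checked with a little arithmetic. The sketch below uses only the numbers from the text (114 petabytes per day, roughly 130 million HD hours) and derives the per-hour size and average bitrate they imply; the decimal definition of a petabyte is an assumption.

```python
# Back-of-envelope check of the traffic figures quoted in the text.
# The per-hour size and bitrate are derived, not taken from the text.

PETABYTE = 10**15  # bytes, decimal convention (assumed)

daily_video_bytes = 114 * PETABYTE   # ~114 PB of video per day
hd_hours = 130_000_000               # ~130 million hours of HD video

bytes_per_hour = daily_video_bytes / hd_hours
implied_mbps = bytes_per_hour * 8 / 3600 / 10**6  # average bitrate, Mbit/s

print(f"{bytes_per_hour / 10**6:.0f} MB per HD hour, ~{implied_mbps:.1f} Mbit/s")
# → 877 MB per HD hour, ~1.9 Mbit/s
```

About 2 Mbit/s per stream is a plausible average for compressed HD delivery, which is why VR streams at hundreds of megabytes per minute represent such a step change in load.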
Public networks and especially those that cover large areas have always been expensive to deploy.
To enable the growth and the use of networks (the value of a network increases as more users get
on it), operators have relied on technology advances to continue to improve performance and drive
costs down. Compared to a public network in the twentieth century, today’s all-IP network with
advanced services radically changes many of the requirements. The number of different services or
applications available at any one time has grown by several orders of magnitude and services are
constantly evolving, driven by mobile device capabilities, social media, and value created by avail-
able content and applications.
To keep pace with these new applications, and to be able to provide reliable networking services
in the face of growing traffic and diminishing revenue, we need a new approach to networking.
Traditional telecom network design and implementation has followed a tried and tested approach for
several decades. When new network requirements are identified, a request for proposals (RFP) is
issued by the carrier. Vendors respond with proprietary solutions that meet interoperability stan-
dards, and each network operator then picks the solution it likes. The networks are built by inter-
connecting physical implementations of functions. Today’s telecom networks have over 250 distinct
network functions deployed—switches, routers, access nodes, multiplexors, gateways, servers, etc.
Most of these network functions are implemented as stand-alone appliances—a physical “box” with
unique hardware and software that implements the function and conforms to standard interfaces to
facilitate interoperability with other boxes. For operational ease, network operators prefer to use one
or two vendors, typically, for a given class of appliances (routers, for example). The appliance vendor
often uses custom hardware to optimize the cost/performance; the software is mated with hardware,
and the product can be thought of as “closed.” This creates vendor lock-in, and since deployed
appliances are seldom replaced, the result is platform lock-in, with limited options for upgrading as
technology advances. Figure 1.1 qualitatively depicts the evolution of cost over a 10-year period. For
purposes of illustration, the unit cost signifies the cost of performing one unit of the network
function (e.g., the packet-processing cost for 1 GB of IP traffic). Due to platform lock-in, any cost
improvement comes from either hardware redesigns at the plug-in level with cheaper components or price
concessions made by the vendor to maintain market position. On the other hand, advances in
component technology (Moore’s Law) and competitive market dynamics facilitate a much faster
decline in unit cost, more in line with what is needed to keep up with the growth in traffic.
A second problem with the current way of network design and implementation is that, in the face
of rampant traffic growth, network planners deploy more capacity than is needed (since it could
take several months to enhance deployed capacity beyond simple plug-in add-ons), thus
creating a low utilization rate.
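The qualitative argument of Figure 1.1 can be illustrated numerically. The sketch below assumes hypothetical annual unit-cost decline rates, roughly 5% for a locked-in appliance platform and 25% for commodity hardware tracking component advances; neither figure comes from the text, they are chosen purely for illustration.

```python
# Hypothetical illustration of the Figure 1.1 argument: unit-cost decline
# over 10 years for a locked-in appliance platform versus commodity
# hardware tracking Moore's-law-style improvements. Rates are assumed.

YEARS = 10
appliance_decline = 0.05   # ~5%/yr: plug-in redesigns, vendor price concessions
commodity_decline = 0.25   # ~25%/yr: component advances, competitive market

appliance = [(1 - appliance_decline) ** y for y in range(YEARS + 1)]
commodity = [(1 - commodity_decline) ** y for y in range(YEARS + 1)]

for y in range(0, YEARS + 1, 5):
    print(f"year {y:2d}: appliance {appliance[y]:.2f}, commodity {commodity[y]:.2f}")
```

Under these assumed rates the locked-in platform still costs about 60% of its initial unit cost after a decade, while the commodity curve has fallen below 10%, which is the gap the chapter argues traffic growth makes untenable.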
We need to transition from a hardware-centric network design methodology to one that is soft-
ware centric. Wherever possible, the hardware deployed ought to be one that is standardized and
commoditized (cloud hardware, for example) and can be independently upgraded so as to benefit
from technology advances as and when they occur. The network function capability ought to be
largely implemented in software running on the commodity hardware. The hardware is shared
among several network functions, so that we get maximum utilization. Network capacity upgrades
occur in a fluid manner, with continuous deployment of new hardware resources and network func-
tion software as and when needed. In addition, the implementation of software-defined networking
(SDN) would provide three key benefits: (1) separation of the control plane from the data plane
allowing greater operational flexibility; (2) software control of the physical layer enabling real-
time capacity provisioning; and (3) global centralized SDN control with advanced and efficient
algorithms coupled with real-time network data allowing much better multilayer network resource
optimization on routing, traffic engineering, service provisioning, failure restoration, etc. This new
network design methodology (depicted in Figure 1.2) has a much lower cost structure in both capital
outlays and ongoing operating expenses, greater flexibility to react to traffic surges and failures, and
better abilities to create new services.
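As a toy illustration of benefit (3), the sketch below shows a centralized controller with a global, real-time view of link utilization computing a least-loaded path, a decision that is hard to make well from any single device's local view. The topology and utilization numbers are invented.

```python
# Minimal sketch of centralized SDN control: a controller with a global
# view of link utilization picks the least-loaded path end to end.
import heapq

def least_cost_path(links, src, dst):
    """Dijkstra over link costs (here: reported utilization)."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in links.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Invented topology; costs reflect real-time utilization reports.
links = {
    "A": {"B": 0.9, "C": 0.2},
    "B": {"D": 0.1},
    "C": {"D": 0.3},
}
path, cost = least_cost_path(links, "A", "D")
print(path, round(cost, 2))  # → ['A', 'C', 'D'] 0.5
```

The point is not the algorithm (shortest path is standard) but the input: only a controller that aggregates utilization from every link can weight the graph this way in near-real time.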
In order to become a software-centric network and tap into the full power of SDN, several
steps need to be taken: rethinking the IT/network separation, disaggregating hardware and
software, implementing network functions predominantly in software capable of executing on
a commodity cloud hardware platform, and achieving a high degree of operational automation.
Add to this the requirement that the implementation be done on open source platforms with
open Application Programming Interfaces (APIs).
Today, SDN is commonly perceived to be a new networking paradigm that arose from the vir-
tualization of data-center networks, which in turn was driven by the virtualization of compute and
storage resources. However, SDN techniques have been used at AT&T (and other carriers) for more
than a decade. These techniques have been embedded in the homegrown networking architecture
used to create an overlay of network-level intelligence provided by a comprehensive and global view
of not only the network traffic, resources, and policies, but also the customer application’s resources
and policies. The motivation was to enable creation of customer application-aware, value-added
service features that are not inherent in the vendor’s switching equipment. This SDN-like approach
has been repeatedly applied successfully to multiple generations of carrier networking services,
from traditional circuit-switching services through the current IP/Multi-Protocol Label Switching
(MPLS) services.
As computer technology evolved, the telecommunication industry became a large user of com-
puters to automate tasks such as setting up connections. This also launched the first application of
software in telecom—software that allowed the control of the telecommunications specific hard-
ware in the switches. As the number and complexity of the routing decisions grew, there came a
need to increase the flexibility of the network—to make global decisions about routing a call that
could not be done in a distributed fashion. The first “software-defined network controller” was
invented, namely the Network Control Point (NCP). Through a signaling “API,” the controller could
make decisions in real time about whether a request for a call should be accepted, which end point
to route the call to, and who should be billed for the call. These functions could not be implemented
at the individual switch level but needed to be done from a centralized location since the subscriber,
network, and service data were needed to decide how to process the call.
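The kind of decision the NCP centralized might be sketched as follows. This is a hypothetical illustration, not the actual NCP logic; the subscriber records, service records, and phone numbers are all invented.

```python
# Hypothetical sketch of an NCP-style centralized decision: accept the
# call, pick a destination, and assign billing, using subscriber and
# service data held only at the controller. All records are invented.

subscribers = {"555-1000": {"delinquent": False},
               "555-2000": {"delinquent": True}}
toll_free = {"800-555-0199": {"route_to": "214-555-7700", "bill": "called"}}

def handle_call(caller, dialed):
    sub = subscribers.get(caller, {"delinquent": False})
    if sub["delinquent"]:          # blocking calls from delinquent accounts
        return {"accept": False, "reason": "blocked account"}
    svc = toll_free.get(dialed)
    if svc:                        # toll-free: translate number, bill callee
        return {"accept": True, "route_to": svc["route_to"], "bill": svc["bill"]}
    return {"accept": True, "route_to": dialed, "bill": "caller"}

print(handle_call("555-1000", "800-555-0199"))
# → {'accept': True, 'route_to': '214-555-7700', 'bill': 'called'}
print(handle_call("555-2000", "212-555-1234"))
# → {'accept': False, 'reason': 'blocked account'}
```

Each branch needs data no individual switch holds, which is exactly why these functions had to live at a centralized control point reachable through a signaling API.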
Initially, the NCP was a relatively static, service-specific program. It had an application to handle
collect calls, an application for toll-free 800 services, an application for blocking calls from delin-
quent accounts, etc. It soon became clear that many of the service applications shared a common set
of functions. The “service logic” differed slightly, but there was a set of primitive functions
that the network supported and a small set of logical steps that tied those functions together. The
data and the specific parameters used for a service might vary, but there was a reusable core.
The programmable network and, in particular, the programmable network controller was born.3
The Direct Services Dialing Capability (DSDC) network was a new way to do networking.
Switches, known as action points, had an API that could be called over a remote transport network
(CCIS6/SS7), and controllers, known as control points, could invoke programs in a service
execution environment. The service execution environment was a logical view of the network. Switches
queried controllers, controllers executed the program and called APIs into the switches to bill and
route calls. The controller had a function to set billing parameters and to route calls for basic func-
tions. Advanced functions used APIs to play announcements and collect digits, to look up digits for
validation and conversion, and to temporarily route to a destination and then reroute the call. These
functions allowed different services to be created.
The Toll Free Service (i.e., 800-service), which translated one dialed number into another destination
number and set billing parameters to charge the called party for the call, was another example of using
SDN-like techniques in a pre-SDN world. The decision on which destination number to use was
controlled by the customer; features such as area-code routing, time-of-day routing, etc. were
implemented using software that controlled the switches. An entire multibillion-dollar industry grew
out of the simple concept of reversing the charges and the power of letting customers decide which
data center to route a call to, reducing their agent costs or providing better service to their end customers.
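A customer-controlled routing policy of this kind might look like the following sketch, where a hypothetical 800 customer routes callers by area code and time of day. The policy, numbers, and site names are invented for illustration.

```python
# Illustrative sketch of customer-controlled toll-free routing: the dialed
# 800 number is translated to a destination chosen by the caller's area
# code and the time of day. The policy and numbers are hypothetical.

def route_800(caller_area_code, hour):
    # Hypothetical customer policy: overnight traffic goes to a 24x7
    # Dallas center; daytime traffic splits east/west by area code.
    if hour < 8 or hour >= 20:
        return "972-555-0100"            # Dallas, overnight
    if caller_area_code in {"206", "408", "415"}:
        return "408-555-0150"            # San Jose, daytime west
    return "212-555-0180"                # New York, daytime east

print(route_800("415", 14))  # → 408-555-0150
print(route_800("415", 23))  # → 972-555-0100
```

The switch only ever sees "translate this dialed number"; the customer, via software at the control point, owns the logic that produces the answer.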
For enterprise customers, AT&T provided a trademarked service called AT&T SDN ONENET,
which offered private dialing plans, authentication codes, and departmental charging capability
as a virtual Time-Division Multiplexing (TDM) network. With or without a Private Branch
Exchange (PBX), the service exposed a programmable virtual network to
enterprises. The DSDC network also provided the ability to add or remove resources on a call for
advanced functions and mechanisms for dealing with federated controllers. Service Assist was a
mechanism to service-chain an announcement and digit collection node into a call; this grew into
Interactive Voice Response (IVR) technology with speech recognition, but it was an early form of
service chaining for virtual networks. When more than one controller was needed to handle the load
(or to perform the entire set of desired features), NCP transfer was used, which today we would see
as controller-to-controller federation. Many variations of these types of service were created across
the globe, as the ability for software control and the availability of computing to do more
processing allowed service providers to let their customers manage their business.
These SDN-like networks in the circuit-switched world were followed by a similar capability of
providing network-level intelligent control in the packet-routed world, where the routers replace the
switches. The development and implementation of SDN for AT&T’s IP/MPLS network—generally
known as the Intelligent Routing Service Control Platform (IRSCP)—was launched in 2004.4 It was
driven with the goal of providing AT&T with “competitive service differentiation” in a fast-maturing
and commoditized general IP networking services world. The IRSCP design uses a software-based
network controller to control a set of specially enhanced multiprotocol Border Gateway Protocol
(MP-BGP) route reflectors and dynamically distribute selective, fine-grained routing and
forwarding controls to appropriate subsets of IP/MPLS routers. The essential architectural construct
of IRSCP enables on-demand execution of customer-application specific routing and forwarding
treatment of designated traffic flows, with potential control triggers based on any combination of the
network’s and the customer-application’s policies, traffic and resource status, etc. This yields a wide
range of value-added features for many key networking applications such as Content Distribution
Network (CDN), networked-based security (Distributed Denial of Service [DDoS] mitigation),
Virtual Private Cloud, etc.
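The essential IRSCP construct, a central controller distributing selective forwarding overrides to chosen subsets of routers, can be sketched as below. The router names, prefix, and DDoS-scrubbing scenario are illustrative assumptions, not details taken from the IRSCP design.

```python
# Hedged sketch of IRSCP-style fine-grained control: a central route
# controller pushes a per-prefix forwarding override to a chosen subset
# of routers, e.g. diverting suspect traffic to a scrubbing center during
# DDoS mitigation. Names and the prefix are invented.

class RouteController:
    def __init__(self, routers):
        # per-router override table; empty means "use the normal BGP route"
        self.tables = {r: {} for r in routers}

    def inject(self, routers, prefix, next_hop):
        """Distribute a selective route override to a subset of routers."""
        for r in routers:
            self.tables[r][prefix] = next_hop

    def lookup(self, router, prefix, default="bgp-best-path"):
        return self.tables[router].get(prefix, default)

ctl = RouteController(["edge1", "edge2", "core1"])
# Divert traffic toward the attacked prefix only at the edge routers.
ctl.inject(["edge1", "edge2"], "203.0.113.0/24", "scrubber-1")

print(ctl.lookup("edge1", "203.0.113.0/24"))  # → scrubber-1
print(ctl.lookup("core1", "203.0.113.0/24"))  # → bgp-best-path
```

The selectivity is the point: only the designated routers and only the designated prefix are touched, while everything else continues to follow ordinary BGP best-path routing.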
Apart from SDN, the other key aspect of the transformation is network function virtualization
with transitioning to an architecture that is cloud-centric. AT&T started its own cloud journey in
late 2010 with multiple efforts, each focused on different objectives—a VMware-based cloud for
traditional IT business applications, a Compute-as-a-Service (CaaS) offering for external custom-
ers, and an OpenStack-based effort for internal and external developers. These disparate clouds
were subsequently merged into a common cloud platform, which evolved into the AT&T Integrated
Cloud (AIC). Today, this is the cloud environment in which AT&T’s business applications run. It is
also the platform for the cloud-centric network function virtualization effort. By 2020, it is
anticipated that 75% of the network will be virtualized and running in the cloud.
The methods and mechanisms for deploying applications on a dedicated server infrastructure are
well known, but a virtualized infrastructure has properties, such as scalability and active reassign-
ment of idle capacity, which are not well understood. If applications are not structured to make use
of these capabilities, they will be more costly and less efficient than the same application running
on a dedicated infrastructure. Building services that are designed around a dedicated infrastructure
concept and deploying them in a virtualized infrastructure fails to exploit the capabilities of the vir-
tualized network. Furthermore, building a virtualized service that makes no use of SDN to provide
flexible routing of messages between service components adds significantly to the complexity of the
solution when compared to a dedicated application. Virtualization and SDN are well defined, but the
key is to enable using virtualization and SDN together to simplify the design of virtualized services.
Integration of virtualization and SDN technologies boils down to understanding how decomposition,
orchestration, virtualization, and SDN can be used together to create, manage, and provide a finished
service to a user. It enables services to be created from modular components described in recipes
where automated creation, scaling, and management are provided by Open Network Automation
Platform (ONAP), an open source software platform that powers AT&T’s SDN.
ONAP will benefit service providers by driving down operations costs. It will also give providers
and businesses greater control as their network services become much more “on demand.” At the
end of the day, customers are the ones who will benefit the most. The idea of creating a truly per-
sonalized secure set of services on demand will change the way we enable consumer applications
and business services.
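The recipe idea can be sketched abstractly: a service described as modular components plus a scaling policy, which an orchestrator instantiates and manages automatically. This is a hypothetical structure for illustration only, not ONAP's actual model format; the service, component names, and images are invented.

```python
# Hypothetical sketch of a service "recipe": modular components with
# scaling bounds that an orchestrator can instantiate automatically.
# This is NOT the real ONAP model format.

recipe = {
    "service": "virtual-firewall",
    "components": [
        {"name": "packet-filter", "image": "vfw:1.0", "min": 2, "max": 10},
        {"name": "logger",        "image": "vlog:1.0", "min": 1, "max": 3},
    ],
    "scale_when": {"cpu_pct_above": 70},
}

def instantiate(recipe):
    """Spin up the minimum footprint of each component (simulated)."""
    return {c["name"]: c["min"] for c in recipe["components"]}

deployed = instantiate(recipe)
print(deployed)  # → {'packet-filter': 2, 'logger': 1}
```

Because creation, scaling, and healing are driven by the recipe rather than by hand-built procedures, the same automation applies uniformly to every service described this way.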
Today’s telecom networks are rich treasure troves of data—especially pertaining to the world of
mobility. On an average day, AT&T measures 1.9 billion network-quality checkpoints from locations
where our wireless customers are actually using their service. These data go into network analytics
to better understand the customers’ true network experience. The program is internally referred to as
Service Quality Management; sophisticated analytics are used to make sense of this massive volume
of network data and discern what customers are experiencing. These technologies have transformed how
we manage our network—driving smarter decisions and resolving issues more quickly than ever before.5
For instance, when cell towers in an area are disrupted, restoring service to all customers quickly
is a key operational objective. Based on the real-time data and historical measurements that are
available, AT&T's internally developed algorithm, TONA—the Tower Outage and Network
Analyzer—factors in current users, typical usage at various times of day, and the population and posi-
tioning of nearby towers to assess which towers can absorb traffic off-loaded from the
disrupted towers. TONA has delivered a 59% improvement in identifying customer impact, and it
shortens the duration of network events for the greatest number of customers. This is but one exam-
ple of the power of using real-time data to improve network performance.
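As a toy illustration of this kind of impact analysis (TONA's actual inputs and weighting are not public; every name and number below is invented):

```python
# Toy outage prioritization in the spirit of TONA; the scoring formula and
# weights are invented and bear no relation to AT&T's actual algorithm.

def customer_impact(tower):
    """Estimate impact as current users, weighted up when nearby towers
    lack the spare capacity to absorb this tower's typical load."""
    spare = sum(n["spare_capacity"] for n in tower["neighbors"])
    unabsorbed = max(tower["typical_load"] - spare, 0)
    return tower["current_users"] * (1 + unabsorbed / tower["typical_load"])

def restoration_order(towers):
    """Restore first the towers whose outage strands the most customers."""
    return sorted(towers, key=customer_impact, reverse=True)

towers = [
    {"id": "A", "current_users": 800, "typical_load": 1000,
     "neighbors": [{"spare_capacity": 900}]},
    {"id": "B", "current_users": 500, "typical_load": 1000,
     "neighbors": [{"spare_capacity": 100}]},
]
```

Note how tower B, despite having fewer current users, ranks first: its neighbors cannot absorb its load, so more customers would stay stranded.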
This transformation is not limited to the network and its associated technologies. The transformed
network also needs a software-centric workforce. Reskilling the employees in a variety of software
and data specialties is as important as transforming the architecture of the network. Besides, the tra-
ditional RFP approach, used today to procure new products or introduce new vendors, gets replaced
by a more iterative and continuous approach for specifying and procuring software. It also calls
for active involvement with the open source community.6 Open source speeds up innovation, lowers
costs, and helps the industry converge quickly on a universal solution embraced by all. Initially
conceived as something people can modify and share because its design is publicly accessible, "open source"
has come to represent open exchange, collaborative participation, rapid prototyping, transparency,
meritocracy, and community-oriented development.7 The phenomenal success of the Internet can be
attributed to open source technologies (Linux, the Apache web server, etc.). Consequently,
anyone who is using the Internet today benefits from open source software. One of the tenets of the
open source community is that you do not just take code. You contribute to it, as well.
AT&T strongly believes that the global telecom industry needs to actively cultivate and nur-
ture open source. To spur this, we have decided to contribute the ONAP platform to open source.
Developed by AT&T and its collaboration partners, the decision to release ONAP into open source
was driven by the desire to have a global standard on which to deliver the next generation of applica-
tions and services. At the time of this writing, there are at least four active open source communities
pertaining to the Network Functions Virtualization (NFV)/SDN effort:
• OpenStack
• ON.Lab
• OpenDaylight
• OPNFV
AT&T has been active in all four and will continue to be. Many other telecom companies, as
well as equipment vendors, have been equally active.
Any significant effort to re-architect the telecom network to be software-centric cannot ignore
the needs of wireless (mobile) systems. The fourth generation wireless system, Long-Term Evolution
(LTE), which was launched in late 2009, made significant advances in the air interface, as well as
the architecture of the core network handling mobile traffic. The driver behind this was the desire to
8 Building the Network of the Future
build a network that was optimized for smartphones and the multitude of applications that run on
them. As of this writing, fifth-generation (5G) mobile networks are being architected and designed, with
rollout expected in 2020. Compared to existing 4G systems, 5G networks are expected to offer
• Average data rate to each device in the range of 50 Mb/s, compared to the current 5 Mb/s
• Peak speeds of up to 1 Gb/s simultaneously to many users in the same local area
• Several hundreds of thousands of simultaneous connections for massive wireless sensor
networks
• Enhanced spectral efficiency for existing bands
• New frequency bands with wider bandwidths
• Improved coverage
• Efficient signaling for low latency applications
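To get a rough sense of what the average-rate improvement means in practice, a back-of-envelope calculation (the 100 MB file size is an arbitrary example):

```python
# Back-of-envelope: time to transfer a 100 MB file at the average rates
# cited above (5 Mb/s for 4G vs 50 Mb/s for 5G). These are ideal-case
# figures; real throughput varies with load and radio conditions.

FILE_BITS = 100 * 8 * 10**6  # 100 megabytes expressed in bits

def transfer_seconds(rate_mbps):
    """Seconds to move FILE_BITS at the given rate in megabits per second."""
    return FILE_BITS / (rate_mbps * 10**6)

t4g = transfer_seconds(5)   # 160.0 seconds
t5g = transfer_seconds(50)  # 16.0 seconds
```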
New use cases such as the IoT, broadcast-like services, low latency applications, and lifeline
communication in times of natural disaster will call for new networking architectures based on
network slicing (reference). NFV and SDN will be an important aspect of 5G systems—for both the
radio access nodes as well as the network core.
The chapters that follow explore the concepts discussed in this chapter in much more detail. The
transformation that is upon us will achieve hyper scaling, not unlike what the web players have done
for their data centers. We think this is one of the biggest shifts in networking since the creation of
the Internet.
REFERENCES
1. https://www.statista.com/statistics/330695/number-of-smartphone-users-worldwide/
2. http://www.economist.com/news/leaders/21645180-smartphone-ubiquitous-addictive-and-
transformative-planet-phones
3. 800 service using SPC network capability—special issue on stored program controlled network, Bell
System Technical Journal, Volume 61, Number 7, September 1982.
4. J. Van der Merwe et al. Dynamic connectivity management with an intelligent route service control
point, Proceedings of the 2006 SIGCOMM Workshop on Internet Network Management. Pisa, Italy, pp.
147–161.
5. http://about.att.com/innovationblog/121014datapoints
6. http://about.att.com/innovationblog/061714hittingtheopen
7. https://opensource.com/resources/what-open-source
2 Transforming a Modern Telecom Network—From All-IP to Network Cloud
Rich Bennett and Steven Nurenberg
CONTENTS
2.1 Introduction...............................................................................................................................9
2.2 Rapid Transition to All-IP....................................................................................................... 10
2.3 The Network Cloud................................................................................................................. 10
2.4 The Modern IP Network.......................................................................................................... 11
2.4.1 The Open Systems for Interconnection Model............................................................ 11
2.4.2 Regulation and Standards............................................................................................ 11
2.5 Transforming an All-IP Network to a Network Cloud............................................................ 12
2.5.1 Network Functions Virtualization (NFV)................................................................... 13
2.5.2 NFV Infrastructure...................................................................................................... 13
2.5.3 Software-Defined Networking..................................................................................... 14
2.5.4 Open Network Automation Platform........................................................................... 15
2.5.5 Network Security......................................................................................................... 15
2.5.6 Enterprise Customer Premises Equipment.................................................................. 16
2.5.7 Network Access........................................................................................................... 17
2.5.8 Network Edge.............................................................................................................. 17
2.5.9 Network Core............................................................................................................... 18
2.5.10 Service Platforms......................................................................................................... 19
2.5.11 Network Data and Measurements................................................................................ 22
2.5.12 Network Operations..................................................................................................... 23
References.........................................................................................................................................24
This chapter provides context for and introduces topics covered in subsequent chapters. We sum-
marize the major factors driving the transformation of telecom networks and the characteristics of
a modern all-IP network, and briefly describe how those characteristics change as an all-IP network
transforms to a Network Cloud. The brief descriptions of change include references to succeeding
chapters that explore each topic in more detail.
2.1 INTRODUCTION
The Internet dawned with industry standards groups, such as the International
Telecommunication Union (ITU), and government research projects, such as ARPANET in the
United States, proposing designs based on loosely coupled network layers. The layered design com-
bined with the ARPANET open, pragmatic approach to enhancing or extending the design proved
to have many benefits including the following: integration of distributed networks without central
control; reuse of layers for multiple applications; incremental evolution or substitution of technology
without disruption to other layers; and creation of a significant sized network in parallel with the
development of the standards.
Tremendous growth and commercialization of the network was triggered by the ubiquitous
availability of the Transmission Control Protocol/Internet Protocol (TCP/IP) network stack on
Windows and UNIX servers and the development of web browsers and servers using the Hypertext
Transfer Protocol (HTTP). This convergence of events created the World Wide Web where islands
of information could be linked together by distributed authors and seamlessly accessed by anyone
connected to the Internet. The public telecom network became the transport network for Internet
traffic. Analog telephony access circuits were increasingly used with modems for Internet data,
driving redesign or replacement of access to support broadband data. Traffic on core network links
associated with Internet and IP applications grew rapidly.
The IP application protocols were initially defined to support many application types such
as email, instant messaging, file transfer, remote access, file sharing, audio and video streams,
etc., and to consolidate multiple proprietary protocol implementations on common network lay-
ers. As the HTTP protocol became widely used across public and private networks, usage grew
beyond the original purpose of retrieving HTML hypertext. HTTP use today includes transport of
many IP application protocols, such as file transfer, the segments of a video stream, application-
to-application programming interfaces, and tunneling or relaying lower-level datagram packets
between private networks. A similar evolution happened with mobile networks, which started by
providing mobile voice; today, with 4G Long-Term Evolution (LTE) wireless networks, IP data
is the standard transport for all applications, including voice telephony. IP
has become the universal network traffic standard, and modern telecom networks are moving to
all-IP.
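HTTP's framing is simple enough to show directly. The sketch below wraps two unrelated payloads in identical HTTP envelopes, which is precisely why the protocol generalizes from hypertext to a near-universal transport (host names and paths here are illustrative):

```python
def http_post(host, path, content_type, body: bytes) -> bytes:
    """Frame an arbitrary payload as a minimal HTTP/1.1 POST request."""
    head = (
        f"POST {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Type: {content_type}\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
    )
    return head.encode("ascii") + body

# The same envelope carries a file chunk or a video segment unchanged;
# only the Content-Type label and the opaque body differ.
file_req = http_post("example.com", "/upload", "application/octet-stream", b"\x00\x01")
video_req = http_post("example.com", "/seg/42.ts", "video/mp2t", b"...segment bytes...")
```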
across change in service demand and infrastructure availability; and increasing competencies in
software integration and a DevOps operations model. We call this the “Network Cloud.”
The benefits of a Network Cloud framework include lower unit cost of providing services, faster
delivery and flexibility to support new services compared to the traditional approach (where a spe-
cific service requires dedicated infrastructure with low utilization to ensure high availability), and
the opportunity to automate network operations.
To help put the transformation of public networking into context, this chapter describes two
examples—a traditional "appliance"-based network and a contemporary software-defined and vir-
tualized network (software-defined networking [SDN]/NFV) at a high level. This is to convey that
the new SDN/NFV network is more than just faster networking, and certainly different from previ-
ous efforts to add intelligence and programmed control.
2.5.2 NFV Infrastructure
Using the NFV approach, traditional network elements are decomposed into their constituent func-
tional blocks. Traditionally, in a telecom network built with appliances, software is used in two
major ways—network element providers acquire and/or develop the software components to sup-
port the operation and management of the network element, and service providers acquire and/
or develop the software components needed to integrate the network elements into business and
operations support systems (OSS). In an NFV model, by contrast, greater efficiency
is expected from more standardization and reuse of the components that go into the network functions;
as a result, the effort to integrate functions and to create and operate services is lower, because they
can all be supported on a common NFV infrastructure (NFVI) platform.
Software plays a very important role in the transformation of a traditional telecom network to
a Network Cloud. The software discipline provides extensible languages and massive collabora-
tion processes that enable solutions to complex problems; reusing solution components to automate
tasks and find information and meaning in large amounts of raw data; and insulating solutions from
current hardware design in a way that allows solutions to evolve independently of hardware and to
benefit from a declining cost curve of commodity computing infrastructure.
In the new architecture, physical network functions (PNFs) are supplanted by virtual network functions (VNFs) that run on the NFVI, which is the heart of the
Network Cloud. Key drivers for the platform are the ability to utilize open source software, create
sharable resources, and provide for the use of modern implementation practices such as continuous
integration/continuous deployment (CI/CD). The OpenStack environment is an obvious choice for
NFVI. OpenStack provides the functionality to schedule compute tasks on servers and administer
the environment. In addition, it provides a fairly extensive set of features; this is discussed in more
detail in Chapter 4.
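As a toy sketch of the kind of scheduling OpenStack performs (this is a simplified filter-and-weigh loop, not OpenStack's actual scheduler API; the data model is invented):

```python
# Toy filter-and-weigh scheduler in the spirit of OpenStack Nova's
# scheduler: filter out hosts that cannot fit the VM, then weigh the
# survivors and claim resources on the winner.

def schedule(vm, hosts):
    """Pick the host with the most free vCPUs among those that fit."""
    candidates = [h for h in hosts
                  if h["free_vcpus"] >= vm["vcpus"]
                  and h["free_ram_mb"] >= vm["ram_mb"]]
    if not candidates:
        raise RuntimeError("no valid host found")
    best = max(candidates, key=lambda h: h["free_vcpus"])
    best["free_vcpus"] -= vm["vcpus"]   # claim the resources
    best["free_ram_mb"] -= vm["ram_mb"]
    return best["name"]
```

Real schedulers chain many such filters (capacity, affinity, availability zone) and weighers, but the two-phase shape is the same.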
2.5.5 Network Security
Security for networks is a multidisciplinary problem—starting with the classic security problems
such as confidentiality and integrity, coupled with the problem of availability, that is, ensuring the
service and network work, and can withstand a hostile attempt to bring it down. With Network
Cloud, there is the need to understand security through the combined lens of software and network,
in an operating cloud environment. For example, using the security technique of “defense in depth”
and "separation of concerns," a pragmatic approach is to categorize VNFs, keeping different category
types from running on the same server simultaneously. For these and other reasons, security is a key
architecture and design factor for the Network Cloud.
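The category-separation constraint can be expressed as a simple anti-affinity placement rule; the sketch below is illustrative, not a production placement engine:

```python
def place(vnf, servers, placements):
    """Place a VNF on a server hosting no VNF of a different category.

    `placements` maps server name -> set of categories already on it.
    This enforces the "keep category types apart" separation rule.
    """
    for s in servers:
        cats = placements.setdefault(s, set())
        if not cats or cats == {vnf["category"]}:
            cats.add(vnf["category"])
            return s
    raise RuntimeError("no server satisfies the separation constraint")
```

A scheduler with this constraint will co-locate same-category VNFs but never mix categories on one server, limiting the blast radius of a compromised workload.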
Security manifests itself from two different perspectives—the protection of the infrastructure
itself and the ability to deliver security functionality in a service. The basic structure for infrastruc-
ture security is to ensure each component is individually assessed and designed with security in
mind. For the Network Cloud, the fabric component contains functionality such as access control
lists (ACLs) and virtual private networks to limit and separate traffic flows. The servers run operat-
ing systems with hypervisors that provide separate execution environments with memory isolation.
Around this, operational networks are used to provide access to management ports and use fire-
walls to provide a point of control and inspection.
Other aspects of security architecture also come into play. While the desire is to prevent a secu-
rity problem, a good security approach also provides mechanisms for mitigation, recovery, and
forensics. Mitigation approaches for infrastructure include overload controls and diversion capabili-
ties. Overload controls (which also help in the case of network problems) prioritize functionality
so that resources are best applied to control plane and high priority traffic. Diversion is the ability,
typically using specific portions of the IP header to identify candidate network traffic, to redirect
packets for subsequent processing, rate limiting, and when necessary, elimination.
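Header-based diversion can be sketched as a classification step; the prefix list and actions below are illustrative (203.0.113.0/24 is a documentation-only range):

```python
# Sketch of header-based traffic diversion: packets whose source address
# falls in a suspect prefix are redirected for rate limiting or scrubbing;
# everything else is forwarded normally.
import ipaddress

SUSPECT_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]

def classify(packet):
    """Return the action for a packet based on its IP header source field."""
    src = ipaddress.ip_address(packet["src"])
    if any(src in net for net in SUSPECT_PREFIXES):
        return "divert"   # send to rate limiter / scrubber, drop if needed
    return "forward"
```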
Forensics is fundamental to security. Logging activity makes it possible to analyze
past incidents to determine root cause and to develop prevention and mitigation solutions. It also
plays a role in active security enforcement by acting as an additional point of behavior inspection,
which may indicate either the failure or a weakness in the security design or implementation.
Within security processes, two key components are automation and identity management.
Automation allows for the administration of complex sequences and eliminates the human element
from configuration, which is a typical source of security problems. All it takes is entering the wrong
IP address in an ACL to create a vulnerability. Identity management ensures that both people and
software are authorized to view, create, delete, or change records or settings. The Network Cloud
uses a centralized approach for identity verification. This prevents another weakness of local pass-
words, which are more easily compromised. These topics are discussed in more detail in Chapter 9.