Why Every Size IT Team Should Strive to Implement a Software-Defined Data Center
    Public vs. Private Cloud
    Software-Defined Data Center
    Pluribus Networks – Putting SDDC and Private Cloud Within Reach
The “Easy Button” for SDN Control of Physical and Virtual Data Center Networks
    The Underlay
    Controller-based vs. Controllerless SDN Underlay Automation
    The Overlay
    A New Approach to Overlay Fabric Networking
    Underlay and Overlay Unified – It Just Works
Summary
Additional Resources
In a fast-moving and increasingly digital world, businesses of all sizes are working toward digital transformation (DX) to stay competitive and grow profitably (Figure 1). This means thinking through existing application workloads as well as any new applications that might be required for DX.
Figure 1: IDC’s “DX Reinvention — The Race to the Future Enterprise,” Doc #DR2019_GS4_MW, March 2019
Public vs. Private Cloud

On this DX journey, the allure of the hyperscale public cloud is undeniable: no need for infrastructure, spin up workloads quickly, easily spin them down (if you remember) and scale nearly infinitely. This has driven more and more businesses to use public hyperscalers for development and testing (dev and test), as well as production workloads, either sponsored by IT or as shadow IT projects directly out of business units. That said, many IT teams have now built experience and an understanding of which type of workloads make sense in the public cloud and which do not, based on criteria such as cost, performance, security, data privacy and more.

As a result, while many businesses continue to move workloads to hyperscalers, they are also making decisions to keep specific workloads on-prem or in colo facilities running on end-user-owned data center infrastructure.
Figure 3: Organizations that deploy SDN in the data center are significantly more competitive. Survey question: “How would you characterize your company’s timeliness developing and launching new products and services, relative to its competition?” (percent of respondents, from “usually significantly ahead of our competition” to “usually in line with or behind our competition”). From Enterprise Strategy Group, 2018
SDN provides new levels of network automation to accelerate IT transformation:

• 97% of transformed companies have committed to SDN
• SDN users are 3.5 times more likely to be significantly ahead of their competitors in time to market (49% versus 13%)
• 2.5 times more SDN users made excellent progress enabling a rapidly elastic data center environment (46% versus 18%)

Pluribus Networks – Putting SDDC and Private Cloud Within Reach

At Pluribus Networks, we believe there is an approach based on open networking and open source principles for businesses of all sizes to achieve SDN, network virtualization and network analytics cost-effectively – putting SDDC and private cloud within reach of every IT team, even for small data centers and lightly staffed IT teams. It is clear that breaking through the barrier of automating the network is the critical hurdle to supporting SDDC and private cloud.
In Chapter 1, I talked about the fact that, while many workloads are moving to hyperscale public clouds, many will continue to run either in data centers with end-user owned infrastructure (on-prem or in colos) or in hosted private clouds. I also reviewed the business benefits of transforming to an SDDC and, in particular, focused on the challenges of providing a software-defined underlay and virtual overlay networking infrastructure, which has been the Achilles heel for IT teams in terms of achieving SDDCs.

Given the clear benefits of SDDC transformation, is there an affordable and simple approach to get there? Is there an “Easy Button” that makes it feasible for even small and medium data center operators? We believe the answer is clearly yes. To understand how, I will look at three sets of questions:

1: Should I deploy open networking or go with a vertically integrated vendor when it comes time to upgrade, expand, migrate or consolidate my data center and to support SDDC? How much risk is there in open networking, and what is the support model?

2: Do I want my leaf-and-spine physical network to be deployed as an SDN fabric, or am I comfortable with box-by-box configuration, operations and troubleshooting? Is the cost and complexity of deploying SDN worth the effort? What are the various approaches?

3: Do I want to create a virtualized network overlay fabric that abstracts the physical network into a set of software-based virtual tunnels between all switches, supported by software-based routers, switches and load balancers that offer the ability to establish new network topologies and services in seconds? Is the cost and complexity of deploying a virtual network worth the effort? What are the various approaches?

These questions can be asked in any order, and often can and should all be asked and investigated in parallel. Here I will focus on the first set of questions, about open networking, and then address SDN and network virtualization in Chapter 3.

Open Networking Has Matured

Over the last decade, some customers have been reluctant to take a risk on open networking because of its perceived immaturity and concerns with service and support in a model where software comes from one company and hardware from another. This is juxtaposed against the tremendous benefits that have been achieved from disaggregating software from hardware, including driving capital costs down by up to 50% and, more importantly, speeding hardware and software innovation by enabling separate innovation paths and leveraging an open source community approach. For example, Pluribus utilizes Free Range Routing (FRR).
Figure 4: White box switching is growing faster than any other switching category. Bare metal switches selling to enterprise are expanding 46% YoY, and open switch designs certified by the Open Compute Project continue to increase.
For example, AT&T has publicly committed to the white box and open source path across multiple places in its network. Many other institutions, from cloud service providers to K-12 school districts and local governments to enterprises, have deployed open networking with great success.

Pluribus is deployed at over 240 customers today, including deployments in over 60 virtualized (NFVi) 4G/5G mobile cores of Tier 1 service providers, where our software is carrying the traffic of hundreds of millions of mobile data subscribers. These sorts of mission-critical, large-scale deployments have allowed the software and hardware technology to mature and be hardened.
In spite of all of our talk of network virtualization, we will always need the physical underlay for connectivity. Our view at Pluribus is that if this latent server-like processing power is being deployed anyway, it should not go to waste. With clever software, one can leverage these platforms to run containerized applications like SDN, network virtualization, virtual network functions (VNFs), virtual routers, network analytics and more. This novel approach is covered in more detail in Chapter 3.
Figure 6: Overlay networking creates software-based networks that can be defined per tenant
Figure 7: Leveraging the distributed compute power of white box switches to implement SDN and network virtualization (packet processing and an L2/L3 NOS on each white box, with SDN control, network virtualization, packet analytics and a DCGW router exposed through an API for network monitoring, network virtualization and segmentation, and SDN underlay control)
Layer 2 and Layer 3 unicast and multicast services are distributed throughout the overlay fabric using an Anycast Gateway approach, again leveraging the distributed processing power of the switches. Not only does this eliminate the per-CPU license expense and the optional per-server SmartNIC expense of other network virtualization solutions, it also is controllerless – again, where no external controllers are needed – reducing space, power and cost, which is especially critical in smaller data center environments.
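The Anycast Gateway idea can be sketched in a few lines of code. This is a minimal illustrative model, not Pluribus code: it only shows that when every leaf switch instantiates the same virtual gateway IP and MAC, a workload’s first-hop router is always its local switch, even after the workload moves to another leaf. All names and addresses below are assumptions.

```python
# Minimal sketch of the Anycast Gateway concept: every leaf hosts an
# identical virtual gateway identity for the subnet, so routing happens
# at the local switch with no hairpin to a central gateway.

ANYCAST_GW = {"ip": "10.0.10.1", "mac": "00:00:5e:00:01:0a"}  # illustrative

class Leaf:
    def __init__(self, name: str):
        self.name = name
        # The same gateway identity is instantiated on every leaf.
        self.gateway = dict(ANYCAST_GW)

    def first_hop_for(self, vm: str) -> str:
        # The local switch answers for the gateway IP/MAC.
        return (f"{vm} -> gw {self.gateway['ip']} "
                f"(answered locally by {self.name})")

leaves = [Leaf("leaf1"), Leaf("leaf2")]
print(leaves[0].first_hop_for("VM-10"))  # VM attached to leaf1
print(leaves[1].first_hop_for("VM-10"))  # same VM after migrating to leaf2
```

Because the gateway identity is fabric-wide, a VM keeps the same default gateway wherever it attaches, which is what allows topologies and services to follow workloads without reconfiguration.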
Figure 8: A dramatic reduction in cost, space and power for constrained environments and a pre-integrated solution that is simple for resource-constrained IT teams to deploy. (Optional: SmartNICs per server for network virtualization data plane acceleration.)
In Chapter 3, I wrote about a controllerless implementation for SDN automation of the underlay and a virtualized network overlay fabric that leverages the distributed processing power of open networking switches. The result of this approach is a very efficient and highly integrated network automation solution for smaller data center environments where traditional SDN approaches are simply too expensive, consume too much space and power and struggle to span geographically distributed multi-site or edge data center locations.

This novel approach is very powerful and necessary but not sufficient – there is another layer of functionality required to support comprehensive data center automation. In order to monitor the network and quickly identify and troubleshoot performance issues, granular telemetry on every flow that traverses the fabric is essential. In fact, major vendors like Cisco, with their Tetration offering, have heavily validated the need for application analytics for today’s modern applications. But these traditional approaches are not optimal for smaller environments as they require a set of external test access points (TAPs), probes and packet brokers that effectively overlay the network fabric, not to mention a number of servers to execute the analytics. This results, again, in high cost and space and power consumption, as well as additional complexity.

Traditional Approaches to Analytics

Traditional switches and routers switch billions of packets per second between servers and clients at sub-microsecond latencies using custom ASICs but have limited capability to record enough telemetry detail to provide a truly useful picture of network performance over time. It is a very similar story for OpenFlow-based switches, which use merchant silicon but have insufficient telemetry. As such, external TAPs and monitoring networks have to be built to get a sense of what is actually going on in the infrastructure. The figure below shows what monitoring today looks like.
Figure 9: A traditional approach to network telemetry and analytics requires external TAPs, probes and packet brokers, plus an expensive fabric to aggregate and filter mirrored traffic from the servers.
This is where challenges arise. A typical data center network that connects servers runs a combination of 10, 25, 40 and 100 GbE today. These switches typically have many servers connected to them that are pumping traffic at high speed.

Some possible approaches to instrumenting the network today are as follows:

1. Provision a copper or fiber optic TAP at every link and divert a copy of every packet to a packet broker fabric, which in turn routes traffic to the monitoring tools. With fiber optic TAPs and passives, every packet is mirrored, and the monitoring tools need to deal with a few Tb/s or 1B+ packets per second from each switch. However, the reality is that this approach is impossibly expensive, and thus no one deploys it.

2. Selectively place copper or fiber optic TAPs at uplinks or edge ports. Mirror these edge packets to a packet broker fabric, which in turn routes traffic to the monitoring tools. While this is less costly, it means the inner network becomes a black hole with no visibility. Many of us have learned the hard way over time that without 100% visibility, you can’t fix a problem very efficiently. In addition, even this selective deployment of hardware makes the cost go up dramatically, as more switches are deployed and require monitoring – the monitoring fabric needs more capacity and the monitoring software gets more complex and needs more hardware resources.

3. Use the networking switches themselves to selectively sample traffic (e.g., sFlow with standard hardware or NetFlow with proprietary hardware) and send this traffic and flow information to monitoring tools. This approach is built upon the premise of sampling, where the sampling rates are typically 1 in 5,000 to 10,000 packets – any more than this runs into
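The scale numbers quoted in these approaches can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes a hypothetical 32-port 100 GbE switch and worst-case 64-byte frames (plus 20 bytes of preamble and inter-frame gap on the wire); the figures are illustrative, not vendor measurements.

```python
# Rough arithmetic behind the monitoring approaches above.
# Assumptions (illustrative): 32 x 100 GbE ports, minimum-size frames.

PORTS = 32
PORT_SPEED_BPS = 100e9                 # 100 GbE per port
WIRE_BYTES_PER_FRAME = 64 + 20         # min frame + preamble/IFG on the wire

switch_capacity_bps = PORTS * PORT_SPEED_BPS
max_pps = switch_capacity_bps / (WIRE_BYTES_PER_FRAME * 8)

# Approach 1: mirror everything -> the tools must absorb the full load.
print(f"Full mirroring: {switch_capacity_bps / 1e12:.1f} Tb/s, "
      f"~{max_pps / 1e9:.1f}B packets/s per switch")

# Approach 3: sFlow-style sampling at 1 in 5,000.
SAMPLE_RATE = 1 / 5000
print(f"Sampled load:  {switch_capacity_bps * SAMPLE_RATE / 1e9:.2f} Gb/s, "
      f"~{max_pps * SAMPLE_RATE / 1e6:.2f}M packets/s per switch")
```

This reproduces the orders of magnitude in the text: full mirroring yields a few Tb/s and billions of packets per second per switch, while 1-in-5,000 sampling reduces the export to well under a Gb/s at the cost of missing 99.98% of packets.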
Figure 10: One of many prepackaged reporting displays in UNUM Insight Analytics
As applications become more distributed, with both east-west and north-south traffic, and services are deployed within private clouds, the ability to monitor each and every connection is of paramount importance for both performance and security reasons (find more on security in Chapter 5). Given the amount of data, traditional sampled data network analytics sources do not scale. More traditional packet monitoring solutions designed to overcome this limitation unfortunately require significant hardware overlay infrastructures that are expensive, complex and consume space and power – not ideal for smaller data center environments. The best approach is to leverage the distributed processing power of the network switches themselves with some clever software to provide the data sources and analytics tools the ability to observe every packet and flow at a fraction of the cost of traditional hardware-based solutions.
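As a rough illustration of what observing every flow (rather than sampling) amounts to, the sketch below accumulates per-flow counters keyed by the classic 5-tuple. The record fields are assumptions chosen for illustration, not the actual Pluribus telemetry schema.

```python
# Toy per-flow telemetry accumulator: every packet updates the counters
# of its flow, identified by the 5-tuple (no sampling involved).

from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)          # frozen -> hashable, usable as a dict key
class FlowKey:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    proto: str

flows = defaultdict(lambda: {"packets": 0, "bytes": 0})

def record_packet(key: FlowKey, length: int) -> None:
    """Accumulate counters for the packet's flow."""
    flows[key]["packets"] += 1
    flows[key]["bytes"] += length

k = FlowKey("10.0.10.5", "10.0.20.7", 49152, 443, "tcp")
for size in (60, 1500, 1500):
    record_packet(k, size)
print(flows[k])  # {'packets': 3, 'bytes': 3060}
```

Exported periodically, records like these give the analytics layer a complete per-connection picture instead of a statistical sample.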
We can’t talk about data center networking automation without addressing the topic of security. Firewalls sit at the perimeter of the data center network and can protect against north-south traffic entering the data center fabric, but do not address east-west traffic and threats moving laterally once inside the network. For example, sophisticated malware can hide within encrypted data and be missed by conventional firewalls, and once inside can create significant damage. IoT is one example of an application with many new endpoints generating traffic and a potentially immature security model that results in a new, large attack surface. There are already examples of successful attacks through IoT devices, such as an attack on a casino via a WiFi-connected fish tank temperature sensor, as well as a massive retail attack via a WiFi-connected HVAC system.

Consequently, the industry has moved toward leveraging virtual routing and forwarding instances (VRFs) to segment the network and isolate traffic. VRFs can be deployed in a traditional underlay or on top of a VXLAN-based overlay. In either case, one of the challenges traditional networking solutions face is the complexity of provisioning and deploying the VRFs. If deployed in the underlay, it is necessary to configure multiple VRFs per switch on multiple switches across the data center or campus – a nightmare of complexity that is very prone to human error. In addition, because of the heavy protocol exchange in a typical VRF implementation, traditional solutions run into VRF scale challenges. Similarly, setting up a VXLAN fabric using, for example, BGP-EVPN requires N x tens of steps per switch, and then adding VRFs on top of that adds another N x tens of steps per switch.

Automating Network Segmentation

On the other hand, Pluribus’ open SDN approach with the Adaptive Cloud Fabric sets up a mesh of VXLAN tunnels automatically. Once deployed, VRFs can be programmed to run across the fabric on every switch within a VXLAN segment with a single atomic command. Literally only one command is needed – a dramatic simplification. In addition, Pluribus’ VRF scale is limited only by hardware because the Adaptive Cloud Fabric’s SDN approach does not need the protocol exchange typically required by VRFs.
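The provisioning-effort contrast can be made concrete with a toy model. The per-switch step counts below are assumptions chosen only to show the scaling behavior ("N x tens of steps" per switch box-by-box versus one fabric-wide command per VRF); they are not measured from any real CLI.

```python
# Toy model of configuration effort: box-by-box BGP-EVPN + VRFs
# versus atomic fabric-wide VRF commands. Step counts are assumptions.

def box_by_box_steps(switches: int, vrfs: int,
                     evpn_steps_per_switch: int = 30,
                     vrf_steps_per_switch: int = 10) -> int:
    """Every switch is touched individually: EVPN fabric bring-up plus
    per-VRF configuration repeated on each box."""
    return switches * (evpn_steps_per_switch + vrfs * vrf_steps_per_switch)

def fabric_wide_steps(switches: int, vrfs: int) -> int:
    """SDN fabric with atomic fabric-wide commands: one command per VRF,
    independent of the number of switches."""
    return vrfs

for n in (4, 16, 32):
    print(f"{n:2d} switches, 8 VRFs: "
          f"box-by-box={box_by_box_steps(n, 8):4d} steps, "
          f"fabric-wide={fabric_wide_steps(n, 8)} steps")
```

The box-by-box total grows with both the switch count and the VRF count, which is why the manual approach becomes error-prone at scale, while the fabric-wide total stays constant per VRF.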
Figure 11: VRFs distributed across the fabric with Anycast Gateways to better leverage pooled firewall resources (multiple VRF service instances across the fabric, with vFlow service insertion policy steering traffic to a perimeter firewall or IPS/IDS cluster behind the core routers)
The Pluribus Netvisor ONE operating system and Adaptive Cloud Fabric have been designed
to deliver a superior level of network automation to small IT teams while simultaneously fitting
into cost-, space- and power-constrained environments. Pluribus Networks takes advantage of
the underutilized distributed computational power, memory and packet processing inherent
in leaf-and-spine network switches distributed across one or multiple data center sites. By
leveraging these resources, Pluribus delivers a unique “controllerless” approach to SDN
automation of the physical network, provides a service-rich and secure VXLAN virtual fabric
and enables comprehensive and granular telemetry and analytics. Not only is the solution
cost-, space- and power-efficient, it is unified and pre-integrated, so it just works out of the box.
Supporting well-known orchestration systems, including VMware vCenter, Red Hat OpenStack
and Kubernetes, Pluribus Networks puts fully automated SDDC within reach for small IT teams
with constrained physical environments – the Easy Button for SDDC.
Copyright © 2019 Pluribus Networks, Inc. All Rights Reserved. Netvisor is a registered trademark, and The Pluribus Networks logo, Pluribus Networks, Freedom, Adaptive Cloud Fabric, and VirtualWire are trademarks of Pluribus Networks, Inc. All other brands and product names are registered and unregistered trademarks of their respective owners. Pluribus Networks reserves the right to make changes to its technical information and specifications at any time, without notice. This informational document may describe features that may not be currently available or may be part of a future release, or may require separate licensed software to function as described.

Pluribus Networks, Inc.
www.pluribusnetworks.com