BRKDCT-2218
#clmel
Abstract
• Network design for the data centre has evolved over time, yet there has typically been a common requirement for networked connectivity to all applications and their respective resources of physical and virtual compute, storage and network services, as well as to other required services and locations. Many of the technical design challenges are the same regardless of the size of the organisation. This session discusses example architectures for small to medium data centres, starting from entry level, and then illustrates transition points to increase scale and capacity whilst providing support for additional features and functionality. The Nexus switching product range is referenced in the examples, with guidance provided around optimisation of features and protocols. Also included is a discussion on connecting to remote data centres, as well as considerations for extending workloads to public clouds.
Cisco Live Melbourne Related Sessions
BRKDCT-2048 Deploying Virtual Port Channel (vPC) in NX-OS
BRKDCT-2049 Data Centre Interconnect with Overlay Transport Virtualisation
BRKDCT-2334 Data Centre Deployments and Best Practices with NX-OS
BRKDCT-2404 VXLAN Deployment Models - A Practical Perspective
BRKDCT-2615 How to Achieve True Active-Active Data Centre Infrastructures
BRKDCT-3640 Nexus 9000 Architecture
BRKDCT-3641 Data Centre Fabric Design: Leveraging Network Programmability and Orchestration
BRKARC-3601 Nexus 7000/7700 Architecture and Design Flexibility for Evolving Data Centres
BRKACI-2000 Application Centric Infrastructure Fundamentals
BRKACI-2001 Integration and Interoperation of Existing Nexus Networks into an ACI Architecture
BRKACI-2006 Integration of Hypervisors and L4-7 Services into an ACI Fabric
BRKACI-2601 Real World ACI Deployment and Migration
BRKVIR-2044 Multi-Hypervisor Networking - Compare and Contrast
BRKVIR-2602 Comprehensive Data Centre & Cloud Management with UCS Director
BRKVIR-2603 Automating Cloud Network Services in Hybrid Physical and Virtual Environments
BRKVIR-2931 End-to-End Application-Centric Data Centre
BRKVIR-3601 Building the Hybrid Cloud with Intercloud Fabric - Design and Implementation
Start Small… Then Grow… Then Evolve
Juggling Many Pieces…
Which Pieces to Select?
Agenda
• Introduction
• Spine/Leaf Primer
• Initial Design Options
• Scale Up or Out
• Data Centre Interconnect Solutions
• Programmability
• Automation & Orchestration
• Cloud Considerations
Designing Small to Medium Sized Data Centres
Typical Requirements
• Minimum pair of dedicated DC switches; transition from a collapsed core
• Workloads mostly virtualised, some physical
• Connectivity to the network periphery: client access, campus, WAN/DCI, with an L3/L2 boundary in the DC
• Storage connectivity: FC, FCoE, iSCSI/NAS
• Scalable: sized for current needs, with components reusable in larger designs
• Topology options: from a single layer to spine-leaf
Design Options
• Feature choice + priority = tradeoffs
• Driving efficiency: SDN, programmability, orchestration, automation
• “Cloud with Control”
Design Goals
Flexible
Practical
Single-Tier, Dual-Tier, Spine/Leaf
(Diagram: topology options, from a Single Layer DC and a Dual Tier DC through a Small Spine/Leaf fabric to a Scalable Spine/Leaf DC Fabric built on VXLAN.)
Compute Connectivity & Usage Needs Drive Design Choices
Compute Form Factor
– Unified Computing Fabric
– 3rd Party Blade Servers
– Rack Servers (Non-UCS Managed)
Hypervisor Network Virtualisation Requirements
– vSwitch vSS/vDS, OVS, Hyper-V, Nexus 1000v/AVS
Storage Protocols
– Fibre Channel (FC)
– FCoE
– IP (iSCSI, NAS)
Automation/Orchestration
– Abstraction
– APIs/Programmability/Orchestration
– VMMs; Fabric
Connectivity Model
– 10 or 1-GigE server ports
– NIC/HBA interfaces per server
– NIC teaming models
Data Centre Fabric Needs
• “North-South”: traffic to and from end-users and external entities (Internet, public cloud, mobile network, offsite enterprise DC / Site B).
Traditional Multi-Tier Hierarchical Design
• Extremely wide customer-deployment footprint
• Scales well, but scoping of failure domains imposes some restrictions:
– L3/L2 boundary at the aggregation layer
– VLAN extension / workload mobility options limited
– Default gateway placement
• Network services repeated at every aggregation tier
• Discrete device management
Topology Selection: Single/Dual/Multi-Layer vs. Spine-Leaf
Data Centre “Fabric” Journey
From STP to vPC to FabricPath to VXLAN (flood & learn), and on to FabricPath with BGP and VXLAN with EVPN as the control plane, extended across the MAN/WAN.
Why Spine-Leaf Design? Pay as You Grow Model
• Need more host ports? Add another leaf.
• Need even more host ports, or more backplane? Add spines: spreading the load across more spines lowers per-spine utilisation and speeds up flow completion times (FCT*).
• Example with 40G fabric ports and 10G host ports: 2x48 10G host ports = 96 ports (960 Gbps total); 3x48 = 144 ports (1,440 Gbps total); 4x48 = 192 ports (1,920 Gbps total).
* FCT = Flow Completion Time; lower FCT = faster applications.
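To make the pay-as-you-grow arithmetic concrete, here is a minimal Python sketch using the example values from this deck (48x10G host ports per leaf, 40G fabric links, sized non-blocking); the function and its defaults are illustrative, not a Cisco sizing tool.

def fabric_capacity(leaves, host_ports_per_leaf=48, host_gbps=10, fabric_gbps=40):
    """Host ports, host bandwidth and 40G fabric ports for a given leaf count."""
    host_ports = leaves * host_ports_per_leaf
    host_bw = host_ports * host_gbps                    # Gbps of server-facing capacity
    # Non-blocking: fabric uplink bandwidth per leaf equals its host-facing bandwidth.
    fabric_links_per_leaf = (host_ports_per_leaf * host_gbps) // fabric_gbps
    fabric_ports = leaves * fabric_links_per_leaf       # 40G ports needed across the spines
    return host_ports, host_bw, fabric_ports

for leaves in (2, 3, 4, 6, 8):
    ports, bw, fabric = fabric_capacity(leaves)
    print(f"{leaves} leaves: {ports}x10G host ports, {bw} Gbps, {fabric}x40G fabric ports")

Two leaves reproduce the 96 host ports / 960 Gbps / 24x40G starter pod shown on the following slides; six and eight leaves reproduce the 72x40G and 96x40G fabric-port requirements used in the later scaling examples.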
Spine/Leaf DC Fabric ≅ Large Non-Blocking Switch
(Diagram: hosts 1–7 each attach to a leaf; every leaf connects to every spine, so the fabric behaves like one large non-blocking switch.)
Spine/Leaf DC Fabric ≅ Large Modular Switch
(Diagram: the same fabric drawn as a modular chassis, with leaf switches corresponding to line cards and spine switches to fabric modules.)
Impact of Link Speed – the Drive Past 10G Links
(Diagram: 200G of aggregate uplink bandwidth delivered as different link-speed bundles.)
• 40 and 100Gbps links provide very similar performance for fabric links
• 40G provides performance, link redundancy, and low cost with BiDi optics
Statistical Probabilities of Efficient Forwarding
With 11x10Gbps flows (55% load) hashed across the uplink bundle, the probability of achieving 100% throughput is approximately:
• 20x10Gbps uplinks: ≅ 3%
• 5x40Gbps uplinks: ≅ 75%
• 2x100Gbps uplinks: ≅ 99%
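These figures follow from ECMP-style hashing: a 10G flow only gets full rate if its uplink is not offered more traffic than it can carry, and many thin 10G links collide far more often than a few fat links. A minimal Monte Carlo sketch (assuming independent, uniform per-flow hashing, which is an idealisation of real ECMP) reproduces the numbers:

import random

def p_full_throughput(n_links, link_gbps, flows=11, flow_gbps=10, trials=200_000):
    """Probability that no uplink is offered more traffic than it can carry."""
    ok = 0
    for _ in range(trials):
        load = [0] * n_links
        for _ in range(flows):
            load[random.randrange(n_links)] += flow_gbps   # hash each flow to a random uplink
        if max(load) <= link_gbps:                         # no link oversubscribed => 100% throughput
            ok += 1
    return ok / trials

for links, speed in [(20, 10), (5, 40), (2, 100)]:
    print(f"{links}x{speed}G uplinks: P(100% throughput) ≈ {p_full_throughput(links, speed):.2f}")

Running this prints roughly 0.03, 0.75 and 1.00, matching the slide.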
Impact of Link Speed on Flow Completion Times
(Chart: average FCT for large (10MB,∞) background flows, normalised to optimal, versus load from 30% to 80%; lower FCT is better. Curves shown for an ideal output-queued switch and for 20x10Gbps, 5x40Gbps and 2x100Gbps fabrics.)
Flow completion is dependent on queuing and latency. 40G is not just about faster ports and optics; it is about faster flow completion:
• 40/100Gbps fabric: ~same FCT as a non-blocking switch
• 10Gbps fabric links: FCT up to 40% worse than 40/100G
DC and Cloud Networking Portfolio – Nexus Family
Nexus 1000V/AVS, Nexus 2000/2300, Nexus 3548/3100, Nexus 5000/5600, Nexus 6000, Nexus 7000/7700 and Nexus 9000 (with the ACI ecosystem).
Decoding the Nexus Product Numbers
Single Layer Data Centre, Nexus 5500
• Dedicated Nexus 5500-based switch pair connecting to the network periphery (client access, campus, WAN/DCI)
Positive: Unified Ports on all ports – maximum flexibility
Negative: L3 card: 160G max, not cumulative
Models: Nexus 5548P; Nexus 5548UP; Nexus 5596UP; Nexus 5596T
Single Layer Data Centre, Nexus 5600
• Dedicated Nexus 5600-based switch pair connecting to the network periphery (client access, campus, WAN/DCI)
Positive: Low price/performance
Negative: No ACI support
Models: Nexus 5624Q; Nexus 5648Q; Nexus 5696Q; Nexus 5672UP; Nexus 56128P
Single Layer Data Centre, Nexus 6000
• Positioned for rapid scalability and a 40-GigE fabric
Positive: Unified Ports – good flexibility with expansion
Negative: No VXLAN support in hardware in early models (need the 6004-EF)
Models: Nexus 6001; Nexus 6004; Nexus 6004-EF
Single Layer Data Centre, Nexus 9300
• Dedicated Nexus 9300-based switch pair connecting to the network periphery (client access, campus, WAN/DCI) and iSCSI/NAS storage
Positive: Native 40G & 10G
Negative: Breakout on some 40G ports
Models: Nexus 9372TX; Nexus 9396TX; Nexus 93120TX; Nexus 93128TX; Nexus 9372PX; Nexus 9396PX; Nexus 9332PQ; Nexus 9336PQ (ACI Spine only)
Single Layer Data Centre, Nexus 7000/7700
• Highly available virtualised chassis access/aggregation model
Positive: More feature-rich platform; modular, easy scale-up; DFA spine/leaf; FCoE; supports 32 FEX
Negative: Higher initial capital cost; no Unified Ports; no native DCI
Fabric Server Access Starter Pod
• 24x40G fabric ports needed for a non-oversubscribed design; 72x40G available
*** Server/rack density dependent on required load, available power and cooling (geo-diverse)
Scaling with Spine/Leaf
• 72x40G fabric ports needed for a non-oversubscribed design; 72x40G available
• Two racks: 96x10G ports (960 Gbps), 24x40G fabric ports
• Three racks: 144x10G ports (1,440 Gbps), 36x40G fabric ports
• Four racks: 192x10G ports (1,920 Gbps), 48x40G fabric ports
• Five racks: 240x10G ports (2,400 Gbps), 60x40G fabric ports
• Six racks: 288x10G ports (2,880 Gbps), 72x40G fabric ports
*** This example is 100% non-blocking, non-oversubscribed. Could build an oversubscribed model with FEX or fewer fabric links. Server/rack density dependent on load, power, cooling (geo-diverse)
When Do You Add/Upgrade Spines?
• 96x40G fabric ports needed for a non-oversubscribed design, but only 72x40G available on the existing spines
• Adding spines: 144x40G now available, with a smaller failure impact per spine
• Six racks: 288x10G ports (2,880 Gbps); eight racks: 384x10G ports (3,840 Gbps)
*** This example is 100% non-blocking, non-oversubscribed. Could build an oversubscribed model with FEX or fewer fabric links. Server/rack density dependent on load, power, cooling (geo-diverse)
When Do You Add/Upgrade Spines? (continued)
• 96x40G fabric ports needed for a non-oversubscribed design
• Upgrading to modular spines: 2x36 in each modular spine, 280x40G; 140x40G available; line-card redundancy, spine ISSU, etc.
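When the required fabric ports outgrow the installed spines, the choice is between adding more fixed spines and moving to larger or modular spines. A small sketch of that decision, assuming fixed-configuration spines with 36x40G fabric ports each; the sizing rule and numbers are illustrative only, not a Cisco tool:

import math

def spine_plan(fabric_ports_needed, ports_per_spine=36, min_spines=2):
    """How many spines are needed, and what one spine failure costs."""
    spines = max(min_spines, math.ceil(fabric_ports_needed / ports_per_spine))
    capacity = spines * ports_per_spine
    failure_impact = 1 / spines          # fraction of fabric bandwidth lost if one spine fails
    return spines, capacity, failure_impact

for needed in (24, 72, 96, 144):
    n, cap, impact = spine_plan(needed)
    print(f"{needed}x40G needed: {n} spines, {cap}x40G available, "
          f"one spine failure removes {impact:.0%} of the fabric")

In practice leaf uplinks are spread evenly across the spines, so spine counts are usually grown in steps that keep the ECMP layout symmetric (for example two to four spines, as in the slides, rather than two to three).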
Q: Okay, I Have My Spine-Leaf Topology. Now What?
Controllers
Integrated Overlays
• Flexibility/Programmability
– Reduced number of touch points
Flexible Data Centre Fabrics
Create virtual networks on top of an efficient IP network, connecting physical and virtual hosts:
• Mobility
• Segmentation + policy
• Scale
• Automated & programmable
• Full cross-sectional bandwidth
• L2 + L3 connectivity
• Physical + virtual
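The virtual networks ride the IP fabric as VXLAN segments: each original L2 frame is wrapped in UDP (destination port 4789) behind an 8-byte VXLAN header carrying a 24-bit VNI, which is what allows millions of isolated segments over one routed underlay. A minimal sketch of that header per RFC 7348 (the VNI value is arbitrary):

import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags word with the I bit, then VNI<<8."""
    flags = 0x08 << 24                    # I bit set: the VNI field is valid
    return struct.pack("!II", flags, (vni & 0xFFFFFF) << 8)

print(vxlan_header(10100).hex())          # 0800000000277400 -> VNI 10100 in bytes 4-6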
SVI/VNI/VLAN Scoping and Provisioning
• Orchestration leads to scale optimisation
(Diagram: distributed L3 gateways on every leaf across the L3 fabric, provisioned from a common management point.)
Q: How Do I Integrate Spine-Leaf into an Existing Classic Tiered Network?
Scaling a vPC-based DC Design
(Diagram: a vPC access-layer switch pair at the L3/L2 boundary, with dual-attached hosts in VLANs 100–150.)
Scaling a vPC-based DC Design (continued)
(Diagram: a DC core layer added above multiple vPC access-layer pairs; one pair serves VLANs 100–150 and the next serves VLANs 151–200.)
Integrating Spine/Leaf with an Existing Network
(Diagram: the existing DC core layer connects the legacy aggregation/access tiers, with their L3/L2 boundary and VLANs 100–150 and 151–200, to a new ACI pod: a VXLAN-based ACI fabric with spine and leaf layers and ACI border leafs, used for a data row upgrade or a new application in VLANs 201–250.)
Data Centre Interconnect Options
• Options for L2 interconnect between data centres across the WAN/DCI
(Diagram: two sites, each with client access, campus and WAN/DCI connectivity, an L3/L2 boundary, virtual DC services in software and virtualised workloads; L2 extension provided via ASR1000 at the WAN/DCI edge or Nexus 7000.)
Nexus Programmability
Provisioning & Orchestration    Nexus 7K    Nexus 5K/6K    Nexus 9K
Puppet/Chef                     Future      Shipping       Shipping
PoAP                            Shipping    Shipping       Shipping
OpenStack                       Shipping    Shipping       Shipping
Programming for Many Boxes – GitHub Repository
https://github.com/datacenter/
Programming Examples
Here is an example that uses NX-API on the Nexus 9000. It can automate mundane configuration tasks: you launch it remotely (from your Mac/PC) and use it to get an inventory of the switch, configure new interfaces, etc.:
https://github.com/datacenter/nexus9000/blob/master/nx-os/nxapi/getting_started/nxapi_basics.py
Here is another one that collects the output of several “show” commands and puts them together to create a “super command” with nice NX-OS-style formatting:
https://github.com/datacenter/nexus9000/blob/master/nx-os/python/samples/showtrans.py
There are a few others, such as a CRC error checker here:
https://github.com/datacenter/nexus7000/blob/master/crc_checker_n7k.py
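For orientation, the NX-API mechanism those scripts use is a JSON-RPC call over HTTP(S) to the switch. A minimal sketch (the hostname and credentials are placeholders, and NX-API must be enabled on the switch with “feature nxapi”):

import requests

SWITCH_URL = "https://n9k.example.com/ins"           # hypothetical switch hostname
payload = [{
    "jsonrpc": "2.0",
    "method": "cli",                                 # run a CLI command, return structured data
    "params": {"cmd": "show interface brief", "version": 1},
    "id": 1,
}]

resp = requests.post(SWITCH_URL, json=payload,
                     headers={"content-type": "application/json-rpc"},
                     auth=("admin", "password"), verify=False, timeout=10)
print(resp.json()["result"]["body"])                 # parsed, structured command output

The GitHub examples above build on the same call, adding argument parsing and output formatting.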
UCS Manages Compute through Abstraction
A UCS service profile abstracts the server's identity and configuration, including:
• SAN connectivity configuration
• LAN connectivity configuration
• Motherboard firmware
• BIOS configuration
• Adapter firmware
• Boot order
• RAID configuration
• Maintenance policy
ACI Manages Communications through Abstraction
An application network profile abstracts the communication requirements between application tiers, including:
• External connectivity
• Host connectivity
• Network path forwarding
• ACL and QoS policy
• Server load balancer (SLB) configuration
• Firewall (FW) configuration
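A rough picture of what such a profile captures, expressed as a plain Python data structure; this is an illustration of the abstraction, not the actual APIC object model or its attribute names:

app_network_profile = {
    "application": "web-shop",                       # hypothetical application name
    "tiers": ["web", "app", "db"],                   # endpoint groups for each tier
    "contracts": [
        # who may talk to whom, and what network policy applies on that path
        {"from": "external", "to": "web", "services": ["FW", "SLB"], "acl": "permit tcp/443",  "qos": "gold"},
        {"from": "web",      "to": "app", "services": ["FW"],        "acl": "permit tcp/8080", "qos": "silver"},
        {"from": "app",      "to": "db",  "services": [],            "acl": "permit tcp/1433", "qos": "silver"},
    ],
}

# The controller renders this intent into per-switch forwarding, ACL/QoS and
# service-insertion configuration, rather than it being configured box by box.
for c in app_network_profile["contracts"]:
    print(f'{c["from"]} -> {c["to"]}: {c["acl"]}, QoS {c["qos"]}, services {c["services"] or "none"}')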
Different Modes of Operation with Nexus 9000
• Nexus 9000 Standalone (with controller, e.g. NCS or VTS): NX-OS working with multiple SDN controllers (inclusive of NfV) on a common 1/10/40/100GE platform; interoperable with 3rd-party ToR switches and WAN gear.
• Application Centric Infrastructure (with APIC): the APIC data object / policy model is integrated natively with NX-OS running on the Nexus 9000 switches (spines and leaves); requires Nexus 9000 hardware for leaves and spines as well as ACI software (switch code and APIC controller).
Cisco InterCloud Architectural Details
• The SP admin deploys ICPEP (UCSD)
• The administrator installs InterCloud Director
InterCloud Components
• InterCloud Director: UCSD-based, separate interface
• InterCloud Secure Fabric: N1Kv-based, does not require a full N1Kv install; vNIC from the InterCloud connector into the vSwitch; optional services integration with CSR1000v
Key Takeaways
Q&A
Complete Your Online Session Evaluation
Give us your feedback and receive a Cisco Live 2015 T-Shirt!
Complete your Overall Event Survey and 5 Session Evaluations.
Additional Resources
Follow-up information for more details: