
Small to Medium Data Centre Designs
BRKDCT-2218

Nic Rouhotas - Data Centre Consulting Engineer

#clmel
Abstract
• Network design for the data centre has evolved over time, yet typically there
has been a common requirement for networked connectivity to all
applications and their respective resources of physical and virtual compute,
storage and network services, as well as to other required services and
locations. Many of the technical design challenges are the same regardless of
the size of the organisation. This session will discuss example architectures for
small to medium data centres, starting from entry level and then illustrating
transition points to increase scale and capacity whilst providing support for
additional features and functionality. The Nexus switching product range will be
referenced in the examples, and guidance provided around optimisation of
features and protocols. Also included is a discussion on connecting to remote
data centres, as well as considerations for extending workloads to public clouds.
BRKDCT-2218 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public 3
Cisco Live Melbourne Related Sessions
BRKDCT-2048 Deploying Virtual Port Channel (vPC) in NXOS
BRKDCT-2049 Data Centre Interconnect with Overlay Transport Virtualisation
BRKDCT-2334 Data Centre Deployments and Best Practices with NX-OS
BRKDCT-2404 VXLAN Deployment Models - A Practical Perspective
BRKDCT-2615 How to Achieve True Active-Active Data Centre Infrastructures
BRKDCT-3640 Nexus 9000 Architecture
BRKDCT-3641 Data Centre Fabric Design: Leveraging Network Programmability and Orchestration
BRKARC-3601 Nexus 7000/7700 Architecture and Design Flexibility for Evolving Data Centres

Cisco Live Melbourne Related Sessions
BRKACI-2000 Application Centric Infrastructure Fundamentals
BRKACI-2001 Integration and Interoperation of Existing Nexus Networks into an ACI Architecture
BRKACI-2006 Integration of Hypervisors and L4-7 Services into an ACI Fabric
BRKACI-2601 Real World ACI Deployment and Migration
BRKVIR-2044 Multi-Hypervisor Networking - Compare and Contrast
BRKVIR-2602 Comprehensive Data Centre & Cloud Management with UCS Director
BRKVIR-2603 Automating Cloud Network Services in Hybrid Physical and Virtual Environments
BRKVIR-2931 End-to-End Application-Centric Data Centre
BRKVIR-3601 Building the Hybrid Cloud with Intercloud Fabric - Design and Implementation
Start Small … Then Grow … Then Evolve

Blade Runner, BrickWorld US
Juggling Many Pieces…

Which Pieces to Select?

Agenda
• Introduction
• Spine/Leaf Primer
• Initial Design Options
• Scale Up or Out
• Data Centre Interconnect Solutions
• Programmability
• Automation & Orchestration
• Cloud Considerations

Designing Small to Medium Sized Data Centres

Typical Requirements
• Minimum pair of dedicated DC switches
• Transition from collapsed core
• Workloads mostly virtualised, some physical
• Connect to network periphery (Client Access, Campus, WAN/DCI)

Scalable
• Size for current needs
• Reuse components in larger designs
• Topology options: from single layer to spine-leaf

Design Options
• Feature choice + priority = tradeoffs
• Driving efficiency: SDN, Programmability, Orchestration, Automation
• “Cloud with Control”

(Diagram: dedicated DC switch pair at the L3/L2 boundary, attaching FC, FCoE and iSCSI/NAS storage)
Design Goals

Flexible · Practical · Agile

Image Credit: In speaker notes
What Are You Ready For?

Direction will depend on where you draw the line:
• Want to stay with existing toolsets for config & management?
• Interested in new toolsets to buy some efficiency?
• Capable of consuming a new set of tools?
• New or traditional operational model?

Image Credit: In notes
Single-Tier, Dual-Tier, Spine/Leaf

(Diagram: topology options in increasing scale – Single Layer DC, Dual Tier DC, Small Spine/Leaf, and Scalable Spine/Leaf DC Fabric, the latter two built with VXLAN)
Compute Connectivity & Usage Needs Drive Design Choices
• Compute Form Factor
– Unified Computing Fabric
– 3rd Party Blade Servers
– Rack Servers (Non-UCS Managed)
• Storage Protocols
– Fibre Channel (FC)
– FCoE
– IP (iSCSI, NAS)
• Connectivity Model
– 10 or 1-GigE server ports
– NIC/HBA interfaces per server
– NIC teaming models
• Hypervisor Network Virtualisation Requirements
– vSwitch: vSS/vDS, OVS, Hyper-V, Nexus 1000v/AVS
• Automation/Orchestration
– Abstraction
– APIs/Programmability/Orchestration
– VMMs; Fabric

(Diagram: virtualised hosts attached over FC, FCoE, iSCSI and NFS/CIFS)
Data Centre Fabric Needs
• “North-South” traffic: end-users and external entities (Internet, public cloud, mobile network, offsite DC / Enterprise Site B)
• “East-West” traffic: clustered applications, workload mobility
• High throughput, low latency
• Increasing high availability requirements
• Automation & Orchestration

(Diagram: the data centre fabric interconnecting storage (FC, FCoE, iSCSI/NAS), services, server/compute, and orchestration/monitoring via API)
Traditional Multi-Tier Hierarchical Design
• Extremely wide customer-deployment footprint
• Scales well, but scoping of failure domains imposes some restrictions
– L3 boundary at aggregation
– VLAN extension / workload mobility options limited
– Default gateway placement
• Network services repeated at every aggregation tier
• Discrete device management

(Diagram: core1/core2 above aggregation pairs agg1/agg2 … aggX/aggY, with the L3/L2 boundary at the aggregation layer)
Topology Selection: Single/Dual/Multi-Layer vs. Spine-Leaf

(Diagram: a Core, Aggregation and Access hierarchy compared with a Spine-Leaf fabric)
Data Centre “Fabric” Journey

(Diagram: evolution of the DC fabric attached to the MAN/WAN – STP → vPC → FabricPath → VXLAN (Flood & Learn) → FabricPath/BGP and VXLAN/EVPN)
Why Spine-Leaf Design? Pay-as-You-Grow Model
• Need more host ports? Add a leaf.
• Need even more host ports? Add another leaf.
• To speed up flow completion times, add more backplane: spread load across more spines, lowering per-spine utilisation.
• 2×48 10G host ports = 96 ports (960 Gbps total); 3×48 = 144 ports (1440 Gbps); 4×48 = 192 ports (1920 Gbps)

* FCT = Flow Completion Time. Lower FCT = faster applications.

(Diagram: leaves added one at a time, each with 10G host ports and 40G fabric ports into the spines)
Spine/Leaf DC Fabric ≅ Large Non-Blocking Switch

(Diagram: hosts 1–7 attached to leaf switches communicate through the fabric as if all were ports on one large non-blocking switch)
Spine/Leaf DC Fabric ≅ Large Modular Switch

(Diagram: the analogy to a modular chassis – leaf switches play the role of line cards, spine switches the role of fabric modules)
Impact of Link Speed – the Drive Past 10G Links
• The same 200G of aggregate uplink bandwidth can be built three ways: 20×10Gbps, 5×40Gbps, or 2×100Gbps uplinks, each over 20×10Gbps downlinks
• 40 & 100Gbps fabrics provide very similar performance for fabric links
• 40G provides performance, link redundancy, and low cost with BiDi optics
Statistical Probabilities of Efficient Forwarding
With 11×10Gbps flows (55% load) hashed across the uplinks:
• 20×10Gbps uplinks: probability of 100% throughput ≅ 3%
• 5×40Gbps uplinks: probability of 100% throughput ≅ 75%
• 2×100Gbps uplinks: probability of 100% throughput ≅ 99%
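The probabilities on this slide can be checked with a short Monte Carlo sketch. It assumes ideal ECMP behaviour, each flow hashed uniformly at random onto one uplink, and counts the trials in which no uplink receives more flows than it can carry at line rate (the function name and defaults are ours, matching the slide's 11×10Gbps-flow scenario):

```python
import random

def p_full_throughput(n_links, link_gbps, n_flows=11, flow_gbps=10,
                      trials=100_000, seed=1):
    """Estimate the probability that uniform-random ECMP hashing of
    n_flows equal-size flows onto n_links leaves no link oversubscribed."""
    per_link_cap = link_gbps // flow_gbps  # flows one link carries at line rate
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        load = [0] * n_links
        for _ in range(n_flows):
            load[rng.randrange(n_links)] += 1  # hash flow onto a link
        if max(load) <= per_link_cap:
            ok += 1
    return ok / trials

# 20x10G: every flow needs its own link; collisions are almost certain (~3%)
# 5x40G: each link fits 4 flows (~75%)
# 2x100G: each link fits 10 flows; only an 11-0 split fails (~99.9%)
```

Fewer, fatter links make hash collisions far less likely at the same aggregate bandwidth, which is the slide's point.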
Impact of Link Speed on Flow Completion Times

(Chart: average FCT for large (10MB,∞) background flows, normalised to optimal, vs. load 30–80%; curves for an OQ-Switch, 2×100Gbps, 5×40Gbps and 20×10Gbps fabrics. Lower FCT is better.)
Impact of Link Speed on Flow Completion Times
• Flow completion is dependent on queuing and latency
• 40G is not just about faster ports and optics; it's about faster flow completion
• 40/100Gbps fabric: ~same FCT as a non-blocking switch
• 10Gbps fabric links: FCT up to 40% worse than 40/100G

(Chart: the same FCT vs. load curves as the previous slide)
DC and Cloud Networking Portfolio – Nexus Family

Nexus 1000V/AVS · Nexus 2000/2300 · Nexus 3548/3100 · Nexus 5000/5600 · Nexus 6000 · Nexus 7000/7700 · Nexus 9000 + ACI Ecosystem

• Open: APIs, open source, application policy model
• High performance: 1/10/40/100 GE; resilient, scalable fabric
• Fabric: VXLAN, BGP-EVPN; workload mobility within/across DCs
• Scalable: LAN/SAN convergence
• Secure segmentation: operational efficiency (P-V-C), architectural flexibility
Decoding the Nexus Product Numbers

Decoding Nexus 5600 model numbers:
• 5672: ((32+16)×10G = 480G) + (6×40G = 240G) = 720G; 720/10 = 72
• 56128: ((48+24+24)×10G = 960G) + ((4+2+2)×40G = 320G) = 1280G; 1280/10 = 128

Decoding Nexus 9300 model numbers:
• 9396: (48×10G = 480G) + (12×40G = 480G) = 960G; 960/10 = 96
• 93128: (96×10G = 960G) + (8×40G = 320G) = 1280G; 1280/10 = 128
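The arithmetic above, total port bandwidth in Gbps divided by 10, can be expressed as a small helper (a sketch; the port counts come from the slide, the function name is ours):

```python
def nexus_model_suffix(ten_gig_ports, forty_gig_ports):
    """Total switch bandwidth in Gbps divided by 10 yields the model suffix."""
    total_gbps = ten_gig_ports * 10 + forty_gig_ports * 40
    return total_gbps // 10

# 5672: 48x10G + 6x40G -> 72; 56128: 96x10G + 8x40G -> 128; 9396: 48x10G + 12x40G -> 96
```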

Single Layer Data Centre, Nexus 5500
• Dedicated Nexus 5500-based switch pair

Positive
• Unified Ports on all ports – maximum flexibility
• Can work as FC/FCoE access transition switch
• Non-blocking, line-rate 10Gbps L2
• ~2µs latency
• Supports FabricPath, DFA*
• 160G Layer 3 with L3 daughter card or GEM
• Supports 24 FEX, A-FEX, VM-FEX
• Most CVDs (e.g. FlexPod)

Negative
• L3 card: 160G max, not cumulative
• DFA “L2-only leaf”
• No VXLAN HW support
• No ACI support
• No native DCI support
• No VDC
• ISSU not supported with L3
• FEX count lower with L3
• Q: 5500 or 5600?

Models: Nexus 5548P; Nexus 5548UP; Nexus 5596UP; Nexus 5596T

(Diagram: 5500 pair at the L3/L2 boundary, with Nexus 2000 FEX for 10-GigE UCS C-Series and 1Gig/100M servers; FC, FCoE and iSCSI/NAS storage; Client Access, Campus and WAN/DCI upstream)
Single Layer Data Centre, Nexus 5600
• Dedicated Nexus 5600-based switch pair

Positive
• Low price/performance
• Unified Ports (not all ports) – good flexibility
• Supports VXLAN, FabricPath, DFA
• Non-blocking, line-rate L2/L3
• Native 40G/10G, breakout
• ~1µs latency
• Supports 24 FEX, A-FEX, VM-FEX

Negative
• No ACI support
• No native DCI support
• ISSU not supported with L3
• Q: 5500 or 5600?

Models: Nexus 5624Q; Nexus 5648Q; Nexus 5696Q; Nexus 5672UP; Nexus 56128P

(Diagram: 5600 pair at the L3/L2 boundary, with Nexus 2000 FEX for 10-GigE UCS C-Series and 1Gig/100M servers; FC, FCoE and iSCSI/NAS storage; Client Access, Campus and WAN/DCI upstream)
Single Layer Data Centre, Nexus 6000
• Positioned for rapid scalability and a 40-GigE fabric

Positive
• Unified Ports with expansion – good flexibility
• Non-disruptive scale-up
• 96×40G or 384×10G
• Supports VXLAN, FabricPath, DFA
• Non-blocking, line-rate L2/L3
• Native 100G/40G/10G, BiDi, breakout support
• ~1µs latency
• Supports 48 L2 FEX, 24 L3 FEX, A-FEX, VM-FEX

Negative
• No VXLAN support in HW in early models (need 6004-EF)
• No ACI support
• No native DCI support
• FEX count lower with L3
• ISSU not supported with L3
• Higher initial cost

Models: Nexus 6001; Nexus 6004; Nexus 6004-EF

(Diagram: Nexus 6004 pair at the L3/L2 boundary, with Nexus 2000 FEX for 10-GigE UCS C-Series and 1Gig/100M servers; FC, FCoE and iSCSI/NAS storage; Client Access, Campus and WAN/DCI upstream)
Single Layer Data Centre, Nexus 9300
• Dedicated Nexus 9300-based switch pair

Positive
• Low price/performance
• VXLAN support in HW
• ACI leaf & spine support
• Standalone leaf & spine
• Non-blocking, line-rate L2/L3
• Native 40G & 10G
• <1µs latency
• FEX support (16)
• FCoE hardware support*

Negative
• No FC, no Unified Ports
• FCoE will require SW
• No FabricPath, DFA support
• VXLAN control plane is multicast until EVPN
• No native DCI support
• Breakout on some 40G ports only
• ACI spine <> ACI leaf

Models: Nexus 9372TX; Nexus 9396TX; Nexus 93120TX; Nexus 93128TX;
Nexus 9372PX; Nexus 9396PX; Nexus 9332PQ; Nexus 9336PQ (ACI spine only)

(Diagram: 9300 pair at the L3/L2 boundary, with Nexus 2000 FEX for 10-GigE UCS C-Series and 1Gig/100M servers; iSCSI/NAS storage; Client Access, Campus and WAN/DCI upstream)
Single Layer Data Centre, Nexus 7000/7700
• Highly available virtualised chassis access/aggregation model

Positive
• More feature-rich platform
• Modular, easy scale-up
• Flexible L2/L3 with ISSU
• LISP*, OTV, FEX, FCoE, FabricPath, VXLAN*
• Native 100G, 40G & 10G, breakout
• DFA spine/leaf
• Supports 32 FEX
• VDC, PBR, WCCP, MACsec
• Different models (18-slot to 2-slot*)

Negative
• Higher initial capital cost
• No Unified Ports
• VXLAN support in future
• No ACI support
• Physical footprint

Models:
Chassis: Nexus 7004/7009/7010/7018; Nexus 7702*/7706/7710/7718
I/O Modules: M1 (10/100/1000 ; 1GE ; 10GE), M2 (10GE; 40GE; 40/100GE), F2E (1/10GE), F3 (1/10GE ; 40GE ; 100GE)

(Diagram: Nexus 7700 pair at the L3/L2 boundary, with Nexus 2000 FEX for 10-GigE UCS C-Series and 1Gig/100M servers; FCoE and iSCSI/NAS storage; Client Access, Campus and WAN/DCI upstream)
Single Layer Data Centre, Nexus 9500
• Highly available chassis access/aggregation model

Positive
• Modular, easy scale-up
• Flexible L2/L3 with ISSU*
• FEX*, FCoE*, VXLAN*
• Native 100G, 40G & 10G, breakout
• Supports 32 FEX*
• ACI spine/leaf support*

Negative
• Higher initial capital cost
• No FC, no Unified Ports
• FEX, VXLAN, FCoE support in future
• No DFA, FabricPath support
• ISSU coming in future
• VDC in future
• No native DCI

Models:
Chassis: Nexus 9504; Nexus 9508; Nexus 9516
I/O Modules: 94xx (NX-OS) ; 95xx (NX-OS, ACI) ; 96xx (NX-OS) ; 97xx (ACI)

(Diagram: Nexus 9500 pair at the L3/L2 boundary; iSCSI/NAS storage; 10 or 1-Gig attached UCS C-Series; Client Access, Campus and WAN/DCI upstream)
Fabric Server Access Starter Pod
• 24×40G fabric ports needed for non-oversubscribed; 72×40G available
• 5600 starter: 4×5672UP leaves, full SW bundle (including DCNM), ~US$250K list
• ACI starter: 2×9336PQ spines, 4×9396PX leaves, 3×APIC and 192-port leaf licensing, ~US$250K list
• Two racks, 96×10G host ports (960G)***

*** Server/rack density dependent on required load, available power and cooling (geo-diverse)
Scaling with Spine/Leaf
• Fabric ports needed for non-oversubscribed operation grow with each leaf pair: 24×40G, 36×40G, 48×40G, 60×40G, 72×40G (72×40G available)
• Two racks: 96×10G ports (960G); Three: 144×10G (1440G); Four: 192×10G (1920G); Five: 240×10G (2400G); Six: 288×10G (2880G)

*** This example is 100% non-blocking, non-oversubscribed. Could build an oversubscribed model with FEX or fewer fabric links. Server/rack density dependent on load, power, cooling (geo-diverse)
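The pay-as-you-grow arithmetic on these slides can be sketched in Python. It assumes the example's leaf profile (24 non-oversubscribed 10G host ports and 6×40G uplinks per leaf, two leaves per rack); the function and field names are illustrative:

```python
def pod_capacity(racks, leaves_per_rack=2, host_ports_per_leaf=24,
                 uplinks_per_leaf=6):
    """Port and bandwidth totals for a non-oversubscribed spine/leaf pod."""
    leaves = racks * leaves_per_rack
    host_ports = leaves * host_ports_per_leaf      # 10G host-facing ports
    fabric_ports = leaves * uplinks_per_leaf       # 40G spine-facing ports
    host_gbps = host_ports * 10
    uplink_gbps = fabric_ports * 40
    return {
        "host_ports": host_ports,
        "fabric_ports": fabric_ports,
        "host_gbps": host_gbps,
        "oversubscription": host_gbps / uplink_gbps,
    }

# Starter pod: two racks -> 96x10G host ports, 24x40G fabric ports, 1:1
# Eight racks -> 384x10G host ports, needing 96x40G fabric ports,
# which is what triggers the spine add/upgrade on the next slides.
```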
When Do You Add/Upgrade Spines?
• At eight racks, 96×40G fabric ports are needed for non-oversubscribed but only 72×40G are available; adding spines takes the fabric to 144×40G available, with a smaller failure impact per spine
• Six racks: 288×10G ports (2880G); Eight racks: 384×10G ports (3840G)

*** This example is 100% non-blocking, non-oversubscribed. Could build an oversubscribed model with FEX or fewer fabric links. Server/rack density dependent on load, power, cooling (geo-diverse)
When Do You Add/Upgrade Spines?
• 96×40G fabric ports needed for non-oversubscribed
• Alternative: upgrade to modular spines, 2×36 in each spine (280×40G / 140×40G available), gaining line-card redundancy, spine ISSU, etc.
• Eight racks, 384×10G ports (3840G)
Q: Okay, I Have My Spine-Leaf Topology. Now What?

Choice of fabric mode of operation:
• L2 vPC (traditional)
• L2 routed fabric (FabricPath)
• L3 ECMP with overlay (flood and learn)
• L3 ECMP with overlay + control plane
• Controllers
Integrated Overlays

Robust Underlay/Fabric
• High-capacity resilient fabric
• Intelligent packet handling
• Programmable & manageable

Flexible Overlay Virtual Network
• Mobility – track end-point attach at edges
• Segmentation
• Scale – reduce core state; distribute and partition state to the network edge
• Flexibility/programmability – reduced number of touch points
Flexible Data Centre Fabrics
Create virtual networks on top of an efficient IP network:
• Mobility
• Segmentation + policy
• Scale
• Automated & programmable
• Full cross-sectional BW
• L2 + L3 connectivity
• Physical + virtual

Use VXLAN to create DC fabrics.

(Diagram: a VXLAN overlay across an L3 fabric connecting VMs and physical hosts)
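As a concrete reference for what the overlay adds on the wire, here is a sketch of the 8-byte VXLAN header defined in RFC 7348 (a simplified illustration of the header only, not a full MAC-in-UDP encapsulation):

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header: flags byte 0x08 (valid-VNI bit set),
    24 reserved bits, the 24-bit VNI, then 8 reserved bits."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    return struct.pack("!II", 0x08 << 24, vni << 8)

# The 24-bit VNI is what gives VXLAN ~16M segments vs. 4K VLANs.
```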
SVI/VNI/VLAN Scoping and Provisioning
Orchestration leads to scale optimisation.

All VNIs/SVIs everywhere:
• Umbrella catch-all provisioning
• Full ARP state on all leaf nodes
• Can be manually provisioned up-front
• Open to L2 flooding everywhere

VNIs/SVIs scoped as hosts attach:
• Provision on host attach/policy
• ARP state only for local subnets
• Requires orchestration (e.g. ACI, VTS*)
• L2 flooding is scoped

(Diagram: two L3 fabrics – one with L3 gateways for every VNI on every leaf, one with gateways instantiated only where hosts attach)
Q: How Do I Integrate Spine-Leaf into an Existing Classic Tiered Network?
Scaling a vPC-based DC Design

(Diagram: a vPC access-layer pair at the L3/L2 boundary with hosts attached, VLANs 100–150)
Scaling a vPC-based DC Design

(Diagram: a DC core layer added above two vPC access-layer pairs, VLANs 100–150 on one and 151–200 on the other)
Integrating Spine/Leaf with an Existing Network

(Diagram: the existing DC core, aggregation (L3/L2 boundary) and access layers (VLANs 100–150, 151–200), with a new ACI pod – a VXLAN-based spine/leaf fabric – attached to the core via ACI border leafs; a data row upgrade / new application lands on the pod with VLANs 201–250)
Integrating Spine/Leaf with an Existing Network

(Diagram: the same integration, with one legacy access pair retired onto ACI leafs and the fabric attached to the core through combined ACI leafs and border leafs)
Data Centre Interconnect Options
• Options for L2 interconnect

(Diagram: two data centres, each with an ASR1000 at the WAN/DCI edge, an N7K at the L3/L2 boundary, and virtualised servers running Nexus 1000v, vPath and CSR 1000v to deliver virtual DC services in software)
Nexus Programmability
                               Nexus 7K    Nexus 5K/6K   Nexus 9K
Provisioning & Orchestration
  Puppet/Chef                  Future      Shipping      Shipping
  PoAP                         Shipping    Shipping      Shipping
  OpenStack                    Shipping    Shipping      Shipping
Protocols and Data Models
  XMPP                         Shipping    Shipping      Future
  LDAP                         Shipping    Shipping      Shipping
  NETCONF/XML                  Shipping    Shipping      Shipping
  NX-API (JSON/XML)            Future      Future        Shipping
  YANG                         Future      Future        Future
  REST                         Future      Future        Shipping
Programmatic Interfaces
  Native Python                Shipping    Shipping      Shipping
  Integrated container         Coming      Future        Shipping
  Guest Shell                  Future      Future        Shipping
  onePK                        Future      Shipping      Roadmap
  OpenFlow                     Future      Shipping      Shipping
  OpFlex                       Future      Future        Future
Programming for Many Boxes – GitHub Repository

https://github.com/datacenter/
Programming Examples
• Here's an example that uses NX-API on the Nexus 9000. It can automate mundane
configuration tasks: you launch it remotely (from your Mac/PC) and use it to get
an inventory of the switch, configure new interfaces, etc.:
https://github.com/datacenter/nexus9000/blob/master/nx-os/nxapi/getting_started/nxapi_basics.py
• Here's another that collects the output of several “show” commands and
puts them together to create a “super command” with nice NX-OS-style
formatting:
https://github.com/datacenter/nexus9000/blob/master/nx-os/python/samples/showtrans.py
• There are a few others, such as a CRC error check here:
https://github.com/datacenter/nexus7000/blob/master/crc_checker_n7k.py
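Under the hood, scripts like these talk JSON-RPC to the switch's NX-API endpoint (`/ins`). A minimal sketch of building that request body; the switch address and credentials in the comment are placeholders, and the helper name is ours:

```python
import json

def nxapi_payload(commands):
    """Build the JSON-RPC body NX-API expects: one request object per
    CLI command, each tagged with an incrementing id."""
    return json.dumps([
        {"jsonrpc": "2.0",
         "method": "cli",
         "params": {"cmd": cmd, "version": 1},
         "id": i + 1}
        for i, cmd in enumerate(commands)
    ])

# Posting it is then, e.g. with the requests library:
# requests.post("http://10.0.0.1/ins",
#               data=nxapi_payload(["show version", "show interface brief"]),
#               headers={"content-type": "application/json-rpc"},
#               auth=("admin", "password"))
```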

UCS Manages Compute through Abstraction
A Service Profile captures the full configuration of a server:
• SAN connectivity configuration
• LAN connectivity configuration
• Motherboard firmware
• BIOS configuration
• Adapter firmware
• Boot order
• RAID configuration
• Maintenance policy
ACI Manages Communications through Abstraction
An Application Network Profile captures the communication requirements of an application:
• External connectivity
• SLB configuration
• FW configuration
• Host connectivity
• ACLs
• QoS
• Network path forwarding
Different Modes of Operation with Nexus 9000

Nexus 9000 Standalone (with controller*, e.g. NCS, VTS):
• NX-OS on a common 1/10/40/100GE platform, working with multiple SDN controllers (inclusive of NfV)
• Loosely coupled integration (custom integration and open programmability)
• Deploy for multiple topologies: leaf/spine, 2-tier aggregation, full mesh
• Interoperable with 3rd-party ToR switches and WAN gear

Application Centric Infrastructure (with APIC):
• APIC data object / policy model integrated natively with NX-OS running on Nexus 9000 switches (spines and leaves)
• Tightly coupled integration: an out-of-the-box ready system
• Deployed as a well-known Clos topology; a system approach
• Must be Nexus 9000 hardware for leaves and spines, as well as ACI software (switch code and APIC controller)
Cisco InterCloud Architectural Details
• The administrator installs InterCloud Director; the SP admin deploys ICPEP (the InterCloud Provider Enablement Platform) via UCSD
• The InterCloud Secure Fabric links the private side (InterCloud Extender) to the public side (InterCloud Switch)
• Public destinations: Cisco Global InterCloud (or partner white-label) and Cisco Global InterCloud Services
• The VM Manager is installed and configured through InterCloud Director; InterCloud Services are consumed by end users and IT admins

(Diagram: VMs in the private cloud extended across the InterCloud Secure Fabric into the public cloud)
InterCloud Components
• InterCloud Director
– UCSD-based, separate interface
• InterCloud Secure Fabric
– N1Kv-based; doesn't require a full N1Kv install
– vNIC from the InterCloud connector into the vSwitch
– Optional services integration with CSR 1000v
• InterCloud Provider Enablement Platform
– ICF Provider Edition, implemented by the provider
Key Takeaways
• Cisco has many options for building DC solutions
• All solutions can start small and grow
• Does not have to be a “rip and replace”
• Spine-leaf does not have to be expensive
• Automated fabrics can provide new tools for simplified operations
• Cloud technologies can expose new operational models
Q&A
Complete Your Online Session Evaluation
Give us your feedback and receive a Cisco Live 2015 T-Shirt! Complete your Overall Event Survey and 5 Session Evaluations:
• Directly from your mobile device on the Cisco Live Mobile App
• By visiting the Cisco Live Mobile Site: http://showcase.genie-connect.com/clmelbourne2015
• At any Cisco Live Internet Station located throughout the venue

T-Shirts can be collected in the World of Solutions on Friday 20 March, 12:00pm–2:00pm.

Learn online with Cisco Live! Visit us online after the conference for full access to session videos and presentations: www.CiscoLiveAPAC.com
Additional Resources
Follow-up information for more details:
• ACI home page on CCO: http://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/index.html
• Promise Theory for Dummies (careful, adult language): https://www.socallinuxexpo.org/scale11x/presentations/promise-theory-dummies
• Meta Data in the Software Defined Data Center: https://www.youtube.com/watch?v=e29hQ7kCcNs&list=PLinuRwpnsHaf7ePRWHZ4Jb5gvTSrxkwpw&index=5
