
#CLUS

Data Center Design for Midsize Enterprises
Jeff Kreis
Technical Solutions Architect
BRKDCN-2218
Agenda
• STP, VPC and Spine/Leaf
• Initial Design Options
• Scale Up or Out
• Data Center Interconnect Solutions
• Programmability, Automation & Orchestration
• Cloud Considerations
• Conclusion

#CLUS BRKDCN-2218 © 2018 Cisco and/or its affiliates. All rights reserved. Cisco Public 3
Cisco Webex Teams
Questions?
Use Cisco Webex Teams (formerly Cisco Spark)
to chat with the speaker after the session

How
1 Find this session in the Cisco Events App
2 Click “Join the Discussion”
3 Install Webex Teams or go directly to the team space
4 Enter messages/questions in the team space

Webex Teams will be moderated by the speaker until June 18, 2018.
cs.co/ciscolivebot#BRKDCN-2218
Begin With the End in Mind – Stephen Covey

Sometimes you dig yourself a hole…

You likely already have some of the tools you need

…but some work better than others


Agenda
• Introduction
• STP, VPC and Spine/Leaf
• Initial Design Options
• Scale Up or Out
• Data Center Interconnect Solutions
• Feature-Specific Considerations
• Programmability, Automation & Orchestration
• Cloud Considerations

Designing Data Centers for Midsize Enterprises

• Defining “Midsize”
  • Require dedicated DC switches
  • Mostly virtualized, some physical
  • Might be in a transition from a collapsed core
• Scalability
  • Size for current needs
  • Reuse components in larger designs
• Design Options
  • Feature choice + priority = tradeoffs
  • Where the industry is going: Programmability, Orchestration, Automation
  • “Cloud with Control”

(Diagram: client access reaching the DC via WAN/DCI and campus, with the L3/L2 boundary and FC, FCoE, and iSCSI/NAS storage connectivity.)
LOTS of Related Sessions

• BRKCLD-2931 – Cisco MultiCloud: the How! – Carlos Pereira – Monday, Jun 11, 1:30 pm
• BRKDCN-2044 – Effective Evolution of the Data Center Virtual Network – Bill Dufresne – Monday, Jun 11, 1:30 pm
• BRKDCN-3378 – Building Data Center Networks with VXLAN BGP-EVPN – Lukas Krattiger – Thursday, Jun 14, 8:00 am
• TECDCN-2181 – Deployment Consideration for Interconnecting Distributed Virtual Data Center – Yves Louis, Victor Moreno – Sunday, Jun 10, 2:00 pm
• BRKDCN-2342 – Easy Fabric Management of VXLAN EVPN Networks with DCNM 11 – Ahmed Abeer – Wednesday, Jun 13, 10:00 am
• BRKNMS-2002 – Management, Monitoring, and Automation of Data Center Deployments with DCNM 11 – Ahmed Abeer – Thursday, Jun 14, 10:30 am
• BRKDCN-2025 – Maximizing Network Programmability and Automation with Open NX-OS – Nicolas Delecroix – Monday, Jun 11, 4:00 pm
• BRKDCN-2458 – Nexus 9000/7000/6000/5000 Operations and Maintenance Best Practices – Arvind Durai, Anis Edvalath – Monday, Jun 11, 1:30 pm
LOTS of Related Sessions

• TECDCN-2821 – Operating and Deploying NX-OS Nexus Devices in an Evolving World – David Jansen, Brenden Buresh – Sunday, Jun 10, 6:00 pm
• BRKACI-2110 – Tetration and ACI: Better Together – Chris McHenry – Monday, Jun 11, 1:30 pm
• TECSEC-4273 – Cisco’s Architectural Approach to Securing Modern Data Centers – Loy Evans – Sunday, Jun 10, 9:00 am
• BRKDCN-3346 – End-to-End QoS Implementation and Operation with Cisco Nexus Switches – Matthias Wessendorf – Wednesday, Jun 13, 1:30 pm
• BRKACI-2300 – ACI for VMware Admins – Nicolas Vermande – Tuesday, Jun 12, 1:30 pm
• BRKDCN-3001 – Leveraging Micro Segmentation to Build a Comprehensive Data Center Security Architecture – Brenden Buresh – Tuesday, Jun 12, 4:00 pm
What are you ready for?

Decisions will depend on where you draw the line:


• Want to stay with existing toolsets for config & management?

• Interested in new toolsets to buy some efficiency?

• Capable of consuming a new set of tools?

• New or traditional operational model?

Design Goals

Flexible Practical Speed

Single-Tier, Dual-Tier, Spine/Leaf

(Diagram: four design options side by side – a single-layer DC, a dual-tier DC, a small spine/leaf, and a scalable VXLAN-based spine/leaf DC fabric.)
Connectivity & Features Drive Design Choices

• Form Factor
  – Unified Computing Fabric
  – 3rd Party Blade Servers
  – Rack Servers (Non-UCS Managed)
• Storage & Storage Protocols
  – Native Fibre Channel
  – Unified Ports, FCoE
  – IP-based storage (iSCSI, NAS)
• Virtual Networking Requirements
  – vSwitch/DVS/OVS/Nexus1Kv/AVS
• Programmability/Automation/Orchestration
  – Abstraction
  – APIs/Programmability/Orchestration
• Connectivity Model
  – 25, 10 or 1-GigE server ports
  – NIC/HBA interfaces per server
  – NIC teaming models

(Diagram: virtualized servers attaching via FCoE, FC, iSCSI, and NFS/CIFS.)
Data Center Fabric Needs
• “North-South”: end-users and external entities
• “East-West”: clustered applications, workload mobility
• High throughput, low latency
• Increasing high-availability requirements
• Automation & Orchestration

(Diagram: the DC fabric carrying north-south traffic to the Internet, public cloud, offsite enterprise Site B, and mobile users, and east-west traffic among storage (FC, FCoE, iSCSI/NAS), services, and server/compute, with APIs feeding orchestration/monitoring.)
Agenda
• Introduction
• STP, VPC and Spine/Leaf
• Initial Design Options
• Scale Up or Out
• Data Center Interconnect Solutions
• Feature-Specific Considerations
• Programmability, Automation & Orchestration
• Cloud Considerations

History Lesson: Spanning Tree
• Spanning Tree introduced around 1985
• 32 years ago, we also saw:
  • Windows 1.0
  • DNS come out of academia
  • First Nintendo Entertainment System
• Since then, most DC designs have been built to work around STP
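The cost of working around STP comes down to simple arithmetic: in a single L2 broadcast domain, spanning tree prunes the topology to a loop-free tree, so only (switches - 1) inter-switch links ever forward. A minimal sketch (not from the session; the 2-spine x 4-leaf full-mesh example is hypothetical):

```python
# In one L2 domain, STP keeps a loop-free tree: only (switches - 1)
# inter-switch links forward; every remaining link is blocked.
def stp_forwarding_links(switches: int, links: int) -> tuple:
    """Return (forwarding, blocked) inter-switch link counts under one spanning tree."""
    forwarding = switches - 1      # a tree over N nodes has N-1 edges
    blocked = links - forwarding   # everything else sits idle
    return forwarding, blocked

# Hypothetical: 2 spines + 4 leaves wired as a full mesh = 8 inter-switch links
fwd, blk = stp_forwarding_links(switches=6, links=8)
print(fwd, blk)  # 5 forwarding, 3 blocked -- paid-for bandwidth doing nothing
```

The more links you add for redundancy, the more STP blocks, which is exactly why the workarounds below (PC, VPC, L3 fabrics) exist.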
Port Channel (PC)

• PAgP invented to address STP in the 90s
• IEEE standard in 2000 (802.3ad)
• Not perfect, but a good workaround
  • STP is still there on every link
  • Human error, misconfiguration, or a bug can still cause issues
Virtual Port Channel (VPC) “Fabric”
• VPC Northbound & Southbound
• More efficient than native STP
• STP is still running
• Another good workaround
• Configuration can become complex as switch counts grow
L3-Based Fabrics
• Every link forwarding
• L3 “routing” convergence
• Fast convergence (properly tuned)
• STP might still exist, but not in the “fabric”
• Drastic reduction in blocking & convergence
• VPC still needed at the edge
• Spine/Leaf:
  • Flexible design
  • Consistent hop count & latency
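The “consistent hop count & latency” claim is easy to verify on a model of the topology. A small sketch (sizes are illustrative, not from the session): build a 2-tier spine/leaf graph where every leaf uplinks to every spine, then BFS between leaves.

```python
# Every leaf-to-leaf path in a 2-tier spine/leaf fabric is exactly 2 hops
# (leaf -> any spine -> leaf), which is what gives consistent latency.
from collections import deque

def build_spine_leaf(spines: int, leaves: int) -> dict:
    adj = {f"spine{s}": set() for s in range(spines)}
    adj.update({f"leaf{l}": set() for l in range(leaves)})
    for s in range(spines):            # full mesh of leaf uplinks to spines
        for l in range(leaves):
            adj[f"spine{s}"].add(f"leaf{l}")
            adj[f"leaf{l}"].add(f"spine{s}")
    return adj

def hops(adj: dict, src: str, dst: str) -> int:
    seen, q = {src}, deque([(src, 0)])
    while q:
        node, d = q.popleft()
        if node == dst:
            return d
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                q.append((nbr, d + 1))
    return -1

adj = build_spine_leaf(spines=2, leaves=4)
dists = {hops(adj, f"leaf{a}", f"leaf{b}") for a in range(4) for b in range(4) if a != b}
print(dists)  # {2} -- every leaf pair is exactly two hops apart
```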
Flexibility and Efficiency
Why Spine-Leaf Design?

• Need more host ports? Add another leaf.
• Need even more backplane? To speed up flow completion times (FCT*), spread load across more spines and lower per-spine utilization.
• Lower FCT = FASTER applications

Example scaling (40G fabric ports up, 10G host ports down):
• 2x48 10G host ports = 96 ports (960 Gbps total)
• 3x48 10G host ports = 144 ports (1440 Gbps total)
• 4x48 10G host ports = 192 ports (1920 Gbps total)

* FCT = Flow Completion Times
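The scaling arithmetic above can be sketched in a few lines (the port counts follow the slide; the oversubscription helper is our own illustrative addition):

```python
# Each added leaf contributes 48x10G host ports; matching fabric bandwidth
# keeps the design non-blocking (oversubscription ratio of 1.0).
def host_capacity(leaves: int, host_ports_per_leaf: int = 48, host_gbps: int = 10):
    ports = leaves * host_ports_per_leaf
    return ports, ports * host_gbps          # (port count, total Gbps)

def oversubscription(host_gbps_total: float, fabric_gbps_total: float) -> float:
    """> 1.0 means host-facing bandwidth exceeds fabric bandwidth."""
    return host_gbps_total / fabric_gbps_total

for leaves in (2, 3, 4):
    print(leaves, host_capacity(leaves))     # (96, 960), (144, 1440), (192, 1920)

# One leaf at 1:1 needs 480G of uplinks, i.e. 12x40G fabric ports
assert oversubscription(480, 12 * 40) == 1.0
```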
Spine/Leaf DC Fabric ≅ Large Non-Blocking Switch

(Diagram: hosts 1–7 attached to a spine/leaf fabric behave as if attached to one large non-blocking switch.)
Spine/Leaf DC Fabric ≅ Large Modular Switch

(Diagram: leaves correspond to the line cards and spines to the fabric modules of a large modular switch, with hosts 1–7 attached.)
Impact of Link Speed – the Drive Past 10G Links

• The same 200G of aggregate uplink bandwidth can be delivered as 20×10Gbps, 5×40Gbps, or 2×100Gbps uplinks (over 20×10Gbps downlinks)
• 40 & 100Gbps fabrics provide very similar performance for fabric links
• 40/100G provides performance, link redundancy, and low cost with BiDi
Statistical Probabilities of Efficient Forwarding

With 11×10Gbps flows (55% load) hashed across the uplinks, the probability of 100% throughput is:
• 20×10Gbps uplinks: ≅ 3%
• 5×40Gbps uplinks: ≅ 75%
• 2×100Gbps uplinks: ≅ 99%
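Those three probabilities can be reproduced exactly (a sketch, not from the session). With `flows` equal-size flows hashed uniformly onto `links` uplinks, full throughput requires that no uplink carry more than `cap` flows, and the count of good hash outcomes is flows! times the x^flows coefficient of (Σ_{k≤cap} x^k/k!)^links:

```python
# Exact probability that ECMP hashing leaves no uplink oversubscribed.
from math import factorial

def p_full_throughput(links: int, cap: int, flows: int) -> float:
    base = [1.0 / factorial(k) for k in range(cap + 1)]
    poly = [1.0]                                  # running product of per-link polynomials
    for _ in range(links):
        nxt = [0.0] * (len(poly) + cap)
        for i, a in enumerate(poly):
            for j, b in enumerate(base):
                nxt[i + j] += a * b
        poly = nxt
    return poly[flows] * factorial(flows) / links ** flows

# 11 x 10G flows (55% load) on 200G of uplinks:
print(round(p_full_throughput(20, 1, 11), 3))   # 20x10G: 1 flow per link  -> ~0.033
print(round(p_full_throughput(5, 4, 11), 3))    # 5x40G:  4 flows per link -> ~0.750
print(round(p_full_throughput(2, 10, 11), 3))   # 2x100G: 10 flows per link -> ~0.999
```

Fewer, fatter links give each hash bucket more headroom, which is why the 2×100G case almost never congests.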
Impact of Link Speed on Flow Completion Times

(Chart: average FCT, normalized to an optimal output-queued switch, for large (10MB,∞) background flows at 30–80% load; lower FCT is better. Curves, best to worst: OQ-Switch, 2x100Gbps, 5x40Gbps, 20x10Gbps.)
Impact of Link Speed on Flow Completion Times

• Flow completion is dependent on queuing and latency
• 40G is not just about faster ports & higher bandwidth, it’s about faster flow completion
• 40/100Gbps fabric: ~ same FCT as a non-blocking switch
• 10Gbps fabric links: FCT up to 40% worse than 40/100G
40G BiDi Optics Preserve Existing 10G Cabling

• SFP-10G-SR ($995): one fiber pair, MMF LC patch cords over the OM4 fiber plant
• QSFP-40G-SR4 ($2995): four fiber pairs, MPO connectors
• QSFP-40G-SR-BD ($1095): one fiber pair, MMF LC patch cords – reuses the existing 10G cabling
• QSFP-40/100G-SR-BD ($1995): one fiber pair, MMF LC patch cords
• Distance <= 125m with OM4
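The economics follow directly from the list prices on the slide: a link needs one optic at each end, and BiDi runs over the existing LC duplex pair while SR4 needs MPO parallel fiber. A quick sketch (cabling cost itself is not modeled):

```python
# Per-link optics cost using the slide's list prices.
PRICES = {
    "SFP-10G-SR": 995,
    "QSFP-40G-SR4": 2995,
    "QSFP-40G-SR-BD": 1095,
    "QSFP-40/100G-SR-BD": 1995,
}

def link_optics_cost(optic: str) -> int:
    return 2 * PRICES[optic]    # one optic per end of the link

print(link_optics_cost("QSFP-40G-SR4"))     # 5990, and it needs new MPO cabling
print(link_optics_cost("QSFP-40G-SR-BD"))   # 2190, over the existing fiber pair
```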
Agenda
• Introduction
• STP, VPC and Spine/Leaf
• Initial Design Options
• Scale Up or Out
• Data Center Interconnect Solutions
• Feature-Specific Considerations
• Programmability, Automation & Orchestration
• Cloud Considerations

What Features Matter Most to You? (as of 6/18)

Feature support varies across the Nexus 5500, 5600, 6000, 7000, 7700, 9300, 9500, 3000 and 3500 platforms:
• Unified Ports (FC or E)
• FCoE *
• FEX
• VXLAN Bridging
• VXLAN Routing
• VXLAN Routing – F&L
• VXLAN Routing – BGP EVPN
• Automation – DCNM
• Automation – ACI
• ISSU
• DCI – OTV
• DCI – VXLAN**
• VDC
• LISP
• ITD
Agenda
• Introduction
• STP, VPC and Spine/Leaf
• Initial Design Options
• Scale Up or Out
• Data Center Interconnect Solutions
• Feature-Specific Considerations
• Programmability, Automation & Orchestration
• Cloud Considerations

Fabric Starter Pod

• 24x40G fabric ports needed for non-oversubscribed; 72x40G available
• N9K starter: 2x9364C spines + 2x93xxx-EX leaves
• DCNM starter: the same hardware, plus DCNM
• ACI starter: 2x9364C spines + 2x93xxx-EX leaves + 3xAPIC + 192-port leaf licensing
• Two racks, 96x10G host ports (960G)***

*** Server/rack density dependent on required load, available power and cooling (geo-diverse)
Scaling with Spine/Leaf:

Add a leaf (a rack at a time) for more 10G host ports; 72x40G fabric ports are available from the spines:
• Two racks: 96x10G ports (960G), 24x40G fabric ports needed for non-oversubscribed
• Three racks: 144x10G ports (1440G), 36x40G fabric ports
• Four racks: 192x10G ports (1920G), 48x40G fabric ports
• Five racks: 240x10G ports (2400G), 60x40G fabric ports
• Six racks: 288x10G ports (2880G), 72x40G fabric ports

*** This example is 100% non-blocking, non-oversubscribed. Could build an oversubscribed model with FEX or fewer fabric links. Server/rack density dependent on load, power, cooling (geo-diverse)
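The per-rack arithmetic behind this progression: 48x10G host ports per rack at 1:1 need 480G of uplink, i.e. 12x40G fabric ports per rack. A sketch (defaults mirror the slide's example):

```python
# Fabric ports required to keep a spine/leaf pod non-oversubscribed.
def fabric_ports_needed(racks: int, host_ports_per_rack: int = 48,
                        host_gbps: int = 10, fabric_gbps: int = 40) -> int:
    host_bw = racks * host_ports_per_rack * host_gbps   # total host-facing Gbps
    return host_bw // fabric_gbps                       # 40G uplinks to match it

for racks in (2, 3, 4, 5, 6):
    print(racks, fabric_ports_needed(racks))  # 24, 36, 48, 60, 72 -- matching the slide
# With 72x40G available from the spines, six racks is the non-oversubscribed ceiling.
```

At eight racks the requirement (96x40G) exceeds what the fixed spines offer, which is exactly the add/upgrade-spines decision the next slides walk through.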
When do you add/upgrade spines?

• At eight racks (384x10G ports, 3840G), 96x40G fabric ports are needed for non-oversubscribed but only 72x40G are available
• Adding spines: 144x40G now available, and a smaller failure impact per spine

*** This example is 100% non-blocking, non-oversubscribed. Could build an oversubscribed model with FEX or fewer fabric links. Server/rack density dependent on load, power, cooling (geo-diverse)
When do you add/upgrade spines?

• 96x40G fabric ports needed for non-oversubscribed
• Upgrading to modular spines (2x36 line cards in each): 140x40G available, up to 280x40G, with line-card redundancy, spine ISSU, etc.
• Eight racks, 384x10G ports (3840G)
Scaling a VPC-based DC design

(Diagram: an access-layer switch pair at the L3/L2 boundary carrying VLANs 100–150, with hosts dual-attached via VPC.)
Scaling a VPC-based DC design

(Diagram: a consolidated core/agg layer added above two access-layer pairs – one carrying VLANs 100–150, the other VLANs 151–200 – with hosts dual-attached via VPC.)
Integrating Spine/Leaf with an existing network

• An ACI pod can be attached to the existing distributed or consolidated core/agg layer for a new DC, a data-row upgrade, or a new application
• Existing access layers keep VLANs 100–150 and 151–200 below the L3/L2 boundary
• The ACI fabric (VXLAN based) connects through ACI border leafs, carrying e.g. VLANs 201–250
Agenda
• Introduction
• STP, VPC and Spine/Leaf
• Initial Design Options
• Scale Up or Out
• Data Center Interconnect Solutions
• Feature-Specific Considerations
• Programmability, Automation & Orchestration
• Cloud Considerations

OTV options for Data Center Interconnect

(Diagram: two sites, each with client access via WAN/DCI and campus, an L3/L2 boundary, and virtualized servers; virtual DC services and OTV run in software on a CSR 1000v at each site.)
OTV options for Data Center Interconnect

(Diagram: the same two sites with OTV terminated in hardware instead – an ASR1000 at the WAN/DCI edge or an N7K at the aggregation layer – carrying physical or virtual workloads and services; the CSR1000v remains an option.)
VXLAN as a Data Center Interconnect?

• DCI has always been an architectural discussion
• Recently it’s become an encapsulation discussion
• VXLAN is an encapsulation, not an architecture
• VXLAN can fit into the architecture – if you are CAREFUL
VXLAN as a Data Center Interconnect? (see BRKACI-2215, Tuesday, Jun 12, 8:00 a.m.)

• Why use VXLAN in a DCI implementation?
  • VXLAN with BGP/EVPN fabric end-to-end
  • Host reachability information is managed and distributed end-to-end
• When to use a VXLAN/EVPN Multi-Site fabric?
  • Across short distances (Metro), private L3 DCI, IP multicast available end-to-end
  • Multiple greenfield DCs with VXLAN/EVPN, continuity of control/data plane
• Caveats:
  • VXLAN for DCI is a solution, not a feature
  • Architectural protection mechanisms at the edge (DCI leaf nodes*)
  • Use hardware protection mechanisms: Storm Control, BPDU Guard, HMM route tracking
  • Control plane with MAC learning, ARP suppress

* On the DCI leaf nodes of a stretched VXLAN fabric, traffic is encapsulated and de-encapsulated on each far-end side of the VXLAN tunnel.
VXLAN for DCI
• VXLAN is an encapsulation, not an architecture
• VXLAN for DCI has some hardware dependencies… (EX/FX/FX2)
• To make VXLAN a DCI you have to carefully put together many parts
• You shouldn’t do it unless you know each part very well
• It’s like building your own custom animal
  • Looks cool
  • Takes special care
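One of the “parts” that bites DCI designs first is MTU: VXLAN wraps the original frame in new outer headers. The 50-byte figure below is the standard VXLAN overhead (RFC 7348); the helper itself is just an illustrative sketch:

```python
# Underlay MTU a VXLAN-encapsulated payload needs end to end.
VXLAN_OVERHEAD = {
    "outer_ethernet": 14,   # +4 more if the underlay link uses 802.1Q tagging
    "outer_ipv4": 20,
    "outer_udp": 8,
    "vxlan": 8,
}

def underlay_mtu_needed(payload_mtu: int = 1500) -> int:
    return payload_mtu + sum(VXLAN_OVERHEAD.values())

print(underlay_mtu_needed())   # 1550 -- why fabrics commonly run jumbo underlays
```

Every L3 hop between sites, including the WAN, has to carry that larger frame, which is one reason VXLAN-as-DCI needs end-to-end planning rather than a single feature knob.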
VXLAN Multi-Site Integration (see BRKACI-2215, Tuesday, Jun 12, 8:00 a.m.)

• A pair of pseudo-BGWs (EX/FX/FX2 switches) is inserted in each legacy site to extend Layer-2 and Layer-3 connectivity between sites
  • Replacement of traditional DCI technologies (EoMPLS, VPLS, OTV, …)
• Slowly phase out the legacy networks and replace them with VXLAN EVPN fabrics
ACI Multi-Site
The Easiest DCI Solution in the Industry!

Communication between endpoints in separate sites (Layer 2 and/or Layer 3) is enabled simply by creating and pushing a contract between the endpoints’ EPGs.

(Diagram: EP1 in Site 1 and EP2 in Site 2; defining and pushing the inter-site policy drives VXLAN encap/decap between the sites’ spines over IP via DP-ETEP A and DP-ETEP B.)
Agenda
• Introduction
• STP, VPC and Spine/Leaf
• Initial Design Options
• Scale Up or Out
• Data Center Interconnect Solutions
• Feature-Specific Considerations
• Programmability, Automation & Orchestration
• Cloud Considerations

FCoE and Unified Fabric
Fibre Channel with simpler infrastructure and lower cost

FCoE Benefits
• Standards based
• Operationally same as existing LAN/SAN
• Transparent to OS and Apps
• Fewer cables, fewer switches, fewer adapters
• Overall less power

Instead of individual Ethernet and storage networks (IP, Eth, FC), a single DCB Ethernet carrier transports both.

(Frame layout: Ethernet header | FCoE header | FC header | FC payload (bytes 0–2229) | CRC | EOF | FCS)
Dynamic Unified Ports – High Level of Flexibility
• Protocol support (Ethernet / FCoE and native Fibre Channel) on the same port
• Flexible LAN & storage convergence
• Gives flexibility for any direction the storage industry goes (trending towards IP)
• Multi-protocol optimizations – iSCSI, NFS/CIFS, FC, FCoE

A unified port can run either native Fibre Channel or lossless Ethernet (1/10GbE, FCoE, iSCSI, NAS).
What is ITD?
Intelligent Traffic Director

• Traffic distribution and redirection
• ASIC-based solution (HW-switched)
• Caters to multi-terabit traffic
• Works on Nexus switches – 5/7/9k

ITD does L3–L4 traffic distribution; it does not replace Layer-7 load balancers.
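A rough model of what L3–L4 distribution means in practice (a pure-Python sketch of the idea only; real ITD programs hash buckets into the switch ASIC, and the server IPs and hash choice here are hypothetical):

```python
# Hash the client source IP into a bucket; each bucket maps to one server.
import ipaddress
import zlib

def pick_server(client_ip: str, servers: list) -> str:
    ip = int(ipaddress.ip_address(client_ip))
    bucket = zlib.crc32(ip.to_bytes(4, "big")) % len(servers)
    return servers[bucket]

servers = ["10.0.0.11", "10.0.0.12", "10.0.0.13", "10.0.0.14"]
# Deterministic: the same client always lands on the same server (flow affinity)
assert pick_server("192.0.2.57", servers) == pick_server("192.0.2.57", servers)

counts = {s: 0 for s in servers}
for host in range(1, 255):
    counts[pick_server(f"192.0.2.{host}", servers)] += 1
print(counts)  # roughly even spread across the four servers
```

Because the decision is a stateless hash on packet headers, it runs at line rate, which is why no external appliance is needed for this class of load balancing.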
Where to use ITD?

#1: ITD to load-balance to the destination
Example: server load balancing (clients → servers)
Why ITD?
Intelligent Traffic Director

• No service module or external appliance required
• Line-rate traffic distribution
• Ease of deployment, reduced configuration
• Automatic failure handling
Agenda
• Introduction
• STP, VPC and Spine/Leaf
• Initial Design Options
• Scale Up or Out
• Data Center Interconnect Solutions
• Feature-Specific Considerations
• Programmability, Automation & Orchestration
• Cloud Considerations

Cisco Data Center Network Automation Types
Providing Choice in Automation and Programmability – Tools, APIs, Controllers and Automation

• Programmable Networking: modern NX-OS with enhanced NX-APIs; DevOps toolsets used for network management (Puppet, Chef, Ansible, etc.)
• Programmable Fabric: VXLAN BGP-EVPN standards-based fabric; DCNM / Cisco controller for software overlay provisioning and management across N2K–N9K; 3rd-party controller support
• Application Centric Infrastructure: turnkey integrated solution with security, centralized management, compliance and scale; automated application-centric policy model with embedded security; broad and deep ecosystem

(Diagram: connection, creation, expansion, reporting, and fault management across web/app/DB tiers.)
Nexus Programmability                         Nexus 7K    Nexus 5K/6K   Nexus 9K

Provisioning &     Puppet/Chef/Ansible        Future      Shipping*     Shipping
Orchestration      PoAP                       Shipping    Shipping      Shipping
                   OpenStack                  Shipping    Shipping      Shipping

Protocols and      XMPP                       Shipping    Shipping      Shipping
Data Models        LDAP                       Shipping    Shipping      Shipping
                   NetConf/XML                Shipping    Shipping      Shipping
                   NXAPI (JSON/XML)           Shipping    Future        Shipping
                   YANG                       Future      Future        Shipping
                   REST                       Future      Future        Shipping

Programmatic       Native Python              Shipping    Shipping      Shipping
Interfaces         Integrated container       Coming      Future        Shipping
                   Guest Shell                Shipping    Future        Shipping
                   OpenFlow                   Future      Shipping      Shipping
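As a taste of the NX-API row above: NX-API accepts CLI commands as JSON-RPC 2.0 POSTed to the switch. This sketch only builds the request body; actually sending it (HTTPS to the switch's `/ins` endpoint with real credentials, e.g. via `requests`) is deliberately left out:

```python
# Build an NX-API JSON-RPC request body for a list of CLI commands.
import json

def nxapi_cli_payload(commands: list) -> str:
    body = [
        {
            "jsonrpc": "2.0",
            "method": "cli",                      # structured (JSON) command output
            "params": {"cmd": cmd, "version": 1},
            "id": i + 1,
        }
        for i, cmd in enumerate(commands)
    ]
    return json.dumps(body)

payload = nxapi_cli_payload(["show version", "show interface brief"])
print(payload)
```

The same payload shape works for batching many commands in one request, which is the building block the DevOps toolsets in the table wrap for you.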
Open Fabric Programming – GitHub Repository
https://github.com/datacenter/

Programming a Fabric
• A lot of work is being done to provide customers maximum flexibility in programming &
automation interfaces
• Free Open Programmability book:
• http://www.cisco.com/c/dam/en/us/td/docs/switches/datacenter/nexus9000/sw/open_nxos/programmability/guide/Programmability_Open_NX-OS.pdf
• New community site dedicated to NXOS programmability:
• https://opennxos.cisco.com

• A lot of work has been done to increase available knowledge on network programming across
all Cisco products
• DevNet: If you haven’t visited, please do
• https://devnet.cisco.com
• SANDBOX! – FREE 24 X 7 hosted labs

Along the Spectrum from CLI to ACI
A New Way To Do Fabric Management

(Diagram: a spectrum from the CLI, through scripting to the CLI and/or API and a basic element manager, to ACI.)
Fabric Management Needs
• Fabric management automation – high interest
• Many fabrics are based on things like VXLAN, BGP/EVPN, IS-IS
• New protocols, new configurations, new things to learn
• A simple tool eases the burden of adoption
• CVD/best practices – done for you!

Data Center Network Manager (DCNM)
• Simplified interaction – GUI + API
• Build fabric-wide broadcast domains
• Ops – minimal CLI requirements
• Point-n-click simplified interface
CLI method: Building a Fabric – Day 0, 1
Steps Required to Build Fabric and Establish Connection Between Two Devices

1. Rack and Cable: rack all switches and note serial numbers for future reference; run cables between switches; note all connections in a spreadsheet; attach management interfaces.
2. Power-on and Initial Configuration: power all switches and attach console; complete the initial configuration dialog; assign management IP address and default route; upgrade switch software if required.
3. Set Up Common Configuration: verify all connections using show cdp neighbor per switch; configure common features per switch: NTP, SNMP, syslog, AAA, usernames, etc.
4. Set Up Initial Topology: refer to the spreadsheet from step 1; configure switch-facing interfaces to Layer 3 mode; configure all port channels in the fabric from the CLI (time consuming); enable Layer 3 interfaces on spines.
5. Set Up Underlay Routing: create an addressing plan for the underlay; configure point-to-point subnets between switches; configure loopback interfaces; configure an IGP (for example, OSPF).
6. Set Up BGP Routing and EVPN: establish neighbor configuration per peer and per switch; establish EVPN configuration per peer and per switch.
7. Build Broadcast Domain: refer to the spreadsheet from step 1; assign a new VLAN for the bridge domain in each leaf switch; add host ports to the VLAN in each applicable leaf switch.
8. Build VXLAN and EVPN Configuration: establish VNI-to-VLAN mapping on each switch; configure the VTEP on each switch; configure EVPN on each switch; test connectivity.

• Heavy reliance on CLI: time consuming (port channels, ISLs, VLANs, OSPF, NTP, BGP, spanning tree, vPCs, AAA, HSRP, QoS, VRFs)
• This setup doesn’t include host-facing vPCs or VRF instances
• Many of these steps must be repeated for each broadcast domain
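Even staying with the CLI method, parts of it script well. Step 5's "create addressing plan for underlay" can be generated rather than hand-kept in a spreadsheet; a sketch using the standard library (supernets and device names below are illustrative assumptions, not from the session):

```python
# Carve /31 point-to-point subnets and loopbacks for a spine/leaf underlay.
import ipaddress

def underlay_plan(spines: int, leaves: int,
                  p2p_supernet: str = "10.1.0.0/24",
                  loopback_supernet: str = "10.0.0.0/24") -> dict:
    p2p = ipaddress.ip_network(p2p_supernet).subnets(new_prefix=31)
    loopbacks = ipaddress.ip_network(loopback_supernet).hosts()
    plan = {"loopbacks": {}, "links": {}}
    names = [f"spine{s}" for s in range(spines)] + [f"leaf{l}" for l in range(leaves)]
    for name in names:
        plan["loopbacks"][name] = str(next(loopbacks))
    for s in range(spines):                      # one /31 per spine-leaf link
        for l in range(leaves):
            plan["links"][(f"spine{s}", f"leaf{l}")] = str(next(p2p))
    return plan

plan = underlay_plan(spines=2, leaves=4)
print(len(plan["links"]))   # 8 point-to-point /31s for a 2-spine x 4-leaf fabric
```

The output could then feed templated interface configuration, which is exactly the gap tools like DCNM close end to end.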
Cisco Nexus Fabric Manager: Building a Fabric – Days 0, 1
Steps Required to Build Fabric and Establish Connection Between Two Devices

1. Rack and Cable: rack all switches and note serial numbers for future reference (optional); run cables between switches; attach management interfaces.
2a. Power-on and Initial Configuration: power all switches and attach console; complete the initial configuration dialog; assign management IP address, default route, and username and password for Cisco Nexus Fabric Manager.
2b. Power-on and POAP: set a basic boot script to assign IP address, default gateway, and username and password; power all switches.
3. Discover Fabric: enter the seed switch IP address in the fabric manager GUI for fabric discovery (click); select all discovered Cisco Nexus 9000 Series Switches and set them to Managed mode through group edit (click).
4. Set Up Broadcast Domain: select discovered host devices and choose Create New Broadcast Domain (click); test connectivity.

• Very simple procedure to go from boxes of switches to a functioning fabric
• 3 simple steps, and you have a full VXLAN and EVPN fabric with connected devices
• Ignite (POAP) eliminates the need to assign initial IP addresses and credentials to switches
• The user asks for a broadcast domain, and the fabric manager handles the full VXLAN configuration
How About Adding a vPC? A New Leaf?

vPC – CLI Method
1. Find Ports and Prestage
• Refer to spreadsheet to see where host ports connect to fabric
• Verify suitable cable between leaf switches for vPC peer link
• Assign vPC domain and PC IDs
2. Create vPC (on CLI)
• Enable vPC feature per switch
• Create vPC domain per switch
• Create vPC peer keep-alive per switch and verify IP addresses
• Create vPC peer link per switch
3. Create Port Channels (on CLI)
• Enable VLANs on peer-link PC
• Create host-facing PC per switch
• Assign port channels to vPC domain
• Test connectivity

vPC – Cisco DCNM
1. Find Ports
• Search in GUI for host ports by name
2. Create vPC (in GUI)
• Multi-select host ports
• Select Add to New Port Channel from GUI menu ✶click✶
• Test connectivity

New Leaf – CLI Method
1. Rack and Cable
• Rack switch and note serial number
• Cable switch into topology
• Note connections on spreadsheet
• Attach management connection
2. Power-on and Initial Configuration
• Power switch and complete initial configuration
• Assign management IP address and default route
• Upgrade switch software if required
• Determine IP address plan
• Determine VLANs in use within fabric
3. Configure New Switch
• Copy static configuration from other leaf
• Create routing configuration (IGP)
• Create VXLAN configuration (VTEP and EVPN)
• Go to every other switch and update routing peers, VXLAN configuration, etc.

New Leaf – Cisco DCNM
1. Rack and Cable
• Rack switch and note serial number
• Cable switch into topology
• Attach management connection
2. Power-on and POAP
• Power switch
3. Configure New Switch
• Select newly discovered switch ✶click✶
• Set switch to Managed mode ✶click✶
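The CLI steps above translate to NX-OS configuration along these lines on each of the two leaf switches. This is a sketch: the domain ID, port-channel numbers, interface names, and keep-alive addresses are hypothetical and must be adapted to the topology.

```
feature vpc

vpc domain 10
  peer-keepalive destination 192.168.1.2 source 192.168.1.1 vrf management

interface port-channel1
  description vPC peer link
  switchport mode trunk
  vpc peer-link

interface port-channel20
  description host-facing vPC
  switchport mode trunk
  vpc 20

interface Ethernet1/10
  channel-group 20 mode active
```

The domain and vPC numbers must match on both peers, while the peer-keepalive source and destination addresses are swapped on the second switch — exactly the kind of per-switch symmetry the GUI method handles for you.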
Cisco Data Center Network Automation Types
Providing Choice in Automation and Programmability

Programmable Networking
• Modern NX-OS with enhanced NX-APIs
• DevOps toolset used for Network Management (Puppet, Chef, Ansible, etc.)

Programmable Fabric
• VXLAN BGP EVPN, standards-based
• Cisco controller (DCNM) for software overlay provisioning and management across N2K–N9K
• 3rd-party controller support
(Diagram: DCNM fabric lifecycle — connection, creation, expansion, reporting, fault management)

Application Centric Infrastructure
• Turnkey integrated solution with security, centralized management, compliance, and scale
• Automated application-centric policy model with embedded security
• Broad and deep ecosystem

Tools, APIs, Controllers and Automation
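NX-API, mentioned above, exposes the NX-OS CLI over HTTP as JSON-RPC. A sketch of building such a request with only the standard library follows; the switch URL is a placeholder, and actually sending the request requires a reachable switch with `feature nxapi` enabled.

```python
import json

def nxapi_cli_payload(commands):
    """Build an NX-API JSON-RPC body that runs one or more CLI commands."""
    return [
        {
            "jsonrpc": "2.0",
            "method": "cli",
            "params": {"cmd": cmd, "version": 1},
            "id": i + 1,
        }
        for i, cmd in enumerate(commands)
    ]

body = json.dumps(nxapi_cli_payload(["show version", "show vpc"]))
# POST `body` to http://<switch>/ins with Content-Type: application/json-rpc
print(body)
```

Because the payload is plain JSON, the same structure drops straight into Ansible, Python scripts, or any other tool in the DevOps column above.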
Effort vs Time – Traditional IT Build/Run

(Chart: repeated cycles of configuration effort followed by operations — each new request restarts a manual “Config” phase before settling back into “Ops”.)
Effort vs Time – Utilizing Abstraction & Automation

(Chart: a larger up-front “Plan” and “Implement” investment, after which each subsequent request needs far less configuration effort before returning to “Ops”.)
UCS Manages Compute through Abstraction

A Service Profile abstracts:
• SAN connectivity configuration
• LAN connectivity configuration
• Motherboard firmware
• BIOS configuration
• Adapter firmware
• Boot order
• RAID configuration
• Maintenance policy
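The abstraction can be pictured as a template object that captures everything server-specific, so the same personality can be applied to any blade. The following is an illustrative model only — not the UCS Manager API — and every field name is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceProfile:
    """Illustrative stand-in for a UCS service profile: the full server
    personality, decoupled from any particular piece of hardware."""
    name: str
    bios_policy: str = "default"
    boot_order: list = field(default_factory=lambda: ["san", "lan"])
    san_vhbas: list = field(default_factory=list)
    lan_vnics: list = field(default_factory=list)
    firmware_package: str = "3.2(3a)"

    def apply_to(self, blade_id):
        """Associating the profile with a blade yields a concrete server."""
        return {"blade": blade_id, "profile": self.name,
                "boot_order": self.boot_order}

esx = ServiceProfile(name="esx-host", lan_vnics=["vnic-a", "vnic-b"])
print(esx.apply_to("chassis-1/blade-3"))
```

Because the profile, not the blade, owns the identity and policy, replacing failed hardware is reduced to re-associating the same profile with a spare blade.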
ACI Manages Infrastructure through Abstraction

(Diagram: a traffic path through the data center touching external connectivity, SLB configuration, host connectivity, FW configuration, ACLs, QoS, and network path forwarding — each traditionally configured device by device.)
ACI Manages Infrastructure through Abstraction

The Network Profile abstracts the logical state from the physical infrastructure
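In ACI that logical state is expressed to the APIC controller as a tree of objects — tenants, application profiles, endpoint groups — posted over REST. A hedged sketch of building such a payload follows; the tenant and EPG names are hypothetical, and actually applying it requires an authenticated session posting to the APIC (for example, to `https://<apic>/api/mo/uni.json`).

```python
import json

def tenant_payload(tenant, app_profile, epgs):
    """Build an APIC-style JSON body: a tenant containing one application
    profile and its endpoint groups. All names here are illustrative."""
    return {
        "fvTenant": {
            "attributes": {"name": tenant},
            "children": [
                {
                    "fvAp": {
                        "attributes": {"name": app_profile},
                        "children": [
                            {"fvAEPg": {"attributes": {"name": epg}}}
                            for epg in epgs
                        ],
                    }
                }
            ],
        }
    }

body = json.dumps(tenant_payload("acme", "web-app", ["web", "app", "db"]))
print(body)
```

Nothing in the payload names a switch, port, or VLAN — that mapping to physical infrastructure is exactly what the controller resolves on your behalf.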
Cisco UCS Director for Compute & Storage
Infrastructure consumption made easier

• Single pane of glass for virtualized and bare-metal compute
• Policy-driven virtual AND physical provisioning: speed & accuracy
• More efficient use of people & time
• Consistency, less error in repetitive tasks

(Diagram: UCS Director presenting a secure cloud control panel over a converged stack — network, compute, storage — serving VMs, containers, and bare-metal machines across tenants A, B, and C, each with its own network, services, and storage.)
Agenda
• Introduction
• STP, VPC and Spine/Leaf
• Initial Design Options
• Scale Up or Out
• Data Center Interconnect Solutions
• Feature-Specific Considerations
• Programmability, Automation & Orchestration
• Cloud Considerations

Cisco CloudCenter Details
One Platform
• Secure
• Scalable
• Extendable
• Multi-tenant

Cloud agnostic: a single CloudCenter MANAGER and application PROFILEs
Cloud API-specific: a CloudCenter ORCHESTRATOR deployed per target cloud
Key Takeaways

• Cisco has many options for building data center infrastructure

• All solutions can start small and grow

• No Cisco solution has to be a “rip and replace”

• Spine-Leaf does not have to be expensive

• Programmable fabrics provide new tools for simplified operations

• Automated fabrics provide new methods of managing DC Networking

• Cloud technologies can expose new ways to build & scale

Complete your online session evaluation

• Give us your feedback to be entered into a Daily Survey Drawing.
• Complete your session surveys through the Cisco Live mobile app or on www.CiscoLive.com/us.

Don’t forget: Cisco Live sessions will be available for viewing on demand after the event at www.CiscoLive.com/Online.
Continue your education
• Demos in the Cisco campus
• Walk-in self-paced labs
• Meet the engineer 1:1 meetings
• Related sessions
Thank you

#CLUS