
Cisco Complete Nexus Portfolio

Deployment Best Practices


BRKDCT-2204

Presentation_ID 2012 Cisco and/or its affiliates. All rights reserved. Cisco Public 1
Session Goal

Understand how to design a scalable data center based upon customer requirements.

Learn how to choose among the different design flavors using the Nexus family.

Share a case study.

Recommended Sessions
BRKARC-3470: Cisco Nexus 7000 Hardware Architecture
BRKARC-3452: Cisco Nexus 5000/5500 and 2000 Switch Architecture
BRKARC-3471: Cisco NX-OS Software Architecture
BRKVIR-3013: Deploying and Troubleshooting the Nexus 1000v Virtual Switch
BRKDCT-2048: Deploying Virtual Port Channel in NX-OS
BRKDCT-2049: Overlay Transport Virtualization
BRKDCT-2081: Cisco FabricPath Technology and Design
BRKDCT-2202: FabricPath Migration Use Case
BRKDCT-2121: VDC Design and Implementation Considerations with Nexus 7000
BRKRST-2509: Mastering Data Center QoS
BRKDCT-2214: Ultra Low Latency Data Center Design - End-to-end design approach
BRKDCT-2218: Data Center Design for the Small and Medium Business

Session Agenda
Nexus Platform Overview
Data Center Design and Considerations
Case Study #1: Green Field Data Center Design
Key Takeaways

Data Center Architecture
Life used to be easy

The Data Centre switching design was based on the hierarchical switching we used everywhere:
Three tiers: Access, Aggregation and Core
L2/L3 boundary at the aggregation
Add in services and you were done

What has changed? Almost everything:
Hypervisors
Cloud (IaaS, PaaS, SaaS)
Services
MSDC
Ultra Low Latency
Competition (merchant silicon, etc.)

We now sell compute!
Data Center Drivers

Business Challenges
Business Agility
Regulatory Compliance
Security Threats
Budget Constraints

Technology Trends
Cloud
Big Data
Energy Efficiency
Proliferation of Devices
Data Centre Architecture
There is no single design anymore

Spectrum of Design Evolution


Ultra Low Latency
High Frequency Trading
Layer 3 & Multicast
No virtualization
Limited physical scale
Nexus 3000 & UCS
10G edge moving to 40G

HPC/Grid
Layer 3 & Layer 2
No virtualization
iWARP & RoCE
Nexus 2000, 3000, 5500, 7000 & UCS
10G moving to 40G

Virtualized Data Center
SP and Enterprise
Hypervisor virtualization
Shared infrastructure, heterogeneous environment
1G edge moving to 10G
Nexus 1000v, 2000, 5500, 7000 & UCS

MSDC
Layer 3 edge (iBGP, ISIS)
1000s of racks
Homogeneous environment
No hypervisor virtualization
1G edge moving to 10G
Nexus 2000, 3000, 5500, 7000 & UCS
Cisco DC Switching Portfolio

Nexus 7000

Nexus 5000
Nexus 4000
B22 FEX
Nexus 2000
Nexus 1010
Nexus 3000

Nexus 1000V

Cisco NX-OS: One OS from the Hypervisor to the Data Center Core
Convergence | VM-Aware Networking | 10/40/100G Switching | Fabric Extensibility | Cloud Mobility

Nexus 7000 Series
Broad Range of Deployment Options

Highest 10GE Density in Modular Switching

                        Nexus 7004     Nexus 7009       Nexus 7010       Nexus 7018
Height                  7 RU           14 RU            21 RU            25 RU
Max BW per Slot         440 Gig/slot   550 Gig/slot     550 Gig/slot     550 Gig/slot
Max 10/40/100GE Ports   96/12/4        336/42/14        384/48/16        768/96/32
Air Flow                Side-to-Rear   Side-to-Side     Front-to-Back    Side-to-Side
Power Supplies          4 x 3KW AC     2 x 6KW AC/DC    3 x 6KW AC/DC    4 x 6KW AC/DC
                                       or 2 x 7.5KW AC  or 3 x 7.5KW AC  or 4 x 7.5KW AC
Application             Small/Medium   Data Center and  Data Center      Large Scale
                        Core/Edge      Campus Core                       Data Center
Nexus 5596UP & 5548UP
Virtualized Data Center Access

High density 1RU/2RU ToR switches
10GE / 1GE / FCoE / 8G FC
Reverse airflow / DC power
Native FC and lossless Ethernet (FCoE, iSCSI, NAS)

Innovations
Unified Port capability
Layer-2 and Layer-3 support

Benefits / Use-cases
Investment protection in action
FEX support (24 with L2)
Proven, resilient NX-OS designs
Multihop FCoE / lossless Ethernet
Low, predictable latency at scale
Cisco FabricPath (future)
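The Unified Port capability lets a 5500UP port run either as Ethernet or as native Fibre Channel. As an illustrative NX-OS sketch (slot and port numbers are hypothetical; FC ports must be allocated from the highest-numbered ports of a slot, and a reload is required for the change to take effect):

```
! Convert the last two ports of a 5548UP to native FC (illustrative values)
n5k(config)# slot 1
n5k(config-slot)# port 31-32 type fc
n5k(config-slot)# exit
n5k(config)# copy running-config startup-config
! The new port types become active only after a reload
```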
Cisco Nexus 2000 Series
Platform Overview

Nexus B22: 48-port 1000M host interfaces, 4 x 10G uplinks
N2224TP: 24-port 100/1000M host interfaces, 2 x 10G uplinks
N2248TP: 48-port 100/1000M host interfaces, 4 x 10G uplinks
N2232PP: 32-port 1/10G FCoE host interfaces, 8 x 10G uplinks

Changing the device paradigm

Cisco Nexus 7000 or Cisco Nexus 5500
+ Cisco Nexus 2000 FEX
= Distributed High Density Edge Switching System
(up to 4096 virtual Ethernet interfaces)
Nexus 3000: For Ultra Low Latency
1RU NX-OS Switches for 1/10/40G Connectivity

Major wins in HFT/Web 2.0

Nexus 3048: 48 ports, 100M/1GE
Nexus 3064: 64 ports, 1/10GE
Nexus 3016: 16 ports, 10/40GE

Robust NX-OS with a differentiated feature set:
Wire-rate L2/L3 feature set
vPC, PTP, ERSPAN
Configurable CoPP, Power-On Auto-Provisioning, IPv6

For High-Frequency Trading | Big Data | Web 2.0
Cisco Nexus 1000V

[Diagram: Nexus 1000V VSM and VEM running on both VMware vSphere (managed via VMware vCenter) and Windows 8 Hyper-V (managed via SCVMM)]

Consistent architecture, feature set and network services ensure operational transparency across multiple hypervisors.
Virtual Services for Nexus 1000V
Virtual services for the Nexus 1000V switch (on VMware or Hyper-V):
VSG: Virtual Security Gateway
Virtual ASA: Adaptive Security Appliance
vWAAS: Wide Area Acceleration Service
NAM: Network Analysis Module

Customer Benefits
Operational consistency across physical and virtual networks
Network team manages physical and virtual networks
Integrated advanced Cisco NX-OS networking features
Support for existing Cisco virtual network services
Session Agenda

Nexus Platform Overview


Data Center Design and Considerations
Case Study #1: Green Field Data Center Design
Key Takeaways

Data Center Architecture Building Blocks
[Diagram: three data centers (Intranet, Internet/DMZ, Extranet) connecting to the Enterprise Core through intranet and extranet security perimeters, with a Data Center Interconnect (DCI) between sites]

Cross-cutting blocks:
Business & Enterprise Applications
Data Center Interconnect (DCI)
Virtualization
Management
Security

Layers within each data center:
Core
Aggregation / DC Services (Service POD)
Access
Compute
Storage
Facilities
Allow customization within blocks while maintaining the overall architecture.

Blocks are aligned to meet business and technical requirements.


A Cloud Ready Data Center Architecture
Cisco Virtualized Multi-Tenant Data Center

Validated reference
architecture that delivers a
highly scalable, available,
secure, flexible, and efficient
data center infrastructure.
Proven layered approach
Reduced time to
deployment
Reduced risk
Increased flexibility
Improved operational
efficiency
http://www.cisco.com/en/US/partner/solutions/ns340/ns414/ns742/ns743/ns1050/landing_vmdc.html

What Makes Designing Networks for
the Data Center Different?
Extremely high density of end nodes and
switching
Power, cooling, and space management
constraints
Mobility of servers is a requirement (without relying on DHCP)
The most critical shared end-nodes in the
network, high availability required with very
small service windows
Multiple logical multi-tier application
architectures built on top of a common physical
topology
Server load balancing, firewall, other services
required
The Evolving Data Centre Architecture
Data Center 2.0 (Physical Design == Logical Design)

The IP portion of the Data Center architecture has been based on the hierarchical switching design
Workload is localized to the Aggregation Block
Services are localized to the applications running on the servers connected to the physical pod
Mobility is often supported via a centralized cable plant
Architecture is often based on a design optimized for control plane stability within the network fabric

Goal #1: Understand the constraints of the current approach (de-couple the elements of the design)
Goal #2: Understand the options we have to build a more efficient architecture (re-assemble the elements into a more flexible design)
Evolving Data Centre Architecture
Design Factor #1 to Re-Visit Where are the VLANs

As we move to new designs we need to re-evaluate the L2/L3 scaling and design assumptions (where is the L2/L3 boundary?)

Need to consider VLAN usage:
Policy assignment (QoS, Security, Closed User Groups)
IP address management
Scalability
Some factors are fixed (e.g. ARP load)
Some factors can be modified by altering the VLAN/subnet ratio

Still need to consider L2/L3 boundary control plane scaling:
ARP scaling (how many L2-adjacent devices)
FHRP, PIM, IGMP
STP logical port count (BPDUs generated per second)

Goal: Evaluate which elements can change in your architecture (VLAN/subnet ratio? VLAN span?)
Evolving Data Centre Architecture
Design Factor #2 to Re-Visit Where are the Pods?
Network Pod:
Repeatable physical, compute and network infrastructure including the L2/L3 boundary equipment. The pod is traditionally the L2 fate-sharing (failure) domain.

Access Pod:
Collection of compute nodes and network ports behind a pair
of access switches

Compute Pod:
Collection of compute nodes behind a single management
domain or HA domain
Network and fabric design questions that depend on the choice of the Compute Pod:
How Large is a Pod?
Is Workload local to a Pod?
Are Services local to a Pod?
Evolving Data Centre Architecture
Design Factor #2 to Re-Visit Where are the Pods?

The efficiency of the power and cooling for the Data Center is largely driven by the physical/logical layout of the Compute Pod
The design of the network and cable plant interoperate to define the flexibility available in the architecture
Evolution of server and storage connectivity is driving changes in the cable plant

Evolving Data Centre Architecture
Design Factor #2 to Re-Visit Where are the cables?

Presentation_ID 2012 Cisco and/or its affiliates. All rights reserved. Cisco Public
Evolving Data Centre Architecture
Design Factor #2 to Re-Visit Where are the cables?

Presentation_ID 2012 Cisco and/or its affiliates. All rights reserved. Cisco Public
Evolving Data Centre Architecture
Design Factor #3 to Re-Visit How is the compute attached?

How is striping of workload across the physical Data Center accomplished (rack, grouping of racks, blade chassis, etc.)?
How is the increasing percentage of devices attached to SAN/NAS impacting the aggregated I/O and cabling density per compute unit?
Goal: Define the unit of compute I/O and how it is managed (how does the cabling connect the compute to the network and fabric?)


[Diagram: Rack Mount | Blade Chassis | Integrated (UCS)]


Evolving Data Centre Architecture
Design Factor #4 to Re-Visit - Where Is the Edge?

[Diagram: three models for the edge of the network and fabric]
Physical NIC + HBA: still two PCIe addresses on the bus; the hypervisor provides virtualization of the PCIe resources (VMFS/SCSI, vNIC/vETH) with the OS holding the device drivers
Converged network adapter (10GE with VNTag): the link provides virtualization of the physical media; Ethernet and FC still appear as separate PCIe devices, and the edge of the fabric moves onto the adapter
SR-IOV adapter: provides multiple PCIe resources (up to 126 vNIC/vFC functions) directly, which the hypervisor can pass through to the guests

Compute and fabric edge are merging.

Evolving Data Centre Architecture
Design Factor #5 to Re-Visit How is the storage connected?

[Diagram: storage stacks on the computer system for FCoE SAN, iSCSI appliance, iSCSI gateway, NAS appliance and NAS gateway: from the application and file system, down through the volume manager or I/O redirector, SCSI device driver or NFS/CIFS, the TCP/IP stack, and the NIC or FCoE/FC HBA]

The flexibility of a Unified Fabric: block I/O and file I/O over one transport, any RU to any spindle.
Evolving Data Centre Architecture
Design Factor #6 to Re-Visit Where Are the Services?
Client

In the non-virtualized model services are inserted into the data path at choke points
The logical topology matches the physical topology
Virtualized workload may require a re-evaluation of where the services are applied and how they are scaled
Virtualized services are associated with the Virtual Machine (Nexus 1000v & vPath):
Virtual Machine isolation (VXLAN)
VSG, vWAAS
Hierarchical Design Network Layers
Defining the Terms

Data Center Core
Routed layer which is distinct from the enterprise network core
Provides scalability to build multiple aggregation blocks

Aggregation Layer
Provides the boundary between layer-3 routing and layer-2 switching
Point of connectivity for service devices (firewall, LB, etc.)

Access Layer
Provides point of connectivity for servers and shared resources
Typically layer-2 switching

Data Center Core Layer Design
Core Layer Function & Key Considerations

High-speed switching, 100% layer 3
Fault domain isolation between the Enterprise network and the DC
AS / Area boundary
Routing table scale
Fast routing convergence
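As a rough sketch of the AS/Area boundary role, a core-layer routed interface might be configured along these lines in NX-OS (the process tag, area and addressing are hypothetical):

```
feature ospf

router ospf DC-CORE
  log-adjacency-changes

interface Ethernet1/1
  description Routed link toward aggregation
  no switchport
  ip address 10.1.1.1/30
  ip router ospf DC-CORE area 0.0.0.10
  no shutdown
```

Keeping the data center in its own area (or AS) is what provides the fault-domain isolation called out above.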

Data Center Core Layer Design
Commonly Deployed Platform and Modules

Platform: Nexus 7000 modules

                     M2-10G/40G/100G* LC   M1-10G LC        F2-Series LC
Min. Software        6.1 or above          4.0 and later*   6.0(1) and later
Fabric Connection    240G/200G*            80G              480G*
L3 IPv4 Unicast      128K/1M               128K/1M          32K
L3 IPv4 Multicast    N/A                   32K              16K
L3 IPv6 Unicast      6K/350K               Up to 350K       32K
L3 IPv6 Multicast    N/A                   16K              8K
ACL Entries          64/128K               128K             16K

M1: L2/L3/L4 with large forwarding tables and a rich feature set
F2: low cost, high density, high performance, low latency and low power

Classic layer 3 core: M1 or F2
Large routing and ACL tables: M1
High-density line-rate 10G: F2
MPLS, LISP and OTV: M1
Data Center Aggregation Layer Design
Virtualized Aggregation Layer provides

L2 / L3 boundary
Access layer connectivity point: STP root, loop-free features
Service insertion point
Network policy control point: default GW, DHCP relay, ACLs
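These aggregation roles (L2/L3 boundary, default gateway, access connectivity) are commonly built with vPC and HSRP. A minimal NX-OS sketch, with hypothetical domain ID, VLAN and addresses:

```
feature vpc
feature hsrp
feature interface-vlan

vpc domain 10
  peer-keepalive destination 172.16.0.2 source 172.16.0.1
  peer-gateway

interface port-channel1
  switchport mode trunk
  vpc peer-link

interface port-channel20
  description Downlink to access switch pair
  switchport mode trunk
  vpc 20

interface Vlan100
  ip address 10.100.0.2/24
  hsrp 100
    ip 10.100.0.1
  no shutdown
```

The mirror configuration goes on the vPC peer, so both aggregation switches forward for the same default gateway without STP-blocked uplinks.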

Data Center Aggregation Layer Design
Commonly Deployed Platform and Modules

Platform: N7K/N5K (differentiating features: L3, OTV, etc.)

                     M1-10G LC   F1-Series LC    F2-Series LC    N5500 with L3
Min. Software        4.0*        5.1(1)          6.0(1)          5.0(3)N1(1)
Fabric Connection    80G         230G            480G*           -
L3 IPv4 Unicast      128K/1M     -               32K             8K
L3 IPv4 Multicast    32K         -               16K             2K
MAC Entries          128K        16K (per SoC)   16K (per SoC)   32K
FEX Support          Yes*        No              Yes             Yes
L2 Port-channel      8 active    16 active       16 active       16 active

Selection criteria: scalability (routing/MAC tables), performance and port density, FEX support, LISP and OTV, FabricPath, FCoE support
Data Center Aggregation Layer Design
Key Design Considerations

Data Center physical infrastructure
POD design & cabling infrastructure
Size of the layer 2 domain
Oversubscription ratio
Traffic flow
Number of access layer switches to aggregate
Scalability requirements
Service insertion
Service chassis vs. appliance
Firewall deployment model
Load balancer deployment model
Data Center Access Layer Design
Access Layer & Virtualized Edge

Access Layer provides


Hosts connectivity point
Mapping from virtual to physical
L2 Services: LACP, VLAN Trunking

Virtualized Edge provides


Virtual host connectivity point
Virtual extension of access services
Network policy enforcement point

Data Center Access Layer Design
Access Layer Key Considerations & Commonly Deployed Platform

Key considerations:
Physical infrastructure: ToR vs. MoR
Server types: 1G vs. 10G, A/A or A/S NIC teaming, single-attached servers
Oversubscription ratio: number of servers and uplinks
Virtual access requirements: Virtual Machine visibility, Virtual Machine management boundary
FabricPath support

                     N5548/N5596   N5010/N5020   N7K F-Series LC
Fabric Throughput    960G/1.92T    520G/1.04T    230G/480G
Port Density         48/96         26/52         32/48 per LC
No. of VLANs         4096          512           4096
MAC Entries          32K           16K           16K (per SoC)
No. of FEXs          24            12            32 (F2 only)
1G FEX Ports         1152          576           32/48 per LC
10G FEX Ports        768           384           32/48 per LC
8G Native FC Ports   48/96         6/12          -
Cisco FEXlink: Virtualized Access Switch
Changing the device paradigm

De-coupling of the Layer 1 and Layer 2 topologies
New approach to the structured building block
Simplified management model: plug-and-play provisioning, centralized configuration
Technology migration with minimal operational impact
Long-term TCO due to ease of component upgrade

[Diagram: many FEXs operating as one virtualized switch]

Evolutionary Fabric Edge
Mixed 1/10G, FC/FCoE, Rack and Blade

Consolidation of all servers, both rack and blade, onto the same virtual switch
Support for 1G, migration to 10G, FC and migration to FCoE
1G server racks are supported by 1G FEX (2248TP, 2224TP) or future-proofed with 1/10G FEX (2232PP or 2232TM)
10G server racks are supported by the addition of 10G FEX (2232PP, 2232TM or 2248PQ)
Support for direct connection of HBAs to Unified Ports on the Nexus 5500UP
1G, 10G and FCoE connectivity for HP or Dell blade chassis
Support for NPV-attached blade switches during FC to FCoE migration
Data Center Interconnect Design
Data Center Interconnect Drivers
DC to DC IP connectivity
DC to DC LAN extension
Workload scaling with vMotion
GeoClusters
Disaster recovery
Non-disruptive DC migration
Storage extension and replication

[Diagram: main and backup data centers interconnected by L3 routed IP services, L2 LAN extension over EoMPLS / EoMPLSoGRE / DWDM-CWDM with WAAS optimization, and FC SAN extension between storage arrays]
Data Center Interconnect Design
DCI LAN Extension Key Considerations

STP domain isolation


Multihoming and loop avoidance
Unknown unicast flooding and
broadcast storm control
FHRP redundancy and localization
Scalability and convergence time
Three Nexus-based options:
OTV
vPC
FabricPath

Cisco FabricPath
Switching: easy configuration, plug & play, provisioning flexibility
Routing: multi-pathing (ECMP), fast convergence, highly scalable

FabricPath brings Layer 3 routing benefits to flexible Layer 2 bridged Ethernet networks

[Diagram: Spanning Tree blocks links, forcing high oversubscription (e.g. 16:1) across pods; a FabricPath fabric is fully non-blocking with much lower oversubscription (e.g. 2:1 to 8:1)]
Overlay Transport Virtualization
Technology Pillars

OTV is a MAC-in-IP technique to extend Layer 2 domains over any transport

Dynamic encapsulation:
No pseudo-wire state maintenance
Optimal multicast replication
Multipoint connectivity
Point-to-cloud model

Protocol learning:
Preserves the failure boundary
Built-in loop prevention
Automated multi-homing
Site independence

Platforms: the Nexus 7000 was the first to support OTV (since NX-OS 5.0); the ASR 1000 now also supports OTV (since XE 3.5)
DCI Architectures
OTV

[Diagram: OTV virtual links interconnecting a greenfield Nexus 7000 FabricPath data center, a greenfield Nexus 7000 data center and a brownfield ASR 1000 site across an L3 core]

Leverage OTV capabilities on the Nexus 7000 (greenfield) and ASR 1000 (brownfield)
Build on top of the traditional DC L3 switching model (L2-L3 boundary in aggregation; the core is pure L3)
Possible integration with the FabricPath/TRILL model
Overlay Transport Virtualization

Extension over any transport (IP, MPLS)
Failure boundary preservation
Optimal BW utilization (no head-end replication)
Automated, built-in multihoming
End-to-end loop prevention
ARP optimization

[Diagram: West and East sites joined across the transport infrastructure; in the West MAC table, local MAC 1 maps to interface Eth 2 while remote MAC 3 maps to IP B (the East table mirrors this), so frames between MAC 1 and MAC 3 are encapsulated IP A <-> IP B across the overlay]
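An OTV edge device is configured around an overlay interface. A minimal NX-OS sketch (site VLAN, site identifier, multicast groups and VLAN range are all hypothetical values):

```
feature otv

! VLAN used by OTV edge devices within the same site
otv site-vlan 99
otv site-identifier 0x1

interface Overlay1
  ! Physical uplink into the L3 transport
  otv join-interface Ethernet1/1
  ! Multicast groups used for control-plane adjacency and data
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  ! VLANs stretched between the sites
  otv extend-vlan 100-150
  no shutdown
```

For transports without multicast support, OTV can instead use an adjacency server (unicast-only mode).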
Fabric Simplicity, Scale and Flexibility
Nexus Edge, Core & Boundary Nodes
Isolation of function when possible:
The Nexus spine provides transport (redundant and simple)
The Nexus edge provides media type and a scaled control plane
The Nexus boundary provides localization of complex functions (OTV, LISP, MPLS)
Session Agenda

Nexus Platform Overview


Data Center Design and Considerations
Case Study #1: Green Field Data Center Design
Key Takeaways

Case Study #1
Data Center High Level Requirements
Customer profile:
A leading online higher education institution
More than 500,000 students and 24,000 faculty members
Approximately 1200 servers and 600 VMs across 5 data centers
Current data centers have reached the limit of switching, power, and cooling capacity
Business decision made to build two new green field data centers to consolidate and provide DR capability

Business challenges and requirements:
10G-based virtualized next generation data center architecture
No STP blocking topology
Firewall protection for secured servers
Support vMotion within and between data centers
Network team gains visibility into VM networking
Virtualized Access Layer Requirements

400 10G-capable server connections
30 ESX servers with roughly 600 VMs
800 1G connections for standalone servers and the out-of-band management network
Support both active/active and active/standby NIC teaming configurations
Network team manages the network; server team manages servers and virtual machines
Network policies are retained during vMotion

Data Center Access Layer Design

N2K acting as a remote line card to reduce the number of devices to manage
Migration to ToR for 10GE servers, or selective 1GE server racks if required (mix of ToR and EoR)
Mixed cabling environment (optimized as required)
Flexible support for future requirements

Combination of EoR (End of Row) and ToR (Top of Rack) cabling with Nexus 5000/2000
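Provisioning a FEX as a remote line card is done entirely from the parent 5K; the FEX itself has no console to configure. A minimal NX-OS sketch (FEX number and interfaces are hypothetical):

```
feature fex

fex 100
  description Rack-1 2232PP
  pinning max-links 1

! Fabric uplinks from the 5K toward the FEX, bundled in a port-channel
interface Ethernet1/1-2
  switchport mode fex-fabric
  fex associate 100
  channel-group 100

interface port-channel100
  switchport mode fex-fabric
  fex associate 100
```

Once online, the FEX host ports appear on the parent as Ethernet100/1/1 through Ethernet100/1/32 and are configured like local ports.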
Access Layer Port Counts & Oversubscription

For 10G servers off the 5596s:
Total 10G ports = 20 x 32 = 640
Server NIC utilization = 50%
Total uplink BW = 16 x 10G = 160G
Oversubscription ratio = 160 / (640 x 0.5 x 10) = 1/20 (i.e. 20:1)

For 1G servers off the 5548s:
Total 1G ports = 20 x 48 = 960
Server NIC utilization = 50%
Total uplink BW = 8 x 10G = 80G
Oversubscription ratio = 80 / (960 x 0.5) = 1/6 (i.e. 6:1)

N1KV Gains Visibility Into VM Environment

Cisco Nexus 1000V:
Software-based, built on Cisco NX-OS
Compatible with all switching platforms
Maintains the vCenter provisioning model unmodified for server administration, while allowing network administration of the virtual network via the familiar Cisco NX-OS CLI

Policy-based VM connectivity | Mobility of network and security properties | Non-disruptive operational model
Nexus 1000V Uplink Options

Spanning-Tree (Active/Passive) / MAC Pinning
UCS blade server environment, or 3rd-party blade server environments in non-MCEC topologies
channel-group auto mode on mac-pinning

Single Switch Port-Channel
Port-channel with a single switch; upstream switches do not support MCEC
channel-group auto mode [active | passive]

Multi-Chassis EtherChannel
Port-channel with two switches; any server connected to upstream switches that support Multi-Chassis EtherChannel (MCEC)
channel-group auto mode [active | passive]
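The mac-pinning option can be sketched as an Ethernet uplink port-profile on the Nexus 1000V VSM (the profile name, VLAN ranges and system VLAN below are hypothetical):

```
! Uplink profile: each vNIC is pinned to one physical uplink,
! so no port-channel configuration is needed upstream
port-profile type ethernet SYSTEM-UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 100-110
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 100
  state enabled
```

The `system vlan` keeps critical VLANs (e.g. management/control) forwarding even before the VEM is fully programmed by the VSM.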

Access Layer Design Highlight

Requirement -> Solution
Flexible cabling -> N5K/2K provide mixed ToR & EoR
Ease of management -> Configuration is done only on the 5Ks
1G/10G server connectivity with active/active or active/standby NIC teaming -> Straight-through FEX supports all the NIC teaming options (note: enhanced vPC provides flexible server/FEX topology)
vMotion within the data center -> N5K operates in layer 2 only to make larger layer 2 adjacency possible
Visibility into VMs -> N1KV provides network visibility into the VMs
Network team manages the network, server team manages the servers -> Clear management boundary defined by N1KV
Aggregation Requirements

Facility
Drop any server anywhere in the data center
L2-L3
Layer 2 domain within data center
No STP blocking topology
Service Layer
Secured zone and non-secured zone
FW protection between zones; no FW protection within a zone
LB service is required for web servers; the LB needs to preserve the client IP for server-side tracking
High-performance FW and LB are required
NAM and IPS solutions are also required

Physical Infrastructure and Network Topology
Physical to Logical Mapping

Aggregation Oversubscription Ratio

Large layer 2 domain with a single pair of 7Ks

Worst-case calculation assumptions:
  All traffic is north-south bound
  100% utilization from the 5Ks
  All ports operate in dedicated mode

Service Integration at Aggregation Layer
Service chassis vs. appliance
Virtual services with Nexus 1000V

[Diagram: physical service modules and appliances attach per VDC (VDC-1, VDC-2); in the virtual/cloud data center, Virtual Service Nodes run alongside VMs on the hypervisor]

Virtual Service Node (VSN)
  Virtual appliance form factor
  Dynamic instantiation/provisioning
  Service transparent to VM mobility
  Supports scale-out
  Large-scale multitenant operation

Service Integration-Physical Design

High-performance solution
  ASA 5585 firewall and IPS
  ACE30 module

Low TCO
  Repurposed Catalyst 6500 as service chassis

Most scalable
  NAM module inside the service chassis
  Available slots for future expansion

Firewall Logical Deployment Model
Transparent (bridging) mode
  Pros: easy to implement
  Cons: limited to 8 bridge groups per context
  [Diagram: router gateways on VLANs 10, 30, and 40 bridge through the FW to server VLANs 11, 31, and 41; subnets 1.1.1.0, 3.3.3.0, and 4.4.4.0 each span a VLAN pair]

Routing mode
  Pros: more scalable
  Cons: configuration complexity
  Two ways to sandwich the FW on the Nexus:
    VRF sandwich - FW routes between an external VRF and an internal VRF
    VDC sandwich - FW routes between an external VDC and an internal VDC
  [Diagram: FW routes between 1.1.1.0 (VLAN 10) and 2.2.2.0 (VLAN 20), with gateways for VLANs 30 and 40 (3.3.3.0, 4.4.4.0) behind it; in the sandwich designs the FW sits between the Nexus external and internal VRF/VDC on VLAN 100 and VLAN 10]
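A transparent-mode firewall pairing one of those VLAN pairs could be sketched roughly as follows, in ASA 8.4-style syntax. The interface names, VLAN IDs, and BVI address are assumptions for illustration:

```
! Hypothetical ASA transparent-mode sketch (interfaces and addresses assumed)
firewall transparent
!
interface GigabitEthernet0/0.10
  vlan 10
  nameif outside
  bridge-group 1
  security-level 0
!
interface GigabitEthernet0/0.11
  vlan 11
  nameif inside
  bridge-group 1
  security-level 100
!
interface BVI1
  ip address 1.1.1.5 255.255.255.0
```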
Load Balancer Logical Deployment Model
Transparent (bridging) mode
  Pros: ease of deployment and multicast support
  Cons: limited to 8 bridge groups per context
  [Diagram: router gateways on VLANs 10, 30, and 40 bridge through the LB to server VLANs 11, 31, and 41; subnets 1.1.1.0, 3.3.3.0, and 4.4.4.0 each span a VLAN pair]

Routing mode
  Pros: separate STP domain
  Cons: no routing protocol support
  [Diagram: LB routes between 1.1.1.0 (VLAN 10) and 2.2.2.0 (VLAN 20), with gateways for VLANs 30 and 40 behind it]

One-arm mode
  Pros: non-LB traffic bypasses the LB
  Cons: SNAT or PBR required
  [Diagram: LB hangs off the Nexus on VLAN 201 while server VLANs 11, 31, and 41 route directly to their gateways]
Service Integration Logical Design

VRF sandwich design
  Three VRFs created on the aggregation N7K
  Server default gateway is the aggregation N7K
  No VRF route leaking

LB in transparent mode
  Tracking the client IP is possible
  Multicast applications behind the LB are possible
  Two ACE contexts plus an admin context

FW in routing mode
  FW provides routing between VRFs
  Two FW contexts plus admin and system contexts
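The VRF side of this sandwich can be sketched in NX-OS along these lines. The VRF names, VLAN IDs, and addresses are assumptions for illustration:

```
! Hypothetical VRF-sandwich sketch on the aggregation N7K (names and addresses assumed)
feature interface-vlan
!
vrf context outside
vrf context inside
!
interface Vlan10
  vrf member outside
  ip address 10.1.10.2/24
  no shutdown
!
interface Vlan11
  vrf member inside
  ip address 10.1.11.2/24
  no shutdown
```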
Spanning Tree Recommendations
Legend:
  N - network port type
  E - edge (portfast) port type
  "-" - normal port type
  B - BPDUguard
  R - Rootguard
  L - Loopguard

[Diagram: data center core above a vPC aggregation pair (HSRP active/standby, STP primary/secondary root); the vPC peer-link runs as network port type (N); aggregation downlinks to the access layer are normal port type with Rootguard (R); access uplinks are normal port type, with Loopguard (L) on non-vPC links; server-facing ports are edge type (E) with BPDUguard (B)]

Layer 3 above the aggregation; layer 2 (STP + Rootguard) between aggregation and access; layer 2 (STP + BPDUguard) at the server edge
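Those port-type recommendations map to NX-OS commands along these lines; the interface numbers are assumptions for illustration:

```
! Hypothetical sketch of the STP port-type recommendations (interfaces assumed)
! vPC peer-link: network port type
interface port-channel10
  spanning-tree port type network
!
! Aggregation downlink to access: guard against a rogue root
interface Ethernet1/1
  spanning-tree guard root
!
! Server-facing edge port: portfast behavior plus BPDUguard
interface Ethernet100/1/1
  spanning-tree port type edge
  spanning-tree bpduguard enable
```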

vPC Best Practice Features

Feature: vPC auto-recovery (reload restore)
Benefit: increased high availability
Overview: allows one vPC device to assume the STP/vPC primary role and bring up all local vPCs when the other vPC peer device is down after a data center power outage

Feature: vPC peer-gateway
Benefit: service continuity
Overview: allows a vPC switch to act as the active gateway for packets addressed to the peer router MAC

Feature: vPC orphan-port suspend
Benefit: increased high availability
Overview: when the vPC peer-link goes down, the vPC secondary shuts down all vPC member ports as well as orphan ports; this avoids isolating single-attached devices (FW, LB, or NIC-teamed hosts) during a peer-link failure

Feature: vPC ARP sync
Benefit: improved convergence time
Overview: improves convergence for layer 3 flows after the vPC peer-link comes back up

Feature: vPC peer-switch
Benefit: improved convergence time
Overview: virtualizes both vPC peer devices so they appear as a single STP root
BRKDCT-2048: Deploying Virtual Port Channel in NX-OS
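On the switch side, these best-practice features are all enabled under the vPC domain; a minimal sketch, with the domain ID assumed:

```
! Hypothetical vPC best-practice sketch (domain ID assumed)
vpc domain 10
  peer-switch
  peer-gateway
  auto-recovery
  ip arp synchronize
```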

Aggregation Layer Design Highlight
Requirement: Drop any server anywhere in the DC; vMotion within the DC
Solution: Single pair of 7Ks provides a data-center-wide layer 2 domain

Requirement: No STP blocking topology
Solution: Double-sided vPC between 7K and 5K eliminates blocking ports

Requirement: FW protection between secure and non-secure zones
Solution: FW virtualization and VDC sandwich design provide logical separation and protection

Requirement: Web servers require load balancing service
Solution: LB in transparent mode provides service on a per-VLAN basis

Requirement: High-throughput services and future scalability
Solution: Mix of service chassis and appliance design provides flexible and scalable service choices

Requirement: Low oversubscription ratio (target 15:1)
Solution: M1 10G line cards configured in dedicated mode provide a lower oversubscription ratio

[Diagram: enterprise network above the data center core; layer 3 links from core to aggregation; layer 2 trunks from aggregation to access]

Core Layer Design
Nexus 7010 with redundant M1 line cards

10G layer 3 port channels to the aggregation switches

OSPF as IGP
  Inject a default route into the data center

Fault-domain separation via BGP
  eBGP peering with the enterprise network
  eBGP peering with the remote data center

DC interconnects are connected to the core N7Ks

[Diagram: enterprise network above the data center core; layer 3 links from core to aggregation; layer 2 trunks from aggregation to access]
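A rough NX-OS sketch of that core routing policy; the process tag, AS numbers, and neighbor address are assumptions for illustration:

```
! Hypothetical core routing sketch (process tag, ASNs, neighbor assumed)
feature ospf
feature bgp
!
router ospf 1
  default-information originate
!
router bgp 65010
  neighbor 192.0.2.1 remote-as 65000
    address-family ipv4 unicast
```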

DCI Requirements and Design Choices

Requirements
  L2 connectivity to provide workload scaling with vMotion
  Data replication between the data centers
  Potential 3rd data center

DCI design choices
  OTV
  vPC
  FabricPath

[Diagram: two data centers (core, aggregation, access, server farms) connected over dark fiber; the OTV VDCs at each site peer via IGP + PIM with IGMPv3, each OTV VDC having a PIM interface, an L3 join interface, and an L2 internal interface]

OTV Design

Dark fiber links

Why OTV
  Native STP and broadcast isolation
  Easy to add a 3rd site
  Existing multicast core

Design
  OTV VDC on aggregation
  No HSRP localization (phase 1)
    Simplifies configuration
    Minimal latency via dark fiber

[Diagram: OTV VDCs hang off the aggregation vPC pairs in Data Center 1 and Data Center 2]
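With a multicast-capable core, the per-site OTV VDC configuration might be sketched as follows. The VLAN ranges, multicast groups, join interface, and site identifier are assumptions for illustration:

```
! Hypothetical OTV sketch (VLANs, groups, interface, site ID assumed)
feature otv
!
otv site-identifier 0x1
otv site-vlan 99
!
interface Overlay1
  otv join-interface Ethernet1/1
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  otv extend-vlan 100-110
  no shutdown
```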

Nexus 1000v Deployment for VM Mobility

Both VSMs in the same data center

Layer 3 control on Nexus 1000V

Stretched cluster supports live vMotion (5 ms latency)

[Diagram: Data Center #1 hosts the active vCenter plus the active and standby VSMs; Nexus 1000V VEMs run on vSphere hosts in both data centers; layer 2 extension (OTV) over dark fiber carries virtualized workload mobility; the vCenter SQL/Oracle database is replicated to Data Center #2]
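Layer 3 control between the VSM and the remote VEMs is enabled in the SVS domain; a minimal sketch, with the domain ID and control interface assumed:

```
! Hypothetical Nexus 1000V L3 control sketch (domain ID and interface assumed)
svs-domain
  domain id 100
  svs mode L3 interface mgmt0
```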

Traffic Flow for VM Mobility
vMotion between Data Centers

Virtual machines still use the original ACE and gateway after vMotion

Traffic will trombone across the DCI link for vMotioned virtual machines

No HSRP localization or source NAT

Overall Design Highlight Case Study #1
Requirement: x10G-based virtualized data center architecture
Solution: Nexus 7K/5K/2K provide a scalable x10G architecture with end-to-end virtualization

Requirement: No STP blocking topology
Solution: Double-sided vPC between 7K and 5K eliminates blocking ports

Requirement: FW protection between secure and non-secure zones
Solution: FW virtualization and VRF sandwich design provide logical separation and protection

Requirement: Support vMotion within and between data centers
Solution: L2/L3 boundary placed at the aggregation layer provides a data-center-wide layer 2 domain; OTV provides layer 2 extension between data centers

Requirement: Network team gains visibility into VM networking
Solution: Nexus 1000V provides a clear management boundary between the network and server teams; network policy through N1KV is implemented on VMs

Key Takeaways

The Nexus family and NX-OS are designed for modern data center architecture

The 3-tier design model (core, aggregation, access) ensures high availability and scalability

Nexus 5K/2K offer flexible cabling solutions at the access layer

Nexus 7K/5K double-sided vPC supports a non-blocking topology and a larger layer 2 domain; FabricPath is the new trend

Nexus 7K virtualization provides flexible service insertion at the aggregation layer

OTV/FabricPath/vPC simplify DCI and migration solutions

Nexus 1000V provides network policy control and visibility into VMs, and offers integrated virtual services (VSG, vWAAS, NAM, ASA) at the VM level
THANK YOU for Listening & Sharing
Your Thoughts
