Data Center Architecture Strategy Update

BRKDCT-2866

© 2008 Cisco Systems, Inc. All rights reserved.

Depth vs. Breadth
Quick level set

Breadth sessions:
• DCT-2703 Implementing DC Services
• DCT-2840 DC L2 Interconnect
• DCT-2867 DC Facilities
• DCT-2868 DC Virtualization

Depth sessions:
• DCT-2825 Nexus 5000 Architecture
• RST-3470 Nexus 7000 Architecture
• RST-3471 Nexus Software Architecture
• SAN-2701 SAN Design
• …many, many more

This session takes the breadth view across these topics.

The Data Center Dilemma
Efficiency vs. Agility
• Efficiency: increased utilization, consolidation, 'green'
• Agility: demand capacity, globalization, availability

How do I align my data center strategy? How can Cisco help me accomplish this?

Agenda
• Trends
• Architecture Strategy
• Architecture Evolution

Agenda
• Trends
- Consolidation
- Network Technology
- Software Services
• Architecture Strategy
• Architecture Evolution

Trends: Consolidation


What It Means: Consolidation by Default Will Beget Organic IT's Fruition

"As IT consolidation solidifies into standard procedure for infrastructure management, we will see operational benefits and technical innovations arise that will deliver fundamentally better efficiency than can be achieved today. Forrester expects that by 2010, nearly all Intel/AMD-based servers will ship with a pre-installed hypervisor and that the default allocation of any service will be the partition. This will allow the use of new management and HA tools that act at the hypervisor layer, allowing true Organic IT: dynamic, policy-driven reallocation of running production workloads to drive greater power efficiency, accelerate business change, and drive down operational costs. Through these tools will come abstraction between the infrastructure, the application, and even the data center itself. Such a change will give IT professionals new degrees of freedom, allowing services to be deployed where, when, and however needed to best meet the businesses' objectives."

The IT Consolidation Imperative: Out Of Space, Out Of Power, Out Of Money, © 2007 Forrester Research, Inc.

Data Center Consolidation
Reducing operational costs and improving manageability

Reduce:
- Number of distributed server farms
- Operational costs

Increase:
- Flexibility of application rollouts
- Uptime

Standardize:
- Physical requirements
- Operational best practices
- Server platform

Requirements:
- Future DC architecture
- Initial-phase network design
- Technology adoption strategy
- Migration strategy

Network implications:
- Higher server farm density
- Higher average traffic loads
- Higher number of network-based services
- Larger and flatter networks
- At least N+1 redundancy

Facilities implications:
- Higher power demands
- Higher cooling demands
- Higher square footage

Server Consolidation
Reducing capital costs and improving efficiency

Reduce:
- Number of OSs
- Server idle time
- Cost per RU

Increase:
- Application performance
- Application uptime
- Server density
- I/O, memory, and CPU capacity per RU

Standardize:
- SW architecture
- NG HW platforms (bound to tiers)
- I/O (capacity, cabling)

Establish:
- Server architecture direction
- Facilities support strategy
- Migration strategy

Requirements:
- Scalability of the DC architecture
- Initial-phase network design
- Technology adoption strategy
- Migration strategy

Network implications:
- Higher uplink capacity
- Increased throughput per server
- Larger and flatter networks
- At least N+1 redundancy
- Availability beyond a single DC

Facilities implications:
- Higher power demands
- Higher cooling demands
- Higher square footage
- Closer integration with the DC architecture

Server & Infrastructure Virtualization
Improving utilization and agility

Reduce:
- Idle CPU cycles
- Server proliferation
- Power and cooling demands

Increase:
- Workload mobility
- Server rollout flexibility
- Average server CPU utilization
- I/O, memory, and CPU capacity per server

Standardize:
- Virtual server SW infrastructure
- NG HW platforms (bound to tiers)
- I/O and memory capacity

Establish:
- Server architecture direction
- Server support strategy
- Migration strategy
- Provisioning/management strategy

Requirements:
- Scalability of the DC architecture
- Broad L2 adjacency
- Well-defined access layer strategy
- Migration strategy

Network implications:
- Higher number of uplinks
- Increased throughput per server
- L2 adjacency (larger and flatter networks)
- Availability beyond a single DC
- Server trunking
- More VLANs and IP subnets
- 10GE in the access

Facilities implications:
- Higher power/cooling draw per server
- Lower power/cooling overall (fewer servers)
- Cabling to match access requirements

Green-Field Data Centers
Addressing growth and consolidation requirements

Reduce:
- Wasted rack space
- After-the-fact cabling
- Power or cooling retrofitting

Increase:
- Per-rack server density
- Data center longevity
- DC space utilization

Standardize:
- High- and low-density areas
- Power to server and network racks
- Cabling

Establish:
- Server farm growth potential
- Environmental-control strategy
- Usability strategy
- Provisioning/management strategy

Requirements:
- A 4-5 year architecture strategy
- Migration strategy to the new architecture
- Good handle on growth: servers and storage, I/O interfaces and capacity

Network implications:
- Predictable scalability increases, physical (ports, slots, boxes) and logical (table sizes)
- Well-identified access model
- Specific oversubscription targets (server and network)

Facilities implications:
- Per-server, per-rack, and per-pod power requirements, cooling capacity, and cabling selection

Trends: Network Technology


Ethernet Standards (IEEE 802.3)
Applicable to data center environments

10GE:
- 10GBase-T (IEEE 802.3an, ratified)
- 10GBase-CX4 (IEEE 802.3ak); the 802.3ak standard defines copper categories
- 10GBase-*X (IEEE 802.3ae); the 802.3ae standard defines MM and SM fiber categories

40-100 GE (Project Authorization Request has been agreed):
- Support full-duplex operation only
- Preserve the 802.3 Ethernet frame format
- Preserve minimum and maximum frame size
- Support BER equal to or better than 10^-12
- Support Optical Transport Networks
- 40G: at least 100m over OM3 MMF and 10m over copper
- 100G: at least 40km over SMF, 10km over SMF, 100m over OM3 MMF, and 10m over copper

802.3 HSSG: Higher Speed Study Group

Demand for 40GE & 100GE in the DC
- 100GE arrives in the 2010+ timeframe for switch interconnects
- Switch platforms need to be architected to deliver capacity in excess of 200 Gbps per slot
- DC facilities environmental specifications need to accommodate the higher-speed technology requirements:
  - Class 1: hazard level does not warrant special precautions; 40/100 GE MMF may not meet Class 1
  - Relaxing to Class 1M (current proposal to IEEE) works for restricted locations, which include DC facilities
- More information: http://www.ieee802.org/3/ba/public/mar08/petrilla_02_0308.pdf

Ethernet Interface Evolution:
40G and 100G

40G muxed:
- IEEE standard: none
- Increased bandwidth vs. 10GE: no (4 x 10GE muxed solution)
- EtherChannel: 2 links
- Fiber savings: yes
- Approximate availability: 2008
- Estimated FCS cost: 2-3 x 10GE

40G native:
- IEEE standard: none
- Increased bandwidth vs. 10GE: yes, true 40G per interface
- EtherChannel: 8 links
- Fiber savings: yes
- Approximate availability: 2009
- Estimated FCS cost: 10 x 10GE

100G native:
- IEEE standard: call for interest July 2006; ratification expected 2010-2011
- Increased bandwidth vs. 10GE: yes, true 100G per interface
- EtherChannel: 8 links
- Fiber savings: yes
- Approximate availability: 2010-11
- Estimated FCS cost: at least 10 x 10GE

Emerging Standards
All applicable to data center environments

L2 multipathing:
- IETF TRILL WG: proposal to solve L2 STP forwarding limitations
- IEEE 802.1aq: enhancement to 802.1Q to provide Shortest Path Bridging (optimal bridging) in L2 Ethernet topologies

Data Center Bridging:
- IEEE 802.1Qbb, Priority-based Flow Control: specifies protocols, procedures, and managed objects that support flow control per traffic class, as identified by the VLAN-tag-encoded priority code point
- IEEE 802.1Qaz, Enhanced Transmission Selection: specifies enhancements to transmission selection to support allocation of bandwidth amongst traffic classes
- DCBX, Discovery and Capability Exchange Protocol: identifies DCB cloud nodes and their capabilities
- IEEE 802.1Qau, Congestion Notification (congestion management): signals congestion information to end stations to avoid frame loss; uses 802.1Q-tag-encoded priority values to segregate flows; supports higher-layer protocols that are loss sensitive

Unified I/O:
- T11 FCoE (FC-BB-5)

Data Center Ethernet Features
Enhanced Ethernet standards: feature -> benefit
- Priority-based Flow Control (PFC): provides per-class-of-service flow control and the ability to support storage traffic
- CoS-based bandwidth management (IEEE 802.1Qaz, Enhanced Transmission Selection): groups classes of traffic into "service lanes"
- Congestion Notification (BCN/QCN): end-to-end congestion management for the L2 network
- Data Center Bridging Capability Exchange Protocol (DCBCXP, switch to NIC): auto-negotiation of Enhanced Ethernet capabilities
- L2 multipath for unicast and multicast: eliminates Spanning Tree in L2 topologies; utilizes full bisectional bandwidth with ECMP
- Lossless service: provides the ability to transport various traffic types (e.g., storage, RDMA)

Evolution of Ethernet
Physical layer enabling these technologies

- Mid 1980s: 10Mb over UTP Cat 3
- Mid 1990s: 100Mb over UTP Cat 5
- Early 2000s: 1Gb over UTP Cat 5 or SFP fiber
- Late 2000s: 10Gb over X2, SFP+ Cu, SFP+ fiber, or Cat 6/7

10GE media (technology / cable / distance / power per side / transceiver latency per link):
- SFP+ CU (copper): Twinax, 10m, ~0W (normalized), ~0.1us
- SFP+ USR (ultra short reach): MM OM2, 10m, 1W, ~0; MM OM3, 100m, 1W, ~0
- SFP+ SR (short reach): MM 62.5um, 82m, 1W, ~0; MM 50um, 300m, 1W, ~0
- 10GBASE-T: Cat6, 55m, ~8W, 2.5us; Cat6a/7, 100m, ~8W, 2.5us; Cat6a/7, 30m, ~4W, 1.5us

Trends: Software Services


SaaS
SaaS (Software as a Service): an alternative application/application suite built entirely on web services
- Hosted and supported by the software vendor
- Priced per seat, per month
- Available via the Internet
- Multi-tenant structure
- APIs available for integration with other business applications
- Known for scalability and availability
- Broad portfolios and application categories that meet the needs of the smallest businesses to the largest (www.saas-showplace.com)

SaaS Growth Predictions
- AMR found that 40% of all companies are currently using hosted applications, and 49% will use them within the next 12 months
- Gartner forecasts that large companies will fulfill 25% of their application demands with hosted software by 2010
- IDC predicts the SaaS market will grow at a 21% compound annual growth rate (CAGR) over the next four years, reaching $10.7B worldwide in 2009
- Forrester Research predicts the market for traditional on-premise enterprise applications will grow only 4% through 2008
(http://thinkstrategies.icentera.com/portals/file_getfile.asp?method=1&uid=11753&docid=5045&filetype=pdf)
- A newer estimate calls for an average annual growth rate of 22.1%, with 2007 coming in at around 21%, ultimately becoming an $11.5 billion market by 2011 (http://www.formtek.com/blog/?p=380)
- Salesforce.com published growth: http://www.salesforce.com/company/
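The analyst projections above are simple compound-growth arithmetic. A minimal sketch, assuming the starting market size is back-computed from IDC's quoted endpoint (~$10.7B in 2009 at 21% CAGR); that base value is an inference, not a figure from the slide:

```python
# Compound annual growth rate (CAGR) arithmetic behind the projections above.

def project(base: float, cagr: float, years: int) -> float:
    """Project a market size forward at a constant compound growth rate."""
    return base * (1 + cagr) ** years

# IDC: 21% CAGR over four years, reaching ~$10.7B in 2009.
base_2005 = 10.7 / (1.21 ** 4)           # implied ~$5.0B starting point (inferred)
print(f"implied 2005 base: ${base_2005:.1f}B")
print(f"2009 at 21% CAGR:  ${project(base_2005, 0.21, 4):.1f}B")  # ~$10.7B
```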

What Is Cloud Computing?
Grid computing? Parallel computing? SaaS? XaaS? Utility computing? A new development platform? Cluster computing? Stateless computing?

Can all applications be cloud enabled?

Cloud Computing
No real simple answer…
• Many servers make up the cloud
• To users, the cloud appears as a single application or "service"
- Transparent to geography
- Transparent to the specific server: it could be one or many
• Dynamic provisioning by a cloud manager:
- Applications are provisioned dynamically in server clusters
- Clusters can be clustered or geo-diverse for availability purposes
• Goal: a simpler, scalable solution for large applications (allows server upgrade and refresh, simpler provisioning, and reduced patch management)

The Latest Evolution of Hosting
Source: Forrester, "Is Cloud Computing Ready For The Enterprise?", March 2008

A Map of the Players in the Cloud Computing, SaaS, and PaaS Markets
Source: http://dev2dev.bea.com/blog/plaird/archive/2008/05/understanding_t.html

Data Center Trends Summary
• Consolidation: data centers, servers, and infrastructure
• Virtualization: servers, storage, and networks
• Understand the evolution of Ethernet technologies: 10 Gig, 40 Gig, 100 Gig, DCE, and FCoE
• Plan for a heterogeneous application environment: internal/external hosting, SaaS/XaaS, and cloud

Agenda
• Trends
• Data Center Strategy
- Deployment Strategy
- Technology Strategy
• Architecture Evolution

Data Center Strategy: Deployment Strategy


Topics in the Minds of Data Center Architects
• Application deployment: XaaS, server farms, VMs, cloud
• 'Green': power and cooling
• Deployment agility: automated provisioning, lights-out management
• Security: virtualization, role-based access
• Service integration
• Management
• I/O consolidation: Ethernet, FC, FCoE
• Facilities: consolidation vs. greenfield
• Access speeds: 1/10/40/100 Gbps
• Access models: end-of-row, top-of-rack, blade switch

Data Center Strategy
A layered view:
• Applications: hosting, virtualized, external compute, external services
• Internal compute and storage resources
• Network infrastructure
• Data center facilities
• Management: provisioning and operations

Data Center Strategy
Utility Deployment Strategy
• All areas are interdependent, making evaluation complex
- Do applications dictate facilities? Do facilities dictate hosting alternatives?
• Consider a consistent user experience (SLAs)
• Budgetary and costing-model considerations
• Management and operational aspects
• Requires broad cross-functional collaboration

Data Center Strategy
Application Architecture

Key considerations:
- Application architecture: monolithic, N-tier, Web 2.0 / mash-up
- Core business: off-the-shelf vs. custom
- Application security considerations
- Data warehousing
- Business economics: SaaS/XaaS, internally/externally hosted, cloud
- Application redundancy: at the server level (backup server); single DC vs. multiple DCs

Determining 'care-abouts':
- Utility environment
- Demand capacity
- Application RPO/RTO
- Projected longevity
- Service-level requirements
- Anticipated annual growth

RPO: Recovery Point Objective. RTO: Recovery Time Objective.

Data Center Strategy
Compute Infrastructure

Key considerations:
- Traffic-pattern mix (%): client to server, server to server, server to storage, storage to storage
- Server capacity: server bus capacity (memory/CPU), number of Ethernet I/O interfaces, number of FC I/O interfaces, expected outbound load
- Server redundancy: NIC teaming, clustering

Determining 'care-abouts':
- Utility server infrastructure: virtualization, provisioning
- Number of servers per application
- Size of subnet/VLAN
- % annual server growth
- % annual virtual-server growth

Data Center Strategy
Storage Resources

Key considerations:
- Storage capacity: internal/external resources, application requirements
- Host access model: Fibre Channel, FC over Ethernet (FCoE), iSCSI
- Oversubscription
- Storage virtualization: N-Port Virtualization (NPV), N-Port ID Virtualization (NPIV), volume virtualization
- Number of storage racks
- SAN topology
- Number of SAN devices to manage
- Number of physical SAN interfaces

Determining 'care-abouts':
- Sync/async replication
- SAN interconnect
- Data RPO/RTO
- Data security
- Data growth and migration

Data Center Strategy
Network Infrastructure

Key considerations:
- Type of access model: modular, ToR, blade switches
- Number of servers per rack
- Number of racks
- Topology: number of access switches and uplinks, L2 adjacency boundaries
- Number of network devices to manage
- Number of physical interfaces per server: consolidated I/O
- Oversubscription: server, access to aggregation, aggregation to core
- L2 adjacency: subnet/VLAN scope

Determining 'care-abouts':
- Fault isolation and recovery
- Services insertion
- Data center interconnect
- L3 features
- L2 features

Data Center Strategy
Data Center Facilities

Key considerations:
- Total DC power capacity
- Total DC space
- Per rack: power capacity, servers, cabling
- Number of racks per pod: power, cooling
- Racks of network equipment: power, cooling, cabling
- Number of pods per area
- Number of areas per DC

Determining 'care-abouts':
- DC tier target
- Disaster recovery
- 'Green': efficiency, airflow, cable routes, power routes

Data Center Strategy
Management, Provisioning, Operations

Key considerations:
- Management: monitoring, measuring, tracking
- Provisioning: internal compute, external compute, service insertion, network
- Operations: power and cooling, servers (internal and external), cabling

Determining 'care-abouts':
- Performance criteria
- RPO/RTO
- Monitoring: NetFlow
- Fault isolation and recovery
- Testing

Data Center Strategy: Technology Strategy


Data Center Strategy in Action
Physical Facilities

A DC is organized into zones, zones into pods (pairs of rack rows in hot/cold aisles), and pods into network, server, and storage racks. Typical sizing (a worked port-count sketch follows):
- 4-6 zones per DC; 6-15 MW per DC
- 60,000-80,000 sq ft per zone; 1-3 MW per zone
- 200-400 racks/cabinets per zone
- Cooling and power provisioned per pod (per pair of rack rows)
- 8-48 servers per rack/cabinet; 1-1.5 kW per cabinet
- 2-11 interfaces per server
- 2,500-30,000 servers per DC
- 4,000-120,000 ports per DC

It all depends on server types and the network access-layer model.
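A minimal sketch of the port-count arithmetic implied by the ranges above; the endpoint pairings are illustrative, since actual counts depend on the server and access-model mix:

```python
# Rough DC-wide port sizing from the per-slide ranges above.

def dc_ports(servers_per_dc: int, interfaces_per_server: int) -> int:
    """Total server-facing ports = servers x interfaces per server."""
    return servers_per_dc * interfaces_per_server

print(dc_ports(2_500, 2))    # 5,000 ports, near the slide's low end
print(dc_ports(30_000, 4))   # 120,000 ports, the slide's high end
```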

Reference Physical Topology
Network Equipment and Zones
(Diagram: a DC zone spans modules 1 through N; each pod contains network racks, server racks, and storage racks laid out across hot and cold aisles.)

Pod Concept
Network Zones and Pods

DC sizing:
• DC: a group of zones (or clusters, or areas)
• Zone: typically mapped to an aggregation pair
• Not all designs use the hot/cold-aisle layout
• Predetermined cable/power/cooling capacity

Pod/module sizing:
• Typically mapped to the access topology
• Size is determined by distance and density
- Cabling distance from server racks to network racks: 100m copper, 200-500m fiber
- Cable density: number of servers x I/Os per server
• Racks
- Server: 6-30 servers per rack
- Network: based on the access model
- Storage: special cabinets

Network Equipment Distribution
End of Row and Middle of Row

End of row:
• Traditionally used
• Copper from servers to access switches
• Poses challenges for highly dense server farms:
- Distance from the farthest rack to the access point
- Row length may not lend itself well to switch port density

Middle of row:
• Use is starting to increase, given the EoR challenges
• Copper from servers to access switches
• Addresses aggregation requirements for ToR access environments
• Fiber may be used to aggregate ToR

Common characteristics:
• Typically used for modular access
• Cabling is done at DC build-out
• The model is evolving from EoR to MoR
• Lower cabling distances (lower cost)
• Allows denser access (better flexibility)
• 6-12 multi-RU servers per rack
• 4-6 kW per server rack; 10-20 kW per network rack
• Subnets and VLANs: one or many per switch; subnets tend to be medium and large (/24, /23)

Network Equipment Distribution
Top of Rack

• Used in conjunction with dense access racks (1RU servers)
• Typically one access switch per rack; some customers are considering two plus clustering
• Use of either side of the rack is gaining traction
• Cabling:
- Within the rack: copper from server to access switch
- Outside the rack (uplinks): copper (GE) needs a MoR model for fiber aggregation; fiber (GE or 10GE) is more flexible but also requires an aggregation model (MoR)
• Subnets and VLANs: one or many subnets per access switch; subnets tend to be small (/24, /25, /26)

Network Equipment Distribution
Blade Chassis

Switch to switch:
• Potentially higher oversubscription
• Scales well for blade-server racks (~3 blade chassis per rack)
• Most current uplinks are copper, but newer switches offer fiber
• Migration from GE to 10GE uplinks is taking place

Pass-through:
• Scales well for pass-through blade racks
• Copper from servers to access switches

ToR:
• Not commonly seen in conjunction with blade switches
• May be a viable option in pass-through environments if the access port count is right
• Efficient when used with blade virtual switch environments

Network Equipment Distribution
End of Row, Top of Rack & Blade Switches

End of row:
- Network component and location: modular switch at the end of a row of server racks
- Cabling: typically copper from servers to access switches; fiber from access to aggregation switches
- Port density: 240-336 ports
- Server density: 6-12 multi-RU servers per rack
- VLANs and subnets: one or more subnets/VLANs per access switch

ToR:
- Network component and location: low-RU, lower-port-density switch in each server rack
- Cabling: copper from servers to the ToR switch; fiber from ToR to aggregation switches
- Port density: 40-48 ports
- Server density: 8-30 1RU servers per rack
- VLANs and subnets: one smaller VLAN/subnet per access switch

Blade switches:
- Network component and location: switches integrated into the blade enclosures in server racks
- Cabling: servers have intra-chassis connections to the internal switches; switches use copper (and fiber) to aggregation switches
- Server density: 14-16 servers (dual-homed) per enclosure; 3-4 blade enclosures per rack
- VLANs and subnets: a subnet/VLAN is shared across multiple access switches

Reference Network Topology
Hierarchical Architecture
Core, aggregation, and access layers; L3 down to the aggregation layer, L2 below it, with VLANs contained per module.
• Hierarchical design
• Triangle and square topologies
• Multiple access models: modular, blade switches, and ToR
• Multiple oversubscription targets
• Highly scalable

Data Center Topology
Scalable Server Architecture
Core, aggregation, and access layers scale from small to very large server counts:
- Up to 192 servers
- 192-1,500 servers
- 1,500-4,000 servers
- 4,000-10,000 servers

Server Oversubscription
What is the right number?

1. Single-homed servers (GE or 10GE):
- GE NIC (1 Gbps capacity): 100 Mbps allocated = 10:1; 200 Mbps = 5:1; 500 Mbps = 2:1
- 10GE NIC (10 Gbps capacity): 500 Mbps allocated = 20:1; 1 Gbps = 10:1; 2 Gbps = 5:1; 4 Gbps = 2.5:1; 5 Gbps = 2:1

2. Multi-homed and virtual servers (4 x GE, 2 x 10GE, active-standby):
- 4 times, 2 times, or 1.x times the per-NIC figure per system, depending on the teaming mode

It depends… (a small calculator sketch follows)
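The ratios above are just NIC capacity divided by allocated bandwidth. A minimal sketch that reproduces the slide's figures:

```python
# Per-server oversubscription: NIC line rate / bandwidth actually allocated.

def oversubscription(nic_mbps: float, allocated_mbps: float) -> str:
    """Return a ratio such as '10:1'."""
    return f"{nic_mbps / allocated_mbps:g}:1"

for allocated in (100, 200, 500):                    # GE NIC, 1 Gbps capacity
    print(f"GE NIC,   {allocated:>5} Mbps -> {oversubscription(1_000, allocated)}")

for allocated in (500, 1_000, 2_000, 4_000, 5_000):  # 10GE NIC, 10 Gbps capacity
    print(f"10GE NIC, {allocated:>5} Mbps -> {oversubscription(10_000, allocated)}")
```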

Server Oversubscription
What to do…

1st: Understand applications and their traffic patterns:
- Client to server: low bandwidth
- Server to server: high bandwidth
- Server to storage: bulk
- Storage to storage: bulk

2nd: Consider peak-time behavior:
- Maximum server peak: single-server max capacity
- Average server peak: likely to be seen across the server farm
- Aggregate server peak

3rd: Plan network oversubscription based on peak loads:
- Consider the server bus
- Consider server growth
- Consider steady-state vs. failover states

4th: Network oversubscription:
- Factor in I/O module oversubscription
- Consider the network layers: access, aggregation, and core (per-layer ratios compound, as the sketch below shows)
- Ranges: 1:1 to 20:1, increasing over time
- Factor in server oversubscription
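Because each layer's ratio multiplies the ones below it, end-to-end oversubscription is the product of the per-layer ratios. A minimal sketch; the per-layer values are assumed for illustration, not recommendations:

```python
# End-to-end oversubscription compounds multiplicatively across layers.
import math

def end_to_end(*layer_ratios: float) -> float:
    """Multiply per-layer ratios, each expressed as N in N:1."""
    return math.prod(layer_ratios)

server_to_access = 2.0   # assumed server NIC allocation ratio
access_to_agg = 4.0      # assumed access-uplink ratio
agg_to_core = 2.0        # assumed aggregation-uplink ratio

print(f"{end_to_end(server_to_access, access_to_agg, agg_to_core):g}:1")  # 16:1
```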

Server Virtualization Design Considerations
• A VLAN maps to a subnet: what size should the subnet be?
• VM mobility works within L2 boundaries
• Is the VM cluster limited to a VLAN on a single switch, a VLAN across multiple switches, or a VLAN across all access switches in a single module?
• How many clusters?

Hypothetical example (reproduced in the sketch below):
- 1,000 servers, each using a single IP/MAC pair
- Virtualized with 20 VMs per server
- (1,000 x 20) + 1,000 = 21,000 IP/MAC pairs, i.e. 20,000 new ones
- 20,000 / 250 (hosts per /24 subnet) = 80 new subnets/VLANs
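A minimal sketch of the example's arithmetic, assuming 250 usable hosts per /24 as the slide does:

```python
# The slide's hypothetical: virtualizing 1,000 servers at 20 VMs each.

servers = 1_000
vms_per_server = 20
hosts_per_24 = 250                         # usable addresses assumed per /24

new_pairs = servers * vms_per_server       # 20,000 new IP/MAC pairs (the VMs)
total_pairs = new_pairs + servers          # 21,000 including the physical hosts
new_vlans = new_pairs // hosts_per_24      # 80 new /24 subnets / VLANs

print(total_pairs, new_pairs, new_vlans)   # 21000 20000 80
```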

Data Center Strategy Summary
• Complex interdependencies
• Focus on applications <-> user experience
• Identify key objectives for each aspect of the infrastructure
• Map physical and logical topologies
• Consider I/O options and requirements
• Evaluate the network impact of virtualization

Agenda
• Trends
• Architecture Strategy
• Architecture Evolution

Architecture Evolution


Data Center Architecture
Mapping Initiatives to Architecture

IT initiatives:
• Application flexibility: SaaS, internal/external compute, virtualized images
• Server consolidation: faster CPUs, multi-core, more memory, higher I/O capacity
• Server virtualization: application availability and scalability, server utilization
• Application availability: lower RPO/RTO, better stability
• Workload management: faster application rollout, dynamic server movement
• Automated provisioning: template-driven configuration and dynamic provisioning

Architectural goals:
• Improved efficiency
• Scalable bandwidth
• Common systems architecture
• Simplified I/O
• Improved robustness
• Integrated services

Network Architecture
Mapping Architecture to Technology

Architectural goals -> technology requirements -> Cisco technology alignment:
• Improved efficiency and scalable bandwidth -> scalable 10G infrastructure and efficient L2 pathing -> 10G Ethernet
• Simplified I/O -> I/O consolidation -> FCoE
• Common technology architecture, improved robustness, and integrated services -> increased STP stability, virtual switch partitioning and isolation, and scalable DC services -> virtual switching

Dense 10GE Network Topology
High-Density 10GE Aggregation

Common topology, the starting point:
• Nexus at the core and aggregation layers
• 2-tier L2 topology with 10GE uplinks; L3 at the core
• VLANs contained within each aggregation module

Topology highlights:
• Lower oversubscription
• Higher-density 10GE at the core and aggregation layers

10GE Server Farms
10GE Access and Aggregation

• 10GE in the access, positioned for I/O consolidation
• Aggregation pairs offer 64 10GE ports each; ~52 aggregation 10GE ports serve either 8-12 ToR switches or 4-12 modular switches
• ToR access (40-44 ports per switch, 4-8 x 10GE uplinks): lower oversubscription, roughly 3.3:1
• Modular access (192 ports per switch): higher oversubscription, roughly 12:1
• ToR uses Twinax cable; modular uses fiber

(The oversubscription sketch below shows where these ratios come from.)
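A minimal sketch of the access-layer ratios quoted above. The server-port counts are read off the slide; the uplink counts of 12 and 16 are assumptions chosen to reproduce 3.3:1 and 12:1:

```python
# Access oversubscription = server-facing bandwidth / uplink bandwidth.

def access_oversub(server_ports: int, uplinks: int, port_gbps: float = 10.0) -> float:
    return (server_ports * port_gbps) / (uplinks * port_gbps)

print(f"ToR:     {access_oversub(40, 12):.1f}:1")   # ~3.3:1 (assumed 12 x 10GE uplinks)
print(f"Modular: {access_oversub(192, 16):.1f}:1")  # 12:1 (assumed 16 x 10GE uplinks)
```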

10GE Server Access
10 Gigabit Ethernet End-Host Mode

Switch perspective:
• MAC-based uplink selection (see the pinning sketch below)
• Active-active uplinks using different MACs
• No STP on the access device; BPDUs are not processed, they are dropped
• Separate loop-avoidance mechanisms

Host perspective:
• Active-standby only

Network environment:
• STP is not fully removed: some switches run it, some do not
• Looped conditions have to be considered without STP
• The path to service devices is challenging
• Virtual port channels solve most of these issues
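A minimal sketch of the MAC-based uplink selection idea: each server MAC is pinned to one uplink, so both uplinks forward concurrently without forming a loop. The hash-based pinning rule is an assumption for illustration, not the actual switch algorithm:

```python
# End-host mode sketch: pin each server MAC to one uplink instead of
# running STP; both uplinks stay active, each carrying different MACs.
import zlib

UPLINKS = ["uplink-A", "uplink-B"]

def pin_uplink(server_mac: str) -> str:
    """Deterministically pin a MAC to an uplink (illustrative rule)."""
    return UPLINKS[zlib.crc32(server_mac.encode()) % len(UPLINKS)]

for mac in ("00:25:b5:00:00:01", "00:25:b5:00:00:02", "00:25:b5:00:00:03"):
    print(mac, "->", pin_uplink(mac))
```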

I/O Consolidation in the Network
(Diagram: today a server's processor and memory sit behind three separate I/O paths for storage, IPC, and LAN; with I/O consolidation, a single I/O subsystem carries storage, IPC, and LAN traffic.)

I/O Consolidation in the Host
• Fewer CNAs (Converged Network Adapters) instead of separate FC HBAs, NICs, and HCAs
• Works within the limited number of interfaces available on blade servers
• FC, LAN, management, and IPC traffic all go over 10GE

What Is FCoE?
Fibre Channel over Ethernet
• From a Fibre Channel standpoint: FC connectivity over a new type of cable called… an Ethernet cloud
• From an Ethernet standpoint: yet another ULP (Upper Layer Protocol) to be transported, but a challenging one!
• Technically: FCoE is an extension of Fibre Channel onto a lossless Ethernet fabric

Fibre Channel over Ethernet
A brief look at the technology
• A method for directly mapping FC frames over Ethernet
• Seamlessly connects to FC networks; extends FC in the data center over Ethernet
• FCoE appears as FC to the host and to the SAN
• Preserves the current FC infrastructure and management; the FC frame is unchanged
• Can operate over standard switches (with jumbo frames)
• Priority Flow Control guarantees no drops; mimics the FC buffer-credit system and avoids TCP
• Does not require expensive offloads
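A minimal byte-layout sketch of the encapsulation, following the FC-BB-5 framing at a high level: the unmodified FC frame rides inside an Ethernet frame with EtherType 0x8906, behind a 14-byte FCoE header (version, reserved bits, SOF). The SOF code and MAC addresses are illustrative, and padding/EOF trailer details are omitted:

```python
# FCoE encapsulation sketch: Ethernet header + FCoE header + FC frame.
import struct

FCOE_ETHERTYPE = 0x8906   # the registered FCoE EtherType

def fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    # 4-bit version + reserved bits + 1-byte SOF; ~14 bytes, simplified here
    fcoe_header = bytes(13) + bytes([0x36])       # 0x36 used as an example SOF code
    return eth_header + fcoe_header + fc_frame    # the FC frame is carried unchanged

frame = fcoe_frame(bytes.fromhex("0efc00000001"),  # illustrative FCoE-style MAC
                   bytes.fromhex("001b21aabbcc"),
                   bytes(36))                      # placeholder FC frame
print(len(frame), "bytes")
```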

Discrete Network Fabrics
Typical Ethernet and Storage Topology

Single Ethernet network fabric:
• Typically 3 tiers: core, aggregation, access
• Access switches are dual-homed
• Servers are single- or multi-homed

Dual storage fabrics (Fabric A and Fabric B, carrying separate VSANs):
• Typically 2 tiers
• Edge switches are dual-homed
• Servers are dual-homed to different fabrics

Unified Fabric: Phase I
DCE/FCoE Server Access
(Diagram: servers attach through a CNA (Converged Network Adapter) to unified access switches; from the access layer, Ethernet traffic continues to the LAN aggregation and core while FC traffic breaks out to SAN fabrics A and B.)

Unified Network Fabric
Benefits to Customers
• Fewer interfaces and cables: FC and Ethernet traffic share the server's 10GE links
• Same SAN management as native FC
• No gateway: the FCoE switch connects the server into SAN A/B and the LAN directly
• Less power and cooling

N-Port Virtualization (NPV)
Solves Domain-ID Explosion
• The NPV device (for example, a Nexus 5000 FC interface) presents NP-ports to F-ports on the NPV-core switches
• It can have multiple uplinks, on different VSANs
• The NPV device uses the same domain(s) as the NPV-core switch(es), so edge switches stop consuming FC domain IDs

Virtual Ethernet Switching
Improving Management and Pathing

• Virtual switches: logical instances of physical switches
- Many-to-one: grouping of multiple physical switches; reduces management overhead (single switch) and simplifies configuration (single switch config)
- One-to-many: partitioning of a physical switch; isolates the control plane and control-plane protocols
• Virtual port channels: EtherChannel across multiple chassis
- Simplify L2 pathing by supporting non-blocking, cross-chassis, concurrent L2 paths (see the hashing sketch after this list)
- Lessen reliance on STP (loop-free L2 paths are not established by STP)
• Virtual switching implementations:
- Virtual Switching System (VSS): Catalyst 6500
- Virtual Blade Switch (VBS): 10GE-based blade switches
- Virtual Device Context (VDC): Nexus 7000
- Virtual Port Channel (vPC): Catalyst 6500, Nexus family
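A minimal sketch of how an EtherChannel or vPC keeps every member link forwarding: each flow is hashed onto one member, so no link is blocked and frame order is preserved per flow. Real switches hash on configurable fields (MAC, IP, L4 ports); the MAC-pair rule below is just one common choice:

```python
# Port-channel load sharing sketch: hash each flow onto one member link.
import zlib

MEMBERS = ["po1-link1", "po1-link2", "po1-link3", "po1-link4"]

def member_for_flow(src_mac: str, dst_mac: str) -> str:
    """Pick a member link; the same flow always hashes to the same link."""
    key = f"{src_mac}->{dst_mac}".encode()
    return MEMBERS[zlib.crc32(key) % len(MEMBERS)]

print(member_for_flow("00:25:b5:00:00:01", "00:05:73:a0:00:01"))
print(member_for_flow("00:25:b5:00:00:02", "00:05:73:a0:00:01"))
```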

Virtual Switch: VSS
Two to One

Two physical switches become one virtual switch:
• The two switches look like one
• All ports appear to be on the same physical switch
• Single point of management, single configuration
• Single IP/MAC, single control-plane protocol instance (OSPF, SNMP, STP, HSRP, IGMP)

Benefits:
• Simplified infrastructure management
• L2 DC interconnect
• High availability

Virtual Blade Switch: VBS
Many to One

Many switches look like one:
• Up to eight physical switches become one virtual switch
• All ports appear to be on the same physical switch
• Single point of management, single configuration, single IP/MAC

Benefits:
• Simplified infrastructure management: a single switch to manage

Virtual Switching: VDC
One to Many

One switch looks like many:
• One physical switch presents many logical switches
• Switch ports exist on only a single logical instance
• Per-virtual-switch point of management
• Per-virtual-switch configuration
• Per-virtual-switch IP/MAC
• Per-virtual-switch control-plane protocol instances (OSPF, IGMP, STP, HSRP)

Benefits:
• Control-plane isolation
• Control-protocol isolation

(A toy model of this one-to-many partitioning follows.)
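A minimal toy model of the one-to-many idea, assuming nothing about NX-OS itself: each port belongs to exactly one VDC-like logical switch, and each logical switch keeps fully separate configuration and protocol state:

```python
# VDC-style partitioning sketch: one physical switch, many isolated
# logical switches; this only illustrates the isolation property.

class PhysicalSwitch:
    def __init__(self, port_count):
        self.port_owner = {p: None for p in range(1, port_count + 1)}  # port -> VDC
        self.vdcs = {}

    def create_vdc(self, name):
        self.vdcs[name] = {"ports": set(), "config": {}}   # per-VDC isolated state

    def assign_port(self, port, vdc):
        if self.port_owner[port] is not None:              # a port has exactly one owner
            raise ValueError(f"port {port} already owned by {self.port_owner[port]}")
        self.port_owner[port] = vdc
        self.vdcs[vdc]["ports"].add(port)

sw = PhysicalSwitch(port_count=48)
sw.create_vdc("agg-vdc1")
sw.create_vdc("agg-vdc2")
sw.assign_port(1, "agg-vdc1")
sw.assign_port(2, "agg-vdc2")
print(sw.vdcs["agg-vdc1"]["ports"], sw.vdcs["agg-vdc2"]["ports"])
```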

Isolating Collapsed L2 Domains
Through Virtual Device Contexts

VDCs at the aggregation layer:
• One STP topology per VDC environment
• Each access switch attaches to only one VDC
• VLAN instances per VDC, per access switch
• One STP process per access switch

VDCs at the aggregation and access layers:
• One STP topology per VDC environment
• Access switches support VDCs as well
• VLAN instances per VDC
• Two STP processes per access switch

Virtual Port Channels: vPC
L2 Topology

Two physical switches act as a single logical switch:
• Devices connect to one "logical" switch; their connections are treated as a port channel
• Ports to the virtual switch pair can form a cross-chassis port channel
• A virtual port channel behaves like a regular EtherChannel

Benefits:
• Provides non-blocking L2 paths
• Lessens reliance on STP

Simplifying the Topology
Through Virtual Port Channels

Simplify the network topology:
• Build loop-free topologies without STP
• Take advantage of all available L2 paths and all available network bandwidth capacity
• STP is still used as a fail-safe mechanism

Simplify server-to-network connectivity:
• Servers can use more than one interface concurrently
• NIC teaming is no longer necessary

Overlaying Stateful Services
Leveraging Virtual Port Channels

Service appliances or service switches:
• Leverage virtual port channels for a non-blocking path to the STP root / HSRP primary

Service integration:
• Services switches house service modules; service appliances attach directly
• Most services support 10GE connections

Architecture Evolution: Summary


Data Center Architecture Summary

Topology layers:
• Core layer: supports high-density L3 10GE aggregation
• Aggregation layer: supports high-density L2/L3 10GE aggregation
• Access layer: supports EoR/MoR, ToR, and blade for 1GE, 10GE, DCE, and FCoE attached servers

Topology services:
• Services delivered through service switches attached at the L2/L3 boundary

Topology flexibility:
• Pod-wide VLANs, aggregation-wide VLANs, or DC-wide VLANs
• A trade-off between flexibility and fault-domain size

Architecture Evolution Summary
Cisco technology alignment:
• 10G Ethernet: 10 Gig core, aggregation, and access; DCE Ethernet enhancements
• I/O consolidation: unified fabric with FCoE
• Virtual switching: N-Port Virtualization (NPV), Virtual Switch (VSS), Virtual Blade Switch (VBS), Virtual Device Context (VDC), Virtual Port Channel (vPC)

Additional Resources
URLs
• VSS independent testing: http://www.networkworld.com/reviews/2008/010308-cisco-virtual-switching-test.html
• 6500 cabinet information: http://wwwin.cisco.com/dss/isbu/6500/enviro/index.shtml
• Panduit: http://www.panduit.com/default.asp
• Chatsworth cabinets: http://www.chatsworth.com/common/n-series
• TIA (Telecommunications Industry Association): http://www.tiaonline.org/
• ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers): http://www.ashrae.org/
• Uptime Institute: http://uptimeinstitute.org/
• Government work on server and DC energy efficiency: http://www.energystar.gov/index.cfm?c=prod_development.server_efficiency

Useful Standards Efforts Resources

IETF TRILL:
• http://www.ietf.org/html.charters/trill-charter.html
• http://www.ietf.org/internet-drafts/draft-ietf-trill-prob-01.txt
• http://www.ietf.org/internet-drafts/draft-ietf-trill-rbridge-protocol-02.txt

IEEE 802.1:
• http://www.ieee802.org/1/files/public/docs2005/aq-nfinn-shortest-path-0905.pdf
• http://www.ieee802.org/1/files/public/docs2006/aq-nfinn-shortest-path-2-0106.pdf
• http://www.ieee802.org/1/pages/802.1au.html
• http://www.ieee802.org/3/ar/public/0503/wadekar_1_0503.pdf
• http://www.ieee802.org/1/files/public/docs2007/au-bergamasco-ecm-v0.1.pdf

IEEE 802.3 HSSG:
• http://grouper.ieee.org/groups/802/3/hssg/

T11:
• http://www.t11.org/index.html

Q and A


Recommended Reading for BRKDCT-2866
• Data Center Fundamentals
• Storage Networking Protocol Fundamentals
• Storage Networking Fundamentals: An Introduction to Storage Devices, Subsystems, Applications, Management, and File Systems

Available onsite at the Cisco Company Store

Complete Your Online Session Evaluation
Cisco values your input. Give us your feedback: we read and carefully consider your scores and comments, and incorporate them into the content program year after year. Go to the Internet stations located throughout the Convention Center to complete your session evaluations. Thank you!
