
Data Center Virtualization

René Raeber, CE Datacenter
Central Consulting, Advanced Technologies/DC
Setting the stage:
What’s the meaning of virtual?

• If you can see it and it is there
  – It's real
• If you can't see it but it is there
  – It's transparent
• If you can see it and it is not there
  – It's virtual
• If you cannot see it and it is not there
  – It's gone!
Agenda Datacenter Virtualization

¾ Data Center Virtualization Overview

¾ Front End DC Virtualization

¾ Server Virtualization

¾ Back-End Virtualization

¾ Conclusion & Direction Q&A


Virtualization
Overview
The “Virtual Data Center” Approach
Abstracting Server Hardware From Software together with Consolidation
• Existing Service Chains are still aligned to the instances of Virtual Servers running in place of physical servers.

• VLANs at the Virtual Machine (Hypervisor) level map to VLANs at the Network Switch layer.

• Virtual Storage LUNs are similarly mapped directly to the VMs, in the same way they would map to physical servers.

[Diagram: three service chains (Access Layer, Logic Layer, Information Layer), each built from Virtual SANs, Virtual LANs and Virtual Services]
The Flexibility of Virtualization
VM Mobility Across Physical Server Boundaries While Keeping Services Aligned

• VM Mobility is capable of moving Virtual Machines across Physical Server boundaries.

• The Application Services provided by the Network need to respond and be aligned to meet the new geometry of the VMs.

• Close interaction is required between the assets provisioning the virtualized infrastructure and the Application Services supporting the Virtual Machines.

[Diagram: VM Mobility across the three service chains (Access Layer, Logic Layer, Information Layer), each with its Virtual SANs, Virtual LANs and Virtual Services]
Moving to a Unified Fabric
Moving to a fully Virtualized Data Center, with Any To Any Connectivity
• Fully unified I/O delivers the following characteristics:
– Ultra high capacity (10 Gbps+)
– Low latency
– Loss free (FCoE)

• True "Any to Any" connectivity is possible, as all devices are connected to all other devices.

• We can now simplify management and operations and enhance power and cooling efficiencies.

[Diagram: Unified Fabric networking joining the Virtual SANs, Virtual LANs and Virtual Services of each service chain under common management]
Network Virtualization Building Blocks
Device Partitioning (1:n): VDCs, FW and ACE contexts, VLANs, VRFs
Virtualized Interconnect (n:m): L3 VPNs (MPLS VPNs, GRE, VRF-Lite, etc.), L2 VPNs (AToM, Unified I/O, VLAN trunks, PW, etc.)
Device Pooling (n:1): VSS, StackWise, VBS, Virtual Port Channel (vPC), HSRP/GLBP
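As a small illustration of the device-partitioning building block, a VRF-Lite sketch on NX-OS might look like the following (the VRF names and addressing are invented for the example):

! Hypothetical NX-OS VRF-Lite sketch: two tenants partitioned on one switch
feature interface-vlan

vrf context RED              ! separate routing table for tenant RED
vrf context BLUE             ! separate routing table for tenant BLUE

interface Vlan10
  vrf member RED             ! SVI placed into the RED VRF
  ip address 10.10.10.1/24
  no shutdown

interface Vlan20
  vrf member BLUE            ! SVI placed into the BLUE VRF
  ip address 10.10.10.1/24   ! overlapping addressing is possible across VRFs
  no shutdown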
Virtualized Data Center Infrastructure
[Diagram: virtualized data center infrastructure. DC Core: Nexus 7000 10GbE core and aggregation routers toward the IP+MPLS WAN. DC Aggregation: Nexus 7000 10GbE aggregation and Catalyst 6500 10GbE VSS aggregation with DC services; MDS 9500 storage core for SAN A and B. DC Access: Nexus 5000, Nexus 7000, Catalyst 6500, CBS 3xxx blade switches, Nexus 2000 and MDS 9124e, providing 1GbE, 10GbE, 4/8Gb FC and 10Gb FCoE server access (Nexus blade: future). Link legend: Gigabit Ethernet, 10 Gigabit Ethernet, 10 Gigabit DCE, 4/8Gb Fibre Channel, 10 Gigabit FCoE/DCE.]
Front-End
Virtualization
Virtual Device Contexts at Nexus 7000
VDC Architecture
Virtual Device Contexts Provide Virtualization at the Device Level, Allowing Multiple Instances of the Device to Operate on the Same Physical Switch at the Same Time

[Diagram: VDC architecture. Each VDC (VDC1 … VDCn) runs its own instances of the L2 protocols (VLAN Mgr, UDLD, LACP, CTS, IGMP, 802.1x), L3 protocols (OSPF, BGP, EIGRP, PIM, GLBP, HSRP, VRRP, SNMP), RIB and protocol stack (IPv4/IPv6/L2) on top of the shared infrastructure and kernel of the Nexus 7000 physical switch.]
Virtual Device Contexts
VDC Fault Domain
A VDC Builds a Fault Domain Around All Running Processes Within That
VDC—Should a Fault Occur in a Running Process, It Is Truly Isolated from
Other Running Processes and They Will Not Be Impacted

[Diagram: VDC A and VDC B each run their own set of processes (ABC, DEF, XYZ, …) and their own protocol stack above the shared infrastructure and kernel of the Nexus 7000 physical switch. If process "DEF" crashes in VDC B, process DEF in VDC A is not affected and continues to run unimpeded.]
Virtual Device Contexts
VDC and Interface Allocation

• Ports are assigned on a per-VDC basis and cannot be shared across VDCs.

• Once a port has been assigned to a VDC, all subsequent configuration is done from within that VDC (a configuration sketch follows below).

[Diagram: ports of a 32-port 10GE module allocated across VDCs A, B and C]
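For illustration, a minimal NX-OS sketch of creating a VDC and allocating ports to it could look like this (the VDC name and port range are hypothetical):

! Hypothetical Nexus 7000 VDC sketch (entered from the default/admin VDC)
switch(config)# vdc Aggregation
switch(config-vdc)# allocate interface ethernet 2/1-8   ! these ports now belong to this VDC only
switch(config-vdc)# exit

! All further configuration of those ports happens inside the VDC
switch# switchto vdc Aggregation
switch-Aggregation(config)# interface ethernet 2/1
switch-Aggregation(config-if)# description uplink to core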
VDC Use Case Examples
Security Partitioning

ƒ Some Infosec departments are still reluctant about collapsed infrastructure
ƒ Concerns around change management
ƒ Infrastructure misconfiguration could bypass policies
ƒ Ideally they want physically separate infrastructure
ƒ Not cost effective in larger deployments

ƒ VDCs provide logical separation simulating an air gap
ƒ Extremely low possibility of configuration bypassing the security path – it must be physically bypassed
ƒ Model can be applied to any DC service

[Diagram: appliance model vs. service module model – an Outside VDC and an Inside VDC on either side of the firewall]
VDC Use Case Examples
Horizontal Consolidation

• Preface: Lead with separate physical boxes, as they provide the most scalable solution. VDCs are useful in certain situations!
• Objective: Consolidate lateral infrastructure that delivers similar roles for separate operational or administrative domains.
• Benefits: Reduced power and space requirements, maximized platform density, easy migration to physical separation for future growth.
• Considerations: Number of VDCs (four); four VDCs do not equal four CPUs. Does not significantly reduce the cabling or interfaces needed.
[Diagram: four physical aggregation switches (agg1–agg4) serving two admin groups are consolidated into two Nexus 7000s, each carrying one aggregation VDC per admin group; the core devices and access switches remain unchanged.]
VDC Use Case Examples
Vertical Consolidation
• Preface: Lead with separate physical boxes as they provide the most
scalable solution.
–Large Three Tier designs should remain physical.
–Smaller Two Tier designs can leverage VDCs for common logical
design with three tier.
• Objective: Consolidate vertical infrastructure that delivers orthogonal roles
to the same administrative or operational domain.
• Benefits: Reduced power and space requirements, can maximize density
of the platform, provides smooth growth path, easy migration to physical
separation in future
• Considerations: Number of VDCs (four); four VDCs do not equal four CPUs. Intra-Nexus 7000 cabling is needed for connectivity between layers.
[Diagram: separate core and aggregation devices are consolidated into two Nexus 7000s, each carrying a core VDC and an aggregation VDC, serving the same access switches.]


Core
Virtualization
Virtual Port-Channel (vPC)
Feature Overview

• Allows a single device to use a port channel across two upstream switches
• Separate physical switches with independent control and data planes
• Eliminates STP blocked ports; uses all available uplink bandwidth
• Dual-homed servers operate in active-active mode
• Provides fast convergence upon link/device failure
• Available in NX-OS 4.1 for Nexus 7000; Nexus 5000 availability planned for CY09

[Diagram: logical topology without vPC (blocked uplink) vs. logical topology with vPC (all uplinks forwarding)]


Multi-level vPC
[Diagram: physical and logical views of multi-level vPC between switch pairs SW1/SW2 and SW3/SW4, each pair joined by a vPC peer link (vPC_PL) and an FT-Link]

ƒ Up to 16 links between both sets of switches: 4 ports each from sw1-sw3, sw1-sw4, sw2-sw3 and sw2-sw4
ƒ Provides maximum non-blocking bandwidth between sets of switch peers
ƒ Not limited to one layer; can be extended as needed (a configuration sketch follows)
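A minimal NX-OS vPC sketch, with hypothetical port-channel numbers and peer-keepalive addresses, might look like this on each peer switch:

! Hypothetical vPC configuration sketch (mirrored on both peer switches)
feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1   ! out-of-band keepalive to the peer

interface port-channel 1
  switchport mode trunk
  vpc peer-link                  ! dedicated peer link between the two vPC peers

interface port-channel 20
  switchport mode trunk
  vpc 20                         ! same vPC number configured on both peers

interface ethernet 1/1
  channel-group 20 mode active   ! LACP member link toward the downstream device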
Aggregation
Virtualization
Aggregation Services Design Options

[Diagram: the same virtualized data center infrastructure as before, highlighting aggregation services design options – service modules embedded in the Catalyst 6500 VSS aggregation, or one-arm service switches attached to the Nexus 7000 aggregation.]
Virtual Switch System (VSS)
Concepts
Virtual Switch System Is a Technology Breakthrough for the Cisco Catalyst 6500 Family

EtherChannel Concepts: Multichassis EtherChannel (MEC)

• LACP, PAgP or ON EtherChannel modes are supported
• A regular EtherChannel terminates on a single chassis; a Multichassis EtherChannel (MEC) spans two VSL-enabled chassis (a rough configuration sketch follows)
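For orientation only, a Catalyst 6500 VSS/MEC sketch could look roughly like this (the domain number and interfaces are hypothetical; the full conversion also involves the VSL and a "switch convert mode virtual" step):

! Hypothetical Catalyst 6500 VSS sketch (IOS)
switch virtual domain 100
 switch 1                         ! this chassis is switch 1 of the virtual switch
 switch 1 priority 110

! Multichassis EtherChannel: member links terminate on both chassis
interface range TenGigabitEthernet 1/1/1 , TenGigabitEthernet 2/1/1
 channel-group 20 mode active     ! LACP; PAgP or mode on also supported

interface Port-channel 20
 switchport
 switchport mode trunk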
ACE Module: Virtual Partitioning

Multiple Virtual Systems on One Physical Device (Dedicated Control and Data Path)

[Diagram: a traditional device dedicates 100% of its resources to one system; an ACE module is partitioned into virtual contexts with, e.g., 25%, 25%, 15%, 15% and 20% of the resources]

Traditional device:
• Single configuration file
• Single routing table
• Limited RBAC
• Limited resource allocation

With ACE virtualization:
• Distinct context configuration files
• Separate routing tables
• RBAC with contexts, roles, domains
• Management and data resource control
• Independent application rule sets
• Global administration and monitoring
• Supports routed and bridged contexts at the same time
Firewall Service Module (FWSM)
Virtual Firewalls

[Diagram: two Catalyst 6500/MSFC designs with an FWSM hosting three virtual firewalls (A, B, C). On the left each context has its own outside VLAN (10, 20, 30); on the right all contexts share outside VLAN 10. Inside VLANs are 11, 21 and 31.]

• E.g., three customers → three security contexts; scales up to 250
• VLANs can be shared if needed (VLAN 10 in the right-hand example)
• Each context has its own policies (NAT, access lists, inspection engines, etc.)
• FWSM supports routed (Layer 3) and transparent (Layer 2) virtual firewalls at the same time
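A rough multiple-context FWSM sketch (context names, VLANs and config URLs are invented) could look like this:

! Hypothetical FWSM virtual-firewall sketch
! On the Catalyst 6500: hand the VLANs to the FWSM in slot 3
firewall vlan-group 1 10,11,20,21,30,31
firewall module 3 vlan-group 1

! On the FWSM: multiple-context mode, one context per customer
mode multiple

context customerA
  allocate-interface Vlan10          ! outside
  allocate-interface Vlan11          ! inside
  config-url disk:/customerA.cfg

context customerB
  allocate-interface Vlan20
  allocate-interface Vlan21
  config-url disk:/customerB.cfg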
Data Center Virtualized Services
Combination Example

[Diagram: a combined service chain per business unit (BU-1 … BU-4) – traffic enters "front-end" VRFs on the MSFC, passes through firewall module contexts and ACE module contexts, and reaches "back-end" VRFs and server-side VLANs; VLANs v5–v8, v105–v108, v206–v208 and v2081–v2083 interconnect the stages.]

* vX = VLAN X
** BU = Business Unit
VSS with ACE and FWSM Modules
Active / Standby Pair

[Diagram: a VSS pair. Switch-1 (VSS active) runs the active control plane; Switch-2 (VSS standby) runs the hot-standby control plane; both data planes are active, joined by the VSL. A failover/state-sync VLAN connects the service modules: the ACE is active in Switch-1 and standby in Switch-2, while the FWSM is standby in Switch-1 and active in Switch-2.]
Combining vPC with VSS for Services

• Services can be:
– attached using EtherChannel / vPC
– appliance based
– services-chassis based (standalone or VSS)

[Diagram: a Nexus 7000 pair with vPC attaching an ACE appliance, an ASA appliance, a NAM and a VSS services chassis]
Access Layer
Virtualization
Data Center Access Layer Options
Top of Rack (ToR)
• Typically 1-RU servers
• 1-2 GE LOMs
• Mostly 1, sometimes 2 ToR switches
• Copper cabling stays within rack
• Low copper density in ToR
• Higher chance of East-West traffic hitting
aggregation layer
• Drives higher STP logical port count for
aggregation layer
• Denser server count
Middle of Row (MoR) (or End of Row)
• May be 1-RU or multi-RU servers
• Multiple GE or 10GE NICs
• Horizontal copper cabling for servers
• High copper cable density in MoR
• Larger portion of East-West traffic stays
in access
• Larger subnets → less address waste
• Keeps agg. STP logical port count low
(more EtherChannels, fewer trunk ports)
• Lower # of network devices to manage
Virtual Switch (Nexus 7000 or Catalyst 6500)

Catalyst 6500:
• VSS and MEC
• Many-to-1 virtualization
• Service modules
• Single control plane

Nexus 7000:
• VDC and vPC
• 1-to-many virtualization
• High density (10/100/1000 & 10GE)
• Distinct control planes while virtualized
ToR @ 1GE: 
Nexus 2000, the Nexus 5000 “virtual” linecard

• Nexus 2000 combines the benefits of both ToR and EoR architectures
¾ Physically resides at the top of each rack, but
¾ Logically acts like an end-of-row access device
• Nexus 2000 deployment benefits
¾ Reduces cable runs
¾ Reduces management points
¾ Ensures feature consistency across hundreds of servers
¾ Enables the Nexus 5000 to become a high-density 1GE access layer switch
¾ VN-Link capabilities
Nexus 2000 (Fabric Extender - FEX)

Nexus
2000
Nexus 2000 implementation example

[Diagram: physical vs. logical topology. Physically, the Nexus 2000 FEXes in each rack uplink with 4x 10G to a pair of Nexus 5020 access switches, below a VSS aggregation layer (L2/L3 boundary) and the core. Logically, the 12 Nexus 2000s behave like remote line cards of each Nexus 5020, which becomes the central point of management for all of the racks.]
Blades: Cisco Virtual Blade Switching (VBS)
• Up to 8 switches act as a single VBS switch
–Distributed L2/MAC learning
–Centralized L3 learning

• Each switch consists of


– Switch Fabric
– Port Asics (downlink & uplink ports)

• One Master Switch per VBS
–1:N Resiliency for Master
– L2/L3 reconvergence is sub 200 msec

• High Speed VBS Cable (64 Gbps)

• Example deployment:
– 16 servers per enclosure × 2 GE ports per server × 4 enclosures per rack = 128 GE
– 2 x 10GE uplinks = 20 GE
– 128 GE / 20 GE = 6.4:1 oversubscription
Cisco Catalyst Virtual Blade Switch (VBS)
with Non-vPC Aggregation
[Diagram: the Virtual Blade Switch access layer appears as a single switch/node (for spanning tree, Layer 3 and management); with non-vPC aggregation, spanning tree blocks some of the uplinks]
Cisco Catalyst Virtual Blade Switch (VBS)
with Nexus vPC Aggregation
[Diagram: the Virtual Blade Switch access layer, again a single switch/node for spanning tree, Layer 3 and management, connects to a Nexus vPC aggregation pair; all uplinks are forwarding]


Server
Virtualization
VMware ESX 3.x Networking Components

[Diagram: per-ESX-server configuration. VMs such as VM_LUN_0005 and VM_LUN_0007 attach via their vNICs to virtual ports on vSwitch0; the vSwitch's uplinks are the physical VMNICs (vmnic0, vmnic1).]
Cisco VN-Link

• VN-Link (or Virtual Network Link) is a term which describes a new set of features and capabilities that enable VM interfaces to be individually identified, configured, monitored, migrated and diagnosed.

ƒ The term literally refers to a VM-specific link that is created between the VM and a Cisco switch. It is the logical equivalent and combination of a NIC, a Cisco switch interface and the RJ-45 patch cable that hooks them together.

• VN-Link requires platform support for Port Profiles, Virtual Ethernet Interfaces, vCenter integration, and Virtual Ethernet mobility.

[Diagram: VM vNICs map through the hypervisor to virtual Ethernet (vEth) interfaces on the switch]
Server Virtualization & VN-Link
VN-Link Brings VM Level Granularity

VMotion
Problems:
• VMotion may move VMs across
physical ports—policy must
follow
• Impossible to view or apply
policy to locally switched traffic
• Cannot correlate traffic on
physical links—from multiple
VMs
VLAN
101

VN-Link:
• Extends network to the VM
• Consistent services
• Coordinated, coherent
management
VN-Link With the Cisco Nexus 1000V

Cisco Nexus 1000V – Software Based

ƒ Industry's first third-party ESX switch
ƒ Built on Cisco NX-OS
ƒ Compatible with switching platforms
ƒ Maintains the vCenter provisioning model unmodified for server administration, but also allows network administration of the Nexus 1000V via the familiar Cisco NX-OS CLI

Announced 09/2008; shipping H1CY09

[Diagram: a server running VMware ESX with four VMs switched by the Nexus 1000V and connected to the LAN through the physical NICs]

Policy-Based VM Connectivity · Mobility of Network and Security Properties · Non-Disruptive Operational Model
VN-Link with
Network Interface Virtualization (NIV)

Nexus Switch with VN-Link – Hardware Based

ƒ Allows scalable hardware-based implementations through hardware switches
ƒ Standards-based initiative: Cisco & VMware proposal in IEEE 802 to specify "Network Interface Virtualization"
ƒ Combines VM and physical network operations into one managed node
ƒ Future availability

http://www.ieee802.org/1/files/public/docs2008/new-dcb-pelissier-NIC-Virtualization-0908.pdf

[Diagram: a server running VMware ESX with four VMs; VN-Link/NIV extends the upstream Nexus switch down to the individual VM interfaces]

Policy-Based VM Connectivity · Mobility of Network and Security Properties · Non-Disruptive Operational Model
Cisco Nexus 1000V

Industry First 3rd Party Distributed Virtual Switch

[Diagram: Server 1 and Server 2, each running VMware ESX and hosting VMs #1–#8; the per-host VMware vSwitches are replaced by one Nexus 1000V distributed virtual switch (DVS) spanning both servers]

ƒ Nexus 1000V provides enhanced VM switching for VMware ESX
ƒ Features Cisco VN-Link:
  ƒ Policy-based VM connectivity
  ƒ Mobility of network & security properties
  ƒ Non-disruptive operational model
ƒ Ensures proper visibility & connectivity during VMotion

Enabling Acceleration of Server Virtualization Benefits


Cisco Nexus 1000V Architecture

[Diagram: Servers 1–3, each running VMware ESX and hosting VMs #1–#12; each host runs a Virtual Ethernet Module (VEM) in place of the VMware vSwitch, and the VEMs together with the Virtual Supervisor Module (VSM) form one Nexus 1000V DVS managed through vCenter]

Virtual Supervisor Module (VSM)
ƒ Virtual or physical appliance running Cisco NX-OS (supports HA)
ƒ Performs management, monitoring & configuration
ƒ Tight integration with VMware vCenter

Virtual Ethernet Module (VEM)
ƒ Enables advanced networking capability on the hypervisor
ƒ Provides each VM with a dedicated "switch port"
ƒ Collection of VEMs = 1 DVS

Cisco Nexus 1000V enables:
ƒ Policy-based VM connectivity
ƒ Mobility of network & security properties
ƒ Non-disruptive operational model
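As an illustration of the policy-based connectivity model, a port profile defined on the VSM (the name and VLAN below are invented) is pushed to vCenter as a port group and follows the VM during VMotion:

! Hypothetical Nexus 1000V port-profile sketch (configured on the VSM)
port-profile type vethernet WebServers
  vmware port-group               ! exported to vCenter as a port group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled                   ! the policy moves with the VM during VMotion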
Back-End
Virtualization
End-to-End Back-End Virtualization

ƒ Optimizes resource utilization
ƒ Increases flexibility and agility
ƒ Simplifies management
ƒ Reduces TCO

[Diagram: end-to-end back-end virtualization – virtual servers with virtual HBAs over FCoE CNAs and Unified I/O, virtual fabrics (OLTP, Backup and Email VSANs), and virtual storage drawn from pools of storage resources]
Virtual Storage Area Network (VSAN) Deployment
• Consolidation of SAN islands
– Increased utilization of fabric ports with just-in-time provisioning
• Deployment of large fabrics
– Dividing a large fabric into smaller VSANs
– Disruptive events isolated per VSAN
– RBAC for administrative tasks
– Zoning is independent per VSAN
• Advanced traffic management
– Defining the paths for each VSAN
– VSANs may share the same EISL
– Cost effective on WAN links
• Resilient SAN extension
• Standard solution (ANSI T11 FC-FS-2, section 10)

[Diagram: separate SAN islands for Departments A, B and C are consolidated onto one physical fabric as Virtual SANs (VSANs)]
VSAN Technology
The Virtual SANs feature consists of two primary functions:
• Hardware-based isolation of tagged traffic belonging to different VSANs
• Creation of an independent instance of Fibre Channel services for each newly created VSAN

[Diagram: two Cisco MDS 9000 switches with the VSAN service, connected by an Enhanced ISL (EISL) between trunking E_Ports (TE_Ports). The EISL trunk carries tagged traffic from multiple VSANs; the VSAN header is added at the ingress point to indicate membership and removed at the egress point. Each switch runs separate Fibre Channel services for the Blue and Red VSANs, and no special support is required by the end nodes.]
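A minimal MDS VSAN sketch, with hypothetical VSAN numbers and interfaces, could look like this:

! Hypothetical MDS 9000 VSAN sketch
vsan database
  vsan 10 name OLTP
  vsan 20 name Backup
  vsan 10 interface fc1/1          ! host-facing port placed into the OLTP VSAN

! EISL trunk toward the neighbouring switch, carrying both VSANs
interface fc2/1
  switchport mode E
  switchport trunk mode on
  switchport trunk allowed vsan 10
  switchport trunk allowed vsan add 20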
N-Port ID Virtualization (NPIV)
Application Server

• Mechanism to assign multiple N_Port_IDs to a single N_Port
• Allows all the access control, zoning and port security (PSM) to be implemented at the application level
• Multiple N_Port_IDs are, so far, allocated in the same VSAN

[Diagram: an application server running E-Mail, Web and File Services, each with its own N_Port ID (ID-1, ID-2, ID-3) on a single N_Port into the fabric's F_Ports; E_Ports carry the traffic onward toward VSAN_3, VSAN_2 and VSAN_1]
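Enabling NPIV on the fabric switch is a single global command; a small sketch:

! Sketch: allow multiple N_Port logins per F_Port on the MDS switch
feature npiv
! 'show flogi database' would then list several pWWNs behind one interface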
NPIV Usage Examples
Virtual Machine Aggregation and "Intelligent Pass-Thru"

[Diagram: two NPIV usage examples – virtual machines aggregated onto an NPIV-enabled HBA attached to an F_Port, and "Intelligent Pass-Thru", where an NPV edge switch connects through an NP_Port to an F_Port on the core]
Virtual Servers Share a Physical HBA
• A zone includes the physical HBA and the storage array
• Access control is delegated to the storage array's "LUN masking and mapping"; it is based on the physical HBA pWWN and is the same for all VMs
• The hypervisor is in charge of the mapping; errors may be disastrous

[Diagram: virtual servers sharing one physical HBA (pWWN-P). The HBA performs a single login on a single point-to-point connection to the MDS 9000 fabric; the FC name server sees only pWWN-P, and the storage array applies LUN mapping and masking against that one pWWN. The zone contains the physical HBA and the storage array.]
Virtual Server Using NPIV and
Storage Device Mapping
• Virtual HBAs can be zoned individually
• "LUN masking and mapping" is based on the virtual HBA pWWN of each VM
• Very safe with respect to configuration errors
• Only supports RDM
• Available in ESX 3.5

[Diagram: virtual servers each presenting their own virtual HBA (pWWN-1 … pWWN-4) through an NPIV-enabled physical HBA (pWWN-P). Multiple logins occur over a single point-to-point connection to the MDS 9000; the FC name server sees every virtual pWWN, and the storage array maps LUNs to pWWN-1 through pWWN-4 individually.]
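With NPIV, each virtual HBA can be placed in its own zone; a hypothetical MDS zoning sketch (all WWNs are invented) might look like this:

! Hypothetical per-VM zoning sketch on the MDS
zone name VM1_to_array vsan 10
  member pwwn 21:00:00:e0:8b:00:00:01   ! virtual HBA pWWN-1 of VM1
  member pwwn 50:06:01:60:00:00:00:aa   ! storage array port

zoneset name Fabric_A vsan 10
  member VM1_to_array

zoneset activate name Fabric_A vsan 10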
VMotion LUN Migration without NPIV

[Diagram: three ESX hosts, each running VM1–VM3 on standard HBAs that share the physical HBA's WWPN, all connected to a 16-port 1/2 Gbps FC module (WS-X9016) in a single zone]

• All configuration parameters are based on the World Wide Port Name (WWPN) of the physical HBA
• All LUNs must be "exposed" to every server to ensure disk access during live migration (single zone)
VMotion LUN Migration with NPIV

[Diagram: an ESX host running VM1–VM3 on NPIV-enabled HBAs; each VM has its own WWPN (WWPN1–WWPN3) toward the 1/2 Gbps FC module (WS-X9016)]

• No need to reconfigure zoning or LUN masking
• Centralized management of VMs and resources
• Dynamically reprovision VMs without impact to the existing infrastructure
• Redeploy VMs and support live migration
• Only supports RDM!
NPIV Usage Examples

Virtual Machine Aggregation and "Intelligent Pass-Thru"

[Diagram: the same two NPIV usage examples – VM aggregation through an NPIV-enabled HBA, and "Intelligent Pass-Thru" through an NPV edge switch (NP_Port toward the core F_Port)]
Blade Switch/Top-of-Rack
Domain ID Explosion

• Domain IDs are used for addressing, routing, and access control
• One domain ID per SAN switch
• Theoretically 239 domain IDs; in practice far fewer are supported
• Limits SAN fabric scalability

[Diagram: every blade switch attached to the MDS 9500 core consumes a domain ID, so adding blade switches increases the domain ID count and the number of fabrics per SAN (theoretical maximum: 239 domain IDs), in front of Tier 1, Tier 2 and tape farm storage]
Cisco MDS Network Port Virtualization (NPV)

• Eliminates the edge switch domain ID
• The edge switch acts as an NPIV host
• Simplifies server and SAN management and operations
• Increases fabric scalability (a configuration sketch follows)

[Diagram: NPV-enabled blade/edge switches do not use domain IDs and log in to the MDS 9500 core as NPIV hosts; the core supports up to 100 edge switches, in front of Tier 1, Tier 2 and tape farm storage]
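A sketch of NPV on the edge switch and NPIV on the core (interface numbers are hypothetical); note that enabling NPV mode typically reloads the edge switch and erases its configuration:

! Core MDS: accept multiple logins from the NPV edge switch
feature npiv

! Hypothetical edge switch in NPV mode (consumes no domain ID)
feature npv                        ! the switch reloads and comes up in NPV mode
interface fc1/1
  switchport mode NP               ! uplink (NP_Port) toward the core F_Port
interface fc1/10
  switchport mode F                ! server-facing port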


Flex Attach (Virtual PWWN)

• Assign a virtual pWWN on the NPV switch port
• Zone the vPWWN to storage
• LUN masking is done on the vPWWN
• Reduces operational overhead
– Enables server or physical HBA replacement
– No need for zoning modification
– No LUN masking change
• Automatic link to the new pWWN
– No manual relinking to the new pWWN is needed

[Diagram: before replacement, physical pwwn1 sits behind vPWWN1 on port FC1/1 and is zoned as vpwwn1 with pwwnX; after the server/HBA swap, pwwn2 sits behind the same vPWWN1, so the zoning and LUN masking are unchanged]
Storage Volume Virtualization

[Diagram: two initiators connected through the SAN fabric directly to their respective targets]
• Adding more storage requires administrative changes


• Administrative overhead, prone to errors
• Complex coordination of data movement between
arrays
Storage Volume Virtualization

[Diagram: initiators in VSAN_10 and VSAN_20 log in to virtual targets; virtual initiators in VSAN_30 log in to the real storage; the virtualization layer presents Virtual Volume 1 and Virtual Volume 2 across the SAN fabric]
• A SCSI operation from the host is mapped into one or more SCSI operations to the SAN-attached storage
• Zoning connects the real initiator and virtual target, or the virtual initiator and the real storage
• Works across heterogeneous arrays
Sample Use: Seamless Data Mobility

[Diagram: the same virtual target / virtual initiator model, with Virtual Volume 1 and Virtual Volume 2 backed by Tier 2 arrays, so data can be moved between arrays behind the virtual volumes]

• Works across heterogeneous arrays


• Nondisruptive to application host
• Can be utilized for “end-of-lease” storage migration
• Movement of data from one tier class to another tier
Your session feedback is valuable

Please take the time to complete the breakout evaluation form and hand it to the member of staff by the door on your way out.

Thank you!
Recommended Reading
