
Data Center Virtualization

René Raeber, CE Datacenter


Central Consulting Advanced Technologies/DC

Setting the stage:


What's the meaning of virtual?
If you can see it and it is there
It's real
If you can't see it but it is there
It's transparent
If you can see it and it is not there
It's virtual
If you cannot see it and it is not there
It's gone!

Agenda: Data Center Virtualization


Data Center Virtualization Overview
Front End DC Virtualization
Server Virtualization
Back-End Virtualization
Conclusion & Direction, Q&A

Virtualization
Overview

The Virtual Data Center Approach


Abstracting Server Hardware From Software together with Consolidation

[Diagram: Virtual SANs, Virtual LANs and Virtual Services aligned to the Access Layer, Logic Layer and Information Layer service chains]

Existing Service Chains are still aligned to the instances of Virtual Servers running in place of physical servers.

VLANs at the Virtual Machine (Hypervisor) level map to VLANs at the Network Switch layer.

Virtual Storage LUNs are similarly directly mapped to the VMs, in the same way they would map to physical servers.

The Flexibility of Virtualization


VM Mobility Across Physical Server Boundaries While Keeping Services

[Diagram: Virtual SANs, Virtual LANs and Virtual Services across the Access, Logic and Information Layer service chains, with VM Mobility between physical servers]

VM Mobility is capable of moving Virtual Machines across physical server boundaries.

Close interaction is required between the assets provisioning the virtualized infrastructure and the Application Services supporting the Virtual Machines.

The Application Services provided by the Network need to respond and be aligned to meet the new geometry of the VMs.

Moving to a Unified Fabric


Moving to a fully Virtualized Data Center, with Any To Any Connectivity
[Diagram: Unified Fabric Networking connecting Virtual SANs, Virtual LANs and Virtual Services with a common management layer]

Fully unified I/O delivers the following characteristics: ultra-high capacity (10Gbps+), low latency, loss-free operation (FCoE).

True any-to-any connectivity is possible as all devices are connected to all other devices.

We can now simplify management and operations and enhance power and cooling efficiencies.

Network Virtualization Building Blocks


Device Partitioning (1:n): VDCs, FW/ACE contexts, VLANs, VRFs

Virtualized Interconnect (n:m): L3 VPNs (MPLS VPNs, GRE, VRF-Lite, etc.), L2 VPNs (AToM, Unified I/O, VLAN trunks, PW, etc.)

Device Pooling (n:1): VSS, StackWise, VBS, Virtual Port Channel (vPC), HSRP/GLBP

Virtualized Data Center Infrastructure


Link legend: Gigabit Ethernet; 10 Gigabit Ethernet; 10 Gigabit DCE; 4/8Gb Fibre Channel; 10 Gigabit FCoE/DCE

DC Core: Nexus 7000 10GbE Core; IP+MPLS WAN Agg Router (WAN)
DC Aggregation: Nexus 7000 10GbE Agg; Cisco Catalyst 6500 DC Services; Cisco Catalyst 6500 10GbE VSS Agg with DC Services
Storage Core: SAN A/B on MDS 9500; FC storage; MDS 9500 Storage
DC Access:
1GbE server access: Cisco Catalyst 6500 End-of-Row; Nexus 5000 & Nexus 2000 Rack; CBS 3xxx Blade
10GbE and 4/8Gb FC server access: Nexus 7000 End-of-Row
10GbE and 4Gb FC server access: Nexus 5000 Rack
10Gb FCoE server access: CBS 3xxx, MDS 9124e, Nexus blade (*)
(*) future

Front-End
Virtualization

Virtual Device Contexts at Nexus 7000


VDC Architecture
Virtual Device Contexts Provide Virtualization at the Device Level, Allowing
Multiple Instances of the Device to Operate on the Same Physical Switch at
the Same Time

[Diagram: each VDC (VDC1 through VDCn) runs its own independent instances of L2 protocols (VLAN Mgr, UDLD, LACP, CTS, IGMP, 802.1x) and L3 protocols (OSPF, BGP, EIGRP, PIM, GLBP, HSRP, VRRP, SNMP), plus its own RIB and protocol stack (IPv4/IPv6/L2), on top of the shared infrastructure and kernel of the Nexus 7000 physical switch]

Virtual Device Contexts


VDC Fault Domain
A VDC Builds a Fault Domain Around All Running Processes Within That VDC.
Should a Fault Occur in a Running Process, It Is Truly Isolated from
Other Running Processes and They Will Not Be Impacted

[Diagram: VDC A and VDC B each run their own protocol stack and processes (ABC, DEF, XYZ) on the shared infrastructure and kernel of the Nexus 7000 physical switch. If process DEF in VDC B crashes, process DEF in VDC A is not affected and will continue to run unimpeded]

Virtual Device Contexts


VDC and Interface Allocation
Ports are assigned on a per-VDC basis and cannot be shared across VDCs.
Once a port has been assigned to a VDC, all subsequent configuration is done from within that VDC.
[Diagram: the ports of a 32-port 10GE module allocated among VDC A, VDC B and VDC C]
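As a rough illustration of this model, the sketch below shows how a VDC might be created and interfaces allocated to it from the default VDC in NX-OS; the VDC name and interface range are hypothetical.

! From the default VDC (admin), create a VDC and hand it a block of ports
vdc Agg-1
  allocate interface ethernet 2/1-8
! Move into the new VDC; all further configuration of e2/1-8 happens here
switchto vdc Agg-1
! Verify which ports belong to which VDC
show vdc membership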

VDC Use Case Examples


Security Partitioning
Some Infosec departments are still reluctant to adopt collapsed infrastructure
Concerns around change management
Infrastructure misconfiguration could bypass
policies

Appliance Model

Service Module Model

Ideally they want physically separate infrastructure, but this is not cost effective in larger deployments.

Firewall
Outside

Inside

VDCs provide logical separation simulating an air gap
Extremely low possibility of configuration bypassing the security path; it must be physically bypassed
The model can be applied to any DC services

Outside
VDC

Firewall
VDC

Inside

VDC Use Case Examples


Horizontal Consolidation
Preface: Lead with separate physical boxes as they provide the
most scalable solution. VDCs are useful in certain situations!
Objective: Consolidate lateral infrastructure that delivers similar
roles for separate operational or administrative domains.
Benefits: Reduced power and space requirements, can maximize
density of the platform, easy migration to physical separation for
future growth
Considerations: Number of VDCs is limited (4); four VDCs != four CPUs.
Does not significantly reduce cabling or interfaces needed.
[Diagram: before, Admin Group 1 and Admin Group 2 each have their own aggregation devices (agg1/agg2 and agg3/agg4) below shared core devices (core 1, core 2), serving access switches acc1, acc2, accN, accY. After, a single pair of physical aggregation switches hosts agg VDC 1 (Admin Group 1) and agg VDC 2 (Admin Group 2)]

VDC Use Case Examples


Vertical Consolidation
Preface: Lead with separate physical boxes as they provide the most
scalable solution.
Large Three Tier designs should remain physical.
Smaller Two Tier designs can leverage VDCs for common logical
design with three tier.
Objective: Consolidate vertical infrastructure that delivers orthogonal roles
to the same administrative or operational domain.
Benefits: Reduced power and space requirements, can maximize density
of the platform, provides smooth growth path, easy migration to physical
separation in future
Considerations: Number of VDCs is limited (4); four VDCs != four CPUs.
Intra-Nexus 7000 cabling is needed for connectivity between layers.
[Diagram: before, separate core devices (core 1, core 2) and aggregation devices (agg3, agg4) sit above access switches accN and accY. After, each physical Nexus 7000 hosts a core VDC and an agg VDC, serving the same access switches]

Core
Virtualization

Virtual Port-Channel (vPC)


Feature Overview

Allows a single device to use a port channel across two upstream switches
Separate physical switches with independent control and data planes
Eliminates STP blocked ports; uses all available uplink bandwidth
Dual-homed servers operate in active-active mode
Provides fast convergence upon link/device failure
Available in NX-OS 4.1 for Nexus 7000; Nexus 5000 availability planned for CY09

Logical Topology without vPC vs. Logical Topology with vPC

Multi-level vPC
[Diagram: physical and logical views; vPC peers SW1/SW2 (joined by a vPC peer link and FT link) connect to vPC peers SW3/SW4 (also joined by a peer link and FT link)]
Up to 16 links between both sets of switches: 4 ports each from SW1-SW3, SW1-SW4, SW2-SW3, SW2-SW4
Provides maximum non-blocking bandwidth between sets of switch peers
Is not limited to one layer; can be extended as needed
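To make the building blocks concrete, here is a minimal NX-OS sketch of one vPC peer; the domain ID, keepalive addresses and port-channel numbers are hypothetical, and the mirror-image configuration would be applied on the other peer.

feature vpc
feature lacp
vpc domain 10
  peer-keepalive destination 192.0.2.2 source 192.0.2.1
! Port-channel toward the other vPC peer
interface port-channel 1
  switchport mode trunk
  vpc peer-link
! Port-channel toward a downstream switch or dual-homed server
interface port-channel 20
  switchport mode trunk
  vpc 20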

Aggregation
Virtualization

Aggregation Services Design Options


[Reference topology as before: Nexus 7000 10GbE core and aggregation, Catalyst 6500 DC services, MDS 9500 storage core, and the 1GbE / 10GbE / FCoE server access options]

Aggregation service attachment options highlighted:
Service modules embedded in the Catalyst 6500 aggregation / DC services chassis
One-arm service switches hanging off the aggregation layer

Virtual Switch System (VSS)


Concepts
Virtual Switch System Is a Technology Breakthrough for the
Cisco Catalyst 6500 Family

EtherChannel Concepts: Multichassis EtherChannel (MEC)
[Diagram: the two VSS member chassis appear as one Virtual Switch]
LACP, PAgP, or ON EtherChannel modes are supported
Regular EtherChannel on a single chassis vs. Multichassis EtherChannel (MEC) across two VSL-enabled chassis
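A rough sketch of how a VSS pair is formed on the Catalyst 6500 (domain number, switch number and VSL port-channel are hypothetical); the same steps, with switch 2 and its own VSL port-channel, are repeated on the peer chassis before conversion.

! On chassis 1
switch virtual domain 100
  switch 1
interface port-channel 1
  switch virtual link 1
  no shutdown
! Convert the chassis to virtual switch mode (reloads the chassis)
switch convert mode virtual
! A Multichassis EtherChannel is then simply a port-channel whose member
! links terminate on both chassis of the virtual switch.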

ACE Module: Virtual Partitioning

One Physical Device, Multiple Virtual Systems (Dedicated Control and Data Path)

Traditional Device (100% of resources in one system):
Single configuration file
Single routing table
Limited RBAC
Limited resource allocation

Virtualized ACE (resources carved per context, e.g. 25% / 25% / 15% / 15% / 20%):
Cisco Application Infrastructure Control

Distinct context configuration files


Separate routing tables
RBAC with contexts,
roles, domains
Management and data
resource control
Independent application rule sets
Global administration and
monitoring
Supports routed and bridged
contexts at the same time
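As a loose illustration of this partitioning (context name, VLANs and percentages are hypothetical), an ACE admin context might carve resources and create a virtual device like this:

! Define a resource class that guarantees a slice of the module
resource-class GOLD
  limit-resource all minimum 25.00 maximum equal-to-min
! Create a context, give it VLAN interfaces and attach the resource class
context BU-1
  allocate-interface vlan 105
  allocate-interface vlan 205
  member GOLD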

Firewall Service Module (FWSM)


Virtual Firewalls
[Diagram: two Catalyst 6500 examples with MSFC and FWSM toward the Core/Internet. Left: three virtual firewalls (VFW A, B, C), each between its own outside VLAN (10, 20, 30) and inside VLAN (11, 21, 31). Right: the three virtual firewalls share outside VLAN 10, with inside VLANs 11, 21 and 31]

e.g., three customers, three security contexts; scales up to 250
VLANs can be shared if needed (VLAN 10 in the right-hand side example)
Each context has its own policies (NAT, access lists, inspection engines, etc.)
FWSM supports routed (Layer 3) and transparent (Layer 2) virtual firewalls at the same time
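For orientation, a minimal sketch of how such contexts are declared from the FWSM system execution space (context name, VLANs and config URL are hypothetical):

mode multiple
! One context per customer; each is handed its outside/inside VLAN interfaces
context CustomerA
  allocate-interface vlan10
  allocate-interface vlan11
  config-url disk:/CustomerA.cfg
! Inside the context itself, 'firewall transparent' would make it a Layer 2 firewall.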

Data Center Virtualized Services


Combination Example

[Diagram: per business unit (BU-1 through BU-4), traffic flows from a front-end VRF on the MSFC through a VLAN (v5-v8) to a Firewall Module context (v105-v108), then through an ACE Module context (v206-v208) to a back-end VRF on the MSFC, and finally to the server-side VLANs (e.g. v2081, v2082, v2083)]
* vX = VLAN X
**BU = Business Unit
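A small IOS sketch of the VRF piece of such a chain on the MSFC (VRF names, VLANs and addresses are hypothetical); the firewall and ACE contexts sit on the VLANs between the front-end and back-end VRFs:

ip vrf BU1-front
 rd 65000:105
ip vrf BU1-back
 rd 65000:205
interface Vlan5
 ip vrf forwarding BU1-front
 ip address 10.1.5.1 255.255.255.0
interface Vlan208
 ip vrf forwarding BU1-back
 ip address 10.1.208.1 255.255.255.0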

VSS with ACE and FWSM Modules


Active / Standby Pair
[Diagram: a Virtual Switch System (VSS) pair. Switch-1 (VSS Active): control plane active, data plane active, ACE active, FWSM standby. Switch-2 (VSS Standby): control plane hot standby, data plane active, ACE standby, FWSM active. The chassis are joined by the VSL, and a failover/state-sync VLAN carries service-module state between the ACE and FWSM pairs]

Combining vPC with VSS for Services


Services can be
attached using EtherChannel
Appliance based
Services-chassis based
(standalone or VSS)
Nexus 7000 with vPC

ACE
Appliance

ASA

NAM
Appliance

vPC

VSS

Services
Chassis

Access Layer
Virtualization

Data Center Access Layer Options


Top of Rack (ToR)
Typically 1-RU servers
1-2 GE LOMs
Mostly 1, sometimes 2 ToR switches
Copper cabling stays within rack
Low copper density in ToR
Higher chance of East-West traffic hitting
aggregation layer
Drives higher STP logical port count for
aggregation layer
Denser server count

Middle of Row (MoR) (or End of Row)


May be 1-RU or multi-RU servers
Multiple GE or 10GE NICs
Horizontal copper cabling for servers
High copper cable density in MoR
Larger portion of East-West traffic stays
in access
Larger subnets, less address waste
Keeps agg. STP logical port count low
(more EtherChannels, fewer trunk ports)
Lower # of network devices to manage

Middle of Row (MoR) (or End of Row)

Virtual Switch (Nexus 7000 or Catalyst 6500)
Catalyst 6500 (VSS and MEC): many-to-1 virtualization, service modules, single control plane
Nexus 7000 (VDC and vPC): 1-to-many virtualization, high density (10/100/1000 & 10GE), distinct control planes while virtualized

ToR @ 1GE: Nexus 2000, the Nexus 5000 virtual linecard

Nexus 2000 combines benefits of both ToR and EoR architectures:
Physically resides at the top of each rack, but logically acts like an end-of-row access device
Nexus 2000 deployment benefits:
Reduces cable runs
Reduces management points
Ensures feature consistency across hundreds of servers
Enables the Nexus 5000 to become a high-density 1GE access layer switch
VN-Link capabilities

Nexus 2000 (Fabric Extender - FEX)
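As a rough sketch of how a fabric extender is bound to its parent (FEX number, description and interface are hypothetical), the Nexus 5000 side looks roughly like this; the Nexus 2000 itself has no local configuration:

feature fex
fex 100
  description Rack1-FEX
  pinning max-links 1
! Fabric uplink from the Nexus 5000 toward the FEX
interface ethernet 1/1
  switchport mode fex-fabric
  fex associate 100
! Server-facing ports then appear on the parent as ethernet 100/1/1, 100/1/2, ...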

Nexus 2000 implementation example

Physical Topology vs. Logical Topology
[Diagram: physically, each rack (Rack-1 through Rack-N) holds Nexus 2000 (N2K) fabric extenders with 4x 10G uplinks to a pair of Nexus 5020s; above them sit the aggregation layer (VSS, L3/L2 boundary) and the core layer. Logically, the 12 Nexus 2000s behave as line cards of the Nexus 5020s, giving a central point of management for all rack servers]

Blades: Cisco Virtual Blade Switching (VBS)


Up to 8 switches act as a single VBS switch
Distributed L2/MAC learning
Centralized L3 learning

Each switch consists of:
Switch fabric
Port ASICs (downlink & uplink ports)

One master switch per VBS
1:N resiliency for the master
L2/L3 reconvergence is sub-200 msec
High-speed VBS cable (64 Gbps)

Example Deployment:
16 servers per enclosure X
2 GE ports per server X
4 enclosures per rack = 128GE
2 x 10GE uplinks = 20GE
128GE / 20GE = 6.4:1 oversubscription

Cisco Catalyst Virtual Blade Switch (VBS)


with Non-vPC Aggregation
Access Layer (Virtual Blade Switch)

Single Switch /
Node (for
Spanning Tree or
Layer 3 or
Management)

Spanning-Tree Blocking

Aggregation Layer

Cisco Catalyst Virtual Blade Switch (VBS)


with Nexus vPC Aggregation
Access Layer (Virtual Blade Switch)

Single Switch /
Node (for
Spanning Tree
or Layer 3 or
Management)

All Links Forwarding

Aggregation Layer
Nexus vPC


Server
Virtualization

VMware ESX 3.x Networking Components


Per ESX Server Configuration

[Diagram: per-ESX-server configuration. VMs (e.g. VM_LUN_0005, VM_LUN_0007) attach through their vNICs to virtual ports on vSwitch0; the vSwitch uses the physical adapters (vmnic0, vmnic1) as uplinks (VMNICs = uplinks)]

Cisco VN-Link

VN-Link (or Virtual Network Link) is a term which describes a new set of features and capabilities that enable VM interfaces to be individually identified, configured, monitored, migrated and diagnosed.

The term literally refers to a VM-specific link that is created between the VM and a Cisco switch. It is the logical equivalent and combination of a NIC, a Cisco switch interface and the RJ-45 patch cable that hooks them together.

VN-Link requires platform support for Port Profiles, Virtual Ethernet Interfaces, vCenter Integration, and Virtual Ethernet mobility.

[Diagram: VNICs in the VM connect through the hypervisor to virtual Ethernet (VETH) interfaces on the switch]

Server Virtualization & VN-Link


VN-Link Brings VM Level Granularity

VMotion

Problems:
VMotion may move VMs across physical ports; policy must follow
Impossible to view or apply policy to locally switched traffic
Cannot correlate traffic on physical links from multiple VMs

VLAN
101

VN-Link:
Extends network to the VM
Consistent services
Coordinated, coherent
management

VN-Link With the Cisco Nexus 1000V


Cisco Nexus 1000V
Software Based

Server

Industry's first third-party ESX switch

Built on Cisco NX-OS

Compatible with switching platforms

Maintain the vCenter provisioning model unmodified for server administration, but also allow network administration of the Nexus 1000V via the familiar Cisco NX-OS CLI
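To give a feel for that CLI model, here is a minimal sketch of the kind of port profile a network administrator might define on the Nexus 1000V and publish to vCenter as a port group; the profile name and VLAN are hypothetical:

port-profile type vethernet WebVMs
  vmware port-group
  switchport mode access
  switchport access vlan 101
  no shutdown
  state enabled
! vCenter then shows 'WebVMs' as a port group that the server admin can
! attach to VM vNICs; the policy follows the VM during VMotion.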

VM
#1

VM
#2

VM
#3

VM
#4

Nexus 1000V

VMW ESX
NIC

NIC

Nexus
1000V

LAN

Announced 09/2008; shipping H1 CY09

Policy-Based
VM Connectivity

Mobility of Network
and Security Properties

Non-Disruptive
Operational Model

VN-Link with
Network Interface Virtualization (NIV)
Nexus Switch with VN-Link
Hardware Based
Server

Allows scalable hardware-based


implementations through hardware
switches

Standards-based initiative: Cisco &


VMware proposal in IEEE 802 to specify
Network Interface Virtualization

Combines VM and physical network


operations into one managed node

Future availability

VM
#1

VM
#2

VM
#3

VM
#4

VMW ESX
VN-Link

Nexus

http://www.ieee802.org/1/files/public/docs2008/new-dcbpelissier-NIC-Virtualization-0908.pdf

Policy-Based
VM Connectivity

Mobility of Network
and Security Properties

Non-Disruptive
Operational Model

Cisco Nexus 1000V


Industry's First 3rd-Party Distributed Virtual Switch
[Diagram: Server 1 and Server 2 each run VMware ESX hosting VMs #1 through #8; the per-host VMware vSwitches are replaced by the Nexus 1000V, which spans both hosts as a single distributed virtual switch (DVS)]

Nexus 1000V provides


enhanced VM switching
for VMware ESX
Features Cisco VN-Link:
Policy Based VM Connectivity
Mobility of Network & Security
Properties
Non-Disruptive Operational
Model

Ensures proper visibility


& connectivity during
VMotion

Enabling Acceleration of Server Virtualization Benefits

Cisco Nexus 1000V Architecture

[Diagram: Servers 1, 2 and 3 run VMware ESX hosting VMs #1 through #12; a Virtual Ethernet Module (VEM) on each host replaces the VMware vSwitch, and together the VEMs form one Nexus 1000V DVS managed by the Virtual Supervisor Module (VSM), which integrates with vCenter]

Virtual Ethernet Module (VEM):
Enables advanced networking capability on the hypervisor
Provides each VM with a dedicated switch port
Collection of VEMs = 1 DVS

Virtual Supervisor Module (VSM):
Virtual or physical appliance running Cisco NX-OS (supports HA)
Performs management, monitoring, & configuration
Tight integration with VMware vCenter

Cisco Nexus 1000V enables:
Policy-Based VM Connectivity
Mobility of Network & Security Properties
Non-Disruptive Operational Model

Back-End
Virtualization

End-to-End Back-End Virtualization

[Diagram: virtual servers with virtual HBAs (FCoE CNAs) connect over virtual fabrics / unified I/O to pools of virtualized storage resources, segmented into OLTP, Backup and Email VSANs]

Increases flexibility and agility
Simplifies management
Reduces TCO
Optimizes resource utilization

Virtual Storage Area Network (VSAN) Deployment


Consolidation of SAN islands
Increased utilization of fabric ports with just-in-time provisioning
Deployment of large fabrics
Dividing a large fabric into smaller VSANs:
Disruptive events isolated per VSAN
RBAC for administrative tasks
Zoning is independent per VSAN
Advanced traffic management:
Defining the paths for each VSAN
VSANs may share the same EISL; cost effective on WAN links
Resilient SAN extension
Standard solution (ANSI T11 FC-FS-2 section 10)

[Diagram: separate SAN islands for Department A, B and C consolidated as Virtual SANs (VSANs) on a shared physical fabric]

VSAN Technology
The Virtual SANs feature consists of two primary functions:

Hardware-based isolation of tagged traffic belonging to different VSANs
Creation of an independent instance of Fibre Channel services (e.g. name server, zoning) for each newly created VSAN

[Diagram: Cisco MDS 9000 family with VSAN service. A VSAN header is added at the ingress point indicating membership, carried across an Enhanced ISL (EISL) trunk between Trunking E_Ports (TE_Ports) together with traffic from multiple VSANs, and removed at the egress point; no special support is required by end nodes. Each VSAN (blue, red) has its own instance of Fibre Channel services]
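A minimal MDS-style sketch of the two functions above (VSAN numbers, names and interfaces are hypothetical): creating VSANs, assigning member ports, and trunking them over an EISL.

vsan database
  vsan 10 name OLTP
  vsan 20 name Email
  vsan 10 interface fc1/1
  vsan 20 interface fc1/2
! Inter-switch link carrying tagged traffic for both VSANs (EISL / TE_Port)
interface fc2/1
  switchport mode E
  switchport trunk mode on
  switchport trunk allowed vsan 10
  switchport trunk allowed vsan add 20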

N-Port ID Virtualization (NPIV)


Application Server
Mechanism to assign multiple N_Port_IDs to a single N_Port
Allows access control, zoning and port security (PSM) to be implemented at the application level
Multiple N_Port_IDs are, so far, allocated in the same VSAN

[Diagram: an application server hosting E-Mail, Web and File Services obtains three N_Port IDs (ID-1, ID-2, ID-3) on its F_Port attachments, placing each application in its own VSAN: E-Mail in VSAN_3, Web in VSAN_2, File and Print in VSAN_1]

NPIV Usage Examples


Intelligent Pass-Thru and Virtual Machine Aggregation
[Diagram: left: an NPV edge switch aggregates multiple FC servers and connects upstream through an NP_Port to an F_Port on the core switch, behaving like an NPIV-enabled HBA. Right: a virtualized server with an NPIV-enabled HBA aggregates several virtual machines behind one F_Port]

Virtual Servers Share a Physical HBA


A zone includes the physical HBA and the storage array
Access control is delegated to storage-array LUN masking and mapping; it is based on the physical HBA pWWN and is the same for all VMs
The hypervisor is in charge of the mapping; errors may be disastrous

[Diagram: virtual servers on a hypervisor share one physical HBA (pWWN-P); a single zone on the MDS 9000 contains pWWN-P and the storage array, which applies LUN mapping and masking to pWWN-P; a single login on a single point-to-point connection is registered in the FC name server]

Virtual Server Using NPIV and Storage Device Mapping
Virtual HBAs can be zoned individually
LUN masking and mapping is based on the virtual HBA pWWN of each VM
Very safe with respect to configuration errors
Only supports RDM
Available in ESX 3.5

[Diagram: each VM presents its own virtual HBA pWWN (pWWN-1 through pWWN-4) behind the physical HBA pWWN-P; multiple logins occur on a single point-to-point connection, and the FC name server registers pWWN-P plus pWWN-1 through pWWN-4. The MDS 9000 and the storage array map each virtual pWWN to its own LUN (to pWWN-1, to pWWN-2, ...)]
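Because each virtual HBA has its own pWWN, it can be zoned on its own. A minimal MDS-style sketch (VSAN, zone names and pWWNs are hypothetical):

zone name VM1-to-Array vsan 10
  member pwwn 20:01:00:0d:ec:aa:bb:01
  member pwwn 50:06:01:60:aa:bb:cc:01
zoneset name Fabric-A vsan 10
  member VM1-to-Array
zoneset activate name Fabric-A vsan 10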

VMotion LUN Migration without NPIV

[Diagram: three ESX hosts, each running VM1-VM3, attach with standard HBAs (one WWPN per host) to an MDS 1/2 Gbps FC module]
All configuration parameters are based on the World Wide Port Name (WWPN) of the physical HBA
All LUNs must be exposed to every server to ensure disk access during live migration (single zone)

VMotion LUN Migration with NPIV

[Diagram: an ESX host running VM1-VM3 with an NPIV-capable HBA presenting WWPN1, WWPN2 and WWPN3, one per VM, toward the MDS 1/2 Gbps FC module]
Centralized management of VMs and resources
No need to reconfigure zoning or LUN masking
Dynamically re-provision VMs without impact to existing infrastructure
Redeploy VMs and support live migration
Only supports RDM!

NPIV Usage Examples

Intelligent Pass-Thru
[Diagram repeated from earlier: an NPV edge switch connects its NP_Port to a core F_Port and acts as an NPIV host on behalf of its attached FC servers]

Blade Switch/Top-of-Rack
Domain ID Explosion

Domain IDs are used for addressing, routing, and access control
One domain ID per SAN switch
Theoretically 239 domain IDs; in practice far fewer are supported
Limits SAN fabric scalability

[Diagram: blade switches and top-of-rack switches each consume a domain ID, increasing domain IDs and fabrics; MDS 9500 core with Tier 1, Tier 2 and tape-farm storage; theoretical maximum of 239 domain IDs per SAN]

Cisco MDS Network Port Virtualization (NPV)

Eliminates the edge-switch domain ID
Edge switch acts as an NPIV host
Simplifies server and SAN management and operations
Increases fabric scalability

[Diagram: blade switches running NPV do not use domain IDs; the MDS 9500 core supports up to 100 edge switches, each acting as an NPIV host, in front of Tier 1, Tier 2 and tape-farm storage]
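A rough sketch of how this is turned on (interfaces and platform details omitted): NPIV is enabled on the core switch so one F_Port can accept many logins, and NPV mode is enabled on the edge/blade switch, which then logs into the core like a host.

! On the core MDS director
feature npiv
! On the edge or blade switch (note: enabling NPV mode reloads the switch
! and erases its configuration)
feature npv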

Flex Attach (Virtual PWWN)


Assign a virtual pWWN to an NPV switch port
Zone the vPWWN to storage; LUN masking is done on the vPWWN
Reduces operational overhead:
Enables server or physical HBA replacement
No need for zoning modification
No LUN masking change
Automatic link to the new pWWN; no manual re-linking is needed

[Diagram: Before: port FC1/1 presents vPWWN1 on behalf of the server's PWWN1, and the fabric/storage see vpwwn1 zoned with pwwnX. After the server or HBA is replaced: FC1/1 presents the same vPWWN1 on behalf of the new PWWN2, so zoning and LUN masking remain unchanged]

Storage Volume Virtualization

Initiator

Target

Initiator

Target

SAN
Fabric

Adding more storage requires administrative changes


Administrative overhead, prone to errors
Complex coordination of data movement between
arrays

Storage Volume Virtualization


[Diagram: host initiators in VSAN_10 and VSAN_20 see Virtual Target 1 (VSAN_10) and Virtual Target 2 (VSAN_20), which front Virtual Volume 1 and Virtual Volume 2; virtual initiators in VSAN_30 carry the I/O across the SAN fabric to the back-end arrays]

A SCSI operation from the host is mapped into one or more SCSI operations to the SAN-attached storage
Zoning connects the real initiator with the virtual target, or the virtual initiator with the real storage
Works across heterogeneous arrays

Sample Use: Seamless Data Mobility


[Diagram: as above, with the virtual initiators in VSAN_30 presenting Virtual Volume 1 and Virtual Volume 2 while the data resides on two Tier_2 arrays, so data can be moved between arrays behind the virtual targets]

Works across heterogeneous arrays


Nondisruptive to application host
Can be utilized for end-of-lease storage migration
Movement of data from one storage tier to another

Your session feedback is valuable


Please take the time to complete the
breakout evaluation form and hand it
to the member of staff by the door on
your way out
Thank you!

Recommended Reading
