NPIV and the IBM Virtual I/O Server (VIOS)

October 2008



NPIV Overview
► N_Port ID Virtualization (NPIV) is a Fibre Channel industry-standard
method for virtualizing a physical Fibre Channel port.
► NPIV allows one F_Port to be associated with multiple N_Port IDs,
so a physical Fibre Channel HBA can be shared across multiple guest
operating systems in a virtual environment.
► On POWER, NPIV allows logical partitions (LPARs) to have
dedicated N_Port IDs, giving the OS a unique identity to the SAN,
just as if it had dedicated physical HBAs.
NPIV specifics
 PowerVM VIOS 2.1 - GA Nov 14
 NPIV support now has a planned GA of Dec 19
 Required software levels (a quick check is sketched after this list)
– VIOS Fix Pack 20.1
– AIX 5.3 TL9 SP2
– AIX 6.1 TL2 SP2
– HMC 7.3.4
– FW Ex340_036
– Linux and IBM i planned for 2009
 Required HW
– POWER6 520, 550, 560, 570 only at this time; Blade planned for 2009
– 5735 PCIe 8Gb Fibre Channel Adapter
 Unique WWPN generation (allocated in pairs)
 Each virtual FC HBA has a unique and persistent identity
 Compatible with LPM (Live Partition Mobility)
 VIOS can support NPIV and vSCSI simultaneously
 Each physical NPIV-capable FC HBA will support 64 virtual ports
 HMC-managed and IVM-managed servers
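A quick way to check the software prerequisites above from the command line; a minimal sketch, assuming the padmin shell on the VIOS, a root shell on the AIX client, and the HMC restricted shell (host and adapter names are illustrative):

# On the VIOS (padmin shell)
$ ioslevel                  # expect 2.1.0.0 or later, with Fix Pack 20.1 applied
$ lsdev -dev fcs* -vpd      # confirm the 8Gb #5735 adapters are present

# On the AIX client (root shell)
# oslevel -s                # expect 5300-09-02 or 6100-02-02 (or later)

# On the HMC (restricted shell)
# lshmc -V                  # expect V7R3.4.0 or later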
Storage Virtualisation With NPIV
[Diagram: two VIOS 2.1 configurations side by side. With vSCSI, the VIO client sees a
generic SCSI disk, the VIOS acts as the storage virtualiser, and the VIOS admin is in
charge of provisioning. With NPIV, the VIOS runs in storage pass-through mode, the VIO
client sees the real EMC 5000 / IBM 2105 LUNs over virtual FC adapters, and the SAN
admin is back in charge. Note: the client SCSI path code differs (SAS/vSCSI vs. FC).]
NPIV - What you need?
 HMC 7.3.4
 POWER6 only
 Client OS: AIX 5.3 TL09, AIX 6.1 TL02, SLES 10 SP2, RHEL 4.7, RHEL 5.2
 New EL340 firmware (disruptive)
 VIOS 2.1
 Supports SCSI-2 reserve/release and SCSI-3 persistent reserve
 New PCIe 8 Gbit Fibre Channel adapters (can run at 2 or 4 Gbit)
 SAN fabric must be NPIV capable; the entry SAN switch can be 2, 4 or 8 Gbit (not 1 Gbit)
 Disk sub-system does not need to be NPIV capable
[Diagram: VIO client with virtual FC adapters mapped through the VIOS 2.1 FC adapters
to EMC 5000, IBM 2105 and IBM 4700 LUNs on the SAN.]
NPIV - What you do?

1. HMC 7.3.4: configure
► Virtual FC Adapter
► Just like virtual SCSI
► On both the client and the Virtual I/O Server
(a command-line sketch follows below)
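The step above uses the HMC GUI. As a purely hypothetical alternative, the partition profile could be edited from the HMC command line with chsyscfg; the exact layout of the virtual_fc_adapters attribute is an assumption here and should be verified against the HMC 7.3.4 documentation (system, partition and slot values are made up):

# Hypothetical sketch: add a client virtual FC adapter in slot 4 of LPAR "aix01",
# paired with server adapter slot 3 on "vios1" (attribute field order is an assumption).
chsyscfg -r prof -m MySystem -i \
  'name=default,lpar_name=aix01,"virtual_fc_adapters=4/client/1/vios1/3//1"'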


NPIV - What you do? (continued)

2. Once created:
LPAR Config > Manage Profiles > Edit > click the FC Adapter > Properties,
and the WWPN is available
(it can also be read from the HMC command line; see the sketch below)
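The generated WWPN pair can also be read back from the HMC command line; a small sketch, assuming the HMC restricted shell and a hypothetical system/partition name (exact field names may vary by HMC level):

# Show the profile's virtual FC adapter definitions, including the WWPN pair
lssyscfg -r prof -m MySystem --filter "lpar_names=aix01" -F name,virtual_fc_adapters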
NPIV - What you do? (continued)
$ ioslevel
2.1.0.0
$ lsdev | grep FC
fcs0       Available   FC Adapter
fscsi0     Available   FC SCSI I/O Controller Protocol Device
vfchost0   Available   Virtual FC Server Adapter
$ vfcmap -vadapter vfchost0 -fcp fcs0
vfchost0 changed
$

3. VIOS: connect the virtual FC adapter to the physical FC adapter
► With vfcmap (as in the session above; a combined sketch follows below)
► lsmap -all -npiv
► lsnports shows the physical ports supporting NPIV
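Putting step 3 together on the VIOS, using only the commands above (the vfchost0/fcs0 names are illustrative):

$ lsnports                              # which physical ports are NPIV capable and how many virtual ports remain
$ vfcmap -vadapter vfchost0 -fcp fcs0   # map the virtual FC server adapter to a physical 8Gb port
$ lsmap -npiv -vadapter vfchost0        # verify this one mapping; the status changes once the client logs in
$ lsmap -all -npiv                      # overview of every NPIV mapping on this VIOS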
4. SAN zoning
 To allow the LPAR access to the LUN via the new WWPN
 Zone both WWPNs of the pair, including on any Partition Mobility target (an illustrative sketch follows below)
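The zoning syntax depends on the switch vendor; as one illustration only, a Brocade FOS style sketch that puts both generated WWPNs of the client adapter and a storage port into one zone (all zone names and WWPNs are made up):

# Zone BOTH WWPNs of the virtual FC client adapter - the second one is used on the
# Live Partition Mobility target system - together with the storage subsystem port.
zonecreate "aix01_npiv_z1", "c0:50:76:00:0a:fe:00:10; c0:50:76:00:0a:fe:00:11; 50:05:07:63:04:10:41:22"
cfgadd "prod_cfg", "aix01_npiv_z1"
cfgenable "prod_cfg"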
NPIV benefits
► NPIV allows storage administrators to use existing tools
and techniques for storage management
► Solutions such as SAN managers, Copy Services, and backup /
restore should work right out of the box
► Storage provisioning / ease-of-use
► Zoning / LUN masking
► Physical <-> virtual device compatibility
► Tape libraries
► SCSI-2 Reserve/Release and SCSI-3 Persistent Reserve
– clustered/distributed solutions
► Load balancing (active/active)
► Solutions enablement (HA, Oracle, …)
► Storage, multipathing, apps, monitoring, …
NPIV implementation
► Install the correct levels of VIOS, firmware, HMC, 8Gb HBAs,
and NPIV capable/enabled SAN and storage
► Virtual Fibre Channel adapters are created via the HMC
► The VIOS owns the server VFC, the client LPAR owns the
client VFC
► Server and client VFCs are mapped one-to-one with the
vfcmap command in the VIOS
► The POWER hypervisor generates WWPNs based on the range of names
available for use with the prefix in the vital product data on the managed
system (the generated WWPN can be checked from the client; see the sketch after this list)
► The hypervisor does not reuse the WWPNs that are assigned to the virtual
Fibre Channel client adapter on the client logical partition
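From the client side, the hypervisor-generated WWPN can be verified once the virtual FC adapter is configured; a small sketch on the AIX client (fcs0 is illustrative):

# On the AIX client LPAR
lsdev -l fcs0                               # should report a Virtual Fibre Channel Client Adapter
lscfg -vl fcs0 | grep -i "network address"  # the Network Address field is the active WWPN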
Things to consider
 A WWPN pair is generated EACH time you create a VFC. It is never re-created
or re-used - just like a real HBA.
 If you create a new VFC, you get a NEW pair of WWPNs.
 Save the partition profile with VFCs in it. Make a copy; don't delete a profile
with a VFC in it.
 Make sure the partition profile is backed up for local and disaster recovery
(see the sketch after this list). Otherwise you'll have to create new VFCs and
map to them during a recovery.
 The target storage SUBSYSTEM must be zoned and visible from both source and
destination systems for LPM to work.
 Active/passive storage controllers must BOTH be in the SAN zone for LPM
to work.
 Do NOT include the VIOS physical 8Gb adapter WWPNs in the zone.
 You should NOT see any NPIV LUNs in the VIOS.
 Load multi-path code in the client LPAR, NOT in the VIOS.
 Monitor VIOS CPU and memory - the NPIV impact is unclear to me at this time.
 No 'passthru' tunables in the VIOS.
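One way to cover the profile-backup point above is to save the HMC profile data on a schedule; a minimal sketch, assuming the HMC restricted shell and a hypothetical managed system name:

# Back up all partition profile data for "MySystem" (including virtual FC slots and
# their WWPN pairs) to a named file; restore later with rstprofdata if needed.
bkprofdata -m MySystem -f npiv_profiles_backup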
NPIV useful commands
 vfcmap -vadapter vfchostN -fcp fcsX
► maps the virtual FC adapter to the physical FC port
 vfcmap -vadapter vfchostN -fcp
► un-maps the virtual FC adapter from the physical FC port
 lsmap -all -npiv
► shows the mapping of virtual and physical adapters and their current status
► lsmap -npiv -vadapter vfchostN shows the same for one VFC
 lsdev -dev vfchost*
► lists all available virtual Fibre Channel server adapters
 lsdev -dev fcs*
► lists all available physical Fibre Channel adapters
 lsdev -dev fcs* -vpd
► shows all physical FC adapter properties
 lsnports
► shows the NPIV readiness of the Fibre Channel adapter and the SAN
switch
 lscfg -vl fcsX
► in an AIX client LPAR, shows the virtual Fibre Channel adapter properties
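Taken together, the commands above make a quick read-only NPIV health check on a VIOS; a minimal sketch:

$ lsnports               # NPIV capability and free virtual ports per physical port
$ lsdev -dev vfchost*    # all virtual FC server adapters
$ lsdev -dev fcs* -vpd   # physical FC adapter properties (including WWPNs)
$ lsmap -all -npiv       # virtual-to-physical mappings and client login status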
NPIV resources
► Redbooks:
SG24-7590-01 PowerVM Virtualization on IBM Power Systems (Volume 2):
Managing and Monitoring
SG24-7460-01 IBM PowerVM Live Partition Mobility
► VIOS latest info:
http://www14.software.ibm.com/webapp/set2/sas/f/vios/home.html
Questions
BACKUP VIOS SLIDES
#5735 PCIe 8Gb Fibre Channel Adapter

 Supported on 520, 550, 560, 570, 575

 Dual port adapter - each port provides a single initiator
► Automatically adjusts to SAN fabric speed: 8 Gbps, 4 Gbps, 2 Gbps
► LED on card indicates link speed
 Ports have LC type connectors
► Cables are the responsibility of the customer.
► Use multimode fibre optic cables with short-wave lasers:
– OM3 - multimode 50/125 micron fibre, 2000 MHz*km bandwidth
● 2Gb (0.5 – 500m) 4Gb (0.5 – 380m) 8Gb (0.5 – 150m)
– OM2 - multimode 50/125 micron fibre, 500 MHz*km bandwidth
● 2Gb (0.5 – 300m) 4Gb (0.5 – 150m) 8Gb (0.5 – 50m)
– OM1 - multimode 62.5/125 micron fibre, 200 MHz*km bandwidth
● 2Gb (0.5 – 150m) 4Gb (0.5 – 70m) 8Gb (0.5 – 21m)
Virtual SCSI
 Client LPAR (i.e. virtual machine) is the SCSI initiator; the VIOS
is the SCSI target
 Server LPAR owns the physical I/O resources
 Client LPAR sees standard SCSI devices and
accesses LUNs via a virtual SCSI adapter
 VIOS is a standard storage subsystem
 Transport layer is the interpartition communication channel
provided by PHYP (reliable message transport)
 SRP (SCSI Remote DMA Protocol)
 LRDMA (Logical Redirected DMA)


Virtual SCSI (continued)
 SCSI peripheral device types supported:
– Disk (backed by logical volume, physical volume, or file)
– Optical (backed by physical optical device, or file)
 Adapter and device sharing
 Multiple I/O Servers per system, typically deployed in pairs
 VSCSI client support:
– AIX 5.3 or later
– Linux (SLES 9+, RHEL 3 U3+, RHEL 4 or later)
– IBM i
 Boot from VSCSI devices
 Multi-pathing for VSCSI devices (a client-side check is sketched below)
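For the multi-pathing point above, the path state can be checked from the AIX vSCSI client; a small sketch using standard AIX MPIO commands (hdisk0 is illustrative):

# On the AIX client LPAR: list the disks and the MPIO paths behind hdisk0
lsdev -Cc disk            # vSCSI disks appear as "Virtual SCSI Disk Drive"
lspath -l hdisk0          # one path per vSCSI adapter / VIOS; each should be Enabled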


Basic vSCSI Client And Server Architecture Overview
[Diagram: one I/O Server and several I/O clients. The I/O Server owns the physical
HBA and storage and presents virtual server adapters; each I/O client has a virtual
client adapter. The PHYP connects the virtual server and client adapters.]
vSCSI vs. NPIV
[Diagram: side-by-side comparison. With vSCSI, the VIO clients see generic SCSI disks
served over SCSI by a pair of VIOS that own the FC HBAs and the SAN-attached EMC and
IBM 2105 LUNs. With NPIV, the VIO clients see the EMC and IBM 2105 devices directly
over FCP through the same pair of VIOS.]

In the vSCSI model for sharing storage resources, the VIOS is a storage virtualizer.
Heterogeneous storage is pooled by the VIOS into a homogeneous pool of block storage
and then allocated to client LPARs in the form of generic SCSI LUNs. The VIOS performs
SCSI emulation and acts as the SCSI target.

With NPIV, the VIOS's role is fundamentally different. The VIOS facilitates adapter
sharing only; there is no device-level abstraction or emulation. Rather than a storage
virtualizer, the VIOS serving NPIV is a passthru, providing an FCP connection from the
client to the SAN.
vSCSI
[Diagram: software stack for vSCSI. Each VIOS runs LVM, multipathing, a disk driver,
and the vSCSI target over physical fibre channel HBAs; the AIX client runs LVM,
multipathing, a disk driver, and a vSCSI HBA. The PHYP connects client and server
adapters, and the VIOS connect to the SAN.]
NPIV
[Diagram: software stack for NPIV. The AIX client runs LVM, multipathing, a disk
driver, and virtual FC (VFC) HBAs. Each VIOS runs only a passthru VFC module over
its physical fibre channel HBAs. The PHYP connects the client VFC HBAs to the VIOS,
and the VIOS connect to the SAN.]
NPIV – provisioning, managing, monitoring
[Diagram: an NPIV-enabled SAN connecting DS4000/DS6000/DS8000, SVC, a tape library,
HDS, EMC and NetApp storage to a POWER server. Each VIO client owns a vFC adapter
pair with its own WWPNs, mapped through two VIOS to the SAN.]
Live Partition Mobility (LPM) and NPIV
[Diagram: two POWER servers, each with two VIOS, attached to an NPIV-enabled SAN.
Each VIO client keeps its own WWPNs on both the source and the target system.]
• WWPNs are allocated in pairs
Heterogeneous multipathing
[Diagram: an AIX client with one virtual FC adapter mapped through the NPIV passthru
module and FC HBA of VIOS#1, and a second path through its own dedicated physical FC
HBA. Both paths run through SAN switches to the same storage controller LUNs (A-D).]



VIOS block diagram (vSCSI and NPIV) - POWER Server
[Diagram: inside the VIOS, NPIV ports go to the passthru module, while vSCSI devices
go to the block-virtualization stack (SCSI LUNs, filesystems, LVM, multi-pathing,
disk | optical). Physical adapters: FC/NPIV | SCSI | iSCSI | SAS | USB | SATA.
Virtual devices can be backed by a file, a logical volume, a pathing device, or a
physical peripheral device; virtual tape and physical storage are also shown. LPARs
attach to the vSCSI devices and the NPIV ports.]
vSCSI basics
[Diagram: POWER Server with a VIOS and client LPARs (AIX, Linux, or i5/OS) connected
by vSCSI emulation through the PHYP. The VIOS maps backing devices to virtual SCSI
devices: file-backed disks from a storage pool (/var/vios/storagepools/pool_name),
virtual optical media from the repository (/var/vios/VMLibrary), logical volumes from
a logical volume storage pool (/dev/VG_name), and physical devices (/dev) such as
hdisk10, a powerpath device, cd0 or sas0, reached over Fibre Channel, iSCSI, SAS,
SCSI, USB or SATA. NPIV maps a physical port (/dev/fscsi0) to a client WWPN. The VIOS
acts as the vSCSI target.]
Data flow using LRDMA for vSCSI devices
[Diagram: the vSCSI client's data buffer is transferred directly to the physical
adapter owned by the I/O server using LRDMA; only control traffic flows through the
vscsi initiator and vscsi target drivers via the PHYP.]
VSCSI redundancy using multipathing at the client
[Diagram: an AIX client with an MPIO disk driver and two vscsi initiators, each
connected through the PHYP to a vscsi target in a different I/O Server; both I/O
Servers attach to the same SAN.]
Direct attach fibre channel block diagram
[Diagram: an AIX partition with a generic disk driver and a fibre channel HBA device
driver owning the physical FC HBA (the SCSI initiator); data moves between the data
buffer and the adapter through the PHYP.]
NPIV block diagram
[Diagram: the AIX client runs the generic disk driver and the VFC client; the VIOS
runs the passthru module and the fibre channel HBA device driver for the physical
FC HBA (the SCSI initiator). Data moves from the client's data buffer to the adapter
through the PHYP.]
Testing VIOS
[Diagram: a System p/i (POWER5) server with a VIOS owning a physical fibre channel
HBA and serving LUNs A1-A8 over virtual SCSI to AIX and Linux logical partitions;
two further physical FC HBAs attach other partitions directly to the external storage
(e.g. DS8K) through the POWER Hypervisor. Available via the optional Advanced POWER
Virtualization or POWER Hypervisor and VIOS features.]
Questions

© 2008 IBM Corporation


Special notices
This document was developed for IBM offerings in the United States as of the date of publication. IBM may not make these offerings available in
other countries, and the information is subject to change without notice. Consult your local IBM business contact for information on the IBM
offerings available in your area.
Information in this document concerning non-IBM products was obtained from the suppliers of these products or other public sources. Questions
on the capabilities of non-IBM products should be addressed to the suppliers of those products.
IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give
you any license to these patents. Send license inquiries, in writing, to IBM Director of Licensing, IBM Corporation, New Castle Drive, Armonk, NY
10504-1785 USA.
All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives
only.
The information contained in this document has not been submitted to any formal IBM test and is provided "AS IS" with no warranties or
guarantees either expressed or implied.
All examples cited or described in this document are presented as illustrations of the manner in which some IBM products can be used and the
results that may be achieved. Actual environmental costs and performance characteristics will vary depending on individual client configurations
and conditions.
IBM Global Financing offerings are provided through IBM Credit Corporation in the United States and other IBM subsidiaries and divisions
worldwide to qualified commercial and government clients. Rates are based on a client's credit rating, financing terms, offering type, equipment
type and options, and may vary by country. Other restrictions may apply. Rates and offerings are subject to change, extension or withdrawal
without notice.
IBM is not responsible for printing errors in this document that result in pricing or information inaccuracies.
All prices shown are IBM's United States suggested list prices and are subject to change without notice; reseller prices may vary.
IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.
Any performance data contained in this document was determined in a controlled environment. Actual results may vary significantly and are
dependent on many factors including system hardware configuration and software design and configuration. Some measurements quoted in this
document may have been made on development-level systems. There is no guarantee these measurements will be the same on generally-
available systems. Some measurements quoted in this document may have been estimated through extrapolation. Users of this document
should verify the applicable data for their specific environment.

Revised September 26, 2006

© 2008 IBM Corporation


Special notices (cont.)
IBM, the IBM logo, ibm.com AIX, AIX (logo), AIX 6 (logo), AS/400, BladeCenter, Blue Gene, ClusterProven, DB2, ESCON, i5/OS, i5/OS (logo), IBM Business Partner
(logo), IntelliStation, LoadLeveler, Lotus, Lotus Notes, Notes, Operating System/400, OS/400, PartnerLink, PartnerWorld, PowerPC, pSeries, Rational, RISC
System/6000, RS/6000, THINK, Tivoli, Tivoli (logo), Tivoli Management Environment, WebSphere, xSeries, z/OS, zSeries, AIX 5L, Chiphopper, Chipkill, Cloudscape, DB2
Universal Database, DS4000, DS6000, DS8000, EnergyScale, Enterprise Workload Manager, General Purpose File System, , GPFS, HACMP, HACMP/6000, HASM, IBM
Systems Director Active Energy Manager, iSeries, Micro-Partitioning, POWER, PowerExecutive, PowerVM, PowerVM (logo), PowerHA, Power Architecture, Power
Everywhere, Power Family, POWER Hypervisor, Power Systems, Power Systems (logo), Power Systems Software, Power Systems Software (logo), POWER2,
POWER3, POWER4, POWER4+, POWER5, POWER5+, POWER6, POWER6+, System i, System p, System p5, System Storage, System z, Tivoli Enterprise, TME 10,
Workload Partitions Manager and X-Architecture are trademarks or registered trademarks of International Business Machines Corporation in the United States, other
countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols
indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law
trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml

The Power Architecture and Power.org wordmarks and the Power and Power.org logos and related marks are trademarks and service marks licensed by Power.org.
UNIX is a registered trademark of The Open Group in the United States, other countries or both.
Linux is a registered trademark of Linus Torvalds in the United States, other countries or both.
Microsoft, Windows and the Windows logo are registered trademarks of Microsoft Corporation in the United States, other countries or both.
Intel, Itanium, Pentium are registered trademarks and Xeon is a trademark of Intel Corporation or its subsidiaries in the United States, other countries or both.
AMD Opteron is a trademark of Advanced Micro Devices, Inc.
Java and all Java-based trademarks and logos are trademarks of Sun Microsystems, Inc. in the United States, other countries or both.
TPC-C and TPC-H are trademarks of the Transaction Performance Processing Council (TPPC).
SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPEC OMP, SPECviewperf, SPECapc, SPEChpc, SPECjvm, SPECmail, SPECimap and SPECsfs are
trademarks of the Standard Performance Evaluation Corp (SPEC).
NetBench is a registered trademark of Ziff Davis Media in the United States, other countries or both.
AltiVec is a trademark of Freescale Semiconductor, Inc.
Cell Broadband Engine is a trademark of Sony Computer Entertainment Inc.
InfiniBand, InfiniBand Trade Association and the InfiniBand design marks are trademarks and/or service marks of the InfiniBand Trade Association.
Other company, product and service names may be trademarks or service marks of others.

Revised April 24, 2008

© 2008 IBM Corporation
