
Advanced Virtual I/O Server Configurations

César Diniz Maciel, Consulting IT Specialist, IBM US

Agenda
- Virtual Optical Devices, Virtual Optical Media, File-backed Devices
- Shared Ethernet Adapter (SEA) Failover
- SEA over Host Ethernet Adapter (HEA)
- Live Partition Mobility configuration
- Virtual Tape
- N-Port ID Virtualization (NPIV)
- Heterogeneous Multipathing
- Active Memory Sharing

Progression of Virtual Storage Devices on VIOS


- The ability to share virtual SCSI disks backed by a Physical Volume (PV) or a Logical Volume (LV) has been available from the beginning.
- VIO Server 1.2 added the ability to share the CD-ROM drive with client LPARs through virtual optical devices.
- VIO Server 1.5 added the ability to create file-backed virtual devices, in addition to virtual SCSI devices backed by a PV or LV.
- Using the cpvdi command, a virtual device image can now be copied from one virtual target device (VTD) to a different VTD. This feature was added in VIO Server 1.5.2.1-FP-11.1.

Power Systems Virtual Optical Device


padmin user commands in the VIO server:

$ lsdev -type optical
name  status     description
cd0   Available  SATA DVD-ROM Drive
$ mkvdev -vdev cd0 -vadapter vhost0
vtopt0 Available
$ lsmap -vadapter vhost0
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- -------------------
vhost0          U9111.520.10C1C1C-V1-C13                     0x00000000

VTD             vtopt0
LUN             0x8100000000000000
Backing device  cd0
Physloc         U787A.001.DNZ00ZE-P4-D3

Power Systems Virtual Optical Device

Backing device (disk, lv, file or media)

Client connection - Option One

On the old client:

# lsdev -Cl cd0 -F parent
vscsi2
# rmdev -R vscsi2

On the new client:

# cfgmgr

Client Side - Virtual Optical Device


The first AIX client LPAR to activate will show the new vscsi adapter and cd0 Available:

# lsdev -Cs vscsi
cd0    Available  Virtual SCSI Optical Served by VIO Server
hdisk0 Available  Virtual SCSI Disk Drive
# lsdev -Cs vscsi -F "name physloc"
cd0    U9111.520.10C1C1C-V3-C2-T1-L810000000000
hdisk0 U9111.520.10C1C1C-V3-C31-T1-L810000000000
# lsdev -Cc adapter
ent0   Available  Virtual I/O Ethernet Adapter (l-lan)
vsa0   Available  LPAR Virtual Serial Adapter
vscsi0 Available  Virtual SCSI Client Adapter
vscsi1 Available  Virtual SCSI Client Adapter
vscsi2 Available  Virtual SCSI Client Adapter

Client Side - Virtual Optical Device


Subsequent AIX client LPARs activate, but only show the vscsi adapter Defined, and no optical device:

# lsdev -Cc adapter
ent0   Available  Virtual I/O Ethernet Adapter (l-lan)
vsa0   Available  LPAR Virtual Serial Adapter
vscsi0 Available  Virtual SCSI Client Adapter
vscsi1 Available  Virtual SCSI Client Adapter
vscsi2 Defined    Virtual SCSI Client Adapter

This client's adapter will NOT configure while another client is connected to the server adapter:

# cfgmgr -vl vscsi2
Method error (/usr/lib/methods/cfg_vclient -l vscsi2):
0514-040 Error initializing a device into the kernel.

Client Side - Virtual Optical Device


To release the optical device from the owning LPAR:

# lsdev -Cl cd0 -F parent
vscsi2
# rmdev -R vscsi2
cd0 Defined

Then run cfgmgr in the receiving LPAR.

Client connection - Option Two

Move it from the VIO server:

$ rmdev -dev vtopt0
$ mkvdev -vdev cd0 -vadapter vhost#

(where vhost# is the virtual SCSI server adapter for the target client partition)

Virtual Optical Media


- File-backed device that works like an optical device (think of it as an ISO image). With read-only virtual media, the same virtual optical device can be presented to multiple client partitions simultaneously.
- You can boot from it and install partitions remotely without having to swap physical CD/DVDs or set up a Network Installation Manager (NIM) server. It is also easier to boot a partition into maintenance mode to repair problems.
- Easier to maintain a complete library of all the software needed for the managed system: various software packages, as well as all the necessary software levels to support each partition.
- Client partitions can use blank file-backed virtual optical media for backup purposes (read/write devices). These file-backed optical devices can then be backed up from the VIO server to other types of media (tape, physical CD/DVD, TSM server, etc.).

Virtual Optical Media


Create an ISO file from the CD-ROM:

$ mkvopt -name dvd.AIX_6.1.iso -dev cd0 -ro

- You choose the name for this file, so make it meaningful.
- Creates an ISO image from the media in /dev/cd0.

After the .iso file is in your /var/vio/VMLibrary directory, run:

$ mkvdev -fbo -vadapter vhost#
vtopt0 Available

- Replace vhost# with your virtual SCSI server adapter name.
- This mkvdev command creates your virtual optical target device.

$ loadopt -vtd vtopt0 -disk dvd.AIX_6.1.iso

- The loadopt command loads vtopt0 with your ISO image.
- Replace dvd.AIX_6.1.iso with your meaningful filename.
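Media can later be ejected or swapped without removing the virtual target device; a small sketch reusing the names above (another_image.iso is a placeholder for any other image in the repository):

$ lsvopt
$ unloadopt -vtd vtopt0
$ loadopt -vtd vtopt0 -disk another_image.iso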

Converting between backing devices - cpvdi


New command added at VIO 1.5.2.1-FP-11.1

$ cpvdi -src input_disk_image -dst output_disk_image
        [-isp input_storage_pool] [-osp output_storage_pool]
        [-overwrite] [-unconfigure] [-f] [-progress]

The cpvdi command copies a block device image, which can be either a logical or physical volume, a file-backed device, or a file on another existing disk. This command is NOT used to move data between non-virtualized disks and virtualized disks.
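A hedged example, reusing the logical-volume backing device and file-backed storage pool names that appear on the following slides (all names illustrative):

$ cpvdi -src client2lv -dst client2.img -osp file-storage-pl -progress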

Starting from scratch on the VIO server


$ mksp lv-storage-pool hdisk4
lv-storage-pool
0516-1254 mkvg: Changing the PVID in the ODM.
$ lspv
NAME    PVID              VG               STATUS
hdisk0  00c23c9f9e9e1909  rootvg           active
hdisk1  00c23c9fa415621f  clientvg         active
hdisk2  00c23c9f2fbda0b4  clientvg         active
hdisk3  00c23c9ffbf3c991  None
hdisk4  00c23c9f20c41ad6  lv-storage-pool  active
$ mksp -fb file-storage-pl -sp lv-storage-pool -size 1G
file-storage-pl
File system created successfully.
1040148 kilobytes total disk space.
New File System size is 2097152
$ lssp
Pool             Size(mb)  Free(mb)  Alloc Size(mb)  BDs  Type
rootvg           69888     49408     128             0    LVPOOL
clientvg         139776    102912    64              3    LVPOOL
lv-storage-pool  69888     67840     64              1    LVPOOL
file-storage-pl  1016      1015      64              0    FBPOOL
$ df
Filesystem            512-blocks  Free     %Used  Iused  %Iused  Mounted on
/dev/hd4              524288      455864   14%    2293   5%      /
/dev/hd2              5242880     936112   83%    51854  32%     /usr
/dev/hd9var           1310720     1181296  10%    474    1%      /var
/dev/hd3              4718592     4558112  4%     384    1%      /tmp
/dev/hd1              20971520    9927544  53%    1374   1%      /home
/proc                 -           -        -      -      -       /proc
/dev/hd10opt          3407872     2538704  26%    10562  4%      /opt
/dev/file-storage-pl  2097152     2079792  1%     4      1%      /var/vio/storagepools/file-storage-pl

Creating a new Virtual Media Repository


$ mkrep -sp lv-storage-pool -size 500M
Virtual Media Repository Created
Repository created within "VMLibrary_LV" logical volume
$ lssp
Pool             Size(mb)  Free(mb)  Alloc Size(mb)  BDs  Type
rootvg           69888     49408     128             0    LVPOOL
clientvg         139776    102912    64              3    LVPOOL
lv-storage-pool  69888     67328     64              1    LVPOOL
file-storage-pl  1016      1005      64              1    FBPOOL
VMLibrary_LV     508       507       64              0    FBPOOL
$ lsvopt
VTD     Media     Size(mb)
vtopt0  No Media  n/a
$ lsmap -vadapter vhost0
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- -------------------
vhost0          U9117.MMA.1023C9F-V2-C11                     0x00000004

VTD             vt_ec04
Status          Available
LUN             0x8100000000000000
Backing device  client2lv
Physloc

VTD             vtopt0
Status          Available
LUN             0x8200000000000000
Backing device
Physloc

$ mkvopt -name vio-1-5-expansion.iso -file /var/vio/storagepools/file-storage-pl/vio-1-5-expansion.iso -ro
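The new image can then be loaded into the virtual optical device created earlier, which is what the next slide's lsmap output reflects; a sketch using the names above:

$ loadopt -vtd vtopt0 -disk vio-1-5-expansion.iso
$ lsvopt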

Seeing the image on the client LPAR


$ lsmap -all | more
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- -------------------
vhost0          U9117.MMA.1023C9F-V2-C11                     0x00000004

VTD             vt_ec04
Status          Available
LUN             0x8100000000000000
Backing device  client2lv
Physloc

VTD             vtopt0
Status          Available
LUN             0x8200000000000000
Backing device  /var/vio/VMLibrary/vio-1-5-expansion.iso
Physloc

From the client LPAR:

root@ec04 / # cfgmgr
root@ec04 / # lsdev -Cc cdrom
cd0 Available Virtual SCSI Optical Served by VIO Server
root@ec04 / # mount -v cdrfs -o ro /dev/cd0 /cdrom
root@ec04 / # df
Filesystem    512-blocks  Free    %Used  Iused  %Iused  Mounted on
/dev/hd4      98304       49488   50%    1982   9%      /
/dev/hd2      2490368     88936   97%    23302  8%      /usr
/dev/hd9var   65536       31584   52%    494    7%      /var
/dev/hd3      229376      126928  45%    64     1%      /tmp
/dev/hd1      65536       63368   4%     20     1%      /home
/proc         -           -       -      -      -       /proc
/dev/hd10opt  229376      35272   85%    2937   11%     /opt
/dev/cd0      19724       0       100%   4931   100%    /cdrom

Agenda
- Virtual Optical Devices, Virtual Optical Media, File-backed Devices
- Shared Ethernet Adapter (SEA) Failover
- SEA over Host Ethernet Adapter (HEA)
- Live Partition Mobility configuration
- Virtual Tape
- N-Port ID Virtualization (NPIV)
- Heterogeneous Multipathing
- Active Memory Sharing

Shared Ethernet Adapter


- Physical access shared by multiple networks
- Physical access can be a single adapter or an aggregate of adapters (EtherChannel/Link Aggregation)
- Shared Ethernet operates at layer 2
- Virtual Ethernet MAC visible to outside systems
- Broadcast/multicast support

Create the Shared Ethernet Adapter (SEA)


[Diagram: on VIOS 1, physical adapters ent0 and ent1 are aggregated into ent3 (LA), which is bridged by ent4 (SEA, interface en4) to the virtual trunk adapter ent2 (PVID 1, VID 100). Client 1 and Client 2 each have en0/ent0 (virtual, PVID 1) and en1/ent1 (virtual, PVID 100); traffic on PVID 1 is untagged, traffic on VID 100 is tagged.]

$ mkvdev -sea ent3 -vadapter ent2 -default ent2 -defaultid 1
ent4 Available
en4
et4
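The resulting SEA can be verified with lsmap; a sketch of what the output looks like (values illustrative, not captured from a live system):

$ lsmap -all -net
SVEA   Physloc
------ --------------------------------------------
ent2   U9117.MMA.1023C9F-V1-C2-T1

SEA             ent4
Backing device  ent3
Status          Available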

Shared Ethernet Adapter Failover


- VIOS feature (independent of the client partition)
- Provides a backup adapter for the SEA, with active monitoring
- Virtual Ethernet control channel between the two VIOS
- No load balancing; only the primary SEA is active
- Traffic flows through the secondary SEA only when the primary SEA fails
- No configuration required on the client partition; everything is done on the two VIOS
- Can be used with EtherChannel/802.3ad devices
- Configured with the mkvdev command on both VIOS (see the sketch below)

[Diagram: a client partition's virtual Ethernet is served by VIOS 1 (primary SEA) and VIOS 2 (backup SEA), each with a physical Ethernet adapter, joined by a control channel.]
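On each VIOS, SEA failover is configured by adding the ha_mode and ctl_chan attributes when the SEA is created. A minimal sketch, assuming ent0 is the physical adapter, ent2 the bridged virtual trunk adapter, and ent3 the control-channel virtual adapter (the trunk priority set on the HMC determines which VIOS is primary):

$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 -attr ha_mode=auto ctl_chan=ent3
ent4 Available

Run the equivalent command on the second VIOS, whose trunk adapter is configured with the backup priority.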

Shared Ethernet Adapter Failover, Dual VIOS


- Complexity: specialized setup confined to the VIOS
- Resilience: protection against single VIOS / switch port / switch / Ethernet adapter failure
- Throughput / scalability: cannot do load-sharing between primary and backup SEA (the backup SEA is idle until needed)
- SEA failover is initiated by:
  - The backup SEA detecting that the active SEA has failed
  - The active SEA detecting a loss of the physical link
  - Manual failover, by putting the SEA in standby mode
  - The active SEA being unable to ping a given IP address
- Notes:
  - Requires VIOS V1.2 and SF235 platform firmware
  - Can be used with any type of client (AIX, Linux, IBM i on POWER6)
  - Outside traffic may be tagged

[Diagram: a POWER5 server client's virtual Ethernet is served by VIOS 1 (primary SEA) and VIOS 2 (backup SEA), each with a physical Ethernet adapter, joined by a control channel.]
Tips and considerations when using SEA and SEA Failover


SEA
- Make sure there is no IP address configured on either the physical Ethernet interface or the virtual interface that will be part of the SEA prior to performing the SEA configuration. You can optionally configure an IP address on the new SEA interface after the configuration is done.

SEA failover
- If you have multiple SEAs configured on each of the VIO servers, then for each SEA pair you need to configure a separate control channel with a unique PVID on the system.
- Make sure you configure the SEA failover adapter (on the second VIOS) at the same time you configure the primary adapter.

Agenda
- Virtual Optical Devices, Virtual Optical Media, File-backed Devices
- Shared Ethernet Adapter (SEA) Failover
- SEA over Host Ethernet Adapter (HEA)
- Live Partition Mobility configuration
- Virtual Tape
- N-Port ID Virtualization (NPIV)
- Heterogeneous Multipathing
- Active Memory Sharing

Virtualization: HEA Logical Port Concept

To an LPAR, an HEA logical port appears as a generic Ethernet interface:
- With its own resources and MAC address
- Sharing bandwidth with other logical ports defined on the same physical port

Logical ports are allocated to partitions:
- Each logical port can be owned by a separate LPAR
- A partition can own multiple logical ports
- Only one logical port per physical port per partition

[Diagram: partitions attach through logical ports to the HEA's logical layer-2 switch, which connects to the physical port.]

Virtualization: SEA and HEA


[Diagram: with virtual Ethernet and SEA, client partitions (AIX, Linux, IBM i) use virtual Ethernet drivers attached to the PHYP virtual Ethernet switch, and an I/O hosting partition runs a packet forwarder to the physical Ethernet adapter; with HEA, each partition's Ethernet driver attaches directly to the HEA through PHYP.]

Considerations:
- HEA removes the SW forwarder bottleneck: adapter sharing with native performance, removing the SW forwarding overhead
- 10 Gbps adapters are likely to be shared by multiple partitions
- LPAR mobility still requires SEA (see "Devices not supported for LPM" later in this presentation)

When might you use SEA over IVE


- When the number of Ethernet adapters needed on a single partition is more than the number of physical ports available on the HEA(s)
- If the number of LPARs sharing a physical port exceeds the number of LHEA ports available
  - Depends on the type of daughter card and the MCS value
- If you anticipate a future need for more adapters than you have LHEA ports (LP-HEA) available
- Very small amount of memory on the LPAR
  - Each LP-HEA needs around 102 MB of system memory
- In some situations you might consider using a combination of SEA, IVE, and/or dedicated Ethernet adapters

Considerations when using HEA on VIO


- When the VIO server uses the HEA as a SEA, you must set the VIO server as a promiscuous LPAR for that LHEA
  - When in promiscuous mode there is only one LP-HEA per physical port
  - The promiscuous LPAR receives all unicast, multicast, and broadcast network traffic from the physical network
- Always use flow control and the large_send parameter for all Gigabit and 10 Gbit Ethernet adapters, and the large_receive parameter when using a 10 Gbit Ethernet adapter (VIOS 1.5.1.1), to increase performance; see the sketch below
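On the VIOS, these attributes are set with chdev; a sketch, assuming ent0 is the physical adapter and that it exposes these attribute names (they vary by adapter type, so list them first):

$ lsdev -dev ent0 -attr
$ chdev -dev ent0 -attr flow_ctrl=yes large_send=yes large_receive=yes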


Promiscuous Mode


Agenda
- Virtual Optical Devices, Virtual Optical Media, File-backed Devices
- Shared Ethernet Adapter (SEA) Failover
- SEA over Host Ethernet Adapter (HEA)
- Live Partition Mobility configuration
- Virtual Tape
- N-Port ID Virtualization (NPIV)
- Heterogeneous Multipathing
- Active Memory Sharing

Live Partition Mobility concepts


Mover service partition (MSP)
- Mover service partition is an attribute of the Virtual I/O Server partition. It enables the specified Virtual I/O Server partition to provide the function that asynchronously extracts, transports, and installs partition state. Two mover service partitions are involved in an active partition migration: one on the source system, the other on the destination system. Mover service partitions are not used for inactive migrations.

Virtual asynchronous services interface (VASI)
- The source and destination mover service partitions use this virtual device to communicate with the POWER hypervisor to gain access to partition state. The VASI device is included on the Virtual I/O Server, but is only used when the server is declared as a mover service partition.

Partition Migration moves Active and Inactive LPAR


- Active Partition Migration is the actual movement of a running LPAR from one physical machine to another without disrupting* the operation of the OS and applications running in that LPAR
- Supported by all POWER6-based servers
- Applicability:
  - Workload consolidation (e.g., many to one)
  - Workload balancing (e.g., move to a larger system)
  - Workload migration to newer systems
  - Planned CEC outages for maintenance/upgrades
  - Impending CEC outages (e.g., hardware warning received)

Inactive Partition Migration


- Inactive Partition Migration transfers a partition that is logically powered off (not running) from one system to another
- Subject to fewer compatibility restrictions than active partition migration, because the OS goes through the boot process on the destination
- Provides some ease of migration from systems prior to those enabled for active migration

Requisites

- The mobile partition's network and disk access must be virtualized using one or more Virtual I/O Servers
- The Virtual I/O Servers on both systems must have a Shared Ethernet Adapter configured to bridge to the same Ethernet network used by the mobile partition
- The Virtual I/O Servers on both systems must be capable of providing virtual access to all disk resources the mobile partition is using; the disks used by the mobile partition must be accessed through virtual SCSI and/or virtual Fibre Channel-based mapping
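On HMC-managed systems the migration itself is driven from the HMC command line (or GUI); a minimal sketch using the HMC migrlpar command, with managed-system and partition names as placeholders:

Validate first:
$ migrlpar -o v -m source_system -t target_system -p mobile_partition

Then migrate:
$ migrlpar -o m -m source_system -t target_system -p mobile_partition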

Normal Running Pre-Migration


(Disk I/O shown)

[Diagram: the source partition runs with a dedicated adapter and a VSCSI client served by the source system's VIOS (VSCSI, VLAN, MSP); the destination system's VIOS (VSCSI, VLAN, MSP) is idle; both systems attach through PHYP and dedicated adapters to the LAN and the SAN disks.]

Create Target Partition

[Diagram: a target partition shell is created on the destination system; the source partition keeps running with its VSCSI client (its dedicated adapter has been removed); both VIOS (VSCSI, VLAN, MSP) retain LAN and SAN disk connectivity.]

Transfer Partition State

[Diagram: the MSPs on the two VIOS stream the running partition's state across the LAN from source to target while the target partition, with its own VSCSI client, is instantiated; the SAN disks remain shared.]

Transfer VIO to Target

[Diagram: virtual I/O is switched to the destination; the target partition's VSCSI client is now served by the destination VIOS, while the source partition's devices remain defined but inactive.]

Re-Attach Dedicated Adapters


(via DLPAR)

[Diagram: dedicated adapters are re-attached to the target partition via DLPAR, alongside its VSCSI client; the source partition definition still exists on the source system.]

Clean Up Unused Resources

[Diagram: the source partition definition and its unused virtual devices are removed from the source system; the target partition runs on the destination with its VSCSI client and dedicated adapters.]

Devices not supported for LPM


- IVE
- Virtual optical device
- Virtual optical media
- OS installed on internal disks
- OS installed on logical volumes or file-backed devices
- Virtual tape

Agenda
- Virtual Optical Devices, Virtual Optical Media, File-backed Devices
- Shared Ethernet Adapter (SEA) Failover
- SEA over Host Ethernet Adapter (HEA)
- Live Partition Mobility configuration
- Virtual Tape
- N-Port ID Virtualization (NPIV)
- Heterogeneous Multipathing
- Active Memory Sharing

VIOS Virtual Tape Support


- Enables client partitions to directly access selected SAS tape devices, sharing resources and simplifying backup & restore operations
- The SAS adapter is owned by the VIOS partition
- Included with PowerVM Express, Standard, or Enterprise Edition
- Supports AIX 5.3 & 6.1 partitions and IBM i 6.1 partitions
- POWER6 processor-based systems
- Tape drives supported:
  - DAT72: Feature Code 5907
  - DAT160: Feature Code 5619
  - HH LTO4: Feature Code 5746

[Diagram: the VIOS owns the SAS adapter and serves the tape drive to client partitions through virtual SCSI adapters via the Power Hypervisor.]

VIOS Virtual Tape Support


- The virtual tape device is created and managed the same way as a virtual disk:

$ mkvdev -vdev TargetDevice -vadapter VirtualSCSIServerAdapter

- lsdev -virtual returns results similar to the following:

name     status     description
vhost3   Available  Virtual SCSI Server Adapter
vsa0     Available  LPAR Virtual Serial Adapter
vtscsi0  Available  Virtual Target Device - Logical Volume
vttape0  Available  Virtual Target Device - Tape

- On the client partition, simply run cfgmgr to configure the virtual tape (see the sketch below)
- The device can be used as a regular tape, for data and OS backup and restore, including booting from media
- Automated tape libraries are not supported
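A minimal end-to-end sketch, assuming the SAS drive appears as rmt0 on the VIOS and vhost3 is the virtual SCSI server adapter for the client (names illustrative):

On the VIOS:
$ mkvdev -vdev rmt0 -vadapter vhost3
vttape0 Available

On the AIX client:
# cfgmgr
# lsdev -Cc tape

The new rmt device can then be used with standard tools such as tar, backup, or mksysb.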

Agenda
- Virtual Optical Devices, Virtual Optical Media, File-backed Devices
- Shared Ethernet Adapter (SEA) Failover
- SEA over Host Ethernet Adapter (HEA)
- Live Partition Mobility configuration
- Virtual Tape
- N-Port ID Virtualization (NPIV)
- Heterogeneous Multipathing
- Active Memory Sharing

NPIV
- N_Port ID Virtualization (NPIV) provides direct Fibre Channel connections from client partitions to SAN resources, simplifying SAN management
- The Fibre Channel host bus adapter is owned by the VIOS partition
- Supported with PowerVM Express, Standard, and Enterprise Edition
- Supports AIX 5.3 and AIX 6.1 partitions
- Power 520, 550, 560, and 570, with an 8 Gb PCIe Fibre Channel Adapter
- Enables use of existing storage management tools
- Simplifies storage provisioning (i.e., zoning, LUN masking)
- Enables access to SAN devices, including tape libraries

[Diagram: the VIOS owns the FC adapter and serves client partitions through virtual FC adapters via the Power Hypervisor.]

Statement of Direction: IBM intends to support N_Port ID Virtualization (NPIV) on the POWER6 processor-based Power 595, BladeCenter JS12, and BladeCenter JS22 in 2009. IBM intends to support NPIV with IBM i and Linux environments in 2009. All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

NPIV details
- VIOS V2.1 (PowerVM Express, Standard, and Enterprise)
- Client OS support: AIX 5.3 and 6.1; later in 2009, Linux and IBM i
- POWER6 only; blade and high-end support in 2009
- 8 Gigabit PCI Express Dual Port Fibre Channel Adapter
- Compatible with Live Partition Mobility (LPM)
- VIO servers can support NPIV and vSCSI simultaneously
- Clients can support NPIV, vSCSI, and dedicated Fibre Channel simultaneously
- HMC-managed and IVM-managed servers
- Unique Worldwide Port Name (WWPN) generation (allocated in pairs)

NPIV Simplifies SAN Management


Current Virtual SCSI model N-Port ID Virtualization
POWER6 Disks POWER5 or POWER6 Virtualized disks

AIX generic scsi disk Virtual SCSI generic scsi disk

AIX DS8000 Virtual FC EMC

FC Adapter

Shared FC Adapter

VIOS
FC Adapters

VIOS
FC Adapters

SAN

SAN

DS8000

EMC

DS8000

EMC

Partition SAN access through NPIV

SAN Switch requirements


- Only the first SAN switch attached to the Fibre Channel adapter needs to be NPIV capable
- Other switches in the environment do not need to be NPIV capable
- Not all ports on the switch need to be configured for NPIV, just the one the adapter will use
- Check with your storage vendor to make sure the switch is NPIV capable
- Order and install the latest available firmware for your SAN switch

Create a Virtual Fibre Channel Adapter

Client/server relationship similar to virtual SCSI:
- VSCSI server on the VIOS, VSCSI client on the client partition
- VFC server on the VIOS, VFC client on the client partition

Mapping the adapter: vfcmap binds the VFC server adapter to the physical Fibre Channel port.

$ vfcmap -help
Usage: vfcmap -vadapter VFCServerAdapter -fcp FCPName
Maps the Virtual Fibre Channel Adapter to the physical Fibre Channel Port.
    -vadapter    Specifies the virtual server adapter.
    -fcp         Specifies the physical Fibre Channel Port.
Example: vfcmap -vadapter vfchost0 -fcp fcs0

After the mapping is done, run cfgmgr on the client partitions to configure the SAN devices (see the sketch below).

Before this step, zoning (if used) must be done on the switches. The virtual adapter WWPN can be obtained on the HMC.
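Once the mapping is in place, it can be verified on the VIOS, and the client's virtual WWPNs can also be read from the running client; a sketch with illustrative adapter names:

On the VIOS:
$ vfcmap -vadapter vfchost0 -fcp fcs0
$ lsmap -vadapter vfchost0 -npiv

On the AIX client (here fcs0 is the virtual FC client adapter):
# lscfg -vpl fcs0 | grep "Network Address"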

WWPN for the Virtual Adapter

Agenda
- Virtual Optical Devices, Virtual Optical Media, File-backed Devices
- Shared Ethernet Adapter (SEA) Failover
- SEA over Host Ethernet Adapter (HEA)
- Live Partition Mobility configuration
- Virtual Tape
- N-Port ID Virtualization (NPIV)
- Heterogeneous Multipathing
- Active Memory Sharing

Dynamic Heterogeneous Multi-Path I/O


- Delivers flexibility for Live Partition Mobility environments
- Provides efficient path redundancy to SAN resources
- Supported between virtual NPIV and physical Fibre Channel adapters
- AIX 5.3 and 6.1 partitions
- POWER6 processor-based servers

Migration sequence:
1) Real adapter only
2) Virtual (NPIV) adapter added to prepare for mobility
3) Partition moves via the virtual adapter
4) Real adapter re-attached on the destination

[Diagram: the partition multipaths across a physical FC adapter and a virtual FC adapter (NPIV) served by the VIOS through the Power Hypervisor, before and after the move.]

Agenda
- Virtual Optical Devices, Virtual Optical Media, File-backed Devices
- Shared Ethernet Adapter (SEA) Failover
- SEA over Host Ethernet Adapter (HEA)
- Live Partition Mobility configuration
- Virtual Tape
- N-Port ID Virtualization (NPIV)
- Heterogeneous Multipathing
- Active Memory Sharing

PowerVM Active Memory Sharing


- Active Memory Sharing intelligently flows memory from one partition to another for increased utilization and flexibility of memory usage
- Memory virtualization enhancement for Power Systems:
  - Memory dynamically allocated based on partition workload demands
  - Contents of memory written to a paging device
  - Improves memory utilization
- Extends Power Systems virtualization leadership: capabilities not provided by Sun and HP virtualization offerings
- Designed for partitions with variable memory requirements:
  - Low average memory requirements
  - Active/inactive environments
  - Workloads that peak at different times across the partitions
- Available with PowerVM Enterprise Edition:
  - AIX 6.1, Linux, and i 6.1 partitions that use VIOS and shared processors
  - POWER6 processor-based systems

* All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

Active Memory Sharing Enables Higher Memory Utilization


Partitions with dedicated memory:
- Memory is allocated to partitions
- As workload demands change, memory remains dedicated
- Memory allocation is not optimized to the workload

Partitions with shared memory:
- Memory is allocated to a shared pool
- Memory is used by the partition that needs it, enabling more throughput
- Higher memory utilization

[Charts: memory allocation vs. memory requirements over time for three partitions, comparing dedicated memory with shared memory (GB).]

Active Memory Sharing Examples


- Around the world: partitions support workloads with memory demands that peak at different times (Asia, Americas, Europe)
- Day and night: partitions support daytime web applications and nighttime batch
- Infrequent use: large number of partitions with sporadic use

[Charts: memory usage (GB) over time for each scenario.]

When not to use AMS


- High Performance Computing (HPC) applications that have high and constant memory usage
  - Crash analysis, CFD, etc.
- Databases that have fixed buffer cache allocation
  - These generally use all the available memory on the partition, and buffer cache paging is undesirable
- Realtime, fixed-response-time applications
  - Predictability is key, so resources should not be shared

References
- Using File-Backed Virtual SCSI Devices, by Janel Barfield: http://www.ibmsystemsmag.com/aix/februarymarch09/tipstechniques/24273p1.aspx
- Configuring Shared Ethernet Adapter Failover: http://techsupport.services.ibm.com/server/vios/documentation/SEA_final.pdf
- Integrated Virtual Ethernet Adapter Technical Overview and Introduction: http://www.redbooks.ibm.com/abstracts/redp4340.html
- IBM PowerVM Live Partition Mobility: http://www.redbooks.ibm.com/abstracts/sg247460.html
- Power Systems Virtual I/O Server: http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphb1/iphb1.pdf

Thank you
César Diniz Maciel, cmaciel@us.ibm.com
forotecnicoargentina.com/facebook
