Live Partition Mobility
Steven Knudson
sjknuds@us.ibm.com
Agenda
Overview
Prerequisites
Validation
Migration
Effects
Demo
Supplemental Material
Overview
Prerequisites
From Fix Central website, Partition Mobility:
http://www14.software.ibm.com/webapp/set2/sas/f/pm/component.html
Prerequisites
Two POWER6 systems managed by a single HMC or IVM on each server
Advanced POWER Virtualization Enterprise Edition
VIOS 1.5.1.1 (VIO 1.5.0.0, plus Fixpack 10.1) plus interim fixes
IZ08861.071116.epkg.Z – Partition Mobility fix
642758_vio.080208.epkg.Z – VIO MPIO fix
AX059907_3.080314.epkg.Z – USB Optical Drive fix
IZ16430.080327.epkg.Z – various QLogic and Emulex FC fixes
VIOS 1.5.2.1 (VIO 1.5.0.0 plus Fixpack 11.1) rolls up all interim fixes – Preferred
Prerequisites
All systems that will host a mobile partition must be on the same subnet and
managed by a single HMC
– POWER6 Blades are managed by IVM instances
Prerequisites
Partition readiness
Validation
System Properties support Partition Mobility
– Inactive and Active Partition Mobility Capable = True
Mover Service Partitions on both Systems
– VIO Servers with VASI device defined, and MSP enabled
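These readiness checks can also be run from the command line. A minimal sketch, assuming an HMC V7 CLI session; the managed system name Mercury is taken from the demo environment and stands in for your own:

```shell
# On the HMC: confirm each managed system reports mobility capability
lssyscfg -r sys -F name,active_lpar_mobility_capable,inactive_lpar_mobility_capable

# On the HMC: confirm which partitions on a server are enabled as Mover Service Partitions
lssyscfg -r lpar -m Mercury -F name,msp

# On each VIO Server (as padmin): confirm the VASI device is defined
lsdev | grep vasi
```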
Migration
Migration Steps
When the destination server receives the last modified pages, the
migration is complete
In the final steps, all resources are returned to the source and
destination systems and the mobile partition is restored to its fully
functional state
The channel between MSPs is closed
The VASI channel between MSP and PHYP is closed
VSCSI adapters on the source MSP are removed
The HMC informs the MSPs that the migration is complete and all
migration data can be removed from their memory tables
The mobile partition and all its profiles are deleted from the source
server
You can now add dedicated adapters to the mobile partition via DLPAR
as needed, or put it in an LPAR workload group
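The whole sequence can also be driven from the HMC command line with migrlpar; a hedged sketch, reusing the demo's server and partition names as placeholders:

```shell
# Validate the migration first; -o v makes no changes
migrlpar -o v -m Mercury -t Zeus -p bmark26

# Perform the active migration
migrlpar -o m -m Mercury -t Zeus -p bmark26
```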
Effects
Server properties
• The affinity characteristics of the logical memory blocks may change
• The maximum number of potential and installed physical processors may
change
• The L1 and/or L2 cache size and association may change
• This is not a functional issue, but may affect performance characteristics
Console
• Any active console sessions will be closed when the partition is migrated
• Console sessions must be re-opened on the target system by the user after
migration
LPAR
• uname output will change. The partition ID may change. The IP address and MAC
address will not change.
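A quick way to observe the partition identity before and after a move, on the AIX client (commands are standard AIX; output not captured from the demo):

```shell
# Print the partition number and partition name as seen by the OS
uname -L

# The network identity should be unchanged after migration
ifconfig en0 | grep inet
```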
Effects
Network
– A temporary network outage of seconds is expected to occur as part of
suspending the partition
• Temporary network outages may be visible to application clients, but it is
assumed that these are inherently recoverable
Effects
Error logs
– When a partition migrates, all of the error logs that the partition had received
will appear on the target system
– All of the error logs contain the machine type, model, and serial number, so it
is possible to correlate the error with the system that detected it
Partition time
– When a partition is migrated, the Time of Day and timebase values of the
partition are migrated.
– The Time of Day of the partition is recalculated, ensuring the partition timebase
value increases monotonically and accounting for any delays in migration.
DEMO
Environment
Two POWER6 servers
– 8-way Mercury
• 01EM320_31
– 16-way Zeus
• 01EM320_31
Single HMC managing both servers
– HMC V7.3.3.0
Mobile partition
– bmark26
• OS: AIX 6.1 6100-00-01-0748
• Shared processor pool Test1
• CPU entitlement: Min 0.20, Des 0.20, Max 2.00
• Mode: Uncapped
• Virtual Processors: Min 1, Des 2, Max 4
• Disks: SAN LUN
Supplemental Material
“Destination” Power6 server zeus has dual VIO LPARs, sq17 and
sq18. SEA Failover primary is sq17, backup is sq18
In source VIO LPARs ec01 and ec02, hdisk6 and hdisk7 are the LUNs we use
for the bmark26 and bmark29 mobile LPARs.
$ lspv
NAME PVID VG STATUS
hdisk0 00c23c9f9a1f1da3 rootvg active
hdisk1 00c23c9f9f5993e5 clientvg active
hdisk2 00c23c9f2fb9e5a9 clientvg active
hdisk3 00c23c9fb60af645 None
hdisk4 none None
hdisk5 none None
hdisk6 00c23c9f291cc30b None
hdisk7 00c23c9f291cc438 None
$ cat sk_lsdisk
for d in `ioscli lspv | awk '{print $1}'`
do
echo $d `ioscli lsdev -dev $d -attr | grep ieee | awk '{print $1" "$2}'`
done
$ sk_lsdisk
NAME
hdisk0
hdisk1
hdisk2
hdisk3
hdisk4
hdisk5
hdisk6 ieee_volname 600A0B800016954000001C7646F142A6
hdisk7 ieee_volname 600A0B8000170BC10000142846F124AD
$ cat sk_clariion
for d in `ioscli lspv | grep hdiskpower | awk '{print $1}'`
do
ioscli lsdev -dev $d -vpd | grep UI | awk '{print $1" "$2}'
done
$ cat sk_lsmap
#!/usr/bin/rksh
# sk_lsmap
#
#PATH=/usr/ios/cli:/usr/ios/utils:/home/padmin:
for v in `ioscli lsdev -virtual | grep vhost | awk '{print $1}'`
do
ioscli lsmap -vadapter $v -fmt : | awk -F: '{print $1" "$2" "$4" "$7" "$10}'
done
$ sk_lsmap
vhost0 U9117.MMA.1023C9F-V1-C11 vt_ec04 client2lv
vhost1 U9117.MMA.1023C9F-V1-C12 vt_ec03 nimclientlv
vhost2 U9117.MMA.1023C9F-V1-C15 vt_ec05 client3lv
vhost3 U9117.MMA.1023C9F-V1-C32 vt_ec07 hdisk3
vhost4 U9117.MMA.1023C9F-V1-C20 vt_bmark26 hdisk6
vhost5 U9117.MMA.1023C9F-V1-C13
vhost6 U9117.MMA.1023C9F-V1-C14 vtscsi0 hdisk6
vhost7 U9117.MMA.1023C9F-V1-C16
vhost8 U9117.MMA.1023C9F-V1-C21
vhost9 U9117.MMA.1023C9F-V1-C39 vt_bmark29 hdisk7
Option 77 again…
>>> 1 hdisk0 U9117.MMA.1023C9F-V9-C8-T1-L8100000000000
Sets MPIO to test failed and non-active paths every 5 minutes, bringing
them online if available.
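The same MPIO health-check settings can be applied from the command line instead of SMIT option 77; a sketch, assuming the client disk is hdisk0:

```shell
# Poll failed and non-active paths every 300 seconds (5 minutes);
# -P defers the change to the ODM if the disk is busy
chdev -l hdisk0 -a hcheck_interval=300 -a hcheck_mode=nonactive -P

# Confirm the attribute settings
lsattr -El hdisk0 -a hcheck_interval -a hcheck_mode
```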
The newly Installed and booted LPAR has two vscsi client adapters
# lsdev -Cc adapter -F "name physloc" | grep vscsi
vscsi0 U9117.MMA.1023C9F-V9-C8-T1
vscsi1 U9117.MMA.1023C9F-V9-C9-T1
The PVID we expected does come through from VIO to the client LPAR
# lspv
hdisk0 00c23c9f291cc438 rootvg active
Starting Mobility
If you specify a new profile name, your initial profile will be saved. But do
NOT assume it is bootable, or usable on return to the “source” server. VIO
mappings will change.
Starting Mobility
There might be more than one destination server to choose from
Starting Mobility
… then …
Starting Mobility
I selected the pair that were both SEA Failover primary, but any pair should
do here
Starting Mobility
Verify that the required (possibly tagged) VLAN is available
Starting Mobility
These are my client LPAR vscsi adapter IDs, matched to destination VIO
LPARs
Starting Mobility
You may select from different shared pools on the destination server
Starting Mobility
Left to default
Starting Mobility
The moment we’ve waited for…
Migration Complete
Migrated LPAR resides solely on the new server.
Migration Complete
Migration preserved my old profile, and created a new one
Same client adapter IDs, but different VIO server adapter IDs
Migration used new VIO server adapter IDs, even when the same adapter
IDs were available
Migration did not use ID 39 in destination VIO LPARs
$ hostname
sq17
$ sk_lsmap
vhost0 U9117.MMA.109A4AF-V1-C15
vhost1 U9117.MMA.109A4AF-V1-C16
vhost2 U9117.MMA.109A4AF-V1-C39
vhost3 U9117.MMA.109A4AF-V1-C14 vtscsi0 hdisk7
When you migrate back, do not expect to be back on your original VIO
Server adapter IDs. Your old client LPAR profile is historical, but will likely
not be usable without some reconfiguration. Best to create a new profile on
the way back over.
Back on the “source” server, device mappings for your client LPAR have
been completely removed from the VIO LPARs
$ hostname
ec01
$ sk_lsmap
vhost0 U9117.MMA.1023C9F-V1-C11 vt_ec04 client2lv
vhost1 U9117.MMA.1023C9F-V1-C12 vt_ec03 nimclientlv
vhost2 U9117.MMA.1023C9F-V1-C15 vt_ec05 client3lv
vhost3 U9117.MMA.1023C9F-V1-C32 vt_ec07 hdisk3
vhost4 U9117.MMA.1023C9F-V1-C20 vt_bmark26 hdisk6
vhost5 U9117.MMA.1023C9F-V1-C13
vhost6 U9117.MMA.1023C9F-V1-C14 vtscsi0 hdisk6
vhost7 U9117.MMA.1023C9F-V1-C16
vhost8 U9117.MMA.1023C9F-V1-C21
No longer a vhost adapter ID 39 (compare with the earlier sk_lsmap listing)
New adapter is on VLAN 5
VIO LPARs on source and destination Server must have virtual adapter
on VLAN 5, and this adapter must be “joined” into the SEA
Do the DLPAR of the adapter into both source VIO LPARs, and both
destination VIO LPARs
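On VIOS 1.5, one way to join the new trunked virtual into an existing SEA is to change the SEA's virt_adapters attribute; a sketch only, with adapter names ent4 (the SEA), ent1 (the existing trunk), and ent6 (the new trunk) as placeholders, not the exact devices from the demo:

```shell
# As padmin on each VIO Server: list the SEA's current virtual adapters
lsdev -dev ent4 -attr virt_adapters

# Add the new trunked virtual alongside the existing one
chdev -dev ent4 -attr virt_adapters=ent1,ent6
```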
The new VLAN id
Priority MUST match the existing trunked SEA virtual
MUST trunk to join SEA
Slightly different error – mkvdev the new virtual onto the SEA
Both trunked
virtual adapters
53 20-Aug-08 © 2008 IBM Corporation
IBM Training - 2008 Systems Technical Conference
Trunk priority on new virtual did not match the existing trunked virtual
adapter
chgsea: Ioctl NDD_SEA_MODIFY returned error 22 for device ent4
Ready to Finish…
Switch
Before SEA Failover, we used EtherChannel Network Interface Backup in the client
SEA in each VIO server, with External Access Virtuals, each on a different VLAN
Client LPAR gets a virtual adapter on each VLAN, with EtherChannel NIB on top
Trunk priority on SEA-bridged virtuals does not matter; different internal VLANs
No control channel; this is SEA, but not SEA Failover
Reference
Trademarks
The following are trademarks of the International Business Machines Corporation in the United States, other countries, or both.
Not all common law marks used by IBM are listed on this page. Failure of a mark to appear does not mean that IBM does not use the mark nor does it mean that the product is not
actively marketed or is not significant within its relevant market.
Those trademarks followed by ® are registered trademarks of IBM in the United States; all others are trademarks or common law marks of IBM in the United States.
*, AS/400®, e business(logo)®, DBE, ESCO, eServer, FICON, IBM®, IBM (logo)®, iSeries®, MVS, OS/390®, pSeries®, RS/6000®, S/30, VM/ESA®, VSE/ESA,
WebSphere®, xSeries®, z/OS®, zSeries®, z/VM®, System i, System i5, System p, System p5, System x, System z, System z9®, BladeCenter®
Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or other countries.
Cell Broadband Engine is a trademark of Sony Computer Entertainment, Inc. in the United States, other countries, or both and is used under license therefrom.
Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.
Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.
ITIL is a registered trademark, and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office.
IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency, which is now part of the Office of Government Commerce.
* All other products may be trademarks or registered trademarks of their respective companies.
Notes:
Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will
experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed.
Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here.
IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.
All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used IBM products and the results they may have achieved. Actual
environmental costs and performance characteristics will vary depending on individual customer configurations and conditions.
This publication was produced in the United States. IBM may not offer the products, services or features discussed in this document in other countries, and the information may be subject to change without
notice. Consult your local IBM business contact for information on the product or services available in your area.
All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot confirm the performance,
compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.
Prices subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your geography.