Front cover

IBM PowerVM Live Partition Mobility
Explore the PowerVM Enterprise Edition Live Partition Mobility
Move active and inactive partitions between servers
Manage partition migration with an HMC or IVM

John E Bailey, Thomas Prokop, Guido Somers

ibm.com/redbooks

International Technical Support Organization IBM PowerVM Live Partition Mobility March 2009

SG24-7460-01

Note: Before using this information and the product it supports, read the information in “Notices” on page xv.

Second Edition (March 2009)

This edition applies to AIX Version 6.1, AIX 5L Version 5.3 TL7, HMC Version 7.3.2 or later, and POWER6 technology-based servers, such as the IBM Power System 570 (9117-MMA) and the IBM Power System 550 Express (8204-E8A).
© Copyright International Business Machines Corporation 2007, 2009. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents
Figures  ix
Tables  xiii
Notices  xv
Trademarks  xvi
Preface  xvii
The team that wrote this book  xvii
Become a published author  xix
Comments welcome  xix

Chapter 1. Overview  1
1.1 Introduction  2
1.2 Partition migration  3
1.3 Cross-system flexibility is the requirement  3
1.4 Live Partition Mobility is the answer  5
1.4.1 Inactive migration  5
1.4.2 Active migration  6
1.5 Architecture  6
1.5.1 Hardware infrastructure  7
1.5.2 Components involved  11
1.6 Operation  12
1.6.1 Inactive migration  12
1.6.2 Active migration  13
1.7 Combining mobility with other features  14
1.7.1 High availability clusters  14
1.7.2 AIX Live Application Mobility  16

Chapter 2. Live Partition Mobility mechanisms  19
2.1 Live Partition Mobility components  20
2.1.1 Other components affecting Live Partition Mobility  22
2.2 Live Partition Mobility prerequisites  23
2.2.1 Capability and compatibility  23
2.2.2 Readiness  24
2.2.3 Migratability  25
2.3 Partition migration high-level workflow  26
2.4 Inactive partition migration  27
2.4.1 Introduction  27



2.4.2 Validation phase  28
2.4.3 Migration phase  29
2.4.4 Migration completion phase  31
2.4.5 Stopping an inactive partition migration  31
2.5 Active partition migration  31
2.5.1 Active partition state  32
2.5.2 Preparation  32
2.5.3 Validation phase  33
2.5.4 Partition migration phase  36
2.5.5 Migration completion phase  39
2.5.6 Virtual I/O Server selection  40
2.5.7 Source and destination mover service partitions selection  41
2.5.8 Stopping an active migration  41
2.6 Performance considerations  42
2.7 AIX and active migration  43
2.8 Linux and active migration  44

Chapter 3. Requirements and preparation  45
3.1 Introduction  46
3.2 Skill considerations  46
3.3 Requirements for Live Partition Mobility  47
3.4 Live Partition Mobility preparation checks  53
3.5 Preparing the systems for Live Partition Mobility  54
3.5.1 HMC  54
3.5.2 Logical memory block size  54
3.5.3 Battery power  55
3.5.4 Available memory  56
3.5.5 Available processors to support Live Partition Mobility  58
3.6 Preparing the HMC for Live Partition Mobility  61
3.7 Preparing the Virtual I/O Servers  63
3.7.1 Virtual I/O Server version  64
3.7.2 Mover service partition  64
3.7.3 Synchronize time-of-day clocks  65
3.8 Preparing the mobile partition for mobility  66
3.8.1 Operating system version  66
3.8.2 RMC connections  66
3.8.3 Disable redundant error path reporting  68
3.8.4 Virtual serial adapters  69
3.8.5 Partition workload groups  70
3.8.6 Barrier-synchronization register  72
3.8.7 Huge pages  74
3.8.8 Physical or dedicated I/O  76
3.8.9 Name of logical partition profile  78



Chapter 3. Requirements and preparation (continued): 3.9 Configuring the external storage, 3.10 Network considerations, and 3.11 Distance considerations, pages 79 through 88

Chapter 4. Basic partition migration scenario  89
Covers the basic Live Partition Mobility environment; configuring a Virtual I/O Server on the source and destination systems (including the mover service partition and time reference attributes); configuring storage on the mobile partition; preparing for an inactive or active partition migration; performing the validation steps and eliminating errors; and migrating a logical partition, including dual HMC considerations.

Chapter 5. Advanced topics  119
Covers dual Virtual I/O Server configurations (client mirroring, multipath I/O, and single to dual Virtual I/O Server); multiple concurrent migrations; remote Live Partition Mobility between HMCs; multiple shared processor pools; the command-line interface (the migrlpar, lslparmigr, lssyscfg, and mkauthkeys commands); migrating a partition with physical resources; migration awareness (APIs, scripts, and kernel extensions); virtual Fibre Channel (NPIV) configurations; and processor compatibility modes.

Chapter 6. Migration status  213
Covers migration progress and reference code location, recovery, and a recovery example.

Chapter 7. Integrated Virtualization Manager for Live Partition Mobility  221
Covers requirements for Live Partition Mobility on IVM; preparation of the source and destination servers, the management partition, the mobile partition, and the virtual SCSI, virtual Fibre Channel, and network configurations; the command-line interface; and migrating the mobile partition  257

Appendix A. Error codes and logs  259
SRCs, SRC error codes, IVM source and destination systems error codes, and operating system error logs.

Abbreviations and acronyms  267

Related publications  271
IBM Redbooks, other publications, online resources, how to get IBM Redbooks, and help from IBM.

Index  275


Figures

Figures 1-1 through 7-21 appear throughout the book: the Live Partition Mobility hardware infrastructure and migration overview (Chapter 1); the migration components, validation workflows, and state-transfer paths (Chapter 2); preparation of the systems, HMC, Virtual I/O Servers, and mobile partition (Chapter 3); the basic HMC-based migration scenario and its validation and migration windows (Chapter 4); advanced scenarios such as dual Virtual I/O Server configurations, remote migration between HMCs, virtual Fibre Channel, and processor compatibility modes (Chapter 5); migration status and recovery windows (Chapter 6); and IVM-based migration windows (Chapter 7).

Tables

1-1 PowerVM Live Partition Mobility Support  16
3-1 Supported migration matrix  50
3-2 Preparing the environment for Live Partition Mobility  53
3-3 Virtual SCSI adapter worksheet  84
5-1 Dynamic reconfiguration script commands for migration  183
5-2 Processor compatibility modes supported by server type  206
A-1 Progress SRCs  260
A-2 SRC error codes  261
A-3 Source system generated error codes  262
A-4 Destination system generated error codes  264
A-5 Operating system error log entries  266


Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurement may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.

Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: AIX 5L™, AIX®, BladeCenter®, DB2®, Enterprise Storage Server®, GDPS®, Geographically Dispersed Parallel Sysplex™, GPFS™, HACMP™, i5/OS®, IBM®, Parallel Sysplex®, POWER Hypervisor™, Power Systems™, POWER4™, POWER5™, POWER6+™, POWER6®, POWER7™, PowerHA™, PowerVM™, POWER®, Redbooks®, Redpapers™, Redbooks (logo)®, System p®, Tivoli®, Workload Partitions Manager™

The following terms are trademarks of other companies:

SUSE, the Novell logo, and the N logo are registered trademarks of Novell, Inc. in the United States and other countries.

Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

Preface

Live Partition Mobility is the next step in the IBM Power Systems virtualization continuum. It can be combined with other virtualization technologies, such as logical partitions, Live Workload Partitions, and the SAN Volume Controller, to provide a fully virtualized computing platform that offers the degree of system and infrastructure flexibility required by today's production data centers.

This IBM Redbooks publication discusses how Live Partition Mobility can help technical professionals, enterprise architects, and system administrators:

Migrate entire running AIX and Linux partitions and hosted applications from one physical server to another without disrupting services and loads.
Meet stringent service-level agreements.
Rebalance loads across systems quickly.
Use a migration wizard for single partition migrations, with support for multiple concurrent migrations.

This book can help you understand, plan, prepare, and perform partition migration on IBM Power Systems servers that are running AIX and Linux.

Note: Minor updates and technical corrections are marked by change bars such as the ones in the left margin on this page. A 2010 update was made to include POWER7 servers.

The team that wrote this book

This book was produced by a team of specialists from around the world working at the International Technical Support Organization (ITSO), Austin Center.

John E Bailey is a Staff Software Engineer working in the IBM Power Systems Test Organization for IBM USA. He has seven years experience with IBM Power Systems and has worked on Live Partition Mobility for three years. His areas of expertise include AIX, PowerVM virtualization, Hardware Management Console, storage area networks, and software testing. He holds a degree in Computer Science from Prairie View A&M University.

Jun Nakano IBM Japan xviii IBM PowerVM Live Partition Mobility . Guido Somers. His areas of expertise include AIX. PowerVM and complex implementations.Thomas Prokop is a Consulting Certified IT Specialist working as a Field Technical Sales Specialist in IBM US Sales & Distribution supporting clients and IBM sales and Business Partners. Maneesh Sharma. Spangenberg. David Hu. Dean S. Josh Miers. Chris Milsted. Smolders. Ruth. Linux. Luc R. system performance and tuning. Elizabeth A. John D. Anil Kalavakolanu. Griffiths. Kasturi Patel. He has 18 years of experience with IBM Power Systems and has experience in the fields of virtualization.K. He also provides pre-sales consultation and implementation of IBM POWER® and AIX high-end system environments. He is an author of many IBM Redbooks publications. Wilcox IBM USA Nigel A. IBM Power Systems servers. and Electronics. and did research in the field of Theoretical Physics. Bailey. logical partitioning. Van Niewaal. Steven E. Jonathan R. Holt. Dave Williams IBM U. Finnes. ten years of which were within IBM. Kevin J. He has 13 years of experience in the Information Technology field. Chemistry. Timothy Piasecki. Timothy Marchini. Jez Wain The project that produced this document was managed by: Scott Vetter. Jennings. Matthew Harding. Business Administration. The authors of the first edition of the IBM System p® Live Partition Mobility Redbook are: Mitchell Harding. Eddie Chen. Narutsugu Itoh. James Lee. performance analysis. virtualization. Harding. Royer. Guido Somers is a Cross Systems Certified Senior Enterprise Infrastructure Architect working for the IBM Global Technology Services organization in Belgium. Tonya L. Federico Vagnini. John Banchy. and other IBM hardware offerings. PMP Thanks to the following people for their contributions to this project: John E. Cawlfield. Peter Nutt. Mitchell P. Ravindra Tekumallah. IT optimization and virtualization. Vasu Vallabhaneni. Steven J. SAN. He holds degrees in Biotechnology. Robert C. His focus is on server consolidation. PowerHA™.

Become a published author

Join us for a two- to six-week residency program! Help write an IBM Redbooks publication dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You will have the opportunity to team with IBM technical professionals, Business Partners, and Clients.

Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you will develop a network of contacts in IBM development labs, and increase your productivity and marketability.

Find out more about the residency program, browse the residency index, and apply online at:
http://www.ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us! We want our Redbooks to be as helpful as possible. Send us your comments about this book or other Redbooks in one of the following ways:

Use the online Contact us review Redbooks form found at:
ibm.com/redbooks

Send your comments in an e-mail to:
redbooks@us.ibm.com

Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400


Chapter 1. Overview

In this chapter, we provide an overview of Live Partition Mobility with a high-level description of its features.

This chapter contains the following topics:

1.1, “Introduction” on page 2
1.2, “Partition migration” on page 3
1.3, “Cross-system flexibility is the requirement” on page 3
1.4, “Live Partition Mobility is the answer” on page 5
1.5, “Architecture” on page 6
1.6, “Operation” on page 12
1.7, “Combining mobility with other features” on page 14

1.1 Introduction

Live Partition Mobility allows you to migrate partitions that are running AIX and Linux operating systems and their hosted applications from one physical server to another without disrupting the infrastructure services. The migration operation, which takes just a few seconds, maintains complete system transactional integrity. The migration transfers the entire system environment, including processor state, memory, attached virtual devices, and connected users.

Live Partition Mobility helps you meet increasingly stringent service-level agreements (SLAs) because it allows you to proactively move running partitions and applications from one server to another. The ability to move running partitions from one server to another gives you the ability to balance workloads and resources. If a key application's resource requirements peak unexpectedly to a point where there is contention for server resources, you might move it to a more powerful server or move other, less critical, partitions to different servers, and use the freed-up resources to absorb the peak.

If you have partitions with workloads that have widely fluctuating resource requirements over time (for example, with a peak workload at the end of the month or the end of the quarter), you can use Live Partition Mobility to consolidate partitions to a single server during the off-peak period, allowing you to turn off unused servers. Then move the partitions to their own, adequately configured servers just prior to the peak. This approach also offers energy savings by reducing the power to run machines and the power to keep them cool during off-peak periods.

Live Partition Mobility may also be used as a mechanism for server consolidation, because it provides an easy path to move applications from individual, stand-alone servers to consolidation servers.

IBM Power Systems servers are designed to offer the highest stand-alone availability in the industry. Even so, enterprises must occasionally restructure their infrastructure to meet new IT requirements. By letting you move your running production applications from one physical server to another, Live Partition Mobility allows for maintenance or modification to a system that is nondisruptive to your users. This mitigates the impact on partitions and applications formerly caused by the occasional need to shut down a system.

Today, even small IBM Power Systems servers frequently host many logical partitions. As the number of hosted partitions increases, finding a maintenance window acceptable to all becomes increasingly difficult. Live Partition Mobility allows you to move partitions around so that you can perform previously disruptive operations on the machine at your convenience, rather than being constrained to the time that is least inconvenient to the users.

Live Partition Mobility can be automated and incorporated into system management tools and scripts. Support for multiple concurrent migrations allows you to liberate system resources very quickly. For single-partition, point-in-time migrations, the Hardware Management Console (HMC) and the Integrated Virtualization Manager (IVM) interfaces offer easy-to-use migration wizards.

Live Partition Mobility contributes to the goal of continuous availability, as follows:

Reduces planned down time by dynamically moving applications from one server to another.
Responds to changing workloads and business requirements when you move workloads from heavily loaded servers to servers that have spare capacity.
Reduces energy consumption by allowing you to easily consolidate workloads and turn off unused servers.

Live Partition Mobility is the next step in the IBM PowerVM continuum. It can be combined with other virtualization technologies, such as logical partitions, Live Workload Partitions, and the SAN Volume Controller, to provide a fully virtualized computing platform offering the degree of system and infrastructure flexibility required by today's production data centers.

1.2 Partition migration

A partition migration operation can occur either when a partition is powered off (inactive), or when a partition is providing service (active). There is no restriction on processing units or partition memory size. During an active partition migration, there is no disruption of system operation or user service, no loss of connectivity, and no effect on the running transactions. For example, a partition that is hosting a live production database with normal user activities can be migrated to a second system with no loss of data.

A logical partition may be migrated between two POWER6 (or later) technology-based systems, either for inactive or for active migration, if the destination system has enough resources to host the partition.

1.3 Cross-system flexibility is the requirement

Infrastructure flexibility has become a key criterion when designing and deploying information technology solutions. Application requirements frequently change, and the hardware infrastructure they rely upon must be capable of adapting to the new requirements in a very short time. In many instances, applications are distributed across multiple systems, ensuring isolation, optimization of global system resources, and adaptability of the infrastructure to new workloads. Configuration changes must be applied in a very simple and secure way, with limited administrator intervention, to reduce change management costs and the related risk, but also with minimal to no impact on the service level.

The Advanced POWER Virtualization feature introduced in POWER5-based systems provides excellent flexibility capabilities within each system. The virtualization of processor capacity and the granular distribution of memory, combined with network and disk virtualization, enable administrators to create multiple fine-grained logical partitions within a single system. Computing power can be distributed among partitions automatically in real time, with no user action, depending on real application needs. System configuration changes are made by policy-based controls or by administrators with very simple and secure operations that do not interrupt service.

Although single-system virtualization greatly improves the flexibility of an IT solution, service requirements of clients often demand a more comprehensive view of the entire infrastructure. One of the most time consuming activities in a complex environment is the transfer of a workload from one system to another. Although many reasons for the migration exist, several reasons are:

Resource balancing: A system does not have enough resources for the workload while another system does.

New system deployment: A workload running on an existing system must be migrated to a new, more powerful one.

Availability requirements: When a system requires maintenance, its hosted applications must not be stopped and can be migrated to another system. In some cases, an SLA may be so strict that planned outages are not tolerated.

Without a way to migrate a partition, all these activities require careful planning and highly skilled people, and often cause a significant downtime.
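Whether two managed systems can take part in a migration at all, and whether the intended destination has spare capacity, can be confirmed up front from the HMC command line. The following sketch is illustrative only: the managed system name is a placeholder, and the exact attribute names can vary with the HMC level.

  # List the mobility capability of each managed system known to this HMC
  lssyscfg -r sys -F name,active_lpar_mobility_capable,inactive_lpar_mobility_capable

  # Check free memory (MB) and free processing units on the intended destination
  lshwres -r mem -m destination_system --level sys -F curr_avail_sys_mem
  lshwres -r proc -m destination_system --level sys -F curr_avail_sys_proc_units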

1.4 Live Partition Mobility is the answer

The Live Partition Mobility function offered on POWER6 technology-based systems is designed to enable the migration of an entire logical partition from one system to another. Live Partition Mobility uses a simple and automated procedure that transfers the configuration from source to destination without disrupting the hosted applications or the setup of the operating system and applications.

Live Partition Mobility provides the administrator greater control over the usage of resources in the data center. It allows a level of reconfiguration that in the past was not possible because of complexity, or because of SLAs that do not allow an application to be stopped for an architectural change.

Live Partition Mobility is a feature of the PowerVM Enterprise Edition offering.

The migration process can be performed either with a powered-off or a live partition. The following two available migration types are discussed in more detail in the next sections:

Inactive migration: The logical partition is powered off and moved to the destination system.

Active migration: The migration of the partition is performed while service is provided, without disrupting user activities.

1.4.1 Inactive migration

Inactive migration moves the definition of a powered-off logical partition from one system to another, along with its network and disk configuration. Network access and disk data are preserved and made available to the new partition.

The inactive migration procedure takes care of the reconfiguration of the involved systems, as follows:

A new partition is created on the destination system with the same configuration present on the source system.

On the source system, the partition configuration is removed and all involved resources are freed.

No additional change in network or disk setup is required, and the partition can be activated as soon as the migration is completed.

If a partition is down because of scheduled maintenance or not in service for other reasons, an inactive migration may be performed as long as the HMC can communicate with the source and destination servers.

1.4.2 Active migration

By using active migration, a running partition is moved from a source system to a destination system with no disruption of partition operation or user service. When the service provided by the partition cannot be interrupted, its relocation can be performed, with no loss of service, by using the active migration feature.

An active migration performs the same operations as an inactive migration, except that the operating system, the applications, user contexts, running network connections, disk data transactions, and the complete environment are migrated without any loss, and a migration can be activated at any time on any production partition. The physical memory content of the logical partition is copied from system to system, allowing the transfer to be imperceptible to users. During an active migration, the applications continue to handle their normal workload, and the services they provide are not stopped during the process.

No limitation exists on a partition's computing and memory configuration. Active migration is executed in a controlled way and with minimal administrator interaction, so that it can be safely and reliably performed in a very short time frame.

1.5 Architecture

Live Partition Mobility requires a specific hardware infrastructure. Several platform components are involved, including the source and destination systems and their respective Virtual I/O Servers. Live Partition Mobility is controlled by the Hardware Management Console (HMC) or the Integrated Virtualization Manager (IVM). Both inactive and active migrations may involve partitions with any processing unit and memory size configuration, and multiple migrations can be executed concurrently.

This section describes the HMC-based architecture. Chapter 7, “Integrated Virtualization Manager for Live Partition Mobility” on page 221, describes the IVM-based Live Partition Mobility in detail.
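On HMC-managed systems, both inactive and active migrations are driven by the same migrlpar command, which is covered later in this book; a validation pass is normally run first and reports errors or warnings without changing anything. A minimal sketch, with the system and partition names as placeholders; available options can vary with the HMC level.

  # Validate a migration of partition mobile1 from the source to the destination system
  migrlpar -o v -m source_system -t destination_system -p mobile1

  # If validation succeeds, perform the migration
  # (inactive if the partition is powered off, active if it is running)
  migrlpar -o m -m source_system -t destination_system -p mobile1

  # Display the migration state of partitions on the source system
  lslparmigr -r lpar -m source_system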

1. and data of the mobile partition must reside on virtual storage on an external storage subsystem. applications. virtual Fibre Channel-based mapping. No physical adapters may be used by the mobile partition during the migration. Migration of partitions using multiple Virtual I/O Servers is supported.4 or later is supported. Red Hat Enterprise Linux version 5 Update 1 or later. migration of partitions between HMC and IVM systems is not supported. The mobile partition’s network and disk access must be virtualized by using one or more Virtual I/O Servers. Remote migration between systems controlled by different HMCs running Version 7 Release 3. SG24-7590 for details about virtual Fibre Channel and NPIV configuration.5. The operating system. Live Partition Mobility is supported for partitions running AIX 5. An optional redundant HMC configuration is supported. – The Virtual I/O Servers on both systems must be capable of providing virtual access to all disk resources the mobile partition is using.1 Hardware infrastructure The primary requirements for the migration of a logical partition are: Two POWER6 or POWER7 technology-based systems running PowerVM Enterprise Edition with Virtual I/O Server version 1. Virtual Fibre Channel uses N_Port ID Virtualization (NPIV) to access SAN resources using shared Fibre Channel adapters. See Chapter 2 in PowerVM Virtualization on IBM System p: Managing and Monitoring.3 Level 5300-07 or later. Overview 7 . as alternate production profiles might exist). or both. Note: Virtual Fibre Channel support for migration is introduced in PowerVM Virtual I/O Server Version 2.5 or higher and controlled by at least one HMC or each of them running the IVM are required. At the time of publication.1. Chapter 1. AIX 6.1 or later. The destination system must have enough processor and memory resources to host the mobile partition (the partition profile that is running. – The Virtual I/O Servers on both systems must have a shared Ethernet adapter configured to bridge to the same Ethernet network used by the mobile partition. – The disks used by the mobile partition must be accessed through virtual SCSI. or SUSE Linux Enterprise Server 10 Service Pack 1 or later.

Live Partition Mobility requires a specific hardware and microcode configuration that is currently available on POWER6 technology-based systems only. Because the focal point of hardware configuration is the HMC, it has been enhanced to coordinate the process of migrating partitions. The procedure that performs the migration identifies the resource configuration of the mobile partition on the source system and then reconfigures both source and destination systems accordingly.

No limitation exists on the size of the mobile partition; it can even use all resources of the source system offered by the Virtual I/O Server. The destination system must be able to host the mobile partition and must have enough free processor and memory resources to satisfy the partition's requirements before migration is started. The mobile partition's configuration is not changed during the migration.

The mobile partition must not own any physical adapters and must use the Virtual I/O Server for both network and external disk access. Both the source and the target system must have an appropriate shared Ethernet adapter environment to host a moving partition. Virtual network connectivity must be established before activating the partition migration task, while virtual disk setup is performed by the migration process.

All virtual networks in use by the mobile partition on the source system must be available as virtual networks on the destination system. VLANs defined by port virtual IDs (PVIDs) on the VIOS have no meaning outside of an individual server, as all packets are bridged untagged. It is possible for VLAN 1 on CEC 1 to be part of the 192.168.1 network while VLAN 1 on CEC 2 is part of the 10.1.1 network.

The operating system and application data must reside on external disks of the source system because the mobile partition's disk data must be available after the migration to the destination system is completed. An external, shared-access storage subsystem is therefore required. Because the mobile partition's external disk space must be available to the Virtual I/O Servers on the source and destination systems, you cannot use storage pools. Each Virtual I/O Server must create virtual target devices using physical disks and not logical volumes. External disks may be presented to the mobile partition as virtual SCSI or virtual Fibre Channel resources, or both.
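As an illustration of these storage requirements, the following Virtual I/O Server commands (run as padmin) show one way to confirm that a candidate disk has SCSI reservations disabled and to map the whole physical disk, rather than a logical volume, to the mobile partition's virtual SCSI server adapter. The device names hdisk5 and vhost0 are examples only:

$ lsdev -dev hdisk5 -attr reserve_policy
$ chdev -dev hdisk5 -attr reserve_policy=no_reserve
$ mkvdev -vdev hdisk5 -vadapter vhost0 -dev mobile_lpar_disk

The same LUN must also be zoned and masked to the Virtual I/O Server on the destination system.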

Because the same VLAN ID can map to two different networks, it is not sufficient to verify that VLAN 1 exists on both servers. You have to check whether VLAN 1 maps to the same network on both servers. Figure 1-1 shows a basic hardware infrastructure enabled for Live Partition Mobility that is using a single HMC. Each system is configured with a single Virtual I/O Server partition. The mobile partition has only virtual access to network and disk resources. The Virtual I/O Server on the destination system is connected to the same network and is configured to access the same disk space used by the mobile partition. For illustration purposes, the device numbers are all shown as zero, but in practice, they can vary considerably.

Figure 1-1 Hardware infrastructure enabled for Live Partition Mobility
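One way to confirm that the virtual networks match on both systems is to compare the shared Ethernet adapter and VLAN configuration. The following sketch lists the network mappings on each Virtual I/O Server and the virtual Ethernet slots known to the HMC; the managed system name is an example:

$ lsmap -all -net
$ lshwres -r virtualio --rsubtype eth --level lpar -m 9117-MMA-SN100F6A0-L9 \
-F lpar_name,slot_num,port_vlan_id,addl_vlan_ids

You then verify manually that the VLAN IDs used by the mobile partition are bridged to the same external network on both servers.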

The migration process creates a new logical partition on the destination system. This new partition uses the destination's Virtual I/O Server to access the same network and disks used by the mobile partition. During active migration, the state of the mobile partition is copied, as shown in Figure 1-2.
Figure 1-2 A mobile partition during migration

When the migration is complete, the source Virtual I/O Server is no longer configured to provide access to the external disk data. The destination Virtual I/O Server is set up to allow the mobile partition to use the storage. The final configuration is shown in Figure 1-3.
Figure 1-3 The final configuration after a migration is complete

1.5.2 Components involved
The Live Partition Mobility function changes the configuration of the two involved systems and, for active migration, manages the migration without interrupting the service provided by the applications running on the mobile partition. The migration manager function resides on the HMC and is in charge of configuring both systems. It has the responsibility of checking that all hardware and software prerequisites are met. It executes the required commands on the two systems to complete migration while providing migration status to the user.

Note: HMC Version 7 Release 3.4 introduces remote migration, the option of migrating partitions between systems managed by different HMCs. See 5.4, "Remote Live Partition Mobility" on page 130 for details on remote migration.

When an inactive migration is performed, the HMC invokes the configuration changes on the two systems. During an active migration, the running state (memory, registers, and so on) of the mobile partition is also transferred during the process.


Memory management of an active migration is assigned to a mover service partition on each system. During an active partition migration, the source mover service partition extracts the mobile partition’s state from the source system and sends it over the network to the destination mover service partition, which in turn updates the memory state on the destination system. Any Virtual I/O Server partition can be configured as a mover service partition. Live Partition Mobility has no specific requirements on the mobile partition’s memory size or the type of network connecting the mover service partitions. The memory transfer is a process that does not interrupt a mobile partition’s activity and might take time when a large memory configuration is involved on a slow network. Use a high bandwidth connection, such as 1 Gbps Ethernet or larger.
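The mover service partition attribute can be set on an existing Virtual I/O Server from the HMC command line, and the VASI device can be confirmed from the Virtual I/O Server itself. This is a sketch only; the system and partition names are examples, and the msp attribute name should be confirmed for your HMC level:

$ chsyscfg -r lpar -m 9117-MMA-SN100F6A0-L9 -i "name=VIOS1_L9,msp=1"
$ lssyscfg -r lpar -m 9117-MMA-SN100F6A0-L9 -F name,msp

On the Virtual I/O Server, the lsdev -virtual command lists the virtual devices, which should include a vasi device (for example, vasi0) used by the mover function.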

1.6 Operation
Partition migration can be performed either as an inactive or an active operation. This section describes HMC-based partition migration. See Chapter 7, “Integrated Virtualization Manager for Live Partition Mobility” on page 221 for details regarding IVM-based migration.

1.6.1 Inactive migration
The basic steps required for inactive migration are:
1. Prepare the mobile partition for migration, if required, such as removing adapters that are not supported and ensuring that applications support mobility. See 3.5, "Preparing the systems for Live Partition Mobility" on page 54 for additional information.
2. Shut down the mobile partition if it is active.
3. Perform the migration validation procedure provided by the mobile partition's HMC to verify that the migration can be performed successfully.
4. Start the inactive partition migration using the HMC. The HMC connects to both source and destination systems and performs the migration steps, as follows:
a. It transfers the mobile partition's configuration from source system to destination system, including all partition profiles.
b. It updates the destination Virtual I/O Server to provide access of virtual SCSI, virtual Fibre Channel, or both to the mobile partition's disk resources.
c. It updates the source Virtual I/O Server to remove resources used to provide access of virtual SCSI, virtual Fibre Channel, or both to the mobile partition's disk resources.
d. It removes the mobile partition configuration on the source system.
5. When migration is complete, the mobile partition can be activated on the destination system.

The steps executed are similar to those an administrator would follow when performing a manual migration. These actions normally require accurate planning and a system-wide knowledge of the configuration of the two systems because virtual adapters and virtual target devices have to be created on the destination system, following virtualization configuration rules. The inactive migration task takes care of all planning and validation and performs the required activities without user action. This mitigates the risk of human error and executes the movement in a timely manner.
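On the HMC, the validation and the migration itself can also be driven from the command line with the migrlpar command. A minimal sketch for an inactive migration of a powered-off partition named PROD between the two example systems used elsewhere in this book might look like the following; option details vary by HMC release:

$ migrlpar -o v -m 9117-MMA-SN100F6A0-L9 -t 9117-MMA-SN101F170-L10 -p PROD
$ migrlpar -o m -m 9117-MMA-SN100F6A0-L9 -t 9117-MMA-SN101F170-L10 -p PROD

The -o v operation performs only the validation; -o m starts the migration after validation succeeds.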

1.6.2 Active migration
The basic steps required for an active migration are:
1. Prepare the mobile partition for migration, keeping it active, such as removing adapters that are not supported and ensuring that all running applications support mobility. See 3.4, "Live Partition Mobility preparation checks" on page 53 for additional information.
2. Perform the migration validation procedure provided by the mobile partition's HMC to verify that the migration can be performed successfully.
3. Initiate the active partition migration using the HMC. The HMC connects to both source and destination systems and performs the migration steps, as follows:
a. It transfers the mobile partition's configuration from source system to destination system, including all the partition profiles.
b. It updates the destination Virtual I/O Server to provide access of virtual SCSI, virtual Fibre Channel, or both to the mobile partition's disk resources.
c. It activates the mover service partition function on the source and destination Virtual I/O Servers. The mover service partitions copy the mobile partition's state from the source to the destination system.
d. It updates the source Virtual I/O Server to remove resources used to provide access of virtual SCSI, virtual Fibre Channel, or both to the mobile partition's disk resources.
e. It removes the mobile partition's configuration on the source system.

Active migration performs similar steps to inactive migration, but also copies physical memory to the destination system. It keeps applications running, regardless of the size of the memory used by the partition; the service is not interrupted, the I/O continues accessing the disk, and network connections keep transferring data.
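For an active migration, the same migrlpar command is used while the partition is running; the mover service partitions and a new profile name can be supplied explicitly. The following sketch uses the example partition and Virtual I/O Server names from this book and assumes the attribute names accepted by the -i option on your HMC level:

$ migrlpar -o m -m 9117-MMA-SN100F6A0-L9 -t 9117-MMA-SN101F170-L10 -p PROD \
-n PROD_migr -i "source_msp_name=VIOS1_L9,dest_msp_name=VIOS1_L10"

Specifying a new profile name keeps the migration profile from replacing the partition's existing profiles.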

1.7 Combining mobility with other features
Many ways exist to take advantage of Live Partition Mobility. It can be exploited to perform actions that were not previously possible because of complexity or time constraints. Migration is a function that can be combined with existing IBM Power Systems features and software to provide better and more flexible service.

1.7.1 High availability clusters
An environment that has only small windows for scheduled downtime may use Live Partition Mobility to manage many scheduled activities, either to reduce downtime through inactive migration or to avoid service interruption through active migration. For example, if a system has to be shut down because of a scheduled power outage, its hosted partitions may be migrated to powered-on systems before power is cut.

This case is shown in Figure 1-4, where system A has to be shut down. The production database partition is actively migrated to system B, while the production Web application partition is actively migrated to system C. The test environment is not considered vital and is shut down during the outage.

Figure 1-4 Migrating all partitions of a system

Live Partition Mobility is a reliable procedure for system reconfiguration and it may be used to improve the overall system availability. Live Partition Mobility increases global availability, but it is not a high availability solution. It requires both source and destination systems be operational and that the partition is not in a failed state. In addition, it does not monitor operating system and application state and it is, by default, a user-initiated action. Unplanned outages still require specific actions that are normally executed by cluster solutions such as IBM PowerHA. High availability environments also require the definition of automated procedures that detect software and hardware events and activate recovery plans to restart a failed service as soon as possible.

IBM PowerHA for AIX, also known as High Availability Cluster Multiprocessing (HACMP™) for AIX, supports Live Partition Mobility for all IBM POWER6 technology-based servers. Table 1-1 provides support details.

Table 1-1 PowerVM Live Partition Mobility support (PowerHA v5.3, v5.4, v5.4.1, and v5.5 with the required AIX 5.3 TL9 and AIX 6.1 TL2 SP1 levels, RSCT levels, and HACMP APARs IZ02620 and IZ07791)

Cluster software and Live Partition Mobility provide different functions that can be used together to improve the availability and uptime of applications.

1.7.2 AIX Live Application Mobility
Live Application Mobility is a capability provided by PowerVM Workload Partitions Manager™. AIX Version 6.1 allows you to group applications running on the same AIX 6 image. Each group is called a workload partition. Workload partitions can simplify administration, reducing the related cost. Live Workload Partitions are migration-capable. Given two running AIX Version 6.1 images that share a common file system, the administrator can decide to actively migrate a workload partition between operating systems, keeping the applications running. It does not require any partition configuration change.

Although Live Application Mobility is very similar to active Live Partition Mobility, it is a pure AIX 6 function: it can be executed on any server running AIX Version 6.1 and can function on all systems that support AIX Version 6.1, including POWER7, POWER6, POWER5, and POWER4™ technology-based servers. Live Partition Mobility, in contrast, is a PowerVM feature that works for AIX 5.3, AIX 6.1, and Linux operating systems that operate on POWER6 or POWER7 technology-based servers. Think of Live Application Mobility as a relocation, and Live Partition Mobility as a migration.

The Workload Partition migration function does not require a configuration of virtual devices in the source and destination systems. AIX keeps running on both systems and continues to use its allocated resources. Workload Partition migration also requires the destination partition to exist and be running before it is started. It is the system administrator's task to perform a dynamic partition reconfiguration operation to reduce the footprint of the source partition and enlarge the destination partition.

Figure 1-5 represents an example of Live Workload Partitions usage. System B is a system with three different workloads. Each of them can be migrated to another AIX Version 6.1 image even if they run on different hardware platforms.

Figure 1-5 AIX Workload Partition example

Live Partition Mobility and AIX Live Application Mobility have different scopes but have similar characteristics. They can be used in conjunction to provide even higher flexibility in a POWER6 or POWER7 environment.


Chapter 2. Live Partition Mobility mechanisms

This chapter presents the components involved in Live Partition Mobility managed by the Hardware Management Console (HMC) and their respective roles. It also describes the mechanisms of Live Partition Mobility in detail. It discusses the compatibility, capability, and readiness of partitions and systems to participate in inactive and active migrations. The chapter concludes with observations on the influence of the infrastructure on partition migration.

This chapter contains the following topics:
2.1, "Live Partition Mobility components" on page 20
2.2, "Live Partition Mobility prerequisites" on page 23
2.3, "Partition migration high-level workflow" on page 26
2.4, "Inactive partition migration" on page 27
2.5, "Active partition migration" on page 31
2.6, "Performance considerations" on page 42
2.7, "AIX and active migration" on page 43
2.8, "Linux and active migration" on page 44

Information regarding Live Partition Mobility using components managed by the Integrated Virtualization Manager (IVM) is detailed in Chapter 7, "Integrated Virtualization Manager for Live Partition Mobility" on page 221.

2.1 Live Partition Mobility components
Inactive and active partition migration from one physical system to another is achieved through interaction between several components, as Figure 2-1 shows.

Figure 2-1 Live Partition Mobility components

These components and their roles are described in the following list.

Hardware Management Console (HMC)
The HMC is the central point of control. It coordinates administrator initiation and setup of the subsequent migration command sequences that flow between the various partition migration components. The HMC interacts with the service processors and POWER Hypervisor™ on the source and destination servers, the Virtual I/O Server partitions, the mover service partitions, and the mobile partition itself. The HMC provides both a graphical user interface (GUI) wizard and a command-line interface to control migration.

Resource Monitoring and Control (RMC)
The RMC is a distributed framework and architecture that allows the HMC to communicate with a managed logical partition. The HMC uses this capability to remotely execute partition-specific commands.

Dynamic LPAR Resource Manager
This component is an RMC daemon that runs inside the AIX, Linux, and Virtual I/O Server partitions.

Mover service partition (MSP)
MSP is an attribute of the Virtual I/O Server partition. It enables the specified Virtual I/O Server partition to provide the function that asynchronously extracts, transports, and installs partition state. Two mover service partitions are involved in an active partition migration: one on the source system, the other on the destination system. Mover service partitions are not used for inactive migrations.

Virtual asynchronous services interface (VASI)
The source and destination mover service partitions use this virtual device to communicate with the POWER Hypervisor to gain access to partition state. The VASI device is included on the Virtual I/O Server, but is only used when the server is declared as a mover service partition.

Partition profiles
The HMC copies all of the mobile partition's profiles without modification to the target system as part of the migration process. The HMC creates a new migration profile containing the partition's current state. Unless you specify a profile name when the migration is started, this profile replaces the existing profile that was last used to activate the partition. If you specify an existing profile name, the HMC replaces that profile with the new migration profile. Therefore, if you do not want the migration profile to replace any of the partition's existing profiles, you must specify a new, unique profile name when starting the migration. If the mobile partition's profile is part of a system profile on the source server, then it is automatically removed after the source partition is deleted. It is not automatically added to a system profile on the target server. All profiles belonging to the mobile partition are deleted from the source server after the migration has completed.

POWER Hypervisor
Active partition migration requires server hypervisor support to process both informational and action requests from the HMC and to transfer partition state through the VASI device in the mover service partitions.

Virtual I/O Server
Only virtual adapters can be migrated with a partition. Therefore, the physical resources that back the mobile partition's virtual adapters must be accessible by the Virtual I/O Servers on both the source and destination systems.

2.1.1 Other components affecting Live Partition Mobility
Though not considered to be part of Live Partition Mobility, certain other IBM Power Systems server components can influence, or can be influenced by, the mobility of a partition.

Time reference
Time reference is an attribute of partitions, including Virtual I/O Server partitions. It can be set or reset through the POWER Hypervisor while the partition is running. The time reference partition (TRP) setting has been introduced to enable the POWER Hypervisor to synchronize the mobile partition's time-of-day as it moves from one system to another. It uses Coordinated Universal Time (UTC) derived from a common network time protocol (NTP) server with NTP clients on the source and destination systems. More than one TRP can be specified per system. The POWER Hypervisor uses the longest running time reference partition as the provider of authoritative system time. This partition attribute is only supported on managed systems that are capable of active partition migration.

Synchronizing the time-of-day clocks for the source and destination Virtual I/O Server partitions is optional for both active and inactive partition migration. However, it is a recommended step for active partition migration. If you choose not to complete this step, the source and destination systems will synchronize the clocks while the mobile partition is moving from the source system to the destination system.

Integrated Virtual Ethernet adapter (IVE)
An IVE adapter (also referred to as Host Ethernet Adapter) uses a two- or four-port integrated Ethernet adapter, directly attached to the POWER Hypervisor. This provides a partition with a virtual Ethernet communications link, seen as a Logical Host Ethernet Adapter (LHEA), without recourse to a shared Ethernet adapter in a Virtual I/O Server. The hypervisor can create up to 32 logical Ethernet ports that can be given to one or more logical partitions.

Performance monitor API (PMAPI)
PMAPI is an AIX subsystem comprising commands, libraries, and a kernel extension that controls the use of the POWER performance registers.

Barrier Synchronization Registers (BSR)
Barrier synchronization registers provide a fast, lightweight barrier synchronization between CPUs. This facility is intended for use by application programs that are structured in a single instruction, multiple data (SIMD) manner. Such programs often proceed in phases where all tasks synchronize processing at the end of each phase. The BSR is designed to accomplish this efficiently. Barrier synchronization registers cannot be migrated or reconfigured dynamically.

2.2 Live Partition Mobility prerequisites
Live Partition Mobility requires coordinated movement of a partition's state and resources. Migratable partitions move between capable, compatible, and ready systems. The high-level prerequisites for Live Partition Mobility are in the following list. If any of these elements are missing, a migration cannot occur:
- A ready source system that is capable of migration
- A ready destination system that is capable of migration
- Compatibility between the source and destination systems
- The source and destination systems, which may be under the control of a single HMC and may also include a redundant HMC
- A migratable, ready partition to be moved from the source system to the destination system
- Virtual I/O Servers and, for active migrations, a mover service partition on the source and destination systems
- One or more storage area networks (SAN) that provide connectivity to all of the mobile partition's disks to the Virtual I/O Server partitions on both the source and destination servers. The LUNs used for virtual SCSI must be zoned and masked to the Virtual I/O Servers on both systems. SCSI reservation must be disabled. Virtual Fibre Channel LUNs should be configured as described in Chapter 2 of PowerVM Virtualization on IBM System p: Managing and Monitoring, SG24-7590. Hardware-based iSCSI connectivity may be used in addition to SAN. The mobile partition accesses all migratable disks through virtual Fibre Channel, or virtual SCSI, or a combination of these devices. The mobile partition's virtual disks must be mapped to LUNs and cannot be part of a storage pool or logical volume on the Virtual I/O Server.
- One or more physical IP networks (LAN) that provide the necessary network connectivity for the mobile partition through the Virtual I/O Server partitions on both the source and destination servers. The mobile partition accesses all migratable network interfaces through virtual Ethernet devices.
- An RMC connection to manage inter-system communication

Note: Beginning with HMC Version 7 Release 3.4, the destination system may be managed by a remote HMC. Mobility operations to a remotely managed destination system are discussed in 5.4, "Remote Live Partition Mobility" on page 130.

A single HMC can control several concurrent migrations. There are no architectural restrictions on the number of migrations that can be underway at any one time. In practice, the maximum number of concurrent migrations is limited by the processing capacity of the HMC and contention for HMC locks. It is possible to have several mover service partitions on a system. However, a single mover service partition can handle a maximum of four simultaneous active migrations.

2.2.1 Capability and compatibility
The first step of any mobility operation is to validate the capability and compatibility of the source and destination systems. Before initiating the migration of a partition, the HMC verifies the capability and compatibility of the source and destination servers, mover service partitions, Virtual I/O Servers, and the characteristics of the mobile partition to determine whether or not a migration is possible. The hardware, firmware, operating system, and HMC versions that are required for Live Partition Mobility, along with the system compatibility requirements, are described in Chapter 3, "Requirements and preparation" on page 45.

2.2.2 Readiness
Migration readiness is a dynamic partition property that changes over time. For an inactive migration, the partition must be powered down, but must be capable of booting on the destination system.

Server readiness
A server that is running on battery power is not ready to receive a mobile partition; it cannot be selected as a destination for partition migration. A server that is running on battery power may be the source of a mobile partition; indeed, that it is running on battery power may be the impetus for starting the migration.

Infrastructure readiness
A migration operation requires a SAN and a LAN to be configured with their corresponding virtual SCSI, virtual Fibre Channel, VLAN, and virtual Ethernet devices. At least one Virtual I/O Server on both the source and destination systems must be configured as a mover service partition for active migrations. The HMC must have RMC connections to the Virtual I/O Servers and a connection to the service processors on the source and destination servers. For an active migration, the HMC also needs RMC connections to the mobile partition and the mover service partitions.
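The RMC connections that this readiness check depends on can be verified from the HMC restricted shell before a migration is attempted. The following sketch lists the partitions that currently have an active RMC/DLPAR connection to the HMC:

$ lspartition -dlpar

A Virtual I/O Server or mobile partition that does not appear in the output, or that is not reported as active, typically needs its RMC daemons restarted or a network or firewall fix before validation will pass.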

2.2.3 Migratability
The term migratability refers to a partition's ability to be migrated and is distinct from partition readiness. A partition may be migratable but not ready. A partition that is not migratable may be made migratable with a configuration change. For active migration, consider whether a shutdown and reboot is required.

When considering a migration, also consider the following additional prerequisites.

General prerequisites:
– The memory and processor resources required to meet the mobile partition's current entitlements must be available on the destination server.
– The partition must not have any required dedicated physical adapters.
– The partition must not have any logical host Ethernet adapters.
– The partition is not a Virtual I/O Server.
– The partition is not designated as a redundant error path reporting partition.
– The partition does not have any of its virtual SCSI disks defined as logical volumes in any Virtual I/O Server. All virtual SCSI disks must be mapped to LUNs visible on a SAN or iSCSI.
– The partition has virtual Fibre Channel disks configured as described in Section 5.11, "Virtual Fibre Channel" on page 187.
– The partition is not part of an LPAR workload group. A partition can be dynamically removed from a group.
– The partition has a unique name. A partition cannot be migrated if any partition exists with the same name on the destination server.

In an inactive migration only, the following characteristics apply:
– It is a partition in the Not Activated state
– May use huge pages
– May use the barrier synchronization registers

In an active migration only, the two default server serial adapters that are automatically created and assigned to a partition when a partition is created are automatically recreated on the destination system by the migration process.

2.3 Partition migration high-level workflow
Inactive and active partition migration each have the same four-step sequence:
1. Preparation: Ready the infrastructure to support Live Partition Mobility.
2. Validation: Check the configuration and readiness of the source and destination systems.
3. Migration: Transfer of partition state from the source to destination takes place. One command is used to launch both inactive and active migrations. The HMC determines the appropriate type of migration to use based on the state of the mobile partition:
– If the partition is in the Not Activated state, the migration is inactive.
– If the partition is in the Running state, the migration is active.
4. Completion: Free unused resources on the source system and the HMC.

The remainder of this chapter describes the inactive and active migration processes.

Note: As part of a migration process, the HMC copies all of the mobile partition's profiles as-is to the destination system. The HMC also creates a new migration profile containing the partition's current state and, unless you specify a profile name, this profile replaces the existing profile that was last used to activate the partition. If you specify an existing profile name, the HMC replaces that profile with the new migration profile. Therefore, if you want to keep the partition's existing profiles, you should specify a new and unique profile name when initiating the migration. If you add an adapter (physical or virtual) to a partition using dynamic reconfiguration, it is added to the profile as desired.
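While these steps run, the state of a migration can be followed from the HMC command line as well as from the GUI. A sketch using an example managed system name follows; the migration_state values you see depend on the phase:

$ lslparmigr -r lpar -m 9117-MMA-SN100F6A0-L9 -F name,migration_state

Partitions that are not being moved report a state such as Not Migrating, while a partition being moved passes through states such as Migration Starting and Migration In Progress.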


2.4 Inactive partition migration
Inactive partition migration allows you to move a powered-off partition, and its profiles and virtualized resources, from one server to another. The mobile partition retains its name, its inactive state, and its NVRAM. Its virtual I/O resources are assigned and remapped to the appropriate Virtual I/O Server partitions on the destination system. Its processor and memory resources remain unassigned until it is activated.

2.4.1 Introduction
The HMC is the central point of control, coordinating administrator actions and migration command sequences. Because the mobile partition is powered off, only the static partition state (definitions and configurations) is transferred from source to destination. The transfer is performed by the controlling HMC, the service processors, and the POWER Hypervisor on the two systems; there is no dynamic state, so mover service partitions are not required. The HMC creates a migration profile for the mobile partition on the destination server corresponding to its current configuration. All profiles associated with the mobile partition are moved to the destination server after the partition definition has been created on the destination server. Note: Because the HMC always migrates the latest activated profile, an inactive partition that has never been activated is not migratable. To meet this requirement, booting to an operating system is unnecessary; booting to the SMS menu is sufficient. Any changes to the latest activated profile after power-off are not preserved. To save the changes, the mobile partition must be reactivated and shut down.


2.4.2 Validation phase
The HMC performs a pre-check to ensure that you are performing a valid migration, that no high-level blocking problems exist, and that the migration has a good chance of being successful. The validation workflow is schematically shown in Figure 2-2.
Figure 2-2 Inactive migration validation workflow

The inactive migration validation process performs the following operations:
- Checks the Virtual I/O Server and hypervisor migration capability and compatibility on the source and destination
- Checks that resources (processors, memory, and virtual slots) are available to create a shell partition on the destination system with the exact configuration of the mobile partition
- Verifies the RMC connections to the source and destination Virtual I/O Servers
- Ensures that the partition name is not already in use at the destination
- Checks for virtual MAC address uniqueness
- Checks that the partition is in the Not Activated state
- Ensures that the mobile partition is an AIX or Linux partition, is not an alternate path error logging partition, is not a service partition, and is not a member of a workload group
- Ensures that the mobile partition has an active profile
- Checks the number of current inactive migrations against the number of supported inactive migrations
- Checks that all required I/O devices are connected to the mobile partition through a Virtual I/O Server, that is, there are no required physical adapters
- Verifies that the virtual SCSI disks assigned to the partition are accessible by the Virtual I/O Servers on the destination system
- Creates the virtual adapter migration map that associates adapters on the source Virtual I/O Servers with adapters on the destination Virtual I/O Servers
- Ensures that no virtual SCSI disks are backed by logical volumes and that no virtual SCSI disks are attached to internal disks (not on the SAN)
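A quick way to see whether the disk-related checks will pass is to inspect the backing devices on the source Virtual I/O Server before running the validation. In the following sketch, the adapter name is an example; the Backing device field reported by lsmap must be a physical volume such as an hdisk, not a logical volume:

$ lsmap -vadapter vhost0

If lsmap shows a logical volume or an internal (non-SAN) disk as the backing device, the inactive migration validation fails.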

2.4.3 Migration phase
If all the pre-migration checks pass, the migration phase can start. For inactive partition migration, the transfer of state follows a path: 1. From the source hypervisor to the HMC 2. From the HMC to the destination hypervisor This path is shown in Figure 2-3.

Figure 2-3 Inactive migration state flow


The inactive migration workflow is shown in Figure 2-4.

Figure 2-4 Inactive migration workflow

The HMC performs the following workflow steps:
1. Inhibits any changes to the source system and the mobile partition that might invalidate the migration.
2. Extracts the virtual device mappings from the source Virtual I/O Servers and uses this information to generate a source-to-destination virtual adapter migration map. This map ensures no loss of multipath I/O capability for virtual SCSI, virtual Fibre Channel, and virtual Ethernet. The HMC fails the migration request if the device migration map is incomplete.
3. Creates a compatible partition shell on the destination system.
4. Creates a migration profile for the mobile partition's current (last-activated) profile. If the mobile partition was last activated with profile my_profile and resources were moved in to or out of the partition before the partition was shut down, the migration profile will differ from that of my_profile.
5. Copies over the partition profiles. Copying includes all existing profiles associated with the mobile partition on the source system and the migration profile. The existing partition profiles are not modified at all during the migration; the virtual devices are not re-mapped to the new system.
6. Creates the required adapters (virtual SCSI, virtual Fibre Channel, or both) in the Virtual I/O Servers on the destination system and completes the logical unit number (LUN) to virtual SCSI adapter mapping as well as the NPIV-enabled adapter to virtual Fibre Channel adapter mapping.

Note: Virtual slot numbers can change during migration. When moving a partition to a server and then back to the original, it will not have the same slot numbers. If this information is required, you should record the slot numbers.

7. On completion of the transfer of state, the HMC sets the migration state to completed and informs the POWER Hypervisor on both the source and destination.

2.4.4 Migration completion phase
When the migration is complete, unused resources are deleted, as follows:
1. The source Virtual I/O Servers remove the virtual adapter slots and virtual target devices used by the mobile partition.
2. The HMC removes the virtual slots from the source Virtual I/O Server's profile.
3. The HMC deletes the partition on the source server.

2.4.5 Stopping an inactive partition migration
You can stop an inactive partition migration from the controlling HMC while the partition is in the Migration starting state. The HMC performs automatic rollback of all reversible changes and identifies all non-reversible changes, if any.

2.5 Active partition migration
The active partition migration function provides the capability to move a running operating system, hosted middleware, and applications between two systems without disrupting the service provided. Databases, application servers, network and SAN connections, and user applications are all transferred in a manner transparent to users. The mobile partition retains its name, its active state, its NVRAM, its profiles, and its current configuration. Its virtual I/O resources are assigned and remapped to the appropriate Virtual I/O Server partitions on the destination system.

2.5.1 Active partition state
In addition to the partition definition and resource configuration, active migration involves the transfer of active run-time state. This state includes the:
- Partition's memory
- Hardware page table (HPT)
- Processor state
- Virtual adapter state
- Non-volatile RAM (NVRAM)
- Time of day (ToD)
- Partition configuration
- State of each resource

The mover service partitions on the source and destination, under the control of the HMC, move this state between the two systems.

2.5.2 Preparation
After you have created the Virtual I/O Servers and enabled the mover service partitions, you must prepare the source and destination systems for migration:
1. Prepare the destination Virtual I/O Server:
a. Configure the shared Ethernet adapter as necessary to bridge VLANs.
b. Configure the SAN such that requisite storage devices are available.
2. Synchronize the time-of-day clocks on the mover service partitions using an external time reference, such as the network time protocol (NTP). This step is optional. The step is not required by the migration mechanisms, but it increases the accuracy of time measurement during migration. Even if this step is omitted, the migration process correctly adjusts the partition time. Time never goes backward on the mobile partition during a migration.
3. Prepare the partition for migration:
a. Use dynamic reconfiguration on the HMC to remove all dedicated I/O, such as PCI slots, GX slots, and Integrated Virtual Ethernet, and any virtual optical devices, from the mobile partition.
b. Remove the partition from a partition workload group.
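Step 3a can be performed from the HMC command line as well as the GUI. The following sketch first lists the physical I/O slots currently assigned to the mobile partition and then removes one of them dynamically; the managed system name, partition name, and DRC index are examples only:

$ lshwres -r io --rsubtype slot -m 9117-MMA-SN100F6A0-L9 \
-F lpar_name,drc_index,description | grep PROD
$ chhwres -r io --rsubtype slot -m 9117-MMA-SN100F6A0-L9 -o r -p PROD -l 21010207

The adapter must first be deconfigured inside AIX (for example, with rmdev) before the slot can be removed from the running partition.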

Initiate the partition migration by selecting the following items, with either the graphical user interface (GUI) or command-line interface (CLI) on the HMC:
– The partition to migrate
– The destination system
– Optionally, the mover service partition on the source and destination systems. If there is only one active mover service partition on the source or the destination server, the mover service partition selection is automatic. If there are multiple active mover service partitions on one or both, you can either specify which ones to use, or let the HMC choose for you.
– Optionally, the virtual device mappings in the destination Virtual I/O Server
See 5.7, "The command-line interface" on page 162 for details.

2.5.3 Validation phase
After the source and destination mover service partitions have been identified, the HMC performs a pre-check to ensure that the migration is valid, that no high-level blocking problems exist, and that the environment satisfies the prerequisites for a migration operation. After the pre-check, the HMC prevents any configuration changes to the partition that might invalidate the migration and then proceeds to perform a detailed capability, compatibility, migratability, and readiness check on the source and destination systems. All configuration checks are performed during each validation to provide a complete list of potential problems.

The workflow for the active migration validation is shown in Figure 2-5.

Figure 2-5 Active migration validation workflow

Configuration checks
The HMC performs the following configuration checks:
- Checks the source and destination systems, POWER Hypervisor, Virtual I/O Servers, and mover service partitions for active partition migration capability and compatibility
- Checks that the RMC connections to the mobile partition, the source and destination Virtual I/O Servers, and the connection between the source and destination mover service partitions are established
- Checks that there are no required physical adapters in the mobile partition and that there are no required virtual serial slots higher than slot 2
- Checks that no client virtual SCSI disks on the mobile partition are backed by logical volumes and that no disks map to internal disks
- Checks the mobile partition, its OS, and its applications for active migration capability. An application registers its capability with AIX and may block migrations
- Checks that the logical memory block size is the same on the source and destination systems
- Checks that the type of the mobile partition is AIX or Linux and that it is not an alternate error logging partition and not a mover service partition
- Checks that the mobile partition is not configured with barrier synchronization registers
- Checks that the mobile partition is not configured with huge pages
- Checks that the partition state is active or running
- Checks that the mobile partition is not in a partition workload group
- Checks the uniqueness of the mobile partition's virtual MAC addresses
- Checks that the mobile partition's name is not already in use on the destination server
- Checks the number of current active migrations against the number of supported active migrations

Resource availability checks
After verifying system and partition configurations, the HMC determines whether sufficient resources are available on the destination server to host the inbound mobile partition. The HMC performs the following tasks:
1. Checks that the necessary resources (processors, memory, and virtual slots) are available to create a shell partition on the destination system with the exact configuration of the mobile partition.
2. Generates a source-to-destination hosting virtual adapter migration map, ensuring no loss of multipath I/O capability for virtual SCSI, virtual Fibre Channel, and virtual Ethernet. The HMC fails the migration request if the device migration map is incomplete.
3. Instructs the operating system in the mobile partition to check its own capacity and readiness for migration. AIX passes the check-migrate request to those applications and kernel extensions that have registered to be notified of dynamic reconfiguration events. The operating system either accepts or rejects the migration. In the latter case, the HMC fails the migration request.

The HMC inhibits all further dynamic reconfiguration of the mobile partition that might invalidate the migration: CPU, memory, slot, variable capacity weight, processor entitlement, and LPAR group.

This is the end of the validation phase. At this point, there have been no state changes to the source and destination systems or to the mobile partition. The partition migration phase is ready to start.

2.5.4 Partition migration phase
If all the validation checks pass, then the HMC initiates the migration procedure. From this point forward, all state changes are rolled back in the event of an error. Figure 2-6 shows the activities and workflow of the migration phase of an active migration.

Figure 2-6 Migration phase of an active migration

For active partition migration, the transfer of partition state follows a path:
1. From the mobile partition to the source system's hypervisor.
2. From the source system's hypervisor to the source mover service partition.
3. From the source mover service partition to the destination mover service partition.
4. From the destination mover service partition to the destination system's hypervisor.
5. From the destination system's hypervisor to the partition shell on the destination.

The path is shown in Figure 2-7.

Figure 2-7 Active migration partition state transfer path

The migration process consists of the following steps:
1. The HMC creates a compatible partition shell on the destination system. This shell partition is used to reserve the resources required to receive the inbound mobile partition. The creation of the partition shell on the destination system ensures that all required resources are available for the mobile partition and cannot be stolen during the migration. The configuration of the partition on the source system includes:
– Processor configuration, which is dedicated or shared processors, processor counts, and entitlements (minimum, maximum, and desired)
– Memory configuration (minimum, maximum, and desired)
– Virtual adapter configuration
The current partition profile associated with the mobile partition is created on the destination system. The pending values of the mobile partition (changes made to the partition's profile since activation) are not preserved across the migration; the current values of the partition on the source system become both the pending and the current values of the partition on the destination system.
2. The HMC configures the mover service partitions on the source and destination systems. These two movers establish:
– A connection to their respective POWER Hypervisor through the VASI adapter
– A private, full-duplex communications channel between themselves, over a standard TCP/IP connection, for transporting the moving partition's state
3. The HMC issues a prepare for migration event to the migrating operating system (still on the source system), giving the mobile partition the opportunity to get ready to be moved. The operating system passes this event to registered kernel extensions and applications, so that they may take any necessary actions, such as reducing memory footprint, throttling workloads, adjusting heartbeats, and other timeout thresholds. The operating system inhibits access to the PMAPI registers and zeroes internal counters upon receipt of this event. If the partition is not ready to perform a migration at this time, then it returns a failure indicator to the HMC, which cancels the migration and rolls back all changes.
4. The HMC creates the virtual target devices, and virtual SCSI server adapters, in each of the Virtual I/O Servers on the destination system that will host the virtual SCSI and virtual Fibre Channel client adapters of the mobile partition. This step uses the virtual adapter migration map created during the validation phase.
5. The mover on the source system starts sending the partition state to the mover on the destination system, copying the mobile partition's physical pages to the physical memory reserved by the partition shell on the destination. Because the mobile partition is still active, with running applications, its state continues to change while the memory is being moved from one system to the other. Memory pages that are modified during the transfer of state are marked modified, or dirty. After the first pass, the source mover re-sends all the dirty pages. This process is repeated until the number of pages marked as dirty at the end of each loop no longer decreases, or is considered sufficiently small, or a timeout is reached. Migration stops if an error occurs.
6. Based on the total number of pages associated with partition state and the number of pages left to transmit, the mover service partition instructs the hypervisor on the source system to suspend the mobile partition.
7. The mobile partition confirms the suspension by quiescing all its running threads. Start of suspend window period. The partition is now suspended.
8. During the partition suspension, the source mover service partition continues to send partition state to the destination server. This is the point of no return. If the migration fails after this, recovery will complete the migration on to the destination system; the migration can no longer be rolled back to the source.
9. The mobile partition resumes execution on the destination server, re-establishing its operating environment. The partition is now resumed. The mobile partition might resume execution before all its memory pages have been copied to the destination. If the mobile partition requires a page that has not yet been migrated, the page is demand-paged from the source system.

10. The mobile partition recovers I/O, retrying all pending I/O requests that were not completed while on the source system. It also sends a gratuitous ARP request on all VLAN virtual adapters to update the ARP caches in the various switches and systems in the external network. The partition is now active and visible again. End of suspend window period.
11. When the destination mover service partition receives the last dirty page from the source system, the migration is complete.

This technique significantly reduces the length of the pause, during which the partition is unavailable. The suspend window period (from end of step 7 through end of step 10) lasts only a few seconds.

With the completion of the state transfer, the communications channel between the two mover service partitions is closed along with their VASI connections to their respective POWER Hypervisor.

2.5.5 Migration completion phase
The final steps of the migration return all resources to the source and destination systems and restore the partition to its fully functional state, as follows:
1. The HMC informs the source and destination mover service partitions that the migration is complete and that they can delete the migration data from their tables.
2. The Virtual I/O Servers on the source system remove the adapters (virtual SCSI server, virtual Fibre Channel, or both) associated with the mobile partition by:
– Unlocking the virtual SCSI and virtual Fibre Channel server adapters
– Removing the device-to-LUN mappings for virtual SCSI
– Removing the virtual Fibre Channel-to-NPIV enabled physical adapter mappings
– Closing the device drivers
– Deleting the virtual SCSI and virtual Fibre Channel server adapters
3. The HMC deletes the mobile partition and all its profiles on the source server.
4. On the mobile partition, AIX notifies all registered kernel extensions and applications that the migration is complete so that they may perform any required recovery operations.
5. You may now add dedicated I/O adapters, as required, by using dynamic reconfiguration, and add the mobile partition to a custom system group.

2.5.6 Virtual I/O Server selection
If the HMC cannot find a virtual adapter mapping for a migration, and you have not specified a mapping, the migration is halted at the validation phase. The HMC must identify at least one possible destination Virtual I/O Server for each virtual SCSI and virtual Fibre Channel client adapter assigned to the mobile partition. Destination Virtual I/O Servers must have access to all LUNs used by the mobile partition, or the HMC fails the pre-check or migration. If the destination Virtual I/O Servers cannot access all the VLANs required by the mobile partition, the HMC halts the migration. If multiple source-to-destination Virtual I/O Server combinations are possible for virtual adapter mappings, the HMC selects one of them.

Suggested mappings are given if the following criteria are met:
- The mobile partition's virtual client SCSI and virtual Fibre Channel adapters that are assigned to a single Virtual I/O Server on the source server will be assigned to a single Virtual I/O Server on the destination system.
- The mobile partition's virtual SCSI and virtual Fibre Channel client adapters that are assigned to two or more different Virtual I/O Servers on the source system will be assigned to the same number of Virtual I/O Servers on the destination system.

Failure to find a suggested good match during the separately-run migration pre-check happens if either of the following statements is true:
- The mobile partition's virtual SCSI and virtual Fibre Channel client adapters that are currently assigned to a single Virtual I/O Server on the source server have to be assigned to different Virtual I/O Servers on the destination system.
- The mobile partition's virtual SCSI and virtual Fibre Channel client adapters that are currently assigned to different Virtual I/O Servers on the source server will have to be assigned to a single Virtual I/O Server on the destination.

Both possible and suggested HMC-selected Virtual I/O Servers, if they exist, are viewable in the HMC through the GUI and the CLI lslparmigr -r virtualio command, as displayed in Example 2-1.

Example 2-1 Sample output of the lslparmigr -r virtualio command
$ lslparmigr -r virtualio -m 9117-MMA-SN100F6A0-L9 \
-t 9117-MMA-SN101F170-L10 --filter lpar_names=PROD
possible_virtual_scsi_mappings=30/VIOS1_L10/1,\
suggested_virtual_scsi_mappings=30/VIOS1_L10/1,\
possible_virtual_fc_mappings=none,\
suggested_virtual_fc_mappings=none

2.5.7 Source and destination mover service partitions selection
The HMC selects a source and destination mover service partition unless you provide them explicitly. If no movers are available, the migration fails. Valid source and destination mover service partition pairs that can be used for a migration can be seen with the HMC GUI and HMC CLI with the lslparmigr -r msp command, as displayed in Example 2-2.

Example 2-2 Sample output of the lslparmigr -r msp command
$ lslparmigr -r msp -m 9117-MMA-SN100F6A0-L9 -t 9117-MMA-SN101F170-L10 \
--filter lpar_names=PROD
source_msp_name=VIOS1_L9,source_msp_id=1,\
dest_msp_names=VIOS1_L10,dest_msp_ids=1,\
ipaddr_mappings=9.3.5.3//1/VIOS1_L10/9.3.5.111/

If either of the chosen mover service partitions determines that its VASI cannot handle a migration, or if the HMC receives a VASI device error from a mover service partition, the HMC stops the migration with an error.

2.5.8 Stopping an active migration
You can stop an active partition migration through the controlling HMC while the mobile partition is in the Migration starting state. The allowable window is while the partition is in the Migration starting state on the source system. If stopped during the allowable window, the partition remains on the source system as though the migration had not been started. If you try to stop a migration after the Migration starting state, then the HMC takes no action other than displaying an error message.
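From the HMC command line, a migration in the allowable window can be stopped, and a migration that was interrupted (for example, by an HMC or network failure) can be recovered, with the same migrlpar command. The names below are examples; -o s stops and -o r recovers:

$ migrlpar -o s -m 9117-MMA-SN100F6A0-L9 -p PROD
$ migrlpar -o r -m 9117-MMA-SN100F6A0-L9 -p PROD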

If the source or destination server is powered down after the HMC has enabled suspension on the mobile partition, the partitions come back up in the powered off state, with a migration state of invalid. If the migration cannot complete after the hypervisor resumes operation of the partition on the destination system, the HMC must stop the migration and perform a rollback of all reversible changes.

2.6 Performance considerations

Active partition migration involves moving the state of a partition from one system to another while the partition is still running. The mover service partitions, working with the hypervisor, use partition virtual memory functions to track changes to partition memory state on the source system while it is transferring memory state to the destination system.

During the migration phase, an initial transfer of the mobile partition's physical memory from the source to the destination occurs. Because the mobile partition is still active, a portion of the partition's resident memory will almost certainly have changed during this pass. The hypervisor keeps track of these changed pages for retransmission to the destination system in a dirty page list. It makes additional passes through the changed pages until the mover service partitions detect that a sufficient amount of pages are clean or the timeout is reached.

The speed and load of the network that is used to transfer state between the source and destination systems influence the time required for both the transfer of the partition state and the performance of any remote paging operations. The amount of changed resident memory after the first pass is controlled more by write activity of the hosted applications than by the total partition memory size. Nevertheless, a reasonable assumption is that partitions with a large memory requirement have higher numbers of changed resident pages than smaller ones.

To ensure that active partition migrations are truly nondisruptive, even for large partitions, the POWER Hypervisor resumes the partition on the destination system before all the dirty pages have been migrated over to the destination. If the mobile partition tries to access a dirty page that has not yet been migrated from the source system, the hypervisor on the destination sends a demand paging request to the hypervisor on the source to fetch the required page.

Providing a high-performance network between the source and destination mover partitions and reducing the partition's memory update activity prior to migration will improve the latency of the state transfer phase of migration. We suggest using a dedicated network for state transfer, with a nominal bandwidth of at least 1 Gbps.
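While an active migration is running, its progress can be observed from the HMC command line, which helps when judging how long the state transfer phase is taking. This is a sketch with a placeholder system name; the exact set of output fields depends on the HMC level:

$ lslparmigr -r lpar -m source_system -F name,migration_state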

2.7 AIX and active migration

An AIX partition continues running during an active migration. Most AIX features work seamlessly before, during, and after the migration. Although AIX is migration safe, verify that any applications you are running are migration safe or aware. See 5.8, "Migration awareness" on page 177 for more information. Some AIX features are sensitive to a migration. These include, but are not limited to, the following features:

System and advanced accounting
Workload manager
System trace
Resource sets
  Including exclusive-use processor resource sets
Pinned memory
Memory affinity
  See 5.8, "Migration awareness" on page 177 for details about memory affinity considerations.
Large memory pages
  Huge memory pages cannot be used.
Processor binding
  Processes remain bound to the same logical processor throughout the migration. See 5.8, "Migration awareness" on page 177 for details on processor binding considerations.
Kernel and kernel extensions
  See 5.10, "Making kernel extension migration aware" on page 185 for details about how to make a kernel extension migration aware.

Performance monitoring tools (such as the topas, tprof, and filemon commands, and so on) can run on a mobile partition during an active migration. However, because of underlying hardware changes (performance monitor counters that may be reset, and so on), the data that these tools report during the migration process might not be significant.
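As an illustration (not part of the original checklist), two of the items above can be inspected quickly on the running AIX partition before a migration is planned; both commands only report current usage:

# vmstat -l
# bindprocessor -q

The -l flag adds the active and free large page columns to the vmstat report, and bindprocessor -q lists the logical processors that processes can be bound to.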

2.8 Linux and active migration

A Linux partition continues running during an active migration. Similar to AIX, Linux is migration-safe. Many features on supported Linux operating systems, such as IBM RAS tools and dynamic reconfiguration, work seamlessly before, during, and after migration. A good idea is to verify that any applications not included in the full distributions of the supported Linux operating systems are migration-safe or aware.
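For example, on a supported Linux distribution you can confirm that the RSCT packages delivered with the IBM service and productivity tools are installed before relying on RMC for an active migration. This is an illustrative check only; package names vary by distribution and release:

# rpm -qa | grep -i rsct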

Chapter 3. Requirements and preparation

In this chapter, we discuss the preparatory steps required for a logical partition to be migrated from one system to another system. This chapter contains the following topics:
3.1, "Introduction" on page 46
3.2, "Skill considerations" on page 46
3.3, "Requirements for Live Partition Mobility" on page 47
3.4, "Live Partition Mobility preparation checks" on page 53
3.5, "Preparing the systems for Live Partition Mobility" on page 54
3.6, "Preparing the HMC for Live Partition Mobility" on page 61
3.7, "Preparing the Virtual I/O Servers" on page 63
3.8, "Preparing the mobile partition for mobility" on page 66
3.9, "Configuring the external storage" on page 79
3.10, "Network considerations" on page 87
3.11, "Distance considerations" on page 88

3.1 Introduction

Requirements and preparation must be fulfilled whether you perform an inactive or an active partition migration. As previously described:
– Inactive partition migration allows you to move a powered-off logical partition, including its operating system and applications, from one system to another.
– Active partition migration is the ability to move a running logical partition, including its operating system and applications, from one system to another without disrupting the operation of that logical partition.

When you have ensured that all these requirements are satisfied and all preparation tasks are completed, then you can initiate the partition migration by using the wizard on the HMC graphical user interface (GUI) or through the HMC command-line interface (CLI). Before performing the migration, the HMC verifies and validates the Live Partition Mobility environment. If this validation turns out to be successful, the migration is carried out.

Note: Information about preparation and requirements with the Integrated Virtualization Manager can be found in Chapter 7, "Integrated Virtualization Manager for Live Partition Mobility" on page 221.

3.2 Skill considerations

Live Partition Mobility builds on top of several existing technologies. Familiarity with them is helpful when working with Live Partition Mobility. This book assumes you have a working knowledge of the following topics:
– PowerVM virtualization: Virtual I/O Server, virtual SCSI, and virtual and shared Ethernet. See PowerVM Virtualization on IBM System p: Introduction and Configuration Fourth Edition, SG24-7940.
– Hardware Management Console (HMC)
– Storage area networks; configuring shared storage is required for Live Partition Mobility
– Dynamic logical partitioning
– AIX or Linux

3.3 Requirements for Live Partition Mobility

Major requirements for active Live Partition Mobility are:

Hardware Management Console (HMC) requirements
– Version 7 Release 3.2.0 or later, with required fixes MH01062, for both active and inactive partition migration. If you do not have this level, upgrade the HMC to the correct level.
– Model 7310-CR2 or later, or the 7310-C03.

Source and destination system requirements
– The source and destination system must be an IBM Power Systems POWER6 or POWER7 technology-based model. A system is capable of being either the source or destination of a migration if it contains the necessary processor hardware to support it. We call this additional hardware capability migration support.

Note: Migration possibilities between systems with different processor types is discussed in 5.12, "Processor compatibility modes" on page 205.

– Both source and destination systems must have the PowerVM Enterprise Edition license code installed. To check, use the HMC to:
i. In the navigation area, expand Systems Management.
ii. Select the system in the navigation area.
iii. Expand the Capacity on Demand (CoD) section in the task list by clicking on it.
iv. Select the Enterprise Enablement option and expand it by clicking on it.
v. Select View History Log. The CoD Advanced Functions Activation History Log panel opens.

Figure 3-1 on page 48 shows the activation of Enterprise Edition for Live Partition Mobility.

Figure 3-1 Activation of Enterprise Edition

vi. Click Close.

If the Enterprise Edition code is not activated, you must repeat the first three steps and then select Enter Activation Code to enable Live Partition Mobility, as shown on Figure 3-2.

Figure 3-2 Enter activation code

– Both source and destination systems must be at firmware level 01Ex320 or later, where x is an S for BladeCenter®, an L for Entry servers (such as the Power 520, Power 550, and Power 560), an M for Midrange servers (such as the Power 570), or an H for Enterprise servers (such as the Power 595). To upgrade the firmware, see the firmware fixes Web site:
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp?topic=/ipha5/fix_serv_firm_kick.htm

The current firmware level can be checked after completing the following steps on the HMC:
i. In the navigation area, open Systems Management and select the system.

ii. Select Updates in the task list.
iii. Select View system information; a new pop-up window called Specify LIC Repository will appear. Select None - Display current values in this new window and click OK.
iv. Finally, the current firmware level appears in the new window called View system information. This is shown in Figure 3-3.

Figure 3-3 Checking the current firmware level

Note: You can also check the firmware level by executing the lslic command on the HMC.

If the version is not at the required level for Live Partition Mobility, you have to perform an update through the HMC by selecting Upgrade Licensed Internal Code to a new release from Updates in the task list.

Although there is a minimum required firmware level, each system may have a different level of firmware. The level of the source system firmware must be compatible with the destination firmware.
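As mentioned in the note above, the firmware level can also be read from the HMC command line. This is a sketch with a placeholder managed system name; field names can differ slightly between HMC releases:

$ lslic -t sys -m managed_system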

Table 3-1 gives an overview of the supported mixed firmware levels. For a current list of firmware level compatibilities, see Live Partition Mobility Support for Power Systems:
http://www14.software.ibm.com/webapp/set2/sas/f/pm/migrate.html
Or check the IBM Prerequisite Web site for POWER6 and POWER7 compatibility:
https://www-912.ibm.com/e_dir/eserverprereq.nsf

Note: On the IBM Prerequisite Web site:
– Choose the Software tab.
– In the OS/Firmware dropdown, select Live Partition Mobility between POWER6 and POWER7.
– In the Product dropdown, select Live Partition Mobility.
– For the Function, select ALL Functions.

Table 3-1 Supported migration matrix

From \ To             EM320_031  EM320_040  EM320_046  EM320_061 or higher  EM330_028 or higher  EM340_039 or higher
EM320_031             Supported  Supported  Supported  Blocked              Blocked              Blocked
EM320_040             Supported  Supported  Supported  Blocked              Blocked              Blocked
EM320_046             Supported  Supported  Supported  Supported            Supported            Supported
EM320_061 or higher   Blocked    Blocked    Supported  Supported            Supported            Supported
EM330_028 or higher   Blocked    Blocked    Supported  Supported            Supported            Supported
EM340_039 or higher   Blocked    Blocked    Supported  Supported            Supported            Supported

Source and destination Virtual I/O Server requirements
– At least one Virtual I/O Server at release level 1.5.1 or higher has to be installed both on the source and destination systems.
– A new partition attribute, called the mover service partition, has been defined that enables you to indicate whether a mover-capable Virtual I/O Server partition should be considered during the selection process of the MSP for a migration. By default, all Virtual I/O Server partitions have this new partition attribute set to FALSE.

– In addition to having the mover partition attribute set to TRUE, the source and destination mover service partitions communicate with each other over the network. On both the source and destination servers, the Virtual Asynchronous Services Interface (VASI) device provides communication between the mover service partition and the POWER Hypervisor.

To determine the current release of the Virtual I/O Server and to see if an upgrade is necessary, use the ioslevel command. More technical information about the Virtual I/O Server and the latest downloads are on the Virtual I/O Server Web site:
http://www14.software.ibm.com/webapp/set2/sas/f/vios/download/home.html

Storage requirements
For a list of supported disks and optical devices, see the Virtual I/O Server data sheet for VIOS:
http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html

Operating system requirements
The operating system running in the mobile partition has to be AIX or Linux. A Virtual I/O Server logical partition or a logical partition running the IBM i operating system cannot be migrated. The operating system must be at one of the following levels:
– AIX 5L™ Version 5.3 Technology Level 7 or later (the required level is 5300-07-01)
– AIX Version 6.1 or later (the required level is 6100-00-01)
– Red Hat Enterprise Linux Version 5 (RHEL5) Update 1 or later (with the required kernel security update)
– SUSE Linux Enterprise Server 10 (SLES 10) Service Pack 1 or later (with the required kernel security update)

To download the Linux kernel security updates:
http://www14.software.ibm.com/webapp/set2/sas/f/pm/component.html

Previous versions of AIX and Linux can participate in inactive partition migration if the operating systems support virtual devices and IBM Power Systems POWER6- and POWER7-based servers.

Note: Ensure that the target hardware supports the operating system you are migrating.
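To confirm that a mobile partition meets these levels, you can query the running operating system directly. For example, on AIX (the output shown is illustrative only):

# oslevel -s
5300-07-01-0748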

Network requirements
The migrating partition uses the virtual LAN (VLAN) for network access. The VLAN must be bridged (if there is more than one, then each one also has to be bridged) to a physical network using a shared Ethernet adapter in the Virtual I/O Server partition. Your LAN must be configured so that migrating partitions can continue to communicate with other necessary clients and servers after a migration is completed.

3.4 Live Partition Mobility preparation checks

Table 3-2 lists the preparation tasks required for a migration. Certain settings can be changed dynamically (partition workload groups, mover service partitions, and time reference), but others have to be changed statically (barrier synchronization registers and redundant error path reporting).

Table 3-2 Preparing the environment for Live Partition Mobility

Task                       Details (see page)   Remarks for inactive migration
Prepare servers            54                   -
Prepare HMC                61                   -
Prepare VIOS               63                   see Table Note 2
Prepare mobile partition   66                   see Table Note 1
Prepare the storage        79                   -
Network considerations     87                   -

Table Note 1: For inactive migration, you have to perform fewer preparatory tasks on the mobile partition:
– RMC connections are not required.
– The applications do not have to be migration-aware or migration-safe.
– The mobile partition can use huge pages.
– Barrier-synchronization registers can be used in the mobile partition.
– The mobile partition can have dedicated I/O. These dedicated I/O devices will be removed automatically from the partition before the migration occurs.

Table Note 2: For inactive migration, you perform fewer preparatory tasks on the Virtual I/O Server because:
– You do not have to enable the mover service partition on either the source or destination Virtual I/O Server.
– You do not have to synchronize the time-of-day clocks.

3.5 Preparing the systems for Live Partition Mobility

Careful planning of your environment is required before Live Partition Mobility can be successfully implemented. After you validate all required versions and levels, this section describes the planning tasks to consider and complete on the source and destination systems before you migrate a logical partition, whether it is an inactive partition migration or an active partition migration.

3.5.1 HMC

Ensure that the source and destination systems are managed by the same HMC (or a redundant HMC pair).

Note: HMC Version 7 Release 3.4 introduces an additional migration scenario. In this case, the source server is managed by one HMC and the destination server is managed by a different HMC. Additional requirements include:
– Both HMCs must be connected to the same network so that they can communicate with each other.
– Secure Shell has to be set up correctly between both the source and the destination HMC with the mkauthkeys command.
For more information about this HMC migration scenario, see "Remote Live Partition Mobility" on page 130.

3.5.2 Logical memory block size

Ensure that the logical memory block (LMB) size is the same on the source and destination systems. The default LMB size depends on the amount of memory installed in the CEC. It varies between 16 MB and 256 MB. A change to the LMB size can only be done by a user with administrator authority, and you must shut down and restart the managed system for the change to take effect.

Figure 3-4 shows how the size of the logical memory block can be modified in the Performance Setup menu of the Advanced System Management Interface (ASMI). The ASMI can be launched through the Operations section in the task list on the HMC.

Figure 3-4 Checking and changing LMB size with ASMI

3.5.3 Battery power

Ensure that the destination system is not running on battery power. If the destination system is running on battery power, then you need to return the system to its regular power source before moving a logical partition to it. However, the source system can be running on battery power.

3.5.4 Available memory

Ensure that the destination system has enough available memory to support the mobile partition. To determine the available memory on the destination system, and to allocate more memory if necessary, you must have super administrator authority (a user with the HMC hmcsuperadmin role, such as hscroot). The following steps have to be completed on the HMC:

1. Determine the amount of memory of the mobile partition on the source system:
a. In the navigation area, open Systems Management.
b. Select the source system in the navigation area.
c. In the contents area, select the mobile partition and select Properties in the task list. The Properties window opens.
d. Select the Hardware tab and then the Memory tab.
e. View the Memory section and record the assigned memory settings.
f. Click OK.

Figure 3-5 shows the result of the actions.

Figure 3-5 Checking the amount of memory of the mobile partition

2. Determine the memory available on the destination system:
a. In the contents area, select the destination system and select Properties in the task list.
b. Select the Memory tab.
c. Record the Available memory and Current memory available for partition usage.
d. Click OK.

Figure 3-6 shows the result of the actions.

Figure 3-6 Available memory on destination system

3. Compare the values from the previous steps:
– If the destination system has enough available memory to support the mobile partition, skip the rest of this procedure and continue with other preparation tasks.
– If the destination system does not have enough available memory to support the mobile partition, you must dynamically free up some memory (or use the Capacity on Demand (CoD) feature to activate additional memory, where available) on the destination system before the actual migration can take place.
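The same comparison can be scripted from the HMC command line instead of the GUI. This is a sketch with placeholder names; the first command reports the mobile partition's current memory on the source system, and the second the memory still available on the destination system:

$ lshwres -r mem -m source_system --level lpar --filter lpar_names=mobile_partition -F curr_mem
$ lshwres -r mem -m destination_system --level sys -F curr_avail_sys_mem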

3.5.5 Available processors to support Live Partition Mobility

Ensure that the destination system has enough available processors, or enough processing units in a shared processor pool, to support the mobile partition. The profile created on the destination server matches the source server's; therefore, dedicated processors must be available on the target if that is what you are using. To determine the available processors on the destination system and allocate more processors if necessary, you must have super administrator authority (a user with the HMC hmcsuperadmin role, such as hscroot). Complete the following steps in the HMC:

1. Determine how many processors the mobile partition requires:
a. In the navigation area, expand Systems Management.
b. Select the source system in the navigation area.
c. In the contents area, select the mobile partition and select Properties in the task list. A new pop-up window called Properties appears.
d. Select the Hardware tab and then the Processors tab.
e. View the Processor section and record the processing units settings.
f. Click OK.

Figure 3-7 shows the result of the actions.

Figure 3-7 Checking the number of processing units of the mobile partition

Note: In recent HMC levels, p6 appears as POWER6. See Figure 3-7.

2. Determine the processors available on the destination system:
a. In the contents area, select the destination system and select Properties in the task list.
b. Select the Processors tab.
c. Record the Available processors available for partition usage.
d. Click OK.

Figure 3-8 shows the result of the actions.

Figure 3-8 Available processing units on destination system

3. Compare the values from the previous steps.
– If the destination system has enough available processors to support the mobile partition, then skip the rest of this procedure and continue with the remaining preparation tasks for Live Partition Mobility.
– If the destination system does not have enough available processors to support the mobile partition, you must dynamically free up processors (or use the CoD feature, when available) on the destination system before the actual migration can take place.
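A similar command-line cross-check exists for processors. This sketch uses placeholder names; for shared-processor partitions the values are expressed in processing units:

$ lshwres -r proc -m source_system --level lpar --filter lpar_names=mobile_partition -F curr_procs,curr_proc_units
$ lshwres -r proc -m destination_system --level sys -F curr_avail_sys_proc_units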

3.6 Preparing the HMC for Live Partition Mobility

The version and release of the HMC has to be at the correct level for Live Partition Mobility. See 3.3, "Requirements for Live Partition Mobility" on page 47 and also see 3.5.1, "HMC" on page 54. When using Live Partition Mobility with an HMC managing at least one POWER7-based server, HMC V7R710 or later is required. In this publication, we used the latest Version 7 Release 3.4 of the HMC software (see Figure 3-9 on page 61).

Figure 3-9 shows how to check the current version and release of our HMC. You can also verify the current HMC version, release, and service pack level with the lshmc command.

Figure 3-9 Checking the version and release of HMC

Note: Live Partition Mobility requires HMC Version 7 Release 3.2 or higher to be used.
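For example, the level can be read directly from the HMC command line (a quick illustrative check):

$ lshmc -V

The output lists the version, release, service pack, and build level of the HMC.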

If the HMC is not at the correct version and release, an upgrade is required. Select Updates (1) and then click Update HMC (2), as shown in Figure 3-10. Also see Figure 3-11 on page 63.

Figure 3-10 Upgrading the Hardware Management Console

For more information about upgrading the Hardware Management Console, see:
http://www14.software.ibm.com/webapp/set2/sas/f/hmc/home.html

After you click OK, the window shown in Figure 3-11 opens.

Figure 3-11 Install Corrective Service to upgrade the HMC

3.7 Preparing the Virtual I/O Servers

Several tasks must be completed to prepare the source and destination Virtual I/O Servers for Live Partition Mobility. At least one Virtual I/O Server logical partition must be installed and activated on both the source and destination systems. For Virtual I/O Server installation instructions, see:
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphb1/iphb1.pdf

3.7.1 Virtual I/O Server version

Ensure that the source and destination Virtual I/O Servers are at Version 1.5.1 or higher. This can be checked on the Virtual I/O Server by running the ioslevel command, as shown in Example 3-1.

Example 3-1 Output of the ioslevel command
$ ioslevel
2.1.0.10-FP-20.1
$

If the source and destination Virtual I/O Servers do not meet the requirements, perform an upgrade.

3.7.2 Mover service partition

Ensure that at least one of the mover service partitions (MSP) is enabled on a source and destination Virtual I/O Server partition. The mover service partition is a Virtual I/O Server logical partition that is allowed to use its VASI adapter for communicating with the POWER Hypervisor. There must be at least one mover service partition on both the source and destination Virtual I/O Servers for the mobile partition to participate in active partition migration. If the mover service partition is disabled on either the source or destination Virtual I/O Server, the mobile partition can be migrated only inactively.

To enable the source and destination mover service partitions using the HMC, you must have super administrator (such as hmcsuperadmin, as in the hscroot login) authority and complete the following steps:
1. In the navigation area, open Systems Management and select Servers.
2. In the contents area, open the source system.
3. Select the source Virtual I/O Server logical partition and select Properties in the task area.
4. On the General tab, select Mover Service Partition, and click OK.
5. Repeat these steps for the destination system.
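The mover service partition setting can also be listed for all partitions of a managed system from the HMC command line; a value of msp=1 indicates that the attribute is enabled. The managed system name below is a placeholder:

$ lssyscfg -r lpar -m managed_system -F name,lpar_env,msp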

Figure 3-12 shows the result of the GUI steps above.

Figure 3-12 Enabling mover service partition

3.7.3 Synchronize time-of-day clocks

Another recommended, although optional, task for active partition migration is the synchronization of the time-of-day clocks for the source and destination Virtual I/O Server partitions. If you choose not to complete this step, the source and destination Virtual I/O Servers synchronize the clocks while the mobile partition is moving from the source system to the destination system. Completing this step before the mobile partition is moved can prevent possible errors.

To synchronize the time-of-day clocks on the source and destination Virtual I/O Servers using the HMC, you must be a super administrator (such as hscroot) to complete the following steps:
1. In the navigation area, open Systems Management.
2. Select Servers and select the source system.
3. In the contents area, select the source Virtual I/O Server logical partition.
4. Click Properties.
5. Click the Settings tab.
6. For Time reference, select Enabled and click OK.
7. Repeat the previous steps on the destination system for the destination Virtual I/O Server.

Figure 3-13 shows the time-of-day synchronization.

Figure 3-13 Synchronizing the time-of-day clocks

Note: After the Virtual I/O Server infrastructure is configured, a backup of the Virtual I/O Servers is recommended; this approach produces an established checkpoint prior to migration.

3.8 Preparing the mobile partition for mobility

This section describes the tasks that you must complete to prepare a mobile partition for Live Partition Mobility in order to have a successful migration.

3.8.1 Operating system version

Ensure that the operating system meets the requirements for Live Partition Mobility. These requirements can be found in 3.3, "Requirements for Live Partition Mobility" on page 47.

3.8.2 RMC connections

For active partition migration, ensure that Resource Monitoring and Control (RMC) connections are established.

RMC can be configured to monitor resources and perform an action in response to a defined condition. The flexibility of RMC enables you to configure response actions or scripts that manage general system conditions with little or no involvement from the system administrator.

To establish an RMC connection for the mobile partition, you must be a super administrator (a user with the HMC hmcsuperadmin role, such as hscroot) on the HMC and complete the following steps:
1. Sign on to the operating system of the mobile partition with root authority.
2. From the command line, enter the following command to check if the RMC connection is established:
lsrsrc IBM.ManagementServer
This command is shown in Example 3-2.

Example 3-2 Checking IBM.ManagementServer resource
# lsrsrc IBM.ManagementServer
Resource Persistent Attributes for IBM.ManagementServer
resource 1:
        Name             = "9.3.5.128"
        Hostname         = "9.3.5.128"
        ManagerType      = "HMC"
        LocalHostname    = "9.3.5.115"
        ClusterTM        = "9078-160"
        ClusterSNum      = ""
        ActivePeerDomain = ""
        NodeNameList     = {"mobile"}
resource 2:
        Name             = "9.3.5.180"
        Hostname         = "9.3.5.180"
        ManagerType      = "HMC"
        LocalHostname    = "9.3.5.115"
        ClusterTM        = "9078-160"
        ClusterSNum      = ""
        ActivePeerDomain = ""
        NodeNameList     = {"mobile"}
#

– If the command output includes ManagerType = "HMC", then the RMC connection is established. You can skip step 3 on page 68 and continue with the additional preparation tasks by going to 3.8.3, "Disable redundant error path reporting" on page 68.
– If you received a message indicating that there is no IBM.ManagementServer resource, or that ManagerType does not equal HMC, then continue to the next step.

3. Establish the RMC connection specifically for your operating system:
– For AIX, see Configuring Resource Monitoring and Control (RMC) for the Partition Load Manager, found at:
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp?topic=/iphbk/iphbkrmc_configuration.htm
– For Linux, install the RSCT utilities. Download these tools from the Service and productivity tools Web site (and select the appropriate HMC- or IVM-managed servers link):
http://www14.software.ibm.com/webapp/set2/sas/f/lopdiags/home.html
• Red Hat Enterprise Linux: Install additional software (RSCT Utilities) for Red Hat Enterprise Linux on HMC managed servers.
• SUSE Linux Enterprise Server: Install additional software (RSCT Utilities) for SUSE Linux Enterprise Server on HMC managed servers.

3.8.3 Disable redundant error path reporting

Ensure that the mobile partition is not enabled for redundant error path reporting. Redundant error path reporting allows a logical partition to report server common hardware errors and partition hardware errors to the HMC. Redundant error path reporting must be disabled if you want to migrate a logical partition.

To disable redundant error path reporting for the mobile partition, you must be a super administrator and complete the following steps:
1. In the navigation area, open Systems Management.
2. Select Servers and select the source system.
3. In the contents area, select the logical partition you wish to migrate and select Configuration → Manage Profiles.
4. Select the active logical partition profile and select Edit from the Actions menu.
5. Click the Settings tab.
6. Deselect Enable redundant error path reporting, and click OK.
7. Because disabling redundant error path reporting cannot be done dynamically, you have to shut down the mobile partition, then power it on using the profile with the modifications.

Figure 3-14 shows the disabled redundant error path handling.

Figure 3-14 Disable redundant error path handling

3.8.4 Virtual serial adapters

Ensure that the mobile partition is not using a virtual serial adapter in slots higher than slot 1. Virtual serial adapters are often used for virtual terminal connections to the operating system. The first two virtual serial adapters (slots 0 and 1) are reserved for the HMC. For a logical partition to participate in a partition migration, it cannot have any required virtual serial adapters, except for the two reserved for the HMC.

To dynamically disable unreserved virtual serial adapters using the HMC, you must be a super administrator and complete the following steps:
1. In the navigation area, expand Systems Management.
2. Select Servers and select the source system.
3. In the contents area, select the logical partition to migrate and select Configuration → Manage Profiles.

4. Select the active logical partition profile and select Edit from the Actions menu.
5. Select the Virtual Adapter tab.
6. If there are more than two virtual serial adapters listed, then ensure that the adapters in slots 2 and higher are not selected as Required, if applicable.
7. Click OK.

Figure 3-15 shows the result of the steps.

Figure 3-15 Verifying the number of serial adapters on the mobile partition

3.8.5 Partition workload groups

Ensure that the mobile partition is not part of a partition workload group. A partition workload group identifies a set of partitions that reside on the same system. The partition profile specifies the name of the partition workload group to which it belongs. For a logical partition to participate in a partition migration, it cannot be assigned to a partition workload group.

To dynamically remove the mobile partition from a partition workload group, you must be a super administrator on the HMC and complete the following steps:
1. In the navigation area, expand Systems Management → Servers.
2. In the contents area, open the source system.

3. Select the mobile partition and select Properties.
4. Click the Other tab.
5. In the Workload group field, select (None).
6. Click OK.
7. In the contents area, open the mobile partition and select Configuration → Manage Profiles.
8. Select the active logical partition profile and select Edit from the Actions menu.
9. Click the Settings tab.
10. In the Workload Management area, select (None) and click OK.
Repeat the last three steps for all partition profiles associated with the mobile partition.

Figure 3-16 and Figure 3-17 on page 72 show the tabs for the disablement of the partition workload group (both in the partition and in the partition profiles).

Figure 3-16 Disabling partition workload group - Other tab

Figure 3-17 Disabling partition workload group - Settings tab

3.8.6 Barrier-synchronization register

Ensure that the mobile partition is not using barrier-synchronization register (BSR) arrays. BSR is a memory register that is located on certain POWER technology-based processors. A parallel-processing application running on AIX can use a BSR to perform barrier synchronization, which is a method for synchronizing the threads in the parallel-processing application. For a logical partition to participate in active partition migration, it cannot use BSR arrays. However, it can still participate in inactive partition migration if it uses BSR.

To disable BSR for the mobile partition using the HMC, you must be a super administrator and complete the following steps:
1. In the navigation area, expand Systems Management.
2. Select Servers.
3. In the contents area, open the source system.
4. Select the mobile partition and select Properties.

5. Click the Hardware tab.
6. Click the Memory tab. This is shown on Figure 3-18.

Figure 3-18 Checking the number of BSR arrays on the mobile partition

– If the number of BSR arrays equals zero, the mobile partition can participate in inactive or active migration. You can now continue with additional preparatory tasks for the mobile partition.
– If the number of BSR arrays is not equal to zero, take one of the following actions:
• Perform an inactive migration instead of an active migration. Skip the remaining steps and see 2.4, "Inactive partition migration" on page 27.
• Click OK and continue to the next step to prepare the mobile partition for an active migration.
7. In the contents area, open the mobile partition and select Configuration → Manage Profiles.
8. Select the active logical partition profile and select Edit from the Actions menu.
9. Click the Memory tab.
10. Enter 0 in the BSR arrays for this profile field and click OK. This is shown in Figure 3-19 on page 74.

Because modifying BSR cannot be done dynamically, you have to shut down the mobile partition, then power it on by using the profile with the BSR modifications.

Figure 3-19 Setting number of BSR arrays to zero

3.8.7 Huge pages

Ensure that the mobile partition is not using huge pages. Huge pages can improve performance in specific environments that require a high degree of parallelism, such as in DB2® partitioned database environments. You can specify the minimum, desired, and maximum number of huge pages to assign to a partition when you create a partition profile. For a logical partition to participate in active partition migration, it cannot use huge pages. However, if the mobile partition does use huge pages, it can still participate in inactive partition migration.

To configure huge pages for the mobile partition using the HMC, you must be a super administrator and complete the following steps:
1. Open the source system and select Properties.
2. Click the Advanced tab. The current huge page memory is shown in Figure 3-20.

Figure 3-20 Checking if huge page memory equals zero

– If the current huge page memory equals zero (0), skip the remaining steps of this procedure and continue with additional preparatory tasks for the mobile partition in 3.8.8, "Physical or dedicated I/O" on page 76.
– If the current huge page memory is not equal to 0, take one of the following actions:
• Perform an inactive migration instead of an active migration. Skip the remaining steps and see 2.4, "Inactive partition migration" on page 27.
• Click OK and continue with the next step to prepare the mobile partition for an active migration.
3. In the contents area, open the mobile partition and select Configuration → Manage Profiles.
4. Select the active logical partition profile and select Edit from the Actions menu.

5. Click the Memory tab.
6. Enter 0 in the field for desired huge page memory, and click OK. This is shown in Figure 3-21.
7. Because changing huge pages cannot be done dynamically, you have to shut down the mobile partition, then turn it on by using the profile with the modifications.

Figure 3-21 Setting Huge Page Memory to zero

3.8.8 Physical or dedicated I/O

Ensure that the mobile partition does not have physical or dedicated (required) I/O adapters and devices. All I/O must be virtual. For a logical partition to participate in active partition migration, it cannot have any required or physical I/O. If the mobile partition has required or physical I/O, it can participate in inactive partition migration.

After migration, the required or physical I/O configuration must be verified. Physical I/O marked as desired can be removed dynamically with a dynamic LPAR operation.

To remove required I/O from the mobile partition using the HMC, you must be a super administrator and complete the following steps:
1. In the navigation area, expand Systems Management.
2. Select Servers and select the source system.
3. In the contents area, open the mobile partition and select Configuration → Manage Profiles.
4. Select the active logical partition profile and select Edit from the Actions menu.
5. Click the I/O tab. See Figure 3-22.
– If Required is not selected for any resource, skip the remainder of this procedure and continue with additional preparatory tasks for the mobile partition. Note the information in 3.8.9, "Name of logical partition profile" on page 78.

Figure 3-22 Checking if there are required resources in the mobile partition

– If Required is selected for any resource, take one of the following actions:
• Perform an inactive migration instead of an active migration. Skip the remaining steps and see 2.4, "Inactive partition migration" on page 27.
• Continue with the next step to prepare the mobile partition for an active migration.
6. For each resource that is selected as Required, deselect Required and click OK.
7. Shut down the mobile partition, then turn it on by using the profile with the required I/O resource modifications.

Note: You must also verify that no Logical Host Ethernet Adapter (LHEA) devices are configured, because these are also considered as physical I/O. Neither inactive nor active migration is possible if any LHEAs are configured. Figure 3-23 shows you how to verify whether an LHEA is configured for the mobile partition. First, select an IVE physical port to define an LHEA (this is an optional step), and then verify whether there are logical port IDs. If no logical port ID is in this column, then no Logical Host Ethernet Adapter is configured for this partition. More information about Integrated Virtual Ethernet adapters can be found in the Integrated Virtual Ethernet Adapter Technical Overview and Introduction, REDP-4340 publication.

Figure 3-23 Logical Host Ethernet Adapter

3.8.9 Name of logical partition profile

Determine the name of the logical partition profile for the mobile partition on the destination system. As part of the migration process, the HMC creates a new migration profile containing the partition's current state. Unless you specify a profile name when you start the migration, this profile
replaces the existing profile that was last used to activate the partition. The new profile contains the partition's current configuration and any changes that are made during the migration. Also, if you specify an existing profile name, the HMC replaces that profile with the new migration profile. If you do not want the migration profile to replace any of the partition's existing profiles, you must specify a unique profile name.

3.8.10 Mobility-safe or mobility-aware

Ensure that the applications running in the mobile partition are mobility-safe or mobility-aware. For more information, see:
– 5.8, "Migration awareness" on page 177
– 5.9, "Making applications migration-aware" on page 178

3.8.11 Changed partition profiles

If you changed any partition profile attributes, shut down and activate the new profile so that the new values can take effect:
1. In the contents area, select the mobile partition, click Operations, and click Shut down.
2. In the contents area, select the mobile partition, click Operations, click Activate, and select the logical partition profile.

3.9 Configuring the external storage

This section describes the tasks that you must complete to ensure your storage configuration meets the minimal configuration for Live Partition Mobility before you can actually migrate your logical partition. To configure external storage:
1. Verify that the same SAN disks used as virtual disks by the mobile partition are assigned to the source and destination Virtual I/O Server logical partitions.
2. Verify, with the lsdev command, that the reserve_policy attributes on the shared physical volumes are set to no_reserve on the Virtual I/O Servers:
– To list all the disks, type the following command:
lsdev -type disk

– To list the attributes of hdiskX, type the following command:
lsdev -dev hdiskX -attr
– If reserve_policy is not set to no_reserve, use the following command:
chdev -dev hdiskX -attr reserve_policy=no_reserve
3. Verify that the physical volume has the same unique identifier, physical identifier, or an IEEE volume attribute. These identifiers are required in order to export a physical volume as a virtual device.
– To list disks with a unique identifier (UDID):
i. Type the oem_setup_env command on the Virtual I/O Server CLI.
ii. Type the odmget -qattribute=unique_id CuAt command to list the disks that have a UDID. See Example 3-3.
Example 3-3 Output of odmget command
CuAt:
        name = "hdisk6"
        attribute = "unique_id"
        value = "3E213600A0B8000291B080000520E023C6B8D0F1815      FAStT03IBMfcp"
        type = "R"
        generic = "D"
        rep = "nl"
        nls_index = 79

CuAt:
        name = "hdisk7"
        attribute = "unique_id"
        value = "3E213600A0B8000114632000073244919ADCA0F1815      FAStT03IBMfcp"
        type = "R"
        generic = "D"
        rep = "nl"
        nls_index = 79

iii. Type exit to return to the Virtual I/O Server prompt.


– To list disks with a physical identifier (PVID): i. Type the lspv command to list the devices with a PVID. See Example 3-4. If the second column has a value of none, the physical volume does not have a PVID. A recommendation is to put a PVID on the physical volume before it is exported as a virtual device.
Example 3-4 Output of lspv command
$ lspv
NAME       PVID                VG          STATUS
hdisk0     00c1f170d7a97dec    rootvg      active
hdisk6     00c0f6a0915fc126    None
hdisk7     00c0f6a08de5008b    None

ii. Type the chdev command to put a PVID on the physical volume in the following format:
chdev -dev physicalvolumename -attr pv=yes -perm
– To list disks with an IEEE volume attribute identifier, issue the following command (in the oem_setup_env shell):
lsattr -El hdiskX
4. Verify that the mobile partition has access to a source Virtual I/O Server virtual SCSI adapter. You have to verify the configuration of the virtual SCSI adapters on the mobile partition and the source Virtual I/O Server logical partition to ensure that the mobile partition has access to storage. You must be a super administrator (such as hscroot) to complete the following steps:
a. Verify the virtual SCSI adapter configuration of the mobile partition:
i. In the navigation area, open Systems Management.
ii. Click Servers.
iii. In the contents area, open the source system.
iv. Select the mobile partition and click Properties.
v. Click the Virtual Adapters tab.
vi. Record the Slot ID and Remote Slot ID for each virtual SCSI adapter.
vii. Click OK.


The result of these steps is shown in Figure 3-24.

Figure 3-24 Virtual SCSI client adapter

b. Verify the virtual SCSI adapter configuration of the source Virtual I/O Server virtual SCSI adapter:
i. In the navigation area, open Systems Management.
ii. Click Servers.
iii. In the contents area, open the source system.
iv. Select the Virtual I/O Server logical partition and click Properties.
v. Click the Virtual Adapters tab.
vi. Verify that the Slot ID corresponds to the Remote Slot ID that you recorded (in step vi on page 81) for the virtual SCSI adapter on the mobile partition.
vii. Verify that the Remote Slot ID is either blank or that it corresponds to the Slot ID that you recorded (in step vi on page 81) for the virtual SCSI adapter on the mobile partition.
viii. Click OK.


The result of these steps is shown in Figure 3-25.

Figure 3-25 Virtual SCSI server adapters


c. If the values are incorrect, plan the slot assignments and connection specifications for the virtual SCSI adapters by using a worksheet similar to the one in Table 3-3.
Table 3-3 Virtual SCSI adapter worksheet

Virtual SCSI adapter                                           Slot number   Connection specification
Source Virtual I/O Server virtual SCSI adapter
Destination Virtual I/O Server virtual SCSI adapter
Mobile partition virtual SCSI adapter on source system
Mobile partition virtual SCSI adapter on destination system

When all virtual SCSI adapters on the source Virtual I/O Server logical partition allow access to virtual SCSI adapters of every logical partition (not only the mobile partition), you have two solutions:
• You may create a new virtual SCSI server adapter on the source Virtual I/O Server and allow only the virtual SCSI client adapter on the mobile partition to access it.
• You may change the connection specifications of a virtual SCSI server adapter on the source Virtual I/O Server so that it allows access to the virtual SCSI adapter on the mobile partition. This means that the virtual SCSI adapter of the client logical partition that currently has access to the virtual SCSI adapter on the source Virtual I/O Server will no longer have access to the adapter.

5. Verify that the destination Virtual I/O Server has sufficient free virtual slots to create the virtual SCSI adapters required to host the mobile partition after it moves to the destination system. To verify the virtual SCSI configuration using the HMC, you must be a super administrator (such as hscroot) to complete the following steps:
a. In the navigation area, open Systems Management.
b. Select Servers.
c. In the contents area, open the destination system.
d. Select the destination Virtual I/O Server logical partition and click Properties.


e. Select the Virtual Adapters tab and compare the number of virtual adapters to the maximum virtual adapters. This is shown in Figure 3-26.

Figure 3-26 Checking free virtual slots.

– If, after verification, the number of maximum virtual adapters is higher than or equal to the number of virtual adapters plus the number of virtual SCSI adapters required to host the migrating partition, you can continue with additional preparatory tasks at step 6 on page 86.
– If the maximum virtual adapter value does not allow the addition of the required virtual SCSI adapters for the mobile partition, then you have to modify its partition profile by completing the following steps:
i. In the navigation area, open Systems Management.
ii. Select Servers.
iii. In the contents area, open the destination system.
iv. Select the destination Virtual I/O Server logical partition.
v. In the task area, click Configuration, and then click Manage Profiles.
vi. Select the active logical partition profile and select Edit from the Actions menu.
vii. Click the Virtual Adapters tab and modify (increase) the number of maximum virtual adapters. You must shut down and restart the logical partition for the change to take effect.


6. Verify that the mobile partition has access to the same physical storage on the storage area network from both the source and destination environments. This requirement has to be fulfilled for Live Partition Mobility to be successful.
– In the source environment, check that the following connections exist:
• A virtual SCSI client adapter on the mobile partition must have access to a virtual SCSI adapter on the source Virtual I/O Server logical partition.
• That virtual SCSI server adapter on the source Virtual I/O Server logical partition must have access to a remote storage adapter on the source Virtual I/O Server logical partition.
• That remote storage adapter on the source Virtual I/O Server logical partition must be connected to a storage area network and have access to some physical storage in the network.
– In the destination environment, check that a remote storage adapter on the destination Virtual I/O Server logical partition has access to the same physical storage as the source Virtual I/O Server logical partition.

To verify the virtual adapter connections by using the HMC, you must be a super administrator and complete the following steps:
a. Select Systems Management.
b. Select Servers.
c. In the contents area, open the source system and select a mobile partition.
d. Select the mobile partition, select Hardware Information, select Virtual I/O Adapters, and select SCSI. The result is shown in Figure 3-27.

Figure 3-27 The Virtual SCSI Topology of the mobile partition

e. Verify all the information and click OK.
• If the information is correct, go to step f on page 87.
• If the information is incorrect, return to the beginning of this section and complete the task associated with the incorrect information.

f. In the contents area, open the destination system.
g. Select the destination Virtual I/O Server logical partition.
h. Select Hardware Information, select Virtual I/O Adapters, and select SCSI.
i. Verify the information and click OK.
7. Verify that the mobile partition does not have physical or required I/O adapters and devices. This is only an issue for active partition migration. If you want to perform an active migration, you must move the physical or required I/O from the mobile partition, as explained in 3.8.8, "Physical or dedicated I/O" on page 76.
8. All profile changes on the mobile partition's profile must be activated before starting the migration so that the new values can take effect:
a. If the partition is active, you can shut it down and power on the partition again by using the changed logical partition profile.
b. If the partition is not activated, it must be powered on. It is sufficient to activate the partition to the SMS menu.

3.10 Network considerations

You must prepare and configure the network for partition migration. You must complete several tasks to ensure that your network configuration meets the minimal configuration for Live Partition Mobility. Shared Ethernet adapters are required on both the source and destination Virtual I/O Servers for all the external networks used by mobile partitions. You first have to create a shared Ethernet adapter on the Virtual I/O Server using the HMC so that the client logical partitions can access the external network without requiring a physical Ethernet adapter.

Notes: Link Aggregation or EtherChannel can also be used as the shared Ethernet adapter. If you plan to use a shared Ethernet adapter (SEA) with an Integrated Virtual Ethernet (IVE) adapter, ensure that you use the logical host Ethernet adapter to create the shared Ethernet adapter. If you plan to use the Integrated Virtual Ethernet adapter with the shared Ethernet adapter, ensure that the physical port of this IVE adapter is set to promiscuous mode for the Virtual I/O Server. If the IVE is put in promiscuous mode, it can only be used by a single LPAR. For more information about IVE, see Integrated Virtual Ethernet Adapter Technical Overview and Introduction, REDP-4340.
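On the Virtual I/O Server, the shared Ethernet adapter itself is created with the mkvdev command. The following is a minimal sketch only; the physical adapter (ent0), virtual trunk adapter (ent1), and default VLAN ID are placeholders that must match your own configuration:

$ mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 1

The command returns the name of the new shared Ethernet adapter device (for example, ent2), which can then be given an IP address if required.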

Perform the following steps on the source and destination Virtual I/O Servers:
1. Configure virtual Ethernet adapters for the source and destination Virtual I/O Server partitions. If virtual switches are available, be sure that the virtual Ethernet adapters on the source Virtual I/O Server are configured on a virtual switch that has the same name as the virtual switch that is used on the destination Virtual I/O Server.
2. Ensure that you connect the source and destination Virtual I/O Servers and the shared Ethernet adapter to the network.
3. Ensure that the mobile partition has a virtual Ethernet adapter created by using the HMC GUI.
4. Activate the mobile partition to establish communication between its virtual Ethernet and the Virtual I/O Servers' virtual Ethernet.
5. Verify that the operating system on the mobile partition sees the new Ethernet adapter by using the following command:
lsdev -Cc adapter
6. Configure the TCP/IP connections for the virtual adapters on the client logical partitions by using the client partitions' operating systems (AIX or Linux), by using the following command:
mktcpip -h hostname -a IPaddress -i interface -g gateway
Check that the client partition can access the external network.

3.11 Distance considerations

There are no architected maximum distances between systems for Live Partition Mobility. The maximum distance is dictated by the network and storage configuration used by the systems. Standard long-range network and storage performance considerations apply. Provided both systems are on the same network, are connected to the same shared storage, and are managed by the same HMC, then Live Partition Mobility will work.

Chapter 4. Basic partition migration scenario

This chapter introduces the basics of configuring a Live Partition Mobility environment on IBM POWER6 technology-based servers using a Hardware Management Console (HMC)-based configuration. The chapter shows the detailed steps to migrate a logical partition from a source system to a destination system in a single flow. The chapter contains the following topics:
4.1, "Basic Live Partition Mobility environment" on page 90
4.2, "Virtual IO Server attributes" on page 93
4.3, "Preparing for an active partition migration" on page 94
4.4, "Migrating a logical partition" on page 99

4.1 Basic Live Partition Mobility environment

This section shows you a simple configuration for Live Partition Mobility using an HMC and virtual SCSI disk. Using the configuration in Figure 4-1, we explain the basic components involved in Live Partition Mobility, and guide you through the configuration and execution tasks. A single Virtual I/O Server partition is configured on the source and destination systems. Live Partition Mobility using N_Port ID Virtualization and virtual Fibre Channel features is covered in 5.11, "Virtual Fibre Channel" on page 187. Live Partition Mobility using the Integrated Virtualization Manager is covered in Chapter 7, "Integrated Virtualization Manager for Live Partition Mobility" on page 221.

Figure 4-1 Basic Live Partition Mobility configuration

4.1.1 Minimum requirements

The following minimum requirements are necessary for configuring a basic Live Partition Mobility environment:

Hardware Management Console (HMC)
Both the source and the destination systems must be managed by a Hardware Management Console.

Same logical memory block size
The logical memory block size must be the same on the source and the destination system. You can check and update the logical memory block (LMB) size by using the Advanced System Management Interface. To change the LMB size, see the steps in 3.4, “Logical memory block size” on page 54. (A command-line check is sketched after this list.)

Mobile partition
A logical partition that is migrated from the source to the destination system must use virtual SCSI or virtual Fibre Channel disks only. Disks must be mapped to physical storage that is actually located outside the source and the destination systems. No internal disks and no dedicated storage adapters are allowed. If you want to migrate a running logical partition, the partition must use virtual Ethernet adapters and virtual SCSI or virtual Fibre Channel disks provided by a Virtual I/O Server partition, and must not be assigned any physical adapter. See Chapter 2 in PowerVM Virtualization on IBM System p: Managing and Monitoring, SG24-7590 for details about virtual Fibre Channel configuration.

Virtual I/O Server partition
At least one Virtual I/O Server partition must be installed and activated on both the source and the destination systems.

Virtual SCSI adapter requirements:
– On the source Virtual I/O Server partition, do not set the adapter as required and do not select Any client partition can connect when you create a virtual SCSI adapter. The virtual SCSI adapter must be solely accessible by the client adapter of the mobile partition.
– On the destination Virtual I/O Server partition, do not create any virtual SCSI adapters for the mobile partition. These are created automatically by the migration function.

Network connection
The mobile partition and the Virtual I/O Server partitions on the source and destination systems must be reachable from the HMC.
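The LMB size can also be read from the HMC command line. The following is a minimal sketch only; it assumes that the lshwres memory query on your HMC level reports the block size (in MB) as the mem_region_size attribute, so verify the attribute name on your system:

   # Show the logical memory block size of the source and destination systems (value in MB)
   lshwres -r mem -m <source-system> --level sys -F mem_region_size
   lshwres -r mem -m <destination-system> --level sys -F mem_region_size

The two values must match before a migration between the systems is attempted.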

For a migration of running partitions (an active migration), both the source and destination Virtual I/O Server partitions must be able to communicate with each other to transfer the mobile partition state. We suggest that you use a dedicated network with 1 Gbps bandwidth or more.

If a mobile partition has dedicated I/O adapters, it can only participate in an inactive partition migration. However, even in that case, the dedicated adapters are automatically removed from the partition profile so that the partition will boot with only virtual I/O resources after migration. If you have to use dedicated I/O adapters on the mobile partition after the migration, update the mobile partition’s profile before booting, add adapters to the mobile partition by using dynamic LPAR operations, or make the desired resources available by other means.

Note: A good practice is to record the existing configuration at this point, because the profile will be changed during a migration. This record can be used on the destination system to reconfigure any dedicated adapters.

Shared disks
One or more shared disks must be connected to the source and destination Virtual I/O Server partitions. At least one physical volume that is mapped by the Virtual I/O Server to a LUN on external SAN storage must be attached to the mobile partition. When using virtual SCSI disks, the reserve_policy attribute of all the physical volumes belonging to the mobile partition must be set to no_reserve on the source and destination Virtual I/O Server partitions. You change this attribute by using the chdev command on the Virtual I/O Server partition (a verification sketch follows 4.1.2):
$ chdev -dev hdiskX -attr reserve_policy=no_reserve

Power supply
The destination system must be running on a regular power source. If the destination system is running on battery power, return the system to its regular power source before migrating a partition.

4.1.2 Inactive partition migration
An inactive partition migration moves a powered-off partition, together with its partition profile, from the source to the destination system.
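Returning to the shared-disk requirement above, you can confirm the reservation policy on each Virtual I/O Server before and after the change. This is a sketch only; hdiskX stands in for each physical volume that backs the mobile partition:

   $ lsdev -dev hdiskX -attr reserve_policy              # show the current reservation policy
   $ chdev -dev hdiskX -attr reserve_policy=no_reserve   # change it if it is not already no_reserve

Repeat the check on both the source and the destination Virtual I/O Server partitions.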

4.1.3 Active partition migration
An active partition migration moves a running logical partition, including its operating system and applications, from the source to the destination system without disrupting the services of that partition.

Note: Any virtual TTY sessions will be disconnected during the migration, but they can be reestablished on the destination system by the user after migration.

The following requirements must be met for an active partition migration, in addition to the requirements listed in 4.1.1, “Minimum requirements” on page 91:
– On both the source and the destination Virtual I/O Server partitions, one mover service partition is enabled and one automatically configured Virtual Asynchronous Services Interface (VASI) device is available.
– A Resource Monitoring and Control (RMC) connection between the mobile partition and the HMC is active.
– No physical or dedicated I/O adapters are assigned to the mobile partition.

4.2 Virtual IO Server attributes
For the Live Partition Mobility function, two attributes and one virtual device have been added to the Virtual I/O Server. See 2.1, “Live Partition Mobility components” on page 20 for more detailed information.

4.2.1 Mover service partition
The mover service partition is a Virtual I/O Server attribute. At least one mover service partition must be on each of the source and the destination systems for a mobile partition to participate in an active partition migration. The mover service partition is not required for inactive partition migration.

4.2.2 Virtual Asynchronous Services Interface device
The Virtual Asynchronous Services Interface (VASI) device must be available on both the source and the destination Virtual I/O Servers for the mobile partition to participate in an active partition migration. The VASI device is not required for inactive partition migration.
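A quick way to confirm both prerequisites is sketched below. The msp attribute name in the lssyscfg output and the vasi device naming are assumptions based on common POWER6 HMC and Virtual I/O Server levels, so verify them on your own systems:

   # On the HMC: check whether the Virtual I/O Server is flagged as a mover service partition (1 = enabled)
   lssyscfg -r lpar -m <managed-system> --filter lpar_names=<vios-name> -F name,msp

   # On each Virtual I/O Server: confirm that a VASI device is present and Available
   $ lsdev | grep vasi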

Important: Configuring the VASI device is not required. The VASI device is automatically created and configured when the Virtual I/O Server is installed.

4.2.3 Time reference
The Time reference is an attribute of partitions, including Virtual I/O Server partitions. This partition attribute is only supported on managed systems that are capable of active partition migration. Synchronizing the time-of-day clocks for the source and destination Virtual I/O Server partitions is optional for both active and inactive partition migration. However, it is a recommended step for active partition migration. If you choose not to complete this step, the source and destination systems will synchronize the clocks while the mobile partition is moving from the source system to the destination system.

4.3 Preparing for an active partition migration
This section shows how to enable the mover service partition. The mover service partition is required for active partition migration only.

4.3.1 Enabling the mover service partition
You can set the mover service partition attribute at the time you create a Virtual I/O Server partition, or dynamically for a running Virtual I/O Server.

To set the mover service partition attribute during the creation of a Virtual I/O Server partition:

1. In the navigation pane, expand Systems Management → Servers, and select the system on which you want to create a new Virtual I/O Server partition.

2. In the Tasks pane, expand Configuration → Create Logical Partition and select VIO Server, as shown in Figure 4-2, to start the Create LPAR Wizard.

Figure 4-2 Hardware Management Console Workplace

3. Enter the partition name, change the ID if you want to, and check the Mover service partition box on the Create LPAR Wizard window. See Figure 4-3.

Figure 4-3 Create LPAR Wizard window

4. Proceed with the remaining steps of the Virtual I/O Server partition creation. The mover service partition will be activated with the partition.

You can also set the mover service partition attribute dynamically for an existing Virtual I/O Server partition while the partition is in the Running state:

1. In the navigation pane, expand Systems Management → Servers, and select the desired system.
2. In the Contents pane (the top right of the Hardware Management Console Workplace), select the Virtual I/O Server for which you want to enable the mover service partition attribute.

3. Click the view popup menu button and select Configuration → Properties, as shown in Figure 4-4.

Figure 4-4 Changing the Virtual I/O Server partition property (the “view popup menu” button is highlighted)

4. Check the Mover service partition box on the General tab in the Partition Properties window, and click OK. See Figure 4-5.

Figure 4-5 Enabling the Mover service partition attribute

4.3.2 Enabling the Time reference
After creating a partition, you can optionally enable the Time reference attribute of the partition, as follows:

1. In the navigation pane, expand Systems Management → Servers, and select the system that has the partition for which you want to enable the Time reference attribute.
2. In the Contents pane (the top right of the Hardware Management Console Workplace), select the partition. In this example, select the Virtual I/O Server partition.
3. Click the view popup menu button and select Configuration → Properties.
4. Select the Settings tab, select Enabled for the Time reference attribute, and click OK, as shown in Figure 4-6.

Figure 4-6 Enabling the Time reference attribute

4.4 Migrating a logical partition

This section shows how to migrate a logical partition, called the mobile partition, from the source to the destination system. It also provides examples.

The main steps for the migration are:
1. Perform the validation steps and eliminate errors.
2. Migrate the mobile partition, performing an inactive or active migration.

4.4.1 Performing the validation steps and eliminating errors
Before performing a migration, especially an active migration, you should follow the validation steps. These steps are optional but can help to eliminate errors. You can perform the validation steps by using the HMC GUI or CLI. In this section, we show the GUI steps; an equivalent CLI validation is sketched after Figure 4-7. For information about the CLI, see 5.7.1, “The migrlpar command” on page 163.

To perform the validation:

1. In the navigation pane, expand Systems Management → Servers, and select the source system.
2. In the contents pane (the top right of the Hardware Management Console Workplace), select the partition that you will migrate to the destination system.

3. Click the view popup menu button and select Operations → Mobility → Validate, as shown in Figure 4-7, to start the validation process.

Figure 4-7 Validate menu on the HMC (the “view popup menu” button is highlighted)
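If you prefer the command line, the same validation can be run with the migrlpar command described in 5.7.1, “The migrlpar command” on page 163. The following is a minimal sketch; the managed system names are those used in this scenario and should be replaced with your own:

   # Validate only (-o v); no migration is performed
   migrlpar -o v -m 9117-MMA-SN101F170-L10 -t 9117-MMA-SN10F6A0-L9 -p mobile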

4. Select the destination system, optionally specify the Destination profile name and Wait time, and then click the Validate button (Figure 4-8).

Note: Figure 4-8 on page 101 shows the option of entering a remote HMC’s information. This step applies only to a remote migration between systems managed by different HMCs. Our example shows migration of a partition between systems managed by a single HMC. See 5.4, “Remote Live Partition Mobility” on page 130 for more details on remote migration.

Figure 4-8 Selecting the Remote HMC and Destination System

If you are proceeding with this step when the mobile partition is in the Not Activated state, the destination and source mover service partition and wait time entries do not appear, because these are not required for an inactive partition migration.

5. Check for errors or warnings in the Partition Validation Errors/Warnings window. If any errors occur, check the messages in the window and the prerequisites for the migration, and eliminate any errors. You cannot perform the migration steps with any errors.

For example, if you are proceeding with the validation steps on a mobile partition with physical adapters in the Running state (active migration), you get the error shown in Figure 4-9.

Figure 4-9 Partition Validation Errors

If the mobile partition is in the Not Activated state, a warning message is reported instead, as shown in Figure 4-10.

Figure 4-10 Partition Validation Warnings

6. After closing the Partition Validation Errors/Warnings window, a validation window, as shown in Figure 4-11, opens again. If you have no errors in the previous step, you may perform the migration at this point by clicking the Migrate button.

Figure 4-11 Validation window after validation

4.4.2 Inactive or active migration
The migration type depends on the state of the mobile partition. If you want to perform an inactive migration, the mobile partition must be powered off and in the Not Activated state. If you want to perform an active migration, the mobile partition must be in the Running state, and no physical or dedicated I/O adapters must be assigned to it. For details about the active partition migration requirements, see 4.1.3, “Active partition migration” on page 93.

4.4.3 Migrating a mobile partition
After completing the validation steps, migrate the mobile partition from the source to the destination system. You can perform the migration steps by using the HMC GUI or CLI; the GUI procedure follows, and an equivalent CLI invocation is sketched at the end of this section. For more information about the CLI, see 5.7.1, “The migrlpar command” on page 163.

In this scenario, we are going to migrate a partition named mobile from the source system (9117-MMA-SN101F170-L10) to the destination system (9117-MMA-SN10F6A0-L9). At this point, you can see that the mobile partition, that is, the partition named mobile, is on the source system, as shown in Figure 4-12.

Figure 4-12 System environment before migrating

To migrate a mobile partition:

1. In the navigation pane, expand Systems Management → Servers, and select the source system.
2. In the contents pane, select the partition to migrate to the destination system.

3. Click the view popup menu button and select Operations → Mobility → Migrate, as shown in Figure 4-13, to start the Partition Migration wizard.

Figure 4-13 Migrate menu on the HMC (the “view popup menu” button is highlighted)

4. Check the Migration Information of the mobile partition in the Partition Migration wizard.

If the mobile partition is powered off, the Migration Type is Inactive. If the partition is in the Running state, it is Active. See Figure 4-14.

Figure 4-14 Migration Information

5. You can specify the New destination profile name in the Profile Name panel, as shown in Figure 4-15 on page 107.

If you leave the name blank or do not specify a unique profile name, the profile on the destination system will be overwritten.

Figure 4-15 Specifying the profile name on the destination system

6. Optionally enter the Remote HMC network address and Remote User, and click Next. In our example, we use a single HMC. See Figure 4-16.

Figure 4-16 Optionally specifying the Remote HMC of the destination system

7. Select the destination system and click Next. See Figure 4-17.

Figure 4-17 Selecting the destination system

The HMC then validates the partition migration environment.

8. Check errors or warnings in the Partition Validation Errors/Warnings panel, and eliminate any errors. See Figure 4-18. If errors exist, you cannot proceed to the next step. If only warnings exist, you may proceed to the next step.

Figure 4-18 Sample of Partition Validation Errors/Warnings

9. If you are performing an active migration, select the source and the destination mover service partitions to be used for the migration. See Figure 4-19. If you have more than one Virtual I/O Server partition on the source or on the destination system, you can select which mover service partitions to use. In this basic scenario, one Virtual I/O Server partition is configured on the destination system, so the wizard window shows only one mover service partition candidate. If you are performing an inactive migration, skip this step and go to step 10 on page 112.

Figure 4-19 Selecting mover service partitions

10. Select the VLAN configuration. See Figure 4-20.

Figure 4-20 Selecting the VLAN configuration

11. Select the virtual storage adapter assignment. See Figure 4-21. If you have more than one Virtual I/O Server partition on the destination system, you may choose which Virtual I/O Server to use as the destination. In this case, one Virtual I/O Server partition is configured on each system, so this wizard window shows one candidate only.

Figure 4-21 Selecting the virtual SCSI adapter

12. Select the shared processor pool from the list of shared processor pools matching the source partition’s shared processor pool configuration. See Figure 4-22. See 5.5, “Multiple shared processor pools” on page 147 for more information about shared processor pools and Live Partition Mobility.

Figure 4-22 Specifying the shared processor pool

Note: If there is only one shared processor pool, this option might not appear.

13. Specify the wait time in minutes (Figure 4-23 on page 115). The wait time value is passed to the commands that are invoked on the HMC and perform migration-related operations on the relevant partitions using Remote Monitoring and Control (RMC). For example, the command syntax of drmgr can be used to install and configure dynamic logical partitioning (dynamic LPAR) scripts:

   drmgr {-i script_name [-w minutes] [-f] | -u script_name} [-D hostname]

The wait time value is used as the argument for the -w option. If you specify 5 minutes as the wait time, the drmgr command is executed with -w 5, as shown in Figure 4-23.

Figure 4-23 Specifying wait time

14. Check the settings that you have specified for this migration on the Summary panel, and then click Finish to begin the migration. See Figure 4-24.

Figure 4-24 Partition Migration Summary panel

15. The Migration status and Progress are shown in the Partition Migration Status panel, as shown in Figure 4-25.

Figure 4-25 Partition Migration Status window

16. When the Partition Migration Status window indicates that the migration is 100% complete, verify that the mobile partition is Running on the destination system. The mobile partition is on the destination system, as shown in Figure 4-26.

Figure 4-26 Migrated partition

17. If you keep a record of the virtual I/O configuration of the partitions, check and record the migrating partition’s configuration in the destination system. Although the migrating partition retains the same slot numbers as on the source system, the server virtual adapter slot numbers can be different between the source and destination Virtual I/O Servers. Also, the virtual target device name might change during migration.
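The same migration can be driven entirely from the HMC command line with the migrlpar command (see 5.7.1, “The migrlpar command” on page 163). The following sketch uses the system and partition names from this scenario; the -w flag is shown on the assumption that it accepts the wait time in minutes, as the wizard does, so confirm the flag on your HMC level:

   # Validate first, then migrate the running partition named "mobile"
   migrlpar -o v -m 9117-MMA-SN101F170-L10 -t 9117-MMA-SN10F6A0-L9 -p mobile
   migrlpar -o m -m 9117-MMA-SN101F170-L10 -t 9117-MMA-SN10F6A0-L9 -p mobile -w 5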


Chapter 5. Advanced topics

This chapter discusses various advanced topics relating to Live Partition Mobility. The chapter assumes you are familiar with the information in the preceding chapters.

This chapter contains the following topics:
5.1, “Dual Virtual I/O Servers” on page 120
5.2, “Multiple concurrent migrations” on page 128
5.3, “Dual HMC considerations” on page 130
5.4, “Remote Live Partition Mobility” on page 130
5.5, “Multiple shared processor pools” on page 147
5.6, “Migrating a partition with physical resources” on page 149
5.7, “The command-line interface” on page 162
5.8, “Migration awareness” on page 177
5.9, “Making applications migration-aware” on page 178
5.10, “Making kernel extension migration aware” on page 185
5.11, “Virtual Fibre Channel” on page 187
5.12, “Processor compatibility modes” on page 205

© Copyright IBM Corp. 2007, 2009. All rights reserved.

5.1 Dual Virtual I/O Servers

Multiple Virtual I/O Servers are often deployed in systems where there is a requirement for logical partitions to continue to use their virtual resources even during the maintenance of a Virtual I/O Server. This discussion relates to the common practice of using more than one Virtual I/O Server to allow for concurrent maintenance, and is not limited to only two servers. Virtual I/O Servers may also be created to offload the mover services to a dedicated partition.

Access to the same storage area network (SAN) disk may be provided on the destination system by multiple Virtual I/O Servers for use with virtual SCSI mapping. Similarly, multiple Virtual I/O Servers can provide access with multiple paths to a specific set of assigned LUNs for virtual Fibre Channel usage. When multiple Virtual I/O Servers are involved, multiple virtual SCSI and virtual Fibre Channel combinations are possible.

The partition that is moving must keep the same number of virtual SCSI and virtual Fibre Channel adapters after migration, and each virtual disk must remain connected to the same adapter or adapter set. An adapter’s slot number can change after migration, but the same device name is kept by the operating system for both adapters and disks. Live Partition Mobility automatically manages the virtual SCSI and virtual Fibre Channel configuration if an administrator does not provide specific mappings. A migration can fail validation checks and is not started if the moving partition’s adapter and disk configuration cannot be preserved on the destination system; in that case, you are required to modify the partition configuration before starting the migration.

Live Partition Mobility does not make any changes to the network setup on the source and destination systems. It only checks that all virtual networks used by the mobile partition have a corresponding shared Ethernet adapter on the destination system. Shared Ethernet failover might or might not be configured on either the source or the destination system. Partition migration requires network connectivity through the RMC protocol to the Virtual I/O Server.

Important: If you are planning to use shared Ethernet adapter failover, remember not to assign the Virtual I/O Server’s IP address on the shared Ethernet adapter. The backup shared Ethernet adapter, and its associated IP address, is always offline. In this case, create another virtual Ethernet adapter and assign the IP address on it.

Tip: The best practice is to always perform a validation before performing a migration. The validation checks the configuration of the involved Virtual I/O Servers and shows you the configuration that will be applied. Use the validation menu on the GUI or the lslparmigr command described in 5.7.2, “The lslparmigr command” on page 166.

More information about virtual Fibre Channel adapters can be found in 5.11, “Virtual Fibre Channel” on page 187.

In this section, we describe three different migration scenarios where the source and destination systems provide disk access either with one or two Virtual I/O Servers using virtual SCSI adapters.

5.1.1 Dual Virtual I/O Server and client mirroring
Dual Virtual I/O Server and client mirroring may be used when you have two independent storage subsystems providing disk space with data mirrored across them. With this setup, the partition can continue to run if one of the subsystems is taken offline. It is not required that your mirroring use two independent storage subsystems, but it is recommended.

If the destination system has two Virtual I/O Servers, one of them should be configured to access the disk space provided by the first storage subsystem, and the other must access the second subsystem, as shown in Figure 5-1.

Figure 5-1 Dual VIOS and client mirroring to dual VIOS before migration

The migration process automatically detects which Virtual I/O Server has access to which storage and configures the virtual devices to keep the same disk access topology.

Advanced topics 123 . as shown in Figure 5-2. the logical partition has the same disk configuration it had on previous system.When migration is complete. AIX Client Partition 1 LVM Mirroring hdisk1 hdisk0 vscsi1 vscsi0 Hypervisor vhost0 vtscsi0 hdisk0 Storage adapter vhost0 vtscsi0 hdisk0 Storage adapter Virtual I/O Server (VIOS) 1 Virtual I/O Server (VIOS) 2 Virtual I/O Server (VIOS) 1 Virtual I/O Server (VIOS) 2 Hypervisor hdisk0 Storage adapter hdisk0 Storage adapter Disk A Storage Subsystem Disk B Storage Subsystem Figure 5-2 Dual VIOS and client mirroring to dual VIOS after migration Chapter 5. still using two Virtual I/O Servers.

If the destination system has only one Virtual I/O Server, the migration is still possible and the same virtual SCSI setup is preserved at the client side. The destination Virtual I/O Server must have access to all disk spaces, and the process creates two virtual SCSI adapters on the same Virtual I/O Server, as shown in Figure 5-3.

Figure 5-3 Dual VIOS and client mirroring to single VIOS after migration

5.1.2 Dual Virtual I/O Server and multipath I/O
With multipath I/O, the logical partition accesses the same disk data using two different paths, each provided by a separate Virtual I/O Server. One path is active and the other is standby.

Advanced topics 125 .The migration is possible only if the destination system is configured with two Virtual I/O Servers that can provide the same multipath setup. AIX Client Partition 1 hdisk0 vscsi0 vscsi1 Hypervisor Virtual I/O Server (VIOS) 1 Virtual I/O Server (VIOS) 2 vhost0 vhost0 Virtual I/O Server (VIOS) 1 Virtual I/O Server (VIOS) 2 Hypervisor vtscsi0 hdisk0 Storage adapter vtscsi0 hdisk0 Storage adapter hdisk0 Storage adapter hdisk0 Storage adapter Disk A Storage Subsystem Figure 5-4 Dual VIOS and client multipath I/O to dual VIOS before migration Chapter 5. They both must have access to the shared disk data. as shown in Figure 5-4.

When migration is complete, the two Virtual I/O Servers on the destination system are configured to provide the two paths to the data, as shown in Figure 5-5.

Figure 5-5 Dual VIOS and client multipath I/O to dual VIOS after migration

If the destination system is configured with only one Virtual I/O Server, the migration cannot be performed. The migration process would create two paths using the same Virtual I/O Server, but this setup is not allowed, because having two virtual target devices that map the same backing device on different virtual SCSI server devices is not possible. Because the migration never changes a partition’s configuration, to migrate the partition you must first remove one path from the source configuration before starting the migration. The removal can be performed without interfering with the running applications. The configuration then becomes a simple single Virtual I/O Server migration, and only one Virtual I/O Server is used on the destination system.

5.1.3 Single to dual Virtual I/O Server
A logical partition that is using only one Virtual I/O Server for virtual disks may be migrated to a system where multiple Virtual I/O Servers are available.

If access to all disk data required by the partition is provided by only one Virtual I/O Server on the destination system, after migration the partition will use just that Virtual I/O Server. When both destination Virtual I/O Servers have access to all the disk data, the migration can select either one or the other. If no destination Virtual I/O Server provides all disk data, the migration cannot be performed. The situation is shown in Figure 5-6.

Figure 5-6 Single VIOS to dual VIOS before migration

When the migration is performed using the GUI on the HMC, a list of possible Virtual I/O Servers to pick from is provided; the HMC automatically makes a selection if you do not specify the server. When you start the migration using the command-line interface, you have the option of choosing a specific Virtual I/O Server; by default, the command-line interface makes the automatic selection if no specific option is provided.

After migration, the configuration is similar to the one shown in Figure 5-7.

Figure 5-7 Single VIOS to dual VIOS after migration

5.2 Multiple concurrent migrations

The same system can handle multiple concurrent partition migrations, any mix of either inactive or active. In many scenarios, more than one migration may be started on the same system. For example:
– A review of the entire infrastructure detects that a different system location of some logical partitions may improve global system usage and service quality.
– A system is planned to enter maintenance and must be shut down. Some of its partitions cannot be stopped, or the planned maintenance time is too long to satisfy service level agreements.

The maximum number of concurrent migrations on a system can be identified by using the lslparmigr command on the HMC, with the following syntax:

   lslparmigr -r sys -m <system>
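As a sketch (the exact attribute names in the output vary by HMC level, so treat them as illustrative), you can query a system before planning a batch of moves:

   # List system-level migration capabilities for the source server
   lslparmigr -r sys -m 9117-MMA-SN101F170-L10
   # The returned attribute list reports how many active and inactive migrations
   # the system supports and how many are currently in progress.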

Several practical considerations should be taken into account when planning for multiple migrations, especially when the time required by the migration process has to be evaluated. Consider the following information:

– The time required to complete an active migration depends on the size of the memory to be migrated and on the mobile partition’s workload. An active migration requires more time to complete than an inactive migration because the system performs additional activities to keep applications running while the migration is in progress. The active migration process has been designed to handle any partition memory size and is capable of managing any memory workload. Applications can update memory with no restriction during migration and all memory changes are taken into account, so elapsed migration time can change with workload. Although the algorithm is efficient, planning the migration during low activity periods can help to reduce migration time.

– Virtual I/O Servers selected as mover service partitions are involved in the partition’s memory migration and must manage high network traffic; they are loaded by memory moves and network data transfer. Network management can cause high CPU usage and usual performance considerations apply, as follows:
  – High speed network transfers can become processor-intensive workloads. Use uncapped Virtual I/O Servers and add virtual processors if the load increases. Alternatively, create dedicated Virtual I/O Servers on the source and destination systems that provide the mover service function, separating the service network traffic from the migration network traffic. You can combine or separate virtualization functions and mover service functions to suit your requirements.
  – At most, four concurrent active migrations can be managed by the same mover service partition; explicitly using multiple Virtual I/O Servers avoids queuing of requests. If multiple mover service partitions are available on either the source or destination systems, we suggest distributing the load among them. This can be done explicitly by selecting the mover service partitions for each mobile partition, either by using the GUI or the CLI.

– While a migration is in progress, you can start another one. When the number of migrations to be executed grows, the setup time using the GUI can become long and you should consider using the CLI instead. The migrlpar command may be used in scripts to start multiple migrations in parallel (a minimal sketch follows this list).
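The following is a minimal sketch only: the partition and system names are placeholders, it assumes all listed partitions have already passed validation, and it assumes your HMC shell permits simple loops (otherwise issue the commands from a management host over SSH):

   # Start migrations for several partitions in parallel from the HMC command line
   for lpar in prod1 prod2 prod3; do
       migrlpar -o m -m SOURCE-SYSTEM -t DEST-SYSTEM -p $lpar &
   done
   wait    # return when all background migrations have completed or failed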

5.3 Dual HMC considerations

The HMC is the center of system management of IBM Power Systems, and its unavailability does not affect service by any means. For redundancy, deploying two HMCs managing the same systems is possible. In a dual HMC configuration, both HMCs see the same system’s status, have the same configuration rights, and can perform the same actions.

Live Partition Mobility is a configuration change that involves two separate systems, and the migration process requires that no additional modifications occur on the involved objects until migration is completed. To avoid concurrent operations on the same system, a locking mechanism is in place that allows the first configuration change to occur and the second one to fail with a message showing the identifier of the locking HMC. The HMC that initiates a migration takes a lock on both managed systems, and the lock is released when migration is completed. The other HMC can show the status of the migration but cannot issue any additional configuration changes on the two systems; it can continue to be used for monitoring purposes, but not for configuration changes until migration is completed. Although the lock can be manually broken, carefully consider this option.

Consider the locking mechanism when planning migration. When multiple migrations are planned between two systems, multiple HMC commands are issued. The first migration task takes an HMC lock on both systems, so subsequent migrations must be issued on the same HMC; only one HMC can be used when multiple concurrent migrations are executed. Most additional system management actions should be performed using the same HMC that is performing the migration.

5.4 Remote Live Partition Mobility

This section focuses on Live Partition Mobility and its ability to migrate a logical partition between two IBM Power Systems servers, each managed by a separate Hardware Management Console. Remote migrations require coordinated movement of a partition’s state and resources over a secure network channel to a remote HMC.

The following list indicates the high-level prerequisites for remote migration. If any of the following elements is missing, a migration cannot occur:
– A ready source system that is migration-capable
– A ready destination system that is migration-capable
– Compatibility between the source and destination systems
– A destination system managed by a remote HMC
– Network communication between the local and remote HMC
– A migratable, ready partition to be moved from the source system to the destination system. For an inactive migration, the partition must be turned off, but must be capable of booting on the destination system.
– For active migrations, a mover service partition (MSP) on the source and destination systems
– One or more SANs that provide connectivity to all of the mobile partition’s disks to the Virtual I/O Server partitions on both the source and destination servers. The mobile partition accesses all migratable disks through virtual devices (virtual Fibre Channel, virtual SCSI, or both). The mobile partition’s virtual disks must be mapped to LUNs; they cannot be part of a storage pool or logical volume on the Virtual I/O Server. The LUNs used for virtual SCSI must be zoned and masked to the Virtual I/O Servers on both systems, and SCSI reservation must be disabled. Virtual Fibre Channel LUNs should be configured as described in Chapter 2 of PowerVM Virtualization on IBM System p: Managing and Monitoring, SG24-7590. Hardware-based iSCSI connectivity may be used in addition to SAN.
– One or more physical IP networks (LAN) that provide the necessary network connectivity for the mobile partition through the Virtual I/O Server partitions on both the source and destination servers. The mobile partition accesses all migratable network interfaces through virtual Ethernet devices.
– An RMC connection to manage inter-system communication. Remote migration operations require that each HMC has RMC connections to its individual system’s Virtual I/O Servers and a connection to its system’s service processors. The local HMC, which manages the source server in a remote migration, serves as the controlling HMC. The remote HMC, which manages the destination server, receives requests from the local HMC and sends responses over a secure network channel. The local HMC does not have to be connected to the remote system’s Virtual I/O Servers through RMC, nor does it have to connect to the remote system’s service processor.

The remote active and inactive migrations follow the same workflow as described in Chapter 2, “Live Partition Mobility mechanisms” on page 19.

5.4.1 Requirements for remote migration

The Remote Live Partition Mobility feature is available starting with HMC Version 7 Release 3.4. This feature allows a user to migrate a client partition to a destination server that is managed by a different HMC. The function relies on Secure Shell (SSH) to communicate with the remote HMC.

The following list indicates the requirements for remote HMC migrations:
– A local HMC managing the source server
– A remote HMC managing the destination server
– Version 7 Release 3.4 or later HMC version
– Network access to a remote HMC
– SSH key authentication to the remote HMC

The source and destination servers, mover service partitions, and Virtual I/O Servers are required to be configured exactly as though they were going to be performing migrations managed by a single HMC in the basic scenario, as described in Chapter 3, “Requirements and preparation” on page 45. To initiate the remote migration operation, you may use only the HMC that manages the mobile partition.

A recommendation is to have some IBM Power Systems servers use private networks to access the HMC. The ability to migrate a partition remotely allows Live Partition Mobility between systems managed by HMCs that are also using separate private networks.

Figure 5-8 displays the Live Partition Mobility infrastructure involving the two remote HMCs and their respective managed systems.

Figure 5-8 Live Partition Mobility infrastructure with two HMCs (source and destination POWER6 systems, each with its own Virtual I/O Server and service processor, connected through the Ethernet network and a shared SAN LUN, and managed by a local and a remote HMC respectively)

Figure 5-9 displays the infrastructure involving private networks that link each service processor to its HMC. The HMC for both systems contains a second network interface that is connected to the public network.

Figure 5-9 Live Partition Mobility infrastructure using private networks

Figure 5-10 shows the situation where one POWER System is in communication with its HMC on a private network, and the destination server is communicating by using the public network.

Figure 5-10 One public and one private network migration infrastructure

5.4.2 HMC considerations
Preparation for remote migration involves the same steps for the Virtual I/O Server, client partition, and mover service partition as explained in Chapter 2, “Preparation” on page 32. The steps to configure Virtual I/O Servers, mover service partitions, and partition profiles do not change. Use dedicated networks with 1 Gbps bandwidth or more.

To prepare for mobility, two additional steps are necessary:
1. Configure network communication between the HMCs.
2. Authenticate the local HMC with the destination’s HMC, which requires access to the CLI on each HMC. This applies for each involved HMC.

Remote migration capability
Confirm that each HMC is capable of performing remote migrations. To determine whether the HMC is capable of remote migration, use the lslparmigr -r manager command. If the HMC is capable, the attribute remote_lpar_mobility_capable displays a value of 1; if the HMC is incapable, the attribute indicates a value of 0.
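A minimal check from each HMC command line might look like the following sketch (the manager resource type and the exact output format are assumptions to verify on your HMC level; the attribute name comes from the text above):

   # Query the HMC's own migration capabilities
   lslparmigr -r manager
   # Look for remote_lpar_mobility_capable=1 in the returned attribute list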

2. In the navigation area. as shown in Figure 5-11: Figure 5-11 Network ping successful to remote HMC SSH authentication keys Allow communication between the local and remote HMC through SSH key authentication. This retrieval requires access to the CLI on the local HMC. The local user must retrieve authentication keys from the user on the remote HMC. the attribute remote_lpar_mobility_capable displays a value of 1. In the Operations section of the contents area. if the HMC is incapable. 136 IBM PowerVM Live Partition Mobility . the attribute indicates a value of 0. 4. In the Network Diagnostic Information window. In the text box. 3. use the HMC as follows: 1. select the Ping tab. To test the network. HMC network communication Test network communication existence between the two HMC systems involved in the migration. To allow the local HMC to communicate with remote HMC. and click Ping. first ensure that remote command execution is enabled on the remote HMC. Review the results to ensure that certain packets were not lost. enter the IP address or host name of the remote HMC. select HMC Management.capable. 5. select Test Network Connectivity.

To enable remote command execution (see Figure 5-12):
1. In the navigation area, select HMC Management.
2. In the Administration section of the contents area, select Remote Command Execution.

Figure 5-12 HMC option for remote command execution

3. In the Remote Command Execution window, enable the check box Enable remote command execution using the ssh facility, as shown in Figure 5-13, and click OK.

Figure 5-13 Remote command execution window

This function was added to the validation and migration steps for the HMC GUI and CLI. The function informs the local HMC to contact the remote HMC and requests a list of all migration-ready IBM Power Systems servers. select the partition which you will migrate to the destination system. and authenticate to the remote HMC by using a remote user ID with hmcsuperadmin privileges.180: 5.4. In the contents pane (the top right of the Hardware Management Console Workplace). Example 5-1 mkauthkeys command execution hscroot@hmc1:~> mkauthkeys --ip 9. In the navigation pane.5.7.4. 3. and select the source system.180 -u hscroot -t rsa Enter the password for user hscroot on the remote host 9. such as the hscroot user. 9. 2. “The migrlpar command” on page 163. see 5. To validate by using the GUI: 1.Use the mkauthkeys command in the CLI to retrieve authentication keys from the current HMC managing the mobile partition. Click view popup menu and select Operations  Mobility  Validate to start the validation window.3.1. Authentication to a remote system (in our case.180) using RSA authentication is displayed in Example 5-1. expand Systems Management  Servers. Validation steps for remote migration You may validate migration to an authenticated remote HMC by using the HMC GUI or CLI. “The mkauthkeys command” on page 173. If you want more information about the CLI. For details about the mkauthkeys command.3 Remote validation and migration Partition migration to a remote destination includes a function available starting in HMC Version 7 Release 3.3.5.4.7.5. 138 IBM PowerVM Live Partition Mobility . see 5. The following steps use the GUI. You must be logged in as a user with hmcsuperadmin privileges.3.

4. Enter the Remote HMC IP address or host name and the Remote User ID information, and then click the Refresh Destination System button. See Figure 5-14. If your local HMC manages any other migration-ready systems, you will see a list of those in the Destination system listing prior to the refresh.

Figure 5-14 Remote migration information entered for validate task

If the destination systems refresh properly, all migration-ready systems managed by the remote HMC are listed; continue to step 5 on page 140. If you encounter an error, check the following items:
a. The remote HMC and User ID, which were used for authentication, were entered correctly.
b. Network communication to the remote HMC is available.
c. SSH authentication was configured properly.
d. Migration-ready systems exist on the remote HMC.

5. Select the remote Destination system. You have the option of also specifying the Destination profile name and Wait time. If you are proceeding with this step when the mobile partition is in the Not Activated state, the destination and source mover service partition and wait time entries do not appear, because these are not required for an inactive partition migration. Click the Validate button (Figure 5-15).

Figure 5-15 Validation window after destination system refresh

6. If errors or warnings occur, the Partition Validation Errors/Warnings window opens. Perform the following steps:

Note: If the window does not appear, you have no errors or warnings; you may migrate the partition after the validation steps.

a. Check the messages in the window and the prerequisites for the migration:
   • For error messages: You cannot perform the migration steps if errors exist. Eliminate any errors.
   • For warning messages: If only warnings occur (no errors), you may proceed.

b. Close the Partition Validation Errors/Warnings window. A validation window opens again, as shown in Figure 5-16. If you had warning messages only (no error messages), you may click the Migrate button.

Figure 5-16 Validation window after validation

Migration steps for remote migration
After the validation steps, migrate the mobile partition from the source to the destination system managed by the remote HMC. You can perform the migration steps by using the HMC GUI or CLI. For information about the CLI, see 5.7, “The command-line interface” on page 162.

In this scenario, we migrate a partition named mobile from the source system (9117-MMA-SN100F6A0-L9) managed by the local HMC (9.3.5.128) to the destination system (9117-MMA-SN101F170-L10) on the remote HMC (9.3.5.180). Migrate the partition as follows:

1. In the navigation pane on the local HMC, expand Systems Management → Servers, and select the source system.

142 IBM PowerVM Live Partition Mobility . the profile on the destination system will be overwritten. Figure 5-17 Local HMC environment before migrating 2. the Migration Type is inactive.At this point. Click view popup menu and select Operations  Mobility  Migrate to start the Partition Migration wizard. that is. If the mobile partition is powered off. Check the Migration Information of the mobile partition in the Partition Migration wizard. the Migration Type is active. If the partition is in the Running state. select the partition that you will migrate to the destination system. you can see the mobile partition is on the source system managed by the local HMC in Figure 5-17 and that only the source system is available on the local HMC for this scenario. If you leave the name blank or do not specify a unique profile name. the mobile partition. You can specify the New destination profile name in the Profile Name window. 3. In the contents pane. 4.

5. Select Remote Migration, enter the Remote HMC and Remote User information, and then click Next, as shown in Figure 5-18.

Figure 5-18 Remote HMC selection window in Migrate task

6. Select the destination system and click Next. The HMC validates the partition migration environment.
7. Check errors or warnings in the Partition Validation Errors/Warnings window, and eliminate any errors. If there are any errors, you cannot proceed to the next step. You may proceed to the next step if it shows warnings only.
8. If you are performing an active migration, select the source and the destination mover service partitions to be used for the migration. If you are performing an inactive migration, skip this step and go to step 9.
9. Select the VLAN configuration.
10. Select the virtual storage adapter assignment.
11. Specify the wait time in minutes.

12. Check the settings that you have specified for this migration on the Summary window, and then click Finish to begin the migration. See Figure 5-19.

Figure 5-19 Remote migration summary window

13. After migration is complete, check that the mobile partition is on the destination system on the remote HMC.

You can see that the mobile partition is on the destination system, as shown in Figure 5-20.

Figure 5-20 Remote HMC view after remote migration success

14. If you keep a record of the virtual I/O configuration of the partitions, check the migrating partition’s configuration in the destination system. Although the migrating partitions retain the same slot numbers as on the source systems, the server virtual adapter slot numbers can be different between the source and destination Virtual I/O Servers. Also, the virtual target device name can change during migration.

5.4.4 Command-line interface enhancements
The lslparmigr and migrlpar commands have been enhanced to request information from a remote HMC. The commands now include the --ip and -u flags, which allow you to indicate the remote HMC that manages the destination system. The next two examples show how the --ip and -u flags are used with the lslparmigr command (in Example 5-2 on page 146) and the migrlpar command (in Example 5-3 on page 146).

or you may specify a new profile to save the current partition state.180 -u hscroot -m 9117-MMA-SN100F6A0-L9 -t 9117-MMA-SN101F170-L10 -p mobile Warnings: HSCLA295 As part of the migration process.Example 5-2 The lslparmigr command with remote options lslparmigr -r msp --ip 9. You may specify a different existing profile.5. The default is to use the current profile.dest_msp_names=VIOS1_L10.3.5. which will replace the existing definition of this profile.5.111/ Example 5-3 The migrlpar command with remote options hscroot@hmc1:~> migrlpar -o v --ip 9. other options are possible. 146 IBM PowerVM Live Partition Mobility .3//1/VIOS1_L10/9.source_msp_id=1.180 -u hscroot -m 9117-MMA-SN100F6A0-L9 \ -t 9117-MMA-SN101F170-L10 --filter lpar_names=PROD source_msp_name=VIOS1_L9.3. the HMC will create a new migration profile containing the partition's current state. While this works for most scenarios. dest_msp_ids=1.3.5.3. which would be replaced with the current partition definition.ipaddr_mappings=9.

5.5 Multiple shared processor pools

IBM Power Systems servers with AIX support multiple shared processor pools. The use of shared processor pools on a given system allows for a specified number of processors to be reserved for a specific set of partitions. Shared processor pools do not have to be configured differently from how it is described in PowerVM Virtualization on IBM System p: Managing and Monitoring, SG24-7590. However, additional steps are included in the HMC GUI and CLI procedures. The name and identifier of the shared processor pool on the destination do not have to be the same as those on the source.

5.5.1 Shared processor pools in migration and validation GUI
The migration wizard presents you with a list of all defined shared processor pools on the destination that have sufficient capacity to receive the migrating partition. You are asked to identify the target pool using the Migrate task, as shown in Figure 5-21.

Figure 5-21 Shared processor pool selection in migration wizard

If you use the CLI, the migration operation will fail if the arrival of the migrating partition would cause the maximum processors in the chosen shared pool on the destination to be exceeded.

The ability to select a specific shared processor pool is also presented during the Validate task, after an error-free validation has occurred, as shown in Figure 5-22.

Figure 5-22 Shared processor pool selection in Validate task

If the migration is initiated after a change has occurred on the destination system such that the selected processor pool can no longer accommodate the client partition, the migration will fail.

5.5.2 Processor pools on the command line
The CLI command changes to accommodate processor pools for partition mobility include both the lslparmigr and the migrlpar commands. Examples of the changes made are described in 5.7, “The command-line interface” on page 162.

5.6 Migrating a partition with physical resources

This section explains how to migrate a partition that is currently using physical resources; you might have to switch from physical to virtual resources. For this scenario, we assume you are beginning with a mobile partition that uses a single physical Ethernet adapter and a single physical SCSI adapter. See Figure 5-23.

Figure 5-23 The mobile partition is using physical resources

5.6.1 Overview
Three types of adapters cannot be present in a partition when it is participating in an active migration: physical adapters, Integrated Virtual Ethernet adapters, and non-default virtual serial adapters. A non-default virtual serial adapter is a virtual serial adapter other than the two automatically created virtual serial adapters in slots 0 and 1. If a partition has non-default virtual serial adapters, you must deconfigure them.

If the mobile partition has any adapters that cannot be migrated, they must be removed from the mobile partition before it can participate in an active migration. If these adapters are marked as desired in the active profile, remove them using dynamic logical partitioning. If these adapters are marked as required in the active profile, activate the partition with a profile that does not have them marked as required. The process described in this section covers both the case where the mobile partition does not have such required adapters, and the case where it does.

Before proceeding, verify that the requirements for Live Partition Mobility are met, as outlined in Chapter 3, “Requirements and preparation” on page 45. In that chapter, ignore the requirement to check that the adapters cannot be migrated, because this exception is discussed in all of 5.6, “Migrating a partition with physical resources” on page 149.

5.6.2 Configure a Virtual I/O Server on the source system
Create and install a Virtual I/O Server partition on the source system. When creating and configuring the partition, see the basic configuration information in 4.3, “Preparing for an active partition migration” on page 94. For detailed instructions, see the following procedure.

To configure a Virtual I/O Server on the source system:

1. Configure a virtual SCSI server adapter. When creating the virtual SCSI server adapter, do not set the server adapter to accept connections from any partition; use the “Only selected client partition can connect” option. For the Client partition field, specify the mobile partition. For the Client adapter field, specify an unused virtual slot on the mobile partition. This method allows the migration process to identify which server adapter is paired with which client partition.

Important: Mark the virtual SCSI server adapter as desired (not required) in your Virtual I/O Server partition profile. This setting is necessary to allow the migration process to dynamically remove this adapter during a migration.

2. Attach and configure the remote storage using a storage area network:
– Create one LUN on your storage subsystem for each disk in use on your mobile partition. Ensure that these LUNs are at least as large as the disks on your mobile partition.
– Make these LUNs available as hdisks on the source Virtual I/O Server.

– On the source Virtual I/O Server, set the reserve_policy on the disks to no_reserve by using the chdev command:
$ chdev -dev hdisk5 -attr reserve_policy=no_reserve
– Assign the hdisks as targets of the virtual SCSI server adapter that you created, using the mkvdev command. Do not create volume groups and logical volumes on the hdisks within the Virtual I/O Server.
3. Configure shared Ethernet adapters for each physical network interface that is configured on the mobile partition.
4. Ensure the Mover service partition box is checked in the Virtual I/O Server partition properties.

Figure 5-24 shows the created and configured source Virtual I/O Server.

Figure 5-24 The source Virtual I/O Server is created and configured
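As a minimal sketch of the storage mapping in step 2 above (the device names hdisk5, hdisk6, vhost0, and the virtual target device names are assumptions for illustration, not taken from the example environment), the sequence on the source Virtual I/O Server might look like this:

$ chdev -dev hdisk5 -attr reserve_policy=no_reserve
$ chdev -dev hdisk6 -attr reserve_policy=no_reserve
$ mkvdev -vdev hdisk5 -vadapter vhost0 -dev mobile_hd0
$ mkvdev -vdev hdisk6 -vadapter vhost0 -dev mobile_hd1
$ lsmap -vadapter vhost0

The final lsmap command simply confirms that both hdisks are now presented through the virtual SCSI server adapter.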

5.6.3 Configure a Virtual I/O Server on the destination system

Create and install a Virtual I/O Server partition on the destination system, just as you did on the source Virtual I/O Server. When creating and configuring the partition, see the basic configuration information in 4.3, “Preparing for an active partition migration” on page 94. For detailed instructions, see the following procedure.

To configure a Virtual I/O Server on the destination system:
1. Use standard SAN configuration techniques to attach the same remote storage that you attached to the source Virtual I/O Server. The same remote LUNs must be available as hdisks on both the source and destination Virtual I/O Servers. Do not map any shared hdisks on the destination Virtual I/O Server.
2. Configure shared Ethernet adapters for each physical network that is configured on the mobile partition.
3. Ensure the Mover service partition box is checked in the Virtual I/O Server partition properties.

Important: Do not create any virtual SCSI server adapters for your mobile partition on the destination Virtual I/O Server. All of this is done automatically for you during the migration.

Figure 5-25 shows the created and configured destination Virtual I/O Server. In the figure, the hdisk numbers on the destination Virtual I/O Server differ from those on the source Virtual I/O Server. The hdisk numbers may be different, but they are the same LUNs on the storage subsystem.

Figure 5-25 The destination Virtual I/O Server is created and configured

5.6.4 Configure storage on the mobile partition

To switch over to using virtual storage devices on your mobile partition:
1. Add a virtual SCSI client adapter to the profile of the mobile partition. Ensure the virtual SCSI client adapter refers to the server adapter you created on the source Virtual I/O Server.
2. Use dynamic logical partitioning to add a virtual SCSI client adapter with the same properties from the previous step to the running mobile partition.

3. Configure the virtual SCSI devices on the mobile partition, as follows:
a. Run the cfgmgr command on the mobile partition.
b. Verify that the virtual SCSI adapters are in the Available state by using the lsdev command:
# lsdev -t IBM,v-scsi
c. Verify that the virtual SCSI disks are in the Available state by using the lsdev command:
# lsdev -t vdisk

Figure 5-26 shows the configured storage devices on the mobile partition.

Figure 5-26 The storage devices are configured on the mobile partition
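The exact output of the commands in step 3 depends on your configuration; as a rough, hedged illustration only (device names are assumptions and location codes are omitted), you might see output similar to:

# lsdev -t IBM,v-scsi
vscsi0 Available  Virtual SCSI Client Adapter
# lsdev -t vdisk
hdisk7 Available  Virtual SCSI Disk Drive
hdisk8 Available  Virtual SCSI Disk Drive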

4. Move rootvg from the physical disks to the virtual disks, as follows:
a. Extend rootvg on to the virtual disks using the extendvg command. For example, on the mobile partition, assume hdisk0 and hdisk1 are the physical disks in rootvg, and that hdisk7 and hdisk8 are the virtual disks you created whose sizes are at least as large as hdisk0 and hdisk1. To extend to the new disks:
# extendvg rootvg hdisk7 hdisk8
If the extendvg command fails, depending on the size of the disks, you might have to change the factor of the volume group by using the chvg command. (Do not use the chvg command unless the extendvg command fails.)
# chvg -t 10 rootvg

Figure 5-27 shows rootvg extended on to the virtual disks.

Figure 5-27 The root volume group extends on to virtual disks

b. Migrate physical partitions off the physical disks in rootvg on to the virtual disks in rootvg using the migratepv command:
# migratepv hdisk0 hdisk7
# migratepv hdisk1 hdisk8
c. Set the bootlist to a virtual disk in rootvg using the bootlist command:
# bootlist -m normal hdisk7 hdisk8
d. Run the bosboot command on a virtual disk in rootvg:
# bosboot -ad /dev/hdisk7
e. Remove the physical disks from rootvg using the reducevg command:
# reducevg rootvg hdisk0 hdisk1
5. Repeat the previous step (excluding the bootlist command and the bosboot command) for all other volume groups on the mobile partition.

Figure 5-28 shows rootvg on the mobile partition now wholly on the virtual disks.

Figure 5-28 The root volume group of the mobile partition is on virtual disks only
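As a minimal sketch of step 5 above (the volume group name datavg and the disk names are assumptions for illustration), moving a data volume group follows the same pattern without the boot-related commands:

# extendvg datavg hdisk9
# migratepv hdisk2 hdisk9
# reducevg datavg hdisk2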

5.6.5 Configure network on the mobile partition

To configure the virtual network devices on the mobile partition:
1. Add virtual Ethernet adapters to the profile of your mobile partition:
– For each physical network on the mobile partition, add one virtual adapter.
– Ensure the virtual Ethernet adapters use the shared Ethernet adapters you created on the source Virtual I/O Server.
2. Use dynamic logical partitioning to add virtual Ethernet adapters with the same properties from the previous step to the running mobile partition.
3. Configure the virtual Ethernet devices on the mobile partition:
a. Run the cfgmgr command on the mobile partition to make the devices available.
b. Verify that the virtual Ethernet adapters are in the Available state using the lsdev command:
# lsdev -t IBM,l-lan

Figure 5-29 shows the mobile partition with a virtual network device created.

Figure 5-29 The mobile partition has a virtual network device created

Now that the virtual network adapters are configured, stop using the physical network adapters and begin using the virtual network adapters. To move to virtual networks on the mobile partition, use new or existing IP addresses. Both procedures, discussed in this section, affect network connectivity differently. Understand how all running applications use the networks, and take appropriate actions before proceeding.

Use new IP addresses
To use new IP addresses, obtain a new IP address for each physical network that the mobile partition is using, and then:
1. Configure the virtual network interfaces on the mobile partition, using the new IP addresses that you obtained.
2. Verify network connectivity for each of the virtual network interfaces.
3. Unconfigure the physical network interfaces on the mobile partition.
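As a minimal sketch of the new-IP procedure above (the host name, interface names en1 and en0, and the addresses are assumptions for illustration), the virtual interface can be configured, tested, and the physical interface then unconfigured as follows:

# mktcpip -h mobile1 -a 10.1.1.15 -m 255.255.255.0 -i en1 -g 10.1.1.1
# ping -c 3 10.1.1.1
# chdev -l en0 -a state=detach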

Use existing IP addresses
To use existing IP addresses, record the network information for the physical network interfaces on the mobile partition, and then:
1. Unconfigure the physical network interfaces on the mobile partition.
2. Configure the virtual network interfaces on the mobile partition, using the IP addresses previously used by the physical interfaces.
3. Verify network connectivity for each of the virtual interfaces.

Figure 5-30 shows the mobile partition using a virtual network, with its physical network interface unconfigured.

Figure 5-30 The mobile partition has unconfigured its physical network interface

5.6.6 Remove adapters from the mobile partition

Two procedures are available to remove adapters that cannot be migrated from the mobile partition:
– If the mobile partition has any adapters that are marked as required, follow “Process 1: Required adapters” on page 160.
– If adapters are not marked as required, follow “Process 2: No required adapters” on page 160.

Process 1: Required adapters
To remove the adapters from the mobile partition:
1. Remove all physical adapters (including Integrated Virtual Ethernet) from the profile of the mobile partition.
2. Remove all virtual serial adapters in slots 2 and above from the profile of the mobile partition.
3. Shut down the mobile partition.
4. Activate the mobile partition with the modified profile.

Note: A reboot is not sufficient. The mobile partition must be shut down and activated with the modified profile.

Process 2: No required adapters
To remove the adapters from the mobile partition:
1. Remove all physical devices, along with their children, by using the rmdev command. For example, if the only physical devices in use are in slots pci0 and pci1, run the following commands to remove the physical devices:
# rmdev -R -dl pci0
# rmdev -R -dl pci1
2. Remove all physical adapters from the mobile partition using dynamic logical partitioning.
3. Remove all virtual serial adapters from slots 2 and above from the mobile partition using dynamic logical partitioning.

At this point, no physical devices are in use on the mobile partition.
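Before running the rmdev commands in Process 2 above, it can help to list the physical slots and the devices attached to them; a brief, hedged sketch (the slot and bus names shown are assumptions for illustration):

# lsslot -c pci
# lsdev -p pci0

The first command lists the PCI slots assigned to the partition and the adapters in them; the second lists the child devices of a given PCI bus so you know what the -R (recursive) flag of rmdev will remove.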

Figure 5-31 shows the mobile partition with only virtual adapters.

Figure 5-31 The mobile partition with only virtual adapters

5.6.7 Ready to migrate

The mobile partition is now ready to be migrated. Close any virtual terminals on the mobile partition, because they will lose connection when the partition migrates to the destination system. Virtual terminals can be reopened when the partition is on the destination system. After the migration is complete, consider adding physical resources back to the mobile partition, if they are available on the destination system.

Note: The active mobile partition profile is created on the destination system without any references to any physical I/O slots that were present in your profile on the source system. Any other mobile partition profiles are copied unchanged.

Figure 5-32 shows the mobile partition migrated to the destination system.

Figure 5-32 The mobile partition on the destination system

5.7 The command-line interface

The HMC provides a command-line interface (CLI) in addition to its easy-to-use GUI for Live Partition Mobility. This CLI allows you to script frequently performed operations. This automation saves time and reduces the chance of errors. Several existing HMC commands have been updated to support the latest mobility features. The commands are migrlpar, lslparmigr, and lssyscfg.

Note: An existing command, migrcfg, is used to push partition configuration data that is held on the HMC to a managed system. Despite its migr prefix, this operation is distinct from the Live Partition Mobility described in this book.

The HMC commands can be launched either locally on the HMC or remotely by using the ssh -l <hmc user> <hmc> <hmc command> command.

Tip: Use the ssh-keygen command to create the public and private key-pair on your client. Then add these keys to the HMC user’s key-chain by using the mkauthkeys --add command on the HMC.

Command conventions
The commands follow the HMC command conventions, which are:
– Single character parameters are preceded by a single dash (-).
– Multiple character parameters are preceded by a double dash (--).
– All filter and attribute names are lower case, with underscores joining words together, for example vios_lpar_id.

5.7.1 The migrlpar command

This command is used to validate, initiate, stop, and recover a partition migration. The same command, syntax, and options are used for both active and inactive migrations. The HMC determines which type of migration to perform based on the state of the partition referenced in the command. See “Command conventions” on page 163.

The command syntax is:
migrlpar -o m | r | s | v -m <managed system> [-t <managed system>]
[--ip <IP address>] [-u <user ID>]
-p <partition name> | --id <partitionID> [-n <profile name>]
[-f <input data file> | -i <input data>] [-w <wait time>]
[-d <detail level>] [-v] [--force] [--help]

The flags used in this command are:
-o The operation to perform, which can be:
   m - validate and migrate
   r - recover
   s - stop
   v - validate

-m <managed system>  The source managed system’s name
-t <managed system>  The destination managed system’s name
-p <partition name>  The partition on which to perform the operation
--ip <IP address>    The IP address or host name of the target managed system's HMC
-u <user ID>         The user ID to use on the target managed system's HMC
--id <partitionID>   The ID of the partition on which to perform the operation
-n <profile name>    The name of the partition profile to be created on the destination
-f <input data file> The name of the file containing input data for this command, typically the virtual adapter mapping from source to destination or the destination shared-processor pool
-i <input data>      The input data for this command. This format is the same format as the input data file of the -f option.
-w <wait time>       The time, in minutes, to wait for any operating system command to complete
-d <detail level>    The level of detail requested from operating system commands; values range from 0 (none) to 5 (highest)
-v                   Verbose mode
--force              Force the recovery. This option should be used with caution.
--help               Prints a help message

Input data format
The data given in the file specified with the -f flag, or the data specified with -i, must be in comma-separated value (CSV) format. Use either of the following formats:
attr_name1=value,attr_name2=value,...
"attr_name1=value1,value2,...",...

The supported attributes are: virtual_scsi_mappings, virtual_fc_mappings, source_msp_name, source_msp_id, dest_msp_name, dest_msp_id, shared_proc_pool_id, and shared_proc_pool_name. The -f and -i flags can be used with the migrate (-o m) and the validate (-o v) operations.

The data specified with the virtual_scsi_mappings attribute consists of one or more source virtual SCSI adapter to destination virtual SCSI adapter mappings in the format:
client_virtual_slot_num/dest_vios_lpar_name/dest_vios_lpar_id
The data format specified with the virtual_fc_mappings attribute mirrors the format of the virtual_scsi_mappings attribute as it relates to virtual Fibre Channel adapter mappings for N_Port ID Virtualization (NPIV).

Validate operation (-o v)
The migrlpar -o v command validates the proposed migration, returning a non-zero return code if the validate operation finds any configuration errors that will cause the migration to fail. The HMC does not stop the validation process at the first error; it continues processing as far as possible in an attempt to identify all problems that might invalidate the migration. The command output is a list of errors and warnings of every potential or real problem that the HMC finds. Warnings not accompanied by an error do not cause the validate operation to fail.

Migrate operation (-o m)
The migrlpar -o m command initiates a migration. The same command is used for inactive or active migrations; the HMC chooses the appropriate migration type based on the state of the given partition.

Stop operation (-o s)
If the migration is in a stoppable state, the migrlpar -o s command halts the specified migration and rolls back any changes. This command can be executed only on the HMC upon which the migration was started.

Recovery operation (-o r)
In the event of a lost connection or a migration failure, you may use the recovery operation to restore a partially migrated state by using the migrlpar -o r command. Depending on what point in the migration the connection was lost, the recovery command either rolls back the operation (undoes the changes on the destination system) or completes the migration.

Examples
To migrate the partition myLPAR from the system srcSystem to the destSystem using the default MSPs and adapter maps, use the following command:
$ migrlpar -o m -m srcSystem -t destSystem -p myLPAR

In an environment with multiple mover service partitions on the source and destination, you can specify which mover service partitions to use in a validation or migration operation.

The following command validates the migration in the previous example with specific mover service partitions. Note that you can use both partition names and partition IDs on the same command:
$ migrlpar -o v -m srcSystem -t destSystem -p myLPAR \
-i source_msp_id=2,dest_msp_name=VIOS2_L10

The syntax to stop a partition migration is:
$ migrlpar -o s -m srcSystem -p myLPAR

The syntax to recover a failed migration is:
$ migrlpar -o r -m srcSystem -p myLPAR

You can use the --force flag on the recover command, but you should do so only when the partition migration fails, leaving the partition definition on both the source and destination systems.

When the destination system has multiple shared-processor pools, you can stipulate to which shared-processor pool the moving partition will be assigned at the destination with either of the following commands:
$ migrlpar -o m -m srcSystem -t destSystem -p myLPAR -i "shared_proc_pool_id=1"
$ migrlpar -o m -m srcSystem -t destSystem -p myLPAR -i "shared_proc_pool_name=DefaultPool"

The capacity of the chosen shared-processor pool must be sufficient to accommodate the migrating partition; otherwise the migration operation will fail.

5.7.2 The lslparmigr command

Use the lslparmigr command to show the state of running migrations or to show the mover service partitions, Virtual I/O Servers, shared processor pools, and adapter mappings that might be used for a partition migration.

The command syntax is:
lslparmigr -r lpar | manager | msp | procpool | sys | virtualio
[-m <managed system>] [-t <managed system>]
[--ip <IP address>] [-u <user ID>]
[--filter <filter data>] [-F [<attribute names>]] [--header] [--help]

The flags used in this command are:
-r The type of resources for which to list information:
   lpar - partition
   manager - Hardware Management Console (HMC)
   msp - mover service partitions
   procpool - shared processor pool
   sys - managed system (CEC)
   virtualio - virtual I/O
-m <managed system>  The source managed system's name.
-t <managed system>  The destination managed system's name. This parameter is not valid with -r sys, is optional with -r lpar, and is required with -r msp and -r virtualio.
--ip <IP address>    The IP address or host name of the target managed system's HMC.
-u <user ID>         The user ID to use on the target managed system's HMC.
--filter <filter data>  Filters the data to be listed, in CSV format. Use either of the following formats:
   filter_name1=value,filter_name2=value,...
   "filter_name1=value1,value2,...",...
   Valid filter names are lpar_ids and lpar_names. The filters are mutually exclusive. With -r msp and -r virtualio, exactly one partition name or ID must be specified and the partition must be an AIX or Linux partition.
-F [<attribute names>]  Comma-separated list of the names of the attributes to be listed. If no attribute names are specified, then all attributes will be listed.
--header  Prints a header of attribute names when -F is also specified.
--help    Prints a help message.

Partition information (-r lpar)
The lslparmigr -r lpar command displays partition migration information. Without the -F flag, the attributes that the command lists are lpar_name, lpar_id, migration_state, migration_type, source_sys_name, dest_sys_name, source_lpar_id, dest_lpar_id, source_msp_name, source_msp_id, dest_msp_name, and dest_msp_id.

Remote migration information (-r manager)
The lslparmigr -r manager command displays information regarding the current HMC’s ability to migrate a mobile partition to a system managed by a remote HMC.

Mover service partition information (-r msp)
The lslparmigr -r msp command displays the mover service partition-to-mover service partition relationship between the source and destination systems. For each mover service partition on the source system, the command displays the mover service partitions on the destination system it is able to communicate with. The attributes listed are source_msp_name, source_msp_id, dest_msp_names, and dest_msp_ids. If no MSPs are on the source system, there will be no data.

Shared processor pool information (-r procpool)
The lslparmigr -r procpool command displays all available shared processor pools on the destination system which can be used by the mobility client. The command shows the possible processor pool names and IDs. Attributes listed are shared_proc_pool_ids and shared_proc_pool_names.

Virtual I/O Server information (-r virtualio)
The lslparmigr -r virtualio command displays information pertaining to the candidate destination Virtual I/O Servers. The command shows the possible and suggested mappings between the source virtual client adapters and the destination virtual server adapters for a given migration, for both virtual SCSI and virtual Fibre Channel adapters.

System information (-r sys)
The lslparmigr -r sys command displays all partition migration information for a managed system. Attributes listed are inactive_lpar_migration_capable, active_lpar_migration_capable, num_active_migrations_supported, num_inactive_migrations_supported, num_active_migrations_in_progress, and num_inactive_migrations_in_progress.

Examples
The following examples illustrate how this command is used.

System migration information
To display the migration capabilities of a system, use the following syntax:
$ lslparmigr -r sys -m mySystem

This command produces:
inactive_lpar_mobility_capable=1,active_lpar_mobility_capable=1,num_inactive_migrations_supported=40,num_active_migrations_supported=40,num_inactive_migrations_in_progress=1,num_active_migrations_in_progress=0

In this example, we can see that the system is capable of both active and inactive migration and that there is one inactive partition migration in progress.

By using the -F flag, the same information is produced in a CSV format:
$ lslparmigr -r sys -m mySystem -F
This command produces:
1,1,40,40,1,0
This format is appropriate for parsing or for importing into a spreadsheet.

Adding the --header flag prints column headers on the first line:
$ lslparmigr -r sys -m mySystem -F --header
This command produces:
inactive_lpar_mobility_capable,active_lpar_mobility_capable,num_inactive_migrations_supported,num_active_migrations_supported,num_inactive_migrations_in_progress,num_active_migrations_in_progress
1,1,40,40,1,0
On a terminal, the header is printed on a single line.

If you are only interested in specific attributes, you can specify these as options to the -F flag. If you want a space instead of a comma to separate values, surround the attributes with double quotes. For example, if you want to know just the number of active and inactive migrations in progress, use the following command:
$ lslparmigr -r sys -m mySystem -F \
num_active_migrations_in_progress,num_inactive_migrations_in_progress
This command produces the following results, without the attribute identifier, which indicate that there are no active migrations and one inactive migration in progress:
0,1
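Because the -F output is plain CSV, it is easy to consume in a script run on the HMC (or wrapped in an ssh call). A short, hedged sketch, assuming the system name mySystem and simple shell variable handling:

#!/usr/bin/ksh
# Read the two in-progress counters into shell variables
OUT=$(lslparmigr -r sys -m mySystem -F \
num_active_migrations_in_progress,num_inactive_migrations_in_progress)
ACTIVE=${OUT%%,*}
INACTIVE=${OUT##*,}
echo "active=$ACTIVE inactive=$INACTIVE"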

Remote migration information
To show the remote migration information of the HMC, use the -r manager option:
$ lslparmigr -r manager
This option produces:
remote_lpar_mobility_capable=1
Here, we see that the command supplies only one attribute for the user on the HMC from which it is executed. The attribute remote_lpar_mobility_capable displays a value of 1 if the HMC has the ability to perform migrations to a remote HMC. Conversely, a value of 0 indicates that the HMC is incapable of remote migrations. You may also use the -F flag followed by the attribute to limit the output of the command to the value. For example, the command:
$ lslparmigr -r manager -F remote_lpar_mobility_capable
This command produces:
1

Partition migration information
To show the migration information of the logical partitions of a managed system, use the -r lpar option:
$ lslparmigr -r lpar -m mySystem
This option produces:
name=QA,lpar_id=4,migration_state=Not Migrating
name=VIOS1_L10,lpar_id=1,migration_state=Not Migrating
name=PROD,lpar_id=3,migration_state=Migration Starting,migration_type=inactive,dest_sys_name=9117-MMA-SN100F6A0-L9,dest_lpar_id=65535

Here, we see that the system mySystem is hosting three partitions: QA, VIOS1_L10, and PROD. Of these, the PROD partition is in the Starting state of an inactive migration, as indicated by the migration_state and migration_type attributes. When the command was run, the ID of the destination partition had not been chosen, as seen by the 65535 value for the dest_lpar_id parameter.

Use the --filter flag to limit the output to a given set of partitions with either the lpar_names or the lpar_ids attributes:
$ lslparmigr -r lpar -m mySystem --filter lpar_ids=3
This flag produces:
name=PROD,lpar_id=3,migration_state=Migration Starting,migration_type=inactive,dest_sys_name=9117-MMA-SN100F6A0-L9,dest_lpar_id=7

Here, the output information is limited to the partition with ID=3, and we see that the dest_lpar_id has now been chosen.

You can use the -F flag to generate the same information in CSV format or to limit the output:
$ lslparmigr -r lpar -m mySystem --filter lpar_ids=3 -F
This flag produces:
PROD,3,Migration Starting,inactive,9117-MMA-SN101F170-L10,9117-MMA-SN100F6A0-L9,3,unavailable,unavailable
Here the -F flag, without additional parameters, has printed all the attributes of the partition, which is the one performing the inactive migration. The last four fields of output pertain to the MSPs. Because the partition in question is undergoing an inactive migration, no MSPs are involved and these fields are empty. You can use the --header flag with the -F flag to print a line of column headers at the start of the output.

Mover service partition information
The -r msp option shows the possible mover service partitions for a migration:
$ lslparmigr -r msp -m srcSystem -t destSystem --filter lpar_names=TEST
This option produces:
source_msp_name=VIOS1_L9,source_msp_id=1,"dest_msp_names=VIOS1_L10","dest_msp_ids=1"
Here, we see that if we move the partition TEST from srcSystem to destSystem, then:
– There is a mover service partition on the source (VIOS1_L9).
– There is a mover service partition on the destination (VIOS1_L10).
– If the migration uses VIOS1_L9 on the source, VIOS1_L10 can be used on the destination.
This approach gives one possible mover service partition combination for the migration.

Shared processor pool information
Use the -r procpool option to display the shared processor pools capable of hosting the client partition on the destination server. The -F flag (optional) can be used to format or limit the output, as follows:
$ lslparmigr -r procpool -m srcSystem -t destSystem \
--filter lpar_names=TEST -F shared_proc_pool_ids

The flag produces:
1,0
This output indicates that processor pool IDs 1 and 0 are capable of hosting the client partition called TEST on the destination system.

The command can also be used to identify shared-processor pools available for remote HMC migrations, with the --ip and -u flags to specify the remote HMC and remote user ID, respectively, as shown in the following example, with the addition of the remote HMC specification and using lpar_ids to specify the client partition:
$ lslparmigr -r procpool -m srcSystem --ip 9.3.5.180 \
-u hscroot -t destSystem --filter lpar_ids=2
This command shows that we are communicating with an HMC with IP address 9.3.5.180 using the HMC’s user ID hscroot. The command then checks the remote HMC for the destSystem managed system for possible processor pools. It produces the output:
"shared_proc_pool_ids=1,0","shared_proc_pool_names=SharedPool01,DefaultPool"
Here, the system is showing that two shared-processor pools are possible destinations for the client partition. Also, without the -F flag you are given detailed output of the attributes and values.

Virtual I/O Server information
Use the -r virtualio option to display the possible virtual adapter mappings for a given migration. The command requires the -m, -t, and --filter flags. The --filter flag requires that you use either the lpar_ids or lpar_names attribute to identify the client partition. You may only specify one client partition at a time. The -F flag can be used to format or limit the output, as follows:
$ lslparmigr -r virtualio -m srcSystem -t destSystem \
--filter lpar_names=TEST -F suggested_virtual_scsi_mappings
The flag produces:
40/VIOS1_L10/1
This output indicates that if you migrate the client partition called TEST from srcSystem to destSystem, then the suggested virtual SCSI adapter mapping would be to map the client virtual adapter in slot 40 to the Virtual I/O Server called VIOS1_L10, which has a partition ID of 1.
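As a sketch of how this output can be reused (the partition and system names follow the examples above, and the mapping value is simply the suggested one, so treat this as an illustration rather than a required step), the suggested mapping can be passed straight back to migrlpar through the virtual_scsi_mappings attribute documented earlier:

$ migrlpar -o m -m srcSystem -t destSystem -p TEST \
-i "virtual_scsi_mappings=40/VIOS1_L10/1"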

5.7.3 The lssyscfg command

The lssyscfg command is a native HMC command that supports Live Partition Mobility:
– The lssyscfg -r sys command displays two attributes, active_lpar_mobility_capable and inactive_lpar_mobility_capable. These attributes can have a value of either 0 (incapable) or 1 (capable).
– The lssyscfg -r lpar command displays the msp and time_ref partition attributes on Virtual I/O Server partitions that are capable of participating in active partition migrations. The msp attribute has a value of 1 when the partition is enabled as a mover service partition and 0 when it is not enabled.

5.7.4 The mkauthkeys command

The mkauthkeys command is used for retrieval, removal, and validation of SSH authentication keys to a remote system.

The command syntax is:
mkauthkeys -a | --add | -g | -r | --remove | --test
[--ip <IP address>] [-u <user ID>] [--passwd <password>]
[-t <key type>] [<key string>] [--help]

The flags used in this command are:
-a | --add          Adds an SSH key as an authorized key
-g                  Gets the user's SSH public key
-r | --remove       Removes an SSH key from the user's authorized key list
--test              Verifies authentication to a remote HMC
--ip <IP address>   The IP address or host name of a remote HMC with which to exchange authentication keys
-u <user ID>        The ID of a user whose authentication keys are to be managed
--passwd <password> The password to use to log on to the remote HMC. If this parameter is omitted, you will be prompted for the password.
-t <key type>       The type of SSH authentication keys:
                    rsa - RSA authentication
                    dsa - DSA authentication

<key string>        The SSH key string to add or remove
--help              Prints this help

Examples
To get the remote HMC user's SSH public key, you may simply use the -g flag:
$ mkauthkeys --ip rmtHostName -u hscroot -g

You may also specify a preferred authentication method. To choose between DSA authentication or RSA authentication key usage, you may use the -t flag in either case:
$ mkauthkeys --ip rmtHostName -u hscroot -t rsa
$ mkauthkeys --ip rmtHostName -u hscroot -t dsa

In some cases, you may choose to remove the authentication keys, which you can do by using the mkauthkeys command with the -r flag:
$ mkauthkeys -r ccfw@rmtHostName
The HMC stores the key as a user called ccfw; it is not stored under the user ID you specified in the steps to retrieve the authentication keys. Also note that the remote HMC's host name has to be specified in this command. Only if DNS is unable to resolve the host name, and you used the IP address to configure the authentication, do you use the actual IP address in place of rmtHostName.

The --test flag allows you to check whether authentication is properly configured to the remote HMC:
$ mkauthkeys --ip rmtHostName -u hscroot --test
The command returns the following error if keys were not configured properly:
HSCL3653 The Secure Shell (SSH) communication configuration between the source and target Hardware Management Consoles has not been set up properly for user hscroot. Please run the mkauthkeys command to set up the SSH communication authentication keys.
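Tying these pieces together, a hedged sketch of preparing and starting a migration to a system managed by a remote HMC might look as follows (the host names remoteHMC and remoteSystem, the partition name, and the user ID are assumptions for illustration; all flags are those documented in this section):

$ mkauthkeys --ip remoteHMC -u hscroot -g
$ mkauthkeys --ip remoteHMC -u hscroot --test
$ migrlpar -o v -m srcSystem -t remoteSystem --ip remoteHMC -u hscroot -p myLPAR
$ migrlpar -o m -m srcSystem -t remoteSystem --ip remoteHMC -u hscroot -p myLPAR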

5.7.5 A more complex example

Example 5-4 on page 176 provides a more complete example of how the Live Partition Mobility commands can be used. This script fragment moves all the migratable partitions from one system to another. The example assumes that two environment variables, SRC_SERVER and DEST_SERVER, point to the system to empty and the system to load, respectively.

The algorithm starts by listing all the partitions on SRC_SERVER. For each remaining partition, it invokes a migration operation to DEST_SERVER. The code snippet does some elementary error checking. See Example 5-4 on page 176.

How it works
The script starts by checking that both the source and destination systems are mobility capable. For this, it uses the new attributes given in the lssyscfg command. It then uses the lslparmigr command to list all the partitions on the system, filters out any Virtual I/O Server partitions and partitions that are already migrating, and uses this list as an outer loop for the rest of the script.

The program then performs a number of elementary checks:
– The source and destination must be capable of mobility. The lssyscfg command shows the mobility capability attribute.
– Only partitions of type aixlinux can be migrated. The script uses the lssyscfg command to ascertain the partition type.
– It determines whether to avoid migrating a partition that is already migrating. The script reuses the lslparmigr command for this.
– It validates the partition migration. The script uses migrlpar -o v and checks the return code.

If all the checks pass, the migration is launched with the migrlpar command. If migrlpar returns a non-zero value, a recovery is attempted using the migrlpar -o r command.

In this example, the migrations take place sequentially. Running them in parallel is acceptable if there are no more than four concurrent active migrations per mover service partition. This is an exercise left to the reader.

Example 5-4 Script fragment to migrate all partitions on a system

#
# Get the mobility capabilities of the source and destination systems
#
SRC_CAP=$(lssyscfg -r sys -m $SRC_SERVER \
  -F active_lpar_mobility_capable,inactive_lpar_mobility_capable)
DEST_CAP=$(lssyscfg -r sys -m $DEST_SERVER \
  -F active_lpar_mobility_capable,inactive_lpar_mobility_capable)
#
# Make sure that they are both capable of active and inactive migration
#
if [ $SRC_CAP = $DEST_CAP ] && [ $SRC_CAP = "1,1" ]
then
  #
  # List all the partitions on the source system
  #
  for LPAR in $(lslparmigr -r lpar -m $SRC_SERVER -F name)
  do
    #
    # Only migrate "aixlinux" partitions; VIO servers cannot be migrated
    #
    LPAR_ENV=$(lssyscfg -r lpar -m $SRC_SERVER \
      --filter lpar_names=$LPAR -F lpar_env)
    if [ $LPAR_ENV = "aixlinux" ]
    then
      #
      # Make sure that the partition is not already migrating
      #
      LPAR_STATE=$(lslparmigr -r lpar -m $SRC_SERVER \
        --filter lpar_names=$LPAR -F migration_state)
      if [ "$LPAR_STATE" = "Not Migrating" ]
      then
        #
        # Perform a validation to see if there is a good chance of success
        #
        migrlpar -o v -m $SRC_SERVER -t $DEST_SERVER -p $LPAR
        RC=$?
        if [ $RC -ne 0 ]
        then
          echo "Validation failed. Cannot migrate partition $LPAR"
        else
          #
          # Everything looks good, let's do it.
          #
          echo "migrating $LPAR from $SRC_SERVER to $DEST_SERVER"
          migrlpar -o m -m $SRC_SERVER -t $DEST_SERVER -p $LPAR

          RC=$?
          if [ $RC -ne 0 ]
          then
            #
            # Something went wrong, let's try to recover
            #
            echo "There was an error RC = $RC. Attempting recovery"
            migrlpar -o r -m $SRC_SERVER -p $LPAR
            break
          fi
        fi
      fi
    fi
  done
fi

5.8 Migration awareness

A migration-aware application is one that is designed to recognize and dynamically adapt to changes in the underlying system hardware after being moved from one system to another. Most applications do not require any changes to work correctly and efficiently with Live Partition Mobility. Certain applications can have dependencies on characteristics that change between the source and destination servers, and other applications may adjust their behavior to facilitate the migration.

Applications that probably should be made migration-aware include:

Applications that use processor and memory affinity characteristics to tune their behavior, because affinity characteristics may change as a result of migration. Applications that use processor binding maintain their binding to the same logical processors across migrations, but in reality the physical processors will have changed. Binding is usually done to maintain hot caches, but clearly the physical processor move requires a warming of the cache hierarchy on the destination system. This process usually occurs very quickly and should not be visible to the users, but performance variations, for better or worse, can be observed because of different server characteristics.

Applications that are tuned for a given cache architecture, such as hierarchy, line-size, size, and associativity.

This infrastructure offers two mechanisms for alerting applications about configuration changes. and accounting tools and their agents should also be made migration-aware because the processor performance counters may change between the source and destination servers. such as cache-line size or serial numbers. 178 IBM PowerVM Live Partition Mobility . Additionally. Using the SIGRECONFIG signal and the dynamic reconfiguration APIs. such as the PowerHA heartbeat. Performance analysis. Workload managers (WLM) An application that is migration-aware might perform the following actions: Keep track of changes to system characteristics.of the IBM Java™ Virtual Machine is also optimized for the cache-line size of the processor on which it was launched. Registering scripts with the AIX dynamic reconfiguration infrastructure.9 Making applications migration-aware Mobility-awareness can be built-in to an application using the standard AIX dynamic reconfiguration notification infrastructure. as may the processor type and frequency. capacity planning. 5. Refuse a partition migration in the check phase to prevent a non-migratable application from being migrated. Block the sending of partition shutdown requests. tools that calculate an aggregate system load based on the sum of the loads in all hosted partitions must be aware that a partition has left the system or that a new partition arrived. Clean-up system-specific buffers and logs. Dynamic logical partitioning (dynamic LPAR) scripts allow you to add awareness to applications for which you do not have the source code. and modify tuning or behavior accordingly. Refuse new incoming requests or delay pending operations. Increase time-out thresholds. Reroute workloads to another system. Using the SIGRECONFIG and dynamic reconfiguration APIs requires additional code in your applications. Terminate the application on the source system and restart it on the destination.

5.9.1 Migration phases

The dynamic LPAR notification framework defines three operational phases:
– The check phase notification allows applications to signal their readiness to migrate. The check phase allows applications with root authority to refuse a migration.
– The prepare phase notification alerts applications that the migration (or dynamic reconfiguration) is imminent. This phase allows applications to take any necessary steps to help with the process.
– The post phase notification alerts applications that the migration (or dynamic reconfiguration) is complete. This allows applications to take any recovery steps to resume service on the destination system.

The check and prepare phases take place on the source system; the post phase occurs on the destination after the device tree and ODM have been updated to reflect the destination system configuration.

The SIGRECONFIG signal is sent to all applications at each of the three migration phases. Applications can watch (trap) this signal and use the DLPAR-API system calls to learn more about the operation in progress. The dynamic LPAR and Live Partition Mobility infrastructure wait a short period of time for a reply from applications. If no response occurs after this amount of time, the system assumes all is well and proceeds to the next phase. You can speed up a migration or dynamic reconfiguration operation by acknowledging the SIGRECONFIG event even if your application takes no action.

5.9.2 Making programs migration aware using APIs

Application programming interfaces are provided to make programs migration-aware. Applications must perform the following operations to be notified of a Live Partition Mobility operation:
1. Catch the SIGRECONFIG signal by using the sigaction() or sigwait() system calls. The default action is to ignore the signal.
2. Control the signal mask of at least one of the application’s threads and the priority of the handling thread such that the signal can be delivered and handled promptly.

Note: An application must not block the SIGRECONFIG signal, and the signal must be handled in a timely manner. Be aware that if your program does trap the SIGRECONFIG signal, it will be notified of all dynamic-reconfiguration operations, not just Live Partition Mobility events.

3. Use the dr_reconfig() system call, through the signal handler, to determine the nature of the reconfiguration event and other pertinent information.

The dr_reconfig() system call has been modified to support partition migration. The returned dr_info structure includes the following bit-fields:
migrate, partition   These fields are for the new migration action and the partition object that is the object of the action.

For the check phase, the application should pass DR_RECONFIG_DONE to accept a migration or DR_EVENT_FAIL to refuse. Only applications with root authority may refuse a migration.

The code snippet in Example 5-5 shows how dr_reconfig() might be used. This code would run in a signal-handling thread.

Example 5-5 SIGRECONFIG signal-handling thread

#include <signal.h>
#include <sys/dr.h>
:
:
struct dr_info drInfo;   // For event-related information
sigset_t signalSet;      // The signal set to wait on
int signalId;            // Identifies which signal was received
int reconfigFlag;        // For accepting or refusing the migration
int rc;                  // The DR return code

// Initialise the signal set
SIGINITSET(signalSet);

// Add SIGRECONFIG to the signal set
SIGADDSET(signalSet, SIGRECONFIG);

// loop forever
while (1) {
    // Wait on signals in the signal set
    sigwait(&signalSet, &signalId);

    if (signalId == SIGRECONFIG) {
        if (rc = dr_reconfig(DR_QUERY, &drInfo)) {
            // handle the error
        } else {
            if (drInfo.migrate) {

                if (drInfo.check) {
                    /*
                     * If migration OK:  reconfigFlag = DR_RECONFIG_DONE
                     * If migration NOK: reconfigFlag = DR_EVENT_FAIL
                     */
                    rc = dr_reconfig(reconfigFlag, &drInfo);
                } else if (drInfo.pre) {
                    /*
                     * Prepare the application for migration
                     */
                    rc = dr_reconfig(DR_RECONFIG_DONE, &drInfo);
                } else if (drInfo.post) {
                    /*
                     * We're being woken up on the destination.
                     * Check the new environment and resume normal service.
                     */
                } else {
                    // Handle the error cases
                }
            } else {
                // It's not a migration. Handle or ignore the DR
            }
        }
    }
}

You can use the sysconf() system call to check the system configuration on the destination system. The _system_configuration structure has been modified to include the following fields:
icache_size    Size of the L1 instruction cache
icache_asc     Associativity of the L1 instruction cache
dcache_size    Size of the L1 data cache
dcache_asc     Associativity of the L1 data cache
L2_cache_size  Size of the L2 cache
L2_cache_asc   Associativity of the L2 cache
itlb_size      Instruction translation look-aside buffer size
itlb_asc       Instruction translation look-aside buffer associativity
dtlb_size      Data translation look-aside buffer size
dtlb_asc       Data translation look-aside buffer associativity
tlb_attrib     Translation look-aside buffer attributes
slb_size       Segment look-aside buffer size

These fields are updated after the partition has arrived at the destination system to reflect the underlying physical processor characteristics. All new processor features, such as the single-instruction multiple-data (SIMD) and decimal floating point instructions, are exposed through the _system_configuration structure and the lpar_get_info() system call. In this fashion, applications that are moved from one processor architecture to another can dynamically adapt themselves to their execution environment.

The lpar_get_info() call returns two capabilities, defined in <sys/dr.h>:
LPAR_INFO1_MSP_CAPABLE   If the partition is a Virtual I/O Server partition, this capability indicates the partition is also a mover service partition.
LPAR_INFO1_PMIG_CAPABLE  Indicates whether the partition is capable of migration.

5.9.3 Making applications migration-aware using scripts

Dynamic reconfiguration scripts allow you to cleanly quiesce and restart your applications over a migration. You can register your own scripts with the dynamic reconfiguration infrastructure by using the drmgr command. The command copies the scripts to a private repository, the default location of which is /usr/lib/dr/scripts/all. The scripts can be implemented in any interpreted (scripted) or compiled language.

The drmgr command is detailed in the IBM InfoCenter at:
http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.cmds/doc/aixcmds2/drmgr.htm

The syntax of the dynamic reconfiguration scripts is:
[env_variable1=value ...] scriptname command [param1 ...]
The input variables are set as environment variables on the command line, followed by the name of the script to be invoked and any additional parameters.
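As a brief illustration of this calling convention (the script name and invocation are assumptions based on the example script registered later in this section, not output captured from a live system), the framework, or you during testing, might invoke a registered script as follows:

# ./migrate.sh scriptinfo
DR_VERSION=1.0
DR_DATE=27032007
DR_SCRIPTINFO=partition migration test script
DR_VENDOR=IBM
# ./migrate.sh checkmigrate pmig

The first call asks the script to describe itself; the second simulates the check-migration phase for the pmig (partition migration) resource.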

Live Partition Mobility introduces four new script commands, listed in Table 5-1.

Table 5-1 Dynamic reconfiguration script commands for migration

Command and parameter      Description
checkmigrate <resource>    Used to indicate whether a migration should continue or not. The script is called with this command at the check-migration phase. A script might indicate that a migration should not continue if the application is dependent upon an invariable execution environment.
premigrate <resource>      The script is called with this command at the prepare-migration phase. The script can reconfigure or suspend an application to facilitate the migration process. At this point the migration will be initiated.
postmigrate <resource>     This command is called after migration has completed. The script is called with this command in the post-migration phase. The script can reconfigure or resume applications that were changed or suspended in the prepare phase.
undopremigrate <resource>  If an error is encountered during the check phase, the script is called with this command to roll back any actions that might have been taken in the checkmigrate command in preparation for the migration.

In addition to the script commands, a pmig resource type indicates a partition migration operation. The register command of your dynamic LPAR scripts can choose to handle this resource type. A script supporting partition migration should write out the name-value pair DR_RESOURCE=pmig when it is invoked with the register command. A dynamic LPAR script can be registered to support only partition migration. No new environment variables are passed to the dynamic LPAR scripts for Live Partition Mobility support.

The code in Example 5-6 on page 184 shows a Korn shell script that detects the partition migration reconfiguration events. For this example, the script simply logs the called command to a file.

Example 5-6 Outline Korn shell dynamic LPAR script for Live Partition Mobility

#!/usr/bin/ksh
if [[ $# -eq 0 ]]
then
  echo "DR_ERROR=Script usage error"
  exit 1
fi
ret_code=0
command=$1
case $command in
  scriptinfo )
    echo "DR_VERSION=1.0"
    echo "DR_DATE=27032007"
    echo "DR_SCRIPTINFO=partition migration test script"
    echo "DR_VENDOR=IBM"
    echo "SCRIPTINFO" >> /tmp/migration.log;;
  register )
    echo "DR_RESOURCE=pmig"
    echo "REGISTER" >> /tmp/migration.log;;
  usage )
    echo "DR_USAGE=$0 command [parameter]"
    echo "USAGE" >> /tmp/migration.log;;
  checkmigrate )
    echo "CHECK_MIGRATE" >> /tmp/migration.log;;
  premigrate )
    echo "PRE_MIGRATE" >> /tmp/migration.log;;
  postmigrate )
    echo "POST_MIGRATE" >> /tmp/migration.log;;
  undopremigrate )
    echo "UNDO_CHECK_MIGRATE" >> /tmp/migration.log;;
  * )
    echo "*** UNSUPPORTED *** : $command" >> /tmp/migration.log
    ret_code=10;;
esac
exit $ret_code

If the file name of the script is migrate.sh, then you would register it with the dynamic reconfiguration infrastructure by using the following command:
# drmgr -i ./migrate.sh

Use the drmgr -l command to confirm script registration, as shown in Example 5-7. In this example, you can see the output from the scriptinfo, register, and usage commands of the shell script.

Example 5-7 Listing the registered dynamic LPAR scripts

# drmgr -l
DR Install Root Directory: /usr/lib/dr/scripts
Syslog ID: DRMGR
------------------------------------------------------------
/usr/lib/dr/scripts/all/migrate.sh  partition migration test script
      Vendor:IBM, Version:1.0, Date:27032007
      Script Timeout:10, Admin Override Timeout:0
      Memory DR Percentage:100
      Resources Supported:
        Resource Name: pmig  Resource Usage: /usr/lib/dr/scripts/all/migrate.sh command [parameter]
------------------------------------------------------------

5.10 Making kernel extensions migration aware

Kernel extensions can register to be notified of migration events. The notification mechanism uses the standard dynamic reconfiguration mechanism, which is the reconfig_register() kernel service.

The service interface signature is:
int reconfig_register_ext(handler, actions, h_arg, h_token, name)
    int (*handler)(void*, void*, long long action, void* dri);
    long long actions;
    void* h_arg;
    ulong *h_token;
    char* name;

The actions parameter supports the following values for mobility awareness:
DR_MIGRATE_CHECK
DR_MIGRATE_PRE
DR_MIGRATE_POST
DR_MIGRATE_POST_ERROR

The interface to the handler is:
int handler(void* event, void* h_arg, long long action, void* resource_info);

The action parameter indicates the specific reconfiguration operation being performed, for example, DR_MIGRATE_PRE. The resource_info parameter maps to the following structure for partition migration:
struct dri_pmig {
    int       version;
    int       destination_lpid;
    long long streamid;
};

The version number is changed if additional parameters are added to this structure. The destination_lpid and streamid fields are not available for the check phase. The interfaces to the reconfig_unregister() and reconfig_complete() kernel services are not changed by Live Partition Mobility.

5.11 Virtual Fibre Channel

Virtual Fibre Channel is a virtualization feature. It uses N_Port ID Virtualization (NPIV) and enables PowerVM logical partitions to access SAN resources using virtual Fibre Channel adapters mapped to a physical NPIV-capable adapter. Figure 5-33 shows a basic configuration using virtual Fibre Channel and a single Virtual I/O Server in the source and destination systems before migration occurs.

Figure 5-33 Basic NPIV virtual Fibre Channel infrastructure before migration

After migration, the configuration is similar to the one shown in Figure 5-34.

Figure 5-34 Basic NPIV virtual Fibre Channel infrastructure after migration

Benefits of NPIV and virtual Fibre Channel
The addition of NPIV and virtual Fibre Channel adapters reduces the number of components and steps necessary to configure shared storage in a Virtual I/O Server configuration:
– With virtual Fibre Channel support, you do not map individual disks in the Virtual I/O Server to the mobile partition, which greatly simplifies Virtual I/O Server storage management.
– LUNs from the storage subsystem are zoned in a switch with the mobile partition’s virtual Fibre Channel adapter using its worldwide port names (WWPNs).
– LUNs assigned to the virtual Fibre Channel adapter appear in the mobile partition as standard disks from the storage subsystem. LUNs do not appear on the Virtual I/O Server unless the physical adapter’s WWPN is zoned.

– Standard multipathing software for the storage subsystem is installed on the mobile partition. Multipathing software is not installed into the Virtual I/O Server partition to manage virtual Fibre Channel disks. The absence of the software provides system administrators with familiar configuration commands and problem determination processes in the client partition. Partitions can take advantage of standard multipath features, such as load balancing across multiple virtual Fibre Channel adapters presented from dual Virtual I/O Servers.

See Chapter 2 in PowerVM Virtualization on IBM System p: Managing and Monitoring, SG24-7590, for details about virtual Fibre Channel and NPIV configuration.

Migration of LUNs between virtual SCSI and virtual Fibre Channel is not supported at the time of publication.

Required components
The mobile partition must meet the requirements described in Chapter 2, “Live Partition Mobility mechanisms” on page 19. In addition, the following components must be configured in the environment:
– An NPIV-capable SAN switch
– An NPIV-capable physical Fibre Channel adapter on the source and destination Virtual I/O Servers
– HMC Version 7 Release 3.4, or later
– Virtual I/O Server Version 2.1 with Fix Pack 20, or later
– AIX 6.1 TL2 SP2, or later
– AIX 5.3 TL9, or later
– Each virtual Fibre Channel adapter on the Virtual I/O Server mapped to an NPIV-capable physical Fibre Channel adapter
– Each virtual Fibre Channel adapter on the mobile partition mapped to a virtual Fibre Channel adapter in the Virtual I/O Server
– At least one LUN mapped to the mobile partition’s virtual Fibre Channel adapter

Mobile partitions may have virtual SCSI and virtual Fibre Channel LUNs.

1 Basic virtual Fibre Channel Live Partition Mobility preparation This section describes how to set up and migrate a partition that is using virtual Fibre Channel disk resources. The WWPN on the physical adapter on the source and destination Virtual I/O Server does not have to be included in the zone.11.5. You must include both WWPNs from each virtual Fibre Channel adapter in the zone.1. “Basic Live Partition Mobility environment” on page 90. Figure 5-35 Client partition virtual Fibre Channel adapter WWPN properties 190 IBM PowerVM Live Partition Mobility . do not set the adapter as required when you create a virtual Fibre Channel adapter. The mobile partition’s virtual Fibre Channel WWPNs must be zoned on the switch with the storage subsystem. These are created automatically by the migration function. do not create any virtual Fibre Channel adapters for the mobile partition. the infrastructure must meet the following requirements for migrations with virtual Fibre Channel adapters: The destination Virtual I/O Server must contain an NPIV-capable physical Fibre Channel adapter that is connected to the NPIV-enabled port on the switch that has connectivity to a port on a SAN device that has access to the same targets as the client is using on the source CEC. On the source Virtual I/O Server partition. In addition to the requirements described in 4. The virtual Fibre Channel adapter must be solely accessible by the client adapter of the mobile partition. On the destination Virtual I/O Server partition. Figure 5-35 shows a mobile partition virtual Fibre Channel adapter example.

as shown in Example 5-8 on page 192. Chapter 5. called a Server Fibre Channel Adapter. Figure 5-37 Virtual I/O Server Fibre Channel adapter properties The Virtual I/O Server lsdev and lsmap commands can be used to query the virtual Fibre Channel configuration and mapping to the mobile partition. Advanced topics 191 . Figure 5-36 Virtual Fibre Channel adapters in the Virtual I/O Server Figure 5-37 shows an example of a virtual Fibre Channel properties for the Virtual I/O Server.Figure 5-36 shows an example of virtual Fibre Channel properties for the Virtual I/O Server.

101F170-V2-C60-T1 description Virtual FC Server Adapter 192 IBM PowerVM Live Partition Mobility .MMA.MMA.Example 5-8 Virtual I/O Server commands lsmap and lsdev virtual Fibre Channel output $ lsmap -all -npiv Name Physloc ClntID ClntName ClntOS ============= ================================== ====== ============== ======= vfchost0 U9117.101F170-V1-C16 2 mobile2 AIX Status:LOGGED_IN FC name:fcs3 Ports logged in:2 Flags:a<LOGGED_IN.001.STRIP_MERGE> VFC client name:fcs0 $ lsdev -dev vfchost* name status vfchost0 Available FC loc code:U789D.DQDYKYW-P1-C6-T2 VFC client DRC:U9117.

Advanced topics 193 .11. “Preparing for an active partition migration” on page 94. Figure 5-38 Selecting the virtual Fibre Channel adapter Chapter 5. Instead of selecting virtual SCSI adapters in step 11 on page 113.2 Migration of a virtual Fibre Channel based partition This section describes the steps necessary to migrate a mobile partition that uses virtual Fibre Channel adapters. After validating that the mobile partition can be migrated. follow steps 1 on page 104 through 10 on page 112 in 4. select the virtual Fibre Channel adapter assignment as shown in Figure 5-38.5.3.

Proceed with the remaining migration steps at step 12 on page 114 as described in 4. Figure 5-39 Virtual Fibre Channel migration summary window 194 IBM PowerVM Live Partition Mobility . “Preparing for an active partition migration” on page 94.3. The Summary panel will look similar to Figure 5-39. Verify the settings you have selected and then click Finish to begin the migration.

Note: With NPIV-based disks. the logical partition accesses the same storage data using two different paths. Figure 5-40 Migrated partition 5. verify that the mobile partition is on the destination system. Figure 5-40 shows that the mobile partition is on the destination system.When the migration is complete.11. both paths can be active. Chapter 5.The multipath capabilities depend on the storage subsystem type and multipath code deployed in the mobile partition.3 Dual Virtual I/O Server and virtual Fibre Channel multipathing With multipath I/O. For NPIV and virtual Fibre Channel. Advanced topics 195 . each provided by a separate Virtual I/O Server. the storage multipath code is loaded into the mobile partition.

The migration is possible only if the destination system is configured with two Virtual I/O Servers that can provide the same multipath setup. AIX Client Partition 1 hdisk0 fcs0 fcs1 Hypervisor Virtual I/O Server (VIOS) 1 Virtual I/O Server (VIOS) 2 vfchost0 vfchost1 Virtual I/O Server (VIOS) 1 Virtual I/O Server (VIOS) 2 Hypervisor Storage adapter Storage adapter Storage adapter Storage adapter Disk A Storage Subsystem Figure 5-41 Dual VIOS and client multipath I/O to dual NPIV before migration. as shown in Figure 5-41. They both must have access to the shared disk data. 196 IBM PowerVM Live Partition Mobility .

the migration cannot be performed.When migration is complete. on the destination system. The migration process would create two paths using the same Virtual I/O Server. If the destination system is configured with only one Virtual I/O Server. you must first remove one path from the source configuration before starting the migration. The removal can be performed without interfering with the running applications. as shown in Figure 5-42. the two Virtual I/O Servers are configured to provide the two paths to the data. but this setup of having one virtual Fibre Channel host device mapping the same LUNs on different virtual Fibre Channel adapters is not recommended. To migrate the partition. Advanced topics 197 . Chapter 5. The configuration becomes a simple single Virtual I/O Server migration. AIX Client Partition 1 hdisk0 fcs0 fcs1 Hypervisor vfchost0 vfchost1 vfchost0 vfchost1 Virtual I/O Server (VIOS) 1 Virtual I/O Server (VIOS) 2 Virtual I/O Server (VIOS) 1 Virtual I/O Server (VIOS) 2 Hypervisor Storage adapter Storage adapter Storage adapter Storage adapter Disk A Storage Subsystem Figure 5-42 Dual VIOS and client multipath I/O to dual VIOS after migration.

SG24-7590 for additional details about virtual Fibre Channel and NPIV configuration.4 Live Partition Mobility with Heterogeneous I/O This section describes how to migrate a partition that is currently using disk resources presented on dedicated physical Fibre Channel adapters. Any adapters of these types must be deconfigured and removed before migration. For this scenario. See Chapter 2 in PowerVM Virtualization on IBM System p: Managing and Monitoring.11. Another assumption is that the Virtual I/O Server partitions have one physical NPIV-capable Fibre Channel adapter. Source System Mobile Partition Destination System hdisk0 fcs1 ent0 Hypervisor ent1 Hypervisor ent1 Destination VIOS Figure 5-43 The mobile partition using physical resources Source VIOS Storage adapter Ethernet adapter Storage Subsystem Storage adapter Ethernet adapter 198 IBM PowerVM Live Partition Mobility . and the mobile partition’s storage subsystem LUNs are available to the physical adapter currently used by the mobile partition. and that a Virtual I/O Server exists and is running on the source and destination systems. we assume that you are beginning with a mobile partition that is using a physical Fibre Channel adapter. Partitions may not use physical adapters. Host Ethernet Adapters (HEA). and non-default virtual serial adapters when participating in an active migration. Figure 5-43 describes our starting configuration.5.

Before proceeding. You will use dynamic logical partitioning (dynamic LPAR) to remove the adapter from the mobile partition prior to migration. to the activated mobile partition.11. your physical adapter will be desired in the partition. Figure 5-44 Virtual Fibre Channel server adapter properties 2. Figure 5-44 shows the resulting virtual Fibre Channel adapter properties. Advanced topics 199 . verify that the environment meets the requirements for Live Partition Mobility with NPIV and virtual Fibre Channel as outlined in 5. Use dynamic LPAR to add a virtual Fibre Channel client adapter. Configure virtual Fibre Channel storage To move your physical storage devices to virtual storage devices on your mobile partition named mobile2: 1. “Virtual Fibre Channel” on page 187. with the same properties from the previous step. In this case. Use dynamic LPAR to add a virtual Fibre Channel server adapter to the running source Virtual I/O Server. Chapter 5. This scenario describes how to deal with partitions containing physical adapters on the client so you may disregard the requirement of having no physical adapters assigned to your mobile partition.

Execute the vfcmap command to associate the virtual Fibre Channel server adapter to the physical Fibre Channel adapter. 3. 200 IBM PowerVM Live Partition Mobility . Important: Similar to virtual SCSI. assign the mobile partition’s storage to the virtual Fibre Channel adapters that use the WWPN pair generated in step 2 on page 199. By using standard SAN configuration techniques.Figure 5-45 shows the virtual Fibre Channel client adapter properties. On the source Virtual I/O Server. Save the changes made to the mobile partition to new profile name to preserve the generated WWPNs for future use by the mobile partition. Example 5-9 Show the virtual Fibre Channel server adapter $ lsdev -dev vfchost* name status vfchost0 Available description Virtual FC Server Adapter 6. Figure 5-45 Virtual Fibre Channel client adapter properties Record the virtual Fibre Channel client adapter’s slot number and WWPN pair for use when configuring the storage subsystem in step 4. you do not have to create virtual Fibre Channel server adapters for your mobile partition on the destination Virtual I/O Server. The lsdev command shows the changes as seen in Example 5-9. and properly zone the virtual Fibre Channel WWPNs with the storage subsystem’s WWPN. run the cfgdev command to discover the newly added virtual Fibre Channel server adapter (vfchost0). 5. They are created automatically for you during the migration. 4.

Example 5-12 lspath output from the mobile partition # lspath Enabled hdisk0 Enabled hdisk0 Enabled hdisk0 Enabled hdisk0 fscsi1 fscsi1 fscsi2 fscsi2 Chapter 5.001. Because our storage subsystem uses active and passive controller paths.As shown in Example 5-10. Use these details when you remove the physical adapter from the partition. Example 5-10 Virtual Fibre Channel mappings created and listed $ vfcmap -vadapter vfchost0 -fcp fcs1 vfchost0 changed $ lsmap -all -npiv Name Physloc ClntID ClntName ClntOS ============= ================================== ====== ============== ======= vfchost0 U9117. Verify that the partition’s disks are enabled on the new virtual Fibre Channel adapters by using the lspath command as shown in Example 5-12. 8. The mobile partition’s LUNs are attached using the fcs1 port. The lsdev command shows the new adapter as fcs2 in Example 5-11. two paths are shown for each disk. Run the cfgmgr command on the mobile partition to configure the new virtual Fibre Channel client adapter. Advanced topics 201 . Our physical adapter is a dual-port adapter listed as fcs0 and fcs1.100F6A0-V2-C70-T1 7.DQDWWHY-P1-C1-T2 VFC client DRC:U9117.100F6A0-V1-C70 2 mobile2 AIX Status:LOGGED_IN FC name:fcs1 Ports logged in:2 Flags:a<LOGGED_IN. Example 5-11 Fibre Channel device listing on the mobile partition # lsdev|egrep 'fcs*|fscs*' fcs0 Available 00-00 fcs1 Available 00-01 fcs2 Available 70-T1 fscsi0 Available 00-00-01 fscsi1 Available 00-01-01 fscsi2 Available 70-T1-01 4Gb FC PCI Express Adapter 4Gb FC PCI Express Adapter Virtual Fibre Channel Client FC SCSI I/O Controller Protocol FC SCSI I/O Controller Protocol FC SCSI I/O Controller Protocol 9.MMA. the adapter port in use on the NPIV Fibre Channel adapter is fcs1.STRIP_MERGE> VFC client name:fcs2 FC loc code:U789D. Other storage subsystems might use different commands to list available paths and show different output.MMA. Record the existing physical Fibre Channel adapter and disk configuration.

if the only physical devices in use are the physical Fibre Channel adapters. For example.6. Remove all physical devices. Source System Mobile Partition Destination System hdisk0 fcs1 fcs2 ent0 Hypervisor vfchost0 Source VIOS Storage adapter Ethernet adapter Storage Subsystem ent1 Hypervisor ent1 Destination VIOS Storage adapter Ethernet adapter Figure 5-46 The mobile partition using physical and virtual resources Remove physical Fibre Channel adapters To remove the physical Fibre Channel adapter from the mobile partition: 1. run the commands shown in Example 5-14 on page 203 to remove the physical devices. Example 5-13 Removing the physical adapters and their child devices # rmdev -R -dl fcs0 # rmdev -R -dl fcs1 # lsdev -Cc adapter|grep fcs fcs2 Available 70-T1 Virtual Fibre Channel Client Adapter 202 IBM PowerVM Live Partition Mobility . by using the rmdev command. “Remove adapters from the mobile partition” on page 160 for details about removing required and desired adapters.Figure 5-46 shows the mobile partition using a virtual and physical path to disk. with their children. Use the device names of the physical adapters recorded in step 7 on page 201.6. See 5.

Source System Mobile Partition Destination System hdisk0 fcs1 ent0 Hypervisor ent1 Hypervisor ent1 Destination VIOS Figure 5-47 The mobile partition using virtual resources Source VIOS Storage adapter Ethernet adapter Storage Subsystem Storage adapter Ethernet adapter Chapter 5. Advanced topics 203 . Figure 5-47 shows the mobile partition using only virtual resources. Example 5-14 Remaining paths after physical adapter has been removed # lspath Enabled hdisk0 fscsi2 2. 3. Use your HMC to remove all physical adapter slots from the mobile partition that is using dynamic LPAR.Verify that you are using only the virtual Fibre Channel path to the disk as displayed in Example 5-14. Remove all virtual serial adapters from slots 2 and above from the mobile partition using dynamic LPAR.

Ready to migrate The mobile partition is now ready to be migrated. Virtual terminals can be reopened when the partition is on the destination system. Close any virtual terminals on the mobile partition. Note: The active mobile partition profile is created on the destination system without any references to any physical I/O slots that were present in your profile on the source system. because they will lose connection when the partition migrates to the destination system. Figure 5-48 shows the mobile partition migrated to the destination system. if they are available on the destination system. After the migration is complete. Source System Destination System Mobile Partition hdisk0 fcs0 ent0 Hypervisor ent1 Hypervisor vfchost0 Destination VIOS ent1 Figure 5-48 The mobile partition on the destination system Source VIOS Storage adapter Ethernet adapter Storage Subsystem Storage adapter Ethernet adapter 204 IBM PowerVM Live Partition Mobility . Any other mobile partition profiles are copied unchanged. consider adding physical resources back to the mobile partition.

5.12 Processor compatibility modes Processor compatibility modes enable you to move logical partitions between servers that have different processor types. The processor compatibility mode in which the logical partition currently operates is the current processor compatibility mode of the logical partition. the hypervisor assigns to the logical partition the most fully featured processor compatibility mode (which is a lower mode than the preferred mode) that is supported by the operating environment. without upgrading the operating environments installed in the logical partitions. the processor compatibility mode enables the destination server to provide the logical partition with a subset of processor capabilities that are supported by the operating environment that is installed in the logical partition. limiting your flexibility to move logical partitions between servers that have different processor types. Linux. A processor compatibility mode is a value assigned to a logical partition by the hypervisor that specifies the processor environment on which the logical partition can successfully operate. Advanced topics 205 . If the operating environment supports the preferred processor compatibility mode (which is the highest mode that the hypervisor can assign to a logical partition). the hypervisor assigns the preferred processor compatibility mode to the logical partition. When you move a logical partition to a destination server that has a different processor type from the source server. the processor compatibility mode enables that logical partition to run in a processor environment on the destination server in which it can successfully operate. The hypervisor sets the current processor compatibility mode for a logical partition by using the following information: Processor features supported by the operating environment running in the logical partition Preferred processor compatibility mode that you specify The preferred processor compatibility mode of a logical partition is the mode in which you want the logical partition to operate. You can run several versions of AIX. you must specify the enhanced mode as the preferred mode for the logical partition. Certain older versions of these operating environments do not support the capabilities that are available with new processors. POWER6 technology-based servers. If you want a logical partition to run in an enhanced mode. If the operating Chapter 5. the hypervisor checks the preferred processor compatibility mode and determines whether the operating environment supports that mode. If the operating environment does not support the preferred processor compatibility mode. In other words. When you activate the logical partition. and Virtual I/O Server in logical partitions on POWER5 technology-based servers.

You set the preferred processor compatibility mode to the default mode and when you activate the logical partition on the POWER6 technology-based server. only the preferred mode of the logical partition must be supported by the destination server. A POWER6 processor cannot emulate all features of a POWER5 processor. POWER6+. POWER6. POWER6. it runs in the 206 IBM PowerVM Live Partition Mobility . Table 5-2 lists current and preferred processor compatibility modes supported on each server type. POWER7 Supported preferred modes default. POWER6. and restart the logical partition. POWER6 enhanced default. When you move an active logical partition between servers that have different processor types. POWER6+. The hypervisor attempts to set the current processor compatibility mode to the preferred mode that you specified. then the hypervisor assigns the enhanced mode to the logical partition when you activate the logical partition. you want to move an active logical partition from a POWER6 technology-based server to a Refreshed POWER6 technology-based server so that the logical partition can take advantage of the additional capabilities available with the Refreshed POWER6 processor. shut down the logical partition. POWER6. POWER6 enhanced POWER5. POWER6+ enhanced default. POWER7 For example. POWER6+. POWER6+ enhanced POWER5. you must change the preferred processor compatibility mode. POWER6+. both the current and preferred processor compatibility modes of the logical partition must be supported by the destination server. Logical partitions in the POWER6 enhanced processor compatibility mode can only run on POWER6 technology-based servers. Table 5-2 Processor compatibility modes supported by server type Server processor type Refreshed POWER6 technology-based server (POWER6+™) POWER6 technology-based server POWER7 technology-based server Supported current modes POWER5. certain types of performance monitoring might not be available for a logical partition if the current processor compatibility mode of a logical partition is set to the POWER5 mode.environment supports the corresponding non-enhanced mode. POWER6. You cannot dynamically change the current processor compatibility of a logical partition. To change the current processor compatibility mode. When you move an inactive logical partition between servers that have different processor types. For example. POWER6.

If not. you set the preferred processor compatibility mode to the POWER6 mode. the hypervisor evaluates the configuration and sets the current mode for the logical partition just like it does when you restart a logical partition after an active migration. When you restart the logical partition. After you move an inactive logical partition to the destination server and activate that logical partition on the destination server. If it cannot. The hypervisor attempts to set the current mode to the preferred mode. the operating environment supports the POWER6 mode. Because the preferred processor compatibility mode is set to the default mode and the logical partition now runs on a Refreshed POWER6 technology-based server. When you move the logical partition to the Refreshed POWER6 technology-based server. Remember. so the hypervisor sets the current mode to the POWER6 mode. When you restart the logical partition on the Refreshed POWER6 technology-based server. the hypervisor evaluates the configuration. it checks the next highest mode and so on. and so on. the highest mode available is the POWER6+ mode and the hypervisor changes the current processor compatibility mode to the POWER6+ mode. which is the highest mode supported by both POWER6 technology-based servers and Refreshed POWER6 technology-based servers. you must change the preferred mode from the default mode to the POWER6 mode (because the POWER6+ mode is not supported on a POWER6 technology-based server) and restart the logical partition on the Refreshed POWER6 technology-based server. you can move that inactive logical partition to a server of any processor type. it determines whether it can set the current mode to the next highest mode. except inactive migrations do not require the current processor compatibility mode of the logical partition because the logical partition is inactive. both the current and preferred modes remain unchanged for the logical partition until you restart the logical partition. When you want to move the logical partition back to the POWER6 technology-based server. Because the preferred mode is set to POWER6. the hypervisor first determines whether it can set the current mode to the preferred mode. Advanced topics 207 . If you specify the default mode as the preferred mode for an inactive logical partition. only the preferred mode of the logical partition Chapter 5. The same logic from the previous examples applies to inactive migrations. Remember. In this example. the hypervisor does not set the current mode to a higher mode than POWER6. In this case. the hypervisor evaluates the configuration. when you move an inactive logical partition between servers with different processor types. so that you can move the logical partition back to the POWER6 technology-based server. The easiest way to maintain this moving back and forth type of flexibility between different types of processors is to determine the processor compatibility mode supported on both the source and destination servers and set the preferred processor compatibility mode of the logical partition to the highest mode supported by both servers.POWER6 mode.

select Configuration  Manage Profiles. e. Identify the preferred processor compatibility mode of the mobile partition: a. And because all servers support the default processor compatibility mode. The Managed Profiles window opens. if necessary. select Edit. and update the mode. 5. f. The Logical Partition Profile Properties window is displayed. Record this value so that you can refer to it later. 208 IBM PowerVM Live Partition Mobility . the preferred mode remains set to default. so that you can successfully move the mobile partition to the destination server.1 Verifying the processor compatibility mode of mobile partition Determine whether the processor compatibility mode of the mobile partition is supported on the destination server. b. Identify the processor compatibility modes that are supported by the destination server by entering the following command using the HMC command-line interface (CLI) that manages the destination server: lssyscfg -r sys -F lpar_proc_compat_modes Record these values so that you can refer to them later. d. Select the active partition profile of the mobile partition or select the partition profile from which the mobile partition was last activated. and the hypervisor determines the current mode for the logical partition.must be supported by the destination server. c. Click the Processors tab to view the preferred processor compatibility mode. 2.12. In the navigation area of the HMC that manages the source server. you can move an inactive logical partition with the preferred mode of default to a server with any processor type. expand Systems Management  Servers and select the source server. From the Tasks menu. “Integrated Virtualization Manager for Live Partition Mobility” on page 221. When the inactive logical partition is activated on the destination server. In the contents area. select the mobile partition. To verify that the processor compatibility mode of the mobile partition is supported on the destination server by using the HMC: 1. From the Actions menu. Note: The verification of the processor compatibility mode of the mobile partition using the Integrated Virtualization Manager is discussed in Chapter 7.

Advanced topics 209 . Figure 5-49 Processor compatibility mode options of the mobile partition 3. identify the current processor compatibility mode of the mobile partition. Chapter 5. which is the current processor compatibility mode of the mobile partition. c. as follows: a. If you plan to perform an active migration. select the mobile partition and click Properties. Select the Hardware tab and view the Processor Compatibility Mode.The result of these steps is shown in Figure 5-49. If you plan to perform an inactive migration. In the contents area. Record this value so that you can refer to it later. expand Systems Management  Servers and select the source server. In the navigation area of the HMC that manages the source server. skip this step and go to step 4 on page 210. b.

Figure 5-50 Current processor compatibility mode of the mobile partition 4. Therefore. the preferred mode of the mobile partition is the POWER6+ mode and you plan to move the mobile partition to a POWER6 technology-based server. Attention: If the current processor compatibility mode of the mobile partition is the POWER5 mode. you change the preferred mode to the POWER6 mode. Verify that the preferred and current processor compatibility modes that you identified in steps 2 on page 208 page and on page 209 are in the list of supported processor compatibility modes identified in step 1 on page 208 for the destination server. Although POWER6 technology-based server does not support the POWER6+ mode. the destination server supports the POWER5 mode even though it does not appear in the list of supported modes. use step 2 on page 208 to change the preferred mode to a mode that is supported by the destination server. For active migrations. However. only the preferred processor compatibility mode must be supported by the destination server.The result of these steps is shown in Figure 5-50. it does support the POWER6 mode. 210 IBM PowerVM Live Partition Mobility . both the preferred and current processor compatibility modes of the mobile partition must be supported by the destination server. For example. If the preferred processor compatibility mode of the mobile partition is not supported by the destination server. 5. be aware that the POWER5 mode does not appear in the list of modes supported by the destination server. For inactive migrations.

a possibility is that the hypervisor has not had the opportunity to update the current mode of the mobile partition. Chapter 5. b. use step 2 on page 208 to change the preferred mode of the mobile partition to a mode that is supported by the destination server. Then. If the current mode of the mobile partition still does not match the list of supported modes that you identified for the destination server. try the following solutions: a. If the current processor compatibility mode of the mobile partition is not supported by the destination server. reactivate the mobile partition so that the hypervisor can evaluate the configuration and update the current mode of the mobile partition. If the mobile partition is active. Shut down and reactivate the mobile partition so that the hypervisor can evaluate the configuration and update the current mode of the mobile partition. Advanced topics 211 .6.

212 IBM PowerVM Live Partition Mobility .

1.3. 2007. 213 . “Recovery” on page 216 6. “A recovery example” on page 218 © Copyright IBM Corp. This chapter contains the following topics: 6. The chapter assumes you have a working knowledge of Live Partition Mobility prerequisites and actions.2. 2009. “Progress and reference code location” on page 214 6.6 Chapter 6. Migration status This chapter discusses topics related to migration status and recovery procedures to be followed when errors occur during migration of a logical partition. All rights reserved.

expand Systems Management  Servers. Figure 6-1 Partition reference codes The same information can be obtained from the HMC’s CLI by using the lsrefcode and lslparmigr commands. 214 IBM PowerVM Live Partition Mobility . You can find a description of reference codes in “SRCs.1 Progress and reference code location Live Partition Mobility is driven by the HMC. See 5. Reference codes describe the progress of the migration. in Figure 6-1 the QA and mobile partitions are undergoing an active migration. and select the managed system. and for both partitions the latest reference code is displayed.6.7. “The command-line interface” on page 162 for details. a migration recovery procedure might be required. current state” on page 260. A system status and reference code is provided for each logical partition. For example. When the reference code represents an error. To view the migration status on the GUI. The HMC has knowledge of the status of all partition migrations and provides the latest reference code for each logical partition.

The percentage indicates the completion of memory state transfer during an active migration. DESCRIPTION Client Partition Migration Completed Client Partition Migration Started Migration information is recorded also on the Virtual I/O Servers that acted as a mover service partition.. The mobile partition records the start and the end of the migration process. To retrieve it. there is no memory management and the value is zero. a progress window is provided similar to the one shown in Figure 6-2. and it holds all migration information. Figure 6-2 Migration progress window During an inactive migration. You can find a description of partition-related error logs in “Operating system error logs” on page 266. use the errlog command.After a migration is issued on the HMC GUI. only the HMC is involved. Example 6-1 Migration log on mobile partition [mobile:/]# errpt IDENTIFIER TIMESTAMP T C RESOURCE_NAME A5E6DB96 1118164408 I S pmig 08917DC6 1118164408 I S pmig . Chapter 6. All these objects record migration events in their error logs.. In the case of an inactive migration. An active migration requires the coordination of the mobile partition and the two Virtual I/O Servers that have been selected as mover service partitions. as shown in Example 6-1. Migration status 215 . You may extract the data by using the errpt command.

They can be used to trace all migration events on the system. Example 6-2 Migration log on source mover service partition $ errlog IDENTIFIER TIMESTAMP T C RESOURCE_NAME 3EB09F5A 1118164408 I S Migration 6CB10B8D 1118164408 I S unspecified . The migration validation described in 4. “Performing the validation steps and eliminating errors” on page 99 takes care of checking all prerequisites. while the second records the successful end of the migration. such as user interruption or network problems.4.. a rollback procedure is executed to undo all configuration changes applied. performs another validation before starting any configuration changes. The first event in the log states when the mobile partition execution has been suspended on the source system and has been activated on the destination system. It can be explicitly executed at any moment and it does not affect the mobile partition.1. The migration process. An external event prevents a migration component from completing its job. 6. DESCRIPTION Migration completed successfully Client partition suspend issued On the destination mover service partition.. as shown in Example 6-3..2 Recovery Live Partition Mobility is designed to verify whether a requested migration can be executed and to monitor all migration processes. If a running migration cannot be completed.. A partition migration might be prevented from running for two main reasons: The migration is not valid and does not meet prerequisites. 216 IBM PowerVM Live Partition Mobility . Example 6-3 Migration log on destination mover service partition $ errlog IDENTIFIER TIMESTAMP T C RESOURCE_NAME 3EB09F5A 1118164408 I S Migration . the error log registers only the end of the migration.Example 6-2 shows the data available on the source mover service partition. Perform a validation before requesting any migration. DESCRIPTION Migration completed successfully The error logs on the mobile partition and the Virtual I/O Servers also record events that prevent the migration from succeeding. however.

Migration status 217 . Activating the same partition on two systems is not possible. When a recovery is required. This situation might occur when the HMC cannot contact a migration component (for example. After a timeout. Configuration cleanup is made during recovery. Recovery is performed by selecting the migrating partition and then selecting Operations  Mobility  Recover.After the inactive or active migration begins. the HMC manages the configuration changes and monitors the status of all involved components. the mobile partition name can appear on both the source and the destination system. an error message is provided (requesting a recovery). or a system service processor) because of a network problem or an operator error. When the HMC cannot perform a recovery. its configuration cannot be changed to prevent any attempt to modify its state before its state is returned to normal operation. Figure 6-3 Recovery menu Chapter 6. recovery actions automatically begin. administrator intervention is required to perform problem determination and issue final recovery steps. If any error occurs. a Virtual I/O Server. The partition is either powered down (inactive migration) or really working only on one of the two systems (active migration). Although a mobile partition requires a recovery. as shown in Figure 6-3. the mobile partition.

the partition returns to normal operation state and changes to its configuration are then allowed. During an active migration. there is a partition state transfer through the network between the source and destination mover service partitions.3 A recovery example As an example. After a successful recovery. Figure 6-4 Recovery pop-up window The same actions performed on the GUI can be executed with the migrlpar command on the HMC’s command line. Note: Use the Force recover check box only when: The HMC cannot contact one of the migration components that require a new configuration or if the migration has been started by another HMC.7. The mobile partition continues running on the source system while its state is copied on the 218 IBM PowerVM Live Partition Mobility . the validation phase will detect the component that prevented the migration and will select alternate elements or provide a validation error. similar to the one shown in Figure 6-4. See 5. “The command-line interface” on page 162 for details. If the migration is executed again. we have deliberately created a network outage during an active partition migration. 6. Click Recover to start a recovery. A normal recovery does not succeed.A pop-up window opens. requesting recovery confirmation.

Migration status 219 . the migration process fails and an error message is displayed. On the destination system. Because the migration stopped in the middle of the state transfer. while it is active only on the source system. mobile. In the content area. it is briefly suspended on the source and immediately reactivated on the destination. Figure 6-5 Interrupted active migration status Chapter 6. the status of the migrating partition. In the HMC. waiting for the administrator to identify the problem and decide how to continue. a situation similar to Figure 6-5 is shown. is present in both systems. the partition configuration on the two involved systems is kept in the migrating status. In the HMC GUI. We unplugged the network connection of one mover service partition in the middle of a state transfer. We had to perform several tests in order to create this scenario because the migration on the partition (2 GB of memory) was extremely fast. only the shell of the partition is present.destination system. The situation can viewed by expanding Systems Management  Custom Groups  All partitions. Then.

. have recorded the event in their error logs. as described in Example 6-4. as shown in Figure 6-3 on page 217. Click the Recover button and the partition state is cleaned up (normalized).. Wait for the RMC protocol to reset communication between the HMC and the Virtual I/O Server that had the network cable unplugged. you must select the mobile partition and select Operations  Mobility  Recover. Example 6-6 Mover service partition with communication error $ errlog IDENTIFIER TIMESTAMP T C RESOURCE_NAME 427E17BD 1118182108 P S Migration . the migration can be issued again. 220 IBM PowerVM Live Partition Mobility . No action is required on the partition. Example 6-4 Migrating partition’s error log after aborted migration [mobile]# errpt IDENTIFIER TIMESTAMP T C RESOURCE_NAME 5E075ADF 1118180308 I S pmig 08917DC6 1118180208 I S pmig DESCRIPTION Client Partition Migration Aborted Client Partition Migration Started Both Virtual I/O Servers. we see both the physical network error and the mover service partition communication error.. where it has never been executed. because no physical error has been created. where the cable was unplugged. using a single mover service partition. DESCRIPTION Migration aborted: MSP-MSP connection do To recover from an interrupted migration. as indicated in Example 6-5. After the network outage is resolved. The only visible effect is on the partition’s error log that shows the start and the abort of the migration.The applications running on the partition have not been affected by the network outage and are running on the source system.. On the Virtual I/O Server. as indicated in Example 6-6. A pop-up window similar to the one shown in Figure 6-4 on page 218 opens. and it is removed on the destination system. Example 6-5 Mover service partition with network outage $ errlog IDENTIFIER TIMESTAMP T C RESOURCE_NAME 427E17BD 1118181908 P S Migration 0B41DD00 1118181708 I H ent4 . The mobile partition is present only on the source system where it is running. DESCRIPTION Migration aborted: MSP-MSP connection do ADAPTER FAILURE The other Virtual I/O Server only shows the communication error of the mover service partition.

3. 2009. the Virtual I/O Server becomes the management partition and provides the Integrated Virtualization Manager for systems management. “Requirements for Live Partition Mobility on IVM” on page 222 7. “Preparation for partition migration” on page 232 © Copyright IBM Corp. “Validation for inactive Partition Mobility” on page 231 7. In this chapter.5.4. “How active Partition Mobility works” on page 225 7.7. 221 . The Integrated Virtualization Manager provides a Web-based and command-line interface that enables you to migrate a logical partition from one POWER6 or POWER7 technology-based system to another. “Validation for active Partition Mobility” on page 227 7. requirements and preparation tasks for Live Partition Mobility with the Integrated Virtualization Manager. This chapter contains the following topics: 7. we discuss migration types.1. 2007. Integrated Virtualization Manager for Live Partition Mobility If the Virtual I/O Server is installed on a IBM Power Systems server that is not managed by a Hardware Management Console or is on an IBM BladeCenter blade server. “How inactive Partition Mobility works” on page 226 7.6.2. All rights reserved.7 Chapter 7. “Migration types” on page 222 7.

one of the following POWER6 technology-based models. where x is an S for BladeCenter or an L for Entry servers (such as the Power 520. or a combination of both: 8203-E4A (IBM Power System 520 Express) 8204-E8A (IBM Power System 550 Express) 8234–EMA (IBM Power System 560 Express) 9407-M15 (IBM Power System 520 Express) 9408-M25 (IBM Power System 520 Express) 9409-M50 (IBM Power System 550 Express) 7998-60X (BladeCenter JS12) 7998-61X (BladeCenter JS22) Both the source and destination systems must be at a firmware level 01Ex320 or later. the migration is active. the migration is inactive. 7. and Power 560). The migration task on the local Integrated Virtualization Manager helps you validate and complete a partition migration to a remote system that is managed by another Integrated Virtualization Manager. Although there is a minimum required firmware level. As with Live Partition Mobility conducted by the HMC. The level of source system firmware must be compatible with the destination firmware.7. 222 IBM PowerVM Live Partition Mobility . before migrating a logical partition. It is recommended to have the most current system firmware available installed. Power 550. each system can have a different level of firmware. a validation check should be performed to ensure that the migration will complete successfully. If the logical partition is in the not activated state.2 Requirements for Live Partition Mobility on IVM The section lists the requirements to use Live Partition Mobility on an Integrated Virtualization Manager managed system. Source and destination system requirements The source and destination system must be either a POWER7 technology-based server or blade supporting IVM.1 Migration types Two types of migration are available with the Integrated Virtualization Manager depending on the state of the logical partition: If the logical partition is in a running state.

See Figure 7-1. Integrated Virtualization Manager for Live Partition Mobility 223 . From the Service Management menu.Source and destination Virtual I/O Server requirements The Virtual I/O Server has to be installed at release level 1. click Updates. The Management Partition Updates panel opens and the code level is shown. the Virtual I/O Server version and fix pack level should be at the most current level. Similar to system firmware.ibm.com/server/vios/download 2 1 Figure 7-1 Checking release level of the Virtual I/O Server Chapter 7. if available: http://techsupport.services. Click the link to the Virtual I/O Server support site and newer updates and fixes.5 or higher both on the source and destination systems. 2. To verify the current code level: 1.

The ioslevel command can be executed.1. Note: IVM has reserved slots. The operating system must be at one of the following levels: AIX 5L Version 5. see the data sheet available on the Virtual I/O Server support Web site: http://www14. These slots cannot be part of a migration. on the Virtual I/O Server in order to determine the current version and fix pack level of the Virtual I/O Server and to see whether an upgrade is necessary.0 $ Note: On servers that are managed by the Integrated Virtualization Manager. The output of this command is shown in Example 7-1. Slots 0-3 on clients.1-FP-20. A Virtual I/O Server logical partition or an i5/OS® logical partition cannot be migrated. The VLAN must be bridged to a physical network using a shared Ethernet adapter in the Virtual 224 IBM PowerVM Live Partition Mobility . Example 7-1 The output of the ioslevel command $ ioslevel 2.3 Technology Level 7 or later AIX Version 6.html Network requirements The migrating partition uses the virtual LAN for network access.0.ibm.software. Operating system requirements The operating system running in the mobile partition has to be AIX or Linux.1 or later Red Hat Enterprise Linux Version 5 (RHEL5) Update 1 or later SUSE Linux Enterprise Server 10 (SLES 10) Service Pack 1 or later Previous versions of AIX and Linux can participate in inactive partition migration if the operating systems support virtual devices and IBM Power Systems POWER6 technology-based systems. Slots 0-9 are reserved on VIOS. the source and destination Virtual I/O Server logical partitions might also be referred to as the source and destination management partitions.com/webapp/set2/sas/f/vios/documentation/data sheet. Storage requirements For a list of supported disks and optical devices. from the CLI.

Your LAN must be configured such that migrating partitions can continue to communicate with other necessary clients and servers after a migration is completed. Integrated Virtualization Manager for Live Partition Mobility 225 . The Integrated Virtualization Manager transfers the logical partition state from the source environment to the destination environment. Chapter 7. such as how the migration is initiated and the virtual adapter mapping choices. and virtual Fibre Channel configuration that exists on the source server. This process includes verifying that the Virtual I/O Server logical partitions on the destination server have enough available slots to accommodate the virtual adapter configuration of the mobile partition. The Integrated Virtualization Manager extracts the physical device description for each physical adapter on the Virtual I/O Server logical partition on the source server. However. The active migration process involves the following steps: 1. The Integrated Virtualization Manager uses all of this information to generate a list of recommended virtual adapter mappings for the mobile partition on the destination server. 5. as follows: a.3 How active Partition Mobility works With active Partition Mobility you can move a running logical partition. 2. several differences exist. virtual Ethernet. The Integrated Virtualization Manager uses the extracted information to determine whether the Virtual I/O Server logical partitions on the destination server can provide the mobile partition with the same virtual SCSI. You ensure that all requirements are satisfied and all preparation tasks are completed. The source mover service partition extracts the logical partition state information from the source server and sends it to the destination mover service partition over the network. including its operating system and applications. You use the migration task on the Integrated Virtualization Manager to initiate the active Partition Mobility. 4. from one server to another without disrupting the operation of that logical partition. 7. This includes using the virtual adapter mappings from the previous step to map the virtual adapters on the mobile partition to the virtual adapters on the Virtual I/O Server logical partition on the destination server. Active Partition Mobility on the Integrated Virtualization Manager is similar to the active Partition Mobility on the HMC.I/O Server partition. The Integrated Virtualization Manager prepares the source and destination environments for partition migration. 3.

The Integrated Virtualization Manager uses the extracted information to determine whether the Virtual I/O Server logical partitions on the destination server can provide the mobile partition with the same virtual SCSI. virtual Ethernet. 2.b. 6. You use the migration task on the Integrated Virtualization Manager to initiate inactive Partition Mobility.4 How inactive Partition Mobility works With inactive partition migration. All resources that were consumed by the mobile partition on the source server are reclaimed by the source server. The Integrated Virtualization Manager completes the migration. 8. however the same differences as with active Partition Mobility apply. such as adding dedicated I/O adapters to the mobile partition or adding the mobile partition to a partition workload group. The destination mover service partition receives the logical partition state information and installs it on the destination server. Inactive Partition Mobility on the Integrated Virtualization Manager is similar to the inactive Partition Mobility on the HMC. You ensure that all requirements are satisfied and all preparation tasks are completed. The Integrated Virtualization Manager extracts the physical device description for each physical adapter on the Virtual I/O Server logical partition on the source server. 9. 7. You perform post-requisite tasks. The source mover service partition continues to transfer the logical partition state information to the destination mover service partition. you can move a logical partition that is powered off from one server to another. The Integrated Virtualization Manager removes the virtual SCSI adapters and the virtual Fibre Channel adapters (that were connected to the mobile partition) from the source Virtual I/O Server logical partitions. 4. The inactive migration process involves the following steps: 1. and virtual Fibre Channel configuration that exists on the source server. This includes verifying that the Virtual I/O Server logical partitions on the destination server have enough available slots to accommodate the virtual adapter configuration of the mobile partition. You shut down the mobile partition. 7. The Integrated Virtualization Manager uses all of this information to generate a list 226 IBM PowerVM Live Partition Mobility . The Integrated Virtualization Manager suspends the mobile partition on the source server. The hypervisor resumes the mobile partition on the destination server. 3.

POWER Hypervisor. Integrated Virtualization Manager for Live Partition Mobility 227 . AIX passes the check migration request to those Chapter 7. 5. you have to validate your environment. You perform post-requisite tasks. The Integrated Virtualization Manager prepares the source and destination environments for Partition Mobility. such as establishing virtual terminal connections or adding the mobile partition to a partition workload group. The Integrated Virtualization Manager transfers the partition state from the source environment to the destination environment. 7. 7. The validation function on the Integrated Virtualization Manager checks the following items: The source and destination servers. You may start this check manually with the rmcctrl command That no physical adapters are in the mobile partition and that no virtual serial adapters are in virtual slots higher than 1 That no client virtual SCSI disks on the mobile partition are backed by logical volumes and that no disks map to internal disks The mobile partition.of recommended virtual adapter mappings for the mobile partition on the destination server. 8. 9. its operating system. The Integrated Virtualization Manager completes the migration. 6. All resources that were consumed by the mobile partition on the source server are reclaimed by the source server.5 Validation for active Partition Mobility Before an active logical partition is migrated. You activate the mobile partition on the destination server. Virtual I/O Servers. The Integrated Virtualization Manager removes the virtual SCSI adapters and the virtual Fibre Channel adapters (that were connected to the mobile partition) from the source Virtual I/O Server logical partitions. If the Integrated Virtualization Manager detects a configuration or connection problem. and the connection between the source and destination mover service partitions are established. and mover service partitions for active partition migration capability and compatibility That the Resource Monitoring and Control (RMC) connections to the mobile partition. You can use the validation function on the Integrated Virtualization Manager to validate your system configuration. the source and destination Virtual I/O Servers. and its applications for active migration capability. it displays an error message with information to help you resolve the problem.

Select the mobile partition in the Partition Details section and select the More Tasks menu. The Integrated Virtualization Manager uses the extracted information to determine whether the Virtual I/O Server logical partitions on the destination server can provide the mobile partition with the same virtual SCSI. the Integrated Virtualization Manager extracts the device description for each virtual adapter on the Virtual I/O Server logical partition on the source server. During validation. In Partition Management.applications and kernel extensions that have registered to be notified of dynamic reconfiguration events. The operating system either accepts or rejects the migration That the logical memory block size is the same on the source and destination servers That the operating system on the mobile partition is AIX or Linux That the mobile partition is not the redundant error path reporting logical partition That the mobile partition is not configured with barrier synchronization registers (BSR) That the mobile partition is not configured with huge pages That the mobile partition does not have a Host Ethernet Adapter (or Integrated Virtual Ethernet) That the mobile partition state is Active or Running That the mobile partition is not in a partition workload group The uniqueness of the mobile partition’s virtual MAC addresses That the required Virtual LAN IDs are available on the destination Virtual I/O Server That the mobile partition’s name is not already in use on the destination server The number of current active migrations against the number of supported active migrations That the necessary resources (processors and memory) are available to create a shell logical partition on the destination system. virtual Ethernet. 2. To initiate the validation through the Integrated Virtualization Manager: 1. This includes verifying that the Virtual I/O Server logical partitions on the destination server have enough available slots to accommodate the virtual adapter configuration of the mobile partition. select View/Modify Partitions. 228 IBM PowerVM Live Partition Mobility . and virtual Fibre Channel configuration that exists on the source server.

Select Migrate. Figure 7-2 More Tasks menu The Migrate Partition panel opens. The result is shown in Figure 7-2. Chapter 7.3. Integrated Virtualization Manager for Live Partition Mobility 229 .

4. Figure 7-3 Validation task for migration Note: Figure 7-3 gives you the impression that you might migrate from an IVM managed system to a remote IVM or HMC managed system.Figure 7-3 shows the Migrate Partition panel. 5. Click Validate. Ensure that the Remote IVM address. Remote IVM user ID and Remote IVM password are filled in to perform the validation before the actual migration. However at the time of this publication migration between IVM and HMC managed systems is not supported. 230 IBM PowerVM Live Partition Mobility .

it displays an error message with information to help you resolve the problem. During validation.7. the Integrated Virtualization Manager extracts the device description for each virtual adapter on the Virtual I/O Server logical partition on the source server. that is. You may use the validation function on the Integrated Virtualization Manager to validate your system configuration. virtual Chapter 7. The validation function on the Integrated Virtualization Manager checks the following items: The Virtual I/O Server and POWER Hypervisor migration capability and compatibility on the source and destination The Resource Monitoring and Control (RMC) connections to the source and destination Virtual I/O Servers That the mobile partition name is not already in use at the destination server The uniqueness of virtual Media Access Control (MAC) address That the required Virtual LAN IDs are available on the destination Virtual I/O Server That the mobile partition is in the Not Activated state That the mobile partition is an AIX or a Linux logical partition That the mobile partition is not the redundant error path reporting logical partition or a service logical partition That the mobile partition is not a member of a partition workload group The number of current inactive migrations against the number of supported inactive migrations That all required I/O devices are connected to the mobile partition through a Virtual I/O Server.6 Validation for inactive Partition Mobility Before an inactive logical partition is migrated. there are no physical adapters That the virtual SCSI disks assigned to the logical partition are accessible by the Virtual I/O Servers on the destination server That no virtual SCSI disks are backed by logical volumes and that no virtual SCSI disks are attached to internal disks (not on the SAN) That the necessary resources (processors and memory) are available to create a shell logical partition on the destination system. The Integrated Virtualization Manager uses the extracted information to determine whether the Virtual I/O Server logical partitions on the destination server can provide the mobile partition with the same virtual SCSI. If the Integrated Virtualization Manager detects a configuration or connection problem. Integrated Virtualization Manager for Live Partition Mobility 231 . you have to validate your environment.

7.7 Preparation for partition migration

This section describes how to prepare the source and destination servers, the management and mobile partitions, and the configurations for virtual SCSI and virtual Fibre Channel. It also discusses validating the environment and migrating the mobile partition.

7.7.1 Preparing the source and destination servers

To prepare the source and destination server for Partition Mobility using the Integrated Virtualization Manager:

1. Ensure that the source and destination servers are either a POWER7 technology-based server or blade supporting IVM, or one of the following POWER6 models:
   – 8203-E4A (IBM Power System 520 Express)
   – 8204-E8A (IBM Power System 550 Express)
   – 8234-EMA (IBM Power System 560 Express)
   – 9407-M15 (IBM Power System 520 Express)
   – 9408-M25 (IBM Power System 520 Express)
   – 9409-M50 (IBM Power System 550 Express)
   – BladeCenter JS12
   – BladeCenter JS22
2. Ensure that the logical memory block (LMB) size is the same on the source and destination server by determining the logical memory block size of each server and then updating the sizes if necessary:
   a. From the navigation area, select View/Modify System Properties under Partition Management. The View/Modify System Properties panel opens.
   b. Select the Memory tab to view and to modify the memory usage information for the managed system.

The result of these steps is shown in Figure 7-4.

Figure 7-4 Checking LMB size with the IVM

3. Ensure that the destination server has enough available memory to support the mobile partition:
   a. Determine the amount of memory that the mobile partition requires:
      i. From the Partition Management menu, click View/Modify Partitions. The View/Modify Partitions panel opens.
      ii. Select the mobile partition.
      iii. From the More Tasks menu, select Properties. A new window named Partition Properties opens.
      iv. Click the Memory tab.
      v. Record the minimum, assigned, and maximum memory settings.
      vi. Click OK.

The result of these steps is shown in Figure 7-5.

Figure 7-5 Checking the amount of memory of the mobile partition

   b. Determine the amount of memory that is available on the destination server:
      i. From the Partition Management menu, click View/Modify System Properties. The View/Modify System Properties panel opens.
      ii. Click the Memory tab.
      iii. From the General tab, record the Current memory available and the Reserved firmware memory.

The result of these steps is shown in Figure 7-6.

Figure 7-6 Checking the amount of memory on the destination server

   c. Compare the values from the mobile partition and the destination server. If necessary, you may add more available memory to the destination server to support the migration by dynamically removing memory from the other logical partitions.

Notes: Keep in mind that when you move the mobile partition to the destination server, the destination server requires more reserved firmware memory to manage the mobile partition. Users with the Service Representative (SR) role cannot view or modify storage values. Use any role other than View Only to modify the memory.
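The same memory figures can be collected from the IVM command line instead of the GUI. This is a sketch only: the lshwres attribute names shown (curr_min_mem, curr_mem, curr_max_mem, curr_avail_sys_mem, sys_firmware_mem, mem_region_size) follow the HMC-style command set and should be checked against your IVM level, and mobile_lpar1 and the values are placeholders.

   $ lshwres -r mem --level lpar --filter "lpar_names=mobile_lpar1" -F curr_min_mem,curr_mem,curr_max_mem
   512,4096,8192
   $ lshwres -r mem --level sys -F curr_avail_sys_mem,sys_firmware_mem,mem_region_size
   6912,1280,64

The destination must have at least the mobile partition's current memory available, plus the additional reserved firmware memory mentioned in the note above.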

4. Ensure that the destination server has enough available processors to support the mobile partition:
   a. Determine how many processors the mobile partition requires:
      i. From the Partition Management menu, click View/Modify Partitions. The View/Modify Partitions panel opens.
      ii. Select the logical partition for which you want to view the properties.
      iii. From the More Tasks menu, select Properties. A new window named Partition Properties opens.
      iv. Click the Processing tab and record the minimum, maximum, and available processing units settings.
      v. Click OK.

The result of these steps is shown in Figure 7-7.

Figure 7-7 Checking the amount of processing units of the mobile partition

   b. Determine the processors available on the destination server:
      i. From the Partition Management menu, click View/Modify System Properties. The View/Modify System Properties panel opens.
      ii. Select the Processing tab.
      iii. Record the Current processing units available.

The result of these steps is shown in Figure 7-8.

Figure 7-8 Checking the amount of processing units on the destination server

   c. Compare the values from the mobile partition and the destination server. If the destination server does not have enough available processors to support the mobile partition, use the Integrated Virtualization Manager to dynamically remove the processors from the logical partition, or you can remove processors from logical partitions on the destination server.

Note: You must have a super administrator role to perform this task.

5. Verify that the source and destination Virtual I/O Servers can communicate with each other.
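As with memory, the processor figures can be gathered from the command line. Again, this is a sketch under the assumption that the HMC-style lshwres attribute names are available at your IVM level; the partition name and values are placeholders.

   $ lshwres -r proc --level lpar --filter "lpar_names=mobile_lpar1" -F curr_min_proc_units,curr_proc_units,curr_max_proc_units
   0.1,0.5,2.0
   $ lshwres -r proc --level sys -F curr_avail_sys_proc_units
   2.40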

7.7.2 Preparing the management partition for Partition Mobility
To prepare the management partition for Partition Mobility using the Integrated Virtualization Manager:

1. Ensure that the source and destination servers are using Integrated Virtualization Manager Version 1.5 or later (see Figure 7-1 on page 223).
2. Ensure that the PowerVM Enterprise Edition hardware feature is activated. To view the currently enabled features, use the lsvet command. Example 7-2 shows the lsvet command used to verify that Partition Mobility is enabled.
Example 7-2 lsvet command
$ lsvet -t hist
time_stamp=11/11/2008 23:53:10,entry=[VIOSI05000423-0517] PowerVM Enterprise Edition code entered.
time_stamp=11/11/2008 23:53:10,entry=[VIOSI05000403-0332] Virtual I/O server capability enabled.
time_stamp=11/11/2008 23:53:10,entry=[VIOSI05000405-0333] Micro-partitioning capability enabled.
time_stamp=11/11/2008 23:53:10,entry=[VIOSI05000406-0334] Multiple partitions enabled.
time_stamp=11/11/2008 23:53:10,entry=VIOSI0500040B
time_stamp=11/11/2008 23:53:10,entry=[VIOSI0500042A-0341] Inactive partition mobility enabled.
time_stamp=11/11/2008 23:53:10,entry=[VIOSI0500042B-0342] Active partition mobility enabled.

If Partition Mobility is not enabled and the feature was purchased with the system, obtain the activation code from the IBM Capacity on Demand (CoD) Web site:

http://www-912.ibm.com/pod/pod

Enter the system type and serial number on the CoD site and click Submit. A list of available activation codes (such as VET or Virtualization Technology Code, POD, or CUoD Processor Activation Code) or keys with a type and description is displayed. If PowerVM Enterprise Edition was not purchased with the system, it can be upgraded through the Miscellaneous Equipment Specification (MES) process.

If necessary, enter the activation code in the Integrated Virtualization Manager, as follows:
1. From the IVM Management menu in the navigation area, click Enter PowerVM Edition Key. The Enter PowerVM Edition Key window opens.
2. Enter your activation code for PowerVM Edition and click Apply.


Figure 7-9 shows how to enter the key. When PowerVM Enterprise is enabled, a Mobility section is added to the More Tasks menu in the View/Modify Partitions view.


Figure 7-9 Enter PowerVM Edition key on the IVM.

7.7.3 Preparing the mobile partition for Partition Mobility
To prepare the mobile partition for Partition Mobility using the Integrated Virtualization Manager:

1. Ensure that the operating system is at one of the following levels:
   – AIX 5L Version 5.3 with the 5300-07 Technology Level or later
   – AIX Version 6.1 or later
   – Red Hat Enterprise Linux version 5 Update 1 or later
   – SUSE Linux Enterprise Server 10 (SLES 10) Service Pack 1 or later
   Earlier versions of AIX and Linux can participate in inactive Partition Mobility if the operating systems support virtual devices and IBM POWER6 models.


2. Ensure that the source and destination management partitions can communicate with each other.
3. Verify whether the processor compatibility mode of the mobile partition is supported on the destination server, and update the mode if necessary, so that you can successfully move the mobile partition to the destination server. To verify that the processor compatibility mode of the mobile partition is supported on the destination server using the Integrated Virtualization Manager:
   a. Identify the processor compatibility modes that are supported by the destination server by entering the following command in the command line of the Integrated Virtualization Manager on the destination server:
      lssyscfg -r sys -F lpar_proc_compat_modes
      Record these values so that you can refer to them later.
   b. Identify the processor compatibility mode of the mobile partition on the source server:
      i. From the Partition Management menu, click View/Modify Partitions. The View/Modify Partitions window is displayed.
      ii. In the contents area, select the mobile partition.
      iii. From the More Tasks menu, select Properties. The Partition Properties window opens.
      iv. Select the Processing tab.
      v. View the Current and Preferred processor compatibility mode values for the mobile partition. Record these values so that you can refer to them later.

Note: In versions earlier than 2.1 of the Integrated Virtualization Manager, the Integrated Virtualization Manager displays only the current processor compatibility mode for the mobile partition.


The result of these steps is shown in Figure 7-10.

Figure 7-10 Processor compatibility mode on the IVM
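Steps a and b can also be performed entirely from the command line. The first command below is the one given in step a; the second is a sketch, because the exact attribute names for the current and preferred modes (shown here as curr_lpar_proc_compat_mode and desired_lpar_proc_compat_mode) depend on the Integrated Virtualization Manager level, and mobile_lpar1 is a placeholder name.

   $ lssyscfg -r sys -F lpar_proc_compat_modes      # run on the destination server
   default,POWER6,POWER6+
   $ lssyscfg -r lpar --filter "lpar_names=mobile_lpar1" -F curr_lpar_proc_compat_mode,desired_lpar_proc_compat_mode
   POWER6,default                                   # run on the source server

Remember that Integrated Virtualization Manager versions earlier than 2.1 display only the current mode.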

   c. Verify that the processor compatibility mode (which you identified in step b on page 240) is in the list of supported processor compatibility modes (which you identified in step a on page 240) for the destination server. For active migrations, both the preferred and current modes of the mobile partition must be supported by the destination server. For inactive migrations, only the preferred mode must be supported by the destination server.

Note: If the current processor compatibility mode of the mobile partition is the POWER5 mode, be aware that the POWER5 mode does not appear in the list of modes supported by the destination server. However, the destination server does support the POWER5 mode even though it does not appear in the list of supported modes.

   d. If the preferred processor compatibility mode of the mobile partition is not supported by the destination server, use step b on page 240 to change the preferred mode to a mode that is supported by the destination server. For example, assume that the preferred mode of the mobile partition is the POWER6+ mode and you plan to move the mobile partition to a POWER6 technology-based server. The POWER6 technology-based server does not support the POWER6+ mode, but it does support the POWER6 mode. Therefore, you change the preferred mode to the POWER6 mode.


   e. If the current processor compatibility mode of the mobile partition is not supported by the destination server, try the following solutions:
      i. If the mobile partition is active, the hypervisor might not have had the opportunity to update the current mode of the mobile partition since the preferred mode was last changed. Restart the mobile partition so that the hypervisor can evaluate the configuration and update the current mode of the mobile partition.
      ii. If the current mode of the mobile partition still does not appear in the list of supported modes that you identified for the destination server, use step b on page 240 to change the preferred mode of the mobile partition to a mode that is supported by the destination server. Then, restart the mobile partition so that the hypervisor can evaluate the configuration and update the current mode of the mobile partition. For example, assume that the mobile partition runs on a refreshed POWER6 processor-based server and its current mode is the POWER6+ mode, and you want to move the mobile partition to a POWER6 technology-based server, which does not support the POWER6+ mode. You change the preferred mode of the mobile partition to the POWER6 mode and restart the mobile partition. The hypervisor evaluates the configuration and sets the current mode to the POWER6 mode, which is supported on the destination server.
4. Ensure that the mobile partition is not part of a partition workload group. A partition workload group identifies a set of logical partitions that are located on the same physical system. A partition workload group is defined when you use the Integrated Virtualization Manager to configure a logical partition. The partition workload group is intended for applications that manage software groups. You must remove the mobile partition from a partition workload group by completing the following steps:
   a. From the Partition Management menu, click View/Modify Partitions. The View/Modify Partitions window opens.
   b. Select the logical partition that you want to remove from the partition workload group.
   c. From the More Tasks menu, select Properties. A new window named Partition Properties opens.
   d. In the General tab, deselect the Partition workload group participant box.
   e. Click OK.


The result of these steps is shown in Figure 7-11.

Figure 7-11 Checking the partition workload group participation
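If you prefer the command line, the partition workload group membership can in principle be checked and cleared with lssyscfg and chsyscfg. This is only a sketch: the work_group_id attribute name is an assumption based on the HMC command set and may not be present or settable on every IVM level, so verify it with the command help before relying on it.

   $ lssyscfg -r lpar --filter "lpar_names=mobile_lpar1" -F name,work_group_id
   mobile_lpar1,none
   $ chsyscfg -r lpar -i "name=mobile_lpar1,work_group_id=none"   # clears the membership if one is set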

5. Ensure that the mobile partition does not have physical adapters, as follows:
   a. From the Partition Management menu, click View/Modify Partitions. The View/Modify Partitions window opens.
   b. Select the logical partition that you want to verify.
   c. From the More Tasks menu, select Properties. A new window named Partition Properties appears.
   d. In the Physical Adapters tab, verify that no physical adapters are configured.
   e. Click OK.


The result of these steps is shown in Figure 7-12.

Figure 7-12 Checking if the mobile partition has physical adapters

Note: During inactive migration, the Integrated Virtualization Manager removes physical I/O adapters that are assigned to the mobile partition.

6. Ensure that the applications running in the mobile partition are mobility-safe or mobility-aware. Most software applications running in AIX and Linux logical partitions do not require any changes to work correctly during active Partition Mobility. Certain applications might have dependencies on characteristics that change between the source and destination servers, and other applications might have to adjust to support the migration.

7.7.4 Preparing the virtual SCSI configuration for Partition Mobility

The mobile partition moves from one server to another by the source server sending the logical partition state information to the destination server over a local area network (LAN). However, partition disk data cannot pass from one system to another system over a network. Thus, for Partition Mobility to succeed, the mobile partition must use storage resources virtualized by a storage area network (SAN) so that it can access the same storage from both the source and destination servers.

The physical storage that the mobile partition uses is connected to the SAN. At least one physical adapter that is assigned to the source Virtual I/O Server logical partition is connected to the SAN, and at least one physical adapter that is assigned to the destination Virtual I/O Server logical partition is also connected to the SAN.

The physical adapter on the source Virtual I/O Server logical partition connects to one or more virtual adapters on the source Virtual I/O Server logical partition. Each virtual adapter on the source Virtual I/O Server logical partition connects to at least one virtual adapter on a client logical partition. Similarly, the physical adapter on the destination Virtual I/O Server logical partition connects to one or more virtual adapters on the destination Virtual I/O Server logical partition, and each virtual adapter on the destination Virtual I/O Server logical partition connects to at least one virtual adapter on a client logical partition.

When you move the mobile partition to the destination server, the Integrated Virtualization Manager automatically creates and connects virtual adapters on the destination server, as follows:
- Creates virtual adapters on the destination Virtual I/O Server logical partition
- Creates virtual adapters on the mobile partition
- Connects the virtual adapters on the destination Virtual I/O Server logical partition to the virtual adapters on the mobile partition

Note: The Integrated Virtualization Manager automatically adds and removes virtual SCSI adapters to and from the management partition and the logical partitions when you create and delete a logical partition.

Verify that the destination server provides the same virtual SCSI configuration as the source server so that the mobile partition can access its physical storage on the SAN after it moves to the destination server:

1. Verify that the physical storage that is used by the mobile partition is assigned to the management partition on the source server and to the management partition on the destination server.
2. Verify that the reserve_policy attributes on the physical volumes are set to no_reserve so that the mobile partition can access its physical storage on the SAN from the destination server. To set the reserve_policy attribute of the physical storage to no_reserve:
   a. From either the Virtual I/O Server logical partition on the source server or the Virtual I/O Server on the destination server, list the disks to which the Virtual I/O Server has access. Run the following command:
      lsdev -type disk
   b. List the attributes of each disk. Run the following command, where hdiskX is the name of the disk that you identified in the previous step:
      lsdev -dev hdiskX -attr

The output is shown in Example 7-3.

Example 7-3 lsdev command
$ lsdev -dev hdisk6 -attr
attribute        value                                           description                     user_settable
PCM              PCM/friend/otherapdisk                          Path Control Module             False
PR_key_value     none                                            Persistant Reserve Key Value    True
algorithm        fail_over                                       Algorithm                       True
autorecovery     no                                              Path/Ownership Autorecovery     True
clr_q            no                                              Device CLEARS its Queue on error True
cntl_delay_time  0                                               Controller Delay Time           True
cntl_hcheck_int  0                                               Controller Health Check Interval True
dist_err_pcnt    0                                               Distributed Error Percentage    True
dist_tw_width    50                                              Distributed Error Sample Time   True
hcheck_cmd       inquiry                                         Health Check Command            True
hcheck_interval  60                                              Health Check Interval           True
hcheck_mode      nonactive                                       Health Check Mode               True
location                                                         Location Label                  True
lun_id           0x0                                             Logical Unit Number ID          False
lun_reset_spt    yes                                             LUN Reset Supported             True
max_retry_delay  60                                              Maximum Quiesce Time            True
max_transfer     0x40000                                         Maximum TRANSFER Size           True
node_name        0x200200a0b811a662                              FC Node Name                    False
pvid             none                                            Physical volume identifier      False
q_err            yes                                             Use QERR bit                    True
q_type           simple                                          Queuing TYPE                    True
queue_depth      10                                              Queue DEPTH                     True
reassign_to      120                                             REASSIGN time out value         True
reserve_policy   no_reserve                                      Reserve Policy                  True
rw_timeout       30                                              READ/WRITE time out value       True
scsi_id          0x660e00                                        SCSI ID                         False
start_timeout    60                                              START unit time out value       True
unique_id        3E213600A0B8000114632000073224919AD540F1815 FAStT03IBMfcp Unique device identifier False
ww_name          0x203200a0b811a662                              FC World Wide Name              False
$

   c. If the reserve_policy attribute is set to anything other than no_reserve, set the reserve_policy to no_reserve by running the following command, where hdiskX is the name of the disk for which you want to set the reserve_policy attribute to no_reserve:
      chdev -dev hdiskX -attr reserve_policy=no_reserve
3. Verify that the virtual disks have the same unique identifier, physical identifier, or IEEE volume attribute, as follows:
   a. To verify whether the virtual device has an IEEE volume attribute identifier, run the following command on the Virtual I/O Server:
      lsdev -dev hdiskX -attr
      If the output does not have the ieee_volname field, the virtual device has no IEEE volume identifier attribute.
   b. To verify whether the virtual device has a UDID, type the commands:
      oem_setup_env
      odmget -qattribute=unique_id CuAt
      exit
      Only disks that have a UDID will be listed in the output.

See Example 7-4.

Example 7-4 The odmget command
$ oem_setup_env
# odmget -qattribute=unique_id CuAt
...
CuAt:
        name = "hdisk7"
        attribute = "unique_id"
        value = "3E213600A0B8000291B080000520C023C6B410F1815 ... FAStT03IBMfcp"
        type = "R"
        generic = "D"
        rep = "nl"
        nls_index = 79
...
#

   c. To verify whether the virtual device has a PVID, run the following command:
      lspv
      The output shows the disks with their respective PVIDs.
   d. If the virtual disks do not have a UDID, IEEE volume attribute identifier, or PVID, assign an identifier, as follows:
      i. Upgrade your vendor software and repeat the procedure (see step 3 on page 246). Before upgrading, be sure to preserve any virtual SCSI devices that you created.
      ii. If the upgrade does not produce a UDID or IEEE volume attribute identifier, run the following command to put a PVID on the physical volume:
         chdev -dev hdiskX -attr pv=yes
4. Verify that the mobile partition has access to its physical storage from both the source and destination environments, as follows:
   a. From the Virtual Storage Management menu, click View/Modify Virtual Storage.
   b. On the Virtual Disk tab, verify that the logical partition does not own any virtual disk.
   c. On the Physical Volumes tab, verify that the physical volumes that are mapped to the mobile partition are exportable.
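When many SAN disks are mapped to the management partition, checking the reserve_policy attribute one disk at a time is tedious. The following sketch loops over all disks from the root shell (reached with oem_setup_env, as in Example 7-4); the disk names and the single_path value in the sample output are illustrative only.

   $ oem_setup_env
   # for d in $(lsdev -Cc disk -F name); do
   >   echo "$d: $(lsattr -El $d -a reserve_policy -F value)"
   > done
   hdisk6: no_reserve
   hdisk7: single_path
   # exit
   $ chdev -dev hdisk7 -attr reserve_policy=no_reserve   # fix any disk that is not no_reserve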

7.7.5 Preparing the virtual Fibre Channel configuration

If the mobile partition connects to physical storage through virtual Fibre Channel adapters, the physical adapters that are assigned to the source and destination Virtual I/O Server logical partitions must support N_Port ID Virtualization (NPIV).

Each virtual Fibre Channel adapter that is created on the mobile partition (or any client logical partition) is assigned a pair of worldwide port names (WWPNs). Both WWPNs are assigned to the physical storage that the mobile partition uses. During normal operation, the mobile partition uses one WWPN to log on to the SAN and access the physical storage. When you move the mobile partition to the destination server, there is a brief period of time during which the mobile partition runs on both the source and destination servers. Because the mobile partition cannot log on to the SAN from both the source and destination servers at the same time using the same WWPN, the mobile partition uses the second WWPN to log on to the SAN from the destination server during the migration. The WWPNs of each virtual Fibre Channel adapter move with the mobile partition to the destination server.

Note: The Integrated Virtualization Manager automatically adds and removes virtual Fibre Channel adapters to and from the management partition and the logical partitions when you assign and unassign logical partitions to and from physical Fibre Channel ports using the graphical user interface.

The first step is to assign virtual Fibre Channel adapters to your client partition using the physical NPIV-capable adapter that is being used in your management partition. Access the GUI and perform the following tasks:

1. From the I/O Adapter Management menu in the navigation area, click View/Modify Virtual Fibre Channel. The View/Modify Virtual Fibre Channel window opens, showing all the physical ports, connected partitions, and available connections on the physical Fibre Channel adapters that support NPIV.

Figure 7-13 shows the View/Modify Virtual Fibre Channel window.

Figure 7-13 View/Modify Virtual Fibre Channel window

You can now see the physical Fibre Channel adapters that are capable of being used for hosting virtual Fibre Channel adapters.

2. Select the physical adapter to use and click Modify Partition Connections.

The Virtual Fibre Channel Partition Connections window opens. See Figure 7-14.

Figure 7-14 Virtual Fibre Channel Partition Connections window

You may now choose to add or remove virtual Fibre Channel adapter assignments for a partition. In this case, you will select the partition of your choice so that a virtual Fibre Channel adapter is created and WWPNs are generated for the client. After you select a partition, the phrase Automatically generate is displayed in the Worldwide Port Names column, as shown in Figure 7-15.

Figure 7-15 Partition selected shows Automatically generate

3. Click OK. The WWPNs for the client partition are generated.

Next, verify that the destination server provides the same virtual Fibre Channel configuration as the source server so that the mobile partition can access its physical storage on the SAN after it moves to the destination server, as follows:

1. Verify, for each virtual Fibre Channel adapter on the mobile partition, that both WWPNs are assigned to the same physical storage on the SAN.

2. View and modify the properties of the logical partition, as follows:
   a. Select View/Modify Partitions under Partition Management. The View/Modify Partitions page is displayed.
   b. Select the logical partition for which you want to view or modify the properties.
   c. From the More Tasks menu, select Properties. A new window named Partition Properties appears.
   d. Select the Storage tab to view or to modify the logical partition storage settings. You can view and modify settings for virtual disks and physical volumes.
   e. Expand the Virtual Fibre Channel section. See Figure 7-16.

Figure 7-16 Virtual Fibre Channel on source system

   f. Click OK to save your changes. If the logical partition for which you changed the properties is active and is not capable of DLPAR, you must shut down and reactivate the logical partition before the changes take effect. If the logical partition for which you changed the properties is inactive, the changes take effect when you next activate the partition.
3. Verify that the switches to which the physical Fibre Channel adapters on both the source and destination management partitions are cabled support NPIV.
4. Verify that the management partition on the destination server provides a sufficient number of available physical ports for the mobile partition to

maintain access to its physical storage on the SAN from the destination server, as follows:
   a. Determine the number of physical ports that the mobile partition uses on the source server:
      i. From the Partition Management menu, select View/Modify Partitions. The View/Modify Partitions panel opens.
      ii. Select the mobile partition.
      iii. From the More Tasks menu, click Properties. A new window called Partition Properties appears.
      iv. Click the Storage tab.
      v. Expand the Virtual Fibre Channel section.
      vi. Record the number of physical ports that are assigned to the mobile partition and click OK. See Figure 7-17.

Figure 7-17 Virtual Fibre Channel on destination system

   b. Determine the number of physical ports that are available on the management partition on the destination server. In the management GUI on the destination system, you may use the View/Modify Virtual Fibre Channel option as described in step 1 on page 248 to verify the number of physical ports that are available on the destination server:
      i. From the I/O Adapter Management menu, select View/Modify Virtual Fibre Channel. The View/Modify Virtual Fibre Channel panel opens.

      ii. Record the number of physical ports with available connections.
      iii. Compare the information that you identified in step a on page 252 to the information that you identified in step b on page 252.

Note: You may also use the lslparmigr command to verify that the destination server provides enough available physical ports to support the virtual Fibre Channel configuration of the mobile partition.

5. You may now choose to validate and migrate the mobile partition to the destination server. After migration is complete, notice the following points:
   - The WWPNs assigned to the virtual Fibre Channel adapters on the partition do not change, but the adapter assignment is now to the physical adapter provided by the destination system.
   - The number of connected partitions to the physical adapter also increases, and the number of available ports decreases.

7.7.6 Preparing the network configuration for Partition Mobility

During active partition migration, the two management partitions must be able to communicate with each other. The network is used to pass the mobile partition state information and other configuration data from the source environment to the destination environment. Active Partition Mobility has no specific requirements on the mobile partition's memory size. The memory transfer is a procedure that does not interrupt a mobile partition's activity and might take time when a large memory configuration is involved on a slow network. Therefore, use a high-bandwidth connection, such as 1 Gbps Ethernet.

The mobile partition uses the virtual LAN for network access. The virtual LAN must be bridged to a physical network using a virtual Ethernet bridge in the management partition. The LAN must be configured so that the mobile partition can continue to communicate with other necessary clients and servers after a migration is completed.

To prepare your network configuration for Partition Mobility:

1. Configure a virtual Ethernet bridge on the source and destination management partitions, as follows:
   a. From the I/O Adapter Management menu, select View/Modify Virtual Ethernet. The View/Modify Virtual Ethernet panel opens.
   b. Click the Virtual Ethernet Bridge tab.
   c. Set each Physical Adapter field to the physical adapter that you want to use as the virtual Ethernet bridge for each virtual Ethernet network.
   d. Click Apply for the changes to take effect.

The result of these steps is shown in Figure 7-18.

Figure 7-18 Selecting physical adapter to be used as a virtual Ethernet bridge

You may assign a Host Ethernet Adapter (or Integrated Virtual Ethernet) port to a logical partition so that the logical partition can directly access the external network by completing the following steps:
   a. From the I/O Adapter Management menu, select View/Modify Host Ethernet Adapters.
   b. Select a port with at least one available connection and click Properties.

   c. Select the Connected Partitions tab.
   d. Select the logical partition that you want to assign to the Host Ethernet Adapter port and click OK. In the Performance area of the General tab, you may adjust the settings (such as speed and maximum transmission unit) for the selected Host Ethernet Adapter port.

2. Create at least one virtual Ethernet adapter on the mobile partition:

   Note: This step is not required for inactive migration.

   a. From the Partition Management menu, select View/Modify Partitions.
   b. Select the logical partition to which you want to assign the virtual Ethernet adapter.
   c. From the More Tasks menu, select Properties. A new window named Partition Properties opens.
   d. Select the Ethernet tab.
   e. In the Virtual Ethernet Adapters section, select a virtual Ethernet for the adapter and click OK. If no adapters are available, click Create Adapter to add a new adapter to the list and then repeat the previous step.
   f. Click OK to exit the Partition Properties window.

The result of these steps is shown in Figure 7-19.

Figure 7-19 Create virtual Ethernet adapter on the mobile partition

Create a virtual Ethernet adapter on the management partition:
   i. In the Virtual Ethernet Adapters section, click Create Adapter.
   ii. Enter the Virtual Ethernet ID and click OK to exit the Enter Virtual Ethernet ID window.

The result of these steps is shown in Figure 7-20.

Figure 7-20 Create a virtual Ethernet adapter on the management partition

3. Activate the mobile partition to establish communication between the virtual Ethernet and the management partition virtual Ethernet adapter.
4. Verify that the operating system of the mobile partition recognizes the new Ethernet adapter.
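On an AIX mobile partition, a quick way to confirm that the new virtual Ethernet adapter is visible to the operating system is shown below; the adapter and interface names are examples only.

   # cfgmgr                          # discover devices added dynamically
   # lsdev -Cc adapter | grep ent
   ent0   Available   Virtual I/O Ethernet Adapter (l-lan)
   # netstat -in                     # confirm that the corresponding en interface is configured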

7.7.7 Validating the Partition Mobility environment

To validate the Partition Mobility environment:
1. From the Partition Management menu, select View/Modify Partitions. The View/Modify Partitions panel opens.
2. Select the logical partition you want to migrate.
3. From the More Tasks menu, select Migrate.
4. Enter the Remote IVM, Remote user ID, and Password of that remote user. This is shown in Figure 7-3 on page 230.
5. Click Validate to confirm that the changed settings are acceptable for Partition Mobility.

7.7.8 Migrating the mobile partition

After successfully completing all prerequisite tasks, migrate the mobile partition:
1. From the Partition Management menu, select View/Modify Partitions. The View/Modify Partitions panel opens.
2. Select the logical partition you want to migrate.
3. From the More Tasks menu, select Migrate.
4. Enter the Remote IVM, Remote user ID, and Password of that remote user.
5. Click Migrate.

The result is shown in Figure 7-21.

Figure 7-21 Partition is migrating

If necessary, perform the following optional post-requisite tasks to complete migration of your logical partition:
1. Activate the mobile partition on the destination server in case of an inactive partition migration.
2. If any virtual terminal connections were lost during the migration, re-establish the connections on the destination server.
3. Add physical adapters to the mobile partition on the destination server.
4. If mobility-unaware applications were terminated on the mobile partition prior to its movement, then restart those applications on the destination.
5. Assign the mobile partition to a logical partition group.
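The migration described in 7.7.8 can also be started and monitored from the IVM command line. The sketch below uses the same placeholder names as earlier; -o m starts the migration, and lslparmigr reports its progress. The -o s (stop) and -o r (recover) operations exist for interrupted migrations, but check the command help on your level before using them.

   $ migrlpar -o m -t TARGET_SYSTEM --ip 9.3.5.10 -u padmin -p mobile_lpar1
   $ lslparmigr -r lpar -F name,migration_state
   mobile_lpar1,Migration In Progress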

Appendix A. Error codes and logs

This appendix lists System Reference Codes (SRCs) and operating system error logs that pertain to Live Partition Mobility. This appendix contains the following topics:
- "SRCs, current state" on page 260
- "SRC error codes" on page 261
- "IVM source and destination systems error codes" on page 262
- "Operating system error logs" on page 266

SRCs, current state

Table A-1 lists the SRCs that indicate the current state of a partition migration.

Table A-1 Progress SRCs
- 2005: Partition is performing the drmgr command as part of an active migration. The code is displayed on the source server while the partition is waiting to suspend, and on the destination server until the drmgr processing has completed.
- C2001020, C2001030, C2001040, C2001080, C2001082, C20010FF, D200A250, D200AFFF: Progress codes reporting that the partition has requested to suspend, that the partition processors are stopped or restarted, that the partition is the source of an inactive migration, that the partition is the target of an inactive or active migration, that the partition migration was canceled, or that the migration is complete.
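While a migration is running, the progress SRCs listed in Table A-1 can be read from the command line with the lsrefcode command. This is a sketch; the partition name, option names, and output format are illustrative and should be checked against your management interface level.

   $ lsrefcode -r lpar --filter "lpar_names=mobile_lpar1" -F lpar_name,refcode
   mobile_lpar1,C20010FF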

SRC error codes

Table A-2 lists SRC codes that indicate problems with a partition migration.

Table A-2 SRC error codes
The codes B2001130, B2001131, B2001132, B2001133, B2001134, B2001140, B2001141, B2001142, B2001143, B2001144, B2001145, B2001151, B2002210, B2002220, and B2008160 report the following failures: partition migration readiness check failed; failed to lock or unlock the partition configuration of the partition; PFDS build failure; data import failure; processing of transferred data failed; allocated LpEvents failed; resume of LpQueues failed; partition attempted a dump during migration; failed to suspend virtual I/O for the partition; failed to resume virtual I/O for the partition.

IVM source and destination systems error codes

Error codes listed in this section are from the migration commands migrlpar or lslparmigr and can be generated on either the source system (Table A-3) or the destination system (Table A-4 on page 264). The prefix indicates whether the error is generated on the source or on the destination system.

Table A-3 Source system generated error codes
The source system can generate the codes VIOSE01042021, VIOSE01042023, VIOSE01042024, VIOSE01042025, VIOSE01042026, VIOSE01042027, VIOSE01042028, VIOSE01042029, VIOSE0104202A, VIOSE0104202B, VIOSE0104202C, VIOSE0104202D, VIOSE0104202E, VIOSE0104202F, VIOSE01042030, VIOSE01042032, VIOSE01042034, VIOSE01042036, VIOSE01042037, VIOSE01042039, VIOSE0104203D, VIOSE0104203E, VIOSE0104203F, VIOSE01042042, VIOSE01042043, VIOSE01042044, VIOSE01042047, VIOSE01042049, VIOSE0104204A, VIOSE0104204D, VIOSE01040104, VIOSE01040F04, and VIOSE01040F05. They report the following conditions:
- The maximum number of migrations are already taking place on the source managed system.
- The Virtual I/O Server on the source managed system is not marked as an MSP.
- The Virtual I/O Server partition is not capable of taking part in a migration.
- The partition is unable to be migrated while active.
- The partition cannot be migrated because its partition type is i5/OS.
- The partition cannot be migrated because it is a Virtual I/O Server.
- The partition cannot be migrated because it has HEA resource assignments.
- The partition cannot be migrated because it has physical I/O assignments.
- The partition cannot be migrated because it is in a workload management group.
- The partition cannot be migrated when in its current power state.
- The partition cannot be migrated because the partition is assigned storage that cannot be migrated.
- The partition cannot be migrated because it has a virtual Ethernet trunk adapter.
- The partition cannot be migrated because it is already involved in a migration.
- A virtual slot owned by the partition has an adapter that cannot be migrated.
- RMC is not active between the destination managed system and a Virtual I/O Server on the destination managed system.
- RMC needs to be active to perform active migrations.
- RMC is not active with the migrating partition.
- The specified IP address cannot be found.
- The migrremote command on the target managed system returned non-zero.
- The destination manager does not support remote partition migration.
- The executed command must be run on the source.
- An MSP on the source partition cannot communicate with any MSP on the destination managed system.
- The partition with the given ID is not an MSP.
- The partition is not the source of the migration.
- The partition is not in the process of a migration.
- The migration has been stopped by the managed system.
- The migration of the partition has been stopped.
- The migrlpar process was unable to finish the migration on the source managed system because other tasks have not finished.
- Failed to lock the storage configuration on the source Virtual I/O Server.
- Failed to start the transmission of partition data on the source MSP.
- A command run on the Virtual I/O Server failed.
- The source Virtual I/O Server generated an error while processing a virtual LAN configuration.
- The source Virtual I/O Server generated a warning while processing a virtual LAN configuration.
- A warning that the partition has a physical I/O resource assigned to it that will be removed as part of the inactive migration.

The memory region size on destination managed system is not the same as the source managed system. This code appears only if the source makes a clean-up request to the target. Unable to find a Virtual I/O Server partition with the given name on the destination managed system. The specified partition is not a mover service partition on the target managed system. VIOSE0109000C VIOSE0109000E VIOSE01090010 VIOSE01090011 VIOSE01090012 VIOSE01090014 VIOSE01090015 VIOSE01090016 264 IBM PowerVM Live Partition Mobility . Unable to find a Virtual I/O Server partition with the given ID on the destination managed system. This is only a warning message and does not cause the migration to fail. but the partition on the target is not in the process of a migration. The processor compatibility mode of the migrating partition is not supported by the destination managed system.Table A-4 Destination system generated error codes Code VIOSE01090001 VIOSE01090003 VIOSE01090004 VIOSE01090005 VIOSE01090006 VIOSE01090008 VIOSE01090009 VIOSE0109000A VIOSE0109000B Meaning The migration requires a capability that the destination manager does not support. The target managed system does not have enough available memory to create the partition. A given partition name does not match the name of the partition with the given ID. The destination managed system already has the maximum number of partitions. The target Virtual I/O Server does not support partition mobility. The target managed system does not have enough available processing units to create the partition. A VLAN that is bridged on the source Virtual I/O Server is not bridged on the target. The name of the migrating partition is already in use on the destination managed system. An unhandled extended error was received from firmware. The maximum number of migrations are already taking place on the destination managed system. The destination managed system does not have access to the storage assigned to the migrating partition.

The RMC connection to a partition (either Virtual I/O Server or MSP) is not active. The destination managed system was not found. Error codes and logs 265 . The command to set the storage configuration for the partition failed. The partition with the specified name is not an MSP. The maximum processors value exceeds the largest supported processor value on the target managed system. The availability priority of the mobile partition is higher than the target management partition. The processor pool ID specified was not found on the target managed system. The partition with the given ID on the target managed system is not a Virtual I/O Server. Unable to find the specified IP address on the target MSP Not enough memory is available for firmware to use with the new partition. A command called locally on the Virtual I/O Server failed. Appendix A. The command to lock the storage configuration for the partition failed. The processor pool name specified was not found on the target managed system. A Virtual I/O Server partition with the given name does not exist on the target managed system. The command to start data transmission on the destination MSP failed. The destination managed system is not capable of taking part in a migration.Code VIOSE01090017 VIOSE01090018 VIOSE01090019 VIOSE0109001B VIOSE0109001C VIOSE0109001D VIOSE0109002E VIOSE01090030 VIOSE01090032 VIOSE01090033 VIOSE01090034 VIOSE01090035 VIOSE01090036 VIOSE01090037 VIOSE01090038 VIOSE01090039 VIOSE0109003A VIOSE0109003B Meaning The target managed system does not have enough available processors to create the partition. The destination managed system was not able to clean up the migration because not all partitions involved have finished.

VIOSE0109003D VIOSE0109003E Operating system error logs Table A-5 lists entries that can appear in the operating system error logs of the partitions involved in a partition migration.Code VIOSE0109003C Meaning The destination Virtual I/O Server generated a warning while processing a virtual LAN configuration. Table A-5 Operating system error log entries Error log entry labels CLIENT_FAILURE MVR_FORCE_SUSPEND MVR_MIG_COMPLETED MVR_MIG_ABORTED CLIENT_PMIG_STARTED CLIENT_PMIG_DONE Location (partition) Virtual I/O Server providing VSCSI services to mobile partition Mover service partition Virtual I/O Server Mover service partition Virtual I/O Server Mover service partition Virtual I/O Server AIX 5L mobile partition AIX 5L mobile partition 266 IBM PowerVM Live Partition Mobility . and the second column lists the partition that logs the entry. This is only a warning and will not cause the migration to fail. The partition cannot be migrated because the target Virtual I/O Server has already reached its maximum number of virtual slots. The first column lists the label of the entry. The destination Virtual I/O Server generated an error while processing a virtual LAN configuration.
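After a migration, or an aborted one, the Table A-5 entries can be pulled from the AIX error log by label. The commands below are standard errpt usage; the labels are the ones listed in Table A-5.

   # errpt -J CLIENT_PMIG_STARTED,CLIENT_PMIG_DONE    # on the AIX mobile partition
   # errpt -a -J MVR_MIG_ABORTED                      # detailed view on the mover service partition VIOS (root shell via oem_setup_env)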

Abbreviations and acronyms ABI ACL AFPA AIO AIX APAR API ARP ASMI BFF BIND BIST BLV BOOTP BOS BSD BSR CA CATE CD CDE CD-R CD-ROM CEC application binary interface access control list Adaptive Fast Path Architecture asynchronous I/O Advanced Interactive Executive authorized program analysis report application programming interface Address Resolution Protocol Advanced System Management Interface Backup File Format Berkeley Internet Name Domain Built-in Self-Test Boot Logical Volume boot protocol base operating system Berkeley Software Distribution barrier-synchronization register certificate authority Certified Advanced Technical Expert compact disc Common Desktop Environment CD Recordable compact disc-read only memory central electronics complex ESS F/C FC FCAL DR DVD EC ECC EOF EPOW ERRM CLI CLVM CPU CRC CSM CSV CUoD DCM DES DGD DHCP DLPAR DMA DNS DRM CHRP Common Hardware Reference Platform command-line interface Concurrent LVM central processing unit cyclic redundancy check Cluster System Management comma-separated values Capacity Upgrade on Demand Dual Chip Module Data Encryption Standard dead gateway detection Dynamic Host Configuration Protocol dynamic LPAR direct memory access Domain Name System dynamic reconfiguration manager dynamic reconfiguration digital versatile disc EtherChannel Error Checking and Correcting end of file Environmental and Power Warning Event Response resource manager Enterprise Storage Server® feature code Fibre Channel Fibre Channel Arbitrated Loop © Copyright IBM Corp. 267 . 2007. 2009. All rights reserved.

FDX FLOP FRU FTP GDPS® GID GPFS™ GUI HACMP HBA HEA HMC HPT HTML HTTP Hz I/O IBM ID IDE IEEE IP IPAT IPL IPMP ISV ITSO IVM JFS JIT full duplex floating point operation field replaceable unit file transfer protocol Geographically Dispersed Parallel Sysplex™ Group ID General Parallel File System graphical user interface High-Availability Cluster Multi-Processing Host Bus Adapters Host Ethernet Adapter Hardware Management Console hardware page table Hypertext Markup Language Hypertext Transfer Protocol Hertz input/output International Business Machines Corporation Identification Integrated Device Electronics Institute of Electrical and Electronics Engineers Internetwork Protocol IP Address Takeover initial program load IP Multipathing independent software vendor International Technical Support Organization Integrated Virtualization Manager journaled file system just in time L1 L2 L3 LA LACP LAN LDAP LED LHEA LMB LPAR LPP LUN LV LVCB LVM MAC Mbps MBps MCM ML MP MPIO MSP MTU NFS NIB NIM NIMOL NTP NVRAM ODM OSPF Level 1 Level 2 Level 3 Link Aggregation Link Aggregation Control Protocol local area network Lightweight Directory Access Protocol light emitting diode Logical Host Ethernet Adapter logical memory block logical partition Licensed Program Product logical unit number logical volume logical volume control block Logical Volume Manager Media Access Control megabits per second megabytes per second Multi-Chip Module maintenance level multiprocessor multipath I/O mover service partition maximum transmission unit network file system Network Interface Backup Network Installation Management NIM on Linux Network Time Protocol non-volatile random access memory Object Data Manager Open Shortest Path First 268 IBM PowerVM Live Partition Mobility .

and serviceability remote copy redundant disk array controller remote I/O Routing Information Protocol reduced instruction set computer Resource Monitoring and Control remote procedure call remote program loader Red Hat Package Manager RSA RSCT RSH SAN SCSI SDD SEA SIMD SMIT SMP SMS SMT SP SPOT SRC SRN SSA SSH SSL SUID SVC TCP/IP TSA UDF UDID VASI VIPA VG VGDA Rivest-Shamir-Adleman algorithm Reliable Scalable Cluster Technology remote shell storage area network Small Computer System Interface Subsystem Device Driver Shared Ethernet Adapter single-instruction. multiple-data System Management Interface Tool symmetric multiprocessor System Management Services simultaneous mulithreading service processor shared product object tree System Resource Controller service request number Serial Storage Architecture Secure Shell Secure Sockets Layer set user ID SAN Virtualization Controller Transmission Control Protocol/Internet Protocol Tivoli® System Automation Universal Disk Format Universal Disk Identification virtual asynchronous services interface virtual IP address volume group Volume Group Descriptor Area PTF PTX PURR PV PVID PVID QoS RAID RAM RAS RCP RDAC RIO RIP RISC RMC RPC RPL RPM Abbreviations and acronyms 269 . availability.PCI PIC PID PKI PLM PMAPI PMP POST POWER Peripheral Component Interconnect Pool Idle Count process ID public key infrastructure Partition Load Manager Performance Monitor API Project Management Professional power-on self-test Performance Optimization with Enhanced Risc (Architecture) program temporary fix Performance Toolbox Processor Utilization Resource Register physical volume physical volume identifier Port Virtual LAN Identifier Quality of Service Redundant Array of Independent Disks random access memory reliability.

VGSA VLAN VP VPD VPN VRRP VSD WLM Volume Group Status Area virtual local area network virtual processor vital product data virtual private network Virtual Router Redundancy Protocol virtual shared disk workload manager 270 IBM PowerVM Live Partition Mobility .



220 ioslevel 51. maximum 88 dynamic reconfiguration event check-migrate 35 post 39 post migration 35 prepare for migration 37 E environments basic 90 errlog command 215. 157. 201 migratepv 156 reducevg 156 ssh-keygen 163 topas 43 tprof 43 clients lsdev 88 mktcpip 88 HMC lslic 49 lslparmigr 41. 151 errlog 215. 42 disks. 157. 163. 175. 173 ssh 163 IVM chdev 246–247 ioslevel 224 lsdev 245–246 lslparmigr 253 lspv 246–247 lssyscfg 240 lsvet 238 odmget 246 VIOS chdev 80–81. 145. 128. 172.command line interface See CLI commands AIX bootlist 156 bosboot 156 cfgmgr 154. internal 34 distance. 32 dedicated I/O 93 dedicated resources 149 demand paging 38 dirty memory pages 38. 218 mkauthkeys 138. 64 lsattr 81 lsdev 79. 135. 166. 214 lsrefcode 214 lsrsrc 67 lssyscfg 175 migrlpar 129. 220 EtherChannel 87 exclusive-use processor resource set (XRSET) 43 extendvg command 155 F filemon command 43 firmware 48 supported migration matrix 50 G gratuitous ARP 39 276 IBM PowerVM Live Partition Mobility . 201 chvg 155 errpt 215. 175. 121. 220 error logging partition 28 error logs 266 error messages 101 errpt command 215. 145. 220 extendvg 155 filemon 43 lsdev 154. 191 lslparmigr 172 lsmap 191 lspv 81 mkvdev 151 odmget 80 oem_setup_env 80 compatibility 28 active migration 34 completion active migration 39 configuration memory 37 processor 37 virtual adapter 37 configuration checks 34 D dedicated adapters 25.

28 completion phase 31 dedicated I/O adapters 92 definition 5 example 9. 28 compatibility 23.H HACMP 16 Hardware Management Console See HMC hardware page table 32 HEA 32 IVE 22 heart-beat 38 help 274 High Availability Cluster Multiprocessing 16 HMC 20 configuration 8. 218 workflow 5. 78 Link Aggregation 87 Linux 28. 257 validation for active migration 226–227. 44. 74 hypervisor 21. 138 upgrade 62 hmcsuperadmin role 56. 25. 14 HMC 12 huge pages 25 migratability 25 migration phase 29 multiple concurrent migrations 128 partition profile 27. 208 remote 130 rollback 31 shared Ethernet adapter 8 stopping 31 validation 28. 34. 27 active profile 28 capability 23. 30 processor compatibility mode 205. 42 processor compatibility mode 205 I IEEE volume attribute 80 inactive migration 5. 231 virtual Fibre Channel 248 internal disks 34 invalid state 42 ioslevel command 51. 224 iSCSI 24 IVE 22 HEA 22 LHEA 25 K kernel extensions 39. 38. 12. 216. 64. 30 infrastructure flexibility 3 Integrated Virtual Ethernet 78 See IVE Integrated Virtualization Manager 221 activation of edition key 238 firmware 222 how active migration works 225 how inactive migration works 226 migrating 257 network 253 operating system requirements 224 partition workload group 242 physical adapters 243 preparation 232 processor compatibility mode 240 requirements 222 reserve policy 245 updates 223 validating 253. 11 dual configuration 130 local 131 locking mechanism 130 migration progress window 215 preparation 61 recovery actions 217 redundant 23 reference code 214 refresh destination system 139 remote 131 requirements 47 RMC connection 25 roles hmcsuperadmin 56. 34. 138 HPT 32 hscroot user role 56 huge pages 25. 29. 43 check-migrate 35 prepare for migration 38 L large pages. AIX 43 LHEA 25. 51 Index 277 .

Live Application Mobility 16 Live Partition Mobility high availability 15 PowerVM support 16 preparation 53 remote 130 Live Workload Partitions 16 LMB 34. 245–246 lslic command 49 lslparmigr command 41. 201. 121. 166. 129. 88. 110 migratability 25 huge pages 25 redundant error path 25 versus partition readiness 25 migratepv command 156 migration active 31 inactive 27 messages 103 errors 101 warnings 101 mover service partition selection 111 processor compatibility mode 205 profile 27 remote 130 shared processor pool selection 114 specifying the destination profile 106 starting state 41 state 31 status window 116 steps 99 validation 110 virtual Fibre Channel 193 virtual SCSI adapter assignment 113 VLAN 112 workflow 26 migration phase active migration 36 migrlpar command 129. 218 example 165 migrate 165 recovery 165 stop 165 validate 165 minimal requirements 91 HMC 91 LMB 91 Network connection 91 partition 91 storage 92 VIOS 91 virtual SCSI 91 mkauthkeys command 138. 35 MSP 21. 214. 25. 175. 154. 28. 135. 145. 175. 253 remote capability 168 lsmap command 191 lspv command 81. 173 mktcpip command 88 mkvdev command 151 mobility-aware 79 mobility-safe 79 mover service partition 24 See MSP MPIO 30. 240 lsvet command 238 LUN mapping 31. 128. 145. 143 definition 12 278 IBM PowerVM Live Partition Mobility . 157. 36–39. 172. 32–33. 32 lsattr command 81 lsdev command 79. 246–247 lsrefcode command 214 lsrsrc command 67 lssyscfg command 175. 64 configuration 96. 191. 29 LPAR workload group 25. 39 M MAC address 28 uniqueness 35 memory affinity 43 available 56 configuration 37 dirty page 38 footprint 38 LPAR memory size 42 modification 38 pages 38 messages 101. 54 logical HEA See LHEA logical memory block See LMB logical unit number 31 logical volumes 24. 163.

31–32. 171 network 42 performance 42 selection 41 N network performance 42 preparation 87 requirements 8. 220 error logging 28 functional state 39 information 170 lslparmigr command 170 memory 32 memory size 42 migration capability 34 migration from single to dual VIOS 126 migration recovery 217 minimal requirements 91 mirroring on two VIOS 121 multipath on two VIOS MPIO 124 virtual Fibre Channel 195 name 25. 29. 31. 35 state transfer 38 type 34 validation 99. 30. 34 requirements 7 physical I/O 76 physical identifier 80 physical resources 149 pinned memory 43 PMAPI 22. 39. 34. 246 oem_setup_env command 80 operating system migration capability 34 requirements 51 version 66 P pages demand paging 38 transmission 38 partition alternate error log 34 configuration 32 error log 215. 52. 28. 31–32 O odmget command 80. 32 partition workload groups 70 performance 42 performance monitor API 22 performance monitoring 43 physical adapters 25.error log 215 information 168 lslparmigr command 168. 32 NVRAM 27. 38 post migration reconfiguration event 35 POWER Hypervisor 21. 187 benefits 188 port enablement 190 switch 189 NTP 22. 79 quiescing 38 readiness versus migratability 25 recovery 220 redundant error path 25 requirements 7 resumption 38 service 28 shell 30. 131 state transfer 42 network time protocol See NTP new system deployment 4 non-volatile RAM 32 NPIV 7. 35 preparation 66 profile 21. 28. 26. 12. 42 powered off state 42 PowerVM 16 requirements 16 Workload Partitions Manager 16 PowerVM Enterprise Edition 47 enter activation code 48 view history log 47 prepare for migration event 37 prerequisites 23 processor compatibility mode 205 active migration 206 change 206 Index 279 . 37 state 26. 38. 109. 143 visibility 39 workload group 25.

208 supported 206 verification 208 processors available 58 binding 43 configuration 37 state 32 profile 21. 66. 92 synchronization 32 VASI 93 VIOS 7–8 virtual SCSI 91 workload group 25 reserve_policy attributes 79. 131 rollback 31. 42 S SAN 24–25. 171 CLI 148 information 168 lslparmigr command 168 SIMD 23 SMS 27 SSH key authentication 132 key generation 136 ssh command 163 280 IBM PowerVM Live Partition Mobility . 92 resource availability 35 resource balancing 4 Resource Monitoring and Control See RMC resource sets. 209 default 208 enhanced 205 examples 206 inactive migration 208 non-enhanced 206 preferred 205. 30 name 26. 28. 34. 24–25. 32. 78 pending values 37 R RAS tools 44 reactivation active migration 39 readiness 24 battery power 24 infrastructure 25 server 24 Red Hat Enterprise Linux 51 Redbooks Web site 274 Contact us xix reducevg command 156 redundant error path reporting 68 remote migration 130–131 considerations 135 information 169 infrastructure 133 lslparmigr command 169 migration 141 network test 136 private network 132 requirements 132 workflow 131 required I/O 76 requirements active migration 93 adapters 25 battery power 24 capability 23 compatibility 23 example 9 hardware 7 huge pages 25 memory 25 name 25 network 8.current 205. 32. 87 server readiness 24 service partition 28 shared Ethernet adapter See SEA shared processor pool 147. 26 active 28 last activated 27. 131 SCSI reservation 24 SEA 8. AIX 43 resource state 32 RMC 20. 12. 129 partition 7 physical adapter 7 physical adapters 93 processors 25 redundant error path 25 RMC 93 storage 8.

30. 35 upgrade licensed internal code 49 V validation 121 inactive migration 28 remote migration 138 workflow 28 VASI 21. 30. 29. 38–40. 131 mappings 25. 35 partition name 25. 37. 51 storage area network See SAN storage pool 24 SUSE Linux Enterprise Server 51 suspend window 39 synchronization 32 system preparation 54 reference codes 260 requirements 47 trace 43 MAC address 28.ssh-keygen command 163 state active partition 32 changes 35 invalid 42 migration starting 41 of resource 32 powered off 42 processor 32 transfer 29 transmission 38 virtual adapter 32 state transfer network 42 stopping active migration 41 inactive migration 31 storage preparation 79 requirements 8. 40 slot numbering 120 state 32 virtual device mapping 33 virtual Ethernet 24 virtual Fibre Channel 24. 34 information 168 lslparmigr command 168 See also VIOS 21 selection for active migration 40 virtual optical devices 32 virtual SCSI 24. 35. 34–35. 131. 50 See also Virtual I/O Server shared Ethernet failover 120 single to dual 126 VASI 93 version 64 virtual adapter 29 configuration 37 migration map 30. 32. 120. 187 basic configuration 187 benefits 188 migration 193 multipathing 196 preparation 190 requirements 189 worldwide port name (WWPN) 190 Virtual I/O Server 25. 38–39. 120. 35. 38. 39 VIOS configuration 129 dual 121 error log 215 minimal requirements 91 multiple 120 preparation 63 requirements 7–8. 165 reserve_policy 92 T throttling workload 38 time synchronization 32 time of day 32 time reference configuration 98 time reference partition (TRP) 22 time-of-day clocks 22 synchronization 65 topas command 43 tprof command 43 TRP 22 U unique identifier 80 uniqueness Index 281 .

virtual serial I/O 69 default adapters 26 VLAN 25. 32 workload manager 43 workload partition See WPAR WPAR migration 16 requirements 17 WWPN 190 X XRSET 43 282 IBM PowerVM Live Partition Mobility . 32 W warning messages 101 warnings 103 workflow 26 active migration 34 inactive migration 30 validation 28 workload throttling 38 workload group 25. 28.


Back cover

IBM PowerVM Live Partition Mobility

Explore the PowerVM Enterprise Edition Live Partition Mobility
Move active and inactive partitions between servers
Manage partition migration with an HMC or IVM

Live Partition Mobility is the next step in the IBM Power Systems virtualization continuum. It can be combined with other virtualization technologies, such as logical partitions, Live Workload Partitions, and the SAN Volume Controller, to provide a fully virtualized computing platform that offers the degree of system and infrastructure flexibility required by today's production data centers.

This IBM Redbooks publication discusses how Live Partition Mobility can help technical professionals, enterprise architects, and system administrators:
Migrate entire running AIX and Linux partitions and hosted applications from one physical server to another without disrupting services and loads.
Meet stringent service-level agreements.
Rebalance loads across systems quickly.
Use a migration wizard for single partition migrations, with support for multiple concurrent migrations.

This book can help you understand, plan, prepare, and perform partition migration on IBM Power Systems servers that are running AIX.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION
BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information:
ibm.com/redbooks

SG24-7460-01 ISBN 0738432423
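As a quick illustration of the HMC command-line interface that this book describes, the following sketch shows a validate-then-migrate sequence. It is illustrative only: it assumes an HMC session with sufficient authority, and source_system, destination_system, and mobile_lpar are placeholder names rather than values taken from any example in this book.

   # Validate that the mobile partition can move to the destination server;
   # a validate operation makes no changes to either system.
   migrlpar -o v -m source_system -t destination_system -p mobile_lpar

   # Start the migration once validation succeeds.
   migrlpar -o m -m source_system -t destination_system -p mobile_lpar

   # Check the migration state of partitions on the destination system.
   lslparmigr -r lpar -m destination_system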
